Dataset schema: id (string, 11-20 chars), paper_text (string, 29-163k chars), review (string, 666-24.3k chars)
nips_2017_3170
Generalizing GANs: A Turing Perspective Recently, a new class of machine learning algorithms has emerged, where models and discriminators are generated in a competitive setting. The most prominent example is Generative Adversarial Networks (GANs). In this paper we examine how these algorithms relate to the Turing test, and derive what, from a Turing perspective, can be considered their defining features. Based on these features, we outline directions for generalizing GANs, resulting in the family of algorithms referred to as Turing Learning. One such direction is to allow the discriminators to interact with the processes from which the data samples are obtained, making them "interrogators", as in the Turing test. We validate this idea using two case studies. In the first case study, a computer infers the behavior of an agent while controlling its environment. In the second case study, a robot infers its own sensor configuration while controlling its movements. The results confirm that by allowing discriminators to interrogate, the accuracy of models is improved.
This paper proposes a generalization of GANs, which the authors refer to as Turing Learning. The idea is to let the discriminator interact with the generator, and thus act as an active interrogator that can influence the sampling behavior of the generator. To show that this strategy is superior to existing GANs, where the discriminator functions as a passive responder, the authors perform two case studies of the proposed (high-level) interactive model, comparing against models with passive discriminators. The results show that the interactive model largely outperforms the passive models, with much less variance across multiple runs.

Pros
1) The paper is well written, with clear motivation. The idea of having the discriminator work as an active interrogator to influence the generator is novel and makes perfect sense.
2) The results show that the proposed addition of discriminator-generator interaction actually helps the model work more accurately.

Cons
3) The proposed learning strategy is too general while the implemented models are too specific to each problem. I was hoping to see a specific GAN learning framework that implements the idea, but no concrete model is proposed, and the interaction between the discriminator and the generator is implemented differently from model to model in the two case studies. This severely limits the application of the model to new problems.
4) The two case studies consider toy problems, and it is not clear how this model will work on real-world problems. This gives me the impression that the work is still preliminary.

Summary
In sum, the paper presents a novel and interesting idea that can generalize and potentially improve existing generative adversarial learning. However, the idea is not implemented as a concrete model that can generalize to general learning cases, and is only validated with two proof-of-concept case studies on toy problems. This limits the applicability of the model to new problems. Thus, despite its novelty and potential impact, I think this is preliminary work at its current status, and I do not strongly support its acceptance.
nips_2017_3171
Scalable Log Determinants for Gaussian Process Kernel Learning For applications as varied as Bayesian neural networks, determinantal point processes, elliptical graphical models, and kernel learning for Gaussian processes (GPs), one must compute a log determinant of an n × n positive definite matrix, and its derivatives, leading to prohibitive O(n^3) computations. We propose novel O(n) approaches to estimating these quantities from only fast matrix vector multiplications (MVMs). These stochastic approximations are based on Chebyshev, Lanczos, and surrogate models, and converge quickly even for kernel matrices that have challenging spectra. We leverage these approximations to develop a scalable Gaussian process approach to kernel learning. We find that Lanczos is generally superior to Chebyshev for kernel learning, and that a surrogate approach can be highly efficient and accurate with popular kernels.
Summary of the paper: This paper describes two techniques that can be used to estimate (stochastically) the log determinant of a positive definite matrix and its gradients. The aim is to speed up the inference process of several probabilistic models that involve such computations. The example given is a Gaussian process in which the marginal likelihood has to be maximized to find good hyper-parameters. The methods proposed are based on using the trace of the logarithm of a matrix to approximate the log determinant. Two techniques are proposed: the first based on Chebyshev polynomials and the second based on the Lanczos algorithm. The first one seems to be restricted to the case in which the eigenvalues lie in the interval [-1,1]. The proposed methods are compared with sparse GP approximations based on the FITC approximation and with related methods based on scaled eigenvalues on several experiments involving GPs.

Detailed comments:

Clarity: The paper is very well written and all details seem to be clear, with a few exceptions. For example, the reader may not be familiar with the logarithm of a matrix and similar advanced algebra operations. There are a few typos in the paper as well. In the equation above (2) it is not clear what beta_m or e_m are; they have not been defined. It is also known that the Lanczos algorithm is unstable and extra computations have to be done. The authors may comment on this too.

Quality: I think the quality of the paper is high. In particular, the authors analyze the method described in several experiments, comparing with other important and related methods. These experiments seem to indicate that the proposed method gives competitive results at a small computational cost.

Originality: The paper has a strong related work section and the methods described seem to be original.

Significance: My main concern with this paper is its significance. While it is true that the proposed method allows for a faster evaluation of the log determinant of the covariance matrix and its gradients, one still has to compute such a matrix with squared cost in the number of samples (or squared cost in the number of inducing points, if sparse approximations are used). This limits a bit the practical applicability of the proposed method. Furthermore, it seems that the experiments carried out by the authors involve a single repetition. Thus, there are no error bars in the results and one cannot say whether or not the results are statistically significant. The differences with respect to the scaled eigenvalues method (ref. [32] in the paper) seem very small.
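For readers unfamiliar with the trace identity underpinning both techniques: log det(K) = tr(log K), the trace is estimated with random probe vectors, and log(K)z is approximated from a Krylov subspace. The sketch below is a generic illustration of the Hutchinson-plus-Lanczos-quadrature estimator using only matrix-vector products, not the authors' code; the probe count, Lanczos depth, and the lack of reorthogonalization (relevant to the instability noted in the review) are simplifying assumptions.

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

def lanczos(matvec, z, k):
    """Run k Lanczos steps from start vector z; return the diagonals of the
    tridiagonal matrix T. No reorthogonalization, so long runs lose accuracy."""
    q, q_prev, b = z / np.linalg.norm(z), np.zeros_like(z), 0.0
    alpha, beta = [], []
    for _ in range(k):
        w = matvec(q) - b * q_prev
        a = q @ w
        w -= a * q
        alpha.append(a)
        b = np.linalg.norm(w)
        if b < 1e-12:
            break
        beta.append(b)
        q_prev, q = q, w / b
    return np.array(alpha), np.array(beta[: len(alpha) - 1])

def stochastic_logdet(matvec, n, num_probes=30, k=25, seed=None):
    """Hutchinson + Lanczos quadrature estimate of log det of an SPD matrix,
    using only matrix-vector products with it."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(num_probes):
        z = rng.choice([-1.0, 1.0], size=n)        # Rademacher probe, ||z||^2 = n
        alpha, beta = lanczos(matvec, z, k)
        theta, V = eigh_tridiagonal(alpha, beta)   # Ritz values and vectors
        # quadrature rule: z' log(K) z ~= ||z||^2 * sum_j V[0, j]^2 log(theta_j)
        total += n * np.sum(V[0, :] ** 2 * np.log(np.maximum(theta, 1e-300)))
    return total / num_probes
```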
nips_2017_1745
Rotting Bandits The Multi-Armed Bandits (MAB) framework highlights the trade-off between acquiring new knowledge (Exploration) and leveraging available knowledge (Exploitation). In the classical MAB problem, a decision maker must choose an arm at each time step, upon which she receives a reward. The decision maker's objective is to maximize her cumulative expected reward over the time horizon. The MAB problem has been studied extensively, specifically under the assumption of the arms' rewards distributions being stationary, or quasi-stationary, over time. We consider a variant of the MAB framework, which we termed Rotting Bandits, where each arm's expected reward decays as a function of the number of times it has been pulled. We are motivated by many real-world scenarios such as online advertising, content recommendation, crowdsourcing, and more. We present algorithms, accompanied by simulations, and derive theoretical guarantees.
The paper introduces a novel variant of the multi-armed bandit problem, motivated by practice. The approach is technically sound and well grounded. There are a few typos and minor concerns (listed below), despite which I believe this paper can be accepted.

Decaying bandits - e.g., Komiyama and Qin 2014, Heidari, Kearns and Roth 2016 - are mentioned, but deteriorating bandits - e.g., Gittins 1979, Mandelbaum 1987, Kaspi and Mandelbaum 1998, etc. - are not.

Can't you use algorithms from non-stationary MABs, which you say are "most related to our problem", for benchmarking? It's quite unfair to use only UCB variants.

Line 15: "expanse" -> "expense"
Line 19: you have skipped a number of key formulations and breakthroughs between 1933 and 1985
Line 31-32: probably a word is missing in the sentence
Line 63: "not including" what?
Line 68: "equal objective" -> "equivalent objective"
Line 68: this is an unusual definition of regret; in fact this expression is usually called "suboptimality", as you only compare to the optimal policy which is based on past allocations and observations. Regret is usually defined as distance from the objective achievable if one knew the unknown parameters.
Line 105: in the non-vanishing case, do you allow \mu_i^c = 0 or negative?
Line 106-118: some accompanying text describing the intuition behind these expressions would be helpful
Line 119,144: headings AV and ANV would be more reasonable than names of the heuristics
Line 160: "w.h.p" -> "w.h.p."
Table 1: "CTO" is ambiguous, please use the full algorithm acronyms
Figure 1: it would be desirable to explain the apparent non-monotonicity
Line 256: It would be helpful for the reader if this sentence appeared in the Introduction. Btw, the term "rested bandits" is not very common; instead "classic bandits" is a much more popular term.
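To make the setting concrete, a minimal simulation harness for a rotting bandit instance is sketched below, of the kind one could use for the broader benchmarking suggested above (e.g. plugging in a sliding-window UCB policy alongside the paper's algorithms). The decay functions, noise level, and policy interface are all illustrative assumptions, not the paper's setup.

```python
import numpy as np

def rotting_bandit_sim(mu_fns, T, policy, seed=None):
    """Simulate a rotting bandit: pulling arm i for the n-th time yields
    mu_fns[i](n) + Gaussian noise, so expected rewards decay with pulls.
    `policy(t, counts, reward_sums)` returns the index of the arm to play."""
    rng = np.random.default_rng(seed)
    K = len(mu_fns)
    counts, sums, total = np.zeros(K, dtype=int), np.zeros(K), 0.0
    for t in range(T):
        i = policy(t, counts, sums)
        counts[i] += 1
        r = mu_fns[i](counts[i]) + rng.normal(0.0, 0.1)
        sums[i] += r
        total += r
    return total

# example: two polynomially decaying arms and a naive empirical-mean policy
arms = [lambda n: 0.9 * n ** -0.3, lambda n: 0.5 * n ** -0.1]
greedy = lambda t, c, s: t if t < 2 else int(np.argmax(s / np.maximum(c, 1)))
print(rotting_bandit_sim(arms, 1000, greedy, seed=0))
```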
nips_2017_1522
ELF: An Extensive, Lightweight and Flexible Research Platform for Real-time Strategy Games In this paper, we propose ELF, an Extensive, Lightweight and Flexible platform for fundamental reinforcement learning research. Using ELF, we implement a highly customizable real-time strategy (RTS) engine with three game environments (Mini-RTS, Capture the Flag and Tower Defense). Mini-RTS, as a miniature version of StarCraft, captures key game dynamics and runs at 40K frames-per-second (FPS) per core on a laptop. When coupled with modern reinforcement learning methods, the system can train a full-game bot against built-in AIs end-to-end in one day with 6 CPUs and 1 GPU. In addition, our platform is flexible in terms of environment-agent communication topologies, choices of RL methods, changes in game parameters, and can host existing C/C++-based game environments like ALE [4]. Using ELF, we thoroughly explore training parameters and show that a network with Leaky ReLU [17] and Batch Normalization [11] coupled with long-horizon training and progressive curriculum beats the rule-based built-in AI more than 70% of the time in the full game of Mini-RTS. Strong performance is also achieved on the other two games. In game replays, we show our agents learn interesting strategies. ELF, along with its RL platform, is open sourced at https://github.com/facebookresearch/ELF.
The main proposal of the paper is a real-time strategy simulator specifically designed for reinforcement learning purposes. The paper presents in detail the architecture of the simulator, along with how gaming is done on it, and some experiments with RL techniques implemented in the software. Although I think there is good value in making software for research, I don't think that NIPS is the right forum for presenting technical papers on it. The Machine Learning Open Source Software (MLOSS) track of JMLR or a relevant workshop would be much more appropriate for that. And in the current case, a publication in the IEEE Computational Intelligence and Games (IEEE-CIG) conference might be a much better fit.

That being said, I am also quite unsure of the high relevance of real-time strategy (RTS) games for demonstrating reinforcement learning (RL). The use of RTS for benchmarking AI is not new; it has been done several times in the past. I am not convinced it will be the next Atari or Go. It lacks the simplicity and elegance of Go, while not being particularly useful for practical applications. The proposed implementation is surely designed to allow fast simulation, so that learning can be done much faster (assuming that the bottleneck was the simulator, not the learning algorithms). But such a feat does not justify a NIPS paper per se; the contribution is implementation-wise, not on learning by itself.

*** Update following rebuttal phase and comments from area chair ***

It seems that the paper still fits in the scope of the conference. I updated my evaluation accordingly, being less harsh in my scoring. However, I maintain my opinion on this paper: I still think RTS for AI is not a new proposal, and I really don't feel it will be the next Atari or Go. The work proposed appears correct in terms of implementation, but in terms of general usefulness, I still remain to be convinced.
nips_2017_3425
Kernel functions based on triplet comparisons Given only information in the form of similarity triplets "Object A is more similar to object B than to object C" about a data set, we propose two ways of defining a kernel function on the data set. While previous approaches construct a low-dimensional Euclidean embedding of the data set that reflects the given similarity triplets, we aim at defining kernel functions that correspond to high-dimensional embeddings. These kernel functions can subsequently be used to apply any kernel method to the data set.
The authors propose two kernel functions based on comparing triplets of data examples. The first kernel function is based on the ranking of all the other objects with respect to the objects being compared. The second function is based on comparing whether the compared objects rank similarly with respect to the other objects. The idea is interesting and it is potentially impactful to create such "ordinal" kernels, where the data are not embedded in metric spaces.

Comments:
1. The authors should state their main claims somewhere and demonstrate them clearly in the paper. Why should anybody use a triplet comparison kernel? From what I understand, with a triplet comparison kernel, fewer triplets are sufficient to capture the same information that a full pairwise similarity matrix can capture (Sec. 2.2). It is also faster. Is that correct?
2. How are the first kernel (k_1) and the one proposed in (Jiao & Vert 2015) different?
3. In Figure 3, can you quantify how well the proposed kernel performs with respect to computing all pairwise similarities? The synthetic datasets are not hard, and it may also be useful to demonstrate some harder cases.
4. The kernels seem to have time advantages compared to other embedding approaches such as GNMDS (from the experiments in Sec. 4.2). However, in terms of clustering performance they are outperformed by embedding approaches (the authors claim that the performance is close for all algorithms). How many landmark points were chosen for computing the kernels? Can you comment on the size of the landmark set vs. running times?
5. Also, in terms of running times, can you comment on whether the reason for the slow performance of the embedding algorithms is their inherent computational complexity? Could it be an implementation issue?
6. The authors should attempt to provide results that are legible and easy to read. If there is a lack of space, supplementary material can be provided with the rest of the results.

Update
-------
The authors have answered several of my comments satisfactorily.
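To give a feel for how a kernel can be built from triplets alone, below is one plausible construction in the spirit of the ranking-based kernel; it is my assumed reading, not the paper's exact definition of k_1. Each object is represented by the ±1 pattern of its triplet answers, and the kernel is a normalized inner product, which is positive semi-definite by construction.

```python
import numpy as np

def triplet_feature(obj, triplets, n):
    """Feature map from triplets (a, b, c) read as 'a is closer to b than c':
    for a fixed object, record +1/-1 over the ordered pairs it was compared on.
    An illustrative stand-in for the paper's high-dimensional embedding."""
    f = np.zeros((n, n))
    for a, b, c in triplets:
        if a == obj:
            f[b, c] += 1.0
            f[c, b] -= 1.0
    return f.ravel()

def k1(x, y, triplets, n):
    """Normalized inner product of triplet features; PSD because it is an
    explicit embedding, as the abstract describes."""
    fx, fy = triplet_feature(x, triplets, n), triplet_feature(y, triplets, n)
    nx, ny = np.linalg.norm(fx), np.linalg.norm(fy)
    return 0.0 if nx == 0 or ny == 0 else float(fx @ fy) / (nx * ny)
```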
nips_2017_301
Pose Guided Person Image Generation This paper proposes the novel Pose Guided Person Generation Network (PG^2) that allows synthesizing person images in arbitrary poses, based on an image of that person and a novel pose. Our generation framework PG^2 utilizes the pose information explicitly and consists of two key stages: pose integration and image refinement. In the first stage the condition image and the target pose are fed into a U-Net-like network to generate an initial but coarse image of the person with the target pose. The second stage then refines the initial and blurry result by training a U-Net-like generator in an adversarial way. Extensive experimental results on both 128×64 re-identification images and 256×256 fashion photos show that our model generates high-quality person images with convincing details.
The paper proposes a human image generator conditioned on appearance and human pose. The proposed generation is based on an adversarial training architecture in which a two-stage generative network produces a high-resolution image to feed into a discriminator. In the generator part, the first generator produces a coarse image using a U-shaped network given the appearance and pose map; the second generator then takes the coarse output together with the original appearance and predicts a residual to refine the coarse image. The paper utilizes the DeepFashion dataset for evaluation.

The paper proposes a few important ideas.
* Task novelty: introducing the idea of conditioning on appearance and pose map for human image generation
* Techniques: a stacked architecture that predicts a difference map rather than direct upsampling, and the loss design

The paper could improve on the following points to stand out.
* The results still need quality improvement
* Significance: the paper could be seen as yet another GAN architecture
* Problem domain: a good vision/graphics application, but difficult to generalize to other learning problems

The paper is well organized to convey the key aspects of the proposed architecture. Conditioned on appearance and pose information, the proposed generator stacks two networks to adopt a coarse-to-fine strategy. The paper effectively utilizes this generation strategy in the dressing problem, and the proposed approach looks appropriate to the concerned problem scenario. The difference-map generation also looks like a small but nice technique for generating higher-resolution images.

Probably the major complaint about the paper is that the generated results contain visible artifacts and still require a lot of improvement from an application perspective. For example, the patterns in ID346 of Fig. 4 result in black dots in the final result. Even though the second generator mitigates the blurry image from the first generator, it seems the model is still insufficient to recover high-frequency components in the target appearance. Another possible but not severe concern is that some might say the proposed approach is an application of conditional GANs. Conditioning or stacking of generators for adversarial training has been proposed in the past, e.g., in the works below, though they are arXiv papers. The paper includes application-specific challenges, but this might not appeal to a large audience.

* Han Zhang, Tao Xu, Hongsheng Li, Shaoting Zhang, Xiaolei Huang, Xiaogang Wang, Dimitris Metaxas, "StackGAN: Text to Photo-realistic Image Synthesis with Stacked Generative Adversarial Networks", arXiv:1612.03242.
* Xun Huang, Yixuan Li, Omid Poursaeed, John Hopcroft, Serge Belongie, "Stacked Generative Adversarial Networks", arXiv:1612.04357.

Overall, the paper successfully proposes a solution to the pose-conditioned image generation problem, and conducts a proper evaluation. The proposed approach presents sufficient technical novelty. The resulting images still need quality improvement, but the proposed model at least generates visually consistent images. My initial rating is accept.
nips_2017_166
Group Sparse Additive Machine A family of learning algorithms generated from additive models has attracted much attention recently for their flexibility and interpretability in high-dimensional data analysis. Among them, learning models with grouped variables have shown competitive performance for prediction and variable selection. However, previous works mainly focus on the least squares regression problem rather than the classification task. Thus, it is desirable to design a new additive classification model with variable selection capability for the many real-world applications that focus on high-dimensional data classification. To address this challenging problem, in this paper we investigate classification with group sparse additive models in reproducing kernel Hilbert spaces. A novel classification method, called the group sparse additive machine (GroupSAM), is proposed to explore and utilize the structure information among the input variables. A generalization error bound is derived and proved by integrating the sample error analysis with empirical covering numbers and the hypothesis error estimate with the stepping stone technique. Our new bound shows that GroupSAM can achieve a satisfactory learning rate with polynomial decay. Experimental results on synthetic data and seven benchmark datasets consistently show the effectiveness of our new approach.
This paper presents a new additive classification model (GroupSAM) with grouped variables, based on group-based additive models in kernel Hilbert spaces, and an upper bound for the generalization error is provided in Theorem 1 to show that GroupSAM converges to the optimal error with a polynomial decay under certain conditions. Experiments are performed on synthetic data and seven UCI datasets that have been modified in order to get some group structure in the feature space. The results provided in the paper show that the proposed method is able to take into account grouped features while yielding good performance in terms of classification accuracy.

As far as I know, the paper presents a novel method for group sparse nonlinear classification with additive models. However, the degree of contribution of this paper with respect to Yin, Chen, and Xing, "Group Sparse Additive Models", ICML 2012, is not clear to me, as the framework presented in that paper seems easy to extend to classification settings. In fact, Yin et al. apply their group sparse additive model for regression to address a classification task (classification of breast cancer data) by taking the sign of the predicted responses. On the other hand, I appreciate the authors' efforts in providing a theoretical analysis of the generalization error bound of GroupSAM and its convergence.

Regarding the empirical results, I do not think that the experiments show the real significance of the proposed method, as the real-world data has been "arbitrarily" tuned to enforce group structure. I think that the use of real data with inherent group structure is mandatory in order to properly measure the contribution of this work. For example, in bioinformatics it can be useful to build predictive models that take into account genes' pathway information. I also miss comparisons with other parametric approaches such as Group Lasso or the GroupSpAM model adapted to the classification setting. Additionally, I cannot see a clear advantage with respect to Gaussian SVMs in terms of classification accuracy. I also would like to know whether the Gaussian kernel parameter was properly tuned. Finally, I would like to know how multiclass classification was performed in the toy example, as the GroupSAM method is formulated in Section 2 for binary classification problems.

Overall, the paper is well written and well structured. However, I think that the introduction does not clearly present the differences between the proposed approach and other sparse additive models and group sparse models in parametric settings (e.g., Group Lasso).
nips_2017_2223
Non-parametric Structured Output Networks Deep neural networks (DNNs) and probabilistic graphical models (PGMs) are the two main tools for statistical modeling. While DNNs provide the ability to model rich and complex relationships between input and output variables, PGMs provide the ability to encode dependencies among the output variables themselves. End-to-end training methods for models with structured graphical dependencies on top of neural predictions have recently emerged as a principled way of combining these two paradigms. While these models have proven to be powerful in discriminative settings with discrete outputs, extensions to structured continuous spaces, as well as performing efficient inference in these spaces, are lacking. We propose non-parametric structured output networks (NSON), a modular approach that cleanly separates a non-parametric, structured posterior representation from a discriminative inference scheme but allows joint end-to-end training of both components. Our experiments evaluate the ability of NSONs to capture structured posterior densities (modeling) and to compute complex statistics of those densities (inference). We compare our model to output spaces of varying expressiveness and popular variational and sampling-based inference algorithms.
The paper proposes Non-parametric Neural Networks (N3), a method that combines the advantages of deep models for learning strong relations between input and output variables with the capabilities of probabilistic graphical models for modeling relationships among the output variables themselves. Towards this goal, the proposed method is designed around three components: a) a deep neural network (DNN) which learns the parameters of local non-parametric distributions conditioned on the input variables; b) a non-parametric graphical model (NGM) which defines a graph structure on the local distributions considered by the DNN; and c) a recurrent inference network (RIN), which is trained/used to perform inference on the structured space defined by the DNN+NGM.

The proposed method is sound, well motivated, and each of its components is properly presented. The method is evaluated against a good set of baselines, together with an ablation study showing variants of the proposed method. The evaluation shows that state-of-the-art results are achieved by the proposed method. However, due to its technical nature, there is a large number of variables and terms that complicates the reading at times; still, this is just a minor point. Overall, the paper proposes a method with good potential in a variety of fields.

However, one of my main concerns is that the evaluation is conducted on the self-defined task of analyzing the statistics of natural pixels from the MS COCO dataset. In this regard, I would recommend adding more details regarding the evaluation, especially on how performance is measured for that task. Moreover, given that the evaluation involves visual data, I would highly recommend evaluating the method on a more standard computer vision task.

In addition, very recently a paper with the very same title as this manuscript ("Nonparametric Neural Networks", Philipp and Carbonell, ICLR'17) came out. While the method and goal approached in this paper are different from those of the ICLR paper, I would strongly suggest either a) making an explicit and strong positioning of your paper w.r.t. Philipp and Carbonell, ICLR'17, or b) considering an alternative (or extended) title for your paper. In this regard, I consider that your paper is better suited for the N3 title; however, this is a conflict that should be addressed adequately.

Finally, given the potential of the proposed method, I would encourage the authors to release the code related to the experiments presented in the manuscript. Are there any plans for releasing the code? I believe this would encourage the adoption of the proposed method by the research community. I would appreciate it if the authors addressed the last three paragraphs in their rebuttal.
nips_2017_2889
Online control of the false discovery rate with decaying memory In the online multiple testing problem, p-values corresponding to different null hypotheses are observed one by one, and the decision of whether or not to reject the current hypothesis must be made immediately, after which the next p-value is observed. Alpha-investing algorithms to control the false discovery rate (FDR), formulated by Foster and Stine, have been generalized and applied to many settings, including quality-preserving databases in science and multiple A/B or multi-armed bandit tests for internet commerce. This paper improves the class of generalized alpha-investing algorithms (GAI) in four ways: (a) we show how to uniformly improve the power of the entire class of monotone GAI procedures by awarding more alpha-wealth for each rejection, giving a win-win resolution to a recent dilemma raised by Javanmard and Montanari, (b) we demonstrate how to incorporate prior weights to indicate domain knowledge of which hypotheses are likely to be non-null, (c) we allow for differing penalties for false discoveries to indicate that some hypotheses may be more important than others, (d) we define a new quantity called the decaying memory false discovery rate (mem-FDR) that may be more meaningful for truly temporal applications, and which alleviates problems that we describe and refer to as "piggybacking" and "alpha-death." Our GAI++ algorithms incorporate all four generalizations simultaneously, and reduce to more powerful variants of earlier algorithms when the weights and decay are all set to unity. Finally, we also describe a simple method to derive new online FDR rules based on an estimated false discovery proportion.
This paper unifies and extends concepts related to online false discovery rate (FDR) control. This is a recent trendy setting where null hypotheses arrive sequentially over time and the user should make an (irrevocable) decision about rejecting (or not) the null coming at each time. The aim is to make sequential inference so that the expected proportion of errors among the rejected nulls is bounded by a prescribed value $\alpha$ (at any stopping time). My opinion is that this paper is of very high quality: the writing is good and easy to follow, the motivation is clear and interesting, the new procedures and concepts are clever, and the proofs are really nicely written. My only concern is that the strongest innovation (the decaying memory procedure) is maybe not sufficiently well put forward in the main paper. It only comes in the last 1.5 pages and there is no illustration of alpha-death recovery. This concept seems more important than the double-weighting of Section 4, which can safely be put in the appendix (it is more standard). I would also suggest putting Section D.3 of the supplement in the main paper (illustration of piggybacking).

Here are some other comments and typos:
- line 58: "will equal 1" (only with high probability)
- line 131: "mst"
- line 143: "keeping with past work"
- Notation "W(t)": I suggest using W_t (if you agree)
- Lemma 1: f_i is not F_i measurable but just measurable (right?)
- Section D.1, lines 469-470: "mu_j follows 0" should read "mu_j = 0"
- line 528: equation (13) should read equation (10)
- lines 529-530: -phi_t (R_t-1) should read +phi_t (R_t-1)
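For context on the wealth mechanics referred to above (piggybacking, alpha-death), here is a schematic alpha-investing loop in the spirit of Foster and Stine. It is emphatically not the paper's GAI++ rule: the 10% betting fraction, the payout equal to alpha, and the implied rejection threshold are all illustrative assumptions.

```python
import numpy as np

def alpha_investing(pvalues, alpha=0.05):
    """Schematic alpha-investing: bet part of the current alpha-wealth on each
    test; a rejection earns wealth back (so later tests can "piggyback" on
    earlier discoveries), while exhausted wealth means "alpha-death"."""
    wealth = alpha                        # initial alpha-wealth W(0)
    decisions = []
    for p in pvalues:
        bet = 0.1 * wealth                # alpha-wealth spent on this test
        level = bet / (1.0 + bet)         # per-test significance level
        reject = p <= level
        wealth += alpha if reject else -bet
        decisions.append(reject)
        if wealth <= 0:                   # alpha-death: all later nulls kept
            decisions += [False] * (len(pvalues) - len(decisions))
            break
    return decisions

print(alpha_investing([0.0001, 0.003, 0.2, 0.04, 0.8]))
```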
nips_2017_3079
Unbounded cache model for online language modeling with open vocabulary Recently, continuous cache models were proposed as extensions to recurrent neural network language models, to adapt their predictions to local changes in the data distribution. These models only capture the local context, of up to a few thousands tokens. In this paper, we propose an extension of continuous cache models, which can scale to larger contexts. In particular, we use a large scale non-parametric memory component that stores all the hidden activations seen in the past. We leverage recent advances in approximate nearest neighbor search and quantization algorithms to store millions of representations while searching them efficiently. We conduct extensive experiments showing that our approach significantly improves the perplexity of pre-trained language models on new distributions, and can scale efficiently to much larger contexts than previously proposed local cache models.
This paper discusses an extension of the recently proposed continuous cache models by Grave et al. The authors propose a continuous cache model that is unbounded, and hence can take into account events that happened an indefinitely long time ago. While interesting, the paper fails to provide good experimental evidence of its merits. Its main claim is that this model is better than Grave et al., but it then does not compare with it. It only seems to compare with cache models from the nineties (Kuhn et al.), although even that is not clear, as they spend only one line (line 206) discussing the models they compare with. "The static model interpolated with the unigram probability distribution observed up to time t" does sound like Kuhn et al. and is definitely not Grave et al. The authors also mention the importance of large vocabularies, yet fail to specify the vocabulary size for any of their experiments. I also don't understand why all the datasets were lowercased if large vocabularies are the target. This (and the above) is a real pity, because sections 1-3 looked very promising. We advise the authors to spend some more care on the experiments section to make this paper publishable.

Minor comments:
* line 5: stores -> store
* line 13: twice "be"
* line 30: speach -> speech
* line 31: "THE model"
* line 59: "assumptions ABOUT"
* line 72: no comma after "and"
* line 79: algorithm -> algorithmS
* line 79: approach -> approachES
* line 97: non-parameteric -> non-parametric
* line 99: "THE nineties"
* line 110: "aN update rule"
* line 127: Khun -> Kuhn
* line 197: motivationS
* line 197: "adapt to A changing distribution"
* line 211: "time steps"
* line 211: adaptative -> adaptive
* table 2: the caption mentions that the model is trained on news 2007, but in fact this varies throughout the table?
* line 216: interest -> interested
* line 235: millions -> million
* line 267: experimentS
* line 269: where do these percentages come from? they seem wrong...
* line 281: "THE static model"
* line 283: Set
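For intuition about what the unbounded cache adds on top of a parametric LM, a minimal sketch follows: store every past (hidden state, next word) pair, retrieve the nearest stored states to the current one, and interpolate the induced word distribution with the model's. Brute-force cosine search stands in for the approximate IVFPQ index used at scale, and the kernel, k, theta and lambda values are illustrative assumptions.

```python
import numpy as np

class UnboundedCache:
    """Illustrative non-parametric cache over all past (hidden state, word)
    pairs; exact search here, an approximate index (e.g. IVFPQ) in practice."""
    def __init__(self, theta=1.0):
        self.keys, self.words, self.theta = [], [], theta

    def add(self, h, w):
        self.keys.append(h / np.linalg.norm(h))
        self.words.append(w)

    def prob(self, h, vocab_size, k=100):
        if not self.keys:
            return np.full(vocab_size, 1.0 / vocab_size)
        sims = np.stack(self.keys) @ (h / np.linalg.norm(h))
        top = np.argsort(-sims)[:k]            # k nearest stored states
        p = np.zeros(vocab_size)
        for i in top:
            p[self.words[i]] += np.exp(self.theta * sims[i])
        return p / p.sum()

def interpolate(p_model, p_cache, lam=0.2):
    # final prediction: mixture of the parametric LM and the cache
    return (1.0 - lam) * p_model + lam * p_cache
```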
nips_2017_1282
Elementary Symmetric Polynomials for Optimal Experimental Design We revisit the classical problem of optimal experimental design (OED) under a new mathematical model grounded in a geometric motivation. Specifically, we introduce models based on elementary symmetric polynomials; these polynomials capture "partial volumes" and offer a graded interpolation between the widely used A-optimal design and D-optimal design models, obtaining each of them as special cases. We analyze properties of our models, and derive both greedy and convex-relaxation algorithms for computing the associated designs. Our analysis establishes approximation guarantees on these algorithms, while our empirical results substantiate our claims and demonstrate a curious phenomenon concerning our greedy method. Finally, as a byproduct, we obtain new results on the theory of elementary symmetric polynomials that may be of independent interest.
The authors present a novel objective for experiment design that interpolates between A- and D-optimality. They present a convex relaxation for the problem, which includes novel results on elementary symmetric polynomials, and an efficient greedy algorithm based on the convex relaxation with performance guarantees for the original task. They present results on synthetic data and on a concrete compressive strength task. I am not an expert in this area but I believe I understand the contribution and that it is significant. The paper is extremely well presented, with minor points of clarification needed that I have identified below.

The main suggestion I have is that the authors more explicitly explain the benefits of the new method in terms of the properties of solutions that interpolate between A and D. In 6.2, the authors present the ability to use their method to get a spars(er) design matrix as an important feature. This should be stated at the beginning of the paper if it is a true aim (and not simply an interesting side effect), and the authors should review other methods for obtaining a sparse design matrix. (I presume that there are e.g. L1-penalized methods that can achieve sparsity in this area, but I do not know.) In general, a more explicit statement of the practical benefits of the contributions in the paper would be useful for the reader.

l21. is a bit confusing regarding the use of "vector" -- \epsilon_i is a scalar, yes?
l22. Gauss-Markov doesn't require \epsilon_i to be Gaussian but it *does* require homoskedasticity. Clarify.
l45. "deeply study" - studied
l77. I think (2.1) is confusing with v as vectors unless the product is element-wise. Is this intended?
l107. "Clearly..." - Make a more precise statement. Is an obvious special case hard? What problem of known hardness reduces to this problem? If you mean "no method for finding an exact solution is known" or "we do not believe an efficient exact solution is possible" then say that.
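As background on the interpolation at the heart of the paper, the sketch below evaluates elementary symmetric polynomials of a matrix's eigenvalues via Newton's identities: e_1 is the trace and e_d the determinant, the two quantities behind the A- and D-optimality endpoints. Exactly how the paper's objective normalizes or inverts these is not reproduced here; the routine is a generic utility, not the authors' code.

```python
import numpy as np

def elementary_symmetric(eigvals, ell):
    """Return e_1, ..., e_ell of the given eigenvalues using Newton's
    identities: k * e_k = sum_{i=1..k} (-1)^(i-1) e_{k-i} p_i,
    where p_i are the power sums of the eigenvalues."""
    p = np.array([np.sum(eigvals ** i) for i in range(1, ell + 1)])
    e = np.zeros(ell + 1)
    e[0] = 1.0
    for k in range(1, ell + 1):
        e[k] = sum((-1) ** (i - 1) * e[k - i] * p[i - 1]
                   for i in range(1, k + 1)) / k
    return e[1:]

lam = np.linalg.eigvalsh(np.array([[2.0, 0.5], [0.5, 1.0]]))
es = elementary_symmetric(lam, 2)
# es[0] == trace == 3.0, es[1] == determinant == 1.75
```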
nips_2017_73
Hunt For The Unique, Stable, Sparse And Fast Feature Learning On Graphs For the purpose of learning on graphs, we hunt for a graph feature representation that exhibits certain uniqueness, stability and sparsity properties while also being amenable to fast computation. This leads to the discovery of a family of graph spectral distances (denoted as FGSD) and the graph feature representations based on them, which we prove to possess most of these desired properties. To both evaluate the quality of graph features produced by FGSD and demonstrate their utility, we apply them to the graph classification problem. Through extensive experiments, we show that a simple SVM-based classification algorithm, driven with our powerful FGSD-based graph features, significantly outperforms all the more sophisticated state-of-the-art algorithms on the unlabeled node datasets in terms of both accuracy and speed; it also yields very competitive results on the labeled datasets, despite the fact that it does not utilize any node label information.
The authors propose a kernel for unlabeled graphs based on the histogram of pairwise node distances, where the notion of distance is defined via the eigendecomposition of the Laplacian matrix. They show that such a notion enjoys useful uniqueness and stability properties, and that, when viewed as an embedding, it is a generalization of known graph embedding and dimensionality reduction approaches. The work would benefit from a clearer exposition (briefly anticipating the notions of "distance", invariance, uniqueness, stability and sparsity, albeit in an informal way). Figure 3 needs to be rethought, as it is not informative in its current form (perhaps show the histogram of the number of non-zero elements in the two cases?). The authors should be careful not to oversell the results of Theorem 1, as in practice it is not known how the number of non-unique harmonic spectra increases with graph size (after all, graphs in real-world cases have sizes well exceeding 10 nodes). Also, it is not clear whether having near-identical spectra could be compatible with large structural changes as measured by other sensible graph metrics. Theorem 5 in its current form is not informative, as it encapsulates in \theta a variability that is hard for the reader to gauge. The stability analysis seems to be based on the condition that the variation of weights be positive; perhaps the authors want to add a brief comment on what could be expected if the variation is unrestricted? An empirical analysis of the effect of the histogram bin size should probably be included. The work of Costa and De Grave, "Fast neighborhood subgraph pairwise distance kernel", ICML 2010, seems to be related, as they use the histogram of (shortest path) distances between (subgraphs centered around) each pair of nodes, and should probably be referenced and/or compared with. There are a few typos and badly formatted references. However, the work is of interest and of potential impact.
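For readers who want to experiment, a brute-force sketch of an FGSD-style feature vector is below, using the harmonic member of the distance family (f(lambda) = 1/lambda), which reduces to effective-resistance distances computed from the Laplacian pseudoinverse. The bin count and range are illustrative choices, not the paper's settings; this is exactly why the bin-size analysis requested above matters.

```python
import numpy as np
import networkx as nx

def fgsd_features(G, num_bins=200, max_dist=10.0):
    """Histogram of pairwise harmonic spectral distances
    S(x, y) = sum_k (1/lambda_k) (phi_k(x) - phi_k(y))^2
            = Lp[x, x] + Lp[y, y] - 2 Lp[x, y],
    where Lp is the Moore-Penrose pseudoinverse of the graph Laplacian."""
    L = nx.laplacian_matrix(G).toarray().astype(float)
    Lp = np.linalg.pinv(L)
    d = np.diag(Lp)
    S = d[:, None] + d[None, :] - 2.0 * Lp          # all pairwise distances
    upper = S[np.triu_indices_from(S, k=1)]
    hist, _ = np.histogram(upper, bins=num_bins, range=(0.0, max_dist))
    return hist / max(hist.sum(), 1)                # graph-level feature vector

# graphs become fixed-length vectors, ready for any kernel method / SVM
x = fgsd_features(nx.karate_club_graph())
```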
nips_2017_72
Dilated Recurrent Neural Networks Learning with recurrent neural networks (RNNs) on long sequences is a notoriously difficult task. There are three major challenges: 1) complex dependencies, 2) vanishing and exploding gradients, and 3) efficient parallelization. In this paper, we introduce a simple yet effective RNN connection structure, the DILATEDRNN, which simultaneously tackles all of these challenges. The proposed architecture is characterized by multi-resolution dilated recurrent skip connections, and can be combined flexibly with diverse RNN cells. Moreover, the DILATEDRNN reduces the number of parameters needed and enhances training efficiency significantly, while matching state-of-the-art performance (even with standard RNN cells) in tasks involving very long-term dependencies. To provide a theory-based quantification of the architecture's advantages, we introduce a memory capacity measure, the mean recurrent length, which is more suitable for RNNs with long skip connections than existing measures. We rigorously prove the advantages of the DILATEDRNN over other recurrent neural architectures. The code for our method is publicly available.
The paper proposes to build RNNs out of recurrent layers with skip connections of exponentially increasing lengths, i.e. the recurrent equivalent of stacks of dilated convolutions. Compared to previous work on RNNs with skip connections, the connections to the immediately preceding timestep are fully replaced here, allowing for increased parallelism in the computations. Dilated RNNs are compared against several other recurrent architectures designed to improve learning of long-range temporal correlations, and found to perform comparably or better on some tasks.

The idea is simple and potentially effective, and related work is adequately covered in the introduction, but the evaluation is somewhat flawed. While the idea of using dilated connections in RNNs is clearly inspired by their use in causal CNNs (e.g. WaveNet), the evaluation never includes them. Only several other recurrent architectures are compared against. I think this is especially limiting as the paper claims computational advantages for dilated RNNs, but these computational advantages could be even more outspoken for causal CNNs, which allow for training and certain forms of inference to be fully parallelised across time.

The noisy MNIST task described from L210 onwards (in Section 4.2) seems to be nonstandard and I don't really understand the point. It's nice that dilated RNNs are robust to this type of corruption, but sequences padded with uniform noise seem fairly unlikely to occur in the real world. In Section 4.3, raw audio modelling is found to be less effective than MFCC-based modelling, so this also defeats the point of the dilated RNN a bit. Also, trying dilated RNNs combined with batch normalization would be an interesting additional experiment. Overall, I don't find the evidence quite as strong as it is claimed to be in the conclusion of the paper.

Remarks:
- Please have the manuscript proofread for spelling and grammar.
- In the introduction, it is stated that RNNs can potentially model infinitely long dependencies, but this is not actually achievable in practice (at least with the types of models that are currently popular). To date, this advantage of RNNs over CNNs is entirely theoretical.
- It is unclear whether dilated RNNs also have repeated stacks of dilation stages, like e.g. WaveNet. If not, this would mean that large-scale features (from layers with large dilation) never feed into fine-scale features (layers with small dilation). That seems like it could be a severe limitation, so this should be clarified.
- Figures 4, 5, 6 are a bit small and hard to read; removing the stars from the curves might make things more clear.

UPDATE: in the author feedback it is stated that the purpose of the paper is to propose an enhancement for RNN models, but since this enhancement is clearly inspired by dilation in CNNs, I don't think this justifies the lack of a comparison to dilated CNNs at all. Ultimately, we want to have models that perform well for a given task -- we do not want to use RNNs just for the sake of it. That said, the additional experiments that were conducted do address my concerns somewhat (although I wish they had been done on a different task; I still don't really get the point of "noisy MNIST"). So I have raised the score accordingly.
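To make the connection pattern concrete, here is a minimal PyTorch sketch of one dilated recurrent layer using the standard time-folding trick, so that the state feeds back from t - d rather than t - 1, with layers stacked under exponentially increasing dilations. This is my reading of the architecture rather than the authors' released code; the cell choice and sizes are placeholders.

```python
import torch
import torch.nn as nn

class DilatedRNNLayer(nn.Module):
    """One dilated recurrent layer: fold time so that `dilation` interleaved
    chains run in parallel through a single cell, giving skip-length-d
    recurrence and the increased parallelism discussed above."""
    def __init__(self, input_size, hidden_size, dilation, cell=nn.GRU):
        super().__init__()
        self.dilation = dilation
        self.rnn = cell(input_size, hidden_size)

    def forward(self, x):                      # x: (seq_len, batch, input_size)
        T, B, D = x.shape
        pad = (-T) % self.dilation
        if pad:                                 # pad so seq_len divides evenly
            x = torch.cat([x, x.new_zeros(pad, B, D)], dim=0)
        x = x.view(-1, self.dilation * B, D)    # step t now sees state from t-d
        y, _ = self.rnn(x)
        return y.reshape(-1, B, y.shape[-1])[:T]

# stack with dilations 1, 2, 4, 8 (placeholder sizes)
stack = nn.Sequential(*[DilatedRNNLayer(32 if i == 0 else 64, 64, 2 ** i)
                        for i in range(4)])
```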
nips_2017_786
Predictive-State Decoders: Encoding the Future into Recurrent Networks Recurrent neural networks (RNNs) are a vital modeling technique that rely on internal states learned indirectly by optimization of a supervised, unsupervised, or reinforcement training loss. RNNs are used to model dynamic processes that are characterized by underlying latent states whose form is often unknown, precluding its analytic representation inside an RNN. In the Predictive-State Representation (PSR) literature, latent state processes are modeled by an internal state representation that directly models the distribution of future observations, and most recent work in this area has relied on explicitly representing and targeting sufficient statistics of this probability distribution. We seek to combine the advantages of RNNs and PSRs by augmenting existing state-of-the-art recurrent neural networks with PREDICTIVE-STATE DECODERS (PSDs), which add supervision to the network's internal state representation to target predicting future observations. PSDs are simple to implement and easily incorporated into existing training pipelines via additional loss regularization. We demonstrate the effectiveness of PSDs with experimental results in three different domains: probabilistic filtering, Imitation Learning, and Reinforcement Learning. In each, our method improves statistical performance of state-of-the-art recurrent baselines and does so with fewer iterations and less data.
The authors present a simple method for regularizing recurrent neural networks (specifically applied to low-dimensional control tasks trained by filtering, imitation, and reinforcement learning) that is inspired by the literature on Predictive State Representations (PSRs). In PSRs, processes with latent states are modeled by directly identifying the latent state with a representation of the sufficient statistics of future observations. This stands in contrast to RNNs, which use an opaque hidden state that is optimized by backpropagation to make the whole system model the target of interest through evolution of the hidden state. The authors propose a simple regularization method for RNNs inspired by PSRs, called Predictive-State Decoders, which maintains the parametric form of a standard RNN but augments the training objective to make the hidden state more directly predict statistics of distant future observations. While direct training of an RNN by backpropagation implicitly encourages the internal state to be predictive of the future, the authors demonstrate that this explicit additional regularization term gives significant improvements on several control tasks.

The idea, specifically, is to augment the standard training loss for an RNN controller (whether for filtering, imitation learning, or reinforcement learning) with an additional term that maps the hidden state at each timestep through an additional learned parametric function F, and penalizes the difference between that and some statistics of up to 10 future observations. This encourages the internal state of the RNN to more directly capture the future distribution of observations without over-fitting to next-step prediction, and architecturally improves gradient flow through the network, similarly to skip-connections, attention, or the intermediate classifiers added in FitNets or Inception.

I like the main idea of the paper. It is simple, well motivated, and connects RNN-based models to a large body of theoretically grounded research on modeling latent state processes, of which the authors give a good literature review. The experiments and analysis are thorough, and demonstrate significant improvements over the baseline on many of the tasks in the OpenAI gym. The baselines use strong, well-known techniques and optimization methods for imitation learning and RL.

I have some concerns that the paper does not offer enough detail about the exact architectures used. For the filtering experiment, it is stated that phi is the identity function, which I assume is also true for the later experiments. Additionally, unless I am missing something, the exact functional form of F, the decoder, is never mentioned in the text. Is this a feedforward neural network? Given that the predicted states can sometimes be 10 times the size of the observation dimension, have the authors thought about how to apply this method to models with very high-dimensional observations? More detail on the exact form of F would be required in a camera-ready.

Overall, I think this is a well-executed paper that demonstrates a nice synergy between the PSR and RNN approaches to latent state process modeling. The method seems simple to implement and convincingly improves over the baselines.
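For concreteness, here is a sketch of the regularizer as described above: decode each hidden state through the learned map F into statistics of the next few observations and add the squared error to the task loss. The identity featurization (phi), the squared-error penalty, and the flattened-window target format are assumptions based on my reading, since the paper leaves F's form unspecified.

```python
import torch

def psd_loss(hidden, obs, decoder, horizon=10):
    """Predictive-State Decoder term: F(h_t) should predict the concatenated
    next `horizon` observations. `hidden`: (T, B, H); `obs`: (T, B, D);
    `decoder` is any learned map from H to horizon * D (e.g. a small MLP)."""
    T, B, D = obs.shape
    losses = []
    for t in range(T - horizon):
        target = obs[t + 1 : t + 1 + horizon]             # (horizon, B, D)
        target = target.permute(1, 0, 2).reshape(B, -1)   # (B, horizon * D)
        pred = decoder(hidden[t])                         # (B, horizon * D)
        losses.append(((pred - target) ** 2).mean())
    return torch.stack(losses).mean()

# total objective: task loss plus the lambda-weighted predictive-state term
# loss = task_loss + lam * psd_loss(hidden_states, observations, F)
```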
nips_2017_970
TernGrad: Ternary Gradients to Reduce Communication in Distributed Deep Learning High network communication cost for synchronizing gradients and parameters is the well-known bottleneck of distributed training. In this work, we propose TernGrad that uses ternary gradients to accelerate distributed deep learning in data parallelism. Our approach requires only three numerical levels {−1, 0, 1}, which can aggressively reduce the communication time. We mathematically prove the convergence of TernGrad under the assumption of a bound on gradients. Guided by the bound, we propose layer-wise ternarizing and gradient clipping to improve its convergence. Our experiments show that applying TernGrad on AlexNet doesn't incur any accuracy loss and can even improve accuracy. The accuracy loss of GoogLeNet induced by TernGrad is less than 2% on average. Finally, a performance model is proposed to study the scalability of TernGrad. Experiments show significant speed gains for various deep neural networks. Our source code is available 1 .
The authors try to reduce communication cost in distributed deep learning. They propose a new method, TernGrad, which uses ternary gradients to reduce the overhead of gradient synchronization. The technique is correct and the authors performed experiments to evaluate their method. Overall, the method is novel and interesting and may have significant impact on large-scale deep learning. Some comments below for the authors to improve their paper.

1. Some notation and wording should be made clearer. For example, s is defined as the max value of abs(g), but the domain of the max operation fails to show up in the equation.
2. The authors mention that they tried to narrow the gap between the two bounds for better convergence, but do not say whether this means a better convergence rate or a better convergence probability. And it's a pity that there's no theoretical analysis of the convergence rate.
3. One big drawback is that the authors make too strong an assumption, i.e. that the cost function has only a single minimum, which means the objective function is convex. This is rare in practice. To the best of my knowledge, there are many efficient algorithms for convex optimization in distributed systems. Therefore, the proposed method may not have a big impact.
4. The authors should use more metrics in the experiments instead of just accuracy.
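The ternarization step itself is compact enough to sketch. The version below follows the abstract's description: each gradient entry becomes one of {-s, 0, +s} with s = max |g|, chosen stochastically so the result is unbiased; the paper's layer-wise ternarizing and gradient clipping are omitted here.

```python
import numpy as np

def ternarize(g, seed=None):
    """Stochastic ternary quantization of a gradient tensor: entry g_i maps to
    s * sign(g_i) with probability |g_i| / s and to 0 otherwise, where
    s = max |g|. Unbiased: E[ternarize(g)] = g, so SGD still converges in
    expectation while each entry needs only ~2 bits on the wire."""
    rng = np.random.default_rng(seed)
    s = np.max(np.abs(g))
    if s == 0.0:
        return np.zeros_like(g)
    keep = rng.random(g.shape) < np.abs(g) / s   # keep with prob |g_i| / s
    return s * np.sign(g) * keep

g = np.array([0.02, -0.5, 0.9, -0.04])
print(ternarize(g, seed=0))   # each surviving entry is +/- 0.9 (= max |g|)
```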
nips_2017_787
Optimistic posterior sampling for reinforcement learning: worst-case regret bounds We present an algorithm based on posterior sampling (aka Thompson sampling) that achieves near-optimal worst-case regret bounds when the underlying Markov Decision Process (MDP) is communicating with a finite, though unknown, diameter. Our main result is a high probability regret upper bound of Õ(D√(SAT)) for any communicating MDP with S states, A actions and diameter D, when T ≥ S^5·A. Here, regret compares the total reward achieved by the algorithm to the total expected reward of an optimal infinite-horizon undiscounted average reward policy, in time horizon T. This result improves over the best previously known upper bound of Õ(DS√(AT)) achieved by any algorithm in this setting, and matches the dependence on S in the established lower bound of Ω(√(DSAT)) for this problem. Our techniques involve proving some novel results about the anti-concentration of the Dirichlet distribution, which may be of independent interest.
Posterior Sampling for Reinforcement Learning: Worst-Case Regret Bounds
=======================================================================
This paper presents a new algorithm for efficient exploration in Markov decision processes. The algorithm is an optimistic variant of posterior sampling, similar in flavour to BOSS. The authors prove new performance bounds for this approach in a minimax setting that are state of the art in this setting.

There are a lot of things to like about this paper:
- The paper is well written and clear overall.
- The analysis and rigour are of a high standard. I would say that most of the key insights come from the earlier "Gaussian-Dirichlet dominance" of Osband et al, but there are some significant extensions and results that may be of wider interest to the community.
- The resulting bound, which is O(D \sqrt{SAT}) for large enough T, is the first to match the lower bounds in this specific setting (communicating MDPs with worst-case guarantees) up to logarithmic factors.

Overall I think that this is a good paper and will be of significant interest to the community. I would recommend acceptance; however, there are several serious issues that need to be addressed before a finalized version:
- This paper doesn't do a great job addressing the previous Bayesian analyses of PSRL, and the significant similarities in the work. Yes, these new bounds are "worst case" rather than in expectation, and for communicating MDPs instead of finite horizon... however, these differences are not so drastic as to just be dismissed as incomparable. For example, this paper never actually mentions that PSRL/RLSVI used this identical stochastic optimism condition to perform a similar reduction of S to \sqrt{S}.
- Further, this algorithm is definitively not "PSRL" (it's really much more similar to BOSS) and it needs a specific name. My belief is that the current analysis would not apply to PSRL, where only one sample is taken, but PSRL actually seems to perform pretty well in experiments (see Osband and Van Roy 2016)... do the authors think that these extra samples are really necessary?
- The new bounds aren't really an overall improvement in big-O scaling except for very large T. I think this blurs a lot of the superiority of the "worst-case" analysis over "expected" regret... none of these "bounds" is exactly practical; their main value lies in the insight they give towards efficient exploration algorithms. It seems a significant downside to lose the natural simplicity/scalability of PSRL all in the name of these bounds, at a huge computational cost and with an algorithm that no longer really scales naturally to domains with generalization.
- It would be nice to add some experimental evaluation of this new algorithm, perhaps through the code published as part of (Osband and Van Roy, 2016)... especially if this can add some intuition/insight into the relative practical performance of these algorithms.

==============================================================================
= Rebuttal comments
==============================================================================
Thanks for the author feedback; my overall opinion is unchanged and I definitely recommend acceptance... but I *strongly* hope that this algorithm gets a clear name separate from PSRL, because I think it will only lead to confusion in the field otherwise. On the subject of bounds, I totally agree these new bounds (and techniques) are significant. I do believe it would give some nice context to mention that, at a high level, the BayesRegret analysis for PSRL also reduced S -> sqrt{S}.

On the subject of multiple samples, the authors mention "An Optimistic Posterior Sampling Strategy for Bayesian Reinforcement Learning" (Raphael Fonteneau, Nathan Korda, Remi Munos), which I think is a good reference. One thing to note is that this workshop paper (incorrectly) compared a version of PSRL (and optimistic PSRL) that resamples every timestep... this removes the element of "deep exploration" due to sampling noise every timestep. If you actually do the correct style of PSRL (resampling once per episode) then multiple samples do not clearly lead to improved performance (see Appendix D.3 of https://arxiv.org/pdf/1607.00215.pdf).

On the subject of bounds and large T, I do agree with the authors, but I think they should make sure to stress that for small T these bounds are not as good, and to discuss whether these "trailing" terms are real or just an artefact of loose analysis.
nips_2017_3139
Hierarchical Clustering Beyond the Worst-Case Hierarchical clustering, that is, computing a recursive partitioning of a dataset to obtain clusters at increasingly finer granularity, is a fundamental problem in data analysis. Although hierarchical clustering has mostly been studied through procedures such as linkage algorithms, or top-down heuristics, rather than as optimization problems, Dasgupta [9] recently proposed an objective function for hierarchical clustering and initiated a line of work developing algorithms that explicitly optimize an objective (see also [7, 22, 8]). In this paper, we consider a fairly general random graph model for hierarchical clustering, called the hierarchical stochastic block model (HSBM), and show that in certain regimes the SVD approach of McSherry [18] combined with specific linkage methods results in a clustering that gives an O(1) approximation to Dasgupta's cost function. Finally, we report empirical evaluation on synthetic and real-world data showing that our proposed SVD-based method does indeed achieve a better cost than other widely-used heuristics and also results in a better classification accuracy when the underlying problem was that of multi-class classification.
Summary of paper: The paper studies hierarchical clustering in the recent framework of [1], which provides a formal cost function for the problem. [4] proposed a hierarchical stochastic block model (HSBM) for this problem, and showed that under this model, one can obtain a constant factor approximation of Dasgupta's cost function [1]. The present work generalises the HSBM model slightly, in that it does not restrict the number of clusters to be fixed, and hence the cluster sizes may be sub-linear in the data size. The authors prove guarantees for a proposed algorithm under this model. They further extend their analysis to a semi-random setting where an adversary may remove some inter-cluster edges. They also perform experiments on some benchmark datasets to show the performance of the method. Quality: The paper is of decent quality. While the paper is theoretically sound, the experiment section is not very convincing. Why would one consider 5 flat cluster datasets and only one hierarchical dataset in a hierarchical clustering paper? Even for the last case, only a subset of 6 classes is used. Further, 3 out of 5 methods in the comparison are variations of Linkage++, the other two being standard agglomerative techniques. What about divisive algorithms? Clarity: Most parts of the paper can be followed, but in some places the authors compressed the material a lot, and one needs to refer to the cited papers for an intuitive understanding (for instance, Definition 2.1). There are minor typos throughout the paper, for example in lines 41, 114, 126, 179, 201, 342. Originality: The paper derives a considerable portion from [4]. Though some extensions are claimed, I wonder if there are any technical challenges in proving the additional results (I did not see the details in the supplementary material). For example, is there any difference between Linkage++ and the method in [4]? Apparently I did not find any significant difference. The analysis on HSBM may allow growing k, but is there truly a challenge in doing so? At least for the first step of the algorithm, achieving n_min > \sqrt{n} does not seem a big hurdle. The analysis in the semi-random setting is new, but I wonder if there are any challenges in this scenario. Or does it immediately follow from combining the HSBM analysis with standard techniques of semi-random graph partitioning? Significance: In general, hierarchical clustering in the framework of [1] has considerable significance, and [4] is also a quite interesting paper. However, I consider the present paper a minor extension of [4], with the only possibly interesting aspect being the result in the semi-random setting. --------------------------- Update after rebuttal: I thank the authors for clarifying the technical challenges with proving the new results. Hence, I have now increased my score (though I am still concerned about the algorithmic and experimental contributions). Regarding the experiments, I do agree that there are not many well-defined benchmarks out there. The papers that I know are also from the early 2000s, but the authors may want to check the datasets used here: 1. Zhao, Karypis. Hierarchical Clustering Algorithms for Document Datasets. Data Mining and Knowledge Discovery, 2005 2. Alizadeh et al. Distinct types of diffuse large B-cell lymphoma identified by gene expression profiling. Nature, 2000 Apart from these, the UCI repository has Glass and Reuters, which I know have been used for hierarchical clustering.
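For reference, the cost function of [1] that this discussion revolves around has the following standard form (the notation here may differ slightly from the paper under review):

```latex
% Dasgupta's cost of a hierarchy T on a similarity graph G = (V, E, w):
% T_{ij} is the subtree rooted at the least common ancestor of leaves i and j,
% and |leaves(T_{ij})| counts the leaves it contains; lower cost is better.
\mathrm{cost}(T) \;=\; \sum_{(i,j) \in E} w_{ij}\,\bigl|\mathrm{leaves}(T_{ij})\bigr|
```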
nips_2017_2558
Maximum Margin Interval Trees Learning a regression function using censored or interval-valued output data is an important problem in fields such as genomics and medicine. The goal is to learn a real-valued prediction function, and the training output labels indicate an interval of possible values. Whereas most existing algorithms for this task are linear models, in this paper we investigate learning nonlinear tree models. We propose to learn a tree by minimizing a margin-based discriminative objective function, and we provide a dynamic programming algorithm for computing the optimal solution in log-linear time. We show empirically that this algorithm achieves state-of-the-art speed and prediction accuracy in a benchmark of several data sets.
The authors of this paper present a new decision tree algorithm for the interval regression problem. Leaves are partitioned using a margin-based hinge loss similar to the L1-regularized hinge loss in Rigaill et al., Proc. ICML 2013. However, the regression tree algorithm presented in this work is not limited to modeling linear patterns, unlike the L1-regularized linear models in Rigaill et al. For training the nonlinear tree model, a sequence of convex optimization subproblems is optimally solved in log-linear time by Dynamic Programming (DP). The new maximum margin interval tree (MMIT) algorithm is compared with state-of-the-art margin-based and non-margin-based methods on several real and simulated datasets. - In terms of originality, the proposed margin-based hinge loss is similar to the L1-regularized hinge loss in Rigaill et al., Proc. ICML 2013. However, the regression tree algorithm presented in this work is not limited to modeling linear patterns, unlike the L1-regularized linear models in Rigaill et al. MMIT achieves low test errors on nonlinear datasets and yields accurate models on simulated linear datasets as well. - In terms of significance: interval regression is a fundamental problem in fields such as survival analysis and computational biology. There are very few algorithms designed for this task, and most of them are linear models. A method learning nonlinear tree models like Maximum Margin Interval Trees (MMIT) could be helpful for practitioners in the field. - In terms of quality, the paper is fairly executed; the authors compare MMIT with state-of-the-art margin-based and non-margin-based methods on several real and simulated datasets. The 11-page supplementary material contains proofs, pseudocode, details of the open source implementation (link was hidden for anonymity) and of the experiments. - In terms of clarity, this is a well written paper. In summary, my opinion is that this is a well written and nicely organized work; the proposed method would be useful in real-world applications, and the novelty of the work satisfies the NIPS standards. Thus I recommend this work for publication.
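As a rough illustration of the kind of margin-based objective discussed above, a generic hinge loss for an interval-valued target can be written as follows; this is a sketch of the general idea, not necessarily the exact function optimized in the paper, and the function name is mine:

```python
import numpy as np

def interval_hinge_loss(mu, lower, upper, margin=1.0):
    """Hinge-style loss for an interval target [lower, upper]: zero when
    the prediction mu lies at least `margin` inside the interval, growing
    linearly as mu approaches or leaves a boundary."""
    left = np.maximum(0.0, lower - mu + margin)   # near / below the lower limit
    right = np.maximum(0.0, mu - upper + margin)  # near / above the upper limit
    return left + right

# Example: a censored output known only to lie in [2, 5]
print(interval_hinge_loss(3.5, 2.0, 5.0))  # 0.0 (well inside the interval)
print(interval_hinge_loss(1.0, 2.0, 5.0))  # 2.0 (violates the lower limit)
```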
nips_2017_1861
Character-Level Language Modeling with Recurrent Highway Hypernetworks We present extensive experimental and theoretical support for the efficacy of recurrent highway networks (RHNs) and recurrent hypernetworks complementary to the original works. Where the original RHN work primarily provides theoretical treatment of the subject, we demonstrate experimentally that RHNs benefit from far better gradient flow than LSTMs in addition to their improved task accuracy. The original hypernetworks work presents detailed experimental results but leaves several theoretical issues unresolved; we consider these in depth and frame several feasible solutions that we believe will yield further gains in the future. We demonstrate that these approaches are complementary: by combining RHNs and hypernetworks, we make a significant improvement over current state-of-the-art character-level language modeling performance on Penn Treebank while relying on much simpler regularization. Finally, we argue for RHNs as a drop-in replacement for LSTMs (analogous to LSTMs for vanilla RNNs) and for hypernetworks as a de facto augmentation (analogous to attention) for recurrent architectures.
The paper describes how to achieve state-of-the-art results on a small character prediction task. Some detailed comments below: - Title: First of all, I would not call work on PTB using characters "language modeling"; it is character prediction. Yes, theoretically the same thing, but actual language model applications typically use much bigger vocabularies and much more data, and this is where most of the problems in language modeling lie. Replacing "language modeling" by "character prediction" would make a more appropriate title. - Results (experimental): I agree that PTB is OK to use as a database to test character prediction models. However, it would have been good to also test/show your findings on larger databases, such as the Google 1 Billion Word corpus or similar. Public implementations for training these relatively quickly on limited hardware exist, and it would have made this a much stronger paper, as eventually I believe you hope that your findings will in fact be used for larger, real problems. To show in your paper that you can handle large problems is always better than finding reasons not to do it, as you do in lines 113-118 and in other places throughout the paper. - Results (experimental): You list accuracy/perplexity/bpc -- it would be good to define exactly how these are calculated to avoid confusion. - 3.2: 2-3 days of training for PTB is quite long, as you note yourself. It would have been nice to compare the different models over time by plotting perplexity vs. training time for all of them, to see how efficiently they can be trained. - 5.1: You say that you are almost certainly within a few percent of maximum performance on this task -- why? Maybe that is true but maybe it isn't. Diminishing gains on an absolute scale can have many reasons, including the one that possibly many people do not work on PTB for character prediction anymore. Also, the last sentence: Why is accuracy better than perplexity to show relative improvement? BTW, I agree that accuracy is a good measure in general (because everybody knows it is bounded between 0 and 100%), but perplexity is simply the standard measure for these kinds of prediction tasks. - 5.4: You say that how and when to visualize gradient flow has not received direct treatment. Yes, true, but I would argue the main reason is that it is simply not so interesting -- what matters in training these networks is not a theoretical treatment of some definition of gradient flow but simply time to convergence for a given model size on a given task. This also goes back to my comment from above: what would have been more interesting is to show training time vs. accuracy for several different networks and different sizes. - 6: Why can you compare gradient norms of completely different architectures? I would argue it is quite unclear whether gradients of different architectures have to have similar norms -- the error surface could look very different. Also, even if you can compare them, the learning algorithm matters a lot -- SGD vs. Adam vs. Rprop or whatnot all have a completely different effect given a certain gradient (in some cases only the sign matters etc.). So overall, I would argue that gradient flow is simply not that important; it would be better to focus on time to convergence, optimizing learning algorithms, hyperparameters, etc.
nips_2017_2104
Incorporating Side Information by Adaptive Convolution Computer vision tasks often have side information available that is helpful to solve the task. For example, for crowd counting, the camera perspective (e.g., camera angle and height) gives a clue about the appearance and scale of people in the scene. While side information has been shown to be useful for counting systems using traditional hand-crafted features, it has not been fully utilized in counting systems based on deep learning. In order to incorporate the available side information, we propose an adaptive convolutional neural network (ACNN), where the convolution filter weights adapt to the current scene context via the side information. In particular, we model the filter weights as a low-dimensional manifold within the high-dimensional space of filter weights. The filter weights are generated using a learned "filter manifold" sub-network, whose input is the side information. With the help of side information and adaptive weights, the ACNN can disentangle the variations related to the side information, and extract discriminative features related to the current context (e.g. camera perspective, noise level, blur kernel parameters). We demonstrate the effectiveness of ACNN incorporating side information on 3 tasks: crowd counting, corrupted digit recognition, and image deblurring. Our experiments show that ACNN improves the performance compared to a plain CNN with a similar number of parameters. Since existing crowd counting datasets do not contain ground-truth side information, we collect a new dataset with the ground-truth camera angle and height as the side information.
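To make the filter-manifold idea concrete, here is a minimal sketch of a single adaptive layer; the class name, the MLP widths and the scalar side input are illustrative assumptions of mine, not the paper's exact architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveConv2d(nn.Module):
    """Convolution whose kernel and bias are predicted from side information."""
    def __init__(self, in_ch, out_ch, k, side_dim=1, hidden=64):
        super().__init__()
        self.shape = (out_ch, in_ch, k, k)
        n_out = out_ch * in_ch * k * k + out_ch  # kernel weights + bias
        # "Filter manifold network": a small MLP from side info to filter weights.
        self.fmn = nn.Sequential(
            nn.Linear(side_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_out),
        )

    def forward(self, x, side):
        w = self.fmn(side)                    # side: tensor of shape (side_dim,)
        out_ch = self.shape[0]
        kernel = w[:-out_ch].view(self.shape)
        bias = w[-out_ch:]
        return F.conv2d(x, kernel, bias, padding=self.shape[-1] // 2)

layer = AdaptiveConv2d(in_ch=3, out_ch=8, k=3)
x = torch.randn(4, 3, 32, 32)                 # a batch from one scene
camera_angle = torch.tensor([0.4])            # normalized side information
y = layer(x, camera_angle)                    # -> (4, 8, 32, 32)
```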
Summary of the Paper: This work proposes to use adaptive convolutions (also called 'cross convolutions') to incorporate side information (e.g., camera angle) into CNN architectures for vision tasks (e.g., crowd counting). The filter weights in each adaptive convolution layer are predicted using a separate neural network (one network for each set of filter weights), which is a multi-layer perceptron. This network is referred to as the 'Filter Manifold Network' and takes the auxiliary side information as input to predict the filter weights. Experiments on the three vision tasks of crowd counting, digit recognition and image deconvolution indicate the potential of the proposed technique for incorporating auxiliary information. In addition, this paper contributes a new dataset for crowd counting with different camera heights and angles. Paper Strengths: - A simple yet working technique for incorporating auxiliary information into CNNs with adaptive convolutions. - With this technique, a single network is sufficient for different dataset settings (for instance, denoising with different blur kernel parameters), and the authors demonstrated the generalization capability of the proposed technique to interpolate between auxiliary information values. - Experiments on 3 different vision tasks with good results compared to similar CNN architectures with non-adaptive convolutions. - A new dataset on crowd counting. Weaknesses: - There is almost no discussion or analysis of the 'filter manifold network' (FMN), which forms the main part of the technique. Did the authors experiment with any other architectures for the FMN? How do the adaptive convolutions scale with the number of filter parameters? It seems that in all the experiments, the number of input and output channels is small (around 32). Can the FMN scale reasonably well when the number of filter parameters is huge (say, 128 to 512 input and output channels, which is common in many CNN architectures)? - From the experimental results, it seems that replacing normal convolutions with adaptive convolutions is not always a good idea. In Table 3, ACNN-v3 (all adaptive convolutions) performed worse than ACNN-v2 (adaptive convolutions only in the last layer). So it seems that the placement of adaptive convolutions is important, but there is no analysis or comment on this aspect of the technique. - The improvement on image deconvolution is minimal, with CNN-X working better than ACNN when the whole dataset is considered. This shows that adaptive convolutions are not universally applicable when side information is available. Also, there are no comparisons with state-of-the-art network architectures for digit recognition and image deconvolution. Suggestions: - It would be good to move some visual results from the supplementary material to the main paper. In the main paper, there are almost no visual results on crowd density estimation, which forms the main experiment of the paper. At present, there are 3 different figures illustrating the proposed network architecture. Probably the authors can condense these to two and make use of that space for some visual results. - It would be great if the authors can address some of the above weaknesses in the revision to make this a good paper. Review Summary: - Despite some drawbacks in terms of experimental analysis and the general applicability of the proposed technique, the paper has several experiments and insights that would be interesting to the community.
------------------ After the Rebuttal: ------------------ My concern with this paper is the insufficient analysis of the 'filter manifold network' architecture and of the placement of adaptive convolutions in a given CNN. The authors partially addressed these points in their rebuttal, promising to add the discussion to a revised version and deferring some other parts to future work. With the expectation that the authors will revise the paper, and also since other reviewers are fairly positive about this work, I recommend this paper for acceptance.
nips_2017_878
A Dirichlet Mixture Model of Hawkes Processes for Event Sequence Clustering How to cluster event sequences generated via different point processes is an interesting and important problem in statistical machine learning. To solve this problem, we propose and discuss an effective model-based clustering method based on a novel Dirichlet mixture model of a special but significant type of point processes, the Hawkes process. The proposed model generates the event sequences with different clusters from the Hawkes processes with different parameters, and uses a Dirichlet distribution as the prior distribution of the clusters. We prove the identifiability of our mixture model and propose an effective variational Bayesian inference algorithm to learn our model. An adaptive inner iteration allocation strategy is designed to accelerate the convergence of our algorithm. Moreover, we investigate the sample complexity and the computational complexity of our learning algorithm in depth. Experiments on both synthetic and real-world data show that the clustering method based on our model can learn structural triggering patterns hidden in asynchronous event sequences robustly and achieve superior performance on clustering purity and consistency compared to existing methods.
In this work, the authors introduce a novel method to cluster sequences of events. This method relies on a Dirichlet mixture model of Hawkes processes to describe the data and assign sequences to clusters. This differs from previous work using Hawkes processes for sequential data by focusing directly on clustering the sequences themselves rather than sub-events; in particular, compared to prior work such as that by Luo et al., 2015, the clustering is done as part of the modeling itself rather than applied after learning the parameters of the model, which leads to a sizable improvement in experimental results. Quality: This paper is of good quality. Clarity: This paper is overall clear, but contains some grammar and style mistakes - some have been noted below in the detailed comments, but the authors should ideally have their manuscript proof-read for English grammar and style issues. Figure 2 is too small to be easily readable. Originality: This paper introduces a new method to model sequences of events. Significance: This paper appears to be significant. Detailed comments: - In section 2, it would be valuable to provide a formal definition of the Hawkes process; in particular, it would improve readability to put the intensity equation (line 67) in its own equation environment, and then refer to it by number in Section 3 when relevant. - When comparing line 67 to Eq. 1, it seems that the sum over the event types (c_i) is missing; could you please clarify this? - Line 156: you state that the runtime of the algorithm is linearly proportional to the number of inner iterations - could you write out the complexity explicitly? - Along the same lines, could you please state the complexity of MMHP in section 4.4? - The second paragraph of section 4.2 needs to be made clearer. Clarity comments: - You do not need to define R+ on line 101 - line 23: "the previous two" - line 39: "prior of the clusters" - line 51: "the problems of overfitting" - line 61: "A Hawkes" - line 73: "was first proposed" - line 77: "was used" - line 100: "basis" - line 109: "a lot of work" should be rephrased - line 125: "goes to infinity" - line 232: "state of the art" - line 255: spacing
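For the formal definition requested above, the standard conditional intensity of a multivariate Hawkes process with event types c is the following (generic notation; the paper's exact parameterization may differ):

```latex
% mu_c is the base rate of type c; phi_{c c'} is the triggering kernel,
% e.g. phi_{c c'}(t) = a_{c c'} * exp(-b t) for an exponential decay.
\lambda_c(t) \;=\; \mu_c \;+\; \sum_{t_i < t} \phi_{c\,c_i}\!\left(t - t_i\right)
```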
nips_2017_1462
Information-theoretic analysis of generalization capability of learning algorithms We derive upper bounds on the generalization error of a learning algorithm in terms of the mutual information between its input and output. The bounds provide an information-theoretic understanding of generalization in learning problems, and give theoretical guidelines for striking the right balance between data fit and generalization by controlling the input-output mutual information. We propose a number of methods for this purpose, among which are algorithms that regularize the ERM algorithm with relative entropy or with random noise. Our work extends and leads to nontrivial improvements on the recent results of Russo and Zou.
This paper derives upper bounds on the generalization error of learning algorithms in terms of the mutual information between the data S and the hypothesis W. Using Lemma 1, their first result shows that the generalization error can be upper-bounded by a quantity proportional to the square root of the mutual information between S and W, as long as the loss is \sigma-subgaussian. Lemma 1 also enables the authors to immediately recover a previous result by Russo and Zou. They furthermore provide a high-probability bound and an expected bound on the absolute difference between the population risk and the empirical risk. The next section then applies these bounds to specific algorithms: two of them have a hypothesis set whose size can be easily upper bounded; the next two have a mutual information between S and W that can be upper bounded; then the authors discuss practical methods to effectively bound I(S;W); and finally, the generalization error of a composite learning algorithm is analyzed. Finally, a similar analysis is applied to "adaptive data analytics". PROS: - The authors attempt to bridge information-theoretic ideas with learning theory. This is something that I believe would enormously benefit both communities. - The first bounds on the generalization error based on the mutual information are intuitively appealing. (It is, however, unclear to me how novel they are relative to Russo and Zou's prior work.) CONS: - The paper is very dense as it is, and would have benefited from more concrete, lively examples that the NIPS community can relate to in order to illustrate the results. In particular, I feel it's a terrible shame that most likely only a minute fraction of the NIPS community will find interest in the results of this paper, just because of the way it is written. This is the main reason why I felt obliged to decrease my score. - All the results seem to require the sigma-subgaussianity of the loss. How restrictive is this? Can you give concrete examples where the analysis holds and where it fails?
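For concreteness, the first result described above has the following flavour for a \sigma-subgaussian loss and n i.i.d. samples; this is my paraphrase of this line of work, and the exact constants should be checked against the paper:

```latex
% Expected generalization gap of an algorithm with input dataset S (n samples)
% and output hypothesis W, when the loss is sigma-subgaussian:
\Bigl|\,\mathbb{E}\bigl[L_\mu(W) - L_S(W)\bigr]\Bigr|
  \;\le\; \sqrt{\frac{2\sigma^2}{n}\, I(S;W)}
```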
nips_2017_3256
Semisupervised Clustering, AND-Queries and Locally Encodable Source Coding Source coding is the canonical problem of data compression in information theory. In a locally encodable source coding, each compressed bit depends on only a few bits of the input. In this paper, we show that a recently popular model of semisupervised clustering is equivalent to locally encodable source coding. In this model, the task is to perform multiclass labeling of unlabeled elements. At the beginning, we can ask in parallel a set of simple queries to an oracle who provides (possibly erroneous) binary answers to the queries. The queries cannot involve more than two (or a fixed constant number ∆ of) elements. Now the labeling of all the elements (or clustering) must be performed based on the (noisy) query answers. The goal is to recover all the correct labelings while minimizing the number of such queries. The equivalence to locally encodable source codes leads us to find lower bounds on the number of queries required in a variety of scenarios. We are also able to show fundamental limitations of pairwise 'same cluster' queries, and propose pairwise AND queries, which provably perform better in many situations.
This paper explores a clustering problem whose goal is to reconstruct the true labels of elements from possibly noisy and non-adaptive query-answers, each involving only a few elements. It derives lower bounds on the number of queries required to reconstruct the ground-truth labels exactly or partially in a variety of scenarios. It also proposes a so-called "AND-queries" algorithm that applies to the arbitrary-k-cluster case, and evaluates the performance of the algorithm on a real data set. While the paper attempts to provide theoretical analyses for various scenarios as well as develop an algorithm together with a real-data simulation, it comes with limitations in presentation, clarity of the claimed results, and comparison to related works. Perhaps significant changes, and possibly new developments, are needed for publication. See below for detailed comments. 1. (Repeating the same questions is not allowed): This constraint looks a bit restrictive. Also, it may degrade the performance compared to the no-restriction case, as allowing the same questions is one natural way to combat the noise effect. One proper way to justify the restricted model might be providing an extensive set of real-data simulations with a comparison to an algorithm tailored for the no-restriction case, possibly showing a better performance of the proposed algorithm. If that is not the case, the no-restriction case is more plausible to investigate instead. The authors seem concerned about the independence-vs-dependence issue for multiple answers w.r.t. the same query, which might make the analysis challenging. But this issue can be resolved by focusing on a setting in which the same query is given to distinct annotators, so that the answers from the annotators can be assumed independent. 2. (Theorem 1): The noiseless setting with exact recovery was also studied in a recent work [Ahn-Lee-Suh], which also considers the XOR operation ('same cluster' queries) under a non-adaptive measurement setting. But the result in [Ahn-Lee-Suh] does not coincide with the one claimed in Theorem 1. For instance, when p=1/2 and \Delta is a constant, the query complexity in Theorem 1 reads m=n/log2, while the one in [Ahn-Lee-Suh] reads m=nlogn/\Delta. This needs to be clarified in detail. Moreover, the proof of Theorem 1 is hard to follow. 3. (Theorem 3): The expression of the lower bound on \delta is difficult to interpret. In particular, it is difficult to infer the bound on m in terms of n and \delta, from which one could directly see the performance improvement due to the relaxed constraint, reflected in \delta. 4. (Theorem 4): The same comment given w.r.t. Theorem 3 applies here. In addition, how different is the upper bound from the lower bound in Theorem 3? Can the gap be unbounded? Again the proof is hard to follow. 5. (Figure 2): No comparison is made w.r.t. the same-cluster-query method - there is a comparison only to the same-cluster-query 'lower' bound. I wonder whether there is a crossing point also when compared to the same-cluster-query 'method'. If that is the case, is there any intuition behind it? Actually the paper claims that the proposed approach outperforms the same-cluster-query scheme, which cannot be supported by Figure 2. 6. (Algorithm 1): The proposed algorithm requires prior knowledge of p or of the relative sizes of the clusters (in view of Theorem 6). How can such information be obtained in practice? A proper justification might validate the practicality of the algorithm.
Also, the pseudo-code is hard to follow without relying on the detailed proof of Theorem 5, which is also difficult to grasp. 7. (Figures 3 & 4 & 5): No comparison is made w.r.t. the state of the art, such as the same-cluster-query method, which is crucial to support the claim, made in several places, that the AND-queries algorithm outperforms the same-cluster-query scheme. [Ahn-Lee-Suh] K. Ahn, K. Lee, and C. Suh, "Community recovery in hypergraphs," Proceedings of Allerton Conference on Communication, Control, and Computing, 2016. ------------ (Additional comments) I have read the rebuttal. I found some comments properly addressed. For further clarification: 1. It is still not clear whether repeating the same question is not helpful. Did the Nature paper show that it is not useful? Perhaps this can be clarified in a revision. 2. Re. the comparison to the same-query method: it would be clearer to include the detailed discussion as addressed in the response.
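To make the two query models discussed above concrete, here is a small toy simulation of noisy 'same cluster' (XOR-type) and AND oracles on binary labels; the setup is entirely my own and only illustrates the definitions:

```python
import numpy as np

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=1000)    # hidden binary cluster labels
p = 0.1                                   # oracle error probability

def noisy(ans):
    return int(ans ^ (rng.random() < p))  # flip the answer with probability p

def same_cluster_query(i, j):             # XOR-type 'same cluster' query
    return noisy(int(labels[i] == labels[j]))

def and_query(i, j):                      # pairwise AND query
    return noisy(int(labels[i] & labels[j]))

# An AND answer of 1 is informative about *both* individual labels, whereas
# a same-cluster answer only ties the pair together without revealing either.
print(same_cluster_query(3, 7), and_query(3, 7))
```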
nips_2017_573
PredRNN: Recurrent Neural Networks for Predictive Learning using Spatiotemporal LSTMs The predictive learning of spatiotemporal sequences aims to generate future images by learning from the historical frames, where spatial appearances and temporal variations are two crucial structures. This paper models these structures by presenting a predictive recurrent neural network (PredRNN). This architecture is inspired by the idea that spatiotemporal predictive learning should memorize both spatial appearances and temporal variations in a unified memory pool. Concretely, memory states are no longer constrained inside each LSTM unit. Instead, they are allowed to zigzag in two directions: across stacked RNN layers vertically and through all RNN states horizontally. The core of this network is a new Spatiotemporal LSTM (ST-LSTM) unit that extracts and memorizes spatial and temporal representations simultaneously. PredRNN achieves the state-of-the-art prediction performance on three video prediction datasets and is a more general framework that can be easily extended to other predictive learning tasks by integrating with other architectures.
This paper deals with predictive learning (mainly video prediction), using an RNN-type structure. A new (to my knowledge) variation of the LSTM is introduced, called the ST-LSTM, with recurrent connections not only in the forward time direction. The predictive network is composed of ST-LSTM blocks. Each of these blocks resembles a convolutional LSTM unit, with some differences to include an additional input. This extra input comes from the last layer of the previous time step and enters at the first layer of the current time step; it is then propagated through the layers at the current time step. The resulting ST-LSTM is similar to the combination of two independent LSTM units, interacting through concatenation of the memories (Figure 2, left). I couldn't find an explicit formulation of the loss (the closest I could find is equation 1). I am assuming that it is an MSE loss, but this should be confirmed, since other forms are possible. Results are presented on the Moving MNIST (synthetic) and KTH (natural) datasets, showing both PSNR/SSIM and generations. The authors compare their PredRNN with other baselines, and show the best results on these datasets. On Moving MNIST, there is a comparison of different LSTM schemes, and ST-LSTM shows the best results, which is a good outcome. However, the experimental section could be stronger with more datasets (the two datasets presented have little ambiguity in their futures; it would be interesting to see how the model performs on less constrained datasets such as Sports1M, UCF101 or the Google "Push" dataset, for instance). Although this paper presents a complex (and, as it seems, good) structure for the predictor, the loss function is almost never mentioned. As (among others) [17] and [19] mention, a simple loss such as the MSE cannot work well when the future distribution is multimodal. The paper would also be stronger if compared with other (non-LSTM-based) methods, such as the ones presented in section 1.2. In particular, VPNs and GANs seem like strong competitors. Notes: - Figure 1: Although the figure is clear, I do not quite understand what the orange arrows mean (compared to the black arrows). - Figure 2 (right): As I understand it, each W box should also have an Xt input (not just W1)
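The zigzag routing of the shared memory described above can be sketched schematically; the cell below is a stand-in (real ST-LSTM units use gated convolutions), and only the flow of the spatiotemporal memory M across layers and time is the point:

```python
import numpy as np

L, T, D = 3, 5, 16                        # layers, time steps, toy state size

def st_cell(x, h, c, m):
    """Placeholder cell mixing the input, the per-layer temporal memory c,
    and the shared spatiotemporal memory m (not a real gated ST-LSTM)."""
    h_new = np.tanh(x + h + m)
    return h_new, 0.5 * (c + h_new), 0.5 * (m + h_new)

h = [np.zeros(D) for _ in range(L)]       # hidden states, one per layer
c = [np.zeros(D) for _ in range(L)]       # temporal memories, one per layer
m = np.zeros(D)                           # spatiotemporal memory, shared
for t in range(T):
    x = np.random.randn(D)                # encoding of the frame at time t
    for l in range(L):
        inp = x if l == 0 else h[l - 1]
        h[l], c[l], m = st_cell(inp, h[l], c[l], m)
    # m leaves the top layer at time t and re-enters layer 0 at time t+1:
    # vertical across the stack, horizontal across time.
```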
nips_2017_2824
Plan, Attend, Generate: Planning for Sequence-to-Sequence Models We investigate the integration of a planning mechanism into sequence-to-sequence models using attention. We develop a model which can plan ahead in the future when it computes its alignments between input and output sequences, constructing a matrix of proposed future alignments and a commitment vector that governs whether to follow or recompute the plan. This mechanism is inspired by the recently proposed strategic attentive reader and writer (STRAW) model for Reinforcement Learning. Our proposed model is end-to-end trainable using primarily differentiable operations. We show that it outperforms a strong baseline on character-level translation tasks from WMT'15, the algorithmic task of finding Eulerian circuits of graphs, and question generation from the text. Our analysis demonstrates that the model computes qualitatively intuitive alignments, converges faster than the baselines, and achieves superior performance with fewer parameters.
The paper introduces a planning mechanism into the encoder-decoder model with attention. The key idea is that instead of computing the attention weights on the fly for each decoding time step, an attention template is generated for the next k time steps based on the current decoding hidden state s_{t-1}, and a commitment vector is used to decide whether to follow this alignment plan or to recompute it. Overall, the paper is well written, and the technical discussion is clear. The effect of the planning mechanism is demonstrated by Figure 2, where the planning mechanism tends to generate smoother alignments compared to the vanilla attention approach. In this paper, the attention template (matrix A) is computed as in Eq. (3). It would be interesting to see if some other types of features could be used, and if they could control the language generation and produce more diverse and controllable responses. This might be evaluated on a conversation task; however, I understand that it is out of the scope of this paper. In general, this is an interesting idea to me. A couple of minor questions: -- In Table 1, why is the Baseline + layer norm missing? PAG and rPAG seem to outperform the baseline only when layer norm is used. -- You used k=10 for all the experiments. Is there any motivation behind that? 10 time steps only correspond to 1 to 2 English words for character-level MT. -- Do you have any results with word-level MT? Does the planning approach help? -- Can you say anything about the computational complexity? Does it slow down training a lot?
nips_2017_596
Matching on Balanced Nonlinear Representations for Treatment Effects Estimation Estimating treatment effects from observational data is challenging due to the missing counterfactuals. Matching is an effective strategy to tackle this problem. The widely used matching estimators such as nearest neighbor matching (NNM) pair the treated units with the most similar control units in terms of covariates, and then estimate treatment effects accordingly. However, the existing matching estimators have poor performance when the distributions of control and treatment groups are unbalanced. Moreover, theoretical analysis suggests that the bias of causal effect estimation would increase with the dimension of covariates. In this paper, we aim to address these problems by learning low-dimensional balanced and nonlinear representations (BNR) for observational data. In particular, we convert counterfactual prediction as a classification problem, develop a kernel learning model with domain adaptation constraint, and design a novel matching estimator. The dimension of covariates will be significantly reduced after projecting data to a low-dimensional subspace. Experiments on several synthetic and real-world datasets demonstrate the effectiveness of our approach.
The paper proposes a new matching algorithm for estimating treatment effects in observational studies. The algorithm is appropriate when the number of covariates is large. It finds a projection of the original covariate space into a lower-dimensional one and applies a standard matching approach, such as nearest neighbor matching, in this new space. The proposed low-dimensional nonlinear projection has two objectives: (1) maximize separation between outcomes and (2) maximize the overlap between cases and controls. The new algorithm is evaluated on one synthetic and two real data sets. The results indicate the proposed method is better than several alternatives. Positives: + the proposed objective function is intuitively appealing + the experiments were well designed and the results are positive + the paper is easy to read Negatives: - the proposed ordinal scatter discrepancy criterion is a heuristic that sounds reasonable. However, it would require a deeper experimental or theoretical study to truly understand its advantages and drawbacks. - Any type of causal study on observational data needs to come with caveats and an understanding of potential pitfalls. This paper does not discuss these and presents the algorithm only in a positive light. It would be useful for the paper to at least have a "buyer beware" discussion. - There are several moving parts required to make the proposed algorithm work, such as discretizing the outcomes, selecting the kernel, deciding on the dimensionality, and selecting the hyperparameters. It would be important to see how sensitive the algorithm is to those choices. From the presented results, it is fair to ask whether they might be due to some overfitting. - the paper will need to be carefully proofread -- there are numerous grammar mistakes throughout.
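For context, the nearest neighbor matching estimator that the method builds on can be sketched in a few lines; this is a plain 1-NN toy version of mine, where with BNR the inputs would be the learned low-dimensional representations rather than the raw covariates:

```python
import numpy as np

def att_nnm(X_treat, y_treat, X_ctrl, y_ctrl):
    """1-NN matching estimate of the average treatment effect on the
    treated: pair each treated unit with its closest control unit."""
    effects = []
    for x, y in zip(X_treat, y_treat):
        j = np.argmin(np.linalg.norm(X_ctrl - x, axis=1))
        effects.append(y - y_ctrl[j])
    return float(np.mean(effects))
```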
nips_2017_2825
Task-based End-to-end Model Learning in Stochastic Optimization With the increasing popularity of machine learning techniques, it has become common to see prediction algorithms operating within some larger process. However, the criteria by which we train these algorithms often differ from the ultimate criteria on which we evaluate them. This paper proposes an end-to-end approach for learning probabilistic machine learning models in a manner that directly captures the ultimate task-based objective for which they will be used, within the context of stochastic programming. We present three experimental evaluations of the proposed approach: a classical inventory stock problem, a real-world electrical grid scheduling task, and a real-world energy storage arbitrage task. We show that the proposed approach can outperform both traditional modeling and purely black-box policy optimization approaches in these applications.
--Brief summary of the paper: The paper proposes a learning method for solving two-stage stochastic programming problems that involve minimizing f(x,y,z) w.r.t. z. The main idea of the paper is to learn a predictive model p(y|x;theta) such that the task's objective function f is directly optimized. In contrast, traditional approaches learn p(y|x;theta) to minimize a prediction error without considering f. The main technical challenge in the paper is to solve a sub-optimization problem involving argmin w.r.t. z, and the proposed method can do so in an efficient manner by assuming that the optimization problem is convex in z. The method is experimentally evaluated on two problems and is shown to outperform traditional methods. --Major comments: The idea of adopting end-to-end learning to solve two-stage stochastic programming is interesting. However, I have a major concern with the proposed method, which is the lack of convergence guarantees. Since the optimization problem is assumed to be convex in z, the obtained solution z*(x;theta) is supposed to be the "true" optimum if data is drawn from the true distribution p(x,y). However, a solution obtained using the predictive model p(y|x;theta) is unlikely to be truly optimal unless p(y|x;theta) is the true conditional distribution p(y|x). (This issue is commonly known as model bias in the context of model-based reinforcement learning, which usually involves non-convex objectives.) Since the proposed method does not theoretically guarantee that p(y|x;theta) converges to p(y|x), even when the model hypothesis is correct, it seems likely that even for a convex optimization problem the method may only find a sub-optimal solution. For this reason, I think having convergence guarantees or error bounds, either for the predictive model or for the obtained solution itself, is very important to theoretically justify the method, and they would be a significant contribution to the paper. --Questions: 1) It is not clear why Algorithm 1 requires mini-batch training, since Line 7 of the algorithm only checks the constraint for a single sample. 2) In the first experiment, why does the performance of the end-to-end policy optimization method depend on the model hypothesis when it does not rely on a predictive model? --Minor suggestions: 1) In line 154 the paper argues that the model-free approach requires a rich policy class and is data inefficient. However, the model-based approach also requires a rich model class. Moreover, the model-based approach can suffer from model bias while the model-free approach cannot. 2) The applicability of the proposed method is quite limited. As mentioned in the paper, solving a sub-optimization problem with argmin is not trivial, and the convexity assumption helps in this regard. However, practical decision-making problems may involve non-convex or unknown objective functions. A variant of the proposed method that is applicable to these tasks would make the method more appealing. 3) The last term of Eq. (4) should have an expectation over the density of x. --Comments after author's response: I feel more positive about the paper after reading the author's response. I now think that the proposed method is an important contribution to the field and I will increase my score. However, I am still not convinced that, without empirical evidence, the proposed method will be useful outside domains with convex objectives.
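A concrete toy instance of the end-to-end idea is the inventory (newsvendor) problem, where the inner argmin has a closed form, the critical-fractile quantile, so the task loss can be backpropagated through z*(x;theta) directly. The sketch below is my own minimal version, not the paper's implementation:

```python
import torch

c_u, c_o = 4.0, 1.0                  # underage / overage costs
q = c_u / (c_u + c_o)                # critical fractile (optimal quantile)

model = torch.nn.Linear(5, 2)        # predicts (mean, log-std) of demand
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
std_normal = torch.distributions.Normal(0.0, 1.0)

x = torch.randn(256, 5)
y = 10 + x[:, :1] + 0.5 * torch.randn(256, 1)   # realized demand

for _ in range(200):
    mu, log_std = model(x).chunk(2, dim=1)
    z = mu + log_std.exp() * std_normal.icdf(torch.tensor(q))  # z*(x; theta)
    # Task loss, not prediction loss: pay for unmet demand and for leftovers.
    task_cost = (c_u * torch.relu(y - z) + c_o * torch.relu(z - y)).mean()
    opt.zero_grad()
    task_cost.backward()
    opt.step()
```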
nips_2017_3328
Approximation Algorithms for ℓ0-Low Rank Approximation We study the ℓ0-Low Rank Approximation Problem, where the goal is, given an m × n matrix A, to output a rank-k matrix A' for which ||A − A'||_0 is minimized. Here, for a matrix B, ||B||_0 denotes the number of its non-zero entries. This NP-hard variant of low rank approximation is natural for problems with no underlying metric, and its goal is to minimize the number of disagreeing data positions. We provide approximation algorithms which significantly improve the running time and approximation factor of previous work. For k > 1, we show how to find, in poly(mn) time for every k, a rank O(k log(n/k)) matrix A' for which ||A − A'||_0 ≤ O(k^2 log(n/k)) OPT, where OPT denotes the cost of the best rank-k approximation. To the best of our knowledge, this is the first algorithm with provable guarantees for the ℓ0-Low Rank Approximation Problem for k > 1, even for bicriteria algorithms. For the well-studied case when k = 1, we give a (2+ε)-approximation in sublinear time, which is impossible for other variants of low rank approximation such as for the Frobenius norm. We strengthen this for the well-studied case of binary matrices to obtain a (1 + O(ψ))-approximation in sublinear time, where ψ = OPT / ||A||_0. For small ψ, our approximation factor is 1 + o(1).
This paper considers the problem of approximating matrices when no clear metric is present on the data and the l_0 norm is used (the number of non-zero elements in the difference between the original data and the approximation). A low-rank solution is proposed. The paper doesn't seem to give a practical case where this is useful. In fact, "low rank" implies some linearity and hence implicitly a metric, which may not make a lot of sense if we can't interpret the data with a metric to start with. E.g., for k=1 one can easily make the first row and then one element in each other row of A-A' equal to zero (so 2n-1 elements), but then predicting linearly the values of the other elements seems arbitrary. Also, if the elements in A are continuous variables drawn from a distribution with no linear properties, the probability that for any low-rank approximation A' more than this minimal number of elements of A-A' can be made equal to *exactly* zero (the requirement of the l_0 norm) is equal to 0. In that sense, if the data is drawn randomly from a continuous distribution, with very high probability OPT^{(k)} is something like (n-k)(m-k). If k is "low", then theorems such as Theorem 2 talking about multiples of OPT^{(k)} are rather trivial for "random" data (because O(k^2) OPT^{(k)} > 2 OPT^{(k)} > mn). So maybe the problem is to "discover" that, by some external cause, the data is already very close to low rank, in the sense that a lot of elements happen to match exactly the linear low-rank relation to other elements. In, among other places, line 272, the authors too indicate that the work is most interesting if A is already close to low rank. The mathematical elaboration and randomized algorithms are quite interesting from a theory point of view. Moving some details to the supplementary material, and making a consistent story connecting the presented algorithms to clear learning problems, could make this a nice paper. Some details: * Line 48: supplementary -> supplementary material * Line 244: having "w.h.p." twice in the same statement is not really necessary (but could be a compact way to present the result).
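The measure-zero observation above is easy to check numerically; a toy snippet of mine (exact zeros are tested up to floating-point tolerance):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 60))

# Best rank-1 approximation in Frobenius norm via the top singular pair.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
A1 = s[0] * np.outer(U[:, 0], Vt[0])

# Essentially no entry of A - A1 is exactly zero for generic continuous
# data, so the l_0 cost of a low-rank fit stays close to the trivial m*n.
print(int(np.sum(np.abs(A - A1) < 1e-12)), "of", A.size, "entries are zero")
```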
nips_2017_1048
Positive-Unlabeled Learning with Non-Negative Risk Estimator From only positive (P) and unlabeled (U) data, a binary classifier could be trained with PU learning, in which the state of the art is unbiased PU learning. However, if its model is very flexible, empirical risks on training data will go negative, and we will suffer from serious overfitting. In this paper, we propose a non-negative risk estimator for PU learning: when getting minimized, it is more robust against overfitting, and thus we are able to use very flexible models (such as deep neural networks) given limited P data. Moreover, we analyze the bias, consistency, and mean-squared-error reduction of the proposed risk estimator, and bound the estimation error of the resulting empirical risk minimizer. Experiments demonstrate that our risk estimator fixes the overfitting problem of its unbiased counterparts.
Summary The paper builds on the literature of Positive and Unlabeled (PU) learning and formulates a new risk objective that is bounded from below. The need is mainly motivated by the fact that the state-of-the-art risk formulation for PU can be unbounded from below. This is a serious issue when dealing with models with high capacity (such as deep neural networks), because the model can overfit. A new lower-bounded formulation is provided. Despite not being unbiased, the new risk is consistent and its bias decreases exponentially with the sample size. Generalization bounds are also proven. Experiments confirm that the state of the art cannot be used with high-capacity models, while the new risk is robust to overfitting and can be applied for classification with simple deep neural networks. Comments The paper is very well written, easy to follow, and tells a coherent story. The theoretical results are flawless from my point of view; I have checked the proofs in the supplementary material. The theory is also supported by many comments that help build intuition for the arguments behind the proofs. This is very good. I am only confused by the meaning of these two sentences in Section 4: - Theorem 3 is not a necessary condition even for meeting (3): what is the condition the authors refer to? - In the proof, the reduction in MSE was expressed as a Lebesgue-Stieltjes integral [29]: why is this relevant? The authors successfully show that optimisation of this new risk formulation may be effective in practice, on deep neural networks, with only a few (positively) labeled data points. Given this, it would help to see the authors' point of view on comparing PU with current trends in deep learning, such as few-shot learning. The dataset `epsilon` is never mentioned before talking about the model trained on it; I find that confusing for the reader. The experimental results address the main point of the paper well (i.e., lower-boundedness/overfitting), but could be expanded. Fig. 1 shows two graphs with very similar information: PU overfits when training a deep net. A curious reader who does not know about [16] could wonder whether the previous approach actually works at all. Why not show the same graph (for one of the two losses) in a scenario where [16] is effective? For example, given the thesis of the paper, [16] should work fine if we train a linear model on MNIST, right? Additionally, it would be interesting to see the effect of estimating the class prior on performance. The experiments in Fig. 2 are designed to show that NNPU can be even better than PN. But what happens if the class priors are estimated by a state-of-the-art method? Since the paper mostly focuses on the theory side, I think this lack is not a major issue for acceptance, although it will be for any realistic implementation of the proposed method. Minor 53: Menon et al., ICML 2015 is another valid method for estimating class priors besides [22, 23, 24] 56: Curiosity on notation: why not denote R_n(g) as R_n^-(g)? That would be symmetric with R_n^+ (same for the empirical risks) 237: the performance of [35] on CIFAR10 has been surpassed, see https://www.eff.org/ai/metrics or http://rodrigob.github.io/are_we_there_yet/build/classification_datasets_results.html#43494641522d3130 I think it might be said that [35] is "among the best performing architectures for classifying on CIFAR10"
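The non-negative correction discussed above amounts to a one-line change to the unbiased risk estimator; a minimal sketch following my reading of the formulation (sigmoid loss, class prior pi assumed known):

```python
import torch

def sigmoid_loss(z):
    return torch.sigmoid(-z)

def nnpu_risk(g_p, g_u, pi):
    """g_p, g_u: classifier outputs on positive and unlabeled samples."""
    r_p_pos = sigmoid_loss(g_p).mean()    # positives treated as label +1
    r_p_neg = sigmoid_loss(-g_p).mean()   # positives treated as label -1
    r_u_neg = sigmoid_loss(-g_u).mean()   # unlabeled treated as label -1
    neg_part = r_u_neg - pi * r_p_neg     # unbiased estimate of the negative risk
    # The unbiased PU risk is pi * r_p_pos + neg_part, which can go negative
    # with flexible models; clamping at zero gives the non-negative estimator.
    return pi * r_p_pos + torch.clamp(neg_part, min=0.0)
```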
nips_2017_2748
Identification of Gaussian Process State Space Models The Gaussian process state space model (GPSSM) is a non-linear dynamical system, where unknown transition and/or measurement mappings are described by GPs. Most research in GPSSMs has focussed on the state estimation problem, i.e., computing a posterior of the latent state given the model. However, the key challenge in GPSSMs has not been satisfactorily addressed yet: system identification, i.e., learning the model. To address this challenge, we impose a structured Gaussian variational posterior distribution over the latent states, which is parameterised by a recognition model in the form of a bi-directional recurrent neural network. Inference with this structure allows us to recover a posterior smoothed over sequences of data. We provide a practical algorithm for efficiently computing a lower bound on the marginal likelihood using the reparameterisation trick. This further allows for the use of arbitrary kernels within the GPSSM. We demonstrate that the learnt GPSSM can efficiently generate plausible future trajectories of the identified system after only observing a small number of episodes from the true system.
The authors derive a variational objective for inference and hyperparameter learning in a GPSSM. The authors apply a mean-field variational approximation to the distribution over inducing points and a Gaussian approximation with Markov structure to the distribution over the sequence of latent states. The parameters of the latter depend on a bi-RNN. The variational bound is optimised using doubly stochastic gradient optimisation. The authors apply their algorithm to three simulated data examples, showing that particular applications may require the ability to flexibly choose kernel functions and that the algorithm recovers meaningful structure in the latent states. Overall, the paper is well written and provides an interesting combination of sparse variational GP approximations and methods for amortised inference in state space models. However, it is difficult to judge how much the proposed algorithm adds compared to existing sparse variational GPSSM models, since the authors did not make any comparisons or motivate the algorithm with an application for which existing approaches fail. From the paper alone, it is not clear whether related approaches could perform similarly on the given examples, or what extensions of the model make it possible to go beyond what previous approaches can do. Comparisons to related models and a more quantitative analysis of the given algorithm would be essential for this. (The authors include RMSE comparisons to a Kalman filter and an autoregressive GP in the rebuttal.) Even though the motivation of the paper focuses on system identification, it would be interesting to see if the given approach is successful in correctly learning the model parameters (i.e., the linear mapping of observations) and hyperparameters. Currently all results refer to the latent states or transition function. This would be interesting in order to evaluate biases that may be introduced via the choice of approximations, and could also help with highlighting the need for the approximate inference scheme presented here (e.g., if a naive Gaussian parameterisation without the bi-RNN introduces more biases in learning). Lastly, the authors do not apply their algorithm to any real data examples, which again would be useful for motivating the proposed algorithm.
nips_2017_1486
Accuracy First: Selecting a Differential Privacy Level for Accuracy-Constrained ERM Traditional approaches to differential privacy assume a fixed privacy requirement ε for a computation, and attempt to maximize the accuracy of the computation subject to the privacy constraint. As differential privacy is increasingly deployed in practical settings, it may often be that there is instead a fixed accuracy requirement for a given computation and the data analyst would like to maximize the privacy of the computation subject to the accuracy constraint. This raises the question of how to find and run a maximally private empirical risk minimizer subject to a given accuracy requirement. We propose a general "noise reduction" framework that can apply to a variety of private empirical risk minimization (ERM) algorithms, using them to "search" the space of privacy levels to find the empirically strongest one that meets the accuracy constraint, and incurring only logarithmic overhead in the number of privacy levels searched. The privacy analysis of our algorithm leads naturally to a version of differential privacy where the privacy parameters are dependent on the data, which we term ex-post privacy, and which is related to the recently introduced notion of privacy odometers. We also give an ex-post privacy analysis of the classical AboveThreshold privacy tool, modifying it to allow for queries chosen depending on the database. Finally, we apply our approach to two common objective functions, regularized linear and logistic regression, and empirically compare our noise reduction methods to (i) inverting the theoretical utility guarantees of standard private ERM algorithms and (ii) a stronger, empirical baseline based on binary search.
Summary: This paper considers a different perspective on differentially private algorithms, where the goal is to give the strongest privacy guarantee possible subject to a constraint on the acceptable accuracy. This is in contrast to the more common setting where we wish to achieve the highest possible accuracy for a given minimum level of privacy. The paper introduces the idea of ex-post differential privacy, which provides similar guarantees to differential privacy but for which the bound on privacy loss is allowed to depend on the output of the algorithm. The authors then present a general method for private accuracy-constrained empirical risk minimization which combines ideas similar to the noise reduction techniques of Koufogiannis et al. and the AboveThreshold algorithm. At a high level, their algorithm computes ERM models for increasing privacy parameters (using the noise reduction techniques, rather than independent instantiations of a private learning algorithm) and then uses the AboveThreshold algorithm to find the strongest privacy parameter satisfying the accuracy constraint. The noise reduction techniques allow them to pay only the privacy cost for the least private model they release (rather than the usual sum of privacy costs for the first k models, which would be required under the usual additive composition laws), and the modified AboveThreshold algorithm has privacy cost scaling only logarithmically with the number of models tested. They apply this algorithm to logistic and ridge regression and demonstrate that it is able to provide significantly better ex-post privacy guarantees than two natural baselines: inverting the privacy guarantees to determine the minimum value of epsilon that guarantees sufficiently high accuracy, and a doubling trick that repeatedly doubles the privacy parameter until one is found that satisfies the accuracy constraint. Comments: Providing algorithms with strong utility guarantees that also give privacy when possible is an important task, and such algorithms may be more likely to be adopted by companies/organizations that would like to preserve privacy but are unwilling to compromise on utility. The paper is clearly written and each step is well motivated, including a discussion of why simpler baselines either do not work or would give worse guarantees.
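The stronger "doubling" baseline mentioned above can be sketched in a few lines; train_private_model and meets_accuracy are placeholders of mine for a private ERM routine and the accuracy check, and the ex-post cost accounting is the point:

```python
def doubling_search(eps0, train_private_model, meets_accuracy, max_rounds=20):
    """Double epsilon until the accuracy constraint is met. Each round is an
    independent private run, so the ex-post privacy loss is the *sum* of all
    epsilons tried, roughly twice the final one."""
    eps, spent = eps0, 0.0
    for _ in range(max_rounds):
        model = train_private_model(eps)  # fresh private training at this epsilon
        spent += eps
        if meets_accuracy(model):
            return model, spent           # spent = ex-post privacy loss
        eps *= 2
    raise RuntimeError("accuracy constraint not met within the round budget")
```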
nips_2017_1327
Expectation Propagation for t-Exponential Family Using q-Algebra Exponential family distributions are highly useful in machine learning since their calculation can be performed efficiently through natural parameters. The exponential family has recently been extended to the t-exponential family, which contains Student-t distributions as family members and thus allows us to handle noisy data well. However, since the t-exponential family is defined by the deformed exponential, an efficient learning algorithm for the t-exponential family such as expectation propagation (EP) cannot be derived in the same way as the ordinary exponential family. In this paper, we borrow the mathematical tools of q-algebra from statistical physics and show that the pseudo additivity of distributions allows us to perform calculation of t-exponential family distributions through natural parameters. We then develop an expectation propagation (EP) algorithm for the t-exponential family, which provides a deterministic approximation to the posterior or predictive distribution with simple moment matching. We finally apply the proposed EP algorithm to the Bayes point machine and Student-t process classification, and demonstrate their performance numerically.
The authors propose an expectation propagation algorithm that can work with distributions in the t-exponential family, a generalization of the exponential family that contains the t-distribution as a particular case. The proposed approach is based on using q-algebra operations to work with the pseudo additivity of t-exponential distributions. The gains of the proposed EP algorithm with respect to assumed density filtering are illustrated in experiments with a Bayes point machine. The authors also illustrate the proposed EP algorithm in experiments with a t-process for classification in the presence of outliers. Quality The proposed method seems technically sound, although I did not check the derivations in detail. The experiments illustrate that the proposed method works and is useful. Clarity The paper is clearly written and easy to follow and understand. The only thing I miss is an explanation of how the authors compute the integrals in equation (33) in the experiments. I assume that they have to integrate likelihood factors with respect to Student t distributions. How is this done in practice? Originality The paper is original. To the best of my knowledge, this is the first generalization of EP to work with t-exponential families. Significance The proposed method seems to be highly significant, extending the applicability of EP with analytic operations to a wider family of distributions, besides the exponential one.
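For readers unfamiliar with the q-algebra referred to above, the standard Tsallis-statistics definitions are as follows (a reference sketch; the paper works with the t-exponential, so its notation may differ in details):

    \exp_q(x) = \left[ 1 + (1-q)\,x \right]_+^{1/(1-q)}, \qquad \lim_{q \to 1} \exp_q(x) = \exp(x),
    x \otimes_q y = \left[ x^{1-q} + y^{1-q} - 1 \right]_+^{1/(1-q)},
    \exp_q(x) \otimes_q \exp_q(y) = \exp_q(x + y).

The last identity is the pseudo additivity that lets products of deformed-exponential factors be combined through their natural parameters, which is exactly what EP's moment matching requires.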
nips_2017_2426
Non-Stationary Spectral Kernels We propose non-stationary spectral kernels for Gaussian process regression by modelling the spectral density of a non-stationary kernel function as a mixture of input-dependent Gaussian process frequency density surfaces. We solve the generalised Fourier transform with such a model, and present a family of non-stationary and non-monotonic kernels that can learn input-dependent and potentially long-range, non-monotonic covariances between inputs. We derive efficient inference using model whitening and marginalized posterior, and show with case studies that these kernels are necessary when modelling even rather simple time series, image or geospatial data with non-stationary characteristics.
--- Updated Comments Since Rebuttal --- This seems like very nice work and I hope you do keep exploring in your experiments. I think your approach could be used to solve some quite exciting problems. Some of the experiments are conflating aspects of the model (non-stationarity in particular) with what are likely engineering and implementation details, training differences (e.g. MAP), and initialization. The treadplate pattern, for instance, is clearly mostly stationary. Some of these experiments should be modified or replaced with experiments more enlightening about your model, and less misleading about the alternatives. It would also be valuable, in other experiments, to look at the learned GP functions, to see what meaningful non-stationarity is being discovered. Your paper is also very closely related to "Fast Kernel Learning for Multidimensional Pattern Extrapolation" (Wilson et al., NIPS 2014), which has all the same real experiments and the same approach to scalability, developed GPs specifically for this type of extrapolation, and generally laid the foundation for a lot of this work. This paper should be properly discussed in the text, in addition to [28] and [11], beyond swapping references. Note also that the computational complexity results and the extension to incomplete grids are introduced in this same Wilson et al. (2014) paper and not Flaxman et al. (2015), which simply makes use of these results. Wilson & Nickisch (2015) generalize these methods from incomplete grids to arbitrary inputs. Note also that a related proposal for non-stationarity is in Wilson (2014), PhD thesis, sections 4.4.2 - 4.5, involving input-dependent mixtures of spectral mixture kernels. Nonetheless, this is overall very nice work and I look forward to seeing continued experimentation. I think it would make a good addition to NIPS. I am happy to trust that the questions raised will be properly addressed in the final version. --- The authors propose non-stationary spectral mixture kernels, by (i) modelling the spectral density in the generalized Bochner's theorem with a Gaussian mixture, (ii) replacing the resulting RBF component with a Gibbs kernel, and (iii) modelling the weights, length-scales and frequencies with Gaussian processes. Overall, the paper is nice. It is quite clearly written, well motivated, and technically solid. While interesting, the weakest part of the paper is the experiments and comparisons for texture extrapolation. A stronger experimental validation, clearly showing the benefit of non-stationarity, would greatly improve the paper; hopefully such changes could appear in a camera-ready version. The paper should also make the explicit connections to references [28] and [11] clear earlier in the paper, and should cite and discuss "Fast Kernel Learning for Multidimensional Pattern Extrapolation", which has all of the same real experiments, and the same approach to scalability. Detailed comments for improvement: - I didn't find there was much of a visual improvement between the SM kernel and the GSM kernel results on the wood texture synthesis, and similar performance from the SM kernel as the GSM kernel could be achieved for extrapolating the treadplate pattern, with a better initialization. It looks like the SM result on treadplate got trapped in a bad local optimum, which could happen as easily with the GSM kernel. E.g., in this particular treadplate example, the results seem more likely to be a fluke of initialization than the result of profound methodological differences.
The comparison on treadplate should be re-evaluated. The paper should also look at some harder non-stationary inpainting problems -- it looks like there is promise to solve some really challenging problems here. Also the advantage of a non-stationary approach for land surface temperature forecasting is not made too clear here -- how does the spectral mixture kernel compare? - While the methodology can be developed for non-grid data, it appears the implementation developed for this paper has a grid restriction, based on the descriptions, experiments, and the imputation used in the land-surface temperature data. Incidentally, from an implementation perspective it is not hard to alleviate the need for imputation with virtual grid points. The grid restriction of the approach implemented here should be made clear in the text of the paper. - The logit transform for the frequency functions is very nice, but it should be clarified that the Nyquist approach here only applies to regularly sampled data. It would be easy to generalize this approach to non-grid data. It would strengthen the method overall to generalize the implementation to non-grid data and to consider a wider variety of problems. Then the rather general benefits of the approach could be realized much more easily, and the impact would be greater. - I suspect it is quite hard to learn Gaussian processes for the length-scale functions, and also somewhat for the frequency functions. Do you have any comments on this? It would be enlightening to visualize the Gaussian process posteriors on the mixture components so we can see what is learned for each problem and how the solutions differ from stationarity. One of the really promising aspects of your formulation is that the non-stationarity can be made very interpretable through these functions. - Why not perform inference over the Gaussian processes? - There should be more discussion of reference [11] on "Generalized spectral kernels", earlier in the paper; Kom Samo and Roberts also developed a non-stationary extension to the spectral mixture kernel leveraging the generalized Bochner's theorem. The difference here appears to be in using Gaussian processes for mixture components, the Gibbs kernel for the RBF part, and Kronecker for scalability. "Fast kernel learning for multidimensional pattern extrapolation" [*] should also be referenced and discussed, since it proposes a multidimensional spectral mixture product kernel in combination with Kronecker inference, and develops the approach for texture extrapolation and land surface temperature forecasting, using the same datasets in this paper. Generally, the connection to [28] should also be made clear much earlier in the paper, and section 4 should appear earlier. Moreover, the results in [*] "Fast Kernel Learning for Multidimensional Pattern Extrapolation", which have exactly the same real experiments, indicate that some of the results presented in your paper are from a bad initialization rather than a key methodological difference. This needs to be remedied and there need to be more new experiments. - Minor: "the non-masked regions in Figure 3a and 3f" in 5.2 is meant to refer to Figure 4.
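To make the point about the logit/Nyquist device concrete, a minimal sketch (assuming a latent GP draw g_values evaluated at the inputs and a regular grid with spacing delta; this mirrors the idea rather than the paper's exact parameterization):

    import numpy as np

    def bounded_frequency(g_values, delta):
        # Squash an unconstrained latent function into (0, f_nyq), where
        # f_nyq = 1/(2*delta) is the Nyquist frequency of a regular grid.
        # For irregularly sampled inputs no single Nyquist bound exists,
        # which is exactly the limitation noted above.
        f_nyq = 1.0 / (2.0 * delta)
        return f_nyq / (1.0 + np.exp(-g_values))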
nips_2017_994
Nearest-Neighbor Sample Compression: Efficiency, Consistency, Infinite Dimensions We examine the Bayes-consistency of a recently proposed 1-nearest-neighbor-based multiclass learning algorithm. This algorithm is derived from sample compression bounds and enjoys the statistical advantages of tight, fully empirical generalization bounds, as well as the algorithmic advantages of a faster runtime and memory savings. We prove that this algorithm is strongly Bayes-consistent in metric spaces with finite doubling dimension - the first consistency result for an efficient nearest-neighbor sample compression scheme. Rather surprisingly, we discover that this algorithm continues to be Bayes-consistent even in a certain infinite-dimensional setting, in which the basic measure-theoretic conditions on which classic consistency proofs hinge are violated. This is all the more surprising, since it is known that k-NN is not Bayes-consistent in this setting. We pose several challenging open problems for future research.
This work develops a compression-based algorithm for multiclass learning; the authors claim the method is both efficient and strongly Bayes-consistent in spaces of finite doubling dimension. They also provide one example of a space of infinite doubling dimension with a particular measure for which their method is weakly Bayes consistent, whereas the same construction leads to inconsistency of k-NN rules. Overall, I think this paper is technically strong and seems to develop interesting results, but I have a few concerns about the significance of this paper which I will discuss below. If the authors can address these concerns, I would support this paper for acceptance. Detailed comments: I did not check the proofs in the appendix in detail but the main ideas appear to be correct. My main comments are on the efficiency of the method (which I expect the authors can easily address) and on the significance of the results in infinite dimensions. (I) I have a question about the efficiency of the algorithm: It is claimed that the complexity of Algorithm 1 is $O(n^4)$. I can see that the for loop traverses a set $\Gamma$ of cardinality at most $O(n^2)$, and in line 5 of the algorithm we pay at most $O(n)$ (which globally yields $O(n^3)$). What is the cost of constructing a $\gamma$-net? Is this $O(n^2)$, and if so, what is the algorithm for doing this? (A standard $O(n^2)$ greedy construction is sketched below.) I do not yet believe the efficiency claim by the authors until this part is explained. It is absolutely necessary that Algorithm 1 is efficient, as otherwise one can just turn to standard $k$-NN methods (not based on compression) in order to obtain the same results, although through different methods. (II) Infinite dimensions: It is not clear to me why the results/discussion related to infinite dimensions are significant. If I understand correctly, the results of Cérou and Guyader state that, for a particular (realizable) distribution, no $k$-NN learner (where $k$ grows with $n$ in the appropriate way) is Bayes-consistent. Thus, for a particular distribution, $k$-NN fails to be Bayes-consistent. But perhaps it succeeds for other realizable distributions satisfying equation (3). I can see that in the line after Theorem 3, the authors *seem to* claim that in fact the paper of Cérou and Guyader implies that *any* (realizable) distribution satisfying equation (3) will cause $k$-NN to be Bayes inconsistent; am I correct in this interpretation of Line 221? If so, is this easy to see from Cérou and Guyader's paper? On the other hand, the authors show that for a particular metric space of infinite doubling dimension and for a particular realizable distribution, KSU is Bayes-consistent. This is interesting provided that $k$-NN fails to be Bayes consistent for all constructions satisfying equation (3), but it is not interesting if the point is simply that, for Preiss's construction, $k$-NN is not Bayes consistent while KSU is. It is therefore critical that the authors clarify the extent to which $k$-NN fails to be Bayes consistent (is it for all constructions satisfying equation (3) or just for Preiss's construction?). Also, the authors should mention Biau, Bunea, Wegkamp (2005), ``On the kernel rule for function classification'', because the paper of Cérou and Guyader (near the top of page 13 therein, or page 352 from internal numbering) claims that Biau et al. also show a nearest neighbor-type method that is weakly Bayes consistent in a space of infinite dimension.
Mentioning this result (slightly) weakens the force of the authors' results in infinite dimensions. Minor comments: Lines 78-80: How do the authors know about the key motivation behind [15, 23, 16]? Line 277: I think $n(\tilde{\gamma})$ should be replaced by $n(\gamma)$. In the 3-point proof sketch of Theorem 2, from lines 262-281: The paper would benefit from a more explicit/longer explanation of part 3. UPDATE AFTER REBUTTAL I am satisfied with the authors' responses and recommend acceptance. Also, regarding the confusion about the citations, I'm happy the authors were able to straighten out which paper was actually intended, while also pointing out the disadvantages of the approach from the older paper.
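Regarding the cost of constructing a $\gamma$-net raised in point (I): a standard greedy construction needs at most $O(n^2)$ distance evaluations, sketched below; whether this is the authors' actual procedure is precisely what they should clarify.

    def greedy_gamma_net(points, gamma, dist):
        # Keep a point only if it is farther than gamma from every point
        # already kept. Kept points are pairwise more than gamma apart and
        # every input point lies within gamma of some kept point.
        # Cost: O(n * |net|) <= O(n^2) calls to dist.
        net = []
        for p in points:
            if all(dist(p, q) > gamma for q in net):
                net.append(p)
        return net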
nips_2017_2255
Policy Gradient With Value Function Approximation For Collective Multiagent Planning Decentralized (PO)MDPs provide an expressive framework for sequential decision making in a multiagent system. Given their computational complexity, recent research has focused on tractable yet practical subclasses of Dec-POMDPs. We address such a subclass called CDec-POMDP where the collective behavior of a population of agents affects the joint-reward and environment dynamics. Our main contribution is an actor-critic (AC) reinforcement learning method for optimizing CDec-POMDP policies. Vanilla AC has slow convergence for larger problems. To address this, we show how a particular decomposition of the approximate action-value function over agents leads to effective updates, and also derive a new way to train the critic based on local reward signals. Comparisons on a synthetic benchmark and a real world taxi fleet optimization problem show that our new AC approach provides better quality solutions than previous best approaches.
The paper presents a policy gradient algorithm for a multiagent cooperative problem, modeled in a formalism (CDEC-POMDP) whose dynamics, like congestion games, depend on groups of agents rather than individuals. This paper follows the theme of several similar advances in this field of complex multiagent planning, using factored models to propose practical/tractable approximations. The novelty here is the use of parameterized policies and training algorithms inspired by reinforcement learning (policy gradients). The work is well-motivated, relevant, and particularly well-presented. The theoretical results are new and important. The experiments (on a large taxi domain) are convincing. Overall, the paper advances the state-of-the-art. I have several questions: Q1. What is meant by a "DBN" (... dynamic Bayesian network?) and how does one sample from it? Why is this model shown without even a high-level description? How is it relevant to the rest of the paper? Q2. What is the architecture of the neural net? What is the representation of the input to the network? How large is it (number of parameters, number of layers, units per layer, non-linearities, etc.)? Are there two entirely separate networks, one for the critic and one for the policy, or is it a multiheaded network? What optimizer is used, what learning rates, etc.? Without these details, these results would not be reproducible. Q3. What is the main motivation for using policy gradient? At first, I thought it would reduce memory as opposed to sample-based tabular value methods. But, it seems like the count tables may dominate the space complexity. Is that true? If not, it would be great to show the reduction in memory by using this method. If so, is the answer then that the benefit of parameterized policy + policy gradient is due to generalization across similar states? Q4. The results show that the algorithm produces better results than the competitor, but it's possible that this setup is particularly sensitive to parameter values. Even in the authors' claims throughout the paper, it is implied that the algorithm could be sensitive to the particular choices of training setup. Is the algorithm robust to slight perturbations in values? Comments to improve the paper (does not affect decision): C1. Give some background, even if just a few sentences, on the relationship between policy gradient and actor-critic. This seemed to be assumed, but some readers may not be familiar with this and may be confused without any introduction. (Same goes for DBN, or just drop the term if it's unnecessary.) C2. Somewhere it should be stated that proofs are in the appendix. C3. The min_w should not be in expression (11), since the authors just call it a 'loss function', but this is the entire optimization problem. C4. The notation using a superscript xi to represent samples is non-standard and slightly awkward. This xi has no meaning as far as I can tell, so why not use something more familiar like \hat{R}, \hat{\mathbf{n}}, or \tilde{...}?
nips_2017_2032
Off-policy evaluation for slate recommendation This paper studies the evaluation of policies that recommend an ordered set of items (e.g., a ranking) based on some context - a common scenario in web search, ads, and recommendation. We build on techniques from combinatorial bandits to introduce a new practical estimator that uses logged data to estimate a policy's performance. A thorough empirical evaluation on real-world data reveals that our estimator is accurate in a variety of settings, including as a subroutine in a learning-to-rank task, where it achieves competitive performance. We derive conditions under which our estimator is unbiased - these conditions are weaker than prior heuristics for slate evaluation - and experimentally demonstrate a smaller bias than parametric approaches, even when these conditions are violated. Finally, our theory and experiments also show exponential savings in the amount of required data compared with general unbiased estimators.
This paper addresses the problem of off-policy evaluation in combinatorial contextual bandits. A common scenario in many real world contextual bandit problems is when there is a significant amount of logged data (for example logged search queries, results and clicks) but training or evaluating a model online in real time through e.g. an A/B test is expensive or infeasible. Ideally one would evaluate/train the model off-policy (i.e. on the logged data), but this incurs a significant bias since the data was not generated by the policy being evaluated. The authors show in their experiments that this bias significantly impacts the quality of models that are directly trained on the logs. They devise a method called the 'pseudo-inverse estimator' to remove the bias and provide theoretical results showing that the method is unbiased under some strong assumptions (although in experiments it appears that these assumptions can be violated). They demonstrate empirically in carefully crafted synthetic and real-world experiments that the method outperforms baseline approaches. This paper is very well written and, although dense, is quite clear. The careful explanation of notation is appreciated. In the figures in the appendices, it looks as though the direct method (DM-tree) outperforms or matches the proposed strategy on a variety of the synthetic problems (particularly on ERR vs NDCG). The authors justify this by noting that ERR violates the assumption of linearity which seems like a reasonable justification (and it's appreciated that both metrics are included). However, then in Section 4.3 the experiment shows PI doing slightly worse on NDCG and better on ERR. This seems inconsistent with the linearity hypothesis. The authors note that the baseline could be stronger for ERR, which suggests that perhaps the proposed method wouldn't outperform SUP on either metric? It seems curious that although wIPS is provably unbiased under weaker assumptions than PI, it performs so much more poorly in experiments. Is this because the variance of the estimator is so high due to the cardinality of S(x)? Overall, this is a very neat paper proposing a tidy solution with compelling theoretical and empirical results and well thought out experiments. The empirical results, however, don't suggest to me that this method is absolutely better than the fairly naive (admittedly unsatisfying) approach of directly regressing the logs with e.g. a regression tree. There's a typo on line 116: "in on". The assumption on rewards on line 113 seems like it could be a strong one. This line is saying essentially that all the rewards given actions in the slate are independent of all the other actions. In practice, this must almost certainly be violated in most cases (e.g. people's clicks will vary based on context). Although I suppose in some cases, e.g. in search where the user is looking for a specific single result, this may be fine. The notation in the paragraph starting on line 117 is a little confusing to me. The text refers to vectors, but they're indexed into with pairs (given the notation in the rest of the paper, I assume that they are vectors and the pairs map to a vector index?).
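For concreteness, the two importance-sampling baselines discussed here look roughly as follows (a sketch assuming whole-slate propensities are logged; the pseudo-inverse estimator itself additionally requires the slot-level decomposition and is omitted):

    import numpy as np

    def ips(rewards, target_probs, logging_probs):
        # Unbiased, but the weights are ratios of whole-slate probabilities,
        # so the variance grows with the (combinatorially large) slate space.
        w = target_probs / logging_probs
        return float(np.mean(rewards * w))

    def weighted_ips(rewards, target_probs, logging_probs):
        # Self-normalized (wIPS) variant: biased, but lower variance.
        w = target_probs / logging_probs
        return float(np.sum(rewards * w) / np.sum(w))

This is consistent with the variance explanation suggested above: the joint-slate weight w is what blows up with the cardinality of S(x).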
nips_2017_1956
Efficient Optimization for Linear Dynamical Systems with Applications to Clustering and Sparse Coding Linear Dynamical Systems (LDSs) are fundamental tools for modeling spatiotemporal data in various disciplines. Though rich in modeling, analyzing LDSs is not free of difficulty, mainly because LDSs do not comply with Euclidean geometry and hence conventional learning techniques cannot be applied directly. In this paper, we propose an efficient projected gradient descent method to minimize a general form of a loss function and demonstrate how clustering and sparse coding with LDSs can be solved by the proposed method efficiently. To this end, we first derive a novel canonical form for representing the parameters of an LDS, and then show how gradient-descent updates through the projection on the space of LDSs can be achieved dexterously. In contrast to previous studies, our solution avoids any approximation in LDS modeling or during the optimization process. Extensive experiments reveal the superior performance of the proposed method in terms of the convergence and classification accuracy over state-of-the-art techniques. Due to the invariance property of the system parameters [1], the space of LDSs is non-Euclidean. This in turn makes the direct use of traditional techniques (e.g., conventional sparse solvers) inapplicable. To get around the difficulties induced by the non-Euclidean geometry, previous studies (e.g., [11,12,13,5]) resort to various approximations, either in modeling or during optimization. For instance, the authors in [11] approximated the clustering mean by finding the closest sample under a certain embedding. As we will see in our experiments, involving approximations in the solutions imposes inevitable limitations on algorithmic performance. This paper develops a gradient-based method to solve the clustering and sparse coding tasks efficiently without any approximation involved. To this end, we reformulate the optimization problems for these two different tasks and then unify them into one common problem by making use of the kernel trick. However, there exist several challenges in addressing this common problem efficiently. The first challenge comes from the aforementioned invariance property of the LDS parameters. To attack this challenge, we introduce a novel canonical form of the system parameters that is insensitive to such equivalent changes. The second challenge comes from the fact that the optimization problem of interest requires solving Discrete Lyapunov Equations (DLEs). At first glance, such a dependency makes backpropagating the gradients through DLEs more complicated. Interestingly, we prove that the gradients can be exactly derived by solving another DLE in the end, which makes our optimization much simpler and more efficient. Finally, as suggested by [14], the LDS parameters, i.e., the transition and measurement matrices, are required to be stable and orthogonal, respectively. Under our canonical representation, the stability constraint reduces to a bound constraint. We then make use of the Cayley transformation [15] to maintain orthogonality and perform bound-normalization to achieve stability. Clustering and sparse coding can be combined with high-level pooling frameworks (e.g., bag-of-systems [11] and spatial-temporal-pyramid-matching [16]) for classifying dynamic textures. Our experiments on such data demonstrate that the proposed methods outperform state-of-the-art techniques in terms of convergence and classification accuracy.
Summary: the paper proposes a clustering method for linear dynamical systems, based on minimizing the kernelized norm between extended observability spaces. Since the objective function contains terms involving discrete Lyapunov equations (DLEs), the paper derives how to pass the derivatives through, yielding a gradient descent algorithm. Experiments are presented on 2 datasets, for LDS clustering and sparse codebook learning. Originality: +1) novelty: the paper is moderately novel, but fairly straightforward. The differences from other related works [3,9,10,11,12] in terms of objective functions, assumptions/constraints, and canonical forms should be discussed more. Quality: There are a few potential problems in the derivations and motivation. -1) Theorem 1: The canonical form should have an additional inequality constraint on Lambda, which is: lambda_1 > lambda_2 > lambda_3 > ... > lambda_n, which comes from Lambda being a matrix of the singular values of A. Imposing this constraint removes equivalent representations due to permutations of the state vector. This inequality constraint could also be included in (12) to truly enforce the canonical form. However, I think it is not necessary (see next point). -2) It's unclear why we actually need the canonical form to solve (10). Since gradient descent is used, we only care that we obtain a feasible solution, and the actual solution that we converge to will depend on the initial starting point. I suppose that using the canonical form allows for an easier way to handle the stability constraint of A, rho(A)<1, but this needs to be discussed more, and compared with the straightforward approach of directly optimizing (A,C). -3) Theorem 4: I'm not entirely convinced by the proof. In the supplemental, Line 35 shows an explicit equation for the vectorized R_12. Then why in (34) does it need to be transformed into a DLE? There is something not quite correct about the vectorization and differentiation step. Furthermore, it seems that the gradients could be directly computed from (24). For example, taking the derivative wrt A1 of the relation A1^T G12 A2 - G12 = -C1^T C2 gives G12 A2 + A1^T (dG12/dA1) A2 - (dG12/dA1) = 0. Hence dG12/dA1 can be obtained directly as the solution of a DLE. -4) Given that there are previous works on LDS clustering, experiments are somewhat lacking: - Since a kernel method is used, it would be interesting to see the results for directly using the linear kernel, or the other kernels in (19,20). - In the video classification experiment, there is no comparison to [3,8,11], which also perform LDS clustering. - No comparison with probabilistic clustering methods [21,22]. Clarity: - L210 - Theorem 4 cannot be "directly employed" on new kernels, since the theorem is specific to the kernel in (5). Some extension, or partial re-derivation, is needed. - Other minor typos: - Eq 8: it should be mentioned why the k(S_m,S_m) term can be dropped. - Eq 9: the minimization is missing z_i. - Eq 10: the text is missing how (9) is solved. I'm assuming there are two sub-problems: sparse code estimation, and codebook estimation with z_i fixed. (10) corresponds to the second step. Significance: Overall, the approach is reasonable and offers a solution to the problem of clustering LDS. However, more could have been done in terms of relating the work with previous similar methods, exploring other kernel functions, and a more thorough comparison in the experiments. Some steps in the derivation need to be clarified.
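On the DLE machinery discussed above: the Gram matrix of the extended observability subspace can be computed with an off-the-shelf discrete Lyapunov solver. A sketch, assuming the convention G = sum_k (A^T)^k C^T C A^k, which matches the G12 equation A1^T G12 A2 - G12 = -C1^T C2 used in the review:

    import numpy as np
    from scipy.linalg import solve_discrete_lyapunov

    def observability_gram(A, C):
        # Solves A^T G A - G + C^T C = 0 for G. SciPy's solver handles
        # a X a^H - X + Q = 0, so pass a = A.T and Q = C.T @ C.
        return solve_discrete_lyapunov(A.T, C.T @ C)

The cross term G12 solves the analogous Sylvester-type equation with (A1, A2, C1^T C2) in place of (A, A, C^T C).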
nips_2017_183
Inferring Generative Model Structure with Static Analysis Obtaining enough labeled data to robustly train complex discriminative models is a major bottleneck in the machine learning pipeline. A popular solution is combining multiple sources of weak supervision using generative models. The structure of these models affects the quality of the training labels, but is difficult to learn without any ground truth labels. We instead rely on weak supervision sources having some structure by virtue of being encoded programmatically. We present Coral, a paradigm that infers generative model structure by statically analyzing the code for these heuristics, thus significantly reducing the amount of data required to learn structure. We prove that Coral's sample complexity scales quasilinearly with the number of heuristics and number of relations identified, improving over the standard sample complexity, which is exponential in n for learning nth-degree relations. Empirically, Coral matches or outperforms traditional structure learning approaches by up to 3.81 F1 points. Using Coral to model dependencies instead of assuming independence results in better performance than a fully supervised model by 3.07 accuracy points when heuristics are used to label radiology data without ground truth labels.
The authors propose using static analysis to improve the synthesis of generative models designed to combine multiple heuristic functions. They leverage the fact that expressing heuristics as programs often encodes information about their relationship to other heuristics. In particular, this occurs when multiple heuristics depend (directly or indirectly) on the same value, such as the perimeter of a bounding box in a vision application. The proposed system, Coral, infers and explicitly encodes these relationships in a factor graph model. The authors show empirically how Coral leads to improvements over other related techniques. I believe the idea is interesting and worth considering for acceptance. However, I have various queries, and points that might strengthen the paper: * I found it difficult at first to see how Coral relates to other work in this area. It might be useful to position the "Related Work" section earlier in the paper. It might also help to more explicitly describe how Coral relates to [2] (Bach et al.), which seems closest to this work -- in particular, there is quite a lot of similarity between the factor graphs considered by the two models. (Coral seems to be obtained by inserting the phi_i^HF and p_i nodes between the lambda_i and phi^DSP nodes, and by restricting only to pairwise terms in the phi^DSP factor.) * It isn't stated anywhere I could see that lambda_i \in {-1, 1}, but based on the definition of phi^Acc_i (line 147), this must be the case, right? If so, the "True" and "False" values in Figure 3 (a) should potentially be replaced with 1 and -1 for clarity. * Why does (i) (line 156) constitute a problem for the model? Surely over-specification is OK? (I can definitely see how under-specification in (ii) is an issue, on the other hand.) * What is the justification for using the indicator function I[p_i = p_j] for phi_ij^Sim? I would expect this hardly ever to have the value 1 for most primitives of interest. It is mentioned that, for instance, we might expect the area and perimeter of a tumour to be related, but surely we do not expect these values to be identical? (I note that similar correlation dependencies occur in [2] (Bach et al.), but these compare the heuristic functions rather than the primitives as here. Since the heuristic functions return an estimate of the label Y, it seems much more reasonable to expect their values might be equal.) * It is stated in line 176 that the supplementary materials contain a proof that "if any additional factor would have improved the accuracy of the model, then the provided primitive dependencies would have to be incorrect". However, I couldn't find this -- where exactly can it be found? Finally, some minor (possible) typos: * Line 133: "if an edge point" should be "if an edge points" * Figure 3 (a): The first "return False" has an extra indent * The double summation in the definition of P between lines 170 and 171 seems to double count certain terms, and is slightly different from the definition of phi^DSP between lines 169 and 170 * Line 227: "structure learning structure learning" * Line 269: "structure learning performed better than Coral when we looked only used static analysis to infer dependencies"
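To illustrate the kind of static pass Coral performs, a toy sketch in the spirit of the paper (hypothetical helper, not the authors' implementation): parse each heuristic's source and record which named primitives it reads, so that two heuristics sharing a primitive receive a dependency edge.

    import ast
    import inspect
    import textwrap

    def primitives_used(heuristic_fn, primitive_names):
        # Walk the AST of the heuristic's source and collect every bare
        # name it references, then intersect with the known primitives.
        source = textwrap.dedent(inspect.getsource(heuristic_fn))
        names = {node.id for node in ast.walk(ast.parse(source))
                 if isinstance(node, ast.Name)}
        return names & set(primitive_names)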
nips_2017_2965
Training Quantized Nets: A Deeper Understanding Currently, deep neural networks are deployed on low-power portable devices by first training a full-precision model using powerful hardware, and then deriving a corresponding low-precision model for efficient inference on such systems. However, training models directly with coarsely quantized weights is a key step towards learning on embedded platforms that have limited computing resources, memory capacity, and power consumption. Numerous recent publications have studied methods for training quantized networks, but these studies have mostly been empirical. In this work, we investigate training methods for quantized neural networks from a theoretical viewpoint. We first explore accuracy guarantees for training methods under convexity assumptions. We then look at the behavior of these algorithms for non-convex problems, and show that training algorithms that exploit high-precision representations have an important greedy search phase that purely quantized training methods lack, which explains the difficulty of training using low-precision arithmetic.
Summary --------- This paper analyzes the performance of quantized networks at a theoretical level. It proves convergence results for two types of quantization (stochastic rounding and binary connect) in a convex setting and provides theoretical insights into the superiority of binary connect over stochastic rounding in non-convex applications (both update rules are sketched below). General comments ----------------- The paper states that the motivation for quantization is training of neural networks on embedded devices with limited power, memory and/or support of floating-point arithmetic. I do not agree with this argumentation, because we can easily train a neural network on a powerful GPU grid and still do the inexpensive forward-pass on a mobile device. Furthermore, the vast majority of devices do support floating-point arithmetic. And finally, binary connect, which the paper finds to be superior to stochastic rounding, still needs floating-point arithmetic. Presentation ------------- The paper is very well written and structured. The discussion of related work is comprehensive. The notation is clean and consistent. All proofs can be found in the (extensive) supplemental material, which is nice because the reader can appreciate the main results without getting lost in too much detail. Technical section ------------------ The paper's theoretical contributions can be divided into convex and non-convex results: For convex problems, the paper provides ergodic convergence results for quantization through stochastic rounding (SR) and binary connect (BC). In particular, it shows that both methods behave fundamentally differently for strongly convex problems. While I appreciate the importance of those results at a theoretical level, they are limited in their applicability because today's optimization problems, and neural networks in particular, are rarely convex. For non-convex problems, the paper proves that SR does not enter an exploitation phase for small learning rates (Theorem 5) but just gets slow (Theorem 6), because its behavior is essentially independent of the learning rate below a certain threshold. The paper claims that this is in contrast to BC, although no formal proof supporting this claim is provided for the non-convex case. Experiments ------------- The experimental evaluation compares the effect of different types of quantization on the performance of a computer vision task. The main purpose of the experiments is to understand the impact of the theoretical results in practice. - First of all, the paper should clearly state the task (image classification) and quantization step (binary). Experimental results for more challenging tasks and finer quantization steps would have been interesting. - Standard SGD with learning rates following a Robbins-Monro sequence should have been used instead of Adam. After all, the theoretical results were not derived for Adam's adaptive learning rate, which I am concerned might conceal some of the effects of standard SGD. - Table 1 feels a little incoherent, because different network architectures were used for different datasets. - The test error on ImageNet is very high. Is this top-1 accuracy? - Why only quantize convolutional weights (l.222)? The vast majority of weights are in the fully-connected layers, so it is not surprising that the quantized performance is close to the unquantized networks if most weights are the same. - A second experiment computes the percentage of changed weights to give insights into the exploration-exploitation behaviour of SR and BC.
I am not sure this is a meaningful experiment. Due to the redundancy in neural networks, changing a large number of weights can still result in a very similar function. Also, it seems that the curves in Fig. 3(c) do saturate, meaning that only very few weights are changed between epochs 80 and 100 and indicating exploitation even with SR. Minor comments ----------------- l.76: Citation missing. Theorem 1: w^* not defined (probably global minimizer). l.137: F_i not defined. l.242: The percentage of changed weights is around 40%, not 50%. Conclusion ----------- The paper makes important contributions to the theoretical understanding of quantized nets. Although I am not convinced by the experiments and concerned about the practical relevance, this paper should be accepted due to its strong theoretical contributions.
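To make the SR/BC distinction from the summary concrete (the sketch referenced above), assume a uniform quantization grid of width delta and a gradient oracle grad; real BinaryConnect binarizes with sign(), but using the same stochastic quantizer in both updates isolates the key difference of where the real-valued iterate lives.

    import numpy as np

    def stochastic_round(w, delta):
        # Unbiased rounding to the grid {k * delta}: E[Q(w)] = w.
        low = np.floor(w / delta) * delta
        p_up = (w - low) / delta
        return low + delta * (np.random.rand(*w.shape) < p_up)

    def sr_step(w_q, grad, lr, delta):
        # SR: only the quantized iterate is kept, so for small lr the
        # update rarely moves a weight off its current grid point.
        return stochastic_round(w_q - lr * grad(w_q), delta)

    def bc_step(w_real, grad, lr, delta):
        # BC: gradients are evaluated at quantized weights, but a
        # full-precision buffer accumulates the small updates (the
        # greedy phase the paper argues SR lacks).
        w_q = stochastic_round(w_real, delta)
        return w_real - lr * grad(w_q)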
nips_2017_847
Shape and Material from Sound Hearing an object falling onto the ground, humans can recover rich information including its rough shape, material, and falling height. In this paper, we build machines to approximate such competency. We first mimic human knowledge of the physical world by building an efficient, physics-based simulation engine. Then, we present an analysis-by-synthesis approach to infer properties of the falling object. We further accelerate the process by learning a mapping from a sound wave to object properties, and using the predicted values to initialize the inference. This mapping can be viewed as an approximation of human commonsense learned from past experience. Our model performs well on both synthetic audio clips and real recordings without requiring any annotated data. We conduct behavior studies to compare human responses with ours on estimating object shape, material, and falling height from sound. Our model achieves near-human performance.
This paper contains a lot of ideas and methods packed into a tightly wound package - less would certainly have been more: The general aim is to build a system that can mimic the human ability to recognise the shape and material of objects from their sounds. Certainly a nice idea to explore within the NIPS community. To this end a generative model is defined, and a process for inference is proposed ("Physics-Based Audio Engine") that uses a physics simulation; this physics simulation is coupled (somehow) to a sound generation algorithm. This appears to use (somehow) the physics simulation engine's representation of the vibrating surfaces of the objects to render the sounds. This engine is then used to create training stimuli for 4 variants of machine learning algorithms to learn appropriate representations. Again the descriptions are rather curt, so it is difficult to understand how the algorithms actually operate and how they perform. Into the mix we now also have human experiments that aim to compare the algorithms with the human observer. A description of how the experiment was conducted with human users is missing; however, human users are almost always outperformed by the model. While it is up to this point unclear why the authors did not use real data when assessing their models, we are treated to real data at the end (Figure 7), but without any meaningful information for us to understand how their method performs. In my opinion these are the major issues: 1. Too much information, therefore too superficially presented. 2. It is unclear how good the generative model is; a comparison to real data would have helped here in anchoring it first. 3. The use of 4 different algorithms for classification is a nice effort, but it remains unclear to me what the point is (do the authors have a clear hypothesis?). 4. The human data is in itself interesting, but at this point it appears unclear how it was obtained, and how it informs us further on the work done in the paper.
nips_2017_2792
Hybrid Reward Architecture for Reinforcement Learning One of the main challenges in reinforcement learning (RL) is generalisation. In typical deep RL methods this is achieved by approximating the optimal value function with a low-dimensional representation using a deep network. While this approach works well in many domains, in domains where the optimal value function cannot easily be reduced to a low-dimensional representation, learning can be very slow and unstable. This paper contributes towards tackling such challenging domains, by proposing a new method, called Hybrid Reward Architecture (HRA). HRA takes as input a decomposed reward function and learns a separate value function for each component reward function. Because each component typically only depends on a subset of all features, the corresponding value function can be approximated more easily by a low-dimensional representation, enabling more effective learning. We demonstrate HRA on a toy-problem and the Atari game Ms. Pac-Man, where HRA achieves above-human performance.
R5: Summary: This paper builds on the basic idea of the Horde architecture: learning many value functions in parallel with off-policy reinforcement learning. This paper shows that learning many value functions in parallel improves the performance on a single main task. The novelty here lies in a particular strategy for generating many different reward functions and how to combine them to generate behavior. The results show large improvements in performance in an illustrative grid world and Miss Pac-man. Decision: This paper is difficult to assess. In terms of positives: (1) the paper lends another piece of evidence to support the hypothesis that learning many things is key to learning in complex domains. (2) The paper is well written, highly polished with excellent figures, graphs and results. (3) The key ideas are clearly illustrated in a small domain and then scaled up to Atari. (4) The proposed approach achieves substantial improvement over several variants of one of the latest high-performance deep learning systems. My main issue with the paper is that it is not well placed between two related ideas from the literature, namely the UNREAL architecture [1], and Diuk's [2] object-oriented approach successfully used many years ago in Pitfall—a domain most Deep RL systems perform very poorly on. It's unclear to me if these two systems should be used as baselines for evaluating Hydra. However, it is clear that a discussion should be added of where the Hydra architecture fits between these two philosophical extremes. I recommend accept, because the positives outweigh my concerns, and I am interested to hear the authors' thoughts. The two main ideas to discuss are auxiliary tasks and object-oriented RL. Auxiliary tasks are also built on the Horde idea, where the demons might be predicting or controlling pixels and features. Although pixel control works best, it is perhaps too specific to visual domains, but the feature control idea is quite general. The nice thing about the UNREAL architecture [1] is that it was perhaps one of the first to show that learning many value functions in parallel can improve performance on some main task. The additional tasks somehow regularize the internal state resulting in a better overall representation. The behavior policy did not require weighting over the different value functions; it simply shared the representation. This is a major point of departure from Hydra; what is the best design choice here? Much discussion is needed. Another natural question is how well pixel control works in Miss Pac-man; this was not reported in [1]. I see Hydra as somewhat less general than the auxiliary tasks idea. In Hydra a person must designate in the image what pixel groupings correspond to interesting value functions to learn. One could view Hydra as the outcome of UNREAL pixel control combined with some automatic way to determine which pixel control GVFs are most useful for sculpting the representation for good main task behavior. In this way Hydra represents a benchmark for automatic GVF construction and curation. On the other extreme we have Diuk's [2] object-oriented approach in Pitfall. Assuming some way to extract the objects in a Pitfall screen, the objects were converted into a first-order logic representation of the game. Using this approach, optimal behavior could be achieved in one episode. This is interesting because SOTA deep-learning systems don't even report results on this domain. Hydra is somewhat like this but is perhaps more general.
Carefully and thoughtfully relating the Hydra approach to these extremes is critical for accurately discussing the precise contribution of Hydra and of the paper under review here, and I invite the authors to do that. The final result of the paper (honestly admitted by the authors) exploits the determinism of the Atari domain to achieve super-human play in Miss Pac-man. This result shows that A3C and others do not exploit this determinism. I am not sure how generally applicable this result is. Another method not discussed, Model Free Episodic Control [3], performs well in Miss Pac-man—much better than A3C. Episodic control defines a kernel function and memory built from agent states generated by a deep network. This allows the system to quickly query the values of related previously seen states, resulting in much faster initial learning in many domains. I would be interested to see the performance of episodic control in Miss Pac-man with both representations. It is interesting that DQN does not seem to exploit the additional domain knowledge in the fruit experiment. I am trying to figure out the main point here. Is the suggestion that we will usually have access to such information, and Hydra will often exploit it more effectively? Or that prior information will usually be of this particular form that Hydra uses so effectively? Could UNREAL's aux tasks make use of manually ignoring irrelevant features? If that is not feasible in UNREAL, this would be yet another major difference between the architectures! This raises an interesting question: how often can this feature irrelevance be automatically observed and exploited without causing major problems in other domains? It seems like UNREAL's independent GVFs could eventually learn to ignore parts of its shared representation. Low-dim representations: this is a meme that runs throughout the paper, and a claim I am not sure I agree with. Several recent papers have suggested that the capacity of deep learning systems is massive and that they can effectively memorize large datasets. Do you have evidence to support this seemingly opposite intuition, outside of the fact that Hydra works well? This seems like a post-hoc explanation for why Hydra works well. This should be discussed more in the paper. [1] Jaderberg, M., Mnih, V., Czarnecki, W. M., Schaul, T., Leibo, J. Z., Silver, D., & Kavukcuoglu, K. (2016). Reinforcement learning with unsupervised auxiliary tasks. arXiv preprint arXiv:1611.05397. [2] Diuk, C., Cohen, A., & Littman, M. L. (2008, July). An object-oriented representation for efficient reinforcement learning. In Proceedings of the 25th international conference on Machine learning (pp. 240-247). ACM. [3] Blundell, C., Uria, B., Pritzel, A., Li, Y., Ruderman, A., Leibo, J. Z., ... & Hassabis, D. (2016). Model-free episodic control. arXiv preprint arXiv:1606.04460. Small things that did not affect the final decision: line 23: performed > > achieved 26: DQN carries out a strong generalisation > > DQN utilizes global generalization—like any neural net— 28: model of the optimal value function. I don't think you mean that 230: how important are the details of these choices? Could it be -500 or -2000?
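A minimal sketch of the HRA aggregation under discussion (assuming q_heads is a list of callables, one per reward component, each returning a vector of per-action values; the actual architecture shares a network torso and trains each head on its own reward stream):

    import numpy as np

    def hra_action(q_heads, state, weights=None):
        # Stack per-component value estimates and act greedily on their
        # (weighted) sum, mirroring Q(s, a) = sum_k w_k * Q_k(s, a).
        qs = np.stack([q(state) for q in q_heads])   # [n_heads, n_actions]
        if weights is None:
            weights = np.ones(len(q_heads))
        return int(np.argmax(weights @ qs))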
nips_2017_1092
Convergent Block Coordinate Descent for Training Tikhonov Regularized Deep Neural Networks By lifting the ReLU function into a higher dimensional space, we develop a smooth multi-convex formulation for training feed-forward deep neural networks (DNNs). This allows us to develop a block coordinate descent (BCD) training algorithm consisting of a sequence of numerically well-behaved convex optimizations. Using ideas from proximal point methods in convex analysis, we prove that this BCD algorithm will converge globally to a stationary point with R-linear convergence rate of order one. In experiments with the MNIST database, DNNs trained with this BCD algorithm consistently yielded better test-set error rates than identical DNN architectures trained via all the stochastic gradient descent (SGD) variants in the Caffe toolbox.
This manuscript proposes a modification of the feed-forward neural network optimization problem when the activation function is the ReLU function. My major concern is about the convergence proof for the block coordinate descent algorithm. In particular, Theorem 1 is incorrect for the following reasons: A. It assumes that the sequence generated by the algorithm has a limit point. This may not be the case as the set U is not compact (closed but not bounded). Therefore, the sequence may not converge to a finite point. B. It states that the sequence generated by the algorithm has a unique limit point (line 263: "the" limit point). Even if the algorithm has limit points, they may not be unique. Consider the sequence x_n = (-1)^n - 1/n; clearly it has 2 limit points: +1 and -1. For the remaining parts of the paper, my comments are as follows. 1. The new formulation seems interesting, and it can be discussed separately from the block coordinate descent part; it looks to me that the major novelty is the new formulation rather than the algorithm. However, the major concern for the new formulation is that it has many more variables, which can potentially lead to spatial infeasibility (excessive memory use) when there are more data instances. This problem hinders people from applying the formulation to large-scale problems and thus its usefulness is limited. 2. I do not see that the problem in line 149 is always convex with respect to W. This requires further elaboration. 3. In the introduction, some effort was put into describing the issue of saddle points. However, this manuscript does not handle this problem either. I'd suggest the authors remove the related discussion. 4. The format of paper references and equation references is rather non-standard. Usually [1] is used for paper references and (1) for equation references. 5. The comparison with Caffe solvers in terms of time is meaningless as the implementation platforms are totally different. I do not understand the sentence "using MATLAB still run significantly faster", especially regarding the word "still", as MATLAB should be expected to be faster than Python. 6. The comparison of the objective value in 3(a) is comparing apples to oranges, as the two are solving different problems. === After feedback === I did notice that in Line 100 the authors mentioned that U,V,W are compact sets, but as we see in line 124, U is set to be the nonnegative half-space, which is clearly unbounded and thus non-compact. Therefore, while it is indeed correct that when U is compact there will be at least one limit point, and thus the convergence analysis may hold true (see details below), this is not the case for problem (4) or (5). Regarding the proof, I don't agree with the concept of "the" limit point when t = \infty. I don't think such a thing exists by nature. As stated in my original review, there might be several limit points, and by definition all limit points are associated with t = \infty, just in different ways. The authors said in the rebuttal that because their algorithm converges, "the" limit point exists, but this is using a result proved under this assumption to prove that the assumption is true. I don't think that is a valid method of proof. In addition, in the most extreme case, consider that the sets U,V,W each consist of a single point, say u,v,w, and the gradient at (u,v,w) is not zero. Clearly in this case there is one and only one limit point (u,v,w), but apparently it is not a stationary point as the gradient is non-zero. Therefore, I will keep the same score.
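A quick numerical check of the counterexample above (pure illustration): the sequence x_n = (-1)^n - 1/n is bounded yet has two limit points, so "the" limit point need not exist.

    # Even-index terms approach +1, odd-index terms approach -1.
    x = [(-1) ** n - 1 / n for n in range(1, 100_001)]
    print(x[-1])   # n = 100000 (even): ~  0.99999
    print(x[-2])   # n = 99999  (odd):  ~ -1.00001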
nips_2017_2095
InfoGAIL: Interpretable Imitation Learning from Visual Demonstrations The goal of imitation learning is to mimic expert behavior without access to an explicit reward signal. Expert demonstrations provided by humans, however, often show significant variability due to latent factors that are typically not explicitly modeled. In this paper, we propose a new algorithm that can infer the latent structure of expert demonstrations in an unsupervised way. Our method, built on top of Generative Adversarial Imitation Learning, can not only imitate complex behaviors, but also learn interpretable and meaningful representations of complex behavioral data, including visual demonstrations. In the driving domain, we show that a model learned from human demonstrations is able to both accurately reproduce a variety of behaviors and accurately anticipate human actions using raw visual inputs. Compared with various baselines, our method can better capture the latent structure underlying expert demonstrations, often recovering semantically meaningful factors of variation in the data.
Paper Summary: This paper focuses on using GANs for imitation learning using trajectories from an expert. The authors extend the GAIL (Generative Adversarial Imitation Learning) framework by including a term in the objective function to incorporate latent structure (similar to InfoGAN). The authors then proceed to show that using their framework, which they call InfoGAIL, they are able to learn interpretable latent structure when the expert policy has multiple modes, and that in some settings this robustness allows them to outperform current methods. Paper Overview: The paper is generally well written. I appreciated that the authors first demonstrated how the mechanism works on a toy 2D plane example before moving onto a more complex driving simulation environment. This helped illustrate the core concept of allowing the learned policy to be conditioned on a latent variable in a minimalistic setting before moving on to a more complex 3D driving simulation. My main concern with the paper is that, in integrating many different ideas, it was hard to distil the core contribution of the paper which, as I understood, was the inclusion of latent variables to learn modes within an overall policy when using GAIL. Major Comments: 1. In integrating many different ideas, it was difficult to distil the core contribution of this paper (the inclusion of latent variables to learn modes within a policy). As an illustration of this problem, in subsection 3.1 the authors refer to "InfoGAIL" in its simplest form, but in the supplementary material they present a more complex algorithm which includes weight clipping, a Wasserstein GAN objective, as well as an optional term for reward augmentation, and this is also called "InfoGAIL". I felt the paper could benefit from concretely defining InfoGAIL in its simplest form, and then, when adding enhancements later, making it clear that this is InfoGAIL with these extra enhancements. 2. Linked to my first point, I felt the paper would benefit from an algorithm box for the simplest form of InfoGAIL in the paper itself (not supplementary). This would add to the readability of the paper and allow a reader not too familiar with all the details of GANs to quickly grasp the main ideas of the algorithm. I'd suggest adding this before section 4 on optimization, and then keeping the InfoGAIL with extensions in the supplementary material (and renaming it accordingly there). 3. The authors show how InfoGAIL works in an idealized setting where the number of modes is known. It would be useful to see how InfoGAIL behaves when the number of modes is not known and the number of latent variables is set incorrectly. For example, in the 2D plane environment there are 3 modes. What happens if we use InfoGAIL but set the number of latent variables to 2, 4, 5, 10, etc.? Can it still infer the modes of the expert policy? Is the method robust enough to handle these cases? Minor Comments: 1. Subsection 4.1 refers to reward augmentation, but the experiments in subsection 5.2 don't mention if reward augmentation is used to obtain the results (reward augmentation is only mentioned for experiments in subsection 5.3). It's not clear if the authors use reward augmentation in this experiment and, if so, how it is defined. 2. The heading of the paper is "Inferring The Latent Structure of Human Decision-Making from Raw Visual Inputs". Does InfoGAIL only work in domains where we have raw visual inputs?
Also, we may assume the method would work if the expert trajectories do not come from a human but from some other noisy source. The authors may want to revise the title to make the generality of their framework clearer.
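For readers who want "InfoGAIL in its simplest form" pinned down, here is a sketch of the objective as I read it from the descriptions above, i.e. the GAIL minimax objective plus an InfoGAN-style mutual-information term; the weight λ and the variational posterior Q are my notation, and the paper's exact regularizers (e.g. a causal entropy term) may differ:

```latex
\min_{\pi,\,Q}\;\max_{D}\;\;
\mathbb{E}_{\pi}\big[\log D(s,a)\big]
+ \mathbb{E}_{\pi_E}\big[\log\big(1 - D(s,a)\big)\big]
- \lambda\, L_I(\pi, Q),
```

where $L_I(\pi, Q) = \mathbb{E}_{c \sim p(c),\, \tau \sim \pi(\cdot \mid c)}[\log Q(c \mid \tau)] + H(c)$ is the variational lower bound on the mutual information $I(c;\tau)$ between the latent code $c$ and the trajectory $\tau$, as in InfoGAN.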
nips_2017_2094
Influence Maximization with ε-Almost Submodular Threshold Functions Influence maximization is the problem of selecting k nodes in a social network to maximize their influence spread. The problem has been extensively studied, but most works focus on submodular influence diffusion models. In this paper, motivated by empirical evidence, we explore influence maximization in the nonsubmodular regime. In particular, we study the general threshold model in which a fraction of nodes have non-submodular threshold functions, but their threshold functions are closely upper- and lower-bounded by some submodular functions (we call them ε-almost submodular). We first show a strong hardness result: there is no 1/n^{γ/c} approximation for influence maximization (unless P = NP) for all networks with up to n^γ ε-almost submodular nodes, where γ is in (0,1) and c is a parameter depending on ε. This indicates that influence maximization is still hard to approximate even though threshold functions are close to submodular. We then provide (1 − ε)^ℓ (1 − 1/e) approximation algorithms when the number ℓ of ε-almost submodular nodes is a small constant. Finally, we conduct experiments on a number of real-world datasets, and the results demonstrate that our approximation algorithms outperform other baseline algorithms.
This paper studies influence maximization in the general threshold diffusion model, specifically when node threshold functions are almost submodular (or epsilon-submodular). The key results are: 1) a proof of inapproximability for the influence maximization problem under this model when there are sufficiently many epsilon-submodular threshold nodes; 2) an efficient, simple and principled approximation algorithm for this problem when there is some fixed number of epsilon-submodular threshold nodes. This is a strong paper for the following reasons: 1- Influence maximization under the General Threshold model is a challenging and poorly studied problem; 2- The epsilon-submodularity condition is very reasonable because it is consistent with empirical evidence in real diffusion processes; 3- The hardness result and the approximate algorithms are intuitive and non-trivial. Barring minor typos and the like (see below for details), the paper is very well-written. In my understanding, the technical proofs are correct. The experimental evaluation is thorough and convincing, and the results are consistent with the theoretical guarantees. Minor comments (by line): 102: shouldn't the equation right-hand side have 1/(1-\epsilon) instead of (1-\epsilon)? 237: "linear" instead of "liner" 241: "satisfying" instead of "satisfies" 262: "PageRank" instead of "PagrRank" 297: "compared to 2.05" instead of "while 2.05" 298: "spread" instead of "spreads" 299: "The more $\epsilon$-AS nodes there are in the network, the more improvement is obtained"
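For reference, the ε-almost submodular condition sandwiches each threshold function by a submodular one; schematically (my rendering, and note that the minor comment on line 102 above suggests the paper's own statement may put a 1/(1−ε) factor on the upper bound instead):

```latex
(1-\epsilon)\,\bar{f}(S) \;\le\; f(S) \;\le\; \bar{f}(S)
\qquad \text{for some submodular } \bar{f},
```

i.e. f is pointwise within a (1−ε) factor of a submodular function.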
nips_2017_684
Revisiting Perceptron: Efficient and Label-Optimal Learning of Halfspaces It has been a long-standing problem to efficiently learn a halfspace using as few labels as possible in the presence of noise. In this work, we propose an efficient Perceptron-based algorithm for actively learning homogeneous halfspaces under the uniform distribution over the unit sphere. Under the bounded noise condition [49], where each label is flipped with probability at most η < 1/2, our algorithm achieves a near-optimal label complexity of Õ(d/(1 − 2η)² · ln(1/ε)). Under the adversarial noise condition [6, 45, 42], where at most a Ω(ε) fraction of labels can be flipped, our algorithm achieves a near-optimal label complexity of Õ(d · ln(1/ε)). Furthermore, we show that our active learning algorithm can be converted to an efficient passive learning algorithm that has near-optimal sample complexities with respect to ε and d.
This paper analyzes a variant of the Perceptron algorithm for active learning of origin-centered halfspaces under the uniform distribution on the unit sphere. It shows that their algorithm learns using a nearly tight number of samples under random independent noise of bounded rate. Previous work had exponentially worse dependence on the noise rate. In addition, it shows that this algorithm can deal with adversarial noise of a sufficiently low rate. The latter result improves polynomially on the sample complexity but requires a stronger condition on the noise rate. The assumptions in this setting are very strong and as a result are highly unlikely to hold in any realistic problem. At the same time, there has been a significant amount of prior theoretical work on this and related settings, since it is essentially the only setting where active learning gives an exponential improvement over passive learning in high dimensions. This work improves on some aspects of several of these works (although it is somewhat more restrictive in some other aspects). Another nice feature of this work is that it relies on a simple variant of the classical Perceptron algorithm and hence can be seen as analyzing a practically relevant algorithm in a simplified scenario. So overall this is a solid theoretical contribution to a line of work that aims to better understand the power of active learning (albeit in a very restricted setting). A weakness of this work is that beyond the technical analysis it does not have much of an algorithmic or conceptual message. So while interesting to several experts, it's unlikely to be of broader interest at NIPS. ---response to rebuttal --- >> "This label query strategy is different from uncertainty sampling methods in practice, which always query for the point that is closest to the boundary. We believe that our novel sampling strategy sheds lights on the design of other practical active learning algorithms as well." This is an interesting claim, but I do not see any support for this in the paper. The strategy might be just an artifact of the analysis even in your restricted setting (let alone a more practical one). So my evaluation remains positive about the technical achievement but skeptical about this work making a step toward more realistic scenarios.
nips_2017_2950
Learning Populations of Parameters Consider the following estimation problem: there are n entities, each with an unknown parameter p_i ∈ [0, 1], and we observe n independent random variables, X_1, . . . , X_n, with X_i ∼ Binomial(t, p_i). How accurately can one recover the "histogram" (i.e. cumulative density function) of the p_i's? While the empirical estimates would recover the histogram to earthmover distance Θ(1/√t) (equivalently, ℓ1 distance between the CDFs), we show that, provided n is sufficiently large, we can achieve error O(1/t), which is information theoretically optimal. We also extend our results to the multi-dimensional parameter case, capturing settings where each member of the population has multiple associated parameters. Beyond the theoretical results, we demonstrate that the recovery algorithm performs well in practice on a variety of datasets, providing illuminating insights into several domains, including politics, sports analytics, and variation in the gender ratio of offspring.
In this paper, the authors study the problem of learning a set of Binomial parameters in Wasserstein distance and demonstrate that the sorted set of probabilities can be learned at a rate of O(1/t), where t is the parameter in the Binomial distribution. The result seems interesting and I had a great time reading it. However, I would like to understand the technical novelty of the paper better. Can the authors please expand on the differences between this paper and one of the cited papers: "Instance optimal learning of discrete distributions", by Valiant and Valiant? To be more specific: In many of the discrete learning problems, the Poisson approximation holds, i.e., Binomial(t, p_i) ~ Poisson(t p_i). Hence, the proposed problem is very similar to estimating n Poisson random variables with means t p_1, t p_2, \ldots, t p_n. Similarly, the problem in the VV paper boils down to estimating many Poisson random variables. Hence, aren't they almost the same problem? Furthermore, in VV's paper, they use empirical estimates for large Poisson random variables (mean > log n), and use a moment based approach for small Poisson random variables (mean < log n). In this paper, the authors use a moment based approach for small Poisson random variables (mean < t). In both cases, the moment based approach yields an improvement of 1/poly(mean) ~ 1/poly(t) in earthmover / Wasserstein distance. Given the above, can you please elaborate on the differences between the two papers? A few other questions to the authors: 1. In Theorem 1: What is the dependence on t in the term O_{delta, t}(1/\sqrt{n})? 2. In Theorem 1: s is not known in advance; it is a parameter to be tuned / supplied to the algorithm. Hence, how do you obtain min_s on the right hand side of the equation? 3. Why Wasserstein distance? 4. The Good-Turing approach (McAllester Schapire '00) seems to perform well for instance optimality of discrete distributions in KL distance (Orlitsky Suresh '15). However, VV showed that Good-Turing does not work well for total variation distance. In your problem, you are looking at Wasserstein distance. Can you please comment on whether you think Good-Turing approaches work well for this distance? Can you give an example / compare these approaches at least experimentally? Edits after the rebuttal: Changed to Good paper, accept. > For ANY value of t, there exist dists/sets of p_i's such that the VV and Good-Turing approaches would recover dists that are no better (and might be worse?) than the empirical distribution (i.e. they would get a rate of convergence > O(1/sqrt(t)) rather than the O(1/t) that we get). Can you please add these examples in the paper, at least in the appendix? It would also be nice if you could compare with these approaches experimentally. > Suppose p_i = 0.5 for all i, t=10, and n is large. The analogous "poissonized" setting consists of taking m <- Poi(5n) draws from the uniform distribution over n items, and the expected number of times the ith item is seen is distributed as Poi(5). Poi(5) has variance 5, though Binomial(10, 0.5) has variance 10/4 = 2.5, and in general, for constant p, Poi(t*p) and Binomial(t,p) have constant L1 distance and variances that differ by a factor of (1-p). While I agree that they have a constant L1 distance, I am still not convinced that these problems are 'very different'. Do you believe that your algorithm (or a small variation of it) would not work for the problem where the goal is to estimate n Poisson random variables? If so, please comment.
If not, at least say something about it in the paper. > [A "bad" construction is the following: consider a distribution of p_i's s.t. the corresponding mixture of binomials approximates a Poisson, in which case VV and Good-Turing will conclude wrongly that all the p_i's are identical.] I do not agree with this comment either. Note that for this argument to work, it is not sufficient to approximate each Poisson coordinate with a mixture-of-binomials coordinate. One has to approximate a Poisson product distribution over n coordinates with a mixture of binomial random variables over n coordinates. I believe this task gets harder as n -> infty. > It depends on the distribution (and is stated in the theorem), but can be as bad as 2^t. Please mention this in the paper.
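Since the exchange above turns on the gap between the empirical Θ(1/√t) rate and the O(1/t) rate, a tiny simulation of the empirical baseline may be useful to other readers; the Wasserstein-1 distance between two n-point empirical distributions reduces to the mean absolute difference of sorted values. This sketch (my own, in Python) illustrates the baseline only, not the paper's moment-based estimator:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000                      # number of entities
p = rng.uniform(0.0, 1.0, n)     # hidden parameters p_i

for t in [10, 100, 1000]:
    x = rng.binomial(t, p)       # one Binomial(t, p_i) observation per entity
    p_hat = x / t                # empirical estimates
    # W1 between two n-point empirical distributions = mean |sorted difference|
    w1 = np.mean(np.abs(np.sort(p_hat) - np.sort(p)))
    print(f"t={t:5d}  empirical W1 = {w1:.4f}  (compare 1/sqrt(t) = {t**-0.5:.4f})")
```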
nips_2017_1035
Predicting User Activity Level In Point Processes With Mass Transport Equation Point processes are powerful tools to model user activities and have a plethora of applications in social sciences. Predicting user activities based on point processes is a central problem. However, existing works are mostly problem specific, use heuristics, or simplify the stochastic nature of point processes. In this paper, we propose a framework that provides an efficient estimator of the probability mass function of point processes. In particular, we design a key reformulation of the prediction problem, and further derive a differential-difference equation to compute a conditional probability mass function. Our framework is applicable to general point processes and prediction tasks, and achieves superb predictive and efficiency performance in diverse real-world applications compared to the state of the art.
In my opinion this paper is quite good, although I do not consider myself an expert on point processes, which should be taken into account when reading this review. I consider it in the top 50% of NIPS papers as a lower bound, admitting that I do not know enough to evaluate it further. I have asked the committee to ensure that at least one person who can judge the proofs sufficiently be assigned to this paper, and they promised me this would be the case. I do hope it is published, as I would like to try using it. Overall I think the main issue with this paper is that it would more comfortably fit in about 20 pages. Here are my detailed comments within the scope of my expertise: Coming from a background of modeling user activity as someone who interprets time series regarding heart rate and accelerometers, this paper was not what I expected based on the title. To some extent I think the paper would have been better titled "HYBRID: a framework that provides an unbiased estimator of the probability mass function of point processes," but I do see why you called it this after reading the paper. There are many contributions in this paper that comprise the framework: the reformulation of the prediction problem using a conditional expectation that incorporates history, allowing sample efficiency for prediction due to a lower variance in the new estimator; the derivation of the differential-difference equation to compute the conditional estimator; and both a theoretical evaluation of the framework on synthetic data and two evaluations on real-world data. Overall, the paper seems quite good and well supported. It is also well written. Prior Art: My one comment on the prior art is that one cluster of authors (the authors of 5,7,8,9,11,12,24,25,26,27 and 31) does seem to dominate the references. These citations are necessary, as the authors of this document do need to reference this material; however, for the datasets used it would be more appropriate to cite the original authors who contributed the data rather than simply [12]. Demetris Antoniades and Constantine Dovrolis. Co-evolutionary dynamics in social networks: A case study of twitter. Computational Social Networks, 2(1):14, 2015. My guess is that since there is so much material here and [12] does need to be cited as well, the authors were trying to save space; still, the above would be a more appropriate citation for the dataset. Similarly, I believe this paper: Yehia Elkhatib; Mu Mu; Nicholas Race. "Dataset on usage of a live & VoD P2P IPTV service." P2P, London, UK, 2014 is a more appropriate citation for one of the datasets used in Section 6.1.2, cited as "the datasets used in [24]". The other dataset "used in [24]" also seems to be the Antoniades and Dovrolis set. It would be better to reference these directly. Also, it would be great to know if this area of research is really restricted primarily to this one group or if other groups are actively pursuing these topics. Again, I feel it is likely a space limitation issue, so perhaps this is more a comment for a journal version of the paper. Solution Overview (Section 3): With respect to the claim of generalizability, could the authors please give an example of a point process that is not covered by prior art (e.g. not represented by a reinforced Poisson process or a Hawkes process)? As a person not overly familiar with point processes, this would help me better understand the impact of this work. This was done nicely in lines 31 and 31 for why the generalization of the function is important.
In general I think this is well presented; your contribution is a more general framework that preserves the stochasticity of the function and yet requires fewer samples than MC methods. With respect to the claim of a new random variable, g(H_t_minus): the idea of a conditional intensity function, conditioned on the history of the intensity function, is not new; it seems to be a known concept based on a few quick web searches. I have to assume that you mean that using it as a random variable for prediction is the new idea. The closest similar work I found to this is the use of a "mark-specific conditional intensity function" in "Marked Temporal Dynamics Modeling based on Recurrent Neural Network" by Yongqing Wang, Shenghua Liu, Huawei Shen, Xueqi Cheng on arXiv: (https://arxiv.org/pdf/1701.03918.pdf). Again, this is unfortunately by the method of Google searching, as I am not an expert in this area. If you think it is similar you can reference it; if not, you can ignore this. With respect to the proof contained in Appendix C: It looks good to me. I had to look up taking the expectation of a conditional random variable, http://www.baskent.edu.tr/~mudogan/eem611/ConditionalExpectation.pdf, and the conditions under which a variance could be negative, and I believe that you proved your point well. It can't be negative or zero because it is right progressing and there is an impulse at t(0), which makes it non-constant and therefore non-zero, if I understand you correctly. I am hoping the supplemental material will be included in the proceedings. With respect to the contribution of the formulation of the novel transport equation: this does seem novel and useful. I could find nothing similar. The closest match I found was "Probability approximation of point processes with Papangelou conditional intensity," by Giovanni Luca Torrisi, forthcoming in the Bernoulli society (http://www.bernoulli-society.org/index.php/publications/bernoulli-journal/bernoulli-journal-papers), again based on a web search and not my knowledge, so take it as a paper of potential interest for you, not a challenge to your novelty. With respect to the mass transport equation: This does seem like a very good idea. Intuitively, I understand that as the intensity process generates events these contribute to that mass, and that the accumulated mass is modeled as decreasing over time in the absence of new events. I understand that the authors used tools from numerical methods for approximating the integration of the probability mass function and in the end solved their equation with ode45 from MATLAB. Everything MATLAB says about the method is consistent with what is presented here: "ODE23 and ODE45 are functions for the numerical solution of ordinary differential equations. They employ variable step size Runge-Kutta integration methods. ODE23 uses a simple 2nd and 3rd order pair of formulas for medium accuracy and ODE45 uses a 4th and 5th order pair for higher accuracy." I cannot comment with much expertise on the proof in Appendix A. I get the general idea and it seems plausible. I looked up the fundamental lemma of the calculus of variations and tried to follow the math, but I cannot guarantee that I did not miss something, as I am not very familiar with numerical methods. I found Figure 2 very helpful for getting an intuitive understanding of what was being described by the equations.
I found the section on event information fairly compelling with respect to understanding why they are able to reduce the samples used for the estimate (increased sample efficiency). For the experiments in Figure 3: did you train on 70% and test on 30%, or did you use a train, test, and hold-out split (e.g. 70/15/15)? Overall this looks like quite a promising and solid idea and framework, so I would recommend inclusion at NIPS based on my understanding of the content. Writing: here are some typos I noticed as I was reading the paper. 10: "state-of-arts" -> "the state of the art" is better. 36: "adversely influence" -> "adversely influences". 64: "the phenomena of interests" -> "the phenomena of interest". 75: "The root mean square error (RMSE) is defines as" -> "is defined as".
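Since the integration machinery (ode45) came up above, a small sketch may help other readers see what solving such a differential-difference system looks like. The system below is a generic master equation for a pure birth process with time-varying intensity λ(t), integrated with an adaptive Runge-Kutta solver (the SciPy analogue of ode45); it illustrates the kind of computation described, not the paper's exact equation, and the truncation level, intensity, and horizon are my choices:

```python
import numpy as np
from scipy.integrate import solve_ivp

K = 50                                   # truncation: track P(N_t = k) for k = 0..K
lam = lambda t: 1.0 + 0.5 * np.sin(t)    # a time-varying intensity, for illustration

def master_eq(t, phi):
    # d phi_k / dt = lam(t) * (phi_{k-1} - phi_k), with phi_{-1} := 0
    inflow = np.concatenate(([0.0], phi[:-1]))
    return lam(t) * (inflow - phi)

phi0 = np.zeros(K + 1)
phi0[0] = 1.0                            # start with zero events almost surely

# RK45 is a variable-step 4th/5th-order pair, analogous to MATLAB's ode45
sol = solve_ivp(master_eq, (0.0, 10.0), phi0, method="RK45")
pmf = sol.y[:, -1]
print("P(N_10 = k) for k = 0..5:", np.round(pmf[:6], 4))
print("probability mass kept by the truncation:", pmf.sum())
```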
nips_2017_3354
Neural Expectation Maximization Many real world tasks such as reasoning and physical interaction require identification and manipulation of conceptual entities. A first step towards solving these tasks is the automated discovery of distributed symbol-like representations. In this paper, we explicitly formalize this problem as inference in a spatial mixture model where each component is parametrized by a neural network. Based on the Expectation Maximization framework we then derive a differentiable clustering method that simultaneously learns how to group and represent individual entities. We evaluate our method on the (sequential) perceptual grouping task and find that it is able to accurately recover the constituent objects. We demonstrate that the learned representations are useful for next-step prediction.
The paper presents two related deep neural networks for clustering entities together into roughly independent components. In their implementations the entities are always pixels in an image. They brand the first technique as neural expectation maximization, and show how the network performs an unrolled form of generalized EM. The E-step predicts the probability that a given pixel was generated by each of the components. The M-step then updates the parameters of each component using these probabilities. This is neural because each of these steps relies on a neural network which maps from the parameters of a component to the pixel values. By unrolling this process they can train this network on an end-to-end basis. They also propose a related model which modifies this process by updating the parameters of the components at each step in the unrolling using an RNN-style update. This allows them to extend the model to sequential/video data where the image is changing at each step. They show results only on toy datasets consisting of a small number of shapes or MNIST images (3-5) superimposed on top of each other. Their neural EM approach performs significantly worse than recent work on Tagger for static data; however, the RNN version of their model slightly outperforms Tagger. That said, Tagger significantly outperforms even the RNN version of their work if Tagger is trained with batch normalization. In the video setting, their model performs significantly better, which is not surprising given the additional information available in video. They show not only higher scores for predicting the pixel-to-component mapping, but also much better reconstruction MSE than a standard (fully entangled) autoencoder model (which is the same as their model with 1 component). Overall this seems like a solid paper with solid results on a toy dataset. My main concern is that the technique may not work on messier real world data, since they only show results on relatively toy datasets where the components are completely independent, and the component boundaries are relatively clear when they are not overlapping. Results in a more realistic domain would make the paper much stronger, but even in the current state I believe both the technique and the results are interesting.
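To make the unrolled loop concrete, here is a minimal sketch of one E-step / generalized-M-step cycle as described above, with a stand-in linear-sigmoid decoder in place of the authors' neural network; all dimensions, the noise variance, the step size, and the decoder itself are my own choices, not the paper's (the RNN variant would replace the gradient update on theta with a learned recurrent update):

```python
import numpy as np

rng = np.random.default_rng(0)
D, K, H = 64, 3, 8                   # pixels, components, latent size
sigma2, lr = 0.25, 0.5               # output noise variance, M-step size
W = rng.normal(0, 0.3, (H, D))       # hypothetical fixed decoder weights

def decode(theta):
    """Stand-in for the neural decoder: component parameters -> pixel means."""
    return 1.0 / (1.0 + np.exp(-theta @ W))     # shape (K, D)

x = rng.random(D)                    # one flattened image
theta = rng.normal(0, 0.1, (K, H))   # per-component parameters

for step in range(20):
    mu = decode(theta)                           # (K, D) predicted pixel means
    # E-step: pixel-wise responsibilities under a Gaussian output model
    log_lik = -0.5 * (x - mu) ** 2 / sigma2
    gamma = np.exp(log_lik - log_lik.max(axis=0))
    gamma /= gamma.sum(axis=0)                   # sums to 1 over components
    # Generalized M-step: one gradient step on the expected log-likelihood
    grad_mu = gamma * (x - mu) / sigma2          # dL/dmu
    grad_theta = (grad_mu * mu * (1 - mu)) @ W.T # chain rule through the decoder
    theta += lr * grad_theta
```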
nips_2017_3132
Asynchronous Coordinate Descent under More Realistic Assumptions Asynchronous-parallel algorithms have the potential to vastly speed up algorithms by eliminating costly synchronization. However, our understanding of these algorithms is limited because the current convergence theory of asynchronous block coordinate descent algorithms is based on somewhat unrealistic assumptions. In particular, the age of the shared optimization variables being used to update blocks is assumed to be independent of the block being updated. Additionally, it is assumed that the updates are applied to randomly chosen blocks. In this paper, we argue that these assumptions either fail to hold or will imply less efficient implementations. We then prove the convergence of asynchronous-parallel block coordinate descent under more realistic assumptions, in particular, always without the independence assumption. The analysis permits both the deterministic (essentially) cyclic and random rules for block choices. Because a bound on the asynchronous delays may or may not be available, we establish convergence for both bounded delays and unbounded delays. The analysis also covers nonconvex, weakly convex, and strongly convex functions. The convergence theory involves a Lyapunov function that directly incorporates both objective progress and delays. A continuous-time ODE is provided to motivate the construction at a high level.
The paper studies coordinate descent with asynchronous updates. The key contributions include a relaxation of the assumptions on how the block coordinates are chosen and on how large the delays can be. Descent lemmas and convergence rates are obtained for a variety of settings using an ODE analysis and its discretization. I think the paper is well-grounded, because the existing assumptions in the asynchronous analyses of coordinate descent are highly unsatisfactory, and this work explained the problem well and proposed a somewhat more satisfactory solution. For this reason I vote for accept. Detailed evaluation: 1. I did not have time to check the proof, but I think the results are believable based on the steps that the authors outlined in the main paper. 2. The conditions for cyclic choice of coordinates and unbounded delays could be used in other settings. 3. In a related problem of asynchronous block coordinate Frank-Wolfe, it was established under weaker tail bound assumptions on the delay random variable that there could be unbounded stochastic delay. The delay random variables are also not required to be independent. Establishing deterministic sequences in AP-BCFW or more heavy-tailed stochastic delay for AP-BCD could both be useful. - Wang, Y. X., Sadhanala, V., Dai, W., Neiswanger, W., Sra, S., & Xing, E. (2016, June). Parallel and distributed block-coordinate Frank-Wolfe algorithms. In International Conference on Machine Learning (pp. 1548-1557).
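For orientation, the generic asynchronous block update that this kind of analysis concerns can be written schematically as follows (my notation, not the paper's), where \hat{x}^k is a possibly stale and possibly inconsistent read of the shared iterate, with each coordinate coming from some past iterate:

```latex
x^{k+1}_{i_k} = x^{k}_{i_k} - \eta\,\nabla_{i_k} f\big(\hat{x}^{k}\big),
\qquad
\hat{x}^{k}_j \in \big\{ x^{k-d}_j : d \ge 0 \big\}\ \text{ for each } j,
```

and the questions the paper addresses are what survives when the block index i_k is (essentially) cyclic rather than uniformly random, when the staleness is allowed to depend on i_k, and when no bound on the delays is available.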
nips_2017_617
Self-Normalizing Neural Networks Deep Learning has revolutionized vision via convolutional neural networks (CNNs) and natural language processing via recurrent neural networks (RNNs). However, success stories of Deep Learning with standard feed-forward neural networks (FNNs) are rare. FNNs that perform well are typically shallow and therefore cannot exploit many levels of abstract representations. We introduce self-normalizing neural networks (SNNs) to enable high-level abstract representations. While batch normalization requires explicit normalization, neuron activations of SNNs automatically converge towards zero mean and unit variance. The activation function of SNNs is the "scaled exponential linear unit" (SELU), which induces self-normalizing properties. Using the Banach fixed-point theorem, we prove that activations close to zero mean and unit variance that are propagated through many network layers will converge towards zero mean and unit variance - even in the presence of noise and perturbations. This convergence property of SNNs allows one to (1) train deep networks with many layers, (2) employ strong regularization schemes, and (3) make learning highly robust. Furthermore, for activations not close to unit variance, we prove an upper and lower bound on the variance; thus, vanishing and exploding gradients are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with standard FNNs, and other machine learning methods such as random forests and support vector machines. For FNNs we considered (i) ReLU networks without normalization, (ii) batch normalization, (iii) layer normalization, (iv) weight normalization, (v) highway networks, and (vi) residual networks. SNNs significantly outperformed all competing FNN methods on the 121 UCI tasks, outperformed all competing methods on the Tox21 dataset, and set a new record on an astronomy data set. The winning SNN architectures are often very deep.
The paper proposes a new (class of) activation function f to more efficiently train very deep feed-forward neural networks (networks with f are called SNNs). f is a combination of scaled ReLU (positive x) and an exponential (negative x). The authors argue that 1) SNNs converge towards normalized activation distributions, and 2) SGD is more stable, as the SNN approximately preserves variance from layer to layer. In fact, f is part of a family of activation functions for which theoretical guarantees for fixed-point convergence exist. These functions are contraction mappings and characterized by mean/variance preservation across layers at the fixed point -- solving these constraints allows finding other "self-normalizing" f, in principle. Whether f converges to a fixed point is sensitive to the choice of hyper-parameters: the authors demonstrate certain weight initializations and parameter settings that give the fixed-point behavior. Also, the authors analyze the performance of SNNs with a modified version of dropout, which preserves mean/variance while using f. In support, the authors provide theoretical proofs and experimental results on many datasets, although these vary in size (and quality?). As such, I think the paper warrants an accept -- although I have questions / concerns that I hope the authors can address. Questions / concerns: 1. Most experiments are run on small datasets. This is understandable since only FNNs are considered, but do we get similar results on networks that also include e.g. convolutional / recurrent layers? How can the theoretical analysis work / apply in that case (e.g. convolutions / pooling / memory cells are not always contraction mappings)? 2. Ad point 1. What are the results for SNNs on MNIST / Cifar-10? 3. The authors only considered layers without bias units. What happens when these are included? (This likely makes it harder to preserve domains, for instance.) 4. During training, the mean/var of the weights do change, and are not guaranteed to be in a basin of attraction (see Thm 1) anymore. Why is there still convergence to a fixed point, even though no e.g. explicit weight clipping / normalization is applied (to make Thm 1 apply)? 5. How does this analysis interact with different optimizers? What is the difference between using momentum-based / Adagrad etc. for SNNs? 6. Hyper-parameters of f need to be set precisely, otherwise f is no longer domain-preserving / a contraction mapping. How sensitive is the model to this choice? 7. It is unclear how the fixed-point analysis theoretically relates to the generalization performance of the found solution. Is there a theoretical connection?
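As a quick empirical check of the self-normalization claim (and a partial, practical answer to how robust the fixed point is), one can propagate random activations through many layers and watch the statistics: with the SELU constants from the paper (λ ≈ 1.0507, α ≈ 1.6733) and weights initialized with variance 1/n, the mean/variance stay near (0, 1). A minimal sketch (depth, width, and batch size are my choices):

```python
import numpy as np

LAMBDA, ALPHA = 1.0507, 1.6733   # SELU constants (approximate values from the paper)

def selu(x):
    return LAMBDA * np.where(x > 0, x, ALPHA * (np.exp(x) - 1.0))

rng = np.random.default_rng(0)
n, depth = 500, 32
x = rng.normal(0.0, 1.0, (1000, n))                  # batch of unit-variance inputs

for _ in range(depth):
    W = rng.normal(0.0, np.sqrt(1.0 / n), (n, n))    # Var(w_ij) = 1/n initialization
    x = selu(x @ W)

print(f"after {depth} layers: mean = {x.mean():+.3f}, var = {x.var():.3f}")
```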
nips_2017_2007
Limitations on Variance-Reduction and Acceleration Schemes for Finite Sum Optimization We study the conditions under which one is able to efficiently apply variance-reduction and acceleration schemes on finite sum optimization problems. First, we show that, perhaps surprisingly, the finite sum structure by itself is not sufficient for obtaining a complexity bound of Õ((n + L/µ) ln(1/ε)) for L-smooth and µ-strongly convex individual functions - one must also know which individual function is being referred to by the oracle at each iteration. Next, we show that for a broad class of first-order and coordinate-descent finite sum algorithms (including, e.g., SDCA, SVRG, SAG), it is not possible to get an 'accelerated' complexity bound of Õ((n + √(nL/µ)) ln(1/ε)), unless the strong convexity parameter is given explicitly. Lastly, we show that when this class of algorithms is used for minimizing L-smooth and convex finite sums, the iteration complexity is bounded from below by Ω(n + L/ε), assuming that (on average) the same update rule is used in any iteration, and Ω(n + √(nL/ε)) otherwise.
This paper covers some theoretical limitations of variance-reduced stochastic algorithms for finite sum problems (SAG, SDCA, SAGA, SVRG, etc.). Such algorithms have had an important impact in machine learning & convex optimization, as they have been shown to reach linear convergence for strongly convex problems with Lipschitz gradients (log term) while having a factor of $n + \kappa$ (and not $n\kappa$). A first result of this paper is that a sequential iterative algorithm needs to know which individual function has been returned by the oracle to reach the linear convergence rate. Typically SAG or SAGA need this knowledge to update the gradient memory buffer, so it is not terribly surprising that knowing the function index is necessary. It is then argued that such variance-reduced methods cannot be used with data augmentation during learning. This is only true if the data augmentation is stochastic and if there is not a finite set of augmented samples. A second result concerns the possibility of accelerating such variance-reduced algorithms in the "Nesterov sense" (going from $\kappa$ to $\sqrt{\kappa}$). It is shown that "oblivious" algorithms, whose update rules are (on average) fixed for any iteration, cannot be accelerated. A practical consequence of the first result in 3.2 is that knowing the value of the strong convexity parameter is necessary to obtain accelerated convergence when the algorithm is stationary. Such oblivious algorithms are further analysed using restarts, which are well known techniques to alleviate the non-adaptive nature of accelerated algorithms in the presence of unknown local conditioning. The paper is overall well written with some pedagogy. Minor: CLI is first defined in Def 2 but is used before. Typo: L224 First, let us we describe -> First, let us describe
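For readers less familiar with these rates, the progression the review refers to is, schematically (standard complexity bounds in my summary notation, with κ = L/µ):

```latex
\underbrace{O\!\big(n\kappa\,\log(1/\epsilon)\big)}_{\text{full gradient descent}}
\;\longrightarrow\;
\underbrace{O\!\big((n+\kappa)\log(1/\epsilon)\big)}_{\text{SAG / SDCA / SVRG}}
\;\longrightarrow\;
\underbrace{O\!\big((n+\sqrt{n\kappa})\log(1/\epsilon)\big)}_{\text{accelerated variants}}.
```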
nips_2017_342
Continuous DR-submodular Maximization: Structure and Algorithms DR-submodular continuous functions are important objectives with wide real-world applications spanning MAP inference in determinantal point processes (DPPs), and mean-field inference for probabilistic submodular models, amongst others. DR-submodularity captures a subclass of non-convex functions that enables both exact minimization and approximate maximization in polynomial time. In this work we study the problem of maximizing non-monotone continuous DR-submodular functions under general down-closed convex constraints. We start by investigating geometric properties that underlie such objectives, e.g., a strong relation between (approximately) stationary points and global optimum is proved. These properties are then used to devise two optimization algorithms with provable guarantees. Concretely, we first devise a "two-phase" algorithm with a 1/4 approximation guarantee. This algorithm allows the use of existing methods for finding (approximately) stationary points as a subroutine, thus, harnessing recent progress in non-convex optimization. Then we present a non-monotone Frank-Wolfe variant with a 1/e approximation guarantee and sublinear convergence rate. Finally, we extend our approach to a broader class of generalized DR-submodular continuous functions, which captures a wider spectrum of applications. Our theoretical findings are validated on synthetic and real-world problem instances.
This paper presents a new approach for non-monotone DR-submodular optimisation. The strength of this paper over existing approaches is that it addresses a more general problem setting, rather than being restricted to constraints such as box constraints. The proofs are well constructed and presented. However, the experimental results are somewhat limited. The parameters used for the simulated data are very limited, and it is not clear whether the authors performed many more simulations with different parameters to assess in more depth the "performance" of their approach over the existing ones. Also, for the real data set example, it is not very clear that the proposed two-phase Frank-Wolfe algorithm performs much better than the others. From the figures, it seems to perform just slightly better, which is not very convincing. For real-world problems, it is typically difficult to verify whether the assumptions hold or not, so a comparison with the DoubleGreedy algorithm would have been appropriate to see how much gain we have by considering a more general constraint.
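For context, the diminishing-returns property at the centre of this line of work can be stated compactly (the standard definition, not specific to this paper): a differentiable function f is DR-submodular when its gradient is antitone,

```latex
x \le y \ \ (\text{coordinate-wise}) \;\;\Longrightarrow\;\; \nabla f(x) \ge \nabla f(y),
```

or, equivalently for twice-differentiable f, when all entries of the Hessian are non-positive.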
nips_2017_2260
Adversarial Symmetric Variational Autoencoder A new form of variational autoencoder (VAE) is developed, in which the joint distribution of data and codes is considered in two (symmetric) forms: (i) from observed data fed through the encoder to yield codes, and (ii) from latent codes drawn from a simple prior and propagated through the decoder to manifest data. Lower bounds are learned for marginal log-likelihood fits to observed data and latent codes. When learning with the variational bound, one seeks to minimize the symmetric Kullback-Leibler divergence of the joint density functions from (i) and (ii), while simultaneously seeking to maximize the two marginal log-likelihoods. To facilitate learning, a new form of adversarial training is developed. An extensive set of experiments is performed, in which we demonstrate state-of-the-art data reconstruction and generation on several image benchmark datasets.
# Summary Motivated by the failure of maximum-likelihood training objectives to produce models that create good samples, this paper introduces a new formulation of a generative model. This model is based on a symmetric form of the VAE training objective. The goal of this symmetric formulation is to learn a distribution that both assigns high probability to points in the data distribution and low probability to points not in the data distribution (unlike a maximum-likelihood objective, which focuses on learning distributions that assign high probability to points in the data distribution). As the resulting model is not tractable, the authors propose to estimate the arising KL-divergences using adversarial training. # Clarity This paper is mostly well-written and easy to understand. # Significance This paper describes an interesting approach to generative models that attempts to fix some problems of methods based on maximum-likelihood learning. The proposed method is interesting in that it combines advantages of ALI/BiGANs (good samples) with properties of VAEs (faithful reconstructions). However, by using auxiliary discriminators, the approach is only a rough approximation to the true objective, thus making it difficult to interpret the log-likelihood estimates (see next section). The authors therefore should have evaluated the reliability of their log-likelihood estimates, e.g. by using annealed importance sampling. # Correctness === UPDATE: this concern was successfully addressed by the authors' feedback === While the derivation of the method seems correct, the evaluation seems to be flawed. First of all, the authors evaluate their method with respect to the log-likelihood estimates produced by their method, which can be very inaccurate. In fact, the discriminators have to compare the distributions q(x,z) to q(x)q(z) and p(x,z) to p(x)p(z) respectively, which are usually very different. This is precisely the reason why adaptive contrast was introduced in the AVB paper by Mescheder et al. In this context, the authors' statement that "AVB does not optimize the original variational lower bound connected to its theory" (ll. 220-221) when using adaptive contrast is misleading, as adaptive contrast is a technique for making the estimate of the variational lower bound more accurate, not an entirely different training objective. It is not clear why the proposed method should produce good estimates of the KL-divergences without such a technique. For a more careful evaluation, the authors could have run annealed importance sampling (Neal, 1998) to evaluate the correctness of their log-likelihood estimates. At the very least, the authors should have made clear that the obtained log-likelihood values are only approximations and have to be taken with a grain of salt. It is also unclear why the authors did not use the ELBO from equation (1) (which has an analytic expression as long as p(z), q(z | x) and p(x | z) are tractable) instead of L_VAEx, which has to be approximated using an auxiliary discriminator network. If the authors indeed used L_VAEx only for training and used the ELBO from (1) (without a discriminator) for evaluation, this should be clarified in the paper. # Overall rating While the technique introduced in this paper is interesting, the evaluation seems to be flawed. At the very least, the authors should have stated that their method produces only approximate log-likelihood estimates, which can therefore not be compared directly to other methods.
# Response to authors' feedback Thank you for your feedback. The authors addressed my major concern about the evaluation. I therefore raised the final rating to "6 - Marginally above acceptance threshold". Note, however, that AR #2 still has major concerns about the correctness of the experimental evaluation that should be addressed by the authors in the final manuscript. In particular, the authors should show the split of the ELBO for CIFAR-10 into reconstruction and KL-divergence to better understand why the method can achieve good results even though the reconstructions are apparently not that good. The authors should also make sure that they performed the MNIST experiment according to the standard protocol (binarization etc.).
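For reference, the training criterion under discussion can be written schematically as follows (my paraphrase of the abstract, with q(x, z) the encoder-side joint and p(x, z) the decoder-side joint; the adversarial discriminators stand in for the intractable density ratios):

```latex
\min \;\; \mathrm{KL}\big(q(x,z)\,\|\,p(x,z)\big) + \mathrm{KL}\big(p(x,z)\,\|\,q(x,z)\big)
\quad \text{while maximizing the two marginal log-likelihood bounds.}
```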
nips_2017_1560
#Exploration: A Study of Count-Based Exploration for Deep Reinforcement Learning Count-based exploration algorithms are known to perform near-optimally when used in conjunction with tabular reinforcement learning (RL) methods for solving small discrete Markov decision processes (MDPs). It is generally thought that count-based methods cannot be applied in high-dimensional state spaces, since most states will only occur once. Recent deep RL exploration strategies are able to deal with high-dimensional continuous state spaces through complex heuristics, often relying on optimism in the face of uncertainty or intrinsic motivation. In this work, we describe a surprising finding: a simple generalization of the classic count-based approach can reach near state-of-the-art performance on various high-dimensional and/or continuous deep RL benchmarks. States are mapped to hash codes, which allows counting their occurrences with a hash table. These counts are then used to compute a reward bonus according to the classic count-based exploration theory. We find that simple hash functions can achieve surprisingly good results on many challenging tasks. Furthermore, we show that a domain-dependent learned hash code may further improve these results. Detailed analysis reveals important aspects of a good hash function: 1) having appropriate granularity and 2) encoding information relevant to solving the MDP. This exploration strategy achieves near state-of-the-art performance on both continuous control tasks and Atari 2600 games, hence providing a simple yet powerful baseline for solving MDPs that require considerable exploration.
This paper is already available on arXiv and cited 10 times. It is a very good paper introducing a new approach to count-based exploration in deep reinforcement learning based on using binary hash codes. The approach is interesting, the presentation is didactic, the results are good, and the related literature is well covered. I learned a lot from reading this paper, so my only criticisms are on secondary aspects. I have a few global points: - I think it would be interesting to first train the convolutional autoencoder, and only then use your exploration method, to compare its performance to state-of-the-art methods. This would allow for a more detailed analysis by disentangling the factors of potential inefficiency. - I would put the "related work" section earlier in the paper, as the presentation of the methods would benefit from the points made in the first paragraph of related work and in the comparison with [4]. As a reader, I found that these points were important and arrived too late. - Maybe section 5.4 of the supplementary material should move into the main paper, and I don't share your main conclusion about it: if you look closely at Table 6, using n(s,a) is most often better than using n(s). To me, preferring n(s) to n(s,a) based just on the max in your table is inadequate, because maybe with an "optimized" (k,\beta) pair you will find that using n(s,a) is better... More local points: line 41: "However, we have not seen a very simple and fast method that can work across different domains." How do you disqualify VIME here? line 113: "An alternative method must be used to ensure that distinct states are mapped to distinct binary codes." Maybe the writing could be more direct there, presenting the appropriate approach directly. Fig. 5: despite magnifying the picture several times, I cannot see the baseline. Didn't you forget it? Footnote 4: the statement is weak. You should give references to the state-of-the-art performance papers. In the supplementary part: Section 1 would benefit from a big table giving all params and their values. Beyond you, it would be nice if the community could move towards such better practices. As is, Section 1 is hardly readable and it is very difficult to check if everything is given. Section 3: why do you show these images? Do you have a message about them? Either say something of interest about them, or remove them! line 136: can (be) obtained (this was the only typo I could find, congratulations! ;))
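For concreteness, here is a minimal sketch of the static-hashing variant of the method as I read it: SimHash maps a state representation to a k-bit code via a fixed random projection, counts are kept in a table, and the intrinsic reward is β/√(n(φ(s))). The class structure, variable names, and the identity preprocessor are my own; the paper's g(s) can be a domain-specific encoding or a learned autoencoder:

```python
import numpy as np
from collections import defaultdict

class HashCountBonus:
    def __init__(self, state_dim, k=16, beta=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.A = rng.normal(size=(k, state_dim))   # fixed Gaussian projection (SimHash)
        self.beta = beta
        self.counts = defaultdict(int)

    def _code(self, s):
        # g(s) would be a domain preprocessor (e.g. a learned AE); identity here
        return tuple((self.A @ s > 0).astype(np.int8))

    def bonus(self, s):
        key = self._code(s)
        self.counts[key] += 1
        return self.beta / np.sqrt(self.counts[key])

# usage: r_total = r_env + counter.bonus(state) at every transition
counter = HashCountBonus(state_dim=8)
print(counter.bonus(np.ones(8)), counter.bonus(np.ones(8)))  # bonus decays on revisits
```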
nips_2017_1706
Fully Decentralized Policies for Multi-Agent Systems: An Information Theoretic Approach Learning cooperative policies for multi-agent systems is often challenged by partial observability and a lack of coordination. In some settings, the structure of a problem allows a distributed solution with limited communication. Here, we consider a scenario where no communication is available, and instead we learn local policies for all agents that collectively mimic the solution to a centralized multi-agent static optimization problem. Our main contribution is an information theoretic framework based on rate distortion theory which facilitates analysis of how well the resulting fully decentralized policies are able to reconstruct the optimal solution. Moreover, this framework provides a natural extension that addresses which nodes an agent should communicate with to improve the performance of its individual policy.
This paper considers reinforcement learning in decentralized, partially observable environments. It presents an "information theoretic framework based on rate distortion theory which facilitates analysis of how well fully decentralized policies are able to reconstruct the optimal solution" and presents an extension "that addresses which nodes an agent should communicate with to improve the performance of its individual policy." The paper presents some proof-of-concept results on an optimal power flow domain. On the positive side, the idea of applying rate distortion to this problem is interesting and, as far as I can tell, novel. I rather like it and think it is a promising idea. A major problem, however, is that even though the paper is motivated from the perspective of sequential problems, the formalization in (1) is a static problem. (The authors should describe to what extent this corresponds to a DCOP problem.) In fact, I believe that the approach is only approximating the right objective for such 1-shot problems: the objective is to approximate the optimal *centralized* policy p(u* | x). However, such a centralized policy will not take into account the effect of its actions on the distribution of information over the agents in the future, and as such must be sub-optimal in a *decentralized* regime. Concretely: the optimal joint policy for a Dec-POMDP might sacrifice exploiting some information if that helps maintain predictability and therefore the future ability to successfully coordinate. Another weakness of the proposed approach is in fact this dependence on the optimal centralized policy. Even though computing the solution for a centralized problem is easier from a complexity point of view (e.g., MDPs are in P), this in practice is meaningless, since, save very special cases, the state and action spaces of these distributed problems are exponential in the number of agents. As such, it is typically not clear how to obtain this centralized policy. The paper gives a highly inadequate characterization of the related work. For instance, just the statement "In practice however, POMDPs are rarely employed due to computational limitations [Nair et al., 2005]." is wrong in so many ways: -the authors mention 'POMDP' but they cite a paper that introduces the 'networked distributed POMDP (ND-POMDP)', which is not a POMDP at all (but rather a special case of the decentralized POMDP (Dec-POMDP)). -the ND-POMDP makes very strong assumptions on the interactions of the agents ('transition and observation dependence'): essentially the agents cannot influence each other. Clearly this is not the use case that the paper aims at (e.g., in power flow the actions of the agents do affect each other), so really the paper should talk about (perhaps factored) Dec-POMDPs. -Finally, this statement seems like a way to try and dismiss "using a Dec-POMDP" on this ground. However, this is all just syntax: as long as the goal is to solve a decentralized decision making problem which is not a very special sub-class**, one can refuse to call it a Dec-POMDP, but the complexity will not go away. **e.g., see: Goldman, C.V. and Zilberstein, S., 2004. Decentralized control of cooperative systems: Categorization and complexity analysis. Also, the claim "this method is the first of its kind to provide a data-driven way to learn local cooperative policies for multi-agent systems" strikes me as rather outrageous.
I am not quite certain what "its kind" means exactly, but there are methods for RL in Dec-POMDPs that are completely decentralized, e.g.: Peshkin L, Kim KE, Meuleau N, Kaelbling LP. Learning to cooperate via policy search. In Proceedings of the Sixteenth conference on Uncertainty in artificial intelligence 2000 Jun 30 (pp. 489-496). Morgan Kaufmann Publishers Inc. An overview of Dec-POMDP RL methods is given in: Oliehoek, F.A. and Amato, C., 2016. A concise introduction to decentralized POMDPs. Springer International Publishing. All in all, I think there is an interesting idea in this submission that could be shaped into a good paper. However, there are a number of serious concerns that I think preclude publication at this point.
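For orientation, the rate-distortion view summarized above treats each agent's local policy as a lossy encoding of the centralized solution; the classical object behind it is the rate-distortion function (standard definition, my notation):

```latex
R(D) \;=\; \min_{p(\hat{u} \mid x)\,:\;\, \mathbb{E}[\,d(u^{*}, \hat{u})\,] \,\le\, D} \; I(X; \hat{U}),
```

i.e. the minimum information an agent must extract about the global state x in order to reconstruct the centralized decision u* to within distortion D.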
nips_2017_1707
Model-Powered Conditional Independence Test We consider the problem of non-parametric Conditional Independence testing (CI testing) for continuous random variables. Given i.i.d. samples from the joint distribution f(x, y, z) of continuous random vectors X, Y and Z, we determine whether X ⊥⊥ Y | Z. We approach this by converting the conditional independence test into a classification problem. This allows us to harness very powerful classifiers like gradient-boosted trees and deep neural networks. These models can handle complex probability distributions and allow us to perform significantly better compared to the prior state of the art, for high-dimensional CI testing. The main technical challenge in the classification problem is the need for samples from the conditional product distribution f_CI(x, y, z) = f(x|z) f(y|z) f(z) (the joint distribution if and only if X ⊥⊥ Y | Z) when given access only to i.i.d. samples from the true joint distribution f(x, y, z). To tackle this problem we propose a novel nearest neighbor bootstrap procedure and theoretically show that our generated samples are indeed close to f_CI in terms of total variation distance. We then develop theoretical results regarding the generalization bounds for classification for our problem, which translate into error bounds for CI testing. We provide a novel analysis of Rademacher type classification bounds in the presence of non-i.i.d., near-independent samples. We empirically validate the performance of our algorithm on simulated and real datasets and show performance gains over previous methods.
The authors propose a general test of conditional independence between three continuous variables. The authors don't make strong parametric assumptions -- rather, they propose a method that uses samples from the joint distribution to create samples that are close to the conditional product distribution in total variation distance. Then, they feed these samples to a binary classifier such as a deep neural net, and after training it on samples from each of the two distributions, use the predictive error of the net to either accept or reject the null hypothesis of conditional independence. They compare their proposed method to those existing in the literature, both with descriptions and with experimental results on synthetic and real data. Conditional independence testing is an important tool for modern machine learning, especially in the domain of causal inference, where we are frequently curious about claims of unconfoundedness. The proposed method appears straightforward and powerful enough to be used for many causal applications. Additionally, the paper is well-written. While the details of the theoretical results and proofs can be hard to follow, the authors do a good job of summarizing the results and making the methodology clear to practitioners. To me, the most impressive part of the paper is the theoretical results. The proposed method is a somewhat straightforward extension of the method for testing independence in the paper "Revisiting Classifier Two-Sample Tests" (https://arxiv.org/pdf/1610.06545.pdf), as the authors acknowledge (they modify the test to include a third variable Z, and after matching neighbors of Z, the methods are similar). However, the guarantees of the theoretical results are impressive. A few more comments: -Is there a discrete analog? The bootstrap becomes even more powerful when Z is discrete, right? -How sensitive is the test to the classification method used? Is it possible that a few changed hyperparameters can lead to different acceptance and rejection results? This would be useful to see alongside the simulations. -Section 1.1 of Main Contributions is redundant -- the authors summarize their approach multiple times, to an extent that appears unnecessary. -Writing out Algorithm 2 seems unnecessary -- this process can be described more succinctly, and has been described already in the paper. -Slight typos -- should be "classifier" in line 157, "graphs" in 280.
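To summarize the pipeline in code: the nearest-neighbor bootstrap swaps each sample's y with the y of another sample whose z is closest, which approximately yields draws from f(x|z) f(y|z) f(z); a classifier is then trained to distinguish original from bootstrapped samples, and near-chance test accuracy supports conditional independence. This is an illustrative sketch only (plain 1-NN matching, logistic regression in place of boosted trees or deep nets, and an informal 0.5 + tau threshold of my choosing):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def ci_test(x, y, z, tau=0.02, seed=0):
    n = len(z)
    # Nearest-neighbor bootstrap: replace y_i with the y of i's nearest neighbor in z
    nn = NearestNeighbors(n_neighbors=2).fit(z)
    idx = nn.kneighbors(z, return_distance=False)[:, 1]   # nearest other sample
    boot = np.hstack([x, y[idx], z])                      # ~ f(x|z) f(y|z) f(z)
    orig = np.hstack([x, y, z])                           # ~ f(x, y, z)

    X = np.vstack([orig, boot])
    lab = np.r_[np.zeros(n), np.ones(n)]
    Xtr, Xte, ltr, lte = train_test_split(X, lab, test_size=0.3, random_state=seed)
    acc = LogisticRegression(max_iter=1000).fit(Xtr, ltr).score(Xte, lte)
    return acc <= 0.5 + tau, acc    # near-chance accuracy supports X independent of Y given Z

rng = np.random.default_rng(0)
z = rng.normal(size=(2000, 1))
x, y = z + 0.3 * rng.normal(size=z.shape), z + 0.3 * rng.normal(size=z.shape)
print(ci_test(x, y, z))             # x and y are conditionally independent given z here
```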
nips_2017_1647
Gauging Variational Inference Computing the partition function is the most important statistical inference task arising in applications of Graphical Models (GM). Since it is computationally intractable, approximate methods have been used in practice, where mean-field (MF) and belief propagation (BP) are arguably the most popular and successful approaches of a variational type. In this paper, we propose two new variational schemes, coined Gauged-MF (G-MF) and Gauged-BP (G-BP), improving MF and BP, respectively. Both provide lower bounds for the partition function by utilizing the so-called gauge transformation, which modifies factors of the GM while keeping the partition function invariant. Moreover, we prove that both G-MF and G-BP are exact for GMs with a single loop of a special structure, even though the bare MF and BP perform badly in this case. Our extensive experiments indeed confirm that the proposed algorithms outperform and generalize MF and BP.
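Because the gauge transformation does the heavy lifting here, a small numerical check can build intuition: on a toy three-factor (triangle) Forney-style model, transforming each factor with matrices G_ab subject to G_ba = inv(G_ab)^T leaves the partition function unchanged. The Python sketch below is a hypothetical illustration of the invariance being exploited, not the authors' algorithm:

import numpy as np

rng = np.random.default_rng(0)

# Triangle Forney-style model: factors 0, 1, 2; each edge carries one binary
# variable and is shared by exactly two factors.
factor_edges = {0: [(0, 1), (0, 2)], 1: [(0, 1), (1, 2)], 2: [(1, 2), (0, 2)]}
factors = {a: rng.random((2, 2)) for a in factor_edges}  # f_a indexed by its two edges

def partition_function(fs):
    Z = 0.0
    for x01 in (0, 1):
        for x12 in (0, 1):
            for x02 in (0, 1):
                val = {(0, 1): x01, (1, 2): x12, (0, 2): x02}
                term = 1.0
                for a, es in factor_edges.items():
                    term *= fs[a][val[es[0]], val[es[1]]]
                Z += term
    return Z

# A random gauge on each edge: G[(a, b)] acts inside factor a, and the
# constraint G_ab G_ba^T = I is what keeps Z invariant.
G = {}
for (a, b) in [(0, 1), (1, 2), (0, 2)]:
    M = rng.random((2, 2)) + np.eye(2)          # invertible in practice
    G[(a, b)], G[(b, a)] = M, np.linalg.inv(M).T

def gauge(fs):
    out = {}
    for a, es in factor_edges.items():
        g0 = G[(a, es[0][0] if es[0][0] != a else es[0][1])]
        g1 = G[(a, es[1][0] if es[1][0] != a else es[1][1])]
        # f'_a(z0, z1) = sum_{x0, x1} f_a(x0, x1) g0[x0, z0] g1[x1, z1]
        out[a] = np.einsum('xy,xi,yj->ij', fs[a], g0, g1)
    return out

print(partition_function(factors), partition_function(gauge(factors)))  # equal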
The paper presents two new approaches to variational inference for graphical models with binary variables - an important fundamental problem. By adding a search over gauge transformations (which preserve the partition function) to the standard variational search over (pseudo-)marginals, the authors present methods called gauged mean field (G-MF, Alg 1) and gauged belief propagation (G-BP, Alg 2). Very helpfully, both are guaranteed to yield a lower bound on the true partition function - this may be important in practice and enables empirical testing for improved performance even for large, intractable models (since higher lower bounds are better). Initial experiments demonstrate the benefits of the new methods. The authors demonstrate technical skill and good background knowledge. The new approaches seem powerful and potentially very helpful in practice, and appear very sensible and natural when introduced in Sec 3.1 given the earlier background. The discussion and extensions in Sec 3.2 are interesting. Theorem 1 is a strong theoretical result - does it hold in general for a graph with at most one cycle? However, I have some important questions/concerns. If these were fixed, then this could be a strong paper: 1. Algorithm details and runtime. Both algorithms require a sequence of decreasing barrier terms \delta_1 > \delta_2... but I don't see any comments on how these should be set. What should one do in practice and how much impact could this have on runtime and accuracy? There appear to be no comments at all on runtime. Both algorithms will likely take much longer than the usual time for MF or BP. Are we guaranteed an improvement at each iteration? Even if not, because we always have a lower bound, we could use the max so far to yield an improving anytime algorithm. Without any runtime analysis, the empirical accuracy comparisons are not really meaningful. There are existing methods which will provide improved accuracy for more computation, and comparison would be very helpful (e.g. AIS or clamping approaches such as Weller and Jebara NIPS 2014; or Weller and Domke AISTATS 2016, which shows monotonic improvement for MF and TRW when each additional variable is clamped). Note that with clamping methods: if a model has at most one cycle, then by clamping any one variable on the cycle and running BP, we obtain the exact solution (compare to Theorem 1). More generally, if sufficient variables are clamped such that the remainder is acyclic then we obtain the exact solution. Of course, for a graph with many cycles this might take a long time - but this shows why considering accuracy vs runtime is important. What is the quality of the marginals returned by the new approaches? Does a better partition function estimate lead to better marginals? 2. Explanation of the gauge transformation. Given the importance here of the gauge transformation, which may be unfamiliar to many readers, it would help significantly if the explanation could be expanded and at least one example provided. I recognize that this is challenging given space constraints - even if it were in the Appendix it would be a big help. l. 177-179 please elaborate on why (8) yields a lower bound on the partition function. Minor details: l. 238-240 Interesting that G-BP performs similarly to BP for log-supermodular models. Any further comment on this? If/when runtime is added, we need more details on which method is used for regular BP. A few typos - l. 53 alternative -> alternating (as just defined in the previous sentence).
I suggest that 'frustrated cycle' may be a better name than 'alternating cycle' since a balanced cycle of even length could have edge signs which alternate. l. 92 two most -> two of the most l. 134-5 on the expense of making underlying -> at the expense of making the underlying In Refs, [11] ising -> Ising; [29] and [30] bethe -> Bethe
nips_2017_1421
Scalable Demand-Aware Recommendation Recommendation for e-commerce with a mix of durable and nondurable goods has characteristics that distinguish it from the well-studied media recommendation problem. The demand for items is a combined effect of form utility and time utility, i.e., a product must both be intrinsically appealing to a consumer and the time must be right for purchase. In particular for durable goods, time utility is a function of inter-purchase duration within product category because consumers are unlikely to purchase two items in the same category in close temporal succession. Moreover, purchase data, in contrast to rating data, is implicit with non-purchases not necessarily indicating dislike. Together, these issues give rise to the positive-unlabeled demand-aware recommendation problem that we pose via joint low-rank tensor completion and product category inter-purchase duration vector estimation. We further relax this problem and propose a highly scalable alternating minimization approach with which we can solve problems with millions of users and millions of items in a single thread. We also show superior prediction accuracies on multiple real-world datasets.
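As a toy illustration of the form-utility/time-utility decomposition described above (p_u, q_i, and the per-category duration d_cat are stand-ins for the latent factors and inter-purchase durations that the paper estimates jointly; this is not the paper's alternating-minimization solver):

import numpy as np

def demand_aware_score(p_u, q_i, t, last_purchase_in_cat, d_cat):
    form_utility = p_u @ q_i                                  # intrinsic appeal of item i to user u
    time_utility = float(t - last_purchase_in_cat >= d_cat)   # durable-goods gate
    return form_utility * time_utility

# Example: a phone bought 3 months ago suppresses phone recommendations when
# the estimated inter-purchase duration for the phone category is 24 months.
p_u, q_i = np.array([0.9, 0.1]), np.array([0.8, 0.3])
print(demand_aware_score(p_u, q_i, t=36, last_purchase_in_cat=33, d_cat=24))  # 0.0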
The paper revolves around the observation that in the e-commerce world customers rarely purchase two items that belong to the same category (e.g. smartphones) in close temporal succession. Therefore, they claim that a robust recommendation system should incorporate both form utility and time utility. An additional problem that is tackled in the paper is that many e-commerce systems have no explicit negative feedback to learn from (for example, one can see only what items a customer purchased - positives - and no explicit negatives in the form of items the user did not like). I believe that the second problem they mention is not as big of a concern as advertised by the authors. In the absence of any explicit negative signal, good replacements are long dwell-time clicks that did not end up in a purchase, as well as cart additions that did not end up in the final purchase, or returns. Many companies are also implementing swipe-to-dismiss, which is useful for collecting an explicit negative signal and can be applied to any e-commerce site easily. The authors clearly define and describe the objective of modeling user intent as a combined effect of form utility and time utility and identify limitations of minimizing the objective when formulated in the simplest form (as a tensor nuclear norm) in terms of high computational cost. To tackle that, the authors propose to relax the problem to a matrix optimization problem with a label-dependent loss and propose an efficient alternating minimization algorithm whose complexity is proportional to the number of purchase records in the matrix. I see the biggest strength of this paper as reformulating the objective and adding relaxations that significantly reduce computational cost and still achieve good results. Introducing the time component adds another dimension and, as you mentioned, it results in a significant rise in computational cost, O(mnl), where l is the number of time slots. What happens when you drop the time component in terms of accuracy? I saw that in your experiment you compare to many traditional collaborative filtering algorithms, but you do not compare to your algorithm that doesn't take time into account or that uses different definitions of what a time slot is (e.g., just as a two-level definition it could be: purchased this month or not). Related work is generally covered well. Experiments only cover offline evaluation. Therefore, we have no sense of how this algorithm would work in practice in a production system. For example, I would be curious to compare against just showing popular items as recommendations (as done in this paper: E-commerce in your inbox: Product recommendations at scale, KDD 2015). The paper would be much stronger with such results. Also, additional evaluation metrics should be considered, like traditional ranking metrics, precision at K, etc. It would also be interesting to put this in the context of ad retargeting. It is well known that users on the Web are re-targeted with the same product they bought (or the same category) even after they bought the product. While some see this as a waste of ad dollars, some claim this still brings in revenue, as users often want to see items from the same category in order to get reassurance that they bought the best product from the category. In practice, it also happens that they return the original product and purchase the recommended one from the same category. In practice, production recommender systems take this signal into account as well.
nips_2017_2588
Active Exploration for Learning Symbolic Representations We introduce an online active exploration algorithm for data-efficiently learning an abstract symbolic model of an environment. Our algorithm is divided into two parts: the first part quickly generates an intermediate Bayesian symbolic model from the data that the agent has collected so far, which the agent can then use along with the second part to guide its future exploration towards regions of the state space that the model is uncertain about. We show that our algorithm outperforms random and greedy exploration policies on two different computer game domains. The first domain is an Asteroids-inspired game with complex dynamics but basic logical structure. The second is the Treasure Game, with simpler dynamics but more complex logical structure.
Key question asked in the paper: Can an agent actively explore to build a probabilistic symbolic model of the environment from a given set of option and state abstractions? And does this help make future exploration easy via MCTS? This paper proposes a knowledge representation framework for addressing these questions. Proposal: A known continuous state space and a discrete set of options (restricted to subgoal options) are given. An options model is treated as a semi-MDP process. A plan is defined as a sequence of options. The idea of a probabilistic symbol is invoked from earlier work to refer to a distribution over infinitely many continuous low-level states. The idea of state masks is introduced to find independent factors of variation. Then each abstract subgoal option is defined as a policy that leads to a subgoal for the masked states, e.g., opening a door. But since there could be many doors in the environment, the idea of a partitioned abstract subgoal option is proposed to bind together subgoal options for each instance of a door. The agent then uses these partitioned abstract subgoal options during exploration. Optimal exploration is then defined as finding a policy via MCTS that leads to the greatest reduction of uncertainty over the distributions of the proposed symbolic option model. Feedback: - There are many ideas proposed in this paper. This makes it hard to clearly communicate the contributions. This work builds on previous work from Konidaris et al. Is the novel contribution in the MCTS algorithm? The contributions should be made more explicit for unfamiliar readers. - It is difficult to keep track of what's going on in Sec. 3. I would additionally make Alg. 1 more self-contained and use one of the working examples (asteroid or maze) to explain things. - Discuss the assumption of restricting to subgoal options. What kinds of behaviors does this framework not permit? This should be discussed in the paper. - Another restrictive assumption: "We use factors to reduce the number of potential masks, i.e. we assume that if state variables i and j always change together in the observations, then this will always occur. An example of a factor could be the (x, y, z) position of your keys, because they are almost never moved along only one axis". What are the implications of this assumption on the possible range of behaviors? - The experimental setup needs to be developed more to answer, visualize and discuss a few interesting questions: (1) how does the symbol model change with active exploration? (2) visualize the space of policies at various intermediate epochs. Without this it is hard to get an intuition for the ideas and ways to expand them in the future. - I believe this is a very important line of work. However, clarity and articulation in the writing/experiments remain my main concern for a clear accept. Since some of the abstractions are fairly new, the authors also need to clearly discuss the implications of all their underlying assumptions.
nips_2017_3527
Active Learning from Peers This paper addresses the challenge of learning from peers in an online multitask setting. Instead of always requesting a label from a human oracle, the proposed method first determines if the learner for each task can acquire that label with sufficient confidence from its peers either as a task-similarity weighted sum, or from the single most similar task. If so, it saves the oracle query for later use in more difficult cases, and if not it queries the human oracle. The paper develops the new algorithm to exhibit this behavior and proves a theoretical mistake bound for the method compared to the best linear predictor in hindsight. Experiments over three multitask learning benchmark datasets show clearly superior performance over baselines such as assuming task independence, learning only from the oracle and not learning from peer tasks.
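A minimal sketch of the peer-vs-oracle routing rule summarized above; the similarity-weighted vote and confidence threshold are illustrative stand-ins for the paper's exact query criterion:

import numpy as np

def route_query(x, task, W, tau, conf_threshold=1.0):
    # W: (n_tasks, d) per-task weight vectors; tau: (n_tasks, n_tasks)
    # task-similarity matrix, with tau[task] normalized to sum to 1.
    peer_score = tau[task] @ (W @ x)         # similarity-weighted peer vote
    if abs(peer_score) >= conf_threshold:
        return np.sign(peer_score), "peer"   # confident: save the oracle query
    return None, "oracle"                    # uncertain: ask the human oracle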
The authors propose an extension of the active learning framework to address two key issues: active learning of multiple related tasks, and using the 'peers' as an additional oracle to reduce the cost of querying the user all the time. The experimental evaluation of the proposed learning algorithms is performed on several benchmark datasets, and in this context, the authors validate their claims. The main novelty lies in the extension of the SHAMPO algorithm so that it can leverage the information from multiple tasks, and also peer-oracles (using the confidence in the predictions), to reduce the annotation effort required from the user. A sound theoretical formulation to incorporate these two is proposed. Experimental results show the validity of the proposed approach. The authors did not comment on the cost of using the peers as oracles, but assumed their availability. This should be detailed more. Also, it is difficult to assess the model's performance fully compared to the SHAMPO algorithm, as its results are missing in Table 1. The accuracy difference between peer and one version of the model is marginal. What is the benefit of the former? Overall, this is a nice paper with a good mix of theory and empirical analysis. There are lots of typos in the text that should be corrected.
nips_2017_2674
K-Medoids for K-Means Seeding We show experimentally that the algorithm clarans of Ng and Han (1994) finds better K-medoids solutions than the Voronoi iteration algorithm of Hastie et al. (2001). This finding, along with the similarity between the Voronoi iteration algorithm and Lloyd's K-means algorithm, motivates us to use clarans as a K-means initializer. We show that clarans outperforms other algorithms on 23/23 datasets with a mean decrease over k-means++ (Arthur and Vassilvitskii, 2007) of 30% for initialization mean squared error (MSE) and 3% for final MSE. We introduce algorithmic improvements to clarans which improve its complexity and runtime, making it a viable initialization scheme for large datasets.
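For reference, a compact Python sketch of the swap-based clarans search, used here only to produce K-means seeds. The O(n^2) distance matrix makes this a toy version; the complexity and runtime improvements the abstract mentions are deliberately omitted:

import numpy as np

def clarans_seeds(X, k, n_swaps=500, seed=0):
    rng = np.random.default_rng(seed)
    n = len(X)
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)   # toy: O(n^2) memory
    medoids = rng.choice(n, size=k, replace=False)
    cost = D[:, medoids].min(axis=1).sum()
    for _ in range(n_swaps):
        slot, cand = rng.integers(k), rng.integers(n)
        if cand in medoids:
            continue
        trial = medoids.copy()
        trial[slot] = cand
        trial_cost = D[:, trial].min(axis=1).sum()
        if trial_cost < cost:                              # accept improving swaps only
            medoids, cost = trial, trial_cost
    return X[medoids]                                      # pass to k-means as init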
The authors propose to use a particular version of the K-medoids algorithm (clarans, which uses iterative swaps to identify the medoids) for initializing k-means and claim that this improves the final clustering quality. The authors have also tested their claims with multiple datasets, and demonstrated their performance improvements. They have also published code that will be made open after the review process. Comments: 1. The paper is easy to read and follow, and the authors have done a good job placing their work in context. I appreciate the fact that the optimizations are presented in a very accessible manner in Section 4. As the authors claim, open source code is an important contribution. 2. The authors should consider the following work for inclusion in their discussions and/or comparisons: Bahmani, Bahman, et al. "Scalable k-means++." Proceedings of the VLDB Endowment 5.7 (2012): 622-633. 3. While the authors have compared actual run times of different algorithms in Section 5, can they also provide some idea of the theoretical complexity (whenever possible)? 4. Related to the previous comment: how much impact would the actual implementation have on the runtimes, particularly given that the "bf" algorithm uses BLAS routines and hence can implicitly parallelize matrix operations? Is there any other place in the implementations where parallel processing is done? 5. Please number the data sets in Table 3 to make it easy to compare with Table 4. 6. In column 1 (km++) of Table 4, why is it that km++ does not always run up to 90 iterations, given that TL was based on this time? 7. In 5.2, the claim that "clarans often finds a globally minimal initialization" seems a bit too speculative. If the authors want to claim this they must run many rounds on at least a few datasets, and show that the performance cannot get better by much.
nips_2017_3214
Matrix Norm Estimation from a Few Entries Singular values of data in matrix form provide insights into the structure of the data, the effective dimensionality, and the choice of hyper-parameters for higher-level data analysis tools. However, in many practical applications such as collaborative filtering and network analysis, we only get a partial observation. Under such scenarios, we consider the fundamental problem of recovering various spectral properties of the underlying matrix from a sampling of its entries. We propose a framework of first estimating the Schatten k-norms of a matrix for several values of k, and using these as surrogates for estimating spectral properties of interest, such as the spectrum itself or the rank. This paper focuses on the technical challenges in accurately estimating the Schatten norms from a sampling of a matrix. We introduce a novel unbiased estimator based on counting small structures in a graph and provide guarantees that match its empirical performance. Our theoretical analysis shows that Schatten norms can be recovered accurately from a strictly smaller number of samples compared to what is needed to recover the underlying low-rank matrix. Numerical experiments suggest that we significantly improve upon a competing approach of using matrix completion methods.
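For intuition, here is a crude plug-in baseline in Python built on the same identity: for a symmetric matrix, the Schatten k-norm to the k-th power equals the total weight of closed k-walks, tr(M^k), and under uniform sampling with probability p one can rescale the observed entries. This naive rescaling is biased whenever a walk revisits an entry; the paper's structure-counting estimator corrects exactly for that, so the sketch below is only a reference point, not the proposed method:

import numpy as np

def schatten_k_plugin(M_observed, mask, p, k):
    # M_observed: symmetric matrix with unobserved entries zeroed;
    # mask: boolean matrix of observed entries; p: sampling probability.
    A = np.where(mask, M_observed / p, 0.0)
    return np.trace(np.linalg.matrix_power(A, k))   # estimates ||M||_k^k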
Summary of paper: The paper proposes an estimator for the Schatten norm of a matrix when only a few entries of the matrix are observed. The estimator is based on the relation between the Schatten k-norm of a matrix and the total weight of all closed k-length walks on the corresponding weighted graph. While the paper restricts the discussion to symmetric positive semi-definite matrices, and guarantees are given for uniform sampling, the generic principle is applicable for any matrix and any sampling strategy. The results hold under standard assumptions, and corollaries related to spectrum and rank estimation are also provided in the appendix. Clarity: The paper is very well written and organised, including some of the material in the supplementary. The motivation for the work and the discussion that builds up to the estimator are well presented. The numerical simulations are also perfectly positioned. Quality: The technical quality of the paper is high. The main structure of the proofs is correct, but I didn't verify the computations in equations (26)-(36). The most pleasing component of the theoretical analysis is the level at which it is presented. The authors restrict the discussion to symmetric psd matrices for ease of presentation, but the sampling distribution is kept general until the final guarantees in Section 3.1. The authors also correctly note that estimating the Schatten norm itself is never the end goal, and hence, they provide corollaries related to spectrum and rank estimation in the appendix. The numerical simulations do not involve a real application, but clearly show the advantage of the proposed estimator over other techniques when the sampling rate is very small. That being said, I feel that further numerical studies based on this approach are important. Significance: The paper considers a classical problem that lies at the heart of several matrix related problems. Hence, the paper is very relevant for the machine learning community. The generality at which the problem is solved, or its possible extensions, has several potential applications. To this end, I feel that it would be great if the authors can follow up this work with a longer paper that also includes the more complicated case of rectangular matrices, analyses other sampling strategies (like column sampling / submatrix sampling), and more thorough experimental studies. Originality: Since the underlying problem is classical, one should consider that many ideas have been tried out in the literature. I feel that the individual components of the discussion and analysis are fairly standard techniques (except perhaps the computational simplification for k<=7). But I feel that the neat way in which the paper combines everything is very original. For instance, the use of polynomial concentration in the present context is intuitive, but the careful computation of the variation bound is definitely novel. Minor comments: 1. Line 28-29: Can we hope to estimate the spectrum of a matrix from a submatrix? I am not very optimistic about this. 2. Appendix B: It would be nice if a description of this method were added. In particular, what is the time complexity of the method? 3. Line 238,262,281,292: In standard complexity terminology, I think d^2p = O(something) should actually be d^2p = \Omega(something). Big-O usually means at most up to a constant, whereas here I think you would like to say that the number of observed entries is at least something (Big-Omega). 4. Line 263: How do you get the condition on r?
The third term in (8) is less than 1 if r is less than d^(1-1/k).
nips_2017_399
Hypothesis Transfer Learning via Transformation Functions We consider the Hypothesis Transfer Learning (HTL) problem where one incorporates a hypothesis trained on the source domain into the learning procedure of the target domain. Existing theoretical analysis either only studies specific algorithms or only presents upper bounds on the generalization error but not on the excess risk. In this paper, we propose a unified algorithm-dependent framework for HTL through a novel notion of transformation function, which characterizes the relation between the source and the target domains. We conduct a general risk analysis of this framework and, in particular, we show for the first time that, if the two domains are related, HTL enjoys faster convergence rates of excess risks for Kernel Smoothing and Kernel Ridge Regression than those of the classical non-transfer learning settings. Experiments on real world data demonstrate the effectiveness of our framework.
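One simple member of this family, sketched in Python with scikit-learn: take the transformation function to be an additive offset, so the target hypothesis is h_src(x) + w(x), fit h_src on the plentiful source data, and fit the offset w on the target residuals by kernel ridge regression. This illustrates the flavor of the framework under an assumed TF choice, not the paper's general algorithm:

from sklearn.kernel_ridge import KernelRidge

def htl_offset(Xs, ys, Xt, yt, alpha=1.0, kernel="rbf"):
    h_src = KernelRidge(alpha=alpha, kernel=kernel).fit(Xs, ys)  # source hypothesis
    # Offset TF fitted on target residuals: w(x) ~ y_t - h_src(x_t).
    w = KernelRidge(alpha=alpha, kernel=kernel).fit(Xt, yt - h_src.predict(Xt))
    return lambda X: h_src.predict(X) + w.predict(X)             # target hypothesis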
The paper presents a supervised non-parametric hypothesis transfer learning (HTL) approach for regression and its analysis, aimed at the cases where one has plenty of training data coming from the source task and few examples from the target one. The paper makes the assumption that the source and the target regression functions are related through a so-called transformation function (TF). The TF is assumed to have some parametric form (e.g. linking regression functions linearly) and the goal of an algorithm is to recover its parameters. Once these parameters are learned, the hypothesis trained on the source task can be transformed into the hypothesis designated for the target task. The paper proposes two ways for estimation of these parameters, that is through kernel smoothing and kernel ridge regression. For both cases the paper presents consistency bounds, showing that one can have an improvement in the exponent of the non-parametric rate once the TF is smoother, in the Hölder sense, than the target regression function. In addition, the paper argues that one can run model selection over the class of TFs through cross-validation and shows some bounds to justify this, which is not particularly surprising. Finally, the paper concludes with numerical experiments on two regression tasks on real-world small-scale datasets. The rates stated in the main theorems, 2 and 3, appear to make sense. One can see this from Theorem 1, which is used to prove both, and essentially states the excess risk as a decomposition into the sum of excess risk bounds for the source/TF regression functions. The latter are bounded through well-established excess risk bounds from the literature. I do not think this is a breakthrough paper, but it has a novel point: it addresses the theory of HTL in the non-parametric setting, studying Bayes risk rates rather than the generalization bounds considered in previous theoretical works. As one should expect, the improvement then occurs in the right place, that is in the exponent of the rate, rather than in some additive or multiplicative term. At the same time the algorithm appears to be simple and easy to implement. This makes this paper a good contribution to the literature on HTL. Also, the paper is quite well positioned in the related work. Unfortunately, it doesn't compare to previous HTL approaches experimentally, and the experimental evaluation is, frankly, quite modest, despite the simplicity of the algorithm.
nips_2017_2811
Multi-View Decision Processes: The Helper-AI Problem We consider a two-player sequential game in which agents have the same reward function but may disagree on the transition probabilities of an underlying Markovian model of the world. By committing to play a specific policy, the agent with the correct model can steer the behavior of the other agent, and seek to improve utility. We model this setting as a multi-view decision process, which we use to formally analyze the positive effect of steering policies. Furthermore, we develop an algorithm for computing the agents' achievable joint policy, and we experimentally show that it can lead to a large utility increase when the agents' models diverge. 1 Introduction. In the past decade, we have been witnessing the fulfillment of Licklider's profound vision on AI [Licklider, 1960]: Man-computer symbiosis is an expected development in cooperative interaction between men and electronic computers. Needless to say, such a collaboration between humans and AIs is natural in many real-world AI problems. As a motivating example, consider the case of autonomous vehicles, where a human driver can override the AI driver if needed. With advances in AI, the human will benefit most if she allows the AI agent to assume control and drive optimally. However, this might not be achievable: due to human behavioral biases, such as over-weighting the importance of rare events, the human might incorrectly override the AI. In this way, the misaligned models of the two drivers can lead to a decrease in utility. In general, this problem may occur whenever two agents disagree on their view of reality, even if they cooperate to achieve a common goal. Formalizing this setting leads to a class of sequential multi-agent decision problems that extend stochastic games. While in a stochastic game there is an underlying transition kernel to which all agents (players) agree, the same is not necessarily true in the described scenario. Each agent may have a different transition model. We focus on a leader-follower setting in which the leader commits to a policy that the follower then best responds to, according to the follower's model. Mapped to our motivating example, this would mean that the AI driver is aware of human behavioral biases and takes them into account when deciding how to drive. To incorporate both sequential and stochastic aspects, we model this as a multi-view decision process. Our multi-view decision process is based on an MDP model, with two, possibly different, transition kernels. One of the agents, hereafter denoted as P1, is assumed to have the correct transition kernel and is chosen to be the leader of the Stackelberg game: it commits to a policy that the second agent (P2) best-responds to according to its own model. The agents have the same reward function, and are in this sense cooperative. In an application setting, while the human (P2) may not be a planner, we motivate our set-up as modeling the endpoint of an adaptive process that leads P2 to adopt a best-response to the policy of P1. Using the multi-view decision process, we analyze the effect of P2's imperfect model on the achieved utility. We place an upper bound on the utility loss due to this, and also provide a lower bound on how much P1 gains by knowing P2's model. One of our main analysis tools is the amount of influence an agent has, i.e.
how much its actions affect the transition probabilities, both according to its own model, and according to the model of the other agent. We also develop an algorithm, extending backwards induction for simultaneous-move sequential games [c.f. Bošanskỳ et al., 2016], to compute a pair of policies that constitute a subgame perfect equilibrium. In our experiments, we introduce intervention games as a way to construct example scenarios. In an intervention game, an AI and a human share control of a process, and the human can intervene to override the AI's actions but suffers some cost in doing so. This allows us to derive a multi-view process from any single-agent MDP. We consider two domains: first, the intervention game variant of the shelter-food game introduced by Guo et al. [2013], as well as an autonomous driving problem that we introduce here. Our results show that the proposed approach provides a large increase in utility in each domain, thus overcoming the deficiencies of P 2 's model, when the latter model is known to the AI.
The paper introduces multi-view decision processes (MVDP) as a cooperative game between two agents, where one agent may have an imperfect model of the transition probabilities of the other agent. Generally, the work is well motivated, the problem is interesting and clearly relevant to AI and the NIPS community. The theoretical results are nicely complemented by experiments. Presentation is mostly smooth: the main goals are consistent with the narrative throughout the paper, but some details of the experiments seem less clear (see points below). Some points are confusing and need further clarification: 1. In the second footnote on page 3, what is a "worst-case best response"? This is not defined; it seems like they should all share the property that they attain the maximum value against some policy. 2. The authors talk about solving using backward induction and make reference to Bozansky et al 2016 and Littman 1994, but those works treat exclusively the zero-sum case. MVDP is a cooperative game that can be at least as hard as general-sum games (by Lemma 1), and backward induction generally cannot solve these due to equilibrium selection problems leading to non-unique values for subgames, also shown in Zinkevich et al 2005. How is it possible to apply backward induction by propagating single Q and V values here then? The authors claim to find an approximate joint optimal policy, which is stronger than a Nash equilibrium, and it seems by Zinkevich et al 2005 this should at least require cyclic (nonstationary) strategies. 3. From the start the authors talked about stochastic policies, but this is a cooperative identical payoff game. It only becomes clear later (in the proof of Lemma 1) that mixed strategies are necessary due to the different views leading to an effectively general-sum game. How then could the deterministic version of the algorithm, which computes a pure stationary policy, be approximately optimal? Even in the Stackelberg case, mixing could be necessary. Minor: The role of c in the c-intervention games is not clear. On page 6, the authors write that a cost c(s) is subtracted from the reward r_t when P2 takes an action other than a^0. But then the equation in Remark 1, rho(s, a) = [rho(s) + c I{a_{t,2} != a^0}] / (1+c), does not subtract c. This can be easily fixed by rewording the text at the bottom of page 6.
nips_2017_3286
Towards Generalization and Simplicity in Continuous Control This work shows that policies with simple linear and RBF parameterizations can be trained to solve a variety of widely studied continuous control tasks, including the gym-v1 benchmarks. The performance of these trained policies is competitive with state of the art results, obtained with more elaborate parameterizations such as fully connected neural networks. Furthermore, the standard training and testing scenarios for these tasks are shown to be very limited and prone to over-fitting, thus giving rise to only trajectory-centric policies. Training with a diverse initial state distribution induces more global policies with better generalization. This allows for interactive control scenarios where the system recovers from large on-line perturbations, as shown in the supplementary video.
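A sketch of the RBF parameterization in question: random Fourier features of the state followed by a linear map to actions. The feature count and bandwidth are illustrative, and the paper trains such policies with natural policy gradient, which this snippet does not implement:

import numpy as np

class RBFPolicy:
    def __init__(self, state_dim, action_dim, n_features=500, bandwidth=1.0, seed=0):
        rng = np.random.default_rng(seed)
        self.P = rng.normal(size=(state_dim, n_features)) / bandwidth
        self.phase = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
        self.W = np.zeros((n_features, action_dim))   # the only trained parameters

    def features(self, s):
        return np.sin(s @ self.P + self.phase)        # random Fourier features

    def act(self, s):
        return self.features(s) @ self.W              # action is linear in features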
The paper evaluates the natural policy gradient algorithm with simple linear policies on a wide range of "challenging" problems from the OpenAI MuJoCo environments, and shows that these shallow policy networks can learn effective policies in most domains, sometimes faster than NN policies. It further explores learning robust and more global policies by modifying existing domains, e.g. adding many perturbations and diverse initial states. The first part of the paper, while not proposing new approaches, offers interesting insights into the performance of linear policies, given the plethora of prior work applying NN policies by default to these problems. This part can be further strengthened by doing an ablation study on the RL optimizer. Specifically, GAE, sigma vs alpha in Eq. 5, and small trajectory batch vs large trajectory batch (SGD vs batch opt). In particular, it would be nice to confirm whether there are subtle differences from prior NPG implementations with linear policies that made the results in this paper significantly better (e.g. in TRPO [Schulman et al., 2015], one of the major differences was sigma vs alpha). It could be unsurprising that a global linear policy may perform very well in a narrow state space. The second part of the paper tries to address such concerns by modifying the environments such that the agents must learn globally robust policies to succeed. The paper then again showed that the RBF/linear policies could learn effective recovery policies. It would be great to include corresponding learning curves for these in Section 4 beyond Figure 4, comparing NN, linear and RBF. Are there figures available? Additional comment: - A nice follow-up experiment would be to show whether linear policies can solve control problems from vision, with some generic pre-trained convolutional filters (e.g. ImageNet). This would further show that existing domains are very simple and do not require deep end-to-end optimization.
nips_2017_1380
Beyond normality: Learning sparse probabilistic graphical models in the non-Gaussian setting We present an algorithm to identify sparse dependence structure in continuous and non-Gaussian probability distributions, given a corresponding set of data. The conditional independence structure of an arbitrary distribution can be represented as an undirected graph (or Markov random field), but most algorithms for learning this structure are restricted to the discrete or Gaussian cases. Our new approach allows for more realistic and accurate descriptions of the distribution in question, and in turn better estimates of its sparse Markov structure. Sparsity in the graph is of interest as it can accelerate inference, improve sampling methods, and reveal important dependencies between variables. The algorithm relies on exploiting the connection between the sparsity of the graph and the sparsity of transport maps, which deterministically couple one probability measure to another.
This paper presents a method for identifying the independence structure of undirected probabilistic graphical models with continuous but non-Gaussian distributions, using SING, a novel iterative algorithm based on transport maps. The authors derive an estimate for the number of samples needed to recover the exact underlying graph structure with some probability and demonstrate empirically that SING can indeed recover this structure on two simple domains, where comparable methods that make invalid assumptions about the underlying data-generating process fail. Quality. The paper seems technically sound, with its claims and conclusions supported by existing and provided theoretical and empirical results. The authors mostly do a good job of justifying their approach, but do not discuss potential issues with the algorithm. For example, what is the complexity of SING? Since it has a variable-elimination-like aspect, is the complexity exponential in the treewidth of the generated graph? If so, how is this approach feasible for non-sparse graphs? In that vein, I would like to see how SING fares empirically when applied to more realistic and higher-dimensional problems and in comparison to other standard MRF-learning approaches. The latter typically make overly strong assumptions, but can still perform well given that there are rarely enough data points for true structures to be recovered. Clarity. The writing is good, and each subsection is clear, but the authors should better define their notation and spend more time explaining the algorithm as a whole and gluing the pieces together. In particular, what exactly happens in subsequent iterations? A quick example or a brief high-level walk-through of the algorithm would be helpful here to assemble all of the pieces that have been described into a coherent whole. Also, is the amount of sparsity that is estimated monotonic over iterations? If SING over-estimates the amount of sparsity present in the data, can it then reduce the amount of sparsity it is estimating? The sparsity pattern identified on line 8 of Alg. 1 is used in line 2, correct? (Note that you use I_{S_{i+1}} and S^B_{I_{i}}, respectively, on these lines – is I_i the same as I_{S_{i+1}} from the previous iteration?) It's not clear to me how this sparsity pattern gets used in the estimation of the transport map. You should more clearly define where the original and induced graphs come from. From what I can tell, the original graph is the result of the thresholded \Sigma (but no graph is discussed in Alg. 1), and the induced graph comes from some variable-elimination-like algorithm, but it's not clear how this actually happens. Does Alg. 1 perform variable elimination? What are the factors? Why do you need to do this? Originality. The approach seems novel and original, although I am not that familiar with transport maps and their corresponding literature. Significance. The presented approach seems interesting and potentially useful, but lacks sufficient evaluation as the authors do not evaluate or discuss the complexity of their algorithm and only evaluate on two simple problems. It would also be useful to properly conclude section 5 by better explaining and contextualizing the result. In particular, is the number of samples needed high or low relative to other works that estimate similar types of graph structures, or is this a completely novel analysis? Overall.
With additional results, editing, and a more cohesive explanation of their approach, this could be a successful publication, but I do not think it has reached that threshold as it stands. Detailed comments: - Please define your notation. For example, in the introduction you should define the independence symbol and \bf{x}_{jk}. In section 2, you use both d_j d_k for (I think) partial derivatives, but then later use d_{jk} for (I assume) the same quantity, but define neither. You should define det() and the circle, which is typically used for function composition. The V on line 154 should be defined. The projection (?) operator in the equation below line 162 should be defined. Etc. - Line 75: inline citations should be Author (Year), instead of the number (use \citet instead of \citep if using natbib) - There are a few typos and duplicated words (e.g., lines 86, 191). - The fonts on your figures are barely legible when printed – the sizes should be increased. - Alg. 1, line 1, "DECREASING = TRUE" does not show when printed, perhaps due to font choice or embedding. - Lines 156 and 173: These should be referred to as Equation (6) and Equation (7), not Line (6) and (7). - Figure 3d shows SING(B=1), but the B=1 case has not been explained before this, so it looks like a failure case of the algorithm. Better would be to put the success case in Figure (3d) and then show the failure case in Figure (4) (separate from the others) and discuss the effect of B=1 there. - Figure 5: it's confusing to show the true graph (5a) as the 'recovered' graph. You should show the recovered adjacency matrix for SING as an added sub-figure of Figure 5. Further, the x- and y- axis labels are just numbers, meaning that it's not clear which numbers correspond to the hyperparameters, so lines 232-233 are confusing. - Figures 3-6: it would be useful to have a legend or explanation somewhere that blue means an edge and white means no edge. ---- After reading the author response, I have updated my score to 'weak accept' as I agree that the NIPS community will benefit from this work and my main concerns about complexity with respect to the use of variable elimination have been addressed. However, the exact use of the variable elimination heuristics still remains somewhat unclear to me and the authors should clarify and be more explicit about how it is used in their algorithm.
nips_2017_291
Learning a Multi-View Stereo Machine We present a learnt system for multi-view stereopsis. In contrast to recent learning based methods for 3D reconstruction, we leverage the underlying 3D geometry of the problem through feature projection and unprojection along viewing rays. By formulating these operations in a differentiable manner, we are able to learn the system end-to-end for the task of metric 3D reconstruction. End-to-end learning allows us to jointly reason about shape priors while conforming to geometric constraints, enabling reconstruction from much fewer images (even a single image) than required by classical approaches as well as completion of unseen surfaces. We thoroughly evaluate our approach on the ShapeNet dataset and demonstrate the benefits over classical approaches and recent learning based methods.
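A rough NumPy sketch of the unprojection operation described above: project each voxel center through the camera matrix and bilinearly sample the 2D feature map at the projected location. Shapes and names are illustrative (it assumes voxels in front of the camera), and in the paper this operation is implemented differentiably inside the network:

import numpy as np

def unproject(feat2d, P, voxels):
    # feat2d: (H, W, C) image features; P: (3, 4) camera matrix; voxels: (N, 3).
    H, W, C = feat2d.shape
    hom = np.hstack([voxels, np.ones((len(voxels), 1))]) @ P.T
    u, v = hom[:, 0] / hom[:, 2], hom[:, 1] / hom[:, 2]    # pixel coordinates
    u0, v0 = np.floor(u).astype(int), np.floor(v).astype(int)
    wu, wv = (u - u0)[:, None], (v - v0)[:, None]

    def sample(vi, ui):
        ok = (ui >= 0) & (ui < W) & (vi >= 0) & (vi < H)
        out = np.zeros((len(voxels), C))
        out[ok] = feat2d[vi[ok], ui[ok]]
        return out

    # Standard bilinear interpolation of the four surrounding pixels.
    return ((1 - wu) * (1 - wv) * sample(v0, u0)
            + wu * (1 - wv) * sample(v0, u0 + 1)
            + (1 - wu) * wv * sample(v0 + 1, u0)
            + wu * wv * sample(v0 + 1, u0 + 1))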
This submission proposes to learn 3D object reconstruction from a multi-view stereo setting end-to-end. Its main contribution is consistent feature projection and unprojection along viewing rays given camera poses and multiple images. Instead of mainly learning object shapes based on semantic image interpretation, this work builds projective geometry into the approach to benefit from geometric cues, too. The paper proposes a way for differentiable unprojection that projects 2D image features to 3D space using camera projection equations and bilinear sampling. This is a simple yet effective way to implicitly build epipolar constraints into the system, because features from multiple views of the same world point unproject to the same voxel. While such a combination of multiple views is of course not new, it is new as part of an end-to-end learnable approach. I did not completely understand how recurrent grid fusion works; please explain it in more detail. A dedicated figure might ease understanding, too. I would also appreciate a short note on GPU RAM requirements and run-time. The paper is very well written and illustrations make it enjoyable to read. The approach is experimentally evaluated by comparison to traditional multi-view stereo matching algorithms (e.g., plane sweeping) and it is compared to the recent 3D-R2N2 approach, which also uses deep learning for 3D object reconstruction. While all experiments are convincing and support the claims of the paper, it would be good to compare to more deep learning-based 3D reconstruction approaches (several are cited in the paper, e.g. [27], plus additional work like "DeMoN: Depth and Motion Network for Learning Monocular Stereo" from Thomas Brox's lab). A clear explanation of the theoretical differences to [27] would further improve the paper. At present, without reading [27] itself, it remains unclear whether this submission is just a multi-view extension of [27]. Better motivating their own contribution here seems necessary. I believe this is a great paper and deserves to be published at NIPS, but I must also admit that I still have some questions.
nips_2017_2799
Approximate Supermodularity Bounds for Experimental Design This work provides performance guarantees for the greedy solution of experimental design problems. In particular, it focuses on A- and E-optimal designs, for which typical guarantees do not apply since the mean-square error and the maximum eigenvalue of the estimation error covariance matrix are not supermodular. To do so, it leverages the concept of approximate supermodularity to derive nonasymptotic worst-case suboptimality bounds for these greedy solutions. These bounds reveal that as the SNR of the experiments decreases, these cost functions behave increasingly as supermodular functions. As such, greedy A- and E-optimal designs approach (1 − e^{-1})-optimality. These results reconcile the empirical success of greedy experimental design with the non-supermodularity of the A- and E-optimality criteria.
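To make the object of the guarantee concrete, here is a greedy A-optimal design loop in Python: at each step it adds the experiment (row a_i, noise variance sigma2) that most reduces the trace of the Bayesian posterior error covariance, using a rank-one Sherman-Morrison update. This is a hedged sketch of the greedy baseline that the bounds apply to, not the paper's analysis:

import numpy as np

def greedy_a_optimal(A, k, prior_cov, sigma2=1.0):
    Sigma, chosen = prior_cov.copy(), []
    for _ in range(k):
        best_i, best_tr, best_cov = None, np.inf, None
        for i in range(len(A)):
            if i in chosen:
                continue
            a = A[i][:, None]                        # candidate experiment as a column
            Sa = Sigma @ a
            # Posterior covariance after observing y = a^T theta + noise.
            cand = Sigma - (Sa @ Sa.T) / (sigma2 + (a.T @ Sa).item())
            if np.trace(cand) < best_tr:
                best_i, best_tr, best_cov = i, np.trace(cand), cand
        chosen.append(best_i)
        Sigma = best_cov
    return chosen                                    # greedily selected design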
This paper defines two types of approximate supermodularity, namely alpha-supermodularity and epsilon-supermodularity, both of which are extensions of the existing supermodularity definition. Then the authors show that the objectives of finding the A- and E-optimal Bayesian experimental designs respectively belong to those two approximate supermodularity classes. Therefore, the greedy selection algorithm can obtain a good solution to the design problem that is within (1-e^{-1}) of the optimal, which is guaranteed by the standard result of supermodularity minimization. This paper is nicely written and everything is easy to understand. I really enjoyed reading it. The result gives a theoretical justification of the effectiveness of the greedy algorithm in finding A- and E-optimal designs, which is further confirmed by experiments. However, I also see some limitations of the proposed framework: 1. The analysis seems to apply only to Bayesian experimental design, in which theta must have a prior. If not for the regularization term in equation 4, it might be impossible to use any greedy algorithm. On the contrary, existing approaches such as [10] do not have such a limitation. 2. The relative error is different from the traditional definition, as the objective function f is normalized so that f(D) <= 0 for all D. I suspect this is the key that makes the proof valid. Can this framework be applied to the unnormalized objective? Although there are limitations, I still think the paper is above the acceptance threshold of NIPS.
nips_2017_3550
On clustering network-valued data Community detection, which focuses on clustering nodes or detecting communities in (mostly) a single network, is a problem of considerable practical interest and has received a great deal of attention in the research community. While being able to cluster within a network is important, there are emerging needs to be able to cluster multiple networks. This is largely motivated by the routine collection of network data that are generated from potentially different populations. These networks may or may not have node correspondence. When node correspondence is present, we cluster networks by summarizing a network by its graphon estimate, whereas when node correspondence is not present, we propose a novel solution for clustering such networks by associating a computationally feasible feature vector to each network based on traces of powers of the adjacency matrix. We illustrate our methods using both simulated and real data sets, and theoretical justifications are provided in terms of consistency.
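A minimal sketch of the no-correspondence summary described above: each network is mapped to a short vector of log traces of adjacency powers (closed-walk counts), and these vectors can then be fed to any standard clustering method. The choice of J is illustrative, and the log assumes tr(A^k) > 0, which holds for graphs with enough closed walks:

import numpy as np

def trace_power_features(adjacency_list, J=6):
    feats = []
    for A in adjacency_list:
        ev = np.linalg.eigvalsh(A)            # eigenvalues of the adjacency matrix
        # tr(A^k) = sum of k-th powers of eigenvalues = total weight of closed k-walks
        feats.append([np.log(np.sum(ev ** k)) for k in range(2, J + 1)])
    return np.array(feats)                    # cluster with k-means / spectral methods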
Review Summary: This paper accomplishes a lot in just a few pages. This paper is generally well-written, and the first three sections in particular are very strong. The paper does a great job of balancing discussing previous work, offering theoretical results for their approach, and presenting experimental results on both generated and real data. The paper would benefit from just a touch of restructuring to add a conclusion and future work section as well as more details to some of their experimental methods and results. Summary of paper: This paper presents two versions of their framework for clustering networks: NCGE and NCLM. Each version tackles a slightly different variation of the problem of clustering networks. NCGE is applied when there is node correspondence between the networks being clustered, while NCLM is applied when there isn't. It is impressive that this paper tackles not one but both of these situations, presenting strong experimental results and connecting their work to the theory of networks. This paper is superbly written in some sections, which makes the weaker sections almost jarring. For example, sections 2 and 3 do an excellent job of demonstrating how the current work builds on previous work. But then in sharp contrast, lines 197-200 rely heavily on the reader knowing the cited outside sources to grasp their point. Understandably (due to page limits) one cannot include detailed explanations for everything, but the authors may want to consider pruning in some places to allow themselves more consistency in their explanations. The last subsection (page 8, lines 299-315) is the most interesting and strongest part of that final section. It would be excellent if this part could be expanded and include more details about these real networks. Also the paper ends abruptly without a conclusion or discussion of potential future work. Perhaps with a bit of restructuring, both could be included. Section 3.2 was another strong moment of the paper, mixing in both interesting results (lines 140-142) and forecasts of future parts of the paper (lines 149-150). However, this section might be one to either condense or prune just a bit to gain back a few lines so as to add details in other sections. A few formatting issues, questions, and/or recommendations are below: Page 1, lines 23,24,26, 35: Citations are out of order. Page 2, line 74: The presentation of the USVT acronym is inconsistent with the presentation of the NCGE and NCLM acronyms. Page 3, line 97: Citations are out of order. Page 4, line 128: Based on the earlier lines, it seems that g(A) = \hat{P} should be g(A_i) = \hat{P_i}. Lines 159-160: In what sense does the parameter J count the effective number of eigenvalues? Page 5, lines 180-183: It appears that $dn$ is a number, but then in line 182, it seems that $dn()$ is a matrix function. Which is it? Lines 180-183: The use of three different D's in this area makes it a challenge to sort through what all the D's mean. Perhaps consider using different letters here. Line 196: Citations are out of order. Page 6, line 247: How is clustering accuracy measured? Page 7, line 259: There appears to be an extra space (or two) before the citation [26]. Lines 266-277: There appears to be a bit of inconsistency in the use of K. For example, in line 266, K is a similarity matrix and in line 268, K is the number of clusters. Lines 269-270, Table 2, and line 284: What are GK3, GK4, and GK5? How are they different? Line 285: There is a space after eigenvectors before the comma.
nips_2017_2131
Learning Deep Structured Multi-Scale Features using Attention-Gated CRFs for Contour Prediction Recent works have shown that exploiting multi-scale representations deeply learned via convolutional neural networks (CNN) is of tremendous importance for accurate contour detection. This paper presents a novel approach for predicting contours which advances the state of the art in two fundamental aspects, i.e. multi-scale feature generation and fusion. Different from previous works directly considering multi-scale feature maps obtained from the inner layers of a primary CNN architecture, we introduce a hierarchical deep model which produces richer and more complementary representations. Furthermore, to refine and robustly fuse the representations learned at different scales, the novel Attention-Gated Conditional Random Fields (AG-CRFs) are proposed. The experiments run on two publicly available datasets (BSDS500 and NYUDv2) demonstrate the effectiveness of the latent AG-CRF model and of the overall hierarchical framework.
This paper proposes a gating mechanism to combine features from different levels in a CNN for the task of contour detection. The paper builds upon recent advances in using graphical models with CNN architectures [5,39] and augments these with attention. This results in large improvements in performance at the task of contour detection, achieving new state-of-the-art results. The paper also presents an ablation study where they analyze the impact of different parts of their architecture. Cons: 1. Unclear relationship to past works which use CRFs with CNNs [5,39] and other works such as [A,B] which express CRF inference as CNNs. The paper says it is inspired by [5,39] but does not describe the points of difference from [5,39]. Is attention the only difference and thus the novelty? Does the derivation in Section 2 follow directly from [5,39] (except taking attention into account) or does it require additional tricks and techniques beyond [5,39]? How does the derivation relate to the concepts presented in [A]? Not only will these relationships make it easier to judge novelty, they will also allow readers to contextualize the work better. 2. Intuitively, I don't understand why we need attention to combine the information from different scales. The paper is fairly vague about the motivation for this. It would help if the rebuttal could provide some examples or more intuition about what a gating mechanism can provide that a non-gating mechanism can't. This would be useful to see alongside the simulations. 3. The paper presents very thorough experimental results for contour detection, but I would encourage the authors to also study at the very least one other task (say semantic segmentation). This will help evaluate how generalizable the technique is. This will also increase the impact of the paper in the community. 4. The paper is somewhat hard to read. I believe there are some undefined variables; I haven't been able to figure out what f_l^{D,C,M} in Figure 2 are. 5. There are minor typos in Table 2 (performance for [14] is misreported, ODS is either 70.25 or 71.03 in [14] but the current paper reports it as 64.1), and these numbers are not consistent with the ones in Figure 4. I would recommend that the authors cross-check all numbers in the draft. [A] Efficient Inference in Fully Connected CRFs with Gaussian Edge Potentials. Philipp Krähenbühl and Vladlen Koltun. NIPS 2011. [B] Conditional Random Fields as Recurrent Neural Networks. ICCV 2015.
nips_2017_1098
Train longer, generalize better: closing the generalization gap in large batch training of neural networks Background: Deep learning models are typically trained using stochastic gradient descent or one of its variants. These methods update the weights using their gradient, estimated from a small fraction of the training data. It has been observed that when using large batch sizes there is a persistent degradation in generalization performance -known as the "generalization gap" phenomenon. Identifying the origin of this gap and closing it had remained an open problem. Contributions: We examine the initial high learning rate training phase. We find that the weight distance from its initialization grows logarithmically with the number of weight updates. We therefore propose a "random walk on a random landscape" statistical model which is known to exhibit similar "ultra-slow" diffusion behavior. Following this hypothesis we conducted experiments to show empirically that the "generalization gap" stems from the relatively small number of updates rather than the batch size, and can be completely eliminated by adapting the training regime used. We further investigate different techniques to train models in the large-batch regime and present a novel algorithm named "Ghost Batch Normalization" which enables significant decrease in the generalization gap without increasing the number of updates. To validate our findings we conduct several additional experiments on MNIST, CIFAR-10, CIFAR-100 and ImageNet. Finally, we reassess common practices and beliefs concerning training of deep models and suggest they may not be optimal to achieve good generalization.
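A minimal PyTorch sketch of the Ghost Batch Normalization idea named above: the large batch is split into fixed-size "ghost" chunks and each chunk is normalized with its own statistics. The handling of running statistics here is simplified relative to the paper:

import torch
import torch.nn as nn

class GhostBatchNorm(nn.Module):
    def __init__(self, num_features, ghost_size=32):
        super().__init__()
        self.ghost_size = ghost_size
        self.bn = nn.BatchNorm1d(num_features)

    def forward(self, x):
        if not self.training:
            return self.bn(x)                     # evaluation: running statistics
        # Training: per-chunk statistics (assumes each chunk has > 1 sample).
        chunks = x.split(self.ghost_size, dim=0)  # "ghost" mini-batches
        return torch.cat([self.bn(c) for c in chunks], dim=0)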
Summary of the paper and some comments: This paper investigates the reasons for the generalization gap, namely the detrimental effect of large minibatches on the generalization performance of neural networks. The paper argues that generalization is worse with large minibatch sizes because the model is trained for a smaller number of update steps. It also discusses the relationship between the proposed method and recent works on flat minima and generalization. The main message of this paper is that there is no fundamental issue with SGD using large minibatches, and that it is possible to make SGD with large minibatches generalize well too. The authors propose three different methods to address the problem:
1) Ghost BN: they propose to compute the normalization statistics (mean and standard deviation) over smaller "ghost" minibatches during training, while using the full-batch statistics at test time. I suspect that computing the statistics over small minibatches has some regularization effect coming from the noise in the estimates. I feel this is not very well justified in the paper.
2) Learning rate tuning: they propose to rescale the learning rate of the large minibatches in proportion to the minibatch size. This intuitively makes sense, but the authors should discuss it more and justify it better.
3) Regime adaptation: namely, training the large-batch model for the same number of update steps as the small-batch baseline. This is justified in the first few sections of the paper.
Firstly, the paper is at some points very verbose and uses valuable space to explain very simple concepts (e.g., the beginning of Section 3), while not providing enough justification for the approach. The paper mainly discusses the generalization of SGD with large minibatches, but there is also the question of the convergence of SGD, since the convergence results for SGD also depend on the minibatch size; please see [1]. I would like to see a discussion of this as well. In my opinion, anyone who spends time monitoring the learning process of neural networks would easily notice that the learning curves usually have a logarithmic behavior. The relationship to random-walk theory is interesting, but the relationship that is established is very weak; it is only obtained by approximating an approximation. Equation 4 is interesting: for instance, if we initialize w_0 to all zeros, it implies that the norm of w_t will grow logarithmically with the number of iterations. This is also an interesting property, in my opinion. The experiments are very difficult to interpret, in particular when the plots are printed on paper. The notation is a bit inconsistent throughout the paper: some parts use brackets for the expectation and some don't. The authors should explicitly state what the expectation is taken over. In Figure 3, can you also consider adding the large-minibatch cases without regime adaptation for comparison? It would be interesting to see the learning curves of the models with respect to wall-clock time as well. I cannot see the AB entries in Tables 1 and 2. The improvements are marginal, and sometimes the method hurts. It seems that the largest improvement comes from RA. The paper could be better written. I think the paragraph sections in the abstract are unnecessary.
Section 7 could be renamed "Discussion and Conclusion" and could be simplified. [1] Li, Mu, et al. "Efficient mini-batch training for stochastic optimization." Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 2014.
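A minimal sketch of the Ghost Batch Normalization idea summarized in point (1) above: normalization statistics are computed over small "ghost" chunks of a large batch, which injects extra noise during training. This is an illustrative reconstruction, not the paper's reference implementation; the function name, chunk size, and interface are all assumptions.

```python
import torch

def ghost_batch_norm(x, gamma, beta, ghost_size=32, eps=1e-5):
    # x: (B, C) activations; gamma, beta: (C,) learned scale and shift.
    # Each ghost chunk is normalized with its own mean/std; at test time
    # one would instead use statistics accumulated over the full data.
    out = []
    for c in x.split(ghost_size, dim=0):
        mu = c.mean(dim=0, keepdim=True)
        var = c.var(dim=0, unbiased=False, keepdim=True)
        out.append((c - mu) / torch.sqrt(var + eps))
    return gamma * torch.cat(out, dim=0) + beta
```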
nips_2017_2356
Trimmed Density Ratio Estimation Density ratio estimation is a vital tool in both the machine learning and statistics communities. However, due to the unbounded nature of the density ratio, the estimation procedure can be vulnerable to corrupted data points, which often push the estimated ratio toward infinity. In this paper, we present a robust estimator which automatically identifies and trims outliers. The proposed estimator has a convex formulation, and the global optimum can be obtained via subgradient descent. We analyze the parameter estimation error of this estimator under high-dimensional settings. Experiments are conducted to verify the effectiveness of the estimator.
high density region of p(x), we end up with a very different density ratio function and the original parametric pattern in the ratio is ruined. We give such an example in Section 6.
Volatile Samples. The change of the external environment may be responded to in unpredictable ways. It is possible that a small portion of samples react more "aggressively" to the change than the others. These samples may be skewed and show very high density ratios, even if the change of distribution is relatively mild when these volatile samples are excluded. For example, when testing a new fertilizer, a small number of plants may fail to adapt, even if the vast majority of crops are healthy.
Overly large density ratio values can cause further trouble when the ratio is used to weight samples. For example, in the domain adaptation setting, we may reweight samples from one task and reuse them in another task. The density ratio is a natural choice for such an "importance weighting" scheme [28,25]. However, if one or a few samples have an extremely high ratio, then after renormalizing, the other samples will have almost zero weight and have little impact on the learning task.
Several methods have been proposed to solve this problem. Relative density ratio estimation [33] estimates a "biased" version of the density ratio controlled by a mixture parameter α. The relative density ratio is always upper-bounded by 1/α, which can give a more robust estimator. However, it is not clear how to de-bias such an estimator to recover the true density ratio function. [26] took a more direct approach: it estimates a thresholded density ratio by setting a tolerance t on the density ratio value. All likelihood ratio values bigger than t are clipped to t. The estimator was derived from Fenchel duality for f-divergences [18]. However, the optimization for the estimator is not convex if one uses log-linear models. The formulation also relies on a non-parametric approximation of the density ratio function (or the log-ratio function), making the learned model hard to interpret. Moreover, there is no intuitive way to directly control the proportion of ratios that are thresholded. Nonetheless, the concept studied in our paper is inspired by this pioneering work.
In this paper, we propose a novel method based on a "trimmed Maximum Likelihood Estimator" [17,10]. This idea relies on a specific type of density ratio estimator (called log-linear KLIEP) [30] which can be written as a maximum likelihood formulation. We simply "ignore" samples that make the empirical likelihood take exceedingly large values. The trimmed density ratio estimator can be formulated as a convex optimization problem and translated into a weighted M-estimator. This helps us develop a simple subgradient-based algorithm that is guaranteed to reach the global optimum.
Moreover, we shall prove that, in addition to recovering the correct density ratio under the outlier setting, the estimator can also obtain a "corrected" density ratio function under a truncation setting: it ignores "pathological" samples and recovers the density ratio using only "healthy" samples. Although trimming will usually result in a more robust estimate of the density ratio function, we also point out that it should not be abused. For example, in two-sample testing, a diverging density ratio might indicate interesting structural differences between the two distributions. In Section 2, we explain some preliminaries on the trimmed maximum likelihood estimator. In Section 3, we introduce a trimmed DRE. We solve it using a convex formulation whose optimization procedure is explained in Section 4. In Section 5, we prove an upper bound on the estimation error with respect to a sparsity-inducing regularizer. Finally, experimental results are shown in Section 6 and we conclude our work in Section 7.
Trimmed Density Ratio Estimation This paper proposes a convex method for robust density ratio estimation, and provides several theoretical and empirical results. I found the method to be well motivated, and of course Figure 3 is quite fun. Based on the evidence in the paper, I would feel comfortable recommending the method to others. I would be happy to see these results published; however, I found the presentation to be fairly terse, and thought the paper was very difficult to follow. My main recommendation for the writing would be to focus more on core contributions, and to remove some side remarks to allow for longer discussions. Moreover, the paper was missing some references. Noting these issues, my current recommendation for the paper is a middle-of-the-road "accept"; however, if the authors are willing to thoroughly revise their paper, I would be open to considering a higher rating. Below are some specific comments.
Comments on motivation + literature:
- The paper opens the abstract by describing density ratio estimation (DRE) as a "versatile tool" in machine learning. I think this substantially undersells the problem: DRE is a core statistical and machine learning task, and any advance on it should be of broad interest. I would recommend stressing the many applications of DRE more.
- On a related note, DRE has also received quite a bit of attention in the statistics literature. For example, Efron and Tibshirani (1996) use it to calibrate kernel estimators, and Fithian and Wager (2016) use DRE to stabilize heavy-tailed A/B tests. Fokianos (2004) models the density ratio to fit several densities at once. This literature should also be discussed.
- Are there some applications where trimming is better justified than others? Engineering applications relating to GANs or anomaly detection seem like very natural places to apply trimming; other problems, like two-sample testing, may be more of a stretch. (In applications like anomaly detection, it's expected to have regions where the density ratio diverges, so regularizing against them is natural. On the other hand, if one finds a diverging density ratio in two-sample testing, that's very interesting and shouldn't be muted out.)
Comments on relationship to the one-class SVM:
- I found Section 2 on the one-class SVM to be very distracting. It's too short to help anyone who isn't already familiar with the method, and badly cuts the flow of the paper. I think a version of the paper that simply removed Section 2 would be much nicer to read. (The discussion of the one-class SVM is interesting, but belongs in the discussion section of a long journal paper, not on page 2 of an 8-page NIPS paper.)
- The use of the one-class SVM in the experiments was interesting. However, I think the median reader would benefit from a lot more discussion. The point is that the proposed method gets to leverage the "background" images, whereas the one-class SVM doesn't. So even though the one-class SVM is a good method, you bring more data to the table, so there's no way it could win. (I know that the current draft says this, but it says it so briefly that I doubt anyone would notice it if they were reading at a regular speed.)
- As a concrete recommendation: if the authors were to omit Section 2 and devote the resulting space to adding a paragraph or two that reviews and motivates the DRE literature in the intro (including the stats literature), and to thoroughly contrasting their results with the one-class SVM in the results section, I think the paper would be much easier to read.
Other comments:
- In the setting of 3.1, Fithian and Wager (2016) show that if we assume that one class has much more data than the other, then density ratio models can be fit via logistic regression. Can the authors comment on how the proposed type of trimming can be interpreted in the logistic regression case?
- Section 3.2 is awkwardly placed, and cuts the flow. I recommend moving this material to the intro.
- I wasn't sold by the connection to the SVM on lines 120-129. The connection seems rather heuristic to me (maybe even cosmetic?); I personally would omit it. The core contributions of the paper are strong enough that there's no need to fish for tentative connections.
- The most classical approach to convex robust estimation is via Huberization (e.g., Huber, 1972). What's recommended here seems very different to me; I think readers may benefit from a discussion of how the two are different.
- The approach of assuming a high-dimensional linear model for the theory and an RBF kernel for the experiments is fine. Highlighting this choice more prominently might help some readers, though.
References:
Efron, Bradley, and Robert Tibshirani. "Using specially designed exponential families for density estimation." The Annals of Statistics 24.6 (1996): 2431-2461.
Fithian, William, and Stefan Wager. "Semiparametric exponential families for heavy-tailed data." Biometrika 102.2 (2014): 486-493.
Fokianos, Konstantinos. "Merging information for semiparametric density estimation." Journal of the Royal Statistical Society: Series B (Statistical Methodology) 66.4 (2004): 941-958.
Huber, Peter J. "Robust statistics: A review." The Annals of Mathematical Statistics (1972): 1041-1067.
########## UPDATE
In response to the authors' comprehensive reply, I have increased my rating to an "8".
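A minimal numerical sketch of the trimmed log-linear KLIEP objective the abstract describes, under the assumption that trimming means dropping the fraction of numerator samples with the largest log-ratio values before averaging. The function name, feature map, and trimming interface are hypothetical, and the paper's actual convex formulation and subgradient method may differ in detail.

```python
import numpy as np

def trimmed_kliep_objective(theta, f_nu, f_de, trim=0.1):
    # f_nu, f_de: (n, d) feature maps of numerator / denominator samples;
    # log-linear ratio model: log r(x) = theta^T f(x) - log Z(theta)
    log_r = f_nu @ theta
    log_Z = np.log(np.mean(np.exp(f_de @ theta)))
    vals = log_r - log_Z
    keep = np.sort(vals)[: int(np.ceil((1 - trim) * len(vals)))]  # drop largest
    return np.mean(keep)   # trimmed empirical log-likelihood (to be maximized)
```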
nips_2017_1099
Flexpoint: An Adaptive Numerical Format for Efficient Training of Deep Neural Networks Deep neural networks are commonly developed and trained in 32-bit floating point format. Significant gains in performance and energy efficiency could be realized by training and inference in numerical formats optimized for deep learning. Despite advances in limited precision inference in recent years, training of neural networks in low bit-width remains a challenging problem. Here we present the Flexpoint data format, aiming at a complete replacement of 32-bit floating point format training and inference, designed to support modern deep network topologies without modifications. Flexpoint tensors have a shared exponent that is dynamically adjusted to minimize overflows and maximize available dynamic range. We validate Flexpoint by training AlexNet [1], a deep residual network [2, 3] and a generative adversarial network [4], using a simulator implemented with the neon deep learning framework. We demonstrate that 16-bit Flexpoint closely matches 32-bit floating point in training all three models, without any need for tuning of model hyperparameters. Our results suggest Flexpoint as a promising numerical format for future hardware for training and inference.
I find the proposed technique interesting, but I believe that this problem might be better suited to a hardware venue. This seems to be more of a contribution on hardware representations, compared to previous techniques like XNOR-Net which don't ask for a new hardware representation, had in mind an application to compressing networks for phones, and evaluated the degradation of final performance. However, I do think that the paper could be of interest to the NIPS community, as similar papers have been published at AI conferences. The paper is well written and easy to understand. The upsides of the method and the potential problems are well described and easy to follow. The improvements over previous techniques seem somewhat incremental, and for many applications it's not clear that training in a low-precision format is better than training in a high-precision format and then cropping down, and no experiments are performed to explore that. My opinion on the evaluation is mixed. For one, no experiments are done to compare to previous (crop to low precision after training) techniques. Second, plotting validation error seems natural, but I would also like to see test numbers. I notice that the validation result for AlexNet on ILSVRC is about 20% error, while AlexNet originally reported 17% error on ILSVRC-11 and 16.4% on ILSVRC-2012 test. Allowing for direct comparison would improve the paper. The results for CIFAR (~5% error) seem to be near state of the art. Qualitative GAN results appear to be good.
Further comments: After discussion, I believe that the paper could be accepted (increased to a 7 from a 6). Other reviewers have argued that the authors need to provide evidence that such a system would be implementable in hardware, be faster, and have a lower memory footprint. I do not believe that those experiments are necessary. Given the large number of network architectures that people are currently using, which ones should the authors demonstrate on? If one architecture causes problems, is the paper unacceptable? I am also somewhat confused that other reviews seem to give no value to drastically reducing the memory footprint (by an easy calculation, each layer is represented in half as much memory). Many of the current state-of-the-art network architectures use the maximum GPU memory during training. While it needs more work, this technique seems to provide a no-loss answer to how to increase network capacity, which is very cool indeed. This is also related to my original point about venue, as this paper sits somewhere between AI and systems. If the authors perform further "hardware-centric" experiments, it seems the paper would clearly be better suited to a hardware or systems venue. If we evaluate this work as a network compression paper, then the classification accuracy demonstrates success. A network trained in half as much memory is as accurate as the full-memory network. It deserves consideration by the ACs for acceptance.
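A minimal sketch of the shared-exponent idea behind the format: one exponent per tensor, chosen so the largest magnitude fits, with per-entry 16-bit mantissas. This is an illustrative reconstruction, not the paper's Flexpoint specification; the function names and the exponent-selection rule (the paper adjusts the exponent dynamically to anticipate overflows) are assumptions.

```python
import numpy as np

def to_shared_exponent(x, mantissa_bits=16):
    # one exponent for the whole tensor, chosen so the largest value fits
    max_abs = np.max(np.abs(x))
    exp = int(np.ceil(np.log2(max_abs))) if max_abs > 0 else 0
    scale = 2.0 ** (exp - (mantissa_bits - 1))
    lo, hi = -(2 ** (mantissa_bits - 1)), 2 ** (mantissa_bits - 1) - 1
    mant = np.clip(np.round(x / scale), lo, hi).astype(np.int32)
    return mant, scale

def from_shared_exponent(mant, scale):
    return mant.astype(np.float64) * scale

x = np.random.randn(4, 4).astype(np.float32)
mant, scale = to_shared_exponent(x)
print(np.max(np.abs(x - from_shared_exponent(mant, scale))))  # quantization error
```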
nips_2017_547
Minimizing a Submodular Function from Samples In this paper we consider the problem of minimizing a submodular function from training data. Submodular functions can be efficiently minimized and are consequently heavily applied in machine learning. There are many cases, however, in which we do not know the function we aim to optimize, but rather have access to training data that is used to learn it. In this paper we consider the question of whether submodular functions can be minimized when given access to their training data. We show that even learnable submodular functions cannot be minimized within any non-trivial approximation when given access to polynomially-many samples. Specifically, we show that there is a class of submodular functions with range in [0, 1] such that, despite being PAC-learnable and minimizable in polynomial time, no algorithm can obtain an approximation strictly better than 1/2 - o(1) using polynomially-many samples drawn from any distribution. Furthermore, we show that this bound is tight via a trivial algorithm that obtains an approximation of 1/2.
Informally stated, the paper proves the following. There exists a family F of submodular functions valued in [0, 1] such that if, after polynomially many samples, any algorithm outputs some set S as the minimizer, there exists some f in F that is consistent with the samples and on which S is suboptimal by at least 1/2. The argument is based on the probabilistic method, i.e., on showing that such functions exist with non-vanishing probability. Moreover, the class that is considered is shown to be PAC-learnable, effectively showing that for submodular functions PAC-learnability does not imply "minimizability" from samples. However, one has to be careful about this argument. Namely, PAC learnability guarantees low error only on samples coming from the training distribution, while to minimize the function it is necessary to approximate it well near the optimum (where we might lack samples) up to an additive factor. Thus, it is not very surprising to me that the former does not imply the latter and that one can construct function classes to fool any distribution. This is a purely theoretical result that answers an interesting question and also provides a construction of an intuitive class of submodular functions which is (as proven) hard to minimize from samples. I only read the (sketched) proofs in the main text, which seem to be very rigorously executed. The paper is very dense, which I do not think can be easily improved on, but it is nicely written. On these grounds, I would recommend this paper for acceptance. I think it would be more interesting if the authors presented some natural settings where the learning setting they consider is used in practice. In other words, when does one learn a submodular function from training data of the form (S_i, F(S_i))? This setting is different from how submodular functions are learned in the vision domain (where one might argue the primary application field of submodular minimization lies). There, one is given a set of instances which we want to have a low energy, either using a min-margin approach or via maximum likelihood.
Questions / Remarks:
1. The argument in l129-132 is unclear to me. Do you want to say that since, as n -> oo, both P_{f\in F}(f(S) >= 1/2) and |F'|/|F| go to one, there has to exist an f in F' satisfying f(S) >= 1/2 - o(1)?
2. In Prop. 2 you do not define F_A(B), which I guess means F(A \cup B) - F(B); this is confusing as it is typically defined the other way round. You could also use F(A | B), which is, I guess, the more commonly used notation.
3. I think the discussion after Prop. 2 and before Section 3 should be moved somewhere else, as it does not follow from the proposition itself.
4. Maybe add a note that U(A) means drawn uniformly at random from the set A.
5. l212: it is not motivated why you need one predictor for each possible masking element.
6. When minimizing a symmetric submodular function, one minimizes over all sets different from the empty set and the ground set. This is exactly what is done using Queyranne's algorithm in cubic time.
7. Shouldn't the 1 be 1/2 in the last equation of the statement of Lemma 4?
8. I find the notation in the two inequalities in Lemma 5 (after "we get") confusing. Does it mean that the bound holds for any h(n) = o(1) function with the given probability? Also, maybe add an "and" after the comma there.
9. l98: a word is missing after "desirable optimization".
nips_2017_1738
Reinforcement Learning under Model Mismatch We study reinforcement learning under model misspecification, where we do not have access to the true environment but only to a reasonably close approximation to it. We address this problem by extending the framework of robust MDPs of [1,15,11] to the model-free Reinforcement Learning setting, where we do not have access to the model parameters, but can only sample states from it. We define robust versions of Q-learning, SARSA, and TD-learning and prove convergence to an approximately optimal robust policy and approximate value function respectively. We scale up the robust algorithms to large MDPs via function approximation and prove convergence under two different settings. We prove convergence of robust approximate policy iteration and robust approximate value iteration for linear architectures (under mild assumptions). We also define a robust loss function, the mean squared robust projected Bellman error and give stochastic gradient descent algorithms that are guaranteed to converge to a local minimum.
Overview: The authors propose an extension of robust RL methods to the model-free setting. Common algorithms (SARSA, Q-learning, TD(lambda)) are extended to their robust, model-free versions. The central point of the paper is replacing the true and unknown transition probabilities P_i^a with a known confidence region U_i^a centered on an unknown probability p_i^a. Transitions according to p_i^a are sampled via a simulator. The robustness of the algorithms derives from an additional optimization step over U_i^a. The authors clarify that the addition of the term U to p must not violate the probability simplex. However, since p_i^a is unknown, the additional optimization is actually performed over a set U_hat that ignores this restriction. The mismatch between U and U_hat, represented by a term beta, determines whether the algorithms converge to an epsilon-optimal policy. The value of epsilon is also a function of beta. The authors also propose an extension to RL with function approximation architectures, both linear and nonlinear. In the linear case, the authors show that a specific operator T is a contraction mapping under a suitable weighted Euclidean norm. By adopting the steady-state distribution of the exploration policy as weights, and by adopting an assumption from previous literature concerning a constant alpha < 1, the authors prove that, under conditions on alpha and beta, the robust operator T_hat is a contraction, and that the solution converges close to the true solution v_pi, where the closeness again depends on alpha and beta. Furthermore, the authors extend the results of Ref. 6 to the robust case for smooth nonlinear approximation architectures. An additional assumption here is a constant confidence bound U_i^a = U for all state-action pairs, for the sake of runtime. The authors extend the work of Ref. 6 by introducing the robust extension of the mean squared projected Bellman error, the MSRPBE, and of the GTD2 and TDC update algorithms, which converge to a local optimum given assumptions on step lengths and on the confidence region U_hat. In conclusion, the authors apply both nominal and robust Q-learning, SARSA, and TD-learning to several OpenAI gym experiments. In these experiments, a small probability of transitioning to a random state is added to the original models during training. The results show that the best robust Q-learning outperforms nominal Q-learning. However, it is shown that the performance of the robust algorithms is not monotone in p: after increasing for small values of p, the advantage of the robust implementation decreases as U and U_hat increase in discrepancy.
Evaluation:
- Quality: the paper appears to be mathematically sound and assumptions are clearly stated. However, the experimental results are only briefly commented on. Figures are partial: of the three methods compared (Q-learning, SARSA and TD-learning), only results from Q-learning are briefly presented. The algorithms of Section 4 (with function approximation) are not accompanied by experimental results, but given the theoretical analysis preceding them this is of relative importance.
- Clarity: the paper is easy to read, with the exception of a few symbolic mismatches that are easy to identify and rectify. The structure of the paper is acceptable. However, the overall clarity of the paper would benefit from extending Sections 4.2 and 5, possibly by reducing some portions of Section 3.
- Originality: the proposed model-free extensions are an original contribution with respect to the existing literature and to NIPS in particular.
- Significance: the motivation of the paper is clear, as is the contribution to existing theoretical research. Less clear is the kind of application this work is intended for. In particular, the paper assumes the existence of a simulator with unknown transition probabilities, but with known confidence bounds on the actual model being simulated, from which samples can be obtained. While this assumption is not by itself wrong, it is not clearly mentioned in the abstract or the introduction, and it risks appearing artificial.
Other comments:
- Line 194: the definition of beta given in this line is the one of Lemma 4.2 rather than the one of Lemma 3.2;
- Line 207: here and in the following, nu is used instead of theta to indicate the discount factor. Please make sure the notation is consistent;
- Line 268: the sentence "the state are sampled with from an" might contain a typo;
- Eq. 7-8: there appears to be an additional minus sign in both equations.
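A minimal sketch of a robust Q-learning update in the spirit described above, under the simplifying assumption that the proxy confidence region U_hat is an L2 ball of radius eps, so the inner worst-case optimization has a closed form. This is illustrative only: the paper's support-function computation and its handling of the simplex constraint may differ, and the function name and interface are hypothetical.

```python
import numpy as np

def robust_q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.95, eps=0.05):
    # nominal TD target plus a worst-case correction over the ball U_hat:
    # min_{u: ||u||_2 <= eps} (e_{s'} + u)^T v = v[s_next] - eps * ||v||_2
    v = Q.max(axis=1)
    target = r + gamma * (v[s_next] - eps * np.linalg.norm(v))
    Q[s, a] += alpha * (target - Q[s, a])
    return Q
```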
nips_2017_1739
Hierarchical Attentive Recurrent Tracking Class-agnostic object tracking is particularly difficult in cluttered environments as target specific discriminative models cannot be learned a priori. Inspired by how the human visual cortex employs spatial attention and separate "where" and "what" processing pathways to actively suppress irrelevant visual features, this work develops a hierarchical attentive recurrent model for single object tracking in videos. The first layer of attention discards the majority of background by selecting a region containing the object of interest, while the subsequent layers tune in on visual features particular to the tracked object. This framework is fully differentiable and can be trained in a purely data driven fashion by gradient methods. To improve training convergence, we augment the loss function with terms for auxiliary tasks relevant for tracking. Evaluation of the proposed model is performed on two datasets: pedestrian tracking on the KTH activity recognition dataset and the more difficult KITTI object tracking dataset.
Summary: The authors present a framework for class-agnostic object tracking. In each time step, the presented model extracts a glimpse from the current frame using the DRAW-style attention mechanism also employed by Kahou et al. Low-level features are then extracted from this glimpse and fed into a two-stream architecture, which is inspired by the two-stream hypothesis. The dorsal stream constructs convolutional filters, which are used to extract a spatial Bernoulli distribution, each element expressing the probability of the object occupying the corresponding location in the feature map. This location map is then combined with the appearance-based feature map computed by the ventral stream, removing the distracting features which are not considered to be part of the object. An LSTM is used for tracking the state of the object, with an MLP transforming the output of the LSTM into parameters for both attention mechanisms and an offset to the bounding box parameters from the previous time step. The results on KTH and the more challenging KITTI dataset are significantly closer to the state of the art than previous attention-based approaches to tracking.
Qualitative assessment: The paper reads well and is easy to follow. The description of the model is detailed enough for replication, and experiments are performed providing evidence for some of the premises stated in the description of the model. The results are impressive compared to previous work on attention-based tracking. Since a standard tracking benchmark is used, I think more performance numbers from the tracking community could have been included to show how close the presented method is to the state of the art (for example: Re3: Real-Time Recurrent Regression Networks for Object Tracking. D. Gordon, A. Farhadi, D. Fox. arXiv preprint arXiv:1705.06368).
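A minimal sketch of the DRAW-style attention read used to extract the glimpse: a grid of 1-D Gaussian filters, parameterized by a center, stride, and variance, applied separably along each image axis. The function names and the exact parameterization (e.g., how the network outputs map to center/stride/variance) are assumptions, not the paper's specification.

```python
import numpy as np

def gaussian_filterbank(img_size, n, center, stride, sigma):
    # n filter means spaced `stride` apart around `center` (DRAW-style)
    mu = center + (np.arange(n) - n / 2.0 + 0.5) * stride
    a = np.arange(img_size)
    F = np.exp(-((a[None, :] - mu[:, None]) ** 2) / (2 * sigma ** 2))
    return F / (F.sum(axis=1, keepdims=True) + 1e-8)

def extract_glimpse(img, gx, gy, stride, sigma, n=16):
    Fy = gaussian_filterbank(img.shape[0], n, gy, stride, sigma)
    Fx = gaussian_filterbank(img.shape[1], n, gx, stride, sigma)
    return Fy @ img @ Fx.T            # (n, n) attention glimpse
```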
nips_2017_47
Attentional Pooling for Action Recognition We introduce a simple yet surprisingly powerful model to incorporate attention in action recognition and human object interaction tasks. Our proposed attention module can be trained with or without extra supervision, and gives a sizable boost in accuracy while keeping the network size and computational cost nearly the same. It leads to significant improvements over state of the art base architecture on three standard action recognition benchmarks across still images and videos, and establishes new state of the art on MPII dataset with 12.5% relative improvement. We also perform an extensive analysis of our attention module both empirically and analytically. In terms of the latter, we introduce a novel derivation of bottom-up and top-down attention as low-rank approximations of bilinear pooling methods (typically used for fine-grained classification). From this perspective, our attention formulation suggests a novel characterization of action recognition as a fine-grained recognition problem.
The authors propose to use a low-rank approximation of second-order pooling features for action recognition. They show results on three compelling vision benchmarks for action recognition and show improvements over the baseline.
Pros:
(+) The proposed method is simple and can be applied on top of any network architecture
(+) Clear improvement over the baseline
Cons:
(-) Unclear novelty over well-known second-order pooling approaches
(-) A fair comparison with other attention-based models is missing
It is unclear to me how the proposed approach is novel compared to other low-rank second-order pooling methods, typically used for fine-grained recognition (e.g. Lin et al., ICCV 2015), semantic segmentation, etc. While the authors do a decent job walking us through the linear algebra and interpreting the variables used, the final approach is merely a classifier on second-order features. In addition, the authors make design choices that are not clearly justified. For multi-class recognition, the authors do not justify the choice of a class-agnostic b but a class-specific a. These experiments should be added in order to prove the effectiveness of the proposed design. The authors claim that they get rid of the box detection step present in other approaches such as R*CNN or Mallya & Lazebnik. However, they do not discuss how their approach would generalize to instance-specific action recognition. An image can contain many people who perform different actions. In this case, the action labels are not assigned at the frame level, but at an instance level. R*CNN was designed for instance-level recognition, thus the necessity of the person box detection step. The authors should discuss how the proposed approach can handle this more general setup of instance-level labels. This is a limitation of the current approach, rather than a benefit as implied in the Related Work section. The authors also fail to provide a fair comparison with competing approaches. The authors use a ResNet101 or an Inception-V2 architecture for their base network. They show an improvement over their own non-attentive baseline, using the same architecture. However, the competing approaches, such as R*CNN or Mallya & Lazebnik, all use a VGG16 network. This makes comparisons between these approaches and the current approach unfair and inconclusive on the MPII and HICO datasets. On HMDB, the improvement over the TSN BN-Inception method is small, while comparisons with other methods are unclear due to the varying underlying base architectures.
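A minimal sketch of the low-rank view the abstract describes: a rank-1 bilinear classifier W = a b^T over spatial features reduces to attention-weighted pooling, with X b acting as a class-agnostic (bottom-up) saliency map and a_c as a class-specific (top-down) weight vector. Variable names and shapes are assumptions for illustration.

```python
import numpy as np

def attentional_pool_scores(X, A, b):
    # X: (N, d) features at N spatial locations; A: (d, C) per-class vectors;
    # b: (d,) class-agnostic vector. For class c the score equals
    # sum_n (x_n^T a_c)(x_n^T b), i.e., a rank-1 bilinear (second-order) form.
    h = X @ b             # bottom-up saliency, one weight per location
    pooled = X.T @ h      # attention-weighted pooled feature, shape (d,)
    return A.T @ pooled   # (C,) class scores

X = np.random.randn(49, 8)       # e.g., a 7x7 feature map with 8 channels
A, b = np.random.randn(8, 5), np.random.randn(8)
print(attentional_pool_scores(X, A, b))
```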
nips_2017_2039
On the Optimization Landscape of Tensor Decompositions Non-convex optimization with local search heuristics has been widely used in machine learning, achieving many state-of-the-art results. It becomes increasingly important to understand why they can work for these NP-hard problems on typical data. The landscape of many objective functions in learning has been conjectured to have the geometric property that "all local optima are (approximately) global optima", and thus they can be solved efficiently by local search algorithms. However, establishing such a property can be very difficult. In this paper, we analyze the optimization landscape of the random over-complete tensor decomposition problem, which has many applications in unsupervised learning, especially in learning latent variable models. In practice, it can be efficiently solved by gradient ascent on a non-convex objective. We show that for any small constant ε > 0, among the set of points with function values a (1 + ε)-factor larger than the expectation of the function, all the local maxima are approximate global maxima. Previously, the best-known result only characterizes the geometry in small neighborhoods around the true components. Our result implies that even with an initialization that is barely better than a random guess, the gradient ascent algorithm is guaranteed to solve this problem. Our main technique uses the Kac-Rice formula and random matrix theory. To the best of our knowledge, this is the first time the Kac-Rice formula has been successfully applied to counting the number of local optima of a highly-structured random polynomial with dependent coefficients.
### Summary
The paper analyzes the landscape of the tensor decomposition problem. Specifically, it studies random over-complete tensors. The associated objective function is nonconvex, yet in practice simple methods based on gradient ascent are observed to solve this problem. This paper proves why we should expect such an outcome by showing that there are almost no local maxima other than the global maxima of the problem when the optimization is initialized with any solution that is slightly better than a random guess. Importantly, it is shown that these initial points do not have to be close to the true components of the tensor. This is an interesting result and a well written paper. The analysis involves two steps: local (points close to the true components) and global (points far from the true components). The number of local maxima in each case is analyzed and shown to be exactly 2n for the former and almost nonexistent for the latter.
### Questions
1. The authors mention that in practice gradient ascent type algorithms succeed. Do these algorithms use any specific initialization that meets your criterion? More precisely, do they use a scheme to produce an initialization which is slightly better than chance, so that the objective value is slightly greater than 3*n? IF YES: please elaborate briefly on how this is done or provide a reference. IF NO: any idea how such initial points could be generated easily? Could it be a very hard problem on its own?
2. The analysis is performed on random tensors, where each true component is assumed to be drawn iid from the Gaussian distribution N(0, I). This seems to be a strong assumption and is not true for real-world problems. While I am not disputing the importance of the result provided in this paper, I was wondering if the authors had thoughts about analysis scenarios that could potentially explain the success of local schemes in a broader setup. For example, performing a worst-case analysis (i.e., choosing the worst tensors) and then showing that these worst tensors form a negligible fraction of the space of tensors? This would go beyond the Gaussian setting and may be more pertinent to explaining why local search succeeds on real-world problems.
### Minor Comments
1. Lemma 2.2, Eq. 2.1: the first E is missing a | at the beginning, right after [.
2. Lines 80, 84, 123: should "local minima" be "local maxima"?
3. You refer to hard optimization as nonconvex, but since your task in 1.1 is defined as a maximization, concave functions are actually ideal for that. To avoid confusion around minimization/maximization/nonconvexity, you might want to change 1.1 from Max f(x) to Min -f(x) so that everything remains a minimization, and thus nonconvexity implies hardness.
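A minimal sketch of the kind of local search the review refers to: projected gradient ascent on the unit sphere for the fourth-order objective f(x) = sum_i <a_i, x>^4. This assumes the paper's objective takes that form; the step size, iteration count, and initialization scheme are illustrative choices, not the authors' settings.

```python
import numpy as np

def gradient_ascent_on_sphere(A, iters=2000, lr=0.01, seed=0):
    # A: (n, d) rows are the components a_i (unknown in practice)
    rng = np.random.default_rng(seed)
    x = rng.normal(size=A.shape[1])
    x /= np.linalg.norm(x)
    for _ in range(iters):
        c = A @ x                          # inner products <a_i, x>
        x = x + lr * 4 * A.T @ (c ** 3)    # gradient of sum_i <a_i, x>^4
        x /= np.linalg.norm(x)             # project back onto the sphere
    return x
```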
nips_2017_1854
Ensemble Sampling Thompson sampling has emerged as an effective heuristic for a broad range of online decision problems. In its basic form, the algorithm requires computing and sampling from a posterior distribution over models, which is tractable only for simple special cases. This paper develops ensemble sampling, which aims to approximate Thompson sampling while maintaining tractability even in the face of complex models such as neural networks. Ensemble sampling dramatically expands on the range of applications for which Thompson sampling is viable. We establish a theoretical basis that supports the approach and present computational results that offer further insight.
This paper proposes ensemble sampling, which essentially approximates Thompson sampling for complex models in a tractable way. This will be useful in a variety of applications. There does not seem to be an obvious mistake in the proofs. However, there are a number of limitations of the proposed method. See below for detailed comments:
1. What is the underlying noise model for the rewards? Please clarify this.
2. The weights of the neural network are assumed to be Gaussian distributed. Why is this a valid assumption? Please explain.
3. Please explain under what assumptions the proposed method can be applied. Can it be applied to other model classes such as generalized linear models and SVMs?
4. A neural network offers more flexibility. But is this truly necessary for applications where the time horizon is not too large and the number of samples is limited?
5. Missing references in lines 85-92. Please add citations wherever necessary.
6. In line 3 of Algorithm 1, the model is picked uniformly at random. Can the results be improved by using a more intelligent sampling scheme?
7. Using a sample from a randomly perturbed prior and perturbed measurements seems similar to the Perturb-and-MAP framework in [1]. Please cite this work and describe how your framework is different.
8. To update the models, it is necessary to store the entire history of observations. This is inefficient from both a memory and a computational point of view.
9. In line 120, "some number of stochastic gradient descent iterations". Please clarify this. How many are necessary? Do you model the error which arises from not solving the minimization problem exactly?
10. In Theorem 3, the approximation error increases with T. The approximate regret is then in some sense linear in T. Please justify this. Also, what is the dimension dependence of the approximation error? And please give some intuition on the linear dependence on the gap \delta.
11. In Section 5, the experiments should show the benefit of using a more complex model class. This is missing. Please justify this.
12. A sampling-based technique should be used as a baseline. This is also missing. Please justify.
13. In line 249, "we used a learning rate of..". How did you tune these hyper-parameters? How robust are your results to these choices?
[1] G. Papandreou and A. Yuille. Gaussian Sampling by Local Perturbations. NIPS 2010.
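A minimal sketch of ensemble sampling for a linear bandit, as summarized above: maintain M models, each with its own perturbed prior and independently perturbed reward observations, and at each step act greedily with a model picked uniformly at random (cf. line 3 of Algorithm 1). The interface (contexts_fn, reward_fn) and all hyperparameters are hypothetical, and the incremental least-squares form is an assumption about the paper's algorithm.

```python
import numpy as np

def ensemble_sampling(contexts_fn, reward_fn, T, M=10, d=5, lam=1.0, sigma=0.5):
    rng = np.random.default_rng(0)
    priors = rng.normal(size=(M, d))              # one perturbed prior per model
    A = np.stack([lam * np.eye(d)] * M)           # regularized Gram matrices
    b = lam * priors.copy()
    thetas = priors.copy()
    for t in range(T):
        X = contexts_fn(t)                        # (K, d) arm features
        m = rng.integers(M)                       # pick a model uniformly
        arm = int(np.argmax(X @ thetas[m]))
        r = reward_fn(t, arm)
        for j in range(M):                        # each model sees its own
            rj = r + sigma * rng.normal()         # perturbed copy of the reward
            A[j] += np.outer(X[arm], X[arm]) / sigma ** 2
            b[j] += X[arm] * rj / sigma ** 2
            thetas[j] = np.linalg.solve(A[j], b[j])
    return thetas
```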
nips_2017_3482
Kernel Feature Selection via Conditional Covariance Minimization We propose a method for feature selection that employs kernel-based measures of independence to find a subset of covariates that is maximally predictive of the response. Building on past work in kernel dimension reduction, we show how to perform feature selection via a constrained optimization problem involving the trace of the conditional covariance operator. We prove various consistency results for this procedure, and also demonstrate that our method compares favorably with other state-of-the-art algorithms on a variety of synthetic and real data sets.
In this paper, the authors propose a new nonlinear feature selection method based on kernels. More specifically, the conditional covariance operator is employed to measure the conditional independence between Y and X given a subset of the features of X, and feature selection is performed by searching for the subset of features that minimizes the trace of the conditional covariance operator. The resulting optimization problem involves a matrix inverse and is hard to optimize, so a novel approach to deal with the matrix inverse is also proposed. Finally, the consistency of the feature selection algorithm is proved. Through experiments, the authors showed that the proposed algorithm compares favorably with existing algorithms. The paper is clearly written and the proposed algorithm is technically sound. Overall, I like the algorithm.
Detailed comments:
1. In the proposed algorithm, some information about tuning parameters is missing, so please include that information in the paper. For example, lambda_2 in Eq. (20). Also, how is the initial w chosen?
2. It would be great to compare with other kernel-based feature selection algorithms such as HSIC-LASSO and KNIFE. Also, related to the first comment, is it possible to further improve the proposed algorithm by properly initializing w? For instance, HSIC-LASSO is a convex approach and can provide reasonable features.
3. The results on TOX-171 are pretty interesting. It would be nice to further investigate the reason why the proposed approach outperforms the rest.
4. It would be nice to have a comparison of redundancy rate; the feature selection community tends to use this measure for evaluation.
Zhao, Z., Wang, L., and Li, H. (2010). Efficient spectral feature selection with minimum redundancy. In AAAI Conference on Artificial Intelligence (AAAI), pages 673-678.
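A minimal numerical sketch of the kind of criterion being minimized, under the assumption that the empirical trace of the conditional covariance operator takes the common Gram-matrix form Tr[G_Y (G_X(w) + n*eps*I)^{-1}] with centered Gaussian kernels and feature weights w. Whether this matches Eq. (20) exactly (and how lambda_2 enters) is an assumption; all names and bandwidths are illustrative.

```python
import numpy as np

def cond_cov_trace(X, Y, w, eps=1e-3, sx=1.0, sy=1.0):
    # X: (n, d) inputs, Y: (n, q) responses, w: (d,) feature weights
    n = X.shape[0]
    Xw = X * w
    Gx = np.exp(-((Xw[:, None, :] - Xw[None, :, :]) ** 2).sum(-1) / (2 * sx ** 2))
    Gy = np.exp(-((Y[:, None, :] - Y[None, :, :]) ** 2).sum(-1) / (2 * sy ** 2))
    H = np.eye(n) - np.ones((n, n)) / n            # centering matrix
    Gx, Gy = H @ Gx @ H, H @ Gy @ H
    return eps * np.trace(Gy @ np.linalg.inv(Gx + n * eps * np.eye(n)))
```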
nips_2017_1915
Multi-Objective Non-parametric Sequential Prediction Online-learning research has mainly been focusing on minimizing one objective function. In many real-world applications, however, several objective functions have to be considered simultaneously. Recently, an algorithm for dealing with several objective functions in the i.i.d. case has been presented. In this paper, we extend the multi-objective framework to the case of stationary and ergodic processes, thus allowing dependencies among observations. We first identify an asymptotic lower bound for any prediction strategy and then present an algorithm whose predictions achieve the optimal solution while fulfilling any continuous and convex constraining criterion.
This paper presents an asymptotic analysis for nonparametric sequential prediction when there are multiple objectives. While the paper has some relevance to a certain subset of the machine learning community, I have some concerns about the paper's relevance to NIPS, and I am also unclear on some of the basic setup of the paper. I discuss these issues in the remainder of the review. Overall, I think the results in the paper are technically strong and, with the right motivation for $\gamma$-feasibility (see below), I would be weakly positive on the paper.
Relevance: While I can see that the contents of the paper have some relevance to machine learning, I feel that this sort of paper would fit much better in a conference like COLT or ALT, given the technical level and given how much this paper could benefit from the additional space offered by those venues. I am not convinced that the problem addressed by this paper is interesting to the broader machine learning community, or even to many theoretical researchers within online learning. I can understand that an asymptotic analysis may be interesting, as it allows one to obtain a first approach to a problem, but if the asymptotic nature of the analysis is fundamental, as it is in this case (the authors mention that positive non-asymptotic results simply are not possible without additional assumptions), then is there hope to finally obtain learning algorithms that can actually be used? I do not see any such hope from this paper, especially in light of the construction of the experts that should be fed into the MHA algorithm.
Motivation for the objective: This paper lacks sufficient motivation for the formal objective; while the high-level objective is clear, namely, to obtain a strategy that minimizes the average $u$-loss subject to the average $c$-loss being at most $\gamma$, the formal rendering of this objective is not sufficiently motivated. I understand why one would want to use strategies that are $\gamma$-bounded (as defined in Definition 1). However, the authors then further restrict to processes that are $\gamma$-feasible, with no motivation for why we should restrict to these processes. How do we know that this restriction is the right one and not too great a restriction? If we restrict less, perhaps we would arrive at a different $\mathcal{V}^*$ that is larger than the one arising from $\gamma$-feasible processes. In particular, with additional space, I feel that the authors should be able to properly explain (or at least give some explanation, as none whatsoever is given in the current paper) why we should care about the conditional expectation of $u(y, X_0)$ (conditional on the infinite past). It is paramount to motivate the $\gamma$-feasibility restriction so that $\mathcal{V}^*$ can be understood as a quantity of fundamental interest.
Other comments on the paper: While the MHA algorithm seems interesting, without an explanation in the main paper of what experts are used, it is difficult to get a sense of what the algorithm is really doing. The authors should devote some space in the main text to explaining the construction of the experts. I wonder if the authors could have provided proof sketches in the main paper for the main results, Theorems 1 and 3, and bought themselves about 3 pages (each of those proofs is about 1.5-2 pages), as well as an exposition that would be clearer to the audience.
MINOR COMMENTS
I looked through part of the proof of Theorem 1 and it seems fine, but I am not an expert in this area, so I am trusting the authors.
In the math display between lines 68 and 69, I think the a.s. should not be there. If it should be there, what does it mean? All the randomness has been removed by the outer expectation.
Line 131: ``an histogram'' --> ``a histogram''
This paper is at times sloppily written. Examples: Line 131, you write $H_{h,k}$ and then later write $k, l = 1, 2, \ldots$, and this swapping of $h$ and $l$ happens throughout the paper.
Math display between lines 107 and 108: The minus sign in front of the last summation on the first line should be a plus sign.
``Optimallity'' -> ``Optimality''
UPDATE AFTER REBUTTAL
I am satisfied with the authors' response regarding the formal framing of the problem and the conditional expectation, provided that they provide details (similar to their rebuttal) in the paper.
nips_2017_768
Learning Koopman Invariant Subspaces for Dynamic Mode Decomposition Spectral decomposition of the Koopman operator is attracting attention as a tool for the analysis of nonlinear dynamical systems. Dynamic mode decomposition is a popular numerical algorithm for Koopman spectral analysis; however, we often need to prepare nonlinear observables manually according to the underlying dynamics, which is not always possible since we may not have any a priori knowledge about them. In this paper, we propose a fully data-driven method for Koopman spectral analysis based on the principle of learning Koopman invariant subspaces from observed data. To this end, we propose minimization of the residual sum of squares of linear least-squares regression to estimate a set of functions that transforms data into a form in which the linear regression fits well. We introduce an implementation with neural networks and evaluate performance empirically using nonlinear dynamical systems and applications.
This paper presents a method that uses MLPs to learn nonlinear functions supporting Koopman analysis of time-series data. The framework is based on Koopman operator theory: the observation that complex, nonlinear dynamics may be embedded via nonlinear functions g such that the spectral properties of a linear operator K can be informative about the dynamics of the system and used to predict future data. The paper proposes a neural network architecture and a set of loss functions suitable for learning this embedding in a Koopman invariant subspace, directly from data. The method is demonstrated both on numerical examples and on a few applications (Lorenz, Rossler, and unstable phenomena data). The paper is exceptionally clearly written, and the use of a neural network for finding Koopman invariant subspaces, a challenging and widely applicable task, is well motivated. The figures nicely illustrate the results, and I find the examples convincing. There are a few questions I was curious about. First, how well does LKIS tolerate larger magnitudes of noise? Does it converge on the right Koopman eigenfunctions, or is there a point past which it fails? Second, how do the g's found by LKIS compare with those found by EDMD and kernel DMD? I read through the authors' rebuttal and my colleagues' reviews, and decided not to change my assessment of the paper.
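A minimal sketch of the learning principle the abstract describes: minimize the residual sum of squares of a linear least-squares fit between embedded consecutive states, with the linear operator K solved in closed form inside the loss. How gradients are propagated through the least-squares solve in the actual implementation is an assumption; names and shapes are illustrative.

```python
import numpy as np

def lkis_loss(G0, G1):
    # G0 = g(x_t), G1 = g(x_{t+1}), each (T, d); g is the learned embedding.
    # The best linear K has a closed form, and the loss is the residual RSS,
    # so minimizing over g seeks a subspace on which the dynamics act linearly.
    K, *_ = np.linalg.lstsq(G0, G1, rcond=None)
    return np.sum((G1 - G0 @ K) ** 2)
```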
nips_2017_1679
Beyond Parity: Fairness Objectives for Collaborative Filtering We study fairness in collaborative-filtering recommender systems, which are sensitive to discrimination that exists in historical data. Biased data can lead collaborative-filtering methods to make unfair predictions for users from minority groups. We identify the insufficiency of existing fairness metrics and propose four new metrics that address different forms of unfairness. These fairness metrics can be optimized by adding fairness terms to the learning objective. Experiments on synthetic and real data show that our new metrics can better measure fairness than the baseline, and that the fairness objectives effectively help reduce unfairness.
In this paper the authors explore different metrics for measuring fairness in recommender systems. In particular, they offer four different metrics that measure whether a recommender system over-estimates or under-estimates how much a group of users will like a particular genre. Further, they show that regularizing by the discrepancy across groups does not hurt model accuracy while improving fairness on the metric used for the regularization (with undiscussed slight effects on the other fairness metrics). In contrast to the work of Hardt et al., this paper focuses on a regression task (rating prediction) and explores the difference between absolute and relative errors. This is an interesting point in recommender systems that could easily be overlooked by other metrics. Further, the clear demonstration of bias on MovieLens data and the ability to improve it through regularization is a valuable contribution. Additionally, the simplicity of the approach and the clarity of the writing should be valued. However, the metrics are also limited in my opinion. In particular, all of the metrics focus on average ratings over different groups. It is easy to imagine a model that has none of the biases discussed in this paper, but performs terribly for one group and great for the other. A more direct extrapolation of equality of opportunity would seem to focus on model accuracy per rating rather than on aggregates. I do not think this ruins the work here, as I mentioned above that the directionality of the errors is an important observation, but I do think it is a limitation that will need to be developed upon in the future. In this vein, a significant amount of recommender systems research is not cited or compared against. Data sparsity, missing-not-at-random observations, learning across different subgroups, using side information, etc. have all been explored in the recommender systems literature. At the least, comparing against other collaborative filtering models that attempt to improve model accuracy with side information would provide better context (as well as reporting the accuracy of the model by both gender and genre). A few pieces of related work are included below.
"Collaborative Filtering and the Missing at Random Assumption" Marlin et al.
"Beyond Globally Optimal: Focused Learning for Improved Recommendations" Beutel et al.
"It takes two to tango: An exploration of domain pairs for cross-domain collaborative filtering" Sahebi et al.
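A minimal sketch of the style of group-based metric under discussion, assuming a "value unfairness" that compares the signed prediction error averaged per group and per item; the exact definition in the paper (and its handling of missing ratings) may differ, and all names are illustrative.

```python
import numpy as np

def value_unfairness(pred, true, group):
    # pred, true: (users, items) rating matrices; group: boolean mask per user.
    # Average signed error per group and item, then the mean absolute gap;
    # a penalty like this can be added to the learning objective as a regularizer.
    e1 = (pred[group] - true[group]).mean(axis=0)
    e2 = (pred[~group] - true[~group]).mean(axis=0)
    return np.abs(e1 - e2).mean()
```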
nips_2017_2743
Online Learning with a Hint We study a variant of online linear optimization where the player receives a hint about the loss function at the beginning of each round. The hint is given in the form of a vector that is weakly correlated with the loss vector on that round. We show that the player can benefit from such a hint if the set of feasible actions is sufficiently round. Specifically, if the set is strongly convex, the hint can be used to guarantee a regret of O(log(T)), and if the set is q-uniformly convex for q ∈ (2, 3), the hint can be used to guarantee a regret of o(√T). In contrast, we establish Ω(√T) lower bounds on regret when the set of feasible actions is a polyhedron.
The paper concerns online linear optimization where, at each trial, the player receives a hint about the loss function prior to prediction. The hint has the form of a unit vector which is weakly correlated with the loss vector (the cosine of its angle with the loss vector is at least alpha). The paper shows that:
- When the set of feasible actions is strongly convex, there exists an algorithm which gets logarithmic regret (in T). The algorithm is obtained by a reduction to the online learning problem with exp-concave losses. The bound is unimprovable in general, as shown in the Lower Bounds section.
- When the set of actions is (C,q)-uniformly convex (the modulus of uniform convexity scales as C eps^q), and the hint is constant across trials, a simple follow-the-leader algorithm (FTL) achieves a regret bound which improves upon O(sqrt(T)) when q is between 2 and 3. This is further extended to the case when the hint comes from a finite set of vectors.
- When the set of actions is a polyhedron, the worst-case regret has the same rate as without hints, i.e. Omega(sqrt(T)), even when the hint is constant across trials.
The first result is obtained by a clever substitution of the original loss function with a virtual loss which exploits the hint vector and is shown to be exp-concave. An algorithm is then proposed which uses as a subroutine any algorithm A_exp for exp-concave online optimization, feeding A_exp with the virtual losses and playing an action whose original loss equals the virtual loss of A_exp's action (by exploiting the hint). Since the virtual loss of the adversary is no larger than its original loss, the algorithm's regret is no more than the regret of A_exp. Interestingly, the second result uses FTL, which does not exploit the hint at all (and thus does not even need to know the hint). So I think this result is more about the constraint on the losses -- there is a direction v along which the loss is positive in each round. This result resembles results from [15] on FTL with curved constraint sets. I think the results from [15] are incomparable to those here, contrary to what the discussion in the appendix says. On the one hand, here the authors extend the analysis to uniformly convex sets (while [15] only works with strongly convex sets). On the other hand, [15] assumes that the cumulative loss vector at each trial is separated from the origin, and this is sufficient to get logarithmic regret; whereas here logarithmic regret is achieved under a different assumption, that each individual loss is positive along v (with constant alpha). The extension of Theorem 4.2 to arbitrary hints (Appendix C) gives a bound which is exponential in the dimension, while for q=2 Section 3 gives a much better algorithm whose regret is only linear in the dimension. Do you think the exponential dependence on d for q > 2 is unimprovable? The paper is very well written, and the exposition of the main results is clear. For each result, the authors give an extended discussion, including illustrative examples and figures, which builds up intuition on why the result holds. I checked all the proofs in the main part of the paper and did not find any technical problems. I found the contribution to online learning theory strong and significant. What the paper lacks, however, is a motivating example of a practical learning problem in which the considered version of hint would be available.
--------------- I have read the rebuttal. The authors clarified the comparison to [15], which I asked about in my review.
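A minimal sketch of the FTL result's simplest special case: on the Euclidean unit ball (the canonical strongly convex set), the follow-the-leader play has a closed form, the unit vector opposing the cumulative loss, and never consults the hint. Function and variable names are illustrative.

```python
import numpy as np

def ftl_unit_ball(loss_vectors):
    # loss_vectors: (T, d); play argmin_{||x||<=1} <L_{t-1}, x> = -L/||L||
    L = np.zeros(loss_vectors.shape[1])
    plays = []
    for g in loss_vectors:
        plays.append(-L / np.linalg.norm(L) if np.linalg.norm(L) > 0
                     else np.zeros_like(L))
        L += g
    return np.array(plays)
```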
nips_2017_406
Controllable Invariance through Adversarial Feature Learning Learning meaningful representations that maintain the content necessary for a particular task while filtering away detrimental variations is a problem of great interest in machine learning. In this paper, we tackle the problem of learning representations invariant to a specific factor or trait of data. The representation learning process is formulated as an adversarial minimax game. We analyze the optimal equilibrium of such a game and find that it amounts to maximizing the uncertainty of inferring the detrimental factor given the representation while maximizing the certainty of making task-specific predictions. On three benchmark tasks, namely fair and bias-free classification, language-independent generation, and lighting-independent image classification, we show that the proposed framework induces an invariant representation, and leads to better generalization evidenced by the improved performance.
The paper proposes to learn invariant features using adversarial training. Given a nuisance factor s (an attribute of the input x), a discriminator tries to predict s from the encoder representation h = E(x, s), while the encoder E tries to make that prediction fail and to predict the desired output. The encoder is a function of x and s. Novelty: The paper draws some similarity with Ganin et al. on unsupervised domain adaptation and their JMLR version. The applications to multilingual machine translation and fairness are, to the best of the reviewer's knowledge, new in this context and are interesting. Comments: - The application of the method to invariant feature learning in object recognition is not realistic. In machine translation, s being the one-hot encoding of the language at hand, or in fairness, s being an attribute of the input, is realistic. But if one wants invariance to rotation or lighting, one cannot supply the angle or the lighting to the encoder. E(x,s) is not realistic for geometric invariance, i.e., when s is indeed hidden and not given at test time. Have you tried in this case, for instance, E being just E(x)? See for instance this work https://arxiv.org/pdf/1612.01928.pdf that actually uses a very similar training with just E(x). - I think the paper should focus more on the multiple-source / single-target domain adaptation aspect. Geometric invariances are confusing in this setting, since the nuisance factor cannot be assumed to be known at test time. - How would you extend this to deal with continuous nuisance factors s?
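A minimal sketch of the adversarial minimax training described above, written in PyTorch; the layer sizes, the trade-off weight lam, and the alternating update schedule are illustrative assumptions rather than the paper's exact setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

enc = nn.Sequential(nn.Linear(20 + 2, 64), nn.ReLU(), nn.Linear(64, 32))  # E(x, s)
task = nn.Linear(32, 2)   # predicts the task label y from h
disc = nn.Linear(32, 2)   # tries to predict the nuisance s from h

opt_model = torch.optim.Adam(list(enc.parameters()) + list(task.parameters()), lr=1e-3)
opt_disc = torch.optim.Adam(disc.parameters(), lr=1e-3)
lam = 1.0   # weight of the invariance term (assumed)

def train_step(x, s, y):
    h = enc(torch.cat([x, F.one_hot(s, 2).float()], dim=1))

    # 1) discriminator step: predict s from a detached representation
    d_loss = F.cross_entropy(disc(h.detach()), s)
    opt_disc.zero_grad(); d_loss.backward(); opt_disc.step()

    # 2) encoder/task step: predict y while fooling the discriminator
    m_loss = F.cross_entropy(task(h), y) - lam * F.cross_entropy(disc(h), s)
    opt_model.zero_grad(); m_loss.backward(); opt_model.step()

x = torch.randn(32, 20)
s = torch.randint(0, 2, (32,))
y = torch.randint(0, 2, (32,))
train_step(x, s, y)
```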
nips_2017_896
Generalized Linear Model Regression under Distance-to-set Penalties Estimation in generalized linear models (GLM) is complicated by the presence of constraints. One can handle constraints by maximizing a penalized log-likelihood. Penalties such as the lasso are effective in high dimensions, but often lead to unwanted shrinkage. This paper explores instead penalizing the squared distance to constraint sets. Distance penalties are more flexible than algebraic and regularization penalties, and avoid the drawback of shrinkage. To optimize distance penalized objectives, we make use of the majorization-minimization principle. Resulting algorithms constructed within this framework are amenable to acceleration and come with global convergence guarantees. Applications to shape constraints, sparse regression, and rank-restricted matrix regression on synthetic and real data showcase strong empirical performance, even under non-convex constraints.
I found much to like about this paper. It was very clear and well-written, and the proposed method is elegant, intuitive, novel (to the best of my knowledge), seemingly well motivated, and widely applicable. In addition, the main ideas in the paper are well-supported by appropriate numerical and real data examples. I would like to see the authors compare their method to two other methods that I would see as major competitors: first, the relaxed lasso (Meinshausen, 2007), wherein we fit the lasso and then do an unpenalized fit using only the variables that the lasso has selected. As I understand the field, this is the most popular method and the one that Hastie, Tibshirani, and Wainwright (2015) recommend for users who want to avoid the bias of the lasso (but note that the shrinkage "bias" is sometimes desirable to counteract the selection bias or "winner's curse" that the selected coefficients may suffer from). Second, best-subsets itself is now feasible using modern mixed-integer optimization methods (Bertsimas, King, and Mazumder, 2016) (if one of these competitors outperforms your method, it would not make me reconsider my recommendation to accept, since the method you propose applies to a much more general class of problems). Bertsimas, Dimitris, Angela King, and Rahul Mazumder. "Best subset selection via a modern optimization lens." The Annals of Statistics 44.2 (2016): 813-852. Hastie, Trevor, Robert Tibshirani, and Martin Wainwright. Statistical learning with sparsity: the lasso and generalizations. CRC press, 2015. Meinshausen, Nicolai. "Relaxed lasso." Computational Statistics & Data Analysis 52.1 (2007): 374-393.
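A minimal sketch of a distance-to-set penalty fit by majorization-minimization, for least squares with a sparsity constraint set; the projection (hard thresholding), the penalty weight rho, and the fixed iteration count are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def project_sparse(beta, s):
    """Project onto the set C of s-sparse vectors: keep the s largest entries."""
    out = np.zeros_like(beta)
    idx = np.argsort(np.abs(beta))[-s:]
    out[idx] = beta[idx]
    return out

def mm_distance_penalty(X, y, s, rho=10.0, iters=200):
    """Minimize 0.5 ||y - X beta||^2 + (rho/2) dist(beta, C)^2 by MM.

    MM step: majorize dist(beta, C)^2 by ||beta - P_C(beta_k)||^2 at the
    current iterate, then solve the resulting ridge-like surrogate exactly.
    """
    n, p = X.shape
    beta = np.zeros(p)
    A = X.T @ X + rho * np.eye(p)
    for _ in range(iters):
        anchor = project_sparse(beta, s)              # P_C(beta_k)
        beta = np.linalg.solve(A, X.T @ y + rho * anchor)
    return project_sparse(beta, s)                    # return a feasible point

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
truth = np.zeros(20); truth[:3] = [2.0, -1.5, 1.0]
y = X @ truth + 0.1 * rng.normal(size=100)
print(mm_distance_penalty(X, y, s=3).round(2))   # recovers the 3-sparse signal
```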
nips_2017_334
f-GANs in an Information Geometric Nutshell Nowozin et al. showed last year how to extend the GAN principle to all f-divergences. The approach is elegant but falls short of a full description of the supervised game, and says little about the key player, the generator: for example, what does the generator actually converge to if solving the GAN game means convergence in some space of parameters? How does that provide hints on the generator's design and compare to the flourishing but almost exclusively experimental literature on the subject? In this paper, we unveil a broad class of distributions for which such convergence happens, namely deformed exponential families, a wide superset of exponential families. We show that current deep architectures are able to factorize a very large number of such densities using an especially compact design, hence displaying the power of deep architectures and their concinnity in the f-GAN game. This result holds given a sufficient condition on activation functions, which turns out to be satisfied by popular choices. The key to our results is a variational generalization of an old theorem that relates the KL divergence between regular exponential families and divergences between their natural parameters. We complete this picture with additional results and experimental insights on how these results may be used to ground further improvements of GAN architectures, via (i) a principled design of the activation functions in the generator and (ii) an explicit integration of proper composite losses' link function in the discriminator.
The authors identify several interesting GAN-related questions, such as to what extent solving the GAN problem implies convergence in parameter space, what the generator is actually fitting when convergence occurs, and (perhaps most relevant from a network-architectural point of view) how to choose the output activation function of the discriminator so as to ensure proper compositeness of the loss function. The authors set out to address these questions within the (information-theoretic) framework of deformed exponential distributions, from which they derive, among other things, the following theoretical results: They present a variational generalization (amenable to f-GAN formulation) of a known theorem that relates an f-divergence between distributions to a corresponding Bregman divergence between the parameters of such distributions. As such, this theorem provides an interesting connection between the information-theoretic viewpoint of measuring dissimilarity between probability distributions and the information-geometric perspective of measuring dissimilarity between the corresponding parameters. They show that under a reversibility assumption, deep generative networks factor as so-called escorts of deformed exponential distributions. Their theoretical investigation furthermore suggests that a careful choice of the hidden activation functions of the generator as well as a proper selection of the output activation function of the discriminator could potentially help to further improve GANs. They also briefly mention an alternative interpretation of the GAN game in the context of expected utility theory. Overall, I find that the authors present some interesting theoretical results; however, I do have several concerns regarding the practical relevance and usefulness of their results in the present form. Questions & concerns: To begin with, many of their theorems and resulting insights hold for the specific case that the data and model distributions P and Q are members of the deformed exponential family. Can the authors justify this assumption, i.e., elaborate on whether it is met in practice and explain whether similar results (such as Theorems 4 & 6) could be derived without this assumption? One of the authors' main contributions, the information-geometric f-GAN identity in Eq. (7), relates the well-known variational f-divergence formulation over distributions to an information-geometric optimization problem over parameters. Can the authors explain what we gain from this parameter-based point of view? Is it possible to implement GANs in terms of this parameter-based optimization, and what would be the benefits? I would really have liked to see experimental results comparing the two approaches to optimization of GANs. Somewhat surprisingly, the authors don't seem to make use of the right-hand side of this identity, other than to assert that solving the GAN game implies convergence in parameter space (provided the residual J(Q) is small). And why is this implication not obvious? Can the authors give a realistic scenario where convergence in the variational f-divergence formulation over distributions does not imply convergence in parameter space? As an interesting practical implication of their theoretical investigation, the authors show that the choice of the output activation function g_f (see section 2.4 in Ref. [34]) matters in order for the GAN loss to have the desirable proper compositeness property.
I found this result particularly interesting and would certainly have liked to see how their theoretically derived g_f (as the composition of f′ and the link function) compares experimentally against the heuristic choice provided in Ref. [34]. Consequences for deep learning and experimental results: Under the assumption that the generator network is reversible, the authors show that there exists an activation function v such that the generator's hidden layers factor exactly as escorts for the deformed exponential family. In Table 2 (left), the authors compare several generator architectures against different choices of activation functions (that may or may not comply with this escort factorization). The plot for the GAN MLP, for instance, indicates that the "theoretically superior" μ-ReLU actually performs worse than the baseline ReLU (limiting case μ → 1). From their analysis, I would however expect the invertible μ-ReLU (satisfying the reversibility assumption) to perform better than the non-invertible baseline (not satisfying the reversibility assumption). Can the authors explain why the μ-ReLU performs worse? Can they comment on whether the DCGAN architecture satisfies the reversibility assumption and how these results compare against the Wasserstein GAN, which is not based on f-divergences (and for which I therefore do not expect their theoretical analysis to hold)? Finally, as far as I can tell, their results on whether one activation (Table 2, center) or link function (Table 2, right) performs better than the others are all within error bars. I don't think this provides sufficient support for their theoretical results to be relevant in practice. Minor concerns: Many of their results are derived within the framework of deformed exponential densities. As the general ML community might not yet be familiar with this, I would welcome it if the authors could provide more intuition as to what a deformation (or signature) and an escort are, for instance. In Theorem 8, the authors derive several upper bounds for J(Q). How do these bounds compare against each other, and what are the implications for which activation functions to use? They also mention that this upper bound on J(Q) is decreasing with the normalization parameter of the escort Z. Can they elaborate a bit more on why this is a good thing and how we can leverage it? In light of these comments, I believe the insights gained from the theoretical analysis are not substantial enough from the point of view of a GAN practitioner.
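A minimal sketch of the f-GAN objective from Ref. [34] that the discussion of g_f refers to, instantiated for the GAN divergence, where the output activation is g_f(v) = -log(1 + exp(-v)) and the conjugate is f*(t) = -log(1 - exp(t)); the discriminator network V and the toy real/fake batches are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgan_objective(V, x_real, x_fake):
    """F(theta, omega) = E_P[g_f(V(x))] - E_Q[f*(g_f(V(x)))], GAN divergence.

    g_f(v) = -log(1 + exp(-v))   output activation mapping scores into dom(f*)
    f*(t)  = -log(1 - exp(t))    conjugate of f(u) = u log u - (u + 1) log(u + 1)
    """
    g_real = -F.softplus(-V(x_real))
    g_fake = -F.softplus(-V(x_fake))
    # note: -f*(g_f(v)) simplifies to -softplus(v), recovering the usual GAN loss
    return g_real.mean() - (-torch.log1p(-torch.exp(g_fake))).mean()

V = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))  # discriminator
print(fgan_objective(V, torch.randn(64, 2), torch.randn(64, 2) + 1.0))
```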
nips_2017_1986
Neural system identification for large populations separating "what" and "where" Neuroscientists classify neurons into different types that perform similar computations at different locations in the visual field. Traditional methods for neural system identification do not capitalize on this separation of "what" and "where". Learning deep convolutional feature spaces that are shared among many neurons provides an exciting path forward, but the architectural design needs to account for data limitations: While new experimental techniques enable recordings from thousands of neurons, experimental time is limited so that one can sample only a small fraction of each neuron's response space. Here, we show that a major bottleneck for fitting convolutional neural networks (CNNs) to neural data is the estimation of the individual receptive field locations -- a problem whose surface has only been scratched thus far. We propose a CNN architecture with a sparse readout layer factorizing the spatial (where) and feature (what) dimensions. Our network scales well to thousands of neurons and short recordings and can be trained end-to-end. We evaluate this architecture on ground-truth data to explore the challenges and limitations of CNN-based system identification. Moreover, we show that our network model outperforms current state-of-the-art system identification models of mouse primary visual cortex.
I am no expert in this field, so right now I cannot fully judge the potential impact of the paper at hand in this field of science. So first I want to ask a few clarifying questions to the authors to give me a clearer picture. I will also give a preliminary judgement at the end. Introduction 34-47: I think it should be mentioned that using more expressive models such as CNNs will come at a cost. Simple models such as GLMs are very interpretable; it is also easier to argue for biological plausibility than with a neural network trained by backpropagation. I do agree with your point about: "Thus, we should be able to learn much more complex nonlinear functions by combining data from many neurons and learning a common feature space from which we can linearly predict the activity of each neuron." Related Work: The approaches of McIntosh et al. [17] and Batty et al. [18] seem to be the most similar to yours. What were the practical and theoretical reasons for not comparing to them in your experiments? 3 Learning a common feature space: This section describes a well-known insight from transfer learning in ML. Maybe establishing a link here would be useful, as you use an ML technique for a neuroscience application. 5 Learning non-linear feature spaces: Is this comparison to GLMs fair? Your proposed architecture seems to share a lot of properties with the ground truth (such as convolutional layers). 5.2 Application to data from primary visual cortex: It seems you needed to make a few modifications to your architecture. Can you respond to a possible critique that says your approach only works better because you have more degrees of freedom to adjust hyperparameters to a given task? Without these adjustments, were you still able to improve over the previous state of the art? -- The paper is well written and my initial impression is that this is a good contribution. The idea of combining data from many neurons and learning a common feature space is intuitive and appears straightforward. I can't fully judge if this is novel or incremental compared to the works of McIntosh et al. [17] and Batty et al. [18], and I have some open questions with regard to the experiments.
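A minimal sketch of the factorized "what"/"where" readout the abstract describes: a shared convolutional core produces a feature tensor, and each neuron's predicted response is a rank-one readout, a spatial mask times a feature-weight vector; the tensor shapes and the softplus output nonlinearity are illustrative assumptions.

```python
import numpy as np

def factorized_readout(features, masks, weights):
    """Predict N neural responses from shared CNN features.

    features: (C, H, W) feature maps from the shared core
    masks:    (N, H, W) per-neuron spatial readout ("where")
    weights:  (N, C)    per-neuron feature readout ("what")
    """
    # r_n = softplus( sum_{c,h,w} masks[n,h,w] * weights[n,c] * features[c,h,w] )
    drive = np.einsum('nhw,nc,chw->n', masks, weights, features)
    return np.log1p(np.exp(drive))   # softplus keeps predicted rates positive

C, H, W, N = 16, 8, 8, 100
rng = np.random.default_rng(0)
rates = factorized_readout(rng.normal(size=(C, H, W)),
                           np.abs(rng.normal(size=(N, H, W))),
                           rng.normal(size=(N, C)))
print(rates.shape)   # (100,)
```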
nips_2017_2374
Visual Interaction Networks: Learning a Physics Simulator from Video From just a glance, humans can make rich predictions about the future of a wide range of physical systems. On the other hand, modern approaches from engineering, robotics, and graphics are often restricted to narrow domains or require information about the underlying state. We introduce the Visual Interaction Network, a general-purpose model for learning the dynamics of a physical system from raw visual observations. Our model consists of a perceptual front-end based on convolutional neural networks and a dynamics predictor based on interaction networks. Through joint training, the perceptual front-end learns to parse a dynamic visual scene into a set of factored latent object representations. The dynamics predictor learns to roll these states forward in time by computing their interactions, producing a predicted physical trajectory of arbitrary length. We found that from just six input video frames the Visual Interaction Network can generate accurate future trajectories of hundreds of time steps on a wide range of physical systems. Our model can also be applied to scenes with invisible objects, inferring their future states from their effects on the visible objects, and can implicitly infer the unknown mass of objects. This work opens new opportunities for model-based decision-making and planning from raw sensory observations in complex physical environments.
Summary --- Interaction Networks (INs) [2] and the Neural Physics Engine [7] are two recent and successful instantiations of a physics engine. In each case a hand-programmed 2d physics engine was used to simulate dynamics of 2d balls that might bounce off each other like billiards, orbit around a made-up gravitational well, or a number of other settings. Given ball states (positions and velocities), these neural approaches predict states at subsequent time steps and thus simulate simple entity interaction. This paper continues in the same vein of research but rather than taking states as input, the proposed architecture takes images directly. As such, the proposed Visual Interaction Networks (VINs) must learn to embed images into a space which captures state information and learn how to simulate 2d physics in that embedding space. The VIN comes in three parts. 1) The visual encoder is a CNN which takes a sequence of frames showing the three most recent states of a simulation and embeds them into a different embedding for each object (the number of objects is fixed). Some tricks help the CNN focus on pairs of adjacent frames and do position computations more easily. 2) The dynamics predictor does the same job as an IN but takes state embeddings as input (from the visual encoder) and predicts state embeddings. It's implemented as a triplet of INs which operate at different time scales and are combined via an MLP. 3) The state decoder is a simple MLP which decodes state embeddings into hand-coded state vectors (positions and velocities). Hand-coded states are extracted from the ground-truth 2d simulation and a loss is computed which forces decoded state embeddings to align to ground-truth states during training, thus allowing the dynamics predictor to learn an appropriate 2d physics model. This model is tested across a range of dynamics similar to that used for INs. Performance is reported both in terms of MSE (how close are state predictions to ground-truth states) and relative to a baseline which only needs to extract states from images but not predict dynamics. Across different numbers of objects and differing dynamics, VINs outperform all baselines except in one setting. In the drift setting, where objects do not interact, VINs are equivalent to an ablated version that removes the relational part from each IN used in the dynamics predictor. When compared to the simulation rolled out by the ground-truth simulator using MSE, all models show increased errors further from the initial state (as expected), but VIN's error is the smallest at every time step. A particularly interesting finding is that VINs even outperform INs which are given ground-truth states and need not decode them from images. This is hypothesized to be due to noise from the visual component of VINs. Finally, videos of VIN rollouts are provided and appear qualitatively to convincingly simulate each intended 2d dynamics. Strengths --- This is a well-written paper which presents the next step in the line of work on neural physics simulators, evaluates the proposed method somewhat thoroughly, and produces clean and convincing results. The presentation also appropriately orients the work relative to old and new neural physics simulators. Weaknesses --- This paper is very clean, so I mainly have nits to pick and suggestions for material that would be interesting to see. In roughly decreasing order of importance: 1. A seemingly important novel feature of the model is the use of multiple INs at different speeds in the dynamics predictor.
This design choice is not ablated. How important is the added complexity? Will one IN do? 2. Section 4.2: To what extent should long-term rollouts be predictable? After a certain amount of time it seems MSE becomes meaningless because too many small errors have accumulated. This is a subtle point that could mislead readers who see relatively large MSEs in figure 4, so perhaps a discussion should be added in section 4.2. 3. The images used in this paper have randomly sampled CIFAR images as backgrounds to make the task harder. While more difficult tasks are more interesting, all other factors being equal, this choice is not well motivated. Why is this particular dimension of difficulty interesting? 4. line 232: This hypothesis could be specified a bit more clearly. How do noisy rollouts contribute to lower rollout error? 5. Are the learned object state embeddings interpretable in any way before decoding? 6. It may be beneficial to spend more time discussing model limitations and other dimensions of generalization. Some suggestions: * The number of entities is fixed and it's not clear how to generalize the model to different numbers of entities (e.g., as shown in figure 3 of the IN paper). * How many different kinds of physical interaction can be in one simulation? * How sensitive is the visual encoder to shorter/longer sequence lengths? Does the model deal well with different frame rates? Preliminary Evaluation --- Clear accept. The only thing which I feel is really missing is the first point in the weaknesses section, but its lack would not merit rejection.
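A minimal sketch of the dynamics predictor described in the review: several relational cores applied at different temporal offsets into the state history, merged by an MLP; the stand-in relational core, the offsets (1, 2, 4), and all layer sizes are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class RelationalCore(nn.Module):
    """Stand-in for an interaction network: pairwise effects summed per object."""
    def __init__(self, d):
        super().__init__()
        self.pair = nn.Sequential(nn.Linear(2 * d, 32), nn.ReLU(), nn.Linear(32, d))
    def forward(self, s):                        # s: (num_objects, d)
        n = s.shape[0]
        pairs = torch.cat([s.repeat_interleave(n, 0), s.repeat(n, 1)], dim=1)
        effects = self.pair(pairs).view(n, n, -1).sum(1)
        return s + effects                       # updated per-object state

class DynamicsPredictor(nn.Module):
    """Merge cores applied at different temporal offsets, then predict the next state."""
    def __init__(self, d, offsets=(1, 2, 4)):
        super().__init__()
        self.offsets = offsets
        self.cores = nn.ModuleList(RelationalCore(d) for _ in offsets)
        self.merge = nn.Sequential(nn.Linear(d * len(offsets), 64), nn.ReLU(),
                                   nn.Linear(64, d))
    def forward(self, history):                  # history: (T, num_objects, d)
        outs = [core(history[-off]) for off, core in zip(self.offsets, self.cores)]
        return self.merge(torch.cat(outs, dim=-1))

pred = DynamicsPredictor(d=8)
print(pred(torch.randn(6, 3, 8)).shape)          # torch.Size([3, 8])
```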
nips_2017_565
Unsupervised learning of object frames by dense equivariant image labelling One of the key challenges of visual perception is to extract abstract models of 3D objects and object categories from visual measurements, which are affected by complex nuisance factors such as viewpoint, occlusion, motion, and deformations. Starting from the recent idea of viewpoint factorization, we propose a new approach that, given a large number of images of an object and no other supervision, can extract a dense object-centric coordinate frame. This coordinate frame is invariant to deformations of the images and comes with a dense equivariant labelling neural network that can map image pixels to their corresponding object coordinates. We demonstrate the applicability of this method to simple articulated objects and deformable objects such as human faces, learning embeddings from random synthetic transformations or optical flow correspondences, all without any manual supervision.
Unsupervised object learning from dense equivariant image labelling An impressive paper, marred by flaws in exposition, all fixable. The aim is to construct an object representation from multiple images, with dense labelling functions (from image to object), without supervision. Experiments seem to be very successful, though the paper would be improved by citing (somewhat) comparable numerical results on the MAFL dataset. The method is conceptually simple, which is a plus. The review of related methods seems good, though I admit to not knowing the field well enough to know what has been missed. Unfortunately, insufficient details of the method are given. In particular, there is no discussion of the relationships between the loss functions used in practice (in (4) and (5)) and those in the theoretical discussion (on page 4, first unnumbered equation and (2)). Also, I have some concerns about the authors' use of the terms "equivariant" and "embedding" and their definition of "distinctive". The standard definition of "equivariant" is: A function f is equivariant with respect to a group of transformations if f(gx) = gf(x) for all x and all transformations g. (We note that the usage of "equivariant" in [30] is correct.) In contrast, f is invariant if f(gx) = f(x) for all x and all transformations g. Your definition in (1) says that \Phi is invariant with respect to image deformations. So please change the manuscript title! Note that both equivariance and invariance are a kind of "compatibility". The word "embedding" usually means an embedding of an object into a larger space (with technical conditions). This doesn't seem to be how the word is used here (though, OK, the sphere is embedded in R^3). The manuscript describes the map \Phi as an embedding. Perhaps "labelling" would be better, as in the title. The definition of "distinctive" on p4, line 124, is equivalent to: invariance of \Phi and injectivity of \Phi(x, \cdot) for every x. On p4, line 27: for a constant labelling, argmax is undefined, which is worrying. Overall the quality of the English is good; however, there are several typos that should be fixed, a few of which are noted below. The paper is similar to [30], which is arXiv 1705.02193 (May 2017, unpublished as far as I know). They have significant overlap; however, both the method and the datasets are different. Minor comments: p1, line 17: typo: "form"->"from" p2, Figure 1 caption: - should be "Left: Given a large number of images" - typos: "coordiante", "waprs" p4, first displayed equation: \alpha undefined. should be (x,g) p5, line 142: \Phi(x,u) has become \Phi_u x without comment. Fig 3 caption doesn't explain clearly what the 2nd and 3rd rows are.
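A worked restatement of the distinction the review draws, with the group action written explicitly; the notation for the transformation group G acting on the domain is an assumption.

```latex
% f : X -> Y, with G a group of transformations acting on X (and on Y where needed)
\text{equivariance:} \quad f(g \cdot x) = g \cdot f(x) \qquad \forall x \in X,\ g \in G
\text{invariance:}   \quad f(g \cdot x) = f(x)         \qquad \forall x \in X,\ g \in G
% The paper's condition (1) has the second form: the labelling \Phi is unchanged
% by image deformations, i.e. it is an invariance, not an equivariance.
```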
nips_2017_2088
Welfare Guarantees from Data Analysis of efficiency of outcomes in game-theoretic settings has been a main item of study at the intersection of economics and computer science. The notion of the price of anarchy takes a worst-case stance on efficiency analysis, considering instance-independent guarantees of efficiency. We propose a data-dependent analog of the price of anarchy that refines this worst-case analysis assuming access to samples of strategic behavior. We focus on auction settings, where the latter is non-trivial due to the private information held by participants. Our approach to bounding the efficiency from data is robust to statistical errors and mis-specification. Unlike traditional econometrics, which seeks to learn the private information of players from observed behavior and then analyze properties of the outcome, we directly quantify the inefficiency without going through the private information. We apply our approach to datasets from a sponsored search auction system and find empirical results that are a significant improvement over bounds from worst-case analysis.
This paper studies a non-worst-case notion of the price of anarchy (PoA). An auction A's PoA is the worst-case ratio between the welfare of the optimal auction and the welfare in a Bayes-Nash equilibrium of A, taken over all value distributions and equilibria. The authors propose a non-worst-case relaxation which they call the distributional PoA (DPoA), which is defined by a distribution D over bids. It measures the same ratio as the original PoA, but only over all value distributions and equilibria that could generate the bid distribution D. They bound the DPoA for single-dimensional settings and show that it can be estimated using samples from D. In one example, the authors provide an upper bound on the DPoA for the single-item first-price auction (Lemma 1). There are a few things I couldn't reproduce in the proof. In line 140, the authors write that 1/\mu * \int_0^{v_i(1 - e^{-\mu})} (1 - G_{-i}(z)) dz <= T_i. Isn't it less than or equal to T_i/\mu? Also, it seems as though if I add 1/\mu * \int_0^{v_i(1 - e^{-\mu})} (1 - G_{-i}(z)) dz to the right-hand side of Equation (8), I'll get 1/\mu * \int_0^{v_i(1 - e^{-\mu})} (1 - G_{-i}(z) + G_{-i}(z)) dz = 1/\mu * \int_0^{v_i(1 - e^{-\mu})} dz = v_i(1 - e^{-\mu})/\mu. (The authors write that it's just v_i(1 - e^{-\mu}).) Lastly, in line 145, the authors write that \sum_i T_i * x_i >= max_i T_i. Since this is a single-item auction, there's exactly one x_i that equals 1 and the rest equal 0. Should the inequality be flipped? (\sum_i T_i * x_i <= max_i T_i.) I would really like to see the proof of Lemma 5 written out. It's not clear to me how it follows immediately from Lemma 9.9 in [5]. Rademacher complexity does have nice compositional properties. However, I'm not convinced, for example, that "multiplication of bid vectors with constants" is covered by Lemma 9.9 in [5]. Lemma 9.9 in [5] applies to function classes mapping some domain X to R. The function class defined by "multiplication of bid vectors with constants" maps from R^{n-1} to R^{n-1}. How exactly does Lemma 9.9 apply? Also, it seems like multiplication would have to be involved in the form of many payment functions. For example, in the first-price auction, the payment function is b_i * x_i, where x_i can be written as {b_i > b_1} * ... * {b_i > b_{i-1}} * {b_i > b_{i+1}} * ... * {b_i > b_n}. Is there a way to write this without multiplication? The authors write that the conditions in Lemma 5 hold for sponsored search auctions. Is there no multiplication in that setting either? I think this paper is currently better written for an economics community as opposed to a learning community. For example, it's not clear to me what a pay-your-bid auction is. On line 187, the authors write that "For any auction, we can re-write the expected utility of a bid b: u_i(b, v_i) = x_i(b)(v_i - p_i(b)/x_i(b))." What if x_i(b) = 0? I think that overall, this is a really interesting contribution to beyond-worst-case analysis and the experiments seem to validate its practical relevance. However, I think the presentation could be improved to make it more understandable for an ICML audience. I'm also hesitant to recommend it for acceptance without a proof of Lemma 5. ======after rebuttal====== I'm willing to believe Lemma 5 if the authors are assuming that the number of bidders is constant. I didn't realize that they were making this assumption. The authors should mention this explicitly in the preliminaries and/or the lemma statement.
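The reviewer's arithmetic check around line 140, written out; this is just the complementary-integral identity the review invokes, in the review's own notation.

```latex
\frac{1}{\mu}\int_0^{v_i(1-e^{-\mu})} \big(1 - G_{-i}(z)\big)\,dz
  \;+\; \frac{1}{\mu}\int_0^{v_i(1-e^{-\mu})} G_{-i}(z)\,dz
  \;=\; \frac{1}{\mu}\int_0^{v_i(1-e^{-\mu})} dz
  \;=\; \frac{v_i\,\big(1-e^{-\mu}\big)}{\mu}.
```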
nips_2017_1347
Differentiable Learning of Logical Rules for Knowledge Base Reasoning We study the problem of learning probabilistic first-order logical rules for knowledge base reasoning. This learning problem is difficult because it requires learning the parameters in a continuous space as well as the structure in a discrete space. We propose a framework, Neural Logic Programming, that combines the parameter and structure learning of first-order logical rules in an end-to-end differentiable model. This approach is inspired by a recently-developed differentiable logic called TensorLog [5], where inference tasks can be compiled into sequences of differentiable operations. We design a neural controller system that learns to compose these operations. Empirically, our method outperforms prior work on multiple knowledge base benchmark datasets, including Freebase and WikiMovies. Learning the structure is a discrete optimization problem, and one that involves search over a potentially large problem space. Many past learning systems have thus used optimization methods that interleave moves in a discrete structure space with moves in parameter space [12,13,14,27]. In this paper, we explore an alternative approach: a completely differentiable system for learning models defined by sets of first-order rules. This allows one to use modern gradient-based programming frameworks and optimization methods for the inductive logic programming task. Our approach is inspired by a differentiable probabilistic logic called TensorLog [5]. TensorLog establishes a connection between inference using first-order rules and sparse matrix multiplication, which enables certain types of logical inference tasks to be compiled into sequences of differentiable numerical operations on matrices. However, TensorLog is limited as a learning system because it only learns parameters, not rules. In order to learn parameters and structure simultaneously in a differentiable framework, we design a neural controller system with an attention mechanism and memory to learn to sequentially compose the primitive differentiable operations used by TensorLog. At each stage of the computation, the controller uses attention to "softly" choose a subset of TensorLog's operations, and then performs the operations with contents selected from the memory. We call our approach neural logic programming, or Neural LP. Experimentally, we show that Neural LP performs well on a number of tasks. It improves the performance in knowledge base completion on several benchmark datasets, such as WordNet18 and Freebase15K [3]. And it obtains state-of-the-art performance on Freebase15KSelected [25], a recent and more challenging variant of Freebase15K. Neural LP also performs well on standard benchmark datasets for statistical relational learning, including datasets about biomedicine and kinship relationships [12]. Since good performance on many of these datasets can be obtained using short rules, we also evaluate Neural LP on a synthetic task which requires longer rules. Finally, we show that Neural LP can perform well in answering partially structured queries, where the query is posed partially in natural language. In particular, Neural LP also obtains state-of-the-art results on the KB version of the WIKIMOVIES dataset [16] for question-answering against a knowledge base. In addition, we show that logical rules can be recovered by executing the learned controller on examples and tracking the attention. To summarize, the contributions of this paper include the following.
First, we describe Neural LP, which is, to our knowledge, the first end-to-end differentiable approach to learning not only the parameters but also the structure of logical rules. Second, we experimentally evaluate Neural LP on several types of knowledge base reasoning tasks, illustrating that this new approach to inductive logic programming outperforms prior work. Third, we illustrate techniques for visualizing a Neural LP model as logical rules.
This paper develops a model for learning to answer queries in knowledge bases with incomplete data about relations between entities. For example, the running example in the paper is answering queries like HasOfficeInCountry(Uber, ?), when the relation is not directly present in the knowledge base, but supporting relations like HasOfficeInCity(Uber, NYC) and CityInCountry(NYC, USA) are. The aim in this work is to learn rules like HasOfficeInCountry(A, B) <= HasOfficeInCountry(A, C) && CityInCountry(C, B). Note that this is a bit different from learning embeddings for entities in a knowledge base, because the rule to be learned is abstract, not depending on any specific entities. The formulation in this paper casts the problem as one of learning two components: - a set of rules, represented as a sequence of relations (those that appear in the RHS of the rule) - a real-valued confidence on the rule. The approach to learning follows ideas from Neural Turing Machines and differentiable program synthesis, whereby the discrete problem is relaxed to a continuous problem by defining a model for executing the rules where all rules are executed at each step and then averaged together with weights given by the confidences. The state that is propagated from step to step is a softmax distribution over entities (though there are some additional details to handle the case where rules are of different lengths). Given this interpretation, the paper then suggests that a recurrent neural network controller can be used to generate the continuous parameters needed for the execution. This is analogous to the neural network controller in Neural Turing Machines, Neural Random Access Machines, etc. After the propagation step, the model is scored based upon how much probability ends up on the ground-truth answer to the query. The key novelty is developing two representations of the confidence-weighted rules (one which is straightforward to optimize as a continuous optimization problem, and one as confidence weights on a set of discrete rules) along with an algorithm for converting from the continuous-optimization-friendly version to the rules version. To my knowledge, this is a novel and interesting idea, which is a neat application of ideas from the neural programs space. Experimentally, the method performs very well relative to what looks to be a strong set of baselines. The experiments are thorough and thoughtful, with a nice mix of qualitative and quantitative results, and the paper is clearly written. One point that I think could use more clarification is the interpretation of what the neural network is learning. It is not quite rules, because (if I understand correctly) the neural controller is effectively deciding which part of the rules to apply next, during the course of execution. This causes me a bit of confusion when looking at Table 3. Are these rules that were learned relative to some query? Or is there a way of extracting query-independent rules from the learned model?
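A minimal sketch of the soft rule execution the review describes, in TensorLog style: the propagated state is a distribution over entities, and each step applies an attention-weighted mixture of relation operator matrices; the dense matrices and the hard two-step attention sequence are illustrative simplifications.

```python
import numpy as np

def soft_rule_step(state, relation_ops, attention):
    """One differentiable inference step: v' proportional to (sum_k a_k M_k) v.

    state:        (E,) distribution over entities
    relation_ops: (K, E, E) one adjacency-style operator per KB relation
    attention:    (K,) softmax weights over relations from the controller
    """
    mixed = np.einsum('k,kij->ij', attention, relation_ops)
    v = mixed @ state
    return v / max(v.sum(), 1e-12)   # renormalize to keep a distribution

# Toy KB with 3 entities and 2 relations; the query starts at entity 0
E, K = 3, 2
ops = np.zeros((K, E, E))
ops[0, 1, 0] = 1.0   # relation 0 maps entity 0 -> entity 1
ops[1, 2, 1] = 1.0   # relation 1 maps entity 1 -> entity 2
v = np.array([1.0, 0.0, 0.0])
for att in ([1.0, 0.0], [0.0, 1.0]):        # a length-2 rule: rel0 then rel1
    v = soft_rule_step(v, ops, np.array(att))
print(v)   # all probability mass lands on entity 2
```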
nips_2017_1121
Adaptive Clustering through Semidefinite Programming We analyze the clustering problem through a flexible probabilistic model that aims to identify an optimal partition on the sample X_1, ..., X_n. We perform exact clustering with high probability using a convex semidefinite estimator that interprets as a corrected, relaxed version of K-means. The estimator is analyzed through a non-asymptotic framework and shown to be optimal or near-optimal in recovering the partition. Furthermore, its performances are shown to be adaptive to the problem's effective dimension, as well as to K, the unknown number of groups in this partition. We illustrate the method's performances in comparison to other classical clustering algorithms with numerical experiments on simulated high-dimensional data. In this work, we build upon the work and context of [6], and transpose and adapt their ideas for point clustering: we introduce a semidefinite estimator for point clustering inspired by the findings of [17] with a correction component originally presented in [6]. We show that it produces a very strong contender for clustering recovery in terms of speed, adaptivity and robustness to model perturbations. In order to do so we produce a flexible probabilistic model inducing an optimal partition of the data that we aim to recover. Using the same structure of proof in a different context, we establish elements of stochastic control (see for instance Lemma A.1 on the concentration of random subgaussian Gram matrices in the supplementary material) to derive conditions of exact clustering recovery with high probability and show optimal performances -- including in high dimensions, improving on [16] -- as well as adaptivity to the effective dimension of the problem. We also show that our results continue to hold without knowledge of the number of structures given one single positive tuning parameter. Lastly we provide evidence of our method's efficiency and further insight from simulated data. Notation. Throughout this work we use the convention 0/0 := 0 and [n] = {1, ..., n}. We write a_n ≲ b_n to mean that a_n is smaller than b_n up to an absolute constant factor. Let S^{d-1} denote the unit sphere in R^d. For q ∈ N^* ∪ {+∞} and ν ∈ R^d, |ν|_q is the l_q-norm, and for M ∈ R^{d×d}, |M|_q, |M|_F and |M|_op are respectively the entry-wise l_q-norm, the Frobenius norm associated with the scalar product ⟨·, ·⟩, and the operator norm. |D|_V is the variation semi-norm of a diagonal matrix D, i.e. the difference between its maximum and minimum elements. Let A ⪰ B mean that A − B is symmetric positive semidefinite.
The paper proposes a semidefinite relaxation for K-means clustering with isotropic and non-isotropic mixtures of sub-gaussian distributions, in both high and low dimensions. The paper has a good technical analysis of the methods proposed in it. There are some supporting empirical experiments too. However, one thing that would have been useful to see is how computation time scales with both p and n. The empirical experiments considered n=30, which is pretty small for many application settings. So, for a better picture, it would have been good to see some results on computation times. The paper is nicely written and the flow of arguments in the paper is relatively clear. There are some minor typos, e.g., in line 111, in the definition of b2, the subscript should be b in [n]\{a, b1}. The paper extends an approach of semidefinite relaxation for K-means clustering to account for non-isotropic behavior of mixture components. The method proposed can work for high-dimensional data too. The method will be quite useful but possibly computationally heavy too. So, an idea about the computational resources necessary to implement this method would have been useful.
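A minimal sketch of a semidefinite relaxation of K-means of the kind the paper corrects and extends (the classical Peng-Wei form), using CVXPY; the Gram-matrix affinity, the default solver, and the thresholding used to read off clusters are illustrative assumptions, and the paper's corrected estimator differs in its exact objective.

```python
import numpy as np
import cvxpy as cp

def sdp_kmeans(X, K):
    """Relaxed K-means: maximize <A, Z> over the SDP cluster-matrix polytope."""
    n = X.shape[0]
    A = X @ X.T                           # affinity (Gram) matrix
    Z = cp.Variable((n, n), PSD=True)
    constraints = [Z >= 0,                # entrywise nonnegative
                   cp.sum(Z, axis=1) == 1,   # each row sums to one
                   cp.trace(Z) == K]      # K clusters
    cp.Problem(cp.Maximize(cp.trace(A @ Z)), constraints).solve()
    return Z.value                        # approximately block-diagonal per cluster

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.2, (10, 2)), rng.normal(3, 0.2, (10, 2))])
Z = sdp_kmeans(X, 2)
print((Z > 1 / 20).astype(int))           # threshold reveals the two blocks
```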
nips_2017_2089
Diving into the shallows: a computational perspective on large-scale shallow learning Remarkable recent success of deep neural networks has not been easy to analyze theoretically. It has been particularly hard to disentangle relative significance of architecture and optimization in achieving accurate classification on large datasets. On the flip side, shallow methods (such as kernel methods) have encountered obstacles in scaling to large data, despite excellent performance on smaller datasets, and extensive theoretical analysis. Practical methods, such as variants of gradient descent used so successfully in deep learning, seem to perform below par when applied to kernel methods. This difficulty has sometimes been attributed to the limitations of shallow architecture. In this paper we identify a basic limitation in gradient descent-based optimization methods when used in conjunction with smooth kernels. Our analysis demonstrates that only a vanishingly small fraction of the function space is reachable after a polynomial number of gradient descent iterations. That drastically limits the approximating power of gradient descent, leading to over-regularization. The issue is purely algorithmic, persisting even in the limit of infinite data. To address this shortcoming in practice, we introduce EigenPro iteration, a simple and direct preconditioning scheme using a small number of approximately computed eigenvectors. It can also be viewed as learning a kernel optimized for gradient descent. Injecting this small, computationally inexpensive, and SGD-compatible amount of approximate second-order information leads to major improvements in convergence. For large data, this leads to a significant performance boost over the state-of-the-art kernel methods. In particular, we are able to match or improve the results reported in the literature at a small fraction of their computational budget. For complete version of this paper see https://arxiv.org/abs/1703.10622.
This paper analyzes the problem of the long time required for convergence of kernel methods when optimized via gradient descent. The authors define a notion of the computational reach of gradient descent after t iterations, and of the failure of gradient descent to reach the epsilon neighborhood of an optimum after t iterations. They give examples of simplistic function settings with binary labels where gradient descent takes a long time to converge to the optimum. They also point out that adding regularization improves the condition number of the eigenspectrum (resulting in possibly better convergence) but also leads to over-regularization at times. They introduce the notion of EigenPro, where they pre-multiply the data using a preconditioner matrix that can be precomputed; this improves the time to convergence by bringing the lower eigenvalues closer to the largest one, while keeping the cost per iteration efficient. They do a randomized SVD of the data matrix/kernel operator to get the eigenvalues and generate the preconditioning matrix using the ratios of the eigenvalues. They show that the per-iteration time is not much higher than for standard kernel methods and present experiments showing that the errors are minimized better than with standard kernel methods using less GPU time overall. The acceleration provided is significant for some of the kernels. The paper is dense in presentation but is well written and not very difficult to follow. However, many more details could be provided on how the preconditioning matrix influences gradient descent in general, and on whether it should always be applied when training a linear model on data. The authors also argue that lowering the ratio of the largest eigenvalue to the smaller ones makes the problem more amenable to convergence. It would be good to see some motivation/geometric description of how the method provides better convergence under gradient descent. It would also be interesting to explore whether other fast algorithms, including proximal methods as well as momentum-based methods, can also benefit from such preconditioning and improve the rates of convergence for kernel methods.
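A minimal sketch of an EigenPro-style preconditioned gradient step: dampen the top-k eigendirections of the kernel matrix so the remaining spectrum is better conditioned; computing exact eigenpairs with numpy (rather than randomized SVD), the simple damping factor, and the step-size choice are illustrative simplifications of the paper's scheme.

```python
import numpy as np

def eigenpro_preconditioner(K_mat, k):
    """Return a function applying P = I - E_k diag(1 - lam_{k+1}/lam_i) E_k^T."""
    lam, E = np.linalg.eigh(K_mat)      # eigenvalues in ascending order
    lam, E = lam[::-1], E[:, ::-1]      # reorder to descending
    scale = 1.0 - lam[k] / lam[:k]      # damping for the top-k directions
    top = E[:, :k]
    return lambda g: g - top @ (scale * (top.T @ g))

# Preconditioned gradient iteration for the kernel system K alpha = y
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
K_mat = np.exp(-0.5 * np.sum((X[:, None] - X[None]) ** 2, axis=-1))  # Gaussian kernel
y = rng.normal(size=200)
P = eigenpro_preconditioner(K_mat, k=10)
alpha = np.zeros(200)
eta = 1.0 / np.linalg.eigvalsh(K_mat)[-11]   # step size 1/lam_{k+1}, here k = 10
for _ in range(100):
    alpha -= eta * P(K_mat @ alpha - y)      # gradient of 0.5 a^T K a - y^T a
print(np.linalg.norm(K_mat @ alpha - y))     # residual shrinks quickly
```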
nips_2017_1120
Deliberation Networks: Sequence Generation Beyond One-Pass Decoding The encoder-decoder framework has achieved promising progress for many sequence generation tasks, including machine translation, text summarization, dialog systems, image captioning, etc. Such a framework adopts a one-pass forward process while decoding and generating a sequence, but lacks the deliberation process: a generated sequence is directly used as final output without further polishing. However, deliberation is a common behavior in humans' daily life, such as reading news and writing papers/articles/books. In this work, we introduce the deliberation process into the encoder-decoder framework and propose deliberation networks for sequence generation. A deliberation network has two levels of decoders, where the first-pass decoder generates a raw sequence and the second-pass decoder polishes and refines the raw sentence with deliberation. Since the second-pass deliberation decoder has global information about what the sequence to be generated might be, it has the potential to generate a better sequence by looking into future words in the raw sentence. Experiments on neural machine translation and text summarization demonstrate the effectiveness of the proposed deliberation networks. On the WMT 2014 English-to-French translation task, our model establishes a new state-of-the-art BLEU score of 41.5.
----- After response: thank you for the thorough response. Two of my major concerns, the weakness of the baseline and the lack of comparison with automatic post-editing, have been resolved by the response. I've raised my evaluation with the expectation that these results will be added to the final camera-ready version. With regard to the examples, the reason why I said "cherry-picked?" (with a question mark) was because there was no mention of how the examples were chosen. If they were chosen randomly or by some other unbiased method, that could be noted in the paper. It's OK to cherry-pick representative examples, of course, and it'd be clearer if this were mentioned as well. Also, if there were some quantitative analysis of which parts of the sentence were getting better, it would be useful, as I noted before. Finally, the explanation that beam search has trouble recovering from search errors at the beginning of the sentence is definitely true. However, the claim that this isn't taken into account at training and test time is less clear. NMT is maximizing the full-sentence likelihood, which treats mistakes at any part of the sentence equally. I think it's important to be accurate here, as this is a major premise of this work. --------- This paper presents a method to train a network to revise results generated by a first-pass decoder in neural sequence-to-sequence models. The second-level model is trained using a sampling-based method where outputs are sampled from the first-pass model. I think the idea in the paper is potentially interesting, but there are a number of concerns that I had about the validity and analysis of the results. It would also be better if the content could be put in the context of prior work. First, it wasn't entirely clear to me that the reasoning behind the proposed method is sound. There is a nice example in the intro where a poet used the final word of the sentence to determine the first word, but at least according to probabilities, despite the fact that standard neural sequence-to-sequence models are *decoding* from left to right, they are still calculating the joint probability over words in the entire sentence, and indirectly consider the words at the end of the sentence in the earlier decisions, because the conditional probability of latter words will be lower if the model makes bad decisions for earlier words. Of course this will not work if greedy decoding is performed, but with beam search it is possible to recover. I think this should be discussed carefully. Second, regarding validity of the results: it seems that the baselines used in the paper are not very representative of the state of the art for the respective tasks. I don't expect every paper to improve SOTA numbers, but for MT, if "RNNSearch" is the original Bahdanau et al. attentional model, it is likely missing most of the improvements that have been made to neural MT over the past several years, and thus it is hard to tell how these results will transfer to a stronger model. For example, the below paper achieves single-model BLEU scores on the NIST04-06 data sets that are 4-7 BLEU points higher than the proposed model: > Deep Neural Machine Translation with Linear Associative Unit. Mingxuan Wang, Zhengdong Lu, Jie Zhou, Qun Liu (ACL 2017) Third, the paper has a nominal amount of qualitative evaluation of (cherry-picked?) examples, but lacks any analysis of why the proposed method is doing better.
It would be quite useful if quantitative analysis could be done to show whether the improvements are indeed due to the network having access to future target words. For example, is the proposed method better at generating the beginnings of sentences, where standard models are worse because the setting is more information-impoverished? Finally, there is quite a bit of previous work that either is similar methodologically or attempts to solve similar problems. The closest is automatic post-editing or pre-translation, such as the following papers: > Junczys-Dowmunt, Marcin, and Roman Grundkiewicz. "Log-linear combinations of monolingual and bilingual neural machine translation models for automatic post-editing." WMT2016. > Niehues, Jan, et al. "Pre-Translation for Neural Machine Translation." COLING2016 These are not trained jointly with the first-pass decoder, but they are quite useful even without joint training. Also, the following paper attempts to perform globally consistent decoding, and introduces bilingual or bidirectional models that can solve a similar problem: > Hoang, Cong Duy Vu, Gholamreza Haffari, and Trevor Cohn. "Decoding as Continuous Optimization in Neural Machine Translation." arXiv 2017.
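A minimal sketch of the two-pass decoding the paper proposes: a first-pass decoder emits a draft, and a second-pass deliberation decoder conditions on both the source encoding and the whole draft, so it can look at "future" draft tokens; the greedy decoding, the callable interfaces, and the step signatures are illustrative assumptions.

```python
def deliberate_decode(encode, first_decoder, second_decoder, src, max_len=50):
    """Two-pass generation: draft with decoder 1, then polish with decoder 2."""
    enc_states = encode(src)

    # Pass 1: produce a raw draft sequence (greedy here, for simplicity)
    draft, draft_states = [], []
    for _ in range(max_len):
        token, state = first_decoder.step(enc_states, draft)
        draft.append(token)
        draft_states.append(state)
        if token == "<eos>":
            break

    # Pass 2: the deliberation decoder attends to BOTH the source encoding
    # and the full draft, so step t can exploit words after position t
    output = []
    for _ in range(max_len):
        token, _ = second_decoder.step(enc_states, draft_states, output)
        output.append(token)
        if token == "<eos>":
            break
    return output
```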
nips_2017_2844
Hierarchical Implicit Models and Likelihood-Free Variational Inference Implicit probabilistic models are a flexible class of models defined by a simulation process for data. They form the basis for theories which encompass our understanding of the physical world. Despite this fundamental nature, the use of implicit models remains limited due to challenges in specifying complex latent structure in them, and in performing inferences in such models with large data sets. In this paper, we first introduce hierarchical implicit models (HIMs). HIMs combine the idea of implicit densities with hierarchical Bayesian modeling, thereby defining models via simulators of data with rich hidden structure. Next, we develop likelihood-free variational inference (LFVI), a scalable variational inference algorithm for HIMs. Key to LFVI is specifying a variational family that is also implicit. This matches the model's flexibility and allows for accurate approximation of the posterior. We demonstrate diverse applications: a large-scale physical simulator for predator-prey populations in ecology; a Bayesian generative adversarial network for discrete data; and a deep implicit model for text generation.
The paper defines a class of probability models -- hierarchical implicit models -- consisting of observations with associated 'local' latent variables that are conditionally independent given a set of 'global' latent variables, and in which the observation likelihood is not assumed to be tractable. It describes an approach for KL-based variational inference in such 'likelihood-free' models, using a GAN-style discriminator to estimate the log ratio between a 'variational joint' q(x, z), constructed using the empirical distribution on observations, and the true model joint density. This approach has the side benefit of supporting implicit variational models ('variational programs'). Proof-of-concept applications are demonstrated to ecological simulation, a Bayesian GAN, and sequence modeling with a stochastic RNN. The exposition is very clear, well cited, and the technical machinery is carefully explained. Although the application of density ratio estimation to variational inference seems to be an idea 'in the air' and building blocks of this paper have appeared elsewhere (for example the Adversarial VB paper), I found this synthesis to be cleaner, easier to follow, and more general (supporting implicit models) than any of the similar papers I've read so far. The definition of hierarchical implicit models is a useful point in theoretical space, and serves to introduce the setup for inference in section 3. However, the factorization (1), which assumes iid observations, is quite restrictive -- I don't believe it technically even includes the Lotka-Volterra or stochastic RNN models explored in the paper itself! (since both have temporal dependence). It seems worth acknowledging that the inference approach in this paper is more general, and perhaps discussing how it could be adapted to problems and models with more structured (time series, text, graph) observations and/or latents. Experiments are probably the weakest point of this paper. The 'Bayesian GAN' is a toy and the classification setup is artificial; supervised learning is not why people care about GANs. The symbol-generation RNN is not evaluated against any other methods and it's not clear it works particularly well. The Lotka-Volterra simulation is the most compelling; although the model has few parameters and no latent variables, it nicely motivates the notion of implicit models and shows clear improvement on the (ABC) state of the art. Overall there are no groundbreaking results, and much of this machinery could be quite tricky to get working in practice (as with vanilla GANs). I wish the experiments were more compelling. But the approach seems general and powerful, with the potential to open up entire new classes of models to effective Bayesian inference, and the formulation in this paper will likely be useful to many researchers as they begin to flesh it out. For that reason I think this paper is a valuable contribution. Misc comments and questions: Lotka-Volterra model: I'm not sure the given eqns (ln 103) are correct. Shouldn't Beta_3 be added, not subtracted, to model the predator birth rate? As written, dx_2/dt is always negative in expectation, which seems wrong. Also, Beta_2 is serving double duty as the predator *and* prey death rate; is this intentional? Most sources (including the cited Papamakarios and Murray paper) seem to use four independent coefficients. line 118: "We described two classes of implicit models" but I only see one?
(HIMs) line 146: "log empirical log q(x_n)" is redundant Suppose we have an implicit model, but want to use an explicit variational approximation (for example the mean-field Gaussian in the Lotka-Volterra experiment). Is there any natural way to exploit the explicit variational density for faster inference? Subtracting the constant log q(x) from the ELBO means the ratio objective (4) no longer yields a lower bound to the true model evidence; this should probably be noted somewhere. Is there an principled interpretation of the quantity (4)? It is a lower bound on log p(x)/q(x), which (waving hands) looks like an estimate of the negative KL divergence between the model and empirical distribution -- maybe this is useful for model criticism?
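To make the discriminator-based log-ratio mechanism discussed above concrete, here is a minimal NumPy sketch: a logistic-regression discriminator is trained to separate samples of q(x, z) from samples of p(x, z); at its optimum (with equal class priors) its logit estimates log q(x, z) - log p(x, z), which can then be plugged into an ELBO-style objective. The toy Gaussian samplers and fixed quadratic features are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_p(n):
    # Samples from a toy "model joint" p(x, z).
    z = rng.normal(0.0, 1.0, size=n)
    x = z + rng.normal(0.0, 0.5, size=n)
    return np.stack([x, z], axis=1)

def sample_q(n):
    # Samples from a toy "variational joint" q(x, z).
    z = rng.normal(0.5, 1.0, size=n)
    x = z + rng.normal(0.0, 0.5, size=n)
    return np.stack([x, z], axis=1)

def features(v):
    # Fixed quadratic features for the discriminator (illustrative choice).
    return np.concatenate([v, v ** 2, np.ones((len(v), 1))], axis=1)

# Train the discriminator: label 1 for q-samples, 0 for p-samples.
Xq, Xp = features(sample_q(5000)), features(sample_p(5000))
X = np.vstack([Xq, Xp])
y = np.concatenate([np.ones(len(Xq)), np.zeros(len(Xp))])
w = np.zeros(X.shape[1])
for _ in range(2000):  # plain gradient ascent on the logistic log-likelihood
    p_hat = 1.0 / (1.0 + np.exp(-X @ w))
    w += 0.1 * X.T @ (y - p_hat) / len(y)

# The logit f(x, z) now estimates log q(x, z) - log p(x, z), so E_q[-f] is a
# Monte Carlo estimate of an ELBO-style objective (up to the constant
# log q(x) term the review mentions).
print(-np.mean(features(sample_q(5000)) @ w))
```

Replacing the logistic regression with a neural network, and the toy samplers with an actual simulator and implicit variational program, gives the GAN-flavoured training loop the review describes.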
nips_2017_1877
Streaming Sparse Gaussian Process Approximations Sparse pseudo-point approximations for Gaussian process (GP) models provide a suite of methods that support deployment of GPs in the large data regime and enable analytic intractabilities to be sidestepped. However, the field lacks a principled method to handle streaming data in which both the posterior distribution over function values and the hyperparameter estimates are updated in an online fashion. The small number of existing approaches either use suboptimal hand-crafted heuristics for hyperparameter learning, or suffer from catastrophic forgetting or slow updating when new data arrive. This paper develops a new principled framework for deploying Gaussian process probabilistic models in the streaming setting, providing methods for learning hyperparameters and optimising pseudo-input locations. The proposed framework is assessed using synthetic and real-world datasets.
This paper presents a variational and alpha-divergence approach specifically designed for the case of Gaussian processes with streaming inputs. The motivation given in the introduction is convincing. The background section is very well presented, sets up the methodology for section 3, and at the same time explains why a batch-based approach would be inadequate. Section 3, which is the most important, is however more difficult to follow. It is not a problem to understand how one equation leads to the other, but many important details are left out of the description, making it very difficult to understand at a conceptual level. In particular, it'd help to explain the concept behind starting from eq. (5). Further, what is p(a|b) in eq. (6)? It seems to be an important element for our understanding of the method, as it somehow involves the update assumption. I was also wondering whether it'd be possible to arrive at the same bound by starting with the joint distribution and simply applying Jensen's inequality together with (4) (see the sketch after this review). That would probably be a clearer presentation. The extension to alpha-divergences and the related discussion are interesting, although I would also expect to see a deeper comparison with [9] at the methodological level. This is important since the proposed method converges to [9] for alpha zero, which is what the authors use in practice. The experiments are structured nicely, starting from an intuitive demonstration, moving to time-series and spatial data and finishing with classification data. The method does seem to perform well and run fast - it seems to be two times slower than SVI-GP, which still makes it a fast method. However, the experiments do not explore the hypothesis of whether the SVI (batch-based) or the streaming-based method performs better, simply because they use different approximations (we see that at time zero SVI is already much worse). Since the uncollapsed bound has been used for classification, I'd be keen to see it for the regression tasks too, to be able to compare with SVI-GP. Indeed, SVI-GP seems to actually reduce the RMSE more than OSGP in Fig. 3. Another useful property to explore/discuss is what happens when the order of the data is changed. Further, the comparisons do not consider any other competitive method which is tailored to streaming data, such as [9] or [Gijsberts and Metta: Real-time model learning using incremental sparse spectrum GP Regression, 2013]. Finally, it'd be useful to either repeat the plots in Figs. 2-3 with the number of data on the x-axis (so all methods are aligned) or to simply keep the current plots and state the difference in running times. Overall this seems like a nice paper, which has good motivation and considers an important problem. However, the key points of the method are not demonstrated or explained adequately. I feel that re-working the presentation and experiments would significantly improve this paper. EDIT after rebuttal period: The authors have clarified most of my concerns and I have raised my score to reflect that.
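On the reviewer's Jensen's-inequality question: a generic derivation of a streaming bound of this flavour starts from the predictive joint and lower-bounds it with an arbitrary q(f). Whether this recovers the paper's exact bound in eqs. (5)-(6) depends on details of the pseudo-point parameterization not reproduced here:

```latex
\log p(\mathbf{y}_{\mathrm{new}} \mid \mathbf{y}_{\mathrm{old}})
= \log \int p(\mathbf{y}_{\mathrm{new}} \mid f)\, p(f \mid \mathbf{y}_{\mathrm{old}})\, df
\;\geq\; \mathbb{E}_{q(f)}\!\big[\log p(\mathbf{y}_{\mathrm{new}} \mid f)\big]
- \mathrm{KL}\big(q(f)\,\big\|\,p(f \mid \mathbf{y}_{\mathrm{old}})\big),
```

with the intractable old posterior p(f | y_old) replaced in practice by the previous variational approximation q_old(f), which is where approximation error can accumulate across updates.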
nips_2017_3348
Spectral Mixture Kernels for Multi-Output Gaussian Processes Early approaches to multiple-output Gaussian processes (MOGPs) relied on linear combinations of independent, latent, single-output Gaussian processes (GPs). This resulted in cross-covariance functions with limited parametric interpretation, thus conflicting with the ability of single-output GPs to provide interpretable lengthscales, frequencies and magnitudes, among other quantities. By contrast, current approaches to MOGP are able to better interpret the relationship between different channels by directly modelling the cross-covariances as a spectral mixture kernel with a phase shift. We extend this rationale and propose a parametric family of complex-valued cross-spectral densities and then build on Cramér's Theorem (the multivariate version of Bochner's Theorem) to provide a principled approach to designing multivariate covariance functions. The so-constructed kernels are able to model delays among channels in addition to phase differences and are thus more expressive than previous methods, while also providing full parametric interpretation of the relationship across channels. The proposed method is first validated on synthetic data and then compared to existing MOGP methods on two real-world examples.
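The design principle in this abstract rests on the Bochner/Cramér correspondence between covariances and spectral densities. One common statement (written from memory, so normalization conventions may differ from the paper's) is:

```latex
K_{ij}(\tau) = \int_{\mathbb{R}^d} e^{\,2\pi i\, \omega^{\top} \tau}\, S_{ij}(\omega)\, d\omega ,
```

where the matrix [S_ij(omega)] must be Hermitian positive semi-definite at every frequency omega; any parametric family of cross-spectral densities satisfying this condition yields valid cross-covariances K_ij(tau) by inverse Fourier transform.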
The paper formulates a new type of covariance function for multi-output Gaussian processes. Instead of directly specifying the cross-covariance functions, the authors propose to specify the cross-power spectral densities and to use the inverse Fourier transform to find the corresponding cross-covariances. The paper follows a similar idea to the one proposed by [6] for single-output Gaussian processes, where the power spectral density was represented through a mixture of Gaussians and then back-transformed to obtain a powerful kernel function with extrapolation abilities. The authors applied the newly established covariance to a synthetic example and to two real-data examples, the weather dataset and the Swiss Jura dataset for metal concentrations. Extending the idea from [6] sounds like a clever way to build new types of covariance functions for multiple outputs. One of the motivations in [6] was to build a kernel function with the ability to perform extrapolation. It would have been interesting to endow this multi-output covariance function with the same ability. The experimental section only provides results on very common datasets, and the results obtained are similar to the ones obtained with other covariance functions for multiple outputs. The impact of the paper could be increased by including experiments on datasets for which none of the other models are suitable. Did you test for the statistical significance of the results in Tables 1 and 2? It seems that, within the standard deviation, almost all models provide similar performance.
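As a concrete reference point, here is a small NumPy sketch of the single-output spectral mixture kernel from [6] that this construction generalizes, together with one illustrative cross-channel component carrying a delay theta and a phase phi. The cross term is an assumption for illustration; the paper's actual multi-output parameterization may differ.

```python
import numpy as np

def sm_kernel(tau, weights, means, variances):
    """Single-output spectral mixture kernel k(tau) for 1-D inputs
    (Wilson & Adams, 2013): a mixture of Gaussians in the spectral
    domain becomes a sum of damped cosines in the covariance domain."""
    tau = np.asarray(tau, dtype=float)
    k = np.zeros_like(tau)
    for w, mu, v in zip(weights, means, variances):
        k += w * np.exp(-2.0 * np.pi**2 * tau**2 * v) * np.cos(2.0 * np.pi * mu * tau)
    return k

def cross_sm_kernel(tau, w, mu, v, theta, phi):
    """Illustrative cross-covariance between two channels: a single
    mixture component with a delay (theta) and phase shift (phi)."""
    shifted = np.asarray(tau, dtype=float) + theta
    return w * np.exp(-2.0 * np.pi**2 * shifted**2 * v) * np.cos(2.0 * np.pi * mu * shifted + phi)

taus = np.linspace(-3, 3, 7)
print(sm_kernel(taus, weights=[1.0, 0.5], means=[0.3, 1.2], variances=[0.05, 0.1]))
print(cross_sm_kernel(taus, w=0.8, mu=0.3, v=0.05, theta=0.4, phi=0.6))
```

Note how the delay simply translates the envelope and the phase shifts the oscillation: these are exactly the two cross-channel effects the abstract claims earlier MOGP kernels could not both capture.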
nips_2017_3209
When Worlds Collide: Integrating Different Counterfactual Assumptions in Fairness Machine learning is now being used to make crucial decisions about people's lives. For nearly all of these decisions there is a risk that individuals of a certain race, gender, sexual orientation, or any other subpopulation are unfairly discriminated against. Our recent method has demonstrated how to use techniques from counterfactual inference to make predictions fair across different subpopulations. This method requires that one provides the causal model that generated the data at hand. In general, validating all causal implications of the model is not possible without further assumptions. Hence, it is desirable to integrate competing causal models to provide counterfactually fair decisions, regardless of which causal "world" is the correct one. In this paper, we show how it is possible to make predictions that are approximately fair with respect to multiple possible causal models at once, thus mitigating the problem of exact causal specification. We frame the goal of learning a fair classifier as an optimization problem with fairness constraints entailed by competing causal explanations. We show how this optimization problem can be efficiently solved using gradient-based methods. We demonstrate the flexibility of our model on two real-world fair classification problems. We show that our model can seamlessly balance fairness in multiple worlds with prediction accuracy.
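Schematically, the optimization described in this abstract can be written as follows. The exact constraint form and relaxation in the paper may differ; this is a hedged reading of the abstract:

```latex
\min_{\theta}\; \mathbb{E}\big[\ell\big(\hat{y}_{\theta}(X),\, Y\big)\big]
\quad \text{subject to} \quad
\mathbb{E}\big[\big|\hat{y}_{\theta}\big(X^{(j)}_{A \leftarrow a'}\big) - \hat{y}_{\theta}\big(X^{(j)}_{A \leftarrow a}\big)\big|\big] \leq \epsilon
\quad \text{for each candidate world } \mathcal{M}_j ,
```

where X^{(j)}_{A <- a} denotes the counterfactual features obtained in causal model M_j by intervening on the sensitive attribute A; the constraints become penalty terms when the problem is solved with gradient-based methods.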
The authors consider a novel supervised learning problem with fairness constraints, where the goal is to find an optimal predictor that is counterfactually fair under a list of candidate causal models. The parameterizations of each candidate causal model are known. The authors incorporate the fairness constraint as a regularization term in the loss function (sketched after this review). Evaluations are performed on two real-world datasets, and results show that the proposed method balances fairness in multiple worlds with prediction accuracy. While the idea of exploring a novel fairness measure in counterfactual semantics and enforcing it over multiple candidate models is interesting, there are a few issues I find confusing, which are listed next: 1. The Counterfactual Fairness definition (Def 1) is not clear. It is not immediate to see which counterfactual quantity the authors are trying to measure. Eq. (2), the probabilistic counterfactual fairness definition, measures the total causal effect (P(Y_x)) of the sensitive feature X on the predicted outcome Y. It is a relatively simple counterfactual quantity, which can be computed directly by physically setting X to a fixed value, without using Pearl's three-step algorithm. 2. If the authors are referring to the total causal effect, it is thus unnecessary to use a rather complicated algorithm to compute counterfactuals (lines 87-94). If the authors are indeed referring to the counterfactual fairness defined in [1], the motivation for using this novel counterfactual fairness measure has not been properly justified. A newly proposed fairness definition should often fall into one of the following categories: i. It provides a stronger condition for existing fairness definitions; ii. It captures discrimination that is not covered by existing definitions; or iii. It provides a reasonable relaxation to improve prediction accuracy. I went back to the original counterfactual fairness paper [1] and found discussions regarding this problem. Since this is a rather recent result, it would be better if the authors could explain it a bit further in the background section. 3. The contribution of this paper seems to be incremental unless I am missing something. The authors claim that the proposed technique "learns a fair predictor without knowing the true causal model", but it still requires a finite list of known candidate causal models and then ranges over them. The natural question at this point is how to obtain a list of candidate models. In the causal literature, "not knowing the true causal model" often means that only observational data is available, let alone that a list of fully parametrized candidate models exists. The relaxation considered in this paper may be a good starting point, but it does not address the fundamental challenge of the unavailability of the underlying model. Minor comments: - All references to Figure 2 in Sec. 2.2 should be Figure 1. [1] Matt J Kusner, Joshua R Loftus, Chris Russell, and Ricardo Silva. Counterfactual fairness. arXiv preprint arXiv:1703.06856, 2017. Post-rebuttal: Some of the main issues were addressed by the authors. One of the issues was around Definition 1, and I believe the authors can fix that in the camera-ready version. Connections with existing fairness measures can be added in the Supplement. Still, unless I am missing something, it seems the paper requires that each of the causal models is *fully* specified, which means that they know precisely the underlying structural functions and distributions over the exogenous variables.
This is, generally, an overly strong requirement. I felt that the argument in the rebuttal saying that “These may come from expert knowledge or causal discovery algorithms like the popular PC or FCI algorithms [P. Spirtes, C. Glymour, and R. Scheines. “Causation, Prediction, and Search”, 2000]” is somewhat misleading. Even when learning algorithms like FCI can pin down a unique causal structure (almost never the case), it’s still not accurate to say that they provide the fully specified model with the structural functions and distributions over the exogenous variables. If one doesn’t have this type of knowledge and setting, one cannot run Pearl’s three-step algorithm. I am, therefore, unable to find any reasonable justification or setting that would support the feasibility of the proposed approach.
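A toy NumPy sketch of the multi-world fairness penalty described in the abstract follows. Each candidate causal model is assumed to expose a counterfactual() sampler -- exactly the strong "fully specified model" requirement this review questions. The interface, names, and the sign-flipping stand-in world are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def predictor(theta, x):
    # Toy linear score; any differentiable model would do.
    return x @ theta

def unfairness(theta, world, x, a):
    # Mean gap between factual and counterfactual predictions in one world.
    x_cf = world.counterfactual(x, a, a_prime=1 - a)  # assumed interface
    return np.mean(np.abs(predictor(theta, x) - predictor(theta, x_cf)))

def total_loss(theta, worlds, x, y, a, lam=1.0):
    # Prediction loss plus a fairness penalty summed over candidate worlds;
    # lam trades accuracy against multi-world fairness.
    pred_loss = np.mean((predictor(theta, x) - y) ** 2)
    fair_pen = sum(unfairness(theta, w, x, a) for w in worlds)
    return pred_loss + lam * fair_pen

class ToyWorld:
    # Stand-in causal model: its "counterfactual" flips the sign of one
    # feature column. Pure illustration of the required interface.
    def __init__(self, col):
        self.col = col
    def counterfactual(self, x, a, a_prime):
        x_cf = x.copy()
        x_cf[:, self.col] *= -1.0
        return x_cf

x = rng.normal(size=(200, 3))
y = x[:, 0] + 0.1 * rng.normal(size=200)
a = (rng.random(200) < 0.5).astype(int)
theta = rng.normal(size=3)
print(total_loss(theta, [ToyWorld(1), ToyWorld(2)], x, y, a, lam=0.5))
```

Minimizing total_loss over theta with any gradient-based optimizer recovers the penalized form of the constrained problem; the review's complaint is that writing a faithful counterfactual() requires the full structural model.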
nips_2017_2669
Real-Time Bidding with Side Information We consider the problem of repeated bidding in online advertising auctions when some side information (e.g. browser cookies) is available ahead of submitting a bid in the form of a d-dimensional vector. The goal for the advertiser is to maximize the total utility (e.g. the total number of clicks) derived from displaying ads given that a limited budget B is allocated for a given time horizon T. Optimizing the bids is modeled as a contextual Multi-Armed Bandit (MAB) problem with a knapsack constraint and a continuum of arms. We develop UCB-type algorithms that combine two streams of literature: the confidence-set approach to linear contextual MABs and the probabilistic bisection search method for stochastic root-finding. Under mild assumptions on the underlying unknown distribution, we establish distribution-independent regret bounds of order Õ(d·√T) when either B = ∞ or when B scales linearly with T.
The paper studies the problem of repeated bidding in online ad auctions where the goal is to maximize the accumulated reward over a fixed horizon in the presence of a budget constraint. It is also assumed that a context vector is available at each auction which summarizes the specific information about the targeted person and the website. The expected value of an ad is considered to be a linear function of this context vector. This is a generalization of the setting considered in Weed et al. (2016) in two directions: first, having a budget constraint, and second, the availability of the side information. The paper proposes a bidding strategy in such a setting and provides a regret bound for it. While the paper is technically sound and well-written, here are a few comments on the formulated problem. 1. It has been assumed in lines 51-53 that p_t is an iid process. This seems to be far from reality, as p_t is the maximum bid of the competitors, each of which might be a learning agent (just like us). In this case, each competitor selects its bids based on its observed history. This suggests that the competitors' bid (and hence p_t) is not a stationary process and, more importantly, is correlated with our bids. This is true even if each competitor values the ad differently and in a way specific to it. This adversarial correlation is actually one of the main differences between auctions and conventional bandit problems, which has been ruled out in the paper by assuming that p_t is generated from a stationary distribution in an iid manner. 2. Assumption 1, which plays an important role in the results of the paper, states that p_t and v_t are conditionally independent given the side information x_t. Aside from the above reason (comment 1), and even if we assume that the competitors are not learning agents, this seems like a very restrictive assumption. x_t is commonly shared among all the bidders, and all of them select their bids based on it. Consider, for example, a scenario in which a group of the bidders (including us) is interested in a specific type of (target person, website) pair, and they will bid higher when the associated x_t is reported. Thus, there is a correlation between those competitors' bids (and hence p_t) and our bids given x_t. There is clearly a correlation between our bid and our value v_t (because we are a smart bidder) and hence v_t and p_t cannot be independent given x_t. A few typos: 1. Assumption 1 in the Supplementary Materials is in fact Assumption 2 of the main paper. 2. There is a period missing at the end of line 313 and one at the end of Lemma 8. 3. At the end of Section C in the Supplementary Materials it should be Lemma 3 of [2], not Lemma 11 of [2].
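For readers unfamiliar with the machinery the abstract references, here is a toy NumPy sketch of the linear-contextual UCB component: a ridge-regression estimate of the value v_t = ⟨theta*, x_t⟩ plus an exploration bonus, used optimistically as the bid. The paper's actual algorithm additionally runs a probabilistic bisection search over bids and enforces the budget B; both are omitted here, and the iid uniform p_t below embodies exactly the stationarity assumption this review questions.

```python
import numpy as np

d, T, beta, lam = 5, 1000, 1.0, 1.0
rng = np.random.default_rng(0)
theta_star = rng.normal(size=d) / np.sqrt(d)  # unknown value parameter

A = lam * np.eye(d)   # regularized Gram matrix of observed contexts
b = np.zeros(d)       # accumulated value-weighted contexts

for t in range(T):
    x = rng.normal(size=d) / np.sqrt(d)           # context vector
    theta_hat = np.linalg.solve(A, b)             # ridge estimate
    bonus = beta * np.sqrt(x @ np.linalg.solve(A, x))
    ucb_value = x @ theta_hat + bonus             # optimistic value estimate
    bid = max(ucb_value, 0.0)                     # bid the optimistic value

    p = rng.uniform(0.0, 1.0)                     # competitors' max bid (iid toy)
    if bid >= p:                                  # win the auction ...
        v = x @ theta_star + 0.1 * rng.normal()   # ... and observe a noisy value
        A += np.outer(x, x)                       # update the ridge statistics
        b += v * x

# Estimation error of the value parameter after T rounds.
print(np.linalg.norm(np.linalg.solve(A, b) - theta_star))
```

Because the value is only observed on won auctions, the estimate is updated only inside the if-branch -- the censored-feedback structure that makes this problem harder than a standard linear bandit.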