nips_2018_6839
DropBlock: A regularization method for convolutional networks Deep neural networks often work well when they are over-parameterized and trained with a massive amount of noise and regularization, such as weight decay and dropout. Although dropout is widely used as a regularization technique for fully connected layers, it is often less effective for convolutional layers. This lack of success of dropout for convolutional layers is perhaps due to the fact that activation units in convolutional layers are spatially correlated, so information can still flow through convolutional networks despite dropout. Thus a structured form of dropout is needed to regularize convolutional networks. In this paper, we introduce DropBlock, a form of structured dropout, where units in a contiguous region of a feature map are dropped together. We found that applying DropBlock in skip connections in addition to the convolution layers increases accuracy. Also, gradually increasing the number of dropped units during training leads to better accuracy and more robustness to hyperparameter choices. Extensive experiments show that DropBlock works better than dropout in regularizing convolutional networks. On ImageNet classification, the ResNet-50 architecture with DropBlock achieves 78.13% accuracy, a more than 1.6% improvement over the baseline. On COCO detection, DropBlock improves the Average Precision of RetinaNet from 36.8% to 38.4%. Figure 1: (a) input image to a convolutional neural network. The green regions in (b) and (c) include the activation units which contain semantic information in the input image. Dropping out activations at random is not effective in removing semantic information because nearby activations contain closely related information. Instead, dropping contiguous regions can remove certain semantic information (e.g., head or feet) and consequently forces the remaining units to learn features for classifying the input image.
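To make the mechanism concrete, here is a minimal PyTorch sketch of the block-masking idea described in the abstract. It is not the authors' implementation: the seed-rate formula follows the paper's description of deriving gamma from keep_prob and block_size, the rescaling step is one plausible analogue of dropout's 1/(1-p) compensation, seeds are sampled over the whole map rather than only the valid region, and an odd block_size is assumed so the pooled mask keeps the input resolution.

```python
import torch
import torch.nn.functional as F

def dropblock(x, keep_prob=0.9, block_size=7, training=True):
    # x: (N, C, H, W) feature map; identity at evaluation time.
    if not training or keep_prob >= 1.0:
        return x
    n, c, h, w = x.shape
    # Seed-drop rate chosen so the expected fraction of dropped units
    # matches (1 - keep_prob), per the paper's description.
    gamma = ((1.0 - keep_prob) / block_size ** 2) * (h * w) / \
            ((h - block_size + 1) * (w - block_size + 1))
    seeds = (torch.rand(n, c, h, w, device=x.device) < gamma).float()
    # Expand every seed into a block_size x block_size square via max-pooling.
    block_mask = 1.0 - F.max_pool2d(seeds, kernel_size=block_size,
                                    stride=1, padding=block_size // 2)
    # Rescale so the expected activation magnitude is preserved (an assumed
    # DropBlock analogue of dropout's 1/(1-p) compensation).
    return x * block_mask * block_mask.numel() / block_mask.sum().clamp(min=1.0)
```

Setting block_size=1 recovers ordinary dropout under this sketch, which is one way to see DropBlock as a structured generalization.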
The authors present a version of dropout that zeros out square blocks of neurons rather than individual neurons. They observe robust improvements relative to dropout. This is a good paper, with potentially very widespread applicability. The authors only present validation error in all their studies, but not test error. They claim that this is "common practice". I quickly checked the ResNet / DenseNet / Stochastic Depth / ShakeShake and ShakeDrop papers. Only one paper (DenseNet) seems to use validation error consistently and one (ResNet) partially. So I wouldn't call reporting the validation error common practice. In any case, this is a terrible practice for obvious reasons, so I would urge any author (including the authors of this paper) to break the trend by reporting test error. If the authors feel that reporting validation error is necessary for comparison, they can report both. Did you use the validation set for hyperparameter tuning? If so, in what way? Since you write "we first estimate the block_size_factor to use (typically 0.5)", it seems you are using it for at least some tuning. How is reporting validation error on the final model then still unbiased? I am confused by your treatment of the DropBlock hyperparameters. On page 3, you start by saying that you don't set gamma and block_size directly, but block_size_factor and keep_prob. But then in line 156 you say "gamma = 0.4". I am assuming you mean "keep_prob = 0.4"? Similarly, in table 2 / line 210, you use gamma, but I think you mean keep_prob. How do you compensate for DropBlock? E.g. in dropout, if a feature is dropped with probability gamma, all other outputs are multiplied with 1/(1-gamma). How do you handle this in DropBlock? Have you tried block_size_factor=1, i.e. dropping entire channels? What happens then? ### Additional comments post-rebuttal ### I apologize for the stupid comment regarding the ImageNet test set. I didn't know that it is "secret". "We are happy to create a holdout partition of training data and re-do the experiments for the final submission of the paper." I am envious of your seemingly unlimited GPU access :) But seriously, I wouldn't want to burden you with this if everyone else is using the validation set for final numbers. I will leave it up to you whether you think having a test partition would improve the paper. The block_size_factor=1 results are interesting. Maybe you can find space to include them in the final paper. Also, you write "we first estimate the block_size_factor to use (typically 0.5)". I'd like to see more details on how this estimation happened. Also, I can't find drop_block_size for figure 4 (it's probably 0.5 ...). In figure 4, right side, "after sum" is missing an 'm'.
nips_2018_1568
Practical Methods for Graph Two-Sample Testing Hypothesis testing for graphs has been an important tool in applied research fields for more than two decades, and still remains a challenging problem as one often needs to draw inference from few replicates of large graphs. Recent studies in statistics and learning theory have provided some theoretical insights about such high-dimensional graph testing problems, but the practicality of the developed theoretical methods remains an open question. In this paper, we consider the problem of two-sample testing of large graphs. We demonstrate the practical merits and limitations of existing theoretical tests and their bootstrapped variants. We also propose two new tests based on asymptotic distributions. We show that these tests are computationally less expensive and, in some cases, more reliable than the existing methods.
This paper studies the problem of two-sample testing of large graphs under the inhomogeneous Erdős-Rényi model. This model is pretty generic, and assumes that an undirected edge (ij) is in the graph with probability P_{ij} independently of all other edges. Most generically the parameter matrix P could be anything symmetric (zero diagonal), but common models are the stochastic block model or the mixed membership stochastic block model, which both result in P being low rank. Suppose there were two random graph distributions, parameterized by matrices P and Q, and the goal is to test whether P = Q or not (the null hypothesis being that they are equal). They assume that the graphs are vertex-aligned, which helps as it reduces the problem of searching over permutations to align the graphs. If one has many samples from each distribution, and the number of samples goes to infinity, then there are kernel-based tests constructed based on comparing the graph adjacency or Laplacian matrices. This paper focuses on the setting where there are only a few samples from each distribution, or even possibly only one sample from each distribution. When the number of samples per distribution is greater than 1, then the paper uses test statistics previously introduced by Ghoshdastidar, one which looks at the spectral norm of differences between the mean adjacency matrices of the two distributions, and the other which looks at correlation of edges in the average adjacency matrices of the two distributions. The limitation of previous tests was that the theoretical thresholds used in the test are too large (for moderate graph sizes), so they estimate the threshold by bootstrapping, which can be quite expensive. The contribution of this paper is to provide an asymptotic analysis of these test statistics as the number of vertices goes to infinity, showing that the test statistic is asymptotically dominated by a standard normal random variable. This suggests an easy-to-compute threshold to use for the hypothesis test which does not require bootstrapping samples to compute. The asymptotics are as the number of vertices goes to infinity rather than the number of samples going to infinity, but the separation condition between P and Q that the test can distinguish between decreases as the number of samples per distribution increases. When there is only one sample per distribution, most previous methods are limited to assuming that P and Q are low rank. The authors propose a test based on the asymptotic distribution of eigenvalues of a matrix related to the difference between adjacency matrices (but each entry scaled by a term related to the variance of that edge). Basically they show that if one knew the parameter matrices P and Q, then the extreme eigenvalues of the matrix C defined in eq (9) follow the Tracy-Widom law, thus the spectral radius of C is a useful test statistic. Since in practice P and Q are not known, the authors then propose to estimate P and Q using community detection algorithms such as spectral clustering, where P and Q are approximated by stochastic block models. I think this last part is more heuristic, as the authors did not mention whether the theoretical results on spectral clustering are tight enough to provide analogous results as Theorem 4 in the setting where P and Q are estimated. I enjoyed reading the paper and found the results quite interesting and clearly presented.
Comment about organization: I feel that since the formal problem statement pertains to the entire paper (m > 1 and m=1), it shouldn't be a subsection within the m \to \infty section; maybe make the problem statement its own section.
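For readers who want the flavor of the m=1 test described in this review, here is a hedged sketch. The entrywise scaling below is an assumption reconstructed from the review's description of the matrix C in eq. (9) (difference of adjacency matrices, scaled by edge variances); P_hat and Q_hat stand in for the estimated parameter matrices, and the Tracy-Widom recentering/rescaling constants are omitted.

```python
import numpy as np

def scaled_difference_stat(A, B, P_hat, Q_hat, eps=1e-8):
    # A, B: symmetric 0/1 adjacency matrices of the two observed graphs.
    # P_hat, Q_hat: estimated edge-probability matrices (e.g., from spectral
    # clustering under a stochastic-block-model approximation).
    n = A.shape[0]
    var = P_hat * (1 - P_hat) + Q_hat * (1 - Q_hat)
    C = (A - B) / np.sqrt((n - 1) * var + eps)   # assumed entrywise scaling
    np.fill_diagonal(C, 0.0)
    # The spectral radius of the symmetric matrix C is the test statistic;
    # per the review, its extreme eigenvalues follow a Tracy-Widom law
    # under the null, which supplies the rejection threshold.
    return np.abs(np.linalg.eigvalsh(C)).max()
```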
nips_2018_5500
Quadrature-based features for kernel approximation We consider the problem of improving kernel approximation via randomized feature maps. These maps arise as Monte Carlo approximations to integral representations of kernel functions and scale up kernel methods for larger datasets. Based on an efficient numerical integration technique, we propose a unifying approach that reinterprets the previous random features methods and extends to better estimates of the kernel approximation. We derive the convergence behaviour and conduct an extensive empirical study that supports our hypothesis.
Post rebuttal: Thanks for the feedback. Summary of the paper: This paper proposes a novel class of random feature maps that subsumes the standard random Fourier features by Rahimi and orthogonal random features by [16]. The approach is based on spherical radial quadrature rules by Genz and Monahan [18]. Some theoretical guarantees are provided, and impressive experimental results are reported. ~~~ Quality ~~~ Strength: Extensive experiments have been conducted, showing impressive empirical performance of the proposed approach. Also some theoretical guarantees are provided. Weakness: The theoretical error bounds seem not very tight, and it is theoretically not very clear why the proposed method works well; see the detailed comments below. I guess this would be because of the property of spherical radial rules being exact for polynomials up to a certain order. ~~~ Clarity ~~~ Strength: This paper is basically written well, and connections to the existing approaches are discussed concisely. Weakness: It would be possible to explain the theoretical properties of spherical quadrature rules more explicitly, and this would help understand the behaviour of the proposed method. ~~~ Originality ~~~ Strength: I believe the use of spherical radial quadrature is really novel. Weakness: It seems that the theoretical arguments basically follow those of [32]. ~~~ Significance ~~~ Strength: The proposed framework subsumes the standard random Fourier features by Rahimi and orthogonal random features by [16]; this is very nice, as it provides a unifying framework for existing approaches to random feature maps. Also the experimental results are impressive (in particular for approximating kernel matrices). The proposed framework would indicate a new research direction, encouraging the use of classical quadrature rules from the numerical analysis literature. %%% Detailed comments %%% ++ On the properties of spherical radial rules ++ You mentioned that the weights of a spherical radial rule [18] are derived so that they are exact for polynomials up to a certain order and unbiased for other functions. We have the following comments: - Given that [18] is a rather minor work (guessing from the number of citations; I was not aware of this work), it would be beneficial for the reader to state this property in the form of a Theorem or something similar, at least in the Appendix. From the current presentation it is not very clear what exactly this property means mathematically. - Are spherical radial rules related to Gauss-Hermite quadrature, which are (deterministic) quadrature rules that are exact for polynomials up to a certain order and deal with Gaussian measures? ++ Line 109 and Footnote 3 ++ From what you mentioned, with some probability $a_0$ can be an imaginary number. What should one do if this is the case? ++ Proposition 3.1 ++ The upper bound is polynomial in $\epsilon$, which seems quite loose, compared to the exponential bound derived in the following paper: Optimal rates for random Fourier features B. K. Sriperumbudur and Z. Szabo Neural Information Processing Systems, 2015 ++ Proposition 3.2 ++ What is $\lambda$? Where is this defined?
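For context on what the quadrature-based maps generalize, here is a minimal implementation of the standard random Fourier features of Rahimi and Recht for the Gaussian kernel. This baseline is well established; it is not the paper's quadrature construction, and the Gaussian-kernel parameterization k(x,y) = exp(-gamma * ||x-y||^2) is an assumed convention.

```python
import numpy as np

def rff_features(X, num_features, gamma=1.0, seed=0):
    # Random Fourier features: z(x)^T z(y) approximates
    # k(x, y) = exp(-gamma * ||x - y||^2) in expectation.
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Spectral measure of this Gaussian kernel is N(0, 2*gamma*I).
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, num_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=num_features)
    return np.sqrt(2.0 / num_features) * np.cos(X @ W + b)
```

The quadrature view in the paper replaces this plain Monte Carlo sampling of W with structured spherical-radial integration nodes, which is where the improved kernel-matrix approximation comes from.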
nips_2018_2453
Geometry-Aware Recurrent Neural Networks for Active Visual Recognition We present recurrent geometry-aware neural networks that integrate visual information across multiple views of a scene into 3D latent feature tensors, while maintaining a one-to-one mapping between 3D physical locations in the world scene and latent feature locations. Object detection, object segmentation, and 3D reconstruction are then carried out directly using the constructed 3D feature memory, as opposed to any of the input 2D images. The proposed models are equipped with differentiable egomotion-aware feature warping and (learned) depth-aware unprojection operations to achieve geometrically consistent mapping between the features in the input frame and the constructed latent model of the scene. We empirically show the proposed model generalizes much better than geometry-unaware LSTM/GRU networks, especially in the presence of multiple objects and cross-object occlusions. Combined with active view selection policies, our model learns to select informative viewpoints to integrate information from, by "undoing" cross-object occlusions, seamlessly combining geometry with learning from experience.
The paper proposes a method for reconstructing, segmenting and recognizing object instances in cluttered scenes in an active vision context, i.e. in situations where the camera can be moved actively by an agent, for instance a robot. Active vision has been investigated intensively in the recent past, and there seems to be a convergence between different communities, which start to develop similar methodologies for different objectives: (i) research in agent control, in particular with POMDP/Deep-RL, tackles realistic settings requiring visual recognition and includes geometry in its research, and (ii) research in vision/object recognition discovered Deep-RL to tackle active vision and creates neural/differentiable formulations of geometry-based methods. The proposed method belongs to the second category and solves the problem with a 3D tensor representation, to which different subsequent views are unprojected and integrated. Depth and FG masks are estimated directly from RGB input using U-nets, and the different 3D representations are aligned with the first viewpoint in the sequence using ground-truth odometry (using ground-truth ego-motion is standard in the community when working with simulated environments). From the 3D representation, which is updated using a recurrent layer, different output layers estimate different desired quantities: reconstruction map (voxel occupancy), segmentation map, etc. A policy output estimates the next view from a discrete action space. The method is very interesting and addresses many open problems, in particular how to perform these estimations from cluttered scenes on multiple objects jointly. Methodology-wise, it builds on existing work, adding several key components, in particular the 3D voxel-like representation which is updated with recurrent networks, as well as instance-wise output through clustering, which is responsible for the multiple-object capabilities. Here, we can remark that the number of objects in the scene needs to be known beforehand, since clustering is performed using k-means instead of a more sophisticated method capable of estimating the number of instances. This is ok for the performed experiments on the ShapeNet dataset but seems to be an important restriction, unless some of the discovered instances can be estimated as no-objects using an additional confidence output (as in the 2D object detection literature, e.g. YOLO, SSD etc.). The paper is easy to read and in large parts very easily understandable. However, that said, I have a couple of issues with the presentation and the lack of description of key details, which unfortunately concern the key differences to existing work. The paper is very dense and it is clearly difficult to present all the details, but the choices made here are not optimal. The first (and minor) part concerns the recurrent memory update, which is intuitively clear, but which lacks equations relating all key quantities to each other. The updates are described as being "recurrent", but here it is necessary to provide clearer indications on what is input, what is output, whether there is a dedicated hidden recurrent layer or whether the 3D tensor plays the role of the recurrent representation, etc. It is, of course, not necessary to provide all the detailed equations of the different gates of a complex GRU cell; however, an RNN-like description is compact and can provide all relevant details, while it can simply be mentioned that GRU gates are part of the implementation.
The second part concerns the way the output is dealt with to estimate object instances. Firstly, an embedding tensor is mentioned but never introduced. It kind of becomes clearer in the following description, but this description is too short and not motivated. We don't know the goal of this part of the paper, and we discover the objectives while reading it. The goal is to estimate outputs for each instance in the scene, which requires identifying the object instances first, done by clustering. This should be better motivated and related to the embedding tensor. Metric learning is thrown into the text in a quick way, referencing DRLIM (reference 11). In order to make the paper self-contained, this should be properly developed. Finally, all the different losses should be given correctly with equations. In particular, how does the method deal with the fact that multiple outputs are created for the scene (the different discovered object instances) and need to be matched to the multiple ground-truth objects? In the 2D object detection literature, this is tackled in different ways, either using bipartite graph matching / the Hungarian algorithm (MultiBox) or through a grid-based heuristic (YOLO, SSD). The experiments are convincing and show good performance against strong baselines. On a minor note, "random" and "oneway" are only shown in figure 3 and not figure 2. === After rebuttal === After the rebuttal I keep my positive rating, since the rebuttal addressed some of my concerns. However, the following 4 references are very close to the paper and should be cited and properly referenced: - MapNet (CVPR 18): http://www.robots.ox.ac.uk/~joao/mapnet/ - IQA (CVPR 18): https://pjreddie.com/media/files/papers/IQA.pdf - Im2Pano3D (CVPR 18): https://arxiv.org/pdf/1712.04569.pdf - Neural Map (ICLR 2017): https://arxiv.org/abs/1702.08360
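A hypothetical sketch of the instance-extraction step as the review describes it: clustering a per-voxel embedding tensor with k-means, with the number of objects k assumed known a priori (the restriction the review points out). The tensor shape and function names are illustrative, not the authors'.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_instances(embedding, k):
    # embedding: (D, H, W, E) per-voxel metric-learned embedding tensor.
    # Voxels belonging to the same object should be close in embedding
    # space, so k-means on the flattened voxels yields instance labels.
    d, h, w, e = embedding.shape
    flat = embedding.reshape(-1, e)
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(flat)
    return labels.reshape(d, h, w)
```

A confidence head that rejects "no-object" clusters, as the review suggests, would relax the known-k assumption.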
nips_2018_2384
Bandit Learning with Positive Externalities In many platforms, user arrivals exhibit a self-reinforcing behavior: future user arrivals are likely to have preferences similar to users who were satisfied in the past. In other words, arrivals exhibit positive externalities. We study multi-armed bandit (MAB) problems with positive externalities. We show that the self-reinforcing preferences may lead standard benchmark algorithms such as UCB to exhibit linear regret. We develop a new algorithm, Balanced Exploration (BE), which explores arms carefully to avoid suboptimal convergence of arrivals before sufficient evidence is gathered. We also introduce an adaptive variant of BE which successively eliminates suboptimal arms. We analyze their asymptotic regret, and establish optimality by showing that no algorithm can perform better.
In this paper, the authors study the effects of positive externalities in MAB. In real platforms such as news sites, the positive externalities come from the fact that if one of the arms generates rewards, then the users who prefer that arm become more likely to arrive in the future. If the platform knows the user preferences or can infer them from data, the usual way of handling positive externalities is to use contextual bandits: the services are personalized for each kind of user. However, most of the time the contexts are not very informative about the user preferences. The usual alternative is to consider that the reward process evolves over time, and hence to use adversarial bandits or piecewise stationary bandits. Here, the authors propose to take advantage of prior knowledge on how the reward process evolves over time: the positive externalities. The positive externalities change the usual trade-off between exploration and exploitation. Indeed, the effects of the choices of the platform are amplified. If the platform chooses the optimal arm, then this choice is amplified by the arrival of the users that like the optimal arm. However, if the algorithm chooses a sub-optimal arm, the price to pay in terms of future rewards can be dramatic. In order to analyze the positive externality effect, the authors introduce the regret against an Oracle which knows the optimal arm. Depending on the value of \alpha, which measures the strength of positive externalities, a regret lower bound for MAB with positive externalities is provided. Then they bring out that classical approaches are suboptimal. Firstly, they show that the UCB algorithm achieves linear regret for bandits with positive externalities. Secondly, they show that an explore-then-exploit algorithm may incur linear regret when \alpha > 1. A first algorithm called Balanced Exploration (BE) is introduced. In the exploration phase the arm which has the lowest cumulated reward is chosen, while in the exploitation phase the arm which has been chosen the least in the exploration phase is played. It is worth noting that the BE algorithm only needs to know the time horizon. The analysis of this algorithm shows that it is near optimal. A second algorithm, which assumes that in addition to the time horizon the parameters of the positive externalities (\alpha and \theta_a) are known, is proposed. This algorithm uses an unbiased estimator of the mean reward thanks to the knowledge of the arrival probability. The algorithm plays the arm that has received the least rewards in order to explore. In order to exploit, the suboptimal arms are eliminated when their upper bounds are less than the lower bound of the estimated best arm. The authors show that Balanced Exploration with Arm Elimination is optimal. I regret that there are no experiments, because it would be interesting for the reader to observe the gap in performance between BE, where the arrival probabilities are unknown, and BE-AE, where the arrival probabilities are known. Overall, the paper is very interesting and I vote for acceptance. Minor Comments: The notation \theta^\alpha in line 151 is misleading. You use the power-\alpha notation for N_a^\alpha and for \theta^\alpha but it does not mean the same thing. I suggest using \theta(\alpha). Line 196, there is a typo: u_a(0) = \infty and not 0.
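A rough sketch of Balanced Exploration as paraphrased in this review: exploration feeds the arm with the lowest cumulative reward, then exploitation commits to the arm least chosen during exploration. The pull interface and the exploration-phase length are assumptions for illustration; the paper's actual phase schedule, which depends on the horizon T, is not reproduced here.

```python
import numpy as np

def balanced_exploration(pull, K, T, explore_len):
    # pull(a) -> stochastic reward of arm a; K arms, horizon T.
    cum_reward = np.zeros(K)
    n_pulls = np.zeros(K, dtype=int)
    for _ in range(explore_len):
        a = int(np.argmin(cum_reward))   # balance: play the weakest-looking arm
        cum_reward[a] += pull(a)
        n_pulls[a] += 1
    a_star = int(np.argmin(n_pulls))     # arm needing the least "help" to keep up
    total = cum_reward.sum()
    for _ in range(T - explore_len):     # commit for the rest of the horizon
        total += pull(a_star)
    return total
```

The counterintuitive argmin choices are the point: under positive externalities, greedy play lets early luck snowball, so the algorithm deliberately props up trailing arms until the evidence is decisive.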
nips_2018_1484
Continuous-time Value Function Approximation in Reproducing Kernel Hilbert Spaces Motivated by the success of reinforcement learning (RL) for discrete-time tasks such as AlphaGo and Atari games, there has been a recent surge of interest in using RL for continuous-time control of physical systems (cf. many challenging tasks in OpenAI Gym and DeepMind Control Suite). Since discretization of time is susceptible to error, it is methodologically more desirable to handle the system dynamics directly in continuous time. However, very few techniques exist for continuous-time RL and they lack flexibility in value function approximation. In this paper, we propose a novel framework for model-based continuous-time value function approximation in reproducing kernel Hilbert spaces. The resulting framework is so flexible that it can accommodate any kind of kernel-based approach, such as Gaussian processes and kernel adaptive filters, and it allows us to handle uncertainties and nonstationarity without prior knowledge about the environment or what basis functions to employ. We demonstrate the validity of the presented framework through experiments.
Strengths 1. Considering dynamic programming problems in continuous time, such that the methodologies and tools of dynamical systems and stochastic differential equations apply, is interesting, and the authors do a good job of motivating the generalities of the problem context. 2. The authors initially describe problems/methods in very general terms, which helps preserve understandability of the first sections. Weaknesses 1. The parameterizations considered of the value functions at the end of the day belong to discrete time, due to the need to discretize the SDEs and sample the state-action-reward triples. Given this discrete implementation, and the fact that experimentally the authors run into the conventional difficulties of discrete-time algorithms with continuous state-action function approximation, I am a little bewildered as to what the actual benefit is of this problem formulation, especially since it requires a redefinition of the value function as one that is compatible with SDEs (eqn. (4)). That is, the intrinsic theoretical benefits of this perspective are not clear, especially since the main theorem is expressed in terms of RKHS only. 2. In the experiments, the authors mention kernel adaptive filters (aka kernel LMS) or Gaussian processes as potential avenues of pursuit for addressing the function estimation in continuous domains. However, these methods are fundamentally limited by their sample complexity bottleneck, i.e., the quadratic complexity in the sample size. There's some experimental reference to forgetting factors, but this issue can be addressed in a rigorous manner that preserves convergence while breaking the bottleneck, see, e.g., A. Koppel, G. Warnell, E. Stump, and A. Ribeiro, "Parsimonious online learning with kernels via sparse projections in function space," arXiv preprint arXiv:1612.04111, 2016. Simply applying these methods without consideration for the fact that the sample size conceptually approaches infinity makes an update of the form (16) inapplicable to RL in general. Evaluating the Bellman operator requires computing an expected value. 3. Moreover, the limited complexity of the numerical evaluation is reflective of this complexity bottleneck, in my opinion. There are far more effective RKHS value function estimation methods than GPTD in terms of value function estimation quality and memory efficiency: A. Koppel, G. Warnell, E. Stump, P. Stone, and A. Ribeiro, "Policy Evaluation in Continuous MDPs with Efficient Kernelized Gradient Temporal Difference," in IEEE Trans. Automatic Control (submitted), Dec. 2017. It's strange that the authors only compare against a mediocre benchmark rather than the state of the art. 4. The discussion at the beginning of section 3 doesn't make sense or is written in a somewhat self-contradictory manner. The authors should take greater care to explain the difference between value function estimation challenges due to unobservability, and value function estimation problems that come up directly from trying to solve Bellman's evaluation equation. I'm not sure what is meant in this discussion. 5. Also, regarding L87-88: value function estimation is NOT akin to supervised learning unless one does Monte Carlo rollouts to make empirical approximations of one of the expectations, due to the double-sampling problem, as discussed in R. S. Sutton, H. R. Maei, and C. Szepesvari, "A convergent O(n) temporal-difference algorithm for off-policy learning with linear function approximation," in Advances in Neural Information Processing Systems, 2009, pp. 1609-1616, and analyzed in great detail in: V. R. Konda and J. N. Tsitsiklis, "Convergence rate of linear two-timescale stochastic approximation," Annals of Applied Probability, pp. 796-819, 2004. 6. The Algorithm 1 pseudo-code is strangely broad, so as to be hand-waving. There are no specifics of a method that could actually be implemented, or even computed in the abstract. Algorithm 1 could just as well say "train a deep network" in the inner loop of an algorithm, which is unacceptable, and not how pseudo-code works. Specifically, one can't simply "choose at random" an RKHS function estimation algorithm and plug it in and assume it works, since the lion's share of methods for doing so either require infinite memory in the limit or employ memory reduction that causes divergence. 7. L107-114 seems speculative or overly opinionated. This should be stated as a remark, or an aside in a Discussion section, or removed. 8. A general comment: there are no transitions between sections, which is not good for readability. 9. Again, the experiments are overly limited, so as not to be convincing. GPTD is a very simplistic algorithm which is not even guaranteed to preserve posterior consistency, aka it is a divergent Bayesian method. Therefore, it seems like a straw-man comparison. And this comparison is conducted on a synthetic example, whereas most RL works at least consider a rudimentary OpenAI problem such as Mountain Car, if not a real robotics, power systems, or financial application.
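The double-sampling point in item 5 can be stated compactly: minimizing the sampled squared temporal difference is not the same as minimizing the Bellman error, because an expectation sits inside the square. A sketch of the distinction:

```latex
% Mean-squared Bellman error: the expectation over s' is INSIDE the square,
\bar{\mathcal{B}}(V) \;=\; \mathbb{E}_{s}\!\Big[\big(\mathbb{E}_{s'\mid s}\big[\,r(s,s') + \gamma V(s')\,\big] - V(s)\big)^{2}\Big].
% A single sampled transition (s, r, s') gives an unbiased estimate only of
% \mathbb{E}_{s,s'}\big[(r + \gamma V(s') - V(s))^{2}\big],
% which exceeds \bar{\mathcal{B}}(V) by the conditional variance of the TD
% target, so the two objectives have different minimizers -- hence the need
% for two independent samples of s' ("double sampling") or the two-timescale
% constructions in the works cited above.
```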
nips_2018_1485
Gradient Descent Meets Shift-and-Invert Preconditioning for Eigenvector Computation Shift-and-invert preconditioning, as a classic acceleration technique for leading eigenvector computation, has received much attention again recently, owing to fast least-squares solvers for efficiently approximating matrix inversions in power iterations. In this work, we adopt an inexact Riemannian gradient descent perspective to investigate this technique with respect to the step-size scheme. The shift-and-inverted power method is included as a special case with adaptive step-sizes. Particularly, two other step-size settings, i.e., constant step-sizes and Barzilai-Borwein (BB) step-sizes, are examined theoretically and/or empirically. We present a novel convergence analysis for the constant step-size setting that achieves a rate of Õ(·), where λ_i represents the i-th largest eigenvalue of the given real symmetric matrix and p is the multiplicity of λ_1. Our experimental studies show that the proposed algorithm can be significantly faster than the shift-and-inverted power method in practice.
This paper proposes an algorithm which combines shift-and-invert preconditioning and gradient descent to compute the leading eigenvector. Shift-and-invert preconditioning is a common preconditioning technique, and gradient descent is one of the most popular optimization algorithms, so this combination seems reasonable. They establish the complexity of this method by using a somewhat new potential function. Pros: The main difference of this paper with previous works is that it allows the multiplicity of the leading eigenvalue to be larger than 1, while most works assume an eigen-gap between the largest and second largest eigenvalue. In addition, this paper introduces a somewhat new potential function, which depends on the angle between a vector and the subspace spanned by the eigenvectors. More specifically, the proof uses two potential functions, one is log(u) and the other is 1 - sin^2(v), where u and v depend on the angle. The second one is not new, but the first one, log(.), seems to be new. Cons: There are quite a few issues to be addressed/clarified. 1. Complexity. Perhaps the biggest issue is: the complexity proved in this work is not shown to be better than the other existing results (a bit subtle; see explanation below). In fact, we need to discuss two cases: Case 1: the multiplicity of the largest eigenvalue is 1, i.e. $\lambda_1 > \lambda_2 = \lambda_{p+1}$. In this case, the complexity of this work is not better than those in [Musco and Musco, 2015] and [Garber et al., 2016] (in fact, the same). Case 2: the multiplicity of the leading eigenvalue is larger than 1. In this case, the proposed rate is the "best" so far. However, most other works do not consider this scenario, so this is not a fair comparison. Overall, the improvement only in Case 2 seems to be a minor progress. BTW: in Table 1, it seems that $\sqrt{\lambda_1}$ is missing from the numerator of the rate for "this work". 2. Limited Novelty of the Proposed Algorithm. There are already a few works which combine shift-and-invert preconditioning with some common base methods, such as accelerated gradient descent [Wang et al., 2017], accelerated randomized coordinate descent methods [Wang et al., 2017] or SVRG [Garber et al., 2016]. It is not surprising to use gradient descent (GD) for the subproblems. From the arguments made in the paper, I fail to see why GD would be a better choice. First, the authors claim that GD is better because Lanczos algorithms do not work for positive definite matrices, but GD does (line 39). However, some other base methods, such as "coordinate-wise" methods [Wang et al., 2017], also work for positive definite matrices. Second, the authors claim GD will not get stuck at saddle points with probability one from a random starting point. The authors make this point by citing [Pitaval et al., 2015] (line 42). However, [Pitaval et al., 2015] is about a quite different problem. In addition, people tend to believe all reasonable first-order methods will not get stuck at saddle points. The warm-start technique (line 158) is also not new to the community. The same formulation of warm-start has been used in [Garber et al., 2016]. 3. Algorithm Details. The first negative sign in line 4 of Algorithm 1 should be removed. From equation (3), the Riemannian gradient is equal to the Euclidean gradient left-multiplied by $(I - x_{t-1} x_{t-1}^T)$. In Algorithm 1, $y_t$ is the approximate Euclidean gradient. Therefore, the approximate Riemannian gradient in line 4 should be $(I - x_{t-1} x_{t-1}^T) y_t$.
Another thing that requires clarification is the choice of stepsize. In Algorithm 1, the authors pick the stepsize at every iteration such that it satisfies the second inequality in line 176. Why do the authors pick the stepsize at each iteration $t$, while the second inequality in line 176 does not depend on the iteration $t$? Why not pick a constant stepsize before running the iterations? 4. Experiments. The authors only compare the Riemannian gradient descent solver with/without shift-and-invert preconditioning, and the shift-and-inverted power method. As I mentioned before, the authors do not show that the complexity of this work is better than [Musco and Musco, 2015] (Block Krylov) and [Garber et al., 2016] (shift-and-inverted power method). Therefore, it is reasonable to also consider comparing with these methods. 5. Minor issues: -- It seems that the authors did not explicitly define the Riemannian gradient. -- Line 171 should be Lemma 3.2 rather than 3.3. After the rebuttal: I've increased my score by one, due to the good empirical performance. The tone of this paper is like "our contribution is to combine two methods, and get the best rate so far", and my above comments said "combining two methods is rather straightforward, and the rate does not improve previous ones" -- thus a clear reject. If the authors motivate the paper from an empirical perspective by saying "we find the combination of SI+GD works really well, so to promote it we provide theoretical justification", the contribution may be clearer. Considering the above reasons, I think this paper is slightly below the acceptance threshold. [Garber et al., 2016] Dan Garber, Elad Hazan, Chi Jin, Sham M. Kakade, Cameron Musco, Praneeth Netrapalli, and Aaron Sidford. Faster eigenvector computation via shift-and-invert preconditioning. In International Conference on Machine Learning, pages 2626-2634, 2016. [Musco and Musco, 2015] Cameron Musco and Christopher Musco. Randomized block Krylov methods for stronger and faster approximate singular value decomposition. In Advances in Neural Information Processing Systems, pages 1396-1404, 2015. [Pitaval et al., 2015] Renaud-Alexandre Pitaval, Wei Dai, and Olav Tirkkonen. Convergence of gradient descent for low-rank matrix approximation. IEEE Trans. Information Theory, 61(8):4451-4457, 2015. [Wang et al., 2017] Jialei Wang, Weiran Wang, Dan Garber, and Nathan Srebro. Efficient coordinate-wise leading eigenvector computation. CoRR, abs/1702.07834, 2017. URL http://arxiv.org/abs/1702.07834.
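To fix ideas, here is a small sketch of the combination under review: Riemannian gradient ascent on the unit sphere applied to the shift-and-inverted matrix B = (σI − A)^{-1}, assuming σ > λ₁ so that B is positive definite and shares A's leading eigenvector. The paper solves the linear systems inexactly via fast least-squares solvers; for brevity this sketch factors once and solves exactly, and the constant step size η is an illustrative choice, not the paper's tuned setting.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def si_riemannian_gd(A, sigma, steps=200, eta=0.5, seed=0):
    # Leading eigenvector of symmetric A via ascent on f(x) = x^T B x over
    # the sphere, where B = (sigma*I - A)^{-1}.
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    x = rng.normal(size=n)
    x /= np.linalg.norm(x)
    lu_piv = lu_factor(sigma * np.eye(n) - A)   # factor once; exact solves below
    for _ in range(steps):
        y = lu_solve(lu_piv, x)                 # y = B x (paper: inexact solve)
        g = y - (x @ y) * x                     # Riemannian gradient (tangent projection)
        x = x + eta * g
        x /= np.linalg.norm(x)                  # retract back to the sphere
    return x
```

With an adaptive step size η_t = 1/(x_t^T B x_t), each update reduces to a normalized power iteration on B, which is how the shift-and-inverted power method arises as the special case the abstract mentions.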
nips_2018_6722
Diversity-Driven Exploration Strategy for Deep Reinforcement Learning Efficient exploration remains a challenging research problem in reinforcement learning, especially when an environment contains large state spaces, deceptive or sparse rewards. To tackle this problem, we present a diversity-driven approach for exploration, which can be easily combined with both off- and on-policy reinforcement learning algorithms. We show that by simply adding a distance measure regularization to the loss function, the proposed methodology significantly enhances an agent's exploratory behavior, and thus prevents the policy from being trapped in local optima. We further propose an adaptive scaling strategy to enhance the performance. We demonstrate the effectiveness of our method in huge 2D gridworlds and a variety of benchmark environments, including Atari 2600 and MuJoCo. Experimental results validate that our method outperforms baseline approaches in most tasks in terms of mean scores and exploration efficiency.
[Summary]: The paper proposes a formulation for incentivizing efficient exploration in deep reinforcement learning by encouraging the diversity of policies being learned. The exploration bonus is defined as the distance (e.g., KL-divergence) between the current policy and a set of previous policies. This bonus is then added to the standard reinforcement learning loss, whether off-policy or on-policy. The paper further uses an adaptive scheme for scaling the exploration bonus with respect to external reward. The first scheme, as inspired by [Plappert et al. 2018], depends on the magnitude of the exploration bonus. The second scheme, on the other hand, depends on the performance of the policy with respect to getting the external reward. [Paper Strengths]: The paper is clearly written with a good amount of detail and is easy to follow. The proposed approach is intuitive and relates closely to the existing literature. [Paper Weaknesses and Clarifications]: - My major concerns are with the experimental setup: (a) The paper bears a similarity in various implementation details to Plappert et al. [5] (e.g. adaptive scaling etc.), but it chose to compare with the noisy network paper [8]. I understand [5] and [8] are very similar, but the comparison to [5] is preferred, especially because of details like adaptive scaling etc. (b) The labels in Figure-5 mention DDPG w/ parameter noise: is this method from Plappert et al. [5] or Fortunato et al. [8]? It is unclear. (c) No citations are present for the names of baseline methods in Sections 4.3 and 4.4. This makes it very hard to understand which method is being compared to, and the reader has to really dig it out. (d) Again in Figure-5, what is "DDPG (OU noise)"? I am guessing it's vanilla DDPG. Hence, I am surprised as to why "DDPG (w/ parameter space noise)" is performing so much worse than vanilla DDPG. This makes me feel that there might be a potential issue with the baseline implementation. It would be great if the authors could share their perspective on this. (e) I myself compared the plots from Figures 1, 2, and 3 in Plappert et al. [5] to the plots in Figure-5 in this paper. It seems that DDPG (w/ parameter space noise) is performing quite worse than their TRPO+noise implementation. Their TRPO+noise beats vanilla TRPO, but DDPG+noise seems to be worse than DDPG itself. Please clarify the setup. (f) Most of the experiments in Figure-4 seem to be not working at all with A2C. It would be great if the authors could share their insight. - On the conceptual note: In the paper, the proposed approach of encouraging policy diversity has been linked to the "novelty search" literature from genetic programming. However, I think that taking the bonus as the KL-divergence between the current policy and past policies is much closer to perturbing the policy with parameter space noise. Both methods encourage change in the policy function itself, rather than changing the output of the policy. I think this point is crucial to the understanding of the proposed bonus formulation and should be properly discussed. - Typo in Line-95 [Final Recommendation]: I request the authors to address the clarifications and comments raised above. My current rating is marginally below the acceptance threshold, but my final rating will depend heavily on the rebuttal and the clarifications for the above questions. [Post Rebuttal] The authors have provided a good rebuttal. However, the paper still needs a lot of work in the final version. I have updated my rating.
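A sketch of the loss modification as this review summarizes it: subtract a scaled expected distance between the current policy and a buffer of prior policies, so that minimizing the total loss pushes the current policy away from its past selves. The categorical/KL choice and the fixed scale alpha are assumptions for illustration; the paper's adaptive scaling schemes are not reproduced here.

```python
import torch

def diversity_loss_term(logits_now, past_logits_list, alpha=0.1):
    # Returns a term to ADD to the standard RL loss. It is negative in the
    # policy distance, so gradient descent on the total loss maximizes the
    # mean KL(pi_current || pi_past) over the buffer of recent policies.
    p = torch.distributions.Categorical(logits=logits_now)
    dists = [
        torch.distributions.kl_divergence(
            p, torch.distributions.Categorical(logits=lp)).mean()
        for lp in past_logits_list
    ]
    return -alpha * torch.stack(dists).mean()
```

Under this framing, the review's conceptual point is easy to see: like parameter-space noise, the regularizer operates on the policy function as a whole rather than injecting noise into individual sampled actions.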
nips_2018_3582
Stimulus domain transfer in recurrent models for large scale cortical population prediction on video To better understand the representations in visual cortex, we need to generate better predictions of neural activity in awake animals presented with their ecological input: natural video. Despite recent advances in models for static images, models for predicting responses to natural video are scarce and standard linear-nonlinear models perform poorly. We developed a new deep recurrent network architecture that predicts inferred spiking activity of thousands of mouse V1 neurons simultaneously recorded with two-photon microscopy, while accounting for confounding factors such as the animal's gaze position and brain state changes related to running state and pupil dilation. Powerful system identification models provide an opportunity to gain insight into cortical functions through in silico experiments that can subsequently be tested in the brain. However, in many cases this approach requires that the model is able to generalize to stimulus statistics that it was not trained on, such as band-limited noise and other parameterized stimuli. We investigated these domain transfer properties in our model and find that our model trained on natural images is able to correctly predict the orientation tuning of neurons in responses to artificial noise stimuli. Finally, we show that we can fully generalize from movies to noise and maintain high predictive performance on both stimulus domains by fine-tuning only the final layer's weights on a network otherwise trained on natural movies. The converse, however, is not true.
##### I have read over the authors' rebuttal, and I am satisfied by their clarification and response. ##### In this submission, the authors use modern deep learning techniques to develop a predictive model of mouse V1 cortex. Most notably, this model seeks to fill a gap in the literature by simultaneously predicting large populations of neurons responding to both natural video and noise stimuli, whereas previous models had mainly focused on static image stimuli in small cortical populations. The model developed by the authors receives these video stimuli as inputs, and consists of well-established computer vision architectures — here a feed-forward convolutional network feeding into a convolutional GRU — to extract stimulus feature representations common to all neurons. These are then read off and modulated by a spatial-transformer network to ensure receptive-field stability of the regressed neural outputs. A novel structural contribution has been added to account for non-stimulus-driven experimental noise which might be present in the data. Specifically, the readouts are additionally modified by "shift" and "modulator" networks which factor in these sources of noise such as pupil movement (correction for this is needed for proper receptive-field mapping), pupil diameter, and measured running speed of the animal (which is known to be correlated with V1 activity in rodents). They show that this model outperforms a non-recurrent linear-nonlinear control model (fig. 2a) across both datasets for all three mice, and assess the relative contributions of the novel shift and modulator component networks (fig. 2c). In "domain transfer" experiments, they show both qualitative and quantitative evidence for models generalizing outside their training set to produce receptive fields and tuning curves similar to mouse V1 activity, even though the mice have been shown different noise stimuli. Lastly, the authors study the quantitative amount of transfer between stimulus domains and find that models trained on movies serve as reasonable predictors of white-noise-induced activity, and that the converse is not true (fig. 4b). However, a network which combined domain knowledge from both stimulus sets provides a sufficient basis space for explaining activity presented in either domain (fig. 4c). Overall, I am impressed by the ambition of this work. I believe that the field could benefit from more of this style of large-scale modeling, and further consideration of generalization to novel stimuli as a complex yet intriguing phenomenon. The application to large-scale populations responding to complex stimuli, instead of only a small subset of V1 responding to simpler stimuli, is also noteworthy. I found the introduction of the shifter and modulator networks to be a novel idea, encoding well-thought-out structural priors into the model. That being said, although I have no complaints concerning the high-level scope of analysis presented in this submission, I do have some concerns as to the strength of results, for several reasons: (1) I am willing to believe the qualitative comparison between the "full" model and a "standard" linear-nonlinear system in fig. 2a. Based on the lack of recurrence alone, the result seems intuitive. However, I am concerned with the quantitative estimate between these two models due to not controlling for numbers of learnable parameters.
From what I understand, the authors balance the number of parameters in the ConvNet between models — three 12-feature layers in the full model and one 36-feature layer in the LN model — and readout structures. However, in the absence of the GRU in the LN model, there is an imbalance in these numbers. Generally speaking, you can interpret this in two ways: either (i) the extra units and nonlinearities can artificially inflate the score of the full model if no hyperparameter optimization has been performed, or (ii) the authors had performed a hyperparameter sweep (number of units) across both models and settled on the best-performing settings. Given that no mention of the latter has been made in the submission, I am inclined to believe that the former is true. (2) I am concerned with a statement made inside the "Network Implementation" subsection which stated that all readout biases were initialized to the mean firing rate of that neuron. This choice seems reasonable to me; however, it sounds like a strong prior for what these values should be. Could the authors discuss the impact of this initialization, and how it relates to why the full model might only be performing at ~50-70% of the oracle estimator? (3) Similar to the above in (1), even if I am intrigued by the application of the shift and modulator networks, I am concerned that for parameter-balancing reasons and the lack of a hyperparameter sweep, the results in 2c might be artificially inflated. Additionally, I'm concerned that the novelty of such an approach might be detracting from what might be a more standard way of incorporating these stimulus-independent noise sources — namely, by having them as generic features that are concatenated as inputs to the readout MLP, rather than these specific priors. I would be just as impressed by seeing a larger/deeper readout network with a more generic structure perform at the level of (or better than) something with a "clever" implementation. (4) Almost unsurprisingly, the authors find that models trained on movies are well correlated with noise-induced activity, although the converse is not true, and that a model trained on both can in fact predict both. I would have liked a similar comparison between natural movies and the rendered videos as well, because it is unclear whether the statistics between even these two domains are readily transferable. What I did find interesting here is that although the movie-optimized models are well correlated with noise responses, they show high variance in predicting low-level details such as orientation and direction tuning. The paper ends on a bit of a cliff-hanger in the sense that the authors do not propose a better method for a generalized model than what amounts to just training on all stimulus contingencies. In summary, I am inclined to like this work. The results — although not necessarily as strong as they could be — feel intuitive. If text length were not a factor, I personally feel like I would have benefitted from an extended discussion/interpretation section of the results, and more detailed analysis of those presented in the submission (specifically as far as the domain transfer section is concerned). The approach and aims of this work alone are a worthwhile contribution to the neuroscience community as a whole.
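The "more generic structure" suggested in point (3) could look like the following hypothetical baseline: behavioral covariates concatenated to stimulus features and fed through a plain MLP readout, instead of dedicated shifter/modulator networks. Shapes, names, and layer widths are illustrative, not the authors' architecture.

```python
import torch
import torch.nn as nn

class GenericReadout(nn.Module):
    # Generic alternative to shifter/modulator priors: concatenate stimulus
    # features with behavioral covariates (pupil position, pupil diameter,
    # running speed) and regress per-neuron firing rates with an MLP.
    def __init__(self, feat_dim, n_behav, n_neurons, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + n_behav, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_neurons),
            nn.Softplus(),  # keeps predicted rates non-negative
        )

    def forward(self, features, behavior):
        return self.mlp(torch.cat([features, behavior], dim=-1))
```

Comparing such a baseline, parameter-matched, against the structured shifter/modulator design would directly answer whether the "clever" priors earn their keep.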
nips_2018_2078
BML: A High-performance, Low-cost Gradient Synchronization Algorithm for DML Training In distributed machine learning (DML), the network performance between machines significantly impacts the speed of iterative training. In this paper we propose BML, a new gradient synchronization algorithm with higher network performance and lower network cost than the current practice. BML runs on a BCube network, instead of the traditional Fat-Tree topology. The BML algorithm is designed in such a way that, compared to the parameter server (PS) algorithm on a Fat-Tree network connecting the same number of server machines, BML achieves a theoretically superior communication cost. Therefore, in order to run DML at large scale, we need to carefully design the network with minimized synchronization overhead among machines. The widely used network topology to run DML in today's data centers is the Clos network, or Fat-Tree [5]. Although Fat-Tree achieves great success in providing uniform network performance to cloud computing applications, it may not match well the traffic model of gradient synchronization in DML. Running the typical parameter server (PS) synchronization algorithm in Fat-Tree, each synchronization flow needs to traverse multiple hops of switches before being aggregated. This not only hurts the gradient synchronization performance, but also wastes bandwidth/link resources. In this paper, we suggest using BCube [12] as the underlying network topology for DML training, and design a novel distributed gradient synchronization algorithm on top of BCube, called BML. BCube is a recursive network topology composed of commodity switches and servers with k (the typical value of k is 2~4) interfaces. The synchronization algorithm of BML is designed in such a way that, compared to the PS algorithm running on a Fat-Tree network connecting the same number of server machines, BML running on a BCube network can theoretically achieve 1/k of the gradient synchronization time, with only k/5 of the switches. We have implemented BML in TensorFlow. We run two representative public deep learning benchmarks, namely LeNet-5 [15] and VGG-19 [22], on a testbed with 9 dual-GPU servers. The experiment results show that BML can reduce the job completion time of DML training by up to 56.4% compared with the PS algorithm on a Fat-Tree network. The advantage of BML is higher when the sub-minibatch size per machine is smaller, which is important for large-scale DML to guarantee model accuracy. 2 Background and Motivation DML Models and Notations: DML can run on multiple CPUs/GPUs in a machine or on multiple machines. In this work we focus on the DML network among machines. In order to decouple the inter-machine and intra-machine communications, throughout this paper we simply take one machine as a single training worker, though the machine can be equipped with multiple GPUs. Based on splitting either the training data or the model parameters onto multiple machines, DML can be divided into data-parallel and model-parallel approaches. In data-parallel DML, each machine uses a shard of the training data to compute the gradients, while in model-parallel DML, a machine computes gradients for part of the model. In this work we focus on data-parallel DML. In each iteration, every machine computes local gradients for the entire model based on its sub-minibatch of training data, and synchronizes the gradients with other machines. The aggregated gradients are calculated from all the machines' local gradients, and are then applied to the model update.
According to the tradeoff between gradient freshness and computing resource utilization, there are three typical synchronization modes: 1) bulk synchronous parallel (BSP); 2) total asynchronous parallel (TAP); 3) stale synchronous parallel (SSP). Given a predefined accuracy of the trained model, it is difficult to tell which synchronization mode runs fastest in practice. BSP wastes the computation resources of the faster machines, but fully follows the sequential behavior of training on a single machine. TAP makes full use of the computing resources, but the convergence speed is unpredictable, with the possibility of no convergence at all [10]. SSP lies between the two, with proven convergence [9,26]. In this work we focus on BSP synchronization, which is widely used in modern ML applications [11,24].
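The intuition behind BML's 1/k claim, as described above, is that each server partitions its gradient into k slices and synchronizes each slice over a different interface in parallel. Below is a toy sketch of that partition-aggregate-broadcast pattern, simulated sequentially; the real algorithm routes slices along the BCube topology and overlaps the two phases, none of which is modeled here.

```python
import numpy as np

def sliced_bsp_sync(local_grads, k):
    # local_grads: list of equal-length 1-D gradient vectors, one per server.
    # Slice j of every gradient is aggregated as if on interface/level j;
    # with k interfaces active in parallel, wall-clock synchronization time
    # drops by roughly a factor of k relative to sending the whole vector.
    slices = [np.array_split(g, k) for g in local_grads]
    merged = [np.mean([s[j] for s in slices], axis=0) for j in range(k)]
    agg = np.concatenate(merged)              # aggregation phase
    return [agg.copy() for _ in local_grads]  # broadcast phase
```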
This paper observes that by putting network cards into computers, some of the router functionality can be performed for free by each compute node. This potentially means less router equipment, and less transmission time for averaging data. The paper then shows how to do it for real, giving details of the averaging algorithm for 'k' network cards per compute node. For k=2 and small minibatches, they actually see their expected factor-of-2 speedup experimentally. The algorithm has some little tricks in it that make it a useful contribution. It is nice that this technique can in many cases be combined with other ways to reduce communication overhead. The technique is more something to consider when setting up a new compute cluster. This is really a networking paper, since the technique describes how to aggregate [sum] and distribute any vector amongst N compute nodes. The vector could be a gradient, a set of model parameters, or, well, anything else. On the other hand, they actually demonstrate experimentally that it can work in practice for a fairly common machine learning scenario. ### After author review ### Margin notes: line 95: GST - where defined? line 226: codes --> code Some of the statements in the rebuttal were stated more understandably than the more difficult presentation in the paper. Occasionally I had to remember previous statements and infer comparisons by myself. As noted by reviewer #2, the comparison figure and its discussion help bring out the practical importance of the paper. I agree with reviewer #1 that Figure 4 and its discussion need a very devoted reader. If I had to make room, I might consider moving Figure 4, lines 165--203 and the dense notation used for it (148--157) to the Supplementary Material. The body of the paper might contain a much simplified version of Fig. 4 and its discussion, with just the thread 0/1 link activity during the two steps each of aggregation and broadcast, and refer the reader to the supplementary material for details of precisely which gradient piece is chosen for transmission so as to lead to a final state where all servers know all pieces.
nips_2018_8009
Transfer of Deep Reactive Policies for MDP Planning Domain-independent probabilistic planners input an MDP description in a factored representation language such as PPDDL or RDDL, and exploit the specifics of the representation for faster planning. Traditional algorithms operate on each problem instance independently, and good methods for transferring experience from policies of other instances of a domain to a new instance do not exist. Recently, researchers have begun exploring the use of deep reactive policies, trained via deep reinforcement learning (RL), for MDP planning domains. One advantage of deep reactive policies is that they are more amenable to transfer learning. In this paper, we present the first domain-independent transfer algorithm for MDP planning domains expressed in an RDDL representation. Our architecture exploits the symbolic state configuration and transition function of the domain (available via RDDL) to learn a shared embedding space for states and state-action pairs for all problem instances of a domain. We then learn an RL agent in the embedding space, making a near zero-shot transfer possible, i.e., without much training on the new instance, and without using the domain simulator at all. Experiments on three different benchmark domains underscore the value of our transfer algorithm. Compared against planning from scratch, and a state-of-the-art RL transfer algorithm, our transfer solution has significantly superior learning curves.
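To illustrate the kind of GCN-based state encoder the abstract describes, here is a minimal sketch of one graph-convolution layer following the standard Kipf-Welling propagation rule from the GCN literature. The function names and the adjacency construction over state variables are my own assumptions, not the authors' implementation.

import numpy as np

def gcn_layer(A, F, W):
    # A: (n, n) adjacency over state variables (edges induced by the
    #    domain's non-fluents); F: (n, d_in) node features;
    # W: (d_in, d_out) learned weights.
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))  # degree normalization
    A_norm = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ F @ W, 0.0)         # ReLU activation

Each layer mixes a node's features with its neighbours', so stacking l layers lets each state-variable embedding depend on state variables up to l hops away in the non-fluent graph.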
(i) Summary: The paper proposes TransPlan, a novel neural architecture dedicated to “near” zero-shot transfer learning between equi-sized discrete MDP problems from the same RDDL domain (i.e., problems with the same number of state and action variables, but with different non-fluents / topologies). It combines several deep neural models to learn state and state-action embeddings which are used to achieve sample-efficient transfer learning across different instances. It uses GCNs (Graph Convolutional Networks), a generalization of ConvNets to arbitrary graphs, to learn state embeddings; A3C (Asynchronous Advantage Actor-Critic) as its RL module, which learns policies mapping state embeddings to an abstract representation of probability distributions over actions; and action decoders that attempt to transform these abstract state-action embeddings into the MDP’s original action space for the target problem. Experiments on IPPC domains (e.g., SysAdmin, Game of Life, and Navigation) compare the proposed approach with A3C (without transfer learning) and A2T (Attend, Adapt and Transfer). Results indicate that TransPlan can be effective for transfer learning in these domains. (ii) Comments on technical contributions: - Contributions seem a bit overstated: The authors argue that the proposed transfer learning approach is the “first domain-independent transfer algorithm for MDP planning domains expressed in RDDL”. Regardless of whether TransPlan is the first transfer algorithm for the RDDL language, the proposed approach does not seem “domain-independent” in a very general sense, given that its state encoder relies on RDDL domains exhibiting a graph structure (i.e., “state variables as nodes, edges between nodes if the respective objects are connected via the non-fluents in the domain”) - which explains the particular choice of IPPC domains for experiments. Moreover, the transfer learning is only focused on target instances with the same size as the instances used in the learning phase, which seems like an important limitation. Finally, it is not clear until the experiments section that TransPlan is only applicable to discrete domains, another limitation to a subset of RDDL. - The paper's contributions are not put in context with recent related work: The main works related to the idea of Deep Reactive Policies in planning with known models (e.g., “Action schema networks: Generalised policies with deep learning”, and “Training deep reactive policies for probabilistic planning problems”) are barely described and not discussed at all. In particular, it is not even mentioned that Action-Schema Nets also address transfer learning between instances of the same model independently of the size of the target instances. Is the RDDL vs. PPDDL issue so important that these methods are incomparable? - The background necessary to appreciate the proposed method could be presented better. Section 2.1. Reinforcement Learning The presentation of the Actor-Critic architecture (which is the core aspect of A3C) is really confusing. First off, it is said that A3C “constructs approximations for both the policy (using the ‘critic’ network) and the value function (using the ‘actor’ network)”. This is wrong, or at least very unclear. The actor network is the approximation for the policy, and the critic network is used as the baseline in the advantage function (i.e., the opposite of what the authors seem to say).
Moreover, the formulation of the policy gradient seems totally wrong, as the policy \pi(a|s) is the only \theta-parameterized function in the gradient (it should be denoted instead as \pi_{\theta}(a|s)), and also the value functions Q_{\pi}(s, a; \theta) and V(s; \theta) do not share the same set of parameters - indeed, Q-values are approximated via Monte Carlo sampling and V(s; \theta) is the baseline network learned via regression. Additionally, the actor network does not maximize the H-step lookahead reward by minimizing the expectation of a mean squared loss; the minimization of the mean squared loss is part of the regression problem solved to improve the approximation of the critic network (i.e., the baseline function). It is the policy gradient that attempts to maximize the expected future return. Section 2.2. Probabilistic Planning The presentation of the RDDL language is not very clear and could mislead the reader. Aside from technical imprecision, the way it is presented can give the reader the impression that the language is supposed to be solely oriented to “lifted” MDPs, whereas I understand that it is a relational language aimed at providing a convenient way to compactly represent ground factored MDP problems. This intent/usage of RDDL should be clarified. Section 2.3 Graph Convolutional Networks - The formulas are not consistent with the inputs presented: The explanation of the inputs and outputs of the GCN layer in lines 149-151 is confusing, since the parameter N is not explained at all and D^{(l+1)} is simply not used in the formula for F^{(l+1)}. - The intuition for the GCN layer and propagation rule/formula is too generic: The formula for the GCN layer’s activation function has no meaning for anyone not deeply familiar with the ICLR 2017 paper on semi-supervised classification with graph convolutional networks. Also, the intuition that “this propagation rule implies that the feature at a particular node of the (l + 1)th layer is the weighted sum of the features of the node and all its neighbours at the lth layer” is completely generic and does not elucidate the formula in line 152 at all. Finally, the statement that “at each layer, the GCN expands its receptive field at each node by 1” is not explained and is difficult to understand for someone who is not an expert in GCNs. - The problem formulation is not properly formalized: The authors say that “we make the assumption that the state and action spaces of all problems is the same, even though their initial state, non-fluents and rewards could be different”. Putting together this statement with the following statement in lines 95-96, “Our work attempts a near-zero shot learning by learning a good policy with limited learning, and without any RL.”, raises a question not explicitly addressed in the paper: how could the rewards be different in the target instances while the transfer learning remains effective without any RL? The authors should really clarify what they mean by “rewards could be different”. (iii) Comments on empirical evaluation: The empirical evaluation is centered around answering the following questions: (1) Does TRANSPLAN help in transfer to new problem instances? (2) What is the comparison between TRANSPLAN and other state-of-the-art transfer learning frameworks? (3) What is the importance of each component in TRANSPLAN? Questions (1) and (2) are partially answered by the experiments.
Indeed, TransPlan does seem to transfer the learning to new problem instances (given the limitations pointed out earlier), and it improves on the A2T transfer learning approach. But these conclusions are based solely on “learning curves” plotted in terms of “number of iterations”. It is important to remember that the first and foremost motivation of transfer learning is to amortize the computational cost of training the neural model over all target instances of the domain. If this is not successfully accomplished, there is no point in incurring the cost of the offline learning phase, and it is then best to plan from scratch for each instance of the domain. Therefore, in my understanding, a better experimental design should focus on measuring and comparing learning/training times and transfer times, instead of relying on the number of iterations to showcase the learning evolution of TransPlan. In particular, this is important to fairly highlight the value of transfer learning when comparing with the baseline A3C, and the particular advantages of TransPlan when comparing with the model-free A2T transfer learning approach. Additionally, the authors should better explain why in Table 1 the columns for iter 0 are not 0.0 and the last columns for iter \infty are not 1.0. Judging by the formula of \alpha(i) in line 300, this should be the case. (iv) Detailed comments: Typos: - Line 27: “[...] and RDDL planning solve each problem instance [...]” should be “[...] and RDDL planning that solve each problem instance [...]” - Line 85: it lacks the proper reference for “transferring the value function or policies from the source to the target task []” - Line 109: “RDDL Reprentation” should be “RDDL Representation” Other suggestions for better clarity: - Lines 107-108: “[...] an MDP exponential states [...]” should be “[...] an MDP exponential state space [...]” - Line 123: “[...] this domain is inherently non-deterministic [...]” would be better phrased “[...] this domain is inherently dynamic [...]”? - Figure 1: The model architecture is not really clear. Perhaps it would be better if the neural components used in the training/learning and transfer phases were separated, with their inputs and outputs clearly shown in the figure. Also, an algorithm detailing the training procedure steps would greatly improve the presentation of the TransPlan approach. (v) Overall evaluation: The paper is relevant for the NIPS audience. It brings together interesting and novel ideas in deep reactive networks and training methods (i.e., domain-adversarial training). However, the overall readability of the paper is compromised by the unclear background section, the ambiguous presentation of the model architecture, and technical issues with some formulae. Additionally, the experimental setup seems biased to showcase what is stated by the authors in lines 314-315: “TRANSPLAN is vastly ahead of all algorithms at all times, underscoring the immense value our architecture offers to the problem.” In my understanding, it is a major experimental drawback to not compare the proposed approach with the baseline approaches w.r.t. the training/transfer times (instead, all comparisons are based on the number of iterations), which is common in RL and transfer learning testbeds. So, my recommendation is to reject the paper, mainly on the grounds that the overall presentation must be improved for clarity/correctness; ideally, the experiments should also take into consideration the computational costs in terms of training and transfer times.
nips_2018_95
Understanding Weight Normalized Deep Neural Networks with Rectified Linear Units This paper presents a general framework for norm-based capacity control for L_{p,q} weight normalized deep neural networks. We establish the upper bound on the Rademacher complexities of this family. With an L_{p,q} normalization where q ≤ p* and 1/p + 1/p* = 1, we discuss properties of a width-independent capacity control, which only depends on the depth by a square root term. We further analyze the approximation properties of L_{p,q} weight normalized deep neural networks. In particular, for an L_{1,∞} weight normalized network, the approximation error can be controlled by the L_1 norm of the output layer, and the corresponding generalization error only depends on the architecture by the square root of the depth.
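For reference, the L_{p,q} norm of a layer's weight matrix W (the quantity being normalized) is standardly defined as below; I am assuming the usual convention (inner p-norm over incoming weights, outer q-norm over units), which the paper may state with rows and columns swapped.

\|W\|_{p,q} \;=\; \Bigl(\sum_{j}\Bigl(\sum_{i}|W_{ij}|^{p}\Bigr)^{q/p}\Bigr)^{1/q},
\qquad \frac{1}{p}+\frac{1}{p^{*}}=1, \quad q \le p^{*}.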
This paper essentially seems to address 2 questions: (a) under certain natural weight constraints, what is the Rademacher complexity upper bound for nets when we allow for bias vectors in each layer? and (b) how well can such weight-constrained nets approximate Lipschitz functions in the sup norm over compact domains? There is a Section 4.1 in the paper which is about generalization bounds for nets doing regression, but to me that does not seem to be an essential part of the paper. Let me first say at the outset that the writing of the paper seems extremely bad and many of the crucial steps in the proofs look unfollowable. As it stands this paper is hardly fit to be made public and needs a thorough rewriting! There are a few issues that I have with the section on upper bounding the Rademacher complexity, as has been done between pages 15 to 19: 1. Isn't the entire point of this analysis just to show that the Srebro-Tomioka-Neyshabur result (http://proceedings.mlr.press/v40/Neyshabur15.pdf) holds even with biases? If that is the entire point, then why is this interesting? I am not sure why we should have expected anything much to change about their de-facto exponential dependences on depth with the addition of bias! A far more interesting analysis that could have been done, and which would have led to a far more interesting paper, is if one could have lifted the Rakhlin-Golowich-Shamir analysis of Rademacher complexity (which does not have the exp(depth) dependence!) to nets with biases. I strongly feel that's the research that should have been done! 2. The writing of the proofs here is extremely difficult to follow, and the only reason one can follow anything here is because one can often look up the corresponding step in Srebro-Tomioka-Neyshabur! There are just way too many steps here which make no sense to me: like what does "sup_{\x_i}" even mean in the lines below line 350? How did that appear, and why is Massart's Lemma applicable for a sup over a continuum? It hardly makes any sense! I have no clue how equation 11(d) follows from equation 11(c). This is similar to Lemma 17 of Srebro-Tomioka-Neyshabur, but this definitely needs a rederivation in this context. The steps following equation 12(b) also make little sense to me because they seem to indicate that one is taking \sigma of an inner product, whereas this \sigma is, I believe, defined over vectors. It's again not clear how the composition structure below equation 12(d) got broken. Now coming to the proof of Theorem 3: it's again fraught with too many problems. Lemma 9 is a very interesting proposition and I wish this proof were clear and convincing. As of now it's riddled with too many ambiguous steps! For example: below line 464, what is being computed are \floor(r_1/2) and \floor(r_2/2) linear functions apart from x and -x. Any linear function needs 2 ReLU gates for computation. But this doubling is not visible anywhere! On pages 23 to 24, the argument gets even more difficult to follow: I do not understand the point being made at the top of page 23 that there is somehow a difference between stating what the 2 hidden neurons are and what they compute. The induction is only more hairy, and there is hardly any clear proof anywhere about why this composed function should be seen as computing the required function! Maybe the argument of Theorem 3 is correct, and it does seem plausible, but as it stands I am unable to decide its correctness given how unintelligible the presentation is!
============================================================== Response to "author response": I thank the authors for their detailed response. I do think it's a very interesting piece of work, and after the many discussions with the co-reviewers I am now more convinced of the correctness of the proofs. However, there are still major sections in the paper, like the argument about Lemma 9/Theorem 3, that are too cryptic for me to judge their correctness with full certainty. This paper definitely needs a thorough rewriting with better notation and presentation.
nips_2018_997
Efficient Stochastic Gradient Hard Thresholding Stochastic gradient hard thresholding methods have recently been shown to work favorably in solving large-scale empirical risk minimization problems under sparsity or rank constraints. Despite the improved iteration complexity over full gradient methods, the gradient evaluation and hard thresholding complexity of the existing stochastic algorithms usually scales linearly with the data size, which could still be expensive when the data is huge, and the hard thresholding step could be as expensive as a singular value decomposition in rank-constrained problems. To address these deficiencies, we propose an efficient hybrid stochastic gradient hard thresholding (HSG-HT) method that can be provably shown to have sample-size-independent gradient evaluation and hard thresholding complexity bounds. Specifically, we prove that the stochastic gradient evaluation complexity of HSG-HT scales linearly with the inverse of the sub-optimality, and its hard thresholding complexity scales logarithmically. By applying the heavy ball acceleration technique, we further propose an accelerated variant of HSG-HT which can be shown to have an improved dependence on the restricted condition number in the quadratic case. Numerical results confirm our theoretical findings and demonstrate the computational efficiency of the proposed methods.
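As a point of reference for readers, the hard thresholding operator under a sparsity constraint, and its use inside a stochastic gradient step, can be sketched as below. This is the generic operator and an assumed usage pattern, not the authors' HSG-HT algorithm; all names are illustrative.

import numpy as np

def hard_threshold(x, k):
    # Keep the k largest-magnitude entries of x; zero out the rest.
    out = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-k:]
    out[keep] = x[keep]
    return out

def sg_ht_step(x, stochastic_grad, k, lr=0.1):
    # One generic stochastic gradient hard thresholding iteration:
    # gradient step on a sampled mini-batch, then projection onto the
    # set of k-sparse vectors.
    return hard_threshold(x - lr * stochastic_grad(x), k)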
The article analyses convergence in hard-thresholding algorithms and proposes an accelerated stochastic hybrid hard thresholding method that displays better convergence than the compared methods. The article is dense but reasonably easy to follow. The theoretical development seems to be complete and accurate, though I admit I have not thoroughly followed the full derivation. The experimental section is in accordance with the theoretical claims and is more than sufficient. For the sake of reproducibility of the results, an exhaustive pseudocode listing or a code repository should be made available as a companion to the article to further strengthen the author's points.
nips_2018_2171
Limited memory Kelley's Method Converges for Composite Convex and Submodular Objectives The original simplicial method (OSM), a variant of the classic Kelley's cutting plane method, has been shown to converge to the minimizer of a composite convex and submodular objective, though no rate of convergence for this method was known. Moreover, OSM is required to solve subproblems in each iteration whose size grows linearly in the number of iterations. We propose a limited memory version of Kelley's method (L-KM) and of OSM that requires limited memory (at most n + 1 constraints for an n-dimensional problem) independent of the iteration. We prove convergence for L-KM when the convex part of the objective (g) is strongly convex and show it converges linearly when g is also smooth. Our analysis relies on duality between minimization of the composite objective and minimization of a convex function over the corresponding submodular base polytope. We introduce a limited memory version, L-FCFW, of the Fully-Corrective Frank-Wolfe (FCFW) method with approximate correction, to solve the dual problem. We show that L-FCFW and L-KM are dual algorithms that produce the same sequence of iterates; hence both converge linearly (when g is smooth and strongly convex) and with limited memory. We propose L-KM to minimize composite convex and submodular objectives; however, our results on L-FCFW hold for general polytopes and may be of independent interest.
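For readers unfamiliar with the dual algorithm, a plain Frank-Wolfe iteration over a polytope looks like the sketch below. The limited-memory, fully-corrective variant the abstract describes replaces the convex averaging with a correction step over a bounded active set, which I do not attempt to reproduce here; the linear minimization oracle lmo (for a submodular base polytope, Edmonds' greedy algorithm) is an assumed callable.

def frank_wolfe(x0, grad, lmo, n_iters=100):
    # Minimize a smooth convex f over a polytope, given only a linear
    # minimization oracle: lmo(c) returns argmin_{s in polytope} <c, s>.
    x = x0
    for t in range(n_iters):
        s = lmo(grad(x))            # Frank-Wolfe vertex (an extreme point)
        gamma = 2.0 / (t + 2.0)     # standard open-loop step size
        x = (1.0 - gamma) * x + gamma * s
    return x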
Summary: ------- The paper proposes a simple modification of the Original Simplicial Method that keeps in memory only the active linear constraints. Then, by linking the dual method to Fully-Corrective Frank-Wolfe, they get convergence rates. Quality & Clarity: ------------------ The paper is well written and motivated for people in the field. Note that it is probably quite abstruse otherwise. Its place in the literature is well defined. Proofs are correct. The final rates (Theorems 2 and 3) are quickly stated: what is the total cost of the algorithm, taking into account the computation of the subproblems? (Consider a concrete example where the subproblems can be computed efficiently.) This would clarify the impact of the limited memory implementation on this rate, in comparison to the original simplicial method. In Theorem 3, first, \Delta^{(i+1)} should be \Delta^{(i)}. Then, these results only state that the estimated gap is bounded by the true gap; I don't get the sub-linear rate from reading these equations. Originality & Significance: --------------------------- The paper introduces two simple, though efficient, ideas to develop and analyze an alternative to the simplicial method (looking only at the active linear constraints of each subproblem and drawing convergence rates from the Frank-Wolfe literature). Relating the dual of the simplicial method to the Frank-Wolfe method was already done by Bach [2]. I think that the present paper is therefore only incremental. Moreover, the authors do not give compelling applications (this is due to the restricted applicability of the original simplicial method). In particular, they do not give concrete examples where the subproblems can easily be computed, which would help the reader understand the practical impact of the method. Minor comments: --------------- - l139: no parenthesis around w - l140: put footnote for closed function here - l 227: Replace "Moreover" by "Therefore" - l 378: must be strictly contained *in* - As stated from the beginning, OSM is not Kelley's method, so why is the algorithm called Limited memory Kelley's method and Limited memory simplicial method? This confusion might weaken its referencing in the literature, I think. -------------------------------------- Answer to author's feedback -------------------------------------- Thank you for your detailed feedback. I revised my "incremental" comment. The problem was clearly asked before and the answer you provided is simple, clear and interesting.
nips_2018_1670
Deep, complex, invertible networks for inversion of transmission effects in multimode optical fibres We use complex-weighted, deep networks to invert the effects of multimode optical fibre distortion of a coherent input image. We generated experimental data based on collections of optical fibre responses to greyscale input images generated with coherent light, by measuring only image amplitude (not amplitude and phase, as is typical) at the output of 1 m and 10 m long, 105 µm diameter multimode fibre. This data is made available as the Optical Fibre Inverse Problem Benchmark collection. The experimental data is used to train complex-weighted models with a range of regularisation approaches. A unitary regularisation approach for complex-weighted networks is proposed which performs well in robustly inverting the fibre transmission matrix, and which is compatible with the physical theory. A benefit of the unitary constraint is that it allows us to learn a forward unitary model and analytically invert it to solve the inverse problem. We demonstrate this approach, and outline how it has the potential to improve performance by incorporating knowledge of the phase shift induced by the spatial light modulator.
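One natural way to realize the unitary regularisation the abstract mentions is a soft penalty on the Gram matrix of a complex weight matrix, sketched below. This is my reading of the idea under my own assumptions; the paper's exact formulation, and how it composes with the learned forward model, may differ.

import numpy as np

def unitary_penalty(W):
    # W: (m, n) complex weight matrix. Penalize the deviation of W^H W
    # from the identity; the penalty is zero iff W has orthonormal
    # columns, i.e. W is (semi-)unitary, in which case the learned
    # forward map can be analytically inverted on its range via W^H.
    n = W.shape[1]
    gram = W.conj().T @ W
    return np.linalg.norm(gram - np.eye(n), ord='fro') ** 2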
I have read this paper and found it to be a solid contribution both to the field of optics and to NIPS. With respect to its contribution to optics, I will admit that I have not worked in this field for over two decades, but it does seem that they are solving an interesting problem in a new way. The removal of speckle from images seems to be a relevant problem in the field: (2018) "SAR Image Despeckling Using a Convolutional Neural Network", Puyang Wang, He Zhang, and Vishal M. Patel, https://arxiv.org/pdf/1706.00552.pdf; most cited: (2010) "General Bayesian estimation for speckle noise reduction in optical coherence tomography retinal imagery", Alexander Wong, Akshaya Mishra, Kostadinka Bizheva, and David A. Clausi. And I could not find a similar work in my search. The authors contribute compelling examples in their supplemental materials, which lends credence to the claims that their method actually works. The authors also promise to contribute a novel dataset to use for ML benchmarking on this type of problem. In my opinion, this paper has what I believe a NIPS application paper should have: it would be considered best in class in its field of application, and it uses neural information processing methods to bring a substantial advance to the application domain. Additionally, the contribution of a benchmark dataset should further motivate research in this problem area. The paper is well written and easy to follow.
nips_2018_3510
Dual Policy Iteration A novel class of Approximate Policy Iteration (API) algorithms has recently demonstrated impressive practical performance (e.g., ExIt [1], AlphaGo-Zero [2]). This new family of algorithms maintains, and alternately optimizes, two policies: a fast, reactive policy (e.g., a deep neural network) deployed at test time, and a slow, non-reactive policy (e.g., Tree Search) that can plan multiple steps ahead. The reactive policy is updated under supervision from the non-reactive policy, while the non-reactive policy is improved via guidance from the reactive policy. In this work we study this class of Dual Policy Iteration (DPI) strategies in an alternating optimization framework and provide a convergence analysis that extends existing API theory. We also develop a special instance of this framework which reduces the update of non-reactive policies to model-based optimal control using learned local models, and provides a theoretically sound way of unifying model-free and model-based RL approaches with unknown dynamics. We demonstrate the efficacy of our approach on various continuous control Markov Decision Processes.
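At a high level, the alternating scheme the abstract describes can be sketched as below. This is schematic Python based only on the abstract; the slow-policy solver, the distillation step, and the trust-region coupling are assumed interfaces, not the authors' API.

def dual_policy_iteration(pi, solve_slow_policy, distill, n_iters):
    for _ in range(n_iters):
        # Improve the slow, non-reactive policy (e.g., model-based
        # optimal control or search), guided by and kept close to pi.
        eta = solve_slow_policy(pi)
        # Update the fast, reactive policy by supervision from eta
        # (imitation/distillation on the states eta visits).
        pi = distill(pi, eta)
    return pi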
Major Comments: 1) page 3: How do you set the parameter $\alpha$, and what effect does it have? In other words, if $\eta^* = \argmin J(\eta)$ and $\eta_{\alpha}$ is the solution of Eq , what can you say about the relation between $\eta^*$ and $\eta_{\alpha}$? 2) page 3: the parameter $\delta$ in Eq 3. Could you please elaborate more on how you are going to choose it? If it does depend on the policy $\pi_n$, then we need to replace it by $\delta_n$ (i.e., it changes from iteration to iteration). Also, apparently, this parameter will have an effect on the transition model $\hat{P}$ that is found, and hence has an effect on the policy $\eta_n$ evaluated at iteration $n$. Did you study this effect? 3) page 3: Theorem 3.1. I am confused by this result. Because the real transition $P$ is not available, we need to study $J(\eta_n)$ evaluated with the model $\hat{P}$ instead of $P$. But Theorem 3.1 studies the quality of the policy $\eta_n$ with respect to the real model $P$. Therefore, Theorem 3.1 doesn't quantify the quality of the tree search policy $\eta_n$. 4) page 4: Section 3.2. How do you define the parameter $\beta$ in Eq 6? What effect does it have on the approximate solution $\pi_{n+1}$? 5) Due to the PDL Lemma, for a given fast reactive policy $\pi_n$ the corresponding tree search policy $\eta_n$ actually doesn't depend on $\pi_n$. This is because $\eta_n = \argmin_{\eta} J(\eta)$, and $J(\eta)$ doesn't depend on $\pi_n$. The dependence appears when we impose a trust region constraint given in terms of $\pi_n$. My biggest issue with this paper is with the parameters $\alpha, \delta, \beta$, which were introduced as forms of relaxation while, at the same time, their effects were not studied. (Please see my comment on Theorem 3.1 in 3) for better understanding.) Minor Comments: 1) page 3: Although the min-max framework considered in Section 3 is understandable, it would be good to elaborate on it for readers. Apart from the above major and minor comments, overall the paper appears to be written and structured well. It is technically novel.
nips_2018_4901
Robust Hypothesis Testing Using Wasserstein Uncertainty Sets We develop a novel, computationally efficient, and general framework for robust hypothesis testing. The new framework features a new way to construct uncertainty sets under the null and the alternative distributions: the sets are centered around the empirical distributions and defined via the Wasserstein metric; thus our approach is data-driven and free of distributional assumptions. We develop a convex safe approximation of the minimax formulation and show that such an approximation yields a nearly-optimal detector among the family of all possible tests. By exploiting the structure of the least favorable distribution, we also develop a tractable reformulation of this approximation, whose complexity is independent of the dimension of the observation space and can be nearly sample-size-independent in general. A real-data example using human activity data demonstrates the excellent performance of the new robust detector.
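In symbols, the construction described above amounts to uncertainty sets of the following form, with the detector chosen to minimize the worst-case risk over both sets. The notation here is my own ($\theta_k$ are radii, $\widehat{P}_k$ the empirical distributions of the two samples, $W$ the Wasserstein metric); the paper's exact risk functional may differ.

\mathcal{P}_k = \{\, P : W(P, \widehat{P}_k) \le \theta_k \,\}, \quad k = 1, 2,
\qquad
\min_{\phi} \; \max_{P_1 \in \mathcal{P}_1,\; P_2 \in \mathcal{P}_2} \mathrm{Risk}(\phi; P_1, P_2).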
The rebuttal addressed my technical concerns, and I also seem to have misjudged the size of the contributions at first. My score has been updated. This paper studies the two-sample non-parametric hypothesis testing problem. Given two collections of probability distributions, the paper studies approximating the best detector against the worst distributions from both collections. A standard surrogate loss approximation is used to upper bound the worst-case risk (the maximum of the type I and type II errors) with a convex surrogate function, which is known to yield a good solution. Some parts of Section 3 are novel, but I'm not familiar enough with the literature to pinpoint exactly what. The convex surrogate approximation is a standard relaxation in learning theory. I wish the authors were clearer on this matter. The main novel contribution seems to be in the setting where the two collections of distributions are Wasserstein balls centered around empirical distributions. In that case, Theorem 3 derives a convex relaxation for finding the optimal detector. The authors then provide experimental justification for their method, showing improved performance over previous algorithms on real data. The writing was generally good, though I found the high-level ideas hard to follow. Please be clearer about which results apply in which setting. Also, the authors really need to justify why considering Wasserstein balls around empirical distributions is a good idea. In fact, I have a few technical concerns: 1. What if the two empirical distributions from P1 and P2 have little overlap? It seems that the hypothesis testing problem becomes trivial. 2. Is Prop 1 only for the Wasserstein-ball sets? The proof certainly needs this assumption but it's not in the proposition statement. 3. Why is the objective for problem (6) concave? What assumptions on \phi are needed? 4. Please justify the min-max swap in line 374. To summarize, given my technical concerns and the fact that the main contribution seems to be deriving a convex relaxation, I'm a bit hesitant to recommend this paper.
nips_2018_3451
ρ-POMDPs have Lipschitz-Continuous ε-Optimal Value Functions Many state-of-the-art algorithms for solving Partially Observable Markov Decision Processes (POMDPs) rely on turning the problem into a "fully observable" problem (a belief MDP) and exploiting the piece-wise linearity and convexity (PWLC) of the optimal value function in this new state space (the belief simplex ∆). This approach has been extended to solving ρ-POMDPs, i.e., for information-oriented criteria, when the reward ρ is convex in ∆. General ρ-POMDPs can also be turned into "fully observable" problems, but with no means to exploit the PWLC property. In this paper, we focus on POMDPs and ρ-POMDPs with λ_ρ-Lipschitz reward functions, and demonstrate that, for finite horizons, the optimal value function is Lipschitz-continuous. Then, value function approximators are proposed for both upper- and lower-bounding the optimal value function, which are shown to provide uniformly improvable bounds. This allows proposing two algorithms derived from HSVI which are empirically evaluated on various benchmark problems.
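The central property, stated in the form I assume the paper uses (a 1-norm on the belief simplex ∆ and horizon-dependent constants λ_t; the exact norm and constants are assumptions on my part), is that each finite-horizon optimal value function satisfies

|V_t(b) - V_t(b')| \;\le\; \lambda_t \, \|b - b'\|_1 \qquad \forall\, b, b' \in \Delta,

which is what licenses upper and lower bounds built from cones of slope λ_t anchored at the beliefs visited during heuristic search.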
post-rebuttal: I have a small suggestion for the authors: to include a related-work section in the paper if they can, and to add a small experiment where surveillance is the goal of the agent, since this is the main motivation of the paper. Overall I am happy with the paper and response. ----- Summary: This paper presents a method for solving POMDPs (offline POMDP planning) when the value function of the POMDP is not necessarily convex (mainly breaking the PWLC property exploited by most POMDP planners). The paper presents a new POMDP planner based on heuristic search value iteration that does not rely on convexity of the optimal value function to compute it. The idea is to use upper and lower bounds on the value function (and then tighten them at various beliefs) to compute the optimal value function (or an approximation of it). The property that the paper exploits to generate these upper and lower bounds is that the optimal value function has to be Lipschitz continuous. The paper also shows that if the immediate expected reward is Lipschitz continuous then the optimal value function is guaranteed to be Lipschitz continuous, and it then exploits this property of the value function to propose upper and lower bounds on the value function by obtaining an expression for the Lipschitz constant of the curves (cones in this case) that upper- and lower-bound the value function. Finally, the paper gives empirical results on standard POMDP problems from the literature. Quality - This is a good quality paper; thorough and rigorous for most parts. Clarity: The paper is clear and well-written. Originality: The paper presents original work. Significance: I think the paper presents significant results that are relevant to this community. Strength: I believe the main strength of the paper is the carefully designed and principled method for obtaining Lipschitz-continuous upper and lower bounds on the value function of a POMDP. Weakness: - I am not quite convinced by the experimental results of this paper. The paper sets out to solve POMDP problems with non-convex value functions. To motivate the case for their solution, the examples of POMDP problems with non-convex value functions used are: (a) surveillance in museums with thresholded rewards; (b) privacy-preserving data collection. So the first question is: when the cases we are trying to solve are the two above, why is there not a single experiment in such a setting, not even a simulated one? This basically makes the experiments section not quite useful. - How does the reader know that the reward definition of rho for these tasks necessitates a non-convex reward function? Surveillance and data collection have been studied in the POMDP context by many papers. Fortunately/unfortunately, many of these papers show that the increase in the reward due to a rho-based PWLC reward in comparison to a corresponding PWLC state-based reward (R(s,a)) is not that big. (Papers from Mykel Kochenderfer, Matthijs Spaan, Shimon Whiteson are some I can remember off the top of my head.) The related-work section, which is missing from the paper, should cover papers from these groups if added, some of which are on exactly the same topic (surveillance and data collection). - This basically means that we have devised a new method for solving non-convex value function POMDPs, but do we really need to do all that work? The current version of the paper does not answer this question for me. Also, a follow-up question would be: in exactly what situations do I want to use the methodology proposed by this paper vs. the existing methods?
In terms of criticism of significance, the above points can be summarized as: why should I care about this method when I do not see results on the problem the method is supposedly designed for?
nips_2018_6497
Blockwise Parallel Decoding for Deep Autoregressive Models Deep autoregressive sequence-to-sequence models have demonstrated impressive performance across a wide variety of tasks in recent years. While common architecture classes such as recurrent, convolutional, and self-attention networks make different trade-offs between the amount of computation needed per layer and the length of the critical path at training time, generation still remains an inherently sequential process. To overcome this limitation, we propose a novel blockwise parallel decoding scheme in which we make predictions for multiple time steps in parallel then back off to the longest prefix validated by a scoring model. This allows for substantial theoretical improvements in generation speed when applied to architectures that can process output sequences in parallel. We verify our approach empirically through a series of experiments using state-of-the-art self-attention models for machine translation and image super-resolution, achieving iteration reductions of up to 2x over a baseline greedy decoder with no loss in quality, or up to 7x in exchange for a slight decrease in performance. In terms of wall-clock time, our fastest models exhibit real-time speedups of up to 4x over standard greedy decoding.
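The predict-verify-accept loop can be sketched as follows. This follows my reading of the abstract, with propose_block (the k parallel proposal heads) and greedy_next (the scoring model's greedy step) as assumed callables; the verification calls are written sequentially here, although the point of the method is that they can be batched and run in parallel.

def blockwise_parallel_decode(prefix, propose_block, greedy_next, max_len):
    # propose_block(prefix) -> list of k proposed future tokens
    # greedy_next(prefix)   -> the scoring model's greedy next token
    while len(prefix) < max_len:
        proposals = propose_block(prefix)
        for j, p in enumerate(proposals):
            # Verify proposal j against the scoring model, conditioned on
            # the proposals accepted so far (parallelizable in practice).
            expected = greedy_next(prefix + proposals[:j])
            if p != expected:
                # Back off: keep the verified prefix plus the corrected
                # token, guaranteeing at least one token of progress.
                prefix = prefix + proposals[:j] + [expected]
                break
        else:
            prefix = prefix + proposals  # the whole block was verified
    return prefix

Under this scheme the output matches greedy decoding token for token, while the number of sequential decoding steps drops toward m/k when the proposals are usually accepted.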
This paper presents a parallel inference algorithm that allows performing greedy decoding in sub-linear time. The proposed approach uses k auxiliary models, each of which predicts the j-th next word. In this way, these k models can run and predict the next k words in parallel. Then a verification step is conducted to verify the predictions of these k models. If a proposal is correct, the predictions are accepted and the decoding can move on. In this way, in the best case, the model only requires m/k steps to generate a sequence with m tokens. Pros: - The idea proposed in this paper is neat and interesting. - The authors present experiments on tasks in two different domains to empirically demonstrate the efficiency of the decoding algorithm. - The authors propose both exact and inexact algorithms for making the predictions. - The authors provide a nice analysis of their approach. Cons: - The writing of the paper can be improved: the description of the proposed approach is sometimes confusing, and the descriptions of experimental settings and results are hand-wavy. - The speed-up of the decoding is not guaranteed and requires a large amount of duplicated computation, although the computation can be amortized. Comments: - It is unclear which knowledge distillation approach is actually used in the experiments. - I do not fully understand Table 1. It is unclear whether the results shown are from the exact or the approximate inference approach. It seems that the results shown here use the exact greedy approach, which the authors call regular. But, in this case, why is the BLEU score different for different k? If I understand correctly, the p_1 model for different k should be the same, so the exact decoding should give the same output. - Lines 210-214: for approaches other than "Regular", the authors should quantify the quality of the decoding. For example, what is the mean difference in pixel values between images generated by approximate greedy decoding and exact greedy decoding? For how many of them can the difference be perceived by a human? Minor comments: - Figure 2 is not readable when the paper is printed in black and white. - Line 109: Figure 3 is not showing what the authors describe. - Line 199-120: Please explicitly define what you mean by "Regular" and "Approximate" in Table 2. ----- Comments after rebuttal: Thanks for the clarification and the additional experiments showing performance measured in wall-clock time, and for the updated Table 1. With the clarification and the wall-clock results, I update my rating.
nips_2018_2257
Spectral Filtering for General Linear Dynamical Systems We give a polynomial-time algorithm for learning latent-state linear dynamical systems without system identification, and without assumptions on the spectral radius of the system's transition matrix. The algorithm extends the recently introduced technique of spectral filtering, previously applied only to systems with a symmetric transition matrix, using a novel convex relaxation to allow for the efficient identification of phases.
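For intuition, the core spectral filtering construction from the prior work [HSZ17] that this paper extends can be sketched as follows. The Hankel-matrix entries are quoted from my memory of that paper and should be checked against the source; the convex relaxation and phase-identification machinery added by the present paper are not shown.

import numpy as np

def spectral_filters(T, k):
    # A fixed Hankel matrix whose top eigenvectors act as convolutional
    # filters over the input history; prediction is then a learned linear
    # map over filtered inputs, with no explicit system identification.
    idx = np.arange(1, T + 1, dtype=float)
    s = idx[:, None] + idx[None, :]
    Z = 2.0 / (s ** 3 - s)
    eigvals, eigvecs = np.linalg.eigh(Z)   # eigenvalues in ascending order
    return eigvals[-k:], eigvecs[:, -k:]   # top-k eigenpairs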
Summary ------- This paper presents an algorithm for predicting the (one-step-ahead) output of an unknown linear time-invariant dynamical system (LDS). The key benefits of the proposed algorithm are as follows: - the regret (the difference between the prediction error of the algorithm and the prediction error of the best LDS) is bounded by sqrt(T) (ignoring logarithmic factors), where T is the length of the data sequence over which predictions are made. - the regret bound does not depend on the spectral radius of the transition matrix of the system (in the case that the data is generated by an LDS). - the algorithm runtime is polynomial in the 'natural parameters' of the problem. The algorithm is based on a recently proposed 'spectral filtering' technique [HSZ17]. In previous work, this method was only applicable to systems with a symmetric transition matrix (i.e. real poles). The present paper extends the method to general LDSs (i.e. non-symmetric transition matrices, or equivalently, complex poles). The performance of the proposed algorithm is illustrated via numerical simulations and is compared with popular existing methods including expectation maximization (EM) and subspace identification. Quality ------- In my view, the paper is of very high quality. The technical claims are well-substantiated by thorough proofs in the supplementary material, and the numerical experiments (though brief) are compelling and relevant (i.e. comparisons are made to methods that are routinely used in practice). Clarity ------- In my view, the paper is well-written and does a good job of balancing technical rigor with higher-level explanations of the key concepts. Overall, it is quite clear. I do have a few comments/queries: In my opinion, the supplementary material is quite crucial to the paper; not just in the obvious sense (the supplementary material contains the proofs of the technical results), but personally, I found some of the sections quite difficult to follow without first reading the supplementary material, e.g. Section 4.1. I very much like the idea of giving a high-level overview and providing intuition for a result/proof, but to be honest, I found it quite difficult to follow these high-level discussions without first reading through the proofs. This is probably the result of a combination of a few factors: i) the results are quite technical, ii) these ideas (spectral filtering) are quite new/not yet well known, iii) my own limitations. This is not necessarily a problem, as the supplementary material is well written. Regarding the presentation of some of the technical details: Should \Theta be \hat{\Theta} in Line 188 and equation (5)? In the supplementary material, the reference to (equation?) (2) doesn't seem to make sense, e.g., line 33 but also elsewhere. Do you mean Definition 2? In Line 18, do you mean \sum_{j=0}^\tau\beta_jA^{i-j} = A^{i-\tau}p(A) = 0? In equation (10), is there an extra C? It's not a big deal, but i is used as both sqrt(-1) (e.g. (29)) and as an index, which can be a little confusing at first. Below (28), should M'_\omega be M'_l? Furthermore, should M'_l be complex, not real? If M'_l is real, then where does (31) come from? (I would understand if M'_l were complex.) Are the r's in (38) and below missing an l subscript? Concerning reproducibility of numerical results: I would imagine that the numerical results are a little difficult to reproduce. For instance, the details on the EM algorithm (initialization method?) and SSID (number of lags used in the Hankel matrix?)
are not given. Perhaps Line 174, 'the parameter choices that give the best asymptotic theoretical guarantees are given in the appendix', could be a bit more specific (the appendices are quite large). Also, unless I missed it, in Sec 5 k is not specified, nor is it clear why W = 100 is chosen. Originality ----------- The paper builds upon the previous work of [HSZ17], which considers the same one-step-ahead prediction problem for LDS. The novelty is in extending this approach to a more general setting; specifically, handling the case of linear systems with asymmetric state transition matrices. In my view, this represents a sufficiently novel/original contribution. Significance ------------ The problem of time-series prediction is highly relevant to a number of fields and application areas, and the paper does a good job of motivating this in Section 1. Minor corrections & comments ---------------------------- Full stop after the equation between lines 32-33. Line 39, 'assumed to be [proportional to] a small constant'? It's a little strange to put Algorithm 1 and Theorem 1 before Definition 2, given that Definition 2 is necessary to understand the algorithm (and to some extent the theorem). Line 185 'Consider a noiseless LDS'.
nips_2018_621
Evolution-Guided Policy Gradient in Reinforcement Learning Deep Reinforcement Learning (DRL) algorithms have been successfully applied to a range of challenging control tasks. However, these methods typically suffer from three core difficulties: temporal credit assignment with sparse rewards, lack of effective exploration, and brittle convergence properties that are extremely sensitive to hyperparameters. Collectively, these challenges severely limit the applicability of these approaches to real-world problems. Evolutionary Algorithms (EAs), a class of black-box optimization techniques inspired by natural evolution, are well suited to address each of these three challenges. However, EAs typically suffer from high sample complexity and struggle to solve problems that require optimization of a large number of parameters. In this paper, we introduce Evolutionary Reinforcement Learning (ERL), a hybrid algorithm that leverages the population of an EA to provide diversified data to train an RL agent, and reinserts the RL agent into the EA population periodically to inject gradient information into the EA. ERL inherits the EA's ability to perform temporal credit assignment with a fitness metric, its effective exploration with a diverse set of policies, and the stability of a population-based approach, and complements these with off-policy DRL's ability to leverage gradients for higher sample efficiency and faster learning. Experiments on a range of challenging continuous control benchmarks demonstrate that ERL significantly outperforms prior DRL and EA methods.
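Schematically, one generation of the hybrid algorithm described above might look like the sketch below. This is based only on the abstract; evaluate, evolve, and the rl_agent interface are hypothetical stand-ins (the environment is assumed to be captured by the evaluate callable), not the authors' code.

def erl_generation(population, rl_agent, replay_buffer,
                   evaluate, evolve, generation, sync_every=10):
    # Evaluate each actor's episode-level fitness; every rollout also
    # fills the shared replay buffer, giving the RL agent diverse data.
    fitnesses = [evaluate(actor, replay_buffer) for actor in population]
    # Periodically inject the gradient-trained RL actor into the
    # population, replacing the currently weakest individual.
    if generation % sync_every == 0:
        weakest = fitnesses.index(min(fitnesses))
        population[weakest] = rl_agent.copy_actor()
        fitnesses[weakest] = evaluate(population[weakest], replay_buffer)
    # Standard EA step on actor parameters: selection, crossover, mutation,
    # driven only by fitness (no gradients needed).
    population = evolve(population, fitnesses)
    # Off-policy DRL (e.g., DDPG) learns from the population's experience.
    rl_agent.train(replay_buffer)
    return population, rl_agent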
In this paper, the authors combine an evolutionary algorithm with a deep RL algorithm so that the combination achieves the best of both worlds. It is successfully applied to 6 standard MuJoCo benchmarks. The following part of this review has been edited based on the other reviewers' points, the authors' rebuttal, an attempt to reproduce the results of the authors, and investigations in their code. First, investigations of the results and code revealed several points that are poorly documented in the paper. Here is a summary: * DDPG: - The hidden layer sizes in both the Actor and the Critic in the implementation are different from those mentioned in the paper. This is a problem since DDPG's performance can vary a lot with the architecture of the nets. - The Actor uses tanh non-linearities, and the Critic ELU non-linearities. This is not mentioned. - The norm of the gradient of the Critic is clipped. This is not mentioned. * Evolutionary Algorithm: - Tournament selection is performed with a tournament size of 3, and is done with replacement. - Individuals selected during tournament selection produce offspring with the elite through crossover until the population is filled. - The way mutations are handled is more complex than what is mentioned in the paper and involves many hyper-parameters. For each non-elitist individual there is a fixed probability for its genome to mutate. If the genome mutates, then each weight of the actor can mutate in 3 different ways. "Normal" mutations involve adding 10% Gaussian noise; "Super" mutations involve adding 100% Gaussian noise; and "Reset" mutations involve resetting the weight using a centered normalized Gaussian. It seems that on average, around 10% of the weights of an Actor that undergoes a mutation are changed, but some parts of the code are still obscure. (A sketch of my reading of this mutation scheme appears after this review.) The whole mutation process is really messy and deserves more attention, since it is supposed to account for half of the success of the method (the other half being the deep RL part). The authors should definitely put forward all the above facts in their paper, as some of them play an important role in the performance of their system. About performance itself, in agreement with the other reviewers, I consider that the authors should not base their paper on state-of-the-art performance claims, but rather on the simplicity and conceptual efficiency of their approach with respect to alternatives. I also agree with the other reviewers that the title is too generic and that the approach should be better positioned with respect to the related literature (Baldwin effect etc.). For the rest, my opinion has not changed much, so I keep the rest of the review mostly as is. The idea is simple, the execution is efficient, and the results are compelling (but see the remarks about reproducibility above). I am in favor of accepting this paper as it provides a useful contribution. Below I insist on weaknesses to help the authors improve their paper. A first point is a lack of clarity about deceptive reward signals or gradients. The authors state that the hard inverted pendulum is deceptive, but they don't explain what they mean, though this matters a lot for the main message of the paper. Indeed, ERL is supposed to improve over the EA because it incorporates gradient-based information that should speed up convergence. But if this gradient-based information is deceptive, there should be no speed-up, in contradiction with the results of ERL on "deceptive reward" tasks.
I would be glad to see a closer examination of ERL's behaviour in the context of a truly deceptive gradient task: does it reduce to the corresponding EA, i.e. does it consistently reject the deep RL policy until a policy that is close enough to the optimum has been found? In that respect, the authors should have a look at the work of Colas et al. at ICML 2018, which is closely related to theirs, and where the effect of deceptive gradients on deep RL is discussed in more detail. Related to the above, details are missing about the tasks. In the standard inverted pendulum, is the action discrete in {-1,0,1}, continuous in [-1,1], or something else? What is the immediate reward signal? Does it incorporate something to favor smaller actions? Is there a "success" state that may stop a trial before 1000 steps? All these details may make a difference in the results. The same kind of basic facts should also be given about the other benchmarks, and the corresponding details could be relegated to the appendices. The methodology for reported metrics has nice features, but the study could be made more rigorous with additional information: how many seeds did you use? How do you report variance? Can you say something about the statistical significance of your results? The third weakness is in the experimental results when reading Fig. 3. For instance, the results of DDPG on half-cheetah seem to be lower than results published in the literature (see e.g. Henderson et al., "Deep RL that matters"). Could the authors investigate why (different implementation? insufficient hyper-parameter tuning...?)? The fourth weakness is the related work section. First, the authors should read more about Learning Classifier Systems (LCSs), which indeed combined EAs and RL from the very start (Holland's 1975 work). The authors may read Lanzi, Butz or Wilson's papers about LCSs to put that claim in perspective. They should also take a broader view of using EAs to obtain RL algorithms, which is an important trend in meta-RL at the moment. More generally, in a NIPS paper, we expect the "related work" section to provide a good overview of the many facets of the domain, which is not the case here. Finally, the last sentence, "These approaches are complementary to the ERL framework and can be readily combined for potential further improved performance.", is very vague and weak. What approach do the authors want to combine theirs with, and how can it be "readily combined"? The authors must be much more specific here. In Section 2.2 it should be stated more clearly which EA is used exactly. There are many families and most algorithms have names; from the section we only learn that the authors use an EA. Later on, more details about mutation etc. are missing. For instance, how do they initialize the networks? How many weights are perturbed during mutation? The authors may also explain why they are not using an ES, as used in most neuroevolution papers involved in the competition with deep RL (see the numerous "Uber labs deep-neuroevolution papers"), why not NEAT, etc. Actually, it took me a while to realize that an ES cannot be used here, because there is no way to guarantee that the deep RL policy will comply with the probability distribution corresponding to the covariance matrix of the ES's current population. Besides, ESs also perform a form of approximate gradient descent, making the advantage of using deep RL in addition less obvious. All this could be discussed.
I would also be glad to see a "curriculum learning" aspect: does ERL start by relying on the EA a lot, and accept the deep RL policy more and more over time once the gradient is properly set? Note also that the authors may include policies obtained from imitation within their framework without harm. This would make it even richer. Finally, the paper would be stronger if performance were compared to recent state-of-the-art algorithms such as SAC (Haarnoja), TD3 (Fujimoto) or D4PG (Horgan), but I do not consider this mandatory... More local points. l. 52: ref [19] has nothing to do with what is said, please pick a better ref. l. 195-201: this is a mere repetition of lines 76-81. Probably with a better view of related work, you can get a richer intro and avoid this repetition. p.2: "However, each of these techniques either rely on complex supplementary structures or introduce sensitive parameters that are task-specific": actually, the authors' method also introduces supplementary structures (the EA / the deep RL part), and it also has task-specific parameters... The authors keep a standard replay buffer of 1e6 samples. But the more actors, the faster the former content is washed out. Did the authors consider increasing the size of the replay buffer with the number of agents? Any thoughts or results about this would be welcome. l.202 sq. (diverse exploration): the fact that combining parameter noise exploration with action exploration "collectively lead to an effective exploration strategy" is not obvious and needs to be empirically supported, for instance using ablation studies of both forms of exploration. typos: I would always write "(whatever)-based" rather than "(whatever) based" (e.g. gradient-based). Google seems to agree with me. l. 57: suffer with => from l. 108: These network(s) l. 141: gradients that enable(s) l. 202: is used (to) generate l. 217: minimas => minima (Latin plural of minimum) l. 258: favorable => favorably? l.266: a strong local minima => minimum (see above ;)) In ref [55] baldwin => Baldwin.
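To make the mutation scheme flagged in the EA bullet list above concrete, here is a sketch of how I read it. The 10%/100% noise scales come from the description above, while the per-kind probabilities and the scaling of the noise by weight magnitude are my own guesses and would need checking against the released code.

import numpy as np

def mutate_weights(w, rng, p_mut=0.1, p_super=0.05, p_reset=0.05):
    # w: flat array of actor weights. About p_mut of the weights mutate;
    # a mutating weight undergoes a "super" or "reset" mutation with
    # small probability, otherwise a "normal" one.
    w = w.copy()
    mutate = rng.random(w.shape) < p_mut
    kind = rng.random(w.shape)
    normal = mutate & (kind >= p_super + p_reset)
    sup = mutate & (kind < p_super)
    reset = mutate & (kind >= p_super) & (kind < p_super + p_reset)
    # "Normal": add 10% Gaussian noise; "Super": add 100% Gaussian noise
    # (both scaled by the weight's magnitude, which is an assumption).
    w[normal] += 0.1 * np.abs(w[normal]) * rng.standard_normal(normal.sum())
    w[sup] += 1.0 * np.abs(w[sup]) * rng.standard_normal(sup.sum())
    # "Reset": redraw the weight from a centered normalized Gaussian.
    w[reset] = rng.standard_normal(reset.sum())
    return w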
nips_2018_3363
Representation Learning of Compositional Data We consider the problem of learning a low dimensional representation for compositional data. Compositional data consists of a collection of nonnegative data that sum to a constant value. Since the parts of the collection are statistically dependent, many standard tools cannot be directly applied. Instead, compositional data must first be transformed before analysis. Focusing on principal component analysis (PCA), we propose an approach that allows low dimensional representation learning directly from the original data. Our approach combines the benefits of the log-ratio transformation from compositional data analysis and exponential family PCA. A key tool in its derivation is a generalization of the scaled Bregman theorem, which relates the perspective transform of a Bregman divergence to the Bregman divergence of a perspective transform and a remainder conformal divergence. Our proposed approach includes a convenient surrogate (upper bound) loss of the exponential family PCA which has an easy-to-optimize form. We also derive the corresponding form for nonlinear autoencoders. Experiments on simulated data and microbiome data show the promise of our method.
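For concreteness, the clr+PCA baseline that the proposed method is compared against (see the review below) can be sketched as follows. The eps pseudo-count for handling zero entries is my own assumption; different authors handle zeros differently.

import numpy as np

def clr(X, eps=1e-6):
    # Centered log-ratio transform: the log of each part relative to the
    # geometric mean of its composition (rows of X sum to a constant).
    L = np.log(X + eps)
    return L - L.mean(axis=1, keepdims=True)

def clr_pca(X, n_components=2):
    Y = clr(X)
    Y = Y - Y.mean(axis=0)                        # center each feature
    U, S, Vt = np.linalg.svd(Y, full_matrices=False)
    return Y @ Vt[:n_components].T                # low-dimensional scores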
This paper suggests a generalization of PCA that is applicable to compositional data (non-negative data where each sample sums to 1 or to 100%). It builds on exponential family PCA developed in the early 2000s. There have been multiple more or less ad hoc approaches to transforming compositional data to make it amenable to PCA, and a more principled method of compositional PCA would certainly be an interesting development. The paper is reasonably well written and uses interesting mathematics. However, it (a) lacks convincing examples, (b) is somewhat confusing in places. I put it above the acceptance threshold, hoping that the authors would be able to improve the presentation. Caveat: I am not familiar with Bregman divergences and do not know the [22] paper that this one heavily uses. I cannot therefore judge the details of the math. MAJOR ISSUES 1. When reading the paper, I was waiting for some cool visualizations of real-life data. I was disappointed to find *no* visualizations in Section 5.2. Actually the text says "visualizations of the two main axes..." (lines 233 and 246), but they are not shown!! Please show! Also, you seem to be saying that all methods gave similar results (e.g. line 236). If so, what's the point of using your method over more standard/simple ones? I would like to see some example where you can show that your method outperforms vanilla PCA and alr/clr/ilr+PCA in terms of visualization. Your metrics in Figure 2 are not very convincing because it's hard to judge which of them should be most relevant (maybe you should explain the metrics better). 2. The toy example (Figure 1) is good, but I would like to see alr+PCA and ilr+PCA in addition to clr+PCA (I know that different people recommend different choices; does it matter here?), and more importantly, I'd like to see vanilla PCA. It seems to me that at least for 3 arms vanilla PCA should give an ideal "Mercedes" shape without any log-transforms. Is it true for 10 arms? If it _is_ true, then this is a poor example because vanilla PCA performs optimally. If so, the authors should modify the toy example such that alr/clr/ilr+PCA performed better than PCA (this would also help motivate the clr-transform) and that CodaPCA performed even better than that. 3. I think a better motivation is needed for log-transforms in Section 2. Line 52 mentions "nonlinear structure due to the simplex constraint" but the simplex constraint is clearly linear. So if the simplex constraint were the only issue, I wouldn't see *any* problem with using standard vanilla PCA. Instead, there arguably *is* some nonlinearity in the nature of compositional data (otherwise why would anybody use log-transforms), and I remember Aitchison showing some curved scatterplots on the 2D simplex. This needs to be explained/motivated here. 4. Line 91 says that count data can be described with a Poisson distribution. Line 260 again mentions "raw count data". But compositional data does not need to be counts; it can come as continuous fractions (e.g. concentrations). Does the method suggested here only apply to counts? If so, this should be made very clear in the title/abstract/etc. If not, please clarify why a Poisson distribution still makes sense. MINOR ISSUES 1. Box after line 81. Shouldn't Eq 3 be part of the box? Without it the box looks incomplete. The same for the box after line 107. 2. Eq 6. "\phi-PCA" should be written with a hyphen (not minus) and capitalized PCA. Please use hyphens also below in similar expressions. 3.
The end of Section 3 -- I suggest inserting some comments on how exactly exponential family PCA reduces to standard PCA in the Gaussian case. What is the \phi function, what is the Bregman divergence, etc. 4. Line 113, last sentence of the section. This is unclear. Why is it a hole? Why is it not easy? 5. Line 116, first sentence of the section: again, not clear how exactly clr is a workaround. 6. Line 139: period missing. 7. Line 144, where does this \phi() come from? Is it Poisson? Please explain. 8. Eq (11): what is "gauged-KL", the same as "gauged-phi" above? 9. Box after line 150: now you have the loss function in the box, compare with minor issue #1. 10. Lines 201 and 214: two hidden layers or one hidden layer? 11. Line 206: space missing. 12. Line 211: aren't CoDA-PCA and SCoDA-PCA the same thing? I got confused here. 13. Line 257: I don't see any overfitting in Figure 2. This would be the test-set curve going upwards, no? Please explain why you conclude there is overfitting. 14. Shouldn't you cite Aitchison's "PCA of compositional data" (1983)?

----------------- **Post-rebuttal update:** I still think the paper is worth accepting, but my main issue remains the lack of an obviously convincing application. I see that reviewer #3 voiced the same concern. Unfortunately, I cannot say that the scatter plots shown in the author response (panels A and B) settled this issue. The authors write that panel A, unlike B, shows three clusters, but to me it looks like wishful thinking. I am not familiar with the microbiome data, but I do agree with the point of reviewer #3 that with 100+ features the compositional nature of the data can simply be ignored without losing much. Would the authors agree with that? I think it would be worth discussing. Overall, it looks like a good attempt at setting up compositional PCA in a mathematically principled way, but in the end I don't see examples where it performs noticeably better than standard naive approaches.
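For readers unfamiliar with the transforms this review keeps referring to, here is a minimal sketch of the clr+PCA baseline. The `eps` zero-handling is my own assumption (microbiome pipelines typically add pseudo-counts instead), and the function names are illustrative.

```python
import numpy as np

def clr(X, eps=1e-12):
    """Centered log-ratio transform of compositional rows (each row sums to 1)."""
    L = np.log(X + eps)
    return L - L.mean(axis=1, keepdims=True)  # subtract row-wise geometric mean (in log space)

def clr_pca(X, k=2):
    """Vanilla PCA applied to clr-transformed data; returns k-dim scores."""
    Z = clr(X)
    Z = Z - Z.mean(axis=0)                    # column-center before PCA
    U, S, Vt = np.linalg.svd(Z, full_matrices=False)
    return Z @ Vt[:k].T
```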
nips_2018_1639
Semi-crowdsourced Clustering with Deep Generative Models We consider the semi-supervised clustering problem where crowdsourcing provides noisy information about the pairwise comparisons on a small subset of data, i.e., whether a sample pair is in the same cluster. We propose a new approach that includes a deep generative model (DGM) to characterize low-level features of the data, and a statistical relational model for noisy pairwise annotations on its subset. The two parts share the latent variables. To make the model automatically trade off between its complexity and fitting the data, we also develop its fully Bayesian variant. The challenge of inference is addressed by fast (natural-gradient) stochastic variational inference algorithms, where we effectively combine variational message passing for the relational part and amortized learning of the DGM under a unified framework. Empirical results on synthetic and real-world datasets show that our model outperforms previous crowdsourced clustering methods.
Post Rebuttal Summary -- Thanks to the authors for a careful rebuttal that corrected several notation issues and provided an attempt at a more realistic experiment using CIFAR-10 with real (not simulated) crowd labels. I found this rebuttal satisfactory and I am willing to vote for acceptance. I won't push *too* hard for acceptance because I still wish both the new experiment and the revised presentation of variational method details could go through more careful review. My chief remaining concern is that it is still difficult to distinguish the reasons for gains from BayesSCDC over plain old SCDC because multiple things change (factorization of q(z,x), estimation of global parameters). I do hope the authors keep their promise to compare to a "middle of the road" 3rd version and also offer insight about how amortization makes the q(z|o)q(x|z,o) approach still less flexible than mean field methods.

Review Summary -- Overall, I think the core ideas here are interesting, but I'm rating this paper as borderline accept because it doesn't do any experiments on real crowd-labeled datasets, misses opportunities to provide take-home insights about the various variational approximations attempted here, suffers from some minor bugs in the math (nothing showstopping), and leaves open several procedural questions about experiments. My final decision will depend on a satisfactory rebuttal.

Paper Summary -- This paper considers the problem where a given observed dataset consists of low-level feature vectors for many examples as well as pair-wise annotations for some example pairs indicating if they should belong to the same cluster or not. The goal is to determine a hard/soft clustering of the provided examples. The annotations may be *noisy*, e.g. from a not-totally-reliable set of crowd workers. The paper assumes a hierarchical model for both observed feature vectors O and observed pair-wise annotations L. Both rely on a shared K-component mixture model, which assigns each example to one of K clusters. Given this cluster assignment, the O and L are then generated by separate paths: observed vectors O come from a Gaussian-then-deep-Gaussian, and pairwise annotations L are produced by a two-coin Dawid-Skene model, originally proposed in [16]. Many entries of L may not be observed. Two methods for training the proposed model are developed:
* "SCDC" estimates an approximate posterior for each example's cluster assignment z and latent vector x which conditions on observed features O -- q(z, x | O) -- but point estimates the "global" parameters (e.g. GMM parameters \pi, \mu, \Sigma)
* "BayesSCDC" estimates an approximate posterior for all parameters that aren't neural net weights, using a natural-gradient formulation.
Notably, for "SCDC" the posterior over z,x has conditional structure: q(z,x|o) = q(z|o) q(x|z,o). In contrast, the same joint posterior under the "BayesSCDC" model has mean-field structure with no conditioning on o: q(z,x) = q(z)q(x). Experiments focus on demonstrating several benefits:
* 5.1 toy data experiments show benefits of including pairwise annotations in the model.
* 5.2 compares, on a Faces dataset, the presented approach to several baselines that can also incorporate pairwise annotations to discover clusters.
* 5.3 compares the two presented methods (full BayesSCDC vs SCDC) on MNIST data.
Strengths --
* The pair-wise annotation model nicely captures individual worker reliability in a probabilistic model.
* Experiments seem to explore a variety of reasonable baselines.
Weaknesses --
* Although the method is intended for noisy crowd-labeling, none of the experiments actually includes truly crowd-labeled annotations. Instead, all labels are simulated as if from the true two-coin model, so it is difficult to understand how the model might perform on data actually generated by human labelers.
* The claimed difference between the fully-Bayesian inference of BayesSCDC and the point-estimation of global parameters in SCDC seems questionable to me... without showing the results of multiple independent runs and breaking down the differences more finely to separate q(z,x) issues from global parameter issues, it's tough to be sure that this difference isn't due to poor random initialization, the q(z,x) difference, or other confounding issues.

Originality -- The key novelty claimed in this paper seems to be the shared mixture model architecture used to explain both observed features (via a deep Gaussian model) and observed noisy pairwise annotations (via a principled hierarchical model from [16]). While I have not seen this exact modeling combination before, the components themselves are relatively well understood. The inference techniques used, while cutting edge, are used more or less in an "out-of-the-box" fashion by intelligently combining ideas from recent papers. For example, the recognition networks for non-conjugate potentials in BayesSCDC come from Johnson et al. NIPS 2016 [12], or the amortized inference approach to SCDC with marginalization over discrete local variables from Kingma, Mohamed, Rezende, and Welling [13]. Overall, I'm willing to rate this as just original enough for NIPS, because of the technical effort required to make all these work in harmony and the potential for applications. However, I felt like the paper had a chance to offer more compelling insight about why some approaches to variational methods work better than others, and that would have really made it feel more original.

Significance -- The usage of noisy annotations to guide unsupervised modeling is of significant interest to many in the NIPS community, so I expect this paper will be reasonably well-received, at least by folks interested in clustering with side information. I think the biggest barriers to widespread understanding and adoption of this work would be the lack of real crowd-sourced data (all experiments use simulated noisy pairwise annotations) and helping readers understand exactly why the BayesSCDC approach is better than SCDC alone when so much changes between the two methods.

Quality Issues --
## Q1) Correctness issues in pair-wise likelihood in Eq. 2
In the pair-wise model definition in Sec. 2.2, a few things are unclear, potentially wrong:
* The 1/2 exponent is just wrong as a poor post-hoc correction to the symmetry issue. It doesn't result in a valid distribution over L (e.g. that integrates to unity over the support of all binary matrices). A better correction in Eq. 2 would be to restrict the sum to those pairs (i,j) that satisfy i < j (assuming no self-edges are allowed).
* Are self-edges allowed? That is, is L_11 or L_22 a valid entry? The sum over pairs i,j in Eq. 2 suggests so, but I think logically self-edges should maybe be forbidden.
## Q2) Correctness issue in formula for mini-batch unbiased estimator of pair-wise likelihood
In lines 120-122, given a minibatch of S annotations, the L_rel term is computed by reweighting a minibatch-specific sum by a scale factor N_a / |S|, so that the term has a similar magnitude to the full dataset.
However, the N_a term as given is incorrect. It should count the total number of non-null observations in L. Instead, as written it counts the total number of positive entries in L.
## Q3) Differences between SCDC and BayesSCDC are confusing and would usefully be broken down more finely
The two presented approximation approaches, SCDC and BayesSCDC, seem to differ on several axes, so any performance difference is hard to attribute to one change. First, SCDC assumes a more flexible q(x, z) distribution, while BayesSCDC assumes a mean-field q(x)q(z) with a recognition network for a surrogate potential for p(o|x). Second, SCDC treats the global GMM parameters as point estimates, while BayesSCDC infers a q(\mu, \Sigma) and q(\pi). I think these two axes should be explored independently. In particular, I suggest presenting 3 versions of the method:
* the current "SCDC" method
* the current "BayesSCDC" method
* a version which does q(x)q(z) with a recognition network for a surrogate potential for p(o|x) (Eq. 10), but point-estimates global parameters.
The new 3rd version should enjoy the fast properties of BayesSCDC (each objective evaluation doesn't require marginalizing over all z values), but be more similar to SCDC.

Clarity ------- The paper reads reasonably well. The biggest issue in clarity is that some key hyperparameters required to reproduce experiments are just not provided (e.g. the Dirichlet concentration for the prior on \pi, the Wishart hyperparameters, etc.). These absolutely need to be in a final version. Other reproducibility concerns:
* What is the procedure for model initialization?
* How many initializations of each method are allowed? How do you choose the best/worst? Accuracy on training?
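To make the Q1 point concrete, here is a minimal sketch of a two-coin pairwise annotation log-likelihood restricted to pairs i < j, as the review suggests. All variable names (`alpha`, `beta`, `same`) are mine, not the paper's notation, and the per-worker structure is simplified to a single worker.

```python
import numpy as np

def pairwise_loglik(L, same, alpha, beta):
    """Log-likelihood of one worker's pairwise labels under a two-coin model.

    L     : dict mapping (i, j) with i < j -> observed label in {0, 1}
    same  : dict mapping (i, j) -> True if z_i == z_j under current clustering
    alpha : P(L_ij = 1 | same pair)      (worker sensitivity)
    beta  : P(L_ij = 0 | different pair) (worker specificity)

    Summing over i < j only avoids the double-counting that the 1/2
    exponent in the paper's Eq. 2 tries to patch.
    """
    ll = 0.0
    for (i, j), l in L.items():
        p1 = alpha if same[(i, j)] else (1.0 - beta)  # P(L_ij = 1 | z)
        ll += np.log(p1) if l == 1 else np.log(1.0 - p1)
    return ll
```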
nips_2018_5436
Theoretical Linear Convergence of Unfolded ISTA and its Practical Weights and Thresholds In recent years, unfolding iterative algorithms as neural networks has become an empirical success in solving sparse recovery problems. However, its theoretical understanding is still immature, which prevents us from fully utilizing the power of neural networks. In this work, we study unfolded ISTA (Iterative Shrinkage Thresholding Algorithm) for sparse signal recovery. We introduce a weight structure that is necessary for asymptotic convergence to the true sparse signal. With this structure, unfolded ISTA can attain a linear convergence, which is better than the sublinear convergence of ISTA/FISTA in general cases. Furthermore, we propose to incorporate thresholding in the network to perform support selection, which is easy to implement and able to boost the convergence rate both theoretically and empirically. Extensive simulations, including sparse vector recovery and a compressive sensing experiment on real image data, corroborate our theoretical results and demonstrate their practical usefulness. We have made our code publicly available.
This paper proves that the unfolded LISTA, an empirical deep learning solver for sparse codes, can guarantee faster (linear) asymptotic convergence than the standard iterative ISTA algorithm. The authors proposed a partial weight coupling structure and a support detection scheme. Rooted in standard LASSO techniques, they are both shown to speed up LISTA convergence. The theories are supported by extensive simulations and a real-data CS experiment. Strength: I really like this paper. Simple, elegant, easy-to-implement techniques are backed up by solid theory. Experiments follow a step-by-step manner and accompany the theory fairly well. - ISTA is generally sub-linearly convergent before its iterates settle on a support. Several prior works [8,15,21] show the acceleration effect of LISTA from different views, but this paper for the first time establishes the linear convergence of LISTA (upper bound). I view that as important progress in the research direction of NN sparse solvers. - Both weight coupling and support detection are well motivated by theoretical speedup results. They are also very practical and can be "plug-and-play" with standard LISTA, with considerable improvements observed in experiments. Weakness: I have no particular argument for weakness. Two suggestions for the authors to consider: - Do the authors see whether their theory can be extended to convolutional sparse coding, which might be a more suitable choice for image CS? - Although IHT and LISTA solve different problems (thus not directly "comparable" in simulation), is it possible that the l0-based networks [4,21] can also achieve competitive performance in real-data CS? The authors are encouraged to compare in their updated version.
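For context on the objects discussed here, a minimal sketch of one classical ISTA iteration next to one unfolded LISTA layer follows. The partial weight coupling studied in the paper ties the two learned matrices through the dictionary (roughly W2 = I - W1 A); this sketch leaves them free for clarity, so it illustrates plain LISTA, not the paper's coupled variant.

```python
import numpy as np

def soft_threshold(v, theta):
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def ista_step(x, y, A, step, lam):
    """One classical ISTA iteration for min_x 0.5*||y - Ax||^2 + lam*||x||_1."""
    return soft_threshold(x + step * A.T @ (y - A @ x), step * lam)

def lista_layer(x, y, W1, W2, theta):
    """One unfolded layer; W1, W2 and the threshold theta are learned per layer."""
    return soft_threshold(W1 @ y + W2 @ x, theta)
```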
nips_2018_6405
Large Scale computation of Means and Clusters for Persistence Diagrams using Optimal Transport Persistence diagrams (PDs) are now routinely used to summarize the underlying topology of complex data. Despite several appealing properties, incorporating PDs in learning pipelines can be challenging because their natural geometry is not Hilbertian. Indeed, this was recently exemplified in a string of papers which show that the simple task of averaging a few PDs can be computationally prohibitive. We propose in this article a tractable framework to carry out standard tasks on PDs at scale, notably evaluating distances, estimating barycenters and performing clustering. This framework builds upon a reformulation of PD metrics as optimal transport (OT) problems. Doing so, we can exploit recent computational advances: the OT problem on a planar grid, when regularized with entropy, is convex and can be solved in linear time using the Sinkhorn algorithm and convolutions. This results in scalable computations that can stream on GPUs. We demonstrate the efficiency of our approach by carrying out clustering with diagram metrics on several thousands of PDs, a scale never seen before in the literature.
# Update after rebuttal I thank the authors for their comments in the rebuttal. In conjunction with discussions with other reviewers, they helped me understand the salient points of the paper better. I would still add the following suggestions: 1. It would strengthen the paper if advantages over existing methods that are not Wasserstein-based could be shown (persistence landscapes, persistence images, ...), in particular since those methods have the disadvantage of suffering from descriptors that are not easily interpretable and do not yield 'proper' means (in the sense that the mean landscape, for example, cannot be easily converted back into a persistence diagram). 2. The experiment could be strengthened by discussing a more complex scenario that demonstrates the benefits of TDA _and_ the benefits of the proposed method. This could be a more complex data set, for example, that is not easily amenable to non-topological approaches, such as natural image data. I realize that this is a balancing act, however. Having discussed at length how to weigh the originality of the paper, I would like to see the method published in the hopes that it will further promulgate TDA. I am thus raising my score. # Summary This paper presents an algorithm for calculating approximate distances between persistence diagrams, i.e. feature descriptors from topological data analysis. The approximation is based on a reformulation of the distance calculation as an optimal transport problem. The paper describes how to apply previous research, most notably entropic regularization, to make calculations efficient and parallelizable. In addition to showing improved performance for distance calculations, the paper also shows how to calculate barycentres in the space of persistence diagrams. Previously, this involved algorithms that are computationally infeasible at larger scales, but the new formulation permits faster calculations. This is demonstrated by deriving a $k$-means algorithm. # Review This is an interesting paper and I enjoyed reading it. I particularly appreciated the step-by-step 'instructions' for how to build the algorithm, which increases accessibility even though the topic is very complex with respect to terminology. However, I lean towards rejecting the paper for now: while the contribution is solid and reproducible, the paper appears to be a (clever!) application of previous works, in particular the ones by Altschuler et al., Benamou et al., Cuturi, as well as Cuturi and Doucet. In contrast to these publications, which have an extremely general scope (viz. the whole domain of optimal transport), this publication is only relevant within the realm of topological data analysis. Moreover, the experimental section is currently rather weak and does not give credence to the claim in the abstract that 'clustering is now demonstrated at scales never seen before in the literature'. While I agree that this method is capable of handling larger persistence diagrams, I think the comparison with other algorithms needs to have more details: the 'Hera' package, as far as I know, also permits the calculation of approximations to the Wasserstein distance and to the bottleneck distance between persistence diagrams. Were these used in the experiments? And if so, what were the parameters such as the 'relative error'? And for that matter, how many iterations of the new algorithm were required and what are the error bounds?
In addition, it might be fairer to use a single-core implementation for the comparison here---this would also show the benefits of parallelization! Such plots could be shown in the supplementary materials of the paper. At present, the observations with respect to speed are somewhat 'obvious': it seems clear that a multi-core algorithm (on a P100!) can always beat a single-core implementation. The same goes for the barycentre calculations, which, I assume, are not even written in the same programming language. Instead of showing that the algorithm is faster here (which I will readily believe), can the authors say more about the qualitative behaviour? Are the barycentres essentially the same? How long does convergence take? Statements of this sort would also strengthen reproducibility; the authors could show all required parameters in addition to their experiments.
Minor comments:
- L. 32: If different kernels for persistence diagrams are cited, the first one, i.e. the multi-scale kernel by Reininghaus et al. in "A Stable Multi-Scale Kernel for Topological Machine Learning", should also be cited.
- In the introduction, works about approximations of persistence diagram distances (such as the one by Kerber et al.) could be briefly discussed as well. Currently, they only appear in the experimental section, which is a little bit late, I think.
- L. 57: "verifying" --> "satisfying" (this happens in other places as well)
- L. 114: $\mathcal{M}_{+}$ should be briefly defined/introduced before usage
- How are parameters tuned? While I understand how the 'on-the-fly' error control is supposed to work, I am not sure how reasonable defaults can be obtained here. Or is the algorithm always iterated until the approximation error is sufficiently small?
- L. 141: Please add a citation for the C-transform, such as Santambrogio (who is already cited in the bibliography)
- L. 155: "criterion" --> "threshold"
- L. 168: What does $|D_1|$ refer to? Some norm of the diagram points? This is important in order to understand the discretization error.
- L. 178: From what I can tell, the algorithm presented in the paper generalizes to arbitrary powers; does the statement about the convolutions only hold for the squared Euclidean distance? The authors should clarify this.
- L. 231: "Turner et al." is missing a citation
- Timing information for the first experiment in Section 5 should be included; it is important to know the dependencies between the 'fidelity' of the approximation and the computational speed; in particular since the paper speaks about 'smearing' in L. 251, which prohibits the recovery of 'spiked' diagrams
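Since the review centers on the entropic-regularization machinery, a minimal sketch of the standard Sinkhorn iterations may help orient readers. The defaults for `eps` and `n_iter` are illustrative; the paper's variant additionally handles the diagonal of the persistence plane and replaces the dense kernel multiplication with convolutions on the grid.

```python
import numpy as np

def sinkhorn(a, b, C, eps=1e-2, n_iter=200):
    """Entropy-regularized OT between histograms a and b with cost matrix C.

    Returns the transport plan diag(u) @ K @ diag(v).
    """
    K = np.exp(-C / eps)               # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)              # alternate scaling updates
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]
```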
nips_2017_1687
A PAC-Bayesian Analysis of Randomized Learning with Application to Stochastic Gradient Descent We study the generalization error of randomized learning algorithms-focusing on stochastic gradient descent (SGD)-using a novel combination of PAC-Bayes and algorithmic stability. Importantly, our generalization bounds hold for all posterior distributions on an algorithm's random hyperparameters, including distributions that depend on the training data. This inspires an adaptive sampling algorithm for SGD that optimizes the posterior at runtime. We analyze this algorithm in the context of our generalization bounds and evaluate it on a benchmark dataset. Our experiments demonstrate that adaptive sampling can reduce empirical risk faster than uniform sampling while also improving out-of-sample accuracy.
The paper opens the way to a new use of PAC-Bayesian theory, by combining PAC-Bayes with algorithmic stability to study stochastic optimization algorithms. The obtained probabilistic bounds are then used to inspire adaptive sampling strategies, studied empirically in a deep learning scenario. The paper is well written, and the proofs are non-trivial. It contains several clever ideas, namely the use of algorithmic stability to bound the complexity term inside PAC-Bayesian bounds. It's also fruitful to express the prior and posterior distributions over the sequences of indexes used by a stochastic gradient descent algorithm. To my knowledge, this is very different from any previous PAC-Bayes theorem, and I think it can inspire others in the future. The experiments using a (moderately) deep network, trained by both SGD and AdaGrad, are convincing enough; they show that the proposed adaptive sampling strategy can benefit existing optimization methods simply by selecting the samples at training time. Small comments: - Line 178: In order to sell their result, the authors claim that "high-probability bounds are usually favored" over bounds in expectation. I don't think the community is unanimous about this, and I would like the authors to convince me that I should prefer high-probability bounds. - Section 5 and Algorithm 1: I suggest explicitly expressing the utility function f as a function of a model h. Typos: - Line 110: van -> can - Supplemental, Line 183: he -> the ** UPDATE ** I encourage the authors to carefully consider the reviewers' comments while preparing the final version of their paper. Concerning the claim that high-probability bounds imply expectation bounds, I think it's right, but it deserves to be explained properly to convince the readers.
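One simple instantiation of the adaptive sampling idea discussed here -- drawing training indices from a Gibbs-style posterior over per-example utilities -- might look as follows. This is my own illustration, not the paper's Algorithm 1, and the temperature parameter is an assumption.

```python
import numpy as np

def adaptive_sample(utilities, temperature=1.0, rng=None):
    """Sample one training index with probability proportional to exp(utility / T)."""
    rng = rng or np.random.default_rng()
    u = (utilities - utilities.max()) / temperature  # shift for numerical stability
    p = np.exp(u)
    p /= p.sum()
    return rng.choice(len(utilities), p=p)
```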
nips_2017_2186
The Marginal Value of Adaptive Gradient Methods in Machine Learning Adaptive optimization methods, which perform local optimization with a metric constructed from the history of iterates, are becoming increasingly popular for training deep neural networks. Examples include AdaGrad, RMSProp, and Adam. We show that for simple overparameterized problems, adaptive methods often find drastically different solutions than gradient descent (GD) or stochastic gradient descent (SGD). We construct an illustrative binary classification problem where the data is linearly separable, GD and SGD achieve zero test error, and AdaGrad, Adam, and RMSProp attain test errors arbitrarily close to half. We additionally study the empirical generalization capability of adaptive methods on several stateof-the-art deep learning models. We observe that the solutions found by adaptive methods generalize worse (often significantly worse) than SGD, even when these solutions have better training performance. These results suggest that practitioners should reconsider the use of adaptive methods to train neural networks.
// Summary:
In an optimization and a learning context, the authors compare recently introduced adaptive gradient methods and more traditional gradient descent methods (possibly with momentum). Adaptive methods are based on metrics which evolve during the optimization process. Contrary to what happens for gradient descent, Nesterov's method or the heavy ball method, this may result in estimates which are outside of the linear span of past visited points and estimated gradients. These methods became very popular recently in a deep learning context. The main question addressed by the authors is to compare both categories of methods. First the authors construct an easy classification example for which they prove that adaptive methods behave very badly while non-adaptive methods achieve perfect accuracy. Second the authors report extensive numerical comparisons of the different classes of algorithms showing consistent superiority of non-adaptive methods.
// Main comments:
I do not consider myself knowledgeable enough to question the relevance and correctness of the numerical section. This paper asks a very relevant question about adaptive methods. Giving a definitive answer is a complicated task and the authors provide elements illustrating that adaptive methods do not necessarily have a systematic advantage over non-adaptive ones and that non-adaptive optimization methods could be considered as serious challengers. The content of the paper is simple and really accessible, highlighting the simplicity of the question. Although simple, this remains extremely relevant and goes beyond the optimization literature, questioning the reasons why certain methods reach very high popularity in certain communities. The elements given illustrate the message of the authors very clearly. The main points I would like to raise:
- The artificial example given by the authors is only relevant for illustrative purposes. Comparison of algorithmic performances should be based on the ability to solve classes of problems rather than individual instances. Therefore, it is not really possible to draw any deep conclusion from this. For example, a bad choice of the starting point for non-adaptive methods would make them as bad as adaptive methods on this specific problem. I am not sure that the authors would consider this a good argument against methods such as gradient descent.
- Popularity of adaptive methods should be related to some practical advantage from a numerical point of view. I guess that positive results and comparisons with non-adaptive methods have already been reported in the literature. Do the authors have a comment on this?
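To make the span argument above concrete, here is a minimal sketch of the two update rules being contrasted. Plain (stochastic) gradient descent moves within the linear span of past gradients; AdaGrad's per-coordinate rescaling is what lets its iterates leave that span.

```python
import numpy as np

def sgd_update(w, g, lr):
    """Plain (S)GD: the step is a scalar multiple of the gradient."""
    return w - lr * g

def adagrad_update(w, g, state, lr, eps=1e-8):
    """AdaGrad: each coordinate is rescaled by its accumulated squared gradient."""
    state = state + g * g                              # running sum of squared gradients
    return w - lr * g / (np.sqrt(state) + eps), state
```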
nips_2017_2373
Training recurrent networks to generate hypotheses about how the brain solves hard navigation problems Self-localization during navigation with noisy sensors in an ambiguous world is computationally challenging, yet animals and humans excel at it. In robotics, Simultaneous Location and Mapping (SLAM) algorithms solve this problem through joint sequential probabilistic inference of their own coordinates and those of external spatial landmarks. We generate the first neural solution to the SLAM problem by training recurrent LSTM networks to perform a set of hard 2D navigation tasks that require generalization to completely novel trajectories and environments. Our goal is to make sense of how the diverse phenomenology in the brain's spatial navigation circuits is related to their function. We show that the hidden unit representations exhibit several key properties of hippocampal place cells, including stable tuning curves that remap between environments. Our result is also a proof of concept for end-to-end-learning of a SLAM algorithm using recurrent networks, and a demonstration of why this approach may have some advantages for robotic SLAM.
In this study, the authors trained recurrent neural network models with LSTM units to solve a well-studied problem, namely Simultaneous Location and Mapping (SLAM). The authors trained the network to solve several different versions of the problem, and made some qualitative comparisons to rodent physiology regarding spatial navigation behavior. In general, I think the authors' approach follows an interesting idea, namely using recurrent networks to study the cognitive functions of the brain. This general idea has recently started to attract substantial attention in computational neuroscience. The authors tested several versions of the localization task, which is extensive. And the attempt to compare the model performance to the neurophysiology should be appreciated. They also attempted to study the inner workings of the trained network by looking into the dimensionality of the network, which reveals a relatively low dimensional representation, but higher than the dimension of the physical space. Concerns- The algorithms used are standard in machine learning; thus my understanding is that the main contribution of the paper presumably should come from either solving the SLAM problem better or shedding some light from the neuroscience perspective. However, I am not sure about either of them. 1) For solving the SLAM problem, I don't see a comparison to the state-of-the-art algorithms in SLAM, so it is unclear where the performance of this algorithm stands. After reading through other reviewers' comments and the rebuttal, I share similar concerns with Reviewer 1- it is unclear whether the networks learn to solve SLAM generically, and whether the network can perform well in completely different environments. The set-up the authors assumed seems restrictive, and it is unclear whether it can apply to realistic SLAM problems. I agree with Reviewer 1 that this represents one important limitation of this study I initially overlooked, and it was not satisfactorily addressed in the rebuttal. 2) From the neuroscience perspective, some of the comparisons the authors made in Section 3.2 are potentially interesting. However, my main concern is that the "neuron-like representations" shown in Fig.4A,B are not really neuron-like according to the rodent physiology literature. In particular, the place fields shown in Fig4A and 4B do not resemble the place fields reported in rodent physiology, which typically have a roughly circularly symmetric shape. I'd certainly appreciate these results more if the spatial tuning resembled the known rodent physiology more closely. Can the authors show the place fields of all the neurons in the trained network, something like Fig 1 in Wilson & McNaughton (1993, Science)? That'd be helpful in judging the similarity of the representation in the trained network and the neural representation of location in rodents. Also, in the rodent literature, global remapping and rate remapping are often distinguished, and the causes of these two types of remapping are not well understood at this point. Here the authors only focus on global remapping. One potentially interesting question is whether rate remapping also exists in the trained network. The paper would be stronger if the authors could shed some light on the mechanisms for these two types of remapping from their model. Just finding global remapping across different environments isn't that surprising in my view.
Related to Fig4E, it is known that neurons in hippocampus CA1 exhibit a heavy tail in their firing rates (e.g., discussed in ref [20]). Do the authors see similar distributions of the activity levels in the trained network? 3) The manuscript is likely to be stronger if the authors could emphasize either the performance of the network in solving SLAM or its ability to explain the neurophysiology. Right now, it feels that the paper puts about equal weight on both, but neither part is strong enough. According to the introduction and abstract, it seems that the authors want to emphasize the similarity to the rodent physiology. In that regard, Fig. 2 and Fig. 3 are thus not particularly informative, unless the authors show that the performance of the recurrent network is similar to rodent (or maybe even human, if possible) behavior in some interesting way. To emphasize the relevance to neuroscience, it is useful to have a more extended and more thorough comparison to the hippocampus neurophysiology, though I am not sure if that's possible given the place fields of the trained network units do not resemble the neurophysiology that well, as I discussed earlier. Alternatively, to emphasize the computational power of the RNN model in solving SLAM, it would be desirable to compare the RNN to some state-of-the-art algorithms. I don't think the particle filtering approach the authors implemented represents the state-of-the-art in solving SLAM. But I could be wrong on this, as I am not very familiar with that literature. Finally, a somewhat minor point: the authors compare the capacity of the network to the multi-chart attractor model in the Discussion, but I am not sure that this represents a fair comparison. I'd think one LSTM unit is computationally more powerful than a model neuron in a multi-chart attractor model.
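For concreteness, the place-field analysis requested in this review (Wilson & McNaughton-style rate maps) amounts to occupancy-normalized binning of unit activations by position. A minimal sketch follows; the arena size and bin count are assumed parameters, not anything stated in the paper.

```python
import numpy as np

def place_field(positions, activations, n_bins=20, arena=1.0):
    """Spatial tuning curve of one unit: mean activation per location bin.

    positions   : (T, 2) array of x, y coordinates in [0, arena)
    activations : (T,) array of the unit's activity at each time step
    """
    idx = np.clip((positions / arena * n_bins).astype(int), 0, n_bins - 1)
    occ = np.zeros((n_bins, n_bins))
    act = np.zeros((n_bins, n_bins))
    np.add.at(occ, (idx[:, 0], idx[:, 1]), 1.0)
    np.add.at(act, (idx[:, 0], idx[:, 1]), activations)
    return act / np.maximum(occ, 1.0)   # occupancy-normalized rate map
```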
nips_2017_250
MaskRNN: Instance Level Video Object Segmentation Instance level video object segmentation is an important technique for video editing and compression. To capture the temporal coherence, in this paper, we develop MaskRNN, a recurrent neural net approach which fuses in each frame the output of two deep nets for each object instance -a binary segmentation net providing a mask and a localization net providing a bounding box. Due to the recurrent component and the localization component, our method is able to take advantage of long-term temporal structures of the video data as well as rejecting outliers. We validate the proposed algorithm on three challenging benchmark datasets, the DAVIS-2016 dataset, the DAVIS-2017 dataset, and the Segtrack v2 dataset, achieving state-of-the-art performance on all of them.
The last year has seen large progress in video object segmentation, triggered by the release of the DAVIS dataset. In particular, [Khoreva et al., CVPR'17] and [Caelles et al., CVPR'17] have shown that training a fully-convolutional network on the first frame of the video can greatly improve performance in the semi-supervised scenario, where the goal is to, given ground truth annotation in the first frame of a video, segment the annotated object in the remaining frames. In addition, Khoreva et al. demonstrated that feeding the prediction for the previous frame as an additional input to the network and utilising a parallel branch that operates on the optical flow magnitude further improves the performance. These approaches have indeed set the new state-of-the-art on DAVIS, but remained constrained to operating on a single frame and a single object at a time. In this work the authors propose to address these limitations as well as integrate some promising techniques from recent object detection methods into a video object segmentation framework. Starting from the approach of Khoreva et al., which consists of two fully-convolutional networks pretrained for mask refinement, taking RGB and flow magnitude as input, respectively, as well as an object mask predicted in the previous frame, and fine-tuned on the first frame of the test video, they propose the following extensions: 1. The object mask from the previous frame is warped with optical flow to simplify mask refinement. 2. The model's prediction is "cleaned up" by taking a box proposal, refining it with a specialised branch of the network and suppressing all the segmentation predictions outside of the box. 3. A (presumably convolutional) RNN is attached at the end of the pipeline, which enforces temporal consistency on the predictions. 4. The model is extended to instance-level scenarios, that is, it can segment multiple objects at once, assigning a distinct label to each of them. A complex system like this is, of course, challenging to present in an 8-page paper. The authors address this issue by simply skipping the description of some of the components. In particular, the RNN is not described at all and the details of the bounding box generation are omitted. Moreover, the "instance-level" results are obtained by simply running the model separately for each instance and merging the predictions on the frame level, which can just as well be done for any other method. In general, the paper is well written and the evaluation is complete, studying the effect of each extension in isolation. The study shows that, in the scenario where the networks are fine-tuned on the first frame of the video, the effect of all the proposed extensions is negligible. This is in part to be expected, since, as was shown in recent work, training a network on the first frame of a video serves as a very strong baseline in itself. When no fine-tuning on the test sequence is done though, the proposed extensions do improve the method's performance, but the final result in this setting on DAVIS'16 remains more than 13% below the RGB-only, no-test-fine-tuning model of Khoreva et al. Though their model is trained on the larger COCO dataset, this is hardly enough to explain such a large performance gap with a much simpler model. In the state-of-the-art comparison the full method outperforms the competitors on most of the measures on 3 datasets (DAVIS'16, DAVIS'17 and SegTrack v2).
That said, the comparison with [Khoreva et al., CVPR'17] on DAVIS'17 is missing, and the performance gap is not always significant relative to the complexity of the proposed solution (0.9% on DAVIS'16), especially given the recent extension of Khoreva et al. (see reference [25] in the paper), which shows significantly better results with much simpler tools. Overall, the authors have done a good job of combining insights from several recent publications into a single framework, but failed to achieve convincing results.
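Extension 1 above (warping the previous frame's mask with optical flow) is easy to state precisely. Here is a minimal sketch using nearest-neighbour sampling; bilinear sampling would be the more common choice, and the backward-flow convention is my assumption rather than something the paper specifies.

```python
import numpy as np

def warp_mask(prev_mask, flow):
    """Backward-warp the previous frame's mask with optical flow.

    prev_mask : (H, W) float array in [0, 1]
    flow      : (H, W, 2) flow from frame t back to frame t-1
    """
    H, W = prev_mask.shape
    ys, xs = np.mgrid[0:H, 0:W]
    src_x = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, W - 1)
    src_y = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, H - 1)
    return prev_mask[src_y, src_x]
```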
nips_2017_2115
Conservative Contextual Linear Bandits Safety is a desirable property that can immensely increase the applicability of learning algorithms in real-world decision-making problems. It is much easier for a company to deploy an algorithm that is safe, i.e., guaranteed to perform at least as well as a baseline. In this paper, we study the issue of safety in contextual linear bandits that have application in many different fields including personalized recommendation. We formulate a notion of safety for this class of algorithms. We develop a safe contextual linear bandit algorithm, called conservative linear UCB (CLUCB), that simultaneously minimizes its regret and satisfies the safety constraint, i.e., maintains its performance above a fixed percentage of the performance of a baseline strategy, uniformly over time. We prove an upper-bound on the regret of CLUCB and show that it can be decomposed into two terms: 1) an upper-bound for the regret of the standard linear UCB algorithm that grows with the time horizon and 2) a constant term that accounts for the loss of being conservative in order to satisfy the safety constraint. We empirically show that our algorithm is safe and validate our theoretical analysis.
POST-REBUTTAL: The authors have answered my concerns and will clarify the point of confusion. I'm changing from a marginal accept to an accept.

OLD REVIEW:
Summary of the paper: This paper proposes a "safe" algorithm for contextual linear bandits. This definition of safety assumes the existence of a current "baseline policy" for selecting actions. The algorithm is "safe" in that it guarantees that it will only select an action that differs from the action proposed by the baseline policy if the new action produces a larger expected reward than the action proposed by the baseline policy. Due to the random nature of rewards, this guarantee is with high probability (probability at least 1-delta).

Summary of review: The paper is well written. It is an extremely easy read - I thank the authors for submitting a polished paper. The proposed problem setting and approach are novel to the best of my knowledge. The problem is well motivated and interesting. Sufficient theoretical and empirical justifications are provided to convince the reader of the viability of the proposed approach. However, I have some questions. I recommend at least weak acceptance, but would consider a stronger acceptance if I am misunderstanding some of these points.

Questions:
1. Definition 1 defines a desirable performance constraint. The high probability nature of this constraint should be clarified. Notice that line 135 doesn't mention that (3) must hold with high probability. This should be stated.
2. More importantly, the statement of *how* (3) must hold is critical, since right now it is ambiguous. On my first read it sounded like (3) must hold with probability 1-\delta. However this is *not* achieved by the algorithm. If I am understanding correctly (please correct me if I am wrong), the algorithm ensures that *at each time step* (3) holds with high probability. That is:

\forall t \in \{1,\dotsc,T\}: \Pr \left( \sum_{i=1}^t r_{b_i}^i - \sum_{i=1}^t r_{a_i}^i \leq \alpha \sum_{i=1}^t r_{b_i}^i \right) \geq 1-\delta

NOT

\Pr \left( \forall t \in \{1,\dotsc,T\}: \sum_{i=1}^t r_{b_i}^i - \sum_{i=1}^t r_{a_i}^i \leq \alpha \sum_{i=1}^t r_{b_i}^i \right) \geq 1-\delta

The current writing suggests the latter, which (I think) is not satisfied by the algorithm. In your response could you please clarify which of these you are claiming that your algorithm satisfies?
3. If your algorithm does in fact satisfy the former (the per-time step guarantee), then the motivation for the paper is undermined (this could be addressed by being upfront about the limitations of this approach in the introduction, without changing any content). Consider the actual guarantee that you provide in the domain used in the empirical study. You run the algorithm for 40,000 time steps with delta = 0.001. Your algorithm is meant to guarantee that "with high probability" it performs at least as well as the baseline. However, you only guarantee that the probability of playing a worse action will be at most 0.001 *at each time step*. So, you are guaranteeing that the probability of playing a worse action at some point will be at most 1 - .999^40000, which is approximately 1. That is, you are bounding the probability of an undesirable event to be at most 1, which is not very meaningful. This should be discussed. For now, I would appreciate your thoughts on why having a per-step high probability guarantee is important for systems where there are large numbers of time steps.
If a single failure is damning, then we should require a high probability guarantee that holds simultaneously for all time steps. If a single failure is not damning, but rather amortized cost over thousands of time steps is what matters, then why are we trying to get per-time step high probability guarantees?
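The review's back-of-the-envelope calculation is easy to reproduce. Like the review's own arithmetic, the snippet implicitly treats the per-step failure events as independent; with a union bound one would instead say the failure probability is at most min(1, T * delta), which is equally uninformative here.

```python
# Per-step failure probability delta = 0.001 over T = 40,000 steps.
delta, T = 1e-3, 40_000
p_any_failure = 1 - (1 - delta) ** T   # 1 - .999**40000, as in the review
print(p_any_failure)                   # ~1.0: the guarantee is vacuous at this horizon
```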
nips_2017_3021
Learning Disentangled Representations with Semi-Supervised Deep Generative Models Variational autoencoders (VAEs) learn representations of data by jointly training a probabilistic encoder and decoder network. Typically these models encode all features of the data into a single variable. Here we are interested in learning disentangled representations that encode distinct aspects of the data into separate variables. We propose to learn such representations using model architectures that generalise from standard VAEs, employing a general graphical model structure in the encoder and decoder. This allows us to train partially-specified models that make relatively strong assumptions about a subset of interpretable variables and rely on the flexibility of neural networks to learn representations for the remaining variables. We further define a general objective for semi-supervised learning in this model class, which can be approximated using an importance sampling procedure. We evaluate our framework's ability to learn disentangled representations, both by qualitative exploration of its generative capacity, and quantitative evaluation of its discriminative ability on a variety of models and datasets.
This paper investigates the use of models mixing ideas from 'classical' graphical models (directed graphical models with nodes with known meaning, and with known dependency structures) and deep generative models (learned inference network, learned factors implemented as neural networks). A particular focus is the use of semi-supervised data and structure to induce 'disentangling' of features. They test on a variety of datasets and show that disentangled features are interpretable and can be used for downstream tasks. In contrast with recent work on disentangling ("On the Emergence of Invariance and Disentangling in Deep Representations", "Early visual concept learning with unsupervised deep learning", and others), here the disentangling is not really an emergent property of the model's inductive bias, but is more related to the (semi)-supervision provided on semantic variables of the model. My opinion on the paper is overall positive - the derivations are correct, the model and ideas contribute to the intersection of graphical models / probabilistic modeling and deep learning, and well-made/detailed experiments support the authors' claims. Negative: - I don't think the intersection of ideas in structured graphical models and deep generative models is particularly novel at this point (see many of the paper's own references). In particular I am not convinced by the claims of increased generality. Clearly there is a variety of deep generative models that fall in the PGM-DL spectrum described by the authors, and performing inference with a flexible set of observed variables mostly involves using different inference networks (with potential parameter reuse for efficiency). Regarding continuous domain latents: I don't see what would prevent the semi-supervised model from Kingma et al. from handling continuous latent variables: wouldn't it simply involve changing q(y|x) to a continuous distribution and reparametrizing (just as they do for the never-supervised z)? Generally, methodologically the paper appears somewhat incremental (including the automated construction of the stochastic computation graph, which is similar to what something like Edward (Tran et al.) does). - Some of the numerical results are slightly disappointing (the semi-supervised MNIST numbers seem worse than Kingma et al.; the multi-MNIST reconstructions are somewhat blurry; what do samples look like?) Positive: - The paper is well written and tells a compelling story. - The auxiliary bound introduced in (4)-(5) is novel and allows the posterior in the semi-supervised case to force learning over the classification network (in the corresponding equation (6) of Kingma et al. the classification network does not appear). Do the authors know how the variance of (4) compares to that of the analogous expression from Kingma et al.? - The rest of the experiments (figures 2, 5) paint a clear picture of why having semantics inform the structure of the generative model can lead to interesting ways to probe the behavior of our generative models.
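The reparameterization step alluded to in this review (for handling a continuous, partially observed latent y) is a one-liner. A minimal sketch for a Gaussian q(y|x), with the function name being my own:

```python
import torch

def reparam_gaussian(mu, log_var):
    """Reparameterize y ~ N(mu, sigma^2) as y = mu + sigma * eps, eps ~ N(0, I)."""
    eps = torch.randn_like(mu)
    return mu + torch.exp(0.5 * log_var) * eps
```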
nips_2017_3020
Fast-Slow Recurrent Neural Networks Processing sequential data of variable length is a major challenge in a wide range of applications, such as speech recognition, language modeling, generative image modeling and machine translation. Here, we address this challenge by proposing a novel recurrent neural network (RNN) architecture, the Fast-Slow RNN (FS-RNN). The FS-RNN incorporates the strengths of both multiscale RNNs and deep transition RNNs as it processes sequential data on different timescales and learns complex transition functions from one time step to the next. We evaluate the FS-RNN on two character level language modeling data sets, Penn Treebank and Hutter Prize Wikipedia, where we improve state of the art results to 1.19 and 1.25 bits-per-character (BPC), respectively. In addition, an ensemble of two FS-RNNs achieves 1.20 BPC on Hutter Prize Wikipedia outperforming the best known compression algorithm with respect to the BPC measure. We also present an empirical investigation of the learning and network dynamics of the FS-RNN, which explains the improved performance compared to other RNN architectures. Our approach is general as any kind of RNN cell is a possible building block for the FS-RNN architecture, and thus can be flexibly applied to different tasks.
The paper proposed a new RNN structure called the Fast-Slow RNN and showed improved performance on a few language modeling data sets. Strength: 1. The algorithm combines the advantages of a deeper transition matrix (fast RNN) and a shorter gradient path (slow RNN). 2. The algorithm is straightforward and can be applied to any RNN cell. Weakness: 1. I find the first two sections of the paper hard to read. The authors stack a number of previous approaches but fail to explain each method clearly. Here are some examples: (1) In line 43, I do not understand why the stacked LSTM in Fig 2(a) is "trivial" to convert to the sequential LSTM in Fig 2(b). Where are the h_{t-1}^{1..5} in Fig 2(b)? What is h_{t-1} in Fig 2(b)? (2) In line 96, I do not understand the sentence "our lower hierarchical layers zoom in time" and the sentence following that. 2. It seems to me that the multi-scale statement is a bit misleading, because the slow and fast RNNs do not operate on different physical time scales, but rather on different logical time scales once the stack is sequentialized in the graph. Therefore, the only benefit here seems to be the reduction of the gradient path length by the slow RNN. 3. To reduce the gradient path in a stacked RNN, a simpler approach is to use residual units or to simply fully connect the stacked cells. However, there is no comparison with or mention of this in the paper. 4. The experimental results do not contain standard deviations, and therefore it is hard to judge the significance of the results.
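As I read the architecture description, one FS-RNN time step can be sketched as below. Cell interfaces are simplified (a real LSTM cell would also carry a cell state), so this illustrates the wiring the review discusses, not the paper's implementation.

```python
def fs_rnn_step(x_t, fast_cells, slow_cell, h_fast, h_slow):
    """One FS-RNN time step with k fast cells F1..Fk and one slow cell S.

    h_fast is the single fast hidden state, carried from Fk at t-1 into F1
    at t (a deep transition within each step); each cell is a callable
    (state, input) -> new_state.
    """
    h_fast = fast_cells[0](h_fast, x_t)       # F1 consumes the input
    h_slow = slow_cell(h_slow, h_fast)        # S updates once per time step
    h_fast = fast_cells[1](h_fast, h_slow)    # S's output re-enters the fast chain at F2
    for cell in fast_cells[2:]:
        h_fast = cell(h_fast, None)           # F3..Fk: deep transition, no external input
    return h_fast, h_slow
```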
nips_2017_1200
Generating steganographic images via adversarial training Adversarial training has proved to be competitive against supervised learning methods on computer vision tasks. However, studies have mainly been confined to generative tasks such as image synthesis. In this paper, we apply adversarial training techniques to the discriminative task of learning a steganographic algorithm. Steganography is a collection of techniques for concealing the existence of information by embedding it within a non-secret medium, such as cover texts or images. We show that adversarial training can produce robust steganographic techniques: our unsupervised training scheme produces a steganographic algorithm that competes with state-of-the-art steganographic techniques. We also show that supervised training of our adversarial model produces a robust steganalyzer, which performs the discriminative task of deciding if an image contains secret information. We define a game between three parties, Alice, Bob and Eve, in order to simultaneously train both a steganographic algorithm and a steganalyzer. Alice and Bob attempt to communicate a secret message contained within an image, while Eve eavesdrops on their conversation and attempts to determine if secret information is embedded within the image. We represent Alice, Bob and Eve by neural networks, and validate our scheme on two independent image datasets, showing our novel method of studying steganographic problems is surprisingly competitive against established steganographic techniques.
The authors studied how to use adversarial training to learn the encoder, the steganalyzer and the decoder at the same time using unsupervised learning. Specifically, the authors designed an adversarial game of three parties. The encoder generates images based on the cover and the message. The generated image is then passed to both the decoder and the steganalyzer. The goal of the encoder and decoder is to correctly encode and decode the message, and the goal of the steganalyzer is to determine whether the image carries a hidden message. Thus it is a minimax game. Setting the algorithmic part of the paper aside, my major concern is the correctness of the methodology. As we all know, neural networks have a great ability to fit or classify training data. You use two almost identical neural network architectures as the decoder and steganalyzer. This raises a question, as the decoder and steganalyzer use the same information in the training process. If the steganalyzer can correctly predict the label of the image, then the task fails. But if the steganalyzer cannot correctly predict the label, I don't think the decoder can recover the message well. I am confused about this and hope the authors can answer the question in the rebuttal phase. From the experiments we can see that the steganalyzer's loss is very large according to Figures 3 and 4, which means the steganalyzer fails to classify almost all examples. However, the authors also showed that the steganalyzer performs "pretty well" if the steganographic algorithm is "fixed". I am also confused about such self-contradictory explanations. Do you mean that during training, your steganalyzer is not well learnt? And your encoder and decoder are learnt using a bad steganalyzer? Here are also some minor comments: 1. Line 151: the loss function L_E has no dependency on \theta_A given C' 2. Line 152: parameter C should be included in the loss function L_E 3. A large number of details are missing from the experiments. What are the values of the hyperparameters? In the paper, the authors claimed it converges after 150 training steps. As the batch size is 32, you cannot even make a full pass over the training examples during training. How can you explain this?
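To make the game structure under discussion explicit, here is a loss sketch in PyTorch. The equal weighting of Alice's three terms and the label convention for Eve (stego = 1) are my assumptions, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def three_party_losses(cover, msg, alice, bob, eve):
    """alice: (cover, msg) -> stego image; bob: stego -> message estimate;
    eve: image -> logit for "contains a hidden message"."""
    stego = alice(cover, msg)
    bob_loss = F.mse_loss(bob(stego), msg)                 # Bob: decode the message
    logits_real = eve(cover)
    logits_stego = eve(stego.detach())                     # Eve trains without updating Alice
    eve_loss = F.binary_cross_entropy_with_logits(
        torch.cat([logits_real, logits_stego]),
        torch.cat([torch.zeros_like(logits_real),
                   torch.ones_like(logits_stego)]))
    fool_eve = F.binary_cross_entropy_with_logits(         # Alice: push Eve toward "no message"
        eve(stego), torch.zeros_like(logits_stego))
    alice_loss = F.mse_loss(stego, cover) + bob_loss + fool_eve
    return alice_loss, bob_loss, eve_loss
```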
nips_2017_2501
Sample and Computationally Efficient Learning Algorithms under S-Concave Distributions We provide new results for noise-tolerant and sample-efficient learning algorithms under s-concave distributions. The new class of s-concave distributions is a broad and natural generalization of log-concavity, and includes many important additional distributions, e.g., the Pareto distribution and t-distribution. This class has been studied in the context of efficient sampling, integration, and optimization, but much remains unknown about the geometry of this class of distributions and their applications in the context of learning. The challenge is that unlike the commonly used distributions in learning (uniform or more generally log-concave distributions), this broader class is not closed under the marginalization operator and many such distributions are fat-tailed. In this work, we introduce new convex geometry tools to study the properties of s-concave distributions and use these properties to provide bounds on quantities of interest to learning including the probability of disagreement between two halfspaces, disagreement outside a band, and the disagreement coefficient. We use these results to significantly generalize prior results for margin-based active learning, disagreement-based active learning, and passive learning of intersections of halfspaces. Our analysis of geometric properties of s-concave distributions might be of independent interest to optimization more broadly.
This review is adapted from my review from COLT 2017 - my feedback on this paper has not changed much since then. This paper studies a new family of distributions, s-concave distributions, which appears in the works of (Brascamp and Lieb, 1976; Bobkov, 2007; Chandrasekaran, Deshpande and Vempala, 2009). The main result is a series of upper and lower bounds regarding their probability distribution functions and their measure over certain regions. These inequalities can be readily applied to (active) learning of linear separators and learning the intersection of two halfspaces. Overall this is an interesting paper, extending the family of distributions under which the problem of learning linear separators can be efficiently solved. This may spur future research on establishing new distribution-dependent conditions for (efficient) learnability. Technical quality: on one hand, the results are impressive, since the family of admissible distributions (in the sense of Awasthi, Balcan and Long, 2014 and Klivans, Long and Tang, 2009) is broadened; on the other hand, the analysis only works when -1/(2n+3) <= s <= 0. In high-dimensional settings, the range of s is fairly small. Novelty: the result of this paper follows the reasoning in (Lovasz and Vempala, 2007), but this is definitely a non-trivial extension. Potential impact: this work may spur future research on distributional conditions for efficient learnability.
nips_2017_2767
Can Decentralized Algorithms Outperform Centralized Algorithms? A Case Study for Decentralized Parallel Stochastic Gradient Descent Most distributed machine learning systems nowadays, including TensorFlow and CNTK, are built in a centralized fashion. One bottleneck of centralized algorithms lies in the high communication cost on the central node. Motivated by this, we ask, can decentralized algorithms be faster than their centralized counterparts? Although decentralized PSGD (D-PSGD) algorithms have been studied by the control community, existing analysis and theory do not show any advantage over centralized PSGD (C-PSGD) algorithms, simply assuming the application scenario where only the decentralized network is available. In this paper, we study a D-PSGD algorithm and provide the first theoretical analysis that indicates a regime in which decentralized algorithms might outperform centralized algorithms for distributed stochastic gradient descent. This is because D-PSGD has comparable total computational complexities to C-PSGD but requires much less communication cost on the busiest node. We further conduct an empirical study to validate our theoretical analysis across multiple frameworks (CNTK and Torch), different network configurations, and computation platforms up to 112 GPUs. On network configurations with low bandwidth or high latency, D-PSGD can be up to one order of magnitude faster than its well-optimized centralized counterparts.
In this paper, the authors present an algorithm for decentralized parallel stochastic gradient descent (PSGD). In contrast to centralized PSGD where worker nodes compute local gradients and the weights of a model are updated on a central node, decentralized PSGD seeks to perform training without a central node, in regimes where each node in a network can communicate with only a handful of adjacent nodes. While this network architecture has typically been viewed as a limitation, the authors present a theoretical analysis of their algorithm that suggests D-PSGD can achieve a linear speedup comparable to C-PSGD, but with significantly lower communication overhead. As a result, in certain low bandwidth or high latency network scenarios, D-PSGD can outperform C-PSGD. The authors validate this claim empirically. Overall, I believe the technical contributions of this paper could be very valuable. The authors claim to be the first paper providing a theoretical analysis demonstrating that D-PSGD can perform competitively or even outperform C-PSGD. I am not sufficiently familiar with the relevant literature to affirm this claim, but if true then I believe that the analysis provided by the authors is both novel and intriguing, and could have nontrivial practical impact on those training neural networks in a distributed fashion, particularly over high latency networks. The authors additionally provide a convincing experimental evaluation of D-PSGD, demonstrating the competitiveness of D-PSGD using modern CNN architectures across several network architectures that vary in scale, bandwidth and latency. I found the paper fairly easy to read and digest, even as someone not intimately familiar with the parallel SGD literature and theory. The paper does contain a number of small typographical errors that should be corrected with another editing pass; a small list of examples that caught my eye is compiled below. -- Minor comments -- Line 20: "pay" -> "paying" Line 23: "its" -> "their", "counterpart" -> "counterparts" Line 33: "parameter server" -> (e.g) "the parameter server topology" Algorithm 1: the for loop runs from 0 to K-1, but the output averages over x_{K}. Line 191: "on" -> "in" Line 220: "of NLP" -> "of the NLP experiment"
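For readers who want to see how light the decentralized update is, the whole of a D-PSGD-style iteration is: average your model with your neighbours' via a doubly stochastic mixing matrix, then take a local stochastic gradient step. A toy NumPy sketch on a ring of 8 nodes follows; the quadratic local losses and all constants are invented for illustration:

    import numpy as np

    rng = np.random.default_rng(0)
    n_nodes, d, lr = 8, 5, 0.05
    W = np.zeros((n_nodes, n_nodes))          # ring topology: self plus two neighbours
    for i in range(n_nodes):
        for j in (i - 1, i, i + 1):
            W[i, j % n_nodes] = 1.0 / 3.0     # symmetric and doubly stochastic

    targets = rng.normal(size=(n_nodes, d))   # node i holds f_i(x) = 0.5 * ||x - t_i||^2
    x = np.zeros((n_nodes, d))                # one model copy per node
    for _ in range(400):
        grads = x - targets + 0.1 * rng.normal(size=(n_nodes, d))  # local stochastic gradients
        x = W @ x - lr * grads                # gossip with neighbours, then local SGD step

    # the averaged iterate tracks the minimiser of the average objective;
    # individual copies agree with it up to O(lr)
    print(np.linalg.norm(x.mean(axis=0) - targets.mean(axis=0)))

No node ever communicates beyond its two ring neighbours, which is where the claimed saving on the busiest node's communication comes from.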
nips_2017_2233
Efficient Use of Limited-Memory Accelerators for Linear Learning on Heterogeneous Systems We propose a generic algorithmic building block to accelerate training of machine learning models on heterogeneous compute systems. Our scheme allows us to efficiently employ compute accelerators such as GPUs and FPGAs for the training of large-scale machine learning models, when the training data exceeds their memory capacity. It also provides adaptivity to any system's memory hierarchy in terms of size and processing speed. Our technique is built upon novel theoretical insights regarding primal-dual coordinate methods, and uses duality gap information to dynamically decide which part of the data should be made available for fast processing. To illustrate the power of our approach we demonstrate its performance for training of generalized linear models on a large-scale dataset exceeding the memory size of a modern GPU, showing an order-of-magnitude speedup over existing approaches.
The paper presents a specific way of exploiting a system with two heterogeneous compute nodes with complementary capabilities. The idea is to perform a block coordinate algorithm in which the subproblem of optimizing over the block variables is solved at the high-compute-low-memory node and the subproblem of choosing the block is solved at the low-compute-high-memory node. Here are some comments: 1. Overall, it is not very clear why the above is the only worthwhile strategy to try. At least an empirical comparison with popular distributed algorithms (that ignore the resource capabilities) is necessary in order to assess the merit of this proposal. 2. More importantly, the proposed split-up of the workload is inherently not parallelizable. Hence a time-delay strategy is employed. Though it is shown empirically that this works, it is not clear why this is the way to go. 3. In the block coordinate descent literature, scoring functions (similar to (6)) are popular. An empirical comparison with strategies like those used in SMO (libsvm etc.; first-order and second-order criteria for sub-optimality) seems necessary. 4. The results of Theorem 1 and Theorem 2 are interesting (I have not been able to fully check the correctness of the proofs). 5. Regarding eqn (2): the duality gap is lower-bounded by the expression on the LHS of (2); it need not be equal to it. I have read the rebuttal and would like to stay with the score.
nips_2017_1755
Approximation Bounds for Hierarchical Clustering: Average Linkage, Bisecting K-means, and Local Search Hierarchical clustering is a data analysis method that has been used for decades. Despite its widespread use, the method has an underdeveloped analytical foundation. Having a well understood foundation would both support the currently used methods and help guide future improvements. The goal of this paper is to give an analytic framework to better understand observations seen in practice. This paper considers the dual of a problem framework for hierarchical clustering introduced by Dasgupta [Das16]. The main result is that one of the most popular algorithms used in practice, average linkage agglomerative clustering, has a small constant approximation ratio for this objective. Furthermore, this paper establishes that using bisecting k-means divisive clustering has a very poor lower bound on its approximation ratio for the same objective. However, we show that there are divisive algorithms that perform well with respect to this objective by giving two constant approximation algorithms. This paper is some of the first work to establish guarantees on widely used hierarchical algorithms for a natural objective function. This objective and analysis give insight into what these popular algorithms are optimizing and when they will perform well.
The paper extends the work of Dasgupta towards defining a theoretical framework for evaluating hierarchical clustering. The paper defines an objective function that is equivalent to that of Dasgupta in the sense that an optimal solution for the cost function defined by Dasgupta will also be optimal for the one defined in this paper and vice versa. The behaviour of approximate solutions will differ though, because the cost function is of the form D(G) - C(T), where D is a constant (depending on the input G), C(T) is the cost (defined by Dasgupta), and T is the hierarchical solution. The authors show a constant approximation guarantee w.r.t. their objective function for the popular average linkage algorithm for hierarchical clustering. They complement this approximation guarantee by showing that it is almost tight. They also show that the divisive (bisecting k-means) algorithm for hierarchical clustering gives poor results w.r.t. their objective function. They also claim to show an approximation guarantee for a local search based algorithm. Significance: The results discussed in the paper are non-trivial extensions to the work of Dasgupta. Even though the model is similar to the previous work, it is interesting in the sense that the paper is able to give theoretical guarantees for the average linkage algorithm, which is a popular technique for hierarchical clustering. Originality: The paper uses a basic set of techniques for analysis and is simple to read. The analysis is original as per my knowledge of the related literature. Clarity: The paper uses a simple set of techniques and is simple to read. The authors do a fair job in explaining the results and putting things in context with respect to previous work. The explanations are to the point and simple. There are a few places where more clarity would be appreciated. These are included in the specific comments below. Quality: There is a technical error in line 261. It is not clear how the very first inequality is obtained. Furthermore, the last inequality is not justified. The authors should note that as per the analysis, in case (iii) you assume that |B|<=|A| and not the other way around. Given this, I do not know how to fix these errors in a simple way. Specific comments: 1. Please use proper references: [C] could refer to either [CC17] or [CKMM17]. 2. Line 69: "Let leaves …". I think you mean that leaves denotes the set of leaves and not their number. 3. Line 91: Please correct the equation. The factor of n is missing from the right hand side. 4. Line 219: Figure 1 is missing from the main submission. Please fix this, as the proof is incomplete without the figure, which is given in the appendix.
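Since average linkage is the algorithm the main result is about, it helps to recall how simple it is: repeatedly merge the two clusters whose average inter-cluster similarity is highest. A naive cubic-time NumPy sketch on an invented similarity matrix:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 6
    S = rng.uniform(size=(n, n))
    S = (S + S.T) / 2                        # symmetric pairwise similarities w(i, j)

    clusters = [[i] for i in range(n)]
    while len(clusters) > 1:
        best, pair = -np.inf, None
        for a in range(len(clusters)):       # O(n^3) overall; fine for illustration
            for b in range(a + 1, len(clusters)):
                avg = S[np.ix_(clusters[a], clusters[b])].mean()
                if avg > best:
                    best, pair = avg, (a, b)
        merged = clusters[pair[0]] + clusters[pair[1]]
        clusters = [c for k, c in enumerate(clusters) if k not in pair] + [merged]
        print('merged', merged, 'with average similarity', round(best, 3))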
nips_2017_3435
Cross-Spectral Factor Analysis In neuropsychiatric disorders such as schizophrenia or depression, there is often a disruption in the way that regions of the brain synchronize with one another. To facilitate understanding of network-level synchronization between brain regions, we introduce a novel model of multisite low-frequency neural recordings, such as local field potentials (LFPs) and electroencephalograms (EEGs). The proposed model, named Cross-Spectral Factor Analysis (CSFA), breaks the observed signal into factors defined by unique spatio-spectral properties. These properties are granted to the factors via a Gaussian process formulation in a multiple kernel learning framework. In this way, the LFP signals can be mapped to a lower dimensional space in a way that retains information of relevance to neuroscientists. Critically, the factors are interpretable. The proposed approach empirically allows similar performance in classifying mouse genotype and behavioral context when compared to commonly used approaches that lack the interpretability of CSFA. We also introduce a semi-supervised approach, termed discriminative CSFA (dCSFA). CSFA and dCSFA provide useful tools for understanding neural dynamics, particularly by aiding in the design of causal follow-up experiments.
The authors propose a factor analysis method called CSFA for modelling LFP data, which is a generative model with the specific assumption that factors, called Electomes, are sampled from Gaussian processes with a cross-spectral mixture kernel. The generative model is a straightforward use of CSM, and the estimation apparently also uses a known method (resilient backprop; I had never heard of it before). I do like the dCSFA formulation. The proposed method focuses on spectral power and phase relationships across regions, and is claimed to bring both better interpretability and higher predictive power. The authors also extend CSFA to be discriminative with respect to side information such as genetic and behavioral data by incorporating a logistic loss (or the like). The authors applied the proposed method to synthetic and real mouse data, and compared it with PCA. == issues == === Page limit === This paper is 10 pages (references start at page 8 and go to page 10). === Section 2.1 === The authors claim that reduced dimensionality increases the power of hypothesis testing. In general, this is not true. Any dimensionality-reduction decision is made implicitly upon a hypothesis of dimensionality. The hypothesis testing performed after reduction is conditional on it, making the problem more subtle than presented. === Section 2.5 === lambda in Eq. (7) is chosen based on cross-validation of the predictive accuracy. Why was it not chosen by cross-validation of the objective function itself? The authors should report its value alongside the accuracy in the results section. Would it be so extremely large as to make the first likelihood term ineffective? === FA or PCA === Eq. 3 shows that the noise is constant across the diagonal. The main difference between FA and PCA is allowing each dimension of the signal to have a different amount of noise. This noise formulation is closer to PCA, isn't it? But then I was confused reading sec 3.2. What's the covariance matrix of interest in sec 3.2? Over time? Space? Factors? Frequencies? === "Causal" === The word 'causal' frequently appears in the first 2 pages of the text, setting high expectations. However, the method itself does not seem to have any explanatory power to discover causal structure, nor any inherent causal hypothesis. Or is the reader supposed to get it from the phase plots? === comparison with baseline PCA in frequency domain === How was the window of data converted to the "Fourier domain"? Was it simply a DFT, or was a more sophisticated spectrum estimator used, e.g., multi-taper methods? I ask this because the raw FFT/DFT is a bad estimator of the spectrum (very high variance regardless of the length of the time window). Neuroscientists often use packages that support such estimation (e.g., the MATLAB package chronux). It would not be fair if those were not taken into account. === averaging over 6 mice === Was the CSFA/dCSFA for Fig 3 computed to produce a common factor across the 6 mice? What justifies such an approach? == minor == - L 230, 231: absolute KL div numbers are not useful. At least provide the standard error of the estimator. (KL div is often very hard to estimate!!) - Subsections 2.1 & 2.3.1 read like filler text (not much content). - L 124: $s\tilde B_q$? what's $s$ here? - L 228: to to - overall writing can be improved
nips_2017_1293
Training Deep Networks without Learning Rates Through Coin Betting Deep learning methods achieve state-of-the-art performance in many application scenarios. Yet, these methods require a significant amount of hyperparameter tuning in order to achieve the best results. In particular, tuning the learning rates in the stochastic optimization process is still one of the main bottlenecks. In this paper, we propose a new stochastic gradient descent procedure for deep networks that does not require any learning rate setting. Contrary to previous methods, we do not adapt the learning rates, nor do we make use of the assumed curvature of the objective function. Instead, we reduce the optimization process to a game of betting on a coin and propose a learning-rate-free optimal algorithm for this scenario. Theoretical convergence is proven for convex and quasi-convex functions and empirical evidence shows the advantage of our algorithm over popular stochastic gradient algorithms.
This paper presents an optimization strategy using coin betting, and a variant which works well for training neural networks. The optimizer is tested and compared to various stochastic optimization routines on some simple problems. I like this paper because it's an unusual optimization method which surprisingly seems to work reasonably well. It also has fewer tunable parameters than existing stochastic optimizers, which is nice. However, I'm giving it a marginal accept for the following reasons: - The experiments aren't very convincing. They're training some very simple models on some very small datasets, and results in this regime do not necessarily translate over to "real" problems/models. Further, success on these problems mostly revolves around preventing overfitting. I'd actually suggest additional experiments in _both_ directions - additional simple unit test-like experiments (like the |x - 10| experiment, but more involved - see e.g. the "unit tests for stochastic optimization" paper), and experiments on realistically large models on realistically large datasets. - I don't 100% buy the "without learning rates" premise, for the following reason: If you divide the right-most term of line 10 of Algorithm 2 by L_{t, i}, then the denominator simplifies to max(G_{t, i}/L_{t, i} + 1, \alpha). So, provided that \alpha is bigger than G_{t, i}/L_{t, i}, the updates effectively are getting scaled by \alpha. In general I would expect G_{t, i}/L_{t, i} to be smaller than \alpha = 100 particularly at the beginning of training, and as a result I'd expect the setting of \alpha to have a considerable effect on training. Of course, if \alpha = 100 works well in any experiment of interest, we can ignore it, but arguably the default learning rate setting of Adam works reasonably well in most problems of interest too - but of course, we wouldn't call it a "without learning rates" method. - Overall, this is an unusual and unconventional idea (which is great!), but it is frankly not presented in a clear way. I do not get a lot of intuition from the paper about _why_ this works, how it is similar/different from SGD, how the different quantities being updated (G, L, reward, \theta, w, etc) evolve over the course of a typical training run, etc. despite spending considerable time with the paper. I would suggest adding an additional section to the appendix, or something, which gives a lot more intuition about how and why this actually works. More specific comments: - Very high level: "backprop without learning rates" is a strange phrase. Backprop has no learning rate. It's an efficient algorithm for finding the gradients of the output with respect to all parameters in your model. SGD has a learning rate. SGD is often used for training neural networks. In neural networks backprop is often used for finding the gradients necessary for SGD. But you don't need a learning rate to backprop; they are disjoint concepts. - In your introduction, it might be worth mentioning results that show that the learning rate is one of the most "important" hyperparameters, in the sense that if it's set wrong the model may not work at all, so its correct setting can have a strong effect on the outcome. - The relation of the second and third inequality in the proof after line 112 took me about 15 minutes to figure out/verify. It would be helpful if you broke this into a few steps. - Algorithm 1 is missing some input; e.g. it does not take "T" or "F" as input.
- Calling \beta_{t, i} the "fraction to bet" is odd because it can be negative, e.g. if the gradients are consistently negative then \theta_{t, i} will be negative and 2\sigma(...) - 1 will be close to -1. So you are allowed to bet a negative amount? Further, it seems that in general 2\theta_{t, i} can be substantially smaller than G_{t, i} + L_i - I think you have redefined w_t. When defining coin betting you use w_t to refer to the bet at round t. In COCOB w_t are the model parameters, and the bet at round t is (I think) \beta_{t, i} (L_i + Reward_{t, i}). - I think most readers will be most familiar with (S)GD as an optimization workhorse. Further presentation of COCOB vs. variants of SGD would be helpful, e.g. drawing specific analogies between each step of COCOB and each step of some SGD-based optimizer. Or, perhaps showing the behavior/updates of COCOB vs. SGD on additional simple examples. - "Rather, it is big when we are far from the optimum and small when close to it." Not necessarily - for example, in your |x - 10| example, at the beginning you are quite far from the optimum but \sum_i g_i^2 is quite small. Really the only thing you can say about this term is that it grows with iteration t, I think. - In Algorithm 2, the loop is just "repeat", so there is no variable T defined anywhere. So you can't "Return w_T". I think you mean "return w_t". You also never increment, or initialize, t. - Setting L_{t, i} to the running max of the gradients seems like a bad idea in practice with neural networks (particularly for RNNs) because if at some iteration gradients explode in a transient manner (a common occurrence), then for all subsequent iterations the update will be very small due to the quadratic L_{t, i} term in the denominator of line 10 of Algorithm 2. It seems like you would want to set an explicit limit on the values of |g_{t, i}| you consider, e.g. setting L_{t, i} <- max(L_{t - 1, i}, min(|g_{t, i}|, 10)) or something. - What version of MNIST are you using that has 55k training samples? It technically has 60k training images, typically split into 50k for training and 10k for validation. - I would suggest using an off-the-shelf classifier for your MNIST experiments too, since you are missing some experiment details (how were the weights initialized, etc). - You say your convolutional kernels are of shape 5 x 5 x 3 and your pooling is 3 x 3 x 3; I think you mean 5 x 5 and 3 x 3, respectively. - What learning rates did you use for the different optimizers and did you do any kind of search over learning rate values? This is absolutely critical!
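To check my reading of the update, I transcribed the per-coordinate steps of Algorithm 2 into NumPy and ran them on the paper's own |x - 10| toy problem; this should be treated as a reading aid rather than the authors' reference implementation (in particular, the small positive initialization of L is my own guard against dividing by zero):

    import numpy as np

    alpha, T = 100.0, 500
    w1 = np.zeros(1)                       # initial weight
    w = w1.copy()
    L = np.full(1, 1e-8)                   # running max of gradient magnitudes
    G = np.zeros(1)                        # running sum of gradient magnitudes
    reward = np.zeros(1)
    theta = np.zeros(1)                    # running sum of negative gradients

    for t in range(T):
        g = -np.sign(w - 10.0)             # negative subgradient of f(w) = |w - 10|
        L = np.maximum(L, np.abs(g))
        G += np.abs(g)
        reward = np.maximum(reward + (w - w1) * g, 0.0)
        theta += g
        beta = theta / (L * np.maximum(G + L, alpha * L))   # the "fraction to bet"
        w = w1 + beta * (L + reward)       # line 10 of Algorithm 2, as I read it

    print(w.item())                        # approaches 10 after an initial exponential search phase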
nips_2017_2592
Polynomial time algorithms for dual volume sampling We study dual volume sampling, a method for selecting k columns from an n × m short and wide matrix (n ≤ k ≤ m) such that the probability of selection is proportional to the volume spanned by the rows of the induced submatrix. This method was proposed by Avron and Boutsidis (2013), who showed it to be a promising method for column subset selection and its multiple applications. However, its wider adoption has been hampered by the lack of polynomial time sampling algorithms. We remove this hindrance by developing an exact (randomized) polynomial time sampling algorithm as well as its derandomization. Thereafter, we study dual volume sampling via the theory of real stable polynomials and prove that its distribution satisfies the "Strong Rayleigh" property. This result has numerous consequences, including a provably fast-mixing Markov chain sampler that makes dual volume sampling much more attractive to practitioners. This sampler is closely related to classical algorithms for popular experimental design methods that are to date lacking theoretical analysis but are known to empirically work well.
The paper studies efficient algorithms for sampling from a determinantal distribution that the authors call "dual volume sampling", for selecting a subset of columns from a matrix. The main results in the paper are two sampling algorithms for selecting k columns from an n x m matrix (n <= k <= m): - an exact sampling procedure with time complexity O(k m^4) - an approximate sampling procedure, with time complexity O(k^3 n^2 m) (ignoring log terms). The approximate sampling algorithm makes use of Strongly Rayleigh measures, a technique previously used for approximately sampling from closely related determinantal distributions, like Determinantal Point Processes. Compared to the exact sampling, it offers a better dependence on m, at the cost of the accuracy of sampling, and a worse dependence on k and n. The authors perform some preliminary experiments comparing their method to other subset selection techniques for linear regression. It is worth noting that the plots are not very informative in comparing dual volume sampling to leverage score sampling, which is to my mind the most relevant baseline. This determinantal distribution was previously discussed and motivated in [1], who referred to it just as "volume sampling", and suggested potential applications in linear regression, experimental design, clustering, etc., however without providing a polynomial time algorithm, except for the case of k=n. The paper addresses an important topic that is relevant to the NIPS community. The results are interesting; however, the contribution is somewhat limited. To my knowledge, the algorithms (in particular, the exact sampling algorithm) are much slower than leverage score sampling, and the authors did not make a compelling argument for why DVS is better suited for the task of experimental design. [1] Avron and Boutsidis. Faster Subset Selection For Matrices and Applications, 2013. https://arxiv.org/abs/1201.0127
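For reference, the leverage score baseline mentioned above fits in a few lines of NumPy: sample k columns with probabilities proportional to their leverage scores, computed from the right singular vectors (the toy matrix below is invented):

    import numpy as np

    def leverage_score_column_sample(A, k, rng):
        _, _, Vt = np.linalg.svd(A, full_matrices=False)
        scores = np.sum(Vt ** 2, axis=0)       # column leverage scores; they sum to rank(A)
        return rng.choice(A.shape[1], size=k, replace=False, p=scores / scores.sum())

    rng = np.random.default_rng(0)
    A = rng.normal(size=(5, 40))               # short and wide: n = 5, m = 40
    print(leverage_score_column_sample(A, k=8, rng=rng))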
nips_2017_821
Fixed-Rank Approximation of a Positive-Semidefinite Matrix from Streaming Data Several important applications, such as streaming PCA and semidefinite programming, involve a large-scale positive-semidefinite (psd) matrix that is presented as a sequence of linear updates. Because of storage limitations, it may only be possible to retain a sketch of the psd matrix. This paper develops a new algorithm for fixed-rank psd approximation from a sketch. The approach combines the Nyström approximation with a novel mechanism for rank truncation. Theoretical analysis establishes that the proposed method can achieve any prescribed relative error in the Schatten 1-norm and that it exploits the spectral decay of the input matrix. Computer experiments show that the proposed method dominates alternative techniques for fixed-rank psd matrix approximation across a wide range of examples.
The authors address the problem of fixed-rank approximation of a positive semidefinite (psd) matrix from streaming data. The authors propose a simple method based on the Nyström approximation, in which a sketch is updated as streaming data comes in, and the rank-r approximation is obtained as the best rank-r approximation to the full Nyström approximation. For Gaussian or uniformly distributed (Haar) orthogonal sketch matrices, the authors prove a standard upper bound on the Schatten-1 (aka nuclear) norm of the approximation error. Detailed empirical evidence is discussed, which indicates that the proposed method improves on previously proposed methods for fixed-rank approximation of psd matrices. The paper is very well written and easy to follow. The result appears to be novel. The proposed method admits a theoretical analysis and seems to perform well in practice. Implementation details are discussed. This paper would be a welcome addition to the streaming PCA literature and is a clear accept in my opinion. Minor comments: - The "experimental results" section does not mention which test matrix ensemble was used, and whether the true underlying rank r was made available to the algorithms. - Some readers will be more familiar with the term "nuclear norm" than with the term "Schatten-1 norm". - It would be nice to discuss the increase in approximation error due to streaming, compared with a classical sketch algorithm that is given all the data in advance.
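To make the pipeline concrete, here is a compact NumPy sketch of the kind of procedure under study: only the sketch Y = A Omega is kept while a stream of rank-one psd updates arrives, after which a (slightly shifted, for numerical stability) Nyström approximation is formed and truncated to rank r. The sketch size, shift, and decaying update stream are illustrative choices of mine, not the paper's prescriptions:

    import numpy as np

    rng = np.random.default_rng(0)
    n, l, r = 200, 20, 5                        # ambient dim, sketch size, target rank
    Omega = rng.normal(size=(n, l))             # fixed random test matrix
    Y = np.zeros((n, l))                        # the only quantity kept in memory

    A = np.zeros((n, n))                        # ground truth, kept here only to measure error
    for i in range(50):                         # stream of rank-1 psd updates A <- A + v v^T
        v = (0.7 ** i) * rng.normal(size=n)     # decaying weights give spectral decay
        Y += np.outer(v, v @ Omega)
        A += np.outer(v, v)

    nu = np.sqrt(n) * np.finfo(float).eps * np.linalg.norm(Y)   # tiny stabilising shift
    Yv = Y + nu * Omega
    B = Omega.T @ Yv
    C = np.linalg.cholesky((B + B.T) / 2)
    E = Yv @ np.linalg.inv(C).T                 # Nystrom approximation is E @ E.T
    U, s, _ = np.linalg.svd(E, full_matrices=False)
    Ar = (U[:, :r] * np.maximum(s[:r] ** 2 - nu, 0.0)) @ U[:, :r].T   # rank-r psd output

    print(np.linalg.norm(A - Ar, 'nuc') / np.linalg.norm(A, 'nuc'))   # small when the spectrum decays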
nips_2017_1276
Natural Value Approximators: Learning when to Trust Past Estimates Neural networks have a smooth initial inductive bias, such that small changes in input do not lead to large changes in output. However, in reinforcement learning domains with sparse rewards, value functions have non-smooth structure with a characteristic asymmetric discontinuity whenever rewards arrive. We propose a mechanism that learns an interpolation between a direct value estimate and a projected value estimate computed from the encountered reward and the previous estimate. This reduces the need to learn about discontinuities, and thus improves the value function approximation. Furthermore, as the interpolation is learned and state-dependent, our method can deal with heterogeneous observability. We demonstrate that this one change leads to significant improvements on multiple Atari games, when applied to the state-of-the-art A3C algorithm.
This paper proposes a novel way to estimate the value function of a game state, by treating the previous reward and value estimate as additional inputs besides the current state. These additional inputs are the direct cause of the non-smooth structure in the value function (e.g., a sparse immediate reward leads to a bump in the value function). By taking them into consideration explicitly, the value function can be estimated more easily. Although the proposed idea is quite interesting, there are a lot of baselines that the proposed method should be compared against. As mentioned in Section 7, eligibility traces estimate the cumulative return as a linear combination of k-step *future* returns with geometric weights. Generalized Advantage Estimation, as a novel way to estimate the overall return using future estimates, is another example. Comparing these previous approaches with the proposed method, which uses past estimates, is both interesting and important. In addition, there exist many simple approaches that also capture the history (and sparse rewards that happened recently), e.g., frame stacking. A comparison would also be interesting. The paper starts with the story that the value function is smooth if the non-smooth part is explained away by previous rewards and values. There are also motivating examples (e.g., Fig. 1 and Fig. 2). However, there are no experiments on Atari games showing that the value function estimated with the proposed approach is indeed smoother than the baseline's. Table 1 shows that while the median score shows a strong boost, the mean score of the proposed approach is comparable to the baseline. This might suggest that the proposed approach does a good job of reducing the variance of the trained models, rather than giving higher performance. The paper seems to contain no relevant analysis of this. Some detailed questions: 1. According to the paper, the modified A3C algorithm uses N = 20 steps, rather than N = 5 as in vanilla A3C. Did the baseline also use N = 20? Note that N could be an important factor for performance, since with a longer horizon the algorithm will see more rewards in one gradient update given the same number of game frames. 2. Line 154 says "all agents are run with one seed". Does that mean the agents are initialized with the same random seed across different games, or that the game environment starts from the same seed? Please clarify.
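For what it is worth, my reading of the proposed estimate is V_nat(s_t) = beta_t * v(s_t) + (1 - beta_t) * (V_nat(s_{t-1}) - r_{t-1}) / gamma: when the learned, state-dependent gate beta_t is small, the estimate is projected forward from the previous step rather than re-estimated from scratch, so the asymmetric drop after a reward comes for free. A tiny worked example with invented numbers around a sparse reward:

    # a reward of 10 arrives on leaving s_0; the projected estimate (V - r) / gamma
    # absorbs the resulting discontinuity, so v(s_t) only has to be roughly right
    gamma = 0.99
    v_direct = [10.0, 0.3, 0.1]   # raw network outputs v(s_t)
    rewards = [10.0, 0.0, 0.0]    # r_t, received on leaving s_t
    beta = [1.0, 0.3, 0.3]        # learned, state-dependent gates (made up here)

    V = v_direct[0]
    for t in range(1, 3):
        projected = (V - rewards[t - 1]) / gamma
        V = beta[t] * v_direct[t] + (1 - beta[t]) * projected
        print(t, round(V, 3))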
nips_2017_3116
Implicit Regularization in Matrix Factorization We study implicit regularization when optimizing an underdetermined quadratic objective over a matrix X with gradient descent on a factorization of X. We conjecture and provide empirical and theoretical evidence that with small enough step sizes and initialization close enough to the origin, gradient descent on a full dimensional factorization converges to the minimum nuclear norm solution.
This paper studies an interesting observation: with small enough step sizes and initialization close enough to the origin, even for an underdetermined / unregularized problem of the form (1), gradient descent seems to find the solution with minimum nuclear norm. In other words, starting from an initialization with low nuclear norm, gradient descent seems to increase the norm of the iterates "just as needed". The authors state that conjecture and then proceed to prove it for the restricted case when the operator A is based on commutative matrices. The paper is pretty interesting and is a first step towards understanding the implicit regularization imposed by the dynamics of an optimization algorithm such as gradient descent. There seem to be many small errors in Figures 1 and 2 (see below). Detailed comments ----------------- Line 46: I think there are way earlier references than [16] for multi-task learning. I think it is a good idea to give credit to earlier references. Another example of a model that would fall within this framework is factorization machines and their convex formulations. Line 63: I guess X = U U^T? X is not defined... Figure 1: - the figure does not indicate (a) and (b) (I assume they are the left and right plots) - what does the "training error" blue line refer to? - X_gd is mentioned in the caption but does not appear anywhere in the plot Figure 2: - I assume this is the nuclear norm of U U^T - X_gd (dotted black line) does not appear anywhere - It is a bit hard to judge how close the nuclear norms are to the magenta line without a comparison point - How big is the nuclear norm when d < 40? (not displayed) Line 218: I assume the ODE solver is used to solve Eq. (3)? Please clarify.
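The conjecture is also easy to poke at numerically, which might make a useful sanity check next to the figures. The sketch below runs gradient descent on f(U) = (1/m) * ||A(U U^T) - y||^2 from a tiny full-dimensional initialization and compares the nuclear norm of the result with that of a planted low-rank solution; the problem sizes, step size and iteration count are arbitrary choices of mine:

    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 10, 25                                     # underdetermined: m < n(n+1)/2
    As = rng.normal(size=(m, n, n))
    As = (As + As.transpose(0, 2, 1)) / 2             # symmetric measurement matrices
    z = rng.normal(size=n)
    Xstar = np.outer(z, z)                            # planted rank-1 psd solution
    y = np.einsum('mij,ij->m', As, Xstar)

    U = 1e-4 * rng.normal(size=(n, n))                # full-dimensional, near-zero init
    for _ in range(100000):
        R = np.einsum('mij,ij->m', As, U @ U.T) - y   # residuals
        U -= 1e-3 * (4.0 / m) * np.einsum('m,mij->ij', R, As) @ U

    X = U @ U.T                                       # if the conjecture holds, the fit is
    print(np.linalg.norm(np.einsum('mij,ij->m', As, X) - y))       # near zero and the two
    print(np.linalg.norm(X, 'nuc'), np.linalg.norm(Xstar, 'nuc'))  # nuclear norms nearly match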
nips_2017_3370
Z-Forcing: Training Stochastic Recurrent Networks Many efforts have been devoted to training generative latent variable models with autoregressive decoders, such as recurrent neural networks (RNN). Stochastic recurrent models have been successful in capturing the variability observed in natural sequential data such as speech. We unify successful ideas from recently proposed architectures into a stochastic recurrent model: each step in the sequence is associated with a latent variable that is used to condition the recurrent dynamics for future steps. Training is performed with amortised variational inference where the approximate posterior is augmented with a RNN that runs backward through the sequence. In addition to maximizing the variational lower bound, we ease training of the latent variables by adding an auxiliary cost which forces them to reconstruct the state of the backward recurrent network. This provides the latent variables with a task-independent objective that enhances the performance of the overall model. We found this strategy to perform better than alternative approaches such as KL annealing. Although being conceptually simple, our model achieves state-of-the-art results on standard speech benchmarks such as TIMIT and Blizzard and competitive performance on sequential MNIST. Finally, we apply our model to language modeling on the IMDB dataset where the auxiliary cost helps in learning interpretable latent variables.
The paper introduces a training technique that encourages autoregressive models based on RNNs to utilize latent variables. More specifically, the training loss of a recurrent VAE with a bidirectional inference network is augmented with an auxiliary loss. The paper is well structured and written, and it has an adequate review of previous work. The technique introduced is heuristic and justified intuitively. The heuristic is backed by empirical results that are good but not ground-breaking. The results for speech modelling are very good, and those for sequential MNIST are good. The auxiliary loss brings no quantitative improvement to text modelling, but an interesting use for the latent variables is shown in section 5.4. Some concerns: Line 157 seems to introduce a second auxiliary loss that is not shown in L_{aux} (line 153). Is this another term in L_{aux}? Was the backward RNN pretrained on this objective? More details should be included about this. Otherwise, this paper should be rejected, as there is plenty of space left to include the details. Line 186: How was the \beta term introduced? What schedule was used? Did this detail prove essential? No answers to these questions are provided. Line 191: What type of features were used? Raw signal? Spectral? Mel-spectral? Cepstral? Please specify. Line 209: Why the change from LSTM to GRU? Was this necessary for obtaining good results? Were experiments run with LSTM? Line 163: Gradient disconnection here and in the paragraph starting on line 127. Can they be shown graphically in Figure 1? Maybe an x on the relevant lines? Figure 2 has very low quality in a printout. It should be improved. Line 212: "architecturally flat": does this refer to the number of layers? Do any of the baseline models include information about the 2D structure of the MNIST examples? Specify if any does, as this would be advantageous for those models. The section that refers to IWAE: Were the models trained with IWAE or just evaluated? I think they were just evaluated, but it should be more clearly specified. Would it be possible to train using IWAE?
nips_2017_455
Nonparametric Online Regression while Learning the Metric We study algorithms for online nonparametric regression that learn the directions along which the regression function is smoother. Our algorithm learns the Mahalanobis metric based on the gradient outer product matrix G of the regression function (automatically adapting to the effective rank of this matrix), while simultaneously bounding the regret (on the same data sequence) in terms of the spectrum of G. As a preliminary step in our analysis, we extend a nonparametric online learning algorithm by Hazan and Megiddo enabling it to compete against functions whose Lipschitzness is measured with respect to an arbitrary Mahalanobis metric.
This paper describes a novel algorithm for the online nonparametric regression problem. It employs a Mahalanobis metric to obtain a better distance measurement within the traditional online nonparametric regression framework. In terms of theoretical analysis, the proposed algorithm improves the regret bound and achieves a result competitive with the state of the art. The theoretical proof is well organized and correct as far as this reviewer can tell. However, the novelty of the proposed algorithm may be limited. The overall algorithm consists of two separable components: online nonparametric regression with a Mahalanobis metric, and the estimation of the gradient outer-product. The first component is a straightforward extension from [7], which proposed a general framework covering all kinds of distance metric spaces. In Algorithm 1, it is obvious that the proposed work specializes the metric space to the Mahalanobis distance and keeps the other steps unchanged. Therefore, the statement in line 244 that the proposed algorithm is a generalized version of the algorithm from [7] is improper. This extension is valuable only if the motivation for incorporating the Mahalanobis distance is clearly introduced and convincing. Otherwise, this work mainly discusses a special case of the framework of [7]. On the other hand, the significance of the proposed method and the related regret bound should be discussed in more detail, because, compared with the original method in [7], the improvement in the regret bound depends on the specific problem. The second component is an application of [14] to estimate the gradient outer-product. The authors should discuss the increased computational complexity brought by this phase. Some minor issues: 1. How does one compute the active center efficiently in line 8 of Algorithm 1? 2. $\tilde{\rho}$ in Theorem 2 is not defined. 3. What's the relationship between a smoother f_0 and the Mahalanobis metric? 4. There is no experimental evaluation to demonstrate the improvement of the proposed method in comparison with the original framework in terms of accuracy or convergence rate.
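For concreteness, the specialization described above amounts to replacing Euclidean distances by rho_M(x, c) = sqrt((x - c)^T M (x - c)) when matching a query to its nearest stored ball center, with M built from the estimated gradient outer-product. A small NumPy sketch, with a random psd matrix standing in for the estimated M:

    import numpy as np

    rng = np.random.default_rng(0)
    B = rng.normal(size=(4, 4))
    M = B @ B.T                                    # stand-in psd metric matrix
    centers = rng.normal(size=(10, 4))             # stored ball centers

    def nearest_center(x, centers, M):
        diffs = centers - x
        d2 = np.einsum('ni,ij,nj->n', diffs, M, diffs)   # squared Mahalanobis distances
        return int(np.argmin(d2))

    print(nearest_center(rng.normal(size=4), centers, M))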
nips_2017_366
Dykstra's Algorithm, ADMM, and Coordinate Descent: Connections, Insights, and Extensions We study connections between Dykstra's algorithm for projecting onto an intersection of convex sets, the augmented Lagrangian method of multipliers or ADMM, and block coordinate descent. We prove that coordinate descent for a regularized regression problem, in which the penalty is a separable sum of support functions, is exactly equivalent to Dykstra's algorithm applied to the dual problem. ADMM on the dual problem is also seen to be equivalent, in the special case of two sets, with one being a linear subspace. These connections, aside from being interesting in their own right, suggest new ways of analyzing and extending coordinate descent. For example, from existing convergence theory on Dykstra's algorithm over polyhedra, we discern that coordinate descent for the lasso problem converges at an (asymptotically) linear rate. We also develop two parallel versions of coordinate descent, based on the Dykstra and ADMM connections.
This paper studies connections/equivalences between Dykstra's algorithm, coordinate descent (à la Gauss-Seidel block alternating minimization) and ADMM. Many of these equivalences were already known (indeed essentially known in the optimization literature, as stated by the author). The author investigates this for the soft feasibility (best approximation) problem and block-separable regularized regression, where the regularizers are positively homogeneous (i.e., support functions of closed convex sets containing the origin). The author claims that this is the first work to investigate this for the case where the design is not unitary. Actually, the extension to an arbitrary design is very straightforward (almost trivial) through Fenchel-Rockafellar duality. In fact, positive homogeneity is not even needed, as Dykstra's algorithm has been extended to the proximal setting (beyond indicator functions of closed convex sets). Having said this, I have several concerns about this paper. + My first concern pertains to the actual novelty of the work, which I believe is quite limited. The manuscript has the flavour of a review paper with some incremental extensions. + The paper contains some important inaccuracies that should be addressed. Other detailed comments are as follows: + (1): the intersection of the sets should be non-empty. Otherwise many of the statements made do not rigorously hold. + What the author is calling seminorms are actually support functions (symmetry is NOT needed), or equivalently gauges of polar sets. + Extension to the non-Euclidean case: several key details are missing for all this to make sense, in particular the notion of a Legendre function. Qualification conditions for strong duality to hold are not stated either.
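Since the whole paper revolves around it, it may help to recall how short Dykstra's algorithm is: cycle through the sets, and before each projection add back that set's correction term from the previous sweep. A NumPy sketch projecting a point onto the intersection of a box and a halfspace (both sets are my choice):

    import numpy as np

    def proj_box(x, lo=-1.0, hi=1.0):
        return np.clip(x, lo, hi)

    def proj_halfspace(x, a=np.array([1.0, 1.0]), b=1.0):
        return x - max(0.0, (a @ x - b) / (a @ a)) * a   # onto {z : a^T z <= b}

    def dykstra(y, projections, sweeps=100):
        x = y.copy()
        p = [np.zeros_like(y) for _ in projections]      # one correction term per set
        for _ in range(sweeps):
            for i, proj in enumerate(projections):
                z = proj(x + p[i])
                p[i] = x + p[i] - z                      # update the correction term
                x = z
        return x

    print(dykstra(np.array([3.0, 0.2]), [proj_box, proj_halfspace]))   # -> approx [1, 0]

Note that without the correction terms p[i] this degenerates to plain alternating projections, which converges to a point in the intersection but not to the projection of y onto it.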
nips_2017_1544
Differentially Private Empirical Risk Minimization Revisited: Faster and More General In this paper we study the differentially private Empirical Risk Minimization (ERM) problem in different settings. For smooth (strongly) convex loss function with or without (non)-smooth regularization, we give algorithms that achieve either optimal or near optimal utility bounds with less gradient complexity compared with previous work. For ERM with smooth convex loss function in high-dimensional (p ≫ n) setting, we give an algorithm which achieves the upper bound with less gradient complexity than previous ones. At last, we generalize the expected excess empirical risk from convex loss functions to non-convex ones satisfying the Polyak-Łojasiewicz condition and give a tighter upper bound on the utility than the one in [34].
Summary: A large number of machine learning models are trained on potentially sensitive data, and it is often important to guarantee privacy of the training data. Chaudhuri and Monteleoni formulated the differentially private ERM problem and started a line of work on designing differentially private optimization algorithms for variants of ERM problems. Recent works have obtained nearly optimal tradeoffs between the additional error introduced by the DP algorithm (the privacy risk) and the privacy parameter, for a large class of settings. In this work, these results are improved along the additional axis of computational efficiency. For smooth and strongly convex losses, this work gets privacy risk bounds that are essentially the best known, but does so at a computational cost of essentially (n + \kappa) gradient computations, instead of n\kappa, where \kappa is the condition number. Similar improvements are presented for other settings of interest, when the loss function is not strongly convex, or when the constraint set has small complexity. A different viewpoint on the results is that the authors show that DP noise-addition techniques and modern optimization methods can be made to work well together. Specifically, one can use SVRG with noise addition at each step, and the authors show that this noisy SVRG also gets near-optimal privacy risk. Similarly for the case of constraint sets with small Gaussian width (such as the l_1 ball), where previous work used noisy mirror descent, the authors show that one can use an accelerated noisy mirror descent and get faster runtimes without paying in the privacy cost. I think the problem is very important and interesting. While the tools are somewhat standard, I think this paper advances the state of the art sufficiently that I am compelled to recommend acceptance.
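For readers new to this area, the primitive underlying all of these algorithms is the noisy (clipped) gradient step sketched below; note that the clip norm and noise scale here are arbitrary placeholders, whereas the paper's guarantees require calibrating the noise to a target (epsilon, delta):

    import numpy as np

    def noisy_gradient_step(w, grad_fn, batch, lr=0.1, clip=1.0, sigma=1.0, rng=None):
        rng = rng or np.random.default_rng(0)
        per_example = np.stack([grad_fn(w, x) for x in batch])
        norms = np.linalg.norm(per_example, axis=1)
        per_example *= np.minimum(1.0, clip / np.maximum(norms, 1e-12))[:, None]
        g = per_example.mean(axis=0)                       # clipped average gradient
        g = g + rng.normal(scale=sigma * clip / len(batch), size=g.shape)  # Gaussian noise
        return w - lr * g

    # toy least-squares example: gradient of 0.5 * (w.x - 1)^2 for a single example x
    grad_fn = lambda w, x: (w @ x - 1.0) * x
    rng = np.random.default_rng(1)
    w, batch = np.zeros(3), rng.normal(size=(32, 3))
    for _ in range(100):
        w = noisy_gradient_step(w, grad_fn, batch, rng=rng)
    print(w)

The paper's contribution can then be read as plugging this primitive into SVRG-style variance reduction and accelerated mirror descent without degrading the privacy analysis.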
nips_2017_193
Structured Embedding Models for Grouped Data Word embeddings are a powerful approach for analyzing language, and exponential family embeddings (EFE) extend them to other types of data. Here we develop structured exponential family embeddings (S-EFE), a method for discovering embeddings that vary across related groups of data. We study how the word usage of U.S. Congressional speeches varies across states and party affiliation, how words are used differently across sections of the ArXiv, and how the copurchase patterns of groceries can vary across seasons. Key to the success of our method is that the groups share statistical information. We develop two sharing strategies: hierarchical modeling and amortization. We demonstrate the benefits of this approach in empirical studies of speeches, abstracts, and shopping baskets. We show how S-EFE enables group-specific interpretation of word usage, and outperforms EFE in predicting held-out data.
This paper presents a word embedding model for grouped data. It extends EFE to learn group-specific embedding vectors while sharing the same context vectors. To handle groups with limited data, the authors propose two methods (hierarchical and amortization) to derive group-specific embedding vectors from a shared one. The paper is clearly written, but the novelty is a bit limited since it is an incremental work beyond EFE. 1. From Table 2, there is no clear winner among the three proposed models (hierarchical, amortiz+feedf, and amortiz+resnet), and the performance differences are subtle, especially on the shopping data. If one would like to use S-EFE, do you have any practical guidance on choosing the right model? I guess we should prefer amortiz+resnet to amortiz+feedf, since amortiz+resnet always outperforms amortiz+feedf. Line 276 mentions that hierarchical S-EFE works better when there are more groups. Why? 2. Why does Separate EFE perform worse than Global EFE? 3. The authors proposed the hierarchical and amortization methods for the reasons in lines 128-131. It would be interesting to see how S-EFE performs w.r.t. various data sizes. From this experiment, we might understand whether S-EFE is going to surpass hierarchical/amortized S-EFE, given more and more data. If yes, would it be better to promote S-EFE as the leading method? If not, why? 4. I understand that pseudo log-likelihood is a standard metric for evaluating embedding methods. However, it would be great to see how the embeddings help in standard language understanding tasks, like text classification, machine translation, etc. 5. Lines 141 and 143: Eq. 9 should be Eq. 4.
nips_2017_1119
Learned D-AMP: Principled Neural Network Based Compressive Image Recovery Compressive image recovery is a challenging problem that requires fast and accurate algorithms. Recently, neural networks have been applied to this problem with promising results. By exploiting massively parallel GPU processing architectures and oodles of training data, they can run orders of magnitude faster than existing techniques. However, these methods are largely unprincipled black boxes that are difficult to train and often-times specific to a single measurement matrix. It was recently demonstrated that iterative sparse-signal-recovery algorithms can be "unrolled" to form interpretable deep networks. Taking inspiration from this work, we develop a novel neural network architecture that mimics the behavior of the denoising-based approximate message passing (D-AMP) algorithm. We call this new network Learned D-AMP (LDAMP). The LDAMP network is easy to train, can be applied to a variety of different measurement matrices, and comes with a state-evolution heuristic that accurately predicts its performance. Most importantly, it outperforms the state-of-the-art BM3D-AMP and NLR-CS algorithms in terms of both accuracy and run time. At high resolutions, and when used with sensing matrices that have fast implementations, LDAMP runs over 50× faster than BM3D-AMP and hundreds of times faster than NLR-CS.
Summary of the work: The authors present a theoretically motivated NN architecture for solving inverse problems arising in compressed sensing. The network architecture arises from 'unrolling' the denoiser-based approximate message passing (D-AMP) algorithm with convolutional denoiser networks inside. The work is nicely presented and, to my knowledge, original. The authors present competitive results on known benchmark data. Overall evaluation: I enjoyed reading this paper: it is well-presented, theoretically motivated work. I was pleased to see that the method achieves very competitive performance in the end, both in terms of accuracy and in terms of speed. Comments and questions: 1. Why is the Onsager correction so important?: The Onsager correction is needed because the error distribution differs from the white Gaussian noise assumption typically used to train/derive denoising methods. However, if the whole D-IT unrolled network is trained (or only fine-tuned after initial training) end-to-end, then surely subsequent denoisers can adapt to the non-Gaussian noise distribution and learn to compensate for the bias themselves. This would mean that the denoisers would no longer be good denoisers when taken out of context, but as part of the network they could co-adapt to compensate for each other's biases. If this is not the case, why? 2. Validation experiments for layer-wise training: My understanding is that all experiments use the layer-wise training procedure, and there is no explicit comparison between layer-wise and end-to-end training. Is this correct? 3. Connections to the theory of denoising autoencoders: This paper is presented from the perspective of compressive sensing, with solid theory, and unsurprisingly most of the references are to work from this community/domain. However, denoisers are applied more generally in unsupervised learning, and there is a relevant body of work that the NIPS community will be familiar with which might be worth discussing in this paper. Specifically, (Alain and Bengio, 2012) https://arxiv.org/abs/1211.4246 show that neural networks trained for denoising learn to approximate the gradients of the log data-generating density. Therefore, one could implement iterative denoising as "taking a gradient step" along the image prior. This is exploited in e.g. (Sonderby et al, 2017) https://arxiv.org/abs/1610.04490 where the connection to AMP is noted, although not discussed in detail. 4. Lipschitz continuity remark: the authors say ConvNets are clearly Lipschitz, otherwise their gradients would explode during training. While this is true, and I would assume doesn't cause any problems in practice, technically the Lipschitz constant can change with training and is unbounded. A similar Lipschitz-continuity requirement arises in, for example, the Wasserstein GAN's discriminator network, where Lipschitz continuity with a certain constant is ensured by weight clipping. Can the authors comment on the effect of the Lipschitz constant's magnitude on the findings? Minor comments: Table 1 would benefit from highlighting the fastest/best methods (and those that come within one confidence interval of the winner).
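To make the role of the Onsager term in question 1 visible, here is a self-contained NumPy sketch of a D-AMP-style iteration in which soft-thresholding stands in for the paper's neural denoiser and the denoiser divergence is estimated with the usual Monte Carlo probe; the problem sizes and threshold rule are my own choices. Deleting the last term of the residual update turns this into plain iterative soft-thresholding, which converges far more slowly - a quick way to see why the correction matters:

    import numpy as np

    rng = np.random.default_rng(0)
    n, m, k = 400, 200, 15
    A = rng.normal(size=(m, n)) / np.sqrt(m)          # measurement matrix
    x0 = np.zeros(n); x0[rng.choice(n, k, replace=False)] = rng.normal(size=k)
    y = A @ x0                                        # noiseless measurements

    def D(v, lam):                                    # stand-in denoiser: soft threshold
        return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

    x, z = np.zeros(n), y.copy()
    for _ in range(30):
        sigma = np.linalg.norm(z) / np.sqrt(m)        # effective noise level of x + A^T z
        r = x + A.T @ z
        x_new = D(r, 2.0 * sigma)
        probe = rng.normal(size=n)                    # Monte Carlo divergence of D at r
        eps = sigma / 100.0 + 1e-12
        div = probe @ (D(r + eps * probe, 2.0 * sigma) - D(r, 2.0 * sigma)) / eps
        z = y - A @ x_new + (div / m) * z             # the Onsager correction is the last term
        x = x_new

    print(np.linalg.norm(x - x0) / np.linalg.norm(x0))  # should be a small relative error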
nips_2017_3278
Protein Interface Prediction using Graph Convolutional Networks We consider the prediction of interfaces between proteins, a challenging problem with important applications in drug discovery and design, and examine the performance of existing and newly proposed spatial graph convolution operators for this task. By performing convolution over a local neighborhood of a node of interest, we are able to stack multiple layers of convolution and learn effective latent representations that integrate information across the graph that represent the three dimensional structure of a protein of interest. An architecture that combines the learned features across pairs of proteins is then used to classify pairs of amino acid residues as part of an interface or not. In our experiments, several graph convolution operators yielded accuracy that is better than the state-of-the-art SVM method in this task.
The authors present a novel method for performing convolutions over graphs, which they apply to predict protein interfaces, showing clear improvements over existing methods. The methodological contribution is strong, and the paper is mostly clearly written. However, the authors should evaluate compute costs and more alternative methods for performing graph convolutions. Major comments ============= The authors highlight that their method differs from related methods by being designed for pairs (or collections) of graphs (l104). However, the two protein graphs are convolved separately with shared weights, and the resulting node features are merged afterward. Any other method can be used to convolve proteins before the resulting features are merged. The authors should clarify, or not highlight, this difference between their method and existing methods. The authors should compare their method to a second graph convolutional network apart from DCNN, e.g. Schlichtkrull et al or Duvenaud et al. The authors should clarify if they used the features described in section 3.1.1 as input to all methods. For a fair comparison, all methods should be trained with the same input features. The authors should compare the memory usage and runtime of their method to other convolutional methods. Does the method scale to large proteins (e.g. >800 residues each) with over 800^2 possible residue-residue contacts? The authors should also briefly describe if computations can be parallelized on GPUs and whether their method can be implemented as a user-friendly 'graph convolutional layer'. The authors should describe more formally (using equations) how the resulting feature vectors are merged (section 2.3). The authors should also clarify how they are dealing with variable-length proteins, which result in a variable number of feature vectors. Are the merged feature vectors processed independently by a fully connected layer with shared weights? Or are feature vectors concatenated and flattened, such that the fully connected layer can model interactions between feature vectors, as suggested by figure 2? The authors should also clarify if the same output layer is applied to feature vectors independently or jointly. Section 3.2: The authors should describe more clearly which hyper-parameters were optimized for GCN, PAIRpred, and DCNN. For a fair comparison, the most important hyper-parameters of all methods must be optimized. l221-225: The authors used the AUC for evaluation. Since labels are highly unbalanced, the authors should also compare and present the area under the precision-recall curve. The authors should also describe if performance metrics were computed per protein complex, as suggested by figure 3, or across complexes. Minor comments ============= l28-32: This section should be toned down since the authors use some of the mentioned 'hand-crafted' features as input to their own model. l90: The authors should clarify what they mean by 'populations'. l92-99: The authors should mention that a model without downsampling results in higher memory and compute costs. l95: typo 'classiy' Table 1: The authors should clarify that 'positive examples' and 'negative examples' are residues that are or are not in contact, respectively. l155-162: The authors should mention the average protein length, which is important to assess compute costs (see comment above). l180: The authors should clarify if the 'Gaussian function' corresponds to the PDF or CDF of a normal distribution.
Table 2: The authors should more clearly describe in the main text how the 'No Convolutional' model works. Does it consist of 1-4 fully connected layers that are applied to each node independently, and are the resulting feature vectors merged in the same way as in GCN? If so, which activation function was used and how many hidden units? Is it equivalent to GCN with a receptive field of 0? Since the difference between the mean and median AUC is not clear by looking at figure 3, the authors should plot the mean and median as vertical lines. Since the figure is not very informative, I suggest moving it to the appendix and instead showing more protein complexes, as in figure 4. l192: Did the authors both downsample negative pairs (caption of table 1) and give 10x higher weight to positive pairs? If so, it should be pointed out in the text that two techniques were used to account for class imbalance. l229: What are 'trials'? Did the authors use different train/test splits, or did they train models multiple times to account for randomness during training?
nips_2017_1946
Process-constrained batch Bayesian Optimisation Prevailing batch Bayesian optimisation methods allow all control variables to be freely altered at each iteration. Real-world experiments, however, often have physical limitations that make it time-consuming to alter all settings for each recommendation in a batch. This gives rise to a unique problem in BO: in a recommended batch, a set of variables that are expensive to experimentally change need to be fixed, while the remaining control variables can be varied. We formulate this as a process-constrained batch Bayesian optimisation problem. We propose two algorithms, pc-BO(basic) and pc-BO(nested). pc-BO(basic) is simpler but lacks a convergence guarantee. In contrast, pc-BO(nested) is slightly more complex, but admits a convergence analysis. We show that the regret of pc-BO(nested) is sublinear. We demonstrate the performance of both pc-BO(basic) and pc-BO(nested) by optimising benchmark test functions, tuning hyper-parameters of an SVM classifier, optimising the heat-treatment process of an Al-Sc alloy to achieve a target hardness, and optimising the short polymer fibre production process.
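To make the process constraint concrete, here is a small numpy sketch of one batch recommendation in the spirit of pc-BO: a GP-UCB pick over the full space fixes the expensive-to-change variable, and the remaining batch points vary only the free variable. The RBF kernel, UCB coefficient, candidate set, and two-dimensional toy objective are all illustrative choices, not the authors' exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf(A, B, ls=0.3):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

def gp_posterior(Xtr, ytr, Xte, noise=1e-4):
    L = np.linalg.cholesky(rbf(Xtr, Xtr) + noise * np.eye(len(Xtr)))
    Ks = rbf(Xtr, Xte)
    mu = Ks.T @ np.linalg.solve(L.T, np.linalg.solve(L, ytr))
    v = np.linalg.solve(L, Ks)
    return mu, np.sqrt(np.maximum(1.0 - (v ** 2).sum(0), 1e-12))

f = lambda X: -((X - 0.3) ** 2).sum(-1)           # unknown objective
Xtr = rng.uniform(size=(6, 2)); ytr = f(Xtr)

# x[:, 0] is the expensive-to-change process variable; x[:, 1] is free.
cand = rng.uniform(size=(500, 2))
mu, sd = gp_posterior(Xtr, ytr, cand)
first = cand[np.argmax(mu + 2.0 * sd)]            # unconstrained UCB pick

sub = cand.copy(); sub[:, 0] = first[0]           # batch shares first[0]
mu_s, sd_s = gp_posterior(Xtr, ytr, sub)
rest = sub[np.argsort(-(mu_s + 2.0 * sd_s))[:2]]
batch = np.vstack([first, rest])
print(batch)   # 3 recommendations, identical in the constrained variable
```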
The paper presents a Bayesian optimization framework in which samples can be collected in similarly constrained batches. In short, rather than selecting a single point to sample a label from (as in usual Bayesian optimization) or selecting multiple points simultaneously (as in batch Bayesian optimization), multiple samples in a batch can be collected in which one or more variables need to remain constant throughout the batch. This is motivated by experiments in which some parameter (e.g., temperature in an oven, geometry of an instrument) can be designed prior to sampling and be reused in multiple experiments in a batch, but cannot be altered during a batch experiment. The authors (a) propose this problem for the first time, (b) propose two algorithms for solving it, (c) provide bounds on the regret for the more sophisticated of the two algorithms, and (d) present a comparison of the two algorithms on a variety of datasets. One experiment actually involves a metallurgical setup, in which the authors collaborated with researchers in their institution to design a metal alloy, an interdisciplinary effort that is commendable and should be encouraged. My only concern is that the paper is somewhat dense. It is unlikely that someone unfamiliar with Bayesian optimization would be able to parse the paper. Even knowing about Bayesian optimization, notation is not introduced clearly and the algorithms are hard to follow. In particular: -What is sigma in Algorithms 1 and 2? Is it the variance of the GP used in the UCB? -The acquisition function alpha^GP-UCB should be clearly and explicitly defined somewhere. -The description of the inner loop and the outer loop in Algorithm 1 is not consistent with the exposition in the text: h and g are nowhere to be found in Algorithm 1. This seems to be because you replace the optimization with GP-UCB-PE. It would be better if the link were made explicit. -There is absolutely no description of how any optimization involving an acquisition function is performed. Do you use sampling? Gradient descent? Some combination of both? These should be described, especially how they were instantiated in the experiments. -Is alpha different from alpha^GP-UCB in Alg. 1? How? -In both Algorithms 1 and 2, it is not clear how previously selected samples in a batch (k' < k) are used to select the k-th sample in a batch, other than that they share a constrained variable. All of these are central to understanding the algorithm. This is a pity, as the exposition is otherwise quite lucid.
nips_2017_3185
Learning with Bandit Feedback in Potential Games This paper examines the equilibrium convergence properties of no-regret learning with exponential weights in potential games. To establish convergence with minimal information requirements on the players' side, we focus on two frameworks: the semi-bandit case (where players have access to a noisy estimate of their payoff vectors, including strategies they did not play), and the bandit case (where players are only able to observe their in-game, realized payoffs). In the semi-bandit case, we show that the induced sequence of play converges almost surely to a Nash equilibrium at a quasi-exponential rate. In the bandit case, the same result holds for ε-approximations of Nash equilibria if we introduce an exploration factor ε > 0 that guarantees that action choice probabilities never fall below ε. In particular, if the algorithm is run with a suitably decreasing exploration factor, the sequence of play converges to a bona fide Nash equilibrium with probability 1.
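The bandit-case dynamics described above can be sketched in a few lines: exponential weights driven by importance-weighted payoff estimates, with an ε floor on action probabilities. The toy below uses a random two-player identical-interest game as the potential game; the payoff table, step size, and ε are arbitrary illustrative values, not the paper's tuning.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two-player identical-interest game (a simple potential game): both
# players receive the same payoff, drawn here as a random 3x3 table.
U = rng.uniform(size=(3, 3))
eta, eps, T = 0.1, 0.05, 5000
scores = [np.zeros(3), np.zeros(3)]

for t in range(T):
    probs = []
    for s in scores:
        w = np.exp(eta * (s - s.max()))                   # exponential weights
        probs.append((1 - eps) * w / w.sum() + eps / 3)   # epsilon floor
    a = [int(rng.choice(3, p=p)) for p in probs]
    u = U[a[0], a[1]]                            # realized in-game payoff only
    for i in range(2):
        est = np.zeros(3)
        est[a[i]] = u / probs[i][a[i]]           # importance-weighted estimate
        scores[i] += est

print([np.round(p, 2) for p in probs])           # play concentrates on a cell
print("argmax cell (a pure Nash equilibrium):",
      np.unravel_index(int(U.argmax()), U.shape))
```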
The authors show that exponential weight updates and their generalization, FTRL, converge pointwise to a Nash equilibrium in potential games. The latter holds even when players receive only bandit feedback of the game. Unfortunately, the main result, for the case of full feedback, is already known in the economics literature, where Hedge is known as smooth fictitious play. It is presented in the paper: Hofbauer, Sandholm, "On the global convergence of smooth fictitious play", Econometrica 2002. This work is not even cited in the present paper, and it shows that smooth fictitious play and its FTRL extensions (though they do not call them FTRL) do converge to Nash equilibria in potential games and in three other classes of games. Moreover, the result of Kleinberg, Piliouras, Tardos, "Multiplicative Updates Outperform Generic No-Regret Learning in Congestion Games", STOC'09, is established for the actual sequence of play (unlike what the authors claim). That paper shows that for the class of congestion games (which is equivalent to the class of all potential games, c.f. Monderer-Shapley), the actual play of MWU converges to Nash equilibria, and in fact almost surely to pure Nash equilibria (except for measure-zero game instances), which is an even stronger result. In light of the above two omissions, the paper needs to be re-positioned to argue that the extension to bandit feedback is not an easy extension of the above results. It is an interesting extension and I can potentially see it being presented at a NIPS-quality conference, but it needs to be positioned correctly within the literature, and the paper needs to argue why it does not easily follow from existing results. In particular, given that most of these results go through stochastic approximations, such stochastic approximations are most probably robust to unbiased estimates of the payoffs, which is all that is needed for the bandit extension.
nips_2017_3184
Robust Conditional Probabilities Conditional probabilities are a core concept in machine learning. For example, optimal prediction of a label Y given an input X corresponds to maximizing the conditional probability of Y given X. A common approach to inference tasks is learning a model of conditional probabilities. However, these models are often based on strong assumptions (e.g., log-linear models), and hence their estimate of conditional probabilities is not robust and is highly dependent on the validity of their assumptions. Here we propose a framework for reasoning about conditional probabilities without assuming anything about the underlying distributions, except knowledge of their second order marginals, which can be estimated from data. We show how this setting leads to guaranteed bounds on conditional probabilities, which can be calculated efficiently in a variety of settings, including structured-prediction. Finally, we apply them to semi-supervised deep learning, obtaining results competitive with variational autoencoders.
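As a toy instance of such bounds, the linear program below computes the exact range of a joint atom P(X=1, Y=1) over all distributions consistent with given pairwise marginals on a chain X - Z - Y, using scipy. The binary variables and the hand-set (mutually consistent) marginal tables are assumptions of the toy; the paper derives closed-form solutions for such tree-structured cases rather than calling an LP solver.

```python
import numpy as np
from scipy.optimize import linprog

# Chain X - Z - Y with binary variables. We know the pairwise marginals
# p(x,z) and p(z,y) and bound the joint atom P(X=1, Y=1) over all joints
# p(x,z,y) consistent with those marginals.
p_xz = np.array([[0.30, 0.10], [0.15, 0.45]])     # rows x, cols z
p_zy = np.array([[0.35, 0.10], [0.20, 0.35]])     # rows z, cols y

idx = lambda x, z, y: 4 * x + 2 * z + y           # flatten p(x,z,y)
A_eq, b_eq = [], []
for x in range(2):
    for z in range(2):
        row = np.zeros(8); row[[idx(x, z, 0), idx(x, z, 1)]] = 1
        A_eq.append(row); b_eq.append(p_xz[x, z])
for z in range(2):
    for y in range(2):
        row = np.zeros(8); row[[idx(0, z, y), idx(1, z, y)]] = 1
        A_eq.append(row); b_eq.append(p_zy[z, y])

c = np.zeros(8); c[[idx(1, 0, 1), idx(1, 1, 1)]] = 1   # P(X=1, Y=1)
lo = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=(0, 1))
hi = linprog(-c, A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=(0, 1))
print("P(X=1, Y=1) lies in [%.3f, %.3f]" % (lo.fun, -hi.fun))
```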
This paper studies the problem of computing probability bounds, more specifically bounds on the probabilities of atoms of the joint space and on conditional probabilities of the class, under the assumption that only some pairwise marginal and univariate marginal values are known. The idea is that such marginals may be easier to obtain than fully specified probabilities, and that cautious inferences can then be used to produce predictions. It is shown that when the marginals follow a tree structure (results are extended to a few other structures), the problem can actually be solved in closed, analytical form, relating it to set cover and maximum flow problems. Some experiments performed on neural networks show that this simple method is actually competitive with other more complex approaches (Ladder, VAE), while outperforming methods of comparable complexity. The paper is elegantly written, with quite understandable and significant results. I see no reason not to accept it. The authors may be interested in looking at the following papers from the imprecise probability literature (since they deal with partially specified probabilities, this may be related): * Benavoli, A., Facchini, A., Piga, D., & Zaffalon, M. (2017). SOS for bounded rationality. arXiv preprint arXiv:1705.02663. * Miranda, E., De Cooman, G., & Quaeghebeur, E. (2007). The Hausdorff moment problem under finite additivity. Journal of Theoretical Probability, 20(3), 663-693. Typos: * L112: solved Our —> solved. Our * L156: assume are —> assume we are * References: 28/29 are redundant
nips_2017_917
Online Convex Optimization with Stochastic Constraints This paper considers online convex optimization (OCO) with stochastic constraints, which generalizes Zinkevich's OCO over a known simple fixed set by introducing multiple stochastic functional constraints that are i.i.d. generated at each round and are disclosed to the decision maker only after the decision is made. This formulation arises naturally when decisions are restricted by stochastic environments or deterministic environments with noisy observations. It also includes many important problems as special cases, such as OCO with long-term constraints, stochastic constrained convex optimization, and deterministic constrained convex optimization. To solve this problem, this paper proposes a new algorithm that achieves O(√T) expected regret and constraint violations and O(√T log(T)) high-probability regret and constraint violations. Experiments on a real-world data center scheduling problem further verify the performance of the new algorithm.
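A simplified sketch of a drift-plus-penalty style update for this setting: a virtual queue tracks accumulated constraint violation, and each round minimizes a penalized linearization with a proximal term (which has a closed-form clipped-gradient solution on a box). The quadratic toy objective, noisy linear constraint, and the tuning of V and alpha are illustrative, not the paper's exact choices.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 2000
V, alpha = np.sqrt(T), float(T)    # illustrative tuning of penalty vs. proximal
x, Q = np.zeros(2), 0.0            # decision in the box [-1, 1]^2, virtual queue
avg = np.zeros(2)

for t in range(T):
    grad_f = 2 * (x - np.array([0.8, -0.5])) + rng.normal(scale=0.1, size=2)
    g = x.sum() - 0.1 + rng.normal(scale=0.1)   # noisy sample of g(x) <= 0
    grad_g = np.ones(2)
    # argmin over the box of V*<grad_f, x> + Q*<grad_g, x> + alpha*|x - x_t|^2
    x = np.clip(x - (V * grad_f + Q * grad_g) / (2 * alpha), -1.0, 1.0)
    Q = max(0.0, Q + g)            # virtual queue accumulates violation
    avg += x / T

print("time-averaged decision:", np.round(avg, 3))  # ideally near (0.7, -0.6)
print("final virtual queue: %.2f" % Q)
```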
***Summary*** The paper proposes an algorithm to address a mixed online/stochastic setting where the objective function is a sequence of arbitrary convex functions (with bounded subgradients) under stochastic convex constraints (also with bounded subgradients). A static regret analysis (both in expectation and high probability) is carried out. A real-world experiment about the allocation of jobs across servers (so as to minimize electricity cost) illustrates the benefit of the proposed method. Assessment: + Overall good & clear structure and presentation + Technically sound and appears novel + Good regret guarantees - Confusing connection with "Deterministic constrained convex optimization" - Some confusion with respect to unreferenced previous work [A] - More details would be needed in the experimental section More details can be found in the next paragraph. ***Further comments*** -In Sec. 2.2, the rationale of the algorithm is described. In particular, x(t+1) is chosen to minimize the "drift-plus-penalty" expression. Could some intuition be given to explain how the importance of the drift and the penalty are traded off (e.g., why is a simple sum, without a tuning weight, sufficient)? -Lemma 5: I do not really have the expertise to assess whether Lemma 5 corresponds to "a new drift analysis lemma for stochastic process". In particular, I am not sufficiently versed in the stochastic process literature. -Could some improvements be obtained by having \alpha and V depend on t? -For Lemma 9, it seems to me that the authors could reuse the results from Proposition 34 in [B]. -Comparison with "OCO with long term constraints": It appears that [A] (not referenced) already provides O(sqrt(T)) and O(sqrt(T)) guarantees, using a similar algorithmic technique. This related work should be discussed. -Comparison with "Stochastic constrained convex optimization": Is [16] the only relevant reference here? -Confusing comparison with "Deterministic constrained convex optimization": In the deterministic constrained setting, we would expect the optimization algorithm to output a feasible solution; why is it acceptable (and why does it make sense) to have a non-feasible solution here (i.e., constraint violation in O(1/sqrt(T)))? -Experiments: - More details about the baselines and the implementation (to make things reproducible, e.g., starting points) should appear in the core article. - Is the set \mathcal{X_0} = [x_1^min, x_1^max] \times ... \times [x_100^min, x_100^max]? If this is the case, it means that problem (2) has to be solved with box constraints. More details would be in order. - A log scale for unserved jobs (Figure d) may be clearer. - Bigger figures would improve clarity as well. - An assessment of the variability of the results is missing to decide on the significance of the conclusions (e.g., repetitions to display standard errors and means). - To gain some space for the experiments, Sec. 4 could be reduced and partly relegated to the supplementary material. -Could the analysis be extended to the dynamic regret setting? ***Minor*** -line 52: typo, min -> argmin. I may have missed them in the paper, but under which assumptions does the argmin of the problem reduce to the single x^*? -line 204 (a): isn't it an inequality instead of an equality? [A] Yu, H. & Neely, M. J. A Low Complexity Algorithm with O(sqrt(T)) Regret and Constraint Violations for Online Convex Optimization with Long Term Constraints. Preprint arXiv:1604.02218, 2016 [B] Tao, T.; Vu, V.
Random matrices: universality of local spectral statistics of non-Hermitian matrices. The Annals of Probability, 2015, 43, 782-874. ================== post-rebuttal comments ================== I thank the authors for their rebuttal (the discussion about [A] was ignored). I have gone through the other reviews and I maintain my score. I would like to emphasize, though, that the authors should clarify the discussion about the stochastic/deterministic constrained convex optimization case (as answered in the rebuttal).
nips_2017_587
Safe Model-based Reinforcement Learning with Stability Guarantees Reinforcement learning is a powerful paradigm for learning optimal policies from experimental data. However, to find optimal policies, most reinforcement learning algorithms explore all possible actions, which may be harmful for real-world systems. As a consequence, learning algorithms are rarely applied on safety-critical systems in the real world. In this paper, we present a learning algorithm that explicitly considers safety, defined in terms of stability guarantees. Specifically, we extend control-theoretic results on Lyapunov stability verification and show how to use statistical models of the dynamics to obtain high-performance control policies with provable stability certificates. Moreover, under additional regularity assumptions in terms of a Gaussian process prior, we prove that one can effectively and safely collect data in order to learn about the dynamics and thus both improve control performance and expand the safe region of the state space. In our experiments, we show how the resulting algorithm can safely optimize a neural network policy on a simulated inverted pendulum, without the pendulum ever falling down.
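A one-dimensional sketch of the certification step: given a model prediction of the next state and an assumed confidence half-width (standing in for a scaled GP posterior standard deviation), a state is certified safe when even the worst-case successor decreases a Lyapunov candidate. The linear dynamics, policy gain, and width value are invented for the toy, not the paper's setup.

```python
import numpy as np

a, b, k = 1.2, 1.0, 0.9      # toy linear dynamics x' = a*x + b*u, policy u = -k*x
v = lambda x: x ** 2         # Lyapunov candidate
beta_sigma = 0.05            # assumed confidence half-width (beta * GP std)

xs = np.linspace(-1.0, 1.0, 401)
x_next = (a - b * k) * xs    # nominal model prediction of the next state
# Certify a decrease of v for every successor inside the confidence interval.
worst = np.maximum(v(x_next + beta_sigma), v(x_next - beta_sigma))
certified = worst < v(xs)
boundary = np.abs(xs[~certified]).max()
print("Lyapunov decrease certified for |x| > %.3f" % boundary)
# A small neighborhood of the origin is never certified because of the
# confidence width; Lyapunov analyses handle that level set separately.
```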
This paper addresses the problem of safe exploration for reinforcement learning. Safety is defined as the learner never entering a state from which it can't return to a low-cost part of the state space. The proposed method learns a Gaussian process model of the system dynamics and uses Lyapunov functions to determine whether a state-action pair is recoverable. Theoretical results are given for safety and exploration quality under idealized conditions. A practical method is introduced, and an experiment shows it explores much of the recoverable state space without entering an unrecoverable state. Overall, this paper addresses an important problem for real-world RL. Being able to explore without risking entering a hazardous state would greatly enhance the applicability of RL. This problem is challenging, and this work makes a nice step towards a solution. I have several comments for improvement, but overall I think the work will have a good impact on the RL community and is a good fit for the NIPS conference. One weakness of the paper is that the empirical validation is confined to a single domain. It is nice to see the method works well on this domain, but it would have been good to try it on a more challenging domain. I suspect there are scalability issues, and I think a clear discussion of what is preventing application to larger problems would benefit people wishing to build on this work. For example, the authors somewhat brush the issue of the curse of dimensionality under the rug by saying the policy can always be computed offline. But the runtime could be exponential in the dimension, and it would be preferable to just acknowledge the difficulty. The authors should also give a clear definition of safety and of the goal of the work early on in the text. The definition is somewhat buried in lines 91-92, and it wasn't until I looked for this definition that I found it. Emphasizing this would make the goal of the work much clearer. Similarly, this paper could really benefit from paragraphs that clearly define notation. As is, it takes a lot of searching backwards in the text to make sense of the theoretical results. New notation is often buried in paragraphs, which makes it hard to access. Updated post author feedback: While I still think this is a good paper and recommend acceptance, after reflecting on the lack of clarity in the theoretical results, I've revised my recommendation to just accept. Theorem 4 seems of critical importance, but it is hard to see that the result maps to the description given after it. Clarifying the theoretical results would make this a much better paper than it already is. Minor comments: How is the full region of attraction computed in Figure 2A? Line 89: What do you mean by state divergence? Line 148: This section title could have a name that describes what the theoretical results will be, e.g., theoretical results on exploration and policy improvement. Line 308: This statement could use a footnote. Line 320: "as to" -> to Theorem 4: How is n* used?
nips_2017_1807
Dynamic Importance Sampling for Anytime Bounds of the Partition Function Computing the partition function is a key inference task in many graphical models. In this paper, we propose a dynamic importance sampling scheme that provides anytime finite-sample bounds for the partition function. Our algorithm balances the advantages of the three major inference strategies, heuristic search, variational bounds, and Monte Carlo methods, blending sampling with search to refine a variationally defined proposal. Our algorithm combines and generalizes recent work on anytime search [16] and probabilistic bounds [15] of the partition function. By using an intelligently chosen weighted average over the samples, we construct an unbiased estimator of the partition function with strong finite-sample confidence intervals that inherit both the rapid early improvement rate of sampling and the long-term benefits of an improved proposal from search. This gives significantly improved anytime behavior, and more flexible trade-offs between memory, time, and solution quality. We demonstrate the effectiveness of our approach empirically on real-world problem instances taken from recent UAI competitions.
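A discrete toy of the core estimator: importance weights w = f(x)/q(x) are unbiased for Z under any positive proposal, and estimates from a crude early proposal and a search-refined later proposal can be averaged with weights favoring the better one. Inverse-variance combination is used here as a simple stand-in for the paper's weighting scheme, and the 6-state model is invented.

```python
import numpy as np

rng = np.random.default_rng(4)

# Unnormalized discrete model over 6 states; Z is known here only so we
# can check the estimator against the truth.
f = np.array([1.0, 4.0, 0.5, 2.0, 6.0, 1.5])
Z = f.sum()

def is_estimate(q, n):
    x = rng.choice(len(f), size=n, p=q)
    w = f[x] / q[x]                       # importance weights, E[w] = Z
    return w.mean(), w.var(ddof=1) / n    # estimate and its variance

q_crude = np.full(6, 1 / 6)               # early, uniform proposal
q_good = f / Z * 0.9 + 0.1 / 6            # later, search-refined proposal
(z1, v1), (z2, v2) = is_estimate(q_crude, 500), is_estimate(q_good, 500)

# Combine with inverse-variance weights: later, better-proposal samples
# count for more, mimicking the paper's weighted average in spirit.
w1, w2 = 1 / v1, 1 / v2
print("crude %.2f  refined %.2f  combined %.2f  true %.2f"
      % (z1, z2, (w1 * z1 + w2 * z2) / (w1 + w2), Z))
```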
The authors present a method for estimating the partition function that alternates between performing heuristic search and importance sampling. The estimated value of the partition function is confidence-bounded and improves with additional computation time. Experimental evaluation is performed on problems from the 2006 and 2008 UAI competitions by comparing confidence bounds of the proposed method against previous work that uses only sampling [15] or search [16]. The proposed method significantly outperforms sampling on certain problems and search on others, while maintaining performance roughly comparable to or better than either sampling or search across all problems. The originality of the work is a bit limited, as it is a fairly straightforward (but novel as far as I know) combination of two recent papers, references [15] and [16]. To review past work, [15] proposes a method that combines importance sampling with variational optimization. The authors choose a proposal distribution that is the solution to a variational optimization problem and show that this choice can be interpreted as picking the proposal distribution that minimizes an upper bound on the value of the importance weights. The importance weight bound is then used to construct a (probabilistic, but with confidence intervals) bound on the estimate of the partition function. [16] describes a search algorithm for computing deterministic bounds on the partition function. Unlike deterministic bounds given by variational methods, the search algorithm proposed in [16] can continue to improve its estimate in an anytime manner, even when given limited memory resources. This paper combines the approaches of [15] and [16] by alternating between importance sampling and search. Search is performed using a weighted mini-bucket variational heuristic. The same mini-buckets are used to improve the proposal distribution over time, generating better samples. The authors provide a method for weighting later samples more heavily as the proposal distribution improves. The authors demonstrate that the proposed method improves estimation of the partition function over state-of-the-art sampling and search based methods ([15] and [16]). There is room for researchers to build on this paper in the future, particularly in the areas of weighting samples generated by a set of proposal distributions that improve with time and exploring when computation should be focused more on sampling or search. The paper is clear and well organized. Previous work the paper builds on is well cited. Minor: - Line 243 states that DIS remains nearly as fast as AOBFS in figures 3(d) and (f), but it appears that the lower bound of DIS converges roughly half an order of magnitude slower. Line 233 states, "we let AOBFS access a lower bound heuristic for no cost." Does the lower bound converge more slowly for DIS than AOBFS in these instances, or is the lower bound shown for AOBFS unfair? - Line 190, properites is a typo. - Line 146 states that, "In [15], this property was used to give finite-sample bounds on Z which depended on the WMB bound, h_{\emptyset}^{+}." It appears that in [15] the bound was only explicitly written in terms of Z_{trw}. Mentioning Z_{trw} may make it slightly easier to follow the derivation from [15].
nips_2017_1806
Graph Matching via Multiplicative Update Algorithm As a fundamental problem in computer vision, the graph matching problem can usually be formulated as a Quadratic Programming (QP) problem with doubly stochastic and discrete (integer) constraints. Since it is NP-hard, approximate algorithms are required. In this paper, we present a new algorithm, called Multiplicative Update Graph Matching (MPGM), that develops a multiplicative update technique to solve the QP matching problem. MPGM has three main benefits: (1) theoretically, MPGM solves the general QP problem with the doubly stochastic constraint naturally, with guaranteed convergence and KKT optimality; (2) empirically, MPGM generally returns a sparse solution and thus can also incorporate the discrete constraint approximately; (3) it is efficient and simple to implement. Experimental results show the benefits of the MPGM algorithm.
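A stylized numpy sketch of a multiplicative update for the matching QP: multiply by the gradient direction Wx, then push the iterate toward the doubly stochastic set with Sinkhorn normalization. This graduated-assignment-style loop is only a stand-in; MPGM's actual update handles the doubly stochastic constraint within the multiplicative step rather than by alternating with Sinkhorn, and the planted-permutation affinity matrix is a toy.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4
perm = rng.permutation(n)                    # planted correct matching
W = rng.uniform(0.0, 0.1, size=(n * n, n * n))
for i in range(n):
    for j in range(n):
        W[i * n + perm[i], j * n + perm[j]] += 1.0   # consistent pairs agree
W = (W + W.T) / 2

def sinkhorn(M, iters=50):
    """Alternately normalize rows and columns toward doubly stochastic."""
    M = M.copy()
    for _ in range(iters):
        M /= M.sum(1, keepdims=True)
        M /= M.sum(0, keepdims=True)
    return M

x = np.full(n * n, 1.0 / n)
for _ in range(200):
    x = x * (W @ x)                              # multiplicative ascent step
    x = sinkhorn(x.reshape(n, n)).ravel()        # back toward the constraint set
print(np.round(x.reshape(n, n), 2))              # near-discrete assignment
print("planted permutation:", perm)
```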
This paper presents a novel method (MPGM) to solve the QP graph matching problem. The graph matching problem is formulated as argmax_x(x^TWx), where W encodes the pair-to-pair and node-to-node affinities and x is the desired permutation matrix (in vector form). The desired solution is a permutation encoding the optimal matching, also expressed as doubly stochastic (entries >= 0, rows/columns sum to 1) and discrete (entries in {0,1}). The standard approach to solving the QP is to relax the NP-hard problem (the relaxation can be of either the doubly-stochastic constraint or the discrete constraint). The proposed MPGM solution follows the Lagrange multiplier technique, which moves the doubly-stochastic constraint into the error term. This proposed approach, along with proofs for convergence and KKT-optimality, is a novel contribution for this graph matching formulation. Experimentally, evidence is provided that shows the approach converges near a sparse/discrete solution even though the constraint is not explicitly modeled in the solution, which is the most promising feature of the method. However, there are some concerns regarding the experimental details which might jeopardize the strength of the empirical claims. Other state-of-the-art techniques do rely on iterative solutions, so the paper should mention the convergence criterion; especially for experiments like figure 1, where MPGM is initialized with RRWM, it would be useful to make sure that the convergence criterion for RRWM is appropriately selected for these problems. Following up on the previous comment, it seems initialization with RRWM is used for Figure 1 but initialization with SM for the subsequent experiments. Furthermore, regarding the experiments, how is performance time computed? Does this include time for the initialization procedure, or just the subsequent MPGM iterations after initialization? Also, RRWM itself requires an initialization. In [3] the basic outline of the algorithm uses uniform initialization, so is it a fair comparison at all for MPGM, which uses an established algorithm as initialization, vs. RRWM, which uses uniform? If the details have been handled properly, perhaps more explanation should be given in the text to clarify these points. In light of these points it is hard to take anything meaningful away from the experiments provided. In the experiments, how are the node-to-node affinities determined (diagonals of W)? I think this is not described in the paper. Image feature point matching would not actually be solved as a graph matching problem in any practical scenario (this is true for a number of reasons, not limited to the number of features, the pair-to-pair affinities, and robustness to a large # of outliers). As the paper suggests, there are many other practical labeling problems which are typically solved using this QP formulation, and perhaps the paper would have more value, and make a stronger case, if it were evaluated on those.
nips_2017_2396
Discovering Potential Correlations via Hypercontractivity Discovering a correlation from one variable to another variable is of fundamental scientific and practical interest. While existing correlation measures are suitable for discovering average correlation, they fail to discover hidden or potential correlations. To bridge this gap, (i) we postulate a set of natural axioms that we expect a measure of potential correlation to satisfy; (ii) we show that the rate of information bottleneck, i.e., the hypercontractivity coefficient, satisfies all the proposed axioms; (iii) we provide a novel estimator to estimate the hypercontractivity coefficient from samples; and (iv) we provide numerical experiments demonstrating that this proposed estimator discovers potential correlations among various indicators of WHO datasets, is robust in discovering gene interactions from gene expression time series data, and is statistically more powerful than the estimators for other correlation measures in binary hypothesis testing of canonical examples of potential correlations.
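The rate characterization behind the hypercontractivity coefficient is easy to probe numerically on a small discrete joint: s(X;Y) = sup_r KL(r_Y || p_Y) / KL(r_X || p_X), where r_Y is the output marginal induced by r_X through p(y|x). The crude random search below replaces the paper's gradient-based estimator, and the 3x2 joint table, with its rare near-deterministic row, is invented for illustration.

```python
import numpy as np
from scipy.stats import entropy   # entropy(p, q) computes KL(p || q)

rng = np.random.default_rng(6)

# Joint p(x,y) with a rare but nearly deterministic region: X = 2 occurs
# with probability 0.1, and there Y is almost surely 1.
P = np.array([[0.225, 0.225],
              [0.225, 0.225],
              [0.005, 0.095]])
px, py = P.sum(1), P.sum(0)
cond = P / px[:, None]                    # p(y|x)

best = 0.0
for _ in range(20000):
    r_x = rng.dirichlet(np.ones(3))       # random perturbed input marginal
    kl_x = entropy(r_x, px)
    if kl_x < 1e-6:
        continue
    r_y = r_x @ cond                      # induced output marginal
    best = max(best, entropy(r_y, py) / kl_x)
print("random-search lower estimate of s(X;Y): %.3f" % best)
```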
Summary: Finding measures of correlation among data covariates is an important problem. Several measures have been proposed in the past - maximum correlation, distance correlation and the maximal information coefficient. This paper introduces a new correlation measure by connecting the dots among some recent work in the information theory literature. The candidate in this paper is called the hypercontractivity coefficient, recently also known as the information bottleneck. The information bottleneck principle is this: what is the best (most compressed) summary of variable X such that it also retains a lot of information about another variable Y? So, over all summaries U of X, the maximum ratio I(U;Y)/I(U;X) is the hypercontractivity coefficient. This quantity has another interpretation due to recent work. Consider the conditional p(y|x) and marginal p(x) associated with the joint distribution of (X,Y). Consider a different distribution r(x) and the marginal r(y) induced by the joint r(x) p(y|x). The hypercontractivity coefficient is the maximum ratio of the divergence between r(y) (the distribution induced on y by the changed distribution r(x)) and p(y) to the divergence between r(x) and p(x). In other words: "what is the maximum change at the output y that can be caused by a new distribution that looks similar to x in KL distance?" Renyi came up with several axioms for characterizing correlation measures. The authors show that if you make one of the axioms strict in a sense, and then drop the need for symmetry, then hypercontractivity satisfies all the axioms. To be less abstract, the goal of the authors is to have a measure which gives a high correlation value even if for most values of X there seems to be no influence on Y, while in some rare region of the X space the correlation with Y is almost deterministic. This has practical applications, as the authors demonstrate in the experiments. It turns out the hypercontractivity coefficient has this property while, provably, all the previous measures do not. This is also related to their modification of the axioms of Renyi by a stronger one (axiom 6 instead of 6'). The authors propose an asymptotically consistent estimator using the second definition of hypercontractivity. They define an optimization problem in terms of the ratio of the changed distribution r(x) and the original distribution p(x), which they solve by gradient descent. To arrive at this formulation they use existing results in kernel density estimation plus clever tricks in importance sampling and the definition of hypercontractivity. The authors justify the use of the measure with a very exhaustive set of experiments. Strengths: a) The paper connects hypercontractivity (which has recently acquired attention in the information theory community) to a correlation measure that can actually pick out "hidden strong correlations under rare events" much better than the existing ones. Theoretically justifying this with an appropriate formulation of axioms based on Renyi's axioms, and clarifying the relationships among various other correlation measures, is pretty interesting and a strong aspect of this paper. b) The authors apply this to recover known gene pathways from data and also to discover potential correlations among WHO indicators which are difficult to detect. There is an exhaustive set of simulations with synthetic functions too. I have the following concerns too: a) Detecting influence from X to Y is fairly well studied in the causality literature.
In fact, counterfactual scores called probability of sufficiency and probability of necessity have been proposed (please refer to the "Causality" book by Judea Pearl, 2009; Chapter 7 talks about scores for quantifying influence). So the authors must refer to and discuss relationships to such counterfactual measures. Since these are counterfactual measures, they too do not measure just factual influence. Further, under some assumptions the above counterfactual measures can be computed from observational data too. Such key concepts must be referred to and discussed, if not compared against. I admit, however, that the current work's focus is on correlation measures and not on influence in the strictest causal sense. b) This comment is related to my previous comment: the authors seem to suggest that hypercontractivity indicates a direction of influence from X to Y (this suggestion is somewhat vague in the paper). I would like to point out two things: a) In the experiments on the WHO dataset, some of the influences are not causal (for example case E) and some of them may be due to confounders (like F, between arms exports and health expenditure). So this measure is about finding hidden potential correlations or influences when the causal direction is already known. b) For Gaussians, S(Y,X) and S(X,Y) are the same - therefore directional differences depend on the functional relationship between X and Y. So the title of discovering potential influence may not be suitable given prior work on counterfactual influence scores? I agree, however, that it is discovering rare hidden correlations. c) Does it make sense to talk about a conditional version? If so, where would it be useful? d) Section 4.3 - The authors say "X_t will cause ... Y_t to change at a later point in time and so on". However, for the same t_i the authors check the measure s(X_{t_i},Y_{t_i}). So are we not measuring instantaneous influence here? I agree that the instantaneous influences can come one after the other in a sequence of pairs of genes. However, this discrepancy needs to be clarified. e) Why did the authors not use UMI or CMI estimators for the experiments in Sections 4.2 and 4.1? It seems like the Shannon-capacity-related estimators proposed in ref 26 do loosely have properties like those of axiom 6 (roughly speaking). Actually, I wonder why the authors did not include Shannon capacity in the theoretical treatment with the other measures (I understand that certain normalization criteria, like being between 0 and 1, and axiom 5, are not satisfied). f) Slightly serious concern: Does axiom 6, for a given {\cal X, \cal Y} domain, need to hold for "some" non-constant function f or for "every" non-constant function f? Axiom 6' is for any function f? The reason I am asking is that the last expression on page 15 in the appendix has H(f(X)) in the numerator and denominator. The authors say "H(f(X)) = infinity for a continuous random variable X and a nonconstant function f" - so clearly, if this is the proof, axiom 6 is not for any f: f being a linear function and X being a uniform distribution on bounded support clearly does not satisfy that statement in the proof if it means any f. g) Follow-up on the concern in (f): On page 14, proof of Proposition 1, point 6': "If Y=f(X), s(X;f(X)) = 1 from Theorem 1". That seems to rely on the above concern in (f). It seems like axiom 6 is not valid for any f. In that case, how is Proposition 1, point 6', correct? I think that needs a separate proof for alpha=1, right?
h) Regarding directionality: Can you characterize functions f and noise such that Y = f(X, noise) and S(Y,X) << S(X,Y)? If this can be done, it could potentially be used for causal inference between pairs from observational data alone, on which there have been several works in the past few years. Have the authors considered applying this to that case? i) With respect to the material on page 27 in the appendix: It seems like the definition in (5) has an interventional interpretation. You are trying different soft interventions r(x) on x and checking for which soft intervention, not too different from p(x), the channel output changes dramatically. This looks like a measure that simulates an intervention, assuming a causal direction, to check the strength of causation. In this sense, it is mildly a counterfactual measure. Can this be used as a substitute for interventions in certain structural equation models (under suitable assumptions on non-linearity, noise, etc.)? If the authors have answers, I would like to hear them.
nips_2017_1058
Optimal Sample Complexity of M-wise Data for Top-K Ranking We explore the top-K rank aggregation problem, in which one aims to recover a consistent ordering that focuses on the top-K ranked items, based on partially revealed preference information. We examine an M-wise comparison model that builds on the Plackett-Luce (PL) model, where for each sample, M items are ranked according to their perceived utilities, modeled as noisy observations of their underlying true utilities. As our result, we characterize the minimax optimality of the sample size for top-K ranking. The optimal sample size turns out to be inversely proportional to M. We devise an algorithm that effectively converts M-wise samples into pairwise ones and employs a spectral method using the refined data. In demonstrating its optimality, we develop a novel technique for deriving tight ℓ∞ estimation error bounds, which is key to accurately analyzing the performance of top-K ranking algorithms, but has been challenging. Recent work relied on an additional maximum-likelihood estimation (MLE) stage merged with a spectral method to attain good estimates in ℓ∞ error and achieve the limit for the pairwise model. In contrast, although it is valid in slightly restricted regimes, our result demonstrates a spectral method alone to be sufficient for the general M-wise model. We run numerical experiments using synthetic data and confirm that the optimal sample size decreases at the rate of 1/M. Moreover, running our algorithm on real-world data, we find that its applicability extends to settings that may not fit the PL model.
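A small numpy sketch of the pipeline: draw M-wise Plackett-Luce samples, break each into pairwise comparisons, and run Rank Centrality (a random walk hopping toward pairwise winners) to recover the top K. For simplicity the breaking rule here just takes adjacent pairs of each ranking, which does not reproduce the paper's carefully decorrelated conversion; utilities, sizes, and sample counts are toy values.

```python
import numpy as np

rng = np.random.default_rng(7)
n, M, L, K = 8, 4, 400, 3
w = np.sort(rng.uniform(0.5, 2.0, size=n))[::-1]   # true PL utilities

def pl_ranking(items):
    """Draw a Plackett-Luce ranking over `items`, best first."""
    items, out = list(items), []
    while items:
        p = w[items] / w[items].sum()
        out.append(items.pop(int(rng.choice(len(items), p=p))))
    return out

A = np.zeros((n, n))                 # A[i, j]: times j was ranked above i
for _ in range(L):
    r = pl_ranking(rng.choice(n, size=M, replace=False))
    for hi, lo in zip(r, r[1:]):     # break the M-wise sample into pairs
        A[lo, hi] += 1

d = A.sum(1).max()                   # normalize by the max comparison degree
P = A / d
np.fill_diagonal(P, 1 - P.sum(1))    # lazy self-loops keep rows stochastic
pi = np.full(n, 1 / n)
for _ in range(2000):                # power iteration to the stationary dist.
    pi = pi @ P

print("true top-%d:      %s" % (K, np.argsort(-w)[:K]))
print("estimated top-%d: %s" % (K, np.argsort(-pi)[:K]))
```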
Summary: The authors consider the problem of optimal top-K rank aggregation from many samples, each containing an M-wise comparison. The setup is as follows: there are n items and each has a hidden utility w_i such that w_i >= w_{i+1}, and w_K - w_{K+1} has a large separation Delta_K. The task is to identify the top K items. A series of samples is given; each sample is a preference ordering among M items chosen out of the n. Given the M items in the sample, the preference ordering follows the PL model. There is a random hypergraph where each set of M vertices is connected independently with probability p. Further, every hyperedge gives L i.i.d. M-wise preference orders over its M items. The authors show that under a density criterion on p, the necessary and sufficient number of samples is O(n log n / (M Delta_K^2)). The notable point is that it is inversely proportional to M. The algorithm has two parts: for every M-wise ordered sample, one creates M pairwise orderings that respect the M-wise ordering, such that these pairwise orderings are not correlated. This is called sample breaking. Then the problem uses standard algorithmic techniques like Rank Centrality that use the generated pairwise ordered samples to actually produce a top-K ranking. The proofs involve justifying the sample-breaking step, and new analysis is needed to prove the sample complexity results. Strengths: a) For the dense hypergraph case, the work provides a sample-optimal strategy for aggregation, which is the strength of the paper. b) Analyzing Rank Centrality together with sample breaking seems to require a strong l_infinity bound on the entries of the stationary distribution of a Markov chain whose transition matrix entries have been estimated up to some additive error. Previous works have provided only l_2 error bounds. This key step helps them separate the top K items from the rest thanks to the assumed separation. I did not have the time to go through the whole appendix. However, I quickly checked the proof of Theorem 3, which seems to be quite crucial. It appears to be correct. Weaknesses: a) Spectral MLE could be used once sample breaking is employed and new pairwise ordered samples are generated. Why has that not been compared with the proposed approach in the experiments of Section 4.2? In fact, it could have been compared even in the synthetic experiments of Section 4.1 for M > 2. b) The current method's density requirements are not optimal (at least w.r.t. the case M=2). I was wondering if, experimentally, sample breaking + some other algorithmic approach could be shown to be better?
nips_2017_1663
Robust Estimation of Neural Signals in Calcium Imaging Calcium imaging is a prominent technology in neuroscience research which allows for simultaneous recording of large numbers of neurons in awake animals. Automated extraction of neurons and their temporal activity from imaging datasets is an important step in the path to producing neuroscience results. However, nearly all imaging datasets contain gross contaminating sources which could originate from the technology used, or the underlying biological tissue. Although past work has considered the effects of contamination under limited circumstances, there has not been a general framework treating contamination and its effects on the statistical estimation of calcium signals. In this work, we proceed in a new direction and propose to extract cells and their activity using robust statistical estimation. Using the theory of M-estimation, we derive a minimax optimal robust loss, and also find a simple and practical optimization routine for this loss with provably fast convergence. We use our proposed robust loss in a matrix factorization framework to extract the neurons and their temporal activity in calcium imaging datasets. We demonstrate the superiority of our robust estimation approach over existing methods on both simulated and real datasets.
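The flavor of the robust-estimation idea can be shown on a scalar location problem: under sparse, large, positive contamination, an M-estimate with a capped (one-sided Huber-style) influence function stays near the true level while the plain mean is dragged upward. The specific loss, threshold kappa, and contamination model below are illustrative, not the minimax loss derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(8)

true, n = 2.0, 500
x = true + rng.normal(scale=0.2, size=n)          # nominal fluorescence
idx = rng.choice(n, size=25, replace=False)
x[idx] += rng.uniform(3.0, 8.0, size=25)          # sparse positive outliers

def psi(r, kappa=0.5):
    """Influence function of a one-sided Huber-style loss: positive
    residuals larger than kappa get a capped pull on the estimate."""
    return np.where(r > kappa, kappa, r)

m = np.median(x)
for _ in range(200):                              # fixed-point iteration
    m += 0.2 * psi(x - m).mean()                  # solve mean(psi(x - m)) = 0

print("true %.3f | sample mean %.3f | robust estimate %.3f"
      % (true, x.mean(), m))
```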
This is an important and timely paper on automated signal detection in calcium imaging of neurons. The authors have developed a new methodology based on a careful noise model and robust statistics. The prevalence of calcium imaging experimental studies has increased interest in powerful analysis methods for understanding correlations of neuronal firing patterns, and this paper represents a strong advance in this direction. The use of a robust location estimator appears to be a good approach, particularly given the magnitude of signal and noise variability and the strong presence of outliers in neuropil. It is somewhat surprising that previous methods have not considered this. The authors' noise model, a superposition of positive sources and a lower-amplitude normal component, is interesting and gives more statistical power in the analysis. In equation (2) these components should be labelled more explicitly for readability. The authors present a rigorous argument for the algorithm's convergence rate, which is fast. The authors present a nice comparison of their EXTRACT algorithm with two other approaches on actual manually labelled microendoscopic single-photon imaging data and show superior performance. This section of the paper could be developed a little more carefully, with more explicitly stated performance statistics, although Figure 4 is well done. In summary, this is a very strong paper on a timely and important topic. The availability of code, and a note on how it might be deployed, would be helpful.
nips_2017_1662
Deep Recurrent Neural Network-Based Identification of Precursor microRNAs MicroRNAs (miRNAs) are small non-coding ribonucleic acids (RNAs) which play key roles in post-transcriptional gene regulation. Direct identification of mature miRNAs is infeasible due to their short lengths, and researchers instead aim at identifying precursor miRNAs (pre-miRNAs). Many of the known pre-miRNAs have distinctive stem-loop secondary structure, and structure-based filtering is usually the first step to predict the possibility of a given sequence being a pre-miRNA. To identify new pre-miRNAs that often have non-canonical structure, however, we need to consider additional features other than structure. To obtain such additional characteristics, existing computational methods rely on manual feature extraction, which inevitably limits the efficiency, robustness, and generalization of computational identification. To address the limitations of existing approaches, we propose a pre-miRNA identification method that incorporates (1) a deep recurrent neural network (RNN) for automated feature learning and classification, (2) multimodal architecture for seamless integration of prior knowledge (secondary structure), (3) an attention mechanism for improving long-term dependence modeling, and (4) an RNN-based class activation mapping for highlighting the learned representations that can contrast pre-miRNAs and non-pre-miRNAs. In our experiments with recent benchmarks, the proposed approach outperformed the compared state-of-the-art alternatives in terms of various performance metrics.
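A numpy sketch of the attention component: per-position recurrent states are scored with an additive attention head, softmax-normalized into weights, pooled into a context vector, and passed to a logistic output. A plain vanilla RNN stands in for the paper's LSTM, and all weights are random and untrained, so the printed score is meaningless except as a shape check.

```python
import numpy as np

rng = np.random.default_rng(9)

T, d, h = 60, 12, 24          # sequence length, input dim, RNN state size
X = rng.normal(size=(T, d))   # per-position sequence+structure encoding

# A minimal vanilla RNN standing in for the paper's LSTM.
Wx, Wh = rng.normal(size=(d, h)) * 0.3, rng.normal(size=(h, h)) * 0.3
H, s = np.zeros((T, h)), np.zeros(h)
for t in range(T):
    s = np.tanh(X[t] @ Wx + s @ Wh)
    H[t] = s

# Additive attention: score each position, softmax, pool, classify.
Wa, va = rng.normal(size=(h, h)) * 0.3, rng.normal(size=h) * 0.3
e = np.tanh(H @ Wa) @ va
alpha = np.exp(e - e.max()); alpha /= alpha.sum()
context = alpha @ H                          # weighted sum over positions
w_out = rng.normal(size=h) * 0.3
p = 1 / (1 + np.exp(-(context @ w_out)))     # P(pre-miRNA), untrained
print(int(alpha.argmax()), round(float(p), 3))  # most-attended position, score
```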
The paper presents an LSTM model with an attention mechanism for classifying whether an RNA molecule is a pre-microRNA from its sequence and secondary structure. Class weights are incorporated into the log-loss to account for class imbalance in the datasets used. The proposed method is extensively evaluated against 5 other existing methods on 3 datasets, and is shown to outperform the existing methods in most cases. The paper then attempts to give some insight into the features that are important for achieving good performance: first, by showing that secondary structures are largely responsible, with sequence features giving a small boost, and second, by interpreting the attention weights using an adapted version of class activation mapping (proposed in an earlier CVPR paper). Results using different architectures and hyperparameters are also shown. The paper presents an interesting application of LSTMs to biological data, and has some novel elements for model interpretation. The main issue I have with this paper is how the method is evaluated. If the goal is to find new pre-microRNAs, especially those that have noncanonical structures (as stated in the abstract), special care must be taken in the construction of the training and test sets to make sure the reported performance is reflective of performance on this discovery task. This is why in ref [10], describing miRBoost, methods are evaluated on their ability to predict completely new pre-microRNAs absent from earlier versions of miRBase (included as the 'test' dataset in this paper). While holding out 10% of the dataset as a test set may seem sufficient, it results in a far smaller evaluation set of positives (1/5 - 1/10 the size) than using the 'test' dataset, and may also allow the models to learn structural features of newly discovered RNAs, some of which may have been randomly placed in the training set. An even stricter evaluation would use "structurally nonhomologous training and test data sets", as proposed in Rivas et al. (PMID:22194308), which discusses the importance of doing so for properly evaluating methods on the related RNA secondary structure prediction task. If a model performs well on this stricter evaluation, where test RNA sequences are structurally different from training sequences, one can be more confident in its ability to discover structurally novel pre-microRNAs. A related comment on the evaluation: the AUROC metric is unsuitable for tasks where only performance on the positive class is important (as here), and the area under the precision-recall curve (AUPR) or, even better, the area under the precision-recall-gain curve (AUPRG; Flach and Kull, NIPS 2015) should be used. The adaptation of the class activation mapping method to the LSTM model is interesting, and the paper hints that it could indeed be useful for uncovering new biological insight. It would strengthen the section if features common to the negatively predicted examples could be discussed as well. The attention mechanism seemed to make a big difference to the performance - it would be interesting if it were possible to dissect how/why this occurs in the model (and on this task). Does the attention mechanism make a bigger difference for longer sequences? And is it possible to show some of the long-range dependencies that the attention mechanism picks up?
Other questions for the authors: - In section 4.3, were the chosen examples actually correctly classified as positives and negatives, respectively? - Line 175 - how were the dropout rates selected empirically? Using cross-validation? - In the discussion it is mentioned that additional data modes can be added as separate network branches. Why was this approach not taken for the structure data, instead of using an explicit encoding that simultaneously considers structure and sequence? Typos: Line 91: "long-term dependency" -> "long-term dependencies" Table 1: "Positive exmaples" -> "Positive examples" Table 2 caption: "under curve of receiver operation characteristic" -> "area under receiver operating characteristic curve" Table 3: microPred (test), cross-species, 0.985 should be highlighted instead Figure 4 caption: "RNN ouputs" -> "RNN outputs" Line 237: "technqiue" -> "technique" Reference 10 should be Tran et al. not Tempel et al.
nips_2017_1404
Collaborative PAC Learning We consider a collaborative PAC learning model, in which k players attempt to learn the same underlying concept. We ask how much more information is required to learn an accurate classifier for all players simultaneously. We refer to the ratio between the sample complexity of collaborative PAC learning and its non-collaborative (single-player) counterpart as the overhead. We design learning algorithms with O(ln(k)) and O(ln^2(k)) overhead in the personalized and centralized variants of our model. This gives an exponential improvement upon the naïve algorithm that does not share information among players. We complement our upper bounds with an Ω(ln(k)) overhead lower bound, showing that our results are tight up to a logarithmic factor. A natural but naïve algorithm that forgoes collaboration between players can achieve our objective by taking, from each player's distribution, a number of samples that is sufficient for learning the individual task, and then training a classifier over all samples. Such an algorithm uses k times as many samples as needed for learning an individual task - we say that this algorithm incurs O(k) overhead in sample complexity. By contrast, we are interested in algorithms that take advantage of the collaborative environment, learn k tasks by sharing information, and incur o(k) overhead in sample complexity. We study two variants of the aforementioned model: personalized and centralized. In the personalized setting (as in the department store example), we allow the learning algorithm to return different functions for different players. That is, our goal is to return classifiers f_1, ..., f_k that have error of at most ε on the player distributions D_1, ..., D_k, respectively. In the centralized setting (as in the hospital example), the learning algorithm is required to return a single classifier f that has an error of at most ε on all player distributions D_1, ..., D_k. Our results provide upper and lower bounds on the sample complexity overhead required for learning in both settings.
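The claimed overheads are easy to tabulate. The snippet below plugs in a standard realizable-case PAC bound as a placeholder for the single-task sample complexity (the paper's constants differ) and compares the naive k-fold cost with the O(ln k) personalized and O(ln^2 k) centralized overheads.

```python
import math

def m_single(d, eps, delta):
    """Classic realizable-case PAC sample bound; a standard placeholder,
    not the paper's exact constants."""
    return math.ceil((d * math.log(1 / eps) + math.log(1 / delta)) / eps)

d, eps, delta, k = 10, 0.05, 0.05, 64
base = m_single(d, eps, delta)
naive = k * base                                      # no collaboration
personalized = round((1 + math.log(k)) * base)        # O(ln k) overhead
centralized = round((1 + math.log(k) ** 2) * base)    # O(ln^2 k) overhead
print("samples:", naive, personalized, centralized)
print("overheads: %.1f vs %.1f vs %.1f"
      % (k, 1 + math.log(k), 1 + math.log(k) ** 2))
```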
**Edit** I have read the other reviews and rebuttal. I thank the authors for their feedback and clarification, and my positive evaluation of this paper remains unchanged. The authors give theoretical guarantees for collaborative learning with shared information under the PAC regime. They show that the sample complexity for (\epsilon,\delta)-PAC learning k classifiers for a shared problem (or a size-k majority-voting ensemble) only needs to grow by a factor of approximately 1+log(k) (or 1+log^2(k)) rather than the factor of k that naive theory might predict. The authors provide pseudocode for two algorithms treated by the theory. These were correct as far as I could see, though I didn't implement either. The paper is mostly clearly written, strikes a sensible balance between what is included here and what is in the supplementary material, and I enjoyed reading it. I checked the paper and sections A and B of the supplementary material quite carefully, and skimmed C and D. There were no obvious major errors I could find. It would, of course, have been ideal to see experimental results corroborating the theory, but space is limited and the theory is the main contribution. Hopefully those will appear in a later extended version. I have mostly minor comments: Somewhere it should be highlighted explicitly that t is a placeholder for log_{8/7}(k); thus although t is indeed logarithmic in k, it is also not *very* small compared to k unless k is large (a few hundred, say), and in this setting a very large k strikes me as unlikely. On the other hand, some central steps of the proof involve union bounding over 2t terms, which is usually disastrous, but here the individual sample complexities for each classifier still only grow like log(2t/\delta), i.e. loglog(k), and you may like to highlight that as well. I think it is also worth mentioning that the guarantees for the second algorithm depend on a large enough quorum of individual voters being right at the same time (w.h.p.); thus the guarantee, I think, is for the whole ensemble and not for parts of it, i.e. since we don't know which individual classifiers are right and which are not. In particular, could early stopping or communication errors in practice potentially invalidate the theoretical guarantees here? Line 202 and elsewhere: use the notation of line 200, i.e. n_{r-1,c} + \frac{1}{8}n_{r-1,c-1}, since it is much clearer. Line 251: can you please clarify the notation? \mathcal{D}_{\orth} is a distribution over - what? - and in what relation to the \mathcal{D}_i? It seems like \mathcal{F}_k and \mathcal{D}_k have some dependency on one another? Line 393 in the supplementary should be \forall r < r' \in [t]. A comment - the theory for the second algorithm looks like it could be adapted to, say, a bagged ensemble with only very minor modifications. There is little in the way of good theory for ensemble classification; in particular, when an ensemble of "weak" classifiers is better than a single "strong" classifier is not well understood.
nips_2017_2162
Triple Generative Adversarial Nets Generative Adversarial Nets (GANs) have shown promise in image generation and semi-supervised learning (SSL). However, existing GANs in SSL have two problems: (1) the generator and the discriminator (i.e. the classifier) may not be optimal at the same time; and (2) the generator cannot control the semantics of the generated samples. The problems essentially arise from the two-player formulation, where a single discriminator shares the incompatible roles of identifying fake samples and predicting labels, and it only estimates the data without considering the labels. To address the problems, we present the triple generative adversarial net (Triple-GAN), which consists of three players: a generator, a discriminator and a classifier. The generator and the classifier characterize the conditional distributions between images and labels, and the discriminator solely focuses on identifying fake image-label pairs. We design compatible utilities to ensure that the distributions characterized by the classifier and the generator both converge to the data distribution. Our results on various datasets demonstrate that Triple-GAN as a unified model can simultaneously (1) achieve state-of-the-art classification results among deep generative models, and (2) disentangle the classes and styles of the input and transfer smoothly in the data space via interpolation in the latent space class-conditionally.
Summary: The paper presents a GAN-like architecture called Triple-GAN that, given partially labeled data, is designed to achieve simultaneously the following two goals: (1) get a good generator that generates realistically-looking samples conditioned on class labels; (2) get a good classifier, with the smallest possible prediction error. The paper shows that other similar GAN-based approaches always implicitly privileged either (1) or (2), and/or needed much more labeled data to train the classifier. By separating the discrimination task between true and fake data from the classification task, the paper outperforms the state of the art, both in (1) and (2). In particular, the classifier achieves high accuracy with only a very small labeled dataset, while the generator produces state-of-the-art images, even when conditioned on y labels. Quality & Clarity: The paper indeed identifies plausible reasons for the failure/inefficiency of the other similar GAN-type methods (such as Improved-GAN). The underlying idea behind Triple-GAN is very elegant and comes with both nice theoretical justifications and impressive experimental results. However, the overall clarity of exposition, the text flow and the syntax can still be much improved. Concerning syntax: too many typos and too many grammatically incorrect sentences. Concerning the exposition: there are too many vague or even unclear sentences/paragraphs (see below); even the statements of some corollaries are vague (Corollary 3.2.1 & 3.3.1). Some proofs are dismissed as evident or refer to results without a precise citation. There are some long and unnecessary repetitions (f.ex. l.58-73 and l.120-129, which are essentially the same), while other points that would need more explanation are only mentioned in a few unclear sentences (l.215-220). This is in striking contrast with the quality of this paper's content. So before publication, there will be some serious work to do on the overall presentation, exposition, and syntax! Detailed comments: l.13-14: the expression 'concentrating to a data distribution', which is used in several places in this paper, does not exist. Please use f.ex. something like 'converges to'. l.37-38: the formulation of points (1) and (2) is too vague and not understandable on a first read. Please reformulate. l.46-49: good point, but the formulation is cumbersome. Reformulate it more neatly. You could f.ex. use an 'on the one hand ... on the other ...' type of construction. l.52-56: Unclear. ('the marginal distribution of data', 'the generator cannot leverage the missing labels inferred by the discriminator': what do you mean?). Please reformulate. l.58-73: too much unnecessary overlap with l.120-129. (And the text is clearer in l.120-129, I find...). Please mind repetitions. One only needs repetitions if one messed up the explanation in the first place... l.140 & 157: please format your equations more nicely. l.171: proof of Lemma 3.1: the last sentence needs an explanation or a reference. l.174: Corollary 3.2.1: I find it clearer to say p(x) = p_g(x) = p_d(x) and p(y) = p_g(y) = p_d(y). l.185: Corollary 3.3.1: What is a 'regularization on the distances btw ...'? Define it. Without this definition, I can't say whether the proof is obvious or not. And in any case, please say in one or two sentences why the statement is true, instead of dismissing it as obvious. l.215-220: I don't see what the REINFORCE algorithm, which is used in reinforcement learning, has to do with your algorithm. And the whole paragraph is quite obscure to me... Please reformulate.
(Maybe it needs to be a bit longer?). Experiments: - please state clearly when you are comparing to numbers reported in other papers, and when you really re-ran the algorithms yourself. l.228: '10 000 samples' should be '10 000 test samples'. l.232: '(Results are averaged over 10 runs)': don't put it in parentheses: it's important! Also, repeat this in the caption of Table 1. l.254-263: how much labeled data do you use for each of the experiments reported here? Please specify! Also, you did not mention that Triple-GAN simply also has more parameters than an Improved GAN, so it already has an advantage. Please mention it somewhere. Table 1: what are the numbers in parentheses? One standard deviation? Calculated using the 10 runs? l.271: 'the baseline repeats to generate strange samples': reformulate. Generally: there are quite a few articles ('the') and other small words missing in the text. A careful proofreading will be necessary before publication. Answer to rebuttal: We thank the authors for their answers and hope that they will incorporate all the necessary clarifications asked for by the reviewers. And please be precise: in your rebuttal, you speak of a distance over joint probability distributions, and then cite the KL-divergence as an example. But mathematically speaking, the KL-divergence is not a distance! So please, be precise in your mathematical statements!
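To make the last point self-contained, here is a short numerical check (my own illustration, not taken from the paper or the rebuttal) that the KL-divergence already fails the symmetry axiom of a distance:

    import numpy as np

    def kl(p, q):
        # Discrete KL-divergence: sum_i p_i * log(p_i / q_i)
        p, q = np.asarray(p, float), np.asarray(q, float)
        return float(np.sum(p * np.log(p / q)))

    p, q = [0.9, 0.1], [0.5, 0.5]
    print(kl(p, q))  # ~0.368
    print(kl(q, p))  # ~0.511 -- KL(p||q) != KL(q||p), so KL is not a metric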
nips_2017_2304
Polynomial Codes: an Optimal Design for High-Dimensional Coded Matrix Multiplication We consider a large-scale matrix multiplication problem where the computation is carried out using a distributed system with a master node and multiple worker nodes, where each worker can store parts of the input matrices. We propose a computation strategy that leverages ideas from coding theory to design intermediate computations at the worker nodes, in order to optimally deal with straggling workers. The proposed strategy, named polynomial codes, achieves the optimum recovery threshold, defined as the minimum number of workers that the master needs to wait for in order to compute the output. This is the first code that achieves the optimal utilization of redundancy for tolerating stragglers or failures in distributed matrix multiplication. Furthermore, by leveraging the algebraic structure of polynomial codes, we can map the reconstruction problem of the final output to a polynomial interpolation problem, which can be solved efficiently. Polynomial codes provide order-wise improvement over the state of the art in terms of recovery threshold, and are also optimal in terms of several other metrics including computation latency and communication load. Moreover, we extend this code to distributed convolution and show its order-wise optimality.
This paper extends the coded distributed computing of matrix multiplication proposed in Lee et al., 2016. The idea is to distribute the matrices by pre-multiplying them with the generator matrices of a Reed-Solomon code. Since Reed-Solomon codes are MDS codes, any K out of N distributed components are sufficient to recover the desired matrix multiplication. I am somewhat confused by this paper. Reed-Solomon codes ARE defined to be evaluations of polynomials over a finite field (or any field). So why do the authors say that they define something new called polynomial codes, and then follow up by saying the decoding is just like that of Reed-Solomon codes? Reed-Solomon codes are MDS codes (in fact, Lee et al. also used Reed-Solomon codes; they just called them MDS codes). The key idea of using codes for both matrices in matrix-matrix multiplication has appeared recently in Lee et al., 2017. I think there is a major problem from an application point of view in this paper that was not there for the simple matrix-vector multiplication of Lee et al., 2016. In the case of the matrix-vector multiplication A times x of Lee et al., 2016, the matrix A was fixed and x was variable. The matrix A was coded and distributed to the workers; x was uncoded. So the encoding is done only one time. It does indeed speed up the distributed computation. On the other hand, in this paper both the matrices A and B that are to be multiplied are coded. As a result, when two matrices are given to multiply, a central node has to do a lot of polynomial evaluations and then distribute the evaluations among workers. The overhead for the central node seems to be far greater than the actual computing task. Notably, this has to be done every time, as opposed to the one-time encoding of Lee et al., 2016, or Dutta et al., 2016. Also surprising is the lack of experimental results. Most of the related literature is interesting because of the superior experimental performance it exhibits on real systems. This paper does not make any attempt in that direction. However, since theoretically it is not that novel, and given the overhead in evaluating the polynomials, one really needs experimental validation of the methods. ======================== After the rebuttal: I did not initially see the making of the product MDS as a big deal (i.e., the condition that no two terms in expression (16) have the same exponent of x is not that difficult to satisfy). But on second thought, the simplicity should not be a deterrent, given that the previous papers left this problem open. There is one other point. What I understand is that each matrix is encoded using polynomials such that the products become codewords of Reed-Solomon codes. I do not necessarily agree with the term "polynomial code" and the claim that a new class of codes has been discovered. Finally, it is not easy to approximate matrix multiplication over the reals with finite-field multiplication. I don't know why that is being seen as straightforward quantization.
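To make the encoding/decoding discussion concrete, here is my own toy sketch over the reals (the paper works over a finite field, and its actual contribution is coding both factors A and B so that products are again evaluations of a single polynomial; the sketch below shows only the one-sided encoding and the interpolation-based decoding):

    import numpy as np

    m = 2                                   # split A into m row blocks A_0, A_1
    A = np.random.randn(4, 3)
    blocks = np.split(A, m)

    # Worker i stores the polynomial evaluation sum_j A_j * x_i^j
    xs = np.array([1.0, 2.0, 3.0])          # one evaluation point per worker
    stored = [sum(Bj * x**j for j, Bj in enumerate(blocks)) for x in xs]

    # The master decodes from ANY m workers (here workers 0 and 2) by
    # solving an entrywise polynomial-interpolation (Vandermonde) system.
    V = np.vander(xs[[0, 2]], m, increasing=True)
    coeffs = np.linalg.solve(V, np.stack([stored[0], stored[2]]).reshape(m, -1))
    assert np.allclose(coeffs.reshape(m, *blocks[0].shape), blocks)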
nips_2017_2305
Unsupervised Learning of Disentangled Representations from Video We present a new model, DRNET, that learns disentangled image representations from video. Our approach leverages the temporal coherence of video and a novel adversarial loss to learn a representation that factorizes each frame into a stationary part and a temporally varying component. The disentangled representation can be used for a range of tasks. For example, applying a standard LSTM to the time-varying components enables prediction of future frames. We evaluate our approach on a range of synthetic and real videos, demonstrating the ability to coherently generate hundreds of steps into the future.
This paper presents a neural network architecture and a video-based objective function formulation for the disentanglement of pose and content features in each frame. The proposed neural network consists of encoder CNNs and a decoder CNN. The encoder CNNs are trained to decompose frame inputs into content features (i.e. background, colors of present structures, etc.) and pose features (i.e. the current configuration of the structures in the video). A decoder CNN is then used to combine the content features of one frame and the pose features of a different frame to reconstruct the frame that corresponds to the pose features. This paper presents a new loss for the disentanglement of pose features from input images using a discriminator network (similar to an adversarial loss). They train a discriminator network to find similarities between features from the pose encoder of images from the same video but different time steps. The pose encoder is then trained to find features that the discriminator cannot distinguish as being similar. At convergence, the pose encoder features will only contain differences between the frames, which should reflect the pose change of the person over time. After the encoder-decoder CNN has been trained to identify the content and pose features, an LSTM is trained to take the content features of the last frame and previous pose features to predict the next pose features. The disentangled content and pose features result in high-quality video prediction. The experimental results back up the advantage of this method in terms of feature quality for classification tasks, and show quantitative and qualitative performance boosts over the state-of-the-art in video prediction. Pros: Novel video-based loss functions for feature disentanglement. Excellent results on video prediction using the learned pose and content features. Cons: Although this paper claims to present an unsupervised method, the proposed objective function actually needs some weak supervision, namely that different videos should have different content, e.g. object instances; otherwise the second term in equation 3 could be negatively affected (classifying images from different videos as different when they may look the same). The pose LSTM is trained separately from the encoder-decoder CNN. Can the authors comment on why joint end-to-end training was not performed? In fact, similar networks for end-to-end video prediction have been explored in Oh et al., NIPS 2015. Some typos in equations and text: Should equation 3 be L_{adversarial}(C) instead of L_{adversarial}(D)? h^t_c in the LSTM input could be denoted as h^n, where n is the last observed frame index; t makes it seem like the current step in generation. Lines 204 and 219 point to Fig. 4.3, which is not in the main paper. Figure 4 should have matching test sequences since it expects us to compare the "left", "middle", and "right" results. The action classification experiments are a little confusing: It is not clear what features are used to determine the action class and how they are used (i.e. concatenated features? Single features averaged for each video? Classification on clips?). The paper states: "In either case, we train a two layer classifier network S on top of either hc or hp, with its output predicting the class label y." In KTH, the content is not a good indicator of the action that is happening in the video, since the same people perform all actions in similar backgrounds. Thus it's not clear how the content features can be useful for action classification. 
Classification results of a network trained without the pose disentanglement loss could make the classification results stronger, if the previous issues are clarified. Pose estimation results using the disentangled pose features are strongly recommended, as they would significantly strengthen the paper. For the SUNCG dataset (3D object renderings), a comparison with the weakly-supervised pose disentangling work by Yang et al., NIPS 2015 is suggested to highlight the performance of the proposed method. The use of the first term in L_{adversarial}(E_p) is not clear to me; it seems that the first and second terms in this loss are pulling the features in completely opposite directions, which, to my understanding, is helping D instead of confusing it. Isn't confusing D the whole purpose of L_{adversarial}(E_p)? The title could be more specific to the contribution (adversarial-like loss for content-pose feature disentangling).
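To ground my question about L_{adversarial}(E_p), here is how I read the scene-discriminator objective (a paraphrase in PyTorch-style code; the variable names, the assumption that D outputs a probability, and the exact sign conventions are my guesses, not the authors'):

    import torch
    import torch.nn.functional as F

    def discriminator_loss(D, hp_same1, hp_same2, hp_diff):
        # D (assumed to output a probability) is trained to say 1 for pose
        # pairs from the SAME video and 0 for pairs from DIFFERENT videos.
        same = D(hp_same1, hp_same2)
        diff = D(hp_same1, hp_diff)
        return F.binary_cross_entropy(same, torch.ones_like(same)) + \
               F.binary_cross_entropy(diff, torch.zeros_like(diff))

    def pose_encoder_adversarial_loss(D, hp_same1, hp_same2):
        # The pose encoder is trained so that D cannot tell same-video pairs
        # apart: its output on them is pushed towards 1/2. Written as a
        # cross-entropy against the constant target 1/2, this produces the
        # two seemingly opposing terms I ask about above.
        same = D(hp_same1, hp_same2)
        return F.binary_cross_entropy(same, 0.5 * torch.ones_like(same))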
nips_2017_2163
Efficient Sublinear-Regret Algorithms for Online Sparse Linear Regression with Limited Observation Online sparse linear regression is the task of applying linear regression analysis to examples arriving sequentially, subject to the resource constraint that only a limited number of features of each example can be observed. Despite its importance in many practical applications, it has been recently shown that there is no polynomial-time sublinear-regret algorithm unless NP⊆BPP, and only an exponential-time sublinear-regret algorithm has been found. In this paper, we introduce mild assumptions to solve the problem. Under these assumptions, we present polynomial-time sublinear-regret algorithms for online sparse linear regression. In addition, thorough experiments with publicly available data demonstrate that our algorithms outperform other known algorithms.
The paper considers the online sparse regression problem introduced by Kale (COLT'14), in which the online algorithm can only observe a subset of k features of each data point and has to sequentially predict a label based only on this limited observation (it can thus only use a sparse predictor for each prediction). Without further assumptions, this problem has been recently shown to be computationally hard by Foster et al. (ALT'16). To circumvent this hardness, the authors assume a stochastic i.i.d. setting, and under two additional distributional assumptions they give efficient algorithms that achieve sqrt(T) regret. The results are not particularly exciting, but they do give a nice counter to the recent computational impossibility of Foster et al., in a setting where the data is i.i.d. and well-specified by a k-sparse vector. One of the main things I was missing in the paper is a proper discussion relating its setup, assumptions and results to the literature on sparse recovery / compressed sensing / sparse linear regression. Currently, the paper only discusses the literature and previous work on attribute-efficient learning. In particular, I was not entirely convinced that the main results of the paper are not simple consequences of known results in sparse recovery: since the data is stochastic i.i.d., one could implement a simple follow-the-leader style algorithm by solving the ERM problem at each step; this could be done efficiently by applying a standard sparse recovery/regression algorithm under a RIP condition. It seems that this approach might give a sqrt(T)-regret bound similar to the one established in the paper, perhaps up to log factors, under the authors' Assumption (a) (which is much stronger than RIP: it implies that the data covariance matrix is well-conditioned, so the unique linear regression solution is the sparse one). While the authors do consider an additional assumption which is somewhat weaker than RIP (Assumption b), it is again not properly discussed and related to more standard assumptions in sparse recovery (e.g., RIP), and it was not clear to me how reasonable and meaningful this assumption is. Questions / comments: * The "Online" in the title of the paper is somewhat misleading: the setting of the paper is actually a stochastic i.i.d. setting (where the objective is still the regret), whereas "online" typically refers to the worst-case non-statistical online setting. * The description of Alg 1 is quite messy and has to be cleaned up. For example: - line 192, "computing S_t": the entire paragraph was not clear to me. What are the "largest features with respect to w_t"? - line 196: what are i,j? - line 199, "computing g_t": many t subscripts are missing in the text. * Thm 2: Why is k_1 arbitrary? Why not simply set it to k'-2 (which seems to give the best regret bound)? * Can you give a simpler algorithm under Assumption (b) for the easier case where k <= k'+2 (analogous to Alg 1)? Minors / typos: * line 14: the references to [11,10] seem out of context. * line 156, Assumption (a): use an explicit constant in the assumption, rather than O(1); in the analysis you actually implicitly assume this constant is 1. * line 188: "dual average" => "dual averaging". * Lem 3 is central to the paper and should be proven in the main text. * Lem 4 is a standard bound for dual averaging / online gradient descent, and a proper reference should be given. Also, it seems that a simpler, fixed step-size version of the bound suffices for your analysis. * line 226: "form" => "from". 
* line 246: "Rougly" => "Roughly".
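To make the follow-the-leader baseline I sketch above concrete (my own illustration, using a standard sparse-recovery solver as the ERM oracle; note this ignores the limited-observation constraint, which is exactly why it is only a sanity-check comparison, not the paper's algorithm):

    import numpy as np
    from sklearn.linear_model import OrthogonalMatchingPursuit

    def ftl_sparse_predictions(X, y, k):
        # At each round t, refit a k-sparse least-squares solution on the
        # history (x_1..x_{t-1}, y_1..y_{t-1}) and predict y_t with it.
        T = len(y)
        preds = np.zeros(T)
        for t in range(k + 1, T):           # small burn-in so the fit is sane
            omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k)
            omp.fit(X[:t], y[:t])
            preds[t] = omp.predict(X[t:t + 1])[0]
        return preds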
nips_2017_2650
PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space Few prior works study deep learning on point sets. PointNet [20] is a pioneer in this direction. However, by design PointNet does not capture local structures induced by the metric space points live in, limiting its ability to recognize fine-grained patterns and its generalizability to complex scenes. In this work, we introduce a hierarchical neural network that applies PointNet recursively on a nested partitioning of the input point set. By exploiting metric space distances, our network is able to learn local features with increasing contextual scales. Further observing that point sets are usually sampled with varying densities, which results in greatly decreased performance for networks trained on uniform densities, we propose novel set learning layers to adaptively combine features from multiple scales. Experiments show that our network, called PointNet++, is able to learn deep point set features efficiently and robustly. In particular, results significantly better than the state-of-the-art have been obtained on challenging benchmarks of 3D point clouds.
As clearly indicated in the title, this paper submission is an extension of the PointNet work of [19], to appear at CVPR 2017. The goal is to classify and segment (3D) point clouds. Novel contributions over [19] are the use of a hierarchical network, leveraging neighbourhoods at different scales, and a mechanism to deal with varying sampling densities, effectively generating receptive fields that vary in a data-dependent manner. All this leads to state-of-the-art results. PointNet++ seems to be an important extension of PointNet, in that it allows local spatial information to be properly exploited. Yet the impact on the overall performance is just 1-2 percent. Some more experimental or theoretical analysis would have been appreciated. For instance: - A number of alternative sampling options, apart from most-distant-point sampling, are mentioned in the text, but not compared against in the experiments. - In the conclusions, it is mentioned that MSG is more robust than MRG but worse in terms of computational efficiency. I'd like to see such a claim validated by a more in-depth analysis of the computational load, or at least some indicative numbers.
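For reference, the "most distant point" (farthest point) sampling that the alternatives would be compared against is simple enough to state in a few lines; my sketch:

    import numpy as np

    def farthest_point_sampling(points, m):
        # points: (N, 3); greedily pick m centroids, each maximizing the
        # distance to the set already chosen.
        N = points.shape[0]
        chosen = [np.random.randint(N)]
        d = np.linalg.norm(points - points[chosen[0]], axis=1)
        for _ in range(m - 1):
            nxt = int(np.argmax(d))
            chosen.append(nxt)
            d = np.minimum(d, np.linalg.norm(points - points[nxt], axis=1))
        return np.array(chosen)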
nips_2017_773
Soft-to-Hard Vector Quantization for End-to-End Learning Compressible Representations We present a new approach to learn compressible representations in deep architectures with an end-to-end training strategy. Our method is based on a soft (continuous) relaxation of quantization and entropy, which we anneal to their discrete counterparts throughout training. We showcase this method for two challenging applications: Image compression and neural network compression. While these tasks have typically been approached with different methods, our soft-to-hard quantization approach gives results competitive with the state-of-the-art for both.
This paper proposes a unified end-to-end framework for training neural networks to obtain compressible representations or weights. The proposed method encourages compressibility by minimizing the entropy of the representation or the weights. Because the original entropy-based objective cannot be minimized directly, the authors instead relax the objective and use discretized approximations. The proposed method also uses an annealing schedule, starting from a soft discretization and gradually transforming it into a hard discretization. My main concerns with this paper are about the experiments and comparisons. The authors performed two sets of experiments, one on compressing image representations and the other on compressing classifier weights. For the image compression experiment, the proposed method offers visually appealing results, but it is only compared to BPG and JPEG. I suggest the authors also include comparisons against other neural-network-based approaches [1]. In the weight compression experiment, the proposed method does not show an advantage over the previous state-of-the-art. The authors noted this but argued that the proposed method is trained end-to-end while the previous methods have many hand-designed steps. However, I would expect integrated end-to-end training to improve performance instead of hindering it. I suggest the authors investigate more and improve their method to fully realize the benefit of end-to-end training. [1] Full Resolution Image Compression with Recurrent Neural Networks
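To make the relaxation concrete, here is my own minimal sketch of a soft-to-hard scalar quantizer (the inverse-temperature sigma plays the role of the paper's annealing parameter; the names are mine):

    import numpy as np

    def soft_quantize(z, centers, sigma):
        # Soft assignment: softmax of negative scaled squared distances.
        # As sigma -> infinity this approaches hard nearest-center rounding.
        d2 = (z[:, None] - centers[None, :]) ** 2
        w = np.exp(-sigma * d2)
        w /= w.sum(axis=1, keepdims=True)
        return w @ centers

    z = np.array([0.1, 0.4, 0.9])
    centers = np.array([0.0, 0.5, 1.0])
    print(soft_quantize(z, centers, sigma=1.0))    # smooth, differentiable
    print(soft_quantize(z, centers, sigma=100.0))  # ~[0.0, 0.5, 1.0]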
nips_2017_514
One-Sided Unsupervised Domain Mapping In unsupervised domain mapping, the learner is given two unmatched datasets A and B. The goal is to learn a mapping G_AB that translates a sample in A to the analog sample in B. Recent approaches have shown that when learning simultaneously both G_AB and the inverse mapping G_BA, convincing mappings are obtained. In this work, we present a method of learning G_AB without learning G_BA. This is done by learning a mapping that maintains the distance between a pair of samples. Moreover, good mappings are obtained, even by maintaining the distance between different parts of the same sample before and after mapping. We present experimental results showing that the new method not only allows for one-sided mapping learning, but also leads to preferable numerical results over the existing circularity-based constraint. Our entire code is made publicly available at https://github.com/sagiebenaim/DistanceGAN.
This paper tackles the problem of unsupervised domain adaptation. The paper introduces a new constraint, which compares samples and enforces a high cross-domain correlation between the matching distances computed in each domain. An alternative to the pairwise distance is provided for cases in which we only have access to one data sample at a time. In this case, the same rationale can be applied by splitting the images and comparing the distances between their left/right or up/down halves in both domains. The final unsupervised domain adaptation model is trained by combining previously introduced losses (adversarial loss and circularity loss) with the new distance loss, showing that the new constraint is effective and allows for one-directional mapping. The paper is well structured and easy to follow. The main contribution is the introduction of a distance constraint to a recent state-of-the-art unsupervised domain adaptation pipeline. The introduction of the distance constraint and the assumption held by this constraint, i.e. that there is a high degree of correlation between the pairwise distances in the 2 domains, are well motivated with experimental evidence. Although the literature review is quite extensive, [a-c] might be relevant to discuss. [a] https://arxiv.org/pdf/1608.06019.pdf [b] https://arxiv.org/pdf/1612.05424.pdf [c] https://arxiv.org/pdf/1612.02649.pdf The influence of having a distance loss is extensively evaluated on a variety of datasets for models based on DiscoGAN, by changing the loss function (either maintaining all its components, or switching off some of them). Qualitative and quantitative results show the potential of the distance loss, both in combination with the cycle loss and on its own, highlighting the possibility of learning only one-sided systems. However, given that the quantitative evaluation pipeline is not robust (i.e. it depends on training a regressor), it is difficult to make any strong claims about the performance of the method. The authors qualitatively and quantitatively assess their contributions for models based on CycleGAN as well. Quantitative results are shown for the SVHN-to-MNIST mapping. However, among the large variety of mappings shown in the CycleGAN paper, the authors only pick the horse-to-zebra mapping to make the qualitative comparison. Given the subjectivity of this kind of comparison, it would be more compelling if the authors could show some other mappings such as season transfer, style transfer, other object transfigurations, or the labels2photo/photo2label task. Quantitative evaluation of CycleGAN-based models could be further improved by following the FCN scores of the CycleGAN paper on the labels-to-photo or photo-to-labels task. Finally, in the conclusions, the claim "which empirically outperforms the circularity loss" (lines 293-294) seems a bit too strong given the experimental evidence.
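To fix ideas, here is my reading of the new constraint as a training loss (a rough sketch in PyTorch; the paper standardizes the L1 distances with precomputed per-domain means and standard deviations, which I pass in here under names of my own choosing):

    import torch

    def distance_loss(x, gx, mu_A, sd_A, mu_B, sd_B):
        # Encourage |x_i - x_j|_1 in domain A to match |G(x_i) - G(x_j)|_1
        # in domain B, after standardizing with per-domain statistics.
        n, loss, pairs = x.size(0), 0.0, 0
        for i in range(n):
            for j in range(i + 1, n):
                dA = (torch.norm(x[i] - x[j], p=1) - mu_A) / sd_A
                dB = (torch.norm(gx[i] - gx[j], p=1) - mu_B) / sd_B
                loss = loss + torch.abs(dA - dB)
                pairs += 1
        return loss / pairs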
nips_2017_1336
Practical Locally Private Heavy Hitters We present new practical locally differentially private heavy hitters algorithms achieving optimal or near-optimal worst-case error — TreeHist and Bitstogram. In both algorithms, the server running time is Õ(n) and the user running time is Õ(1), hence improving on the prior state-of-the-art result of Bassily and Smith [STOC 2015] requiring Õ(n^{5/2}) server time and Õ(n^{3/2}) user time. With a typically large number of participants in local algorithms (n in the millions), this reduction in time complexity, in particular at the user side, is crucial for the use of such algorithms in practice. We implemented Algorithm TreeHist to verify our theoretical analysis and compared its performance with the performance of Google's RAPPOR code.
Private Heavy Hitters is the following problem: a set of n users each has an element from a universe of size d. The goal is to find the elements that occur (approximately) most frequently, but to do so under a local differential privacy constraint. This is an immensely useful subroutine, which can be used, e.g., to estimate the common home pages of users of a browser, or words commonly used by users of a phone. In fact, this is the functionality implemented by the RAPPOR system in Chrome, and more recently in use in iPhones. To make things a bit more formal, there is a desired privacy level that determines how well one can solve this problem. A \gamma-approximate heavy hitters algorithm outputs a set of elements that contains most elements with true relative frequency at least 2\gamma and a negligible number of elements with relative frequency less than \gamma. The best achievable accuracy is \sqrt{n log d}/eps, and Bassily and Smith showed matching lower and upper bounds. In that work, this was achieved at a server processing cost of n^{5/2}. This work gives two more practical algorithms that achieve the same gamma with a server processing cost of approximately n^{3/2}. On the positive side, this is an important problem and algorithms for this problem are already in large-scale use. Thus advances in this direction are valuable. On the negative side, in most applications the target gamma is a separate parameter (see the more detailed comment below) and it would be much more accurate to treat it as such and present and compare bounds in terms of n, d and gamma. Detailed comment: In practice, the goal is usually to collect several statistics at a small privacy cost. So a target eps, of say 1, is split between several different statistics, so that the target gamma is not \sqrt{n log d}/eps, but \sqrt{n log d}/(k eps), when we want to collect k statistics. Often this k itself is determined based on what may be achievable, for a fixed constant gamma of, say, 0.01. In short then, the gamma used in practice is almost never \sqrt{n log d}/eps, but a separate parameter. I believe therefore that the correct formulation of the problem should treat gamma as a separate parameter, instead of fixing it at ~\sqrt{n}.
nips_2017_213
A Linear-Time Kernel Goodness-of-Fit Test We propose a novel adaptive test of goodness-of-fit, with computational cost linear in the number of samples. We learn the test features that best indicate the differences between observed samples and a reference model, by minimizing the false negative rate. These features are constructed via Stein's method, meaning that it is not necessary to compute the normalising constant of the model. We analyse the asymptotic Bahadur efficiency of the new test, and prove that under a mean-shift alternative, our test always has greater relative efficiency than a previous linear-time kernel test, regardless of the choice of parameters for that test. In experiments, the performance of our method exceeds that of the earlier linear-time test, and matches or exceeds the power of a quadratic-time kernel test. In high dimensions and where model structure may be exploited, our goodness of fit test performs far better than a quadratic-time two-sample test based on the Maximum Mean Discrepancy, with samples drawn from the model.
The kernel Stein discrepancy goodness-of-fit test is based on a kernelized Stein operator T_p such that under the distribution p (and only under the distribution p), (T_p f) has mean 0 for all test functions f in an RKHS unit ball. The maximum mean of (T_p f) over the class of test functions under a distribution q thus provides a measure of the discrepancy between q and p. This quantity is known as the kernel Stein discrepancy and can also be expressed as the RKHS norm of a witness function g. In this paper, instead of estimating the RKHS norm of g using a full-sample U-statistic (which requires quadratic time to compute) or an incomplete U-statistic (which takes linear time but suffers from low power), the authors compute the empirical norm of g at a finite set of locations, which are either generated from a multivariate normal distribution fitted to the data or are chosen to approximately maximize test power. This produces a linear-time test statistic whose power is comparable to that of the quadratic-time kernel Stein discrepancy in simulations. The authors also analyze the test's approximate Bahadur efficiency. This paper is well-written and well-organized, and the proposed test dramatically improves upon the computational burden of other tests, such as the maximum mean discrepancy and kernel Stein discrepancy tests, without sacrificing too much power in simulations. I certainly believe this paper is worthy of inclusion in NIPS. Although this is not often a salient concern in machine learning, in more traditional statistical applications of hypothesis testing, randomized tests engender quite a bit of nervousness; we don't want a rejection decision in a costly clinical trial or program evaluation to hinge on having gotten a lucky RNG seed. For permutation tests or bootstrapped test statistics, we know that the rejection decision is stable as long as enough permutations or bootstrap iterations are performed. For the authors' proposed test, on the other hand, randomization is involved in choosing the test locations v_1 through v_J (or in choosing the initial locations for gradient ascent), and the simulations use a very small value of J (namely, J=5). Furthermore, in order to estimate parameters of the null distribution of the test statistic, a train-test split is needed. This raises the question: are the test statistic and rejection threshold stable (for fixed data, across different randomizations)? It would be helpful to have additional simulations regarding the stability of the rejection decision for a fixed set of data, and guidelines as to how many train-test splits are required / how large J needs to be to achieve stability. A minor comment: line 70, switch the order of "if and only if" and "for all f".
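For readers unfamiliar with the construction, here is my own toy illustration (1-D standard Gaussian model, Gaussian kernel) of what "the empirical norm of g at a finite set of locations" amounts to; the paper's normalization and calibrated rejection threshold are omitted:

    import numpy as np

    def stein_witness_stat(x, vs, h=1.0):
        # Model p = N(0,1), so the score is d/dx log p(x) = -x.
        # Stein feature: xi(x, v) = score(x) * k(x, v) + d/dx k(x, v), with a
        # Gaussian kernel k(x, v) = exp(-(x - v)^2 / (2 h^2)). Under p,
        # E[xi(X, v)] = 0 for every v; the statistic aggregates the squared
        # empirical means over the J test locations.
        stat = 0.0
        for v in vs:
            k = np.exp(-(x - v) ** 2 / (2 * h ** 2))
            dk = -(x - v) / h ** 2 * k
            stat += np.mean(-x * k + dk) ** 2
        return len(x) * stat

    rng = np.random.default_rng(0)
    vs = rng.normal(size=5)                                  # J = 5 locations
    print(stein_witness_stat(rng.normal(size=500), vs))        # small under H0
    print(stein_witness_stat(rng.normal(0.5, 1.0, 500), vs))   # larger under H1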
nips_2017_2330
Improved Graph Laplacian via Geometric Consistency In all manifold learning algorithms and tasks, setting the kernel bandwidth used to construct the graph Laplacian is critical. We address this problem by choosing a quality criterion for the Laplacian that measures its ability to preserve the geometry of the data. For this, we exploit the connection between manifold geometry, represented by the Riemannian metric, and the Laplace-Beltrami operator. Experiments show that this principled approach is effective and robust.
This paper proposes a method for improving the graph Laplacian by choosing the bandwidth parameter in the heat kernel for the neighborhood graph. The method uses geometric consistency in the sense that the Riemannian metric estimated with the graph Laplacian should be close to another estimate of the Riemannian metric based on the estimation of the tangent space. Some experiments with synthetic and real data demonstrate promising results of the proposed method. The proposed method for choosing the bandwidth seems novel and interesting. It is based on a very natural geometric idea for capturing the Riemannian geometry of the manifold. There are, however, also some concerns, as described below. - From the experimental results in Table 1, it is not very clear whether the proposed method outperforms the other methods. In comparison with Rec, the advantage of GC is often minor, and GC^{-1} sometimes gives worse results. A more careful comparison including statistical significance is necessary. - The authors say that d=1 is a good choice in the method. The arguments supporting this claim are not convincing, however. It is not clear whether Proposition 3.1 shows that d=1 is sufficient. More detailed explanations and discussions would be preferable. - The paper is easy to read, but not necessarily well written. There are many typos in the paper. To list a few: -- l.36: 'methods' is duplicated. -- eq. (2): there should be a minus sign in the exponential. -- l.179: 'on' is duplicated. -- Caption of Figure 1: 'have are'.
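For concreteness, the object whose bandwidth is being selected — note the minus sign flagged in the typo list — can be sketched as follows (my illustration of the unnormalized variant; the paper picks eps by the geometric consistency criterion rather than by inspection):

    import numpy as np
    from scipy.spatial.distance import cdist

    def heat_kernel_laplacian(X, eps):
        # W_ij = exp(-||x_i - x_j||^2 / eps): note the minus sign that the
        # typo list flags for eq. (2).
        W = np.exp(-cdist(X, X, 'sqeuclidean') / eps)
        d = W.sum(axis=1)
        return np.diag(d) - W                # unnormalized graph Laplacian

    X = np.random.randn(200, 3)
    candidates = [0.05, 0.2, 1.0]            # bandwidths the criterion scores
    laplacians = [heat_kernel_laplacian(X, eps) for eps in candidates]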
nips_2017_1298
Runtime Neural Pruning In this paper, we propose a Runtime Neural Pruning (RNP) framework which prunes the deep neural network dynamically at runtime. Unlike existing neural pruning methods which produce a fixed pruned model for deployment, our method preserves the full ability of the original network and conducts pruning according to the input image and current feature maps adaptively. The pruning is performed in a bottom-up, layer-by-layer manner, which we model as a Markov decision process and use reinforcement learning for training. The agent judges the importance of each convolutional kernel and conducts channel-wise pruning conditioned on different samples, where the network is pruned more when the image is easier for the task. Since the ability of the network is fully preserved, the balance point is easily adjustable according to the available resources. Our method can be applied to off-the-shelf network structures and reach a better tradeoff between speed and accuracy, especially with a large pruning rate.
Update: I updated my score after the authors' clarification of the RNN overhead for more recent networks and their remark on training complexity. The authors propose a deep-RL-based method to choose a subset of convolutional kernels at runtime, leading to faster evaluation of CNNs. I really like the idea of combining an RNN and using it to guide the network structure. I have some doubts about the overhead of the decision network (see below) and would like a comment on that before making my final decision. - Do your computation results include the RNN runtime (decision network)? Can you comment on the overhead? The experiments focus on VGG16, which is a very heavy network. In comparison, more efficient networks like GoogLeNet may make the RNN overhead significant. Do you have time or FLOP measurements for the system overhead? Finally, as depth increases (e.g. ResNet-151), your decision overhead increases as well, since the decider network runs at every step. - Do you train a different RNP for each p? Is 0.1 a magic number? - The reward structure and formulating feature (in this case kernel) selection as an MDP are not new; check Timely Object Recognition by Karayev et al. and Imitation Learning by Coaching by Hal Daumé III. However, the application to internal kernel selection is interesting. line 33: "Some other samples are more difficult, which require more computational resources. This property is not exploited in previous neural pruning methods, where input samples are treated equally" This is not exactly true; check the missing references below. Missing references on sample-adaptive inference in DNNs: - The cascading neural network: building the Internet of Smart Things, Leroux et al. - Adaptive Neural Networks for Fast Test-Time Prediction, Bolukbasi et al. - Changing Model Behavior at Test-Time Using Reinforcement Learning, Odena et al. misc: Table 1, 3x column should have Jaderberg in bold (2.3 vs 2.32). Line 158: feature -> future. Line 198: p=-0.1 -> p=0.1?
nips_2017_1431
Noise-Tolerant Interactive Learning Using Pairwise Comparisons We study the problem of interactively learning a binary classifier using noisy labeling and pairwise comparison oracles, where the comparison oracle answers which one in the given two instances is more likely to be positive. Learning from such oracles has multiple applications where obtaining direct labels is harder but pairwise comparisons are easier, and the algorithm can leverage both types of oracles. In this paper, we attempt to characterize how the access to an easier comparison oracle helps in improving the label and total query complexity. We show that the comparison oracle reduces the learning problem to that of learning a threshold function. We then present an algorithm that interactively queries the label and comparison oracles and we characterize its query complexity under Tsybakov and adversarial noise conditions for the comparison and labeling oracles. Our lower bounds show that our label and total query complexity is almost optimal.
The authors study the active binary classification setting where: + The learner can ask for the label of an instance. The labeling oracle will provide a noisy answer (with either adversarial or Tsybakov noise). + In addition to the labeling oracle, there is a comparison oracle. The learner can then ask which of two instances is more likely to be positive. This oracle's answers are also noisy. The authors provide results which show that if the learner uses comparison queries, then it can reduce the label-query complexity. The problem that the authors study seems to be a valid and interesting one. The presentation of the results, however, should be improved. In its current state, accurately parsing the arguments is hard. Also, it would have been better if a better proof sketch had been included in the main paper (to give the reviewer an intuition about the correctness of the results without having to read the supplements). The core idea is to use the usual active-type algorithm, but then replace the label-request queries with a sub-method that uses mostly comparison queries (along with a few label queries). This is possible by first sorting the instances based on comparison queries and then doing binary search (using label queries) to find a good threshold. One thing that is concerning is the assumption on the noise level. On one hand, the authors argue that the major contribution of the paper compared to previous work is handling noise. However, their positive result is not applicable when the amount of noise is more than some threshold (which depends on epsilon and even delta). Also, the authors do not discuss whether the learner is allowed to perform repeated comparison queries about the same pair of instances. This seems like it would be helpful. [Furthermore, the authors assume that the outcomes of the queries are i.i.d., even when they have been asked about overlapping pairs.]
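The core reduction — sort with the comparison oracle, then binary-search a label threshold — is easy to sketch in the noiseless case (my illustration; the paper's actual contribution is making both steps tolerant to noisy oracles):

    import functools

    def learn_with_comparisons(xs, compare, label):
        # compare(a, b) -> negative if a is less likely positive than b.
        # label(x)      -> 0 or 1 from the labeling oracle.
        order = sorted(xs, key=functools.cmp_to_key(compare))
        lo, hi = 0, len(order)               # find the first positive example
        while lo < hi:                        # only O(log n) label queries
            mid = (lo + hi) // 2
            if label(order[mid]) == 1:
                hi = mid
            else:
                lo = mid + 1
        return order, lo                      # indices >= lo are the positives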
nips_2017_3536
Experimental Design for Learning Causal Graphs with Latent Variables We consider the problem of learning causal structures with latent variables using interventions. Our objective is not only to learn the causal graph between the observed variables, but to locate unobserved variables that could confound the relationship between observables. Our approach is stage-wise: We first learn the observable graph, i.e., the induced graph between observable variables. Next we learn the existence and location of the latent variables given the observable graph. We propose an efficient randomized algorithm that can learn the observable graph using O(d log^2 n) interventions where d is the degree of the graph. We further propose an efficient deterministic variant which uses O(log n + l) interventions, where l is the longest directed path in the graph. Next, we propose an algorithm that uses only O(d^2 log n) interventions that can learn the latents between both nonadjacent and adjacent variables. While a naive baseline approach would require O(n^2) interventions, our combined algorithm can learn the causal graph with latents using O(d log^2 n + d^2 log n) interventions.
The authors propose theory and algorithms for identifying ancestral relations, causal edges, and latent confounders using hard interventions. Their algorithms assume that it is possible to perform multiple interventions on any set of variables of interest, as well as the existence of an independence oracle, and thus they are mostly of theoretical value. In contrast to previous methods, the proposed algorithms do not assume causal sufficiency, and thus can handle confounded systems. The writing quality of the paper is good, but some parts of it could be changed to improve clarity. General Comments Several parts of the paper are hard to follow (see below). Addressing the below comments should improve clarity. In addition, it would be very helpful if further examples were included, especially for Algorithms 1-3. Furthermore, results are stated but no proof is given. Due to lack of space, this may be challenging. For the latter, it would suffice to point to the appendix provided in the supplementary material. Some suggestions to save space: (1) remove the end if/for/function parts of the algorithms, as using appropriate indentation suffices; (2) the "results and outline of the paper" section could be moved to the end of the introduction, and several parts in the introduction and background could be removed or reduced. Although existing work is mentioned, the connections to previous methods are not always clear. For instance, how do the proposed methods relate to existing methods that assume causal sufficiency? How about methods that do not assume causal sufficiency, but instead make other assumptions? Also, the paper [1] (and references therein) is related to this work but not mentioned. In the abstract, the authors mention that some experiments may be technically infeasible or unethical, yet this is not addressed by the proposed algorithms. In contrast, the algorithms assume that any set of interventions is possible. This should be mentioned as a limitation of the algorithms. Furthermore, some discussion regarding this should be included. An important question is if and how such restrictions (i.e. that some variables can't be intervened upon) affect the soundness/completeness of the algorithms. Lines 47-50: The statement regarding the identifiability of ancestral relations is incorrect. For some edges in MAGs it can be shown that they are direct, or whether the direct relation is also confounded; see visible edges in [2] and direct causal edges in [3]. If shown to be direct, they are so only in the context of the measured variables; naturally, multiple variables may mediate the direct relation. Line 58: [4] is able to handle interventional data with latent variables. Other algorithms that are able to handle experimental data with latent variables also exist [5-9]. Definition 1: How is the strong separating system computed? (Point to the appendix in the supplementary material.) Lemma 3: Shouldn't X_i \not\in S also be required? Algorithm 2, line 7: What is Y? This point is crucial for a correct understanding of the algorithm. I assume there should be an iteration over all Y that are potential children of X, but are not indirect descendants of X. Lemma 4: For sparse graphs, the transitive closure can be trivially computed in O(n * (n + m)) time (n: number of nodes, m: number of edges), which is faster than O(n^\omega). I mention this as the graphs considered in this paper are sparse (of constant degree). Line 179: The fact that Tr(D) = Tr(D_{tc}) is key to understanding Lemma 5 and Algorithm 3 and should be further emphasized. 
Algorithm 3: It is not clear from the algorithm how S is used, especially in conjunction with Algorithm 1. One possibility is for Algorithm 1 to also always intervene on S whenever intervening on set S_i (that is, intervene on S_i \cup S). Is this correct? This part should be improved. Algorithm 3: Using S to intervene on a random variable set, and then performing additional interventions using Algorithm 1, raises the question of whether a more efficient approach exists which can take into consideration the fact that S has been intervened upon. Any comments on this? Algorithm 3: It would be clearer if the purpose of c were mentioned (i.e. that it is a parameter which controls the worst-case probability of recovering the observable graph). Line 258: Induced matching is not defined. Although this is known in graph theory, it should be mentioned in the paper as it may confuse the reader. Do-see test: Although not necessary, some comment regarding the practical application of the do-see test would be helpful, if space permits. Typos / Corrections / Minor Suggestions: Pa_i is used multiple times but not defined. The longest directed path is sometimes denoted as r and sometimes as l; use only one of them. Abstract: O(log^2(n)) -> O(d*log^2(n)). Line 30: "... recovering these relations ..." -> "... recovering some relations ..." (or something along those lines, as they do not identify all causal relations). Line 59: international -> interventional. Line 105: Did you mean directed acyclic graph? Line 114: (a) -> (b). Line 114: "... computed only transitive closures ..." -> "... computing using transitive closured ...". Line 120: (a) -> (c). Line 159: "Define T_i ..." -> something wrong here. Line 161: Using a different font type for T_i is confusing; I would recommend using different notation. Lemma 5, Line 185: V_i \in S^c would be clearer if written as V_i \not\in S (unless this is not the same and I missed something). Line 191: "We will show in Lemma 5" -> Shouldn't it be Theorem 3? Algorithm 3: \hat{D}(Tr(\hat{D}_S)) looks weird; consider removing Tr(\hat{D}_S) (unless I misunderstood something). Lines 226-227: "To see the effect of latent path". Line 233: Figure 4.2 -> Figure 2. Line 240: "... which shows that when ..." -> remove "that". [1] Hyttinen et al., Experiment Selection for Causal Discovery, JMLR 2013. [2] Zhang, Causal Reasoning with Ancestral Graphs, JMLR 2008. [3] Borboudakis et al., Tools and Algorithms for Causally Interpreting Directed Edges in Maximal Ancestral Graphs, PGM 2012. [4] Triantafillou and Tsamardinos, Constraint-based Causal Discovery from Multiple Interventions over Overlapping Variable Sets, JMLR 2015. [5] Hyttinen et al., Causal Discovery of Linear Cyclic Models from Multiple Experimental Data Sets with Overlapping Variables, UAI 2012. [6] Hyttinen et al., Discovering Cyclic Causal Models with Latent Variables: A General SAT-Based Procedure, UAI 2013. [7] Borboudakis and Tsamardinos, Towards Robust and Versatile Causal Discovery for Business Applications, KDD 2016. [8] Magliacane et al., Ancestral Causal Inference, NIPS 2016. [9] Magliacane et al., Joint Causal Inference from Observational and Experimental Datasets, arXiv.
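Regarding my comment on Lemma 4 above: the O(n(n+m)) computation is just one BFS per node; a sketch:

    from collections import deque

    def transitive_closure(adj):
        # adj: dict node -> list of direct successors.
        # One BFS per node: O(n * (n + m)) total time.
        closure = {}
        for s in adj:
            seen, queue = {s}, deque([s])
            while queue:
                u = queue.popleft()
                for v in adj.get(u, []):
                    if v not in seen:
                        seen.add(v)
                        queue.append(v)
            closure[s] = seen - {s}
        return closure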
nips_2017_389
Adversarial Surrogate Losses for Ordinal Regression Ordinal regression seeks class label predictions when the penalty incurred for mistakes increases according to an ordering over the labels. The absolute error is a canonical example. Many existing methods for this task reduce to binary classification problems and employ surrogate losses, such as the hinge loss. We instead derive uniquely defined surrogate ordinal regression loss functions by seeking the predictor that is robust to the worst-case approximations of training data labels, subject to matching certain provided training data statistics. We demonstrate the advantages of our approach over other surrogate losses based on hinge loss approximations using UCI ordinal prediction tasks.
The paper proposes an adversarial approach to ordinal regression, building upon recent works along these lines for cost-sensitive losses. The proposed method is shown to be consistent, and to have favourable empirical performance compared to existing methods. The basic idea of the paper is simple yet interesting: since ordinal regression can be viewed as a type of multiclass classification, and the latter has recently been attacked by adversarial learning approaches with some success, one can combine the two to derive adversarial ordinal regression approaches. By itself this would make the contribution a little narrow, but it is further shown that the adversarial loss in this particular problem admits a tractable form (Thm 1), which allows for efficient optimisation. Fisher consistency of the approach also follows as a consequence of existing results for the cost-sensitive case, which is a salient feature of the approach. The idea proves effective, as seen in the good empirical results of the proposed method. Strictly, the performance isn't significantly better than that of existing methods, but rather is competitive; it would be ideal if taking the adversarial route led to improvements, but the results at least illustrate that the idea is conceptually sound. Overall, I thus think the paper makes a good contribution. My only two suggestions are: o It seems prudent to give some more intuition for why the proposed adversarial approach is expected to result in a surrogate that is more appealing than standard ones (beyond automatically having consistency) -- this is mentioned in the Introduction, but could be reiterated in Sec 2.5. o As the basis of the paper is that ordinal regression with the absolute error loss can be viewed as a cost-sensitive loss, it might help to spell out this connection concretely, i.e. specify what exactly the cost matrix is. This would make transparent e.g. Eqn 3 in terms of its connection to the work of (Asif et al., 2015). Other comments: - the notation E_{P(x, y) \hat{P}(\hat{y} | x)}[ L_{\hat{Y}, Y} ] is a bit confusing -- Y, \hat{Y} are presumably random variables with a specified distribution? And the apparent lack of x on the RHS may confuse. - the sentence "using training samples distributed according to P̃(x, y), which are drawn from P(x, y)" is hard to understand - Sec 3.2 and elsewhere: may as well define \hat{f} = w \cdot x, not w \cdot x_i; the dependence on i doesn't add much here - Certainly consistency compared to the Crammer-Singer SVM is nice, but there are other multi-class formulations which are consistent, so I'm not sure this is the strongest argument for the proposed approach? - what is the relative runtime of the proposed methods in the experiments? - it wasn't immediately clear why one couldn't use the existing baselines, e.g. RED^th, in the case of a Gaussian kernel (possibly by learning with the empirical kernel map).
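To spell out the connection asked for above: for the absolute-error loss, the cost matrix is simply C_{ij} = |i - j|. A two-line illustration (mine):

    import numpy as np

    k = 5                                   # number of ordinal labels
    C = np.abs(np.arange(k)[:, None] - np.arange(k)[None, :])
    # C[i, j] = |i - j|, e.g. row 0 is [0 1 2 3 4]: predicting label j when
    # the true label is i costs the absolute difference.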
nips_2017_3205
Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles Deep neural networks (NNs) are powerful black-box predictors that have recently achieved impressive performance on a wide spectrum of tasks. Quantifying predictive uncertainty in NNs is a challenging and yet unsolved problem. Bayesian NNs, which learn a distribution over weights, are currently the state-of-the-art for estimating predictive uncertainty; however these require significant modifications to the training procedure and are computationally expensive compared to standard (non-Bayesian) NNs. We propose an alternative to Bayesian NNs that is simple to implement, readily parallelizable, requires very little hyperparameter tuning, and yields high quality predictive uncertainty estimates. Through a series of experiments on classification and regression benchmarks, we demonstrate that our method produces well-calibrated uncertainty estimates which are as good as or better than those of approximate Bayesian NNs. To assess robustness to dataset shift, we evaluate the predictive uncertainty on test examples from known and unknown distributions, and show that our method is able to express higher uncertainty on out-of-distribution examples. We demonstrate the scalability of our method by evaluating predictive uncertainty estimates on ImageNet.
The paper proposes a simple non-Bayesian baseline method for estimating predictive uncertainty. This is achieved by using ensembles of NNs, where M distinct NNs, from different random initializations, are trained independently by optimizing a scoring rule. The authors simply use a conventional loss function (i.e. the softmax cross-entropy) as the scoring rule for classification tasks, while for regression problems the authors replace a regular loss function (i.e. MSE) with the negative log-likelihood (NLL), which should capture predictive uncertainty estimates. In addition to this, an adversarial training (AT) scheme is proposed to promote smoothness of the predictive distribution. To demonstrate the capacity of ensembles and AT to estimate predictive uncertainty, the authors evaluated the proposed method on a large number of datasets across different tasks, both for regression and classification (vision) tasks. Their method outperforms MC dropout for classification tasks while being competitive on regression datasets in terms of NLL. Furthermore, evaluations demonstrate the ability of the proposed approach to provide uniform predictive distributions for out-of-distribution (unknown) samples, for an object classification task (ImageNet). The paper proposes the use of an ensemble of NNs, trained independently on ordinary training samples and their corresponding adversarial examples, in order to estimate the predictive uncertainty. Although the proposed method is not significantly novel, nor very sophisticated, the paper contributes to the domain by making explicit an idea that has not been much exploited so far. However, the experimental results do not fully support the proposal. Training NNs with adversaries is one of the contributions of the paper, but the relationship between smoothness and obtaining well-calibrated predictive uncertainty estimates is a bit vague. How can the smoothness obtained using AT help to achieve well-calibrated predictive uncertainty estimates? In other words, the effectiveness of AT for the predictive uncertainty estimates is not clearly justified, and the experiments do not support this well. For example, the authors show that AT has no considerable effect for regression problems (Table 2 in the Appendix), while it has a significant effect for vision classification problems. Why? Moreover, if we consider AT as data augmentation, can we demonstrate the positive effects of AT on the predictive uncertainty over other simple data augmentation approaches, such as random crops? I think that having another experiment with simple data augmentation for training NNs and ConvNets could highlight whether the smoothness has a definite effect on the predictive uncertainty or not, and whether AT is truly key to achieving this. Recognizing erroneous samples, which are confidently misclassified by NNs, is also another concern for AI safety. The high confidence of misclassified samples prevents human intervention, and then major deficiencies (accidents) can occur. So, one can ask whether this ensemble can recognize misclassified samples by providing uniform distributions. The experiments should support that. Estimating the predictive uncertainty is key for recognizing out-of-distribution and misclassified samples. However, a few non-Bayesian papers are missing from the literature review of the paper. 
For example, Hendrycks and Gimpel (2016) propose a baseline method using softmax statistics to estimate the probability of error and the probability of out-of-distribution samples. Also, ensembles of DNNs have been evaluated by Abbasi and Gagné (2017) for computing the uncertainty of the predictions on adversarial cases, a hard case of out-of-distribution examples. References: Hendrycks, Dan, and Kevin Gimpel (2016). "A baseline for detecting misclassified and out-of-distribution examples in neural networks." arXiv preprint arXiv:1610.02136. Abbasi, Mahdieh, and Christian Gagné (2017). "Robustness to Adversarial Examples through an Ensemble of Specialists." arXiv preprint arXiv:1702.06856. ** Update following reviewer discussions ** I likely underappreciated the paper; re-reading it in the light of the other reviews and discussions with the other reviewers, I increased my score to 6. I still maintain my comments on AT, which I think is the weakest part of the paper.
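For reference, the inference step whose uncertainty estimates are discussed above — averaging the M member predictive distributions and reading off a score — takes only a few lines (my sketch; the entropy score is one common choice, not necessarily the authors'):

    import numpy as np

    def ensemble_predict(member_probs):
        # member_probs: (M, n, num_classes) softmax outputs of the M networks.
        p = member_probs.mean(axis=0)                   # averaged distribution
        entropy = -(p * np.log(p + 1e-12)).sum(axis=1)  # per-example score
        return p, entropy
    # Higher entropy is expected on out-of-distribution inputs; my question
    # above is whether it is also high on confidently misclassified inputs.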
nips_2017_2892
Discriminative State-Space Models We introduce and analyze Discriminative State-Space Models for forecasting nonstationary time series. We provide data-dependent generalization guarantees for learning these models based on the recently introduced notion of discrepancy. We provide an in-depth analysis of the complexity of such models. We also study the generalization guarantees for several structural risk minimization approaches to this problem and provide an efficient implementation for one of them which is based on a convex objective.
Synopsis ------------------- * This paper follows in a line of work from Kuznetsov and Mohri on developing agnostic learning guarantees for time series prediction using sequential complexity measures. Ostensibly, the setup in past works and the present work is that one will fit a hypothesis to observations (x_t, y_t) from time 1 through T, then use this hypothesis to predict y_{T+1} from x_{T+1} (the generalization guarantees provided are slightly more general than this). Where previous works (e.g. "Learning Theory and Algorithms for Forecasting Non-Stationary Time Series", NIPS 2015) considered the case where the hypothesis class is a fixed mapping from context space to outcome space, the present work extends this setup to the setting where one learns a "state space" model. Here, each hypothesis is a pair (h,g), where g generates an internal "state" variable and h predicts using both a context and a state variable (see the paper for more detail). This captures, for example, HMMs. The authors give basic generalization guarantees for this setup, generalization guarantees under structural risk minimization, and an algorithm for learning a restricted class of state space models. Review ------------------- * I found the concept for the paper exciting and believe it is well-written, but I feel it has two major shortcomings: I) Technical novelty * The main results, Theorems 1 and 2, should be compared with Theorem 1/Corollary 2 of the NIPS 2015 paper mentioned above (call it KM15 for short). The authors mention that the main advantage of the present work is that the discrepancy term in Theorem 2 does not include a sup over the class G, which one would get by applying Theorem 1 from KM15 naively. Yet, if we look at the proofs of Theorems 1 and 2 of the present work, we see that this is achieved by being careful with where one takes the sup over G (line 418), then immediately applying Lemma 1, whose proof is essentially identical to that of KM15 Theorem 1. To give some more examples: Appendix B largely consists of standard facts about sequential Rademacher complexity. It is unclear to me whether the structural risk minimization results represent a substantive improvement on the basic generalization bounds. While I admit that others may find these results practically useful, I feel that the applications and experiments should have been fleshed out further if this was the goal. II) The general setting is not instantiated sufficiently. * I was not satisfied with the depth to which the general bounds (e.g. Theorem 2) were instantiated. I do not feel that the basic promise of the paper (learning state space models) has been fully explored. Namely: ** The algorithms section (5) only applies in the setting where the state space mapping class G is a singleton, and so there is no real "learning" of the state space dynamics going on. While I acknowledge that developing efficient algorithms for learning G may be challenging, this bothers me from a conceptual perspective because Theorem 2 bears no improvement over the KM15 results mentioned above when G only contains a single element. ** In line with this comment, the discrepancy term never seems to be instantiated. 
** While Theorem 2 depends on the expected sequential cover of the class H x G, which one expects should exhibit some rich dependence on the dynamics G and the distribution over (X,Y), Appendix B seems to handle this term by immediately bounding it by the *worst case* sequential Rademacher complexity, which includes a sup over all state space paths, not just those that could be induced by G. Conclusion ------------------- * In summary, I found the basic concept fairly interesting and think that the tools introduced (e.g. sequential covering for state space models) will be useful, but I don't think they were explored to enough depth. I also found Assumption 1 and the concrete cases where the discrepancy can be removed in the SRM setting to form a nice set of observations. * One last comment: While you suggest, based on some results of Long, that some notion of discrepancy is necessary for learning, it would be nice to see a formal lower bound whose functional form depends on this quantity in the general case. Misc comments: * Line 123: Definition of \mathcal{R}_{s} doesn't exactly match the formal specification for the class \mathcal{R} used in the definition of the sequential covering number.
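To make the hypothesis-pair setup referenced above concrete, here is a minimal sketch in my own notation (not necessarily the authors' exact formulation):

```latex
% A state-space hypothesis is a pair (h, g): g evolves the internal state,
% h predicts from the current context and state.
\[
s_{t+1} = g(x_t, s_t), \qquad \widehat{y}_t = h(x_t, s_t),
\]
% and learning selects the pair minimizing empirical loss over the observed horizon:
\[
\min_{(h, g) \in \mathcal{H} \times \mathcal{G}} \ \frac{1}{T} \sum_{t=1}^{T} \ell\big(h(x_t, s_t),\, y_t\big).
\]
```

An HMM fits this template with g playing the role of the (learned) transition dynamics and h the emission-based predictor.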
nips_2017_521
A New Theory for Matrix Completion Prevalent matrix completion theories rely on an assumption that the locations of the missing data are distributed uniformly and randomly (i.e., uniform sampling). Nevertheless, the reason for observations being missing often depends on the unseen observations themselves, and thus the missing data in practice usually occurs in a nonuniform and deterministic fashion rather than randomly. To break through the limits of random sampling, this paper introduces a new hypothesis called the isomeric condition, which is provably weaker than the assumption of uniform sampling and arguably holds even when the missing data is placed irregularly. Equipped with this new tool, we prove a series of theorems for missing data recovery and matrix completion. In particular, we prove that the exact solutions that identify the target matrix are included as critical points by the commonly used nonconvex programs. Unlike the existing theories for nonconvex matrix completion, which are built upon the same condition as convex programs, our theory shows that nonconvex programs have the potential to work with a much weaker condition. Compared to the existing studies on nonuniform sampling, our setup is more general.
The authors study matrix completion from a few entries. They propose a new property of (matrix, sampling pattern) pairs which they term "Isomery". They show that the well-known (incoherence, low rank, random sampling) assumption implies this condition. The authors then consider a nonconvex bilinear program for matrix completion. Under the Isomery assumption, they prove that the exact solution is a critical point of this program (with no assumptions on rank, it seems). Some empirical evidence is presented, whereby the nonconvex program is superior to the convex Nuclear Norm minimization method for a certain pattern of nonuniform sampling. The paper is relatively well written (see comments on English below). The results are novel as far as I can tell, and interesting. This work touches on fundamental aspects of matrix recovery, and can certainly lead to new research. There are a few major comments I'd be happy to see addressed. That said, I vote accept. Major comments: 1. The authors say nothing about the natural questions: Can the Isomery condition be verified in practice? Is it a reasonable assumption to make? 2. The authors discuss in the introduction reasons why random sampling may not be a reasonable assumption, with some real data examples. It would be nice if, for a real full data matrix, and a sampling pattern of the kind shown, the authors can verify the Isomery assumption. 3. The matrix recovery problem, as well as all the algorithms discussed in this paper, are invariant under row and col permutations. The authors discuss sampling patterns which are "concentrated around the main diagonal" - this condition is not invariant under permutations. Only invariant conditions make sense. 4. The main contributions claimed (l.79) are not backed up by the paper itself. Specifically, - it's unclear what is meant by "our theories are more flexible and powerful" - the second bullet point in the main contribution list (l.83) refers to "nonconvex programs" in general - The third bullet point (l.87) is not actually proved, and should not be advertised as a main contribution. The authors proved Theorem 3.1, but its connection to matrix completion is only vaguely described in the paragraph above Theorem 3.1. minor comments: - English throughout - Theorem 3.2: formally define "with high probability" - The word "theory" and "theories" in the title and throughout: I'm a non-native English speaker, yet I don't see why you use the word. - l.56: "incoherent assumption" -> "incoherence assumption" - l.75 "by the commonly used" -> "of the commonly used" - l.79 "hypothesis" -> condition - l.92, l.129 "proof processes" -> "proofs" - l.188 "where it denotes" -> "where we denote" - l.208 "that" -?? - Eq. (8) and Eq. (9) - maybe missing =0 on the right hand side? - Figure 2: label vertical axis - what do you mean by "proof of theory" (eg line 92)?
nips_2017_2490
Affine-Invariant Online Optimization and the Low-rank Experts Problem We present a new affine-invariant optimization algorithm called Online Lazy Newton. The regret of Online Lazy Newton is independent of conditioning: the algorithm's performance depends on the best possible preconditioning of the problem in retrospect and on its intrinsic dimensionality. As an application, we show how Online Lazy Newton can be used to achieve an optimal regret of order √(rT) for the low-rank experts problem, improving by a √r factor over the previously best known bound and resolving an open problem posed by Hazan et al. [15].
Summary: This paper proposes a new optimization algorithm, Online Lazy Newton (OLN), based on the Online Newton Step (ONS) algorithm. Unlike ONS, which tries to utilize curvature information within convex functions, OLN aims at optimizing general convex functions with no curvature. Additionally, by making use of the low rank structure of the conditioning matrix, the authors showed that OLN yields a better regret bound under certain conditions. Overall, the problem is well-motivated and the paper is easy to follow. Major Comments: 1. The major difference between OLN and ONS is that OLN introduces a lazy evaluation step, which accumulates the negative gradients at each round. The authors claimed in Lines 155-157 that this helps in decoupling between past and future conditioning and projections, and that it is better in the case when the transformation matrix is changing between rounds. It would be better to provide some explanations. 2. Lines 158-161, it is claimed that ONS is not invariant to affine transformation due to its initialization. In my understanding, the regularization term is added partly because it allows for an invertible matrix and can be omitted if the Moore-Penrose pseudo-inverse is used as in the FTAL. 3. Line 80 and Line 180, it is claimed that O(\sqrt(r T logT)) improves upon O(r \sqrt(T)). The statement will hold under the condition that r/logT = O(1); is this always true? Minor: 1. Lines 77-79, actually both ONS and OLN utilize first-order information to approximate second-order statistics. 2. Line 95, there is no definition of the derivative on the right-hand side of the equation prior to the equation. 3. Lines 148-149, on the improved regret bound. Assuming that there is a low rank structure for ONS (similar to what is assumed in OLN), would the regret bound for OLN still be better than ONS? 4. Line 166, 'In The...' -> 'In the...' 5. Line 181, better to add a reference for the Hedge algorithm. 6. Line 199, what is 'h' in '... if h is convex ...'? 7. Lines 211-212, '...all eigenvalues are equal to 1, except for r of them...', why is this the case? For D = I + B defined in Line 209, the rank of B is at most r, so all but at most r of the eigenvalues of D are equal to 1. The regret bound depends on the low rank structure of matrix A as in Theorem 3; another direction that would be interesting to explore is to consider a low rank approximation to the matrix A and check if a similar regret bound can be derived, or under which conditions it can be. I believe the proposed methods will be applicable to more general cases along this direction.
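To make the ONS/OLN distinction discussed in Major Comment 1 concrete, here is my paraphrase of the two update rules (a sketch in my notation, not the paper's exact statement):

```latex
% Online Newton Step: incremental second-order update from the current iterate
\[
A_t = A_{t-1} + g_t g_t^{\top}, \qquad
x_{t+1} = \Pi_{\mathcal{K}}^{A_t}\!\Big( x_t - \tfrac{1}{\gamma} A_t^{-1} g_t \Big).
\]
% Lazy (dual-averaging-style) variant: each round re-solves from the
% accumulated negative gradients, decoupling past projections from future ones
\[
x_{t+1} = \Pi_{\mathcal{K}}^{A_t}\!\Big( -\tfrac{1}{\gamma} A_t^{-1} \textstyle\sum_{s \le t} g_s \Big).
\]
```

Here \Pi_{\mathcal{K}}^{A} denotes projection onto the feasible set in the norm induced by A.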
nips_2017_1538
An Empirical Bayes Approach to Optimizing Machine Learning Algorithms There is rapidly growing interest in using Bayesian optimization to tune model and inference hyperparameters for machine learning algorithms that take a long time to run. For example, Spearmint is a popular software package for selecting the optimal number of layers and learning rate in neural networks. But given that there is uncertainty about which hyperparameters give the best predictive performance, and given that fitting a model for each choice of hyperparameters is costly, it is arguably wasteful to "throw away" all but the best result, as per Bayesian optimization. A related issue is the danger of overfitting the validation data when optimizing many hyperparameters. In this paper, we consider an alternative approach that uses more samples from the hyperparameter selection procedure to average over the uncertainty in model hyperparameters. The resulting approach, empirical Bayes for hyperparameter averaging (EB-Hyp) predicts held-out data better than Bayesian optimization in two experiments on latent Dirichlet allocation and deep latent Gaussian models. EB-Hyp suggests a simpler approach to evaluating and deploying machine learning algorithms that does not require a separate validation data set and hyperparameter selection procedure.
# Update after author feedback I maintain my assessment of the paper. I strongly urge the authors to improve the paper on the basis of the reviewers' input; this could turn out to become a very nice paper. Especially important IMHO: - retrain on validation+train (will come as an obvious counter-argument) - clarify the use of $\lambda$; R2 and R3 got confused - variance in experiments tables 2+3 (again, would be an easy target for critique once published) # Summary of paper Bayesian optimisation (BO) discards all but the best performance evaluations over hyperparameter sets; this exposes it to overfitting the validation set, and seems wasteful. The proposed method "averages" over choices of hyperparameters $\eta$ in a Bayesian way, by treating them as samples of a posterior and integrating over them. This requires introducing an extra layer of hyper-hyperparameters (introducing $\lambda$, subject to empirical Bayes optimisation). An algorithmic choice is required to define an "acquisition strategy" in BO terms, i.e. a model for $r^{(s)}$, as well as a model for the hyperprior $p(\eta | \lambda)$. The proposed method does not require a separate validation set. # Evaluation The paper is very clearly written, without any language issues. It represents the literature and the context of the methods well. The problem statement and resolution process is clear, as is the description of the adopted solutions and approximations. The authors take care in preventing misunderstandings in several places. The discussion of convergence is welcome. The experiments are non-trivial and help the argumentation, with the caveat of the significance of results mentioned above. The method is novel: it applies the idea of EB and hierarchical Bayes to the BO method. I believe the approach is a good, interesting idea. # Discussion Discussion lines 44-47 and 252 sqq: it may sound like the case presented here was cherry-picked to exhibit a "bad luck" situation in which the leftmost (in fig 1) hyperparameter set which is selected by BO also exhibits bad test error; and in a sense, it is indeed "bad luck", since only a minority of hyperparameter choices plotted in fig 1 seem to exhibit this issue. However, dealing with this situation by averaging over $\eta$'s seems to be the central argument in favour of the proposed EB-Hyp method. I buy the argument, but I also recognise that the evidence presented by fig 1 is weak. The following advantage does not seem to be discussed in the paper: since a separate validation set is not needed, the training set can be made larger. Does this explain the observed performance increase? An analytical experiment could help answer this. Does the comparison between BO with/without validation in tables 2 and 3 help? In algo 1, I am not clear about the dependence on $\hat{\lambda}$. I am assuming it is used in the line "return approximation to ..." (I suggest you write $p(X^*|X_\textrm{train}, \hat{\lambda})$ to clarify the dependence.) I assume that in line "find optimal...", we actually use, per lines 200-208, $\hat{\lambda} = \arg \max \frac{1}{T} \sum_s p(\eta ^{(s)} | \lambda) f^{(s)}_X $ (a rough sketch of this reading follows the review below). This is worth clarifying. But then, I'm confused about what $p(\eta)$ in line "calculate hyperparameter posterior..." and eq 5 is. Is this an input to the algorithm? EB-Hyp seems to carry the inconvenience that we must consider, as the "metric of interest" in BO terms, the predictive likelihood $f^{(s)}_X$, while BO can use other metrics to carry out its optimisation. This restriction should be discussed.
## Experiments How many steps / evaluations of $A$ do we use in either experiment? We are using the same number for all configurations, I suppose? How does the performance increase with the number of steps? Does EB-Hyp overtake only after several steps, or is its performance above other methods from the start? I find it hard to assign significance to the improvements presented in tables 2 and 3, for lack of a sense of performance variance. The improvements are treated as statistically significant, but, e.g., are they immune to different random data splits? As it stands, I am not convinced that the performance improvement is real. # Suggestions for improvement, typos ## Major Fig 2 is incomplete: it is missing a contour colour scale. In the caption, what is a "negative relationship"? Please clarify. What is the point of fig 2? I can't see where it is discussed. ## Minor fig 1a caption: valiation fig 1: make scales equal in a vs b algo 1: increment counter $s$. Number lines. line 228: into into line 239: hyerparameters
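As flagged in the discussion above, here is a rough sketch of my reading of lines 200-208 (the function names, the scalar-$\lambda$ assumption, and the optimizer choice are all mine, not the authors'):

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Assumed inputs: eta_samples[s] is the s-th hyperparameter draw and
# f_X[s] = p(X | eta_samples[s]) its predictive likelihood on the training data;
# log_prior(eta, lam) = log p(eta | lam) is the hyperprior density.
def eb_hyp_lambda(eta_samples, f_X, log_prior):
    # Empirical Bayes: maximize (1/T) * sum_s p(eta^(s) | lam) * f_X^(s) over lam.
    def neg_objective(lam):
        weights = np.exp([log_prior(eta, lam) for eta in eta_samples])
        return -np.mean(weights * f_X)
    return minimize_scalar(neg_objective).x

def predict(eta_samples, f_X, log_prior, lam_hat, pred_lik):
    # Approximate p(X* | X_train, lam_hat) by weighting each sample's
    # predictive density pred_lik(eta) with its (normalized) posterior mass.
    w = np.exp([log_prior(eta, lam_hat) for eta in eta_samples]) * f_X
    w = w / w.sum()
    return sum(wi * pred_lik(eta) for wi, eta in zip(w, eta_samples))
```

If this matches the authors' intent, stating it in this form in Algorithm 1 would resolve the $\hat{\lambda}$ / $p(\eta)$ confusion.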
nips_2017_2491
Beyond Worst-case: A Probabilistic Analysis of Affine Policies in Dynamic Optimization Affine policies (or control) are widely used as a solution approach in dynamic optimization where computing an optimal adjustable solution is usually intractable. While the worst case performance of affine policies can be significantly bad, the empirical performance is observed to be near-optimal for a large class of problem instances. For instance, in the two-stage dynamic robust optimization problem with linear covering constraints and uncertain right-hand side, the worst-case approximation bound for affine policies is O(√m), which is also tight (see Bertsimas and Goyal [8]), whereas observed empirical performance is near-optimal. In this paper, we aim to address this stark contrast between the worst-case and the empirical performance of affine policies. In particular, we show that affine policies give a good approximation for the two-stage adjustable robust optimization problem with high probability on random instances where the constraint coefficients are generated i.i.d. from a large class of distributions; thereby, providing a theoretical justification of the observed empirical performance. On the other hand, we also present a distribution such that the performance bound for affine policies on instances generated according to that distribution is Ω(√m) with high probability; however, the constraint coefficients are not i.i.d. This demonstrates that the empirical performance of affine policies can depend on the generative model for instances.
Review 2 after authors' comments: The authors gave a very good rebuttal to the comments made, which leads me to believe that there is no problem in accepting this paper; see updated rating. ------------ This paper addresses the challenging question of giving bounds for affine policies for adjustable robust optimization models. I like the fact that the authors (in some probabilistic sense) reduced the large gap of sqrt(m) to 2 for some specific instances. The authors have combined several different techniques, connecting probabilistic bounds to the structure of a set, which is itself derived using a dual formulation from the original problem. However, I believe the contributions given in this paper and their impact are not high enough to allow for acceptance for NIPS. I came to this conclusion for the following reasons: - Bounds on the performance of affine policies have been described in a series of earlier papers. This paper does not significantly close the gap in my opinion. - The results strongly depend on the generative model for the instances. However, adjustable robust optimization is solely used in environments that have highly structured models, such as the ones mentioned in the paper on page 2, line 51: set cover, facility location and network design problems. It is also explained that the performance differs if the generative model, or the i.i.d. assumption, is slightly changed. Therefore, I am not convinced about the insights these results give for researchers that are thinking of applying adjustable robust optimization to solve their (structured) problem. - Empirical results are much better than the bounds presented here. In particular, it appears that affine policies are near optimal. This was known and has been shown in various other papers before. - On page 2, lines 48-49 the authors say that "...without loss of generality that c=e and d=\bar{d}e (by appropriately scaling A and B)." However, I believe you also have to scale the right-hand side h (or the size/shape of the uncertainty set). And the columns of B have to be scaled, making the entries no longer i.i.d. with the distribution required in Section 2? - Also on page 2, line 69 the authors describe the model with affine policies. The variables P and q are still in an inner minimization model. I think they should be together with the minimization over x? - On page 6, Theorem 2.6: the inequality z_AR <= z_AFF always holds, I believe? So not only with probability 1-1/m? - There are a posteriori methods to describe the optimality gap of affine policies that are much tighter for many applications, such as the methods described by [Hadjiyiannis, Michael J., Paul J. Goulart, and Daniel Kuhn. "A scenario approach for estimating the suboptimality of linear decision rules in two-stage robust optimization." Decision and Control and European Control Conference (CDC-ECC), 2011 50th IEEE Conference on. IEEE, 2011.] and [Kuhn, Daniel, Wolfram Wiesemann, and Angelos Georghiou. "Primal and dual linear decision rules in stochastic and robust optimization." Mathematical Programming 130.1 (2011): 177-209.]. ------- I was positively surprised by the digitized formulation on page 8. Has this approach been used before in the literature, and if so, where? The authors describe that the size can depend on the desired accuracy. With the given accuracy, is the resulting solution a lower bound or an upper bound? If it is an upper bound, is the solution feasible?
Of course, because it is a MIP, it can probably only solve small problems, as also illustrated by the authors. ------- I have some minor comments concerning the very few typos found on page 5 (this might not even be worth mentioning here): - line 166. "peliminary" - line 184. "affinly independant" - line 186. "The proof proceeds along similar lines as in 2.4." (<-- what does 2.4 refer to?)
nips_2017_429
Doubly Accelerated Stochastic Variance Reduced Dual Averaging Method for Regularized Empirical Risk Minimization We develop a new accelerated stochastic gradient method for efficiently solving the convex regularized empirical risk minimization problem in mini-batch settings. The use of mini-batches has become a gold standard in the machine learning community, because the mini-batch techniques stabilize the gradient estimate and can easily make good use of parallel computing. The core of our proposed method is the incorporation of our new "double acceleration" technique and variance reduction technique. We theoretically analyze our proposed method and show that our method significantly improves the mini-batch efficiencies of previous accelerated stochastic methods, and essentially only needs mini-batches of size √n for achieving the optimal iteration complexities for both non-strongly and strongly convex objectives, where n is the training set size. Further, we show that even in non-mini-batch settings, our method achieves the best known convergence rate for non-strongly convex and strongly convex objectives.
The paper proposes a novel doubly accelerated variance reduced dual averaging method for solving the convex regularized empirical risk minimization problem in mini-batch settings. The method essentially can be interpreted as replacing the proximal gradient update of the APG method with the inner SVRG loop and then introducing momentum updates in the inner SVRG loops. Finally, to allow lazy updates, primal SVRG is replaced with variance reduced dual averaging. The main difference from AccProxSVRG is the introduction of the momentum term at the outer iteration level also. The method requires only O(\sqrt{n})-sized mini-batches to achieve the optimal iteration complexities for both non-strongly and strongly convex objectives when the problem is badly conditioned or requires high accuracy. Experimental results show substantial improvements over the state of the art under the above scenario. Overall, given the theoretical complexity of the paper, it is very well written and explained. The relation with previous work and the differences are clear and well elaborated. The reviewer thinks that the paper makes substantial theoretical and algorithmic advances. However, I am giving the paper a relatively lower rating due to the following reasons: a) I would like to see more experiments on different datasets, especially the effect of conditioning and the regularizer. b) The accuracy curves are not shown in the experiments. This is important in this case because it seems that major differences between methods are visible only at lower values of the gap, and it would be interesting to see if accuracy has saturated by then, which would make the practical utility limited. c) Instead of gradient evaluations on the x axis, I would like to see the actual wall clock time there, because the constants or O(1) computation in the inner loop matter a lot. Especially, the doubly accelerated update equations look computationally more tedious, and hence I would like to see the time axis there. d) Looking at the lazy updates in the supplementary, I am not sure why accelerated SVRG cannot have lazy updates. Can the authors point out exactly the step in detail which will have the issues?
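For readers orienting themselves, the core variance-reduced estimator that this family of methods builds on is the standard mini-batch SVRG gradient (a generic sketch; the paper's actual update adds double acceleration and dual averaging on top of this):

```python
import numpy as np

def svrg_minibatch_step(w, w_snap, full_grad_snap, grad_fn, idx, lr):
    # grad_fn(w, idx): average gradient of the loss terms in mini-batch idx at w.
    # v is an unbiased, variance-reduced estimate of the full gradient at w.
    v = grad_fn(w, idx) - grad_fn(w_snap, idx) + full_grad_snap
    return w - lr * v

# Usage sketch: refresh the snapshot w_snap (and its full gradient) every epoch,
# and in between sample mini-batches of size about sqrt(n), the regime the paper
# identifies as sufficient for optimal iteration complexity.
```

Reporting wall clock time, as requested in (c), would show how much the extra per-iteration work on top of this basic estimator actually costs.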
nips_2017_1832
Generalization Properties of Learning with Random Features We study the generalization properties of ridge regression with random features in the statistical learning framework. We show for the first time that O(1/√n) learning bounds can be achieved with only O(√n log n) random features rather than O(n) as suggested by previous results. Further, we prove faster learning rates and show that they might require more random features, unless they are sampled according to a possibly problem-dependent distribution. Our results shed light on the statistical-computational trade-offs in large-scale kernelized learning, showing the potential effectiveness of random features in reducing the computational complexity while keeping optimal generalization properties.
This is in my opinion an excellent paper, a significant theoretical contribution to understanding the role of the well established random feature trick in kernel methods. The authors prove that for a wide range of optimization tasks in machine learning, random feature based methods provide algorithms giving results competitive (in terms of accuracy) with standard kernel methods using only \sqrt{n} random features (instead of a linear number; this provides scalability). This is, to my knowledge, one of the first results where it is rigorously proven that for downstream applications (such as kernel ridge regression) one can use random feature based kernel methods with a relatively small number of random features (the whole point of using the random feature approach is to use significantly fewer random features than the dimensionality of the data). So far most guarantees were of a point-wise flavor (there are several papers giving upper bounds on the number of random features needed to approximate the value of the kernel accurately for a given pair of feature vectors x and y, but it is not clear at all how these guarantees translate, for instance, to risk guarantees for downstream applications). The authors however miss one paper with very relevant results that it would be worth to compare with theirs. The paper I talk about is: "Random Fourier Features for Kernel Ridge Regression: Approximation Bounds and Statistical Guarantees" (ICML'17). In this paper the authors work on exactly the same problem and derive certain bounds on the number of random features needed, but it is not clear to me how to obtain from these bounds (which rely on certain spectral guarantees derived in that paper) the \sqrt{n} guarantees obtained in this submission. Thus I definitely strongly advise relating to that paper in the final version of this draft. Anyway, the contribution is still very novel. Furthermore, this paper is very nicely written, well-organized and gives an excellent introduction to the problem. What is most important, the theoretical contribution is in my opinion huge. To sum it up, very good work with new theoretical advances in this important field.
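For concreteness, the object under study is easy to instantiate: below is a minimal random-Fourier-feature ridge regression in the m ≈ √n log n regime the paper analyzes (standard RFF construction for the Gaussian kernel; nothing here is specific to this submission's proofs):

```python
import numpy as np

def rff_ridge_fit(X, y, m, gamma=1.0, lam=1e-3, seed=0):
    # Random Fourier features z(x) = sqrt(2/m) * cos(W^T x + b) approximate
    # the Gaussian kernel k(x, x') = exp(-gamma * ||x - x'||^2)
    # (spectral measure N(0, 2*gamma*I)).
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, m))
    b = rng.uniform(0.0, 2.0 * np.pi, size=m)
    Z = np.sqrt(2.0 / m) * np.cos(X @ W + b)
    # Ridge solve in the m-dimensional feature space: O(n m^2 + m^3)
    # instead of the O(n^3) exact kernel solve.
    alpha = np.linalg.solve(Z.T @ Z + lam * n * np.eye(m), Z.T @ y)
    return W, b, alpha

n = 10_000
m = int(np.sqrt(n) * np.log(n))  # the O(sqrt(n) log n) regime from the paper
```

The paper's claim is that this m already suffices for O(1/√n) excess risk; the ICML'17 paper mentioned above targets the same question via spectral approximation bounds.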
nips_2017_1972
Near Optimal Sketching of Low-Rank Tensor Regression We study the least squares regression problem min_Θ ‖A(Θ) − b‖₂², where Θ is a low-rank tensor, defined as Θ = Σ_{r=1}^{R} θ₁^{(r)} ⊛ ⋯ ⊛ θ_D^{(r)}. Here, ⊛ denotes the outer product of vectors, and A(Θ) is a linear function on Θ. This problem is motivated by the fact that the number of parameters in Θ is only R · Σ_{d=1}^{D} p_d, which is significantly smaller than the Π_{d=1}^{D} p_d number of parameters in ordinary least squares regression. We consider the above CP decomposition model of tensors Θ, as well as the Tucker decomposition. For both models we show how to apply data dimensionality reduction techniques based on sparse random projections Φ ∈ R^{m×n}, with m ≪ n, to reduce the problem to a much smaller problem min_Θ ‖ΦA(Θ) − Φb‖₂², for which ‖ΦA(Θ) − Φb‖₂² = (1 ± ε)‖A(Θ) − b‖₂² holds simultaneously for all Θ. We obtain a significantly smaller dimension and sparsity in the randomized linear mapping than is possible for ordinary least squares regression. Finally, we give a number of numerical simulations supporting our theory.
*Summary* This paper studies the tensor L2 regression problem and proposes to use matrix sketching to reduce computation. A (1+epsilon) bound is established to control the error incurred by sketching. I don't like the writing. First, there is little discussion of and comparison with related work. Second, the proof in the main body of the paper is messy, and it takes much space which could have been used for discussion and experiments. Third, the use of notation is confusing. Some claims in the paper seem false or suspicious. I'd like to see the authors' response. I'll adjust my rating accordingly. *Details* I'd like to see the authors' response to Questions 2, 4, 9, 10. 1. I don't find the description of your algorithm anywhere. I don't find the time complexity analysis anywhere. I want to see a comparison of the computational complexities w/ and w/o sketching. 2. How is this paper related to and different from previous work, e.g., [32], Drineas's work on sketched OLS, and tensor sketch? For example, if A is a matrix, which is a special case of a tensor, then how do your bounds compare to Drineas's and Clarkson & Woodruff's? 3. The notation is not well described. I had to look back and forth for the definitions of D, p, r, R, m, s, etc. 4. In Line 67, it is claimed that no assumption on incoherence is made. However, in Lines 169 and 231, there are requirements on the max leverage scores. How do you explain this? 5. There are incompleteness and errors in the citations. * There are numerous works on sketched OLS. These works should be cited below Eqn (6). * In Definition 5: SJLT is well known as count sketch. Count sketch has been studied in numerous papers, e.g., Charikar et al: Finding frequent items in data streams. Clarkson & Woodruff: Low rank approximation and regression in input sparsity time. Meng and Mahoney: Low-distortion subspace embeddings in input-sparsity time and applications to robust linear regression. Nelson & Nguyen: OSNAP: Faster numerical linear algebra algorithms via sparser subspace embeddings. Pham and Pagh: Fast and scalable polynomial kernels via explicit feature maps. Thorup and Zhang: Tabulation-based 5-independent hashing with applications to linear probing and second moment estimation. I don't think [11] is relevant. 6. How does this work extend to regularized regression? 7. Line 182: How is the assumption mild? The denominator is actually big. Matrix completion is not a good reference; it's a different problem. 8. The synthetic data in the experiments are not interesting. Sampling from a normal distribution ensures incoherence. You'd better generate data from a t-distribution according to the paper * Ma et al: A statistical perspective on algorithmic leveraging. I'd like to see real data experiments to demonstrate the usefulness of this work. 9. The title claims "near optimal". Why is this true? Where's your lower bound? 10. Tensors are rarely low-rank, but sometimes they are approximately low-rank. How do your theory and algorithm apply in this case? === after feedback === I appreciate the authors' patient reply. I think my evaluation of the technical quality is fair. I won't change my rating.
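For context on the primitive mentioned in point 5 — SJLT / count sketch places one random ±1 per column, so S·A costs O(nnz(A)) — here is a minimal sketch-and-solve illustration (generic construction, not the paper's exact embedding):

```python
import numpy as np

def countsketch_apply(A, m, seed=0):
    # Implicitly form S in R^{m x n} with one +/-1 per column and return S @ A.
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    buckets = rng.integers(0, m, size=n)        # hash h: [n] -> [m]
    signs = rng.choice([-1.0, 1.0], size=n)     # sign s: [n] -> {+1, -1}
    SA = np.zeros((m, A.shape[1]))
    np.add.at(SA, buckets, signs[:, None] * A)  # O(nnz(A)) time, S never materialized
    return SA

# Sketch-and-solve least squares: sketch [A | b] with the same S, then solve
# the much smaller m x d problem argmin_x ||S A x - S b|| in place of the n x d one:
# SAb = countsketch_apply(np.column_stack([A, b]), m)
# x, *_ = np.linalg.lstsq(SAb[:, :-1], SAb[:, -1], rcond=None)
```

A complexity comparison of this kind of solve with and without sketching is what point 1 above asks the authors to report.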
nips_2017_1973
Tractability in Structured Probability Spaces Recently, the Probabilistic Sentential Decision Diagram (PSDD) has been proposed as a framework for systematically inducing and learning distributions over structured objects, including combinatorial objects such as permutations and rankings, paths and matchings on a graph, etc. In this paper, we study the scalability of such models in the context of representing and learning distributions over routes on a map. In particular, we introduce the notion of a hierarchical route distribution and show how they can be leveraged to construct tractable PSDDs over route distributions, allowing them to scale to larger maps. We illustrate the utility of our model empirically, in a route prediction task, showing how accuracy can be increased significantly compared to Markov models.
This paper looks at the problem of representing simple routes on a graph as a probability distribution using Probabilistic Sentential Decision Diagrams (PSDDs). Representing a complex structure such as a graph is difficult, and the authors transform the problem by turning a graph into a Boolean circuit where it is straightforward to perform inference, and as an experiment, use their method on a route prediction task for San Francisco taxi cabs, where it beats two baselines. PSDDs refer to a framework that represents probability distributions over structured objects through Boolean circuits. Once the object is depicted as a Boolean circuit, it becomes straightforward to parameterize it. More formally, PSDDs are parameterized by including a distribution over each or-gate, and PSDDs can represent any distribution (and under some conditions, this distribution is unique). The authors focus on the specific problem of learning distributions over simple routes — those that are connected and without cycles. The advantage of SDD circuits is that they have been shown to work on graphs that would be computationally expensive to model with Bayesian nets. With large maps, tractability is still a problem with PSDDs, and the authors address this by considering hierarchical approximations; they break up the map into hierarchies so that representing distributions is polynomial when the size of each region is constrained. The paper is well-written, and the authors define PSDDs clearly and succinctly. To me, the results of the paper seem incremental — the authors apply an existing representation method to graphs, and after some approximations, apply the existing inference techniques. Additionally, the paper spends a couple of pages listing theorems, which appear to be basic algorithmic results. Furthermore, I'm not convinced by the baselines chosen for the experiment. For a task of predicting edges given a set of previous edges, the baselines are a naive model which only looks at frequencies and a Markov model that only considers the last edge used. I imagine a deep learning approach or even a simple probabilistic graphical model could've yielded more interesting and comparable results. Even if these would require more time to train, it would be valuable to compare model complexity in addition to accuracy. I would need to see more experiments to be convinced by the approach.
nips_2017_1785
Shallow Updates for Deep Reinforcement Learning Deep reinforcement learning (DRL) methods such as the Deep Q-Network (DQN) have achieved state-of-the-art results in a variety of challenging, high-dimensional domains. This success is mainly attributed to the power of deep neural networks to learn rich domain representations for approximating the value function or policy. Batch reinforcement learning methods with linear representations, on the other hand, are more stable and require less hyperparameter tuning. Yet, substantial feature engineering is necessary to achieve good results. In this work we propose a hybrid approach - the Least Squares Deep Q-Network (LS-DQN), which combines rich feature representations learned by a DRL algorithm with the stability of a linear least squares method. We do this by periodically re-training the last hidden layer of a DRL network with a batch least squares update. Key to our approach is a Bayesian regularization term for the least squares update, which prevents over-fitting to the more recent data. We tested LS-DQN on five Atari games and demonstrate significant improvement over vanilla DQN and Double-DQN. We also investigated the reasons for the superior performance of our method. Interestingly, we found that the performance improvement can be attributed to the large batch size used by the LS method when optimizing the last layer.
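A minimal sketch of what such a Bayesian-regularized last-layer refit could look like in closed form (my reading of the abstract; the feature matrix Phi, the use of the current weights as prior mean, and all names are assumptions on my part):

```python
import numpy as np

def ls_refit_last_layer(Phi, y, w_prev, lam):
    # Phi: (N, k) last-hidden-layer features over a large batch of transitions,
    # y: (N,) regression targets (e.g., Q-learning targets),
    # w_prev: current last-layer weights, acting as the prior mean.
    # Closed-form solution of argmin_w ||Phi w - y||^2 + lam * ||w - w_prev||^2,
    # i.e., a Gaussian prior centered at w_prev shrinks the batch solution
    # toward the weights learned so far, rather than toward zero.
    k = Phi.shape[1]
    A = Phi.T @ Phi + lam * np.eye(k)
    b = Phi.T @ y + lam * w_prev
    return np.linalg.solve(A, b)
```

Centering the prior at the existing weights, rather than at zero, is what would let a very large batch be used without wiping out what the deep network has already learned.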
The authors propose to augment value-based methods for deep reinforcement learning (DRL) with batch methods using linear function approximation (SRL). The idea is motivated by interpreting the output of the second-to-last layer of a neural network as linear features. In order to make this combination work, the authors argue that regularization is needed. Experimental results are provided for 5 Atari games on combinations of DQN/Double DQN and LSTD-Q/FQI. Strengths: I find the proposition of combining DRL and SRL with Bayesian regularization original and promising. The explanation provided for the improved performance seems reasonable, but it could have been better validated (see below). Weaknesses: The presentation of Algorithm 1, and in particular line 7, is a bit strange to me, given that in Section 4, the authors mention that generating D with the current weights results in poor performance. Why not present line 7 as using ER only? Besides, would it be interesting to use p% of trajectories from ER and (1-p)% of trajectories generated from the current weights? The experiments could be more complete. For instance, in Table 1, the results for LS-DDQN_{LSTD-Q} are missing, and in Fig.2, the curves for Qbert and Bowling are not provided. For a fairer comparison between DQN and LS-DQN, I think the authors should compare Algorithm 1 against DQN given an equal time budget, to check that their algorithm indeed provides better final performance. Besides, the ablative analysis was only performed on Breakout. Would the same conclusions hold on different games? Detailed comments: l.41: the comment on data-efficiency of SRL is only valid in the batch setting, isn't it? l.114, l.235: it's. l.148: in the equation above, \mu should be w_k; I find the notation w_k for the last hidden layer to be unsatisfying, as w_k also denotes the weights of the whole network at iteration k. l.205, l.245: a LS -> an LS. l.222: algortihm
nips_2017_2085
Collapsed variational Bayes for Markov jump processes Markov jump processes are continuous-time stochastic processes widely used in statistical applications in the natural sciences, and more recently in machine learning. Inference for these models typically proceeds via Markov chain Monte Carlo, and can suffer from various computational challenges. In this work, we propose a novel collapsed variational inference algorithm to address this issue. Our work leverages ideas from discrete-time Markov chains, and exploits a connection between these two through an idea called uniformization. Our algorithm proceeds by marginalizing out the parameters of the Markov jump process, and then approximating the distribution over the trajectory with a factored distribution over segments of a piecewise-constant function. Unlike MCMC schemes that marginalize out transition times of a piecewise-constant process, our scheme optimizes the discretization of time, resulting in significant computational savings. We apply our ideas to synthetic data as well as a dataset of check-in recordings, where we demonstrate superior performance over state-of-the-art MCMC methods.
The authors present a variational inference algorithm for continuous time Markov jump processes. Following previous work, they use "uniformization" to produce a discrete time skeleton at which they infer the latent states. Unlike previous work, however, the authors propose to learn this skeleton (a point estimate, via random search) and to integrate out, or collapse, the transition matrix during latent state inference. While this work is well motivated, I found it difficult to tease out which elements of the inference algorithm led to the observed improvement. Specifically, there are at least four dimensions along which the proposed method differs from previous work (e.g. Rao and Teh, 2014): (i) per-state \Omega_i vs shared \Omega for all states; (ii) learned point estimate of discretization vs sampling of discretization; (iii) variational approximation to posterior over latent states vs sampling of latent state sequence; and (iv) collapsing out transition matrix vs maintaining a sample / variational factor for it. That the "Improved MCMC" method does not perform better than "MCMC" suggests that (i) does not explain the performance gap. It seems difficult to test (ii) explicitly, but one could imagine a similar approach of optimizing the discretization using sample-based estimates of the marginal likelihood, though this would clearly be expensive. Dimensions (iii) and (iv) suggest two natural alternatives worth exploring. First, an "uncollapsed" variational inference algorithm with a factor for the transition matrix and a structured variational factor on the complete set of latent states U. Indeed, given the discretization, the problem essentially reduces to inference in an HMM, and uncollapsed, structured variational approximations have fared well here (e.g. Paisley and Carin, 2009; Johnson and Willsky, 2014). Second, it seems you could also collapse out the transition matrix in the MCMC scheme of Rao and Teh, 2014, though it would require coordinate-wise Gibbs updates of the latent states u_t | u_{\neg t}, just as your proposed scheme requires q(U) to factorize over time. These two alternatives would fill in the gaps in the space of inference algorithms and shed some light on what is leading to the observed performance improvements. In general, collapsing introduces additional dependencies in the model and precludes block updates of the latent states, and I am curious as to whether the gains of collapsing truly outweigh these costs. Without the comparisons suggested above, the paper cannot clearly answer this question. Minor comments: - I found the presentation of MCMC a bit unclear. In some places (e.g. line 252) you say MCMC "requires a homogeneously dense Poisson distributed trajectory discretization at every iteration." but then you introduce the "Improved MCMC" algorithm which has an \Omega_i for each state, and presumably has a coarser discretization. References: Paisley, John, and Lawrence Carin. "Hidden Markov models with stick-breaking priors." IEEE Transactions on Signal Processing 57.10 (2009): 3905-3917. Johnson, Matthew, and Alan Willsky. "Stochastic variational inference for Bayesian time series models." International Conference on Machine Learning. 2014.
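For readers less familiar with the uniformization construction both methods rely on, here is a minimal forward-simulation sketch (the standard textbook construction, not the authors' inference code):

```python
import numpy as np

def sample_mjp_uniformization(Q, T, s0, seed=0):
    # Q: n x n rate matrix (nonnegative off-diagonal rates, rows sum to zero).
    # Uniformization: pick Omega >= max_i |Q_ii|; candidate jump times form a
    # Poisson(Omega) process, and the state moves along the DTMC B = I + Q / Omega.
    # Self-transitions of B are the 'virtual' jumps the skeleton introduces.
    rng = np.random.default_rng(seed)
    n = Q.shape[0]
    Omega = 1.5 * np.max(-np.diag(Q))
    B = np.eye(n) + Q / Omega
    num_jumps = rng.poisson(Omega * T)
    times = np.sort(rng.uniform(0.0, T, size=num_jumps))
    states = [s0]
    for _ in times:
        states.append(rng.choice(n, p=B[states[-1]]))
    return times, np.array(states)
```

The modeling choices contrasted above — shared \Omega vs per-state \Omega_i, sampled vs optimized skeleton — are all choices about how this candidate-time grid is generated.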
nips_2017_2941
Diverse and Accurate Image Description Using a Variational Auto-Encoder with an Additive Gaussian Encoding Space This paper explores image caption generation using conditional variational autoencoders (CVAEs). Standard CVAEs with a fixed Gaussian prior yield descriptions with too little variability. Instead, we propose two models that explicitly structure the latent space around K components corresponding to different types of image content, and combine components to create priors for images that contain multiple types of content simultaneously (e.g., several kinds of objects). Our first model uses a Gaussian Mixture model (GMM) prior, while the second one defines a novel Additive Gaussian (AG) prior that linearly combines component means. We show that both models produce captions that are more diverse and more accurate than a strong LSTM baseline or a "vanilla" CVAE with a fixed Gaussian prior, with AG-CVAE showing particular promise.
This paper investigates the task of image-conditioned caption generation using deep generative models. Compared to existing methods with a pure LSTM pipeline, the proposed approach augments the representation with an additional data-dependent latent variable. The paper formulates the problem under the variational auto-encoder (VAE) framework by maximizing the variational lower bound as the objective during training. A data-dependent additive Gaussian prior is introduced to address the issue of limited representation power when applying VAEs to caption generation. Empirical results demonstrate the proposed method is able to generate diverse and accurate sentences compared to a pure LSTM baseline. == Qualitative Assessment == I like the motivation of adding a stochastic latent variable to the caption generation framework. Augmenting the prior of a VAE is not a novel contribution, but I see novelty in applying it to the caption generation task. Performance-wise, the proposed AG-CVAE achieved better accuracy compared to both the LSTM baseline and other CVAE baselines (see Table 2). The paper also analyzed the diversity of the generated captions in comparison with the pure LSTM-based approach (see Figure 5). Overall, the paper is generally well-written with sufficient experimental details. Considering the additive Gaussian prior as the major contribution, the current version does not seem to be very convincing to me. I am happy to raise my score if my concerns can be addressed in the rebuttal. * Any strong evidence showing the advantages of AG-CVAE over CVAE/GMM-CVAE? The improvements in Table 2 are not very significant. For qualitative results (Figure 5 and other figures in the supplementary materials), side-by-side comparisons between AG-CVAE and CVAE/GMM-CVAE are missing. It is not crystal clear to me whether AG-CVAE actually adds more representation power compared to CVAE/GMM-CVAE. Please comment on this in the rebuttal and include such comparisons in the final version of the paper (or supplementary materials). * Diversity evaluation: it is not clear why AG-CVAE performs worse than CVAE. Also, there is no explanation of the performance gap between different variations of AG-CVAE. Since CVAE is a stochastic generative model, I wonder whether the top 10 sentences are sufficient for diversity evaluations? The results will be much more convincing if the authors provide a curve (y-axis is the diversity measure and x-axis is the number of sentences). Additional comments: * Pre-defined means of clusters for GMM-CVAE and AG-CVAE (Line 183): It is not surprising that the authors failed to obtain better results when the means of clusters u_k are free variables. In principle, it is possible to learn duplicate representations without any constraints (e.g., sparsity or orthogonality) on u_k or c_k. I would encourage the authors to explore this direction a bit in the future. Hopefully, a learnable data-dependent prior can boost the performance to some extent.
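To spell out the prior being evaluated, here is my sketch of the additive Gaussian construction as described in the abstract (the isotropic covariance is an assumption for illustration; the paper may weight per-component variances differently):

```python
import numpy as np

def ag_prior(c, u, sigma2=1.0):
    # Additive Gaussian (AG) prior: the mean linearly combines the K component
    # means u[k] with the image's content weights c (e.g., detected object types).
    # c: (K,) nonnegative weights summing to 1; u: (K, d) component means.
    mean = c @ u                        # sum_k c_k * u_k
    cov = sigma2 * np.eye(u.shape[1])   # assumed isotropic, for illustration
    return mean, cov
```

A curve of diversity versus number of sampled sentences, as suggested above, would show how much of this prior's mass actually translates into distinct captions.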
nips_2017_2724
Bayesian Optimization with Gradients Bayesian optimization has been successful at global optimization of expensive-to-evaluate multimodal objective functions. However, unlike most optimization methods, Bayesian optimization typically does not use derivative information. In this paper we show how Bayesian optimization can exploit derivative information to find good solutions with fewer objective function evaluations. In particular, we develop a novel Bayesian optimization algorithm, the derivative-enabled knowledge-gradient (d-KG), which is one-step Bayes-optimal, asymptotically consistent, and provides greater one-step value of information than in the derivative-free setting. d-KG accommodates noisy and incomplete derivative information, comes in both sequential and batch forms, and can optionally reduce the computational cost of inference through automatically selected retention of a single directional derivative. We also compute the d-KG acquisition function and its gradient using a novel fast discretization-free technique. We show d-KG provides state-of-the-art performance compared to a wide range of optimization procedures with and without gradients, on benchmarks including logistic regression, deep learning, kernel learning, and k-nearest neighbors.
Major: To me this paper is an important one, as it presents substantial contributions to Bayesian Optimization that are at the same time elegant and practically efficient. While the paper is well written, it could be formally improved in places; below are a few comments/questions that might be useful toward such improvements. Besides this, I was a bit puzzled by the fact that Theorem 1 is not very precisely stated; also, given the fact that it is proven in the appendix (which is understandable given the space limitations) and with a proof that requires substantial time to proofread formally (as it is almost a second paper in itself, with a specific policy formalism/set-up), I had to adapt my confidence score accordingly. Also, I was wondering if the consistency proof could work in the case of noiseless observations, where it does not make sense to replicate evaluations at the same point(s)? Nevertheless, from the methodological contributions and their practical relevance, I am convinced that the presented approach should be spread to BO/ML researchers and that NIPS is an excellent platform to do so. Minor: * In the abstract: "most optimization methods" is a bit strong. Derivative-free optimization methods should not be forgotten. Yet, gradients are indeed essential in (differentiable) local optimization methods. * In the abstract: syntax issue in "We show d-KG provides". * Introduction: from "By contrast", see comment above regarding *local* optimization. * Introduction: "relative value" does not sound very clear to me. Maybe speak of "variations" or so? * Introduction: "we explore the what, when, and why". In a way yes, but the main emphasis is on d-KG, and even if generalities and other approaches are tackled, it is probably a bit strong to claim/imply that *Bayesian Optimization with Gradients* is *explored* in a paper subjected to such space restrictions. * Introduction: from "And even" to "finite differences"; in such a case, why not use this credit to do more criterion-driven evaluations? Are there good reasons to believe that two neighbouring points approximating a derivative will be better than two (more) distant well-chosen points? [NB: could there exist a connection to "twin points", in Gaussian kernel settings?] * Section 3.1: "to find argmin"; so, the set [in case of several minimizers]? * Section 3.1: "linear operator" calls for additional regularity condition(s). By the way, it would be nice to ensure differentiability of the GP via the chosen mu and K. * Section 3.1: the notation used in equation (3.2) seems to imply that there cannot be multiple evaluations at the same x. Notations in terms of y_i (and similar for the gradient), corresponding to points x_i, could accommodate that. * Section 3.1: the paragraph from line 121 to 126 sounds a bit confusing to me (I found the writing rather cryptic there) * Section 3.2: "expected loss"; one would expect a translation by the actual min there (so that loss=0 for successful optim) * Section 3.2: in equation (3.3), conditioning on "z" is abusive. By the way, why this "z" notation (and z^(i)) for "x" points? * Algorithm 2, line 2: a) =argmax abusive and b) is theta on the unit sphere? restricted to canonical directions? * Section 3.3: in the expression of the d-KG factor, the term \hat{\sigma}^{(n)} appears to be related to (a generalization of) kriging update formulas for batch-sequential data assimilation. Maybe it could make sense to connect one result to the other? * Section 3.3: the approach with the envelope theorem is really nice.
However, how are the x^*(W) estimated? Detail needed... * Section 3.3: "K samples"; from which distribution? + Be careful, the letter K was already used before, for the covariance. * Section 3.3, line 191: "conditioning" does not seem right here...conditional? conditionally? * Section 3.4: in Prop. 1, you mean for all z^{(1:q)}? Globally the propositions lack mathematical precision... * Section 3.4: ...in particular, what are the assumptions in Theorem 1? E.g., is f assumed to be drawn from the running GP? * Section 4: the notation Sigma is used for the covariance kernel while K was used before. * Section 4: about "the number of function evaluations"; do the figures account for the fact that "d-approaches" do several f evaluations at each iteration? * Section 5: about "low dimensional"; not necessarily...See REMBO and also recent attempts to combine BO and kernel (sparse) decompositions.
nips_2017_2542
Fast, Sample-Efficient Algorithms for Structured Phase Retrieval We consider the problem of recovering a signal x* ∈ R^n from magnitude-only measurements, y_i = |⟨a_i, x*⟩| for i ∈ {1, 2, . . . , m}. Also known as the phase retrieval problem, it is a fundamental challenge in nano-, bio- and astronomical imaging systems, and speech processing. The problem is ill-posed, and therefore additional assumptions on the signal and/or the measurements are necessary. In this paper, we first study the case where the underlying signal x* is s-sparse. We develop a novel recovery algorithm that we call Compressive Phase Retrieval with Alternating Minimization, or CoPRAM. Our algorithm is simple and can be obtained via a natural combination of the classical alternating minimization approach for phase retrieval, with the CoSaMP algorithm for sparse recovery. Despite its simplicity, we prove that our algorithm achieves a sample complexity of O(s² log n) with Gaussian samples, which matches the best known existing results. It also demonstrates linear convergence in theory and practice and requires no extra tuning parameters other than the signal sparsity level s. We then consider the case where the underlying signal x* arises from structured sparsity models. We specifically examine the case of block-sparse signals with uniform block size of b and block sparsity k = s/b. For this problem, we design a recovery algorithm that we call Block CoPRAM that further reduces the sample complexity to O(ks log n). For sufficiently large block lengths of b = Θ(s), this bound equates to O(s log n). To our knowledge, this constitutes the first end-to-end linearly convergent family of algorithms for phase retrieval where the Gaussian sample complexity has a sub-quadratic dependence on the sparsity level of the signal.
The authors' rebuttal contains some points which should be explicitly included in a revised version if the paper is accepted. ----------- The authors study the problem of compressive (or sparse) phase retrieval, in which a sparse signal x\in\R^n is to be recovered from measurements abs(Ax), where A is a design (measurement) matrix, and abs() takes the entry-wise absolute value. The authors propose an iterative algorithm that applies CoSaMP in each iteration, and prove convergence to the correct, unknown sparse vector x with high probability, assuming that the number of measurements is at least C*s*log(n/s), where s is the sparsity of x and n is the ambient dimension. The paper is well written and clear, and the result is novel as far as I can tell. However, the problem under study is not adequately motivated. The authors only consider real-valued signals throughout the paper, yet bring motivation from real phase retrieval problems in science in which the signals are, without exception, complex. The authors aim to continue the work in refs. [21,22], which also considered only real-valued signals. Looking at those papers, I could find no satisfying motivation to study the problem as formulated. The algorithm proposed is compared with those proposed in refs. [21,22]. According to Table 1, it enjoys the same sample complexity requirement and the same running time (up to constants) as SPARTA [22] and TWF [21]. (While the sample complexity requirement is proved in the paper - Theorem 4.2 - I could not find a proof of the running time claimed in Table 1). As far as I could see, the only improvement of the proposed algorithm over prior art is the empirical performance (Section 6). In addition, the authors study a closely related problem, where the signal is assumed to be block sparse, and develop a version of the algorithm adapted to this problem. As for the sparse (non-block) version, in my opinion the limited simulation study offered by the authors is not sufficient to establish that the proposed algorithm improves in any way over those of [21,22]. Without a careful empirical study, or alternatively a formal result showing that the proposed algorithm improves over state of the art, the merit of the proposed algorithm is not sufficiently established. As for the block version, it seems that here the algorithm, which is specifically tailored to the block version, improves over state of the art. However, in my opinion the problem of recovery of *real* block-sparse signals from absolute values of linear measurements is not sufficiently motivated. Other comments: The authors assume a Gaussian measurement matrix. The main results (Theorem 4.1, Theorem 4.2, Theorem 5.2) must explicitly specify this assumption. As written now, the reader may incorrectly infer that the Theorem makes no assumptions on the measurement matrix.
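To fix ideas on the algorithmic skeleton under review, here is a minimal alternating-minimization sketch, with a crude thresholded least-norm solve standing in for the paper's CoSaMP call (my simplification, not the authors' implementation):

```python
import numpy as np

def altmin_sparse_pr(A, y, s, x0, iters=50):
    # Alternate between (i) estimating the missing signs p = sign(A x) and
    # (ii) fitting a sparse x to the 'linearized' targets p * y.
    x = x0.copy()
    for _ in range(iters):
        p = np.sign(A @ x)                        # sign (phase) estimate
        z, *_ = np.linalg.lstsq(A, p * y, rcond=None)
        idx = np.argsort(np.abs(z))[-s:]          # keep the s largest entries
        x = np.zeros_like(z)
        x[idx] = z[idx]
        # CoPRAM replaces this thresholded solve with a CoSaMP solve, which is
        # what drives the claimed O(s^2 log n) Gaussian sample complexity.
    return x
```

A head-to-head empirical comparison of this kind of loop against SPARTA/TWF across sampling regimes is the sort of study the review above asks for.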
nips_2017_3123
Countering Feedback Delays in Multi-Agent Learning We consider a model of game-theoretic learning based on online mirror descent (OMD) with asynchronous and delayed feedback information. Instead of focusing on specific games, we consider a broad class of continuous games defined by the general equilibrium stability notion, which we call λ-variational stability. Our first contribution is that, in this class of games, the actual sequence of play induced by OMD-based learning converges to Nash equilibria provided that the feedback delays faced by the players are synchronous and bounded. Subsequently, to tackle fully decentralized, asynchronous environments with (possibly) unbounded delays between actions and feedback, we propose a variant of OMD which we call delayed mirror descent (DMD), and which relies on the repeated leveraging of past information. With this modification, the algorithm converges to Nash equilibria with no feedback synchronicity assumptions and even when the delays grow superlinearly relative to the horizon of play.
If we accept that distributed learning is interesting, then this article presents a nice treatment of distributed mirror descent in which feedback may be asynchronous and delayed. Indeed, we are presented with a provably convergent learning algorithm for continuous action sets (in classes of games) even when individual players' feedback is received with differing levels of delay; furthermore, the regret at time T is controlled as a function of the total delay to time T. This is a strong result, achieved by using a suite of very current proof techniques - lambda-Fenchel couplings serving as primal-dual Bregman divergences and associated tools. I have some concerns, but overall I think this is a good paper.
- (very minor) In the first para of Section 2.2, "following learning scheme" actually refers to Algorithm 1, over the page.
- Lemma 3.2. If the concept of variational stability implies that all Nash equilibria of a game are in a closed and convex set, to me this is a major restriction on the class of games for which the result is relevant. Two example classes are given in the supplementary material, but I think the main paper should be more upfront about what kinds of games we expect the results to hold in.
- At the bottom of page 5, we are presented with an assumption (buried in the middle of a paragraph) which is in some sense the converse to Lemma 4.2. While the paper claims the assumption is very weak, I would still prefer it to be made more explicit, and more effort made to explain why it's weak, and what might make it fail.
- In Algorithm 2, the division by |G^t_i| is very natural. Why is something similar not done in Algorithm 1?
- (Minor) Why do we talk about "Last iterate convergence"? This is a term I'm unfamiliar with. I'm more used to "convergence of intended play" or "convergence of actual play".
- References 15 and 16 are, I think, repeats?
- You should probably be referencing recent works e.g. [Mertikopoulos] and [Bervoets, S., Bravo, M., and Faure, M]
nips_2017_1025
Stochastic Optimization with Variance Reduction for Infinite Datasets with Finite Sum Structure Stochastic optimization algorithms with variance reduction have proven successful for minimizing large finite sums of functions. Unfortunately, these techniques are unable to deal with stochastic perturbations of input data, induced for example by data augmentation. In such cases, the objective is no longer a finite sum, and the main candidate for optimization is the stochastic gradient descent method (SGD). In this paper, we introduce a variance reduction approach for these settings when the objective is composite and strongly convex. The convergence rate outperforms that of SGD, with a typically much smaller constant factor that depends only on the variance of gradient estimates due to perturbations of a single example.
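For intuition, a rough sketch of a variance-reduced update in this perturbed-data setting (my construction under assumptions: the per-example gradient table and the SAGA-style correction shown here are illustrative, and S-MISO's actual update differs in its details):

```python
import numpy as np

def variance_reduced_sgd(w0, grad_i, n, steps, alpha=0.05, rng=None):
    """grad_i(w, i, rng) returns a stochastic gradient of component i,
    evaluated on a freshly perturbed (augmented) copy of example i."""
    rng = rng or np.random.default_rng(0)
    w = w0.copy()
    table = np.zeros((n, len(w0)))          # last gradient seen per example
    running_mean = table.mean(axis=0)
    for _ in range(steps):
        i = rng.integers(n)
        g = grad_i(w, i, rng)               # only single-example perturbation noise left
        w -= alpha * (g - table[i] + running_mean)
        running_mean += (g - table[i]) / n  # keep the average current in O(dim)
        table[i] = g
    return w
```

The point mirrored from the abstract is that the correction term cancels the across-examples variance, leaving only the variance induced by perturbing a single example.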
Summary of the paper
====================
The paper considers the setting of finite-sum problems where each individual function undergoes a random perturbation. The authors devise an adaptation of the finite-sums algorithm MISO/Finito, called S-MISO, for this setting, and provide a convergence analysis based on the noise (and the analytic) parameters of the model, along with supporting experiments on various datasets from different domains.
Evaluation
==========
The algorithm devised by the authors seems to effectively exploit the extra structure offered by this setting (in comparison to the generic stochastic optimization setting) - both theoretically, where S-MISO is proven to improve upon SGD by a factor dependent on the overall variance and the average individual variance, and empirically, where S-MISO is shown to outperform SGD (designed for generic stochastic problems) and N-SAGA (designed for related settings). This setting subsumes important applications where data augmentation/perturbation has proven successful (particularly, image and text classification), rendering it meaningful and interesting. On the flip side, the convergence analysis is somewhat loosely stated (e.g., the use of epsilon bar) and there are highly non-trivial user-specified parameters which have to be determined and carefully tuned. Moreover, although the expected performance on multi-layer networks is considered in the body of the paper, it is not addressed appropriately in Section 4, where supporting experiments are presented (only 2-layer networks) - I find this very disappointing as this is one of the main promises of the paper. Lastly, the paper is relatively easy to follow.
Comments
========
L8 - As convergence rates are usually stated using O-notation, it is not entirely clear how to interpret 'smaller constant factor'.
L17 - What does 'simple' stand for? proximal-friendly?
L21 - Can you provide pointers/cross-references to support 'fundamental'? Also, consider rephrasing this sentence.
L47 - Consider rephrasing this sentence.
L65 - Table 2: Data radius does not seem to be defined anywhere in the text.
L66 - How does concentrating on a quantity that depends only on the minimizer affect the asymptotic properties of the stated convergence analysis?
L69 - 'note that\nabla' is missing a space.
L77 - In what sense is this rate 'unimprovable'?
L78 - This paragraph seems somewhat out of place. Maybe put it under 'related work'?
L90 - This paragraph too seems somewhat out of place (it seems to be related to Section 4).
L116+L121 - the wording 'when f_i is an expectation' is somewhat confusing.
L183+L185 - The use of epsilon bar is not conventional and makes it hard to parse this statement. Please consider restating the convergence analysis in terms of the pre-defined accuracy level (alternatively, consider providing more explanation of the upper bound parameters).
L192 - Can you elaborate more on concrete choices for gamma?
L198 - 'as large as allowed' in terms of Eq. 9?
L200 - Figure 1: can you elaborate more on 'satisfies the theory'?
L208 - Some experiments seem to be 400 epochs long.
L226 - Here and after, numbers are missing a thousands separator.
L261 - The comparison would be fairer if a non-uniform version of SGD were used.
L271 - This line seems out of place.
nips_2017_3476
Efficient Approximation Algorithms for String Kernel Based Sequence Classification Sequence classification algorithms, such as SVM, require a definition of a distance (similarity) measure between two sequences. A commonly used notion of similarity is the number of matches between k-mers (k-length subsequences) in the two sequences. Extending this definition, by considering two k-mers to match if their distance is at most m, yields better classification performance. This, however, makes the problem computationally much more complex. Known algorithms to compute this similarity have computational complexity that renders them applicable only for small values of k and m. In this work, we develop novel techniques to efficiently and accurately estimate the pairwise similarity score, which enables us to use much larger values of k and m, and get higher predictive accuracy. This opens up a broad avenue of applying this classification approach to audio, images, and text sequences. Our algorithm achieves excellent approximation performance with theoretical guarantees. In the process we solve an open combinatorial problem, which was posed as a major hindrance to the scalability of existing solutions. We give analytical bounds on quality and runtime of our algorithm and report its empirical performance on real world biological and music sequence datasets. acids) and predicting protein folds (functional three-dimensional structures) are essential tasks in bioinformatics. Sequence classification algorithms have been applied to both of these problems with great success [3,10,13,18,19,20,25]. Music data, a real valued signal when discretized using vector quantization of MFCC features, is another flavor of sequential data [26]. Sequence classification has been used for recognizing genres of music sequences with no annotation and identifying artists from albums [12,13,14]. Text documents can also be considered as sequences of words from a language lexicon. Categorizing texts into classes based on their topics is another application domain of sequence classification [11,15]. While general purpose classification methods may be applicable to sequence classification, huge lengths of sequences, large alphabet sizes, and large scale datasets prove to be rather challenging for such techniques. Furthermore, we cannot directly apply classification algorithms devised for vectors in metric spaces because in almost all practical scenarios sequences have varying lengths unless some mapping is done beforehand. In one of the more successful approaches, the variable-length sequences are represented as fixed dimensional feature vectors. A feature vector typically is the spectra (counts) of all k-length substrings (k-mers) present exactly [18] or inexactly (with up to m mismatches) [19] within a sequence. A kernel function is then defined that takes as input a pair of feature vectors and returns a real-valued similarity score between the pair (typically the inner product of the respective spectra). The matrix of pairwise similarity scores (the kernel matrix) thus computed is used as input to a standard support vector machine (SVM) [5,27] classifier, resulting in excellent classification performance in many applications [19]. In this setting k (the length of substrings used as the basis of the feature map) and m (the mismatch parameter) are independent variables directly related to classification accuracy and time complexity of the algorithm.
It has been established that using larger values of k and m improves classification performance [11,13]. On the other hand, the runtime of kernel computation by the efficient trie-based algorithm [19,24] is O(k^(m+1) |Σ|^m (|X| + |Y|)) for two sequences X and Y over alphabet Σ. Computation of the mismatch kernel between two sequences X and Y reduces to the following two problems. i) Given two k-mers α and β that are at Hamming distance d from each other, determine the size of the intersection of the m-mismatch neighborhoods of α and β (k-mers that are at distance at most m from both of them). ii) For 0 ≤ d ≤ min{2m, k} determine the number of pairs of k-mers (α, β) ∈ X × Y such that the Hamming distance between α and β is d. In the best known algorithm [13] the former problem is addressed by precomputing the intersection size in constant time for m ≤ 2 only. A sorting and enumeration based technique is proposed for the latter problem that has computational complexity O(2^k (|X| + |Y|)), which makes it applicable for moderately large values of k (of course limited to m ≤ 2 only). In this paper, we completely resolve the combinatorial problem (problem i) for all values of m. We prove a closed form expression for the size of the intersection of m-mismatch neighborhoods that lets us precompute these values in O(m^3) time (independent of |Σ|, k, lengths and number of sequences). For the latter problem we devise an efficient approximation scheme inspired by the theory of locality sensitive hashing to accurately estimate the number of k-mer pairs between the two sequences that are at distance d. Combining the above two, we design a polynomial time approximation algorithm for kernel computation. We provide probabilistic guarantees on the quality of our algorithm and analytical bounds on its runtime. Furthermore, we test our algorithm on several real world datasets with large values of k and m to demonstrate that we achieve excellent predictive performance. Note that string kernel based sequence classification was previously not feasible for this range of parameters.
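To make the decomposition concrete, here is a deliberately brute-force reference implementation (my sketch, for intuition only - the paper's whole point is to avoid this exponential enumeration): it computes K(X, Y) by summing, over all k-mer pairs, the size of the intersection of their m-mismatch neighborhoods.

```python
from itertools import product

def hamming(a, b):
    return sum(c1 != c2 for c1, c2 in zip(a, b))

def mismatch_kernel(X, Y, k, m, alphabet):
    kmers_x = [X[i:i + k] for i in range(len(X) - k + 1)]
    kmers_y = [Y[i:i + k] for i in range(len(Y) - k + 1)]
    K = 0
    for a in kmers_x:
        for b in kmers_y:
            if hamming(a, b) > 2 * m:
                continue                 # intersection is empty beyond distance 2m
            # enumerate the neighbourhood intersection explicitly
            K += sum(1 for c in product(alphabet, repeat=k)
                     if hamming(c, a) <= m and hamming(c, b) <= m)
    return K

print(mismatch_kernel("ACGTAC", "ACGGAC", k=3, m=1, alphabet="ACGT"))
```

The inner intersection size depends only on (k, m, d); the paper's closed-form expression replaces that enumeration, and its hashing scheme estimates how many k-mer pairs fall at each distance d.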
The authors describe an approximation algorithm for k-mer (with mismatches) based string kernels. The contribution is centered around a closed form expression of the intersection size of mismatching neighbourhoods. The algorithm is evaluated in the context of sequence classification using SVM. I think this is a great paper: clear motivation, good introduction, clear contribution, theoretical back-up, nice experimental results. I have a few concerns regarding presentation and structuring, as well as doubts on the relevance of the presented theory.
Presentation. I think Section 3 is too technical. It contains a lot of notation, and a lot of clutter that actually hinders understanding the main ideas. On the other hand, intuition on crucial ideas is missing, so I suggest moving a few details into the Appendix and elaborating the high-level ideas instead.
- A table with all relevant quantities (and explanations) would make it much easier for a non-string-kernel person to follow the establishment of Theorem 3.3. An intuitive description of the theorem (and proof idea) would also help.
- Algorithm 1 is not really referenced from the main text. It first appears in Thm 3.10. It is also very sparsely annotated/explained. For example, the parameter meanings are not explained. Subroutines also need annotation. I think it deserves to be at the center of Section 3's text, and the text should describe the high-level ideas.
- It should be pointed out more that Thm 3.3 is the open combinatorial problem mentioned in the abstract.
- In the interest of readability, I think the authors should comment more on what it is that their algorithm approximates, what the parameters that control the approximation quality are, and what possible failure cases would be.
- Theorem 3.13 should stay in the main text as this concentration inequality is the main theoretical message. All results establishing the Chebyshev inequality (Thm 3.11, Lemma 3.12, Lemma 3.14) should go to the appendix. Rather, the implications of Theorem 3.13 should be elaborated on more. I appreciate that the authors note that these bounds are extremely loose.
Theory.
- Theorem 3.13 is a good first step to understanding the approximation quality (and indeed consistency) of the algorithm. It is, however, not useful in the sense that we do not care about the kernel approximation itself, but about the generalization error in the downstream algorithm. Kernel method theory has seen a substantial amount of work in that respect (e.g. for Nyström and Fourier feature regression/classification). This should be acknowledged, or even better: established. A simple approach would be perturbation analysis using the established kernel approximation error; better would be directly controlling the error of the estimated decision boundary of the SVM.
Experiments.
- Runtime should be checked for various m.
- Where does the approximation fail? An instructive synthetic example would help understanding when the algorithm is appropriate and when it is not.
- The authors mention that their algorithm allows for previously impossible settings of (k,m). In Table 3, however, there is only a single case where they demonstrate an improved performance as opposed to the exact algorithm (and the improvement is marginal). Either the authors need to exemplify the statement that their algorithm allows solving previously infeasible problems (or allows for better accuracy), or they should remove it.
Minor.
- Figure 1 has no axis units.
- Typo line 95 "theset".
- Typo line 204 "Generating these kernels days;".
nips_2017_3492
Real Time Image Saliency for Black Box Classifiers In this work we develop a fast saliency detection method that can be applied to any differentiable image classifier. We train a masking model to manipulate the scores of the classifier by masking salient parts of the input image. Our model generalises well to unseen images and requires a single forward pass to perform saliency detection, making it suitable for use in real-time systems. We test our approach on the CIFAR-10 and ImageNet datasets and show that the produced saliency maps are easily interpretable, sharp, and free of artifacts. We suggest a new metric for saliency and test our method on the ImageNet object localisation task. We achieve results outperforming other weakly supervised methods.
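A schematic of the kind of masking objective described above, as a sketch under assumptions - the exact loss terms, their weights (l_area, l_tv), and the use of log-probabilities are my illustrative choices, not necessarily the paper's: the mask should be small and smooth, should destroy the classifier's score when the salient region is removed, and should preserve it when only that region is kept.

```python
import torch

def saliency_loss(classifier, image, mask, target, l_area=10.0, l_tv=1.0):
    """image: (B, C, H, W); mask: (B, 1, H, W) in [0, 1], 1 = salient."""
    p_removed = classifier(image * (1 - mask)).softmax(-1)[..., target]
    p_kept = classifier(image * mask).softmax(-1)[..., target]
    area = mask.mean()                                        # prefer small masks
    tv = (mask[..., 1:, :] - mask[..., :-1, :]).abs().mean() \
       + (mask[..., :, 1:] - mask[..., :, :-1]).abs().mean()  # prefer smooth masks
    # minimize: confidence after removal, negative confidence after keeping
    return torch.log(p_removed + 1e-6).mean() - torch.log(p_kept + 1e-6).mean() \
         + l_area * area + l_tv * tv
```

The masking network is then trained to output masks minimizing this loss, so that at test time a single forward pass produces the saliency map.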
The paper proposes an approach to learn saliency masks. The proposed approach is based on a neural network and can process multiple images per second (i.e. it is fast). To me the paper is borderline; I would not object to rejection or acceptance. I really believe in the concept of learning to explain a model and I think the paper has some good ideas. There are no obvious mistakes but there are clear limitations.
- The saliency is only measured indirectly, either through weakly supervised localisation or through the proposed saliency metric, and both of these measures have clear limitations that I think should be discussed in the paper. The weakly supervised localisation is not perfect as a measure. If the context in which an object appears is essential to determine its class, the object localisation does not have to correlate with saliency quality. The results on weakly supervised localisation are interesting, but I think there is a big caveat when using them as a quality metric for saliency. The saliency metric is not perfect because of how it is applied. The estimated salient region is cropped. This crop is then rescaled to the original image size, with the original aspect ratio. This could introduce two artefacts. First, the change of the aspect ratio might impact how well it can be classified. Second, in the proposed metric a small salient region is preferred. Since a small region is blown up heavily for re-classification, the scale at which the object is now presented to the classifier might not be ideal. (Convnets are generally pretty translation invariant, but the scaling invariance must be learned, and there are probably limits to this.)
What is not discussed here is how much the masking model depends on the architecture used for learning the masks. Did the authors at one point experiment with different architectures, and how did this influence the result?
Minor comments
Are the results in Table 1 obtained for all classes or only for the correct class?
- Please specify which LRP variant and parameter setting were used for comparison. They have an epsilon, alpha-beta, and more variants with parameters.
*** Post rebuttal edit ***
The fact that how well the saliency metric works depends on the quality and the scale invariance of the classifier strongly limits the applicability of the proposed method. It can only be applied to networks having this invariance. This has important consequences:
- The method cannot be used for models during the training phase, nor for models that do not exhibit this invariance.
- This limits the applicability to other domains (e.g. spectrogram analysis with CNNs).
- The method is not generally applicable to black-box classifiers as claimed in the title.
Furthermore, the response hints at a strong dependence on the masking network.
- As a result, it is not clear to me whether we are visualizing the saliency of the U-network or the masking network.
If these effects are properly discussed in the paper I think it is balanced enough for publication. If not, it should not be published.
nips_2017_1594
Principles of Riemannian Geometry in Neural Networks This study deals with neural networks in the sense of geometric transformations acting on the coordinate representation of the underlying data manifold which the data is sampled from. It forms part of an attempt to construct a formalized general theory of neural networks in the setting of Riemannian geometry. From this perspective, the following theoretical results are developed and proven for feedforward networks. First it is shown that residual neural networks are finite difference approximations to dynamical systems of first order differential equations, as opposed to ordinary networks that are static. This implies that the network is learning systems of differential equations governing the coordinate transformations that represent the data. Second it is shown that a closed form solution of the metric tensor on the underlying data manifold can be found by backpropagating the coordinate representations learned by the neural network itself. This is formulated in a formal abstract sense as a sequence of Lie group actions on the metric fibre space in the principal and associated bundles on the data manifold. Toy experiments were run to confirm parts of the proposed theory, as well as to provide intuitions as to how neural networks operate on data.
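For reference, the residual-network-as-finite-difference claim can be written out explicitly (my rendering, assuming unit layer spacing that is then sent to zero):

```latex
\[
  x^{a}(l+1) = x^{a}(l) + f^{a}\bigl(x^{b}(l);\, l\bigr)
  \quad\xrightarrow{\;\delta l \to 0\;}\quad
  \frac{\mathrm{d} x^{a}}{\mathrm{d} l} = f^{a}\bigl(x^{b}(l);\, l\bigr),
\]
```

i.e. the left-hand side is the forward Euler discretization of the flow on the right, which is what licenses reading a deep residual network as learning a dynamical system.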
The paper develops a mathematical framework for working with neural network representations in the context of finite differences and differential geometry. In this framework, data points going through layers have fixed coordinates but space is smoothly curved with each layer. The paper presents a very interesting framework for working with neural network representations, especially in the case of residual networks. Unfortunately, taking the limit as the number of layers goes to infinity does not make practical application very easy and somewhat limits the impact of this paper. The paper is not always completely clear. Since the goal of the paper is to present a minority perspective, clarity should be paramount.
The experiments are a bit disappointing. Their goal is not always very explicit. What the experiments do exactly is not very clear (despite their obvious simplicity). The experiments which show how a simple neural network disentangles data points feel well known, and their relation to the current paper feels a bit tenuous.
Line 62: The . in tensors (…) ensures consistency in the order of the superscripts and subscripts. => Not very clear. I assume the indices with a dot at the end are to be on the left of indices with a dot at the beginning (so that the dots would sort of align).
line 75, Eq 1: Taking the limit as L -> \infty seems to pose a problem for practical applications. Wouldn't an equation of the form below make sense (interpolation between layers): x^a(l + \delta l) = x^a(l) + f^a(x^b(l); l) \delta l instead of x^a(l + 1) = x^a(l) + f^a(x^b(l); l) \delta l
6.1 & Figure 1: The harmonic oscillator section (6.1) and the corresponding figures are unclear.
=> The given differential equation does not indicate what the parameters are and what the coordinates are. This is a bit confusing since \xi is a common notation for coordinates in differential geometry. We are told that the problem is two dimensional, so I assume that x is the two dimensional variable.
=> Figure 1 is confusing. What does the finite differencing plot show? What do the x and y axes represent? What about metric transformations? What about the scalar metric values? The fact that the particle stands still is understandable in the context of a network but confusing w.r.t. the given differential equation for x.
=> a, b = 1,2 is not clear in this context. Does this mean that a = 1 and b = 2? Can the state space representation be written in the given way only if a, b = 1,2?
Figure 2: Every plot has the legend "layer 11". Typo?
Section 6.3 is a bit superfluous.
nips_2017_815
Linearly constrained Gaussian processes We consider a modification of the covariance function in Gaussian processes to correctly account for known linear operator constraints. By modeling the target function as a transformation of an underlying function, the constraints are explicitly incorporated in the model such that they are guaranteed to be fulfilled by any sample drawn or prediction made. We also propose a constructive procedure for designing the transformation operator and illustrate the result on both simulated and real-data examples.
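In symbols (my rendering; the operator names are not necessarily the paper's notation): if the constraint is a linear operator condition \mathcal{L}_x f = 0 and one can construct an operator \mathcal{G}_x with \mathcal{L}_x \mathcal{G}_x = 0, then modelling the target as a transformation of an underlying GP gives

```latex
\[
  f = \mathcal{G}_x g, \qquad g \sim \mathcal{GP}\bigl(0,\; k(x, x')\bigr)
  \;\;\Longrightarrow\;\;
  f \sim \mathcal{GP}\bigl(0,\; \mathcal{G}_x\, k(x, x')\, \mathcal{G}_{x'}^{\mathsf{T}}\bigr),
\]
```

so every sample and every posterior prediction satisfies the constraint by construction, rather than only approximately at a finite set of virtual observation points.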
Summary of the Paper: This paper describes a mechanism to incorporate linear operator constraints into the framework of Gaussian process regression. For this, the mean function and the covariance function of the Gaussian process are changed. The aim of this transformation is to guarantee that samples from the GP posterior distribution satisfy the constraints indicated. These constraints are typically in the form of partial derivatives, although any linear operator can be considered in practice, e.g., integration too. Traditional methods incorporated these constraints by introducing additional observations. This has the limitation that it is more expensive and restricted to the observations made. The framework proposed is evaluated on a synthetic problem and on a real problem, showing benefits with respect to the data augmentation strategy.
Detailed comments:
Quality: I think the quality of the paper is good in general. It is a very well written paper. Furthermore, all the key points are carefully described. It also has a strong related work section. The weakest point, however, is the experiments section, in which only one synthetic dataset and one real dataset are considered.
Clarity: The clarity of the paper is high.
Originality: As far as I know the paper seems original. There are some related methods in the literature. However, they simply augment the observed data with virtual instances that have the goal of guaranteeing the constraints imposed.
Significance: I think the problem addressed by the authors is relevant and important for the machine learning community. However, I have the feeling that the authors have not succeeded in conveying this. The examples used by the authors are a bit simple. For example, they consider only a single real example and only the linear operator of derivatives. I have the feeling that this paper may have potential applications in probabilistic numeric methods, in which a GP is often used.
Summing up, I think that this is a good paper. However, the weak experimental section questions its significance. Furthermore, I find it difficult to identify practical applications within the machine learning community. The authors should have made a bigger effort to show this. I would hence consider it a borderline paper.
nips_2017_2169
Multi-Armed Bandits with Metric Movement Costs We consider the non-stochastic Multi-Armed Bandit problem in a setting where there is a fixed and known metric on the action space that determines a cost for switching between any pair of actions. The loss of the online learner has two components: the first is the usual loss of the selected actions, and the second is an additional loss due to switching between actions. Our main contribution gives a tight characterization of the expected minimax regret in this setting, in terms of a complexity measure C of the underlying metric which depends on its covering numbers. In finite metric spaces with k actions, we give an efficient algorithm that achieves regret of the form O(max{C^(1/3) T^(2/3), √(kT)}), and show that this is the best possible. Our regret bound generalizes previously known regret bounds for some special cases: (i) the unit-switching-cost regret Θ(max{k^(1/3) T^(2/3), √(kT)}) where C = Θ(k), and (ii) the interval metric with regret Θ(max{T^(2/3), √(kT)}) where C = Θ(1). For infinite metric spaces with Lipschitz loss functions, we derive a tight regret bound of Θ(T^((d+1)/(d+2))) where d ≥ 1 is the Minkowski dimension of the space, which is known to be tight even when there are no switching costs.
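To illustrate how the two loss components interact, here is a toy harness (my construction, not the paper's algorithm) that charges a policy both its ordinary losses and the metric distance between consecutive actions; on the interval metric, a policy that switches often pays a visibly larger movement bill than a lazy one:

```python
import numpy as np

def play(policy, losses, metric, rng):
    """losses: (T, k) array of per-round losses; metric: (k, k) distances."""
    T, k = losses.shape
    prev, total = None, 0.0
    for t in range(T):
        a = policy(t, rng)
        total += losses[t, a]
        if prev is not None:
            total += metric[prev, a]        # movement (switching) cost
        prev = a
    return total

rng = np.random.default_rng(0)
T, k = 1000, 8
losses = rng.uniform(size=(T, k))
metric = np.abs(np.subtract.outer(np.arange(k), np.arange(k))) / (k - 1)  # interval metric

uniform = lambda t, r: int(r.integers(k))   # switches almost every round
lazy = lambda t, r: (t // 100) % k          # switches only every 100 rounds
print("uniform:", play(uniform, losses, metric, rng))
print("lazy:   ", play(lazy, losses, metric, rng))
```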
The authors consider the setting of Multi-Armed Bandits with movement costs. Basically, the set of arms is endowed with a metric and the player pays a price when she changes her action, depending on the distance to the latest played action. The main contribution of the paper is to generalize previous work to general metrics. They prove matching (up to log factors) upper and lower bounds for the problem and adapt the Slowly-Moving Bandit Algorithm to general metrics (it was previously designed for intervals only). In my opinion the paper is pretty well written and pleasant to read. The literature is well cited. The analysis seems to be quite similar to the previous analysis of the SMB algorithm. The analysis and the algorithm are based on the same idea of hierarchical trees, and the main ingredient of the proof seems to be dealing with non-binary trees. Still, I think it is a nice contribution.
Some remarks:
- Your notion of complexity only deals with parametric spaces of arms. Do you think you can get results for non-parametric spaces, like Lipschitz functions?
- What is the complexity of the algorithm, for instance, if the space of arms is [0,1]^d? I.e., to build the correct tree and to run the algorithm.
- Your hierarchical tree reminds me of the chaining technique. Though the latter cannot be used for bandits, I am wondering if there is any connection.
- Maybe some figures would help to understand the algorithm.
- Is it possible to calibrate eta and the depth of the tree in T?
- Do you think it may be possible to learn the metric online?
Typos:
- l171: a dot is missing
- l258: "actions K. and"
- Algo 1: a space is missing; add eta > 0 to the input
- Eq (3): some formatting issue
- l422: I did not understand the explanation of condition (2)c; don't we have $k >= C$ by definition of C?
- 476: tildes are missing
nips_2017_671
Inductive Representation Learning on Large Graphs Low-dimensional embeddings of nodes in large graphs have proved extremely useful in a variety of prediction tasks, from content recommendation to identifying protein functions. However, most existing approaches require that all nodes in the graph are present during training of the embeddings; these previous approaches are inherently transductive and do not naturally generalize to unseen nodes. Here we present GraphSAGE, a general inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings for previously unseen data. Instead of training individual embeddings for each node, we learn a function that generates embeddings by sampling and aggregating features from a node's local neighborhood. Our algorithm outperforms strong baselines on three inductive node-classification benchmarks: we classify the category of unseen nodes in evolving information graphs based on citation and Reddit post data, and we show that our algorithm generalizes to completely unseen graphs using a multi-graph dataset of protein-protein interactions.
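As a minimal sketch of the "sample and aggregate" step described above (my simplification with the mean aggregator; the weight shapes, ReLU, and neighbour-sample size are illustrative choices):

```python
import numpy as np

def graphsage_layer(h, adj_list, W_self, W_neigh, num_samples, rng):
    """h: (n, d) node features; adj_list: dict node -> list of neighbour ids
    (every node is assumed to have at least one neighbour);
    W_self: (d, d1), W_neigh: (d, d2). Returns (n, d1 + d2) embeddings."""
    n = h.shape[0]
    out = np.empty((n, W_self.shape[1] + W_neigh.shape[1]))
    for v in range(n):
        neigh = np.asarray(adj_list[v])
        sampled = rng.choice(neigh, size=min(num_samples, len(neigh)),
                             replace=False)              # fixed-size neighbourhood sample
        agg = h[sampled].mean(axis=0)                    # mean aggregator
        out[v] = np.concatenate([h[v] @ W_self, agg @ W_neigh])
    out = np.maximum(out, 0.0)                           # ReLU
    norms = np.linalg.norm(out, axis=1, keepdims=True)
    return out / np.maximum(norms, 1e-12)                # normalize embeddings
```

Because the layer is a function of features and sampled neighbourhoods rather than a per-node lookup table, it can be applied unchanged to nodes never seen during training, which is the inductive property the abstract emphasizes.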
The authors introduce GraphSAGE, an inductive representation learning method for graph-structured data. Unlike previous transductive methods, GraphSAGE is able to generalize the representation to previously unseen nodes. The representation is learned through a recursive process that samples from a node's neighborhood and aggregates the results. GraphSAGE outperforms other popular embedding techniques at three node classification tasks.
Quality: The quality of the paper is very high. The framework has several attractive qualities: a representation that is invariant with respect to node index, strong performance at three inductive node classification tasks, and fast training and inference in practice. The authors include code that they intend to release to the public, which is likely to increase the impact of the work.
Clarity: The paper is very well-written, well-organized, and enjoyable to read.
Originality: The idea is incremental but creative and useful.
Significance: This work is likely to be impactful on the NIPS community due to the strong results as well as the fast and publicly available implementation. The work builds on recent developments to offer a tangible improvement to the community.
Overall impression: A high-quality paper that is likely to be of interest to the NIPS community and impactful in the field. Clear accept.
nips_2017_3155
Spectrally-normalized margin bounds for neural networks This paper presents a margin-based multiclass generalization bound for neural networks that scales with their margin-normalized spectral complexity: their Lipschitz constant, meaning the product of the spectral norms of the weight matrices, times a certain correction factor. This bound is empirically investigated for a standard AlexNet network trained with SGD on the mnist and cifar10 datasets, with both original and random labels; the bound, the Lipschitz constants, and the excess risks are all in direct correlation, suggesting both that SGD selects predictors whose complexity scales with the difficulty of the learning task, and that the presented bound is sensitive to this complexity.
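A small sketch of the normalization the abstract describes (my simplification: the correction factor from the full bound is omitted, and only the product of spectral norms is used):

```python
import numpy as np

def normalized_margins(logits, labels, weight_matrices):
    """logits: (N, classes); labels: (N,) ints; weight_matrices: list of 2-D arrays."""
    lipschitz = np.prod([np.linalg.norm(W, 2) for W in weight_matrices])  # spectral norms
    idx = np.arange(len(labels))
    correct = logits[idx, labels]
    others = logits.copy()
    others[idx, labels] = -np.inf
    margins = correct - others.max(axis=1)   # negative exactly on misclassified points
    return margins / lipschitz
```

Plotting the distribution of these normalized margins for true versus random labels is, roughly, the kind of empirical comparison the abstract refers to: under random labels the distribution shifts toward zero.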
This paper proposes analysing the margin distributions of deep neural networks to understand their generalization properties. In particular they propose a normalized margin estimator, which is shown to reflect generalization properties of certain neural networks much better than the naive margin computation. Your notion of margin is different from what I'm used to. (I usually hear margin defined as the minimum distance in input space to a point of a different class, as SVMs are commonly depicted.) Maybe just take a few words to stress this?
Why does the x axis on some (if not all?) of your figures not start at 0? Shouldn't the margin always be positive? Unless you're not taking y to be the argmax of F(x) but instead the true class, in which case it would be negative for a misclassification. Maybe mention this. What is the data distribution in your plots? Train, validation, or test? In general your plots should have clearer axes or captions.
In Theorem 1.1 it's not clear to me what B and W are (without the appendix); maybe just briefly mention what they are. CIFAR and MNIST are acronyms, and should be capitalized. Ditto for SGD. There are some typos throughout, and I feel like the writing is slightly "casual" and could be improved.
Your section on adversarial examples is somewhat puzzling to me. The margins, let's say M1, that you describe initially are in Y-space; they're the distance between the prediction f(x)_y and the next best guess max_i!=y f(x)_i. This is different from the input space margin, let's say M2, which is something like `min_u ||u-x|| s.t. f(u) \neq f(x)`. As such, when you say "The margin is nothing more than the distance an input must traverse before its label is flipped", you're clearly referring to M2, while it would seem your contribution revolves around the M1 concept of margin. While intuitively M1 and M2 are probably related, unless I'm missing something about your work relating them, claiming "low margin points are more susceptible to adversarial noise than high margin points" is a much stronger statement than you may realise.
Overall I quite like this paper, both in terms of the questions it is trying to answer, and in the way it does so. On the other hand, the presentation could be better, and I'm still not convinced that the margin in output space has much value in the way we currently train neural networks. We know for example that the output probabilities of neural networks are not predictive of confidence (even to an extent for Bayesian neural networks). I think more work relating the input margin with the output margin might be more revealing. I've given myself a 2/5 confidence because I am not super familiar with the math used here, so I may be failing to get deeper implications. Hopefully I'll have time to go through this soon.