id | paper_text | review
---|---|---|
nips_2017_1530 | Dynamic Revenue Sharing
Many online platforms act as intermediaries between a seller and a set of buyers. Examples of such settings include online retailers (such as eBay) selling items on behalf of sellers to buyers, or advertising exchanges (such as AdX) selling pageviews on behalf of publishers to advertisers. In such settings, revenue sharing is a central part of running the marketplace for the intermediary, and fixed-percentage revenue sharing schemes are often used to split the revenue among the platform and the sellers. In particular, such revenue sharing schemes require the platform to (i) take at most a constant fraction α of the revenue from auctions and (ii) pay the seller at least the seller-declared opportunity cost c for each item sold. A straightforward way to satisfy the constraints is to set a reserve price at c/(1 − α) for each item, but it is not the optimal solution for maximizing the profit of the intermediary. While previous studies (by Mirrokni and Gomes, and by Niazadeh et al.) focused on revenue-sharing schemes in static double auctions, in this paper, we take advantage of the repeated nature of the auctions. In particular, we introduce dynamic revenue sharing schemes where we balance the two constraints over different auctions to achieve higher profit and seller revenue. This is directly motivated by the practice of advertising exchanges, where the fixed-percentage revenue share should be met across all auctions and not in each auction. In this paper, we characterize the optimal revenue sharing scheme that satisfies both constraints in expectation. Finally, we empirically evaluate our revenue sharing scheme on real data. | This paper studies marketplaces where there is a single seller, multiple buyers, and multiple units of a single good that are sold sequentially. Moreover, there is an intermediary that sells the item for the seller. The intermediary is allowed to keep some of the revenue, subject to various constraints. The underlying auction is the second-price auction with reserve prices. The intermediary must determine how to set the reserve price in order to maximize its profit. Since this auction format is strategy-proof for the buyers, the authors assume the buyers bid truthfully. They also assume that the seller reports its costs truthfully.
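To make the naive policy concrete, here is a one-line check of my own (not taken from the paper) that the reserve price c/(1 − α) satisfies both per-auction constraints: any sale clears at a price p at least the reserve, so paying the seller (1 − α)p covers the cost while the platform's cut never exceeds the allowed fraction,

$$
p \;\ge\; \frac{c}{1-\alpha}
\quad\Longrightarrow\quad
\underbrace{(1-\alpha)\,p}_{\text{seller payment}} \;\ge\; c
\qquad\text{and}\qquad
\underbrace{\alpha\,p}_{\text{platform profit}} \;\le\; \alpha\cdot\text{revenue}.
$$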
First, the authors study “single period revenue sharing,” where at every time step, the intermediary can keep at most an alpha fraction of the revenue and it must cover the seller’s reported cost if the item is sold. Next, they study “multi period revenue sharing,” where the aggregate profit of the intermediary is at most an alpha fraction of the buyers’ aggregate payment and the costs must be covered in aggregate as well. For both of these settings, the authors determine the optimal reserve price. Theoretically, the authors show that multi period revenue sharing has the potential to outperform single period revenue sharing when there are many bidders (the second-highest bid is high), and the seller’s cost is “neither too high nor too low.” They analyze several other model variants as well.
The authors experimentally compare the optimal pricing policies they come up with for the various models with the “naïve policy” which sets the reserve price at c/(1-alpha). Their experiments are on real-world data from an ad exchange. They show that the single period and multi period policies improve over the naïve policy, and the multi period policy outperforms the single period policy.
This paper studies what I think is an interesting and seemingly very applicable auction format. It’s well-written and the results are not obvious. I’d like to better understand the authors’ assumption that the seller costs are reported truthfully. The authors write, “Note that the revenue sharing contract guarantees, at least partially, when the constraint binds (which always happens in practice), the goals of the seller and the platform are completely aligned: maximizing profit is the same as maximizing revenue. Thus, sellers have little incentive to misreport their costs.” I think this means that in practice, the cost is generally so low in contrast to the revenue that the revenue sharing constraint dominates the minimum cost constraint. This doesn’t really convince me that the seller has little incentive to misreport her cost as very high to extract more money from the exchange. Also, as far as I understand, this statement detracts from the overall goal of the paper since it says that in practice, it’s always sufficient to maximize revenue, so the intermediary can ignore the sellers and can just run the Myerson auction. The application to Uber’s platform is also a bit confusing to me because Uber sets the price for the passenger and the pay for the driver. So is Uber the intermediary? The drivers seem to have no say in the matter; they do not report their costs.
=====After author feedback=====
That makes more sense, thanks. |
nips_2017_3437 | Stochastic Submodular Maximization: The Case of Coverage Functions
Stochastic optimization of continuous objectives is at the heart of modern machine learning. However, many important problems are of discrete nature and often involve submodular objectives. We seek to unleash the power of stochastic continuous optimization, namely stochastic gradient descent and its variants, to such discrete problems. We first introduce the problem of stochastic submodular optimization, where one needs to optimize a submodular objective which is given as an expectation. Our model captures situations where the discrete objective arises as an empirical risk (e.g., in the case of exemplar-based clustering), or is given as an explicit stochastic model (e.g., in the case of influence maximization in social networks). By exploiting that common extensions act linearly on the class of submodular functions, we employ projected stochastic gradient ascent and its variants in the continuous domain, and perform rounding to obtain discrete solutions. We focus on the rich and widely used family of weighted coverage functions. We show that our approach yields solutions that are guaranteed to match the optimal approximation guarantees, while reducing the computational cost by several orders of magnitude, as we demonstrate empirically. | The paper discusses the development of stochastic optimization for submodular coverage functions. The results and proofs are a mish-mash of existing results and techniques; nevertheless the results are novel and should spark interest for discussions and further research directions.
The central contribution of the paper is providing a framework to exploit the SGD machinery. To this end, coverage functions can be harnessed, because the concave upper bound is obtained easily. While this seems restrictive, the authors present applications with good empirical performance versus classic greedy.
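To make the coverage case concrete, here is a toy sketch of my own (the names are made up): the multilinear extension of a weighted coverage function has a closed form, and replacing each product term with min(1, ·) yields the concave upper bound; the two agree up to a (1 − 1/e) factor.

```python
import numpy as np

def weighted_coverage(x, covers, w):
    """Multilinear extension of a weighted coverage function.

    covers[u] lists the items that cover element u;
    x[v] is the marginal probability of picking item v.
    """
    return sum(w[u] * (1.0 - np.prod(1.0 - x[list(c)]))
               for u, c in enumerate(covers))

def concave_upper_bound(x, covers, w):
    """Concave extension: each product term replaced by min(1, sum)."""
    return sum(w[u] * min(1.0, x[list(c)].sum())
               for u, c in enumerate(covers))

x = np.array([0.3, 0.5, 0.2])
covers = [{0, 1}, {1, 2}]        # element 0 is covered by items 0 and 1, etc.
w = np.array([1.0, 2.0])
print(weighted_coverage(x, covers, w))      # 1.85
print(concave_upper_bound(x, covers, w))    # 2.20 (always an upper bound)
```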
I have a couple of points:
(a) Is there a reason to use projected SGD over Stochastic Frank-Wolfe [A]? The latter would get rid of messy projection issues for general matroids, albeit the convergence might be slower.
(b) An interesting aspect of the paper is the connection between discrete optimization and convex optimization. I would suggest including more related work that discusses these connections [B,C,D,E]. [B] also talks about an approximate form of submodularity; [C,D] both follow the notion of “lifting” atomic optimization to the continuous domain.
Minor:
The paper is well-written overall. There are minor typos for which I suggest the authors spend some time cleaning up the paper. E.g. Line 22,24 “trough”, line 373 “mosst”. An equation reference is missing in the appendix.
The numbering of the proof of Proposition 2 is messed up.
The proof of Theorem 2 seems to be missing. I understand it follows from pipage rounding, but it might be a good idea to state a lemma, with a reference, from which the approximation guarantee for rounding follows. In general, adding more information about pipage rounding in the appendix for completeness might be a good idea.
The main text is missing a page of references.
[A] Linear Convergence of Stochastic Frank Wolfe Variants. Donald Goldfarb, Garud Iyengar, Chaoxu Zhou
[B] Restricted Strong Convexity Implies Weak Submodularity. Ethan R. Elenberg , Rajiv Khanna , Alexandros G. Dimakis , and Sahand Negahban
[C] On Approximation Guarantees for Greedy Low Rank Optimization Rajiv Khanna , Ethan R. Elenberg1 , Alexandros G. Dimakis , and Sahand Negahban
[D] Lifted coordinate descent for learning with trace-norm regularization. Miro Dudik Zaid Harchaoui , Jérôme Malick
[E] Guaranteed Non-convex Optimization: Submodular Maximization over Continuous Domains. Andrew An Bian, Baharan Mirzasoleiman, Joachim M. Buhmann, Andreas Krause |
nips_2017_2808 | AdaGAN: Boosting Generative Models
Generative Adversarial Networks (GAN) are an effective method for training generative models of complex data such as natural images. However, they are notoriously hard to train and can suffer from the problem of missing modes where the model is not able to produce examples in certain regions of the space. We propose an iterative procedure, called AdaGAN, where at every step we add a new component into a mixture model by running a GAN algorithm on a re-weighted sample. This is inspired by boosting algorithms, where many potentially weak individual predictors are greedily aggregated to form a strong composite predictor. We prove analytically that such an incremental procedure leads to convergence to the true distribution in a finite number of steps if each step is optimal, and convergence at an exponential rate otherwise. We also illustrate experimentally that this procedure addresses the problem of missing modes. | AdaGAN is a meta-algorithm proposed for GAN. The key idea of AdaGAN is: at each step reweight the samples and fits a generative model on the reweighted samples. The final model is a weighted addition of the learned generative models. The main motivation is to reduce the mode-missing problem of GAN by reweighting samples at each step.
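For readers who want the shape of the meta-algorithm, here is a schematic sketch of the loop as I understand it; `train_gan`, `mixture_weights`, and `beta_schedule` are placeholders of mine, not the authors' code:

```python
import numpy as np

def adagan(data, T, train_gan, mixture_weights, beta_schedule):
    """Schematic AdaGAN loop: each step fits a fresh GAN on a reweighted
    sample and adds it to a growing mixture of generators."""
    n = len(data)
    w = np.full(n, 1.0 / n)                 # start from uniform weights
    components, betas = [], []
    for t in range(T):
        g = train_gan(data, sample_weights=w)   # placeholder GAN trainer
        components.append(g)
        betas.append(beta_schedule(t))          # e.g. 1 / (t + 1)
        # upweight the examples the current mixture still misses
        w = mixture_weights(data, components, betas)
        w = w / w.sum()
    betas = np.array(betas)
    return components, betas / betas.sum()  # final mixture of generators
```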
It is claimed in the Introduction that AdaGAN can also use WGAN or mode-regularized GAN as base generators (line 55). This claim seems to be an overstatement. The key assumption of AdaGAN is that the base generator aims at minimizing the f-divergence (as mentioned in Line 136-137). This assumption does not hold for WGAN or mode-regularized GAN: WGAN minimizes the Wasserstein distance, and the mode-regularized GAN has an additional mode regularizer in the objective.
WGAN and mode-regularized GAN are state-of-the-art generative models targeting the mode-missing problem. It is more convincing if the author can demonstrate that AdaGAN outperforms the two algorithms.
Corollaries 1 and 2 assume that the supports of the learned distribution P_g and the true distribution P_d overlap by at least 1-beta. This is usually not true in practice. A common situation when training GANs is that the data manifold and the generation manifold are disjoint (see, e.g., the WGAN paper). |
nips_2017_1531 | Decomposition-Invariant Conditional Gradient for General Polytopes with Line Search
Frank-Wolfe (FW) algorithms with linear convergence rates have recently achieved great efficiency in many applications. Garber and Meshi (2016) designed a new decomposition-invariant pairwise FW variant with favorable dependency on the domain geometry. Unfortunately it applies only to a restricted class of polytopes and cannot achieve theoretical and practical efficiency at the same time. In this paper, we show that by employing an away-step update, similar rates can be generalized to arbitrary polytopes with strong empirical performance. A new "condition number" of the domain is introduced which allows leveraging the sparsity of the solution. We applied the method to a reformulation of SVM, and the linear convergence rate depends, for the first time, on the number of support vectors. | The main result of the paper is an analysis of the away-step Frank-Wolfe showing that with line search, and over a polyhedral domain, the method has linear convergence. This is somewhat restrictive, but more general than several recent related works. While the authors also analyze some delicate fixed step-size methods, their main point seems to be that they can analyze the line-search case, which is important since no one uses the fixed step-size methods in practice (they depend on some unknown parameters).
Some parts of the paper are written well, and it appears the authors are highly competent. A good amount of proofs are in the main paper, and the appendix supports the paper without being overly long.
However, for someone like myself who is aware of Frank-Wolfe but not familiar with the pairwise or away-step FW and their analysis, nor with the simplex-like polytopes (SLP), the paper is quite hard to follow. Sections 3.1 and 3.2 discuss a metric to quantify the difficulty of working over a given polytope; this is central to the paper and would benefit from a more relaxed description. Also, much of the paper is concerned with the SLP case so that the authors can compare with previous methods and show that they are comparable, even if this case (with its special stepsize) is not of much practical interest, so this adds to the complexity. And overall, it is simply a highly technical paper, and (like many NIPS papers) feels rushed in places -- for example, equation (4) by itself is not any kind of valid definition of H_s, since we need something like defining H_s as the maximum possible number such that (4) is true for all points x_t, all optima x^*, and all possible FW and away-directions.
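For readers in my position, here is the away-step update as I reconstructed it, as a toy sketch on the probability simplex with a crude grid line search (my own reconstruction, not the authors' algorithm):

```python
import numpy as np

def away_step_fw(f, grad_f, x, n_iters=500):
    """Away-step Frank-Wolfe on the probability simplex (vertices e_i)."""
    for _ in range(n_iters):
        g = grad_f(x)
        s = np.argmin(g)                      # FW vertex e_s
        active = np.nonzero(x > 1e-12)[0]
        a = active[np.argmax(g[active])]      # away vertex e_a
        d_fw = -x.copy()
        d_fw[s] += 1.0                        # d_fw = e_s - x
        d_aw = x.copy()
        d_aw[a] -= 1.0                        # d_aw = x - e_a
        if g @ d_fw <= g @ d_aw:              # pick the steeper descent direction
            d, gamma_max = d_fw, 1.0
        else:                                 # away step shrinks the weight on e_a
            d, gamma_max = d_aw, x[a] / max(1.0 - x[a], 1e-12)
        ts = np.linspace(0.0, gamma_max, 50)  # crude grid line search
        gamma = ts[np.argmin([f(x + t * d) for t in ts])]
        x = x + gamma * d
    return x

b = np.array([0.7, 0.2, -0.1, 0.4])
f = lambda z: 0.5 * np.sum((z - b) ** 2)      # toy strongly convex objective
grad_f = lambda z: z - b
print(away_step_fw(f, grad_f, np.full(4, 0.25)))
```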
The numerical experiments are not convincing from a practitioner's point of view, but for accompanying a theory paper, I think they are reasonable, since they are designed to illustrate key points in the analysis (such as Fig 2 showing linear convergence) rather than demonstrate that this is a state-of-the-art method in practice.
After reading the paper, and not being a FW expert, I come away somewhat confused about what we need to guarantee linear convergence for FW. We assume strong convexity and smoothness (Lipschitz continuity of the gradient), and a polyhedral constraint set. At this point, other papers need the optimal solution (unique, via strong convexity) to be away from the boundary or similar, but this paper doesn't seem to need that. Can you clarify this?
Overall, I'm impressed, and mildly enthusiastic, but I think some of the parts I didn't understand well are my fault and some could be improved by a better-expressed paper, and overall I'm somewhat reserving judgement for now.
Minor comments:
- Line 26, "Besides," is not often used this way in written material.
- Line 36, lots of commas and the "e.g." make this sentence hard to follow
- Line 39, "empirical competency" is not a usual phrase
- Line 63, "SMO" is not defined, and I had to look at the titles of refs [19,20] to figure it out
- Eq (1), it's not clear if you require the polytope to be bounded (some authors use the terminology "polytope" to always mean bounded; maybe this is not so in the literature you cite, but it's unclear to a general reader). Basic FW requires a compact set, but since your objective is assumed to be strongly convex, it is also coercive, so has bounded level sets, so the FW step is still well-defined, and I know FW has been analyzed in this case. So it does not seem to be necessary that the polytope is bounded, and I was very unsure if it was or not (had I been able to follow every step of the proofs, this would probably have answered itself).
- Line 80, "potytope" is a typo
- Eq (4) is not a definition. This part is sloppy
- Eq (5) is vague. What does "second_max" mean? The only definition that makes sense is that if we define I to be the set of all indices that achieve the max, then the second max is the max over all remaining indices, though this could be an empty set, so then the value is not defined. Another definition that doesn't seem to work in your case is that we exclude only one optimal index. It was hard to follow the sentence after (5), especially given the vagueness of (5).
- Line 145 defines e_1, but this should be earlier in line 126.
- Section 3.2 would benefit from more description on what these examples are trying to show. Line 148 says "n which is almost identical to n-1", which by itself is meaningless. What are you trying to express here? I think you are trying to show the lower bound is n-1 and the upper bound is n?
- Line 160, is there a downside to having a slack variable?
- Line 297, "not theoretical" is a typo.
- Line 311, "Besides," is not used this way.
== Update after reading authors' rebuttal ==
Thanks for the updates. I'm still overall positive about the paper. |
nips_2017_1203 | Near-linear time approximation algorithms for optimal transport via Sinkhorn iteration
Computing optimal transport distances such as the earth mover's distance is a fundamental problem in machine learning, statistics, and computer vision. Despite the recent introduction of several algorithms with good empirical performance, it is unknown whether general optimal transport distances can be approximated in near-linear time. This paper demonstrates that this ambitious goal is in fact achieved by Cuturi's Sinkhorn Distances. This result relies on a new analysis of Sinkhorn iterations, which also directly suggests a new greedy coordinate descent algorithm GREENKHORN with the same theoretical guarantees. Numerical simulations illustrate that GREENKHORN significantly outperforms the classical SINKHORN algorithm in practice.
Dedicated to the memory of Michael B. Cohen | In this paper the authors discuss entropic regularized optimal transport with a focus on the computational complexity of its resolution. Their main contribution is a result giving a relation between accuracy (in terms of the error w.r.t. the constraints) and the number of iterations for the Sinkhorn algorithm (and their own variant). It shows that a given accuracy can be obtained with a complexity near-linear in the size of the problem, which is impressive.
Using this in-depth computational study, the authors propose a new algorithm with the same theoretical properties as Sinkhorn but more efficient in practice. A very small numerical experiment suggests that the proposed Greenkhorn method works better than Sinkhorn in practice.
This paper is very interesting and proposes a theoretical computational study that can be of huge help to the community. It also explains why the recent entropic regularization works so well in practice and when it shouldn't be used. The new greedy Sinkhorn algorithm is very interesting and seems to work very well in practice. I still have a few comments that I would like to be addressed in the rebuttal before giving a definite rating.
Comments:
- The whole paper seems a bit of a mess, with theorems given between definitions and proofs scattered everywhere except below the theorem they prove. I understand that the authors wanted to tell a story, but in practice it makes the paper more difficult to read. Beginning with Theorem 1 is OK, and finishing with its proof is also OK. But the rest should be organized in the classical mathematical way: Theorem 2 followed by its proof, then Theorem 3 that uses Theorem 2, and so on.
- One of the take-home messages from Theorem 1 is actually not that good: if you want to decrease the l1 error on the constraints by a factor of 10, you need to perform 10^3 = 1000 times more iterations. This also explains why Sinkhorn is known to converge rather quickly to a good-enough solution but might take forever to actually converge to numerical precision. This should have been discussed a bit more. Actually, I would really like to see numerical experiments that illustrate the O(1/k^{1/3}) convergence of the error.
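For what it's worth, the experiment I have in mind is only a few lines (a toy sketch of mine, not from the paper):

```python
import numpy as np

def sinkhorn_l1_error(C, r, c, eta=10.0, n_iters=2000):
    """Track the l1 violation of the marginal constraints along Sinkhorn
    iterations for entropically regularized OT with cost matrix C."""
    K = np.exp(-eta * C)
    u = np.ones_like(r)
    errs = []
    for _ in range(n_iters):
        v = c / (K.T @ u)                    # rescale columns
        u = r / (K @ v)                      # rescale rows
        P = u[:, None] * K * v[None, :]
        errs.append(np.abs(P.sum(1) - r).sum() + np.abs(P.sum(0) - c).sum())
    return errs

rng = np.random.default_rng(0)
n = 50
C = rng.random((n, n))
r = np.full(n, 1.0 / n)
c = np.full(n, 1.0 / n)
errs = sinkhorn_l1_error(C, r, c)
# plotting errs against the iteration count on a log-log scale would make
# the empirical rate directly comparable to the O(1/k^{1/3}) bound
```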
- The proposed Greenkhorn is very interesting, but it has strong similarities with the stochastic Sinkhorn of [GCPB16]. My opinion is that the proposed method is better because it updates only the variable that most violates the constraints, whereas the stochastic version will spend time updating already-valid constraints since all variables are updated. Still, good scientific practice requires the authors to provide this comparison, and the fact that the code for [GCPB16] is available online gives the authors no excuse. |
nips_2017_2503 | Scalable Variational Inference for Dynamical Systems
Gradient matching is a promising tool for learning parameters and state dynamics of ordinary differential equations. It is a grid free inference approach, which, for fully observable systems is at times competitive with numerical integration. However, for many real-world applications, only sparse observations are available or even unobserved variables are included in the model description. In these cases most gradient matching methods are difficult to apply or simply do not provide satisfactory results. That is why, despite the high computational cost, numerical integration is still the gold standard in many applications. Using an existing gradient matching approach, we propose a scalable variational inference framework which can infer states and parameters simultaneously, offers computational speedups, improved accuracy and works well even under model misspecifications in a partially observable system. | This paper proposes a mean-field method for joint state and parameter inference of ODE systems. This is a problem that continues to receive attention because of its importance in many fields (since ODEs are used so widely) and the limitations of existing gradient-based methods. The solution described in the paper is said to outperform other approaches, be more applicable, and scale better.
The paper lays out its contribution clearly in the introduction. It does a good job of discussing the merits and drawbacks of previous approaches, and explaining how it fits in this context. The proposed methodology appears sound (although it might be useful to explain some of the derivations in a little more detail) and the results indicate that it performs much better than the state of the art. The main advantage of the new method is its execution time, which is shown to be orders of magnitude faster than methods with comparable (or worse) accuracy. In particular, the scalability analysis appears very promising for the applicability of this method to larger models.
Overall, I found this to be a strong paper. My comments mainly concern the presentation, and most are minor.
- The biggest comment has to do with the derivations of the evidence lower bounds and Equation (16). I believe it would be useful to have some more details there, such as explicitly pointing out which properties they rely on. It is also not fully clear to me how the E-M steps of Algorithm 1 make use of the previous derivations (although that may be because I have limited experience in this field). There appears to be some space available for expansion but, if necessary, I suppose that sections 2 and 3 could be combined.
- Besides that, there is a statement that appears contradictory. Lines 82-86 describe how parameters and state are alternately sampled in Calderhead et al. and Dondelinger et al. However, lines 29-30 make it seem like these approaches cannot infer both state and parameters. I would suggest either to rephrase (perhaps remove the "simultaneously"?) or explain why this is a problem.
- As far as the experiments are concerned, I think it would be interesting to run the Lotka-Volterra example of Section 5.1 for longer, to confirm that the inferred state displays periodicity (perhaps comparing with the other methods as well). However, I believe the results do a good job of supporting the authors' claims.
Minor comments:
--------------
l2: grid free -> grid-free (also later in the text)
l52: the phrasing makes it sound like the error has zero mean and zero variance; perhaps rephrase to clarify that the variance can be generally elliptical / different for each state?
Equations 3 - 8: There is a small change in the notation for normal distributions between Eqs 3-4 and 5-8. For the sake of clarity, I think it is worth making it consistent, e.g. by changing the rhs of (3) to N(x_k | 0, C) instead of N(0,C) (or the other way around).
l71: What is the difference between C' and 'C ?
Equation 9: no upper bound on the sum
l99: contains -> contain
l99: "arbitrary large" makes me think that a sum term can include repetitions of an x_j (to effectively have power terms, as you would have in mass-action kinetics with stoichiometry coefficients > 1), but it is not fully clear. Is this allowed? (in other words, is M_{ki} a subset of {1,...,K}, or a multiset of its elements?)
l139: what does "number of states" refer to here? the number of terms in the sum in Eq 9?
Figure 1: in the centre-right plot, it seems like the estimate of \theta_4 is missing for the spline method
l155: Predator is x_2, prey is x_1
l163: move the [ after the e.g. : "used in e.g. [Calderhead et al. ..."
l168: usining -> using
l171: I would add commas around "at roughly 100 times the runtime" to make the sentence read easier
l175: partial -> partially
l187: (3) -> (Figure 3)
l195-196: Figure 2 does not show the noise changing; I think having two references would be more helpful: "[...] if state S is unobserved (Figure 2) and [...] than our approach (Figure 3)."
l198: perhaps "error" is better than "bias" here? Unless it is known how exactly the misspecification biases the estimation?
l202: Lorentz -> Lorenz (and later)
l207: stochstic -> stochastic
l208: number states -> number of states
l209: Due the dimensionality -> Due to the dimensionality,
l212: equally space -> equally spaced
l214: correct inferred -> correctly inferred
Figure 4: In the last two lines of the caption: wheras -> whereas; blue -> black (at least that's how it looks to me); is corresponding to -> corresponds to
l219: goldstandard -> gold standard
l253: modemodel -> modelling |
nips_2017_2992 | Gradient Methods for Submodular Maximization
In this paper, we study the problem of maximizing continuous submodular functions that naturally arise in many learning applications such as those involving utility functions in active learning and sensing, matrix approximations and network inference. Despite the apparent lack of convexity in such functions, we prove that stochastic projected gradient methods can provide strong approximation guarantees for maximizing continuous submodular functions with convex constraints. More specifically, we prove that for monotone continuous DR-submodular functions, all fixed points of projected gradient ascent provide a factor 1/2 approximation to the global maxima. We also study stochastic gradient methods and show that after O(1/ε^2) iterations these methods reach solutions which achieve in expectation objective values exceeding (OPT/2 − ε). An immediate application of our results is to maximize submodular functions that are defined stochastically, i.e. the submodular function is defined as an expectation over a family of submodular functions with an unknown distribution. We will show how stochastic gradient methods are naturally well-suited for this setting, leading to a factor 1/2 approximation when the function is monotone. In particular, it allows us to approximately maximize discrete, monotone submodular optimization problems via projected gradient ascent on a continuous relaxation, directly connecting the discrete and continuous domains. Finally, experiments on real data demonstrate that our projected gradient methods consistently achieve the best utility compared to other continuous baselines while remaining competitive in terms of computational effort. | This paper studies the problem of continuous submodular maximization subject to a convex constraint. The authors show that for monotone weakly DR-submodular functions, all the stationary points of this problem are actually good enough approximations (gamma^2 / (1 + gamma^2), where gamma is the DR factor) of the optimal value. They also show that stochastic projected gradient descent and mirror descent converge to a stationary point in O(1/eps^2) iterations.
The paper is clear and well presented, with few typos. The results presented can handle more general constraints than prior work and can address the stochastic setting, but at the cost of a worse approximation factor (1/2 for DR-submodular functions, as opposed to 1 - 1/e).
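To make the setup concrete for other readers, here is a toy sketch of my own of stochastic projected gradient ascent on a monotone DR-submodular quadratic (the objective, constraint set, and step sizes are all my choices, not the paper's):

```python
import numpy as np

def project_simplex(v, b=1.0):
    """Euclidean projection onto {x >= 0, sum(x) = b} (Duchi et al., 2008)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u - (css - b) / idx > 0)[0][-1]
    theta = (css[rho] - b) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

# Toy monotone DR-submodular objective: f(x) = a.x - 0.5 x'Hx, with
# H >= 0 entrywise (so the Hessian -H is entrywise non-positive) and
# a >= H @ 1 (so the gradient stays positive on [0,1]^n: monotone).
rng = np.random.default_rng(1)
n = 20
H = rng.random((n, n))
H = 0.5 * (H + H.T)
a = H @ np.ones(n) + 1.0
x = np.zeros(n)
for t in range(1, 501):
    g = a - H @ x + 0.1 * rng.standard_normal(n)  # stochastic gradient
    x = project_simplex(x + g / np.sqrt(t))       # projected ascent step
print("objective:", a @ x - 0.5 * x @ H @ x)
```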
Some comments/questions:
- It is worth mentioning that stochastic projected gradient descent was also used recently for submodular minimization in "Chakrabarty, D., Lee, Y. T., Sidford, A., & Wong, S. C. W. Subquadratic submodular function minimization. STOC 2017."
- For the deficiency example of FW in A.2, it would be clearer to present the example with a minibatch setting, with m_i,n = 1/(b+1) batch size, the same example should follow through, but it would make it clearer that a slightly larger minibatch size won't fix the problem.
- Would this deficiency example hold also for the algorithm "Measured continuous greedy" in "Feldman, Moran, Joseph Naor, and Roy Schwartz. "A unified continuous greedy algorithm for submodular maximization." FOCS, 2011, whose updates are slightly different from FW?
- You reference the work [20] on p.2, but I did not find this work online?
- Can you add what are other examples of L-smooth continuous submodular functions, other than the multilinear extension?
- For a more convincing numerical comparison, it would be interesting to see the time comparison (CPU time not just iterations) against FW.
- What is the y-axis in the figures?
Some typos:
- line 453: f_ij - f_i - f_j (flipped)
- lemma B.1: right term in the inner product should be proj(y) - x.
- lemma B.2: first term after equality should be D_phi(x,y)
- Eq after line 514 and line 515: grad phi(x+) (nabla missing)
- In proof of lemma B.5: general norm instead of ell_2
- Eq after line 527 and 528: x_t and x^* flipped in inner product. |
nips_2017_647 | SchNet: A continuous-filter convolutional neural network for modeling quantum interactions
Deep learning has the potential to revolutionize quantum chemistry as it is ideally suited to learn representations for structured data and speed up the exploration of chemical space. While convolutional neural networks have proven to be the first choice for images, audio and video data, the atoms in molecules are not restricted to a grid. Instead, their precise locations contain essential physical information, that would get lost if discretized. Thus, we propose to use continuousfilter convolutional layers to be able to model local correlations without requiring the data to lie on a grid. We apply those layers in SchNet: a novel deep learning architecture modeling quantum interactions in molecules. We obtain a joint model for the total energy and interatomic forces that follows fundamental quantumchemical principles. Our architecture achieves state-of-the-art performance for benchmarks of equilibrium molecules and molecular dynamics trajectories. Finally, we introduce a more challenging benchmark with chemical and structural variations that suggests the path for further work. | Summary:
In order to design new molecules, one needs to predict whether newly designed molecules can reach an equilibrium state. Not all possible configurations of atoms lead to such a state. To evaluate this, the authors propose to learn predictors of such equilibrium from annotated datasets using convolutional networks. Different from previous works, which might lead to inaccurate predictions from convolutions computed in a discretized space, the authors propose continuous-filter convolutional layers. They compare their approach with two previous ones on 2 datasets and also introduce a third dataset. Their model works better than the others, especially when using more training data.
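My understanding of the continuous-filter idea, as a toy sketch (the RBF expansion, shapes, and two-layer filter generator are my guesses at the spirit of the layer, not the paper's exact architecture):

```python
import numpy as np

def rbf_expand(d, centers, gamma=10.0):
    """Expand a scalar distance into radial basis features."""
    return np.exp(-gamma * (d - centers) ** 2)

def cfconv(X, R, W1, W2, centers):
    """Continuous-filter convolution: the filter between atoms i and j is
    generated from their real-valued distance, so no grid is needed.
    X: (n_atoms, F) features, R: (n_atoms, 3) positions."""
    out = np.zeros_like(X)
    n = len(X)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d = np.linalg.norm(R[i] - R[j])
            h = np.tanh(rbf_expand(d, centers) @ W1)  # filter-generating net
            out[i] += X[j] * (h @ W2)                 # element-wise filtering
    return out

rng = np.random.default_rng(0)
X, R = rng.random((5, 4)), rng.random((5, 3))
centers = np.linspace(0.0, 2.0, 8)
W1, W2 = rng.random((8, 16)), rng.random((16, 4))
print(cfconv(X, R, W1, W2, centers).shape)  # (5, 4)
```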
I don't have a very strong background in physics so I'm not the best person to assess the quality of this work, but I found that the proposed solution made sense, the paper is clearly written and seems technically correct.
Minor: don't forget to replace the XXX
Points 3 and 4 should not be listed as contributions, in my opinion; they are results
l. 26 : the both the -> both the
Ref 14: remove first names |
nips_2017_2339 | A Probabilistic Framework for Nonlinearities in Stochastic Neural Networks
We present a probabilistic framework for nonlinearities, based on doubly truncated Gaussian distributions. By setting the truncation points appropriately, we are able to generate various types of nonlinearities within a unified framework, including sigmoid, tanh and ReLU, the most commonly used nonlinearities in neural networks. The framework readily integrates into existing stochastic neural networks (with hidden units characterized as random variables), allowing one for the first time to learn the nonlinearities alongside model weights in these networks. Extensive experiments demonstrate the performance improvements brought about by the proposed framework when integrated with the restricted Boltzmann machine (RBM), temporal RBM and the truncated Gaussian graphical model (TGGM). | Summary: The paper uses doubly truncated Gaussian distributions to develop a family of stochastic non-linearities parameterized by the truncation points. In expectation, different choices of truncation points are shown to recover close approximations to popularly employed non-linearities. Unsurprisingly, learning the truncation points from data leads to improved performance.
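To see how truncation points generate a family of nonlinearities, one can plot the mean of a doubly truncated Gaussian as a function of the pre-activation mean; a quick check of my own using SciPy:

```python
import numpy as np
from scipy.stats import truncnorm

def trug_mean(mu, lo, hi, sigma=1.0):
    """Mean of N(mu, sigma^2) truncated to [lo, hi]; viewed as a function
    of the pre-activation mean mu, this is a smooth nonlinearity."""
    a, b = (lo - mu) / sigma, (hi - mu) / sigma   # standardized bounds
    return truncnorm.mean(a, b, loc=mu, scale=sigma)

mu = np.linspace(-5.0, 5.0, 11)
relu_like = [trug_mean(m, 0.0, np.inf) for m in mu]   # softplus/ReLU-like
sigm_like = [trug_mean(m, 0.0, 1.0) for m in mu]      # sigmoid-like
tanh_like = [trug_mean(m, -1.0, 1.0) for m in mu]     # tanh-like
```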
Quality and Clarity — The authors present a technically sound piece of work. The proposed family of non-linearities are sensible and the authors do a good job of demonstrating their utility across a variety of problems. The manuscript is sufficiently clear.
Originality and Significance - The connection between the convolution of a Gaussian with an appropriately truncated Gaussian and popular neural network nonlinearities are well known and have been previously explored by several papers in supervised contexts (see [1, 2, 3]). In this context, learning the truncation points is a novel but obvious and incremental next step. To me, the interesting contribution of this paper lies in its empirical demonstration that learning the non-linearities leads to tangible improvements in both density estimation and predictive performance. It is also interesting that the learned non-linearities appear to be sigmoid-relu hybrids.
Detailed Comments:
1) Figures 2 indicates that the learned upper truncations are all relatively small values, in the single digits. This could be a consequence of the initialization to 1 and the low learning rates, causing the recovered solutions to get stuck close to the initial values. Were experiments performed with larger initial values for the upper truncation points?
2) a) Why are the s-learn experiments missing for TruG-TGGM?
b) Table 4 seems to suggest that stochasticity alone doesn't lead to significant improvements over the ReLU MLP. Only when learning of non-linearities is added does the performance improve. What is unclear from this is whether stochasticity helps in improving performance at all if the non-linearities are being learned. I would really like to see comparisons for this case with parameterized non-linearities previously proposed in the literature (citations 6/7/8 in the paper) to tease apart the benefits of stochasticity vs. having more flexible non-linearities.
Minor:
There exists related older work on learning non-linearities in sigmoidal belief networks [4], which should be cited.
[1] Hernández-Lobato, José Miguel, and Ryan Adams. "Probabilistic backpropagation for scalable learning of bayesian neural networks." International Conference on Machine Learning. 2015.
[2] Soudry, Daniel, Itay Hubara, and Ron Meir. "Expectation backpropagation: Parameter-free training of multilayer neural networks with continuous or discrete weights." Advances in Neural Information Processing Systems. 2014.
[3] Ghosh, Soumya, Francesco Maria Delle Fave, and Jonathan S. Yedidia. "Assumed Density Filtering Methods for Learning Bayesian Neural Networks." AAAI. 2016.
[4] Frey, Brendan J., and Geoffrey E. Hinton. "Variational learning in nonlinear Gaussian belief networks." Neural Computation 11.1 (1999): 193-213. |
nips_2017_1290 | Emergence of Language with Multi-agent Games: Learning to Communicate with Sequences of Symbols
Learning to communicate through interaction, rather than relying on explicit supervision, is often considered a prerequisite for developing a general AI. We study a setting where two agents engage in playing a referential game and, from scratch, develop a communication protocol necessary to succeed in this game. Unlike previous work, we require that messages they exchange, both at train and test time, are in the form of a language (i.e. sequences of discrete symbols). We compare a reinforcement learning approach and one using a differentiable relaxation (straightthrough Gumbel-softmax estimator (Jang et al., 2017)) and observe that the latter is much faster to converge and it results in more effective protocols. Interestingly, we also observe that the protocol we induce by optimizing the communication success exhibits a degree of compositionality and variability (i.e. the same information can be phrased in different ways), both properties characteristic of natural languages. As the ultimate goal is to ensure that communication is accomplished in natural language, we also perform experiments where we inject prior information about natural language into our model and study properties of the resulting protocol. | Increasing my score based on the authors rebuttal. The argument that the proposed method can complement human-bot training makes sense. Also, it seems RL baseline experiments were exhaustive. But the argument about the learnt language being compositional should be toned down since there is not enough evidence to support it.
Old reviews:
The paper proposes to use Gumbel-softmax for training sender and receiver agents in a referential game like Lazaridou (2016). Unlike the previous work, a large number of distractor images and variable-length messages are allowed. The results suggest the proposed method outperforms the Reinforce baseline. In addition, the paper proposes two ways to force the communication to be more like natural language.
My first concern is that the paper disregards continuous communication for being unlike natural language and insists on using discrete symbols, yet during training continuous communication is allowed to transmit gradients between the agents. This use of back-propagation makes it impossible to learn from interactions with humans, which is probably vital for learning natural language.
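To make this concern concrete: in the straight-through Gumbel-softmax, the message is discrete in the forward pass, yet gradients flow between the agents through the continuous relaxation. A minimal sketch (mine, not the authors' code):

```python
import torch
import torch.nn.functional as F

def st_gumbel_softmax(logits, tau=1.0):
    """Straight-through Gumbel-softmax: discrete one-hot forward pass,
    continuous softmax gradients in the backward pass."""
    g = -torch.log(-torch.log(torch.rand_like(logits)))  # Gumbel(0,1) noise
    y = F.softmax((logits + g) / tau, dim=-1)            # relaxed sample
    y_hard = F.one_hot(y.argmax(dim=-1), logits.size(-1)).float()
    return (y_hard - y).detach() + y   # forward: y_hard, backward: y

logits = torch.randn(2, 10, requires_grad=True)
msg = st_gumbel_softmax(logits)        # discrete symbols, yet differentiable
msg.sum().backward()                   # gradients reach the sender's logits
print(logits.grad is not None)
```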
Secondly, I think baseline experiments are not exhaustive and can be improved. Here are my points:
- L109: A learnt baseline (the network itself predicts baseline values) is pretty standard in deep RL work because of its efficiency over a moving average. Why wasn't it tried? Also, it is common to add an entropy regularization term to the loss to encourage exploration in deep RL. Has this been tried?
- An appropriate baseline here is continuous communication during training and discretization during testing, as done in (Foerster, 2016).
- L135: a simple solution would be to decrease the temperature to zero during training. Has this been tried?
- L163: why are the hyper-parameters not tuned? At least the learning rate should be tuned, especially for the Reinforce baseline, because the default values are tuned for supervised training.
Lastly, the claim of compositionality (L11) of the learnt communication is not supported by any evidence. Also, L196 suggests that word 5747 represents animals. Then it should still represent the same thing even at a different position, as the natural-language word "animal" does. The fact that it does not suggests that the learnt communication is actually unlike natural language. Why not do a similar analysis on the Reinforce baseline?
Other comments:
- L59: It is not a non-stationary environment. Since the sender and the receiver have the same reward, they can be considered as a single RL agent. Then, this agent writes words into the environment as "sender", then observes them at the next step as "receiver".
- How is the KL optimized in Sec. 3.3? It can't be computed directly because there are exponentially many "m"s.
- Why are there two Reinforce versions in Figure 2 (left)? What is greedy argmax Reinforce?
- It would be good to include a histogram of message lengths
- Why such a large vocabulary size? What is the effect of a smaller vocabulary? |
nips_2017_2591 | Fair Clustering Through Fairlets
We study the question of fair clustering under the disparate impact doctrine, where each protected class must have approximately equal representation in every cluster. We formulate the fair clustering problem under both the k-center and the k-median objectives, and show that even with two protected classes the problem is challenging, as the optimum solution can violate common conventions-for instance a point may no longer be assigned to its nearest cluster center! En route we introduce the concept of fairlets, which are minimal sets that satisfy fair representation while approximately preserving the clustering objective. We show that any fair clustering problem can be decomposed into first finding good fairlets, and then using existing machinery for traditional clustering algorithms. While finding good fairlets can be NP-hard, we proceed to obtain efficient approximation algorithms based on minimum cost flow. We empirically demonstrate the price of fairness by quantifying the value of fair clustering on real-world datasets with sensitive attributes. | While there is a growing body of work on fair supervised learning, here the authors present a first exploration of fair unsupervised learning by considering fair clustering. Each data point is labeled either red or blue, then an optimal k-clustering is sought which respects a given fairness criterion specified by a minimum threshold level of balance in each cluster. Balance lies in [0,1] with higher values corresponding to more equal numbers of red and blue points in every cluster. This is an interesting and natural problem formulation - though a more explicit example of exactly how this might be useful in practice would be helpful.
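For reference, the balance of a given clustering is simple to compute; a small sketch assuming the min-ratio definition suggested by the abstract:

```python
from collections import Counter

def balance(clusters, colors):
    """Balance of a clustering: the minimum over clusters of
    min(#red/#blue, #blue/#red); 1 means perfectly mixed, 0 means
    some cluster is monochromatic."""
    worst = 1.0
    for cluster in clusters:
        counts = Counter(colors[p] for p in cluster)
        r, b = counts.get("red", 0), counts.get("blue", 0)
        if r == 0 or b == 0:
            return 0.0
        worst = min(worst, r / b, b / r)
    return worst

colors = {0: "red", 1: "blue", 2: "red", 3: "blue", 4: "red"}
print(balance([[0, 1], [2, 3, 4]], colors))   # -> 0.5
```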
For both k-center and k-median versions of the problem, it is neatly shown that fair clustering may be reduced to first finding a good 'fairlet' decomposition and then solving the usual clustering problem on the centers of each fairlet. While this is NP-hard, efficient approximation algorithms are provided based on combining earlier methods with an elegant min cost flow construction.
On 3 example datasets, it is shown that standard clustering yields low balance (perhaps not surprising). A preliminary investigation of the cost of fairness is provided where balance of at least 0.5 is required and the number of clusters is varied. It would be interesting to see also the sensitivity to balance: keep the number of clusters fixed, and vary the required balance.
Minor points:
l. 16 machine learning are -> machine learning is
In a few places, e.g. l. 19, \citep{} would improve the readability, so Kleinberg et al. (2017a) -> (Kleinberg et al. 2017a).
l. 53 define -> defines. In fact, I think you just look at the costs of regular clusterings compared to fair so don't really use this ratio?
Fig 1 caption, would give -> might give. x is assigned to a -> x is assigned to the same cluster as a
l. 102 other measure -> other measures, please could you elaborate?
l. 132, 136 Y -> Y_j as in l. 161
Near Definition 5, an image showing an example of how fairlets together make up the fair clustering would be helpful, though I appreciate it's hard given space constraints - in the Supplement would still be good. |
nips_2017_823 | Multi-Modal Imitation Learning from Unstructured Demonstrations using Generative Adversarial Nets
Imitation learning has traditionally been applied to learn a single task from demonstrations thereof. The requirement of structured and isolated demonstrations limits the scalability of imitation learning approaches as they are difficult to apply to real-world scenarios, where robots have to be able to execute a multitude of tasks. In this paper, we propose a multi-modal imitation learning framework that is able to segment and imitate skills from unlabelled and unstructured demonstrations by learning skill segmentation and imitation learning jointly. The extensive simulation results indicate that our method can efficiently separate the demonstrations into individual skills and learn to imitate them using a single multi-modal policy. The video of our experiments is available at | The paper describes a new learning model able to discover 'intentions' from expert policies using an imitation learning framework. The idea is mainly based on the GAIL model, which aims at learning a policy by imitation using a GAN approach. The main difference in this article is that the learned policy is, in fact, a mixture of sub-policies, each sub-policy aiming at automatically matching a particular intention in the expert behavior. The GAIL algorithm is thus derived with this mixture, resulting in an effective learning technique. Another approach is also proposed where the intention is captured through a latent vector, by adapting the InfoGAN algorithm to this particular case. Experiments are made in 4 different settings and show that the model is able to discover the underlying intentions contained in the demonstration trajectories.
Comments:
First of all, the task addressed in the paper is interesting. Discovering 'intentions' and learning sub-policies is a key RL problem that has been the source of many old and new papers. The proposed models are simple but effective extensions of existing models.
Actually, I do not really understand the difference the authors make between 'intentions' and 'options' in the literature. It seems to me that intentions are more restricted than options, since there is no natural switching mechanism in the proposed approach, while option models are able to choose which option to use at any time. Moreover, w.r.t. the options literature, the main originality of the paper is both in how the intentions are discovered (using GAIL) and in the fact that this is done by imitation, while many option-discovery models are based on a classical RL setting. A discussion of this point is important, both in the 'related work' part and in the experimental part (I will come back to this point later in the review). In the end, it seems that the model discovers sub-policies but is still unable to know how to assemble these sub-policies in order to solve complex problems. So the proposed model is more a first step toward learning an efficient agent than a complete solution. (This is, for example, illustrated in Section 6: "[...] the categorical intention variable is manually changed [...]"). The question is thus: what can be done with the learned policy? How will the result of the algorithm be used?
Concerning the presentation of the model, the notations are not clear. Mainly, \pi^i gives the impression that the policy is indexed by the intention i, which is not the case, since, as far as I understand, the indexation by the intention is in fact contained in the notation $\pi(a|s,i)$. Moreover, the section concerning the definition of the reward is unclear: as far as I understand, in your setting, trajectories are augmented with the intention value at each timestep, but this intention value i_t is not used during learning; it is only used for evaluating the reward value. I think this has to be made clearer in the paper if accepted.
Concerning the experimental sections:
* First, the environments are quite simple, and only fully observable MDPs are considered. The extension of the proposed model to POMDPs could be discussed in the paper (since I suppose it is not trivial).
* Second, there is no comparison of the proposed approach with options-discovery/hierarchical techniques like "Imitation Learning with Hierarchical Actions" (Abram L. Friesen and Rajesh P. N. Rao), or even "Active Imitation Learning of Hierarchical Policies" (Mandana Hamidi, Prasad Tadepalli, Robby Goetschalckx, Alan Fern). This is a clear weak point of the paper.
* I do not really understand how the quantitative evaluation (reward) is made. In order to evaluate the quality of each sub-policy w.r.t. each intention, I suppose that a manual matching is made between the value of $i$ and the 'real' intention. Could you please explain that point better?
* The difference between the categorical intentions and continuous ones is not well discussed or evaluated. In particular, in the continuous case, the paper would gain if the interpolation between sub-policies were evaluated, as suggested in the article.
Conclusion:
An interesting extension of existing models, but with some unclear aspects, and with an experimental section that could be strengthened |
nips_2017_2590 | Clone MCMC: Parallel High-Dimensional Gaussian Gibbs Sampling
We propose a generalized Gibbs sampler algorithm for obtaining samples approximately distributed from a high-dimensional Gaussian distribution. Similarly to Hogwild methods, our approach does not target the original Gaussian distribution of interest, but an approximation to it. Contrary to Hogwild methods, a single parameter allows us to trade bias for variance. We show empirically that our method is very flexible and performs well compared to Hogwild-type algorithms. | This paper proposes a new parallel approximate sampler for high-dimensional Gaussian distributions. The algorithm is a special case of a larger class of iterative samplers based on a transition equation (2) and matrix splitting that is analysed in [9]. The algorithm is similar to the Hogwild sampler in terms of the update formula and the way the bias is analysed, but it is more flexible in the sense that there is a scalar parameter to trade off the bias and variance of the proposed sampler.
I appreciate the detailed introduction to the mathematical background of this family of sampling algorithms and related works. It is also easy to follow the paper and understand the merit of the proposed algorithm. The illustration of the decomposition of the variance and bias in Figure 1 gives a clear explanation of the role of \eta.
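As a reference point for readers, the exact single-site Gibbs sweep for a zero-mean Gaussian is just the following (my own sketch; the whole point of Hogwild-type methods and of this paper is that the inner loop below is inherently sequential, and parallelizing it introduces the bias being traded off):

```python
import numpy as np

def gaussian_gibbs(J, n_sweeps, rng):
    """Exact single-site Gibbs sampler for x ~ N(0, J^{-1}), J the precision:
    x_i | x_{-i} ~ N(-(1/J_ii) * sum_{j != i} J_ij x_j, 1/J_ii)."""
    n = J.shape[0]
    x = np.zeros(n)
    for _ in range(n_sweeps):
        for i in range(n):                    # sequential sweep over sites
            cond_mean = -(J[i] @ x - J[i, i] * x[i]) / J[i, i]
            x[i] = cond_mean + rng.standard_normal() / np.sqrt(J[i, i])
    return x

J = np.array([[2.0, 0.5], [0.5, 1.5]])        # small positive-definite precision
sample = gaussian_gibbs(J, 100, np.random.default_rng(0))
```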
Experiments on two covariance matrices show that the proposed algorithm achieves a lower error than Hogwild for some range of the \eta value. However, this range changes with the computation budget as well as with the covariance matrix. It is not clear to me whether the proposed algorithm is better than Hogwild in practice if we do not have a good method/heuristic to choose a proper value for \eta.
Another problem with the experimental evaluation is that there is no comparison on the only real application, image inpainting-deconvolution. Does clone MCMC give a better image recovery than other competing methods? |
nips_2017_2246 | Sobolev Training for Neural Networks
At the heart of deep learning we aim to use neural networks as function approximators -training them to produce outputs from inputs in emulation of a ground truth function or data creation process. In many cases we only have access to input-output pairs from the ground truth, however it is becoming more common to have access to derivatives of the target output with respect to the input -for example when the ground truth function is itself a neural network such as in network compression or distillation. Generally these target derivatives are not computed, or are ignored. This paper introduces Sobolev Training for neural networks, which is a method for incorporating these target derivatives in addition the to target values while training. By optimising neural networks to not only approximate the function's outputs but also the function's derivatives we encode additional information about the target function within the parameters of the neural network. Thereby we can improve the quality of our predictors, as well as the data-efficiency and generalization capabilities of our learned function approximation. We provide theoretical justifications for such an approach as well as examples of empirical evidence on three distinct domains: regression on classical optimisation datasets, distilling policies of an agent playing Atari, and on large-scale applications of synthetic gradients. In all three domains the use of Sobolev Training, employing target derivatives in addition to target values, results in models with higher accuracy and stronger generalisation. | I have enjoyed reading this paper. The idea is clear and very intuitive. Figure 1 is very well conceived.
Now, I must say that, throughout my reading, I was looking for the authors to "confess" that the first-order derivatives might be entirely useless in certain situations. But in a situation where both f(x) and f'(x) are given and taken as ground truth, it is certainly a wise idea to use the information about f'(x) in order to train a new network that approximates f(x) to solve a task.
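Concretely, the training objective is a few lines with autograd. A sketch under my reading of the idea (the loss weight lam is made up):

```python
import torch

def sobolev_loss(model, x, y, dy_dx, lam=1.0):
    """Match both the target values y and the target input-gradients dy_dx."""
    x = x.requires_grad_(True)
    pred = model(x)
    # per-sample gradients of a scalar-output model over the batch
    grad_pred, = torch.autograd.grad(pred.sum(), x, create_graph=True)
    value_loss = ((pred - y) ** 2).mean()
    grad_loss = ((grad_pred - dy_dx) ** 2).mean()
    return value_loss + lam * grad_loss

model = torch.nn.Sequential(torch.nn.Linear(3, 32), torch.nn.Tanh(),
                            torch.nn.Linear(32, 1))
x, y, dy = torch.randn(16, 3), torch.randn(16, 1), torch.randn(16, 3)
loss = sobolev_loss(model, x, y, dy)
loss.backward()   # create_graph=True makes the gradient term trainable
```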
However the problem is that, when a certain model M1 is trained to minimize a loss on a training set, then there is nothing that guarantees the sanity/usefulness of the gradients of M1. The derivatives depend a lot on the activation functions being used, for example. Then I don't see at all why there would be any general principle that would encourage us to imitate those gradients in order to learn another model M2 based on M1.
It is very possible that this doesn't happen in practice, on common datasets and reasonable models, and that's interesting. I think that it's partly what the paper studies : does this thing work in practice ? Okay, great ! Worth discussing in a paper with good experiments. Accept !
In the past, I have had a conversation with one of the great ML authorities in which he/she casually claimed that when two functions (f1, f2) are close, then their derivatives are also close. This doesn't sound unreasonable as a mental shortcut, but it's ABSOLUTELY INCORRECT in the mathematical sense. Yet people use that shortcut.
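A standard counterexample makes this precise: a uniformly small perturbation can have an arbitrarily large derivative,

$$
f_1(x) = 0,\qquad f_2(x) = \varepsilon \sin\!\left(\frac{x}{\varepsilon^2}\right)
\;\Longrightarrow\;
\sup_x |f_2 - f_1| = \varepsilon \to 0,
\qquad
\sup_x |f_2' - f_1'| = \frac{1}{\varepsilon} \to \infty .
$$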
The authors refer to this indirectly, but I would really like to see the authors be more explicit about that fact, and I think it would make the paper slightly better. Just to be sure that nobody makes that mental shortcut when they read this stimulating paper.
I had a bit more difficulty understanding the subtleties of the Synthetic Gradient portion of the paper. I trust that it's sound.
Also, in Figure 4, I'm looking at certain plots and it seems like "regular distillation" very often gets worse with more iterations. (The x-axis is iterations, right?) Sobolev training looks better, but it's hard to say if it's just because the hyperparameters were not selected correctly; they kind of make regular distillation look bad. Moreover, the y-axis really plays with the scales and makes the differences look way bigger than they are. It's hard to know how meaningful those differences are.
To sum it up, I'm glad that I got to read this paper. I would like to see better, and more meaningful experiments, in a later paper, but I'm happy about what I see now. |
nips_2017_2020 | A Disentangled Recognition and Nonlinear Dynamics Model for Unsupervised Learning
This paper takes a step towards temporal reasoning in a dynamically changing video, not in the pixel space that constitutes its frames, but in a latent space that describes the non-linear dynamics of the objects in its world. We introduce the Kalman variational auto-encoder, a framework for unsupervised learning of sequential data that disentangles two latent representations: an object's representation, coming from a recognition model, and a latent state describing its dynamics. As a result, the evolution of the world can be imagined and missing data imputed, both without the need to generate high dimensional frames at each time step. The model is trained end-to-end on videos of a variety of simulated physical systems, and outperforms competing methods in generative and missing data imputation tasks. | The paper presents a time-series model for high dimensional data by combining variational auto-encoder (VAE) with linear Gaussian state space model (LGSSM). The proposed model takes the latent repressentation from VAE as the output of LGSSM. The exact inference of linear Gaussian state space model via Kalman smoothing enables efficient and accurate variational inference for the overall model. To extend the temporal dynamics beyond linear dependency, the authors use a LSTM to parameterize the matrices in LGSSM. The performance of the proposed model is evaluated through bouncing ball and Pendulum experiments.
LGSSM with dynamic parameterization is a nice trick for learning complex temporal dynamics through a simple probabilistic model, but it has been exploited in the literature, e.g., [M. Seeger, D. Salinas, V. Flunkert NIPS 2016]. The proposed method does not really disentangle the object representation from the temporal dynamics, as the model is unlikely to infer the correct dynamics of an unseen object. The proposed method only models the temporal dynamics on top of the low-dimensional latent representation from the VAE.
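To make the "dynamic parameterization" point concrete, here is a minimal numpy sketch of the trick: a recurrent network mixes a small bank of base transition matrices into a time-varying LGSSM transition. All names, shapes, and the plain-RNN stand-in for the LSTM are illustrative assumptions, not the authors' code.

    import numpy as np

    def softmax(v):
        e = np.exp(v - v.max())
        return e / e.sum()

    rng = np.random.default_rng(0)
    K, dz, dh = 3, 4, 8                # bank size, latent state dim, RNN state dim (assumed)
    A_bank = rng.normal(scale=0.3, size=(K, dz, dz))   # K base transition matrices
    Wh = rng.normal(scale=0.3, size=(dh, dh))
    Wz = rng.normal(scale=0.3, size=(dh, dz))
    Wo = rng.normal(scale=0.3, size=(K, dh))

    h = np.zeros(dh)                   # recurrent state (stand-in for the LSTM)
    z = rng.normal(size=dz)            # latent code from the recognition model (VAE)
    for t in range(10):
        h = np.tanh(Wh @ h + Wz @ z)   # recurrence driven by past latents
        alpha = softmax(Wo @ h)        # mixture weights over the K base dynamics
        A_t = np.tensordot(alpha, A_bank, axes=1)  # time-varying transition matrix
        z = A_t @ z                    # locally linear, globally nonlinear dynamics

The model stays conditionally linear-Gaussian given the mixture weights, which is what keeps Kalman smoothing exact.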
The proposed method is evaluated on some simple examples, where the object representation is simple (mostly a single object in a low-resolution image) and the dynamics are also simple (a bouncing ball or a pendulum). It would be more convincing to show experiments with either a complex data representation or complex dynamics (e.g., human motion). In the Pendulum experiment, the proposed method is compared with two other methods in terms of ELBO. However, it is questionable how to compare ELBOs from different models (e.g., the numbers of parameters are probably different).
nips_2017_1188 | Hierarchical Methods of Moments
Spectral methods of moments provide a powerful tool for learning the parameters of latent variable models. Despite their theoretical appeal, the applicability of these methods to real data is still limited due to a lack of robustness to model misspecification. In this paper we present a hierarchical approach to methods of moments to circumvent such limitations. Our method is based on replacing the tensor decomposition step used in previous algorithms with approximate joint diagonalization. Experiments on topic modeling show that our method outperforms previous tensor decomposition methods in terms of speed and model quality.
obtained by standalone EM in this case is reasonable and desirable: when asked for a small number of latent variables EM yields a model which is easy to interpret and can be useful for data visualization and exploration. For example, an important application of low-dimensional learning can be found in mixture models, where latent class assignments provided by a simple model can be used to split the training data into disjoint datasets to which EM is applied recursively to produce a hierarchical clustering [12,13]. The tree produced by such a clustering procedure provides a useful aid in data exploration and visualization even if the models learned at each branching point do not accurately represent the training data.
In this paper we develop a hierarchical method of moments that produces meaningful results even in misspecified settings. Our approach is different from previous attempts to design MoM algorithms for misspecified models. Instead of looking for convex relaxations of existing MoM algorithms as in [14,15,16], or analyzing the behavior of a MoM algorithm with a misspecified number of latent states as in [17,18], we generalize well-known simultaneous diagonalization approaches to tensor decomposition by phrasing the problem as a non-convex optimization problem. Despite its non-convexity, the hierarchical nature of our method allows for a fast, accurate solution based on low-dimensional grid search. We test our method on synthetic and real-world datasets on the topic modeling task, showcasing the advantages of our approach and obtaining meaningful results. | Spectral methods for learning latent variable models reduce the learning problem to the problem of performing some low-rank factorization over matrices collecting some moments of the target distribution. There are many variants of this in the literature; most approaches use SVD or other forms of tensor decomposition. This paper highlights an important limitation of many of these approaches. The main limitation is that most theoretical guarantees apply when the goal is to recover a model of k states assuming that the mixture distribution from which the moment matrix is computed also has k components. However, in many cases one is interested in learning models with l lower than k, which is what the authors call the misspecified setting.
This paper proposes a method that performs joint diagonalization by combining a truncated SVD step with an optimization step. The theoretical analysis shows that this algorithm can work in the misspecified setting. The paper presents an application of the proposed algorithm to topic modelling.
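For readers unfamiliar with the primitive: approximate joint diagonalization looks for a single basis that (nearly) diagonalizes a family of matrices at once, typically by minimizing the off-diagonal energy. A generic formulation (not necessarily the paper's exact objective) is

    \min_Q \sum_{l=1}^{L} \| \mathrm{off}(Q^\top M_l Q) \|_F^2, \qquad \mathrm{off}(A) = A - \mathrm{diag}(A),

where the M_l are projected moment matrices that, in the well-specified case, share a common diagonalizer.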
I believe this is a nice contribution. It is true that there are already many variants of the spectral method, each with its own claimed advantages. That being said, the one proposed in this paper is technically sound and addresses a true limitation of previous work.
In terms of references the paper is quite complete; there are, however, a couple of relevant missing references:
- B. Balle et al., "Local Loss Optimisation in operator models: A new insight into spectral learning"; since it is one of the first papers to frame spectral learning directly as an optimization problem.
- More closely related to this submission, B. Balle et al., "A Spectral Learning of Finite State Transducers", is to the best of my knowledge the first paper that proposes a spectral method based on joint diagonalization.
nips_2017_2247 | Multi-Information Source Optimization
We consider Bayesian methods for multi-information source optimization (MISO), in which we seek to optimize an expensive-to-evaluate black-box objective function while also accessing cheaper but biased and noisy approximations ("information sources"). We present a novel algorithm that outperforms the state of the art for this problem by using a Gaussian process covariance kernel better suited to MISO than those used by previous approaches, and an acquisition function based on a one-step optimality analysis supported by efficient parallelization. We also provide a novel technique to guarantee the asymptotic quality of the solution provided by this algorithm. Experimental evaluations demonstrate that this algorithm consistently finds designs of higher value at less cost than previous approaches. | This paper deals with the important topic of optimization in cases where in addition to costly evaluations of the objective function, it is possible to evaluate cheaper approximations of it. This framework is referred to as MISO (multi-information source optimization) in the paper, where Bayesian optimization strategies relying on Gaussian process models are considered. An original MISO approach is presented, misoKG, that relies on an adaption of the Knowledge Gradient algorithm in multi-information source optimization settings. The method is shown to achieve
very good results and to outperform the considered competitors on three test cases.
Overall, I am fond of the ideas and the research directions tackled by the paper, and I found the contributions principled and practically relevant. Given the potential benefits for the machine learning community and beyond, these contributions should definitely be published in a high-level venue such as NIPS. Yet, the paper could still be improved in several respects. Let me start with general remarks and then add minor comments.
A lot of emphasis is put throughout the paper, starting in the abstract, on the use of "a novel Gaussian process covariance kernel". In the introduction, "two innovations" are listed as 1) the kernel and 2) the acquisition function. I think that this is misleading: to me, the main contribution is by far the acquisition function, and the model is more a modality of its application. By the way, the proposed model is a quite simple particular case
of the Linear Model of Coregionalization (which the authors mention), corresponding to an M*(M+1) matrix [1,I], i.e. a concatenation of a vector of ones with an identity matrix. Besides this, it also has quite some similarity with the Kennedy and O'Hagan model that was used by Forrester et al. in optimization, without either the hierarchical aspect or the regression coefficients. All in all, I have the feeling that using the model presented by the authors is indeed a clever thing to do, but to me it doesn't constitute a major contribution in itself. Note also that while misoKG is compared to several competitors, I did not see comparisons of the proposed model (in terms of fit) against others. Also, what would prevent one from using other criteria with this model, or other models with the misoKG acquisition function?
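For concreteness, my reading of the model under discussion (a reconstruction from the paper's description, so treat it as a paraphrase): each information source is the truth plus an independent bias GP,

    f(\ell, x) = g(x) + \delta_\ell(x), \qquad g \sim GP(\mu_0, \Sigma_0), \quad \delta_\ell \sim GP(0, \Sigma_\ell) \text{ independent}, \quad \delta_0 \equiv 0,

which induces \mathrm{Cov}(f(\ell, x), f(\ell', x')) = \Sigma_0(x, x') + \mathbf{1}\{\ell = \ell' \neq 0\} \, \Sigma_\ell(x, x'). This is exactly the LMC special case with the [1, I] coregionalization structure mentioned above.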
Other than that, let me formulate a few comments/questions regarding the main contributions presented in the third section. First, a finite set of points is
introduced at each iteration for the calculation and the optimization of the acquisition function. I am wondering to what extent such a set is needed on the one
hand for the calculation, on the other hand for the optimization, and finally if there are good reasons (beyond the argument of simplicity) why the same
set should be used for both tasks. I concede that optimizing over l and x at the same time is not easy, but couldn't one optimize over D for every l and then
compare the solutions obtained for the different l's? By the way, an approach was proposed in "Bayesian Optimization with Gradients" (arXiv:1703.04389) to
avoid discretization in the calculation of a similar acquisition function; could such approach be used here? If yes but a discretization approach is preferred
here, maybe a discussion on when considering discrete subsets anyway would make sense? On a different note, I was wondering to what extent the calculation tricks
used around lines 158-172 differ from the ones presented in the PhD thesis of P. Frazier and related publications. Maybe a note about that would be a nice addition.
Finally, I found the formulation of Theorem 1 slightly confusing. One speaks of a recommendation x_N^star in A and then takes the limit over N. So we have a sequence of points in A, right? Besides, it all relies on Corollary 1 from the appendix, but there the reader is referred to "the full version of the paper"... I do not have that in my possession, and as of now I find the proof of Corollary 1 somewhat sketchy. I can understand that there are good reasons for that, but it is a bit strange to comment on a result whose proof is in the appendix, which itself refers to a full version one doesn't have at hand!
A few minor comments follow:
* About the term "biased" (appearing in several places): I understand well that a zero-mean prior on the bias doesn't mean at all that there is no bias, but
maybe manipulating means of biases without further discussion on the distinction between the bias and its model could induce confusion. Also, around lines 54-55,
it is written that "...IS may be noisy, but are assumed to be unbiased" followed by "IS are truly unbiased in some applications" and "Thus, multi-fidelity methods
can suffer limited applicability if we are only willing to use them when IS are truly unbiased...". As far as I understand, several methods including the ones
based on the Kennedy and O'Hagan settings allow accounting for discrepances in the expectations of the different IS considered. So what is so different between
multi-fidelity optimization and what is presented here regarding the account of possible "biases"? From a different perspective, other approaches like the one of
"Quantile-Based Optimization of Noisy Computer Experiments with Tunable Precision" are indeed relying on responses with common expectation (at any given x).
* In line 43, the "entropy" is the maximizer's entropy, right?
* Still in line 43: why would the one-step optimality mean that the proposed approach should outperform entropy-based ones over a longer horizon?
* About "From this literature, we leverage...", lines 70-72: as discussed above, it would be wothwhile to point out more explicitly what is new versus what is
inherited from previous work regarding these computational methods.
* In line 79 and in the bibliography: the correct last name is "Le Gratiet" (not "Gratiet").
* In line 84: without further precision, argmax is a set. So, the mentioned "best design" should be one element of this set?
* In line 87: the "independence" is a conditional independence given g, I guess...?
* In page 3 (several instances): not clear if the bias is g(.)-f(l,.) or delta_l(.)=f(l,.)-g(.)?
* In line 159, the first max in the expectations should have braces or so around a_i + b_i Z.
* In line 165: using c is a bit slippery as this letter is already used for the evaluation costs...
* In "The Parallel Algorithm": strange ordering between 1) and 2) as 2) seems to describe a step within 1)...? |
nips_2017_1720 | AIDE: An algorithm for measuring the accuracy of probabilistic inference algorithms
Approximate probabilistic inference algorithms are central to many fields. Examples include sequential Monte Carlo inference in robotics, variational inference in machine learning, and Markov chain Monte Carlo inference in statistics. A key problem faced by practitioners is measuring the accuracy of an approximate inference algorithm on a specific data set. This paper introduces the auxiliary inference divergence estimator (AIDE), an algorithm for measuring the accuracy of approximate inference algorithms. AIDE is based on the observation that inference algorithms can be treated as probabilistic models and the random variables used within the inference algorithm can be viewed as auxiliary variables. This view leads to a new estimator for the symmetric KL divergence between the approximating distributions of two inference algorithms. The paper illustrates application of AIDE to algorithms for inference in regression, hidden Markov, and Dirichlet process mixture models. The experiments show that AIDE captures the qualitative behavior of a broad class of inference algorithms and can detect failure modes of inference algorithms that are missed by standard heuristics. | [after feedback]
Thank you for the clarifications. I've updated my score.
I do still have some concerns regarding applicability, since a gold standard "ground truth" of inference is required. I do appreciate that the situation described in the feedback, where one is trying to decide on some approximate algorithm to "deploy" in the wild, is actually fairly common. That is, the sort of setting where very slow MCMC can be run for a long time on training data, but on new data, where there is e.g. a real-time requirement, a faster approximate inference algorithm will be used instead.
[original review]
This paper introduces a new method for benchmarking the performance of different approximate inference algorithms. The approach is based on constructing an estimator of the “symmetric” KL divergence (i.e., the sum of the forward and reverse KL) between an approximation to the target distribution and a representation of the “true” exact target distribution.
The overall approach considered is interesting, and for the most part clearly presented. I would agree that there is lots of potential in approaches which directly consider the auxiliary random variables which occur within an approximate inference algorithm. The meta-inference performed here (as described in definition 3.2) relates estimation of the probability density assigned to a particular algorithm output to a marginal likelihood computation, marginalizing out the latent variables in the algorithm; SMC and AIS are then natural choices for meta-inference algorithms.
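To fix notation for the comments below: the quantity being estimated is the symmetric KL between the output distributions p (target algorithm) and q (gold standard),

    D(p, q) = KL(p \| q) + KL(q \| p) = E_{x \sim p}[\log p(x) - \log q(x)] + E_{x \sim q}[\log q(x) - \log p(x)],

where neither density is tractable; as I understand the paper, the meta-inference step replaces each intractable density with a marginal-likelihood-style estimate over the algorithm's auxiliary randomness, which is what makes the resulting estimator an upper bound in expectation.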
The primary drawback of this paper is that the proposed divergence estimate D-hat provided by AIDE does not seem to lend itself to use as a practical diagnostic tool. This symmetric KL does not allow us to (say) decide whether one particular inference algorithm is "better" than another; it only allows us to characterize the discrepancy between two algorithms. In particular, if we lack "gold standard" inference for a particular problem, then it is not clear what we can infer from D-hat; plots such as those in figure 3 can only be produced contingent on knowing which (of possibly several) inference algorithms is unambiguously the "best".
Do we really need a diagnostic such as this for situations where "gold standard" inference is computationally feasible to perform? I would appreciate it greatly if the authors could clarify how they expect to see AIDE applied. Figure 5 clearly demonstrates that the AIDE estimate is a strong diagnostic for the DPMM, but unless I am mistaken this cannot be computed online, only relative to separate gold-standard executions. The primary use case for this then seems to be providing a tool for designers of approximate inference algorithms to accurately benchmark their own algorithms' performance (which is certainly very useful! But I don't see, e.g., how figure 5 fits in that context).
I would additionally be concerned about the relative run-time cost of inference and meta-inference. For example, in the example shown in figure 4, we take SMC with 1000 particles and the optimal proposal as the gold standard — how does the computational cost of the gold standard compare to that of e.g. the dashed red line for SMC with optimal proposal and 100 meta-inference runs, at 10 or 100 particles?
Minor questions:
• Why are some of the inner for-loops in algorithm 2 beginning from m = 2, …, M, not from m = 1? |
nips_2017_1274 | Consistent Robust Regression
We present the first efficient and provably consistent estimator for the robust regression problem. The area of robust learning and optimization has generated a significant amount of interest in the learning and statistics communities in recent years owing to its applicability in scenarios with corrupted data, as well as in handling model mis-specifications. In particular, special interest has been devoted to the fundamental problem of robust linear regression where estimators that can tolerate corruption in up to a constant fraction of the response variables are widely studied. Surprisingly however, to this date, we are not aware of a polynomial time estimator that offers a consistent estimate in the presence of dense, unbounded corruptions. In this work we present such an estimator, called CRR. This solves an open problem put forward in the work of [3]. Our consistency analysis requires a novel two-stage proof technique involving a careful analysis of the stability of ordered lists which may be of independent interest. We show that CRR not only offers consistent estimates, but is empirically far superior to several other recently proposed algorithms for the robust regression problem, including extended Lasso and the TORRENT algorithm. In comparison, CRR offers comparable or better model recovery but with runtimes that are faster by an order of magnitude. | Summary. This submission studies robust linear regression under the settings where there are additive white observation noises, and where \alpha=\Omega(1) fraction of the responses are corrupted by an oblivious adversary. For this problem, it proposes a consistent estimator, called CRR, which is computationally easy because it is based on iterative hard thresholding. It is shown to be consistent (i.e., converging to the true solution in the limit of the number of responses tending to infinity) under certain conditions (Theorem 4).
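For orientation, a minimal numpy sketch of the hard-thresholding loop this style of estimator is built on: my reconstruction of the generic recipe (alternate least squares on corruption-corrected responses with thresholding of residuals), not the paper's exact algorithm, constants, or guarantees.

    import numpy as np

    def hard_threshold(r, k):
        # keep the k largest-magnitude entries of r, zero out the rest
        b = np.zeros_like(r)
        idx = np.argsort(np.abs(r))[-k:]
        b[idx] = r[idx]
        return b

    def robust_regression(X, y, k, n_iters=50):
        n, d = X.shape
        b = np.zeros(n)  # estimate of the sparse corruption vector
        for _ in range(n_iters):
            # fit on corruption-corrected responses, then re-estimate corruptions
            w = np.linalg.lstsq(X, y - b, rcond=None)[0]
            b = hard_threshold(y - X @ w, k)
        return w

    # toy usage: 10% of responses corrupted by large, dense errors
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 5)); w_true = rng.normal(size=5)
    y = X @ w_true + 0.1 * rng.normal(size=500)
    y[:50] += rng.normal(scale=10.0, size=50)
    w_hat = robust_regression(X, y, k=60)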
Quality. In Theorem 4, I did not understand how \epsilon is defined, and consequently the statement of the theorem itself. Is it defined on the basis of the problem or the algorithm setting? Or does the statement of the theorem hold "for any" \epsilon > 0, or is it that "there exists" an \epsilon for which the statement holds? The same also applies to \epsilon and e_0 in Lemma 5.
In Figure 1, the results with d=500, n=2000, \sigma=2, and k=600 appear in subfigures (a), (b), and (c), but those in subfigure (a) seem different from those in (b) and (c), which calls the validity of the results presented here into question.
Clarity. I think that this submission is basically clearly written.
Originality. Although I have not done an extensive survey of the literature, proposing a computationally efficient algorithm, with a consistency guarantee in the limit of the number n of responses tending to infinity, for the robust linear regression problem with Gaussian noise as well as an oblivious adversary, when the fraction \alpha of adversarial corruptions is \Omega(1), appears to be novel.
Significance. Even though the fraction \alpha of adversarial corruptions for which CRR is shown to be consistent is quite small (\alpha \le 1/20000), it is still nontrivial to give a consistency guarantee when a constant fraction of the responses is corrupted. This result should stimulate further studies regarding finer theoretical analysis as well as algorithmic development.
Minor points:
Line 64: incl(u)ding more expensive methods
Line 76: Among those cite(d)
Line 82 and yet guarantee(s) recovery.
Line 83: while allowing (allow)
Lines 61, 102, and 115: The notation E[...]_2 would require explicit definition.
Equation (5): The equation should be terminated by a period rather than a comma.
Line 164: (more fine -> finer) analysis
Line 173: the (Subset) Strong Smoothness Property
Line 246: consequently ensure(s)
Lines 254-264: I guess that all the norms appearing in this part should be the 2-norm, but on some occasions the subscript 2 is missing.
Lines 285-286: Explicitly mention that the Torrent-FC algorithm was used for comparison. Simply writing "the Torrent algorithm" might be ambiguous, and also in the figures the authors mentioned it as "TORRENT-FC".
Line 293: faster runtimes tha(n) Torrent
Line 295: both Torrent (and) ex-Lasso become(s) intractable with a(n) order
Line 296: Figure(s) 2 (c) and 2(d) show(s)
Line 303: Figure 2 (a -> b) shows
Line 310: we found CRR to (to) be |
nips_2017_3115 | Efficient Second-Order Online Kernel Learning with Adaptive Embedding
Online kernel learning (OKL) is a flexible framework for prediction problems, since the large approximation space provided by reproducing kernel Hilbert spaces often contains an accurate function for the problem. Nonetheless, optimizing over this space is computationally expensive. Not only do first-order methods accumulate O(√T) more loss than the optimal function, but the curse of kernelization also results in O(t) per-step complexity. Second-order methods get closer to the optimum much faster, suffering only O(log T) regret, but second-order updates are even more expensive, with their O(t^2) per-step cost. Existing approximate OKL methods reduce this complexity either by limiting the support vectors (SV) used by the predictor, or by avoiding the kernelization process altogether using embedding. Nonetheless, as long as the size of the approximation space or the number of SV does not grow over time, an adversarial environment can always exploit the approximation process. In this paper, we propose PROS-N-KONS, a method that combines Nyström sketching to project the input point to a small and accurate embedded space with efficient second-order updates in this space. The embedded space is continuously updated to guarantee that the embedding remains accurate. We show that the per-step cost only grows with the effective dimension of the problem and not with T. Moreover, the second-order updates allow us to achieve logarithmic regret. We empirically compare our algorithm on recent large-scale benchmarks and show that it performs favorably.
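A rough sketch of the two ingredients the abstract combines: a Nystrom embedding followed by a second-order (online-Newton-style) update in the embedded space. Everything here (fixed landmarks, RBF kernel, squared loss, unit regularizer) is an illustrative assumption; in particular the paper's embedding is adaptive rather than fixed.

    import numpy as np

    def rbf(A, B, gamma=1.0):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3)); y = np.sign(X[:, 0])
    L = X[:20]                                  # Nystrom landmarks (fixed here)
    Kmm = rbf(L, L) + 1e-6 * np.eye(len(L))
    T = np.linalg.inv(np.linalg.cholesky(Kmm))  # whitening, so phi(x).phi(x') approximates k(x, x')

    m = len(L)
    w = np.zeros(m)
    A = np.eye(m)                               # second-order matrix (regularized)
    for x, yt in zip(X, y):
        phi = T @ rbf(L, x[None, :])[:, 0]      # embed the point into the Nystrom space
        g = (w @ phi - yt) * phi                # squared-loss gradient in the embedded space
        A += np.outer(g, g)
        w -= np.linalg.solve(A, g)              # Newton-style step (Sherman-Morrison would avoid the solve)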
NOVELTY
The novelty is limited in both the methodological and theoretical contributions. The achieved results do not have profound implications for the advancement of theory and practice.
WRITING QUALITY
The English writing and organization of this paper are relatively good. The reviewer strongly suggests that the authors place Table 2 in the main paper rather than in the Appendix, because the experimental results in Table 2 are the core material.
COMMENTS
This paper works on accelerating second-order online kernel learning and matches the desired state-of-the-art regret bound. The proposed method improves on KONS by using efficient Nyström methods instead of the true kernel matrix. However, it seems that this manuscript has not been completed, as many time and accuracy comparisons in Table 1 have not been reported. Although the paper claims that the effective dimension is too large to complete the results in line 341, this explanation for the incomplete experiments is not convincing. If the effective dimension is the concern, the paper should provide more theoretical and practical analysis showing how it affects PROS-N-KONS. In addition, the reviewer cannot find complexity comparisons among the compared methods. Thus, it is not convincing at all that the proposed method is efficient.
Also, the reviewer has three concerns on this paper:
1) In the experimental setup, how are the values of beta, epsilon, and the kernel bandwidth determined? For beta and epsilon, the related parameter selection will take much time. If the learning performance is very sensitive to these two parameters, the efficiency of the proposed method decreases. Regarding the bandwidth, it has an effect on both the effective dimension and the learning accuracy.
2) The experimental comparison is not convincing. Efficiency is the key contribution of the proposed method, but the run-time comparison with the baselines is missing. Moreover, Dual-SGD is evaluated on only one selected dataset, which makes the model comparison less reasonable.
3) For the ridge leverage scores, it is surprising that the paper does not cite [Alaoui], which originally proposed ridge leverage scores in Nyström approximation, with an application to regression analysis. Actually, to my understanding, [Alaoui] only shows that the size of the Nyström approximation can be reduced, and does not claim that their method would be faster than previous methods; moreover, the leverage score approximation in their Theorem 4 could cost much time, even more than the time spent in Theorem 3 (just check how to satisfy the conditions of their Theorems 3 and 4). I hope this concern is helpful for addressing your negative claim and results in line 341.
[Alaoui] El Alaoui, A., Mahoney, M. W. (2014). Fast randomized kernel methods with statistical guarantees. Advances in Neural Information Processing Systems. |
nips_2017_2712 | Triangle Generative Adversarial Networks
A Triangle Generative Adversarial Network (∆-GAN) is developed for semisupervised cross-domain joint distribution matching, where the training data consists of samples from each domain, and supervision of domain correspondence is provided by only a few paired samples. ∆-GAN consists of four neural networks, two generators and two discriminators. The generators are designed to learn the two-way conditional distributions between the two domains, while the discriminators implicitly define a ternary discriminative function, which is trained to distinguish real data pairs and two kinds of fake data pairs. The generators and discriminators are trained together using adversarial learning. Under mild assumptions, in theory the joint distributions characterized by the two generators concentrate to the data distribution. In experiments, three different kinds of domain pairs are considered, image-label, image-image and image-attribute pairs. Experiments on semi-supervised image classification, image-to-image translation and attribute-based image generation demonstrate the superiority of the proposed approach. | Triangle GAN is an elegant method to perform translation across domains with limited numbers of paired training samples. Empirical results seem good, particularly the disentanglement of known attributes and noise vectors. I'm worried, though, that the comparison to the closest similar method is somewhat unfair, and in general that the relationship to it is somewhat unclear.
Relation to Triple GAN:
Your statements about the Triple GAN model's requirements on p_y(y | x) are somewhat misleading. Both in lines 53-56 and section 2.5, it sounds like you're saying that p_y(y | x) must be known a priori, which would be a severe limitation to their model. This, though, is not the case: rather, the Triple GAN model learns a classifier which estimates p_y(y | x), and show (their Theorem 3.3) that this still leads to the unique desired equilibrium. In cases where p_y(y | x) does not have a density, it is true that their loss function would be undefined. But their algorithm will still simply estimate a classifier which gives high probability densities, and I think their Theorem 3.3 still applies; the equilibrium will simply be unreachable. Your Propositions 1 and 2 also break when the distributions lack a joint density, and so this is not a real advantage of your method.
The asymmetry in the Triple GAN loss function (8), however, still seems potentially undesirable, and your approach seems preferable in that respect. But depending on the complexity of the domains, it may be easier or harder to define and learn a classifier than a conditional generator, and so this relationship is not totally clear.
In fact, it seems like your image-label mappings (lines 164-173) use essentially this approach of a classifier rather than a conditional generator as you described earlier. This model appears sufficiently different from your main Triangle GAN approach that burying its description in a subsection titled Applications is strange. More discussion about its relationship to Triangle GAN, and about why the default approach didn't work for you here, would be helpful.
Similarly, the claim that Triple GAN cannot be implemented on your toy data experiment (section 4.1) seems completely wrong: it perhaps cannot with exactly the modeling choice you make of using G_x and G_y, but p_x(x | y) and p_y(y | x) are simple Gaussians and can certainly be modeled with e.g. a network outputting a predictive mean and variance, which is presumably what the Triple GAN approach would be in this case.
Theory:
The claim that "p(x, y) = p_x(x, y) = p_y(x, y) can be achieved in theory" (line 120) is iffy at best. See the seminal paper
Arjovsky and Bottou. Towards Principled Methods for Training Generative Adversarial Networks. ICLR 2017. arXiv: 1701.04862.
The basic arguments of this paper should apply to your model as well, rendering your theoretical analysis kind of moot. This subarea of GANs is fast-moving, but really I think any GAN work at this point, especially papers attempting to show any kind of theoretical results like your section 2.3, needs to grapple with its relationship to the results of that paper and the followup Wasserstein GAN (your [22]) line of work.
As you mention, it seems that many of these issues could be addressed in your framework without too much work, but it would be useful to discuss this in the theory section rather than simply stating results that are known to be of very limited utility.
Section 4.4:
In Figure 6, the edited images are clearly not random samples from the training set: for example, the images where you add eyeglasses only show women. This raises a natural question: since the images are not random, were they cherry-picked to look for good results? I would strongly recommend ensuring that all figures are based on random samples, or if that is impractical for some reason, clarifying what subset you used (and why) and being random within that subset.
Smaller comments:
Lines 30-33: ALI and BiGAN don't seem to really fit into the paradigm of "unsupervised joint distribution matching" very well, since in their case one of the domains is a simple reference distribution over latent codes, and *any* pairwise mapping is fine as long as it's coherent. You discuss this a little more in section 3, but it's kind of strange at first read; I would maybe put your citations of [9] to [11] first and then mention that ALI/BiGAN can be easily adapted to this case as well.
Section 2.2: It would be helpful to clarify here that Triangle GANs are exclusively a "translation" method: you cannot produce an (x, y) sample de novo. This is fine for the settings considered here, but it differs from typical GAN structures enough that it is probably worth re-emphasizing, e.g., that when you refer to drawing a sample from p(x), you mean taking a training sample rather than sampling from some form of generative model.
nips_2017_456 | Recycling Privileged Learning and Distribution Matching for Fairness
Equipping machine learning models with ethical and legal constraints is a serious issue; without this, the future of machine learning is at risk. This paper takes a step forward in this direction and focuses on ensuring machine learning models deliver fair decisions. In legal scholarship, the notion of fairness itself is evolving and multi-faceted. We set an overarching goal to develop a unified machine learning framework that is able to handle any definitions of fairness, their combinations, and also new definitions that might be stipulated in the future. To achieve our goal, we recycle two well-established machine learning techniques, privileged learning and distribution matching, and harmonize them for satisfying multi-faceted fairness definitions. We consider protected characteristics such as race and gender as privileged information that is available at training but not at test time; this accelerates model training and delivers fairness through unawareness. Further, we cast demographic parity, equalized odds, and equality of opportunity as a classical two-sample problem of conditional distributions, which can be solved in a general form by using distance measures in Hilbert space. We show several existing models are special cases of ours. Finally, we advocate returning the Pareto frontier of multi-objective minimization of error and unfairness in predictions. This will facilitate decision makers to select an operating point and to be accountable for it. | This paper proposes a framework which can learn classifiers that satisfy multiple notions of fairness such as fairness through unawareness, demographic parity, equalized odds etc. The proposed framework leverages ideas from two different lines of existing research, namely distribution matching and privileged learning, in order to accommodate multiple notions of fairness. This work builds on two prior papers on fairness (Hardt et al. and Zafar et al.) in order to create a more generalized framework for learning fair classifiers. The proposed method seems interesting and novel, and the ideas from privileged learning and distribution matching have not been employed in designing fair classifiers so far. The idea of proposing a generalized framework which can handle multiple notions of fairness is quite appealing.
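To make the "two-sample problem" framing concrete (my paraphrase of the construction): equality of opportunity, for instance, asks that the predictor's output distribution, conditioned on the positive class, match across protected groups,

    P(\hat{y} \mid y = 1, z = 0) \stackrel{d}{=} P(\hat{y} \mid y = 1, z = 1),

and the paper penalizes a Hilbert-space distance (e.g., MMD) between the two empirical conditional distributions, so any fairness notion expressible as such a matching constraint plugs into the same objective.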
The paper, however, has the following weaknesses:
1) the evaluation is weak; the baselines used in the paper are not even designed for fair classification
2) the optimization procedure used to solve the multi-objective optimization problem is not discussed in adequate detail
Detailed comments below:
Methods and Evaluation: The proposed objective is interesting and utilizes ideas from two well-studied lines of research, namely privileged learning and distribution matching, to build classifiers that can incorporate multiple notions of fairness. The authors also demonstrate how some of the existing methods for learning fair classifiers are special cases of their framework. It would have been good to discuss the goal of each of the terms in the objective in more detail in Section 3.3. The weakest part of the entire discussion of the approach is the treatment of the optimization procedure. The authors state that there are different ways to optimize the multi-objective optimization problem they formulate, without clearly mentioning which procedure they employ and why (in Section 3). There seems to be some discussion of this in the experiments section (first paragraph); I gather that the objective was first converted into an unconstrained optimization problem and that an optimal solution from the Pareto set was then found using BFGS. This discussion is still quite rudimentary, and it would be good to explain the pros and cons of this procedure w.r.t. other possible optimization procedures that could have been employed to optimize the objective.
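For reference, the standard device I believe the authors used is linear scalarization: fold the unfairness terms into the objective with trade-off weights,

    \min_\theta \; L_{err}(\theta) + \sum_j \lambda_j U_j(\theta), \qquad \lambda_j \ge 0,

and trace (part of) the Pareto frontier by sweeping the \lambda_j, solving each unconstrained problem with BFGS. A known caveat worth acknowledging is that linear scalarization only recovers points on the convex part of the frontier.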
The baselines used to compare the proposed approach, and the evaluation in general, seem a bit weak to me. Ideally, it would be good to employ baselines that learn fair classifiers based on different notions (e.g., Hardt et al. and Zafar et al.) and compare how well the proposed approach performs on each notion of fairness in comparison with the corresponding baseline that is designed to optimize for that notion. Furthermore, I am curious as to why k-fold cross-validation was not used in generating the results. Also, was the split between train and test set done randomly? And why are the proportions of train and test different for different datasets?
Clarity of Presentation:
The presentation is clear in general and the paper is readable. However, there are certain cases where the writing gets a bit choppy. Comments:
1. Lines 145-147 provide the reason behind x*_n being the concatenation of x_n and z_n. This is not very clear.
2. In Section 3.3, it would be good to discuss the goal of including each of the terms in the objective in the text clearly.
3. In Section 4, more details about the choice of train/test splits need to be provided (see above).
While this paper proposes a useful framework that can handle multiple notions of fairness, there is scope for improving it quite a bit in terms of its experimental evaluation and discussion of some of the technical details. |
nips_2017_3548 | On Separability of Loss Functions, and Revisiting Discriminative Vs Generative Models
We revisit the classical analysis of generative vs discriminative models for general exponential families, and high-dimensional settings. Towards this, we develop novel technical machinery, including a notion of separability of general loss functions, which allows us to provide a general framework to obtain \ell_\infty convergence rates for general M-estimators. We use this machinery to analyze \ell_\infty and \ell_2 convergence rates of generative and discriminative models, and provide insights into their nuanced behaviors in high dimensions. Our results are also applicable to differential parameter estimation, where the quantity of interest is the difference between generative model parameters.
I found the paper quite hard to follow, but this might potentially be also due to my lack of expertise in high dimensional estimation. While the introduction, as well as background and setup (Section 2) are accessible and quite easy to follow, in the following sections the paper becomes quite dense with equations and technical results, sometimes given without much guidance to the reader. Sometimes authors' comments made me quite confused; for instance, in lines 215-218, Corollary 4 and 3 are compared (for discriminative and generative models) and it is stated that "both the discriminative and generative approach achieve the same convergence rate, but the generative approach does so with only a logarithmic dependence on the dimension", whereas from what I can see, the dependence on the dimension in both Corollary 3 and 4 is the same (sqrt(log p))?
The core idea in the paper is the notion of separability of the loss. The motivating example in the introduction is that in the generative model with conditional independence assumption, the corresponding log-likelihood loss is fully separable into multiple components, each of which can be estimated separately. In case of the corresponding discriminative model (e.g. logistic regression), the loss is much less separable. This motivates the need to introduce a more flexible notion of separability in Section 3. It turns out that more separable losses require fewer samples to converge.
Unfortunately, I am unable to see this motivation behind the definition of separability (Definitions 1 and 2), which states that the error in the first-order approximation of the gradient is upper bounded by the error in the parameters. I think it would improve the readability of the paper if the authors could motivate why this particular definition is used here and how it is related to the intuitive notion of separability of the loss into multiple components.
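Schematically, and purely as my paraphrase of the definition as stated (not the paper's exact formulation), the condition reads like

    \| \nabla L(\theta) - \nabla L(\theta^*) - \nabla^2 L(\theta^*) (\theta - \theta^*) \|_\infty \le c \, \| \theta - \theta^* \|^\beta,

for problem-dependent constants c, \beta that quantify how "separable" the loss is. My confusion is precisely how this quantitative smoothness condition connects to the intuitive additive decomposition of the log-likelihood into independent components.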
Starting from Corollary 3, the authors use notation "\succsim" without explanation. What does it mean?
It is hard for me to judge the significance of the paper, as I am not familiar with previous work in this setup (apart from the work by Ng & Jordan), but I found the results quite interesting, if only they were presented more clearly. Nevertheless, I evaluate the theoretical contribution to be quite strong. What I found missing is a clear message about when to use generative vs. discriminative methods in high dimensions (similar to the message given by Ng & Jordan in their paper).
Small remarks:
36: "could be shown to separable" -> "could be shown to be separable"
65: n -> in
87: i and j should be in the subscript
259: bayes -> Bayes
254-265: the "Notation and Assumptions" paragraph -- the Bayes classifier is defined only after its error was already mentioned. |
nips_2017_855 | Flexible statistical inference for mechanistic models of neural dynamics
Mechanistic models of single-neuron dynamics have been extensively studied in computational neuroscience. However, identifying which models can quantitatively reproduce empirically measured data has been challenging. We propose to overcome this limitation by using likelihood-free inference approaches (also known as Approximate Bayesian Computation, ABC) to perform full Bayesian inference on single-neuron models. Our approach builds on recent advances in ABC by learning a neural network which maps features of the observed data to the posterior distribution over parameters. We learn a Bayesian mixture-density network approximating the posterior over multiple rounds of adaptively chosen simulations. Furthermore, we propose an efficient approach for handling missing features and parameter settings for which the simulator fails, as well as a strategy for automatically learning relevant features using recurrent neural networks. On synthetic data, our approach efficiently estimates posterior distributions and recovers ground-truth parameters. On in-vitro recordings of membrane voltages, we recover multivariate posteriors over biophysical parameters, which yield model-predicted voltage traces that accurately match empirical data. Our approach will enable neuroscientists to perform Bayesian inference on complex neuron models without having to design model-specific algorithms, closing the gap between mechanistic and statistical approaches to single-neuron modelling. | The authors present an Approximate Bayesian Computation approach to fitting dynamical systems models of single neuron dynamics. This is an interesting paper that aims to bring ABC to the neuroscience community. The authors introduce some tricks to handle the specifics of neuron models (handling bad simulations and bad features), and since fitting nonlinear single neuron models is typically quite painful, this could be a valuable tool. Overall, this paper is a real gem.
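For orientation, the core training step described in the abstract, fitting a mixture-density network q_\phi(\theta | x) on simulated (parameter, feature) pairs by maximizing its log-density, reduces to a mixture negative log-likelihood. A single-observation, 1-D numpy caricature follows (names illustrative; the paper's version is multivariate, multi-round, and reweighted across rounds):

    import numpy as np

    def mdn_nll(alpha, mu, sigma, theta):
        # negative log-likelihood of theta under a 1-D Gaussian mixture q(theta | x);
        # alpha, mu, sigma are the network's outputs for the observed features x
        comp = alpha * np.exp(-0.5 * ((theta - mu) / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)
        return -np.log(comp.sum())

    # schematic training loop: draw theta_i from the prior, simulate data, compute
    # features x_i, push x_i through the network to get (alpha, mu, sigma), and
    # minimize sum_i mdn_nll(alpha_i, mu_i, sigma_i, theta_i) over the network weights.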
My biggest issue is that, in its current form, many of the proposed improvements over [19] are a bit unclear in practice. Sections 2.2-2.3 were difficult to parse for someone less familiar with ABC/MDNs. Since these are the main improvements, it would help to add/clarify here. Additionally, the classifier for bad simulations, the imputation for bad features, and the recurrent network for learning features need to be included in the algorithm in the appendix.
Additionally, the value of imputation for missing features in neural models is plausible, but completely theoretical in the paper’s current form. For the HH model, for instance, latency is not a feature and it seems there aren’t good simulations with bad features. Is there a clear example where imputation would be necessary for inference?
Minor issues:
In section 2. “could” should be replaced with “here we do.” One challenge in reading the paper is that many of the improvements seem to be purely hypothetical, until you get to the results.
It would be helpful to have a more complete description of the features used in the main text, and also to clarify at the beginning of section 2.1 that x=f(s).
Using a recurrent neural network for learning features seems like an interesting direction, but the results here are more of a teaser than anything else. Completely unpacking this could likely be a paper by itself.
For the GLM simulation -- what features are used? Also, the data augmentation scheme could be more clear in the appendix. If I understand correctly it isn’t necessary for the generative model, but just for getting the true posteriors and in PG-MCMC. |
nips_2017_3549 | Maxing and Ranking with Few Assumptions
PAC maximum selection (maxing) and ranking of n elements via random pairwise comparisons have diverse applications and have been studied under many models and assumptions. With just one simple natural assumption: strong stochastic transitivity, we show that maxing can be performed with linearly many comparisons yet ranking requires quadratically many. With no assumptions at all, we show that for the Borda-score metric, maximum selection can be performed with linearly many comparisons and ranking can be performed with O(n log n) comparisons. | Thanks for the rebuttal, which was quite helpful and clarified most of the issues raised in the review.
The authors study the problem of PAC maximum selection (maxing) and ranking of n elements via random pairwise comparisons. Problems of that kind have recently attracted attention in the ML literature, especially in the setting of dueling bandidts. In this paper, the only assumption made is strong stochastic transitivity of the pairwise winning probabilisties. Under this assumption, the authors show that maxing can be performed with linearly many comparisons while ranking has a quadratic sample complexity. Further, for the so-called Borda ranking, maximum selection and ranking can be performed with O(n) and O(n log n) comparisons, respectively.
The paper addresses an interesting topic and is well written. That being said, I'm not convinced that it offers significant new results. It's true that I haven't seen the first two results in exactly this form before, but there are very similar ones, and besides, the results are not very surprising. Further, the argument for the result regarding the non-existence of a PAC ranking algorithm with O(n log n) comparisons in the SST setting is not totally clear.
More importantly, the last two results are already known. Theorem 8 is known as Borda reduction; see, e.g., Sparse Dueling Bandits, Generic exploration and k-armed voting bandits, and Relative upper confidence bound for the k-armed duelling bandit problem. Likewise, Theorem 9 is already known; see, e.g., Crowdsourcing for Participatory Democracies and Efficient elicitation of social choice functions.
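Since the Borda results are the known part, for completeness here is the reduction in sketch form: estimate each item's Borda score by duels against uniformly random opponents, then treat the scores as bandit arms. The duel() oracle and the schedule m are stand-ins, with constants omitted:

    import numpy as np

    def borda_estimates(n, m, duel, rng=np.random.default_rng(0)):
        # s_i = Pr(i beats a uniformly random opponent); duel(i, j) returns 1 if i wins
        wins = np.zeros(n)
        for i in range(n):
            for _ in range(m):  # m ~ O(log(n)/eps^2) duels per item suffices for maxing
                j = int(rng.integers(n - 1)); j += (j >= i)  # uniform opponent != i
                wins[i] += duel(i, j)
        return wins / m

    # Borda maxing: argmax of the estimates; Borda ranking: sort them (with larger m).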
Consequently, the only result that remains, being both novel and relevant, is the PAC maxing algorithm and its analysis. Nevertheless, a study of its performance on real data is missing, as well as a comparison with other algorithms.
Indeed, the experiments are not very convincing, especially because only a simple synthetic setting is considered, and no experiments for (Borda) ranking are included. SEQ-ELIMINATE is a subroutine of OPT-MAXIMIZE, so does it really make sense to compare the two? It would be more meaningful to compare one of the two with other maxing algorithms in the main paper and move their comparison, if necessary, to the appendix. Lines 189-299: models with other parameters should be considered. In all maxing experiments, it should be tested whether the found element is the actual maximum.
Minor issues:
- 51: an \epsilon-ranking (and not \epsilon-rankings)
- 100: comparisons (and not comparison)
- 117-118: Please reconsider the statement: "the number of comparisons used by SEQ-ELIMINATE is optimal up to a constant factor when \delta <= 1/n". \delta <= 1/n => n <= 1/\delta => O(n/\epsilon^2 \log(n/\delta)) = O(n/\epsilon^2 \log(1/\delta^2)) != \Omega(n/\epsilon^2 \log(1/\delta))
- 117-118: Please reconsider the statement: "but can be \log n times the lower bound for constant \delta", since n/\epsilon^2 \log(1/\delta) \times \log(n) != n/\epsilon^2 \log(n/\delta).
- 120-122: It is not clear how such a reduction, together with the usage of SEQ-ELIMINATE, should lead to an optimal sample complexity up to constants. Please elaborate.
- 130-131: "replacing or retaining r doesn’t affect the performance of SEQ-ELIMINATE". The performance in terms of what? And why there is no effect?
- 165: same comment as for 117-118
- 167: in 120 it was stated that S' should have a size of O(n/\log n), and here it's O(\sqrt{n \log n})
- 165-169: The procedure described here (the reduction + usage of SEQ-ELIMINATE) is different from the one described in 120-122. And again, it's not clear how such a procedure results in an order-wise optimal algorithm.
- 204: Lemma 12 does not exist.
- 207: Lemma 11 does not exist.
- 210: at least (and not atleast)
- 215: \delta is missing in the arguments list of PRUNE
- 221-222: same comment as for 165 and 117-118
- 238-239: The sample complexity is not apparent in both cases (distinguished in algorithm 2) |
nips_2017_1494 | Multi-way Interacting Regression via Factorization Machines
We propose a Bayesian regression method that accounts for multi-way interactions of arbitrary orders among the predictor variables. Our model makes use of a factorization mechanism for representing the regression coefficients of interactions among the predictors, while the interaction selection is guided by a prior distribution on random hypergraphs, a construction which generalizes the Finite Feature Model. We present a posterior inference algorithm based on Gibbs sampling, and establish posterior consistency of our regression model. Our method is evaluated with extensive experiments on simulated data and demonstrated to be able to identify meaningful interactions in applications in genetics and retail demand forecasting.
interactions are observed only infrequently, subject to a constraint that the interaction order (a.k.a. interaction depth) is given. Neither SVM nor FM can perform any selection of predictor interactions, but several authors have extended the SVM by combining it with 1 penalty for the purpose of feature selection (Zhu et al., 2004) and gradient boosting for FM (Cheng et al., 2014) to select interacting features. It is also an option to perform linear regression on as many interactions as we can and combine it with regularization procedures for selection (e.g. LASSO (Tibshirani, 1996) or Elastic net (Zou & Hastie, 2005)). It is noted that such methods are still not computationally feasible for accounting for interactions that involve a large number of predictor variables.
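For reference, the factorization mechanism referred to above is the standard second-order FM predictor,

    \hat{y}(x) = w_0 + \sum_{i=1}^{p} w_i x_i + \sum_{i < j} \langle v_i, v_j \rangle x_i x_j,

where each predictor i gets a low-dimensional embedding v_i \in R^k, so the O(p^2) pairwise interaction coefficients are represented with O(pk) parameters; the proposal below generalizes this factorized representation to selected interactions of arbitrary order.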
In this work we propose a regression method capable of adaptive selection of multi-way interactions of arbitrary order (MiFM for short), while avoiding the combinatorial complexity growth encountered by the methods described above. MiFM extends the basic factorization mechanism for representing the regression coefficients of interactions among the predictors, while the interaction selection is guided by a prior distribution on random hypergraphs. The prior, which does not insist on an upper bound on the order of interactions among the predictor variables, is motivated by, but also generalizes, the Finite Feature Model, a parametric form of the well-known Indian Buffet Process (IBP) (Ghahramani & Griffiths, 2005). We introduce a notion of the hypergraph of interactions and show how a parametric distribution over binary matrices can be utilized to express interactions of unbounded order. In addition, our generalized construction allows us to exert extra control on the tail behavior of the interaction order. The IBP was initially used for infinite latent feature modeling and later utilized in the modeling of a variety of domains (see the review paper by Griffiths & Ghahramani (2011)).
In developing MiFM, our contributions are the following: (i) we introduce a Bayesian multi-linear regression model, which aims to account for the multi-way interactions among predictor variables; part of our model construction includes a prior specification on the hypergraph of interactions -in particular we show how our prior can be used to model the incidence matrix of interactions in several ways; (ii) we propose a procedure to estimate coefficients of arbitrary interactions structure; (iii) we establish posterior consistency of the resulting MiFM model, i.e., the property that the posterior distribution on the true regression function represented by the MiFM model contracts toward the truth under some conditions, without requiring an upper bound on the order of the predictor interactions; and (iv) we present a comprehensive simulation study of our model and analyze its performance for retail demand forecasting and case-control genetics datasets with epistasis. The unique strength of the MiFM method is the ability to recover meaningful interactions among the predictors while maintaining a competitive prediction quality compared to existing methods that target prediction only.
The paper proceeds as follows. Section 2 introduces the problem of modeling interactions in regression, and gives a brief background on the Factorization Machines. Sections 3 and 4 carry out the contributions outlined above. Section 5 presents results of the experiments. We conclude with a discussion in Section 6. | The paper presents a Bayesian method for regression problems where the response variables can depend on multi-way combinations of the predictors. Via a hyper graph representation of the covariates interactions, the model is obtained from a refinement of the Finite Feature Model.
The idea of modelling the interactions as the hyperedges of a hypergraph is interesting, but the proposed model seems to be technically equivalent to the original Finite Feature Model. Moreover, from the experimental results, it is hard to assess whether the new parametrised extension is an improvement with respect to the baseline.
Here are few questions and comments:
- Is the idea of modelling interactions via an unbounded membership model new? Latent feature models have already been used for predicting protein interactions in [1]; what are the key differences between that work and the proposed method?
- the main motivation for modelling nonlinear interactions via multi-way regression is the interpretability of the identified model. Given the data, can one expect the nonlinear realisation to be unique? I think that a brief discussion about the identifiability of the model would improve the quality of the paper.
- can the model handle higher powers of the same predictor (something like y = z x^2)?
- the number of true interactions is usually expected to be very small with respect to the number of all possible interactions. Does the model include a sparsity constraint on the coefficients/binary matrices? Is the low-rank assumption on the matrix V related to this?
- is the factorised form of the multi-way interactions (matrix V in equation 3) expected to be flexible enough?
- In the experiments:
i) Were the true coefficients chosen at random? Was there a threshold on their magnitude?
ii) how is the `quality’ of the recovered coefficients defined? What is the difference between this and the `exact recovery’ of the interactions?
iii) the standard FMM model (alpha=1) performs quite well independently of the number of multi-way interactions. Was that expected?
iv) why does the random forest algorithm perform so well on the gene dataset and badly on the synthetic dataset? Why was the baseline (the Elastic net algorithm) not used in the real-world experiments?
————————————————
[1] Krause et al. 2006 Identifying protein complexes in high-throughput protein interaction screens using an infinite latent feature model |
nips_2017_914 | Learning with Feature Evolvable Streams
Learning with streaming data has attracted much attention during the past few years. Though most studies consider data streams with fixed features, in real practice the features may be evolvable. For example, features of data gathered by limited-lifespan sensors will change when these sensors are substituted by new ones. In this paper, we propose a novel learning paradigm: Feature Evolvable Streaming Learning, where old features vanish and new features occur. Rather than relying on only the current features, we attempt to recover the vanished features and exploit them to improve performance. Specifically, we learn two models, from the recovered features and the current features, respectively. To benefit from the recovered features, we develop two ensemble methods. In the first method, we combine the predictions from the two models and theoretically show that with the assistance of old features, the performance on new features can be improved. In the second approach, we dynamically select the best single prediction and establish a better performance guarantee when the best model switches. Experiments on both synthetic and real data validate the effectiveness of our proposal.
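The first ensemble method is essentially the exponentially weighted average forecaster applied to the two models; a minimal sketch follows (the step size eta and the interface are illustrative assumptions):

    import numpy as np

    def fesl_c(preds_old, preds_new, losses_old, losses_new, eta=0.5):
        # weight each model by exp(-eta * cumulative loss) and average the predictions
        T = len(preds_old)
        cum = np.zeros(2)
        out = np.empty(T)
        for t in range(T):
            w = np.exp(-eta * cum); w /= w.sum()
            out[t] = w[0] * preds_old[t] + w[1] * preds_new[t]
            cum += np.array([losses_old[t], losses_new[t]])
        return out

The second method instead selects a single model per round based on these weights, which is what enables the guarantee against the best switching sequence of models.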
Overall, I think this is a very interesting paper that addresses a real-world problem that is quite common, for example, in environmental sensing applications. I think that the paper is well-written and well-organized, and the experiments show the significance of the proposed methodology. Additionally, I positively value the use of real RFID data.
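To make the transfer-learning question below concrete, my reading of the first ensemble method (FESL-c) is that it is a standard exponentially weighted combination of the old-feature and new-feature models. A minimal sketch under that assumption (all names illustrative, not taken from the paper):

    import numpy as np

    def fesl_c_predict(pred_old, pred_new, cum_loss_old, cum_loss_new, eta=0.5):
        # Hedge-style weighting: the model with smaller cumulative loss
        # receives the larger weight; eta is a tunable learning rate.
        w = np.exp(-eta * np.array([cum_loss_old, cum_loss_new]))
        w /= w.sum()
        return w[0] * pred_old + w[1] * pred_new

If this is indeed the scheme, stating the connection to Hedge-style weighted combination explicitly would help position the contribution.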
* Contribution with respect to transfer learning. My main concern is about quantifying the degree of contribution of this work, as I cannot clearly see the difference between the proposed approach and the use of transfer learning across the two different feature sets (of course, taking into account that transfer learning cannot be applied from t=1,…, T1-B). I would like the authors to clarify this point.
* Theorems 1 and 2. I would also like the authors to discuss the practical implications of Theorem 1 and Theorem 2, as in both cases convergence to 0 is obtained when T2 is large enough, but that is not the case in Figures 1 and 2. Additionally, it seems that although the upper bound for the FESL-s algorithm is tighter than for the FESL-c method, the convergence is slower. Please clarify this point.
* Optimal value for s. I do not understand why there is an inequality in Equation 9 instead of an equality, given the assumptions made (the first model is better than the second one for t smaller than s, and the second model is better than the first one for t larger than s). In this regard, I can see in Figure 3 that this assumption holds in practice, but the value of s depends on the problem at hand. Providing some intuition about the range of values for s would be valuable.
* Datasets and experimental setup. I have some questions:
- I would like the authors to clarify why Gaussian noise was added to the UCI datasets instead of considering two different subsets of features. I guess this is because the feature spaces are assumed not to be too different, but I would like the authors to discuss this point.
- Please provide more details about how the batch datasets were transformed into streaming data. I suppose it was done randomly, but then I wonder whether 10 independent rounds are enough to obtain reliable results, and I would like to see the variability of the results in terms of cumulative loss.
- It is not clear to me how the hyperparameters of the methods (delta and eta) were adjusted. Were they set according to Theorems 1 and 2? What about the hyperparameters in the baseline methods?
- I do not understand the sentence (page 7): “So for all baseline methods and our methods, we use the same parameters and the comparison is fair”.
* Results. It would be desirable to provide tables similar to Table 1 for the Reuters and RFID data.
nips_2017_2837 | Finite Sample Analysis of the GTD Policy Evaluation Algorithms in Markov Setting
In reinforcement learning (RL), one of the key components is policy evaluation, which aims to estimate the value function (i.e., expected long-term accumulated reward) of a policy. With a good policy evaluation method, RL algorithms can estimate the value function more accurately and find a better policy. When the state space is large or continuous, Gradient-based Temporal Difference (GTD) policy evaluation algorithms with linear function approximation are widely used. Considering that the collection of evaluation data is both time- and reward-consuming, a clear understanding of the finite sample performance of policy evaluation algorithms is very important to reinforcement learning. Under the assumption that data are generated i.i.d., previous work provided the finite sample analysis of the GTD algorithms with constant step size by converting them into convex-concave saddle point problems. However, it is well known that in RL problems the data are generated by Markov processes rather than i.i.d. In this paper, in the realistic Markov setting, we derive finite sample bounds for general convex-concave saddle point problems, and hence for the GTD algorithms. We have the following discussions based on our bounds. (1) With variants of the step size, GTD algorithms converge. (2) The convergence rate is determined by the step size, with the mixing time of the Markov process as the coefficient; the faster the Markov process mixes, the faster the convergence. (3) We explain that the experience replay trick is effective because it improves the mixing property of the Markov process. To the best of our knowledge, our analysis is the first to provide finite sample bounds for the GTD algorithms in the Markov setting. | It is well known that the standard TD algorithm widely used in reinforcement learning does not correspond to the gradient of any objective function, and consequently is unstable when combined with any type of function approximation. Despite the success of methods like deep RL, which combine vanilla TD with deep learning, TD with nonlinear function approximation is theoretically, demonstrably unstable. Much work on fixing this fundamental flaw in RL was in vain until the work on gradient TD methods by Sutton et al. These methods work, but unfortunately their original analysis was flawed, being based on a heuristic derivation of the method.
A recent breakthrough by Liu et al. (UAI 2015) showed that gradient TD methods are essentially saddle-point methods: pure gradient methods that optimize not the original gradient TD loss function, but rather the saddle-point loss that arises when converting the original loss into the dual space. This neat trick opened the door to a rigorous finite-sample analysis of TD methods, and finally surmounted obstacles standing through the last three decades of work on this problem. Liu et al. developed several variants of gradient TD, including proximal methods with improved theoretical and experimental performance.
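For readers unfamiliar with the trick, the reformulation (as I recall it from Liu et al.; notation mine) is that the MSPBE objective $\tfrac{1}{2}\|b - A\theta\|^2_{M^{-1}}$, with $A = \mathbb{E}[\rho_t \phi_t (\phi_t - \gamma \phi_{t+1})^\top]$, $b = \mathbb{E}[\rho_t r_t \phi_t]$, and $M = \mathbb{E}[\phi_t \phi_t^\top]$, can be rewritten via its convex conjugate as the saddle-point problem

$$\min_\theta \max_y \; \langle y,\, b - A\theta \rangle - \tfrac{1}{2}\|y\|_M^2,$$

which is then amenable to primal-dual stochastic gradient updates with standard convergence tools.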
This paper is an incremental improvement over Liu et al., in that it undertakes a similar analysis of saddle-point TD methods under the more realistic assumption that the samples are derived from a Markov chain (unlike Liu et al., which assumed IID samples). This paper provides an important next step in our understanding of how to build a more rigorous foundation for the currently shaky world of gradient TD methods. The analysis is done thoroughly and is supplemented with some simple experiments.
The work does not address the nonlinear function approximation case at all. All the current work on Atari video games and deep RL is based on nonlinear function approximations. Is there any hope that gradient TD methods will be shown someday to be robust (at least locally) with nonlinear function approximation? |
nips_2017_2069 | Reducing Reparameterization Gradient Variance
Optimization with noisy gradients has become ubiquitous in statistics and machine learning. Reparameterization gradients, or gradient estimates computed via the "reparameterization trick," represent a class of noisy gradients often used in Monte Carlo variational inference (MCVI). However, when these gradient estimators are too noisy, the optimization procedure can be slow or fail to converge. One way to reduce noise is to generate more samples for the gradient estimate, but this can be computationally expensive. Instead, we view the noisy gradient as a random variable, and form an inexpensive approximation of the generating procedure for the gradient sample. This approximation has high correlation with the noisy gradient by construction, making it a useful control variate for variance reduction. We demonstrate our approach on a non-conjugate hierarchical model and a Bayesian neural net where our method attained orders of magnitude (20-2,000×) reduction in gradient variance resulting in faster and more stable optimization. | Summary
This paper proposes a control variate (CV) for the reparametrization gradient by exploiting a linearization of the data model score. For Gaussian random variables, such a linearization has a distribution with a known mean, allowing its use as a CV. Experiments show using the CV results in faster (according to wall clock time) ELBO optimization for a GLM and Bayesian NN. Furthermore, the paper reports 100 fold (+) variance decreases during optimization of the GLM.
Evaluation
Method: The CV proposed is clever; the observation that the linearization of the data score has a known distribution is non-obvious and interesting. This is a contribution that can easily be incorporated when using the reparametrization trick. The only deficiencies of the method are (1) it requires the model’s Hessian and (2) application is limited to Gaussian random variables. It also does not extend to amortized inference (i.e. variational autoencoders), but this limitation is less important. I think the draft could better detail why it’s limited to Gaussian variables (which is a point on which I elaborate below under ‘Presentation’). Some discussion of the hurdles blocking extension to other distributions would improve the draft and be appreciated by readers. For instance, it looks like the linearized form might have a known distribution for other members of the location-scale family (since they would all have the form f(m) + H(m)(s*epsilon))? Thoughts on this?
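To spell the suggestion out (my notation): with $z = m + s \odot \epsilon$ and $\epsilon \sim \mathcal{N}(0, I)$, the linearization

$$\tilde{f}(z) = f(m) + H(m)\,(s \odot \epsilon)$$

has known mean $\mathbb{E}_\epsilon[\tilde{f}(z)] = f(m)$ simply because $\epsilon$ is zero-mean, which is exactly what makes it usable as a control variate. The same argument would appear to go through for any location-scale family with known $\mathbb{E}[\epsilon]$, so it is not obvious to me where Gaussianity is truly essential.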
Experiments: Experiments are limited to small models (by modern standards)---GLM and one-layer / 50 hidden unit Bayesian NN---but are adequate to demonstrate the correctness and utility of the method. I would have liked to see what held-out performance gains can be had by using the method---you must have looked at held-out LL?---but I realize this is not crucial for assessing the method as a CV. Still I think the paper is strong enough that even a negative or neutral result would not hurt the paper’s quality.
Presentation: The paper’s presentation of the method’s derivation is clear and detailed. I had no trouble seeing the logical flow from equation to equation. However, I think the draft’s description is somewhat ‘shallow’---by that I mean, while the algebra is clear, more discussion and intuition behind the equations could be included. Specifically, after the method is derived, I think a section discussing why it's limited to Gaussian rv’s and why it can’t be extended to amortized inference would be beneficial to readers. It took me two-to-three reads through the draft before I fully understood these limitations. To make room for this discussion, I would cut from the introduction and VI background since, if a person is interested in implementing your CV, they likely already understand VI, the ELBO, etc.
Minutiae: Table 1 is misreferenced as ‘Table 3’ in the text.
Conclusion: This paper was a pleasure to read, and I recommend its acceptance. The methodology is novel and interesting, and the experiments adequately support the theory. The draft could be improved with more exposition (on limitations, for instance), but this is not a crucial flaw. |
nips_2017_17 | Wider and Deeper, Cheaper and Faster: Tensorized LSTMs for Sequence Learning
Long Short-Term Memory (LSTM) is a popular approach to boosting the ability of Recurrent Neural Networks to store longer term temporal information. The capacity of an LSTM network can be increased by widening and adding layers. However, usually the former introduces additional parameters, while the latter increases the runtime. As an alternative we propose the Tensorized LSTM in which the hidden states are represented by tensors and updated via a cross-layer convolution. By increasing the tensor size, the network can be widened efficiently without additional parameters since the parameters are shared across different locations in the tensor; by delaying the output, the network can be deepened implicitly with little additional runtime since deep computations for each timestep are merged into temporal computations of the sequence. Experiments conducted on five challenging sequence learning tasks show the potential of the proposed model. | This paper proposes Tensorized LSTMs for efficient sequence learning. It represents hidden layers as tensors, and employs cross-layer memory cell convolution for efficiency and effectiveness. The model is clearly formulated. Experimental results show the utility of the proposed method.
Although the paper is well written, I still have some questions/confusions, as follows. I will reconsider my final decision if the authors address these points in the rebuttal.
1. My biggest confusion comes from Sec 2.1, where the authors describe how to widen the network with convolution (lines 65-73). As mentioned in the text, "P is akin to the number of stacked hidden layers", and the model "locally-connects" along the P direction to share parameters. I think this is a strategy to deepen the network instead of widening it, since increasing P (the number of hidden layers) won't incur more parameters in the convolution. Similarly, as mentioned in lines 103-104, tRNN can be "widened without additional parameters by increasing the tensor size P". This does not make sense, as increasing P is conceptually equivalent to increasing the number of hidden layers in sRNN; it deepens the network, not widens it.
2. The authors claim to deepen the network with delayed outputs (Sec 2.2). They use the parameter L to set the network depth. However, as shown in Eq. 9, L is determined by P and K, meaning that we cannot really set the network depth as a free parameter. I guess that in practice we would always pre-set P and K before experiments and then derive L from Eq. 9. This makes lines 6-10 seem over-claimed, as they read like "we could freely set up the width and depth of the network".
3. The authors claim that the proposed memory cell convolution is able to prevent gradient vanishing/exploding (line 36). This is not verified theoretically or empirically; the words "gradient vanishing/exploding" are not even mentioned in the later text.
4. In the experiments, the authors compared tLSTM variants along the following dimensions: tensor shape (2D or 3D), normalization (none, LN, CN), memory cell convolution (yes or no), and feedback connections (yes or no). There are 2x3x2x2=24 combinations in total. Why pick only the six combinations in lines 166-171? I understand it becomes messy when comparing too many methods, but there are some interesting variants like 2D tLSTM+CN. Also, it might help to split the experiments into groups, e.g., one for the normalization strategy, one for memory cell convolution, one for feedback connections, etc.
nips_2017_2653 | Regularizing Deep Neural Networks by Noise: Its Interpretation and Optimization
Overfitting is one of the most critical challenges in deep neural networks, and there are various types of regularization methods to improve generalization performance. Injecting noise into hidden units during training, e.g., dropout, is known as a successful regularizer, but it is still not clear enough why such training techniques work well in practice and how we can maximize their benefit in the presence of two conflicting objectives: optimizing to the true data distribution and preventing overfitting by regularization. This paper addresses the above issues by 1) interpreting the conventional training methods with regularization by noise injection as optimizing a lower bound of the true objective and 2) proposing a technique to achieve a tighter lower bound using multiple noise samples per training example in a stochastic gradient descent iteration. We demonstrate the effectiveness of our idea in several computer vision applications. | This paper introduces a method for regularizing deep neural networks by noise. The core of the approach is to draw connections between applying a random perturbation to layer activations and the optimization of a lower-bound objective function. Experiments for four visual tasks are carried out, and show a slight improvement of the proposed method compared to dropout.
On the positive side:
- The problem of regularization for training deep neural network is a crucial issue, which has a huge potential practical and theoretical impact.
- The connection between regularization by noise and the derivation of a lower bound of the training objective is appealing to provide a formal understanding of the impact of noise regularization techniques, e.g. dropout.
On the negative side:
- The aforementioned connection between regularization by noise and lower bounding the training objective seems to be a straightforward adaptation of [9] to the case of deep neural networks. For the most important result, given in Eq. (6), i.e., the fact that using several noise sampling operations gives a tighter bound on the objective function than using a single random sample (as done in dropout), the authors refer to the derivation in [9]. Therefore, the improvement and positioning with respect to ref [9] are not clear to me.
- Beyond this connection, the implementation of the model given in Section 4 for the case of dropout is also straightforward: several (random) forward passes are performed and a weighted average of the resulting gradients is computed using Eq. (8) before applying the backward pass (a minimal sketch of my reading appears after these comments).
- Experiments are carried out on various vision tasks, whose choice could be better justified. It is not clear why these tasks are highly regularization-demanding: the noise seems to be applied in only a few (fully connected) layers. In addition, the improvement brought by the proposed method is mixed across tasks (e.g., a small improvement on UCF-101 and below the published baseline for image captioning).
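For concreteness, here is a minimal sketch of my reading of the Section-4 procedure (PyTorch-style; uniform weights are shown in place of the Eq. (8) weights, which I omit, so this is only an approximation of the actual method):

    import torch

    def multi_sample_noise_step(model, loss_fn, x, y, num_samples=4):
        model.train()  # keep stochastic noise (e.g., dropout) active
        # one loss per independent noise realization
        losses = [loss_fn(model(x), y) for _ in range(num_samples)]
        loss = torch.stack(losses).mean()  # the paper uses Eq. (8) weights here
        loss.backward()  # single backward pass through the averaged objective
        return loss

The sketch also makes the extra cost explicit: num_samples forward passes per update for a single backward pass.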
In conclusion, although the approach is interesting, its level of novelty seems low, and the experiments should be more targeted to better illustrate the benefit of the multiple sampling operations. In their rebuttal, I would like the authors to address my concern about positioning with respect to [9].
nips_2017_517 | Deep Mean-Shift Priors for Image Restoration
In this paper we introduce a natural image prior that directly represents a Gaussian-smoothed version of the natural image distribution. We include our prior in a formulation of image restoration as a Bayes estimator that also allows us to solve noise-blind image restoration problems. We show that the gradient of our prior corresponds to the mean-shift vector on the natural image distribution. In addition, we learn the mean-shift vector field using denoising autoencoders, and use it in a gradient descent approach to perform Bayes risk minimization. We demonstrate competitive results for noise-blind deblurring, super-resolution, and demosaicing. | Summary
-------
The paper proposes a way to do Bayesian estimation for image restoration problems by using gradient descent to minimize a bound of the Bayes risk. In particular, the paper uses Gaussian smoothing in the utility function and the image prior, which allows to establish a connection between the gradient of the log-prior and denoising auto-encoders (DAEs), which are used to learn the prior from training data. Several image restoration experiments demonstrate results on par with the state-of-the-art.
Main comments
-------------
The positive aspects of the paper are its good results (on par with state-of-the-art methods) for several image restoration tasks and the ability of the same learned prior to be used for several tasks (mostly) without re-training. Another strong aspect is the ability to jointly do noise- and kernel-blind deconvolution, which other methods typically cannot do.
I found it somewhat difficult to judge the novelty of the paper, since it seems to be heavily influenced by / based on the papers by Jin et al. [14] (noise-adaptive approach, general derivation approach) and Bigdeli and Zwicker [4] (using DAEs as priors, mean-shift connection). Please attribute properly where things have been taken from previous work and make the differences from the proposed approach clearer. In particular:
- The mean-shift connection is not really discussed; it is just written that it is "well-known". Please explain this in more detail, or remove it from the paper (especially from the title). (For concreteness, the identity I believe is being invoked is sketched after this list.)
- Using a Gaussian function as/in the utility function seems to be taken from [14] (?), but this is not acknowledged. The entire (?) noise-adaptive approach seems to be adopted from [14]; please acknowledge/distinguish more thoroughly (e.g., in lines 168-171).
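(Re: the first bullet) The identity I believe the authors rely on is the standard denoising autoencoder result of Alain & Bengio (2014): an optimal DAE $r_\sigma$ trained with Gaussian corruption of width $\sigma$ satisfies

$$r_\sigma(x) - x \approx \sigma^2 \nabla_x \log p_\sigma(x), \qquad p_\sigma = p * g_\sigma,$$

and for a Gaussian kernel the mean-shift vector $m(x) - x$ equals $\sigma^2 \nabla_x \log p_\sigma(x)$ exactly, so the DAE residual is (approximately) a mean-shift step on the smoothed natural-image density. One or two sentences along these lines in the paper would already help.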
Although the experiments demonstrate that the proposed method can lead to good results, many approximations and bounds are used to make it work. Can you explain why the method works so well despite this?
Related to this, if I understand correctly, the whole noise sampling approach in section 3.3 is only used to make up for the fact that the learned DAE is overfitted to noisy images? I don't understand how this fixes the problem.
Regarding generalization, why is it necessary to train a different DAE for the demosaicing experiments? Shouldn't the image prior be generic?
Related to this, the noise- and kernel-blind experiments (section 5.2) use different gradient descent parameters. How sensitive is the method towards these hyper-parameters?
The paper doesn't mention at all whether the method has reasonable runtimes for typical image sizes. Please provide some ballpark test runtimes for a single gradient descent step (for the various restoration tasks).
Miscellaneous
-------------
- I found the abstract hard to understand without having read the rest of the paper.
- It does seem somewhat artificial to include p'(x)/p(\tilde{x}) in Eq. 5 instead of just removing this term from the utility function and simply replacing p(x) with the smoothed prior p'(x).
- Typo in line 182: "form" -> "from"
- Please define the Gaussian function g_\sigma in Eq. 6 and later.
- Table 2:
- EPLL/TV-L2 + NA not explained; how can they use the noise-adaptive approach?
- Misleading results: e.g., RTF-6 [28] is only trained for \sigma=2.55, so it doesn't make sense to evaluate it at other noise levels. Please just leave the table entry blank.
nips_2017_2434 | Overcoming Catastrophic Forgetting by Incremental Moment Matching
Catastrophic forgetting is a problem whereby a neural network loses the information of the first task after being trained on a second task. Here, we propose a method, incremental moment matching (IMM), to resolve this problem. IMM incrementally matches the moments of the posterior distributions of the neural networks trained on the first and the second task, respectively. To make the search space of the posterior parameters smooth, the IMM procedure is complemented by various transfer learning techniques including weight transfer, the L2-norm of the old and the new parameters, and a variant of dropout with the old parameters. We analyze our approach on a variety of datasets including the MNIST, CIFAR-10, Caltech-UCSD Birds, and Lifelog datasets. The experimental results show that IMM achieves state-of-the-art performance by balancing the information between an old and a new network. | [UPDATE]
After digging through the supplementary material I have managed to find at least a hint towards possible limitations of the method being properly acknowledged. Not including an objective evaluation of limitations is a flaw of this otherwise well written paper, especially when the method relies crucially on weight transfer (as the authors point out outside the main paper, i.e. supplementary text and rebuttal). However, weight transfer is known to be an inadequate initialization technique between different problem classes and the authors don't clearly address this issue, nor do they properly qualify the applicability of the method.
On balance, this paper does give sufficient evidence that weight transfer and some form of parameter averaging are promising directions for future investigation, at least in a subset of interesting cases. Perhaps with publication, the chances of expanding this subset increase.
[REVIEW TEXT]
The paper evaluates the idea of incremental moment matching (IMM) as a solution for catastrophic forgetting in neural networks. The method is thoroughly benchmarked, in several incarnations, against state-of-the-art baselines on standard ‘toy’ problems defined on top of MNIST, as well as the more challenging ImageNet2CUB and Lifelog datasets. A new parameterization, dubbed ‘drop-transfer’, is proposed as an alternative to standard weight initialization of model parameters on new tasks.
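For context, the two moment-matching rules, as I understand them, are

$$\theta^{*}_{\text{mean}} = \sum_k \alpha_k \theta_k, \qquad \theta^{*}_{\text{mode}} = \Big(\sum_k \alpha_k \Sigma_k^{-1}\Big)^{-1} \sum_k \alpha_k \Sigma_k^{-1} \theta_k,$$

where $\Sigma_k$ is a (diagonal) covariance estimate for task $k$, i.e., a plain average of the per-task parameters versus a precision-weighted one. Both presuppose that the per-task solutions lie in a region where averaging is meaningful, which connects to the weight-transfer concern raised in the update above.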
Pros:
- Comprehensive experiments are provided which show superior performance on standard MNIST classification variants, in comparison to previous methods.
- Encouraging results are shown on natural image datasets, although here many earlier methods were not applied, so there were few literature results to compare against.
Cons:
- Although differences exist between the pairs or triplets of problems considered to illustrate ‘catastrophic forgetting’, technically speaking, these problems are extremely related, sharing either the input support or the class distribution to some degree; this is in contrast to current challenges in the field, which consider sequences of arbitrarily different reinforcement learning problems, such as different Atari video games (Kirkpatrick et al. 2016, Rusu et al., 2016). Therefore, I find it difficult to tell how proficient the method is at reducing catastrophic forgetting in the general case. I recommend considering more standard challenges from the literature.
- Another important consideration for such works is evaluating how much learning is possible with the given model when following the proposed solution. Since catastrophic forgetting can be alleviated by no further learning throughout most of the model, it follows that proficient learning of subsequent tasks, preferably superior to starting from scratch, is a requirement for any ‘catastrophic forgetting’ solution. As far as I can tell the proposed method does not offer a plausible answer if tasks are too different, or simply unrelated; yet these are the cases most likely to induce catastrophic forgetting. In fact, it is well understood that similar problems which share the input domain can easily alleviate catastrophic forgetting by distilling the source task network predictions in a joint model using target dataset inputs (Li et al., 2016, Learning without Forgetting).
In summary, I believe variations of the proposed method may indeed help solve catastrophic forgetting in a more general setting, but I would like to kindly suggest evaluating it against current standing challenges in the literature. |
nips_2017_1334 | Associative Embedding: End-to-End Learning for Joint Detection and Grouping
We introduce associative embedding, a novel method for supervising convolutional neural networks for the task of detection and grouping. A number of computer vision problems can be framed in this manner, including multi-person pose estimation, instance segmentation, and multi-object tracking. Usually the grouping of detections is achieved with multi-stage pipelines; instead, we propose an approach that teaches a network to simultaneously output detections and group assignments. This technique can be easily integrated into any state-of-the-art network architecture that produces pixel-wise predictions. We show how to apply this method to multi-person pose estimation and report state-of-the-art performance on the MPII and MS-COCO datasets. | The paper proposes a method for the task of multi-person pose estimation. It is based on the stacked hourglass architecture for single-person pose estimation, which is combined with an associative embedding. Body joint heatmaps are associated with tag heatmaps indicating which individual each joint belongs to. A new grouping loss is defined to assess how well predicted tags agree with ground-truth joint associations. The approach is evaluated on the MPII Multi-Person and MS-COCO datasets, achieving state-of-the-art performance on both of them.
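As I recall, the grouping loss takes a standard pull/push form on the predicted tags: with $h_{nk}$ the tag value at the ground-truth location of joint $k$ of person $n$, and $\bar{h}_n$ the mean tag of person $n$,

$$L_{\text{group}} = \frac{1}{N}\sum_{n}\frac{1}{K}\sum_{k}\big(\bar{h}_n - h_{nk}\big)^2 + \frac{1}{N^2}\sum_{n}\sum_{n' \neq n}\exp\Big(-\tfrac{1}{2\sigma^2}\big(\bar{h}_n - \bar{h}_{n'}\big)^2\Big),$$

pulling each person's tags toward their reference embedding and pushing different persons' references apart (my paraphrase; the exact weighting in the paper may differ).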
Strengths:
- State-of-the-art performance is achieved for the multi-person pose estimation problem on MPII Multi-person and MS-COCO.
Weakness:
- The paper is rather incremental with respect to [31]. The authors adapt the existing architecture to the multi-person case, producing identity/tag heatmaps alongside the joint heatmaps.
- Some explanations are unclear and rather vague, especially the solution for the multi-scale case (end of Section 3) and the pose refinement used in Section 4.4 / Table 4. This is important, as most of the improvement with respect to state-of-the-art methods seems to come from these two elements of the pipeline, as indicated in Table 4.
Comments:
The state-of-the-art performance in multi-person pose estimation is a strong point. However, I find that there is too little novelty in the paper with respect to the stacked hourglass paper, and the explanations are not always clear. What seem to be the key elements for outperforming other competing methods, namely the scale-invariance aspect and the pose refinement stage, are not well explained.
nips_2017_3501 | Houdini: Fooling Deep Structured Visual and Speech Recognition Models with Adversarial Examples
Generating adversarial examples is a critical step for evaluating and improving the robustness of learning machines. So far, most existing methods only work for classification and are not designed to alter the true performance measure of the problem at hand. We introduce a novel flexible approach named Houdini for generating adversarial examples specifically tailored for the final performance measure of the task considered, be it combinatorial or non-decomposable. We successfully apply Houdini to a range of applications such as speech recognition, pose estimation and semantic segmentation. In all cases, the attacks based on Houdini achieve a higher success rate than those based on the traditional surrogates used to train the models, while using a less perceptible adversarial perturbation. | This paper presents a way to create adversarial examples based on a task loss (e.g. word error rate) rather than the loss being optimized for training. The approach is tested on a few different domains (pose estimation, semantic segmentation, speech recognition).
Overall the approach is nice and the results are impressive. My main issues with the paper (prompting my "marginal accept" decision) are:
- The math and notation is confusing and contradictory in places, e.g. involving many re-definitions of terms. It needs to be cleaned up.
- The paper does not sufficiently discuss whether optimizing the task loss with Houdini is advantageous compared to optimizing the surrogate loss used for training. It seems that in some cases it is clearly better (e.g., in speech recognition), but the scores given for the image tasks do not really show a large difference. There also aren't any qualitative examples which demonstrate this.
Overall I would be happy to see this paper accepted if it resolves these issues.
Specific comments:
- There are various typos/grammatical errors (incl. a broken reference at line 172); the paper needs a proofread before publication.
- "we make the network hallucinate a minion" What is a minion?
- "While this algorithm performs well in practice, it is extremely sensitive to its hyper-parameter" This is a strong statement to make - can you qualify this with a citation to work that shows this is true?
- "\eps \in R^d is a d-dimensional..." I don't see you defining that \theta \in R^d, but I assume that's what you mean; you should say this somewhere.
- Some of your terminology involves redefinitions and ambiguity which is confusing to me. For example, you define g_\theta(x) as a network (which I presume outputs a prediction of y given x), but then you later define g_\theta(x, y) as the "score output by the network for an example". What do you mean by the "score output by the network" here? Typically the network outputs a prediction which, when compared to the target, gives a score; it is not a function of both the input and the target output (you have defined y as the target output previously). You seem to be defining the network to include the target output and some "score". And what score do you mean exactly? The task loss? Surrogate loss? Some other score? It seems like maybe you mean that for each possible output y (not a target output) given the input x, the network outputs a score, but this involves a redefinition of y from "target output" to "possible output". You also define y as the target, but y_\theta(x) as a function which produces the predicted target.
- "To compute the RHS of the above quantity..." Don't you mean the LHS? The RHS does not involve the derivative of l_H.
- The arguments of l_H have changed, from \theta, x, y to \hat{y}, y in equation (9). Why?
- "Convergence dynamics illustrated in Fig. 4 " - Figure 4 does not show convergence dynamics, it shows "most challenging examples for the pose re-targeting class..."
- Suggestion: In image examples you show zoomed-in regions of the image to emphasize the perturbations applied. Can you also show the same zoomed-in region from the original image? This will make it easier to compare.
- "Notice that Houdini causes a bigger decrease in terms of both CER and WER than CTC for all tested distortions values." I think you mean bigger increase.
- In Table 5 it's unclear if "manual transcription" is of the original speech or of the adversarially-modified speech.
- Listening to your attached audio files, it sounds like you are not using a great spectrogram inversion routine (there are frame artefacts; it sounds like buzzing). Maybe this is a phase estimation issue? Maybe use Griffin-Lim?
- Very high-level: I don't see how you are "democratizing adversarial examples". Democratize means "make (something) accessible to everyone". How is Houdini relevant to making adversarial examples accessible to everyone? Houdini seems more like a way to optimize task losses for adversarial example generation, not a way of making examples available to people.
nips_2017_2307 | Federated Multi-Task Learning
Federated learning poses new statistical and systems challenges in training machine learning models over distributed networks of devices. In this work, we show that multi-task learning is naturally suited to handle the statistical challenges of this setting, and propose a novel systems-aware optimization method, MOCHA, that is robust to practical systems issues. Our method and theory for the first time consider issues of high communication cost, stragglers, and fault tolerance for distributed multi-task learning. The resulting method achieves significant speedups compared to alternatives in the federated setting, as we demonstrate through simulations on real-world federated datasets. | The paper generalizes an existing framework for distributed multitask learning (COCOA) and extends it to handle practical systems challenges such as communications cost, stragglers, etc. The proposed technique (MOCHA) makes the framework robust and faster.
Pros:
- Rigorous theoretical analysis
- Sound experiment methodology
Cons:
- Not very novel: it reads as just a mild variation of [47], including the analysis.
Major Comments:
---------------
1. Equation 5 depends on \Delta \alpha_t* (the minimizer of the sub-problem). My understanding is that this minimizer is not known and cannot be computed accurately at every node, and therefore we need the approximation. If so, then how do the nodes set \theta^h_t? Appendix E.3 states that this parameter is directly tuned via H_i. However, the quality should be determined by the environment, and is not supposed to be 'tuned'. If tuned, then it might act as a 'learning rate' -- instead of taking the full step in the direction of the gradient, the algorithm takes a partial step.
2. The main difference between this paper and [47] seems to be that, instead of the single \theta shared by all nodes in [47], this paper defines a separate \theta^h_t for each node t.
Minor comments / typos:
-----------------------
39: ...that MTL is -as- a natural ....
108: ...of this (12) in our... -- should be (2) |
nips_2017_224 | k-Support and Ordered Weighted Sparsity for Overlapping Groups: Hardness and Algorithms
The k-support and OWL norms generalize the $\ell_1$ norm, providing better prediction accuracy and better handling of correlated variables. We study the norms obtained from extending the k-support norm and OWL norms to the setting in which there are overlapping groups. The resulting norms are in general NP-hard to compute, but they are tractable for certain collections of groups. To demonstrate this fact, we develop a dynamic program for the problem of projecting onto the set of vectors supported by a fixed number of groups. Our dynamic program utilizes tree decompositions and its complexity scales with the treewidth. This program can be converted to an extended formulation which, for the associated group structure, models the k-group support norms and an overlapping group variant of the ordered weighted $\ell_1$ norm. Numerical results demonstrate the efficacy of the new penalties. | Summary:
This paper designs new norms for group sparse estimation. The authors extend the k-support norm and the ordered weighted norm to the group case (with overlaps). The resulting (latent) norms are unfortunately NP-hard to compute, though. The main contribution is an algorithm based on tree decomposition and dynamic programming for computing the best approximation (under the Euclidean norm) under group cardinality constraints. This algorithm improves the previous work by a factor of m (# of groups). A second contribution is an extended formulation for the proposed latent group norms. Such formulation can be plugged into standard convex programming toolboxes, as long as the underlying tree width is small. Some synthetic experiments are provided to justify the proposed norms (and their extended formulations).
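For reference, the (non-overlapping) k-support norm being generalized can be written, as I recall from Argyriou et al. (2012), as a latent group lasso over all $k$-subsets:

$$\|w\|_k^{\mathrm{sp}} = \min\Big\{\sum_{I \in \mathcal{G}_k} \|v_I\|_2 \;:\; \operatorname{supp}(v_I) \subseteq I,\; \sum_{I \in \mathcal{G}_k} v_I = w\Big\}, \qquad \mathcal{G}_k = \{I : |I| = k\},$$

whose unit ball is $\operatorname{conv}\{w : \|w\|_0 \le k,\ \|w\|_2 \le 1\}$. My understanding is that the paper's k-group support norms replace $\mathcal{G}_k$ by unions of $k$ groups from a given (overlapping) collection, which is where the hardness enters.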
Overall, this paper appears hard to follow, largely due to its technical nature. The organization is also a bit off: the utility of the dynamic programming algorithm in Section 3 is largely unclear, and the experiment section does not verify this part at all. While this is mostly a computational paper, strangely, the experiment section only compares the statistical performance of different norms, without conducting any computational comparison. The technical results seem solid and improve on previous work, but the paper would be more convincing if more could be said about what kinds of problems can benefit from the authors' results. With these much more sophisticated norms, one easily becomes overwhelmed by the fine mathematical details but also grows more suspicious about their practical relevance (which, unfortunately, is not well addressed by the current experiments).
Other comments:
Lines 93-95: the parameter nu disappears in the convex envelope?
Line 97: the weights need to be nonnegative.
Line 343: this proof of Theorem 2.1 works only for 1 < p < infty? For instance, if p = 1, then Omega = |.|_1 always. Please fully justify the claim "Omega is the ell_p norm if and only if ...", as the experiments did use p=infty.
Line 346: please justify "if lambda < 1, the solution is (1-lambda)y if and only if ..."
Line 141: I do not see why the claim N <= |V| is correct. Do you mean max_j |X_j| <= |V|?
Experiments: the paper should also include a set of experiments that favors the latent group norm, to see whether the proposed norms perform (much?) worse under an unfavorable condition. Also, please compare with the elastic net regularizer (in the correlated-design case).
nips_2017_2769 | Local Aggregative Games
Aggregative games provide a rich abstraction to model strategic multi-agent interactions. We introduce local aggregative games, where the payoff of each player is a function of its own action and the aggregate behavior of its neighbors in a connected digraph. We show the existence of a pure strategy ε-Nash equilibrium in such games when the payoff functions are convex or submodular. We prove an information-theoretic lower bound, in a value oracle model, on approximating the structure of the digraph with non-negative monotone submodular cost functions on the edge set cardinality. We also define a new notion of structural stability, and introduce γ-aggregative games that generalize local aggregative games and admit an ε-Nash equilibrium that is stable with respect to small changes in some specified graph property. Moreover, we provide algorithms for our models that can meaningfully estimate the game structure and the parameters of the aggregator function from real voting data. | Summary:
The authors consider games among n players over a digraph G=(V,E), where each player corresponds to a node in V and the payoff function of each player depends on its own choice and those of its neighbors. A further assumption is that the payoff function is Lipschitz and convex (or submodular). For games with sufficiently smooth payoff functions, the authors show that an approximate PSNE exists. Also, the authors prove that there exists a digraph for which any randomized algorithm approximating a PSNE needs exponentially many value queries for a submodular payoff function.
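To fix notation (my paraphrase of the setup): each player $i$ receives payoff

$$u_i(x) = f_i\Big(x_i,\ \sum_{j \in N_G(i)} x_j\Big),$$

with $f_i$ Lipschitz and convex (or submodular) in the aggregate, and a profile $x^*$ is an ε-PSNE if $u_i(x_i', x^*_{-i}) \le u_i(x^*) + \epsilon$ for every player $i$ and every unilateral deviation $x_i'$.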
Then the authors consider a variant of the game called the gamma-local aggregative game, in which the payoff function depends not only on the neighbors but on all other players, under the condition that other players' choices affect the payoff with magnitude gamma^(distance from the player). For such games, the authors show a sufficient condition for an approximate PSNE to exist. They further consider a learning problem for these games where, given approximate PSNEs as examples, the learner’s goal is to predict the underlying digraph. An optimization problem for the learning problem is proposed. Finally, the authors show experimental results on real data sets.
Comments:
The proposed game is a generalization of the aggregative game to a graph setting. The technical contribution of the paper is to derive sufficient/necessary conditions for the existence of approximate PSNEs for these games. On the other hand, the theoretical results for learning graphs from approximate PSNEs are yet to be investigated, in the sense that no sample complexity bound is shown. However, the proposed formulation yields interesting results on real data sets.
I read the authors' rebuttal comments, and I would appreciate it if the authors reflect their responses to my comments in the final version.
nips_2017_1836 | Learning to Compose Domain-Specific Transformations for Data Augmentation
Data augmentation is a ubiquitous technique for increasing the size of labeled training sets by leveraging task-specific data transformations that preserve class labels. While it is often easy for domain experts to specify individual transformations, constructing and tuning the more sophisticated compositions typically needed to achieve state-of-the-art results is a time-consuming manual task in practice. We propose a method for automating this process by learning a generative sequence model over user-specified transformation functions using a generative adversarial approach. Our method can make use of arbitrary, non-deterministic transformation functions, is robust to misspecified user input, and is trained on unlabeled data. The learned transformation model can then be used to perform data augmentation for any end discriminative model. In our experiments, we show the efficacy of our approach on both image and text datasets, achieving improvements of 4.0 accuracy points on CIFAR-10, 1.4 F1 points on the ACE relation extraction task, and 3.4 accuracy points when using domain-specific transformation operations on a medical imaging dataset as compared to standard heuristic augmentation approaches. | This paper addresses an interesting and new problem to augment training data in a learnable and principled manner. Modern machine learning systems are known for their ‘hunger for data’ and until now state-of-the-art approaches have relied mainly on heuristics to augment labeled training data. This paper tries to reduce the tedious task of finding a good combination of data augmentation strategies with best parameters by learning a sequence of best data augmentation strategies in a generative adversarial framework while working with unsupervised data.
Plus points:
1. The motivation behind the problem is to reduce human labor without compromising the final discriminative classification performance. The problem formulation is pretty clear from the text.
2. The solution strategy is intuitive and based on realistic assumptions. The weak assumption of class invariance under data transformations enables the use of unsupervised training data and a generative adversarial network to learn the optimum composition of transformation functions from a set of predefined data transformations. To address the challenge of transformation functions being possibly non-differentiable and non-deterministic, the authors propose a reinforcement learning based solution to the problem.
3. The experiments are well designed, well executed, and the results are well analyzed. The two variants (mean-field and LSTM) of the proposed method almost always beat the basic and heuristics-based baselines. The proposed method shows its merit on 3 different domains with very different application scenarios. The method experimentally proves its adaptability and the usefulness of domain-specific transformation functions in the mammography tumor classification experiment. The authors did a good job of showing the robustness of the method in the TF misspecification experiment (L279-286).
Minus points:
1. I have confusion regarding the role of the discriminator which I’m writing in detail below. I can understand that the generator's task is to produce less number of out-of-distribution null class (eqn. between L113 and 114). With the weak assumption (eq.(2)), this means that the generator's task is to produce mostly an image of the same class as that of the input. Then if I'm understanding correctly, the discriminator's task is to have low value of D(.) when a transformed example is passed through the discriminator (first term of eqn between L117-L118). At the same time, the discriminator's task is to have high value of D(.) when a non-transformed example is passed through the discriminator (second term of eqn between L117-L118). Thus the 'game' being played between the generator and the discriminator is that the generator is trying to (to propose a sequence of transformations to) generate in-class (i.e., non null-class) examples while the discriminator is trying to tell whether the input coming to it is a null-class (coming from a bad generator which produces null-class example) or not (coming from the real distribution of examples or from a good generator). If my understanding (above) is true, I'd request to put some explanatory texts along this line (the game analogy and what is the role of the discriminator i.e., when will the discriminator give a high value and when low value). And if my understanding is not in the right direction, then I'd like to see a better explanation of what the authors tried to achieve by the generative adversarial objective.
Side comment: Please provide equation numbers for all equations, even if they are not referred to anywhere in the text. It helps readers refer to them.
In light of the clear motivation of a new problem, the succinct solution strategy, and the well-designed and well-executed experiments, I’m positive about the work. However, I’d like to see the authors’ rebuttal regarding the confusion I described above.
=======
After reading the fellow reviewers' comments and the authors' response in detail, I'm more positive about this work. The authors did a good job of clarifying my question about the role of the discriminator and have said that they will add a clarification along those lines. I agree with the authors that the focus of the work was on conducting a study of obtaining an optimal sequence of transformation functions for data augmentation, and that data augmentation makes more sense when the availability of data is limited. Notwithstanding, the authors did a good job by providing the performance of their method in the fully supervised scenario, for the sake of completeness and at the request of the fellow reviewers. This also adds to the robustness of the proposed method. Thus, considering the novel problem definition, neat solution strategy, and supportive experiments, I am going for an even stronger accept. To me, it is a candidate for an oral presentation.
nips_2017_1394 | An Inner-loop Free Solution to Inverse Problems using Deep Neural Networks
We propose a new method that uses deep learning techniques to accelerate the popular alternating direction method of multipliers (ADMM) solution for inverse problems. The ADMM updates consist of a proximity operator, a least squares regression that includes a big matrix inversion, and an explicit solution for updating the dual variables. Typically, inner loops are required to solve the first two subminimization problems due to the intractability of the prior and the matrix inversion. To avoid such drawbacks or limitations, we propose an inner-loop free update rule with two pre-trained deep convolutional architectures. More specifically, we learn a conditional denoising auto-encoder which imposes an implicit data-dependent prior/regularization on ground-truth in the first sub-minimization problem. This design follows an empirical Bayesian strategy, leading to so-called amortized inference. For matrix inversion in the second sub-problem, we learn a convolutional neural network to approximate the matrix inversion, i.e., the inverse mapping is learned by feeding the input through the learned forward network. Note that training this neural network does not require ground-truth or measurements, i.e., data-independent. Extensive experiments on both synthetic data and real datasets demonstrate the efficiency and accuracy of the proposed method compared with the conventional ADMM solution using inner loops for solving inverse problems. | An interesting paper that solves linear inverse problems using a combination of two networks: one that learns a proximal operator to the signal class of interest, and the other that serves as a proxy for a large scale matrix inversion. The proximal operator is reusable whenever the signal domain is unchanged. One would need to retrain only the matrix inversion network when the underlying problem is changed. This is a significant advantage towards reusability of the training procedure.
Strengths
+ Novelty
+ A very interesting problem
Weaknesses
- An important reference is missing
- Other less important references are missing
- Bare-bones evaluation
The paper provides an approach to solving linear inverse problems with reduced training requirements. While there is some prior work in this area (notably the reference below and reference [4] of the paper), the paper has some very interesting improvements over it. In particular, the paper combines the best parts of [4] (decoupling of the signal prior from the specific inverse problem being solved) and LISTA (fast implementation). That said, the evaluation is rather slim — almost anecdotal at times — and this needs fixing. There are other concerns as well on the specific choices made for the matrix inversion that need clarification and justification.
1) One of the main parts of the paper is a network learnt to invert (I + A^T A). The paper uses the Woodbury identity to change this into a different form and learns the inverse of (I + AA^T), since this is a smaller matrix to invert. At test time, we need to apply not just this network but also the "A" and "A^T" operators.
A competitor to this is to learn a deep network that inverts (I + A^T A) directly. A key advantage of this is that we do not need to apply A and A^T at test time. It is true that this network learns to invert a larger matrix... but at test time we have increased rewards. Could we see a comparison against this, both in terms of accuracy and runtime (at training and testing)?
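For reference, the identity in play is the Woodbury special case

$$(I + A^\top A)^{-1} = I - A^\top (I + A A^\top)^{-1} A,$$

so for $A \in \mathbb{R}^{m \times n}$ with $m \ll n$, the learned network only has to emulate an $m \times m$ inversion, at the price of extra applications of $A$ and $A^\top$ at test time; that is precisely the trade-off the suggested comparison would quantify.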
2) Important reference missing. The paper is closely related to the idea of unrolling, first proposed in “LISTA”:
http://yann.lecun.com/exdb/publis/pdf/gregor-icml-10.pdf
While there are important similarities and differences between the proposed work and LISTA, it is important that the paper discusses them and places itself in the appropriate context.
3) Evaluation is rather limited to a visual comparison against very basic competitors (bicubic and Wiener filter). It would be good to have comparisons to
- Specialized DNN: this would provide the loss in performance due to avoiding the specialized network.
- Speed-ups over [4] given the similarity (not having this is ok given [4] is an arXiv paper)
- Quantitative numbers that capture actual improvements over vanilla priors like TV and wavelets and gap to specialized DNNs.
Typos
- Figure 2: Syntehtic |
nips_2017_3293 | Random Projection Filter Bank for Time Series Data
We propose Random Projection Filter Bank (RPFB) as a generic and simple approach to extract features from time series data. RPFB is a set of randomly generated stable autoregressive filters that are convolved with the input time series to generate the features. These features can be used by any conventional machine learning algorithm for solving tasks such as time series prediction, classification with time series data, etc. Different filters in RPFB extract different aspects of the time series, and together they provide a reasonably good summary of the time series. RPFB is easy to implement, fast to compute, and parallelizable. We provide an error upper bound indicating that RPFB provides a reasonable approximation to a class of dynamical systems. The empirical results in a series of synthetic and real-world problems show that RPFB is an effective method to extract features from time series. | This paper proposes a new algorithm for learning on time series data. The basic idea is to introduce random projection filter bank for extracting features by convolution with the time series data. Learning on time series data can then be performed by applying conventional machine learning algorithms to these features. Theoretical analysis is performed by studying both the approximation power and statistical guarantees of the introduced model. Experimental results are also presented to show its effectiveness in extracting features from time series.
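For concreteness, my understanding of the feature extraction is captured by the following toy sketch (my own naming and simplifications, not the authors' code): sample stable one-pole autoregressive filters at random and summarize the series by each filter's final state. This also bears on the complex-output question in comment (1) below.

    import numpy as np

    def rpfb_features(x, n_filters=16, max_radius=0.95, seed=0):
        rng = np.random.default_rng(seed)
        radii = rng.uniform(0.0, max_radius, n_filters)
        angles = rng.uniform(0.0, 2.0 * np.pi, n_filters)
        poles = radii * np.exp(1j * angles)   # stable filters: |pole| < 1
        feats = np.empty(n_filters, dtype=complex)
        for k, p in enumerate(poles):
            s = 0j
            for x_t in x:                     # AR(1) recursion: s_t = p*s_{t-1} + x_t
                s = p * s + x_t
            feats[k] = s
        # real-valued features for a downstream learner
        return np.concatenate([feats.real, feats.imag])

If this is the intended scheme, concatenating real and imaginary parts would resolve the complex-output issue, and it would help if the authors stated whether that is what they do.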
Major comments:
(1) It seems that the algorithm requires information on both $M$ and $\epsilon_0$, which depend on the unknown target function $h^*$ to be estimated. Does this requirement of a priori information restrict its practical application? Since the $Z_i'$ are complex numbers, the hypothesis function in (9) would produce complex outputs. It is not clear to me how this complex output is used for prediction.
(2) It is claimed that the discrepancy would be zero if the stochastic process is stationary. However, this seems not to be true as far as I can see. Indeed, the discrepancy in Definition 1 is a random variable even if the stochastic process is i.i.d., so it depends on the realization of $X$. If this is true, Theorem 1 is hard to interpret, since it involves a discrepancy which would not be zero even in the i.i.d. learning case.
(3) There are some inconsistencies between Algorithm 1 and the material in Section 4. Algorithm 1 is a regularization scheme, while ERM is considered in Section 4. There are $m$ time series in Algorithm 1, while $m=1$ is considered in Section 4. The notation is not consistent either: $f$ is used in Algorithm 1 and $h$ is used in Section 4.
(4) The motivation for defining the features in Section 3 is not clear to me. It would be better if the authors could explain it more clearly, so that readers without a background in complex analysis can understand it well.
Minor comments:
(1) line 28: RKSH
(2) line 65, 66: the range of $f\in\mathcal{F}$ should be the same as the first domain of $l$
(3) eq (11), the function space in the brace seems not correct
(4) the notation * in Algorithm 1 is not defined as far as I can see
(5) $\delta_2$ should be $\delta$ in Theorem 1
(6) line 254: perhaps a better notation for feature size is $n$?
(7) Appendix, second line of the equation below line 444: is a factor of $2$ missing there?
(8) Appendix, eq. (28): should the identity be an inequality? I suggest the authors pay more attention to (28), since the expectation minus its empirical average is not the same as the empirical average minus the expectation.
nips_2017_2494 | Stochastic Approximation for Canonical Correlation Analysis
We propose novel first-order stochastic approximation algorithms for canonical correlation analysis (CCA). The algorithms presented are instances of inexact matrix stochastic gradient (MSG) and inexact matrix exponentiated gradient (MEG), and achieve ε-suboptimality in the population objective in poly(1/ε) iterations. We also consider practical variants of the proposed algorithms and compare them with other methods for CCA both theoretically and empirically. | I read the rebuttal and thank the authors for the clarifications about the differences between the proposed method and previous work in [3, 4]. I agree with Reviewer 2's concerns about the limits of the evaluation section. With experiments on only one medium-scale data set, it is insufficient to show the potential computational advantages of the proposed method (scaling easily to very large datasets) compared to the alternative methods.
Summary
The paper proposes two novel first-order stochastic optimization algorithms to solve the online CCA problem. Inspired by previous work, the proposed algorithms are instances of noisy matrix stochastic / exponentiated gradient (MSG / MEG): at each iteration, for a new sample (x_t, y_t), the algorithms work by first updating the empirical estimates of the whitening transformations, which define the inexact gradient partial_t, followed by a projection step over the constraint set of the problem to get a rank-k projection matrix (M_t). The theoretical analysis and experimental results show the superiority of the algorithms compared to state-of-the-art methods for CCA.
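To make one iteration concrete, here is a simplified numpy sketch of an inexact-MSG-style update; the paper's exact projection onto the constraint set (with its rank-k/trace budget) is replaced by simple singular-value clipping, and all names are illustrative:

```python
import numpy as np

def inv_sqrt(C, ridge=1e-6):
    # Inverse matrix square root of a covariance estimate (ridge for stability).
    w, V = np.linalg.eigh(C)
    return V @ np.diag(1.0 / np.sqrt(w + ridge)) @ V.T

def msg_cca_step(M, x, y, Cxx, Cyy, t, eta):
    # Running whitening estimates define the inexact gradient.
    Cxx += (np.outer(x, x) - Cxx) / t
    Cyy += (np.outer(y, y) - Cyy) / t
    G = inv_sqrt(Cxx) @ np.outer(x, y) @ inv_sqrt(Cyy)
    M = M + eta * G
    # Stand-in projection: clip singular values to [0, 1]; the paper's
    # projection additionally enforces a rank-k / trace budget.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.clip(s, 0.0, 1.0)) @ Vt, Cxx, Cyy
```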
Qualitative Assessment
Significance - Justification:
The main contributions of the paper are (1) the derivation of a new noisy stochastic mirror descent method to solve a convex relaxation for the online version of CCA and (2) the provision of generalization bounds (or convergence guarantees) for the proposed algorithms. These contributions appear to be incremental advances, considering that similar solutions have been proposed previously for PCA [3] and PLS [4].
Detailed comments:
The paper on the whole is well written, which is very helpful given that it is mathematically involved; the algorithms are explained clearly, the theorems for the algorithms are solid and sound, and the algorithms outperform state-of-the-art methods for CCA.
Some questions/comments:
What differentiates your algorithm from the previous work in references [3, 4]? Beyond solving different problems (CCA vs. PCA and PLS), is your algorithm fundamentally different in some way (e.g., adding whitening transformations to handle the gradient updates for the CCA problem, or providing convergence analysis)?
The reviewer read references [3, 4] and noticed that the algorithms in your paper are similar to those proposed in [3, 4] in terms of the convex relaxation and the stochastic optimization algorithms. The reviewer would have expected the authors to provide a comparison/summary of the differences and similarities between the previous work [3, 4, 21] and the current paper in the related work. This would also highlight explicitly the main contributions of the paper.
There are a few typos:
Line 90: build -> built; study -> studied
Line 91: in the batch setting -> in a batch setting
Line 96: pose -> posed; consider -> considered
Line 101-102: rewrite this sentence: "none of the papers above give generalization bounds and focus is entirely on analyzing the quality of the solution to that of the empirical minimizer". I guess you intend to express the following: "all the papers mentioned above focus entirely/mainly on analyzing the quality of the solution relative to that of the empirical minimizer, but provide no generalization bounds".
remove line 150 |
nips_2017_1160 | Revenue Optimization with Approximate Bid Predictions
In the context of advertising auctions, finding good reserve prices is a notoriously challenging learning problem. This is due to the heterogeneity of ad opportunity types, and the non-convexity of the objective function. In this work, we show how to reduce reserve price optimization to the standard setting of prediction under squared loss, a well understood problem in the learning community. We further bound the gap between the expected bid and revenue in terms of the average loss of the predictor. This is the first result that formally relates the revenue gained to the quality of a standard machine learned model. | This paper studies revenue optimization from samples for one bidder. It is motivated by ad auction design where there are trillions of different items being sold. For many items, there is often little or no information available about the bidder’s value for that precise good. As a result, previous techniques in sample-based auction design do not apply because the auction designer has no samples from the bidder’s value distribution for many of the items. Meanwhile, ads (for example) are often easily parameterized by feature vectors, so the authors make the assumption that items with similar features have similar bid distributions. Under this assumption, the authors show how to set prices and bound the revenue loss.
At a high level, the algorithm uses the sample and the bid predictor h(x) to cluster the feature space X into sets C1, ..., Ck. The algorithm finds a good price ri for each cluster Ci. On a freshly sampled feature vector x, if the feature vector belongs to the cluster Ci, then the reserve price is r(x) = ri. The clustering C1, ..., Ck is defined to minimize the predicted bid variance of each group Cj. Specifically, the objective is to minimize $\sum_{t=1}^k \sqrt{\sum_{i,j: x_i, x_j \in C_t} (h(x_i) - h(x_j))^2}$. The authors show that this can be done efficiently using dynamic programming. The best price per cluster can also be found in polynomial time. For each cluster Ct, let St be the set of samples (xi, bi) such that xi falls in Ct. The algorithm sets the price rt to be the bid bi such that (xi, bi) falls in St and bi maximizes empirical revenue over all samples in St.
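As an aside, the per-cluster pricing step is easy to illustrate: the empirical-revenue-maximizing reserve over a set of bids is always attained at one of the observed bids, so a sort-and-scan suffices (a sketch with illustrative names, not the paper's code):

```python
def best_price(bids):
    # Empirical revenue of posting price r to bids b_1..b_n is r * #{i: b_i >= r};
    # the maximizer can be taken among the observed bids themselves.
    candidates = sorted(bids, reverse=True)
    best_r, best_rev = 0.0, 0.0
    for rank, r in enumerate(candidates, start=1):
        rev = r * rank  # after the descending sort, rank bids are >= r
        if rev > best_rev:
            best_r, best_rev = r, rev
    return best_r
```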
The authors bound the difference between the expected revenue achieved by their learning algorithm and an upper bound on optimal revenue. This upper bound is simply E_{(x,b)}[b]. They show that the bound depends on the average bid over the sample and the error of the bid predictor h. The bound is also inversely proportional to the number of clusters k and the number of samples. They perform experiments on synthetic data and show that their algorithm performs best on “clusterable” synthetic valuation distributions (e.g., bimodal). It also seems to perform well on real ad auction data.
Overall, I think the motivation of this work is compelling and it's an interesting new model for the sample-based mechanism design literature. I'd like to see more motivation for the assumption that the learner knows the bid prediction function h. Is it realistic that an accurate prediction function can be learned? It seems to me that the learner would have to have fairly robust knowledge of the underlying bid function for this assumption to hold. The paper could potentially be strengthened by some discussion of this assumption.
In the experimental section, it would be interesting to see what the theoretical results guarantee for the synthetic data and how that compares to the experimental results.
I’m confused about equation (12) from the proof of lemma 1. We know that r(x) >= h(x) – eta^{2/3}. Therefore, Pr[b < r(x)] >= P[b < h(x) – eta^{2/3}] = P[h(x) – b > eta^{2/3}]. But the proof seems to require that Pr[b < r(x)] <= P[b < h(x) – eta^{2/3}] = P[h(x) – b > eta^{2/3}]. Am I confusing something?
=========After feedback =========
Thanks, I see what you mean in Lemma 1. Maybe this could be made a bit clearer since in line 125, you state that r(x) := max(h(x) - eta^{2/3}, 0). |
nips_2017_2661 | A-NICE-MC: Adversarial Training for MCMC
Existing Markov Chain Monte Carlo (MCMC) methods are either based on general-purpose and domain-agnostic schemes, which can lead to slow convergence, or problem-specific proposals hand-crafted by an expert. In this paper, we propose A-NICE-MC, a novel method to automatically design efficient Markov chain kernels tailored for a specific domain. First, we propose an efficient likelihood-free adversarial training method to train a Markov chain and mimic a given data distribution. Then, we leverage flexible volume preserving flows to obtain parametric kernels for MCMC. Using a bootstrap approach, we show how to train efficient Markov chains to sample from a prescribed posterior distribution by iteratively improving the quality of both the model and the samples. Empirical results demonstrate that A-NICE-MC combines the strong guarantees of MCMC with the expressiveness of deep neural networks, and is able to significantly outperform competing methods such as Hamiltonian Monte Carlo. | This paper describes a novel adversarial training procedure to fit a generative model described by a Markov chain to sampled data. The Markov chain transitions are based on NICE, and so are reversible and volume preserving. It is therefore straightforward to use these as proposals in a Metropolis MCMC method to sample from arbitrary distributions. By repeatedly fitting the Markov chain model to samples from preliminary runs, we can hope that we'll end up with an MCMC method that mixes well on an arbitrary target distribution. Like HMC, the Markov chain is actually on a joint distribution of the parameters of interest, and some auxiliary random draws used to make a deterministic proposal.
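For readers unfamiliar with the construction, a minimal sketch of the resulting Metropolis-Hastings step, assuming for simplicity an involutive volume-preserving map; `flow`, `log_p_v`, and `v_sampler` are placeholders, and the adversarial training of the flow is not shown:

```python
import numpy as np

def nice_mh_step(x, v_sampler, log_p, log_p_v, flow, rng=np.random.default_rng()):
    # One MH step with a deterministic volume-preserving proposal on the
    # joint space (x, v); if the map is involutive (applying it twice
    # returns the input), the acceptance ratio needs no Jacobian term.
    v = v_sampler()
    x_new, v_new = flow(x, v)
    log_alpha = (log_p(x_new) + log_p_v(v_new)) - (log_p(x) + log_p_v(v))
    return x_new if np.log(rng.random()) < log_alpha else x
```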
It's a promising idea. The MCMC examples in the paper aren't particularly compelling. They're ok, but limited. However, this is the sort of algorithm that could work and be useful in some realistic situations.
Quality
The paper appears to be technically sound. It describes a valid MCMC method. It's unclear when a bootstrapping procedure (a precise procedure isn't given in the paper) will converge to a working proposal mechanism. The paper ends by alluding to this issue. Variational inference would find a feasible region of parameters, but (like all MCMC methods) the method could be trapped there.
There is a danger that parts of the distribution are ignored. The cost function doesn't ensure that the proposal mechanism will get everywhere. Future work could investigate that further. Mixing updates with a more standard MCMC mechanism may reveal blind spots.
The toy experiments indicate that the method can learn to hop between modes with similar densities in 2D, but they don't reflect the statistical models where MCMC inference is typically used. Of course, section 3 shows some interesting transitions can be learned in high dimensions. However, the interest is in the application to MCMC, as there is plenty of work on generating synthetic images using a variety of procedures.
The experiments on Bayesian logistic regression are a starting point for looking at real statistical analyses. Personally, if forced to use HMC on logistic regression, I would estimate and factor the covariance of the posterior S = LL', using a preliminary run, or other approximate inference. Then reparameterize the space z = L^{-1}x, and run HMC on z. Equivalently set the mass matrix to be S^{-1}. Learning a linear transformation is simpler and more standard than fitting neural nets -- would the added complication still be worth it?
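A minimal sketch of the linear preconditioning described here (all names illustrative):

```python
import numpy as np

def whitened_target(log_p_x, preliminary_samples):
    # Estimate and factor the posterior covariance S = L L' from a
    # preliminary run, then express the target in z = L^{-1} x coordinates;
    # the constant Jacobian can be dropped for MCMC purposes.
    S = np.cov(preliminary_samples, rowvar=False)  # samples as rows: (n, d)
    L = np.linalg.cholesky(S)
    return (lambda z: log_p_x(L @ z)), L
```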
Projects like Stan come with demonstrations. Are there more complicated distributions from real problems where A-NICE-MC would readily outperform current practice?
Clarity
The paper is fairly clearly written. The MCMC algorithm isn't fully spelled out, or made concrete (what's p(v)?). It would only be reproducible (to some extent) for those familiar with both HMC and NICE.
Originality
While the general idea of fitting models and adapting proposals has appeared periodically in the MCMC literature, this work is novel. The adversarial method proposed seems a good fit for learning proposals, rather than boldly trying to learn an explicit representation of the global distribution.
Significance
Statisticians are unlikely to adopt this work in its current form. It's a rather heavier dependency than "black-box variational inference", and I would argue that fitting neural nets is not generically black-box yet. The work is promising, but more compelling statistical examples would help.
Volume preserving proposals won't solve everything. One might naively think that the only challenge for MCMC is to come up with proposal operators that can make proposals to all the modes. However, typical samples from the posterior of a rich model can potentially vary in density a lot. There could be narrow tall modes, or short large basins of density, potentially with comparable posterior mass. Hierarchical models in particular can have explanations with quite different densities. A volume-preserving proposal can usually only change the log-density by ~1, or the Metropolis rule will reject too often.
Minor
The discussion of HMC and of rules that return to (x,v) after two updates is missing the idea of negating the velocity. In HMC, the dynamics only reverse if we deterministically set
v' <- -v'
after the first update.
Please don't use the term "Exact Sampling" in the header to section 4.1. In the MCMC literature that term usually means generating perfect independent samples from a target distribution, a much stronger requirement than creating updates that leave the target distribution invariant: http://dbwilson.com/exact/
The filesize of this paper is larger than it should be, making the paper slow to download and display. Figure 2 seems to be stored as a .jpeg, but should be a vector graphic -- or if you can't manage that, a .png that hasn't passed through being a .jpeg. Figure 3 doesn't seem to be stored as a .jpeg, and I think would be ~10x smaller if it were(?).
Several bibtex entries aren't capitalized properly. Protect capital letters with {.}. |
nips_2017_2407 | Ranking Data with Continuous Labels through Oriented Recursive Partitions
We formulate a supervised learning problem, referred to as continuous ranking, where a continuous real-valued label Y is assigned to an observable r.v. X taking its values in a feature space X and the goal is to order all possible observations x in X by means of a scoring function s : X → R so that s(X) and Y tend to increase or decrease together with highest probability. This problem generalizes bi/multi-partite ranking to a certain extent and the task of finding optimal scoring functions s(x) can be naturally cast as optimization of a dedicated functional criterion, called the IROC curve here, or as maximization of the Kendall τ related to the pair (s(X), Y). From the theoretical side, we describe the optimal elements of this problem and provide statistical guarantees for empirical Kendall τ maximization under appropriate conditions for the class of scoring function candidates. We also propose a recursive statistical learning algorithm tailored to empirical IROC curve optimization and producing a piecewise constant scoring function that is fully described by an oriented binary tree. Preliminary numerical experiments highlight the difference in nature between regression and continuous ranking and provide strong empirical evidence of the performance of empirical optimizers of the criteria proposed. | The paper studied the problem of continuous ranking, which is a generalization of the bi-partite/multi-partite ranking problem in that the ranking label Y is a continuous real-valued random variable. A mathematical formulation of the problem is given. The problem of continuous ranking is decomposed into multiple sub-problems, each corresponding to a bipartite ranking problem. Necessary conditions for the existence of optimal solutions are given. The IROC (and IAUC) measures are proposed for evaluating the performance of scoring functions. Finally, a continuous ranking algorithm called Crank is proposed. Experimental results on toy data showed the effectiveness of the proposed Crank algorithm.
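For reference, the empirical Kendall tau criterion the paper maximizes is one line with scipy (a sketch; `scores` stands for the outputs s(X_i) of a candidate scoring function):

```python
from scipy.stats import kendalltau

def empirical_kendall(scores, labels):
    # Empirical Kendall tau between s(X_i) and the continuous labels Y_i;
    # maximizing this over a class of scoring functions is one of the
    # criteria the paper studies for continuous ranking.
    tau, _ = kendalltau(scores, labels)
    return tau
```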
The problem investigated in the paper is very interesting and important for learning to rank and related areas. The paper is well written. The proposed framework, measures, and algorithms are clearly presented. The empirical evaluation of the paper is weak, as it is based on toy data and weak baselines.
Pros.
1. Generalizes the bipartite/multi-partite ranking problems to the continuous ranking problem. Rigorous formulation of the problem and strong theoretical results.
2. Extending the conventional ROC to IROC for measuring the continuous ranking functions.
3. Proposing Crank algorithm for conducting continuous ranking.
Cons.
1. The authors claim that "theoretical analysis of this algorithm and its connection with approximation of IROC* are beyond the scope of this paper and will be the subject of a future work". I am not convinced that "its connections with approximation of IROC* are beyond the scope of this paper". As I understand the paper, the goal of proposing IROC is to guide the design of new continuous ranking algorithms. Thus, it is very important to build these connections. Otherwise, the performance of the Crank algorithm cannot reflect the effectiveness of the proposed theoretical framework.
2. The experiments in the paper are really weak. The proposed Crank algorithm is tested only on toy data. The authors conclude in the last section "... finds many applications (e.g., quality control, chemistry) ...". It is necessary to test the proposed framework and solutions on real problems. The baselines are CART and Kendall, which are not designed for the ranking problem; it would be better to compare the algorithm with state-of-the-art bipartite and multi-partite ranking models. The generation of the toy examples and the parameter settings are not given, which makes it hard to reproduce the results.
3. The literature survey of the paper is not sufficient. Since the continuous ranking problem is a generalization of the bipartite and multi-partite ranking problems, it would be better if the authors could use a section to analyze the advantages and disadvantages of existing ranking models and their real applications. Currently, only a few references are listed in Section 2.2.
4. Minor issues: Line 52: "Kendall tau" should be "Kendall $\tau$"; Line 66: in $F^{-1}(u) = \inf\{s \in \mathbb{R}: F(s) \ge t\}$, the last $t$ should be $u$.
nips_2017_2334 | Faster and Non-ergodic O(1/K) Stochastic Alternating Direction Method of Multipliers
We study stochastic convex optimization subject to linear equality constraints. The traditional Stochastic Alternating Direction Method of Multipliers [1] and its Nesterov's acceleration scheme [2] can only achieve ergodic O(1/√K) convergence rates, where K is the number of iterations. By introducing Variance Reduction (VR) techniques, the convergence rates improve to ergodic O(1/K) [3,4]. In this paper, we propose a new stochastic ADMM which elaborately integrates Nesterov's extrapolation and VR techniques. With Nesterov's extrapolation, our algorithm can achieve a non-ergodic O(1/K) convergence rate which is optimal for separable linearly constrained non-smooth convex problems, while the convergence rates of VR based ADMM methods are actually tight O(1/√K) in the non-ergodic sense. To the best of our knowledge, this is the first work that achieves a truly accelerated, stochastic convergence rate for constrained convex problems. The experimental results demonstrate that our algorithm is faster than the existing state-of-the-art stochastic ADMM methods. | The paper considers the acceleration of stochastic ADMM. The paper basically combines existing techniques: Nesterov's acceleration, an increasing penalty parameter, and variance reduction. The clarity and readability of the paper need to be improved. The algorithm is designed for the purpose of the theoretical bounds rather than as a practically faster algorithm. Overall, the paper needs to be improved.
1. In the abstract and introduction, the paper does not clearly point out that the result is based on the average of the last m iterations when stating the non-ergodic convergence rate. The paper also does not point out the use of an increasing penalty parameter, which is an important trick for establishing the theoretical result.
2. Algorithm 2 looks like a warm-start algorithm obtained by increasing the penalty parameter (viz. the step size). For a warm-start algorithm, the theoretical result would be expected for the average of the last inner-loop iterates and the outer-loop iterates.
3. It would be desirable to establish the theoretical result without the increasing penalty parameter. Trading the penalty parameter (speed) for the theoretical bound is not a significant contribution.
nips_2017_2152 | Counterfactual Fairness
Machine learning can impact people with legal or ethical consequences when it is used to automate decisions in areas such as insurance, lending, hiring, and predictive policing. In many of these scenarios, previous decisions have been made that are unfairly biased against certain subpopulations, for example those of a particular race, gender, or sexual orientation. Since this past data may be biased, machine learning predictors must account for this to avoid perpetuating or creating discriminatory practices. In this paper, we develop a framework for modeling fairness using tools from causal inference. Our definition of counterfactual fairness captures the intuition that a decision is fair towards an individual if it is the same in (a) the actual world and (b) a counterfactual world where the individual belonged to a different demographic group. We demonstrate our framework on a real-world problem of fair prediction of success in law school. | This paper presents an interesting and valuable contribution to the small but growing literature on fairness in machine learning. Specifically, it provides at least three contributions: (1) a definition of counterfactual fairness; (2) an algorithm for learning a model under counterfactual fairness; and (3) experiments with that algorithm.
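To illustrate the definition, a hedged Monte Carlo sketch of probing counterfactual fairness at an individual; every function here (`predict`, `sample_u`, `gen_features`) is a placeholder for model components the paper would supply, not the authors' implementation:

```python
import numpy as np

def counterfactual_gap(predict, sample_u, gen_features, x, a, a_prime, n=10000):
    # Draw latent background variables U from their posterior given the
    # observed (x, a), push them through the structural equations under
    # both values of the protected attribute, and compare the predictions.
    gaps = []
    for _ in range(n):
        u = sample_u(x, a)                          # U | X = x, A = a
        y_factual = predict(gen_features(u, a), a)
        y_counter = predict(gen_features(u, a_prime), a_prime)
        gaps.append(y_factual - y_counter)
    return float(np.mean(gaps))  # ~0 is consistent with counterfactual fairness
```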
The value and convincingness of each of these contributions declines steadily. The value of the contributions of the current paper is sufficient for acceptance, though significant improvements could be made in the clarity of exposition of the algorithm and the extent of the experimentation with the algorithm.
Section 4.2 outlines why it is likely that very strong assumptions will need to be made to effectively estimate a model of Y under counterfactual fairness. The assumptions (and the implied analysis techniques) suggest conclusions that will not be particularly robust to violations of those assumptions. This implies a need for significant empirical evaluation of what happens when those assumptions are violated or when other likely non-optimal conditions hold. Realistic conditions that might cause particular problems include measurement error and non-homogeneity of causal effects. Both of these are very likely to be present in the situations in which fairness is of concern (for example, crime, housing, loan approval, and admissions decisions). The paper would be significantly improved by experiments examining model accuracy and fairness under these conditions (and under errors in assumptions).
The discussion of fairness and causal analysis omits at least one troubling aspect of real-world discrimination, which is the effect of continuing discrimination after fair decisions are made. For example, consider gender discrimination in business loans. If discriminatory actions of customers will reduce the probability of success for a female-owned business, then such businesses will have a higher probability of default, even if the initial loan decisions are made in an unbiased way. It would be interesting to note whether a causal approach would allow consideration of such factors and (if so) how they would be considered.
Unmeasured selection bias would seem to be a serious threat to validity here, but is not considered in the discussion. Specifically, data collection bias can affect the observed dependencies (and, thus, the estimation of potential unmeasured confounders U and the strength of causal effects). Selection bias can occur due to selective data collection, retention, or reporting, when such selection is due to two or more variables. It is relatively easy to imagine selection bias arising in cases where discrimination and other forms of bias are present. The paper would be improved by discussing the potential effects of selection bias.
A few minor issues: The paper (lines 164-166 and Figure 1) references Pearl's twin network, but the explanation of twin networks in the caption of Figure 1 provides insufficient background for readers to understand what twin networks are and how they can be interpreted. Line 60 mentions "a wealth of work" two sentences after using "wealth" to refer to personal finances, and this may be confusing to readers.
nips_2017_2153 | Prototypical Networks for Few-shot Learning
We propose Prototypical Networks for the problem of few-shot classification, where a classifier must generalize to new classes not seen in the training set, given only a small number of examples of each new class. Prototypical Networks learn a metric space in which classification can be performed by computing distances to prototype representations of each class. Compared to recent approaches for few-shot learning, they reflect a simpler inductive bias that is beneficial in this limited-data regime, and achieve excellent results. We provide an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning. We further extend Prototypical Networks to zero-shot learning and achieve state-of-the-art results on the CU-Birds dataset. | Summary: This paper addresses the recently re-popularised few-shot classification task. The idea is to represent each class as its mean/prototype within a learned embedding space, and then recognise new classes via a softmax over distances to the prototypes. The model is trained by randomly sampling classes and instances per episode. This is appealing due to its simplicity and speed compared to other influential few-shot methodologies [21,28]. Some insights are given about the connection to mixture density estimation in the case of Bregman divergence-based distances, linear models, and matching networks. The same framework extends relatively straightforwardly to zero-shot learning by making the class prototype be the result of a learned mapping from meta-data like attributes to the prototype vector. The model performs comparably to or better than contemporary few/zero-shot methods on Omniglot, miniImageNet, and CUB.
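For concreteness, a minimal numpy sketch of the classification rule described above (embeddings are assumed precomputed and names are illustrative, not the authors' code):

```python
import numpy as np

def prototypical_log_probs(support, support_labels, queries, n_classes):
    # support: (n_support, d) embedded support points; support_labels:
    # (n_support,) integer array; queries: (n_query, d). Each class is
    # represented by the mean (prototype) of its embedded support points;
    # queries get a softmax over negative squared Euclidean distances.
    protos = np.stack([support[support_labels == c].mean(axis=0)
                       for c in range(n_classes)])             # (n_classes, d)
    logits = -((queries[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    m = logits.max(axis=1, keepdims=True)                      # stable log-softmax
    return logits - m - np.log(np.exp(logits - m).sum(axis=1, keepdims=True))
```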
Strengths/Weaknesses:
+ Intuitive and appealingly elegant method that is simple and fast.
+ Authors provide several interpretations which draw connections drawn to other methods and help the reader understand well.
+ Some design choices are well explained, e.g., Euclidean distance outperforms cosine for good reason.
+ Good results
- Some other design decisions (normalisation, number of training classes per episode, etc.) are less well explained. How much of the good results is due to the proposed method per se, and how much to tuning these choices?
- Why the zero-shot part specifically works so well should be better explained.
Details:
- Recent work also reports 54-56% on CUB (Changpinyo et al., CVPR'16; Zhang & Saligrama, ECCV'16).
- This may not necessarily reduce the novelty of this work, but using the mean of feature vectors from the same class has been proposed to enhance the training of general (not few-shot specific) classification models [A Discriminative Feature Learning Approach for Deep Face Recognition, Wen et al., ECCV 2016]. The "center" in the above paper matches the "prototype" here. This connection should probably be cited.
- For the results of zero-shot learning on the CUB dataset, i.e., Table 3, page 7, the meta-data used here are attributes. This is good for a fair comparison. However, from the perspective of getting better performance, better meta-data embedding options are available; refer to Table 1 in "Learning Deep Representations of Fine-Grained Visual Descriptions, Reed et al., CVPR 2016". It would be interesting to know the performance of the proposed method when it is equipped with better meta-data embeddings.
Update: Thanks to the authors for their response to the reviews. I think this paper is acceptable for NIPS. |
nips_2017_216 | Cortical microcircuits as gated-recurrent neural networks
Cortical circuits exhibit intricate recurrent architectures that are remarkably similar across different brain areas. Such stereotyped structure suggests the existence of common computational principles. However, such principles have remained largely elusive. Inspired by gated-memory networks, namely long short-term memory networks (LSTMs), we introduce a recurrent neural network in which information is gated through inhibitory cells that are subtractive (subLSTM). We propose a natural mapping of subLSTMs onto known canonical excitatory-inhibitory cortical microcircuits. Our empirical evaluation across sequential image classification and language modelling tasks shows that subLSTM units can achieve similar performance to LSTM units. These results suggest that cortical circuits can be optimised to solve complex contextual problems and offer a novel view on their computational function. Overall our work provides a step towards unifying recurrent networks as used in machine learning with their biological counterparts. |
but additive gating instead of multiplicative gating for the input and output gates.
With this additive gating mechanism there are some striking similarities to cortical circuits.
In my opinion, this papers could be really interesting for both computational neuroscientists and machine learners.
For small network sizes the proposed model performs as well as or better than simple LSTMs on some non-trivial tasks. It is, however, somewhat disappointing that for the language modelling task multiplicative forget gates still seem to be needed for good performance in larger networks.
Further comments and questions:
1. I guess the sigma-subRNNs also have controlled forget gates (option i on line 104). If yes, it would be fair to highlight this fact more clearly in the results/discussion section. It is somewhat disappointing that this not-so-biologically-plausible feature seems to be needed to achieve good performance on the language modelling tasks.
2. Would it be possible to implement the controlled forget gate with a subtractive gate as well? Maybe something like c_t = relu(c_{t-1} - f_t) + (z_t - i_t), which could be interpreted as a strong inhibition (f_t) resetting the reverberating loop before new input (z_t - i_t) arrives. But maybe the sign of c_t is important.... (A sketch of both variants follows below.)
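A sketch of the subtractive-gated update as I understand it, with the fully subtractive alternative from comment 2 shown as a commented-out line; the stacked weight layout is an assumption for illustration, not the paper's exact parameterization:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sublstm_step(x, h, c, W, U, b):
    # One step of a subtractive-gated LSTM-style cell: input and output
    # gates act by subtraction rather than multiplication. W stacks the
    # four input weight blocks, U the four recurrent blocks (assumed shapes).
    z, i, f, o = np.split(sigmoid(W @ x + U @ h + b), 4)
    c_new = f * c + (z - i)                 # paper's variant: multiplicative forget
    # Fully subtractive alternative suggested above:
    # c_new = np.maximum(c - f, 0.0) + (z - i)
    h_new = sigmoid(c_new) - o              # subtractive output gate
    return h_new, c_new
```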
3. line 111: Why is it important to have rectified linear transfer functions? Doesn't sigma-subRNN perform better in some cases? From the perspective of biological plausibility, the transfer function should have the range [0, f_max], where f_max is the maximal firing rate of a neuron (see e.g. the transfer function of a LIF neuron with absolute refractoriness).
4. line 60 "inhibitory neurons act subtractively": I think one should acknowledge that the picture is not so simple in biological systems (e.g. Chance et al. 2002 https://doi.org/10.1016/S0896-6273(02)00820-6 or Müllner et al. 2015 https://doi.org/10.1016/j.neuron.2015.07.003).
5. In my opinion it is not necessary to explicitly state the well-known Gershgorin circle theorem in the main text.
6. Fig 3f: BGRNN? Should this be subRNN?
I read through the author’s rebuttal and my colleagues’ review, and decided to not change my assessment of the paper. |
nips_2017_1713 | Deep Lattice Networks and Partial Monotonic Functions
We propose learning deep models that are monotonic with respect to a user-specified set of inputs by alternating layers of linear embeddings, ensembles of lattices, and calibrators (piecewise linear functions), with appropriate constraints for monotonicity, and jointly training the resulting network. We implement the layers and projections with new computational graph nodes in TensorFlow and use the Adam optimizer and batched stochastic gradients. Experiments on benchmark and real-world datasets show that six-layer monotonic deep lattice networks achieve state-of-the-art performance for classification and regression with monotonicity guarantees. | This paper proposes a novel deep model that is monotonic with respect to user-specified inputs, called deep lattice networks (DLNs), which is composed of layers of linear embeddings, ensembles of lattices, and calibrators. Results about the function classes of DLNs are also provided.
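To illustrate one building block, a minimal sketch of a monotonic piecewise-linear calibrator (an illustrative stand-in for the paper's calibrator layer and its monotonicity projection; knots are assumed sorted):

```python
import numpy as np

def monotonic_calibrator(x, knots, raw_deltas):
    # Piecewise-linear calibrator that is monotonically non-decreasing by
    # construction: knot values are cumulative sums of non-negative
    # increments (a training-time projection would instead clip learned
    # increments at zero, which is what the clipping below mimics).
    values = np.cumsum(np.maximum(raw_deltas, 0.0))
    return np.interp(x, knots, values)
```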
The authors should put more effort into making the paper readable for people who are not familiar with the technical details. However, the contributions are interesting and significant. Since [3] proposed a two-layer model, which has a layer of calibrators followed by an ensemble of lattices, the advantages of the proposed deeper model could be presented more convincingly by showing better results, as the current results seem close to Crystals and it is not clear whether the produced results are stable.
The computational complexity should also be discussed clearly. Each projection step in the optimization should be explained in more detail. What is the computational complexity of the projection for the calibrator? What proportion of the training time do the projections take? Are the projections accurate, or suboptimal in the first iterations? And how does the overall training time compare to the baselines?
Does the training of the proposed model produce bad local minima for deeper models? At most how many layers can be trained with the current optimization procedure?
It seems most of the results are based on data that is not publicly available. To make the results reproducible, the authors should add more publicly available data to the experiments. Artificial data would also help to demonstrate some sanity checks, e.g., a case where the two-layer model of [3] does not work well but a deeper model works better, visualizations of the intermediate layers, etc.
The authors should also better explain why the model has interpretability and debuggability advantages.
nips_2017_356 | Learning with Average Top-k Loss
In this work, we introduce the average top-k (AT_k) loss as a new aggregate loss for supervised learning, which is the average over the k largest individual losses over a training dataset. We show that the AT_k loss is a natural generalization of the two widely used aggregate losses, namely the average loss and the maximum loss, but can combine their advantages and mitigate their drawbacks to better adapt to different data distributions. Furthermore, it remains a convex function over all individual losses, which can lead to convex optimization problems that can be solved effectively with conventional gradient-based methods. We provide an intuitive interpretation of the AT_k loss based on its equivalent effect on the continuous individual loss functions, suggesting that it can reduce the penalty on correctly classified data. We further give a learning theory analysis of MAT_k learning on the classification calibration of the AT_k loss and the error bounds of AT_k-SVM. We demonstrate the applicability of minimum average top-k learning for binary classification and regression using synthetic and real datasets. | This paper investigates a new learning setting: optimizing the average of the k largest (top-k) individual losses for supervised learning. This setting is different from standard Empirical Risk Minimization (ERM), which optimizes the average loss over the dataset. The proposed setting is also different from the maximum loss (Shalev-Shwartz and Wexler 2016), which optimizes the maximum loss. This paper tries to optimize the average of the top-k losses, which can be viewed as a natural generalization of ERM and the maximum loss. The authors formulate it as a convex optimization problem, which can be solved with conventional gradient-based methods. The authors give some learning theory analyses of the setting, on the classification calibration of the top-k loss and the error bounds of ATk-SVM. Finally, the authors present some experiments to verify the effectiveness of the proposed algorithm.
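For concreteness, the AT_k loss itself is a one-liner (a sketch; the identity in the comment is the standard reformulation that makes the objective jointly convex in the individual losses):

```python
import numpy as np

def average_top_k_loss(losses, k):
    # Average of the k largest individual losses; k = n recovers the usual
    # average loss and k = 1 the maximum loss. Equivalent convex form:
    # AT_k = min over lambda of lambda + (1/k) * sum_i max(l_i - lambda, 0).
    losses = np.asarray(losses)
    return np.sort(losses)[-k:].mean()
```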
This work is generally well written, with the following advantages:
1) The authors introduce a new direction for supervised learning, which is a natural generalization of ERM and the work of (Shalev-Shwartz and Wexler 2016).
2) Some theoretical analyses are presented for the proposed learning setting.
3) The authors present a learning algorithm.
Cons:
1) Some statements are not clear. For example, "top-k loss" is easily confused with top-k ranking; more importantly, "ensemble loss" gives the impression of ensemble learning, whereas the two are totally different.
2) When should we use the average top-k loss? I do not think that the authors explain this clearly. Intuitively, the performance (w.r.t. accuracy) of the average top-k loss is lower than that of ERM without noise and outliers, while I guess the average top-k loss algorithm may perform well when dealing with noisy data and outliers.
3) How should the parameter k be chosen? The authors use cross-validation in the experiments, but there is no analysis of this choice.
4) The authors should present some t-tests on the performance on the benchmark datasets. I doubt some of the experimental results. For example, the accuracy for German is about 0.79 for standard learning algorithms.
5) How about the efficiency in comparison with ERM and the work of (Shalev-Shwartz and Wexler 2016)?
nips_2017_2274 | Estimating Accuracy from Unlabeled Data: A Probabilistic Logic Approach
We propose an efficient method to estimate the accuracy of classifiers using only unlabeled data. We consider a setting with multiple classification problems where the target classes may be tied together through logical constraints. For example, a set of classes may be mutually exclusive, meaning that a data instance can belong to at most one of them. The proposed method is based on the intuition that: (i) when classifiers agree, they are more likely to be correct, and (ii) when the classifiers make a prediction that violates the constraints, at least one classifier must be making an error. Experiments on four real-world data sets produce accuracy estimates within a few percent of the true accuracy, using solely unlabeled data. Our models also outperform existing state-of-the-art solutions in both estimating accuracies, and combining multiple classifier outputs. The results emphasize the utility of logical constraints in estimating accuracy, thus validating our intuition. | This paper builds on the work of Platanios et al. (2014, 2016) on estimating the accuracy of a set of classifiers for a given task using only unlabeled data, based on the agreement behavior of the classifiers. The current work uses a probabilistic soft logic (PSL) model to infer the error rates of the classifiers. The paper also extends this approach to the case where we have multiple related classification tasks: for instance, classifying noun phrases with regard to their membership in multiple categories, some of which subsume others and some of which are mutually exclusive. The paper shows how a PSL model can take into account these constraints among the categories, yielding better error rate estimates and higher joint classification accuracy.
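Intuition (ii) is easy to illustrate: counting constraint violations gives an assumption-free error signal from unlabeled data alone. A crude sketch, not the paper's PSL model, with illustrative names:

```python
import numpy as np

def mutual_exclusion_violation_rate(preds):
    # preds: (n_classes, n_instances) binary predictions for a set of
    # mutually exclusive classes. On any instance predicted positive for
    # two or more classes, at least one classifier must be wrong, so the
    # violation rate is a crude lower bound on how often some classifier errs.
    preds = np.asarray(preds)
    return float((preds.sum(axis=0) >= 2).mean())
```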
[UPDATED] I see this paper as just above the acceptance threshold. It is well written and the methodology seems sound. It provides a compelling example of improving prediction accuracy by applying a non-trivial inference algorithm to multiple variables, not just running feed-forward classification.
The paper is less strong in terms of novelty. It's not surprising that a system can achieve better accuracy by using cross-category constraints. The paper does this with an existing PSL system (from Bach et al. 2015). The paper discusses a customized grounding algorithm in Section 3.3.2 (p. 6), but it seems to boil down to iterating over the given instances and classifier predictions and enumerating rule groundings that are not trivially satisfied. The only other algorithmic novelty seems to be the sampling scheme in the second paragraph of Section 3.3.3 (p. 7). [UPDATE: The authors say in their response that this sampling scheme was critical to speeding up inference so it could run at the necessary scale. If this is where a significant amount of the work's novelty lies, it deserves more space and a more thorough exposition.]
The paper could also benefit from deeper and more precise analysis. A central choice in the paper is the use of probabilistic soft logic, where variables can take values in [0, 1], rather than a framework such as Markov Logic Networks (MLNs) that deals with discrete variables. The paper says that this choice was a matter of scalability, and "one could just as easily perform inference for our model using MLNs" (lines 213-214, p. 5). I don't think this is true. The proposed model applies logical operators to continuous-valued variables (for the error rates, and the predictions of probabilistic classifiers), which is meaningful in PSL but not MLNs. If one were to model this problem with an MLN, I think one would replace the error rate variables with learnable weights on rules connecting the classifier outputs to the true category values. Such modeling decisions would change the scalability story: there would be no edges in the MLN between variables corresponding to different input instances (e.g., different noun phrases). So one could do exact inference on the sub-networks for the various instances in parallel, and accumulate statistics to use in an EM algorithm to learn the weights. This might be roughly as efficient as the optimization described in Section 3.3.3. [UPDATE: In the MLN I'm envisioning here, with one connected component per noun phrase, information is still shared across instances (noun phrases) during parameter estimation: the weights are shared across all the instances. But when the parameters are fixed, as in the E-step of each EM iteration, the inference algorithm can operate on each noun phrase independently.]
The experimental results could also use more analysis. It's not surprising that the proposed LEE algorithm is the best by all metrics in the first row of Table 1 (p. 8), where it takes cross-category constraints into account. I have a number of questions about the other two rows:
[UPDATE: Thanks for the answers to these questions.]
* What are the "10%" data sets? I didn't find those mentioned in the text.
* Does simple majority voting really yield an AUC of 0.999 on the uNELL-All data set? If that's the case, why are the AUCs for majority voting so much lower on NELL-7 and NELL-11? Which setting is most representative of the full problem faced by NELL?
* Why does the proposed method yield an AUC of only 0.965 on uNELL-All, performing worse than majority voting and several other methods? [UPDATE: The authors note that the classifier weights from the proposed method end up performing worse than equal weighting on this data set. This could use more analysis. Is there a shortcoming in the method for turning error rates into voting weights?]
* On the other three data sets in these rows, the proposed approach does better than the algorithms of Platanios et al. 2014 and 2016, which also use reasonable models. Why does the new approach do better? Is this because the classifiers are probabilistic, and the new approach uses those probabilities while the earlier approaches binarize the predictions? [UPDATE: This still seems to be an open question. The paper would be more satisfying if included some additional analysis on this issue.]
* For a given probabilistic classifier on a given labeled data set, the two methods for computing an "error rate" -- one thresholding the prediction at 0.5 and counting errors; the other averaging the probabilities given to incorrect classifications -- will yield different numbers in general. Which way was the "true" error rate computed for these evaluations? Is this fair to all the techniques being evaluated? [UPDATE: The authors clarified that they thresholded the predictions at 0.5 and counted errors, which I agree is fair to both probabilistic and non-probabilistic classifiers.]
Smaller questions and comments:
* p. 1, Figure 1: If there is a shortage of space, I think this figure could be removed. I skipped over it on my first reading of the paper and didn't feel like I was missing something.
* p. 2, line 51: "we define the error rate..." Wouldn't it be helpful to consider separate false positive and false negative rates?
* p. 2, line 55: The probabilistic equation for the error rate here doesn't seem syntactically valid. The error rate is not parameterized by a particular instance X, but the equation includes \hat{f}^d_j(X) and (1 - \hat{f}^d_j(X)) outside the P_D terms. Which X is used there? I think what's intended is the average over the data set of the probability that the classifier assigns to the incorrect classification.
* p. 4, line 148: Were the techniques described in this Identifiability section used in the experiments? If so, how was the prior weight set?
* p. 5, Equation 3: Why does the model use \hat{f}^{d_1}_j here, rather than using the unobserved predicate f^{d_1} (which would be symmetrical with f^{d_2}) and relying on constraint propagation?
* p. 5, line 219: This line starts using the symbol f for a probability density, which is confusing given the extensive use of f symbols for classifiers and target functions.
* p. 6, line 223: What is p_j? Is it always 1 in the proposed model?
* p. 6, line 225: How were the lambda parameters set in the experiments?
* p. 6, line 257: The use of N^{d_1} in these counting expressions seems imprecise. The expressions include factors of D^2 or D, implying that they're summing over all the target functions in D. So why is the N specific to d_1?
* p. 7, line 300: "response every input NP". Missing "for" here.
* p. 8, Table 1: The use of different scaling factors (10^-2 and 10^-1) in different sections of the table is confusing. I'd recommend just presenting the numbers without scaling. |
nips_2017_2012 | Collecting Telemetry Data Privately
The collection and analysis of telemetry data from users' devices is routinely performed by many software companies. Telemetry collection leads to improved user experience but poses significant risks to users' privacy. Locally differentially private (LDP) algorithms have recently emerged as the main tool that allows data collectors to estimate various population statistics, while preserving privacy. The guarantees provided by such algorithms are typically very strong for a single round of telemetry collection, but degrade rapidly when telemetry is collected regularly. In particular, existing LDP algorithms are not suitable for repeated collection of counter data such as daily app usage statistics. In this paper, we develop new LDP mechanisms geared towards repeated collection of counter data, with formal privacy guarantees even after being executed for an arbitrarily long period of time. For two basic analytical tasks, mean estimation and histogram estimation, our LDP mechanisms for repeated data collection provide estimates with comparable or even the same accuracy as existing single-round LDP collection mechanisms. We conduct empirical evaluation on real-world counter datasets to verify our theoretical results. Our mechanisms have been deployed by Microsoft to collect telemetry across millions of devices. | The authors propose a more efficient mechanism for local differential privacy. In particular, they propose a 1-bit local differential privacy method that enables mean estimation in a distributed and effective way. Also, a d-bit method is proposed for histogram estimation. It is argued that these two tasks are of particular relevance in the collection of telemetry data. The main advance is in the communication efficiency, which is key to large-scale deployment. The authors also note that this mechanism is indeed deployed at large scale in industry and likely constitutes the largest deployment of differential privacy to date. While this on its own does not add to the scientific contribution, it does reinforce the point of an efficient, effective, and practical approach.
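As context, a sketch of a standard one-bit mechanism of this flavor (a hedged illustration, not necessarily the paper's exact mechanism; the debiasing step makes the estimate unbiased for values in [0, m]):

```python
import numpy as np

def one_bit_report(x, m, eps, rng=np.random.default_rng()):
    # User side: a value x in [0, m] is reduced to one biased coin flip.
    e = np.exp(eps)
    p_one = 1.0 / (e + 1.0) + (x / m) * (e - 1.0) / (e + 1.0)
    return int(rng.random() < p_one)

def mean_from_bits(bits, m, eps):
    # Collector side: debias the noisy bits to get an unbiased mean estimate.
    e = np.exp(eps)
    bits = np.asarray(bits, dtype=float)
    return m * np.mean((bits * (e + 1.0) - 1.0) / (e - 1.0))
```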
The authors submitted another version of their paper as supplementary material that contains the proofs. But it also contains more figures and extended main text. This is borderline: the supplementary material should only contain the additional material, while the main paper should be self-contained and not repeated. It is also awkward that the reviewer has to hunt for proofs in the provided text. There should be a clear separation between the submission and the supplementary material.
Overall, the presentation is clear, partially due to outsourcing parts of the submission to the supplementary material. However, the alpha-rounding part remains partially unclear. Can the authors please elaborate on the claimed guarantees?
While the experimental study is interesting, there is a question about the reproducibility of the results. As it looks like this is real user and company data, how would follow-up work compare here? It would be beneficial to provide an alternative, potentially simulated, data set so that comparisons remain openly available.
However, the experiments are considered convincing and support the main claim. |
nips_2017_2546 | Online Learning for Multivariate Hawkes Processes
We develop a nonparametric and online learning algorithm that estimates the triggering functions of a multivariate Hawkes process (MHP). The approach we take approximates the triggering function f_{i,j}(t) by functions in a reproducing kernel Hilbert space (RKHS), and maximizes a time-discretized version of the log-likelihood, with Tikhonov regularization. Theoretically, our algorithm achieves an O(log T) regret bound. Numerical results show that our algorithm offers a competing performance to that of the nonparametric batch learning algorithm, with a run time comparable to parametric online learning algorithms. | This paper describes an algorithm for optimization of Hawkes process parameters in online settings, where a nonparametric form of the kernel is learnt. The paper reports a gradient approach to optimization, with theoretical analysis thereof. In particular, the authors provide: a regret bound, justification for the simplification steps (discretization of time and truncation of the time over which previous events influence a new event), an approach to a tractable projection of the solution (a step in the algorithm), and a time complexity analysis.
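For readers new to MHPs, a minimal sketch of the conditional intensity being modeled; the exponential kernel in the example is a common parametric special case, whereas the paper learns the triggering functions nonparametrically, and all names are illustrative:

```python
import numpy as np

def hawkes_intensity(t, dim, events, mu, trigger):
    # Conditional intensity of dimension `dim` of a multivariate Hawkes
    # process at time t: base rate plus summed influence of past events;
    # trigger(i, j, dt) plays the role of the triggering function f_{i,j}.
    return mu[dim] + sum(trigger(dim, d_j, t - t_j)
                         for t_j, d_j in events if t_j < t)

# Example with an exponential kernel:
mu = np.array([0.1, 0.2])
A, beta = np.array([[0.5, 0.1], [0.2, 0.3]]), 1.0
lam = hawkes_intensity(2.0, 0, [(0.5, 1), (1.2, 0)],
                       mu, lambda i, j, dt: A[i, j] * np.exp(-beta * dt))
```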
The paper is very well written, which is very helpful given that it is mathematically involved. I found it to tackle an important problem (online learning is important for large-scale datasets, and nonparametricity is a very reasonable setting when it is hard to specify a reasonable kernel form a priori). The authors provide theoretical grounding for their approach. I enjoyed this paper very much and I think it is going to be influential for practical applications of Hawkes processes.
Some comments:
- Can you comment on the influence of the hyperparameter values (regularization coefficients, and potentially also the discretization step)? In the experiments you set them a priori to fixed values, and it is not uncommon for the values of regularization hyperparameters to have a huge influence on the results.
- Your experiments are quite limited, especially the synthetic experiments, where many values are fixed and a single matrix of influence forms is considered. I think it would be much more compelling if some simulation were considered where values are repeatedly drawn. Also, why do you think this specific setting is interesting/realistic?
- I suggest making the fonts in the figures larger (for Figure 2 especially it is too small); it doesn't look good now. This is especially annoying in the supplementary material (Figure 4).
nips_2017_1021 | Deep Hyperalignment
This paper proposes Deep Hyperalignment (DHA) as a regularized, deep, and scalable Hyperalignment (HA) method, which is well-suited for applying functional alignment to fMRI datasets with nonlinearity, high-dimensionality (broad ROI), and a large number of subjects. Unlike previous methods, DHA is not limited by a restricted fixed kernel function. Further, it uses a parametric approach, rank-m Singular Value Decomposition (SVD), and stochastic gradient descent for optimization. Therefore, DHA has a suitable time complexity for large datasets, and DHA does not require the training data when it computes the functional alignment for a new subject. Experimental studies on multi-subject fMRI analysis confirm that the DHA method achieves superior performance to other state-of-the-art HA algorithms. | Deep Hyperalignment
[Due to time constraints, I did not read the entire paper in detail.]
The problem of functional alignment of multi-subject fMRI datasets is a fundamental and difficult problem in neuroimaging. Hyperalignment is a prominent class of anatomy-free functional alignment methods. The present manuscript presents an extension to this using a deep NN kernel. I believe this to be novel, though I don't know the literature well enough to be confident of that. While not a surprising research direction, this is a natural and promising method to consider, and the experimental results are excellent.
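For context, classical (shallow) hyperalignment reduces to an orthogonal Procrustes problem per subject; a minimal sketch of that baseline (not the paper's deep variant):

```python
import numpy as np

def procrustes_align(X, template):
    # Classical hyperalignment step: the orthogonal map R minimizing
    # ||X R - template||_F comes from the SVD of X^T template; the deep
    # method discussed here replaces this linear map with a learned one.
    U, _, Vt = np.linalg.svd(X.T @ template)
    return U @ Vt  # apply as X @ R
```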
The methods are explained thoroughly, with detailed proofs in supplementary material. I did however find this section too dense and without a good summary of the final design and parameter choices. A figure sketching the method would have been very helpful. Related work is surveyed well. The new method could be described as an amalgam of several existing approaches. However the various elements have been combined thoughtfully in a novel way, and carefully implemented.
The experiments are impressively thorough, involving several open datasets and several competing methods, with detailed reporting of methods, including the parameters used for each of the alignment methods. Some more detail on preprocessing would have been helpful. It seems there has been some validation of parameters on the test data ("the best performance for each dataset is reported", p. 6), which is not ideal; however, the number of parameter values tested is small and the results are so good that I feel they would stand up to a better training/validation/test split. My main concern about this section is that all of the results reported were computed by the present authors; it would be helpful to also flag any comparable results in the literature.
The quality of the English is mostly good, but with occasional problems. E.g., "time complexity fairly scales with data size" is unclear, and the sentence in which it appears is repeated three times in the manuscript (including once in the abstract). Some proof-reading is in order.
nips_2017_1247 | Compatible Reward Inverse Reinforcement Learning
Inverse Reinforcement Learning (IRL) is an effective approach to recover a reward function that explains the behavior of an expert by observing a set of demonstrations. This paper is about a novel model-free IRL approach that, differently from most of the existing IRL algorithms, does not require specifying a function space in which to search for the expert's reward function. Leveraging the fact that the policy gradient needs to be zero for any optimal policy, the algorithm generates a set of basis functions that span the subspace of reward functions that make the policy gradient vanish. Within this subspace, using a second-order criterion, we search for the reward function that most penalizes deviations from the expert's policy. After introducing our approach for finite domains, we extend it to continuous ones. The proposed approach is empirically compared to other IRL methods both in the (finite) Taxi domain and in the (continuous) Linear Quadratic Gaussian (LQG) and Car on the Hill environments. | This paper proposes an approach for behavioral cloning that constructs a function space for a particular parametric policy model based on the null space of the policy gradient.
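Schematically, since the policy gradient is linear in the reward weights, the construction reduces to a null-space computation; a sketch under that assumption, where `G` is an illustrative stand-in for the matrix estimated from the expert's demonstrations:

```python
import numpy as np
from scipy.linalg import null_space

def reward_basis(G):
    # If grad J(w) = G w for reward weights w, then rewards that make the
    # expert's policy gradient vanish form null(G); its orthonormal basis
    # spans the candidate reward subspace that the second-order criterion
    # subsequently searches.
    return null_space(G)  # columns are basis vectors
```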
I think a running example (e.g., for a discrete MDP) would help explain the approach. I found myself flipping back and forth between the Algorithm (page 6) and the description of each step. I have some lingering confusion about using Eq. (4) to estimate the discounted vectors d(s,a), since (4) only computes d(s). I assume a similar estimator is employed for d(s,a).
My main concern with the paper’s approach is that of overfitting and lack of transfer. The main motivation of IRL is that the estimated reward function can be employed in other MDPs (without rewards) that have similar feature representations. Seeking to construct a sufficiently rich feature representation that can drive the policy gradient to zero suggests that there is overfitting to the training data at the expense of generalization abilities. I am skeptical then that reasonable generalization can be expected to modifications of the original MDP (e.g., if the dynamics changes or set of available actions changes) or to other MDPs. What am I missing that allows this approach to get beyond behavioral cloning?
My understanding of the Taxi experiment in particular is that data is generated according to the policy model under line 295 and that the proposed method uses this policy model, while comparison methods are based on different policy models (e.g., BC instead estimates a Boltzmann policy). This seems like an extremely unfair advantage to the proposed approach that could account for all of the advantages demonstrated over BC. What happens when likelihood is maximized for BC under this actual model? Experiments with data generated out of model and generalization to making predictions beyond the source MDP are needed to improve the paper.
In summary, though the proposed method is interesting, additional experiments that mimic “real-world” settings are needed before publication to investigate the approach's suitability to noisy data and its benefits for generalization purposes.
nips_2017_2945 | Improved Training of Wasserstein GANs
Generative Adversarial Networks (GANs) are powerful generative models, but suffer from training instability. The recently proposed Wasserstein GAN (WGAN) makes progress toward stable training of GANs, but sometimes can still generate only poor samples or fail to converge. We find that these problems are often due to the use of weight clipping in WGAN to enforce a Lipschitz constraint on the critic, which can lead to undesired behavior. We propose an alternative to clipping weights: penalize the norm of the gradient of the critic with respect to its input. Our proposed method performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning, including 101-layer ResNets and language models with continuous generators. We also achieve high quality generations on CIFAR-10 and LSUN bedrooms. | + good results
+ pretty pictures
- exposition could be improved
- claims are not backed up
- minor technical contribution
- insufficient evaluation
Let me first say that the visual results of this paper are great. The proposed algorithm produces pretty pictures of bedrooms and tiny CIFAR-10 images.
Unfortunately, the rest of the paper needs to be significantly improved for acceptance at NIPS. The exposition and structure of the paper are below NIPS standards. For example:
L3: "..., but can still generate low-quality samples or fail to converge in some settings." I'm assuming the authors mean "can only..."?
l6: "pathological behavior". Pathological seems the wrong word here...
Sections 3 and 4 should be flipped. This way the exposition doesn't need to refer to a method that was not introduced yet.
Secondly, the paper is full of claims that are not backed up. The reviewer recommends that the authors either remove these claims or back them up experimentally or theoretically. Here are a few:
L101: Prove that only simple functions have gradient norm 1 almost everywhere; note that "almost everywhere" is important here. There can be exponentially many parts of the landscape that are disconnected by regions where the gradient norm != 1.
If a proof is not possible, please remove this claim.
L144: The two-sided penalty works better. Please show this in experiments.
The actual technical contribution seems like a small modification of WGAN: the box constraint is relaxed to an l2-norm penalty on the gradients.
Finally, the evaluation is insufficient. The paper shows some pretty generation results for various hyper-parameter settings, but doesn't show any concrete numerical evaluation, apart from the easy-to-cheat and fairly meaningless inception score. The authors should perform a thorough user study to validate their approach, especially in light of a slim technical contribution.
Minor:
How is (3) efficiently evaluated? Did the authors have to modify specific deep learning packages to compute the gradient of the gradient? How is this second derivative defined for ReLUs?
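For context, here is a minimal sketch of how I would expect the penalty in (3) to be computed via double backpropagation in a modern autograd framework; the PyTorch-style names, the uniform interpolation scheme, and the 4D image batch shape are my assumptions, not details from the paper:

```python
import torch

def gradient_penalty(critic, real, fake, lam=10.0):
    # Interpolate between real and fake samples (assumed 4D image batches).
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    x_hat = (eps * real + (1 - eps) * fake).requires_grad_(True)
    out = critic(x_hat)
    # Gradient of the critic output w.r.t. its input; create_graph=True keeps
    # the graph so the penalty itself can be backpropagated through.
    grads = torch.autograd.grad(outputs=out, inputs=x_hat,
                                grad_outputs=torch.ones_like(out),
                                create_graph=True)[0]
    grad_norm = grads.view(grads.size(0), -1).norm(2, dim=1)
    return lam * ((grad_norm - 1) ** 2).mean()
```

No modification of the framework itself is needed as long as it supports differentiating through gradients, which is also exactly where the question about second derivatives of ReLUs becomes relevant.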
Post rebuttal:
The authors answered some of my concerns in the rebuttal. I'm glad the authors promised to improve the writing and remove the misleading claims. However the rebuttal didn't fully convince me, see below.
I'm still a bit hesitant to accept the argument that prettier pictures are better; the authors' rebuttal didn't change much in that regard. I'm sure some image filter or denoising CNN could equally improve the image quality. I would strongly suggest the authors find a way to quantitatively evaluate their method (other than the inception score).
I'm also fairly certain that the argument about the second derivative of ReLUs is wrong. The second derivative of a ReLU is not defined, as far as I know. The authors mention it's all zero, but that would make it a linear function (and a ReLU is not linear)... The reviewers had an extensive discussion on this and we agree that for ReLU networks the objective is not continuous, hence the derivative of the objective does not point downhill at all times. There are a few ways the authors should address this:
1. Try twice-differentiable non-linearities (ELU, tanh, or log(1+exp(x))).
2. Show that treating the ReLU as a linear function (second derivative of zero) does give a good enough gradient estimate, for example through finite differences, or by verifying how often the "gradient" points downhill.
nips_2017_867 | Tensor Biclustering
Consider a dataset where data is collected on multiple features of multiple individuals over multiple times. This type of data can be represented as a three dimensional individual/feature/time tensor and has become increasingly prominent in various areas of science. The tensor biclustering problem computes a subset of individuals and a subset of features whose signal trajectories over time lie in a low-dimensional subspace, modeling similarity among the signal trajectories while allowing different scalings across different individuals or different features. We study the information-theoretic limit of this problem under a generative model. Moreover, we propose an efficient spectral algorithm to solve the tensor biclustering problem and analyze its achievability bound in an asymptotic regime. Finally, we show the efficiency of our proposed method in several synthetic and real datasets. | This paper studies the tensor biclustering problem, which is
defined as follows: given an order-3 tensor that is assumed to be of rank 1 with additive Gaussian noise, we want to find index sets along modes 1 and 2 where the factor vectors have non-zero entries. The paper considers several methods and analyzes their recovery performance. The statistical lower bound (i.e. the best achievable performance) is also evaluated. The performance is also evaluated experimentally.
The paper is clearly written, self-contained, and easy to follow.
Overall, this paper aims to develop a relatively minor problem, tensor biclustering, theoretically in depth, and it seems to have succeeded. For the computationally efficient methods (the proposed algorithms), the asymptotic order of the noise variance under which the non-zero indices are correctly recovered is identified. In addition, the optimal performance bound is evaluated, which is helpful as a baseline.
Still, I feel there is room for improvement.
1. The model formulation (1) looks rather restrictive. The strongest restriction may be the setting q=1. Although the paper says the analysis can be generalized to the case q>1 (Line 83), this is not discussed in the rest of the paper. For me, the analysis for q>1 is not trivial. It would be very helpful if the paper contained a discussion of, for q>1, how the analysis can be adapted, how the theoretical results change, etc.
2. The synthetic experiments could be more elaborate. I expected to see experiments that support the theoretical results (e.g. Theorems 1 and 2). For example, for various n and \sigma_1^2, you could plot monochrome tiles where each tile represents the recovery rate at the corresponding n and \sigma_1^2 (e.g. the tile is black if the rate is 0 and white if the rate is 1), which would ideally show the phase transition that the theorems predict.
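A minimal sketch of the suggested experiment; the simplified rank-1 generative model, the norm-based recovery heuristic standing in for the paper's spectral algorithm, and the support-recovery criterion are all my own assumptions:

```python
import numpy as np
import matplotlib.pyplot as plt

def recovery_rate(n, sigma2, k=5, trials=20):
    """Fraction of trials in which a simple norm heuristic recovers the
    planted mode-1 index set (rank-1 signal plus Gaussian noise, q = 1)."""
    hits = 0
    for _ in range(trials):
        u = np.zeros(n); u[:k] = 1 / np.sqrt(k)       # planted support
        v = np.zeros(n); v[:k] = 1 / np.sqrt(k)
        w = np.random.randn(n); w /= np.linalg.norm(w)
        T = np.einsum('i,j,k->ijk', u, v, w) \
            + np.sqrt(sigma2) * np.random.randn(n, n, n)
        scores = np.linalg.norm(T.reshape(n, -1), axis=1)  # mode-1 unfolding
        hits += set(np.argsort(scores)[-k:]) == set(range(k))
    return hits / trials

ns, sigmas = [20, 40, 80], [1e-3, 1e-2, 1e-1, 1.0]
grid = [[recovery_rate(n, s) for s in sigmas] for n in ns]
plt.imshow(grid, cmap='gray', aspect='auto')
plt.xlabel('noise variance index'); plt.ylabel('n index'); plt.show()
```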
nips_2017_2081 | Joint distribution optimal transportation for domain adaptation
This paper deals with the unsupervised domain adaptation problem, where one wants to estimate a prediction function f in a given target domain without any labeled sample by exploiting the knowledge available from a source domain where labels are known. Our work makes the following assumption: there exists a nonlinear transformation between the joint feature/label space distributions of the two domains, P_s and P_t, that can be estimated with optimal transport. We propose a solution to this problem that allows us to recover an estimated target distribution P^f_t = (X, f(X)) by optimizing simultaneously the optimal coupling and f. We show that our method corresponds to the minimization of a bound on the target error, and provide an efficient algorithmic solution, for which convergence is proved. The versatility of our approach, both in terms of class of hypothesis or loss functions, is demonstrated with real world classification and regression problems, for which we reach or surpass state-of-the-art results. | The manuscript introduces a new technique for unsupervised domain adaptation (DA) based on optimal transport (OT). Previous related techniques (that are state-of-the-art) use optimal transport to find a mapping between the marginal distributions of source and target.
The manuscript proposes to transport the joint distributions instead of the marginals. As the target does not have labels, the learned prediction function on the target is used as a proxy. The consequence is an iterative algorithm that alternates between 1) estimating the transport for fixed data points and labeling, and 2) estimating the labeling function for fixed transport. As such, the algorithm can be seen as a self-learning inspired extension of previous work, which executed 1) and 2) only once.
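Schematically, the alternation can be summarized as follows; this is a high-level sketch using the POT library for the OT step, and the hard-label cost, the classifier choice, and the weighting alpha are my assumptions rather than the paper's exact formulation:

```python
import numpy as np
import ot                                     # POT: Python Optimal Transport
from sklearn.linear_model import LogisticRegression

def joint_ot_da(Xs, ys, Xt, alpha=1.0, n_iter=10):
    """Alternate between (1) coupling the joint feature/label distributions
    and (2) refitting the target predictor f on labels transported by G."""
    ns, nt = len(Xs), len(Xt)
    a, b = np.ones(ns) / ns, np.ones(nt) / nt
    K = len(np.unique(ys))
    Ys = np.eye(K)[ys]                        # one-hot source labels
    clf = LogisticRegression().fit(Xs, ys)    # initialize f on the source
    for _ in range(n_iter):
        # Step 1: f fixed -- joint cost mixes feature and label distances.
        Yt = np.eye(K)[clf.predict(Xt)]
        C = ot.dist(Xs, Xt) + alpha * ot.dist(Ys, Yt)
        G = ot.emd(a, b, C)                   # exact OT coupling
        # Step 2: coupling fixed -- barycentric label transfer, then refit f.
        pseudo = np.argmax(G.T @ Ys, axis=1)
        clf = LogisticRegression().fit(Xt, pseudo)
    return clf, G
```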
In addition to the algorithm, the manuscript contains a high probability generalization bound, similar to the discrepancy-based bounds of Ben-David et al [24]. It is used as justification that minimizing the joint OT distance should result in better predictions. Formally, this is not valid, however, because the bound is not uniform in the labeling function, so it might not hold for the result of the minimization.
There are also experiments on three standard datasets that show that the proposed method achieves state-of-the-art performance (though the differences to the previous OT-based methods are small).
strength:
- the paper is well written, the idea makes sense
- the method seems practical, as both repeated steps resemble previous OT-based DA work
- the theory seems correct (but is not enough to truly justify the method, see below)
- the experiments are done well, including proper model selection (which is rare in DA)
- the experimental results are state-of-the-art
weaknesses:
- the manuscript is mainly a continuation of previous work on OT-based DA
- while the derivations are different, the conceptual difference to previous work is limited
- theoretical results and derivations are w.r.t. the loss function used for learning (e.g. hinge loss), which is typically just a surrogate, while the real performance measure would be the 0/1 loss. This also makes it hard to compare the bounds to previous work that used the 0-1 loss
- the theorem assumes a form of probabilistic Lipschitzness, which is not explored well. Previous discrepancy-based DA theory does not need probabilistic Lipschitzness and is more flexible in this respect.
- the proved bound (Theorem 3.1) is not uniform w.r.t. the labeling function $f$. Therefore, it does not suffice as a justification for the proposed minimization procedure.
- the experimental results do not show much better results than previous OT-based DA methods
- as the proposed method is essentially a repeated application of the previous work, I would have hoped to see real-data experiments exploring this. Currently, performance after different numbers of alternating steps is reported only in the supplemental material on synthetic data.
- the supplemental material feels rushed in some places. E.g., in the proof of Theorem 3.1, the first inequality on page 4 seems incorrect (as the integral is w.r.t. a signed measure, not a probability distribution). I believe the proof can be fixed, though, because the relation holds without absolute values, and it's not necessary to introduce these in (3) anyway.
- in the same proof, Equations (7)/(8) seem identical to (9)/(10)
questions to the authors:
- please comment on whether the effect of multiple BCD steps on real data is similar to the synthetic case
***************************
I read the author response and I am still in favor of accepting the work. |
nips_2017_1129 | Log-normality and Skewness of Estimated State/Action Values in Reinforcement Learning
Under/overestimation of state/action values is harmful for reinforcement learning agents. In this paper, we show that a state/action value estimated using the Bellman equation can be decomposed into a weighted sum of path-wise values that follow log-normal distributions. Since log-normal distributions are skewed, the distribution of estimated state/action values can also be skewed, leading to an imbalanced likelihood of under/overestimation. The degree of such imbalance can vary greatly among actions and policies within a single problem instance, making the agent prone to select actions/policies that have inferior expected return and higher likelihood of overestimation. We present a comprehensive analysis of such skewness, examine its factors and impacts through both theoretical and empirical results, and discuss possible ways to reduce its undesirable effects. | This paper focuses on the problem arising from skewness in the distribution of value estimates, which may result in over- or under-estimation. With careful analysis, the paper shows that a particular model-based value estimate is approximately log-normally distributed, which is skewed and thus leads to the possibility of over- or under-estimation. It is further shown that positive and negative rewards induce opposite sorts of skewness. With simple experiments, the problem of over/underestimation is illustrated.
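The core skewness point is easy to illustrate numerically; the following is a self-contained demo of log-normal asymmetry, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.0, 1.0
true_mean = np.exp(mu + sigma**2 / 2)       # mean of a log-normal variable
samples = rng.lognormal(mu, sigma, size=1_000_000)
# Because the distribution is right-skewed, a single draw undershoots the
# mean more often than not, so under/overestimation is imbalanced:
print((samples < true_mean).mean())         # ~0.69
```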
This is an interesting paper with some interesting insights on over/underestimation of values. I would like the authors to clarify the following two issues.
The results hold for a particular kind of model-based value estimation method, more specifically, one which relies on Definition 3.1. The implication of this result for other kinds of estimation methods, especially model-free ones, is not clear. A discussion is required on why a similar effect is expected with other kinds of value estimation methods.
It is disappointing to have no experiments with the suggested remedies, such as balancing the impact of positive and negative rewards and reward shaping. Some experiments are expected here to see whether these are indeed helpful. An elaborate discussion of reward balancing and how to achieve it could also improve the quality of the paper.
nips_2017_1993 | Eigen-Distortions of Hierarchical Representations
We develop a method for comparing hierarchical image representations in terms of their ability to explain perceptual sensitivity in humans. Specifically, we utilize Fisher information to establish a model-derived prediction of sensitivity to local perturbations of an image. For a given image, we compute the eigenvectors of the Fisher information matrix with largest and smallest eigenvalues, corresponding to the model-predicted most-and least-noticeable image distortions, respectively. For human subjects, we then measure the amount of each distortion that can be reliably detected when added to the image. We use this method to test the ability of a variety of representations to mimic human perceptual sensitivity. We find that the early layers of VGG16, a deep neural network optimized for object recognition, provide a better match to human perception than later layers, and a better match than a 4-stage convolutional neural network (CNN) trained on a database of human ratings of distorted image quality. On the other hand, we find that simple models of early visual processing, incorporating one or more stages of local gain control, trained on the same database of distortion ratings, provide substantially better predictions of human sensitivity than either the CNN, or any combination of layers of VGG16.
Human capabilities for recognizing complex visual patterns are believed to arise through a cascade of transformations, implemented by neurons in successive stages in the visual system. Several recent studies have suggested that representations of deep convolutional neural networks trained for object recognition can predict activity in areas of the primate ventral visual stream better than models constructed explicitly for that purpose (Yamins et al. [2014], Khaligh-Razavi and Kriegeskorte [2014]). These results have inspired exploration of deep networks trained on object recognition as models of human perception, explicitly employing their representations as perceptual distortion metrics or loss functions (Hénaff and Simoncelli [2016], Johnson et al. [2016], Dosovitskiy and Brox [2016]).
On the other hand, several other studies have used synthesis techniques to generate images that indicate a profound mismatch between the sensitivity of these networks and that of human observers. Specifically, Szegedy et al. [2013] constructed image distortions, imperceptible to humans, that cause their networks to grossly misclassify objects. Similarly, Nguyen and Clune [2015] optimized randomly initialized images to achieve reliable recognition by a network, but found that the resulting 'fooling images' were uninterpretable by human viewers. Simpler networks, designed for texture classification and constrained to mimic the early visual system, do not exhibit such failures (Portilla and Simoncelli [2000]). These results have prompted efforts to understand why generalization failures of this type are so consistent across deep network architectures, and to develop more robust training methods to defend networks against attacks designed to exploit these weaknesses (Goodfellow et al. [2014]).
From the perspective of modeling human perception, these synthesis failures suggest that representational spaces within deep neural networks deviate significantly from those of humans, and that methods for comparing representational similarity, based on fixed object classes and discrete sampling of the representational space, are insufficient to expose these deviations. If we are going to use such networks as models for human perception, we need better methods of comparing model representations to human vision. Recent work has taken the first step in this direction, by analyzing deep networks' robustness to visual distortions on classification tasks, as well as the similarity of classification errors that humans and deep networks make in the presence of the same kind of distortion (Dodge and Karam [2017]).
Here, we aim to accomplish something in the same spirit, but rather than testing on a set of hand-selected examples, we develop a model-constrained synthesis method for generating targeted test stimuli that can be used to compare the layer-wise representational sensitivity of a model to human perceptual sensitivity. Utilizing Fisher information, we isolate the model-predicted most and least noticeable changes to an image. We test these predictions by determining how well human observers can discriminate these same changes. We apply this method to six layers of VGG16 (Simonyan and Zisserman [2015]), a deep convolutional neural network (CNN) trained to classify objects. We also apply the method to several models explicitly trained to predict human sensitivity to image distortions, including a 4-stage generic CNN, an optimally-weighted version of VGG16, and a family of highly-structured models explicitly constructed to mimic the physiology of the early human visual system. Example images from the paper, as well as additional examples, are available at http://www.cns.nyu.edu/~lcv/eigendistortions/. | PAPER SUMMARY
The paper attempts to measure the degree to which learned image representations from convolutional networks can explain human perceptual sensitivity to visual stimuli. This question is well-motivated on the neuroscience side by recent studies that have used learned image representations to predict brain activity in primates, and on the computer vision side by recent methods for image synthesis which have used image representations learned for object recognition as a surrogate for human perception.
The paper develops a clever experimental paradigm to test the extent to which a (differentiable) image representation agrees with human perception by experimentally measuring the human discrimination threshold for image distortions along eigendirections of the Fisher information matrix corresponding to extremal eigenvalues.
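To make the paradigm concrete, here is a sketch of how the model-predicted most-noticeable direction can be found without ever forming the Fisher matrix, using power iteration with Jacobian-vector products; the PyTorch details and the Gaussian-noise Fisher assumption are mine, not the paper's implementation:

```python
import torch
from torch.autograd.functional import jvp, vjp

def top_eigendistortion(model, x, n_iter=50):
    """Power iteration for the leading eigenvector of F = J^T J, where J is
    the Jacobian of the model response at image x (the Fisher information
    matrix under additive Gaussian output noise)."""
    v = torch.randn_like(x)
    v = v / v.norm()
    for _ in range(n_iter):
        _, Jv = jvp(model, x, v)     # forward-mode product J v
        _, v = vjp(model, x, Jv)     # reverse-mode product J^T (J v)
        v = v / v.norm()
    return v                          # predicted most-noticeable distortion
```

The least-noticeable direction can be obtained analogously, e.g. by iterating on lambda_max * I - F once the top eigenvalue is known.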
Using this experimental technique, the paper demonstrates that for a VGG-16 model pretrained for ImageNet classification, image representations at earlier layers correlate with human perception more strongly than those at later layers.
The paper next explores image representations from models trained explicitly to match human judgements of perceived image distortion, training both a four-layer CNN and an On-Off model whose structure reflects the structure of the human LGN. Both models are able to fit the training data well, with the CNN slightly outperforming the On-Off model. However using the experimental paradigm from above, the paper shows that image representations from the On-Off model correlate much more strongly with human perception than either the shallow CNN model or representations from any layer of the VGG-16 model trained on ImageNet.
FEATURES FROM IQA CNN
For the four-layer IQA CNN from Section 3.1, it seems that only the image representation from the final layer of the network was compared with human perceptual judgements; however, from Figure 3 we know that earlier layers of VGG-16 explain human perceptual judgements much better than later layers. I'm therefore curious about whether features from earlier layers of the IQA CNN would do a better job of explaining human perception than the final-layer features.
RANDOM NETWORKS AND GANS
Some very recent work has shown that features from shallow CNNs with no pooling and random filters can be used for texture synthesis nearly as well as features from deep networks pretrained on ImageNet (Ustyuzhaninov et al, ICLR 2017), suggesting that the network structure itself and not the task on which it was trained may be a key factor in determining the types of perceptual information its features capture. I wonder whether the experimental paradigm of this paper could be used to probe the perceptual effects of trained vs random networks?
In a similar vein, I wonder whether features from the discriminator of a GAN would better match human perception than ImageNet models, since such a discriminator is explicitly trained to detect synthetic images?
ON-OFF MODEL FOR IMAGE SYNTHESIS
Given the relative success of the On-Off model at explaining human perceptual judgements, I wonder whether it could be used in place of VGG features for image synthesis tasks such as super-resolution or texture generation?
OVERALL
Overall I believe this to be a high-quality, well-written paper with an interesting technical approach and exciting experimental results which open up many interesting questions to be explored in future work.
POST AUTHOR RESPONSE
I did not have any major issues with the paper as submitted, and I thank the authors for entertaining some of my suggestions about potential future work. To me this paper is a clear accept. |
nips_2017_675 | Subset Selection and Summarization in Sequential Data
Subset selection, which is the task of finding a small subset of representative items from a large ground set, finds numerous applications in different areas. Sequential data, including time-series and ordered data, contain important structural relationships among items, imposed by underlying dynamic models of data, that should play a vital role in the selection of representatives. However, nearly all existing subset selection techniques ignore underlying dynamics of data and treat items independently, leading to incompatible sets of representatives. In this paper, we develop a new framework for sequential subset selection that finds a set of representatives compatible with the dynamic models of data. To do so, we equip items with transition dynamic models and pose the problem as an integer binary optimization over assignments of sequential items to representatives, that leads to high encoding, diversity and transition potentials. Our formulation generalizes the well-known facility location objective to deal with sequential data, incorporating transition dynamics among facilities. As the proposed formulation is non-convex, we derive a max-sum message passing algorithm to solve the problem efficiently. Experiments on synthetic and real data, including instructional video summarization, show that our sequential subset selection framework not only achieves better encoding and diversity than the state of the art, but also successfully incorporates dynamics of data, leading to compatible representatives. | The paper considers a new problem of subset selection from a set X that represents objects from a sequence Y, with the additional constraint that the selected subset should comply with the underlying dynamic model of set X. The problem seems interesting to me and worth investigation.
The novel problem boils down to the maximization problem (11). The authors suggest a new message-passing scheme for the problem.
The problem can be seen as a multilabel energy minimization problem (T nodes, M labels, a chain structure with the additional global potential -log(\Phi_{card}(x_{r_1}, \ldots, x_{r_T}))) written in overcomplete representation form [23]. The problem is well known in the energy minimization community as energy minimization with label costs [2]. It is shown that the global potential -log(\Phi_{card}(x_{r_1}, \ldots, x_{r_T})) can be represented as a set of pairwise submodular potentials with auxiliary variables [2, 3]. Therefore, the problem (11) can be seen as multilabel energy minimization with pairwise potentials only. There are tons of methods, including message-passing-based ones, to solve the problem approximately (TRW-S, LBP, alpha-beta swap, etc.). While they may work worse than the proposed message passing, a thoughtful comparison should be performed.
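For reference, once the global potential is decomposed into pairwise terms, the plain chain part of such an energy is exactly minimized by standard dynamic programming; this is a sketch of the generic machinery (dense unary and pairwise cost arrays), not the paper's message passing:

```python
import numpy as np

def viterbi_chain(unary, pairwise):
    """Exact MAP for a chain MRF: unary is (T, M), pairwise is (M, M).
    Returns a minimizing label sequence of length T over M labels."""
    T, M = unary.shape
    cost = unary[0].copy()
    back = np.zeros((T, M), dtype=int)
    for t in range(1, T):
        total = cost[:, None] + pairwise + unary[t][None, :]   # (M, M)
        back[t] = np.argmin(total, axis=0)
        cost = np.min(total, axis=0)
    labels = np.empty(T, dtype=int)
    labels[-1] = int(np.argmin(cost))
    for t in range(T - 1, 0, -1):
        labels[t - 1] = back[t, labels[t]]
    return labels
```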
I believe that the paper in its current form cannot be accepted. Although the considered problem is interesting, the related work section should include [1, 2, 3]. Also, different baselines for the experimental evaluation should be used (other optimization schemes for (11)), as it is clear from the experiments that DPP-based methods are not suited to the problem. I think that an improved version may have a great impact.
typos:
- (8): \beta is missing
- L229 more more -> more
- L236 jumping, jumping -> jumping
[1] Wainwright, M.J., Jordan, M.I.: Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 2008
[2] Delong, Andrew, et al. "Fast approximate energy minimization with label costs." International journal of computer vision 96.1 (2012): 1-27.
[3] M. A. Fischler and R. C. Bolles. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, 24(6):381–395, 1981. |
nips_2017_882 | Deanonymization in the Bitcoin P2P Network
Recent attacks on Bitcoin's peer-to-peer (P2P) network demonstrated that its transaction-flooding protocols, which are used to ensure network consistency, may enable user deanonymization-the linkage of a user's IP address with her pseudonym in the Bitcoin network. In 2015, the Bitcoin community responded to these attacks by changing the network's flooding mechanism to a different protocol, known as diffusion. However, it is unclear if diffusion actually improves the system's anonymity. In this paper, we model the Bitcoin networking stack and analyze its anonymity properties, both pre-and post-2015. The core problem is one of epidemic source inference over graphs, where the observational model and spreading mechanisms are informed by Bitcoin's implementation; notably, these models have not been studied in the epidemic source detection literature before. We identify and analyze near-optimal source estimators. This analysis suggests that Bitcoin's networking protocols (both pre-and post-2015) offer poor anonymity properties on networks with a regular-tree topology. We confirm this claim in simulation on a 2015 snapshot of the real Bitcoin P2P network topology. | The paper explores the effect of modifying the transaction-flooding protocol of the bitcoin network as a response to deanonymisation attacks in the original pre-2015 protocol. The authors model the original protocol (trickle) and the revised variant (diffusion) and suggest that deanonymisation can equally be established with both protocols. The authors further support their claim by exploring both models in simulations.
Quality of Presentation:
The paper is well-written (minor aspects: some incomplete sentences, an unresolved reference) and provides the reader with the necessary background information to follow.
The representation of the transaction-flooding protocols is comprehensive and introduced assumptions are highlighted.
Content-related questions:
- The authors refer to a 30% probability of detecting participants, which is relatively low (but indeed considerable). A fundamental assumption of performing the deanonymisation, however, is that the listening supernode would connect to all active bitcoin nodes. How realistic would it be in the future (assuming further growth of the network) that a supernode would be able to do so? -- Edit: The argument for the use of botnets made by the authors is a convincing counterpoint.
- While the overall work is involved, I find that the paper does not comprehensively support the original claim that both trickle and diffusion perform equally well or badly at maintaining anonymity. Observing Figure 6, we find that simulated Trickle and Diffusion show differences in probability of around 0.1 (with Diffusion performing worse than Trickle), which is not negligible. Figure 7 shows the operation on a snapshot of the real network, with an inverted observation: Diffusion performs better than Trickle. (Admittedly, they converge for larger numbers of eavesdropper connections.) The discrepancy and conflicting observations are worth discussing, and the statement that both protocols have `similar probabilities of detection' needs to be clarified (how do you establish 'similarity'?).
nips_2017_1498 | Predicting Organic Reaction Outcomes with Weisfeiler-Lehman Network
The prediction of organic reaction outcomes is a fundamental problem in computational chemistry. Since a reaction may involve hundreds of atoms, fully exploring the space of possible transformations is intractable. The current solution utilizes reaction templates to limit the space, but it suffers from coverage and efficiency issues. In this paper, we propose a template-free approach to efficiently explore the space of product molecules by first pinpointing the reaction center -the set of nodes and edges where graph edits occur. Since only a small number of atoms contribute to reaction center, we can directly enumerate candidate products. The generated candidates are scored by a Weisfeiler-Lehman Difference Network that models high-order interactions between changes occurring at nodes across the molecule. Our framework outperforms the top-performing template-based approach with a 10% margin, while running orders of magnitude faster. Finally, we demonstrate that the model accuracy rivals the performance of domain experts. | Summary:
This work provides a novel approach to predicting the outcome of organic chemical reactions. A reaction can be computationally regarded as a graph-prediction problem: given the input of several connected graphs (molecules), the model aims to predict a fully-connected graph (reaction product) that can be obtained by performing several graph edits (the reaction) on some edges and nodes (the reaction center) in the input graphs.
Past reaction prediction methods involve exhaustively enumerating reaction centers and fitting them to a large number of existing reaction templates, which is very inefficient and hard to scale. In this work, the authors proposed a template-free method to predict the outcome. It is a 3-step pipeline: 1) identify the reaction center given the input graphs using a Weisfeiler-Lehman Network; 2) generate candidate products based on their reactivity scores and chemical constraints; 3) rank the candidate products using a Weisfeiler-Lehman Difference Network.
The proposed method outperformed an existing state-of-the-art method on a benchmark chemistry dataset in both accuracy (a 10% rise) and efficiency (140 times faster), and also outperformed human chemist experts.
Qualitative Evaluation:
Quality:
The work is technically sound. The proposed method is well supported by experiments in both real world dataset and human expert comparisons.
Clarity:
This work describes their work and methods clearly. The experiments are introduced in details.
Originality:
This work provides a novel solution for reaction outcome prediction, which does not need prior knowledge of reaction templates.
The authors may want to relate their work to some past NIPS work on computational chemistry:
Kayala, Matthew A., and Pierre F. Baldi. "A machine learning approach to predict chemical reactions." Advances in Neural Information Processing Systems. 2011.
Significance:
The work outperforms the state of the art in reaction product prediction in both accuracy and efficiency. The user study experiment shows that it also outperforms human experts.
nips_2017_229 | A simple model of recognition and recall memory
We show that several striking differences in memory performance between recognition and recall tasks are explained by an ecological bias endemic in classic memory experiments -that such experiments universally involve more stimuli than retrieval cues. We show that while it is sensible to think of recall as simply retrieving items when probed with a cue -typically the item list itself -it is better to think of recognition as retrieving cues when probed with items. To test this theory, by manipulating the number of items and cues in a memory experiment, we show a crossover effect in memory performance within subjects such that recognition performance is superior to recall performance when the number of items is greater than the number of cues and recall performance is better than recognition when the converse holds. We build a simple computational model around this theory, using sampling to approximate an ideal Bayesian observer encoding and retrieving situational co-occurrence frequencies of stimuli and retrieval cues. This model robustly reproduces a number of dissociations in recognition and recall previously used to argue for dual-process accounts of declarative memory. | This is a beautifully written paper on a central aspect of human memory research. The authors present a simple model along with a clever experimental test and simulations of several other benchmark phenomena.
My main concern is that this paper is not appropriate for NIPS. Most computer scientists at NIPS will not be familiar with, or care much about, the intricate details of recognition and recall paradigms. Moreover, the paper is necessarily compact and thus it's hard to understand even the basic descriptions of phenomena without appropriate background. I also think this paper falls outside the scope of the psych/neuro audience at NIPS, which traditionally has focused on issues at the interface with machine learning and AI. That being said, I don't think this entirely precludes publication at NIPS; I just think it belongs in a psychology journal. I think the authors make some compelling points addressing this in the response. If these points can be incorporated into the MS, then I would be satisfied.
Other comments:
The model doesn't include any encoding noise, which plays an important role in other Bayesian models like REM. The consequences of this departure are not clear to me, since the models are so different in many ways, but I thought it worth mentioning.
In Figure 2, I think the model predictions should be plotted the same way as the experimental results (d').
I didn't understand the point about associative recognition in section 5.3. I don't think a model of associative recognition was presented. And I don't see how reconsolidation is relevant here.
Minor:
- Eq. 1: M is a set, but in the summation it denotes the cardinality of the set. Correct notation would be m \in \mathcal{M} underneath the summation symbol. Same issue applies to Eq. 2.
- I'm confused about the difference between M' and M_k.
- What was k in the simulations? |
nips_2017_2439 | Balancing information exposure in social networks
Social media has brought a revolution on how people are consuming news. Beyond the undoubtedly large number of advantages brought by social-media platforms, a point of criticism has been the creation of echo chambers and filter bubbles, caused by social homophily and algorithmic personalization. In this paper we address the problem of balancing the information exposure in a social network. We assume that two opposing campaigns (or viewpoints) are present in the network, and that network nodes have different preferences towards these campaigns. Our goal is to find two sets of nodes to employ in the respective campaigns, so that the overall information exposure for the two campaigns is balanced. We formally define the problem, characterize its hardness, develop approximation algorithms, and present experimental evaluation results. Our model is inspired by the literature on influence maximization, but there are significant differences from the standard model. First, balance of information exposure is modeled by a symmetric difference function, which is neither monotone nor submodular, and thus, not amenable to existing approaches. Second, while previous papers consider a setting with selfish agents and provide bounds on best-response strategies (i.e., move of the last player), we consider a setting with a centralized agent and provide bounds for a global objective function. | This paper proposes the problem of exposure balancing in social networks where the independent cascade is used as the diffusion model. The usage of the symmetric difference is quite interesting, and the approximation scheme to solve the problem is plausible.
The experimental setup and the way the two cascades are defined and curated are interesting too.
It’s not clear whether the two social network campaigns need to be opposing. Is this a necessity for the model?
Furthermore, some hashtags are not really in opposition to each other. For example, on the Obamacare issue, the tag @sentedcurze could represent many topics related to Senator Ted Cruz, including the anti-Obamacare stance.
Some elaboration on the choice of \alpha=0 for the correlated setting would be welcome.
Given the small cascade sizes, is the learned diffusion network reliable enough? It looks more like a heuristic algorithm for learning the diffusion network, which might be of high variance and less robust given the small cascade sizes.
In Figure 3, why is the difference between methods increasing in some cases and decreasing in others?
At first, the term “correlated” brings to mind the case where the two cascades are competing against each other to attract the audience. This looks to be a better definition of correlation than the one used in the paper. In this paper, however, the intended meaning is not clear until the reader reaches page 3. I would suggest clarifying this earlier in the paper.
Showing standard deviations on the bar charts of Figure 1 would have helped assess the significance of the differences visually. Furthermore, reporting significance levels and p-values could make the results stronger.
A few very related works are missing. In particular, paper [1] has formulated the problem of exposure shaping in social networks. Furthermore, paper [2] considers two cascades that compete against each other and optimizes one of them so they hit the same number of people, i.e., trying to balance their exposure. Given papers [1] and [2], I would say the current paper is not the first study of balancing exposures in social networks; they are closely related.
Paper [3] has proposed a related point-process-based model for competing cascades in social networks, and finally paper [4] proposes an algorithm to optimize the activities in social networks so that the network reaches a specific goal.
[1] Farajtabar, Mehrdad, et al. "Multistage campaigning in social networks." Advances in Neural Information Processing Systems. 2016.
[2] Farajtabar, Mehrdad, et al. "Fake News Mitigation via Point Process Based Intervention." arXiv preprint arXiv:1703.07823 (2017).
[3] Zarezade, Ali, et al. "Correlated Cascades: Compete or Cooperate." AAAI. 2017.
[4] Zarezade, Ali, et al. "Cheshire: An Online Algorithm for Activity Maximization in Social Networks." arXiv preprint arXiv:1703.02059 (2017). |
nips_2017_2290 | Safe Adaptive Importance Sampling
Importance sampling has become an indispensable strategy to speed up optimization algorithms for large-scale applications. Improved adaptive variants (using importance values defined by the complete gradient information, which changes during optimization) enjoy favorable theoretical properties, but are typically computationally infeasible. In this paper we propose an efficient approximation of gradient-based sampling, which is based on safe bounds on the gradient. The proposed sampling distribution is (i) provably the best sampling with respect to the given bounds, (ii) always better than uniform sampling and fixed importance sampling, and (iii) efficiently computable, in many applications at negligible extra cost. The proposed sampling scheme is generic and can easily be integrated into existing algorithms. In particular, we show that coordinate descent (CD) and stochastic gradient descent (SGD) can enjoy a significant speed-up under the novel scheme. The proven efficiency of the proposed sampling is verified by extensive numerical testing. | The authors present a "safe" adaptive importance sampling strategy for coordinate descent and stochastic gradient methods. Based on lower and upper bounds on the gradient values, an efficient approximation of gradient-based sampling is proposed. The method is proven to be the best strategy with respect to the bounds, always better than uniform or fixed importance sampling, and can be computed efficiently at negligible extra cost. Although adaptive importance sampling strategies have been previously proposed, the authors present a novel formulation of selecting the optimal sampling distribution as a convex optimization problem and present an efficient algorithm to solve it.
This paper is well written and a nice contribution to the study of importance sampling techniques.
Comments:
Proof of Lemma 2.1 -- seems to be missing a factor of 2 in alpha^*.
Example 3.1 - In (7) you want to maximize, so in Example 3.1 it would appear that setting c to the upper or lower bound is better than using uniform sampling. Is that what this example is intended to show? It is confusing given the statement directly prior, which claims the naive approach of setting c to the upper or lower bound can be suboptimal.
Line 4 of Algorithm 4 - given that m = max(l^{sort}), will this condition ever be satisfied?
Line 7 of Algorithm 4 - should be u^{sort} instead of c^{sort}?
I think the numerical results could benefit from comparisons to other adaptive sampling schemes out there (e.g., [2], [5], [21]), and against fixed importance sampling with, say, a non-uniform distribution.
Why are there no timing results for SGD?
Title of the paper in reference [14] is incorrect.
Add reference: Csiba and Richtarik. "Importance Sampling for Minibatches" 2016 on arXiv.
============
POST REBUTTAL
============
I have read the author rebuttal and I thank the authors for addressing my questions/comments. I think this paper is a clear accept. |
nips_2017_3058 | Attention Is All You Need
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks that include an encoder and a decoder. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles, by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.0 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. | The paper presents a new architecture for encoder/decoder models for sequence-to-sequence modeling that is solely based on (multi-layered) attention networks combined with standard feed-forward networks, as opposed to the common scheme of using recurrent or convolutional neural networks. The paper presents two main advantages of this new architecture: (1) reduced training time due to the reduced complexity of the architecture, and (2) new state-of-the-art results on standard WMT datasets, outperforming previous work by about 1 BLEU point.
Strengths:
- The paper argues well that (1) can be achieved by avoiding recurrent or convolutional layers, and the complexity analysis in Table 1 strengthens the argument.
- (2) is shown by comparing the model performance against strong baselines on two language pairs, English-German and English-French.
The main strengths of the paper are that it proposes an entirely novel architecture without recurrence or convolutions, and advances state of the art.
Weaknesses:
- While the general architecture of the model is described well and is illustrated by figures, architectural details lack mathematical definition, for example multi-head attention. Why is there a split arrow in Figure 2 right, bottom right? I assume these are the inputs for the attention layer, namely query, keys, and values. Are the same vectors used for keys and values here, or different sections of them? A formal definition of this would greatly help readers understand this (see the sketch after this list for my reading).
- The proposed model contains lots of hyperparameters, and the most important ones are evaluated in ablation studies in the experimental section. It would have been nice to see significance tests for the various configurations in Table 3.
- The complexity argument claims that self-attention models have a maximum path length of 1 which should help maintaining information flow between distant symbols (i.e. long-range dependencies). It would be good to see this empirically validated by evaluating performance on long sentences specifically.
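For reference, here is my reading of multi-head attention as a minimal NumPy sketch; since the formal definition is what is missing, the projection shapes and the per-head split are my interpretation, not the paper's stated equations:

```python
import numpy as np

def multi_head_attention(Q, K, V, Wq, Wk, Wv, Wo, h):
    """h parallel scaled dot-product attentions over learned projections,
    concatenated and projected back to d_model by Wo.
    Q, K, V: (n, d_model); Wq/Wk/Wv: lists of h matrices (d_model, d_k)."""
    heads = []
    for i in range(h):
        q, k, v = Q @ Wq[i], K @ Wk[i], V @ Wv[i]
        scores = q @ k.T / np.sqrt(q.shape[-1])       # scaled dot product
        w = np.exp(scores - scores.max(-1, keepdims=True))
        w = w / w.sum(-1, keepdims=True)              # softmax over keys
        heads.append(w @ v)
    return np.concatenate(heads, axis=-1) @ Wo        # (n, d_model)
```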
Minor comments:
- Are you using dropout on the source/target embeddings?
- Line 146: There seems to be dangling "2" |
nips_2017_1339 | Large-Scale Quadratically Constrained Quadratic Program via Low-Discrepancy Sequences
We consider the problem of solving a large-scale Quadratically Constrained Quadratic Program. Such problems occur naturally in many scientific and web applications. Although there are efficient methods which tackle this problem, they are mostly not scalable. In this paper, we develop a method that transforms the quadratic constraint into a linear form by sampling a set of low-discrepancy points [16]. The transformed problem can then be solved by applying any state-of-the-art large-scale quadratic programming solvers. We show the convergence of our approximate solution to the true solution as well as some finite sample error bounds. Experimental results are also shown to prove scalability as well as improved quality of approximation in practice. | Contributions:
The paper proposes a sampling-based method to approximate QCQPs. The method starts from a sampling of the hypercube, then maps it to the sphere, and finally to the ellipsoid of interest. Unfortunately the authors don't provide enough evidence why this particular sampling is more efficient than simpler ones. How does the complexity depend on the dimension? (The net seems to be exponential in the dimension.) The theorems seem to hide most dimensionality constants. My main question: How does the approach compare experimentally and theoretically to naive sampling?
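For concreteness, a sketch of the cube-to-sphere-to-ellipsoid pipeline as I understand it; the Sobol sequence and the inverse-Gaussian map to the sphere are my assumptions about the construction, not necessarily the paper's:

```python
import numpy as np
from scipy.stats import norm, qmc

def ellipsoid_boundary_points(A, c, r2, n, seed=0):
    """Map a low-discrepancy point set on the unit cube onto the boundary
    of the ellipsoid {x : (x - c)^T A (x - c) = r2}, A positive definite."""
    d = len(c)
    u = qmc.Sobol(d, seed=seed).random(n)             # cube points
    g = norm.ppf(np.clip(u, 1e-12, 1 - 1e-12))        # cube -> Gaussian
    s = g / np.linalg.norm(g, axis=1, keepdims=True)  # Gaussian -> unit sphere
    L = np.linalg.cholesky(A)                         # A = L L^T
    y = np.linalg.solve(L.T, s.T).T                   # sphere -> ellipsoid shape
    return c + np.sqrt(r2) * y                        # scale and recenter
```

One can check that each returned x satisfies (x - c)^T A (x - c) = r2 exactly, since s^T L^{-1} A L^{-T} s = s^T s = 1.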
While the main advantage of the contribution is the scalability, the authors unfortunately do not compare in detail (theoretically and practically) the resulting method and its scalability to existing methods for QCQP.
The paper considers convex QCQPs only, and does not comment on whether this problem class is NP-hard or not. For the list of applications following this statement, it should be mentioned whether those are convex or not, that is, whether they fall into the scope of the paper or not. This is not the only part of this paper where it occurred to me that the writing lacked a certain amount of rigor and referenced Wikipedia-style content instead of rigorous definitions.
Experiments: No comparison to state-of-the-art second-order cone solvers or commercial solvers (e.g. CPLEX and others) is included. Furthermore, as stated now, the claimed comparison is unfortunately not reproducible, as details on the SDP and RLT methods are not included. For example, no stopping criteria or implementation details are given. No supplementary material is provided.
How was x^* computed in the experimental results? How about adding one real dataset as well from the applications mentioned at the beginning?
Riesz energy: either formally define "normalized surface measure" in the paper, or precisely define it in the appendix, or at least provide a scientific reference for it (not Wikipedia). The sentence "We omit details due to lack of space" probably takes as much space as defining it would. (The current definition on the sphere is not related to the actual boundary of S.)
Style: At several points in the paper you mention "For sake of compactness, we do not go deeper into this" and similar. This only makes sense if you at least state more precisely why you discuss it, or at least make precise which elements you require for your results. In general, if such elements add to the value of the paper, they should be provided in an appendix, which the authors did not do.
line 105: missing 'the'
line 115: in base \eta: Definition makes no sense yet at this point because you didn't specify the roles of m and eta yet (only in the next sentence, for a special case).
line 149: space after 'approaches'. remove 'for sake of compactness'
== update after rebuttal ==
I thank the authors for partially clarifying the comparison to other sampling schemes, and on x^*.
However, I'm still disappointed by the answer "We are presently working on comparing with large scale commercial solvers such as CPLEX and ADMM based techniques for SDP in several different problems as a future applications paper." This should not be left for the future; this comparison to very standard competing methods is essential to see the merits of the proposed algorithm now, and its absence significantly weakens the current contribution.
nips_2017_1279 | Bandits Dueling on Partially Ordered Sets
We address the problem of dueling bandits defined on partially ordered sets, or posets. In this setting, arms may not be comparable, and there may be several (incomparable) optimal arms. We propose an algorithm, UnchainedBandits, that efficiently finds the set of optimal arms -the Pareto front-of any poset even when pairs of comparable arms cannot be a priori distinguished from pairs of incomparable arms, with a set of minimal assumptions. This means that UnchainedBandits does not require information about comparability and can be used with limited knowledge of the poset. To achieve this, the algorithm relies on the concept of decoys, which stems from social psychology. We also provide theoretical guarantees on both the regret incurred and the number of comparison required by UnchainedBandits, and we report compelling empirical results. | The paper addresses a variant of the dueling bandit problem where certain comparisons are not resolvable by the algorithm. In other words, whereas dueling bandit algorithms deal with tournaments (i.e. fully connected directed graphs), the algorithms in this paper deal with more general directed graphs.
The authors offer only vague conceptual reasons for the significance of this new problem setting; however, from a practical point of view, this is an extremely important problem that the existing literature does not address at all. More specifically, ties between arms are major stumbling blocks for most dueling bandit algorithms; indeed, many papers in the literature explicitly exclude problems with ties from their analyses. On the other hand, in practice one often has to deal with "virtual ties": these are pairs of arms that could not be distinguished from each other given the number of samples available to the algorithm.
A good example to keep in mind is a product manager at Bing or Google who has to pick the best ranker from a pool of a hundred proposed rankers using pairwise interleaved comparisons, but only has access to, say, 1% of the traffic for a week: if it happens that there are certain pairs of rankers in the pool that are so similar to each other that they could not be distinguished from each other even using the entirety of the available samples, then for all practical purposes there are ties in the resulting dueling bandit problem. So, our product manager would want an algorithm that would not waste the whole traffic on such virtual ties.
The authors propose two algorithms that deal with this problem under two different levels of generality:
1- an algorithm to find an \eps-approximation of the Pareto front in the more general social poset setting, which is an extension of the Condorcet assumption in the full tournament case;
2- an algorithm to find the exact Pareto front in the much more restrictive poset setting, which is an analogue of the total ordering assumption for regular dueling bandits.
I personally find the former much more interesting and important because there is plenty of experimental evidence that transitivity is very unlikely to hold in practice, although the second algorithm could lead to very interesting future work.
The authors use a peeling method that uses epochs with exponentially increasing lengths to gradually eliminate arms with smaller and smaller gaps. In the analysis, this has the effect of rendering the regret incurred from most edges in the comparison graph negligible, hence the linear dependence on the gaps in regret bounds. The price that one pays for this simpler proof technique is an extra factor of K, which in the worst case makes the bound of the form O(K^2 log T / \Delta).
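Schematically, the peeling structure resembles generic successive elimination with doubling epochs; the following is a generic sketch of that idea under Hoeffding-style confidence bounds, not the paper's actual UnchainedBandits procedure:

```python
import numpy as np

def peeling_elimination(duel, K, eps, delta):
    """duel(i, j) returns 1 if arm i beats arm j in a single comparison.
    Each epoch targets a smaller gap; arms confidently beaten by an active
    arm are peeled off, while incomparable pairs never eliminate each other."""
    active = set(range(K))
    wins, plays = np.zeros((K, K)), np.zeros((K, K))
    gap, epoch = 0.25, 1
    while gap >= eps / 2 and len(active) > 1:
        m = int(np.ceil(np.log(2 * K * K * epoch**2 / delta) / (2 * gap**2)))
        for i in list(active):
            for j in list(active):
                if i < j:
                    w = sum(duel(i, j) for _ in range(m))
                    wins[i, j] += w; wins[j, i] += m - w
                    plays[i, j] += m; plays[j, i] += m
        p = wins / np.maximum(plays, 1)
        # Peel j if some active i beats it beyond the resolvable gap.
        active -= {j for j in active for i in active if p[i, j] - 0.5 > 2 * gap}
        gap, epoch = gap / 2, epoch + 1
    return active
```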
As a general comment, I think it would be a good idea for the authors to devote more space in the paper to situating the results presented here with respect to the existing dueling bandit literature: I think that would be much more helpful to the reader than the lengthy discussion on decoys, for instance. In particular, there is a very precise lower bound proven for Condorcet dueling bandits in (Komiyama et al, COLT 2015), and I think it's important for the authors to specify the discrepancy between the result in Theorem 1 and this lower bound in the full tournament setting when \eps is smaller than the smallest gap in the preference matrix.
Furthermore, given the numerous open questions that arise out of this work, I highly recommend having a lengthier discussion of future work. For instance, there is some recent work on using Thompson Sampling for dueling bandits in (Wu and Liu, NIPS 2016) and it is an interesting question whether or not their methods could be extended to this setting. Moreover, it would be interesting to find out if more refined results for the von Neumann problem, such as those presented in (Balsubramani et al, COLT 2016) can be transported here.
Another criticism that I have of the write-up is that in the experiments it's not at all clear how IF2 and RUCB were modified, so please elaborate on the specifics of that. |
nips_2017_369 | Learning Spherical Convolution for Fast Features from 360°Imagery
While 360° cameras offer tremendous new possibilities in vision, graphics, and augmented reality, the spherical images they produce make core feature extraction non-trivial. Convolutional neural networks (CNNs) trained on images from perspective cameras yield "flat" filters, yet 360° images cannot be projected to a single plane without significant distortion. A naive solution that repeatedly projects the viewing sphere to all tangent planes is accurate, but much too computationally intensive for real problems. We propose to learn a spherical convolutional network that translates a planar CNN to process 360° imagery directly in its equirectangular projection. Our approach learns to reproduce the flat filter outputs on 360° data, sensitive to the varying distortion effects across the viewing sphere. The key benefits are 1) efficient feature extraction for 360° images and video, and 2) the ability to leverage powerful pre-trained networks researchers have carefully honed (together with massive labeled image training sets) for perspective images. We validate our approach against several alternative methods in terms of both raw CNN output accuracy and the accuracy of applying a state-of-the-art "flat" object detector to 360° data. Our method yields the most accurate results while saving orders of magnitude in computation versus the existing exact reprojection solution. | This paper describes a method to transform networks learned on perspective images to take spherical images as input. This is an important problem as fisheye and 360-degree sensors become more and more ubiquitous but training data is relatively scarce. The method first transforms the network architecture to adapt the filter sizes and pooling operations to convolutions on an equirectangular representation/projection. Next the filters are learned to match the feature responses of the original network when considering the projections to the tangent plane of the respective feature response. The filters are pre-learned layer-by-layer and fine-tuned to output features as similar as possible to the original network projected to the tangent planes. Detection experiments on Pano2Vid and PASCAL demonstrate that the technique performs slightly below the optimal performance of per-pixel tangent projections (while being significantly faster) and outperforms several baselines, including cube map projections.
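For reference, the equirectangular projection at the heart of the method is a simple pixel-to-angle mapping; a minimal sketch assuming the usual convention (longitude spans the image width, latitude the height), since the exact convention used in the paper is one of the points raised in the comments below:

```python
import numpy as np

def equirect_to_sphere(x, y, W, H):
    """Map equirectangular pixel coordinates (x, y) in a W x H image to
    spherical angles: longitude phi in [-pi, pi), latitude psi in
    [-pi/2, pi/2]. Distortion grows toward the poles, which is exactly
    why row-dependent (psi-dependent) filters are needed."""
    phi = 2.0 * np.pi * x / W - np.pi
    psi = np.pi / 2.0 - np.pi * y / H
    return phi, psi
```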
The paper is well written and largely easy to read. While simple and sometimes quite ad hoc (Sec 3.2), I like the idea and am not aware of any closely related work (though I am not an expert in that domain). The technical novelty is quite limited, though, as it largely boils down to retraining a newly structured network on equirectangular images based on a pre-trained model. I think the experiments still warrant publication (though I would think a pure computer vision venue would be more appropriate than NIPS, given the specific application), so overall I am slightly on the positive side.
Detailed comments (in order of the paper):
Introduction: The upper bound the method can achieve is defined by running a CNN on all possible tangent projections. This seems fair, but does not deal with very large and close objects which would be quite distorted on the tangent plane (and occur frequently in 360 degree videos / GoPro videos / etc). I don't think this is a weakness but maybe a comment on this might be good.
Related work: I am missing a section discussing related work on CNNs operating on distorted inputs and invariance of CNNs against geometric transformations, e.g., spatial transformer networks, warped convolutions, and other works which consider invariance.
l. 117: W -> W x W pixels
Eq. 2: I find ":=" odd here, maybe just "=" or approximate? Also, it is a bit sloppy to have x/y on the lhs and \phi/\psi on the rhs without formalizing their connection in that context (it is only textual before). Maybe ok though.
After this equation there is some text describing the overall approach, but I couldn't fully grasp it at this point. I would add more information here, for example that the goal is to build a network on the equirectangular image and match the outputs after each convolution to the corresponding outputs of the CNN computed on the tangent plane.
Fig. 2b: It is unclear to me how such a distortion pattern (top image) can arise. Why are the boundaries so non-smooth (double curved left image boundary for instance). I don't see how this could arise.
Fig. 2 caption: It would be good to say "each row of the equirectangular image (ie, \phi)" and in (b) what is inversely projected.
l. 131: It is never clearly described how these true outputs are computed from exhaustive planar reprojections. Depending on the receptive field size, is a separate network run on each possible tangent plane? More details would be appreciated.
Sec. 3.2: I found the whole description quite ad-hoc and heuristic. I don't have a concrete comment for improvement here though except that it should be mentioned early on that this can only lead to an approximate result when minimizing the loss in the end.
l. 171: It is unclear here if the loss is over all feature maps (must be) and over all layers (probably not?). Also, how are the weights initialized? This seems never mentioned.
l. 182: It is unclear why and how max-pooling is replaced by dilated convolutions. One operation has parameters while the other has none.
l. 191: It is not Eq. 2 that is used but a loss that considers the difference between the lhs and the rhs of that equation I assume? Why not stating this explicitly or better reuse the equation from the beginning of Sec 3.3.?
l. 209: What happens to the regions with no content? Are they excluded or 0-padded?
l. 222: Why is detection performance evaluated as classification accuracy? Shouldn't this be average precision or similar?
Experiments: Some more qualitative results would be nice to shed some light. Maybe some space can be made by squeezing the text partially. |
nips_2017_2872 | Gradient descent GAN optimization is locally stable
Despite the growing prominence of generative adversarial networks (GANs), optimization in GANs is still a poorly understood topic. In this paper, we analyze the "gradient descent" form of GAN optimization, i.e., the natural setting where we simultaneously take small gradient steps in both generator and discriminator parameters. We show that even though GAN optimization does not correspond to a convex-concave game (even for simple parameterizations), under proper conditions, equilibrium points of this optimization procedure are still locally asymptotically stable for the traditional GAN formulation. On the other hand, we show that the recently proposed Wasserstein GAN can have non-convergent limit cycles near equilibrium. Motivated by this stability analysis, we propose an additional regularization term for gradient descent GAN updates, which is able to guarantee local stability for both the WGAN and the traditional GAN, and also shows practical promise in speeding up convergence and addressing mode collapse. | The authors present a dynamical system based analysis of simultaneous gradient descent updates for GANs, by considering the limit dynamical system that corresponds to the discrete updates. They show that under a series of assumptions, an equilibrium point of the dynamical system is locally asymptotically stable, implying convergence to the equilibrium if the system is initialized in a close neighborhood of it. Then they show how some types of GANs fail to satisfy some of their conditions and propose a fix to the gradient updates that re-instate local stability. They give experimental evidence that the local-stability inspired fix yields improvements in practice on MNIST digit generation and simple multi-modal distributions.
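To fix intuition about the kind of update the paper analyzes, here is a toy sketch (my own construction, not the paper's code) of simultaneous gradient steps with a penalty on the discriminator's gradient norm added to the generator objective, run on a bilinear objective, which is a standard example of non-convergent cycling:

```python
import torch

theta = torch.tensor(1.0, requires_grad=True)  # generator parameter
w = torch.tensor(1.0, requires_grad=True)      # discriminator parameter
lr, eta = 0.05, 0.5                            # step size, penalty weight

def f(theta, w):
    return theta * w  # toy minimax objective; cycles under plain updates

for _ in range(2000):
    loss = f(theta, w)
    (g_w,) = torch.autograd.grad(loss, w, create_graph=True)
    # generator descends on f + eta * ||grad_w f||^2 (the stability penalty)
    (g_theta,) = torch.autograd.grad(loss + eta * g_w ** 2, theta)
    with torch.no_grad():
        theta -= lr * g_theta  # generator: gradient descent
        w += lr * g_w          # discriminator: gradient ascent
```

Without the penalty (eta = 0) the iterates orbit the equilibrium at the origin; with eta > 0 they spiral in, mirroring the local-stability claim.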
I think this paper provides some nice theoretical results on GAN training and gives interesting insights into GAN training through the dynamical-systems lens. The most promising part of the paper is the fact that their theory-inspired fix to gradient descent yields experimental improvements.
The theory part of the paper has some drawbacks: the assumption that at equilibrium the discriminator always outputs zero is fairly strong and hard to believe in practice. In particular, it's hard to believe that the generator will be outputting a distribution completely indistinguishable from the real data, which is the only case where this could be an equilibrium for the discriminator. Also, the fact that this is only local stability, without any arguments about the size of the region of attraction, might make the result irrelevant in practical training, where initializing in a tiny ball around the equilibrium would be impossible. Finally, it is not clear that local stability of the limit dynamical system also implies local stability of the discrete system with stochastic gradients: if the variance of the stochastic gradient is large and the region of attraction is tiny, then a tiny step outside the equilibrium together with a high-variance gradient step could push the system outside the region of attraction.
However, I do think that these drawbacks are remedied by the fact that their modification, based on the local asymptotic theory, did give noticeable improvements. Moreover, the local stability proof is non-trivial from a theoretical point of view.
nips_2017_1371 | Non-Convex Finite-Sum Optimization Via SCSG Methods
We develop a class of algorithms, as variants of the stochastically controlled stochastic gradient (SCSG) methods [21], for the smooth non-convex finite-sum optimization problem. Assuming the smoothness of each component, the complexity of SCSG to reach a stationary point with E||∇f(x)||^2 ≤ ε is O(min{ε^{-5/3}, ε^{-1} n^{2/3}}), which strictly outperforms stochastic gradient descent. Moreover, SCSG is never worse than the state-of-the-art methods based on variance reduction and it significantly outperforms them when the target accuracy is low. A similar acceleration is also achieved when the functions satisfy the Polyak-Lojasiewicz condition. Empirical experiments demonstrate that SCSG outperforms stochastic gradient methods on training multi-layer neural networks in terms of both training and validation loss. | Summary:
This paper proposes a variant of a family of stochastic optimization algorithms SCSG (closely related to SVRG), and analyzes it in the context of non-convex optimization. The main difference is that the outer loop computes a gradient on a random subset of the data, and the stochastic gradients in the inner loop have a random cardinality and are not restricted to those participating in the outer gradient. The analysis of the stochastic gradients despite this mismatch is the main technical contribution of the paper. The result is a set of new bounds on the performance of the algorithm which improve on both sgd and variance reduced algorithms, at least in the low-medium accuracy regimes. Experimental evidence supports the claims of improvement in objective reduction (both training error and test error) per iteration both before the end of the first pass on data and after, both by SCSG over SGD and by varying batch size variant over the fixed size variant, but only on two networks applied to the venerable and small MNIST dataset.
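For concreteness, one epoch of an SCSG-style method can be sketched as follows; step sizes, batch sizes, and the epoch schedule below are placeholders rather than the paper's tuned choices:

```python
import numpy as np

def scsg(grad, x0, n, B, b, eta, n_epochs, seed=0):
    """SCSG sketch: each epoch anchors on a batch gradient over a random
    subset of size B, then runs an SVRG-style inner loop whose length is
    geometrically distributed with mean roughly B/b. `grad(x, idx)` returns
    the average gradient of components `idx` at x."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for _ in range(n_epochs):
        I = rng.choice(n, size=B, replace=False)
        mu = grad(x, I)                 # anchor gradient on the outer batch
        x_anchor = x.copy()
        N = rng.geometric(b / (B + b))  # random inner-loop length
        for _ in range(N):
            J = rng.choice(n, size=b, replace=False)
            v = grad(x, J) - grad(x_anchor, J) + mu  # variance-reduced step
            x = x - eta * v
    return x
```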
Pros of acceptance:
- The claimed results are interesting and shed light on an important regime.
- The proof method might be more widely applicable.
Cons:
- The experimental section is insufficient for the claims made.
Quality:
Theoretical contribution:
The main contribution of the paper seems to be an analysis of the discrepancy due to using partial long-term gradients, using the behavior of sampled means without replacement. If novel, this is a useful contribution to our understanding of a trick that is practically important for SVRG-type algorithms.
I’ve spot checked a few proofs in the supplementary material, and found a minor error:
- Lemma B.3 assumes eta_j*L < 1, but table 2 does not mention L in giving the step sizes.
The empirical evaluation uses only one small and old dataset, which is bad methodology (there are many old small datasets, and any reasonable algorithm will be good at one of them). Other than that, it is well designed, and the comparison in terms of number of gradients is reasonable, but should be complemented by a wall-time comparison, as some algorithms have more overhead than others. The paper claims SCSG is never worse than SVRG and better in early stages; this justifies a direct comparison to SVRG (with B = n) for each part: an added plot in the existing comparison for the early stages, and a separate experiment that compares (on a log-log plot) the rates of local convergence to high precision inside a basin of attraction.
Clarity:
The paper is written quite well: the algorithm is clear including the comparison to previous variants of SCSG and of SVRG, the summary of bounds is helpful, and the presentation of bounds in appropriate detail in different parts.
Some minor comments:
- L.28 table 1:
o What is “best achievable” referring to in [26,27]? A lower bound? An algorithm proven optimal in some sense? Should that be "best available" instead?
- L.30 - 33: these sentences justify the comparison to SGD in a precision regime, but are not very clear. Substituting at the desired regime (eps = n^-1/2 or some such) would make the bounds directly comparable.
- L.56 - 59 are confusing. Applying variance reduction in the non-convex setting is not novel.
- Algorithm 1, L.2: \subset instead of \in, for consistency use the concise notation of 7.
- L.106: “In order to minimize the amount the tuning works” something is wrong with that sentence.
- L.116: “and the most of the novelty” first “the” is probably superfluous.
Supplementary material:
- L.8 n should be M
- L.13 B > 0 does not appear in the rest of the lemma or in the proof. Presume it is \gamma?
Originality:
I leave it to reviewers more up to date with the non-convex literature to clarify whether the claimed novelties are indeed novel.
Significance:
This paper seems to extend the analysis of the existing SCSG, a practical SVRG variant, to the common non-convex regime. Specifically, showing theoretically that it improves on SGD in early stages is quite important.
-----
Updates made after rebuttal phase:
- The authors addressed the issues I'd raised with the proofs. One was a typo, one was a misinterpretation on my part. There may well still be quality issues, but they appear to be minor. I removed the corresponding con. I think the paper could be an 8 if its empirical evaluation was not so flawed. |
nips_2017_2471 | Adaptive Classification for Prediction Under a Budget
We propose a novel adaptive approximation approach for test-time resource-constrained prediction motivated by Mobile, IoT, health, security and other applications, where constraints in the form of computation, communication, latency and feature acquisition costs arise. We learn an adaptive low-cost system by training a gating and prediction model that limits utilization of a high-cost model to hard input instances and gates easy-to-handle input instances to a low-cost model. Our method is based on adaptively approximating the high-cost model in regions where low-cost models suffice for making highly accurate predictions. We pose an empirical loss minimization problem with cost constraints to jointly train gating and prediction models. On a number of benchmark datasets our method outperforms the state of the art, achieving higher accuracy for the same cost. | 1. Summary of paper
This paper introduced an adaptive method for learning two functions. First, a `cheap' classifier that incurs a small resource cost by using a small set of features. Second, a gating function that decides whether an input is easy enough to be sent to the cheap classifier or is too difficult and must be sent to a complex, resource-intensive classifier. In effect, this method bears a lot of similarity to a 2-stage cascade, where the first stage and the gating function are learned jointly.
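At test time the learned system is very simple; roughly (the names and the thresholding rule here are illustrative, not the paper's exact parameterization):

```python
def adaptive_predict(x, gate, cheap_model, expensive_model, threshold=0.5):
    """Route easy inputs to the low-cost model, hard ones to the high-cost
    model. The gate itself only uses cheap features, so easy instances
    never incur the expensive model's cost."""
    if gate(x) < threshold:      # gate believes the cheap model suffices
        return cheap_model(x)
    return expensive_model(x)    # full-cost fallback for hard instances
```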
2. High level subjective
The paper is written clearly for a reader familiar with budgeted learning, although I think a few parts of the paper are confusing (I'll go into this more below). I believe the idea of approximating an expensive, high-accuracy classifier is new (cascades are the most similar, but I believe cascades usually just consider using the same sort of classifiers in every stage). That said, it is very similar to a 2-stage cascade with different classifiers. I think there are certainly sufficient experiments demonstrating this method. Likely this method would be interesting to industry practitioners due to the increased generalization accuracy per cost.
3. High level technical
Why not try to learn a cost-constrained q function directly? If you removed the KL term and used q in the feature cost function instead of g, wouldn't you learn a q that tried to be as accurate as possible given the cost constraints? Did you learn a g separately in order to avoid some extra non-convexity? Or is it otherwise crucial? I would recommend explaining this choice more clearly.
I would have loved to see this model look less like a cascade. For instance, if there were multiple expensive 'experts' that were learned to be diverse w.r.t each other and then multiple gating functions along with a single cheap classifier were learned jointly, to send hard instances to one of the experts, then this would be a noticeable departure from a cascade. Right now I'm torn between liking this method and thinking it's a special cascade (using techniques from (Chen et al., 2012) and (Xu et al., 2012)).
4. Low level technical
- Line 161/162: I don't buy this quick sentence. I think this is much more complicated. Sure we can constrain F and G to have really simple learners but this won't constrain computation per instance, and the only guarantee you'll have is that your classifier is at most as computationally expensive as the most expensive classifier in F or G. You can do better by introducing computational cost terms into your objective.
- Line 177: There must be certain conditions required for alternating minimization to converge, maybe convexity and a decaying learning rate? I would just clarify that your problem satisfies these conditions.
- I think Figure 5 is a bit confusing. I would move Adapt_Gbrt RBF to the top of the second column in the legend, and put GreedyMiser in its original place. This way for every row except the last we are comparing the same adaptive/baseline method.
5. Summary of review
While this method achieves state-of-the-art accuracy in budgeted learning, it shares a lot of similarities with previous work (Chen et al., 2012) and (Xu et al., 2012) that ultimately harm its novelty. |
nips_2017_2470 | Thy Friend is My Friend: Iterative Collaborative Filtering for Sparse Matrix Estimation
The sparse matrix estimation problem consists of estimating the distribution of an n × n matrix Y, from a sparsely observed single instance of this matrix where the entries of Y are independent random variables. This captures a wide array of problems; special instances include matrix completion in the context of recommendation systems, graphon estimation, and community detection in (mixed membership) stochastic block models. Inspired by classical collaborative filtering for recommendation systems, we propose a novel iterative, collaborative-filtering-style algorithm for matrix estimation in this generic setting. We show that the mean squared error (MSE) of our estimator converges to 0 at the rate of O(d^2 (pn)^{-2/5}) as long as ω(d^5 n) random entries from a total of n^2 entries of Y are observed (uniformly sampled), E[Y] has rank d, and the entries of Y have bounded support. The maximum squared error across all entries converges to 0 with high probability as long as we observe a little more, Ω(d^5 n ln^5(n)) entries. Our results are the best known sample complexity results in this generality. | This paper studies a rather generic form of sparse matrix estimation problem. Inspired by [1-3], it suggests an algorithm that is roughly based on averaging matrix elements that are r-hop common neighbours of the two nodes for which the estimate is being computed.
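A toy radius-1 version of this estimator is useful to fix intuition; the paper's algorithm uses r-hop paths precisely because, in the sparse regime, direct overlaps as assumed below are too rare:

```python
import numpy as np

def estimate_entry(Y, obs, u, v, eta=0.2):
    """Classical collaborative-filtering sketch: compare row u to other rows
    on commonly observed columns, and average Y[a, v] over rows a that are
    close to u and observe column v. `obs` is a boolean observation mask."""
    vals = []
    for a in range(Y.shape[0]):
        common = obs[a] & obs[u]
        if a == u or not common.any() or not obs[a, v]:
            continue
        dist = np.mean((Y[a, common] - Y[u, common]) ** 2)
        if dist < eta:           # row a looks similar to row u
            vals.append(Y[a, v])
    return np.mean(vals) if vals else np.nan
```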
The authors then prove sample complexity bounds for the MSE to go to zero.
While there is a lot of related work in this direction the present paper discussed this nicely and treats the many cases in a unified framework and allows for generic noise function and sparser matrices. The paper provides (as far as I know) best bounds that hold in this generality. I think this is a very good paper that should be accepted.
Some comments suggesting improvements follow:
While treating the noise in this generality in the sparse regime is interesting and, as far as I know, original, it might be worth mentioning that in the denser case results on universality with respect to the noise are known for the matrix estimation problem ("MMSE of probabilistic low-rank matrix estimation: Universality with respect to the output channel" by Lesieur et al, and "Asymptotic mutual information for the two-groups stochastic block model" by Deshpande, Abbe, Montanari).
The algorithm seems related to spectral algorithms designed for sparse graphs, based on the non-backtracking matrix. While the introduction mentions the relation to the standard spectral algorithm in only one sentence, the relation to belief propagation and the non-backtracking operator is discussed on page 6. At the same time the original works suggesting and analyzing those approaches are not cited there (i.e. "Spectral redemption in clustering sparse networks" by Krzakala et al, or "Non-backtracking spectrum of random graphs: community detection and non-regular Ramanujan graphs" by Bordenave et al.). This relation should perhaps be discussed in the related work section or in the introduction, and not in the section describing the details of the algorithm.
The introduction discusses an unpublished work on the mixed membership SBM, in particular a gap between the information-theoretic lower bound and a conjectured algorithmic threshold in that case. Perhaps it should be mentioned that such results originate from the normal SBM, where both the information-theoretic threshold for detection and the conjectured algorithmic threshold were studied in detail, e.g. in "Asymptotic analysis of the stochastic block model for modular networks and its algorithmic applications" by Decelle et al. Also in that case the gap between the two thresholds is d (for large d).
While the main contribution of the paper is theoretical, it would have been nice to see some practical demonstration of the algorithm and a comparison to other algorithms (at the same time, this should not be used as an argument for rejection). Evidence of the scalability of the algorithm should be presented.
Minor points:
While the o(), O(), \Omega() notations are rather standard, I was not very familiar with \omega() and had to look it up to be sure. Perhaps more of the NIPS audience would not be familiar with it, and the definition could be briefly recalled.
I've read the author's feedback and took it into account in my score. |
nips_2017_261 | Towards Accurate Binary Convolutional Neural Network
We introduce a novel scheme to train binary convolutional neural networks (CNNs), i.e., CNNs with weights and activations constrained to {-1,+1} at run-time. It has been known that using binary weights and activations drastically reduces memory size and accesses, and can replace arithmetic operations with more efficient bitwise operations, leading to much faster test-time inference and lower power consumption. However, previous works on binarizing CNNs usually result in severe prediction accuracy degradation. In this paper, we address this issue with two major innovations: (1) approximating full-precision weights with the linear combination of multiple binary weight bases; (2) employing multiple binary activations to alleviate information loss. The implementation of the resulting binary CNN, denoted as ABC-Net, is shown to achieve much closer performance to its full-precision counterpart, and even reach comparable prediction accuracy on ImageNet and forest trail datasets, given adequate binary weight bases and activations. | The paper describes a new method for training binary convolutional neural networks. The scheme addresses both using binary weights and activations. The key insight here is to approximate the real-valued weights via a linear combination of M binary basis weights. The coefficients for reconstructing the real weights can be found using least squares in the forward pass, and then pulled outside the convolution to allow for fast binary convolution at test time. A similar approach is taken for the activations, but in this case the weights and shifts are trained as normal during backpropagation. The result is a network that requires M times as many binary convolutions as a straightforward binary neural network, but it is expected that these will be significantly more hardware friendly. Experiments on ImageNet show that the approach outperforms competing approaches.
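Concretely, the weight-approximation step just described can be sketched as follows; the shift scheme is one plausible choice consistent with the description, and the exact u_i values in the paper may differ:

```python
import numpy as np

def binary_weight_bases(W, M=3):
    """Approximate real-valued weights W by sum_i alpha_i * B_i with binary
    bases B_i in {-1, +1}: each basis is the sign of the mean-centered
    weights shifted by a multiple u_i of their standard deviation, and the
    coefficients alpha are fit by least squares."""
    w = W.ravel()
    mean, std = w.mean(), w.std()
    shifts = np.linspace(-1.0, 1.0, M)  # u_i spread over [-1, 1]
    B = np.stack([np.where(w - mean + u * std >= 0, 1.0, -1.0)
                  for u in shifts], axis=1)
    alpha, *_ = np.linalg.lstsq(B, w, rcond=None)  # min ||B @ alpha - w||^2
    return B, alpha, (B @ alpha).reshape(W.shape)
```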
Binary convolutional networks are an important topic with obvious commercial applications. The idea in the paper is good, and the results on ImageNet are encouraging.
Comments
1. Given that you approximate a real valued convolution with M binary convolutions, it seems to me that the approximate cost would be similar to using M binary CNNs like XNOR nets. With this in mind, I think the approach should really be compared to an ensemble of M XNOR nets trained with different initial weights, both in terms of total train time and prediction accuracy.
2. The paper suggests the v_i shifts are trainable. Why, then, are values for these given in Table 4 and the supplementary material? Are these the initial values?
3. The paper uses fixed values for the u_i's but mentions that these could also be trained. Why not train them too? Have you tried this?
4. It's not clear from the paper if solving regression problems for the alphas during training adds much to the train time.
5. It would be nice to have an estimation of the total cost of inference for the approach on a typical net (e.g. ResNet-18) given some realistic assumptions about the hardware, and compare this with other methods (perhaps in the supplementary material) |
nips_2017_1442 | Generative Local Metric Learning for Kernel Regression
This paper shows how metric learning can be used with Nadaraya-Watson (NW) kernel regression. Compared with standard approaches, such as bandwidth selection, we show how metric learning can significantly reduce the mean square error (MSE) in kernel regression, particularly for high-dimensional data. We propose a method for efficiently learning a good metric function based upon analyzing the performance of the NW estimator for Gaussian-distributed data. A key feature of our approach is that the NW estimator with a learned metric uses information from both the global and local structure of the training data. Theoretical and empirical results confirm that the learned metric can considerably reduce the bias and MSE for kernel regression even when the data are not confined to Gaussian. | Metric learning is one of the fundamental problems in person re-identification. This paper presents a metric learning method using Nadaraya-Watson (NW) kernel regression. The key feature of the work is that the NW estimator with a learned metric uses information from both the global and local structure of the training data. Theoretical and empirical results confirm that the learned metric can considerably reduce the bias and MSE for kernel regression even when the data are not confined to Gaussian.
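For reference, the NW estimator under a Mahalanobis metric is a one-liner; a minimal sketch (the paper's contribution is how the matrix L defining the metric is learned, which is not shown here):

```python
import numpy as np

def nw_predict(X, y, x_query, L):
    """Nadaraya-Watson regression with a Gaussian kernel under the metric
    d(x, x') = ||L (x - x')||; the bandwidth is folded into L."""
    diffs = (X - x_query) @ L.T
    weights = np.exp(-0.5 * np.sum(diffs ** 2, axis=1))
    return np.sum(weights * y) / np.sum(weights)
```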
The main contribution lies in the following aspects:
1. Provided a formulation of how metric learning can be embedded in a kernel regression method.
2. Proposed and proved a theorem that guarantees the existence of a good metric that can eliminate the leading-order bias at any point, regardless of the assumption of a Gaussian distribution of the data.
3. Under a Gaussian model assumption, provided an efficient algorithm that is guaranteed to achieve an optimal metric solution for reducing regression error.
Overall, this paper's reasoning is rigorous and the description is clear. The experiments sufficiently demonstrate the authors' claims for the proposed method.
However, it needs major revision before it is accepted. My comments are as follows:
1. The structure of the paper is confusing. I suggest the authors add the contributions of the paper to the introduction section. The use and importance of the paper need to be further clarified.
2. I also found some grammar problems in this paper. The authors need to carefully check for these mistakes, which is very important for readers.
3. The limitations of the proposed method are scarcely discussed.
4. The related work also does not cover the latest developments, since there are too few references after 2016.
5. The authors show only very few experimental results in Section 5, which I think is not enough. Besides MSE, the authors should also provide other comparisons, such as root mean squared error (RMSE) or classification accuracy. In this way, the results of the paper will be more convincing.
nips_2017_3544 | Learning to Model the Tail
We describe an approach to learning from long-tailed, imbalanced datasets that are prevalent in real-world settings. Here, the challenge is to learn accurate "few-shot" models for classes in the tail of the class distribution, for which little data is available. We cast this problem as transfer learning, where knowledge from the data-rich classes in the head of the distribution is transferred to the data-poor classes in the tail. Our key insights are as follows. First, we propose to transfer meta-knowledge about learning-to-learn from the head classes. This knowledge is encoded with a meta-network that operates on the space of model parameters and is trained to predict many-shot model parameters from few-shot model parameters. Second, we transfer this meta-knowledge in a progressive manner, from classes in the head to the "body", and from the "body" to the tail. That is, we transfer knowledge in a gradual fashion, regularizing meta-networks for few-shot regression with those trained with more training data. This allows our final network to capture a notion of model dynamics that predicts how model parameters are likely to change as more training data is gradually added. We demonstrate results on image classification datasets (SUN, Places, and ImageNet) tuned for the long-tailed setting that significantly outperform common heuristics, such as data resampling or reweighting. | Summary
-------
The paper proposes an approach for transfer learning for multi-class classification problems that aids the learning of categories with few training examples (the categories in the tail of the distribution of numbers of examples per category). It is based on ideas of meta-learning: it builds a (meta-)model of the dynamics that accompany the change in model parameters as more training data is made available to a classifier.
Specifically, the proposed approach takes inspiration from existing work on meta-learning [20] but extends it by applying it to CNNs, utilizing deep residual networks as the meta-model, and applying the framework to a general 'long-tail problem' setting in which the number of training examples available is different and not fixed between categories.
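For concreteness, the meta-model component can be sketched as a residual mapping in model-parameter space (dimensions, depth, and the training-pair construction below are illustrative, not the paper's exact architecture):

```python
import torch
import torch.nn as nn

class MetaResBlock(nn.Module):
    """Residual meta-network: maps few-shot classifier parameters toward
    their many-shot counterparts. Training pairs (theta_few, theta_many)
    are obtained from head classes, where both can be fit; the trained
    mapping is then applied to tail classes with only theta_few."""
    def __init__(self, dim, hidden=256):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(),
                               nn.Linear(hidden, dim))

    def forward(self, theta_few):
        return theta_few + self.f(theta_few)  # predicted many-shot params
```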
Experiments are conducted on curated versions of existing datasets (curated such that they exhibit strong long-tail distributions): SUN-397 [13], Places [7], and ImageNet [5]. The performance of the proposed method is demonstrated to be considerably higher than several more ad-hoc baselines from the literature.
Novelty and significance
------------------------
While the proposed method draws inspiration from a variety of existing methods in transfer learning (notably [20]), the particular formulation using deep residual networks as a meta-model, modeling the dynamics of adding progressively more training examples, and handling the case of different numbers of available training examples per category in a unified and automatic way are novel aspects of the paper to my knowledge. These aspects could potentially be useful to a wider range of applications than (image) classification and facilitate the treatment of long-tail problems, as they arise in many real-world application scenarios.
Technical correctness
---------------------
The proposed method of (meta-)modeling the gradual addition of training examples seems plausible and technically correct. It would, however, be interesting to learn more about the complexity of the proposed approach in terms of training time, since a significant number of individual models has to be trained prior to training the meta-model.
Experimental evaluation
----------------------
The experimental evaluation seems convincing: it is done on three different existing datasets, compares the proposed approach to existing, more ad-hoc baselines as well as [20], and highlights the contributions to performance of the different proposed system components (notably, taking the step from fixed to recursive splitting of categories). Performance is indeed best for the proposed method, by considerable margins, particularly pronounced for tail categories.
Presentation
------------
The paper text seems to suffer from a LaTeX compilation issue that results in multiple missing bibliographic and figure references (question marks instead of numbers: lines 79, 176, 239, 277, Tab. 1), which is unfortunate and a possible indication that the paper submission has been rushed or a premature version uploaded. The text, however, does contain enough information to know what is going on despite the missing references.
The authors should please comment on this, and on what the correct references are, in their rebuttal.
Apart from this issue, the paper is written quite well and clearly presents the proposed ideas, experimental results, and relation to prior work.
Typo line 74: 'changeling' - challenging; line 96: 'the the' |
nips_2017_1443 | Information Theoretic Properties of Markov Random Fields, and their Algorithmic Applications
Markov random fields are a popular model for high-dimensional probability distributions. Over the years, many mathematical, statistical and algorithmic problems on them have been studied. Until recently, the only known algorithms for provably learning them relied on exhaustive search, correlation decay or various incoherence assumptions. Bresler [4] gave an algorithm for learning general Ising models on bounded degree graphs. His approach was based on a structural result about mutual information in Ising models. Here we take a more conceptual approach to proving lower bounds on the mutual information. Our proof generalizes well beyond Ising models, to arbitrary Markov random fields with higher order interactions. As an application, we obtain algorithms for learning Markov random fields on bounded degree graphs on n nodes with r-order interactions in n^r time and log n sample complexity. Our algorithms also extend to various partial observation models. | The setup in Eq. 1 is unclear on a first reading. What is the exact meaning of r here? This is a critical constant in the major result proven, and it is not carefully defined anywhere in the paper. I *think* that r is the maximum clique size. This is particularly ambiguous because of the statement that theta^{i1,i2,...,il}(a1,a2,...,an) "is assumed to be zero on non-cliques". I've assumed the meaning of this is that, more precisely, the function ignores aj unless there is some value t such that aj = it. Then this makes sense as a definition of a Markov random field with maximum clique size r. (I encourage clarification of this in the paper if correct, and a rebuttal if I have misunderstood this.)
In any case, this is an unusual definition of an MRF in the NIPS community. I think the paper would have more impact if a more standard definition of an MRF were used in the introduction, and then the equivalence/conversion were just done in the technical part. The main result only references the constant r, so there is no need to introduce all this just to state the result.
On a related note, it's worth giving a couple of sentences on the relationship to Bresler's result. (For an Ising model, r=2, etc.) In particular, one part of this relationship is a bit unclear to me. As far as I know, Bresler's result applies to arbitrary Ising models, irrespective of graph structure. So, take an Ising model with a dense structure so that there are large cliques. In this case, it seems to me that the current paper's result will use (r-1) nodes while Bresler's will still use a single node; thus in that particular case the current paper's result is weaker. Is this correct? (It's not a huge weakness even if true, but worth discussing in detail since it could guide future work.) [EDIT: I see now that this is *not* the case, see response below]
I also feel that the discussion of Chow-Liu is missing a very important aspect. Chow-Liu doesn't just correctly recover the true structure when run on data generated from a tree. Rather, Chow-Liu finds the *maximum-likelihood* tree for data from an *arbitrary* distribution. This is a property that almost all follow-up work does not satisfy (Srebro showed bounds). The discussion in this paper is all true, but doesn't mention that "maximum likelihood" or "model robustness" issue at all, which is hugely important in practice.
For reference: the basic result is that, given a single node $u$ and a hypothesized set of "separators" $S$ (neighbors of $u$), there will be some set of nodes $I$ of size at most $r-1$ such that $u$ and $I$ have positive conditional mutual information.
The proof of the central result proceeds by setting up a "game", which works as follows:
1) We pick a node $X_u$ to look at.
2) Alice draws two joint samples $X$ and $X'$.
3) Alice draws a random value $R$ (uniformly from the space of possible values of $X_u$).
4) Alice picks a random set of neighbors of $X_u$, call them $X_I$.
5) Alice tells Bob the values of $X_I$.
6) Bob gets to wager on whether $X_u=R$ or $X'_u=R$. Bob wins his wager if $X_u=R$, loses his wager if $X'_u=R$, and nothing happens if both or neither are true.
Here I first felt like I *must* be missing something, since this is just establishing that $X_u$ has mutual information with its neighbors. (There is no reference to the "separator" set $S$ in the main result.) However, it later appears that this is just a warmup (regular mutual information) and can be extended to the conditional setting.
Actually, couldn't the conditional setting itself be phrased as a game, something like:
1) We pick a node $X_u$ and a set of hypothesized "separators" $X_S$ to look at.
2) Alice draws two joint samples $X$ and $X'$, both conditioned on some random value for $X_S$.
3) Alice draws a random value $R$ (uniformly from the space of possible values of $X_u$).
4) Alice picks a random set of nodes (not including $X_u$ or $X_S$), call them $X_I$.
5) Alice tells Bob the values of $X_I$.
6) Bob gets to wager on whether $X_u=R$ or $X'_u=R$. Bob wins his wager if $X_u=R$, loses his wager if $X'_u=R$, and nothing happens if both or neither are true.
I don't think this adds anything to the final result, but it is an intuition for something closer to the final goal.
After all this, the paper discusses an algorithm for greedily learning an MRF graph (in sort of the obvious way, by exploiting the above result). There is some analysis of how often you might go wrong estimating mutual information from samples, which I appreciate.
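As a rough illustration of how a positive-conditional-mutual-information guarantee turns into a greedy learner, here is a simplified sketch with a plug-in estimator; the paper's thresholds, pruning step, and sample-complexity bookkeeping are omitted:

```python
import numpy as np
from itertools import combinations
from collections import Counter

def cond_mi(samples, u, I, S):
    """Plug-in estimate of I(X_u; X_I | X_S) from discrete samples
    (rows = samples, columns = variables)."""
    def H(cols):
        if not cols:
            return 0.0
        counts = Counter(map(tuple, samples[:, cols]))
        p = np.array(list(counts.values()), dtype=float) / len(samples)
        return float(-(p * np.log(p)).sum())
    return H([u] + S) + H(I + S) - H([u] + I + S) - H(S)

def greedy_neighborhood(samples, u, r, tau):
    """Grow a candidate neighborhood S of node u: while some set I of at
    most r-1 other nodes has conditional mutual information with u above
    tau given S, add it. A pruning pass (not shown) removes non-neighbors."""
    n_vars = samples.shape[1]
    S = []
    while True:
        rest = [v for v in range(n_vars) if v != u and v not in S]
        found = next((list(I) for s in range(1, r)
                      for I in combinations(rest, s)
                      if cond_mi(samples, u, list(I), S) > tau), None)
        if found is None:
            return S
        S += found
```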
Overall, as far as I can see, the result appears to be true. However, I'm not sure that a purely theoretical result is sufficiently interesting (at NIPS) to be published with no experiments. As I mentioned above, Chow-Liu has the major advantage of finding the maximum-likelihood solution, which the current method does not appear to have. (It would violate hardness results due to Srebro.) Further, note that the bound given in Theorem 5.1, despite the high order, is only for correctly recovering the structure of a single node, so there would need to be another level of applying the union bound to this result with lower delta to get a correct full model.
EDIT AFTER REBUTTAL:
Thanks for the rebuttal. I see now that I should understand $r$ not as the maximum clique size, but rather as the maximum order of interactions. (E.g. if one has a fully-connected Ising model, you would have r=2 but the maximum clique size would be n.) This answers the question I had about this being a generalization of Bresler's result. (That is, this paper's result is a strict generalization.) This does slightly improve my estimation of this paper, though I thought this was a relatively small concern in any case. My more serious concerns are about whether a purely theoretical result is appropriate for NIPS.
nips_2017_2623 | Teaching Machines to Describe Images via Natural Language Feedback
Robots will eventually be part of every household. It is thus critical to enable algorithms to learn from and be guided by non-expert users. In this paper, we bring a human in the loop, and enable a human teacher to give feedback to a learning agent in the form of natural language. We argue that a descriptive sentence can provide a much stronger learning signal than a numeric reward in that it can easily point to where the mistakes are and how to correct them. We focus on the problem of image captioning in which the quality of the output can easily be judged by non-experts. In particular, we first train a captioning model on a subset of images paired with human written captions. We then let the model describe new images and collect human feedback on the generated descriptions. We propose a hierarchical phrase-based captioning model, and design a feedback network that provides reward to the learner by conditioning on the human-provided feedback. We show that by exploiting descriptive feedback on new images our model learns to perform better than when given human written captions on these images. | The paper presents an approach for automatically captioning images where the model also incorporates natural language feedback from humans along with ground truth captions during training. The proposed approach uses reinforcement learning to train a phrase based captioning model where the model is first trained using maximum likelihood training (supervised learning) and then further finetuned using reinforcement learning where the reward is weighted sum of BLEU scores w.r.t to the ground truth and the feedback sentences provided by humans. The reward also consists of phrase level rewards obtained by using the human feedback.
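Concretely, the combined reward described above can be sketched as follows; the weighting and the BLEU variant are illustrative, and the phrase-level term is assumed to come from the trained feedback network:

```python
from nltk.translate.bleu_score import sentence_bleu

def combined_reward(candidate, gt_caption, feedback_caption,
                    lam=0.5, phrase_reward=0.0):
    """Weighted sum of sentence-level BLEU against the ground-truth caption
    and against the human feedback sentence, plus a phrase-level bonus."""
    r_gt = sentence_bleu([gt_caption.split()], candidate.split())
    r_fb = sentence_bleu([feedback_caption.split()], candidate.split())
    return lam * r_gt + (1.0 - lam) * r_fb + phrase_reward
```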
The proposed model is trained and evaluated on MSCOCO image caption data. The proposed model is compared with a pure supervised learning (SL) model and a model trained using reinforcement learning (RL) without any feedback. The proposed model outperforms the pure SL model by a large margin and the RL model by a small margin.
Strengths:
1. The paper is well motivated with the idea of using a human in the loop for training image captioning models.
2. The baselines (SL and RL) are reasonable and the additional experiment of using 1 GT vs. 1 feedback caption is insightful and interesting.
3. The work can be of great significance, especially if the improvements over the RL-without-feedback baseline are significantly large.
Weaknesses:
1. The paper is motivated by using natural language feedback just as humans would provide while teaching a child. However, in addition to natural language feedback, the proposed feedback network also uses three additional pieces of information – which phrase is incorrect, what is the correct phrase, and what is the type of the mistake. Using these additional pieces is more than just natural language feedback. So I would like the authors to be clearer about this in the introduction.
2. The improvements of the proposed model over the RL-without-feedback model are not so high (row 3 vs. row 4 in Table 6), and in fact a bit worse for BLEU-1. So, I would like the authors to verify whether the improvements are statistically significant.
3. How much does the information about incorrect phrase / corrected phrase and the information about the type of the mistake help the feedback network? What is the performance without each of these two types of information and what is the performance with just the natural language feedback?
4. In the caption of Figure 1, the paper mentions that in training the feedback network, along with the natural language feedback sentence, the phrase marked as incorrect by the annotator and the corrected phrase are also used. However, from equations 1-4, it is not clear where the information about the incorrect phrase and the corrected phrase is used. Also, L175 and L176 are not clear. What do the authors mean by “as an example”?
5. L216-217: What is the rationale behind using cross entropy for the first (P – floor(t/m)) phrases? What is the performance when using the reinforcement algorithm for all phrases?
6. L222: Why is the official test set of MSCOCO not used for reporting results?
7. FBN results (Table 5): can the authors please shed light on why the performance degrades when using the additional information about missing/wrong/redundant?
8. Table 6: can the authors please clarify why the MLEC accuracy using ROUGE-L is so low? Is that a typo?
9. Can the authors discuss the failure cases of the proposed (RLF) network in order to guide future research?
10. Other errors/typos:
a. L190: complete -> completed
b. L201, “We use either … feedback collection”: incorrect phrasing
c. L218: multiply -> multiple
d. L235: drop “by”
Post-rebuttal comments:
I agree that proper evaluation is critical. Hence I would like the authors to verify that the baseline results [33] are comparable and the proposed model is adding on top of that.
So, I would like to change my rating to marginally below acceptance threshold. |
nips_2017_566 | Compression-aware Training of Deep Networks
In recent years, great progress has been made in a variety of application domains thanks to the development of increasingly deeper neural networks. Unfortunately, the huge number of units of these networks makes them expensive both computationally and memory-wise. To overcome this, exploiting the fact that deep networks are over-parametrized, several compression strategies have been proposed. These methods, however, typically start from a network that has been trained in a standard manner, without considering such a future compression. In this paper, we propose to explicitly account for compression in the training process. To this end, we introduce a regularizer that encourages the parameter matrix of each layer to have low rank during training. We show that accounting for compression during training allows us to learn much more compact, yet at least as effective, models than state-of-the-art compression techniques.
Specifically, we encourage each parameter matrix to have low rank in the training loss, and rely on a stochastic proximal gradient descent strategy to optimize the network parameters. In essence, and by contrast with methods that aim to learn uncorrelated units to prevent overfitting [5,54,40], we seek to learn correlated ones, which can then easily be pruned in a second phase. Our compression-aware training scheme therefore yields networks that are well adapted to the following post-processing stage. As a consequence, we achieve higher compression rates than the above-mentioned techniques at virtually no loss in prediction accuracy.
Our approach constitutes one of the very few attempts at explicitly training a compact network from scratch. In this context, the work of [4] has proposed to learn correlated units by making use of additional noise outputs. This strategy, however, is only guaranteed to have the desired effect for simple networks and has only been demonstrated on relatively shallow architectures. In the contemporary work [51], units are coordinated via a regularizer acting on all pairs of filters within a layer. While effective, exploiting all pairs can quickly become cumbersome in the presence of large numbers of units. Recently, group sparsity has also been employed to obtain compact networks [2,50]. Such a regularizer, however, acts on individual units, without explicitly aiming to model their redundancies. Here, we show that accounting for interactions between the units within a layer allows us to obtain more compact networks. Furthermore, using such a group sparsity prior in conjunction with our compression-aware strategy lets us achieve even higher compression rates.
We demonstrate the benefits of our approach on several deep architectures, including the 8-layer DecomposeMe network of [1] and the 50-layer ResNet of [23]. Our experiments on ImageNet and ICDAR show that we can achieve compression rates of more than 90%, thus hugely reducing the number of required operations at inference time. | The authors present a regularization term that encourages weight matrices to be low rank during training, effectively increasing the compression of the network, and making it possible to explicitly reduce the rank during post-processing, reducing the number of operations required for inference. Overall the paper seems like a good and well written paper on an interesting topic. I have some caveats however: firstly the authors do not mention variational inference, which also explicitly compresses the network (in the literal sense of reducing the number of bits in its description length) and can be used to prune away many weights after training - see 'Practical Variational Inference for Neural Networks' for details. More generally, almost any regularizer provides a form of implicit 'compression-aware training' (that's why they work - simpler models generalize better) and can often be used to prune networks post-hoc. For example a network trained with l1 or l2 regularization will generally end up with many weights very close to 0, which can be removed without greatly altering network performance. I think it's important to clarify this, especially since the authors use an l2 term in addition to their own regularizer during training. They also don't seem to compare how well previous low-rank post-processing works with and without their regulariser, or with other regularisers used in previous work. All of these caveats could be answered by providing more baseline results in the experimental section, demonstrating that training with this particular regulariser does indeed lead to a better accuracy / compression tradeoff than other approaches.
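For reference, the proximal step associated with a nuclear-norm (rank-encouraging) regularizer is singular-value soft-thresholding; a minimal sketch of one such update, noting that the paper's exact regularizer and schedule may differ:

```python
import numpy as np

def prox_nuclear(W, tau):
    """Proximal operator of tau * ||W||_* : shrink singular values toward
    zero, driving the weight matrix toward low rank. Interleaving this with
    stochastic gradient steps on the task loss gives a stochastic proximal
    gradient scheme of the kind the paper builds on."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt
```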
In general I found the results a little hard to interpret, so I may be missing something: the graph I wanted to see was a set of curves for accuracy vs. compression ratio (either in terms of number of parameters or number of MACs) rather than accuracy against the strength of the regularisation term. On this graph it should be possible to explicitly compare your approach vs. previous regularisers / compressors.
nips_2017_567 | Multiscale Semi-Markov Dynamics for Intracortical Brain-Computer Interfaces
Intracortical brain-computer interfaces (iBCIs) have allowed people with tetraplegia to control a computer cursor by imagining the movement of their paralyzed arm or hand. State-of-the-art decoders deployed in human iBCIs are derived from a Kalman filter that assumes Markov dynamics on the angle of intended movement, and a unimodal dependence on intended angle for each channel of neural activity. Due to errors made in the decoding of noisy neural data, as a user attempts to move the cursor to a goal, the angle between cursor and goal positions may change rapidly. We propose a dynamic Bayesian network that includes the on-screen goal position as part of its latent state, and thus allows the person's intended angle of movement to be aggregated over a much longer history of neural activity. This multiscale model explicitly captures the relationship between instantaneous angles of motion and long-term goals, and incorporates semi-Markov dynamics for motion trajectories. We also introduce a multimodal likelihood model for recordings of neural populations which can be rapidly calibrated for clinical applications. In offline experiments with recorded neural data, we demonstrate significantly improved prediction of motion directions compared to the Kalman filter. We derive an efficient online inference algorithm, enabling a clinical trial participant with tetraplegia to control a computer cursor with neural activity in real time. The observed kinematics of cursor movement are objectively straighter and smoother than prior iBCI decoding models without loss of responsiveness. | The paper describes a novel brain-computer-interface algorithm for controlling movement of a cursor to random locations on a screen using neuronal activity (power in the "spike-spectrum" of intra-cortically implanted selected electrodes).
The algorithm uses a dynamic Bayesian network model that encodes possible target locations (from a set of possible positions on a 40x40 grid, laid out on the screen). Target changes can only occur once a countdown timer reaches zero (time intervals are drawn at random), at which time the target has a chance of switching location. This forces slow target-location dynamics.
Observations (power in the spike spectrum) are assumed to be drawn from a multimodal distribution (a mixture of von Mises functions), as multiple neurons may affect the power recorded on a single electrode, and are dependent on the current movement direction. The position is simply the integration over time of the movement direction variable (with a bit of decay).
The novel method is compared with several variations of a Kalman filter (using the same movement state space but missing the latent target location) and an HMM. Experiments are performed offline (using data from closed-loop BCI sessions using either a Kalman filter or an HMM) and also in 2 online closed-loop sessions (alternating between the Kalman filter and the new method). Experiments were performed on a tetraplegic human subject.
Overall, the paper (sans supplementary material, which is essential to understanding the experiment) is well written. The new model vastly outperforms the standard Kalman filter, as is clear from the multiple figures and supplementary movies (!).
Some comments:
- I gather that the von Mises distribution was used both for modelling the observations given the intended movement direction and for modelling the next goal position. I found the mathematical description of the selection of the goal position unclear. Is there a distribution over possible goal positions or is a single goal position selected at each time point during inference? Can you clarify?
- Once a timer is set, can the goal be changed before the timer expires? My understanding is that it cannot, no matter what the observations show. I find this to be a strange constraint. There should be more straightforward ways of inducing slow switch rates that would not force the estimate of the target position to stay fixed as new evidence accumulates. e.g. forced transition rates or state cascades.
- My understanding is that the model has no momentum, i.e., the cursor can "turn on a dime". Most objects we control cannot (e.g. hand movements). Would it not make more sense to use a model that preserves some of the velocity and controls some of the acceleration? It is known that some neurons in motor cortex are direction sensitive and others are force sensitive. Would it not be more natural?
- After an initial model learning period, the model is fixed for the rest of the session. It would be interesting to see how the level of performance changes as time passes. Is the level of performance maintained? Is there some degradation as the network / recordings drift? Is there some improvement as the subject masters control of the system? Would an adaptive system perform better? These questions are not addressed.
- The paper is geared toward an engineering solution to the BCI problem. There is no attempt at providing insights into what the brain encodes most naturally or how the subject (and neural system) adapts / learns to make the most of the provided BCI.
- Citations are in the wrong format: they should be numerical, e.g., [1], [1-4], etc., not Author et al. (2015).
nips_2017_1344 | Inhomogeneous Hypergraph Clustering with Applications
Hypergraph partitioning is an important problem in machine learning, computer vision and network analytics. A widely used method for hypergraph partitioning relies on minimizing a normalized sum of the costs of partitioning hyperedges across clusters. Algorithmic solutions based on this approach assume that different partitions of a hyperedge incur the same cost. However, this assumption fails to leverage the fact that different subsets of vertices within the same hyperedge may have different structural importance. We hence propose a new hypergraph clustering technique, termed inhomogeneous hypergraph partitioning, which assigns different costs to different hyperedge cuts. We prove that inhomogeneous partitioning produces a quadratic approximation to the optimal solution if the inhomogeneous costs satisfy submodularity constraints. Moreover, we demonstrate that inhomogeneous partitioning offers significant performance improvements in applications such as structure learning of rankings, subspace segmentation and motif clustering. | In this paper the authors consider a novel version of hypergraph clustering. Essentially, the novelty of the formulation is the following: the cost of a cut hyperedge depends on which nodes go on each side of the cut. This is quantified by a weight function over all possible subsets of nodes for each hyperedge. The complexity of such a formulation may at first glance seem unnecessary, but the authors illustrate real-world applications where it makes sense to use their formulation. The paper is nicely written. My main critique is that some key ideas of the paper, as the authors point out themselves, are heavily based on [11].
Also, some theoretical contributions are weak, both from a technical perspective and with respect to their generality. For instance, Theorem 3.1 is not hard to prove, and is true under certain conditions that may not hold. On the positive side, the authors do good work with respect to analyzing the shortcomings of their method. Also, the experimental results are convincing to some extent.
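To fix ideas about what an inhomogeneous cut cost is, a small self-contained sketch follows (the numerical weights are invented; the submodularity check mirrors the condition under which the paper's approximation guarantee holds). Note the symmetry w(S) = w(V \ S), since a cut is unordered; only the submodular case carries the quadratic-approximation guarantee.

```python
from itertools import combinations

# One hyperedge on vertices {0, 1, 2}; the cost of cutting it depends on
# WHICH subset S lands on one side, with the symmetry w(S) = w(V \ S).
V = (0, 1, 2)
w = {frozenset(): 0.0, frozenset({0}): 1.0, frozenset({1}): 2.0,
     frozenset({2}): 2.0, frozenset({0, 1}): 2.0, frozenset({0, 2}): 2.0,
     frozenset({1, 2}): 1.0, frozenset(V): 0.0}

def is_submodular(w, ground):
    """Check w(A) + w(B) >= w(A | B) + w(A & B) for all subsets A, B."""
    subsets = [frozenset(c) for r in range(len(ground) + 1)
               for c in combinations(ground, r)]
    return all(w[a] + w[b] >= w[a | b] + w[a & b]
               for a in subsets for b in subsets)

print(is_submodular(w, V))  # True for these weights
```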
My comments follow:
1. The authors provide a motivational example in the introduction. I suggest that the caption under figure 1 should be significantly shortened. The authors may refer the reader to Section 1 for details instead.
2. The authors should discuss hypergraph clustering in greater detail, and specifically the formulation where we pay 1 for each edge cut. There is no canonical matrix representation of hypergraphs, and optimizing over tensors is NP-hard. The authors should check the paper "Hypergraph Markov Operators, Eigenvalues and Approximation Algorithms" by Anand Louis (arxiv:1408.2425).
3. As pointed out by [21] there is a random walk interpretation for motif-clustering. Is there a natural random walk interpretation of the current formulation in the context of motif clustering?
4. The experimental results are convincing to some extent. It would have been nicer to see more results on motif-clustering. Is there a class of motifs and graphs where your method may significantly outperform existing methods [10,21]?
Finally, there is a typo in the title, and a couple of others (e.g., "is guaranteed to find a constant-approximation solutions" -> "is guaranteed to find a constant-approximation solution") along the way. Use ispell or some other similar program to detect all typos, and correct them.
nips_2017_3243 | Effective Parallelisation for Machine Learning
We present a novel parallelisation scheme that simplifies the adaptation of learning algorithms to growing amounts of data as well as growing needs for accurate and confident predictions in critical applications. In contrast to other parallelisation techniques, it can be applied to a broad class of learning algorithms without further mathematical derivations and without writing dedicated code, while at the same time maintaining theoretical performance guarantees. Moreover, our parallelisation scheme is able to reduce the runtime of many learning algorithms to polylogarithmic time on quasi-polynomially many processing units. This is a significant step towards a general answer to an open question on the efficient parallelisation of machine learning algorithms in the sense of Nick's Class (NC). The cost of this parallelisation is in the form of a larger sample complexity. Our empirical study confirms the potential of our parallelisation scheme with fixed numbers of processors and instances in realistic application scenarios. | The authors have presented a parallelization algorithm for aggregating weak learners based on Radon partitioning. They present theoretical analysis to motivate the algorithm along with empirical results to support the theory. The theoretical analysis is interesting, and the empirical results demonstrate training time and/or AUC improvements over multiple baseline algorithms, on multiple datasets. The authors also preemptively and convincingly address several questions/concerns in the Evaluation and Discussion sections. I recommend acceptance.
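For readers unfamiliar with the scheme, the aggregation primitive is the Radon point; below is a self-contained sketch of one aggregation step on toy 2-D "parameter vectors" (the full scheme iterates this over h levels on r^h base learners; the data and names here are illustrative, not taken from the paper):

```python
import numpy as np

def radon_point(points):
    """Radon point of r = d + 2 points in R^d: a point guaranteed to lie
    in the intersection of the convex hulls of two complementary groups,
    used here as the aggregate of one level of the scheme."""
    points = np.asarray(points, dtype=float)      # shape (d + 2, d)
    r, d = points.shape
    assert r == d + 2
    # Find nontrivial a with sum_i a_i x_i = 0 and sum_i a_i = 0.
    A = np.vstack([points.T, np.ones(r)])         # (d + 1, r)
    a = np.linalg.svd(A)[2][-1]                   # null-space direction
    pos = a > 0
    return points[pos].T @ a[pos] / a[pos].sum()

# One aggregation step on toy parameter vectors of r = d + 2 learners.
rng = np.random.default_rng(0)
print(radon_point(rng.normal(size=(4, 2))))       # d = 2
```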
Specific Notes
--------------
- Line 51: "the the" -> "the"
- Line 79: Is this the same size math font as elsewhere? It seems small and kind of hard to read.
- Figures 1-3: The text really needs to be bigger in the figure images.
- Figure 3: This one may be harder to distinguish if printed in black and white.
- Lines 318-320: "the Radon machine outperforms the averaging baseline..." To clarify, is this based on an average of paired comparisons?
- Line 324: A pet peeve of mine is the use of the word significant in papers when not referring to statistical significance results.
- Lines 340-341: Missing some spaces - "machineachieved", and "machineis."
- Lines 341-349: The difference in performance with the AverageAtTheEnd method doesn't seem terribly meaningful. Why wasn't this result included somewhere in Figure 1? Can you advocate more for your method here?
- Line 377: "Long and servedio [22]" The use of author names here seems inconsistent with citations elsewhere (even immediately following). |
nips_2017_1611 | Online Dynamic Programming
We consider the problem of repeatedly solving a variant of the same dynamic programming problem in successive trials. An instance of the type of problems we consider is to find a good binary search tree in a changing environment. At the beginning of each trial, the learner probabilistically chooses a tree with the n keys at the internal nodes and the n + 1 gaps between keys at the leaves. The learner is then told the frequencies of the keys and gaps and is charged by the average search cost for the chosen tree. The problem is online because the frequencies can change between trials. The goal is to develop algorithms with the property that their total average search cost (loss) in all trials is close to the total loss of the best tree chosen in hindsight for all trials. The challenge, of course, is that the algorithm has to deal with an exponential number of trees. We develop a general methodology for tackling such problems for a wide class of dynamic programming algorithms. Our framework allows us to extend online learning algorithms like Hedge [16] and Component Hedge [25] to a significantly wider class of combinatorial objects than was possible before. | The aim of this paper is to extend results in online combinatorial optimization to new combinatorial objects such as $k$-multipaths and binary search trees. Specifically, the authors are interested in extending Expanded-Hedge (EH) and Component Hedge (CH) to online combinatorial games, for which the offline problem can be solved in a bottom-up way using Dynamic Programming.
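As background for the discussion below, here is the vanilla Hedge update the paper builds on, in brute force over a toy set of three explicitly enumerated objects (the paper's whole point is to implement this update implicitly over exponentially many trees, which this sketch does not attempt; losses and the learning rate are invented):

```python
import numpy as np

# Brute-force Hedge over three explicitly enumerated 'trees'.
weights, eta = np.ones(3), 0.5
for losses in [np.array([0.2, 0.9, 0.5]), np.array([0.1, 0.8, 0.4])]:
    p = weights / weights.sum()        # distribution the learner samples from
    print("expected loss:", p @ losses)
    weights *= np.exp(-eta * losses)   # multiplicative update per object
```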
Though the results about online optimization with $k$-multipaths are interesting, the overall paper could benefit from more polishing: some definitions and notations are unclear, and some assertions look incorrect. Namely:
In Section 4, the set (of subsets) $\mathcal H_v$ is not clearly defined, so it is very difficult for the reader to capture the family of DP tasks defined according to the recurrence relation (3). In this section, some concrete examples of online optimization problems characterized by this DP family would be helpful.
In Section 5, the notion of Binary Search Tree should be clearly defined. Notably, if the authors are interested in the set of all rooted binary trees with $n + 1$ leaves, the cardinality of this set is indeed the Catalan number of order $n$ (as indicated in Line 268). But, contrary to what is claimed in Lines 250-251, there is a well-known characterization of the polytope specified by the convex hull of all rooted binary trees of order $n$: this is the $n$-dimensional associahedron. Since this polytope is integral, we can use (at least theoretically) the Component Hedge algorithm for projecting points onto this polytope, and decomposing them (by Caratheodory's theorem) into convex combinations of binary trees.
Section 5.2 is of particular importance, as it explains how the online BST problem can be solved using previous results obtained for $k$-multipaths. But the translation of this problem into an online $2$-multipath learning task is not clear at all. On this very point, it is important to explain in unambiguous terms how we can learn BSTs using the CH (or EH) technique used for $k$-multipaths. According to the sets $V$, $\Omega$, and $F_{ij}$ defined in Lines 265-266, are we guaranteed that (i) every $k$-multipath of the induced graph encodes a rooted binary tree with $n+1$ leaves (correctness), and (ii) every possible rooted binary tree with $n+1$ leaves can be encoded by a $k$-multipath of this induced graph (completeness)?
nips_2017_2377 | Reconstruct & Crush Network
This article introduces an energy-based model that is adversarial regarding data: it minimizes the energy for a given data distribution (the positive samples) while maximizing the energy for another given data distribution (the negative or unlabeled samples). The model is especially instantiated with autoencoders where the energy, represented by the reconstruction error, provides a general distance measure for unknown data. The resulting neural network thus learns to reconstruct data from the first distribution while crushing data from the second distribution. This solution can handle different problems such as Positive and Unlabeled (PU) learning or covariate shift, especially with imbalanced data. Using autoencoders allows handling a large variety of data, such as images, text or even dialogues. Our experiments show the flexibility of the proposed approach in dealing with different types of data in different settings: images with CIFAR-10 and CIFAR-100 (not-in-training setting), text with Amazon reviews (PU learning) and dialogues with Facebook bAbI (next response classification and dialogue completion). | The paper proposes a classification technique using deep nets to deal with:
(a) covariate shift (i.e., when the train and test data do not share the same distribution)
(b) PU settings (i.e., when only positive and unlabeled data points are available).
The key idea is to train an auto-encoder that reconstructs the positive instances with small error and the remaining instances (negative or unlabeled) with a large error (above a desired threshold). This structure forces the network to learn patterns that are intrinsic to the positive class (as opposed to features that are discriminative across different classes). The experiments highlight that the proposed method outperforms baselines across different tasks with different data types (image, short text, and dialogues).
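As I understand it, the training signal amounts to a margin loss on reconstruction errors; a minimal sketch (the margin value and per-sample error placeholders are illustrative, not the paper's exact hyperparameters):

```python
import numpy as np

def rc_loss(err_pos, err_neg, margin=1.0):
    """Reconstruct-and-crush objective sketch: pull positive reconstruction
    errors down, push negative/unlabeled errors above a margin."""
    return err_pos.mean() + np.maximum(0.0, margin - err_neg).mean()

# err_* stand in for per-sample autoencoder errors, e.g. ||x - AE(x)||^2.
print(rc_loss(np.array([0.05, 0.10, 0.02]), np.array([0.30, 1.50, 0.90])))
```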
Overall, I enjoyed reading the paper. It is honest, concise and well-written. The idea of considering the reconstruction error of auto-encoders as an energy function is indeed interesting. While my overall recommendation is accept, there are a few points that I am not confident about:
(1) The baselines: for the image classification experiments, the authors use a standard CNN as a baseline, and I'm not quite certain that this counts as a competitive baseline (especially for the "not-in-training" setup). My question is: are there other baselines that are more suitable for such settings?
(2) The design of the auto-encoders is discussed only briefly. The suggested structures are not trivial (e.g., (32)3c1s-(32)3c1s-(64)3c2s-(64)3c2-(32)3c1s-512f-1024f). The only discussion of the structure is the following sentence: "The choice of architectures for standard classifier and auto-encoder is driven by necessity of fair comparison.", which is frankly not sufficient. At the same time, it seems to me this is the general trend in research on neural nets. Thus, I'm not confident how valid this concern is.
Overall, my recommendation is accept, though with limited confidence in my review.
nips_2017_2110 | Structured Generative Adversarial Networks
We study the problem of conditional generative modeling based on designated semantics or structures. Existing models that build conditional generators either require massive labeled instances as supervision or are unable to accurately control the semantics of generated samples. We propose structured generative adversarial networks (SGANs) for semi-supervised conditional generative modeling. SGAN assumes the data x is generated conditioned on two independent latent variables: y that encodes the designated semantics, and z that contains other factors of variation. To ensure disentangled semantics in y and z, SGAN builds two collaborative games in the hidden space to minimize the reconstruction error of y and z, respectively. Training SGAN also involves solving two adversarial games that have their equilibrium concentrating at the true joint data distributions p(x, z) and p(x, y), avoiding distributing the probability mass diffusely over the data space, which MLE-based methods may suffer from. We assess SGAN by evaluating its trained networks, and its performance on downstream tasks. We show that SGAN delivers a highly controllable generator, and disentangled representations; it also establishes state-of-the-art results across multiple datasets when applied for semi-supervised image classification (1.27%, 5.73%, 17.26% error rates on MNIST, SVHN and CIFAR-10 using 50, 1000 and 4000 labels, respectively). Benefiting from the separate modeling of y and z, SGAN can generate images with high visual quality that strictly follow the designated semantics, and can be extended to a wide spectrum of applications, such as style transfer. | Summary:
This paper proposes a novel GAN structure for semi-supervised learning, a setting in which there exists a small dataset with class labels along with a larger unlabeled dataset. The main idea of this paper is to disentangle the labels (y) from the hidden states (z) using two GAN problems that represent p(x,y) and p(x,z). The generator is shared between both GAN problems, and the two problems are trained simultaneously using ALI [4]. There are two adversarial games defined for training the joints p(x, y) and p(x, z). Two "collaborative games" are also defined in order to better disentangle y from z and enforce structure on y.
Pros:
1) Overall, the paper is written well. All the steps are motivated intuitively. The presented ideas look rational.
2) The presented results on the semi-supervised learning task (Table 2) are very impressive. The proposed method outperforms previous GAN and VAE works. The qualitative results in terms of generative samples look very interesting.
Cons:
1) I found the mutual predictability measure very ad-hoc. There exist other measures, such as mutual information, that have been used in the past (e.g., infoGAN) for measuring the disentanglability of DGMs.
2) The collaborative games presented in Eq. 4 and Eq. 5 are equivalent to the mutual information term introduced in infoGAN [2] (with \lambda=1).
In this case, the paper can be considered as the combination of infoGAN and ALI applied to semi-supervised learning. This slightly limits the technical novelty of the paper.
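For reference, my reading of the two collaborative terms in code form (the cross-entropy and l2 choices are my reconstruction from the text, cf. the question about line 197 below; shapes and values are illustrative):

```python
import numpy as np

def collaborative_losses(y, y_pred, z, z_pred):
    """R_y: cross-entropy asking the classifier to recover y from G(y, z);
    R_z: l2 asking the encoder to recover z. Shapes: y, y_pred (n, classes);
    z, z_pred (n, dim)."""
    r_y = -(y * np.log(y_pred + 1e-9)).sum(axis=1).mean()
    r_z = ((z - z_pred) ** 2).sum(axis=1).mean()
    return r_y, r_z

y = np.eye(3)[[0, 2]]                                  # one-hot labels fed to G
y_pred = np.array([[0.8, 0.1, 0.1], [0.2, 0.2, 0.6]])  # classifier on G(y, z)
z = np.zeros((2, 4)); z_pred = z + 0.1                 # encoder on G(y, z)
print(collaborative_losses(y, y_pred, z, z_pred))
```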
Questions:
1) I am wondering why the adversarial game L_{xy} is only formed on the labeled set X_l. A similar game could be created on the unlabeled set. The only difference would be that instead of labels, one would use y=C(x) for the first term in Eq. 3. However, in this paper a collaborative game R_y is introduced instead.
2) Theorem 3.3 does not state whether minimizing R_z or R_y w.r.t. G will change the equilibrium of the adversarial game. In other words, with minimization over G in both Eq. 4 and Eq. 5, can we still guarantee that G will match the data distribution?
Minor comments:
1) line 110: Goodfellow et al is missing the reference.
2) line 197: Is l-2 distance resulted from assuming a normal prior on z?
3) line 213: p_g(z|y) --> p_g(x|y)
4) With all the different components (p, p_g, p_c, etc) it is a bit challenging to follow the structure of adversarial and collaborative games. A figure would help readers to follow the logic of the paper easier. |
nips_2017_2019 | Adaptive Batch Size for Safe Policy Gradients
Policy gradient methods are among the best Reinforcement Learning (RL) techniques to solve complex control problems. In real-world RL applications, it is common to have a good initial policy whose performance needs to be improved and it may not be acceptable to try bad policies during the learning process. Although several methods for choosing the step size exist, research has paid less attention to determining the batch size, that is, the number of samples used to estimate the gradient direction for each update of the policy parameters. In this paper, we propose a set of methods to jointly optimize the step and the batch sizes that guarantee (with high probability) to improve the policy performance after each update. Besides providing theoretical guarantees, we show numerical simulations to analyse the behaviour of our methods. | Summary:
This paper derives conditions for guaranteed improvement when using policy gradient methods. These conditions are for stochastic gradient estimates and also bound the amount of improvement with high probability. The authors then show how these bounds can be optimized by properly selecting the step size and batch size parameters. This is in contrast to previous work that only considers how the step size can be optimized. The result is an algorithm that can guarantee improvement with high probability.
Note: I did not verify all of the math, but at an intuitive level the theorems and lemmas in the main body are believable.
Major:
1. What am I supposed to take away from this paper? Am I supposed to think "I should use policy gradient methods with their safe step size & batch size approach if I need a safe RL algorithm"? Or, am I supposed to think "This paper is presenting some interesting theoretical insights into how step sizes and batch sizes impact policy gradient methods"?
If the answer is the former, then the results are abysmally bad. The axes are all scaled by 10^7 trajectories. Even the fastest learning methods are therefore impractically slow. This doesn't appear to be a practical algorithm. Compare to the results in your reference [23], where high confidence improvement is achieved using hundreds to thousands of episodes. So, the proposed algorithm does not appear to be practical or competitive as a practical method.
This makes me think that the contribution is the latter: a theoretical discussion about step sizes, batch sizes, and how they relate to guaranteed improvement. This is interesting and I would recommend acceptance if this was the pitch of the paper. However, this is not made clear in the introduction. Until I reached the empirical results I thought that the authors were proposing a practical algorithm. I propose that the introduction should be modified to make it clear what the contribution is: are you presenting theoretical results (e.g., like a PAC RL paper), or are you proposing a practical algorithm? If the answer is the latter, then you need to compare to existing works or explain why it is not an issue that your method requires so much data.
2. The assumption that the policy is Gaussian is crippling. In many of the real-world applications of RL (e.g., dialogue systems) the actions are discrete, and so the policy is not Gaussian. Does this extend to other policy forms, like softmax policies? Is this extension straightforward, or are there additional challenges?
Medium:
1. Line 92. You say that convergence is assured because the angular difference is at most \pi/2. A similar claim was made by Peters and Schaal in their natural actor-critic paper, and it later turned out to be false (See "GeNGA: A generalization of natural gradient ascent with positive and negative convergence results" by Thomas at ICML 2014). Your claim is different from Peters and Schaal's, but due to the history of these off-hand claims tending to be false, you should provide a proof or a reference for this claim.
2. You use concentration bounds that tend to be wildly over-conservative (Chebyshev & Hoeffding) because they must hold for a broad class of distributions. This is obviously the correct first step. However, have you considered using bounds that are far tighter (but make false distributional assumptions), like Student's t-test? If the gradient estimate is the mean of n i.i.d gradient estimates, then by the Central Limit Theorem the gradient estimate is approximately normally distributed, and so Student's t-test is reasonable. Empirically, does it tend to result in improvements with high probability while requiring far less data? It seems like a practical implementation should not use Hoeffding / Chebyshev / Bernstein...
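Concretely, the kind of bound I have in mind (a sketch; the scalarized per-trajectory gradient samples, the delta level, and the normality assumption are all illustrative):

```python
import numpy as np
from scipy import stats

def t_lower_bound(grad_samples, delta=0.05):
    """One-sided lower confidence bound on the mean of n i.i.d. scalar
    gradient estimates via Student's t; tighter than Chebyshev/Hoeffding
    but rests on approximate normality of the sample mean."""
    n, m, s = len(grad_samples), grad_samples.mean(), grad_samples.std(ddof=1)
    return m - stats.t.ppf(1 - delta, df=n - 1) * s / np.sqrt(n)

rng = np.random.default_rng(0)
print(t_lower_bound(rng.normal(1.0, 2.0, size=100)))
```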
3. Page 7 is one long paragraph.
Minor:
1. Line 74 "paramters"
2. Capitalization in section titles is inconsistent (e.g., section 3 capitalizes Policies but not Size or Scalar).
3. Line 91 "but, being the \alpha_i non-negative". Grammar error here.
4. Integrals look better with a slight space before the dx terms: $$\int_{\mathcal X} f(x) \, \mbox{d}x$$. The $\,$ here helps split the terms visually.
5. Line 102 "bound requires to compute" - requires who or what to compute something? Grammar.
6. The line split on line 111 is awkward.
7. Equation below 113, use \text{} for "if" and "otherwise".
8. Below 131, $\widehat$ looks better on the $\nabla$ symbol, and use $\max$ for max.
9. Line 140 say what you mean by the "optimal step size" using an equation.
10. Line 192, don't use contractions in technical writing (i.e., don't write "doesn't", write "does not").
AFTER AUTHOR REBUTTAL:
My overall review has not changed. I've increased to accept (7 from 6) since the authors are not claiming that they are presenting a practical algorithm, but rather presenting interesting theoretical results. |
nips_2017_2018 | Concrete Dropout
Dropout is used as a practical tool to obtain uncertainty estimates in large vision models and reinforcement learning (RL) tasks. But to obtain well-calibrated uncertainty estimates, a grid-search over the dropout probabilities is necessary: a prohibitive operation with large models, and an impossible one with RL. We propose a new dropout variant which gives improved performance and better calibrated uncertainties. Relying on recent developments in Bayesian deep learning, we use a continuous relaxation of dropout's discrete masks. Together with a principled optimisation objective, this allows for automatic tuning of the dropout probability in large models, and as a result faster experimentation cycles. In RL this allows the agent to adapt its uncertainty dynamically as more data is observed. We analyse the proposed variant extensively on a range of tasks, and give insights into common practice in the field where larger dropout probabilities are often used in deeper model layers. | This paper proposes an approach for automatic tuning of dropout probabilities under the variational interpretation of dropout as a Bayesian approximation. Since the dropout probability characterizes the overall posterior uncertainty, such tuning is necessary for calibrated inferences, but has thus far required expensive manual tuning, e.g., via offline grid search. The proposed method replaces the fixed Bernoulli variational distribution at training time with a concrete/gumbel-softmax distribution, allowing use of the reparameterization trick to compute low-variance stochastic gradients of the ELBO with respect to the dropout probabilities. This is evaluated on a range of tasks ranging from toy synthetic datasets to capturing uncertainty in semantic segmentation and model-based RL.
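For concreteness, a sketch of the relaxation being described: a dropout mask with a learnable probability p, using the standard concrete/relaxed-Bernoulli form (the fixed t=0.1 mirrors the appendix code discussed below; everything else is illustrative):

```python
import numpy as np

def concrete_dropout_mask(p, shape, t=0.1, rng=None):
    """Relaxed-Bernoulli dropout mask with drop probability p. The soft
    sigmoid form lets gradients flow to p; as t -> 0 it hardens to
    ordinary dropout."""
    rng = rng or np.random.default_rng(0)
    u = rng.uniform(1e-7, 1 - 1e-7, size=shape)
    drop = 1.0 / (1.0 + np.exp(-(np.log(p) - np.log(1 - p)
                                 + np.log(u) - np.log(1 - u)) / t))
    return (1.0 - drop) / (1.0 - p)   # keep-and-rescale, as in dropout

print(concrete_dropout_mask(p=0.2, shape=5))
```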
The motivation seems clear and the proposed solution is simple and natural, combining dropout Bayesian inference with the concrete distribution in the obvious way. The technical contributions of the paper seem relatively minimal, but this is not necessarily a problem; it's great when a simple, straightforward method turns out to give impressive results.
That said, given the straightforwardness of the approach, the significance of this paper rests heavily on the evaluation, and here I am not really convinced. Although there are a lot of experiments, none seems to show a compelling advantage, and collectively they fail to answer many natural questions about the approach. For example:
- what is the training time overhead versus standard dropout? (line 281 says 'negligible' but this is never quantified; naively I'd expect differentiating through the gumbel-softmax to add significant overhead)
- do the learned dropout probabilities match the (presumably optimal) probabilities obtained via grid search?
- how should the temperature of the concrete distribution be set (and does this matter)? The code in the appendix seems to use a fixed t=0.1, but this is never mentioned or discussed in the paper.
- how does this method compare to other reasonable baselines, e.g., explicit mean-field BNNs, or Gaussian dropout with tuned alpha? The latter seems like a highly relevant comparison, since the Gaussian approach seems simpler both theoretically and in implementation (it is directly reparameterizable without the concrete approximation), and to dismiss it by reference to a brief workshop paper [28] claiming the approach 'underperforms' (I don't see such a conclusion in that paper even for the tasks it considers, let alone in any more general sense) is lazy.
- there is a fair amount of sloppiness in the writing, including multiple typos and cases where text and figures don't agree (some detailed below). This does not build confidence in the meticulousness of the experiments.
More specific comments:
Synthetic data (sec 4.1): I am not sure what to conclude from this experiment. It reads as simply presenting the behavior of the proposed approach, without any critical evaluation of whether that behavior is good. E.g., Fig 1b describes aleatoric uncertainty as following an 'increasing trend' (a bit of a stretch given that the majority of steps are decreasing), and 1c describes predictive uncertainty as 'mostly constant' (though the overall shape actually looks very similar to 1b) -- but why is it obvious that either of these behaviors is correct or desirable? Even if this behavior qualitatively matches what you'd get from a more principled Bayesian approach (I was expecting the paper to argue this but it doesn't), a simple synthetic task should be an opportunity for a real quantitative comparison: how closely do the dropout uncertainties match what you'd get from, e.g., HMC, or even exact Bayesian linear regression (feasible here since the true function is linear -- so concrete dropout on a linear model should be a fair comparison)?
UCI (sec 4.2): the experimental setup is not clear to me -- is the 'dropout' baseline using some fixed probability (in which case, what is it?) or is it tuned by grid search (how?)? More fundamentally, given that the goal of CDropout is to produce calibrated uncertainties, why are its NLL scores (fig 2, a reasonable measure of uncertainty) *worse* than standard dropout in the majority of cases?
MNIST (sec 4.3): again no real comparison to other approaches, and no evaluation of uncertainty quality. The accuracy 'matches' hand-tuned dropout -- does it find the same probabilities? If so, this is worth saying. If not, it seems strange to refer to the converged probabilities as 'optimal' and analyze them for insights.
Computer vision (sec 4.4): this is to me the most promising experiment, showing that concrete dropout matches or slightly beats hand-tuned dropout on a real task. It's especially nice to see that the learned probabilities are robust to initialization, since this is a natural worry in non-convex optimization. Some things that seem confusing here:
- what is the metric "IoU"? Presumably Intersection over Union, but this is never spelled out or explained.
- the differences in calibration between methods are relatively small compared to the overall miscalibration of all the methods -- why is this?
- the text says "middle layers are largely deterministic", which seems inconsistent with Fig 8c showing convergence to around p=0.2?
(nitpick: "Table" 2 is actually a figure)
Model-based RL (sec 4.5): It's nice that the model matches our intuition that dynamics uncertainty should decrease with experience. But it sounds like this tuning doesn't actually improve performance (it only matches the cumulative reward of a fixed 0.1 dropout), so what's the point? Presumably there should be some advantages to better-calibrated dynamics uncertainty; why do they not show up in this task?
The method proposed in this paper seems promising and potentially quite useful. If so, it should be possible to put together a careful and clear evaluation showing compelling advantages (and minor costs) over relevant baselines. I'd like to read that paper! But as currently submitted I don't think this meets the NIPS bar.
UPDATE: increasing my rating slightly post rebuttal. I still think this paper is marginal, but the clarifications promised in the rebuttal (describing the tuning of dropout baselines, and the problems with Gaussian dropout, among others) will make it stronger. It's worth noting that recent work (Molchanov et al., ICML 2017 https://arxiv.org/abs/1701.05369) claims to fix the variance and degeneracy issues with Gaussian dropout; this would be an interesting comparison for future work. |
nips_2017_3391 | Subspace Clustering via Tangent Cones
Given samples lying on any of a number of subspaces, subspace clustering is the task of grouping the samples based on their corresponding subspaces. Many subspace clustering methods operate by assigning a measure of affinity to each pair of points and feeding these affinities into a graph clustering algorithm. This paper proposes a new paradigm for subspace clustering that computes affinities based on the corresponding conic geometry. The proposed conic subspace clustering (CSC) approach considers the convex hull of a collection of normalized data points and the corresponding tangent cones. The union of subspaces underlying the data imposes a strong association between the tangent cone at a sample x and the original subspace containing x. In addition to describing this novel geometric perspective, this paper provides a practical algorithm for subspace clustering that leverages this perspective, where a tangent cone membership test is used to estimate the affinities. This algorithm is accompanied by deterministic and stochastic guarantees on the properties of the learned affinity matrix, on the true and false positive rates and spread, which directly translate into the overall clustering accuracy. | Summary:
The paper describes a new geometric insight about the tangent cone and a new algorithm for noiseless subspace clustering based on it.
The key idea (Proposition 2) is that if x and x' are from the same subspace, then -x - sign(<x, x'>) \beta x' should lie within a union of non-convex cones derived from x and the underlying union of subspaces for all positive \beta, whereas if x and x' are not from the same subspace, then for sufficiently large \beta the vector will not lie in this union of non-convex cones.
The authors then propose an algorithm that tests every ordered pair of observed data points with a similar condition on a crude approximation of the "union of non-convex cones" of interest: the conic hull (therefore convex) of the observed data points. This algorithm is parameterized by a tuning parameter \beta. The authors then provide geometric conditions under which: (1) if \beta is sufficiently large, then there are no false positives; (2) if \beta is sufficiently small, then the true positive rate is at least a fixed fraction. The geometric conditions are different for (1) and (2). For (1) the condition is reasonable, but for (2) it is quite restrictive, as I will explain later.
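Concretely, the conic-hull membership test at the core of such a procedure can be phrased as a nonnegative least-squares feasibility check (my rendering for illustration, not the authors' exact formulation):

```python
import numpy as np
from scipy.optimize import nnls

def in_conic_hull(v, X, tol=1e-8):
    """Is v in the conic hull of the columns of X? Solve the nonnegative
    least-squares problem v ~ X w, w >= 0 and test the residual."""
    _, resid = nnls(X, v)
    return resid < tol

X = np.array([[1.0, -1.0], [2.0, -2.0], [0.0, 0.0]])  # columns span a line
print(in_conic_hull(np.array([3.0, 6.0, 0.0]), X))    # True: on the line
print(in_conic_hull(np.array([1.0, 0.0, 1.0]), X))    # False: off-subspace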
In addition, the authors analyzed the method under a semi-random model (uniform observations on each subspace, or uniformly random subspaces themselves) and described how to control the true positive rate.
Evaluation:
The paper is well-written.
The technical results appear to be correct (I didn't have time to check the proofs in detail). The idea of testing the "affinity" of pairs of data points using the tangent cone is new as far as I know. The true positive rate control in Section 4 is new and useful.
The result itself (in Sections 4 and 5) seems quite preliminary and has a number of issues.
In particular, it is not clear under what condition there exists a \beta that simultaneously satisfies the conditions in Theorem 4 and Theorem 7. Or rather, what is the largest \rho one can hope to get when we require \beta > \beta_L, and how does that number depend on the parameters of the stochastic model?
On the other hand, the proposed approach does not seem to solve a larger class of problems than known before through using SSC. Specifically, SSC allows the number of subspaces to be exponential in the ambient dimension, while the current approach seems to require the sum of dimensions of all subspaces to be smaller than the ambient dimension n. This is critical and not an artifact of the proof because in that case, the tangent cone would be R^n (or is it a half space?). (For example, consider three 1D disjoint subspaces in 2D).
Part of the motivation of the work is to alleviate the "graph connectivity" problem that SSC suffers from.
I believe the authors are not aware of the more recent discoveries on this front. This issue is effectively solved for the noiseless subspace clustering problem with great generality [a]. Now we know that if SSC provides no false discoveries, then there is a post-processing approach that provides the correct clustering despite a potentially disconnected graph for data points in the subspace. The observation was also extended to noisy SSC. The implication of this for the current paper (which also constructs an affinity graph) is that as long as there are no false discoveries and at least d true discoveries (d is the subspace dimension), the correct clustering can be obtained. A discussion of the advantages of the proposed approach relative to the robust post-processing approach described in [a] would be helpful.
Lastly, since the proposed method is based on a seemingly very unstable geometric condition, I feel that it is much more important to analyze the robustness to stochastic noise as in [b,15] and adversarial perturbation as in [a,b].
Overall, I feel that this paper contains enough interesting new insights to merit acceptance at NIPS.
Detailed comments/pointers and references:
1. Noiseless subspace clustering is, in great generality, a solved problem.
- For l1 SSC, there is a postprocessing approach that provides perfect clustering under only an identifiability condition, as long as there are no false positives.
- In other words, on the noiseless subspace clustering problem, SSC does not suffer from the graph connectivity problem at all.
- As a result, the authors should explicitly compare the geometric condition used in this paper to the condition for SSC to yield no false discoveries [14].
2. Even those geometric conditions in [14] can be removed if we allow algorithms exponential in subspace dimension d (and polynomial in everything else). In fact, the L0 version of SSC can already solve all subspace clustering problems that are identifiable [a].
3. The noisy subspace clustering and missing-data subspace clustering problems are where many of the remaining open questions lie. So it would be great if the authors continue to pursue the new tangent cone idea and figure out how it can solve the noisy/missing-data problems in comparison to the approaches existing in the literature.
[a] Wang, Yining, Yu-Xiang Wang, and Aarti Singh. "Graph connectivity in noisy sparse subspace clustering." Artificial Intelligence and Statistics. 2016.
[b] Wang, Yu-Xiang, and Huan Xu. "Noisy Sparse Subspace Clustering." Proceedings of the 30th International Conference on Machine Learning (ICML-13). 2013. |
nips_2017_2906 | Imagination-Augmented Agents for Deep Reinforcement Learning
We introduce Imagination-Augmented Agents (I2As), a novel architecture for deep reinforcement learning combining model-free and model-based aspects. In contrast to most existing model-based reinforcement learning and planning methods, which prescribe how a model should be used to arrive at a policy, I2As learn to interpret predictions from a learned environment model to construct implicit plans in arbitrary ways, by using the predictions as additional context in deep policy networks. I2As show improved data efficiency, performance, and robustness to model misspecification compared to several baselines. | This paper presents an approach to model-based reinforcement learning where, instead of directly estimating the value of actions in a learned model, a neural network processes the model's predictions, combining them with model-free features, to produce a policy and/or value function. The idea is that since the model is likely to be flawed, the network may be able to extract useful information from the model's predictions while ignoring unreliable information. The approach is studied in procedurally generated Sokoban puzzles and a synthetic Pac-Man-like environment and is shown to outperform purely model-free learning as well as MCTS on the learned model.
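For orientation, the core architectural move in a toy sketch (the shapes and names here are mine, not the paper's):

```python
import numpy as np

def i2a_policy_input(model_free_feat, rollout_summaries):
    """The policy/value network consumes model-free features concatenated
    with learned summaries of imagined rollouts, instead of acting on the
    model's raw predictions directly."""
    return np.concatenate([model_free_feat, *rollout_summaries])

mf = np.zeros(8)                      # e.g. CNN features of the observation
rollouts = [np.ones(4), np.ones(4)]   # one embedding per imagined rollout
print(i2a_policy_input(mf, rollouts).shape)   # (16,)
```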
---Quality---
I feel the authors have done an exceptional job of presenting and evaluating their claims. The experiments are thorough and carefully designed to tease issues apart and to clearly answer well-stated questions about the approach. I found the experiments to provide convincing evidence that I2A is taking advantage of the learned model, is robust to model flaws, and can leverage the learned model for multiple tasks.
I had one question regarding the experiments comparing I2A and MCTS. I2A was trained for millions of steps to obtain a good policy. Was a comparable amount of data used to train the MCTS leaf evaluation function? I looked in the appendix, but didn't see that detail. Also, out of curiosity, if you have access to a perfect simulator, isn't it a bit funny to make I2A plan with the learned model? Why not skip the model learning part and train I2A to interpret rollouts from the simulator?
---Clarity---
I felt the paper was exceptionally well-written. The authors did a good job of motivating the problem and their approach and clearly explained the big ideas. As is often the case with deep learning papers, it would be *extremely* difficult to reproduce these results based purely on the paper itself -- there are always lots of details regarding architecture, meta-parameters, etc. However, the appendix does a good job of narrowing this gap. I hope the authors will consider publishing source code as well.
---Originality---
As far as I am aware, this is a highly novel approach to MBRL. I have never seen anything like it, and it is fundamentally, conceptually different than existing MBRL approaches.
The authors did a good job of citing relevant work (the references section is prodigious). Nevertheless, there has been some other recent work outside the context of deep nets studying the problem of planning with approximate, learned models that the authors may consider discussing (recognizing of course the limitations of space). For instance, Talvitie UAI 2014 and AAAI 2016, Jiang et al. AAMAS 2015, Venkatraman et al. ISER 2016. The presented approach is unquestionably distinct from these, but there may be connections or contrasts worth making. Also, the I2A architecture reminds me of the Dyna architecture, which similarly combines model-based and model-free updates into a single value function/policy (the difference being that it directly uses reward/transition predictions from the model). Again, I2A is definitely new, but the connection may be valuable to discuss.
---Significance---
I think this is an important paper. It is addressing an important problem in a creative and novel way, with demonstrably promising results. It seems highly likely that this paper will inspire follow-up work, since it effectively proposes a new category of MBRL algorithms that seems well-worth exploring.
***After author response***
I have read the authors' response and thank them for the clarifications. |
nips_2017_2907 | Extracting low-dimensional dynamics from multiple large-scale neural population recordings by learning to predict correlations
A powerful approach for understanding neural population dynamics is to extract low-dimensional trajectories from population recordings using dimensionality reduction methods. Current approaches for dimensionality reduction on neural data are limited to single population recordings, and can not identify dynamics embedded across multiple measurements. We propose an approach for extracting low-dimensional dynamics from multiple, sequential recordings. Our algorithm scales to data comprising millions of observed dimensions, making it possible to access dynamics distributed across large populations or multiple brain areas. Building on subspace-identification approaches for dynamical systems, we perform parameter estimation by minimizing a moment-matching objective using a scalable stochastic gradient descent algorithm: The model is optimized to predict temporal covariations across neurons and across time. We show how this approach naturally handles missing data and multiple partial recordings, and can identify dynamics and predict correlations even in the presence of severe subsampling and small overlap between recordings. We demonstrate the effectiveness of the approach both on simulated data and a whole-brain larval zebrafish imaging dataset. | The authors suggest a method to estimate latent low dimensional dynamics from sequential partial observations of a population. Using estimates of lagged covariances, the method is able to reconstruct unobserved covariances quite well. The extra constraints introduced by looking at several lags allow for very small overlaps between the observed subsets.
This is an important work, as many imaging techniques have an inherent tradeoff between their sampling rate and the size of the population. A recent relevant work considered this from the perspective of inferring connectivity in a recurrent neural network (parameters of a specific nonlinear model with observations of the entire subset) [1].
The results seem impressive, with an ability to infer the dynamics even for very low overlaps, and the spiking network and zebrafish data indicate that the method works for certain nonlinear dynamics as well.
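As a point of reference, the basic moments the method matches -- time-lagged covariances under partial observation -- can be estimated entrywise as in the following sketch (random data and a random mask; assumes mean-subtracted activity; all names are mine):

```python
import numpy as np

def lagged_cov(X, mask, lag):
    """Entrywise time-lagged covariance from partially observed, already
    mean-subtracted data. X: (T, n); mask: (T, n) bool. Entry (i, j)
    averages over times where i at t and j at t + lag were both observed."""
    T, _ = X.shape
    Xc = np.where(mask, X, 0.0)
    A, B = Xc[:T - lag], Xc[lag:]
    counts = mask[:T - lag].T.astype(float) @ mask[lag:].astype(float)
    C = (A.T @ B) / np.maximum(counts, 1.0)
    C[counts == 0] = np.nan           # never co-observed: left to the model
    return C

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
mask = rng.random((200, 6)) > 0.5     # crude stand-in for partial recordings
print(lagged_cov(X, mask, lag=1).shape)
```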
A few comments:
1. In Figure 3b the error increases eventually, but this is not mentioned.
2. Spiking simulation – the paper assumes a uniform sampling across clusters. How sensitive is it to under-sampling of some clusters?
3. The zebrafish reconstruction of covariance. What is the estimated dimensionality of the data? The overlap is small in terms of percentage (1/41), but large in number of voxels (200K). Is this a case where the overlap is much larger than the dimensionality, and the constraints from lagged covariances do not add much? This example clearly shows the computational scaling of the algorithm, but I’m not sure how challenging it is in terms of the actual data.
[1] Soudry, Daniel, et al. "Efficient" shotgun" inference of neural connectivity from highly sub-sampled activity data." PLoS computational biology 11.10 (2015): e1004464. |
nips_2017_2050 | Sparse convolutional coding for neuronal assembly detection
Cell assemblies, originally proposed by Donald Hebb (1949), are subsets of neurons firing in a temporally coordinated way that gives rise to repeated motifs supposed to underlie neural representations and information processing. Although Hebb's original proposal dates back many decades, the detection of assemblies and their role in coding is still an open and current research topic, partly because simultaneous recordings from large populations of neurons became feasible only relatively recently. Most current and easy-to-apply computational techniques focus on the identification of strictly synchronously spiking neurons. In this paper we propose a new algorithm, based on sparse convolutional coding, for detecting recurrent motifs of arbitrary structure up to a given length. Testing of our algorithm on synthetically generated datasets shows that it outperforms established methods and accurately identifies the temporal structure of embedded assemblies, even when these contain overlapping neurons or when strong background noise is present. Moreover, exploratory analysis of experimental datasets from hippocampal slices and cortical neuron cultures has provided promising results.
| The paper proposes a new method for the identification of spiking motifs in neuronal population data. The authors' approach is to use a sparse convolutional coding model that is learned using an alternation between LASSO and matching pursuit. The authors address the problem of false positives and compare their algorithm to 3 competing methods in both simulation experiments and real neural data.
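For concreteness, the generative side of the model can be sketched as motifs convolved with sparse activation signals (the alternating LASSO/matching-pursuit optimization is omitted; the data and shapes are illustrative):

```python
import numpy as np

def reconstruct(motifs, activations):
    """Sum of motifs (each a neurons x lag matrix) convolved, row by row,
    with their sparse temporal activation signals."""
    n, L = motifs[0].shape
    T = activations[0].shape[0] + L - 1
    R = np.zeros((n, T))
    for M, s in zip(motifs, activations):
        for i in range(n):
            R[i] += np.convolve(s, M[i])
    return R

motif = np.zeros((3, 5))
motif[0, 0] = motif[1, 2] = motif[2, 4] = 1.0   # a diagonal 'sweep' motif
s = np.zeros(50); s[[4, 20]] = 1.0              # the motif occurs twice
print(reconstruct([motif], [s]).shape)          # (3, 54)
```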
It appears that the basic model is not original, but that the regularization has been improved, leading to a method that performs well beyond the state of the art.
I believe that the method could be of some use to the neuroscience community as it may be powerful enough to determine if population codes fire in coordinated assemblies and if so, what is the nature of the assemblies? This is a step towards addressing these long-standing questions in the neuroscience community.
Overall, the paper is clearly written, the method is reasonably conceived and described, and the simulation studies were sufficiently conceived for a paper of this length. Some of the methods for evaluating performance were unclear. I am not totally certain how to assess Figure 4, for example since lines 169-175 where the “functional association” metric is described is too impoverished to determine what was actually done and what is being compared across detection thresholds.
I also am of the opinion that the authors are over-interpreting their data analysis results when they claim that the motif appearance is related to re-activation of former assemblies (do they mean replay?).
Also, it appears that few of the motif activations in Figure 7 include all of the cells that are a part of the motif. This is concerning as it is not a part of the model description and makes it appear as though there is a poor model fit to the data. |
nips_2017_3298 | Filtering Variational Objectives
When used as a surrogate objective for maximum likelihood estimation in latent variable models, the evidence lower bound (ELBO) produces state-of-the-art results. Inspired by this, we consider the extension of the ELBO to a family of lower bounds defined by a particle filter's estimator of the marginal likelihood, the filtering variational objectives (FIVOs). FIVOs take the same arguments as the ELBO, but can exploit a model's sequential structure to form tighter bounds. We present results that relate the tightness of FIVO's bound to the variance of the particle filter's estimator by considering the generic case of bounds defined as log-transformed likelihood estimators. Experimentally, we show that training with FIVO results in substantial improvements over training the same model architecture with the ELBO on sequential data. | This paper introduces Monte Carlo objectives to improve the scaling of the IWAE, providing a tighter lower bound on the log-likelihood than the traditional ELBO. Various theoretical properties of Monte Carlo objectives are discussed. An example Monte Carlo objective using particle filters is provided for modeling sequential data. The paper is generally well written and I enjoyed reading it.
* For the FIVO algorithm, it would be more self-contained to include a brief description of the resampling criterion or the exact procedure used in the experiment, so that the reference doesn't need to be consulted.
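For instance, a sketch of the sort of procedure worth spelling out: ESS-triggered multinomial resampling inside the log-likelihood estimator (this is the common adaptive criterion; whether the paper uses exactly this is what should be stated, and the log-increments here are assumed precomputed):

```python
import numpy as np

def fivo_bound(log_increments, rng, ess_frac=0.5):
    """Sketch of the FIVO estimator log p_hat for one sequence.
    log_increments: (T, K) per-particle log-weight updates
    (log p(x_t, z_t | ...) - log q(z_t | ...))."""
    T, K = log_increments.shape
    log_w = np.zeros(K)              # running (unnormalized) log-weights
    log_p_hat = 0.0
    for t in range(T):
        log_w = log_w + log_increments[t]
        m = log_w.max()
        w = np.exp(log_w - m)
        ess = w.sum() ** 2 / (w ** 2).sum()
        if ess < ess_frac * K:
            log_p_hat += m + np.log(w.mean())   # bank the running average
            idx = rng.choice(K, size=K, p=w / w.sum())
            log_w = np.zeros(K)                 # equal weights after resampling
            # (a real filter would carry the particles z^idx forward)
    m = log_w.max()
    log_p_hat += m + np.log(np.exp(log_w - m).mean())
    return log_p_hat

rng = np.random.default_rng(0)
print(fivo_bound(rng.normal(-1.0, 0.1, size=(10, 8)), rng))
```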
* Although not required, it would be helpful to provide extra examples of Monte Carlo objectives other than applications to sequential data such as FIVO.
* I feel it's fairer to stay with a batch of 4 instead of 4N for ELBO to reduce the impact of different batch configurations, even if the computational budget does not match up (line 259). The authors could also consider reporting the running time of each method, and normalize it somehow when interpreting the results. This could also give readers some idea about the overhead of resampling.
* I'm a bit worried about the use of the biased gradients to train FIVO. Is it possible that part of the superior performance of FIVO over IWAE somehow benefits from training using the incorrect gradients? It would be useful to also report the performance of FIVO trained with the unbiased gradients, so that readers will have an idea of the impact of the large variance coming from the resampling term.
* It's not clear to me why checking the KL divergence from the inference model q(z|x) to the prior p(z) addresses the need to make use of the uncertainty (lines 279 to 283). What's the connection between "the degree to which each model uses the latent states" and the KL divergence? How should one interpret the ELBO's KL collapsing during training? Doesn't the inference model collapsing onto the prior suggest that it carries less information about the data?
* Typo: it should be z ~ q(z|x) in line 111-112. |
nips_2017_172 | Uprooting and Rerooting Higher-Order Graphical Models
The idea of uprooting and rerooting graphical models was introduced specifically for binary pairwise models by Weller [19] as a way to transform a model to any of a whole equivalence class of related models, such that inference on any one model yields inference results for all others. This is very helpful since inference, or relevant bounds, may be much easier to obtain or more accurate for some model in the class. Here we introduce methods to extend the approach to models with higher-order potentials and develop theoretical insights. In particular, we show that the triplet-consistent polytope TRI is unique in being 'universally rooted'. We demonstrate empirically that rerooting can significantly improve accuracy of methods of inference for higher-order models at negligible computational cost. | This paper presents a reparametrization method for inference in undirected graphical models. The method generalizes the uprooting and rerooting approach from binary pairwise graphical models to binary graphical models with factors of arbitrary arity. At the heart of the method is the observation that all non-unary potentials can be made symmetric, and once this has been done, the unary potentials can be changed to pairwise potentials to obtain a completely symmetric probability distribution, one where x and its complement have the exact same probability. This introduces one extra variable, making the inference problem harder. However, we can now "clamp" a different variable from the original distribution, and if we choose the right variable, approximate inference might perform better in the new model than in the original. Inference results in the reparametrized model easily translate to inference results in the original model.
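A tiny numerical check of the pairwise base case of this symmetry may help the reader (the paper's contribution is the extension to higher-order potentials, which this sketch does not cover; the model here is random and illustrative):

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
n = 3
theta = rng.normal(size=n)                  # unary potentials
W = np.triu(rng.normal(size=(n, n)), 1)     # pairwise potentials

def score(x):                               # original model, x in {-1,+1}^n
    return theta @ x + x @ W @ x

def score_up(x0, x):                        # uprooted: unaries become pairwise
    return x0 * (theta @ x) + x @ W @ x

for xs in product([-1, 1], repeat=n):
    x = np.array(xs)
    assert np.isclose(score(x), score_up(1, x))          # clamping x0 = +1 reroots back
    assert np.isclose(score_up(1, x), score_up(-1, -x))  # full symmetry
print("uprooted model is symmetric; clamping x0 recovers the original")
```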
The idea is supported by a number of theorems. I believe the results, but I did not study the proofs in the supplementary material. In addition to the main result, which is that inference results in the new model can be mapped back to inference results in the original model, the authors also discuss how the reparametrized problems relate to each other in the marginal polytope and how factors can be decomposed into sums of "even" factors -- basically, parity over k variables.
There are also some basic experiments on small, synthetic datasets which demonstrate that rerooting can yield better approximations in some cases.
Given that the negligible cost of reparametrization and mapping the results, this seems like a promising way to improve inference. There's no guarantee that the results will be better, of course -- in general, rerooting could lead to better or worse results. Furthermore, we can clamp variables without rerooting, at the cost of doubling the inference time for each clamped variable. (Just run inference twice, once with each value clamped.) If rerooting is only done once, then you're getting at best a factor-of-two speedup compared to clamping. A factor of two is a small constant, and I didn't see any discussion of how rerooting could be extended to yield better results. So the practical benefit from the current work may be marginal, but the ideas are interesting and could be worth developing further in future work.
Quality: I believe the quality is high, though as I mentioned, I have not studied the proofs in the supplementary material. The limitations are not discussed in detail, but this is typical for conference papers.
Clarity: The writing was fairly clear. Figure 2 has very small text and is uninterpretable in a black and white printout.
Originality: I believe this is original, but it's a natural evolution of previous work on rerooting. This is a meaningful increment, supported by theoretical results and token empirical results, but not transformative.
Significance: As stated earlier, it's not clear that these methods will yield any improvements on real-world inference tasks (yet). And if they do, it's at best a factor-of-two speedup. Thus, this work will mainly be relevant to people working in the area of graphical models and probabilistic inference. It's still significant to this group, though, since the results are interesting and worthy of further exploration. |
nips_2017_2051 | Quantifying how much sensory information in a neural code is relevant for behavior
Determining how much of the sensory information carried by a neural code contributes to behavioral performance is key to understanding sensory function and neural information flow. However, there are as yet no analytical tools to compute this information that lies at the intersection between sensory coding and behavioral readout. Here we develop a novel measure, termed the information-theoretic intersection information I_{II}(S;R;C), that quantifies how much of the sensory information carried by a neural response R is used for behavior during perceptual discrimination tasks. Building on the Partial Information Decomposition framework, we define I_{II}(S;R;C) as the part of the mutual information between the stimulus S and the response R that also informs the consequent behavioral choice C. We compute I_{II}(S;R;C) in the analysis of two experimental cortical datasets, to show how this measure can be used to compare quantitatively the contributions of spike timing and spike rates to task performance, and to identify brain areas or neural populations that specifically transform sensory information into choice. | This manuscript proposes a new information-theoretic measure for quantifying the amount of information about the stimuli carried in the neural response that is also used for behavior in perceptual discrimination tasks. This work builds on the previous literature on Partial Information Decomposition. The information-theoretic measure is tested using a simulation and two published experimental datasets.
Overall, I find this a potentially interesting contribution. The manuscript is generally well-written, except that it omits the definitions of a few key quantities. The general approach the authors use is principled and could potentially lead to a better way to examine the relation between the neural response, behavior and the stimuli.
I have a few general as well as technical concerns about the applicability of this measure in typical experiments, which I will detail below.
Major concerns:
Eq. (1) is a key equation; however, the terms on its r.h.s. are not defined, and only intuitions are given. For example, how does one compute SI(C:{S,R})? I think it would be useful to have that spelled out explicitly. Minor comment: there is a typo in the last two terms, which now appear to be identical.
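For concreteness, here is how I would compute such a redundancy term from an estimated joint table. Since the paper's SI(C:{S,R}) is not spelled out, this sketch uses the Williams-Beer I_min redundancy as a stand-in; all function names are mine:

```python
import numpy as np

def specific_info(p_cx):
    """I(C=c; X) for each outcome c, from a joint table p_cx[c, x] (bits)."""
    pc = p_cx.sum(axis=1)                        # p(c)
    px = p_cx.sum(axis=0)                        # p(x)
    p_x_given_c = p_cx / pc[:, None]
    p_c_given_x = p_cx / px[None, :]
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = p_x_given_c * (np.log2(p_c_given_x) - np.log2(pc)[:, None])
    return np.nansum(terms, axis=1)              # zero-probability cells drop out

def redundancy_I_min(p_csr):
    """Williams-Beer redundancy of S and R about C, from p_csr[c, s, r]."""
    pc = p_csr.sum(axis=(1, 2))
    spec_S = specific_info(p_csr.sum(axis=2))    # marginal joint over (c, s)
    spec_R = specific_info(p_csr.sum(axis=1))    # marginal joint over (c, r)
    return float((pc * np.minimum(spec_S, spec_R)).sum())
```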
Eq. (2) gives the definition of the proposed new measure. However, what is the difference between this measure and the bivariate redundancy measure proposed in Harder et al. (PRE, 2013; reference (21) in the submission)? In Harder et al., a measure with a similar structure is proposed, see their Eq. (13). It is not clear to me how the "new definition" the authors propose here differs from Eq. (13) there. Many of the properties of the measure proposed here seem to be shared with the measure defined there. However, since the definition of SI(C:{S,R}) is not given in the current paper, I might be missing something important. In any case, I'd like to understand whether the measure defined in Eq. (2) is indeed different from Eq. (13) of Harder et al. (2013), as this seems critical for judging the novelty of the current paper.
Regardless of whether the measure the authors propose is indeed novel, there is a crucial question of how practically useful this method is. I'd think one of the main challenges for using this method is estimating the joint probability of the stimulus, activity and choice. But in the Supplementary Information, the authors only explain how to proceed after assuming this joint probability has been estimated. Do the authors make any important assumptions to simplify the computations here, in particular for the neural response R? For example, is independence across neurons assumed? Furthermore, in many tasks with complex stimuli, such as in many vision experiments, I'd think it is very difficult to estimate even the prior distribution on the stimulus, let alone the joint probability of the stimulus, the activity and the behavior. I understand that the authors here only focus on simple choice tasks in which the stimuli are modeled as binary variables, but the limitations of this approach should be properly discussed.
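To make this concern concrete, the naive plug-in estimate I have in mind for discrete, binned data is something like the following (hypothetical names; a high-dimensional response R is exactly where this histogram approach breaks down):

```python
import numpy as np

def estimate_joint(stimuli, responses, choices, n_s, n_r, n_c):
    """Histogram (plug-in) estimate of p(c, s, r) from trial-wise data,
    assuming the responses have already been discretized into n_r bins."""
    p = np.zeros((n_c, n_s, n_r))
    for s, r, c in zip(stimuli, responses, choices):
        p[c, s, r] += 1.0
    return p / p.sum()
```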
In the experimental section, the information values seem rather small, e.g., ~10^{-3} bits. Can these numbers be mapped to values of choice probability, and are they compatible? In general, it might be useful to discuss the relation of this approach to the approach of using choice probability, both in terms of the pros and the cons, as well as the cases in which they relate to each other.
In summary, my opinion is that the theoretical aspects of the paper seem incremental. However, I still find the paper to be of potential interest to neurophysiologists. I have some doubts about the general applicability of the method, as well as the robustness of the conclusions that can be drawn from it, but this paper may still represent an interesting direction and may prove useful for studying the neural code of choice behavior.
Minor:
In Eq. (2), the notation I_{II}(R) is confusing, because it does not mention S and C, on which the quantity nonetheless depends. This further leads to confusion for I_{II}(R1,R2) at the top of line 113. What does it mean? |
nips_2017_2237 | Temporal Coherency based Criteria for Predicting Video Frames using Deep Multi-stage Generative Adversarial Networks
Predicting the future from a sequence of video frames has recently been a sought-after yet challenging task in the fields of computer vision and machine learning. Although there have been efforts on tracking using motion trajectories and flow features, the complex problem of generating unseen frames has not been studied extensively. In this paper, we deal with this problem using convolutional models within a multi-stage Generative Adversarial Networks (GAN) framework. The proposed method uses two stages of GANs to generate a crisp and clear set of future frames. Although GANs have been used in the past for predicting the future, none of those works consider the relation between subsequent frames in the temporal dimension. Our main contribution lies in formulating two objective functions based on the Normalized Cross Correlation (NCC) and the Pairwise Contrastive Divergence (PCD) for solving this problem. This method, coupled with the traditional L1 loss, has been experimented with on three real-world video datasets, viz. Sports-1M, UCF-101 and KITTI. Performance analysis reveals superior results over recent state-of-the-art methods. | This method provides two contributions for next-frame prediction from video sequences.
The first is the introduction of a normalized cross-correlation loss, which provides a better similarity score to judge whether the predicted frame is close to the true future. The second is the pairwise contrastive divergence loss, based on the idea of similarity of image features. Results are presented on the UCF101 and KITTI datasets, and a numerical comparison with Mathieu et al. (ICLR 2016) using image similarity metrics (PSNR, SSIM) is performed.
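For reference, a minimal sketch of the normalized cross-correlation score I understand the first loss to be built on (the paper's exact windowing and scaling may differ):

```python
import numpy as np

def ncc(pred, target, eps=1e-8):
    """Normalized cross-correlation between two frames; higher is better."""
    p = pred - pred.mean()
    t = target - target.mean()
    return float((p * t).sum() / (np.linalg.norm(p) * np.linalg.norm(t) + eps))

# As a training objective one would minimize, e.g., 1 - ncc(pred, target).
```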
Comments:
The newly proposed losses are interesting, but I suspect a problem in the evaluation.
I fear that the evaluation protocol comparing the present approach to previous work [13] differs from the one used in [13]: in the table, the authors don't mention evaluating their method on 10% of the test set and only in the moving areas, as Mathieu et al. did.
Did the authors perform the sanity check of comparing the obtained PSNR/SSIM scores of frame copy to see if they match the ones reported in [13]?
Please be more specific about the details of the evaluation (full test set? Full images? -> in this case, what are the scores for a frame copy baseline?)
The authors answered my concerns in the rebuttal.
- Providing generated videos for such a paper submission would be appreciated.
In the displayed examples, including the ones from the supplementary material, the movements are barely visible.
- Predicting several frames at a time versus one?
In Mathieu et al.'s work, results showed that predicting only one frame at a time worked better. Have you tried such an experiment?
l.138: I don't understand the h+4?
Minor
The first sentence, "Video frame prediction has always been one of the fundamental problems in computer vision as it caters to a wide range of applications [...]", is somewhat controversial in my opinion: the task was, to the best of my knowledge, introduced only in 2014. As for the applications, even if I believe they will exist soon, I am not aware of systems using next-frame predictions yet.
l. 61 uses -> use
l. 65 statc -> statistical ?
l 85. [5]is -> [5] is
Eq 4: use subscripts instead of exponents for indexes
l 119 (b)P -> (b) P
l 123-125 weird sentence
l. 142 ,u -> , u
l. 153 tries to -> is trained to
l. 163 that, -> that
l. 164 where, -> where
[13] ICLR 16
[17] ICLR 16
please check other references |
nips_2017_3299 | On Frank-Wolfe and Equilibrium Computation
We consider the Frank-Wolfe (FW) method for constrained convex optimization, and we show that this classical technique can be interpreted from a different perspective: FW emerges as the computation of an equilibrium (saddle point) of a special convex-concave zero-sum game. This saddle-point trick relies on the existence of no-regret online learning both to generate a sequence of iterates and to provide a proof of convergence through vanishing regret. We show that our stated equivalence has several nice properties, as it exhibits a modularity that gives rise to various old and new algorithms. We explore a few such resulting methods, and provide experimental results to demonstrate correctness and efficiency. | The paper draws a connection between the classical Frank-Wolfe algorithm for constrained smooth & convex optimization (aka the conditional gradient method) and the use of online learning algorithms to solve zero-sum games. This connection is made by casting the constrained convex optimization problem as a convex-concave saddle-point problem between a player that takes actions in the feasible set and another player that takes actions in the gradient space of the objective function. This saddle-point problem is derived using the Fenchel conjugate of the objective. Once this is achieved, a known and well-explored paradigm of using online learning algorithms can be applied to solve this saddle-point problem (where each player applies its own online algorithm to either minimize or maximize), and the average regret bounds obtained by the algorithms translate back to the approximation error with respect to the objective on the original offline convex optimization problem.
The authors show that by applying this paradigm with different kinds of online learning algorithms, they can recover the original Frank-Wolfe algorithm (though with a slightly different step size and a rate worse by a factor of log(T)) and several other variants, including one that uses the averaged gradient, one that uses stochastic smoothing for non-smooth objectives, and even a new variant that converges for non-smooth objectives (without smoothing) when the feasible set is strongly convex.
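For readers less familiar with the construction, the reduction as I understand it rests on the Fenchel-conjugate identity

```latex
\min_{x \in \mathcal{X}} f(x)
  \;=\; \min_{x \in \mathcal{X}} \max_{y}
        \Big[ \langle x, y \rangle - f^{*}(y) \Big],
\qquad
f^{*}(y) \;:=\; \sup_{x} \big[ \langle x, y \rangle - f(x) \big],
```

with the x-player best-responding via the linear optimization oracle over \mathcal{X} and the y-player running a no-regret algorithm; the players' average regret then bounds the suboptimality of the averaged iterate.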
***Criticism***
Pros:
I think this is a really nice paper, and I enjoyed reading it. While, in hindsight, the connection between offline optimization and zero-sum games via Fenchel conjugates is not surprising, I find this perspective very refreshing and original, in particular in light of the connection with online learning. I think NIPS is the best place for works making such a connection between offline optimization and online learning. I truly believe this might open a window to obtain new and improved projection-free algorithms via the lens of online learning, and this is the main and worthy contribution of this paper, in my opinion. This view already allows the authors to give a FW-like algorithm for non-smooth convex optimization over strongly convex sets which overcomes the difficulties of the original method. There is an easy example of minimizing a piece-wise linear function over the Euclidean unit ball for which FW fails, but this new variant works (the trick is indeed to use the averaged gradient).
Cons:
- I find Section 4.3 to be very misleading. Algorithm 5 is a first-order method for convex non-smooth optimization. We know that in this setting the best possible (worst-case) rate is 1/\eps^2. The construction is simply minimization of a piece-wise linear function over a unit Euclidean ball (which is certainly strongly convex with a constant strong convexity parameter). Hence, claiming that Algorithm 5 obtains a log(T)/T rate is simply not true and completely misleading. Indeed, when examining the details of the above ``bad" construction, we have that L_T will be of order \eps when aiming for \eps error. The authors should do a better job of correctly placing the result in the context of offline optimization, and not simply doing plug&play from the online world. Again, stating the rate as log(T)/T is simply misleading (the accounting is sketched below). In the same vein, I find the claims on lines 288-289 to be completely false.
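To spell out the accounting behind this complaint, under the assumption that Algorithm 5's guarantee scales inversely with L_T (the norm of the averaged gradient):

```latex
\epsilon_T \;\lesssim\; \frac{\log T}{T \, L_T},
\qquad
L_T = \Theta(\epsilon)
\;\;\Longrightarrow\;\;
T \;=\; \tilde{O}\!\left( \frac{1}{\epsilon^{2}} \right),
```

i.e., exactly the generic non-smooth rate, not an improvement over it.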
It is also disappointing and in a sense unsatisfactory that the algorithm completely fails when gradient=0 is achievable for the problem at hand. I do hope the authors will be able to squeeze more out of this very nice result (as I said, it overcomes a standard lower bound for the original FW that I had in mind).
Minor things:
- When referring to previous works [13,15], the authors wrongly claim that these rely on a stronger oracle. Those works also require only the standard linear oracle, but use it to construct a stronger oracle. Hence, they work for every polytope.
- In Definition 2 it is better to state it with the gradient instead of a subgradient, since we assume the function is smooth.
- Line 12: replace "an" with "a".
- On several occasions when using ``...=argmin", note that in general the argmin is a set, not a singleton. The same goes for the subdifferential.
- Line 103: I believe ``often times" should be a single word.
- Theorem 3: should state that there exists a choice for the sequence \gamma_t such that...
- Theorem 5: should also state the gamma_t in the corresponding Algorithm 2
- Supp material, Definition on line 82: the centered equation is missing >=0. Also, ``properly contains" on line 84 is not defined.
- Supp material: Section 4 is missing (though would love to hear the details (: )
*** post rebuttal ***
I've read the rebuttal and am overall satisfied with the authors' response. |
nips_2017_3430 | Style Transfer from Non-Parallel Text by Cross-Alignment
This paper focuses on style transfer on the basis of non-parallel text. This is an instance of a broad family of problems including machine translation, decipherment, and sentiment modification. The key challenge is to separate the content from other aspects such as style. We assume a shared latent content distribution across different text corpora, and propose a method that leverages refined alignment of latent representations to perform style transfer. The transferred sentences from one style should match example sentences from the other style as a population. We demonstrate the effectiveness of this cross-alignment method on three tasks: sentiment modification, decipherment of word substitution ciphers, and recovery of word order. | This paper presents a method for learning style transfer models based on non-parallel corpora. The premise of the work is that it is possible to disentangle the style from the content, and that when there are two different corpora on the same content but in distinctly different styles, it is possible to induce the content and the style components. While part of me is somewhat skeptical whether it is truly possible to separate out the style from the content of natural language text, and I tend to think the sentiment and word-reordering applications presented in this work correspond more to the content of a text than to its style, I do believe that this paper presents a very creative and interesting exploration that makes both theoretical and empirical contributions.
— (relatively minor) questions:
- In the aligned auto-encoder setting, the discriminator is based on a simple feedforward NN, while in the cross-aligned auto-encoder setting, the discriminators are based on convolutional NNs. I imagine ConvNets make stronger discriminators, so it'd be helpful if the paper could shed light on how much the quality of the discriminators influences the overall performance of the generators.
- Some details of the implementation and experiments are missing, which might matter for reproducibility. For example, what kind of RNNs are used for the encoder and the generator? What kind of convolutional networks are used for the discriminators? What are the vocabulary sizes (and the corresponding UNK scheme) for the different datasets? The description of the word-reordering dataset is missing. If the authors share their code and the datasets, that would help resolve most of these questions, but it'd still be good to have some of these details specified in the final manuscript. |
nips_2017_1063 | QSGD: Communication-Efficient SGD via Gradient Quantization and Encoding
Parallel implementations of stochastic gradient descent (SGD) have received significant research attention, thanks to its excellent scalability properties. A fundamental barrier when parallelizing SGD is the high bandwidth cost of communicating gradient updates between nodes; consequently, several lossy compression heuristics have been proposed, by which nodes only communicate quantized gradients. Although effective in practice, these heuristics do not always converge. In this paper, we propose Quantized SGD (QSGD), a family of compression schemes with convergence guarantees and good practical performance. QSGD allows the user to smoothly trade off communication bandwidth and convergence time: nodes can adjust the number of bits sent per iteration, at the cost of possibly higher variance. We show that this trade-off is inherent, in the sense that improving it past some threshold would violate information-theoretic lower bounds. QSGD guarantees convergence for convex and non-convex objectives, under asynchrony, and can be extended to stochastic variance-reduced techniques. When applied to training deep neural networks for image classification and automated speech recognition, QSGD leads to significant reductions in end-to-end training time. For instance, on 16 GPUs, we can train the ResNet-152 network to full accuracy on ImageNet 1.8× faster than the full-precision variant. | Update: I slightly decrease the grade due to the mismatch between theoretical and practical results, which could be better covered. Still, this paper has strong experimental results and some theoretical results. I would encourage the authors to improve on the gap between the two.
In this paper the authors introduce Quantized SGD (QSGD), a scheme for reducing the communication cost of SGD when performing distributed optimization. The quantization scheme is useful as soon as one has to transmit gradients between different machines. The scheme is inspired by 1BitSGD, where 32 bits are used to transfer the norm of the gradient and then 1 bit per coordinate to encode the sign. The authors extend this approach by allowing extra bits per coordinate to encode the scale of the coordinate. They obtain an unbiased stochastic quantized version of the gradient. Therefore, one can still see QSGD as an SGD scheme, but with extra variance due to the quantization. Because of the quantization of the scale of each coordinate, small coordinates will be approximated by 0 and therefore the quantized gradient will be sparse, unlike in 1BitSGD, which gives a dense gradient. The authors use this property to further compress the gradient by encoding the non-zero coordinates and only transferring information for those. In order to encode the positions of those coordinates efficiently, the authors use the Elias code, a variable-length integer encoding scheme.
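For concreteness, here is my reading of the quantization step as a sketch (names and API are mine, not the paper's):

```python
import numpy as np

def qsgd_quantize(v, s, rng=None):
    """Stochastic s-level quantization of a gradient vector v: returns the
    norm, the signs, and integer levels in {0, ..., s}."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(v)
    if norm == 0.0:
        return norm, np.zeros(v.shape, np.int8), np.zeros(v.shape, np.int64)
    scaled = s * np.abs(v) / norm                   # each entry lies in [0, s]
    low = np.floor(scaled)
    # round up with probability equal to the fractional part -> unbiased
    levels = (low + (rng.random(v.shape) < scaled - low)).astype(np.int64)
    return norm, np.sign(v).astype(np.int8), levels

def qsgd_dequantize(norm, signs, levels, s):
    return norm * signs * levels / s                # expectation equals v
```

Unbiasedness is immediate since the expected level of coordinate i is exactly s|v_i|/norm; the extra variance is the price paid for the compression.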
The authors then apply this approach to deep learning model training.
The authors also present how to quantize SVRG, a variance-reduced SGD method with a linear rate of convergence that can be used for convex optimization.
Strengths
=========
- The introduction is very clear and the authors do a great job of presenting the source of inspiration for this work, 1BitSGD. They also explain the limits of 1BitSGD (namely, that it has no convergence guarantees) and clearly state the contribution of their paper (the ability to choose the number of bits and therefore the trade-off between precision and communication cost, as well as having convergence guarantees).
- The authors reintroduce the main results on SGD in order to properly relate its convergence to the variance or second moment of the gradients (the kind of bound I mean is sketched after this list). This allows the reader to quickly understand how the quantization will impact convergence without requiring in-depth technical knowledge of SGD.
- As a consequence of the above points, this paper is very well self-contained and can be read even with little knowledge about quantization or SGD training.
- The authors did a massive amount of investigation work, covering both synchronous and asynchronous SGD, convex and non-convex settings, as well as variance-reduced SGD. Despite all this work, the authors did a good job of keeping the reading complexity low by properly separating each concern, sticking to the simple case in the main paper (synchronous SGD) and keeping more complex details for the supplementary materials.
- This paper contains a large number of experiments, on different datasets and different architectures, comparing with existing methods such as 1BitSGD.
- This paper offers a non-trivial extension of 1BitSGD. For instance, this method introduces sparsity in the gradient updates while keeping convergence guarantees.
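The kind of bound I am referring to in the second strength above: for smooth convex f, with a second-moment bound E||g_t||^2 <= B on the stochastic gradients and R = ||x_1 - x*||, the familiar guarantee for the averaged iterate reads roughly

```latex
\mathbb{E}\big[ f(\bar{x}_T) \big] - \min_{x} f(x)
  \;\le\; O\!\left( \frac{L R^{2}}{T} \;+\; \frac{R \sqrt{B}}{\sqrt{T}} \right),
```

so any extra variance introduced by quantization enters the rate only through the second-moment bound B.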
Improvements
============
- The result on sparsity in Lemma 3.1, whose proof is in Lemma A.5 in the supplementary material, seems wrong to me. On line 514 of the supplementary materials, the first `u_i` should be replaced by `|u_i|`, but more importantly, according to the definition on line 193 of the main paper, the probability is not `|u_i|` but `s |u_i|`. Thus, the sparsity bound becomes `s^2 + sqrt(n) s` (a short derivation is sketched after this list). I would ask the authors to confirm this and update any conclusion that would be invalidated. This will have an impact on Lemma A.2. I would also encourage the authors to move this proof next to the rest of the proof of Lemma 3.1.
- Line 204: the description of Elias(k) is not super clear to me; giving a few examples would make it better (one such example is given after this list). The same applies to the description in the supplementary materials. Besides, the authors should cite the original Elias paper [1].
- The authors could comment on how far synchronous data-parallel SGD is from current state-of-the-art practice in distributed deep learning optimization. My own experience with mini-batches is that although theoretically and asymptotically using a batch size of B can make convergence take B times fewer iterations, this is not always true, especially far from the optimum; see [2] for instance. In practice the `L R^2/T` term can dominate training for a significant amount of time. Approaches other than mini-batching exist, such as EASGD. I am not that familiar with the state of the art in distributed deep learning, but I believe this paper could benefit from giving more details on state-of-the-art techniques and on how QSGD can improve them (for instance, for EASGD I am guessing that QSGD can allow for more frequent synchronization between the workers). Besides, the model presented in this paper requires O(W^2) communication at every iteration, where W is the number of workers, while only O(W) could be used with a central parameter server; thus this paper's setup is particularly beneficial to QSGD. Such an approach is briefly described in the supplementary materials, but as far as I could see there was no experiment with it.
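For the Lemma 3.1 point above, one way to see the corrected bound, writing u = v / ||v||_2 so that each coordinate survives with probability at most min(1, s|u_i|):

```latex
\mathbb{E}\big[ \| Q_s(v) \|_0 \big]
  \;\le\; \big| \{ i : |u_i| \ge 1/s \} \big|
          \;+\; \sum_{i :\, |u_i| < 1/s} s \, |u_i|
  \;\le\; s^{2} + s \sqrt{n},
```

since \sum_i u_i^2 = 1 allows at most s^2 coordinates with |u_i| \ge 1/s, and \sum_i |u_i| \le \sqrt{n} by Cauchy-Schwarz.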
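And for the Elias(k) point above, the kind of worked example I am asking for; the paper's code may be a recursive (omega-style) variant, so plain Elias gamma is only an illustration:

```python
def elias_gamma(k):
    """Elias gamma code of a positive integer k, as a bit string:
    1 -> '1', 2 -> '010', 5 -> '00101', 9 -> '0001001'."""
    assert k >= 1
    binary = bin(k)[2:]                  # k in binary, no leading zeros
    return "0" * (len(binary) - 1) + binary
```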
Overall this paper contains novel ideas, is serious and well written, and therefore I believe it belongs at NIPS. It has detailed theoretical and experimental arguments in favor of QSGD. As noted above, I would have enjoyed more context on the impact on state-of-the-art distributed methods, and there is a mistake in one of the proofs that needs to be corrected.
References
==========
[1] Elias, P. (1975). Universal codeword sets and representations of the integers. IEEE transactions on information theory.
[2] Sutskever, Ilya, et al. On the importance of initialization and momentum in deep learning. ICML 2013 |