id | paper_text | review
---|---|---|
nips_2017_1937 | A Universal Analysis of Large-Scale Regularized Least Squares Solutions
A problem that has been of recent interest in statistical inference, machine learning and signal processing is that of understanding the asymptotic behavior of regularized least squares solutions under random measurement matrices (or dictionaries). The Least Absolute Shrinkage and Selection Operator (LASSO, or least-squares with ℓ1 regularization) is perhaps one of the most interesting examples. Precise expressions for the asymptotic performance of LASSO have been obtained for a number of different cases, in particular when the elements of the dictionary matrix are sampled independently from a Gaussian distribution. It has also been empirically observed that the resulting expressions remain valid when the entries of the dictionary matrix are independently sampled from certain non-Gaussian distributions. In this paper, we confirm these observations theoretically when the distribution is sub-Gaussian. We further generalize the previous expressions for a broader family of regularization functions and under milder conditions on the underlying random, possibly non-Gaussian, dictionary matrix. In particular, we establish the universality of the asymptotic statistics (e.g., the average quadratic risk) of LASSO with non-Gaussian dictionaries. | The paper studies the universality of high-dimensional regularized least squares problems with respect to the distribution of the design matrix. The main result of the paper is that, under moment conditions on the entries of the design matrix A, the optimal value of the least squares problem and characteristics of the optimal solution are independent of the details of the distribution of the A_ij’s.
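For concreteness, the object whose asymptotics are characterized is the regularized least-squares estimate (notation mine, assembled from the abstract):

```latex
\hat{x} \;=\; \arg\min_{x \in \mathbb{R}^n} \; \tfrac{1}{2}\,\| y - A x \|_2^2 \;+\; \lambda f(x)
```

with A the random dictionary (i.i.d. sub-Gaussian entries) and f a convex regularizer (f = ||.||_1 recovers LASSO); the universality statement is that asymptotic statistics of \hat{x}, e.g. its empirical distribution and the average quadratic risk, depend on the law of the A_ij's only through its first two moments.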
The proof uses the Lindeberg interpolation method, in a clever incremental way, to transform the problem to its Gaussian counterpart while preserving the characteristics of the optimization, then uses Gaussian comparison theorems (variants of Gordon’s escape through a mesh result) to analyze the Gaussian setting.
The consequences are nice formulas predicting the exact values of the cost, the limiting empirical distribution of the entries of the solution and of the error vector.
These formulas were previously known to be correct only in restricted settings.
These exact asymptotic characterizations of random estimation problems have been studied by several authors and in different settings. There are similarities, at least in spirit, to the work of El Karoui [1], and Donoho-Montanari [2] on robust high-dimensional regression (a problem that could be seen as the dual of the one considered here), and to many papers that have appeared in the information-theory community on “single letter formulas” characterizing the MMSE of various estimation channels (linear regression, noisy rank-one matrix estimation, CDMA,…) See e.g., [3,4,5] and references therein.
Although most of these efforts have mainly dealt with the Gaussian design setting, some have proved “channel universality”, which is essentially the focus of this paper. The authors should probably fill this bibliographic gap.
Second, I would like to point out a different approach to the problem, inspired by rigorous work in statistical physics, one that will make the connection to the above-mentioned work clearer: one could consider the log-sum-exp approximation to the optimization problem, with an “inverse temperature” parameter \beta; one recovers the optimization in the limit \beta -> \infty. One could study the convergence of the log-partition function obtained in this way, and obtain a formula for it, which in this setting would bear the name of “the replica-symmetric (RS) formula”.
The “essential optimization” problem found by the authors is nothing else but the zero-temperature limit of this RS formula. (One would of course have to justify swapping the limits n \to \infty and \beta \to \infty.)
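To make the suggested route concrete, here is a minimal sketch of the finite-temperature construction (my notation, not the authors'):

```latex
Z_\beta \;=\; \int \exp\big(-\beta\, \mathcal{E}(x)\big)\, dx,
\qquad
-\tfrac{1}{\beta} \log Z_\beta \;\xrightarrow{\;\beta \to \infty\;}\; \min_x \mathcal{E}(x),
```

where \mathcal{E} is the regularized least-squares objective; the RS formula would be the candidate limit of the normalized log-partition function, and its zero-temperature limit should recover the "essential optimization" problem, modulo the exchange of limits mentioned above.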
This being said, the techniques used in this work seem to be of a different nature, and add to the repertoire of approaches to compute asymptotic values of these challenging problems.
[1] Asymptotic behavior of unregularized and ridge-regularized high-dimensional robust regression estimators : rigorous results https://arxiv.org/abs/1311.2445
[2] High Dimensional Robust M-Estimation: Asymptotic Variance via Approximate Message Passing https://arxiv.org/abs/1310.7320
[3] The Mutual Information in Random Linear Estimation : https://arxiv.org/abs/1607.02335
[4] Mutual information for symmetric rank-one matrix estimation: A proof of the replica formula https://arxiv.org/abs/1606.04142
[5] Fundamental limits of symmetric low-rank matrix estimation : https://arxiv.org/abs/1611.03888 |
nips_2017_1294 | Pixels to Graphs by Associative Embedding
Graphs are a useful abstraction of image content. Not only can graphs represent details about individual objects in a scene but they can capture the interactions between pairs of objects. We present a method for training a convolutional neural network such that it takes in an input image and produces a full graph definition. This is done end-to-end in a single stage with the use of associative embeddings. The network learns to simultaneously identify all of the elements that make up a graph and piece them together. We benchmark on the Visual Genome dataset, and demonstrate state-of-the-art performance on the challenging task of scene graph generation. | This paper proposes the use of an Hourglass net with associative embeddings to generate a graph (relating objects and their relationships) from an image. The model is presented as one of end-to-end learning. The Hourglass net provides heatmaps for objects and relationships, feature vectors are extracted from the top locations in the heatmaps and used with FC layers for predicting object classes, bounding boxes and relationships among objects. Associative embeddings are used to link vertices and edges, each vertex having a unique embedding. Experimental results show the high performance of the proposed methodology.
I think this is valuable work, yet I have doubts about whether it meets the NIPS standards. On the one hand,
* the performance of this methodology is quite good on the subtasks approached. On the other hand, the novelty/originality of this work is somewhat limited:
* it is based on a standard Hourglass network, standard in the sense that nothing novel seems to be added to the network: it simply estimates heatmaps and the FC layers provide the estimates
* it incorporates associative embeddings to generate the graph. Whereas this is an interesting approach, it is quite similar to the way it is used in [20] https://arxiv.org/pdf/1611.05424.pdf
Hence, this work is another use of Hourglass nets + associative embeddings as proposed in [20] (sketched below), obtaining good performance. I do not think this is enough of a contribution to be presented at NIPS.
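For reference, the grouping mechanism of [20] that the comparison hinges on is, in essence, a pull/push loss on embedding "tags"; here is a minimal PyTorch-style sketch (names, shapes, and the exp-based push term are illustrative, not the authors' code):

```python
import torch

def associative_embedding_loss(vertex_tags, edge_tags_src, edge_tags_dst,
                               src_ids, dst_ids):
    """vertex_tags:      (V, d) one embedding "tag" per detected vertex.
    edge_tags_src/dst:   (E, d) tags predicted at each relationship slot.
    src_ids/dst_ids:     (E,) vertex indices each edge should attach to."""
    # pull: an edge's two tags should match the tags of its endpoint vertices
    pull = ((edge_tags_src - vertex_tags[src_ids]) ** 2).sum(-1).mean() \
         + ((edge_tags_dst - vertex_tags[dst_ids]) ** 2).sum(-1).mean()
    # push: distinct vertices should carry well-separated tags
    # (pairwise penalty; the diagonal self-term only adds a constant offset)
    diffs = vertex_tags.unsqueeze(0) - vertex_tags.unsqueeze(1)
    push = torch.exp(-diffs.pow(2).sum(-1)).mean()
    return pull + push
```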
Further detailed comments:
* The authors themselves acknowledge one of their contributions is a "novel use" of associative embeddings. Whereas I agree there is novelty in the usage of this methodology, the contribution on this side is limited (in the NIPS sense)
* The authors fail at explaining both the details and the intuition of the proposed methodology (in all parts of the paper, but mostly in the intro, where this info is most important). The authors assume the reader has the same degree of understanding of their proposed methodology (not only of general technical material related to the approach, but also of the methodology itself), making the paper kind of tedious to read and a bit complicated to follow
* The authors should make the differences more explicit and emphasize the importance of the contribution of this paper. Certainly it is a different problem than that approached in [20], but is that difference enough of a contribution to publish a paper at NIPS? Please argue the differences and provide arguments supporting the strength of your contribution
I removed a comment on overlap, my recommendation is not based on such assumption
Typo
* "them. [18]" |
nips_2017_3392 | The Neural Hawkes Process: A Neurally Self-Modulating Multivariate Point Process
Many events occur in the world. Some event types are stochastically excited or inhibited (in the sense of having their probabilities elevated or decreased) by patterns in the sequence of previous events. Discovering such patterns can help us predict which type of event will happen next and when. We model streams of discrete events in continuous time, by constructing a neurally self-modulating multivariate point process in which the intensities of multiple event types evolve according to a novel continuous-time LSTM. This generative model allows past events to influence the future in complex and realistic ways, by conditioning future event intensities on the hidden state of a recurrent neural network that has consumed the stream of past events. Our model has desirable qualitative properties. It achieves competitive likelihood and predictive accuracy on real and synthetic datasets, including under missing-data conditions. | The proposed submission deals with an interesting and important problem: how to automatically learn the potentially complex temporal influence structures for the multivariate Hawkes process. The proposed neurally self-modulating multivariate point process model can capture a range of superadditive, subadditive, or even subtractive influence structures from the historical events on the future event, and the model is quite flexible. Also, the model is evaluated on both the synthetic and the real data, and yields a competitive likelihood and prediction accuracy under missing data.
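For reference in the comments that follow, the log-likelihood whose integral term the Monte Carlo trick (comment 2) must approximate has the standard point-process form (notation mine):

```latex
\log p\big(\{(t_i, k_i)\}\big)
\;=\; \sum_i \log \lambda_{k_i}\!\big(t_i \mid \mathcal{H}_{t_i}\big)
\;-\; \int_0^T \sum_k \lambda_k\big(s \mid \mathcal{H}_s\big)\, ds,
```

where \lambda_k is the intensity of event type k given the history \mathcal{H}, here obtained by passing a linear readout of the continuous-time LSTM state through the non-linear transfer function mentioned in comment 1.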
Comments:
1. Compared with existing work, one potential contribution of this submission is the increased flexibility of the proposed model. First, in modeling the intensity function, a non-linear transfer function is introduced and applied to the originally defined intensity for multivariate Hawkes processes. Second, it models the hidden state h(t) with a continuous-time LSTM.
2. Since the model is more flexible and has more parameters to estimate, it will be harder to accurately learn the parameters. The algorithm basically follows the previous Back Propagation Through Time (Du et al, 2016) strategy and proposes using the Monte Carlo trick in evaluating the integral quantity in the likelihood. I think the convergence of the current algorithm might be very slow since there are so many parameters. Since the learning issue in this model is challenging, it is worthwhile discussing the computational efficiency further. A more tractable and efficient learning algorithm might be needed for dealing with large-scale problems.
3. For the numerical section, only the likelihood under missing data is evaluated. It would be more convincing to also add results for an intensity-accuracy evaluation. |
nips_2017_2595 | Hindsight Experience Replay
Dealing with sparse rewards is one of the biggest challenges in Reinforcement Learning (RL). We present a novel technique called Hindsight Experience Replay which allows sample-efficient learning from rewards which are sparse and binary and therefore avoids the need for complicated reward engineering. It can be combined with an arbitrary off-policy RL algorithm and may be seen as a form of implicit curriculum. We demonstrate our approach on the task of manipulating objects with a robotic arm. In particular, we run experiments on three different tasks: pushing, sliding, and pick-and-place, in each case using only binary rewards indicating whether or not the task is completed. Our ablation studies show that Hindsight Experience Replay is a crucial ingredient which makes training possible in these challenging environments. We show that our policies trained on a physics simulation can be deployed on a physical robot and successfully complete the task. The video presenting our experiments is available at https://goo.gl/SMrQnI. | Summary:
This paper introduces a method called hindsight experience replay (HER), which is designed to improve performance in sparse-reward RL tasks. The basic idea is to recognize that although a trajectory through the state space might fail to find a particular goal, we can imagine that the trajectory ended at some other goal state. This simple idea, when realized via the experience replay buffer of a deep Q-learning agent, improves performance in several interesting simulated arm manipulation domains.
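The relabeling idea fits in a few lines; here is a minimal sketch of the "future" goal-sampling strategy (illustrative Python, not the authors' code, assuming goals live in the same space as achieved states or a known mapping of them):

```python
import random

def her_relabel(episode, reward_fn, k=4):
    """Relabel an episode with hindsight goals (the "future" strategy).

    episode:   list of (state, action, next_state, goal) tuples
    reward_fn: maps (achieved_state, goal) to the sparse binary reward,
               e.g. 0.0 if the goal is reached and -1.0 otherwise
    """
    transitions = []
    for t, (s, a, s_next, g) in enumerate(episode):
        # keep the original transition with the originally intended goal
        transitions.append((s, a, reward_fn(s_next, g), s_next, g))
        # additionally store k copies relabeled with goals achieved later
        for _ in range(k):
            f = random.randint(t, len(episode) - 1)
            g_new = episode[f][2]  # a state actually reached, in hindsight
            transitions.append((s, a, reward_fn(s_next, g_new), s_next, g_new))
    return transitions  # feed these to any off-policy replay buffer
```

Since the relabeled goals were actually achieved, a nonzero learning signal appears even when the original goal was never reached, which is the whole point.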
Decision:
I think this paper is an accept. The main idea is conceptually simple (in a good way). The paper is clear, and includes results on a toy domain to explain the idea, and then progresses to show good performance on simulated arm tasks. I have several questions that I invite the authors to respond to, in order to refine my final judgement of the paper.
I will start with high-level questions. The auxiliary task work [1] (also hinted at in the Horde paper) shows that learning to predict and control many reward functions in parallel results in improved performance on the main task (for example, the score in an Atari game). One can view the auxiliary tasks as providing dense reward signals for training a deep network and thus regularizing the agent’s representation of the world to be useful for many tasks. In the auxiliary task work, pixel control tasks seem to improve performance the most, but the more general idea of feature control was also presented. I think the paper under review should discuss the connections with this work, and investigate what it might mean to use the auxiliary task approach in the simulated arm domain. Are both methods fundamentally getting at the same idea? What are the key differences?
The work under review is based on episodic tasks with goal states. How should we think about continuing tasks, common in continuous control domains? Is the generalization straightforward under the average-reward setting?
The related work section mentions prioritized DQN is orthogonal. I don’t agree. Consider the possibility that prioritized DQN performs so well in all the domains you tested that there remains no room for improvement when adding HER. I think this should be validated empirically.
One could consider options as another approach to dealing with sparse reward domains: options allow the agent to jump through the state space, perhaps making it easier to find the goal state. Clearly there are challenges with discovering options, but the connection is worth mention.
The result on the real robot is discussed too quickly and doesn’t seem to support the main ideas of the paper well. It feels like: “let's add a robot result to make the paper appear stronger”. I suggest presenting a clearer result or dropping it and pulling in something else from the appendix.
Footnote 3 says count-based exploration method was implemented by first discretizing the state. My understanding of Bellemare’s work was that it was introduced as a generalization of tabular exploration bonuses to the case of general function approximation. Was discretization required here, and if not what effect does it have on performance?
The paper investigates the combination of HER with shaped rewards, showing the shaping actually hurts performance. This is indeed interesting, but a nice companion result would be investigating the combination of HER and shaping in a domain/task where we know shaping alone works well.
[1] Jaderberg, M., Mnih, V., Czarnecki, W. M., Schaul, T., Leibo, J. Z., Silver, D., & Kavukcuoglu, K. (2016). Reinforcement learning with unsupervised auxiliary tasks. arXiv preprint arXiv:1611.05397. |
nips_2017_2904 | On Fairness and Calibration
The machine learning community has become increasingly concerned with the potential for bias and discrimination in predictive models. This has motivated a growing line of work on what it means for a classification procedure to be "fair." In this paper, we investigate the tension between minimizing error disparity across different population groups while maintaining calibrated probability estimates. We show that calibration is compatible only with a single error constraint (i.e. equal false-negative rates across groups), and show that any algorithm that satisfies this relaxation is no better than randomizing a percentage of predictions for an existing classifier. These unsettling findings, which extend and generalize existing results, are empirically confirmed on several datasets. | Note: Equal opportunity as used in this paper was originally defined as “Equalized odds” in Hardt et al. 2017. This needs to be fixed in multiple places.
The paper analyzes the interaction between relaxations of equalized odds criteria and classifier calibration. The work extends and builds on the results of a previous work [23].
1. Incompatibility of calibration and non-discrimination: The previous work [23] already establishes the incompatibility of matching both FPR and FNR (equalized odds) and calibration, in the exact as well as the approximate sense (except for trivial classifiers). This result has been extended to satisfying non-discrimination with respect to two generalized cost functions expressed as linear combinations of the FPR and TPR of the respective groups (formalized in the sketch after this list). This generalization, while new, is not entirely surprising.
2. The authors show that if equalized odds is relaxed to non-discrimination with respect to a single cost function expressed as a linear combination of FPR and FNR, then the relaxation and calibration can be jointly achieved under certain feasibility conditions.
3. Additional experiments are provided to compare the effects of enforcing non-discrimination and/or calibration.
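To fix notation for points 1-2 and for the comment on lines 165-166 below, here is my formalization of the setup (a sketch, not quoted from the paper): for each group t, a generalized cost is a non-negative linear combination of error rates,

```latex
g_t \;=\; a_t\,\mathrm{FPR}_t \;+\; b_t\,\mathrm{FNR}_t, \qquad a_t, b_t \geq 0,
```

non-discrimination with respect to g requires g_{t_1} = g_{t_2} across groups, and a score \hat{s} is calibrated within group t if \Pr(Y = 1 \mid \hat{s}(X) = p, \text{group} = t) = p for all p.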
While there are some new results and additional experiments, the conclusions are fairly similar to [23], essentially establishing incompatibility of satisfying equalized-odds style non-discrimination and classification calibration.
Additional comments:
1. Why was EO trained not implemented and compared for the heart disease and income datasets?
2. Line 165-166: there seems to be a missing explanation or incorrect assumption here: if only one of a_t or b_t is required to be nonzero, then g_t can be zero with just FPR = 0 or FNR = 0, respectively.
[Post rebuttal] Thanks for the clarifications on the contributions and the condition on a_t,b_t.
I see the significance of the quantification of the performance degradation of classifiers even when training is mindful of fairness and calibration conditions -- a similar quantification of the accuracy of the "best" fair predictor under the EO condition alone (without calibration) would form a good complementary result (I don't expect the authors to attempt it for this paper, but maybe as future work). That said, given the existing literature, I am still not very surprised by the other conclusions on the incompatibility of the two fairness conditions with calibration.
nips_2017_1060 | Reliable Decision Support using Counterfactual Models
Decision-makers are faced with the challenge of estimating what is likely to happen when they take an action. For instance, if I choose not to treat this patient, are they likely to die? Practitioners commonly use supervised learning algorithms to fit predictive models that help decision-makers reason about likely future outcomes, but we show that this approach is unreliable, and sometimes even dangerous. The key issue is that supervised learning algorithms are highly sensitive to the policy used to choose actions in the training data, which causes the model to capture relationships that do not generalize. We propose using a different learning objective that predicts counterfactuals instead of predicting outcomes under an existing action policy as in supervised learning. To support decision-making in temporal settings, we introduce the Counterfactual Gaussian Process (CGP) to predict the counterfactual future progression of continuous-time trajectories under sequences of future actions. We demonstrate the benefits of the CGP on two important decision-support tasks: risk prediction and "what if?" reasoning for individualized treatment planning. | Thanks to the authors for a very interesting paper. The main contribution that I see is the proposal of a counterfactual GP model, which can take into account discrete actions, events, and outcomes, and as the authors argue allows for a much better model of discrete event data, e.g. electronic medical records, by explicitly accounting for future actions.
A minor point: the citation for Neyman shouldn't be to 1990, as the paper is from 1923. Yes, the accessible translated version is from 1990, but the citation should include the fact that it's of an article from 1923, seeing as Neyman died in 1981. ;)
I have some suggestions for strengthening the presentation and outlined some of what confused me. Hopefully addressing this will improve the paper.
The framework adopted, of counterfactual predictions, is a very reasonable way to think about causal inference. I believe that the two main assumptions are standard in some of the literature, but, following Pearl, I do not think that they are the best way to think about what is going on. Overall, I would very much like to see a graphical model in Section 2.
Regarding Assumption 1, I would like an account following or at least engaging with Pearl's critique of VanderWeele's claim about the consistency rule as an assumption rather than a theorem.
Regarding Assumption 2, you cite Pearl so I am sure you are aware of his viewpoint, but I think it is very important to make clear what the no unmeasured confounders assumption is, and I believe that a graphical model is the best way to do this. It is worth pointing out as well that, following Pearl, we can go beyond the (often unrealistic) assumption of no unmeasured confounders (see e.g. the back-door and front-door criteria) as long as we're willing to state clearly what our structural assumptions are. Far from a criticism of how you've set things up, I believe that this viewpoint will mean you can cover more cases, as long as there is a way to setup the appropriate regression using a GP (which there should be!)
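For concreteness, the structural statement I have in mind is the standard adjustment formula: if X blocks all back-door paths from A to Y (one graphical reading of the no-unmeasured-confounders assumption), then

```latex
P\big(Y[a]\big) \;=\; \sum_x P\big(Y \mid A = a,\, X = x\big)\, P(X = x),
```

which is exactly the marginalization discussed in the next paragraph; the front-door criterion yields an analogous formula through a fully observed mediator when the confounders are not all measured.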
Having, I think, understood Assumptions 1 and 2, it now seems like the claim is that we can estimate P(Y[a]) by marginalizing P(Y | A = a, X = x) with respect to X. But I'm having trouble connecting this with the claim that the CGP will do better than the RGP. The end of the "Model." section has the key insight, which I think you ought to expand a bit on and move earlier. It'd be helpful, I think, to write down the likelihood of the RGP so that it can be compared to Eq (3). It would also help to refer to Figure 1 (which is indeed best viewed in color, but could be made quite a bit more legible in grayscale!): what would the RGP do for Figure 1, for example?
Now having read the experiments I'm confused again. In the "Experimental" scenario it is the case that after 12 hours there is a structural break, and all patients are assigned to a control condition, since no actions (treatments) are taken. But why are the RGP and CGP equivalent here?
The Observational scenario makes sense, and the fact that the CGP performs well is reassuring, but why does this support your claim that the CGP is able to "learn a counterfactual predictive model"? Shouldn't you ask the CGP to make predictions under different treatment regimens---to take the example from Figure 1, shouldn't you ask the CGP to predict what will happen if Drug A is administered at hour 13, if Drug B is administered at hour 13, or if no drug is administered? It seems to me that the CGP is a better predictive model than the RGP, and thus does a better job predicting. I think it's true that this means that under your assumptions it will also do a better job on counterfactuals, but you should evaluate this.
A question: how did you fit your models? I guess you maximized Equation 3, but there's some mixture modeling stuff going on here, and the model is highly non-convex. Were there issues with optimization? Did you need to do random restarts? How did you pick initial values? Where did the confidence intervals on parameters come from? |
nips_2017_3167 | Learned in Translation: Contextualized Word Vectors
Computer vision has benefited from initializing multiple deep layers with weights pretrained on large supervised training sets like ImageNet. Natural language processing (NLP) typically sees initialization of only the lowest layer of deep models with pretrained word vectors. In this paper, we use a deep LSTM encoder from an attentional sequence-to-sequence model trained for machine translation (MT) to contextualize word vectors. We show that adding these context vectors (CoVe) improves performance over using only unsupervised word and character vectors on a wide variety of common NLP tasks: sentiment analysis (SST, IMDb), question classification (TREC), entailment (SNLI), and question answering (SQuAD). For fine-grained sentiment analysis and entailment, CoVe improves performance of our baseline models to the state of the art. | This paper proposes to pretrain sentence encoders for various NLP tasks using machine translation data. In particular, the authors propose to share the whole pretrained sentence encoding model (an LSTM with attention), not just the word embeddings as have been done to great success over the last few years. Evaluations are carried out on sentence classification tasks (sentiment, entailment & question classification) as well as question answering. In all evaluations, the pretrained model outperforms a randomly initialized model with pretrained GloVe embeddings only.
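Schematically, the transfer recipe being evaluated is (my notation, following the paper's setup):

```latex
\mathrm{CoVe}(w_{1:n}) \;=\; \text{MT-LSTM}\big(\mathrm{GloVe}(w_{1:n})\big),
\qquad
\tilde{w}_i \;=\; \big[\mathrm{GloVe}(w_i);\ \mathrm{CoVe}(w_{1:n})_i\big],
```

i.e. the pretrained MT encoder is run over the GloVe embeddings of the whole sentence and its outputs are concatenated with the word vectors before being fed to each task-specific model.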
This is a good paper that presents a simple idea in a clear, easy to understand manner. The fact that the pretraining helps all tasks across the board is an important finding, and therefore I recommend this paper for publication.
Are the authors going to release their pretrained models?
The authors should cite https://arxiv.org/pdf/1611.02683.pdf, where the encoders are pretrained as language models. |
nips_2017_1535 | VAIN: Attentional Multi-agent Predictive Modeling
Multi-agent predictive modeling is an essential step for understanding physical, social and team-play systems. Recently, Interaction Networks (INs) were proposed for the task of modeling multi-agent physical systems. One of the drawbacks of INs is scaling with the number of interactions in the system (typically quadratic or higher order in the number of agents). In this paper we introduce VAIN, a novel attentional architecture for multi-agent predictive modeling that scales linearly with the number of agents. We show that VAIN is effective for multiagent predictive modeling. Our method is evaluated on tasks from challenging multi-agent prediction domains: chess and soccer, and outperforms competing multi-agent approaches. | This paper extends interaction networks (INs) with an attentional mechanism so that it scales linearly (as opposed to quadratically in vanilla INs) with the number of agents in a multi-agent predictive modeling setting: the embedding network is evaluated once per agent rather than once for every interaction. This allows to model higher-order interactions between agents in a computationally efficient way. The method is evaluated on two new non-physical tasks of predicting chess piece selection and soccer player movements.
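The linear-versus-quadratic point can be made explicit. In my paraphrase of the mechanism (a sketch, not the paper's exact statement), each agent i is encoded once into a feature vector e_i and an attention vector a_i, and interactions enter only through a kernel-weighted pooling:

```latex
w_{ij} \;=\; \frac{\exp\big(-\lVert a_i - a_j \rVert^2\big)}{\sum_{k \neq i} \exp\big(-\lVert a_i - a_k \rVert^2\big)},
\qquad
o_i \;=\; \sum_{j \neq i} w_{ij}\, e_j,
```

so the encoder runs O(N) times, in contrast to the O(N^2) pairwise relation encodings of vanilla interaction networks.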
The paper proposes a simple and elegant attentional extension of Interaction Networks, and convincingly shows the benefit of the approach with two interesting experiments. The idea is not groundbreaking but seems sufficiently novel, especially in light of its effectiveness. The paper is clear and well written. I appreciate that the authors provide an intuitive yet precise explanation (in words) of their mechanism instead of resorting to unnecessarily complicated notation and formulas.
Some confusion comes from the discussions about single-hop and multi-hop, terms that should probably be defined somewhere. On line 105, the authors write: "The architecture can be described as a single-hop CommNet with attention. Although CommNets typically use more hops for learning communications, more than one hop did not help on our predictive modeling tasks", but on lines 136 and 262 it says that VAIN uses multiple hops.
The experiments are well thought-out and interesting. I'm only missing some more insights or visualizations on what kind of interactions the attention mechanism learns to focus on or ignore, as the attention mechanism is the main contribution of the paper.
Further, I wonder whether the proposed attention mechanism (with its fixed-length softmax layer depending on the number of agents) takes away one of the benefits of vanilla INs, namely that it is straightforward to dynamically add or remove agents during inference. If so, perhaps this could be mentioned.
Some minor remarks and typos:
- typo in sentence on line 103-104: two times "be"
- equation (3) should be C = (P_i, F_i), or the preceding sentence should be changed
- line 138: sentence should start with "*While* the original formulation ..."
- line 284: generally *fares* poorly.
Overall, I believe this work would be a valuable contribution to NIPS. |
nips_2017_3019 | Collaborative Deep Learning in Fixed Topology Networks
There is significant recent interest to parallelize deep learning algorithms in order to handle the enormous growth in data and model sizes. While most advances focus on model parallelization and engaging multiple computing agents via using a central parameter server, aspect of data parallelization along with decentralized computation has not been explored sufficiently. In this context, this paper presents a new consensus-based distributed SGD (CDSGD) (and its momentum variant, CDMSGD) algorithm for collaborative deep learning over fixed topology networks that enables data parallelization as well as decentralized computation. Such a framework can be extremely useful for learning agents with access to only local/private data in a communication constrained environment. We analyze the convergence properties of the proposed algorithm with strongly convex and nonconvex objective functions with fixed and diminishing step sizes using concepts of Lyapunov function construction. We demonstrate the efficacy of our algorithms in comparison with the baseline centralized SGD and the recently proposed federated averaging algorithm (that also enables data parallelism) based on benchmark datasets such as MNIST, CIFAR-10 and CIFAR-100. | This paper explores a fixed peer-to-peer communication topology without a parameter server. To demonstrate convergence, it shows that the Lyapunov function that is minimized includes a regularizer term that incorporates the topology of the network. This leads to convergence rate bounds in the convex setting and convergence guarantees in the non-convex setting.
This is original work of high technical quality, well positioned with a clear introduction. It is very rare to see proper convergence bounds in such a complex parallelization setting; the key to the proof is really neat (I did not check all the details). As we know, insight from convex-case proofs usually applies to the non-convex case, especially as the end convergence should be approximately convex. Thanks to the proof, one can predict the impact of the topology on convergence (through the eigenvalues). This proof should also open the door to further work. Experiments are also well designed, though they do not cover the hypothetical situation where a parameter server could fail.
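For readers unfamiliar with the setup, the update being analyzed has the usual consensus form (a sketch, my paraphrase rather than the paper's exact statement; \pi is the doubly stochastic mixing matrix of the fixed topology):

```latex
x_i^{k+1} \;=\; \sum_{j \in \mathcal{N}(i) \cup \{i\}} \pi_{ij}\, x_j^{k} \;-\; \alpha\, \nabla f_i\big(x_i^{k}\big),
```

and the Lyapunov construction shows that the consensus term behaves like a topology-dependent regularizer whose strength is governed by the spectrum of \pi, which is how the eigenvalues enter the convergence bounds.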
However, several critical limitations will impact the short term significance. Some of them could probably be addressed by the time of publication.
First, the loss in both training speed and convergence accuracy of CDSGD compared to plain SGD is very large, and is only mitigated with the use of a momentum. While the authors show how the algorithm can be modified to support both Polyak and Nesterov momentum, I could not find anywhere which momentum CDMSGD actually uses! There is no explanation of why this momentum helps performance so much.
Second, compared to other data-distributed SGD approaches such as Federated Averaging, the main practical contribution of the paper is to get rid of the parameter server, at some efficiency cost in controlled experiments (Figure 2.b). So this work's significance would greatly improve if the authors clearly identified the reasons for not using a parameter server. In the introduction, they hint at better privacy preservation (if I understood correctly), but Federated Averaging also claims privacy preservation WITH a parameter server. In the experiments, final accuracy with momentum is better than Federated Averaging: is this because of the decentralized approach or the use of a momentum? Note also that this paper does not seem to mention the main reason I have seen for using a decentralized approach, which is robustness in case of failure of the central server.
Last, the stochastic Lyapunov gradient (eq 7) shows that the regularization can be very strong, especially with small learning rates, and the effects are visible in the experiments. What this regularization amounts to would be worth studying, as this is a striking side-effect of the algorithm.
Potential issues:
Line 92: f_j(x)=1/N f(x) does not make sense to me.
It should be f_j(x)=1/N sum_i f^i_j(x) ??
Line 119: it implies that all eigenvalues are non-zero, which contradicts the fully connected topology where they are all zero but one.
Line 143: "Gradient of V(x) also has a Lipshitz": you either remove gradient or replace 'has' with 'is'. |
nips_2017_3183 | Learning Combinatorial Optimization Algorithms over Graphs
The design of good heuristics or approximation algorithms for NP-hard combinatorial optimization problems often requires significant specialized knowledge and trial-and-error. Can we automate this challenging, tedious process, and learn the algorithms instead? In many real-world applications, it is typically the case that the same optimization problem is solved again and again on a regular basis, maintaining the same problem structure but differing in the data. This provides an opportunity for learning heuristic algorithms that exploit the structure of such recurring problems. In this paper, we propose a unique combination of reinforcement learning and graph embedding to address this challenge. The learned greedy policy behaves like a meta-algorithm that incrementally constructs a solution, and the action is determined by the output of a graph embedding network capturing the current state of the solution. We show that our framework can be applied to a diverse range of optimization problems over graphs, and learns effective algorithms for the Minimum Vertex Cover, Maximum Cut and Traveling Salesman problems. | The authors propose a reinforcement learning strategy to learn new heuristic (specifically, greedy) strategies for solving graph-based combinatorial problems. An RL framework is combined with a graph embedding approach. The RL approach effectively learns a greedy policy for selecting constructing an approximate solution. The approach is innovative and the empirical results appear promising. An important advantage of the work is that the learned policy is not restricted to a fixed problem size, in contrast to earlier work.
One drawback of the paper is that I found the writing unnecessarily dense and unclear in various places. In particular, it would be good to include more details in the intuitive description of the approach. I believe the ideas can be stated clearly in words because the concept of learning a greedy policy is not that different from learning any policy as done in RL (learning the “next move to make” in a game is quite analogous to learning what is the next node in the graph to select, so the authors can focus more on what makes this work different from learning a game strategy). The over-reliance on formal notation does not help.
I did not fully check the formal details but some points were unclear. For example, a state is defined as a sequence of action nodes on the graph. Each node in the tagged graph is represented by its p-dimensional embedding (a p-dimensional vector). A state is then defined as the sum of the vectors corresponding to the set of action nodes so far. Why is this choice of state definition the “right one”?
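Concretely, the construction being questioned is (my reading of the setup):

```latex
s(S) \;=\; \sum_{v \in S} \mu_v,
\qquad
v^{*} \;=\; \arg\max_{v \notin S}\; Q\big(s(S),\, \mu_v\big),
```

where \mu_v is the p-dimensional embedding of node v and S the partial solution, with the greedy policy repeatedly adding the maximizing node. One justification the authors could offer: the sum is permutation-invariant and independent of |S|, which is precisely what lets a single learned Q generalize across graph sizes.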
The empirical results are promising but some further insights would be helpful. For most combinatorial problems there is a known basic greedy strategy that already performs quite well. Does RL rediscover those strategies for the domains under consideration? (If the framework has the potential to discover new algorithmic strategies, it would be good to know that the approach can also re-discover known strategies.)
nips_2017_3110 | A Learning Error Analysis for Structured Prediction with Approximate Inference
In this work, we try to understand the differences between exact and approximate inference algorithms in structured prediction. We compare the estimation and approximation error of both underestimate (e.g., greedy search) and overestimate (e.g., linear relaxation of integer programming) models. The result shows that, from the perspective of learning errors, performances of approximate inference could be as good as exact inference. The error analyses also suggest a new margin for existing learning algorithms. Empirical evaluations on text classification, sequential labelling and dependency parsing witness the success of approximate inference and the benefit of the proposed margin. | This paper is on the important topic of learning with approximate
inference. Previous work, e.g., Kulesza and Pereira (2007), has demonstrated the
importance of matching parameter update rules and inference approximation
methods. This paper presents a new update rule based on PAC Bayes bounds, which
is fairly agnostic to the inference algorithm used -- it assumes a
multiplicative error bound on model score and supports both over and under
approximations.
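To make the terminology concrete for readers (my reconstruction of the standard usage, not quoted from the paper): writing s_w(x, y) = w \cdot \phi(x, y) for the model score, a \rho-multiplicative approximation h'[w] satisfies

```latex
\tfrac{1}{\rho}\, \max_{y} s_w(x, y) \;\leq\; s_w\big(x, h'[w](x)\big) \;\leq\; \max_{y} s_w(x, y)
```

in the under-estimation case (e.g., greedy or beam search, which return a feasible but possibly suboptimal y), while over-estimation (e.g., an LP relaxation of an integer program) returns a possibly infeasible \hat{y} whose score lies between the true maximum and \rho times it.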
The example given in section 3.2 is a great illustration of how approximation
error is more subtle than we might think it is. Sometimes an approximate
predictor can fit the training data better because it represents a different
family of functions!
The experiments presented in the paper are on NLP problems. However, in NLP the
"workhorse" approximate inference algorithm used is beam search. I strongly
encourage the authors to extend their experiments to include beam search
(varying the beam size to control the approximation quality).
If, for some reason, beam search does not match their method, it should be
made very clear.
Related to beam search is work done by Liang Huang (e.g.,
http://www.aclweb.org/anthology/N12-1015) about how perceptron with beam
search can diverge and how to fix the learning algorithm (see "early update"
and "max violation" updates).
This paper is pretty influential in NLP.
This is a case where the learning rule is modified. I think the NLP
community (at least) would be very interested if beam search could actually
work with your approach.
Overall, I like this paper and I think that it could have a real impact because
learning with approximate inference is such an important topic and the approach
describe here seems to be pretty effective. Unfortunately, the current state of
the paper is a bit too rough for me to recommend acceptance. (If only we had the
option to "accept with required revisions")
Question
========
I'm confused about Theorem 6.
a) Isn't this bound looser than the one in theorem 4? Should making *more*
assumptions (about \rho *and* \tau) lead to a tighter bound?
b) I expected \rho to be replaced with a bound based solely on \tau. What's
the deal?
Am I missing something?
Lower-level comments
====================
The authors assume that the reader is already familiar with the concepts of
under-estimation and over-estimation. I do not believe most readers will know
about this. I strongly recommend defining these terms very early in the paper.
I would also recommend explaining early on why these two classes of
approximations have been studied in the past (and why they are typically studied
separately) -- this would be much more useful than the current discussion in the
related work section.
bottom of page 1: Please define risk. In particular, is e(h) /empirical/ risk or
true risk?
39: typo? "We investigate three wildly used structured prediction models"
-> I think you meant "widely", but I suppose "wildly" also works :P
41: At this point in the paper, I'm very surprised to see "text classification"
on a list of things that might use approximate inference... It might be
useful to explain how this is relevant -- it seems as if the answer is that
it's just a synthetic example.
There are some settings, such as "extreme classification", where the number
of labels is huge and approximate inference might be used (e.g., based on
locality sensitive hashing or approximate nearest neighbors).
42: typo "high order" -> "higher-order"
45: It's tough to say that "Collins [5]" was "the first" since many results for
the /unstructured/ multi-label case technically do transfer over.
47,49,50,etc: Citations are not nouns; use \citet instead of \citep and
\usepackage{natbib}.
"[37] provided..." -> "Collins (2001) provided ..."
49: "Taskar’s bound" is undefined, presumably it's in one of the citations in
the paragraph. (Also, you probably mean "Taskar et al.'s bound".)
There are many excellent citations in related work (section 2), but without much
discussion and clear connection to the approach in the paper.
68: "w the parameter." -> "w is the parameter vector." (probably want to say $w
\in \mathbb{R}^d$ as well)
70: clearly over- and under-estimation methods do not partition the space of
approximate inference algorithms: what sorts of algorithms does the taxonomy omit? Does it
matter? Also, what is the role of \rho? Do we need to know it? Is it
constant for all x? (Looks like later, e.g., theorem 4, it's allowed to depend
on the example identifier.) Why are multiplicative bounds of interest? Which
approximate methods give such bounds? (Later it is generalized based on
stability. Probably worth mentioning earlier in the paper.)
The notation h[w](x) is unusual, I'd expect/prefer h_w(x), or even h(w, x).
73, 74: definitions of H, H- and H+ are a little sloppy and verbose. The
definition of H is pretty useless.
73: The typesetting of R doesn't match line 68. Please be consistent.
section 3.1
- L(Q,S,h[w]) seems unnecessary (it's fine to just say that the expectation is over S).
- Also, S is undefined and the relationship between S and x_i is not explained
(I assume x_i \in S and |S|=m). Alternatively E_{x \sim S} would do the
trick.
Lemma 2: Missing citation to PAC-Bayes Theorem.
Please explain the relationship between the prior p(w') and Q(w'|w). The use
of a prior comes out of the blue for readers who are unfamiliar with PAC Bayes
bounds.
A few words about why I should even be thinking about priors at this point
would be useful.
Also, if I don't care about priors can't I optimize the prior away?
77: Should technically pass Q(.|w) to KL divergence, i.e., $D(Q(.|w) || P(.))$.
80: It might be more useful to simply define under and over estimation with this
definition of margin.
theorem 4:
- assume h'[w] be a ..." -> "theorem 4: assume h'[w] is ..."
- Should be a \rho_i approximation of h[w](x_i)?
- what is \bar \cal Y ?
- why bother subscripting \lambda_S? just use \lambda
90: a better/more-intuitive name for square root term?
107: definition 8, "contains" is not a good term since it's easy to mistake it
for set containment or something like that. Might be better to call it
"dominates".
Section 3.2 could be summarized as: approximate inference algorithms give us a
/different/ family of predictors. Sometimes the approximate inference family has
a better predictor (for the specific dataset) than the exact family.
Giving high-level intuitions such as the one I provided will resonate more
with the reader.
Figures 1 and 2 take a little too much effort to understand.
Consider hoisting some of the discussion from the main text into the
caption.
At the very least, explain what the visual elements used mean (e.g.,
What is the gray arc? What are the columns and rows?)
159: "... or by tuning." on development data or training data?
246: "non-comparableness" -> "incomparability" |
nips_2017_3376 | Neural Variational Inference and Learning in Undirected Graphical Models
Many problems in machine learning are naturally expressed in the language of undirected graphical models. Here, we propose black-box learning and inference algorithms for undirected models that optimize a variational approximation to the log-likelihood of the model. Central to our approach is an upper bound on the log-partition function parametrized by a function q that we express as a flexible neural network. Our bound makes it possible to track the partition function during learning, to speed up sampling, and to train a broad class of hybrid directed/undirected models via a unified variational inference framework. We empirically demonstrate the effectiveness of our method on several popular generative modeling datasets. | # Overview
The paper presents approximate inference methods for learning undirected graphical models. Learning a Markov random field (MRF) p(x) involves computing the partition function -- an intractable integral $Z(\theta) = \int \bar p_\theta(x) dx$ (where $\bar p_\theta(x)$ is the energy function). The authors use a tractable importance distribution $q_\phi(x)$ to derive an upperbound on the partition function. They argue that optimising the parameters of p and q under this bound can be seen as variational inference with \chi^2-divergence (instead of KL).
Essentially the proposed methodology circumvents sampling from an MRF (which would require computing the partition function or a form of MCMC) by sampling from an approximation q and using the same approximation to estimate an upperbound on the log partition function of the MRF (which arises in VI's objective). To obtain an approximations q that is potentially multi-modal the authors employ mixture of K components (termed "non-parametric VI" Gershman et al 2012) and auxiliary variables (in the approximation).
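The bound in question (Eq. 2, rewritten from the description here) follows from \mathbb{E}_q[\bar p_\theta / q] = Z(\theta) and Jensen's inequality:

```latex
\log Z(\theta)
\;=\; \log \mathbb{E}_{q}\!\left[\frac{\bar p_\theta(x)}{q(x)}\right]
\;\leq\; \frac{1}{2}\, \log\, \mathbb{E}_{q}\!\left[\frac{\bar p_\theta(x)^2}{q(x)^2}\right],
```

with equality iff q \propto \bar p_\theta; the gap is \tfrac{1}{2}\log\big(1 + \chi^2(p \,\|\, q)\big), which is the sense in which tightening the bound over q amounts to variational inference under the \chi^2-divergence.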
The paper then presents 3 applications:
1. 5x5 Ising model: the idea behind this toy application is to assess the quality of the approximate partition function since one can compute the exact one.
2. RBM for MNIST: here they model an RBM p(x) over MNIST-digits using their upperbound on partition function to optimise a lowerbound on the likelihood of the data.
3. Variational auto-encoder for (binarised) MNIST: here they model p(x|z)p(z) where the prior p(z) is an RBM.
The main point of the empirical comparison was to show that the proposed method can be used to fit a variational auto-encoder with an RBM prior. The authors contrast that with a variational auto-encoder (with the standard Gaussian prior) whose approximate posterior is augmented with auxiliary random variables so as to be able to model multiple posterior modes.
# Qualitative assessment
* Quality
The paper is mostly technically sound, but I would like to have a few points clarified:
1. You refer to the equation just before line 54 as the ELBO, but the ELBO would involve computations about the posterior, which is really not the case here. I can see the parallel to the ELBO, but perhaps this should be clarified in the text.
2. In line 91, I find the text a bit misleading: "the variance is zero, and a single sample computes the partition function". If p = q, the bound in Eq (2) is tight and the expectation equals Z(theta)^2, but that does not mean you should be able to compute it with a single sample.
3. Most of the text in Section 3.4 (optimisation) assumes we can simulate x ~ q(x) directly, but that's not really the case when auxiliary variables are employed (where we actually simulate a ~ q(a) and then x ~ q(x|a)). I am curious about how you estimated gradients, did you employ a reparameterisation for q(a)? How did you deal with the KL term from q(a,x) in Section 5.3 where you have a VAE with an RBM prior?
4. Is the gradient in line 159 correct? I think it might be missing a minus.
5. The energy function in line 204 should have been exponentiated, right?
6. Can you clarify why $\log B(\bar p, q)$ in Equation 7 does not affect p's gradient? As far as I can tell, it is a function of p's parameters (\theta).
* Clarity
A few results could have been derived explicitly: e.g. gradients of upperbound with respect to q in each of q's formulation that the paper explored; similarly, the estimator for the case where q is q(a,x).
* Significance
The main advance is definitely interesting and I can imagine people will be interested in the possibility of using MRF priors. |
nips_2017_1270 | A graph-theoretic approach to multitasking
A key feature of neural network architectures is their ability to support the simultaneous interaction among large numbers of units in the learning and processing of representations. However, how the richness of such interactions trades off against the ability of a network to simultaneously carry out multiple independent processes (a salient limitation in many domains of human cognition) remains largely unexplored. In this paper we use a graph-theoretic analysis of network architecture to address this question, where tasks are represented as edges in a bipartite graph G = (A ∪ B, E). We define a new measure of multitasking capacity of such networks, based on the assumptions that tasks that need to be multitasked rely on independent resources, i.e., form a matching, and that tasks can be multitasked without interference if they form an induced matching. Our main result is an inherent tradeoff between the multitasking capacity and the average degree of the network that holds regardless of the network architecture. These results are also extended to networks of depth greater than 2. On the positive side, we demonstrate that networks that are random-like (e.g., locally sparse) can have desirable multitasking properties. Our results shed light on the parallel-processing limitations of neural systems and provide insights that may be useful for the analysis and design of parallel architectures. | The authors lay an ambitious groundwork for studying the multitasking capability of general neural network architectures. They prove a variety of theorems concerning the behavior of their defined criterion--the multitasking capacity \alpha--and how it relates to graph connectivity and size via graph matchings and degree.
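For readers outside graph theory, the two notions the abstract leans on (standard definitions, my wording): a matching M \subseteq E is a set of edges sharing no endpoints (tasks drawing on disjoint resources), and M is an *induced* matching if additionally no edge of G joins endpoints of two different edges of M, i.e.

```latex
M \text{ is an induced matching} \iff M \text{ is a matching and } E\big(G[V(M)]\big) = M,
```

so that the subgraph induced by M's endpoints contains no edges beyond M itself; the capacity \alpha then measures, roughly, how large an induced (interference-free) matching one is guaranteed to find inside an arbitrary matching of tasks.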
Pros: The problem being studied is quite clearly motivated, and the authors present a genuinely novel new measure for attempting to understand the multitasking capability of networks. The authors fairly systematically calculate bounds on their parameter in a variety of different graph-theoretic scenarios.
Cons: It's unclear how practically useful the parameter \alpha will be for real networks currently in use. Admittedly, this work is preliminary, and the authors point out this shortcoming in their conclusion. While it is likely the topic of an upcoming study, it would benefit the thrust of the paper considerably if some \alpha's were calculated for networks that are actually in use, or *some* sort of numerical comparison were demonstrated (ex: it's recently become popular to train neural networks to perform multiple tasks simultaneously--what does \alpha have to say about the topologies of these networks?)
Plainly, it's not obvious from a first read of the paper that \alpha *means* anything short of being an abstract quantity one can calculate for graphs. Additional reads clarify the connection, but making some sort of empirical contact with networks used in practice (beyond the throwaway lines in the conclusion about pseudorandom graphs) would greatly clarify the practical use of (or even intuition for) the \alpha parameter.
That said, this reviewer thinks the article should still be accepted in the absence of such an empirical study, because performing such a study would likely (at least) double the length of the paper (which would be silly). At the minimum, though, this reviewer would appreciate a bit more discussion of the practical use of the parameter. |
nips_2017_2716 | PRUNE: Preserving Proximity and Global Ranking for Network Embedding
We investigate an unsupervised generative approach for network embedding. A multi-task Siamese neural network structure is formulated to connect embedding vectors and our objective to preserve the global node ranking and local proximity of nodes. We provide deeper analysis to connect the proposed proximity objective to link prediction and community detection in the network. We show our model can satisfy the following design properties: scalability, asymmetry, unity and simplicity. Experiment results not only verify the above design properties but also demonstrate the superior performance in learning-to-rank, classification, regression, and link prediction tasks. | The paper presents a NN model for learning graph embeddings that preserves the local graph structure and a global node ranking similar to PageRank. The model is based on a Siamese network, which takes as inputs two node embeddings and compute a new (output) representation for each node using the Siamese architecture. Learning is unsupervised in the sense that it makes use only of the graph structure. The loss criterion optimizes a weighted inner product of the output representations (local loss) plus a node ranking criterion (global loss). Some links with a community detection criterion are also discussed. The model is evaluated on a series of tasks: node ranking, classification and regression, link prediction, and compared to other families of unsupervised embedding learning methods.
The paper introduces new loss criteria and an original method for learning graph embeddings. The introduction of a node ranking term based on the graph structure is original. The authors find that this term brings interesting benefits to the learned embeddings and allows their method to outperform the other unsupervised baselines for a series of tasks. The links with the community criterion are also interesting. On the other hand, the organization and the form of the paper make it somewhat difficult to follow in places. The motivation for using a Siamese architecture is not explained. Two representations are learned for each node (notations u and z in the paper). Since the z representation is a deterministic function of the u representation, why are these two different representations needed, and why not directly optimize the initial u representation? The explanation for the specific proximity-preserving loss term is also unclear. For example, it is shown that this criterion optimizes a “second-order” proximity, but this term is never properly defined. Even if the result looks interesting, Lemma 3.4, which expresses links with a community detection criterion, is confusing.
In the experimental section, a more detailed description of the baselines is needed in order to understand what makes them different from the proposed method. The latter outperforms all the baselines on all the tests, but there is no analysis of the reasons for this good behavior.
Overall the paper reveals intriguing and interesting findings. The proposed method has several original aspects, but the paper requires a better organization/presentation in order to fully appreciate the proposed ideas. |
nips_2017_2024 | PASS-GLM: polynomial approximate sufficient statistics for scalable Bayesian GLM inference
Generalized linear models (GLMs)-such as logistic regression, Poisson regression, and robust regression-provide interpretable models for diverse data types. Probabilistic approaches, particularly Bayesian ones, allow coherent estimates of uncertainty, incorporation of prior information, and sharing of power across experiments via hierarchical models. In practice, however, the approximate Bayesian methods necessary for inference have either failed to scale to large data sets or failed to provide theoretical guarantees on the quality of inference. We propose a new approach based on constructing polynomial approximate sufficient statistics for GLMs (PASS-GLM). We demonstrate that our method admits a simple algorithm as well as trivial streaming and distributed extensions that do not compound error across computations. We provide theoretical guarantees on the quality of point (MAP) estimates, the approximate posterior, and posterior mean and uncertainty estimates. We validate our approach empirically in the case of logistic regression using a quadratic approximation and show competitive performance with stochastic gradient descent, MCMC, and the Laplace approximation in terms of speed and multiple measures of accuracy-including on an advertising data set with 40 million data points and 20,000 covariates. | [EDIT] I have read the rebuttal and I have subsequently increased my score to 7.
# Summary of the paper
When data are too numerous or streaming, the authors investigate using approximate sufficient statistics for generalized linear models. The crux is a polynomial approximation of the link function.
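To see what the polynomial approximation buys for logistic regression with a quadratic (M = 2) fit, here is a hedged sketch; the least-squares fit on a grid is a simple stand-in for the paper's Chebyshev construction, and the interval radius R = 4 is an arbitrary illustrative choice:

```python
import numpy as np

# Approximate the logistic log-likelihood term phi(s) = log(1 + exp(-s))
# by b0 + b1*s + b2*s^2 on s in [-R, R].
R = 4.0
grid = np.linspace(-R, R, 200)
b2, b1, b0 = np.polyfit(grid, np.log1p(np.exp(-grid)), deg=2)

def pass_glm_statistics(X, y):
    """One pass over the data collecting polynomial sufficient statistics.

    With s_n = y_n * x_n^T theta and labels y_n in {-1, +1}, the
    approximate negative log-likelihood is
        b0*N + b1*(sum_n y_n x_n)^T theta + b2*theta^T (sum_n x_n x_n^T) theta,
    so only these sums need to be kept; they stream and distribute trivially.
    """
    yx = (y[:, None] * X).sum(axis=0)   # first-moment statistic
    xx = X.T @ X                        # second-moment statistic
    return yx, xx, len(y)

def approx_neg_log_lik(theta, yx, xx, N):
    return b0 * N + b1 * (yx @ theta) + b2 * (theta @ xx @ theta)
```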
# Summary of the review
The paper is well written and the method is clear and simple. Furthermore, it is computationally super-cheap when the degree of the polynomial approximation is low, a claim which is backed by interesting experiments. I like the underlying idea of keeping it simple when the model allows it. My main concern is the theoretical guarantees, which 1) lack clarity, 2) involve strong assumptions that should be discussed in the main text, and 3) are not all applicable to the recommended experimental setting.
# Major comments
- L47 "constant fraction": some subsampling methods have actually been found to use less than a constant fraction of the data when appropriate control variates are available, see e.g. [Bardenet et al, MCMC for tall data, Arxiv 2015], [Pollock et al., the scalable Langevin [...], Arxiv 2016]. Actually, could your polynomial approximation to the likelihood could be useful as a control variate in these settings as well? Your objection that subsampling requires random data access still seems to hold for these methods, though.
- L119 "in large part due to...": this is debatable. I wouldn't say MCMC is expensive 'because of' the normalizing constant. "Approximating this integral": it is not the initial purpose of MCMC to approximate the normalizing constant, this task is typically carried out by an additional level above MCMC (thermodynamic integration, etc.).
- Section 4: the theoretical results are particularly hard to interpret for several reasons. First, the assumptions are hidden in the appendix, and some of them are quite strong. At least the strongest assumptions should be part of the main text. Second, in, say, Corollary 4.1, epsilon depends on R, and then L207 there's a necessary condition involving R and epsilon, for which one could wonder whether it is ever satisfied. Overall, the bounds are hard to make intuitive sense of, so they should be either simplified or explained. Third, some results like Corollary 4.2 are "for M sufficiently large", while the applications later typically consider M=2 for computational reasons. This should be discussed.
- The computational issues of large d and large M should be investigated further.
- Corollary 4.2 considers a robust loss (Huber's) but with a ball assumption on all data points, and a log concave prior on theta. I would typically use Huber's loss in cases where there are potential outlier inner products of x and theta; is that not in contradiction with the assumptions?
- Corollary 4.3 Assumption (I) is extremely strong. The appendix mentions that it can be relaxed; I think this should be done explicitly.
nips_2017_360 | Learning multiple visual domains with residual adapters
There is a growing interest in learning data representations that work well for many different types of problems and data. In this paper, we look in particular at the task of learning a single visual representation that can be successfully utilized in the analysis of very different types of images, from dog breeds to stop signs and digits. Inspired by recent work on learning networks that predict the parameters of another, we develop a tunable deep network architecture that, by means of adapter residual modules, can be steered on the fly to diverse visual domains. Our method achieves a high degree of parameter sharing while maintaining or even improving the accuracy of domain-specific representations. We also introduce the Visual Decathlon Challenge, a benchmark that evaluates the ability of representations to capture simultaneously ten very different visual domains and measures their ability to perform well uniformly. | The paper addresses the problem of learning a joint recognition model across a variety of domains and tasks. It develops a tunable deep network architecture that, by means of adapter residual modules, can be steered to a variety of visual domains. The method achieves good performance across a variety of domains (10 in total) while having a high degree of parameter sharing. The paper also introduces a corresponding challenge benchmark, the Decathlon Challenge. The benchmark evaluates the ability of a representation to be applied across 10 very different visual domains.
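As a concrete illustration of the adapter idea, a minimal PyTorch-style sketch follows; the residual placement of the 1x1 convolution and the batch normalization follow the general recipe of adapter residual modules, while exact filter counts and normalization details may differ from the paper:

```python
import torch.nn as nn

class ResidualAdapter(nn.Module):
    """Domain-specific 1x1 convolution applied residually to the output
    of a shared convolution; per-domain training touches only the adapter
    and batch-norm parameters, while the shared filters stay frozen."""

    def __init__(self, channels):
        super().__init__()
        self.adapter = nn.Conv2d(channels, channels, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(channels)

    def forward(self, x, shared_conv):
        h = shared_conv(x)  # shared, domain-agnostic filters
        return self.bn(h + self.adapter(h))
```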
Generally the paper is very well written and presents an interesting approach and insights. I particularly applaud drawing references to related works and putting the approach in true context. Additional comments are below.
> References
Generally the references are good and adequate. The following references are thematically related and should probably be added:
Learning Deep Feature Representations with Domain Guided Dropout for Person Re-identification,
Tong Xiao, Hongsheng Li, Wanli, Ouyang, Xiaogang Wang,
CVPR, 2016.
Expanding Object Detector's Horizon: Incremental Learning Framework for Object Detection in Videos,
Alina Kuznetsova, Sung Ju Hwang, Bodo Rosenhahn, and Leonid Sigal,
CVPR, 2015.
> Clarifications
Lines 201-202 talk about shrinking regularization. I think this is very interesting and could be a major benefit of the model. However, it is unclear if this is actually tested in the experiments. Could you please clarify?
Also, the authors claim that the proposed model ensures that performance does not degrade in the original domain. This does not actually appear to be the case (or to be supported experimentally). For example, performance on ImageNet in Table 1 goes from 59.9 to 59.2. This should be discussed and explained.
> Experiments
In light of the comments above, Table 2 should include the proposed model (or models), e.g., Res. adapt. decay and Res. adapt. finetune all.
Finally, the explanation of Scratch+ is unclear and should be clarified in Lines 303-305. I read the sentence multiple times and am still unsure what was actually done.
nips_2017_106 | Interactive Submodular Bandit
In many machine learning applications, submodular functions have been used as a model for evaluating the utility or payoff of a set such as news items to recommend, sensors to deploy in a terrain, nodes to influence in a social network, to name a few. At the heart of all these applications is the assumption that the underlying utility/payoff function is known a priori, hence maximizing it is in principle possible. In many real life situations, however, the utility function is not fully known in advance and can only be estimated via interactions. For instance, whether a user likes a movie or not can be reliably evaluated only after it was shown to her. Or, the range of influence of a user in a social network can be estimated only after she is selected to advertise the product. We model such problems as an interactive submodular bandit optimization, where in each round we receive a context (e.g., previously selected movies) and have to choose an action (e.g., propose a new movie). We then receive a noisy feedback about the utility of the action (e.g., ratings) which we model as a submodular function over the context-action space. We develop SM-UCB, which efficiently trades off exploration (collecting more data) and exploitation (proposing a good action given gathered data) and achieves an O(√T) regret bound after T rounds of interaction. More specifically, given a bounded-RKHS norm kernel over the context-action-payoff space that governs the smoothness of the utility function, SM-UCB keeps an upper-confidence bound on the payoff function that allows it to asymptotically achieve no regret. Finally, we evaluate our results on four concrete applications, including movie recommendation (on the MovieLens data set), news recommendation (on the Yahoo! Webscope dataset), interactive influence maximization (on a subset of the Facebook network), and personalized data summarization (on the Reuters Corpus). In all these applications, we observe that SM-UCB consistently outperforms the prior art. | Pros:
- The paper combines submodular function maximization with contextual bandits, and proposes the interactive submodular bandit formulation together with the SM-UCB solution. It includes several existing studies as special cases.
- The empirical evaluation considers several application contexts and demonstrates the effectiveness of the SM-UCB algorithm.
Cons:
- The discussion of the key RKHS condition is insufficient, and its applicability to actual applications is unclear. More importantly, there is no discussion of the constant B for the RKHS norm for the various applications, nor of the maximum information gain \gamma_t, although both are needed in the algorithm.
- The technical analysis is heavily dependent on prior work, in particular [33]. The additional technical contribution is small.
The paper studies a contextual bandit learning setting, where in each round a context \phi is presented, which corresponds to an unknown submodular set function f_\phi. The user needs to select one item x to add to the existing selected set S_\phi, with the purpose of maximizing the marginal gain of x given S_\phi. After selecting x, the user obtains a noisy feedback of the marginal gain. The regret is computed against the (1-1/e) fraction of the optimal offline solution, when all the submodular functions for all contexts are known. The key assumption to make the learning work is that the marginal contribution \Delta, in terms of the subset S, the new item x, and the context \phi, has a bounded RKHS norm. Intuitively, this norm makes sure that similar contexts, similar subsets, and similar items give similar marginal results.
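To fix ideas, a minimal sketch of the UCB-style selection step; the `gp.predict` interface and the `featurize` callback are assumed placeholders, and `beta_t` is the confidence width that would, in principle, derive from the bound B and the information gain \gamma_t discussed below:

```python
import numpy as np

def sm_ucb_select(gp, featurize, context, selected, candidates, beta_t):
    """Pick the candidate with the largest upper confidence bound on its
    marginal gain given the context and the already-selected set.

    gp        : fitted GP with a predict(feature) -> (mean, std) method
    featurize : maps (context, selected set, item) to a feature vector
    """
    best_item, best_ucb = None, -np.inf
    for x in candidates:
        mu, sigma = gp.predict(featurize(context, selected, x))
        ucb = mu + np.sqrt(beta_t) * sigma
        if ucb > best_ucb:
            best_item, best_ucb = x, ucb
    return best_item
```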
The authors propose the SM-UCB algorithm to solve the problem. The proposed algorithm and its regret analysis are heavily based on the prior work on contextual Gaussian process bandit optimization [33]. In fact, it looks like all the authors need to do is treat the already-selected subset S as part of the context, and then plug it into the algorithm and the analysis of [33]. The proof of Theorem 1 given in the supplementary material only connects their current regret definition with the regret definition of [33], using the submodularity property. Then they just apply Theorem 1 of [33] to obtain the result. Therefore, from the technical point of view, the paper's new contribution is small.
[Note: After reading the authors' feedback, I have a somewhat clearer understanding, in that the difference in model from [33] seems to be that the current action may affect future contexts. However, I am still not quite sure how much the analysis differs from [33], since there is only a two-and-a-half page analysis in the supplementary material, and it mostly relies on [33] to achieve the main result. Please clarify the technical contribution of the current paper, and the novelty in the analysis relative to [33].]
Thus, the main contribution of the paper, in my view, is connecting submodular maximization with prior work on contextual bandits and giving the formulation and the result for the interactive submodular bandit problem. Their empirical evaluation covers several application cases and demonstrates advantages of the SM-UCB algorithm, giving readers some concrete examples of how to apply their general results in different contexts.
However, one important thing the authors did not discuss much is the key assumption on the bounded RKHS norm. First, the RKHS norm should be better defined and explained. Second, its implications for several application contexts should be discussed. Third, the choice of kernel functions should be discussed. More importantly, the bound B should be evaluated for the applications they discuss and use in the experiment section. This is because this complexity measure B is not only used in the regret bound, but also appears as part of \beta_t in the algorithm. A similar situation occurs for the maximum information gain parameter \gamma_t. The authors do not discuss at all how to obtain these parameters for the algorithm. Without these parameters, it is not even clear how the authors evaluate their SM-UCB algorithm in the experiments. This is a serious omission, and readers would not be able to apply the algorithm, or reproduce the experiments, without knowing how to properly set B and \gamma_t.
Furthermore, in the empirical evaluation section, the authors use a linear kernel and a Jaccard kernel, but why are these reasonable choices? What are their implications for the application context? What are the other possible kernels to use? In general, I do not have a good idea of how to pick a good kernel function for an application, or of what the impact and limitations of this assumption are. I think this should be the key contribution of the paper, since the other technical contribution is minor, but the authors fail to provide satisfactory answers in this regard.
Other minor comments:
- line 133, "T_i" --> "T_j"
- line 164, I(y_A | \Delta) --> I(y_A; \Delta)
- Related work
It would be nice if the authors could compare their approach with other related approaches combining multi-armed bandit with combinatorial optimization tasks, such as
a) combinatorial multi-armed bandit with application to influence maximization and others: Combinatorial Multi-Armed Bandit and Its Extension to Probabilistically Triggered Arms Wei Chen, Yajun Wang, Yang Yuan, Qinshi Wang, in Journal of Machine Learning Research, 2016
b) Online influence maximization. Siyu Lei, Silviu Maniu, Luyi Mo, Reynold Cheng, and Pierre Senellart. In KDD, 2015.
c) Stochastic Online Greedy Learning with Semi-bandit Feedbacks. Tian Lin, Jian Li, Wei Chen, In NIPS'2015 |
nips_2017_2025 | Bayesian GAN
Generative adversarial networks (GANs) can implicitly learn rich distributions over images, audio, and data which are hard to model with an explicit likelihood. We present a practical Bayesian formulation for unsupervised and semi-supervised learning with GANs. Within this framework, we use stochastic gradient Hamiltonian Monte Carlo to marginalize the weights of the generator and discriminator networks. The resulting approach is straightforward and obtains good performance without any standard interventions such as label smoothing or mini-batch discrimination. By exploring an expressive posterior over the parameters of the generator, the Bayesian GAN avoids mode-collapse, produces interpretable and diverse candidate samples, and provides state-of-the-art quantitative results for semi-supervised learning on benchmarks including SVHN, CelebA, and CIFAR-10, outperforming DCGAN, Wasserstein GANs, and DCGAN ensembles.
In effect, a standard (maximum likelihood) GAN represents the posterior over the network weights as a point mass centred on a single mode. Thus even if the generator does not memorize training examples, we would expect samples from the generator to be overly compact relative to samples from the data distribution. Moreover, each mode in the posterior over the network weights could correspond to wildly different generators, each with their own meaningful interpretations. By fully representing the posterior distribution over the parameters of both the generator and discriminator, we can more accurately model the true data distribution. The inferred data distribution can then be used for accurate and highly data-efficient semi-supervised learning.
In this paper, we propose a simple Bayesian formulation for end-to-end unsupervised and semi-supervised learning with generative adversarial networks. Within this framework, we marginalize the posteriors over the weights of the generator and discriminator using stochastic gradient Hamiltonian Monte Carlo. We interpret data samples from the generator, showing exploration across several distinct modes in the generator weights. We also show data- and iteration-efficient learning of the true distribution. We also demonstrate state-of-the-art semi-supervised learning performance on several benchmarks, including SVHN, MNIST, CIFAR-10, and CelebA. The simplicity of the proposed approach is one of its greatest strengths: inference is straightforward, interpretable, and stable. Indeed, all of the experimental results were obtained without many of the ad-hoc techniques often used to train standard GANs.
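A schematic of the alternating posterior sampling, simplified here to stochastic-gradient Langevin-style updates; the actual algorithm uses stochastic gradient HMC with momentum, and the two gradient callbacks are placeholders for the conditional log-posterior gradients rather than the paper's exact procedure:

```python
import numpy as np

def bayesian_gan_sweep(theta_g, theta_d, grad_log_post_g, grad_log_post_d,
                       data_batch, eps=1e-4, rng=np.random):
    """One sweep of approximate posterior sampling over generator and
    discriminator weights (a Langevin sketch of the SGHMC updates)."""
    # Sample generator weights given the current discriminator weights.
    g = grad_log_post_g(theta_g, theta_d, data_batch)
    theta_g = theta_g + 0.5 * eps * g \
        + np.sqrt(eps) * rng.standard_normal(theta_g.shape)
    # Sample discriminator weights given the new generator weights.
    d = grad_log_post_d(theta_d, theta_g, data_batch)
    theta_d = theta_d + 0.5 * eps * d \
        + np.sqrt(eps) * rng.standard_normal(theta_d.shape)
    return theta_g, theta_d
```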
We have made code and tutorials available at https://github.com/andrewgordonwilson/bayesgan. | Summary:
The paper introduces a Bayesian type of GAN algorithm, where the generator G and discriminator D do not have any fixed initial set of weights that gets gradually optimised. Instead, the weights for G and for D get sampled from two distributions (one for each), and it is those distributions that get iteratively updated. Different weight realisations of G may thus generate images with different styles, corresponding to different modes in the dataset. This, and the regularisation effect of the priors on the weights, promotes diversity and alleviates the mode collapse issue. The many experiments conducted in the paper support these claims.
Quality, Originality, Clarity:
While most GAN papers today mainly focus on slight variations of the original net architecture or of the original GAN objective function, this paper really presents a new, original and very interesting algorithm that combines features of GANs with a strongly Bayesian viewpoint of Neural Nets (similar in spirit, for example, to the viewpoint conveyed by Radford Neal in his PhD thesis), where the weights are parameters that should be sampled or integrated out, like any other parameter in the Bayesian framework. This leads to a very elegant algorithm. The paper contains many good experiments that prove that the algorithm is highly effective in practice (see Fig 1 and Table 1), and the produced images (though not always state of the art it seems) indeed reflect a high diversity of the samples. Moreover, the paper is very clearly written, in very good English, with almost no typos, and the algorithm is well explained. Overall: a very good paper!
Other remarks:
- Am I right in understanding that at each \theta_g sampling step, you are actually not really sampling from p(\theta_g | \theta_d), but rather from something more like p(\theta_g | \theta_d) * p(\theta_d | \theta_g^{old}) (and similarly for p(\theta_d | \theta_g))? In that case, this should be stressed and explained more clearly. In particular, I got confused at first, because I did not understand how you could sample from p(\theta_g | \theta_d) if \theta_d was not fixed.
- Algorithm 1: the inner loop over M does not stick out clearly. The 'append \theta_g^k to sample set' should be inside this loop, but seems outside. Please ensure that both inner- and outer loop are clearly visible from first eye sight.
- l.189 & l.192: clash of notations between z (of dimension d, to generate the synthetic dataset) and z (of dimension 10, as random input to the generator G). Use a different variable for the first z.
- Figure 1: What is p_{ML-GAN}? I guess that the 'ML' stands for 'Maximum-Likelihood' (like the maximum-likelihood GAN mentioned on l.267 without further introduction) and that the MLGAN is just the regular GAN. In that case, please call it simply p_{GAN}, or clearly introduce the term Maximum-Likelihood GAN and its abbreviation MLGAN (with a short justification of why it is a maximum likelihood, which would actually be an interesting remark).
- l.162-171: Describe the architectures more carefully. Anyone should be able to implement *exactly* your architecture to verify the experiments. So sentences like 'The networks we use are slightly different from [10] (e.g. we have 4 hidden layers and fewer filters per layer)', without saying exactly how many filters, are not OK. Also, the architecture for the synthetic dataset is described partly at l.162-163 and partly at l.191. Please group the description of the network in one place. The authors may consider describing their architectures roughly in the main part, and putting a detailed description in the appendix.
Response to rebuttal:
We thank the authors for answering our questions. We are happy to hear that the code will be released (as there is usually no better way to actually reproduce the experiments with the same parameter settings). Please enclose the short explanations of the rebuttal on GANs being the Maximum Likelihood solution with improper priors into your final version. |
nips_2017_1724 | Learning Causal Structures Using Regression Invariance
We study causal discovery in a multi-environment setting, in which the functional relations for producing the variables from their direct causes remain the same across environments, while the distribution of exogenous noises may vary. We introduce the idea of using the invariance of the functional relations of the variables to their causes across a set of environments for structure learning. We define a notion of completeness for a causal inference algorithm in this setting and prove the existence of such an algorithm by proposing the baseline algorithm. Additionally, we present an alternative algorithm that has significantly improved computational and sample complexity compared to the baseline algorithm. Experimental results show that the proposed algorithm outperforms the other existing algorithms. | The authors propose using regression systematically in a causal setup where an intervention refers to a change in the distribution of some of the exogenous variables. They propose two algorithms for this: one is complete but computationally intractable, while the other is a heuristic.
One of the most important problems is the motivation for this new type of intervention. Please provide real-world examples of when this type of intervention is realistic.
Another important issue concerns Assumption 1. This assumption is very convenient for the analysis, but it is very unclear what it entails. How should the variances be related to one another for Assumption 1 to hold? And under what perturbation? The authors should instead provide a more detailed alternative assumption that clearly lays out how the noise variances relate to each other, while also considering the causal graph. Otherwise, it is very difficult to assess how restrictive this assumption is.
Line 38: "Note that this model is one of the most problematic models in the literature of causal inference"
-> Additive noise models with Gaussian exogenous variables are arguably one of the most restrictive models. It is not clear if the hardness of identification in this case is relevant for practical questions.
Line 62, 63, 64: "Also since the exogenous noises of both variables have changed, commonly used interventional tests are also not capable of distinguishing between these two structures [4]"
This is not true. As long as the noise variables are independent (they are always independent in Pearlian setting under causal sufficiency), a simple conditional independence test in the two environments, one pre-interventional and one post-interventional, yields the true causal direction (assuming either X1 -> X2 or X1 <- X2). The fact that exogenous variables have different distributions does not disturb this test: Simply observe X1 and X2 are independent under an intervention on X2 to conclude X1->X2.
Line 169: "ordinary interventional tests" are described as the changes observed in the distribution of effect given a perturbation of the cause distribution. However, the more robust interventional test commonly used in the literature checks the conditional independence of the two variables in the post-interventional distribution, as explained in the above comment. Please see "Characterization and Greedy Learning of Interventional Equivalence Classes" by Buhlmann for details.
Based on the two comments above, please do not include "discovering causal graph where interventional methods cannot" in the list of contributions, this is incorrect and severely misleading. I believe the authors should simply say that methods used under perfect interventions cannot be used any more under this type of intervention - which changes the distribution of the exogenous variable - since no new conditional independencies are created compared to the observational distribution. This would be the correct statement, but it is not what the paper currently reads. Please edit the misleading parts of the text.
Example 1 is slightly confusing: the authors use I = {X1}, which does not have any subscript, unlike the previous notation I_{i,j}. I believe it would be clearer to write it as I = {\emptyset, X1} instead.
It is not clear whether the LRE algorithm is complete; I believe it is not. Please mention this explicitly in the paper.
The regression invariance set, R, can be indicative of the set of parents of a variable. The authors already make use of this observation in designing the LRE algorithm. What if every exogenous variable changed between every two interventions, i.e., I_{i,j} = [n]? How would you change the LRE algorithm for this special case? I would intuitively believe it would greatly simplify it. This may be a useful special case to consider that could boost the usefulness of regression invariance sets.
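A crude version of the invariance check underlying R is easy to state: regress the target on each candidate set in every environment and keep the sets whose coefficients agree. In the sketch below, the tolerance `tol` is an illustrative stand-in for a proper statistical test of coefficient equality:

```python
import numpy as np
from itertools import combinations

def invariant_sets(envs, tol=1e-2):
    """envs: list of (X, y) pairs, one per environment, with identical
    columns in X. Returns the candidate sets S whose OLS coefficients of
    y on X[:, S] agree across all environments up to `tol`."""
    p = envs[0][0].shape[1]
    out = []
    for size in range(1, p + 1):
        for S in combinations(range(p), size):
            betas = [np.linalg.lstsq(X[:, list(S)], y, rcond=None)[0]
                     for X, y in envs]
            if max(np.max(np.abs(b - betas[0])) for b in betas) < tol:
                out.append(S)
    return out
```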
Another extension idea for the paper is to design the experiments I_{i,j}. What is the minimum number of interventions required?
Simulation results are very preliminary. The results on the real dataset are not clear: the authors have identified a set of possible causes of the number of years of education. Since the true causal graph is unknown, it is not clear how to evaluate the results.
nips_2017_1403 | OnACID: Online Analysis of Calcium Imaging Data in Real Time
Optical imaging methods using calcium indicators are critical for monitoring the activity of large neuronal populations in vivo. Imaging experiments typically generate a large amount of data that needs to be processed to extract the activity of the imaged neuronal sources. While deriving such processing algorithms is an active area of research, most existing methods require the processing of large amounts of data at a time, rendering them vulnerable to the volume of the recorded data, and preventing real-time experimental interrogation. Here we introduce OnACID, an Online framework for the Analysis of streaming Calcium Imaging Data, including i) motion artifact correction, ii) neuronal source extraction, and iii) activity denoising and deconvolution. Our approach combines and extends previous work on online dictionary learning and calcium imaging data analysis, to deliver an automated pipeline that can discover and track the activity of hundreds of cells in real time, thereby enabling new types of closed-loop experiments. We apply our algorithm to two large-scale experimental datasets, benchmark its performance on manually annotated data, and show that it outperforms a popular offline approach. | This paper proposes an online framework for analyzing calcium imaging data. This framework is built upon the popular and now widely used constrained non-negative matrix factorization (CNMF) method for cell segmentation and calcium time-series analysis (Pnevmatikakis, et al., 2016). While the existing CNMF approach is now being used by many labs across the country, I've talked to many neuroscientists who complain that this method cannot be applied to large datasets and thus its application has been limited. This work extends this method to a real-time decoding setting, making it an extremely useful contribution for the neuroscience community.
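The streaming matrix-factorization step at the heart of such a pipeline can be sketched in the spirit of online dictionary learning; the sketch below omits motion correction, new-component detection, and deconvolution, and the update form is a generic illustration rather than the paper's exact recipe:

```python
import numpy as np

def onacid_frame_update(A, c_t, y_t, M, B):
    """One streaming update of spatial footprints A given a new frame.

    A    : (pixels, ncomp) spatial footprints
    c_t  : (ncomp,) temporal activity estimate for this frame
    y_t  : (pixels,) new data frame
    M, B : running sufficient statistics, M ~ sum c c^T, B ~ sum y c^T
    """
    M += np.outer(c_t, c_t)
    B += np.outer(y_t, c_t)
    # One pass of nonnegative block coordinate descent over footprints.
    for k in range(A.shape[1]):
        if M[k, k] > 0:
            A[:, k] = np.maximum(
                A[:, k] + (B[:, k] - A @ M[:, k]) / M[k, k], 0.0)
    return A, M, B
```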
The paper is well written and the results are compelling. My only concern is that the paper appears to combine multiple existing methods to achieve its result. Nonetheless, I think the performance evaluation is solid, and an online extension of CNMF for calcium imaging data analysis will likely be useful to the neuroscience community.
Major comments:
- There are many steps in the described method that rely on specific thresholds or parameters. It would be useful to understand the sensitivity of the method to these different hyperparameters. In particular, how does the threshold used to include new components affect performance?
- Lines 40-50: This paragraph seems to wander from the point that you nicely set up before this paragraph (and continue afterwards). I am not sure why you're bringing up closed loop, except for saying that by doing this in real time you can do closed-loop experiments. I'm not sure what idea you want to communicate here.
- The performance evaluations do not really address the use of isotonic regression for deconvolution. In Figure 3C, the traces appear to be demixed calcium after segmentation and not the deconvolved spike trains. Many of the other comparisons focus on cell segmentation. Please comment on the use of deconvolution and how it might help in cell segmentation.
- The results show that the online method outperforms the original CNMF method. Can the authors comment on where the two differ?
Minor comments:
Figure 1: The contours cannot be seen in B. Perhaps a white background in B (and later in the results) can help to see the components and contours. C is hard to see and interpret initially as the traces overlap significantly. |
nips_2017_2165 | Mapping distinct timescales of functional interactions among brain networks
Brain processes occur at various timescales, ranging from milliseconds (neurons) to minutes and hours (behavior). Characterizing functional coupling among brain regions at these diverse timescales is key to understanding how the brain produces behavior. Here, we apply instantaneous and lag-based measures of conditional linear dependence, based on Granger-Geweke causality (GC), to infer network connections at distinct timescales from functional magnetic resonance imaging (fMRI) data. Due to the slow sampling rate of fMRI, it is widely held that GC produces spurious and unreliable estimates of functional connectivity when applied to fMRI data. We challenge this claim with simulations and a novel machine learning approach. First, we show, with simulated fMRI data, that instantaneous and lag-based GC identify distinct timescales and complementary patterns of functional connectivity. Next, we analyze fMRI scans from 500 subjects and show that a linear classifier trained on either instantaneous or lag-based GC connectivity reliably distinguishes task versus rest brain states, with ∼80-85% cross-validation accuracy. Importantly, instantaneous and lag-based GC exploit markedly different spatial and temporal patterns of connectivity to achieve robust classification. Our approach enables identifying functionally connected networks that operate at distinct timescales in the brain. | The authors apply instantaneous and lag-based Granger-Geweke causality (iGC and dGC) to infer brain network connections at distinct timescales from functional MRI data. With simulated fMRI data and through analysis of real fMRI recordings, they present the following novel insights. First, iGC and dGC provide robust measures of brain functional connectivity, addressing a long-standing controversy in the field. Second, iGC and dGC can identify complementary functional connections; in particular, the latter can find excitatory-inhibitory connections. Third, downsampling the time series to different extents provides an efficient method for recovering connections at different timescales.
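For readers unfamiliar with the measures, a bare-bones sketch of bivariate lag-based GC follows; the full method conditions on all other regions and has an instantaneous counterpart, so this simplified variance-ratio version is only meant to convey the definition:

```python
import numpy as np

def pairwise_dgc(x, y, p=1):
    """Geweke-style directed influence x -> y with p lags: the log ratio
    of residual variances of the reduced model (y on its own past) and
    the full model (y on the past of both y and x)."""
    T = len(y)
    Y = y[p:]
    lags = lambda s: np.column_stack([s[p - k:T - k] for k in range(1, p + 1)])
    reduced = np.column_stack([np.ones(T - p), lags(y)])
    full = np.column_stack([reduced, lags(x)])
    resid = lambda A: Y - A @ np.linalg.lstsq(A, Y, rcond=None)[0]
    return np.log(np.var(resid(reduced)) / np.var(resid(full)))
```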
Clarity:
The paper is well organised and clearly written, with nice figures. The results of the real data analysis may be difficult to understand without background knowledge in neuroscience.
Significance:
I think that the insights written in this paper have a significant impact on researchers working on this topic. Recently, analyses of resting-state fMRI data have led to many interesting results in neuroscience. However, most publications employed correlation matrices for measuring functional connectivity. So, if iGC and dGC were to become alternative tools, we could extract/grasp multi-timescale processing in the brain. In order to convince more researchers of the practical usefulness of iGC and dGC, it would be necessary to compare with existing techniques further and to work on more real fMRI examples.
On the other hand, the scope of this paper is rather narrow, and its impact on other research topics in the NIPS community is rather limited.
Correctness:
Since the authors applied existing connectivity measures, I don't expect any technical errors.
Originality:
I feel that a certain number of researchers are sceptical about applications of GC measures to fMRI data. I have not previously seen the messages of this paper, which may resolve a long-standing controversy in the field.
To sum up, the novel insights derived from analysis of simulated and real fMRI data with GC measures are interesting and could have the potential to advance the research field. Since further real-world examples would be preferable to convince other researchers in the field, I would say that this paper is a weak accept.
There are a few minor comments.
- p.3 Eq.(2): I know of an MVAR model with a variable time lag for each connection, determined by DTI fibre length. The simulation model in Section 2 assumes a variable timescale at each node. I wonder whether we can set the timescale of each connection separately at a predetermined value in this formulation, or whether we have to further extend the model in Section 2 to construct a more realistic model incorporating knowledge from structural MRI data.
- Section 3: I guess that the language task could contain slow brain processing and might be a good example to show multi-timescale network structures via fMRI downsampling. I wonder whether there exist other tasks in the HCP for which we can see similar results.
nips_2017_982 | NeuralFDR: Learning Discovery Thresholds from Hypothesis Features
As datasets grow richer, an important challenge is to leverage the full features in the data to maximize the number of useful discoveries while controlling for false positives. We address this problem in the context of multiple hypotheses testing, where for each hypothesis, we observe a p-value along with a set of features specific to that hypothesis. For example, in genetic association studies, each hypothesis tests the correlation between a variant and the trait. We have a rich set of features for each variant (e.g. its location, conservation, epigenetics etc.) which could inform how likely the variant is to have a true association. However, popular empirically-validated testing approaches, such as Benjamini-Hochberg's procedure (BH) and independent hypothesis weighting (IHW), either ignore these features or assume that the features are categorical or univariate. We propose a new algorithm, NeuralFDR, which automatically learns a discovery threshold as a function of all the hypothesis features. We parametrize the discovery threshold as a neural network, which enables flexible handling of multi-dimensional discrete and continuous features as well as efficient end-to-end optimization. We prove that NeuralFDR has strong false discovery rate (FDR) guarantees, and show that it makes substantially more discoveries in synthetic and real datasets. Moreover, we demonstrate that the learned discovery threshold is directly interpretable. | This paper proposes a procedure to learn a parametric function mapping a set of features describing a hypothesis to a (per-hypothesis) significance threshold. In principle, if this function is learnt appropriately, the approach would allow prioritising certain hypotheses based on their feature representation, leading to a gain in statistical power. The function generating the per-hypothesis rejection regions is trained to control the False Discovery Proportion (FDP) with high probability, provided some restrictive assumptions are satisfied.
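To make the decision rule concrete, here is a hedged sketch of applying a learned threshold function and estimating its false discoveries via the mirroring trick discussed below; `t` stands for any fitted map from features to (0, 0.5), and the network producing it is omitted:

```python
import numpy as np

def neuralfdr_decisions(pvals, features, t):
    """Reject hypothesis i iff p_i <= t(x_i); estimate the number of
    false discoveries by counting p-values in the mirrored region
    p_i >= 1 - t(x_i), where (almost) only nulls are expected."""
    thresh = np.array([t(x) for x in features])
    rejections = pvals <= thresh
    mirror_fd = np.sum(pvals >= 1.0 - thresh)  # estimated false discoveries
    fdp_hat = mirror_fd / max(int(rejections.sum()), 1)
    return rejections, fdp_hat
```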
While multiple hypothesis testing is perhaps not a mainstream machine learning topic nowadays, key application domains such as computational biology or healthcare still heavily rely on multiple comparisons, making this a topic worth studying. A clear trend in this domain is the design of methods able to incorporate prior knowledge to upweight or downweight some hypotheses; the method described in this paper is one more example of this line of work.
Despite the fact that the topic is relevant, I have some serious concerns about the practical applicability of the proposed approach, as well as other aspects of the paper.
***** Major Points *****
(1) In my opinion, the paper heavily relies on assumptions that are simply false for all motivating applications described in the paper and tackled in the experiments.
Firstly, the method assumes that triplets (P_{i}, \mathbf{X}_{i}, H_{i}) are i.i.d. If this assumption were not satisfied, the entire method would break down, as cross-validation would no longer be appropriate to validate the decision rules learnt on a training set.
Unfortunately, the independence assumption cannot be accepted in bioinformatics. For example, in genome-wide association studies, variants tend to be statistically dependent due to phenomena such as linkage disequilibrium (LD). In RNA-Seq analysis, gene expression levels are correlated according to complex co-expression networks, which are possibly tissue-specific. More generally, unobserved confounding variables such as population structure, batch effects or cryptic relatedness might introduce additional non-trivial statistical dependencies between the observed predictors. There exist straightforward extensions of the Benjamini-Hochberg (BH) procedure to guarantee FDR control under general dependence. Unless the authors extend their approach to allow for arbitrary dependencies between hypotheses, I do not believe the method is valid for the applications included in the experiments. Moreover, the assumption that all triplets (P_{i}, \mathbf{X}_{i}, H_{i}) are identically distributed is also hard to justify in practice.
Another questionable assumption, though perhaps of lesser practical importance when compared to the two previous issues, is the fact that the alternative distribution $f_{1}(p \mid \mathbf{x})$ must be non-increasing. The authors claim this is a standard assumption, which is not necessarily true (e.g. see [1] for a recent counter-example, where the alternative distribution is modelled as a beta distribution with learnable mean and precision).
(2) The paper seems to place a heavy emphasis on the use of neural networks, to the point that this fact is present in the title and name of the method. However, neural networks play a small role in the entire approach, as essentially all that is needed is a parametric function to map the input feature space describing each hypothesis to a significance threshold in the unit interval. Most importantly, all motivating examples are extremely low-dimensional (1D to 5D), making it likely that virtually any parametric function with enough capacity will suffice.
I believe the authors should include exhaustive experiments to study how different function classes perform in this task. By doing so, the authors might prove to what extent neural networks are an integral part of their approach. Moreover, it might occur that simpler, easier-to-interpret functions perform equally well, in which case it might be beneficial to use those instead.
(3) While the paper contains virtually no grammar mistakes, being easy to read in that regard, the quality of the exposition could be significantly improved.
In particular, Section 3 lacks detail in the description of Algorithm 1. Throughout the paper, the notion of cross-validation is rather confusing, as in this particular case distinct folds correspond to subsets of features (i.e. hypotheses) rather than samples used to compute the corresponding P-values. I believe this should be clarified early on. Also, the variables $\gamma_{i}$ appear out of nowhere and are not appropriately discussed in the text. In my opinion, the authors should additionally adopt the notation used in Equation (8) in Supplementary Section 2 for Equation (3), in order to make it clearer that what they are truly optimising is only a smooth approximation to the true objective function.
The description of the implementation and training procedure in Supplementary Section 2 is also insufficient, as it is lacking a proper justification of the hyperparameter choices (e.g. network architecture, $\lambda$ values, optimisation scheme) and experimental results describing the robustness of their approach to such hyperparameters. In practice, setting these hyperparameters will be a considerable hurdle for many practitioners in computational biology and statistical genetics, which might not necessarily be familiar with standard techniques in deep learning. Therefore, I believe these issues should be described in greater detail for this work to be of use for such communities.
***** Minor Points *****
The use of cross-validation makes the results random by construction, which is an undesirable property in statistical association testing. The authors should carry out experiments to demonstrate the stability of their results across different random partitions and to what extent a potential underlying stratification of features might be problematic for the cross-validation process.
Decision rules in multiple hypothesis testing ultimately strongly depend on the number n of hypothesis being tested. However, the proposed approach does not explicitly depend on n but, rather, relies on the cross-validation set and the test set containing exactly the same number of hypotheses and all hypotheses being i.i.d. It would be interesting if the authors were able to relax that requirement by accounting for the number of tests explicitly rather than implicitly.
The paper contains several wrong or imprecise claims that however do not strongly affect the overall contribution:
- “The P-value is the probability of a hypothesis being observed under the null” -> This is not a correct definition of P-value. A P-value is the probability of observing a value of the test statistic that indicates an association at least as extreme under the assumption that the null hypothesis holds.
- “The two most popular quantities that conceptualize the false positives are the false discovery proportion (FDP) and the false discovery rate (FDR)” -> Family-Wise Error Rate continues to be popular and has a considerably longer history than any of those quantities. Moreover, while it is true that FDR is extremely popular and de-facto replacing FWER in many domains, FDP is relatively less frequently used in comparison to FWER.
- Definition 1 is mathematically imprecise, as it is not handling the case $D(t)=0$ properly.
- “Moreover, $\mathbf{x}$ may be multidimensional, ruling out the possibility of non-parametric modelings, such as spline-based methods or the kernel based methods, whose number of parameters grows exponentially with the dimensionality” -> Kernel-based methods can be applied in this setting via inducing variables (e.g. [2]). Additionally, like mentioned earlier, many other approximators other than neural networks require less an exponentially-growing number of parameters on the dimension of the feature space.
- Lemma 1 does not show that the “mirroring estimator” bounds the true FD from above, as Line 155 seems to claim. It merely indicates that its expectation bounds the expected FD, which is a different statement than what is discussed between Lines 143-151.
- As mentioned earlier in this review, the statement in Lines 193-194 is not completely correct. Five dimensions is definitely not a high-dimensional setting by any means. At least thousands of features should have been used to justify that claim.
The paper would benefit from proof-reading, as it contains frequent typos. Some examples are:
- Line 150: $(t(\mathbf{x}), 1)$ -> $(1-t(\mathbf{x}), 1)$
- Line 157: “approaches $0$ very fast as $p \rightarrow 0$” -> “approaches $0$ very fast as $p \rightarrow 1$”
- Line 202: “Fig. 3 (b,c)” -> “Fig. 4 (b,c)”
- Lines 253-254: The alternative distribution $f_{1}$ is missing parenthesis
- Supplementary Material, proof of Theorem 1 and Lemma 2: All terms resulting from an application of Chernoff’s bound are missing parenthesis as well. While it might have been intentional, in my opinion it makes the proof a bit harder to read.
References:
[1] Li, Y., & Kellis, M. (2016). Joint Bayesian inference of risk variants and tissue-specific epigenomic enrichments across multiple complex human diseases. Nucleic acids research, 44(18), e144-e144.
[2] Dustin Tran, Rajesh Ranganath, David M. Blei - The variational Gaussian process. A powerful variational model that can universally approximate any posterior. International Conference on Learning Representations, 2016 |
nips_2017_774 | Learning spatiotemporal piecewise-geodesic trajectories from longitudinal manifold-valued data
We introduce a hierarchical model which allows us to estimate a group-average piecewise-geodesic trajectory in the Riemannian space of measurements and individual variability. This model falls into the well-defined class of mixed-effect models. The subject-specific trajectories are defined through spatial and temporal transformations of the group-average piecewise-geodesic path, component by component. Thus we can apply our model to a wide variety of situations. Due to the non-linearity of the model, we use the Stochastic Approximation Expectation-Maximization algorithm to estimate the model parameters. Experiments on synthetic data validate this choice. The model is then applied to metastatic renal cancer chemotherapy monitoring: we run estimations on RECIST scores of treated patients and estimate the time at which they escape from the treatment. Experiments highlight the role of the different parameters in the response to treatment. | The paper proposes a mixed-effects dynamical model which describes individual trajectories as being generated from transformations of an average behaviour. The work is motivated by the modelling of cancer treatment, but the proposed method is more generally applicable.
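As a toy illustration of the warping idea, here is a sketch in which a group-average logistic curve is turned into an individual trajectory by an affine time warp and an additive space shift; both the logistic choice and the affine form are deliberate simplifications of the paper's component-wise warps:

```python
import numpy as np

def gamma0(t):
    # Group-average behaviour: a logistic rise (illustrative choice).
    return 1.0 / (1.0 + np.exp(-t))

def subject_trajectory(t, a=1.0, b=0.0, c=0.0):
    """Individual curve obtained from the average one via the time warp
    psi(t) = a * (t - b) and an additive space shift c; the subject
    parameters (a, b, c) would be random effects in the full model."""
    return gamma0(a * (t - b)) + c
```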
The paper is well-written and -structured, and it considers an interesting application. I have limited knowledge of Riemannian geometry, but the methodology was not hard to follow and appears sound (to the extent that I can judge); that said, I cannot comment on the novelty of the approach. I generally found the presentation to be clear (despite some inconsistencies in the notation). The evaluation is fairly comprehensive and explained well in the text, and the model seems to perform well, especially with regards to capturing a variety of behaviours. It is interesting that the error does not monotonically increase as the sample size increases, but I suppose its variation is not that great anyway.
Overall, I believe this is a strong paper that is worthy of acceptance. Most of my comments have to do with minor language or notation issues and are given below. I only have two "major" observations. The first concerns Section 2.2, where the notation changes from the previous section. Specifically, the time warping transformation (previously \psi) is now denoted with \phi, while the space warping (previously \phi) is now strangely \phi(1) (line 154). This is pretty confusing, although presumably an oversight. Secondly, in Figure 1 and its description in Section 4.1, there is no explanation of what the vertical axis represents - presumably the number of cancerous cells, or something similar?
Other than that, I think it would be useful to slightly expand the explanation in a couple of places. The first of these is the introduction of the transformations (Section 2.1), where I personally would have welcomed a more intuitive explanation of the time and space warping (although Figure 1b helps with that some). The second place is Section 2.2: here, various decisions are said to be appropriate for the target application (eg lines 140, 155, 166), but without explaining why that is; some details of how these quantities and decisions correspond to the application might be useful. Some of the terminology could also be better explained, especially relating to the treatment, e.g. "escaping", "rupture". Finally, to better place the work in context, the paper could perhaps include a more concrete description of what the proposed model allows (for example, in this particular application) that could not be done with previous approaches (Section 4.3 mentions this very briefly).
Detailed comments:
-----------------
l21: Actually -> Currently?
l22: I don't think "topical" is the appropriate word here; perhaps "contested"?
l23: [Mathematics] have -> has
l24: [their] efficiency -> its
l29: the profile are fitting -> the profiles are fitted?
l36: [amounts] to estimate -> to estimating
l45: in the aim to grant -> with the aim of granting?
l50: temporary [efficient] -> temporarily
l54: little [assumptions] -> few
l54: to deal with -> (to be) dealt with
l55: his -> its
l56: i.e -> i.e.
Equation between lines 81-82: t_R^m in the subscript in the final term should be t_R^{m-1} (similarly in Equation (*))
l91: I count 2m-2 constraints, not 2m+1, but may be mistaken
l93: constrain -> constraint (also repeated a couple of times further down)
l106: "we decree the less constrains possible": I am not sure of the meaning here; perhaps "we impose/require the fewest constraints possible"?
l118: in the fraction, I think t_{R,i}^{l-1} should be t_R^{l-1} (based on the constraints above)
l131: used to chemotherapy monitoring -> either "used for/in chemotherapy monitoring" or "used to [do something] in chemotherapy monitoring"
l134: n = 2 -> m = 2 (if I understand right)
l139: especially -> effectively / essentially (also further down)
l154: \phi_i^l (1), as mentioned earlier in the comments
l156: *** -> fin?
l160: I think (but may be wrong) that the composition should be the other way around, i.e. \gamma_i^1 = \gamma_0^1 \circ \phi_i^1, if \phi is the time warping here
l167: [interested] to -> in
l177: independents -> independent
l183-185: is it not the case that, given z and \theta, the measurements are (conditionally) independent?
l241: the table 1 -> Table 1
Figure 2: there is a thick dark green line with no observations - is that intentional?
Table 1: I found the caption a little hard to understand, but I am not sure what to suggest as an improvement
l245: the biggest [...], the best -> the bigger [...], the better
l257: Symetric -> Symmetric
l260: is 14, 50 intended to be 14.50?
l260: Figure 3a, -> no comma
l264: early on [the progression] -> early in / early on in
l264: to consider with quite reserve -> to be considered with some reserve
Figure 3: (last line of caption) large scale -> large range?
l280: "we have explicit it": I'm not sure what you mean here
l284: Last -> Lastly |
nips_2017_2656 | Efficient Modeling of Latent Information in Supervised Learning using Gaussian Processes
Often in machine learning, data are collected as a combination of multiple conditions, e.g., the voice recordings of multiple persons, each labeled with an ID. How could we build a model that captures the latent information related to these conditions and generalizes to a new one with few data points? We present a new model called Latent Variable Multiple Output Gaussian Processes (LVMOGP) that allows us to jointly model multiple conditions for regression and to generalize to a new condition with a few data points at test time. LVMOGP infers the posteriors of Gaussian processes together with a latent space representing the information about different conditions. We derive an efficient variational inference method for LVMOGP whose computational complexity is as low as that of sparse Gaussian processes. We show that LVMOGP significantly outperforms related Gaussian process methods on various tasks with both synthetic and real data. | # Update after author feedback and reviewer discussion
I thank the authors for feedback. I maintain my assessment and do not recommend publication at this stage.
The core contribution is representing conditions with latent variables, and deriving a VI algorithm to cope with intractability. This is interesting, but the discussion around it could be much improved.
Some possible improvements are addressed in the author feedback, eg I'm not sure how Fig 1 could have been understood without the complementary explanation brought up in the feedback.
Beyond what has been addressed in the author feedback, some work is needed to make this paper appealing (which the idea under study, the method and the results seem to call for):
- clarifying the mathematical formulation, e.g. stating what forms of $k_H$ are being examined, providing a full probabilistic summary of the model, and pointing out design choices
- pointing out differences or similarities with existing work
- remove gratuitous reference to deep learning in intro (it detracts)
- make sure that all important questions a reader might have are addressed
# Overall assessment
The issue addressed (modelling univariate outputs which were generated under different, known conditions) and the modelling choice (representing conditions as latent variables) are interesting.
I see no reason to even mention and work with the (restrictive) parameterization of section 2.2 rather than moving immediately to the parameterization allowing "missing data" in section 4. The latter is more interesting and equally simple.
Overall, the paper is reasonably clear, but seems lacking in originality and novelty.
## Novelty
Wrt work on multi-output / multi-task GP, I can see that:
- this model has latent variables, not task IDs, and so can generalize over tasks (called conditions in the paper)
- the computational treatment using inducing points helps
The two seem very close though. The assumption of Kronecker structure is a staple. In particular, experiments should try to compare to these methods (e.g. considering every condition a distinct task).
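To make the Kronecker point concrete, the joint covariance such models assemble over all (input, condition) pairs can be sketched as below; the RBF choice for both kernels is an illustrative assumption, not the paper's prescription:

```python
import numpy as np

def rbf(A, B, lengthscale=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def lvmogp_cov(X, H):
    """Covariance over all (input, condition) pairs: K_H kron K_X.
    X: (n, dx) inputs; H: (c, dh) latent embeddings, one per condition."""
    return np.kron(rbf(H, H), rbf(X, X))
```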
The application of Titsias 2009 seems straightforward; I cannot see the novelty there, nor why so much space is used to present it.
## Presentation and clarity
1. Paper requires English proofreading. I usually report typos and mistakes in my reviews, but here there are too many grammar errors.
2. Motivating example with braking distance is helpful.
3. I don't understand fig 1 c,d at all: why do we not see the data points of fig 1a? where do these new points come from? what is depicted in the top vs the bottom plot?
4. References: the citation of Bussas 2017 is wrong. The complete correct citation is more like: Matthias Bussas, Christoph Sawade, Tobias Scheffer, and Niels Landwehr. Varying-coefficient models for geospatial transfer learning. Machine Learning, doi:10.1007/s10994-017-5639-3, 2017.
nips_2017_1156 | Learning Chordal Markov Networks via Branch and Bound
We present a new algorithmic approach for the task of finding a chordal Markov network structure that maximizes a given scoring function. The algorithm is based on branch and bound and integrates dynamic programming for both domain pruning and for obtaining strong bounds for search-space pruning. Empirically, we show that the approach dominates in terms of running times a recent integer programming approach (and thereby also a recent constraint optimization approach) for the problem. Furthermore, our algorithm scales at times further with respect to the number of variables than a state-of-the-art dynamic programming algorithm for the problem, with the potential of reaching 20 variables and at the same time circumventing the tight exponential lower bounds on memory consumption of the pure dynamic programming approach. | The authors present a branch and bound algorithm for learning Chordal Markov networks. The prior state-of-the-art algorithm is a dynamic programming approach based on a recursive characterization of clique trees and storing in memory the scores of already-solved subproblems. The proposed algorithm uses branch and bound to search for an optimal chordal Markov network. The algorithm first uses a dynamic programming algorithm to enumerate Bayesian network structures, which are later used as pruning bounds. A symmetry breaking technique is introduced to prune the search space.
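The overall search strategy can be sketched generically; the `expand`, `score`, and `upper_bound` callbacks are hypothetical placeholders for, respectively, extending a partial structure, scoring a complete chordal network (returning None otherwise), and the optimistic bound from the relaxed Bayesian-network dynamic program:

```python
import heapq
from itertools import count

def branch_and_bound(root, upper_bound, expand, score):
    """Generic best-first branch and bound: prune any partial structure
    whose optimistic bound cannot beat the incumbent."""
    best, best_score = None, float("-inf")
    tie = count()  # heap tiebreaker so nodes never get compared directly
    frontier = [(-upper_bound(root), next(tie), root)]
    while frontier:
        neg_bound, _, node = heapq.heappop(frontier)
        if -neg_bound <= best_score:
            continue  # bound cannot improve on the incumbent
        for child in expand(node):
            s = score(child)
            if s is not None and s > best_score:
                best, best_score = child, s
            b = upper_bound(child)
            if b > best_score:
                heapq.heappush(frontier, (-b, next(tie), child))
    return best, best_score
```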
The special property of Chordal Markov networks necessitates that no score pruning can be done. Given that this step is exponential in time and space, it is unlikely that learning CMSL can scale to even medium-size datasets (>20-30 variables).
What is the definition of a decomposable DAG? This is important for understanding other concepts defined later on, such as an ordered decomposable DAG.
The symmetry breaking rule is useful. What is its relation to the symmetry breaking rules introduced in (van Beek & Hoffmann 2015) for learning BNs? The authors referenced that paper and should have discussed the relevance.
The paper did not give a clear picture of how much time is spent on computing scores versus how much is spent on optimization. Also, the total runtime of the proposed algorithm includes computing optimal BNs and segment trees. How much do they take relative to the total runtime? The aggregated time does not provide a complete picture.
nips_2017_1331 | Formal Guarantees on the Robustness of a Classifier against Adversarial Manipulation
Recent work has shown that state-of-the-art classifiers are quite brittle, in the sense that a small adversarial change to an input that was originally classified correctly with high confidence leads to a wrong classification, again with high confidence. This raises concerns that such classifiers are vulnerable to attacks and calls into question their usage in safety-critical systems. We show in this paper, for the first time, formal guarantees on the robustness of a classifier by giving instance-specific lower bounds on the norm of the input manipulation required to change the classifier decision. Based on this analysis we propose the Cross-Lipschitz regularization functional. We show that using this form of regularization in kernel methods and neural networks improves the robustness of the classifier with no or small loss in prediction performance. | This paper fills an important gap in the literature on the robustness of classifiers to adversarial examples by proposing the first (to the best of my knowledge) formal guarantee (at an example level) on the robustness of a given classifier to adversarial examples. Unsurprisingly, the bound involves the Lipschitz constant of the Jacobians, which the authors exploit to propose a Cross-Lipschitz regularization. Overall the paper is well written, and the material is well presented. The proof of Theorem 2.1 is correct. I did not check the proofs of Propositions 2.1 and 4.1.
This is an interesting work. Theorem 2.1 is a significant contribution.
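For reference, the guarantee of Theorem 2.1 is roughly of the following form (a schematic rendering from memory, so constants and side conditions may differ from the paper; c is the predicted class and 1/p + 1/q = 1):

```latex
\|\delta\|_p \;\le\; \max_{R>0}\;\min\Big\{\,\min_{j\neq c}\,
\frac{f_c(x)-f_j(x)}{\max_{y\in B_p(x,R)}\big\|\nabla f_c(y)-\nabla f_j(y)\big\|_q}\,,\; R\Big\}
\;\;\Longrightarrow\;\; \text{the decision at } x+\delta \text{ is unchanged.}
```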
The experimental section is not very insightful as very simple models are used on small/toy-ish datasets. Some claims in the paper are not consistent:
- "the current main techniques to generate adversarial samples [3, 6] do not integrate box constraints"
There exist approaches integrating box constraints: see [1,2]
- "It is interesting to note that classical regularization functionals which penalize derivatives of the resulting classifier function are not typically used in deep learning."
Penalizing the derivatives is a well-known strategy to regularize neural networks [3,4]. In fact, it has recently been exploited to improve the robustness of neural networks to adversarial examples [3].
I have the following question for the authors:
Theorem 2.1 says that the classifier decision does not change inside a ball of small radius for a given p-norm. Yet, we can still find adversarial examples even when the norm of the gradient is small (yes, we can. I have tried!). Does this mean the topology induced by the p-norm does not do justice to the perceptual similarity we have of the image and its adversarial version? In other words; Does close in terms of p-norm reflect "visually close"?
Overall I like the paper. Though it leaves me with mixed feelings due to the weak experimental section, I lean toward the accept side.
[1] Intriguing properties of neural networks
[2] DeepFool: a simple and accurate method to fool deep neural networks
[3] Parseval Networks: Improving Robustness to Adversarial Examples
[4] Double back propagation: increasing generalization performance |
nips_2017_3050 | Estimating Mutual Information for Discrete-Continuous Mixtures
Estimation of mutual information from observed samples is a basic primitive in machine learning, useful in several learning tasks including correlation mining, information bottleneck, Chow-Liu tree, and conditional independence testing in (causal) graphical models. While mutual information is a quantity well-defined for general probability spaces, estimators have been developed only in the special case of discrete or continuous pairs of random variables. Most of these estimators operate using the 3H-principle, i.e., by calculating the three (differential) entropies of X, Y and the pair (X, Y). However, in general mixture spaces, such individual entropies are not well defined, even though mutual information is. In this paper, we develop a novel estimator for estimating mutual information in discrete-continuous mixtures. We prove the consistency of this estimator theoretically as well as demonstrate its excellent empirical performance. This problem is relevant in a wide array of applications, where some variables are discrete, some continuous, and others are a mixture between continuous and discrete components.
There has been a great deal of recent work on mutual information estimators, on both the theoretical as well as practical fronts [46,31,44,45,22,19,7,15,14,17,16].
The previous estimators focus on either of two cases: the data is either purely discrete or purely continuous. In these special cases, the mutual information can be calculated based on the three (differential) entropies of X, Y and (X, Y). We term estimators based on this principle 3H-estimators (since they estimate three entropy terms), and a majority of previous estimators fall under this category [19,16,46].
In practical downstream applications, we often have to deal with a mixture of continuous and discrete random variables. Random variables can be mixed in several ways. First, one random variable can be discrete whereas the other is continuous. For example, we may want to measure the strength of the relationship between children's age and height, where age X is discrete and height Y is continuous. Secondly, a single scalar random variable itself can be a mixture of discrete and continuous components. For example, consider X following a zero-inflated Gaussian distribution, which takes value 0 with probability 0.1 and follows a Gaussian distribution with mean 10 with probability 0.9. This distribution has both a discrete component as well as a component with a density, and is a well-known model for gene expression readouts [24,37]. Finally, X and/or Y can be high-dimensional vectors, each of whose components may be discrete, continuous or mixed.
In all of the aforementioned mixed cases, mutual information is well-defined through the Radon-Nikodym derivative (see Section 2) but cannot be expressed as a function of the entropies or differential entropies of the random variables. Crucially, entropy is not well defined when a single scalar random variable comprises both discrete and continuous components, in which case 3H estimators (the vast majority of prior art) cannot be directly employed. In this paper, we address this challenge by proposing an estimator that can handle all these cases of mixture distributions. The estimator directly estimates the Radon-Nikodym derivative using the k-nearest neighbor distances from the samples; we prove ℓ2 consistency of the estimator and demonstrate its excellent practical performance through a variety of experiments on both synthetic and real datasets. Most relevantly, it strongly outperforms natural baselines of discretizing the mixed random variables (by quantization) or making them continuous by adding small Gaussian noise.
The rest of the paper is organized as follows. In Section 2, we review the general definition of mutual information via the Radon-Nikodym derivative and show that it is well-defined for all the cases of mixtures. In Section 3, we propose our estimator of mutual information for mixed random variables. In Section 4, we prove that our estimator is ℓ2 consistent under certain technical assumptions and verify that the assumptions are satisfied in most practical cases. Section 5 contains the results of our detailed synthetic and real-world experiments testing the efficacy of the proposed estimator. | Summary: This paper proposes an estimator for the mutual information of a pair (X,Y) of random variables given N joint samples. While most previous work has focused separately on the purely discrete and purely continuous cases (with significantly different methods used in each setting), the novel focus here is on the case of combinations or mixtures of discrete and continuous random variables; in this case, the mutual information is well-defined but cannot always be expressed in terms of entropy, making most previous estimators unusable. The proposed estimator generalizes the classic KSG estimator by defining the behavior at sample points with k-NN distance 0. The L_2 consistency of the estimator is proven, under a sort of continuity condition on the density ratio that allows the existence of atoms, and suitable scaling of k with N. Finally, experimental results on synthetic and real data compare the proposed estimator to some baseline estimators based on perturbing the data to make it purely discrete or continuous.
Main Comments:
This paper addresses a significant hole that remains in the nonparametric dependence estimation literature despite significant recent progress in this area; many current estimators are difficult or impossible to use with many data sets due to a combination or mixture of discrete and continuous components. While the proposed estimator seems like a fairly simple correction of the KSG estimator, and the consistency result is thus likely straightforward given the results of [16], the empirical section is strong and I think the work is of sufficient practical importance to merit acceptance. The paper is fairly well-written and easy to read.
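To make the construction concrete, here is a minimal sketch of a mixed KSG-style estimator in the spirit of the paper (my own simplified reimplementation, not the authors' exact estimator: tie-breaking and boundary conventions are glossed over, and the function name is hypothetical):

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma

def mixed_mi(x, y, k=5):
    """KSG-style MI estimate that tolerates discrete atoms (sketch)."""
    x, y = x.reshape(len(x), -1), y.reshape(len(y), -1)
    n = len(x)
    xy = np.hstack([x, y])
    t_xy, t_x, t_y = cKDTree(xy), cKDTree(x), cKDTree(y)
    # distance to the k-th neighbour in the joint space (max-norm);
    # the query includes the point itself, hence k + 1
    rho = t_xy.query(xy, k=k + 1, p=np.inf)[0][:, -1]
    terms = np.empty(n)
    for i in range(n):
        if rho[i] == 0.0:
            # discrete atom: k_i = number of other points tied with i
            k_i = len(t_xy.query_ball_point(xy[i], 0.0, p=np.inf)) - 1
        else:
            k_i = k
        n_x = len(t_x.query_ball_point(x[i], rho[i], p=np.inf)) - 1
        n_y = len(t_y.query_ball_point(y[i], rho[i], p=np.inf)) - 1
        terms[i] = digamma(k_i) + np.log(n) - np.log(n_x + 1) - np.log(n_y + 1)
    return max(0.0, float(terms.mean()))
```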
Minor Comments/questions:
1) As far as I understand, 3H estimators can work as long as each individual dimension of the variables is fully discrete or fully continuous, but fail if any variable is a mixture of the two (i.e., if an otherwise continuous distribution has any atoms). It might be good to clarify this, and also to compare to some 3H estimators (e.g., in experiment II).
2) Lines 39-42: Perhaps there is a typo or I am misreading something, but it isn't clear to me why the given definition of X isn't entirely discrete (it appears to be supported on {0,1,2,...}).
3) Lines 129-130: this could probably be clearer; "the average of Radon-Nikodym derivative" should be "the average of the logarithm of the Radon-Nikodym derivative of the joint distribution P_{XY} with respect to the product distribution P_X P_Y."
4) As discussed in Appendix A, a key observation of the paper is that, to be well-defined, mutual information requires only the trivial condition of absolute continuity of the joint distribution w.r.t. the product of the marginal distributions (rather than w.r.t. another base measure, as for entropy). I think it would be worth explicitly mentioning this in the main paper, rather than just in the Appendix.
5) Although this paper doesn't study convergence rates, could the authors conjecture about how to quantify parameters that determine the convergence rate for mixed discrete/continuous variables? For continuous distributions, the smoothness of the density is usually relevant, while, for discrete distributions, the size of the support is usually relevant. However, mixture distributions are discontinuous and have infinite support.
6) Theorem 1 assumes that k -> infinity as N -> infinity. Can the authors conjecture whether the proposed estimator is consistent for fixed values of k? I ask because of recent work [4,16,43] showing that the KL and KSG estimators are consistent even for fixed k.
7) For some reason, the PDF is encoded such that I cannot search, highlight, or copy text. Perhaps it was converted from a postscript? Anyhow, this made the paper significantly more difficult to review.
#### AFTER READING AUTHOR REBUTTAL ####
"H(X,Y) is not well-defined as either discrete entropy or continuous entropy" - I think |
nips_2017_2833 | ALICE: Towards Understanding Adversarial Learning for Joint Distribution Matching
We investigate the non-identifiability issues associated with bidirectional adversarial training for joint distribution matching. Within a framework of conditional entropy, we propose both adversarial and non-adversarial approaches to learn desirable matched joint distributions for unsupervised and supervised tasks. We unify a broad family of adversarial models as joint distribution matching problems. Our approach stabilizes learning of unsupervised bidirectional adversarial learning methods. Further, we introduce an extension for semi-supervised learning tasks. Theoretical results are validated in synthetic data and real-world applications. | Adversarially Learned Inference (a.k.a. Adversarial Feature Learning) is an interesting extension to GANs, which can be used to train a generative model by learning generator G(z) and inference E(x) functions, where G(z) maps samples from a latent space to data and E(x) is an inference model mapping observed data to the latent space. This model is trained adversarially by jointly training E(x) and G(z) with a discriminator D(x,z) which is trained to distinguish between real (E(x), x) samples and fake (z, G(z)) samples. This is an interesting approach and has been shown to generate latent representations which are useful for semi-supervised learning.
The authors highlight an issue with the ALI model, by constructing a small example for which there exist optimal solutions to the ALI loss function which have poor reconstruction, i.e. G(E(x)) can be very different to x. This identifiability issue is possible because the ALI objective imposes no restrictions on the conditional distributions of the generator and inference models. In particular, this allows for solutions which violate a condition known as cycle consistency, which is considered a desirable property in unsupervised learning. They propose adding a cross-entropy term to the loss, which enforces a constraint on one of the two conditionals.
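To make the proposed fix concrete, here is a hedged sketch of a generator/encoder objective in the ALICE spirit (the networks are placeholders, the sign conventions follow the usual non-saturating GAN loss, and I substitute an ℓ1 reconstruction for the paper's conditional cross-entropy term, which plays the same cycle-consistency role):

```python
import torch.nn.functional as F

def alice_gen_loss(x, z, G, E, D, lam=1.0):
    """Sketch of an ALICE-style generator/encoder objective.

    G: decoder z -> x, E: encoder x -> z, D: joint discriminator on (x, z)
    returning logits. The adversarial part is the bidirectional (ALI) term;
    `recon` stands in for the paper's cross-entropy regularizer.
    """
    d_fake = D(G(z), z)                # generator pair: should fool D
    d_real = D(x, E(x))                # encoder pair: should fool D too
    adv = F.softplus(-d_fake).mean() + F.softplus(d_real).mean()
    recon = F.l1_loss(G(E(x)), x)      # cycle-consistency surrogate
    return adv + lam * recon
```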
The ALICE training objective is shown to yield better reconstructions than ALI on toy data. They also evaluate on the inception score, a metric proposed in Salimans et al. (2016), which also shows better performance for the model trained with the ALICE objective.
The description of how the model can be applied to semi-supervised learning seemed a little strange to me. The authors consider a case where the latent space z corresponds to the labels, and semi-supervised learning involves training a model with the ALICE objective when some paired (z,x) samples are available. They claim that these labelled samples can be leveraged to address the identifiability issues in ALI. While this setting holds for the experiments they present in section 5.2, a more common application of this kind of model to semi-supervised learning is to use the inference network to generate features which are then used for classification. The results in the ALI paper follow this approach and evaluate ALI for semi-supervised learning on some standard benchmarks (CIFAR10 and SVHN). This is possibly the strongest evidence in the ALI paper that the model is learning a useful mapping from data space to latent space, so it would be useful to see the same evaluation for ALICE.
In the ALI and BiGAN papers, it is shown that cycle consistency is satisfied at the optimum when the E and G mappings are deterministic. Similarly, in this paper the authors show that the optimal solution to a model trained with cross-entropy regularisation has deterministic mappings. In lines 228-233, the authors suggest that a model trained with deterministic mappings is likely to underfit, and that taking the alternative approach of training a model with stochastic mappings and a cross-entropy regularisation gives faster and more stable training. While this is an interesting claim, it was not immediately clear to me why this would be the case from the description provided in the paper.
This paper is on an interesting topic, highlighting a significant issue with the ALI training objective and proposing a solution using cross-entropy regularisation. It would be more convincing if the authors had evaluated their model on the same set of experiments as the original ALI paper, especially the semi-supervised learning benchmarks, and provided more justification for the use of cross-entropy regularisation over deterministic mappings.
nips_2017_2539 | Group Additive Structure Identification for Kernel Nonparametric Regression
The additive model is one of the most popular models for high-dimensional nonparametric regression analysis. However, its main drawback is that it neglects possible interactions between predictor variables. In this paper, we reexamine the group additive model proposed in the literature, and rigorously define the intrinsic group additive structure for the relationship between the response variable Y and the predictor vector X, and further develop an effective structure-penalized kernel method for simultaneous identification of the intrinsic group additive structure and nonparametric function estimation. The method utilizes a novel complexity measure we derive for group additive structures. We show that the proposed method is consistent in identifying the intrinsic group additive structure. Simulation studies and real data applications demonstrate the effectiveness of the proposed method as a general tool for high-dimensional nonparametric regression. | Note: Since the supplement appears to include the main paper, I simply reviewed that, and all line numbers below correspond to the supplement.
Summary:
This studies group-additive nonparametric regression models, in which, for some partitioning of the predictor variables, the regression function is additive between groups of variables; this model interpolates between the fully nonparametric model, which is difficult to fit, and the additive model, which is sometimes too restrictive. Specifically, the paper studies the problem where the group structure is not known in advance and must be learned from the data. To do this, the paper proposes a novel penalty function, based on the covering numbers of RKHS balls, which is then added to the kernel ridge regression objective. This results in an objective that can be optimized over both the group structure (which, together with the kernel determines a function space via direct sum of RKHSs over each group of variables) and the regression estimate within each group. Two algorithms are presented for approximately solving this compound optimization problem, and then theoretical results are presented showing (a) the rate at which the empirical risk of the estimate approaches the true risk of the true optimum, and (b) consistency of the group structure estimate, in that the probability it matches the true group structure approaches 1 as n -> infinity. Finally, experimental results are presented on both synthetic and real data.
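To fix ideas, the penalized objective has roughly the following shape (my schematic rendering, writing G = {G_1, ..., G_m} for a candidate partition of the predictors, x^{(G_j)} for the corresponding sub-vector, and C(G) for the covering-number-based complexity; the exact form of C(G) follows the paper, not this sketch):

```latex
(\hat G,\hat f)\;=\;\operatorname*{arg\,min}_{G}\ \min_{f_j\in\mathcal{H}_j}\;
\frac{1}{n}\sum_{i=1}^{n}\Big(y_i-\sum_{j=1}^{m}f_j\big(x_i^{(G_j)}\big)\Big)^{2}
\;+\;\lambda\sum_{j=1}^{m}\|f_j\|_{\mathcal{H}_j}^{2}
\;+\;\mu\,C(G).
```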
Main Comments:
The key innovation of the paper appears to be recognizing that the complexity of a particular group structure can be quantified in terms of covering numbers of the direct sum space. The paper is comprehensive, including a well-motivated and novel method, and reasonably solid theoretical and empirical results. I'm not too familiar with other approaches to fitting models between the additive and nonparametric models, but, assuming the discussion in Lines 31-50 is fairly complete, this paper seems like a potentially significant advance. As noted in the Discussion section, the main issue with the method appears to be the difficulty of solving the optimization problem over group structures when the number of variables is large. The paper is also fairly clearly written, aside from a lot of typos.
Minor Comments/Questions:
Just curious: is there any simple characterization of "interaction" between two variables that doesn't rely on writing the whole model in Equation (1)?
Line 169: I don't quite understand the use of almost surely here. Is this meant as n -> infinity?
Equation (3) is missing a summation (over i).
Line 205: Perhaps "translation invariant kernel" is a more common term for this than "convolutional kernel"
Typos:
Line 145: "on RHS" should be "on the RHS"
Line 152: "not only can the bias of \hat f_{\lambda,G} reduces" should be "not only can the bias of \hat f_{\lambda,G} reduce"
Line 160: "G^* exists and unique" should be "G^* exists and is unique"
Line 247: "turning parameters" should be "tuning parameters"
Algorithm 2: Line 1: "State with" should be "start with"
Line 251: "by compare" should be "by comparing". |
nips_2017_1238 | Expectation Propagation with Stochastic Kinetic Model in Complex Interaction Systems
Technological breakthroughs allow us to collect data with increasing spatiotemporal resolution from complex interaction systems. The combination of high-resolution observations, expressive dynamic models, and efficient machine learning algorithms can lead to crucial insights into complex interaction dynamics and the functions of these systems. In this paper, we formulate the dynamics of a complex interacting network as a stochastic process driven by a sequence of events, and develop expectation propagation algorithms to make inferences from noisy observations. To avoid getting stuck at a local optimum, we formulate the problem of minimizing the Bethe free energy as a constrained primal problem and take advantage of the concavity of the dual problem in the feasible domain of the dual variables, guaranteed by the duality theorem. Our expectation propagation algorithms demonstrate better performance in inferring the interaction dynamics in complex transportation networks than competing models such as the particle filter, the extended Kalman filter, and deep neural networks. | This paper considers the variational approach to the inference problem on a certain type of temporal graphical model. It defines a convex formulation of the Bethe free energy, resulting in a method with optimal convergence guarantees. The authors derive Expectation-Propagation fixed point equations and gradient-based updates for solving the optimization problem. A toy example is used to illustrate the approach in the context of transportation dynamics and it is shown that the proposed method outperforms sampling, extended Kalman filtering and a neural network method.
In the variational formulation, the optimization problem is generally non-convex due to the entropy terms. Typical variational methods bound the concave part linearly and employ an EM-like algorithm. The key contribution of this paper is to work out a dual formulation on the natural parameters so that the dual problem becomes convex. The derivations seem correct and this looks like an important contribution, if I have not missed some important detail. The authors should elaborate on the generality and applicability of their approach to other models.
I think it would be useful to compare their method with the standard double-loop approach [11] both in terms of describing the differences and in numerical comparison (in terms of error and cpu-time).
The authors state that they are minimizing the Bethe free energy; I found it a bit confusing that they call this a mean-field forward and backward pass in the algorithm.
Regarding the experiments, the state representation goes from a factorized state with the state of an agent being one component to location states in which the identities of the agents are lost and the components become strongly coupled. This is OK, but to me it contradicts a bit the motivation in the introduction, in which the authors state "we are not satisfied with only a macroscopic description of the system (...) the goal is (..) to track down the actual underlying dynamics".
Some details about the other methods would be good to know. For example, how many particles N were used for the PF? How sensitive are the results (accuracy and time) w.r.t. N? Also it is not clear how the NNs are trained. Is the output provided the actual hidden state?
The paper looks too dense. I think a substantial part of the Introduction is not necessary, and more space could be gained so that formulas do not need to be in a reduced font and Figure 1 can be made larger.
Other minor comments:
It should be stated in the beginning that x is a continuous variable unless otherwise stated
Eq (3) no need to define g_v, since it is not used in the rest of the paper
line 130: better $h(x_{t-1},c_v)$ than $h(x_{t},c_v)$
Eq (4) I think $1-\tau$ should be $(1-\tau)$
Equations between 139-140. \psi(|x_{t-1,1}) -> \psi(x_{t-1,1})
Equations between 139-140. "F_dual" -> "F_Dual"
line 104: "dynamics" -> "dynamic"
line 126: remove "we have that"
line 193: -> "by strong duality both solutions exist and are unique"
line 194: "KKT conditions gives" -> "KKT conditions give"
Use \eqref{} instead of \ref{} for equations
Algorithm 1: the name can be improved
Algorithm 1: Gradient ascent: missing eqrefs in Eqs. ...
Algorithm 1: Gradient ascent: missing parenthesis in all $\langle f(x_t \rangle$
Figure 1
line 301: "outperforms against" -> "outperform"
line 302: "scales" -> "scale" |
nips_2017_2059 | Geometric Matrix Completion with Recurrent Multi-Graph Neural Networks
Matrix completion models are among the most common formulations of recommender systems. Recent works have shown a performance boost for these techniques when introducing the pairwise relationships between users/items in the form of graphs, and imposing smoothness priors on these graphs. However, such techniques do not fully exploit the local stationary structures on user/item graphs, and the number of parameters to learn is linear w.r.t. the number of users and items. We propose a novel approach to overcome these limitations by using geometric deep learning on graphs. Our matrix completion architecture combines a novel multi-graph convolutional neural network that can learn meaningful statistical graph-structured patterns from users and items, and a recurrent neural network that applies a learnable diffusion on the score matrix. Our neural network system is computationally attractive as it requires a constant number of parameters independent of the matrix size. We apply our method on several standard datasets, showing that it outperforms state-of-the-art matrix completion techniques. | This paper generalizes a recent NIPS 2016 paper [10] by allowing convolutional neural network (CNN) models to work on multiple graphs. It extracts local stationary patterns from signals defined on the graphs simultaneously. In particular, it is applied to recommender systems by considering the graphs defined between users and between items. The multi-graph CNN model is followed by a recurrent neural network (RNN) with long short-term memory (LSTM) cells to complete the score matrix.
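As I understand the multi-graph convolution at the heart of the architecture, it filters the score matrix with Chebyshev polynomials of the row and column graph Laplacians simultaneously; a minimal single-channel sketch (shapes only; names hypothetical):

```python
import numpy as np

def multigraph_conv(X, L_r, L_c, theta):
    """Bidimensional Chebyshev filtering of a score matrix X (sketch).

    L_r, L_c: rescaled Laplacians of the user and item graphs;
    theta: (p, p) array of learnable filter coefficients.
    Computes sum_{j,j'} theta[j, j'] * T_j(L_r) @ X @ T_{j'}(L_c).
    """
    p = theta.shape[0]
    T_r = [np.eye(L_r.shape[0]), L_r]
    T_c = [np.eye(L_c.shape[0]), L_c]
    for _ in range(2, p):  # Chebyshev recursion: T_k = 2 L T_{k-1} - T_{k-2}
        T_r.append(2 * L_r @ T_r[-1] - T_r[-2])
        T_c.append(2 * L_c @ T_c[-1] - T_c[-2])
    out = np.zeros_like(X, dtype=float)
    for j in range(p):
        for jp in range(p):
            out += theta[j, jp] * (T_r[j] @ X @ T_c[jp])
    return out
```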
Strengths of the paper:
* The proposed deep learning architecture is novel for solving the matrix completion problem in recommender systems with the relationships between users and the relationships between items represented as two graphs.
* The proposed method has relatively low computational complexity.
* The proposed method has good performance in terms of accuracy.
Weaknesses of the paper:
* The multi-graph CNN part of the proposed method is explained well, but the RNN part leaves open many questions. Apart from illustrating the process using an example in Figure 3, there is not much discussion to explain why it can be regarded as a matrix diffusion process. Does the process correspond to optimizing with respect to some criterion? What is the effect of changing the number of diffusion steps on the completed matrix and how is the number determined? This needs more detailed elaboration.
In addition, to demonstrate that incorporating the pairwise relationships between users or items helps, you may add some degenerate versions of RMGCNN or sRMGCNN by removing one or both graphs while keeping the rest of the architecture unchanged (including the RNN part).
Minor comments (just some examples of the language errors):
#18: “Two major approach”
#32: “makes well-defined”
#66: “Long-Short Term Memory”
#76: “turns out an NP-hard combinatorial problem”
#83-84: “to constraint the space of solutions”
#104-105: “this representation … often assumes”
#159: “in details”
Table 1 & Table 2: strictly speaking you should show the number of parameters and complexity using the big-O notation.
The reference list contains many arXiv papers. For those papers that have already been accepted, the proper sources should be listed instead. |
nips_2017_26 | Concentration of Multilinear Functions of the Ising Model with Applications to Network Data
We prove near-tight concentration of measure for polynomial functions of the Ising model under high temperature. For any degree d, we show that a degree-d polynomial of an n-spin Ising model exhibits exponential tails that scale as
Our concentration radius is optimal up to logarithmic factors for constant d, improving known results by polynomial factors in the number of spins. We demonstrate the efficacy of polynomial functions as statistics for testing the strength of interactions in social networks in both synthetic and real-world data.
The paper considers concentration of bilinear functions of samples from a ferromagnetic Ising model under high temperature (satisfying Dobrushin's condition, which implies fast mixing). The authors show, for the case when there is no external field, that any bilinear function with bounded coefficients concentrates within a radius of O(n) around the mean, where n is the number of nodes in the Ising model. The paper also shows that for Ising models with external fields, bilinear functions of mean-shifted variables again obey such concentration properties.
The key technical contribution is adapting the theory of exchangeable pairs from [Cha05], which has been applied to linear functions of the Ising model, and applying it to bilinear functions. Naive applications give a concentration radius of n^{1.5}, which the authors improve upon. The results are also optimal due to the lower bound for the worst case.
Strengths:
a) The key strength is the proof technique: the authors show that the usual strategy of creating an exchangeable pair from a sample X from the Ising model and an X' obtained by one step of Glauber dynamics is not sufficient. So they restrict the Glauber dynamics to a desirable part of the configuration space and show that the same exchangeable-pair trick can be done on this censored Glauber dynamics (a sketch of a plain Glauber update appears after this list). They also show that this Glauber dynamics mixes fast, using path coupling results and also a coupling argument that chains two Glauber Markov chains tightly. This tightly chained censored Glauber dynamics really does satisfy all the conditions needed for the exchangeable-pairs argument. The authors show this carefully via several lemmas in the appendix. I quite enjoyed reading the paper.
b) I have read the proofs in the appendix. Barring some typos that can be fixed, the proof is correct to the best of my knowledge and the technique is the major contribution of the paper.
c) The authors apply this to test whether samples from a specific distribution on social networks come from an Ising model in the high-temperature regime. Both synthetic and real-world experiments are provided.
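For readers unfamiliar with the chain being censored, a plain (unrestricted) Glauber update for an Ising model with coupling matrix J and no external field looks as follows; this is an illustrative sketch only, not the paper's restricted dynamics:

```python
import numpy as np

def glauber_step(sigma, J, beta, rng):
    """One Glauber update: resample a uniformly chosen spin from its
    conditional distribution given all the others (no external field)."""
    i = rng.integers(len(sigma))
    h = J[i] @ sigma - J[i, i] * sigma[i]           # local field at site i
    p_plus = 1.0 / (1.0 + np.exp(-2.0 * beta * h))  # P(sigma_i = +1 | rest)
    sigma[i] = 1 if rng.random() < p_plus else -1
    return sigma
```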
Weaknesses:
Quite minor weaknesses:
a) Page 1, appendix, Section 2, point 2: should it not be f(X) - E[f(X)]??
b) Why is Lemma 1 stated? It does not help much in understanding the quoted Theorem 1. A few lines connecting them could help.
c) Definition 1: I think it should be intersection and not union?? Because the proof of Lemma 5 has a union of bad events, which should be the negation of the intersection of the good events we want in Definition 1.
d) Typo in the last equation in Claim 3.
e) Lemma 9 statement: should it not be (1-C/n)^t??
f) In the proof of Lemma 10, the authors want to say that, due to fast mixing, the total variation distance is bounded by some expression. I think it's a bit unclear what the parameter eta is?? It appears for the first time. It would be great if the authors could explain that step better.
g) Regarding the authors' experiments with Last.fm: what's the practical use case of testing whether samples came from an Ising model or not? Actually, the theoretical parts of this paper are quite strong, but I just wanted to hear the authors' thoughts.
h) Even for linear functions of the uniform distribution on the Boolean hypercube, at a different radius, there are anti-concentration theorems like Littlewood-Offord. Recently these have been generalized to bilinear forms (by Van Vu, Terence Tao, and other co-authors). I am wondering if something could be said about anti-concentration theorems for bilinear forms on Ising models??
nips_2017_1758 | Adaptive Accelerated Gradient Converging Method under Hölderian Error Bound Condition
Recent studies have shown that the proximal gradient (PG) method and the accelerated gradient method (APG) with restarting can enjoy linear convergence under a weaker condition than strong convexity, namely a quadratic growth condition (QGC). However, the faster convergence of the restarting APG method relies on the potentially unknown constant in the QGC to appropriately restart APG, which restricts its applicability. We address this issue by developing a novel adaptive gradient converging method, i.e., leveraging the magnitude of the proximal gradient as a criterion for restart and termination. Our analysis extends to a much more general condition beyond the QGC, namely the Hölderian error bound (HEB) condition. The key technique for our development is a novel synthesis of adaptive regularization and a conditional restarting scheme, which extends previous work focusing on strongly convex problems to a much broader family of problems. Furthermore, we demonstrate that our results have important implications and applications in machine learning: (i) if the objective function is coercive and semialgebraic, PG's convergence speed is essentially o(1/t), where t is the total number of iterations; (ii) if the objective function consists of an ℓ1, ℓ∞, ℓ1,∞, or Huber norm regularization and a convex smooth piecewise quadratic loss (e.g., square loss, squared hinge loss and Huber loss), the proposed algorithm is parameter-free and enjoys a faster linear convergence than PG without any other assumptions (e.g., a restricted eigenvalue condition). It is notable that our linear convergence results for the aforementioned problems are global instead of local. To the best of our knowledge, these improved results are shown for the first time in this work. | Summary: This paper studies the convergence rate of the proximal gradient method and its accelerated version under the more general Hölderian error bound condition, further extending a line of recent efforts that has weakened the usual strong convexity assumption to the quadratic growth condition. The main contribution is a restarting scheme that eliminates the need of knowing the Hölderian constant. The authors complement their theoretical claim with some numerical experiments, which seem to verify the efficiency of the proposed adaptive algorithm.
As far as I can tell, this work is a fine synthesis of some recent developments around semialgebraic functions, error bound, and the proximal gradient algorithm. The results are not too surprising given the existing known results. Nevertheless, it is a very welcome addition to the field. And given its practical nature, the proposed algorithm might be useful for a variety of applications (some toy demonstrations in the experiment).
Some comments:
While the technical proofs involve many pieces from relevant references, the general idea is in fact quite simple (and dates back at least to FISTA): one first derives some condition (e.g., Theorem 5) that is guaranteed to hold if the correct parameter is used; then, in the algorithm, one periodically checks this condition, and if it ever falls short we know our parameter of choice is too small, so we increase it (the familiar doubling trick) and rerun the algorithm. This high-level description of the adaptive algorithm is perhaps worth explicitly mentioning in the revision (to give less familiar readers clear guidance).
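A minimal sketch of that high-level loop (the `step` and `certificate` routines are hypothetical placeholders standing in for one APG iteration and the paper's checkable condition, respectively):

```python
import numpy as np

def restarted_solver(step, certificate, x0, c0=1.0, inner_iters=100, max_outer=30):
    """Conditional restarting with the doubling trick (illustrative sketch).

    step(x, c): one accelerated proximal-gradient iteration using guess c;
    certificate(x, c): True while the condition that must hold for a
    sufficiently large c (e.g., adequate decrease of the proximal gradient
    magnitude) is still satisfied.
    """
    x, c = np.asarray(x0, dtype=float), c0
    for _ in range(max_outer):
        restarted = False
        for _ in range(inner_iters):
            x = step(x, c)
            if not certificate(x, c):  # guess for the unknown constant too small
                c *= 2.0               # double it and restart the stage
                restarted = True
                break
        if not restarted:
            return x, c                # a full stage passed the check
    return x, c
```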
Line 85: no need to mention that "g(x) > -\infty for all x" since the range is assumed to be (-\infty, \infty].
Theorem 1: Options I and II are only defined in the appendix; better to mention them in the main text (or at least say "see appendix"). The calculations and notations in the proofs of Theorems 1 and 2 are a little sloppy: when theta > 1/2, does the rate not depend on epsilon (the target accuracy) at all? Some more details between, say, line 68 and line 69 are needed.
Section 4: Can one develop a similar proof for the objective value gap? Is it the DUAL algorithm you use to solve Eq. (8) that limits the guarantee to the minimal gradient?
Can we say anything about the dependence of the constants theta and c on the dimension and the number of samples? It could be possible that a linear rate is in fact worse than a sublinear rate if such dependence is not carefully balanced.
Experiments: On some datasets (cpusmall and bodyfat), why does proximal gradient take so many more iterations? This looks a little unusual. How many of the proximal mappings are spent on searching for the step size (for each method)? Also, to give a fuller picture, maybe also include the results for the decrease of the objective values. Another observation here: is it really meaningful to consider an absolute epsilon = 10^{-6}, say? For optimization per se, sure. But what if we only care about the classification accuracy? Would it be possible that a solution at epsilon = 10^{-3} already gives us comparable classification accuracy? (This should be easy to verify since the authors did experiment with a classification dataset.)
nips_2017_924 | Regularized Modal Regression with Applications in Cognitive Impairment Prediction
Linear regression models have been successfully used for function estimation and model selection in high-dimensional data analysis. However, most existing methods are built on least squares with the mean square error (MSE) criterion, which are sensitive to outliers and whose performance may be degraded for heavy-tailed noise. In this paper, we go beyond this criterion by investigating regularized modal regression from a statistical learning viewpoint. A new regularized modal regression model is proposed for estimation and variable selection, which is robust to outliers, heavy-tailed noise, and skewed noise. On the theoretical side, we establish the approximation estimate for learning the conditional mode function, the sparsity analysis for variable selection, and the robustness characterization. On the application side, we apply our model to successfully improve cognitive impairment prediction using the Alzheimer's Disease Neuroimaging Initiative (ADNI) cohort data. | The authors present a regularized modal regression method. The statistical learning view of this proposed method is studied and the resulting model is applied to Alzheimer's disease studies. There are several presentation and evaluation issues with this work in its current form.
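Before the detailed comments, for readers new to the topic: kernel-based modal regression objectives typically take the following schematic form, maximizing a kernel-smoothed fit around the conditional mode rather than minimizing squared error (the paper's exact kernel and regularizer may differ from this sketch):

```latex
\hat f \;=\; \operatorname*{arg\,max}_{f\in\mathcal{H}}\;
\frac{1}{n}\sum_{i=1}^{n} K_\sigma\big(y_i - f(x_i)\big)\;-\;\lambda\,\Omega(f),
```

where K_σ is a density kernel with bandwidth σ; as σ shrinks, the criterion targets the conditional mode, which is the source of the claimed robustness to outliers and skewed noise.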
Firstly, the authors motivate the paper using Alzheimer's disease studies and argue that modal regression is the way to analyze correlations between several markers of the disease. This seems very artificial. The necessity of using the conditional mode for regression has nothing specific to the Alzheimer's application. The motivation for RMR makes sense without any AD-related context. The authors then argue that the main contribution of the work is designing modal regression for AD (line 52). However, the rest of the methods discussion is about statistical learning -- no AD at all. This presentation is dubious. The authors need to decide if the work is statistical learning theory for modal regression OR an application of modal regression to AD (for the latter, more applied journals are relevant, not this venue).
Moving beyond this overall presentation issue, if we assume that modal regression for AD is the focus of the paper, then the results should show precisely this performance improvement compared to the mean-based regression baselines. However, as shown in Section 4, this does not seem to be the case. In several cases the performance of LSR is very close to RMR. It should be pointed out that when reporting such sets of error measures, one should perform statistical tests to check whether the performance differences are significant, i.e., report p-values for the differences between all the entries of the tables. Beyond the tables, the message is not clear in the figures either -- the visualizations need to be improved (or quantified in some sense). These unconvincing improvements on the AD data are all the more reason the authors should choose 'standard' regression datasets and show the performance gains of RMR explicitly (once this is done, the results on AD can be trusted and any small improvement can be deemed OK).
nips_2017_2497 | Resurrecting the sigmoid in deep learning through dynamical isometry: theory and practice
It is well known that weight initialization in deep networks can have a dramatic impact on learning speed. For example, ensuring the mean squared singular value of a network's input-output Jacobian is O(1) is essential for avoiding exponentially vanishing or exploding gradients. Moreover, in deep linear networks, ensuring that all singular values of the Jacobian are concentrated near 1 can yield a dramatic additional speed-up in learning; this is a property known as dynamical isometry. However, it is unclear how to achieve dynamical isometry in nonlinear deep networks. We address this question by employing powerful tools from free probability theory to analytically compute the entire singular value distribution of a deep network's input-output Jacobian. We explore the dependence of the singular value distribution on the depth of the network, the weight initialization, and the choice of nonlinearity. Intriguingly, we find that ReLU networks are incapable of dynamical isometry. On the other hand, sigmoidal networks can achieve isometry, but only with orthogonal weight initialization. Moreover, we demonstrate empirically that deep nonlinear networks achieving dynamical isometry learn orders of magnitude faster than networks that do not. Indeed, we show that properly-initialized deep sigmoidal networks consistently outperform deep ReLU networks. Overall, our analysis reveals that controlling the entire distribution of Jacobian singular values is an important design consideration in deep learning. | The article is focused on the problem of understanding the learning dynamics of deep neural networks depending on both the activation functions used at the different layers and on the way the weights are initialized. It is mainly a theoretical paper with some experiments that confirm the theoretical study. The core of the contribution is based on random matrix theory. The first section describes the setup -- a deep neural network as a sequence of layers -- and also the tools that will be used to study its dynamics. The analysis mainly relies on the study of the singular value density of the Jacobian matrix, this density being computed by a four-step method proposed in the article. This methodology is then applied to three different architectures -- linear networks, ReLU networks and tanh networks -- and different initializations of the weights: the Ginibre ensemble and the Circular Real Ensemble. The theoretical results mainly show that tanh and linear networks may be trained efficiently -- particularly using orthogonal matrix initialization -- while ReLU networks are poorly conditioned for the two initialization methods. The experiments on MNIST and CIFAR-10 confirm these results.
First, I must say that the paper is a little bit outside my main research domain. In particular, I am not familiar with random matrix theory. But, considering that I am not aware of this type of article, I find the paper well written: the focus is clear, the methodology is well described, the references include many relevant prior works helping to understand the contribution, and the authors have also included an experimental section. Concerning the theoretical part of the paper, the proposed results seem correct, but would certainly need deeper checking by an expert in the domain. Concerning the experimental section, the experiments confirm the theoretical results. But I must say that I am a little bit skeptical concerning the topic of the paper, since all the demonstrations are focused on simple deep neural network architectures with linear+bias transformations between the layers. It is a little bit surprising to see results on CIFAR-10, which is typically a computer vision benchmark where convolutional models are used, which is not the case here. So I have some doubts concerning the relevance of the paper w.r.t. existing architectures that are now much more complex than sequences of layers. On the other hand, this paper is certainly a step toward the analysis of more complex architectures, and the authors could discuss how difficult it could/will be to use the same tools (random matrix theory) to target, for example, convolutional networks.
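Since the practical takeaway is the initialization scheme, a small sketch of a random orthogonal initializer may help the reader (the standard QR construction; choosing the gain to keep tanh units near their linear regime follows the paper's analysis, but the exact value would have to be tuned):

```python
import numpy as np

def orthogonal_init(fan_in, fan_out, gain=1.0, seed=0):
    """Random (semi-)orthogonal weight matrix via QR decomposition."""
    rng = np.random.default_rng(seed)
    a = rng.standard_normal((max(fan_in, fan_out), min(fan_in, fan_out)))
    q, r = np.linalg.qr(a)
    q = q * np.sign(np.diag(r))   # fix the sign ambiguity of the QR factors
    if fan_in < fan_out:
        q = q.T                   # ensure shape (fan_in, fan_out)
    return gain * q
```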
nips_2017_286 | Semi-Supervised Learning for Optical Flow with Generative Adversarial Networks
Convolutional neural networks (CNNs) have recently been applied to the optical flow estimation problem. As training the CNNs requires sufficiently large amounts of labeled data, existing approaches resort to synthetic, unrealistic datasets. On the other hand, unsupervised methods are capable of leveraging real-world videos for training where the ground truth flow fields are not available. These methods, however, rely on the fundamental assumptions of brightness constancy and spatial smoothness priors that do not hold near motion boundaries. In this paper, we propose to exploit unlabeled videos for semi-supervised learning of optical flow with a Generative Adversarial Network. Our key insight is that the adversarial loss can capture the structural patterns of flow warp errors without making explicit assumptions. Extensive experiments on benchmark datasets demonstrate that the proposed semi-supervised algorithm performs favorably against purely supervised and baseline semi-supervised learning schemes. | Summary:
The paper presents a semi-supervised approach to learning optical flow using a generative adversarial network (GAN) on flow warp errors. Rather than using a handcrafted loss (e.g., deviation of brightness constancy + deviation from smoothness) the paper explores the use of a GAN applied to flow warp errors.
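For readers outside the flow literature, the "flow warp error" fed to the discriminator is the brightness residual after warping the second frame back with the predicted flow; a rough sketch of that computation (assuming a (dx, dy) flow convention and bilinear interpolation; not the paper's exact implementation):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def flow_warp_error(img1, img2, flow):
    """Brightness error between img1 and img2 warped back by the flow.

    img1, img2: (H, W) grayscale frames; flow: (H, W, 2) holding (dx, dy).
    """
    h, w = img1.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    coords = np.stack([ys + flow[..., 1], xs + flow[..., 0]])
    warped = map_coordinates(img2, coords, order=1, mode='nearest')
    return img1 - warped
```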
Strengths:
+ novel semi-supervised approach to learning; some concerns on the novelty in the light of [21]
+ generally written well
Weaknesses:
- some key evaluations missing
Comments:
Supervised (e.g., [8]) and unsupervised (e.g., [39]) approaches to optical flow prediction have previously been investigated, but the type of semi-supervision proposed here appears novel. The main contribution is in the introduction of an adversarial loss for training rather than the particulars of the flow prediction architecture. As discussed in Sec. 2, [21] also proposes an adversarial scheme. A further comparative discussion is suggested. Is the main difference in the application domain? This needs to be clarified. Also the paper claims that "it is not making explicit assumptions on the brightness constancy". While the proposed approach does not use an Lp or robust loss, as considered in previous work, the concept of taking flow warp error images as input does in fact rely on a form of brightness constancy assumption. The deviation from prior work is not the assumption of brightness constancy but the actual loss.
In reference to [39], it is unclear what is meant by the approach "requires significantly more sophisticated data augmentation techniques" since the augmentation follows [8] and is similar to that used in the current paper.
The main weakness of the paper is in the evaluation. One of the main stated contributions (line 64) is that the proposed approach outperforms purely supervised approaches. Judging from Table 3, this may be a gross misstatement. For example, on Sintel-Clean, FlowNet2 achieves 3.96 aEPE vs. 6.28 for the proposed approach.
Why weren't results reported on the KITTI test sets using the test server? Do the reported KITTI results include occluded regions (in other words, all points), or are they limited to non-occluded regions?
The limitations of the presented training procedure (Sec. 4.4) should have been presented much earlier. It would have been interesting to train the network with data all from the same domain, e.g., driving scenes. For instance, rather than train on FlyingChairs and KITTI raw, it would have been interesting to train on KITTI raw (no ground truth) and Virtual KITTI (with ground truth) and evaluate on KITTI TEST.
Reported fine-tuned results should be presented for all approaches.
Minor point:
- line 22 "Due to the lack of the large-scale ground truth flow" --> "Due to the lack of large-scale ground truth flow"
- line 32: "With the limited quantity and unrealistic of ground truth"
Overall rating:
The training approach itself appears to be novel (a more detailed comparative discussion with respect to [21] is suggested) and will be of interest to those working on flow and depth prediction. Based on the issues cited with the evaluation I cannot recommend acceptance at this point.
Rebuttal:
The reviewer acknowledges that the authors have addressed some of my concerns/suggestions regarding their evaluation. The performance is still below that of the state-of-the-art. Finetuning may help but has not been evaluated. Is the performance deficit, as suggested by the authors, due to the capacity of their chosen network or is it due to their proposed GAN approach? Would the actual training of a higher capacity network with the proposed adversarial loss be problematic? These are topics to be explored. |
nips_2017_1835 | Differentially private Bayesian learning on distributed data
Many applications of machine learning, for example in health care, would benefit from methods that can guarantee privacy of data subjects. Differential privacy (DP) has become established as a standard for protecting learning results. The standard DP algorithms require a single trusted party to have access to the entire data, which is a clear weakness, or add prohibitive amounts of noise. We consider DP Bayesian learning in a distributed setting, where each party only holds a single sample or a few samples of the data. We propose a learning strategy based on a secure multi-party sum function for aggregating summaries from data holders and the Gaussian mechanism for DP. Our method builds on an asymptotically optimal and practically efficient DP Bayesian inference with rapidly diminishing extra cost. | Title: Differentially private Bayesian learning on distributed data
Comments:
- This paper develops a method for differentially private (DP) Bayesian learning in a distributed setting, where data is split up over multiple clients. This differs from the traditional DP Bayesian learning setting, in which a single party has access to the full dataset. The main issue here is that performing DP methods separately on each client would yield too much noise; the goal is then to find a way to add an appropriate amount of noise, without compromising privacy, in this setting (see the sketch after these comments). To solve this, the authors introduce a method that combines existing DP Bayesian learning methods with a secure multi-party communication method called the DCA algorithm. Theoretically, this paper shows that the method satisfies differential privacy. It also shows specifically how to combine the method with existing DP methods that run on each client machine. Experiments are shown on Bayesian linear regression, on both synthetic and real data.
- Overall, I feel that this method provides a nice contribution to work on Bayesian DP, and extends where one can use these methods to data-distributed settings, which seems like a practical improvement.
- I feel that this paper is written quite nicely. It is therefore easy to understand previous work, the new method, and how the new method fits in context with previous work.
- One small note: I might've missed it, but please make sure the acronym TA (for trusted aggregator, I believe) is defined in the paper. I believe "TA" might be used in a few places without definition.
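Following up on the noise point flagged above, here is a toy sketch of the key quantitative idea: with a secure sum, the aggregate needs Gaussian-mechanism noise calibrated only once, rather than once per client (names and the aggregation interface are illustrative, not the paper's exact construction):

```python
import numpy as np

def gaussian_sigma(sensitivity, eps, delta):
    """Standard Gaussian-mechanism noise scale for (eps, delta)-DP."""
    return sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / eps

def secure_dp_sum(client_stats, eps, delta, sensitivity):
    """Noise added once to the aggregate, as a secure-sum protocol would
    distribute among clients, instead of once per client."""
    total = np.sum(client_stats, axis=0)
    sigma = gaussian_sigma(sensitivity, eps, delta)
    return total + np.random.normal(0.0, sigma, size=total.shape)
```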
nips_2017_2151 | Successor Features for Transfer in Reinforcement Learning
Transfer in reinforcement learning refers to the notion that generalization should occur not only within a task but also across tasks. We propose a transfer framework for the scenario where the reward function changes between tasks but the environment's dynamics remain the same. Our approach rests on two key ideas: successor features, a value function representation that decouples the dynamics of the environment from the rewards, and generalized policy improvement, a generalization of dynamic programming's policy improvement operation that considers a set of policies rather than a single one. Put together, the two ideas lead to an approach that integrates seamlessly within the reinforcement learning framework and allows the free exchange of information across tasks. The proposed method also provides performance guarantees for the transferred policy even before any learning has taken place. We derive two theorems that set our approach in firm theoretical ground and present experiments that show that it successfully promotes transfer in practice, significantly outperforming alternative methods in a sequence of navigation tasks and in the control of a simulated robotic arm. | This paper presents a RL optimization scheme and a theoretical analysis of its transfer performance. While the components of this work aren't novel, it combines them in an interesting, well-presented way that sheds new light.
The definition of transfer given in Lines 89–91 is nonstandard. It seems to be missing the assumption that t is not in T. The role of T' is a bit strange, making this a requirement for "additional transfer" rather than just transfer. It should be better clarified that this is a stronger requirement than transfer, and explained what it's good for — the paper shows this stronger property holds, but never uses it.
Among other prior work, Theorem 1 is closely related to Point-Based Value Iteration, and holds for the same reasons. In fact, w can be viewed as an unnormalized belief over which feature is important for the task (except that w can be negative).
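To spell out this reading of w as task weights: generalized policy improvement over a library of successor features reduces, at each state, to a max over the stored policies; a two-line sketch (taking the reward decomposition r = phi . w and exact successor features as given):

```python
import numpy as np

def gpi_action(psi_s, w):
    """GPI at one state. psi_s: (n_policies, n_actions, d) successor
    features psi^{pi_i}(s, a); w: (d,) task vector, so Q_i = psi_s[i] @ w."""
    q = psi_s @ w                           # (n_policies, n_actions)
    return int(np.argmax(q.max(axis=0)))    # greedy w.r.t. the pointwise max
```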
The corollary in Lines 163–165 can be strengthened by relaxing the condition to hold for *some* s, s' that have positive occurrences.
The guarantee in (10) only comes from the closest w_j. Wouldn't it make sense to simplify and accelerate the algorithm by fixing this j at the beginning of the task? Or is there some benefit in practice beyond what the analysis shows? Empirical evidence either way would be helpful.
Very much missing is a discussion of the limitations of this approach. For example, how badly does it fail when the feature-space dimension becomes large?
Beyond the two 2D experiments presented, it's unclear how easy it is to find reward features such that similar tasks are close in feature-space. This seems to go to the heart of the challenge in RL, rather than to alleviate it.
The connection drawn to temporal abstraction and options is intriguing but lacking. While \psi is clearly related to control and w to goals, the important aspect of identifying option termination is missing here. |
nips_2017_3202 | Communication-Efficient Distributed Learning of Discrete Probability Distributions
We initiate a systematic investigation of distribution learning (density estimation) when the data is distributed across multiple servers. The servers must communicate with a referee and the goal is to estimate the underlying distribution with as few bits of communication as possible. We focus on non-parametric density estimation of discrete distributions with respect to the ℓ1 and ℓ2 norms. We provide the first non-trivial upper and lower bounds on the communication complexity of this basic estimation task in various settings of interest. Specifically, our results include the following:
1. When the unknown discrete distribution is unstructured and each server has only one sample, we show that any blackboard protocol (i.e., any protocol in which servers interact arbitrarily using public messages) that learns the distribution must essentially communicate the entire sample. 2. For the case of structured distributions, such as k-histograms and monotone distributions, we design distributed learning algorithms that achieve significantly better communication guarantees than the naive ones, and obtain tight upper and lower bounds in several regimes. Our distributed learning algorithms run in near-linear time and are robust to model misspecification.
Our results provide insights on the interplay between structure and communication efficiency for a range of fundamental distribution estimation tasks.
1 Introduction | Summary: The paper studies the classical problem of estimating the probability mass function (pmf) of a discrete random variable given iid samples, but distributed among different nodes. The key quantity of interest is how much communication must be expended by each node (in a broadcast, but perhaps interactive, setting) to a central observer, which then outputs the estimate of the underlying pmf. The main results of the paper are clearly stated and easy to follow. The results mostly point out that in the worst case (i.e., no assumptions on the underlying pmf) there is nothing better for each node to do than to communicate its sample to the central observer.
The paper addresses a central topic of a long line of recent works on distributed parameter estimation. The distinguishing feature here is to have as few parametric assumptions on the underlying distribution as possible.
1. In the case when no assumption at all is made on the underlying distribution (other than the alphabet size), the main result is to show that essentially nothing better can be achieved than when all nodes communicate their raw samples (noninteractively) to the central observer. The upper bound is hence trivial and doesnt take much to analyze. The key innovation is in the lower bound where the authors consider a specific near-uniform distribution (where neighboring letters have probabilities that are parametrically (\delta_i in the notation of the paper) close to each other). Then the authors show that any (interactive) protocol allows the central observer to learn these \delta_i parameters and which itself is tantamount to learning the sample itself. The key calculation is to bound the mutual information between the transcripts and the \delta_i parameters which is then fed into a standard hypothesis testing framework (Fano's inequality). Although this machinery is standard, I found the setup of the hypothesis test creative and interesting and novel. Overall, this part of the paper is of fundamental interest (no assumptions at all on the underlying hypothesis) and nicely carried out.
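For readers who want the shape of that argument: the generic Fano step (a standard statement, not the paper's exact lemma) is

$$\Pr\big[\hat{V} \neq V\big] \ge 1 - \frac{I(V;\Pi) + \log 2}{\log |\mathcal{V}|},$$

where V is drawn uniformly from the family of perturbed near-uniform distributions (indexed by the \delta_i parameters) and \Pi is the protocol transcript; bounding I(V;\Pi) in terms of the number of communicated bits is what forces any successful protocol to essentially reveal the sample.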
2. I am less sure about the two `structured' settings considered in this paper: k-histograms and monotone pmfs. I can imagine monotone pmfs coming up in some application domain, but am far less sure about why k-histograms make sense. The main challenging aspect of this setting is that only k intervals are allowed, but their position is arbitrary. I understand the formulation makes for pretty math (and fits in nicely with a body of existing literature), but I cannot see any canonical/practical setting where allowing this flexibility makes sense. I would appreciate it if the authors could elaborate on this point in the rebuttal stage.
3. In the context of the last sentence of the previous comment, I would also like to see some discussion by the authors on any practical implications of this work. For instance the authors mention that the problem is very well motivated and cite some works from the literature [44, 40, 30, 54, 47]. Of these, I would guess that [47] is the only recent practical work (the others are either more than a decade old or also correspond to an already stylized setting) -- on this topic, can the authors please fix the reference [40] so that it includes its title? I would like to see concrete settings where the structured settings are motivated properly and the authors actually try out their algorithms (along with competing baselines) in those settings. In the context of this paper, I would like to see a (brief) discussion of how the settings might be related to the distributed computing revolution underlying modern data centers. Most NIPS papers have a section where they discuss their algorithms in the context of some concrete practical setting and report empirical results.
4. The schemes proposed in the two structured settings are more sophisticated than the baseline one (where the nodes simply communicate their samples directly), but are directly motivated by their centralized counterparts. In particular, the k-histogram setting uses ideas from [3] and approximation in the \A_k norm, and the monotone setting builds on the classical work of Birge. Overall, the proofs are correct and represent a reasonable innovation over a long line of work on this topic.
Summary: I consider this submission far better suited to COLT or even SODA.
nips_2017_2404 | Doubly Stochastic Variational Inference for Deep Gaussian Processes
Gaussian processes (GPs) are a good choice for function approximation as they are flexible, robust to overfitting, and provide well-calibrated predictive uncertainty. Deep Gaussian processes (DGPs) are multi-layer generalizations of GPs, but inference in these models has proved challenging. Existing approaches to inference in DGP models assume approximate posteriors that force independence between the layers, and do not work well in practice. We present a doubly stochastic variational inference algorithm that does not force independence between layers. With our method of inference we demonstrate that a DGP model can be used effectively on data ranging in size from hundreds to a billion points. We provide strong empirical evidence that our inference scheme for DGPs works well in practice in both classification and regression. | This paper presents a doubly stochastic variational inference for deep Gaussian processes. The first source of stochasticity comes from posterior sampling, devised to solve the intractability issue of propagating uncertainty through the layers. The second source of stochasticity comes from minibatch learning, allowing the method to scale to big datasets, even with 1 billion instances.
The paper is in general well written and the authors make an effort to explain the intuitions behind the proposed method. However, the paper is not placed well in the related literature. The "related work" section is missing reference to [11] (presumably it was meant to be included in line 65) and the paper is completely missing reference to the very related (Hensman and Lawrence, 2014; Nested Variational Compression in DGPs). The idea of using SVI and mini-batch learning for DGPs has also been shown in (Frigola et al. 2014; Variational GP State-Space Models) and (Damianou, 2015; PhD Thesis).
To further expand on the above, it would be easier for the reader if the authors explained modeling and inference choices with respect to other DGP approaches. For example, using the exact conditional p(f|u) to obtain the cancellation is found in a number of approaches, but in the paper this is somehow presented as a new element. On the other hand, absorbing the noise \epsilon in the kernel is (to my knowledge) unique to this approach, however this design choice is underplayed (same holds for the mean function), stating that "it is for notational convenience", although in practice it has implications that go beyond that (e.g. no need to have q(h), h being the intermediate noisy layer). This is counter-intuitive, because these are actually neat ideas and I'd expect them to be surfaced more. Therefore, I suggest to the authors to clearly relate these details to other DGP approaches, to facilitate comprehension and theoretical comparison.
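For readers less familiar with the cancellation in question, the construction common to this line of work (sketched here in generic single-layer notation, not the paper's exact equations) chooses the variational posterior q(f,u) = p(f|u) q(u), so the intractable conditional cancels inside the bound:

$$\mathcal{L} = \sum_{n=1}^{N} \mathbb{E}_{q(f_n)}\big[\log p(y_n \mid f_n)\big] - \mathrm{KL}\big(q(u) \,\|\, p(u)\big),$$

which is precisely what makes the doubly stochastic (sampling plus minibatch) estimator applicable layer by layer in the DGP.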
Regarding the technical aspect, I find the method sound and significant. The method is not very novel, especially given (Hensman and Lawrence, 2014), but it does introduce some nice new ideas and shows extensive and impressive results. It is perhaps the first paper promising "out-of-the-box" training of DGPs and this is a significant step forward - although given this strong claim it'd be great to also have the code at submission time, especially since it's < 200 lines.
It would be worth discussing or exploring some further properties of the method. For example, what does it imply to absorb the layer noise in the kernel? How do the model construction and the sampling-based inference affect uncertainty calibration?
I find the experiments extensive, well executed and convincing. However, I didn't understand what line 223 means ("we used the input dimension..") and why can't we see a deeper AEPDGP (does that choice make it fair for one model but unfair for the other?). Since the method is relatively fast (Table 3) and performance improves with the number of layers, this begs the question of "how deep can you go?". It'd be great to see a deeper architecture in the final version (in [8] the authors make a similar observation about the number of layers).
Other comments:
- including an algorithm showing the steps of the sampling-based inference would greatly improve clarity
- line 176: Shouldn't f_i^l be f_i^{l-1} before "(recall..."?
- line 211: It's not clear to me what "whiten inputs and outputs" means and why this is done
- line 228: What does "our DGP falls back to the single-layer GP" mean? Learning the identity mapping between layers?
- line 264: Reference missing.
- line 311: Does "even when the quality of the approximation..." refer to having more inducing points per layer, or is it a hint about the within-the-layer approximation?
- which kernel is used for classification? |
nips_2017_2544 | Hash Embeddings for Efficient Word Representations
We present hash embeddings, an efficient method for representing words in a continuous vector form. A hash embedding may be seen as an interpolation between a standard word embedding and a word embedding created using a random hash function (the hashing trick). In hash embeddings each token is represented by k d-dimensional embedding vectors and one k-dimensional weight vector. The final d-dimensional representation of the token is the product of the two. Rather than fitting the embedding vectors for each token, these are selected by the hashing trick from a shared pool of B embedding vectors. Our experiments show that hash embeddings can easily deal with huge vocabularies consisting of millions of tokens. When using a hash embedding there is no need to create a dictionary before training nor to perform any kind of vocabulary pruning after training. We show that models trained using hash embeddings exhibit at least the same level of performance as models trained using regular embeddings across a wide range of tasks. Furthermore, the number of parameters needed by such an embedding is only a fraction of what is required by a regular embedding. Since standard embeddings and embeddings constructed using the hashing trick are actually just special cases of a hash embedding, hash embeddings can be considered an extension and improvement over the existing regular embedding types. | This work describes an extension of the standard word hashing trick for embedding representation by using a weighted combination of several vectors indexed by different hash functions to represent each word. This can be done using a predefined dictionary or during online training. The approach has the benefit of being easy to understand and implement and greatly reduces the number of embedding parameters.
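To make the lookup concrete, here is a minimal NumPy sketch of a hash-embedding forward pass; the variable names and the use of a separate hash into the importance-weight table are illustrative assumptions, not necessarily the authors' exact implementation:

```python
import numpy as np

B, K, D, W = 10_000, 2, 64, 100_000   # pool size, num hashes, emb dim, weight rows
pool = np.random.randn(B, D)          # shared pool of B component vectors (trained)
importance = np.random.randn(W, K)    # one k-dim importance vector per hashed token

def stable_hash(x) -> int:
    # stand-in for a stable hash (e.g. murmurhash); Python's hash() is salted
    return abs(hash(x))

def hash_embed(token: str) -> np.ndarray:
    rows = [stable_hash((token, i)) % B for i in range(K)]
    components = pool[rows]                    # (K, D) selected component vectors
    w = importance[stable_hash(token) % W]     # (K,) importance weights
    return w @ components                      # final (D,) representation
```

Since B and W can be far smaller than the vocabulary, training pool and importance instead of a full embedding table is where the parameter reduction comes from.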
The results are generally good and the approach is quite elegant. Additionally, the hash embeddings seem to act as an effective regularizer. However, Table 2 would benefit from describing the final selected vocabulary sizes as well as the parameter reduction provided by the hash embedding. Table 3 is also missing a joint state-of-the-art model for the DBPedia dataset [1].
Line 255 makes the claim that the ensemble would train in the same amount of time as the single large model. However, this would only be true if each model in the ensemble had an architecture with fewer weights (that were not embedding weights). From the description, it seems that the number of non-embedding weights in each network in the ensemble is the same as that in the large model so that training time would be significantly larger for the ensemble.
Table 3 highlights the top 3 best models; however, a clearer comparison might be to split the table into embedding-only approaches vs. RNN/CNN approaches. It would also be interesting to see these embeddings used in the more context-sensitive RNN/CNN models.
Minor comments:
L9: Typo: million(s)
L39: to(o) much
L148: This sentence is awkwardly rewritten.
L207: What is patience?
L235: Typo: table table
Table 4: Were these obtained through summing the importance weights? What were the order of the highest/lowest weights?
[1] Miyato, Takeru, Andrew M. Dai, and Ian Goodfellow. "Virtual adversarial training for semi-supervised text classification." ICLR 2017. |
nips_2017_1245 | Data-Efficient Reinforcement Learning in Continuous State-Action Gaussian-POMDPs
We present a data-efficient reinforcement learning method for continuous state-action systems under significant observation noise. Data-efficient solutions under small noise exist, such as PILCO, which learns the cartpole swing-up task in 30s. PILCO evaluates policies by planning state trajectories using a dynamics model. However, PILCO applies policies to the observed state, therefore planning in observation space. We extend PILCO with filtering to instead plan in belief space, consistent with partially observable Markov decision process (POMDP) planning. This enables data-efficient learning under significant observation noise, outperforming more naive methods such as post-hoc application of a filter to policies optimised by the original (unfiltered) PILCO algorithm. We test our method on the cartpole swing-up task, which involves nonlinear dynamics and requires nonlinear control. | SUMMARY
In this paper, the authors propose to learn and plan with a model that represents uncertainty in a specific case of partial observability - that of noisy observations. Their method is the analogue of a known model-based method - PILCO - for this noisy observation case. It handles the noise by assuming a joint Gaussian distribution over the latent states and observations at each time step when making predictions (at execution time, observations are known and a filtered latent state is used). On the kind of dynamical system that these methods are commonly evaluated on - the cart-pole swing-up in this case - the proposed method seems to be better than relevant baselines at dealing with noise.
TECHNICAL QUALITY
The proposed algorithm is supported by a derivation of the update equation and experimental validation on the cart-pole swing-up task. The derivation seems technically solid. I just wondered why in (5) the covariance between M_t|t-1 and Z_t is 0, since in Fig. 2 it seems that Z_t is a noisy version of B_t|t-1 of which M_t|t-1 is the mean.
Although the experiments only consider a single system, the results seem quite convincing. The empirical loss per episode is significantly lower, and predictions of the cost per time step are much better than for the compared methods. I just wondered what is meant by the "empirical loss distribution", and whether the 100 independent runs that the error bars are based on are 100 evaluations of the same learned controller; or 100 independently trained controllers. In Figs. 7 and 8 I would be interested to see the n=0 case (or, if that causes numerical problems, n=0.01 or similar).
In terms of related work, the authors mainly restrict themselves to discussing Bayesian model learning and Bayesian RL techniques. It might be interesting to also discuss other styles of policy search methods for the partially observable setting.
It might be insightful to describe what about the current method limits us to the noisy observation case. If we observed positions but not velocities, couldn't we filter that, too?
NOVELTY
I haven't seen earlier extensions of PILCO-type methods to the partially observable case. So I would consider the method quite novel.
SIGNIFICANCE AND RELEVANCE
Noisy dynamical systems occur in many (potential) application domains of reinforcement learning (robotics, smart grid, autonomous driving), so this is a relevant research topic for the RL community. On the investigated system, the improvement relative to the state of the art seems quite significant; however, the method is only evaluated on a single system.
CLARITY
The paper is mostly well-written. However, in some parts the clarity could be improved. For example, the paper does not mention explicitly that the reward function is assumed to be known. The policy improvement step (8 in the algorithm) is not explained at all. For readers already familiar with PILCO this is quite obvious, but it would be good to give a short introduction for the others.
The technical section in 4.1 - 4.2 is very dense. I understand space is rather limited, but it would be good to explain this part in more detail in the appendix. For example, the paper states V=E[V'] is assumed fixed, but doesn't state how this fixed value is computed or set.
Figures 1 and 2 are presented as analogues, yet the latent state X is not shown in the right image. It would be easier to understand the relationship if the underlying latent process was shown in both images.
There are quite a few grammar issues. I'll point some out under minor comments.
MINOR COMMENTS
Grammar - please rephrase sentences in lines 7, 52, 86-87, 134,209,250, 283, 299-300
Contractions and abbreviations - please write out (doesn't, etc.)
Some sentences start with "note" but don't seem grammatical. I suggest writing "Note that, ..." or similar.
Line 55: Greater -> Larger
Line 65: Seems like the author name shouldn't appear here
Line 99: Less -> fewer
line 112: "Overfitting leads to bias". Overcapacity generally causes a high variance, not necessarily bias. Or is the bias here a result of optimizing the policy on the erroneous model? (similar in line 256)
line 118: I guess k(x_i, x_j) should have a superscript a.
line 118: are the phi_a additional hyperparameters? How are they set?
line 147, 148 "detailed section" -> detailed in Section"
line 162: it's not clear to me what you mean by "hierarchically random"
line 174: it seems you switched x and z in p(x)=N(z, Sigma).
line 177: Gaussian conditioning is not a product.
line 181: "the distribution f(b)". This seems a bit odd. Maybe assign f(b) to a random variable?
line 189: not sure what you mean by "in principle".
- you might want to introduce notation \mathbb{C} and \mathbb{V}
line 211: "a mixture of basis functions" -> do you mean it is a linear combination of basis functions?
line 245: "physically" -> "to physically"
line 273 "collect" -> to collect
line 274: trails -> trials
line 283: this seems a strong claim; you might want to rephrase (fPILCO doesn't predict how well other methods will do, as is implied here)
line 290 - you might want to choose another symbol for noise factor than n (often number of data points).
297 : data - > data points ?
315: Give greater performance than otherwise -> improved performance with respect to the baselines ?
References: I would consistently use either full names or initials. Some references like 11 are quite messy (mentioning the year 3 times in this case). The words Gaussian and POMDPs in paper titles should be capitalized correctly. |
nips_2017_1023 | Online to Offline Conversions, Universality and Adaptive Minibatch Sizes
We present an approach towards convex optimization that relies on a novel scheme which converts adaptive online algorithms into offline methods. In the offline optimization setting, our derived methods are shown to obtain favourable adaptive guarantees which depend on the harmonic sum of the queried gradients. We further show that our methods implicitly adapt to the objective's structure: in the smooth case fast convergence rates are ensured without any prior knowledge of the smoothness parameter, while still maintaining guarantees in the non-smooth setting. Our approach has a natural extension to the stochastic setting, resulting in a lazy version of SGD (stochastic GD), where minibatches are chosen adaptively depending on the magnitude of the gradients, thus providing a principled approach towards choosing minibatch sizes. | This paper considers two striking problems in optimizing empirical loss in learning problems: adaptively choosing the step size in offline optimization, and the mini-batch size in stochastic optimization.
With a black-box reduction into the AdaGrad algorithm, utilizing gradient normalization and importance weighting in averaging the intermediate solutions, this paper presents a variant of AdaGrad that attains the same convergence rates as GD in convex and smooth optimization without knowledge of the smoothness parameter in setting the learning rate, and without any line search (except in the strongly convex setting, where the algorithm still needs to know the strong convexity parameter H). In the stochastic setting, the paper presents a LazySGD algorithm that adaptively chooses the mini-batch size according to the norm of sampled gradients. The paper is reasonably well written, and the structure and language are good. The paper is technically sound and the proofs seem to be correct as far as I checked. The authors also included some preliminary experiments on a toy example to demonstrate some of their theoretical insights. Overall, the paper is well written and has some nice contributions.
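One plausible reading of the adaptive minibatch rule, sketched below for intuition; the stopping criterion and constants are my own illustrative guesses, not the paper's exact procedure:

```python
import numpy as np

def lazy_sgd_step(x, sample_grad, lr, tol, max_batch=1024):
    """Grow the minibatch until the averaged gradient is reliably nonzero,
    then take a single SGD step with that average."""
    grads = [sample_grad(x)]
    while len(grads) < max_batch:
        mean = np.mean(grads, axis=0)
        # large gradients pass with tiny batches; small ones demand more samples
        if np.linalg.norm(mean) > tol / np.sqrt(len(grads)):
            break
        grads.append(sample_grad(x))
    return x - lr * np.mean(grads, axis=0)
```

The appeal of such a rule is that expensive large batches are spent only where the signal-to-noise ratio of the gradient is low.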
nips_2017_3343 | Few-Shot Adversarial Domain Adaptation
This work provides a framework for addressing the problem of supervised domain adaptation with deep models. The main idea is to exploit adversarial learning to learn an embedded subspace that simultaneously maximizes the confusion between two domains while semantically aligning their embedding. The supervised setting becomes attractive especially when there are only a few target data samples that need to be labeled. In this few-shot learning scenario, alignment and separation of semantic probability distributions is difficult because of the lack of data. We found that by carefully designing a training scheme whereby the typical binary adversarial discriminator is augmented to distinguish between four different classes, it is possible to effectively address the supervised adaptation problem. In addition, the approach has a high "speed" of adaptation, i.e. it requires an extremely low number of labeled target training samples, even one per category can be effective. We then extensively compare this approach to the state of the art in domain adaptation in two experiments: one using datasets for handwritten digit recognition, and one using datasets for visual object recognition. | Overall
The paper tackles the problem of supervised domain adaptation when only a few (up to 4) labeled samples exist in the target domain. An approach is presented based on the idea of adversarial training. The main contribution of the paper is in proposing an augmentation approach to alleviate the scarcity of target-domain data, namely taking pairs of source-target samples as input to combinatorially increase the number of available training samples. The approach proves to be very effective on two standard domain adaptation benchmarks, achieving new state-of-the-art results using only 4 labelled samples.
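As a concrete picture of the pairing idea, here is a sketch of the four pair groups a 4-way discriminator would be trained to distinguish; the exact group definitions are my reconstruction and may differ in detail from the paper's:

```python
from itertools import product

def make_pair_groups(source, target):
    """source/target: lists of (x, label) tuples."""
    g1 = [(a, b) for (a, ya), (b, yb) in product(source, source) if ya == yb]
    g2 = [(a, b) for (a, ya), (b, yb) in product(source, target) if ya == yb]
    g3 = [(a, b) for (a, ya), (b, yb) in product(source, source) if ya != yb]
    g4 = [(a, b) for (a, ya), (b, yb) in product(source, target) if ya != yb]
    return g1, g2, g3, g4  # the number of pairs grows combinatorially
```

This is what lets a handful of labeled target samples generate a large training signal.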
The problem is very relevant in general, and the achieved results are substantial. This by itself warrants publication of the paper at NIPS. However, several important experiments are missing for a better and more in-depth understanding of the proposed method. Had it included those experiments, I would have rated it in the top 15% of papers.
Related Works
+ A structured and thorough coverage of the related works regarding domain adaptation.
- Related work is missing on GANs whose discriminator also predicts class labels, as in this work
- Related work is missing on using pairs of samples to augment a small number of available data points, especially for learning common embedding spaces
Approach
+ The augmentation technique, together with the new loss, is an incrementally novel but plausible and effective contribution in the context of deep adversarial domain adaptation.
- Many details are missing: batch size, learning rate, number of overall iterations, number of iterations to train the DCD and g_t/h, number of trials in adversarial training of the embedding space before convergence, etc.
- The introduction refers to the case where zero samples from a class are available; however, this case is studied neither in the approach section nor in the experiments.
Experiments
+ The results using only a few labelled samples are impressive, especially when compared with prior and concurrent works using similar approaches.
- Why not go above 4? It would be informative to see the diminishing returns
- A systematic study of the components of the architecture is missing, especially comparing the cases where g_t and g_s are shared and where they are not, but also regarding many other architectural choices.
- In sec 3.1 it is mentioned that a classification loss is necessary for the adaptation model to work better. This seems conceptually redundant, except that with the added loss more weight can be put on classification as opposed to domain confusion. In that respect, the experimental setup does not demonstrate the importance of this factor in the loss function.
- It seems the most basic configuration of adversarial training is used in this work; plots and figures showing the stability of convergence using the introduced losses and architectures would be important.
- What about an experiment where no target samples are available for some classes?
nips_2017_3470 | Predicting Scene Parsing and Motion Dynamics in the Future
The ability to predict the future is important for intelligent systems, e.g. autonomous vehicles and robots, to plan early and make decisions accordingly. Future scene parsing and optical flow estimation are two key tasks that help agents better understand their environments as the former provides dense semantic information, i.e. what objects will be present and where they will appear, while the latter provides dense motion information, i.e. how the objects will move. In this paper, we propose a novel model to simultaneously predict scene parsing and optical flow in unobserved future video frames. To the best of our knowledge, this is the first attempt at jointly predicting scene parsing and motion dynamics. In particular, scene parsing enables structured motion prediction by decomposing optical flow into different groups while optical flow estimation brings reliable pixel-wise correspondence to scene parsing. By exploiting this mutually beneficial relationship, our model shows significantly better parsing and motion prediction results when compared to well-established baselines and individual prediction models on the large-scale Cityscapes dataset. In addition, we also demonstrate that our model can be used to predict the steering angle of the vehicles, which further verifies the ability of our model to learn latent representations of scene dynamics. | The paper proposes a deep-learning-based approach to joint prediction of future optical flow and semantic segmentation in videos. The authors evaluate the approach in a driving scenario and show that the two components - flow prediction and semantic segmentation prediction - benefit from each other. Moreover, they show a proof-of-concept experiment on using the predictions for steering angle prediction.
The paper is related to the works of Jin et al. and Neverova et al. However, as far as I understand, both of these had not been officially published at the time of submission (and the work of Neverova et al. only appeared on arXiv 2 months before the deadline), so they should not be considered prior art.
Detailed comment:
Pros:
1) The idea seems sound: predicting segmentation and optical flow are both important tasks, and they should be mutually beneficial.
2) The architecture is clean and seems reasonable
3) Experimental evaluation on Cityscapes is reasonable, and some relevant baselines are evaluated: copying the previous frame, warping with the estimated flow, predicting segmentation without flow, predicting flow without segmentation, and the proposed model without the "transformation layer". The improvements brought by the modifications are not always huge, but are at least noticeable.
Cons:
1) Steering angle prediction experiments are not convincing at all. The baseline is very unclear, and it is unclear what the performance of "~4" means. The obvious baseline is missing: training a network with the same architecture as used by the authors, but trained from scratch. The idea of the section is good, but currently its execution is not satisfactory at all.
2) I am confused by the loss function design. First, the weighting of the loss components is not mentioned at all - is there indeed no weighting? Second, the semantic segmentation loss is different for human-annotated and non-human-annotated frames - why is that? I understand why the human-annotated loss should have a higher weight, but why use a different function - L1+GDL instead of cross-entropy? Have you experimented with other variants?
3) Figures 3 and 4 are not particularly enlightening. Honestly, they do not tell much. Of course it is always possible to find a sample on which the proposed method works better. The authors could find a better way to represent the results - perhaps with a video, or with better-designed figures.
4) The writing looks rushed and unpolished. For instance, the caption of Table 2 is wrong.
Minor:
5) I’m somewhat surprised the authors use EpicFlow - it’s not quite state of the art nowadays
6) Missing relevant citation: M. Bai, W. Luo, K. Kundu and R. Urtasun: Exploiting Semantic Information and Deep Matching for Optical Flow. ECCV 2016.
7) Many officially published papers are cited as arxiv, to mention a few: Cityscapes, FlowNet, ResNet, Deep multi-scale video prediction, Context encoders
Overall, the paper is interesting and novel, but seems somewhat rushed, especially in some sections. Still, I believe it could be accepted if the authors fix the most glaring problems mentioned above, in particular the part on steering angle prediction.
nips_2017_1710 | Variance-based Regularization with Convex Objectives
We develop an approach to risk minimization and stochastic optimization that provides a convex surrogate for variance, allowing near-optimal and computationally efficient trading between approximation and estimation error. Our approach builds off of techniques for distributionally robust optimization and Owen's empirical likelihood, and we provide a number of finite-sample and asymptotic results characterizing the theoretical performance of the estimator. In particular, we show that our procedure comes with certificates of optimality, achieving (in some scenarios) faster rates of convergence than empirical risk minimization by virtue of automatically balancing bias and variance. We give corroborating empirical evidence showing that in practice, the estimator indeed trades between variance and absolute performance on a training sample, improving out-of-sample (test) performance over standard empirical risk minimization for a number of classification problems. | The article "Variance-based Regularization with Convex Objectives" considers the problem of risk minimization in a wide class of parametric models. The authors propose a convex surrogate for variance which allows for near-optimal and computationally efficient estimation.
The idea of the paper is to substitute the variance term in the upper bound on the risk of estimation with a convex surrogate based on a certain robust penalty. The authors prove that this surrogate provides a good approximation to the variance term and prove certain upper bounds on the overall performance of the method. Moreover, a particular example is provided where the proposed method beats empirical risk minimization.
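For orientation, the key expansion behind such surrogates (a standard result in this line of distributionally robust optimization, quoted from memory, so constants may differ from the paper's) is

$$\sup_{P:\, D_{\phi}(P \,\|\, \hat{P}_n) \le \rho/n} \mathbb{E}_{P}\big[\ell(\theta; Z)\big] \approx \mathbb{E}_{\hat{P}_n}\big[\ell(\theta; Z)\big] + \sqrt{\frac{2\rho\, \mathrm{Var}_{\hat{P}_n}\big(\ell(\theta; Z)\big)}{n}},$$

so the robustly reweighted empirical risk on the left is convex in \theta (for convex losses) while acting as a stand-in for the non-convex mean-plus-standard-deviation objective on the right.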
The experimental part of the paper considers the comparison of empirical risk minimization with the proposed robust method on 2 classification problems. The results show the expected behaviour, i.e. the proposed method is better than ERM on rare classes with no significant loss on more common classes.
To sum up, I think that this is a very strong work on the edge between theoretical and practical machine learning. My only concern is that the paper includes involved theoretical analysis and is much more suitable for a full journal publication than for a short conference paper (the arXiv version is 47 pages long!).
nips_2017_2010 | Subset Selection under Noise
The problem of selecting the best k-element subset from a universe is involved in many applications. While previous studies assumed a noise-free environment or a noisy monotone submodular objective function, this paper considers a more realistic and general situation where the evaluation of a subset is a noisy monotone function (not necessarily submodular), with both multiplicative and additive noises. To understand the impact of the noise, we first show the approximation ratio of the greedy algorithm and POSS, two powerful algorithms for noise-free subset selection, in noisy environments. We then propose to incorporate a noise-aware strategy into POSS, resulting in the new PONSS algorithm. We prove that PONSS can achieve a better approximation ratio under some assumptions, such as an i.i.d. noise distribution. The empirical results on influence maximization and sparse regression problems show the superior performance of PONSS. | This paper considers the problem of maximizing a monotone set function subject to a cardinality constraint. The authors consider a novel combination of functions with both bounded submodularity ratio and additive noise. These settings have been considered separately before, but a joint analysis leads to a novel algorithm, PONSS. This has improved theoretical guarantees and experimental performance when compared to previous noise-agnostic greedy algorithms. The paper flows well and is generally a pleasure to read.
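For reference, the noise-free greedy baseline that both the analysis and the experiments start from is simply the following (a generic sketch with oracle access to f, not PONSS itself):

```python
def greedy(f, ground_set, k):
    """Greedily build a k-element subset maximizing a monotone set function f.
    ground_set is a Python set; f maps subsets to reals."""
    S = set()
    for _ in range(k):
        gains = {e: f(S | {e}) - f(S) for e in ground_set - S}
        S.add(max(gains, key=gains.get))  # a noisy f makes this argmax unreliable
    return S
```

The whole point of the noise-aware analysis is that the argmax in each round can be corrupted by the noise, which is what degrades the classical approximation ratio.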
Clarity
Small typo: "better on subclass problems" on line 26
The paper could benefit from a more thorough literature survey, including more recent results on the tightness of the submodularity ratio and applications to sparse regression [1] [2].
One comment on the beginning of Section 2: in this setting minimizing f subject to a cardinality constraint is not the same as maximizing -f subject to a cardinality constraint. Generally if -f is (weakly) submodular then f is (weakly) supermodular, and -f will not have the same submodularity ratio as f.
I would suggest writing "i.i.d. noise distribution" where the assumptions for PONSS are first mentioned on page 2 (currently "fixed noise distribution"), instead of waiting until Section 5.
Impact
Influence maximization is an interesting application of noisy submodular optimization, since the objective function is often hard to compute exactly.
The authors should address that the PONSS guarantees hold with probability < 1/2 while the other guarantees are deterministic, and explain whether this affects the claim that the PONSS approximation ratio is "better".
O(Bnk^2) iterations can be prohibitive for many applications, and approximate/accelerated versions of the standard greedy algorithm can run in sublinear time.
Experiments
Most of my concerns are with the experiments section. In the sparse regression experiment, the authors report final performance on the entire dataset. They should explain why they did not optimize on a training set and then report final performance on a test set. Typically the training set is used to check how algorithms perform on the optimization, and the test set is used to check whether the solution generalizes without overfitting. Additionally, it appears that all of the R^2 values continue to increase at the largest value of k. The authors should continue running for larger k or state that they were limited by the large running times of the algorithms.
It would be more interesting to compare PONSS with other greedy algorithms (double greedy, random greedy, forward backward greedy, etc.), which are known to perform better than standard greedy on general matroid constraints
Questions
-What is the meaning of multiplicative domination in the PONSS experiments if the noise estimate parameter \theta is set to 1? It would be interesting to quantify the drop in performance when 1 > \theta' > \epsilon is used instead of \epsilon.
-For what range of memory B do the claims on line 170 hold?
-Could PONSS be parallelized or otherwise accelerated to reduce running time?
[1] A. A. Bian, J. M. Buhmann, A. Krause, and S. Tschiatschek. Guarantees for greedy maximization of non-submodular functions with applications, ICML 2017.
[2] E. R. Elenberg, R. Khanna, A. G. Dimakis, and S. Negahban. Restricted strong convexity implies weak submodularity, https://arxiv.org/abs/1612.00804 |
nips_2017_1782 | Partial Hard Thresholding: Towards A Principled Analysis of Support Recovery
In machine learning and compressed sensing, it is of central importance to understand when a tractable algorithm recovers the support of a sparse signal from its compressed measurements. In this paper, we present a principled analysis on the support recovery performance for a family of hard thresholding algorithms. To this end, we appeal to the partial hard thresholding (PHT) operator proposed recently by Jain et al. [IEEE Trans. Information Theory, 2017]. We show that under proper conditions, PHT recovers an arbitrary s-sparse signal within O(sκ log κ) iterations where κ is an appropriate condition number. Specifying the PHT operator, we obtain the best known results for hard thresholding pursuit and orthogonal matching pursuit with replacement. Experiments on the simulated data complement our theoretical findings and also illustrate the effectiveness of PHT. | Review prior to rebuttal:
The paper provides analytical results on partial hard thresholding for sparse signal recovery that are based on RIP and on an alternative condition (RSC and RSS) and that is applicable to arbitrary signals.
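For context, the plain hard-thresholding operator underlying this family of algorithms keeps the s largest-magnitude entries; the partial variant additionally retains part of the previous support (the freedom parameter), which the sketch below omits:

```python
import numpy as np

def hard_threshold(x: np.ndarray, s: int) -> np.ndarray:
    """Keep the s entries of x with largest magnitude and zero out the rest."""
    out = np.zeros_like(x)
    keep = np.argpartition(np.abs(x), -s)[-s:]
    out[keep] = x[keep]
    return out
```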
The description of previous results is confusing at times. First, [4] has more generic results than indicated in this paper in line 51. Remark 7 provides a result for arbitrary signals, while Theorems 5 and 6 consider all sufficiently sparse signals. Second, [13] provides a guarantee for sparse signal recovery (Theorem 4), and so it is not clear what the authors mean by “parameter estimation only” on line 59. If exact recovery is achieved, it is obvious that the support is estimated correctly too. Third, it is also not clear why Theorem 2 is described as an RIP-based guarantee - how does RIP connect to the condition number, in particular since RIP is a property that can go beyond specific matrix constructions?
The simulation restricts itself to s-sparse signals, and so it is not clear that the authors are testing the aspects of the results that go beyond those available in existing work. It is also not clear how the results test the dependence on the condition number of the matrix.
Minor comments follow.
Title: “A Towards” is not grammatically correct.
Line 128: the names of the properties are grammatically incorrect: convex -> convexity, smooth -> smoothness? or add “property” at the end of each?
Line 136: calling M/m “the condition number of the problem” can be confusing, since this terminology is already used in linear algebra. Is there a relationship between these two quantities?
Line 140: there is no termination condition given in the algorithm. Does this mean when the gradient in the first step is equal to zero? When the support St does not change in consecutive iterations?
Line 185: the paper refers to a comparison of the analysis of [4] being “confined to a special signal” versus the manuscript’s “generalization… [to] a family of algorithms”. It is not clear how these two things compare to one another.
Line 301: degrade -> degradation
Line 304: reduces -> reduce
Response to rebuttal: The authors have addressed some of the points above (in particular relevant to the description of prior work), but the impact of the contribution appears to be "borderline" for acceptance. Given that the paper is acceptable if the authors implement the changes described in the rebuttal, I have set my score to "marginally above threshold". |
nips_2017_2082 | Multiresolution Kernel Approximation for Gaussian Process Regression
Gaussian process regression generally does not scale beyond a few thousand data points without applying some sort of kernel approximation method. Most approximations focus on the high eigenvalue part of the spectrum of the kernel matrix, K, which leads to bad performance when the length scale of the kernel is small. In this paper we introduce Multiresolution Kernel Approximation (MKA), the first true broad bandwidth kernel approximation algorithm. Important points about MKA are that it is memory efficient, and it is a direct method, which means that it also makes it easy to approximate K^{-1} and det(K). | The authors consider the problem of large-scale GP regression; they propose a multiresolution approximation method for the Gram matrix K. In the literature, most approximation approaches assume either (1) a low-rank representation for K, which may not be supported by the data, or (2) a block-diagonal form for K, the structure of which has to be identified by clustering methods, which is not trivial for high-dimensional data. The current paper proposes MKA, a novel approximation approach that captures local and global properties of K. The Gram matrix K is approximated as a Kronecker sum of low-rank and diagonal matrices, a fact that significantly reduces the computational complexity of the linear algebra calculations required in the context of GP regression.
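To see why most methods chase low rank in the first place, recall the standard identity (not specific to MKA): if K is approximated by UU^T with U an n-by-r matrix, then by the Woodbury formula

$$\big(U U^{\top} + \sigma^{2} I\big)^{-1} = \sigma^{-2}\Big(I - U\big(\sigma^{2} I_{r} + U^{\top} U\big)^{-1} U^{\top}\Big),$$

so the GP solve costs O(nr^2) instead of O(n^3). MKA's argument is that a single global low-rank factor captures only the high-eigenvalue part of the spectrum, hence the additional local/diagonal structure.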
The paper initiates a very interesting discussion on the nature of local and global kernel approximations, but I feel that certain aspects of the methodology proposed are not sufficiently clear. Below, I list some considerations that I had while reading the paper.
What is the effect of the clustering mentioned in step 1 of the methodology? Is the method less sensitive to the result of clustering than the local-based methods in the literature?
Does the approximate matrix \tilde{K} converge to true matrix K as the d_{core} parameter is increased? By looking at the experiments of Figure 2, it appears that MKA is rather insensitive to the d_{core} value.
The effect of approximating K in many stages as described in Section 3 is not obvious or trivial. The authors attribute the flexibility of MKA to the reclustering of K_l before every stage. However, I would suspect that a slight variation of the output at any stage would have dramatic consequences in the stages that follow.
It is not clear which compression method was used in the experiments. I would think that the cost of SPCA is prohibitive, given the objective of the current paper. It could still be worth mentioning SPCA if there was some experimental comparison with the use of MMF in the context of MKA.
I think that the experimental section would be stronger if there were also a demonstration of how well MKA approximates the original GP. Although we get an idea of the predictive mean in Figure 1, there is no information about the predictive variance.
Minor comments:
The line for the Full GP method in Figure 2 is barely visible.
nips_2017_865 | Online Prediction with Selfish Experts
We consider the problem of binary prediction with expert advice in settings where experts have agency and seek to maximize their credibility. This paper makes three main contributions. First, it defines a model to reason formally about settings with selfish experts, and demonstrates that "incentive compatible" (IC) algorithms are closely related to the design of proper scoring rules. Second, we design IC algorithms with good performance guarantees for the absolute loss function. Third, we give a formal separation between the power of online prediction with selfish versus honest experts by proving lower bounds for both IC and non-IC algorithms. In particular, with selfish experts and the absolute loss function, there is no (randomized) algorithm for online prediction, IC or otherwise, with asymptotically vanishing regret. | In this paper, the standard (binary outcome) "prediction with expert advice" setting is altered by treating the experts themselves as players in the game. Specifically, restricting to the class of weight-update algorithms, the experts seek to maximize the weight assigned to them by the algorithm, as a notion of "rating" for their predictions. It is shown that learning algorithms are IC, meaning that experts reveal their true beliefs to the algorithm, if and only if the weight-update function is a strictly proper scoring rule. Because the weighted-majority algorithm (WM) has a weight update which is basically the negative of the loss function, WM is thus IC if and only if the (negative) loss is strictly proper. As the (negative) absolute loss function is not strictly proper, WM is not IC for absolute loss. The remainder of the paper investigates the absolute loss case in detail, showing that WM with the spherical scoring rule performs well, and moreover performs better than WM with absolute loss; no other deterministic IC (or even non-IC) algorithm for absolute loss can beat the performance in the non-strategic case; and analogous results hold for the randomized case.
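For readers outside this literature, the relevant definitions (standard, e.g. Gneiting and Raftery 2007) are as follows. A scoring rule S(q, y) for a report q in [0,1] on a binary outcome y in {0,1} is strictly proper if truthful reporting uniquely maximizes expected score:

$$p = \arg\max_{q \in [0,1]} \mathbb{E}_{y \sim \mathrm{Bern}(p)}\big[S(q, y)\big] \quad \text{for every } p.$$

The Brier score $S(q,y) = -(q-y)^{2}$ and the spherical rule $S(q,y) = \big(qy + (1-q)(1-y)\big)/\sqrt{q^{2} + (1-q)^{2}}$ are strictly proper, while negative absolute loss $-|q-y|$ is not: its expectation is linear in q, so it is maximized at q = 0 or q = 1, which is exactly the incentive problem for WM with absolute loss.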
I like the results, but find the motivation to be on loose footing. First, it is somewhat odd that experts would want to maximize the algorithm's weight assigned to them, since after all the algorithm's weights may not correspond to "performance" (a modified WM which publishes bogus weights intermittently can still achieve no regret), and even worse, the algorithm need not operate by calculating weights at all. I did find the 538 example compelling, however.
My biggest concern is the motivation behind scoring the algorithm or the experts with non-proper scoring rules. A major thrust of elicitation (see Making and Evaluating Point Forecasts, Gneiting 2011) is that one should always measure performance in an incentive-compatible way, which would rule out absolute loss. The authors do provide an example in 538's pollster ratings, but herein lies a subtlety: 538 is using absolute loss to score *poll* outcomes, based on the true election tallies. Poll results are not probabilistic forecasts, and in fact, that is a major criticism of polls from e.g. the prediction market community. Indeed, the "outcome" for 538's scoring is itself a real number between 0 and 1, and not just the binary 0,1 of e.g. whether the candidate won or lost.
I would be much more positive about the paper if the authors could find a natural setting where non-proper losses would be used.
Minor comment: Using "r" as the outcome may be confusing as it often denotes a report. It seems that "y" is more common, in both the scoring rules and prediction with expert advice literature. |
nips_2017_1618 | Alternating Estimation for Structured High-Dimensional Multi-Response Models
We consider the problem of learning high-dimensional multi-response linear models with structured parameters. By exploiting the noise correlations among different responses, we propose an alternating estimation (AltEst) procedure to estimate the model parameters based on the generalized Dantzig selector (GDS). Under suitable sample size and resampling assumptions, we show that the error of the estimates generated by AltEst, with high probability, converges linearly to a certain minimum achievable level, which can be tersely expressed by a few geometric measures, such as the Gaussian width of sets related to the parameter structure. To the best of our knowledge, this is the first non-asymptotic statistical guarantee for such an AltEst-type algorithm applied to estimation with general structures. | Bounds on the estimation error in a multi-response structured Gaussian linear model are considered, where the parameter to be estimated is shared by all m responses and where the noise covariance may differ from identity. The authors propose an alternating procedure for estimation of the coefficient vector that essentially extends the GDS (generalized Dantzig selector). The new procedure accommodates general "low-complexity" structures on the parameter and non-identity covariance of the noise term.
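A schematic reading of an alternating-estimation loop of this kind, for intuition only (function names are placeholders; this is not the paper's exact algorithm):

```python
def alt_est(X, Y, gds_solve, cov_estimate, iters=10):
    """Alternate between estimating the noise covariance across responses
    and re-solving a covariance-weighted, structure-constrained GDS step."""
    theta = gds_solve(X, Y, None)          # initial fit, ignoring correlations
    for _ in range(iters):
        sigma = cov_estimate(X, Y, theta)  # noise covariance given current theta
        theta = gds_solve(X, Y, sigma)     # re-fit, exploiting the correlations
    return theta
```

The analysis question is then how the error of theta contracts across these alternations, which is what the linear-convergence guarantee addresses.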
The paper is outside my field of expertise, so I apologize in advance for any mistakes due to my misunderstanding.
The paper is very technical, and without the appropriate background, I was not able to really appreciate the significance of the contribution. That said, the theme of the results is quite clear, especially with the effort of the authors to supplement the results with intuitive explanations.
1. The paper is very technical, maybe too technical, and it is hard for a non-expert like myself to make sense of the conditions/assumptions made in the different theorems (despite the authors' evident effort to help with making sense of these conditions).
2. Again, I was not able to appreciate if the different assumptions and results are sensible or whether they are mostly ad-hoc or artifacts of the proofs, but overall the contribution over GDS and existing methods does not appear to be sufficiently significant for acceptance to NIPS.
3. The manuscript is not free of grammatical errors or bad wording, for example:
-l.168: "at the same order"
-caption of Fig 2: "The plots for oracle...implies..."
-caption of Fig 2: "keep unchanged" (better: "remain unchanged")
## comments after reading author rebuttal:
The authors have provided a clear non-technical summary of the paper, which is much appreciated. I would like to apologize for not being able to give a fair evaluation when originally reviewing the paper (this is because I did not have the technical background to appreciate the results). |
nips_2017_2119 | On Tensor Train Rank Minimization: Statistical Efficiency and Scalable Algorithm
Tensor train (TT) decomposition provides a space-efficient representation for higher-order tensors. Despite its advantage, we face two crucial limitations when we apply the TT decomposition to machine learning problems: the lack of statistical theory and of scalable algorithms. In this paper, we address the limitations. First, we introduce a convex relaxation of the TT decomposition problem and derive its error bound for the tensor completion task. Next, we develop a randomized optimization method, in which the time complexity is as efficient as the space complexity is. In experiments, we numerically confirm the derived bounds and empirically demonstrate the performance of our method with a real higher-order tensor. | This paper looks at the tensor train (TT) decomposition as an efficient storage method for tensors with statistical guarantees.
Traditional methods for tensor decomposition include Tucker decomposition [29] and CP decomposition. The advantage of TT decomposition is that it can avoid the exponential space complexity of Tucker decomposition. However, its statistical performance is less understood, which the authors attempt to address in this paper.
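For reference, the TT format represents a d-way tensor entrywise as a product of small matrices (standard definition):

$$\mathcal{X}(i_1, \dots, i_d) = G_1[i_1]\, G_2[i_2] \cdots G_d[i_d], \qquad G_k[i_k] \in \mathbb{R}^{r_{k-1} \times r_k}, \; r_0 = r_d = 1,$$

so storage is O(d n r^2) in the maximal TT rank r, whereas a Tucker core alone already requires O(r^d) storage; this is the complementary trade-off noted below.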
In addition, the authors provide an efficient alternative to existing TT algorithms.
It’s interesting to note that Tucker and TT decompositions have complementary bottleneck - One has space complexity issues and the second has computational complexity issues.
The authors look at the problem of tensor completion which is an extension of the matrix completion problem that has been extensively researched in the literature.
In the matrix completion problem, the standard assumption is that the matrix is low-rank, so that the problem becomes well-posed. This assumption is also observed to hold in practical applications such as recommender systems.
The authors make a similar observation for the tensor to be estimated (i.e. a low-rank tensor). Just as the nuclear norm/Schatten-p norm is used as a relaxation of the matrix rank, the authors propose the Schatten TT norm as a convex relaxation of the TT rank of a tensor.
Comments:
- The authors mention that the TT rank is a tuple in section 2, while mentioning later in section 6 that it can be described by a single parameter, making the definition confusing.
- The problem formulation (4) and the algorithmic approach (section 3.1) all mimic the ideas in the matrix completion literature. The innovation is more to do with the appropriate representation of tensors while extending these ideas from the matrix completion literature.
- Even in the matrix factorization literature, the scalable methods are the ones that already assume a low-rank factor structure for the matrix to be completed to save space and computation. For tensors, it is all the more imperative to assume a low-rank factor structure. The authors do this in (6) to save space and time.
- Theorem 1 is quite interesting and provides a more tractable surrogate for the TT schatten norm.
- The statistical results are a new contribution for tensor completion, although the assumption of incoherence (Assumption 2) and subsequent proofs are by now very standard in the M-estimation literature.
- The authors use an ADMM algorithm again for solving (6) in section 4.2. It is not clear this would have an advantage compared to the simpler alternating minimization over the TT factors of the tensor. It is not clear if the convergence of ADMM to a local minimum for non-convex problems with multiple factors has been proven in the literature; the authors don't cite a paper on this either. On the other hand, it is easy to see that the simpler alternating minimization (keeping other factors fixed) converges to a local optimum or a critical point. The authors don't discuss any of these details in 4.2.
- The number of samples needed for statistical convergence is in the worst case the product of the dimensions of the tensor. Perhaps a better result would be to show that the sample complexity is proportional to the true TT rank of the tensor? The authors don't discuss a lower bound on the sample complexity in the paper, making the goodness of the sample complexity bound in Theorem 5 questionable.
Overall, the paper presents an interesting investigation of the TT decomposition for tensors and uses it in the context of tensor completion with statistical guarantees. It is not clear if the algorithm used for alternating optimization (ADMM) is actually guaranteed to converge, though it might work in practice. Also, the authors don't discuss the goodness (what's a good lower bound?) of the sample complexity bounds derived for their algorithm for tensor completion. The approach to modeling, algorithm development, scalability and statistical convergence mimics that taken for matrix completion; however, the authors do a good job of combining it all together and putting their approach in perspective with respect to existing algorithms in the literature.
nips_2017_2754 | Robust Imitation of Diverse Behaviors
Deep generative models have recently shown great promise in imitation learning for motor control. Given enough data, even supervised approaches can do one-shot imitation learning; however, they are vulnerable to cascading failures when the agent trajectory diverges from the demonstrations. Compared to purely supervised methods, Generative Adversarial Imitation Learning (GAIL) can learn more robust controllers from fewer demonstrations, but is inherently mode-seeking and more difficult to train. In this paper, we show how to combine the favourable aspects of these two approaches. The base of our model is a new type of variational autoencoder on demonstration trajectories that learns semantic policy embeddings. We show that these embeddings can be learned on a 9 DoF Jaco robot arm in reaching tasks, and then smoothly interpolated with a resulting smooth interpolation of reaching behavior. Leveraging these policy representations, we develop a new version of GAIL that (1) is much more robust than the purely-supervised controller, especially with few demonstrations, and (2) avoids mode collapse, capturing many diverse behaviors when GAIL on its own does not. We demonstrate our approach on learning diverse gaits from demonstration on a 2D biped and a 62 DoF 3D humanoid in the MuJoCo physics environment. | This work deals with the problem of modeling joint action/state trajectory spaces of dynamical systems using a supervised imitation learning paradigm. It approaches the problem by defining a Variational Autoencoder (VAE) that maps the state sequence, via a latent representation, into a (state,action) pair.
The approach is validated on three articulated body controller modeling problems: a robotic arm, a 2D biped, and 3D human body motion modeling.
Prior work: the work closely follows (extends) the approach of Generative Adversarial Imitation Learning [11]. The difference here is that, through the use of the VAE, the authors claim to have avoided the pitfall of GANs known as mode collapse.
Summary
+ Addresses a challenging problem of learning complex dynamics controllers / control policies
+ Well-written introduction / motivation
+ Appealing qualitative results on the three evaluation problems. Interesting experiments with motion transitioning.
- Modeling formulation is somewhat different from GAIL (latent representation) but it rather closely follows GAIL
- Many key details are omitted (either on purpose, placed in appendix, or simply absent, like the lack of definitions of terms in the modeling section, details of the planner model, simulation process, or the details of experimental settings)
- Experimental evaluation is largely subjective (videos of robotic arm/biped/3D human motion)
- Paper appears written in haste or rewritten from a SIGGRAPH-like submission
- Relies significantly on work presented in an anonymous NIPS submission
Detailed comments
In general, I like the focus of this paper; designing complex controllers for dynamic systems is an extremely challenging task and the approach proposed here looks reasonable. The use of the VAE is justified and the results, subjectively, are appealing.
However, so many details are missing here and some parts of the discussion are unclear or not justified, in my view.
For instance, the whole modeling section (3), while not so obscure, still ends up not defining many terms. Eg, what is \pi_E in the first eq of 3.2? (Not numbered, btw.) What is \theta in eq3 (also used elsewhere)? What are all the weight terms in the policy model, end of sec 3 (not numbered)? You say you initialize theta to alpha weights, but what does that mean?
What is the purpose of Lemma 1? The whole statement following it, l.135-145, sounds unclear and contradictory. Eg, you say that eq5 avoids the problem of collapse, yet you state in those lines that it also has the same problem?
The decoder models for (x,a) are not explicitly defined.
How is the trajectory simulation process actually accomplished? This is somewhat tersely described in l168-172.
How are transitions between categories of different motion types modeled, given that you do not explicitly encode the category class? It is obviously done in the z-space, but is it some linear interpolation? It appears to be (l.174-176), but is it timed or instantaneous?
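If it is indeed a linear interpolation in the embedding space, the transition mechanism would amount to something like the following sketch (my guess at the mechanism, not something stated in the paper):

```python
import numpy as np

def interpolate_embeddings(z_a, z_b, steps=10):
    # Linearly blend two policy embeddings; feeding each blend to the
    # policy decoder would then produce the smooth transitions shown
    # in the videos. Whether the blending is timed is still unclear.
    return [(1.0 - t) * z_a + t * z_b for t in np.linspace(0.0, 1.0, steps)]
```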
What are all the numbers in the Action Decoder Sizes column of Tab 1, appendix? |
nips_2017_1233 | Learning ReLUs via Gradient Descent
In this paper we study the problem of learning Rectified Linear Units (ReLUs), which are functions of the form x ↦ max(0, ⟨w, x⟩) with w ∈ R^d denoting the weight vector. We study this problem in the high-dimensional regime where the number of observations is smaller than the dimension of the weight vector. We assume that the weight vector belongs to some closed set (convex or nonconvex) which captures known side-information about its structure. We focus on the realizable model where the inputs are chosen i.i.d. from a Gaussian distribution and the labels are generated according to a planted weight vector. We show that projected gradient descent, when initialized at 0, converges at a linear rate to the planted model with a number of samples that is optimal up to numerical constants. Our results on the dynamics of convergence of these very shallow neural nets may provide some insights towards understanding the dynamics of deeper architectures. | ### Summary
The paper proves that projected gradient descent in a single-layer ReLU network converges to the global minimum linearly: ||w_t-w^*||/||w_0-w^*||=(1/2)^t with high probability. The analysis assumes the input samples are iid Gaussian and the output is realizable, i.e. the target value y_k for input x_k is constructed via y_k=ReLU(w^*.x_k). The paper studies a regression setting and uses least squares as the loss function.
### Pros and Cons
To the best of my knowledge, the result is novel; I am not aware of any proof of "convergence to the global optimum" for the nonconvex loss that results from fitting a ReLU function via least squares. On top of that, the analysis considers a nonconvex regularization term with few assumptions on it. Of course, there is still a clear gap between the results of this paper and models used in practice: notably, relaxing the Gaussian iid input assumption, and extending to the multilayer setup. However, this result may become a stepping stone toward that goal.
### Proofs
I followed the building blocks of the proof at the coarse level, and the logic makes sense. However, I did not check at a very detailed level. I was wondering if the authors could include a simple simulation to confirm the correctness of the result? Given that the data is very simple (iid Gaussian) and the network is a single layer ReLU, using a very simple regularization scheme, it seems to take a few lines of Matlab to verify the empirical convergence rate and compare that with the result. In fact, I wonder why this is not attempted by the authors already?
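For what it's worth, such a check could look roughly like the following (my own sketch under several assumptions: no regularization, so the projection is the identity; n a constant multiple of d so that the unstructured problem is well posed; and a unit step size):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 50, 500                       # unstructured w, so n ~ d samples needed
w_star = rng.standard_normal(d)
X = rng.standard_normal((n, d))      # iid Gaussian inputs
y = np.maximum(X @ w_star, 0.0)      # realizable ReLU labels

w = np.zeros(d)                      # initialize at 0, as in the paper
for t in range(1, 21):
    mask = X @ w >= 0.0              # take ReLU'(0) = 1 so the first step leaves the origin
    grad = X.T @ (mask * (np.maximum(X @ w, 0.0) - y)) / n
    w = w - grad                     # unit step size (my assumption)
    # since w_0 = 0, ||w_0 - w*|| = ||w*||; compare against the claimed rate
    print(t, np.linalg.norm(w - w_star) / np.linalg.norm(w_star), 0.5 ** t)
```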
### Regularization
The authors state that the result holds true regardless of the regularization R being convex or nonconvex. What if R is such that it creates two far-apart, disjoint regions, one containing the origin (initialization) and the other containing w^*? Wouldn't this create a problem for projected gradient descent initialized at w=0 to reach w^*?
### Minor Comments
1. There are typos; please proofread.
2. What is the meaning of A in L_A (Eq 5.12)?
3. What is the value of the constant gamma in line 180, or otherwise how is it related to other constants or quantities in the paper? |
nips_2017_3152 | The Expressive Power of Neural Networks: A View from the Width
The expressive power of neural networks is important for understanding deep learning. Most existing works consider this problem from the view of the depth of a network. In this paper, we study how width affects the expressiveness of neural networks. Classical results state that depth-bounded (e.g. depth-2) networks with suitable activation functions are universal approximators. We show a universal approximation theorem for width-bounded ReLU networks: width-(n + 4) ReLU networks, where n is the input dimension, are universal approximators. Moreover, except for a measure zero set, all functions cannot be approximated by width-n ReLU networks, which exhibits a phase transition. Several recent works demonstrate the benefits of depth by proving the depth-efficiency of neural networks. That is, there are classes of deep networks which cannot be realized by any shallow network whose size is no more than an exponential bound. Here we pose the dual question on the width-efficiency of ReLU networks: Are there wide networks that cannot be realized by narrow networks whose size is not substantially larger? We show that there exist classes of wide networks which cannot be realized by any narrow network whose depth is no more than a polynomial bound. On the other hand, we demonstrate by extensive experiments that narrow networks whose size exceeds the polynomial bound by a constant factor can approximate wide and shallow networks with high accuracy. Our results provide more comprehensive evidence that depth may be more effective than width for the expressiveness of ReLU networks. | SUMMARY
* This paper studies how width affects the expressive power of neural networks. It suggests an alternative line of argumentation (representing shallow networks by deep narrow ones) to the view that depth is more effective than width in terms of expressivity. The paper offers partial theoretical results and experiments in support of this idea.
CLARITY
* The paper expresses the objectives clearly and is easy to follow. However, the technical quality needs attention, including confused notions of continuity and norms.
RELEVANCE
* The paper discusses that too narrow networks cannot be universal approximators. This is not surprising (since the local linear map computed by a ReLU layer has rank at most equal to the number of active units). Nonetheless, it highlights the fact that not only depth but also width is important.
* The universal approximation results seem to add, but not that much to what was already known. Here it would help to comment on the differences.
* The paper presents a polynomial lower bound on the required number of narrow layers in order to represent a shallow net, but a corresponding upper bound is not provided.
* As an argument in favor of depth, the theoretical results do not seem to be conclusive. For comparison, other works focusing on depth efficiency show that exponentially larger shallow nets are needed to express deep nets. The paper also presents experiments indicating that a polynomial number of layers indeed allows deep narrow nets to express functions computable by a shallow net.
OTHER COMMENTS
* The paper claims that ``as pointed out in [2]: There is always a positive measure of network parameters such that deep nets can be realized by shallow ones without substantially larger size''. This needs further clarification, as that paper states `` we prove that besides a negligible set, all functions that can be implemented by a deep network of polynomial size, require exponential size in order to be realized (or even approximated) by a shallow network.''
* In line 84, what is the number of input units / how do the numbers of parameters of the two networks compare?
* In related works, how do the bounds from [14,18] compare with the new results presented in this paper?
* In line 133. Lebesgue measurable functions are not generalizations of continuous functions.
* In line 134 the argumentation for L1 makes no sense.
* In line 163 ``if we ignore the size of the network ... efficient for universal approximation'' makes no sense.
* The comparison with previous results could be made more clear.
* Theorem 2 is measuring the error over all of R^n, which does not seem reasonable for a no-representability statement.
* Theorem 3 talks about the existence of an error epsilon, which does not seem reasonable for a no-representability statement. Same in Theorem 4.
* In line 235 the number of points should increase with the power of the dimension.
CONCLUSION
The paper discusses an interesting question, but does not answer it conclusively. Some technical details need attention. |
nips_2017_3334 | The power of absolute discounting: all-dimensional distribution estimation
Categorical models are a natural fit for many problems. When learning the distribution of categories from samples, high-dimensionality may dilute the data. Minimax optimality is too pessimistic to remedy this issue. A serendipitously discovered estimator, absolute discounting, corrects empirical frequencies by subtracting a constant from observed categories, which it then redistributes among the unobserved. It outperforms classical estimators empirically, and has been used extensively in natural language modeling. In this paper, we rigorously explain the prowess of this estimator using less pessimistic notions. We show that (1) absolute discounting recovers classical minimax KL-risk rates, (2) it is adaptive to an effective dimension rather than the true dimension, (3) it is strongly related to the Good-Turing estimator and inherits its competitive properties. We use powerlaw distributions as the cornerstone of these results. We validate the theory via synthetic data and an application to the Global Terrorism Database. | SUMMARY: The paper discusses methods for estimating categorical distributions, in particular a method called "absolute discounting". The paper gives bounds for the KL-risk of absolute discounting, and discusses properties such as minimax rate-optimality, adaptivity, and its relation to the Good–Turing estimator.
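For reference, the estimator in its textbook form takes only a few lines; the sketch below is mine, redistributes the collected mass uniformly over unseen categories (one common choice), and assumes at least one unseen category:

```python
import numpy as np

def absolute_discounting(counts, d=0.5):
    # Subtract the discount d from every observed category and spread
    # the collected mass d * (#seen) / n over the unseen categories.
    counts = np.asarray(counts, dtype=float)
    n, seen = counts.sum(), counts > 0
    p = np.zeros_like(counts)
    p[seen] = (counts[seen] - d) / n
    p[~seen] = d * seen.sum() / (n * (~seen).sum())  # assumes unseen exist
    return p
```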
CLARITY: The paper is presented in reasonably clear language, and is well-structured.
NOVELTY: The contributions of the paper seem good, but perhaps rather incremental, and not ground-breakingly novel. The paper advocates for the use of absolute discounting, and gives good arguments in favor, including theoretical properties and some experimental results. But the technique as such isn't especially novel, and more general versions of it exist that aren't referenced. The literature review might not be thorough enough: for example, there are many relevant techniques in Chen & Goodman's comprehensive and widely-cited report (1996, 1998) that aren't mentioned or compared to. The paper does cite this report, but could perhaps engage with it better.
Also, the paper does not give any insight into its relation to clearly relevant Bayesian techniques (e.g. Pitman-Yor processes and CRPs) that are similar to (and perhaps more general than) the form of absolute discounting presented in this paper. These other techniques probably deserve a theoretical and/or experimental comparison. At the very least they could be mentioned, and it would be helpful to have their relationship to "absolute discounting" clarified.
SCORE:
Overall, I enjoyed reading the paper, and I hope the authors will feel encouraged to continue and improve their work. The main contributions seem useful, but I have a feeling that their relevance to other techniques and existing research isn't clarified enough in this version of the paper.
A few minor suggestions for improvements to the paper follow below.
SECTION 8 (Experiments):
Some questions.
Q1: Are 500 Monte-Carlo iterations enough?
Q2: If the discount value is set based on the data, is that cheating by using the data twice?
LANGUAGE:
Some of the phrasing might be a bit overconfident, e.g.:
Line 1: "Categorical models are the natural fit..." -> "Categorical models are a natural fit..."?
Line 48: "[we report on some experiments], which showcases perfectly the all-dimensional learning power of absolute discounting" (really, "perfectly"?)
Line 291: "This perfectly captures the importance of using structure [...]"
NOTATION:
Totally minor, but perhaps worth mentioning as it occurs a lot: "Good-Turing" should be written with an en-dash ("–" in Unicode, "--" in LaTeX), and not with a hyphen ("-" in Unicode, "-" in LaTeX).
BIBLIOGRAPHY:
In the bibliography, "Good–Turing" is often written in lowercase ("good-turing"), perhaps as a result of forgetting the curly brace protections in the BibTeX source file. (Correct syntax: "{G}ood--{T}uring".)
The final reference (Zip35, Line 364) seems incomplete: only author, title and year are mentioned; no information is given about where or how the paper was published.
nips_2017_3407 | Structured Bayesian Pruning via Log-Normal Multiplicative Noise
Dropout-based regularization methods can be regarded as injecting random noise with pre-defined magnitude to different parts of the neural network during training. It was recently shown that Bayesian dropout procedure not only improves generalization but also leads to extremely sparse neural architectures by automatically setting the individual noise magnitude per weight. However, this sparsity can hardly be used for acceleration since it is unstructured. In the paper, we propose a new Bayesian model that takes into account the computational structure of neural networks and provides structured sparsity, e.g. removes neurons and/or convolutional channels in CNNs. To do this we inject noise to the neurons outputs while keeping the weights unregularized. We establish the probabilistic model with a proper truncated log-uniform prior over the noise and truncated log-normal variational approximation that ensures that the KL-term in the evidence lower bound is computed in closed-form. The model leads to structured sparsity by removing elements with a low SNR from the computation graph and provides significant acceleration on a number of deep neural architectures. The model is easy to implement as it can be formulated as a separate dropout-like layer.
employs group-wise sparsity in convolutional filters, Perforated CNNs [3] drop redundant rows from the intermediate dataframe matrices that are used to compute convolutions, and Structured Sparsity Learning [24] provides a way to remove entire convolutional filters or even layers in residual networks. These methods allow to obtain practical acceleration with little to no modifications of the existing software. In this paper, we propose a tool that is able to induce an arbitrary pattern of structured sparsity on neural network parameters or intermediate data tensors. We propose a dropout-like layer with a parametric multiplicative noise and use stochastic variational inference to tune its parameters in a Bayesian way. We introduce a proper analog of sparsity-inducing log-uniform prior distribution [8,16] that allows us to formulate a correct probabilistic model and avoid the problems that come from using an improper prior. This way we obtain a novel Bayesian method of regularization of neural networks that results in structured sparsity. Our model can be represented as a separate dropout-like layer that allows for a simple and flexible implementation with almost no computational overhead, and can be incorporated into existing neural networks.
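As an illustration of the pruning signal, the signal-to-noise ratio of log-normal multiplicative noise has a simple closed form; the sketch below is an illustration that ignores the truncation used in the actual model:

```python
import numpy as np

def lognormal_snr(sigma):
    # For noise ~ LogNormal(mu, sigma^2), SNR = E[noise] / std[noise];
    # the factor exp(mu + sigma^2 / 2) cancels, leaving sigma alone.
    return 1.0 / np.sqrt(np.expm1(sigma ** 2))

sigmas = np.linspace(0.1, 3.0, 64)   # toy per-neuron noise scales
keep = lognormal_snr(sigmas) > 1.0   # prune neurons whose SNR falls below 1
```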
Our experiments show that our model leads to high group sparsity level and significant acceleration of convolutional neural networks with negligible accuracy drop. We demonstrate the performance of our method on LeNet and VGG-like architectures using MNIST and CIFAR-10 datasets. | Summary:
The paper addresses the actual problem of compression and efficient computation of deep NNs. It presents a new pruning technique for deep NNs, which is an enhancement of the Bayesian dropout method. Contrary to standard Bayesian dropout, the proposed method prunes NNs in a structured way. The NN model is altered by dropout layers with an advanced probabilistic model (with a truncated log-uniform prior distribution and a truncated log-normal posterior distribution). For the altered model, the authors propose a new "signal-to-noise ratio" pruning metric applicable to both individual neurons and more complex structures (e.g., filters).
The submission and supplementary files contain a theoretical explanation of the method and derivation of the corresponding formulas. Moreover, an experimental evaluation of the approach is presented, with promising results.
Quality:
The paper has very good technical quality. It contains a thorough theoretical analysis of the proposed model and its properties. It also presents several experimental results, that evaluate the qualities of the method and compare it with two concurrent approaches.
Clarity:
The presentation is comprehensible and well-organized, the method is described in detail including derivation of formulas and implementation details.
Originality and significance:
The paper addresses the actual problem of compression and efficient computation of deep NNs. Its main contribution is the presentation of a new general technique for structured pruning of NNs that is based on Bayesian dropout. Contrary to the original method,
1) it uses more sophisticated and proper noise distributions and it involves a novel pruning metric,
2) the subject of pruning are not weights, but more complex structures (e.g., filters, neurons).
3) The method can be applied to several NN architectures (CNN, fully-connected,...), and it seems that it can be easily added to existing NN implementations.
The presented approach proved in the experiments to massively prune the network and to speed up the computation on GPU and CPU, while inducing only small accuracy loss. However, based on the experimental results presented, the benefit over the concurrent "SSL" approach is relatively small.
Typo:
...that is has ... (line 117)
Comments:
- Table 1 and Table 2 contain rows with results for concurrent methods called "SSL" and "SparseVD", without a short description of the main principles of these methods in the text.
- It would be useful to assess also the efficiency of the pruning process itself.
- The method has several parameters that need to be set. An interesting question is how the actual setting of these parameters affects the balance between sparsity and accuracy of the final model.
nips_2017_3406 | Inverse Reward Design
Autonomous agents optimize the reward function we give them. What they don't know is how hard it is for us to design a reward function that actually captures what we want. When designing the reward, we might think of some specific training scenarios, and make sure that the reward will lead to the right behavior in those scenarios. Inevitably, agents encounter new scenarios (e.g., new types of terrain) where optimizing that same reward may lead to undesired behavior. Our insight is that reward functions are merely observations about what the designer actually wants, and that they should be interpreted in the context in which they were designed. We introduce inverse reward design (IRD) as the problem of inferring the true objective based on the designed reward and the training MDP. We introduce approximate methods for solving IRD problems, and use their solution to plan risk-averse behavior in test MDPs. Empirical results suggest that this approach can help alleviate negative side effects of misspecified reward functions and mitigate reward hacking. | The authors develop "Inverse Reward Design" which is intended to help mitigate unintended consequences when designing reward functions that will guide agent behaviour. This is a highly relevant topic for the NIPS audience and a very interesting idea.
The main potential contribution of this paper would be the clarity of thought it could bring to how issues of invariance can speak to the IRL problem. However, I find that the current presentation to be unclear at many points, particularly through the development of the main ideas. I have done my best to detail those points below.
My main concern is that the presentation leading up to and including 4.2 is very confused.
The authors define P(\tilde w | w^*). (They always condition on the same \tilde M so I am dropping that.)
At l.148, the authors state that through Bayes' rule, we get P(w^* | \tilde w) \propto P(\tilde w | w^*) P(w^*)
In the text, the display just after l.148 is where things start to get confused, by the use of P(w) to mean P(w^* = w)
The full Bayes' rule statement is of course P(w^* | \tilde w) = P(\tilde w | w^*) P(w^*) / P(\tilde w), or
P(w^* | \tilde w) = P(\tilde w | w^*) P(w^*) / \int_\w^* P(\tilde w | w^*) P(w^*) dw^*
I am confused because 1) the authors don't address the prior P(w^*) and 2) they state that the normalizing constant integrates "over the space of possible proxy rewards" and talk about how this is intractable "if \tilde{w} lies in an infinite or large finite set." Since w^* and \tilde{w} live in the same space (I believe) this is technically correct, but a very strange statement, since the integral in question is over w^*. I am concerned that there is substantial confusion here at the core idea of the whole paper. I suspect that this has resulted just from notational confusion, which often arises when shorthand [P(w^*)] and longhand [P(w^* = w)] notation is used for probabilities. I also suspect that whatever was implemented for the experiments was probably the right thing. However, I cannot assess this in the current state of the paper.
Finally, I would suggest that the authors address two questions that I think are natural in this setting. 1) What do we do about the prior P(w^*)? (Obviously there won't be a universal answer for this, but I imagine the authors have some thoughts.) 2)
Below are additional comments:
l.26 - The looping example isn't given in enough detail to help the reader. The example should stand alone from the cited paper.
Figure 1 - I understand the podium in the figure on the right, but the meaning of the numbers is not clear. E.g. is the first number supposed to convey [2][0] or twenty?
l.50 - "Extracting the true reward is..." - Do you mean "We define..."
Figure 2: uknown
l.134 - I think somewhere *much* earlier it needs to be made clear that this is a feature-based reward function that generalizes over different kinds of states.
l.138 - \xi and \phi(\xi) are not precisely defined. I don't know what it means for an agent to "select trajectory \xi"
l.145 - Notation in (1) and (2) is confusing. I find conditioning on \xi drawn from ... inside the [] strange. Subscript the expectation with it instead?
l.148 - P(w) should be P(w|\tilde{M}) unless you make additional assumptions. (true also in (3)) On the other hand, since as far as I can tell everything in the paper conditions on \tilde{M}, you may as well just drop it.
l.149 - Probabilities are normalized. If you're going to talk about the normalizing constant, write it explicitly rather than using \propto.
l.150 - Is this the same w? Looks different.
l.151 - I think you want = rather than \propto. Something is wrong in the integral in (3); check the variable of integration. There are \bar\phi and \tilde\phi and it's not clear what is happening.
l.158 - "constrained to lie on a hypercube" - Just say this in math so that it's precise.
l.166 - Can't (4) just go from 1 to N?
l.228 - I wouldn't open a figure caption with "Our proof-of-concept domains make the unrealistic assumption." Furthermore, I have no idea how the figure caption relates to the figure.
=== UPDATE
The authors have clarified section 4 and I understand much better what they were after. I have updated my score accordingly. |
nips_2017_322 | Variable Importance using Decision Trees
Decision trees and random forests are well established models that not only offer good predictive performance, but also provide rich feature importance information. While practitioners often employ variable importance methods that rely on this impurity-based information, these methods remain poorly characterized from a theoretical perspective. We provide novel insights into the performance of these methods by deriving finite sample performance guarantees in a high-dimensional setting under various modeling assumptions. We further demonstrate the effectiveness of these impurity-based methods via an extensive set of simulations. | The article tackles the problem of variable importance in regression trees. The strategy is to select variables based on the impurity reduction they induce on the label Y. The main feature of this strategy is that the impurity reduction measure is based on the ordering of Y according to the ranking of the X variable under consideration; it therefore measures the relationship between Y and any variable in a more robust way than simple correlation would. The authors prove that this strategy is consistent (i.e. the true explanatory variables are selected) in a range of settings. This is then illustrated on a simulated example where the results displayed are somewhat the ones one could have expected: the proposed procedure is able to account for monotone but non-linear relationships between X and Y, so it yields better results than simple correlations. At the same time, it cannot efficiently deal with interactions, since explanatory variables are considered one by one, compared with procedures that consider a complete tree rather than only a stump.
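For concreteness, one way to compute such a stump-based score for a single variable is sketched below (my own reading of the criterion, using variance as the impurity measure, which is an assumption):

```python
import numpy as np

def stump_impurity_reduction(x, y):
    # Best variance reduction achievable by a single split on feature x;
    # x, y are 1-D numpy arrays. Sorting y by x is what makes the score
    # sensitive to monotone (not merely linear) association.
    order = np.argsort(x)
    ys = y[order]
    n = len(ys)
    total = ((ys - ys.mean()) ** 2).sum()
    csum, csq = np.cumsum(ys), np.cumsum(ys ** 2)
    best = 0.0
    for i in range(1, n):  # split between positions i-1 and i
        left = csq[i - 1] - csum[i - 1] ** 2 / i
        right = (csq[-1] - csq[i - 1]) - (csum[-1] - csum[i - 1]) ** 2 / (n - i)
        best = max(best, total - left - right)
    return best
```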
Overall the idea that is presented is very simple, but as mentioned by the authors the theoretical analysis of the strategy is not completely trivial. Although there are some weaknesses in the manuscript, the ideas are well presented and the sketches of proof presented in sections 3.1 and 3.2 provide a good landscape of the complete proof. However, there are some weaknesses in the Simulation section, and it is a bit surprising (and disappointing) that there is no discussion section at all.
Regarding the Simulation section, first it seems that the quality criterion that is represented is actually not the criterion under study in sections 2 and 3. The theoretical results state the consistency of the procedure, i.e. the probability that the complete set of s predictors is retrieved, so the authors could have displayed the proportion of runs where the set is correctly found. Instead, they display the fraction of the support that is recovered, and it is not clear whether this is done on a single simulated dataset or averaged. If only on a single simulation, then it would be more relevant to repeat the experiment on several runs to have a clearer picture of what happens. Another remark is that the authors mention that they consider the model Y=X\beta + f(X_S) + w, but do they consider this model both to generate the data and to perform the analysis, or just for the generation, with a purely linear model then used for the analysis? It seems that the second hypothesis is the correct one, but this should be clearly mentioned. Also, it would be nice to have, on an example, the histogram of the criterion that is optimized by each procedure (when possible), to empirically evaluate the difficulty of the problem.
Regarding the proof itself, some parts of it should be commented on more. For instance, the authors derive the exact distribution of \tilde{U}, but then they use a truncated version of it. It may seem weird to approximate the (known) true distribution to derive the final result on this distribution. The authors should motivate this step and also provide some insight about what is lost due to this approximation. Also, is there any chance that the results presented here can be easily extended to the classification case? And lastly, since the number of active features is unknown in practice, is there any way the strategy presented here could help estimate this quantity (although I understand it is not the core of the article)?
nips_2017_145 | Decoding with Value Networks for Neural Machine Translation
Neural Machine Translation (NMT) has become a popular technology in recent years, and beam search is its de facto decoding method due to the shrunk search space and reduced computational complexity. However, since it only searches for local optima at each time step through one-step forward looking, it usually cannot output the best target sentence. Inspired by the success and methodology of AlphaGo, in this paper we propose using a prediction network to improve beam search, which takes the source sentence x, the currently available decoding output y_1, ..., y_{t-1} and a candidate word w at step t as inputs and predicts the long-term value (e.g., BLEU score) of the partial target sentence if it is completed by the NMT model. Following the practice in reinforcement learning, we call this prediction network value network. Specifically, we propose a recurrent structure for the value network, and train its parameters from bilingual data. During the test time, when choosing a word w for decoding, we consider both its conditional probability given by the NMT model and its long-term value predicted by the value network. Experiments show that such an approach can significantly improve the translation accuracy on several translation tasks. | This paper addresses one of the limitations of NMT, the so-called exposure bias, that results from the fact that each word is chosen greedily. For this, the authors build on standard techniques of reinforcement learning and try to predict, for each outgoing transition of a given state, the expected reward that will be achieved if the system takes this transition.
The article is overall very clear and the proposed ideas quite appealing, even if many of the decisions seem quite ad hoc (e.g. why do a linear combination between the (predicted) expected reward and the prediction of the NMT system rather than decoding directly with the former?), and it is not clear whether the proposed approach can be applied to other NLP tasks.
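For reference, the combination in question amounts to something like the following (a sketch; the value of alpha and the exact scale of the two scores are assumptions on my part):

```python
def combined_score(log_prob, predicted_value, alpha=0.85):
    # Linear interpolation of the NMT log-probability and the value
    # network's predicted long-term BLEU for one candidate expansion.
    return alpha * log_prob + (1.0 - alpha) * predicted_value

def best_candidate(candidates, alpha=0.85):
    # candidates: list of (log_prob, predicted_value) pairs for one beam step.
    return max(range(len(candidates)),
               key=lambda i: combined_score(*candidates[i], alpha))
```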
More importantly, several implementation "details" are not specified. For instance, in Equation (6), the BLEU function is defined at the sentence level, while the actual BLEU metric is defined at the corpus level. The authors should have specified which approximation of BLEU they have used.
The connection between structured prediction (especially in NLP) and reinforcement learning has been made since 2005 (see for instance [1], [2] or, more recently, [3]). It is, today, at the heart of most dependency parsers (see, among others, the work of Goldberg), which have dealt with similar issues for a long time. The related work section should not (and must not) be only about neural network works.
The MT system used in the experiments is far behind the performance achieved by the best systems of the WMT campaign: the best system (a "simple" phrase-based system, as I remember) achieves a BLEU score of ~20 for En -> De and ~35 for En -> Fr, while the scores reported by the authors are respectively ~15 and ~30. I wonder if the authors would have observed similar gains had they considered a better baseline system.
More generally, most of the reported experimental results are quite disturbing. Figure 3.a shows that increasing the beam actually degrades translation quality. This counter-intuitive result should have been analyzed. Figure 3.b indicates that taking into account the predicted value of the expected BLEU actually hurts translation performance: the less weight the predicted score gets in the linear combination, the better the translation.
[1] Learning as Search Optimization: Approximate Large Margin Methods for Structured Prediction, Hal Daumé III and Daniel Marcu, ICML'05
[2] Search-based structured prediction, Hal Daumé III, John Langford and Daniel Marcu, ML'09
[3] Learning to Search Better than Your Teacher, Kai-Wei Chang, Akshay Krishnamurthy, Alekh Agarwal, Hal Daumé III and John Langford, ICML'15
nips_2017_519 | Greedy Algorithms for Cone Constrained Optimization with Convergence Guarantees
Greedy optimization methods such as Matching Pursuit (MP) and Frank-Wolfe (FW) algorithms regained popularity in recent years due to their simplicity, effectiveness and theoretical guarantees. MP and FW address optimization over the linear span and the convex hull of a set of atoms, respectively. In this paper, we consider the intermediate case of optimization over the convex cone, parametrized as the conic hull of a generic atom set, leading to the first principled definitions of non-negative MP algorithms for which we give explicit convergence rates and demonstrate excellent empirical performance. In particular, we derive sublinear (O(1/t)) convergence on general smooth and convex objectives, and linear convergence (O(e^{-t})) on strongly convex objectives, in both cases for general sets of atoms. Furthermore, we establish a clear correspondence of our algorithms to known algorithms from the MP and FW literature. Our novel algorithms and analyses target general atom sets and general objective functions, and hence are directly applicable to a large variety of learning settings. | %%%% Summary
The authors consider constrained convex optimization problems where the constraints are conic and given by a bounded set of atoms. The authors propose linear-oracle-based methods to tackle such problems in the spirit of the Frank-Wolfe algorithm and Matching Pursuit techniques. The main algorithm is the Non-Negative Matching Pursuit and the authors propose several active-set variants. The paper contains convergence analysis for all the algorithms under different scenarios. In a nutshell, the convergence rate is sublinear for general objectives and linear for strongly convex objectives. The linear rates involve a new geometric quantity, the cone width. Finally the authors illustrate the relevance of their algorithm on several machine learning tasks and different datasets.
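As a rough sketch, a generic non-negative matching-pursuit iteration of the kind analyzed here could look as follows (my paraphrase, assuming unit-norm atoms and a quadratic objective, and omitting the corrective/away-step variants; not the authors' exact algorithms):

```python
import numpy as np

def nnmp_step(A, x, b):
    # One step for f(x) = 0.5 * ||x - b||^2 over cone(A), where the
    # columns of A are unit-norm atoms (assumed): pick the atom most
    # aligned with the negative gradient and move along it with a
    # non-negative step size obtained by exact line search.
    grad = x - b
    scores = A.T @ (-grad)
    i = int(np.argmax(scores))
    if scores[i] <= 0.0:      # no atom yields descent within the cone
        return x
    return x + scores[i] * A[:, i]
```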
%%%% Main comments
I did not have time to check the affine invariant algorithms and analyses presented in the appendix.
The paper contains interesting novel ideas and extensions for linear-oracle-based optimization methods, but the technical presentation suffers from some weaknesses:
- Theorem 2, whose presentation is problematic and which does not really provide any convergence guarantee.
- All the linear convergence rates rely on Theorem 8, which is buried at the end of the appendix and whose proof is not clear enough.
- Lower bounds on the number of good steps of each algorithm, which are not really proved since they rely on an argument of the type "it works the same as in another close setting".
The numerical experiments are numerous and convincing, but I think that the authors should provide empirical evidence showing that the computational costs are of the same order of magnitude as those of competing methods for the experiments they carried out.
%%%% Details on the main comments
%% Theorem 2
The presentation and statement of Theorem 2 (and all the sublinear rates given in the paper) has the following form:
- Given a fixed horizon T
- Consider rho, a bound on the iterates x_0 ... x_T
- Then for all t > 0 the suboptimality is of the order of c / t where c depends on rho.
First, the proof cannot hold for all t > 0 but only for 0 < t <= T. Indeed, in the proof, equation (16) relies on the fact that the rho bound holds for x_t, which is only ensured for t <= T.
Second, the numerator actually contains rho^2. When T increases, rho could increase as well, and the given bound does not even need to approach 0.
This presentation is problematic. One possible way to fix this would be to provide a priori conditions (such as coercivity) which ensure that the sequence of iterates remains in a compact set, allowing one to define an upper bound independently of the horizon T.
In the proof I did not understand the sentence "The reason being that f is convex, therefore, for t > 0 we have f(x_t) <= f(0)."
%% Lemma 7 and Theorem 8
I could not understand Lemma 7.
The equation is given without any comment and I cannot understand its meaning without further explaination. Is this equation defining K'? Or is it the case that K' can be chosen to satisfy this equation? Does it have any other meaning?
Lemma 7 deals only with g-faces which are polytopes. Is it always the case? What happens if K is not a polytope? Can this be done without loss of generality? Is it just a typo?
Theorem 8:
The presentation is problematic. In Lemma 7, r is not a feasible direction. In Theorem 8, it is the gradient of f at x_t. Theorem 8 says "using the notation from Lemma 7". The proof of Theorem 8 says "if r is a feasible direction". All this makes the work of the reader very hard.
Notations of Lemma 7 are not properly used:
- What is e? e is not fixed by Lemma 7; it is just a variable defining a maximum. This is a recurrent mistake in the proofs.
- What is K? K is supposed to be given in Lemma 7 but not in Theorem 8.
- Polytope?
All this could be more explicit.
"As x is not optimal by convexity we have that < r , e > > 0". Where is it assumed that $x$ is not optimal? How does this translate in the proposed inequality?
What does the following mean?
"We then project r on the faces of cone(A) containing x until it is a feasible direction"
Do the authors project on an intersection of faces, or alternatively on each face, or something else?
It would be more appropriate to say "the projection is a feasible direction", since r is fixed to be the gradient of f. It is very uncomfortable to have the value of r changing within the proof in an "algorithmic fashion", and it makes it very hard to check the accuracy of the arguments.
In any case, I suspect that the resulting r could be 0 in which case the next equation does not make sense. What prevents the resulting r from being null?
In the next sentences, the authors use Lemma 7, which assumes that r is not a feasible direction. This contradicts the preceding paragraph. At this point I was completely confused and lost hope of understanding the details of this proof.
What is r' on line 723 and in the preceding equation?
I understand that there is a kind of recursive process in the proof. Why should the last sentence be true?
%% Further comments
Line 220, max should be argmax
I did not really understand the non-negative matrix factorization experiment. Since the resulting approximation is of rank 10, does it mean that the authors ran their algorithm for 10 steps only?
nips_2017_483 | Learning Efficient Object Detection Models with Knowledge Distillation
Despite significant accuracy improvement in convolutional neural networks (CNN) based object detectors, they often require prohibitive runtimes to process an image for real-time applications. State-of-the-art models often use very deep networks with a large number of floating point operations. Efforts such as model compression learn compact models with fewer number of parameters, but with much reduced accuracy. In this work, we propose a new framework to learn compact and fast object detection networks with improved accuracy using knowledge distillation [20] and hint learning [34]. Although knowledge distillation has demonstrated excellent improvements for simpler classification setups, the complexity of detection poses new challenges in the form of regression, region proposals and less voluminous labels. We address this through several innovations such as a weighted cross-entropy loss to address class imbalance, a teacher bounded loss to handle the regression component and adaptation layers to better learn from intermediate teacher distributions. We conduct comprehensive empirical evaluation with different distillation configurations over multiple datasets including PASCAL, KITTI, ILSVRC and MS-COCO. Our results show consistent improvement in accuracy-speed trade-offs for modern multi-class detection models.
drop [3,20,34]. However, those results are shown only for problems such as classification, using simpler networks without strong regularization such as dropout.
Applying distillation techniques to multi-class object detection, in contrast to image classification, is challenging for several reasons. First, the performance of detection models suffers more degradation with compression, since detection labels are more expensive and thereby, usually less voluminous. Second, knowledge distillation is proposed for classification assuming each class is equally important, whereas that is not the case for detection where the background class is far more prevalent. Third, detection is a more complex task that combines elements of both classification and bounding box regression. Finally, an added challenge is that we focus on transferring knowledge within the same domain (images of the same dataset) with no additional data or labels, as opposed to other works that might rely on data from other domains (such as high-quality and low-quality image domains, or image and depth domains).
To address the above challenges, we propose a method to train fast models for object detection with knowledge distillation. Our contributions are four-fold:
• We propose an end-to-end trainable framework for learning compact multi-class object detection models through knowledge distillation (Section 3.1). To the best of our knowledge, this is the first successful demonstration of knowledge distillation for the multi-class object detection problem.
• We propose new losses that effectively address the aforementioned challenges. In particular, we propose a weighted cross entropy loss for classification that accounts for the imbalance in the impact of misclassification for background class as opposed to object classes (Section 3.2), a teacher bounded regression loss for knowledge distillation (Section 3.3) and adaptation layers for hint learning that allows the student to better learn from the distribution of neurons in intermediate layers of the teacher (Section 3.4).
• We perform comprehensive empirical evaluation using multiple large-scale public benchmarks.
Our study demonstrates the positive impact of each of the above novel design choices, resulting in significant improvement in object detection accuracy using compressed fast networks, consistently across all benchmarks (Sections 4.1-4.3).
• We present insights into the behavior of our framework by relating it to the generalization and under-fitting problems (Section 4.4). | Overall
This paper proposes a framework for training compact and efficient object detection networks using knowledge distillation [20]. It is claimed that this is the first successful application of knowledge distillation to object detection (aside from a concurrent work noted in Section 2 [34]), and to the best of my knowledge I confirm this claim. In my view, the main contributions of this work are the definition of a learning scheme and associated losses necessary to adapt the ideas of [20] and [32] for object detection. This requires some non-trivial problems to be solved - in particular, the definition of the class-weighted cross entropy to handle imbalance between foreground and background classes and the definition of a teacher bounded regression loss to guide the bounding box regression. Another minor contribution is the adaptation layer proposed in 3.4 for hint learning. On a skeptical note, these contributions can be viewed as rather straightforward technical adaptations to apply knowledge distillation to a related but novel task, object detection. In this regard, the goal may be considered a low-hanging fruit and the contributions as incremental.
I have two main criticisms of the manuscript. First, I am not convinced that knowledge distillation is particularly well suited to the problem of object detection (see argument below). This leads to my second concern: there are no experiments demonstrating advantages of the proposed model compression method over other model compression approaches.
The manuscript is well written and easy to read, with only a few minor typos. The related work is well structured and complete, and the experiments seem fairly rigorous.
Related Works
+ The related works is well structured and seems to be comprehensive. The related work section even identifies a recent concurrent work with on the same topic and clearly points out the difference in the two approaches.
Approach
- It seems to me that knowledge distillation is not particularly well suited to the problem of object detection where there are only a few classes (and quite often, only foreground and background). One of the main features of [20] is the ability to distill knowledge from an ensemble containing generalist and specialist models into a single streamlined model. The specialist models should handle rare classes or classes that are easily confused, usually subsets of a larger class (such as dog breeds). There is no use of ensemble learning in the proposed approach, and there would be little benefit as there are relatively few classes. My intuition is that this will give much less power to the soft targets and knowledge distillation.
- The problem of class balancing for object detection needs to be more clearly motivated. What are the detection rates from the RPN and the ratio of foreground to background?
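For what it's worth, my reading of the class-weighted distillation loss amounts to the following sketch (treating class 0 as background and picking the weight values arbitrarily; both are assumptions on my part):

```python
import torch
import torch.nn.functional as F

def weighted_soft_ce(student_logits, teacher_probs, labels, w_bg=1.5, w_fg=1.0):
    # Cross entropy against the teacher's soft targets, with background
    # errors weighted differently from foreground ones to counter the
    # heavy class imbalance in detection.
    w = torch.full_like(labels, w_fg, dtype=torch.float32)
    w[labels == 0] = w_bg
    log_p = F.log_softmax(student_logits, dim=1)
    ce = -(teacher_probs * log_p).sum(dim=1)
    return (w * ce).mean()
```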
Experiments
+ Good experimental design clearly demonstrates the advantages of the method.
+ Important commonly used benchmark datasets are used (PASCAL, MS COCO, ILSVRC)
+ Experiments use several teacher and student models in a variety of configurations, allowing the reader to clearly identify trends (deeper teachers = better performance, bigger datasets = smaller gains)
+ Ablation study justifies the choice of weighted cross-entropy and bounded regression loss functions (note: In my opinion ‘ablation’ is a slight misuse of the terminology, although it is used this way in other works)
- A comparison against other compressed models is lacking.
- It would have been interesting to see results with a deeper and more modern network than VGG-16 as teacher.
Other comments
L29 - Deeper networks are easier to train - clarify what you mean by this. Problems like vanishing gradients make it difficult to train deeper networks.
L40-41 - This line is unclear. Should you replace ‘degrades more rapidly’ with ‘suffers more degradation’?
L108 - Typos: features, takes
L222 - Typo: Label 1 |
nips_2017_2686 | Riemannian approach to batch normalization
Batch Normalization (BN) has proven to be an effective algorithm for deep neural network training by normalizing the input to each neuron and reducing the internal covariate shift. The space of weight vectors in the BN layer can be naturally interpreted as a Riemannian manifold, which is invariant to linear scaling of weights. Following the intrinsic geometry of this manifold provides a new learning rule that is more efficient and easier to analyze. We also propose intuitive and effective gradient clipping and regularization methods for the proposed algorithm by utilizing the geometry of the manifold. The resulting algorithm consistently outperforms the original BN on various types of network architectures and datasets. | Paper Summary
Starting from the observation that batch-normalization induces a particular
form of scale invariance on the weight matrix, the authors propose instead to
directly learn the weights on the unit-sphere. This is motivated from
information geometry as an example of optimization on a Riemannian manifold, in
particular the Stiefel manifold V(1,n) which contains unit-length vectors. As
the descent direction on the unit sphere is well known (eq 7), the main
contribution of the paper is in extending popular optimization algorithms
(SGD+momentum and Adam) to constrained optimization on the unit-sphere.
Furthermore, the authors propose orthogonality as a (principled) replacement
for L2 regularization, which is no longer meaningful with norm constraints.
The method is shown to be effective across two families of models (VGG, wide
resnet) on CIFAR-10, CIFAR-100 and SVHN.
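For intuition, one step of SGD constrained to the unit sphere amounts to the following (a sketch that uses renormalization as a cheap retraction in place of the paper's exact exponential map):

```python
import numpy as np

def sphere_sgd_step(w, grad, lr):
    # Project the Euclidean gradient onto the tangent space at w,
    # take a step, then retract back onto the unit sphere.
    g_tan = grad - (grad @ w) * w
    w_new = w - lr * g_tan
    return w_new / np.linalg.norm(w_new)
```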
Review
I like this paper, as it proposes a principled alternative to batch
normalization, which is also computationally lightweight, compared to other
manifold optimization algorithms (e.g. natural gradient). That being said, a few
flaws prevent me from giving a clear signal for acceptance.
(1) Clarity. I believe the paper would greatly benefit from increased
simplicity and clarity in the exposition of the idea. First and foremost,
concepts of element-wise scale invariance and optimization under unit-norm
constraints are much simpler to grasp (especially to people not familiar with
information geometry) than Riemannian manifolds, exponential maps and parallel
transport. I recommend the authors preface exposition of their methods with
such an intuitive explanation. Second, I also found there was a general lack of clarity about what type of invariance is afforded by BN, and about which direction the weight norms were constrained in (column vs row). On line 75, the authors
write that "w^T x is invariant to linear scaling", when really this refers to
element-wise scaling of the columns of the matrix W \in \mathbb{R}^{n \times p},
where $p$ is the number of hidden units. In Algorithm 4, there is also a discrepancy
between the product "W x" (implies W is p x n) and the "column vectors w_i"
being unit-norm.
(2) Experimental results. As this is inherently an optimization algorithm, I
would expect there to be training curves. Does the algorithm simply converge
faster, does it yield a better minimum, or both? Also, I would like to see
a controlled experiment showing the impact of the orthogonality regularizer:
what is the test error with and without?
(3) Baselines and related work. This work ought to be discussed in the more
general context of Riemannian optimization for neural networks, e.g. natural
gradient descent algorithms (TONGA, K-FAC, HF, natural nets, etc.). Path-SGD also
seems closely related and should be featured as a baseline.
Finally, I am somewhat alarmed and puzzled that the authors chose to revert to
standard SGD for over-complete layers, and only use their method in the
under-complete case. Could the authors comment on how many parameters are in this
"under-complete" regime vs total number of parameters ? Are there no appropriate
relaxations of orthogonality ? I would have much rather seen the training curves
using their algorithm for *all* parameters (never mind test error for now) then
this strange hybrid method. |
nips_2017_2870 | Min-Max Propagation
We study the application of min-max propagation, a variation of belief propagation, for approximate min-max inference in factor graphs. We show that for "any" high-order function that can be minimized in O(ω), the min-max message update can be obtained using an efficient O(K(ω + log(K))) procedure, where K is the number of variables. We demonstrate how this generic procedure, in combination with efficient updates for a family of high-order constraints, enables the application of min-max propagation to efficiently approximate the NP-hard problem of makespan minimization, which seeks to distribute a set of tasks on machines, such that the worst case load is minimized. | As the title suggests, this paper is interested in
solving min-max problems using message-passing algorithms. The
starting point is Eq. 1: You want to optimize a vector x such
that the max over a discrete set of functions is minimized. The
set of functions should be finite.
It's well known that changing the semiring can change the message-passing algorithm from a joint optimization
(max product) algorithm to a marginalization algorithm (sum
product). So, just like the (+,*) semiring yields the sum-product
algorithm and the (max,*) semiring yields the max-product
algorithm, this paper uses the (min,max) semi-ring. Then,
Eqs. 2-3 are exactly the standard BP messages on factors and
single variables, just with the new semi-ring. (It would be good
if the paper would emphasize more strongly that Eqs. 2 and 3 are
exactly analogous-- I had to work out the details myself to
verify this.)
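To make the analogy concrete, here is a toy factor-to-variable update under the (min, max) semiring; this is my own sketch with a hypothetical Factor type, not code from the paper:

```python
import itertools
from collections import namedtuple

# scope: tuple of variable names; table: dict mapping full assignment
# tuples (ordered by scope) to factor values.
Factor = namedtuple("Factor", ["scope", "table"])

def factor_to_var_message(f, var, domains, incoming):
    # Same shape as the standard BP update, with sum -> min and
    # product -> max: msg(x) = min over the other variables of
    # max(f(assignment), incoming variable-to-factor messages).
    others = [v for v in f.scope if v != var]
    msg = {}
    for x in domains[var]:
        best = float("inf")
        for assn in itertools.product(*(domains[v] for v in others)):
            full = dict(zip(others, assn))
            full[var] = x
            val = f.table[tuple(full[v] for v in f.scope)]
            for v in others:
                val = max(val, incoming[v][full[v]])  # semiring "product"
            best = min(best, val)                     # semiring "sum"
        msg[x] = best
    return msg
```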
Since that's a semi-ring, you get all the normal properties (exactness on trees, running times). However, you don't seem to inherit many of the folk theorems around using BP on loopy graphs when you change the semi-ring (where most applications probably lie), so this paper investigates that experimentally.
The paper assumes that the minimization problem on any particular factor can be solved in a fixed amount of time. This is
reasonable to make analysis tractable, but it's worth pointing
out that in many applications, doing this might itself involve
running another entire inference algorithm (e.g. max-product
BP!) There is also an algorithmic specialization to speed up
message-computation for the case
where there is a special "choose-one" form for the factors,
similar to some factor forms that are efficient with different
semigroups.
Experimentally, it is compared to a previous algorithm that runs
a bisection search with a CSP solver in each iteration. This
appears to perform slightly better, although apparently at
higher cost.
As far as I know, this particular semi-ring variant of belief
propagation has not been investigated before. I don't see strong
evidence of a huge practical contribution in the experiments
(though that might reflect my lack of expertise on these
problems). Nevertheless, the algorithm is extremely simple in
concept, and it certainly seems worth noting that there is
another semi-ring for BP that solves another possibly
interesting set of problems. Even if the results here aren't
that strong in the current form, this might be a first step
towards using some of the many improvements that have been made
to other semi-ring versions of BP.
(I was surprised that this particular semi-ring hasn't been
proposed before, but I'm not aware of such a use, and a brief
search did not find anything. If this is not actually novel, I
would downgrade my rating of this paper.)
- Fig. 5 is very hard to read -- illegible in print
- In eq. 13 I believe you want a -> m?
EDIT AFTER REBUTTAL:
I've read the rebuttal and other reviews, and see no reason to change my recommendation. (I remain positive.)
nips_2017_2871 | What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?
There are two major types of uncertainty one can model. Aleatoric uncertainty captures noise inherent in the observations. On the other hand, epistemic uncertainty accounts for uncertainty in the model -uncertainty which can be explained away given enough data. Traditionally it has been difficult to model epistemic uncertainty in computer vision, but with new Bayesian deep learning tools this is now possible. We study the benefits of modeling epistemic vs. aleatoric uncertainty in Bayesian deep learning models for vision tasks. For this we present a Bayesian deep learning framework combining input-dependent aleatoric uncertainty together with epistemic uncertainty. We study models under the framework with per-pixel semantic segmentation and depth regression tasks. Further, our explicit uncertainty formulation leads to new loss functions for these tasks, which can be interpreted as learned attenuation. This makes the loss more robust to noisy data, also giving new state-of-the-art results on segmentation and depth regression benchmarks. | I have read the other reviews and the rebuttal.
Additional comment: The authors asked what I mean by 'numerical uncertainty'. I mean what is called 'algorithmic uncertainty' in this Wikipedia article: https://en.wikipedia.org/wiki/Uncertainty_quantification . Deep learning models are very high-dimensional optimisation problems and the optimiser does not converge in almost all cases. Hence, this numerical error, i.e. the difference between the true minimum and the output of the optimiser, is a very significant source of error, which we are uncertain about (because we do not know the solution). Additionally, even rounding errors can add to the uncertainty of deep learning, as is e.g. discussed in this paper: http://proceedings.mlr.press/v37/gupta15.pdf .
Hence, I think that the question in the title "What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?" is not really answered. Although the paper is, apart from this, very good, this limits my rating at 'good paper, accept'. In my opinion, the authors should make clearer which kinds of uncertainty they consider (and which they ignore) or why they believe that the uncertainties considered are the dominant ones (which I find completely unclear).
% Brief Summary
The paper is concerned with evaluating which types of uncertainty are needed for computer vision. The authors restrict their analysis to aleatoric and epistemic uncertainty (leaving out numerical uncertainty). Aleatoric uncertainty includes the uncertainty from statistical noise in data. Epistemic uncertainty is usually another term for ignorance, i.e. things one could in theory know but doesn't know in practice. In this paper however, the authors use epistemic uncertainty as a synonym for model uncertainty (or structural uncertainty, i.e. ignorance over the true physical model which created the data). The paper raises the question how these two sources of uncertainty can be jointly quantified, and when which source of uncertainty is dominant. In particular, this is discussed in the setting of computer vision with deep learning. While classically it was not possible to capture uncertainty in deep learning, Bayesian neural networks (BNN) have made it possible to capture at least epistemic uncertainty. The authors aim to jointly treat epistemic and aleatoric uncertainty in BNNs.
To expand the existing literature, the authors propose to include aleatoric uncertainty via a per-pixel output sigma by fixing a Laplace likelihood which contains sigma and is minimised with respect to the parameter vector theta. They also observe that this inclusion of aleatoric uncertainty can be interpreted as loss attenuation.
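For concreteness, the loss-attenuation reading can be sketched as the per-pixel Laplace negative log-likelihood, nll = |y - mu|/b + log(2b); predicting s = log b keeps it numerically stable. This is my own minimal numpy illustration, not the authors' code:

```python
import numpy as np

def aleatoric_laplace_loss(mu, s, y):
    # Residuals are down-weighted where predicted noise exp(s) is large,
    # and the +s term penalises predicting high noise everywhere.
    return np.mean(np.abs(y - mu) * np.exp(-s) + s)
```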
This novel method is then applied to pixel-wise depth regression and semantic segmentation. For semantic segmentation, a notable increase in performance is achieved by modelling both aleatoric and epistemic uncertainty.
In Chapter 5, an analysis of the effects of epistemic and aleatoric uncertainty follows. Notably, the authors experimentally validate the well-known claims that aleatoric uncertainty cannot be reduced with more data, and that only epistemic uncertainty increases for out-of-data inputs. The paper concludes by claiming that aleatoric uncertainty is particularly important for large data situations and real-time applications, while epistemic uncertainty is particularly important for safety-critical applications and small datasets.
% Comment
I am skeptical about the classification, used in this paper, of the uncertainties which can arise in this setting. While I agree with the definition of aleatoric uncertainty, I do not agree that epistemic uncertainty is the same as model uncertainty. For me, model uncertainty is the ignorance about the true physical model which creates the data. Epistemic uncertainty is supposed to capture everything which is unknown, but is deterministic and could in theory be known. So, I think that the numerical uncertainty is missing here. In high-dimensional optimisation problems (as they arise in deep learning), it is completely unclear whether the optimisation algorithm has converged. I would even expect that the numerical uncertainty could be a dominant source of error, in particular when the computation has to be performed online, e.g. when the self-driving car learns while it's driving. Hence, I would like to ask the authors for their thoughts on this.
The second issue I want to raise is the following: The authors convincingly argue that epistemic uncertainty is reduced when more data is available. In statistics, people talk about the statistical consistency of a model. A model is statistically consistent when its output is a Dirac measure on the ground truth, in the limit of infinite data. It would be very interesting to see whether statistical consistency results for deep learning (if they exist) can be connected to the paper's pitch on the relative unimportance of epistemic uncertainty in big data situations.
Since I think (as explained above) that the classification of uncertainties is missing the numerical uncertainty of the optimiser, I would argue that the paper does not completely live up to comprehensively answering the question asked in the title. While it is fine that this is not part of the paper, I think it should be discussed in the introduction. Since the paper is otherwise good, I vote for 'accept'.
nips_2017_1548 | Variational Inference via χ Upper Bound Minimization
Variational inference (VI) is widely used as an efficient alternative to Markov chain Monte Carlo. It posits a family of approximating distributions q and finds the closest member to the exact posterior p. Closeness is usually measured via a divergence D(q||p) from q to p. While successful, this approach also has problems. Notably, it typically leads to underestimation of the posterior variance. In this paper we propose CHIVI, a black-box variational inference algorithm that minimizes D_χ(p||q), the χ-divergence from p to q. CHIVI minimizes an upper bound of the model evidence, which we term the χ upper bound (CUBO). Minimizing the CUBO leads to improved posterior uncertainty, and it can also be used with the classical VI lower bound (ELBO) to provide a sandwich estimate of the model evidence. We study CHIVI on three models: probit regression, Gaussian process classification, and a Cox process model of basketball plays. When compared to expectation propagation and classical VI, CHIVI produces better error rates and more accurate estimates of posterior variance. | Continuing on the recent research trend of proposing alternative divergences to perform variational inference, the author proposed a new variational objective CUBO for VI. Inference is equivalent to minimizing CUBO with respect to the variational distributions. CUBO is a proxy to the Chi-divergence between true posteriors and variational distributions, and is also an upper bound of the model's log-evidence.
The author claimed that the key advantages of CHIVI over standard VI are the EP-like mass-covering property of the resulting variational approximations, which tend to over-estimate posterior variance instead of under-estimating it as with standard VI. Compared to EP, CHIVI optimizes a global objective function that is guaranteed to converge. The author also claimed that CUBO and ELBO together form a sandwich bound of the marginal likelihood that can be used for model selection. The technical content of the paper appears to be correct, albeit with some small careless mistakes that I believe are typos instead of technical flaws (see #4 below).
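For reference, my understanding of the objective is CUBO_n = (1/n) log E_q[(p(x,z)/q(z))^n], which a Monte Carlo estimator can compute stably in log space. This is a sketch under my own naming, not the authors' code:

```python
import numpy as np
from scipy.special import logsumexp

def cubo_estimate(log_joint, log_q, z_samples, n=2):
    # z_samples drawn from q; importance log-weights log p(x,z) - log q(z)
    log_w = np.array([log_joint(z) - log_q(z) for z in z_samples])
    return (logsumexp(n * log_w) - np.log(len(z_samples))) / n
```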
The idea of having a sandwich bound for the log-marginal likelihood is certainly good. While the author did demonstrate that the bound does indeed contain the log-marginal likelihood as expected, it is not entirely clear that the sandwich bound will be useful for model selection. This is not demonstrated in the experiments despite being one of the selling points of the paper. It's important to back up this claim using simulated data in experiments.
Another key feature of CHIVI is the over-dispersed approximation that it produces, which the author claimed lead to better estimates of the posterior. I believe a more accurate claim would be 'lead to more conservative estimates of the posterior uncertainty', as the goodness of approximation is fundamentally limited by the richness of the variational distributions, which is orthogonal to the choice of the variational objective function. The author provided some evidence for the claim in the Bayesian probit regression, GPC and the basketball player experiments. However, while it is certainly believable that the 'goodness of posterior approximations' plays a role in the test classification error of the first two experiments, the evidence is a little weak because of the discrete nature of the models' outputs. The author can perhaps make a stronger case by demonstrating the superior posterior uncertainty approximation through Bayesian neural network regression, in which the test LL would be more sensitive to the goodness of approximation, or through active learning experiment in which good posterior uncertainty estimates is crucial. There are examples in both https://arxiv.org/pdf/1502.05336.pdf and https://arxiv.org/pdf/1602.02311.pdf.
The basketball player example provides pretty good visualisations of the posterior uncertainty, but I'm not entirely sure how to interpret the quantitative results in Table 3 confidently. (e.g., Is the difference of 0.006 between CHIVI and BBVI for 'Curry' a big difference? What is the scale of the numbers?)
One aspect that I find lacking in the paper is how computationally demanding CHIVI is compared to standard VI/BBVI. A useful result to present would be side-by-side comparisons of wall clock time required for CHIVI, BBVI and EP in the first two experiments. The variance of the stochastic gradients would certainly play a role in the speed of convergence, and as the author mentioned in the conclusion, the gradients can have high variance. I am quite curious if the reparameterization trick would actually result in lower variance in the stochastic gradients compared to the score function gradient when optimizing CUBO?
One important use of the ELBO in standard VI is to learn the model's parameters/hyper-parameters by maximizing ELBO wrt the parameters. This appears to be impossible with CUBO as it's an upper bound of the log-marginal likelihood. The author used grid search to find suitable hyper-parameters for the GPC example. However, this is clearly problematic even for a moderately large number of hyper-parameters. How can this be addressed under the CUBO framework? What is the relationship between the grid search metric (I assume this is cross-validation error?) and the sandwich bound for the parameters on the grid?
While the paper is pretty readable, there is certainly room for improvement in the clarity of the paper. I find paragraphs in sections 1 and 2 to be repetitive. It is clear enough from the Introduction that the key advantages of CHIVI are the zero-avoiding approximations and the sandwich bound. I don't find it necessary to stress that much more in section 2. Other than that, many equations in the paper do not have numbers. The references to the appendices are also wrong (there is no Appendix D or F). There is an extra period in line 188.
The Related Work section is well-written. Good job!
Other concerns/problems:
1. In the paper, a general CUBO_n is proposed where n appears to be a hyper-parameter of the bound. What was the choice of n in the experiments? How was it selected? How sensitive are the results to the choice of n? What property should one expect in the resulting approximation given a specific n? Is there any general strategy that one should adopt to select the 'right' n?
2. In line 60, what exactly is an 'inclusive divergence'?
3. In line 118 and 120, the author used the word 'support' to describe the spread of the distributions. I think a more suitable word here would be 'uncertainty'.
4. The equation in line 125 appears to be wrong. Shouldn't there be a line break before the last equal sign, and shouldn't the last expression be equal to E_q[(\frac{p(z,x)}{q(z)})^2]?
5. Line 155: 'call' instead of 'c all'
6. Line 189: The proposed data sub-sampling scheme is also similar to the scheme first proposed in the Stochastic Variational Inference paper by Hoffman et. al., and should be cited? (http://jmlr.org/papers/volume14/hoffman13a/hoffman13a.pdf)
7. Line 203: Does KLVI really fail? Or does it just produce worse approximations compared to Laplace/EP? Also, 'typically' instead of 'typical'.
8. Line 28/29: KLVI faces difficulties with light-tailed posteriors when the variational distribution has heavier tails. Does CHIVI not face the same difficulty given the same heavy-tailed variational distributions? |
nips_2017_459 | Safe and Nested Subgame Solving for Imperfect-Information Games
In imperfect-information games, the optimal strategy in a subgame may depend on the strategy in other, unreached subgames. Thus a subgame cannot be solved in isolation and must instead consider the strategy for the entire game as a whole, unlike perfect-information games. Nevertheless, it is possible to first approximate a solution for the whole game and then improve it in individual subgames. This is referred to as subgame solving. We introduce subgame-solving techniques that outperform prior methods both in theory and practice. We also show how to adapt them, and past subgame-solving techniques, to respond to opponent actions that are outside the original action abstraction; this significantly outperforms the prior state-of-the-art approach, action translation. Finally, we show that subgame solving can be repeated as the game progresses down the game tree, leading to far lower exploitability. These techniques were a key component of Libratus, the first AI to defeat top humans in heads-up no-limit Texas hold'em poker. | The authors present algorithms and experimental results for solving zero-sum imperfect information extensive-form games, with heads-up no-limit Texas Hold'em poker as the motivating application.
They begin by reviewing the distinction between "unsafe" and "safe" subgame solving. In unsafe subgame solving, the opponent's strategy outside the subgame, and hence the distribution of the player's state inside the initial information set, is held fixed. In safe subgame solving, the opponent's strategy in the subgame's head is allowed to vary as well, and hence the distribution of true states is responsive to the player's strategy.
They then introduce "reach" subgame solving, in which a "gift" (the additional value that the opponent could have obtained by choosing a different subgame) is computed for each path that leads to the subgame. These gifts are added to the margin for each information set.
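My rough reading of the construction, as pseudocode (the names and the exact bookkeeping are mine, not the authors'; the paper also discusses how gifts can be split across paths):

```python
def reach_margins(path_infosets, path_value, best_alternative, base_margins):
    # gift at I: extra value the opponent forwent by staying on the path to the subgame
    gift = sum(max(0.0, best_alternative[I] - path_value[I]) for I in path_infosets)
    # relax the safe-solving margins by the accumulated gift along this path
    return {I: m + gift for I, m in base_margins.items()}
```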
Finally, they introduce nested subgame solving as a method of dealing with large action spaces: the trunk strategy is computed for an abstracted version of the game, but any time the opponent plays an action that is not in the abstraction, a new subgame is generated and solved on-the-fly that contains the action.
Experimental results suggest that the two methods combined are extremely effective in practice.
Overall I found the exposition well organized and easy to follow, and the performance of the algorithm seems impressive. I have only minor comments about the presentation.
1. Gifts are motivated as "allowing P2 to be less concerned with P1 reaching the subgame along that path" (L.173-174). My guess is that the gift can be added to the margin because it's a measure of how unattractive a branch is to the opponent, and the goal of maxmargin solving is to make the subgame as unattractive as possible to the opponent; but a little more concreteness / hand-holding would be helpful here.
2. At L.297-303, the authors note that in spite of having better theoretical guarantees, the no-split variants seem to always outperform the split variants. They note that this is not necessarily the case if the gifts are scaled up too aggressively; however, that does not seem directly relevant to the point (I don't believe the experimental results include scaling rather than just failure to divide).
If the full gift is simply awarded to every subgame, with no division but also no scaling, is exploitability ever worse in practice? If so, can the authors provide an example? If not, do the authors have an intuition for why that might be?
At L.210 the authors say that any split is sufficient for the theoretical guarantees, but it seems clear that some splits will leave more on the table than others (as an extreme example, imagine awarding the entire gift to the lowest-probability subgame; this might be close to not using Reach at all).
3. At L.88: "Figure 4a" probably should be "Figure 1". |
nips_2017_2127 | Deep Hyperspherical Learning
Convolution as inner product has been the founding basis of convolutional neural networks (CNNs) and the key to end-to-end visual representation learning. Benefiting from deeper architectures, recent CNNs have demonstrated increasingly strong representation abilities. Despite such improvement, the increased depth and larger parameter space have also led to challenges in properly training a network. In light of such challenges, we propose hyperspherical convolution (SphereConv), a novel learning framework that gives angular representations on hyperspheres. We introduce SphereNet, deep hyperspherical convolution networks that are distinct from conventional inner product based convolutional networks. In particular, SphereNet adopts SphereConv as its basic convolution operator and is supervised by generalized angular softmax loss, a natural loss formulation under SphereConv. We show that SphereNet can effectively encode discriminative representation and alleviate training difficulty, leading to easier optimization, faster convergence and comparable (even better) classification accuracy over convolutional counterparts. We also provide some theoretical insights for the advantages of learning on hyperspheres. In addition, we introduce the learnable SphereConv, i.e., a natural improvement over prefixed SphereConv, and SphereNorm, i.e., hyperspherical learning as a normalization method. Experiments have verified our conclusions. | This paper presents a novel architecture, SphereNet, which replaces the traditional dot product with geodesic distance in the convolution operators and fully-connected layers. SphereNet also constrains the softmax weights to have unit norm for the angular softmax. The results show that SphereNet can achieve superior performance in terms of accuracy and convergence rate, as well as mitigating the vanishing/exploding gradients in deep networks.
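For readers unfamiliar with the operator, a one-filter sketch of the angular response (my own minimal illustration; the paper also considers other operators, e.g. a learnable SphereConv):

```python
import numpy as np

def sphere_conv(w, x, eps=1e-8):
    # response depends only on the angle between filter w and patch x
    cos = np.dot(w, x) / (np.linalg.norm(w) * np.linalg.norm(x) + eps)
    theta = np.arccos(np.clip(cos, -1.0, 1.0))
    return 1.0 - 2.0 * theta / np.pi  # linear variant: maps [0, pi] to [1, -1]
```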
Novelty: Replacing dot product similarity with angular similarity has been widely explored in the deep learning literature. With that being said, most works focus on using angular similarity for softmax or loss functions. This paper introduces a spherical operation for convolution, which is novel in the literature as far as I know.
Significance: The spherical operations the paper introduces achieve a faster convergence rate and better accuracy. They also mitigate the vanishing/exploding gradients brought by dot products, which is a long-standing problem. This will be interesting to many people in the ML community.
Improvement: I have several suggestions and questions as listed below:
- For angular Softmax, the bias is removed from the equation. However, the existence of the bias is very important for calibrating the output, especially for imbalanced datasets. It would be great if the authors could add that back. Otherwise, it's better to try the current model on an imbalanced dataset to measure the effect.
- In the first two subfigures of Figure 4, the baseline methods such as the standard CNN actually converge faster at the very beginning. But there is a big accuracy drop in the middle. What is the root cause for that? The paper does not seem to give any explanation.
- Though the experimental results show a faster convergence rate in terms of accuracy vs. iteration, it's unknown whether this observation still holds true for accuracy vs. time. Traditional dot product operators leverage fast matrix multiplication libraries for speedup, while there is much less support for angular operators.
Minor on grammar:
- Line 216, "this problem will automatically solved" -> "be solved"
- Line 319, "The task can also viewed as a ..." -> "be viewed" |
nips_2017_3547 | Stochastic Mirror Descent in Variationally Coherent Optimization Problems
In this paper, we examine a class of non-convex stochastic optimization problems which we call variationally coherent, and which properly includes pseudo-/quasiconvex and star-convex optimization problems. To solve such problems, we focus on the widely used stochastic mirror descent (SMD) family of algorithms (which contains stochastic gradient descent as a special case), and we show that the last iterate of SMD converges to the problem's solution set with probability 1. This result contributes to the landscape of non-convex stochastic optimization by clarifying that neither pseudo-/quasi-convexity nor star-convexity is essential for (almost sure) global convergence; rather, variational coherence, a much weaker requirement, suffices. Characterization of convergence rates for the subclass of strongly variationally coherent optimization problems as well as simulation results are also presented. | The paper analyzes the stochastic mirror descent (SMD) algorithm for a specific class of non-convex functions satisfying the variational coherence (VC) assumption. The authors study mirror descent by establishing a correspondence between the SMD iterates and an ordinary differential equation. They introduce a new divergence measure named Fenchel coupling and exploit its monotonicity property (due to the VC assumption) to demonstrate that the SMD algorithm decreases its objective value until it gets close to the optimum.
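As a point of reference, here is the classical special case of SMD with the entropic mirror map on the simplex (exponentiated gradient); this is my own sketch, while the paper's analysis covers general mirror maps:

```python
import numpy as np

def smd_simplex(grad_fn, x0, steps, alpha):
    # grad_fn returns a noisy gradient sample; alpha(k) is the step-size schedule
    y = np.log(x0)                      # dual (mirror) variable
    x = np.array(x0, dtype=float)
    for k in range(steps):
        y = y - alpha(k) * grad_fn(x)   # gradient step in the dual space
        x = np.exp(y - y.max())
        x = x / x.sum()                 # mirror back onto the simplex
    return x
```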
The paper is clearly written and, omitting certain details, the proof is relatively easy to follow. The paper, however, has some significant problems in its presentation:
1) Interest for machine learning community: The main contribution of this paper is a proof of asymptotic convergence of SMD for certain non-convex functions. Although this is an interesting result for the optimization community, I'm not sure this will be of broad interest to the ML community, which is typically more interested in deriving convergence rates. The authors do not give examples of ML problems where this new analysis could be relevant. There are also no experimental results. Can you please try to address this issue in the rebuttal?
2) Related work: The paper does not cover existing work in optimization for non-convex functions, nor does it cover prior work relating ODEs to numerical optimization. I think existing proofs of convergence for stochastic gradient descent and proximal methods should be mentioned, e.g. Ghadimi, Saeed, and Guanghui Lan. "Stochastic first-and zeroth-order methods for nonconvex stochastic programming." SIAM Journal on Optimization 23.4 (2013): 2341-2368.
For ODEs, see references in [18] as well as
Stochastic Approximation and Recursive Algorithms and Applications
Kushner, Harold, Yin, George
I’m sure the authors are familiar with this line of work so please cite it appropriately.
Fenchel coupling
The paper uses the Fenchel coupling to quantify the distance between primal and dual variables thereby serving as an energy function to prove convergence. Can the authors comment on the difficulty of directly using the discrete primal variable to prove convergence? e.g. using a proof similar to SGD for non-convex functions (Stochastic First- and Zeroth-order Methods for Nonconvex Stochastic Programming, Saeed Ghadimi, Guanghui Lan)
Can the authors comment on whether other types of divergences would be appropriate for the analysis? It seems to me that the monotonicity property is due to the variational coherence assumption (as can be seen in Eq. 3.39 in the appendix); can you confirm this? If so, then I wonder if the role of the Fenchel coupling is really that important for the analysis to hold?
Convergence rate: As mentioned earlier, the ML community would probably be more interested in a convergence rate. Can the authors elaborate on the difficulty of establishing a rate using their ODE technique?
Theorem 3.4: This result was at first puzzling to me. It says that the SMD algorithm oscillates in a region near the optimum. This is known to happen for SGD with a constant step size, whereas using a decreasing step size (plus the assumptions mentioned in the paper) then guarantees convergence to an optimum. Can the authors clarify this?
Analysis:
Page 5 of the appendix, above line 92: it should be \sum_k \alpha_k = +\infty
Line 174: you are citing a book; could you give a more precise reference?
--------------
Post-rebuttal:
I find the answers in the rebuttal convincing and have updated my recommendation score accordingly. Overall, I think the paper would benefit from some clarifications, which I think are easily doable between now and the camera-ready version. This includes:
1) Importance of the Fenchel coupling. The use of the Fenchel coupling (instead of the commonly used Bregman divergence) is a nice trick to get convergence for the type of non-convex functions considered in the paper. Also the ODE technique used in this paper could potentially push the development of similar techniques for different optimization techniques so this could really be valuable for the ML community. The submitted version of the paper did not explain this well but the answer provided in the rebuttal is convincing. I would recommend developing this aspect even more in the revised version.
2) Interest regarding the ML community: they should develop this more in the introduction. Adding experimental results as they promised to R3 would be valuable as well.
3) As R2 pointed out, the intuition behind the analysis is not always clear. Given the rather convincing answers in the rebuttal, I think the authors can easily improve this aspect in the revised version.
4) Minor modifications: Fix the numbering and clarify the use of certain terms whose meaning might differ for different communities, e.g. linear decrease, last iterate. |
nips_2017_1115 | Minimal Exploration in Structured Stochastic Bandits
This paper introduces and addresses a wide class of stochastic bandit problems where the function mapping the arm to the corresponding reward exhibits some known structural properties. Most existing structures (e.g. linear, Lipschitz, unimodal, combinatorial, dueling, ...) are covered by our framework. We derive an asymptotic instance-specific regret lower bound for these problems, and develop OSSB, an algorithm whose regret matches this fundamental limit. OSSB is not based on the classical principle of "optimism in the face of uncertainty" or on Thompson sampling, and rather aims at matching the minimal exploration rates of sub-optimal arms as characterized in the derivation of the regret lower bound. We illustrate the efficiency of OSSB using numerical experiments in the case of the linear bandit problem and show that OSSB outperforms existing algorithms, including Thompson sampling. | Problem Studied: The paper introduces and studies a wide range of stochastic bandit problems with known structural properties. This includes linear, Lipschitz, unimodal, combinatorial, and dueling bandits.
Results: The paper derives an instance-specific regret lower bound based on the structure and gives an algorithm which matches the lower bound. The algorithm needs to solve a semi-infinite LP in the general case, but they show that it can be done efficiently for special cases (which follows from previous literature).
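For context, such instance-specific bounds typically take the familiar Graves-Lai form; schematically (my paraphrase, not the paper's exact statement): \liminf_{T \to \infty} R(T)/\log T \ge C(\theta), where C(\theta) = \min_{\eta \ge 0} \sum_a \eta(a) \Delta_\theta(a), subject to information constraints ensuring that every confusing alternative instance can be statistically ruled out. As I read it, the semi-infinite LP mentioned above is the optimization defining C(\theta).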
Comments: The paper introduces a nice general framework covering several existing known results for bandit problems such as linear, Lipschitz, unimodal, combinatorial, and dueling bandits. This encapsulates all these in a very broad theoretical framework which can be useful for new bandit problems.
nips_2017_121 | Learning to See Physics via Visual De-animation
We introduce a paradigm for understanding physical scenes without human annotations. At the core of our system is a physical world representation that is first recovered by a perception module and then utilized by physics and graphics engines. During training, the perception module and the generative models learn by visual de-animation -interpreting and reconstructing the visual information stream. During testing, the system first recovers the physical world state, and then uses the generative models for reasoning and future prediction. Even more so than forward simulation, inverting a physics or graphics engine is a computationally hard problem; we overcome this challenge by using a convolutional inversion network. Our system quickly recognizes the physical world state from appearance and motion cues, and has the flexibility to incorporate both differentiable and non-differentiable physics and graphics engines. We evaluate our system on both synthetic and real datasets involving multiple physical scenes, and demonstrate that our system performs well on both physical state estimation and reasoning problems. We further show that the knowledge learned on the synthetic dataset generalizes to constrained real images. | This paper presents an approach to physical scene understanding which utilizes the combination of physics engines and renderers. The approach trains a network to predict a scene representation based on a set of input frames and subject to the constraints of the physics engine. This is trained using either SGD or reinforcement learning depending on whether a differentiable physics engine is used. The approach is applied to several simple scenarios with both real and synthetic data.
Overall the idea in the paper is interesting and promising; however, the paper as it's written has several issues. Some of these may be able to be addressed in a rebuttal.
First, there are some significant missing references, specifically related to using physical models in human tracking: Kyriazis and Argyros (CVPR 2013, CVPR 2014); Pham et al (PAMI 2016); Brubaker et al (ICCV 2009, IJCV 2010); Vondrak et al (SIGGRAPH 2012, PAMI 2013). In terms of simple billiards-like scenarios, see also Salzmann and Urtasun (ICCV 2011). This is a significant blind spot in the related work and should be addressed.
Second, the paper is missing some important technical details. In particular, it is never specified how the system actually works at test time. Is it simply run on successive sets of three input frames independently using a forward pass of the perception model? Does this mean that the physics and graphics rendering engines are not used at all at test time? If this is the case, why not use the physics engine at test time? It could, for instance, be used as a smoothing prior on the per-frame predictions of the perception model. This issue is particularly relevant for prediction of stability in the block world scenario. Is the network only run on a single set of three frames? This makes it particularly difficult to assess the technical aspects of this work.
Other comments:
- Absolute mass is never predictable without knowing something about what an object is made of. Only relative mass plays a role in the physics, and visual perception can only tell you volume (assuming calibration), not density. Any network which appears to "learn" to predict absolute mass is really just learning biases about mass/density from the training set. This is fine, but important to recognize so that the claims don't go overboard.
- The table entries and captions are inconsistent in Fig 7(b-c). The table used "full" and "full+", whereas the caption references "joint" and "full". |
nips_2017_2264 | Unified representation of tractography and diffusion-weighted MRI data using sparse multidimensional arrays
Recently, linear formulations and convex optimization methods have been proposed to predict diffusion-weighted Magnetic Resonance Imaging (dMRI) data given estimates of brain connections generated using tractography algorithms. The size of the linear models comprising such methods grows with both dMRI data and connectome resolution, and can become very large when applied to modern data. In this paper, we introduce a method to encode dMRI signals and large connectomes, i.e., those that range from hundreds of thousands to millions of fascicles (bundles of neuronal axons), by using a sparse tensor decomposition. We show that this tensor decomposition accurately approximates the Linear Fascicle Evaluation (LiFE) model, one of the recently developed linear models. We provide a theoretical analysis of the accuracy of the sparse decomposed model, LiFE_SD, and demonstrate that it can reduce the size of the model significantly. Also, we develop algorithms to implement the optimization solver using the tensor representation in an efficient way. | The paper proposes an efficient approximation of the "Linear Fascicle Evaluation model" (LiFE), which is used for tractography. The key issue with the LiFE model is that it computationally scales poorly such that it is impractical to analyze MRI data at full resolution. The authors propose a sparse approximation based on a Tucker model (tensor model). The authors prove that the approximation is both good and achieves a high compression; this is further evaluated experimentally. The model appears solid and likewise for the analysis. My key objection to the paper is that the actual evaluation of the model for tractography is treated very superficially (just a couple of lines of text). I understand that within the tight boundaries of a NIPS paper, the authors do not have much space for such an evaluation, so I'm willing to forgive this limitation of the paper.
Other than that, the paper is very well written, with intuitive figures guiding the reader along the way. As a non-expert, I found the paper a pleasure to read. I do stress that I'm not an expert in either tractography or tensor models.
A few minor comments (not important in the rebuttal):
*) In line 36, the authors point out that tractography is poorly understood and that competing methods can give wildly different results. The authors then argue that this motivates convex models. I don't buy this argument -- the reason different models give different results is more likely due to different modeling assumptions, rather than whether a global optimum has been found or not.
*) After the propositions: the authors take the time to explain in plain words the key aspects of their propositions. Most authors neglect this, and I just want to say that such efforts make the paper much easier to read. So: thank you!
*) In line 195, you write: "...dataset Fig. 4.a." There's something wrong with that sentence. Perhaps the figure reference should be in a parenthesis?
*) In line 202: typo: "aproach" --> "approach".
== Post rebuttal ==
I have read the other reviews and the rebuttal, and stand by my evaluation that this paper should be accepted. |
nips_2017_1564 | Bridging the Gap Between Value and Policy Based Reinforcement Learning
We establish a new connection between value and policy based reinforcement learning (RL) based on a relationship between softmax temporal value consistency and policy optimality under entropy regularization. Specifically, we show that softmax consistent action values correspond to optimal entropy regularized policy probabilities along any action sequence, regardless of provenance. From this observation, we develop a new RL algorithm, Path Consistency Learning (PCL), that minimizes a notion of soft consistency error along multi-step action sequences extracted from both on-and off-policy traces. We examine the behavior of PCL in different scenarios and show that PCL can be interpreted as generalizing both actor-critic and Q-learning algorithms. We subsequently deepen the relationship by showing how a single model can be used to represent both a policy and the corresponding softmax state values, eliminating the need for a separate critic. The experimental evaluation demonstrates that PCL significantly outperforms strong actor-critic and Q-learning baselines across several benchmarks.
By contrast, value based methods, such as Q-learning [44,22,30,42,21], can learn from any trajectory sampled from the same environment. Such "off-policy" methods are able to exploit data from other sources, such as experts, making them inherently more sample efficient than on-policy methods [10]. Their key drawback is that off-policy learning does not stably interact with function approximation [35,Chap.11]. The practical consequence is that extensive hyperparameter tuning can be required to obtain stable behavior. Despite practical success [22], there is also little theoretical understanding of how deep Q-learning might obtain near-optimal objective values.
Ideally, one would like to combine the unbiasedness and stability of on-policy training with the data efficiency of off-policy approaches. This desire has motivated substantial recent work on off-policy actor-critic methods, where the data efficiency of policy gradient is improved by training an offpolicy critic [19,21,10]. Although such methods have demonstrated improvements over on-policy actor-critic approaches, they have not resolved the theoretical difficulty associated with off-policy learning under function approximation. Hence, current methods remain potentially unstable and require specialized algorithmic and theoretical development as well as delicate tuning to be effective in practice [10,41,8].
In this paper, we exploit a relationship between policy optimization under entropy regularization and softmax value consistency to obtain a new form of stable off-policy learning. Even though entropy regularized policy optimization is a well studied topic in RL [46,39,40,47,5,4,6,7]-in fact, one that has been attracting renewed interest from concurrent work [25,11]-we contribute new observations to this study that are essential for the methods we propose: first, we identify a strong form of path consistency that relates optimal policy probabilities under entropy regularization to softmax consistent state values for any action sequence; second, we use this result to formulate a novel optimization objective that allows for a stable form of off-policy actor-critic learning; finally, we observe that under this objective the actor and critic can be unified in a single model that coherently fulfills both roles. | SUMMARY:
The paper considers the entropy-regularized discounted Markov Decision Process (MDP) setting, and shows the relation between the optimal value, action-value, and policy. Moreover, it shows that the optimal value function and policy satisfy a temporal consistency in the form of a Bellman-like equation (Theorem 1), which can also be extended to an n-step version (Corollary 2).
The paper introduces Path Consistency Learning (PCL) by enforcing the temporal consistency, which is essentially a Bellman residual minimization procedure (Section 5).
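For reference, the d-step soft consistency the paper enforces can be written as a residual that should vanish along any sub-trajectory; this is my transcription of the idea, not the authors' code:

```python
import numpy as np

def path_consistency_residual(V, log_pi, rewards, gamma, tau):
    # V: values [v(s_t), ..., v(s_{t+d})]; log_pi, rewards: length-d arrays
    # tau: entropy-regularization temperature
    d = len(rewards)
    disc = gamma ** np.arange(d)
    return -V[0] + gamma**d * V[-1] + np.sum(disc * (rewards - tau * log_pi))

# PCL minimizes the squared residual over both on-policy and off-policy sub-trajectories.
```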
SUMMARY OF EVALUATION:
Quality: Parts of the paper are sound (Sections 3 and 4); parts of it are not (Section 5)
Clarity: The paper is well-written.
Originality: Some results seem to be novel, but similar ideas and analysis have been proposed/done before.
Significance: It might become an important paper in the entropy regularized RL literature.
EVALUATION:
This is a well-written paper that has some interesting results regarding the relation between V, Q, and pi for the entropy regularized MDP setting. That being said, I have some concerns, which I hope the authors clarify.
- Enforcing temporal consistency by minimizing its l2 error is minimizing the empirical Bellman error. Why do we need a new terminology (Path consistency) for this?
- The empirical Bellman error, either in the standard form or in the new form in this paper, is a biased estimate of the true Bellman error. Refer to the discussion in Section 3 of
Antos, Szepesvari, Munos, “Learning near-optimal policies with Bellman-residual minimization based fitted policy iteration and a single sample path,” Machine Learning Journal, 2008.
Therefore, for stochastic dynamics, minimizing Equation (15) does not lead to the desired result. This is my most crucial comment, and I hope the authors provide an answer to it.
- Claiming that this paper is bridging the gap between the value and policy based RL is beyond what has actually been shown. The paper shows the relation between value and policy for a certain type of reward function (which has the entropy of the policy as an additional reward term).
- On LL176-177, it is claimed that a new actor-critic paradigm is presented where the policy is not distinct from the values. Isn't this exactly the same thing as in the conventional value-based approach, for which the optimal policy has a certain relationship with the optimal value function? In the standard setting, the greedy policy is optimal; here we have the relationship of Equation (10).
- It might be noted that relations similar to Equations (7), (8), (9), and (10) have been observed in the context of Maximum Entropy Inverse Optimal Control. See Corollary 2 and the discussion after that in the following paper:
Ziebart, et al., “The Principle of Maximum Causal Entropy for Estimating Interacting Processes,” IEEE Transactions on Information Theory, 2013.
Here the Bellman equations are very similar to what we have in this paper.
Ziebart et al.’s framework is extended by the following paper to address problems with large state spaces:
Huang, et al., “Approximate MaxEnt Inverse Optimal Control and its Application for Mental Simulation of Human Interactions,” AAAI, 2016.
This latter paper concerns inverse optimal control (i.e., RL), but the approximate value iteration algorithms used for solving the IOC essentially use the same Bellman equations as we have here. The paper might want to compare the difference between these two approaches.
- One class of methods that can bridge the gap between value and policy based RL algorithms is classification-based RL algorithms. The paper ignores these approaches in its discussion. |
nips_2017_789 | Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results
The recently proposed Temporal Ensembling has achieved state-of-the-art results in several semi-supervised learning benchmarks. It maintains an exponential moving average of label predictions on each training example, and penalizes predictions that are inconsistent with this target. However, because the targets change only once per epoch, Temporal Ensembling becomes unwieldy when learning large datasets. To overcome this problem, we propose Mean Teacher, a method that averages model weights instead of label predictions. As an additional benefit, Mean Teacher improves test accuracy and enables training with fewer labels than Temporal Ensembling. Without changing the network architecture, Mean Teacher achieves an error rate of 4.35% on SVHN with 250 labels, outperforming Temporal Ensembling trained with 1000 labels. We also show that a good network architecture is crucial to performance. Combining Mean Teacher and Residual Networks, we improve the state of the art on CIFAR-10 with 4000 labels from 10.55% to 6.28%, and on ImageNet 2012 with 10% of the labels from 35.24% to 9.11%. | Summary-
This paper shows that an exponential moving average (in parameter space) of
models produced during SGD training of neural networks can be used as a good
teacher model for the student model which corresponds to the last (current) SGD
iterate. The teacher model is used to provide a target (softmax probabilities)
which is regressed to by the student model on both labelled and unlabelled
data. This additional loss serves as a regularizer. In other words, the model
is being trained to be consistent with its Polyak-averaged version.
Furthermore, noise is injected into both the student and teacher models to
increase the regularization effect (similar motivation as dropout and other
related methods). This is shown to be more effective than (1) Just using noise
(and no moving average), and (2) using a moving average over softmax
probabilities (not parameters) which is updated once every epoch.
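The two ingredients can be sketched in a few lines (framework-agnostic pseudocode of my own; the names are mine):

```python
def ema_update(teacher, student, decay=0.99):
    # teacher/student: dicts of weight arrays; the teacher trails the student's weights
    for k in teacher:
        teacher[k] = decay * teacher[k] + (1.0 - decay) * student[k]

def consistency_loss(student_probs, teacher_probs):
    # applied to both labelled and unlabelled inputs, each under its own noise
    return ((student_probs - teacher_probs) ** 2).mean()
```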
Strengths-
- The proposed teacher model is a convenient one to maintain. It just requires keeping a moving average.
- The additional computational cost is just that of a forward prop and benefits seem to be worth it (at least for small amounts of labelled data).
- The supplementary material provides enough experimental details.
Weakness-
- Comparison to other semi-supervised approaches: Other approaches such as variants of Ladder networks would be relevant models to compare to.
Questions/Comments-
- In Table 3, what is the difference between \Pi and \Pi (ours)?
- In Table 3, is EMA-weighting used for other baseline models ("Supervised", \Pi, etc.)? To ensure a fair comparison, it would be good to know that all the models being compared make use of the EMA benefits.
- The proposed model benefits from two factors: noise and keeping an exponential moving average. It would be good to see how much each factor contributes on its own. The \Pi model captures just the noise part, so it would be useful to know how much gain can be obtained by just using a noise-free exponential moving average.
- If averaging in parameter space is being used, it seems that it should be possible to apply the consistency cost in the intermediate layers of the model as well. That could potentially provide a richer consistency gradient. Was this tried?
Minor comments and typos-
- In the abstract: "The recently proposed Temporal Ensembling has ... ": Please cite.
- "when learning large datasets." -> "when learning on large datasets."
- "zero-dimensional data points of the input space": It may not be accurate to say that the data points are zero-dimensional.
- "barely applying", " barely replicating" : "barely" -> "merely"
- "softmax output of a model does not provide a good Bayesian approximation outside training data". Bayesian approximation to what ? Please explain. Any model will have some more generalization error outside training data. Is there another source of error being referred to here ?
Overall-
The paper proposes a simple and effective way of using unlabelled data and
improving generalization with labelled data. The most attractive property is
probably the low overhead of using this in practice, so it is quite likely that
this approach could be impactful and widely used. |
nips_2017_475 | Coded Distributed Computing for Inverse Problems
Computationally intensive distributed and parallel computing is often bottlenecked by a small set of slow workers known as stragglers. In this paper, we utilize the emerging idea of "coded computation" to design a novel error-correcting-code inspired technique for solving linear inverse problems under specific iterative methods in a parallelized implementation affected by stragglers. Example machine-learning applications include inverse problems such as personalized PageRank and sampling on graphs. We provably show that our coded-computation technique can reduce the mean-squared error under a computational deadline constraint. In fact, the ratio of mean-squared error of replication-based and coded techniques diverges to infinity as the deadline increases. Our experiments for personalized PageRank performed on real systems and real social networks show that this ratio can be as large as 10^4. Further, unlike coded-computation techniques proposed thus far, our strategy combines outputs of all workers, including the stragglers, to produce more accurate estimates at the computational deadline. This also ensures that the accuracy degrades "gracefully" in the event that the number of stragglers is large. | The paper considers a distributed computing scenario where a bunch of processors have different delays in computing. A job (here finding a solution of a system of linear equations) is distributed among the processors. After a certain deadline the outputs of the processors are combined and the result is produced.
Previous literature mainly neglects the slower processors and works with the best k out of n processors' outputs. Usually an MDS code is used to distribute the task among the processors. I understand the main contribution is to include the stragglers' computations and not treat them as mere erasures.
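For readers unfamiliar with the setup, here is a toy sketch of generic MDS-coded matrix-vector multiplication (the baseline idea referred to above, not this paper's scheme): k row-blocks are encoded into n > k coded blocks, and the result is recoverable from any k worker outputs, so up to n - k stragglers can be ignored.

```python
import numpy as np

def encode_tasks(A_blocks, G):
    # A_blocks: list of k row-blocks of A; G: n x k generator matrix (e.g. of an MDS code)
    return [sum(G[i, j] * A_blocks[j] for j in range(len(A_blocks)))
            for i in range(G.shape[0])]

def decode(results, indices, G):
    # results: outputs A_coded[i] @ x from any k non-straggling workers (indices of length k)
    return np.linalg.solve(G[indices, :], np.stack(results))
```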
I have a few critical comments regarding the work.
1. Correctness of results: I think there are serious problems in the assumptions (see lines 116-117). If the x_i's are all iid, it is not necessary that the r_i's are all iid as well. This only happens when M is a square invertible matrix. For example, if a row of M is zero then the corresponding entry of r_i is going to be always zero. The main results all depend on this assumption, which makes me doubtful about the veracity of the results.
2. Assumptions without justification: From the introduction it seems like the main result is contained in Theorem 4.4. This theorem is true under two very specific assumptions. One, the solutions of a system of linear equations are all iid variables; two, the numbers of iterations of each processor within a deadline are all iid random variables. These seem like very stringent conditions, and I see no reason for these to be true in real systems. The authors also do not provide any motivation for these assumptions to be true either. Given this, I am unable to judge whether the theoretical results are of any value here.
Another key condition for the results to be true is n - k = o(\sqrt{n}). Again, it was not explained why this should be the case, even asymptotically.
Also, I do not understand the implication of the random variable C(l) having a large ratio of variance to mean-square. Why would this assumption make any sense?
Overall, the lack of motivation for these assumptions seems concerning.
3. Unfair comparison: The replication scheme that the paper compares their coding scheme with has several problems, and I do not think that it is a good comparison. The assumption is that n - k is very small, i.e., o(\sqrt{n}). So only o(\sqrt{n}) elements are computed twice. The rest of the processors are uncoded. This is a very specific coding scheme, and it is obvious that the Fourier matrix based scheme (which is an MDS code) will beat this scheme.
In the introduction it was claimed that: “To compare with the existing coding technique in [6], ..” But the theoretical comparison to the method of [6] is missing.
4. Novelty: I do not think the presented scheme is new. It is the same MDS code based scheme that was proposed by Lee et al (in ref [6]).
Overall, the same encoding method has been used in both [6] and this paper. The only difference is that the results of the stragglers are also used in this paper while decoding, so it is expected to get a better error rate. But in general, the claim of the introduction, the prevention of ill-conditioning of MDS matrices in the erasure channel, remains unaddressed. It is just that a particular problem is considered in the paper.
5. Experiments: The figures are not properly labeled. In the third plot of Fig. 2, ref [12] is mentioned, which should be [6] (I think). This is quite confusing.
I am further skeptical about this plot. At a certain value of T_deadline, the method of [6] should result in exactly zero error. But this is not observed in the plot. It is also not clear why the mean squared error would increase in the extension of the coded method.
6. More minor comments:
The authors say “iterative methods for linear inverse problem” as if it is a very standard problem; however, it is not to me. A more understandable term would be `finding solutions of a linear system'. So for quite a long time I could not figure out what exact problem they were trying to solve. This creates difficulty in reading the entirety of the introduction/abstract.
\rho has been used for the spectral radius and also in the heavy-tailed distributions.
=============
After the rebuttal:
I was able to clarify my confusion about the independence of the random variables, so I no longer have the concern about correctness. The point that I am still worried about is the unfair comparison with repetition codes! I do not think n - k = o(n) is the correct regime to use repetition codes.
It seems to me from the rebuttal that, if T_deadline is large enough, then coding does not help (which makes sense). Is it clear what the crossover point of T_deadline is?
I am satisfied with the rebuttal on other points.
nips_2017_876 | Testing and Learning on Distributions with Symmetric Noise Invariance
Kernel embeddings of distributions and the Maximum Mean Discrepancy (MMD), the resulting distance between distributions, are useful tools for fully nonparametric two-sample testing and learning on distributions. However, it is rare that all possible differences between samples are of interest -discovered differences can be due to different types of measurement noise, data collection artefacts or other irrelevant sources of variability. We propose distances between distributions which encode invariance to additive symmetric noise, aimed at testing whether the assumed true underlying processes differ. Moreover, we construct invariant features of distributions, leading to learning algorithms robust to the impairment of the input distributions with symmetric additive noise. | Overview: In this paper the authors present a method for making nonparametric methods invariant to noise from positive definite distributions by utilizing estimated characteristic functions while disregarding magnitude and only considering the phase. The authors apply the technique to two sample testing and nonparametric regression on measures, ie each "sample" is a collection of samples. The authors demonstrate that the techniques introduced behave as expected by performing tests on synthetic data and demonstrate reasonable performance of the new two sample tests and new feature on real and semi-synthetic datasets.
Theory: The fundamental idea behind this paper (disregarding the magnitude of a characteristic function) is not terribly revolutionary, but it is elegant, interesting, and likely to be useful, so I think it is worthy of publication. Though the underlying concept is somewhat technical, the authors manage to keep the exposition clear. The authors explain the concept using population versions of the estimators and give what seem to be very reasonable finite sample estimators, but it would be nice to see some sort of consistency-type result to round things off.
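As I understand the construction, the invariant feature is the phase of the empirical characteristic function: adding independent symmetric positive-definite noise multiplies the characteristic function by a real non-negative factor, which the normalization below cancels. A minimal numpy sketch with my own naming:

```python
import numpy as np

def phase_features(X, T, eps=1e-12):
    # X: (n, d) sample; T: (m, d) frequencies
    cf = np.exp(1j * (X @ T.T)).mean(axis=0)   # empirical characteristic function
    return cf / (np.abs(cf) + eps)             # discard magnitude, keep the phase
```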
Experiments: The synthetic experiment clearly demonstrates that the new features behave as one might expect for a two-sample test, but it is disappointing that the technique seems to greatly reduce the test power compared to current nonparametric techniques. This difference is pretty striking in the >=2000 sample regime. While the technique is not effective for the Higgs Boson two-sample test, its ineffectiveness is in itself very intriguing: the difference between the two classes seems to be some sort of psd noise phenomenon. For the last two experiments, the proposed technique works well on the semi-synthetic dataset (real data with synthetic noise added), but not particularly better on the final real-world data experiment. Ultimately the proposed technique doesn't demonstrate decidedly superior testing or regression performance on any real-world dataset, although it does slightly outperform in certain sample regimes.
Exposition: The paper is a pleasure to read but there are some errors and problems which I will list at the end of this review.
Final Eval.: Overall I think the idea is interesting enough to be worthy of publication.
--Corrections By Line--
3: "rarely" should be "rare"
14: "recently" should be "which have recently been" to sound better
53: To be totally mathematically precise perhaps you should state the E is a psd rv again
115: add "the" after "However"
116: I'm assuming that "characteristic function" was meant to be included after "supported everywhere"
119: Should probably mention a bit more or justify why these "cases appear contrived"
123: The notation P_X and P_Y hasn't been introduced, but perhaps it is obvious enough
123: Here K is a function on a pair of probability measures; on line 88, K is a function of two collections of data
134: We need some justification that this is actually an unbiased estimator. A lot of the obvious estimators in the MMD and related techniques literature are not unbiased
208: \chi^2(4)/4 is not technically correct; it's the sample that is divided by 4, not the measure
268: I'm not familiar with this dataset. Please mention the space which the labels lie in. I'm assuming it is R.
Update: I've bumped it up a point after reading the other reviews and rebuttal. Perhaps I was too harsh to begin with. |
nips_2017_1967 | On Optimal Generalizability in Parametric Learning
We consider the parametric learning problem, where the objective of the learner is determined by a parametric loss function. Employing empirical risk minimization, possibly with regularization, the inferred parameter vector will be biased toward the training samples. Such bias is measured by the cross validation procedure in practice where the data set is partitioned into a training set used for training and a validation set, which is not used in training and is left to measure the out-of-sample performance. A classical cross validation strategy is the leave-one-out cross validation (LOOCV) where one sample is left out for validation and training is done on the rest of the samples that are presented to the learner, and this process is repeated on all of the samples. LOOCV is rarely used in practice due to the high computational complexity. In this paper, we first develop a computationally efficient approximate LOOCV (ALOOCV) and provide theoretical guarantees for its performance. Then we use ALOOCV to provide an optimization algorithm for finding the regularizer in the empirical risk minimization framework. In our numerical experiments, we illustrate the accuracy and efficiency of ALOOCV as well as our proposed framework for the optimization of the regularizer. | This paper proposes an efficiently computable approximation of leave-one-out cross validation for parametric learning problems, as well as an algorithm for jointly learning the regularization parameters and model parameters. These techniques seem novel and widely applicable.
The paper starts out clearly written, though perhaps less space could have been spent laying the groundwork, leaving more room for the later sections, where the notation is quite dense.
Can you say anything about the comparison between ALOOCV and LOOCV evaluated on only a subset of the data points (as you mention in l137-140), both in terms of computation cost and approximation accuracy?
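To make the kind of comparison I am asking for concrete, one cheap benchmark is ridge regression, where exact LOOCV has a closed form via the hat matrix, so any approximation (or subset-based brute-force LOOCV) can be checked against it. The sketch below is mine and does not implement the paper's ALOOCV; it only illustrates the comparison protocol.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, lam = 200, 10, 1.0
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

# Exact LOO for ridge: squared residual e_i / (1 - h_ii), H the hat matrix.
G = X @ np.linalg.solve(X.T @ X + lam * np.eye(d), X.T)
loo_exact = ((y - G @ y) / (1.0 - np.diag(G))) ** 2

# Brute-force LOO on a random subset of points, for comparison.
idx = rng.choice(n, size=20, replace=False)
loo_brute = []
for i in idx:
    mask = np.arange(n) != i
    w = np.linalg.solve(X[mask].T @ X[mask] + lam * np.eye(d),
                        X[mask].T @ y[mask])
    loo_brute.append((y[i] - X[i] @ w) ** 2)

print(np.mean(loo_exact), np.mean(loo_exact[idx]), np.mean(loo_brute))
```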
Other comments:
l75: are you referring to PRESS? Please name it then.
l90: "no assumptions on the distribution" -- does that mean, no prior distribution?
Definition 7: the displayed equation seems to belong to the "such that" in the preceding sentence; please pull it into the same sentence. Also, I find it odd that an analytic function doesn't satisfy this definition (due to the "there exists one and only one"). What about a two-dimensional function that has non-differentiabilities in its upper-right quadrant, so that along some cross sections, it is analytic?
l186-187: (relating to the remarks about Def7) This sounds a bit odd; it might be better to say something like "We remark that the theory could be extended to ...".
l250: Are you saying that you are not jointly learning the regularization parameter in this second example? If the material in section 4 doesn't apply here, I missed why; please clarify in that section.
Typographic:
l200: few -> a few
References: ensure capitalization of e.g. Bayes by using {} |
nips_2017_1791 | Adversarial Ranking for Language Generation
Generative adversarial networks (GANs) have had great success at synthesizing data. However, the existing GANs restrict the discriminator to be a binary classifier, and thus limit their learning capacity for tasks that need to synthesize output with rich structures such as natural language descriptions. In this paper, we propose a novel generative adversarial network, RankGAN, for generating high-quality language descriptions. Rather than training the discriminator to learn and assign an absolute binary predicate to an individual data sample, the proposed RankGAN is able to analyze and rank a collection of human-written and machine-written sentences given a reference group. By viewing a set of data samples collectively and evaluating their quality through relative ranking scores, the discriminator is able to make a better assessment, which in turn helps to learn a better generator. The proposed RankGAN is optimized through the policy gradient technique. Experimental results on multiple public datasets clearly demonstrate the effectiveness of the proposed approach. | The paper proposes an alternative parameterization of the GAN discriminator based on relative ranks among a comparison set. The method shows better quantitative results for language generation on three (relatively small) language datasets.
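For concreteness, here is my reading of the ranking score in a small numpy sketch. The cosine-similarity/softmax form and all names below are assumptions on my part and may differ from the paper's exact notation.

```python
import numpy as np

def rank_score(s, U, C, gamma=1.0):
    """Illustrative ranking score of a sentence embedding s against a
    reference set U, relative to a comparison set C (rows = embeddings).
    Assumed form: softmax over cosine similarities, averaged over U."""
    def cos(a, B):
        return B @ a / (np.linalg.norm(a) * np.linalg.norm(B, axis=1))
    logps = []
    for u in U:
        sims = gamma * cos(u, np.vstack([s[None, :], C]))   # s vs. comparison set
        logps.append(sims[0] - np.log(np.exp(sims).sum()))  # log softmax prob of s
    return np.mean(logps)  # expected log probability over the references

rng = np.random.default_rng(0)
s, U, C = rng.normal(size=8), rng.normal(size=(4, 8)), rng.normal(size=(16, 8))
print(rank_score(s, U, C))
```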
Mathematically, there are some potential problems of the proposed method.
- Firstly, the original GAN identified the equilibrium of Eqn. (1) only for R(s|U,C) being a proper probability over a Bernoulli variable. However, the definition in Eqn. (4) does not guarantee this condition. Thus, it is difficult to see whether the proposed criterion can ensure the generator distribution to match the true data distribution.
- However, if one regards the expected log probability in Eqn. (4) as a definition of energy, the practically used criterion Eqn. (7) is more similar to the energy-based GAN. From this perspective, if one completes the two expectations on U and C omitted in Eqn. (7), (i.e., the full formula for the first term should be E_{s ~ P_h} [ E_{U ~ bar{U}} E_{C ~ bar{C}} log R(s|U,C) ] where bar{U} and bar{C} represent all possible reference and comparison sets respectively), and plugs Eqn. (4) into the full formula, it becomes E_{s ~ P_h} [ E_{U ~ bar{U}} E_{C ~ bar{C}} ( E_{u ~ U} log P(s|u,C) ) ] <= E_{s ~ P_h} [ log P(s) ], which is a lower bound of the ranker-defined likelihood. In other words, the energy function is defined as a lower bound of the likelihood, which is a little bit strange and not conventional.
- Putting things together, I don’t really understand what the proposed criterion is doing, and why it leads to better empirical performances.
In terms of the experimental results, despite the good quantitative numbers, it would be nice if some qualitative analysis were conducted to understand why and how the generated sentences could achieve higher scores. Also, is the result sensitive to the size of the comparison set |C| or the size of the reference set |U|?
In summary, the proposed method achieves nice empirical results in improving language generation under the GAN setting. However, given the content of the paper, it is still not clear why this method should lead to such improvement. |
nips_2017_295 | Universal Style Transfer via Feature Transforms
Universal style transfer aims to transfer arbitrary visual styles to content images. Existing feed-forward based methods, while enjoying the inference efficiency, are mainly limited by inability of generalizing to unseen styles or compromised visual quality. In this paper, we present a simple yet effective method that tackles these limitations without training on any pre-defined styles. The key ingredient of our method is a pair of feature transforms, whitening and coloring, that are embedded to an image reconstruction network. The whitening and coloring transforms reflect a direct matching of feature covariance of the content image to a given style image, which shares similar spirits with the optimization of Gram matrix based cost in neural style transfer. We demonstrate the effectiveness of our algorithm by generating high-quality stylized images with comparisons to a number of recent methods. We also analyze our method by visualizing the whitened features and synthesizing textures via simple feature coloring. | This submission describes a model for general and efficient style transfer. It is general in the sense that it does not need training for each style image that one would want to apply (unlike [26]), and it is efficient because it does not require solving an optimization problem as the initial work of Gatsby et al. [9].
The main idea is to train a deep decoder network, so that from any given layer in a VGG, one can generate the most likely image. This decoder is not trained as a classical autoencoder. Instead, given a fixed supervised VGG, a decoder counterpart of the VGG is trained to reconstruct the image.
Given this trained decoder, the authors propose to transfer style by taking the activations at a given layer, whitening them so that their covariance is the identity, and, in a second step, coloring them so that they follow the covariance of the style image at the same layer. Given these new activations, the desired style-transferred image is obtained by applying the corresponding decoder.
Pros:
- The proposed approach is very simple and intuitive. It does not require fancy losses, and uses simple baseline techniques to approximately match the distributions of activations.
- This paper exhibits the very large expressive power of CNNs. Given a well trained decoder, simple linear operations in the feature space give meaningful outputs.
- The proposed Coarse to Fine approach leads to very good visual performance.
- The proposed algorithm is fairly efficient as, for each new style, it only requires computing several singular value decompositions (two for each layer, one for the image, one for the style).
- The proposed model only has one parameter, namely the tradeoff between style and content. This is a great advantage over, say, [26].
- Overall the paper reads well and is easy to follow.
Cons:
- The description of the Decoder training could be made more clear by introducing the fixed encoder, let's say \phi. Then the training of the decoder could be written as the minimization, w.r.t. a decoder \psi, of a reconstruction loss such as ||\psi(\phi(I)) - I||^2. That would avoid using strange notations including I_\input and I_\output (which actually depend on the function that is optimized over...).
- The formal description of the whitening and coloring could be made more practical by introducing the SVD of the activations (instead of using the eigenvalue decomposition of the covariance matrix).
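Concretely, something along these lines (a sketch under my own conventions: Zc/Zs are content/style feature maps flattened to channels x positions; this is not the authors' code):

```python
import numpy as np

def whiten_color(Zc, Zs, eps=1e-5):
    """Whitening-coloring transform on flattened feature maps (C x N),
    written directly in terms of the SVD of the centered activations."""
    mc, ms = Zc.mean(1, keepdims=True), Zs.mean(1, keepdims=True)
    Uc, sc, _ = np.linalg.svd(Zc - mc, full_matrices=False)
    Us, ss, _ = np.linalg.svd(Zs - ms, full_matrices=False)
    n_c, n_s = Zc.shape[1], Zs.shape[1]
    # Whiten: rotate into the content PCA basis and rescale to identity covariance.
    W = Uc @ np.diag(1.0 / (sc / np.sqrt(n_c - 1) + eps)) @ Uc.T
    # Color: impose the style covariance, then restore the style mean.
    Cl = Us @ np.diag(ss / np.sqrt(n_s - 1)) @ Us.T
    return Cl @ (W @ (Zc - mc)) + ms

rng = np.random.default_rng(0)
Zc = rng.normal(size=(64, 500))
Zs = 2.0 * rng.normal(size=(64, 400)) + 1.0
out = whiten_color(Zc, Zs)
print(np.abs(np.cov(out) - np.cov(Zs)).max())  # tiny: style covariance imposed
```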
Questions:
- To a large extent, this approach relies on the training of a good decoder, the quality of which strongly depends on the choice of data used for training. It would be interesting to see more results with worse decoders, trained on less data (or images with different statistics).
- The model assumes that the "support" of the decoder (subset of the input space that was covered by the inputs) contains the activations of the style images. Provided that this is true and that the decoder is well regularized, the output should be correct. Would it be possible to assess to what degree a dataset such as COCO covers this space? Otherwise, the decoder might have ill-defined behaviors for parts of the input space that it has never seen during training.
Overall, I think this submission in interesting and shows good results with a simple approach. I lean towards acceptance. |
nips_2017_3319 | GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium
Generative Adversarial Networks (GANs) excel at creating realistic images with complex models for which maximum likelihood is infeasible. However, the convergence of GAN training has still not been proved. We propose a two time-scale update rule (TTUR) for training GANs with stochastic gradient descent on arbitrary GAN loss functions. TTUR has an individual learning rate for both the discriminator and the generator. Using the theory of stochastic approximation, we prove that the TTUR converges under mild assumptions to a stationary local Nash equilibrium. The convergence carries over to the popular Adam optimization, for which we prove that it follows the dynamics of a heavy ball with friction and thus prefers flat minima in the objective landscape. For the evaluation of the performance of GANs at image generation, we introduce the "Fréchet Inception Distance" (FID) which captures the similarity of generated images to real ones better than the Inception Score. In experiments, TTUR improves learning for DCGANs and Improved Wasserstein GANs (WGAN-GP), outperforming conventional GAN training on CelebA, CIFAR-10, SVHN, LSUN Bedrooms, and the One Billion Word Benchmark. | Generative adversarial networks (GANs) are turning out to be a very important advance in machine learning. Algorithms for training GANs have difficulties with convergence. The paper proposes a two time-scale update rule (TTUR) which is shown (proven) to converge under certain assumptions. Specifically, it shows that GAN Adam updates with TTUR can be expressed as ordinary differential equations, and therefore can be proved to converge using a similar approach as in Borkar's 1997 work. The recommendation is to use two different update rules for generator and discriminator, with the latter being faster, in order to have convergence guarantees.
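Operationally, the rule is just two optimizers with different step sizes. A minimal PyTorch skeleton follows; the placeholder networks, the toy "data", and the specific learning rates are my choices for illustration, not the paper's settings.

```python
import torch

# Placeholder generator/discriminator; the architectures are immaterial here.
G = torch.nn.Sequential(torch.nn.Linear(16, 32), torch.nn.Tanh(), torch.nn.Linear(32, 2))
D = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))

# The TTUR idea: the discriminator gets its own (here larger) learning rate.
opt_d = torch.optim.Adam(D.parameters(), lr=4e-4)
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
bce = torch.nn.BCEWithLogitsLoss()

for step in range(200):
    real = torch.randn(64, 2) + 3.0           # stand-in data distribution
    fake = G(torch.randn(64, 16))
    # Discriminator step (faster time scale).
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator step (slower time scale).
    loss_g = bce(D(G(torch.randn(64, 16))), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```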
***Concerns:
1-Assumptions A1 to A6 and the following Theorem 1 are symmetric wrt h and g, thus one can assume it makes no difference which one of the networks (discriminator or generator) has the faster updates. A discussion on why this symmetry does not hold in practice, as pointed out by the authors in the introduction, is necessary.
2-The assumptions used in the proof should be investigated in each of the provided experiments to make sure the proof is in fact useful; for example, the key assumption that a = o(b) is clearly not true in the experiments (the learning rates used are of the same order instead).
3-The lower variation pointed out in Figure 2 is probably due to the lower learning rate of the generator in TTUR compared to SGD; however, since the learning rate values are not reported for Figure 2, we cannot be sure.
4-In Figure 6, variance and/or confidence intervals should be reported.
5-The reported learning rate for TTUR appears to be only a constant factor larger for the discriminator (3 times larger in Figure 6, 1.25 times larger in Figure 3). This is not consistent with assumption A2. Moreover, it is well known that a much faster discriminator for regular GANs undermines the generator training. However, according to Theorem 1, the system should converge. This point needs to be addressed.
6-There is insufficient previous/related work discussion, specifically about the extensive use of two time updates in dynamic systems, as well as the research on GAN convergence.
7-In general, the theorems, specifically Theorem 2, are not well explained or elaborated on. The exposited ideas would become clearer, and the paper more readable, with a line or two of explanation in each case.
In summary, while the proofs for convergence are noteworthy, a more detailed investigation and explanation of the effect of assumptions in each of the experiments, and how deviating from them can hurt the convergence in practice, will help clarify the limitations of the proof. |
nips_2017_34 | Deep Subspace Clustering Networks
We present a novel deep neural network architecture for unsupervised subspace clustering. This architecture is built upon deep auto-encoders, which non-linearly map the input data into a latent space. Our key idea is to introduce a novel self-expressive layer between the encoder and the decoder to mimic the "self-expressiveness" property that has proven effective in traditional subspace clustering. Being differentiable, our new self-expressive layer provides a simple but effective way to learn pairwise affinities between all data points through a standard backpropagation procedure. Being nonlinear, our neural-network based method is able to cluster data points having complex (often nonlinear) structures. We further propose pre-training and fine-tuning strategies that let us effectively learn the parameters of our subspace clustering networks. Our experiments show that our method significantly outperforms the state-of-the-art unsupervised subspace clustering techniques. | The paper addresses the problem of subspace clustering, i.e., separating a collection of data points lying in a union of subspaces according to underlying subspaces, using deep neural networks. To do so, the paper builds on the sparse subspace clustering idea: among all possible representations of a data point as a combination of other points in the dataset, the representation that uses the minimum number of points corresponds to points from the same subspace. In other words, SSC uses the idea that for a data matrix $X$, a sparse solution of $X = X C$ (subject to $diag(C) = 0$) represents each point as a combination of a few other points from the same subspace. The paper proposes a deep neural network to transform the data into a new representation $Z = f_W(X)$ for which one searches for a sparse representation of $Z = Z C$, with the hope to learn more effective representations of data for clustering. To achieve this, the paper uses an auto-encoder scheme, where the middle hidden layer outputs are used as $Z$. To enforce $Z - Z C = 0$, the paper repeats the middle layer, with the hope that these two consecutive layers obtain the same activations. If so, the weights between the two layers would correspond to the coefficient matrix $C$, which is then used, as in existing methods, to build an affinity matrix on data and to obtain clustering. The paper demonstrates experiments on Extended YaleB and ORL face datasets as well as COIL20/100.
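For reference, the self-expressive layer amounts to a trainable N x N coefficient matrix trained to reconstruct the latent codes. A minimal sketch of that component alone, under my simplification that the encoder/decoder are omitted and $Z$ is fixed:

```python
import torch

N, d = 100, 20
Z = torch.randn(N, d)                      # stand-in for encoder outputs
C = torch.zeros(N, N, requires_grad=True)  # N^2 self-expression coefficients
opt = torch.optim.Adam([C], lr=1e-2)

for step in range(500):
    Cz = C - torch.diag(torch.diag(C))     # enforce diag(C) = 0
    # Self-expressiveness term plus a Frobenius regularizer on C.
    loss = ((Z - Cz @ Z) ** 2).sum() + 1.0 * (Cz ** 2).sum()
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():
    Cz = C - torch.diag(torch.diag(C))
    # Symmetrized affinity matrix for spectral clustering, SSC-style.
    A = 0.5 * (Cz.abs() + Cz.abs().T)
```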
The reviewer believes that the paper is generally well-written and easy to read. The paper also takes a sufficiently interesting approach to connect subspace clustering with deep learning. On the other hand, there are several issues with the technical approach and the experiments that limit the novelty and correctness of the paper.
1) The reviewer believes that the idea of the paper is similar to the one in [36], which uses an auto-encoder scheme, taking the middle layer as $Z$. While [36] fixes the $C$ and solves for $Z$, the proposed method repeats the middle layer to also solve for $C$. Despite this similarity, there is only a sentence in the paper (line 82) referring to this work and the experiments do not compare against [36]. Thus there is a need to clearly discuss advantages and disadvantages with respect to [36] and compare against it in the experiments.
2) It is not clear to the reviewer how the paper enforces that the activations of the middle and the middle plus one layer must be the same. While one can clearly initialize $C$ so that the consecutive activations are the same, once the back-propagation is done and the weights of the auto-encoder change, the activations of the middle layer and the one after it will not be the same (unless $\lambda_2$ is very large and the network achieves its global minimum, which is generally not the case). As a result, in the next iteration, the network will minimize $Z - Y C$ for different $Y$ and $Z$, which is not desired.
3) One of the drawbacks of the proposed approach is the fact that the number of weights between the middle layers of the network is $N^2$, given $N$ data points. Thus, it seems that the method will not be scalable to very large datasets.
4) In the experiments (line 243), the paper discusses training using mini-batches. Notice that while using mini-batches makes sense in the context of recognition, where each sample can be applied separately to the network, using mini-batches does not make sense in the context of clustering discussed in the paper. Using a mini-batch of size $M < N$, how does the paper train a neural network between two layers, each with $N$ nodes? In other words, it is not clear how, and in what order, the mini-batch instances will be input to the network.
nips_2017_2670 | Saliency-based Sequential Image Attention with Multiset Prediction
Humans process visual scenes selectively and sequentially using attention. Central to models of human visual attention is the saliency map. We propose a hierarchical visual architecture that operates on a saliency map and uses a novel attention mechanism to sequentially focus on salient regions and take additional glimpses within those regions. The architecture is motivated by human visual attention, and is used for multi-label image classification on a novel multiset task, demonstrating that it achieves high precision and recall while localizing objects with its attention. Unlike conventional multi-label image classification models, the model supports multiset prediction due to a reinforcement-learning based training process that allows for arbitrary label permutation and multiple instances per label. | This submission proposes a recurrent attention architecture (composed of multiple steps) that operates on a saliency map that is computed with a feed forward network. The proposed architecture is based on the biological model of covert attention and overt attention. This is translated into a meta-controller (covert attention) that decides which regions need overt attention (which are then treated by lower-level controllers that glimpse in the chosen regions). This is implemented as a form of hierarchical reinforcement learning, where the reward function for the meta-controller is explicitly based on the idea of multiset labels. The (lower-level) controllers receive reward for directing attention to the area with high priority, as output by a previously computed priority map. The whole architecture is trained using REINFORCE, a standard RL algorithm. The proposed architecture is used for multi-label, specifically multi-set setting where each image can have multiple labels, and labels may appear multiple times. The experiments are conducted on MNIST and MNIST multiset datasets, where images contain one single digit or multiple digits.
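Since the whole pipeline is trained with REINFORCE, the per-controller update is the standard score-function estimator. A generic sketch follows; the state encoding and the reward function below are stand-ins I made up, not the paper's reward shaping.

```python
import torch

# Policy over 10 candidate regions, fed e.g. pooled saliency values.
policy = torch.nn.Sequential(torch.nn.Linear(10, 32), torch.nn.ReLU(),
                             torch.nn.Linear(32, 10))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

def reward(action, state):
    # Hypothetical stand-in: reward attending to the most salient region.
    return (action == state.argmax(dim=1)).float()

for step in range(100):
    state = torch.rand(64, 10)
    dist = torch.distributions.Categorical(logits=policy(state))
    action = dist.sample()                 # pick a region to attend to
    r = reward(action, state)
    # REINFORCE with a mean-reward baseline to reduce variance.
    loss = -(dist.log_prob(action) * (r - r.mean())).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```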
The paper is well written and the model is well explained. The detailed visualizations in the supplementary material are required to understand the architecture, which has many components. The details of the architecture, when considered with the supplementary material, are well motivated and explained; the paper is comprehensive enough to allow the reproduction of results. The topic is interesting and relevant for the community.
Some questions are as follows:
- The “meta controller” unit takes a saliency map and the class label prediction from the previous step as inputs and learns a covert attention mask. How is the saliency map initialized in the first step? Which saliency map does the “interface” unit use, the saliency map learned in the previous step or the initial one?
- The controller takes the priority map and the glimpse vector as inputs, runs them through a recurrent net (GRU) and each GRU unit outputs a label. How is the number of GRU units determined here? Using the number of Gaussians learned by the Gaussian attention mechanism? Is this the same as the number of peaks in the priority map?
A missing related work is Hendricks et al., Generating Visual Explanations, ECCV 2016, which also trains a recurrent net (LSTM) using the REINFORCE algorithm to generate textual explanations of the visual classification decision. However, it does not contain an integrated visual attention mechanism.
The main weakness of the paper is the experimental evaluation. The only experimental results are reported on synthetic datasets, i.e. MNIST and MNIST multi-set. As the objects are quite salient on the black background, it is difficult to judge the advantage of the proposed attention mechanism. In natural images, as discussed in the limitations, detecting saliency with high confidence is an issue. However, this work, having been motivated partially as a framework for improving existing saliency models, should have been evaluated in more realistic scenarios. Furthermore, the proposed attention model is not compared with any existing attention models. Also, a comparison with human-gaze-based attention (as discussed in the introduction) would be interesting. A candidate dataset is CUB annotated with human gaze data in Karessli et al., Gaze Embeddings for Zero-Shot Image Classification, CVPR17 (another uncited related work), which showed that human-gaze-based visual attention is class-specific.
Minor comment:
- The references list contains duplicates and the publication venues and/or the publication years of many of the papers are missing. |
nips_2017_753 | Learning the Morphology of Brain Signals Using Alpha-Stable Convolutional Sparse Coding
Neural time-series data contain a wide variety of prototypical signal waveforms (atoms) that are of significant importance in clinical and cognitive research. One of the goals for analyzing such data is hence to extract such 'shift-invariant' atoms. Even though some success has been reported with existing algorithms, they are limited in applicability due to their heuristic nature. Moreover, they are often vulnerable to artifacts and impulsive noise, which are typically present in raw neural recordings. In this study, we address these issues and propose a novel probabilistic convolutional sparse coding (CSC) model for learning shift-invariant atoms from raw neural signals containing potentially severe artifacts. In the core of our model, which we call αCSC, lies a family of heavy-tailed distributions called α-stable distributions. We develop a novel, computationally efficient Monte Carlo expectation-maximization algorithm for inference. The maximization step boils down to a weighted CSC problem, for which we develop a computationally efficient optimization algorithm. Our results show that the proposed algorithm achieves state-of-the-art convergence speeds. Besides, αCSC is significantly more robust to artifacts when compared to three competing algorithms: it can extract spike bursts, oscillations, and even reveal more subtle phenomena such as cross-frequency coupling when applied to noisy neural time series. | In this paper, the authors propose a novel probabilistic convolutional sparse coding model (alpha-stable CSC) for learning shift-invariant atoms from raw neural signals. They propose an inference strategy based on Monte Carlo EM to deal efficiently with heavy tailed noise and take into account the polarity of neural activations with a positivity constraint. The formulation allows the use of fast quasi-Newton methods for the M-step which outperform previously proposed state-of-the-art ADMM based algorithms. The experiment results on LFP data demonstrate that such algorithms can be robust to the presence of transient artifacts in data and reveal insights on neural time-series without supervision.
The main contribution of this study is the extension to a general (alpha-stable distribution) framework, compared to the Gaussian distribution. From an experimental point of view, the proposed approach is very useful. However, there are a few comments and suggestions for the authors to consider:
(1) Figure 1(a) only demonstrates the PDFs of alpha-stable distributions characterized by alpha and beta. Please also describe the roles of the scale and location parameters (a quick illustration follows after this list).
(2) As written in the paper, “As an important special case of the alpha-stable distributions, we obtain the Gaussian distribution when alpha = 2 and beta = 0”, the authors emphasize that the current CSC models might be limited as they consider an L2 reconstruction error, while the extended model is more robust to corrupted data than the Gaussian models. However, I wonder if the experiments should include a Gaussian mixture model as a fair comparison.
(3) How are the parameters (i.e., lambda, N, T, L, K) determined in the synthetic and LFP data experiments, respectively?
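Regarding point (1), a quick scipy illustration of the four parameters may help; the alpha = 2 special case mentioned in the paper corresponds to a Gaussian with variance 2*scale^2 in the standard parameterization.

```python
import numpy as np
from scipy.stats import levy_stable, norm

# alpha: tail index, beta: skewness, loc: location, scale: spread.
x = np.linspace(-10, 10, 7)
heavy = levy_stable.pdf(x, 1.5, 0.0, loc=0.0, scale=1.0)

# alpha = 2, beta = 0 recovers a Gaussian with variance 2 * scale**2.
gauss = norm.pdf(x, loc=0.0, scale=np.sqrt(2.0))
print(np.abs(levy_stable.pdf(x, 2.0, 0.0) - gauss).max())  # ~0

print(heavy[-1] / gauss[-1])  # heavy tails: far more mass at x = 10
```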
nips_2017_2887 | Learning from Complementary Labels
Collecting labeled data is costly and thus a critical bottleneck in real-world classification tasks. To mitigate this problem, we propose a novel setting, namely learning from complementary labels for multi-class classification. A complementary label specifies a class that a pattern does not belong to. Collecting complementary labels would be less laborious than collecting ordinary labels, since users do not have to carefully choose the correct class from a long list of candidate classes. However, complementary labels are less informative than ordinary labels and thus a suitable approach is needed to better learn from them. In this paper, we show that an unbiased estimator to the classification risk can be obtained only from complementarily labeled data, if a loss function satisfies a particular symmetric condition. We derive estimation error bounds for the proposed method and prove that the optimal parametric convergence rate is achieved. We further show that learning from complementary labels can be easily combined with learning from ordinary labels (i.e., ordinary supervised learning), providing a highly practical implementation of the proposed method. Finally, we experimentally demonstrate the usefulness of the proposed methods. | This paper proposes a novel problem setting and algorithm for learning from complementary labels. It shows an unbiased estimator of the classification risk can be obtained only from complementary labels, if a loss function satisfies a particular symmetric condition. The paper also provides estimation error bounds for the proposed method. Experiments show the approach can produce results similar to standard learning with ordinary labels on some datasets.
The idea and the theoretical results are interesting. But I question the usefulness of the proposed learning problem. Though using complementary labels can be less laborious than ordinary labels, complementary labels are less informative than ordinary labels. If using N training instances with complementary labels cannot perform better (this is almost certain) than learning with N/(K-1) training instances with ordinary labels, why should one use complementary labels? Moreover, based on the theoretical results, learning with complementary labels is restricted to a limited number of loss functions, while there are many state-of-the-art effective learning methods with ordinary labels.
The paper only suggests in the conclusion section that such complementary label learning can be useful in the context of privacy-aware machine learning. Then why not put this study within the privacy-aware learning setting and compare to the privacy-aware learning techniques?
For section 3: (1) On line 51, it says “that incurs a large loss for large z”. Shouldn’t it be “that incurs a large loss for small z”? (2) It is not clear how the derivation on lines 79-80 was conducted. Why should \sum_{y \not= \bar{y}} \ell(g_y(x))/(K-1) = \ell(g_{\bar{y}}(x))?
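A three-line numerical check makes point (2) concrete: the identity fails for generic g, so some additional condition on the loss must be in play (assuming I am reading the notation correctly).

```python
import numpy as np

rng = np.random.default_rng(0)
K = 5
g = rng.normal(size=K)                    # arbitrary classifier outputs g_y(x)
loss = lambda z: np.log1p(np.exp(-z))     # e.g. the logistic loss

ybar = 2                                  # a complementary label
lhs = sum(loss(g[y]) for y in range(K) if y != ybar) / (K - 1)
rhs = loss(g[ybar])
print(lhs, rhs)                           # generally not equal
```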
The experiments are very basic. In the benchmark experiments, it only compares to a self-constructed ML formulation and a one-hidden-layer neural network with ordinary labels. The way of translating the complementary labels to multi-labels is not proper, since it actually injects much incorrect information into the labels. The one-hidden-layer neural network is far from being a state-of-the-art multi-label classification model. It is better to just compare with standard learning with ordinary labels using the same PC with sigmoid loss.
Another issue is: why not use the whole datasets instead of part of them? It would also be interesting to see the comparison between complementary-label learning and ordinary-label learning for a range of different numbers of classes on the same datasets.
nips_2017_2671 | Variational Inference for Gaussian Process Models with Linear Complexity
Large-scale Gaussian process inference has long faced practical challenges due to time and space complexity that is superlinear in dataset size. While sparse variational Gaussian process models are capable of learning from large-scale data, standard strategies for sparsifying the model can prevent the approximation of complex functions. In this work, we propose a novel variational Gaussian process model that decouples the representation of mean and covariance functions in reproducing kernel Hilbert space. We show that this new parametrization generalizes previous models. Furthermore, it yields a variational inference problem that can be solved by stochastic gradient ascent with time and space complexity that is only linear in the number of mean function parameters, regardless of the choice of kernels, likelihoods, and inducing points. This strategy makes the adoption of large-scale expressive Gaussian process models possible. We run several experiments on regression tasks and show that this decoupled approach greatly outperforms previous sparse variational Gaussian process inference procedures. | Summary: The paper employs an alternative view of Titsias's sparse variational inference technique for Gaussian process regression, treating the mean function and covariance function as linear operators in RKHS. By decoupling the basis functions that each of these functions uses, the variational lower bound can now be computed in O(NMa + NMb^2) complexity: linear in N, linear in Ma (number of mean parameters) and quadratic in Mb (number of covariance parameters). Previous approaches have O(NM^2) complexity. This means one can now use a much bigger Ma compared to Mb without significantly increasing the complexity. The catch, however, is that the posterior prediction is no longer as well **calibrated** or as good, i.e. a smaller Mb results in larger predictive variances. The method is compared against existing approaches on several regression datasets, showing lower predictive errors and achieving higher variational lower bounds.
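To picture the decoupling, here is a schematic numpy sketch — my own simplification with an RBF kernel and a standard sparse-GP variance form, not the paper's exact RKHS parameterization: the mean uses Ma basis points at O(N*Ma) cost, while the variance uses a separate, smaller set of Mb inducing points at O(N*Mb^2) cost.

```python
import numpy as np

def rbf(A, B, ell=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

rng = np.random.default_rng(0)
n, Ma, Mb = 1000, 300, 30
X = rng.uniform(-3, 3, size=(n, 1))
Za = rng.uniform(-3, 3, size=(Ma, 1))    # many basis points for the mean
a = 0.1 * rng.normal(size=Ma)            # mean coefficients
Zb = rng.uniform(-3, 3, size=(Mb, 1))    # few inducing points for the covariance
S = 0.1 * np.eye(Mb)                     # variational covariance over inducing values

mean = rbf(X, Za) @ a                    # O(n * Ma)
Kbb_inv = np.linalg.inv(rbf(Zb, Zb) + 1e-4 * np.eye(Mb))
Kxb = rbf(X, Zb)
var = 1.0 - np.einsum('nb,bc,nc->n', Kxb,
                      Kbb_inv - Kbb_inv @ S @ Kbb_inv, Kxb)  # k(x,x)=1 here
print(mean.shape, var.shape, var.min() > 0)
```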
Details:
The catch is that the predictive variance is large if Mb is small, resulting in poor predictive performance when measured using the log-likelihood. This can be seen in Figure 1, and is mentioned in the text, but has not been fully explored experimentally (the paper only uses the squared-error metric, which does not use the predictive variances).
Unless I have missed it, the basis locations are not optimised but selected using several initial mini-batches. Would there be any problems if we optimised the locations of the inducing points/bases as in Titsias's approach?
The inducing points/inputs are grounded on some physical locations and can be plotted. Can we visualise the subspace parameterisation? For example, grounding some bases and plotting the samples generated from the corresponding GPs?
And why is there a kink/sudden change in the traces for all methods in the middle of Figure 1? The learning curves for SVI are still going down after 2000 iterations -- does this use SNGD for the q(u) parameters? I think recent works on SVI for GP models (Hensman et al 2014, Bonilla et al 2014, 2015) all use SGD for the mean and Cholesky parameters. This seems to work well in practice, and avoids the need to deal with natural gradients or to select a separate learning rate. It would be better to compare to these variants of SVI.
Extensions to non-Gaussian likelihoods seem non-trivial as computing the expected log likelihood would decouple the variational parameters. |
nips_2017_3076 | Bregman Divergence for Stochastic Variance Reduction: Saddle-Point and Adversarial Prediction
Adversarial machines, where a learner competes against an adversary, have regained much recent interest in machine learning. They are naturally in the form of saddle-point optimization, often with separable structure but sometimes also with unmanageably large dimension. In this work we show that adversarial prediction under multivariate losses can be solved much faster than they used to be. We first reduce the problem size exponentially by using appropriate sufficient statistics, and then we adapt the new stochastic variance-reduced algorithm of Balamurugan & Bach (2016) to allow any Bregman divergence. We prove that the same linear rate of convergence is retained and we show that for adversarial prediction using KL-divergence we can further achieve a speedup of #example times compared with the Euclidean alternative. We verify the theoretical findings through extensive experiments on two example applications: adversarial prediction and LPboosting. | This paper shows that "certain" adversarial prediction problems under multivariate losses can be solved "much faster than they used to be". The paper stands on two main ideas: (1) that the general saddle function optimization problem stated in eq. (9) can be simplified from an exponential to a quadratic complexity (on the sample size), and (2) that the simplified optimization problem, with some regularization, can be solved using some extension of the SVRG (stochastic variance reduction gradient) method to Bregman divergences.
The paper is quite focused on the idea of obtaining a faster solution of the adversarial problem. However, the key simplification is applied to a specific loss, the F-score, so one may wonder if the benefits of the proposed method could be extended to other losses. The extension of SVRG is a more general result; it seems that the paper could have been focused on proposing Breg-SVRG, with the adversarial optimization with the F-score shown as a particular application.
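For readers unfamiliar with the Bregman extension: with the KL divergence as the prox function, the SVRG inner update becomes an exponentiated-gradient step on the simplex. A toy sketch below, with my own objective and step size, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 10
A = rng.normal(size=(n, d))

def grad_i(w, i):                 # gradient of f_i(w) = 0.5 * (a_i^T w)^2
    return A[i] * (A[i] @ w)

w_tilde = np.ones(d) / d          # start at the simplex center
eta = 0.05
for epoch in range(30):
    mu = A.T @ (A @ w_tilde) / n  # full gradient at the snapshot
    w = w_tilde.copy()
    for _ in range(n):
        i = rng.integers(n)
        g = grad_i(w, i) - grad_i(w_tilde, i) + mu  # variance-reduced gradient
        w = w * np.exp(-eta * g)  # mirror (exponentiated-gradient) step...
        w /= w.sum()              # ...i.e. the KL-Bregman prox onto the simplex
    w_tilde = w
print(0.5 * np.mean((A @ w_tilde) ** 2))  # final objective (decreases over epochs)
```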
In any case, I think the paper is technically correct and interesting enough to be accepted. |
nips_2017_1642 | Mean Field Residual Networks: On the Edge of Chaos
We study randomly initialized residual networks using mean field theory and the theory of difference equations. Classical feedforward neural networks, such as those with tanh activations, exhibit exponential behavior on the average when propagating inputs forward or gradients backward. The exponential forward dynamics causes rapid collapsing of the input space geometry, while the exponential backward dynamics causes drastic vanishing or exploding gradients. We show, in contrast, that by adding skip connections, the network will, depending on the nonlinearity, adopt subexponential forward and backward dynamics, and in many cases in fact polynomial. The exponents of these polynomials are obtained through analytic methods and proved and verified empirically to be correct. In terms of the "edge of chaos" hypothesis, these subexponential and polynomial laws allow residual networks to "hover over the boundary between stability and chaos," thus preserving the geometry of the input space and the gradient information flow. In our experiments, for each activation function we study here, we initialize residual networks with different hyperparameters and train them on MNIST. Remarkably, our initialization time theory can accurately predict test time performance of these networks, by tracking either the expected amount of gradient explosion or the expected squared distance between the images of two input vectors. Importantly, we show, theoretically as well as empirically, that common initializations such as the Xavier or the He schemes are not optimal for residual networks, because the optimal initialization variances depend on the depth. Finally, we have made mathematical contributions by deriving several new identities for the kernels of powers of ReLU functions by relating them to the zeroth Bessel function of the second kind. | This paper analytically investigates the properties of Resnets with random weights using a mean field approximation. The approach used is an extension of previous analysis of feed forward neural networks. The authors show that in contrast to feed forward networks Resnets do exhibit a sub-exponential behavior (polynomial or logarithmic) when inputs are propagated forward or gradients are propagated backwards through the layers.
The results are very interesting because they give an analytic justification and intuition of why Resnets with a large number of layers can be trained reliably. The paper also extends the mean field technique for studying neural network properties, which is of value for analyzing other architectures. The presentation of the results is clear overall, and the paper is well written.
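The central claims are also easy to verify numerically. Here is a small numpy experiment in the spirit of the paper, comparing Jacobian-norm growth with depth for a plain tanh network versus a residual one at random initialization; the choice sigma_w = 1.5 (which puts the plain net in a chaotic regime) is mine.

```python
import numpy as np

rng = np.random.default_rng(0)
width, depth = 300, 100
x = rng.normal(size=width)

def jvp_norms(residual, sigma_w=1.5):
    h = x.copy()
    u = rng.normal(size=width) / np.sqrt(width)  # direction for a Jacobian-vector product
    norms = []
    for _ in range(depth):
        W = rng.normal(size=(width, width)) * sigma_w / np.sqrt(width)
        pre = W @ h
        d = 1.0 - np.tanh(pre) ** 2              # derivative of tanh
        if residual:                             # layer: h -> h + tanh(W h)
            h, u = h + np.tanh(pre), u + d * (W @ u)
        else:                                    # layer: h -> tanh(W h)
            h, u = np.tanh(pre), d * (W @ u)
        norms.append(np.linalg.norm(u))
    return np.array(norms)

print(jvp_norms(residual=False)[[9, 49, 99]])  # exponential explosion
print(jvp_norms(residual=True)[[9, 49, 99]])   # subexponential growth
```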
Overall this is an interesting paper with convincing results. |
nips_2017_1424 | SGD Learns the Conjugate Kernel Class of the Network
We show that the standard stochastic gradient descent (SGD) algorithm is guaranteed to learn, in polynomial time, a function that is competitive with the best function in the conjugate kernel space of the network, as defined in Daniely et al. [2016]. The result holds for log-depth networks from a rich family of architectures. To the best of our knowledge, it is the first polynomial-time guarantee for the standard neural network learning algorithm for networks of depth more than two.
As corollaries, it follows that for neural networks of any depth between 2 and log(n), SGD is guaranteed to learn, in polynomial time, constant degree polynomials with polynomially bounded coefficients. Likewise, it follows that SGD on large enough networks can learn any continuous function (not in polynomial time), complementing classical expressivity results. | This paper studies the problem of learning a class of functions related to neural networks using SGD. The class of functions can be defined using a kernel that is related to the neural network structure (which was defined in [Daniely et al. 2016]). The result shows that if SGD is applied to a neural network with similar structure, but with many duplicated nodes, then the result can be competitive to the best function in the class.
Pros:
- This is one of the first analyses of SGD on multi-layer, non-linear neural networks.
- The result applies to various architectures and activation functions.
Cons:
- The definition of the kernel is not very intuitive, and it would be good to discuss the relationship between functions representable using a neural network and functions representable using this kernel (a small sketch of the kernel recursion is given after this list).
- The same result seems to be easier to prove if all previous layers have random weights and SGD is applied only to the last layer. It's certainly good that SGD in this paper runs on all layers (which is similar to what's done in practice). However, for most practical architectures just training the last layer is not going to achieve good performance (when the task is difficult enough). The paper does not help in explaining why training all layers is more powerful than training the last layer.
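On the kernel-intuition point above: for ReLU the conjugate kernel has a concrete form — the dual activation is the degree-1 arc-cosine map, composed once per layer. A sketch of the recursion for unit-norm inputs, following my understanding of Daniely et al.'s construction:

```python
import numpy as np

def relu_dual(rho):
    """Dual activation of (suitably normalized) ReLU: maps the input
    correlation to the correlation after one hidden layer."""
    rho = np.clip(rho, -1.0, 1.0)
    return (np.sqrt(1 - rho**2) + (np.pi - np.arccos(rho)) * rho) / np.pi

def conjugate_kernel(x, y, depth):
    """Conjugate kernel between unit-norm inputs for a depth-`depth` net."""
    rho = float(x @ y)
    for _ in range(depth):
        rho = relu_dual(rho)
    return rho

rng = np.random.default_rng(0)
x, y = rng.normal(size=16), rng.normal(size=16)
x, y = x / np.linalg.norm(x), y / np.linalg.norm(y)
for L in [1, 2, 5, 20]:
    print(L, conjugate_kernel(x, y, L))  # correlations contract toward 1 with depth
```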
Overall this is still an interesting result. |
nips_2017_2143 | Adaptive Active Hypothesis Testing under Limited Information
We consider the problem of active sequential hypothesis testing where a Bayesian decision maker must infer the true hypothesis from a set of hypotheses. The decision maker may choose from a set of actions, where the outcome of an action is corrupted by independent noise. In this paper we consider a special case where the decision maker has limited knowledge about the distribution of observations for each action, in that only a binary value is observed. Our objective is to infer the true hypothesis with low error, while minimizing the number of actions sampled. Our main results include the derivation of a lower bound on sample size for our system under limited knowledge and the design of an active learning policy that matches this lower bound and outperforms similar known algorithms. | The paper studies the problem of active hypothesis testing with limited information. More specifically, denoting the probability of ‘correct indication’ by q(j,w) if the true hypothesis is j and the chosen action is w, they assume that q(j,w) is bounded below by \alpha(w) for all j, and this quantity is available to the controller.
I have a few technical questions.
On the upper bound: The authors claim that the upper bound is obtained by considering the worst case scenario where q(w,j) = \alpha(w) for every w if j is the true hypothesis. If that’s the case, how is different the analysis given in this paper from that for Bayesian update case? (If my understanding is correct, q(w,j) = \alpha(w) implies IB update rule = Bayesian update rule.) Further, what can happen in reality is that q(w,j) is arbitrarily close to 1, and alpha(w) is arbitrarily close to 1/2. In this case, can the upper bound provide any guarantee?
On the lower bound: Can this bound recover the existing lower bounds for the case of complete information?
On simulation results: It will be great if the authors can present the lower & upper bounds on E[T] and compare them with the simulation results. |
nips_2017_1269 | Bayesian Dyadic Trees and Histograms for Regression
Many machine learning tools for regression are based on recursive partitioning of the covariate space into smaller regions, where the regression function can be estimated locally. Among these, regression trees and their ensembles have demonstrated impressive empirical performance. In this work, we shed light on the machinery behind Bayesian variants of these methods. In particular, we study Bayesian regression histograms, such as Bayesian dyadic trees, in the simple regression case with just one predictor. We focus on the reconstruction of regression surfaces that are piecewise constant, where the number of jumps is unknown. We show that with suitably designed priors, posterior distributions concentrate around the true step regression function at a near-minimax rate. These results do not require the knowledge of the true number of steps, nor the width of the true partitioning cells. Thus, Bayesian dyadic regression trees are fully adaptive and can recover the true piecewise regression function nearly as well as if we knew the exact number and location of jumps. Our results constitute the first step towards understanding why Bayesian trees and their ensembles have worked so well in practice. As an aside, we discuss prior distributions on balanced interval partitions and how they relate to an old problem in geometric probability. Namely, we relate the probability of covering the circumference of a circle with random arcs whose endpoints are confined to a grid, a new variant of the original problem. | This paper analyses concentration rates (speed of posterior concentration) for Bayesian regression histograms and demonstrates that under certain conditions and priors, the posterior distribution concentrates around the true step regression function at the minimax rate.
The notation is clear. Different approximating functions are considered, starting from the set of step functions supported on equally sized intervals, up to more flexible functions supported on balanced partitions. The most important part of the paper is building the prior on the space of approximating functions.
The paper is relatively clear and brings up an interesting first theoretical result regarding speed of posterior concentration for Bayesian regression histograms. The authors assume very simple conditions (one predictor, piecewise-constant functions), but this is necessary in order to get a first analysis.
The proof seems correct, although it is outside this reviewer's expertise. This reviewer wonders why the authors did not consider sending this work to the Annals of Statistics instead of NIPS, given the type of analysis and the results provided.
Minor:
- You might want to check the reference for Bayesian Hierarchical Clustering (Heller et al., 2005) and the literature on infinite mixtures of experts.
- l. 116: You mention recommendations for the choice of c_K in Section 3.3, but this Section does not exist.
- Assumption 1 is referred to multiple times on page 4, but it is not defined until page 5.
Writing (typos):
- l. 83: we will
- l. 231: mass near zero
- l. 296: (b) not clear (extend to multi-dimensional?) |
nips_2017_3108 | Value Prediction Network
This paper proposes a novel deep reinforcement learning (RL) architecture, called Value Prediction Network (VPN), which integrates model-free and model-based RL methods into a single neural network. In contrast to typical model-based RL methods, VPN learns a dynamics model whose abstract states are trained to make option-conditional predictions of future values (discounted sum of rewards) rather than of future observations. Our experimental results show that VPN has several advantages over both model-free and model-based baselines in a stochastic environment where careful planning is required but building an accurate observation-prediction model is difficult. Furthermore, VPN outperforms Deep Q-Network (DQN) on several Atari games even with short-lookahead planning, demonstrating its potential as a new way of learning a good state representation. | This paper proposes a new learning architecture, VPN, whose components can be
learned end-to-end to perform planning over pre-specified temporally extended actions. An advantage of this approach is that it explicitly avoids working in the original observation space and plans indirectly over a learned representation. This is made possible by applying a model-free DQN-like approach for learning values, which are then used in combination with reward and duration models of options to form planning targets. The proposed approach is shown to outperform DQN in a variety of Atari games.
Model-based RL is a difficult problem when it comes to a large domain like ALE. The development of new model-based methods which do not explicitly try to reconstruct/predict the next observation seems to be going in the right direction. VPN is a contribution to this new line of research. However, there are a few conceptual elements which remain unclear.
First, why predict the expected discount/duration separately from the option-conditional expected cumulative return to termination? Using the Bellman-like equations for the reward model of options (see equation 17 of the options paper), the discount factor can effectively be folded into the reward prediction. This would avoid the need to learn a separate discount prediction for each option.
Second: the nature of the "next abstract state" (line 109) is a key element of the proposed approach to avoid learning a transition model directly in the original space. As you mention later on line 251, there is no incentive/constraint for the learned state representation to capture useful properties for planning, and it could in the extreme case become a tabular representation. The appendix then shows that you use a particular architecture for the transition module which predicts the difference. How much does this prior on the architecture affect the quality of the representation learned by the system? What kind of representation would be learned in the absence of a residual connection?
Can you also elaborate on the meaning of the weighting (1/d, (d-1)/d) in equation (1)? Why is it necessary? Where does it come from?
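For what it is worth, my reading of equation (1) — which the authors should confirm — is a d-step backup that mixes the network's direct value estimate with the best backed-up estimate, trusting the backup more as the depth grows. Schematically (all names and the toy instantiation are mine):

```python
def q_plan(s, o, d, v, options, step):
    """Q_d(s,o) = r + gamma * [ (1/d) v(s') + ((d-1)/d) max_o' Q_{d-1}(s',o') ]."""
    r, gamma, s2 = step(s, o)        # reward, discount, next abstract state
    if d == 1:
        return r + gamma * v(s2)
    backup = max(q_plan(s2, o2, d - 1, v, options, step) for o2 in options)
    return r + gamma * ((1.0 / d) * v(s2) + ((d - 1.0) / d) * backup)

# Toy instantiation, everything made up for illustration:
options = [0, 1]
v = lambda s: float(s)                           # pretend value head
step = lambda s, o: (1.0 if o else 0.5, 0.9, s + o)
print(q_plan(0.0, 1, d=3, v=v, options=options, step=step))
```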
Line 21: "predict abstractions of observations" — this expression is unclear, but I guess that it means a feature/representation of the state. You also write "not clear how to roll such predictions", but isn't that exactly what VPN is doing?
Typo: line 31, "dyanmics".
nips_2017_2863 | From Bayesian Sparsity to Gated Recurrent Nets
The iterations of many first-order algorithms, when applied to minimizing common regularized regression functions, often resemble neural network layers with prespecified weights. This observation has prompted the development of learning-based approaches that purport to replace these iterations with enhanced surrogates forged as DNN models from available training data. For example, important NP-hard sparse estimation problems have recently benefitted from this genre of upgrade, with simple feedforward or recurrent networks ousting proximal gradient-based iterations. Analogously, this paper demonstrates that more powerful Bayesian algorithms for promoting sparsity, which rely on complex multi-loop majorization-minimization techniques, mirror the structure of more sophisticated long short-term memory (LSTM) networks, or alternative gated feedback networks previously designed for sequence prediction. As part of this development, we examine the parallels between latent variable trajectories operating across multiple time-scales during optimization, and the activations within deep network structures designed to adaptively model such characteristic sequences. The resulting insights lead to a novel sparse estimation system that, when granted training data, can estimate optimal solutions efficiently in regimes where other algorithms fail, including practical direction-of-arrival (DOA) and 3D geometry recovery problems. The underlying principles we expose are also suggestive of a learning process for a richer class of multi-loop algorithms in other domains. | This paper proposes a recurrent network for sparse estimation inspired by sparse Bayesian learning (SBL). It first shows that a recurrent architecture can implement a variant of SBL, showing through simulations that different quantities have different time dynamics, which motivates leveraging ideas from recurrent-network design to adapt the architecture. This leads to a recurrent network that seems to outperform approaches based on optimization in simulations and two applications.
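The "iterations as layers" observation the paper builds on is easiest to see in the simpler ISTA/LISTA case: one ISTA iteration for the lasso is an affine map followed by a soft-threshold nonlinearity, so unrolling K iterations gives a K-layer recurrent network whose weights could then be learned. A numpy sketch of the unrolled (untrained) version:

```python
import numpy as np

def soft(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

rng = np.random.default_rng(0)
m, n = 30, 100
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = 1.0
y = A @ x_true

# One ISTA iteration: x <- soft(W y + S x, theta) -- exactly a "network layer".
L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
W, S, theta = A.T / L, np.eye(n) - A.T @ A / L, 0.01 / L

x = np.zeros(n)
for k in range(500):                     # unrolled: 500 identical "layers"
    x = soft(W @ y + S @ x, theta)
print(np.linalg.norm(x - x_true))        # small: support recovered (up to lasso bias)
```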
Strengths: The idea of learning a recurrent network for sparse estimation has great potential impact. The paper is very well written. The authors motivate their design decisions in detail and report numerical experiments that are quite thorough and indicate that the technique is successful for challenging sparse-decomposition problems.
Weaknesses: A lot of the details about the implementation of the method and the experiments are deferred to the supplementary material, so that the main paper is a bit vague. I find this understandable due to length limitations. |
nips_2017_3092 | Convolutional Phase Retrieval
We study the convolutional phase retrieval problem, which considers recovery of an unknown signal x ∈ C^n from m measurements consisting of the magnitude of its cyclic convolution with a known kernel a of length m. This model is motivated by applications to channel estimation, optics, and underwater acoustic communication, where the signal of interest is acted on by a given channel/filter, and phase information is difficult or impossible to acquire. We show that when a is random and m is sufficiently large, x can be efficiently recovered up to a global phase using a combination of spectral initialization and generalized gradient descent. The main challenge is coping with dependencies in the measurement operator; we overcome this challenge by using ideas from decoupling theory, suprema of chaos processes and the restricted isometry property of random circulant matrices, and recent analysis for alternating minimization methods. | The authors of the article consider phase retrieval problems where the measurements have a specific form: they are the convolution of the n-dimensional input signal x with a random filter of length m. The authors show that, provided that m is larger than O(n), up to log factors, and up to a term that is related to the sparsity of x in the Fourier domain, a simple non-convex algorithm can solve such problems with high probability.
The proof is quite technical. Indeed, the measurement vectors are highly correlated with one another. To overcome this difficulty, the authors use "decoupling techniques", together with results from [Rauhut, 2010] and [Krahmer, Mendelson, Rauhut, 2014].
I liked this article. I had not realized it was so complex to prove the convergence of a simple non-convex phase retrieval method in a very structured case, and, in my opinion, it is good that it has been done. The result by itself is interesting, and the proof techniques developed by the authors seem general enough to me so that they can be reused in other settings.
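To make the setup concrete for other readers, here is a toy end-to-end sketch (real-valued for simplicity, with an explicitly built partial-circulant measurement matrix; the step size and the details of the initialization are my choices, not the authors' exact algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 32, 512
a = rng.normal(size=m)                     # random filter of length m
x = rng.normal(size=n)                     # signal to recover

# Partial circulant measurement matrix built from cyclic shifts of the filter.
A = np.stack([np.roll(a, i)[:n] for i in range(m)])
y = np.abs(A @ x)                          # magnitude-only measurements

# Spectral initialization: top eigenvector of (1/m) A^T diag(y^2) A.
M = (A.T * y**2) @ A / m
_, V = np.linalg.eigh(M)                   # eigenvectors sorted ascending
z = V[:, -1] * np.sqrt(np.mean(y**2))      # estimate the norm from the data

# Generalized (amplitude-flow style) gradient descent.
for t in range(800):
    Az = A @ z
    z = z - 0.8 * (A.T @ (Az - y * np.sign(Az))) / m
err = min(np.linalg.norm(z - x), np.linalg.norm(z + x)) / np.linalg.norm(x)
print(err)                                 # small: recovery up to a global sign
```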
As previously said, the proof is quite technical, and long. There are 38 pages in the appendix. I tried to read as much of it as possible, but, given the time constraints of the review process, I certainly did not have the time to deeply understand everything. However, what I read seems both clear and solid.
The organization is also good. However, maybe the authors could discuss in more detail the case of coded diffraction patterns ([CLS15b]). Indeed, the case of phase retrieval with coded diffraction patterns is, at least at first sight, very similar to the case considered in the article: seen in the Fourier domain, coded diffraction patterns consist in convolving the input signal with (m/n) random filters of length n, while, in convolutional phase retrieval, the input signal is convolved with a single random filter of length m. I would appreciate it if the authors compared their result and proof techniques with the ones obtained for coded diffraction patterns.
Minor remarks and typos:
- Before Equation (4): add "that"?
- l.95 (and also in the "basic notations" paragraph of the appendix): ", because" should be ". Because".
- Paragraph 3.1: I did not know the name "alternating direction method". I think that "alternating projection method" is more common.
- Equation (12), and the previous two: "i\phi" is missing in the exponential function.
- l.180: ". Controlling" should be ", controlling".
- l. 209: "are not" should be "is not".
- Equation (19) (and the one after Equation (46) in the appendix): it took me a while to understand why this equation was true. Maybe it could be proved?
- l.220: "this control obtains" should be "we obtain this control".
- Equations (21) and (23): \mathcal{Q} should be \hat\mathcal{L}.
- The paragraph starting at line 244 seems a bit redundant with the explanations around Equation (22).
- Experiments: the main theorem says that, when x is proportional to the all-one vector, the initialization step still works, but the local descent does not. It would be nice to add an experiment that checks whether this is really what happens in numerical simulations.
- [CC15] has been published.
- The accent on "Candès" is missing in some of the bibliography entries.
- Appendix, l.58: "we proof" should be "we prove".
- Proposition 2.2: some "that" and "such that" could be added.
- l.92: "be" should be "is".
- l.150: I do not understand "for \gamma_2 functional of the set \mathcal{A}".
- Lemma 3.11: the last equation contains a \sqrt{\delta} while, in the statement, it is a \delta (no square root).
- l.202: I think that a "1/m" is missing in front of the expectation.
- l.217: "for w" should be "for all w".
- l.220: "define" should be "be defined".
- l.222: "define" should be "defined".
- l.224: "is constant" should be "is a constant".
- Equation after line 226: I think that there should be no parentheses inside the sup.
- Theorem 4.3: "are" should be "be".
- l.268: a word is missing before "that".
- Second line of the equation after line 278: the upper bound in the second integral should depend on d, not v.
- l. 279: "where the" should be "where in the", and "Choose" should be "choosing".
- Equation after line 292: I do not understand why the first inequality is true.
- l. 297: "so that" should be removed.
- l.305: "is complex" should be "is a complex".
- Equation just before line 326: there should be parentheses aroung the logarithms.
- Lemma 5.3: I do not understand the meaning of the first inequality: what is the difference between the first and the second term?
- l.466: "is close" should be "are close".
- l.481: "are defined" should be "is defined".
- l.484 and 486: "first" and "second" have been exchanged.
- I think that "we have [Equation] holds" (used multiple times) could be replaced with "we have that [Equation] holds" or simply "[Equation] holds".
- Equation after line 492: it could be said that it is a consequence of Lemma 6.2.
- Lemma 6.5, main equation: a multiplicative constant is missing before \delta.
- Equation after l.510, sixth line and after, until the end of the proof: the \leq sign should be a \geq sign.
- Equation after l.513: why w^{(r)}? Also, I think that there should maybe be multiplicative constants close to 1 before the last two terms of the sum.
- Equation after l.551: I do not understand the presence of ||x||.
- Lemma 6.3 and lemma 8.1 are not perfectly identical (there is a 2 in 6.3 that becomes a 2+\epsilon in 8.1).
- References: the name of the authors is missing in reference [led07]. |
nips_2017_2460 | QMDP-Net: Deep Learning for Planning under Partial Observability
This paper introduces the QMDP-net, a neural network architecture for planning under partial observability. The QMDP-net combines the strengths of model-free learning and model-based planning. It is a recurrent policy network, but it represents a policy for a parameterized set of tasks by connecting a model with a planning algorithm that solves the model, thus embedding the solution structure of planning in a network learning architecture. The QMDP-net is fully differentiable and allows for end-to-end training. We train a QMDP-net on different tasks so that it can generalize to new ones in the parameterized task set and "transfer" to other similar tasks beyond the set. In preliminary experiments, QMDP-net showed strong performance on several robotic tasks in simulation. Interestingly, while QMDP-net encodes the QMDP algorithm, it sometimes outperforms the QMDP algorithm in the experiments, as a result of end-to-end learning. | The paper proposes a novel policy network architecture for partially observable environments. The network includes a filtering module and a planning module. The filtering module mimics the computation of the current belief of the agent given its previous belief, the last action and the last observation. The model of the environment and the observation function are replaced with trainable neural modules. The planning module runs value iteration for an MDP, whose transition function and reward function are also trainable neural modules. The filtering module and the planning module are stacked to resemble the QMDP algorithm for solving POMDPs. All the modules are trained jointly according to an imitation learning criterion, i.e. the whole network is trained to predict the next action of an expert given the past observations and actions. The experimental evaluation shows better generalization to new tasks.
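(For reference, a minimal sketch of the exact, non-learned computation the two modules emulate: a Bayes filter followed by QMDP-style value iteration. In the QMDP-net itself, the transition, observation and reward tensors below are replaced by trainable neural modules and trained end-to-end; the shapes assume a discrete POMDP and are my own illustrative convention.)

```python
import numpy as np

def bayes_filter(b, a, o, T, O):
    # Belief update the filtering module mimics: b'(s') ∝ O[s', o] * sum_s T[s, a, s'] b(s).
    b_pred = b @ T[:, a, :]      # prediction step through the transition model
    b_new = O[:, o] * b_pred     # correction step with the observation likelihood
    return b_new / b_new.sum()

def qmdp_action(b, T, R, gamma=0.95, K=50):
    # Planning module's computation: K steps of value iteration on the underlying
    # MDP, then QMDP action selection argmax_a sum_s b(s) Q(s, a).
    S, A = R.shape
    Q = np.zeros((S, A))
    for _ in range(K):
        V = Q.max(axis=1)
        Q = R + gamma * np.einsum('sap,p->sa', T, V)
    return int(np.argmax(b @ Q))
```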
The approach can be seen as an extension of Value Iteration Networks (VIN) to partially observable environments by means of adding the filtering network. The paper is in general well written. I did not understand, though, why the internal model of the network does not use the same observation and action spaces as the real model of the world. Sentences like "While the belief image is defined in M, action inputs are defined in \hat{M}" were therefore very hard to understand. The choice of the notation f^t_T is quite confusing.
The experimental evaluation seems correct. I would suggest a convolutional LSTM as another baseline for the grid world. A fully-connected LSTM is obviously worse equipped than the proposed model to handle its local structure. I am somewhat suspicious that the fact that the expert trajectories came from QMDP could give QMDP-net an unfair advantage; it would be good if the paper clearly argued why that is not the case.
One limitation of this line of research (including VIN) is that it assumes discrete state spaces and that the amount of computation is proportional to the number of states. As far as I can see, this makes the approach inapplicable to real-world problems with high-dimensional state spaces.
Overall, the paper proposes a novel and elegant approach to building policy networks and validates it experimentally. I recommend accepting the paper.
nips_2017_3001 | Learning Neural Representations of Human Cognition across Many fMRI Studies
Cognitive neuroscience is enjoying a rapid increase in extensive public brain-imaging datasets. This opens the door to large-scale statistical models. Finding a unified perspective for all available data calls for scalable and automated solutions to an old challenge: how to aggregate heterogeneous information on brain function into a universal cognitive system that relates mental operations/cognitive processes/psychological tasks to brain networks? We cast this challenge as a machine-learning problem: predicting conditions from statistical brain maps across different studies. For this, we leverage multi-task learning and multi-scale dimension reduction to learn low-dimensional representations of brain images that carry cognitive information and can be robustly associated with psychological stimuli. Our multi-dataset classification model achieves the best prediction performance on several large reference datasets, compared to models without cognitive-aware low-dimension representations; it brings a substantial performance boost to the analysis of small datasets, and can be introspected to identify universal template cognitive concepts.
Due to the advent of functional brain-imaging technologies, cognitive neuroscience is accumulating quantitative maps of neural activity responses to specific tasks or stimuli. A rapidly increasing number of neuroimaging studies are publicly shared (e.g., the human connectome project, HCP [1]), opening the door to applying large-scale statistical approaches [2]. Yet, it remains a major challenge to formally extract structured knowledge from heterogeneous neuroscience repositories. As stressed in [3], aggregating knowledge across cognitive neuroscience experiments is intrinsically difficult due to the diverse nature of the hypotheses and conclusions of the investigators. Cognitive neuroscience experiments aim at isolating brain effects underlying specific psychological processes: they yield statistical maps of brain activity that measure the neural responses to carefully designed stimuli. Unfortunately, neither regional brain responses nor experimental stimuli can be considered to be atomic: a given experimental stimulus recruits a spatially distributed set of brain regions [4], while each brain region is observed to react to diverse stimuli. Taking advantage of the resulting data richness to build formal models describing psychological processes requires describing each cognitive
conclusion on a common basis for brain response and experimental study design. Uncovering atomic basis functions that capture the neural building blocks underlying cognitive processes is therefore a primary goal of neuroscience [5], for which we propose a new data-driven approach.
Several statistical approaches have been proposed to tackle the problem of knowledge aggregation in functional imaging. A first set of approaches relies on coordinate-based meta-analysis to define robust neural correlates of cognitive processes: those are extracted from the descriptions of experiments (based on categories defined by text mining [6] or by experts [7]) and correlated with brain coordinates related to these experiments. Although quantitative meta-analysis techniques provide useful summaries of the existing literature, they are hindered by label noise in the experiment descriptions, and weak information on brain activation as the maps are reduced to a few coordinates [8]. A second, more recent set of approaches directly models brain maps across studies, either focusing on studies of similar cognitive processes [9], or tackling the entire scope of cognition [10,11]. Decoding, i.e. predicting the cognitive process from brain activity, across many different studies touching different cognitive questions is a key goal for cognitive neuroimaging, as it provides a principled answer to reverse inference [12]. However, a major roadblock to scaling this approach is the necessity to label cognitive tasks across studies in a rich but consistent way, e.g., building an ontology [13].
We follow a more automated approach and cast dataset accumulation into a multi-task learning problem: our model is trained to decode simultaneously different datasets, using a shared architecture. Machine-learning techniques can indeed learn universal representations of inputs that give good performance in multiple supervised problems [14,15]. They have been successful, especially with the development of deep neural networks [see, e.g., 16], in sharing representations and transferring knowledge from one dataset prediction model to another (e.g., in computer vision [17] and audio processing [18]). A popular approach is to simultaneously learn to represent the inputs of the different datasets in a low-dimensional space and to predict the outputs from the low-dimensional representatives. Using very deep model architectures in functional MRI is currently thwarted by the signal-to-noise ratio of the available recordings and the relatively small size of datasets [19] compared to computer vision and text corpora. Yet, we show that multi-dataset representation learning is a fertile ground for identifying cognitive systems with predictive power for mental operations.
Contribution. We introduce a new model architecture dedicated to multi-dataset classification, which performs two successive linear dimension reductions of the input statistical brain images and predicts psychological conditions from a learned low-dimensional representation of these images, linked to cognitive processes. In contrast to previous ontology-based approaches, imposing a structure across different cognitive experiments is not needed in our model: the representation of brain images is learned using the raw set of experimental conditions for each dataset. To our knowledge, this work is the first to propose knowledge aggregation and transfer learning between functional MRI studies with such a modest level of supervision. We demonstrate the performance of our model on several openly accessible and rich reference datasets in the brain-imaging domain. The different aspects of its architecture bring a substantial increase in out-of-sample accuracy compared to models that forgo learning a cognitive-aware low-dimensional representation of brain maps. Our model remains simple enough to be interpretable: it can be collapsed into a collection of classification maps, while the space of low-dimensional representatives can be explored to uncover a set of meaningful latent components. | This paper introduces a framework to analyze heterogeneous fMRI datasets combining multi-task supervised learning with multi-scale dictionary learning. The result shows that classification of task conditions in a target dataset can be improved by integrating the large-scale database of the human connectome project (HCP) using their method.
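(A rough PyTorch sketch of the described architecture: a shared linear reduction into a low-dimensional cognitive space, Dropout, and one multinomial head per study over that study's raw conditions. The projection onto the multi-scale spatial dictionary is assumed precomputed, and all dimensions and the Dropout rate are illustrative.)

```python
import torch
import torch.nn as nn

class MultiStudyClassifier(nn.Module):
    def __init__(self, dict_dim=336, latent_dim=50, conditions_per_study=(17, 23, 30)):
        super().__init__()
        # Second linear reduction: dictionary loadings -> shared cognitive space.
        self.shared = nn.Linear(dict_dim, latent_dim, bias=False)
        self.dropout = nn.Dropout(p=0.5)
        # One head per study, over that study's own set of experimental conditions.
        self.heads = nn.ModuleList(nn.Linear(latent_dim, c) for c in conditions_per_study)

    def forward(self, x, study):
        # x: brain maps already projected onto the multi-scale spatial dictionary.
        return self.heads[study](self.dropout(self.shared(x)))

# Multi-task training: batches alternate between studies; all losses share `shared`.
model = MultiStudyClassifier()
x, y = torch.randn(32, 336), torch.randint(0, 17, (32,))
loss = nn.CrossEntropyLoss()(model(x, study=0), y)
loss.backward()
```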
The entire proposed framework looks fairly reasonable to me. However, the method contains many different technical ideas, and not every one of them is well validated. The technical ideas include at least the following: 1) use of resting-state data instead of task data for training the first stage; 2) use of a multi-scale spatial dictionary combining multiple runs of sparse dictionary learning; 3) using Dropout for regularization; and 4) adding HCP data to improve classification on another target dataset. In my view, the current results only support point 4. The following comments are related to the other points.
In the experiment (Sec. 2.2), the comparison between single- and multi-scale dictionaries (methods 2 and 3) seems to be made with different dictionary sizes (256 and 336). However, I think they should be balanced for a fair comparison (for instance, learn 336 atoms at once for the non-multi-scale dictionary), since currently the effects of having different dimensionalities and different spatial scales cannot be dissociated. I would also suggest comparing with simply using PCA to reduce to the same dimensionality, which may clarify the advantage of using localized features.
Moreover, methods 1-3 use L2-regularization but methods 4-5 use Dropout, which may also not be a fair comparison. What happens if you use L2-regularization for methods 4-5, or alternatively Dropout for methods 1-3?
I also wondered what happens if the spatial dictionary is trained on the task data rather than the resting-state data. The use of resting-state data for training sounds reasonable for the goal of "uncovering atomic bases" (Introduction), but it would be necessary to compare the two cases in terms of both test accuracy and interpretability. In particular, can spatial maps similar to those in Figs. 4 and 5 be obtained even without the resting-state decomposition?
Other minor comments:
- Eq1,3: what does the objective actually mean, when taking expectation with respect to M? Is it correct that the objective is minimized rather than maximized?
- Eq3: "log" seems to be dropped.
- Fig. 2: what are the chance levels for each dataset? No result is shown for "full multinomial" on LA5c.
- Line 235: what does "base condition" mean?
nips_2017_3000 | Multiplicative Weights Update with Constant Step-Size in Congestion Games: Convergence, Limit Cycles and Chaos
The Multiplicative Weights Update (MWU) method is a ubiquitous meta-algorithm that works as follows: A distribution is maintained on a certain set, and at each step the probability assigned to action γ is multiplied by (1 − εC(γ)) > 0, where C(γ) is the "cost" of action γ and ε is the learning rate, and then rescaled to ensure that the new values form a distribution. We analyze MWU in congestion games where agents use arbitrary admissible constants as learning rates and prove convergence to exact Nash equilibria. Interestingly, this convergence result does not carry over to the nearly homologous MWU variant where at each step the probability assigned to action γ is multiplied by (1 − ε)^C(γ), even for the simplest case of two-agent, two-strategy load balancing games, where such dynamics can provably lead to limit cycles or even chaotic behavior. | The paper revisits the convergence result of the multiplicative weights update (MWU) in a congestion game, due to Kleinberg, Piliouras, and Tardos (STOC 2009), and establishes a connection between MWU and the Baum-Welch algorithm in a neat and elegant style. By showing the monotonicity of the potential function in a congestion game, the authors prove that any MWU with the linear updating rule converges to the set of fixed points, which is a superset of all the Nash equilibria of the congestion game. The Baum-Eagon inequality offers a new interpretation of the dynamics of the linear updating rule in MWU, and their results in congestion games are quite general, and so are the conditions on initialization and on isolated Nash equilibria in a congestion game with a finite set of agents and pure strategies.
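(To make the distinction concrete, the two nearly homologous update rules side by side in a minimal sketch; it assumes costs are normalized so that 1 − εC(γ) stays positive.)

```python
import numpy as np

def mwu_linear(p, costs, eps):
    # Linear variant: p_gamma <- p_gamma * (1 - eps * C(gamma)), then renormalize.
    # This is the rule shown to converge to Nash equilibria in congestion games.
    p = p * (1.0 - eps * costs)
    return p / p.sum()

def mwu_exponential(p, costs, eps):
    # Exponential variant: p_gamma <- p_gamma * (1 - eps)**C(gamma), then renormalize.
    # The paper shows this can produce limit cycles or chaos even in 2x2 load balancing.
    p = p * (1.0 - eps) ** costs
    return p / p.sum()
```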
The results in this paper hold for any congestion game, irrespective of the topology of the strategy sets, by the nature of the argument; however, one should emphasize the assumption that the individual learning rates $\epsilon_i$ are bounded above in order for the Baum-Eagon inequality to work in their context. A more elaborate analysis of the upper bounds would be nice, since for any $i$, this paper requires that
$$\frac{1}{\epsilon_i} > \sup_{i,\mathbf{p}\in \Delta,\gamma\in S_i} \{c_{i\gamma}\}\geq \max_{e} c_e(|N|),$$ which can be huge when the number of agents is large enough.
Although Theorem 3.7 holds, the proof presented by the authors is flawed. To prove that any point $\mathbf{y} \in \Omega$ is a fixed point of $\zeta$, one could guess from the argument in the paper that the authors intend to show that $\Phi(\mathbf{y}) = \Phi(\zeta(\mathbf{y}))$, so that, by monotonicity of $\Phi$, $\zeta(\mathbf{y})=\mathbf{y}$. However, they apply the notation $\mathbf{y}(t)$, even though $\mathbf{y}$ is, according to their definition, the limit point of an orbit $\mathbf{p}(t)$ as $t$ goes to infinity. The subsequent argument is also hard to follow.
An easy way to prove Theorem 3.7 is by contradiction. Suppose that $\mathbf{y}$ is not a fixed point; then by definition (line 221), there exist some $i,\gamma$ s.t. $y_{i\gamma} > 0$ and $c_{i\gamma}\neq \hat{c}_i$. If $c_{i\gamma} > \hat{c}_i$, then by convergence and continuity of the mapping, $\exists t_0$ and $\epsilon_0 > 0$ s.t. $\forall t > t_0$, $p_{i\gamma}(t) \geq \epsilon_0 > 0$ and $c_{i\gamma}(t) \geq \hat{c}_i(t)+ \epsilon_0$. Then it is not hard to see that
$$p_{i\gamma}(t+1) = p_{i\gamma}(t) \frac{\frac{1}{\epsilon_i} - c_{i\gamma}(t) }{\frac{1}{\epsilon_i} - \hat{c}_{i}(t)} \leq p_{i\gamma}(t) (1- \epsilon_0 \epsilon_i),$$ which yields $\lim_{t\to \infty} p_{i\gamma}(t) = y_{i\gamma}=0$, contradicting $y_{i\gamma} > 0$.
If $c_{i\gamma} < \hat{c}_i$, then by convergence and continuity of the mapping, $\exists t_0$ and $\epsilon_0 > 0$ s.t. $\forall t > t_0$, $p_{i\gamma}(t) \geq \epsilon_0 > 0$ and $c_{i\gamma}(t) + \epsilon_0 < \hat{c}_i(t)$. Then it is not hard to see that $$p_{i\gamma}(t+1) = p_{i\gamma}(t) \frac{\frac{1}{\epsilon_i} - c_{i\gamma}(t) }{\frac{1}{\epsilon_i} - \hat{c}_{i}(t)} \geq p_{i\gamma}(t) (1+ \epsilon_0 \epsilon_i),$$ so $p_{i\gamma}(t)$ grows geometrically and $\lim_{t\to \infty} \mathbf{p}(t)$ cannot stay in the simplex.
In Remark 3.5, the authors claim that Lemma 3.3, and thus Theorems 3.4 and 3.1, hold in the extension to weighted potential games. However, an underlying step in the proof of Lemma 3.3 might not hold without extra conditions: the non-negativity of the coefficients in $Q(p)$ in a weighted potential game. Similar to the proof of Lemma 3.3, in the weighted case, $\frac{\partial Q}{\partial p_{i\gamma}} = \frac{1}{\epsilon_i} - \frac{c_{i\gamma}}{w_i}$. Whether the RHS is non-negative does not follow simply from the definition of $\epsilon_i$, nor from the fact that $w_i \in [0,1]$.
On line 236, the constant in $Q(p)$ should rather be $Q(p) = const - \Phi$ with $const = \sum_{i\in N} 1/\epsilon_i + (|N|-1)/\beta$.
nips_2017_1452 | Using Options and Covariance Testing for Long Horizon Off-Policy Policy Evaluation
Evaluating a policy by deploying it in the real world can be risky and costly. Off-policy policy evaluation (OPE) algorithms use historical data collected from running a previous policy to evaluate a new policy, which provides a means for evaluating a policy without requiring it to ever be deployed. Importance sampling is a popular OPE method because it is robust to partial observability and works with continuous states and actions. However, the amount of historical data required by importance sampling can scale exponentially with the horizon of the problem: the number of sequential decisions that are made. We propose using policies over temporally extended actions, called options, and show that combining these policies with importance sampling can significantly improve performance for long-horizon problems. In addition, we can take advantage of special cases that arise due to options-based policies to further improve the performance of importance sampling. We further generalize these special cases to a general covariance testing rule that can be used to decide which weights to drop in an IS estimate, and derive a new IS algorithm called Incremental Importance Sampling that can provide significantly more accurate estimates for a broad class of domains. | [I have read the other reviews and the author feedback, and updated my rating to borderline accept. The paper would be substantially stronger with more realistic experiments.
In the absence of such experiments, I suspect that the IncrIS estimator will run into substantial computational challenges and sub-optimal MSE in realistic scenarios.]
The paper introduces new off-policy estimators to evaluate MDP policies. The options-based estimator of Theorem 3 is a straightforward adaptation of per-step importance sampling.
Section 6 makes an interesting observation: if at any step in the trajectory we know that the induced state distribution is the same between the behavior policy and the evaluation policy, we can partition the trajectory into two sub-parts and conduct per-step importance sampling on each sub-part. This observation may prove to be more generally useful; in this paper, the authors exploit it to propose the IncrIS estimator.
IncrIS uses sample variance and covariance estimates (presumably; this detail is missing from their experiment description) to detect what prefix of the sampled trajectories can be safely ignored [i.e. yields a smaller (estimated) MSE for per-step importance sampling on the resulting suffix]. This is an interesting approach to the bias-variance trade-offs in importance-sampling-estimator design that can be explored further; e.g. covariance testing requires us to fix the choice of k (the ignored prefix length) across the entire set of trajectories. Can we condition on the observed trajectories to adaptively discard importance weights?
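(A toy sketch of the prefix-dropping idea, not the authors' exact estimator: per-step ratios rho[i][t] = pi_e(a_t|s_t) / pi_b(a_t|s_t) are assumed precomputed, and the covariance-test selection rule is replaced by a naive plug-in MSE estimate whose bias term is a crude proxy of my own.)

```python
import numpy as np

def incr_is(rho, returns):
    # rho: (n, H) per-step importance ratios; returns: (n,) trajectory returns.
    # For each prefix length k, ignore the first k importance weights, estimate
    # with the suffix product, and pick k minimizing an estimated MSE.
    n, H = rho.shape
    candidates = []
    for k in range(H):
        w = np.prod(rho[:, k:], axis=1)
        est = w * returns
        bias_proxy = abs(w.mean() - 1.0) * np.abs(returns).mean()  # crude (assumption)
        mse_hat = est.var(ddof=1) / n + bias_proxy ** 2
        candidates.append((mse_hat, est.mean(), k))
    _, estimate, k_star = min(candidates)
    return estimate, k_star
```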
In summary, the contributions up to Section 6 seem very incremental. IncrIS introduces an intriguing bias-variance trade-off, but the experimental evidence (MSE in a constructed toy MDP) is not very convincing. In realistic OPE settings, it is likely that horizons will be large (so IncrIS can shine), but there may be lots of sampled trajectories (a computational challenge for IncrIS when computing \hat{MSE}), and we almost certainly won't be in the asymptotic regime where we can invoke strong consistency of IncrIS (so \hat{MSE} may yield severely sub-optimal choices, and IncrIS may have poor empirical MSE). Given these intuitions, a more realistic experiment would be much more informative in exploring the strengths and weaknesses of the proposed estimators.
Minor:
66: \ldots
112: one *can* leverage *the* advantage? |
nips_2017_2353 | Online Learning of Optimal Bidding Strategy in Repeated Multi-Commodity Auctions
We study the online learning problem of a bidder who participates in repeated auctions. With the goal of maximizing his T-period payoff, the bidder determines the optimal allocation of his budget among his bids for K goods at each period. As a bidding strategy, we propose a polynomial-time algorithm, inspired by the dynamic programming approach to the knapsack problem. The proposed algorithm, referred to as dynamic programming on discrete set (DPDS), achieves a regret order of O(√(T log T)). By showing that the regret is lower bounded by Ω(√T) for any strategy, we conclude that DPDS is order optimal up to a √(log T) term. We evaluate the performance of DPDS empirically in the context of virtual trading in wholesale electricity markets by using historical data from the New York market. Empirical results show that DPDS consistently outperforms benchmark heuristic methods that are derived from machine learning and online learning approaches. | This paper studies the online learning (stochastic and full-information) problem of bidding in multi-commodity first price auctions. The paper introduces a polynomial-time algorithm that achieves a regret of O(√(T log T)), which has a near-optimal dependence on T.
The main challenge the paper deals with is finding a computationally efficient algorithm for computing the best bidding strategy given a known distribution. The authors first demonstrate that natural approaches for solving this problem exactly are not computationally efficient (this is not a formal NP-hardness proof). Then, they provide an FPTAS for solving the problem using dynamic programming. Once they have an FPTAS for the offline problem, their results hold for the stochastic online setting using existing reductions. I haven't carefully looked into the details of their analysis of the dynamic programming, but I think its effectiveness here is interesting and surprising, especially given that the variant of this problem for second price auctions is hard to approximate.
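(To illustrate the offline subproblem underlying DPDS, a knapsack-style dynamic program over a discretized bid grid; it assumes expected payoffs exp_payoff[k][b] for spending b grid units on good k can be evaluated from the known or estimated price distribution, and it ignores the discretization-error analysis that drives the FPTAS guarantee.)

```python
import numpy as np

def dp_bid_allocation(exp_payoff):
    # exp_payoff: (K, B+1) array; exp_payoff[k, b] = expected payoff of bidding
    # b budget units on good k. Returns the optimal value and one optimal split.
    K, B1 = exp_payoff.shape
    B = B1 - 1
    V = np.zeros(B + 1)                       # best payoff over goods processed so far
    choice = np.zeros((K, B + 1), dtype=int)
    for k in range(K):
        V_new = np.full(B + 1, -np.inf)
        for b in range(B + 1):
            for bid in range(b + 1):          # units allocated to good k
                val = V[b - bid] + exp_payoff[k, bid]
                if val > V_new[b]:
                    V_new[b], choice[k, b] = val, bid
        V = V_new
    bids, b = [], B                           # backtrack the optimal allocation
    for k in reversed(range(K)):
        bids.append(int(choice[k, b]))
        b -= choice[k, b]
    return V[B], bids[::-1]
```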
As for the weaknesses, I find the theorems hard to interpret, particularly because there are a lot of variables in them. The authors should provide one simple variant of Theorem 1 with fewer parameters. For example, when l = 0 and u = 1, the Lipschitz condition holds with L = 1 and p = 1, and B = K. Setting \alpha = \sqrt{T} then gives a regret of \tilde O(\sqrt{T \log T}).
As mentioned above, the results hold for the full-information online stochastic setting, that is, the valuation and the clearing price are drawn from the same distribution at every step. I find this a strange choice of setting. In most cases, full-information settings are accompanied by adversarial arrival of the actions rather than stochastic arrival. This is indeed because the statistical and computational aspects of the online stochastic full-information setting are very similar to those of the offline stochastic full-information setting. So, I think a much more natural setting would have been the adversarial online setting.
In this regard, there are two very relevant works on no-regret learning in auctions that should be cited here. Daskalakis and Syrgkanis (FOCS'16) and, later, Dudik et al. (FOCS'17) study the problem of online learning in simultaneous second price auctions (SiSPAs). This is also a problem of optimal bidding in a repeated multi-commodity auction, where each item is assigned based on a second price auction rather than the first price auction considered in this paper. In particular, the latter paper discusses computationally efficient mechanisms for obtaining a no-regret algorithm in the adversarial setting when one can solve the offline problem efficiently. Their methods seem to apply to the problem discussed in this paper and might be especially useful since this paper provides an approximation method for solving the offline problem. It is worth seeing whether the machinery used in their paper can directly strengthen your result so that it holds for the adversarial online setting.
After rebuttal:
I suggest that the authors take a closer look at the works of Dudik et al. and Daskalakis and Syrgkanis. In their response, the authors mention that those works are inherently for discrete spaces. From a closer look at the work of Dudik et al., it seems that they also handle some auctions in the continuous space by first showing that the auction space can be discretized. This discretization step is quite common in no-regret learning in auctions. Given that the current submission also shows that one can discretize the auction space first, I think the results of Dudik et al. can be readily applied here. This would strengthen the results of this paper and make the setting more robust. I suggest that the authors take a closer look at these works when preparing their discussion.
nips_2017_2135 | Optimized Pre-Processing for Discrimination Prevention
Non-discrimination is a recognized objective in algorithmic decision making. In this paper, we introduce a novel probabilistic formulation of data pre-processing for reducing discrimination. We propose a convex optimization for learning a data transformation with three goals: controlling discrimination, limiting distortion in individual data samples, and preserving utility. We characterize the impact of limited sample size in accomplishing this objective. Two instances of the proposed optimization are applied to datasets, including one on real-world criminal recidivism. Results show that discrimination can be greatly reduced at a small cost in classification accuracy. | This paper presents a constrained optimization framework to address (ethically/socially sensitive) discrimination in a classification setting. Such discrimination can occur in machine learning because (1) the training data was obtained from an unfair world and reflects that unfairness, (2) the algorithm trained on the data may introduce biases or unfairness even if there are no issues in the training data, or (3) the output is used in a discriminatory fashion. The present paper falls in the category of preprocessing, meaning it attempts to address unfairness coming from (1) by transforming the data to reduce or eliminate its discriminatory aspects.
As notation, D represents one or more variables of a sensitive nature for which we wish to reduce discrimination, such as race or sex, X represents other predictors, and Y the outcome variable for (binary) classification. Existing literature on fairness and discrimination contains a variety of formal definitions of (un)fairness. Two such definitions are used here: (I) the dependence of the predicted outcome \hat Y on D should be low, and (II) similar individuals should receive similar predicted outcomes. These two criteria form the constraints of the optimization problem posed in this paper, and the objective is to minimize a probability distance measure between the joint distribution of the transformed data and the joint distribution of the original data. The paper gives conditions under which the optimization problem is (quasi)convex, and proves a result bounding errors resulting from misspecification of the prior distribution of the data. The utility of the method is demonstrated empirically on two datasets and shown to be competitive with the most similar existing method of Zemel et al. (2013).
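(To make the formulation concrete, a toy instance of the optimization in cvxpy, with binary D and Y and six discrete (x, y) states; the distance measure, distortion metric, and tolerance values below are placeholders, not the paper's calibrated choices.)

```python
import cvxpy as cp
import numpy as np

S = 6                                   # discrete (x, y) states; y = parity of the index
y_of = np.arange(S) % 2
p = {0: np.full(S, 1 / S),              # P(X, Y | D = 0)
     1: np.array([0.30, 0.10, 0.20, 0.10, 0.20, 0.10])}      # P(X, Y | D = 1)
q = {0: 0.5, 1: 0.5}                    # P(D)
dist = np.abs(np.subtract.outer(np.arange(S), np.arange(S)))  # toy distortion metric

M = {d: cp.Variable((S, S), nonneg=True) for d in (0, 1)}     # randomized mappings
cons = [cp.sum(M[d], axis=1) == 1 for d in (0, 1)]            # rows are distributions

# Discrimination control: transformed P(Y_hat | D = d) close across groups.
Y = np.stack([(y_of == y).astype(float) for y in (0, 1)], axis=1)
cons += [cp.abs(p[0] @ M[0] @ Y - p[1] @ M[1] @ Y) <= 0.01]

# Individual distortion control: expected displacement bounded per original state.
cons += [cp.sum(cp.multiply(M[d], dist), axis=1) <= 1.0 for d in (0, 1)]

# Utility: keep the transformed joint close to the original one.
orig = q[0] * p[0] + q[1] * p[1]
new = q[0] * (p[0] @ M[0]) + q[1] * (p[1] @ M[1])
cp.Problem(cp.Minimize(cp.norm1(new - orig)), cons).solve()
```

In this toy form, the infeasibility of tight tolerance pairs raised below simply shows up as the solver reporting an infeasible status.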
One important limitation of this paper is the assumption that the joint distribution of the data is known. Proposition 2 is a good attempt at addressing this limitation. However, there may be subtle issues related to fairness encoded in the joint distribution that are only hinted at in the Supplementary Material in Figures 2-4. For example, what does it mean for two individuals with different values of D to be otherwise similar? Are there important aspects of similarity not captured in the available data, i.e., confounding variables? These questions are fundamental limitations of individual fairness (IF), and while I may have objections to (IF), I recognize it has been accepted in previous literature and is not a novel proposal of the present work. However, even accepting (IF) without objections, Proposition 2 appears to address only the other two aspects of the optimization problem, and does not address the effect of prior distribution misspecification on (IF). If this could be changed, it would strengthen the paper. Some acknowledgement of the subtlety in defining fairness (issues concerning the relations between D and other variables in X, and the implications of those relations on Y, which are addressed fairly implicitly in (IF)) would also strengthen the exposition. See, for example, Johnson et al., "Impartial Predictive Modeling: Ensuring Fairness in Arbitrary Models," or Kusner et al., "Counterfactual Fairness."
Another limitation of the present work is the large number of calibration or tuning parameters involved. There are tolerances for each constraint, and the potentially contradictory nature of the two kinds of constraints (discrimination vs. distortion) is acknowledged only in the Supplementary Materials, where Figure 1 shows that some constraint tolerances may be infeasible. Addressing this in the main text and giving conditions (perhaps in the Supplementary Materials) under which the optimization problem is feasible would also strengthen the paper.
Finally, the choice of the distortion metric is both a subtle issue in which the fundamental goal of fairness is entirely implicit (a limitation of (IF)) and a potentially high-dimensional calibration problem. The metric can be customized on a per-application basis using domain and expert knowledge, which is both a limitation and a strength. How should this be done so that the resulting definition of (IF) is sensible? Are there any guidelines or default choices, or references to other works addressing this? Figure 2 is rather uninformative since neither the present method nor its competitor has been tuned. The comparison is (probably) fair, but the reader is left wondering what the best performance of either method might look like, and whether one strictly dominates the other.
* Quality
The paper is well supported by theoretical analysis and empirical experiments, especially when the Supplementary Materials are included. The work is fairly complete, though as mentioned above it is both a strength and a weakness that there is much tuning, and other specifics of the implementation need to be determined on a case-by-case basis. It could be improved by giving some discussion of guidelines, principles, or references to other work explaining how tuning can be done, and some acknowledgement that the meaning of fairness may change dramatically depending on that tuning.
* Clarity
The paper is well organized and explained. It could be improved by some acknowledgement that there are a number of other (competing, often contradictory) definitions of fairness, and that the two appearing as constraints in the present work can in fact be contradictory in such a way that the optimization problem may be infeasible for some values of the tuning parameters.
* Originality
The most closely related work of Zemel et al. (2013) is referenced, the present paper explains how it is different, and comparisons are given in simulations. It could be improved by making these comparisons more systematic with respect to the tuning of each method, i.e., comparing the best performance of each.
* Significance
The broad problem addressed here is of the utmost importance. I believe the popularity of (IF) and modularity of using preprocessing to address fairness means the present paper is likely to be used or built upon. |
nips_2017_2134 | Nonlinear Acceleration of Stochastic Algorithms
Extrapolation methods use the last few iterates of an optimization algorithm to produce a better estimate of the optimum. They were shown to achieve optimal convergence rates in a deterministic setting using simple gradient iterates. Here, we study extrapolation methods in a stochastic setting, where the iterates are produced by either a simple or an accelerated stochastic gradient algorithm. We first derive convergence bounds for arbitrary, potentially biased perturbations, then produce asymptotic bounds using the ratio between the variance of the noise and the accuracy of the current point. Finally, we apply this acceleration technique to stochastic algorithms such as SGD, SAGA, SVRG and Katyusha in different settings, and show significant performance gains. | Summary of the paper
====================
In principle, the extrapolation algorithm devised by Scieur et al. (2016) forms a scheme that allows one to speed up a given optimization algorithm by averaging its iterates. The current paper extends this idea to the stochastic case by using the same averaging technique, only this time on noisy iterates. After recalling the results for the deterministic case (as shown in Scieur et al. (2016)), a theoretical upper bound is derived from these results by quantifying the amount of noise introduced into the optimization process. Finally, numerical experiments which demonstrate the speed-up offered by this technique for least-squares and finite-sum problems are provided.
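(For readers unfamiliar with Scieur et al. (2016), a numpy sketch of the regularized nonlinear acceleration step being reused here; the regularization value and the choice of which iterates to average are simplified assumptions, and in the stochastic extension the regularization has to be kept large relative to the noise level.)

```python
import numpy as np

def rna_extrapolate(X, lam=1e-8):
    # X: (d, k+1) matrix whose columns are the last k+1 iterates x_0, ..., x_k.
    # Find weights c with sum(c) = 1 that (approximately) cancel the linearized
    # error, i.e. minimize ||U c||^2 + lam ||c||^2 over the residual matrix U.
    U = np.diff(X, axis=1)                     # residuals x_{i+1} - x_i
    G = U.T @ U
    G = G / np.linalg.norm(G) + lam * np.eye(G.shape[0])
    z = np.linalg.solve(G, np.ones(G.shape[0]))
    c = z / z.sum()                            # coefficients summing to one
    return X[:, :-1] @ c                       # extrapolated estimate of the optimum

# Typical use: run SGD (or SAGA/SVRG) for k steps, extrapolate, optionally restart.
```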
Evaluation
==========
Extending this technique to stochastic algorithms is a crucial step, as this family of algorithms dominates many modern applications. From a practical point of view, the technique seems to provide a significant acceleration for some (but not all) of the state-of-the-art algorithms. Moreover, the theoretical analysis provided in the paper shows that this stochastic generalization effectively subsumes the deterministic case. However, the paper does not provide a satisfying analysis of the theoretical convergence rate expected when applying Algorithm 1 to existing stochastic algorithms (especially for finite-sum problems). It is therefore hard to predict in which settings, for which algorithms, and to what extent we should expect acceleration. In my opinion, this is a major weak point of the paper.
General Comments
================
- As mentioned above, I would suggest providing a more comprehensive discussion that gives more intuition about the benefits of this technique relative to other existing algorithms.
- Consider adding a more precise statement to address the theoretical convergence rate expected when applying this technique to a given optimization algorithm with a given convergence rate.
Minor Comments
==============
L13: f(x) -> f ?
L13: Smooth and strong convexity are not defined/addressed at this point of the paper.
L25: depends on
L31: Many algorithms do exhibit convergence properties which are adaptive to the effective strong convexity of the problem: vanilla gradient descent, SAG (https://arxiv.org/abs/1309.2388, Page 8, 'an interesting consequence').
L52: f(x) -> f ?
L55: Might be somewhat confusing to diverge from the conventional way of defining the condition number.
L61: The linear approximation may not be altogether trivial (it may also be worth mentioning that the gradient vanishes).
L72: in the description of Algorithm 1, using a boldface font to denote the vector of all ones is not aligned with the vector notation used throughout the paper, and the symbol is thus not defined a priori.
L81: can you elaborate more on why 'Convergence is guaranteed as long as the errors ...' implies 'ensures a good rate of decay for the regularization parameter'?
L103: lambda..
L118: I would split the technical statement from the concrete application for SGD.
L121: Maybe elaborate more on the technical difficulty encountered in proving a 'finite-step' version without the lim?
L136: G is symmetric?
L137: corresponds?
L137: I'd suggest further clarifying the connection with the Taylor remainder by making it more concrete.
L140: corresponds
L140: h in I-h A^\top A
L149: The
Figure 2 Legend: the 'acc' prefix is a bit confusing, as 'acceleration' is used somewhat differently in the context of continuous optimization.
L227: Can you give more intuition on why 'the algorithm has a momentum term' implies that the 'underlying dynamical system is harder to extrapolate'?