nips_2017_2213
Inverse Filtering for Hidden Markov Models

This paper considers a number of related inverse filtering problems for hidden Markov models (HMMs). In particular, given a sequence of state posteriors and the system dynamics: i) estimate the corresponding sequence of observations, ii) estimate the observation likelihoods, and iii) jointly estimate the observation likelihoods and the observation sequence. We show how to avoid a computationally expensive mixed integer linear program (MILP) by exploiting the algebraic structure of the HMM filter using simple linear algebra operations, and provide conditions for when the quantities can be uniquely reconstructed. We also propose a solution to the more general case where the posteriors are noisily observed. Finally, the proposed inverse filtering algorithms are evaluated on real-world polysomnographic data used for automatic sleep segmentation.
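For context, the forward filter whose algebraic structure the paper exploits is the standard HMM recursion (written here in our notation, not necessarily the paper's):

```latex
% Standard HMM filter recursion: P is the transition matrix, B(y_k) the
% diagonal matrix of observation likelihoods b_i(y_k), and \pi_k the
% posterior over states given y_1, \dots, y_k.
\pi_k \;=\; \frac{B(y_k)\, P^{\top} \pi_{k-1}}{\mathbf{1}^{\top} B(y_k)\, P^{\top} \pi_{k-1}}
```

The inverse problems run this map backwards: given the posteriors \pi_k and the dynamics P, recover B and/or the observation sequence y_1, ..., y_N.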
The paper addresses recovery of the observation sequence given known posterior state estimates, but unknown observations and/or sensor model, and, in an extension, noise-corrupted measurements. There is a nice progression of the problem through IP, LP, and MILP, followed by a more careful analytical derivation of the answers in the noise-free case, and a seemingly approximate though empirically effective approach (cf. Fig 2) using spherical K-means as a subroutine in the noisy case.

Honestly, most of the motivations seem unrealistic, especially the cyber-physical security setting, where one does not observe posteriors but simply an action based on a presumed argmax w.r.t. posteriors. The EEG application (while somewhat narrow) seems to be the best motivation; however, the sole example compares reconstructed observations to a redundant method of sensing -- is this really a compelling application? Is it actually used in practice? Some additional details are provided on page 8, but this is not enough for me to fully understand the rationale for the approach. One is given posterior state estimates of sleep stage for a new patient and the goal is to provide the corresponding mean EEG frequency observations? I don't see the point.

Minor terminological question: I've always viewed the HMM as a type of sequential graphical model structure allowing any type of random variable (discrete or continuous), of which discrete HMMs and Kalman filters (continuous Gaussian) are special cases. Does the terminology "HMM" really assume discrete random variables?

Overall, though the motivation for the work is not very strong for me, the results in this paper address a novel problem, and make technically innovative and honestly surprising insights in terms of the tractability and uniqueness of the recovery problems addressed in the paper. I see this paper as a very technically interesting solution that may find future application in use cases the authors have not considered.
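For readers unfamiliar with the subroutine mentioned above, here is a minimal spherical K-means sketch (ours, not the authors' implementation): unit-normalize the points and cluster by cosine similarity.

```python
import numpy as np

def spherical_kmeans(X, k, iters=50, seed=0):
    """Minimal spherical k-means: cluster unit vectors by cosine similarity."""
    rng = np.random.default_rng(seed)
    X = X / np.linalg.norm(X, axis=1, keepdims=True)   # project onto unit sphere
    C = X[rng.choice(len(X), k, replace=False)]        # init centroids from data
    for _ in range(iters):
        labels = np.argmax(X @ C.T, axis=1)            # assign by max cosine
        for j in range(k):
            members = X[labels == j]
            if len(members):
                c = members.sum(axis=0)
                C[j] = c / np.linalg.norm(c)           # renormalize mean direction
    return labels, C
```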
nips_2017_598
Learning Overcomplete HMMs

We study the problem of learning overcomplete HMMs: those that have many hidden states but a small output alphabet. Despite having significant practical importance, such HMMs are poorly understood, with no known positive or negative results for efficient learning. In this paper, we present several new results, both positive and negative, which help define the boundaries between the tractable and intractable settings. Specifically, we show positive results for a large subclass of HMMs whose transition matrices are sparse, well-conditioned, and have small probability mass on short cycles. On the other hand, we show that learning is impossible given only a polynomial number of samples for HMMs with a small output alphabet and whose transition matrices are random regular graphs with large degree. We also discuss these results in the context of learning HMMs which can capture long-term dependencies.
The paper studies the learnability of HMMs in the setting where the output label size m is smaller than the number of states n. This setting is particularly tricky since the usual full-rank assumption that comes up in tensor decomposition based methods is not valid. This paper gives both algorithms and information-theoretic lower bounds on the learnability of overcomplete HMMs, which clearly increases our state of understanding.

1. They show that HMMs with m = polylog(n), even when the transition matrix is a random dense regular graph, cannot be learned using poly(n) samples even when the window size is poly(n).
2. They show that overcomplete HMMs can be learned efficiently with window size O(log_m n), under four additional conditions (well-conditionedness of T, no short cycles, sparsity of the graph given by T, and random sparse output distributions).

Positives:
1. The lower bound is interesting and surprisingly strong. It is unlike the parity-based lower bounds in Mossel-Roch, which are more computational by nature, and more similar to moment-based identifiability lower bounds known for Gaussian mixtures and multiview models (see Rabani-Swamy-Schulman).
2. This paper tries to understand in depth the effect of the structure of the transition matrix on learnability in the overcomplete setting.
3. The techniques developed in this paper are new and significantly depart from previous works. For instance, the algorithmic result using tensor decompositions departs significantly from previous techniques in analyzing the Khatri-Rao (KR) product. Analyzing overcomplete tensor decompositions and these KR products is challenging, and the only non-generic analysis I am aware of (by Bhaskara et al. for smoothed analysis) cannot be used here since the different tensor modes are dependent. At the heart of this argument is an interesting coupling argument that shows that for any two different starting states, the random walk distributions after O(log_m n) steps will be significantly different if the observation matrices are random. This crucially depends on the well-conditionedness of T and the no-short-cycle condition, which they argue is needed for identifiability.
4. I like the discussion of assumptions in section 2.4; the paper assumes four conditions, and they give qualitative reasons for why most of them are necessary.

Negatives:
1. The proofs are fairly dense (especially Theorem 1, Lemma 3) and the error analysis is hand-wavy in places. Though I have no reason to suspect the correctness, it would be good to improve the readability of these proofs.
2. It is not clear why some of the conditions are necessary. For instance, the random sparse support assumption seems stronger than necessary. Similarly, while the lower bound example in Proposition 1 explains why an example with just short cycles is not identifiable, this doesn't explain why there should be no short cycles in a connected, well-conditioned T.

Summary: Overall, the paper presents a nice collection of results that significantly improves our understanding of overcomplete HMMs. I think the techniques developed here may be of independent interest as well. I think the paper is easily above the bar.

Comments: The identifiability results for HMMs using O(log_m n) windows should also be attributed to Allman-Matias-Rhodes (2010). They already show that the necessary conditions are true in a generic sense, and they introduced the algorithm based on tensor decompositions. Typo in Section 2.2: "The techniques of Bhaskara et al. can be applied.." --> "... cannot be applied ..."
nips_2017_1512
Nonlinear random matrix theory for deep learning

Neural network configurations with random weights play an important role in the analysis of deep learning. They define the initial loss landscape and are closely related to kernel and random feature methods. Despite the fact that these networks are built out of random matrices, the vast and powerful machinery of random matrix theory has so far found limited success in studying them. A main obstacle in this direction is that neural networks are nonlinear, which prevents the straightforward utilization of many of the existing mathematical results. In this work, we open the door for direct applications of random matrix theory to deep learning by demonstrating that the pointwise nonlinearities typically applied in neural networks can be incorporated into a standard method of proof in random matrix theory known as the moments method. The test case for our study is the Gram matrix M = Y^T Y / m with Y = f(WX), where W is a random weight matrix, X is a random data matrix, and f is a pointwise nonlinear activation function. We derive an explicit representation for the trace of the resolvent of this matrix, which defines its limiting spectral distribution. We apply these results to the computation of the asymptotic performance of single-layer random feature networks on a memorization task and to the analysis of the eigenvalues of the data covariance matrix as it propagates through a neural network. As a byproduct of our analysis, we identify an intriguing new class of activation functions with favorable properties.
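For reference, the resolvent trace mentioned in the abstract is the Stieltjes transform of the spectral measure, which determines the limiting spectral density (standard random matrix theory facts, our notation):

```latex
% Stieltjes transform of the empirical spectral distribution of M = Y^T Y / m,
% with Y = f(WX). Its limit determines the limiting spectral density \rho:
G(z) = \frac{1}{m}\,\mathbb{E}\!\left[\operatorname{tr}\,(M - z I)^{-1}\right],
\qquad
\rho(\lambda) = \lim_{\epsilon \to 0^{+}} \frac{1}{\pi}\,\operatorname{Im}\, G(\lambda + i\epsilon).
```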
This looks like a good theoretical contribution and an interesting direction in the theory of deep learning to me. In this paper, the authors compute the correlation properties (Gram matrix) of the vector that went through some steps of a feedforward network with nonlinearities and random weights. Given the current interest in the theoretical description of neural nets, I think this is a paper that will be interesting for the NIPS audience and will be a welcome change between the hundreds of GAN posters. Some findings are particularly interesting. They applied their result to a memorization task and obtained an explicit characterization of the training error. Also, the fact that for some types of activation the eigenvalues of the data covariance matrix are constant suggests interesting directions for the future.

A few comments and suggestions follow:

- In section 1.3: I think that the interest in random networks is slightly older than the authors (maybe) know. In particular, these were standard settings in the 80s (more than 20 years before any work presented by the authors) and have generated a lot of attention, especially in the statistical physics community, where the generalization properties were extensively studied. See, just to name a few:
  - https://doi.org/10.1209/0295-5075/27/2/002
  - https://doi.org/10.1088/0305-4470/21/1/030
  - https://doi.org/10.1103/PhysRevLett.82.2975
- In connection with the random kitchen sinks: isn't the theory presented here a theory of what is (approximately) going on in the kernel space? Given the connection between random projections and kernel methods, there seems to be something interesting in this direction. Section 4.2, "Asymptotic performance of random feature methods", should probably discuss this point. Perhaps it is linked to the work of El Karoui, et al.?
- Can the authors provide an example of an activation that leads to the value 0 for the parameter zeta? This is not clear from the definition (10) (or maybe I missed something obvious?). Is it just ANY even function?

I did not check all the computations in the supplementary materials (especially the scary expansion on page 11!) but I am a bit confused about the statement "It is easy to check many other examples which all support that eqn. (S19) is true, but a proof is still lacking." Why is it called a lemma then? Am I missing something?
nips_2017_1220
Consistent Multitask Learning with Nonlinear Output Relations

Key to multitask learning is exploiting the relationships between different tasks in order to improve prediction performance. Most previous methods have focused on the case where task relations can be modeled as linear operators and regularization approaches can be used successfully. However, in practice assuming the tasks to be linearly related is often restrictive, and allowing for nonlinear structures is a challenge. In this paper, we tackle this issue by casting the problem within the framework of structured prediction. Our main contribution is a novel algorithm for learning multiple tasks which are related by a system of nonlinear equations that their joint outputs need to satisfy. We show that our algorithm can be efficiently implemented and study its generalization properties, proving universal consistency and learning rates. Our theoretical analysis highlights the benefits of non-linear multitask learning over learning the tasks independently. Encouraging experimental results show the benefits of the proposed method in practice.
The paper tackles multi-task learning problems where there are non-linear relationships between tasks. The relationships between tasks are encoded as a set of non-linear constraints that the outputs of each task must satisfy (e.g., y_1^2 + y_2^2 = 1). In a nutshell, the proposed technique can be summarized as: use kernel regression to make predictions for each task independently, then project the prediction vector onto the constraint set.

Overall, I like the idea of being able to take advantage of non-linear relationships between tasks. However, I am not sure how practical it is to specify the non-linear constraints between tasks in practice. The weakest point of the paper is the empirical evaluation. It would be good if the authors could discuss real applications where one can specify a reasonable set of non-linear relationships between tasks. The examples provided in the paper are not very satisfactory. One imposes deterministic relationships between tasks (i.e., if the output for task 1 is known then the output for task 2 can be inferred exactly), and the other approximates the relationship through a point cloud (which raises the question of why the data in the point cloud was not used as training data; also, to obtain a point cloud one needs to be in a vector-valued regression setting). The paper could be significantly improved by providing a more thorough empirical evaluation section that analyses the performance of the proposed method on multiple real problems. One of the main contributions of the paper, as stated by the authors, is the extension of the SELF approach beyond vector-valued regression. But evaluation is done only on vector-valued regression problems.

The technique depends on using a kernel K for the kernel regression. Is there any guidance on how to select such a kernel? What kernel was used in the experimental section? It would also be good if the authors analyzed the effect of various design and parameter choices. How sensitive are the results to the kernel K, to the regularization parameter \lambda, and to the parameters \delta and \mu? How were all the above parameters set in the experimental section?

Looking at figure 1, I am very surprised that the performance of STL seems to converge to a point significantly worse than that of NL-MTL. Given a large amount of data I would expect STL to perform just as well as or better than any MTL technique (information transfer between tasks is no longer relevant if there is enough training data). This does not seem to be the case in figure 1. Why is that?
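To make the two-step summary above concrete, here is a toy sketch (our construction, using the review's unit-circle example; the data, kernel, and regularization choices are hypothetical, not the paper's): fit an independent kernel ridge regressor per task, then project the joint prediction onto the constraint set.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 3))                  # hypothetical inputs
t = np.pi * X[:, 0]                                    # latent angle driving both tasks
Y = np.column_stack([np.cos(t), np.sin(t)])            # outputs satisfy y1^2 + y2^2 = 1

# Step 1: independent kernel regression per task (RBF kernel chosen arbitrarily).
models = [KernelRidge(kernel="rbf", alpha=0.1).fit(X, Y[:, j]) for j in range(2)]
X_test = rng.uniform(-1, 1, size=(5, 3))
Y_hat = np.column_stack([m.predict(X_test) for m in models])

# Step 2: project the joint prediction onto the constraint set {y : ||y|| = 1},
# which for the unit circle is just renormalization.
Y_proj = Y_hat / np.linalg.norm(Y_hat, axis=1, keepdims=True)
print(Y_proj)
```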
nips_2017_2520
Minimax Estimation of Bandable Precision Matrices

The inverse covariance matrix provides considerable insight for understanding statistical models in the multivariate setting. In particular, when the distribution over variables is assumed to be multivariate normal, the sparsity pattern in the inverse covariance matrix, commonly referred to as the precision matrix, corresponds to the adjacency matrix representation of the Gauss-Markov graph, which encodes conditional independence statements between variables. Minimax results under the spectral norm have previously been established for covariance matrices, both sparse and banded, and for sparse precision matrices. We establish minimax estimation bounds for estimating banded precision matrices under the spectral norm. Our results greatly improve upon the existing bounds; in particular, we find that the minimax rate for estimating banded precision matrices matches that of estimating banded covariance matrices. The key insight in our analysis is that we are able to obtain barely-noisy estimates of k×k subblocks of the precision matrix by inverting slightly wider blocks of the empirical covariance matrix along the diagonal. Our theoretical results are complemented by experiments demonstrating the sharpness of our bounds.
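A rough sketch of the block-inversion idea described above (our reading; the paper's exact block sizes and stitching scheme may differ, and the `pad` parameter is our hypothetical choice):

```python
import numpy as np

def banded_precision_estimate(S, k, pad):
    """Estimate a bandwidth-k precision matrix from empirical covariance S by
    inverting (2*pad+1)-wide blocks of S along the diagonal and keeping the
    central row of each inverse. Illustrative only; assumes pad > k and that
    the local blocks of S are invertible."""
    p = S.shape[0]
    Omega = np.zeros_like(S)
    for i in range(p):
        lo, hi = max(0, i - pad), min(p, i + pad + 1)
        block_inv = np.linalg.inv(S[lo:hi, lo:hi])     # invert a slightly wider block
        row = block_inv[i - lo]                        # central row approximates row i
        for j in range(max(0, i - k), min(p, i + k + 1)):
            Omega[i, j] = row[j - lo]                  # keep only the band
    return (Omega + Omega.T) / 2                       # symmetrize
```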
+++++++ Summary +++++++

The paper establishes the first theoretical guarantees for statistical estimation of bandable precision matrices. The authors propose an estimator for the precision matrix given a set of independent observations, and show that this estimator is minimax optimal. Interestingly, the minimax rate for estimating banded precision matrices is shown to be equal to the corresponding rate for estimating banded covariance matrices. The upper bound follows by studying the behavior of inverses of small blocks along the diagonal of the covariance matrix together with classical random matrix theory results. The lower bound follows by constructing two specially defined subparameter spaces and applying testing arguments. The authors support their theoretical development via representative numerical results.

+++++++ Strengths +++++++

Clarity: The paper is very nicely written and the authors appropriately convey the ideas and relevance of their work.

Novelty: While I am not too familiar with the surrounding literature, the ideas in this paper appear novel. In particular, the estimator is both intuitive and (relatively) easy to compute via the tricks that the authors discuss in Section 2.2. The techniques used to prove both upper and lower bounds are interesting and sound.

Significance: The authors establish matching upper and lower bounds, thereby effectively resolving the question of bandable precision matrix estimation (at least in the context of Gaussian random variables).

Typo: Should it be F_alpha in (3), not P_alpha?

Update after authors' response: Good paper; no further comments to add.
nips_2017_1489
EX2: Exploration with Exemplar Models for Deep Reinforcement Learning

Deep reinforcement learning algorithms have been shown to learn complex tasks using highly general policy classes. However, sparse reward problems remain a significant challenge. Exploration methods based on novelty detection have been particularly successful in such settings but typically require generative or predictive models of the observations, which can be difficult to train when the observations are very high-dimensional and complex, as in the case of raw images. We propose a novelty detection algorithm for exploration that is based entirely on discriminatively trained exemplar models, where classifiers are trained to discriminate each visited state against all others. Intuitively, novel states are easier to distinguish against other states seen during training. We show that this kind of discriminative modeling corresponds to implicit density estimation, and that it can be combined with count-based exploration to produce competitive results on a range of popular benchmark tasks, including state-of-the-art results on challenging egocentric observations in the vizDoom benchmark.
Review of submission 1489: EX2: Exploration with Exemplar Models for Deep Reinforcement Learning

Summary: A discriminative novelty detection algorithm is proposed to improve exploration for policy gradient based reinforcement learning algorithms. The density of a state implicitly estimated by the discriminative novelty detector is then used to produce a reward bonus, added to the original reward for downstream policy optimization algorithms (TRPO). Two techniques are discussed to improve the computational efficiency.

Comments:
- One motivation of the paper is to utilize implicit density estimation to approximate classic count based exploration. The discriminative novelty detection only maintains a density estimate over the states, but not state-action pairs. To the best of my knowledge, most theoretical results on exploration are built on the state-action visit count, not the state visit count. Generally, exploration strategies for RL algorithms investigate reducing the RL agent’s uncertainty over the MDP’s state transition and reward functions. The agent’s uncertainty (in terms of confidence intervals or posterior distributions over environment parameters) decreases as the inverse square root of the state-action visit count.
- It is not clear from the text how the added reward bonus is sufficient to guarantee that novel state-action pairs will be visited by policy gradient based RL methods. Similar reward bonuses have been shown to improve exploration for value-based reinforcement learning methods, such as MBIE-EB (Strehl & Littman, 2009) and BEB (Kolter & Ng, 2009), but not policy gradient based RL methods. Value-based and policy gradient based RL methods differ a lot in their technical details.
- Even if we assume the added reward bonuses could result in novel state-action pairs being visited as desired, it is not clear why policy gradient methods would benefit from such an exploration strategy. The paper State-Dependent Exploration for Policy Gradient Methods (Thomas Rückstieß, Martin Felder, and Jürgen Schmidhuber, ECML 2008) showed that limiting the exploration within an episode (i.e., returning the same action for any given state within an episode) could be helpful for policy gradient RL methods by reducing the variance in the gradients and improving the credit assignment. The soundness of the proposed techniques would be improved if necessary explanations were provided.
- There is one closely related recent paper on improving exploration for policy gradient methods: Improving Policy Gradient by Exploring Under-Appreciated Rewards (Ofir Nachum et al., ICLR 2017). It would be good for the paper to discuss the relationship and even compare against it empirically.
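For reference, the count-based bonus shape the review has in mind (as in MBIE-EB; our notation):

```latex
% MBIE-EB-style exploration bonus added to the reward; N(s,a) is the
% state-action visit count and \beta > 0 an exploration coefficient.
\tilde{r}(s,a) \;=\; r(s,a) \;+\; \frac{\beta}{\sqrt{N(s,a)}}
```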
nips_2017_2923
Gradients of Generative Models for Improved Discriminative Analysis of Tandem Mass Spectra

Tandem mass spectrometry (MS/MS) is a high-throughput technology used to identify the proteins in a complex biological sample, such as a drop of blood. A collection of spectra is generated at the output of the process, each spectrum of which is representative of a peptide (protein subsequence) present in the original complex sample. In this work, we leverage the log-likelihood gradients of generative models to improve the identification of such spectra. In particular, we show that the gradient of a recently proposed dynamic Bayesian network (DBN) [7] may be naturally employed by a kernel-based discriminative classifier. The resulting Fisher kernel substantially improves upon recent attempts to combine generative and discriminative models for post-processing analysis, outperforming all other methods on the evaluated datasets. We extend the improved accuracy offered by the Fisher kernel framework to other search algorithms by introducing Theseus, a DBN representing a large number of widely used MS/MS scoring functions. Furthermore, with gradient ascent and max-product inference at hand, we use Theseus to learn model parameters without any supervision.
This paper introduces Theseus, an algorithm for matching MS/MS spectra to peptides in a database (DB). This is a challenging and important task. It is important because MS/MS is currently practically the only common high-throughput method to identify which proteins are present in a sample. It is challenging because the data is analog (intensity vs. m/z graphs) and extremely noisy. Specifically, insertions, deletions, and variations due to natural losses (e.g., carbon monoxide) can occur compared to a theoretical peptide spectrum.

This work builds upon an impressive body of work that has been dedicated to this problem. Specifically, the recently developed DRIP (Halloran et al. 2014, 2016) is a DBN, i.e., a generative model that tries to match the observed spectrum to a given one in the DB, modeling insertions/deletions etc., and computes for it a likelihood and the most likely assignment using a Viterbi algorithm. A big issue in the field is calibration of scores (in this case, likelihoods for possible peptide matches from the DB to an observed spectrum), as these vary greatly between spectra (e.g., different numbers of peaks to match), so estimating which score is “good enough” (e.g., in terms of overall false positive rate) is challenging. Another previous algorithm, Percolator, was developed to address this by training an SVM to discriminate between “good” matching peptide scores and “negative” scores from decoy peptides. A recent work (Halloran et al. 2016) then tried to map the DRIP Viterbi path to a fixed-size vector so Percolator could be applied on top of DRIP.

Here, the authors propose an alternative to the above heuristic mapping. Instead, they employ Fisher kernels, where two samples (in our case, the Viterbi best paths assigned to two candidate matches) are compared by their derivative in the feature space of the model. This elegant solution for applying discriminative models (here Percolator) to generative models (here DRIP) was suggested for other problems in the comp bio domain by Jaakkola and Haussler 1998. They derive the likelihood and its derivatives and the needed update algorithms for this, and show that employing this on several datasets improves upon state-of-the-art results.

Pros: This work tackles an important problem. It results in what seems to be a significant improvement upon the state of the art. It offers a new model (Theseus) and an elegant application of Jaakkola and Haussler’s Fisher kernel idea to the problem at hand. Overall, this appears to be high-quality computational work advancing an important topic.

Cons/major points of criticism:
1. Writing: While the paper starts well (intro), it quickly becomes impenetrable. It is very hard to follow the terms, the variables, and how they all come together. For example, Fig 1 is referred to in line 66 and includes many terms not explained at that point. Sec. 2.1 refers to c_b which is not defined (is it like c(x)?). How are the natural losses modeled in this framework? Lines 137-145 are cryptic, and the evaluation procedure is too condensed (lines 221-227). In general, the authors spend too much space deriving all the equations in the main text instead of actually explaining the model and what is new compared to previous work. This makes evaluating both the experiments and the novelty hard. To make things worse, the caption of the main figure (Fig 3) refers to labels/algorithms (“Percolator”, “DRIP Percolator Heuristic”) which do not appear in the figure.

Given the complexity of the domain and the short format of NIPS papers, we recommend the authors revise the text to make the overall model (Theseus) and novel elements more clear, and defer some of the technical details to a longer, more detailed supplementary (which in this case seems mandatory).
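For reference, the Fisher kernel construction invoked above (Jaakkola and Haussler, 1998) compares two samples through the gradient of the generative model's log-likelihood:

```latex
% Fisher score and Fisher kernel for a generative model p(x \mid \theta);
% I is the Fisher information matrix (often approximated or dropped in practice).
U_x = \nabla_{\theta} \log p(x \mid \theta),
\qquad
K(x, x') = U_x^{\top} I^{-1} U_{x'}
```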
nips_2017_1329
Few-Shot Learning Through an Information Retrieval Lens

Few-shot learning refers to understanding new concepts from only a few examples. We propose an information retrieval-inspired approach for this problem that is motivated by the increased importance of maximally leveraging all the available information in this low-data regime. We define a training objective that aims to extract as much information as possible from each training batch by effectively optimizing over all relative orderings of the batch points simultaneously. In particular, we view each batch point as a 'query' that ranks the remaining ones based on its predicted relevance to them, and we define a model within the framework of structured prediction to optimize mean Average Precision over these rankings. Our method achieves impressive results on the standard few-shot classification benchmarks while also being capable of few-shot retrieval.
This work is a great extension of few-shot learning paradigms in deep metric learning. Rather than considering pairs of samples for metric learning (siamese networks), or 3 examples (anchor, positive, and negative -- triplet learning), the authors propose to use the full minibatch's relationships or ranking with respect to the anchor sample. This makes sense from both a structured prediction and an efficient learning perspective. Incorporating mean average precision into the loss directly allows for optimization of the actual task at hand.

Some comments:
1. It would be great to see more experimental results on other datasets (face recognition).
2. Figure 1 is not very powerful; it would be better to show how the distances change as training evolves.
3. It is not clear how to set \lambda, as in a number of cases a wrong value for \lambda leads to weak results.
4. The authors must provide code for this approach to really have an impact in deep metric learning, as this work requires a bit of implementation.
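For concreteness, the quantity being optimized, average precision of a ranking, can be computed as follows (a plain reference implementation, not the authors' structured surrogate):

```python
import numpy as np

def average_precision(relevance):
    """AP of a ranked list: mean of precision@k over the positions k
    holding relevant items. `relevance` is a 0/1 array in ranked order."""
    relevance = np.asarray(relevance)
    hits = np.cumsum(relevance)
    ranks = np.arange(1, len(relevance) + 1)
    precision_at_hits = (hits / ranks)[relevance == 1]
    return precision_at_hits.mean() if relevance.sum() else 0.0

print(average_precision([1, 0, 1, 1, 0]))  # precisions 1/1, 2/3, 3/4 -> ~0.806
```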
nips_2017_3049
Action Centered Contextual Bandits

Contextual bandits have become popular as they offer a middle ground between very simple approaches based on multi-armed bandits and very complex approaches using the full power of reinforcement learning. They have demonstrated success in web applications and have a rich body of associated theoretical guarantees. Linear models are well understood theoretically and preferred by practitioners because they are not only easily interpretable but also simple to implement and debug. Furthermore, if the linear model is true, we get very strong performance guarantees. Unfortunately, in emerging applications in mobile health, the time-invariant linear model assumption is untenable. We provide an extension of the linear model for contextual bandits that has two parts: baseline reward and treatment effect. We allow the former to be complex but keep the latter simple. We argue that this model is plausible for mobile health applications. At the same time, it leads to algorithms with strong performance guarantees as in the linear model setting, while still allowing for complex nonlinear baseline modeling. Our theory is supported by experiments on data gathered in a recently concluded mobile health study.
In the context of multi-armed bandit problems for mobile health, modelling correctly the linearity or non-linearity of the rewards is crucial. The paper tries to take advantage of the fact that a general "do nothing" action is generally available, which corresponds to the baseline. The idea is that, while the reward for each action may not be easily captured by a linear model, it is reasonable to assume that the relative error between playing an action and playing the "do nothing" action follows a linear model. The article revisits bandit strategies in this context.

Comments:
1) Line 29: See also the algorithm OFUL, and more generally Kernel-UCB and GP-UCB, to cite a few other algorithms with provable regret minimization guarantees.
2) Lines 35, 51, and so on: I humbly suggest you replace "mhealth" with something else, e.g., "M-health", since "mhealth" looks like a typo.
3) Question about the model: is \bar{f}_t assumed to be known? If not, and if it is completely arbitrary, it seems counter-intuitive that one can prove any learning guarantee, given that a bandit setting is considered (only one action per step) and \bar{f}_t may change at any time to create maximal confusion.
4) Line 132: "might different for different"
5) Line 162: What is \bar{a}_t? I can only see the definition for a_t and \bar{a}_t^\star. Likewise, what is \bar{a} on line 165? It seems these are defined only later in the paper, which is confusing. Please fix this point.
6) Question: What is the intuition for choosing \pi_t to be (up to projection) P(s_{t,\bar a}^\top \bar \theta \geq 0)? Especially, please comment on the value 0 here.
7) Lemma 5: Should 0 be replaced with a in the probability bound?
8) Section 9: I understand that you heavily used the proof of Agrawal and Goyal, but I discourage the use of sentences such as "For any filtration such that ... is true". I suggest you rather control the probability of events.
9) Proof, line 62: there is a missing reference.

In the end, the setting of the paper and definition of regret look a bit odd, but the authors provide an interesting idea to overcome the possibly nasty non-stationary term, thanks to randomization. Also, deriving the weighted least-squares estimate and TS analysis in this context is interesting, even though it relies heavily on previously existing analyses for TS. Finally, the idea of considering that only the relative errors follow a linear model might be generalized to other problems. Weak accept.
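One way to write the model the review describes (our notation; a = 0 is the "do nothing" baseline, and the paper's exact parametrization may differ):

```latex
% Action-centered reward model: the baseline f_t may be arbitrary and
% time-varying, while the treatment effect of action a in context s_{t,a},
% relative to "do nothing" (a = 0), is linear in \theta.
\mathbb{E}\left[ r_t \mid a_t = a \right]
 \;=\; f_t(s_t) \;+\; \mathbb{1}\{a \neq 0\}\, s_{t,a}^{\top} \theta
```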
nips_2017_1910
Clustering Billions of Reads for DNA Data Storage

Storing data in synthetic DNA offers the possibility of improving information density and durability by several orders of magnitude compared to current storage technologies. However, DNA data storage requires a computationally intensive process to retrieve the data. In particular, a crucial step in the data retrieval pipeline involves clustering billions of strings with respect to edit distance. Datasets in this domain have many notable properties, such as containing a very large number of small clusters that are well-separated in the edit distance metric space. In this regime, existing algorithms are unsuitable because of either their long running time or low accuracy. To address this issue, we present a novel distributed algorithm for approximately computing the underlying clusters. Our algorithm converges efficiently on any dataset that satisfies certain separability properties, such as those coming from DNA data storage systems. We also prove that, under these assumptions, our algorithm is robust to outliers and high levels of noise. We provide empirical justification of the accuracy, scalability, and convergence of our algorithm on real and synthetic data. Compared to the state-of-the-art algorithm for clustering DNA sequences, our algorithm simultaneously achieves higher accuracy and a 1000x speedup on three real datasets.
The paper presents a solution to a new type of clustering problem that has emerged from studies of DNA-based storage. Information is encoded within DNA sequences and retrieved using short-read sequencing technology. The short-read sequencer will create multiple short overlapping sequence reads, and these have to be clustered to establish whether they are from the same place in the original sequence. The characteristics of the clustering problem are that the clusters are pretty tight in terms of edit distance (25 max diameter here, which seems quite broad given current sequencing error rates) but well separated from each other (much larger distance between them than diameter).

I thought this was an interesting and timely application. DNA storage is a hot topic and this kind of clustering is one of the computational bottlenecks to applying the method. I guess the methods presented will also be of general interest to the NIPS audience as an example of a massive data inference problem. The methods look appropriate for the task and the clustering speed achieved looks very impressive. The paper is well written and the problem, methods and results were easy to follow.

My main concern with the work centres around how realistic the generative model of the data is. The substitution error rate and the insertion/deletion error rates are set to be the same parameter p/3. I think that current Illumina high-throughput sequencing is associated with much higher substitution error rates than insertion/deletion rates; perhaps 100-fold less insertion/deletion than substitution for some Illumina technologies. If insertion/deletion rates are much lower than substitution errors, then perhaps the problem is less challenging than stated here, since there would be less need to deal with edit distances in the case where insertions/deletions are rare. Also, for most technologies the error rates from sequencing can be well established, and therefore methods can be fine-tuned given knowledge of these parameters.
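The generative model being questioned can be sketched as follows (our reading of the stated p/3 parameterization; per-position substitution, insertion, and deletion each occur with probability p/3):

```python
import random

def noisy_read(seq, p, alphabet="ACGT", seed=None):
    """Simulate the error channel as the review describes it: at each
    position, substitute, insert, or delete with probability p/3 each."""
    rng = random.Random(seed)
    out = []
    for base in seq:
        u = rng.random()
        if u < p / 3:                                  # substitution
            out.append(rng.choice([b for b in alphabet if b != base]))
        elif u < 2 * p / 3:                            # insertion before base
            out.append(rng.choice(alphabet))
            out.append(base)
        elif u < p:                                    # deletion
            continue
        else:
            out.append(base)
    return "".join(out)

print(noisy_read("ACGTACGTACGT", p=0.1, seed=42))
```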
nips_2017_2586
Position-based Multiple-play Bandit Problem with Unknown Position Bias

Motivated by online advertising, we study a multiple-play multi-armed bandit problem with position bias that involves several slots, where the latter slots yield fewer rewards. We characterize the hardness of the problem by deriving an asymptotic regret bound. We propose the Permutation Minimum Empirical Divergence (PMED) algorithm and derive its asymptotically optimal regret bound. Because of the uncertainty of the position bias, the optimal algorithm for such a problem requires non-convex optimizations that are different from usual partial monitoring and semi-bandit problems. We propose a cutting-plane method and a related bi-convex relaxation for these optimizations by using auxiliary variables.
#Summary

This submission studies a multi-armed bandit problem, PBMU. In this problem, there are L slots (to place ads) and K possible ads. At each round, the player needs to choose L of the K possible ads and place them into the L slots. The reward of placing the i-th ad into the l-th slot is i.i.d. drawn from a Bernoulli distribution Ber(\theta_i \kappa_l), where \theta_i is the (unknown) quality of the i-th ad and \kappa_l is the (also unknown) position bias of the l-th slot. The authors derived a regret lower bound and proposed two algorithms for PBMU. The second algorithm has a matching regret bound. The authors also discussed how to solve the related optimization problems that appear in the algorithms.

The authors give a complete characterization of the hardness of PBMU in this paper. Their main contributions are:
1. A regret lower bound, which states that the expected regret of a strongly consistent algorithm for PBMU is lower bounded by C log T - o(log T), where C is a parameter determined by every arm's mean profile and slot's position bias.
2. An algorithm for PBMU, which the authors call Permutation Minimum Empirical Divergence (PMED) and which can be compared with the Deterministic Minimum Empirical Divergence (DMED) algorithm for the standard MAB (see Honda and Takemura, COLT 2010). They give an asymptotic upper bound for a modification of PMED of C log T + o(log T).

From the technical point of view, I think the regret lower bound in this paper is standard, following some techniques from Honda and Takemura, COLT 2010. In their upper bound analysis, the authors overcome some new difficulties. In the PMED algorithm, they compute a maximum likelihood estimator and use it to compute a K-by-K matrix which optimizes a corresponding optimization problem from the lower bound analysis. The authors then convert the matrix into a convex combination of permutation matrices (due to the Birkhoff–von Neumann theorem). Though this idea is natural, the analysis is not straightforward. The authors analyze a modification of PMED in which they add additional terms to modify the optimization problems. They obtain a corresponding upper bound for this modified algorithm. Experimental results are attached.

#Treatment of related work

In the introduction, the authors introduced some other related models and compared them with their new model. They also gave convincing reasons why this new model deserves study. To the best of my knowledge, their model is novel. However, the authors did not make much effort to compare their techniques with previous (closely related) results. This makes it difficult to evaluate the technical strength of their results.

#Presentation

The presentation of the paper still needs some work. In Section 3, the authors defined a set \mathcal{Q}. However, it is dubious why they need to define this quantity when deriving the regret lower bound (the definition of this quantity is not further used in Section 3 and related parts in the appendix). On the other hand, the authors did use several properties of this quantity in Section 4 (to run Algorithm 2). Thus, I advise the authors to move the definition of \mathcal{Q} into Section 4 in order to eliminate unnecessary confusion, and to change the claim of the lower bound correspondingly.

Lines 16-20 of Algorithm 1 (PMED and PMED-Hinge) are difficult to follow. To my understanding, \hat{N}_{i, l} is the number of necessary drawings for the pair (i, l). The algorithm then decomposes the matrix \hat{N} into a linear combination of permutation matrices and pulls the arms according to the permutation matrices and the coefficients. When will c_v^{aff} be smaller than c_v^{req}? Shouldn't we pull (v_1, v_2, \ldots, v_L) c_v^{req} times (instead of once)? Should c_v^{req} be rounded up to an integer?

There are some other typos/minor issues. See the list below.
1. Lines 57-58: Non-convex optimization appearing in your algorithm does not necessarily mean that PBMU intrinsically requires non-convex optimization.
2. Lines 63-65: Define K, or remove some details.
3. Lines 86-87: at round t -> of the first t rounds.
4. Lines 102-105: According to the definition, (1), (2), \ldots, (L) is a permutation of {1, 2, 3, \ldots, L}, which is not what you want.
5. Lines 115-117: The fact that d_{KL} is non-convex does not necessarily make PBMU intrinsically difficult.
6. Line 118, the definition of \mathcal{Q}: What do you mean by \sum (q_{i, l} = q_{i + 1, l})?
7. Line 122: (1) is not an inequality. Do you want to refer to the inequality in Lemma 1? Also, "the drawing each pair (i, l)": remove "the".
8. Lines 124-127: Please elaborate on the reason why the constraint \theta_i' \kappa_i' = \theta_i \kappa_i is necessary. The argument given here is not easy to follow.
9. Line 138: is based on -> is closely related to
10. Line 141: What do you mean by "clarify"?
11. Algorithm 1, Line 12: Explicitly define \hat{1}(t), \ldots, \hat{L}(t). Currently it is only implicitly defined in Line 160.
12. Algorithm 1, Line 14: "where e_v for each v is a permutation matrix" -> "where for each v, e_v is a permutation matrix".
13. Lines 172-174: Explicitly define the naive greedy algorithm.
14. Line 343: native -> naive.
15. Algorithm 1, Line 17: \max_{c \ge 0} ... \ge 0 -> \max_{c \ge 0} s.t. ... \ge 0.

-------------------

The rebuttal addressed most of my major questions. Score changed to 7.
nips_2017_758
Integration Methods and Optimization Algorithms

We show that accelerated optimization methods can be seen as particular instances of multi-step integration schemes from numerical analysis, applied to the gradient flow equation. Compared with recent advances in this vein, the differential equation considered here is the basic gradient flow, and we derive a class of multi-step schemes which includes accelerated algorithms, using classical conditions from numerical analysis. Multi-step schemes integrate the differential equation using larger step sizes, which intuitively explains the acceleration phenomenon.
The paper provides an interpretation of a number of accelerated gradient algorithms (and other convex optimization algorithms) based on the theory of numerical integration and numerical solution of ODEs. In particular, the paper focuses on the well-studied class of multi-step methods to interpret Nesterov's method and Polyak's heavy ball method as discretizations of the natural gradient flow dynamics. The authors argue that this interpretation is more beneficial than existing interpretations based on different dynamics (e.g., Krichene et al.) because of the simplicity of the underlying ODE. Notice that the novelty here lies in the focus on multistep methods, as it was already well known that accelerated algorithms can be obtained by appropriately applying Euler's discretization to certain dynamics (again, see Krichene et al.).

A large part of the paper is devoted to introducing important properties of numerical discretization schemes for ODEs, including consistency and zero-stability, and their instantiation in the case of multistep methods. Given this background, the authors proceed to show that for LINEAR ODEs, i.e., for gradient flows deriving from the optimization of quadratic functions, natural choices of parameters for two-step methods yield Nesterov's and Polyak's methods. The interpretation of Nesterov's method is carried out both in the smooth and strongly convex case and in the smooth-only case.

I found the paper to be an interesting read, as I already believed that the study of different numerical discretization methods is an important research direction to pursue in the design of more efficient optimization algorithms for specific problems. However, I need to make three critical points, two related to the paper in question and one more generally related to the idea of using multistep methods to define faster algorithms:

1) The interpretation proposed by the authors only works in explaining Nesterov's method for functions that are smooth in the l_2-norm and is not directly applicable for different norms. Indeed, it seems to me that different dynamics besides the gradient flow must be required in this case, as the method depends on a suitable choice of prox operator. The authors do not acknowledge this important limitation and make sweeping generalizations that need to be qualified, such as claiming that convergence rates of optimization algorithms are controlled by our ability to discretize the gradient flow equation. I believe that this is incorrect when working with norms different from l_2.

2) The interpretation only applies to quadratic functions, which makes it unclear to me whether these techniques can be effectively deployed to produce new algorithms. For comparison, other interpretations of accelerated methods, such as Allen-Zhu and Orecchia's work, led to improved approximation algorithms for certain classes of linear programs.

3) Regarding the general direction of research, an important caveat in considering multistep methods is that it is not completely clear how such techniques would help, even for restricted classes of problems, for orders larger than 2 (i.e., larger than those that correspond to Nesterov's AGD). Indeed, it is not hard to argue that both Nesterov's AGD (and also Nemirovski's Mirror Prox) can leverage smoothness to fully cancel the discretization error accrued in converting the continuous dynamics to a discrete-time algorithm. It is then not clear how a further reduction in the discretization error, if not properly related to problem parameters, could yield faster convergence.

In conclusion, I liked the interpretation and I think it is worth disseminating. However, I am not sure that it is significant enough in its explanatory power, novel enough in the techniques introduced (multistep methods are a classical tool in numerical analysis), or promising enough to warrant acceptance at NIPS.
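For reference, the objects under discussion are the gradient flow and a generic explicit linear two-step discretization of it (standard forms, our notation):

```latex
% Gradient flow and a generic linear two-step scheme with step size h;
% consistency and zero-stability constrain the coefficients a_i, b_i.
\dot{x}(t) = -\nabla f\big(x(t)\big),
\qquad
x_{k+2} + a_1 x_{k+1} + a_0 x_k
  = h\,\big( b_1\, g_{k+1} + b_0\, g_k \big),
\quad g_i = -\nabla f(x_i)
```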
nips_2017_3420
Acceleration and Averaging in Stochastic Descent Dynamics

We formulate and study a general family of (continuous-time) stochastic dynamics for accelerated first-order minimization of smooth convex functions. Building on an averaging formulation of accelerated mirror descent, we propose a stochastic variant in which the gradient is contaminated by noise, and study the resulting stochastic differential equation. We prove a bound on the rate of change of an energy function associated with the problem, then use it to derive estimates of convergence rates of the function values (almost surely and in expectation), both for persistent and asymptotically vanishing noise. We discuss the interaction between the parameters of the dynamics (learning rate and averaging rates) and the covariation of the noise process. In particular, we show how the asymptotic rate of covariation affects the choice of parameters and, ultimately, the convergence rate.
Summary: The authors consider mirror descent dynamics in continuous time for constrained optimization problems. They work in the context of stochastic differential equations and consider parametrized dynamics which include many of the recently considered parameters for such dynamics. This includes time-dependent learning rate, sensitivity, and averaging parameters, which provide great flexibility in the analysis. The dynamics also feature acceleration terms which were recently proposed in a deterministic setting. In a first step, the authors carry out an asymptotic analysis in the deterministic setting. They then introduce a stochastic variant of the dynamics and describe uniqueness properties of its solutions. Combining properties of the deterministic dynamics and stochastic calculus, the authors propose three results: almost sure convergence of the sample trajectories, and convergence rates, both in expectation and for sample trajectories.

Main comments: First of all, I have a limited knowledge of stochastic differential calculus and do not feel competent to validate all the technical details in the manuscript, although none of them felt particularly awkward. I believe that this work is of very good quality. Technical elements are clearly presented and the authors do not neglect fundamental aspects of the problem such as existence and uniqueness. Furthermore, the overall exposition is clear and of good quality, and the text is well balanced between the main proof arguments and further details given in the appendix. To my understanding, the convergence analysis carried out by the authors significantly extends existing results from the literature. The model is flexible enough to allow many possible choices of the dynamics. For example, the authors are able to show that a unit learning rate may robustly maintain good properties of accelerated dynamics in the deterministic case, provided that suitable averaging is performed. Overall, I think that this constitutes valuable work and should be considered for publication.

Additional comments and questions:
1- The authors motivate the introduction of stochastic dynamics by emphasizing stochastic methods for finite sums. However, the additive noise structure is not particularly well suited to model such cases. Indeed, in these settings the variance of the noise usually features a multiplicative term.
2- All the results are presented in abstract form with the addition of another parameter "r" and assumptions on it. It is not really clear how the assumptions on r restrict the choice of the other parameters. As a result, the convergence rates are not easily read or interpreted. Beyond the proposed corollaries, it is not really clear how the given results provide interesting guarantees in terms of function values. For example, would the work of the authors allow one to get better rates in the small noise regime?
3- Do the authors have any comment on how the analysis, which is carried out in a stochastic, continuous-time setting, could impact the discrete-time setting? Is there a clear path to the conception of efficient algorithms, and in particular is it possible to provide a discrete-time equivalent for the different time-dependent parameters in the ODE?
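Schematically, the dynamics in question have the flavor of a gradient flow with a noise-contaminated gradient (a simplified caricature, not the paper's exact parametrized system, which adds mirror maps, learning rates, averaging, and acceleration on top):

```latex
% Gradient flow with noisy gradient: B_t is Brownian motion and
% \sigma(t) the noise sensitivity.
dX_t = -\nabla f(X_t)\, dt + \sigma(t)\, dB_t
```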
nips_2017_1741
Tomography of the London Underground: a Scalable Model for Origin-Destination Data

The paper addresses the classical network tomography problem of inferring local traffic given origin-destination observations. Focusing on large complex public transportation systems, we build a scalable model that exploits input-output information to estimate the unobserved link/station loads and the users' path preferences. Based on the reconstruction of the users' travel time distribution, the model is flexible enough to capture possible different path-choice strategies and correlations between users travelling on similar paths at similar times. The corresponding likelihood function is intractable for medium or large-scale networks and we propose two distinct strategies, namely the exact maximum-likelihood inference of an approximate but tractable model and the variational inference of the original intractable model. As an application of our approach, we consider the emblematic case of the London underground network, where a tap-in/tap-out system tracks the starting/exit time and location of all journeys in a day. A set of synthetic simulations and real data provided by Transport For London are used to validate and test the model on the predictions of observable and unobservable quantities.
I thank the authors for the clarification in their rebuttal. It is even more clear that the authors should better contrast their work with aggregate approaches such as Dan Sheldon's collective graphical models (e.g., Sheldon and Dietterich (2011), Kumar et al. (2013), Bernstein and Sheldon (2016)).

Part of the confusion came from some of the modeling choices: in equation (1), the travel time added by one station is Poisson distributed?! Poisson is often used for link loads (how many people there are at a given station), not to model time. Is the quantization of time too coarse for a continuous-time model? Treating time as a Poisson r.v. requires some justification. Wouldn't a phase-type distribution (e.g., Erlang) be a better choice for time? Such modeling choices must be explained.

The main problem, I fear, is not in the technical details. The paper is missing a bigger insight into inferring traffic loads from OD pairs, something beyond solving this particular problem as a mixture of Poissons with path constraints.

It is important to emphasize that the evaluation is not predictive (of the future) but rather gives estimates of hidden variables. It is a transductive method, as it cannot predict anything beyond the time horizon given in the training data. I am OK with that. Figure 4 is missing a simple baseline using the shortest paths. The results of the model in Figure 5 don't seem to match the data in the smaller stations (e.g., Finchley Road).

Minor comments: What is the difference between the distributions Poisson and poisson in eqs (7) and (9)? Figure 3: botton => bottom

References:
- A. Kumar, D. Sheldon, and B. Srivastava. Collective Diffusion over Networks: Models and Inference. In Proceedings of the International Conference on Uncertainty in Artificial Intelligence (UAI), pages 351-360, 2013.
- G. Bernstein and D. Sheldon. Consistently Estimating Markov Chains with Noisy Aggregate Data. AISTATS, 2016.
- J. Du, A. Kumar, and P. Varakantham. On Understanding Diffusion Dynamics of Patrons at a Theme Park. In Proceedings of the 2014 International Conference on Autonomous Agents and Multi-agent Systems (AAMAS), pages 1501-1502, 2014.
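To illustrate the modeling point raised above: a Poisson variable is discrete, an odd choice for a travel-time increment, whereas a phase-type distribution such as the Erlang (a sum of exponentials, i.e., a Gamma with integer shape) is continuous and non-negative. A quick comparison with hypothetical parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
mean_minutes = 3.0                                         # hypothetical per-station delay

poisson_samples = rng.poisson(lam=mean_minutes, size=5)            # integer-valued
erlang_samples = rng.gamma(shape=3, scale=mean_minutes / 3, size=5)  # Erlang(3), same mean

print(poisson_samples)   # discrete counts, e.g. [4 2 3 ...]
print(erlang_samples)    # continuous positive times
```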
nips_2017_1526
Dual Discriminator Generative Adversarial Nets

We propose in this paper a novel approach to tackle the problem of mode collapse encountered in generative adversarial networks (GANs). Our idea is intuitive but proven to be very effective, especially in addressing some key limitations of GANs. In essence, it combines the Kullback-Leibler (KL) and reverse KL divergences into a unified objective function, thus exploiting the complementary statistical properties of these divergences to effectively diversify the estimated density in capturing multiple modes. We term our method dual discriminator generative adversarial nets (D2GAN): unlike GAN, it has two discriminators; together with a generator, it retains the analogy of a minimax game, wherein one discriminator rewards high scores for samples from the data distribution whilst the other discriminator, conversely, favors data from the generator, and the generator produces data to fool both discriminators. We develop theoretical analysis to show that, given the maximal discriminators, optimizing the generator of D2GAN reduces to minimizing both the KL and reverse KL divergences between the data distribution and the distribution induced from the data generated by the generator, hence effectively avoiding the mode collapse problem. We conduct extensive experiments on synthetic and real-world large-scale datasets (MNIST, CIFAR-10, STL-10, ImageNet), where we have made our best effort to compare our D2GAN with the latest state-of-the-art GAN variants in comprehensive qualitative and quantitative evaluations. The experimental results demonstrate the competitive and superior performance of our approach in generating good quality and diverse samples over baselines, and the capability of our method to scale up to the ImageNet database.
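In the idealized nonparametric limit described above, the generator objective reduces (up to constants and the paper's weighting coefficients, which are omitted here) to a symmetric combination of the two KL divergences:

```latex
% P_data is the data distribution, P_G the generator's distribution;
% minimizing the sum penalizes both mode-dropping (forward KL) and
% spurious modes (reverse KL).
\min_{G} \;\; \mathrm{KL}\!\left(P_{\mathrm{data}} \,\|\, P_G\right)
         + \mathrm{KL}\!\left(P_G \,\|\, P_{\mathrm{data}}\right)
```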
This paper presents a variant of generative adversarial networks (GANs) that utilizes two discriminators: one tries to assign high scores to data, and the other tries to assign high scores to the samples, both discriminating data from samples, while the generator tries to fool both discriminators. It has been shown in section 3 that the proposed approach effectively optimizes the sum of the KL and reverse KL divergences between the generator distribution and the data distribution in the idealized non-parametric setup, therefore encouraging more mode coverage than other GAN variants. The paper is quite well written and the formulation and analysis seem sound and straightforward. The proposed approach is evaluated on a toy 2D points dataset as well as the more realistic MNIST, CIFAR-10, STL and ImageNet datasets.

I have one concern about the new formulation: as shown in Proposition 1, the optimal discriminators have the form of density ratios. Would this cause instability issues? The density ratios can vary wildly from 0 to infinity when the modes of the model and the data do not match well. On the other hand, it is very hard for neural nets to learn to predict extremely large values with finite weights. On the toy dataset, however, the proposed D2GAN seems to be more stable than standard GAN and unrolled GAN; is this in general true though?

I appreciate the authors’ effort in doing an extensive hyperparameter search for both the proposed method and the baselines on the MNIST dataset. However, for a better comparison, it feels we should train the same generator network using all the methods compared, and tune the other hyperparameters for that network architecture (or a few networks). Also, the learning rate seems to be the most important hyperparameter, but it is not searched extensively enough.

The paper claims they can scale the proposed method up to ImageNet, but I feel this is a little bit of an over-claim, as they only tried a subset of ImageNet and downsampled all images to 32x32, while there are already existing methods that can generate ImageNet samples at much higher resolution.

Overall the paper seems like a valid addition to the GAN literature, but doesn’t seem to change the state of the art too much compared to other approaches.
nips_2017_2227
Reconstructing perceived faces from brain activations with deep adversarial neural decoding Here, we present a novel approach to solve the problem of reconstructing perceived stimuli from brain responses by combining probabilistic inference with deep learning. Our approach first inverts the linear transformation from latent features to brain responses with maximum a posteriori estimation and then inverts the nonlinear transformation from perceived stimuli to latent features with adversarial training of convolutional neural networks. We test our approach with a functional magnetic resonance imaging experiment and show that it can generate state-of-the-art reconstructions of perceived faces from brain activations.
The authors propose a brain decoding model tailored to face reconstruction from BOLD fMRI measurements of perceived faces. There are some promising aspects to this contribution, but overall, in its current state, there are also a number of concerning issues. Positive points: - A GAN decoder was trained on face embeddings coming from a triplet-loss or identity-predicting face embedding space to output the original images. Modulo my inability to follow the deluge of GAN papers closely, this is a novel contribution in that it is the application of the existing ImageNet-reconstruction GAN to faces. This itself may be on the level of a workshop contribution. - The reconstructed faces often coincide in gender and skin color with the original faces, indicating that the BOLD fMRI data contained this information. Reconstructing from BOLD fMRI is a very hard task and requires good data acquisition. The results indicate that a good job has been done on this front. - The feature reconstruction numbers indicate that using a GAN for face reconstruction is a step forward with respect to other (albeit much older and inherently weaker) methods. Points of concern: No part of the proposed reconstruction method seems novel (as claimed), and the validation is shaky in parts. - The Bayesian framework seems superfluous. The reconstruction is performed using ridge regression with voxel importance weighted by inverse noise level. The subsection deriving the posterior can be cut. [Notational quirk: If I understand correctly, B is not necessarily square, nor necessarily invertible even if it were. If the pseudoinverse is meant, it should be written with a different symbol, e.g. +.] - No reason is given as to why PCA is performed on the face embeddings, nor how the number of components was chosen. (If the full embedding had been used, without PCA, would the reconstructions of embeddings look perfect? Is the application of PCA the "crucial aspect that made this work"? If so, please elaborate with experiments as to why this could be; projecting properly into the first two (semantically meaningful) axes does seem like an important thing to be able to do and could add robustness.) - No information is provided as to which voxels were used for reconstruction. Were they selected according to how well they are predicted? If so, was the selection explicit, or implicit through the inverse-noise-variance weighting? If explicit (e.g. a noise-level cutoff), how many voxels were selected and according to which cutoff criterion? Were these voxels mostly low-level visual or high-level visual? Could a restriction to different brain areas, e.g. known face-selective areas vs. areas of the low-level visual hierarchy, be used and evaluated? This question is not only neuroscientifically relevant; it would also provide insight into how the decoding currently works. - The evaluation of the reconstruction seems to hang in the void a bit, as does its comparison with other (incorrigibly worse) methods. The GAN that was trained outputs faces. It was trained to do this, and it seems to do it well. This means that *any* output of the GAN will have a relatively high baseline correlation with any face due to alignment of features. The permutation test of significance is appropriate, and this outcome should be highlighted more (how about also comparing to reconstructions from random vectors?). A histogram of correlation scores in the supplementary material would also provide insight. The comparison against the other methods seems a bit unfair.
These other methods are (in the modern view) clearly "all-purpose hacks", because there was not much else available to do at the time. Nobody expects eigenfaces to do well in any measure, nor does anybody expect a mean and some restricted form of (diagonal?) covariance to be sufficient to model natural image data. These methods are thus clearly lacking in expressiveness and could never yield as good reconstructions as the neural network model. There are other face representations using active appearance modeling that could have been tried, which perform much better reconstruction. (No fMRI reconstruction has been attempted with these, and thus it could be an interesting extra comparison.) This outcome is good for the vggface/GAN couple in terms of face representation and reconstruction (and as stated, this may be worth writing about separately if it hasn't been done before). It is, however, only the increase in feature decoding accuracy that shows that the representation is also slightly better for BOLD data (as opposed to reconstruction accuracy). However, looking at some of the eigenface/identity reconstructions, they also sometimes get core aspects of the faces right, so they are not doing that badly on certain perceptual axes. Lastly, why is it that the GAN reconstruction gets to have a "best possible reconstruction from this embedding" column, whereas eigenfaces and identity reconstruction do not get this privilege? Why does Table 1 not have the equivalent ratios for the other methods? - The title of this contribution is wildly inappropriate and must be changed to reflect the content of the work. It would be less unacceptable if there had also been a reconstruction from general natural images to parallel the face reconstruction, or if there had been several other restricted data types. In its current state, the title must contain sufficient information to infer that it revolves around face reconstruction and nothing else. - Other possible axes of comparison: What about neural image representations trained on ImageNet, such as AlexNet convolutional layers? How well does a GAN reconstruction from those do on faces? What about trying the face representation and reconstruction of the original DCGAN paper? All in all, the contribution is significant but could be much more so if more experiments had been run.
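To make the decoding step under discussion concrete, here is a small NumPy sketch of ridge regression from voxels to latent features with voxels weighted by their inverse noise variance, as the review describes it. All array shapes, the regularization strength, and the noise estimates are hypothetical placeholders, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_voxels, n_latent = 200, 500, 50       # hypothetical sizes
Z = rng.standard_normal((n_trials, n_latent))     # latent face features (stand-in)
B_true = rng.standard_normal((n_latent, n_voxels))
Y = Z @ B_true + rng.standard_normal((n_trials, n_voxels))  # BOLD responses

# Fit the forward model Y ~ Z @ B with ridge regularization.
lam = 10.0
B = np.linalg.solve(Z.T @ Z + lam * np.eye(n_latent), Z.T @ Y)

# Per-voxel noise variance estimated from forward-model residuals.
noise_var = (Y - Z @ B).var(axis=0) + 1e-8
w = 1.0 / noise_var                               # down-weight noisy voxels

# Decode a new measurement y with weighted (regularized) least squares.
y_new = Y[0]
A = B @ (w[:, None] * B.T) + lam * np.eye(n_latent)
z_hat = np.linalg.solve(A, B @ (w * y_new))
print(z_hat.shape)                                # (n_latent,)
```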
nips_2017_2514
Multi-Task Learning for Contextual Bandits Contextual bandits are a form of multi-armed bandit in which the agent has access to predictive side information (known as the context) for each arm at each time step, and have been used to model personalized news recommendation, ad placement, and other applications. In this work, we propose a multi-task learning framework for contextual bandit problems. Like multi-task learning in the batch setting, the goal is to leverage similarities in contexts for different arms so as to improve the agent's ability to predict rewards from contexts. We propose an upper confidence bound-based multi-task learning algorithm for contextual bandits, establish a corresponding regret bound, and interpret this bound to quantify the advantages of learning in the presence of high task (arm) similarity. We also describe an effective scheme for estimating task similarity from data, and demonstrate our algorithm's performance on several data sets.
Summary. The paper is about contextual bandits with N arms. In each round the learner observes a context x_{ti} for each arm, chooses an arm to pull, and receives reward r_{ti}. The question is what structure to impose on the rewards. The authors note that E[r_{ti}] = < x_{ti}, theta > is a common choice, as is E[r_{ti}] = < x_{ti}, theta_i >. The former allows for faster learning but has less capacity, while the latter has more capacity but slower learning. The natural question addressed in this paper concerns the middle ground, which is simultaneously generalized by kernelization. The main idea is to augment the context space so the learner observes (z_{ti}, x_{ti}), where z_{ti} lies in some other space Z. Then a kernel can be defined on this augmented space that measures similarity between contexts and determines the degree of sharing between the arms. Contribution. The main contribution, as far as I can tell, is the idea to augment the context space in this way. The regret analysis employs the usual techniques. Novelty. There is something new here, but not much. Unless I am somehow mistaken, the analysis of Valko et al. should apply directly to the augmented contexts with more or less the same guarantees. So the new idea is really the augmentation. Impact. It's hard to tell what the impact of this paper will be. From a theoretical perspective there is not much new. The practical experiments are definitely appreciated, but not overwhelming. E.g., what about linear Thompson sampling? Correctness. I only skimmed the proofs in the supplementary material, but the bound passes plausibility tests. Overall. This seems like a borderline paper to me. I would increase my score if the authors can argue convincingly that the theoretical results are really doing more than the analysis in the cited paper of Valko et al. Other comments. - On L186 you remark this holds "under the further assumption that after time t, n_{a,t} = t/N". But this assumption is completely unjustified, so how meaningful are conclusions drawn from it? - It's a pity that the analysis was not possible for Algorithm 1. I presume the sup-"blah blah" algorithms don't really work? It could be useful to show their regret relative to Algorithm 1 in one of the figures. Someone needs to address these issues at some point (but I know this is a tricky problem). Minors. - I would capitalize A_t and other random variables. - L49: "one estimate" -> "one estimates". - L98: "z_a" -> "z_a \in \mathcal Z". - L110: The notation t_a as a set that depends on t and a is just odd. - L114: Why all the primes? - There must be some noise assumption on the rewards (or bounded, or whatever). Did I miss it? - L135: the augmented context here is also (a, x_a).
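To make the augmented-context idea concrete, here is a small sketch of kernelized UCB-style arm scoring on pairs (z_a, x_a). The product-kernel form, the `sim` parameter controlling how much arms share, and all constants are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def rbf(u, v, gamma=1.0):
    return np.exp(-gamma * np.sum((u - v) ** 2))

def k_aug(a, b, sim=0.5):
    # Kernel on augmented contexts (z, x): similarity between arm descriptors z
    # times similarity between contexts x. sim=0 gives per-arm models,
    # sim=1 gives one fully shared model.
    (za, xa), (zb, xb) = a, b
    return (1.0 if za == zb else sim) * rbf(xa, xb)

def ucb_scores(history, rewards, candidates, lam=1.0, alpha=1.0):
    # history: augmented contexts already pulled; rewards: observed payoffs.
    n = len(history)
    K = np.array([[k_aug(h1, h2) for h2 in history] for h1 in history])
    K_inv = np.linalg.inv(K + lam * np.eye(n))
    r = np.asarray(rewards, dtype=float)
    scores = []
    for c in candidates:
        k_vec = np.array([k_aug(c, h) for h in history])
        mean = k_vec @ K_inv @ r
        var = max(k_aug(c, c) - k_vec @ K_inv @ k_vec, 0.0)
        scores.append(mean + alpha * np.sqrt(var))
    return scores

hist = [(0, np.array([0.1, 0.2])), (1, np.array([0.3, 0.1]))]
cands = [(arm, np.array([0.2, 0.2])) for arm in range(3)]
print(ucb_scores(hist, [1.0, 0.0], cands))   # pull the arm with the highest score
```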
nips_2017_2985
Permutation-based Causal Inference Algorithms with Interventions Learning directed acyclic graphs using both observational and interventional data is now a fundamentally important problem due to recent technological developments in genomics that generate such single-cell gene expression data at a very large scale. In order to utilize this data for learning gene regulatory networks, efficient and reliable causal inference algorithms are needed that can make use of both observational and interventional data. In this paper, we present two algorithms of this type and prove that both are consistent under the faithfulness assumption. These algorithms are interventional adaptations of the Greedy SP algorithm and are the first algorithms using both observational and interventional data with consistency guarantees. Moreover, these algorithms have the advantage that they are nonparametric, which makes them useful also for analyzing non-Gaussian data. In this paper, we present these two algorithms and their consistency guarantees, and we analyze their performance on simulated data, protein signaling data, and single-cell gene expression data.
The paper describes two permutation-based causal inference algorithms that can deal with interventional data. These algorithms extend a recent hybrid causal inference algorithm, Greedy SP, which performs a greedy search on the intervention equivalence classes, similar to GIES [8], but based on permutations. Both of the proposed algorithms are shown to be consistent under faithfulness, while GIES is shown not to be consistent in general. Algorithm 1 is computationally inefficient, so the evaluation is performed only with Algorithm 2 (IGSP), with two different types of independence criteria (linear-Gaussian and kernel-based). The evaluation on simulated data shows that IGSP is more consistent than GIES. Moreover, the authors also apply the algorithm to two biological datasets, a well-known benchmark dataset (Sachs et al.) and a recent dataset with gene expressions. Quality Pros: - I didn't check the proofs in depth, but to the best of my knowledge the paper seems to be technically sound. - It seems to provide a good theoretical analysis in the form of consistency guarantees, as well as an empirical evaluation on both simulated and real-world datasets. - The evaluation on the Perturb-seq data seems to be methodologically quite good. Cons: - The paper doesn't seem to discuss a possibly obvious drawback: the potentially high-order conditional independence tests may be very brittle in the finite-data case. - The evaluation compares only with GIES, while there are other related methods, e.g. some of the ones mentioned below. - The empirical evaluation on real-world data is not very convincing, although reconstructing causal networks in these settings reliably is known to be very difficult. - For the Sachs data, the paper seems to suggest that the consensus network proposed in the original paper is the true causal graph in the data, while it is not clear if this is the case, so I think Figure 4 is a bit misleading. Many methods have been evaluated on this dataset, so the authors could include some more comparisons, at least in the Supplement. Clarity Pros: - The paper is well-organized and clearly written. - It is quite clear how the approach is different from previous work. - From a superficial point of view, the proofs in the Appendix seem to be quite well-written, which appears to be quite rare among the other submissions. Cons: - Some parts of the background could be expanded to make the paper more self-contained, for example the description of Greedy SP. - It probably sounds obvious to the authors, but maybe it would be good to point out in the beginning of the paper that it considers the causally sufficient case, and the case in which the intervention targets are known. - For the Sachs data, I would possibly use a table of causal relations similar to the one in the Supplement of [13] to compare with other methods as well. Originality Pros: - The paper seems to provide an interesting extension of Greedy SP to include interventional datasets, which allows it to be applied to some interesting real-world datasets. Cons: - The related work section could refer to the several constraint-based (most of them in practice hybrid) methods that have been developed recently to reconstruct causal graphs from observational and interventional datasets (even when we cannot assume causal sufficiency), for example:
-- Triantafillou, Sofia, and Ioannis Tsamardinos. "Constraint-based causal discovery from multiple interventions over overlapping variable sets." Journal of Machine Learning Research 16 (2015): 2147-2205. -- Magliacane, Sara, Tom Claassen, and Joris M. Mooij. "Ancestral causal inference." Advances in Neural Information Processing Systems. 2016. - The last paper also offers some (possibly limited) asymptotic consistency guarantees; maybe the authors could discuss its limitations with respect to their guarantees. Significance Pros: - This paper and other permutation-based causal discovery approaches seem to provide a novel and interesting alternative to previous methods. Cons: - The applicability of these methods to real-world data is reduced by the assumption that there are no latent confounders. Typos: - References: [13] N. Meinshausen, A. Hauser, J. M. Mooijc, -> Mooij
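Since the IGSP variants evaluated above differ mainly in the conditional-independence test that is plugged in, a minimal sketch of the standard linear-Gaussian (partial-correlation) CI test may be a useful reference; the data here are synthetic and the test itself is textbook material, not code from the paper.

```python
import numpy as np
from scipy import stats

def partial_corr_ci_test(data, i, j, cond, alpha=0.05):
    # Test X_i independent of X_j given X_cond via partial correlation
    # (Fisher z-transform); valid under linear-Gaussian assumptions.
    n = data.shape[0]
    idx = [i, j] + list(cond)
    P = np.linalg.inv(np.corrcoef(data[:, idx], rowvar=False))  # precision matrix
    r = -P[0, 1] / np.sqrt(P[0, 0] * P[1, 1])                   # partial correlation
    z = 0.5 * np.log((1 + r) / (1 - r)) * np.sqrt(n - len(cond) - 3)
    p_value = 2 * (1 - stats.norm.cdf(abs(z)))
    return p_value > alpha       # True => accept (conditional) independence

rng = np.random.default_rng(0)
x = rng.standard_normal(500)
y = 2 * x + 0.1 * rng.standard_normal(500)
z_ = y + 0.1 * rng.standard_normal(500)      # chain x -> y -> z_
data = np.column_stack([x, y, z_])
print(partial_corr_ci_test(data, 0, 2, []))    # False: x and z_ are dependent
print(partial_corr_ci_test(data, 0, 2, [1]))   # True: independent given y
```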
nips_2017_2454
Query Complexity of Clustering with Side Information Suppose we are given a set of n elements to be clustered into k (unknown) clusters, and an oracle/expert labeler that can interactively answer pair-wise queries of the form, "do two elements u and v belong to the same cluster?". The goal is to recover the optimum clustering by asking the minimum number of queries. In this paper, we provide a rigorous theoretical study of this basic problem of the query complexity of interactive clustering, and give strong information-theoretic lower bounds, as well as nearly matching upper bounds. Most clustering problems come with a similarity matrix, which is used by an automated process to cluster similar points together. However, obtaining an ideal similarity function is extremely challenging due to ambiguity in data representation, poor data quality, etc., and this is one of the primary reasons that makes clustering hard. To improve the accuracy of clustering, a fruitful approach in recent years has been to ask a domain expert or crowd to obtain labeled data interactively. Many heuristics have been proposed, and all of these use a similarity function to come up with a querying strategy. Even so, there is a lack of systematic theoretical study. Our main contribution in this paper is to show the dramatic power of side information, a.k.a. the similarity matrix, in reducing the query complexity of clustering. A similarity matrix represents noisy pair-wise relationships, such as those computed by some function on attributes of the elements. A natural noisy model is one where similarity values are drawn independently from some arbitrary probability distribution f+ when the underlying pair of elements belong to the same cluster, and from some f− otherwise. We show that given such a similarity matrix, the query complexity reduces drastically from Θ(nk) (no similarity matrix) to O(k^2 log n / H^2(f+‖f−)), where H^2(f+‖f−) denotes the squared Hellinger divergence between f+ and f−. Moreover, this is also information-theoretically optimal within an O(log n) factor. Our algorithms are all efficient and parameter-free, i.e., they work without any knowledge of k, f+, and f−, and depend only logarithmically on n. Our lower bounds could be of independent interest, and provide a general framework for proving lower bounds for classification problems in the interactive setting. Along the way, our work also reveals an intriguing connection to popular community detection models such as the stochastic block model, and opens up many avenues for interesting future research.
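For intuition about the H^2 term in the bound above, a small sketch that computes the squared Hellinger divergence between two discrete similarity distributions and the implied query budget; the distributions and the constants are made up.

```python
import numpy as np

def squared_hellinger(p, q):
    # H^2(p, q) = 1 - sum_i sqrt(p_i * q_i) for discrete distributions.
    p, q = np.asarray(p, float), np.asarray(q, float)
    return 1.0 - np.sum(np.sqrt(p * q))

f_plus = [0.7, 0.2, 0.1]     # similarity distribution for same-cluster pairs (made up)
f_minus = [0.2, 0.3, 0.5]    # similarity distribution for different-cluster pairs

n, k = 10_000, 100
h2 = squared_hellinger(f_plus, f_minus)
print(h2, k**2 * np.log(n) / h2)   # query budget scaling like k^2 log n / H^2
```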
NOTE: I am reviewing two papers that appear to be by the same authors and on the same general topic. The other one is on noisy queries without any side information. This paper gives upper and lower bounds on the query complexity of clustering based on noiseless pairwise comparisons, along with random side information associated with every pair. While I am familiar with some of the related literature, my knowledge of it is far from complete, so it's a little hard to fully judge the value of the contributions. The paper is also so dense that I couldn't spend as much time as would be ideal checking the correctness -- especially in the long upper bound proof. My overall impression of both papers is that the contributions seem to be valuable (if correct), but the writing could do with a lot of work. I acknowledge that the contributions are more important than the writing, but there are so many issues that my recommendation is still borderline. It would have been very useful to have another round of reviews where the suggestions are checked, but I understand this is not allowed for NIPS. Regarding this paper, there are a few very relevant works that were not cited: - The idea of replacing Bernoulli(p) and Bernoulli(q) by a general distribution in the stochastic block model is not new. A Google (or Google Scholar) search for "labeled [sometimes spelt labelled] stochastic block model" or the less common "weighted stochastic block model" reveals quite a few papers that have looked at similar generalizations. - There is also at least one work studying the SBM with an extra querying (active learning) element: "Active Learning for Community Detection in Stochastic Block Models" (Gadde et al., ISIT 2016). It uses a different querying model, where one only queries a single node and gets its label. But for fixed k (e.g., k=2 or k=100) and linear-size communities, this is basically the same as your pairwise querying model -- just quickly find a node from each community via random sampling, and then you can get the label of any new node by performing a pairwise query against each of those. At first glance it looks like your lower bound contradicts their (sublinear-sample) upper bound, but this might be because your lower bound only holds as k -> Infinity (see below). In any case, in light of the above works, I would ask the authors to remove/re-word all statements along the lines of "we initiate a rigorous theoretical study", "significantly generalizes them", "This is the first work that...", "generalize them in a significant way", "There is no systematic theoretical study", etc. Here are some further comments: - I think the model/problem should be presented more clearly. It looks like you are looking at minimax bounds, but this doesn't seem to be stated explicitly. Moreover, is the minimax model without any restriction on the cluster sizes? For instance, do you allow the case where there are k-1 clusters of size 1 and the other of size n-k+1? - It looks like Theorem 2 can't be correct as stated. For the SBM with k=2 (or probably any k=O(1)) we know that *no* queries are required even in certain cases where the Hellinger distance scales as log(n)/n. It looks like part of the problem is that the proof assumes k -> Infinity, which is not stated in the theorem (it absolutely needs to be). I'm not sure if that's the only part of the problem -- please check very carefully whether Theorem 2 more generally contradicts cases where clustering is possible even with Q=0.
- The numerical section does not seem to be too valuable, mainly because it does not compare against any existing works, and only against very trivial baselines. Are your methods really the only ones that can be considered/implemented? - There are many confusing/unclear sentences, like "assume, otherwise the clustering is done" (is there text missing after "assume"?) and "the theorem is already proved from the nk lower bound" (doesn't that lower bound only apply when there's no side information?). - The paper is FULL of grammar mistakes, typos, and strange wording. Things like capitalization, plurals, spacing, and articles (a/an/the) are often wrong. Commas are consistently mis-used (e.g., they usually should not appear after statements like "let" / "we have" / "this means" / "assume that" / "since" / "while" / "note that", and very rarely need to be used just before an equation). Sentences shouldn't start with And/Or/Plus. The word "else" is used strangely (often "otherwise" would be better). Footnotes should start with a capital letter and end with a full stop. This list of issues is far from complete -- regardless of the decision, I strongly urge the authors to proof-read the paper with as much care as possible. Some less significant comments: - The abstract seems too long, and more generally, I don't think it's ideal to spend 3 pages before getting to the contributions. - Is your use of the terminology "Monte Carlo" and "Las Vegas" standard? Also, when you first mention Las Vegas on p4, it should be mentioned alongside *AVERAGE* query complexity (the word 'average' is missing). - I disagree with the statement "clearly exhibiting" in the last sentence of the Simulations section. - Many brackets are too small throughout the appendix. [POST-AUTHOR FEEDBACK COMMENTS] The authors clarified a few things in the responses, and I have updated the recommendation to acceptance. I still hope that a very careful revision is done for the final version considering the above comments. I can see that I was the only one of the 3 reviewers to have significant comments about the writing, but I am sure that some of these issues would make a difference to a lot more NIPS readers, particularly those coming from different academic backgrounds than the authors. Some specific comments: - The authors have the wrong idea if they feel that "issues like capitalization and misplaced commas" were deciding factors in the review. These were mentioned at the end as something that should be revised, not as a reason for the borderline recommendation. - Thank you for clarifying the (lack of) contradiction with Gadde's paper. - It is very important that theorems and lemmas are stated precisely. E.g., if a result only holds for sufficiently large k, this MUST be stated in the theorem, and not just in the proof. - Regarding the specific grammar issues addressed in the responses, I still suggest revising the use of Else/And/Or/Plus, the sentence "assume, otherwise...", and so on. I am convinced that most of these are grammatically incorrect as used in the paper, or at least unsuitable language for an academic paper. (The authors may disagree.)
nips_2017_2100
Dynamic Routing Between Capsules A capsule is a group of neurons whose activity vector represents the instantiation parameters of a specific type of entity such as an object or an object part. We use the length of the activity vector to represent the probability that the entity exists and its orientation to represent the instantiation parameters. Active capsules at one level make predictions, via transformation matrices, for the instantiation parameters of higher-level capsules. When multiple predictions agree, a higher level capsule becomes active. We show that a discriminatively trained, multi-layer capsule system achieves state-of-the-art performance on MNIST and is considerably better than a convolutional net at recognizing highly overlapping digits. To achieve these results we use an iterative routing-by-agreement mechanism: A lower-level capsule prefers to send its output to higher level capsules whose activity vectors have a big scalar product with the prediction coming from the lower-level capsule.
Overview: this paper introduces a dynamic routing process for connecting layers in a feedforward neural net, as described in Procedure 1 on p 3. The key idea here is that the coupling coefficient c_ij between unit i and unit j is computed dynamically (layerwise), taking into account the agreement between the output v_j of unit j and the prediction from unit i, \hat{u}_{j|i}. This process iterates between each layer l and l+1, but does not (as far as I can tell) spread further back. Another innovation used in the paper is a form of nonlinearity, as in eq 1, in which the length of the capsule output v_j encodes the strength of activity and the direction of v_j encodes the values of the capsule parameters. A shallow CapsNet model is trained on MNIST and obtains very good performance (a check of the MNIST leaderboard shows a best performance of 0.23 obtained with a committee of deep conv nets); cf. the performance in Table 1. I regard this paper as very interesting, as it has successfully married the capsules idea with conv nets, and makes use of the dynamic routing capabilities. The results on highly overlapping digits (sec 6) are also impressive. This idea could really take off and be heavily used. One major question about the paper is the convergence of the iteration in Procedure 1. It would be highly desirable to prove that this converges (if it does so). This issue is, as far as I can tell, not discussed in the paper. It would also be very nice to see an example of the dynamic routing process in action, especially to see how a unit whose \hat{u}_{j|i} is out of agreement with the others gets suppressed. This process of assessing agreement of bottom-up contributions is reminiscent of the work in Rao and Ballard (Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects, Nature Neurosci. 2(1) 79-87, 1999), especially the example where local bar detectors all predict the same longer bar. One could, e.g., look at how the activations of three units in a line (---) compare to a configuration where one unit is disoriented (|--) in your network. I would encourage the authors to publish their code (this can follow acceptance). This would greatly facilitate uptake by the community. Quality: As far as I can tell this is technically sound. Clarity: The writing is clear, but my detailed comments give a number of suggestions to make the presentation less idiosyncratic. Originality: The idea of capsules is not new, but this paper seems to have achieved a step change in their performance and moved beyond the "transforming autoencoder" framework in which they were trained. Significance: The level of performance that can be achieved with a relatively shallow network is impressive. Given the interest in object recognition etc., it is possible that this dynamic routing idea could really take off. Other points: p 1: Need to ref Hinton et al (2011) when you first mention "capsules" in the text. p 2 and Procedure 1: I strongly recommend "squashing" over "squidging", as it is a term that has been used in the NN literature (and sounds more mathematical). p 2: The constraint that sum_j c_{ij} = 1 is reminiscent of the "single parent constraint" in Hinton et al (2000). It would be good to discuss this more. p 3: The 0.5 in eq (4) seems like a hack, although it would look better if you introduced a weighting parameter alpha and set it to 0.5. Also, I don't understand why you are not just using a softmax loss (for a single-digit classifier) on the ||v_j||'s (as they lie in [0,1]).
(For the multiclass case you can use multiple independent sigmoids.) p 3 I recommend "PrimaryCapsules" rather than "PrimaryCaps" as Caps sounds like CAPS. p 5 Lee et al [2016] baseline -- justify why this is an appropriate baseline. p 5 sec 5.1 Tab. 4 -> Fig. 4 p 6 Figure 4 caption: "Each column" -> "Each row". p 6 Sec 6 first para -- it is confusing to talk about your performance level before you have introduced the task -- leave this for later. p 6 sec 6: note a heavily overlapping digits task was used in Hinton et al (2000). p 8. It would be better to put the related work earlier, e.g. after sec 4. p 9. The ordering of the references (as they appear in the paper) is unusual and makes them hard to find. Suggest alphabetical or numbered in order of use. These are standard bibtex options.
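For readers who want the routing procedure discussed in this review in executable form, here is a minimal NumPy sketch of the squashing nonlinearity (eq. 1) and the routing-by-agreement iteration. The capsule dimensions, the number of routing iterations, and the random predictions are placeholder assumptions.

```python
import numpy as np

def squash(s, axis=-1, eps=1e-9):
    # Eq. (1): shrink short vectors toward 0 and long vectors toward length 1,
    # preserving direction.
    norm2 = np.sum(s ** 2, axis=axis, keepdims=True)
    return (norm2 / (1.0 + norm2)) * s / np.sqrt(norm2 + eps)

def route(u_hat, n_iters=3):
    # u_hat: predictions \hat{u}_{j|i}, shape (num_lower, num_upper, dim).
    I, J, _ = u_hat.shape
    b = np.zeros((I, J))                                       # routing logits
    for _ in range(n_iters):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)   # softmax over j
        s = np.einsum('ij,ijd->jd', c, u_hat)                  # weighted sum
        v = squash(s)                                          # capsule outputs
        b += np.einsum('ijd,jd->ij', u_hat, v)                 # agreement update
    return v, c

u_hat = np.random.randn(6, 3, 8)   # 6 lower capsules predicting 3 upper capsules
v, c = route(u_hat)
print(v.shape, c.sum(axis=1))      # coupling coefficients sum to 1 per lower unit
```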
nips_2017_1466
Sparse Approximate Conic Hulls We consider the problem of computing a restricted nonnegative matrix factorization (NMF) of an m × n matrix X. Specifically, we seek a factorization X ≈ BC, where the k columns of B are a subset of those from X and C ∈ R_{≥0}^{k×n}. Equivalently, given the matrix X, consider the problem of finding a small subset, S, of the columns of X such that the conic hull of S ε-approximates the conic hull of the columns of X, i.e., the distance of every column of X to the conic hull of the columns of S should be at most an ε-fraction of the angular diameter of X. If k is the size of the smallest ε-approximation, then we produce an O(k/ε^{2/3})-sized O(ε^{1/3})-approximation, yielding the first provable, polynomial time ε-approximation for this class of NMF problems, where also desirably the approximation is independent of n and m. Furthermore, we prove an approximate conic Carathéodory theorem, a general sparsity result, that shows that any column of X can be ε-approximated with an O(1/ε^2)-sparse combination from S. Our results are facilitated by a reduction to the problem of approximating convex hulls, and we prove that both the convex and conic hull variants are d-SUM-hard, resolving an open problem. Finally, we provide experimental results for the convex and conic algorithms on a variety of feature selection tasks.
The paper "Sparse approximate conic hulls" develops conic analogues of approximation problems in convex geometry, hardness results for approximate convex and conic hulls, and considers these in the context of non-negative matrix factorization. The paper also presents numerical results comparing the approximate conic hull and convex hull algorithms, a modified approximate conic hull algorithm (obtained by first translating the data), and other existing algorithms for a feature-selection problem. Numerical results are also presented. The first theoretical contribution is a conic variant on the (constructive) approximate Caratheodory theorem devised for the convex setting. This is obtained by transforming the rays (from the conic problem) into a set of vectors by the "gnomic projection" applying the approximate Caratheodory theorem in the convex setting, and transforming back. The main effort is to ensure the error behavior can be controlled when the gnomic projection/its inverse are applied. This requires the points to have angle to the "center" strictly smaller than pi/2. This kind of condition on the points persists throughout the "conic" results in the paper. The second main contribution is establishing hardness results for approximate conic and convex hull problems. Finally, the main point of the paper seems to be that the epsilon-approximate conic hull algorithm can be used to approximately solve NMF under the column subset restriction. The paper is quite well written, albeit a little dense, with much of the core technical content relegated to the supplementary material. The results are interesting from a purely convex geometry point of view. Nevertheless, I am somewhat unconvinced about certain issues in the formulation of the problems in the paper (rather than the technical results of the paper, which are nice): -- the choice of angular metric seems a bit arbitrary (there are other ways to put a distance on the non-negative orthant that is naturally adapted to the conic setting, such as the Hilbert metric. Perhaps the angular metric is best suited to using Frobenius norm error in the matrix factorization problem? If so it would be great if the authors could make this more clear. -- The gamma-shifted conic version performs well in experiments (and the original conic version does not), which is interesting. How should we choose the shift though? What reason is there not to make the shift very large? Is it possible to "pull back" the gamma shift throughout the paper, and formulate a meaningful version of approximate Caratheodory that has a parameter gamma? Perhaps choosing the best-case over gamma is a more interesting formulation (from a practical point of view) than the vanilla conic version studied in this paper. In addition to these concerns, I'm not sure how much innovation there is over the existing convex method of Blum et al, and over the existing hardness results for NMF in Arora et al. (I am not expert enough to know for sure, but the paper feels a little incremental). Minor comments: -- p3 line 128: this is not the usual definition of "extreme" point/ray in convex geometry (although lines 134-136 in some sense deal with the difference between the usual definition and the definition in this paper, via the notion of "degenerate") -- p4 line 149: one could also consider using the "base" of the cone defined by intersection with any hyperplane defined by a vector in the interior of the dual cone of the points (not just the all ones vector) instead of using the gnomic projection. 
This preserves the extreme points, so the basic strategy might work, but perhaps doesn't play nicely with Frobenius norm error in the NMF problem? It would be helpful if the authors could briefly explain why this approach is not favorable, and why the gnomic projection approach makes the most sense. -- p5 line 216, theorem 3.4: It would be useful if the authors describe the dependence on gamma in the parameters of the theorem. It seems this will be very bad as gamma approaches pi/2, and it would be good to be upfront about this (since the authors do a good job of making clear that for the problem setup in this paper the bounded angle assumption is necessary) -- p6 line 235, there is some strange typesetting with the reference [16] in the Theorem statement. -- p6 line 239: again the dependence on gamma would be nice to have here (or in a comment afterwards). -- p6 line 253: it may be useful to add a sentence to say how this "non-standard" version of d-sum differs from the "standard" version, for the non-expert reader. -- p8 lines 347-348: this is an interesting observation, that the gamma-shifted conic case is a sort of interpolant between the convex and conic cases. It would be interesting to be able to automatically tune gamma for a given scenario.
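To illustrate the ε-approximation criterion being analyzed, here is a small sketch that measures the angle from a column of X to the conic hull of a chosen subset S via nonnegative least squares. Note that NNLS minimizes Euclidean rather than angular distance, so the projection is only a proxy for the paper's angular criterion; the data and the subset are random placeholders.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
X = np.abs(rng.standard_normal((20, 100)))        # nonnegative data; columns = points
S = X[:, rng.choice(100, size=5, replace=False)]  # a candidate subset of columns

def angular_error(x, S):
    # Project x onto cone(S) via nonnegative least squares, then return the
    # angle between x and the projection.
    coeffs, _ = nnls(S, x)
    proj = S @ coeffs
    cos = proj @ x / (np.linalg.norm(proj) * np.linalg.norm(x) + 1e-12)
    return np.arccos(np.clip(cos, -1.0, 1.0))

errs = [angular_error(X[:, j], S) for j in range(X.shape[1])]
print(max(errs))  # compare against an eps-fraction of the angular diameter of X
```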
nips_2017_2192
Aggressive Sampling for Multi-class to Binary Reduction with Applications to Text Classification We address the problem of multi-class classification in the case where the number of classes is very large. We propose a double sampling strategy on top of a multi-class to binary reduction strategy, which transforms the original multi-class problem into a binary classification problem over pairs of examples. The aim of the sampling strategy is to overcome the curse of long-tailed class distributions exhibited in the majority of large-scale multi-class classification problems and to reduce the number of pairs of examples in the expanded data. We show that this strategy does not alter the consistency of the empirical risk minimization principle defined over the double sample reduction. Experiments are carried out on DMOZ and Wikipedia collections with 10,000 to 100,000 classes, where we show the efficiency of the proposed approach in terms of training and prediction time, memory consumption, and predictive performance with respect to state-of-the-art approaches.
This paper presents a learning reduction for extreme classification, multiclass classification where the output space ranges up to 100k different classes. Many approaches to extreme classification rely either on inferring a tree structure among the output labels, as in the hierarchical softmax, so that only a logarithmic number of binary predictions is needed to infer the output class, or on a label embedding approach that can be learned efficiently through least squares or sampling. The authors note the problems with these approaches: the inferred latent tree may not be optimal and can lead to cascading errors, and the label embedding approach may lead to prediction errors when the true label matrix is not low-rank. An alternative approach is to reduce extreme classification to pairwise binary classification. The authors operate in the second framework and present a scalable sampling-based method with theoretical guarantees as to its consistency with the original multiclass ERM formulation. Experiments on 5 text datasets comparing a number of competing approaches (including standard one-vs-all classification when that is computationally feasible) show that the method is generally superior in terms of accuracy and F1 once the number of classes goes above 30k, and trains faster while using far less memory, although it suffers somewhat in terms of prediction speed. The Aggressive Double Sampling reduction is controlled by two parameters. The first controls the frequency of sampling each class, which is set inversely proportional to the empirical class frequency in order to avoid entirely missing classes in the long tail. The second parameter sets the number of adversarial examples to draw uniformly. This procedure is repeated to train the final dyadic classifier. At prediction time, inference is complicated by the fact that generating pairwise features with all possible classes is intractable. A heuristic is used to identify candidate classes by forming an input-space centroid representation of each class as the average of vectors in the class, with prediction carried out by a series of pairwise classifications against only those candidates. Theoretical analysis of the reduction is complicated by the fact that the reduction from multi-class to dyadic binary classification changes a sum of independent random variables into a sum of random variables with a particular dependence structure among the dyadic examples. The other complication is caused by the oversampling of rare classes, which shifts the empirical distribution. The proof involves splitting the sum of dependent examples into several sums of independent variables based on the graph structure induced by the example construction, and using the concentration inequalities for partially dependent variables introduced by Janson. Although the finite-sample risk bound is biased by the ratios between the true class probabilities and the oversampled ones, these biases decrease linearly with the sizes of the sampled and re-sampled training sets. Overall, this paper presents a strong method for extreme classification, with small memory and computational requirements and superior performance especially on the largest datasets. The binary reduction algorithm appears theoretically sound, interesting, and useful to practitioners.
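A sketch of the two-level sampling just described: classes drawn with probability inversely proportional to their empirical frequency, then a fixed number of adversarial examples drawn uniformly to form dyadic pairs. The synthetic long-tailed labels and the simplified pair construction are illustrative assumptions, not the paper's exact reduction.

```python
import numpy as np

rng = np.random.default_rng(0)
n_classes, n_examples = 1000, 50_000
labels = rng.zipf(1.5, size=n_examples) % n_classes    # long-tailed labels (synthetic)

counts = np.bincount(labels, minlength=n_classes).astype(float) + 1.0
class_probs = (1.0 / counts) / (1.0 / counts).sum()    # oversample rare classes

by_class = [np.flatnonzero(labels == c) for c in range(n_classes)]

def sample_dyadic_pairs(n_draws=100, kappa=5):
    pairs = []
    classes = rng.choice(n_classes, size=n_draws, p=class_probs)
    for c in classes:
        if len(by_class[c]) == 0:
            continue
        i = rng.choice(by_class[c])                       # example of sampled class c
        adversaries = rng.choice(n_examples, size=kappa)  # uniform adversarial draws
        for j in adversaries:
            # Binary label: do the two examples of the pair share a class?
            pairs.append((i, j, 1 if labels[j] == c else -1))
    return pairs

print(len(sample_dyadic_pairs()))
```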
nips_2017_1865
Bayesian Compression for Deep Learning Compression and computational efficiency in deep learning have become a problem of great significance. In this work, we argue that the most principled and effective way to attack this problem is by adopting a Bayesian point of view, where through sparsity inducing priors we prune large parts of the network. We introduce two novelties in this paper: 1) we use hierarchical priors to prune nodes instead of individual weights, and 2) we use the posterior uncertainties to determine the optimal fixed point precision to encode the weights. Both factors significantly contribute to achieving the state of the art in terms of compression rates, while still staying competitive with methods designed to optimize for speed or energy efficiency.
This paper describes a method for compressing a neural network: pruning its neurons and reducing the floating point precision of its weights to make it smaller and faster at test time. The method works by training the neural net with SVI using a factored variational approximation that shares a scale parameter between sets of weights. The entropy term of the variational objective incentivizes weights to have a large variance, which leads to compression by two separate mechanisms: neurons with high-variance weights relative to their means can be pruned, and layers can have their floating point precision reduced to a level commensurate with the variance of their weights. It's a fairly principled approach in which compression arises naturally as a consequence of Bayesian uncertainty. It also seems effective in practice. I have three main reasons for not giving a higher score. First, I'm not convinced that the work is hugely original. It leans very heavily on previous work, particularly Kingma et al. on variational dropout and Molchanov et al. on sparsification arising from variational dropout. The two novelties are (1) hierarchical priors and (2) reduced floating point precision, which are certainly useful but they are unsurprising extensions of the previous work. Additionally, the paper does a poor job of placing itself in the context of these previous papers. Despite repeatedly referencing the variational dropout papers for forms of variational approximations or derivations, the concept of 'variational dropout' is never actually introduced! The paper would be much clearer if it at least offered a few-sentence summary of each of these previous papers, and described where it departs from them. Second, the technical language is quite sloppy in places. The authors regularly use the phrase "The KL divergence between X and Y" instead of "the KL divergence from X to Y". "Between" just shouldn't be used because it implies symmetry. Even worse, in all cases, the order of "X" and "Y" is backwards, so that even "from X to Y" would be incorrect. Another example of sloppy language is the use of the term "posterior" to describe the variational approximation q. This causes genuine confusion, because the actual posterior is a very relevant object. At the very least, the term should be "posterior approximation". In the paper's defense, the actual equations are generally clear and unambiguous. Finally, the experiments are a bit unimaginative. For example, although the authors argue that Bayesian inference and model compression are well aligned, we know they have quite different goals and you would expect to find some situations where they could be in conflict. Why not explore this tradeoff by looking at performance and compression as a function of hyperparameters like $\tau_0$? How much compression do you get if you select $\tau_0$ by maximizing the marginal likelihood estimate rather than choosing an arbitrary value?
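To make the pruning mechanism concrete, here is a small sketch of the variational-dropout-style criterion the review alludes to: a weight (or group) is pruned when its approximate-posterior variance is large relative to its squared mean, and surviving weights are kept at lower precision. The threshold, the toy parameters, and especially the bit-width heuristic are illustrative guesses, not the paper's exact rules.

```python
import numpy as np

rng = np.random.default_rng(0)
mu = rng.standard_normal(1000) * 0.1        # variational means of weights (toy)
sigma = np.abs(rng.standard_normal(1000))   # variational std devs (toy)

# log alpha = log(sigma^2 / mu^2): high values mean "mostly noise" -> prune.
log_alpha = np.log(sigma ** 2) - np.log(mu ** 2 + 1e-12)
keep = log_alpha < 3.0                      # common threshold in this literature
print(f"kept {keep.mean():.1%} of weights")

# A crude bit-precision heuristic: resolve each kept weight only up to its own
# posterior std, i.e. spend fewer bits on more uncertain weights.
dynamic_range = np.abs(mu[keep]).max()
bits = np.ceil(np.log2(dynamic_range / sigma[keep] + 1.0)).clip(min=1)
print("mean bits per kept weight:", bits.mean())
```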
nips_2017_77
Scalable Generalized Linear Bandits: Online Computation and Hashing Generalized Linear Bandits (GLBs), a natural extension of stochastic linear bandits, have been popular and successful in recent years. However, existing GLBs scale poorly with the number of rounds and the number of arms, limiting their utility in practice. This paper proposes new, scalable solutions to the GLB problem in two respects. First, unlike existing GLBs, whose per-time-step space and time complexity grow at least linearly with time t, we propose a new algorithm that performs online computations to enjoy a constant space and time complexity. At its heart is a novel Generalized Linear extension of the Online-to-confidence-set Conversion (GLOC method) that takes any online learning algorithm and turns it into a GLB algorithm. As a special case, we apply GLOC to the online Newton step algorithm, which results in a low-regret GLB algorithm with much lower time and memory complexity than prior work. Second, for the case where the number N of arms is very large, we propose new algorithms in which each next arm is selected via an inner product search. Such methods can be implemented via hashing algorithms (i.e., "hash-amenable") and result in a time complexity sublinear in N. While a Thompson sampling extension of GLOC is hash-amenable, its regret bound for d-dimensional arm sets scales with d^{3/2}, whereas GLOC's regret bound scales with d. Towards closing this gap, we propose a new hash-amenable algorithm whose regret bound scales with d^{5/4}. Finally, we propose a fast approximate hash-key computation (inner product) with a better accuracy than the state-of-the-art, which can be of independent interest. We conclude the paper with preliminary experimental results confirming the merits of our methods.
The paper identifies two problems with existing Generalized Linear Model Bandit algorithms: their per-step computational complexity is linear in t (time-steps) and N (number of arms). The first problem is solved by the proposed GLOC algorithm (the first that achieves constant-in-t runtime and O(d T^{1/2} polylog(T)) regret), and the second by a faster approximate variant (QGLOC) based on hashing. Minor contributions are a (claimed non-trivial) generalization of linear online-to-confidence bounds to GLMs, an exploration-exploitation tradeoff parameter for QGLOC that finally justifies a common heuristic used by practitioners, and a novel, simpler hashing method for computing QGLOC solutions. I think the paper is interesting and proposes a novel idea, but the presentation is sometimes confused and makes it hard to evaluate the impact of the contribution w.r.t. the existing literature. The authors claim that all existing GLMB algorithms have a per-step complexity linear in t. This is clearly not the case in the linear setting when \mu is the identity, and closed forms for \hat{\theta}_t allow the solution to be computed incrementally. This cannot be done in general in the GLM, where \hat{\theta}_t is the solution of a convex optimization problem that requires O(t) time to be created and solved. Indeed, GLOC itself uses ONS to estimate \theta, and then obtains a closed-form solution for \hat{\theta}, sidestepping the O(t) convex problem. The authors should improve the presentation and comparison with existing methods, highlighting when and where this dependence on t comes from, why other methods (e.g., hot-restarting the convex solver with \hat{\theta}_{t-1}) cannot provide the same guarantees, and other examples in which closed forms can be computed. The authors should expand the discussion on line 208 to justify the non-triviality: what original technical results were necessary to extend Abbasi-Yadkori [2] to GLMs? While the road to solving the O(t) problem is somewhat clear, the approach to reducing the O(N) complexity is not well justified. In particular (as I understood it): GLOC-TS is hash-amenable but has d^{3/2} regret (when solved exactly); QGLOC is hash-amenable and has d^{5/4} regret, but only when solved exactly; GLOC-TS has no known regret bound when solved inexactly (using hashing); QGLOC has a fixed-budget regret guarantee (presented only in the appendix; d^{5/4} or d^{3/2}?) that holds when solved inexactly (using hashing), but reintroduces an O(t) space complexity (no O(t) time complexity?). This raises a number of questions: does GLOC-TS have a regret bound when solved inexactly? Does any of these algorithms (GLOC, QGLOC, GLOC-TS) guarantee O(d^{1,5/4,3/2}\sqrt{T}) regret while using sublinear space/time per step BOTH in t and N at the same time? The presentation should also be improved (e.g., L1-sampling requires choosing m, but QGLOC's theorem makes no reference to how to choose m). The experiments report regret vs. t (time), and I take it that t here is the number of iterations of the algorithm. In this metric, UCB-GLM (as expected) outperforms the proposed method. It is important to report the metrics (runtime) that were the reason for the introduction of GLOC, namely per-step time and space complexity, as t and N change. Minor things: L230: "certain form" is a vague term. L119: S must be known in advance, it is very hard to estimate, and it has a linear impact on the regret. How hard would it be to modify the algorithm to adapt to unknown S?
L109+L127: The loss fed to the online learner is \ell(x,y) = -yz + m(z), and m(z) is \kappa-strongly convex under Ass. 1. For strongly convex functions, ONS is not necessary, and simple gradient descent achieves logarithmic regret. Under appropriate conditions on \kappa and y, the problem might become simpler. L125: Ass. 1 is common in the literature, but no intuition for a value of \kappa is provided. Since the regret scales linearly with \kappa, it would be useful to show simple examples of known \kappa, e.g., in the probit model with the 0/1 rewards used in the experiments. It would also be useful to show that this setting does not satisfy the previous remark. L44: [3] is not included in the "existing algorithm". Typo, or does TS not suffer from both scalability issues? L158: S should be an input of Alg. 1 as well as Alg. 2. AFTER REBUTTAL: I have read the author rebuttal, and I confirm my accept. I believe the paper is interesting, but the authors need to make a significant effort to clarify what it does and what it does not achieve.
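Since the review leans on how GLOC uses the online Newton step, a minimal sketch of an ONS-style update on a logistic-type GLM loss may help as a reference. The projection onto the radius-S ball is omitted, and all constants and the synthetic data stream are arbitrary placeholders.

```python
import numpy as np

d, gamma, eps = 5, 0.5, 1.0
theta = np.zeros(d)
A = eps * np.eye(d)                     # running second-moment matrix

def grad_logistic(theta, x, y):
    # Gradient of log(1 + exp(-y * <x, theta>)), one GLM-type loss.
    z = y * (x @ theta)
    return -y * x / (1.0 + np.exp(z))

rng = np.random.default_rng(0)
for t in range(1000):
    x = rng.standard_normal(d) / np.sqrt(d)
    y = 1.0 if rng.random() < 1 / (1 + np.exp(-x.sum())) else -1.0
    g = grad_logistic(theta, x, y)
    A += np.outer(g, g)                              # rank-one Hessian-proxy update
    theta -= (1.0 / gamma) * np.linalg.solve(A, g)   # Newton-style step
    # A real implementation would project theta back onto the ball of radius S.

print(theta)
```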
nips_2017_2009
Unsupervised Sequence Classification using Sequential Output Statistics We consider learning a sequence classifier without labeled data by using sequential output statistics. The problem is of great practical value since obtaining labels in training data is often costly, while the sequential output statistics (e.g., language models) can be obtained independently of input data and thus with low or no cost. To address the problem, we propose an unsupervised learning cost function and study its properties. We show that, compared to earlier works, it is less inclined to get stuck in trivial solutions and avoids the need for a strong generative model. Although it is harder to optimize in its functional form, a stochastic primal-dual gradient method is developed to effectively solve the problem. Experimental results on real-world datasets demonstrate that the new unsupervised learning method gives drastically lower errors than other baseline methods. Specifically, it reaches test errors about twice those obtained by fully supervised learning.
This paper presents a method for predicting a sequence of labels without a labeled training set. This is done by incorporating a prior probability on the possible output sequences of labels; that is, a linear classifier is sought such that the distribution of sequences it predicts is close (in the KL sense) to a given prior. The authors present a cost function, an algorithm to optimize it, and experimental results on real data for predicting characters in two tasks: OCR and spell correction. The paper is technically sound and generally clear. The idea is novel (some prior work is cited by the authors) and interesting. The main reason for my score is a concern regarding the difficulty of obtaining a useful prior model p_LM. Such models usually suffer from data sparseness; that is, unless the vocabulary is quite small, it is common for some test samples to have zero probability under the prior/LM learned from a training set. It is true that this issue is well known and solutions exist in the literature; however, I would like the authors to address its influence on their method. Another aspect of the same issue is the scaling of the algorithm to large vocabularies and/or long sub-sequences (parameter N in the paper). Indeed, the experiments use quite a small vocabulary (29 characters) and short sub-sequences (N-grams with N=2,3). In NLP data at the word level, the vocabulary is orders of magnitude larger. Some more comments: - I would also like the authors to address the application of their method to non-NLP domains. - The inclusion of the supervised solution in the figures of the 2D cost raises a question: it seems the red dot (the supervised solution) is always in a local minimum of the unsupervised cost, and I don't understand why this should be the case. - If it is assumed that there is structure in the output space of sequences, why not incorporate this structure into the classifier? That is, why use a point prediction p(y_t|x_t) of a single label independently of its neighbors? Specific comments: (*) lines 117-118, 'negative cross entropy': isn't the following formula the cross entropy (and not the negative of it)? (*) line 119: this sentence is somewhat confusing; if I understand correctly, \bar p is the expected frequency of a *given* sequence {i_1,...,i_n}, and not of *all* sequences. (*) lines 163-166: here too the sign of the cross entropy seems to be reversed, also when referred to in the text. E.g., in line 163, if p_LM goes to 0, then -log(0) is +inf and not -inf as written in the paper, and this indeed suits the sentence in line 167, since one expects that, to penalize a minimization problem, some term goes to +inf and not to -inf. Typos: (*) lines 173 and 174: "that p_LM" seems wrong; did the authors mean "for which p_LM"? (*) line 283: the word 'rate' appears twice.
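A sketch of the kind of cost under discussion: the expected N-gram frequencies of the classifier's predictions, scored against a prior language model via cross entropy (N=2 here). The toy posteriors and the add-one smoothing are assumptions; the smoothing is exactly where the sparseness concern raised above bites, since unsmoothed bigrams can have zero LM probability.

```python
import numpy as np

V, T = 29, 200                       # vocabulary size, sequence length (toy)
rng = np.random.default_rng(0)

# Toy per-position classifier posteriors p(y_t | x_t), shape (T, V).
logits = rng.standard_normal((T, V))
post = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

# Toy bigram LM with add-one smoothing so no bigram has zero probability.
counts = rng.poisson(1.0, size=(V, V)) + 1.0
p_lm = counts / counts.sum()

# Expected bigram frequency under the (position-wise independent) posteriors.
p_bar = np.zeros((V, V))
for t in range(T - 1):
    p_bar += np.outer(post[t], post[t + 1])
p_bar /= (T - 1)

cost = -np.sum(p_bar * np.log(p_lm))   # cross entropy of p_lm under p_bar
print(cost)
```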
nips_2017_1708
Deep Voice 2: Multi-Speaker Neural Text-to-Speech We introduce a technique for augmenting neural text-to-speech (TTS) with low-dimensional trainable speaker embeddings to generate different voices from a single model. As a starting point, we show improvements over the two state-of-the-art approaches for single-speaker neural TTS: Deep Voice 1 and Tacotron. We introduce Deep Voice 2, which is based on a similar pipeline to Deep Voice 1, but constructed with higher-performance building blocks, and demonstrates a significant audio quality improvement over Deep Voice 1. We improve Tacotron by introducing a post-processing neural vocoder, and demonstrate a significant audio quality improvement. We then demonstrate our technique for multi-speaker speech synthesis for both Deep Voice 2 and Tacotron on two multi-speaker TTS datasets. We show that a single neural TTS system can learn hundreds of unique voices from less than half an hour of data per speaker, while achieving high audio quality synthesis and preserving the speaker identities almost perfectly. Using these two single-speaker models as a baseline, we demonstrate multi-speaker neural speech synthesis by introducing trainable speaker embeddings into Deep Voice 2 and Tacotron. We organize the rest of this paper as follows. Section 2 discusses related work and what makes the contributions of this paper distinct from prior work. Section 3 presents Deep Voice 2 and highlights the differences from Deep Voice 1. Section 4 explains our speaker embedding technique for neural TTS models and shows multi-speaker variants of the Deep Voice 2 and Tacotron architectures. Section 5.1 quantifies the improvement for single-speaker TTS through a mean opinion score (MOS) evaluation, and Section 5.2 presents the synthesized audio quality of multi-speaker Deep Voice 2 and Tacotron via both MOS evaluation and a multi-speaker discriminator accuracy metric. Section 6 concludes with a discussion of the results and potential future work.
This paper presents a solid piece of work on a speaker-dependent neural TTS system, building on the previous work on Deep Voice and the Tacotron architecture. The key idea is to learn a speaker-dependent embedding vector jointly with the neural TTS model. The paper is clearly written, and the experiments are presented well. My comments are as follows. -- the speaker embedding vector approach is very similar to the speaker code approach for speaker adaptation studied in ASR, e.g., Ossama Abdel-Hamid, Hui Jiang, "Fast speaker adaptation of hybrid NN/HMM model for speech recognition based on discriminative learning of speaker code", ICASSP 2013. This should be mentioned in the paper. ASR researchers later found that using fixed speaker embeddings such as i-vectors can work equally well (or even better). It would be interesting if the authors could present some experimental results on that. The benefit is that you may be able to do speaker adaptation fairly easily, without learning the speaker embedding as in the presented approach. --in the second paragraph of Sec. 3, you mentioned that the phoneme duration and frequency models are separated in this work. Can you explain your motivation? --the CTC paper from A. Graves should be cited in section 3.1. Also, CTC is not a good alignment model for ASR, as it is trained by marginalizing over all the possible alignments. For ASR, the alignments from CTC may deviate considerably from the ground truth. I am curious how much segmentation error you have, and have you tried some other segmentation models, e.g., segmental RNNs? --in Table 1, can you present comparisons between Deep Voice and Tacotron with the same sample frequency? --is Deep Voice 2 trained end-to-end, or are the modules trained separately? It would be helpful to readers to clarify that. --the speaker embeddings are used as input to several different modules; some ablation experiments in the appendix would be helpful to see how much difference the embedding makes for each module.
nips_2017_2857
Approximation and Convergence Properties of Generative Adversarial Learning Generative adversarial networks (GAN) approximate a target data distribution by jointly optimizing an objective function through a "two-player game" between a generator and a discriminator. Despite their empirical success, however, two very basic questions on how well they can approximate the target distribution remain unanswered. First, it is not known how restricting the discriminator family affects the approximation quality. Second, while a number of different objective functions have been proposed, we do not understand when convergence to the global minima of the objective function leads to convergence to the target distribution under various notions of distributional convergence. In this paper, we address these questions in a broad and unified setting by defining a notion of adversarial divergences that includes a number of recently proposed objective functions. We show that if the objective function is an adversarial divergence with some additional conditions, then using a restricted discriminator family has a moment-matching effect. Additionally, we show that for objective functions that are strict adversarial divergences, convergence in the objective function implies weak convergence, thus generalizing previous results.
The authors present a formal analysis to characterize general adversarial learning. The analysis shows that under certain conditions on the objective function the adversarial process has a moment-matching effect. They also show results on convergence properties. The writing is quite dense and may not be accessible to most of the NIPS audience. I did not follow the full details myself. I defer to reviewers closer to the area. One general reservation I have about a formal analysis along the lines presented is whether the results actually provide any insights useful in the practice of generative adversarial networks. In particular, do the results suggest properties of domains / distributions that make such domains suitable for a GAN approach, and, conversely, what practical domain properties would suggest a bad fit for GANs? Do the results provide guidelines for the choice of deep net architecture or learning parameters? Ideally, a formal analysis provides this type of feedback to practitioners. If no such feedback can be provided, the formal results may still be of interest but, unfortunately, they will remain largely a theoretical exercise. But at least, I would like to see a brief discussion of the practical impact of these results, because it's the practical success of the GAN approach that has made it so exciting. (A similar issue arises with any formal framework that "explains" the success of deep learning. Such explanations become much more interesting if they also identify domain properties that predict where deep learning would work well and where it would not, in relation to real-world domains. I.e., what are the lessons for the practice of deep learning?) I'm happy with the authors' response. It would be great to see the feedback incorporated in the revised paper. I've raised my rating.
nips_2017_1323
Zap Q-Learning The Zap Q-learning algorithm introduced in this paper is an improvement of Watkins' original algorithm and recent competitors in several respects. It is a matrix-gain algorithm designed so that its asymptotic variance is optimal. Moreover, an ODE analysis suggests that the transient behavior is a close match to a deterministic Newton-Raphson implementation. This is made possible by a two time-scale update equation for the matrix gain sequence. The analysis suggests that the approach will lead to stable and efficient computation even for non-ideal parameterized settings. Numerical experiments confirm the quick convergence, even in such non-ideal cases.
The paper makes the following contributions: 1) It shows that the asymptotic variance of tabular Q-learning decreases slower than the typical 1/n rate even when an exploring policy is used. 2) It suggests a new algorithm, Zap Q(lambda) learning, to fix this problem. 3) It shows that in the tabular case the new algorithm can deliver the optimal asymptotic rate and even optimal asymptotic variance (i.e., optimal constants). 4) The algorithm is empirically evaluated on both a simple problem and on a finance problem that was used in previous research. Significance, novelty --------------------- Q-learning is a popular algorithm, at least in the textbooks. It is an instance of the family of stochastic approximation algorithms which are (re)gaining popularity due to their lightweight per-update computational requirements. While (1) was in a way known (see the reference in the paper), the present paper complements and strengthens the previous work. The slow asymptotic convergence at minimum must be taken as a warning sign. The experiments show that the asymptotics in a way correctly predict what happens in finite time, too. Fixing the slow convergence of Q-learning is an important problem. The fix, while relatively straightforward (but not entirely trivial), is important. My only worry is about the computational cost associated with matrix inversions. Even with incremental updates (Sherman-Morrison), the per-update cost is O(d^2) when we have d features. This may be OK in some applications, but it is a limitation (not a show-stopper by any means). The empirical evaluations are of sufficient depth and breadth for a conference publication proposing a new method (I actually like the way they are conducted). The condition A3 seems strong (as noted by the authors themselves). I don't have high hopes for the generalization to the full linear feature-based case, as Q-learning, like many of its relatives, tends not to have any strong guarantees there (i.e., performance can be quite bad). However, this is not the problem of *this* paper. Soundness --------- I believe that the results are sound. I could not check the long version (yet) as I was asked to write a review just two days ago (as an extra review) and I did not have the time to check the proofs. From what I understand about these types of proofs, I believe that the proof can go through, but there are some delicate steps (e.g., the nonsmooth right-hand side of the ODE). That the authors discovered that A3 is necessary for their proof to go through at least shows that they are quite careful and attentive to details. (I am siding with them in that I believe it may be possible to remove A3, but this won't be trivial). The experiments are done with care, demonstrating nicely the effect of learning. Perhaps a sensitivity study could complement the experiments presented, but this is not critical in my opinion for a conference publication. Quality of writing ------------------ Very well written prose. Some may say that perhaps fewer symbols could be used, but in the end the notation was consistent and systematic and I did not find the paper too hard to read. Related work ------------ There is some similarity to [1-5], which would be good to cite. While these papers are definitely relevant (they seem to employ the same two-time-scale logic in a similar TD/LMS context), the present paper is sufficiently different, so that besides citing these as implementing the same idea in a different setting, no other modification is necessary to the paper.
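To illustrate the cost point just made, here is a small numpy sketch of my own (illustrative names, not the authors' implementation) of a Sherman-Morrison style incremental inverse for a matrix-gain update: no explicit inversion is needed, but maintaining the d x d inverse still costs O(d^2) per step.

import numpy as np

# Sherman-Morrison keeps a running inverse of A under rank-one updates,
# replacing an O(d^3) inversion per step by O(d^2) work.

def sherman_morrison(A_inv, u, v):
    """Return (A + u v^T)^{-1} given A^{-1} = A_inv."""
    Au = A_inv @ u
    vA = v @ A_inv
    return A_inv - np.outer(Au, vA) / (1.0 + v @ Au)

d = 4
rng = np.random.default_rng(1)
A, A_inv = np.eye(d), np.eye(d)
for _ in range(100):
    u = rng.normal(size=d)          # symmetric PSD updates keep A invertible
    A += np.outer(u, u)
    A_inv = sherman_morrison(A_inv, u, u)

print(np.allclose(A_inv, np.linalg.inv(A)))  # True up to round-off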
Minor comments -------------- * Perhaps Theorem 2.1 does not need Q2? * Line 120: "the gain sequence gamma" should be "the gain sequences gamma and alpha" * Line 124: The uninitiated reader may find it hard to interpret the phrase "An ODE approximation holds". * Numerical results: I expect the authors to share their code (github is excellent for this) or otherwise make it clear what parameter settings are used. A few details seem to be missing to replicate the experiments (e.g., what stepsize was used with RPJ?) * Line 151: "the Watkins'": remove "the" * Line 152: Which analysis does "this" refer to? (Middle of the page.) * Figure 3: I am guessing W_n is the error. I am not sure whether this was introduced beforehand. Maybe just remind the reader if it was. * Fig 4: Why is Speedy-Q not converging? Will it? Overall, vanilla Q-learning is not even that bad here. * Line 174: "trails" should be "trials"? * Line 202: How is the condition number defined given that A is not even symmetric, so it may have complex eigenvalues? Magnitudes I assume? * Line 209: Hmm, I see W_n is defined here. Move this up? References [1] H. Yao and Z-Q. Lie. Preconditioned temporal difference learning. In Proceedings of the 25th International Conference on Machine Learning, 2008. [2] H. Yao, S. Bhatnagar, and Cs. Szepesvari. Temporal difference learning by direct preconditioning. In Multidisciplinary Symposium on Reinforcement Learning (MSRL), 2009. [3] H. Yao, S. Bhatnagar, and Cs. Szepesvari. LMS-2: Towards an algorithm that is as cheap as LMS and almost as efficient as RLS. In CDC, pp. 1181-1188, 2009. [4] Y. Pan, A. M. White, and M. White. Accelerated gradient temporal difference learning. In AAAI, 2017. [5] A. Givchi and M. Palhang. Quasi Newton temporal difference learning. In ACML, 2014.
nips_2017_3042
Fader Networks: Manipulating Images by Sliding Attributes This paper introduces a new encoder-decoder architecture that is trained to reconstruct images by disentangling the salient information of the image and the values of attributes directly in the latent space. As a result, after training, our model can generate different realistic versions of an input image by varying the attribute values. By using continuous attribute values, we can choose how much a specific attribute is perceivable in the generated image. This property could allow for applications where users can modify an image using sliding knobs, like faders on a mixing console, to change the facial expression of a portrait, or to update the color of some objects. Compared to the state-of-the-art which mostly relies on training adversarial networks in pixel space by altering attribute values at train time, our approach results in much simpler training schemes and nicely scales to multiple attributes. We present evidence that our model can significantly change the perceived value of the attributes while preserving the naturalness of images.
*** Summary *** This paper aims to augment an autoencoder with the ability to tweak certain (known) attributes. These attributes are known at training time and, for a dataset of faces, include aspects like [old vs young], [smiling vs not smiling], etc. The authors hope to be able to tweak these attributes along a continuous spectrum, even when the labels only occur as binary values. To achieve this they propose an (encoder, decoder) setup where the encoder maps the image x to a latent vector z and the decoder then produces an image taking z, together with the attributes y, as inputs. When such a network is trained in the ordinary fashion, the decoder learns to ignore y because z already encodes everything that the network needs to know. To compel the decoder network to use y, the authors propose introducing an adversarial learning framework in which a discriminator D is trained to infer the attributes from z. Thus the encoder must produce representations that are invariant to the attributes y. The writing is clear and any strong researcher should be able to reproduce their results from the presentation here. *** Conclusions *** This paper presents a nice idea. Not an inspiring or world-changing idea, but a worthy enough idea. Moreover, this technique offers a solution that a large number of people are actually interested in (for good or for malice). What makes this paper a clear accept is that the authors take this simple useful idea and execute it wonderfully. The presentation is crystal clear. The experiments are compelling (both quantitatively and qualitatively). The live study with Mechanical Turk workers answers some key questions any reasonable reviewer might ask. I expect this work to actually be used in the real world and happily champion it for acceptance at NIPS 2017. *** Small Writing Quibble *** "We found it extremely beneficial to add dropout" Adverbs of degree (like "very", "extremely", etc.) add nothing of value to the sentence. Moreover, it expresses an opinion in a sly fashion. Instead of saying dropout is "extremely" important, say ***why*** it is important. What happens when you don't use dropout? How do the results compare qualitatively and quantitatively?
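For concreteness, here is a minimal sketch of the training objective as summarized above, written with PyTorch (all module shapes, names, and the loss weighting lam are illustrative assumptions of mine, not the authors' code):

import torch
import torch.nn as nn

# Reconstruct x from (z, y) while a discriminator C tries to recover the
# attributes y from z, which pushes the encoder toward attribute-invariant z.

x_dim, z_dim, y_dim = 32, 8, 2
E = nn.Linear(x_dim, z_dim)                    # encoder
D = nn.Linear(z_dim + y_dim, x_dim)            # decoder conditioned on y
C = nn.Linear(z_dim, y_dim)                    # adversarial attribute predictor

opt_ae = torch.optim.Adam(list(E.parameters()) + list(D.parameters()), lr=1e-3)
opt_c = torch.optim.Adam(C.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

x = torch.randn(16, x_dim)
y = torch.randint(0, 2, (16, y_dim)).float()   # binary attribute labels
lam = 0.1

z = E(x)
loss_c = bce(C(z.detach()), y)                 # discriminator step: predict y from z
opt_c.zero_grad(); loss_c.backward(); opt_c.step()

recon = D(torch.cat([z, y], dim=1))
loss_ae = ((recon - x) ** 2).mean() + lam * bce(C(z), 1.0 - y)  # fool C
opt_ae.zero_grad(); loss_ae.backward(); opt_ae.step()

In a real training loop these two steps would alternate over minibatches; the flipped targets 1.0 - y are one simple way to express the "fool the discriminator" term.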
nips_2017_93
Probabilistic Models for Integration Error in the Assessment of Functional Cardiac Models This paper studies the numerical computation of integrals, representing estimates or predictions, over the output f (x) of a computational model with respect to a distribution p(dx) over uncertain inputs x to the model. For the functional cardiac models that motivate this work, neither f nor p possess a closed-form expression and evaluation of either requires ≈ 100 CPU hours, precluding standard numerical integration methods. Our proposal is to treat integration as an estimation problem, with a joint model for both the a priori unknown function f and the a priori unknown distribution p. The result is a posterior distribution over the integral that explicitly accounts for dual sources of numerical approximation error due to a severely limited computational budget. This construction is applied to account, in a statistically principled manner, for the impact of numerical errors that (at present) are confounding factors in functional cardiac model assessment.
Summary The paper presents a method for assessing the uncertainty in the evaluation of an expectation over the output of a complex simulation model, given uncertainty in the model parameters. Such simulation models take a long time to solve for a given set of parameters, so the task of averaging over the outputs of the simulation given uncertainty in the parameters is challenging. One cannot simply run the model so many times that the error in the estimate of the integral is controlled. The authors approach the problem as an inference task. Given samples from the parameter posterior, one must infer the posterior over the integral of interest. The parameter posterior is modelled as a DP mixture, and by matching the base measure of the DP mixture with the kernel used in the Gaussian-process-based Bayesian quadrature (BQ) procedure for solving the integral, the computation remains tractable. The method is benchmarked on synthetic examples and a real-world cardiac model against simply using the mean and standard error of simulations from the model at parameter samples. It is shown to be more conservative - the simple standard error approach is over-confident in the case of small sample size. Comments: This is an interesting topic and extends recent developments in probabilistic numerics to the problem of estimating averages over simulations from a model with uncertain parameters. I'm not aware of other principled approaches to solving this problem and I think it is of interest to people at NIPS. The paper is well written and easy to follow. A lot of technical detail is in the supplementary, so really this is a pretty substantial paper if one considers it all together. Putting technical details in the supplement improves the readability. There is a proof of consistency of the approach in a restricted setting which gives some rigorous foundations to the approach. The matching of the mixture model components and kernel into conjugate pairs (e.g. a Gaussian mixture with a squared exponential kernel) seems like a neat trick to make things tractable. It is presented as an advantage that the method is conservative, i.e. the broad posterior over the quantity of interest is more likely to contain the ground truth. However, I was not sure what the reason for the conservative behaviour is and how general that behaviour is - is it generally the case that the method will be conservative or could there be cases where the procedure also underestimates the uncertainties of interest? Is this simply an empirical observation based on a restricted set of examples? I guess this is difficult to address theoretically as it is not an asymptotic issue.
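As an aside on the conjugacy trick praised above, here is a small numerical sketch (1-D, isotropic; my own illustrative construction) of why matching an RBF kernel with Gaussian mixture components keeps BQ tractable: the kernel mean against a Gaussian component is available in closed form.

import numpy as np

# With k(x, x') = exp(-(x - x')^2 / (2 l2)) and a component N(mu, s2), the
# kernel mean z = E_{x ~ N(mu, s2)}[k(x, xi)] has a closed form (1-D case).

def kernel_mean_rbf_gauss(xi, mu, s2, l2):
    return np.sqrt(l2 / (l2 + s2)) * np.exp(-(xi - mu) ** 2 / (2 * (l2 + s2)))

# Monte Carlo check of the closed form.
rng = np.random.default_rng(0)
xi, mu, s2, l2 = 0.3, -0.5, 0.8, 1.2
x = rng.normal(mu, np.sqrt(s2), size=200_000)
mc = np.exp(-(x - xi) ** 2 / (2 * l2)).mean()
print(kernel_mean_rbf_gauss(xi, mu, s2, l2), mc)  # agree to about 2-3 decimals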
nips_2017_1144
The Numerics of GANs In this paper, we analyze the numerics of common algorithms for training Generative Adversarial Networks (GANs). Using the formalism of smooth two-player games we analyze the associated gradient vector field of GAN training objectives. Our findings suggest that the convergence of current algorithms suffers due to two factors: i) presence of eigenvalues of the Jacobian of the gradient vector field with zero real-part, and ii) eigenvalues with big imaginary part. Using these findings, we design a new algorithm that overcomes some of these limitations and has better convergence properties. Experimentally, we demonstrate its superiority on training common GAN architectures and show convergence on GAN architectures that are known to be notoriously hard to train.
This paper presents a novel analysis of the typical optimization algorithm used in GANs (simultaneous gradient ascent) and identifies problematic failures when the Jacobian has large imaginary components or zero real components. Motivated by these failures, they present a novel consensus optimization algorithm for training GANs. The consensus optimization is validated on a toy MoG dataset as well as CIFAR-10 and CelebA in terms of sample quality and inception score. I found this paper enjoyable to read and the results compelling. My primary concern is the lack of a hyperparameter search when comparing optimization algorithms and the lack of evidence that the problems identified with simultaneous gradient ascent are truly problems in practice. Major concerns: 1. The constant discriminator/generator loss is very weird. Any ideas on what is going on? Does this hold when you swap in different divergences, e.g. WGAN? 2. Which losses were you using for the generator/discriminator? From Fig 4, it looks like you're using the modified generator objective that deviates from the minimax game. Do your convergence results hold for non-zero-sum games where f and g are different functions, i.e. f != -g? 3. Experiments: You note that simultaneous and alternating gradient descent may require tiny step sizes for convergence. In your experiments did you try tuning learning rates? Additional optimization hyperparameters? Without sweeping over learning rates it seems unfair to say that one optimization technique is better than another. If consensus optimization is more robust across learning rates then that should be demonstrated empirically. 4. Empirical validation of the failures of simultaneous gradient ascent: are zero real components and large imaginary parts really the problem in practice? You never evaluate this. I expected to see empirical validation in the form of plots showing the distribution of eigenvalues of the Jacobian over the course of optimization for simultaneous gradient ascent. Minor concerns: 1. From the perspective of consensus optimization and \gamma being a Lagrange multiplier, I'd expect to see \gamma *adapt* over time. In particular, ADMM-style algorithms for consensus optimization have dynamics on \gamma as well. Have you explored these variants? 2. Many recent approaches to optimization in GANs have proposed penalties on gradients (typically the gradient of the discriminator w.r.t. its inputs, as in WGAN-GP). Are these related to the approach you propose, and if so, is another explanation for the success of consensus optimization just that you're encouraging a smooth discriminator? One way of testing this would be to remove the additional \nabla L(x) term for the generator update, but keep it for the discriminator update. 3. Figure 5 (table comparing architectures and optimizers) is a nice demonstration of the stability of consensus optimization relative to other approaches, but did you sweep over learning rate/optimization hyperparameters? If so, then I think this would make a strong figure to include in the main text. 4. I had a hard time following the convergence theory section and understanding how it relates to the results we see in practice.
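To illustrate the failure mode and the proposed fix, here is a toy sketch of my own (not from the paper) on the bilinear game f(x, y) = x*y, where the Jacobian of the gradient vector field has purely imaginary eigenvalues:

import numpy as np

# The gradient field v(x, y) = (-y, x) has Jacobian eigenvalues +/- i, so
# plain simultaneous Euler steps spiral outward; adding -gamma * grad L with
# L = 0.5 * ||v||^2 (= 0.5 * (x^2 + y^2) here) makes the iteration contract.

def v(w):
    x, y = w
    return np.array([-y, x])

h, gamma = 0.1, 1.0
w_sim = np.array([1.0, 1.0])
w_con = np.array([1.0, 1.0])
for _ in range(200):
    w_sim = w_sim + h * v(w_sim)                    # simultaneous gradient steps
    w_con = w_con + h * (v(w_con) - gamma * w_con)  # consensus: grad L = (x, y)

print(np.linalg.norm(w_sim), np.linalg.norm(w_con))  # grows vs. shrinks to ~0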
nips_2017_1322
Fast Rates for Bandit Optimization with Upper-Confidence Frank-Wolfe We consider the problem of bandit optimization, inspired by stochastic optimization and online learning problems with bandit feedback. In this problem, the objective is to minimize a global loss function of all the actions, not necessarily a cumulative loss. This framework allows us to study a very general class of problems, with applications in statistics, machine learning, and other fields. To solve this problem, we analyze the Upper-Confidence Frank-Wolfe algorithm, inspired by techniques for bandits and convex optimization. We give theoretical guarantees for the performance of this algorithm over various classes of functions, and discuss the optimality of these results.
This is a very interesting paper at the intersection of bandit problems and stochastic optimization. The authors consider the setup where at each time step a decision maker takes an action and receives as feedback a gradient estimate at the chosen point. The gradient estimate is noisy, but the assumption is that we have control over how noisy it is. The challenge for the decision maker is to make a sequence of moves such that the average of all iterates has low error compared to the optimum. So the goal is similar to the goal in stochastic optimization, but unlike stochastic optimization (and like bandit problems) one gets to see only limited information about the loss function. For the above problem setting the authors introduce an algorithm that combines elements of UCB with Frank-Wolfe updates. The FW-style updates mean that we do not play the arm corresponding to the highest UCB, but instead play an arm that is a convex combination of the UCB arm and a probability distribution that comes from historical updates. The authors establish stochastic-optimization-style bounds for a variety of function classes. The paper is fairly well written and makes novel contributions to both the stochastic optimization and bandit literatures. I have a few comments 1. It is not clear to me why the authors consider only the Frank-Wolfe algorithm. Is there a deep motivation here, or can the arguments made in this paper be translated to work with other optimization algorithms such as stochastic gradient descent? 2. In lines 214-219, it is not clear to me why one needs to invoke the KKT conditions. It was not clear to me what the relevance and importance of this paragraph is. Can the authors shed some light here? 3. The definition of strong convexity in line 230 is incorrect. 4. It would perhaps be useful to move lines 246-249 to the start of the paper. In this way it will be clearer why and how the problem considered here differs from standard stochastic optimization. 5. In line 257 what is \sigma_max?
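To make the algorithmic structure concrete, here is a hedged toy rendering of my own (the feedback model, the bonus form, and all names are illustrative; this is not the authors' pseudocode): the played arm is chosen optimistically with respect to a running gradient estimate, and the empirical play frequency evolves exactly like a Frank-Wolfe average.

import numpy as np

# Minimize a global (non-cumulative) loss of the play frequencies over the
# simplex, with a confidence bonus driving exploration of the gradient.
rng = np.random.default_rng(0)
K, T = 5, 5000
p_star = np.array([0.4, 0.3, 0.15, 0.1, 0.05])     # target frequencies
loss = lambda p: 0.5 * np.sum((p - p_star) ** 2)

counts, grad_est = np.zeros(K), np.zeros(K)
p = np.ones(K) / K
for t in range(1, T + 1):
    bonus = np.sqrt(2 * np.log(t + 1) / np.maximum(counts, 1))
    a = int(np.argmin(grad_est - bonus))           # optimistic vertex choice
    g = (p[a] - p_star[a]) + 0.1 * rng.normal()    # noisy gradient coordinate
    counts[a] += 1
    grad_est[a] += (g - grad_est[a]) / counts[a]   # running-mean estimate
    e = np.zeros(K); e[a] = 1.0
    p = (1 - 1 / (t + 1)) * p + (1 / (t + 1)) * e  # Frank-Wolfe averaging step

print(loss(p))  # should end well below the uniform-start value of about 0.04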
nips_2017_990
Cost efficient gradient boosting Many applications require learning classifiers or regressors that are both accurate and cheap to evaluate. Prediction cost can be drastically reduced if the learned predictor is constructed such that on the majority of the inputs, it uses cheap features and fast evaluations. The main challenge is to do so with little loss in accuracy. In this work we propose a budget-aware strategy based on deep boosted regression trees. In contrast to previous approaches to learning with cost penalties, our method can grow very deep trees that on average are nonetheless cheap to compute. We evaluate our method on a number of datasets and find that it outperforms the current state of the art by a large margin. Our algorithm is easy to implement and its learning time is comparable to that of the original gradient boosting. Source code is made available at http://github.com/svenpeter42/LightGBM-CEGB.
1. Summary of paper This paper introduces a technique to learn a gradient boosted regression tree ensemble that is sensitive to feature costs and the cost of evaluating splits in the tree. Thus, the paper is similar to the work of Xu et al., 2012. The main differences are the fact that the feature and evaluation costs are input-specific, the evaluation cost depends on the number of tree splits, their optimization approach is different (based on the Taylor expansion around T_{k-1}, as described in the XGBoost paper), and they use best-first growth to grow the trees to a maximum number of splits (instead of a max depth). The authors point out that their setup works either in the case where feature cost dominates or evaluation cost dominates, and they show experimental results for these settings. 2. High level subjective The paper is clearly written. Apart from introducing a significant amount of notation, I find it clear to follow. The contribution of the paper seems a bit small given the work of XGBoost and GreedyMiser. The paper seems to have sufficient experiments comparing the method with prior work and showing how model parameters affect performance. I have one confusion which may warrant an additional experiment, which I describe below. I think this model could be interesting for practitioners interested in classification in time-constrained settings. 3. High level technical One thing that is unclear is in Figures 2a and 2b: how was the cost measured for each method to obtain the Precision versus Cost curves? It seems to me that for CEGB cost is measured per-input and per-split, but for the other methods cost is measured per-tree. If this is the case this seems a bit unfair. It seems to me that the cost of each method should be evaluated in exactly the same way in order to compare each method fairly, even if the original papers evaluated cost in a different way. For this reason it's unclear to me how much better CEGB is than the other methods. If the costs are measured differently I think it would be really helpful to modify Figure 2 so that all of the costs are measured the same way. Personally I think the paper would improve if Figure 1 was removed. I think it's pretty clear what you mean by breadth-first vs. depth-first vs. best-first. I would replace this figure with one that shows the trees generated by CEGB, GreedyMiser and BudgetPrune for the Yahoo! Learning to Rank dataset, and that shows in detail the features and the costs at each split - something that gives the reader a bit more insight as to why CEGB is outperforming the other methods. I think it would also be nice to more clearly emphasize what improvements over GreedyMiser you make to arrive at CEGB. For instance, it would be nice to write out GreedyMiser in the same notation and show the innovations (best-first, not limited-depth, different optimization procedure) that lead to CEGB. Then you could have a figure showing how each innovation improves the Precision vs. Cost curve. In Section 5.2 it's not completely clear to me why GreedyMiser or BudgetPrune cannot be applied here. Why is the cost of feature responses considered an evaluation cost and not a feature cost? 4. Low level technical (a small sketch of how I read the penalized gain follows at the end of this review) - In equation 7, why is \lambda only in front of the first cost term and not the second? - In equation 12, should the denominators of the first 3 terms have a term with \lambda added, similar to the XGBoost paper? - What do the 'levels' refer to in Figure 4b?
- Line 205: I would recommend redefining \beta_m here as readers may forget its definition. Similarly with \alpha on Line 213. 5. Summary of review While this paper aims to introduce a new method for cost-efficient learning that (a) outperforms the state-of-the-art and (b) works in settings where previous methods are unsuitable, it is unclear to me whether these results hold because (a) the cost is measured differently and (b) a cost is defined as an evaluation cost instead of a feature cost. Until these confusions are resolved I think the paper is slightly below the acceptance threshold of NIPS.
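The promised sketch of how I read the cost-penalized split gain behind equations 7 and 12 (pure Python; the exact cost decomposition and all names are my illustrative assumptions, not the paper's code):

# The usual XGBoost-style second-order gain from gradient/hessian sums,
# minus lambda times the extra prediction cost the split incurs (feature
# cost for the inputs that have not paid for this feature yet, plus a
# per-input evaluation cost for inputs routed through the new node).

def leaf_score(G, H, reg=1.0):
    return G * G / (H + reg)

def penalized_gain(G_l, H_l, G_r, H_r, n_new_feature, n_node,
                   feature_cost, split_cost, lam, reg=1.0):
    gain = 0.5 * (leaf_score(G_l, H_l, reg) + leaf_score(G_r, H_r, reg)
                  - leaf_score(G_l + G_r, H_l + H_r, reg))
    extra_cost = feature_cost * n_new_feature + split_cost * n_node
    return gain - lam * extra_cost

# A split with decent loss reduction can still be rejected when the feature
# is expensive for many of the inputs reaching the node:
print(penalized_gain(G_l=-8.0, H_l=10.0, G_r=6.0, H_r=8.0,
                     n_new_feature=500, n_node=1000,
                     feature_cost=0.01, split_cost=0.001, lam=1.0))  # negative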
nips_2017_1677
State Aware Imitation Learning Imitation learning is the study of learning how to act given a set of demonstrations provided by a human expert. It is intuitively apparent that learning to take optimal actions is a simpler undertaking in situations that are similar to the ones shown by the teacher. However, imitation learning approaches do not tend to use this insight directly. In this paper, we introduce State Aware Imitation Learning (SAIL), an imitation learning algorithm that allows an agent to learn how to remain in states where it can confidently take the correct action and how to recover if it is led astray. Key to this algorithm is a gradient learned using a temporal difference update rule which leads the agent to prefer states similar to the demonstrated states. We show that estimating a linear approximation of this gradient yields similar theoretical guarantees to online temporal difference learning approaches and empirically show that SAIL can effectively be used for imitation learning in continuous domains with non-linear function approximators used for both the policy representation and the gradient estimate.
I've read the feedback, thanks for the clarifications. %%% This paper deals with imitation learning. It is proposed to maximize the posterior (policy parameters given the expert state-action pairs). The difference with a classic supervised learning approach is that here the fact that the policy acts on a dynamical system is taken into account, which adds the likelihood of the states given the policy (the states should be sampled according to the stationary distribution induced by the policy). This additional term is addressed through a temporal difference approach inspired by the work of Morimusa et al. [12]; other terms are handled in the classic way (as in supervised learning). The proposed approach is experimented on two domains, a small one and a much larger one, and compared mainly to the supervised baseline (ignoring the dynamics). This paper proposes an interesting, sound and well-motivated contribution to imitation learning. I found the paper rather clear, but I think it could be clarified for readers less familiar with the domain. For example: * in Eq. 3, the reverse probability appears. It would help to at least mention it in the text (it is not that classical) * the constraint (l.168, expected gradient null) comes simply from the fact that the probabilities should sum to one. It would help a lot to say so (instead of just giving the constraint); a small numerical check of this identity is sketched below * the function f_w is multi-output (as many components as policy parameters); it could help to highlight this in the text * the off-policy approach should be made explicit (notably, as it is a reversed TD approach, built on the reversed probabilities, do the importance weights apply to the same terms?) Section 2 is rather complete, but some papers are missing. There is a line of work that tries to do imitation as a supervised approach while taking into account the dynamics (which is also what is done in this paper, the dynamics inducing a stationary distribution for each policy), for example: * "Learning from Demonstration Using MDP Induced Metrics" by Melo and Lopes (ECML 2010) * "Boosted and Reward-regularized Classification for Apprenticeship Learning" by Piot et al. (AAMAS 2014) The second one could easily be considered as a baseline (it is a supervised approach with an additional regularization term taking into account the dynamics, so the supervised approach used in the paper could be easily adapted). Regarding the experiments: * it would have been interesting to vary the number of episodes collected from the expert * for the off-policy experiment, it would be good to also compare to (on-policy) SAIL, as a function of the number of gathered trajectories * for the walker, giving the number of input/output neurons would also help (28 in/4 out for the policy, 28 in/3200 out for the gradient estimator? so a 28:80:80:3200 network?). This makes for big output networks; it could be discussed. * still for the walker, what about off-policy learning? Regarding appendix A, the gradient of the log-policy plays the role of the reward. Usually, rewards are assumed to be uniformly bounded, which may not be the case here (e.g., with a linear softmax policy based on a tabular representation, the probabilities could be arbitrarily close to zero, even if always strictly positive). Isn't this a problem for the proof?
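The promised check of the l.168 constraint (my toy construction; a softmax state distribution stands in for the stationary distribution):

import numpy as np

# For any parametric distribution d_theta over states,
# E_{s ~ d_theta}[grad_theta log d_theta(s)] = 0, because probabilities sum
# to one. Shown here for a softmax distribution; all sizes are illustrative.

rng = np.random.default_rng(0)
n_states = 6
theta = rng.normal(size=n_states)

def d(theta):
    e = np.exp(theta - theta.max())
    return e / e.sum()

# grad_theta log d_theta(s) for a softmax is e_s - d_theta (standard identity);
# row s of the matrix below is the gradient evaluated at state s.
grad_log_d = np.eye(n_states) - d(theta)
expected = d(theta) @ grad_log_d               # expectation under d_theta
print(np.allclose(expected, 0))                # True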
nips_2017_1880
Sparse Embedded k-Means Clustering The k-means clustering algorithm is a ubiquitous tool in data mining and machine learning that shows promising performance. However, its high computational cost has hindered its applications in broad domains. Researchers have successfully addressed these obstacles with dimensionality reduction methods. Recently, [1] develop a state-of-the-art random projection (RP) method for faster k-means clustering. Their method delivers many improvements over other dimensionality reduction methods. For example, compared to the advanced singular value decomposition based feature extraction approach, [1] reduce the running time by a factor of min{n, d}ε^2 log(d)/k for a data matrix X ∈ R^{n×d} with n data points and d features, while losing only a factor of one in approximation accuracy. Unfortunately, they still require O(ndk/(ε^2 log(d))) for matrix multiplication and this cost will be prohibitive for large values of n and d. To break this bottleneck, we carefully build a sparse embedded k-means clustering algorithm which requires O(nnz(X)) (nnz(X) denotes the number of non-zeros in X) for fast matrix multiplication. Moreover, our proposed algorithm improves on [1]'s results for approximation accuracy by a factor of one. Our empirical studies corroborate our theoretical findings, and demonstrate that our approach is able to significantly accelerate k-means clustering, while achieving satisfactory clustering performance.
The authors proposed a sparse embedded k-means clustering algorithm to improve the running time of the matrix multiplication in previously proposed random projection methods when the data matrix is sparse. In particular, they demonstrated that their algorithm achieves a matrix multiplication time proportional to the number of non-zero entries of the data matrix. I think the sparse embedded k-means clustering algorithm is rather interesting; however, it is not a very surprising improvement given the current results from the paper of C. Boutsidis et al. (2015). More specifically, both papers try to approximate a low-dimensional solution to the original k-means problem. To do that, C. Boutsidis et al. (2015) proposed to multiply the data matrix X with a random matrix having entries $1/\sqrt{d'}$ or $-1/\sqrt{d'}$, where $d'$ is the dimension of the approximate solution. However, their proposed solution does not work well in the sparse setting, as the matrix multiplication cost of their method greatly depends on the number of data points and the number of clusters. To account for that drawback, the authors of the current paper propose to multiply the data matrix with a sparse random matrix inspired by diagonal Rademacher matrices. At a high level, it unsurprisingly improves the matrix multiplication cost of C. Boutsidis et al.'s method in the sparse setting. For a rather dense data matrix, I think both methods may differ only slightly in terms of accuracy and matrix multiplication time. In summary, even though the paper is rather interesting, I think the contribution of the paper is not very surprising. Some minor comments: (1) Will the condition in Definition 2 hold for any matrix $D$? (2) It may be better for readers if the authors replace the notation $\widehat{D}$ in Lemma 1 with less confusing notation. At the moment, the notations in Lemma 1 look rather similar. (3) I do not see the clustering accuracy with the random projection method from C. Boutsidis et al.'s work on the RCV1 data in the experiment section. I wonder what happens with that accuracy result?
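To make the contrast concrete, here is a small scipy sketch of my own (illustrative sizes; a count-sketch-style construction, which I believe matches the spirit of the paper's sparse embedding): each feature hashes to one signed output coordinate, so the projection touches each stored nonzero of X exactly once, while the dense +/-1/sqrt(d') projection touches each nonzero d' times.

import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)
n, d, d_new = 1000, 500, 50

X = sparse.random(n, d, density=0.01, format="csr", random_state=0)

# Sparse embedding: one signed nonzero per row of S.
h = rng.integers(0, d_new, size=d)              # hash bucket per feature
sign = rng.choice([-1.0, 1.0], size=d)          # random sign per feature
S = sparse.csr_matrix((sign, (np.arange(d), h)), shape=(d, d_new))
X_sparse_proj = X @ S                           # O(nnz(X)) work

# Dense random projection with entries +/- 1/sqrt(d_new).
R = rng.choice([-1.0, 1.0], size=(d, d_new)) / np.sqrt(d_new)
X_dense_proj = X @ R                            # O(nnz(X) * d_new) for sparse X

print(X_sparse_proj.shape, X_dense_proj.shape)  # both (1000, 50)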
nips_2017_2383
Streaming Robust Submodular Maximization: A Partitioned Thresholding Approach We study the classical problem of maximizing a monotone submodular function subject to a cardinality constraint k, with two additional twists: (i) elements arrive in a streaming fashion, and (ii) m items from the algorithm's memory are removed after the stream is finished. We develop a robust submodular algorithm STAR-T. It is based on a novel partitioning structure and an exponentially decreasing thresholding rule. STAR-T makes one pass over the data and retains a short but robust summary. We show that after the removal of any m elements from the obtained summary, a simple greedy algorithm STAR-T-GREEDY that runs on the remaining elements achieves a constant-factor approximation guarantee. In two different data summarization tasks, we demonstrate that it matches or outperforms existing greedy and streaming methods, even if they are allowed the benefit of knowing the removed subset in advance.
This paper studies the problem of submodular maximization in a streaming and robust setting. In this setting, the goal is to find k elements of high value among a small number of elements retained during the stream, where m items might be removed after the stream. Streaming submodular maximization has been extensively studied due to the large-scale applications where only a small number of passes over the data is reasonable. Robustness has also attracted some attention recently and is motivated by scenarios where some elements might get removed and the goal is to obtain solutions that are robust to such removals. This paper is the first to consider jointly the problem of streaming and robust submodular maximization. The authors develop the STAR-T algorithm, which maintains O((m log k + k) log^2 k) elements after the stream and achieves a constant (0.149) approximation guarantee while being robust to the removal of m elements. This algorithm combines partitioning and thresholding techniques for submodular optimization, organized around buckets. Similar ideas using buckets were used in previous work for robust submodular optimization, but here the buckets are filled in a different fashion. It is then shown experimentally that this algorithm performs well in influence maximization and movie recommendation applications, even against streaming algorithms which know beforehand which elements are removed. With both the streaming and robustness aspects, the problem setting is challenging and obtaining a constant factor approximation is a strong result. However, there are several aspects of the paper that could be improved. It would have been nice if there were some non-trivial lower bounds to complement the upper bound. The writing of the analysis could also be improved; the authors mention an "innovative analysis" but do not expand on what is innovative about it, and it is difficult to extract the main ideas and steps of the analysis from the proof sketch. Finally, the influence maximization application could also be more convincing. For the experiments, the authors use a very simplified objective for influence maximization which only consists of the size of the neighborhood of the seed nodes. It is not clear how traditional influence maximization objectives, where the value of the function depends on the entire network for any set of seed nodes, could fit in a streaming setting.
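For readers less familiar with this family of algorithms, here is a minimal sketch of the single-threshold primitive that the bucket-filling generalizes (my simplification with a toy coverage function, not STAR-T itself): an arriving element is kept iff its marginal gain exceeds tau and the bucket is not full.

def coverage(S, sets):
    """Monotone submodular toy objective: number of items covered."""
    covered = set()
    for i in S:
        covered |= sets[i]
    return len(covered)

def threshold_stream(stream, sets, k, tau):
    S = []
    for e in stream:
        if len(S) < k and coverage(S + [e], sets) - coverage(S, sets) >= tau:
            S.append(e)
    return S

sets = {0: {1, 2}, 1: {2, 3}, 2: {4, 5, 6}, 3: {1}, 4: {7, 8}}
print(threshold_stream(stream=[0, 1, 2, 3, 4], sets=sets, k=3, tau=2))  # [0, 2, 4]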
nips_2017_409
Convergence Analysis of Two-layer Neural Networks with ReLU Activation In recent years, stochastic gradient descent (SGD) based techniques have become the standard tools for training neural networks. However, formal theoretical understanding of why SGD can train neural networks in practice is largely missing. In this paper, we make progress on understanding this mystery by providing a convergence analysis for SGD on a rich subset of two-layer feedforward networks with ReLU activations. This subset is characterized by a special structure called "identity mapping". We prove that, if the input follows a Gaussian distribution, with standard O(1/√d) initialization of the weights, SGD converges to the global minimum in a polynomial number of steps. Unlike normal vanilla networks, the "identity mapping" makes our network asymmetric and thus the global minimum is unique. To complement our theory, we are also able to show experimentally that multi-layer networks with this mapping have better performance compared with normal vanilla networks. Our convergence theorem differs from traditional non-convex optimization techniques. We show that SGD converges to the optimum in "two phases": In phase I, the gradient points in the wrong direction; however, a potential function g gradually decreases. Then in phase II, SGD enters a nice one-point convex region and converges. We also show that the identity mapping is necessary for convergence, as it moves the initial point to a better place for optimization. Experiments verify our claims.
This paper proves the convergence of a stochastic gradient descent algorithm from a suitable starting point to the global minimizer of a nonconvex energy representing the loss of a two-layer feedforward network with rectified linear unit activation. In particular, the algorithm is shown to converge in two phases, where phase 1 drives the iterates into a one-point convex region which subsequently leads to the actual convergence in phase 2. The findings, the analysis, and particularly the methodology for proving the convergence (in 2 phases) are very interesting and definitely deserve to be published. The entire proof is extremely long (including a flowchart of 15 Lemmas/Theorems that finally allow the main theorem to be shown in 25 pages of proofs), and I have to admit that I did not check this part. I have some questions on the paper and suggestions to further improve the manuscript: - Figure 1 points out the different structure of the considered networks, and even the abstract already refers to the special structure of "identity mappings". However, optimizing for W (vanilla network) is equivalent to optimizing for (W+I), such that the difference seems to lie in different starting points only (a small numerical illustration follows at the end of this review). If this is correct, I strongly suggest emphasizing the starting point rather than the identity mapping. In this case, please shorten the connection to ResNet. - There is a fundamental difference between the architecture shown in figure 1 (which is analyzed in the proofs), and the experiments in sections 5.1, 5.4, and 5.5. In particular, the ResNet trick of an identity shortcut behaves fundamentally differently if the identity is used after a nonlinearity (e.g. BatchNorm in Figure 6 and ReLU in the ResNet paper), as opposed to an equivalent reformulation of the original network in Figure 1. Moreover, sections 5.1, 5.4, and 5.5 consider deep networks that have not been analyzed theoretically. Since your theoretical results deal with the 2-layer architecture shown in Figure 1, I suggest reducing the numerical experiments that go well beyond the theoretically studied case, and instead explaining the latter in more detail. It certainly contains enough material. In particular, it would be interesting to understand how close real-world data is to satisfying the assumptions of Theorem 3.1. - For a paper of the technical precision presented here, I think one should not talk about taking the derivative of a ReLU function without a comment. The ReLU is not differentiable whenever any component is zero - so what exactly is \nabla L(W)? In a convex setting one would consider a subgradient, but what is the right object here? A weak derivative? - Related to the previous question, subgradient descent on a convex function already requires the step size to go to zero in order to state convergence results. Can you comment on why you can get away with constant step sizes? - The plots in Figures 7 and 8 are difficult to read and would benefit from more detailed explanations, and labeled axes. - I recommend carefully checking the manuscript with respect to third person singular verbs and plural nouns. Also, some letters are accidentally capitalized after a comma.
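A small numerical rendering of the reparameterization point in my first comment (the sum readout and the sizes are illustrative simplifications of mine):

import numpy as np

# The vanilla network ReLU(V x) evaluated at V = W + I computes exactly the
# "identity mapping" network ReLU((I + W) x) evaluated at W, so the two
# parameterizations describe the same function class; with the standard
# O(1/sqrt(d)) initialization, they simply start at different points.
rng = np.random.default_rng(0)
d = 5
W0 = rng.normal(size=(d, d)) / np.sqrt(d)      # standard small random init
x = rng.normal(size=d)
relu = lambda z: np.maximum(z, 0.0)

f_vanilla_at_shift = relu((W0 + np.eye(d)) @ x).sum()   # vanilla net at V = W0 + I
f_identity_at_W0 = relu((np.eye(d) + W0) @ x).sum()     # identity-mapping net at W0
print(np.allclose(f_vanilla_at_shift, f_identity_at_W0))  # True by construction

# The practical difference: at init, vanilla is near the zero map, while the
# identity-mapping network is near ReLU(x).
print(relu(W0 @ x).sum(), f_identity_at_W0)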
nips_2017_1482
Toward Goal-Driven Neural Network Models for the Rodent Whisker-Trigeminal System In large part, rodents "see" the world through their whiskers, a powerful tactile sense enabled by a series of brain areas that form the whisker-trigeminal system. Raw sensory data arrives in the form of mechanical input to the exquisitely sensitive, actively-controllable whisker array, and is processed through a sequence of neural circuits, eventually arriving in cortical regions that communicate with decisionmaking and memory areas. Although a long history of experimental studies has characterized many aspects of these processing stages, the computational operations of the whisker-trigeminal system remain largely unknown. In the present work, we take a goal-driven deep neural network (DNN) approach to modeling these computations. First, we construct a biophysically-realistic model of the rat whisker array. We then generate a large dataset of whisker sweeps across a wide variety of 3D objects in highly-varying poses, angles, and speeds. Next, we train DNNs from several distinct architectural families to solve a shape recognition task in this dataset. Each architectural family represents a structurally-distinct hypothesis for processing in the whisker-trigeminal system, corresponding to different ways in which spatial and temporal information can be integrated. We find that most networks perform poorly on the challenging shape recognition task, but that specific architectures from several families can achieve reasonable performance levels. Finally, we show that Representational Dissimilarity Matrices (RDMs), a tool for comparing population codes between neural systems, can separate these higherperforming networks with data of a type that could plausibly be collected in a neurophysiological or imaging experiment. Our results are a proof-of-concept that DNN models of the whisker-trigeminal system are potentially within reach.
* Summary The authors of the paper "Toward Goal-Driven Neural Network Models for the Rodent Whisker-Trigeminal System" train networks for shape recognition on simulated whisker data. They compare several architectures of DNNs and RNNs on that task and report on elements that are crucial to get good performance. Finally they use a representational dissimilarity matrix analysis to distinguish different high-performing network architectures, reasoning that this kind of analysis could also be used on neural data. * Comments What I like most about the paper is the direction that it is taking. Visual datasets and deep neural networks have shown that there are interesting links between artificial networks and neuroscience that are worthwhile to explore. Carrying these approaches to other modalities is a laudable effort, in particular as interesting benchmarks have often advanced the field (e.g. ImageNet). The paper is generally well written, and the main concepts are intuitively clear. More details are provided in the supplementary material, and the authors promise to share the code and the dataset with the community. I cannot judge the physics behind simulating whiskers, but the authors seem to have taken great care to come up with a realistic simulation, and they verify that the data actually contains information about the stimulus. I think a few more nuisance variables, like slight airflow, as well as active whisking bouts, would definitely make the dataset even more interesting, but I can see why it is a good idea to start with the "static" case first. With regard to benchmarking, the authors made an effort to evaluate a variety of different networks and find interesting insights into which model components seem to matter. I would have wished for more details on how the networks are set up and trained. For example, how many hyperparameters were tried, what was the training schedule, was any regularization used, etc.? The code will certainly contain these details, but a summary would have been good. The part about the representational dissimilarity matrix analysis is a bit short, and it is not clear to me what can be learned from it. I guess the authors want to demonstrate that there are detectable differences in this analysis that could be used to compare neural data to those networks. The real test for this analysis is neural data, of course, which the paper does not cover. In general, I see the main contribution of the paper to be the dataset and the benchmarking analysis on it. Both together are a useful contribution to advance the field of neuroscience and machine learning.
nips_2017_1988
Certified Defenses for Data Poisoning Attacks Machine learning systems trained on user-provided data are susceptible to data poisoning attacks, whereby malicious users inject false training data with the aim of corrupting the learned model. While recent work has proposed a number of attacks and defenses, little is understood about the worst-case loss of a defense in the face of a determined attacker. We address this by constructing approximate upper bounds on the loss across a broad family of attacks, for defenders that first perform outlier removal followed by empirical risk minimization. Our approximation relies on two assumptions: (1) that the dataset is large enough for statistical concentration between train and test error to hold, and (2) that outliers within the clean (nonpoisoned) data do not have a strong effect on the model. Our bound comes paired with a candidate attack that often nearly matches the upper bound, giving us a powerful tool for quickly assessing defenses on a given dataset. Empirically, we find that even under a simple defense, the MNIST-1-7 and Dogfish datasets are resilient to attack, while in contrast the IMDB sentiment dataset can be driven from 12% to 23% test error by adding only 3% poisoned data.
The paper studies an interesting topic and is overall nice to read. However, I have doubts about the correctness of the main theorem (Proposition 1), which makes two statements: one that bounds the max-min loss M from below and above, and a second that further bounds the upper bound. While the second claim is proven in the appendix, the first is not. The upper bound follows from the convexity of U and Equation 5. (It might be helpful for the reader to explicitly state this finding and argue why D_p reduces to a set of \epsilon n identical points.) However, I can't see why the lower bound holds. Computing M requires the solution of a saddle point problem. If one knew one part of this saddle point solution, one could easily bound M from above and/or below. However, Algorithm 1 only generates interim solutions that eventually converge to the saddle point. Concretely, in (6), D_p is not (necessarily) part of the saddle point solution; and \hat{\theta} is neither the minimizer of L w.r.t. D_c \cup D_p nor part of the saddle point solution. So why is this necessarily a lower bound? It's hard to judge the quality of the candidate set D_p. One could argue that it would be better to actually run the algorithm till convergence and take the final data pair (x, y) \epsilon n times. Alternatively, one could take the last \epsilon n data points before the algorithm converges. Why the first \epsilon n update steps give a better candidate set is unclear to me. Whether the resulting model after \epsilon n update steps is anywhere close to an optimum is unclear to me, too. Overall, there is little theoretical justification for the generation of the candidate attack in Alg. 1; it is for sure not the solution of (4). Hence, it's hard to conclude something like "even under a simple defense, the XYZ datasets are resilient to attack". And since the attack is model-based - it is constructed with a certain model (in this case, a linear model), loss function, and regularization scheme in mind - it is also hard to conclude "the ABC dataset can be driven from 12% to 23% test error"; this might be true for the concretely chosen model, but a different model might be more robust to this kind of poisoning attack. To me the value of the approach is that it allows one to study the sensitivity of a concrete classifier (e.g., SVM) w.r.t. a rather broad set of attacks. In case of such sensitivity (for a concrete dataset), one could think about choosing a different defense model and/or classifier. However, in the absence of such sensitivity one should NOT conclude that the model is robust. In Section 3.2, the defenses are defined in terms of class means rather than centroids (as in (2)). Eq. (8) requires F(\pi); F, though, is defined over a set of data points. I don't fully understand this maximization term and I would like the authors to provide more details on how to generalize Alg. 1 to this setting. In the experimentation section, some details are missing to reproduce the results; for example, the algorithm has parameters \rho and \eta, and the defense strategies have thresholds. It is neither clear to me how those parameters are chosen, nor which parameters were used in the experiments. After authors' response:
However, it became clear to me that this is not required. We can replace max_D min_\theta L(D, \theta) by max_D L(D, \hat {\theta}_D) where \hat{theta}_D is the minimizer w.r.t. D. Of course, this quantity is always smaller than M := L(\hat{D}, \hat{\theta}_\hat{D}) for any maximizer \hat{D} and saddle point (\hat{D}, \hat{\theta}_\hat{D}). I got confused from the two meanings of D_p, which is used as a free variable in the maximization in (4) and as the result of Alg. 1.
nips_2017_2961
Higher-Order Total Variation Classes on Grids: Minimax Theory and Trend Filtering Methods We consider the problem of estimating the values of a function over n nodes of a d-dimensional grid graph (having equal side lengths n^{1/d}) from noisy observations. The function is assumed to be smooth, but is allowed to exhibit different amounts of smoothness at different regions in the grid. Such heterogeneity eludes classical measures of smoothness from nonparametric statistics, such as Hölder smoothness. Meanwhile, total variation (TV) smoothness classes allow for heterogeneity, but are restrictive in another sense: only constant functions count as perfectly smooth (achieve zero TV). To move past this, we define two new higher-order TV classes, based on two ways of compiling the discrete derivatives of a parameter across the nodes. We relate these two new classes to Hölder classes, and derive lower bounds on their minimax errors. We also analyze two naturally associated trend filtering methods; when d = 2, each is seen to be rate optimal over the appropriate class.
This paper considers a graph-structured signal denoising problem. The particular structure enforced involves a total variation type penalty involving higher order discrete derivatives. Optimal rates for the penalized least-squares estimator are established, as is a minimax lower bound. For the grid filtering case the upper bounds are established under an assumed conjecture. The paper is well written. The main proof techniques of the paper are borrowed from previous work but there are some additional technicalities required to establish the current results. I went over the proofs (not 100% though) and it seems there are no significant issues with them. Overall I find the results of the paper to be interesting contributions to the trend filtering literature and I recommend accepting the paper. Minor: it is not clear why lines 86-87 are there in the supplementary material.
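For orientation, here is the first-order version of the estimator under discussion (a standard form, reconstructed from the abstract rather than quoted from the paper): given noisy observations y over the nodes of a grid graph G = (V, E), graph trend filtering solves

\hat{\theta} = argmin_{\theta \in R^n} (1/2) ||y - \theta||_2^2 + \lambda \sum_{(i,j) \in E} |\theta_i - \theta_j|,

and the higher-order classes studied here replace the first differences \theta_i - \theta_j by higher-order discrete derivatives compiled across the grid.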
nips_2017_1459
How regularization affects the critical points in linear networks This paper is concerned with the problem of representing and learning a linear transformation using a linear neural network. In recent years, there has been a growing interest in the study of such networks, in part due to the successes of deep learning. The main question of this body of research (and also of our paper) is related to the existence and optimality properties of the critical points of the mean-squared loss function. An additional primary concern of our paper pertains to the robustness of these critical points in the face of (a small amount of) regularization. An optimal control model is introduced for this purpose, and a learning algorithm (backprop with weight decay) is derived for it using Hamilton's formulation of optimal control. The formulation is used to provide a complete characterization of the critical points in terms of the solutions of a nonlinear matrix-valued equation, referred to as the characteristic equation. Analytical and numerical tools from bifurcation theory are used to compute the critical points via the solutions of the characteristic equation.
The authors of this article consider an infinite-dimensional version of a deep linear network. They propose to learn the weights using an objective function that is the sum of a square loss and a Tikhonov regularizer, controlled by a regularization parameter lambda. The authors derive analytic characterizations for the critical points of the objective, and use them to numerically show that, even for arbitrarily small lambda, local minima and saddle points can appear that do not exist when there is no regularization. I am not an expert on this subject, but it seems to me that looking at deep linear networks in an infinite-dimensional setting, instead of the usual finite-dimensional one, is a good idea. It seems to make computations simpler or at least more elegant, and could then allow for more in-depth analysis. Using methods from optimal control to study this problem is original, and apparently efficient. I also found the article well-written: it was pleasant to read, despite the fact that I am very unfamiliar with optimal control. However, I am not sure that the picture that the authors give of the critical points in the limit where lambda goes to zero is clear enough so that I understand what is surprising in it, and what algorithmic implications it can have. Indeed, when lambda is exactly zero, there are a lot of global minima. If these global minima can be grouped into distinct connected components, then, when lambda becomes positive, it is "normal" that local minima appear (intuitively, each connected component will typically give rise to at least one local minimum, but only one connected component - the one containing the global solution that minimizes the regularizer - will give rise to a local minimum that is also global). It seems to me that this is what the authors observe, in terms of local minima. The appearance of saddle points is maybe more surprising, but since they seem to come in "from infinity", they might not be an issue for optimization, at least for small values of lambda. Also, the authors focus on normal solutions. They are easier to compute than non-normal ones, but they are a priori not the only critical points (at least not when lambda=0). So maybe looking only at the normal critical points yields a very different "critical point diagram" than looking at all the critical points. If not, can the authors explain why? Finally, this is less important but, after all the formal computations (up to Theorem 2), I was slightly disappointed that the appearance of local minima and saddle points was only established for specific examples, and through numerical computations, and not via more rigorous arguments. Minor remarks: - It seems to me that the regularization proposed by the authors is essentially "weight decay", which is well known among people who train deep networks; it should be said. - Proposition 1: it seems to me that it is not explicitly said that it only holds for positive values of lambda. - The covariance is defined as \Sigma_0 in the introduction but subsequently denoted by \Sigma. - l.129: it is said that the "critical points" will be characterized, but the proof uses Proposition 1, which deals with "minimizers", and not "critical points". - l. 141: "T" is missing in the characteristic equation. - Theorem 2 - (i): plural should probably be used, since there are several solutions. - Proof of theorem 2: the proof sketch seems to suggest that the authors assume with no proof that there always exists a normal solution when lambda > 0.
The appendix actually proves the existence, but maybe the sketch could be slightly reformulated, so as not to give a wrong impression? - It seems to me that the authors never show that values of C that satisfy Equation (18) are critical points of (4) (the proposition says the opposite: critical points of (4) satisfy (18)). So how can they be sure that the solutions they compute by analytic continuation are indeed critical points of (4)? - I am not sure I understand the meaning of "robust" in the conclusion. Added after author feedback: thank you for the additional explanations. I am however slightly disappointed with your response to my two main comments. I agree that the emergence of saddle points, as well as the normal / non-normal difference, is somewhat surprising. However, it seems to me that this is not really what the article presents as a surprise: if you think that these two points are the most surprising contribution of the article, they should probably be better highlighted and discussed. For the normal vs non-normal solutions, I do understand that studying non-normal critical points looks very difficult, and I am fine with it. However, I think that the abstract and introduction should then be clear about it, and not talk about general critical points, while the article only describes normal ones.
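For concreteness, the finite-dimensional analogue of the objective under discussion (my paraphrase in standard notation, not taken from the paper, which works in an infinite-dimensional optimal-control setting) is

min_{W_1, ..., W_L} E_x || W_L W_{L-1} \cdots W_1 x - R x ||^2 + \lambda \sum_{l=1}^{L} ||W_l||_F^2,

i.e., a mean-squared loss for representing a linear map R by a product of weight matrices, plus a Tikhonov (weight-decay) term; the question debated above is how the critical points of this objective behave as \lambda \to 0.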
nips_2017_1952
Spherical convolutions and their application in molecular modelling Convolutional neural networks are increasingly used outside the domain of image analysis, in particular in various areas of the natural sciences concerned with spatial data. Such networks often work out of the box, and in some cases entire model architectures from image analysis can be carried over to other problem domains almost unaltered. Unfortunately, this convenience does not trivially extend to data in non-Euclidean spaces, such as spherical data. In this paper, we introduce two strategies for conducting convolutions on the sphere, using either a spherical-polar grid or a grid based on the cubed-sphere representation. We investigate the challenges that arise in this setting, and extend our discussion to include scenarios of spherical volumes, with several strategies for parameterizing the radial dimension. As a proof of concept, we conclude with an assessment of the performance of spherical convolutions in the context of molecular modelling, by considering structural environments within proteins. We show that the models are capable of learning non-trivial functions in these molecular environments, and that our spherical convolutions generally outperform standard 3D convolutions in this setting. In particular, despite the lack of any domain-specific feature engineering, we demonstrate performance comparable to state-of-the-art methods in the field, which build on decades of domain-specific knowledge.
Paper summary: The authors propose to partition the surface of a sphere using the equiangular cubed sphere representation of Ronchi et al. (1996) in order to do a spherical convolution. This convolution has the advantage of being almost invariant to rotation of the volume and should have application in earth science, astronomy and in molecular modeling. The authors demonstrate the usefulness of their approach on practical tasks of this latter field. Strengths and weaknesses: The paper is clearly written and easy to read. Also, I appreciate the amount of work that has been put in this paper as molecular modeling is a field with a steep learning curve, where the data is hard to pre-process, and where a clear prediction task is sometimes hard to define. Although I appreciate the importance of spherical convolution on spherical data and the comparison with a purely dense model, there are other comparisons that I believe would be of interest. Namely, I would like to know if an empirical comparison with 3D convolution (ex: https://keras.io/layers/convolutional/#conv3d) in a cube is possible. For example, in the case of molecular modeling, the cube could be centered on the atom of interest. It would also be of interest to compare the spherical convolution using different coordinate systems: the equiangular cubed sphere (as proposed, Figure 1b) and the standard equiangular spacing (Figure 1a). Finally, it would be nice if the conclusion contained ways to improve the proposed approach. Quality: I found the paper well written and easy to understand. I found a few minor typos that are highlighted at the end of this review. Clarity: The figures are well done and help to understand the concepts that are important to the comprehension of the paper. I liked that the contribution of the authors is well-defined and that the authors didn’t try to oversell their ideas. The authors should define the Q20 prediction score at line 229. On line 245, the authors state "Although there are a number of issues with our approach …". I appreciate that the authors were honest about it but I would like them to elaborate on the said issues. Originality: I found the paper interesting and refreshing. I liked that the authors have identified a niche problem and that they addressed it simply. Significance: The authors were honest about their work being preliminary and still at an early stage, to which I agree. However, I believe that their work could have a significant impact in domains where data are in a spherical format or when 3D data is acquired from a fixed point sensor. I would like to emphasize that the impact of this paper will be greatly reduced if the authors do not make the source code for the spherical convolution available online. In addition, having the source code to reproduce the results presented in the paper is good practice. For these reasons I strongly encourage the authors to make their source code available if the paper is accepted. Errors / typos: Line 68: "... based on the on the cubed …" Line 101: "... we can make the make the …" Line 117: The sentence seems unfinished or has a missing piece.
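To make the comparison in my second point concrete, here is a minimal numpy/scipy sketch of a single convolution on the standard equiangular (lat-long) grid of Figure 1a. This is my own illustration, not the authors' code; the cubed-sphere variant would replace the grid layout and the padding logic, and the pole handling here (edge replication) is an assumption on my part.

import numpy as np
from scipy.signal import convolve2d

def equiangular_spherical_conv(grid, kernel):
    # grid: (n_lat, n_lon) array of values sampled on an equiangular sphere grid.
    # Longitude is periodic, so pad with wrap-around; latitude is padded by
    # edge replication as a crude treatment of the poles.
    k = kernel.shape[0] // 2
    padded = np.pad(grid, ((k, k), (0, 0)), mode="edge")    # latitude
    padded = np.pad(padded, ((0, 0), (k, k)), mode="wrap")  # longitude
    return convolve2d(padded, kernel, mode="valid")

# usage: out = equiangular_spherical_conv(np.random.randn(32, 64), np.ones((3, 3)) / 9)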
nips_2017_2251
Deep Reinforcement Learning from Human Preferences For sophisticated reinforcement learning (RL) systems to interact usefully with real-world environments, we need to communicate complex goals to these systems. In this work, we explore goals defined in terms of (non-expert) human preferences between pairs of trajectory segments. We show that this approach can effectively solve complex RL tasks without access to the reward function, including Atari games and simulated robot locomotion, while providing feedback on less than 1% of our agent's interactions with the environment. This reduces the cost of human oversight far enough that it can be practically applied to state-of-the-art RL systems. To demonstrate the flexibility of our approach, we show that we can successfully train complex novel behaviors with about an hour of human time. These behaviors and environments are considerably more complex than any which have been previously learned from human feedback.
Overall I find this paper generally interesting, clearly presented, and technically sound. My concerns are that the contributions of this paper seem rather incremental when compared to previous work. Also, some of the experiments would benefit from further analysis. Let me elaborate below: Summary: This paper advocates a preference-based approach for teaching an RL agent to perform a task. A human observes two trajectories generated by different policies and indicates which one performs the desired task better. Using these indications, the agent infers a reward signal that is hopefully consistent with the preferences of the human, and then uses RL to optimize a policy that maximizes the inferred rewards. My main hesitation for accepting this paper is that the method presented is not sufficiently different from previous work on preference-based RL (such as Wirth '16). The authors acknowledge that the main contribution of this work is to scale up preference-based RL to more complex domains by using deep reinforcement learning. I am less enthusiastic about this paper if the main algorithmic contribution is just using a NN to approximate the policy and extending to Mujoco/Atari. Finally, I am concerned with the fact that synthetic queries are outperforming RL using the true reward function. It seems to me that the synthetic query examples should do strictly worse than having the actual reward function as they are just eliciting preferences based on sub-trajectories of the real rewards. The fact that they are in some cases solidly outperforming standard RL is highly unexpected and, in my opinion, deserves a more rigorous analysis. I don't find the authors' explanation of the learned reward function being smoother a satisfying one.
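For reference, my understanding of the reward-fitting step is the following sketch, under the assumption that the method uses a Bradley-Terry model over summed segment rewards with a cross-entropy loss on the human labels; all variable names here are mine, not the paper's.

import numpy as np

def preference_loss(r1, r2, pref):
    # r1, r2: predicted per-step rewards along the two trajectory segments.
    # pref: 1.0 if the human preferred segment 1, 0.0 if segment 2.
    z = np.sum(r1) - np.sum(r2)
    p1 = 1.0 / (1.0 + np.exp(-z))  # Bradley-Terry probability that segment 1 wins
    p1 = np.clip(p1, 1e-8, 1.0 - 1e-8)
    return -(pref * np.log(p1) + (1.0 - pref) * np.log(1.0 - p1))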
nips_2017_49
On the Consistency of Quick Shift Quick Shift is a popular mode-seeking and clustering algorithm. We present finite sample statistical consistency guarantees for Quick Shift on mode and cluster recovery under mild distributional assumptions. We then apply our results to construct a consistent modal regression algorithm.
In this paper the authors submit proofs regarding various notions of consistency for the Quick Shift algorithm. They also introduce adaptations of the Quick Shift algorithm for other tasks paired with relevant consistency results. Quick Shift is a mode finding algorithm where one chooses a segmentation parameter \tau, constructs a kernel density estimator, and assigns a mode, m, at samples where the KDE magnitude at m is larger than the KDE magnitude evaluated at all samples within a \tau-ball of m. In this setting true modes are those points in the sample space where the density is maximized within a \tau ball. Quick shift can also be used to assign samples to a mode by ascending a sample to the nearest sample where the KDE magnitude is larger and repeating until a mode is reached. The authors present proofs of consistency for both of these tasks: mode recovery and mode assignment. The authors also propose an adaptation of Quick Shift for cluster tree recovery and modal regression for which they also prove consistency. The proofs presented in this paper are dense and technical; I cannot claim to have totally validated their correctness. That being said I found no glaring errors in the authors' reasoning. My largest concern with this paper is reference [1]. [1] plays a central role in the proofs for individual mode recovery rates. [1] is an anonymous "unpublished manuscript" on Dropbox. I spent a fair amount of time searching for any other reference to this manuscript and I could find none. The authors give no explanation for this manuscript. I'm inclined to think that I cannot reasonably accept a paper whose results rely upon such a dubious source. Additionally there are many other smaller issues I found in the paper. Some of them are listed here by line: 99: period after "we" should be removed 101-103: The sentence "For example, if given an input X the response y have two polarizing tendancies, modal regression would be able to adapt to these tendancies," is very strange and unclear. Also "tendencies" is misspelled. 103: "indepth" is not a word. Replace with "in-depth" or "in depth" 106: "require" should be "requires" 112: Why is the "distribution \script{F}" introduced? This is never mentioned elsewhere. We can simply proceed from the density. 113: I'm guessing that the "uniform measure" is the same thing as the "Lebesgue measure." "Lebesgue measure" is more standard terminology in my opinion. 115: Why is the norm notation replaced with absolute value notation? This occurs later as well, for elements which are clearly vectors, line 137 and Lemma 2 for instance. 139: Remark 3 should have a citation. 145: C is never defined. 150: Your style changes here. Assumption 3 uses parentheses and here Assumption 4 uses square brackets. 158: The final "x" in the set definition should be "x'" 242: "f(x)" should be to the right of "inf" 280: "conditiona" should be "conditional" 286: What exactly is meant by "valid bandwidth choices?" Update: I've bumped the paper by two points. I still have concerns with the numerous small technical errors I found.
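For readers unfamiliar with the algorithm, here is a minimal numpy sketch matching the description above (my own illustration, naive O(n^2), and not the authors' implementation):

import numpy as np
from scipy.spatial.distance import cdist

def quick_shift(X, density, tau):
    # X: (n, d) samples; density: KDE values at each sample; tau: segmentation radius.
    # Link each sample to the nearest sample within tau whose KDE value is larger;
    # samples with no such neighbor (parent[i] == i) are declared modes.
    D = cdist(X, X)
    parent = np.arange(len(X))
    for i in range(len(X)):
        mask = (D[i] <= tau) & (density > density[i])
        if mask.any():
            cand = np.flatnonzero(mask)
            parent[i] = cand[np.argmin(D[i, cand])]
    return parent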
nips_2017_440
Geometric Descent Method for Convex Composite Minimization In this paper, we extend the geometric descent method recently proposed by Bubeck, Lee and Singh [1] to tackle nonsmooth and strongly convex composite problems. We prove that our proposed algorithm, dubbed geometric proximal gradient method (GeoPG), converges with a linear rate (1 − 1/√κ) and thus achieves the optimal rate among first-order methods, where κ is the condition number of the problem. Numerical results on linear regression and logistic regression with elastic net regularization show that GeoPG compares favorably with Nesterov's accelerated proximal gradient method, especially when the problem is ill-conditioned.
Summary: The paper extends the recent work of Bubeck et al. on the geometric gradient descent method, in order to handle non-smooth (but “nice”) objectives. Similar work can be found in [10]; however, no optimal rates are obtained in that case (in the sense of having a square-root condition number in the contraction factor). The core ideas and motivation originate from the proximity operators used in non-smooth optimization in machine learning and signal processing (see e.g., lasso). Quality: The paper is of good technical quality - for clarity see below. The results could be considered “expected” since similar results (from smooth to non-smooth convex objectives with proximal operators) have been proved in numerous papers over the past decade; especially under convexity assumptions. Technical correctness: I believe that the results I read in the main are correct. Impact: The paper might have a good impact on the direction of demystifying Nesterov’s accelerated routines like FISTA (by proposing alternatives). I can see several ramifications based on this work, so I would say that the paper has at least some importance. Clarity: My main concern is the balance of being succinct and being thorough. The authors tend towards the former; however, several parts of the paper are not clear (especially the technical ones). Also, the whole idea of this line of work is to provide a better understanding of accelerated methods; the paper does not provide much intuition on this front (update: there is a paragraph in the sup. material - maybe the authors should consider moving that part to the main text). Comments: 1. Neither here nor in the original paper of Bubeck is it clear how we get some basic arguments: E.g., I “wasted” a lot of time to understand how we get to (2.2) by setting y = x^* in (2.1). Given the restricted review time, things need to be crystal clear, even if they are not in the original paper, based on which this work is conducted. Maybe this can be clarified by the rest of the reviewers. Assuming y = x++, this gives the bound x* \in B(x++, 2/\alpha^2 * || \nabla f(x) ||_2^2 - 2/\alpha (f(x) - f(x++))), but f(x*) \leq f(x++), so we cannot naively substitute it there. Probably I’m missing something fundamental. Overall: There is nothing strongly negative about the paper - I would say, apart from some clarity issues and given that the proofs work out, it is a “weak accept”.
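For what it is worth, the standard route to such a ball is strong convexity: writing x^{++} = x - \nabla f(x)/\alpha (my guess for the notation), \alpha-strong convexity at x gives f(x^*) \geq f(x) + <\nabla f(x), x^* - x> + (\alpha/2) ||x^* - x||^2, and completing the square yields

||x^* - x^{++}||^2 \leq (1/\alpha^2) ||\nabla f(x)||_2^2 - (2/\alpha)(f(x) - f(x^*)) \leq (1/\alpha^2) ||\nabla f(x)||_2^2 - (2/\alpha)(f(x) - f(x^{++})),

where the last step uses f(x^*) \leq f(x^{++}), since replacing f(x^*) by f(x^{++}) only enlarges the radius. This is my reconstruction, not the authors' derivation, and I have not checked that the constants match (2.2) exactly.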
nips_2017_2563
DropoutNet: Addressing Cold Start in Recommender Systems Latent models have become the default choice for recommender systems due to their performance and scalability. However, research in this area has primarily focused on modeling user-item interactions, and few latent models have been developed for cold start. Deep learning has recently achieved remarkable success showing excellent results for diverse input types. Inspired by these results we propose a neural network based latent model called DropoutNet to address the cold start problem in recommender systems. Unlike existing approaches that incorporate additional content-based objective terms, we instead focus on the optimization and show that neural network models can be explicitly trained for cold start through dropout. Our model can be applied on top of any existing latent model effectively providing cold start capabilities, and full power of deep architectures. Empirically we demonstrate state-of-the-art accuracy on publicly available benchmarks. Code is available at https://github.com/layer6ai-labs/DropoutNet.
Excellent paper, presenting a good idea with sufficient experiments demonstrating its efficiency. Well written and with a good survey of related work. The paper addresses the following problem: when mixing id level information (user id or item id) with coarser user and item features, any model has the tendency to explain most of the training data with the fine grained id features, using coarse features only to learn the rarest ids. As a result, the generated model is not good at inference when only coarser features are available (cold-start cases, of a new user or a new item). The paper proposes a dial to control how much the model balances out the fine grained and coarser information via drop-out of fine grained info while training. The loss function in Equation (2) has the merit of having a label value for all user-item pairs, by taking the output of the low rank model U_u V_v^T as labels. My concern with such a loss is that the predictions from the low rank model have very different levels of accuracy: they are very accurate for the most popular users and the most popular items (users and items with most data), but can be quite noisy in the tail, and the loss function in Equation (2) does not account for the confidence in the low rank model prediction. Replacing the sum by a weighted sum using as weights some simple estimates of the confidence in each low rank model prediction could improve the overall model. But this can be done in future work. Question about experiments on CiteULike: Figure 2 and Table 1 well support the point the authors are making about the effect of dropout on recall for both warm start and cold start cases. However, it is surprising that the cold start recall numbers are better than the warm start recall numbers. Can the authors explain this phenomenon? Minor details: Line 138: f_\cal{U} -> f_\cal{V} Line 349: is able is able -> is able
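To illustrate what I understand the training mechanism to be, here is a minimal sketch; the dropout rate p and all names are mine, not from the paper.

import numpy as np

def make_training_input(u_pref, u_content, p=0.5, rng=np.random):
    # u_pref: the user's latent preference vector from the pretrained low rank model;
    # u_content: the user's content features. With probability p the preference
    # input is zeroed, simulating a cold-start user so the network must fall back
    # on content alone. The same trick applies symmetrically on the item side.
    if rng.rand() < p:
        u_pref = np.zeros_like(u_pref)
    return np.concatenate([u_pref, u_content])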
nips_2017_1004
A Scale Free Algorithm for Stochastic Bandits with Bounded Kurtosis Existing strategies for finite-armed stochastic bandits mostly depend on a parameter of scale that must be known in advance. Sometimes this is in the form of a bound on the payoffs, or the knowledge of a variance or subgaussian parameter. The notable exceptions are the analysis of Gaussian bandits with unknown mean and variance by Cowan et al. [2015] and of uniform distributions with unknown support. The results derived in these specialised cases are generalised here to the non-parametric setup, where the learner knows only a bound on the kurtosis of the noise, which is a scale free measure of the extremity of outliers.
Rebuttal -------- I read the author feedback. There was no discussion, but due to the consensus, the paper should be accepted. Summary ------- The article considers a variant of the Upper Confidence Bound (UCB) algorithm for multi-armed bandits that uses the notion of kurtosis as well as median of means estimates for the mean and the variance of the considered distributions. Detailed comments ----------------- 1) Table 1: I think the work by Burnetas and Katehakis regarding bandit lower bounds for general distributions deserves to be in this table. 2) It is nice to provide Theorem 7 and make explicit the lower bound in (special cases of) this setting. 3) The algorithm that is derived looks very natural in view of the discussion regarding the kurtosis. One may observe that a bound kappa_0 on kappa is currently required by the algorithm, and I suggest you add a paragraph regarding this point: Indeed on the one hand, a (naive) criticism is that people may not know a value kappa_0 in practice and thus one would now like to know what can be achieved when the kurtosis is unknown and we want to estimate it. This would then call for another assumption such as on higher order moments, and one may then continue asking similar questions for the novel assumption, and so on and so forth. On the other hand, the crucial fact about kurtosis is that unlike mean or variance, for many classes of distributions all distributions of a certain type share the same kurtosis. Thus this is a "class"-dependent quantity rather than a distribution-dependent one, and using the knowledge of kappa_0 \geq \kappa can be considered to be a weak assumption. I think this should be emphasized more. 4) Proof of Lemma 5: The proof is a little compact. I can see the missing steps, but for the sake of clarity, it may be nice to reformulate/expand slightly. (Applying Lemma 4 twice, first to the random variables Y_s, then to (Y_s-mu)^2, etc). 5) Proof of Lemma 6: end of page 4, first inequality: from (c), you can improve the factor 2 to 4/3 (1/ (1-1/4) instead of 1/(1-1/2)) Line 112 "while the last uses the definition..." and Lemma 4 as well? With the 2/3 improvement, you should get \tilde \sigma_t^2 \leq 11/3 \sigma^2 + \tilde \sigma_t^2/3. 6) Without section 3, the paper would have been a little weak, since the proof techniques are extremely straightforward once we have the estimation errors of the median of means, which are known. Thus this section is welcomed. 7) Proof of Theorem 7: "Let Delta_\epsilon = \Delta_\epsilon + \epsilon" I guess it should be "Let Delta_\epsilon = \Delta + \epsilon". 8) There are some missing words on page 7. 9) Section 4: i) Optimal constants. For the sought improvement, one should mimic the KL optimization, at least implicitly. I am not sure this can be achieved that easily. Self-normalized processes would play a role in reducing the magnitude of the confidence bounds, I am not sure it will help beyond that. ii) TS: I don't see any simple way to define a "kurtosis-based" prior/posterior. Can you elaborate on this point? Otherwise this seems like an empty paragraph and I suggest you remove it. Decision: -------- Overall, this is an article with a rather simple (a direct application of median of means confidence bounds for unknown mean and variance) but elegant idea, and the paper is nicely executed and very clear. I give a weak accept, not a clear accept, only because most of the material is already known.
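For completeness, here is the estimator everything above rests on (a standard numpy sketch, not the authors' code):

import numpy as np

def median_of_means(x, k, rng=np.random):
    # Split the n samples into k groups of (roughly) equal size, average
    # within each group, and return the median of the k group means. With
    # k ~ log(1/delta) groups this yields subgaussian-style deviation bounds
    # under only a finite-variance (here: kurtosis-controlled) assumption.
    groups = np.array_split(rng.permutation(x), k)
    return np.median([g.mean() for g in groups])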
nips_2017_3060
Recurrent Ladder Networks We propose a recurrent extension of the Ladder networks [22] whose structure is motivated by the inference required in hierarchical latent variable models. We demonstrate that the recurrent Ladder is able to handle a wide variety of complex learning tasks that benefit from iterative inference and temporal modeling. The architecture shows close-to-optimal results on temporal modeling of video data, competitive results on music modeling, and improved perceptual grouping based on higher order abstractions, such as stochastic textures and motion cues. We present results for fully supervised, semi-supervised, and unsupervised tasks. The results suggest that the proposed architecture and principles are powerful tools for learning a hierarchy of abstractions, learning iterative inference and handling temporal information.
In this paper, the authors introduce an extension of ladder networks for temporal data. Similarly to the ladder network, the model proposed in the paper consists of an encoder-decoder network, with "lateral" connections between the corresponding layers of the encoder and the decoder. Contrary to the original model, recurrent ladder networks also have connections between different time steps (as well as skip connections from the encoder to the next time step). These recurrent connections allow the model to handle temporal data or to perform iterative inference (where more than one step of message passing is required). The proposed model is then evaluated on different tasks. The first one is a classification task of partially occluded and moving MNIST digits, where the authors show that the proposed model performs slightly better than baselines corresponding to the proposed model with various ablations. The second task is generative modeling of various piano pieces. Finally, the last experiment is a classification task of MNIST digits with Brodatz textures, making the classification very challenging. While the ideas presented in this paper are natural and well founded, I believe that the current version of the paper has some issues. First, the proposed model is very succinctly described (in section 2), assuming that the reader is familiar with ladder networks. This is not the case for me, and I had to look at previous papers to understand what the model is. I believe that the paper should be self-contained, which I do not think is the case for the submitted version. I also believe that some statements in the paper are a bit unclear or misleading: for example, I do not think that Fig. 1c corresponds to message passing with latent variables varying in time (I think that it corresponds to iterative inference). Indeed, in the case of a temporal latent variable model, there should also be messages from t to t-1. I also found the experimental section a bit weak: most of the experiments are performed on toy datasets (with the exception of the piano dataset, where the model is not state of the art). Overall, I am a bit ambivalent about the paper. While the ideas presented are interesting, I think that the proposed model is a bit incremental compared to previous work, and that the experiments are a bit weak. Since I am not familiar at all with this research area, I put a low confidence score for my review.
nips_2017_2667
Excess Risk Bounds for the Bayes Risk using Variational Inference in Latent Gaussian Models Bayesian models are established as one of the main successful paradigms for complex problems in machine learning. To handle intractable inference, research in this area has developed new approximation methods that are fast and effective. However, theoretical analysis of the performance of such approximations is not well developed. The paper furthers such analysis by providing bounds on the excess risk of variational inference algorithms and related regularized loss minimization algorithms for a large class of latent variable models with Gaussian latent variables. We strengthen previous results for variational algorithms by showing that they are competitive with any point-estimate predictor. Unlike previous work, we provide bounds on the risk of the Bayesian predictor and not just the risk of the Gibbs predictor for the same approximate posterior. The bounds are applied in complex models including sparse Gaussian processes and correlated topic models. Theoretical results are complemented by identifying novel approximations to the Bayesian objective that attempt to minimize the risk directly. An empirical evaluation compares the variational and new algorithms shedding further light on their performance.
General comments: Overall this is an interesting paper with some nice theoretical results, though the techniques are rather standard. As far as I know, the results on risk bounds for the Bayes predictor are novel. It provides theoretical evidence for the Bayesian treatment of parameters (at least for Gaussian priors) and the generalization guarantee of variational approximations (1-layer, also 2-layer by collapsing the second), which should be of wide interest to the Bayesian community. The experiments on CTMs demonstrate the risk bounds of different variational approximations, though I find the direct loss minimization case less significant because usually it’s hard to compute the marginal log likelihoods. The main concern that I have is the presentation of the results. It seems that the authors introduce unnecessary complexity in the writing. The three kinds of risks introduced in the paper ($r_{Bay}, r_{Gib}, r_{2A}$) are just negative expectations of the marginal log likelihood $\log p(y|x)$, the first term in the ELBO for w, and the first term in the ELBO for w and f (using terminology from variational inference). In Section 2.2, a 2-layer Bayesian model is introduced. After finishing the later sections, I understand the authors’ intention to align with the real model examples used in the paper. But I still feel it’s better to derive the main results (Th.2, 6, 8) first on the 1-layer model $p(w)p(y|x,w)$ and later extend them to 2-layer settings. In fact the main theoretical result is on collapsed variational approximations, which can be seen as a 1-layer model (this is also how the proof of Corollary 3 proceeds). Minor: In Section 2.2, if the 2-layer model is defined as $p(w)p(f)\prod_i p(y_i|f_i, x_i)$, should the risks be $...E_{p(f|w)} p(y|f, x)$ instead of $...E_{p(f|w,x)}p(y|f)$? I’m not sure why there is a dependence of f on x.
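To spell out the identification in the previous sentence (my reading, in generic variational inference notation; worth checking against the paper's exact definitions): with approximate posterior q,

r_{Bay}(q) = -E_{(x,y)}[ \log E_{q(w)} p(y|x,w) ],   r_{Gib}(q) = -E_{(x,y)} E_{q(w)}[ \log p(y|x,w) ],

and r_{2A} is the analogous two-layer quantity with the inner expectation taken over both w and f. By Jensen's inequality r_{Bay}(q) \le r_{Gib}(q), which is exactly why bounding the risk of the Bayes predictor, rather than only the Gibbs predictor, is the stronger statement.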
nips_2017_2891
Learning from uncertain curves: The 2-Wasserstein metric for Gaussian processes We introduce a novel framework for statistical analysis of populations of nondegenerate Gaussian processes (GPs), which are natural representations of uncertain curves. This allows inherent variation or uncertainty in function-valued data to be properly incorporated in the population analysis. Using the 2-Wasserstein metric we geometrize the space of GPs with L^2 mean and covariance functions over compact index spaces. We prove uniqueness of the barycenter of a population of GPs, as well as convergence of the metric and the barycenter of their finite-dimensional counterparts. This justifies practical computations. Finally, we demonstrate our framework through experimental validation on GP datasets representing brain connectivity and climate development. A MATLAB library for relevant computations will be published at https://sites.google.com/view/antonmallasto/software.
The paper extends the Wasserstein (earth-mover's) metric from finite-dimensional Gaussians to GPs, defining a geometry on spaces of GPs and in particular the notion of a barycenter, or 'average' GP. It establishes that 2-Wasserstein distances, and barycenters, between GPs can be computed as the limit of equivalent computations on finite Gaussians. These results are applied to analysis of uncertain white-matter tract data and annual temperature curves from multiple stations. The writing is serviceable and relevant background is presented clearly. I found no errors in the results themselves, though did not check all of them thoroughly. My main concerns are with motivation and significance. I agree that it is in some sense natural to extend the Wasserstein geometry on Gaussians to GPs, and it's reassuring to see that this can be done and (as I understand the results of the paper) basically nothing surprising happens -- this is definitely a contribution. But there is not much motivation for why this is a useful thing to do. The experiments in the paper are (as is standard for theory papers) fairly contrived, and the conclusions they arrive at (the earth is getting warmer) could likely be gained by many other forms of analysis --- what makes the Wasserstein machinery particularly meaningful here? For example: from a hierarchical Bayesian perspective it would seem reasonable to model the climate data using a latent, unobserved global temperature curve f_global ~ GP(m_global, k_global), and then assume that each station i observes that curve with additional local GP noise, f_i ~ GP(f_global + mi, ki). Then the latent global temperature curve could be analytically computed through simple Gaussian conditioning, would be interpretable in terms of this simple generative model, and would (correctly) become more certain as we gathered evidence from additional stations. By contrast I don't understand how to statistically interpret the barycenter of independently estimated GPs, or what meaning to assign to its uncertainty (am I correct in understanding that uncertainty would *not* decrease given additional observations? what are the circumstances in which this is desirable behavior?). Perhaps other reviewers with a more theoretical bent will find this work exciting. But in the absence of any really surprising results or novel theoretical tools, I would like to see evidence that the machinery it builds is well-motivated and has real advantages for practical data analysis -- can it reach conclusions more efficiently or uncover structure more effectively than existing techniques? As currently written the paper doesn't cross my personal NIPS significance bar. misc notes: - Fig 1: the 'naive mean' column claims to average the pointwise standard deviations, but the stddevs shown are smaller than *any* of the example curves: how can this be true? Also why is averaging pointwise stddevs, rather than (co)variances or precisions, the relevant comparison? - I wonder if the Wasserstein metric could be useful for variational fitting of sparse/approximate GP models? Currently this is mostly done by minimizing KL divergence (as in Matthews et al. 2015, https://arxiv.org/abs/1504.07027) but one could imagine minimizing the Wasserstein distance between a target GP and a fast approximating GP. If that turned out to yield a practical method with statistical or computational advantages over KL divergence for fitting large-scale GPs, that'd be a noteworthy contribution. 
130-131: missing close-paren in inf_\mu\inP2 (H 152: 'whe' -> 'we' 155: 'This set is a convex' 170: why is 'Covariance' capitalized? 178: 'GDss' -> 'GDs'?
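For reference, the finite-dimensional formula whose limits the paper studies (the standard closed form for Gaussians; stated from memory, not quoted from the paper):

W_2^2( N(m_1, K_1), N(m_2, K_2) ) = ||m_1 - m_2||_2^2 + tr( K_1 + K_2 - 2 (K_1^{1/2} K_2 K_1^{1/2})^{1/2} ),

and the paper's convergence results justify computing this quantity, and the associated barycenter, on finite-dimensional marginals of the GPs' mean and covariance functions.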
nips_2017_523
Robust Hypothesis Test for Nonlinear Effect with Gaussian Processes This work constructs a hypothesis test for detecting whether a data-generating function h : R^p → R belongs to a specific reproducing kernel Hilbert space H_0, where the structure of H_0 is only partially known. Utilizing the theory of reproducing kernels, we reduce this hypothesis to a simple one-sided score test for a scalar parameter, develop a testing procedure that is robust against the misspecification of kernel functions, and also propose an ensemble-based estimator for the null model to guarantee test performance in small samples. To demonstrate the utility of the proposed method, we apply our test to the problem of detecting nonlinear interaction between groups of continuous features. We evaluate the finite-sample performance of our test under different data-generating functions and estimation strategies for the null model. Our results reveal interesting connections between notions in machine learning (model underfit/overfit) and those in statistical inference (i.e. Type I error/power of hypothesis test), and also highlight unexpected consequences of common model estimating strategies (e.g. estimating kernel hyperparameters using maximum likelihood estimation) on model inference.
This paper considers an interesting issue of hypothesis testing with complex black-box models. I will say from the outset that I am skeptical of the NHST framework, especially as it is currently applied in classical statistical settings, where I think the ongoing harm that it has done to applied science is clear (see, e.g. the ASA's statement on p-values). Whether I should be more or less skeptical of it in nonparametric settings is an interesting question and I'd be happy to be convinced otherwise. I guess the standard arguments aren't worth rehashing in great detail but just so you're aware of where I'm coming from, if I care about the (possibly non-linear) interaction effect then I want a kernel parameterized as: beta1 * k1a(x1,x1') + beta2 * k2c(x2,x2') + beta12 * k1b(x1,x1')*k2b(x2,x2') I want to fit that model using all of my expensive data, and then say something about beta12. Bayesian methods would give me a posterior credible interval for beta12---given limited data this is going to be my approach for sure and I'm going to spend time constructing priors based on previous studies and expert knowledge. But if you want to take a frequentist approach that's fine, you can just bootstrap to obtain a confidence interval for beta12, which I'm going to be happier with than the asymptotics-based approach you take for small samples. OK, now that I've gotten that disclaimer out of the way (apologies!), taking your paper on its own terms, here are some areas that I think could be improved. On line 29 I'm not sure why the null hypothesis is additive; shouldn't this be a multiplicative interaction between air pollutants and nutrient intake? Be careful in Section 2: the fact that h ~ GP with covariance kernel k means that the probability that h is in the RKHS H_k is 0! This is a surprising fact (see Wahba 1990 and Lukic and Beder 2001) and I don't think it invalidates your approach (e.g. E[h] is in the RKHS), but it's worth fixing this point. I like the GP-LMM connection and think it needs more explanation as I am not very familiar with REML. Could I, for example, posit the following model: y = h1 + h2 + epsilon, h1 ~ GP(0,K1) and h2 ~ GP(0,K2) and then use your approach to decide between these models: y = h1 + epsilon y = h2 + epsilon y = h1 + h2 + epsilon? I think the answer is yes, and I'm wondering if it'd be clearer to explain things this way before introducing the garrote parameter delta. Turning the test statistic into a model residual also helped me think about what was going on, so maybe it'd be good to move that earlier than Section 4. It also might be nice to show some training/testing curves (to illustrate bias/variance) in Section 4 to explain what's going on. It wasn't obvious to me why the additive main effects domain H1 \oplus H12 is orthogonal to the product domain H1 \otimes H2 (section 5). For the experiments, if this were real data I could imagine trying all of the different kernel parameterizations and choosing the one that fits best (based on marginal likelihood I guess) as the model for the null hypothesis. Of the different models you considered, which one fit the data best? And did this choice end up giving good type 1 and type 2 error? A real data experiment would be nice.
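A minimal sketch of the bootstrap alternative I have in mind (my own illustration; kernel names follow the parameterization above, the inputs are numpy arrays, and the fitting routine fit_betas is a hypothetical placeholder):

import numpy as np

def bootstrap_ci_beta12(X1, X2, y, fit_betas, n_boot=1000, alpha=0.05, rng=np.random):
    # fit_betas(X1, X2, y) is assumed to return (beta1, beta2, beta12) for the
    # kernel beta1*k1a + beta2*k2c + beta12*(k1b*k2b); we resample cases and
    # collect the interaction coefficient.
    n = len(y)
    stats = []
    for _ in range(n_boot):
        idx = rng.randint(0, n, size=n)
        stats.append(fit_betas(X1[idx], X2[idx], y[idx])[2])
    lo, hi = np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi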
nips_2017_1939
Deep Sets We study the problem of designing models for machine learning tasks defined on sets. In contrast to the traditional approach of operating on fixed dimensional vectors, we consider objective functions defined on sets that are invariant to permutations. Such problems are widespread, ranging from estimation of population statistics [1], to anomaly detection in piezometer data of embankment dams [2], to cosmology [3,4]. Our main theorem characterizes the permutation invariant functions and provides a family of functions to which any permutation invariant objective function must belong. This family of functions has a special structure which enables us to design a deep network architecture that can operate on sets and which can be deployed on a variety of scenarios including both unsupervised and supervised learning tasks. We also derive the necessary and sufficient conditions for permutation equivariance in deep models. We demonstrate the applicability of our method on population statistic estimation, point cloud classification, set expansion, and outlier detection.
Summary of Review: Although the techniques are not novel, the proofs are, and the paper seems well done overall. I’m not familiar with the experiments / domains of the tasks, and there’s not enough description for me to confidently judge the experiments, but they appear fairly strong. Also, the authors missed one key piece of related work (“Learning Multiagent Communication with Backpropagation” - Sukhbaatar et al. 2016). Overview of paper: This paper examines permutation-invariant/equivariant deep learning, characterizing the class of functions (on countable input spaces) and DNN layers which exhibit these properties. There is also an extensive experimental evaluation. The paper is lacking in clarity of presentation and this may hide some lack of conceptual clarity as well; for instance, in some of the applications given (e.g. anomaly detection), the input is properly thought of as a multiset, not a set. The proof of Theorem 2 would not apply to multisets. Quality: Overall, the paper appears solid in motivation, theory, and experiments. The proofs (which I view as the main contribution) are not particularly difficult, impressive, or involved, but they do provide valuable results. The other contributions are a large number of experiments, which I have difficulty judging. The take-aways of the experiments are fairly clear: 1) expanding deep learning to solve set-based tasks can (yet again) yield improvements over approaches that don’t use deep learning 2) in many cases, a naive application of deep learning is bound to fail (for instance, section 4.3, where the position of the anomaly is, presumably, random, and thus impossible for a permutation *invariant* architecture, which I presume the baseline to be), and permutation invariance / equivariance *must* be considered and properly incorporated into the model design. Clarity: While the overall message is clear, the details are not, and I’ll offer a number of corrections / suggestions, which I expect the authors to address or consider. First of all, notation should be defined (for instance, this is missing in de Finetti’s theorem). Section 2.3 should, but does not explicitly show how these results relate to the rest of the paper. Line 136-137: you should mention some specific previous applications, and be clear, later on in the experiments section, about the novelty of each experiment. Line 144: which results are competitive with state-of-the-art (SOTA)? Please specify, and reference previous SOTA. As mentioned before, the tasks are not always explained clearly enough for someone unfamiliar with them (not even in the appendix!). In particular, I had difficulty understanding the tasks in section 4.1.3 and 4.2. In 4.2 I ask: How is “set expansion” performed? What does “coherent” mean? Is the data labelled (e.g. with ground-truth set assignments?) What is the performance metric? What is the “Beta-Binomial model”? detailed comments/questions: There are many missing articles, e.g. line 54 “in transductive” —> “in the transductive” The 2nd sentence of the abstract is not grammatical. line 46: “not” should be removed line 48: sets do not have an ordering; they are “invariant by definition”. Please rephrase this part to clarify (you could talk about “ordered sets”, for instance). Line 151: undefined notation (R(alpha), M). Lines 161-164 repeat the caption of Figure 1; remove/replace duplicate text. the enumeration in lines 217-273 is poorly formatted Line 275 d) : is this the *only* difference with your method? 
Max-pooling is permutation invariant, so this would still be a kind of Deep Set method, right?? line 513: the mapping must be a bijection for the conclusions to follow! As I mentioned, set and multiset should be distinguished; a multiset is permutation invariant but may have repeated elements. Multisets may be more appropriate for some machine learning tasks. BTW, I suspect the result of Theorem 2 would not hold for uncountable sets. Bayesian Sets is cited twice (35 and 49). Originality: I’m a bit out of my depth in terms of background on the tasks in this paper, and I’m not 100% sure the proofs are novel; I would not be particularly surprised if these results are known in, e.g. some pure mathematics communities. My best guess is that the proofs are novel, and I’m fairly confident they’re novel to the machine learning or at least deep learning community. Significance: I think the proofs are significant and likely to clarify the possible approaches to these kinds of set-based tasks for many readers. As a result (and also due to the substantial experimental work), the paper may also play a role in popularizing the application of deep nets to set-based tasks, which would be quite significant.
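As a reference point for other readers, the structural form that (as I understand it) the main theorem characterizes is the sum-decomposition f(X) = \rho( \sum_{x \in X} \phi(x) ); a toy numpy sketch, my own illustration and not the authors' code:

import numpy as np

def deep_set(X, phi, rho):
    # Permutation invariance is immediate: the sum ignores the order of X.
    # Note that the sum does see multiplicities, so this form is really a
    # multiset function, which connects to the set-vs-multiset point above.
    return rho(np.sum([phi(x) for x in X], axis=0))

# e.g. deep_set([1.0, 2.0, 3.0], phi=lambda x: np.array([x, x ** 2]), rho=np.sum)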
nips_2017_2333
Dual Path Networks In this work, we present a simple, highly efficient and modularized Dual Path Network (DPN) for image classification which presents a new topology of connection paths internally. By revealing the equivalence of the state-of-the-art Residual Network (ResNet) and Densely Connected Convolutional Network (DenseNet) within the HORNN framework, we find that ResNet enables feature re-use while DenseNet enables new feature exploration, which are both important for learning good representations. To enjoy the benefits from both path topologies, our proposed Dual Path Network shares common features while maintaining the flexibility to explore new features through dual path architectures. Extensive experiments on three benchmark datasets, ImageNet-1k, Places365 and PASCAL VOC, clearly demonstrate superior performance of the proposed DPN over the state of the art. In particular, on the ImageNet-1k dataset, a shallow DPN surpasses the best ResNeXt-101(64 × 4d) with 26% smaller model size, 25% less computational cost and 8% lower memory consumption, and a deeper DPN (DPN-131) further pushes the state-of-the-art single model performance with about 2 times faster training speed. Experiments on the Places365 large-scale scene dataset, PASCAL VOC detection dataset, and PASCAL VOC segmentation dataset also demonstrate its consistently better performance than DenseNet, ResNet and the latest ResNeXt model over various applications.
The authors propose a new network architecture which is a combination of ResNets and DenseNets. They introduce a very informative theoretical formulation which can be used to formulate ResNets, DenseNets and their proposed architecture. The authors present compelling results, analysis and statistics on challenging benchmarks. Pros: (+) The paper is well written with theoretical and empirical results (+) The authors provide useful analysis and statistics (+) The impact of DPNs is shown on a variety of computer vision tasks (+) The performance of the DPNs on the presented vision tasks is compelling Cons: (-) Optional results on MS COCO would make the paper even stronger Network engineering is an important field, and it is essential that it is done correctly, with analysis and many in-depth experiments. This paper does a good job on all of the above. Even though there is no groundbreaking novelty in the proposed micro-blocks, the authors provide extensive analysis and statistics for their proposed model. They also present results on a variety of computer vision tasks which makes their paper complete. Especially the analysis of memory usage, training speed and number of flops is very revealing and informative. The authors present results on compelling computer vision tasks, such as ImageNet, Places365 and PASCAL VOC. These are traditionally the most difficult datasets and tasks in computer vision. It would make the paper even stronger if the authors presented results on MS COCO, which is harder than PASCAL VOC.
nips_2017_1433
Analyzing Hidden Representations in End-to-End Automatic Speech Recognition Systems Neural networks have become ubiquitous in automatic speech recognition systems. While neural networks are typically used as acoustic models in more complex systems, recent studies have explored end-to-end speech recognition systems based on neural networks, which can be trained to directly predict text from input acoustic features. Although such systems are conceptually elegant and simpler than traditional systems, it is less obvious how to interpret the trained models. In this work, we analyze the speech representations learned by a deep end-to-end model that is based on convolutional and recurrent layers, and trained with a connectionist temporal classification (CTC) loss. We use a pre-trained model to generate frame-level features which are given to a classifier that is trained on frame classification into phones. We evaluate representations from different layers of the deep model and compare their quality for predicting phone labels. Our experiments shed light on important aspects of the end-to-end model such as layer depth, model complexity, and other design choices.
The authors conduct an analysis of CTC trained acoustic models to determine how information related to phonetic categories is preserved in CTC-based models which directly output graphemes. The work follows a long line of research that has analyzed neural network representations to determine how they model phonemic representations, although to the best of my knowledge this has not been done previously for CTC-based end-to-end architectures. The results and analysis presented by the authors are interesting, although I have some concerns with the conclusions that the authors draw, and I would like to clarify these points. Please see my detailed comments below. - In analyzing the quality of features learned at each layer in the network, from the description of the experiments in the paper it would appear that the authors only feed in features corresponding to a single frame at time t, and predict the corresponding output label. In the paper, the authors conclude that (Line 159--164) "... after the 5th recurrent layer accuracy goes down again. One possible explanation to this may be that higher layers in the model are more sensitive to long distance information that is needed for the speech recognition task, whereas the local information which is needed for classifying phones is better captured in lower layers." This is a plausible explanation, although another plausible explanation that I'd like to suggest for this result is the following: It is known from previous work, e.g., (Senior et al., 2015) that even when predicting phoneme output targets, CTC-based models with recurrent layers can significantly delay outputs from the model. In other words, the features at a given frame may not contain information required to predict the current ground-truth label, but this may be shifted. Thus, for example, it is possible that if a window of features around the frame at time t was used instead of a single frame, the conclusions vis-a-vis the quality of the recurrent layers in the model would be very different. An alternative approach would be to follow the procedure in (Senior et al., 2015) and constrain the CTC loss function to only allow a certain amount of delay in predicting labels. I think it is important to conduct the experiments required to establish whether this is the case before drawing the conclusions in the paper. Reference: A. Senior, H. Sak, F. de Chaumont Quitry, T. Sainath and K. Rao, "Acoustic modelling with CD-CTC-SMBR LSTM RNNS," 2015 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU), Scottsdale, AZ, 2015, pp. 604-609. - In analyzing phoneme categories I notice that the authors use the full 61 label TIMIT phoneme set (60 after excluding h#). However, it is common practice for TIMIT to also report phoneme recognition results after mapping the 61 label set down to 48 following (Lee and Hon, 89). In particular, this maps certain allophones into the same category, e.g., many closures are mapped to the same class. It would be interesting to confirm that the same trends as identified by the authors continue to hold in the reduced set. Reference: Lee, K-F., and H-W. Hon. "Speaker-independent phone recognition using hidden Markov models." IEEE Transactions on Acoustics, Speech, and Signal Processing 37.11 (1989): 1641-1648. - There have been many previous works on analyzing neural network representations in the context of ASR apart from the references cited by the authors.
I would suggest that the authors incorporate some of the references listed below in addition to the ones that they have already listed: * Nagamine, Tasha, Michael L. Seltzer, and Nima Mesgarani. "On the Role of Nonlinear Transformations in Deep Neural Network Acoustic Models." INTERSPEECH. 2016. * Nagamine, Tasha, Michael L. Seltzer, and Nima Mesgarani. "Exploring how deep neural networks form phonemic categories." Sixteenth Annual Conference of the International Speech Communication Association. 2015. * Yu, Dong, et al. "Feature learning in deep neural networks-studies on speech recognition tasks." arXiv preprint arXiv:1301.3605 (2013). - In the context of grapheme-based CTC for ASR, I think the authors should cite the following early work: * F. Eyben, M. Wöllmer, B. Schuller and A. Graves, "From speech to letters - using a novel neural network architecture for grapheme based ASR," 2009 IEEE Workshop on Automatic Speech Recognition & Understanding, Merano, 2009, pp. 376-380. - Line 250: "... and micro-averaging F1 inside each coarse sound class." What is meant by "micro-averaging" in this context? - Minor comments and typographical errors: * Line 135: "We train the classifier with Adam [22] with default parameters ...". Please specify what you mean by default parameters. * In Figure 1 and Figure 2: The authors use "steps" when describing strides in the convolution. I'd recommend changing this to "strides" for consistency with the text. * In Figure 1d: The scale on the Y-axis is different from that of the other figures, which makes the figures in (c) and (d) not directly comparable. Could the authors please use the same Y-axis scale for all figures in Figure 1.
nips_2017_3169
Neural Discrete Representation Learning Learning useful representations without supervision remains a key challenge in machine learning. In this paper, we propose a simple yet powerful generative model that learns such discrete representations. Our model, the Vector Quantised-Variational AutoEncoder (VQ-VAE), differs from VAEs in two key ways: the encoder network outputs discrete, rather than continuous, codes; and the prior is learnt rather than static. In order to learn a discrete latent representation, we incorporate ideas from vector quantisation (VQ). Using the VQ method allows the model to circumvent issues of "posterior collapse" (where the latents are ignored when paired with a powerful autoregressive decoder) typically observed in the VAE framework. Pairing these representations with an autoregressive prior, the model can generate high-quality images, videos, and speech, as well as performing high-quality speaker conversion and unsupervised learning of phonemes, providing further evidence of the utility of the learnt representations.
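A sketch of the vector-quantization step with the straight-through gradient copy may help make the idea concrete; this is an illustrative PyTorch snippet under the stated assumptions, not the authors' exact implementation.

    import torch

    def vector_quantize(z_e, codebook):
        # z_e: (batch, d) encoder outputs; codebook: (K, d) embedding vectors
        dists = torch.cdist(z_e, codebook)       # (batch, K) pairwise distances
        idx = dists.argmin(dim=1)                # index of nearest codebook entry
        z_q = codebook[idx]                      # discrete (quantized) codes
        # straight-through estimator: the decoder's gradient w.r.t. z_q is
        # copied unchanged to the encoder output z_e
        z_q_st = z_e + (z_q - z_e).detach()
        # the codebook and encoder are additionally trained with the auxiliary
        # terms ((z_q - z_e.detach())**2).mean() and ((z_e - z_q.detach())**2).mean()
        return z_q_st, idx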
Summary- This paper describes a VAE model in which the latent space consists of discrete-valued (multinomial) random variables. VAEs with multinomial latent variables have been proposed before. This model differs in the way it computes which state to pick and how it sends gradients back. Previous work has used sampled softmaxes, and other softmax-based ideas (for example, trying to start with something more continuous and anneal to a hard version). In this work, a VQ approach is used, where a hard decision is made to pick the closest embedding space vector. The gradients are sent both into the embedding space and into the encoder. Strengths - The model is tested on many different datasets including images, audio, and video datasets. - The model gets good results which are comparable to those obtained with continuous latent variables. - The paper is well-written and easy to follow. Weaknesses- - Experimental comparisons with other discrete latent variable VAE approaches are not reported. It is mentioned that the proposed model is the first to achieve comparable performance with continuous latent variable models, but it would be useful to provide a quantitative assessment as well. - The motivation for having discrete latent variables is not entirely convincing. Even if the input space is discrete, it is not obvious why the latent space should be. For example, the paper argues that "Language is inherently discrete, similarly speech is typically represented as a sequence of symbols" and that this makes discrete representations a natural fit for such domains. However, the latent semantic representation of words or speech could still be continuous, which would make it possible to capture subtle inflections in meaning or tone which might otherwise be lost in a discretized or quantized representation. The applications to compression are definitely more convincing. Overall- The use of VQ to work with discrete latent variables in VAEs is a novel contribution. The model is shown to work well on multiple domains and datasets. While the qualitative results are convincing, the paper can be made more complete by including some more comparisons, especially with other ways of producing discrete latent representations.
nips_2017_1209
PixelGAN Autoencoders In this paper, we describe the "PixelGAN autoencoder", a generative autoencoder in which the generative path is a convolutional autoregressive neural network on pixels (PixelCNN) that is conditioned on a latent code, and the recognition path uses a generative adversarial network (GAN) to impose a prior distribution on the latent code. We show that different priors result in different decompositions of information between the latent code and the autoregressive decoder. For example, by imposing a Gaussian distribution as the prior, we can achieve a global vs. local decomposition, or by imposing a categorical distribution as the prior, we can disentangle the style and content information of images in an unsupervised fashion. We further show how the PixelGAN autoencoder with a categorical prior can be directly used in semi-supervised settings and achieve competitive semi-supervised classification results on the MNIST, SVHN and NORB datasets.
Update after rebuttal: I believe the paper can be accepted as a poster. I advise the authors to polish the writing to better highlight their contributions, motivation and design choices. This could make the work attractive and memorable, rather than "yet another hybrid generative model". ----- The paper proposes the PixelGAN autoencoder - a generative model which is a hybrid of an adversarial autoencoder and a PixelCNN autoencoder. The authors provide a theoretical justification of the approach based on a decomposition of the variational evidence lower bound (ELBO). The authors provide qualitative results with different priors on the hidden distribution, and quantitative results on semi-supervised learning on MNIST, SVHN and NORB. The paper is closely related to Adversarial Autoencoders (Makhzani et al., ICLR 2016 Workshop), which, as far as I know, have only been published on arXiv and as a workshop contribution to ICLR. Yet, the work is well known and widely cited. The novelty of the present submission significantly depends on whether Adversarial Autoencoders are considered an existing approach or not. This is a complicated situation, which, I assume, ACs and PCs are better qualified to judge. Now to some more detailed comments. Pros: 1) Good results on semi-supervised learning on MNIST, SVHN and NORB, and unsupervised clustering on MNIST. It is difficult to say if the results are state-of-the-art since many numbers for the baselines are missing, but at least they are close to the state of the art. 2) A clear discussion of an ELBO decomposition and related architectural choices. Cons: 1) If Adversarial Autoencoders are considered existing work, then novelty is somewhat limited - it's yet another paper which combines two existing generative models. What makes exactly this combination especially interesting? 2) Lack of results on actual image generation. I understand that image generation is difficult to evaluate, but still some likelihood bounds (are these even computable?), Inception scores and images would be nice to see, at least in the Appendix. 3) It is somewhat confusing that two versions of the approach - with location-dependent and location-independent biases - are used in experiments interchangeably, but are not directly compared to each other. I appreciate the authors mentioning this in lines 153-157, but a more in-depth analysis would be useful. 4) A more detailed discussion of the relation to existing approaches, such as VLAE and PixelVAE (both published at ICLR 2017), would be helpful. Lines 158-163 are helpful, but do not quite clearly highlight the differences, and strengths and weaknesses of different approaches. 5) Some formulations are not quite clear, such as "limited stochasticity" vs "powerful decoder" in lines 88 and 96. Also the statement in line 111 about "approximately optimizing the KL divergence" and the corresponding footnote look a bit too abstract - so do the authors optimize it or not? 6) In the bibliography the authors tend to ignore the ICLR conference and list many officially published papers as arXiv preprints. 7) Putting a whole section on cross-domain relations in the appendix is not good practice at all. I realize it's difficult to fit all content into 8 pages, but it's the job of the authors to organize the paper in such a way that all important contributions fit into the main paper. Overall, I am in borderline mode. The results are quite good, but the novelty seems limited.
nips_2017_2999
The Importance of Communities for Learning to Influence We consider the canonical problem of influence maximization in social networks. Since the seminal work of Kempe, Kleinberg, and Tardos [KKT03] there have been two, largely disjoint efforts on this problem. The first studies the problem associated with learning the generative model that produces cascades, and the second focuses on the algorithmic challenge of identifying a set of influencers, assuming the generative model is known. Recent results on learning and optimization imply that in general, if the generative model is not known but rather learned from training data, no algorithm for influence maximization can yield a constant factor approximation guarantee using polynomially-many samples, drawn from any distribution. In this paper we describe a simple algorithm for maximizing influence from training data. The main idea behind the algorithm is to leverage the strong community structure of social networks and identify a set of individuals who are influential but whose communities have little overlap. Although in general the approximation guarantee of such an algorithm is unbounded, we show that this algorithm performs well experimentally. To analyze its performance, we prove that this algorithm obtains a constant factor approximation guarantee on graphs generated through the stochastic block model, traditionally used to model networks with community structure.
This work marries influence maximization (IM) with recent work on submodular optimization from samples. The work salvages some positive results from the wreckage of previous impossibility results on IM from samples, by showing that under an SBM model of community structure in graphs, positive results for IM under sampling are possible with a new algorithm (COPS) that is a new variation on other greedy algorithms for IM. It's surprising that the removal step in the COPS algorithm is sufficient for producing the improvement seen between Margl and COPS in Figure 2 (where Margl sometimes does worse than random). Overall this is a strong contribution to the IM literature. Pros: - Brings IM closer to practical contexts by studying IM under learned influence functions - Gives rigorous analysis of this problem for SBMs - Despite the simplicity of SBMs, a solid evaluation shows good performance on real data Cons: - The paper is very well written, but sometimes feels like it oversimplifies the literature in the service of somewhat overstating the importance of the paper. But it's good work. - The paper could use a conclusion. 1) The use of "training data" in the title makes it a bit unclear what the paper is about. I actually thought the paper was going to be about something very different, some curious application of IM to understand how training data influences machine learning models in graph datasets. Having read the abstract it made more sense, but the word "training" is only used once beyond the abstract, in a subsection title. It seems as though the focus on SBMs and community structure deserves mentioning in the title. 2) The SBM is a very simplistic model of "communities," and it's important to stay humble about what's really going on in the theoretical results: the positive results for SBMs in this work hinge on the fact that within communities, SBM graphs are just ER graphs, and the simple structure of these ER graphs is what is powering the results. 3) With (2) registered as a minor grievance, the empirical results are really quite striking, given that the theoretical analysis assumes ER structure within the communities. It's also notable that the COPS algorithm really does quite well even on graphs from the BA model, which is far from an SBM. In some sense, the issue in (2) sets up an expectation (at least for me) that COPS won't "actually work". This is quite intriguing, and suggests that the parts of the analysis relying on the ER structure are not using the strongest properties of it (i.e., something close to the main result holds for models much more realistic than ER). This is a strong property of the work, inviting future exploration. Minor points: 1) The dichotomous decomposition of the IM literature is a little simplistic, and some additional research directions deserve mention. There have been several recent works that consider IM beyond submodular cases, notably the "robust IM" work by [HK16] (which this submission cites as focusing on submodular IM, not wrong) and also [Angell-Schoenebeck, arxiv 2016]. 2) Two broken cites in Section 3.1. Another on line 292. And I think I see at least one more. Check cites. 3) The name COPS changes to Greedy-Prune in Section 5 (this reviewer assumes they refer to the same thing). 4) I really prefer for papers to have conclusions, even if the short NIPS format makes that a tight fit.
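To make the "greedy plus pruning" idea concrete, here is a generic sketch of an overlap-pruned greedy heuristic in the spirit described above. This is an illustrative reading only, not the paper's exact COPS/Greedy-Prune algorithm; marginal_gain, neighborhoods and the overlap threshold are hypothetical.

    def greedy_prune(marginal_gain, neighborhoods, k, overlap_thresh=0.5):
        # neighborhoods: dict mapping node -> set of sampled influenced nodes
        # greedily rank candidates, then drop those whose influence sets
        # overlap heavily with already-kept picks (the "removal" step)
        candidates = sorted(neighborhoods, key=marginal_gain, reverse=True)[:2 * k]
        kept = []
        for v in candidates:
            Nv = neighborhoods[v]
            if all(len(Nv & neighborhoods[u]) / max(1, len(Nv)) < overlap_thresh
                   for u in kept):
                kept.append(v)
            if len(kept) == k:
                break
        return kept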
nips_2017_2998
Smooth Primal-Dual Coordinate Descent Algorithms for Nonsmooth Convex Optimization We propose a new randomized coordinate descent method for a convex optimization template with broad applications. Our analysis relies on a novel combination of four ideas applied to the primal-dual gap function: smoothing, acceleration, homotopy, and coordinate descent with non-uniform sampling. As a result, our method features the first convergence rate guarantees among coordinate descent methods that are the best known under a variety of common structural assumptions on the template. We provide numerical evidence to support the theoretical results with a comparison to state-of-the-art algorithms.
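For concreteness, one standard instance of the three-composite template F(x) = f(x) + g(x) + h(Ax) is total-variation-regularized sparse regression (this example is illustrative and not necessarily one used in the paper):

    F(x) = \tfrac{1}{2}\|Bx - b\|_2^2 + \lambda_1 \|x\|_1 + \lambda_2 \|Dx\|_1,

where f(x) = \tfrac{1}{2}\|Bx - b\|_2^2 is the smooth part, g(x) = \lambda_1 \|x\|_1 is separable with a cheap soft-thresholding proximal operator, and h(z) = \lambda_2 \|z\|_1 composed with the difference operator A = D is nonsmooth and non-separable in x, which is exactly the term the smoothing technique has to handle.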
The authors propose a new smooth primal-dual randomized coordinate descent method for the three-composite problem F(x) = f(x) + g(x) + h(Ax), where f is smooth, g is non-smooth, separable and has a block-wise proximal operator, and h is a general nonsmooth function. The method is basically a generalization of the accelerated, parallel, and proximal coordinate descent method of Fercoq and Richtarik, and the coordinate descent method with arbitrary sampling of Qu and Richtarik, to the three-composite case. The difficulty of this extension lies in the non-smoothness of the function h composed with a linear operator A. The authors provide an efficient implementation by breaking up full vector updates. They show that the algorithm achieves the best known O(n/k) convergence rate. Restart schemes are also used to improve practical performance. Special cases of the three-composite problem are considered and analyzed. The contributions are clearly stated and overall, this is a nice paper. It is applicable to many different objective functions and employs all the "tricks" to make this a fast and efficient method. Comments: - The smoothing and non-uniform sampling techniques of Algorithm 1 are clearly described in Section 2, but the homotopy and acceleration techniques are not described anywhere; they are just directly used in Alg. 1. This makes Alg. 1 a little cryptic if you are unfamiliar with the structure of these methods -- I think Alg. 1 needs to be better explained, either in words or by indicating what method/approach each step corresponds to. - l. 9 of Algorithm 1: where does this formula for \tau come from? l. 153: where does the adapted \tau formula come from for the strongly convex case? This is not clear. - What are the values of \hat{x}, \tilde{x} and \bar{x} initialized to in Alg. 1? - What is the "given" value of \dot{y} (re: line 73 and equation (6))? - Unknown reference in line 165. ============ POST REBUTTAL ============ I have read the author rebuttal and the other reviews. I thank the authors for addressing my comments/questions, as well as the comments/questions of the other reviewers. My recommendation stands; I think this paper should be accepted.
nips_2017_1831
Is the Bellman residual a bad proxy? This paper aims at theoretically and empirically comparing two standard optimization criteria for Reinforcement Learning: i) maximization of the mean value and ii) minimization of the Bellman residual. For that purpose, we place ourselves in the framework of policy search algorithms, which are usually designed to maximize the mean value, and derive a method that minimizes the residual ||T_* v^π - v^π||_{1,ν} over policies. A theoretical analysis shows how good this proxy is for policy optimization, and notably that it is better than its value-based counterpart. We also propose experiments on randomly generated generic Markov decision processes, specifically designed for studying the influence of the involved concentrability coefficient. They show that the Bellman residual is generally a bad proxy for policy optimization and that directly maximizing the mean value is much better, despite the current lack of deep theoretical analysis. This might seem obvious, as directly addressing the problem of interest is usually better, but given the prevalence of (projected) Bellman residual minimization in value-based reinforcement learning, we believe that this question is worth considering.
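For a tabular MDP the quantity being minimized is easy to write down; a small numpy sketch of the ν-weighted L1 Bellman residual (illustrative array conventions, not the paper's code):

    import numpy as np

    def bellman_residual(P, r, v, gamma, nu):
        # P: (A, S, S) transition kernels, r: (A, S) rewards, v: (S,) values,
        # nu: (S,) state distribution; returns ||T_* v - v||_{1, nu}
        q = r + gamma * (P @ v)    # (A, S): one-step lookahead per action
        t_v = q.max(axis=0)        # optimal Bellman operator applied to v
        return nu @ np.abs(t_v - v)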
The paper investigates a fundamental question in RL, namely whether we should directly maximize the mean value, which is the primary objective, or if it is suitable to minimize the Bellman residual, as is done by several algorithms. The paper provides new theoretical analysis, in the form of a bound on residual policy search. This is complemented by a series of empirical results with random (Garnet-type) MDPs. The paper has a simple take-home message, which is that minimizing the Bellman residual is more susceptible to mismatch between the sample distribution and the optimal distribution, compared to policy search methods that directly optimize the value. The paper is well written; it outlines a specific question, and provides both theoretical and empirical support for answering this question. The new theoretical analysis extends previously known results. It is a modest extension, but potentially useful, and easy to grasp. The empirical results are informative, and constructed to shed light on the core question. I would be interested in seeing analogous results for standard RL benchmarks, to verify that the results carry over to "real" MDPs, not just random ones, but that is a minor point. One weakness of the work might be its scope: it is focused on a simple setting of policy search, without tackling the implications of the work for more complex, commonly used settings such as TRPO or DDPG.
nips_2017_319
Inference in Graphical Models via Semidefinite Programming Hierarchies Maximum A Posteriori Probability (MAP) inference in graphical models amounts to solving a graph-structured combinatorial optimization problem. Popular inference algorithms such as belief propagation (BP) and generalized belief propagation (GBP) are intimately related to linear programming (LP) relaxations within the Sherali-Adams hierarchy. Despite the popularity of these algorithms, it is well understood that the Sum-of-Squares (SOS) hierarchy based on semidefinite programming (SDP) can provide superior guarantees. Unfortunately, SOS relaxations for a graph with n vertices require solving an SDP with n^Θ(d) variables, where d is the degree in the hierarchy. In practice, for d ≥ 4, this approach does not scale beyond a few tens of variables. In this paper, we propose binary SDP relaxations for MAP inference using the SOS hierarchy with two innovations focused on computational efficiency. Firstly, in analogy to BP and its variants, we only introduce decision variables corresponding to contiguous regions in the graphical model. Secondly, we solve the resulting SDP using a non-convex Burer-Monteiro style method, and develop a sequential rounding procedure. We demonstrate that the resulting algorithm can solve problems with tens of thousands of variables within minutes, and outperforms BP and GBP on practical problems such as image denoising and Ising spin glasses. Finally, for specific graph types, we establish a sufficient condition for the tightness of the proposed partial SOS relaxation.
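A rough sketch of the Burer-Monteiro idea mentioned above: replace the PSD matrix variable with a low-rank factor whose rows are kept on the unit sphere, and run projected gradient ascent. This is a generic illustration (for a symmetric coupling matrix J), not the paper's exact method or its sequential rounding procedure:

    import numpy as np

    def burer_monteiro(J, r=8, iters=500, lr=0.05, seed=0):
        # maximize <J, sigma sigma^T> over sigma in R^{n x r} with unit-norm rows
        rng = np.random.default_rng(seed)
        n = J.shape[0]
        sigma = rng.standard_normal((n, r))
        sigma /= np.linalg.norm(sigma, axis=1, keepdims=True)
        for _ in range(iters):
            grad = 2 * J @ sigma   # gradient of <J, sigma sigma^T> for symmetric J
            sigma += lr * grad
            sigma /= np.linalg.norm(sigma, axis=1, keepdims=True)  # reproject rows
        return sigma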
The authors show how SDP methods based on the Sum-of-Squares (SOS) hierarchy may be used for MAP inference in discrete graphical models, focusing on binary pairwise models. Specifically, an approximate method for binary pairwise models is introduced to solve what is called PSOS(4); then the solution is rounded to an integer solution using a recursive scheme called CLAP (for Confidence Lift And Project). Preliminary empirical results are presented which appear encouraging. This is an interesting direction but I was confused by several aspects. Please could the authors clarify/note the following: 1. l.60-67 Optimizing an LP over a relaxation in the Sherali-Adams hierarchy, and relation to Theorem 1. For the first-order relaxation, i.e. the standard pairwise local polytope, only edges in the original model need be considered. This is because, in this case, it is straightforward to observe that consistent marginals for the other edges (which do not exist in the original model) exist while remaining in the pairwise polytope for the complete graph. By assumption, their values do not affect the optimum. For the next higher-order relaxation, i.e. the triplet-consistent polytope TRI, in fact again it may be shown *for binary pairwise models whose graph is chordal* that one need only consider triangles in the original model, since there always exist consistent marginals for all other edges. This is not obvious and was shown recently by Rowland, Pacchiano and Weller "Conditions beyond treewidth..." AISTATS 2017, Sec 3.3. This theme is clearly relevant to your approach, as described in l.81-89. Further, properties of TRI (equivalent for binary pairwise models to the cycle polytope, see Sontag PhD thesis 2010) may be highly relevant for this paper, since I believe that the proof for Theorem 1 relies on showing essentially that the SOS(4) relaxation lies within TRI - please could you comment? If that is true, is it true for any binary pairwise model or only subject to certain restrictions? 2. l.72 Here you helpfully suggest that SDP relaxations do not account well for correlations between neighboring vertices. It would be very valuable if you could please elaborate on this, and on other possible negatives of your approach, to provide a balanced picture. Given this observation, why then does your method appear to work well in the empirical examples you show later? 3. I appreciate space constraints are very tight, but if it is possible to elaborate (even in the Appendix) on the approach, it would be very helpful. In particular: what exactly is happening when r is varied for a given model; how should we think about the directed i->j constraints in l.124-5? 4. Alg 1. When will this algorithm perform well/badly? Can any guarantees be provided, e.g. will it give an upper bound on the solution? 5. CLAP Alg 2. Quick check: is this guaranteed to terminate? [maybe for binary pairwise models, is < \sigma_0, \sigma_s > always \geq 1/2 or some other bound?] Could you briefly discuss its strengths and weaknesses? Are other methods such as Barak, Kelner, Steurer 2014 "Rounding sum-of-squares relaxations" relevant? 6. Sec 4 Experiments. When you run BP-SP, you obtain marginals. How do you then compute your approximate MAP solution? Do you use the same CLAP rounding approach or something else? This may be important since in your experiments, BP-SP performs very well. Since you use triangles as regions for PSOS(4), could you try the same for GBP to make the comparison more similar? 
Particularly since it appears somewhat odd that the current GBP with 4-sets is not doing better than BP-SP. Times should be reported for all methods to allow more meaningful comparisons [I recognize this can be tricky with non-optimized code, but the pattern as larger models are examined would still be helpful]. If possible, it would be instructive to add experiments for larger planar models with no singleton potentials, where it is feasible to compute the exact MAP score. Minor points: In a few places, claims are perhaps stronger than justified - e.g. in the Abstract, "significantly outperforms BP and GBP"; l. 101: perhaps remove "extensive"; l. 243: surely the exact max was obtained only for the small experiments; you don't know for the larger models? A few capitalizations are missing in the References, e.g. Ising, Burer-Monteiro, SDP. ======================= I have read the rebuttal and thank the authors for addressing some of my concerns.
nips_2017_2493
A Unified Approach to Interpreting Model Predictions Understanding why a model makes a certain prediction can be as crucial as the prediction's accuracy in many applications. However, the highest accuracy for large modern datasets is often achieved by complex models that even experts struggle to interpret, such as ensemble or deep learning models, creating a tension between accuracy and interpretability. In response, various methods have recently been proposed to help users interpret the predictions of complex models, but it is often unclear how these methods are related and when one method is preferable over another. To address this problem, we present a unified framework for interpreting predictions, SHAP (SHapley Additive exPlanations). SHAP assigns each feature an importance value for a particular prediction. Its novel components include: (1) the identification of a new class of additive feature importance measures, and (2) theoretical results showing there is a unique solution in this class with a set of desirable properties. The new class unifies six existing methods, notable because several recent methods in the class lack the proposed desirable properties. Based on insights from this unification, we present new methods that show improved computational performance and/or better consistency with human intuition than previous approaches.
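The kernel view is easy to state in code. Below is a brute-force sketch of kernel SHAP by exact enumeration of coalitions; it is feasible only for a small number of features M, and it omits the efficiency constraint (attributions summing to f(x) - f(ref)) that the full method enforces, so it is an approximation:

    import numpy as np
    from itertools import combinations
    from math import comb

    def kernel_shap(f, x, ref):
        M = len(x)
        rows, ys, ws = [], [], []
        for s in range(1, M):                      # proper, nonempty coalitions
            for S in combinations(range(M), s):
                z = np.zeros(M)
                z[list(S)] = 1.0
                masked = np.where(z == 1, x, ref)  # absent features -> reference
                rows.append(z)
                ys.append(f(masked) - f(ref))
                ws.append((M - 1) / (comb(M, s) * s * (M - s)))  # Shapley kernel
        X, y, sw = np.array(rows), np.array(ys), np.sqrt(np.array(ws))
        phi, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
        return phi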
The authors show that several methods in the literature used for explaining individual model predictions fall into the category of "additive feature attribution" methods. They propose a new kind of additive feature attribution method based on the concept of Shapley values and call the resulting explanations the SHAP values. The authors also suggest a new kernel called the Shapley kernel which can be used to compute SHAP values via linear regression (a method they call kernel SHAP). They discuss how other methods, such as DeepLIFT, can be improved by better approximating the Shapley values. Summary of review: Positives: (1) Novel and sound theoretical framework for approaching the question of model explanations, which has been very lacking in the field (most other methods were developed ad hoc). (2) Early versions of this paper have already been cited to justify improvements in other methods (specifically New DeepLIFT). (3) Kernel SHAP is a significantly superior way of approximating Shapley values compared to classical Shapley sampling - much lower variance vs. number of model evaluations (Figure 3). Negatives: (1) The algorithm for Max SHAP is incorrect. (2) The proof for kernel SHAP included in the supplement is incomplete and hastily written. (3) The paper has significant typos: the main text says the runtime of Max SHAP is O(M^3) but the supplement says O(M^2); the equation for L under Theorem 2 has a missing close parenthesis for f(). (4) The argument that SHAP values are a superior form of model explanation is contentious. Case studies in Figure 4 have been chosen to favor SHAP. The paper doesn't have a discussion of the runtime for kernel SHAP or the recommended number of function evaluations needed for good performance. LIME was not included in the Figure 5 comparison. Detailed comments: The fact that Shapley values can be adapted to the task of model explanations is an excellent insight, and early versions of this paper have already been used by other authors to justify improvements to their methods (specifically New DeepLIFT). Kernel SHAP seems like a very significant contribution because, based on Figure 3, it has far lower variance compared to Shapley sampling when estimating the true Shapley values. It is also appealing because, unlike the popular method LIME, the choice of kernel here is motivated by a sound theoretical foundation. However, there are a few problems with the paper. The algorithm for Max SHAP assumes that when i1 is greater than i2 and i2 is the largest input seen so far, then including i1 contributes (i1 - max(reference_of_i1, i2)) to the output. However, this is only true if none of the inputs which haven't been included yet have a reference value that exceeds max(reference_of_i1, i2). To give a concrete example, imagine there are two inputs, a and b, where a=10 and b=6, and the reference values are ref_a = 9 and ref_b = 0. The reference value of max(a,b) is 9, its actual value is 10, and so the difference-from-reference is 1. The correct SHAP values are 1 for a and 0 for b, because b is so far below the reference of a that it never influences the output. However, the line phi[ind] += max(ref,0)/M will assign a SHAP value of 3 to b (ref = xsorted[i] - r[ind], which is 6-0 for b, and M=2). In the tests, it appears that the authors only checked cases where all inputs had the same reference. While this is often true of maxpooling neurons, it need not be true of maxout neurons. 
Also, the algorithm seems difficult to implement on a GPU in a way that would not significantly slow down backpropagation, particularly in the case of maxpooling layers. That said, this algorithm is not central to the thesis of the paper and was clearly written in a rush (the main text says the runtime is O(M^3) but the supplement says O(M^2)). Additionally, when I looked up the "Shapely kernel proof" in the supplement, the code provided for the computational proof was incomplete and the explanation was terse and contained grammatical errors. After several re-reads, my understanding of the proof is that: (1) f_x(S) represents the output of the function when all inputs except those in the subset S are masked; (2) Shapley values are a linear function of the vector of values f_x(S) generated by enumerating all possible subsets S; (3) kernel SHAP (which is doing weighted linear regression) also produces an answer that is linear in the vector of all output values f_x(S); (4) therefore, if the linear function in both cases is the same, then kernel SHAP is computing the Shapley values. Based on this understanding, I expected the computational proof (which was performed for functions of up to 10 inputs) to check whether the final coefficients applied to the vector of output values when analytically solving kernel SHAP were the same as the coefficients used in the classic Shapley value computation. In other words, using the standard formula for weighted linear regression, the final coefficients for kernel SHAP would be the matrix (X^T W X)^{-1} X^T W, where X is a binary matrix in which the rows enumerate all possible subsets S and W holds the weights computed by the Shapley kernel; I expected the ith row of this matrix to match the coefficients for each S in the equation at the top of the page. However, it appears that the code for the computational proof is not comparing any coefficients; rather, for a *specific* model generated by the method single_point_model, it compares the final Shapley values produced by kernel SHAP to the values produced by the classic Shapley value computation. Specifically, the code checks that there is negligible difference between kernel_vals and classic_vals, but classic_vals is returned by the method classic_shapely, which accepts as one of its arguments a specific model f and returns the Shapley values. The code also calls several methods for which the source has not been provided (namely single_point_model, rawShapely, and kernel_shapely). The statement that SHAP values are a superior form of model explanation is contentious. The authors provide examples in Figure 4 where the SHAP values accord perfectly with human intuition for assigning feature importance, but it is also possible to devise examples where this might not be the case - for example, consider a filter in a convolutional neural network which has a ReLU activation and a negative bias. This filter produces a positive output when it sees a sufficiently good match to some pattern and an output of zero otherwise. When the neuron produces an output of zero, a human might assign zero importance to all the pixels in the input because the input as a whole did not match the desired pattern. However, SHAP values may assign positive importance to those pixels which resemble the desired pattern and negative importance to the rest - in other words, it might assign nonzero importance to the input pixels even when the input is white noise. 
In a similar vein, the argument to favor kernel SHAP over LIME would be more compelling if a comparison with LIME were included in the Figure 5 comparison. The authors state that they didn't do the comparison because the standard implementation of LIME uses superpixel segmentation, which is not appropriate here. However, I understand that kernel SHAP and LIME are identical except for the choice of weighting kernel, so I am confused about why the authors were not able to adapt their implementation of kernel SHAP to replicate LIME. Some discussion of runtime or the recommended number of function evaluations would have been desirable; the primary reason LIME uses superpixel segmentation for images is to reduce computational cost, and a key advantage of DeepLIFT-style backpropagation is computational efficiency. The authors show a connection between DeepLIFT and the SHAP values which has already been used by the authors of DeepLIFT to justify improvements to their method. However, the section on Deep SHAP is problematic. The authors state that "Since the SHAP values for the simple components of the network can be analytically solved efficiently, this composition rule enables a fast approximation of values for the whole model", but they explicitly discuss only two analytical solutions: Linear SHAP and Max SHAP. Linear SHAP results in identical backpropagation rules for linear components as original DeepLIFT. As explained earlier, the algorithm for Max SHAP proposed in the supplement appears to be incorrect. The authors mention Low-order SHAP and state that it can be solved efficiently, but Low-order SHAP is not appropriate for many neural networks as individual neurons can have thousands of inputs, and even in the case where a neuron has very few inputs it is unclear if the weighted linear regression can be implemented on a GPU in a way that would not significantly slow down the backpropagation (the primary reason to use DeepLIFT-style backpropagation is computational efficiency). Despite these problems, I feel that the adaptation of Shapley values to the task of model interpretation, as well as the development of kernel SHAP, are substantive enough to accept this paper. REVISED REVIEW: The authors have addressed my concerns and corrected the issues.
nips_2017_920
Max-Margin Invariant Features from Transformed Unlabeled Data The study of representations invariant to common transformations of the data is important to learning. Most techniques have focused on local approximate invariance implemented within expensive optimization frameworks lacking explicit theoretical guarantees. In this paper, we study kernels that are invariant to a unitary group while having theoretical guarantees in addressing the important practical issue of unavailability of transformed versions of labelled data, a problem we call the Unlabeled Transformation Problem, which is a special form of semi-supervised learning and one-shot learning. We present a theoretically motivated alternate approach to the invariant kernel SVM, based on which we propose Max-Margin Invariant Features (MMIF) to solve this problem. As an illustration, we design a framework for face recognition and demonstrate the efficacy of our approach on a large-scale semi-synthetic dataset with 153,000 images and a new challenging protocol on Labelled Faces in the Wild (LFW), while outperforming strong baselines.
#### Paper summary This paper considers the problem of learning invariant features for classification under a special setting of the more general semi-supervised learning, focusing on the face recognition problem. Given a set of unlabeled points (faces) and a set of labeled points, the goal is to learn to construct a feature map which is invariant to nuisance transformations (as observed through the unlabeled set, i.e., minor pose variations of the same unlabeled subjects), while being discriminative to the inter-class transformations (as observed through the labeled sample). The challenge in constructing such an invariant feature map is that each face in the labeled set is not associated with an explicit set of possible facial variations. Rather, these minor facial variations (to be invariant to) are in the unlabeled set. The paper addresses the problem by appealing to tools from group theory. Each nuisance transformation is represented as a unitary transformation, and the set of all nuisance transformations is treated as a unitary group (i.e., a set of orthogonal matrices). Key in constructing invariant features is the so-called "group integration", i.e., the integral of all the orthogonal matrices in the group, resulting in a symmetric projection matrix (see the small numerical sketch after this review). This matrix is used to project points to the invariant space. The paper gives a number of theoretical results regarding this matrix, and extends it to reproducing kernel Hilbert spaces (RKHSs). The invariant features, termed *max-margin invariant features* (MMIF), of a point x are given by taking invariant inner products (induced by the matrix) in an RKHS between x and a set of unlabeled points. ##### Review summary The paper contributes a number of theorems regarding the group integration, as well as invariance in the RKHS. While parts of these can be derived easily, the end result of MMIFs which are provably invariant to nuisance transformations is theoretically interesting. Some of these results appear to be new. Notation and writing in the paper are unfortunately unclear in many parts. The use of overloaded notation severely hinders understanding (more comments below). A few important aspects have not been addressed, including why the MMIFs are constructed the way they are, and the assumption that "all" unitary transformations need to be observed (more details below). Computational complexity of the new approach is not given. ##### Major comments/questions I find the MMIFs and the presented theoretical results very interesting. However, there are still some unclear parts which I hope the authors can address. In order of priority: 1. What is the relationship of the new invariant features to the mean embedding features of Raj et al., 2017? Local Group Invariant Representations via Orbit Embeddings Raj et al., 2017 AISTATS 2017 2. The central part of the proposal is the group integration, integrating over "all" orthogonal matrices. There is no mention at all of how this is computed until the very end on the last page (line 329), from which I infer that the integration is replaced by an empirical average over all observed unlabeled points. When the integration is not over all orthogonal matrices in the group, the given theorems no longer hold. What can we say in this case? Also, how do we know that the observed unlabeled samples are the results of some orthogonal transform? Line 160 does mention that "small changes in pose can be modeled as unitary transformations" according to reference [8]. Could you please point out where in [8]? 3. 
I understand how the proposed approach creates invariance to nuisance transformations. What is unclear to me is the motivation for doing the operations in Eq. (2) and line 284 to create features which are discriminative for classification. Why are features constructed in this way a good choice for being discriminative to between-class transformations? Relatedly, at line 274, the paper mentions generating random binary label assignments for training. Why random? There is not enough explanation in the text. Could you please explain? 4. The size of the invariant feature map in the end is the same as the number of classes (line 284). However, before obtaining this, an intermediate step in Eq. (2) requires computing a feature vector of size M for each x (input point), where M = the number of distinct subjects in the unlabeled set. Presumably M must be very large in practice. What is the runtime complexity of the full algorithm? 5. In Theorem 2.4, the paper claims that the newly constructed transformation g_H is a unitary transformation (preserving inner products) in the RKHS H. However, it appears that only the range of the feature map is considered in the proof, i.e., the set R := {\phi(x) | x in the domain of X}, where \phi is the feature map associated with the kernel. In general, R is a subset of (or equal to) H. The proof of this theorem in the appendix is not complete unless it mentions how to extend the results of the group transformation on R to H. 6. I think Lemma 2.1 requires the group \mathcal{G} to act linearly. So, not "any" group as stated? ##### Suggestions on writing/notations * \mathcal{X}, \mathcal{X}_\mathcal{G} are overloaded, each with 2-3 definitions. See for example lines 119-125. This is very confusing. * Line 280: K appears out of nowhere. I feel that it should be the number of classes, which is denoted by N at line 269. But in the beginning N is used to denote the cardinality of the labeled set... * The paper should state from the beginning that the Haar measure in Lemma 2.1 will be replaced with an empirical measure as observed through the unlabeled sample. It will help the reader. * Line 155: I suggest that the term "generalization" be changed to something else. As mentioned in the paper itself, the term does not have the usual meaning. * There are a few paragraphs that seem to appear out of context. These include line 85, line 274, line 280 and line 284. Why should the paragraph at line 284 be there? I find it confusing. I feel that lines 311-323 should be in the introduction. ------- after rebuttal ------ I read the authors' response. The authors partially addressed the issues that I raised. I have updated the score accordingly.
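As a footnote to point 2 above: for a finite group, the "group integration" is just an average of the group elements, and the projection property is easy to verify numerically. A tiny sketch with the cyclic group of circular shifts:

    import numpy as np

    n = 5
    # finite group of circular shifts, each a permutation matrix
    G = [np.roll(np.eye(n), k, axis=0) for k in range(n)]
    P = sum(G) / len(G)             # "group integration": average over the group
    assert np.allclose(P @ P, P)    # idempotent
    assert np.allclose(P, P.T)      # symmetric -> orthogonal projection
    # here P projects onto the shift-invariant (constant) vectors

The paper's empirical average over observed unlabeled transformations plays the role of this exact average, which is precisely why the theorems are sensitive to not observing the whole group.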
nips_2017_2802
Maximizing Subset Accuracy with Recurrent Neural Networks in Multi-label Classification Multi-label classification is the task of predicting a set of labels for a given input instance. Classifier chains are a state-of-the-art method for tackling such problems, which essentially converts this problem into a sequential prediction problem, where the labels are first ordered in an arbitrary fashion, and the task is to predict a sequence of binary values for these labels. In this paper, we replace classifier chains with recurrent neural networks, a sequence-to-sequence prediction algorithm which has recently been successfully applied to sequential prediction tasks in many domains. The key advantage of this approach is that it allows one to focus on the prediction of the positive labels only, a much smaller set than the full set of possible labels. Moreover, parameter sharing across all classifiers allows better exploitation of information from previous decisions. As both classifier chains and recurrent neural networks depend on a fixed ordering of the labels, which is typically not part of a multi-label problem specification, we also compare different ways of ordering the label set, and give some recommendations on suitable ordering strategies.
I have been reviewing this paper for another conference, so in my review I mostly repeat my comments sent earlier. Already at that time I was for accepting the paper. It is worth underlining that the paper has been further improved by the authors since then. The paper considers the problem of solving multi-label classification (MLC) with recurrent neural networks (RNNs). MLC is converted into a sequential prediction problem: the model predicts a sequence of relevant labels. The prediction is made in a sequential manner, taking into account all previous labels. This approach is similar to classifier chains, with two key differences: firstly, only a single model is used, and its parameters are shared during prediction; secondly, the RNN predicts "positive" labels only. In this way, the algorithm can be much more efficient than standard classifier chains. The authors also discuss the problem of the right ordering of labels to be used during training and testing. The paper is definitely very interesting. The idea of reducing MLC to a sequential prediction problem is a natural extension of chaining, commonly used for solving MLC under the 0/1 loss. I suppose, however, that some similar ideas have already been considered. The authors could, for example, discuss a link between their approach and the learn-to-search paradigm used, for example, in "HC-Search for Multi-Label Prediction: An Empirical Study". I suppose that the algorithm presented in that paper could also be used for predicting sequences of "positive" labels. Minor comments: - The authors write that "Our first observation is that PCCs are no longer competitive to the sequence prediction approaches w.r.t. finding the mode of the joint distribution (ACC and maF1)". It seems, however, that PCC performs better than RNN under the ACC measure. - FastXML can be efficiently tuned for macro-F (see "Extreme F-measure Maximization using Sparse Probability Estimates"). It is not clear, however, how much this tuning would improve the result on BioASQ. After rebuttal: I thank the authors for their response.
nips_2017_1786
LightGBM: A Highly Efficient Gradient Boosting Decision Tree Gradient Boosting Decision Tree (GBDT) is a popular machine learning algorithm, and has quite a few effective implementations such as XGBoost and pGBRT. Although many engineering optimizations have been adopted in these implementations, the efficiency and scalability are still unsatisfactory when the feature dimension is high and data size is large. A major reason is that for each feature, they need to scan all the data instances to estimate the information gain of all possible split points, which is very time consuming. To tackle this problem, we propose two novel techniques: Gradient-based One-Side Sampling (GOSS) and Exclusive Feature Bundling (EFB). With GOSS, we exclude a significant proportion of data instances with small gradients, and only use the rest to estimate the information gain. We prove that, since the data instances with larger gradients play a more important role in the computation of information gain, GOSS can obtain quite accurate estimation of the information gain with a much smaller data size. With EFB, we bundle mutually exclusive features (i.e., they rarely take nonzero values simultaneously), to reduce the number of features. We prove that finding the optimal bundling of exclusive features is NP-hard, but a greedy algorithm can achieve quite good approximation ratio (and thus can effectively reduce the number of features without hurting the accuracy of split point determination by much). We call our new GBDT implementation with GOSS and EFB LightGBM. Our experiments on multiple public datasets show that, LightGBM speeds up the training process of conventional GBDT by up to over 20 times while achieving almost the same accuracy.
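A small numpy sketch of the GOSS sampling step as described in the abstract: keep the instances with the largest |gradient|, subsample the rest, and re-weight the subsampled ones by (1 - a)/b so the information-gain estimate stays roughly unbiased. Parameter names are hypothetical; this is not LightGBM's actual code:

    import numpy as np

    def goss_sample(grads, a=0.2, b=0.1, seed=0):
        rng = np.random.default_rng(seed)
        n = len(grads)
        order = np.argsort(-np.abs(grads))       # sort by |gradient|, descending
        top = order[: int(a * n)]                # always keep large-gradient rows
        rest = order[int(a * n):]
        sampled = rng.choice(rest, size=int(b * n), replace=False)
        idx = np.concatenate([top, sampled])
        weights = np.ones(len(idx))
        weights[len(top):] = (1 - a) / b         # amplify sampled small gradients
        return idx, weights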
The paper presents two nice ways of improving the usual gradient boosting algorithm where the weak classifiers are decision trees. It is a paper oriented towards an efficient (less costly) implementation of the usual algorithm, in order to speed up the learning of decision trees by taking into account previous computations and sparse data. The approaches are interesting and smart. A risk bound is given for one of the improvements (GOSS), which seems sound but still quite loose: according to the experiments, a tighter bound could be obtained, getting rid of the "max" sizes of considered sets. No guarantee is given for the second improvement (EFB), although it seems to be quite efficient in practice. The paper comes with a bunch of interesting experiments, which give good insights and show a nice study of the pros and cons of the improving approaches, both in accuracy and computational time, on various types of datasets. The experiments lack standard deviation indications, which matters because the performances are often very close from one method to another. A suggestion would be to observe the effect of noise in the data on the improvement methods (especially on GOSS, which relies mostly on a greedy selection based on the highest gradients). The paper is not easy to read, because the presentation, explanation, experimental study and analysis are spread all over the paper. It would be better to concentrate fully on one improvement, then on the other, and to finish with the study of them together. Besides, it would be helpful for the reader if the main algorithms were written in simpler ways, not in pseudocode that relies on assumptions about the specificities of data structures.
nips_2017_2086
Universal consistency and minimax rates for online Mondrian Forests We establish the consistency of an algorithm of Mondrian Forests [LRT14, LRT16], a randomized classification algorithm that can be implemented online. First, we amend the original Mondrian Forest algorithm proposed in [LRT14], which considers a fixed lifetime parameter. Indeed, the fact that this parameter is fixed hinders the statistical consistency of the original procedure. Our modified Mondrian Forest algorithm grows trees with increasing lifetime parameters λ_n, and uses an alternative updating rule, allowing it to work in an online fashion as well. Second, we provide a theoretical analysis establishing simple conditions for consistency. Our theoretical analysis also exhibits a surprising fact: our algorithm achieves the minimax rate (optimal rate) for the estimation of a Lipschitz regression function, which is a strong extension of previous results [AG14] to an arbitrary dimension.
Summary: This paper proposes a modification of Mondrian Forest, which is a variant of Random Forest, a majority vote of decision trees. The authors show that the modified algorithm has the consistency property while the original algorithm does not. In particular, when the conditional probability function is Lipschitz, the proposed algorithm achieves the minimax error rate, for which the lower bound was previously known. Comments: The technical contribution is to refine the original version of the Mondrian Forest and prove its consistency. The theoretical results are nice and solid. The main idea comes from the original algorithm, so the originality of the paper is somewhat incremental. I wonder why, in Theorem 2, there is no factor involving K, the number of Mondrian trees, as the authors mention in Section 5. Intuitively, it is natural that a larger K would give more accurate estimates and faster convergence. Ultimately, Theorem 2 says the Mondrian forest works well even if K=1. Is there any theoretical evidence of an advantage to using many Mondrian trees? The computational time of Algorithm 4 is not clear. It seems to me that increasing lifetime parameters leads to more time consumption in SplitCell. In Theorem 2, it is natural to measure the difference between two distributions by KL divergence. Do you have any idea of using another discrepancy measure instead of the quadratic loss? Additionally, in the proof, the variance of the noise \sigma is assumed to be constant. Is this a natural setting? Minor Comments: Line 111 g_n(x,Z_k) -> \hat{g}_n(x,Z_k) Line 117 \nu_{d_\eta} -> \nu_{\eta} Line 3 of pseudocode of Algorithm 2 \tau_A -> \tau Line 162 M_{\lambda^{\prime}} \sim MP(\lambda,C) -> M_{\lambda^{\prime}} \sim MP(\lambda^{\prime},C)
nips_2017_3474
Sticking the Landing: Simple, Lower-Variance Gradient Estimators for Variational Inference We propose a simple and general variant of the standard reparameterized gradient estimator for the variational evidence lower bound. Specifically, we remove a part of the total derivative with respect to the variational parameters that corresponds to the score function. Removing this term produces an unbiased gradient estimator whose variance approaches zero as the approximate posterior approaches the exact posterior. We analyze the behavior of this gradient estimator theoretically and empirically, and generalize it to more complex variational distributions such as mixtures and importance-weighted posteriors. Recent advances in variational inference have begun to make approximate inference practical in large-scale latent variable models. One of the main recent advances has been the development of variational autoencoders along with the reparameterization trick [Kingma and Welling, 2013, Rezende et al., 2014]. The reparameterization trick is applicable to most continuous latent-variable models, and usually provides lower-variance gradient estimates than the more general REINFORCE gradient estimator [Williams, 1992]. Intuitively, the reparameterization trick provides more informative gradients by exposing the dependence of sampled latent variables z on variational parameters φ. In contrast, the REINFORCE gradient estimate only depends on the relationship between the density function log q_φ(z|x) and its parameters. Surprisingly, even the reparameterized gradient estimate contains the score function, a special case of the REINFORCE gradient estimator. We show that this term can easily be removed, and that doing so gives even lower-variance gradient estimates in many circumstances. In particular, as the variational posterior approaches the true posterior, this gradient estimator approaches zero variance faster, making stochastic gradient-based optimization converge and "stick" to the true variational parameters, as seen in Figure 1.
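The "removed term" is easy to exhibit in a PyTorch-style sketch: detach the variational parameters inside log q so that only the path derivative survives. This is illustrative, with a hypothetical log_joint callable; the ELBO value itself is unchanged, only its gradient w.r.t. (mu, log_sigma):

    import torch
    from torch.distributions import Normal

    def elbo_path_derivative(log_joint, mu, log_sigma, x):
        sigma = log_sigma.exp()
        eps = torch.randn_like(mu)
        z = mu + sigma * eps                  # reparameterized sample
        # detaching the variational parameters inside log q drops the
        # score-function term from the gradient (its expectation is zero)
        log_q = Normal(mu.detach(), sigma.detach()).log_prob(z).sum()
        return log_joint(x, z) - log_q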
The authors analyze the functional form of the reparameterization trick for obtaining the gradient of the ELBO, noting that one term depends on the gradient w.r.t. the sample z, which they call the path derivative, and the other term does not depend on the gradient w.r.t. the sample z, which they (and others) call the score function. The score function has zero expectation, but is not necessarily zero at particular samples of z. The authors propose to delete the score function term from the gradient, and show this can be done with a one-line modification in popular autodiff software. They claim that this can reduce the variance; in particular, as the variational posterior approaches the true posterior, the variance in the stochastic gradient approaches zero. They also extend this idea nontrivially to more complicated models. While this is a useful and easy-to-use trick, particularly in the way it is implemented in autodiff software, this does not appear to be a sufficiently novel contribution. The results about the score function are well known, and eliminating or rescaling the score function seems like a natural experiment to do. The discussion of control variates in lines 124-46 is confusing and seemingly contradictory. From my understanding, setting the control variate scale c = 1 corresponds to including the score function term, i.e. using the ordinary ELBO gradient. Their method proposes to not use the control variate, i.e. to set c = 0. But lines 140-6 imply that they got better experimental results setting c = 1, i.e. the ordinary ELBO gradient, than with c = 0, i.e. the ELBO gradient without the score function term. This contradicts their experimental results. Perhaps I have misunderstood this section, but I've read it several times and still can't come to a sensible interpretation. The experimental results show only marginal improvements, though they are essentially free.
nips_2017_1715
Continual Learning with Deep Generative Replay Attempts to train a comprehensive artificial intelligence capable of solving multiple tasks have been impeded by a chronic problem called catastrophic forgetting. Although simply replaying all previous data alleviates the problem, it requires large memory and, even worse, is often infeasible in real-world applications where the access to past data is limited. Inspired by the generative nature of the hippocampus as a short-term memory system in the primate brain, we propose Deep Generative Replay, a novel framework with a cooperative dual model architecture consisting of a deep generative model ("generator") and a task solving model ("solver"). With only these two models, training data for previous tasks can easily be sampled and interleaved with those for a new task. We test our methods in several sequential learning settings involving image classification tasks.
Quality: The paper is technically sound. Extensive comparisons between the proposed approach and other baseline approaches are being made. Clarity: The paper is well-organized. However, insufficient details are provided on the architectures of the discriminator, generator and the used optimizer. I would be surprised if the cited papers refer to the hippocampus as a short-term memory system since the latter typically refers to working memory. I assume they all refer to the hippocampus as a system for long-term memory consolidation. I advise adapting accordingly. A bit more attention should be paid to English writing style:
- primate brain’s
- the most apparent distinction
- inputs are coincided with
- Systems(CLS)
- Recent evidences
- networks(GANs)
- A branch of works
- line 114: real and outputs 1/2 everywhere => unclear; please reformulate
- line 124: real-like samples => realistic; real-life (?)
- in both function
- past copy of a self.
- Sequence of scholar models were
- favorable than other two methods
- can recovers real input space
- Old Tasks Performance
- good performances with
- in same setting employed in section 4.3.
Originality: The proposed approach to solve catastrophic forgetting is new (to my knowledge) yet also straightforward, replacing experience replay with a replay mechanism based on a generative model. The authors do show convincingly that their approach performs much better than other approaches, approximating exact replay. A worry is that the generative model essentially stores the experiences, which would explain why its performance is almost identical to that of experience replay. I think the authors need to provide more insight into how performance scales with data size and show that the generative model becomes more and more memory efficient as the number of processed training samples increases. In other words: the generative model only makes sense if its memory footprint is much lower than that of just storing the examples. I am sure this would be the case, but it is important and insightful to make this explicit. Significance: The results are important given the current interest in life-long learning. While the idea is expected in the light of current advances, the authors give a convincing demonstration of the usefulness of this approach. If the authors can improve the manuscript based on the suggested modifications, then I believe the paper would be a valuable contribution.
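For readers unfamiliar with the mechanism under discussion, a minimal sketch of how replayed pseudo-data could be interleaved with new-task data follows; `old_generator.sample` and the model interfaces are hypothetical stand-ins for illustration, not the paper's actual API:

```python
import torch

def replay_batch(new_x, new_y, old_generator, old_solver, ratio=0.5):
    # Mix real data from the current task with pseudo-data replayed from
    # the previous scholar; labels for replayed inputs come from the old solver.
    n_replay = int(ratio * new_x.size(0))
    x_replay = old_generator.sample(n_replay)        # hypothetical generator API
    y_replay = old_solver(x_replay).argmax(dim=1)    # pseudo-labels from old solver
    x = torch.cat([new_x, x_replay])
    y = torch.cat([new_y, y_replay])
    return x, y
```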
nips_2017_2272
A Minimax Optimal Algorithm for Crowdsourcing We consider the problem of accurately estimating the reliability of workers based on noisy labels they provide, which is a fundamental question in crowdsourcing. We propose a novel lower bound on the minimax estimation error which applies to any estimation procedure. We further propose Triangular Estimation (TE), an algorithm for estimating the reliability of workers. TE has low complexity, may be implemented in a streaming setting when labels are provided by workers in real time, and does not rely on an iterative procedure. We prove that TE is minimax optimal and matches our lower bound. We conclude by assessing the performance of TE and other state-of-the-art algorithms on both synthetic and real-world data.
Summary of paper: The authors derive a lower bound on estimating crowdsource worker reliability in labeling data. They then propose an algorithm to estimate this reliability and analyze the algorithm’s performance. Summary of review: This submission presents a solid piece of research that is well-grounded in both theory and application. Detailed review: — Quality: This paper is thorough. The lower bound and relevant proofs are outlined cleanly. The algorithm is presented in conjunction with time and memory complexity analysis, and the authors compare to related work nicely. Both simulated and real data sets are used; the simulated data have multiple settings and six sources of real-world data are used. The experiments do not demonstrate that the proposed algorithms outperforms all competing methods, but these results are refreshingly realistic and provide a compelling argument for the use of the proposed algorithm. — Clarity: The paper is well-written; the problem and contributions are outlined clearly. The paper would benefit from more description regarding the intuitions of TE and perhaps a high-level summary paragraph in the numerical experiments section. — Originality: The proposed algorithm is novel and the derived lower bound is tied nicely to the problem, bringing a rare balance of theory and application. — Significance: While the problem addressed is very specific, it is important. The proposed algorithm does well—even if it doesn’t beat all others in every instance, it’s more consistent in its performance. This paper will be a solid contribution to the crowdsourcing literature, and could easily impact the use of crowdsourced labels as well.
nips_2017_1879
VEEGAN: Reducing Mode Collapse in GANs using Implicit Variational Learning Deep generative models provide powerful tools for modeling distributions over complicated manifolds, such as those of natural images. But many of these methods, including generative adversarial networks (GANs), can be difficult to train, in part because they are prone to mode collapse, which means that they characterize only a few modes of the true distribution. To address this, we introduce VEEGAN, which features a reconstructor network, reversing the action of the generator by mapping from data to noise. Our training objective retains the original asymptotic consistency guarantee of GANs, and can be interpreted as a novel autoencoder loss over the noise. In sharp contrast to a traditional autoencoder over data points, VEEGAN does not require specifying a loss function over the data, but rather only over the representations, which are standard normal by assumption. On an extensive set of synthetic and real world image datasets, VEEGAN indeed resists mode collapsing to a far greater extent than other recent GAN variants, and produces more realistic samples.
This paper proposes a variation of the GAN algorithm aimed at improving the mode coverage of the generative distribution. The main idea is to use three networks: a generator, an encoder and a discriminator, where the discriminator takes pairs (x,z) as input. The criterion to optimize is a GAN-style criterion for the discriminator and a reconstruction in latent space for the encoder (unlike auto-encoders, where the reconstruction is in input space). The proposed criterion is properly motivated and experimental evidence is provided to support the claim that the new criterion allows to produce good quality samples (similarly to other GAN approaches) but with more diversity and coverage of the target distribution (i.e. more modes are covered) than other approaches. The results are fairly convincing and the algorithm is evaluated with several of the metrics used in previous literature on missing modes in generative models. However, I believe that the paper could be improved in two respects: - the criterion is only partially justified by the proposed derivation starting from the cross entropy in Z space, and there is an extra distance term that is added without clear justification (it's ok to add terms, but making it sound like it falls out of the derivation is not great) - the connection to other existing approaches, and in particular ALI, could be further detailed and the differences could be better explained. More specifically, regarding section 3.1 and the derivation of the objective, I think the exposition is somewhat obfuscating what's really going on: indeed, I would suggest rewriting (3) as the following simple lemma: H(Z,F_\theta(X)) <= KL(q_\gamma(x|z)p_0(z) \| p_\theta(z|x)p(x)) - E log p_0(z), which directly follows from combining (1) and (2). Then the authors could argue that they add this extra expected distance term (third term of (3)). But really, this term is not justified by any part of the derivation. However, it is important to notice that this term vanishes exactly when the KL term is minimized, since the minimum of the KL term is reached when q_\gamma inverts p_\theta, i.e. when F_\theta is the inverse of G_\gamma, in which case z = F_\theta(G_\gamma(z)). There is also the question of whether this term is necessary. Of course, it helps training the encoder to invert the decoder, but it seems possible to optimize only the first part of (5) with respect to both theta and gamma. Regarding the connection to ALI: the discriminator is similar to the one in ALI and trained with the same objective. However, what is different is the way the discriminator is used when training the generator and "encoder" networks. Indeed, in ALI they are trained by optimizing the JS divergence, while here the generator is trained by optimizing the (estimate of) KL divergence plus the additional reconstruction term, and the encoder is purely trained using the reconstruction term. It would be interesting to see what is the effect of varying the weight of the reconstruction term. The authors say that there is no need to add a weight (indeed, the reconstruction term is zero when the KL term is zero, as discussed above), but that doesn't mean it cannot have an effect in practice. Side note: there is an inconsistency in the notation, as d is a distance in latent space, so d(z^i, x_g^i) is not correct and should be replaced by d(z^i, F_\theta(x_g^i)) everywhere.
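To make the latent-space reconstruction term concrete, a minimal sketch follows, assuming squared Euclidean distance for d and treating the generator G and reconstructor F as given callables (an illustration, not the authors' implementation):

```python
import torch

def latent_reconstruction(G, F, z):
    # d(z, F_theta(G_gamma(z))) with squared Euclidean distance as d
    x_g = G(z)        # generator G: noise -> data
    z_rec = F(x_g)    # reconstructor F: data -> noise
    return ((z - z_rec) ** 2).sum(dim=1).mean()
```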
Overall, I feel that this paper presents an interesting new algorithm for training a generative model, and the experiments show that it does have the claimed effect of improving mode coverage relative to comparable algorithms, but the derivation and justification of the algorithm, as well as the comparison to other algorithms, could be improved in order to provide better intuition of why this would work.
nips_2017_604
GP CaKe: Effective brain connectivity with causal kernels A fundamental goal in network neuroscience is to understand how activity in one brain region drives activity elsewhere, a process referred to as effective connectivity. Here we propose to model this causal interaction using integro-differential equations and causal kernels that allow for a rich analysis of effective connectivity. The approach combines the tractability and flexibility of autoregressive modeling with the biophysical interpretability of dynamic causal modeling. The causal kernels are learned nonparametrically using Gaussian process regression, yielding an efficient framework for causal inference. We construct a novel class of causal covariance functions that enforce the desired properties of the causal kernels, an approach which we call GP CaKe. By construction, the model and its hyperparameters have biophysical meaning and are therefore easily interpretable. We demonstrate the efficacy of GP CaKe on a number of simulations and give an example of a realistic application on magnetoencephalography (MEG) data.
This paper addresses the problem of understanding brain connectivity (i.e. how activity in one region of the brain drives activity in other regions) from brain activity observations. More generally, perhaps, the paper attempts to uncover causal structure and uses neuroscience insights to specifically apply the model to brain connectivity. The model can be seen as an extension of (linear) dynamic causal models (DCMs) and assumes that the observations are a linear combination of latent activities, which have a GP prior, plus Gaussian noise (Eq 11). Overall the paper is readable, but more clarity is needed in the details of how the posterior over the influence from i->j is actually computed (paragraph just below Eq 12). I write this review as a machine learning researcher (i.e. non-expert in neuroscience), so I am not entirely sure about the significance of the contribution in terms of neuroscience. From the machine learning perspective, the paper does not really provide a significant contribution as it applies standard GP regression. Indeed, it seems that the claimed novelty arises from the specific kernel used in order to incorporate localization, causality, and smoothness. However, none of these components seem actually new as they use, respectively, (i) the squared exponential (SE) covariance function, (ii) a previously published kernel, and (iii) some additional smoothing also previously known. In this regard, it is unclear why this additional smoothing is necessary, as the SE kernel is infinitely differentiable. The method is evaluated on a toy problem and on a real dataset. There are a couple of major deficiencies in the approach from the machine learning (ML) perspective. Firstly, the mixing functions are assumed to be known. This is very surprising as these functions will obviously have an important effect on the predictions. Secondly, for the only real dataset used, most (if not all) of the hyperparameters are fixed, which also goes against standard ML practice. These hyperparameters are very important, as shown in the paper for the toy dataset, and ultimately affect the connectivities discovered. There also seems to be a significant amount of pre-processing needed to actually fit the problem to the method, for example, as explained in section 5, running ICA. It is unclear how generalizable such an approach would be in practice.
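For reference, the "standard GP regression" the review refers to reduces to a few lines; this is a generic sketch with a user-supplied kernel function `k`, not the paper's causal-kernel construction:

```python
import numpy as np

def gp_posterior_mean(X, y, X_star, k, noise_var=1e-2):
    # Standard GP regression posterior mean: m(X*) = K*^T (K + s^2 I)^{-1} y
    K = k(X, X)                 # (n, n) Gram matrix on training inputs
    K_star = k(X, X_star)       # (n, m) cross-covariances to test inputs
    alpha = np.linalg.solve(K + noise_var * np.eye(len(X)), y)
    return K_star.T @ alpha
```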
nips_2017_3120
Optimal Shrinkage of Singular Values Under Random Data Contamination A low rank matrix X has been contaminated by uniformly distributed noise, missing values, outliers and corrupt entries. Reconstruction of X from the singular values and singular vectors of the contaminated matrix Y is a key problem in machine learning, computer vision and data science. In this paper, we show that common contamination models (including arbitrary combinations of uniform noise, missing values, outliers and corrupt entries) can be described efficiently using a single framework. We develop an asymptotically optimal algorithm that estimates X by manipulation of the singular values of Y, which applies to any of the contamination models considered. Finally, we find an explicit signal-to-noise cutoff, below which estimation of X from the singular value decomposition of Y must fail, in a well-defined sense.
This paper considers the problem of matrix de-noising under the mean-square loss via estimators that shrink the singular values of a contaminated version of the matrix to be estimated. Specifically, the authors seek asymptotically optimal shrinkers under various random contamination models, extending beyond additive white noise. Unfortunately, I could not determine the novelty of the techniques used, as there is no clear indication in the paper of what results are new rather than simple extensions of previous results, such as those in Nadakuditi (2014) and Gavish & Donoho (2017). As a non-expert, as far as I can see, the results in this paper are rather simple extensions of those given in Gavish and Donoho. A clear indication of what is new and a detailed comparison to what is already known is needed. In addition, I found many sentences to be unclear and inadequate, making it a difficult read. Some more specific questions/comments: In line 213 it is not clear whether you prove that the a.s. limit exists or assume it exists. It is unclear under what class of estimators yours is optimal. Only a clue is given in the very last sentence of the paper. In line 64, the variance of A's entries is taken to be the same as that of B, namely sigma_B, while later it is changed to sigma_A. Thm. 2 however does not involve sigma_A. Is there a specific reason for choosing sigma_B = sigma_A? In addition, it is unclear how one can estimate mu_A, sigma_A and sigma_B, which are needed for the method. Lines 89 and 90 should come before Thm. 1. The hard threshold shrinker is missing a multiplicative y_i all over the paper. There are many typos.
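For concreteness, a generic singular-value shrinkage estimator of the kind discussed here is a few lines of code; the `shrinker` is a user-supplied scalar function, and `tau` in the usage comment is a hypothetical threshold:

```python
import numpy as np

def shrink_singular_values(Y, shrinker):
    # Estimate X by applying a scalar shrinker to each singular value of Y
    U, y, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U * shrinker(y)) @ Vt  # broadcasting scales the columns of U

# e.g. hard thresholding at a level tau keeps the multiplicative y_i:
#   shrinker = lambda y: y * (y > tau)
```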
nips_2017_2935
Multiscale Quantization for Fast Similarity Search We propose a multiscale quantization approach for fast similarity search on large, high-dimensional datasets. The key insight of the approach is that quantization methods, in particular product quantization, perform poorly when there is large variance in the norms of the data points. This is a common scenario for real-world datasets, especially when doing product quantization of residuals obtained from coarse vector quantization. To address this issue, we propose a multiscale formulation where we learn a separate scalar quantizer of the residual norm scales. All parameters are learned jointly in a stochastic gradient descent framework to minimize the overall quantization error. We provide theoretical motivation for the proposed technique and conduct comprehensive experiments on two large-scale public datasets, demonstrating substantial improvements in recall over existing state-of-the-art methods.
The paper proposes a novel approach to reduce quantization error in the scenario where product quantization is applied to the residual values that result from coarse whole-space quantization. To encode residuals the authors propose to (1) quantize the norm separately and (2) apply product quantization to unit-normalized residuals. Everything is trained in an end-to-end manner, which is certainly good. Pros: 1. I very much like the approach. 2. The paper is well-written. 3. A good coverage of work related to quantization (including several relevant recent papers). However, not all work related to efficient k-NN search (see below)! Cons: 1. The related work on non-quantization techniques doesn't mention any graph-based techniques. Compared to quantization approaches they are less SPACE-efficient (but so are LSH methods, even more so!), but they are super-fast. Empirically they often greatly outperform everything else; in particular, they outperform LSH. One recent relevant work (but there are many others): Harwood, Ben, and Tom Drummond. "FANNG: fast approximate nearest neighbour graphs." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016. I would encourage the authors to mention this line of work, space permitting. 2. The paper is a bit cryptic. See, e.g., my comment on the recall experiments section. A few more detailed comments: I think I can guess where formula 4:158 comes from, but it wouldn't harm to show a derivation. 7:280 What do you mean by saying that VPSHUFB performs lookups? It's a shuffle instruction, not a store/gather instruction. Recall experiments: Does the phrase "We generate ground-truth results using brute force search, and compare the results of each method against ground-truth in fixed-bitrate settings." mean that you do not compute accuracy for the k-NN search using *ORIGINAL* vectors? Please also clarify how you understand ground truth in the fixed-bitrate setting. Figure 4: please explain the figures better. I don't quite get what exactly the number of neighbors on the x axis is. Is this the number of neighbors retrieved using an approximated distance, i.e., using the formula in 4:158? Speed benchmarks are a bit confusing, because these are only for sequential search using a tiny subset of data that likely fits into an L3 cache. I would love to see actual full-size tests. Regarding the POPCNT approach: what exactly do you do? Is it scalar code? Is it single-threaded? I am asking because I am getting processing times twice as long using POPCNT for the Hamming distance with the same number of bits (128). What is your hardware: CPU, # of cores, RAM, cache size? Are your speed benchmarks single-threaded? Which compiler do you use? Please clarify whether your approach to learning an orthogonal transformation (using Cayley factorization) is novel or not.
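As background for the discussion of formula 4:158, a minimal sketch of standard product-quantization asymmetric distance computation is given below (a generic illustration assuming the dimension d is divisible by the number of subspaces M, not the paper's implementation):

```python
import numpy as np

def pq_asymmetric_distances(query, codebooks, codes):
    # codebooks: (M, K, d/M) sub-codebooks; codes: (n, M) codeword indices
    M, K, d_sub = codebooks.shape
    sub_queries = query.reshape(M, d_sub)
    # one (M, K) lookup table of squared sub-distances, built once per query
    tables = ((codebooks - sub_queries[:, None, :]) ** 2).sum(-1)
    # approximate squared distance to each database point via table lookups
    return sum(tables[m][codes[:, m]] for m in range(M))
```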
nips_2017_2934
Asynchronous Parallel Coordinate Minimization for MAP Inference Finding the maximum a-posteriori (MAP) assignment is a central task for structured prediction. Since modern applications give rise to very large structured problem instances, there is increasing need for efficient solvers. In this work we propose to improve the efficiency of coordinate-minimization-based dual-decomposition solvers by running their updates asynchronously in parallel. In this case message-passing inference is performed by multiple processing units simultaneously without coordination, all reading and writing to shared memory. We analyze the convergence properties of the resulting algorithms and identify settings where speedup gains can be expected. Our numerical evaluations show that this approach indeed achieves significant speedups in common computer vision tasks.
SUMMARY: ======== The authors propose a parallel MAP inference algorithm based on a dual decomposition of the MAP objective. Updates are performed without synchronization and locally enforce consistency between overlapping regions. A convergence analysis establishes a linear rate of convergence and experiments are performed on synthetic MRFs and image disparity estimation. PROS: ===== The paper is well-rounded and provides algorithmic development, theoretical analysis in terms of convergence rate results, and experimental validation. Clarity is good overall and the work is interesting, novel, and well-motivated. CONS: ===== Experimental validation in the current paper is inadequate in several respects. First, since the approach is posed as a general MAP inference method, it would be ideal to see validation on more than one synthetic and one real-world model (modulo variations in network topology). Second, the size of problem instances is much too small to validate a "large-scale" inference algorithm. Indeed, as the authors note (L:20-21), a motivation of the proposed approach is that "modern applications give rise to very large instances". To be clear, the "size" of instances in this setting is the number of regions in Eq. (1). The largest example is on image disparity, which contains fewer than 27K regions. By contrast, the main comparison, HOGWILD, was originally validated on three standard "real world" instances with a number of objective terms reaching into the millions. Note that even single-threaded inference converges within a few hundred seconds in most cases. Finally, the authors do not state which disparity estimation dataset and model are used. This reviewer assumes the standard Middlebury dataset is used, but the reported number of regions is much smaller than the typical image size in that dataset; please clarify. Results of the convergence analysis are not sufficient to justify the lack of experimental validation. In particular, Theorem 1 provides a linear rate of convergence, but this does not establish a global linear rate of convergence, as the bound is only valid away from a stationary point (e.g. when the gradient norm is above a threshold). It is also unclear to what extent the assumptions of the convergence rate can be evaluated, since they must be such that the delay rate, which is unknown and not controlled, is bounded above as a function of c and M. As a secondary issue, this reviewer is unclear about the sequential results. Specifically, it is not clear why the upper bound in Prop. 2 is necessary when an exact calculation has been established in Prop. 1, and, as the authors point out, this bound can be quite loose. Also, it is not clear what the authors mean when they state that Prop. 2 accounts for the sparsity of updates. Please clarify.
nips_2017_1906
Multi-output Polynomial Networks and Factorization Machines Factorization machines and polynomial networks are supervised polynomial models based on an efficient low-rank decomposition. We extend these models to the multi-output setting, i.e., for learning vector-valued functions, with application to multi-class or multi-task problems. We cast this as the problem of learning a 3-way tensor whose slices share a common basis and propose a convex formulation of that problem. We then develop an efficient conditional gradient algorithm and prove its global convergence, despite the fact that it involves a non-convex basis selection step. On classification tasks, we show that our algorithm achieves excellent accuracy with much sparser models than existing methods. On recommendation system tasks, we show how to combine our algorithm with a reduction from ordinal regression to multi-output classification and show that the resulting algorithm outperforms simple baselines in terms of ranking accuracy.
The authors extend factorization machines and polynomial networks to the multi-output setting, casting their formulation as a 3-way tensor decomposition. Experiments on classification and recommendation are presented. I enjoyed reading the paper, and although the model itself seems a somewhat incremental extension with respect to [5], the algorithmic approach is interesting and the theoretical results are welcome. I would like to see runtimes of the proposed model compared to baselines, the kernelized ones in particular, to highlight the benefits of the proposed method. Besides, the classification experiments could use at least one large dataset; the largest used is letter (N=15,000); again, this would serve to highlight the benefits of the proposed model. I understand that a polynomial SVM with a quadratic kernel is a natural baseline for the proposed model, but I am curious about the results with more popular kernel choices, e.g., a 3rd-degree polynomial or RBF. The recommender system experiment needs baselines to put the results from single-output and ordinal McRank FMs in context. Minor comments: - line 26, classical approaches such as?
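For reference, the single-output second-order factorization machine that the paper extends can be evaluated in linear time; a minimal numpy sketch (generic textbook form, not the authors' code), where `V` has one k-dimensional factor row per feature:

```python
import numpy as np

def fm_predict(x, w0, w, V):
    # 2nd-order FM in its O(nk) form:
    # w0 + <w, x> + 0.5 * (||V^T x||^2 - sum_i ||v_i||^2 x_i^2)
    Vx = V.T @ x
    pairwise = 0.5 * (Vx @ Vx - ((V ** 2).T @ (x ** 2)).sum())
    return w0 + w @ x + pairwise
```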
nips_2017_1504
Practical Data-Dependent Metric Compression with Provable Guarantees We introduce a new distance-preserving compact representation of multidimensional point-sets. Given n points in a d-dimensional space where each coordinate is represented using B bits (i.e., dB bits per point), it produces a representation of size O(d log(dB/ε) + log n) bits per point from which one can approximate the distances up to a factor of 1 ± ε. Our algorithm almost matches the recent bound of [6] while being much simpler. We compare our algorithm to Product Quantization (PQ) [7], a state of the art heuristic metric compression method. We evaluate both algorithms on several data sets: SIFT (used in [7]), MNIST [11], New York City taxi time series [4] and a synthetic one-dimensional data set embedded in a high-dimensional space. With appropriately tuned parameters, our algorithm produces representations that are comparable to or better than those produced by PQ, while having provable guarantees on its performance.
This paper presents an impressive algorithm that embeds a set of data points into a low-dimensional space, represented by a near-optimal number of bits. The most appealing aspect of this paper is that it provides a simple but provable dimensionality reduction algorithm, supported by empirical experiments. === Strengths === This paper is supported by strong theoretical analysis. And it is a much simpler algorithm than [5], with a slightly worse bit complexity. === Weaknesses === 1. This is an example of a data-dependent compression algorithm. Though this paper presents a strong theoretical guarantee, it needs to be compared to state-of-the-art data-dependent algorithms. For example, - Optimized Product Quantization for Approximate Nearest Neighbor Search, T. Ge et al, CVPR’13 - Composite Quantization for Approximate Nearest Neighbor Search, T. Zhang et al, ICML’14 - And other recent quantization algorithms. 2. The proposed compression performs worse than PQ when a small code length is allowed, which is the main weakness of this method from a practical standpoint. 3. When the proposed compression is allowed the same bit budget to represent each coordinate, is the pairwise Euclidean distance not distorted? Most compression algorithms (not PCA-type algorithms) suffer from the same problem. === Updates after the feedback === I've read other reviewers' comments and the authors' feedback. I think that my original concerns are resolved by the feedback. Moreover, they suggest a possible direction to combine QuadSketch with OPQ, which is appealing.
nips_2017_1762
Stein Variational Gradient Descent as Gradient Flow Stein variational gradient descent (SVGD) is a deterministic sampling algorithm that iteratively transports a set of particles to approximate given distributions, based on a gradient-based update that guarantees to optimally decrease the KL divergence within a function space. This paper develops the first theoretical analysis on SVGD. We establish that the empirical measures of the SVGD samples weakly converge to the target distribution, and show that the asymptotic behavior of SVGD is characterized by a nonlinear Fokker-Planck equation known as Vlasov equation in physics. We develop a geometric perspective that views SVGD as a gradient flow of the KL divergence functional under a new metric structure on the space of distributions induced by Stein operator.
–––RESPONSE TO AUTHOR REBUTTAL–––– The author's response is very thoughtful, but unfortunately it doesn't fully address my concerns. The fact that the proof of Lemma 3.1 works in the compact case is fine as far as it goes, but substantially detracts from how interesting the result is. The proposed condition on E_{y\sim \mu}[max_x(g(x,y))] is not sufficient, since the same condition also has to hold for mu', which is a much more challenging condition to guarantee uniformly. It seems such a condition is also necessary in a Wasserstein metric version of Lemma 3.1. So the fix is non-trivial. The changes to Theorem 3.3 are more straightforward. Another issue which I forgot to include in my review is that the empirical measure converges in BL at an n^{-1/d} + n^{-1/2} rate, not the n^{-1/2} rate claimed at line 133. See [1] and [2]. Overall the paper requires substantial revision as well as some new technical arguments. I still recommend against acceptance. [1] Sriperumbudur et al. On the empirical estimation of integral probability metrics. Electron. J. Statist. Volume 6 (2012), 1550-1599. [2] Fournier and Guillin. On the rate of convergence in Wasserstein distance of the empirical measure. Probability Theory and Related Fields. Volume 162, Issue 3–4 (2015), pp 707–738. –––ORIGINAL REVIEW––– Summary: The paper investigates theoretical properties of Stein variational gradient descent (SVGD). The paper provides asymptotic convergence results, both in the large-particle and large-time limits. The paper also investigates the continuous-time limit of SVGD, which results in a PDE that has the flavor of a deterministic Fokker-Planck equation. Finally, the paper offers a geometric perspective, interpreting the continuous-time process as a gradient flow and introducing a novel optimal transport metric along the way. Overall, this is a very nice paper with some insightful results. However, there are a few important technical issues that prevent me from recommending publication. 1. Theorem 3.2: Via Lemma 3.1, Theorem 3.2 assumes g(x,y) is bounded and Lipschitz. However, if p is distantly dissipative, then g(x,y) is not bounded. Thus, the result as stated is essentially vacuous. That said, examining the proof, it should be straightforward to replace all of the bounded Lipschitz norms with Lipschitz norms and then get a result in terms of Wasserstein distance while only assuming g(x, y) is Lipschitz. 2. On a related note, the fact that the paper assumes eq. (7) holds "without further examination" (lines 78-79) seems ill-advised. As can be seen from the case of Theorem 3.2, it is important to validate that the hypotheses of a result actually hold and don't interact badly with other (implicit) assumptions. The author(s) should more thoroughly discuss and show that the hypotheses in the two main theorems actually hold in some non-trivial cases. 3. Theorem 3.3: The final statement of the theorem, that eq. (12) implies the Stein discrepancy goes to zero as \ell -> infinity, is not true. In particular, if \epsilon_\ell^* goes to zero too quickly, there's no need for the Stein discrepancy to go to zero. Again, I think a corrected statement can be shown rigorously using the upper bound on the spectral radius discussed in the first remark after Thm 3.3. However, this should be worked out by the author(s). Minor comments:
- \rho is never defined in Thm 3.3
- In many results and remarks: instead of writing "Assume …, then …", one should write either "Assuming …, then …" or "If …, then …".
- There were numerous other grammatical errors
- \phi_{\mu,p}^* was used inconsistently: sometimes it was normalized to have norm 1, sometimes it wasn’t.
- The references need to be updated: many of the papers that are only cited as being on the arXiv have now been published
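For readers unfamiliar with the algorithm being analyzed, a minimal numpy sketch of one SVGD particle update with an RBF kernel follows; the fixed bandwidth `h` and step size `eps` are illustrative assumptions (practical implementations often use a median-heuristic bandwidth):

```python
import numpy as np

def svgd_step(X, grad_log_p, h=1.0, eps=0.1):
    # X: (n, d) particles; grad_log_p maps (n, d) -> (n, d)
    diff = X[:, None, :] - X[None, :, :]                  # x_i - x_j
    K = np.exp(-(diff ** 2).sum(-1) / (2 * h ** 2))       # RBF kernel matrix
    drive = K @ grad_log_p(X)                             # sum_j k(x_j, x_i) grad log p(x_j)
    repulse = (diff * K[..., None] / h ** 2).sum(axis=1)  # sum_j grad_{x_j} k(x_j, x_i)
    return X + eps * (drive + repulse) / X.shape[0]
```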
nips_2017_2204
Deconvolutional Paragraph Representation Learning Learning latent representations from long text sequences is an important first step in many natural language processing applications. Recurrent Neural Networks (RNNs) have become a cornerstone for this challenging task. However, the quality of sentences during RNN-based decoding (reconstruction) decreases with the length of the text. We propose a sequence-to-sequence, purely convolutional and deconvolutional autoencoding framework that is free of the above issue, while also being computationally efficient. The proposed method is simple, easy to implement and can be leveraged as a building block for many applications. We show empirically that compared to RNNs, our framework is better at reconstructing and correcting long paragraphs. Quantitative evaluation on semi-supervised text classification and summarization tasks demonstrate the potential for better utilization of long unlabeled text data.
This paper proposes encoder-decoder models that use convolutional neural networks, not RNNs. Experimental results on NLP data sets show that the proposed models outperform several existing methods. In NLP, RNNs are more commonly used than CNNs. This is because the length of an input varies. CNNs can only handle a fixed-length input, obtained by truncating the text or padding it when it is too short (a minimal sketch of this preprocessing is given after the comments below). In experiments, it is okay to assume a bounded input length, such as 50-250 words; however, in reality, we need to handle text of arbitrary length, sometimes 10 and sometimes 1,000 words. In this respect, RNNs, especially LSTMs, have high potential in NLP applications. Major Comments:
1. For summarization experiments, the Document Understanding Conference (DUC) data sets and the CNN news corpus are commonly used.
2. For headline summarization, if you use shared data, we can judge how much better your models are than other methods, including non-neural approaches.
3. Table 2 shows that the CNN-DCNN model can directly output an input. Extremely high BLEU scores imply low generalization. If the model had sufficient rephrasing ability, it could not achieve BLEU scores over 90.
4. In image processing, similar models have already been proposed, such as: V. Badrinarayanan, A. Handa, and R. Cipolla. SegNet: a deep convolutional encoder-decoder architecture for robust semantic pixel-wise labelling. arXiv preprint arXiv:1505.07293, 2015. Please make your contributions clear in your paper.
5. I am not sure how your semi-supervised experiments have been conducted. In what sense are your methods semi-supervised?
6. Regarding the compared LSTM-LSTM encoder-decoder model, is it an attention-based encoder-decoder model?
7. I suggest trying machine translation data sets in your future work.
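As a side note on the fixed-length preprocessing mentioned above, the usual truncate-or-pad step is trivial; a minimal sketch (an illustration, not the authors' pipeline):

```python
def pad_or_truncate(tokens, length, pad_id=0):
    # fix the token-sequence length a CNN encoder expects:
    # cut long inputs, right-pad short ones with a padding id
    return tokens[:length] + [pad_id] * max(0, length - len(tokens))

# e.g. pad_or_truncate([5, 9, 2], 5) -> [5, 9, 2, 0, 0]
```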
nips_2017_3157
Population Matching Discrepancy and Applications in Deep Learning A differentiable estimation of the distance between two distributions based on samples is important for many deep learning tasks. One such estimation is maximum mean discrepancy (MMD). However, MMD suffers from its sensitive kernel bandwidth hyper-parameter, weak gradients, and large mini-batch size when used as a training objective. In this paper, we propose population matching discrepancy (PMD) for estimating the distribution distance based on samples, as well as an algorithm to learn the parameters of the distributions using PMD as an objective. PMD is defined as the minimum weight matching of sample populations from each distribution, and we prove that PMD is a strongly consistent estimator of the first Wasserstein metric. We apply PMD to two deep learning tasks, domain adaptation and generative modeling. Empirical results demonstrate that PMD overcomes the aforementioned drawbacks of MMD, and outperforms MMD on both tasks in terms of the performance as well as the convergence speed.
This paper presents the population matching discrepancy (PMD) as a better alternative to MMD for distribution matching applications. It is shown that PMD is a sampled version of the Wasserstein metric, or earth mover's distance, and that it has a few advantages over MMD, most notably stronger gradients, the applicability of smaller mini-batch sizes, and fewer hyperparameters. For training generative models at least, the MMD metric does suffer from weak gradients and the requirement of large mini-batches, so the proposals in this paper provide a nice solution to both of these problems. The small mini-batch claim is verified quite nicely in the empirical results. The verification of the stronger-gradients claim is less satisfactory: since the MMD metric depends on the scale parameter sigma, it is essential to consider either the best sigma or a range of sigmas when making such a claim. In terms of having fewer hyperparameters, I feel this claim is less well-supported, because PMD depends on a distance metric, and this distance metric might contain extra hyperparameters, just as in the MMD case. Moreover, it is hard to get a reliable distance metric in a high-dimensional space, so PMD may suffer from the same issue of relying on a distance metric as MMD. On the other hand, there are some standard heuristics for MMD about how to choose the bandwidth parameter; it would be good to compare against such heuristics and treat MMD as a hyperparameter-free metric as well. Overall I think the proposed method has the nice property of permitting small mini-batch sizes and therefore fast training. It seems like a valid improvement over large-batch MMD methods. But it still has the problem of relying on a distance metric, which may limit its success on modeling higher-dimensional data.
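For concreteness, the minimum-weight matching that defines PMD can be sketched with an off-the-shelf assignment solver; this assumes a Euclidean ground cost and equal sample sizes, and is an illustration rather than the authors' implementation:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def population_matching_discrepancy(X, Y):
    # X, Y: (n, d) samples from the two distributions
    C = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)  # pairwise costs
    rows, cols = linear_sum_assignment(C)  # minimum-weight perfect matching
    return C[rows, cols].mean()
```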
nips_2017_1236
Stabilizing Training of Generative Adversarial Networks through Regularization Deep generative models based on Generative Adversarial Networks (GANs) have demonstrated impressive sample quality but in order to work they require a careful choice of architecture, parameter initialization, and selection of hyper-parameters. This fragility is in part due to a dimensional mismatch or non-overlapping support between the model distribution and the data distribution, causing their density ratio and the associated f-divergence to be undefined. We overcome this fundamental limitation and propose a new regularization approach with low computational cost that yields a stable GAN training procedure. We demonstrate the effectiveness of this regularizer across several architectures trained on common benchmark image generation tasks. Our regularization turns GAN models into reliable building blocks for deep learning.
This paper proposes to stabilize the training of GANs using a proposed gradient-norm regularizer. This regularization is designed for the conventional GAN, or more generally the f-GAN proposed last year. The idea is interesting but the justification is a little bit coarse.
1. Eq. (15) is only correct for the maximizer \psi^{\star}. However, the regularizer defined in (21) is for arbitrary \psi, which contradicts this assumption.
2. It seems that increasing the weighting of the regularization term, i.e., \gamma, does not affect the training much in Fig. 2. The authors claim that 'the results clearly demonstrate the robustness of the regularizer w.r.t. the various regularization bandwidths'. This is a little bit strange, and it will be necessary to show how this regularization can affect the training.
3. For the comparison in the experiments, it is not clear how the explicit noise approach was done. What is the power of the noise? How are the batch-size-to-noise ratios defined? What is the relation between them and the conventional SNR?
4. The results in Section 4.3 make little sense. In the reported table, the diagonal elements are all quite close to one, which hardly justifies the advantage of the regularized GAN.
5. It would be convincing to compare the proposed method with some other techniques for stabilizing training, like Salimans et al 2016 or the WGAN technique.
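For concreteness, one common form of a gradient-norm regularizer added to the discriminator loss can be sketched as follows; this is a generic PyTorch illustration and may differ from the paper's exact weighting scheme:

```python
import torch

def gradient_norm_penalty(D, x, gamma):
    # gamma/2 * E[ ||grad_x D(x)||^2 ], added to the discriminator loss
    x = x.detach().requires_grad_(True)
    grad, = torch.autograd.grad(D(x).sum(), x, create_graph=True)
    return 0.5 * gamma * grad.flatten(1).pow(2).sum(1).mean()
```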
nips_2017_3156
Robust and Efficient Transfer Learning with Hidden Parameter Markov Decision Processes We introduce a new formulation of the Hidden Parameter Markov Decision Process (HiP-MDP), a framework for modeling families of related tasks using lowdimensional latent embeddings. Our new framework correctly models the joint uncertainty in the latent parameters and the state space. We also replace the original Gaussian Process-based model with a Bayesian Neural Network, enabling more scalable inference. Thus, we expand the scope of the HiP-MDP to applications with higher dimensions and more complex dynamics.
SUMMARY The authors consider the problem of learning multiple related tasks. It is assumed each task can be described by a low-dimensional vector. A multi-task neural network model can then be learned by treating this low-dimensional vector as an additional input (in addition to state and action). During optimization, both these low-dimensional inputs and the network weights are optimized. The network weights are shared over all tasks. This procedure allows a new task to be modeled very quickly. Subsequently, 'virtual samples' taken from the model can be used to train a policy (using DDQN). The method is evaluated on three different task families. REACTION TO AUTHOR'S REPLY Thanks for the response. It cleared up many questions I had. I changed my rating accordingly. I think it would be good to show the standard deviation over independently trained models to show the variance induced (or absence of such variance) by the (random) choice of initial tasks and possibly the model training procedure. In the toy task, it seems there is always one 'red' and one 'blue' task in the initial data. Does that mean the initial tasks are hand-picked (rather than randomly drawn)? TECHNICAL QUALITY The used approach seems sensible for the task at hand. The development of the method section seems mostly unproblematic. Two issues / suggestions are 1) when tuning the model, shouldn't the update for the network weights take samples from the global buffer into account to avoid catastrophic forgetting for the next task? 2) I'm not sure why you need prioritization for the model update: if you are looking to minimize the mean squared error, wouldn't you want to see all samples equally often? Support for the method is given by experimental evaluation. The method is mostly compared to ablated versions, and not to alternatives from the literature (although one of the methods is supposed to be analogous to the method in [11], and there might not be many other appropriate methods to compare to - would a comparison to [2] be possible?). In any case, the selection of methods to compare to includes different aspects, such as model-free vs. model-based, and independent models vs. multitask vs. a single model for all tasks. It is not clear whether multiple independent runs were performed for each method. Please specify this, and specify what the error bars represent (standard deviation or standard error, one or two deviations, spread between roll-outs or between independent trials, how many trials). Depending on the experiment, the proposed method seems at least as good as most baselines, and (at least numerically) better than all baselines on the most challenging task. The additional material in the appendix also shows the runtime to be much faster compared to the GP-based method in [11]. NOVELTY The general framework is relatively close to [11]. However, the use of neural networks and the possibility to model non-linear dependence and covariance between states and hidden parameters seems like an important and interesting contribution. SIGNIFICANCE AND RELEVANCE Multi-task and transfer learning seem important components for RL approaches in application domains where rapid adaptation or sample efficiency are important. The proposed method is limited to domains where the difference between tasks can be captured in a low-dimensional vector. CLARITY There are a couple of issues with the paper's clarity. I found Figure 1 hard to understand, given its early position in the text and the absence of axis labels.
I also found Algorithm 1 hard to understand. The 'real' roll-out only seemed to be used in the first episode; what is done with the data after the first episode (apart from storing it in global memory for the next task)? Maybe move the entire if (i=0) block out of the for loop (and remove the condition); then the for loop can just contain (15), if I'm not mistaken. The description of the experiments is missing too many details. I couldn't find in either the appendix or the paper how many source instances the method was trained on before evaluation on the target task for the second or third experiment. I also couldn't find how many times you independently trained the model or what the error bars represent. Two source tasks for the toy task seem very few given that you have differences between instances across multiple axes - with just two examples, would it only learn 'red' vs 'blue', since this is the most important difference? MINOR COMMENTS
- Typesetting: subscripts GP and BNN are typeset as products of single-letter variables
- You might want to explain the subscripts k, b, a, d in (1) briefly.
- 79: model -> to model
- Since your model's epsilon distribution is not dependent on the current state, I don't think this could model heteroskedasticity (line 31). If so, please explain. (Uncertainty in the weights leads to different predictive uncertainty in parts of the state space, like a regular GP. But, like a regular GP, this doesn't necessarily mean you model a different 'noise distribution' or risk in different parts of your state space.)
- 'bounded number of parameters' seems somewhat ambiguous (do you mean a finite-dimensional vector, or finitely many values for this parameter?).
- 81: GPs don't really care about high-d input representation (linear scaling in input dimensionality); tractability issues mainly concern the number of samples (cubic scaling for naive implementations).
- 92-93: are delta_x and delta_y really the same whether a=E or a=W, as it appears from this equation?
- 100: a linear of function -> a linear function
- 114: Sec ??
- 123: is develop -> is to develop
- 169: from which learn a policy -> please rephrase
- 174: on trained two previous -> please rephrase
- 188: "of average model" -> of the average model
- 198: Appendix ??
- 230: I'm not sure if embedded is the right word here
- 247-249: it would be interesting to see a comparison
- 255: "our work"
- references: It would be good to go over all of these, as there are many small issues. E.g., the IEEE conferences mention the year and 'IEEE' twice. Names like 'Bayesian' or 'Markov' and abbreviations like STI, HIV, POMDP, CSPBVI should be capitalized. Q-learning too. For [21] I would cite the published (ICLR) version. The publication venue is missing for [35].
- appendices: just wanted to make sure you really meant exp(-10), not 1E-10. Does the prior need to be so small? Why? What do you mean when you say the prior variance changes over training? I think with |w| you mean the number of elements, but you used the notation of a norm (w is a vector, not a set).
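To illustrate the core modeling idea summarized above (a learnable low-dimensional task vector fed as an extra network input), here is a minimal deterministic sketch; the paper uses a Bayesian neural network, so this stand-in omits the weight uncertainty, and all dimensions are hypothetical:

```python
import torch
import torch.nn as nn

class TaskConditionedDynamics(nn.Module):
    # Predicts next-state deltas from (state, action, latent task vector z);
    # z is a learnable per-task embedding optimized jointly with the weights.
    def __init__(self, s_dim, a_dim, z_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(s_dim + a_dim + z_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, s_dim),
        )

    def forward(self, s, a, z):
        return self.net(torch.cat([s, a, z], dim=-1))
```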
nips_2017_732
One-Shot Imitation Learning Imitation learning has been commonly applied to solve different tasks in isolation. This usually requires either careful feature engineering, or a significant number of samples. This is far from what we desire: ideally, robots should be able to learn from very few demonstrations of any given task, and instantly generalize to new situations of the same task, without requiring task-specific engineering. In this paper, we propose a meta-learning framework for achieving such capability, which we call one-shot imitation learning. Specifically, we consider the setting where there is a very large (maybe infinite) set of tasks, and each task has many instantiations. For example, a task could be to stack all blocks on a table into a single tower, another task could be to place all blocks on a table into two-block towers, etc. In each case, different instances of the task would consist of different sets of blocks with different initial states. At training time, our algorithm is presented with pairs of demonstrations for a subset of all tasks. A neural net is trained such that when it takes as input the first demonstration and a state sampled from the second demonstration, it should predict the action corresponding to the sampled state. At test time, a full demonstration of a single instance of a new task is presented, and the neural net is expected to perform well on new instances of this new task. Our experiments show that the use of soft attention allows the model to generalize to conditions and tasks unseen in the training data. We anticipate that by training this model on a much greater variety of tasks and settings, we will obtain a general system that can turn any demonstrations into robust policies that can accomplish an overwhelming variety of tasks.
Summary --- Complex and useful robotic manipulation tasks are difficult not only because of the difficulty of manipulation itself, but also because it's difficult to communicate the intent of a task. Both of these problems can be alleviated through the use of imitation learning, but in order for this to be practical the learner must be able to generalize from few examples. This paper presents an architecture inspired by recent work in meta learning which generalizes manipulation of a robot arm from a single task demonstration; i.e., it does one-shot imitation learning. The network is something like a seq2seq model that uses multiple attention mechanisms in the style of "Neural Machine Translation by Jointly Learning to Align and Translate". There is a demonstration network, a context network and a manipulation network. The first two produce a context embedding, which is fed to a simple MLP (the manipulation network) that produces an action that tells the arm how to move the blocks. This context embedding encodes the demonstration sequence of states and the current state of the arm in a way that selects only relevant blocks at relevant times from the demonstration. More specifically, the demonstration network takes in a sequence (hundreds of time steps long) of block positions and gripper states (open or closed) and embeds the sequence through a stack of residual 1d convolution blocks. At each layer in the stack and at each time step, an attention mechanism allows that layer to compare its blocks to a subset of relevant blocks. This sequence embedding is fed into the context network, which uses two more attention mechanisms to first select the period of the demonstration relevant to the current state and then attend to relevant blocks of the demonstration. None of the aforementioned interpretations are enforced via supervision, but they are supported by manual inspection of attention distributions. The network is trained to build block towers that mimic those built in a demonstration in the final arrangement of blocks, where the demonstration and test instance start from initially different configurations. DAGGER and Behavior Cloning (BC) are used to train the above architecture and performance is evaluated by measuring whether the correct configuration of blocks is produced or not. DAGGER and BC perform about the same. Other findings: * The largest difference in performance is due to increased difficulty. The more blocks that need to be stacked, the worse the desired state is replicated. * The method generalizes well to new tasks (number and height of towers), not just instances (labelings and positions of blocks). * The final demonstration state is enough to specify a task, so a strong baseline encodes only this state, but it performs significantly worse than the proposed model. Strengths --- This is a very nice paper. It proposes a new model for a well-motivated problem based on a number of recent directions in neural net design. In particular, this proposes a novel combination of elements from few-shot learning (especially meta-learning approaches like Matching Networks), imitation learning, transfer learning, and attention models. Furthermore, some attention components of the model (at least for the examples provided) seem cleanly interpretable, as shown in figure 4. Comments/Suggestions/Weaknesses --- There aren't any major weaknesses, but there are some additional questions that could be answered and the presentation might be improved a bit. * More details about the hard-coded demonstration policy should be included.
Were different versions of the hard-coded policy tried? How human-like is the hard-coded policy (e.g., compared to how a human would demonstrate for Baxter)? Does the model generalize from any working policy? What about a policy which spends most of its time doing irrelevant or intentionally misleading manipulations? Can a demonstration task be input in a higher-level language like the one used throughout the paper (e.g., at line 129)? * How does this setting relate to question answering or visual question answering? * How does the model perform on the same training data it has seen already? How much does it overfit? * How hard is it to find intuitive attention examples as in figure 4? * The model is somewhat complicated and its presentation in section 4 requires careful reading, perhaps with reference to the supplement. If possible, try to improve this presentation. Replacing some of the natural language description with notation and adding breakout diagrams showing the attention mechanisms might help. * The related work section would be better understood knowing how the model works, so it should be presented later.
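As background for the attention mechanisms described in the summary, a minimal numpy sketch of generic soft attention follows (an illustration of the building block, not the paper's exact formulation):

```python
import numpy as np

def soft_attention(query, keys, values):
    # query: (d,); keys: (T, d); values: (T, d_v)
    scores = keys @ query / np.sqrt(keys.shape[1])  # scaled similarity
    w = np.exp(scores - scores.max())
    w /= w.sum()                                    # softmax over the T positions
    return w @ values                               # convex combination of values
```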
nips_2017_1019
Learning Multiple Tasks with Multilinear Relationship Networks Deep networks trained on large-scale data can learn transferable features to promote learning multiple tasks. Since deep features eventually transition from general to specific along deep networks, a fundamental problem of multi-task learning is how to exploit the task relatedness underlying parameter tensors and improve feature transferability in the multiple task-specific layers. This paper presents Multilinear Relationship Networks (MRN) that discover the task relationships based on novel tensor normal priors over parameter tensors of multiple task-specific layers in deep convolutional networks. By jointly learning transferable features and multilinear relationships of tasks and features, MRN is able to alleviate the dilemma of negative-transfer in the feature layers and under-transfer in the classifier layer. Experiments show that MRN yields state-of-the-art results on three multi-task learning datasets.
This paper proposes a solution to a very interesting problem, namely multi-task learning. The authors tackle learning task similarities, which is known to be a very hard problem and has been ignored by many approaches proposed to solve multi-task learning. The authors propose a Bayesian method using tensor normal priors. The paper is well written and well connected to the literature. The idea is quite novel. The authors supply a set of convincing experiments to support their method. Although I am quite positive about the paper, I would like to get information on the following questions: 1) Although the experiments are convincing, I would like to see the standard deviations (stds) for the experiments. Since the dataset sizes are small, reporting stds is quite important. 2) The authors use AlexNet as a baseline. However, the AlexNet architecture is no longer the state of the art. It would be great if the authors used GoogLeNet or something similar and showed the improvement of their algorithm.
nips_2017_1846
Wasserstein Learning of Deep Generative Point Process Models Point processes are becoming very popular in modeling asynchronous sequential data due to their sound mathematical foundation and strength in modeling a variety of real-world phenomena. Currently, they are often characterized via an intensity function, which limits the model's expressiveness due to unrealistic assumptions on its parametric form used in practice. Furthermore, they are learned via a maximum likelihood approach, which is prone to failure for multi-modal distributions of sequences. In this paper, we propose an intensity-free approach for point process modeling that transforms nuisance processes to a target one. Furthermore, we train the model using a likelihood-free approach leveraging the Wasserstein distance between point processes. Experiments on various synthetic and real-world data substantiate the superiority of the proposed point process model over conventional ones.
This paper proposes to perform estimation of a point process using the Wasserstein-GAN approach. More precisely, given data that has been generated by a point process on the real line, the goal is to build a model of this point process. Instead of using maximum likelihood, the authors propose to use WGAN. This requires one to: - define a distance between 2 realizations of a point process - define a family of Lipschitz functions with respect to this distance - define a generative model which transforms "noise" into a point process The contribution of the paper is to propose a particular way of addressing these three points and thus demonstrate how to use WGAN in this setting. The resulting approach is compared on a variety of point processes (both synthetic and real) with maximum likelihood approaches and shown to compare favorably (especially when the underlying intensity model is unknown). I must admit that I am not very familiar with estimation of point processes and the corresponding applications and thus cannot judge the potential impact and relevance of the proposed method. However, I feel that the adaptation of WGAN (which is becoming increasingly popular in a variety of domains) to the estimation of point processes is not so obvious, and the originality of the contribution comes from proposing a reasonable approach to do this adaptation, along with some insights regarding the implementation of the Lipschitz constraint which I find interesting. One aspect that could be further clarified concerns the modeling of the generator function: from the definition in equation (7), there is no guarantee that the generated sequence t_i will be increasing. Is it the case that the weights are constrained to be positive, for example? Or is it the case that the algorithm works even when the generated sequence is not increasing, since the discriminator function would discriminate against such sequences and thus encourage the generator to produce increasing ones?
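Regarding the monotonicity question above, one common way to guarantee an increasing output sequence, whether or not it is what the authors do, is to have the generator emit positive inter-event gaps and cumulatively sum them. A minimal sketch, with function names assumed:

```python
import numpy as np

def softplus(z):
    # numerically stable log(1 + exp(z))
    return np.log1p(np.exp(-np.abs(z))) + np.maximum(z, 0.0)

def generate_event_times(raw_outputs):
    """Map unconstrained network outputs to strictly increasing event
    times: produce positive inter-event gaps, then cumulatively sum."""
    gaps = softplus(raw_outputs) + 1e-6   # positive inter-arrival times
    return np.cumsum(gaps)                # t_1 < t_2 < ... by construction

raw = np.array([-1.0, 0.0, 2.0, -3.0])
print(generate_event_times(raw))  # strictly increasing regardless of signs
```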
nips_2017_3085
Predictive State Recurrent Neural Networks We present a new model, Predictive State Recurrent Neural Networks (PSRNNs), for filtering and prediction in dynamical systems. PSRNNs draw on insights from both Recurrent Neural Networks (RNNs) and Predictive State Representations (PSRs), and inherit advantages from both types of models. Like many successful RNN architectures, PSRNNs use (potentially deeply composed) bilinear transfer functions to combine information from multiple sources. We show that such bilinear functions arise naturally from state updates in Bayes filters like PSRs, in which observations can be viewed as gating belief states. We also show that PSRNNs can be learned effectively by combining Backpropagation Through Time (BPTT) with an initialization derived from a statistically consistent learning algorithm for PSRs called two-stage regression (2SR). Finally, we show that PSRNNs can be factorized using tensor decomposition, reducing model size and suggesting interesting connections to existing multiplicative architectures such as LSTMs and GRUs. We apply PSRNNs to 4 datasets, and show that we outperform several popular alternative approaches to modeling dynamical systems in all cases.
This paper extends predictive state representations (PSRs) to a multilayer RNN architecture. A training algorithm for this model works in two stages: first the model is initialized by two-stage regression, where the output of each layer serves as the input for the next layer; next the model is refined by backpropagation through time. The number of parameters can be controlled by factorizing the model. Finally, the performance of this model is compared to LSTMs and GRUs on four datasets. The proposed extension is non-trivial and seems to be useful, as demonstrated by the experimental results. However, the initialization algorithm, which is a crucial learning stage, should be described more clearly. Specifically, the relation of the estimation equations (3-5) to Hefny et al. is not straightforward and requires a few intermediate explanations. On the other hand, Section 3 can be shortened. The introduction of the normalization in Equation (1) is a bit confusing as it takes a slightly different form later on. Two strong aspects of the paper are that the initialization algorithm exploits a powerful estimation procedure and that the Hilbert embedding provides a rich representation. More generally, exploiting existing approaches to learn dynamical systems is an interesting avenue of research. For example, [Haarnoja et al. 2016] proposed construction of RNNs by combining neural networks with Kalman Filter computation graphs. In the experimental section, can the authors specify what kernels have been used? It would be interesting to elaborate on the added value of additional layers. The experiments show the added value of two-layer models with respect to one-layer models in terms of test performance measures. Is this added value achieved immediately after initialization, or only after applying BPTT? Does the second layer improve by making the features richer? Does the second-stage regression learn the residuals of the first one? Minor comments and typos: - Figures are too small - Line 206: delete one 'the' Ref: Haarnoja et al. 2016, Backprop KF: Learning Discriminative Deterministic State Estimators
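For readers unfamiliar with bilinear state updates, here is a minimal sketch of the kind of update the paper builds on: a three-mode tensor contracted with the current state and the observation feature, followed by normalization. Shapes and names are my assumptions, not the paper's exact parameterization.

```python
import numpy as np

def bilinear_update(W, state, obs):
    """One bilinear filter step. W has shape (state_dim, state_dim, obs_dim);
    the observation 'gates' the belief state, and the result is renormalized."""
    new_state = np.einsum('ijk,j,k->i', W, state, obs)
    return new_state / (np.linalg.norm(new_state) + 1e-12)

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4, 3))
s = np.ones(4) / 2.0          # toy current state
o = rng.normal(size=3)        # toy observation feature
print(bilinear_update(W, s, o))  # unit-norm next state
```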
nips_2017_2874
Toward Robustness against Label Noise in Training Deep Discriminative Neural Networks Collecting large training datasets, annotated with high-quality labels, is costly and time-consuming. This paper proposes a novel framework for training deep convolutional neural networks from noisy labeled datasets that can be obtained cheaply. The problem is formulated using an undirected graphical model that represents the relationship between noisy and clean labels, trained in a semi-supervised setting. In our formulation, the inference over latent clean labels is tractable and is regularized during training using auxiliary sources of information. The proposed model is applied to the image labeling problem and is shown to be effective in labeling unseen images as well as reducing label noise in training on CIFAR-10 and MS COCO datasets.
The paper proposes a method for incorporating data with noisy labels into a discriminative learning framework (assuming some cleanly labeled data is also available), by formulating a CRF to relate the clean and noisy labels. The CRF sits on top of a neural network (e.g., a CNN in their experiments), and all parameters (CRF and NN) are trained jointly via an EM approach (utilizing persistent contrastive divergence). Experiments are provided indicating the model outperforms prior methods at leveraging noisy labels to perform well on classification tasks, as well as at inferring clean labels for noisily-labeled datapoints. The technique is generally elegant: the graphical model is intuitive, capturing relationships between the data, clean/noisy labels, and latent "hidden" factors that can encode structure in the noisy labels (e.g., presence of a common attribute described by many of the noisy labels), while permitting an analytically tractable factorized form. The regularization approach (pre-training a simpler unconditional generative model on the cleanly labeled data to constrain the variational approximation during the early part of training) is also reasonable and appears effective. The main downside of the technique is the use of persistent contrastive divergence during training, which (relative to pure SGD-style approaches) adds overhead and significant complexity. In particular, the need to maintain per-datapoint Markov chains could require a rather involved setup for truly large-scale noisy datasets with hundreds of millions to billions of datapoints (such as might be obtained from web-based or user-interaction data). The experiments are carefully constructed and informative. I was particularly happy to see the ablation study estimating the contribution of different aspects of the model; the finding that dropping the data <-> noisy-label link improves performance makes sense in retrospect, although it wasn't something I was expecting. I would have been curious to see an additional experiment using the full COCO dataset as the "clean" data, but leveraging additional Flickr data to improve over the baseline (what would be the gain?). The paper was quite readable, but could benefit from another pass to improve language clarity in a few places. For instance, the sentence beginning on line 30 took me a while to understand. Overall, I think the paper is a valuable contribution addressing an important topic, so I recommend acceptance.
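To illustrate the per-datapoint chain bookkeeping mentioned above, here is a schematic of persistent contrastive divergence for a toy pairwise binary model. The model, the single Gibbs sweep per update, and all names are assumptions for illustration; the paper's CRF is conditional and richer.

```python
import numpy as np

rng = np.random.default_rng(0)

def gibbs_sweep(y, J):
    """One Gibbs sweep for a toy pairwise model p(y) ~ exp(y^T J y),
    y in {0,1}^k, with J assumed symmetric."""
    for i in range(len(y)):
        field = 2.0 * (J[i] @ y - J[i, i] * y[i]) + J[i, i]
        p_on = 1.0 / (1.0 + np.exp(-field))
        y[i] = float(rng.random() < p_on)
    return y

# Persistent chains: one Markov-chain state kept per training example,
# advanced a little at every parameter update instead of being restarted.
chains = {}  # example id -> current chain state

def negative_phase_sample(example_id, J, k=10):
    y = chains.get(example_id, rng.integers(0, 2, size=k).astype(float))
    y = gibbs_sweep(y, J)      # advance the chain by one sweep
    chains[example_id] = y     # this dict is the per-datapoint overhead
    return y
```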
nips_2017_2683
Online Learning with Transductive Regret We study online learning with the general notion of transductive regret, that is, regret with modification rules applying to expert sequences (as opposed to single experts) that are representable by weighted finite-state transducers. We show how transductive regret generalizes existing notions of regret, including: (1) external regret; (2) internal regret; (3) swap regret; and (4) conditional swap regret. We present a general and efficient online learning algorithm for minimizing transductive regret. We further extend it to design efficient algorithms for the time-selection and sleeping expert settings. A by-product of our study is an algorithm for swap regret, which, under mild assumptions, is more efficient than existing ones, and a substantially more efficient algorithm for time-selection swap regret.
The paper considers the classic setting of prediction with expert advice and proposes a new notion of regret called "transductive regret", which measures the performance of the learner against the best from a class of finite-state transducers. Precisely, a transducer will read the history of past predictions of the learner and output a distribution over the next predictions; for forming the next prediction, the learner can take advantage of the class of transducers under consideration. The resulting notion of regret generalizes a number of known settings such as external and internal regret, swap regret and conditional swap regret. The main contribution is proposing an algorithm that achieves a regret of at most sqrt(E*T), where E is bounded by the number of transitions in the considered class of transducers. The authors also extend their algorithm to other settings involving time-selection functions and sleeping experts. The paper is very technical and seems to target a rather specialized audience. The considered setting is very general and highly abstract---which is actually my main concern about the paper. Indeed, the authors fail to motivate their very general setting appropriately: the only justification given is that the new notion generalizes the most general related regret notion so far, that of conditional swap regret. There is one specific example in Section 4.2 that was not covered by previous results, but I'm not sure I understand why this particular regret notion would be useful. Specifically, I fail to see how exactly the learner is "penalized" by having to measure its regret against a restricted set of experts after having picked another expert too many times---if anything, measuring regret against a smaller class just makes the problem *easier* for the learner. Indeed, it is trivial to achieve bounded regret in this setting: just choose "b" n times and "a" in all further rounds, thus obtaining a regret of exactly n, independently of the number of rounds T. The paper would really benefit from providing a more convincing motivating example. The paper is easy to read for the most part, with the critical exception of Section 4.1 that describes the concept of weighted finite-state transducers (WFSTs). This part was quite confusing for me to read, specifically because it is never explicitly revealed that in the specific online learning setting, labels will correspond to experts and the states will remain merely internal variables of WFSTs. Actually, the mechanism of how a WFST operates and produces output labels is never described either---while the reader can certainly figure this out after a bit of thought, I think a few explanatory sentences would be useful to remove ambiguity. The fact that the letters E and T are so heavily overloaded does not help readability either. The analysis techniques appear to be rather standard, with most technical tools apparently borrowed from Blum and Mansour (2007). However, this is difficult to tell as the authors do not provide any discussion in the main text about the intuition underlying the analysis or the crucial difficulties that needed to be overcome to complete the proofs. The authors also provide some computational improvements over the standard reduction of Blum and Mansour (2007) and its previous improved variants, but these improvements are, in my view, minor.
Actually, at the end of Section 5, the authors claim an exponential improvement over a naive implementation, but this seems to contradict the previous discussion at the beginning of the section about the work of Khot and Ponnuswami (2008). I would also like to bring the work of Maillard and Munos (ICML 2011) to the authors' attention: they also study a similar notion of regret relative to an even more general class of functions operating over histories of actions taken by the learner. This work is definitely relevant to the current paper and the similarities/differences beg to be discussed. Overall, while I appreciate the generality of the considered setting and the amount of technical work invested into proving the main results, I am not yet convinced about their usefulness. Indeed, I can't really see the potential impact of this work in the absence of a single example where the newly proposed regret notion is more useful than others. I hope to be convinced by the author response so that I can vote for acceptance---right now, I'm leaning a little bit towards suggesting rejection. Detailed comments ================= 052: Notation is somewhat inconsistent, e.g., the loss function l_t is sometimes in plain typeface, sometimes bold... 055: Missing parens in the displayed equation (this is a recurring error throughout the paper). 093: Is it really important to have O(\sqrt{L_T}) regret bounds for the base learner? After inspecting the proofs, requesting O(\sqrt{T}) seems to be enough. In fact, this would allow setting alpha = 1/\sqrt{T} artificially for every base learner and would make the statement of all theorems a bit simpler. 099: "t is both the number of rounds and the number of iterations"---I don't understand; I thought the number of iterations was \tau_t = poly(t)? Comments after rebuttal ======================= Thanks for the response. I now fully appreciate the improvement in the results of Section 5, and understand the algorithmic contribution a little better. I still wish that the paper would do a better job in describing the algorithmic and proof techniques, and also in motivating the study a little better. I'm still not entirely sold on the new motivating example provided in the response, specifically: how does such a benchmark "penalize" a learning algorithm, and how is this example different from the one by Mohri and Yang (2014) used to motivate conditional swap regret? I raise my score as a sign of my appreciation, but note again that the authors have to work quite a bit more to improve the presentation in the final version.
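For concreteness, the two classical notions that transductive regret generalizes can be computed empirically from a fixed history of plays and losses. The sketch below is purely illustrative and unrelated to the paper's algorithm:

```python
import numpy as np

def external_regret(actions, losses):
    """losses[t, a] = loss of expert a at round t; actions[t] = expert played."""
    incurred = losses[np.arange(len(actions)), actions].sum()
    best_fixed = losses.sum(axis=0).min()
    return incurred - best_fixed

def swap_regret(actions, losses):
    """Best hindsight gain from a swap function phi: every play of expert a
    is replaced by the single best alternative phi(a)."""
    incurred = losses[np.arange(len(actions)), actions].sum()
    swapped = 0.0
    for a in np.unique(actions):
        rounds = actions == a
        swapped += losses[rounds].sum(axis=0).min()  # best phi(a) on those rounds
    return incurred - swapped

losses = np.array([[0.0, 1.0], [0.0, 1.0], [1.0, 0.0]])
actions = np.array([1, 1, 0])
print(external_regret(actions, losses))  # 3.0 - 1.0 = 2.0
print(swap_regret(actions, losses))      # 3.0 - 0.0 = 3.0
```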
nips_2017_469
Unsupervised Image-to-Image Translation Networks Unsupervised image-to-image translation aims at learning a joint distribution of images in different domains by using images from the marginal distributions in individual domains. Since there exists an infinite set of joint distributions that can yield the given marginal distributions, one could infer nothing about the joint distribution from the marginal distributions without additional assumptions. To address the problem, we make a shared-latent space assumption and propose an unsupervised image-to-image translation framework based on Coupled GANs. We compare the proposed framework with competing approaches and present high-quality image translation results on various challenging unsupervised image translation tasks, including street scene image translation, animal image translation, and face image translation. We also apply the proposed framework to domain adaptation and achieve state-of-the-art performance on benchmark datasets. Code and additional results are available at https://github.com/mingyuliutw/unit.
This paper investigated the problem of unsupervised image-to-image translation (UNIT) using deep generative models. In contrast to existing supervised methods, the proposed approach is able to learn domain-level translation without paired images from two domains. Instead, the paper proposes to learn a joint distribution (over two domains) from a single shared latent variable using generative adversarial networks. Additional encoder networks are used to infer the latent code from each domain. The paper proposes to jointly train the encoder, decoder, and discriminator in an end-to-end style without paired images as supervision. Qualitative results on several image datasets demonstrate good performance of the proposed UNIT pipeline. == Qualitative Assessment == Unsupervised domain transfer is a challenging task in deep representation learning, and the proposed UNIT network demonstrates a promising direction based on generative models. In terms of novelty, the UNIT network can be treated as an extension of the coGAN paper (reference 17 in the paper) with joint training objectives using additional encoder networks. Overall, the proposed method is sound, with sufficient details including both qualitative and quantitative analysis. I like Table 1, which interprets and compares existing work as subnetworks in UNIT. Considering the fact that UNIT extends the coGAN paper, I would like to see more discussion and explicit comparisons between coGAN and UNIT in the paper. Performance-wise, there seems to be slight improvement compared to coGAN (Table 3 in the paper). However, the evaluation is done on the MNIST dataset, which may not be conclusive at all. Also, I wonder whether the images in Figure 8 are cherry-picked. Compared to the coGAN figures, the examples in Figure 8 have fewer visual artifacts. I encourage the authors to elaborate on this a little bit. * What's the benefit of having an encoder network for image-to-image translation? Image-to-image translation can also be applied to the coGAN model: (1) analysis-by-synthesis optimization [1] & [2]; or (2) training a separate inference network [3] & [4]. Can you possibly comment on the visual quality obtained by applying an optimization-based method to a pre-trained coGAN? I encourage the authors to elaborate on this a bit and provide side-by-side comparisons in the final version of the paper. * What's the benefit of joint training? As noted, one can train a separate encoder/inference network. It is not crystal clear to me why VAE-type training is beneficial. Maybe the reconstruction loss can help to stabilize the training a little bit, but I don't see much help when you add the KL penalty to the loss function. Please elaborate on this a little bit. Additional comments: * Consider style transfer as an alternative method for unsupervised image-to-image translation. Given the proposed weight-sharing architecture, I would assume the translated image at least preserves the structure of the input image (they are aligned to some extent). If so, style-transfer methods (either optimization-based or feed-forward approximations) can serve as baselines in most experiments. * Just out of curiosity, other than visual examination, what is the metric used when selecting models for presentation? Though the hyperparameters have little impact, as described in the ablation study, I wonder how you found the good architecture for presentation. At least quantitative evaluations can be used as a metric in the MNIST experiment for model selection. Since you obtained good results in most experiments, I wonder if this can be elaborated on a bit.
[1] Generative Visual Manipulation on the Natural Image Manifold, Zhu et al., ECCV 2016. [2] Reference 29 in the paper. [3] Picture: A Probabilistic Programming Language for Scene Perception, Kulkarni et al., CVPR 2015. [4] Generative Adversarial Text-to-Image Synthesis, Reed et al., ICML 2016.
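As a schematic of how translation works at test time under the shared-latent assumption (my own sketch; all handles are placeholders rather than the paper's API):

```python
# E1/E2 encode each domain into the shared latent space, G1/G2 decode from it;
# weight sharing between the networks enforces the shared-latent assumption.

def translate_1_to_2(x1, E1, G2):
    z = E1(x1)        # infer a shared latent code from a domain-1 image
    return G2(z)      # decode the code as a domain-2 image

def reconstruct_1(x1, E1, G1):
    return G1(E1(x1)) # within-domain VAE reconstruction path

# Toy stand-ins just to show the call pattern:
E1 = lambda x: x * 0.5
G2 = lambda z: z + 1.0
print(translate_1_to_2(2.0, E1, G2))  # 2.0 * 0.5 + 1.0 = 2.0
```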
nips_2017_2184
Learning A Structured Optimal Bipartite Graph for Co-Clustering Co-clustering methods have been widely applied to document clustering and gene expression analysis. These methods make use of the duality between features and samples such that the co-occurring structure of sample and feature clusters can be extracted. In graph-based co-clustering methods, a bipartite graph is constructed to depict the relation between features and samples. Most existing co-clustering methods conduct clustering on the graph constructed from the original data matrix, which doesn't have an explicit cluster structure; thus, they require a post-processing step to obtain the clustering results. In this paper, we propose a novel co-clustering method to learn a bipartite graph with exactly k connected components, where k is the number of clusters. The new bipartite graph learned in our model approximates the original graph but maintains an explicit cluster structure, from which we can immediately get the clustering results without post-processing. Extensive empirical results are presented to verify the effectiveness and robustness of our model.
The authors propose a new method for co-clustering. The idea is to learn a bipartite graph with exactly k connected components. This way, the clusters can be directly inferred and no further post-processing step (like executing k-means) is necessary. After introducing their approach, the authors conduct experiments on a synthetic data set as well as on four benchmark data sets. I think that the proposed approach is interesting. However, there are some issues. First, it is not clear to me how the synthetic data was generated. Based on the first experiment, I suppose it is too trivial anyway. Why did the authors not use k-means++ instead of standard k-means (which appears in 3/5 baselines)? Why did they not compare against standard spectral clustering? How did they initialize NMF? Randomly? Why not 100 random initializations like in the other methods? Why did they not use a second or third evaluation measure like the adjusted Rand index or normalized mutual information? Since the authors introduce another parameter lambda (besides k), I would like to see the clustering performance as a function of lambda to see how critical the choice of lambda really is. I see the argument why Algorithm 2 is faster than Algorithm 1, but I would like to see how they compare within the evaluation, because I doubt that they perform exactly the same. How much faster is Algorithm 2 compared to Algorithm 1? Second, the organization of the paper has to be improved. Figure 1 is referenced already on the first page but appears on the third page. The introduced baseline BSGP should be set as an algorithm on page two. Furthermore, the authors tend to use variables before introducing them, i.e., D_u, D_v and F in lines 61-63. Regarding the title: In what sense is the approach optimal? Aren't "bipartite" and "structured" the same in your case? Third, the language of the paper should be improved. There are some weird sentence constructions, especially when embedded sentences are present. Furthermore, there are too many uses of "the", singular/plural mistakes, and a problem with tenses, i.e., Section 5.2 should be in present tense. In addition to grammar, the mathematical presentation should be improved. Equations are part of the sentences and hence need punctuation marks. Due to the inadequate evaluation and the quality of the write-up, I vote for rejecting the paper. More detailed comments: *** Organization: - title: I do not get where the "optimal" comes from. Lines 33-34 say Bipartite=Structured, so why are both words within the title? - line 35: Figure 1 should be at the top of page two - lines 61-63 should appear as an algorithm - line 61: in order to understand the matrices D_u and D_v, the reader has to jump to line 79 and then search backwards for D, which is defined in line 74 - line 63: define D_u, D_v and F before using them *** Technical: - line 35: B is used for a graph but also for the data matrix (lines 35 and 57) - line 38: B_{ij} should be lowercase (see lines 53 and 60) - almost all equations: math is part of the text and therefore punctuation marks (.,) have to be used - equation (2) and line 70: set Ncut, cut and assoc with \operatorname{} - line 74: order: first introduce D, then d_{ii} and then L - line 74: if D is a diagonal matrix, then D_u and D_v are diagonal too. Say it! - equation (4): introduce I as the identity matrix - lines 82-85: this paragraph is problematic. What is a discrete value? Should the entries of U and V be integer?
(I think there are just a few orthogonal matrices which are integer.) How does someone obtain a discrete solution by running k-means on U and V? - line 93: if P (and therefore S) should be non-negative (see Equation (9)), then you can already say it in line 93. Furthermore, isn't this an assumption you have to make? Until line 93, A and B could contain negative numbers, but in the proposed approach this is no longer possible, right? - line 113: a sentence does not start with a variable, in this case \sigma - line 132: get rid of (n=n_1+n_2) or add a "where" before it - line 149: \min - line 150: Algorithms 1 and 2 are working on different graph Laplacians. I understand that Alg. 2 is faster than Alg. 1, but shouldn't there also be a difference in performance? What are the results? It should be noted if Alg. 1 performs exactly the same as Alg. 2. However, another graph Laplacian should make a difference. - line 160: Why does NMF need a similarity graph? - line 161: Did you tune the number of neighbors? - line 164: Why did you not use k-means++? - line 167: How does the performance of the proposed method vary with different values of the newly introduced parameter lambda? - line 171: How does the performance of the proposed method vary with different values of k? Only the ground-truth k was evaluated. - line 175: "two-dimensional matrix" Why not $n_1 \times n_2$? - Section 5.2: Please improve the description of how the synthetic data was generated. - line 182: if \delta is iid N(0,1) and you scale it with r greater than 0, then r*\delta is N(0,r^2) - line 183: according to the values of r, the noise was always less than the "data"? - Figure 2: Improve layout and caption. - Table 1: After having 100 executions of all algorithms, what are the standard deviations or standard errors of the values within the table? - line 183: r is not the portion of noise - line 183: you did not do an evaluation of robustness! - line 194: setting r=0.9 does not yield a high portion of noise. Perhaps the description of the synthetic data needs improvement. - Table 2: NMF has to be randomly initialized as well; hence I am missing the average over 100 executions - Table 2: What is +-4.59? One standard deviation? Standard error? Something else? - line 219: You forgot to mention the obvious limitation of your approach: you assume the data to be non-negative! - line 220: \ell_2-norm vs. L2-norm (line 53) - line 224: Please elaborate on the sentence "This is because high-dimensional..." *** Language: - line 4: graph-based - line 20: feature space - line 22: distribution (singular) - line 27: the duality [...] is - line 33: In graph-based - line 35: of such a bipartite graph - line 37: The affinity [...] is - line 38: propose, stay in present tense - line 39: conduct - line 40: "in this method" Which method is meant? - line 43: graph-based - lines 53-54: L2-norm vs.
Frobenius norm, with or without - - line 58: "view" - line 66: In the following, - line 72: component (singular) - lines 76-77: "denotes" is wrong, better use "Let Z=" - line 82: Note that - line 88: We can see from the previous section - lines 88-90: this is a strange sentence - line 93: we learn a matrix S that has - line 96: If S has exactly k connected - line 101: S, which have exactly k connected - lines 104-105: In the next subsection - line 132: to conduct an eigen-decomposition - line 134: to conduct an SVD - Algorithm 1: while not converged - line 141: "will be hold" - line 148: to conduct an SVD - line 149: "Algorithm 2 is much efficient than Algorithm 1" - Section 5.1: the section should be written in present tense (like Section 5.2) not in past tense - line 159: Non-negative Matrix Factorization (NMF) - line 204: data sets do not participate in experiments - line 229: graph-based - line 232: "we guaranteed the learned grapg to have explicitly k connected components" - unnecessary use of "the" in lines 33, 68, 76, 77, 81, 82, 102, 104, 111, 114, 118, 119, 120, 122, 125, 129, 141, 143, 145
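Stepping back from the line-level comments: the paper's "exactly k connected components" device rests on the standard fact that the multiplicity of the Laplacian's zero eigenvalue equals the number of connected components of the graph. A quick numerical check of that fact (the tolerance is an assumption for illustration):

```python
import numpy as np

def num_connected_components(S, tol=1e-8):
    """Count connected components of a weighted undirected graph with
    affinity matrix S via the multiplicity of the Laplacian's zero eigenvalue."""
    S = (S + S.T) / 2.0                 # symmetrize
    L = np.diag(S.sum(axis=1)) - S      # unnormalized graph Laplacian
    eigvals = np.linalg.eigvalsh(L)
    return int(np.sum(eigvals < tol))

# Two disjoint edges -> 2 connected components.
S = np.zeros((4, 4))
S[0, 1] = S[1, 0] = 1.0
S[2, 3] = S[3, 2] = 1.0
print(num_connected_components(S))  # 2
```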
nips_2017_2185
Learning Low-Dimensional Metrics This paper investigates the theoretical foundations of metric learning, focused on four key questions that are not fully addressed in prior work: 1) we consider learning general low-dimensional (low-rank) metrics as well as sparse metrics; 2) we develop upper and lower (minimax) bounds on the generalization error; 3) we quantify the sample complexity of metric learning in terms of the dimension of the feature space and the dimension/rank of the underlying metric; 4) we also bound the accuracy of the learned metric relative to the underlying true generative metric. All the results involve novel mathematical approaches to the metric learning problem, and also shed new light on the special case of ordinal embedding (aka non-metric multidimensional scaling).
Summary of the paper: The paper considers the problem of learning a low-rank / sparse and low-rank Mahalanobis distance under relative constraints dist(x_i,x_j) < dist(x_i,x_k) in the framework of regularized empirical risk minimization using trace norm / l_{1,2} norm regularization. The contributions are three theorems that provide 1) an upper bound on the estimation error of the empirical risk minimizer 2) a minimax lower bound on the error under the log loss function associated to the model, showing near-optimality of empirical risk minimization in this case 3) an upper bound on the deviation of the learned Mahalanobis distance from the true one (in terms of Frobenius norm) under the log loss function associated to the model. Quality: My main concern here is the close resemblance to Jain et al. (2016). Big parts of the present paper have a one-to-one correspondence to parts in that paper. Jain et al. study the problem of learning a Gram matrix G given relative constraints dist(x_i,x_j) < dist(x_i,x_k), but without being given the coordinates of the points x_i (ordinal embedding problem). The authors do a good job in explaining that the ordinal embedding problem is a special case of the considered metric learning problem (Section 2.4). However, I am wondering whether such a relationship in some sense holds the other way round too and how it could be exploited to derive the results in the present paper from the results in Jain et al. (2016) without repeating so many things: can't we exploit the relationship G=X^T K X to learn K from G? At least in the noise-free case, if G=Y^T Y we can simply set K=L^T L, where L solves Y^T = X^T L^T? I did not check the proofs in detail, but what I checked is correct. Also, many things can be found in Jain et al. (2016) already. In Line 16, d_K(x_i,x_j) is defined as (x_i-x_j)^T K (x_i-x_j), but this is not a metric. To be rigorous, one has to take the square root in this definition and replace d_K(x_i,x_j) by d_K(x_i,x_j)^2 in the following. In Corollary 2.1.1, the assumption is max_i \|x_i\|^2 = O(\sqrt{d}\log n), but in Theorem 2.1 it is max_i \|x_i\|^2 = 1. How can we have a weaker assumption in the corollary than in the theorem? There are a number of minor things that should be corrected, see below. Clarity: At some points, the presentation of the paper could be improved: Apparently, in Theorem 2.3 the set K_{\lambda,\gamma} corresponds to l_{1,2} norm regularization, but the definition from Line 106 has never been changed. That's quite confusing. Line 76: "a noisy indication of the triplet constraint d_{\hat{K}}(x_i,x_j) < d_{\hat{K}}(x_i,x_k)" --- in the rest of the paper, hat{K} denotes the estimate? And, if I understand correctly, we have y_t=-1 if the constraint is true. From the current formulation one would expect y_t=+1 in this case. Originality: I am missing a comparison to closely related results in the literature. What is the difference between Theorem 2.1 and Inequality (5) in combination with Example 6 in [3] (Bellet et al., 2015---I refer to the latest arxiv version)? The authors claim that "past generalization error bounds ... failed to quantify the precise dependence on ... dimension". However, the dimension p only appears via log p and is not present in the lower bound anymore, so it's unclear whether the provided upper bound really quantifies the precise dependence. Huang et al. (2011) is cited incorrectly: it DOES consider relative constraints and does NOT use Rademacher complexities.
Significance: In the context of metric learning, the results might be interesting, but a big part of the results (formulated in the setting of ordinal embedding --- see above) and techniques can be found already in Jain et al. (2016). Minor points: - Line 75, definition of T: in a triple (i,j,k), we assume j < k - Line 88: that's an equation and not a definition - Line 98: if I_n is used to denote the identity, why not use 1_n? - Line 99: XV instead of VX - Line 100/101: also define the K-norm used in Line 126 and the ordinary norm as Euclidean norm or operator norm - Be consistent with the names of the norm: in Line 101 you say "the mixed 1,2 norm", in Line 144 "a mixed l_{1,2} norm". - In Corollary 2.1.1: shouldn't the log n-term be squared? - Line 190: remove the dot - Line 212: I think "rank of the" is missing - Typos in Line 288, Line 372, Line 306, Line 229, Line 237, Line 240, Line 259 - Line 394: "We" instead of "I" - Line 335: for K^1 \neq K^2
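To make the metric point above concrete, a worked one-dimensional counterexample:

```latex
% Collinear points on the real line with K = 1:
%   x_1 = 0, \quad x_2 = 1, \quad x_3 = 2.
d_K(x_1,x_2) = d_K(x_2,x_3) = 1, \qquad
d_K(x_1,x_3) = 4 > d_K(x_1,x_2) + d_K(x_2,x_3) = 2,
% so the triangle inequality fails for d_K, while the square roots
% 1, 1, 2 satisfy it (here with equality).
```

Taking square roots restores the triangle inequality, which is exactly why the distinction matters for calling d_K a metric.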
nips_2017_963
Best Response Regression In a regression task, a predictor is given a set of instances, along with a real value for each point. Subsequently, she has to identify the value of a new instance as accurately as possible. In this work, we initiate the study of strategic predictions in machine learning. We consider a regression task tackled by two players, where the payoff of each player is the proportion of the points she predicts more accurately than the other player. We first revise the probably approximately correct learning framework to deal with the case of a duel between two predictors. We then devise an algorithm which finds a linear regression predictor that is a best response to any (not necessarily linear) regression algorithm. We show that it has linearithmic sample complexity, and polynomial time complexity when the dimension of the instances domain is fixed. We also test our approach in a high-dimensional setting, and show it significantly defeats classical regression algorithms in the prediction duel. Together, our work introduces a novel machine learning task that lends itself well to current competitive online settings, provides its theoretical foundations, and illustrates its applicability.
STRENGTHS: On some "abstract" level, this paper pursues an interesting direction: - I fully agree that "prediction is not done in isolation" (l24), and though this is a vague observation, it is worth pursuing this beyond established work (although I'm not an expert with a full overview of this field). - Making explicit the tradeoff between good-often and not-bad-always in deciding on a loss function is interesting. Overall, the writing and mathematical quality of the paper is good to very good, with some uncertainty regarding proofs though: - Well written and structured. - The paper is very strong in that the algorithmic and mathematical results are well connected (combine the VC theorems and lemmas to reach corollary 1 which shows sample bounds for their EPM algo) and cover the usual questions (consistency, runtime). - In terms of correctness of the mathematical results, it seems that the authors know what they are doing. I only checked one proof, that of Theorem 1 (via the second proof of Lemma 5), and it seems correct although I couldn't follow all steps. WEAKNESSES: For me the main issue with this paper is that the relevance of the *specific* problem that they study -- maximizing the "best response" payoff (l127) on test data -- remains unclear. I don't see a substantial motivation in terms of a link to settings (real or theoretical) that are relevant: - In which real scenarios is the objective given by the adversarial prediction accuracy they propose, in contrast to classical prediction accuracy? - In l32-45 they pretend to give a real example but for me this is too vague. I do see that in some scenarios the loss/objective they consider (high accuracy on the majority) kind of makes sense. But I imagine that such losses have already been studied, without necessarily referring to "strategic" settings. In particular, how is this related to robust statistics, Huber loss, precision, recall, etc.? - In l50 they claim that "perhaps even in most [...] practical scenarios" predicting accurately on the majority is most important. I disagree: in many areas with safety issues such as robotics and self-driving cars (generally: control), the models are allowed to have small errors, but by no means may have large errors (imagine a self-driving car significantly overestimating the distance to the next car in 1% of the situations). Related to this, in my view they fall short of what they claim as their contribution in the introduction and in l79-87: - Generally, this seems like only a very first step towards real strategic settings: in light of what they claim ("strategic predictions", l28), their setting is only partially strategic/game-theoretic as the opponent doesn't behave strategically (i.e., take into account the other strategic player). - In particular, in the experiments, it doesn't come as a complete surprise that the opponent can be outperformed w.r.t. the multi-agent payoff proposed by the authors, because the opponent simply doesn't aim at maximizing it (e.g. in the experiments he maximizes classical SE and AE). - Related to this, in the experiments it would be interesting to see the comparison of the classical squared/absolute error on the test set as well (since this is what LSE claims to optimize). - I agree that "prediction is not done in isolation", but I don't see the "main" contribution of showing that the "task of prediction may have strategic aspects" yet. REMARKS: What's the "true" payoff in Table 1? I would have expected to see the test set payoff in that column.
Or is it the population (complete-sample) empirical payoff? Have you looked into the work by Vapnik on teaching a learner with side information? This looks a bit similar to having your discrepancy p alongside x,y.
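For reference, the duel payoff discussed above is simple to state in code. The tie handling below (splitting ties evenly) is my assumption; the paper may treat ties differently:

```python
import numpy as np

def duel_payoff(pred_a, pred_b, y):
    """Fraction of test points on which predictor A beats predictor B,
    i.e., is strictly closer to the true value; ties count half."""
    err_a = np.abs(pred_a - y)
    err_b = np.abs(pred_b - y)
    wins = (err_a < err_b).mean()
    ties = (err_a == err_b).mean()
    return wins + 0.5 * ties

y = np.array([1.0, 2.0, 3.0, 4.0])
print(duel_payoff(np.array([1.1, 2.1, 3.1, 9.0]),   # good-often, bad-once
                  np.array([1.5, 2.5, 3.5, 4.5]),   # uniformly mediocre
                  y))                                # 0.75
```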
nips_2017_2840
On the Complexity of Learning Neural Networks The stunning empirical successes of neural networks currently lack rigorous theoretical explanation. What form would such an explanation take, in the face of existing complexity-theoretic lower bounds? A first step might be to show that data generated by neural networks with a single hidden layer, smooth activation functions and benign input distributions can be learned efficiently. We demonstrate here a comprehensive lower bound ruling out this possibility: for a wide class of activation functions (including all currently used), and inputs drawn from any logconcave distribution, there is a family of one-hidden-layer functions whose output is a sum gate, that are hard to learn in a precise sense: any statistical query algorithm (which includes all known variants of stochastic gradient descent with any loss function) needs an exponential number of queries even using tolerance inversely proportional to the input dimensionality. Moreover, this hard family of functions is realizable with a small (sublinear in dimension) number of activation units in the single hidden layer. The lower bound is also robust to small perturbations of the true weights. Systematic experiments illustrate a phase transition in the training error as predicted by the analysis.
The paper presents a new information-theoretic lower bound for learning neural networks. In particular, it gives an exponential statistical query lower bound that applies even to neural nets with only one hidden layer using common activation functions and log-concave input distributions. (In fact, the results apply to a slightly more general family of data distributions.) By assuming log-concavity for the input distribution, the paper proves a lower bound for a more realistic setting than previous works which applied to discrete distributions. To prove this SQ lower bound, the paper extends the notion of statistical dimension to regression problems and proves that a family of functions which can be represented by neural nets with one hidden layer has exponentially high statistical dimension with respect to any log-concave distribution. (I was surprised to read that statistical dimension and SQ lower bounds have not been studied for regression problems before; however, my cursory search did not turn up any previous papers that would have studied this problem.) It is not clear to me if there are any novel technical ideas in the proofs, but the idea of studying the SQ complexity of neural networks in order to obtain lower bounds for a more realistic class of nets is (as far as I can tell) novel and clever. The specific class of functions used to prove the lower bound is, as in most lower bound proofs, pretty artificial and different from functions one would encounter in practice. Therefore the most interesting consequence of the results is that reasonable-looking assumptions are not sufficient to prove upper bounds for learning NN's (as the authors explain in the abstract). On the other hand, the class of functions studied is not of practical interest, which makes me question the value of the experiments in the paper (beyond pleasing potential reviewers who dislike papers without experimental results). With the tight page limits of NIPS, it seems like the space taken up by the experimental results could have a better use. My main problem with the paper is the clarity of the write-up. I find the paper hard to read. I expect technical lemmas to be difficult to follow, but here even the main theorem statements are hard to parse, with a lot of notation and formulas and little intuitive explanation. In fact, even the "Proof ideas" section (1.4), which I expected to contain a reader-friendly, high-level intuitive explanation of the main ideas, was disappointingly difficult to follow. I had to reread some sentences several times before I understood them. Of course, it is possible that this is due to my limitations instead of the paper's, but the main conceptual ideas of the paper don't seem so complex that this complexity would make it impossible to give an easily readable overview of the theorems and proofs. Overall, this seems to be a decent paper that many in the NIPS community would find interesting to read, but in my opinion a substantial portion of it would have to be rewritten to improve its clarity.
nips_2017_60
Breaking the Nonsmooth Barrier: A Scalable Parallel Method for Composite Optimization Due to their simplicity and excellent performance, parallel asynchronous variants of stochastic gradient descent have become popular methods to solve a wide range of large-scale optimization problems on multi-core architectures. Yet, despite their practical success, support for nonsmooth objectives is still lacking, making them unsuitable for many problems of interest in machine learning, such as the Lasso, group Lasso or empirical risk minimization with convex constraints. In this work, we propose and analyze PROXASAGA, a fully asynchronous sparse method inspired by SAGA, a variance reduced incremental gradient algorithm. The proposed method is easy to implement and significantly outperforms the state of the art on several nonsmooth, large-scale problems. We prove that our method achieves a theoretical linear speedup with respect to the sequential version under assumptions on the sparsity of gradients and block-separability of the proximal term. Empirical benchmarks on a multi-core architecture illustrate practical speedups of up to 12x on a 20-core machine.
The main contribution of this paper is to offer a convergence proof for minimizing sum_i f_i(x) + g(x), where the f_i are smooth and g is nonsmooth, in an asynchronous setting. The problem is well-motivated; to my knowledge, there is indeed no known proof for this. A key aspect of this work is the block separability of the variable, which allows for some decomposition gain in the prox step (since the gradient of each f_i may only affect certain indices of x). This is an important practical assumption, as otherwise the prox causes locks, which may be unavoidable. (It would be interesting to see element-wise proxes, such as shrinkage, being decomposed as well.) There are two main theoretical results. Theorem 1 gives a convergence rate for ProxSAGA, which is incrementally better than a previous result. Theorem 2 gives the rate for an asynchronous setting, which is more groundbreaking. Overall I think it is a good paper, with a very nice theoretical contribution that is also practically very useful. I think it would be stronger without the sparsity assumption, but the authors also say that investigating asynchrony without sparsity for nonsmooth functions is an open problem, so that is acknowledged. potentially major comment: - Theorem 2 relies heavily on a result from Leblond 2017, which assumes smooth g's. It is not clear in the proof that the parts borrowed from Leblond 2017 do not require this assumption. minor comments about paper: - line 61, semicolon after "sparse" - line 127: "a a" minor comments about proof (mostly theorem 1): - eq. (14) is backwards (x-z, not z-x) - line 425: do you mean x+ = x - gamma vi? - Lemma 7 can divide RHS by 2 (triangle inequality in line 433 is loose) - line 435: not the third term (4th term) - line 444: I am not entirely clear why EDi = I. Di has to do with problem structure, not asynchrony? - eq (33): those two are identical; typo somewhere with missing Ti? - line 447: Not sure where that split is coming from, and the relationship between alpha i, alpha i +, and grad fi. - line 448: there is some mixup in the second line, a nabla f_i(x*) became nabla f_i(x). Some of these seem pretty necessary to fix, but none of these seem fatal, especially since the result is very close to a previously proven result. I mostly skimmed theorem 2's proof; it looks reasonable, apart from the concern of borrowing results from Leblond (assumptions on g need to be clearer)
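For context, here is a sequential sketch of the prox-SAGA step that the paper makes asynchronous, using the lasso soft-thresholding operator as the example prox. The asynchronous sparse-update bookkeeping, which is the paper's actual contribution, is deliberately omitted, and recomputing the gradient average each step is just for clarity (real implementations keep a running average):

```python
import numpy as np

def soft_threshold(x, t):
    """Prox of t * ||.||_1 (elementwise shrinkage)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def prox_saga_step(x, i, grad_f, memory, gamma, lam):
    """One sequential step for min_x mean_i f_i(x) + lam * ||x||_1.
    memory[i] stores the last gradient of f_i that was evaluated."""
    g_new = grad_f(i, x)
    v = g_new - memory[i] + memory.mean(axis=0)  # variance-reduced gradient
    memory[i] = g_new
    return soft_threshold(x - gamma * v, gamma * lam)
```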
nips_2017_2279
A Decomposition of Forecast Error in Prediction Markets We analyze sources of error in prediction market forecasts in order to bound the difference between a security's price and the ground truth it estimates. We consider cost-function-based prediction markets in which an automated market maker adjusts security prices according to the history of trade. We decompose the forecasting error into three components: sampling error, arising because traders only possess noisy estimates of ground truth; market-maker bias, resulting from the use of a particular market maker (i.e., cost function) to facilitate trade; and convergence error, arising because, at any point in time, market prices may still be in flux. Our goal is to make explicit the tradeoffs between these error components, influenced by design decisions such as the functional form of the cost function and the amount of liquidity in the market. We consider a specific model in which traders have exponential utility and exponential-family beliefs representing noisy estimates of ground truth. In this setting, sampling error vanishes as the number of traders grows, but there is a tradeoff between the other two components. We provide both upper and lower bounds on market-maker bias and convergence error, and demonstrate via numerical simulations that these bounds are tight. Our results yield new insights into the question of how to set the market's liquidity parameter and into the forecasting benefits of enforcing coherent prices across securities.
Summary: This paper presents several results for analyzing the error of forecasts made using prediction markets. These results can be used to compare different choices for the cost function and give principled guidance about setting the liquidity parameter b when designing markets. The first key idea is to decompose the error into three components: sampling error, market-maker bias (at equilibrium), and convergence error. Intuitively, sampling error is due to the empirical distribution of traders' beliefs not matching the unknown true distribution over states, the market-maker bias is the asymptotic error due to the market's rules influencing trader behavior, and the convergence error is the distance from the market's prices to the equilibrium prices after a fixed number of trades. To make this decomposition precise, the authors introduce two notions of equilibria for the market: market-clearing and market-maker equilibria. Intuitively, a market-clearing equilibrium is a set of security prices and allocations of securities and cash to the N traders such that no trader has incentive to buy or sell securities at the given prices (in this case, the traders need not interact via the market!). Similarly, a market-maker equilibrium is a set of prices and allocations of securities and cash to the N traders that can be achieved by trading through the market and for which no trader has incentive to buy or sell securities through the market at the current prices. With this, the sampling error is the difference between the true prices and market-clearing equilibrium prices, the market-maker bias is the difference between the market-clearing and market-maker equilibrium prices, and the convergence error is the difference between the market-maker equilibrium prices and the prices actually observed in the market after t trades. Only the market-maker bias and the convergence error depend on the design of the market, so the authors focus on understanding the influence of the market's cost function and liquidity parameter on these two quantities. For the remainder of the paper, the authors focus on the exponential trader model introduced by Abernethy et al. (EC 2014), where each trader has an exponential utility function and their beliefs are exponential family distributions. The authors then recast prior results from Abernethy et al. (EC 2014) and Frongillo et al. (NIPS 2015) to show that in this case, the market-clearing and market-maker equilibria are unique and have prices/allocations that can be expressed as the solutions to clean optimization problems. From these results, it is possible to bound the market-maker bias by O(b), where b is the liquidity parameter of the market. Unfortunately, this upper bound is somewhat pessimistic and, since it does not have an accompanying lower bound, it can't be used to compare different cost functions for the market. The authors then present a more refined local analysis, under the assumption that the cost function satisfies a condition the authors call convex+. These results give a precise characterization of the relationship between the market-clearing and market-maker equilibrium prices, up to second-order terms. To study the convergence error, the authors consider the following market protocol: on each round the market chooses a trader uniformly at random and that trader buys the bundle of securities that optimizes their utility given the current state of the market and their own cash/security allocations. Frongillo et al.
(NIPS 2015) showed that under this model, the updates to the vector of security allocations to the N traders exactly correspond to randomized block-coordinate descent on a particular potential function. The standard analysis of randomized block-coordinate descent shows that the suboptimality decreases at a rate O(gamma^t) for some gamma less than one. Under the assumption that the market cost function is convex+, this paper shows both upper and lower bounds on the convergence rate, as well as gives an explicit characterization of gamma in terms of the cost function C and the liquidity parameter b. We are able to combine the two above results to compare different cost functions and liquidity parameters. In particular, we can compare two different cost functions by setting their liquidity parameters to ensure the market-maker bias of the two cost functions is equal, and then comparing the rates of the two resulting convergence errors. The authors apply this analysis to two cost functions, the Logarithmic Market Scoring Rule (LMSR) and Sum of Independent LMSRs (IND), showing that for the same asymptotic bias, IND requires at least 1/2 and no more than 2 times as many trades as LMSR to achieve the same error. Their theoretical findings are also verified empirically. Comments: The results in the paper are interesting and nontrivial. One limitation, however, is that they seem to only apply under the exponential trader model, where each trader has an exponential family belief over outcomes and a risk-averse exponential utility function. If the results are more generally applicable (e.g., maybe for other trader models we can have similar characterizations and analysis of the two market equilibria), it may be helpful to include some discussion of this in the paper. Also, is the exponential trader model inspired by the behavior of real traders in prediction markets? In the comparison of LMSR and IND, the authors choose the liquidity parameters of the two markets so that the market-maker bias is equal. An alternative comparison that might be interesting would be to fix a time horizon of T trades and choose the market liquidity parameter to give the most accurate forecasts after T trades. It's also a bit surprising that the analysis seems to suggest that LMSR and IND are roughly comparable in terms of the number of trades required to get the same level of accuracy. This result seems to contradict the intuition that maintaining coherence between the security prices has informational advantages. Is it the case that IND requires the liquidity parameter b to be set larger and this increases the potential loss for the market? It may be easier to understand the definition of market-clearing and market-maker equilibria if the order of their definitions were reversed. The market-clearing equilibrium definition comes immediately after the definition of the market, but does not require that the traders buy/sell securities through the market, and this took me some time to understand. The new upper and lower bounds for randomized block-coordinate descent for convex+ functions also sound interesting, and stating them in their general form might help the paper appeal to a wider audience.
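For readers less familiar with cost-function-based market makers, the LMSR quantities used in the comparison above have standard closed forms: C(q) = b log sum_i exp(q_i/b), instantaneous prices given by the softmax of q/b, and a trade costing C(q + bundle) - C(q). A small sketch (the function names are mine):

```python
import numpy as np

def lmsr_cost(q, b):
    """LMSR cost function C(q) = b * log(sum_i exp(q_i / b))."""
    z = q / b
    return b * (np.log(np.sum(np.exp(z - z.max()))) + z.max())  # stable logsumexp

def lmsr_prices(q, b):
    """Instantaneous prices = gradient of C = softmax(q / b)."""
    z = q / b
    e = np.exp(z - z.max())
    return e / e.sum()

def trade_cost(q, bundle, b):
    """What a trader pays to buy `bundle` shares at outstanding-share state q."""
    return lmsr_cost(q + bundle, b) - lmsr_cost(q, b)

q = np.zeros(3)
print(lmsr_prices(q, b=10.0))                       # uniform prices 1/3
print(trade_cost(q, np.array([5.0, 0.0, 0.0]), 10.0))
```

Larger b makes the prices respond less to any fixed trade, which is exactly the liquidity/bias tradeoff at the heart of the paper.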
nips_2017_2442
SafetyNets: Verifiable Execution of Deep Neural Networks on an Untrusted Cloud Inference using deep neural networks is often outsourced to the cloud since it is a computationally demanding task. However, this raises a fundamental issue of trust. How can a client be sure that the cloud has performed inference correctly? A lazy cloud provider might use a simpler but less accurate model to reduce its own computational load, or worse, maliciously modify the inference results sent to the client. We propose SafetyNets, a framework that enables an untrusted server (the cloud) to provide a client with a short mathematical proof of the correctness of inference tasks that they perform on behalf of the client. Specifically, SafetyNets develops and implements a specialized interactive proof (IP) protocol for verifiable execution of a class of deep neural networks, i.e., those that can be represented as arithmetic circuits. Our empirical results on three- and four-layer deep neural networks demonstrate the run-time costs of SafetyNets for both the client and server are low. SafetyNets detects any incorrect computations of the neural network by the untrusted server with high probability, while achieving state-of-the-art accuracy on the MNIST digit recognition (99.4%) and TIMIT speech recognition tasks (75.22%).
The submission is interested in allowing an entity ("the verifier") to outsource the training of neural networks to a powerful but untrusted service provider ("the prover"), while obtaining a guarantee that the prover correctly trained the neural network. Interactive proofs (IPs) are a kind of cryptographic protocol that provide such a guarantee. The dominant cost in neural network training lies in evaluating the network on many different inputs (at least, this was my understanding -- the submission could be a bit clearer about this point for the benefit of readers who are not experts on how neural networks are trained in practice). Hence, the submission is focused on giving a very efficient IP for evaluating a neural network on many different inputs.

The IP given in the submission applies to a rather restricted class C of neural networks: those that exclusively use quadratic activation functions and sum-pooling gates. It does not support popular activation functions like ReLU, sigmoid, or softmax, or max-pooling or stochastic pooling. The reason for this restriction is technical: existing IP techniques are well-suited mainly to "low-degree" operations, and quadratic activation functions and sum-pooling operations are low-degree. Despite the fact that the IP applies to a rather restricted class of neural networks, the submission's experimental section states that this class of networks nonetheless achieves state-of-the-art performance on a few benchmark learning tasks.

The submission represents a quite direct application of existing IP techniques (most notably, a highly efficient IP for matrix multiplication due to Thaler). The main technical contribution of the IP protocol lies in the observation that evaluating a neural network from the class C on many different inputs can be implemented directly via matrix multiplications (this observation exploits the linear nature of neurons) and squaring operations. This allows the authors to make use of the highly efficient IP for matrix multiplication due to Thaler. Without this observation, the resulting IP would be much more costly, especially for the prover, as Thaler's IP for matrix multiplication is far more efficient than the general IPs for circuit-checking that have appeared in the literature.

In summary, I view the main contributions of the submission as two-fold. First, the authors observe that evaluating any neural network from a certain class (described above) on many different inputs can be reduced to operations for which highly efficient IPs are already known. Second, the submission shows experimentally that although this class is restrictive, it is still powerful enough to achieve state-of-the-art performance on some learning tasks.

Evaluation: Although the submission contains a careful and thorough application of interactive proof techniques, it has a few weaknesses that I think leave it below the bar for NIPS. First, as a rather direct application of known IP methods, the technical novelty is a bit low. Second, the class of networks supported is restrictive (though, again, the experiments demonstrate that the class performs quite well on some learning tasks). Third, and perhaps most importantly, is an issue of motivation. The interactive proof in the submission allows the verifier to ensure that the neural network was correctly trained on a specific training set using a specific training algorithm. But people shouldn't care *how* the network was trained -- they should only care that the trained network makes accurate predictions (i.e., has low generalization error). Hence, there is an obvious alternative solution to the real problem that the submission does not mention or compare against: the prover can simply send the final trained-up network to the verifier. The verifier can check that this final network makes accurate predictions simply by evaluating it on a relatively small, randomly chosen selection of labeled examples and confirming that its error on these labeled examples is low. Based on the reported numbers for the verifier (i.e., an 8x-80x speedup for the verifier compared to doing the full training itself with no prover), this naive solution is in many cases likely to be more efficient for the verifier than the actual solution described in the paper.
nips_2017_3022
Self-Supervised Intrinsic Image Decomposition Intrinsic decomposition from a single image is a highly challenging task, due to its inherent ambiguity and the scarcity of training data. In contrast to traditional fully supervised learning approaches, in this paper we propose learning intrinsic image decomposition by explaining the input image. Our model, the Rendered Intrinsics Network (RIN), joins together an image decomposition pipeline, which predicts reflectance, shape, and lighting conditions given a single image, with a recombination function, a learned shading model used to recompose the original input based on the intrinsic image predictions. Our network can then use unsupervised reconstruction error as an additional signal to improve its intermediate representations. This allows large-scale unlabeled data to be useful during training, and also enables transferring learned knowledge to images of unseen object categories, lighting conditions, and shapes. Extensive experiments demonstrate that our method performs well on both intrinsic image decomposition and knowledge transfer.
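A minimal PyTorch sketch of the training signal described above: decompose the input into reflectance, normals, and light, re-render shading from (normals, light) with a learned shading model, and recombine into a reconstruction that can be penalized without labels. The architectures, channel counts, and light parameterization here are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU())

class Decomposer(nn.Module):
    def __init__(self):
        super().__init__()
        self.trunk = conv_block(3, 32)
        self.reflectance = nn.Conv2d(32, 3, 3, padding=1)  # albedo head
        self.normals = nn.Conv2d(32, 3, 3, padding=1)      # shape head
        self.light = nn.Linear(32, 4)                      # point-light params

    def forward(self, img):
        h = self.trunk(img)
        light = self.light(h.mean(dim=(2, 3)))             # global pooling
        return self.reflectance(h), self.normals(h), light

class ShadingNet(nn.Module):
    """Learned shading model: (normals, light) -> single-channel shading."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(conv_block(3 + 4, 32),
                                 nn.Conv2d(32, 1, 3, padding=1))

    def forward(self, normals, light):
        b, _, hgt, wdt = normals.shape
        lmap = light.view(b, -1, 1, 1).expand(b, light.shape[1], hgt, wdt)
        return self.net(torch.cat([normals, lmap], dim=1))

decomposer, shader = Decomposer(), ShadingNet()
img = torch.rand(2, 3, 64, 64)                             # unlabeled batch
albedo, normals, light = decomposer(img)
recon = albedo * shader(normals, light)                    # recombination
loss = nn.functional.mse_loss(recon, img)                  # self-supervised
loss.backward()
```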
The paper presents an interesting approach to the intrinsic image decomposition problem: given an input RGB image, it first decomposes it into shape (normals), reflectance (albedo), and illumination (point light) using an encoder-decoder deep architecture with 3 outputs. Then there is another encoder-decoder that takes the predicted normals and light and outputs the shading of the shape. Finally, the result comes from a multiplication between the estimated reflectance (from the 1st encoder-decoder) and the estimated shading.

The idea of having a reconstruction loss to recover the input image is interesting, but I believe it is only partially employed in the paper. The network architecture still needs labeled data for the initial training. Also, in lines 164-166 the authors mention that the unlabeled data are used together with labeled data so the "representations do not shift too far from those learnt". Does this mean that the unlabeled data corrupts the network in a way that the reconstruction is valid but the intermediate results are not correct? That would imply that such data-driven inference, without taking into account the physical properties of light/material/geometry, is not optimal.

I found it difficult to parse the experiments and how the network is being deployed. How are the train/validation/test sets created? For example, in 4.1, do the motorbike numbers correspond to a test set of motorbikes that were not seen during the initial training (it is mentioned that the held-out dataset is airplanes)? And for decomposing a new shape into its intrinsic properties, do you directly take the output of the 3 decoders, or do you have another round of training that includes the test data?

Another important limitation is that the illumination model is just a single point light source. This simple illumination model (even a spotlight) cannot be applied to real images. Relatedly, the experiments are only on the synthetic dataset proposed in the paper, and there is no comparison with [13, 15] (or a discussion of why it is missing). Without testing on real images or on a benchmark such as MIT intrinsics, it is difficult to argue about the effectiveness of the method outside of synthetic images. Also, is Figure 3 the output of the network, or is it an illustration of what is not captured by simple Lambertian shading?

There is some related work that is missing, both for radiometric and learned decomposition into intrinsic properties.

Decomposition into shape/illumination/reflectance:
- Single Image Multimaterial Estimation, Lombardi et al., CVPR 2012
- Shape and Reflectance Estimation in the Wild, Oxholm et al., PAMI 2016 / ECCV 2012
- Reflectance and Illumination Recovery in the Wild, Lombardi et al., PAMI 2016 / ECCV 2012
- Deep Reflectance Maps, Rematas et al., CVPR 2016
- Decomposing Single Images for Layered Photo Retouching, Innamorati et al., CGF 2017 (arXiv 2016)
- Deep Outdoor Illumination Estimation, Hold-Geoffroy et al., CVPR 2017 (arXiv 2016)

Learning shading using deep learning:
- Deep Shading: Convolutional Neural Networks for Screen-Space Shading, Nalbach et al., CGF 2017 (arXiv 2016)
nips_2017_3576
A General Framework for Robust Interactive Learning We propose a general framework for interactively learning models, such as (binary or non-binary) classifiers, orderings/rankings of items, or clusterings of data points. Our framework is based on a generalization of Angluin's equivalence query model and Littlestone's online learning model: in each iteration, the algorithm proposes a model, and the user either accepts it or reveals a specific mistake in the proposal. The feedback is correct only with probability p > 1/2 (and adversarially incorrect with probability 1 − p), i.e., the algorithm must be able to learn in the presence of arbitrary noise. The algorithm's goal is to learn the ground truth model using few iterations. Our general framework is based on a graph representation of the models and user feedback. To be able to learn efficiently, it is sufficient that there be a graph G whose nodes are the models, and (weighted) edges capture the user feedback, with the property that if s, s* are the proposed and target models, respectively, then any (correct) user feedback s' must lie on a shortest s-s* path in G. Under this one assumption, there is a natural algorithm, reminiscent of the Multiplicative Weights Update algorithm, which will efficiently learn s* even in the presence of noise in the user's feedback. From this general result, we rederive with barely any extra effort classic results on learning of classifiers and a recent result on interactive clustering; in addition, we easily obtain new interactive learning algorithms for ordering/ranking.
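The sketch below illustrates the multiplicative-weights learner described above on a toy graph of models (a 5x5 grid, where edges are single local edits). The proposal rule (the current maximum-weight node) and the downweighting factor beta are illustrative simplifications; the only property taken from the abstract is that correct feedback s' lies on a shortest s-s* path, which the consistency test encodes.

```python
from collections import deque
import random

def bfs_dist(adj, src):
    d = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in d:
                d[v] = d[u] + 1
                q.append(v)
    return d

# Models = nodes of a 5x5 grid; edges = single local edits between models.
nodes = [(i, j) for i in range(5) for j in range(5)]
adj = {u: [v for v in nodes if abs(u[0] - v[0]) + abs(u[1] - v[1]) == 1]
       for u in nodes}
dist = {u: bfs_dist(adj, u) for u in nodes}      # all-pairs shortest paths

random.seed(0)
target = (4, 4)                                  # ground-truth model s*
p, beta = 0.8, 0.5                               # feedback correct w.p. p
w = {u: 1.0 for u in nodes}                      # multiplicative weights

for _ in range(60):
    s = max(nodes, key=lambda u: w[u])           # propose max-weight model
    toward = [v for v in adj[s] if dist[v][target] < dist[s][target]]
    if not toward:                               # s is the target itself
        if random.random() < p:
            break                                # user (correctly) accepts
        fb = random.choice(adj[s])               # adversarially wrong edit
    else:
        fb = (random.choice(toward) if random.random() < p
              else random.choice(adj[s]))
    for u in nodes:                              # downweight inconsistent models
        if dist[s][u] != 1 + dist[fb][u]:        # fb not on a shortest s-u path
            w[u] *= beta

print("proposed:", max(nodes, key=lambda u: w[u]), "target:", target)
```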
The paper proposes a general framework for interactive learning. In the framework, the machine learning models are represented as the nodes in a graph G and the user feedback is represented as weighted edges in G. Under the assumption that "if s, s* are the proposed and target models, then any (correct) user feedback s' must lie on the shortest s-s* path in G", the authors show that the Multiplicative Weights Update algorithm can efficiently learn the target model. The framework can be applied to three important machine learning tasks: ranking, clustering, and classification. The problem investigated in the paper is interesting and important. The theoretical results of the paper are convincing. However, these results are not empirically verified on real tasks. My comments on the paper are listed as follows:

Pros:
1. An interactive learning framework which models the models and user feedback with a graph. The framework can learn the target model in the presence of random noise in the user's feedback.
2. The framework is really general and can be applied to major machine learning tasks such as classification, clustering, and ranking.

Cons:
1. My major concern about the paper is the absence of empirical evaluation of the proposed framework. The major assumption of the framework is that "any (correct) user feedback s' must lie on the shortest s-s* path in G". Also, the authors claimed that the proposed framework is robust, in that it can learn the correct s* even in the presence of random noise in the user's feedback. It is necessary to verify the correctness of this assumption and these conclusions on real data and real tasks. The framework is applied to the tasks of ranking, clustering, and classification, yielding interactive ranking, interactive clustering, and interactive classification models. It is necessary to compare these derived models with state-of-the-art models to show their effectiveness. Currently, the authors show only theoretical results.
2. There is no conclusion section in the paper. It would be better to summarize the contributions of the paper in a final section.
nips_2017_253
Gated Recurrent Convolution Neural Network for OCR Optical Character Recognition (OCR) aims to recognize text in natural images. Inspired by a recently proposed model for general image classification, the Recurrent Convolution Neural Network (RCNN), we propose a new architecture named Gated RCNN (GRCNN) for solving this problem. Its critical component, the Gated Recurrent Convolution Layer (GRCL), is constructed by adding a gate to the Recurrent Convolution Layer (RCL), the critical component of RCNN. The gate controls the context modulation in the RCL and balances the feed-forward information and the recurrent information. In addition, an efficient Bidirectional Long Short-Term Memory (BLSTM) is built for sequence modeling. The GRCNN is combined with the BLSTM to recognize text in natural images. The entire GRCNN-BLSTM model can be trained end-to-end. Experiments show that the proposed model outperforms existing methods on several benchmark datasets including IIIT-5K, Street View Text (SVT) and ICDAR.

(Figure 1: Illustration of using RCL with T = 2 for OCR.)

A Recurrent Convolution Neural Network (RCNN) was proposed for general image classification [22]; it simulates the anatomical fact that recurrent connections are ubiquitous in the neocortex. It is believed that the recurrent synapses in the neocortex play an important role in context modulation during visual recognition. A feed-forward model can only capture context in higher layers, where units have larger receptive fields, but this information cannot modulate the units in lower layers that are responsible for recognizing smaller objects. Hence, using recurrent connections within a convolutional layer (called a Recurrent Convolution Layer or RCL) can bring context information to all units in this layer. This implements a non-classical receptive field [12, 18] of a biological neuron, as the effective receptive field is larger than that determined by feed-forward connections. However, with increasing iterations, the size of the effective receptive field increases unboundedly, which contradicts the biological facts. A mechanism is needed to constrain the growth of the effective receptive field. In addition, from the viewpoint of enhancing performance, one also needs to control the context modulation of neurons in the RCNN. For example, in Figure 1, it can be seen that for recognizing a character, not all of the context is useful. When the network recognizes the character "h", a recurrent kernel that covers the other parts of the character is beneficial. However, when the recurrent kernel is enlarged to cover parts of other characters, such as "p", the context carried by the kernel is unnecessary. Therefore, we need to weaken the signal that comes from unrelated context and combine the feed-forward information with the recurrent information in a flexible way. To achieve the above goals, we introduce a gate to control the context modulation in each RCL layer, which leads to the Gated Recurrent Convolution Neural Network (GRCNN). In addition, a recurrent neural network is adopted for the recognition of words in natural images since it is good at sequence modeling. In this work, we choose the Long Short-Term Memory (LSTM) as the top layer of the proposed model, which is trained in an end-to-end fashion.
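The gated recurrent convolution layer described above can be sketched in a few lines of PyTorch. The gate is computed from the feed-forward input u and the previous state, and multiplicatively modulates the recurrent convolution before the next iteration. BatchNorm placement, kernel sizes, and the number of iterations T are assumptions chosen for illustration, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class GRCL(nn.Module):
    def __init__(self, channels, T=2):
        super().__init__()
        self.T = T
        self.wf = nn.Conv2d(channels, channels, 3, padding=1)   # feed-forward
        self.wr = nn.Conv2d(channels, channels, 3, padding=1)   # recurrent
        self.wgf = nn.Conv2d(channels, channels, 1)             # gate, feed-fwd
        self.wgr = nn.Conv2d(channels, channels, 1)             # gate, recurrent
        self.bn = nn.ModuleList([nn.BatchNorm2d(channels) for _ in range(T)])

    def forward(self, u):
        x = torch.relu(self.wf(u))                  # t = 0: pure feed-forward
        for t in range(self.T):
            gate = torch.sigmoid(self.wgf(u) + self.wgr(x))     # context gate
            x = torch.relu(self.bn[t](self.wf(u) + gate * self.wr(x)))
        return x

feat = torch.rand(2, 64, 8, 32)       # e.g. conv features of a text image
out = GRCL(64, T=2)(feat)
print(out.shape)                      # torch.Size([2, 64, 8, 32])
```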
The authors add a gating mechanism to the Recurrent Convolutional NN that allows controlling the amount of context / recursion. In other words, it is similar to the improvement from RNN to GRU, but for recurrent CNNs. The authors benchmark their model on a text reading task.

Positive aspects:
- The authors improved upon the state-of-the-art results on a very well-researched benchmark.
- The proposed model combines a few proven ideas into a single model and makes them work.

Areas to improve:
- Lines 195-196: you state that you choose the 10 best models per architecture that perform well on each dataset and report the results as their median. That is ok when you compare different iterations of your model to make sure you found good hyperparameters, but it cannot be done in the final table comparing against other works. You should clearly state what your pool of models is (does it come from different hyperparameters? what parameters and what values did you choose?). How many models did you try during this search? You should provide a single hyperparameter + model setup, with its results, that works best across all the problems in the final table. Additionally, you may provide the variance of the results with a clear explanation of the hyperparameter search you performed.
- Did the authors try to use attention instead of a bidirectional RNN + CTC loss? This was done in a few recent works (https://arxiv.org/pdf/1603.03101.pdf, https://arxiv.org/pdf/1704.03549.pdf, https://arxiv.org/pdf/1603.03915.pdf). This should still allow for improving the results, or if it didn't work, that is worth stating.

I vote for the top 50% of papers, as improving upon the previous state-of-the-art results on such a benchmark is a difficult job. The authors propose a new model for extracting image features that is potentially beneficial to other visual tasks. This is conditional on fixing the reported numbers (to present numbers from a single model, not the median) and open-sourcing the code and models to the community for reproducibility.
nips_2017_2117
Variational Memory Addressing in Generative Models Aiming to augment generative models with external memory, we interpret the output of a memory module with stochastic addressing as a conditional mixture distribution, where a read operation corresponds to sampling a discrete memory address and retrieving the corresponding content from memory. This perspective allows us to apply variational inference to memory addressing, which enables effective training of the memory module by using the target information to guide memory lookups. Stochastic addressing is particularly well-suited for generative models, as it naturally encourages multimodality, which is a prominent aspect of most high-dimensional datasets. Treating the chosen address as a latent variable also allows us to quantify the amount of information gained with a memory lookup and measure the contribution of the memory module to the generative process. To illustrate the advantages of this approach we incorporate it into a variational autoencoder and apply the resulting model to the task of generative few-shot learning. The intuition behind this architecture is that the memory module can pick a relevant template from memory and the continuous part of the model can concentrate on modeling the remaining variations. We demonstrate empirically that our model is able to identify and access the relevant memory contents even with hundreds of unseen Omniglot characters in memory.

(Figure 1, left: The KL(q||p) cost to identify a specific memory entry might be substantial, even though the cost of accessing a memory entry should be on the order of log |M|. Middle & right: we combine a top-level categorical distribution p(a) and a conditional variational autoencoder with a Gaussian p(z|m).)

When retrieving contents from memory during training, we compute the sampling distribution over the addresses based on a learned similarity measure between the memory contents at each address and the target. The memory contents m_a at the selected address a serve as a context for a continuous latent variable z, which together with m_a is used to generate the target observation. We therefore interpret memory as a non-parametric conditional mixture distribution. It is non-parametric in the sense that we can change the content and the size of the memory from one evaluation of the model to another without having to relearn the model parameters. And since the retrieved content m_a is dependent on the stochastic variable a, which is part of the generative model, we can directly use it downstream to generate the observation x. These two properties set our model apart from other work on VAEs with mixture priors [9, 10] aimed at unconditional density modelling. Another distinguishing feature of our approach is that we perform sampling-based variational inference on the mixing variable instead of integrating it out as is done in prior work, which is essential for scaling to a large number of memory addresses. Most existing memory-augmented generative models use soft attention, with the weights dependent on the continuous latent variable, to access the memory. This does not provide a clean separation between inferring the address to access in memory and the latent factors of variation that account for the variability of the observation relative to the memory contents (see Figure 1). Or, alternatively, when the attention weights depend deterministically on the encoder, the retrieved memory content cannot be directly used in the decoder.
Our contributions in this paper are threefold: a) we interpret memory-read operations as a conditional mixture distribution and use amortized variational inference for training; b) we demonstrate that we can combine discrete memory addressing variables with continuous latent variables to build powerful models for generative few-shot learning that scale gracefully with the number of items in memory; and c) we demonstrate that the KL divergence over the discrete variable a serves as a useful measure to monitor memory usage during inference and training.
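A small sketch of the addressing step just described: the address posterior is a softmax over learned similarities between the target and each memory entry, a discrete address is sampled, and the KL divergence against a uniform prior p(a) quantifies the information gained by the lookup (at most log |M|). The dot-product similarity used here is an assumption for illustration.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
num_slots, dim = 8, 16
memory = torch.randn(num_slots, dim)            # M: one template per address
target = memory[3] + 0.1 * torch.randn(dim)     # query close to slot 3

scores = memory @ target                        # learned similarity (assumed)
q = F.softmax(scores, dim=0)                    # q(a | M, x)
a = torch.multinomial(q, 1).item()              # sample a discrete address
m_a = memory[a]                                 # retrieved content -> decoder

p = torch.full((num_slots,), 1.0 / num_slots)   # uniform prior p(a)
kl = (q * (q / p).log()).sum()                  # bounded by log |M| = log 8
print(a, float(kl))                             # confident lookup -> KL near log 8
```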
This paper proposes a memory-augmented generative model that performs stochastic, discrete memory addressing by interpreting the memory as a non-parametric conditional mixture distribution. This variational memory addressing model can combine discrete memory addressing variables with continuous latent variables to generate samples with only a few samples in the memory, which is useful for few-shot learning. The authors implement a VAE version of their model and validate it on few-shot recognition tasks on the Omniglot dataset, on which it significantly outperforms Generative Matching Networks, an existing memory-augmented network model. Further analysis shows that the proposed model accesses the relevant part of the memory even with hundreds of unseen instances in the memory.

Pros:
- Performing discrete, stochastic memory addressing in a memory-augmented generative model is a novel idea that makes sense. Also, the authors have done a good job motivating why this model is superior to the soft-attention approach.
- The proposed variational addressing scheme is shown to work well for few-shot learning, even in cases where the existing soft-attention model fails to work.
- The proposed scheme of interpreting memory usage with the KL divergence seems useful.

Cons:
- Not much, except that the experimental study only considers character data, although the datasets are standard. It would be better if the paper provided experimental results on other types of data, such as images, and compared against (or coupled with) recent generative models (such as GANs).

Overall, this is a good paper that presents a novel, working idea. It effectively solves the problem with the existing soft-attention memory addressing model and is shown to work well for few-shot learning. Thus I vote for accepting the paper.

Some typos:
- Line 104: unconditioneal -> unconditional
- Line 135: "of" is missing between context and supervised.
- Line 185: eachieving -> achieving
- Table 1 is missing a label "number of shots" for the numbers 1, 2, 3, 4, 5, 10 and 19.
nips_2017_174
The Unreasonable Effectiveness of Structured Random Orthogonal Embeddings We examine a class of embeddings based on structured random matrices with orthogonal rows which can be applied in many machine learning applications including dimensionality reduction and kernel approximation. For both the Johnson-Lindenstrauss transform and the angular kernel, we show that we can select matrices yielding guaranteed improved performance in accuracy and/or speed compared to earlier methods. We introduce matrices with complex entries which give significant further accuracy improvement. We provide geometric and Markov chain-based perspectives to help understand the benefits, and empirical results which suggest that the approach is helpful in a wider range of applications.
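As an illustration of the structured matrices discussed here, the sketch below builds an embedding from three HD blocks (a normalized Hadamard matrix times a random ±1 diagonal), applied with the fast Walsh-Hadamard transform in O(n log n) per block, then subsamples m coordinates. The number of blocks and the rescaling convention are assumptions; this is the generic SD-product construction, not the paper's exact estimator.

```python
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform; len(x) must be a power of two."""
    x = x.copy()
    h = 1
    while h < len(x):
        for i in range(0, len(x), 2 * h):
            a = x[i:i + h].copy()
            b = x[i + h:i + 2 * h].copy()
            x[i:i + h] = a + b
            x[i + h:i + 2 * h] = a - b
        h *= 2
    return x / np.sqrt(len(x))       # orthonormal scaling: H @ H.T = I

rng = np.random.default_rng(0)
n, m = 256, 64
D = rng.choice([-1.0, 1.0], size=(3, n))       # three random sign diagonals
rows = rng.choice(n, size=m, replace=False)    # subsample m coordinates

def embed(v):
    for d in D:                                # apply one HD block per pass
        v = fwht(d * v)
    return v[rows] * np.sqrt(n / m)            # rescale for unbiasedness

u, w = rng.normal(size=n), rng.normal(size=n)
print(u @ w)                   # true inner product
print(embed(u) @ embed(w))     # structured-embedding estimate
```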
The paper analyses the theoretical properties of a family of random projection approaches to approximating inner products in high-dimensional spaces. In particular, the authors focus on methods based on random structured matrices, namely Gaussian orthogonal matrices and SD-matrices. The latter are especially appealing since they require significantly fewer computations to perform the projections thanks to their underlying structure. The authors show that the methods considered perform comparably well (or better) with respect to the Johnson-Lindenstrauss transform (a baseline based on unstructured Gaussian matrices). Moreover, they show that further improvements can be achieved by extending SD-matrices to the complex domain. The authors extend their analysis to the case of random-feature-based approximation of angular kernels.

The paper is well written and clear to read; however, the discussion of some of the results could be elaborated more. For instance, after Thm 3.3, which characterizes the MSE of the orthogonal JL transform based on SD-matrices, it is not discussed in much detail how this compares to the standard JL baseline. Cor. 3.4 does not really help much, since it simply states that Thm 3.3 yields lower MSE, without clarifying the magnitude of such an improvement. In particular, it appears that the improvement in performance of the OJLT over the JLT is only in terms of constants (w.r.t. the number of sub-sampled dimensions m).

I found it a bit misleading in Sec. 4 to introduce a general family of kernels, namely the pointwise nonlinear Gaussian kernels, but then immediately focus on a specific instance of this class. The reader expects the following results to apply to the whole family, but this is not the case. Reversing the order and discussing PNG kernels only at the end of the section would probably help the discussion.

I found the term 'Unreasonable' in the title not explained in the text. Is it not reasonable to expect that adding structure to an estimator could make it more effective?

The Mean Squared Error (MSE) is never defined. Although it is a standard concept, it is also critical to the theoretical analysis presented in the paper, so it should be defined nevertheless.
nips_2017_175
From Parity to Preference-based Notions of Fairness in Classification The adoption of automated, data-driven decision making in an ever expanding range of applications has raised concerns about its potential unfairness towards certain social groups. In this context, a number of recent studies have focused on defining, detecting, and removing unfairness from data-driven decision systems. However, the existing notions of fairness, based on parity (equality) in treatment or outcomes for different social groups, tend to be quite stringent, limiting the overall decision-making accuracy. In this paper, we draw inspiration from the fair-division and envy-freeness literature in economics and game theory and propose preference-based notions of fairness: given the choice between various sets of decision treatments or outcomes, any group of users would collectively prefer its treatment or outcomes, regardless of the (dis)parity as compared to the other groups. Then, we introduce tractable proxies to design margin-based classifiers that satisfy these preference-based notions of fairness. Finally, we experiment with a variety of synthetic and real-world datasets and show that preference-based fairness allows for greater decision accuracy than parity-based fairness.
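To make the envy-freeness intuition concrete, here is a toy check of a "preferred treatment"-style condition for two group-specific linear classifiers: each group should obtain at least as much benefit from its own decision rule as it would from the other group's. Using the positive-decision rate as the benefit measure, and the specific classifiers and synthetic data below, are illustrative simplifications rather than the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic features for two groups with shifted means.
X0 = rng.normal(loc=0.0, size=(500, 2))
X1 = rng.normal(loc=0.7, size=(500, 2))

# Assumed group-specific linear classifiers: (weights, bias).
theta0 = (np.array([1.0, 0.5]), -0.2)
theta1 = (np.array([0.8, 1.0]), -1.0)

def benefit(X, theta):
    """Benefit proxy: fraction of positive decisions the group receives."""
    w, b = theta
    return float(np.mean(X @ w + b > 0))

for name, X, own_t, other_t in [("group 0", X0, theta0, theta1),
                                ("group 1", X1, theta1, theta0)]:
    own, other = benefit(X, own_t), benefit(X, other_t)
    print(name, "own:", own, "other:", other, "prefers own:", own >= other)
```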
ML models control many aspects of our lives, such as a bank's decision on approving a loan or a medical center's use of ML to make treatment decisions. Therefore, it is paramount that these models are fair, even in cases where the data contains biases. This study follows previous studies in the field but argues that the definition of fairness provided in previous studies might set too high a bar, which results in unnecessary cost in terms of accuracy. Instead, the authors present an alternative fairness definition, an algorithm to apply it for linear classifiers, and some empirical results. I find this paper very interesting and relevant but lacking in several aspects. I think that it would benefit from some additional depth, which would make it a significant contribution. Here are some issues that I find with this paper:

1. I am not sure that the treatment of bias is comprehensive enough. For example, suppose z is a gender feature. A classifier may ignore this feature altogether but still introduce bias by inferring gender from other features such as height, weight, facial image, or even the existence of male/female reproductive organs. I could not figure out from the text how this issue is treated. This is important because, under the current definition, a model that never treats a person with male reproductive organs for cancer may still be gender-fair. The paragraph at lines 94-100 gives a short discussion that relates to this issue, but it should be made more explicit and clear.

2. There is a sense of over-selling in this paper. The abstract suggests that there is an algorithmic solution, but only in section 3 is the assumption added that the classification model is linear. This is a non-trivial constraint, but there is no discussion of scenarios in which linear models are to be used.

Some additional comments that the authors may wish to consider:

3. The introduction could benefit from concrete examples. For example, you can take the case of treating cancer: while males and females should receive equally good treatment, it does not mean that females should be treated for prostate cancer or males should be treated for breast cancer at the same rates.

4. Lines 66-74 are a repetition of previous paragraphs.

5. Line 79: “may” should be “many”.

6. When making the approximations (lines 171-175) you should show that these are good approximations, in the sense that a solution to the approximated problem will be a good solution to the original problem. It may be easier to state the original problem as (9) and avoid the approximation discussion.