Schema (one field per column): categories (string), doi (string), id (string), year (float64), venue (string), link (string), updated (string), published (string), title (string), abstract (string), authors (sequence). The year and venue fields are null in every record below and are omitted from the entries; doi is likewise omitted where null and shown where present.

categories: cs.LG
id: 0801.1988
link: http://arxiv.org/pdf/0801.1988v1
published: 2008-01-14T06:56:42Z
updated: 2008-01-14T06:56:42Z
title: Online variants of the cross-entropy method
abstract:
The cross-entropy method is a simple but efficient method for global optimization. In this paper we provide two online variants of the basic CEM, together with a proof of convergence.
[ "Istvan Szita and Andras Lorincz", "['Istvan Szita' 'Andras Lorincz']" ]

categories: cs.AI cs.LG
id: 0801.2069
link: http://arxiv.org/pdf/0801.2069v2
published: 2008-01-14T13:09:06Z
updated: 2008-08-13T15:07:08Z
title: Factored Value Iteration Converges
abstract:
In this paper we propose a novel algorithm, factored value iteration (FVI), for the approximate solution of factored Markov decision processes (fMDPs). The traditional approximate value iteration algorithm is modified in two ways. For one, the least-squares projection operator is modified so that it does not increase max-norm, and thus preserves convergence. The other modification is that we uniformly sample polynomially many samples from the (exponentially large) state space. This way, the complexity of our algorithm becomes polynomial in the size of the fMDP description length. We prove that the algorithm is convergent. We also derive an upper bound on the difference between our approximate solution and the optimal one, and also on the error introduced by sampling. We analyze various projection operators with respect to their computation complexity and their convergence when combined with approximate value iteration.
[ "Istvan Szita and Andras Lorincz", "['Istvan Szita' 'Andras Lorincz']" ]

categories: cs.LG
id: 0801.4061
link: http://arxiv.org/pdf/0801.4061v1
published: 2008-01-26T07:32:48Z
updated: 2008-01-26T07:32:48Z
title: The optimal assignment kernel is not positive definite
abstract:
We prove that the optimal assignment kernel, proposed recently as an attempt to embed labeled graphs and more generally tuples of basic data to a Hilbert space, is in fact not always positive definite.
[ "['Jean-Philippe Vert']", "Jean-Philippe Vert (CB)" ]

categories: cs.DM cs.IT cs.LG math.IT
id: 0801.4790
link: http://arxiv.org/pdf/0801.4790v2
published: 2008-01-30T22:49:57Z
updated: 2008-07-01T09:46:33Z
title: Information Width
abstract:
Kolmogorov argued that the concept of information exists also in problems with no underlying stochastic model (as in Shannon's representation of information), for instance, the information contained in an algorithm or in the genome. He introduced a combinatorial notion of entropy and information $I(x:y)$ conveyed by a binary string $x$ about the unknown value of a variable $y$. The current paper poses the following questions: what is the relationship between the information conveyed by $x$ about $y$ and the description complexity of $x$? Is there a notion of the cost of information? Are there limits on how efficiently $x$ conveys information? To answer these questions, Kolmogorov's definition is extended and a new concept termed information width, similar to $n$-widths in approximation theory, is introduced. Information from any input source, e.g., sample-based, general side-information, or a hybrid of both, can be evaluated by a single common formula. An application to the space of binary functions is considered.
authors: Joel Ratsaby

categories: cs.DM cs.AI cs.LG
id: 0801.4794
link: http://arxiv.org/pdf/0801.4794v1
published: 2008-01-30T23:14:19Z
updated: 2008-01-30T23:14:19Z
title: On the Complexity of Binary Samples
abstract:
Consider a class $\mathcal{H}$ of binary functions $h: X\to\{-1, +1\}$ on a finite interval $X=[0, B]\subset \mathbb{R}$. Define the sample width of $h$ on a finite subset (a sample) $S\subset X$ as $\omega_S(h) \equiv \min_{x\in S} |\omega_h(x)|$, where $\omega_h(x) = h(x) \max\{a\geq 0: h(z)=h(x), x-a\leq z\leq x+a\}$. Let $\mathbb{S}_\ell$ be the space of all samples in $X$ of cardinality $\ell$ and consider sets of wide samples, i.e., hypersets, defined as $A_{\beta, h} = \{S\in \mathbb{S}_\ell: \omega_{S}(h) \geq \beta\}$. Through an application of the Sauer-Shelah result on the density of sets, an upper estimate is obtained on the growth function (or trace) of the class $\{A_{\beta, h}: h\in\mathcal{H}\}$, $\beta>0$, i.e., on the number of possible dichotomies obtained by intersecting all hypersets with a fixed collection of samples $S\in\mathbb{S}_\ell$ of cardinality $m$. The estimate is $2\sum_{i=0}^{2\lfloor B/(2\beta)\rfloor}\binom{m-\ell}{i}$.
authors: Joel Ratsaby

categories: cs.LG
id: 0802.1002
link: http://arxiv.org/pdf/0802.1002v1
published: 2008-02-07T15:18:27Z
updated: 2008-02-07T15:18:27Z
title: New Estimation Procedures for PLS Path Modelling
abstract:
Given $R$ groups of numerical variables $X_1, \ldots, X_R$, we assume that each group is the result of one underlying latent variable, and that all latent variables are bound together through a linear equation system. Moreover, we assume that some explanatory latent variables may interact pairwise in one or more equations. We build on the PLS Path Modelling algorithm to estimate both the latent variables and the model's coefficients. New "external" estimation schemes are proposed that draw latent variables towards strong group structures in a more flexible way. New "internal" estimation schemes are proposed to enable PLSPM to make good use of variable group complementarity and to deal with interactions. Application examples are given.
authors: Xavier Bry (I3M)

categories: cs.LG stat.ML
id: 0802.1244
link: http://arxiv.org/pdf/0802.1244v1
published: 2008-02-10T07:38:49Z
updated: 2008-02-10T07:38:49Z
title: Learning Balanced Mixtures of Discrete Distributions with Small Sample
abstract:
We study the problem of partitioning a small sample of $n$ individuals from a mixture of $k$ product distributions over a Boolean cube $\{0, 1\}^K$ according to their distributions. Each distribution is described by a vector of allele frequencies in $\mathbb{R}^K$. Given two distributions, we use $\gamma$ to denote the average $\ell_2^2$ distance in frequencies across $K$ dimensions, which measures the statistical divergence between them. We study the case assuming that bits are independently distributed across $K$ dimensions. This work demonstrates that, for a balanced input instance with $k = 2$, a certain graph-based optimization function returns the correct partition with high probability, where a weighted graph $G$ is formed over the $n$ individuals, with edge weights given by the pairwise Hamming distances between their corresponding bit vectors, so long as $K = \Omega(\ln n/\gamma)$ and $Kn = \tilde\Omega(\ln n/\gamma^2)$. The function computes a maximum-weight balanced cut of $G$, where the weight of a cut is the sum of the weights across all edges in the cut. This result demonstrates a nice property of the high-dimensional feature space: one can trade off the number of features required against the size of the sample to accomplish certain tasks like clustering.
authors: Shuheng Zhou

categories: cs.CV cs.LG
id: 0802.1258
link: http://arxiv.org/pdf/0802.1258v1
published: 2008-02-09T12:22:47Z
updated: 2008-02-09T12:22:47Z
title: Bayesian Nonlinear Principal Component Analysis Using Random Fields
abstract:
We propose a novel model for nonlinear dimension reduction motivated by the probabilistic formulation of principal component analysis. Nonlinearity is achieved by specifying different transformation matrices at different locations of the latent space and smoothing the transformation using a Markov random field type prior. The computation is made feasible by the recent advances in sampling from von Mises-Fisher distributions.
[ "['Heng Lian']", "Heng Lian" ]

categories: cs.LG
id: 0802.1430
link: http://arxiv.org/pdf/0802.1430v2
published: 2008-02-11T12:55:34Z
updated: 2008-12-19T14:05:14Z
title: A New Approach to Collaborative Filtering: Operator Estimation with Spectral Regularization
abstract:
We present a general approach for collaborative filtering (CF) using spectral regularization to learn linear operators from "users" to the "objects" they rate. Recent low-rank type matrix completion approaches to CF are shown to be special cases. However, unlike existing regularization based CF methods, our approach can also incorporate additional information such as attributes of the users or the objects. We then provide novel representer theorems that we use to develop new estimation methods. We provide learning algorithms based on low-rank decompositions, and test them on a standard CF dataset. The experiments indicate the advantages of generalizing the existing regularization based CF methods to incorporate related information about users and objects. Finally, we show that certain multi-task learning methods can also be seen as special cases of our proposed approach.
authors: Jacob Abernethy, Francis Bach (INRIA Rocquencourt), Theodoros Evgeniou, Jean-Philippe Vert (CB)

categories: cs.LG cs.DS cs.IT math.IT
id: 0802.2015
link: http://arxiv.org/pdf/0802.2015v2
published: 2008-02-14T14:54:57Z
updated: 2008-02-15T10:59:15Z
title: Combining Expert Advice Efficiently
abstract:
We show how models for prediction with expert advice can be defined concisely and clearly using hidden Markov models (HMMs); standard HMM algorithms can then be used to efficiently calculate, among other things, how the expert predictions should be weighted according to the model. We cast many existing models as HMMs and recover the best known running times in each case. We also describe two new models: the switch distribution, which was recently developed to improve Bayesian/Minimum Description Length model selection, and a new generalisation of the fixed share algorithm based on run-length coding. We give loss bounds for all models and shed new light on their relationships.
[ "['Wouter Koolen' 'Steven de Rooij']", "Wouter Koolen and Steven de Rooij" ]

categories: cs.LG math.ST stat.TH
id: 0802.2158
link: http://arxiv.org/pdf/0802.2158v1
published: 2008-02-15T09:06:25Z
updated: 2008-02-15T09:06:25Z
title: A Radar-Shaped Statistic for Testing and Visualizing Uniformity Properties in Computer Experiments
abstract:
In the study of computer codes, filling the space as uniformly as possible is important for describing the complexity of the investigated phenomenon. However, this property is not preserved when the dimension is reduced. Some numerical experimental designs, such as Latin hypercubes or orthogonal arrays, are conceived with this in mind, but they consider only the projections onto the axes or the coordinate planes. In this article we introduce a statistic that allows studying the good distribution of points according to all 1-dimensional projections. By angularly scanning the domain, we obtain a radar-type representation, allowing the uniformity defects of a design to be identified with respect to its projections onto straight lines. The advantages of this new tool are demonstrated on usual examples of space-filling designs (SFD), and a global statistic independent of the angle of rotation is studied.
authors: Jessica Franco, Laurent Carraro, Olivier Roustant, Astrid Jourdan (LMA-PAU)

categories: cs.IT cs.CC cs.DM cs.DS cs.LG math.IT
id: 0802.2305
link: http://arxiv.org/pdf/0802.2305v2
published: 2008-02-17T16:42:52Z
updated: 2008-02-24T09:51:09Z
title: Compressed Counting
abstract:
Counting is among the most fundamental operations in computing. For example, counting the pth frequency moment has been a very active area of research in theoretical computer science, databases, and data mining. When p=1, the task (i.e., counting the sum) can be accomplished using a simple counter. Compressed Counting (CC) is proposed for efficiently computing the pth frequency moment of a data stream signal A_t, where 0<p<=2. CC is applicable if the streaming data follow the Turnstile model, with the restriction that at the evaluation time t, A_t[i] >= 0, which includes the strict Turnstile model as a special case. For natural data streams encountered in practice, this restriction is minor. The underlying technique of CC is what we call skewed stable random projections, which captures the intuition that, when p=1, a simple counter suffices, and when p = 1±\Delta with small \Delta, the sample complexity of a counter system should remain low (varying continuously as a function of \Delta). We show that at small \Delta the sample complexity (number of projections) is k = O(1/\epsilon) instead of O(1/\epsilon^2). Compressed Counting can serve as a basic building block for other tasks in statistics and computing, for example, estimating entropies of data streams, or parameter estimation using the method of moments or maximum likelihood. Finally, another contribution is an algorithm for approximating the logarithmic norm, \sum_{i=1}^D \log A_t[i], and the logarithmic distance. The logarithmic distance is useful in machine learning practice with heavy-tailed data.
authors: Ping Li

categories: cs.LG cs.HC
id: 0802.2428
link: http://arxiv.org/pdf/0802.2428v1
published: 2008-02-18T07:28:44Z
updated: 2008-02-18T07:28:44Z
title: Sign Language Tutoring Tool
abstract:
In this project, we have developed a sign language tutor that lets users learn isolated signs by watching recorded videos and by trying the same signs. The system records the user's video and analyses it. If the sign is recognized, both verbal and animated feedback is given to the user. The system is able to recognize complex signs that involve both hand gestures and head movements and expressions. Our performance tests yield a 99% recognition rate on signs involving only manual gestures and an 85% recognition rate on signs that involve both manual and non-manual components, such as head movement and facial expressions.
authors: Oya Aran, Ismail Ari, Alexandre Benoit (GIPSA-lab), Ana Huerta Carrillo, François-Xavier Fanard (TELE), Pavel Campr, Lale Akarun, Alice Caplier (GIPSA-lab), Michele Rombaut (GIPSA-lab), Bulent Sankur

categories: math.ST cs.LG stat.TH
id: 0802.2655
link: http://arxiv.org/pdf/0802.2655v6
published: 2008-02-19T14:05:22Z
updated: 2010-06-09T09:08:50Z
title: Pure Exploration for Multi-Armed Bandit Problems
abstract:
We consider the framework of stochastic multi-armed bandit problems and study the possibilities and limitations of forecasters that perform an on-line exploration of the arms. These forecasters are assessed in terms of their simple regret, a regret notion that captures the fact that exploration is only constrained by the number of available rounds (not necessarily known in advance), in contrast to the case when the cumulative regret is considered and when exploitation needs to be performed at the same time. We believe that this performance criterion is suited to situations when the cost of pulling an arm is expressed in terms of resources rather than rewards. We discuss the links between the simple and the cumulative regret. One of the main results in the case of a finite number of arms is a general lower bound on the simple regret of a forecaster in terms of its cumulative regret: the smaller the latter, the larger the former. Keeping this result in mind, we then exhibit upper bounds on the simple regret of some forecasters. The paper ends with a study devoted to continuous-armed bandit problems; we show that the simple regret can be minimized with respect to a family of probability distributions if and only if the cumulative regret can be minimized for it. Based on this equivalence, we are able to prove that the separable metric spaces are exactly the metric spaces on which these regrets can be minimized with respect to the family of all probability distributions with continuous mean-payoff functions.
[ "S\\'ebastien Bubeck (INRIA Futurs), R\\'emi Munos (INRIA Futurs), Gilles\n Stoltz (DMA, GREGH)", "['Sébastien Bubeck' 'Rémi Munos' 'Gilles Stoltz']" ]

categories: cs.CY cs.AI cs.LG cs.SE
id: 0802.3789
link: http://arxiv.org/pdf/0802.3789v1
published: 2008-02-26T11:26:09Z
updated: 2008-02-26T11:26:09Z
title: Knowledge Technologies
abstract:
Several technologies are emerging that provide new ways to capture, store, present and use knowledge. This book is the first to provide a comprehensive introduction to five of the most important of these technologies: Knowledge Engineering, Knowledge Based Engineering, Knowledge Webs, Ontologies and Semantic Webs. For each of these, answers are given to a number of key questions (What is it? How does it operate? How is a system developed? What can it be used for? What tools are available? What are the main issues?). The book is aimed at students, researchers and practitioners interested in Knowledge Management, Artificial Intelligence, Design Engineering and Web Technologies. During the 1990s, Nick worked at the University of Nottingham on the application of AI techniques to knowledge management and on various knowledge acquisition projects to develop expert systems for military applications. In 1999, he joined Epistemics where he worked on numerous knowledge projects and helped establish knowledge management programmes at large organisations in the engineering, technology and legal sectors. He is author of the book "Knowledge Acquisition in Practice", which describes a step-by-step procedure for acquiring and implementing expertise. He maintains strong links with leading research organisations working on knowledge technologies, such as knowledge-based engineering, ontologies and semantic technologies.
[ "['Nick Milton']", "Nick Milton" ]

categories: cs.LG cs.CC cs.CR cs.DB
id: 0803.0924
link: http://arxiv.org/pdf/0803.0924v3
published: 2008-03-06T17:50:07Z
updated: 2010-02-19T01:47:02Z
title: What Can We Learn Privately?
abstract:
Learning problems form an important category of computational tasks that generalizes many of the computations researchers apply to large real-life data sets. We ask: what concept classes can be learned privately, namely, by an algorithm whose output does not depend too heavily on any one input or specific training example? More precisely, we investigate learning algorithms that satisfy differential privacy, a notion that provides strong confidentiality guarantees in contexts where aggregate information is released about a database containing sensitive information about individuals. We demonstrate that, ignoring computational constraints, it is possible to privately agnostically learn any concept class using a sample size approximately logarithmic in the cardinality of the concept class. Therefore, almost anything learnable is learnable privately: specifically, if a concept class is learnable by a (non-private) algorithm with polynomial sample complexity and output size, then it can be learned privately using a polynomial number of samples. We also present a computationally efficient private PAC learner for the class of parity functions. Local (or randomized response) algorithms are a practical class of private algorithms that have received extensive investigation. We provide a precise characterization of local private learning algorithms. We show that a concept class is learnable by a local algorithm if and only if it is learnable in the statistical query (SQ) model. Finally, we present a separation between the power of interactive and noninteractive local learning algorithms.
[ "Shiva Prasad Kasiviswanathan, Homin K. Lee, Kobbi Nissim, Sofya\n Raskhodnikova, and Adam Smith", "['Shiva Prasad Kasiviswanathan' 'Homin K. Lee' 'Kobbi Nissim'\n 'Sofya Raskhodnikova' 'Adam Smith']" ]

categories: cs.DB cs.LG
id: 0803.1555
link: http://arxiv.org/pdf/0803.1555v1
published: 2008-03-11T11:18:52Z
updated: 2008-03-11T11:18:52Z
title: Privacy Preserving ID3 over Horizontally, Vertically and Grid Partitioned Data
abstract:
We consider privacy preserving decision tree induction via ID3 in the case where the training data is horizontally or vertically distributed. Furthermore, we consider the same problem in the case where the data is both horizontally and vertically distributed, a situation we refer to as grid partitioned data. We give an algorithm for privacy preserving ID3 over horizontally partitioned data involving more than two parties. For grid partitioned data, we discuss two different evaluation methods for privacy preserving ID3: first merging horizontally and then developing vertically, or first merging vertically and then developing horizontally. Next to introducing privacy preserving data mining over grid-partitioned data, the main contribution of this paper is that we show, by means of a complexity analysis, that the former evaluation method is the more efficient one.
authors: Bart Kuijpers, Vanessa Lemmens, Bart Moelans, Karl Tuyls

categories: cs.CL cs.LG
id: 0803.2856
link: http://arxiv.org/pdf/0803.2856v1
published: 2008-03-19T18:00:19Z
updated: 2008-03-19T18:00:19Z
title: Figuring out Actors in Text Streams: Using Collocations to establish Incremental Mind-maps
abstract:
The recognition, involvement, and description of main actors influence the story line of the whole text. This is all the more important because the text per se is a flow of words and expressions that, once read, is lost. In this respect, understanding a text, and in particular how an actor behaves, is a major concern: since human beings store a given input in short-term memory while associating diverse aspects and actors with incidents, the following approach presents a virtual architecture in which collocations are taken as the associative completion of the actors' acting. Once collocations are discovered, they are managed in separate memory blocks broken down by actor. As for human beings, these memory blocks refer to associative mind-maps. We then present several priority functions that represent the current temporal situation inside a mind-map, enabling the user to reconstruct recent events from the discovered temporal results.
authors: T. Rothenberger, S. Oez, E. Tahirovic, C. Schommer

categories: cs.LG cs.AI
id: 0803.3490
link: http://arxiv.org/pdf/0803.3490v2
published: 2008-03-25T03:51:59Z
updated: 2008-11-11T22:36:47Z
title: Robustness and Regularization of Support Vector Machines
abstract:
We consider regularized support vector machines (SVMs) and show that they are precisely equivalent to a new robust optimization formulation. We show that this equivalence of robust optimization and regularization has implications for both algorithms and analysis. In terms of algorithms, the equivalence suggests more general SVM-like algorithms for classification that explicitly build in protection against noise while at the same time controlling overfitting. On the analysis front, the equivalence of robustness and regularization provides a robust optimization interpretation for the success of regularized SVMs. We use this new robustness interpretation of SVMs to give a new proof of consistency of (kernelized) SVMs, thus establishing robustness as the reason regularized SVMs generalize well.
authors: Huan Xu, Constantine Caramanis, Shie Mannor

categories: cs.NE cs.LG
id: 0803.3838
link: http://arxiv.org/pdf/0803.3838v2
published: 2008-03-26T22:49:40Z
updated: 2009-03-26T20:37:59Z
title: Recorded Step Directional Mutation for Faster Convergence
abstract:
Two meta-evolutionary optimization strategies described in this paper accelerate the convergence of evolutionary programming algorithms while still retaining much of their ability to deal with multi-modal problems. The strategies, called directional mutation and recorded step in this paper, can operate independently, but together they greatly enhance the ability of evolutionary programming algorithms to deal with fitness landscapes characterized by long narrow valleys. The directional mutation aspect of this combined method uses correlated meta-mutation but does not introduce a full covariance matrix. These new methods are thus much more economical in terms of storage for problems with high dimensionality. Additionally, directional mutation is rotationally invariant, which is a substantial advantage over self-adaptive methods that use a single variance per coordinate, for problems whose natural orientation does not align with the axes.
authors: Ted Dunning

categories: cs.LG cs.AI
id: 0804.0188
link: http://arxiv.org/pdf/0804.0188v2
published: 2008-04-01T14:55:33Z
updated: 2009-08-04T11:48:14Z
title: Support Vector Machine Classification with Indefinite Kernels
abstract:
We propose a method for support vector machine classification using indefinite kernels. Instead of directly minimizing or stabilizing a nonconvex loss function, our algorithm simultaneously computes support vectors and a proxy kernel matrix used in forming the loss. This can be interpreted as a penalized kernel learning problem where indefinite kernel matrices are treated as noisy observations of a true Mercer kernel. Our formulation keeps the problem convex, and relatively large problems can be solved efficiently using projected gradient or analytic center cutting plane methods. We compare the performance of our technique with other methods on several classic data sets.
authors: Ronny Luss, Alexandre d'Aspremont

categories: cs.LG cs.AI
id: 0804.0924
link: http://arxiv.org/pdf/0804.0924v2
published: 2008-04-06T18:14:34Z
updated: 2009-07-29T04:25:24Z
title: A Unified Semi-Supervised Dimensionality Reduction Framework for Manifold Learning
abstract:
We present a general framework of semi-supervised dimensionality reduction for manifold learning which naturally generalizes existing supervised and unsupervised learning frameworks that apply spectral decomposition. Algorithms derived under our framework are able to employ both labeled and unlabeled examples and are able to handle complex problems where data form separate clusters of manifolds. Our framework offers simple views, explains relationships among existing frameworks, and provides further extensions which can improve existing algorithms. Furthermore, a new semi-supervised kernelization framework called the "KPCA trick" is proposed to handle non-linear problems.
authors: Ratthachat Chatpatanasiri, Boonserm Kijsirikul

categories: cs.LG math.ST stat.ML stat.TH
id: 0804.1302
link: http://arxiv.org/pdf/0804.1302v1
published: 2008-04-08T15:40:03Z
updated: 2008-04-08T15:40:03Z
title: Bolasso: model consistent Lasso estimation through the bootstrap
abstract:
We consider the least-square linear regression problem with regularization by the l1-norm, a problem usually referred to as the Lasso. In this paper, we present a detailed asymptotic analysis of model consistency of the Lasso. For various decays of the regularization parameter, we compute asymptotic equivalents of the probability of correct model selection (i.e., variable selection). For a specific rate decay, we show that the Lasso selects all the variables that should enter the model with probability tending to one exponentially fast, while it selects all other variables with strictly positive probability. We show that this property implies that if we run the Lasso for several bootstrapped replications of a given sample, then intersecting the supports of the Lasso bootstrap estimates leads to consistent model selection. This novel variable selection algorithm, referred to as the Bolasso, compares favorably to other linear regression methods on synthetic data and datasets from the UCI machine learning repository.
authors: Francis Bach (INRIA Rocquencourt)

categories: cs.LG cs.AI
id: 0804.1441
link: http://arxiv.org/pdf/0804.1441v3
published: 2008-04-09T09:40:51Z
updated: 2009-01-30T02:19:27Z
title: On Kernelization of Supervised Mahalanobis Distance Learners
abstract:
This paper focuses on the problem of kernelizing an existing supervised Mahalanobis distance learner. The following features are included in the paper. Firstly, three popular learners, namely "neighborhood component analysis", "large margin nearest neighbors" and "discriminant neighborhood embedding", which do not have kernel versions, are kernelized in order to improve their classification performance. Secondly, an alternative kernelization framework called the "KPCA trick" is presented. Implementing a learner in the new framework gains several advantages over the standard framework, e.g., no mathematical formulas and no reprogramming are required for a kernel implementation, and the framework avoids troublesome problems such as singularity. Thirdly, while previous papers related to ours simply assume the truth of the relevant representer theorems, here they are formally proven. The proofs validate both the kernel trick and the KPCA trick in the context of Mahalanobis distance learning. Fourthly, unlike previous works, which always apply brute-force methods to select a kernel, we investigate two approaches that can be efficiently adopted to construct an appropriate kernel for a given dataset. Finally, numerical results on various real-world datasets are presented.
authors: Ratthachat Chatpatanasiri, Teesid Korsrilabutr, Pasakorn Tangchanachaianan, Boonserm Kijsirikul

categories: cs.LG cs.CG
id: 0804.3575
link: http://arxiv.org/pdf/0804.3575v2
published: 2008-04-22T17:59:03Z
updated: 2008-08-04T19:28:46Z
title: Isotropic PCA and Affine-Invariant Clustering
abstract:
We present a new algorithm for clustering points in R^n. The key property of the algorithm is that it is affine-invariant, i.e., it produces the same partition for any affine transformation of the input. It has strong guarantees when the input is drawn from a mixture model. For a mixture of two arbitrary Gaussians, the algorithm correctly classifies the sample assuming only that the two components are separable by a hyperplane, i.e., there exists a halfspace that contains most of one Gaussian and almost none of the other in probability mass. This is nearly the best possible, improving known results substantially. For k > 2 components, the algorithm requires only that there be some (k-1)-dimensional subspace in which the overlap in every direction is small. Here we define overlap to be the ratio of the following two quantities: 1) the average squared distance between a point and the mean of its component, and 2) the average squared distance between a point and the mean of the mixture. The main result may also be stated in the language of linear discriminant analysis: if the standard Fisher discriminant is small enough, labels are not needed to estimate the optimal subspace for projection. Our main tools are isotropic transformation, spectral projection and a simple reweighting technique. We call this combination isotropic PCA.
authors: S. Charles Brubaker, Santosh S. Vempala

categories: cs.LG
id: 0804.3817
link: http://arxiv.org/pdf/0804.3817v1
published: 2008-04-23T23:18:00Z
updated: 2008-04-23T23:18:00Z
title: Multiple Random Oracles Are Better Than One
abstract:
We study the problem of learning k-juntas given access to examples drawn from a number of different product distributions. Thus we wish to learn a function f : {-1,1}^n -> {-1,1} that depends on k (unknown) coordinates. While the best known algorithms for the general problem of learning a k-junta require running time of n^k * poly(n,2^k), we show that given access to k different product distributions with biases separated by \gamma>0, the functions may be learned in time poly(n,2^k,\gamma^{-k}). More generally, given access to t <= k different product distributions, the functions may be learned in time n^{k/t} * poly(n,2^k,\gamma^{-k}). Our techniques involve novel results in Fourier analysis relating Fourier expansions with respect to different biases and a generalization of Russo's formula.
[ "Jan Arpe and Elchanan Mossel", "['Jan Arpe' 'Elchanan Mossel']" ]

categories: cs.LG cs.IR stat.ME
id: 0804.4451
link: http://arxiv.org/pdf/0804.4451v2
published: 2008-04-28T17:14:53Z
updated: 2019-09-07T00:29:28Z
title: Dependence Structure Estimation via Copula
abstract:
Dependence structure estimation is one of the important problems in the machine learning domain and has many applications in different scientific areas. In this paper, a theoretical framework for such estimation, based on copula and copula entropy -- the probabilistic theory of representation and measurement of statistical dependence -- is proposed. Graphical models are considered as a special case of the copula framework. A method of the framework for estimating the maximum spanning copula is proposed. Due to copula, the method is irrelevant to the properties of individual variables, insensitive to outliers, and able to deal with non-Gaussianity. Experiments on both simulated data and a real dataset demonstrate the effectiveness of the proposed method.
authors: Jian Ma, Zengqi Sun

categories: cs.LG
id: 0804.4682
link: http://arxiv.org/pdf/0804.4682v1
published: 2008-04-29T19:25:07Z
updated: 2008-04-29T19:25:07Z
title: Introduction to Relational Networks for Classification
abstract:
Computational intelligence techniques for classification have been used in numerous applications. This paper compares the use of a Multi Layer Perceptron Neural Network and a new Relational Network on classifying the HIV status of women at ante-natal clinics. The paper discusses the architecture of the relational network and its merits compared to a neural network and most other computational intelligence classifiers. Results gathered from the study indicate comparable classification accuracies as well as revealed relationships between data features in the classification data. Much higher classification accuracies are recommended for future research in the area of HIV classification as well as missing data estimation.
authors: Vukosi Marivate, Tshilidzi Marwala

categories: cs.LG
id: 0804.4741
link: http://arxiv.org/pdf/0804.4741v1
published: 2008-04-30T06:07:45Z
updated: 2008-04-30T06:07:45Z
title: The Effect of Structural Diversity of an Ensemble of Classifiers on Classification Accuracy
abstract:
This paper aims to showcase a measure of structural diversity of an ensemble of 9 classifiers and then map a relationship between this structural diversity and accuracy. The structural diversity was induced by having different architectures or structures of the classifiers. Genetic algorithms (GAs) were used to derive the relationship between diversity and classification accuracy by evolving the classifiers and then picking 9 classifiers out of an ensemble of 60. It was found that as the ensemble became more diverse, the accuracy improved; however, at a certain diversity measure the accuracy began to drop. The Kohavi-Wolpert variance method is used to measure the diversity of the ensemble, and a method of voting is used to aggregate the results from each classifier. The lowest error was observed at a diversity measure of 0.16 with a mean square error of 0.274, taking 0.2024 as the maximum diversity measured. The parameters that were varied were the number of hidden nodes, the learning rate and the activation function.
authors: Lesedi Masisi, Fulufhelo V. Nelwamondo, Tshilidzi Marwala

categories: cs.LG
id: 0804.4898
link: http://arxiv.org/pdf/0804.4898v1
published: 2008-04-30T19:59:56Z
updated: 2008-04-30T19:59:56Z
title: A Quadratic Loss Multi-Class SVM
abstract:
Using a support vector machine requires setting two types of hyperparameters: the soft margin parameter C and the parameters of the kernel. To perform this model selection task, the method of choice is cross-validation. Its leave-one-out variant is known to produce an estimator of the generalization error which is almost unbiased. Its major drawback rests in its time requirement. To overcome this difficulty, several upper bounds on the leave-one-out error of the pattern recognition SVM have been derived. Among those bounds, the most popular one is probably the radius-margin bound. It applies to the hard margin pattern recognition SVM, and by extension to the 2-norm SVM. In this report, we introduce a quadratic loss M-SVM, the M-SVM^2, as a direct extension of the 2-norm SVM to the multi-class case. For this machine, a generalized radius-margin bound is then established.
authors: Emmanuel Monfrini (LORIA), Yann Guermeur (LORIA)

categories: cs.LG
id: 0805.0149
link: http://arxiv.org/pdf/0805.0149v1
published: 2008-05-01T20:25:27Z
updated: 2008-05-01T20:25:27Z
title: On Recovery of Sparse Signals via $\ell_1$ Minimization
abstract:
This article considers constrained $\ell_1$ minimization methods for the recovery of high dimensional sparse signals in three settings: noiseless, bounded error and Gaussian noise. A unified and elementary treatment is given in these noise settings for two $\ell_1$ minimization methods: the Dantzig selector and $\ell_1$ minimization with an $\ell_2$ constraint. The results of this paper improve the existing results in the literature by weakening the conditions and tightening the error bounds. The improvement on the conditions shows that signals with larger support can be recovered accurately. This paper also establishes connections between restricted isometry property and the mutual incoherence property. Some results of Candes, Romberg and Tao (2006) and Donoho, Elad, and Temlyakov (2006) are extended.
[ "T. Tony Cai, Guangwu Xu, and Jun Zhang", "['T. Tony Cai' 'Guangwu Xu' 'Jun Zhang']" ]

categories: cond-mat.dis-nn cs.LG
doi: 10.1143/JPSJ.77.094801
id: 0805.1480
link: http://arxiv.org/abs/0805.1480v1
published: 2008-05-10T15:40:24Z
updated: 2008-05-10T15:40:24Z
title: On-line Learning of an Unlearnable True Teacher through Mobile Ensemble Teachers
abstract:
On-line learning of a hierarchical learning model is studied by a method from statistical mechanics. In our model, a simple-perceptron student learns not from a true teacher directly, but from ensemble teachers who learn from the true teacher with a perceptron learning rule. Since the true teacher and the ensemble teachers are expressed as a non-monotonic perceptron and simple perceptrons, respectively, the ensemble teachers circle the unlearnable true teacher at a fixed distance in an asymptotic steady state. The generalization performance of the student is shown to exceed that of the ensemble teachers in a transient state, as was shown in similar ensemble-teachers models. Further, it is found that moving the ensemble teachers even in the steady state, in contrast to keeping them fixed, is efficient for the performance of the student.
authors: Takeshi Hirama, Koji Hukushima

categories: cs.LG cs.AI cs.CC
doi: 10.1007/s10994-008-5069-3
id: 0805.2027
link: http://arxiv.org/abs/0805.2027v2
published: 2008-05-14T11:19:19Z
updated: 2008-07-06T17:36:33Z
title: Rollout Sampling Approximate Policy Iteration
abstract:
Several researchers have recently investigated the connection between reinforcement learning and classification. We are motivated by proposals of approximate policy iteration schemes without value functions, which focus on policy representation using classifiers and address policy learning as a supervised learning problem. This paper proposes variants of an improved policy iteration scheme which addresses the core sampling problem in evaluating a policy through simulation as a multi-armed bandit machine. The resulting algorithm offers performance comparable to that of the previous algorithm, achieved, however, with significantly less computational effort. An order of magnitude improvement is demonstrated experimentally in two standard reinforcement learning domains: inverted pendulum and mountain-car.
authors: Christos Dimitrakakis, Michail G. Lagoudakis

categories: cs.LG cs.CG
id: 0805.2362
link: http://arxiv.org/pdf/0805.2362v1
published: 2008-05-15T17:25:03Z
updated: 2008-05-15T17:25:03Z
title: An optimization problem on the sphere
abstract:
We prove existence and uniqueness of the minimizer for the average geodesic distance to the points of a geodesically convex set on the sphere. This implies a corresponding existence and uniqueness result for an optimal algorithm for halfspace learning, when data and target functions are drawn from the uniform distribution.
[ "['Andreas Maurer']", "Andreas Maurer" ]

categories: cs.LG cs.AI
id: 0805.2368
link: http://arxiv.org/pdf/0805.2368v1
published: 2008-05-15T17:46:53Z
updated: 2008-05-15T17:46:53Z
title: A Kernel Method for the Two-Sample Problem
abstract:
We propose a framework for analyzing and comparing distributions, allowing us to design statistical tests to determine if two samples are drawn from different distributions. Our test statistic is the largest difference in expectations over functions in the unit ball of a reproducing kernel Hilbert space (RKHS). We present two tests based on large deviation bounds for the test statistic, while a third is based on the asymptotic distribution of this statistic. The test statistic can be computed in quadratic time, although efficient linear time approximations are available. Several classical metrics on distributions are recovered when the function space used to compute the difference in expectations is allowed to be more general (e.g., a Banach space). We apply our two-sample tests to a variety of problems, including attribute matching for databases using the Hungarian marriage method, where they perform strongly. Excellent performance is also obtained when comparing distributions over graphs, for which these are the first such tests.
authors: Arthur Gretton, Karsten Borgwardt, Malte J. Rasch, Bernhard Scholkopf, Alexander J. Smola

categories: cs.LG
id: 0805.2752
link: http://arxiv.org/pdf/0805.2752v1
published: 2008-05-18T20:07:22Z
updated: 2008-05-18T20:07:22Z
title: The Margitron: A Generalised Perceptron with Margin
abstract:
We identify the classical Perceptron algorithm with margin as a member of a broader family of large margin classifiers which we collectively call the Margitron. The Margitron, despite sharing the same update rule with the Perceptron, is shown in an incremental setting to converge in a finite number of updates to solutions possessing any desirable fraction of the maximum margin. Experiments comparing the Margitron with decomposition SVMs on tasks involving linear kernels and 2-norm soft margin are also reported.
authors: Constantinos Panagiotakopoulos, Petroula Tsampouka

categories: cs.LG
id: 0805.2775
link: http://arxiv.org/pdf/0805.2775v1
published: 2008-05-19T02:55:08Z
updated: 2008-05-19T02:55:08Z
title: Sample Selection Bias Correction Theory
abstract:
This paper presents a theoretical analysis of sample selection bias correction. The sample bias correction technique commonly used in machine learning consists of reweighting the cost of an error on each training point of a biased sample to more closely reflect the unbiased distribution. This relies on weights derived by various estimation techniques based on finite samples. We analyze the effect of an error in that estimation on the accuracy of the hypothesis returned by the learning algorithm for two estimation techniques: a cluster-based estimation technique and kernel mean matching. We also report the results of sample bias correction experiments with several data sets using these techniques. Our analysis is based on the novel concept of distributional stability which generalizes the existing concept of point-based stability. Much of our work and proof techniques can be used to analyze other importance weighting techniques and their effect on accuracy when using a distributionally stable algorithm.
[ "['Corinna Cortes' 'Mehryar Mohri' 'Michael Riley' 'Afshin Rostamizadeh']", "Corinna Cortes, Mehryar Mohri, Michael Riley, Afshin Rostamizadeh" ]

categories: cs.LG cs.AI
id: 0805.2891
link: http://arxiv.org/pdf/0805.2891v2
published: 2008-05-19T17:55:08Z
updated: 2009-01-22T18:25:33Z
title: Learning Low-Density Separators
abstract:
We define a novel, basic, unsupervised learning problem - learning the lowest density homogeneous hyperplane separator of an unknown probability distribution. This task is relevant to several problems in machine learning, such as semi-supervised learning and clustering stability. We investigate the question of existence of a universally consistent algorithm for this problem. We propose two natural learning paradigms and prove that, on input unlabeled random samples generated by any member of a rich family of distributions, they are guaranteed to converge to the optimal separator for that distribution. We complement this result by showing that no learning algorithm for our task can achieve uniform learning rates (that are independent of the data generating distribution).
[ "['Shai Ben-David' 'Tyler Lu' 'David Pal' 'Miroslava Sotakova']", "Shai Ben-David, Tyler Lu, David Pal, Miroslava Sotakova" ]

categories: cs.LG
doi: 10.1007/s10032-002-0095-3
id: 0805.4290
link: http://arxiv.org/abs/0805.4290v1
published: 2008-05-28T09:16:44Z
updated: 2008-05-28T09:16:44Z
title: From Data Topology to a Modular Classifier
abstract:
This article describes an approach to designing a distributed and modular neural classifier. This approach introduces a new hierarchical clustering that enables one to determine reliable regions in the representation space by exploiting supervised information. A multilayer perceptron is then associated with each of these detected clusters and charged with recognizing elements of the associated cluster while rejecting all others. The obtained global classifier is comprised of a set of cooperating neural networks and completed by a K-nearest neighbor classifier charged with treating elements rejected by all the neural networks. Experimental results for the handwritten digit recognition problem and comparison with neural and statistical nonmodular classifiers are given.
[ "['Abdel Ennaji' 'Arnaud Ribert' 'Yves Lecourtier']", "Abdel Ennaji (LITIS), Arnaud Ribert (LITIS), Yves Lecourtier (LITIS)" ]

categories: cs.LG
id: 0806.1156
link: http://arxiv.org/pdf/0806.1156v1
published: 2008-06-06T13:33:31Z
updated: 2008-06-06T13:33:31Z
title: Utilisation des grammaires probabilistes dans les tâches de segmentation et d'annotation prosodique (Using probabilistic grammars in prosodic segmentation and annotation tasks)
abstract:
Methodologically oriented, the present work sketches an approach for prosodic information retrieval and speech segmentation, based on both symbolic and probabilistic information. We have recourse to probabilistic grammars, within which we implement a minimal hierarchical structure. Both the stages of probabilistic grammar building and its testing in prediction are explored and quantitatively and qualitatively evaluated.
authors: Irina Nesterenko (LPL), Stéphane Rauzy (LPL)

categories: cs.IT cond-mat.stat-mech cs.AI cs.LG math.IT physics.flu-dyn
id: 0806.1199
link: http://arxiv.org/pdf/0806.1199v1
published: 2008-06-06T16:18:13Z
updated: 2008-06-06T16:18:13Z
title: Belief Propagation and Beyond for Particle Tracking
abstract:
We describe a novel approach to statistical learning from particles tracked while moving in a random environment. The problem consists in inferring properties of the environment from recorded snapshots. We consider here the case of a fluid seeded with identical passive particles that diffuse and are advected by a flow. Our approach rests on efficient algorithms to estimate the weighted number of possible matchings among particles in two consecutive snapshots, the partition function of the underlying graphical model. The partition function is then maximized over the model parameters, namely diffusivity and velocity gradient. A Belief Propagation (BP) scheme is the backbone of our algorithm, providing accurate results for the flow parameters we want to learn. The BP estimate is additionally improved by incorporating Loop Series (LS) contributions. For the weighted matching problem, LS is compactly expressed as a Cauchy integral, accurately estimated by a saddle point approximation. Numerical experiments show that the quality of our improved BP algorithm is comparable to the one of a fully polynomial randomized approximation scheme, based on the Markov Chain Monte Carlo (MCMC) method, while the BP-based scheme is substantially faster than the MCMC scheme.
[ "['Michael Chertkov' 'Lukas Kroc' 'Massimo Vergassola']", "Michael Chertkov, Lukas Kroc, Massimo Vergassola" ]

categories: nucl-th astro-ph cond-mat.dis-nn cs.LG stat.ML
doi: 10.1103/PhysRevC.80.044332
id: 0806.2850
link: http://arxiv.org/abs/0806.2850v1
published: 2008-06-17T18:23:15Z
updated: 2008-06-17T18:23:15Z
title: Decoding Beta-Decay Systematics: A Global Statistical Model for Beta^- Halflives
abstract:
Statistical modeling of nuclear data provides a novel approach to nuclear systematics complementary to established theoretical and phenomenological approaches based on quantum theory. Continuing previous studies in which global statistical modeling is pursued within the general framework of machine learning theory, we implement advances in training algorithms designed to improve generalization, in application to the problem of reproducing and predicting the halflives of nuclear ground states that decay 100% by the beta^- mode. More specifically, fully-connected, multilayer feedforward artificial neural network models are developed using the Levenberg-Marquardt optimization algorithm together with Bayesian regularization and cross-validation. The predictive performance of models emerging from extensive computer experiments is compared with that of traditional microscopic and phenomenological models as well as with the performance of other learning systems, including earlier neural network models and the support vector machines recently applied to the same problem. In discussing the results, emphasis is placed on predictions for nuclei that are far from the stability line, especially those involved in r-process nucleosynthesis. It is found that the new statistical models can match or even surpass the predictive performance of conventional models for beta-decay systematics and accordingly should provide a valuable additional tool for exploring the expanding nuclear landscape.
authors: N. J. Costiris, E. Mavrommatis, K. A. Gernoth, J. W. Clark

categories: cs.CV cs.LG
id: 0806.2890
link: http://arxiv.org/pdf/0806.2890v1
published: 2008-06-17T23:28:08Z
updated: 2008-06-17T23:28:08Z
title: Learning Graph Matching
abstract:
As a fundamental problem in pattern recognition, graph matching has applications in a variety of fields, from computer vision to computational biology. In graph matching, patterns are modeled as graphs and pattern recognition amounts to finding a correspondence between the nodes of different graphs. Many formulations of this problem can be cast in general as a quadratic assignment problem, where a linear term in the objective function encodes node compatibility and a quadratic term encodes edge compatibility. The main research focus in this theme is about designing efficient algorithms for approximately solving the quadratic assignment problem, since it is NP-hard. In this paper we turn our attention to a different question: how to estimate compatibility functions such that the solution of the resulting graph matching problem best matches the expected solution that a human would manually provide. We present a method for learning graph matching: the training examples are pairs of graphs and the `labels' are matches between them. Our experimental results reveal that learning can substantially improve the performance of standard graph matching algorithms. In particular, we find that simple linear assignment with such a learning scheme outperforms Graduated Assignment with bistochastic normalisation, a state-of-the-art quadratic assignment relaxation algorithm.
[ "Tiberio S. Caetano, Julian J. McAuley, Li Cheng, Quoc V. Le and Alex\n J. Smola", "['Tiberio S. Caetano' 'Julian J. McAuley' 'Li Cheng' 'Quoc V. Le'\n 'Alex J. Smola']" ]

categories: cs.LG
id: 0806.3537
link: http://arxiv.org/pdf/0806.3537v2
published: 2008-06-22T01:28:14Z
updated: 2008-07-10T02:51:05Z
title: Statistical Learning of Arbitrary Computable Classifiers
abstract:
Statistical learning theory chiefly studies restricted hypothesis classes, particularly those with finite Vapnik-Chervonenkis (VC) dimension. The fundamental quantity of interest is the sample complexity: the number of samples required to learn to a specified level of accuracy. Here we consider learning over the set of all computable labeling functions. Since the VC-dimension is infinite and a priori (uniform) bounds on the number of samples are impossible, we let the learning algorithm decide when it has seen sufficient samples to have learned. We first show that learning in this setting is indeed possible, and develop a learning algorithm. We then show, however, that bounding sample complexity independently of the distribution is impossible. Notably, this impossibility is entirely due to the requirement that the learning algorithm be computable, and not due to the statistical nature of the problem.
[ "David Soloveichik", "['David Soloveichik']" ]

categories: cs.LG
id: 0806.4210
link: http://arxiv.org/pdf/0806.4210v1
published: 2008-06-25T23:18:44Z
updated: 2008-06-25T23:18:44Z
title: Agnostically Learning Juntas from Random Walks
abstract:
We prove that the class of functions g:{-1,+1}^n -> {-1,+1} that only depend on an unknown subset of k<<n variables (so-called k-juntas) is agnostically learnable from a random walk in time polynomial in n, 2^{k^2}, epsilon^{-k}, and log(1/delta). In other words, there is an algorithm with the claimed running time that, given epsilon, delta > 0 and access to a random walk on {-1,+1}^n labeled by an arbitrary function f:{-1,+1}^n -> {-1,+1}, finds with probability at least 1-delta a k-junta that is (opt(f)+epsilon)-close to f, where opt(f) denotes the distance of a closest k-junta to f.
[ "Jan Arpe and Elchanan Mossel", "['Jan Arpe' 'Elchanan Mossel']" ]

categories: cs.AI cs.LG
id: 0806.4341
link: http://arxiv.org/pdf/0806.4341v1
published: 2008-06-26T15:21:00Z
updated: 2008-06-26T15:21:00Z
title: On Sequences with Non-Learnable Subsequences
abstract:
The remarkable results of Foster and Vohra were a starting point for a series of papers which show that any sequence of outcomes can be learned (with no prior knowledge) using some universal randomized forecasting algorithm and forecast-dependent checking rules. We show that for the class of all computationally efficient outcome-forecast-based checking rules, this property is violated. Moreover, we present a probabilistic algorithm generating, with probability close to one, a sequence with a subsequence which simultaneously miscalibrates all partially weakly computable randomized forecasting algorithms. Following Dawid's prequential framework, we consider partial recursive randomized algorithms.
authors: Vladimir V. V'yugin

categories: cs.LG cs.AI
id: 0806.4391
link: http://arxiv.org/pdf/0806.4391v1
published: 2008-06-26T20:21:06Z
updated: 2008-06-26T20:21:06Z
title: Prediction with Expert Advice in Games with Unbounded One-Step Gains
abstract:
Games of prediction with expert advice are considered in this paper. We present a modification of the Kalai and Vempala algorithm of following the perturbed leader for the case of unrestrictedly large one-step gains. We show that in the general case the cumulative gain of any probabilistic prediction algorithm can be much worse than the gain of some expert of the pool. Nevertheless, we give a lower bound for this cumulative gain in the general case and construct a universal algorithm which has optimal performance; we also prove that when the one-step gains of the experts in the pool have "limited deviations", the performance of our algorithm is close to the performance of the best expert.
authors: Vladimir V. V'yugin

categories: cs.LG
id: 0806.4422
link: http://arxiv.org/pdf/0806.4422v1
published: 2008-06-27T05:19:19Z
updated: 2008-06-27T05:19:19Z
title: Computationally Efficient Estimators for Dimension Reductions Using Stable Random Projections
abstract:
The method of stable random projections is a tool for efficiently computing the $l_\alpha$ distances using low memory, where $0<\alpha \leq 2$ is a tuning parameter. The method boils down to a statistical estimation task, and various estimators have been proposed, based on the geometric mean, the harmonic mean, the fractional power, etc. This study proposes the optimal quantile estimator, whose main operation is selection, which is considerably less expensive than taking fractional powers, the main operation in previous estimators. Our experiments report that the optimal quantile estimator is nearly one order of magnitude more computationally efficient than previous estimators. For large-scale learning tasks in which storing and computing pairwise distances is a serious bottleneck, this estimator should be desirable. In addition to its computational advantages, the optimal quantile estimator exhibits nice theoretical properties. It is more accurate than previous estimators when $\alpha>1$. We derive its theoretical error bounds and establish the explicit (i.e., no hidden constants) sample complexity bound.
authors: Ping Li
cs.LG
null
0806.4423
null
null
http://arxiv.org/pdf/0806.4423v1
2008-06-27T05:36:09Z
2008-06-27T05:36:09Z
On Approximating the Lp Distances for p>2
Applications in machine learning and data mining require computing pairwise Lp distances in a data matrix A. For massive high-dimensional data, computing all pairwise distances of A can be infeasible; in fact, even storing A or all pairwise distances of A in memory may be infeasible. This paper proposes a simple method for p = 2, 4, 6, ... We first decompose the l_p distance (where p is even) into a sum of 2 marginal norms and p-1 ``inner products'' at different orders. Then we apply normal or sub-Gaussian random projections to approximate the resultant ``inner products,'' assuming that the marginal norms can be computed exactly by a linear scan. We propose two strategies for applying the random projections. The basic projection strategy requires only one projection matrix but is more difficult to analyze, while the alternative projection strategy requires p-1 projection matrices but its theoretical analysis is much easier. In terms of accuracy, at least for p = 4, the basic strategy is always more accurate than the alternative strategy if the data are non-negative, which is common in reality.
[ "Ping Li", "['Ping Li']" ]
cs.LG cs.AI
null
0806.4484
null
null
http://arxiv.org/pdf/0806.4484v2
2009-06-25T20:07:47Z
2008-06-27T10:49:33Z
On empirical meaning of randomness with respect to a real parameter
We study the empirical meaning of randomness with respect to a family of probability distributions $P_\theta$, where $\theta$ is a real parameter, using algorithmic randomness theory. In the case when for a computable probability distribution $P_\theta$ an effectively strongly consistent estimate exists, we show that Levin's a priori semicomputable semimeasure of the set of all $P_\theta$-random sequences is positive if and only if the parameter $\theta$ is a computable real number. Different methods for generating ``meaningful'' $P_\theta$-random sequences with noncomputable $\theta$ are discussed.
[ "Vladimir V'yugin", "[\"Vladimir V'yugin\"]" ]
cs.LG cs.AI
null
0806.4686
null
null
http://arxiv.org/pdf/0806.4686v2
2008-07-04T01:58:32Z
2008-06-28T14:19:50Z
Sparse Online Learning via Truncated Gradient
We propose a general method called truncated gradient to induce sparsity in the weights of online learning algorithms with convex loss functions. This method has several essential properties: The degree of sparsity is continuous -- a parameter controls the rate of sparsification from no sparsification to total sparsification. The approach is theoretically motivated, and an instance of it can be regarded as an online counterpart of the popular $L_1$-regularization method in the batch setting. We prove that small rates of sparsification result in only small additional regret with respect to typical online learning guarantees. The approach works well empirically. We apply the approach to several datasets and find that for datasets with large numbers of features, substantial sparsity is discoverable.
[ "John Langford, Lihong Li, Tong Zhang", "['John Langford' 'Lihong Li' 'Tong Zhang']" ]
cs.LG
null
0807.0093
null
null
http://arxiv.org/pdf/0807.0093v1
2008-07-01T09:46:14Z
2008-07-01T09:46:14Z
Graph Kernels
We present a unified framework to study graph kernels, special cases of which include the random walk graph kernel \citep{GaeFlaWro03,BorOngSchVisetal05}, marginalized graph kernel \citep{KasTsuIno03,KasTsuIno04,MahUedAkuPeretal04}, and geometric kernel on graphs \citep{Gaertner02}. Through extensions of linear algebra to Reproducing Kernel Hilbert Spaces (RKHS) and reduction to a Sylvester equation, we construct an algorithm that improves the time complexity of kernel computation from $O(n^6)$ to $O(n^3)$. When the graphs are sparse, conjugate gradient solvers or fixed-point iterations bring our algorithm into the sub-cubic domain. Experiments on graphs from bioinformatics and other application domains show that it is often more than a thousand times faster than previous approaches. We then explore connections between diffusion kernels \citep{KonLaf02}, regularization on graphs \citep{SmoKon03}, and graph kernels, and use these connections to propose new graph kernels. Finally, we show that rational kernels \citep{CorHafMoh02,CorHafMoh03,CorHafMoh04} when specialized to graphs reduce to the random walk graph kernel.
[ "S.V.N. Vishwanathan, Karsten M. Borgwardt, Imre Risi Kondor, Nicol N.\n Schraudolph", "['S. V. N. Vishwanathan' 'Karsten M. Borgwardt' 'Imre Risi Kondor'\n 'Nicol N. Schraudolph']" ]
math.ST cs.IT cs.LG math.IT stat.ME stat.ML stat.TH
null
0807.1005
null
null
http://arxiv.org/pdf/0807.1005v1
2008-07-07T12:57:23Z
2008-07-07T12:57:23Z
Catching Up Faster by Switching Sooner: A Prequential Solution to the AIC-BIC Dilemma
Bayesian model averaging, model selection and its approximations such as BIC are generally statistically consistent, but sometimes achieve slower rates of convergence than other methods such as AIC and leave-one-out cross-validation. On the other hand, these other methods can be inconsistent. We identify the "catch-up phenomenon" as a novel explanation for the slow convergence of Bayesian methods. Based on this analysis we define the switch distribution, a modification of the Bayesian marginal distribution. We show that, under broad conditions, model selection and prediction based on the switch distribution is both consistent and achieves optimal convergence rates, thereby resolving the AIC-BIC dilemma. The method is practical; we give an efficient implementation. The switch distribution has a data compression interpretation, and can thus be viewed as a "prequential" or MDL method; yet it is different from the MDL methods that are usually considered in the literature. We compare the switch distribution to Bayes factor model selection and leave-one-out cross-validation.
[ "['Tim van Erven' 'Peter Grunwald' 'Steven de Rooij']", "Tim van Erven, Peter Grunwald and Steven de Rooij" ]
cs.AI cs.GT cs.LG
10.1007/978-3-642-13800-3_7
0807.1494
null
null
http://arxiv.org/abs/0807.1494v1
2008-07-09T16:47:36Z
2008-07-09T16:47:36Z
Algorithm Selection as a Bandit Problem with Unbounded Losses
Algorithm selection is typically based on models of algorithm performance, learned during a separate offline training sequence, which can be prohibitively expensive. In recent work, we adopted an online approach, in which a performance model is iteratively updated and used to guide selection on a sequence of problem instances. The resulting exploration-exploitation trade-off was represented as a bandit problem with expert advice, using an existing solver for this game, but this required the setting of an arbitrary bound on algorithm runtimes, thus invalidating the optimal regret of the solver. In this paper, we propose a simpler framework for representing algorithm selection as a bandit problem, with partial information, and an unknown bound on losses. We adapt an existing solver to this game, proving a bound on its expected regret, which holds also for the resulting algorithm selection technique. We present preliminary experiments with a set of SAT solvers on a mixed SAT-UNSAT benchmark.
[ "Matteo Gagliolo and Juergen Schmidhuber", "['Matteo Gagliolo' 'Juergen Schmidhuber']" ]
cs.LG cs.AI
null
0807.1997
null
null
http://arxiv.org/pdf/0807.1997v4
2009-05-13T16:22:00Z
2008-07-12T20:19:18Z
Multi-Instance Learning by Treating Instances As Non-I.I.D. Samples
Multi-instance learning attempts to learn from a training set consisting of labeled bags each containing many unlabeled instances. Previous studies typically treat the instances in the bags as independently and identically distributed. However, the instances in a bag are rarely independent, and therefore a better performance can be expected if the instances are treated in a non-i.i.d. way that exploits the relations among instances. In this paper, we propose a simple yet effective multi-instance learning method, which regards each bag as a graph and uses a specific kernel to distinguish the graphs by considering the features of the nodes as well as the features of the edges that convey some relations among instances. The effectiveness of the proposed method is validated by experiments.
[ "['Zhi-Hua Zhou' 'Yu-Yin Sun' 'Yu-Feng Li']", "Zhi-Hua Zhou, Yu-Yin Sun, Yu-Feng Li" ]
cs.NI cs.LG
null
0807.2677
null
null
http://arxiv.org/pdf/0807.2677v4
2010-02-06T21:48:51Z
2008-07-16T23:59:28Z
Algorithms for Dynamic Spectrum Access with Learning for Cognitive Radio
We study the problem of dynamic spectrum sensing and access in cognitive radio systems as a partially observed Markov decision process (POMDP). A group of cognitive users cooperatively tries to exploit vacancies in primary (licensed) channels whose occupancies follow a Markovian evolution. We first consider the scenario where the cognitive users have perfect knowledge of the distribution of the signals they receive from the primary users. For this problem, we obtain a greedy channel selection and access policy that maximizes the instantaneous reward, while satisfying a constraint on the probability of interfering with licensed transmissions. We also derive an analytical universal upper bound on the performance of the optimal policy. Through simulation, we show that our scheme achieves good performance relative to the upper bound and improved performance relative to an existing scheme. We then consider the more practical scenario where the exact distribution of the signal from the primary is unknown. We assume a parametric model for the distribution and develop an algorithm that can learn the true distribution, still guaranteeing the constraint on the interference probability. We show that this algorithm outperforms the naive design that assumes a worst case value for the parameter. We also provide a proof for the convergence of the learning algorithm.
[ "['Jayakrishnan Unnikrishnan' 'Venugopal Veeravalli']", "Jayakrishnan Unnikrishnan and Venugopal Veeravalli" ]
cs.LG
null
0807.2983
null
null
http://arxiv.org/pdf/0807.2983v1
2008-07-18T14:41:44Z
2008-07-18T14:41:44Z
On Probability Distributions for Trees: Representations, Inference and Learning
We study probability distributions over free algebras of trees. Probability distributions can be seen as particular (formal power) tree series [Berstel et al 82, Esik et al 03], i.e. mappings from trees to a semiring K. A widely studied class of tree series is the class of rational (or recognizable) tree series, which can be defined either in an algebraic way or by means of multiplicity tree automata. We argue that the algebraic representation is very convenient to model probability distributions over a free algebra of trees. First, as in the string case, the algebraic representation allows one to design learning algorithms for the whole class of probability distributions defined by rational tree series. Note that learning algorithms for rational tree series correspond to learning algorithms for weighted tree automata, where both the structure and the weights are learned. Second, the algebraic representation can easily be extended to deal with unranked trees (like XML trees, where a symbol may have an unbounded number of children). Both properties are particularly relevant for applications: nondeterministic automata are required for the inference problem to be relevant (recall that Hidden Markov Models are equivalent to nondeterministic string automata); nowadays applications for Web Information Extraction, Web Services and document processing consider unranked trees.
[ "Fran\\c{c}ois Denis (LIF), Amaury Habrard (LIF), R\\'emi Gilleron (LIFL,\n INRIA Futurs), Marc Tommasi (LIFL, INRIA Futurs, GRAPPA), \\'Edouard Gilbert\n (INRIA Futurs)", "['François Denis' 'Amaury Habrard' 'Rémi Gilleron' 'Marc Tommasi'\n 'Édouard Gilbert']" ]
cs.IT cs.LG math.IT math.ST stat.TH
null
0807.3396
null
null
http://arxiv.org/pdf/0807.3396v1
2008-07-22T07:42:11Z
2008-07-22T07:42:11Z
Universal Denoising of Discrete-time Continuous-Amplitude Signals
We consider the problem of reconstructing a discrete-time signal (sequence) with continuous-valued components corrupted by a known memoryless channel. When performance is measured using a per-symbol loss function satisfying mild regularity conditions, we develop a sequence of denoisers that, although independent of the distribution of the underlying `clean' sequence, is universally optimal in the limit of large sequence length. This sequence of denoisers is universal in the sense of performing as well as any sliding window denoising scheme which may be optimized for the underlying clean signal. Our results are initially developed in a ``semi-stochastic'' setting, where the noiseless signal is an unknown individual sequence, and the only source of randomness is due to the channel noise. It is subsequently shown that in the fully stochastic setting, where the noiseless sequence is a stationary stochastic process, our schemes universally attain optimum performance. The proposed schemes draw from nonparametric density estimation techniques and are practically implementable. We demonstrate efficacy of the proposed schemes in denoising gray-scale images in the conventional additive white Gaussian noise setting, with additional promising results for less conventional noise distributions.
[ "['Kamakshi Sivaramakrishnan' 'Tsachy Weissman']", "Kamakshi Sivaramakrishnan and Tsachy Weissman" ]
cs.LG
null
0807.4198
null
null
http://arxiv.org/pdf/0807.4198v2
2009-07-16T00:30:26Z
2008-07-25T22:50:46Z
Positive factor networks: A graphical framework for modeling non-negative sequential data
We present a novel graphical framework for modeling non-negative sequential data with hierarchical structure. Our model corresponds to a network of coupled non-negative matrix factorization (NMF) modules, which we refer to as a positive factor network (PFN). The data model is linear, subject to non-negativity constraints, so that observation data consisting of an additive combination of individually representable observations is also representable by the network. This is a desirable property for modeling problems in computational auditory scene analysis, since distinct sound sources in the environment are often well-modeled as combining additively in the corresponding magnitude spectrogram. We propose inference and learning algorithms that leverage existing NMF algorithms and that are straightforward to implement. We present a target tracking example and provide results for synthetic observation data which serve to illustrate the interesting properties of PFNs and motivate their potential usefulness in applications such as music transcription, source separation, and speech recognition. We show how a target process characterized by a hierarchical state transition model can be represented as a PFN. Our results illustrate that a PFN which is defined in terms of a single target observation can then be used to effectively track the states of multiple simultaneous targets. Our results show that the quality of the inferred target states degrades gradually as the observation noise is increased. We also present results for an example in which meaningful hierarchical features are extracted from a spectrogram. Such a hierarchical representation could be useful for music transcription and source separation applications. We also propose a network for language modeling.
[ "Brian K. Vogel", "['Brian K. Vogel']" ]
cs.IT cs.LG math.IT math.ST stat.TH
null
0808.0845
null
null
http://arxiv.org/pdf/0808.0845v1
2008-08-06T14:20:56Z
2008-08-06T14:20:56Z
Mutual information is copula entropy
We prove that mutual information is actually negative copula entropy, based on which a method for mutual information estimation is proposed.
[ "['Jian Ma' 'Zengqi Sun']", "Jian Ma and Zengqi Sun" ]
cs.LG cs.AI
10.1016/j.fss.2007.04.026
0808.2984
null
null
http://arxiv.org/abs/0808.2984v1
2008-08-21T19:54:04Z
2008-08-21T19:54:04Z
Building an interpretable fuzzy rule base from data using Orthogonal Least Squares - Application to a depollution problem
In many fields where human understanding plays a crucial role, such as bioprocesses, the capacity of extracting knowledge from data is of critical importance. Within this framework, fuzzy learning methods, if properly used, can greatly help human experts. Amongst these methods, the aim of orthogonal transformations, which have been proven to be mathematically robust, is to build rules from a set of training data and to select the most important ones by linear regression or rank revealing techniques. The OLS algorithm is a good representative of those methods. However, it was originally designed with only numerical performance in mind. Thus, we propose some modifications of the original method to take interpretability into account. After recalling the original algorithm, this paper presents the changes made to the original method, then discusses some results obtained from benchmark problems. Finally, the algorithm is applied to a real-world fault detection depollution problem.
[ "S\\'ebastien Destercke (IRSN, IRIT), Serge Guillaume (ITAP), Brigitte\n Charnomordic (ASB)", "['Sébastien Destercke' 'Serge Guillaume' 'Brigitte Charnomordic']" ]
cs.LG cs.AI
10.1016/j.artint.2011.10.002
0808.3231
null
null
http://arxiv.org/abs/0808.3231v4
2011-10-23T16:22:49Z
2008-08-24T06:31:43Z
Multi-Instance Multi-Label Learning
In this paper, we propose the MIML (Multi-Instance Multi-Label learning) framework where an example is described by multiple instances and associated with multiple class labels. Compared to traditional learning frameworks, the MIML framework is more convenient and natural for representing complicated objects which have multiple semantic meanings. To learn from MIML examples, we propose the MimlBoost and MimlSvm algorithms based on a simple degeneration strategy, and experiments show that solving problems involving complicated objects with multiple semantic meanings in the MIML framework can lead to good performance. Considering that the degeneration process may lose information, we propose the D-MimlSvm algorithm which tackles MIML problems directly in a regularization framework. Moreover, we show that even when we do not have access to the real objects and thus cannot capture more information from real objects by using the MIML representation, MIML is still useful. We propose the InsDif and SubCod algorithms. InsDif works by transforming single-instances into the MIML representation for learning, while SubCod works by transforming single-label examples into the MIML representation for learning. Experiments show that in some tasks they are able to achieve better performance than learning the single-instances or single-label examples directly.
[ "['Zhi-Hua Zhou' 'Min-Ling Zhang' 'Sheng-Jun Huang' 'Yu-Feng Li']", "Zhi-Hua Zhou, Min-Ling Zhang, Sheng-Jun Huang, Yu-Feng Li" ]
cs.LG cs.GT
null
0808.3746
null
null
http://arxiv.org/pdf/0808.3746v2
2008-10-21T08:03:45Z
2008-08-27T17:30:22Z
A game-theoretic version of Oakes' example for randomized forecasting
Using the game-theoretic framework for probability, Vovk and Shafer have shown that it is always possible, using randomization, to make sequential probability forecasts that pass any countable set of well-behaved statistical tests. This result generalizes work by other authors, who consider only tests of calibration. We complement this result with a lower bound. We show that Vovk and Shafer's result is valid only when the forecasts are computed with unrestrictedly increasing degree of accuracy. When some level of discreteness is fixed, we present a game-theoretic generalization of Oakes' example for randomized forecasting, that is, a test failing any given method of deterministic forecasting; originally, this example was presented for deterministic calibration.
[ "[\"Vladimir V. V'yugin\"]", "Vladimir V. V'yugin" ]
cs.IT cs.LG math.IT
null
0809.0032
null
null
http://arxiv.org/pdf/0809.0032v1
2008-08-30T01:05:29Z
2008-08-30T01:05:29Z
A Variational Inference Framework for Soft-In-Soft-Out Detection in Multiple Access Channels
We propose a unified framework for deriving and studying soft-in-soft-out (SISO) detection in interference channels using the concept of variational inference. The proposed framework may be used in multiple-access interference (MAI), inter-symbol interference (ISI), and multiple-input multiple-output (MIMO) channels. Without loss of generality, we will focus our attention on turbo multiuser detection, to facilitate a more concrete discussion. It is shown that, with some loss of optimality, variational inference avoids the exponential complexity of a posteriori probability (APP) detection by optimizing a closely-related, but much more manageable, objective function called variational free energy. In addition to its systematic appeal, there are several other advantages to this viewpoint. First of all, it provides unified and rigorous justifications for numerous detectors that were proposed on radically different grounds, and facilitates convenient joint detection and decoding (utilizing the turbo principle) when error-control codes are incorporated. Secondly, efficient joint parameter estimation and data detection is possible via the variational expectation maximization (EM) algorithm, such that the detrimental effect of inaccurate channel knowledge at the receiver may be dealt with systematically. We are also able to extend BPSK-based SISO detection schemes to arbitrary square QAM constellations in a rigorous manner using a variational argument.
[ "['D. D. Lin' 'T. J. Lim']", "D. D. Lin and T. J. Lim" ]
cs.CL cs.IR cs.LG
null
0809.0124
null
null
http://arxiv.org/pdf/0809.0124v1
2008-08-31T14:00:26Z
2008-08-31T14:00:26Z
A Uniform Approach to Analogies, Synonyms, Antonyms, and Associations
Recognizing analogies, synonyms, antonyms, and associations appear to be four distinct tasks, requiring distinct NLP algorithms. In the past, the four tasks have been treated independently, using a wide variety of algorithms. These four semantic classes, however, are a tiny sample of the full range of semantic phenomena, and we cannot afford to create ad hoc algorithms for each semantic phenomenon; we need to seek a unified approach. We propose to subsume a broad range of phenomena under analogies. To limit the scope of this paper, we restrict our attention to the subsumption of synonyms, antonyms, and associations. We introduce a supervised corpus-based machine learning algorithm for classifying analogous word pairs, and we show that it can solve multiple-choice SAT analogy questions, TOEFL synonym questions, ESL synonym-antonym questions, and similar-associated-both questions from cognitive psychology.
[ "['Peter D. Turney']", "Peter D. Turney (National Research Council of Canada)" ]
quant-ph cs.LG
null
0809.0444
null
null
http://arxiv.org/pdf/0809.0444v2
2008-09-02T20:02:34Z
2008-09-02T19:56:54Z
Quantum classification
Quantum classification is defined as the task of predicting the associated class of an unknown quantum state drawn from an ensemble of pure states given a finite number of copies of this state. By recasting the state discrimination problem within the framework of Machine Learning (ML), we can use the notion of learning reduction coming from classical ML to solve different variants of the classification task, such as the weighted binary and the multiclass versions.
[ "S\\'ebastien Gambs", "['Sébastien Gambs']" ]
cs.LG cs.NE stat.ML
10.4018/978-1-60566-766-9
0809.0490
null
null
http://arxiv.org/abs/0809.0490v2
2011-05-09T13:23:08Z
2008-09-02T18:04:53Z
Principal Graphs and Manifolds
In many physical, statistical, biological and other investigations it is desirable to approximate a system of points by objects of lower dimension and/or complexity. For this purpose, Karl Pearson invented principal component analysis in 1901 and found 'lines and planes of closest fit to system of points'. The famous k-means algorithm solves the approximation problem too, but by finite sets instead of lines and planes. This chapter gives a brief practical introduction to the methods of construction of general principal objects, i.e. objects embedded in the 'middle' of the multidimensional data set. As a basis, the unifying framework of mean squared distance approximation of finite datasets is selected. Principal graphs and manifolds are constructed as generalisations of principal components and k-means principal points. For this purpose, the family of expectation/maximisation algorithms with nearest generalisations is presented. Construction of principal graphs with controlled complexity is based on the graph grammar approach.
[ "A. N. Gorban, A. Y. Zinovyev", "['A. N. Gorban' 'A. Y. Zinovyev']" ]
cs.IT cs.LG math.IT math.ST stat.ME stat.TH
null
0809.1017
null
null
http://arxiv.org/pdf/0809.1017v1
2008-09-05T12:18:15Z
2008-09-05T12:18:15Z
Entropy Concentration and the Empirical Coding Game
We give a characterization of Maximum Entropy/Minimum Relative Entropy inference by providing two `strong entropy concentration' theorems. These theorems unify and generalize Jaynes' `concentration phenomenon' and Van Campenhout and Cover's `conditional limit theorem'. The theorems characterize exactly in what sense a prior distribution Q conditioned on a given constraint, and the distribution P, minimizing the relative entropy D(P ||Q) over all distributions satisfying the constraint, are `close' to each other. We then apply our theorems to establish the relationship between entropy concentration and a game-theoretic characterization of Maximum Entropy Inference due to Topsoe and others.
[ "Peter Grunwald", "['Peter Grunwald']" ]
math.ST cs.LG math.PR stat.ME stat.TH
10.1103/PhysRevE.79.026307
0809.1241
null
null
http://arxiv.org/abs/0809.1241v35
2012-12-05T00:39:40Z
2008-09-08T14:03:24Z
A New Framework of Multistage Estimation
In this paper, we have established a unified framework of multistage parameter estimation. We demonstrate that a wide variety of statistical problems such as fixed-sample-size interval estimation, point estimation with error control, bounded-width confidence intervals, interval estimation following hypothesis testing, and construction of confidence sequences can be cast into the general framework of constructing sequential random intervals with prescribed coverage probabilities. We have developed exact methods for the construction of such sequential random intervals in the context of multistage sampling. In particular, we have established inclusion principle and coverage tuning techniques to control and adjust the coverage probabilities of sequential random intervals. We have obtained concrete sampling schemes which are unprecedentedly efficient in terms of sampling effort as compared to existing procedures.
[ "Xinjia Chen", "['Xinjia Chen']" ]
cs.LG math.ST stat.ML stat.TH
null
0809.1270
null
null
http://arxiv.org/pdf/0809.1270v1
2008-09-08T04:18:17Z
2008-09-08T04:18:17Z
Predictive Hypothesis Identification
While statistics focuses on hypothesis testing and on estimating (properties of) the true sampling distribution, in machine learning the performance of learning algorithms on future data is the primary issue. In this paper we bridge the gap with a general principle (PHI) that identifies hypotheses with best predictive performance. This includes predictive point and interval estimation, simple and composite hypothesis testing, (mixture) model selection, and others as special cases. For concrete instantiations we will recover well-known methods, variations thereof, and new ones. PHI nicely justifies, reconciles, and blends (a reparametrization invariant variation of) MAP, ML, MDL, and moment estimation. One particular feature of PHI is that it can genuinely deal with nested hypotheses.
[ "Marcus Hutter", "['Marcus Hutter']" ]
cs.LG stat.ML
null
0809.1493
null
null
http://arxiv.org/pdf/0809.1493v1
2008-09-09T06:48:10Z
2008-09-09T06:48:10Z
Exploring Large Feature Spaces with Hierarchical Multiple Kernel Learning
For supervised and unsupervised learning, positive definite kernels allow the use of large and potentially infinite-dimensional feature spaces with a computational cost that only depends on the number of observations. This is usually done through the penalization of predictor functions by Euclidean or Hilbertian norms. In this paper, we explore penalizing by sparsity-inducing norms such as the l1-norm or the block l1-norm. We assume that the kernel decomposes into a large sum of individual basis kernels which can be embedded in a directed acyclic graph; we show that it is then possible to perform kernel selection through a hierarchical multiple kernel learning framework, in polynomial time in the number of selected kernels. This framework is naturally applied to nonlinear variable selection; our extensive simulations on synthetic datasets and datasets from the UCI repository show that efficiently exploring the large feature space through sparsity-inducing norms leads to state-of-the-art predictive performance.
[ "['Francis Bach']", "Francis Bach (INRIA Rocquencourt)" ]
cs.LG
null
0809.1590
null
null
http://arxiv.org/pdf/0809.1590v1
2008-09-09T16:11:12Z
2008-09-09T16:11:12Z
When is there a representer theorem? Vector versus matrix regularizers
We consider a general class of regularization methods which learn a vector of parameters on the basis of linear measurements. It is well known that if the regularizer is a nondecreasing function of the inner product then the learned vector is a linear combination of the input data. This result, known as the {\em representer theorem}, is at the basis of kernel-based methods in machine learning. In this paper, we prove the necessity of the above condition, thereby completing the characterization of kernel methods based on regularization. We further extend our analysis to regularization methods which learn a matrix, a problem which is motivated by the application to multi-task learning. In this context, we study a more general representer theorem, which holds for a larger class of regularizers. We provide a necessary and sufficient condition for this class of matrix regularizers and highlight them with some concrete examples of practical importance. Our analysis uses basic principles from matrix theory, especially the useful notion of matrix nondecreasing function.
[ "Andreas Argyriou, Charles Micchelli and Massimiliano Pontil", "['Andreas Argyriou' 'Charles Micchelli' 'Massimiliano Pontil']" ]
cs.DS cs.DM cs.LG
null
0809.2075
null
null
http://arxiv.org/pdf/0809.2075v2
2008-09-12T07:02:37Z
2008-09-11T19:32:49Z
Low congestion online routing and an improved mistake bound for online prediction of graph labeling
In this paper, we show a connection between a certain online low-congestion routing problem and online prediction of graph labeling. More specifically, we prove that if there exists a routing scheme that guarantees a congestion of $\alpha$ on any edge, there exists an online prediction algorithm with mistake bound $\alpha$ times the cut size, which is the size of the cut induced by the label partitioning of graph vertices. With the previously known bound of $O(\log n)$ for $\alpha$ for the routing problem on trees with $n$ vertices, we obtain an improved prediction algorithm for graphs with high effective resistance. In contrast to previous approaches that move the graph problem into problems in vector space using the graph Laplacian and rely on the analysis of the perceptron algorithm, our proofs are purely combinatorial. Furthermore, our approach directly generalizes to the case where labels are not binary.
[ "Jittat Fakcharoenphol, Boonserm Kijsirikul", "['Jittat Fakcharoenphol' 'Boonserm Kijsirikul']" ]
cs.LG
null
0809.2085
null
null
http://arxiv.org/pdf/0809.2085v1
2008-09-11T19:01:39Z
2008-09-11T19:01:39Z
Clustered Multi-Task Learning: A Convex Formulation
In multi-task learning several related tasks are considered simultaneously, with the hope that by an appropriate sharing of information across tasks, each task may benefit from the others. In the context of learning linear functions for supervised classification or regression, this can be achieved by including a priori information about the weight vectors associated with the tasks, and how they are expected to be related to each other. In this paper, we assume that tasks are clustered into groups, which are unknown beforehand, and that tasks within a group have similar weight vectors. We design a new spectral norm that encodes this a priori assumption, without the prior knowledge of the partition of tasks into groups, resulting in a new convex optimization formulation for multi-task learning. We show in simulations on synthetic examples and on the IEDB MHC-I binding dataset, that our approach outperforms well-known convex methods for multi-task learning, as well as related non convex methods dedicated to the same problem.
[ "['Laurent Jacob' 'Francis Bach' 'Jean-Philippe Vert']", "Laurent Jacob, Francis Bach (INRIA Rocquencourt), Jean-Philippe Vert" ]
cs.IT cs.LG math.IT math.ST stat.TH
null
0809.2754
null
null
http://arxiv.org/pdf/0809.2754v2
2008-09-17T17:25:44Z
2008-09-16T16:38:18Z
Algorithmic information theory
We introduce algorithmic information theory, also known as the theory of Kolmogorov complexity. We explain the main concepts of this quantitative approach to defining `information'. We discuss the extent to which Kolmogorov's and Shannon's information theory have a common purpose, and where they are fundamentally different. We indicate how recent developments within the theory allow one to formally distinguish between `structural' (meaningful) and `random' information as measured by the Kolmogorov structure function, which leads to a mathematical formalization of Occam's razor in inductive inference. We end by discussing some of the philosophical implications of the theory.
[ "['Peter D. Grunwald' 'Paul M. B. Vitanyi']", "Peter D. Grunwald (CWI) and Paul M.B. Vitanyi (CWI and Univ.\n Amsterdam)" ]
cs.LG cs.AI
null
0809.2792
null
null
http://arxiv.org/pdf/0809.2792v3
2009-06-24T17:45:11Z
2008-09-16T20:05:00Z
Predicting Abnormal Returns From News Using Text Classification
We show how text from news articles can be used to predict intraday price movements of financial assets using support vector machines. Multiple kernel learning is used to combine equity returns with text as predictive features to increase classification performance and we develop an analytic center cutting plane method to solve the kernel learning problem efficiently. We observe that while the direction of returns is not predictable using either text or returns, their size is, with text features producing significantly better performance than historical returns alone.
[ "Ronny Luss, Alexandre d'Aspremont", "['Ronny Luss' \"Alexandre d'Aspremont\"]" ]
math.ST cs.LG math.PR stat.ME stat.TH
null
0809.3170
null
null
http://arxiv.org/pdf/0809.3170v25
2012-12-05T00:35:38Z
2008-09-18T14:25:06Z
A New Framework of Multistage Hypothesis Tests
In this paper, we have established a general framework of multistage hypothesis tests which applies to arbitrarily many mutually exclusive and exhaustive composite hypotheses. Within the new framework, we have constructed specific multistage tests which rigorously control the risk of committing decision errors and are more efficient than previous tests in terms of average sample number and the number of sampling operations. Without truncation, the sample numbers of our testing plans are absolutely bounded.
[ "Xinjia Chen", "['Xinjia Chen']" ]
cs.CV cs.AI cs.LG
null
0809.3352
null
null
http://arxiv.org/pdf/0809.3352v1
2008-09-19T11:02:39Z
2008-09-19T11:02:39Z
Generalized Prediction Intervals for Arbitrary Distributed High-Dimensional Data
This paper generalizes the traditional statistical concept of prediction intervals for arbitrary probability density functions in high-dimensional feature spaces by introducing significance level distributions, which provides interval-independent probabilities for continuous random variables. The advantage of the transformation of a probability density function into a significance level distribution is that it enables one-class classification or outlier detection in a direct manner.
[ "['Steffen Kuehn']", "Steffen Kuehn" ]
cs.CV cs.LG
null
0809.3618
null
null
http://arxiv.org/pdf/0809.3618v1
2008-09-21T23:23:26Z
2008-09-21T23:23:26Z
Robust Near-Isometric Matching via Structured Learning of Graphical Models
Models for near-rigid shape matching are typically based on distance-related features, in order to infer matches that are consistent with the isometric assumption. However, real shapes from image datasets, even when expected to be related by "almost isometric" transformations, are actually subject not only to noise but also, to some limited degree, to variations in appearance and scale. In this paper, we introduce a graphical model that parameterises appearance, distance, and angle features and we learn all of the involved parameters via structured prediction. The outcome is a model for near-rigid shape matching which is robust in the sense that it is able to capture the possibly limited but still important scale and appearance variations. Our experimental results reveal substantial improvements upon recent successful models, while maintaining similar running times.
[ "Julian J. McAuley, Tiberio S. Caetano, Alexander J. Smola", "['Julian J. McAuley' 'Tiberio S. Caetano' 'Alexander J. Smola']" ]
cs.LG cs.AI cs.IT math.IT
null
0809.4086
null
null
http://arxiv.org/pdf/0809.4086v2
2011-01-08T03:16:39Z
2008-09-24T05:34:56Z
Learning Hidden Markov Models using Non-Negative Matrix Factorization
The Baum-Welch algorithm together with its derivatives and variations has been the main technique for learning Hidden Markov Models (HMM) from observational data. We present an HMM learning algorithm based on the non-negative matrix factorization (NMF) of higher order Markovian statistics that is structurally different from the Baum-Welch algorithm and its associated approaches. The described algorithm supports estimation of the number of recurrent states of an HMM and iterates the non-negative matrix factorization (NMF) algorithm to improve the learned HMM parameters. Numerical examples are provided as well.
[ "George Cybenko and Valentino Crespi", "['George Cybenko' 'Valentino Crespi']" ]
cs.LG
null
0809.4632
null
null
http://arxiv.org/pdf/0809.4632v1
2008-09-26T13:47:36Z
2008-09-26T13:47:36Z
Surrogate Learning - An Approach for Semi-Supervised Classification
We consider the task of learning a classifier from the feature space $\mathcal{X}$ to the set of classes $\mathcal{Y} = \{0, 1\}$, when the features can be partitioned into class-conditionally independent feature sets $\mathcal{X}_1$ and $\mathcal{X}_2$. We show the surprising fact that the class-conditional independence can be used to represent the original learning task in terms of 1) learning a classifier from $\mathcal{X}_2$ to $\mathcal{X}_1$ and 2) learning the class-conditional distribution of the feature set $\mathcal{X}_1$. This fact can be exploited for semi-supervised learning because the former task can be accomplished purely from unlabeled samples. We present experimental evaluation of the idea in two real world applications.
[ "['Sriharsha Veeramachaneni' 'Ravikumar Kondadadi']", "Sriharsha Veeramachaneni and Ravikumar Kondadadi" ]
cs.DS cs.LG
null
0809.4882
null
null
http://arxiv.org/pdf/0809.4882v1
2008-09-29T01:58:13Z
2008-09-29T01:58:13Z
Multi-Armed Bandits in Metric Spaces
In a multi-armed bandit problem, an online algorithm chooses from a set of strategies in a sequence of trials so as to maximize the total payoff of the chosen strategies. While the performance of bandit algorithms with a small finite strategy set is quite well understood, bandit problems with large strategy sets are still a topic of very active investigation, motivated by practical applications such as online auctions and web advertisement. The goal of such research is to identify broad and natural classes of strategy sets and payoff functions which enable the design of efficient solutions. In this work we study a very general setting for the multi-armed bandit problem in which the strategies form a metric space, and the payoff function satisfies a Lipschitz condition with respect to the metric. We refer to this problem as the "Lipschitz MAB problem". We present a complete solution for the multi-armed bandit problem in this setting. That is, for every metric space (L,X) we define an isometry invariant which bounds from below the performance of Lipschitz MAB algorithms for X, and we present an algorithm which comes arbitrarily close to meeting this bound. Furthermore, our technique gives even better results for benign payoff functions.
[ "Robert Kleinberg, Aleksandrs Slivkins and Eli Upfal", "['Robert Kleinberg' 'Aleksandrs Slivkins' 'Eli Upfal']" ]
cs.IT cs.LG math.IT
null
0809.4883
null
null
http://arxiv.org/pdf/0809.4883v3
2010-05-08T11:34:25Z
2008-09-29T14:01:13Z
Thresholded Basis Pursuit: An LP Algorithm for Achieving Optimal Support Recovery for Sparse and Approximately Sparse Signals from Noisy Random Measurements
In this paper we present a linear programming solution for sign pattern recovery of a sparse signal from noisy random projections of the signal. We consider two types of noise models: input noise, where noise enters before the random projection; and output noise, where noise enters after the random projection. Sign pattern recovery involves the estimation of the sign pattern of a sparse signal. Our idea is to pretend that no noise exists, solve the noiseless $\ell_1$ problem, namely, $\min \|\beta\|_1 ~ s.t. ~ y=G \beta$, and quantize the resulting solution. We show that the quantized solution perfectly reconstructs the sign pattern of a sufficiently sparse signal. Specifically, we show that the sign pattern of an arbitrary k-sparse, n-dimensional signal $x$ can be recovered with $SNR=\Omega(\log n)$ and measurements scaling as $m= \Omega(k \log{n/k})$ for all sparsity levels $k$ satisfying $0< k \leq \alpha n$, where $\alpha$ is a sufficiently small positive constant. Surprisingly, this bound matches the optimal \emph{Max-Likelihood} performance bounds in terms of $SNR$, required number of measurements, and admissible sparsity level in an order-wise sense. In contrast to our results, previous results based on LASSO and Max-Correlation techniques either assume significantly larger $SNR$, sublinear sparsity levels or restrictive assumptions on signal sets. Our proof technique is based on noisy perturbation of the noiseless $\ell_1$ problem, in that, we estimate the maximum admissible noise level before sign pattern recovery fails.
[ "V. Saligrama, M. Zhao", "['V. Saligrama' 'M. Zhao']" ]
cs.NA cs.LG
null
0810.0877
null
null
http://arxiv.org/pdf/0810.0877v1
2008-10-06T04:58:44Z
2008-10-06T04:58:44Z
Bias-Variance Techniques for Monte Carlo Optimization: Cross-validation for the CE Method
In this paper, we examine the CE method in the broad context of Monte Carlo Optimization (MCO) and Parametric Learning (PL), a type of machine learning. A well-known overarching principle used to improve the performance of many PL algorithms is the bias-variance tradeoff. This tradeoff has been used to improve PL algorithms ranging from Monte Carlo estimation of integrals, to linear estimation, to general statistical estimation. Moreover, as described in previous work, MCO is very closely related to PL. Owing to this similarity, the bias-variance tradeoff affects MCO performance, just as it does PL performance. In this article, we exploit the bias-variance tradeoff to enhance the performance of MCO algorithms. We use the technique of cross-validation, a technique based on the bias-variance tradeoff, to significantly improve the performance of the Cross Entropy (CE) method, which is an MCO algorithm. In previous work we have confirmed that other PL techniques improve the performance of other MCO algorithms. We conclude that the many techniques pioneered in PL could be investigated as ways to improve MCO algorithms in general, and the CE method in particular.
[ "Dev Rajnarayan and David Wolpert", "['Dev Rajnarayan' 'David Wolpert']" ]
cs.NI cs.LG
null
0810.1430
null
null
http://arxiv.org/pdf/0810.1430v1
2008-10-08T13:22:46Z
2008-10-08T13:22:46Z
Blind Cognitive MAC Protocols
We consider the design of cognitive Medium Access Control (MAC) protocols enabling an unlicensed (secondary) transmitter-receiver pair to communicate over the idle periods of a set of licensed channels, i.e., the primary network. The objective is to maximize data throughput while maintaining the synchronization between secondary users and avoiding interference with licensed (primary) users. No statistical information about the primary traffic is assumed to be available a-priori to the secondary user. We investigate two distinct sensing scenarios. In the first, the secondary transmitter is capable of sensing all the primary channels, whereas it senses one channel only in the second scenario. In both cases, we propose MAC protocols that efficiently learn the statistics of the primary traffic online. Our simulation results demonstrate that the proposed blind protocols asymptotically achieve the throughput obtained when prior knowledge of primary traffic statistics is available.
[ "Omar Mehanna, Ahmed Sultan and Hesham El Gamal", "['Omar Mehanna' 'Ahmed Sultan' 'Hesham El Gamal']" ]
cs.LG cs.IT math.IT
null
0810.1648
null
null
http://arxiv.org/pdf/0810.1648v1
2008-10-09T12:56:43Z
2008-10-09T12:56:43Z
A Gaussian Belief Propagation Solver for Large Scale Support Vector Machines
Support vector machines (SVMs) are an extremely successful class of classification and regression algorithms. Building an SVM entails solving a constrained convex quadratic programming problem, which is quadratic in the number of training samples. We introduce an efficient parallel implementation of a support vector regression solver, based on the Gaussian Belief Propagation algorithm (GaBP). In this paper, we demonstrate that methods from the complex system domain could be utilized for performing efficient distributed computation. We compare the proposed algorithm to previously proposed distributed and single-node SVM solvers. Our comparison shows that the proposed algorithm is just as accurate as these solvers, while being significantly faster, especially for large datasets. We demonstrate scalability of the proposed algorithm to up to 1,024 computing nodes and hundreds of thousands of data points using an IBM Blue Gene supercomputer. As far as we know, our work is the largest parallel implementation of belief propagation ever done, demonstrating the applicability of this algorithm for large scale distributed computing systems.
[ "Danny Bickson, Elad Yom-Tov and Danny Dolev", "['Danny Bickson' 'Elad Yom-Tov' 'Danny Dolev']" ]
cs.CV cs.LG
10.1109/TPAMI.2008.275
0810.2434
null
null
http://arxiv.org/abs/0810.2434v1
2008-10-14T14:22:05Z
2008-10-14T14:22:05Z
Faster and better: a machine learning approach to corner detection
The repeatability and efficiency of a corner detector determines how likely it is to be useful in a real-world application. The repeatability is important because the same scene viewed from different positions should yield features which correspond to the same real-world 3D locations [Schmid et al 2000]. The efficiency is important because this determines whether the detector combined with further processing can operate at frame rate. Three advances are described in this paper. First, we present a new heuristic for feature detection, and using machine learning we derive a feature detector from this which can fully process live PAL video using less than 5% of the available processing time. By comparison, most other detectors cannot even operate at frame rate (Harris detector 115%, SIFT 195%). Second, we generalize the detector, allowing it to be optimized for repeatability, with little loss of efficiency. Third, we carry out a rigorous comparison of corner detectors based on the above repeatability criterion applied to 3D scenes. We show that despite being principally constructed for speed, on these stringent tests, our heuristic detector significantly outperforms existing feature detectors. Finally, the comparison demonstrates that using machine learning produces significant improvements in repeatability, yielding a detector that is both very fast and very high quality.
[ "Edward Rosten, Reid Porter, Tom Drummond", "['Edward Rosten' 'Reid Porter' 'Tom Drummond']" ]
cs.IR cs.LG
null
0810.2764
null
null
http://arxiv.org/pdf/0810.2764v1
2008-10-15T19:03:10Z
2008-10-15T19:03:10Z
A Simple Linear Ranking Algorithm Using Query Dependent Intercept Variables
The LETOR website contains three information retrieval datasets used as a benchmark for testing machine learning ideas for ranking. Algorithms participating in the challenge are required to assign score values to search results for a collection of queries, and are measured using standard IR ranking measures (NDCG, precision, MAP) that depend only on the relative score-induced order of the results. Similarly to many of the ideas proposed in the participating algorithms, we train a linear classifier. In contrast with other participating algorithms, we define an additional free variable (intercept, or benchmark) for each query. This allows expressing the fact that results for different queries are incomparable for the purpose of determining relevance. The cost of this idea is the addition of relatively few nuisance parameters. Our approach is simple, and we used a standard logistic regression library to test it. The results beat the reported participating algorithms. Hence, it seems promising to combine our approach with other more complex ideas.
[ "Nir Ailon", "['Nir Ailon']" ]
cs.AI cs.CC cs.LG
null
0810.3451
null
null
http://arxiv.org/pdf/0810.3451v1
2008-10-20T02:09:16Z
2008-10-20T02:09:16Z
The many faces of optimism - Extended version
The exploration-exploitation dilemma has been an intriguing and unsolved problem within the framework of reinforcement learning. "Optimism in the face of uncertainty" and model building play central roles in advanced exploration methods. Here, we integrate several concepts and obtain a fast and simple algorithm. We show that the proposed algorithm finds a near-optimal policy in polynomial time, and give experimental evidence that it is robust and efficient compared to its ascendants.
[ "['István Szita' 'András Lőrincz']", "Istv\\'an Szita, Andr\\'as L\\H{o}rincz" ]
cs.LG cs.AI q-bio.QM
null
0810.3525
null
null
http://arxiv.org/pdf/0810.3525v1
2008-10-20T11:09:15Z
2008-10-20T11:09:15Z
The use of entropy to measure structural diversity
In this paper entropy-based methods are compared and used to measure the structural diversity of an ensemble of 21 classifiers. This kind of measure is mostly applied in ecology, whereby species counts are used as a measure of diversity. The measures used were Shannon entropy and the Simpson and Berger-Parker diversity indices. As the diversity indices increased, so did the accuracy of the ensemble. An ensemble dominated by classifiers with the same structure produced poor accuracy. The uncertainty rule from information theory was also used to further define diversity. Genetic algorithms were used to find the optimal ensemble by using the diversity indices as the cost function. The method of voting was used to aggregate the decisions.
[ "L. Masisi, V. Nelwamondo and T. Marwala", "['L. Masisi' 'V. Nelwamondo' 'T. Marwala']" ]
cs.AI cs.LG
null
0810.3605
null
null
http://arxiv.org/pdf/0810.3605v3
2010-04-11T00:35:51Z
2008-10-20T16:47:47Z
A Minimum Relative Entropy Principle for Learning and Acting
This paper proposes a method to construct an adaptive agent that is universal with respect to a given class of experts, where each expert is an agent that has been designed specifically for a particular environment. This adaptive control problem is formalized as the problem of minimizing the relative entropy of the adaptive agent from the expert that is most suitable for the unknown environment. If the agent is a passive observer, then the optimal solution is the well-known Bayesian predictor. However, if the agent is active, then its past actions need to be treated as causal interventions on the I/O stream rather than normal probability conditions. Here it is shown that the solution to this new variational problem is given by a stochastic controller called the Bayesian control rule, which implements adaptive behavior as a mixture of experts. Furthermore, it is shown that under mild assumptions, the Bayesian control rule converges to the control law of the most suitable expert.
[ "['Pedro A. Ortega' 'Daniel A. Braun']", "Pedro A. Ortega, Daniel A. Braun" ]
quant-ph cs.AI cs.LG
10.1109/TSMCB.2008.925743
0810.3828
null
null
http://arxiv.org/abs/0810.3828v1
2008-10-21T13:38:33Z
2008-10-21T13:38:33Z
Quantum reinforcement learning
The key approaches for machine learning, especially learning in unknown probabilistic environments, are new representations and computation mechanisms. In this paper, a novel quantum reinforcement learning (QRL) method is proposed by combining quantum theory and reinforcement learning (RL). Inspired by the state superposition principle and quantum parallelism, a framework of value updating algorithm is introduced. The state (action) in traditional RL is identified as the eigen state (eigen action) in QRL. The state (action) set can be represented with a quantum superposition state, and the eigen state (eigen action) can be obtained by randomly observing the simulated quantum state according to the collapse postulate of quantum measurement. The probability of the eigen action is determined by the probability amplitude, which is updated in parallel according to rewards. Some related characteristics of QRL such as convergence, optimality and balancing between exploration and exploitation are also analyzed, which shows that this approach makes a good tradeoff between exploration and exploitation using the probability amplitude and can speed up learning through the quantum parallelism. To evaluate the performance and practicability of QRL, several simulated experiments are given and the results demonstrate the effectiveness and superiority of the QRL algorithm for some complex problems. The present work is also an effective exploration on the application of quantum computation to artificial intelligence.
[ "['Daoyi Dong' 'Chunlin Chen' 'Hanxiong Li' 'Tzyh-Jong Tarn']", "Daoyi Dong, Chunlin Chen, Hanxiong Li and Tzyh-Jong Tarn" ]
cs.LG cs.CV stat.ML
null
0810.4401
null
null
http://arxiv.org/pdf/0810.4401v2
2008-12-17T06:47:01Z
2008-10-24T08:49:09Z
Efficient Exact Inference in Planar Ising Models
We give polynomial-time algorithms for the exact computation of lowest-energy (ground) states, worst margin violators, log partition functions, and marginal edge probabilities in certain binary undirected graphical models. Our approach provides an interesting alternative to the well-known graph cut paradigm in that it does not impose any submodularity constraints; instead we require planarity to establish a correspondence with perfect matchings (dimer coverings) in an expanded dual graph. We implement a unified framework while delegating complex but well-understood subproblems (planar embedding, maximum-weight perfect matching) to established algorithms for which efficient implementations are freely available. Unlike graph cut methods, we can perform penalized maximum-likelihood as well as maximum-margin parameter estimation in the associated conditional random fields (CRFs), and employ marginal posterior probabilities as well as maximum a posteriori (MAP) states for prediction. Maximum-margin CRF parameter estimation on image denoising and segmentation problems shows our approach to be efficient and effective. A C++ implementation is available from http://nic.schraudolph.org/isinf/
[ "Nicol N. Schraudolph and Dmitry Kamenetsky", "['Nicol N. Schraudolph' 'Dmitry Kamenetsky']" ]
cs.LG
null
0810.4611
null
null
http://arxiv.org/pdf/0810.4611v2
2009-04-15T18:13:59Z
2008-10-25T15:09:28Z
Learning Isometric Separation Maps
Maximum Variance Unfolding (MVU) and its variants have been very successful in embedding data manifolds in lower-dimensional spaces, often revealing the true intrinsic dimension. In this paper we show how to also incorporate supervised class information into an MVU-like method without breaking its convexity. We call this method the Isometric Separation Map, and we show that the resulting kernel matrix can be used in a binary/multiclass Support Vector Machine-like method in a semi-supervised (transductive) framework. We also show that the method always finds a kernel matrix that linearly separates the training data exactly, without projecting them into infinite-dimensional spaces. In traditional SVMs we choose a kernel and hope that the data become linearly separable in the kernel space. Here we show how the hyperplane can be chosen ad hoc and the kernel trained so that the data are always linearly separable. Comparisons with Large Margin SVMs show comparable performance.
[ "['Nikolaos Vasiloglou' 'Alexander G. Gray' 'David V. Anderson']", "Nikolaos Vasiloglou, Alexander G. Gray, David V. Anderson" ]
cs.LG cs.AI cs.MA
null
0810.5484
null
null
http://arxiv.org/pdf/0810.5484v1
2008-10-30T13:26:31Z
2008-10-30T13:26:31Z
A Novel Clustering Algorithm Based on a Modified Model of Random Walk
We introduce a modified model of random walk and then develop two novel clustering algorithms based on it. In the algorithms, each data point in a dataset is considered a particle that moves at random in space according to the preset rules of the modified model. Further, each data point may also be viewed as a local control subsystem, in which a controller adjusts the point's transition probability vector based on feedback from all data points, and an event-generating function then determines its transition direction. Finally, the positions of all data points are updated. As they move in space, data points gradually aggregate and separating gaps emerge among them automatically. As a consequence, data points that belong to the same class converge to the same position, whereas those that belong to different classes stay apart from one another. Moreover, the experimental results demonstrate that data points in the test datasets are clustered reasonably and efficiently, and a comparison with other algorithms also indicates the effectiveness of the proposed algorithms.
[ "['Qiang Li' 'Yan He' 'Jing-ping Jiang']", "Qiang Li, Yan He, Jing-ping Jiang" ]
math.ST cs.LG math.PR stat.ME stat.TH
null
0810.5551
null
null
http://arxiv.org/pdf/0810.5551v2
2008-11-11T02:38:09Z
2008-10-30T19:52:55Z
A Theory of Truncated Inverse Sampling
In this paper, we establish a new framework of truncated inverse sampling for estimating the mean values of non-negative random variables such as binomial, Poisson, hypergeometric, and bounded variables. We derive explicit formulas and computational methods for designing sampling schemes that ensure prescribed levels of precision and confidence for point estimators. Moreover, we develop interval estimation methods.
[ "['Xinjia Chen']", "Xinjia Chen" ]
cs.CV cs.DS cs.LG
null
0810.5573
null
null
http://arxiv.org/pdf/0810.5573v1
2008-10-30T20:24:28Z
2008-10-30T20:24:28Z
A branch-and-bound feature selection algorithm for U-shaped cost functions
This paper formulates a combinatorial optimization problem with the following characteristics: (i) the search space is the power set of a finite set, structured as a Boolean lattice; (ii) the cost function forms a U-shaped curve when applied to any lattice chain. This formulation applies to feature selection in the context of pattern recognition. The known approaches to this problem are branch-and-bound algorithms and heuristics that explore the search space only partially. Branch-and-bound algorithms are equivalent to a full search, while heuristics are not. This paper presents a branch-and-bound algorithm that differs from the known ones by exploiting the lattice structure and the U-shaped chain curves of the search space. The main contribution of this paper is the architecture of this algorithm, which is based on the representation and exploration of the search space via new lattice properties proven here. Several experiments with well-known public data indicate the superiority of the proposed method over SFFS, a popular heuristic that gives good results in very short computational time. In all experiments, the proposed method obtained equal or better results in similar or even smaller computational time.
[ "Marcelo Ris, Junior Barrera, David C. Martins Jr", "['Marcelo Ris' 'Junior Barrera' 'David C. Martins Jr']" ]
cs.LG cs.AI
null
0810.5631
null
null
http://arxiv.org/pdf/0810.5631v1
2008-10-31T07:15:01Z
2008-10-31T07:15:01Z
Temporal Difference Updating without a Learning Rate
We derive an equation for temporal difference learning from statistical principles. Specifically, we start with the variational principle and then bootstrap to produce an updating rule for discounted state value estimates. The resulting equation is similar to the standard equation for temporal difference learning with eligibility traces, the so-called TD(lambda); however, it lacks the parameter alpha that specifies the learning rate. In place of this free parameter there is now an equation for the learning rate that is specific to each state transition. We experimentally test this new learning rule against TD(lambda) and find that it offers superior performance in various settings. Finally, we make some preliminary investigations into how to extend our new temporal difference algorithm to reinforcement learning. To do this we combine our update equation with both Watkins' Q(lambda) and Sarsa(lambda) and find that it again offers superior performance without a learning rate parameter.
[ "Marcus Hutter and Shane Legg", "['Marcus Hutter' 'Shane Legg']" ]