doc_id: string (length 4–40)
title: string (length 7–300)
abstract: string (length 2–10k)
corpus_id: uint64 (range 171–251M)
f14f85c294864e7cd75baa4cdbe3114d46cfa4ae
Invariances in Classification: an efficient SVM implementation
Often, in pattern recognition, complementary knowledge is available. This could be useful to improve the performance of the recognition system. Part of this knowledge regards invariances, in particular when treating images or voice data. Many approaches have been proposed to incorporate invariances in pattern recognition systems. Some of these approaches require a pre-processing phase, others integrate the invariances in the algorithms. We present a unifying formulation of the problem of incorporating invariances into a pattern recognition classifier and we extend the SimpleSVM algorithm [Vishwanathan et al., 2003] to handle invariances efficiently.
16,186,013
06bb5771e6b8a9356c5f4ae28c98b4397c043349
A tutorial on support vector regression
In this tutorial we give an overview of the basic ideas underlying Support Vector (SV) machines for function estimation. Furthermore, we include a summary of currently used algorithms for training SV machines, covering both the quadratic (or convex) programming part and advanced methods for dealing with large datasets. Finally, we mention some modifications and extensions that have been applied to the standard SV algorithm, and discuss the aspect of regularization from a SV perspective.
15,475
12fa4a3ee546ba8eeb0b88b06bcb571d65d91cc4
Online learning with kernels
Kernel-based algorithms such as support vector machines have achieved considerable success on various problems in the batch setting, where all of the training data is available in advance. Support vector machines combine the so-called kernel trick with the large margin idea. There has been little use of these methods in an online setting suitable for real-time applications. In this paper, we consider online learning in a reproducing kernel Hilbert space. By applying classical stochastic gradient descent within a feature space and using some straightforward tricks, we develop simple and computationally efficient algorithms for a wide range of problems such as classification, regression, and novelty detection. In addition to allowing the exploitation of the kernel trick in an online setting, we examine the value of large margins for classification in the online setting with a drifting target. We derive worst-case loss bounds and, moreover, show the convergence of the hypothesis to the minimizer of the regularized risk functional. We present experimental results that support the theory and illustrate the power of the new algorithms for online novelty detection.
208,933,741
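The abstract above describes stochastic gradient descent carried out directly in a reproducing kernel Hilbert space. As an illustration only, here is a minimal Python sketch of that idea for binary classification with the hinge loss and weight decay; the learning rate eta, the regularization constant lam, the Gaussian kernel width, and the toy data are arbitrary choices, not values from the paper.

```python
import numpy as np

def rbf(x, z, gamma=1.0):
    # Gaussian kernel k(x, z) = exp(-gamma * ||x - z||^2)
    return np.exp(-gamma * np.sum((x - z) ** 2))

def online_kernel_hinge(stream, eta=0.1, lam=0.01, gamma=1.0):
    """Online kernel classifier trained by stochastic gradient descent on the
    regularized hinge loss; `stream` yields (x, y) pairs with y in {-1, +1}."""
    centers, coeffs = [], []          # expansion f(x) = sum_i coeffs[i] * k(centers[i], x)
    mistakes = 0
    for x, y in stream:
        f = sum(a * rbf(c, x, gamma) for c, a in zip(centers, coeffs))
        if y * f <= 0:
            mistakes += 1
        # weight decay from the regularizer shrinks all existing coefficients
        coeffs = [(1.0 - eta * lam) * a for a in coeffs]
        if y * f < 1.0:               # hinge loss active: add a new expansion term
            centers.append(x)
            coeffs.append(eta * y)
    return centers, coeffs, mistakes

# toy usage: two Gaussian blobs streamed one point at a time
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 0.5, (100, 2)), rng.normal(1, 0.5, (100, 2))])
y = np.array([-1] * 100 + [1] * 100)
order = rng.permutation(200)
_, _, mistakes = online_kernel_hinge(zip(X[order], y[order]))
print("online mistakes:", mistakes)
```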
134c6f24e9715a6675b964b946b321a65dfe3af5
Une boîte à outils rapide et simple pour les SVM
Abstract: Although SVMs (Support Vector Machines, in French "Séparateurs à Vaste Marge") are now recognized as one of the best learning methods, they are still considered slow. We propose a Matlab toolbox that makes it simple and fast to use SVMs, thanks to a projected-gradient method particularly well suited to the problem: SimpleSVM (Vishwanathan et al., 2003). We chose to implement this algorithm in the Matlab environment in order to benefit from its convenience while retaining good efficiency. A comparison of our solution with the state of the art in the field, SMO (Sequential Minimal Optimization), shows that in some cases it is faster and of lower complexity. To illustrate the simplicity and speed of our method, we finally show that on the MNIST database it was possible to obtain satisfactory results in a relatively short time (an hour and a half of computation on a Linux PC to build 45 binary classifiers on 60,000 examples in dimension 576). Keywords: Support Vector Machine, Séparateur à Vaste Marge, SVM, Learning, Matlab toolbox, Active constraints, Projected gradient, MNIST.
117,181,875
2259aab92fba26dc91b7f8122c470e2e92f14a20
Binet-Cauchy Kernels
We propose a family of kernels based on the Binet-Cauchy theorem and its extension to Fredholm operators. This includes as special cases all currently known kernels derived from the behavioral framework, diffusion processes, marginalized kernels, kernels on graphs, and the kernels on sets arising from the subspace angle approach. Many of these kernels can be seen as the extrema of a new continuum of kernel functions, which leads to numerous new special cases. As an application, we apply the new class of kernels to the problem of clustering of video sequences with encouraging results.
1,876,803
4ccd300af19dac5f520af078340d42d3b9c501f3
Experimentally optimal v in support vector regression for different noise models and parameter settings
null
966,610
5723c6ee23e33a91e1a06c1cec983963ef2bec05
Sample based generalisation bounds
null
125,126,379
9ecd3155ea1b5056c43c624a91e3d9523f303e74
Learning with non-positive kernels
In this paper we show that many kernel methods can be adapted to deal with indefinite kernels, that is, kernels which are not positive semidefinite. They do not satisfy Mercer's condition and they induce associated functional spaces called Reproducing Kernel Kreĭn Spaces (RKKS), a generalization of Reproducing Kernel Hilbert Spaces (RKHS). Machine learning in RKKS shares many "nice" properties of learning in RKHS, such as orthogonality and projection. However, since the kernels are indefinite, we can no longer minimize the loss; instead we stabilize it. We show a general representer theorem for constrained stabilization and prove generalization bounds by computing the Rademacher averages of the kernel class. We list several examples of indefinite kernels and investigate regularization methods to solve spline interpolation. Some preliminary experiments with indefinite kernels for spline smoothing are reported for truncated spectral factorization, Landweber-Fridman iterations, and MR-II.
9,760,998
a2182e5a37f5fc04ce23bd2f4d6b5070382c8c5e
Gaussian process classification for segmenting and annotating sequences
Many real-world classification tasks involve the prediction of multiple, inter-dependent class labels. A prototypical case of this sort deals with prediction of a sequence of labels for a sequence of observations. Such problems arise naturally in the context of annotating and segmenting observation sequences. This paper generalizes Gaussian Process classification to predict multiple labels by taking dependencies between neighboring labels into account. Our approach is motivated by the desire to retain rigorous probabilistic semantics, while overcoming limitations of parametric methods like Conditional Random Fields, which exhibit conceptual and computational difficulties in high-dimensional input spaces. Experiments on named entity recognition and pitch accent prediction tasks demonstrate the competitiveness of our approach.
8,403,013
c03edc1b78c7aca86d3034aafe8b215a8608cd7d
Exponential Families for Conditional Random Fields
In this paper we define conditional random fields in reproducing kernel Hilbert spaces and show connections to Gaussian Process classification. More specifically, we prove decomposition results for undirected graphical models and we give constructions for kernels. Finally we present efficient means of solving the optimization problem using reduced rank decompositions and we show how stationarity can be exploited efficiently in the optimization process.
1,114,784
c12057aa350e81fc8aed81d6f3c08a03b6361346
A Second Order Cone programming Formulation for Classifying Missing Data
We propose a convex optimization based strategy to deal with uncertainty in the observations of a classification problem. We assume that instead of a sample (x_i, y_i), a distribution over (x_i, y_i) is specified. In particular, we derive a robust formulation when the distribution is given by a normal distribution. This leads to a Second Order Cone Programming formulation. Our method is applied to the problem of missing data, where it outperforms direct imputation.
7,274,058
d9a3a61436476a5eab7746d1f4c2366209e2561b
Kernel Extrapolations for Enzyme Classification
null
223,433,350
f85fda7408cd69dbf2b86d8ed4d63c15d51b7bf7
Exponential Families and Kernels
null
59,870,272
02d4acd2ddaa4d862358abc571dcd02fc2f36196
Laplace Propagation
We present a novel method for approximate inference in Bayesian models and regularized risk functionals. It is based on the propagation of mean and variance derived from the Laplace approximation of conditional probabilities in factorizing distributions, much akin to Minka's Expectation Propagation. In the jointly normal case, it coincides with the latter and belief propagation, whereas in the general case, it provides an optimization strategy containing Support Vector chunking, the Bayes Committee Machine, and Gaussian Process chunking as special cases.
6,664,858
488a3d21cfc048371d12672a5c4ed954f44635c6
Logic, Trees and Kernels
Kernel based methods achieved much of their initial success on problems with real valued attributes. There are many problems with discrete attributes (including Boolean) and in this paper we present a number of results concerning the kernelisation of Boolean and discrete problems. We give results about the learnability and required complexity of logical formulae to solve classification problems. These results are obtained by linking propositional logic with kernel machines. In particular we show that decision trees and disjunctive normal forms (DNF) can be represented via a special kernel, which connects the regularised risk to the margin of separation. Subsequently we derive a number of lower bounds on the required complexity of logical formulae using properties of algorithms for the generation of linear machines. An interesting side effect of the development is a number of connections between machine learning algorithms that utilize discrete structures (such as trees) and kernel machines. We also present some more general kernel constructions on discrete sets using the machinery of frames. These can be used to progressively penalize higher order interactions by explicitly constructing reproducing kernel Hilbert spaces, their associated kernels and the concomitant use of norm-based regularization.
18,337,228
5af0e69ff389f3355bf0d95570dd2791449200c3
Classification in a normalized feature space using support vector machines
This paper discusses classification using support vector machines in a normalized feature space. We consider both normalization in input space and in feature space. Exploiting the fact that in this setting all points lie on the surface of a unit hypersphere we replace the optimal separating hyperplane by one that is symmetric in its angles, leading to an improved estimator. Evaluation of these considerations is done in numerical experiments on two real-world datasets. The stability to noise of this offset correction is subsequently investigated as well as its optimality.
633,145
60de4b6068407defa3c88f5feeb8b74d8e55fe9c
Kernels and Regularization on Graphs
We introduce a family of kernels on graphs based on the notion of regularization operators. This generalizes in a natural way the notion of regularization and Green's functions, as commonly used for real valued functions, to graphs. It turns out that diffusion kernels can be found as a special case of our reasoning. We show that the class of positive, monotonically decreasing functions on the unit interval leads to kernels and corresponding regularization operators.
7,326,173
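The abstract above treats graph kernels as functions of the graph Laplacian, with the diffusion kernel as the best-known special case. A small numpy/scipy sketch of that special case (my own illustration, with an arbitrary toy graph and diffusion parameter beta):

```python
import numpy as np
from scipy.linalg import expm

# adjacency matrix of a small undirected graph (a 4-cycle)
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

L = np.diag(A.sum(axis=1)) - A        # combinatorial graph Laplacian L = D - A
beta = 0.5                            # diffusion parameter (arbitrary here)
K = expm(-beta * L)                   # diffusion kernel, one member of the family of kernels r(L)

# K is a symmetric positive definite matrix, as required of a kernel matrix
print(np.allclose(K, K.T), np.all(np.linalg.eigvalsh(K) > 0))
```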
619e2d7400772244a9b6cbc92e1559d67c20faca
Machine Learning with Hyperkernels
null
27,809,953
6337b2909c763be4fe67f8c6c99f5c6c7effec17
Advanced Lectures on Machine Learning
Advanced lectures on machine learning
431,437
83d49e5bdc76a6e8b712564e95eced8bcc92a55b
Machine Learning Program, National ICT Australia, Canberra, ACT 0200, Australia
We present a fast iterative support vector training algorithm for a large variety of different formulations. It works by incrementally changing a candidate support vector set using a greedy approach, until the supporting hyperplane is found within a finite number of iterations. It is derived from a simple active set method which sweeps through the set of Lagrange multipliers and keeps optimality in the unconstrained variables, while discarding large amounts of bound-constrained variables. The hard-margin version can be viewed as a simple (yet computationally crucial) modification of the incremental SVM training algorithms of Cauwenberghs and Poggio. Experimental results for various settings are reported. In all cases our algorithm is considerably faster than competing methods such as Sequential Minimal Optimization or the Nearest Point Algorithm.
17,783,785
9401b14de27ba0aedfe9fa128c20f1db025e6caa
Kernel Methods and Support Vector Machines
Over the past ten years kernel methods such as Support Vector Machines and Gaussian Processes have become a staple of modern statistical estimation and machine learning. The groundwork for this field was laid in the second half of the 20th century by Vapnik and Chervonenkis (geometrical formulation of an optimal separating hyperplane, capacity measures for margin classifiers), Mangasarian (linear separation by a convex function class), Aronszajn (Reproducing Kernel Hilbert Spaces), Aizerman, Braverman, and Rozonoer (nonlinearity via kernel feature spaces), Arsenin and Tikhonov (regularization and ill-posed problems), and Wahba (regularization in Reproducing Kernel Hilbert Spaces). However, it was not until the early 90s that positive definite kernels became a popular and viable means of estimation. Firstly this was due to the lack of sufficiently powerful hardware, since kernel methods require the computation of the so-called kernel matrix, which requires quadratic storage in the number of data points (a computer with at least a few megabytes of memory is required to deal with 1000+ points). Secondly, many of the previously mentioned techniques lay dormant or existed independently, and only recently were the (in hindsight obvious) connections made that turned them into a practical estimation tool. Nowadays, a variety of good reference books exist and anyone serious about dealing with kernel methods is recommended to consult one of the following works for further information [15, 5, 8, 12]. Below, we summarize the main ideas of kernel methods and support vector machines, building on the summary given in [13].
12,171,463
9e2ec80b8e54b0db9ed2c35302475692a3c767db
The kernel mutual information
We introduce a new contrast function, the kernel mutual information (KMI), to measure the degree of independence of continuous random variables. This contrast function provides an approximate upper bound on the mutual information, as measured near independence, and is based on a kernel density estimate of the mutual information between a discretised approximation of the continuous random variables. We show that the kernel generalised variance (KGV) of F. Bach and M. Jordan (see JMLR, vol.3, p.1-48, 2002) is also an upper bound on the same kernel density estimate, but is looser. Finally, we suggest that the addition of a regularising term in the KGV causes it to approach the KMI, which motivates the introduction of this regularisation.
6,449,945
b0cbda62662e67a8ed3f356ece6fde2939a628fc
svlab - A Kernel Methods Package
svlab is an extensible, object oriented package for kernel based learning in R. Its main objective is to provide a tool kit consisting of basic kernel functionality, optimizers and high level algorithms such as Support Vector Machines and Kernel Principal Component Analysis which can be extended by the user in a very modular way. Based on this infrastructure, kernel-based methods can easily be constructed and developed. 1 Introduction. It is often difficult to solve problems like classification, regression and clustering (or, more generally, supervised and unsupervised learning) in the space in which the underlying observations have been made. One way out is to project the observations into a higher-dimensional feature space where these problems are easier to solve, e.g., by using simple linear methods. If the methods applied in the feature space are only based on dot or inner products, the projection does not have to be carried out explicitly but only implicitly using kernel functions. This is often referred to as the "kernel trick". More precisely, if a projection Φ : X → H is used, the dot product ⟨Φ(x), Φ(y)⟩ can be represented by a kernel function k: k(x, y) = ⟨Φ(x), Φ(y)⟩. (1)
18,168,730
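The identity k(x, y) = ⟨Φ(x), Φ(y)⟩ quoted in the abstract above can be checked numerically. A short Python sketch (not part of svlab, which is an R package) using the degree-2 homogeneous polynomial kernel on 2-dimensional inputs, whose feature map is known in closed form:

```python
import numpy as np

def phi(x):
    # explicit feature map for the degree-2 homogeneous polynomial kernel on R^2
    return np.array([x[0] ** 2, x[1] ** 2, np.sqrt(2) * x[0] * x[1]])

def k(x, y):
    # the same kernel evaluated via the kernel trick: k(x, y) = (x . y)^2
    return np.dot(x, y) ** 2

x = np.array([1.0, 2.0])
y = np.array([3.0, -1.0])
print(np.dot(phi(x), phi(y)), k(x, y))   # both approximately 1.0
```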
ba70d04019b399877085d87d11e228fc6e2fed00
Hilbert space embeddings in dynamical systems
null
18,282,012
cba8e927ad30fc9ec6863b120ba346df4d6da48e
Constructing Descriptive and Discriminative Nonlinear Features: Rayleigh Coefficients in Kernel Feature Spaces
We incorporate prior knowledge to construct nonlinear algorithms for invariant feature extraction and discrimination. Employing a unified framework in terms of a nonlinearized variant of the Rayleigh coefficient, we propose nonlinear generalizations of Fisher's discriminant and oriented PCA using support vector kernel functions. Extensive simulations show the utility of our approach.
7,783,614
daae67677edd6b3703dcbbb795d779638f6c7c4d
Machine learning using hyperkernels
We expand on the problem of learning a kernel via an RKHS on the space of kernels itself. The resulting optimization problem is shown to have a semidefinite programming solution. We demonstrate that it is possible to learn the kernel for various formulations of machine learning problems. Specifically, we provide mathematical programming formulations and experimental results for the C-SVM, ν-SVM and Lagrangian SVM for classification on UCI data, and novelty detection.
10,646,465
e43b2f0b8d99ef69e45055ce69e67b7306f15571
Bayesian Kernel Models
null
118,468,587
f0b47de69cad92ed72497ee4ae41b7505174bc42
Advanced Lectures on Machine Learning: Machine Learning Summer School 2002, Canberra, Australia, February 11-22, 2002, Revised Lectures
A Few Notes on Statistical Learning Theory.- A Short Introduction to Learning with Kernels.- Bayesian Kernel Methods.- An Introduction to Boosting and Leveraging.- An Introduction to Reinforcement Learning Theory: Value Function Methods.- Learning Comprehensible Theories from Structured Data.- Algorithms for Association Rules.- Online Learning of Linear Classifiers.
60,745,523
10c1ab4e9ad4b5c6f8298812e2aed1c615ee3ebb
Multi-Instance Kernels
Learning from structured data is becoming increasingly important. However, most prior work on kernel methods has focused on learning from attribute-value data. Only recently has research started investigating kernels for structured data. This paper considers kernels for multi-instance problems, a class of concepts on individuals represented by sets. The main result of this paper is a kernel on multi-instance data that can be shown to separate positive and negative sets under natural assumptions. This kernel compares favorably with state of the art multi-instance learning algorithms in an empirical study. Finally, we give some concluding remarks and propose future work that might further improve the results.
17,874,965
1a1d28cadea3949e9e938fc4caa73abd1c5a0d80
Large Margin Classification for Moving Targets
We consider using online large margin classification algorithms in a setting where the target classifier may change over time. The algorithms we consider are Gentile's ALMA, and an algorithm we call NORMA which performs a modified online gradient descent with respect to a regularised risk. The update rule of ALMA includes a projection-based regularisation step, whereas NORMA has a weight decay type of regularisation. For ALMA we can prove mistake bounds in terms of the total distance the target moves during the trial sequence. For NORMA, we need the additional assumption that the movement rate stays sufficiently low uniformly over time. In addition to the movement of the target, the mistake bounds for both algorithms depend on the hinge loss of the target. Both algorithms use a margin parameter which can be tuned to make them mistake-driven (update only when classification error occurs) or more aggressive (update when the confidence of the classification is below the margin). We get similar mistake bounds both for the mistake-driven and a suitable aggressive tuning. Experiments on artificial data confirm that an aggressive tuning is often useful even if the goal is just to minimise the number of mistakes.
1,953,879
1a4ab3eb4a9e29455b6adaf37fbb51b9751f4fb4
Kernel Machines and Boolean Functions
null
18,535,414
2862e7b8fefb209cdb4c47a1643f2af71cd67b00
Support Vector Machines and Kernel Algorithms
null
13,989,473
2c0577cc0ab4210960984c05cec55f41cac2d918
Minimal Kernel Classifiers
A finite concave minimization algorithm is proposed for constructing kernel classifiers that use a minimal number of data points both in generating and characterizing a classifier. The algorithm is theoretically justified on the basis of linear programming perturbation theory and a leave-one-out error bound as well as effective computational results on seven real world datasets. A nonlinear rectangular kernel is generated by systematically utilizing as few of the data as possible both in training and in characterizing a nonlinear separating surface. This can result in substantial reduction in kernel data-dependence (over 94% in six of the seven public datasets tested on) and with test set correctness equal to that obtained by using a conventional support vector machine classifier that depends on many more data points. This reduction in data dependence results in a much faster classifier that requires less storage. To eliminate data points, the proposed approach makes use of a novel loss function, the "pound" function (·)#, which is a linear combination of the 1-norm and the step function that measures both the magnitude and the presence of any error.
2,908,949
408d390e80423ad7165180edba0a5b6e706b4b43
Adapting Codes and Embeddings for Polychotomies
In this paper we consider formulations of multi-class problems based on a generalized notion of a margin and using output coding. This includes, but is not restricted to, standard multi-class SVM formulations. Differently from many previous approaches we learn the code as well as the embedding function. We illustrate how this can lead to a formulation that allows for solving a wider range of problems with for instance many classes or even "missing classes". To keep our optimization problems tractable we propose an algorithm capable of solving them using two-class classifiers, similar in spirit to Boosting.
1,749,187
5045f45445b1fcef589c6d40dba834219cdd9e93
Learning with Kernels: support vector machines, regularization, optimization, and beyond
null
52,872,213
72a7e7bc1911b6a327c4614553bfcde98194d4ef
Fast Kernels for String and Tree Matching
In this paper we present a new algorithm suitable for matching discrete objects such as strings and trees in linear time, thus obviating dynamic programming with quadratic time complexity. Furthermore, the prediction cost in many cases can be reduced to linear cost in the length of the sequence to be classified, regardless of the number of support vectors. This improvement on the currently available algorithms makes string kernels a viable alternative for the practitioner.
86,710,756
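To make the quantity discussed above concrete, here is a naive Python sketch of a p-spectrum string kernel (shared substring counts). This is only an illustration of what such a kernel computes with an arbitrary substring length p; it is deliberately not the paper's contribution, which is a suffix-structure algorithm that evaluates such kernels in linear time.

```python
from collections import Counter

def spectrum_kernel(s, t, p=3):
    """Naive p-spectrum string kernel: inner product of length-p substring counts.
    Quadratic-ish in practice; the paper's point is a linear-time evaluation."""
    cs = Counter(s[i:i + p] for i in range(len(s) - p + 1))
    ct = Counter(t[i:i + p] for i in range(len(t) - p + 1))
    return sum(cs[sub] * ct[sub] for sub in cs if sub in ct)

print(spectrum_kernel("abracadabra", "cadabra", p=3))
```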
c1b9cb72f74e3091a00de438423314b902aff023
A Short Introduction to Learning with Kernels
We briefly describe the main ideas of statistical learning theory, support vector machines, and kernel feature spaces. This includes a derivation of the support vector optimization problem for classification and regression, the ν-trick, various kernels and an overview over applications of kernel methods.
6,752,137
ce6f7ed1c3806f6628a771db3c8ce27a9860ed61
Bayesian Kernel Methods
Bayesian methods allow for a simple and intuitive representation of the function spaces used by kernel methods. This chapter describes the basic principles of Gaussian Processes, their implementation and their connection to other kernel-based Bayesian estimation methods, such as the Relevance Vector Machine.
230,585,695
d99350c375debf0af513a1d660e78b7379329a46
Sparse Kernel Feature Analysis
Kernel Principal Component Analysis (KPCA) has proven to be a versatile tool for unsupervised learning, however at a high computational cost due to the dense expansions in terms of kernel functions. We overcome this problem by proposing a new class of feature extractors employing ℓ1 norms in coefficient space instead of the Reproducing Kernel Hilbert Space in which KPCA was originally formulated. Moreover, the modified setting allows us to efficiently extract features which maximize criteria other than the variance, in a way similar to projection pursuit.
15,572,502
f55aaa941537e2a83f912e0730f8ba1a05a8e71a
Hyperkernels
We consider the problem of choosing a kernel suitable for estimation using a Gaussian Process estimator or a Support Vector Machine. A novel solution is presented which involves defining a Reproducing Kernel Hilbert Space on the space of kernels itself. By utilizing an analog of the classical representer theorem, the problem of choosing a kernel from a parameterized family of kernels (e.g. of varying width) is reduced to a statistical estimation problem akin to the problem of minimizing a regularized risk functional. Various classical settings for model or kernel selection are special cases of our framework.
208,981,655
1070e4b420b686e5c347909951f2d1ef2e803a5b
A Tutorial Introduction
This chapter contains sections titled: Data Representation and Similarity, A Simple Pattern Recognition Algorithm, Some Insights From Statistical Learning Theory, Hyperplane Classifiers, Support Vector Classification, Support Vector Regression, Kernel Principal Component Analysis, Empirical Results and Implementations
63,200,661
17d2f027221d60cda373ecf15b03706c9e60269b
A Generalized Representer Theorem
Wahba's classical representer theorem states that the solutions of certain risk minimization problems involving an empirical risk term and a quadratic regularizer can be written as expansions in terms of the training examples. We generalize the theorem to a larger class of regularizers and empirical risk terms, and give a self-contained proof utilizing the feature space associated with a kernel. The result shows that a wide range of problems have optimal solutions that live in the finite dimensional span of the training examples mapped into feature space, thus enabling us to carry out kernel algorithms independent of the (potentially infinite) dimensionality of the feature space.
9,256,459
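A concrete instance of the theorem stated above: for squared loss with a quadratic RKHS regularizer (kernel ridge regression), the optimal solution has the form f(x) = Σ_i α_i k(x_i, x) with α = (K + λI)⁻¹ y. A small numpy sketch as an illustration, with arbitrary data, kernel width and regularization strength:

```python
import numpy as np

def rbf_matrix(A, B, gamma=1.0):
    # Gaussian kernel matrix K[i, j] = exp(-gamma * ||A_i - B_j||^2)
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (50, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=50)

lam = 0.1
K = rbf_matrix(X, X)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)    # expansion coefficients

X_test = np.linspace(-3, 3, 5).reshape(-1, 1)
f_test = rbf_matrix(X_test, X) @ alpha                  # f(x) = sum_i alpha_i k(x_i, x)
print(f_test)
```

The point of the representer theorem is that this finite expansion over the training points is optimal even though the RKHS itself may be infinite dimensional.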
1b3250c3bb5209a66fca85912a5a0db46ee9c48b
Sparse Greedy Gaussian Process Regression
null
17,617,926
1d97bf0d29691a9c660cd8410cb338e4bd6ce964
Elements of Statistical Learning Theory
This chapter contains sections titled: Introduction, The Law of Large Numbers, When Does Learning Work: the Question of Consistency, Uniform Convergence and Consistency, How to Derive a VC Bound, A Model Selection Example, Summary, Problems
117,814,116
327a6afe3c4813e61686a5ed8b22c576292c03c7
Notation and Symbols
null
123,472,337
3d7c0d2f32e85e4335d634dd7b62243362528bb9
Regularized Principal Manifolds
Many settings of unsupervised learning can be viewed as quantization problems -- the minimization of the expected quantization error subject to some restrictions. This allows the use of tools such as regularization from the theory of (supervised) risk minimization for unsupervised settings. Moreover, this setting is very closely related to both principal curves and the generative topographic map. We explore this connection in two ways: 1) we propose an algorithm for finding principal manifolds that can be regularized in a variety of ways. Experimental results demonstrate the feasibility of the approach. 2) We derive uniform convergence bounds and hence bounds on the learning rates of the algorithm. In particular, we give good bounds on the covering numbers which allows us to obtain a nearly optimal learning rate of order O(m^{-1/2+α}) for certain types of regularization operators, where m is the sample size and α an arbitrary positive constant.
730,432
4083128d39b74643a1ec6e498eae0e70cafc87a5
Kernel Fisher Discriminant
This chapter contains sections titled: Introduction, Fisher's Discriminant in Feature Space, Efficient Training of Kernel Fisher Discriminants, Probabilistic Outputs, Experiments, Summary, Problems
126,051,545
60ec5a4149416978ecf1f2d071a560582d4c78b5
Pre-Images and Reduced Set Methods
null
184,762,705
71961da87f7d8dcfebf904e477bd24084e8a4b3d
Single-Class Problems: Quantile Estimation and Novelty Detection
This chapter contains sections titled: Introduction, A Distribution's Support and Quantiles, Algorithms, Optimization, Theory, Discussion, Experiments, Summary, Problems
125,916,966
76f96dadd80b19bde49e0e1f07bfa9fe8485eeec
Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond
From the Publisher: In the 1990s, a new type of learning algorithm was developed, based on results from statistical learning theory: the Support Vector Machine (SVM). This gave rise to a new class of theoretically elegant learning machines that use a central concept of SVMs, kernels, for a number of learning tasks. Kernel machines provide a modular framework that can be adapted to different tasks and domains by the choice of the kernel function and the base algorithm. They are replacing neural networks in a variety of fields, including engineering, information retrieval, and bioinformatics. Learning with Kernels provides an introduction to SVMs and related kernel methods. Although the book begins with the basics, it also includes the latest research. It provides all of the concepts necessary to enable a reader equipped with some basic mathematical knowledge to enter the world of machine learning using theoretically well-founded yet easy-to-use kernel algorithms and to understand and apply the powerful algorithms that have been developed over the last few years.
52,872,213
7832cc6a3045d10925a4674c13e9343cff299af9
Regularized principal manifolds
Many settings of unsupervised learning can be viewed as quantization problems - the minimization of the expected quantization error subject to some restrictions. This allows the use of tools such as regularization from the theory of (supervised) risk minimization for unsupervised learning. This setting turns out to be closely related to principal curves, the generative topographic map, and robust coding.We explore this connection in two ways: (1) we propose an algorithm for finding principal manifolds that can be regularized in a variety of ways; and (2) we derive uniform convergence bounds and hence bounds on the learning rates of the algorithm. In particular, we give bounds on the covering numbers which allows us to obtain nearly optimal learning rates for certain types of regularization operators. Experimental results demonstrate the feasibility of the approach.
730,432
7ca6a04d63cd1d0d73db37ba35c862859c8d52b6
Concepts and Tools
null
185,157,453
9220ddfec5f7cc4e97cca7f449ddbda0aa59146b
An improved training algorithm for kernel Fisher discriminants
We present a fast training algorithm for the kernel Fisher discriminant classifier. It uses a greedy approximation technique and has an empirical scaling behavior which improves upon the state of the art by more than an order of magnitude, thus rendering the kernel Fisher algorithm a viable option also for large datasets.
11,350,277
9cc912ae25797e5f7c0d73300d3968ad8339b411
Estimating the Support of a High-Dimensional Distribution
Suppose you are given some data set drawn from an underlying probability distribution P and you want to estimate a simple subset S of input space such that the probability that a test point drawn from P lies outside of S equals some a priori specified value between 0 and 1. We propose a method to approach this problem by trying to estimate a function f that is positive on S and negative on the complement. The functional form of f is given by a kernel expansion in terms of a potentially small subset of the training data; it is regularized by controlling the length of the weight vector in an associated feature space. The expansion coefficients are found by solving a quadratic programming problem, which we do by carrying out sequential optimization over pairs of input patterns. We also provide a theoretical analysis of the statistical performance of our algorithm. The algorithm is a natural extension of the support vector algorithm to the case of unlabeled data.
2,110,475
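The one-class algorithm described above is available in standard libraries. A minimal scikit-learn sketch, assuming scikit-learn is installed; the data, kernel width and ν value are arbitrary illustration choices:

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X_train = rng.normal(0, 1, (200, 2))          # unlabeled data drawn from the distribution P
X_test = np.array([[0.0, 0.0], [4.0, 4.0]])   # one typical point, one outlier

# nu upper-bounds the fraction of training points allowed to fall outside the estimated region
clf = OneClassSVM(kernel="rbf", gamma=0.5, nu=0.1).fit(X_train)
print(clf.predict(X_test))    # +1 for points inside the estimated support, -1 outside
```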
a0e1ae6e5eadb8859716a14a8e58e027da59da31
Kernel Feature Extraction
This chapter contains sections titled: Introduction, Kernel PCA, Kernel PCA Experiments, A Framework for Feature Extraction, Algorithms for Sparse KFA, KFA Experiments, Summary, Problems
125,524,515
a9e14b2332ac45134f88d6cfd6ad7d2b65babb4c
Bound on the Leave-One-Out Error for Density Support Estimation using nu-SVMs
null
118,602,543
ab206c8403f0a63da047f9b8d95e4dede6ddc7ca
Support vector machine learning
null
62,193,892
b7c43ce28f91288546001428d1f27d7fc4a217a2
Learning Theory Revisited
This chapter contains sections titled: Concentration of Measure Inequalities, Leave-One-Out Estimates, PAC-Bayesian Bounds, Operator-Theoretic Methods in Learning Theory, Summary, Problems
125,960,524
bcbbb98caf6f2f8349e97ec5ea75fa1518a3dc25
Risk and Loss Functions
This chapter contains sections titled: Loss Functions, Test Error and Expected Risk, A Statistical Perspective, Robust Estimators, Summary, Problems
125,568,517
ea1244e3a362ab880e9f04b9ec9b9946f387b8bd
Kernel Machines and Boolean Functions
We give results about the learnability and required complexity of logical formulae to solve classification problems. These results are obtained by linking propositional logic with kernel machines. In particular we show that decision trees and disjunctive normal forms (DNF) can be represented with the help of a special kernel, linking regularized risk to separation margin. Subsequently we derive a number of lower bounds on the required complexity of logic formulae using properties of algorithms for the generation of linear estimators, such as perceptron and maximal perceptron learning.
1,870,326
ee177aacf6b3697d079579ce558cdb2ee58cee39
Generalization performance of regularization networks and support vector machines via entropy numbers of compact operators
We derive new bounds for the generalization error of kernel machines, such as support vector machines and related regularization networks, by obtaining new bounds on their covering numbers. The proofs make use of a viewpoint that is apparently novel in the field of statistical learning theory. The hypothesis class is described in terms of a linear operator mapping from a possibly infinite-dimensional unit ball in feature space into a finite-dimensional space. The covering numbers of the class are then determined via the entropy numbers of the operator. These numbers, which characterize the degree of compactness of the operator, can be bounded in terms of the eigenvalues of an integral operator induced by the kernel function used by the machine. As a consequence, we are able to theoretically explain the effect of the choice of kernel function on the generalization performance of support vector machines.
777,816
01a61d9b9183ce11c89e36d9e1f24614c98f3ee8
Query Learning with Large Margin Classifiers
The active selection of instances can significantly improve the generalisation performance of a learning machine. Large margin classifiers such as support vector machines classify data using the most informative instances (the support vectors). This makes them natural candidates for instance selection strategies. In this paper we propose an algorithm for the training of support vector machines using instance selection. We give a theoretical justification for the strategy and experimental results on real and artificial data demonstrating its effectiveness. The technique is most efficient when the data set can be learnt using few support vectors.
14,439,180
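A minimal sketch of the margin-based instance selection idea described above, assuming scikit-learn. The query rule (label the unlabeled point closest to the current decision boundary, i.e. the most informative candidate) follows the abstract; the data, kernel, seed set and query budget are arbitrary:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (200, 2)), rng.normal(1, 1, (200, 2))])
y = np.array([-1] * 200 + [1] * 200)           # oracle labels, revealed only when queried

labeled = [0, 1, 200, 201]                     # seed with two points from each class
pool = [i for i in range(len(X)) if i not in labeled]

for _ in range(20):                            # query budget of 20 points
    clf = SVC(kernel="rbf", gamma=0.5, C=1.0).fit(X[labeled], y[labeled])
    margins = np.abs(clf.decision_function(X[pool]))
    pick = pool.pop(int(np.argmin(margins)))   # query the point closest to the boundary
    labeled.append(pick)

print("accuracy:", clf.score(X, y), "support vectors:", len(clf.support_))
```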
044de86b2b92d3e2941a0a4d8628ce9d68e0172d
GACV for Support Vector Machines
This chapter contains sections titled: Introduction, The SVM Variational Problem, The Dual Problem, The Generalized Comparative Kullback-Leibler Distance, Leaving-out-one and the GACV, Numerical Results, Acknowledgments
126,047,450
08c45f752b4b45195dfbb83a55e6612fcc45900e
Adaptive Margin Support Vector Machines
This chapter contains sections titled: Introduction, Leave-One-Out Support Vector Machines, Adaptive Margin SVMs, Relationship of AM-SVMs to Other SVMs, Theoretical Analysis, Experiments, Discussion
64,089,129
0ddfcf57b6ea30ec00df91c7b0912c1113f24b13
Regularization with Dot-Product Kernels
In this paper we give necessary and sufficient conditions under which kernels of dot product type k(x, y) = k(x · y) satisfy Mercer's condition and thus may be used in Support Vector Machines (SVM), Regularization Networks (RN) or Gaussian Processes (GP). In particular, we show that if the kernel is analytic (i.e. can be expanded in a Taylor series), all expansion coefficients have to be nonnegative. We give an explicit functional form for the feature map by calculating its eigenfunctions and eigenvalues.
12,633,343
0e68e2ff254f8db0ab9c5cfa58f89d06c3ccb647
Towards a Strategy for Boosting Regressors
This chapter contains sections titled: Introduction, Background and Results, Top Level Description of the Boosting Strategy, Generation of Weak Learners, Overall Algorithm, Experiments, Conclusions
63,857,836
3101adc33f3c4caa453918726cb2fa6598f04516
Large Margin Rank Boundaries for Ordinal Regression
null
126,091,282
3aaadcda34636c5ffe1c713e059058fa866e55e0
Linear Discriminant and Support Vector Classifiers
This chapter contains sections titled: Introduction, What is a Linear Discriminant?, Formulation of the Linear Discriminant Training Problem, Training Algorithms, Which Linear Discriminant?, Conclusion, Acknowledgments
125,512,865
474165d2f20504ecb766f614b2d2a674508801d9
Margin Distribution and Soft Margin
This chapter contains sections titled: Introduction, Margin Distribution Bound on Generalization, An Explanation for the Soft Margin Algorithm, Related Techniques, Conclusion, Acknowledgments
125,607,160
50b21ab47d7ed16b3409a49ab87524f2e98f334a
Support Vectors and Statistical Mechanics
This chapter contains sections titled: Introduction, The Basic SVM Setting, The Learning Problem, The Approach of Statistical Mechanics, Results I: General, Results II: Overfitting, Results III: Dependence on the Input Density, Discussion and Outlook, Acknowledgments
125,660,860
57a47c42552115eaffffd3ffaafc2f02ebc38dfa
Computing the Bayes Kernel Classifier
This chapter contains sections titled: Introduction, A Simple Geometric Problem, The Maximal Margin Perceptron, The Bayes Perceptron, The Kernel-Billiard, Numerical Tests, Conclusions, Appendix
125,721,616
5efc26183090fa5b042ffbcb9e6d811a79a14067
Bounds on Error Expectation for SVM
null
125,202,514
61a8edf27625f60dadd94669b2cf2afd39052590
Maximal Margin Perceptron
This chapter contains sections titled: Introduction, Basic Approximation Steps, Basic Algorithms, Kernel Machine Extension, Soft Margin Extension, Experimental Results, Discussion, Conclusions, Appendix: Details of comparison against six other methods for iterative generation of support vector machines
63,874,640
68ac726d99671538ff89a3ec21c316bfb6eb37a2
Invariant Feature Extraction and Classification in Kernel Spaces
null
14,669,628
6db8712aedd3448b3a5e6bd45ded96c5a4ffacc0
Choosing ν in support vector regression with different noise models - theory and experiments
In support vector (SV) regression, a parameter ν controls the number of support vectors and the number of points that come to lie outside of the so-called ε-insensitive tube. For various noise models and SV parameter settings, we experimentally determine the values of ν that lead to the lowest generalization error. We find good agreement with the values that had previously been predicted by a theoretical argument based on the asymptotic efficiency of a simplified model of SV regression.
2,375,659
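The role of ν described above is easy to observe empirically, for instance with scikit-learn's NuSVR. This is a sketch with arbitrary data and parameters, not the paper's experimental setup:

```python
import numpy as np
from sklearn.svm import NuSVR

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (300, 1))
y = np.sin(X[:, 0]) + 0.2 * rng.normal(size=300)   # noisy sine curve

for nu in (0.1, 0.4, 0.8):
    m = NuSVR(nu=nu, C=1.0, kernel="rbf", gamma=1.0).fit(X, y)
    frac_sv = len(m.support_) / len(X)
    print(f"nu={nu:>4}: fraction of support vectors = {frac_sv:.2f}")
```

Larger ν allows more points to become support vectors and, roughly, more points to lie outside the tube, which is the trade-off the paper studies under different noise models.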
725ae3e36b9df3698fdf62bd00fcc191c57f9741
Regularization Networks and Support Vector Machines
null
70,866
8d73c0d0c92446102fdb6cc728b5d69674a1a387
New Support Vector Algorithms
We propose a new class of support vector algorithms for regression and classification. In these algorithms, a parameter ν lets one effectively control the number of support vectors. While this can be useful in its own right, the parameterization has the additional benefit of enabling us to eliminate one of the other free parameters of the algorithm: the accuracy parameter ε in the regression case, and the regularization constant C in the classification case. We describe the algorithms, give some theoretical results concerning the meaning and the choice of ν, and report experimental results.
207,673,395
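For the classification side of the reparameterization described above, scikit-learn exposes NuSVC. A short sketch (illustrative data and kernel only) showing how ν, rather than C, governs the number of support vectors:

```python
import numpy as np
from sklearn.svm import NuSVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (200, 2)), rng.normal(1, 1, (200, 2))])
y = np.array([-1] * 200 + [1] * 200)

for nu in (0.05, 0.2, 0.5):
    clf = NuSVC(nu=nu, kernel="rbf", gamma=0.5).fit(X, y)
    print(f"nu={nu}: support vectors = {len(clf.support_)} of {len(X)}")
```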
8e597460557d44de07ec570738cd2b42cdcc2580
Sparse Greedy Gaussian Process Regression
We present a simple sparse greedy technique to approximate the maximum a posteriori estimate of Gaussian Processes with much improved scaling behaviour in the sample size m. In particular, computational requirements are O(n²m), storage is O(nm), the cost for prediction is O(n) and the cost to compute confidence bounds is O(nm), where n ≪ m. We show how to compute a stopping criterion, give bounds on the approximation error, and show applications to large scale problems.
8,981,636
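A heavily simplified illustration of the kind of low-rank approximation described above: a subset-of-regressors predictive mean built from n ≪ m inducing points. The greedy selection of the paper is replaced here by a random subset, so this sketches only the cost structure, not the proposed algorithm; all parameter values are arbitrary.

```python
import numpy as np

def rbf_matrix(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
m, n, noise = 2000, 50, 0.1                    # m training points, n << m inducing points
X = rng.uniform(-3, 3, (m, 1))
y = np.sin(X[:, 0]) + noise * rng.normal(size=m)

idx = rng.choice(m, n, replace=False)          # stand-in for the paper's greedy selection
Xn = X[idx]
K_nm = rbf_matrix(Xn, X)                       # n x m cross-covariance
K_nn = rbf_matrix(Xn, Xn)

# subset-of-regressors approximate posterior mean: O(n^2 m) to form, O(n) per prediction
A = K_nm @ K_nm.T + noise ** 2 * K_nn
alpha = np.linalg.solve(A + 1e-8 * np.eye(n), K_nm @ y)

X_test = np.linspace(-3, 3, 5).reshape(-1, 1)
print(rbf_matrix(X_test, Xn) @ alpha)          # approximate GP mean at the test points
```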
911dc3dce97dde762da4c106b2bbec74672acc2b
Robust Ensemble Learning for Data Mining
We propose a new boosting algorithm which, similarly to ν-Support-Vector Classification, allows for the possibility of a pre-specified fraction ν of points to lie in the margin area or even on the wrong side of the decision boundary. It gives a nicely interpretable way of controlling the trade-off between minimizing training error and capacity. Furthermore, it can act as a filter for finding and selecting informative patterns from a database.
12,373,506
9b99b58bfbe7500ee71d62febc60bab6d4c0b575
Kernel method for percentile feature extraction
A method is proposed which computes a direction in a dataset such that a specified fraction of a particular class of all examples is separated from the overall mean by a maximal margin. The projector onto that direction can be used for class-specific feature extraction. The algorithm is carried out in a feature space associated with a support vector kernel function, hence it can be used to construct a large class of nonlinear feature extractors. In the particular case where there exists only one class, the method can be thought of as a robust form of principal component analysis, where instead of variance we maximize percentile thresholds. Finally, we generalize it to also include the possibility of specifying negative examples.
18,223,591
9ded923a192ffbf13e4466c6b7d2ede55724b716
Sparse Greedy Matrix Approximation for Machine Learning
null
41,680,909
a38a5a4a8a683fd09f91dd79ab353b6ce9876d73
Beyond the Margin
The concept of large margins is a unifying principle for the analysis of many different approaches to the classification of data from examples, including boosting, mathematical programming, neural networks, and support vector machines. The fact that it is the margin, or confidence level, of a classification--that is, a scale parameter--rather than a raw training error that matters has become a key tool for dealing with classifiers. This book shows how this idea applies to both the theoretical analysis and the design of algorithms.The book provides an overview of recent developments in large margin classifiers, examines connections with other methods (e.g., Bayesian inference), and identifies strengths and weaknesses of the method, as well as directions for future research. Among the contributors are Manfred Opper, Vladimir Vapnik, and Grace Wahba.
218,885,156
a61b3f5ff790f3d2edb9d06aebd256383e94cce2
Generalized Support Vector Machines
This chapter contains sections titled: Introduction, GSVM: The General Support Vector Machine, Quadratic Programming Support Vector Machines, Linear Programming Support Vector Machines, A Simple Illustrative Example, Conclusion, Acknowledgments
63,460,898
aef07e6e8b87617f47ef6373f098b5a54ace9a2a
Proc. 17th Int. Conference on Machine Learning
null
65,112,103
b6fee0077cde5513fc5f6212c70b2e188df64cf6
Natural Regularization from Generative Models
This chapter contains sections titled: Introduction, Natural Kernels, The Natural Regularization Operator, The Feature Map of Natural Kernel, Experiments, Discussion
118,183,172
b7caf811d6980627caad1a8b3053f40348693508
Probabilities for SV Machines
This chapter contains sections titled: Introduction, Fitting a Sigmoid After the SVM, Empirical Tests, Conclusions, Appendix: Pseudo-code for the Sigmoid Training
64,295,966
bcd4e7be87556b9ffa5af5a59065d3acfc291363
Introduction to Large Margin Classifiers
This chapter contains sections titled: A Simple Classification Problem, Theory, Support Vector Machines, Boosting, Empirical Results, Implementations, and Further Developments, Notations
59,736,345
c36415d28c41a6564386f5014a05c3cbbef4de9d
[Anastomosis using the Valtrac ring: pro and con].
The authors present their experience with gastrointestinal anastomosis construction using the biofragmentable ring (Valtrac). Between May 1995 and June 1999 they used it in a group of 75 patients with mean age 58.4 (range 19-59) years. They used the Valtrac ring most often (in 32 patients) to construct the anastomosis between the small and large bowel after right hemicolectomy. One enterocutaneous fistula and one intestinal obstruction due to adhesions occurred in this group. Two patients had signs of fecal impaction in the postoperative period, which disappeared after fragmentation of the ring. Based on their experience, the authors conclude that anastomosis with the Valtrac ring is a safe procedure with a small number of postoperative complications. A disadvantage in their setting is its high price.
19,911,075
c80a962c8aadc9c2139928c90e1f5e69fea68346
Robust Ensemble Learning for Data
null
16,069,672
c900c66310a29bdb771270bb22440a4cf42958cb
Advances in Large Margin Classifiers
From the Publisher: The concept of large margins is a unifying principle for the analysis of many different approaches to the classification of data from examples, including boosting, mathematical programming, neural networks, and support vector machines. The fact that it is the margin, or confidence level, of a classification--that is, a scale parameter--rather than a raw training error that matters has become a key tool for dealing with classifiers. This book shows how this idea applies to both the theoretical analysis and the design of algorithms. The book provides an overview of recent developments in large margin classifiers, examines connections with other methods (e.g., Bayesian inference), and identifies strengths and weaknesses of the method, as well as directions for future research. Among the contributors are Manfred Opper, Vladimir Vapnik, and Grace Wahba.
54,174,771
d209278c4b82162257a5e29706de65c5593d7a4a
The Entropy Regularisation Information Criterion
null
124,107,749
dda3d63ac440420c9e7c2218e3c151dde3a4ebf1
Leave-One-Out Methods
The concept of large margins is a unifying principle for the analysis of many different approaches to the classification of data from examples, including boosting, mathematical programming, neural networks, and support vector machines. The fact that it is the margin, or confidence level, of a classification--that is, a scale parameter--rather than a raw training error that matters has become a key tool for dealing with classifiers. This book shows how this idea applies to both the theoretical analysis and the design of algorithms.The book provides an overview of recent developments in large margin classifiers, examines connections with other methods (e.g., Bayesian inference), and identifies strengths and weaknesses of the method, as well as directions for future research. Among the contributors are Manfred Opper, Vladimir Vapnik, and Grace Wahba.
63,233,108
e003f0a280275de163269d32046950ad37aa37f0
Dynamic Alignment Kernels
This chapter contains sections titled: Introduction: Linear Methods using Kernel function, Applying Linear Methods to Structured Objects, Conditional Symmetric Independence Kernels, Pair Hidden Markov Models, Conditionally Symmetrically Independent PHMMs, Conclusion
17,875,902
e0f042a9f2e59fb469fa959a94fdfd64b53c29d4
Gaussian Processes and SVM: Mean Field and Leave-One-Out
This chapter contains sections titled: Introduction, Gaussian Process Classification, Modeling the Noise, From Gaussian Processes to SVM, Leave-One-Out Estimator, Naive Mean Field Algorithm, Simulation Results, Conclusion
125,220,229
ee6782fa23729abbab8eaa6c64822a96fd43cc23
Entropy Numbers of Linear Function Classes
This paper collects together a miscellany of results originally motivated by the analysis of the generalization performance of the "maximum-margin" algorithm due to Vapnik and others. The key feature of the paper is its operator-theoretic viewpoint. New bounds on covering numbers for classes related to Maximum Margin classes are derived directly without making use of a combinatorial dimension such as the VC-dimension. Specific contents of the paper include: a new and self-contained proof of Maurey's theorem and some generalizations with small explicit values of constants; bounds on the covering numbers of maximum margin classes suitable for the analysis of their generalization performance; the extension of such classes to those induced by balls in quasi-Banach spaces (such as ℓ_p norms with p < 1); the extension of results on the covering numbers of convex hulls of basis functions to p-convex hulls; and an appendix containing the tightest known bounds on the entropy numbers of the identity operator between finite-dimensional ℓ_p spaces.
7,051,801
f0992f17bf94f67ca6125d4f425ec6ab4c73f3a3
Entropy Numbers for Convex Combinations and MLPs
This chapter contains sections titled: Introduction, Tools from Functional Analysis, Convex Combinations of Parametric Families, Convex Combinations of Kernels, Multilayer Networks, Discussion, Appendix: A Remark on Traditional Weight Decay, Appendix: Proofs
118,355,980
06f726c32ab34119b1e19d438c8ac19964ca9dcd
Lernen mit Kernen
We describe recent developments and results of statistical learning theory. In the framework of learning from examples, two factors control generalization ability: how well the training data are explained, and the complexity of the learning machine. We describe kernel algorithms in feature spaces as elegant and efficient methods of realizing such machines with controllable complexity. Examples thereof are Support Vector Machines (SVM), which use kernel functions for function estimation, and Kernel PCA (Principal Component Analysis), which uses kernel functions to extract nonlinear features from data sets. More important than any individual example of a kernel algorithm, however, is the insight that any algorithm that can be cast in terms of dot products can be generalized to a nonlinear setting using kernels. Finally, we illustrate the significance of kernel algorithms by briefly describing industrial and academic applications, including ones where we obtained record results on important, practically relevant benchmarks.
551,677
12a26141985867a92771189dedf15bff18cdaf8b
Invariant Feature Extraction and Classification in Kernel Spaces
We incorporate prior knowledge to construct nonlinear algorithms for invariant feature extraction and discrimination. Employing a unified framework in terms of a nonlinear variant of the Rayleigh coefficient, we propose non-linear generalizations of Fisher's discriminant and oriented PCA using Support Vector kernel functions. Extensive simulations show the utility of our approach.
46,089,133
27aac1bcaea69f33de6fea34a31a836cbee5e9d1
Kernel principal component analysis
null
7,831,590
36bf9cbc759d2da451c1d2601e9f7b9ace80665a
Classification on proximity data with LP-machines
We provide a new linear program to deal with classification of data in the case of data given in terms of pairwise proximities. This allows us to avoid the problems inherent in using feature spaces with an indefinite metric in support vector machines, since the notion of a margin is only needed in the input space, where the classification actually occurs. Moreover, in our approach we can enforce sparsity in the proximity representation by sacrificing training error. This turns out to be favorable for proximity data. Similar to ν-SV methods, the only parameter needed in the algorithm is the (asymptotical) number of data points being classified with a margin. Finally, the algorithm is successfully compared with ν-SV learning in proximity space and K-nearest-neighbors on real world data from neuroscience and molecular biology.
856,530