%-----------------------------------------------------
\chapter{Introduction}
%-----------------------------------------------------

%-----------------------------------------------------
\section{Overview}
%-----------------------------------------------------

The pioneering work on the Support Vector Machine
(SVM) appeared in 1992, by
Vapnik and co-workers \cite{BOSER92a,VAPNIK9501}. They introduced
a new training algorithm that maximizes the margin between the
training vectors and a separation hyper-plane. With this algorithm,
``optimal margin classifiers'' could be obtained for perceptrons,
radial basis functions (RBF) and polynomials.
As an additional feature, it was possible to identify, among all training data,
the most important vectors, which define the separation hyper-plane
between the classes and are known as ``support vectors''.

The first SVM did not deal with misclassification:
only linearly separable training patterns (in the feature space)
could be used. SVMs based on this training algorithm are
known as SVMs with ``hard margins'' \cite{VAPNIK9501}.

With the introduction of ``slack variables'' into the training process,
non-linearly separable training patterns could be handled.
As a result, separation hyper-planes with maximum margin
and small classification error are obtained by SVMs with
``soft margins'' \cite{VAPNIK9801,CRISTIANINI0001,CAMPBELL0201}.
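In its standard form (detailed in later chapters), the soft-margin
formulation adds a slack variable $\xi_i$ per training pattern and
penalizes margin violations with a user-chosen constant $C$:
\[
\min_{\mathbf{w},\,b,\,\xi} \;\; \frac{1}{2}\|\mathbf{w}\|^2 + C\sum_{i=1}^{n}\xi_i
\quad \mbox{subject to} \quad y_i(\mathbf{w}\cdot\mathbf{x}_i + b) \ge 1 - \xi_i,
\;\; \xi_i \ge 0.
\]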

In SVM design, non-linear functions map the input data into a higher
dimensional {\it feature space}, where a separation hyper-plane is
obtained as the solution to the classification problem. This
non-linear mapping is carried out by kernel functions, such as
polynomial, radial basis function (RBF) and sigmoidal kernels,
and takes into consideration all the
vectors of the input data set. The solution is then obtained by
considering the training data error and the separation margin
between classes, which is a key concept in the SVM formulation. The
separation margin is controlled by the user and is related to the
classification error allowed by the separation
hyper-plane.
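In their common forms, the three kernels mentioned above are written as
\begin{eqnarray*}
K(\mathbf{x},\mathbf{z}) &=& (\mathbf{x}\cdot\mathbf{z} + 1)^{d}, \\
K(\mathbf{x},\mathbf{z}) &=& \exp\left(-\frac{\|\mathbf{x}-\mathbf{z}\|^2}{2\sigma^2}\right), \\
K(\mathbf{x},\mathbf{z}) &=& \tanh(\kappa\,\mathbf{x}\cdot\mathbf{z} + \theta),
\end{eqnarray*}
where $d$, $\sigma$, $\kappa$ and $\theta$ are kernel parameters
chosen by the user.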

SVM training consists of solving a
quadratic programming (QP) problem with linear constraints, which depend on the
training vectors, on a few kernel parameters and on the separation
margin limits. The QP solution provides the information needed to
choose, among all data, the most important vectors, which
define the separation hyper-plane between the classes and are known as
support vectors (SV). Hyper-plane separation in the feature space
results in non-linear decision boundaries in the input space.
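In its usual dual form, this QP can be written as
\[
\max_{\alpha} \; \sum_{i=1}^{n}\alpha_i
- \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}\alpha_i\alpha_j y_i y_j K(\mathbf{x}_i,\mathbf{x}_j)
\quad \mbox{subject to} \quad \sum_{i=1}^{n}\alpha_i y_i = 0,
\;\; 0 \le \alpha_i \le C,
\]
and the support vectors are exactly the training patterns with
$\alpha_i > 0$ in the solution.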

% -----------------------------------------------
\section{Motivations}
% -------------------------------------------------

Since their formal description, Support Vector Machines
have been applied to several regression and classification problems.
They have become a useful
tool in pattern recognition, with applications in many areas such as
spam categorization \cite{Drucker99}, texture recognition
\cite{Barzilay99,KIM0201}, gene expression analysis
\cite{Brown00,VALENTINI0201}, text
categorization \cite{Joachims98}, olfactory signal recognition
\cite{DISTANTE0301}, image segmentation \cite{KOTROPOULOS0301},
face recognition \cite{SHAONING0301,WANG0201,JEFFREY0201}
and hand-written digit recognition \cite{VAPNIK9801,LOO0201}.

An issue addressed in SVM research is the reduction
of its high computational cost,
which is affected by the mapping of all
input vectors into the feature space where the solution to
the classification (or regression) problem is actually obtained.
The kernel products $K(\mathbf{x}_i,\mathbf{x}_j)$ are computed for all
input data pairs $(\mathbf{x}_i,\mathbf{x}_j)$, which results in a
quadratic computational cost that may be prohibitive for very large
data sets.
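To make this cost concrete, a naive kernel matrix computation can be
sketched as follows (an illustrative Python sketch with an RBF kernel,
not an implementation from this work; all names are hypothetical):

```python
import math

def rbf_kernel(x, z, sigma=1.0):
    """Gaussian (RBF) kernel K(x, z) = exp(-||x - z||^2 / (2 sigma^2))."""
    d2 = sum((xi - zi) ** 2 for xi, zi in zip(x, z))
    return math.exp(-d2 / (2.0 * sigma ** 2))

def kernel_matrix(data, kernel=rbf_kernel):
    """Full n x n kernel matrix: n**2 kernel evaluations, hence
    quadratic time and memory in the number of input vectors."""
    n = len(data)
    return [[kernel(data[i], data[j]) for j in range(n)] for i in range(n)]
```

For $10^5$ input vectors the matrix has $10^{10}$ entries, which
illustrates why reducing the training set size matters.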

A way to decrease the computational cost is to reduce
the data set size with a sample selection method. Random selection is,
by far, the simplest sample selection method to implement,
but it has a serious drawback: it does not take into account
the very specific structure of SVMs, which is based on support vectors
chosen from the data set. Training methods that choose
vectors with a higher probability of becoming support vectors
may ensure better generalization ability for the SVMs when compared
with random selection. Of course, more elaborate methods
frequently take longer, so a trade-off needs to be established.
%
There are several methods to solve the QP problem that arises in
the training of SVMs. New strategies to solve this problem,
based on simpler and faster algorithms, also offer a
way to generate SVMs with a lower computational cost.

%-----------------------------------------------------
\section{Contributions}
%-----------------------------------------------------

One goal of this thesis is to present strategies
based on $k$-means \cite{MCQUENN67,ANDERBERG73,Lloyd82}
as a pre-selection method for
Support Vector Machine training \cite{BARROS2000A}.
Due to the boundary detection abilities of $k$-means, as will be demonstrated,
these strategies are better than the traditional random selection method,
since they consider the SVM topology when choosing the subset.
This is carried out by
applying $k$-means to organize the input vectors into several small
clusters, from which input vectors are selected to form the new
training set.
The computational cost of $k$-means is reduced by applying
dimensionality reduction methods, generating a combined
training algorithm for SVMs, denoted SVM-KM, with improved overall performance.
As expected, the use of reduced training sets generates a small loss
in generalization capacity.

How $k$-means and boundary estimation are related is explained
using Bayes decision theory \cite{dudahart73}
and a measure of cluster homogeneity.
This measure provides additional information to the
Bayes rule, enhancing the decision process.

Five pre-selection heuristics that take cluster homogeneity
into consideration are proposed
in this work: KMDC, KMCC, KMAC, KMCD and KMAD.
The strategies differ in how the pre-selection of input
patterns is carried out after clustering by $k$-means.
Their names are composed of the base name KM and two letters that
indicate what kind of action is performed with mixed and non-mixed
clusters, respectively: [D]iscard all vectors,
use only the [C]enter of the cluster
or use [A]ll vectors.
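The mixed/non-mixed logic behind these naming conventions can be
sketched in Python as follows (an illustrative sketch, not the actual
implementation of this work; all function and variable names are
hypothetical):

```python
# Illustrative sketch of the KM* pre-selection heuristics -- NOT the
# thesis implementation. It assumes a previous k-means run produced,
# for each input vector, a cluster assignment and, for each cluster,
# its center. The two rule letters follow the [D]iscard / [C]enter /
# [A]ll convention for mixed and non-mixed clusters, respectively
# (e.g. KMAD -> mixed_rule='A', pure_rule='D').

def preselect(vectors, labels, assignment, centers, mixed_rule, pure_rule):
    """Build a reduced training set from k-means clusters."""
    selected = []
    for c in sorted(set(assignment)):
        members = [i for i, a in enumerate(assignment) if a == c]
        # a cluster is "mixed" when it contains more than one class
        mixed = len({labels[i] for i in members}) > 1
        rule = mixed_rule if mixed else pure_rule
        if rule == 'A':                    # keep all vectors of the cluster
            selected.extend(vectors[i] for i in members)
        elif rule == 'C':                  # keep the cluster center only
            selected.append(centers[c])
        # rule == 'D': discard the whole cluster
    return selected
```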

SVM-KM yields a deep insight into sample selection strategies for SVMs,
indicating that the structure of SVMs has to be considered when
creating sampling methods.
The idea is that if SVMs ``favor'' some
vectors during training (the vectors chosen as support vectors), it is natural to
try to use these same vectors in the pre-selection strategy.



%-------------------



Another contribution is the presentation of a
new training algorithm for SVMs based on the 
pedagogical pattern selection strategy called EDR 
(Error Dependent Repetition) \cite{CACHIN9401}. 
Using an iterative process, 
SVM-EDR can solve the dual problem without any assumption about 
support vectors or the Karush-Kuhn-Tucker (KKT) conditions
\cite{Luenberger86}.

The SVM-EDR training algorithm \cite{barros:2001} is a boosting algorithm
that uses a deterministic, error-based schedule
to re-sample the data set. Patterns associated with large
errors have their Lagrange multipliers updated more
frequently, according to a gradient ascent strategy.
The schedule is calculated using a modified EDR procedure,
adapted to the output error of the SVMs.
The generalization errors and support vectors obtained are
similar to those found by Sequential Minimal Optimization (SMO).
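As a sketch of the kind of update involved (the exact SVM-EDR schedule
and step size are detailed in Chapter~\ref{CAP-SVM-EDR}), a generic
gradient ascent step on the dual objective updates each selected
multiplier as
\[
\alpha_i \leftarrow
\min\left\{C,\;\max\left\{0,\;
\alpha_i + \eta\left(1 - y_i \sum_{j} \alpha_j y_j
K(\mathbf{x}_i,\mathbf{x}_j)\right)\right\}\right\},
\]
where $\eta$ is a learning rate and the clipping keeps the box
constraints $0 \le \alpha_i \le C$ satisfied; under the EDR schedule,
patterns with larger output error are revisited more often.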

%-------------------

The third contribution of this work is a program for training
support vector machines, called SVMBR.
Although several programs for training SVMs may
be found on the Internet, the main reason to develop a
new one was to create a framework where not only new kernels
could be added, as in the majority of SVM programs, but
also new training algorithms and QP
solver strategies.

SVMBR was developed from scratch, in C++, after careful class
modeling, and can be compiled to run on any operating
system with a standard C++ compiler.
%The program is totally 
%modeled in UML (Unified Modeling Language). 
At this moment, SMO
and EDR are the only training methods
available, and four kernel functions are supported:
linear, polynomial, RBF and sigmoid.

%-----------------------------------------------------
\section{Outline of the chapters}
%-----------------------------------------------------

%\ref{CAP-LEARN-GER}
%\ref{CAP-INTRO-SVM}
%\ref{CAP-SVM-TRAINING}
%\ref{CAP-SVM-KM}
%\ref{CAP-SVM-EDR}
%\ref{CAP-CONC-FUTURE}


Some concepts of learning theory are described in
Chapter~\ref{CAP-LEARN-GER}.
A brief introduction to statistical learning theory is given, and
important definitions such as the empirical risk minimization and
structural risk minimization principles and the Vapnik-Chervonenkis dimension
are discussed.

The basic formulations of SVMs with hard and soft
margins are presented in Chapter~\ref{CAP-INTRO-SVM}.
Kernels and the implicit mapping into feature
space, two important features of SVMs, are described
as well. A pattern classification example
is given at the end of the chapter.

Chapter~\ref{CAP-SVM-TRAINING} presents several strategies to
solve the QP problem arising from the training of SVMs.
Training methods are divided into
four groups: classical, geometric, iterative and
working sets. Specifically, Successive Over-Relaxation
(SOR) and SMO are
detailed due to their importance in subsequent chapters.

The SVM-KM method is described in Chapter~\ref{CAP-SVM-KM}.
It explains how the proposed boundary estimation by
$k$-means works, and five pre-selection strategies
are presented. Results from simulations are shown and discussed.

The SVM-EDR method, a new strategy for training SVMs, is presented in 
Chapter \ref{CAP-SVM-EDR}. SVM-EDR uses a step ascent algorithm 
associated with pattern selection strategies to solve the dual problem. 
Results from simulations are presented and discussed.

Conclusions and perspectives for future work are given in
Chapter~\ref{CAP-CONC-FUTURE}.

Appendix~\ref{SVMBR-PROGRAMA} describes how to use
the SVMBR program and how it is structured. Some
usage examples are also presented.

