%-----------------------------------------------------
\chapter{Introduction}
%-----------------------------------------------------

%-----------------------------------------------------
\section{Overview}
%-----------------------------------------------------

The pioneering work on Support Vector Machines
(SVMs) appeared in 1992, authored by
Vapnik and co-workers \cite{BOSER92a,VAPNIK9501}. They introduced
a new training algorithm that maximizes the margin between the
training vectors and a separation hyper-plane. With this algorithm,
``optimal margin classifiers'' could be obtained for perceptrons,
radial basis functions (RBF) and polynomials.
As an additional feature, it was possible to choose, among all training data,
the most important vectors that define the separation hyper-plane
between the classes, known as ``support vectors''.

The first SVMs did not deal with misclassification:
only training patterns that were linearly separable (in the feature space)
could be used. SVMs based on this training algorithm are
known as SVMs with ``hard margins'' \cite{VAPNIK9501}.

With the introduction of ``slack variables'' into the training process,
training patterns that are not linearly separable could be handled.
As a result, separation hyper-planes with maximum margin
and small classification error are obtained by SVMs with
``soft margins'' \cite{VAPNIK9801,CRISTIANINI0001,CAMPBELL0201}.
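As an illustrative sketch (using a common notation from the literature, which may differ from the one adopted in later chapters), the soft-margin formulation introduces slack variables $\xi_i \geq 0$ that measure how much each training pattern $(\mathbf{x}_i, y_i)$ violates the margin constraint:
\[
\min_{\mathbf{w},\,b,\,\boldsymbol{\xi}} \ \frac{1}{2}\|\mathbf{w}\|^2 + C \sum_{i=1}^{\ell} \xi_i
\quad \text{subject to} \quad
y_i\,(\mathbf{w} \cdot \mathbf{x}_i + b) \geq 1 - \xi_i, \quad \xi_i \geq 0,
\]
where the constant $C > 0$ controls the trade-off between a large margin and a small total constraint violation.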

In SVM design, non-linear functions map the input data into a higher
dimensional {\it feature space}, where a separation hyper-plane is
obtained as the solution to the classification problem. This
non-linear mapping is carried out by kernel functions, such as
polynomial, radial basis function and
sigmoidal kernels, taking into consideration all the
vectors of the input data set. The solution is then obtained by
considering both the training error and the separation margin
between classes, which is a key concept in the SVM formulation. The
separation margin is controlled by the user and is related to the
classification error allowed by the separation
hyper-plane.
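For concreteness, typical forms of these kernel functions found in the literature are
\[
K(\mathbf{x}_i,\mathbf{x}_j) = (\mathbf{x}_i \cdot \mathbf{x}_j + 1)^d, \qquad
K(\mathbf{x}_i,\mathbf{x}_j) = \exp\!\left(-\frac{\|\mathbf{x}_i-\mathbf{x}_j\|^2}{2\sigma^2}\right), \qquad
K(\mathbf{x}_i,\mathbf{x}_j) = \tanh(\kappa\, \mathbf{x}_i \cdot \mathbf{x}_j + \theta),
\]
corresponding, respectively, to the polynomial, RBF and sigmoidal kernels; here $d$, $\sigma$, $\kappa$ and $\theta$ are user-defined parameters (the notation is the usual one from the literature, not necessarily that of the following chapters).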

SVM training consists in solving a
quadratic programming (QP) problem with linear constraints that depend on the
training vectors, on a few kernel parameters and on the separation
margin limits. The QP solution provides the information needed to
choose, among all data, the most important vectors that
define the separation hyper-plane between the classes, known as
support vectors (SVs). Hyper-plane separation in the feature space
results in non-linear decision boundaries in the input space.
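As a sketch (again in a common notation from the literature), the QP problem usually solved in practice is the dual of the margin-maximization problem: maximize
\[
W(\boldsymbol{\alpha}) = \sum_{i=1}^{\ell} \alpha_i
- \frac{1}{2} \sum_{i=1}^{\ell}\sum_{j=1}^{\ell} \alpha_i \alpha_j\, y_i y_j\, K(\mathbf{x}_i,\mathbf{x}_j)
\]
subject to $0 \leq \alpha_i \leq C$ and $\sum_{i=1}^{\ell} \alpha_i y_i = 0$. The support vectors are precisely the training vectors whose multipliers satisfy $\alpha_i > 0$.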

% -----------------------------------------------
\section{Motivations}
% -------------------------------------------------

Since their formal description, Support Vector Machines
have been applied to many regression and classification problems.
They have become a useful
tool in pattern recognition, with applications in areas such as
spam categorization \cite{Drucker99}, texture recognition
\cite{Barzilay99,KIM0201}, gene expression analysis
\cite{Brown00,VALENTINI0201}, text
categorization \cite{Joachims98}, olfactory signal recognition
\cite{DISTANTE0301}, image segmentation \cite{KOTROPOULOS0301},
face recognition \cite{SHAONING0301,WANG0201,JEFFREY0201}
and hand-written digit recognition \cite{VAPNIK9801,LOO0201}.

An issue addressed in SVM research is the reduction
of their high computational cost,
which stems from mapping all
input vectors into the feature space, where the solution to
the classification (or regression) problem is actually obtained.
The kernel products $K(\mathbf{x}_i,\mathbf{x}_j)$ are computed for all
input data pairs $(\mathbf{x}_i,\mathbf{x}_j)$, which results in a
quadratic computational cost that may be prohibitive for very large
data sets.

A way to decrease the computational cost is to reduce
the data set size with a sample selection method. Random selection is,
by far, the simplest sample selection method to implement,
but it has a serious drawback: it does not take into account
the very specific structure of SVMs, which is based on support vectors
chosen from the data set. Training methods that choose
vectors with a higher probability of becoming support vectors
may ensure better generalization ability for the SVMs when compared
with random selection. Of course, more elaborate methods
frequently take longer, and a trade-off needs to be established.
%
There are several methods to solve the QP problem that arises in
SVM training. New strategies to solve this problem,
implementing simpler and faster algorithms, also offer other
ways to generate SVMs at a lower computational cost.

%-----------------------------------------------------
\section{Outline of the chapters}
%-----------------------------------------------------

Some concepts of learning theory are described in
Chapter~\ref{CAP-LEARN-GER}.
A brief introduction to statistical learning theory is given, and
important concepts such as the empirical risk minimization and
structural risk minimization principles and the Vapnik-Chervonenkis dimension
are discussed.

The basic formulations of SVMs with hard and soft
margins are presented in Chapter~\ref{CAP-INTRO-SVM}.
Kernels and the implicit mapping into feature
space, two important features of SVMs, are described
as well. A pattern classification example
is given at the end of the chapter.

Chapter~\ref{CAP-SVM-TRAINING} presents several strategies to
solve the QP problem arising from SVM training.
Training methods are divided into
four groups: classical, geometric, iterative and
working-set methods. Successive Over-Relaxation
(SOR) and Sequential Minimal Optimization (SMO) are
detailed due to their importance in subsequent chapters.

Chapter~\ref{SVMBR-PROGRAMA} describes how the SVMBR program
is structured and how to use it. SVMBR~\cite{SMOBR} was developed
from scratch in C++, after careful class modeling, and can be
compiled to run on any operating
system with a standard C++ compiler. SMO and EDR are the only training methods
available, and four kernel functions are supported:
linear, polynomial, RBF and sigmoid.