\chapter{Support Vector Machines}
\label{CAP-INTRO-SVM}

\section{Introduction}

Support Vector Machines (SVMs) can be regarded as learning machines
with one hidden layer, trained with supervised learning, but with an
important difference from other similar machines: they are based on the
Structural Risk Minimization
principle. This characteristic allows SVMs to build learning machines
with better generalization ability and to avoid overfitting
the training data.

A pioneering work related to SVMs appeared in 1992, by
Vapnik and co-workers \cite{BOSER92a,VAPNIK9501}. They introduced
a new training algorithm that maximizes the margin between the
training vectors and a separation hyper-plane. With this algorithm,
``optimal margin classifiers'' could be obtained for perceptrons,
radial basis functions (RBF) and polynomials.
As an additional feature, the algorithm selects, among all training
data, the most important vectors for defining the separation
hyper-plane between the classes, known as ``support vectors''.

The first SVMs did not deal with misclassification: only linearly
separable training patterns (in the feature space)
could be used. SVMs based on this training algorithm are
known as SVMs with ``hard margins'' \cite{VAPNIK9501}.

With the introduction of ``slack variables'' in the training process,
non-linearly separable training patterns could be treated.
As a result, separation hyper-planes with maximum margin
and minimum classification error are obtained by SVMs with
``soft margins'' \cite{VAPNIK9501}.

This chapter describes the basic formulation of SVMs with hard and soft
margins. Furthermore, it presents some concepts on kernels and the
implicit mapping into the feature space, two important features of
SVMs.

\section{SVMs with hard margins}

Consider a training set  $\left\{ {\left(
{\mathbf{x}_i,y_i} \right)} \right\}_{i=1}^p$ with two distinct
class labels, given by ($+1$) and ($-1$).
A vector $\mathbf{x}_j$ is correctly classified when 
the following expression is true:
%A function able to decide when a vector $\mathbf{x}_j$
%was correctly classified can be expressed as:
% ----------------------------------------------------------------
\begin{equation}
y_j\left( \sum\limits_{i=1}^{p}w_i\varphi_i(\mathbf{x}_j)+b
\right) \geq +1 \;\;\;  
\mathrm{for}
\;\; j=1,2,\ldots,p 
\label{EQ-LIN-SEP}
\end{equation}
% ----------------------------------------------------------------
where $\varphi(\cdot)$ represents
the mapping of the input vector $\mathbf{x}_j$ into a higher dimensional
space (called ``feature space''),
the $w_i$'s are the components of the weight vector and $b$ is the
bias \cite{COVER6501}.
Values greater than or equal to one represent a correct
classification, and the expression between parentheses, set to zero,
defines the separation hyper-plane.
%---
\begin{figure}[tb]
\centering
\psfrag{(a)}{(a)}
\psfrag{(b)}{(b)}
\includegraphics[scale=.4]{figs/hiperotimo.eps}\\
\caption[Maximum margin separation hyper-plane]
{(a) A possible separation and (b) the  maximum
    margin separation hyper-plane.}
\label{FIG-MAX-MARGIN}
\end{figure}
%-----

During the training of SVMs, the most important vectors for defining
the class boundaries are chosen among all training data. These vectors
define the optimal separation hyper-plane, with minimum
classification error between the classes and maximum
separation margin (see Figure \ref{FIG-MAX-MARGIN}).
The optimal hyper-plane is given in vector form as:
% ----------------------------------------------------------------
\begin{equation}
\mathbf{w}^T\gbf{\varphi}(\mathbf{x})+ b = 0, \label{EQ-HIPER-NON-LIN}
\end{equation}
% ----------------------------------------------------------------
where $\gbf{\varphi}(\mathbf{x}) = \left[
\varphi_1(\mathbf{x}) \; \varphi_2(\mathbf{x}) \;
\ldots \; \varphi_p(\mathbf{x})
\right]^T$ and $\mathbf w =\left[ w_1 \; w_2 \; \ldots \;
w_p \right]^T$.

The SVM design with hard margins can be stated as \cite{VAPNIK9801,HAYKIN9901}:
% ----------------------------------------------------------------
\begin{quote}
{ \em   % ------ Inicio Italico ---
Given a training set
$\left\{ {\left({\mathbf{x}_i,y_i} \right)} \right\}_{i=1}^p$,
find the optimal values for the weight vector
$\mathbf{w}$ and bias $b$ such that the following constraints
\begin{equation}
\begin{array}{crc}
y_i(\mathbf{w}^T\gbf{\varphi}(\mathbf{x}_i)+b)\geq 1 &
\mathrm{for} &  i=1,2,\ldots,p \\
\end{array}
\label{EQ-CAP3-PRIMAL}
\end{equation}
are satisfied, and the cost functional
\begin{equation}
\Phi (\mathbf{w})=\frac{1}{2}\mathbf{w}^T\mathbf{w}
\end{equation}
is minimized.
} % ------ Fim Italico ---
\end{quote}
% -------------------------------------
% ----------------------------------------------------------------
The formulation above is called the ``primal problem''. The cost
function is convex and all constraints are
linear. Using the theory of Lagrange multipliers \cite{BAZARAA7901},
this problem can be represented by the Lagrangian:
% ----------------------------------------------------------------
\begin{equation}
J(\mathbf{w},b,\gbf{\alpha}) = {1 \over 2}\mathbf{w}^T\mathbf{w} -
\sum\limits_{i=1}^{p}\alpha_i
\left[
y_i(\mathbf{w}^T\gbf{\varphi}(\mathbf{x}_i)+b)-1
\right].
\label{EQ-FORMA-LAGRAN-DUAL}
\end{equation}
% ----------------------------------------------------------------
Differentiating Equation \ref{EQ-FORMA-LAGRAN-DUAL} with respect to
$\mathbf{w}$ and $b$ gives:
% ----------------------------------------------------------------
\begin{equation}
{\partial J \over \partial \mathbf{w}} =
\mathbf{w} - \sum\limits_{i=1}^{p}\alpha_iy_i\gbf{\varphi}(\mathbf{x}_i) =
\mathbf{0} \;\;\; \Rightarrow \;\;\;
\mathbf{w} = \sum\limits_{i=1}^{p}\alpha_iy_i\gbf{\varphi}(\mathbf{x}_i)
\label{EQ-DERIV-PART-W}
\end{equation}
% ----------------------------------------------------------------
\begin{equation}
{\partial J \over \partial b} =
\sum\limits_{i=1}^{p}\alpha_iy_i = 0
\label{EQ-DERIV-PART-B}
\end{equation}
% ----------------------------------------------------------------
and substituting these expressions back into the Lagrangian
(Equation \ref{EQ-FORMA-LAGRAN-DUAL}) results in:
%\marginpar{\emph{\small Dual \\0problem}}
% ----------------------------------------------------------------
\begin{equation}
W(\alpha) = \sum\limits_{i=1}^{p}\alpha_i -
{1 \over 2}\sum\limits_{i=1}^{p}\sum\limits_{j=1}^{p}
\alpha_i\alpha_jy_iy_j
\gbf{\varphi}(\mathbf{x}_i)^T
\gbf{\varphi}(\mathbf{x}_j).
\label{EQ-FORMA-DUAL}
\end{equation}
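For completeness, the substitution can be expanded term by term:
% ----------------------------------------------------------------
$$
J = \frac{1}{2}\sum\limits_{i=1}^{p}\sum\limits_{j=1}^{p}
\alpha_i\alpha_jy_iy_j
\gbf{\varphi}(\mathbf{x}_i)^T\gbf{\varphi}(\mathbf{x}_j)
- \sum\limits_{i=1}^{p}\sum\limits_{j=1}^{p}
\alpha_i\alpha_jy_iy_j
\gbf{\varphi}(\mathbf{x}_i)^T\gbf{\varphi}(\mathbf{x}_j)
- b\sum\limits_{i=1}^{p}\alpha_iy_i
+ \sum\limits_{i=1}^{p}\alpha_i,
$$
where the first term comes from ${1 \over 2}\mathbf{w}^T\mathbf{w}$,
the second and third come from the constraint term, and the third
vanishes by Equation \ref{EQ-DERIV-PART-B}, leaving Equation
\ref{EQ-FORMA-DUAL}.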
% ----------------------------------------------------------------
Based on the equations above, the dual problem is defined 
as \cite{VAPNIK9801,HAYKIN9901}:
% ----------------------------------------------------------------
\begin{quote}
{ \em   % ------ Inicio Italico ---
Given a training set
$\left\{ {\left({\mathbf{x}_i,y_i} \right)} \right\}_{i=1}^p$,
find the optimal Lagrange  multipliers $\alpha_i$
for the problem: \\
Maximize
\begin{equation}
W(\gbf{\alpha})=
\sum\limits_{i=1}^p {\alpha _i}-
{1 \over 2}\sum\limits_{i=1}^p\sum\limits_{j=1}^p \alpha _i\alpha
_jy_iy_j
\gbf{\varphi}(\mathbf{x}_i)^T
\gbf{\varphi}(\mathbf{x}_j)
\label{EQ-CAP3-DUAL}
\end{equation}
subject to:
\begin{itemize}
  \item $\sum\limits_{i=1}^{p}\alpha_iy_i = 0$
  \item $\alpha_i \geq 0$
\end{itemize}
} % ------ Fim Italico ---
\end{quote}
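The statement above can be illustrated numerically. The sketch below
assumes a linear kernel ($\gbf{\varphi}$ taken as the identity) and only
two training vectors, so the dual can be solved analytically: the
equality constraint forces the two multipliers to be equal, reducing
$W(\gbf{\alpha})$ to a one-dimensional quadratic. All names and values
are illustrative.

```python
import numpy as np

# Two training vectors, one per class; the feature map phi is taken as
# the identity (linear kernel), an assumption made for this sketch.
x1, y1 = np.array([2.0, 2.0]), +1
x2, y2 = np.array([0.0, 0.0]), -1

# The constraint sum(alpha_i * y_i) = 0 forces alpha1 = alpha2 = a, so
# W(a) = 2a - 0.5 * a**2 * ||x1 - x2||**2, maximized at:
a = 2.0 / np.dot(x1 - x2, x1 - x2)

# Recover w from the stationarity condition w = sum(alpha_i y_i phi(x_i))
# and b from the active constraint y1 * (w . x1 + b) = 1.
w = a * (y1 * x1 + y2 * x2)
b = 1.0 - np.dot(w, x1)

# Both constraints hold with equality: both points are support vectors.
print(y1 * (np.dot(w, x1) + b))   # 1.0
print(y2 * (np.dot(w, x2) + b))   # 1.0
print(2.0 / np.linalg.norm(w))    # separation margin, here 2*sqrt(2)
```

The recovered margin equals the distance between the two points, as
expected for a two-vector problem.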
% ----------------------------------------------------------------
\section{SVMs with soft margins}
% ---------------------------------------------------------------
With the use of ``slack variables'' in the training process
\cite{VAPNIK9501}, non-linearly separable training patterns can be
treated. Slack variables allow some classification errors and
permit a good balance between the machine topology (structural risk)
and the training error (empirical risk).

Using slack variables, the constraint on each training vector
$\mathbf{x}_j$ becomes:
% ----------------------------------------------------------------
\begin{equation}
y_j\left( \mathbf{w}^T\gbf{\varphi}(\mathbf{x}_j)+b \right)
+ \xi_j \geq 1 \;\;\;\;\;
\mathrm{for} \;\; j=1,2,\ldots,p
\label{EQ-LIN-SEP-FOLGA}
\end{equation}
% ----------------------------------------------------------------
where $\xi_j$ are the non-negative slack variables associated
with each training vector $\mathbf{x}_j$. Even if
$y_j\left( \mathbf{w}^T\gbf{\varphi}(\mathbf{x}_j)+b \right)$
is less than one, or even negative, a sufficiently large $\xi_j$
makes the sum reach one, so the constraint in
Equation \ref{EQ-LIN-SEP-FOLGA} is still satisfied.

The new cost functional can be expressed as:
% ----------------------------------------------------------------
\begin{equation}
\Phi (\mathbf{w},\gbf{\xi})={1 \over 2}\mathbf{w}^T\mathbf{w} +
C\sum\limits_{i=1}^p \xi_i,
\end{equation}
% ----------------------------------------------------------------
where $C$ is a positive training parameter which
establishes a trade-off between model complexity and
training error. When $C$ is large, the slack variables are heavily
penalized and become small, generating a decision surface tight to
the support vectors and possibly with a larger norm for
$\mathbf w$. Conversely, small values of $C$ tolerate larger slack
variables and provide a smoother decision surface, since the distance
between the decision surface and the support vectors increases.
In this case, the norm of $\mathbf w$ becomes smaller.

The primal problem for SVMs with soft margins can be stated 
as \cite{VAPNIK9801,HAYKIN9901}:
% ----------------------------------------------------------------
\begin{quote}
{ \em   % ------ Inicio Italico ---
  Given a training set
  $\left\{ {\left({\mathbf{x}_i,y_i} \right)} \right\}_{i=1}^p$,
  find the optimal values for the weight vector
  $\mathbf{w}$, bias $b$ and slack variables $\xi_i$ 
  such that the following constraints
  %----------------------------------
  \begin{equation}
  \begin{array}{crc}
  y_i(\mathbf{w}^T\gbf{\varphi}(\mathbf{x}_i) + b)+ \xi_i \geq 1 &
  \mathrm{for} &  i=1,2,\ldots,p \\
   & \mathrm{and} & \xi_i \geq 0  \; \forall \; i
  \end{array}
  \label{EQ-PRIMAL-SLACK}
  \end{equation}
  are satisfied, and the cost functional
  \begin{equation}
  \Phi (\mathbf{w},\gbf{\xi})=\frac{1}{2} \mathbf{w}^T\mathbf{w}
  + C\sum\limits_{i=1}^p \xi_i
  \end{equation}
  is minimized.
  %---------------------------------
} % ------ Fim Italico ---
\end{quote}
% ----------------------------------------------------------------
The Lagrangian for this problem, with multipliers $\alpha_i \geq 0$ for
the margin constraints and $\mu_i \geq 0$ for the constraints
$\xi_i \geq 0$, is:
% ----------------------------------------------------------------
\begin{equation}
J(\mathbf{w},b,\gbf{\alpha},\gbf{\mu},\gbf{\xi}) =
{1 \over 2}\mathbf{w}^T\mathbf{w} +
 C\sum\limits_{i=1}^p \xi_i -
\sum\limits_{i=1}^{p}\alpha_i
\left[
y_i(\mathbf{w}^T\gbf{\varphi}(\mathbf{x}_i)+b)+\xi_i-1
\right] - \sum\limits_{i=1}^{p}\mu_i\xi_i.
\label{EQ-FORMA-LAGRAN-DUAL-FOLGA}
\end{equation}
% ----------------------------------------------------------------
Differentiating this equation with respect to
$\mathbf{w}$, $b$ and $\xi_i$ gives:
% ----------------------------------------------------------------
\begin{equation}
{\partial J \over \partial \mathbf{w}} =
\mathbf{w} - \sum\limits_{i=1}^{p}\alpha_iy_i\gbf{\varphi}(\mathbf{x}_i) =
\mathbf{0} \;\;\; \Rightarrow \;\;\;
\mathbf{w} = \sum\limits_{i=1}^{p}\alpha_iy_i\gbf{\varphi}(\mathbf{x}_i)
\label{EQ-DERIV-PART-W-FOLGA}
\end{equation}
% ----------------------------------------------------------------
\begin{equation}
{\partial J \over \partial b} =
\sum\limits_{i=1}^{p}\alpha_iy_i = 0
\label{EQ-DERIV-PART-B-FOLGA}
\end{equation}
% ----------------------------------------------------------------
\begin{equation}
{\partial J \over \partial \xi_i} = C - \alpha_i - \mu_i = 0
\;\;\; \Rightarrow \;\;\;
\alpha_i = C - \mu_i
\label{EQ-DERIV-PART-E-FOLGA}
\end{equation}
% ----------------------------------------------------------------
Since $\mu_i \geq 0$, each $\alpha_i$ is bounded above by $C$.
Re-substituting the expressions obtained into the Lagrangian,
the dual objective becomes:
% ----------------------------------------------------------------
\begin{equation}
W(\alpha) = \sum\limits_{i=1}^{p}\alpha_i -
{1 \over 2}\sum\limits_{i=1}^{p}\sum\limits_{j=1}^{p}
\alpha_i\alpha_jy_iy_j
\gbf{\varphi}(\mathbf{x}_i)^T
\gbf{\varphi}(\mathbf{x}_j).
\label{EQ-FORMA-DUAL-FOLGA}
\end{equation}
% ----------------------------------------------------------------
Equations \ref{EQ-CAP3-DUAL} and \ref{EQ-FORMA-DUAL-FOLGA} have the
same form; the difference appears in the constraints, where each
$\alpha_i$ now has the upper limit $C$.
The dual problem is defined as \cite{VAPNIK9801,HAYKIN9901}:
% ----------------------------------------------------------------
\begin{quote}
{ \em   % ------ Inicio Italico ---
Given a training set
$\left\{ {\left({\mathbf{x}_i,y_i} \right)} \right\}_{i=1}^p$,
find the optimal Lagrange  multipliers $\alpha_i$
for the problem: \\
Maximize
\begin{equation}
W(\gbf{\alpha})=
\sum\limits_{i=1}^p {\alpha _i}-
{1 \over 2}\sum\limits_{i=1}^p\sum\limits_{j=1}^p \alpha _i\alpha
_jy_iy_j
\gbf{\varphi}(\mathbf{x}_i)^T
\gbf{\varphi}(\mathbf{x}_j)
\label{EQ-DUAL-SLACK}
\end{equation}
subject to:
\begin{itemize}
  \item $\sum\limits_{i=1}^{p}\alpha_iy_i = 0$
  \item $0 \leq \alpha_i \leq C$
\end{itemize}
} % ------ Fim Italico ---
\end{quote}
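The effect of the upper limit $C$ can be seen on the same two-vector
toy problem used for hard margins (linear kernel assumed; all values
illustrative): the dual reduces to a one-dimensional quadratic whose
maximizer is simply clipped to the box $[0, C]$.

```python
import numpy as np

# Two training vectors, one per class, with a linear kernel (phi = identity).
x1, x2 = np.array([2.0, 2.0]), np.array([0.0, 0.0])
y1, y2 = +1, -1

# The equality constraint forces alpha1 = alpha2 = a, giving
# W(a) = 2a - 0.5 * a**2 * ||x1 - x2||**2 with unconstrained maximizer:
a_unconstrained = 2.0 / np.dot(x1 - x2, x1 - x2)   # 0.25 here

for C in (1.0, 0.1):
    a = min(a_unconstrained, C)        # box constraint 0 <= a <= C
    w = a * (y1 * x1 + y2 * x2)
    # A small C caps the multipliers, shrinking ||w|| and widening
    # (softening) the margin, as discussed in the text.
    print(C, a, np.linalg.norm(w))
```

With $C = 1$ the hard-margin solution is recovered; with $C = 0.1$ the
multipliers saturate at the bound and $\|\mathbf w\|$ shrinks.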

% ----------------------------------------------------------------
\section{Implicit mapping using kernel functions}
% ----------------------------------------------------------------
The formulation presented for SVMs with hard and soft margins
(Equations \ref{EQ-CAP3-DUAL} and \ref{EQ-DUAL-SLACK}) has a common
feature: a dot product performed in the feature space, represented
by $\gbf{\varphi}(\mathbf{x}_i)^T\gbf{\varphi}(\mathbf{x}_j)$.
%In general, feature space dimension is bigger than training
%vector dimensionality. Therefore, 
If there were a symmetric function
such that:
% ------------------------
\begin{equation}%\displaystyle
K(\mathbf x_i,\mathbf x_j) = K(\mathbf x_j,\mathbf x_i) =
\gbf{\varphi}(\mathbf x_i)^T
\gbf{\varphi}(\mathbf x_j)
\end{equation}
% ------------------------
the computational cost of the operator $\gbf{\varphi}(\cdot)$
could be avoided, since the operation would be carried out in the input
space (see Figure \ref{FIG-MAX-IMPMAP}).
Such functions are known as ``inner-product kernels'' or, simply,
``kernel functions''.
A characterization of which functions can be used as kernels
is given by Mercer's Theorem \cite{COURANT7001}.
%
%---
\begin{figure}[H]
\centering
\psfrag{in}{ \small \shortstack{Input data \\space}}
\psfrag{out}{\small \shortstack{Output data \\space}}
\psfrag{fs}{ \small \shortstack{Feature \\space}}
\psfrag{xg}{$\mathbf X$}
\psfrag{kg}{$\mathbf K$}
\psfrag{ys}{$\mathbf{\hat y}$}
\psfrag{fx}{$\gbf{\varphi}(\mathbf X)$}
\psfrag{t1}{$\gbf{\varphi}(\cdot)$}
\psfrag{t2}{$\mathbf w^T\gbf{\varphi}(\mathbf X)$}
\psfrag{t3}{$\gbf{\alpha}^T\mathbf{Ky}$}
\psfrag{xgd}{
\begin{tabular}{lcl}
$\mathbf X$ & = & $[\mathbf x_1, \; \mathbf x_2,\; \cdots, \; \mathbf x_p]^T$ \\
$\mathbf K$ & = & $\{k(\mathbf x_i,\mathbf x_j)\}_{i,j=1}^p$ \\
$\mathbf w$ & = & $[w_1, \; w_2,\; \cdots, \; w_p]^T$ \\
$\mathbf y$ & = & $[y_1, \; y_2,\; \cdots, \; y_p]^T$ \\
$\mathbf{\hat y}$ &  = & $[\hat y_1, \; \hat y_2,\; \cdots, \; \hat y_p]^T$ \\
$\gbf{\alpha}$ & = & $[\alpha_1, \; \alpha_2,\; \cdots, \; \alpha_p]^T$
\end{tabular}
}
\includegraphics[scale=.4]{figs/implicitmapping.eps}\\
\caption{Mapping function $\gbf{\varphi}(\cdot)$ and 
the implicit mapping performed by kernel functions.}
\label{FIG-MAX-IMPMAP}
\end{figure}
%-----

Consider a symmetric matrix $\mathbf K$ built from a finite input set
$\mathbf X = \{\mathbf x_1, \; \mathbf x_2,\; \cdots, \; \mathbf x_p\}$,
with each element defined as:
% ---------
\begin{equation}
\mathbf K_{ij} = K(\mathbf x_i,\mathbf x_j)
\label{EQ-KERNEL-MATRIX}
\end{equation}
% ---------
If $\mathbf K$ is positive semi-definite, $K(\mathbf x_i,\mathbf x_j)$
can be considered an inner-product kernel \cite{CRISTIANINI0001}.
Some known kernel functions are given in Table \ref{TAB-KERNELTYPES}.

% --------------------
\begin{table}[b]
\centering
\caption{Some types of functions that can be used as inner-product kernel}
\label{TAB-KERNELTYPES}
\begin{tabular}{lcc} \hline
{\bf Kernel }   & {\bf Expression} & {\bf Parameters} \\ \hline
RBF             & $\displaystyle e^{
                 - \left\|\mathbf{x}_i-\mathbf{x}_j
                \right\|^2/2 \sigma^2}$
                & $\sigma^2$ \\
Polynomial  & $(\mathbf{x}_i^T\mathbf{x}_j + a)^{b}$
            & $a$, $b$ \\
Perceptron  & $\tanh(\beta_0 \mathbf{x}_i^T\mathbf{x}_j + \beta_1)$
            & $\beta_0$, $\beta_1$ \\ \hline
\end{tabular}
\end{table}
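The positive semi-definiteness condition can be checked numerically for
a given sample by building the matrix of Equation
\ref{EQ-KERNEL-MATRIX} and inspecting its eigenvalues. A sketch using
the RBF kernel of Table \ref{TAB-KERNELTYPES} on a random input set
(sizes and parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 2.0, size=(30, 2))   # finite input set x_1, ..., x_p
sigma2 = 0.25 ** 2                         # RBF parameter sigma^2

# Gram matrix K_ij = exp(-||x_i - x_j||^2 / (2 * sigma^2)).
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
K = np.exp(-sq_dists / (2.0 * sigma2))

# A valid inner-product kernel yields a symmetric positive
# semi-definite Gram matrix: no eigenvalue is (significantly) negative.
eigvals = np.linalg.eigvalsh(K)
print(np.allclose(K, K.T), eigvals.min() >= -1e-10)   # True True
```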
% ----------------------------------------------------------------
In fact, the mapping function $\gbf{\varphi}(\cdot)$ is unknown in many
cases. Therefore, the weight vector cannot be evaluated explicitly,
since its expression is
% ----------------------------------------------------------------
\begin{equation}%\textstyle
\mathbf{w} =  \sum\limits_{i=1}^{p}\alpha_iy_i\gbf{\varphi}(\mathbf x_i).
\end{equation}
% ----------------------------------------------------------------
In spite of this, its norm can be determined:
% ----------------------------------------------------------------
$$
\|\mathbf{w}\|^2 =
\left( \sum\limits_{i=1}^{p}\alpha_iy_i\gbf{\varphi}(\mathbf x_i) \right)^T
\sum\limits_{j=1}^{p}\alpha_jy_j\gbf{\varphi}(\mathbf x_j)
=
\sum\limits_{i=1}^{p}
\sum\limits_{j=1}^{p}\alpha_i\alpha_jy_iy_j\gbf{\varphi}(\mathbf x_i)^T
\gbf{\varphi}(\mathbf x_j)
$$
% ----------------------------------------------------------------
\begin{equation}%\textstyle
\|\mathbf{w}\|^2 =
\sum\limits_{i=1}^{p}
\sum\limits_{j=1}^{p}\alpha_i\alpha_jy_iy_jK(\mathbf x_i,\mathbf x_j).
\end{equation}
% ----------------------------------------------------------------
The separation hyper-plane was defined in terms of the mapping
function $\varphi(\cdot)$, but it can also be represented using
kernel functions:
$$
\mathbf{w}^T\gbf{\varphi}(\mathbf{x})+ b = 0 \;\Rightarrow\;
%\left(\sum\limits_{i=1}^{p}\alpha_iy_i\gbf{\varphi}(\mathbf x_i)^T\right)^T
%\gbf{\varphi}(\mathbf{x})+ b = 0 \;\Rightarrow\;
\sum\limits_{i=1}^{p}\alpha_iy_i\gbf{\varphi}(\mathbf x_i)^T
\gbf{\varphi}(\mathbf{x})+ b = 0
$$
% ----------------------------------------------------------------
\begin{equation}%\textstyle
\sum\limits_{i=1}^{p}\alpha_iy_i
K(\mathbf x_i,\mathbf x)+ b = 0.
\end{equation}
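Both kernel-only expressions above, for $\|\mathbf w\|^2$ and for the
hyper-plane, can be exercised numerically. The sketch below uses a
linear kernel, where $\gbf{\varphi}$ is the identity and $\mathbf w$ is
therefore also available explicitly for comparison; the multipliers,
labels and bias are illustrative values satisfying
$\sum_i \alpha_i y_i = 0$.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(5, 2))                  # training vectors x_i
y = np.array([1.0, -1.0, 1.0, -1.0, 1.0])    # labels y_i
alpha = np.array([0.2, 0.3, 0.1, 0.0, 0.0])  # multipliers (sum alpha*y = 0)

K = X @ X.T                                  # linear kernel Gram matrix

# ||w||^2 evaluated through the kernel only...
norm_sq_kernel = (alpha * y) @ K @ (alpha * y)

# ...matches the explicit w = sum_i alpha_i y_i phi(x_i), available here
# only because the linear kernel has a known phi (the identity).
w = (alpha * y) @ X
print(np.isclose(norm_sq_kernel, w @ w))     # True

# Hyper-plane evaluated at a new point: sum_i alpha_i y_i K(x_i, x) + b.
x_new, b = np.array([0.5, -0.5]), 0.1
f = (alpha * y) @ (X @ x_new) + b
print(np.isclose(f, w @ x_new + b))          # True
```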
% ----------------------------------------------------------------
\section{An example}

As an example, an artificial data set, similar to the one described in
\cite{Kaufmann99}, was generated.
The training set consists of 1000
vectors distributed in a $4 \times 4$ checkerboard (see Figure
\ref{FIG-EXE}), and the test set of 10000 vectors. All vectors are drawn
from a uniform distribution over $[0,2]\times[0,2]$.
The kernel chosen was an RBF with $\sigma = 0.25$ and the
value of the upper limit was $C=20$.

Training was repeated 100 times and the most important parameters
were averaged.
Simulations were carried out on a Pentium II 400 MHz with 128 MB of
memory running Linux. A C++ version of the
Sequential Minimal Optimization (SMO) method \cite{Platt98b}
was implemented for the simulations presented in this chapter.
The implemented version is called SMOBR
and is available for download on the Internet \cite{SMOBR}.
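SMO maximizes Equation \ref{EQ-DUAL-SLACK} by analytically optimizing
one pair of multipliers at a time, which keeps the equality constraint
satisfied while respecting the box $0 \leq \alpha_i \leq C$. The sketch
below is a heavily simplified variant of Platt's method with a linear
kernel and a randomly chosen second multiplier; it is not the SMOBR
implementation, and all names, tolerances and data are illustrative.

```python
import numpy as np

def simplified_smo(X, y, C=1.0, tol=1e-3, max_passes=20, seed=0):
    """Heavily simplified SMO for the soft-margin dual (linear kernel)."""
    rng = np.random.default_rng(seed)
    p = len(y)
    K = X @ X.T
    alpha, b = np.zeros(p), 0.0

    def f(i):  # decision function evaluated at training vector i
        return (alpha * y) @ K[:, i] + b

    passes = 0
    while passes < max_passes:
        changed = 0
        for i in range(p):
            E_i = f(i) - y[i]
            # Pick i when its KKT conditions are violated.
            if (y[i] * E_i < -tol and alpha[i] < C) or \
               (y[i] * E_i > tol and alpha[i] > 0):
                j = rng.choice([k for k in range(p) if k != i])
                E_j = f(j) - y[j]
                a_i_old, a_j_old = alpha[i], alpha[j]
                # Bounds keeping 0 <= alpha <= C and sum(alpha * y) = 0.
                if y[i] != y[j]:
                    L, H = max(0, a_j_old - a_i_old), min(C, C + a_j_old - a_i_old)
                else:
                    L, H = max(0, a_i_old + a_j_old - C), min(C, a_i_old + a_j_old)
                eta = 2 * K[i, j] - K[i, i] - K[j, j]
                if L == H or eta >= 0:
                    continue
                # Analytic maximizer along the constraint line, clipped.
                alpha[j] = np.clip(a_j_old - y[j] * (E_i - E_j) / eta, L, H)
                if abs(alpha[j] - a_j_old) < 1e-5:
                    continue
                alpha[i] = a_i_old + y[i] * y[j] * (a_j_old - alpha[j])
                # Update the bias from the two modified multipliers.
                b1 = b - E_i - y[i] * (alpha[i] - a_i_old) * K[i, i] \
                     - y[j] * (alpha[j] - a_j_old) * K[i, j]
                b2 = b - E_j - y[i] * (alpha[i] - a_i_old) * K[i, j] \
                     - y[j] * (alpha[j] - a_j_old) * K[j, j]
                if 0 < alpha[i] < C:
                    b = b1
                elif 0 < alpha[j] < C:
                    b = b2
                else:
                    b = (b1 + b2) / 2
                changed += 1
        passes = passes + 1 if changed == 0 else 0
    return alpha, b

# Tiny linearly separable check (illustrative data).
X = np.array([[2.0, 2.0], [2.5, 1.5], [0.0, 0.0], [0.5, -0.5]])
y = np.array([1.0, 1.0, -1.0, -1.0])
alpha, b = simplified_smo(X, y, C=20.0)
w = (alpha * y) @ X
print(np.sign(X @ w + b))
```

Each pairwise step solves a one-dimensional quadratic along the line
$\alpha_i y_i + \alpha_j y_j = \mathrm{const}$ and clips the result to
the box, exactly the two constraints of the dual problem.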

Figure \ref{FIG-EXE-DEC} shows the decision boundaries obtained.
On average, the number of support vectors was 172.19 and
95.96\% of the test vectors were correctly classified.

% ----------------------------------------------------------------
% ----------------------------------------------------------------
%---
\begin{figure}[htb]
\centering
\includegraphics[scale=.5]{figs/exemplo_xadrez.ps}\\
\caption{Training set for the checkerboard example. Classes
	are represented by circle and cross symbols.}
\label{FIG-EXE}
\end{figure}
%-----
%---
\begin{figure}[bth]
\centering
\includegraphics[scale=.5]{figs/decisao_exemplo_xadrez.ps}\\
\caption{Non-linear decision boundaries in the input space
	for the checkerboard example. Support vectors are marked with
  circles and the decision boundary is represented by the continuous
  line. Dashed and dotted lines indicate the margins.}
\label{FIG-EXE-DEC}
\end{figure}
% ----------------------------------------------------------------
\section{Conclusion}
% ----------------------------------------------------------------
This chapter presented the basic formulation of SVMs with hard and soft
margins, defining the primal and dual problems.
A short discussion of the implicit mapping into the feature space
performed by inner-product kernels was also provided.
These concepts form the theoretical basis for the next chapter,
where methods for solving the dual problem are presented.

