\chapter{Training SVMs}
\label{CAP-SVM-TRAINING}
\section{Introduction}

The SVM dual problem has a special 
feature: it is a quadratic programming (QP) problem with a
guaranteed global solution. 
The symmetry of the kernel matrix, together with its
positive semidefiniteness
(conditions ensured by Mercer's theorem),
guarantees this global solution. 
Moreover, the problem needs few parameters: the upper bound for the Lagrange 
multipliers ($C$) and some kernel parameters.
Despite this, the kernel matrix may become large, because its dimension 
grows quadratically with the number of training vectors,
generating a considerable number of variables to optimize
(consuming time and memory). 

There are many approaches to this problem, 
and the choice depends mainly on the training set size.
In this work, training methods are divided into
four groups: classical, geometric, iterative and
working set methods, briefly explained in the next sections.
In particular, the Successive Over-Relaxation (SOR) and 
Sequential Minimal Optimization (SMO) methods are detailed due to
their importance for subsequent chapters.

%--------------------------------------------
\section{Optimality conditions and feasible regions}
%--------------------------------------------
Each training vector has a Lagrange multiplier associated with it,
which is calculated during the training procedure. Depending on
the Lagrange multiplier value, the vector is classified as a
support vector or not. The Karush-Kuhn-Tucker (KKT) conditions
\cite{Luenberger86} for SVMs result in:
%--------------------------------------------
%\begin{table}
\begin{center}
\begin{tabular}{cccp{6cm}}
(i) & $\alpha_i = 0$      & $y_i f(\mathbf x_i) > 1$   & 
Ordinary vector, placed on the correct side. \\
(ii) & $0 < \alpha_i < C$  & $y_i f(\mathbf x_i) = 1$      & 
Non-bound support vector, placed on the margin, on the correct side. \\
(iii) & $\alpha_i = C$      & $y_i f(\mathbf x_i) < 1$      & 
Bound support vector, placed inside the margins or on the wrong side.\\
\end{tabular}
\end{center}
%\end{table}
%--------------------------------------------
\begin{figure}[t]
\centering
\psfrag{CL1}{Class A ($+1$)}
\psfrag{CL2}{Class B ($-1$)}
\psfrag{SM}{\shortstack{Separation \\margins}}
\psfrag{A0A}{\shortstack{$\alpha_i = 0$ \\ $i\in$ A }}
\psfrag{A0B}{\shortstack{$\alpha_i = 0$ \\ $i\in$ B }}
\psfrag{ACB}{\shortstack{$\alpha_i = C$ \\ $i\in$ B }}
\psfrag{ACA}{\shortstack{$\alpha_i = C$ \\ $i\in$ A }}
\psfrag{0ACA}{\shortstack{$0 < \alpha_i < C$\\ $i\in$ A}}
\psfrag{0ACB}{\shortstack{$0 < \alpha_i < C$\\ $i\in$ B}}
\includegraphics[scale=.4]{figs/KKT2.eps}
\caption[Geometric interpretation of support vectors]
{
Correctly classified vectors outside the region between the margins are
associated with null Lagrange multipliers; these vectors are irrelevant for
further use of the SVM. Correctly classified vectors placed on the
margins are associated with Lagrange multipliers in the range
$0<\alpha_i<C$. Vectors on the wrong side are associated with Lagrange
multipliers at the upper bound ($C$). These last two groups of vectors are known as
support vectors and they define the separating hyperplane between
the classes.}
\label{FIG-KKT}
\end{figure}
%--------------------------------------------
%-----
\begin{figure}[tb]
\centering
\psfrag{a1}{$\alpha_1$}
\psfrag{a2}{$\alpha_2$}
\psfrag{a3}{$\alpha_3$}
\psfrag{0}{$0$}
\psfrag{C}{$C$}
\includegraphics[scale=.37]{figs/feasible.eps}\\
\caption[Feasible region for a 3D case]
{Feasible region for a 3D case. The inequality constraints
give a hypercube of side $C$ and the equality constraint a 
hyperplane. The feasible region is the intersection
of these two regions. }
\label{FIG-FEASIBLE-REGION}
\end{figure}
% ----------------------------------------------------------------
Figure \ref{FIG-KKT} shows the three situations above in
schematic form. 
Support vectors, which define the hyperplane
solution, lie either on the margins (on the correct 
bounding plane) or beyond them, within the margins or on the wrong side.
The first are known as non-bound support vectors and
their corresponding Lagrange multipliers lie 
in the range $0 < \alpha_i < C$.  The second are known
as bound support vectors and their corresponding Lagrange
multipliers are equal to $C$.
Vectors associated with null Lagrange multipliers can be
eliminated without affecting the optimal hyperplane solution.
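These categories can be read directly off the Lagrange multiplier values. The sketch below is an illustration added here (the tolerance is an arbitrary choice, not part of the formulation), labelling one multiplier according to conditions (i)--(iii):

```python
def vector_category(alpha, C, tol=1e-8):
    """Label one training vector by its Lagrange multiplier value."""
    if alpha <= tol:
        return "ordinary"        # (i)   alpha = 0: not a support vector
    if alpha >= C - tol:
        return "bound SV"        # (iii) alpha = C: inside margins or wrong side
    return "non-bound SV"        # (ii)  0 < alpha < C: exactly on the margin

print([vector_category(a, 10.0) for a in (0.0, 3.5, 10.0)])
# ['ordinary', 'non-bound SV', 'bound SV']
```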

The feasible region for SVMs is formed by the intersection of the 
equality and inequality constraints (Figure \ref{FIG-FEASIBLE-REGION}). 
The inequality constraints 
can be represented as a hypercube of side $C$ and the equality 
constraint as a hyperplane. During optimization, new values for
the Lagrange multipliers must be generated inside this feasible region,
so that both kinds of constraints remain satisfied.
% ----------------------------------------------------------------
\section{Training methods for SVMs}
% ----------------------------------------------------------------
% ----------------------------------------------------------------
\subsection{Classical methods}
% ----------------------------------------------------------------
Methods that build
the entire kernel matrix $\mathbf K$ (Equation 
\ref{EQ-KERNEL-MATRIX}) and employ standard
QP routines are classified here as ``classical methods''.
These methods are recommended for small training sets, 
of about a few hundred vectors. As the amount of data increases,
the memory necessary to store the kernel matrix becomes very
large and the data may not fit into memory. 
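As a rough illustration of this quadratic growth (the set size, dimension and Gaussian width below are arbitrary choices, not values from the text), the following sketch builds a full Gaussian kernel matrix and reports its storage:

```python
import numpy as np

# Sketch: build the full kernel matrix for p training vectors and check
# its memory footprint; doubling p quadruples the storage (O(p^2) growth).
p, d, sigma = 800, 10, 1.0
rng = np.random.default_rng(0)
X = rng.standard_normal((p, d))

# Gaussian (RBF) kernel matrix K[i, j] = exp(-||x_i - x_j||^2 / (2 sigma^2))
sq = np.sum(X**2, axis=1)
K = np.exp(-(sq[:, None] + sq[None, :] - 2 * X @ X.T) / (2 * sigma**2))

print(K.shape, K.nbytes)  # (800, 800) 5120000  (800*800 doubles = ~5 MB)
```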
%For instance, a training set with 800 vectorsa matrix

%The quadratic programming problem can be state as:
%----
%\begin{equation}
%\underset{\gbf{\alpha}}{\mathrm{min}} \;\;
%\frac 1  2 \gbf{\alpha}'\mathbf{H}\gbf{\alpha}+\mathbf{f}'\gbf{\alpha} 
%\;\;\mathrm{subject \; to:} \;\;
%\mathbf A_i \gbf{\alpha} \leq \mathbf b_i 
%\;\; \mathrm{and} \;\;
%\mathbf A_e \gbf{\alpha} = \mathbf b_e
%\end{equation}
%-------

Matlab\Pisymbol{psy}{228} provides a routine called \cmd{quadprog}
(formerly the \cmd{qp} routine) to solve QP problems with linear constraints. 
The SVM toolbox implemented by Steve Gunn \cite{SteveGun2000} uses
the Matlab QP solver and is free for academic purposes.
Other free QP solvers can be obtained at NetLib \cite{NetLib2000}.
%
Commercial products, like CPLEX\footnote{
\url{http://www.ilog.com/products/cplex/}}, 
MINOS\footnote{\url{http://www.sbsi-sol-optimize.com/Minos.htm}},
LOQO \cite{Vanderbei94}
and IBM QP\footnote{\url{http://www6.software.ibm.com/es/oslv2/features/qp.htm}},
are also available.
%Benchmarks for Optimization Software
%http://plato.la.asu.edu/topics/problems/nlores.html#QP-problem
%http://plato.la.asu.edu/bench.html

The QP problem present in SVMs can be reduced to a linear 
programming (LP) problem with linear constraints, 
as described in \cite{Suykens99a}. 
This approach is known as ``Least Squares SVMs'' and has
the disadvantage of generating a large number of support vectors.
Another efficient method to solve the LP problem is
Bounded-Variable Least Squares (BVLS) \cite{LAWSON9501}.

\subsection{Geometric methods}

Using ideas from classical nearest-point algorithms, 
Keerthi \emph{et al.} \cite{Keerthi99b} reformulated the SVM 
training problem as the problem of computing the nearest 
points between two convex polytopes.

Another approach with geometrical appeal, 
called Central Support Vector Machine (CSVM),
is due to Zhang \cite{Zhang99a}. 
Zhang uses cluster centers to rewrite the
SVM training problem, making CSVM less susceptible to
noise and outliers.

\subsection{Iterative methods}

The most elementary solution to the problem of training SVMs is to
optimize one Lagrange multiplier at a time using gradient information.
Two iterative methods are described here: gradient ascent and
Successive Over-Relaxation.

\subsubsection{Gradient ascent}

The gradient ascent method starts from an initial estimate of the
Lagrange multipliers and updates one Lagrange multiplier at a time.
%The step given to 
%Each Lagrange multiplier is 
%startsuses a steep
%searches for a steepest feasible direction.
%This approach is known as ``gradient ascent'' and it is simple
%to be implemented.
%Moreover, 
Due to the convexity of the problem, the 
objective function increases monotonically at each iteration, 
until its maximum is reached.

Consider the following quadratic function
%------------------------
\begin{equation}
W(\gbf{\alpha}) = \sum\limits_{i=1}^{p}\alpha_i -
{1 \over 2}\sum\limits_{i=1}^{p}\sum\limits_{j=1}^{p}
\alpha_i\alpha_jy_iy_j
K(\mathbf{x}_i,\mathbf{x}_j)
\label{EQ-TRAIN-SVM-QP-BASIC}
\end{equation}
%------------------------
and suppose that $\aN$ is the next Lagrange multiplier to be
optimized. 
%
Separating the terms in $\aN$, the last equation can be
rewritten as:
%----------
\begin{multline}
W(\aN)  =
\aN + \sum\limits_{\underset{i\neq N}{i=1}}^{p} 
{\alpha _i} - \\ \frac 1 2
\left(
\aN^2K_{NN} + 2\sum\limits_{\underset{i\neq N}{i=1}}^{p}
\aN\alpha_i y_Ny_iK_{iN} + 
 \sum\limits_{\underset{i\neq N}{i=1}}^{p}
\sum\limits_{\underset{i\neq N}{j=1}}^{p}
\alpha _i\alpha_jy_iy_jK_{ij}
\right),
\label{EQ-TRAIN-AN-EVIDENCE}
\end{multline}
%----------
where $K_{ij}=K(\mathbf{x}_i,\mathbf{x}_j)$.

Grouping all constant terms into $\mathcal{C}$, Equation
\ref{EQ-TRAIN-AN-EVIDENCE} becomes
%---------------------------
\begin{equation}
W(\aN)=
\aN  -\frac 1 2 \aN^2K_{NN} -
\aN y_N 
\sum\limits_{\underset{i\neq N}{i=1}}^{p}
\alpha_iy_iK_{iN} +
\mathcal{C}.
\end{equation}
%---------------------
Differentiating this equation  with respect to $\aN$ results in
%----------
\begin{equation}
\frac{\partial W(\aN)}{\partial \aN}=
1  -\aN K_{NN} - y_N 
\sum\limits_{\underset{i\neq N}{i=1}}^{p}
\alpha_iy_iK_{iN} = 0
\label{EQ-TRAIN-DERIV-AN}
\end{equation}
%----------
Solving for $\aN$ gives
\begin{equation}
\aN = \frac{1 - y_N \sum\limits_{\underset{i\neq N}{i=1}}^{p}
\alpha_iy_iK_{iN}}
{K_{NN}}.
\end{equation}
% ----------
To avoid confusion, the new value of $\aN$ is denoted $\aNn$. Thus
%----------
\begin{equation}
\label{EQ-TRAIN-ANEW}
\aNn = \frac{1 - y_N 
\sum\limits_{\underset{i\neq N}{i=1}}^{p}
\alpha_iy_iK_{iN}}{K_{NN}}.
\end{equation}
%----------
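As a quick sanity check (all values below are arbitrary toy choices), the closed form above does make the derivative in Equation \ref{EQ-TRAIN-DERIV-AN} vanish:

```python
import numpy as np

# Numeric check: the closed-form alpha_N makes the derivative
# 1 - alpha_N*K_NN - y_N*sum_{i!=N} alpha_i y_i K_iN equal to zero.
rng = np.random.default_rng(1)
p, N = 6, 2
X = rng.standard_normal((p, 3))
K = X @ X.T                              # linear kernel (positive semidefinite)
y = np.array([1, -1, 1, 1, -1, -1], dtype=float)
alpha = rng.uniform(0, 1, p)

others = [i for i in range(p) if i != N]
s = np.sum(alpha[others] * y[others] * K[others, N])
alpha_new = (1 - y[N] * s) / K[N, N]     # closed-form update

deriv = 1 - alpha_new * K[N, N] - y[N] * s
print(abs(deriv) < 1e-12)                # True: derivative vanishes
```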

After determining $\aNn$, it is necessary to apply the linear constraints 
present in the dual problem (Equation \ref{EQ-FORMA-DUAL-FOLGA}).
The inequality constraints give upper and lower bounds for the Lagrange
multipliers and are easily introduced into the method.
However, the equality
constraint poses a problem,
since only one Lagrange multiplier is updated at a time.
Using the primal problem described on page \pageref{EQ-PRIMAL-SLACK}, 
at least two Lagrange multipliers must be optimized at a time to
keep the solution within the 
feasible region \cite{CRISTIANINI0001}
(see Figure \ref{FIG-FEASIBLE-REGION}).
% An alternative is to use a
%fixed bias term. 

As the equality constraint is obtained from differentiating
Equation \ref{EQ-FORMA-LAGRAN-DUAL-FOLGA} with respect to the bias,
an alternative is to use a fixed bias term. 
% term eliminates equality constraint.
Cristianini \cite{CRISTIANINI0001} presents methods for training
SVMs with a fixed bias and discusses their implications.


Using $b=0$, the SVM output for pattern $\mathbf x_N$ is given by
%----------
\begin{equation}
f(\mathbf{x}_N) = \sum\limits_{i=1}^{p}\alpha_iy_iK_{iN}
\end{equation}
%----------
or
%----------
\begin{equation}
f(\mathbf{x}_N) = \sum\limits_{\underset{i\neq N}{i=1}}^{p}
\alpha_iy_iK_{iN} + \aN y_N K_{NN}.
\label{EQ-TRAIN-OUTPUT-GRAD}
\end{equation}
%----------
Substituting Equation \ref{EQ-TRAIN-OUTPUT-GRAD} into Equation
 \ref{EQ-TRAIN-ANEW} results in:
%----------
\begin{equation}
%\begin{array}{ccl}
\aNn =
\frac{1 - y_N
\left(
f(\mathbf{x}_N) - \aN y_N K_{NN}
\right)}
{K_{NN}}.  \nonumber
%\end{array}
\end{equation}
%----------
As $y_i$ assumes only the values $\pm 1$, the following expression is valid:
%----------
\begin{equation}
%\begin{array}{ccl}
\label{EQ-TRAIN-ANEW2}
\aNn = y_N\frac{y_N - f(\mathbf{x}_N) + \aN y_N K_{NN}} {K_{NN}}.  
%\end{array}
\end{equation}
%----------
Denoting the output error for pattern $\mathbf x_N$ as
%----------
\begin{equation}
E_N = f(\mathbf{x}_N)-y_N,
\label{EQ-TRAIN-ERROR-EXPRE}
\end{equation}
%----------
Equation \ref{EQ-TRAIN-ANEW2} becomes
%----------
\begin{equation}
%\begin{array}{ccl}
\aNn = y_N\frac{-E_N + \aN y_N K_{NN}}
{K_{NN}}  \nonumber
%\end{array}
\end{equation}
%----------
%----------
\begin{equation}
\aNn = \aN - \frac{y_NE_N}{K_{NN}}
\label{EQ-ANEW-FINAL}
\end{equation}
%----------
%----------
\begin{equation}
\aNn = \aN + \eta_N E_N,
\label{EQ-TRAIN-ANEW-ETA}
\end{equation}
where $\eta_N = - \frac{y_N}{K_{NN}}$.

Equation \ref{EQ-TRAIN-ANEW-ETA} represents the updating
rule for the Lagrange multipliers. With this rule, the 
Lagrange multipliers can be updated 
until the maximum of $W(\gbf{\alpha})$
is reached. The inequality constraints must be applied to $\aNn$
to keep it inside the feasible region:
$$
0 \leq \aNn \leq C.
$$
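A minimal sketch of this iterative scheme, assuming a fixed bias $b=0$, a linear kernel and a toy data set (all choices made here only for illustration):

```python
import numpy as np

# Minimal sketch of the gradient-ascent rule with b = 0 and box clipping.
# The toy data set below is an assumption, not from the text.
X = np.array([[1.0, 1], [2, 1], [-1, -1], [-2, -1]])
y = np.array([1.0, 1, -1, -1])
K = X @ X.T                      # linear kernel
C, p = 10.0, len(y)
alpha = np.zeros(p)

for _ in range(50):              # sweeps over the training set
    for N in range(p):
        f_N = np.sum(alpha * y * K[:, N])      # SVM output with b = 0
        E_N = f_N - y[N]                       # output error
        alpha[N] -= y[N] * E_N / K[N, N]       # updating rule
        alpha[N] = np.clip(alpha[N], 0.0, C)   # inequality constraints

f = K @ (alpha * y)
print(np.all(np.sign(f) == y))   # True: the toy set is separated correctly
```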
% +----------------------------------------------------+
% | TEST WHETHER KERNEL PCA CAN BE USED AS A WAY TO    |
% | AVOID THE BIAS TERM DURING TRAINING!               |
% |                                                    |
% | STILL MISSING: CITE HAYKIN'S WORK                  |
% +----------------------------------------------------+

Similar expressions and algorithms using gradient ascent
can be found in \cite{Adatron98,FriCriCam98}. The method
is known as the Kernel Adatron Support Vector Neural Network.

Finally, training time can be reduced by using a cache of errors. 
Write the new Lagrange multiplier as $\aNn = \aN -
\Delta \aN$. The error cache for pattern $j$ can then be updated as follows:
%----------
$$
E_j^{\mathrm{new}} = f(\mathbf{x}_j)-y_j 
$$
$$
E_j^{\mathrm{new}}= \sum\limits_{\underset{i\neq N}{i=1}}^{p}
    \alpha_iy_iK_{ij} + \aNn y_N K_{jN} -y_j
$$
$$
E_j^{\mathrm{new}}= \sum\limits_{\underset{i\neq N}{i=1}}^{p}
    \alpha_iy_iK_{i j} + (\aN-\Delta\aN)y_N K_{jN} -y_j
$$
$$
E_j^{\mathrm{new}}= \sum\limits_{\underset{i\neq N}{i=1}}^{p}
    \alpha_iy_iK_{ij} + \aN y_N K_{jN}-\Delta\aN y_N K_{jN} -y_j
$$
%$$
%E_j^{\mathrm{new}}= f(\mathbf{x}_j)-y_j-\Delta\aN y_N K_{jN} 
%$$
%
%\begin{equation}
%\begin{array}{ccl}
%E_j^{\mathrm{new}} &=& f(\mathbf{x}_j)-y_j \\ \nonumber
%&=& \sum\limits_{\underset{i\neq N}{i=1}}^{p}
%    \alpha_iy_iK_{ij} + \aNn y_N K_{jN} -y_j\\ \nonumber
%&=& \sum\limits_{\underset{i\neq N}{i=1}}^{p}
%    \alpha_iy_iK_{i j} + (\aN-\Delta\aN)y_N K_{jN} -y_j\\ \nonumber
%&=& \sum\limits_{\underset{i\neq N}{i=1}}^{p}
%    \alpha_iy_iK_{iN} + \aN y_N K_{jN}-\Delta\aN y_N K_{jN} -y_j\\ \nonumber
%&=& f(\mathbf{x}_j)-y_j-\Delta\aN y_N K_{jN} \\ \nonumber
%\end{array}
%\end{equation}
%----------
\begin{equation}
E_j^{\mathrm{new}} = E_j-\Delta\aN y_N K_{jN}.
\end{equation}
% --------------------------
This procedure must be repeated for all patterns.
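The incremental update can be checked against a full recomputation (random toy values and $b=0$ are assumptions of this sketch):

```python
import numpy as np

# Check of the error-cache update: after changing alpha_N, the cached
# errors E_j - delta * y_N * K[j, N] must match a full recomputation.
rng = np.random.default_rng(2)
p, N = 5, 1
X = rng.standard_normal((p, 2))
K = X @ X.T
y = np.where(rng.standard_normal(p) > 0, 1.0, -1.0)
alpha = rng.uniform(0, 1, p)

E = K @ (alpha * y) - y                  # current error cache (b = 0)

alpha_new_N = rng.uniform(0, 1)          # any new value for alpha_N
delta = alpha[N] - alpha_new_N           # alpha_N^new = alpha_N - delta
E_cached = E - delta * y[N] * K[:, N]    # incremental update

alpha[N] = alpha_new_N
E_full = K @ (alpha * y) - y             # full recomputation
print(np.allclose(E_cached, E_full))     # True
```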
% ----------------------------------------------------------------
\subsubsection{Successive Over Relaxation}
\label{CAP-SEC-TRAIN-SOR}
% ----------------------------------------------------------------
A training method that eliminates the equality constraint was
proposed by Mangasarian and Musicant \cite{Mangasarian99,ManMus99}
in 1999. This approach, called Successive Over-Relaxation (SOR),
changes the primal formulation of SVMs with soft margins and
generates a dual problem without equality constraints.

The new primal problem is stated as
% ----------------------------------------------------------------
\begin{quote}
{ \em   % ------ Inicio Italico ---
  Given a training set
  $\left\{ {\left({\mathbf{x}_i,y_i} \right)} \right\}_{i=1}^p$,
  find the optimal values for the weight vector
  $\mathbf{w}$ and bias $b$ such that the following constraints
  %----------------------------------
  %----------------------------------
  \begin{equation}
  \begin{array}{crc}
  y_i(\mathbf{w}^T\gbf{\varphi}(\mathbf{x}_i) + b)+ \xi_i \geq 1 &
  \mathrm{for} &  i=1,2,\ldots,p \\
   & \mathrm{and} & \xi_i \geq 0  \; \forall \; i
  \end{array}
  \label{EQ-TRAIN-PRIMAL-SLACK-SOR}
  \end{equation}
  are satisfied, whereas the cost functional
  \begin{equation}
  \Phi (\mathbf{w},\gbf{\xi},b)= \frac{1}{2}(\mathbf{w}^T\mathbf{w}+b^2)
  + C\sum\limits_{i=1}^p \xi_i
  \end{equation}
  is minimized.
  %---------------------------------
} % ------ Fim Italico ---
\end{quote}
% ----------------------------------------------------------------
Carrying out all differentiations, the following dual problem 
is obtained:
% ----------------------------------------------------------------
\begin{quote}
{ \em   % ------ Inicio Italico ---
Given a training set
$\left\{ {\left({\mathbf{x}_i,y_i} \right)} \right\}_{i=1}^p$,
find the optimal Lagrange  multipliers $\alpha_i$
for the problem: \\
Maximize
\begin{multline}
\label{EQ-TRAIN-DUAL-SLACK-SOR}
W(\gbf{\alpha})=
\sum\limits_{i=1}^p {\alpha _i}-\frac{1}{2}
\sum\limits_{i=1}^p 
\sum\limits_{j=1}^p 
\alpha _i\alpha_jy_iy_jK(\mathbf{x}_i,\mathbf{x}_j)-\\
-\frac{1}{2}
\sum\limits_{i=1}^p 
\sum\limits_{j=1}^p 
\alpha _i\alpha_jy_iy_j
\end{multline}
subject to:
\begin{itemize}
  \item $0 \leq \alpha_i \leq C$
\end{itemize}
} % ------ Fim Italico ---
\end{quote}
% ----------------------------------------------------------------
The bias term can be directly calculated as:
% -----------------------------------------
\begin{equation}
b = \sum\limits_{i=1}^{p}\alpha_i y_i.
\label{EQ-BIAS-SOR}
\end{equation}
% -----------------------------------------
Using the SOR equations, it is possible to obtain an updating rule for the
Lagrange multipliers based on gradient ascent. Consider $\aN$
as the next Lagrange multiplier to be optimized. Since Equation 
\ref{EQ-TRAIN-DUAL-SLACK-SOR} differs from Equation 
\ref{EQ-TRAIN-SVM-QP-BASIC} only in the third term, only this term
is differentiated below, and the result is added to Equation 
\ref{EQ-TRAIN-DERIV-AN}.

Rewriting the last part of Equation \ref{EQ-TRAIN-DUAL-SLACK-SOR}
to separate $\aN$:
%----------------
$$
W_3(\gbf{\alpha}) = 
-\frac{1}{2}
\sum\limits_{i=1}^p 
\sum\limits_{j=1}^p 
\alpha _i\alpha_jy_iy_j
$$
\begin{equation}
W_3(\gbf{\alpha})
= - \frac{1}{2}
\left(
\aN^2 + 2\aN y_N \sum\limits_{i\neq N}\alpha_i y_i
+ 
\sum\limits_{\underset{i\neq N}{i=1}}^{p}
\sum\limits_{\underset{j\neq N}{j=1}}^{p}
\alpha_i y_i \alpha_j y_j
\right)
\end{equation}
%----------------
Grouping constant terms:
%----------------
\begin{equation}
W_3(\gbf{\alpha}) = -\frac{1}{2}
\left(
\aN^2 + 2\aN y_N \sum\limits_{\underset{i\neq N}{i=1}}^{p}\alpha_i y_i
+ 
\mathcal{C}
\right).
\end{equation}
%----------------
Differentiating  with respect to $\aN$ provides
%----------------
\begin{equation}
\frac{\partial W_3(\aN)}{\partial \aN}=
-\aN - y_N
\sum\limits_{\underset{i\neq N}{i=1}}^{p} \alpha_i y_i
%= \sum\limits_{i=1}^{p} \alpha_i y_i = 0.
\end{equation}
%----------------
This result can be added to Equation 
\ref{EQ-TRAIN-DERIV-AN}:
%----------
\begin{equation}
\frac{\partial W(\aN)}{\partial \aN}=
1  -\aN K_{NN} -
y_N \sum\limits_{i\neq N}\alpha_iy_iK_{iN}
-\aN - y_N\sum\limits_{\underset{i\neq N}{i=1}}^{p}
\alpha_i y_i
\end{equation}
%----------
%----------
%\begin{equation}
Setting this derivative to zero and solving for $\aN$ gives
$$
\aN(K_{NN}+1)= 
1 - y_N \sum\limits_{\underset{i\neq N}{i=1}}^{p}
\alpha_iy_i(K_{iN}+1)
$$
%\end{equation}
%----------
%----------
%\begin{equation}
$$
\aN= 
\frac{1 - y_N \sum\limits_{\underset{i\neq N}{i=1}}^{p}
\alpha_iy_i(K_{iN}+1)} {K_{NN}+1}.
$$
Or, for the new Lagrange multiplier value:
\begin{equation}
\aNn = 
\frac{1 - y_N \sum\limits_{\underset{i\neq N}{i=1}}^{p}
\alpha_iy_i(K_{iN}+1)} {K_{NN}+1}.
\label{EQ-NEWALPHA-EQ1}
\end{equation}
The output for pattern $\mathbf{x}_N$ is given by:
$$
f(\mathbf{x}_N) = \sum\limits_{i = 1}^{p}\alpha_i y_i K_{iN} + b
$$
$$
f(\mathbf{x}_N) = \alpha_N y_N K_{NN} + 
\sum\limits_{\underset{i\neq N}{i=1}}^{p}\alpha_i y_i K_{iN} + 
\sum\limits_{i = 1}^{p}\alpha_i y_i
$$
\begin{equation}
\sum\limits_{\underset{i\neq N}{i=1}}^{p}\alpha_i y_i K_{iN} = 
f(\mathbf{x}_N) - \alpha_N y_N K_{NN} -
\sum\limits_{i = 1}^{p}\alpha_i y_i
\label{EQ-OUTPUT-MOD-SOR}
\end{equation}
Using Equation \ref{EQ-OUTPUT-MOD-SOR} in Equation \ref{EQ-NEWALPHA-EQ1} provides:
$$
\aNn = 
\frac{1- y_N\left(
f(\mathbf{x}_N)-\aN y_N K_{NN} -
\sum\limits_{i=1}^p\alpha_i y_i + 
\sum\limits_{\underset{i\neq N}{i=1}}^p\alpha_i y_i
\right)}
{K_{NN}+1}
$$
%\end{equation}
%----------
$$
\aNn = 
\frac{y_N\left(
y_N - f(\mathbf{x}_N)+\aN y_N K_{NN} + \aN y_N
\right)}
{K_{NN}+1}
$$
%----------
$$
\aNn = 
\frac{y_N\left(
-E_N+\aN y_N K_{NN} + \aN y_N
\right)}
{K_{NN}+1}
$$
%----------
$$
\aNn = 
\frac{y_N\left(
-E_N+\aN y_N( K_{NN}+1) 
\right)}
{K_{NN}+1}
$$
%----------
and the new Lagrange multiplier can be updated with the 
following rule:
%----------
\begin{equation}
\aNn = \aN - \frac{y_N}{K_{NN}+1}E_N.
\label{EQ-ALPHA-UPDATE-SOR1}
\end{equation}
%----------
A cache of errors can again be used:
%----------
$$
E_j^{\mathrm{new}} = f(\mathbf{x}_j)-y_j 
$$
$$
E_j^{\mathrm{new}}= 
\sum\limits_{i=1}^{p}\alpha_iy_iK_{ij} + b -y_j
    =
\sum\limits_{\underset{i\neq N}{i=1}}^{p}
    \alpha_iy_iK_{ij} + \aNn y_N K_{Nj} + 
     \sum\limits_{\underset{i\neq N}{i=1}}^{p}\alpha_iy_i + \aNn y_N 
      -y_j
$$
$$
E_j^{\mathrm{new}}= \sum\limits_{\underset{i\neq N}{i=1}}^{p}
    \alpha_iy_iK_{i j} + (\aN-\Delta\aN)y_N K_{Nj} + 
    \sum\limits_{\underset{i\neq N}{i=1}}^{p}\alpha_iy_i +
     (\aN-\Delta\aN)y_N - y_j
$$
%$$
%E_j^{\mathrm{new}}= \sum\limits_{i=1}^{p}
%    \alpha_iy_iK_{ij} -\Delta\aN y_N K_{jN} +b -\Delta\aN y_N -  y_j
%$$
$$
E_j^{\mathrm{new}}=  E_j - \Delta\aN y_N K_{Nj} -\Delta\aN y_N
$$
\begin{equation}
E_j^{\mathrm{new}} = E_j-\Delta\aN y_N (K_{Nj} + 1).
\label{EQ-ERROR-UPDATE-SOR1}
\end{equation}
% --------------------------
This procedure must be repeated for all patterns; 
both the errors and the bias ($\sum\limits_{i}\alpha_i y_i$)
can be kept in cache to speed up the training.
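Putting the SOR updating rule and the cached quantities together gives the following sketch (the toy data set and sweep count are arbitrary choices made for illustration):

```python
import numpy as np

# Sketch of SOR training with cached errors; the bias is recovered at the
# end as b = sum_i alpha_i y_i. Toy data below are assumptions.
X = np.array([[1.0, 1], [2, 1], [-1, -1], [-2, -1]])
y = np.array([1.0, 1, -1, -1])
K = X @ X.T
C, p = 10.0, len(y)
alpha = np.zeros(p)
E = -y.copy()                    # errors for alpha = 0 (f = b = 0)

for _ in range(100):
    for N in range(p):
        new = alpha[N] - y[N] * E[N] / (K[N, N] + 1)   # updating rule
        new = np.clip(new, 0.0, C)                     # box constraints
        delta = alpha[N] - new                         # alpha^new = alpha - delta
        E -= delta * y[N] * (K[:, N] + 1)              # cached error update
        alpha[N] = new

b = np.sum(alpha * y)            # bias term
f = K @ (alpha * y) + b
print(np.all(np.sign(f) == y), np.allclose(E, f - y))
```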
% +----------------------------------------------------+
\subsection{Working set methods}
% +----------------------------------------------------+
A possible optimization strategy for training SVMs is to divide
the original QP problem into smaller parts, denoted here 
QP sub-problems. Each QP sub-problem is optimized separately
while the remainder of the QP problem is kept constant. Furthermore, 
heuristics are employed to detect violated constraints
and to select the corresponding Lagrange multipliers for 
optimization. Methods that use this approach are known as
``active set'' or ``working set'' methods.

\subsubsection{QP sub-problem}

Before describing working set methods, it is interesting to show
how the QP problem can be subdivided.
Considering that the training patterns can be divided into two sets, called
B and N, one may write (the notation and development used here
are based on \cite{Joachims98b}):
$$
\gbf{\alpha} =
\left[
\begin{array}{c}
\gbf{\alpha}_B \\
\gbf{\alpha}_N
\end{array}
\right],
\qquad
\mathbf{y} =
\left[
\begin{array}{c}
\mathbf{y}_B \\
\mathbf{y}_N
\end{array}
\right]
\qquad \mathrm{and}
\qquad
\mathbf{H} =
\left[
\begin{array}{cc}
\mathbf{H}_{BB} & \mathbf{H}_{BN} \\
\mathbf{H}_{NB} & \mathbf{H}_{NN}
\end{array}
\right],
$$
%----------------------------------------------------------------
where each element of $\mathbf H$ matrix is given by
$H_{ij} = y_iy_jK(\mathbf x_i,\mathbf x_j)$.
As $\mathbf{H}$ is symmetric, $\mathbf{H}_{NB} = \mathbf{H}_{BN}^T$. 
Thus, the original QP problem in matrix form
$$
W(\gbf{\alpha}) = -\gbf{\alpha}^T\mathbf{1}+\frac{1}{2}
\gbf{\alpha}^T\mathbf{H}\gbf{\alpha}
$$
can be written as
%----------------------------------------------------------------
$$
W_\mathrm{sub}(\gbf{\alpha}) =
-\gbf{\alpha}_B^T\mathbf{1} -\gbf{\alpha}_N^T\mathbf{1} +
\frac 1 2
\left(
\left[
\begin{array}{c}
\gbf{\alpha}_B \\
\gbf{\alpha}_N
\end{array}
\right]^T
\left[
\begin{array}{c}
\mathbf{H}_{BB}\gbf{\alpha}_B + \mathbf{H}_{BN}\gbf{\alpha}_N \\
\mathbf{H}_{NB}\gbf{\alpha}_B + \mathbf{H}_{NN}\gbf{\alpha}_N
\end{array}
\right]
\right)
$$
%----------------------------------------------------------------
$$
W_\mathrm{sub}(\gbf{\alpha}) =
-\gbf{\alpha}_B^T\mathbf{1} -\gbf{\alpha}_N^T\mathbf{1} +
\frac 1 2
\left[
\begin{array}{c}
\gbf{\alpha}_B^T\mathbf{H}_{BB}\gbf{\alpha}_B +
\gbf{\alpha}_B^T\mathbf{H}_{BN}\gbf{\alpha}_N \\
\gbf{\alpha}_N^T\mathbf{H}_{NB}\gbf{\alpha}_B +
\gbf{\alpha}_N^T\mathbf{H}_{NN}\gbf{\alpha}_N
\end{array}
\right].
$$
%----------------------------------------------------------------
Since $(\gbf{\alpha}_N^T\mathbf{H}_{NB}\gbf{\alpha}_B)^T = 
\gbf{\alpha}_B^T\mathbf{H}_{BN}\gbf{\alpha}_N$, the last equation
can be written as:
%----------------------------------------------------------------
$$
W_\mathrm{sub}(\gbf{\alpha}) =
-\gbf{\alpha}_B^T\mathbf{1} -\gbf{\alpha}_N^T\mathbf{1} +
\gbf{\alpha}_B^T\mathbf{H}_{BN}\gbf{\alpha}_N +
\frac 1 2
\gbf{\alpha}_B^T\mathbf{H}_{BB}\gbf{\alpha}_B +
\frac 1 2
\gbf{\alpha}_N^T\mathbf{H}_{NN}\gbf{\alpha}_N
$$
%----------------------------------------------------------------
Therefore, the complete QP problem is given by:
%----------------------------------------------------------------
\begin{quote}
{ \em   % ------ Inicio Italico ---
Minimize
\begin{align}
W_{sub}(\gbf{\alpha}) & =
  - \gbf{\alpha}_B^T \left(\mathbf{1} - \mathbf{H}_{BN}\gbf{\alpha}_N \right) +
  \frac 1 2
  \gbf{\alpha}_N^T\mathbf{H}_{NN}\gbf{\alpha}_N + \nonumber \\
  & \phantom{=} +
  \frac 1 2
  \gbf{\alpha}_B^T\mathbf{H}_{BB}\gbf{\alpha}_B -
  \gbf{\alpha}_N^T\mathbf{1}
  \label{EQ-TRAIN-QP-MIN-FOLGA-SUB}
\end{align}
subject to:
\begin{enumerate}
  \item $\gbf{\alpha}_B^T\mathbf{y}_B + \gbf{\alpha}_N^T\mathbf{y}_N=0$
  \item $\mathbf{0} \leq \gbf{\alpha} \leq C\mathbf{1}$.
\end{enumerate}
} % ------ Fim Italico ---
\end{quote}
If B is the active set, the N set is considered constant and 
the complexity of the QP problem is reduced. 
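The algebra above can be checked numerically: for any B/N split, $W_\mathrm{sub}$ must coincide with the full matrix-form objective (the sizes and values in the sketch below are arbitrary toy choices):

```python
import numpy as np

# Numeric check that the partitioned objective W_sub equals the full
# matrix-form objective for an arbitrary alpha and an arbitrary B/N split.
rng = np.random.default_rng(3)
p, nb = 8, 3                           # nb = size of the working set B
X = rng.standard_normal((p, 4))
y = np.where(rng.standard_normal(p) > 0, 1.0, -1.0)
K = X @ X.T
H = (y[:, None] * y[None, :]) * K      # H_ij = y_i y_j K(x_i, x_j)
alpha = rng.uniform(0, 1, p)

B, N = np.arange(nb), np.arange(nb, p)
aB, aN = alpha[B], alpha[N]
HBB, HBN, HNN = H[np.ix_(B, B)], H[np.ix_(B, N)], H[np.ix_(N, N)]

W_full = -alpha.sum() + 0.5 * alpha @ H @ alpha
W_sub = (-aB @ (np.ones(nb) - HBN @ aN) + 0.5 * aN @ HNN @ aN
         + 0.5 * aB @ HBB @ aB - aN.sum())
print(np.isclose(W_full, W_sub))       # True
```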

Working set methods differ mainly in the size of the B set 
and in which patterns belong to it. 
The most common working set methods are described in the next 
sections.

\subsubsection{Chunking}

The chunking method was suggested by Vapnik \cite{Vapnik92b} in 1992.
It uses a standard QP solver associated with a selection heuristic.
The method is briefly described as follows:
%---------------------------------
\begin{enumerate}
\item A QP sub-problem (``chunk'') is selected for optimization, using
	some standard QP solver.
\item After optimization, support vectors present in the chunk 
	are kept while the other patterns are discarded. New 
	training patterns are selected from the training set and added
	to the chunk (the chunk size may vary during training).
	The selection heuristic takes into account the vectors that most 
	violate the KKT conditions.
\item	The process is repeated until some convergence criterion is
	satisfied.
\end{enumerate}
%--------------------------------
Although no formal proof of convergence exists, the chunking method 
produces good results, dealing with tens of thousands of vectors.
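The loop below sketches the chunking procedure on a toy set; the inner solver is a stand-in (the coordinate-ascent rule with $b=0$ from the previous section, rather than a standard QP solver), and the data, chunk size and tolerance are arbitrary assumptions:

```python
import numpy as np

# Chunking sketch: repeatedly select the worst KKT violators as a chunk,
# "solve" the QP sub-problem over that chunk, and stop when no pattern
# violates the conditions. Inner solver is a simple stand-in.
X = np.array([[1.0, 1], [2, 1], [-1, -1], [-2, -1]])
y = np.array([1.0, 1, -1, -1])
K = X @ X.T
C, p, chunk_size = 10.0, len(y), 2
alpha = np.zeros(p)

for _ in range(20):                          # outer chunking rounds
    f = K @ (alpha * y)
    violation = np.maximum(0.0, 1.0 - y * f) # margin violation per pattern
    if np.max(violation) < 1e-6:
        break                                # convergence criterion
    chunk = np.argsort(-violation)[:chunk_size]   # worst violators
    for _ in range(10):                      # optimize the QP sub-problem
        for N in chunk:
            E_N = np.sum(alpha * y * K[:, N]) - y[N]
            alpha[N] = np.clip(alpha[N] - y[N] * E_N / K[N, N], 0.0, C)

f = K @ (alpha * y)
print(np.all(np.sign(f) == y))               # True on this toy set
```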

\subsubsection{\svmlight{}}

An efficient training procedure based on the QP sub-problem is
called \svmlight{} \cite{Joachims98b}.
Its main difference from other working set methods 
is how the QP sub-problem is determined. 
Using a first-order approximation of the cost function,
\svmlight{} searches for the steepest feasible direction. The Lagrange 
multipliers along this direction compose the working set, which
is solved using the LOQO interior-point solver \cite{Vanderbei94}.

\subsubsection{Sequential Minimal Optimization}

The idea of optimizing a fixed-size QP sub-problem at each iteration
is due to Osuna \emph{et al.} \cite{Osuna97a}, in 1997. Platt 
took this idea to the limit by proposing a QP sub-problem 
with only two Lagrange multipliers.
This method is known as ``Sequential Minimal Optimization''
(SMO) \cite{Platt98a,Platt98b}; at least two Lagrange 
multipliers are necessary to preserve the linear equality constraint.
With SMO, an analytical solution is obtained for the two Lagrange
multipliers, eliminating the QP solver and any matrix 
storage. Moreover, training sets with many tens of thousands
of vectors can be used.
SMO is strongly dependent on kernel evaluations, and it may become
slower when more complex kernels are necessary.  
Caching errors and kernel evaluations and
an appropriate treatment of sparse data are common 
techniques to speed up SMO.

Due to the importance of SMO for this work, it is described here in detail.
The development presented is in accordance with 
\cite{Platt98a} and \cite{CRISTIANINI0001}.
%-------------------
\parag{Analytical solution for two Lagrange multipliers}
%------------------------
Consider the SVM QP problem (Equation \ref{EQ-TRAIN-SVM-QP-BASIC}) 
and suppose that $\aN$ and $\aM$ are the next two Lagrange multipliers
to be optimized. Equation \ref{EQ-TRAIN-SVM-QP-BASIC} can be 
rewritten as a function of $\aN$ and $\aM$, as follows:
%---------------------------------
\begin{multline}
W(\aN,\aM) =
\aN+\aM + \sum\limits_{i\neq N,M}\alpha_i - \frac 1 2
\left(
\aN^2K_{NN} + \aM^2K_{MM} +
\phantom{\sum\limits_{i\neq N,M}} \right. \\
+ 2\aN\aM y_Ny_MK_{NM}
+ 2\sum\limits_{\underset{i\neq N,M}{i=1}}^{p}
\alpha_i\aN y_iy_NK_{iN} + \\  \left.
+ 2\sum\limits_{\underset{i\neq N,M}{i=1}}^{p}
\alpha_i\aM y_iy_MK_{iM}
+\sum\limits_{\underset{i\neq N,M}{i=1}}^{p}
\sum\limits_{\underset{i\neq N,M}{j=1}}^{p}
\alpha_i\alpha_j y_iy_jK_{ij}
\right),
\end{multline}
%---------------------------------
where $K_{ij} = K(\mathbf x_i,\mathbf x_j)$.
Replacing constant terms by $\mathcal{C}_1$ and
$\mathcal{C}_2$ provides:
%---------------------------------
\begin{multline}
W(\aN,\aM) =
\aN + \aM + \mathcal{C}_1
- \frac{1}{2}\aN^2K_{NN}
- \frac{1}{2}\aM^2K_{MM}
- \aN\aM y_Ny_MK_{NM} - \\
- \aN y_N\sum\limits_{i\neq N,M}\alpha_i y_iK_{iN}
- \aM y_M\sum\limits_{i\neq N,M}\alpha_i y_iK_{iM} + \mathcal{C}_2,
\end{multline}
%---------------------------------
\begin{multline}
W(\aN,\aM) =
\aN + \aM
- \frac{1}{2}\aN^2K_{NN}
- \frac{1}{2}\aM^2K_{MM}
- \aN\aM y_Ny_MK_{NM} - \\
- \aN y_N S_N
- \aM y_M S_M + \mathcal{C}_3,
\label{EQ-Q-N-M}
\end{multline}
%---------------------------------
where $\mathcal{C}_3=\mathcal{C}_1+\mathcal{C}_2$ and
%-------------------------------------------
\begin{equation}
S_j = \sum\limits_{i \neq N,M}\alpha_i y_i K_{ij} = f(\mathbf x_j)-
\sum\limits_{i = N,M}\alpha_i y_i K_{ij}.
\label{EQ-SUM-N}
\end{equation}
%--------------------------------------
Expression \ref{EQ-Q-N-M} must take into account equality and inequality
constraints. The equality constraint can be arranged as
$$
\gbf{\alpha}^T\mathbf y = 0 \  \rightarrow \
\aN y_N + \aM y_M + \sum\limits_{i\neq N,M}\alpha_i y_i = 0
$$
or
$$
\aN y_N + \aM y_M = \mathcal{C}.
$$
Multiplying this equation by $y_N$ (note that $y_i^2$ is always $1$, 
since the targets are $\pm 1$) gives:
\begin{equation}
\aN + \aM \beta = \gamma
\label{EQ-RETA-MUL-LAGR}
\end{equation}
where $\beta = y_M y_N$ and $\gamma = \mathcal{C}y_N$.

The equality constraint defines a straight line, and the new values of
$\aN$ and $\aM$ must stay on it.
Substituting $\aN = \gamma - \aM\beta$ into Equation 
\ref{EQ-Q-N-M} gives:
%---------------------------------
\begin{multline}
W(\aM) = \gamma -\aM\beta+\aM-\frac{1}{2}(\gamma-\aM\beta)^2K_{NN}
-\frac{1}{2}\aM^2K_{MM}- \\
-(\gamma-\aM\beta)\aM\beta K_{NM}-
(\gamma-\aM\beta)y_NS_N - \aM y_MS_M + \mathcal{C}_3,
\end{multline}
%---------------------------------
\begin{multline}
W(\aM) = \aM(1-\beta+\gamma\beta K_{NN} - \gamma\beta K_{NM}
+\beta y_N S_N - y_MS_M) +\\
+\aM^2(K_{NM}-\frac{1}{2}K_{NN}-\frac{1}{2}K_{MM})
-\gamma( \frac{1}{2}\gamma K_{NN}+ y_NS_N- 1)+\mathcal{C}_3.
\label{EQ-FULL-M-N}
\end{multline}
%---------------------------------
Differentiating with respect to $\aM$,
%---------------------------------
\[
\frac{\partial W(\aM)}{\partial \aM} = 0
\]
%---------------------------------
\begin{multline}
\aMn (2K_{NM}-K_{NN}-K_{MM})  + 1
-\beta+\gamma\beta(K_{NN}-K_{NM})  +\\
+y_M(S_N - S_M) = 0,
\end{multline}
%---------------------------------
where the new value of $\aM$ is denoted as $\aMn$.

Substituting $S_N$ and $S_M$ and multiplying both sides
by $y_M$ results in:
%---------------------------------
\begin{multline}
\aMn y_M(2K_{NM}-K_{NN}-K_{MM}) =
y_N -y_M-\gamma y_N(K_{NN}-K_{NM})- \\ -S_N + S_M,
\end{multline}
%---------------------------------
\begin{multline}
\aMn y_M(2K_{NM}-K_{NN}-K_{MM}) =
y_N -y_M- \gamma y_N(K_{NN}-K_{NM})- \\
-f(\mathbf x_N)+\sum\limits_{i = N,M}\alpha_i y_i K_{iN}
+f(\mathbf x_M)-\sum\limits_{i = N,M}\alpha_i y_i K_{iM}.
\end{multline}
%---------------------------------
Expanding the summation:
%---------------------------------
\begin{multline}
\aMn y_M(2K_{NM}-K_{NN}-K_{MM}) =
y_N -y_M -f(\mathbf x_N) +f(\mathbf x_M) +  \\
+\aN y_N K_{NN}
+\aM y_M K_{MN}
-\aN y_N K_{NM} - \\
-\aM y_M K_{MM}
+\gamma y_NK_{NM}
-\gamma y_NK_{NN}.
\end{multline}
%---------------------------------
Since $\aN = \gamma - \aM\beta$, the last equation becomes:
%---------------------------------
\begin{multline}
\aMn y_M(2K_{NM}-K_{NN}-K_{MM}) =
y_N -y_M -f(\mathbf x_N) +f(\mathbf x_M) +  \\
+(\gamma-\aM\beta)y_NK_{NN}
-(\gamma-\aM\beta)y_NK_{NM}
-\gamma y_N K_{NN} + \\
+\gamma y_NK_{NM}
+\aM y_M K_{MN}
-\aM y_MK_{MM},
\end{multline}
%---------------------------------
%---------------------------------
\begin{multline}
\aMn y_M(2K_{NM}-K_{NN}-K_{MM}) =
y_N -y_M -f(\mathbf x_N) +f(\mathbf x_M) +  \\
+\aM y_M (2K_{NM}-K_{NN}-K_{MM}).
\label{EQ-BEFORE-K}
\end{multline}
%---------------------------------
Denoting the second derivative of Equation
\ref{EQ-FULL-M-N} with respect to $\aM$ as
$$
\kappa = 2K_{NM}-K_{NN}-K_{MM},
$$
Equation \ref{EQ-BEFORE-K} can be written as:
%---------------------------------
\begin{equation}
\begin{array}{lcl}
\aMn y_M \kappa & = &
y_N -y_M +f(\mathbf x_M) -f(\mathbf x_N) +\aM y_M \kappa \\
& = & \aM y_M \kappa  + [ f(\mathbf x_M) - y_M ] - [f(\mathbf x_N) -y_N].
\end{array}
\label{EQ-BEFORE-ERRO}
\end{equation}
%-----------------------------------------------
Since the output error for pattern $i$ can be expressed as
%---------------------------------
\begin{equation}
E_i = \left( \sum\limits_{j=1}^{p}y_j\alpha_jK_{ij}
+bias \right) - y_i,
\end{equation}
%---------------------------------
Expression \ref{EQ-BEFORE-ERRO} becomes
%---------------------------------
$$
\aMn y_M \kappa  = \aM y_M \kappa  - (E_N -  E_M),
$$
\begin{equation}
\aMn =  \aM - y_M\frac{(E_N -  E_M)}{\kappa}.
\label{EQ-AFTER-ERRO}
\end{equation}
%---------------------------------
Using Equation \ref{EQ-RETA-MUL-LAGR}, $\aNn$ can be calculated as
$$
\aNn + \aMn\beta = \gamma = \aN + \aM\beta,
$$
$$
\aNn = \aM\beta + \aN - \aMn\beta,
$$
\begin{equation}
\aNn = \aN + \beta(\aM - \aMn).
\end{equation}
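The joint update above translates directly into code. The sketch below is a minimal Python rendering of the unconstrained step of Equation \ref{EQ-AFTER-ERRO}; the variable names (\texttt{a\_N}, \texttt{E\_N}, \texttt{K\_NN}, etc.) are illustrative assumptions, not taken from a reference implementation:

```python
def smo_step(a_N, a_M, y_N, y_M, E_N, E_M, K_NN, K_MM, K_NM):
    """Unconstrained joint update of two Lagrange multipliers.

    kappa = 2*K_NM - K_NN - K_MM is the second derivative of W(a_M);
    for a Mercer kernel it is non-positive, so kappa < 0 is the
    usual (non-degenerate) case.
    """
    kappa = 2.0 * K_NM - K_NN - K_MM
    if kappa == 0.0:
        # No curvature along the constraint line: skip this pair.
        return a_N, a_M
    beta = y_N * y_M
    # a_M^new = a_M - y_M (E_N - E_M) / kappa
    a_M_new = a_M - y_M * (E_N - E_M) / kappa
    # Equality constraint: a_N + beta * a_M stays constant.
    a_N_new = a_N + beta * (a_M - a_M_new)
    return a_N_new, a_M_new
```

By construction, $\aNn + \beta\aMn = \aN + \beta\aM$, so the new pair stays on the equality-constraint line; clipping to the feasible box is still required, as discussed next.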
%-----------------------------------------------------------------
The solution must lie on the segment of the equality constraint line
inside the square defined by the inequality constraints (the feasible
region), resulting in more restrictive upper
and lower limits for the Lagrange multipliers.
%-----------------------------------------------------------------
\parag{Calculating new upper and lower bounds} 
%-----------------------------------------------------------------
%-----------------------------------------------------------------
\begin{figure}
\centering
\psfrag{EQ1}{$\aMo = \aNo - \gamma_1$}
\psfrag{EQ2}{$\aMo = \aNo - \gamma_2$}
\psfrag{V0}{$\aNo = 0$}
\psfrag{VC}{$\aNo = C$}
\psfrag{V0NEW}{$\aMn = \max(0,\aMo-\aNo)$}
\psfrag{VCNEW}{$\aMn = \min(C,C-\aNo+\aMo)$}
\includegraphics[scale=.4]{figs/yn_neq_ym.eps}
\caption[Two possible placements for the equality constraint line (case a)]
{Case a: 
two possible placements for the equality constraint line,
represented by offsets $\gamma_1$ and $\gamma_2$.}
\label{FIG-NOVO-VELHO-DIR}
\end{figure}
%------------------
\\
{\bf \noindent Case a) $y_M \neq y_N$ ($\beta=-1$):} In
this situation, the Lagrange multipliers under optimization
lie on a straight line with a slope of $+45^\circ$.
Since the new values for $\aN$ and $\aM$ must stay inside the
box formed by the inequality constraints (see Figure \ref{FIG-NOVO-VELHO-DIR}),
the upper or lower limits may change.
Consider the old and new straight lines:
%----
\begin{equation}
\aNo - \aMo = \gamma,
\label{EQ-TRAIN-OLD-LINE}
\end{equation}
%------
\begin{equation}
\aNn - \aMn = \gamma.
\label{EQ-TRAIN-NEW-LINE}
\end{equation}
%------
Equating \ref{EQ-TRAIN-OLD-LINE} with \ref{EQ-TRAIN-NEW-LINE}
when $\aNn$ reaches its upper limit ($C$) provides:
$$
\aMn = \aNn - \gamma = C - \aNo + \aMo.
$$
Since the line has positive slope, $\aM$ increases as $\aN$
increases, but the upper limit ($C$) cannot be exceeded.
Hence, $\aMn$ is bounded above by:
$$
\aMn = \min(C,C - \aNo + \aMo).
$$
Similarly, when $\aNn$ reaches its lower limit ($0$), and
recalling that $\aMn$ itself cannot be negative:
$$
\aMn = \aNn - \gamma = 0 - \aNo + \aMo,
$$
$$
\aMn = \max(0, \aMo - \aNo).
$$
\\
{\bf \noindent Case b) $y_M = y_N$ ($\beta=1$):}
Now $\aN$ and $\aM$ lie on a line with negative slope
($-45^\circ$): when one increases, the other
decreases (see Figure \ref{FIG-NOVO-VELHO-ESQ}).
When $\aN$ reaches its maximum ($C$), $\aMn$ must be
at least zero:
$$
\aMn = \gamma -\aNn = \aNo + \aMo -C,
$$
$$
\aMn = \max(0,\aNo + \aMo -C).
$$
Similarly, when $\aN$ reaches zero,
the upper limit for $\aMn$ becomes:
$$
\aMn = \gamma -\aNn = \aNo + \aMo -0,
$$
$$
\aMn = \min(C,\aNo + \aMo ).
$$

\begin{figure}
\centering
\psfrag{EQ1}{$\aMo = -\aNo + \gamma_1$}
\psfrag{EQ2}{$\aMo = -\aNo + \gamma_2$}
\psfrag{V0}{$\aNo = 0$}
\psfrag{VC}{$\aNo = C$}
\psfrag{V0NEW}{$\aMn = \max(0,\aMo+\aNo-C)$}
\psfrag{VCNEW}{$\aMn = \min(C,\aNo+\aMo)$}
\includegraphics[scale=.4]{figs/yn_eq_ym.eps}
\caption[Two possible placements for the equality constraint line (case b)]
{Case b: 
two possible placements for the equality constraint line,
represented by offsets $\gamma_1$ and $\gamma_2$.}
\label{FIG-NOVO-VELHO-ESQ}
\end{figure}

Table \ref{TAB-ALPHA-NOVO} summarizes how to calculate the
new limits for $\aM$, given a Lagrange multiplier $\aN$
between zero and $C$.
%----------------------------------------
\begin{table}
\centering
\caption{Upper and lower limits summary}
\label{TAB-ALPHA-NOVO}
\begin{tabular}{|c|c|c|} \hline \hline
& \textbf{Lower limit} & \textbf{Upper limit} \\ \hline
$y_M \neq y_N$
&
$
\aMn = \max(0, \aMo - \aNo)
$
&
$
\aMn = \min(C,C - \aNo + \aMo)
$
\\
$y_M = y_N$
&
$
\aMn = \max(0,\aNo + \aMo -C)
$
&
$
\aMn = \min(C,\aNo + \aMo )
$
\\ \hline \hline
\end{tabular}
\end{table}
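The limits above correspond to a short clipping routine. The Python sketch below uses assumed names (\texttt{C}, \texttt{a\_N\_old}, \texttt{a\_M\_old}) and simply restricts the unconstrained update of $\aM$ to the feasible segment:

```python
def clip_a_M(a_M_new, a_N_old, a_M_old, y_N, y_M, C):
    """Clip the unconstrained update of a_M to the segment of the
    equality-constraint line inside the box [0, C] x [0, C]."""
    if y_N != y_M:  # case a: line with slope +1
        low = max(0.0, a_M_old - a_N_old)
        high = min(C, C - a_N_old + a_M_old)
    else:           # case b: line with slope -1
        low = max(0.0, a_N_old + a_M_old - C)
        high = min(C, a_N_old + a_M_old)
    return min(max(a_M_new, low), high)
```

The clipped value of $\aM$ is then used to compute $\aNn$, which automatically lands inside $[0, C]$ as well.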
%-----------------------------------------------------------
\parag{Selection heuristics}
% --------------------------------------------------------
Platt suggests two selection heuristics for SMO, one for each
Lagrange multiplier to be optimized \cite{Platt98a}.
The first heuristic iterates over the patterns most likely to
violate the KKT conditions, selecting them for optimization.
The second heuristic tries to maximize the step size of the
joint optimization (Equation \ref{EQ-AFTER-ERRO}), using the
error difference $|E_N - E_M|$ as a step-size estimate.
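The second-choice heuristic can be sketched as a scan over a cached error vector. In the illustrative Python fragment below, \texttt{E} is an assumed error cache and \texttt{candidates} holds the indices of the non-bound multipliers; this is a sketch of the idea, not Platt's exact procedure:

```python
def select_second(N, E, candidates):
    """Second-choice heuristic: pick M maximizing |E_N - E_M|,
    a cheap proxy for the largest step in the joint update."""
    best_M, best_gap = None, -1.0
    for M in candidates:
        if M == N:
            continue
        gap = abs(E[N] - E[M])
        if gap > best_gap:
            best_M, best_gap = M, gap
    return best_M
```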
%-----------------------------------------------------------
\parag{Improvements to SMO}
% --------------------------------------------------------
Keerthi \emph{et al.} \cite{KeeSheBhaMur99b} suggest two
bias parameters to speed up SMO for classification problems.
A version of SMO for regression problems was proposed by
Smola and Sch{\"o}lkopf, with modifications suggested in
\cite{KeeSheBhaMur99c}.

\section{Conclusion}

In this chapter, some alternatives for solving the
quadratic programming problem arising from SVM training were
presented. Although many QP solution methods exist, the size
of the training set can be an obstacle, and the choice of
method depends mainly on it.
%%
In general, methods based on the SMO and \svmlight{} principles
solve the QP problem efficiently. Their major disadvantage
is algorithmic complexity, which makes implementation
difficult.

In the next chapters, two new strategies for training
SVMs are presented. The first, SVM-KM, is distinguished by the
use of $k$-means clustering as a pre-selection method before
SVM training.
SVM-KM gives insight into sample selection strategies for SVMs,
highlighting the need to consider the structure of SVMs when
designing sampling methods.

The second, SVM-EDR, solves the
QP problem arising from SVM training without any
assumption about support vectors or KKT conditions, using
only an iterative process based on gradient ascent
and error information.

