%------------------------------------------------------------------
\chapter{The SVM-EDR training algorithm}
%------------------------------------------------------------------
\label{CAP-SVM-EDR}
%------------------------------------------------------------------
\section{Introduction}
%------------------------------------------------------------------

SVM training algorithms like SOR \cite{Mangasarian99,ManMus99}, 
SMO \cite{Platt98a,Platt98b} and \svmlight{} \cite{Joachims98b}
share a common characteristic: a pattern selection 
strategy determines which vectors are optimized. 
For instance, SMO selects its first pattern to optimize by sweeping
back and forth over all training patterns and support vectors.
The second pattern is selected 
based on the maximum error in relation to the first pattern. 
SOR picks up vectors using the same approach as
SMO for selecting the first vector.
% but changing the order. 
Additionally, in \svmlight{} an optimization problem is formulated
to find the best direction to optimize
and which training patterns belong to it. 
Since the number of training patterns can be large, these selection
strategies are essential to the design of efficient training algorithms.

Strategies that increase the presentation frequency of
patterns with larger output errors have been known since 1992, from the article 
by Munro \cite{MUNRO9201}.
In his work, Munro proposes a deterministic selection strategy
where patterns related to large output errors are selected and repeatedly
presented to the network until their errors become smaller than a
pre-determined value. However, this procedure has a drawback: it 
can reach a local minimum, since learned patterns are gradually 
forgotten if they are no longer presented \cite{CACHIN9401}.

New strategies that take into account both learned patterns
and patterns with higher errors were presented in 1994 
by Cachin \cite{CACHIN9401}. Cachin describes several pattern selection strategies
(called ``pedagogical pattern selection strategies'') for training
neural networks. 
The relationship between the strategies proposed by Cachin 
and the well-known Boosting algorithms is clear, since both
rely on selecting incorrectly classified patterns
more frequently than correct ones. While this selection follows 
a probability distribution in Boosting algorithms, Cachin employs
a deterministic schedule to choose patterns.

In this chapter a new training algorithm for SVMs,
called SVM-EDR, is presented \cite{barros:2001}. 
Using an iterative process based on gradient ascent
and an error-dependent re-sampling strategy, 
SVM-EDR solves the dual problem without any assumption about 
support vectors or the Karush-Kuhn-Tucker (KKT) conditions.
As will be shown, SVM-EDR is a Boosting algorithm with a
convergent probability distribution that governs the pattern selection.

%------------------------------------------------------------------
\section{Boosting}
%------------------------------------------------------------------

One interesting branch of learning theory is known as the Probably Approximately
Correct (PAC) model of learning \cite{VALIANT84,MITCHEL,MICHAEL94}. In this model, 
the \emph{learner}, based on a set of examples of a \emph{concept}, attempts to formulate a 
prediction rule (\emph{hypothesis}) that correctly classifies new instances as belonging 
to that concept or not. The examples used by the learner are drawn from a fixed but 
unknown and arbitrary distribution on the instance space.
The probability that a random instance is classified correctly is a measure
of the accuracy of the hypothesis. Since these are statistical measures,
the learner is said to have accuracy $1-\epsilon$ with reliability $1-\delta$,
where $\epsilon$ and $\delta$ are small positive constants.

Although the reliability of any learning algorithm can be improved 
by expanding the set of examples \cite{HAUSSLER91}, increasing the accuracy
is a complex task. This question was treated by Kearns and Valiant 
\cite{KEARNS88,KEARNS89,KEARNS94}, who defined two different PAC models: \emph{strong} and
\emph{weak} PAC learning. A strong learner is defined as a learner that, 
given a target accuracy $\epsilon$,
generates a hypothesis whose error is smaller than $\epsilon$ in 
time polynomial in $1/\epsilon$. This requirement is not necessary for a weak
learner, which only needs to be slightly better than random guessing.

The distinction between strong and weak learning was accepted until 1990, when
Schapire, in his work ``\emph{The Strength of Weak Learnability}'' \cite{SCHAPIRE90},
proved that weak and strong PAC learning are equivalent in the distribution-free 
case. Schapire demonstrated that a weak hypothesis can be ``\emph{boosted}'' to
generate a strong hypothesis with arbitrary precision by employing the 
proposed \emph{Boosting algorithm}. With the Boosting algorithm, several 
weak hypotheses are combined to 
generate a strong hypothesis with the required accuracy, in 
time polynomial in $1/\epsilon$. 

Two versions of Boosting algorithms, \emph{boosting by finding a
consistent hypothesis} and \emph{boosting by filtering}, appeared in 1995, in the work
of Freund \cite{FREUND95}. Moreover, Freund proposed a generalization for concepts 
with $k$ values and real values, not only binary values. 
In the same year, Freund and Schapire 
\cite{FRESCHAP97} presented the initial version of the well-known 
Boosting algorithm called AdaBoost (from \emph{Adaptive Boosting}).
With AdaBoost, no prior knowledge of the accuracy of
the weak learning algorithm is necessary. All 
weak hypotheses are weighted according to
their errors to build a final strong hypothesis. 
Slight modifications were made to the original AdaBoost, resulting in
a generalized AdaBoost algorithm \cite{SCHAPSING99}.
The AdaBoost algorithm is described in the next section.
%Afterwards, 
%EDR is discussed how it fits in the context of 
%SVM and Boosting. 
%. Other boosting approach, known as \emph{Bagging}, 
%and a brief discussion about boosting,
%margins and SVM are presented as well. Finally, 

%------------------------------------------------------------------ 
\subsection{AdaBoost algorithm}
%------------------------------------------------------------------

Consider a training set $\{(\mathbf{x}_i,y_i) \}_{i=1}^{p}$ with two distinct class labels,
given by $(+1)$ and $(-1)$. Each instance $\mathbf{x}_i$ belongs to an instance space
$\mathcal{X} \subseteq \Re^m$ and labels $y_i$ belong to a label space 
$\mathcal{Y} \subseteq \Re$. 
Consider also $D_t(i)$ as a distribution (\emph{weight}) 
for the pattern $\mathbf{x}_i$ at iteration $t$.
Initially all instances have the same weight, which is changed during
training. Examples incorrectly classified have their weights increased whereas
weights for examples correctly classified are decreased. Because of this, the weak
learner is forced to concentrate its efforts on the hard examples, that is, on the most
difficult examples to learn.

At each iteration, weak hypotheses with error
% --
\begin{equation}
%\epsilon_t = \mathrm{Pr}_{i \sim D_t}[h_t(x_i) \neq y_i] 
\epsilon_t = P_{i \in D_t}[h_t(\mathbf{x}_i) \neq y_i] 
\label{EQ-SHAPIRE-ERROR}
\end{equation}
% --
are combined to build the final hypothesis,
% --
\begin{equation}
H(\mathbf{x}) = \mathrm{sign}\left( \sum\limits_{t=1}^{T}a_t h_t(\mathbf{x})\right).
\end{equation}
% --
The expression for the error used here (Equation \ref{EQ-SHAPIRE-ERROR}) 
follows the original notation
used by Schapire \cite{FRESCHAP99}. The expression 
$P_{i \in D_t}[h_t(\mathbf{x}_i) \neq y_i]$ represents
the probability that $h_t$ misclassifies a pattern $\mathbf{x}_i$
chosen at random according to the distribution $D_t$, both
at iteration $t$, as indicated by the subscript index.
For binary concepts, a possible function is \cite{FRESCHAP99}:
% --
\begin{equation}
\epsilon_t = \sum\limits_{\forall i | h_t(\mathbf{x}_i) \neq y_i}D_t(i).
\end{equation}
% --
Weak hypotheses are weighted by the variable $a_t$, which is controlled
by the error $\epsilon_t$ over the distribution $D_t$. When the error becomes
small, $a_t$ increases and the corresponding weak hypothesis 
becomes more ``reliable''. Hypotheses with a small degree of
confidence are generated when large errors over the distribution $D_t$ occur.
The behavior of $a_t$ and $\epsilon_t$ is represented in Figure 
\ref{FIG-ALPHA-EPS-BEH}.
The AdaBoost algorithm for binary concepts 
\cite{FRESCHAP99,SCHAPIRE99,SCHAPSING99} 
is depicted in Algorithm \ref{ADABOOST-ALG}.
% ------------------
\begin{algorithm}
\caption{\label{ADABOOST-ALG}AdaBoost algorithm for binary concepts}
\begin{algorithmic}
\small
\STATE \ding{237} Given a training set $\{(\mathbf{x}_i,y_i) \}_{i=1}^{p}$ %\\
        with $\mathbf{x}_i \in \mathcal{X}$,
                  $y_i \in \mathcal{Y} = \{-1,+1\}$
\STATE \ding{237} Initialize $D_0(i)=1/p$
\FOR{$t=1$ to $T$}
\STATE \ding{237} Train a weak learner using distribution $D_t$
\STATE \ding{237} Get weak hypothesis $h_t:\mathcal{X} \rightarrow \{-1,+1\}$ with error: 
	$$\displaystyle \epsilon_t = \mathrm{Pr}_{i \in D_t}[h_t(\mathbf{x}_i) \neq y_i] $$
\STATE \ding{237} Choose: 
	$$\displaystyle a_t = \frac{1}{2} \ln \left( \frac{1-\epsilon_t}{\epsilon_t}\right) $$
\STATE \ding{237} Update:
	$$\displaystyle 
	\begin{array}{lcl}
	D_{t+1}(i) & = & \displaystyle \frac{D_t(i)}{Z_t}\times\left\{ 
		\begin{array}{ll}
		e^{-a_t} \;\; & \mathrm{if}\; h_t(\mathbf{x}_i) = y_i \\
		e^{a_t}  \;\; & \mathrm{if}\; h_t(\mathbf{x}_i) \neq y_i \\
		\end{array}
		\right. \\
		& = & \displaystyle \frac{D_t(i)\exp(-a_ty_ih_t(\mathbf{x}_i))}{Z_t} \\
	\end{array}
	$$
	where $Z_t$ is a normalization factor (chosen so that $D_{t+1}$ will be
	a distribution).
\ENDFOR
\STATE \ding{237} Output the final hypothesis:
	$$ 
	\displaystyle
	H(\mathbf{x}) = \mathrm{sign}\left( \sum\limits_{t=1}^{T}a_t h_t(\mathbf{x})\right)
	$$
\end{algorithmic}
\end{algorithm}
% ------------------
%--------------------------------------------
\begin{figure}[!]
\centering
\psfrag{X}{$\epsilon_t$}
\psfrag{Y}{$a_t$}
\includegraphics[scale=.35,trim=0 0 0 -40]{figs/error_alpha.eps}
\caption{The $a_t$ and $\epsilon_t$ behavior.}
\label{FIG-ALPHA-EPS-BEH}
\end{figure}
%--------------------------------------------
The distribution $D_t$ is updated according to $a_t$ and the current weak hypothesis.
For a correctly classified pattern 
$(h_t(\mathbf{x}_i)=y_i)$, the value $D_{t+1}(i)$ decreases.
Conversely, incorrectly classified patterns have their corresponding 
$D_{t+1}(i)$ values increased. Since the selection probability of a pattern is controlled
by the distribution
$D_t$, hard examples are more likely to be selected than correctly classified ones.
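As an illustration, the AdaBoost loop of Algorithm \ref{ADABOOST-ALG} can be sketched in code. This is a minimal sketch only: the one-feature threshold stumps used as the weak learner are an arbitrary choice for the example, not part of the algorithm itself.

```python
import numpy as np

def adaboost(X, y, T=20):
    """AdaBoost for binary labels y in {-1,+1}, using one-feature
    threshold stumps as the (illustrative) weak learner."""
    p = len(y)
    D = np.full(p, 1.0 / p)               # initial distribution D_0(i) = 1/p
    stumps, alphas = [], []
    for t in range(T):
        # weak learner: stump (feature, threshold, polarity) with the
        # smallest weighted error under the current distribution D_t
        best = None
        for f in range(X.shape[1]):
            for thr in np.unique(X[:, f]):
                for s in (1, -1):
                    h = np.where(X[:, f] >= thr, s, -s)
                    err = D[h != y].sum()
                    if best is None or err < best[0]:
                        best = (err, f, thr, s)
        err, f, thr, s = best
        if err >= 0.5:                    # no longer better than random guessing
            break
        a = 0.5 * np.log((1 - err) / max(err, 1e-12))
        h = np.where(X[:, f] >= thr, s, -s)
        # increase weights of misclassified patterns, decrease the rest
        D = D * np.exp(-a * y * h)
        D /= D.sum()                      # Z_t normalization
        stumps.append((f, thr, s)); alphas.append(a)
    def H(Xnew):
        votes = sum(a * np.where(Xnew[:, f] >= thr, s, -s)
                    for a, (f, thr, s) in zip(alphas, stumps))
        return np.sign(votes)
    return H
```

The returned function `H` is the final hypothesis $H(\mathbf{x})$, the sign of the $a_t$-weighted vote of the weak hypotheses.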
 
For real-valued concepts, Leisch and Hornik \cite{LEISCH97} propose a modification
of the distribution $D_t$, as follows:
%---
\begin{equation}
\epsilon_t(\mathbf{x}_i)=|y_i - h_t(\mathbf{x}_i)|^2
\end{equation}
\begin{equation}
D_{t+1}(i) = \frac{D_t(i)+ \epsilon_t(\mathbf{x}_i)}{Z_t}.
\end{equation}
%--
With this procedure, ``how much'' a pattern is misclassified is taken
into account, and not only whether it is misclassified. 
This idea conforms with the Error-Dependent
Repetition (EDR) procedure \cite{CACHIN9401}, 
described next for SVMs.
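The modified update translates directly to code; a minimal sketch, with $Z_t$ taken as the sum that renormalizes the weights:

```python
import numpy as np

def leisch_hornik_update(D, y, h):
    """One distribution update for real-valued concepts, following the
    modification by Leisch and Hornik: the squared output error is
    added to each weight before renormalizing."""
    eps = np.abs(y - h) ** 2          # epsilon_t(x_i) = |y_i - h_t(x_i)|^2
    D_new = D + eps                   # D_{t+1}(i) = (D_t(i) + eps_i) / Z_t
    return D_new / D_new.sum()        # Z_t makes D_{t+1} a distribution
```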

%by Cachin  in 1994. In the next section
%the EDR algorithm and its connection with boosting are presented.
%AdaBoost can be applied to real-valued concepts. In this case, a measure of ``confidence''
%can be given by $|h_t(x)|$ and the predicted class is defined according to the sign of
%$h_t(x)$. With slight modifications, the AdaBoost algorithm for real-valued \cite{}
%concepts is presented in Algorithm \ref{ALG-}.

%\section{Pedagogical pattern selection strategies}

%------------------------------------------------------------
\section{The SVM-EDR training algorithm}
%------------------------------------------------------------

The SVM-EDR training algorithm is a Boosting algorithm
that uses a deterministic, error-based schedule
to re-sample the data set. Patterns related to large
errors have their Lagrange multipliers updated more
frequently, according to a gradient ascent strategy.
The schedule is calculated using a modified EDR procedure,
adapted to the output error of SVMs.
SVM-EDR is presented in a format similar to
the AdaBoost algorithm, with its convergent probability 
distribution $D_t(i)$ represented explicitly. The pattern selection
is controlled by this distribution.

Two new training parameters are introduced with SVM-EDR.
The first, denoted by $n_E$, represents the number of
scans performed over the training set, at each iteration,
in search of patterns related to large errors.
The second parameter changes the shape of the error probability
distribution and, consequently, the number of vectors
presented at each iteration.
All these aspects are explained in the next sections.

%-----------------------------------------------------------
\subsection{Error Dependent Repetition}
\label{SEC-EDR-STANDARD}
%-----------------------------------------------------------

The Error Dependent Repetition (EDR) pattern selection strategy 
was introduced in 1994 \cite{CACHIN9401}.
With EDR, the presentation frequency
of a pattern depends on its error: patterns with large output errors are 
selected more frequently, while patterns with smaller errors (already learned)
are presented less often.
This can be carried out by employing the following procedure,
split here into two phases for better understanding:
%----------
\begin{itemize}
\item {\bf Phase 1:} All training patterns are presented to the network
      and output errors are stored.
      Error for the $i$-th pattern is denoted by
      \begin{equation}
      e_i = f(\mathbf{x}_i) - y_i
      \end{equation}
      The largest error is saved:
      $e_{\textrm{max}}=\max\{e_i\}_{i=1}^{p}$
%\;\;\;\mathrm{for}\;i=1,\ldots,p$.
\item {\bf Phase 2:} With these values defined, the training set is scanned $n_E$ times.
      Patterns that obey the condition
%----------
\begin{equation}
e_i \geq j \frac{ e_{\textrm{max}} } {n_E},
\;\;\;\;\;\mathrm{for}\;\;j=1,2,\ldots,n_E
\label{EQ-SVMEDR-COMPAR}
\end{equation}
%----------
are selected for optimization. In this case, $n_E$ is less than or equal to
the number of patterns ($p$).
\end{itemize}
%----------

The number of vectors presented at each iteration can be changed
by using different comparison expressions in Equation \ref{EQ-SVMEDR-COMPAR}.
SVM-EDR uses a generic comparison function
with a parameter ($p_E$) that controls its power:
%
\begin{equation}
(e_i)^{p_E} \geq j \frac{ e_{\textrm{max}} } {n_E}.
\label{EQ-SVMEDR-COMPAR-SPECIF}
\end{equation}
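The two phases, with the generic comparison above, can be sketched in code. This is an illustrative sketch only, assuming non-negative errors (as EDR does):

```python
import numpy as np

def edr_scans(errors, n_E, p_E=1.0):
    """EDR schedule with the generic comparison
    (e_i)^p_E >= j * e_max / n_E.
    Phase 1 (j = 0) presents every pattern once and records e_max;
    the function returns the indices selected at each phase-2 scan
    j = 1, ..., n_E."""
    errors = np.asarray(errors, dtype=float)
    e_max = errors.max()                          # phase 1
    return [np.where(errors ** p_E >= j * e_max / n_E)[0]
            for j in range(1, n_E + 1)]           # phase 2
```

Patterns with larger errors pass the threshold in more scans and are therefore presented more often.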

%Some functions suggested by Cachin \cite{CACHIN9401} are 
%presented below:
%\;\;\;\;\mathrm{for}\;\;j=1,2,\ldots,W
%----------
%\begin{equation}
%\begin{array}{rcl}
%\displaystyle
%(e_i)^2    &\geq& j \displaystyle \frac{ e_{\textrm{max}} } {n_E} \\
%(e_i)^4    &\geq& j \displaystyle \frac{ e_{\textrm{max}} } {n_E} \\
%\exp(e_i)  &\geq& j \displaystyle \frac{ e_{\textrm{max}} } {n_E} \\
%10^{e_i}   &\geq& j \displaystyle \frac{ e_{\textrm{max}} } {n_E} \\
%\end{array}
%\end{equation}

%-------------------------------------------------------
\subsection{EDR for SVMs}
\label{SEC-EDR-SVM-ERROR}
%----------------------------------------------------------
\begin{table}[b]
\caption[Eliminating class label dependence for output error.]
{Eliminating class label dependence for output error. A positive
error indicates a misclassified pattern whereas a negative error
indicates a correctly classified pattern.}
\label{TAB-ERRO-EDR}
\centering
\begin{tabular}{ccccc} \hline
Vector & Class & Error & Error$\times$Class & 
Error$\times$Class$\times$(-1) \\ \hline
\ding{192} & $+1$ & $>0$ & $>0$ & $<0$ \\
\ding{193} & $+1$ & $<0$ & $<0$ & $>0$ \\
\ding{194} & $-1$ & $<0$ & $>0$ & $<0$ \\
\ding{195} & $-1$ & $>0$ & $<0$ & $>0$ \\ \hline
\end{tabular}
\end{table}
%--------------------
Employing the EDR heuristic in the training of 
SVMs is not immediate.
For SVMs, the output error depends on which class the training vector
belongs to. Using the transformation
%----------
\begin{equation}
e_i^{'} = -e_iy_i
\label{EQ-EDR-ERROR-INDEP}
\end{equation}
%----------
it is possible to obtain an error measure 
that does not depend on the class label.

Figure \ref{FIG-ERRO-EDR} shows four possible placements
for a training vector, represented by the circled numbers 1 to 4.
Vectors 1 (correctly classified) and 2 (incorrectly classified)
belong to class $(+1)$, and vectors 3 (correctly classified) 
and 4 (incorrectly classified) to class $(-1)$.
Table \ref{TAB-ERRO-EDR} illustrates how the transformation 
changes the error to generate positive errors for incorrectly classified
vectors and negative errors for correctly classified ones.
%---------------------
\begin{figure}[htb]
\centering
\psfrag{SV1}{\large\ding{192}}
\psfrag{SV2}{\large\ding{193}}
\psfrag{SV3}{\large\ding{194}}
\psfrag{SV4}{\large\ding{195}}
\psfrag{CL1}{\shortstack{Class \\$(-1)$}}
\psfrag{CL2}{\shortstack{Class \\$(+1)$}}
\psfrag{ZERO}{Hyper-plane}
\includegraphics[scale=.6]{figs/erroedr.eps}
\caption{Four possible placements for a training vector.}
\label{FIG-ERRO-EDR}
\end{figure}
%------------------------------------

EDR assumes that output errors are positive or zero, while
the output error given by $e_{i}^{'}$ may assume any 
value. Consequently,
it is necessary to shift $e_{i}^{'}$ by the 
minimum error:
%----------
\begin{equation}
e_{i}^{\mathrm{svm}} = e_{i}^{'} - e_{\mathrm{min}}^{'}
\label{EQ-SVMEDR-INDEP-CLASS-MIN}
\end{equation}
%----------
where 
$
e_{\mathrm{min}}^{'} = \min\{e_{i}^{'}\}_{i=1}^{p}
$ 
and $e_{i}^{\mathrm{svm}}$ is the output error with 
zero as reference.


The complete SVM-EDR expression, using Equations  
\ref{EQ-SVMEDR-COMPAR-SPECIF}, \ref{EQ-EDR-ERROR-INDEP} and
\ref{EQ-SVMEDR-INDEP-CLASS-MIN}, is
$$
(e_{i}^{'} - e_{\mathrm{min}}^{'})^{p_E} 
\geq j \frac{ e_{\textrm{max}}^{'}  - e_{\mathrm{min}}^{'}} {n_E} 
$$
%-------------------------
\begin{equation}
e_{i}^{'} \geq 
\sqrt[p_E]
{
j \frac{ e_{\textrm{max}}^{'}  - e_{\mathrm{min}}^{'} }{n_E} 
}
+ e_{\mathrm{min}}^{'}
\;\;\;\;\;\mathrm{for}\;\;j=1,2,\ldots,n_E
\end{equation}
%-------------------------
where 
$
e_{\mathrm{max}}^{'} = \max\{e_{i}^{'}\}_{i=1}^{p}
$ 
is the maximum error, not dependent on the class label.
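Equations \ref{EQ-EDR-ERROR-INDEP} and \ref{EQ-SVMEDR-INDEP-CLASS-MIN} translate directly to code; a minimal sketch:

```python
import numpy as np

def svm_edr_error(f_out, y):
    """Class-label-independent, zero-referenced output error for
    SVM-EDR: e'_i = -(f(x_i) - y_i) y_i, shifted by e'_min so that
    the smallest error becomes zero.  Before the shift, positive
    values indicate misclassified vectors, negative values correctly
    classified ones."""
    e_prime = -(f_out - y) * y       # e'_i = -e_i y_i
    return e_prime - e_prime.min()   # e_i^svm, with zero as reference
```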

It is recommended to apply EDR to each class separately, since
training vectors belonging to different classes may
have output errors of very different magnitudes.
If EDR were not applied per class, the class with the largest
errors would dominate the selection.

%--------------------------------------------------------
\subsection{Understanding and estimating $n_E$}
\label{SEC-ESTIMATING-NE}
%---------------------------------------------------------

The role of the number of scans is to separate the patterns into $n_E$ 
different groups, according to their relative error values.
As a consequence, a virtual training set is generated in which
patterns associated with large errors are included several
times.

In the EDR phase 2, each group is selected with a 
frequency that depends on these error values. 
Figure \ref{FIG-EDR-GROUPS} shows an example with
$n_E$ equal to five. Errors are ordered, starting from 
$e_t(1)$ (minimum error) to $e_t(p)$ (maximum error). 
The EDR phase 1 is necessary to determine the error values
used in phase 2 and to present each training vector
at least once.
However, when taking into account only the number
of vectors presented at iteration $t$, phase 1 
can be considered as part of phase 2, but starting from $j=0$.
%---------------------------------------------
\begin{figure}
\centering
\begin{tabular}{|cccl|}
\hline
Phase 1: & ($j = 0$) & \vspace*{0.1cm}
\rule{.32cm}{.32cm}
\rule{.32cm}{.32cm}
\rule{.32cm}{.32cm}
\rule{.32cm}{.32cm}
\rule{.32cm}{.32cm}
& all \\
\hline
Phase 2: &  & 
$e_t(1)\leftarrow$ $\rightarrow e_t(p)$ & \\
& $j=1$ & 
\setlength{\fboxsep}{0cm}
\fbox{\phantom{\rule{.3cm}{.3cm}}}
\rule{.32cm}{.32cm} 
\rule{.32cm}{.32cm} 
\rule{.32cm}{.32cm} 
\rule{.32cm}{.32cm} 
& $e \geq e_t(p)/5$ \\
& $j=2$ & 
\setlength{\fboxsep}{0cm}
\fbox{\phantom{\rule{.3cm}{.3cm}}}
\fbox{\phantom{\rule{.3cm}{.3cm}}}
\rule{.32cm}{.32cm} 
\rule{.32cm}{.32cm} 
\rule{.32cm}{.32cm} 
& $e \geq 2e_t(p)/5$ \\
& $j=3$ & 
\setlength{\fboxsep}{0cm}
\fbox{\phantom{\rule{.3cm}{.3cm}}}
\fbox{\phantom{\rule{.3cm}{.3cm}}}
\fbox{\phantom{\rule{.3cm}{.3cm}}}
\rule{.32cm}{.32cm} 
\rule{.32cm}{.32cm} 
& $e \geq 3e_t(p)/5$ \\
& $j=4$ & 
\setlength{\fboxsep}{0cm}
\fbox{\phantom{\rule{.3cm}{.3cm}}}
\fbox{\phantom{\rule{.3cm}{.3cm}}}
\fbox{\phantom{\rule{.3cm}{.3cm}}}
\fbox{\phantom{\rule{.3cm}{.3cm}}}
\rule{.32cm}{.32cm} 
& $e \geq 4e_t(p)/5$ \\
\hline
\end{tabular}
\caption{\label{FIG-EDR-GROUPS}Groups generated by EDR, 
				supposing $n_E = 5$.}
\end{figure}
%---------------------------------------------

An effect of $n_E$ is to change the number of 
vectors presented at each iteration, denoted by $Z_t$.
The number of vectors could be precisely estimated  
if the distribution of errors $\mathcal{E}_t$ were known but, 
since the decision surface is under optimization, this distribution 
changes at each iteration.
To better understand this concept, two examples are given in the next
sections.
%----------------------------------------------------------
\subsubsection{Example 1}
%----------------------------------------------------------
Consider a uniform 
error distribution at iteration $t$, as depicted in
Figure \ref{FIG-UNIFORM-ERROR-DISTR}. 
If a training vector is chosen at random, the probability of
finding its correspondent error value in any region 
between $e_t(1)$ and $e_t(p)$ is the same. So, for
this uniform distribution, errors are equally 
spread in the range $[e_t(1),e_t(p)]$. 
% ----
\begin{figure}
\setlength{\unitlength}{1mm}
\begin{picture}(110,40)(0,7)
% --
\thicklines
% eixo x
\put(20,10){\vector(1,0){90}}
% eixo y
\put(20,10){\vector(0,1){30}}
% linha horiz.
\put(30,30){\line(1,0){70}}
\thinlines
\put(100,10){\line(0,1){20}}
\put(30,10){\line(0,1){20}}
\multiput(50,11)(0,2){10}{\line(0,1){1}}
%\put(50,10){\line(0,1){20}}
% --
\put(25,05){\makebox(0,0)[bl]{$e_t(1)$}}
\put(48,05){\makebox(0,0)[bl]{$e(j)$}}
\put(95,05){\makebox(0,0)[bl]{$e_t(p)$}}
\put(110,05){\makebox(0,0)[bl]{$Error$}}
\put(22,35){\makebox(0,0)[bl]{$Frequency$}}
\put(0,26){\makebox(0,0)[bl]{$\frac{1}{e_t(p)-e_t(1)}$}}
\put(70,18){\makebox(0,0)[bl]{$A(j)$}}
% --
\multiput(20,30)(2,0){5}{\line(1,0){1}}
\end{picture}
\caption{\label{FIG-UNIFORM-ERROR-DISTR}Uniform error
				distribution.}
\end{figure}
% ---------------------------------------------

In the EDR algorithm, the training set is scanned for $j=1, \ldots, n_E$
and the vectors that obey the condition
%----------
$$
e_t(i) \geq j \frac{ e_t(p) } {n_E}
$$
%----------
are selected for optimization. For the uniform distribution,
the number of vectors selected is proportional to the area 
after $e(j)$ (see Figure \ref{FIG-UNIFORM-ERROR-DISTR}).
Since the whole area is $1$, the area delimited in the $j$-th
scan, multiplied by the training set size ($p$),
gives the number of vectors selected in that scan.
Moreover, the number of vectors presented at iteration $t$
(denoted as $Z_t$) may be obtained by starting the 
scanning at $j=0$:
% ------
$$%\begin{equation}
Z_t = p\sum\limits_{j=0}^{n_E}
\left(
1-\frac{j}{n_E}
\right)
=
p
\left[
\sum\limits_{j=0}^{n_E}1 -
\frac{1}{n_E}\sum\limits_{j=0}^{n_E}j
\right]
$$%\end{equation}
% ------
$$%\begin{equation}
Z_t = p
\left[
(n_E+1) - 
\frac{1}{n_E}\frac{n_E(n_E+1)}{2}
\right]
$$%\end{equation}
% ------
\begin{equation}
\label{EQ-UNIFORM-TOTAL-VEC}
Z_t = \frac{(n_E+1)p}{2}.
\end{equation}
% ------
To achieve this result, two sums were computed: 
the first to calculate the area after $e(j)$ and 
the second to add up all these areas. 
These sums may be approximated by integrals when $n_E$ 
becomes large, as demonstrated below.
First, it is necessary to determine the area after $e(j)$:
% -------
\begin{equation}
A(j) = \int\limits_{e(j)}^{e_t(p)}
       \frac{1}{e_t(p) - e_t(1)}
      de
\end{equation}
% -------
where
% ---
\begin{equation}
 e(j) = j\frac{(e_t(p) - e_t(1))} {n_E}	+ e_t(1)
\end{equation}
% ---
$$%\begin{equation}
\left.
A(j) = \frac{1}{e_t(p) - e_t(1)} e \right|_{e(j)}^{e_t(p)}
$$%\end{equation}
% ---
$$%\begin{equation}
A(j) = \frac{1}{e_t(p) - e_t(1)} 
\left(
e_t(p) - j\frac{(e_t(p) - e_t(1))} {n_E}	- e_t(1)
\right)
$$%\end{equation}
% ---
\begin{equation}
A(j) = 1 - \frac{j}{n_E}.
\end{equation}
% ----
Using this result, the second integral becomes:
% ---
$$%\begin{equation}
Z_t = p\int\limits_{0}^{n_E}A(j)dj
$$%\end{equation}
% ----
$$%\begin{equation}
Z_t = p\int\limits_{0}^{n_E}
\left(
1 - \frac{j}{n_E}
\right)
dj
$$%\end{equation}
% ----
$$%\begin{equation}
Z_t = 
p\left[
\left. j \right|_{0}^{n_E}
\left. - \frac{j^2}{2n_E}\right|_{0}^{n_E}
\right]
=
p\left[
n_E - \frac{n_E}{2}
\right]
$$%\end{equation}
% ----
\begin{equation}
Z_t = \frac{n_E}{2}p.
\label{EQ-INTEG-UNIF}
\end{equation}
% ----
Comparing Equations \ref{EQ-UNIFORM-TOTAL-VEC}
and \ref{EQ-INTEG-UNIF}, when $n_E$ becomes large the groups
generated by EDR become small and the difference between the
two results vanishes as well:
$$
\frac{(n_E+1)}{2} \approx \frac{n_E}{2} \;\; \mathrm{when} \;\; n_E \gg 1.
$$
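The agreement between the sum and its integral approximation can be checked numerically; this is an illustrative verification only, not part of the algorithm:

```python
def z_uniform_sum(p, n_E):
    # Z_t = p * sum_{j=0}^{n_E} (1 - j / n_E)  ->  closed form (n_E + 1) p / 2
    return p * sum(1 - j / n_E for j in range(n_E + 1))

def z_uniform_integral(p, n_E):
    # integral approximation Z_t = n_E p / 2, accurate for large n_E
    return n_E * p / 2
```

For example, with $p=100$ and $n_E=5$ the sum gives exactly $(5+1)\cdot 100/2 = 300$, while the integral approximation gives $250$; for $n_E=1000$ the two values differ by about $0.1\%$.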
In general, denoting by $g_t(e)$ the error distribution at iteration $t$,
$Z_t$ can be calculated as follows:
% -------
$$%\begin{equation}
Z_t = p\int\limits_{0}^{n_E}
      \int\limits_{e(j)}^{e_t(p)}
      g_t(e)\,de\,dj
$$%\end{equation}
% -------
% ----------------------------------------------
\begin{figure}
\centering
\setlength{\unitlength}{1mm}
\begin{picture}(55,50)(10,7)
% --
\thicklines
% eixo x
\put(20,10){\vector(1,0){50}}
% eixo y
\put(20,10){\vector(0,1){45}}
% linha diag.
\put(20,10){\line(1,1){40}}
\thinlines
\put(60,10){\line(0,1){40}}
\multiput(35,11)(0,2){7}{\line(0,1){1}}
%\put(50,10){\line(0,1){20}}
% --
\put(12,05){\makebox(0,0)[bl]{$e_t(1)=0$}}
\put(31,05){\makebox(0,0)[bl]{$e(j)$}}
\put(55,05){\makebox(0,0)[bl]{$e_t(p)$}}
\put(67,05){\makebox(0,0)[bl]{$Error$}}
\put(22,50){\makebox(0,0)[bl]{$Frequency$}}
%\put(0,26){\makebox(0,0)[bl]{$\frac{1}{e_t(p)-e_t(1)}$}}
\put(42,20){\makebox(0,0)[bl]{$A(j)$}}
% --
\multiput(20,25)(2,0){8}{\line(1,0){1}}
\end{picture}
\caption{\label{FIG-LINEAR-ERROR-DISTR}Linear error
				distribution.}
\end{figure}
% ---------------------------------------------

%----------------------------------------------------------
\subsubsection{Example 2}
%----------------------------------------------------------

As a second example, consider the error distribution 
showed in Figure \ref{FIG-LINEAR-ERROR-DISTR} and represented as
% -------
\begin{equation}
g_t(e) = \frac{2e}{e_t^2(p)}
\end{equation}
% -------
and with
% -------
\begin{equation}
e(j) = j\frac{e_t(p)}{n_E}.
\end{equation}
% -------
The area after $e(j)$ is given by the first integral
% -------
$$%\begin{equation}
A(j) = \int\limits_{e(j)}^{e_t(p)}
       \frac{2e}{e_t^2(p)}
      de
      =
      \left.
      \frac{1}{e_t^2(p)}
      e^2
      \right|_{e(j)}^{e_t(p)}
      =
			\frac{1}{e_t^2(p)}
			\left[
      e_t^2(p) - j^2\frac{e_t^2(p)}{n_E^2}
      \right]      
$$%\end{equation}
% -------
% -------
\begin{equation}
A(j) =
      1 - \frac{j^2}{n_E^2}
\end{equation}
% -------
whereas the second integral uses this result to calculate $Z_t$:
% -------
$$%\begin{equation}
Z_t = p\int\limits_{0}^{n_E}
			\left(
      1 - \frac{j^2}{n_E^2}
      \right)
      dj
$$%\end{equation}
% -------
% ----
$$%\begin{equation}
Z_t = 
p\left[
\left. j \right|_{0}^{n_E}
\left. - \frac{j^3}{3n_E^2}\right|_{0}^{n_E}
\right]
=
p\left[
n_E - \frac{n_E}{3}
\right]
$$%\end{equation}
% ----
\begin{equation}
Z_t = \frac{2n_E}{3}p.
\label{EQ-INTEG-LIN}
\end{equation}
% ----
Using sums instead of integrals, it is necessary to represent the 
normalized trapezoidal area $A(j)$:
$$
A(j) = \frac{\frac{(e_t(p)+e(j))(e_t(p)-e(j))}{2}}{\frac{e_t^2(p)}{2}}
= 
\frac{e_t^2(p)-e^2(j)}{e_t^2(p)}.
$$
Substituting $e(j)$ results in:
$$
A(j) = \frac{e_t^2(p)-\frac{j^2e_t^2(p)}{n_E^2}}{e_t^2(p)}
     = 1 - \frac{j^2}{n_E^2}.
$$
And the second sum gives $Z_t$:
% ------
$$%\begin{equation}
Z_t = p\sum\limits_{j=0}^{n_E}
\left(
1 - \frac{j^2}{n_E^2}
\right)
=
p
\left[
(n_E + 1) -
\frac{1}{n_E^2}\sum\limits_{j=0}^{n_E}j^2
\right]
$$%\end{equation}
% ------
% ------
$$%\begin{equation}
Z_t = p
\left[
(n_E + 1) -
\frac{n_E(n_E+1)(2n_E+1)}{6n_E^2}
\right]
$$
%-----
\begin{equation}
Z_t = p
\left[
\frac{4n_E^2+3n_E-1}{6n_E}
\right].
\end{equation}
% ------
Again, when $n_E$ is large, the value yielded by the sums 
becomes close to the integral:
%-----
$$%\begin{equation}
\frac{4n_E^2+3n_E-1}{6n_E} \approx \frac{2n_E}{3}
 \;\; \mathrm{when} \;\; n_E \gg 1.
$$%\end{equation}
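As with the uniform case, the algebra can be checked numerically; an illustrative verification only:

```python
def z_linear_sum(p, n_E):
    # Z_t = p * sum_{j=0}^{n_E} (1 - j^2 / n_E^2)
    return p * sum(1 - j**2 / n_E**2 for j in range(n_E + 1))

def z_linear_closed(p, n_E):
    # closed form: p (4 n_E^2 + 3 n_E - 1) / (6 n_E)
    return p * (4 * n_E**2 + 3 * n_E - 1) / (6 * n_E)

def z_linear_integral(p, n_E):
    # integral approximation: 2 n_E p / 3, accurate for large n_E
    return 2 * n_E * p / 3
```

The sum and the closed form agree exactly, and for large $n_E$ both approach the integral value $2n_Ep/3$.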
% ------

%----------------------------------------------------------
\subsubsection{Virtual training set}
%----------------------------------------------------------

In summary, the size of the virtual training set generated by $n_E$, 
at iteration $t$, may be estimated by
% -------
\begin{equation}
Z_t = p\int\limits_{0}^{n_E}
      \int\limits_{e(j)}^{e_t(p)}
      g_t(e)\,de\,dj.
\label{EQ-Z-SET-SIZE}      
\end{equation}
% -------
The Boosting characteristic of SVM-EDR is a consequence of this set,
since patterns associated with large errors are included in it several
times.

The value of $n_E$ is not critical, but it may slow down
the training if a very large EDR set 
is generated. In general, values smaller than ten are 
appropriate for $n_E$.


%-----------------------------------------------------------
\subsection{SVM-EDR as a Boosting algorithm}
%-----------------------------------------------------------

The SVM-EDR Boosting algorithm performs error-based data re-sampling, 
like other Boosting algorithms, although the implementations
differ in each case.
Two major differences between SVM-EDR and Boosting can be pointed out:
%--
\begin{itemize}
\item In Boosting algorithms, the selection of patterns follows 
      a probability distribution, whereas a deterministic 
      schedule, based on current errors, is used to select patterns 
      in the EDR algorithm.
%
\item The Boosting decision takes into account the final hypothesis $H(\mathbf{x})$, 
      represented as a set of several weak hypotheses ($h_t$) weighted by $a_t$.
      For SVM-EDR, this kind of weighting is not necessary, since the same 
      hypothesis is refined during training, according to the evolution
      of the Lagrange multipliers.
\end{itemize}
%----
With some modifications, it is possible to represent SVM-EDR 
in a format similar to the traditional AdaBoost algorithm 
\cite{FRESCHAP97}, as depicted in Algorithm \ref{EDR-BOOSTING}
and demonstrated in the next sections.
% ------------------
\begin{algorithm}
\caption{\label{EDR-BOOSTING}SVM-EDR Boosting algorithm for binary concepts}
\begin{algorithmic}
\small
\STATE \ding{237} Given a training set $\{(\mathbf{x}_i,y_i) \}_{i=1}^{p}$ %\\
        with $\mathbf{x}_i \in \mathcal{X}$,
        $y_i \in \mathcal{Y} = \{-1,+1\}$
\STATE \ding{237} Initialize $D_0(i)=1/p$, $a_0(i) = 0$ and $b_0 = 0$
\FOR{$t=1$ to $T$}
\STATE \ding{237} Train a weak learner using distribution $D_t$
\STATE \ding{237} Get weak hypothesis $h_t:\mathcal{X} \rightarrow \Re$ with error: 
	$$\displaystyle \epsilon_t(i) =  -(f(\mathbf{x}_i) - y_i)y_i$$
\STATE \ding{237} Choose: 
	$$\displaystyle a_t(i) = a_{t-1}(i) - 
	   \frac{y_i\epsilon_{t-1}(i)}{k(\mathbf{x}_i,\mathbf{x}_i) + 1}$$
	$$b_t = b_{t-1} - \Delta a_t(i)y_i$$
	\hspace*{0.4cm}where 
	$\Delta a_t(i) = a_t(i) - a_{t-1}(i)$.
\STATE \ding{237} Update:
	$$\displaystyle 
     D_{t}(i) = \frac{1+d_t(i)}{Z_t} 
	$$
	\hspace*{0.4cm}where $Z_t = p + p_t^E$ is a normalization factor.
\ENDFOR
\STATE \ding{237} Output the final hypothesis:
	$$ 
	\displaystyle
	H(\mathbf{x}) = \mathrm{sign}\left( 
	\sum\limits_{k=0}^{T}h_k(\mathbf{x})
	\right)
	$$
	\hspace*{0.4cm}where 
	$$
	h_k(\mathbf{x}) = \sum\limits_{j | a_k(j) > 0}
	\Delta a_k(j)y_jk(\mathbf{x}_j,\mathbf{x}) + \Delta b_k
	$$
	\hspace*{0.4cm}and 
	$\Delta b_t = b_t - b_{t-1}$.
\end{algorithmic}
\end{algorithm}
% ------------------
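To make the loop concrete, the following is a minimal Python sketch of the procedure in Algorithm \ref{EDR-BOOSTING}. All names are illustrative assumptions: the bias is folded into the augmented kernel $k+1$ (consistent with the $k(\mathbf{x}_i,\mathbf{x}_i)+1$ denominator above), a box constraint $0 \leq a_i \leq C$ is added, and the schedule $d(i)$ counts the error-threshold levels reached by each pattern, as in the proof of Theorem \ref{THEO-EDR-1}.

```python
import numpy as np

def rbf(X, Z, var=2.0):
    """RBF kernel matrix between the rows of X and Z."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * var))

def svm_edr_train(X, y, C=5.0, n_E=4, power=2.0, epochs=200, tol=1e-4):
    """Sketch: coordinate ascent on the SOR-style dual.  The bias is
    folded into the augmented kernel k + 1 (an assumption of this
    sketch), so only the multipliers a are kept.  Each epoch presents
    pattern i a total of 1 + d(i) times, where d(i) counts the
    error-threshold levels reached by its current output error."""
    p = len(y)
    K = rbf(X, X) + 1.0                       # k(x_i, x_j) + 1
    a = np.zeros(p)
    for _ in range(epochs):
        a_old = a.copy()
        e = 1.0 - y * (K @ (a * y))           # e'(i) = 1 - y_i f(x_i)
        emax = max(e.max(), 1e-12)
        # EDR schedule: one extra presentation per threshold level reached.
        d = np.floor(n_E * (np.clip(e, 0.0, None) / emax) ** power).astype(int)
        for i in range(p):
            for _ in range(1 + d[i]):
                e_i = 1.0 - y[i] * (K[i] @ (a * y))
                a[i] = min(max(a[i] + e_i / K[i, i], 0.0), C)  # box [0, C]
        if np.abs(a - a_old).max() < tol:
            break
    return a

def svm_edr_predict(Xtr, y, a, Xte):
    """Standard SVM output with the augmented kernel."""
    return np.sign((a * y) @ (rbf(Xtr, Xte) + 1.0))
```

On a small separable toy set this converges within a few epochs; folding the bias into $k+1$ matches the $K_{ii}+1$ denominators used throughout the chapter.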

%------------------------------------------------------------
\subsubsection{The SVM output as a sum of weak hypotheses} 
%------------------------------------------------------------

For this representation, some modifications are necessary.
The variables $a_t$ are, in fact, the Lagrange multipliers at the $t$-th
iteration. The index $i$ was introduced because there are several
Lagrange multipliers, not a single $a_t$.
They vary in inverse proportion to the errors,
as in the AdaBoost algorithm. The error $\epsilon_t(i)$ is the
output error of SVM-EDR, as in Equation \ref{EQ-EDR-ERROR-INDEP},
and the bias is denoted by $b_t$. 
The update rules for $a_t$ and $b_t$ are based on the dual problem of SOR, as
described in Section \ref{CAP-SEC-TRAIN-SOR}.

The final hypothesis $H(\mathbf{x})$ was represented as a sum
of several weak hypotheses, but it is equivalent to the standard SVM output,
as demonstrated by the following equations.

Each $a_t$ may be written as the sum of all increments received in
the previous iterations:
%------
\begin{equation}
a_t(i) = \sum\limits_{j=0}^{t}\Delta a_j(i),
\end{equation}
where $\Delta a_t(i) = a_t(i) - a_{t-1}(i)$.
%------

The same reasoning applies to the bias:
%------
\begin{equation}
b_t = \sum\limits_{j=0}^{t}\Delta b_j,
\end{equation}
where $\Delta b_t = b_t - b_{t-1}$.
%------

The final hypothesis for SVM-EDR, represented as the 
standard SVM output, is given by:
%------
\begin{equation}
H(\mathbf{x}) = \mathrm{sign}\left( 
	\sum\limits_{j | a_T(j) > 0}
	a_T(j)y_jk(\mathbf{x}_j,\mathbf{x}) + b_T
	\right).
\end{equation}
%------
This equation can be modified to assume the same form presented in Algorithm
\ref{EDR-BOOSTING}:
%------
$$%\begin{equation}
H(\mathbf{x}) = \mathrm{sign}\left( 
	\sum\limits_{j | a_T(j) > 0}
	\left(
	\Delta a_T(j) + \Delta a_{T-1}(j) + \cdots + \Delta a_1(j)
	\right)
	y_jk(\mathbf{x}_j,\mathbf{x}) 
	+ \sum\limits_{k=0}^{T}\Delta b_k
	\right)
$$%\end{equation}
%------
%------
$$%\begin{equation}
H(\mathbf{x}) = \mathrm{sign}\left( 
	\sum\limits_{k=0}^{T}\sum\limits_{j | a_T(j) > 0}
	\Delta a_k(j)y_jk(\mathbf{x}_j,\mathbf{x}) 
	+ \sum\limits_{k=0}^{T}\Delta b_k
	\right)
$$%\end{equation}
%------
%------
\begin{equation}
H(\mathbf{x}) = \mathrm{sign}\left( 
	\sum\limits_{k=0}^{T}h_k(\mathbf{x})
	\right),
\end{equation}
%------
where
%------
\begin{equation}
h_k(\mathbf{x}) = \sum\limits_{j | a_T(j) > 0}
	\Delta a_k(j)y_jk(\mathbf{x}_j,\mathbf{x}) 
	+ \Delta b_k.
\end{equation}
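This telescoping identity can be checked numerically. The sketch below uses arbitrary randomly generated multiplier and bias trajectories (all names are illustrative; \texttt{yk} stands in for the fixed values $y_j k(\mathbf{x}_j,\mathbf{x})$):

```python
import numpy as np

rng = np.random.default_rng(0)
T, p = 8, 5
# Trajectories a_t(j) for t = 0..T, with a_0(j) = 0 (illustrative values).
a = np.vstack([np.zeros(p), rng.random((T, p)).cumsum(axis=0)])
da = np.diff(a, axis=0)                      # Δa_t(j) = a_t(j) - a_{t-1}(j)
yk = rng.standard_normal(p)                  # stands in for y_j k(x_j, x)
b = np.concatenate([[0.0], rng.standard_normal(T).cumsum()])
db = np.diff(b)                              # Δb_t = b_t - b_{t-1}

# a_T(j) is recovered by summing its increments (telescoping sum).
assert np.allclose(da.sum(axis=0), a[-1])

# Summing the weak hypotheses h_t(x) gives the standard SVM argument.
h = da @ yk + db                             # h_t(x), one entry per iteration
assert np.isclose(h.sum(), a[-1] @ yk + b[-1])
```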
%------
The annealing present in the AdaBoost algorithm, expressed by the 
exponential function in its original formulation, can be observed in 
SVM-EDR training as well. It is a natural consequence of the convergence 
of SVM training, since errors and incorrectly classified patterns tend to decrease,
generating fewer changes in $\mathcal{E}_t^E$.


%-----------------------------------------------------------
\subsubsection{Convergence of the error probability distribution}
%-----------------------------------------------------------

Although the distribution $D_t$ does not take its previous 
values ($D_{t-1}$) into account, it is a convergent distribution,
as proved below:
%-----
\begin{thm}
\label{THEO-EDR-1}
When the optimal hyperplane is found, the distribution $D_t$ 
converges to
% ----
$$
D(i) = \frac{1+d(i)}{Z}
$$
% --
where 
$$
Z=p + p^E \approx 
\int\limits_{0}^{n_E}
\int\limits_{e(j)}^{e(p)}
g(e)\,de\,dj.
$$
\end{thm}

% -----
\begin{proof}
Let $\mathcal{E}_t=\{e_t(i)\}_{i=1}^p$ be an ordered set, at
iteration $t$, so that $e_t(1)\leq e_t(2)\leq\cdots\leq e_t(p)$.
Each element $e_t(i)$ is related to a single training pattern,
representing its output error.
Without loss of generality, consider also a linear comparison function applied to the
error (Equation \ref{EQ-SVMEDR-COMPAR}), such that the set $\mathcal{E}_t^{E}$ 
(EDR set) is defined as:
% ----
\begin{equation}
\mathcal{E}_t^{E} = \left\{ e_t(i) \;|\; 
	e_t(i) \geq j \frac{ e_t(p) }{n_E} \right\}
	\;\;\;\mathrm{for}\;\;j=1,2,\ldots,n_{E}.
\end{equation}
% ----
The number of times the data set is scanned is given by $n_E$,
the value $p_t^E$ is defined as the size of the set 
$\mathcal{E}_t^E$ at iteration $t$, and $p$ is the number
of training patterns.
Note that $\mathcal{E}_t^E$ may have repeated elements.

Each pattern is selected at least once (EDR phase 1), 
and subsequent selections
(EDR phase 2) depend on its output error. 
Suppose that the number of
presentations in EDR phase 2 
for a pattern $i$, at iteration $t$, is expressed as $d_t(i)$. 
Therefore, the equivalent distribution $D_t(i)$ is:
$$
D_{t}(i) = \frac{1+d_t(i)}{Z_t},
$$
where (Section  \ref{SEC-ESTIMATING-NE})
$$
Z_t=p + p_t^E \approx 
\int\limits_{0}^{n_E}
\int\limits_{e(j)}^{e_t(p)}
g_t(e)\,de\,dj.
$$
When the optimal hyperplane is reached (uniqueness and minimum guaranteed; 
see Vapnik \cite{VAPNIK9801}, Theorems 10.1 and 10.2),
the set $\mathcal{E}_t$ becomes fixed, since the order and 
values of its elements ($e_t(i)$) no longer change.
In this situation, $d_t(i)$, $g_t(e)$ and $Z_t$ are fixed and the 
distribution $D_t(i)$ converges. This steady state is 
indicated by dropping the index $t$, as in the statement of the theorem.
\end{proof}
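The counting argument in the proof can be made concrete with a short sketch for a linear comparison function (all names are illustrative): each pattern receives one phase-1 presentation plus one phase-2 presentation per threshold level $j\,e_{\max}/n_E$ that its error reaches.

```python
import numpy as np

def edr_distribution(errors, n_E):
    """Equivalent selection distribution D(i) for a linear comparison
    function (a sketch): pattern i gets 1 presentation (phase 1) plus
    one more for each threshold j * e_max / n_E, j = 1..n_E, that its
    error reaches (phase 2)."""
    e = np.asarray(errors, dtype=float)
    thresholds = np.arange(1, n_E + 1) * e.max() / n_E
    d = (e[:, None] >= thresholds[None, :]).sum(axis=1)   # d(i)
    Z = len(e) + d.sum()                                  # Z = p + p^E
    return (1 + d) / Z

# Illustrative errors: d = [0, 1, 1, 3, 4], so Z = 5 + 9 = 14.
D = edr_distribution([0.1, 0.4, 0.4, 0.9, 1.0], n_E=4)
assert np.isclose(D.sum(), 1.0)
```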
%-----------------------------------------------------------

%----------------------------------------------------
\subsection{Algorithm details}
%----------------------------------------------------

The SVM-EDR algorithm is based on the dual problem of SOR, 
since this formulation contains no equality
constraint and updates one Lagrange multiplier at a time 
(Section \ref{CAP-SEC-TRAIN-SOR}). 
The rules for updating the Lagrange multipliers and
the cache of errors are changed to 
the representation described in Section \ref{SEC-EDR-SVM-ERROR},
eliminating the class-label dependence of the output error. 

The output error for the pattern $\mathbf{x}_i$ at iteration 
$t$ is given by
%----------
$$
e_t^{'}(i) = - y_i e_t(i) = -y_i(f(\mathbf{x}_i) - y_i) 
$$
%----------
\begin{equation}
e_t^{'}(i) =  -y_i f(\mathbf{x}_i) + 1,
\end{equation}
%----------
where $y_i$ is the target for the pattern $\mathbf{x}_i$ and
the SVM output is $f(\mathbf{x}_i)$.
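A tiny illustrative sketch of this error: it is positive while the functional margin $y_i f(\mathbf{x}_i)$ is below the target value of 1, and negative beyond the margin.

```python
def output_error(f_xi, y_i):
    """e'(i) = -y_i (f(x_i) - y_i) = 1 - y_i f(x_i): positive while the
    functional margin y_i f(x_i) is below the target value 1."""
    return 1.0 - y_i * f_xi

assert output_error(1.0, 1.0) == 0.0     # exactly on the margin
assert output_error(-2.0, 1.0) == 3.0    # misclassified: large positive error
assert output_error(-2.0, -1.0) == -1.0  # beyond the margin: negative error
```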

Using this expression in the Lagrange multiplier 
update rule (Equation \ref{EQ-ALPHA-UPDATE-SOR1}) results in:
$$
a_{t+1}(i) = a_t(i) - 
    \frac{y_i}{K_{ii}+1}e_t(i) 
     = a_t(i) - \frac{y_i}{(K_{ii}+1)}\frac{e_t^{'}(i)}{-y_i}
$$
%----------
\begin{equation}
a_{t+1}(i) = a_t(i) + \frac{e_t^{'}(i)}{K_{ii}+1}.
\end{equation}
%----------
Errors are kept in a cache (Equation \ref{EQ-ERROR-UPDATE-SOR1}), 
with the following update expression:
$$
e_{t+1}(j) = e_t(j) + \Delta a_{t+1}(i) y_i (K_{ij} + 1)
$$
$$
\frac{ e_{t+1}^{'}(j) }{-y_j} = \frac{ e_t^{'}(j) }{-y_j} + \Delta a_{t+1}(i) y_i (K_{ij} + 1)
$$
\begin{equation}
e_{t+1}^{'}(j) = e_t^{'}(j) - \Delta a_{t+1}(i) y_i y_j(K_{ij} + 1),
\end{equation}
%
where $\Delta a_{t+1}(i) = a_{t+1}(i) - a_t(i)$ is the change just applied
to pattern $i$. Note that this rule yields $e_{t+1}^{'}(i) = 0$, as
expected after an exact coordinate step.
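A single optimization step, combining the multiplier update with the maintenance of the error cache, can be sketched as follows (illustrative names; the sign of the cache update is chosen so that it matches a direct recomputation of $e'(j)$ after the step, driving $e'(i)$ to zero; box constraints are omitted):

```python
import numpy as np

def edr_step(i, a, ep, y, K):
    """One SVM-EDR coordinate step on pattern i (a sketch).  The cache
    ep[j] = 1 - y_j f(x_j) is kept consistent with the new multipliers;
    K is the augmented kernel matrix, K[i, j] = k(x_i, x_j) + 1."""
    da = ep[i] / K[i, i]          # Δa(i) = e'(i) / (K_ii + 1)
    a[i] += da
    # Cache update: f changes by da * y_i * K[i, j] at every x_j, so
    # e'(j) decreases by da * y_i * y_j * K[i, j] (in-place update).
    ep -= da * y[i] * y * K[i]
    return da
```

After the step, \texttt{ep[i]} is exactly zero, and the whole cache agrees with recomputing $1 - y_j f(\mathbf{x}_j)$ from scratch.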

The computational cost of the algorithm depends on four main operations:
%
\begin{itemize}
	\item filling the kernel cache, with cost $O(p^2m)$, or 
				$O(qpm)$ if chunking \cite{Vapnik92b} is used;
	\item presenting all vectors of the $Z$ set for optimization, with
	      computational cost $O(Zp)$;
	\item determining the maximum and minimum errors, with cost $O(p)$; and
	\item generating the new EDR set ($Z$), an operation with
	 			cost equal to $O(n_Ep)$.
\end{itemize}
%
Assuming that the kernel matrix is in cache, the cost per iteration is 
%
\begin{equation}
	O(Zp) + O(p) + O(n_Ep).
\end{equation}
%
When $Z \approx p$, the computational cost can be approximated by
$O(p^2)$, and when $Z \gg p$ this cost becomes $O(Zp)$. This is
smaller than the cost of SMO (if $Z < p^2$), but it is important
to note that the SVM-EDR algorithm optimizes one Lagrange multiplier at a 
time, whereas SMO optimizes two. 
Another difference is related to the precision of the Lagrange multipliers:
it was observed that SVM-EDR requires smaller tolerance values than SMO
for proper convergence, increasing the training time.
Since SVM-EDR relies on the error distribution and not on the KKT conditions, 
it is more susceptible to the spatial distribution of the training set
than SMO. Thus, even though computational costs can be stated, it is not
easy to estimate which algorithm runs faster. 

%----------------------------------------
\section{Simulation}

Three experiments were carried out to evaluate the performance of SVM-EDR.
The first one is a simulation using a 2D example, intended only to show the decision 
surface generated by SVM-EDR. The second one is an example
with dimensionality equal to 60, and the third experiment focuses on the 
SVM-EDR algorithm, exhibiting details such as the choice of SVM-EDR parameters, 
convergence, generalization and simulation time.
All simulations
were carried out on a Pentium III 750\,MHz with 128\,MB of memory, 1982 MIPS
(Dhrystone), 1007 MFLOPS (Whetstone), running Windows 2000.

\subsection{First experiment}

For the first experiment, a training set with 250 two-dimensional 
vectors was used, with 100 vectors belonging to class ($-1$) and
150 vectors to class ($+1$). 
The ($-1$) class has its
vectors distributed around the centers (1,5) and (5,1),
according to a Gaussian distribution with standard deviation 0.9.
The ($+1$) class has its centers around (1,1), (3,3) and (5,5),
with the same standard deviation and distribution used for class
($-1$). 
The EDR parameter ($n_E$) was set to 12 after some trials, and a
quadratic comparison function was used. The same parameters were
employed for SVM-EDR and SMO: $C = 5$ and an RBF kernel with variance equal
to 2.  
The simulation was repeated 30 times. 
During the generalization test, 2000 vectors were used, generated according to
the same distributions defined before. 
Table \ref{TAB-COMP-EDR1} summarizes 
the results for this experiment, and Figure \ref{FIG-EXPEDR-1}
presents the separation hyperplanes.
% ----------------------------------------------------------------
\begin{table}[b]
\centering
\caption{Results for the experiment 1, using SVMBR as training
				program}
\label{TAB-COMP-EDR1}
\begin{tabular}{|l|c|c|} \hline \hline
              & SVM-EDR       		& SMO       \\ \hline
Number of SVs & 38.00 $\pm$ 0.00			& 38.06 $\pm$ 0.25   \\
Gener. (\%)		& 93.45 $\pm$ 0.00 & 93.45 $\pm$ 0.00   \\
Iterations 		& 58.50 $\pm$ 1.01		& 137.90 $\pm$ 32.94 \\
Time          & 0.90 $\pm$ 0.06    & 1.24 $\pm$ 0.09   \\
Z size        & 1170.00 $\pm$ 0.00 & --- \\
 \hline \hline
\end{tabular}
\end{table}
% ----------------------------------------------------------------
\subsection{Second experiment}

The Sonar database \cite{SonarDB8801} was used for this experiment.
The task is to discriminate between sonar signals bounced
off a metal cylinder and those bounced off a roughly cylindrical rock.
There are 111 patterns obtained by bouncing sonar
signals off a metal cylinder at various angles and under various
conditions and 97 patterns obtained from
rocks under similar conditions.  The transmitted sonar signal is a
frequency-modulated chirp, rising in frequency.  The data set contains
signals obtained from a variety of different aspect angles, spanning 90
degrees for the cylinder and 180 degrees for the rock.
Each pattern is a set of 60 numbers in the range 0.0 to 1.0.  Each number
represents the energy within a particular frequency band, integrated over
a certain period of time.  The integration limits for higher frequencies
occur later in time, since these frequencies are transmitted later during
the chirp.

The combined set of 208 cases
is divided randomly into 13 disjoint sets with 16 cases in each.  For each
experiment, 12 of these sets are used as training data, while the 13th is
reserved for testing.  The experiment is repeated 13 times so that every
case appears once as part of a test set.  The reported performance is an
average over the entire set of 13 different test sets, each run 10 times.

The EDR parameter ($n_E$) was set to 6 after some trials, and a
quadratic comparison function was used. The same parameters were
employed for SVM-EDR and SMO: $C = 1$, an RBF kernel with variance equal
to 1, and a normalization factor (P3, see Appendix \ref{SVMBR-PROGRAMA}) 
equal to $0.5$.  
Table \ref{TAB-COMP-EDR12} summarizes the results for this experiment.

% ----------------------------------------------------------------
\begin{table}[b]
\centering
\caption{Results for the experiment 2 (Sonar), using SVMBR as training
				program}
\label{TAB-COMP-EDR12}
\begin{tabular}{|l|c|c|} \hline \hline
              & SVM-EDR       		 & SMO       \\ \hline
Number of SVs & 152.16 $\pm$  2.92 & 152.55 $\pm$  3.02   \\
Gener. (\%)		&  86.63 $\pm$  9.19 &  87.02 $\pm$  9.35   \\
Iterations 		& 181.17 $\pm$ 30.06 &  74.11 $\pm$ 10.15 \\
Time          &   0.92 $\pm$  0.66 &   0.95 $\pm$ 0.02   \\
Z size        & 516.57 $\pm$ 46.94 & --- \\
 \hline \hline
\end{tabular}
\end{table}
% ----------------------------------------------------------------
%EDR size(alphaf,2) 100*nwell/total smo_cputimef numiterf  zs
%       152.16       86.635      0.92046       181.17       516.57
%       2.9223       9.1999     0.066155       30.058       46.942  
% ----------------------------------------------------------------
%SMO size(alphaf,2) 100*nwell/total smo_cputimef numiterf 
%      152.55       87.019      0.95392       74.115
%      3.0246       9.3585     0.024762       10.154
       
\subsection{Third experiment}

For the third experiment, the Adult database \cite{UCI1998} with $1605$ 
vectors was used (please, refer to Section \ref{SEC-SVMKM-ADULTDATABASE} for details
about Adult database) and only SVM-EDR training was performed.

The number of data set scans ($n_E$) was varied from 1 to 15, and 
the comparison function exponent from 0.5 to 3.0 in increments of 0.5.
Five features were taken into account: training time,
number of iterations, number of support vectors,
generalization and Z (size of the virtual training
set generated by SVM-EDR). 

Two plots are presented for each feature: a 3D plot showing
its surface and a 2D plot 
with a color bar, for better comprehension
(Figures \ref{FIG-EDR-S1-TEMPO2D}, \ref{FIG-EDR-S1-NUMIT2D}, 
\ref{FIG-EDR-S1-GENER2D}, \ref{FIG-EDR-S1-ZSIZE2D} and \ref{FIG-EDR-S1-NUMSV2D}).
All values can be found in Tables \ref{TAB-EDR-S1-1}, \ref{TAB-EDR-S1-2} and  \ref{TAB-EDR-S1-3}.
%Only mean values are represented in these tables since all properties had a very small 
%standard deviation after 10 simulations.

SVM-EDR convergence is treated in the last five figures.
First, the evolution of the error distribution for the $(+1)$
and $(-1)$ classes is depicted in Figures \ref{FIG-EDR-ERR-A-EV}
and \ref{FIG-EDR-ERR-B-EV}. Next, Figures \ref{FIG-EDR-ZSIZE-EV-A}
and \ref{FIG-EDR-ZSIZE-EV-B} show how the size of the
virtual training set evolves. Finally, the time consumed in each
iteration is shown in Figure \ref{FIG-EDR-TIME-EV}.

For comparison, the time spent by SMO was $7.07 \pm 0.37$\,s, with
a generalization of $84.78 \pm 0.00$\,\% and $675 \pm 0.00$ 
support vectors (the training was repeated ten times).

% SMO TRAIN TIME [7.97 6.78 7 7.02 6.73 7.24 7.13 7.1 7.11 6.63]     
         
\section{Discussions}

\subsection{Separation hyperplanes}

The goal of the first experiment is to compare the separation hyperplanes generated
by SMO and SVM-EDR. As shown in Figure \ref{FIG-EXPEDR-1}, the hyperplanes
are very similar, with the same support vectors. Small 
differences in the Lagrange multipliers may occur as a consequence
of the two training parameters that control the algorithm 
precision, denoted by \cmd{epsilon} and \cmd{tolerance} 
\cite{Platt98a,Platt98b}.
%
The first one is related to the bounds, indicating when 
a Lagrange multiplier (LM) is 
considered to be 0 (if $\mathrm{LM} < \mathrm{eps}$) or 
$C$ (if $\mathrm{LM} > C - \mathrm{eps}$). The second one is used to
control the overall precision and to verify the KKT conditions 
in the SMO algorithm.

Thus, small differences between corresponding LMs are expected
but within a specified precision given by \cmd{eps} and \cmd{tol}.

The results for SVM-EDR and SMO in the second experiment 
are very similar as well (Table \ref{TAB-COMP-EDR12}),
with no significant differences. 

\subsection{Convergence}

SVM-EDR implements a boosted gradient ascent algorithm 
in which vectors related to large errors are presented
more frequently. The idea is to speed up the training process
by avoiding vectors that have already been learned. 

As a consequence, this behavior has a large initial computational
cost, as shown in Figure \ref{FIG-EDR-TIME-EV}. 
The reason is that some vectors become associated with null Lagrange
multipliers (in the SMO case) or with small errors (for SVM-EDR)
only after at least one iteration. 
These associations avoid unnecessary loops over vectors that carry no
significant information for the training process.
Moreover, cache allocation and the
initialization of control structures also take place at the beginning,
slowing down the initial phase of the training process.

Another way to observe the convergence process is to analyze the
evolution of the error probability (Figures \ref{FIG-EDR-ERR-A-EV}
and \ref{FIG-EDR-ERR-B-EV}). After several initial rearrangements,
the error distribution quickly approaches its final 
shape. This convergence in distribution is 
essential to the convergence of the algorithm.

As stated in Theorem \ref{THEO-EDR-1}, when convergence is 
reached, the EDR set becomes stable, with a fixed number of 
vectors in a fixed order. Figures \ref{FIG-EDR-ZSIZE-EV-A}
and \ref{FIG-EDR-ZSIZE-EV-B} show the EDR set size for 
vectors belonging to classes $+1$ and $-1$, respectively. One 
weakness of most SVM algorithms also appears
here: the difficulty of improving the final solution. For both classes,
after about sixty percent of the training, several Lagrange
multipliers are already very close to their optimal values.
However, since the SVM decision hyperplane is given by a large
sum over all SVs weighted by the Lagrange multipliers, even
small differences in the Lagrange multipliers can generate
unsatisfactory results. 


\subsection{Training time}

For almost all runs of the algorithm, the time consumed was between 8.06 and 10.54
seconds, except for the training with $n_E = 12$ and
comparison function power equal to 1, when the training time was 13.21\,s.
However, this value is associated with the largest standard deviation among
all training times, indicating a possible outlier.

The smallest training time occurred for $n_E = 1$ and comparison function power equal
to 1, with $8.06 \pm 0.03$\,s. An average over all training times gives 
$9.57 \pm 0.60$\,s, indicating a small sensitivity to changes
in $n_E$ or in the comparison function power. Even so, all training times were
larger than that of SMO for the Adult data set ($7.07 \pm 0.37$\,s).

\subsection{Number of iterations and Z set size}

Figure \ref{FIG-EDR-S1-NUMIT2D} shows that
the number of iterations decreases when $n_E$ or the comparison 
function power increases. 
%
When a larger $n_E$ is chosen, the same vector may
be presented several times, mainly if it is associated 
with a large error. This boosting characteristic leads the
Lagrange multiplier under optimization to a faster convergence, increasing
the iteration time but decreasing the number of iterations.
On the other hand, an increase in the comparison function power favors 
the presentation of vectors associated with large errors, since 
the error distribution is modified.
Again, several presentations of vectors related to large errors occur, 
generating fewer iterations but consuming more time.

The $Z$ set size (Figure \ref{FIG-EDR-S1-ZSIZE2D}) may work as
an indirect time measure, since it becomes large
when $n_E$ or the comparison function power increases. 
 
The slope depicted in Figure \ref{FIG-EDR-S1-NUMIT2D} may change
depending on the data set, but a similar behavior is expected in general.

\subsection{Generalization and number of support vectors}

The changes in generalization are very small, showing
the algorithm's convergence and its independence from the
training parameters.
An average over all values gives $84.78 \pm 0.01$, indicating
a very stable generalization.

The same reasoning is valid for the number of support vectors found,
which lies in a narrow range (682.4 to 683.9) with a small standard
deviation. Averaging over all values gives
$682.95 \pm 0.26$ support vectors or,
rounding to the nearest integer, 683 support vectors.

%----------------------------------------------------------------------
\section{Conclusion}
% ----------------------------------------------------------------

A new algorithm for training SVMs, called SVM-EDR,
was presented in this chapter.
The SVM-EDR training algorithm is a Boosting algorithm
that uses a deterministic, error-based schedule
to re-sample the data set. Patterns related to large
errors have their Lagrange multipliers updated more
frequently, according to a gradient ascent strategy.
SVM-EDR solves the dual problem without any assumption about 
support vectors or the Karush-Kuhn-Tucker (KKT) conditions.
The generalization and the number of support vectors are similar to
the ones found by SMO.


%--------------------
\begin{figure}[!]
\centering
\includegraphics[scale=0.45]{figs/edrhp_s1.ps}
\includegraphics[scale=0.45]{figs/smohp_s1.ps}
\caption[Training set and decision boundaries for  experiment 1 using SVM-EDR]
        {Training set and decision boundaries in the input space 
         for  experiment 1 using SVM-EDR (first graph) and
         SMO (second graph) training algorithms. Support
         vectors are represented by circled points.} 
\label{FIG-EXPEDR-1}
\end{figure}
%--------------------
%\begin{figure}[!]
%\centering
%\includegraphics[scale=0.4]{figs/smohp2.ps}
%\caption{Decision boundaries in the input space for  experiment 2
%         using SMO training algorithm.} 
%\label{FIG-EXPEDR3-3}
%\end{figure}
%---------------------
%---------------------
%\section{Discussion}
%\label{SEC-DISCUSSION}
%----------------------------------------------------------------------

%--------------------
\begin{figure}[!tbp]
\centering
\includegraphics[scale=0.6]{figs/tempoave3d.ps}
\includegraphics[scale=0.55]{figs/tempoave2d.ps}
\caption[Training time as a function of $n_E \times e_i^x$]
{\label{FIG-EDR-S1-TEMPO2D}
Training time, in seconds, as a function of $n_E \times e_i^x$ 
for Adult database - 3D (first graph) and 2D (second graph) representations.} 
\end{figure}
%---------------------
\begin{figure}[!tbp]
\centering
\includegraphics[scale=0.6]{figs/numitave3d.ps}
\includegraphics[scale=0.55]{figs/numitave2d.ps}
\caption[Number of iterations as a function of $n_E \times e_i^x$]
{\label{FIG-EDR-S1-NUMIT2D}
Number of iterations as a function of $n_E \times e_i^x$ 
for Adult database - 3D (first graph) and 2D (second graph) representations.} 
\end{figure}
%---------------------
\begin{figure}[!tbp]
\centering
\includegraphics[scale=0.6]{figs/zsizeave3d.ps}
\includegraphics[scale=0.55]{figs/zsizeave2d.ps}
\caption[Virtual set generated as a function of $n_E \times e_i^x$]
{\label{FIG-EDR-S1-ZSIZE2D}
Virtual set generated as a function of $n_E \times e_i^x$ 
for Adult database - 3D (first graph) and 2D (second graph) representations.} 
\end{figure}
%---------------------
\begin{figure}[!tbp]
\centering
\includegraphics[scale=0.6]{figs/generave3d.ps}
\includegraphics[scale=0.55]{figs/generave2d.ps}
\caption[Generalization as a function of $n_E \times e_i^x$]
{\label{FIG-EDR-S1-GENER2D}
Generalization, in percentage, as a function of $n_E \times e_i^x$ 
for Adult database - 3D (first graph) and 2D (second graph) representations.} % ---------------------------------
\end{figure}
\begin{figure}[!tbp]
\centering
\includegraphics[scale=0.6]{figs/numsvave3d.ps}
\includegraphics[scale=0.55]{figs/numsvave2d.ps}
\caption[Number of SVs as a function of $n_E \times e_i^x$]
{\label{FIG-EDR-S1-NUMSV2D}
Number of SVs as a function of $n_E \times e_i^x$ 
for Adult database - 3D (first graph) and 2D (second graph) representations.} 
\end{figure}
%---------------------
% ----------------------------------------------------------------
\begin{table}
\centering
\caption{\label{TAB-EDR-S1-1}Results for simulation 2}
\begin{tabular}{|c|c|c|c|c|c|} \hline
\hline \multicolumn{6}{|c|}{\small Power = 0.5} \\ \hline 
\small EDR & \small Time & \small Iter & \small SV & \small Z & \small Gener \\ \hline 
\scriptsize 1 & \scriptsize 8.13 $\pm$ 0.04 & \scriptsize 602.70 $\pm$ 1203.41 & 
 \scriptsize 682.80 $\pm$ 0.16 & \scriptsize 1607.00 $\pm$ 0.00 & 
 \scriptsize 84.79 $\pm$ 0.00 \\ \hline 
\scriptsize 2 & \scriptsize 10.01 $\pm$ 1.03 & \scriptsize 567.80 $\pm$ 132.36 & 
 \scriptsize 682.50 $\pm$ 0.25 & \scriptsize 1607.00 $\pm$ 0.00 & 
 \scriptsize 84.77 $\pm$ 0.00 \\ \hline 
\scriptsize 3 & \scriptsize 9.50 $\pm$ 0.55 & \scriptsize 277.50 $\pm$ 143.85 & 
 \scriptsize 683.00 $\pm$ 0.00 & \scriptsize 2121.00 $\pm$ 0.00 & 
 \scriptsize 84.78 $\pm$ 0.00 \\ \hline 
\scriptsize 4 & \scriptsize 9.96 $\pm$ 0.02 & \scriptsize 283.90 $\pm$ 104.29 & 
 \scriptsize 682.70 $\pm$ 0.21 & \scriptsize 2777.20 $\pm$ 0.16 & 
 \scriptsize 84.78 $\pm$ 0.00 \\ \hline 
\scriptsize 5 & \scriptsize 10.11 $\pm$ 0.41 & \scriptsize 288.70 $\pm$ 142.01 & 
 \scriptsize 682.60 $\pm$ 0.24 & \scriptsize 3159.00 $\pm$ 0.00 & 
 \scriptsize 84.78 $\pm$ 0.00 \\ \hline 
\scriptsize 6 & \scriptsize 9.24 $\pm$ 1.05 & \scriptsize 190.70 $\pm$ 17.41 & 
 \scriptsize 682.60 $\pm$ 0.24 & \scriptsize 3681.10 $\pm$ 0.09 & 
 \scriptsize 84.78 $\pm$ 0.00 \\ \hline 
\scriptsize 7 & \scriptsize 9.83 $\pm$ 0.46 & \scriptsize 194.40 $\pm$ 2.84 & 
 \scriptsize 682.40 $\pm$ 0.24 & \scriptsize 4111.00 $\pm$ 0.00 & 
 \scriptsize 84.78 $\pm$ 0.00 \\ \hline 
\scriptsize 8 & \scriptsize 10.01 $\pm$ 0.03 & \scriptsize 186.80 $\pm$ 18.16 & 
 \scriptsize 682.80 $\pm$ 0.16 & \scriptsize 4542.10 $\pm$ 0.09 & 
 \scriptsize 84.78 $\pm$ 0.00 \\ \hline 
\scriptsize 9 & \scriptsize 10.33 $\pm$ 0.04 & \scriptsize 144.60 $\pm$ 40.84 & 
 \scriptsize 683.00 $\pm$ 0.00 & \scriptsize 5092.10 $\pm$ 0.09 & 
 \scriptsize 84.78 $\pm$ 0.00 \\ \hline 
\scriptsize 10 & \scriptsize 10.13 $\pm$ 0.55 & \scriptsize 145.70 $\pm$ 35.01 & 
 \scriptsize 682.50 $\pm$ 0.25 & \scriptsize 5566.00 $\pm$ 0.00 & 
 \scriptsize 84.78 $\pm$ 0.00 \\ \hline 
\scriptsize 11 & \scriptsize 10.40 $\pm$ 0.02 & \scriptsize 144.30 $\pm$ 9.01 & 
 \scriptsize 682.60 $\pm$ 0.24 & \scriptsize 5954.00 $\pm$ 0.00 & 
 \scriptsize 84.78 $\pm$ 0.00 \\ \hline 
\scriptsize 12 & \scriptsize 9.67 $\pm$ 3.51 & \scriptsize 113.00 $\pm$ 6.80 & 
 \scriptsize 683.00 $\pm$ 0.00 & \scriptsize 6499.30 $\pm$ 0.21 & 
 \scriptsize 84.79 $\pm$ 0.00 \\ \hline 
\scriptsize 13 & \scriptsize 9.75 $\pm$ 0.01 & \scriptsize 111.50 $\pm$ 21.45 & 
 \scriptsize 682.90 $\pm$ 0.09 & \scriptsize 6971.00 $\pm$ 0.00 & 
 \scriptsize 84.78 $\pm$ 0.00 \\ \hline 
\scriptsize 14 & \scriptsize 9.10 $\pm$ 1.32 & \scriptsize 114.90 $\pm$ 2.69 & 
 \scriptsize 682.70 $\pm$ 0.21 & \scriptsize 7400.00 $\pm$ 0.00 & 
 \scriptsize 84.78 $\pm$ 0.00 \\ \hline 
\scriptsize 15 & \scriptsize 9.83 $\pm$ 0.01 & \scriptsize 97.60 $\pm$ 6.64 & 
 \scriptsize 683.00 $\pm$ 0.00 & \scriptsize 7907.90 $\pm$ 0.09 & 
 \scriptsize 84.79 $\pm$ 0.00 \\ \hline 
\hline \multicolumn{6}{|c|}{\small Power = 1.0} \\ \hline 
\small EDR & \small Time & \small Iter & \small SV & \small Z & \small Gener \\ \hline 
\scriptsize 1 & \scriptsize 8.06 $\pm$ 0.03 & \scriptsize 603.90 $\pm$ 1352.09 & 
 \scriptsize 682.80 $\pm$ 0.16 & \scriptsize 1607.00 $\pm$ 0.00 & 
 \scriptsize 84.79 $\pm$ 0.00 \\ \hline 
\scriptsize 2 & \scriptsize 9.75 $\pm$ 0.02 & \scriptsize 279.40 $\pm$ 78.64 & 
 \scriptsize 683.00 $\pm$ 0.20 & \scriptsize 2266.00 $\pm$ 0.00 & 
 \scriptsize 84.78 $\pm$ 0.00 \\ \hline 
\scriptsize 3 & \scriptsize 9.81 $\pm$ 0.11 & \scriptsize 275.20 $\pm$ 81.16 & 
 \scriptsize 682.90 $\pm$ 0.29 & \scriptsize 2902.90 $\pm$ 0.09 & 
 \scriptsize 84.78 $\pm$ 0.00 \\ \hline 
\scriptsize 4 & \scriptsize 10.03 $\pm$ 0.19 & \scriptsize 189.80 $\pm$ 36.56 & 
 \scriptsize 682.60 $\pm$ 0.24 & \scriptsize 3719.00 $\pm$ 0.00 & 
 \scriptsize 84.78 $\pm$ 0.00 \\ \hline 
\scriptsize 5 & \scriptsize 9.66 $\pm$ 0.52 & \scriptsize 184.80 $\pm$ 7.96 & 
 \scriptsize 683.00 $\pm$ 0.00 & \scriptsize 4384.00 $\pm$ 0.00 & 
 \scriptsize 84.77 $\pm$ 0.00 \\ \hline 
\scriptsize 6 & \scriptsize 9.81 $\pm$ 0.19 & \scriptsize 138.90 $\pm$ 30.69 & 
 \scriptsize 683.00 $\pm$ 0.00 & \scriptsize 5126.00 $\pm$ 0.00 & 
 \scriptsize 84.79 $\pm$ 0.00 \\ \hline 
\scriptsize 7 & \scriptsize 9.25 $\pm$ 0.72 & \scriptsize 124.90 $\pm$ 7.49 & 
 \scriptsize 683.90 $\pm$ 0.09 & \scriptsize 5871.00 $\pm$ 0.00 & 
 \scriptsize 84.77 $\pm$ 0.00 \\ \hline 
\scriptsize 8 & \scriptsize 9.81 $\pm$ 0.17 & \scriptsize 110.00 $\pm$ 13.60 & 
 \scriptsize 683.00 $\pm$ 0.00 & \scriptsize 6571.00 $\pm$ 0.00 & 
 \scriptsize 84.79 $\pm$ 0.00 \\ \hline 
\scriptsize 9 & \scriptsize 8.96 $\pm$ 1.52 & \scriptsize 94.60 $\pm$ 8.84 & 
 \scriptsize 683.10 $\pm$ 0.29 & \scriptsize 7292.00 $\pm$ 0.00 & 
 \scriptsize 84.78 $\pm$ 0.00 \\ \hline 
\scriptsize 10 & \scriptsize 9.39 $\pm$ 1.10 & \scriptsize 92.50 $\pm$ 8.45 & 
 \scriptsize 683.00 $\pm$ 0.00 & \scriptsize 7998.00 $\pm$ 0.00 & 
 \scriptsize 84.78 $\pm$ 0.00 \\ \hline 
\scriptsize 11 & \scriptsize 9.84 $\pm$ 0.25 & \scriptsize 80.80 $\pm$ 7.56 & 
 \scriptsize 682.80 $\pm$ 0.16 & \scriptsize 8763.00 $\pm$ 0.00 & 
 \scriptsize 84.77 $\pm$ 0.00 \\ \hline 
\scriptsize 12 & \scriptsize 13.21 $\pm$ 36.13 & \scriptsize 81.60 $\pm$ 13.64 & 
 \scriptsize 683.00 $\pm$ 0.00 & \scriptsize 9446.00 $\pm$ 0.00 & 
 \scriptsize 84.77 $\pm$ 0.00 \\ \hline 
\scriptsize 13 & \scriptsize 9.63 $\pm$ 0.01 & \scriptsize 71.50 $\pm$ 10.65 & 
 \scriptsize 683.00 $\pm$ 0.20 & \scriptsize 10178.60 $\pm$ 0.84 & 
 \scriptsize 84.78 $\pm$ 0.00 \\ \hline 
\scriptsize 14 & \scriptsize 9.73 $\pm$ 0.01 & \scriptsize 71.70 $\pm$ 10.61 & 
 \scriptsize 683.20 $\pm$ 0.16 & \scriptsize 10898.00 $\pm$ 0.00 & 
 \scriptsize 84.78 $\pm$ 0.00 \\ \hline 
\scriptsize 15 & \scriptsize 9.74 $\pm$ 0.05 & \scriptsize 62.50 $\pm$ 1.45 & 
 \scriptsize 683.00 $\pm$ 0.00 & \scriptsize 11633.70 $\pm$ 0.41 & 
 \scriptsize 84.78 $\pm$ 0.00 \\ \hline 
  \hline 
\end{tabular}
\end{table}
% -------------------------------

% ----------------------------------------------------------------
\begin{table}
\centering
\caption{\label{TAB-EDR-S1-2}Results for simulation 2 (continued)}
\begin{tabular}{|c|c|c|c|c|c|} \hline
\hline \multicolumn{6}{|c|}{\small Power = 1.5} \\ \hline 
\small EDR & \small Time & \small Iter & \small SV & \small Z & \small Gener \\ \hline 
\scriptsize 1 & \scriptsize 8.72 $\pm$ 0.94 & \scriptsize 578.10 $\pm$ 758.89 & 
 \scriptsize 682.40 $\pm$ 0.24 & \scriptsize 1607.00 $\pm$ 0.00 & 
 \scriptsize 84.78 $\pm$ 0.00 \\ \hline 
\scriptsize 2 & \scriptsize 9.66 $\pm$ 0.78 & \scriptsize 277.60 $\pm$ 137.44 & 
 \scriptsize 683.10 $\pm$ 0.29 & \scriptsize 2594.00 $\pm$ 0.00 & 
 \scriptsize 84.78 $\pm$ 0.00 \\ \hline 
\scriptsize 3 & \scriptsize 9.71 $\pm$ 0.01 & \scriptsize 184.50 $\pm$ 19.05 & 
 \scriptsize 682.90 $\pm$ 0.09 & \scriptsize 3671.70 $\pm$ 0.21 & 
 \scriptsize 84.78 $\pm$ 0.00 \\ \hline 
\scriptsize 4 & \scriptsize 9.49 $\pm$ 0.24 & \scriptsize 140.50 $\pm$ 14.05 & 
 \scriptsize 683.10 $\pm$ 0.09 & \scriptsize 4678.80 $\pm$ 0.16 & 
 \scriptsize 84.78 $\pm$ 0.00 \\ \hline 
\scriptsize 5 & \scriptsize 9.19 $\pm$ 0.90 & \scriptsize 120.20 $\pm$ 15.36 & 
 \scriptsize 683.60 $\pm$ 0.44 & \scriptsize 5731.00 $\pm$ 0.00 & 
 \scriptsize 84.77 $\pm$ 0.00 \\ \hline 
\scriptsize 6 & \scriptsize 8.83 $\pm$ 1.17 & \scriptsize 94.70 $\pm$ 13.81 & 
 \scriptsize 683.00 $\pm$ 0.00 & \scriptsize 6765.30 $\pm$ 0.21 & 
 \scriptsize 84.78 $\pm$ 0.00 \\ \hline 
\scriptsize 7 & \scriptsize 8.61 $\pm$ 1.27 & \scriptsize 85.70 $\pm$ 16.61 & 
 \scriptsize 683.30 $\pm$ 0.21 & \scriptsize 7781.80 $\pm$ 0.16 & 
 \scriptsize 84.77 $\pm$ 0.00 \\ \hline 
\scriptsize 8 & \scriptsize 9.22 $\pm$ 1.19 & \scriptsize 82.70 $\pm$ 16.61 & 
 \scriptsize 683.50 $\pm$ 0.45 & \scriptsize 8797.60 $\pm$ 0.64 & 
 \scriptsize 84.76 $\pm$ 0.00 \\ \hline 
\scriptsize 9 & \scriptsize 9.83 $\pm$ 0.02 & \scriptsize 69.50 $\pm$ 6.05 & 
 \scriptsize 683.20 $\pm$ 0.16 & \scriptsize 9835.00 $\pm$ 0.20 & 
 \scriptsize 84.76 $\pm$ 0.00 \\ \hline 
\scriptsize 10 & \scriptsize 8.94 $\pm$ 1.45 & \scriptsize 66.80 $\pm$ 17.56 & 
 \scriptsize 683.30 $\pm$ 0.21 & \scriptsize 10887.00 $\pm$ 0.00 & 
 \scriptsize 84.76 $\pm$ 0.00 \\ \hline 
\scriptsize 11 & \scriptsize 8.86 $\pm$ 2.25 & \scriptsize 62.40 $\pm$ 11.24 & 
 \scriptsize 683.70 $\pm$ 0.21 & \scriptsize 11884.00 $\pm$ 0.00 & 
 \scriptsize 84.77 $\pm$ 0.00 \\ \hline 
\scriptsize 12 & \scriptsize 9.76 $\pm$ 0.21 & \scriptsize 55.80 $\pm$ 10.96 & 
 \scriptsize 683.30 $\pm$ 0.21 & \scriptsize 12930.50 $\pm$ 0.85 & 
 \scriptsize 84.76 $\pm$ 0.00 \\ \hline 
\scriptsize 13 & \scriptsize 9.61 $\pm$ 0.02 & \scriptsize 53.50 $\pm$ 7.25 & 
 \scriptsize 683.30 $\pm$ 0.21 & \scriptsize 13946.30 $\pm$ 0.21 & 
 \scriptsize 84.77 $\pm$ 0.00 \\ \hline 
\scriptsize 14 & \scriptsize 8.67 $\pm$ 1.63 & \scriptsize 53.00 $\pm$ 6.20 & 
 \scriptsize 683.10 $\pm$ 0.49 & \scriptsize 14974.50 $\pm$ 0.45 & 
 \scriptsize 84.76 $\pm$ 0.00 \\ \hline 
\scriptsize 15 & \scriptsize 8.93 $\pm$ 0.94 & \scriptsize 46.40 $\pm$ 9.04 & 
 \scriptsize 682.80 $\pm$ 0.36 & \scriptsize 16021.00 $\pm$ 0.00 & 
 \scriptsize 84.76 $\pm$ 0.00 \\ \hline 
 \hline \multicolumn{6}{|c|}{\small Power = 2.0} \\ 
\hline \small EDR & \small Time & \small Iter & \small SV & \small Z & \small Gener \\ \hline 
\scriptsize 1 & \scriptsize 9.58 $\pm$ 0.93 & \scriptsize 566.20 $\pm$ 388.56 & 
 \scriptsize 682.60 $\pm$ 0.24 & \scriptsize 1607.00 $\pm$ 0.00 & 
 \scriptsize 84.78 $\pm$ 0.00 \\ \hline 
\scriptsize 2 & \scriptsize 9.44 $\pm$ 0.62 & \scriptsize 273.00 $\pm$ 46.80 & 
 \scriptsize 683.00 $\pm$ 0.00 & \scriptsize 2745.00 $\pm$ 0.00 & 
 \scriptsize 84.78 $\pm$ 0.00 \\ \hline 
\scriptsize 3 & \scriptsize 9.97 $\pm$ 0.03 & \scriptsize 192.50 $\pm$ 65.65 & 
 \scriptsize 682.70 $\pm$ 0.21 & \scriptsize 3921.00 $\pm$ 0.00 & 
 \scriptsize 84.78 $\pm$ 0.00 \\ \hline 
\scriptsize 4 & \scriptsize 9.15 $\pm$ 0.94 & \scriptsize 140.10 $\pm$ 28.29 & 
 \scriptsize 683.00 $\pm$ 0.00 & \scriptsize 5089.00 $\pm$ 0.00 & 
 \scriptsize 84.78 $\pm$ 0.00 \\ \hline 
\scriptsize 5 & \scriptsize 9.47 $\pm$ 0.58 & \scriptsize 112.10 $\pm$ 5.29 & 
 \scriptsize 682.90 $\pm$ 0.09 & \scriptsize 6268.40 $\pm$ 0.24 & 
 \scriptsize 84.78 $\pm$ 0.00 \\ \hline 
\scriptsize 6 & \scriptsize 9.27 $\pm$ 1.15 & \scriptsize 95.70 $\pm$ 7.81 & 
 \scriptsize 683.00 $\pm$ 0.00 & \scriptsize 7442.00 $\pm$ 0.00 & 
 \scriptsize 84.78 $\pm$ 0.00 \\ \hline 
\scriptsize 7 & \scriptsize 8.88 $\pm$ 1.21 & \scriptsize 85.80 $\pm$ 18.96 & 
 \scriptsize 683.40 $\pm$ 0.24 & \scriptsize 8602.40 $\pm$ 0.24 & 
 \scriptsize 84.78 $\pm$ 0.00 \\ \hline 
\scriptsize 8 & \scriptsize 9.53 $\pm$ 0.70 & \scriptsize 73.30 $\pm$ 17.21 & 
 \scriptsize 683.00 $\pm$ 0.00 & \scriptsize 9793.00 $\pm$ 0.00 & 
 \scriptsize 84.78 $\pm$ 0.00 \\ \hline 
\scriptsize 9 & \scriptsize 9.87 $\pm$ 0.01 & \scriptsize 67.10 $\pm$ 16.49 & 
 \scriptsize 683.10 $\pm$ 0.09 & \scriptsize 10953.00 $\pm$ 0.00 & 
 \scriptsize 84.77 $\pm$ 0.00 \\ \hline 
\scriptsize 10 & \scriptsize 9.86 $\pm$ 0.01 & \scriptsize 58.40 $\pm$ 6.64 & 
 \scriptsize 683.00 $\pm$ 0.00 & \scriptsize 12136.00 $\pm$ 0.40 & 
 \scriptsize 84.78 $\pm$ 0.00 \\ \hline 
\scriptsize 11 & \scriptsize 9.84 $\pm$ 0.01 & \scriptsize 54.90 $\pm$ 5.29 & 
 \scriptsize 683.10 $\pm$ 0.09 & \scriptsize 13298.00 $\pm$ 0.00 & 
 \scriptsize 84.78 $\pm$ 0.00 \\ \hline 
\scriptsize 12 & \scriptsize 9.58 $\pm$ 0.01 & \scriptsize 52.30 $\pm$ 4.21 & 
 \scriptsize 683.30 $\pm$ 0.21 & \scriptsize 14477.20 $\pm$ 0.16 & 
 \scriptsize 84.77 $\pm$ 0.00 \\ \hline 
\scriptsize 13 & \scriptsize 9.59 $\pm$ 0.00 & \scriptsize 48.60 $\pm$ 4.64 & 
 \scriptsize 683.10 $\pm$ 0.09 & \scriptsize 15650.00 $\pm$ 0.00 & 
 \scriptsize 84.77 $\pm$ 0.00 \\ \hline 
\scriptsize 14 & \scriptsize 9.57 $\pm$ 0.01 & \scriptsize 45.90 $\pm$ 9.09 & 
 \scriptsize 683.10 $\pm$ 0.09 & \scriptsize 16831.90 $\pm$ 0.69 & 
 \scriptsize 84.76 $\pm$ 0.00 \\ \hline 
\scriptsize 15 & \scriptsize 9.62 $\pm$ 0.01 & \scriptsize 43.50 $\pm$ 1.45 & 
 \scriptsize 682.90 $\pm$ 0.09 & \scriptsize 17994.10 $\pm$ 2.89 & 
 \scriptsize 84.78 $\pm$ 0.00 \\ \hline 
 \hline 
\end{tabular}
\end{table}
% -------------------------------

% ----------------------------------------------------------------
\begin{table}
\centering
\caption{\label{TAB-EDR-S1-3}Results for simulation 2 (continued)}
\begin{tabular}{|c|c|c|c|c|c|} \hline
\hline \multicolumn{6}{|c|}{\small Power = 2.5} \\ \hline 
\small EDR & \small Time & \small Iter & \small SV & \small Z & \small Gener \\ \hline 
\scriptsize 1 & \scriptsize 9.81 $\pm$ 0.86 & \scriptsize 567.80 $\pm$ 286.16 & 
 \scriptsize 682.70 $\pm$ 0.21 & \scriptsize 1607.00 $\pm$ 0.00 & 
 \scriptsize 84.76 $\pm$ 0.00 \\ \hline 
\scriptsize 2 & \scriptsize 9.50 $\pm$ 1.25 & \scriptsize 288.30 $\pm$ 73.81 & 
 \scriptsize 682.60 $\pm$ 0.24 & \scriptsize 2840.00 $\pm$ 0.00 & 
 \scriptsize 84.77 $\pm$ 0.00 \\ \hline 
\scriptsize 3 & \scriptsize 9.93 $\pm$ 0.02 & \scriptsize 191.20 $\pm$ 38.56 & 
 \scriptsize 682.60 $\pm$ 0.24 & \scriptsize 4071.10 $\pm$ 0.09 & 
 \scriptsize 84.78 $\pm$ 0.00 \\ \hline 
\scriptsize 4 & \scriptsize 9.29 $\pm$ 0.98 & \scriptsize 141.20 $\pm$ 36.76 & 
 \scriptsize 682.90 $\pm$ 0.09 & \scriptsize 5317.20 $\pm$ 0.16 & 
 \scriptsize 84.78 $\pm$ 0.00 \\ \hline 
\scriptsize 5 & \scriptsize 9.32 $\pm$ 0.60 & \scriptsize 112.50 $\pm$ 13.05 & 
 \scriptsize 682.80 $\pm$ 0.16 & \scriptsize 6564.00 $\pm$ 0.00 & 
 \scriptsize 84.78 $\pm$ 0.00 \\ \hline 
\scriptsize 6 & \scriptsize 8.71 $\pm$ 1.58 & \scriptsize 95.40 $\pm$ 4.04 & 
 \scriptsize 683.00 $\pm$ 0.00 & \scriptsize 7810.20 $\pm$ 0.16 & 
 \scriptsize 84.78 $\pm$ 0.00 \\ \hline 
\scriptsize 7 & \scriptsize 9.18 $\pm$ 0.89 & \scriptsize 81.30 $\pm$ 7.61 & 
 \scriptsize 682.90 $\pm$ 0.09 & \scriptsize 9040.90 $\pm$ 0.09 & 
 \scriptsize 84.78 $\pm$ 0.00 \\ \hline 
\scriptsize 8 & \scriptsize 10.02 $\pm$ 0.01 & \scriptsize 71.20 $\pm$ 5.36 & 
 \scriptsize 682.90 $\pm$ 0.09 & \scriptsize 10296.60 $\pm$ 0.24 & 
 \scriptsize 84.77 $\pm$ 0.00 \\ \hline 
\scriptsize 9 & \scriptsize 9.82 $\pm$ 0.12 & \scriptsize 63.70 $\pm$ 2.81 & 
 \scriptsize 683.00 $\pm$ 0.00 & \scriptsize 11532.90 $\pm$ 0.09 & 
 \scriptsize 84.78 $\pm$ 0.00 \\ \hline 
\scriptsize 10 & \scriptsize 9.66 $\pm$ 1.47 & \scriptsize 58.20 $\pm$ 5.36 & 
 \scriptsize 683.00 $\pm$ 0.00 & \scriptsize 12790.70 $\pm$ 0.61 & 
 \scriptsize 84.79 $\pm$ 0.00 \\ \hline 
\scriptsize 11 & \scriptsize 9.58 $\pm$ 0.62 & \scriptsize 53.40 $\pm$ 3.44 & 
 \scriptsize 683.00 $\pm$ 0.00 & \scriptsize 14028.00 $\pm$ 0.00 & 
 \scriptsize 84.77 $\pm$ 0.00 \\ \hline 
\scriptsize 12 & \scriptsize 9.63 $\pm$ 0.01 & \scriptsize 50.00 $\pm$ 5.40 & 
 \scriptsize 683.00 $\pm$ 0.00 & \scriptsize 15285.00 $\pm$ 0.00 & 
 \scriptsize 84.78 $\pm$ 0.00 \\ \hline 
\scriptsize 13 & \scriptsize 9.66 $\pm$ 0.01 & \scriptsize 46.20 $\pm$ 4.16 & 
 \scriptsize 683.00 $\pm$ 0.00 & \scriptsize 16519.40 $\pm$ 0.84 & 
 \scriptsize 84.78 $\pm$ 0.00 \\ \hline 
\scriptsize 14 & \scriptsize 9.58 $\pm$ 0.01 & \scriptsize 42.10 $\pm$ 3.89 & 
 \scriptsize 683.00 $\pm$ 0.00 & \scriptsize 17771.30 $\pm$ 0.41 & 
 \scriptsize 84.78 $\pm$ 0.00 \\ \hline 
\scriptsize 15 & \scriptsize 9.55 $\pm$ 0.01 & \scriptsize 40.60 $\pm$ 2.84 & 
 \scriptsize 683.00 $\pm$ 0.00 & \scriptsize 19016.00 $\pm$ 0.00 & 
 \scriptsize 84.77 $\pm$ 0.00 \\ \hline 
\hline \multicolumn{6}{|c|}{\small Power = 3.0} \\ \hline 
\small EDR & \small Time & \small Iter & \small SV & \small Z & \small Gener \\ \hline 
\scriptsize 1 & \scriptsize 10.54 $\pm$ 0.05 & \scriptsize 581.00 $\pm$ 692.00 & 
 \scriptsize 682.80 $\pm$ 0.16 & \scriptsize 1607.00 $\pm$ 0.00 & 
 \scriptsize 84.79 $\pm$ 0.00 \\ \hline 
\scriptsize 2 & \scriptsize 10.30 $\pm$ 0.12 & \scriptsize 300.90 $\pm$ 171.09 & 
 \scriptsize 682.70 $\pm$ 0.21 & \scriptsize 2891.00 $\pm$ 0.00 & 
 \scriptsize 84.79 $\pm$ 0.00 \\ \hline 
\scriptsize 3 & \scriptsize 9.64 $\pm$ 0.57 & \scriptsize 187.90 $\pm$ 46.49 & 
 \scriptsize 682.70 $\pm$ 0.21 & \scriptsize 4155.00 $\pm$ 0.00 & 
 \scriptsize 84.77 $\pm$ 0.00 \\ \hline 
\scriptsize 4 & \scriptsize 9.78 $\pm$ 0.08 & \scriptsize 142.60 $\pm$ 5.84 & 
 \scriptsize 682.80 $\pm$ 0.16 & \scriptsize 5460.00 $\pm$ 0.00 & 
 \scriptsize 84.78 $\pm$ 0.00 \\ \hline 
\scriptsize 5 & \scriptsize 9.75 $\pm$ 0.73 & \scriptsize 114.80 $\pm$ 10.76 & 
 \scriptsize 682.50 $\pm$ 0.25 & \scriptsize 6757.10 $\pm$ 0.09 & 
 \scriptsize 84.78 $\pm$ 0.00 \\ \hline 
\scriptsize 6 & \scriptsize 9.23 $\pm$ 1.06 & \scriptsize 97.20 $\pm$ 8.76 & 
 \scriptsize 682.60 $\pm$ 0.24 & \scriptsize 8036.70 $\pm$ 0.21 & 
 \scriptsize 84.79 $\pm$ 0.00 \\ \hline 
\scriptsize 7 & \scriptsize 9.46 $\pm$ 0.46 & \scriptsize 82.60 $\pm$ 2.84 & 
 \scriptsize 682.90 $\pm$ 0.09 & \scriptsize 9335.00 $\pm$ 0.00 & 
 \scriptsize 84.79 $\pm$ 0.00 \\ \hline 
\scriptsize 8 & \scriptsize 10.21 $\pm$ 0.26 & \scriptsize 72.90 $\pm$ 5.29 & 
 \scriptsize 682.80 $\pm$ 0.16 & \scriptsize 10628.40 $\pm$ 1.04 & 
 \scriptsize 84.78 $\pm$ 0.00 \\ \hline 
\scriptsize 9 & \scriptsize 9.85 $\pm$ 0.65 & \scriptsize 63.80 $\pm$ 5.56 & 
 \scriptsize 682.90 $\pm$ 0.09 & \scriptsize 11923.30 $\pm$ 0.21 & 
 \scriptsize 84.78 $\pm$ 0.00 \\ \hline 
\scriptsize 10 & \scriptsize 9.27 $\pm$ 1.46 & \scriptsize 59.50 $\pm$ 2.85 & 
 \scriptsize 682.70 $\pm$ 0.21 & \scriptsize 13215.50 $\pm$ 1.65 & 
 \scriptsize 84.78 $\pm$ 0.00 \\ \hline 
\scriptsize 11 & \scriptsize 8.99 $\pm$ 1.55 & \scriptsize 53.30 $\pm$ 4.81 & 
 \scriptsize 683.00 $\pm$ 0.00 & \scriptsize 14512.00 $\pm$ 0.00 & 
 \scriptsize 84.77 $\pm$ 0.00 \\ \hline 
\scriptsize 12 & \scriptsize 9.72 $\pm$ 0.01 & \scriptsize 49.20 $\pm$ 3.56 & 
 \scriptsize 682.90 $\pm$ 0.09 & \scriptsize 15800.70 $\pm$ 0.21 & 
 \scriptsize 84.78 $\pm$ 0.00 \\ \hline 
\scriptsize 13 & \scriptsize 9.45 $\pm$ 0.57 & \scriptsize 45.60 $\pm$ 2.64 & 
 \scriptsize 682.80 $\pm$ 0.16 & \scriptsize 17098.90 $\pm$ 1.29 & 
 \scriptsize 84.78 $\pm$ 0.00 \\ \hline 
\scriptsize 14 & \scriptsize 9.35 $\pm$ 0.59 & \scriptsize 42.80 $\pm$ 1.56 & 
 \scriptsize 682.80 $\pm$ 0.16 & \scriptsize 18386.00 $\pm$ 0.00 & 
 \scriptsize 84.77 $\pm$ 0.00 \\ \hline 
\scriptsize 15 & \scriptsize 9.04 $\pm$ 1.75 & \scriptsize 40.50 $\pm$ 2.65 & 
 \scriptsize 682.80 $\pm$ 0.16 & \scriptsize 19686.60 $\pm$ 1.64 & 
 \scriptsize 84.77 $\pm$ 0.00 \\ \hline 
\hline 
\end{tabular}
\end{table}
% -------------------------------

%---------------------
\begin{figure}[!tbp]
\centering
\includegraphics[scale=0.6]{figs/errdist_a_e5_p2.ps}
\caption{\label{FIG-EDR-ERR-A-EV}
Evolution of the error probability distribution
for vectors belonging to the $+1$ class.}
\end{figure}
%---------------------
\begin{figure}[!tbp]
\centering
\includegraphics[scale=0.6]{figs/errdist_b_e5_p2.ps}
\caption{\label{FIG-EDR-ERR-B-EV}
Evolution of the error probability distribution
for vectors belonging to the $-1$ class.}
\end{figure}
%---------------------
\begin{figure}[!tbp]
\centering
\includegraphics[scale=0.6]{figs/zsize_a.ps}
\caption{\label{FIG-EDR-ZSIZE-EV-A}
Evolution of the virtual set for the $+1$ class.
The horizontal axis represents the number of iterations
and the vertical axis the size of $Z$ when
only the $+1$ class is considered.}
\end{figure}
%---------------------
\begin{figure}[htb]
\centering
\includegraphics[scale=0.6]{figs/zsize_b.ps}
\caption{\label{FIG-EDR-ZSIZE-EV-B}
Evolution of the virtual set for the $-1$ class.
The horizontal axis represents the number of iterations
and the vertical axis the size of $Z$ when
only the $-1$ class is considered.}
\end{figure}
%---------------------
\begin{figure}[htb]
\centering
\includegraphics[scale=0.6]{figs/iter_time.ps}
\caption{\label{FIG-EDR-TIME-EV}
Evolution of the time per iteration.
The horizontal axis represents the number of iterations
and the vertical axis the time, in seconds.}
\end{figure}
%---------------------


