%----------------------------------------------------------------
\chapter{Conclusions and future work}
\label{CAP-CONC-FUTURE}
%----------------------------------------------------------------

%----------------------------------------------------------------
\section{Introduction}
%----------------------------------------------------------------

Two new algorithms for training SVMs, SVM-KM and
SVM-EDR, were presented in this work.
Conclusions are drawn separately for each one,
in Sections \ref{SEC-SVM-KM-CONC} and \ref{SEC-SVM-EDR-CONC}.
Future work is discussed in Section \ref{SEC-FUTURE-WORKS}.

%----------------------------------------------------------------
\section{The SVM-KM training strategy}
\label{SEC-SVM-KM-CONC}
%----------------------------------------------------------------

SVM-KM is characterized by the use of $k$-means as
a pre-selection method before SVM training.
The $k$-means algorithm is applied
to organize input vectors into several clusters, from which input
vectors are selected to form the new training set,
following one of the five proposed pre-selection strategies.
The strategies differ in how the pre-selection of input
patterns is accomplished after clustering by $k$-means.
Some features of SVM-KM are discussed in the next
sections.

%----------------------------------------------------------------
\subsection{Boundary estimation by $k$-means}
%----------------------------------------------------------------

The proposed boundary estimation by $k$-means is based on the idea
that clusters with mixed composition (those associated with
more than one class label) are likely to occur near the
separation margins and may contain some support vectors.
The relation between $k$-means and boundary estimation is explained
using Bayes decision theory \cite{dudahart73}
and a measure of cluster homogeneity,
defined as follows:
%
$$
\lambda_j = 
\frac{\left|\sum\limits_{i \in c_j}y_i\right|}
{\sum\limits_{i \in c_j}|y_i|}.
$$

The index $j$ indicates the cluster $c_j$, and
the targets of the vectors that belong to cluster $c_j$
are denoted by $y_i$ (valid targets are only $\pm 1$).

With this expression, mixed clusters have $\lambda$ values in the
range $[0,1)$, where zero denotes a maximally mixed cluster
(a fifty-fifty split between the classes), and non-mixed clusters
have $\lambda$ values equal to one.
This measure provides additional information to the
Bayes rule, enhancing the decision process.
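As an illustration, the homogeneity measure can be computed directly from the cluster assignments produced by $k$-means. The following Python sketch is not part of the implementation described in this work; function and variable names are hypothetical:

```python
import numpy as np

def cluster_homogeneity(labels, assignments, n_clusters):
    """Compute lambda_j = |sum y_i| / sum |y_i| for each cluster.

    labels: targets in {-1, +1}; assignments: cluster index per vector.
    """
    lam = np.zeros(n_clusters)
    for j in range(n_clusters):
        y = labels[assignments == j]
        if y.size:
            lam[j] = abs(y.sum()) / np.abs(y).sum()
    return lam

# A fifty-fifty cluster yields lambda = 0; a pure cluster yields 1.
y = np.array([1, -1, 1, -1, 1, 1])
a = np.array([0, 0, 0, 0, 1, 1])
print(cluster_homogeneity(y, a, 2))  # [0. 1.]
```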

If the Bayes rule is not considered, the decision of whether a vector
lies in the boundary region is based only on the
state-of-nature probabilities:
%
$$
\mathrm{Decide} \; \omega_1 \;
\mathrm{if}  \;
 P(\omega_1)
>
P(\omega_2);\;
\mathrm{otherwise \; decide} \;
\omega_2
$$
where $P(\omega_i)$ represents the state-of-nature probabilities,
indicating the probability of finding a vector
in the boundary region ($\omega_1$) or outside it ($\omega_2$).

When the Bayes rule is taken into account,
it is possible to observe how the \emph{a posteriori}
probability $P(\omega_i | \lambda)$ changes based on the information
provided by $\lambda$ and the \emph{a priori} probability $P(\omega_i)$:
$$
P(\omega_i | \lambda) = 
\frac{ p(\lambda | \omega_i)P(\omega_i)}{p(\lambda)}
$$

With the Bayes rule, the decision of whether a vector
lies in the boundary region is no longer based only on the a priori
probability $P(\omega_i)$; it now takes into account the
homogeneity of the cluster:
$$
\mathrm{Decide} \; \omega_1 \;
\mathrm{if} \;  P(\omega_1|\lambda) 
>P(\omega_2|\lambda); \;
\mathrm{otherwise \; decide} \;
\omega_2.
$$
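The effect of conditioning on $\lambda$ can be made concrete with a small numeric sketch of the Bayes rule. All probability values below are hypothetical, chosen only to illustrate how a mixed cluster shifts the decision:

```python
def posterior(likelihoods, priors):
    """Bayes rule: P(w_i | lambda) = p(lambda | w_i) P(w_i) / p(lambda)."""
    evidence = sum(l * q for l, q in zip(likelihoods, priors))
    return [l * q / evidence for l, q in zip(likelihoods, priors)]

# Hypothetical numbers: boundary vectors (w1) are rare a priori, but a
# mixed cluster (lambda close to zero) makes them much more likely.
priors = [0.2, 0.8]          # P(w1), P(w2) -- assumed values
lik_mixed = [0.9, 0.1]       # p(lambda ~ 0 | w_i) -- assumed values
post = posterior(lik_mixed, priors)
# post[0] ~ 0.69 > post[1] ~ 0.31: decide w1, despite the small prior
```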

The measure $\lambda$ has proved to be
more effective for detecting non-support vectors,
as depicted in Figure \ref{FIG-BAYESDEMORESUL-30}.
The pair of graphics (c) shows that the frequency of
non-support vectors coming from non-mixed
clusters is very high. However, pair (b) makes
it clear that support vectors come from
clusters with different values of $\lambda$,
even from non-mixed clusters, when the proposed
measure is employed.

Five pre-selection heuristics that take into
account the homogeneity of the cluster were proposed
in this work: KMDC, KMCC, KMAC, KMCD and KMAD.
The strategies differ in how the pre-selection of input
patterns is performed after clustering by $k$-means.
Their names are composed of the base name KM and two letters that
indicate which action is performed on mixed and non-mixed
clusters, respectively: [D]iscard all vectors,
use only the [C]enter of the cluster,
or use [A]ll vectors.
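The naming scheme can be read as a dispatch table over clusters. The sketch below is only a schematic rendering of that scheme in Python; the function and data layout are hypothetical, and labels of selected vectors would be carried along analogously:

```python
import numpy as np

# Actions per (mixed, non-mixed) cluster, following the naming scheme:
# D = discard all vectors, C = use only the center, A = use all vectors.
STRATEGIES = {
    "KMDC": ("D", "C"), "KMCC": ("C", "C"), "KMAC": ("A", "C"),
    "KMCD": ("C", "D"), "KMAD": ("A", "D"),
}

def preselect(X, assignments, lam, centers, strategy):
    """Build the reduced training set for one pre-selection strategy.

    lam[j] < 1 marks cluster j as mixed; lam[j] == 1 as non-mixed.
    """
    mixed_action, pure_action = STRATEGIES[strategy]
    selected = []
    for j, center in enumerate(centers):
        action = mixed_action if lam[j] < 1.0 else pure_action
        if action == "A":
            selected.extend(X[assignments == j])
        elif action == "C":
            selected.append(center)
        # action "D": discard the cluster entirely
    return np.array(selected)
```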

The KMCC strategy is equivalent to standard $k$-means,
and it is useful to demonstrate the importance of considering
the structure of SVMs when creating sampling methods.
Figure \ref{FIG-ROUNDS-GEN} shows how the
generalization of SVMs is affected by random sampling.
If the training is performed using a subset generated
by random sampling (indicated as round 0), even for large subsets, the
generalization is lower than when training
with KMCC as the pre-selection method.
With KMCC, the spatial distribution of the data set
is taken into account, generating a subset that
better represents the original data set.

%----------------------------------------------------------------
\subsection{Speeding up KM}
%----------------------------------------------------------------

The high computational cost of KM is evident from
Figure \ref{FIG-ALLTIMES}. The excessive number of
distance calculations (Figure \ref{FIG-DISTCALC}
and Table \ref{TAB-SWAPS}) is the main reason for its low
performance \cite{Judd96a,Judd96b}.
Another interesting characteristic is related to the number of
swaps: the number of swaps is very high at the first
iteration but decreases quickly in the subsequent iterations.

A simulation testing the joint
application of KM and SVM was performed
with different numbers of iterations
for KM. A stopping criterion equal to zero meant
just random sampling over the data set.
Results (Figures \ref{FIG-ROUNDS-TIME} and \ref{FIG-ROUNDS-GEN})
showed that, specifically for the joint
application of KM and SVM, the generalization is only slightly
degraded when this kind of early stopping for KM is applied.
Only one iteration is enough to obtain good performance,
without loss of generalization capability.

The distance calculation cost was reduced as well,
by reducing the dimensionality of the input vectors.
Using ideas extracted from \cite{Fuka0101}, two efficient techniques
were developed:
%----
\begin{description}
\item[Feature selection:] It is based on the removal of features whose
frequencies are less than a defined threshold value ($t_h$).
\item[Feature extraction:] In this method, several features are
grouped (as clusters) into a new one, generating a new pattern with
dimension $d$.
\end{description}
%----
Both methods provided very
similar generalization rates, with a much smaller
simulation time and a small increase in the initialization time,
as can be seen in Tables \ref{TAB-FEATSEL} (feature selection)
and \ref{TAB-FEATEXT} (feature extraction).
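One plausible reading of the frequency-based feature selection is sketched below in Python. The interpretation of "frequency" as the fraction of vectors in which a feature is non-zero, and the function name, are assumptions for illustration only:

```python
import numpy as np

def select_features(X, t_h):
    """Remove features whose frequency of occurrence (fraction of
    vectors with a non-zero value) is below the threshold t_h."""
    freq = (X != 0).mean(axis=0)   # per-feature occurrence frequency
    keep = freq >= t_h
    return X[:, keep], keep

X = np.array([[1.0, 0.0, 2.0],
              [3.0, 0.0, 0.0],
              [4.0, 1.0, 0.0]])
X_red, mask = select_features(X, t_h=0.5)
# Features 1 and 2 occur in only one vector each and are removed.
```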

A satisfactory generalization may not be
reached in some cases. This depends on how much the decision
surface changes when the new training set is defined. In this
situation, it is necessary to tune the SVM parameters again.

%----------------------------------------------------------------
\subsection{Generalization and performance analysis}
%----------------------------------------------------------------

The joint application of KM with SVM is feasible
despite the fact that KM is an exhaustive
clustering technique.
For vectors of large dimensionality,
SVM-KM performance may decrease, since it
performs many distance evaluations between training
vectors. In these cases, two strategies were used:
dimensionality reduction and only one iteration of
the KM algorithm.

When the dimensionality is decreased, the distance calculation
cost is reduced, taking less time (Figure \ref{FIG-DISTCALC}).
Moreover, during the first iteration the number of swaps is
very high, but it decreases quickly in the subsequent
iterations \cite{Fredrik00} (Figure \ref{FIG-NUMSWAPS}).
Specifically for the joint application of KM and SVM,
the generalization is only slightly degraded when this kind of
early stopping for KM is applied (Figure \ref{FIG-ROUNDS-GEN}).

It was observed that it is not necessary to reach
the minimum with KM, nor to
use complex methods to reduce the dimensionality: SVMs can overcome
these simplifications.

%----------------------------------------------------------------
\subsection{Performance of the proposed strategies}
%----------------------------------------------------------------

Among all proposed strategies, KMAC is a special case, since its pre-selection
strategy is based solely on preserving vectors near the decision surface.
As shown in Figures \ref{FIG-RESUL-FINAL-NO-DIMRED},
\ref{FIG-RESUL-FINAL-SELECTION} and \ref{FIG-RESUL-FINAL-EXTRACTION},
this aim is achieved, with the number of SVs high and nearly
constant for KMAC.

The ability of KMAC to discover SVs gives it two distinguishing features:
it is the slowest pre-selection strategy and has the best generalization
capability. The KMAC strategy is robust to the NC parameter and
estimates the data set boundaries, but it uses a large number of vectors.
Applied together with $k$-means and feature selection, the time consumed
by this strategy is more than three times smaller than the original SVM time
(8.46s for 321 clusters against the original 28s for pure SVM training),
with high generalization: 84.45\%.

Another stable option to be considered is KMCC, the traditional
approach: 82.25\% in 2.75s, using NC equal to 321 and feature selection.
This time is ten times smaller than pure SVM, with a slight decrease in
generalization (pure SVM has a generalization capability of 84.5\%).
The KMCC strategy uses the standard $k$-means, which does not take
into account the nature of a support vector machine and may ignore many
probable SVs.

KMCD was the worst method in all simulation groups, a typical case
where much information is lost due to the excessive discarding of vectors:
for all groups, KMCD has the smallest number of training vectors.
Using feature selection, the best result for KMCD is
a generalization of 79.87\%, in 2.38s.

KMAD can be considered an improvement on KMCD, collecting more
information for its training and obtaining 84.12\% generalization
in 7.05s. Unfortunately, KMAD is very sensitive to NC, and its
generalization decreases quickly when NC grows. Again, the lack
of information is evident.

A very interesting case occurs when KMDC is used. The main idea
behind this strategy is to increase the margins by
removing mixed clusters. For small NCs, this goal is not
reached, since many clusters are mixed (lack of information),
but, when NC is larger, KMDC shows satisfactory results. For
instance, using the second group of simulations (feature
selection) and NC equal to 642, a generalization of
83.49\% is obtained in 4.37s for KMDC.


%----------------------------------------------------------------
\section{The SVM-EDR training algorithm}
\label{SEC-SVM-EDR-CONC}
%----------------------------------------------------------------

The SVM-EDR training algorithm is a Boosting algorithm
that uses a deterministic, error-based schedule
to re-sample the data set. Patterns related to large
errors have their Lagrange multipliers updated more
frequently, according to a gradient ascent strategy.
The schedule is calculated using a modified EDR procedure,
adapted to the output error of the SVMs.
SVM-EDR solves the QP problem arising from SVM training
without any assumption about support vectors
or the KKT conditions.
SVM-EDR is based on the dual problem of SOR
and can be applied to linear and non-linear problems.
The generalization errors and support vectors obtained are
similar to those found by SMO.
Some features of SVM-EDR are discussed in the next
sections.

%----------------------------------------------------------------
\subsection{SVM-EDR implementation} 
%----------------------------------------------------------------

The Dependent Repetition procedure uses a two-phase scheme
to change the presentation frequency of patterns.
In the first phase, each pattern is presented to the
network and all output errors ($e_i$) are gathered.
In the second phase, the training set is scanned $n_E$ times
and patterns that obey the condition
$$
(e_i)^{p_E} \geq j \frac{ e_{\textrm{max}} } {n_E} 
\;\;\;\;\;\mathrm{for}\;\;j=1,2,\ldots,n_E
$$
are selected again for optimization, where $p$ denotes the number of
patterns and $p_E$ is the comparison function power.

The output error for SVMs depends on which class the training vector
belongs to. Using the transformation
$$
e_i^{'} = -e_iy_i,
$$
where $y_i$ is the class label of pattern $\mathbf x_i$,
it is possible to obtain an error measure
that does not depend on the class label.

EDR uses errors greater than zero, so it is necessary
to shift $e_{i}^{'}$ by the minimum error,
resulting in the complete expression for SVM-EDR:
$$
(e_{i}^{'} - e_{\mathrm{min}}^{'})^{p_E} 
\geq j \frac{ e_{\textrm{max}}^{'}  - e_{\mathrm{min}}^{'}} {n_E} 
$$
\begin{equation}
e_{i}^{'} \geq 
\sqrt[p_E]
{
j \frac{ e_{\textrm{max}}^{'}  - e_{\mathrm{min}}^{'} }{n_E} 
}
+ e_{\mathrm{min}}^{'}
\;\;\;\;\;\mathrm{for}\;\;j=1,2,\ldots,n_E
\label{EQ-SVMEDR-EQ-CONC}
\end{equation}
where $e_{\textrm{max}}^{'}$ and $e_{\textrm{min}}^{'}$
are the maximum and minimum errors, respectively, both independent of
the class label.
The role of the two new training parameters
is clear in the last equation: to control the number of
scans performed over the training set, via $n_E$,
and to change the shape of the error probability
distribution, via $p_E$.
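The selection rule of Equation \ref{EQ-SVMEDR-EQ-CONC} can be sketched directly in Python. The function below is an illustrative reading of the rule, not the implementation used in this work; names are hypothetical:

```python
import numpy as np

def edr_schedule(errors, y, n_E, p_E):
    """Select pattern indices for re-optimization following the shifted
    EDR rule: e'_i >= (j*(e'_max - e'_min)/n_E)**(1/p_E) + e'_min,
    for j = 1..n_E.  Large-error patterns are selected repeatedly."""
    e = -np.asarray(errors, dtype=float) * np.asarray(y)  # e'_i = -e_i*y_i
    e_min, e_max = e.min(), e.max()
    if e_max == e_min:            # all errors equal: nothing to boost
        return []
    selected = []
    for j in range(1, n_E + 1):
        thr = (j * (e_max - e_min) / n_E) ** (1.0 / p_E) + e_min
        selected.extend(int(i) for i in np.flatnonzero(e >= thr))
    return selected

# The pattern with the largest class-independent error (index 0) is
# selected for every scan j, hence optimized most often.
sched = edr_schedule([-1.0, -0.5, 0.0], [1, 1, 1], n_E=2, p_E=1)
```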

The $n_E$ value is not critical but it may slow down
the training if a very large EDR set 
is generated. In general, values smaller than ten are 
appropriate for $n_E$. 

%----------------------------------------------------------------
\subsection{Virtual training set and Boosting}
%----------------------------------------------------------------

The Boosting characteristic of SVM-EDR is a consequence of the
virtual set generated by Equation \ref{EQ-SVMEDR-EQ-CONC}, since patterns
associated with large errors are included in it several times.
When the probability distribution of errors ($g(e)$) is known,
it is possible to estimate the size of the virtual set:
% -------
$$
Z = \int\limits_{0}^{n_E}
      \int\limits_{e^{'}(j)}^{e^{'}(p)}
      g(e)\,de\,dj.
$$
% -------
 
Two major differences between SVM-EDR and Boosting can be pointed out:
%--
\begin{itemize}
\item In Boosting algorithms, the selection of patterns follows
      a probability distribution, whereas a deterministic
      schedule, based on the current errors, is used to select patterns
      in the EDR algorithm.
%
\item The Boosting decision takes into account the final hypothesis $H(\mathbf{x})$,
      represented as a set of several weak hypotheses ($h_t$) weighted by $a_t$.
      For SVM-EDR, this kind of weighting is not necessary, since the same
      hypothesis is refined during training, according to the evolution
      of the Lagrange multipliers.
\end{itemize}
%----
With some modifications, it is possible to represent SVM-EDR 
in a format similar to the traditional AdaBoost algorithm 
\cite{FRESCHAP97}, as depicted in Algorithm \ref{EDR-BOOSTING}.

%----------------------------------------------------------------
\subsection{Convergence of SVM-EDR}
%----------------------------------------------------------------

SVM-EDR implements a boosted gradient ascent algorithm
in which vectors related to large errors are presented
more frequently. The idea is to speed up the training process,
avoiding the already-learned vectors.

This behavior has, as a consequence, a large initial computational
cost, as represented in Figure \ref{FIG-EDR-TIME-EV}.
The reason is that some vectors become associated with null Lagrange
multipliers (in the SMO case) or with small errors (for SVM-EDR)
only after at least one iteration.
These associations avoid unnecessary loops over vectors without significant
information for the training process.
Moreover, cache allocation and
the initialization of control structures are also performed at the beginning,
slowing down the initial phase of the training process.

Another way to observe the convergence process is to analyze the
evolution of the error probability (Figures \ref{FIG-EDR-ERR-A-EV}
and \ref{FIG-EDR-ERR-B-EV}). After several initial rearrangements,
the error distribution quickly approaches its final
shape. This convergence in distribution is
essential to the convergence of the algorithm.

As stated in Theorem \ref{THEO-EDR-1}, when convergence is
reached, the EDR set becomes stable, with a fixed number of
vectors and a fixed order. Figures \ref{FIG-EDR-ZSIZE-EV-A}
and \ref{FIG-EDR-ZSIZE-EV-B} show the EDR set size for
vectors belonging to classes $+1$ and $-1$, respectively. One
weakness of most SVM algorithms also appears
here: the difficulty of improving the final solution.
After a few iterations, several Lagrange
multipliers are already very close to their optimal values.
However, since the SVM decision hyperplane is given by a large
sum over all SVs weighted by the Lagrange multipliers, even
small differences in the Lagrange multipliers can generate
unsatisfactory results.

%----------------------------------------------------------------
\subsection{Separation hyperplanes}
%----------------------------------------------------------------

Due to the convexity of the problem, with uniqueness and a minimum guaranteed
(see Vapnik \cite{VAPNIK9801}, Theorems 10.1 and 10.2), the
hyperplane found by SVM-EDR is expected to be the same
as the one found by SMO, with the same support vectors.
However, small differences in the Lagrange multipliers may occur,
but within a specified precision. The default value for this
precision in SVM-EDR is $10^{-4}$.

%----------------------------------------------------------------------
\subsection{Training time}
%----------------------------------------------------------------------

The computational cost of the algorithm, per iteration, is
$O(Zp) + O(p) + O(n_Ep)$.
When $Z \approx p$, the computational cost can be approximated by
$O(p^2)$, and when $Z \gg p$ this cost becomes $O(Zp)$. It is
smaller than that of SMO (if $Z < p^2$), but it is important
to note that the SVM-EDR algorithm optimizes one Lagrange multiplier at a
time, whereas SMO takes two Lagrange multipliers.
Another difference is related to the precision of the Lagrange multipliers:
it was observed that SVM-EDR requires smaller tolerance values than SMO
for proper convergence, generating an increase in training time.
Since SVM-EDR relies on the error distribution and not on the KKT conditions,
it is more susceptible to the spatial distribution of the training set
than SMO. Thus, even though computational costs may be defined, it is not
easy to estimate which algorithm runs faster.

%----------------------------------------------------------------------
\subsection{Number of iterations and Z set size}
%----------------------------------------------------------------------

Figure \ref{FIG-EDR-S1-NUMIT2D} shows that
the number of iterations decreases when $n_E$ or the comparison
function power increases.
%
When a larger $n_E$ is chosen, the same vector may
be presented several times, mainly if it is associated
with a large error. This boosting characteristic leads the
Lagrange multiplier under optimization to a faster convergence, increasing
the iteration time but decreasing the number of iterations.
On the other hand, an increase in the comparison function power favors
the presentation of vectors associated with large errors, since
the error distribution is modified.
Again, several presentations of vectors related to large errors occur,
generating fewer iterations but consuming more time.

The Z set size (Figure \ref{FIG-EDR-S1-ZSIZE2D}) may work as
an indirect time measure, since it becomes large
when $n_E$ or the comparison function power increases.
The slope depicted in Figure \ref{FIG-EDR-S1-NUMIT2D} may change
depending on the data set, but a similar behavior is expected in all cases.

%----------------------------------------------------------
\subsection{Generalization and number of support vectors}
%-----------------------------------------------------------

The changes in generalization are very small, showing
the convergence of the algorithm and its independence of the training parameters.
For the third experiment (Adult database), an average over all generalization
values gives $84.78 \pm 0.01$, indicating a very stable generalization.

The same reasoning is valid for the number of support vectors found,
which lies in a narrow range with a small standard deviation: 682.4 to 683.9.
Averaging over all values gives
$682.95 \pm 0.26$ support vectors or,
rounding to the nearest integer, 683 support vectors.

%----------------------------------------------------------------
\section{Future work}
\label{SEC-FUTURE-WORKS}
%----------------------------------------------------------------

In relation to SVM-KM, some topics are listed below:
%
\begin{itemize}
\item In Section \ref{SEC-BOUNDARY-ESTIMATION-KM}
			a measure for the homogeneity of the cluster was defined (Equation
			\ref{SVMKM-FEATURE-DEF}, represented by $\lambda$).
			This measure has proved more appropriate
			for detecting non-support vectors. However, considering the
			computational cost of SMO, the algorithm performance can be
			increased if likely support vectors are detected, \apriori{},
			with more reliability.
			The study and creation of other measures of
			cluster homogeneity is a possible branch of this work.
			For instance, if information about the homogeneity of
			neighboring clusters were included in $\lambda$,
			it would probably be more accurate.
\item Estimating the number of clusters is an open question.
			Depending on the pre-selection strategy,
			the optimal NC may change. For instance, a large NC
			is necessary for KMDC, but for
			KMAC it is better to choose a small NC. In fact,
			the homogeneity measure and the pre-selection
			strategy used need to be considered when choosing NC.
			A starting point for this investigation
			is given by Figures \ref{FIG-SV2} and \ref{FIG-NSV2}, where
			the origin of support vectors and non-support vectors is
			described as a function of NC.
			The definition of a measure that describes
			the loss of information for a given pre-selection
			strategy, homogeneity measure and specific NC would be interesting.
			By minimizing this measure, the optimal NC value
			could be determined. For instance, when KMDC is used
			with large values of NC, several small clusters are
			generated. In this case, mixed clusters are discarded, but
			with a small loss of information. However, for few
			clusters, the loss of information is large.
\item Vectors near the separation hyperplane or misclassified
      vectors are associated with larger errors, as shown in
      the SVM-EDR chapter.
      Besides, the KMAC strategy, which tries to preserve
      vectors near the separation hyperplane, was the strategy
      with the best generalization.
      This suggests that a pre-selection method based
      on errors may be very efficient. A suggestion is
      to use SVM-EDR as a boundary detector. With SVM-EDR, after a few
      iterations, vectors near the boundaries are located,
      even if the Lagrange multipliers have not converged yet.
      Therefore, it may be useful for reducing the
      training set size or for implementing a new working set
      method.
\end{itemize}

%
For SVM-EDR, some topics for future work are:
%
\begin{itemize}
\item A better understanding of how to estimate the $n_E$ parameter
			and the comparison functions is necessary. Although a step
			was taken in this work with the expression for the $Z$ size
			(Equation \ref{EQ-Z-SET-SIZE}) and a formal convergence proof
			(Theorem \ref{THEO-EDR-1}), without knowledge of the error
			distribution it is impossible to know the $Z$ size precisely.
			In turn, the error distribution is generated only
			after the first iteration, making it practically impossible
			to know $Z$ before starting the training.
			An idea is to use the size of $Z$ as a training
			parameter, avoiding the comparison function power and $n_E$ parameters.
			For the first iteration, some default values would be used (such as $n_E$
			equal to one and a quadratic comparison function), and for
			subsequent iterations the $Z$ size could be estimated using
			the error distribution.
\item The employment of SVM-EDR in regression problems
			is another possible branch of this work.
			First, it is necessary
			to modify the primal problem for regression, adding
			the minimization of the bias term
			(see the differences between Equations \ref{EQ-PRIMAL-SLACK}
			and \ref{EQ-TRAIN-PRIMAL-SLACK-SOR}, for classification).
			After deriving the dual problem, it is possible to determine the
			new update rules for the Lagrange multipliers, bias and cache of
			errors used by SVM-EDR.
\item It is necessary to implement efficient chunking strategies
			in the SVMBR program. In the implemented strategy, chunks with a fixed
			size (default 500) are created and solved by the selected method
			(SMO or SVM-EDR) until convergence is reached.
			Each chunk is filled with the training vectors related to the largest errors.
			If the last optimized chunk does not change any Lagrange multiplier,
			the training is finished. Otherwise, a new chunk is chosen,
			with the most misclassified patterns.
			This strategy is slower than Vapnik's standard chunking \cite{Vapnik92b},
			since vectors already used may be chosen again to compose the new chunk.
			In Vapnik's strategy, a new chunk is created using the support
			vectors calculated in the last chunk and a small fraction of the non-support vectors
			present in the last chunk, and is filled up with training vectors not yet used.
			
\end{itemize}
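The error-driven chunking loop described in the last item can be sketched as follows. This is only a schematic Python rendering under stated assumptions: \texttt{solve\_chunk} and \texttt{errors\_fn} are hypothetical placeholders for the inner solver (SMO or SVM-EDR) and for the error computation, respectively:

```python
import numpy as np

def chunked_training(X, y, solve_chunk, errors_fn, chunk_size=500):
    """Error-driven chunking: each chunk holds the training vectors with
    the largest current errors; training stops when optimizing a chunk
    changes no Lagrange multiplier."""
    alphas = np.zeros(len(y))
    while True:
        errors = errors_fn(X, y, alphas)
        chunk = np.argsort(-np.abs(errors))[:chunk_size]  # most misclassified
        new_alphas = solve_chunk(X, y, alphas, chunk)     # e.g. SMO or SVM-EDR
        if np.allclose(new_alphas, alphas):               # no multiplier changed
            return alphas
        alphas = new_alphas
```

Vapnik's variant would instead build the next chunk from the support vectors of the previous one plus unused training vectors, avoiding repeated selection of the same non-support vectors.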
