\chapter*{Abstract}

The first goal of this thesis is to present strategies 
based on $k$-means as a pre-selection method for 
Support Vector Machines (SVM) training.
These strategies differ in how the pre-selection of input
patterns is carried out after clustering by $k$-means.
The $k$-means computational cost is reduced with the application
of methods for dimensionality reduction, generating a combined 
training algorithm for SVM denoted as SVM-KM.
Since the structure of the SVM is taken into account when
SVM-KM is employed, the results are better than those obtained
with standard random selection.

Another result of this work is a new training algorithm for SVMs based on a
pedagogical pattern selection strategy called
Error Dependent Repetition (EDR).
Using an iterative process based on gradient ascent
and an error-dependent re-sampling strategy,
SVM-EDR solves the dual problem without any assumption about
support vectors or the Karush-Kuhn-Tucker (KKT) conditions.
Although two new parameters are introduced with SVM-EDR, it is
an efficient and easy-to-implement algorithm.

The third contribution of this work is a program for training 
support vector machines, called SVMBR.
SVMBR is a framework that not only allows new kernels
to be added, as in most SVM programs, but
also permits adding new training algorithms and
Quadratic Programming (QP)
solver strategies with little additional effort.
