\chapter*{Abstract}

One goal of this thesis is to present strategies that use
$k$-means as a pre-selection method.
These strategies differ in how the pre-selection of input
patterns is performed after clustering by $k$-means.
The computational cost of $k$-means is reduced by applying
dimensionality reduction methods, yielding a combined
training algorithm for SVMs denoted SVM-KM.

Another result of this work is a new training algorithm for SVMs based on the
pedagogical pattern selection strategy called EDR
(Error Dependent Repetition).
Using an iterative process based on gradient ascent
and an error-dependent re-sampling strategy,
SVM-EDR solves the dual problem without any assumption about
support vectors or the Karush-Kuhn-Tucker (KKT) conditions.

The third contribution of this work is a program for training
support vector machines, called SVMBR.
SVMBR is a framework in which not only can new kernels
be added, as in the majority of SVM programs, but
new training algorithms and QP solver strategies can also
be added with little effort.

