\chapter{Theoretical Background}\label{ch:theory}

% This chapter aims at describing the theory involved in the present work. 

% \section{Condition-Based Maintenance}

% Relation between reliability and maintenance.

% The main types of maintenance are:
% \begin{itemize}
% \item Corrective maintenance;
% \item Preventive maintenance;
% \item Condition-based maintenance.
% \end{itemize}

% Corrective maintenance, also known as run-to-failure maintenance, is
% per\-for\-med af\-ter e\-quip\-ment fai\-lu\-re and its main objective
% is to restore it to an operational condition as soon as
% possible. Given its unscheduled nature, corrective maintenance may
% incur sudden equipment (or even system) unavailability and all its
% associated costs that, among others, involve costs of the maintenance
% action itself and costs due to non-expected breakdowns.

% Preventive maintenance, in turn, is time-based and takes place
% periodically regardless the current equipment condition. It aims at
% preventing failures by staving off aging effects ({\it e.g.} wear,
% corrosion, fatigue). Since the system status is neglected, preventive
% actions can be unnecessary when the equipment has not crossed a
% deterioration level yet or can be inefficient when a latent failure
% exists and the maintenance epoch is not sufficiently near. In the
% former situation, planned equipment downtime and maintenance costs may
% be uselessly high. In the latter context, failures occur between two
% preventive actions and then corrective maintenance is required. Hence,
% in addition to the planned downtime and maintenance costs,
% unavailability and costs due to corrective actions are also incurred.

% In order to overcome the drawbacks of the previous maintenance
% policies, \gls{cbm} emerges. According to \citeonline{jardine2006},
% \gls{cbm} is a maintenance program that recommends maintenance actions
% based on information collected via equipment condition
% monitoring. \gls{cbm} attempts to avoid unnecessary maintenance tasks
% by taking action only when there is evidence of equipment abnormal
% behavior. If properly established and effectively implemented, a
% \gls{cbm} policy can significantly reduce maintenance costs by
% diminishing the number of unnecessary scheduled preventive actions
% and/or by detecting and preventing latent failures. A \gls{cbm}
% program consists of three main steps:

% \begin{enumerate}
% \item Data acquisition: in this step, data regarding equipment status
%   is collected.
% \item Data processing: the acquired data is processed, analyzed and
%   interpreted.
% \item Maintenance decision-making: after data interpretation,
%   efficient maintenance policies are recommended.
% \end{enumerate}

% This work focus on the last two steps and it is supposed that the data
% acquisition is already performed and those...

\section{Support Vector Machines}

\gls{svm} is a learning method widely used in pattern recognition and
regression problems. Its applications span different domains, from
computational biology \cite{ben-hur2008} to financial time series
forecasting \cite{gestel2001}. In the reliability context, \gls{svm}
has been used, for example, in \gls{cbm} and fault detection
\cite{widodo2007}, to classify anomalies in components
\cite{roccoezio2007} and to forecast equipment reliability
\cite{hongepai2006,chen2007}.

% In reliability context, \citeonline{rocco2002} have used \gls{svm} to
% classify a component as operational or faulty in order to evaluate
% system overall reliability. The authors take advantage of the
% \gls{svm} velocity, which is greater than the one from the traditional
% discrete event simulation approach of Monte Carlo
% \cite{banks2002}. Then, they couple Monte Carlo simulation with
% \gls{svm}.


In its classical formulation, \gls{svm} is a supervised learning
method, since it is based on (input, output) examples. \gls{svm} stems
from the \gls{slt} and it is particularly useful when the process in
which inputs are mapped into outputs is not known. According to
\citeonline{kecman2005}, the learning problem is as follows: there is
an unknown nonlinear dependence (mapping, function) $y =
f(\mathbf{x})$ between a multidimensional input vector $\mathbf{x}$
and an output $y$. The only information available is a data set $D =
\{(\mathbf{x}_1,y_1),(\mathbf{x}_2,y_2), \dots,
(\mathbf{x}_{\ell},y_{\ell})\}$, where ${\ell}$ is the number of
examples in $D$. The data set $D$ is called the training set because
it is used to train the learning machine.

Depending on the type of output $y$, different learning problems are
defined:
\begin{itemize}
\item Classification problems: $y$ assumes discrete values that
  represent categories. If only two categories are considered ({\it
    e.g.} $y = -1$ or $y = +1$), the problem at hand is of binary
  classification. Otherwise, if three or more categories are taken
  into account, it is the case of a multi-classification problem.
\item Regression problems: $y$ is real-valued and its relation with
  the input vector $\mathbf{x}$ is given by a function.
\end{itemize}

The solution of the learning problem is a decision function or a
regression (target) function when, respectively, a classification or a
regression is considered. For some learning machines, such as
\gls{nn}, the solution of the learning problem is obtained via the
principle of \gls{erm}, which measures only the errors on the training
set and is suitable for situations with a large number of training
examples \cite{vapnik2000}. Obtaining the learning problem solution
via \gls{svm}, on the other hand, involves a convex quadratic
optimization problem whose objective function embodies the principle
of \gls{srm}. This principle entails minimizing an upper bound on the
generalization error, which is formed by two parts: one is associated
with the machine's capacity to classify or predict unseen data ({\it
  i.e.}, examples that are not in $D$), and the other accounts for the
training errors. There is thus a trade-off between the model's
capacity and its training accuracy. Machines with low capacity have
high training and generalization errors, which characterizes {\it
  underfitting} the data. Conversely, increasing the machine capacity
too much yields a small training error, but the generalization error
grows due to poor performance in classifying or predicting unseen
data. The latter is the case of {\it overfitting} the data: the
machine is so specialized in the training set that it cannot work well
on data not in $D$. The behavior of these errors as a function of
machine capacity is illustrated in Figure \ref{fig:caperr}, and
examples of underfitting and overfitting when classifying linearly
separable data are depicted in Figure \ref{fig:underover}.
% --------------------------------------------------------------------------
\begin{figure}[!ht]
\centering{
\includegraphics[height=6cm]{fig/capErr.pdf}
\caption{Relation between model capacity and error}
\label{fig:caperr}}
\end{figure}
% --------------------------------------------------------------------------
% --------------------------------------------------------------------------
\begin{figure}[!ht]
\centering{
\includegraphics[height=14cm]{fig/underOver.pdf}
\caption{Underfitting (up) and overfitting (down) the data in the case
  of binary classification of linearly separable
  data. {\footnotesize{Adapted from \citeonline{kecman2005}, p. 8}}}
\label{fig:underover}}
\end{figure}
% --------------------------------------------------------------------------
According to \citeonline{kecman2005}, the \gls{srm} principle has
proven useful when dealing with small samples, and its main idea is to
find a model with adequate capacity to describe the given training
data set. For further details on \gls{erm} and \gls{srm}, see
\citeonline{vapnik2000}, \citeonline{kecman2001} and
\citeonline{kecman2005}.

An important advantage of \gls{svm} is that the training phase amounts
to solving a convex quadratic optimization problem with a unique local
optimum that is also global \cite{boyd2004}. Such a problem can be
tackled with well-established optimization theory and techniques, for
example Lagrange multipliers and the \gls{kkt} conditions. Besides
that, the desired decision or regression function is always linear and
represented by a hyperplane. Even if the relation in the input space
is not linear, \gls{svm} uses kernel functions (see
\citeonline{schol2002}) to map the input data $\mathbf{x}$ into a
feature space, often of higher dimension, in which such a relation is
linear. The training procedure is executed in this feature space. The
following subsections introduce the main \gls{svm} classifiers as well
as regression via \gls{svm}.

\subsection{Linear Maximal Margin Classifier for Linearly Separable Data}

The binary classification of linearly separable data is the simplest
learning problem. Due to its simplicity, it is often not applicable in
practical situations. Nevertheless, it presents the fundamental
aspects of \gls{svm} and is useful for understanding the more complex
and realistic \gls{svm} approaches.

Let $D = \{(\mathbf{x}_1,y_1),(\mathbf{x}_2,y_2), \dots,
(\mathbf{x}_{\ell},y_{\ell})\}$ be the training set with $\mathbf{x}_i
\in {\mathbb R}^n$, $y_i \in \{-1,1\}$ and $i = 1, 2, \dots, \ell$
denoting the $i^{th}$ training example. Suppose that the data is
linearly separable, that is, it can be perfectly separated by
hyperplanes. The best of those hyperplanes is the one with maximal
margin, {\it i.e.}, the one that maximizes the minimum distance
between examples from distinct classes. Once this optimal hyperplane
is defined, the decision function is obtained. The hyperplane equation
in matrix form is given by:
% --------------------------------------------------------------------------
\begin{equation}\label{hyp}
H = \mathbf{w}^T\mathbf{x} + b = 0
\end{equation}
% --------------------------------------------------------------------------
where $\mathbf{w}$ is the vector normal to the hyperplane, $T$
indicates the matrix transpose operation, $\mathbf{x}$ is the input
vector and $b$ is the bias (offset) term of the hyperplane.

Since the data is separable, in order to correctly classify a given
example $(\mathbf{x}_i,y_i)$, the decision function should satisfy the
constraints:
% --------------------------------------------------------------------------
\begin{eqnarray}
\mathbf{w}^T\mathbf{x}_i + b \geq 1, & & \text{if } y_i = 1 \label{constr1}\\
\mathbf{w}^T\mathbf{x}_i + b \leq -1, & & \text{if } y_i = -1 \label{constr2}
\end{eqnarray}
% --------------------------------------------------------------------------
that can be expressed by a single inequality:
% --------------------------------------------------------------------------
\begin{equation}
y_i \cdot (\mathbf{w}^T\mathbf{x}_i + b) \geq 1 \label{uniqueconstr}
\end{equation}
% --------------------------------------------------------------------------
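The combined constraint can be checked numerically. The following is a
minimal sketch, with toy data and a candidate hyperplane chosen here
purely for illustration (they do not come from the text):

```python
# A minimal sketch (hypothetical toy data): checking the combined
# constraint y_i * (w^T x_i + b) >= 1 for a candidate hyperplane.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def satisfies_margin(w, b, x, y):
    # Combined form of the two class constraints: y * (w^T x + b) >= 1
    return y * (dot(w, x) + b) >= 1

# Toy linearly separable data in R^2 and a candidate hyperplane w, b
data = [((2.0, 2.0), +1), ((3.0, 1.0), +1),
        ((-1.0, -1.0), -1), ((-2.0, 0.0), -1)]
w, b = (1.0, 1.0), -3.0  # hyperplane x1 + x2 - 3 = 0

print([satisfies_margin(w, b, x, y) for x, y in data])  # all True
```

A point strictly inside the margin band, such as the origin with label
$+1$, violates the constraint and would be infeasible for this classifier.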
When \eqref{constr1} and \eqref{constr2} are active, {\it i.e.},
equalities hold, two hyperplanes $H_+$ and $H_-$ are respectively
defined and the distance between them is the margin ($M$). The vectors
$\mathbf{x}$ from the training set which satisfy either $H_+$ or $H_-$
are the so-called support vectors. As an illustration, Figure
\ref{fig:margin} depicts a simple two-dimensional case of binary
classification in which the margin, hyperplanes and support vectors
are indicated.
% --------------------------------------------------------------------------
\begin{figure}[!ht]
\centering{
\includegraphics[height=6cm]{fig/margin.pdf}
\caption{Binary classification. {\footnotesize{Adapted from
      \citeonline{kecman2001}, p. 154}}}
\label{fig:margin}}
\end{figure}
% --------------------------------------------------------------------------

The margin is, indeed, defined by the distance between support vectors
from distinct classes ({\it e.g.} $\mathbf{x}_{SV,-1}$ and
$\mathbf{x}_{SV,+1}$) projected onto the direction of the vector
$\mathbf{w}$ normal to the hyperplanes. That is,
% --------------------------------------------------------------------------
\begin{equation}\label{marg}
  M
  = (\mathbf{x}_{SV,+1} - \mathbf{x}_{SV,-1})_{\mathbf{w}} = ||\mathbf{x}_{SV,+1}||\cos(\omega) - ||\mathbf{x}_{SV,-1}||\cos(\rho)
\end{equation}
% --------------------------------------------------------------------------
in which $\omega$ and $\rho$ are respectively the angles between
$\mathbf{x}_{SV,+1}$ and $\mathbf{w}$ and between $\mathbf{x}_{SV,-1}$
and $\mathbf{w}$ (see Figure \ref{fig:margin}). These angles are given
by:
% --------------------------------------------------------------------------
\begin{equation}\label{angles}
  \cos(\omega) = \frac{\mathbf{w}^T\mathbf{x}_{SV,+1}}{||\mathbf{w}||\cdot||\mathbf{x}_{SV,+1}||} \qquad \cos(\rho) = \frac{\mathbf{w}^T\mathbf{x}_{SV,-1}}{||\mathbf{w}||\cdot||\mathbf{x}_{SV,-1}||}
\end{equation}
% --------------------------------------------------------------------------
By replacing \eqref{angles} in \eqref{marg} and considering that
$\mathbf{x}_{SV,+1}$ and $\mathbf{x}_{SV,-1}$ belong to $H_+$ and
$H_-$, respectively, one obtains:
% --------------------------------------------------------------------------
\begin{equation}
  M = \frac{\mathbf{w}^T\mathbf{x}_{SV,+1} - \mathbf{w}^T\mathbf{x}_{SV,-1}}{||\mathbf{w}||} = \frac{-b+1 - (-b-1)}{||\mathbf{w}||} = \frac{2}{||\mathbf{w}||}
\end{equation}
% --------------------------------------------------------------------------
where $||\mathbf{w}|| = \sqrt{\mathbf{w}^T\mathbf{w}} = \sqrt{w_1^2 +
  w_2^2 + \cdots + w_{n}^2}$ is the $\ell_2$-norm of vector
$\mathbf{w}$. For further details in margin definition consult
\citeonline{kecman2001}. Also, one can find information about the
underlying analytic geometry concepts in \citeonline{reis1996}.
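The equality of the projection-based margin and the closed form
$M = 2/||\mathbf{w}||$ can be verified numerically. A minimal sketch,
assuming a hypothetical hyperplane $\mathbf{w} = (1,1)$, $b = -3$ with
one support vector placed on each margin hyperplane:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    return math.sqrt(dot(u, u))

# Hypothetical hyperplane and one support vector on each margin hyperplane
w, b = (1.0, 1.0), -3.0
x_pos = (2.0, 2.0)   # lies on H+: w^T x + b = +1
x_neg = (1.0, 1.0)   # lies on H-: w^T x + b = -1

# Margin as the projection of (x_pos - x_neg) onto the direction of w
diff = tuple(p - n for p, n in zip(x_pos, x_neg))
m_projection = dot(w, diff) / norm(w)

# Closed-form margin M = 2 / ||w||
m_closed = 2.0 / norm(w)

print(m_projection, m_closed)  # both sqrt(2)
```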

Notice that maximizing $M$ is equivalent to minimizing
$||\mathbf{w}||$ and, moreover, minimizing
$\sqrt{\mathbf{w}^T\mathbf{w}}$ is equivalent to minimizing
$\mathbf{w}^T\mathbf{w}$. In this way, the convex optimization problem
to be solved during the training step is:
% --------------------------------------------------------------------------
\begin{eqnarray}
  \min_{\mathbf{w}, b} & & \frac{1}{2}\mathbf{w}^T\mathbf{w} \label{problinsep} \\
  \text{subject to} & & y_i \cdot (\mathbf{w}^T\mathbf{x}_i + b) \geq 1, \quad i = 1, \dots, \ell \nonumber
\end{eqnarray}
% --------------------------------------------------------------------------
where the constant $\frac{1}{2}$ is a numerical convenience and does
not change the solution. Notice that in the case of linearly separable
data, there is no training error and $\mathbf{w}^T\mathbf{w}$ is
associated with machine capacity \cite{vapnik2000,kecman2005}. To
solve \eqref{problinsep}, a classic quadratic programming problem,
Lagrange multipliers along with \gls{kkt} conditions are used. Due to
its convex nature (objective function is convex and its constraints
result in a convex feasible region), the \gls{kkt} conditions are
necessary and sufficient for an optimum \cite{boyd2004}. Firstly, the
Lagrangian function is structured as follows:
% --------------------------------------------------------------------------
\begin{equation}\label{lp}
  {\mathcal L}(\mathbf{w},b, \bm{\alpha}) = \frac{1}{2}\mathbf{w}^T\mathbf{w} - \sum_{i=1}^{\ell}\alpha_i \cdot [y_i \cdot (\mathbf{w}^T\mathbf{x}_i + b) - 1]
\end{equation}
% --------------------------------------------------------------------------
in which $\bm{\alpha}$ is the $\ell$-dimensional vector of Lagrange
multipliers. It is necessary to find the saddle point
$(\mathbf{w}_0,b_0, \bm{\alpha}_0)$ of ${\mathcal L}$, since
\eqref{lp} has to be minimized with respect to the {\it primal
  variables} $\mathbf{w}$ and $b$ and maximized with respect to the
{\it dual variables} $\alpha_1, \alpha_2, \dots, \alpha_{\ell}$, which
must be non-negative, {\it i.e.}, $\alpha_i \geq 0$ for all $i$. This
problem can be solved either in the primal or in the dual space.
However, the latter approach has been adopted by a number of works due
to the insightful results it provides (for example, see
\citeonline{vapnik2000}, \citeonline{schol2002},
\citeonline{kecman2005}).
 
In order to find the saddle point of ${\mathcal L}$, the \gls{kkt} conditions
are then stated:
% --------------------------------------------------------------------------
\begin{align}
  \frac{\partial\,{\mathcal L}(\mathbf{w}_0,b_0,
    \bm{\alpha}_0)}{\partial\,\mathbf{w}} & =  0, \quad \mathbf{w}_0 = \sum_{i=1}^{\ell}\alpha_{0i}\,y_i\,\mathbf{x}_i \label{der1}\\
  \frac{\partial\,{\mathcal L}(\mathbf{w}_0,b_0,
    \bm{\alpha}_0)}{\partial\,b} & =  0, \quad \sum_{i=1}^{\ell}\alpha_{0i}\,y_i = 0 \label{der2}\\
  y_i \cdot (\mathbf{w}_0^T\mathbf{x}_i + b_0) - 1 & \geq  0, \quad  i = 1, 2, \dots, \ell \label{uniquemodif}\\
  \alpha_{0i} & \geq  0, \quad  \forall \, i  \label{alphas} \\
  \alpha_{0i} \cdot [ y_i \cdot (\mathbf{w}_0^T\mathbf{x}_i + b_0) - 1] & =  0,  \quad  \forall \, i \label{compl} 
\end{align}
% --------------------------------------------------------------------------
where the first two equalities in \eqref{der1} and \eqref{der2} result
directly from setting the derivatives of ${\mathcal L}$ with respect
to the primal variables to zero at the optimum
$(\mathbf{w}_0,b_0, \bm{\alpha}_0)$; the inequalities in
\eqref{uniquemodif} and \eqref{alphas} require primal and dual
feasibility of the solution, respectively; the last equations
\eqref{compl} are the \gls{kkt} complementarity conditions, which
state that the product of dual variables and primal constraints must
vanish at the optimum. A useful insight stems from equations
\eqref{compl}: support vectors lie on either $H_+$ or $H_-$ and thus
nullify the bracketed term of the complementarity conditions, so their
Lagrange multipliers may be strictly positive; for any other example
the bracketed term is strictly positive and the corresponding
multiplier must be zero. According to \citeonline{nocedal2006}, the
constraints can be classified as:
\begin{itemize}
\item Inactive: if the constraint strictly satisfies the inequality
  and the related Lagrange multiplier is exactly 0. Then, the optimal
  solution as well as the optimal objective function value are
  indifferent to whether such constraint is present or not.
\item Strongly active: if the constraint satisfies the equality and
  the respective Lagrange multiplier is strictly positive. Thus, a
  perturbation in the constraint has an impact on the optimal
  objective value with magnitude proportional to the Lagrange
  multiplier.
\item Weakly active: if the constraint satisfies the equality and the
  associated Lagrange multiplier is exactly 0. In these cases, small
  perturbations of such a constraint in some directions hardly affect
  the optimal objective value and solution.
\end{itemize}
In this way, for practical purposes, weakly active constraints can be
treated as inactive. Hence, support vectors are identifiable by
strictly positive Lagrange multipliers. It is important to emphasize
that solving the \gls{svm} training problem is equivalent to solving
the system defined by the \gls{kkt} conditions, given that they are
necessary and sufficient for a convex optimization problem. By
substituting the equalities \eqref{der1} and \eqref{der2} into
${\mathcal L}$, an expression involving only the dual variables is
obtained, which has to be maximized:
% --------------------------------------------------------------------------
\begin{eqnarray}
 \max_{\bm{\alpha}} & & {\mathcal L}_d(\bm{\alpha}) = \sum_{i = 1}^{\ell}\alpha_i - \frac{1}{2}\sum_{i = 1}^{\ell}\sum_{j = 1}^{\ell}y_iy_j\alpha_i\alpha_j\mathbf{x}_i^T\mathbf{x}_j \label{ld}\\
\text{subject to} & & \alpha_i \geq 0, \quad i = 1, 2, \dots, \ell \label{alphasi}\\
& & \sum_{i=1}^{\ell}\alpha_i\,y_i = 0 \label{alphasy}
\end{eqnarray}
% --------------------------------------------------------------------------
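For a symmetric two-point toy problem, the dual can even be maximized
by direct search. The sketch below is illustrative only (a grid search
stands in for a proper quadratic programming solver, and the data are
hypothetical): with $\mathbf{x}_1 = (-1,0)$, $y_1 = -1$ and
$\mathbf{x}_2 = (1,0)$, $y_2 = +1$, the constraint
$\sum_i \alpha_i y_i = 0$ forces $\alpha_1 = \alpha_2 = \alpha$, so the
dual reduces to a one-dimensional problem:

```python
# Minimal sketch: maximizing the dual L_d of a hypothetical two-point
# problem by grid search (a stand-in for a real QP solver).

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

xs = [(-1.0, 0.0), (1.0, 0.0)]
ys = [-1, 1]

def dual(alphas):
    s = sum(alphas)
    q = sum(ys[i] * ys[j] * alphas[i] * alphas[j] * dot(xs[i], xs[j])
            for i in range(2) for j in range(2))
    return s - 0.5 * q

# The equality constraint forces alpha_1 = alpha_2 = alpha >= 0,
# so L_d reduces to 2*alpha - 2*alpha^2; search it on a grid.
grid = [k / 1000.0 for k in range(2001)]
alpha = max(grid, key=lambda a: dual((a, a)))
print(alpha)  # 0.5, the maximizer of 2a - 2a^2
```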

The dual problem resolution yields $\ell$ non-negative values for the
Lagrange multipliers. By replacing these values in equation
\eqref{der1}, the optimal normal vector $\mathbf{w}_0$ is directly
defined. Notice that the summation over all $\ell$ examples is exactly
the same as the summation over only the support vectors, since
$\alpha_i = 0$ whenever $\mathbf{x}_i$ is not a support
vector. Differently from $\mathbf{w}_0$, $b_0$ is implicitly
determined by the \gls{kkt} complementarity conditions for any chosen
support vector $s$ ($s = 1, 2, \dots, nSV$, where $nSV$ is the number
of support vectors). However, due to numerical instabilities, it is
better to set $b_0$ as the mean over the values resulting from the
$nSV$ calculations of \eqref{compl}, that is \cite{burges1998,
  kecman2005}:
% --------------------------------------------------------------------------
\begin{equation}\label{b}
  b_0 = \frac{1}{nSV}\sum_{s=1}^{nSV} \left(\frac{1}{y_s} - \mathbf{w}_0^T\mathbf{x}_s\right)
\end{equation}
% --------------------------------------------------------------------------

Once the optimal solution has been found, the decision function is
obtained from the separating hyperplane $H$:
% --------------------------------------------------------------------------
\begin{equation}\label{decfun}
  d(\mathbf{x}) = \mathbf{w}_0^T\mathbf{x} + b_0 = \sum_{i=1}^{\ell}\alpha_{0i}y_i\mathbf{x}_i^T\mathbf{x} + b_0
\end{equation}
% --------------------------------------------------------------------------
If $d(\mathbf{x}) < 0$, then $\mathbf{x}$ is categorized into the
negative class $(y = -1)$. Otherwise, if $d(\mathbf{x}) > 0$,
$\mathbf{x}$ is classified as being in the positive class $(y =
+1)$.
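A minimal sketch of the dual-form decision function, using
hypothetical optimal values ($\alpha_{01} = \alpha_{02} = 0.5$,
$b_0 = 0$, consistent with a two-point toy set at $(\pm 1, 0)$ chosen
here for illustration):

```python
# Minimal sketch: evaluating d(x) = sum_i alpha_i y_i x_i^T x + b_0
# and classifying by its sign (hypothetical toy solution).

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

xs = [(-1.0, 0.0), (1.0, 0.0)]   # training inputs (both support vectors)
ys = [-1, 1]                     # labels
alphas = [0.5, 0.5]              # assumed optimal multipliers
b0 = 0.0                         # assumed optimal bias

def d(x):
    return sum(a * y * dot(xi, x) for a, y, xi in zip(alphas, ys, xs)) + b0

print([1 if d(x) > 0 else -1 for x in [(2.0, 0.5), (-3.0, 1.0)]])  # [1, -1]
```

Note that $d(\mathbf{x}) = \pm 1$ exactly on the support vectors, as
required by the active constraints.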

\subsection{Linear Soft Margin Classifier for Non-Linearly Separable Data}

In some situations, the linear classifier is still a good option for
separating overlapping data. The training optimization problem can
then be adapted to handle linearly non-separable data. Differently
from the previous classifier, it is now necessary to allow for
training errors, since constraints \eqref{constr1} and \eqref{constr2}
can be violated. Because of this, the margin becomes soft and examples
within or beyond it (either on the correct side of the separating
hyperplane or on the wrong one) are permitted, see Figure
\ref{fig:soft}.

% --------------------------------------------------------------------------
\begin{figure}[!ht]
\centering{
\includegraphics[height=6cm]{fig/soft.pdf}
\caption{Binary classification for non-linearly separable
  data. {\footnotesize{Adapted from \citeonline{kecman2005}, p. 20}}}
\label{fig:soft}}
\end{figure}
% --------------------------------------------------------------------------

In order to tackle this situation, new non-negative slack variables
($\xi_i, i = 1, 2, \dots, \ell$) along with a penalty parameter $C$
for margin violations are introduced into the training problem, which
becomes:
% --------------------------------------------------------------------------
\begin{eqnarray}
  \min_{\mathbf{w}, b, \bm{\xi}} & & \frac{1}{2}\mathbf{w}^T\mathbf{w} + \, C \cdot \sum_{i = 1}^{\ell}\xi_i \label{probsoft} \\
  \text{subject to} & & y_i \cdot (\mathbf{w}^T\mathbf{x}_i + b) \geq 1 - \xi_i, \quad i = 1, 2, \dots, \ell \\
  & & \xi_i \geq 0, \quad \forall \, i \label{xis}
\end{eqnarray}
% --------------------------------------------------------------------------
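At any fixed hyperplane, the smallest feasible slack for each example
is $\xi_i = \max(0,\, 1 - y_i(\mathbf{w}^T\mathbf{x}_i + b))$, which
makes the objective above easy to evaluate. A minimal sketch with
hypothetical overlapping data:

```python
# Minimal sketch (hypothetical data): minimal feasible slacks and the
# soft-margin objective (1/2) w^T w + C * sum(xi).

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def slacks(w, b, data):
    # Smallest xi_i satisfying y_i*(w^T x_i + b) >= 1 - xi_i, xi_i >= 0
    return [max(0.0, 1.0 - y * (dot(w, x) + b)) for x, y in data]

data = [((2.0, 2.0), +1),   # on the margin hyperplane H+, xi = 0
        ((2.5, 1.0), +1),   # inside the margin, 0 < xi < 1
        ((-1.0, -1.0), -1), # outside the margin, xi = 0
        ((1.0, 2.5), -1)]   # misclassified, xi > 1
w, b, C = (1.0, 1.0), -3.0, 10.0

xi = slacks(w, b, data)
objective = 0.5 * dot(w, w) + C * sum(xi)
print(xi, objective)  # [0.0, 0.5, 0.0, 1.5] 21.0
```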
The $i^{th}$ slack variable measures the distance from point $i$ to
its corresponding margin bound: if the point crosses that bound and
lies on its ``wrong'' side, $\xi_i > 0$; otherwise, $\xi_i = 0$.
Figure \ref{fig:soft} clarifies the role of these slack variables. The
factor $C$, in turn, controls the trade-off between training error and
machine capacity. The problem \eqref{probsoft}-\eqref{xis} is in fact
a generalization of the training problem for linearly separable
data. In the same way, the Lagrangian function is formulated and the
\gls{kkt} conditions are used to solve it. The Lagrangian is:
% --------------------------------------------------------------------------
\begin{equation}\label{lsoft}
  {\mathcal L}(\mathbf{w},b,\bm{\xi},\bm{\alpha},\bm{\beta}) = \frac{1}{2}\mathbf{w}^T\mathbf{w} + \, C \cdot \sum_{i=1}^{\ell}\xi_i - \sum_{i=1}^{\ell}\alpha_i \cdot [y_i \cdot (\mathbf{w}^T\mathbf{x}_i + b) - 1 + \xi_i] - \sum_{i = 1}^{\ell}\beta_i \xi_i
\end{equation}
% --------------------------------------------------------------------------
in which $\bm{\alpha}$ and $\bm{\beta}$ are $\ell$-dimensional vectors
of Lagrange multipliers. Once more, it is necessary to find the saddle
point $(\mathbf{w}_0,b_0,\bm{\xi}_0,\bm{\alpha}_0,\bm{\beta}_0)$ of
\eqref{lsoft}. The \gls{kkt} conditions are:
% --------------------------------------------------------------------------
\begin{align}
  \frac{\partial\,{\mathcal L}(\mathbf{w}_0,b_0,\bm{\xi}_0,\bm{\alpha}_0,\bm{\beta}_0)}{\partial\,\mathbf{w}} & =  0, \quad \mathbf{w}_0 = \sum_{i=1}^{\ell}\alpha_{0i}\,y_i\,\mathbf{x}_i \label{der1soft}\\
  \frac{\partial\,{\mathcal L}(\mathbf{w}_0,b_0,\bm{\xi}_0,\bm{\alpha}_0,\bm{\beta}_0)}{\partial\,b} & =  0, \quad \sum_{i=1}^{\ell}\alpha_{0i}\,y_i = 0 \label{der2soft}\\
  \frac{\partial\,{\mathcal L}(\mathbf{w}_0,b_0,\bm{\xi}_0,\bm{\alpha}_0,\bm{\beta}_0)}{\partial\,\xi_i} & =  0, \quad \alpha_{0i} + \beta_{0i} = C, \quad i = 1, 2, \dots, \ell\label{der3soft}\\  
  y_i \cdot (\mathbf{w}_0^T\mathbf{x}_i + b_0) - 1 + \xi_{0i} & \geq  0, \quad  \forall \, i \label{uniquemodifsoft}\\
  \xi_{0i} & \geq  0, \quad  \forall \, i \label{xissoft} \\
  \alpha_{0i} & \geq  0, \quad  \forall \, i \label{alphassoft} \\
  \beta_{0i} & \geq  0, \quad  \forall \, i \label{betassoft} \\
  \alpha_{0i} \cdot [ y_i \cdot (\mathbf{w}_0^T\mathbf{x}_i + b_0) - 1 + \xi_{0i}] & =  0,  \quad  \forall \, i \label{compl1soft}\\
  \beta_{0i} \xi_{0i} & =  0, \quad (C - \alpha_{0i})\cdot \xi_{0i} = 0, \quad \forall \, i \label{compl2soft}
\end{align}
% --------------------------------------------------------------------------

If equations \eqref{der1soft}, \eqref{der2soft} and \eqref{der3soft}
are replaced in \eqref{lsoft}, a Lagrangian depending only on the dual
variables $\alpha_i, i = 1,2, \dots, \ell$ is obtained. The dual
problem is exactly the same as the one presented for the classifier of
linearly separable data, except that the Lagrange multipliers
$\alpha_i$ now have the parameter $C$ as an upper bound, so as to
respect the non-negativity of the Lagrange multipliers $\beta_i$ (see
\eqref{der3soft} and \eqref{betassoft}). The problem is as follows:
% --------------------------------------------------------------------------
\begin{eqnarray}
  \max_{\bm{\alpha}} & & {\mathcal L}_d(\bm{\alpha}) = \sum_{i = 1}^{\ell}\alpha_i - \frac{1}{2}\sum_{i = 1}^{\ell}\sum_{j = 1}^{\ell}y_iy_j\alpha_i\alpha_j\mathbf{x}_i^T\mathbf{x}_j \nonumber\\
  \text{subject to} & & 0 \leq \alpha_i \leq C, \quad i = 1, 2, \dots, \ell \label{alphasupper}\\
  & & \sum_{i=1}^{\ell}\alpha_i\,y_i = 0 \nonumber
\end{eqnarray}
% --------------------------------------------------------------------------

The resulting decision function is also given by \eqref{decfun}, and
its sign determines the classification of an input $\mathbf{x}$ into
the positive or the negative class, as presented in the previous
subsection. Due to the \gls{kkt} complementarity conditions
\eqref{compl1soft} and \eqref{compl2soft}, the Lagrange multipliers
have the following possible solutions:
\begin{itemize}
\item $\alpha_{0i} = 0, \xi_{0i} = 0$ and the input $\mathbf{x}_i$ is
  correctly classified.
\item $0 < \alpha_{0i} < C$, then $y_i\cdot(\mathbf{w}_0^T\mathbf{x}_i
  + b_0) - 1 + \xi_{0i} = 0$ and $\xi_{0i} = 0$. Therefore,
  $y_i\cdot(\mathbf{w}_0^T\mathbf{x}_i + b_0) = 1$ and the example
  $(\mathbf{x}_i, y_i)$ is a support vector. The support vectors with
  associated Lagrange multipliers satisfying the condition $0 <
  \alpha_{0i} < C$ are named {\it free support vectors} and permit the
  calculation of $b_0$ as follows:
% --------------------------------------------------------------------------
\begin{equation}\label{bsoft}
  b_0 = \frac{1}{nFSV}\sum_{s=1}^{nFSV}\left(\frac{1}{y_s}-\mathbf{w}_0^T\mathbf{x}_s\right)
\end{equation}
% --------------------------------------------------------------------------
where $nFSV$ is the number of free support vectors. Again, it is
recommended to set $b_0$ as an average value.
\item $\alpha_{0i} = C$, then $y_i\cdot(\mathbf{w}_0^T\mathbf{x}_i +
  b_0) - 1 + \xi_{0i} = 0$ and $\xi_{0i} \geq 0$. Hence,
  $(\mathbf{x}_i, y_i)$ is a {\it bounded support vector}, given that
  its related Lagrange multiplier reaches the upper bound $C$. Note
  that the slack variable is not precisely determined and, because of
  this, bounded support vectors do not participate in the calculation
  of $b_0$. A bounded support vector lies on the wrong side of either
  $H_+$ or $H_-$, depending on the label $y_i$. For $0 < \xi_{0i} <
  1$, $\mathbf{x}_i$ is correctly classified; otherwise, if $\xi_{0i}
  \geq 1$, $\mathbf{x}_i$ is wrongly classified. For the sake of
  illustration, in Figure \ref{fig:soft}, A's actual label is $y =
  -1$, but it is on the right side of $H$ and is thus
  misclassified. On the other hand, B has label $y = 1$ but lies on
  the left side of $H_+$; however, because B does not cross the
  separating hyperplane $H$, it is still correctly classified. The
  same analysis can be done for the bounded support vectors C and D to
  conclude that they are correctly and wrongly classified,
  respectively.
\end{itemize}
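The three cases above amount to sorting examples by where
$\alpha_{0i}$ sits relative to $0$ and $C$. A minimal sketch with
hypothetical multiplier values (the tolerance handles the inexact
arithmetic of numerical solvers):

```python
# Minimal sketch: classifying examples by their Lagrange multipliers
# (hypothetical values; tol absorbs solver round-off).

def sv_type(alpha, C, tol=1e-8):
    if alpha < tol:
        return "non-SV"          # inactive constraint, xi = 0
    if alpha > C - tol:
        return "bounded SV"      # alpha = C, xi >= 0, excluded from b_0
    return "free SV"             # 0 < alpha < C, lies exactly on H+ or H-

C = 10.0
alphas = [0.0, 3.7, 10.0, 0.2]
print([sv_type(a, C) for a in alphas])
# ['non-SV', 'free SV', 'bounded SV', 'free SV']
```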

\subsection{Non-Linear Classifier of Maximal Margin}

When the decision function is not linear in the input space, the
linear classifiers previously described cannot separate the data
well. For example, in Figure \ref{fig:classnonlin}-a, the hyperplane
performs poorly in classifying the examples into their actual
categories in the input space, whereas the non-linear function
perfectly accomplishes such a task.
% --------------------------------------------------------------------------
\begin{figure}[!ht]
\centering{
\includegraphics[height=5.5cm]{fig/classnonlin.pdf}
\caption{Non-linear binary classification}
\label{fig:classnonlin}}
\end{figure}
% --------------------------------------------------------------------------

In these situations, \gls{svm}'s main idea is to map the input vectors
$\mathbf{x}_i \in \mathbb{R}^n$ into vectors $\Phi(\mathbf{x}_i)$ of a
space of higher dimension named the {\it feature space} ($\cal F$),
where $\Phi$ denotes the mapping $\mathbb{R}^n \rightarrow \cal
F$. In $\cal F$, \gls{svm} obtains a linear separating hyperplane of
maximal margin, so that a linear classification is still executed, but
in a different space, see Figure \ref{fig:classnonlin}-b.

Notice that the dual formulation \eqref{ld} as well as the decision
function \eqref{decfun}, which hold for both linearly separable and
overlapping data, involve the input data only in the form of dot
products $\mathbf{x}_i^T\mathbf{x}_j$, $i, j = 1, 2, \dots,
\ell$. Then, when the mapping $\Phi$ is applied, the training
algorithm depends on products of the form
$\Phi^T(\mathbf{x}_i)\Phi(\mathbf{x}_j)$. The choice of an appropriate
mapping may be difficult and, besides that, the explicit calculation
of the dot products $\Phi^T(\mathbf{x}_i)\Phi(\mathbf{x}_j)$ may be
computationally burdensome if the dimension of $\cal F$ is very
large. The latter problem is related to the phenomenon known as the
{\it curse of dimensionality} and can be avoided by introducing kernel
functions $K(\mathbf{x}_i, \mathbf{x}_j) =
\Phi^T(\mathbf{x}_i)\Phi(\mathbf{x}_j)$. In this way, dot products in
the feature space are directly calculated by computing
$K(\mathbf{x}_i, \mathbf{x}_j)$, which in turn is defined on the input
space. Therefore, only the kernel function needs to be used in the
training algorithm, a possibly high dimension of $\cal F$ is bypassed
and explicit knowledge of the mapping $\Phi$ is not even required
\cite{burges1998,kecman2005}. Some kernel functions are presented in
Table \ref{tab:kernel}.
% --------------------------------------------------------------------------
\begin{table}[ht]
\begin{center}
\begin{footnotesize}
\caption{Common kernel functions. {\footnotesize{Adapted from \citeonline{kecman2001}, p. 171}}}
\label{tab:kernel}

\vspace{0.2cm}

\begin{tabular}{ll} \toprule 
  \textbf{Kernel Function} & \textbf{Type} \\\midrule 
  $K(\mathbf{x}_i, \mathbf{x}_j) = [(\mathbf{x}_i^T\mathbf{x}_j) + 1]^d$ & Complete polynomial of degree $d$ \\[0.5em]
  $K(\mathbf{x}_i, \mathbf{x}_j) = \exp\left[- \frac{(\mathbf{x}_i - \mathbf{x}_j)^2}{2 \sigma^2} \right]$ & Gaussian \gls{rbf} \\[0.9em]
  $K(\mathbf{x}_i, \mathbf{x}_j) = \tanh[(\mathbf{x}_i^T\mathbf{x}_j) + b]^*$ & Multilayer perceptron \\\bottomrule
\end{tabular}

\vspace{0.1cm}

\hspace{-6cm}$^*$Only for particular values of $b$
\end{footnotesize}
\end{center}
\end{table}
% --------------------------------------------------------------------------
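To make the identity $K(\mathbf{x}_i, \mathbf{x}_j) = \Phi^T(\mathbf{x}_i)\Phi(\mathbf{x}_j)$ concrete, the following minimal Python sketch (an illustration with arbitrarily chosen data values) verifies that the complete polynomial kernel of degree $d = 2$ on $\mathbb{R}^2$ coincides with an ordinary dot product under an explicit six-dimensional feature map:

```python
import numpy as np

def poly_kernel(x, y, d=2):
    # Complete polynomial kernel: K(x, y) = (x^T y + 1)^d
    return (x @ y + 1.0) ** d

def phi(x):
    # Explicit feature map Phi: R^2 -> R^6 for the degree-2 polynomial kernel
    x1, x2 = x
    return np.array([1.0,
                     np.sqrt(2.0) * x1, np.sqrt(2.0) * x2,
                     x1 ** 2, x2 ** 2,
                     np.sqrt(2.0) * x1 * x2])

x = np.array([1.0, 2.0])
y = np.array([3.0, -1.0])
# The kernel evaluated in input space equals the dot product in feature space
assert np.isclose(poly_kernel(x, y), phi(x) @ phi(y))
```

For higher degrees, or for the Gaussian \gls{rbf} kernel, the explicit map grows very quickly (or becomes infinite-dimensional), which is precisely why only $K$ is evaluated in practice.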

Kernel functions should satisfy Mercer's conditions, which state that,
in order to describe a dot product in some $\cal F$, $K(\mathbf{x}_i,
\mathbf{x}_j)$ must be symmetric and fulfill the inequality
% --------------------------------------------------------------------------
\begin{equation}\label{mercer}
  \int\int K(\mathbf{x}_i, \mathbf{x}_j) g(\mathbf{x}_i) g(\mathbf{x}_j) d\mathbf{x}_i d\mathbf{x}_j > 0
\end{equation}
% --------------------------------------------------------------------------
for all functions $g(\cdot) \neq 0$  satisfying 
% --------------------------------------------------------------------------
\begin{equation}\label{g2}
  \int  g^2(\mathbf{x})d\mathbf{x} < \infty
\end{equation}
% --------------------------------------------------------------------------
For further details on kernel functions and Mercer's conditions, the
interested reader may consult \citeonline{cristiniani2000},
\citeonline{vapnik2000}, \citeonline{kecman2001} and
\citeonline{schol2002}.
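A finite-sample analogue of condition \eqref{mercer} is that every Gram matrix built from $K$ must be symmetric positive semi-definite. The short Python sketch below (an illustrative numerical check under this assumption, not a proof of Mercer's conditions) tests this for the Gaussian \gls{rbf} kernel:

```python
import numpy as np

def gram_matrix(X, kernel):
    # Gram matrix K_ij = K(x_i, x_j) over a finite sample
    n = len(X)
    return np.array([[kernel(X[i], X[j]) for j in range(n)] for i in range(n)])

def is_psd(K, tol=1e-10):
    # Symmetry plus non-negative eigenvalues: the finite-sample
    # counterpart of Mercer's conditions
    return bool(np.allclose(K, K.T) and np.linalg.eigvalsh(K).min() >= -tol)

rbf = lambda u, v, s=1.0: np.exp(-np.sum((u - v) ** 2) / (2.0 * s ** 2))
X = np.random.default_rng(0).normal(size=(20, 3))
print(is_psd(gram_matrix(X, rbf)))  # True: the Gaussian RBF kernel is valid
```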

The generalized dual Lagrangian adapted to non-linear classifiers then
becomes:
% --------------------------------------------------------------------------
\begin{equation}
  {\mathcal L}_d(\bm{\alpha}) = \sum_{i = 1}^{\ell}\alpha_i - \frac{1}{2}\sum_{i = 1}^{\ell}\sum_{j = 1}^{\ell}y_iy_j\alpha_i\alpha_j K(\mathbf{x}_i, \mathbf{x}_j) 
\end{equation}
% --------------------------------------------------------------------------
If the data are separable, constraints \eqref{alphasi} and
\eqref{alphasy} should be taken into account, whereas if the data are
non-separable, the Lagrange multipliers must be upper bounded by the
parameter $C$. Therefore, in the latter case, constraint
\eqref{alphasupper} must be considered along with equality
\eqref{alphasy}. The decision function \eqref{decfun} is slightly
modified so as to incorporate the kernel function, and the decision
once again relies on its sign:
% --------------------------------------------------------------------------
\begin{equation}
    d(\mathbf{x}) = \mathbf{w}_0^T\Phi(\mathbf{x}) + b_0 = \sum_{i=1}^{\ell}\alpha_{0i}y_iK(\mathbf{x}_i,\mathbf{x}) + b_0
\end{equation}
% --------------------------------------------------------------------------
To compute $b_0$, it is necessary to replace the dot products by the
kernel function where they naturally emerge in \eqref{b} or
\eqref{bsoft}, depending on the nature of the training data (separable
or not). Nevertheless, given that the training algorithm for
overlapping data is in fact a generalization of the separable case,
\gls{svm} non-linear classifiers embody the more general approach.
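Non-linear \gls{svm} classifiers of this form are readily available in software libraries. The hedged sketch below uses scikit-learn's {\sf SVC} (the data set and parameter values are illustrative assumptions) to separate a class enclosed by a circle, a task no linear classifier in the input space can accomplish:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Class +1 inside the unit circle, class -1 outside it: the two classes
# are not linearly separable in the input space R^2
X = rng.uniform(-2.0, 2.0, size=(200, 2))
y = np.where((X ** 2).sum(axis=1) < 1.0, 1, -1)

clf = SVC(kernel="rbf", C=10.0, gamma=1.0)  # Gaussian RBF kernel
clf.fit(X, y)
print(clf.score(X, y))  # high training accuracy despite the non-linearity
```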

\subsection{Multi-classification}

The multi-classification problem is a generalization of binary
classification which entails three or more categories $(m \geq 3)$. A
usual technique to tackle multi-classification is to combine several
binary classifiers \cite{hsu2002}.

One of the implementations of multi-class classifiers is the {\it
  one-against-all} approach. It constructs $m$ decision functions by
means of the resolution of $m$ training problems of the form:
% --------------------------------------------------------------------------
\begin{eqnarray}
  \min_{\mathbf{w}_k,b_k,\bm{\xi}_k} & & \frac{1}{2}\mathbf{w}_k^T\mathbf{w}_k+ C \cdot \sum_{i = 1}^{\ell}\xi_{ki} \label{oaa}\\
  \text{subject to} & & \mathbf{w}_k^T\Phi(\mathbf{x}_i) + b_k \geq 1 - \xi_{ki}, \quad \text{if } y_i = k, \quad i = 1, 2, \dots, \ell \label{i=k}\\
  & & \mathbf{w}_k^T\Phi(\mathbf{x}_i) + b_k \leq -1 + \xi_{ki}, \quad \text{if } y_i \neq k, \quad \forall \, i \label{i!=k}\\
  & & \xi_{ki} \geq 0, \quad \forall \, i \label{xiki}
\end{eqnarray}
% --------------------------------------------------------------------------
where $k = 1, 2, \dots, m$ is the category index. Note that
\eqref{oaa}-\eqref{xiki} is a general primal formulation of the
training problem, since it permits errors during the training step and
the input vectors $\mathbf{x}_i$ are also mapped into a feature space
through $\Phi$. However, when such a problem is transformed into its
dual counterpart, the dot products
$\Phi^T(\mathbf{x}_i)\Phi(\mathbf{x}_j)$ once more naturally appear
and a kernel function $K(\mathbf{x}_i, \mathbf{x}_j)$ may be used
instead of them. In this way, explicit knowledge of the mapping $\Phi$
is not necessary. The same argument is valid for the decision
functions resulting from the training problems.

Given an unseen input $\mathbf{x}$ ({\it i.e.}, not from the training
set $D$), one has the task of selecting its category. For that, the
$m$ decision functions are calculated for the considered
$\mathbf{x}$. The chosen category is the one associated with the
decision function that has returned the greatest value. In summary:
% --------------------------------------------------------------------------
\begin{equation}
  \text{Class}(\mathbf{x})\equiv \arg\max_{k} [\mathbf{w}_{0k}^T\Phi(\mathbf{x}) + b_{0k}]  \label{oaa-class}
\end{equation}
% --------------------------------------------------------------------------
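The one-against-all training \eqref{oaa}-\eqref{xiki} and the decision rule \eqref{oaa-class} can be sketched in a few lines of Python. The snippet below is a minimal illustration in which scikit-learn's binary {\sf SVC} stands in for each of the $m$ machines (the \gls{rbf} kernel and parameter values are assumptions of the example):

```python
import numpy as np
from sklearn.svm import SVC

def train_one_vs_all(X, y, C=1.0):
    # One binary training problem per class k: class k vs. all others
    models = {}
    for k in np.unique(y):
        clf = SVC(kernel="rbf", C=C)
        clf.fit(X, np.where(y == k, 1, -1))
        models[k] = clf
    return models

def classify(models, x):
    # Class(x) = argmax over k of the k-th decision function value
    scores = {k: clf.decision_function(x.reshape(1, -1))[0]
              for k, clf in models.items()}
    return max(scores, key=scores.get)
```

On well-separated data, the $m$ decision values disagree only near class boundaries, where the argmax rule resolves the ties.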

Another approach to multi-classification using binary classifiers is
the {\it one-against-one} method. Its main idea consists in the
construction of $\frac{m(m-1)}{2}$ classifiers, each involving only
two categories. For instance, to train the data from classes $j$ and
$k$, the following optimization problem has to be solved:
% --------------------------------------------------------------------------
\begin{eqnarray}
  \min_{\mathbf{w}_{jk},b_{jk},\bm{\xi}_{jk}} & & \frac{1}{2}\mathbf{w}_{jk}^T\mathbf{w}_{jk}+ C \cdot \sum_{i = 1}^{\ell}\xi_{(jk)i} \label{oao}\\
  \text{subject to} & & \mathbf{w}_{jk}^T\Phi(\mathbf{x}_i) + b_{jk} \geq 1 - \xi_{(jk)i}, \quad \text{if } y_i = j, \quad i = 1, 2, \dots, \ell \label{i=j}\\
  & & \mathbf{w}_{jk}^T\Phi(\mathbf{x}_i) + b_{jk} \leq -1 + \xi_{(jk)i}, \quad \text{if } y_i = k, \quad \forall \, i \label{i=k-oao}\\
  & & \xi_{(jk)i} \geq 0, \quad \forall \, i \label{xijki}
\end{eqnarray}
% --------------------------------------------------------------------------
The dual formulation of such a problem may be constructed and then
solved as in the binary classifier training step described in the
previous subsections. If, on average, each class has $\frac{\ell}{m}$
training examples, then $\frac{m(m-1)}{2}$ problems with about
$\frac{2\ell}{m}$ decision variables each are solved.

In order to classify an unseen input $\mathbf{x}$, each one of the
resulting $\frac{m(m-1)}{2}$ decision functions is calculated for
$\mathbf{x}$. If the sign of $[\mathbf{w}_{0jk}^T\Phi(\mathbf{x}) +
b_{0jk}]$ is positive, then one vote is computed for class
$j$. Otherwise, if the sign is negative, class $k$ gains an additional
vote. Following this reasoning, the class that has obtained the
greatest number of votes is the one associated to $\mathbf{x}$. This
strategy of class choice is named the {\it max-wins} strategy.
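The one-against-one training and the max-wins vote can likewise be sketched. In the illustrative Python snippet below (scikit-learn's binary {\sf SVC} again stands in for the pairwise machines, an assumption of the example), each of the $\frac{m(m-1)}{2}$ classifiers casts one vote:

```python
import numpy as np
from itertools import combinations
from sklearn.svm import SVC

def train_one_vs_one(X, y, C=1.0):
    # One binary classifier per unordered pair of classes (j, k)
    models = {}
    for j, k in combinations(np.unique(y), 2):
        mask = (y == j) | (y == k)
        clf = SVC(kernel="rbf", C=C)
        clf.fit(X[mask], np.where(y[mask] == j, 1, -1))
        models[(j, k)] = clf
    return models

def max_wins(models, x):
    # Positive sign: one vote for class j; negative sign: one vote for
    # class k. The class with the most votes is chosen (max-wins).
    votes = {}
    for (j, k), clf in models.items():
        winner = j if clf.decision_function(x.reshape(1, -1))[0] > 0 else k
        votes[winner] = votes.get(winner, 0) + 1
    return max(votes, key=votes.get)
```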

\citeonline{platt2000} consider a training algorithm similar to the
one-against-one method. The selection of the class for unseen inputs
($\mathbf{x}$), however, is made by a different strategy, based on a
\gls{dag} whose nodes represent the binary decision functions. Then,
before predicting the class of $\mathbf{x}$, it is necessary to
traverse a path along the \gls{dag}.

Besides the combination of binary classifiers approach to solve
multi-classification problems, there are methods which consider all
classes simultaneously. According to \citeonline{schol2002}, in terms
of accuracy, the results obtained from these methods are comparable to
those obtained from the methods previously described. However, the
optimization problem has to deal with all support vectors at the same
time, whereas the binary classifiers usually have smaller support
vector sets, which has positive effects on training time. In addition,
the authors contend that there is probably no multi-class approach
that generally outperforms the others. Hence, the selection of a
multi-classification method depends on the nature of the problem at
hand, on the required accuracy and also on the time available for
training. For further details on multi-classification strategies, see
\citeonline{hsu2002} and \citeonline{schol2002}.

Notice that both the classification and multi-classification cases
have the parameter $C$ and the kernel parameter ({\it e.g.} $\sigma$,
$d$) in their training problems. The values of these parameters must
be defined beforehand by the user or by a systematic
procedure. Indeed, this issue is detailed in Section \ref{sec:msc},
where the influence of these parameters on \gls{svm} performance is
further discussed.

\subsection{Support Vector Regression}

In regression, it is necessary to estimate the functional dependence
between an output variable $y \in \mathbb{R}$ and an input variable
$\mathbf{x} \in \mathbb{R}^n$. The main difference between
classification and regression problems is that, in the former, only
discrete numbers associated with categories may be attributed to $y$,
whilst in the latter, $y$ assumes real values, since $y =
f(\mathbf{x})$ and $f: \mathbb{R}^n \rightarrow \mathbb{R}$. Similar
to classification, the function estimate is based on the training of
an \gls{svm} model using examples of the form (input, output). The
training phase of an \gls{svm} for regression resembles that of an
\gls{svm} for classification, given that both involve the resolution
of a convex quadratic optimization problem. Nevertheless, the
\gls{svr} dual training problem entails $2\ell$ decision variables,
instead of only $\ell$ as in the classification training step
\cite{schol2002,kecman2005}.

Additionally, for the regression case, an analog of the soft margin is
constructed in the space of the target values $y$ by using Vapnik's
linear $\varepsilon$-insensitive loss function (see Figure
\ref{fig:vapnikerror}), which is defined as:
% --------------------------------------------------------------------------
\begin{equation}
  L(\mathbf{x},y,f) = \max(0,|y - f(\mathbf{x})| - \varepsilon) \label{loss}
\end{equation}
% --------------------------------------------------------------------------
that is, the loss (cost) is zero if the difference between the
predicted $f(\mathbf{x})$ and the observed $y$ values is less than
$\varepsilon$. Otherwise, the loss is given by the absolute difference
between these two values. The Vapnik linear $\varepsilon$-insensitive
loss function defines a ``tube'' of ``radius'' $\varepsilon$ (Figure
\ref{fig:vapnikerror}-a). If the observed value is within the tube, there
is no computed loss, whilst for values outside the tube the cost is
positive. It follows that:
% --------------------------------------------------------------------------
\begin{eqnarray}
  |y - f(\mathbf{x})| - \varepsilon  = \xi, & & \text{for data ``above'' the } \varepsilon \text{-tube} \label{uptube} \\
  |y - f(\mathbf{x})| - \varepsilon  = \xi^*, & & \text{for data
    ``below'' the } \varepsilon \text{-tube} \label{downtube}
\end{eqnarray}
% --------------------------------------------------------------------------
where $\xi$ and $\xi^*$ are the slack variables for the mutually
exclusive situations presented.
% --------------------------------------------------------------------------
\begin{figure}[!ht]
\centering{
\includegraphics[height=6cm]{fig/vapnikerror.pdf}
\caption{Vapnik's $\varepsilon$-insensitive loss function}
\label{fig:vapnikerror}}
\end{figure}
% --------------------------------------------------------------------------
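Definition \eqref{loss} translates directly into code; the minimal Python sketch below evaluates the loss for a point inside and a point outside the $\varepsilon$-tube (the numeric values are arbitrary examples):

```python
def eps_insensitive(y, f_x, eps):
    # Vapnik's linear eps-insensitive loss: zero inside the tube of
    # "radius" eps, absolute deviation minus eps outside it
    return max(0.0, abs(y - f_x) - eps)

print(eps_insensitive(1.1, 1.0, eps=0.25))  # inside the tube -> 0.0
print(eps_insensitive(1.5, 1.0, eps=0.25))  # outside the tube -> 0.25
```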

Besides the $\varepsilon$-insensitive loss function, there are other
loss functions which can be incorporated by \gls{svr}, as long as they
are convex, in order to ensure the convexity of the training
optimization problem and hence the existence (and, for strict
convexity, uniqueness) of a minimum. For example, the quadratic
$\varepsilon$-insensitive loss function \eqref{loss2} leads to a
training problem with the same conveniences provided by the linear
$\varepsilon$-insensitive loss function. In this work, however, only
the latter loss function is considered. For more information about
loss functions in the \gls{svr} context, consult
\citeonline{vapnik2000} and \citeonline{schol2002}.
% --------------------------------------------------------------------------
\begin{equation}
  L(\mathbf{x},y,f) = [\max(0,|y - f(\mathbf{x})| - \varepsilon)]^2 \label{loss2}
\end{equation}
% --------------------------------------------------------------------------

This subsection presents the basic concepts of \gls{svr} for the case
of linear and non-linear approximation functions, as well as some
insights into \gls{svr} applied to time series prediction.

\subsubsection{Linear Regression Function}\label{sec:linregfun}

As in classification, the only information available to an \gls{svr}
is the training data set $D = \{(\mathbf{x}_1,y_1),(\mathbf{x}_2,y_2),
\dots, (\mathbf{x}_{\ell},y_{\ell})\}, \mathbf{x}_i\in \mathbb{R}^n,
y_i \in \mathbb{R}$. Suppose that a linear function is a good
regression alternative. Then, it is necessary to find the regression
hyperplane which best describes the training data, so as to allow for
the use of such a hyperplane to effectively regress unseen input
vectors. The equation of the regression hyperplane is:
% --------------------------------------------------------------------------
\begin{equation}
  f(\mathbf{x}) = \mathbf{w}^T\mathbf{x} + b \label{reghyp}
\end{equation}
% --------------------------------------------------------------------------

In order to find the optimal regression hyperplane, besides the
training error measured by the $\varepsilon$-insensitive loss
function, it is necessary, as in classification, to minimize the term
$\mathbf{w}^T\mathbf{w}$ related to machine capacity. A small
$\mathbf{w}^T\mathbf{w}$ corresponds to a linear function that is flat
\cite{schol2002,smola2004}. The primal optimization problem is then defined:
% --------------------------------------------------------------------------
\begin{eqnarray}
  \min_{\mathbf{w}, b, \bm{\xi}, \bm{\xi}^*} & & \frac{1}{2}\mathbf{w}^T\mathbf{w} + \, C \cdot \sum_{i = 1}^{\ell}(\xi_i + \xi_i^*) \label{linreg} \\
  \text{subject to} & & y_i - \mathbf{w}^T\mathbf{x}_i -  b \leq \varepsilon  + \xi_i, \quad i = 1, 2, \dots, \ell \label{above}\\
  & & \mathbf{w}^T\mathbf{x}_i +  b -  y_i \leq \varepsilon  + \xi_i^*, \quad \forall \, i \label{below}\\
  & & \xi_i \geq 0, \quad \forall \, i \label{xisreg}\\
  & & \xi_i^* \geq 0, \quad \forall \, i \label{xis*reg}
\end{eqnarray}
% --------------------------------------------------------------------------

The corresponding Lagrangian is:
% --------------------------------------------------------------------------
\begin{equation}\label{lreg}
\begin{split}
  {\mathcal L}(\mathbf{w},b,\bm{\xi},\bm{\xi}^*,\bm{\alpha},\bm{\alpha}^*,\bm{\beta},\bm{\beta}^*) = \frac{1}{2}\mathbf{w}^T\mathbf{w} + \, C \cdot \sum_{i=1}^{\ell}(\xi_i+\xi_i^*) - \sum_{i=1}^{\ell}\alpha_i \cdot [\mathbf{w}^T\mathbf{x}_i + b - y_i + \varepsilon + \xi_i] \\- \sum_{i=1}^{\ell}\alpha_i^* \cdot [y_i - \mathbf{w}^T\mathbf{x}_i - b + \varepsilon + \xi_i^*] - \sum_{i = 1}^{\ell}(\beta_i \xi_i + \beta_i^* \xi_i^*)
\end{split}
\end{equation}
% --------------------------------------------------------------------------
in which $\bm{\alpha},\bm{\alpha}^*,\bm{\beta},\bm{\beta}^*$ are the
$\ell$-dimensional vectors of Lagrange multipliers associated to
constraints \eqref{above}-\eqref{xis*reg}, respectively. Notice that
$\alpha_i$ and $\alpha_i^*$ cannot be strictly positive
simultaneously, given that no point can make both \eqref{above} and
\eqref{below} active at the same time. Hence, $\alpha_i\alpha_i^* =
0$. The Lagrangian in \eqref{lreg} must be minimized with respect to
the primal variables $\mathbf{w}, b, \bm{\xi}, \bm{\xi}^*$ and
maximized with respect to the dual variables
$\bm{\alpha},\bm{\alpha}^*,\bm{\beta},\bm{\beta}^*$. Then the saddle
point ($\mathbf{w}_0, b_0, \bm{\xi}_0,
\bm{\xi}_0^*,\bm{\alpha}_0,\bm{\alpha}^*_0,\bm{\beta}_0,\bm{\beta}_0^*$)
has to be found. The \gls{kkt} conditions are:
% --------------------------------------------------------------------------
\begin{align}
  \frac{\partial\,{\mathcal L}(\mathbf{w}_0, b_0, \bm{\xi}_0, \bm{\xi}_0^*,\bm{\alpha}_0,\bm{\alpha}^*_0,\bm{\beta}_0,\bm{\beta}_0^*)}{\partial\,\mathbf{w}} & =  0, \quad \mathbf{w}_0 = \sum_{i=1}^{\ell}(\alpha_{0i}-\alpha_{0i}^*)\mathbf{x}_i \label{der1reg} \\
  \frac{\partial\,{\mathcal L}(\mathbf{w}_0, b_0, \bm{\xi}_0, \bm{\xi}_0^*,\bm{\alpha}_0,\bm{\alpha}^*_0,\bm{\beta}_0,\bm{\beta}_0^*)}{\partial\,b} & = 0, \quad \sum_{i=1}^{\ell}(\alpha_{0i} - \alpha_{0i}^*) = 0 \label{der2reg} \\
  \frac{\partial\,{\mathcal L}(\mathbf{w}_0, b_0, \bm{\xi}_0,\bm{\xi}_0^*,\bm{\alpha}_0,\bm{\alpha}^*_0,\bm{\beta}_0,\bm{\beta}_0^*)}{\partial\,\xi_i} & = 0, \quad C - \alpha_{0i} = \beta_{0i}, \quad i = 1, 2, \dots, \ell \label{der3reg} \\
  \frac{\partial\,{\mathcal L}(\mathbf{w}_0, b_0, \bm{\xi}_0,\bm{\xi}_0^*,\bm{\alpha}_0,\bm{\alpha}^*_0,\bm{\beta}_0,\bm{\beta}_0^*)}{\partial\,\xi_i^*} & = 0, \quad C - \alpha_{0i}^* = \beta_{0i}^*, \quad \forall \, i \label{der4reg} \\
  \mathbf{w}_0^T\mathbf{x}_i + b_0 - y_i + \varepsilon + \xi_{0i} &
  \geq 0, \quad \forall \, i \label{primalconst1reg} \\
  y_i - \mathbf{w}_0^T\mathbf{x}_i - b_0 + \varepsilon + \xi_{0i}^* &
  \geq 0, \quad \forall \, i \label{primalconst2reg} \\
  \xi_{0i} & \geq 0, \quad \forall \, i \label{primalconst3reg} \\
  \xi_{0i}^* & \geq 0, \quad \forall \, i \label{primalconst4reg} \\
  \alpha_{0i} & \geq  0, \quad  \forall \, i  \label{alphasreg} \\
  \alpha_{0i}^* & \geq  0, \quad  \forall \, i  \label{alphas*reg} \\
  \beta_{0i} & \geq  0, \quad  \forall \, i  \label{betasreg} \\
  \beta_{0i}^* & \geq  0, \quad  \forall \, i  \label{betas*reg} \\
  \alpha_{0i} \cdot [\mathbf{w}_0^T\mathbf{x}_i + b_0 - y_i +
  \varepsilon + \xi_{0i}] & = 0, \quad \forall \, i \label{compl1reg} \\
  \alpha_{0i}^* \cdot [y _i - \mathbf{w}_0^T\mathbf{x}_i - b_0 +
  \varepsilon + \xi_{0i}^*] & = 0, \quad \forall \,
  i \label{compl2reg} \\
  \beta_{0i}\xi_{0i} & = 0, \quad (C - \alpha_{0i}) \cdot \xi_{0i} =
  0, \quad \forall \, i \label{compl3reg} \\
  \beta_{0i}^*\xi_{0i}^* & = 0, \quad (C - \alpha_{0i}^*) \cdot
  \xi_{0i}^* = 0, \quad \forall \, i \label{compl4reg}
\end{align}
% --------------------------------------------------------------------------
By replacing equalities \eqref{der1reg}-\eqref{der2reg} in
\eqref{lreg}, a Lagrangian that is a function only of the dual vectors
$\bm{\alpha}$ and $\bm{\alpha}^*$ is obtained. As in classification,
the dual optimization problem may then be solved:
% --------------------------------------------------------------------------
\begin{equation}
  \max_{\bm{\alpha}, \bm{\alpha}^*} \quad {\mathcal L}_d(\bm{\alpha}, \bm{\alpha}^*) = - \frac{1}{2} \sum_{i = 1}^{\ell}\sum_{j = 1}^{\ell} (\alpha_i - \alpha_i^*)(\alpha_j - \alpha_j^*)\mathbf{x}_i^T\mathbf{x}_j - \sum_{i = 1}^{\ell}[ \varepsilon (\alpha_i + \alpha_i^*)  + y_i (\alpha_i - \alpha_i^*)] \label{duallinreg}
\end{equation}

\vspace{-1.1cm}

\begin{eqnarray}
  \hspace{-8.4cm} \text{subject to} & & \sum_{i=1}^{\ell} (\alpha_i - \alpha_i^*) = 0 \label{dualconst1reg}\\
  \hspace{-8.4cm} & & 0 \leq \alpha_i \leq C, \quad i = 1, 2, \dots, \ell \label{alphaupreg}\\
  \hspace{-8.4cm}& & 0 \leq \alpha_i^* \leq C, \quad \forall i \label{alphadownreg}
\end{eqnarray}
% --------------------------------------------------------------------------

As a result of the dual problem, non-negative values for the Lagrange
multipliers $\alpha_i$ and $\alpha_i^*$ for all $i$ are obtained. From
them, the optimal normal vector $\mathbf{w}_0$ is directly defined by
\eqref{der1reg}. From the possible values of $\alpha_{0i}$ and
$\alpha_{0i}^*$, and considering the \gls{kkt} complementarity
conditions \eqref{compl1reg}-\eqref{compl4reg}, one may conclude:
\begin{itemize}
\item If $0 < \alpha_{0i} < C$, then $\xi_{0i} = 0$ holds. Besides
  that, the equality $\mathbf{w}_0^T\mathbf{x}_i + b_0 - y_i = -
  \varepsilon$ is obtained, and then the example $(\mathbf{x}_i,y_i)$
  lies on the parallel hyperplane that is $\varepsilon$ above the
  regression hyperplane ($H_+$). Similarly, if $0 < \alpha_{0i}^* <
  C$, then $\xi_{0i}^* = 0$ holds and consequently the equality $y_i -
  \mathbf{w}_0^T\mathbf{x}_i - b_0 = - \varepsilon$ is valid. Hence,
  the point $(\mathbf{x}_i,y_i)$ lies on the hyperplane that is
  $\varepsilon$ below the regression hyperplane ($H_-$). When either
  $0 < \alpha_{0i} < C$ or $0 < \alpha_{0i}^* < C$ is true, the
  related example $(\mathbf{x}_i,y_i)$ is named a {\it free support
    vector}, which allows for the calculation of the linear
  coefficient $b_0$:
% --------------------------------------------------------------------------
\begin{equation}\label{b0-calc}
  b_0 = \frac{1}{nFSV}\left[\sum_{s=1}^{nUFSV}(y_s - \mathbf{w}_0^T\mathbf{x}_s - \varepsilon)+ \sum_{s=1}^{nLFSV}(y_s - \mathbf{w}_0^T\mathbf{x}_s + \varepsilon) \right]
\end{equation}
% --------------------------------------------------------------------------
where $nUFSV$ and $nLFSV$ are respectively the numbers of free support
vectors lying on $H_+$ and $H_-$, and $nFSV = nUFSV + nLFSV$.
\item For data ``above'' the $\varepsilon$-tube, {\it i.e.} $\xi_{0i}
  > 0$, the equality $\alpha_{0i} = C$ holds. On the other hand, if a
  datum is ``below'' the $\varepsilon$-tube, then $\xi_{0i}^* > 0$,
  which implies $\alpha_{0i}^* = C$. Training data that satisfy these
  conditions are called {\it bounded support vectors}. As in the case
  of the soft-margin classifier, these support vectors do not enter
  the computation of $b_0$, since it cannot be uniquely determined
  from them due to the positive but not exactly known values of the
  slack variables.
\item For training examples lying ``within'' the $\varepsilon$-tube,
  $|y_i - \mathbf{w}_0^T\mathbf{x}_i - b_0| < \varepsilon$ is valid
  and, as a consequence, $\alpha_{0i} = \alpha_{0i}^* = 0$. Such
  points are not support vectors and do not contribute to the
  regression hyperplane, which is defined as follows:
\end{itemize}
% --------------------------------------------------------------------------
\begin{equation}
  f(\mathbf{x}) = \mathbf{w}_0^T\mathbf{x} + b_0 = \sum_{i=1}^{\ell}(\alpha_{0i} - \alpha_{0i}^*)\mathbf{x}_i^T\mathbf{x} + b_0
\end{equation}
% --------------------------------------------------------------------------
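For reference, the linear \gls{svr} just derived is implemented in common libraries. The sketch below uses scikit-learn's {\sf SVR} (the data set and parameter values are illustrative assumptions) to recover the slope and intercept of a noisy linear relationship:

```python
import numpy as np
from sklearn.svm import SVR

# Noisy samples of the linear relationship y = 2x + 1
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 5.0, size=(100, 1))
y = 2.0 * X.ravel() + 1.0 + rng.normal(0.0, 0.05, size=100)

svr = SVR(kernel="linear", C=10.0, epsilon=0.1)
svr.fit(X, y)
# coef_ approximates w0 and intercept_ approximates b0
print(svr.coef_[0, 0], svr.intercept_[0])
```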

\subsubsection{Non-Linear Regression Function}\label{sec:nonlinregfun}

The generalization from linear to non-linear regression functions is
made possible by the use of kernel functions, in the same way linear
classifiers evolve into non-linear ones. In this way, even if the
regression function is non-linear in the input space, a regression
hyperplane can be found in a feature space.

% --------------------------------------------------------------------------
\begin{figure}[!ht]
\centering{
\includegraphics[height=6cm]{fig/nonlinreg.pdf}
\caption{Non-linear regression}
\label{fig:nonlinreg}}
\end{figure}
% --------------------------------------------------------------------------

Once more, notice that the dual training problem in Subsection
\ref{sec:linregfun} presents the input data in the form of dot
products $\mathbf{x}_i^T\mathbf{x}_j$. Analogously to non-linear
classifiers, such dot products are replaced by a kernel function
$K(\mathbf{x}_i, \mathbf{x}_j) =
\Phi^T(\mathbf{x}_i)\Phi(\mathbf{x}_j)$ set {\it a priori}. The normal
vector $\mathbf{w}_0$ may not be directly obtained, since its
expression \eqref{w0nonlinreg} becomes a function of the mapping
$\Phi$, which often involves a high dimension or is not even
known. Both cases render the explicit computation of $\mathbf{w}_0$
impractical.
% --------------------------------------------------------------------------
\begin{equation}
  \mathbf{w}_0 = \sum_{i=1}^{\ell}(\alpha_{0i} - \alpha_{0i}^*)\Phi(\mathbf{x}_i) \label{w0nonlinreg}
\end{equation}
% --------------------------------------------------------------------------

The linear coefficient $b_0$, in turn, can be calculated from either
of the \gls{kkt} complementarity conditions \eqref{compl1reg} or
\eqref{compl2reg}. After replacing $\mathbf{w}_0$ in these equations,
the dot product $\Phi^T(\mathbf{x}_i)\Phi(\mathbf{x}_j)$ naturally
appears and may be substituted by
$K(\mathbf{x}_i,\mathbf{x}_j)$. Again, it is better to set $b_0$ as
the average over all such expressions \cite{kecman2005}.

Therefore, one obtains the non-linear regression function:
% --------------------------------------------------------------------------
\begin{equation}\label{nonlinfunc-last}
  f(\mathbf{x}) = \mathbf{w}_0^T\Phi(\mathbf{x}) + b_0 = \sum_{i=1}^{\ell}(\alpha_{0i} - \alpha_{0i}^*)\Phi^T(\mathbf{x}_i)\Phi(\mathbf{x}) + b_0 = \sum_{i=1}^{\ell}(\alpha_{0i} - \alpha_{0i}^*)K(\mathbf{x}_i,\mathbf{x}) + b_0
\end{equation}
% --------------------------------------------------------------------------
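As an illustration of \eqref{nonlinfunc-last}, the sketch below fits a non-linear function with the Gaussian \gls{rbf} kernel via scikit-learn's {\sf SVR} (the target function, noise level and parameter values are assumptions made for the example):

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0.0, 2.0 * np.pi, size=(150, 1)), axis=0)
y = np.sin(X).ravel() + rng.normal(0.0, 0.05, size=150)

# Gaussian RBF kernel: the regression is linear in feature space only
svr = SVR(kernel="rbf", C=10.0, gamma=1.0, epsilon=0.1)
svr.fit(X, y)
error = np.max(np.abs(svr.predict(X) - np.sin(X).ravel()))
print(error)  # small worst-case deviation from the true function
```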

\gls{svr} and traditional statistical (non-)linear regression models
are different approaches to solve regression problems. Firstly,
traditional models are classified as (non-)linear with respect to the
parameters of the regression function, whereas \gls{svr} is classified
as (non-)linear depending on the (non-)linearity in the input
variables ($\mathbf{x}$). One advantage of \gls{svr} over statistical
linear regression models is that the errors do not need to be drawn
from a Normal distribution with zero mean and constant variance; in
fact, \gls{svr} makes no assumptions about the errors. Additionally,
differently from \gls{svr}, applying a statistical non-linear
regression model often requires previous knowledge about the
relationship between the response and the input variables. Also, due
to the use of kernels, \gls{svr} always involves a linear regression
function, either in the input or in the feature space. For more on
traditional statistical (non-)linear regression methods, see
\citeonline{montgomery2001}.


\subsubsection{Time Series Prediction} \label{sec:time-series}

According to \citeonline{fuller1996}, a real valued time series can be
considered as a collection of random variables indexed in time. For a specific
event under analysis ({\it e.g.} system failure), the realizations of such
random variables generate a set
of observations ordered in time. Some of the most common purposes of studying
time series are to learn about the
underlying mechanism generating the data, to predict future values of
the phenomenon under analysis and/or to optimally control a
system. For instance, financial series of stock prices may provide
investors with information on whether or not to invest in the near
future. Also, a series of reliability values of a critical machine
gives insights into its ``health'', which is very useful for
maintenance planning.

Generally, the observations of a time series are not drawn
independently. Conversely, the statistical learning model underlying
\gls{svr} assumes independent and identically distributed
samples. Despite this fundamental difference, mentioned by
\citeonline{schol2002}, \gls{svr} has been widely applied to the
problem of time series prediction and has provided excellent results,
comparable to or even better than those originated from other
approaches such as \gls{nn}. This empirical evidence can be
verified in the work by \citeonline{muller1999} that shows the
superior performance of \gls{svr} in a benchmark set from the Santa Fe
Time Series Competition \cite{weigend1994} as well as in the
reliability related papers from \citeonline{hongepai2006} and
\citeonline{chen2007}. Moreover, a survey of works using \gls{svm} for
time series prediction applied to diverse fields is presented by
\citeonline{sap2009}.

The general time series prediction model can be represented as follows:
% --------------------------------------------------------------------------
\begin{equation}
y_{t} = f(y_{t-1}, y_{t-2}, \dots, y_{t-p})
\end{equation}
% --------------------------------------------------------------------------
where $y_{t}$ is the observation at time $t$, which is a function of
the $p$ past observations. In other words, the input vector
$\mathbf{x}_t = (y_{t-1}, y_{t-2}, \dots, y_{t-p})$, where $p$ denotes
its dimension, is directly related to the future value $y_{t}$.
Additionally, if $t$ observations compose the time series, one may
construct examples of the form $(\mathbf{x}_i,y_i), i = 1, 2, \dots,
t-p$, as shown in Table \ref{tab:data}. In this way, the training set
comprises $t-p$ training examples.
% --------------------------------------------------------------------------
\begin{table}[!ht]
\begin{center}
\begin{footnotesize}
\caption{Construction of data pairs for time series prediction}
\label{tab:data}

\vspace{0.2cm}

\begin{tabular}{llllll} \toprule 
  $i$ & \textbf{Input} $\mathbf{x}_i$ & & & & \textbf{Output} $y_i$ \\\midrule 
  $1$ & $y_1$ & $y_2$ & $\cdots$ & $y_p$ & $y_{p+1+k}$ \\
  $2$ & $y_2$ & $y_3$ & $\cdots$ & $y_{p+1}$ & $y_{p+2+k}$ \\
  $\vdots$ & $\vdots$ & $\vdots$ & $\cdots$ & $\vdots$ & $\vdots$ \\
  $t-p$ & $y_{t-p}$ & $y_{t-p+1}$ & $\cdots$ & $y_{t-1}$ & $y_{t+k}$ \\\bottomrule
\end{tabular}
\end{footnotesize}
\end{center}
\end{table}
% --------------------------------------------------------------------------
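The construction in Table \ref{tab:data} amounts to sliding a window of length $p$ over the series. The Python sketch below builds the (input, output) pairs; conservatively, the loop stops so that every output lies within the observed series:

```python
import numpy as np

def make_windows(series, p, k=0):
    # Inputs: p consecutive observations; output: the observation
    # (k + 1) steps after the window ends (k = 0 is single-step-ahead)
    X, y = [], []
    for i in range(len(series) - p - k):
        X.append(series[i:i + p])
        y.append(series[i + p + k])
    return np.array(X), np.array(y)

series = np.arange(1.0, 11.0)     # y_1, ..., y_10
X, y = make_windows(series, p=3)  # single-step-ahead pairs (k = 0)
print(X[0], y[0])  # [1. 2. 3.] 4.0
```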

Besides that, future outcomes may be predicted for different steps in
time. In Table \ref{tab:data}, $k$ is the number of steps ahead minus
one. For example, if the values up to the $(i-1)$th observation are
inputs to forecast the $i$th one, a single-step-ahead prediction is
performed ($k = 0$). If the same inputs are used to predict the
$(i+1)$th outcome, a two-step-ahead forecast takes place ($k =
1$). Generally, predicting two or more steps ahead from the same
inputs is said to be a multi-step-ahead forecast.

Once the training examples are formed, the application of \gls{svr}
follows the same reasoning shown in Subsections \ref{sec:linregfun}
and \ref{sec:nonlinregfun}. An advantage of using the \gls{svr}
approach in dealing with time series prediction is the fact that it is
model-independent and can tackle non-linear and non-stationary series,
which may not be handled by traditional methods unless simplifying
assumptions or alternative techniques to render them stationary are
considered. Basically, a stationary time series varies randomly along
time around a constant mean, reflecting a form of equilibrium; a
non-stationary series lacks this property \cite{morettin2004}.

As in the (multi-)classification problems, \gls{svr} also depends on a
set of parameters that has to be defined {\it a priori}. Besides the
penalization for errors $C$ and the kernel parameter ({\it e.g.}
$\sigma$ in the \gls{rbf} case), \gls{svr} also demands the definition
of $\varepsilon$ from Vapnik's $\varepsilon$-insensitive loss
function. In both classification and regression tasks, the choice of
these parameters is often very difficult. This issue gives rise to the
model selection problem, which is discussed in Section
\ref{sec:msc}. The following subsection introduces the most used
optimization techniques to solve the \gls{svm} training problem.

\subsection{Optimization Techniques and Available Support Vector
  Ma\-chi\-ne Libra\-ries}

Different optimization techniques can be applied to the \gls{svm}
learning problem. One of them is the Interior Point Method
\cite{nocedal2006}, which is indicated for small to moderately sized
data sets of up to about $10^4$ examples \cite{schol2002}.
Alternatively, the faster \gls{smo} approach can be adopted. Loosely
speaking, \gls{smo} decomposes the quadratic training problem into the
smallest possible quadratic subproblems, each involving only two
examples (two Lagrange multipliers), so that they can be treated
analytically instead of numerically. At every step, \gls{smo} chooses
two Lagrange multipliers to jointly optimize, finds their optimal
values, and updates the \gls{svm} to reflect them. Since the storage
of large matrices is not required, large data sets can be handled and
numerical precision problems are avoided \cite{platt1998}.
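The two-multiplier update described above can be illustrated with a
compact, simplified \gls{smo} for a linear-kernel \gls{svm} classifier
(labels in $\{-1,+1\}$). This Python sketch replaces the working-set
heuristics of \citeonline{platt1998} with a random choice of the
second multiplier, so it is a didactic approximation, not the
algorithm as implemented in production solvers:

```python
import random

def smo_simplified(X, y, C=1.0, tol=1e-4, max_passes=10, seed=0):
    """Simplified SMO: repeatedly pick a pair of Lagrange multipliers
    that violates the KKT conditions and optimize it analytically."""
    rnd = random.Random(seed)
    m = len(X)
    # precompute the linear kernel matrix K[i][j] = <x_i, x_j>
    K = [[sum(p * q for p, q in zip(X[i], X[j])) for j in range(m)]
         for i in range(m)]
    alpha, b = [0.0] * m, 0.0

    def decision(i):
        return sum(alpha[l] * y[l] * K[l][i] for l in range(m)) + b

    passes, safety = 0, 0
    while passes < max_passes and safety < 500:   # hard iteration cap
        safety += 1
        changed = 0
        for i in range(m):
            Ei = decision(i) - y[i]
            if (y[i] * Ei < -tol and alpha[i] < C) or \
               (y[i] * Ei > tol and alpha[i] > 0):
                j = rnd.choice([l for l in range(m) if l != i])
                Ej = decision(j) - y[j]
                ai, aj = alpha[i], alpha[j]
                # box constraints for the pair (keeps both in [0, C])
                if y[i] != y[j]:
                    L, H = max(0.0, aj - ai), min(C, C + aj - ai)
                else:
                    L, H = max(0.0, ai + aj - C), min(C, ai + aj)
                eta = 2.0 * K[i][j] - K[i][i] - K[j][j]
                if L == H or eta >= 0:
                    continue
                # analytic solution for alpha_j, then clip to [L, H]
                alpha[j] = min(H, max(L, aj - y[j] * (Ei - Ej) / eta))
                if abs(alpha[j] - aj) < 1e-6:
                    continue
                alpha[i] = ai + y[i] * y[j] * (aj - alpha[j])
                # update the threshold b from the pair's errors
                b1 = b - Ei - y[i]*(alpha[i]-ai)*K[i][i] \
                            - y[j]*(alpha[j]-aj)*K[i][j]
                b2 = b - Ej - y[i]*(alpha[i]-ai)*K[i][j] \
                            - y[j]*(alpha[j]-aj)*K[j][j]
                b = b1 if 0 < alpha[i] < C else \
                    b2 if 0 < alpha[j] < C else (b1 + b2) / 2
                changed += 1
        passes = passes + 1 if changed == 0 else 0
    return alpha, b
```

Note that each pairwise step preserves the dual equality constraint
$\sum_i \alpha_i y_i = 0$ and the box constraints by construction.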


Several free \gls{svm} solvers are available on the Internet and are
periodically improved by their developers. Two of them are
\gls{svm}$^{light}$ and {\sf LIBSVM}. \gls{svm}$^{light}$ is an
implementation of the \gls{svm} learner that solves the \gls{svm} dual
problem by means of a decomposition strategy, which generates a series
of smaller optimization problems to be solved. The dual decision
variables ($\bm{\alpha}$) are divided into two groups: the first
comprises the variables that may change during the optimization
process and forms the so-called working set; the second is formed by
variables with fixed values. The variables entering the working set
are selected via the steepest feasible descent direction, which
promises the most progress towards the minimum of the objective
function. For further details, see \citeonline{joachims1999}.

{\sf LIBSVM} is an integrated software package for support vector
classification, multi-classification and regression tasks. It can be
easily linked with other programs through several programming
languages such as MATLAB, and it implements an \gls{smo}-type
algorithm for solving the training problem
\cite{libsvm2001,fan2005}. In this dissertation, {\sf LIBSVM} is the
library used both for training \gls{svm} models and for prediction
with an already trained \gls{svm}.

\gls{svm} performance is influenced by the values of the parameters
$C$, $\varepsilon$ (in the regression case) and the kernel parameter
({\it e.g.} $\sigma$, $d$). The problem of selecting the most suitable
set of parameters is detailed in the next section.

\section{Model Selection Problem}\label{sec:msc}

The performance of \gls{svm} strongly depends on the chosen set of
parameters. The task of tuning the \gls{svm} parameters so as to
obtain the most suitable values for them is known as the {\it model
  selection problem}. In classification problems, the user-defined
parameters are $C$, which controls the trade-off between model
capacity and training error, and the kernel parameter ({\it e.g.} the
\gls{rbf} width $\sigma$ or the polynomial degree $d$). For instance,
a large $C$ forces the \gls{svm} classification algorithm to reduce
the training errors, which in turn can be accomplished by increasing
the machine capacity (by means of $\mathbf{w}^T\mathbf{w}$) and, as a
consequence, may reduce the margin. This runs contrary to the main
objective of margin maximization and does not guarantee a good
generalization performance of the classifier.

Regression problems present, along with $C$ and the kernel parameter,
the ``radius'' $\varepsilon$ of the tube. For a small $C$, the penalty
on errors becomes negligible and the regression function becomes flat,
while for a large $C$ the penalty dominates and the resulting
regression function tries to fit the data. A small $\varepsilon$ also
inclines the \gls{svr} function to fit the data, since a very thin
$\varepsilon$-tube may not be wide enough to include even a few data
points. A large $\varepsilon$-tube, however, may be wide enough to
include most of them, which renders the \gls{svr} function flat and,
as a consequence, it may not describe the training set well. If an
\gls{rbf} kernel is considered, a small $\sigma$ means the kernel is
more localized, so the \gls{svr} function tends to overfit, while a
large $\sigma$ makes the $\varepsilon$-tube less flexible
\cite{ito2003}. Figure \ref{fig:influence} illustrates the effects of
small and large values of the parameters in the regression case.

% --------------------------------------------------------------------------
\begin{figure}[!ht]
\centering{
\includegraphics[height=17cm]{fig/influence.pdf}
\caption{Influence of parameters in SVR. {\footnotesize{Adapted from
      \citeonline{ito2003}, p. 2078}}}
\label{fig:influence}}
\end{figure}
% --------------------------------------------------------------------------


Given the influence of the parameters on \gls{svm} performance, it is
necessary to select them as accurately as possible. The most naive
attempt to set these parameters is trial and error, which is time
consuming and does not ensure that a useful set will be found. More
principled ways to tune these parameters are listed below:
\begin{itemize}
\item Grid search: ranges of parameters are defined and discretized so
  that all possible combinations can be tested. This procedure may be
  suitable if only a few parameters are to be tuned, but it may
  otherwise be time consuming and become impractical when there are
  several parameters with many candidate values each.
  \citeonline{hsu2002} use this method for adjusting the \gls{svm}
  parameters in the context of multi-classification problems.
\item Pattern search: a direct search method suitable for optimizing a
  wide range of objective functions, since it does not require
  derivatives \cite{lewis2000}. The number of combinations it
  evaluates is smaller than the quantity assessed by the grid-search
  method. It was applied to \gls{svr} model selection in the context
  of drug design by \citeonline{momma2002}, who also compare pattern
  search with grid search.
\item Gradient-based search: requires continuous and differentiable
  error functions. \citeonline{vapnikchapelle2000} derive some
  generalization error bounds for classification and, in the later
  work by \citeonline{chapelle2002}, a gradient descent method is
  applied to obtain the set of parameters that minimizes those
  bounds. \citeonline{chang2005} present some error bounds for
  regression and use a quasi-Newton method to obtain the parameters
  minimizing them. As stated by \citeonline{ito2003}, the linear
  $\varepsilon$-insensitive loss function does not produce a smooth
  error surface, which hampers gradient methods. Alternatively, the
  authors incorporate the quadratic $\varepsilon$-insensitive loss
  function into \gls{svr} and then present the derivatives of the
  error function with respect to the parameters. For details on
  gradient-based methods, see \citeonline{nocedal2006}.
\item Bayesian evidence framework: its main idea is to maximize the
  posterior probability of the parameter distribution in order to
  obtain the optimal parameters. The evidence framework was adapted to
  \gls{svm} model selection by \citeonline{kwok2000}, who describes
  the methodology for classification problems. Later,
  \citeonline{gestel2001} and \citeonline{yan2004} applied the
  evidence framework to regression. The first work is related to
  financial series forecasting; the second is associated with the oil
  refining industry and aims at estimating the freezing point of light
  diesel oil in a fractionator by means of process data records.
\item Probabilistic search heuristics: flexible optimization
  techniques that do not require derivatives and are often inspired by
  natural phenomena. For example, \gls{ga} and \gls{pso} are widely
  applied to optimization problems from different
  contexts. Specifically in the \gls{svm} field, \citeonline{pai2006}
  and \citeonline{chen2007} apply \gls{ga} to select the \gls{svr}
  parameters used to forecast series related to the reliability of
  engineered components. Additionally, in the electricity management
  field, \gls{pso} is used by \citeonline{fei2009} in a \gls{svr} to
  predict the quantity of gases dissolved in power transformer oil
  based on previously observed values, and by \citeonline{hong2009} to
  select the parameters of a \gls{svr} for electric load
  forecasting. In the domain of fault detection,
  \citeonline{samanta2009} apply \gls{pso} to choose the kernel
  parameter of the \gls{svm} classifier. \gls{sa} may also be placed
  in this group; it is applied to select \gls{svr} parameters for
  forecasting electricity load and software reliability by
  \citeonline{paihong2005} and \citeonline{paihong2006}, respectively.
\end{itemize}
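As an illustration of the first strategy, grid search reduces to
evaluating an error function over the Cartesian product of the
discretized parameter ranges. The Python sketch below is generic; the
stand-in error surface in the usage example is hypothetical, and in
practice \texttt{error\_fn} would train a \gls{svr} with the given
parameters and return its validation error:

```python
import itertools
import math

def grid_search(error_fn, grids):
    """Exhaustively evaluate every parameter combination (grid search).

    grids: dict mapping parameter name -> list of candidate values.
    Returns the combination with the smallest error and that error.
    """
    names = sorted(grids)
    best, best_err = None, math.inf
    for combo in itertools.product(*(grids[n] for n in names)):
        params = dict(zip(names, combo))
        err = error_fn(**params)          # e.g. validation NRMSE
        if err < best_err:
            best, best_err = params, err
    return best, best_err
```

With a hypothetical quadratic error surface minimized at $C = 10$,
$\sigma = 0.5$, $\varepsilon = 0.1$, the search over a $4 \times 3
\times 3$ grid returns exactly that combination.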

Other systematic ways to find parameter values that minimize the
generalization error are available in the literature. The interested
reader may consult, for instance, \citeonline{frohlich2005} for
\gls{svm} classification and regression parameter selection and
\citeonline{wu2009} for the classification case only. In this work,
the \gls{pso} approach is adopted to select the \gls{svm} parameter
values, given that it is well suited to optimizing real-valued
functions and does not require derivatives, which avoids problems with
non-smooth error surfaces. In addition, compared to \gls{ga}, for
example, \gls{pso} requires less computational effort.

In practice, the data set is divided into two parts, one for actual
training (a training set of $\ell$ elements) and the other for
posterior testing (a test set of $\lambda$ examples). The test set
plays the role of unseen data and is not used during \gls{svm}
training. Instead, it is used to evaluate an error function, usually
involving the real output values ($y_h$) and the predicted ones
($\hat{y}_h$) produced by the trained \gls{svm} model, where $h =
1,2,\dots,\lambda$ indexes the examples of the test set. Two error
functions commonly applied for \gls{svm} testing are the \gls{nrmse}
and the \gls{mape}. Occasionally the \gls{mse} is also used. These
error functions are defined as follows:
% --------------------------------------------------------------------------
\begin{equation}\label{nrmse}
\text{NRMSE} = \sqrt{\frac{\sum_{h=1}^{\lambda}(y_h - \hat{y}_h)^2}{\sum_{h=1}^{\lambda}y_h^2}}
\end{equation}
% --------------------------------------------------------------------------
% --------------------------------------------------------------------------
\begin{equation}\label{mape}
  \text{MAPE} = \frac{1}{\lambda}\sum_{h=1}^{\lambda}\left \arrowvert\frac{y_h-\hat{y}_h}{y_h} \right \arrowvert \cdot 100\%
\end{equation}
% --------------------------------------------------------------------------
% --------------------------------------------------------------------------
\begin{equation}\label{mse}
  \text{MSE} = \frac{1}{\lambda}\sum_{h=1}^{\lambda}(y_h-\hat{y}_h)^2
\end{equation}
% --------------------------------------------------------------------------
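Equations \eqref{nrmse}, \eqref{mape} and \eqref{mse} translate
directly into code; a minimal Python version (plain lists, no external
libraries) is:

```python
import math

def nrmse(y, y_hat):
    """Normalized root mean squared error, equation (nrmse)."""
    num = sum((a - p) ** 2 for a, p in zip(y, y_hat))
    den = sum(a ** 2 for a in y)
    return math.sqrt(num / den)

def mape(y, y_hat):
    """Mean absolute percentage error (in %), equation (mape)."""
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(y, y_hat)) / len(y)

def mse(y, y_hat):
    """Mean squared error, equation (mse)."""
    return sum((a - p) ** 2 for a, p in zip(y, y_hat)) / len(y)
```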

In order to select the most suitable parameters, the error functions
above may be iteratively evaluated on the test set so as to guide the
search for improved parameter values. Nevertheless, a single test set
may yield an error estimate far from the true generalization
error. For this reason, the grid, pattern and probabilistic search
methodologies in particular incorporate a form of model validation
into the training phase, in which the training set is split into
smaller subsets that alternately participate in either actual training
or model validation. One of the following training strategies may be
adopted:
\begin{itemize}
\item Validation sets: the training set is divided into two subsets,
  one for actual training and a second one (the validation set) to
  guide the search for optimal parameter values. Once the parameters
  are found, the machine's generalization performance can be evaluated
  on a test set. The test error is expected to be greater than the
  validation error, since the machine has specialized in the
  validation set. Nevertheless, this procedure gives insight into the
  machine's ability to generalize. \citeonline{hongepai2006} and
  \citeonline{pai2006} apply this strategy. Instead of a single
  validation set, \citeonline{chen2004}, in their work on electricity
  load prediction, use two validation sets, and the search for the
  \gls{svr} parameters is based on the mean validation error over both
  of them.
\item Cross-validation: the training set is split randomly into
  several subsets, say $k$. A $k$-fold cross-validation consists in
  training the \gls{svm} model with $k-1$ subsets and validating it on
  the remaining subset by means of an error function. This is done $k$
  times, so that each subset participates in the validation phase
  once. The error is averaged over the $k$ validations (equations
  \eqref{nrmse} and \eqref{mape} may be multiplied by
  $\frac{1}{k}$). This procedure is more time consuming than the
  single test set approach, but it gives a better estimate of the
  generalization error.
\item Leave-one-out: the extreme case of cross-validation. If the
  training set is formed by $\ell$ examples, $\ell$ trainings are
  performed. In each of them, $\ell-1$ examples actually participate
  in the training phase and the remaining one is used for model
  validation; hence, every example participates in validation
  once. This approach is the most time consuming but, according to
  \citeonline{schol2002}, it provides an almost unbiased estimate of
  the error. Additionally, \citeonline{lee2004} present a way to
  enhance the efficiency of this procedure for Gaussian kernel-based
  \gls{svm}.
\end{itemize}
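The $k$-fold scheme above amounts to bookkeeping over index subsets. A
minimal Python sketch follows; for simplicity the folds here are
consecutive blocks of indices, whereas the random split described in
the text would shuffle the indices first. Setting $k = n$ recovers the
leave-one-out strategy:

```python
def kfold_indices(n, k):
    """Yield (train, validation) index lists for k-fold
    cross-validation over examples 0..n-1; each fold is used for
    validation exactly once."""
    # distribute n examples into k folds of (almost) equal size
    sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for s in sizes:
        folds.append(list(range(start, start + s)))
        start += s
    for i in range(k):
        train = [idx for f in folds[:i] + folds[i + 1:] for idx in f]
        yield train, folds[i]
```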

It is important to emphasize that the described strategies take place
during the training phase, which can be didactically divided into
actual training and validation. After that phase, the set of
parameters yielding the smallest error may be adopted and is then
often used to retrain on the entire training set ({\it i.e.} all
$\ell$ examples at once). This procedure has some theoretical issues,
especially associated with cross-validation, that are described by
\citeonline{schol2002}. For example, since the retraining with all
examples uses the same data that guided the search for parameters, it
can lead to overfitting. Also, the optimal parameter settings for data
sets of size $\frac{(k-1)\,\ell}{k}$ and $\ell$ do not usually
coincide. Nevertheless, applications that disregard these potential
problems are frequently performed with promising results. Finally,
after setting the parameters, the \gls{svm} model is tested on the
test set.

Additionally, when handling time series data it is important to bear
in mind that the entries are inherently indexed in time. This is
crucial especially when dealing with non-stationary time series, where
shuffling the data may lead to completely different realizations of
the underlying phenomenon. Hence, choosing random subsets for
cross-validation-based parameter selection is not advisable in these
cases, even if those subsets were internally sorted or divided into
approximately equal time intervals: since a subset is extracted from
the training set in order to validate the model, the machine would be
unable to capture the correlation structure present in the original
data. As an alternative, the validation sets approach can be adopted
while respecting the temporal order, dividing the data set into
training, validation and test sets in their original order, so as to
allow the machine's ability to predict future outcomes to be
evaluated. The leave-one-out procedure could also be applied, since
only one data entry at a time is excluded from training, possibly
without serious impact on the natural relationship among the data.
However, this latter approach comes at a high computational cost
which, depending on the parameter search tool used, may be
prohibitive.
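A chronological split of the kind advocated here can be sketched as
follows; the 60/20/20 fractions are illustrative assumptions of this
sketch, not values taken from this work:

```python
def chronological_split(data, f_train=0.6, f_val=0.2):
    """Split a time-ordered data set into training, validation and
    test portions without shuffling, preserving temporal order."""
    n = len(data)
    i = int(n * f_train)              # end of training portion
    j = i + int(n * f_val)            # end of validation portion
    return data[:i], data[i:j], data[j:]
```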

The majority of the application examples considered in this work are
related to reliability time series (Chapter
\ref{ch:svm-rel-forecast}). Thus, along with \gls{pso}, this work
makes use of the validation sets approach. Even for the example
concerning real data from oil production wells, the validation sets
procedure is adopted, since the cross-validation and leave-one-out
techniques are too time consuming. In the next section, the main ideas
and characteristics of \gls{pso} are described.

\section{Particle Swarm Optimization}

The \gls{pso} algorithm was introduced by \citeonline{kennedy1995} and
is based on the social behavior of biological organisms that move in
groups, such as birds and fish. It was originally developed to solve
non-linear optimization problems. \gls{pso} also has ties to
evolutionary algorithms such as \gls{ga}, since it is population-based
(the swarm). However, a fundamental difference between these paradigms
is that evolutionary algorithms are built on natural evolution
concepts, which embody a competitive philosophy in which only the
fittest individuals tend to survive. Conversely, \gls{pso}
incorporates a cooperative approach to solve a problem, given that all
individuals (particles) -- which are allowed to survive -- change over
time, and one particle's successful adaptation is shared with and
reflected in the performance of its neighbors \cite{kennedy2001}.

The basic element of \gls{pso} is the particle, which can fly through
the search space towards an optimum by using its own information as
well as the information provided by the other particles comprising its
neighborhood. The performance of a particle is determined by its {\it
  fitness}, which is assessed by evaluating, at the particle's current
position, a predefined objective function related to the problem to be
solved.

In \gls{pso}, a particle's neighborhood is the subset of particles
with which it is able to communicate \cite{bratton2007}. Depending on
how the neighborhood is determined, the \gls{pso} algorithm may adopt
the {\it gbest} model, where each particle is connected to, and can
obtain information from, every other particle in the swarm; in other
words, the neighborhood of a particle is the entire
swarm. Alternatively, in the {\it lbest} model a particle communicates
with only some of the other particles. The simplest {\it lbest} model,
also known as the {\it ring} model, connects each particle to only two
others in the swarm. It is important to notice that the neighborhood
concept gives rise to a communication network among particles that
does not necessarily depend on distances. Indeed, it is better to
think of the swarm communication network as an unweighted graph whose
vertices represent particles and whose edges indicate the connections
among them. The {\it gbest} model then corresponds to a complete
graph, in which all vertices ({\it i.e.} particles) are connected to
each other, whereas the {\it ring} model forms a cycle with length
equal to the number of particles. For example, the left side of Figure
\ref{fig:pso-net} depicts a cycle of length 12. Other types of swarm
communication networks are also shown in Figure \ref{fig:pso-net}.
For more on graphs, see \citeonline{bondy2008}.
% --------------------------------------------------------------------------
\begin{figure}[!ht]
\centering{
\includegraphics[height=5cm]{fig/pso-net.pdf}
\caption{Different swarm communication
  networks. {\footnotesize{Adapted from \citeonline{bratton2007}}}}
\label{fig:pso-net}}
\end{figure}
% --------------------------------------------------------------------------

If Euclidean distances were used to define particles' neighbors, the
cycles inherent to the communication networks depicted in Figure
\ref{fig:pso-net} would not be guaranteed, which may eventually lead
to an undesirable loss of exploration ability.

According to \citeonline{bratton2007}, the {\it gbest} model usually
converges more rapidly than the {\it lbest} approach. This
characteristic may actually be a drawback of the former model, since
it can result in premature convergence of the algorithm to an inferior
local optimum in the case of multi-modal functions. However, in some
cases the {\it gbest} model can deliver competitive performance even
on complex multi-modal problems. Additionally, the {\it gbest} model
has the advantage of often requiring fewer function evaluations, which
is very useful when such assessments are computationally costly. In
fact, as the number of a particle's neighbors increases, one obtains a
mix of the advantages and shortcomings of the {\it lbest} and {\it
  gbest} approaches \cite{eberhart1995}.

A particle $i$ is formed by three vectors:
\begin{itemize}
\item Its current position in search space $\mathbf{x}_i = (x_{i1},
  x_{i2}, \dots, x_{in})$.
\item The best individual position it has found so far $\mathbf{p}_i =
  (p_{i1}, p_{i2}, \dots, p_{in})$.
\item Its velocity $\mathbf{v}_i = (v_{i1}, v_{i2}, \dots, v_{in})$.
\end{itemize}

Traditionally the current positions $\mathbf{x}_i$ and velocities
$\mathbf{v}_i$ are initialized respectively by means of a uniform
distribution parametrized by the search space and by the maximum
velocity $v_{max}$. The particles then move throughout the search
space by the following set of recursive update equations:
% --------------------------------------------------------------------------
\begin{align}
  v_{ij}(t+1) & = v_{ij}(t) + c_1 \, u_1 \cdot [p_{ij}(t)-x_{ij}(t)] + c_2 \, u_2
  \cdot [p_{gj}(t) - x_{ij}(t)], \quad j = 1, 2, \dots, n \label{velupdt}\\
  x_{ij}(t+1) & = x_{ij}(t) + v_{ij}(t+1), \quad  \forall \, j \label{xupdt}
\end{align}
% --------------------------------------------------------------------------
where $c_1$ and $c_2$ are constants, $u_1$ and $u_2$ are independent
uniform random numbers from the interval $[0,1]$, generated at every
update for each individual dimension $j = 1, 2, \dots, n$, and
$\mathbf{p}_g(t)$ is the $n$-dimensional vector formed by the best
position encountered so far by any neighbor of particle $i$. Note that
velocities and positions at time $t+1$ are influenced by the distances
of the particle's current position from its own best experience
$\mathbf{p}_i(t)$ and from the neighborhood's best experience
$\mathbf{p}_g(t)$. The second term of \eqref{velupdt} represents the
``cognition'' of the particle, that is, its private ``thinking''. The
last term of \eqref{velupdt}, in turn, is associated with the
particle's ``social'' ability, representing the collaboration among
particles. Notice also that velocities and positions appear in the
same equation even though their units differ (length per time unit and
length, respectively). One may interpret the future position as the
previous one plus the velocity multiplied by one time unit; in this
way, there is no unit inconsistency in equations \eqref{velupdt} and
\eqref{xupdt}, as one might think at first glance.

During the iterations, if velocities are not constrained to an upper
bound ($v_{max}$), the \gls{pso} algorithm is prone to enter a state
of explosion, since the random weighting by $u_1$ and $u_2$ causes
velocities, and thus particles' positions, to increase rapidly. The
$v_{max}$ approach, illustrated in the following pseudocode, can then
be applied to every dimension $j$ of every particle $i$. According to
\citeonline{bratton2007}, however, a single value of $v_{max}$ is not
necessarily applicable to all sizes of problem spaces; finding its
appropriate value is critical to \gls{pso} performance and may be a
difficult task.
% --------------------------------------------------------------------------
\vspace{0.5cm}

\begin{footnotesize}
\hrule
\begin{algorithmic}[0]
  \Procedure{\sc ConstrainVelocity}{$v_{max}$}
   \If {$v_{ij}(t+1) > v_{max}$}
    \State {$v_{ij}(t+1) = v_{max}$}
   \ElsIf {$v_{ij}(t+1) < -v_{max}$}
    \State {$v_{ij}(t+1) = -v_{max}$}
   \EndIf
  \EndProcedure
\end{algorithmic}
\hrule
\end{footnotesize}

\vspace{0.5cm} 
% --------------------------------------------------------------------------

Alternatively, the {\it inertia weight} $w$ may replace $v_{max}$ to
adjust the influence of the particle's previous velocity during the
optimization process and thus balance the trade-off between global and
local search. By adjusting $w$, the swarm tends to eventually
constrict itself to the region containing the best fitness and to
exploit that region \cite{shi1998,bratton2007}. The velocity update
\eqref{velupdt} becomes:
% --------------------------------------------------------------------------
\begin{equation}
  v_{ij}(t+1)  = w \, v_{ij}(t) + c_1 \, u_1 \cdot [p_{ij}(t)-x_{ij}(t)] + c_2 \, u_2 \cdot [p_{gj}(t) - x_{ij}(t)], \quad j = 1, 2, \dots, n \label{velupdtinertia}
\end{equation}
% --------------------------------------------------------------------------

It is also possible to vary the value of $w$ during the \gls{pso}
iterations: it may be larger at the beginning, so as to allow
exploration (coverage of the entire search space), and gradually
smaller as the algorithm evolves, encouraging exploitation (fine
adjustments) of the most promising regions found during exploration.
If $w$ is constant, \citeonline{shi1998} recommend picking a value
from the interval $[0.9, 1.2]$. On the other hand, if it is changed
dynamically, it typically decreases from 0.9 to 0.4 over the \gls{pso}
iterations \cite{kennedy2001}.
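The inertia-weight variant \eqref{velupdtinertia}, together with the
$v_{max}$ clamp discussed earlier, can be sketched in Python as
follows. The parameter values are illustrative choices within the
recommended ranges, not values prescribed by the text, and the sketch
uses the {\it gbest} model (the whole swarm as neighborhood):

```python
import random

def pso(f, dim, n_particles=30, iters=300, w=0.7, c1=1.5, c2=1.5,
        lo=-5.0, hi=5.0, v_max=1.0, seed=1):
    """Minimize f over [lo, hi]^dim with gbest PSO using the
    inertia-weight velocity update (eq. velupdtinertia)."""
    rnd = random.Random(seed)
    X = [[rnd.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[rnd.uniform(-v_max, v_max) for _ in range(dim)] for _ in range(n_particles)]
    P = [x[:] for x in X]                 # personal bests p_i
    Pf = [f(x) for x in X]
    gi = min(range(n_particles), key=Pf.__getitem__)
    G, Gf = P[gi][:], Pf[gi]              # swarm (gbest) best p_g
    for _ in range(iters):
        for i in range(n_particles):
            for j in range(dim):
                # fresh u1, u2 for every dimension, as in the text
                u1, u2 = rnd.random(), rnd.random()
                V[i][j] = (w * V[i][j]
                           + c1 * u1 * (P[i][j] - X[i][j])
                           + c2 * u2 * (G[j] - X[i][j]))
                V[i][j] = max(-v_max, min(v_max, V[i][j]))  # v_max clamp
                X[i][j] += V[i][j]
            fx = f(X[i])
            if fx < Pf[i]:                # update personal best
                P[i], Pf[i] = X[i][:], fx
                if fx < Gf:               # update swarm best
                    G, Gf = X[i][:], fx
    return G, Gf
```

On a simple convex test function such as the sphere
$f(\mathbf{x}) = \sum_j x_j^2$, this sketch drives the swarm best
close to the origin within a few hundred iterations.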

Another manner to avoid the velocity explosion during \gls{pso}
iterations is to use a {\it constriction factor} $\chi$ multiplying
all parts of the velocity update equation:
% --------------------------------------------------------------------------
\begin{equation}\label{velupdtchi}
  v_{ij}(t+1)  = \chi \left\{v_{ij}(t) + c_1 \, u_1 \cdot [p_{ij}(t)-x_{ij}(t)] + c_2 \, u_2 \cdot [p_{gj}(t) - x_{ij}(t)]\right \}, \quad j = 1, 2, \dots, n 
\end{equation}
% --------------------------------------------------------------------------
where

% --------------------------------------------------------------------------
\begin{equation}
  \chi = \frac{2}{\left \arrowvert 2 - \varphi - \sqrt{\varphi^2 - 4 \varphi}\right \arrowvert}, \quad \varphi = c_1 + c_2
\end{equation}
% --------------------------------------------------------------------------
Notice that $\chi$ takes no real value when $\varphi < 4$. The idea of
the constriction factor is that the amplitude of a particle's
oscillations decreases as it focuses on a previous best point from its
neighborhood. In this way, as $\varphi$ increases, $\chi$ decreases
and such amplitudes become even smaller. However, if a member of a
neighborhood finds a better point, the other particles remain free to
explore the new region. Hence, the constriction factor does not
prevent particles from switching between exploratory and exploitative
modes \cite{kennedy2001}. \citeonline{bratton2007} state that, for
simplicity, most implementations of constricted \gls{pso} set $\varphi
= 4.1$ with $c_1 = c_2 = 2.05$, which ensures convergence and yields
$\chi \approx 0.72984$.
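The value quoted above can be checked directly; a small Python helper
(the function name \texttt{constriction} is an assumption of this
sketch) evaluates $\chi$ from $c_1$ and $c_2$:

```python
import math

def constriction(c1, c2):
    """Constriction factor chi of eq. (velupdtchi), phi = c1 + c2."""
    phi = c1 + c2
    if phi < 4:
        # phi**2 - 4*phi < 0 here, so chi would be complex
        raise ValueError("phi = c1 + c2 must be at least 4 for a real chi")
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))

# constriction(2.05, 2.05) -> approximately 0.72984, as in the text
```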

Although the constriction factor does not require a limit on
particles' velocities, empirical experiments have shown that taking
the variables' ranges as bounds for velocities can provide better
results. Limiting velocities in this way does not confine particles to
the feasible search space, but in general it does not allow them to go
far beyond the region of interest \cite{kennedy2001}. Actually,
artificially constraining particles' positions when they reach the
boundary of the search space is not recommended, since it can degrade
\gls{pso} performance. Infeasible particles may therefore emerge, and
the most straightforward method for handling them is to leave their
velocities and infeasible positions unaltered, while skipping the
fitness evaluation step so that infeasible positions cannot become
personal or neighborhood best positions. With this procedure, called
{\it let particles fly}, infeasible particles may be drawn back to the
feasible search space by the influence of their personal and
neighborhood bests.