\chapter[Reliability Prediction by PSO and SVM]{Reliability Prediction
  by Particle Swarm Optimization and Support Vector
  Ma\-chi\-nes}\label{ch:svm-rel-forecast}

This chapter presents three problems from literature related to
reliability prediction based on time series data and one application
example with data collected from oil production wells.  They are
solved by means of the \gls{pso}+\gls{svm} algorithm described in
Chapter \ref{ch:model-prob}.  As recommended by
\citeonline{bratton2007}, the {\it lbest} model is adopted.  The first
two literature examples are related to the forecast of failure times
of engineered components and the third one concerns the prediction of
miles to failure of a car engine. Forecasting these aspects of
component failures is worthwhile because it captures the non-linear
trends they may present and thus supports reliability-based
maintenance decisions.

\citeonline{xu2003} contends that, in practice, short-term
(single-step) forecasts are more useful since they provide timely
information for preventive and corrective maintenance plans, even
though multi-step predictions may capture some of the system
dynamics. Hence, only single-step-ahead forecasts are considered
here. In addition, one-dimensional input vectors are adopted, that is,
$p = 1$ and $\mathbf{x}_i = x_i = y_{i-1}$; consequently, each data
set is reduced by one entry.
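As an illustrative sketch (in Python, not part of the original MATLAB implementation), the construction of the one-step-ahead pairs $(x_i, y_i) = (y_{i-1}, y_i)$ can be written as:

```python
def make_lagged_pairs(series):
    """Build one-step-ahead pairs (x_i, y_i) = (y_{i-1}, y_i).

    The first observation has no predecessor, which is why each
    data set is reduced by one entry."""
    return [(series[i - 1], series[i]) for i in range(1, len(series))]
```

For instance, a 71-point series (as in Example 1) yields 70 usable pairs.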

The last example is related to the prediction of the \gls{tbf} of oil
wells based on different characteristics of the system, such as the
number of installed rods. It illustrates the application of the
proposed methodology in a real situation. Moreover, it involves
numerical as well as categorical variables, which may be handled in
different manners before being fed to the \gls{pso}+\gls{svm}.
Additionally, unlike the time-series-based examples, it entails a
multi-dimensional input vector.

Given that \gls{pso} is a stochastic tool, 30 runs of the algorithm
are performed in order to analyze its behavior. Although the
\gls{nrmse} was the only error function that guided the search for
parameters by \gls{pso}, the \gls{mape} and the \gls{mse} related to
the particles were also computed. Additionally, in the forthcoming
Sections \ref{sec:ascher}, \ref{sec:turbo}, \ref{sec:miles} and
\ref{sec:ex4}, apart from the Tables regarding descriptive statistics,
all other Tables along with the Figures are associated with the {\it
  lbest} \gls{pso}+\gls{svm} run that resulted in the smallest test
\gls{nrmse} value. Even though such ``machines'' may not give the most
suitable validation \gls{nrmse}, they show the best generalization
performance. For comparison purposes, all examples were also solved by
means of a {\it gbest} \gls{pso} model combined with \gls{svm}
(Section \ref{sec:comparacao}).

The \gls{pso} algorithm was implemented in MATLAB 7.8 and linked with
{\sf{LIBSVM}}. All experiments were run on a computer with a 2 GHz
processor, 2.9 GB of RAM and the Linux Ubuntu 9.04 operating system.
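As an illustrative sketch of the three error functions (in Python; the \gls{nrmse} normalization shown here, by $\sqrt{\sum_i y_i^2}$, is an assumption for illustration, the definition actually adopted being the one given in Chapter \ref{ch:model-prob}):

```python
import math

def mse(y, yhat):
    """Mean squared error."""
    return sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y)

def mape(y, yhat):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs((a - b) / a) for a, b in zip(y, yhat)) / len(y)

def nrmse(y, yhat):
    """Normalized root mean squared error.

    The normalization by the root of the sum of squared real
    outputs is one common choice, assumed here for illustration."""
    num = sum((a - b) ** 2 for a, b in zip(y, yhat))
    den = sum(a ** 2 for a in y)
    return math.sqrt(num / den)
```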

\section{Example 1: Failure Times of a Submarine Diesel
  Engine}\label{sec:ascher}

In this first example, the time series regards the times of
unscheduled maintenance actions for a submarine diesel engine
undergoing a deterioration process and is extracted from
\citeonline{ascher1984}. The data are presented in Table
\ref{tab:ex-ascher}. Since a single-step-ahead forecast with
one-dimensional input vectors is performed, the data set is reduced
from 71 to 70 entries so as to build the time series following the
reasoning described in Section \ref{sec:time-series}. The first 44
data points are then used for training, the following 12 for
validation and the last 14 for testing.
% --------------------------------------------------------------------------
\begin{table}[!ht]
\begin{center}
\begin{footnotesize}
  \caption{Engine age ($\times$ 1000 hours) at time of
    unscheduled maintenance actions. {\footnotesize{Adapted from
        \citeonline{ascher1984}, p. 75}}}
\label{tab:ex-ascher}

\vspace{0.2cm}

\begin{tabular}{llllllllll} 
  \toprule \textbf{Action} & \textbf{Age} &
  \textbf{Action} & \textbf{Age} & \textbf{Action} & \textbf{Age} & \textbf{Action} & \textbf{Age} & \textbf{Action} & \textbf{Age} \\\midrule
  1 &1.382 &16&17.632&30&21.061&44&21.888&58&23.491\\
  2 &2.990 &17&18.122&31&21.309&45&21.930&59&23.526\\
  3 &4.124 &18&19.067&32&21.310&46&21.943&60&23.774\\
  4 &6.827 &19&19.172&33&21.378&47&21.946&61&23.791\\
  5 &7.472 &20&19.299&34&21.391&48&22.181&62&23.822\\
  6 &7.567 &21&19.360&35&21.456&49&22.311&63&24.006\\
  7 &8.845 &22&19.686&36&21.461&50&22.634&64&24.286\\
  8 &9.450 &23&19.940&37&21.603&51&22.635&65&25.000\\
  9 &9.794 &24&19.944&38&21.658&52&22.669&66&25.010\\
  10&10.848&25&20.121&39&21.688&53&22.691&67&25.048\\
  11&11.993&26&20.132&40&21.750&54&22.846&68&25.268\\
  12&12.300&27&20.431&41&21.815&55&22.947&69&25.400\\
  13&15.413&28&20.525&42&21.820&56&23.149&70&25.500\\
  14&16.497&29&21.057&43&21.822&57&23.305&71&25.518\\
  15&17.352&&&&&&&&
  \\\bottomrule
\end{tabular}
\end{footnotesize}
\end{center}
\end{table}
% --------------------------------------------------------------------------

The {\it lbest} model involves 4 neighbors, and the underlying swarm
communication network can be visualized in the middle graph of Figure
\ref{fig:pso-net}. The required \gls{pso} parameters are listed in
Table \ref{tab:pso-param} and also hold for the subsequent
examples.
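A minimal sketch of such a neighborhood, under the assumption of a ring arrangement with two neighbors on each side (the actual network is the one depicted in Figure \ref{fig:pso-net}):

```python
def lbest_neighbors(i, swarm_size, k=4):
    """Indices of the k ring neighbors of particle i (k/2 on each
    side, indices wrapping around). Illustrative lbest sketch; the
    thesis' actual communication network is the one shown in the
    referenced figure."""
    half = k // 2
    return [(i + d) % swarm_size for d in range(-half, half + 1) if d != 0]
```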

% --------------------------------------------------------------------------
\begin{table}[!ht]
\begin{center}
\begin{footnotesize}
  \caption{PSO required parameters}
\label{tab:pso-param}

\vspace{0.2cm}

\begin{tabular}{p{4.4cm}l} 
  \toprule
  \textbf{Parameter}           & \textbf{Value}     \\\midrule
  Number of particles          & 30                 \\
  Number of neighbors          & 4                  \\
  $c_1 = c_2$                  & 2.05               \\
  Constriction factor ($\chi$) & $7.2984 \cdot 10^{-1}$ \\
  Maximum number of iterations & 6000               \\
  Maximum number of iterations with equal best fitness value & 600 \\
  Tolerance ($\delta$)         & $1 \cdot 10 ^{-12}$ \\
  \bottomrule
\end{tabular}
\end{footnotesize}
\end{center}
\end{table}
% --------------------------------------------------------------------------
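The constriction factor in Table \ref{tab:pso-param} can be checked against Clerc's expression $\chi = 2/\bigl|2 - \varphi - \sqrt{\varphi^2 - 4\varphi}\bigr|$ with $\varphi = c_1 + c_2 = 4.1$; a minimal numerical check (illustrative Python, not part of the MATLAB implementation):

```python
import math

def constriction_factor(c1=2.05, c2=2.05):
    """Clerc's constriction coefficient, valid for phi = c1 + c2 > 4."""
    phi = c1 + c2
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))
```

With the default $c_1 = c_2 = 2.05$, the function evaluates to approximately $0.72984$, the value listed in the Table.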

In addition, the definition intervals of the \gls{pso} variables ($C,
\varepsilon, \gamma$) as well as the initial values of $v_j^{max}, j =
1, 2, 3$ ({\it i.e.} 10\% of the variables' ranges) are shown in Table
\ref{tab:pso-ranges}. Notice that after the swarm initialization,
these maximum velocities are set equal to the definition range of each
variable. Additionally, the $\varepsilon$ interval is determined as
described in Chapter \ref{ch:model-prob}, that is, from 0.1\% to 15\% of
the mean of the training outputs.
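The initialization rule for the maximum velocities (10\% of each variable's range) can be sketched as follows (illustrative Python):

```python
def initial_vmax(lower, upper, fraction=0.10):
    """Initial maximum velocity for one PSO variable: a fixed
    fraction (10%) of the variable's definition range."""
    return fraction * (upper - lower)
```

For $C \in [1 \cdot 10^{-1}, 2000]$ this gives $199.99$, the first entry of Table \ref{tab:pso-ranges}.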

% --------------------------------------------------------------------------
\begin{table}[!ht]
\begin{center}
\begin{footnotesize}
  \caption{PSO variables' intervals and initial maximum
    velocities, Example 1}
\label{tab:pso-ranges}

\vspace{0.2cm}

\begin{tabular}{lll} 
  \toprule 
  \textbf{Variable} & \textbf{Interval}  & \textbf{Initial $v_j^{max}$} \\\midrule
  $C$               & $[1 \cdot 10^{-1}, 2000]$      & $199.9900$\\
  $\varepsilon$     & $[1.7302 \cdot 10^{-2}, 2.5953]$ & $2.5780 \cdot 10^{-1}$\\
  $\gamma$          & $[1 \cdot 10^{-6}, 50]$ & 4.9999\\
  \bottomrule
\end{tabular}
\end{footnotesize}
\end{center}
\end{table}
% --------------------------------------------------------------------------


The descriptive statistics of the results obtained from the 30
\gls{pso}+\gls{svm} runs are shown in Table \ref{tab:ex-ascher-lbest}. The
parameters related to the machine that provided the smallest test
\gls{nrmse} are $C = 263.6966$, $\varepsilon = 1.3701 \cdot 10^{-1}$
and $\gamma = 6.7315 \cdot 10^{-3}$.

% --------------------------------------------------------------------------
\begin{table}[!ht]
\begin{center}
\begin{footnotesize}
  \caption{Descriptive statistics of parameters and error functions,
    stop criteria frequency and performance for 30 PSO+SVM runs, {\it
      lbest}, Example 1}
\label{tab:ex-ascher-lbest}

\vspace{0.2cm}

\begin{tabular}{p{2cm}llllll} 
  \toprule
  & & \textbf{Minimum} & \textbf{Maximum}  & \textbf{Median}  & \textbf{Mean} & \textbf{Std. dev.}$^*$\\\cmidrule{2-7}
  \multirow{3}{*}{\textbf{Parameters}} & $C$ & 190.4848 & 1930.2743  & 732.5368   & 822.3439  & 560.3221 \\
  & $\varepsilon$ & $3.4194 \cdot 10^{-2}$ & $2.3899 \cdot 10^{-1}$  & $1.9672 \cdot 10^{-1}$  & $1.5857 \cdot 10^{-1}$  & $6.0396 \cdot 10^{-2}$ \\
  & $\gamma$ & $4.4980 \cdot 10^{-3}$ & 49.9974  & $6.6326 \cdot 10^{-3}$  & 1.6729  & 9.1270 \\[1.5ex]
  \multirow{2}{*}{\textbf{Validation}} & \gls{nrmse} & $4.4060 \cdot 10^{-3}$ & $2.9305 \cdot 10^{-1}$  & $4.4123 \cdot 10^{-3}$ & $1.4035 \cdot 10^{-2}$ & $ 5.2700 \cdot 10^{-2}$\\
  \multirow{2}{*}{\textbf{error}} & \gls{mape} (\%) & $3.7297 \cdot 10^{-1}$ & 26.2757  & $3.7501 \cdot 10^{-1}$  & 1.2384 & 4.7288 \\
  & \gls{mse} & $9.9229 \cdot 10^{-3}$ & 43.8973  & $9.9515 \cdot 10^{-3}$  & 1.6979 & 8.6070 \\[1.5ex]
  \multirow{3}{*}{\textbf{Test error}} & \gls{nrmse} & $7.1126 \cdot 10^{-3}$ & $3.8265 \cdot 10^{-1}$ & $7.4624 \cdot 10^{-3}$ & $2.0307 \cdot 10^{-2}$ & $ 6.8440 \cdot 10^{-2}$ \\
  & \gls{mape} (\%) &  $4.7571 \cdot 10^{-1}$& 38.1006  & $5.2994 \cdot 10^{-1}$  & 1.8181 & 6.8533 \\
  & \gls{mse} & $3.0472 \cdot 10^{-2}$ & 88.1988 &  $3.3680 \cdot 10^{-2}$ & 3.4281 & 17.2899 \\\midrule
  & & & & \multicolumn{3}{l}{\textbf{Absolute (relative, \%) frequency}} \\\cmidrule{2-7}
  \multirow{3}{*}{\textbf{Stop criteria}}& \multicolumn{3}{l}{Maximum number of iterations (6000)} & 13 (43.3333) & & \\
 & \multicolumn{3}{l}{Equal best fitness for 600 iterations} & 13 (43.3333) & & \\
 & \multicolumn{3}{l}{Tolerance $\delta = 1 \cdot 10^{-12}$} & 4 (13.3334) & & \\ \midrule
 & & & & \multicolumn{3}{l}{\textbf{Metric value}} \\\cmidrule{2-7}
  \multirow{3}{*}{\textbf{Performance}} & \multicolumn{3}{l}{Mean time per run (minutes)} & 10.8342 & & \\
 & \multicolumn{3}{l}{Mean number of trainings} & 116016.1667 & & \\
 & \multicolumn{3}{l}{Mean number of predictions} & 116016.1667 & & \\\bottomrule
\end{tabular}
\end{footnotesize}
\end{center}
\vspace{-0.3cm}
{\scriptsize{$^*$Standard deviation}}
\end{table}
% --------------------------------------------------------------------------






Figure \ref{fig:ascher-pso-evolution} provides a snapshot of
the swarm evolution during the optimization process. Notice that at
the 50$^{\text{th}}$ iteration there are particles outside the
feasible search space, but the final swarm comprises only feasible
particles. Observe also that, given the influence of the {\it
  lbest} model and of the validation \gls{nrmse}, particles tended
to form clusters in the search space. Figure
\ref{fig:ascher-nrmse}, in turn, shows the \gls{nrmse} convergence
along the \gls{pso} run. This specific run stopped at iteration 703,
at which the tolerance $\delta = 1 \cdot 10^{-12}$ was attained.
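For reference, a constriction-type velocity and position update of the kind driving this evolution may be sketched as follows (illustrative Python; the actual implementation is in MATLAB, and the argument names are assumptions):

```python
import random

def pso_step(x, v, pbest, lbest, chi=0.72984, c1=2.05, c2=2.05, vmax=None):
    """One constriction-type PSO update for a single particle.

    x, v, pbest and lbest are lists of equal length (here, the
    three SVR parameters C, epsilon and gamma); vmax optionally
    clamps each velocity component."""
    new_v, new_x = [], []
    for j in range(len(x)):
        vj = chi * (v[j]
                    + c1 * random.random() * (pbest[j] - x[j])
                    + c2 * random.random() * (lbest[j] - x[j]))
        if vmax is not None:  # clamp to the per-dimension maximum velocity
            vj = max(-vmax[j], min(vmax[j], vj))
        new_v.append(vj)
        new_x.append(x[j] + vj)
    return new_x, new_v
```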
% --------------------------------------------------------------------------
\begin{figure}[!ht]
\centering{
\includegraphics[width=0.5\linewidth]{fig/initialSwarm25-ascher.pdf}\hfill
\includegraphics[width=0.5\linewidth]{fig/intermediateSwarm25-ascher.pdf}\\[1.5ex]
\includegraphics[width=0.5\linewidth]{fig/finalSwarm25-ascher.pdf}
\caption{Swarm evolution during PSO, Example 1}
\label{fig:ascher-pso-evolution}}
\end{figure}
% --------------------------------------------------------------------------
% --------------------------------------------------------------------------
\begin{figure}[!ht]
\centering{
\includegraphics[width=10cm]{fig/nrmseConv25-ascher.pdf}
\caption{Validation NRMSE convergence, Example 1}
\label{fig:ascher-nrmse}}
\end{figure}
% --------------------------------------------------------------------------

Table \ref{tab:ex-ascher-pred} presents the real output values as well
as the validation and prediction results from the selected
\gls{svr}. Table \ref{tab:ex-ascher-alphas}, in turn, shows the
support vectors, their respective Lagrange multiplier values and
their classification as free or bounded support vectors. Notice that
only 22 of the 44 training points were chosen as support vectors and,
of these, only 5 are free. Substituting the values from Table
\ref{tab:ex-ascher-alphas} into equations \eqref{b0-calc} and
\eqref{nonlinfunc-last}, one may obtain the linear coefficient $b_0$
and the regression function $f(\mathbf{x})$, respectively. The real
outputs, the validation and test predictions as well as the
support vectors are depicted in Figure \ref{fig:ascher-result}.
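As a sketch of how the selected machine produces its predictions, the \gls{svr} regression function with a Gaussian kernel, $f(x) = \sum_i (\alpha_i - \alpha_i^*)K(x_i, x) + b_0$, can be written as follows (illustrative Python; the multipliers and $b_0$ are taken as given):

```python
import math

def rbf_kernel(x1, x2, gamma):
    """Gaussian (RBF) kernel for one-dimensional inputs."""
    return math.exp(-gamma * (x1 - x2) ** 2)

def svr_predict(x, support_vectors, gamma, b0):
    """Evaluate f(x) = sum_i (alpha_i - alpha*_i) K(x_i, x) + b_0.

    support_vectors is a list of (x_i, alpha_i, alpha_star_i)
    triples, such as the entries of the support-vector table."""
    return b0 + sum((a - a_star) * rbf_kernel(xi, x, gamma)
                    for xi, a, a_star in support_vectors)
```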

% --------------------------------------------------------------------------
\begin{table}[!ht]
\begin{center}
\begin{footnotesize}
  \caption{Real failure time (engine age) and predictions by SVR, Example 1}
\label{tab:ex-ascher-pred}

\vspace{0.2cm}

\begin{tabular}{llllll} 
  \toprule
  \multicolumn{3}{c}{\textbf{Validation}} & \multicolumn{3}{c}{\textbf{Test}} \\ \midrule
  \textbf{Action} & \textbf{Real age} & \textbf{Predicted age} & \textbf{Action} & \textbf{Real age} & \textbf{Predicted age}\\\midrule
  46 & 21.9430& 22.0328& 58 & 23.4910 &23.4414\\
  47 & 21.9460& 22.0459& 59 & 23.5260&23.6327\\
  48 & 22.1810& 22.0489& 60 & 23.7740&23.6686\\
  49 & 22.3110& 22.2870& 61 & 23.7910&23.9224\\
  50 & 22.6340& 22.4195& 62 & 23.8220&23.9398\\
  51 & 22.6350& 22.7503& 63 & 24.0060&23.9714\\ 
  52 & 22.6690& 22.7513& 64 & 24.2860&24.1582\\
  53 & 22.6910& 22.7863& 65 & 25.0000&24.4398\\ 
  54 & 22.8460& 22.8089& 66 & 25.0100&25.1356\\ 
  55 & 22.9470& 22.9684& 67 & 25.0480&25.1450\\ 
  56 & 23.1490& 23.0725& 68 & 25.2680&25.1809\\ 
  57 & 23.3050& 23.2807& 69 & 25.4000&25.3857\\  
     & &               & 70 & 25.5000&25.5062\\     
     & &               & 71 & 25.5180&25.5962\\\midrule 
  \multicolumn{2}{l}{\gls{nrmse}} & $4.4142 \cdot 10^{-3}$ & \multicolumn{2}{l}{\gls{nrmse}}  & $7.1126 \cdot 10^{-3}$\\ 
  \multicolumn{2}{l}{\gls{mape}(\%)}&$3.7486\cdot
10^{-1}$&\multicolumn{2}{l}{\gls{mape} (\%)} & $4.7767 \cdot 10^{-1}$\\ 
  \multicolumn{2}{l}{\gls{mse}} & $9.9597 \cdot 10^{-3}$ &
\multicolumn{2}{l}{\gls{mse}} & $3.0472 \cdot 10^{-2}$\\ 
  \bottomrule
\end{tabular}
\end{footnotesize}
\end{center}
\end{table}
% --------------------------------------------------------------------------

% --------------------------------------------------------------------------
\begin{table}[!ht]
\begin{center}
\begin{footnotesize}
  \caption{Support vectors' details, Example 1}
\label{tab:ex-ascher-alphas}

\vspace{0.2cm}

\begin{tabular}{llllllll} 
  \toprule
  $(x,y)$ & $\alpha$ & $\alpha^*$ & \textbf{Type} & $(x,y)$ & $\alpha$ & $\alpha^*$ & \textbf{Type}\\\midrule
  (1.382, 2.990)   & 0 & 135.2668 & free    & (16.497, 17.352) & 122.0212 & 0 & free    \\
  (4.124, 6.827)   & 263.6966 & 0 & bounded & (17.352, 17.632) & 0 & 263.6966 & bounded \\
  (7.472, 7.567)   & 0 & 263.6966 & bounded & (18.122, 19.067) & 263.6966 & 0 & bounded \\
  (7.567 , 8.845)  & 263.6966 & 0 & bounded & (19.067, 19.172) & 0 & 263.6966 & bounded \\
  (8.845 , 9.450)  & 0 & 263.6966 & bounded & (19.172, 19.299) & 0 & 44.2857  & free    \\
  (9.450 , 9.794)  & 0 & 263.6966 & bounded & (19.299, 19.360) & 0 & 263.6966 & bounded \\ 
  (9.794 , 10.848) & 263.6966 & 0 & bounded & (19.940, 19.944) & 0 & 263.6966 & bounded \\
  (10.848, 11.993) & 57.7336  & 0 & free    & (20.121, 20.132) & 0 & 263.6966 & bounded \\ 
  (11.993, 12.300) & 0 & 263.6966 & bounded & (20.132, 20.431) & 263.4942 & 0 & free    \\ 
  (12.300, 15.413) & 263.6966 & 0 & bounded & (20.525, 21.057) & 263.6966 & 0 & bounded \\ 
  (15.413, 16.497) & 263.6966 & 0 & bounded & (21.061, 21.309) & 263.6966 & 0 & bounded \\ 
  \bottomrule
\end{tabular}
\end{footnotesize}
\end{center}
\end{table}
% --------------------------------------------------------------------------

% --------------------------------------------------------------------------
\begin{figure}[!ht]
\centering{
\includegraphics[width=10cm]{fig/supVecResults25-ascher.pdf}
\caption{SVR results, Example 1}
\label{fig:ascher-result}}
\end{figure}
% --------------------------------------------------------------------------

Indeed, \citeonline{hongepai2006} solve this same example
by means of \gls{svm} coupled with a method to find the parameters $C,
\varepsilon, \sigma$, as well as with other forecasting tools, namely
the Duane model, \gls{arima} and \gls{grnn}. Their search method for
the \gls{svr} parameters entails the following idea: (i) fix the
values of two parameters ($C, \varepsilon$) and find the optimal value
of the remaining one ($\sigma_0$); (ii) given $\sigma_0$ and
$\varepsilon$, obtain the optimal value $C_0$; (iii) find
$\varepsilon_0$ based on $\sigma_0$ and $C_0$. This procedure is
guided by the evaluation of the \gls{nrmse}, and the authors consider
45, 12 and 14 points for training, validation and testing,
respectively.
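The alternating structure of that procedure can be sketched as follows (hypothetical grids and error function; only the one-parameter-at-a-time idea reflects the cited method):

```python
def coordinate_search(error, grids, start):
    """Optimize one parameter at a time while holding the others
    fixed, in the spirit of steps (i)-(iii): sigma first, then C,
    then epsilon. `error(params)` stands in for the validation
    NRMSE; `grids` maps each parameter name to candidate values."""
    params = dict(start)
    for name in ("sigma", "C", "epsilon"):  # fixed visiting order
        best = min(grids[name], key=lambda v: error({**params, name: v}))
        params[name] = best
    return params
```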

Nevertheless, the reported optimal parameter values together with the
information provided are not sufficient to reproduce the reported
\gls{nrmse} value of $6.4500 \cdot 10^{-3}$. This data set with the
mentioned division as well as the reported optimal parameters were
used to train and predict the validation and test outputs by means of
{\sf{LIBSVM}}, but the \gls{nrmse} found was much greater than $6.4500
\cdot 10^{-3}$, and the tendency of both validation and test
predictions was the opposite of the real one. Despite that, the test
\gls{nrmse} results from \citeonline{hongepai2006} are presented in
Table \ref{tab:ex-ascher-difftools} with an additional entry
corresponding to the best test \gls{nrmse} obtained by the
\gls{pso}+\gls{svm} approach from this work, which is competitive with
the values provided by all the other tools.

% --------------------------------------------------------------------------
\begin{table}[!ht]
\begin{center}
\begin{footnotesize}
  \caption{Test NRMSE from different forecast models, Example
    1. {\footnotesize{Adapted from \citeonline{hongepai2006},
        p. 160}}}
\label{tab:ex-ascher-difftools}

\vspace{0.2cm}

\begin{tabular}{ll} 
  \toprule 
  \textbf{Method}     & \textbf{Test NRMSE}    \\\midrule
  \gls{pso}+\gls{svm} & $7.1126 \cdot 10^{-3}$ \\
  \gls{svm}           & $6.4500 \cdot 10^{-3}$ \\
  Duane               & $1.0590 \cdot 10^{-2}$ \\
  \gls{grnn}          & $9.7300 \cdot 10^{-3}$ \\
  \gls{arima}         & $3.3660 \cdot 10^{-2}$ \\\bottomrule
\end{tabular}
\end{footnotesize}
\end{center}
\end{table}
% --------------------------------------------------------------------------


\section{Example 2: Turbochargers in Diesel Engines}\label{sec:turbo}

The second application example is extracted from \citeonline{xu2003}
and is related to turbochargers in diesel engines. As stated by the
authors, reliability is one of the most important considerations for
diesel engine systems, so an accurate prediction of a system's
reliability provides a good assessment of its performance. Table
\ref{tab:ex-turbo} presents the failure times of 40 turbochargers of
the same type as well as the non-parametric estimates of their
reliability calculated by:
% --------------------------------------------------------------------------
\begin{equation}\label{rel}
  R(T_i) = 1 - \frac{i-0.3}{n+0.4}
\end{equation}
% --------------------------------------------------------------------------
where $i$ is the failure index and $n$ is the data sample size.
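Equation \eqref{rel} is the complement of B\'enard's median-rank approximation. A minimal numerical sketch (note that the tabulated reliabilities of Table \ref{tab:ex-turbo} are recovered when $n = 100$ is used in the formula, which suggests they were estimated from a sample larger than the 40 listed failure times):

```python
def reliability(i, n):
    """Non-parametric reliability estimate
    R(T_i) = 1 - (i - 0.3)/(n + 0.4), the complement of Benard's
    median-rank approximation."""
    return 1.0 - (i - 0.3) / (n + 0.4)
```

For instance, `reliability(1, 100)` is approximately $0.9930$ and `reliability(40, 100)` approximately $0.6046$, the first and last tabulated values.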
% --------------------------------------------------------------------------
\begin{table}[!ht]
\begin{center}
\begin{footnotesize}
  \caption{Turbochargers failure times ($\times$ 1000 hours) and
    reliability data. {\footnotesize{Adapted from \citeonline{xu2003},
        p. 259}}}
\label{tab:ex-turbo}

\vspace{0.2cm}

\begin{tabular}{llllllllllll} 
  \toprule $i$ & $T_i$ & $R(T_i)$ & $i$ & $T_i$ & $R(T_i)$ & $i$ & $T_i$ & $R(T_i)$ & $i$ & $T_i$ & $R(T_i)$\\\midrule
  1 &1.6&0.9930&11&5.1&0.8934&21&6.5&0.7938&31&7.9&0.6942\\
  2 &2.0&0.9831&12&5.3&0.8835&22&6.7&0.7839&32&8.0&0.6843\\
  3 &2.6&0.9731&13&5.4&0.8735&23&7.0&0.7739&33&8.1&0.6743\\
  4 &3.0&0.9631&14&5.6&0.8635&24&7.1&0.7639&34&8.3&0.6643\\
  5 &3.5&0.9532&15&5.8&0.8536&25&7.3&0.7540&35&8.4&0.6544\\
  6 &3.9&0.9432&16&6.0&0.8436&26&7.3&0.7440&36&8.4&0.6444\\
  7 &4.5&0.9333&17&6.0&0.8337&27&7.3&0.7341&37&8.5&0.6345\\
  8 &4.6&0.9233&18&6.1&0.8237&28&7.7&0.7241&38&8.7&0.6245\\
  9 &4.8&0.9133&19&6.3&0.8137&29&7.7&0.7141&39&8.8&0.6145\\
  10&5.0&0.9034&20&6.5&0.8038&30&7.8&0.7042&40&9.0&0.6046
  \\\bottomrule
\end{tabular}
\end{footnotesize}
\end{center}
\end{table}
% --------------------------------------------------------------------------

\citeonline{xu2003} performed two experiments with these data, both
regarding reliability forecasts. In the first situation they considered
previous reliability values and also the failure times as input
data. The second experiment, in turn, had its inputs comprised only
of past reliability values. The authors used several sorts of \gls{nn}
as forecasting tools and observed that better results were obtained in
the latter experiment. Additionally, they compared the performance of
various types of \gls{nn} (\gls{mlpnn} with logistic and Gaussian
activations and \gls{rbfnn} with Gaussian activation) and of
\gls{arima}.

Also, \citeonline{chen2007} makes use of the same data set to train an
\gls{svm} and then predict reliability with it. Four-dimensional input
vectors $\mathbf{x}$ ({\it i.e.} $p = 4$) are considered and the
\gls{svr} parameters are obtained by a \gls{ga}. Moreover, despite the
time series characteristics, a cross-validation technique embedded in
the \gls{ga} is adopted so as to guide the search for the optimal $C,
\varepsilon, \sigma$. Once this set is found, the machine is retrained
on all the training data and prediction tasks are then performed on
the test set. The \gls{ga}+\gls{svm} results are compared with the
ones obtained by \gls{grnn}, \gls{mlpnn}, \gls{rbfnn}, \gls{nfn} and
\gls{arima}.

In this work, besides the turbocharger reliability prediction, the
forecast of the turbocharger failure times is also performed. The
input vectors for the reliability experiment consist only of a single
past reliability value and do not consider failure times at all. For
the failure times case, similarly, the input vectors are
one-dimensional, formed by the immediately previous failure time.


\subsection{Example 2.1: Reliability Forecast}

Actually, this example is the simplest one presented in this work,
given that the reliability values do not depend on previous
reliability values or on failure times, but only on the failure index
(see Equation \eqref{rel}). In this way, the time series pairs
$(y_{i-1},y_i), i = 1, 2, \cdots, n$ form a straight line. One may
question why such an ``easy'' problem would require sophisticated
tools such as \gls{nn}, \gls{arima} and \gls{svm}. However, this
particular example is widely used in the literature, as can be
inferred from the comments at the beginning of this Section, perhaps
to show the effectiveness of these methods in tackling simple problems
as well.

For this case, the first 30 points are used for \gls{svr} training,
the following 4 entries guide the search for the optimal parameters
$C$, $\varepsilon$ and $\gamma$ by \gls{pso}, and the last 5 examples
form the test set. The only difference with respect to the parameters'
intervals shown in Table \ref{tab:pso-ranges} regards the lower and
upper bounds of $\varepsilon$, whose definition depends on the data at
hand. For the reliability prediction case, they are respectively
$8.5358 \cdot 10^{-4}$ and $1.2804 \cdot 10^{-1}$.
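The rule behind these bounds (0.1\% and 15\% of the mean training output, as described in Chapter \ref{ch:model-prob}) can be sketched as follows (illustrative Python with hypothetical training outputs):

```python
def epsilon_interval(train_outputs):
    """Lower and upper bounds for epsilon: 0.1% and 15% of the
    mean of the training outputs, per the rule described in the
    modelling chapter."""
    m = sum(train_outputs) / len(train_outputs)
    return 1e-3 * m, 0.15 * m
```

Whatever the data, the ratio between the upper and lower bounds is always 150.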

The descriptive statistics of the model selection results for the 30
\gls{pso}+\gls{svm} runs are presented in Table
\ref{tab:ex-turborel-lbest}. Observe that, due to the problem's
simplicity, the standard deviations of all parameters and errors are
small compared to those of the other examples. Additionally, none of
the runs reached the maximum number of iterations as a stop
criterion, and the mean elapsed time was very low, only about one
minute per run.


% --------------------------------------------------------------------------
\begin{table}[!ht]
\begin{center}
\begin{footnotesize}

  \caption{Descriptive statistics of parameters and error functions,
    stop criteria frequency and performance for 30 PSO+SVM runs, {\it
      lbest}, Example 2.1}
\label{tab:ex-turborel-lbest}

\vspace{0.2cm}

\begin{tabular}{p{2cm}llllll} 
  \toprule
  & & \textbf{Minimum} & \textbf{Maximum} & \textbf{Median}  & \textbf{Mean} & \textbf{Std. dev.}$^*$\\\cmidrule{2-7}
  \multirow{3}{*}{\textbf{Parameters}}& $C$& 1900.9063 & 1946.8917  & 1917.5483 & 1919.3760  & 10.8901\\
  & $\varepsilon$ &  $8.3864 \cdot 10^{-4}$ & $8.4078 \cdot 10^{-4}$  & $8.3869 \cdot 10^{-4}$  &  $8.3906 \cdot 10^{-4}$   & $6.7860 \cdot 10^{-7}$ \\
  & $\gamma$ &  $8.8721 \cdot 10^{-4}$ & $9.0869 \cdot 10^{-4}$  & $9.0078 \cdot 10^{-4}$  &  $9.0041 \cdot 10^{-4}$   & $4.6488 \cdot 10^{-6}$ \\[1.5ex]
  \multirow{2}{*}{\textbf{Validation}} & \gls{nrmse}  & $8.4451 \cdot 10^{-5}$ & $4.1839 \cdot 10^{-4}$ & $8.4487 \cdot 10^{-5}$ & $1.0678 \cdot 10^{-4}$ & $8.4700 \cdot 10^{-5}$\\
  \multirow{2}{*}{\textbf{error}}& \gls{mape} (\%) & $7.4023 \cdot 10^{-3}$ & $4.1137 \cdot 10^{-2}$ & $7.4037 \cdot 10^{-3}$  & $9.6523 \cdot 10^{-3}$ & $8.5579 \cdot 10^{-3}$\\
  & \gls{mse} & $3.1960 \cdot 10^{-9}$ & $7.8443 \cdot 10^{-8}$ & $3.1987 \cdot 10^{-9}$  & $8.2172 \cdot 10^{-9}$ & $1.9088 \cdot 10^{-8}$\\[1.5ex]
  \multirow{3}{*}{\textbf{Test error}} & \gls{nrmse} & $ 2.0304 \cdot 10^{-4}$ & $5.8311 \cdot 10^{-4}$ & $2.0444 \cdot 10^{-4}$ & $2.2999 \cdot 10^{-4}$ & $ 9.5987 \cdot 10^{-5}$\\
  & \gls{mape} (\%) & $1.8678 \cdot 10^{-2}$ & $5.7884 \cdot 10^{-2}$ & $1.8831 \cdot 10^{-2}$  & $2.1465 \cdot 10^{-2}$ & $9.8995 \cdot 10^{-3}$\\
  & \gls{mse} & $1.6087 \cdot 10^{-8}$ & $1.3267 \cdot 10^{-7}$ &$1.6308 \cdot 10^{-8}$ & $2.4114 \cdot 10^{-8}$ & $2.9507 \cdot 10^{-8}$\\\midrule  
& & & & \multicolumn{3}{l}{\textbf{Absolute (relative, \%) frequency}} \\\cmidrule{2-7}
  \multirow{3}{*}{\textbf{Stop criteria}}& \multicolumn{3}{l}{Maximum number of iterations (6000)} & 0 (0) & & \\
 & \multicolumn{3}{l}{Equal best fitness for 600 iterations} & 25 (83.3333) & & \\
 & \multicolumn{3}{l}{Tolerance $\delta = 1 \cdot 10^{-12}$} & 5 (16.6667) & & \\ \midrule
 & & & & \multicolumn{3}{l}{\textbf{Metric value}} \\\cmidrule{2-7}
  \multirow{3}{*}{\textbf{Performance}} & \multicolumn{3}{l}{Mean time per run (minutes)} & 1.0066 & & \\
 & \multicolumn{3}{l}{Mean number of trainings} & 67989.7000  & & \\
 & \multicolumn{3}{l}{Mean number of predictions} & 67989.7000  & & \\\bottomrule
\end{tabular}
\end{footnotesize}
\end{center}
\vspace{-0.3cm}
{\scriptsize{$^*$Standard deviation}}
\end{table}
% --------------------------------------------------------------------------

The parameter values associated with the \gls{pso}+\gls{svm} run that
resulted in the smallest test \gls{nrmse} are $C = 1923.6203,
\varepsilon = 8.3873 \cdot 10^{-4}, \gamma = 8.9796 \cdot 10^{-4}$.
Figure \ref{fig:turborel-pso-evolution} depicts the particle swarm in
the initial, $50^{\text{th}}$ and $1822^{\text{nd}}$ (final)
iterations. In this particular run, the best global validation
\gls{nrmse} was found at iteration 1222 and remained unchanged for the
next 600 iterations. The validation \gls{nrmse} evolution can be seen
in Figure \ref{fig:turborel-nrmse}.

% --------------------------------------------------------------------------
\begin{figure}[!ht]
\centering{
\includegraphics[width=0.5\linewidth]{fig/initialSwarm7-turborel.pdf}\hfill
\includegraphics[width=0.5\linewidth]{fig/intermediateSwarm7-turborel.pdf}\\[1.5ex]
\includegraphics[width=0.5\linewidth]{fig/finalSwarm7-turborel.pdf}
\caption{Swarm evolution during PSO, Example 2.1}
\label{fig:turborel-pso-evolution}}
\end{figure}
% --------------------------------------------------------------------------

% --------------------------------------------------------------------------
\begin{figure}[!ht]
\centering{
\includegraphics[width=10cm]{fig/nrmseConv7-turborel.pdf}
\caption{Validation NRMSE convergence, Example 2.1}
\label{fig:turborel-nrmse}}
\end{figure}
% --------------------------------------------------------------------------

The real and predicted values for the validation and test outputs are
very close to each other and are listed in Table
\ref{tab:ex-turborel-pred}. The support vectors' features are shown in
Table \ref{tab:ex-turborel-alphas}, from which it can be noted that
only two bounded support vectors were required by the \gls{svr}
model. Figure \ref{fig:turborel-result} presents the two support
vectors in addition to the validation and reliability
forecasts. Interestingly, the chosen support vectors were the first
and last training points.

% --------------------------------------------------------------------------
\begin{table}[!ht]
\begin{center}
\begin{footnotesize}
  \caption{Real failure time and predictions by SVR, Example 2.1}
\label{tab:ex-turborel-pred}


\vspace{0.2cm}

\begin{tabular}{llllll} 
  \toprule
  \multicolumn{3}{c}{\textbf{Validation}} & \multicolumn{3}{c}{\textbf{Test}} \\\midrule
  $i$ & $R(T_i)$ & \textbf{Predicted} $R(T_i)$ & $i$ & $R(T_i)$ & \textbf{Predicted} $R(T_i)$\\\midrule
  32 &0.6843 & 0.6842 & 36 &0.6444 & 0.6445\\
  33 &0.6743 & 0.6743 & 37 &0.6345 & 0.6345\\
  34 &0.6643 & 0.6644 & 38 &0.6245 & 0.6247\\
  35 &0.6544 & 0.6544 & 39 &0.6145 & 0.6147\\
     &       &        & 40 &0.6046 & 0.6047\\\midrule
  \multicolumn{2}{l}{\gls{nrmse}} & $8.4525 \cdot 10^{-5}$ & \multicolumn{2}{l}{\gls{nrmse}} & $2.0305 \cdot 10^{-4}$\\ 
  \multicolumn{2}{l}{\gls{mape} (\%)} & $7.4037 \cdot 10^{-3}$& \multicolumn{2}{l}{\gls{mape} (\%)} & $1.8678 \cdot 10^{-2}$ \\ 
  \multicolumn{2}{l}{\gls{mse}} & $3.2016 \cdot 10^{-9}$ & \multicolumn{2}{l}{\gls{mse}} & $1.6087 \cdot 10^{-8}$\\ 
  \bottomrule
\end{tabular}
\end{footnotesize}
\end{center}
\end{table}
% --------------------------------------------------------------------------

% --------------------------------------------------------------------------
\begin{table}[!ht]
\begin{center}
\begin{footnotesize}
\caption{Support vectors' details, Example 2.1}
\label{tab:ex-turborel-alphas}

\vspace{0.2cm}

\begin{tabular}{llll} 
  \toprule 
  $(x,y)$    & $\alpha$  & $\alpha^*$ & \textbf{Type} \\\midrule
  (0.9930, 0.9831) & 1923.6203 & 0 & bounded          \\
  (0.7042, 0.6942) & 0 & 1923.6203 & bounded       \\\bottomrule
\end{tabular}
\end{footnotesize}
\end{center}
\end{table}
% --------------------------------------------------------------------------

% --------------------------------------------------------------------------
\begin{figure}[!ht]
\centering{
\includegraphics[width=10cm]{fig/supVecResults7-turborel.pdf}
\caption{SVR results, Example 2.1}
\label{fig:turborel-result}}
\end{figure}
% --------------------------------------------------------------------------

\citeonline{xu2003} and \citeonline{chen2007} present the test
\gls{nrmse} values obtained for this example with different time
series models. Table \ref{tab:ex-turborel-difftools} updates that list
with the results of the methodology proposed in this work. Notice
that, among all methods, \gls{pso}+\gls{svm} provided the smallest
test \gls{nrmse}. This indicates the ability of \gls{pso} to handle
the model selection problem related to \gls{svr}, as well as the
capacity of \gls{svr} itself to tackle reliability forecast
problems. It is important to emphasize that this better result was
attained even with a smaller training set. \citeonline{xu2003} mention
only training and validation sets, so one may infer that the
validation set actually played the role of the test
set. \citeonline{chen2007}, in turn, uses a cross-validation approach
and, after the parameters were found, retrained the \gls{svr} on the
whole training set. Hence, in both cases, the training sets were
indeed larger than the one used in this work.
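The \gls{nrmse}, \gls{mape} and \gls{mse} values reported throughout
this chapter can be reproduced with a short sketch. The \gls{nrmse}
normalization below is an assumption inferred from the tables (RMSE
divided by the mean of the actual outputs), which is consistent with
the reported MSE/NRMSE pairs:

```python
import math

def mse(y, yhat):
    # Mean squared error over the evaluation set.
    return sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y)

def nrmse(y, yhat):
    # Assumed normalization: RMSE divided by the mean of the actual
    # outputs, consistent with the MSE/NRMSE pairs in the tables above.
    return math.sqrt(mse(y, yhat)) / (sum(y) / len(y))

def mape(y, yhat):
    # Mean absolute percentage error, in %.
    return 100 * sum(abs((a - b) / a) for a, b in zip(y, yhat)) / len(y)
```

For instance, predictions $(2.2, 1.8)$ against actuals $(2.0, 2.0)$
give $\text{MSE} = 0.04$, $\text{NRMSE} = 0.1$ and
$\text{MAPE} = 10\%$.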

% --------------------------------------------------------------------------
\begin{table}[!ht]
\begin{center}
\begin{footnotesize}
  \caption{Test NRMSE from different forecast models, Example
    2.1. {\footnotesize{Updated from \citeonline{xu2003},
        p. 260, and \citeonline{chen2007}, p. 430}}}
\label{tab:ex-turborel-difftools}

\vspace{0.2cm}

\begin{tabular}{ll} 
  \toprule 
  \textbf{Method}     & \textbf{Test \gls{nrmse}} \\\midrule
  \gls{pso}+\gls{svm} & $2.0305 \cdot 10^{-4}$\\
  \gls{ga}+\gls{svm}  & $4.0000 \cdot 10^{-4}$\\
  \gls{nfn}           & $3.6900 \cdot 10^{-3}$\\
  \gls{rbfnn} (Gaussian activation) & $3.9100 \cdot 10^{-3}$\\
  \gls{mlpnn} (Gaussian activation) & $2.4970 \cdot 10^{-2}$ \\
  \gls{mlpnn} (logistic activation) & $3.9700 \cdot 10^{-2}$\\
  \gls{grnn}          & $1.0850 \cdot 10^{-2}$\\
  \gls{arima}         & $1.9900 \cdot 10^{-2}$\\\bottomrule
\end{tabular}
\end{footnotesize}
\end{center}
\end{table}
% --------------------------------------------------------------------------

\subsection{Example 2.2: Failure Times Forecast}

Differently from the reliability prediction problem, in the failure
times forecast the data set is divided with approximately the same
proportions as in the first example, that is, 63\%, 17\% and 20\% of
the data dedicated to training, validation and test purposes, in this
order. This yields 24, 7 and 8 points, respectively. Also,
$\varepsilon \in [5.2750 \cdot 10^{-3}, 7.9125 \cdot 10^{-1}]$.

Descriptive statistics of the parameters and error functions, as well
as some performance metrics for the {\it lbest} model, are listed in
Table \ref{tab:ex-turbo-lbest}. Observe that in the majority of the
runs the best global fitness value remained the same for 600
consecutive iterations, and the maximum number of iterations was not
attained in any of them. This indicates that \gls{pso} is able to find
good solutions without requiring the maximum number of iterations,
which positively influences the algorithm's elapsed time.

% --------------------------------------------------------------------------
\begin{table}[!ht]
\begin{center}
\begin{footnotesize}

  \caption{Descriptive statistics of parameters and error functions,
    stop criteria frequency and performance for 30 PSO+SVM runs, {\it
      lbest}, Example 2.2}
\label{tab:ex-turbo-lbest}

\vspace{0.2cm}

\begin{tabular}{p{2cm}llllll} 
  \toprule
  & & \textbf{Minimum} & \textbf{Maximum} & \textbf{Median}  & \textbf{Mean} & \textbf{Std. dev.}$^*$\\\cmidrule{2-7}
%  \multirow{3}{*}{\rotatebox{90}{\textbf{Parameters}}} & $C$ & 1.382 & 16 & 17.632 & 30 & 21.061&44\\
  \multirow{3}{*}{\textbf{Parameters}}& $C$& 34.3824 & 1997.3652  & 1650.7974   & 1276.5337  & 778.9501\\
  & $\varepsilon$ & $1.1999 \cdot 10^{-2}$   & $1.5487 \cdot 10^{-1}$ & $5.2240 \cdot 10^{-2}$  &  $8.8812 \cdot 10^{-2}$   & $6.3591 \cdot 10^{-2}$  \\
  & $\gamma$ & $4.6374 \cdot 10^{-3}$  & $5.2242 \cdot 10^{-1}$  & $1.9170 \cdot 10^{-2}$  & $1.5319 \cdot 10^{-1}$ & $2.2412 \cdot 10^{-2}$\\[1.5ex]
  \multirow{2}{*}{\textbf{Validation}} & \gls{nrmse}  & $1.6620 \cdot 10^{-2}$ & $1.6862 \cdot 10^{-2}$ & $1.6823 \cdot 10^{-2}$ & $1.6777 \cdot 10^{-2}$ & $9.3827 \cdot 10^{-5}$\\
  \multirow{2}{*}{\textbf{error}}& \gls{mape} (\%) & 1.2262 & 1.3132 & 1.2422 & 1.2539 & $2.8847 \cdot 10^{-2}$\\
  & \gls{mse} & $1.6274 \cdot 10^{-2}$ & $1.6751 \cdot 10^{-2}$ & $1.6674 \cdot 10^{-2}$  & $1.6583 \cdot 10^{-2}$ & $1.8497 \cdot 10^{-4}$\\[1.5ex]
  \multirow{3}{*}{\textbf{Test error}} & \gls{nrmse} & $1.3412 \cdot 10^{-2}$  & $7.1785 \cdot 10^{-2}$ & $1.7837 \cdot 10^{-2}$ & $3.1674 \cdot 10^{-2}$ & $ 2.4001 \cdot 10^{-2}$\\
  & \gls{mape} (\%) & 1.1299 & 5.0759 & 1.4722  & 2.4071 & 1.6025\\
  & \gls{mse} & $1.3087 \cdot 10^{-2}$ & $3.7489 \cdot 10^{-1}$ &$2.3149 \cdot 10^{-2}$ & $1.1353 \cdot 10^{-1}$ & $1.5178 \cdot 10^{-1}$\\\midrule  
& & & & \multicolumn{3}{l}{\textbf{Absolute (relative, \%) frequency}} \\\cmidrule{2-7}
  \multirow{3}{*}{\textbf{Stop criteria}}& \multicolumn{3}{l}{Maximum number of iterations (6000)} & 0 (0) & & \\
 & \multicolumn{3}{l}{Equal best fitness for 600 iterations} & 29 (96.6667) & & \\
 & \multicolumn{3}{l}{Tolerance $\delta = 1 \cdot 10^{-12}$} & 1 (3.3333) & & \\ \midrule
 & & & & \multicolumn{3}{l}{\textbf{Metric value}} \\\cmidrule{2-7}
  \multirow{3}{*}{\textbf{Performance}} & \multicolumn{3}{l}{Mean time per run (minutes)} & 10.5648 & & \\
 & \multicolumn{3}{l}{Mean number of trainings} & 68977.6667  & & \\
 & \multicolumn{3}{l}{Mean number of predictions} & 68977.6667  & & \\\bottomrule
\end{tabular}
\end{footnotesize}
\end{center}
\vspace{-0.3cm}
{\scriptsize{$^*$Standard deviation}}
\end{table}
% --------------------------------------------------------------------------

The parameter values associated with the ``machine'' which provided
the smallest {\it lbest} test \gls{nrmse} are $C = 1936.7744$,
$\varepsilon = 1.5474 \cdot 10^{-1}$ and $\gamma = 4.6374 \cdot
10^{-3}$; that specific run stopped at iteration 2836, since the
global best particle had reached its best position 600 iterations
earlier. Figure \ref{fig:turbo-pso-evolution} shows the evolution of
the particle swarm in three different phases of the algorithm, and
Figure \ref{fig:turbo-nrmse} depicts the \gls{nrmse} values {\it
versus} the \gls{pso} iterations. Tables \ref{tab:ex-turbo-pred} and
\ref{tab:ex-turbo-alphas} present, respectively, the real and
predicted failure times for the validation and test sets and the
support vectors' details. Notice that for this example the test errors
are smaller than the validation ones and that only 6 of the 24
training examples are support vectors (50\% of them free). Lastly, the
\gls{svr} results are summarized in Figure \ref{fig:turbo-result}.
% --------------------------------------------------------------------------
\begin{figure}[!ht]
\centering{
\includegraphics[width=0.5\linewidth]{fig/initialSwarm16-turbo.pdf}\hfill
\includegraphics[width=0.5\linewidth]{fig/intermediateSwarm16-turbo.pdf}\\[1ex]
\includegraphics[width=0.5\linewidth]{fig/finalSwarm16-turbo.pdf}
\caption{Swarm evolution during PSO, Example 2.2}
\label{fig:turbo-pso-evolution}}
\end{figure}
% --------------------------------------------------------------------------
% --------------------------------------------------------------------------
\begin{figure}[!ht]
\centering{
\includegraphics[width=10cm]{fig/nrmseConv16-turbo.pdf}
\caption{Validation NRMSE convergence, Example 2.2}
\label{fig:turbo-nrmse}}
\end{figure}
% --------------------------------------------------------------------------
% --------------------------------------------------------------------------
\begin{table}[!ht]
\begin{center}
\begin{footnotesize}
  \caption{Real failure time and predictions by SVR, Example 2.2}
\label{tab:ex-turbo-pred}


\vspace{0.2cm}

\begin{tabular}{llllll} 
  \toprule
  \multicolumn{3}{c}{\textbf{Validation}} & \multicolumn{3}{c}{\textbf{Test}} \\\midrule
  $i$ & $T_i$ & \textbf{Predicted} $T_i$ & $i$ & $T_i$ & \textbf{Predicted} $T_i$\\\midrule
  26 & 7.3 &7.4145 & 33 &8.1 &8.0696\\
  27 & 7.3 &7.4145 & 34 &8.3 &8.1622\\
  28 & 7.7 &7.4145 & 35 &8.4 &8.3464\\
  29 & 7.7 &7.7902 & 36 &8.4 &8.4381\\
  30 & 7.8 &7.7902 & 37 &8.5 &8.4381\\
  31 & 7.9 &7.8836 & 38 &8.7 &8.5294\\ 
  32 & 8.0 &7.9767 & 39 &8.8 &8.7109\\
     &     &       & 40 &9.0 &8.8011\\\midrule
  \multicolumn{2}{l}{\gls{nrmse}} & $1.6827 \cdot 10^{-2}$ & \multicolumn{2}{l}{\gls{nrmse}} & $1.3412 \cdot 10^{-2}$\\ 
  \multicolumn{2}{l}{\gls{mape} (\%)} & $1.2344$& \multicolumn{2}{l}{\gls{mape} (\%)} & 1.1299 \\ 
  \multicolumn{2}{l}{\gls{mse}} & $1.6682 \cdot 10^{-2}$ & \multicolumn{2}{l}{\gls{mse}} & $1.3087 \cdot 10^{-2}$\\ 
  \bottomrule
\end{tabular}
\end{footnotesize}
\end{center}
\end{table}
% --------------------------------------------------------------------------
% --------------------------------------------------------------------------
\begin{table}[!ht]
\begin{center}
\begin{footnotesize}
\caption{Support vectors' details, Example 2.2}
\label{tab:ex-turbo-alphas}

\vspace{0.2cm}

\begin{tabular}{llll} 
  \toprule 
  $(x,y)$    & $\alpha$  & $\alpha^*$ & \textbf{Type} \\\midrule
  (1.6, 2.0) & 0         & 390.8909   & free          \\
  (3.9, 4.5) & 1936.7744 & 0          & bounded       \\
  (4.5, 4.6) & 0         & 975.3477   & free          \\
  (6.0, 6.0) & 0         & 1936.7744  & bounded       \\
  (6.5, 6.5) & 0         & 570.5358   & free          \\
  (6.7, 7.0) & 1936.7744 & 0          & bounded       \\\bottomrule
\end{tabular}
\end{footnotesize}
\end{center}
\end{table}
% --------------------------------------------------------------------------

% --------------------------------------------------------------------------
\begin{figure}[!ht]
\centering{
\includegraphics[width=10cm]{fig/supVecResults16-turbo.pdf}
\caption{SVR results, Example 2.2}
\label{fig:turbo-result}}
\end{figure}
% --------------------------------------------------------------------------


\section{Example 3: Miles to Failure of a Car Engine}\label{sec:miles}

This example is associated with the prediction of \gls{mtf} of a car
engine and it also comes from \citeonline{xu2003}, who collected data
from 100 units of a specific car engine (Table \ref{tab:ex-mtf}).
% --------------------------------------------------------------------------
\begin{table}[!ht]
\begin{center}
\begin{footnotesize}
  \caption{Miles to failure ($\times$ 1000 miles) of a car
    engine. {\footnotesize{Adapted from \citeonline{xu2003},
        p. 262--263}}}
\label{tab:ex-mtf}

\vspace{0.2cm}

\begin{tabular}{llllllll} 
  \toprule \textbf{Number} & \textbf{\gls{mtf}} & \textbf{Number} &
\textbf{\gls{mtf}} & \textbf{Number} & \textbf{\gls{mtf}} & \textbf{Number} &
\textbf{\gls{mtf}} \\\midrule
  1 &37.1429&26&35.8095&51&37.1429&76&36.3810\\
  2 &37.4286&27&36.9524&52&37.8095&77&38.0000\\
  3 &37.6190&28&37.6190&53&38.0952&78&38.1905\\
  4 &38.5714&29&37.8095&54&38.6667&79&38.6667\\
  5 &40.0000&30&38.0952&55&40.0619&80&38.6667\\
  6 &35.8095&31&36.8571&56&36.1905&81&37.1429\\
  7 &36.2857&32&38.0952&57&36.3810&82&37.6190\\
  8 &36.2857&33&38.0952&58&37.0476&83&37.6190\\
  9 &36.4762&34&38.3810&59&37.2381&84&38.0952\\
  10&38.1905&35&39.0476&60&38.0000&85&39.0476\\
  11&36.1905&36&37.2381&61&35.7143&86&36.2857\\
  12&36.8571&37&37.3333&62&36.4762&87&37.1429\\
  13&37.6190&38&37.5238&63&37.3333&88&37.5238\\
  14&37.8095&39&37.8095&64&37.6190&89&37.8095\\
  15&38.7619&40&38.5714&65&38.4762&90&38.0000\\
  16&35.9048&41&37.1429&66&36.8571&91&36.8571\\
  17&36.4762&42&37.2381&67&37.1429&92&37.0476\\
  18&36.8571&43&37.6190&68&37.9048&93&37.9048\\
  19&37.1429&44&38.1905&69&38.0952&94&38.1905\\
  20&37.4286&45&38.5714&70&38.8571&95&39.5238\\
  21&37.4286&46&36.0952&71&37.1429&96&35.4286\\
  22&37.6190&47&37.2381&72&37.6190&97&36.0000\\
  23&38.3810&48&37.4286&73&37.6190&98&37.7143\\
  24&38.5714&49&37.5238&74&37.8095&99&38.0952\\
  25&39.4286&50&39.0476&75&38.3810&100&38.5714
  \\\bottomrule
\end{tabular}
\end{footnotesize}
\end{center}
\end{table}
% --------------------------------------------------------------------------

The objective is to predict the future \gls{mtf} of car engines based
on past failure evidence. Once more, a single-step-ahead forecast with
one-dimensional input vectors is performed, which results in a data
set with 99 entries. From these, 80, 9 and 10 points are used for
training, validation and test purposes, respecting their natural
order. In this example, $\varepsilon \in [3.7590 \cdot 10^{-2},
5.6385]$.

Table \ref{tab:ex-miles-lbest} presents the descriptive statistics
related to the parameters and error functions in the 30
\gls{pso}+\gls{svm} runs. Only in this example was the mean number of
predictions different from the mean number of trainings. This can
happen because, for some values of $\varepsilon$, all points may lie
within the $\varepsilon$-tube, resulting in a model without support
vectors. In this case, the first part of the regression function
vanishes and, when {\sf{LIBSVM}} predicts based on a ``machine'' with
these features, it returns only a constant value equal to $b_0$. As
this situation is not desirable, when the outcome of a training is an
\gls{svm} without support vectors, {\sf{LIBSVM}} is not allowed to
predict and the fitness value associated with the particle under
consideration remains unaltered.
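The degenerate case just described can be detected before prediction:
an $\varepsilon$-\gls{svr} solution with no support vectors exists
exactly when a single constant fits every training output inside the
$\varepsilon$-tube. A hedged sketch of such a guard (not the actual
{\sf{LIBSVM}} code; the function names are illustrative):

```python
def fits_in_flat_tube(y, eps):
    # If some constant b places every output within eps of it, the
    # eps-SVR optimum is the flat model f(x) = b with all Lagrange
    # multipliers at zero, i.e. no support vectors. This holds iff
    # the output range is at most the tube width 2*eps.
    return max(y) - min(y) <= 2 * eps

def guarded_fitness(y_train, eps, evaluate, previous_fitness):
    # evaluate() stands for training plus validation-error computation;
    # when the model would degenerate to a constant, prediction is
    # skipped and the particle keeps its previous fitness.
    if fits_in_flat_tube(y_train, eps):
        return previous_fitness
    return evaluate()
```

This explains why the mean number of predictions in Table
\ref{tab:ex-miles-lbest} is slightly smaller than the mean number of
trainings.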


The run that resulted in the smallest test \gls{nrmse} is associated
with the following parameter values: $C = 18.2629$, $\varepsilon =
7.9411 \cdot 10^{-2}$ and $\gamma = 6.8979 \cdot 10^{-1}$. This
particular run continued up to the $4225^{\text{th}}$ iteration and,
like all the other runs, stopped after 600 iterations with the same
best global validation \gls{nrmse}.
% --------------------------------------------------------------------------
\begin{table}[!ht]
\begin{center}
\begin{footnotesize}

  \caption{Descriptive statistics of parameters and error functions,
    stop criteria frequency and performance for 30 PSO+SVM runs, {\it
      lbest}, Example 3}
\label{tab:ex-miles-lbest}

\vspace{0.2cm}

\begin{tabular}{p{2cm}llllll} 
  \toprule
  & & \textbf{Minimum} & \textbf{Maximum} & \textbf{Median}  & \textbf{Mean} & \textbf{Std. dev.}$^*$\\\cmidrule{2-7}
%  \multirow{3}{*}{\rotatebox{90}{\textbf{Parameters}}} & $C$ & 1.382 & 16 & 17.632 & 30 & 21.061&44\\
  \multirow{3}{*}{\textbf{Parameters}}& $C$& 16.3248 & 30.1075 & 21.7998 & 22.5316  & 5.4798\\
  & $\varepsilon$ & $7.9411 \cdot 10^{-2}$ & $2.3345 \cdot 10^{-1}$ & $1.6092 \cdot 10^{-1}$  & $1.5861 \cdot 10^{-1}$ & $7.6099 \cdot 10^{-2}$\\
  & $\gamma$ & $4.9354 \cdot 10^{-1}$ & $7.2213 \cdot 10^{-1}$ & $5.9863 \cdot 10^{-1}$  & $6.0241 \cdot 10^{-1}$ & $1.0515 \cdot 10^{-1}$\\[1.5ex]
  \multirow{2}{*}{\textbf{Validation}} & \gls{nrmse} & $8.5041 \cdot 10^{-3}$ & $9.1959 \cdot 10^{-3}$ & $8.8512 \cdot 10^{-3}$ & $8.8506 \cdot 10^{-3}$ & $3.5098 \cdot 10^{-4}$\\
  \multirow{2}{*}{\textbf{error}}& \gls{mape} (\%) & $6.2633 \cdot 10^{-1}$ & $7.0907 \cdot 10^{-1}$ & $6.7005 \cdot 10^{-1}$ & $6.6897 \cdot 10^{-1}$ & $4.0743 \cdot 10^{-2}$\\
  & \gls{mse} & $1.0273 \cdot 10^{-1}$ & $1.2012 \cdot 10^{-1}$ & $1.1145 \cdot 10^{-1}$  & $1.1144 \cdot 10^{-1}$ & $8.8249 \cdot 10^{-3}$\\[1.5ex]
  \multirow{3}{*}{\textbf{Test error}} & \gls{nrmse} & $1.8969 \cdot 10^{-2}$ & $1.9468 \cdot 10^{-2}$ & $1.9265 \cdot 10^{-2}$ & $1.9242 \cdot 10^{-2}$ & $ 2.2394 \cdot 10^{-4}$\\
  & \gls{mape} (\%) & 1.4338 & 1.4978 & 1.4697  & 1.4679 & $2.9455 \cdot 10^{-2}$\\
  & \gls{mse} & $5.0739 \cdot 10^{-1}$ & $5.3441 \cdot 10^{-1}$ & $5.2338 \cdot 10^{-1}$ & $5.2217 \cdot 10^{-1}$ & $1.2150 \cdot 10^{-2}$\\\midrule  
& & & & \multicolumn{3}{l}{\textbf{Absolute (relative, \%) frequency}} \\\cmidrule{2-7}
  \multirow{3}{*}{\textbf{Stop criteria}}& \multicolumn{3}{l}{Maximum number of iterations (6000)} & 0 (0) & & \\
 & \multicolumn{3}{l}{Equal best fitness for 600 iterations} & 30 (100) & & \\
 & \multicolumn{3}{l}{Tolerance $\delta = 1 \cdot 10^{-12}$} & 0 (0) & & \\ \midrule
 & & & & \multicolumn{3}{l}{\textbf{Metric value}} \\\cmidrule{2-7}
  \multirow{3}{*}{\textbf{Performance}} & \multicolumn{3}{l}{Mean time per run (minutes)} & 4.9337 & & \\
 & \multicolumn{3}{l}{Mean number of trainings} & 66653.6333  & & \\
 & \multicolumn{3}{l}{Mean number of predictions} & 66563.8000  & & \\\bottomrule
\end{tabular}
\end{footnotesize}
\end{center}
\vspace{-0.3cm}
{\scriptsize{$^*$Standard deviation}}
\end{table}
% --------------------------------------------------------------------------

Figure \ref{fig:miles-pso-evolution} depicts the particle swarm at
three different moments. The final swarm, as in Example 1, forms some
clusters. Nevertheless, it can be noticed from the axis ranges that
the parameter values are rather concentrated when compared with their
respective original intervals. The validation \gls{nrmse} convergence
can be visualized in Figure \ref{fig:miles-nrmse}.
% --------------------------------------------------------------------------
\begin{figure}[!ht]
\centering{
\includegraphics[width=0.5\linewidth]{fig/initialSwarm20-miles.pdf}\hfill
\includegraphics[width=0.5\linewidth]{fig/intermediateSwarm20-miles.pdf}\\[1.5ex]
\includegraphics[width=0.5\linewidth]{fig/finalSwarm20-miles.pdf}
\caption{Swarm evolution during PSO, Example 3}
\label{fig:miles-pso-evolution}}
\end{figure}
% --------------------------------------------------------------------------
% --------------------------------------------------------------------------
\begin{figure}[!ht]
\centering{
\includegraphics[width=10cm]{fig/nrmseConv20-miles.pdf}
\caption{Validation NRMSE convergence, Example 3}
\label{fig:miles-nrmse}}
\end{figure}
% --------------------------------------------------------------------------

Table \ref{tab:ex-miles-pred} shows the real values and those
predicted by the \gls{svm} which provided the smallest test
\gls{nrmse}. Table \ref{tab:ex-miles-alphas}, in turn, lists the
support vectors, the associated Lagrange multipliers and their
classification as free or bounded. Of the 80 training points, 68 were
selected as support vectors and only 5 of them were free. This high
number of support vectors can be justified by the small $\varepsilon$
(near 0.2\% of the mean of the training outputs), which results in a
thin $\varepsilon$-tube, and also by the complex behavior of the time
series, as can be visualized in Figure \ref{fig:miles-result}.
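The free/bounded classification reported in the support vector tables
follows directly from the Lagrange multipliers: a support vector is
bounded when its nonzero multiplier sits at the box constraint $C$
(the point lies outside the $\varepsilon$-tube), and free when the
multiplier lies strictly between 0 and $C$ (the point lies exactly on
the tube boundary). A small illustrative sketch:

```python
def classify_support_vectors(alphas, alpha_stars, C, tol=1e-6):
    # For each support vector only one of (alpha, alpha*) is nonzero;
    # a multiplier at C marks a bounded support vector, anything
    # strictly between 0 and C marks a free one.
    labels = []
    for a, a_star in zip(alphas, alpha_stars):
        m = max(a, a_star)
        if m > tol:
            labels.append("bounded" if abs(m - C) <= tol else "free")
    return labels
```

Applied to the six support vectors of Example 2.2 (with $C =
1936.7744$), this reproduces the free/bounded labels of Table
\ref{tab:ex-turbo-alphas}.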

% --------------------------------------------------------------------------
\begin{table}[!ht]
\begin{center}
\begin{footnotesize}
  \caption{Real MTF and predictions by SVR ($\times$ 1000
    miles), Example 3}
\label{tab:ex-miles-pred}

\vspace{0.2cm}

\begin{tabular}{llllll} 
  \toprule
  \multicolumn{3}{c}{\textbf{Validation}} & \multicolumn{3}{c}{\textbf{Test}} \\\midrule
  \textbf{Number} & \textbf{\gls{mtf}} & \textbf{Predicted \gls{mtf}}  &
\textbf{Number} & \textbf{\gls{mtf}} & \textbf{Predicted \gls{mtf}}\\\midrule
  82 & 37.6190 &37.5078 & 91  & 36.8570 &38.3673\\
  83 & 37.6190 &37.8891 & 92  & 37.0480 &37.4383\\
  84 & 38.0960 &37.8891 & 93  & 37.9050 &37.4813\\
  85 & 39.0470 &38.4474 & 94  & 38.1900 &38.2621\\
  86 & 36.2860 &36.9183 & 95  & 39.5240 &38.4914\\
  87 & 37.1430 &37.0125 & 96  & 35.4290 &35.7357\\ 
  88 & 37.5240 &37.5082 & 97  & 36.0000 &36.3744\\
  89 & 37.8090 &37.7766 & 98  & 37.7140 &36.6496\\
  90 & 38.0000 &38.1398 & 99  & 38.0950 &38.0127\\
     &         &        & 100 & 38.5714 &38.4468\\\midrule
  \multicolumn{2}{l}{\gls{nrmse}} & $8.5074 \cdot 10^{-3}$ & \multicolumn{2}{l}{\gls{nrmse}} & $1.8969 \cdot 10^{-2}$\\ 
  \multicolumn{2}{l}{\gls{mape} (\%)} & $6.3127 \cdot 10^{-1}$& \multicolumn{2}{l}{\gls{mape} (\%)} & 1.4338 \\ 
  \multicolumn{2}{l}{\gls{mse}} & $1.0281 \cdot 10^{-1}$ & \multicolumn{2}{l}{\gls{mse}} & $5.0739 \cdot 10^{-1}$\\ 
  \bottomrule
\end{tabular}
\end{footnotesize}
\end{center}
\end{table}
% --------------------------------------------------------------------------

% --------------------------------------------------------------------------
\begin{table}[!ht]
\begin{center}
\begin{footnotesize}
\caption{Support vectors' details, Example 3}
\label{tab:ex-miles-alphas}

\vspace{0.2cm}

\begin{tabular}{llllllll} 
  \toprule 
  $(x,y)$    & $\alpha$  & $\alpha^*$ & \textbf{Type} & $(x,y)$    & $\alpha$  & $\alpha^*$ & \textbf{Type}\\\midrule
  (37.1429, 37.4286) & 0 & 7.3029  & free    & (37.1420, 37.2390) & 8.9927 & 0  & free   \\
  (37.4286, 37.6190) & 18.2629 & 0 & bounded & (37.6190, 38.1900) & 0 & 18.2629 & bounded\\
  (37.6190, 38.5714) & 18.2629 & 0 & bounded & (38.1900, 38.5710) & 18.2629 & 0 & bounded\\
  (38.5714, 40.0000) & 0 & 15.1115 & free    & (38.5710, 36.0960) & 0 & 18.2629 & bounded\\
  (40.0000, 35.8095) & 0 & 18.2629 & bounded & (36.0960, 37.2380) & 0 & 18.2629 & bounded\\
  (35.8095, 36.2857) & 0 & 18.2629 & bounded & (37.2380, 37.4280) & 18.2629 & 0 & bounded\\
  (36.2857, 36.2857) & 0 & 18.2629 & bounded & (37.4280, 37.5240) & 18.2629 & 0 & bounded\\
  (36.2857, 36.4762) & 18.2629 & 0 & bounded & (39.0480, 37.1430) & 18.2629 & 0 & bounded\\
  (36.4762, 38.1905) & 0 & 18.2629 & bounded & (37.1430, 37.8090) & 18.2629 & 0 & bounded\\
  (38.1905, 36.1905) & 18.2629 & 0 & bounded & (38.0950, 38.6670) & 18.2629 & 0 & bounded\\
  (36.8571, 37.6190) & 0 & 18.2629 & bounded & (38.6670, 40.0620) & 18.2629 & 0 & bounded\\
  (37.6190, 37.8095) & 18.2629 & 0 & bounded & (40.0620, 36.1900) & 0 & 18.2629 & bounded\\
  (37.8095, 38.7619) & 0 & 18.2629 & bounded & (36.1900, 36.3810) & 0 & 18.2629 & bounded\\
  (38.7619, 35.9048) & 0 & 18.2629 & bounded & (36.3810, 37.0480) & 18.2629 & 0 & bounded\\
  (36.4762, 36.8571) & 0 & 18.2629 & bounded & (37.0480, 37.2380) & 0 & 18.2629 & bounded\\
  (36.8571, 37.1429) & 0 & 18.2629 & bounded & (37.2380, 38.0000) & 1.0729  & 0 & free   \\
  (37.6190, 38.3810) & 18.2629 & 0 & bounded & (38.0000, 35.7140) & 18.2629 & 0 & bounded\\
  (38.3810, 38.5714) & 18.2629 & 0 & bounded & (35.7140, 36.4770) & 18.2629 & 0 & bounded\\
  (38.5714, 39.4286) & 18.2629 & 0 & bounded & (36.4770, 37.3330) & 0 & 18.2629 & bounded\\
  (39.4286, 35.8095) & 0 & 18.2629 & bounded & (37.6190, 38.4760) & 0 & 18.2629 & bounded\\
  (35.8095, 36.9524) & 18.2629 & 0 & bounded & (38.4760, 36.8570) & 18.2629 & 0 & bounded\\
  (36.9524, 37.6190) & 18.2629 & 0 & bounded & (36.8570, 37.1430) & 0 & 18.2629 & bounded\\
  (37.6190, 37.8095) & 0 & 18.2629 & bounded & (37.1430, 37.9050) & 18.2629 & 0 & bounded\\
  (38.0952, 36.8571) & 0 & 18.2629 & bounded & (37.9050, 38.0950) & 0 & 18.2629 & bounded\\
  (36.8571, 38.0960) & 18.2629 & 0 & bounded & (38.8570, 37.1430) & 18.2629 & 0 & bounded\\
  (38.0960, 38.0950) & 0 & 18.2629 & bounded & (37.1430, 37.6190) & 0 & 18.2629 & bounded\\  
  (38.0950, 38.3810) & 18.2629 & 0 & bounded & (37.6190, 37.6190) & 0 & 5.9140  & free   \\
  (38.3810, 39.0470) & 18.2629 & 0 & bounded & (37.6190, 37.8100) & 18.2629 & 0 & bounded\\
  (39.0470, 37.2390) & 0 & 18.2629 & bounded & (37.8100, 38.3810) & 0 & 18.2629 & bounded\\
  (37.2390, 37.3330) & 0 & 18.2629 & bounded & (38.3810, 36.3810) & 18.2629 & 0 & bounded\\
  (37.3330, 37.5240) & 18.2629 & 0 & bounded & (38.0000, 38.1900) & 0 & 18.2629 & bounded\\
  (37.5240, 37.8090) & 0 & 18.2629 & bounded & (38.1900, 38.6670) & 18.2629 & 0 & bounded\\
  (37.8090, 38.5720) & 0 & 18.2629 & bounded & (38.6670, 38.6670) & 18.2629 & 0 & bounded\\
  (38.5720, 37.1420) & 18.2629 & 0 & bounded & (38.6670, 37.1420) & 0 & 18.2629 & bounded\\
  \bottomrule
\end{tabular}
\end{footnotesize}
\end{center}
\end{table}
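The ``free'' and ``bounded'' labels in Table \ref{tab:ex-miles-alphas} follow directly from the $\varepsilon$-SVR box constraints $0 \le \alpha, \alpha^* \le C$, as the following sketch shows (the tolerance value is an arbitrary choice):

```python
def sv_type(alpha, alpha_star, C, tol=1e-6):
    """Classify a training point from its epsilon-SVR multipliers.

    At the optimum alpha * alpha_star = 0, so at most one multiplier
    is nonzero; the point is a bounded support vector when that
    multiplier sits at the box constraint C, and free otherwise.
    """
    a = max(alpha, alpha_star)
    if a < tol:
        return "not a support vector"
    return "bounded" if abs(a - C) < tol else "free"

# Values from Table (tab:ex-miles-alphas), where C = 18.2629:
print(sv_type(0.0, 7.3029, 18.2629))   # free
print(sv_type(18.2629, 0.0, 18.2629))  # bounded
```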
% --------------------------------------------------------------------------
 
% --------------------------------------------------------------------------
\begin{figure}[!ht]
\centering{
\includegraphics[width=10cm]{fig/supVecResults20-miles.pdf}
\caption{SVR results, Example 3}
\label{fig:miles-result}}
\end{figure}
% --------------------------------------------------------------------------

\citeonline{zio2008} applied an \gls{iir} to solve this example
and updated the results presented in \citeonline{xu2003}. These
results are repeated in Table \ref{tab:ex-miles-difftools} along with
the best test \gls{nrmse} provided by the \gls{pso}+\gls{svm}
methodology. The latter value ranks third. One possible reason is the
smaller number of training points, due to the adopted validation
phase. In order to investigate this issue, 30 runs of
\gls{pso}+\gls{svm} with the same parameters but without a validation
set were executed. In this way, the number of training points
increased to 89 and the number of test entries remained the same
(10). The best test \gls{nrmse} was $1.2536 \cdot 10^{-2}$, with
associated parameters quite different from the ones obtained with the
validation and test procedure: $C = 1144.3099$, $\varepsilon = 5.7140
\cdot 10^{-3}$, $\gamma = 7.8020$. In addition, 37 of the 89 training
points became support vectors, 20 of which were free. Such a test
\gls{nrmse} would shift the \gls{pso}+\gls{svm} approach to the second
position, losing only to the \gls{mlpnn} (Gaussian activation), and by
a small amount, on the order of $10^{-4}$.

% --------------------------------------------------------------------------
\begin{table}[!ht]
\begin{center}
\begin{footnotesize}
  \caption{Test NRMSE from different forecast models, Example
    3. {\footnotesize{Updated from \citeonline{xu2003},
        p. 264, \citeonline{zio2008}}}}
\label{tab:ex-miles-difftools}

\vspace{0.2cm}

\begin{tabular}{ll} 
  \toprule 
  \textbf{Method} & \textbf{Test \gls{nrmse}}\\\midrule
  \gls{pso}+\gls{svm} & $1.8969 \cdot 10^{-2}{}^*; \,\, 1.2536 \cdot
10^{-2}{}^{**}$\\
  \gls{rbfnn} (Gaussian activation) &  $2.1100 \cdot 10^{-2}$\\
  \gls{mlpnn} (Gaussian activation) &  $1.2200 \cdot 10^{-2}$\\
  \gls{mlpnn} (logistic activation) &  $1.5600 \cdot 10^{-2}$\\
  \gls{iir}         &  $1.5800 \cdot 10^{-2}$\\
  \gls{arima}       &  $4.2200 \cdot 10^{-2}$\\\bottomrule
\end{tabular}
\\[0.5ex]\hspace{-5cm}{\scriptsize{$^*$ With validation; $^{**}$ Without
validation}}
\end{footnotesize}
\end{center}
\end{table}
% --------------------------------------------------------------------------

\section{Example 4: Time Between Failures of Oil Production Wells
}\label{sec:ex4}

In this example, differently from the previous ones, a regression
problem is solved involving real data related to numerical and
categorical features of the system under analysis.

The systems of interest are oil production wells. The reliability
metric considered is the \gls{tbf}, which is believed to be influenced
by specific features of the wells. The failure of a well represents
the interruption of oil production and, as a consequence, economic
losses. In this way, the prediction of the \gls{tbf} of these systems
may permit preventive actions so as to reduce or even avoid the
effects of the next failure.

This example is based on a database that was presented by
\citeonline{pf2006}. It contains records of \gls{tbf}, \gls{ttr} and
related factors of different onshore wells from 1983 to 2006. The
author makes a comprehensive analysis of the variables of the database
and proposes the use of \gls{bn}s integrated with Markov chains to
estimate the availability of oil wells. The database incorporates the
concept of {\it socket} \cite{ascher1984}, which, loosely speaking,
means that the records are associated with the positions where
equipment is installed in the wells rather than with the equipment
itself. For example, the behaviors of pumps consecutively installed in
a specific place of a well are expected to be approximately the same,
since the environmental and operational conditions to which they are
subjected have not changed.

According to \citeonline{pf2006}, in the considered context, it has
been observed that the most critical components of an oil production
well are the pump, the rods and the columns. This equipment is related
to the artificial elevation of oil to the surface. The two artificial
elevation methods considered are the mechanical one and the one via
progressive cavities. For both methods, the columns have the same role
of permitting the passage of the rods and of isolating the well
boundaries. In mechanical elevation, the rotating energy of an engine
is transformed into an alternating motion that is transmitted to the
rods, and the pump is responsible for transmitting the energy to the
fluid, which is brought to the surface. In elevation by progressive
cavities, in turn, the rotating energy of an engine on the surface is
transmitted to the rods, which also rotate. The rotating rods transmit
energy to the pump, which is within the well and whose components'
disposition permits the passage of the oil.

In this example, the wells' failures due to failures of their
installed rods are considered. The elevation method, the kind of
installed filter and the concentration of water and solids within the
well are factors that influence the rods' performance. These factors,
along with the number of installed rods, are the variables considered
to predict the wells' \gls{tbf}. Hence, only a subset of the entire
database is used. Despite the great number of entries in this subset
(more than 10,000), there are many empty cells or cases that present
inconsistent information. Also, the database involves essentially
non-homogeneous data, given that the records concern various wells
located in different places and consequently subject to diverse
environmental factors.

As an attempt to reduce the effects of the data non-homogeneity, a
specific group of wells was selected, located essentially in the same
geographical area and with similar characteristics. The cases that
presented any empty cell associated with a variable of interest were
eliminated. After pre-processing the database, a data set with 214
examples was obtained and divided into a training set with the first
170 points, a validation set with the following 20 entries and a test
set formed by the last 24 examples.

The descriptions of the input variables selected from the database are
presented in Table \ref{tab:ex4-var}, along with the characteristics
of the \gls{tbf} itself. Each input variable reflects a specific
feature of the wells taken into account: the percentage of water and
solids within the wells; the numbers of installed rods of different
diameters ($3/4$, $5/8$, $7/8$ and $1$, in inches); the absence (N) or
presence of a filter (if present, its type C, S or F is recorded, and
the related quality increases from C to F); and the way the oil is
pumped upwards (\gls{pcp} or \gls{mp}).

% --------------------------------------------------------------------------
\begin{table}[!ht]
\begin{center}
\begin{footnotesize}
\caption{Selected variables that influence the TBF}
\label{tab:ex4-var}

\vspace{0.2cm}

\begin{tabular}{llll} 
  \toprule 
  {\bf{Name}} & {\bf{Variable}} & {\bf{Type}}   & \textbf{Range/Categories} \\\midrule
  {$x_1$} & Percentage of water and solids in the well & Numerical & $[0,98.3]$ \\
  {$x_2$} & Number of $3/4$'' rods installed & Numerical & $[0,104]$ \\
  {$x_3$} & Number of $5/8$'' rods installed & Numerical & $[0,101]$ \\
  {$x_4$} & Number of $7/8$'' rods installed & Numerical & $[0,96]$ \\
  {$x_5$} & Number of $1$'' rods installed & Numerical & $[0,95]$ \\
  {$x_6$} & Type of installed filter & Categorical & N$^*$, C, S, F\\
  {$x_7$} & Type of elevation method & Categorical & \gls{pcp}, \gls{mp}\\\midrule
  {$y$}   & Time Between Failures (TBF, in days) & Numerical & $[2, 3469]$\\\bottomrule
\end{tabular}
\end{footnotesize}
\end{center}
\vspace{-0.3cm}
\hspace{1.75cm}{\scriptsize{$^*$No filter installed}}
\end{table}
% --------------------------------------------------------------------------


Notice that $x_6$ and $x_7$ are categorical variables. The former has
an ordinal scale, that is, the associated categories have an
underlying order but cannot be quantified. The latter, in turn, has a
nominal scale, and its categories only denote the type of elevation
method, which forbids any ordering or arithmetical operations.
Nevertheless, the \gls{svm} training problem only accepts numerical
values, thus the categorical variables have to be treated before being
used by the \gls{pso}+\gls{svm} algorithm. Traditional statistical
regression methods often handle categorical variables by transforming
them into indicator or dummy variables \cite{montgomery2001}. That is,
if a variable has two associated categories, for example the type of
pump used, the indicator variable $x^{ind}$ is either 0, to denote
that a \gls{pcp} is used, or 1, to indicate that a \gls{mp} is
installed. In general, if a categorical variable has $r$ related
categories, then $r-1$ indicator variables are necessary. For
\gls{svm}, \citeonline{hsu2009} also recommend the use of indicator
variables to handle categorical variables. The transformations of the
categorical variables $x_6$ and $x_7$ are shown in Table
\ref{tab:ex4-catvar}.

% --------------------------------------------------------------------------
\begin{table}[!ht]
\begin{center}
\begin{footnotesize}
  \caption{Transformation of categorical variables $x_6$ and $x_7$
    into indicator variables}
\label{tab:ex4-catvar}

\vspace{0.2cm}

\begin{tabular}{lllllll} 
  \toprule 
  $x_6$ & $x^{ind}_{6,1}$ & $x^{ind}_{6,2}$ & $x^{ind}_{6,3}$ & & $x_7$ & $x^{ind}_{7}$\\\midrule
  N  & 0 & 0 & 0 & & \gls{pcp} & 0 \\
  C  & 0 & 0 & 1 & & \gls{mp}  & 1\\
  S  & 0 & 1 & 0 & & & \\
  F  & 1 & 0 & 0 & & & \\\bottomrule
\end{tabular}
\end{footnotesize}
\end{center}
\end{table}
% --------------------------------------------------------------------------
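The mapping of Table \ref{tab:ex4-catvar} amounts to a simple lookup, as the following sketch illustrates (the function and dictionary names are illustrative, not part of the methodology):

```python
# Indicator codes taken from Table (tab:ex4-catvar):
# x6 (filter) uses r - 1 = 3 indicator variables, x7 (elevation) uses 1.
FILTER_CODE = {"N": (0, 0, 0), "C": (0, 0, 1), "S": (0, 1, 0), "F": (1, 0, 0)}
METHOD_CODE = {"PCP": (0,), "MP": (1,)}

def encode_categorical(filter_type, elevation_method):
    """Concatenate the indicator variables for x6 and x7."""
    return FILTER_CODE[filter_type] + METHOD_CODE[elevation_method]

print(encode_categorical("S", "MP"))  # (0, 1, 0, 1)
```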

It can be observed from Table \ref{tab:ex4-var} that the \gls{tbf}
interval is quite different from the ranges of the numerical input
variables. In this way, in order to obtain better results, it is
necessary to scale the data. The usual scaling range is $[0,1]$;
however, a 0 value for the output $y$ gives rise to a division by 0 in
the computation of \gls{mape}. Thus, instead of $[0,1]$, the data are
scaled within $[1,2]$. In addition, each variable is scaled by using
its own minimum and maximum training values (scaling factors), which
results in 8 different scales, 7 for the inputs and 1 for the
\gls{tbf}. In other words, each dimension of the input vector
$\mathbf{x}$ as well as the output variable $y$ is scaled on its
own. Also, as the validation and test sets play the role of unseen
data, they are scaled using the scaling factors from the training
set. The formula is as follows:

% --------------------------------------------------------------------------
\begin{equation}\label{scale}
  \text{\small{Scaled} } x_{ij} = \frac{(x_{ij} - x_{min,j})}{(x_{max,j} - x_{min,j})}\cdot(up - low) + low
\end{equation}
% --------------------------------------------------------------------------
where $i$ is the example index, $j$ is the dimension of $\mathbf{x}_i$
under consideration, $low$ and $up$ are the boundaries of the scale
interval, $x_{min,j}$ and $x_{max,j}$ are respectively the minimum and
maximum training values of the $j^{\text{th}}$ dimension of
$\mathbf{x}_i$ for $i = 1, 2, \dots, \ell$. When validation and test
points are considered ($i > \ell$), the same scaling factors
$x_{min,j}$ and $x_{max,j}$ are used. Moreover, for the present
example, $low = 1$ and $up = 2$. Following the same reasoning from
Equation \eqref{scale}, one obtains a scaling expression for the
output variable $y$:

% --------------------------------------------------------------------------
\begin{equation}\label{scaley}
  \text{\small{Scaled} } y_i = \frac{(y_{i} - y_{min})}{(y_{max} - y_{min})}\cdot(up - low) + low
\end{equation}
% --------------------------------------------------------------------------
in which $y_{min}$ and $y_{max}$ are the minimum and maximum training
values of $y$, that is, $y_{min} = \min \{y_1, y_2, \dots, y_{\ell}\}$
and $y_{max} = \max \{y_1, y_2, \dots, y_{\ell}\}$. Again, when
validation and test values of $y$ are considered ($i > \ell$), the
same $y_{min}$ and $y_{max}$ are used.
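The scaling procedure of Equations \eqref{scale} and \eqref{scaley} can be sketched as follows; the key point is that the scaling factors come exclusively from the training set and are then reused for validation and test data (the function name is illustrative):

```python
def fit_scaler(train, low=1.0, up=2.0):
    """Return a scaling function whose factors (per-column minima and
    maxima) are computed from the training set only."""
    cols = list(zip(*train))
    mins = [min(c) for c in cols]
    maxs = [max(c) for c in cols]
    def scale(row):
        return [(x - mn) / (mx - mn) * (up - low) + low
                for x, mn, mx in zip(row, mins, maxs)]
    return scale

train = [[0.0, 10.0], [50.0, 20.0], [100.0, 30.0]]
scale = fit_scaler(train)
print(scale([25.0, 40.0]))  # [1.25, 2.5] -- unseen data may exceed [1, 2]
```

Note that, as the example output shows, an unseen value outside the training range is mapped outside $[low, up]$, which is the expected behavior when validation and test points exceed the training extremes.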

\citeonline{montgomery2001} assert that, although indicator variables
with 0-1 values are usually the best choice, any two distinct values
({\it e.g.} 1 and 2) for an indicator variable would be
satisfactory. Hence, in order to follow the same scale as the
numerical variables, the categorical ones are transformed into 1-2
indicator variables. In Table \ref{tab:ex4-catvar}, the 0 and 1 values
are then substituted by 1 and 2, respectively.

After applying the necessary transformations and scales, the proposed
\gls{pso}+\gls{svm} methodology can be used. The \gls{pso} parameters
as well as the bounds for $C$ and $\gamma$ were the same as the ones
adopted in the previous time series based examples. However,
$\varepsilon \in [1.1106 \cdot 10^{-3}, 1.6659 \cdot 10^{-1}]$.

Descriptive statistics of the 30 \gls{pso}+\gls{svm} runs are
presented in Table \ref{tab:desc-real}. All runs attained the stop
criterion related to equal best fitness values for 600 consecutive
iterations. The parameter values related to the ``machine'' that
provided the smallest test \gls{nrmse} are $C = 4.4422$, $\varepsilon
= 1.1774 \cdot 10^{-2}$ and $\gamma = 2.1226 \cdot 10^{-1}$. The related errors
are listed in Table \ref{tab:ex4-errors}. 
% --------------------------------------------------------------------------
\begin{table}[!ht]
\begin{center}
\begin{footnotesize}

  \caption{Descriptive statistics of parameters and error functions,
    stop criteria frequency and performance for 30 PSO+SVM runs, {\it
      lbest}, Example 4}
\label{tab:desc-real}

\vspace{0.2cm}

\begin{tabular}{p{2cm}llllll} 
  \toprule
  & & \textbf{Minimum} & \textbf{Maximum} & \textbf{Median}  & \textbf{Mean} & \textbf{Std. dev.}$^*$\\\cmidrule{2-7}
%  \multirow{3}{*}{\rotatebox{90}{\textbf{Parameters}}} & $C$ & 1.382 & 16 & 17.632 & 30 & 21.061&44\\
  \multirow{3}{*}{\textbf{Parameters}}& $C$& 2.6099 & 11.5072  & 2.8707  & 3.7511  & 2.6397\\
  & $\varepsilon$ & $1.1774 \cdot 10^{-2}$   & $2.1867 \cdot 10^{-2}$ & $1.5028 \cdot 10^{-2}$  &  $1.5799 \cdot 10^{-2}$   & $2.1879 \cdot 10^{-3}$  \\
  & $\gamma$ & $1.1226 \cdot 10^{-1}$  & $7.5687 \cdot 10^{-1}$  & $6.7570 \cdot 10^{-1}$  & $6.5571 \cdot 10^{-1}$ & $1.0901 \cdot 10^{-1}$\\[1.5ex]
  \multirow{2}{*}{\textbf{Validation}} & \gls{nrmse}  & $3.1551 \cdot 10^{-2}$ & $3.1852 \cdot 10^{-2}$ & $3.1561 \cdot 10^{-2}$ & $3.1595 \cdot 10^{-2}$ & $9.3203 \cdot 10^{-5}$\\
  \multirow{2}{*}{\textbf{error}}& \gls{mape} (\%) & 2.2365 & 2.4069 & 2.2412 & 2.2602 & $5.0494 \cdot 10^{-2}$\\
  & \gls{mse} & $1.0697 \cdot 10^{-3}$ & $1.0903 \cdot 10^{-3}$ & $1.0704 \cdot 10^{-3}$  & $1.0728 \cdot 10^{-3}$ & $6.3498 \cdot 10^{-6}$\\[1.5ex]
  \multirow{3}{*}{\textbf{Test error}} & \gls{nrmse} & $4.2354 \cdot 10^{-2}$  & $4.8617 \cdot 10^{-2}$ & $4.7869 \cdot 10^{-2}$ & $4.7234 \cdot 10^{-2}$ & $1.8450 \cdot 10^{-3}$\\
  & \gls{mape} (\%) & 3.4024 & 4.1029 & 3.9496  & 3.9067 & $1.7208 \cdot 10^{-1}$\\
  & \gls{mse} & $1.9059 \cdot 10^{-3}$ & $2.5112 \cdot 10^{-3}$ &$2.4346 \cdot 10^{-3}$ & $2.3739 \cdot 10^{-3}$ & $1.7768 \cdot 10^{-4}$\\\midrule  
  & & & & \multicolumn{3}{l}{\textbf{Absolute (relative, \%) frequency}} \\\cmidrule{2-7}
  \multirow{3}{*}{\textbf{Stop criteria}}& \multicolumn{3}{l}{Maximum number of iterations (6000)} & 0 (0) & & \\
  & \multicolumn{3}{l}{Equal best fitness for 600 iterations} & 30 (100) & & \\
  & \multicolumn{3}{l}{Tolerance $\delta = 1 \cdot 10^{-12}$} & 0 (0) & & \\ \midrule
  & & & & \multicolumn{3}{l}{\textbf{Metric value}} \\\cmidrule{2-7}
  \multirow{3}{*}{\textbf{Performance}} & \multicolumn{3}{l}{Mean time per run (minutes)} & 14.6685 & & \\
  & \multicolumn{3}{l}{Mean number of trainings} & 67929.3000  & & \\
  & \multicolumn{3}{l}{Mean number of predictions} & 67929.3000  & & \\\bottomrule
\end{tabular}
\end{footnotesize}
\end{center}
\vspace{-0.3cm}
{\scriptsize{$^*$Standard deviation}}
\end{table}
% --------------------------------------------------------------------------

% --------------------------------------------------------------------------
\begin{table}[!ht]
\begin{center}
\begin{footnotesize}
  \caption{Validation and test errors from the ``machine'' with the smallest
test NRMSE, Example 4}
\label{tab:ex4-errors}

\vspace{0.2cm}

\begin{tabular}{lllllll} 
  \toprule 
  \textbf{Error function} & \textbf{Validation} & \textbf{Test}\\\midrule
  \gls{nrmse}             & $3.1747 \cdot 10^{-2}$ & $4.2354 \cdot 10^{-2}$ \\
  \gls{mape} (\%)         & 2.4069 & 3.4024\\
  \gls{mse}               & $1.0831 \cdot 10^{-3}$ & $1.9059 \cdot 10^{-3}$
\\\bottomrule
\end{tabular}
\end{footnotesize}
\end{center}
\end{table}
% --------------------------------------------------------------------------

In Figure \ref{fig:ex4-pso-evolution}, the evolution of the particle
swarm can be visualized in three different moments. Notice that in
iteration 50 there were infeasible particles, but at the
5276$^{\text{th}}$ and last iteration all particles were within the
feasible search space. The validation \gls{nrmse} convergence is
shown in Figure \ref{fig:ex4-nrmse}.

% --------------------------------------------------------------------------
\begin{figure}[!ht]
\centering{
\includegraphics[width=0.5\linewidth]{fig/initialSwarm20-ex4.pdf}\hfill
\includegraphics[width=0.5\linewidth]{fig/intermediateSwarm20-ex4.pdf}\\[1.5ex]
\includegraphics[width=0.5\linewidth]{fig/finalSwarm20-ex4.pdf}
\caption{Swarm evolution during PSO, Example 4}
\label{fig:ex4-pso-evolution}}
\end{figure}
% --------------------------------------------------------------------------
% --------------------------------------------------------------------------
\begin{figure}[!ht]
\centering{
\includegraphics[width=10cm]{fig/nrmseConv20-ex4.pdf}
\caption{Validation NRMSE convergence, Example 4}
\label{fig:ex4-nrmse}}
\end{figure}
% --------------------------------------------------------------------------

For this example, the \gls{svr} results are presented in separate
pictures, given that a single graph would be very dense due to the
number of data entries involved, which would render its analysis
difficult. In this way, Figure \ref{fig:ex4-trainResult} presents the
\gls{svr} training results, whilst Figure \ref{fig:ex4-result} depicts
the \gls{svr} validation and test outcomes. From the former figure, it
can be observed that the majority of the training examples became
support vectors. Indeed, of the 170 training points, 141 were selected
as support vectors (120 bounded and 21 free). From the latter figure,
notice that despite the low number of precise predictions, the machine
attempts to catch the trend of the validation and test
data. Additionally, both figures have the scaled output as the
vertical axis and the example number as the horizontal one, since the
input vectors are multi-dimensional and it is not possible to draw a
graph involving all input variables together with the output \gls{tbf}
values.

% --------------------------------------------------------------------------
\begin{figure}[!ht]
\centering{
\includegraphics[width=10cm]{fig/supVecResultsTrain20-ex4.pdf}
\caption{SVR training results, Example 4}
\label{fig:ex4-trainResult}}
\end{figure}
% --------------------------------------------------------------------------

% --------------------------------------------------------------------------
\begin{figure}[!ht]
\centering{
\includegraphics[width=10cm]{fig/supVecResults20-ex4.pdf}
\caption{SVR validation and test results, Example 4}
\label{fig:ex4-result}}
\end{figure}
% --------------------------------------------------------------------------

\section{Discussion}

In all presented examples, the validation \gls{nrmse} dropped to
values very near the best one found in early stages of the \gls{pso}
algorithm, which suggests the ability of \gls{pso} to find good
solutions even within a small number of iterations. Additionally, it
can be inferred that the \gls{svm}, with an appropriate set of
parameters, is able to provide excellent results for reliability
prediction based on time series, better than or comparable to
competing models such as \gls{nn}s and \gls{arima}.

Considering Example 4, the quality of the obtained results is
certainly related to the quality of the data set used. Firstly, even
considering wells from the same geographical area, the original
database subset presented essentially non-homogeneous records and many
empty cells or cases with contradictory information. Moreover, the use
of categorical variables influences the \gls{svm} performance, since
its training step involves a quadratic programming problem, which
treats all variables as if they were numerical. Hence, the use of
categorical variables is indicated for the cases in which there is no
quantitative manner to measure the factor of interest. For example,
transforming a variable that is naturally numerical into a categorical
one is usually not recommended.

Even with these shortcomings, the \gls{pso}+\gls{svm} was able to
provide small test \gls{nrmse} values in Example 4. Its performance
would certainly be enhanced if the data set originated from an
error-free database. Additionally, other manners of handling
categorical variables should be analyzed.

In the majority of the examples, the descriptive statistics showed a
high variance for the parameter representing the trade-off between
training error and machine capacity ($C$). This fact indicates the
difficulty of tuning this parameter and also of using techniques that
assign only discrete values to it, such as grid search. If the
inherent trade-off of \gls{svm} could be explicitly treated, $C$ could
be omitted, leaving only the other two parameters, $\varepsilon$ and
$\gamma$, to adjust. Indeed, \citeonline{mierswa2007} proposes a
multi-objective approach to \gls{svm}, in which the minimization of
training errors is one objective and the margin maximization is the
other. In this situation, $C$ is no longer needed.

\subsection{Performance Comparison Between {\it lbest} and {\it gbest}
  Models}\label{sec:comparacao}

All examples were also solved with the {\it gbest} model, under the
same conditions as {\it lbest}, {\it i.e.}, using the same \gls{pso}
parameters and the same \gls{svm} data set division among training,
validation and test points. The test \gls{nrmse} is the metric of
greatest interest because it provides an idea of the generalization
ability of the ``machine'' under consideration. By taking the 30 test
\gls{nrmse} values resulting from each \gls{pso} model as independent
samples, a Wilcoxon-Mann-Whitney test \cite{wackerly2002} can be
performed for each example so as to assess the chance of obtaining
greater test \gls{nrmse} values with the {\it gbest} model than with
the {\it lbest} approach. In Table \ref{tab:lbestXgbest}, the
$p$-value of the one-sided test is presented for every case. Notice
that the two examples concerning the reliability and failure times of
turbochargers as well as Example 4 yielded a non-significant $p$-value
at the 5\% level of significance.
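The one-sided test described above can be sketched with the normal approximation to the Mann-Whitney $U$ statistic. This is a simplification for illustration (no tie correction), adequate for samples of size 30; in practice a library routine such as SciPy's `mannwhitneyu` would be used:

```python
import math

def mw_p_greater(x, y):
    """One-sided Wilcoxon-Mann-Whitney p-value for the alternative
    'x tends to be greater than y', via the normal approximation
    (no tie correction)."""
    n, m = len(x), len(y)
    # U counts the pairs (xi, yj) with xi > yj (ties count one half).
    u = sum((xi > yj) + 0.5 * (xi == yj) for xi in x for yj in y)
    mean = n * m / 2.0
    sd = math.sqrt(n * m * (n + m + 1) / 12.0)
    z = (u - mean) / sd
    return 0.5 * math.erfc(z / math.sqrt(2.0))  # upper-tail probability

# gbest NRMSE sample clearly above the lbest one -> small p-value
print(mw_p_greater([5.0, 6.0, 7.0], [1.0, 2.0, 3.0]) < 0.05)  # True
```

Here `x` would hold the 30 {\it gbest} test NRMSE values and `y` the 30 {\it lbest} ones; a small $p$-value indicates that {\it gbest} tends to yield greater (worse) test NRMSE.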

% --------------------------------------------------------------------------
\begin{table}[!ht]
\begin{center}
\begin{footnotesize}
  \caption{Mean test NRMSE values and Wilcoxon-Mann-Whitney test results {\it
lbest} $\times$ {\it gbest}}
\label{tab:lbestXgbest}

\vspace{0.2cm}

\begin{tabular}{llll} 
  \toprule 
  \multirow{2}{*}{\textbf{Example}} & \multicolumn{2}{l}{\textbf{Mean test
  NRMSE}} & \multirow{2}{*}{\textbf{$p$-value}}\\
  & {\it lbest} & {\it gbest} & \\\midrule
  1   & $2.0307 \cdot 10^{-2}$ & $3.3670 \cdot 10^{-2}$
&$9.3340 \cdot 10^{-5}$\\
  2.1 & $2.2999 \cdot 10^{-4}$ & $5.5485 \cdot 10^{-4}$ & $1.1490
\cdot 10^{-1}$ \\
  2.2 & $3.1674 \cdot 10^{-2}$ & $3.3395 \cdot 10^{-2}$ &$9.1130
\cdot 10^{-2}$\\
  3   & $1.9242 \cdot 10^{-2}$ & $2.0282 \cdot 10^{-2}$
&$2.0160 \cdot 10^{-3}$\\
  4   & $4.7234 \cdot 10^{-2}$ & $5.0871 \cdot 10^{-2}$
&$6.1820 \cdot 10^{-1}$\\\bottomrule
\end{tabular}
\end{footnotesize}
\end{center}
\end{table}
% --------------------------------------------------------------------------

An interesting point of the turbochargers' failure times (Example 2.2)
is that not all \gls{pso} runs could catch their increasing
trend. Only 20\% of the {\it gbest} runs were not able to predict
output values from the test set with the correct increasing trend,
against 26.67\% of the {\it lbest} runs. Nevertheless, the
Wilcoxon-Mann-Whitney test was non-significant, {\it i.e.}, at the 5\%
level of significance there is no evidence to affirm that the {\it
  lbest} model provides smaller test \gls{nrmse} values than the {\it
  gbest} does.

\citeonline{bratton2007} asserts that the {\it gbest} approach usually
presents a faster convergence compared with {\it lbest}. However, for
the presented examples, all {\it lbest} runs had a smaller mean time
per run in absolute terms (Table \ref{tab:times}). Also, apart from
Example 1, the mean number of \gls{svm} predictions (fitness
evaluations) was smaller for the {\it lbest} approach in all examples
(Table \ref{tab:predictions}). In order to statistically compare the
times required by each model and also the numbers of predictions,
Wilcoxon-Mann-Whitney tests may be performed for each case.

For the computational times, Table \ref{tab:times} presents the
$p$-values from the related statistical tests. Notice that, at a 10\%
level of significance, all results were statistically
significant. Table \ref{tab:predictions}, in turn, provides the
$p$-values resulting from the statistical tests associated with the
mean number of predictions. At the same 10\% level, only the test
concerning Example 1 was non-significant. Therefore, the {\it lbest}
model is prone to require less time as well as fewer fitness
evaluations than the {\it gbest} model does.

% --------------------------------------------------------------------------
\begin{table}[!ht]
\begin{center}
\begin{footnotesize}
  \caption{Mean time per run (minutes) and Wilcoxon-Mann-Whitney test results
{\it lbest} $\times$ {\it gbest}}
\label{tab:times}

\vspace{0.2cm}

\begin{tabular}{llll} 
  \toprule 
  \multirow{2}{*}{\textbf{Example}} & \multicolumn{2}{c}{\textbf{Mean time per run (minutes)}} & \multirow{2}{*}{\textbf{$p$-value}}\\
  & {\it lbest} & {\it gbest} & \\\midrule
  1   & 10.8342 & 13.2886 &  $6.3310 \cdot 10^{-2}$\\
  2.1 & 1.0066  & 1.3956  & $1.1710 \cdot 10^{-3}$\\
  2.2 & 10.5648 & 15.3496 & $6.9060 \cdot 10^{-2}$\\
  3   & 4.9337  & 9.4805  & $2.6620 \cdot 10^{-2}$\\
  4   & 14.6685 & 16.9978 & $8.4030 \cdot 10^{-2}$\\\bottomrule
\end{tabular}

\vspace{0.3cm}

 \caption{Mean number of predictions per run   and
Wilcoxon-Mann-Whitney test results {\it lbest} $\times$
    {\it gbest}}
\label{tab:predictions}

\vspace{0.2cm}

\begin{tabular}{llll} 
  \toprule 
  \multirow{2}{*}{\textbf{Example}} & \multicolumn{2}{c}{\textbf{Mean number of
predictions per run}} & \multirow{2}{*}{\textbf{$p$-value}}\\
      & {\it lbest} & {\it gbest} & \\\midrule
  1   & 116016.1667 & 106998.6333 & $6.8340 \cdot 10^{-1}$\\
  2.1 & 67989.7000  & 85499.2000  & $3.7350 \cdot 10^{-2}$\\
  2.2 & 68977.6667  & 82603.7000  & $8.8710 \cdot 10^{-2}$\\
  3   & 66563.8000  & 89619.9333  & $1.0790 \cdot 10^{-2}$\\
  4   & 67929.3000  & 89402.3000  & $1.8560 \cdot 10^{-2}$\\\bottomrule
\end{tabular}

%\begin{tabular}{lllllll} 
%  \toprule 
%  & & \multicolumn{4}{c}{\textbf{Example}} \\[1ex]
%  & & 1 & 2.1 & 2.2 & 3 & 4\\\midrule
%  \multirow{2}{*}{\textbf{Mean time per run (minutes)}} & {\it lbest} & 10.8342 &1.0066 & 10.5648 & 4.9337 & 14.6685\\
%  & {\it gbest}&  13.2886 & 1.3956 & 15.3496 & 9.4805 & 16.9978\\\midrule
%  & $p$-value & $6.3310 \cdot 10^{-2}$ & $1.1710 \cdot 10^{-3}$ & $6.9060 \cdot 10^{-2}$ & $2.6620 \cdot 10^{-2}$ & $8.4030 \cdot 10^{-2}$\\\bottomrule
%\end{tabular}
\end{footnotesize}
\end{center}
\end{table}
% --------------------------------------------------------------------------

\vspace{-0.15cm}
Thus, for the examples solved here, one can infer that the {\it
  lbest} approach tends to yield test \gls{nrmse} values that are
smaller than, or at least comparable to, those obtained with the {\it
  gbest} model. Moreover, for these examples, the {\it lbest}
\gls{pso} tends to converge faster than the {\it gbest} variant.






