\chapter{Introduction}\label{ch:introduction}

\section{Opening Remarks}
Reliability can be understood as the probability that a system
properly performs the tasks it was designed for, under stated
conditions, during a predefined time length \cite{rausand2004}. The
item of interest may be a specific component or an entire system. From
a complementary perspective, a lack of reliability means frequent
breakdowns and thus loss of productivity and increased costs, which
may be associated with maintenance actions, legal penalties and also
with the (bad) image of the organization in the eyes of its
customers. Reliability is also a critical issue in systems that entail
environmental and human risks, such as oil refineries and nuclear
power plants, given that their failures may result in catastrophic
events. Therefore, reliability is a key factor for production systems,
since it is directly related to the competitive performance of
organizations in the markets in which they operate.

A system's reliability varies over its lifetime, since it is
influenced by the environmental and load conditions under which the
system operates. It is therefore of great interest to track this
quantitative indicator of system performance in order to control it
by means of appropriate maintenance actions, so as to guarantee the
desired levels of production and safety. According to
\citeonline{zio2008}, reliability prediction modeling of an item may
be conducted during various phases of its life-cycle, including the
concept validation and definition, design, operation and maintenance
phases. At any stage, the reliability predictions obtained serve the
purpose of anticipating the evolution of the component's reliability
so as to allow for taking the proper actions for its maintenance and,
possibly, improvement. For systems design considering reliability and
cost metrics, see for example \citeonline{lins2008} and
\citeonline{lins2009}.

The evaluation of reliability behavior over time has been
accomplished by means of stochastic methods, which usually involve
simplifying assumptions so as to allow for analytical treatment. The
usual stochastic processes to model reliability evolution are the
\gls{rp} and the \gls{nhpp}. If an \gls{rp} is chosen, the times
between failures are independent and identically distributed with an
arbitrary probability distribution. Moreover, one assumes that the
component, after a failure, is subjected to a perfect repair and
returns to operation in the condition it presented when new (``as
good as new''). A special case of the \gls{rp} is the \gls{hpp}, in
which the times between failures are independent and identically
Exponentially distributed, with the underlying supposition that the
probability of occurrence of a failure in any time interval depends
only on the length of that interval. This assumption may hold for
some electronic components or over a short period of time
\cite{ross2000}. Under an \gls{nhpp}, on the other hand, the times
between failures are neither independent nor identically
distributed. In addition, it is supposed that the maintenance crew
performs a minimal repair on the failed component, that is, the
component is returned to an operational state in the same condition
it had just before the failure occurrence (``as bad as old'').
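The memoryless supposition behind the \gls{hpp} can be made concrete with a short simulation. The sketch below is illustrative only (the function name, rate and horizon are assumptions of this example, not taken from the text): inter-failure times are drawn as i.i.d. Exponential variables, so each gap ignores the entire failure history, exactly the ``as good as new'' behavior described above.

```python
import random

def simulate_hpp(rate, horizon, seed=42):
    """Failure times of a homogeneous Poisson process on [0, horizon]:
    times between failures are i.i.d. Exponential(rate)."""
    rng = random.Random(seed)
    t, failure_times = 0.0, []
    while True:
        t += rng.expovariate(rate)   # memoryless: gap ignores all history
        if t > horizon:
            return failure_times
        failure_times.append(t)

times = simulate_hpp(rate=0.5, horizon=200.0)
gaps = [b - a for a, b in zip([0.0] + times[:-1], times)]
```

With rate 0.5 the mean time between failures is $1/0.5 = 2$, and the sample mean of `gaps` fluctuates around that value, as expected for an \gls{hpp}.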

However, the hypotheses of minimal or perfect repairs required to
utilize either \gls{nhpp} or \gls{rp}, respectively, are often not
realistic. In practical situations, corrective maintenance actions are
likely to be imperfect repairs, {\it i.e.}, actions intermediate
between minimal and perfect repairs after which the equipment returns
to operation in a condition better than old but worse than new. In
this context, \gls{grp} can be used to model failure-repair processes
of components subject to imperfect repairs. In \gls{grp}, a
rejuvenation parameter $q$ is introduced in the model, and the value
it assumes is related to the effectiveness of the maintenance
action. However, this value is often taken as a constant for all
interventions, without taking into consideration the current state of
the system. For further details on \gls{rp}, \gls{hpp}, \gls{nhpp}
and \gls{grp}, the interested reader may consult
\citeonline{rigdon2000} and \citeonline{rausand2004}.
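One common way of writing a \gls{grp} is through Kijima's type I virtual-age model, in which each repair rejuvenates the accumulated age by the factor $q$; the sketch below assumes that particular formulation (the text above does not fix one) together with an underlying Weibull life distribution, both being assumptions of this example. Setting $q=0$ recovers the \gls{rp} (``as good as new'') and $q=1$ the minimal-repair \gls{nhpp} (``as bad as old'').

```python
import math
import random

def simulate_grp(alpha, beta, q, n_failures, seed=1):
    """Kijima type I virtual-age sketch of a GRP with an underlying
    Weibull(scale=alpha, shape=beta) life distribution.
    q = 0: renewal process; q = 1: minimal repair; 0 < q < 1: imperfect."""
    rng = random.Random(seed)
    t, failure_times = 0.0, []
    for _ in range(n_failures):
        v = q * t  # virtual age right after the last (instantaneous) repair
        u = rng.random()
        # Inverse of the conditional Weibull survival given virtual age v
        x = alpha * ((v / alpha) ** beta - math.log(u)) ** (1.0 / beta) - v
        t += x
        failure_times.append(t)
    return failure_times
```

With a wear-out shape ($\beta > 1$), values of $q$ closer to 1 leave more residual age after each repair, so successive inter-failure times tend to shrink faster than under values of $q$ closer to 0.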

In reality, the reliability of a system is affected by a set of
time-dependent, external (operational and environmental) variables
which are mutually dependent. As a consequence, reliability prediction
may demand sophisticated probabilistic models so as to realistically
capture the complexities of system and component reliability behavior,
which may result in burdensome mathematical formulations that, in the
end, may not provide the required accuracy of the reliability
estimates \cite{moura2009}.

In this context, data-driven learning methodologies emerge, and
\gls{svm} is the one selected to be studied and applied to reliability
problems in this dissertation. \gls{svm} has been under development
since the 1960s and was first introduced by Vapnik and Chervonenkis
(see \citeonline{vapnik2000}). Loosely speaking, \gls{svm} is a
learning method whose theory is based on statistical concepts. It
incorporates the idea of learning about the phenomenon under analysis
from real observations of it. The main idea is to train a ``machine''
with real pairs of inputs and outputs so as to allow for the
prediction of future outputs based on observed inputs. The training
algorithm is a quadratic optimization problem, whose objective
function essentially entails a generalization error, comprising the
training error as well as the error related to the machine's ability
to handle unseen data. The learning problem can be either one of
classification, in which the outputs are discrete values representing
categories, or of regression, when the outputs can assume any real
value.
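The train-on-pairs, predict-unseen-outputs idea can be sketched in a few lines. The snippet below assumes scikit-learn and NumPy are available (an assumption of this example; any \gls{svm} library exposing an $\varepsilon$-regression would do) and uses purely illustrative toy data.

```python
import numpy as np
from sklearn.svm import SVR  # epsilon-SVR: the regression variant of SVM

rng = np.random.default_rng(0)
X = np.arange(40, dtype=float).reshape(-1, 1)         # observed inputs
y = 2.0 * X.ravel() + rng.normal(scale=1.0, size=40)  # noisy observed outputs

# Fitting solves a convex quadratic optimization problem; C, gamma and
# epsilon are the parameters behind the model selection problem.
model = SVR(kernel="rbf", C=100.0, gamma=0.01, epsilon=0.1)
model.fit(X[::2], y[::2])        # train the "machine" on half of the pairs
pred = model.predict(X[1::2])    # predict outputs for unseen inputs
```

The held-out predictions track the underlying trend despite the noise, which is the generalization ability the objective function is designed to promote.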

Models that compete with \gls{svm} are the artificial \gls{nn}
\cite{haykin1999} and the \gls{bn} \cite{korb2003}. It is interesting
to notice that, in accordance with \citeonline{kecman2005}, \gls{svm}
has been developed in the reverse order to the development of
\gls{nn}: \gls{svm} evolved from theory to implementation and
experiments, while \gls{nn} followed a more ``heuristic'' path, from
applications to theory. The strong theoretical background of \gls{svm}
did not make it widely appreciated at first; it was believed that,
despite its theoretical foundations, \gls{svm} was neither suitable
nor relevant for practical purposes. Afterwards, however, the use of
\gls{svm} in benchmark learning problems, involving for example digit
recognition, provided excellent results, and the tool was finally
taken seriously.

\citeonline{haykin1999} asserts that an \gls{nn} is designed to model
the way in which the brain performs a particular task or function of
interest and is usually simulated in software on a digital computer.
To achieve good performance, an \gls{nn} employs a massive
interconnection of simple computing cells referred to as ``neurons''
or ``processing units'', which are basically formed by (i) a set of
connecting links, each of them characterized by a weight; (ii) an
adder for summing the input signals, weighted by the respective
connecting links; and (iii) an activation function for limiting the
amplitude of the output.

\gls{nn} training relies on the \gls{erm} principle, which measures
only the errors from the training step and is appropriate when there
is a large quantity of training examples \cite{vapnik2000}. On the
other hand, the training phase of \gls{svm} entails a convex quadratic
optimization problem whose objective function embodies the principle
of \gls{srm}. The general idea of \gls{erm} is to minimize the error
during the training phase, while \gls{srm} aims at minimizing an upper
bound on the generalization error. Additionally, the characteristics
of the \gls{svm} training optimization problem make the \gls{kkt}
conditions necessary and sufficient for a global optimum, unlike
\gls{nn}, which may be trapped in local minima \cite{schol2002}.


According to \citeonline{korb2003}, \gls{bn}s are graphical models for
reasoning under uncertainty, represented by directed acyclic graphs
\cite{bondy2008} whose nodes denote variables and whose arcs represent
causal connections between the related variables. Also, a \gls{bn} can
model the quantitative strength of the connections between variables,
allowing probabilistic beliefs about them to be updated automatically
as new information becomes available.

For example, \citeonline{ramesh2003} use a hybrid of \gls{bn} and
\gls{svm} in order to predict the axis positioning errors in machine
tools. Such errors depend on the machine temperature profile and also
on the specific operating condition the machine is subjected
to. Firstly, a \gls{bn} approach is used to classify the error into
categories associated with the different operating conditions. After
that, a knowledge base of errors due to specific machine conditions is
formed and combined with the classification results as input to an
\gls{svm} regression model that maps the temperature profiles to the
measured errors. The authors provide the following reasons for using
a \gls{bn}: it copes with the paucity of data, expert knowledge can be
incorporated when data availability is sparse, and it permits the
learning of causal relationships between variables. For the use of
\gls{svm}, the authors note that it does not require previous
knowledge of the relationship between input and output variables.

In the context of reliability prediction from time series data, in
addition to the previous methods, the \gls{arima} and Duane models are
alternatives to \gls{svm}. \citeonline{morettin2004} state that the
\gls{arima} model has been one of the most popular approaches in time
series forecasting and assumes that predicted values are a linear
combination of previous values and errors. The Duane model, in turn,
is frequently applied to the analysis of reliability during the early
stages of product design and development, which may involve
reliability growth modeling. It assumes an empirical relationship
whereby the improvement in \gls{mtbf} is proportional to $T^{\theta}$,
where $T$ is the equipment's total operational time and $\theta$ is
the reliability growth factor \cite{lewis1987,smith2001}.
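The proportionality above admits a short numeric sketch. The snippet below assumes the usual power-law reading of the Duane postulate, i.e. cumulative \gls{mtbf} growing as $T^{\theta}$; the function names and numeric values are illustrative only.

```python
import math

def duane_mtbf(mtbf_ref, t_ref, t, theta):
    """Power-law reading of the Duane postulate:
    MTBF(T) = MTBF(T_ref) * (T / T_ref) ** theta."""
    return mtbf_ref * (t / t_ref) ** theta

def duane_growth_factor(mtbf1, t1, mtbf2, t2):
    """Estimate theta from two (time, cumulative MTBF) observations,
    which lie on a straight line in log-log scale."""
    return math.log(mtbf2 / mtbf1) / math.log(t2 / t1)

# Illustrative numbers: with theta = 0.3, doubling the total operational
# time multiplies the cumulative MTBF by 2 ** 0.3 (about 1.23).
m = duane_mtbf(mtbf_ref=100.0, t_ref=1000.0, t=2000.0, theta=0.3)
```

Because the relationship is a power law, plotting cumulative \gls{mtbf} against $T$ on log-log paper yields a straight line of slope $\theta$, which is how the growth factor is usually read off in practice.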

Therefore, the advantages of \gls{svm} in relation to the other
methods are: (i) no requirement of prior knowledge of, or suppositions
about, the relation between inputs and outputs; (ii) no need of a
large quantity of data; (iii) incorporation of the \gls{srm}
principle, which provides it with a better generalization ability; and
(iv) the resolution of a convex optimization problem in the training
stage.

Although \gls{svm} was introduced in the 1960s, its first applications
to prediction based on data series occurred in the late 1990s; see for
example \citeonline{muller1999}. Indeed, \citeonline{sap2009} provide
a literature survey of works using \gls{svm} in time series
prediction, counting the number of works per knowledge area
(Table \ref{tab:svrapplic}). It can be noticed that only three works
were classified into the reliability prediction context, making it
the field with the fewest related works. From this fact, it can be
inferred that \gls{svm} is still in the early stages of its
applicability to reliability prediction problems based on time series
data, if compared for example with the economic field.

% --------------------------------------------------------------------------
\begin{table}[!ht]
\begin{center}
\begin{footnotesize}
  \caption{Number of SVM time series prediction papers by
    application. {\footnotesize{Adapted from \citeonline{sap2009},
        p. 26}}}
\label{tab:svrapplic}

\vspace{0.2cm}

\begin{tabular}{ll} 
  \toprule \textbf{Application} & \textbf{Number of papers}\\\midrule
  Financial market prediction & 21\\
  Electric utility forecasting & 17\\
  Control systems and signal processing & 8\\
  Miscellaneous applications & 8\\
  General business applications & 5\\
  Environmental parameter estimation & 4\\
  Machine reliability forecasting & 3
  \\\bottomrule
\end{tabular}
\end{footnotesize}
\end{center}
\end{table}
% --------------------------------------------------------------------------

Nevertheless, the performance of \gls{svm} is influenced by a set of
parameters that appear in the training problem. This fact gives rise
to the model selection problem, which consists in choosing suitable
values for these parameters. They are actually very difficult to tune
manually, and systematic procedures may be required to perform this
task. Methods such as \gls{pso} \cite{bratton2007} can be used to
tackle the model selection problem. \gls{pso} is a probabilistic
optimization heuristic, well suited to deal with real variables, based
on the behavior of biological organisms that move in groups, such as
birds. The natural concepts of cognition and socialization are
translated into mathematical formulations for updating particles'
velocities and positions throughout the search space towards an
optimum position. There are basically two models of communication
networks among particles: in the simplest one, all particles are
connected to each other ({\it gbest}); in the other, each particle is
able to communicate with only some of the others ({\it lbest}).

Besides \gls{pso}, there are other probabilistic approaches to solve
the model selection problem, such as \gls{ga}
\cite{goldberg1989}. \gls{ga} is a computational method usually
employed for optimization tasks that attempts to mimic the natural
evolution process. It is based on several genetic operators, such as
crossover and mutation, and is often computationally more expensive
than \gls{pso}. Another possible alternative to tackle the \gls{svm}
model selection problem is the grid search method
\cite{momma2002}. The latter treats the parameters as discrete values
within a range, and all possible combinations of them are
assessed. Its main drawbacks are the discretization of the search
space and the great number of combinations to be evaluated when there
are several parameters to adjust.
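Both the discretization and the combinatorial cost are visible in a minimal grid search sketch. The snippet below uses a stand-in scoring function (an assumption of this example; in practice the score would be a cross-validated \gls{svm} error) and illustrative parameter grids:

```python
import itertools

def grid_search(score, grid):
    """Exhaustive grid search: discretize each parameter and score
    every combination, keeping the one with the lowest score."""
    best_params, best_score = None, float("inf")
    names = sorted(grid)
    for combo in itertools.product(*(grid[n] for n in names)):
        params = dict(zip(names, combo))
        s = score(params)
        if s < best_score:
            best_params, best_score = params, s
    return best_params, best_score

grid = {"C": [0.1, 1, 10, 100], "gamma": [0.001, 0.01, 0.1],
        "epsilon": [0.01, 0.1]}
# 4 * 3 * 2 = 24 evaluations; the count multiplies with every parameter
params, s = grid_search(
    lambda p: (p["C"] - 10) ** 2 + p["gamma"] + p["epsilon"], grid)
```

Each extra parameter multiplies the number of evaluations, and any optimum falling between grid points is simply missed, which is precisely the drawback noted above.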

In the following section, some previous works related to \gls{svm},
as well as to the \gls{svm} model selection problem, are presented.


\section{Previous works}

For classification tasks, \citeonline{rocco2002} have used \gls{svm}
to classify a component as operational or faulty in order to evaluate
overall system reliability. The authors take advantage of the speed of
\gls{svm}, which is greater than that of the traditional discrete
event simulation approach of Monte Carlo \cite{banks2001}, and couple
Monte Carlo simulation with \gls{svm}. \citeonline{roccoezio2007}, in
turn, use a multi-class \gls{svm} to categorize anomalies in
components. \citeonline{widodo2007} review the application of
\gls{svm} to condition monitoring and fault diagnosis.

For the prediction of reliability related measures based on time
series, \citeonline{hongepai2006} use \gls{svm} coupled with an
iterative method for selecting the associated \gls{svm} parameters,
forecasting the failure times of an engine. Additionally,
\citeonline{pai2006} and \citeonline{chen2007} propose a
\gls{ga}+\gls{svm} approach to predict reliability values of an
engine, in which \gls{ga} is used as an optimization tool to obtain
the most suitable parameter values. No work was found relating the use
of \gls{svm} with system characteristics, such as temperature or the
number of installed components, to predict continuous reliability
metrics ({\it e.g.} \gls{tbf}, \gls{ttr}).


The \gls{pso}+\gls{svm} methodology is presented in some
works. \citeonline{lin2008} use such an approach not only for the
model selection problem but also for the choice of the most relevant
input entries (a problem known as feature selection), applying the
methodology to freely available general data sets from Internet
repositories. Dissolved gas contents in power transformers' oil are
predicted in the work of \citeonline{fei2009} with a
\gls{pso}+\gls{svm} approach. Also in the electricity context,
\citeonline{hong2009} predicts the electric load by means of \gls{pso}
combined with a regression \gls{svm}. \citeonline{samanta2009} apply
\gls{pso} with a classification \gls{svm} in the context of fault
detection.

Thus, so far, no \gls{pso}+\gls{svm} approach has been proposed to
tackle reliability prediction tasks based either on time series data
or on specific metrics of the system under consideration.
Additionally, all of the works above involved the simplest
communication network among particles, the {\it gbest} model.





\section{Justification}

As already mentioned, the reliability of components and systems is a
key element of production systems due to its direct relation with
productivity and costs, and thus with the performance of organizations
within the market. Hence, reliability prediction is a subject of great
interest that can be converted into economic and competitive gains for
organizations.

Besides this first motivation, as can be concluded from the survey of
\citeonline{sap2009}, there are few works on reliability prediction
based on time series using \gls{svm} as the forecasting tool. In
addition, from the literature review, no work was found either
concerning \gls{svm} regression to predict reliability metrics based
on system features or relating \gls{pso} with \gls{svm} to tackle the
model selection problem in this specific context. Also, all works that
involved \gls{pso}+\gls{svm} used a {\it gbest} communication network
among particles.

Also, \gls{cbm}, which is a maintenance program that recommends
maintenance actions based on information collected via equipment
condition monitoring, may incorporate \gls{svm}. \gls{cbm} consists of
three main steps \cite{jardine2006}:
\begin{itemize}
\item Data acquisition: data regarding equipment status is collected.
\item Data processing: the acquired data is processed, analyzed and
  interpreted.
\item Maintenance decision-making: after data interpretation,
  efficient maintenance policies are recommended.
\end{itemize}

\gls{svm} may take place in the second step of \gls{cbm}. If the
\gls{cbm} approach is well established and implemented, it can reduce
maintenance costs. For example, the work of
\citeonline{moura2009esrel} assumes that the considered system is
continuously monitored and that its current state is available. The
authors then propose maintenance policies based simultaneously on
availability and cost metrics of systems by means of \gls{smdp} and
\gls{ga}. Hence, they tackle the last step of a \gls{cbm} program, and
the inclusion of \gls{svm} as a previous step would result in a more
comprehensive approach to the considered problem.


\section{Objectives}

\subsection{Main Objective}

This dissertation proposes a \gls{pso} algorithm to solve the
\gls{svm} model selection problem. The resulting \gls{pso}+\gls{svm}
methodology is then applied to the reliability context, specifically
involving regression problems such as reliability prediction problems
based on time series data and/or on system-specific features.

\subsection{Specific Objectives}

In order to attain the main objective, the following specific goals
are established:
\begin{itemize}
\item Literature review of the most used methods to solve the
  \gls{svm} model selection problem.
\item Implementation of a \gls{pso} algorithm linked with an
  \gls{svm} library to tackle the \gls{svm} model selection problem,
  resulting in a \gls{pso}+\gls{svm} approach.
\item Application of the proposed \gls{pso}+\gls{svm} to reliability
  prediction problems based on time series data and also based on data
  collected from a real system.
\item Performance comparison between the proposed \gls{pso}+\gls{svm}
  and other methodologies such as \gls{ga}+\gls{svm}, as well as other
  time series methods such as \gls{nn} and \gls{arima}.
\item Performance comparison between {\it gbest} and {\it lbest}
  models for the application examples taken into account.
\end{itemize}


\section{Dissertation Layout}

Besides this introductory chapter, this dissertation comprises four
more chapters, whose contents are described as follows:
\begin{itemize}
\item{\bf Chapter 2}: the theoretical background is
  presented. Initially, the \gls{svm} methods for classification and
  regression tasks are detailed. Then, the related model selection
  problem as well as a survey of the methods that have been applied to
  tackle it are described. This chapter also contains the general
  ideas underlying \gls{pso} algorithms.
\item{\bf Chapter 3:} the \gls{pso} methodology proposed in this work
  to tackle the \gls{svm} model selection problem is detailed. Also,
  the \gls{pso}+\gls{svm} combination is discussed.
\item{\bf Chapter 4:} three application examples from the literature
  are presented and solved by means of the proposed \gls{pso}+\gls{svm}
  methodology. One of them is used in two different ways, which yields
  four examples. The outcomes are compared to results from other tools
  available in the literature (\gls{nn}, \gls{arima}, among
  others). Also, an application example involving data collected from
  oil production wells is solved. Then, a comparison between the {\it
    gbest} and {\it lbest} \gls{pso} and a discussion of the obtained
  results take place.
\item{\bf Chapter 5:} a summary of the main contributions of this
  dissertation is provided along with some comments about its
  limitations. In addition, this chapter presents some topics
  associated with the ongoing research and suggestions for future
  works.
\end{itemize}

