\chapter{Synthesis of Fault-Tolerant Control Systems} \label{chap:faulttolerance}
\lettrine[lines=3]{M}{odern} distributed computing platforms 
may have components that fail temporarily or permanently during operation. 
In this chapter, we present a framework for fault-tolerant operation
of control applications in the presence of permanent faults. 

When a node fails, the \emph{configuration} of the underlying
distributed system changes. A configuration of the system is a set of
currently operational computation nodes. When the configuration of the
system changes (i.e., when one or several nodes fail), the tasks that
were running on the failed nodes must now be activated at nodes of
this new configuration.  To guarantee stability and a minimum level of
control quality of the control applications, it is necessary that a
\emph{solution} for this new configuration be synthesized at design
time and that the system have the ability to adapt to this solution at
runtime.  A solution comprises a mapping, a schedule, and controllers
that are optimized for the computation nodes in a certain
configuration.  However, such a construction is not trivial, because
the total number of configurations is exponential in the number
of nodes of a distributed system.  Synthesis of all possible
configurations is thus impractical, because of the very large
requirements on design time and memory space of the underlying
execution platform.  In this chapter, we propose a method to
synthesize solutions for a small number of \emph{minimal}
configurations and still provide guarantees on fault tolerance and a
minimum level of control performance. Additional configurations are
selected for synthesis based on trade-offs among control performance,
synthesis time, migration time, and memory consumption of the
platform.

In the next section, we shall present notation and assumptions related
to the distributed execution platform and its configurations.
Section~\ref{faulttolerance:sec:configclass} presents a classification
of the set of configurations, leading to the identification of
\emph{base} configurations in
Section~\ref{faulttolerance:sec:baseidentification}.
Section~\ref{faulttolerance:sec:mapping} presents an extension of our
synthesis framework from Chapter~\ref{chap:distributed} to mapping
optimization of distributed control systems.  In
Section~\ref{faulttolerance:sec:minconfig}, we consider the synthesis
of solutions for base configurations and, if needed, for a set of
\emph{minimal} configurations.  In
Sections~\ref{faulttolerance:sec:dse} and~\ref{faulttolerance:sec:probform},
we motivate and formulate a
design-space exploration problem for control-quality optimization,
followed by an optimization heuristic in
Section~\ref{faulttolerance:sec:heuristic}. 
This chapter ends with experimental results
in Section~\ref{faulttolerance:sec:experiments} and conclusions in
Section~\ref{faulttolerance:sec:conclusion}.



\section{Distributed Platform and Configurations} \label{faulttolerance:sec:systemmodel}
The execution platform
comprises a set of computation nodes $\nodeset$ that are connected to a single bus. 
The set of nodes $\nodeset$ is indexed with $\indexset_\nodeset$.
Figure~\ref{faulttolerance:fig:systemexample} shows a set of control
loops comprising $n$ plants $\plants$ with index set
$\indexset_\plants = \{1,\ldots,n\}$ and, for each plant $\plant_i$, a
control application $\application_i$ with three tasks $\taskset_i =
\{\task_{is}, \task_{ic}, \task_{ia}\}$.  The edges indicate the data
dependencies between tasks, as well as communication between sensors
and actuators (arrows with dashed lines).
\begin{figure}
        \centering 
%        \includegraphics[width=0.5\textwidth]{faulttolerance/figures/systemexample} 
        \includegraphics[width=0.6\textwidth]{faulttolerance/figures/systemexample_v2} 
        \caption[A set of feedback-control applications running on a distributed execution platform]
        {A set of feedback-control applications running on a distributed
	execution platform. Task $\task_{1s}$ reads sensors and may execute on nodes $\node_A$
        or $\node_C$, whereas the actuator task $\task_{1a}$ may execute on nodes $\node_C$ or
        $\node_D$. Task $\task_{1c}$ can be mapped to any of the four nodes.}  
        \label{faulttolerance:fig:systemexample}
\end{figure}
For the platform in the same figure, we have $\nodeset = \{ \node_A,
\node_B, \node_C, \node_D \}$ and its index set $\indexset_\nodeset = \{A,B,C,D\}$.

We consider that a function $\mappinglimitation :
\taskset_\applicationset \longrightarrow \powerset{\nodeset}$ is given,
as described in Section~\ref{prel:sec:mappingscheduling} on
page~\pageref{prel:sec:mappingscheduling}.  
This determines the set of allowed computation nodes for each task in the system:
The set of computation nodes that task $\task \in \taskset_\applicationset$ can be mapped to
is $\mappinglimitation(\task)$. In
Figure~\ref{faulttolerance:fig:systemexample}, tasks $\task_{1s}$ and
$\task_{1a}$ may be mapped to the nodes indicated by the dashed lines.
Thus, we have $\mappinglimitation(\task_{1s}) =
\{\node_A, \node_C\}$ and $\mappinglimitation(\task_{1a}) = \{\node_C,
\node_D\}$.  We consider that task $\task_{1c}$ can be mapped to any of the four
nodes in the platform---that is, $\mappinglimitation(\task_{1c}) = \nodeset$.
\cbstart
We have omitted the many dashed lines for task $\task_{1c}$ to keep the
illustration clear. The mapping constraints are the same for the other
control applications.
\cbend
%For each task $\task \in \taskset_\applicationset$ and
%each computation node $\node \in \mappinglimitation(\task)$, we
%consider the best-case and worst-case execution times are given when
%task $\task$ executes on node $\node$.



At any moment in time, the system has a set of computation nodes
$\nodeconfig \subseteq \nodeset$\newnot{symbol:nodeconfig} that are
operational.  The remaining nodes $\nodeset \setminus \nodeconfig$
have failed and are not available for computation. We shall refer to
$\nodeconfig$ as a \emph{configuration} of the distributed
platform. The complete set of configurations is the power set
\newnot{symbol:configurationset}$\configurationset =
\powerset{\nodeset}$ and is partially ordered by inclusion.
The partial order of configurations is shown in
Figure~\ref{faulttolerance:fig:configurationdiagram} as a Hasse
diagram of configurations for our example with four computation nodes
in Figure~\ref{faulttolerance:fig:systemexample}. 
\begin{sidewaysfigure}
\centerline{\xymatrix{
& & & & \{\node_A,\node_B,\node_C,\node_D\} \\ \\
%
& \{\node_A,\node_B,\node_C\} \ar[uurrr] & & \{\node_A,\node_B,\node_D\} \ar[uur] & \{\node_A,\node_C,\node_D\} \ar[uu] & \{\node_B,\node_C,\node_D\} \ar[uul] \\ \\
%
\{\node_A,\node_B\} \ar[uur]\ar[uurrr] & \{\node_A,\node_C\} \ar[uu]\ar[uurrr] & \{\node_B,\node_C\} \ar[uul]\ar[uurrr] & \{\node_A,\node_D\} \ar[uu]\ar[uur]& \{\node_B,\node_D\} \ar[uul]\ar[uur] & \{\node_C,\node_D\} \ar[uul]\ar[uu] \\ \\
%
\{\node_A\} \ar[uu]\ar[uur]\ar[uurrr] & \{\node_B\} \ar[uul]\ar[uur]\ar[uurrr] & \{\node_C\} \ar[uul]\ar[uu]\ar[uurrr]& & \{\node_D\} \ar[uul]\ar[uu]\ar[uur] \\ \\
%
& \emptyset \ar[luu] \ar[uu] \ar[ruu] \ar[rrruu]
}}

%    \centering
%    \includegraphics[width=0.75\textwidth]{faulttolerance/figures/configurationdiagram}
    \caption[Hasse diagram of configurations]
    {Hasse diagram of configurations.  
    Sixteen possible configurations of the platform are shown. The vertices
    indicate configurations, which are partially ordered by inclusion. 
    The edges connect configurations according to
    the subset relation. The configuration $\emptyset$
    models the situation in which all nodes have failed.}
    \label{faulttolerance:fig:configurationdiagram}    
\end{sidewaysfigure}
For example,
configuration $\{\node_A, \node_B, \node_C\}$ indicates that $\node_D$
has failed and only the other three nodes are available for
computation. The configuration $\emptyset$ indicates that all nodes
have failed. 

We consider that the platform implements appropriate mechanisms for
fault detection~\cite{kopetz97, korenFTbook}.  The failure of a node
must be detected and all remaining operational nodes must know about
such failures. In addition, the event that a failed node has been
repaired is detected by the operational nodes in the system. This
allows each operational node to know about the current
configuration. Adaptation due to failed or repaired nodes involves
switching schedules and control algorithms that are optimized for the
available resources in the new configuration
(Section~\ref{faulttolerance:sec:heuristic}), or, if no such
optimizations have been performed at design time, switching to
mandatory backup solutions
(Section~\ref{faulttolerance:sec:minconfig}).  This information is
stored in the nodes of the
platform~\cite{srivastava05,kopetz97}. Another phase during system
reconfiguration is task migration~\cite{Lee10} that takes place when
tasks that were running on failed nodes are activated at other nodes
in the system. We consider that the system has the ability to migrate
tasks to other nodes in the platform. Each node stores information
regarding those tasks that it must migrate through the bus when the system
is adapting to a new configuration. This information is generated at
design time (Section~\ref{faulttolerance:sec:mappingrealization}).
For communication, we assume that the protocol of the system ensures
fault tolerance for messages through various means of
redundancy~\cite{navet05,kopetz97}.


\section{Classification of Configurations} \label{faulttolerance:sec:configclass}
In this section, we shall provide a classification of the different configurations
that may occur during operation.
The first subsection illustrates the idea with the running example
in Figure~\ref{faulttolerance:fig:systemexample}. The second subsection gives
formal definitions of the different types of configurations.


\subsection{Example of Configurations} \label{faulttolerance:sec:classificationexample}
Let us consider our example in Figure~\ref{faulttolerance:fig:systemexample}. Task
$\task_{1s}$ reads sensors and $\task_{1a}$ writes to actuators. Task
$\task_{1c}$ does not perform input--output operations and can be
executed on any node in the platform. Sensors can be read by nodes
$\node_A$ and $\node_C$, whereas actuation can be performed by
nodes $\node_C$ and $\node_D$. The mapping constraints for the tasks
are thus given by $\mappinglimitation(\task_{1s}) = \{\node_A,
\node_C\}$, $\mappinglimitation(\task_{1c}) = \nodeset$, and
$\mappinglimitation(\task_{1a}) = \{\node_C, \node_D\}$. 
\cbstart
The same mapping constraints and discussion hold for the other 
control applications $\application_{i}$ ($i = 2,\ldots,n$). Thus, in the remainder
of this example, we shall restrict the discussion to application $\application_{1}$.
\cbend

First, let us consider the initial scenario in which all computation nodes are
operational and are executing one or several tasks each. The system is
thus in configuration $\nodeset = \{\node_A, \node_B, \node_C, \node_D\}$ (see
Figure~\ref{faulttolerance:fig:configurationdiagram}) and we assume that the actuator
task $\task_{1a}$ executes on node $\node_C$ in this
configuration. Consider now that $\node_C$ fails and the system
reaches configuration $\nodeconfig_{ABD} = \{\node_A, \node_B,
\node_D\}$. Task $\task_{1a}$ must now execute on $\node_D$ in this
new configuration, because actuation can only be performed by nodes
$\node_C$ and $\node_D$. According to the mapping constraints given by
$\mappinglimitation$, there exists a possible mapping for each task in
configuration $\nodeconfig_{ABD}$, because 
$\nodeconfig_{ABD} \cap
\mappinglimitation(\task) \neq \emptyset$
for each task $\task \in \taskset_\applicationset$.
We refer to such configurations as
\emph{feasible} configurations.  Thus, for a feasible configuration
and any task, there is at least one node in that configuration on
which the task can be mapped---without violation of the imposed mapping constraints.


If the system is in configuration $\nodeconfig_{ABD}$ and node $\node_A$
fails, a new configuration $\nodeconfig_{BD} =
\{\node_B,\node_D\}$ is reached. Because task $\task_{1s}$ cannot be mapped to any
node in the new configuration, we say that $\nodeconfig_{BD}$ is an
\emph{infeasible} configuration (i.e., we have $\mappinglimitation(\task_{1s})
\cap \nodeconfig_{BD} = \emptyset$). If, on the other hand, node
$\node_B$ fails in configuration $\nodeconfig_{ABD}$, the system reaches
configuration $\nodeconfig_{AD} = \{\node_A,\node_D\}$. In this
configuration, tasks $\task_{1s}$ and $\task_{1a}$ must execute on $\node_A$
and $\node_D$, respectively. Task $\task_{1c}$ may run on either
$\node_A$ or $\node_D$.  Thus, $\nodeconfig_{AD}$ is a feasible
configuration, because it is possible to map each task to a node that
is both operational and allowed according to the given mapping
restrictions. We observe that if either of the nodes in
$\nodeconfig_{AD}$ fails, the system reaches an infeasible configuration.
We shall refer to configurations like $\nodeconfig_{AD}$ as \emph{base}
configurations. Note that any configuration that is a superset of the
base configuration $\nodeconfig_{AD}$ is a feasible configuration. By
considering the mapping constraints, we observe that the only other
base configuration in this example is $\{\node_C\}$ (node $\node_C$ may
execute any task). The set of base
configurations for our example system is thus 
\begin{displaymath}
\baseconfigset = \{
\{\node_A,\node_D\}, \{\node_C\} \}.
\end{displaymath}

Let us consider that design solutions are generated for the two base configurations
in $\baseconfigset$.
Considering Figure~\ref{faulttolerance:fig:configurationdiagram} again, we
note that the mapping for base configuration
$\{\node_A,\node_D\}$, including the produced schedule, task periods,
and control laws, can be used to operate the system in the feasible
configurations $\{\node_A,\node_B,\node_C,\node_D\}$,
$\{\node_A,\node_B,\node_D\}$, and $\{\node_A,\node_C,\node_D\}$.
This is done by merely using the two nodes in the base configuration
(i.e., $\node_A$ and $\node_D$), even though more nodes are operational
in the mentioned feasible configurations. 
Similarly, base configuration $\{\node_C\}$ covers another subset 
of the feasible configurations.
Figure~\ref{faulttolerance:fig:partialhassediag} shows the partial order
that remains when infeasible configurations in 
Figure~\ref{faulttolerance:fig:configurationdiagram} are removed.
Specifically, note that the two base configurations cover all feasible configurations together
(there is a path to any feasible configuration, starting from a base configuration).

\begin{sidewaysfigure}
\centerline{\xymatrix{
& & & \{\node_A,\node_B,\node_C,\node_D\} \\ \\
%
\{\node_A,\node_B,\node_C\} \ar[uurrr] & & \{\node_A,\node_B,\node_D\} \ar[uur] & \{\node_A,\node_C,\node_D\} \ar[uu] & \{\node_B,\node_C,\node_D\} \ar[uul] \\ \\
%
\{\node_A,\node_C\} \ar[uu]\ar[uurrr] & \{\node_B,\node_C\} \ar[uul]\ar[uurrr] & \underline{\boldsymbol{\{\node_A,\node_D\}}} \ar[uu]\ar[uur]& & \{\node_C,\node_D\} \ar[uul]\ar[uu] \\ \\
%
& \underline{\boldsymbol{\{\node_C\}}} \ar[uul]\ar[uu]\ar[uurrr]
}}
\caption[Partial Hasse diagram of the set of configurations]{Partial Hasse diagram of the set of configurations. Only the feasible configurations are shown. The two base configurations (underlined and typeset in bold) cover all feasible configurations.}
\label{faulttolerance:fig:partialhassediag}
\end{sidewaysfigure}

By generating a mapping (as well as customized schedules, periods, and
control laws as in Chapter~\ref{chap:distributed}) for each base
configuration, and considering that tasks are stored in the memory of
the corresponding computation nodes to realize the base configuration
mappings, the system can tolerate any sequence of node failures that
leads the system to any feasible configuration. Thus, a necessary and
sufficient step in the design phase (in terms of fault tolerance) is
to identify the set of base configurations and to generate design
solutions for them.  It can be the case that the computation capacity
is insufficient in some base configurations, because of the small
number of operational nodes.  We shall discuss this issue in
Section~\ref{faulttolerance:sec:minconfig}.  Although faults leading to
any feasible configuration can be tolerated because execution
is supported in the base configurations, the control quality of the system
can be improved if all operational computation nodes are utilized to efficiently
distribute the executions.  To this end, we shall consider synthesis
of solutions for additional feasible configurations in
Section~\ref{faulttolerance:sec:heuristic}.


\subsection{Formal Definitions}
We consider that the mapping constraint $\mappinglimitation :
\taskset_\applicationset \longrightarrow \powerset{\nodeset}$ is given, meaning
that $\mappinglimitation(\task)$ defines the set of computation nodes
that task $\task \in \taskset_\applicationset$ may execute on. 
Thus, $\mappinglimitation$ directly determines the
set of configurations to which the system must be able to adapt by
using the information that is synthesized at design time and stored in
memory.  Specifically, a given configuration $\nodeconfig \in
\configurationset$ is defined as a \emph{feasible} configuration if
$\nodeconfig \cap \mappinglimitation(\task) \neq \emptyset$ for each
task $\task \in \taskset_\applicationset$. The set of feasible
configurations is denoted $\feasibleconfigset$.\newnot{symbol:feasibleconfigset}

For an infeasible configuration $\nodeconfig \in \configurationset
\setminus \feasibleconfigset$, there exists at least one task that, due
to the given mapping constraints, cannot execute on any computation
node in $\nodeconfig$ (i.e., $\nodeconfig \cap
\mappinglimitation(\task) = \emptyset$ for some $\task \in
\taskset_\applicationset$).
A \emph{base configuration} $\nodeconfig$ is a feasible configuration
for which the failure of any computation node $\node \in \nodeconfig$
results in an infeasible configuration $\nodeconfig \setminus \{ \node
\}$. The set of base configurations is thus defined as\newnot{symbol:baseconfigset}
\begin{displaymath}
\baseconfigset = \{ \nodeconfig \in
\feasibleconfigset : \nodeconfig \setminus \{\node\} \notin
\feasibleconfigset \textrm{ for each } \node \in \nodeconfig \}.
\end{displaymath}
The set of configurations $\configurationset = \powerset{\nodeset}$
is thus partitioned into disjoint sets of
feasible and infeasible configurations. Some of the feasible
configurations form a set of base configurations, which represents the
boundary between the set of feasible and infeasible configurations.
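The two definitions translate directly into executable form. The following Python sketch (the task names, node letters, and dictionary encoding of $\mappinglimitation$ are ours, chosen to mirror the running example) classifies every configuration by brute force over the power set:

```python
from itertools import combinations

# Hypothetical encoding of the running example: nodes are letters and
# R maps each task to its set of allowed nodes (the mapping constraint).
NODES = frozenset("ABCD")
R = {
    "t1s": frozenset("AC"),   # sensor task
    "t1c": NODES,             # control task, may run on any node
    "t1a": frozenset("CD"),   # actuator task
}

def is_feasible(config):
    """X is feasible iff X intersects R(t) for every task t."""
    return all(config & allowed for allowed in R.values())

def is_base(config):
    """Feasible, and the failure of any single node makes it infeasible."""
    return is_feasible(config) and all(
        not is_feasible(config - {n}) for n in config
    )

def power_set(s):
    s = sorted(s)
    return (frozenset(c) for r in range(len(s) + 1)
            for c in combinations(s, r))

base_configs = {c for c in power_set(NODES) if is_base(c)}
print(sorted("".join(sorted(c)) for c in base_configs))  # ['AD', 'C']
```

This enumeration is exponential in $\setsize{\nodeset}$ and serves only to illustrate the definitions on the four-node example.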

In the next section, we shall discuss an approach to identify the set
of base configurations. In the ideal case, solutions for base
configurations are synthesized, enabling the system to operate in any
feasible configuration.  
\cbstart
If not all base configurations 
allow for acceptable solutions to be
synthesized, we construct solutions for a
set of \emph{minimal} configurations in
Section~\ref{faulttolerance:sec:minconfig} to cover as many feasible
configurations as possible. Such situations may occur, for example, 
if the computation capacity is too restricted in certain base configurations.
\cbend




\section{Identification of Base Configurations} \label{faulttolerance:sec:baseidentification}
A straightforward approach to find the set of base configurations is
to perform a search through the Hasse diagram of configurations.
Given the mapping constraint
$\mappinglimitation : \taskset_\applicationset \longrightarrow
\powerset{\nodeset}$, we find the set of
base configurations $\baseconfigset$ based on a breadth-first
search~\cite{lewis91} of the Hasse diagram of configurations. The
search starts at the full configuration $\nodeset$ with
$\baseconfigset$ initialized to $\emptyset$. It is assumed that $\nodeset$
is a feasible configuration. Let us consider an arbitrary visit of a
feasible configuration $\nodeconfig$ during any point of the
search. To determine whether or not to add $\nodeconfig$ to the set of
base configurations $\baseconfigset$, we consider each configuration
$\nodeconfig^\prime$ with $\setsize{\nodeconfig^\prime} =
\setsize{\nodeconfig} - 1$ (i.e., we consider the failure of any node
in $\nodeconfig$). If each such configuration
$\nodeconfig^\prime$ is infeasible, we add $\nodeconfig$ to the set of
base configurations $\baseconfigset$. Infeasible configurations
$\nodeconfig^\prime$, as well as any configuration
$\nodeconfig^\bis \subset \nodeconfig^\prime$, are not visited during
the search.
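As a sketch, the search just described can be realized as follows in Python (the dictionary `R`, with hypothetical task names, plays the role of $\mappinglimitation$):

```python
from collections import deque

def base_configs_bfs(nodes, R):
    """Breadth-first search downward from the full configuration.

    R maps each task to its frozenset of allowed nodes. A visited
    configuration is added to the result when every immediate
    sub-configuration (one node removed) is infeasible; infeasible
    configurations, and hence all of their subsets, are never expanded.
    """
    def feasible(c):
        return all(c & allowed for allowed in R.values())

    full = frozenset(nodes)
    assert feasible(full), "the full configuration is assumed feasible"
    base, seen, queue = set(), {full}, deque([full])
    while queue:
        c = queue.popleft()
        children = [c - {n} for n in c]
        if not any(feasible(child) for child in children):
            base.add(c)  # every single-node failure leads to infeasibility
            continue
        for child in children:
            if feasible(child) and child not in seen:
                seen.add(child)
                queue.append(child)
    return base

# Running example: sensors on A or C, actuators on C or D.
R = {"t1s": frozenset("AC"), "t1c": frozenset("ABCD"), "t1a": frozenset("CD")}
print(base_configs_bfs("ABCD", R))  # the two base configurations
```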

Due to the complexity of the Hasse diagram,
a breadth-first search starting from the full
configuration $\nodeset$ is practical only for 
systems with a relatively small number of nodes.
Let us therefore discuss a practically efficient algorithm that constructs
the set of base configurations $\baseconfigset$ from the mapping
constraint $\mappinglimitation : \taskset_\applicationset
\longrightarrow \powerset{\nodeset}$ directly. Without loss of generality,
we shall assume that the function $\mappinglimitation$ is injective
(i.e., $\mappinglimitation(\task_i) \neq \mappinglimitation(\task_j)$
for $\task_i \neq \task_j$). If this is not the
case, then, for the purpose of finding the set of base configurations,
it is an equivalent problem to study an injective function
$\mappinglimitation^\prime : \taskset_\applicationset^\prime
\longrightarrow \powerset{\nodeset}$ as a mapping constraint, where
$\taskset_\applicationset^\prime \subset
\taskset_\applicationset$. Further, in that case, it is required that,
for each $\task \in \taskset_\applicationset \setminus
\taskset_\applicationset^\prime$, there exists exactly one
$\task^\prime \in \taskset_\applicationset^\prime$ for which
$\mappinglimitation(\task) = \mappinglimitation^\prime(\task^\prime)$.
Finally, in the following discussion, $\taskset_\applicationset^\prime$
and $\mappinglimitation^\prime$ replace $\taskset_\applicationset$ and
$\mappinglimitation$, respectively.


We construct the set of base configurations starting from the
tasks that have the most restrictive mapping constraints. To this
end, let us consider a bijection
\begin{displaymath}
\ordering : \{ 1,\ldots,
\setsize{\taskset_\applicationset}\} \longrightarrow
\taskset_\applicationset,
\end{displaymath}
where
\begin{displaymath}
\setsize{\mappinglimitation(\ordering(k))} \leqslant
\setsize{\mappinglimitation(\ordering(k+1))}
\end{displaymath}
for $1 \leqslant k < \setsize{\taskset_\applicationset}$.  
This order of the tasks is considered during the construction of the set of
base configurations $\baseconfigset$.
The construction is based on a function
\begin{displaymath}
\configconstruct : \{1,\ldots,\setsize{\taskset_\applicationset}\}
\longrightarrow \powerset{\configurationset},
\end{displaymath}
where $\configconstruct(k)$ returns a set of configurations that 
include the base configurations of the system when considering 
the mapping constraints for only tasks $\ordering(1), \ldots, \ordering(k)$.
We shall give a recursive definition of the function $\configconstruct$.
For the base case, we define
\begin{displaymath}
  \configconstruct(1) = \left\{ \{\node\} : \node \in
    \mappinglimitation(\ordering(1)) \right\}.
\end{displaymath}

Before we define $\configconstruct(k)$ for $1 < k \leqslant
\setsize{\taskset_\applicationset}$, let us define a function
\begin{displaymath}
\feasibleconstruct : \configurationset \times
\{1,\ldots,\setsize{\taskset_\applicationset}\} \longrightarrow
\powerset{\configurationset}
\end{displaymath}
as 
\begin{equation}
\feasibleconstruct(\nodeconfig,k) =
\{\nodeconfig\}
\label{faulttolerance:eq:feasibleconstructdef1}
\end{equation}
if $\nodeconfig \cap \mappinglimitation(\ordering(k))
\neq \emptyset$---that is, if configuration $\nodeconfig$ already includes
an allowed computation node for task $\ordering(k)$---and
\begin{equation}
  \feasibleconstruct(\nodeconfig,k) =
  \left\{ \nodeconfig \cup \{\node\} : \node \in \mappinglimitation(\ordering(k)) \right\}
  \label{faulttolerance:eq:feasibleconstructdef2}
\end{equation}
otherwise. If $\nodeconfig$ contains a computation node that task $\ordering(k)$
can execute on, then $\feasibleconstruct(\nodeconfig,k)$ does not add
additional nodes to $\nodeconfig$ (Equation~\ref{faulttolerance:eq:feasibleconstructdef1}). 
If not, however, then
$\feasibleconstruct(\nodeconfig,k)$ extends $\nodeconfig$ in several
directions given by the set of nodes
$\mappinglimitation(\ordering(k))$ that task $\ordering(k)$ may
execute on
(Equation~\ref{faulttolerance:eq:feasibleconstructdef2}).
Now, we define recursively
\begin{displaymath}
  \configconstruct(k) = \bigcup_{\nodeconfig \in \configconstruct(k-1)}
  \feasibleconstruct(\nodeconfig,k)
\end{displaymath}
for $1 < k \leqslant \setsize{\taskset_\applicationset}$. The set
$\configconstruct(k)$ thus comprises configurations for which it is
possible to execute the tasks $\{\ordering(1), \ldots, \ordering(k)\}$
according to the mapping constraints induced by $\mappinglimitation$.

We know by construction that $\baseconfigset \subseteq
\configconstruct(\setsize{\taskset_\applicationset})$. We also know
that $\configconstruct(\setsize{\taskset_\applicationset})$ does not
contain infeasible configurations. A pruning of the set
$\configconstruct(\setsize{\taskset_\applicationset})$ must
be performed to remove the feasible configurations
$\configconstruct(\setsize{\taskset_\applicationset}) \setminus
\baseconfigset$ that are not base configurations.
%The complexity of this pruning is equivalent to sorting~\cite{lewis91}.
This ends our discussion regarding the
identification of the set $\baseconfigset$.
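A Python sketch of this construction follows (same hypothetical dictionary encoding of $\mappinglimitation$ as before; deduplicating identical constraint sets realizes the injectivity assumption, and the final pruning is realized here as removal of candidates that strictly contain other candidates):

```python
def base_configs_constructive(R):
    """Construct the base configurations directly from the mapping
    constraint R (task -> frozenset of allowed nodes)."""
    # Deduplicate identical constraint sets (injectivity assumption) and
    # order them by increasing size: most constrained tasks first.
    constraints = sorted(set(R.values()), key=len)

    # Base case: one singleton configuration per allowed node of the
    # most constrained task.
    configs = {frozenset({n}) for n in constraints[0]}

    # Recursive step: keep a configuration if it already contains an
    # allowed node for the next task; otherwise branch on each allowed node.
    for allowed in constraints[1:]:
        nxt = set()
        for c in configs:
            if c & allowed:
                nxt.add(c)
            else:
                nxt.update(c | {n} for n in allowed)
        configs = nxt

    # Pruning: drop candidates that strictly contain another candidate;
    # the survivors are the base configurations.
    return {c for c in configs if not any(d < c for d in configs)}

# Running example: the result is the two base configurations {A,D} and {C}.
R = {"t1s": frozenset("AC"), "t1c": frozenset("ABCD"), "t1a": frozenset("CD")}
print(base_configs_constructive(R))
```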




\section{Task Mapping for Feasible Configurations} \label{faulttolerance:sec:mapping}
Let us define the \emph{mapping} of the task set
$\taskset_\applicationset$ onto a feasible configuration $\nodeconfig
\in \feasibleconfigset$ as a function \newnot{symbol:configmapping}$\mapping_\nodeconfig :
\taskset_\applicationset \longrightarrow \nodeconfig$. For each task
$\task \in \taskset_\applicationset$, $\mapping_\nodeconfig(\task)$ is
the computation node that executes task $\task$ when the system
configuration is $\nodeconfig$. It is required that the mapping
constraints are considered, meaning that
$\mapping_\nodeconfig(\task) \in \mappinglimitation(\task)$ for each
$\task \in \taskset_\applicationset$.
For a given feasible configuration $\nodeconfig \in \feasibleconfigset$ and
mapping $\mapping_\nodeconfig : \taskset_\applicationset
\longrightarrow \nodeconfig$, we use our integrated control
and scheduling framework for distributed embedded systems in
Chapter~\ref{chap:distributed} to obtain a design solution.
The solution parameters that are synthesized are
the periods and control laws, as well as the execution and communication schedule
of the tasks and messages in the system.
The objective is to minimize the overall control cost\newnot{symbol:overallconfigcost}
\begin{equation}
  \controlcost^\nodeconfig =
  \sum_{i \in \indexset_\plants} \controlcost_i,
  \label{faulttolerance:eq:overallcontrolcost}
\end{equation}
which indicates maximization of the total control quality of the
system, under the consideration that only the nodes in $\nodeconfig$ 
are operational.
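Although the synthesis itself (periods, control laws, and schedules) is beyond a short sketch, the structural requirement on a mapping is easy to state in code. The following Python fragment (with the same hypothetical encoding as before) checks that a candidate mapping respects both the configuration and the mapping constraints:

```python
# Running example constraints: R plays the role of the mapping
# constraint; a mapping assigns each task to one computation node.
R = {"t1s": frozenset("AC"), "t1c": frozenset("ABCD"), "t1a": frozenset("CD")}

def valid_mapping(mapping, config, R):
    """A mapping is valid for config iff every task is placed on a node
    that is both operational (in config) and allowed (in R[task])."""
    return all(
        node in config and node in R[task]
        for task, node in mapping.items()
    )

cfg = frozenset("AD")
print(valid_mapping({"t1s": "A", "t1c": "D", "t1a": "D"}, cfg, R))  # True
print(valid_mapping({"t1s": "A", "t1c": "D", "t1a": "C"}, cfg, R))  # False: C has failed
```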

\cbstart
We have used a genetic algorithm-based approach---similar to the approach
in Section~\ref{distributed:sec:periodoptimization}---to
optimize task mapping~\cite{aminifar11}. 
\cbend
The mapping affects the delay characteristics
indirectly through task and message scheduling. It is thus of great
importance to optimize task mapping, schedules, and control laws to
obtain a customized solution with high control quality in a given
configuration. Thus, for a given $\nodeconfig \in \feasibleconfigset$,
we can find a customized mapping
$\mapping_\nodeconfig : \taskset_\applicationset \longrightarrow
\nodeconfig$ together with a schedule and controllers (periods and
control laws). The mapping is constructed to satisfy the mapping
constraints (i.e., $\mapping_\nodeconfig(\task) \in
\mappinglimitation(\task)$ for each $\task \in
\taskset_\applicationset$) and with the objective to minimize the
total control cost given by Equation~\ref{faulttolerance:eq:overallcontrolcost}. We
denote with $\mem_d^\nodeconfig$\newnot{symbol:nodememconfig}
the amount of memory required on node
$\node_d$ ($d \in \indexset_\nodeset$) to store information related to
the mapping, schedule, periods, and control laws that are customized for
configuration $\nodeconfig$. This memory consumption is given as an
output of the synthesis step; we shall consider this memory consumption
in the context of memory limitations in
Section~\ref{faulttolerance:sec:mappingrealization}.


\section{Minimal Configurations} \label{faulttolerance:sec:minconfig}
By definition, it is not possible to operate the system in infeasible configurations, because
at least one task cannot be executed in such situations.
In this section, we shall discuss the synthesis of mandatory solutions that are 
required to achieve system operation in feasible configurations.
The first approach is to synthesize solutions for each base configuration of the system.
It can be the case, however, that no solution can be found for some base
configurations; the control cost in Equation~\ref{faulttolerance:eq:overallcontrolcost}
is infinite in such cases, indicating that at least one control loop is unstable.
\cbstart
If a solution cannot be found for a certain configuration, this means that
the computation capacity of the platform is insufficient for that configuration.
In such cases, we shall
progressively synthesize solutions for configurations with additional computation
nodes.
\cbend

We first synthesize a solution for each base configuration $\nodeconfig \in \baseconfigset$.
If a solution can be found---the control cost $\controlcost^\nodeconfig$
is finite---then that solution can be used to operate the system in any feasible configuration
$\nodeconfig^\prime \in \feasibleconfigset$ for which $\nodeconfig \subseteq \nodeconfig^\prime$.
If a solution cannot be found for base configuration $\nodeconfig$,
we proceed by synthesizing solutions for configurations with one additional computation node.
This process is repeated, adding one node at a time, until solutions are found.
\cbstart
Let us now outline such an approach.


During the construction of solutions to configurations, 
we shall maintain two sets $\minconfigset$ and $\configsettosynthesize$
with initial values $\minconfigset = \emptyset$ and $\configsettosynthesize = \baseconfigset$.
\cbend
The set $\minconfigset$ shall contain the configurations that have been 
synthesized successfully: Their design solutions have finite control cost and stability is guaranteed.
The set $\configsettosynthesize$ contains configurations that are yet to be synthesized.
The following steps are repeated as long as $\configsettosynthesize \neq \emptyset$.
\begin{enumerate}
\item Select any configuration $\nodeconfig \in \configsettosynthesize$.
\item Synthesize a solution for $\nodeconfig$. 
	This results in the control cost $\controlcost^\nodeconfig$.
\item Remove $\nodeconfig$ from $\configsettosynthesize$ by the update
	\begin{displaymath}
		\configsettosynthesize \algAssignment \configsettosynthesize \setminus \{\nodeconfig\}.
	\end{displaymath}
\item If $\controlcost^\nodeconfig < \infty$, update $\minconfigset$ according to
	\begin{equation}
		\minconfigset \algAssignment
		\minconfigset \cup \{\nodeconfig\},
		\label{faulttolerance:eq:synthesisfinitecost}
	\end{equation}
	otherwise update $\configsettosynthesize$ as
	\begin{equation}
		\configsettosynthesize \algAssignment
		\configsettosynthesize \cup
		\bigcup_{\node \in \nodeset \setminus \nodeconfig} \left\{ \nodeconfig \cup \{\node\} \right\}.
				\label{faulttolerance:eq:synthesisinfinitecost}
	\end{equation}	
\item If $\configsettosynthesize \neq \emptyset$, go back to Step~1.
\end{enumerate}
In the first three steps, configurations can be chosen for synthesis in any order 
from $\configsettosynthesize$. In Step~4, we observe that the set $\configsettosynthesize$
becomes smaller as long as solutions can be synthesized with finite control cost
(Equation~\ref{faulttolerance:eq:synthesisfinitecost}).
If a solution for a certain configuration cannot be synthesized (i.e.,
the synthesis framework returns an infinite control cost, 
indicating an unstable control system), we consider configurations with one additional 
computation node to increase the possibility of finding solutions
(Equation~\ref{faulttolerance:eq:synthesisinfinitecost}).
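The five steps above can be sketched as follows. This is a minimal illustration, not the actual synthesis framework: \texttt{synthesize} is a hypothetical stand-in for the mapping, scheduling, and controller-synthesis step, returning the control cost $\controlcost^\nodeconfig$ (infinite when no stable solution exists); configurations are modeled as frozen sets of node identifiers.

```python
import math

def find_minimal_configurations(nodes, base_configs, synthesize):
    to_synthesize = {frozenset(c) for c in base_configs}   # the set W
    minimal = set()                                        # the set M
    costs = {}
    while to_synthesize:
        config = to_synthesize.pop()                       # Steps 1 and 3
        costs[config] = synthesize(config)                 # Step 2
        if math.isfinite(costs[config]):                   # Step 4
            minimal.add(config)
        else:
            # Retry with one additional node (one candidate per
            # non-member node); skip configurations already tried.
            for node in set(nodes) - config:
                candidate = config | {node}
                if candidate not in costs:
                    to_synthesize.add(candidate)
    return minimal, costs
```

Note that the sketch records every synthesized cost so that a configuration is never synthesized twice, which guarantees termination of the loop.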


The configurations for which solutions could be synthesized form a set of 
\emph{minimal} feasible configurations \newnot{symbol:minconfigset}$\minconfigset$.
\cbstart
The set of minimal configurations $\minconfigset$ is thus defined by Steps~1--5.
\cbend
A configuration $\nodeconfig
\in \minconfigset$ is minimal in the sense that it is either a base configuration or
a feasible configuration with a minimal number of nodes that covers a base configuration
that could not be synthesized due to insufficient computation capacity of the platform.
For each minimal configuration $\nodeconfig \in \minconfigset$, we consider that each node
$\node \in \nodeconfig$ stores all tasks $\task \in \taskset_\applicationset$ for
which $\mapping_\nodeconfig(\task) = \node$; that is, we consider
that tasks are stored permanently on nodes to realize mappings for minimal configurations.
Further, we consider that all information (e.g., periods, control laws, and
schedules) that is needed to switch solutions for minimal configurations
at runtime is stored in the memory of computation nodes.

\cbstart
The set of feasible configurations for which the system is operational with 
our solution is\newnot{symbol:opconfigset}
\begin{equation}
	\opconfigset =
	\bigcup_{\nodeconfig \in \minconfigset} \{ \nodeconfig^\prime \in \feasibleconfigset :
	\nodeconfig \subseteq \nodeconfig^\prime \}
        \label{faulttolerance:eq:opconfig}
\end{equation}
and it includes the minimal configurations $\minconfigset$, as well as
feasible configurations that are covered by a minimal configuration.
The system is not able to operate in the
feasible configurations $\feasibleconfigset \setminus \opconfigset$---this set represents the border
between base and minimal configurations---because of
insufficient computation capacity of the platform.
A direct consequence of the imposed mapping constraints is that the system cannot operate
when it is in any infeasible configuration in $\configurationset \setminus \feasibleconfigset$.
Infeasible configurations, as well as feasible configurations not covered by minimal configurations,
are identified by our approach. To tolerate particular fault scenarios that lead the system to 
configurations in
\begin{displaymath}
\left(
	\configurationset \setminus \feasibleconfigset
\right)
\cup
\left(
	\feasibleconfigset \setminus \opconfigset
\right),
\end{displaymath}
the problem of insufficient computation capacity has to be solved by
considering complementary fault-tolerance techniques (e.g., hardware replication).
The system remains operational in all other configurations $\opconfigset$ by using the
solutions generated for minimal configurations.
\cbend
As a special case, we have
$\minconfigset = \baseconfigset$ if solutions to all base configurations could be synthesized.
In that case, we have $\opconfigset = \feasibleconfigset$, meaning that the system is operational
in all feasible configurations.
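The set $\opconfigset$ in Equation~\ref{faulttolerance:eq:opconfig} can be enumerated directly from $\minconfigset$ and $\feasibleconfigset$, as in the following sketch (configurations again modeled as frozen sets of node identifiers; the function name is illustrative):

```python
def operational_configurations(minimal, feasible):
    # O = all feasible configurations covered by (i.e., supersets of)
    # some minimal configuration (Equation opconfig).
    return {cfg for cfg in feasible if any(m <= cfg for m in minimal)}
```

Here the subset test $\nodeconfig \subseteq \nodeconfig^\prime$ maps onto Python's \texttt{<=} operator on frozen sets.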


\section{Motivational Example for Optimization} 
\label{faulttolerance:sec:dse}
\cbstart
The synthesis of a set of minimal configurations $\minconfigset$ in the previous section
results in a solution that covers all fault scenarios that lead the system
to a configuration in $\opconfigset$ (Equation~\ref{faulttolerance:eq:opconfig}).
The synthesis of minimal configurations provides not only fault tolerance
for the configurations $\opconfigset$ but also a minimum level of control quality.
Considering that all solutions for minimal configurations are realized by storing information
in the memory of the platform, we shall in this section motivate and formulate an optimization
problem for control-quality improvements, relative to the minimum quality provided by 
minimal configurations.
\cbend


\cbstart 
Let us resume our example in
Section~\ref{faulttolerance:sec:classificationexample} by considering
synthesis of configurations in addition to the minimal
configurations.  We have considered three control applications for
three inverted pendulums (i.e., $n = 3$;
see \Figref{faulttolerance:fig:systemexample}). 
We shall find that such
optimizations can lead to better control quality than a system that
only uses the mandatory design solutions for minimal configurations.
\cbend

\subsection{Improved Solutions for Feasible Configurations}
Let us consider the set of base configurations
\begin{displaymath}
  \baseconfigset = \{
  \{\node_A, \node_D\},
  \{\node_C\}
  \}.
\end{displaymath}
Considering that solutions for the two base configurations have been
synthesized, and that these solutions have finite control costs, we
note that the set of minimal configurations is $\minconfigset =
\baseconfigset$. We thus have $\opconfigset = \feasibleconfigset$,
meaning that the system can operate in any feasible configuration with
the solutions for minimal configurations.  Let us also consider that a
customized solution (mapping, schedule, and controllers) has been
synthesized for the configuration in which all nodes are operational.
This solution exploits the full computation capacity of the platform
to achieve as high control quality as possible.  
Note that all feasible configurations can be handled with solutions for 
the two base configurations (Figure~\ref{faulttolerance:fig:partialhassediag}).

We shall now improve
control quality by additional synthesis of configurations. Towards
this, we have synthesized solutions for the two minimal
configurations, as well as configuration
$\{\node_A,\node_B,\node_C\}$. 
Table~\ref{faulttolerance:tab:configurationcosts} shows the
obtained control costs defined by
Equation~\ref{faulttolerance:eq:overallcontrolcost}.
%
\begin{table}
  \centering          
  \caption[Control costs for several configurations]{Control costs for
    several configurations. The first two entries indicate a minimum
    level of control quality given by the two minimal
    configurations. Control quality is improved (cost is reduced) for
    configurations with additional operational nodes.}
  \label{faulttolerance:tab:configurationcosts}
  \centering  
  \begin{tabular}{cc}
    \hline
    Configuration $\nodeconfig$ & Control cost $\controlcost^\nodeconfig$ \\
    \hline
    $\{\node_A,\node_D\}$ & 5.2 \\
    $\{\node_C\}$ & 7.4 \\
    \hline
    $\{\node_A,\node_B,\node_C,\node_D\}$ & 3.1 \\
    $\{\node_A,\node_B,\node_C\}$ & 4.3 \\
    \hline
  \end{tabular}
\end{table}
%
Had a solution for $\{\node_A,\node_B,\node_C\}$ not been
generated, the system could only run in that configuration with the
solution for the minimal configuration $\{\node_C\}$, at a cost
of~7.4. By generating a customized solution, however, we achieve
better control quality in that configuration, reflected by the
obtained cost of~4.3---a cost improvement of~3.1.
By synthesizing additional feasible configurations, we can obtain additional
control-quality improvements---however, at the expense of the total synthesis time
of all solutions. The particular selection of additional configurations
to synthesize solutions for is based on the allowed synthesis time,
the failure probabilities of the nodes in the system, and
the potential improvement in control quality relative to the minimum level
provided by the minimal configurations. We shall elaborate on this
selection in more detail in Section~\ref{faulttolerance:sec:heuristic}.


\begin{table}
  \caption[Task mapping for two configurations and three control
    applications] {Task mapping for two configurations and three
    control applications. Each row shows tasks that run on a certain
    node in a given configuration.}
  \label{faulttolerance:tab:taskmappings}
  \centering  
%  \begin{tabular}{@{}c@{}c@{}c@{}c@{}c@{}}
  \begin{tabular}{ccccc}
    \hline
    $\nodeconfig$ & $\node_A$ & $\node_B$ & $\node_C$ & $\node_D$  \\ 
    \hline
    $\{\node_A,\node_B,\node_C\}$ & $\task_{1s}, \task_{2s}, \task_{3s}$ & $\task_{1c}, \task_{2c}, \task_{3c}$ & $\task_{1a}, \task_{2a}, \task_{3a}$ & -- \\ \\
    
%   $\{\node_A,\node_B,\node_C,\node_D\}$ \\
%	\nodeset \\

	& &  & $\task_{1s}, \task_{1c}, \task_{1a},$ \\
	$\{ \node_C \}$ & -- & -- & $\task_{2s}, \task_{2c}, \task_{2a},$ & -- \\
	& &  & $\task_{3s}, \task_{3c}, \task_{3a}$\\
 
    \hline
  \end{tabular}
\end{table}

\subsection{Mapping Realization}
Once a solution for a configuration---not a minimal
configuration---has been synthesized, it must be verified whether it
is possible for the system to adapt to this solution at runtime. 
\cbstart
Thus, for the additional mapping of configuration
$\{\node_A,\node_B,\node_C\}$ in our example, we must check whether
the mapping can be realized if the system is in configuration $\{
\node_A, \node_B, \node_C, \node_D \}$ and node $\node_D$ fails. 
\cbend
In Table~\ref{faulttolerance:tab:taskmappings}, we show the mapping for
this configuration, as well as the mapping of its corresponding
minimal configuration $\{\node_C\}$.  For the minimal configurations,
we consider that the tasks are stored on the corresponding computation
nodes. For example, the tasks in the column for $\node_C$,
corresponding to the minimal configuration, are stored on node
$\node_C$.  Let us consider the mapping of the tasks to the
configuration $\{\node_A,\node_B,\node_C\}$. We note that all tasks
that are needed to realize the mapping for node $\node_C$ are already
stored on that node.  Nodes $\node_A$ and $\node_B$, however, do not
store the tasks that are needed to realize the mapping for
configuration $\{\node_A,\node_B,\node_C\}$.  When switching to the 
solution for this configuration---from configuration $\{ \node_A, \node_B, \node_C, 
\node_D\}$---the tasks for nodes $\node_A$ and $\node_B$ need to be
migrated from node $\node_C$.  
\cbstart
Note that it is always possible to
migrate tasks from nodes in a minimal configuration: Because any
feasible configuration in $\opconfigset$ is covered by a minimal configuration, which
realizes its mapping by storing tasks in memory of the operational
nodes, there is always at least one operational node that stores a
certain task for a given feasible configuration.  

During task migration, the program state does not need to be
transferred (because of the feedback mechanism of control applications,
the state is automatically restored
when task migration has completed).  
\cbend
The migration time cannot exceed specified bounds, in order to
guarantee stability. Hence, if the migration time for tasks
$\task_{1s}$, $\task_{2s}$, $\task_{3s}$, $\task_{1c}$, $\task_{2c}$,
and $\task_{3c}$ satisfies the specified bound, the system can realize
the solution for configuration $\{\node_A,\node_B,\node_C\}$ at
runtime.

If the time required to migrate the necessary
tasks at runtime exceeds the given bounds, then the solution for
the minimal configuration
$\{\node_C\}$ is used at runtime with control cost~7.4.
In that case, the operational nodes $\node_A$ and $\node_B$
are not utilized.
Alternatively, more memory can be
used to store additional tasks on nodes $\node_A$ and $\node_B$,
in order to realize the mapping at runtime with reduced task
migration or none at all. In this way,
we avoid excessive migration time and can realize the
mapping, although at the cost of a larger required memory space,
to achieve the better control cost of~4.3 in configuration
$\{\node_A,\node_B,\node_C\}$.
In the following section, we present
a formal statement of the design-space exploration
problem for control-quality optimization.
Thereafter, in Section~\ref{faulttolerance:sec:heuristic}, we
present an optimization approach that synthesizes selected
configurations and considers the trade-off between control quality,
memory cost, and synthesis time.


\section{Problem Formulation} \label{faulttolerance:sec:probform}
Given is a distributed platform with computation nodes $\nodeset$, a
set of plants $\plants$, and their control applications
$\applicationset$.  We consider that a task mapping
$\mapping_\nodeconfig : \taskset_\applicationset \longrightarrow
\nodeconfig$, as well as corresponding schedules and controllers, 
have been generated for each minimal configuration
$\nodeconfig \in \minconfigset$ as discussed in
Section~\ref{faulttolerance:sec:minconfig}.  We consider that tasks are
stored permanently on appropriate computation nodes to realize the
task mappings for the minimal configurations (i.e., no task migration is
needed at runtime to adapt to solutions for minimal
configurations). Thus, to realize the mappings for minimal
configurations, each task $\task \in
\taskset_\applicationset$ is stored on nodes
\begin{displaymath}
\bigcup_{\nodeconfig \in
  \minconfigset} \{\mapping_\nodeconfig(\task)\}.
\end{displaymath}
\cbstart
The set of tasks that are stored on node $\node_d \in \nodeset$ is
\begin{equation}
  \storedtasks{d} = \bigcup_{\nodeconfig \in \minconfigset}
  \left\{
  \task \in \taskset_\applicationset : \mapping_\nodeconfig(\task) = \node_d
  \right\}.
  \label{faulttolerance:eq:storednodetasks}
\end{equation}\newnot{symbol:storedtasks}%
\cbend
In addition, the inputs specific to the optimization step discussed in
this section are
\begin{itemize}
\item the time $\migrationtime(\task)$\newnot{symbol:taskmigrationtime}
required to migrate task
$\task$ from a node to any other node in the platform;
\item the maximum amount of migration time $\migrationtimemax_i$ for plant $\plant_i$
      (this constraint is based on the maximum amount of time that a
      plant $\plant_i$ can stay in open loop without leading to
      instability~\cite{tabuada07tac} or degradation of control
      quality below a specified threshold, as well as the actual time
      to detect faults~\cite{kopetz97,korenFTbook});
\item the memory space $\mem_d(\task)$\newnot{symbol:memtasknode}
	required to store task $\task
    \in \taskset_\applicationset$ on node $\node_d$ ($d \in
    \indexset_\nodeset$);
\item the additional available memory $\maxmemory_d$ of each node
    $\node_d$ in the platform (note that this does not include
    the memory consumed for the minimal configurations, as these are
    mandatory to implement and sufficient dedicated memory is assumed
    to be provided); and
\cbstart
\item the failure probability $\failureprob(\node)$\newnot{symbol:failureprob}
per time unit for each node $\node \in \nodeset$.
\cbend
\end{itemize}
The failure probability $\failureprob(\node)$ depends on the mean time
to failure (MTTF) of the computation node. The MTTF is determined by the
technology of the production process, the ambient temperature of the
components, and voltage or physical shocks that the components may
suffer in the operational environment of the
system~\cite{korenFTbook}. 

The decision variables of the optimization
problem are a subset of 
configurations \newnot{symbol:mappedconfigset}$\mappedconfigset \subseteq
\opconfigset \setminus \minconfigset$ and a mapping
$\mapping_\nodeconfig$, schedule, and controllers for each $\nodeconfig \in \mappedconfigset$.
Thus, in addition to the minimal configurations, we generate mappings for the
other feasible configurations $\mappedconfigset$. We require that
$\nodeset \in \mappedconfigset$, which means that it is mandatory to
generate solutions for the case when all nodes in the system are
operational.




%%%%%%%%%%%%%%%%%
% Cost function %
%%%%%%%%%%%%%%%%%
Let us now define the cost that characterizes the overall control
quality of the system in any feasible configuration based on the
solutions (mappings, schedules, and controllers) for the selected set
of configurations. We shall associate a cost
$\controlcost^\nodeconfig$ for each feasible configuration
$\nodeconfig \in \opconfigset$.  If $\nodeconfig \in \minconfigset
\cup \mappedconfigset$, a customized mapping for that configuration
has been generated with a cost $\controlcost^\nodeconfig$ given by
Equation~\ref{faulttolerance:eq:overallcontrolcost}. If $\nodeconfig
\notin \minconfigset \cup \mappedconfigset$ and $\nodeconfig \in
\opconfigset$, then at runtime the system uses the mapping of a
configuration $\nodeconfig^\prime$ for which $\nodeconfig^\prime \in
\minconfigset \cup \mappedconfigset$ and $\nodeconfig^\prime \subset
\nodeconfig$. It is guaranteed that such a configuration
$\nodeconfig^\prime$ can be found in the set of minimal configurations
$\minconfigset$ (Equation~\ref{faulttolerance:eq:opconfig}).  If such
a configuration is also included in $\mappedconfigset$, then the
control quality is better than in the corresponding minimal
configuration because of better utilization of the operational
computation nodes. Thus, for the case $\nodeconfig \in \opconfigset
\setminus (\minconfigset \cup \mappedconfigset)$, the cost of the
feasible configuration $\nodeconfig$ is
\begin{equation}
  \controlcost^\nodeconfig =
  \min_{
    \begin{array}{c}
      \nodeconfig^\prime \in \minconfigset \cup \mappedconfigset\\
      \nodeconfig^\prime \subset \nodeconfig
    \end{array}
  }
  \controlcost^{\nodeconfig^\prime},
  \label{faulttolerance:eq:inheritedcost}
\end{equation}
which means that the best functionally correct solution---in
terms of control quality---is used to operate the system in
configuration $\nodeconfig$. The cost to minimize when selecting the
set of additional feasible configurations $\mappedconfigset \subseteq
\opconfigset \setminus \minconfigset \setminus \{\nodeset\}$ to
synthesize is defined as
\begin{equation}
\label{faulttolerance:eq:dseCost}
  \controlcost = 
  \sum_{\nodeconfig \in \opconfigset \setminus \minconfigset \setminus \{\nodeset\}}
  \failureprob^\nodeconfig \controlcost^\nodeconfig,
\end{equation}
where $\failureprob^\nodeconfig$\newnot{symbol:failureprobconfig} is
the probability of node failures that lead the system to configuration
$\nodeconfig$
\cbstart
(we shall discuss the computation of this probability in 
Equation~\ref{faulttolerance:eq:failureprobability} on 
page~\pageref{faulttolerance:eq:failureprobability}).
\cbend
Towards this, we shall consider the given failure
probability $\failureprob(\node)$ of each computation node $\node \in
\nodeset$. 

The cost in Equation~\ref{faulttolerance:eq:dseCost}
characterizes the control quality of the system as a function of the
additional feasible configurations for which solutions have been
synthesized. If solutions are available only for the set of minimal
configurations, the system tolerates all node failures that lead the
system to a configuration in $\opconfigset$---however, at a large cost
$\controlcost$ in Equation~\ref{faulttolerance:eq:dseCost}. This is
because other feasible configurations operate at runtime with
solutions of minimal configurations. 
\cbstart
In those situations, not all
operational computation nodes are utilized, at the cost of reduced
overall control quality. 
\cbend
By synthesizing solutions for additional
feasible configurations in $\opconfigset \setminus \minconfigset
\setminus \{\nodeset\}$, the cost in
Equation~\ref{faulttolerance:eq:dseCost} is reduced (i.e., the overall
control quality is improved) due to the cost reduction in the terms
related to the selected set of configurations.
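The cost structure of Equations~\ref{faulttolerance:eq:inheritedcost} and~\ref{faulttolerance:eq:dseCost} can be sketched as follows. The function names and dictionary layout are illustrative: \texttt{synthesized} maps each synthesized configuration (in $\minconfigset \cup \mappedconfigset$) to its cost $\controlcost^\nodeconfig$, and \texttt{prob} holds the reach probabilities $\failureprob^\nodeconfig$.

```python
def inherited_cost(config, synthesized):
    # Best cost among synthesized strict subsets of `config`
    # (Equation inheritedcost); `<` on frozensets is strict subset.
    return min(cost for cfg, cost in synthesized.items() if cfg < config)

def expected_cost(op_configs, minimal, all_nodes, synthesized, prob):
    # Probability-weighted cost over O \ M \ {N} (Equation dseCost).
    total = 0.0
    for cfg in op_configs:
        if cfg == all_nodes or cfg in minimal:
            continue
        cost = synthesized.get(cfg)
        if cost is None:                     # no customized solution:
            cost = inherited_cost(cfg, synthesized)  # inherit the best one
        total += prob[cfg] * cost
    return total
```

With the costs of Table~\ref{faulttolerance:tab:configurationcosts}, a configuration such as $\{\node_A,\node_C\}$ inherits cost~7.4 from $\{\node_C\}$, whereas $\{\node_A,\node_B,\node_C\}$ contributes its customized cost~4.3.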



\section{Optimization Approach} \label{faulttolerance:sec:heuristic}
\noindent
Figure~\ref{faulttolerance:fig:overview} shows an overview of our
proposed design approach. The first component, which we discussed in
Sections~\ref{faulttolerance:sec:baseidentification}--\ref{faulttolerance:sec:minconfig},
is the identification of base configurations and synthesis of minimal
configurations (labeled as ``fault-tolerant design'' in the
figure). The second component (labeled as ``optimization'') comprises
the exploration and synthesis of additional configurations, as well as
the mapping-realization step that considers the constraints related to
task migration and memory space. This second component is the topic of
this section and is our proposed solution to the problem formulation
in Section~\ref{faulttolerance:sec:probform}. The selection and
synthesis of additional feasible configurations is described in
Section~\ref{faulttolerance:sec:exploration}. For each synthesized
feasible configuration, it must be checked whether the solution can be
realized with regard to the memory consumption in the platform and the
amount of task migration required at runtime. Memory and migration
trade-offs, as well as memory-space and migration-time constraints,
are presented in Section~\ref{faulttolerance:sec:mappingrealization}.


\cbstart
\begin{figure}[!t]
  \centering
%  \includegraphics[width=0.2\textwidth,angle=90]{figures/overview}
  \includegraphics[width=0.9\textwidth]{faulttolerance/figures/overview_v2}
  \caption[Overview of the design framework] {Overview of the design
    framework.  The first step is to construct solutions for a set
    of minimal configurations, which is based on the identification of
    base configurations, to achieve fault-tolerance and a minimum
    control-quality level. In the second step, the system is further
    optimized for additional configurations.}
  \label{faulttolerance:fig:overview}
\end{figure}
\cbend

\subsection{Exploration of the Set of Configurations} \label{faulttolerance:sec:exploration}
\noindent
Our optimization heuristic aims to minimize the cost in
Equation~\ref{faulttolerance:eq:dseCost} and is based on a
priority-based search of the Hasse diagram of configurations. The
priorities are computed iteratively as a step of the optimization
process based on probabilities for the system to reach the different configurations.
The heuristic belongs to the class of anytime algorithms, meaning that
it can be stopped at any point in time and return a feasible solution.
This is because the minimal configurations have already been
synthesized and fault tolerance is achieved. The overall quality of
the system is improved as more optimization time is invested.


Initially, as a mandatory step, we synthesize a mapping for the
configuration $\nodeset$, in order to support the execution of the
control system for the case when all computation nodes are
operational. During the exploration process, a priority queue with
configurations is maintained. Whenever a mapping $\mapping_\nodeconfig
: \taskset_{\applicationset} \longrightarrow \nodeconfig$ has been
synthesized for a certain feasible configuration $\nodeconfig \in
\opconfigset$ (note that $\nodeset$ is the first synthesized
configuration), each feasible configuration $\nodeconfig^\prime
\subset \nodeconfig$ with $\setsize{\nodeconfig^\prime} =
\setsize{\nodeconfig} - 1$ is added to the priority queue with
priority equal to the probability
\begin{equation}
\failureprob^{\nodeconfig^\prime} = \failureprob^\nodeconfig
\failureprob(\node),
\label{faulttolerance:eq:failureprobability}
\end{equation}
where $\{\node\} = \nodeconfig \setminus
\nodeconfig^\prime$. For the initial configuration $\nodeset$, 
we consider $\failureprob^\nodeset = 1$.
%If two configurations have the same probability, we give priority to
%the one that has the worse control quality in the
%corresponding minimal configuration.
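The priority-based exploration described above can be sketched with a standard binary heap. This is an illustrative skeleton only: \texttt{synthesize} and \texttt{realize} are hypothetical stand-ins for the synthesis and mapping-realization steps, and \texttt{budget} caps the number of configurations synthesized (the anytime property: stopping early still leaves a fault-tolerant system).

```python
import heapq
import itertools

def explore_configurations(all_nodes, feasible, fail_prob,
                           synthesize, realize, budget):
    # Priority-based Hasse-diagram search: configurations are popped in
    # order of decreasing reach probability. heapq is a min-heap, so we
    # push the negated probability; the counter breaks ties.
    start = frozenset(all_nodes)
    tick = itertools.count()
    heap = [(-1.0, next(tick), start)]
    prob = {start: 1.0}                     # rho^N = 1
    mapped = set()
    while heap and budget > 0:
        _, _, cfg = heapq.heappop(heap)
        solution = synthesize(cfg)          # mapping, schedule, controllers
        if realize(cfg, solution):          # memory/migration check
            mapped.add(cfg)
        budget -= 1
        for node in cfg:                    # children with one node fewer
            child = cfg - {node}
            if child in feasible and child not in prob:
                # Equation failureprobability: rho^{C'} = rho^C * rho(n)
                prob[child] = prob[cfg] * fail_prob[node]
                heapq.heappush(heap, (-prob[child], next(tick), child))
    return mapped, prob
```

Note that children are enqueued whenever a mapping has been synthesized for their parent, independently of whether the parent's mapping realization succeeded, matching the exploration described in the text.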

Subsequently, for configuration $\nodeconfig$, we check whether it is
possible to realize the generated mapping $\mapping_\nodeconfig :
\taskset_{\applicationset} \longrightarrow \nodeconfig$ at runtime
with task migration and the available additional memory to store
tasks. This step is described in detail in the next subsection
(Section~\ref{faulttolerance:sec:mappingrealization}). If this step
succeeds, it means that the mapping can be realized at runtime and we
thus add $\nodeconfig$ to the set $\mappedconfigset$ (this set is
initially empty). Further in that case, for each node $\node_d$, the
set of tasks $\storedtasks{d}$ stored on $\node_d$ and the amount of
additional consumed memory $\mem_d$ is updated.
The set of tasks $\storedtasks{d}$ that are stored on node $\node_d$
is initialized according to Equation~\ref{faulttolerance:eq:storednodetasks}. 
\cbstart
If the mapping realization does not succeed, the generated solution for
configuration $\nodeconfig$ is excluded.
This means that a solution for a minimal
configuration must be used at runtime to operate the system in the
feasible configuration $\nodeconfig$. 
\cbend
Independently of whether the
mapping realization of $\nodeconfig$ succeeds, the exploration
continues by generating a solution for the next configuration in the
maintained priority queue of configurations. The exploration
terminates when the additional memory space on all computation nodes has
been consumed, or when a specified design time has passed (e.g., the
designer stops the exploration process). Let us now discuss the
mapping-realization step that deals with the memory and migration-time
constraints for a given solution of a configuration.


\subsection{Mapping Realization} \label{faulttolerance:sec:mappingrealization}
\noindent
In the previous subsection
(Section~\ref{faulttolerance:sec:exploration}), we proposed a search
order to explore and synthesize solutions for other feasible configurations
than the minimal configurations.  For each configuration
$\nodeconfig \in \opconfigset \setminus \minconfigset$ that is
considered in the exploration process, a mapping $\mapping_\nodeconfig
: \taskset_\applicationset \longrightarrow \nodeconfig$ is constructed
(along with customized schedules and controllers).  We shall in the
remainder of this section focus on whether and how this mapping can be
realized at runtime in case the system reaches configuration
$\nodeconfig$.  We first check whether there is sufficient memory to
store information related to the solution (mapping, schedules, and
controllers) for the configuration. The required memory for this
information is denoted $\mem_{d}^{\nodeconfig}$ and is an output of
the mapping and synthesis step for configuration $\nodeconfig$
(Section~\ref{faulttolerance:sec:mapping}). Let us denote with
$\mem_d$ the amount of additional memory that is already consumed on
$\node_d$ for other configurations in $\mappedconfigset \subset
\opconfigset \setminus \minconfigset$.  If
\begin{displaymath}
\mem_d + \mem_d^\nodeconfig > \maxmemory_d
\end{displaymath}
for some $d \in \indexset_\nodeset$,
it means that the information related to the mapping, schedules, and controllers for
configuration $\nodeconfig$ cannot be stored on the
computation platform. For such cases, we declare that the mapping
$\mapping_\nodeconfig$ cannot be realized (recall, however, that
solutions for minimal configurations
can be used to operate the system in configuration $\nodeconfig$).

If the solution for $\nodeconfig$ can be stored within the given
memory limit, we check whether migration of tasks that are needed to
realize the mapping can be done within the maximum allowed migration
time\newnot{symbol:migrationtimemax}
\begin{displaymath}
\migrationtimemax =
\min_{i \in \indexset_\plants} \migrationtimemax_i.
\end{displaymath}
If the migration-time constraint cannot be met, we reduce the
migration time below the threshold $\migrationtimemax$ by storing
tasks in the memory of computation nodes 
\cbstart
(this memory consumption is
separate from the memory space needed 
to store tasks for the realization of minimal configurations).
\cbend
The main idea is to store as few tasks as possible to satisfy the
migration-time constraint.  Towards this, let us consider the set of
tasks
\begin{displaymath}
  \migratedtasks_d(\nodeconfig) = \left\{
  \task \in \taskset_\applicationset \setminus \storedtasks{d} :
  \mapping_\nodeconfig(\task) = \node_d
  \right\}
\end{displaymath}
that need to be migrated to node $\node_d$ at runtime in order to
realize the mapping $\mapping_\nodeconfig$, given that
$\storedtasks{d}$ is the set of tasks that are already stored on node
$\node_d$.
The objective is to find a set of tasks $\selectedtaskstostore_d
\subseteq \migratedtasks_d(\nodeconfig)$ to store on each node $\node_d
\in \nodeset$ such that the memory consumption is minimized and 
the maximum allowed migration time is considered. We formulate this
problem as an integer linear program (ILP) by introducing a binary
variable $\taskboolean_d^\task$ for each node $\node_{d} \in \nodeset$
and each task $\task \in \migratedtasks_d(\nodeconfig)$. Task
$\task \in \migratedtasks_d(\nodeconfig)$ is stored on $\node_d$ if
$\taskboolean_d^\task = 1$, and migrated if $\taskboolean_d^\task =
0$. The memory constraint is thus formulated as
\begin{equation}
	\mem_d + \mem_d^\nodeconfig + \sum_{\task \in \migratedtasks_d(\nodeconfig)} \taskboolean_d^\task\mem_d(\task)
	\leqslant \maxmemory_d,
	 \label{faulttolerance:eq:memconstr}
\end{equation}
which models that the memory consumption $\mem_d^\nodeconfig$ of
the solution, together with
the memory needed to store the selected tasks, does not exceed the memory limit.
The migration-time constraint is formulated similarly as
\begin{equation}
	\sum_{d \in \indexset_\nodeset} \left( \sum_{\task \in \migratedtasks_d(\nodeconfig)} 
	(1 -\taskboolean_d^\task )\migrationtime(\task) \right)
        \leqslant \migrationtimemax.
	\label{faulttolerance:eq:migrationconstr}
\end{equation}
The memory cost to minimize in the selection of
$\selectedtaskstostore_d \subseteq \migratedtasks_d(\nodeconfig)$ is
given by
\begin{equation}
	 \sum_{d \in \indexset_\nodeset} \left( \sum_{\task \in  \migratedtasks_d(\nodeconfig)} \taskboolean_d^\task\mem_d(\task) \right).
	\label{faulttolerance:eq:realizationobjective}
\end{equation}

If a solution to the ILP formulation cannot be found, then the mapping
cannot be realized.  If a solution is found, we have
\begin{displaymath}
\selectedtaskstostore_d = \{ \task \in \migratedtasks_d(\nodeconfig)
: \taskboolean_d^\task = 1\}
\end{displaymath}
and we update the set $\storedtasks{d}$
and the memory consumption $\mem_d$, respectively, according to
\begin{displaymath}
\storedtasks{d} \algAssignment \storedtasks{d} \cup
\selectedtaskstostore_d
\end{displaymath}
and
\begin{displaymath}
	\mem_d \algAssignment
	\mem_d + \mem_d^\nodeconfig + \sum_{\task \in \selectedtaskstostore_d} \mem_d(\task).
\end{displaymath}
Even for large systems, the ILP given by
Equations~\ref{faulttolerance:eq:realizationobjective},
\ref{faulttolerance:eq:memconstr},
and~\ref{faulttolerance:eq:migrationconstr} can be solved optimally
and efficiently with modern solvers. We have used the \texttt{eplex}
library for ILP in \eclipse~\cite{clpbook}, and it incurred
negligible time overhead---less than one second---in our experiments.
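For small instances, the ILP can even be solved by exhaustive enumeration of the binary variables $\taskboolean_d^\task$, as in the following sketch (all names are illustrative; this is not the \texttt{eplex}-based implementation used in the experiments):

```python
from itertools import product

def realize_mapping(migrate, mem_task, mem_used, mem_sol, mem_max,
                    mig_time, mig_max):
    # Brute-force sketch of the mapping-realization ILP.
    #   migrate[d]     : set M_d of tasks that must reach node d
    #   mem_task[d][t] : memory to store task t on node d
    #   mem_used[d], mem_sol[d], mem_max[d] : memory data per node
    #   mig_time[t]    : migration time of task t; mig_max : bound
    # Returns {d: tasks to store on d} or None if infeasible.
    variables = [(d, t) for d in migrate for t in sorted(migrate[d])]
    best, best_cost = None, float('inf')
    for bits in product((0, 1), repeat=len(variables)):
        store = {d: set() for d in migrate}
        for (d, t), b in zip(variables, bits):
            if b:                            # b_d^t = 1: store t on d
                store[d].add(t)
        # Per-node memory constraint (Equation memconstr).
        if any(mem_used[d] + mem_sol[d]
               + sum(mem_task[d][t] for t in store[d]) > mem_max[d]
               for d in migrate):
            continue
        # Global migration-time constraint (Equation migrationconstr).
        if sum(mig_time[t] for d in migrate
               for t in migrate[d] - store[d]) > mig_max:
            continue
        # Memory cost to minimize (Equation realizationobjective).
        cost = sum(mem_task[d][t] for d in migrate for t in store[d])
        if cost < best_cost:
            best, best_cost = store, cost
    return best
```

Enumerating all $2^{|\text{variables}|}$ assignments is of course exponential; it is shown only to make the constraint structure concrete, whereas an ILP solver handles realistic instance sizes efficiently.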




\section{Experimental Results} \label{faulttolerance:sec:experiments}
\noindent
We have conducted experiments to evaluate our proposed design
framework. We constructed a set of test cases with inverted
pendulums, ball and beam processes, DC servos, and harmonic
oscillators~\cite{astrom97}. The test cases vary in size between~5
and~9 computation nodes with~4 to~6 control applications.
All experiments were performed on a PC with a
quad-core CPU at 2.2~GHz, 8~GB of RAM, and running Linux.


As a baseline for comparison, we considered a straightforward design
approach in which we synthesize solutions for all minimal
configurations and the initial configuration $\nodeset$. This
constitutes the mandatory set of solutions to achieve fault tolerance
in any feasible configuration, together with an optimized solution for
the case when all nodes are operational.  We computed a cost
$\controlcost^\textrm{min}$ according to
Equation~\ref{faulttolerance:eq:dseCost}, considering that solutions
have been synthesized only for the minimal configurations and the
initial configuration, and that every other feasible configuration
runs with the solution of its corresponding minimal configuration, at
the minimum level of control quality given by
Equation~\ref{faulttolerance:eq:inheritedcost}.  The cost
$\controlcost^\textrm{min}$ thus indicates the overall control quality
of the fault-tolerant control system when only the mandatory solutions
have been synthesized.

Subsequently, we ran our optimization heuristic to select additional
configurations for synthesis. For each feasible configuration that is
synthesized, the corresponding cost terms in
Equation~\ref{faulttolerance:eq:dseCost} decrease (control quality
improves compared to what the minimal configurations provide).  The
optimization phase was conducted for varying amounts of design time,
and the total cost in Equation~\ref{faulttolerance:eq:dseCost} was
updated after each additional configuration was synthesized.
Recalling that a small control cost indicates high control quality,
and vice versa, we are interested in the control-cost improvement
\begin{displaymath}
\frac{\controlcost^\textrm{min} - \controlcost}{\controlcost^\textrm{min}}
\end{displaymath}
relative to the control cost
$\controlcost^\textrm{min}$ that is obtained when only considering the
mandatory configurations.
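This improvement metric is straightforward to compute; the sketch below uses illustrative cost values (not taken from the chapter's experiments) purely to show the arithmetic.

```python
def relative_improvement(j_min, j):
    """Relative control-cost improvement of cost j over the
    mandatory-only baseline cost j_min; smaller cost means higher
    control quality, so a positive value is an improvement."""
    return (j_min - j) / j_min

# Hypothetical example: a baseline cost of 40 reduced to 28
# corresponds to a 30 percent improvement.
print(relative_improvement(40.0, 28.0))
```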


\begin{figure}
  \centering
  \includegraphics[width=\textwidth]{faulttolerance/plot/results}
  \caption[Relative cost improvements and runtimes of the proposed design approach]
  {Relative cost improvements and runtimes of the proposed design approach.
  The synthesis time related to zero improvement corresponds to the construction of
  solutions for the mandatory minimal configurations and the configuration in which all nodes
  are operational. Additional design time for other feasible configurations leads 
  to improved control quality.}
  \label{faulttolerance:fig:results}
\end{figure}

Figure~\ref{faulttolerance:fig:results} shows the design time on the horizontal axis
and the corresponding relative improvement on the vertical axis.
The design time corresponding to zero improvement refers to the
mandatory design phase: the identification and synthesis of minimal
configurations. This mandatory phase takes only around 10~minutes, yet
it is sufficient to cover all fault scenarios and to provide a minimum
level of control quality in any feasible configuration.
Any additional design time that is invested improves control quality
beyond the already synthesized fault-tolerant solution.
\cbstart
For example, we achieve an improvement of around 30~percent after only
20~minutes for the systems with~5 and~7 computation nodes.
\cbend
We did not run the heuristic for the 5-node case beyond 23~minutes,
because by that time it had already synthesized all feasible
configurations. For the other cases, the problem size was too large to
afford an exhaustive exploration of all configurations. Note that the
quality improvement diminishes at large design times, where the
heuristic typically evaluates and optimizes control quality for
configurations with many failed nodes. These quality improvements do
not contribute significantly to the overall quality
(Equation~\ref{faulttolerance:eq:dseCost}), because the probability of many nodes
failing is very small (Equation~\ref{faulttolerance:eq:failureprobability}).
We conclude that the designer can stop the optimization process
when the improvement at each step is no longer considered significant.
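The diminishing returns can be made concrete with a small probability sketch. Assuming independent node failures with a per-node failure probability of~0.01 (an illustrative value, not taken from the chapter), the probability weight of a configuration with several failed nodes is orders of magnitude smaller than that of a configuration with a single failed node, so optimizing the former barely moves the overall cost.

```python
def config_probability(operational, all_nodes, p_fail=0.01):
    """Probability that exactly the nodes in `operational` are up,
    assuming independent node failures with probability p_fail.
    The value 0.01 is purely illustrative."""
    p = 1.0
    for n in all_nodes:
        p *= (1 - p_fail) if n in operational else p_fail
    return p

nodes = {1, 2, 3, 4, 5}
# One failed node versus three failed nodes: the weights differ by
# roughly four orders of magnitude.
print(config_probability({1, 2, 3, 4}, nodes))  # node 5 failed
print(config_probability({1, 2}, nodes))        # nodes 3, 4, 5 failed
```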



\section{Summary and Discussion} \label{faulttolerance:sec:conclusion}
\noindent
We proposed a design framework for distributed embedded control
applications with support for execution even if some computation nodes
in the system fail. We presented an algorithm to identify base
configurations and construct mappings for minimal configurations of the
distributed system to achieve fault-tolerant operation. To improve the
overall control quality relative to the minimum level of quality
provided by the minimal configurations, we construct additional design
solutions efficiently.  

The system can adapt to situations in
which nodes have failed by switching to an appropriate solution that
has been synthesized at design time. Task replication and migration
are mechanisms that are used to implement remapping of control
tasks.  In this way, the system adapts to different configurations as
a response to failed components. These mechanisms and the solutions
prepared by our framework are sufficient to operate the system in case
computation nodes fail. The alternative to this software-based
approach is hardware replication, which can be very costly in some
application domains; for example, many applications in the automotive
domain are highly cost constrained.

We note that our framework is not restricted to control applications,
but can be adapted to other application domains in which distributed
platforms are used and fault tolerance is required.  In
particular, our idea of base and minimal configurations is general and
may be applied to any application area. The information regarding base and
minimal configurations also serves as an indication to the designer
regarding those computation nodes that are of particular importance.
Hardware replication of the nodes in minimal
configurations reduces the probability of reaching infeasible
configurations, or configurations that are not covered by
minimal configurations, while all other fault scenarios
are handled with the less costly software-based approach
presented in this chapter. The design optimization problem is relevant for other
application domains for which performance metrics exist and depend on
the available computing and communication resources.
