\section{System Model} \label{sec:systemmodel}
\noindent
We first introduce the background related to feedback-control
applications. Thereafter, we discuss the underlying distributed
platform and define the different configurations that may arise at
runtime as a result of faults.

\subsection{Feedback-control applications} \noindent
This subsection is divided into three parts. First, we discuss the model
of the physical processes that are to be controlled. Second, we
present the structure of control applications that are supported by
our framework. Last, we discuss the metric for control quality and its
dependence on implementation-related factors.

\subsubsection{Plant model}
\noindent
We are given a set of plants $\plants$, indexed by
$\indexset_\plants$, where each plant $\plant_i \in \plants$ ($i
\in \indexset_\plants$) is modeled as a continuous-time linear
system~\cite{astrom97}. Specifically, the dynamical behavior of a
plant $\plant_i$ is given by a system of linear differential equations
\begin{equation}
  \timederivative{\plantstate}_i(t) = A_i \plantstate_i(t) + B_i \plantinput_i(t) + \disturbance_i(t),
  \label{eq:plantdynamics}
\end{equation}
where the vector functions of time $\plantstate_i$ and~$\plantinput_i$
are the plant state and controlled input, respectively, and the
vector~$\disturbance_i$ models plant disturbance as a white-noise stochastic process.
The matrices $A_i$
and $B_i$ model how the plant state evolves in time depending on the
current plant state and provided control input, respectively.  The
measured plant outputs are modeled as
\begin{equation}  
  \plantoutput_i(t) = C_i \plantstate_i(t) + \measnoise_i(t),
  \label{eq:plantoutputs}
\end{equation}
where $\measnoise_i$ is an additive measurement noise.  The
continuous-time output~$\plantoutput_i$ is measured and sampled
periodically and is an input to the computation and update of the control
signal~$\plantinput_i$. 
The control law that describes the mapping from $\plantoutput_{i}$
to $\plantinput_{i}$ is a design parameter.
The matrix $C_i$, which is often diagonal,
thus indicates which plant states can be measured by the available
physical sensors ($C_i$ is the identity matrix if all plant states can
be measured). The control signal is actuated at discrete time instants
and is held constant between two updates by a hold circuit in the
actuator~\cite{astrom97}.

\begin{figure}
	\centering
	\includegraphics[width=0.32\textwidth]{figures/systemexample}
	\caption{ (a) A set of feedback-control applications running on a (b) distributed execution platform.
	Several periodic tasks execute on the computation nodes to read sensors, compute control signals,
	and write to actuators.}
	\label{fig:systemexample}
\end{figure}

As an example of a system with two plants,
let us consider a set of two inverted pendulums~$\plants = \{\plant_1,
\plant_2\}$. Each pendulum $\plant_i$ ($i \in \indexset_\plants =
\{1,2\}$) is modeled according to Equations~\ref{eq:plantdynamics}
and~\ref{eq:plantoutputs}, with
%
$A_i = {\left[ \begin{array}{cc}
    0 & 1\\
    g/l_i & 0
  \end{array} \right]}$,
$B_i = \transpose{\left[ \begin{array}{cc}
    0 &
    g/l_i
  \end{array} \right]}$, and
$C_i = \left[ \begin{array}{cc}
    1 & 0 \end{array} \right]$,
%
where $g \approx 9.81$~$\textrm{m}/\textrm{s}^2$ and~$l_i$ are the
acceleration due to gravity and the length of pendulum~$\plant_i$,
respectively ($l_1 = 0.2$~m and~$l_2 = 0.1$~m). The two states are the
pendulum position and speed, respectively. The inverted pendulum appears
often in the literature as an example of an unstable process to be controlled.
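To make the dynamics concrete, the open-loop behavior of the first pendulum can be simulated with a simple forward-Euler discretization. This is an illustrative sketch only: the step size is arbitrary, and the disturbance and measurement noise are omitted.

```python
import numpy as np

# Pendulum P_1 from the example: the states are position and speed.
g, l = 9.81, 0.2          # acceleration due to gravity, pendulum length (m)
A = np.array([[0.0, 1.0],
              [g / l, 0.0]])
B = np.array([[0.0],
              [g / l]])
C = np.array([[1.0, 0.0]])  # only the position is measured

def simulate(x0, u, dt=1e-3, steps=1000):
    """Forward-Euler sketch of dx/dt = A x + B u (disturbance omitted)."""
    x = np.array(x0, dtype=float).reshape(2, 1)
    for _ in range(steps):
        x = x + dt * (A @ x + B * u)
    return x

# With zero input the open-loop plant is unstable: a small initial
# displacement grows rapidly over one simulated second.
x_final = simulate([0.01, 0.0], u=0.0)
y_final = C @ x_final       # measured output (position)
```

The unbounded growth under zero input is what motivates closing the loop in the first place.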


\subsubsection{Application model}
\noindent
For each plant $\plant_i$, we have a control application
$\application_i = (\taskset_i,\msgset_i)$ that implements a
feedback-control loop and is modeled as a directed acyclic graph in
which the vertices $\taskset_i$ represent computation tasks and the
edges $\msgset_i \subseteq \taskset_i \times \taskset_i$ represent
messages and data dependencies between tasks.
Let us denote the set of
control applications by $\applicationset$ and index it with the index
set $\indexset_\plants$ of $\plants$. 
Thus, for each $i \in \indexset_\plants$, the pair $\application_{i}$ and $\plant_{i}$
form a closed-loop control system.
We also introduce the set of
all tasks in the system as 
\begin{displaymath}
\taskset_\applicationset = \bigcup_{i \in
  \indexset_\plants} \taskset_i.
\end{displaymath}
Tasks are released for execution periodically. The period of each task
is a design parameter, decided mainly based on the dynamics
of the controlled plant, the available computation and communication
bandwidth, and trade-offs with the period of other
applications. Figure~\ref{fig:systemexample}(a) shows a set of control
loops comprising $n$ plants $\plants$ with index set
$\indexset_\plants = \{1,\ldots,n\}$ and, for each plant $\plant_i$, a
control application $\application_i$ with three tasks $\taskset_i =
\{\task_{i1}, \task_{i2}, \task_{i3}\}$.  The edges indicate the data
dependencies between tasks, as well as communication between sensors
and actuators.
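For concreteness, one such application graph $\application_i = (\taskset_i,\msgset_i)$ can be encoded as a set of tasks and a set of directed edges. This is a sketch; the task names follow the running example, but the encoding itself is not part of the framework.

```python
# One control application A_i as a directed acyclic graph:
# vertices are tasks, edges are messages / data dependencies.
tasks = {"tau_i1", "tau_i2", "tau_i3"}     # T_i: e.g. sense, compute, actuate
messages = {("tau_i1", "tau_i2"),          # gamma_i is a subset of T_i x T_i
            ("tau_i2", "tau_i3")}

def predecessors(task, msgs):
    """Tasks whose output the given task depends on."""
    return {src for (src, dst) in msgs if dst == task}
```

A scheduler would use such predecessor sets to enforce that a task runs only after the messages it depends on have arrived.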


\subsubsection{Control quality}
\noindent
Considering one of the controlled plants $\plant_i$ in isolation, the
goal is to control the plant states in the presence of the additive
plant disturbance $\disturbance_i$ and measurement error
$\measnoise_i$. We use quadratic control costs~\cite{astrom97} as a
quality and performance metric for control applications. This includes
a cost for the error in the plant state and the cost of changing the
control signals (the latter cost can be related to the amount of
energy spent by the actuators).  Specifically, the quality of a
controller for plant $\plant_i$ is given by the quadratic cost
\begin{equation}
  \controlcost_i = \lim_{T \rightarrow \infty} \frac{1}{T}
  \expectedvalue{
    \int_0^T \transpose{\left[
      \begin{array}{c}\plantstate_i\\\plantinput_i\end{array}\right]} Q_i 
    \left[ \begin{array}{c}\plantstate_i\\\plantinput_i\end{array}\right] dt
  }.
  \label{eq:controlcost}
\end{equation}
A small cost indicates high control quality, and vice versa.
The weight matrix $Q_i$ is used to model weights
of individual components of the state and input vectors, as well as
to indicate the importance relative to other control applications
in the system. An infinite cost indicates an 
unstable closed-loop system.

The cost in Equation~\ref{eq:controlcost} is a common quality metric
in the control-systems literature~\cite{astrom97}. The control cost
is a function of the sampling period of the controller, the control
law, and the characteristics of the delay between sampling and
actuation. This delay is complex and is induced by the schedule and
mapping of the tasks on the distributed
platform~\cite{bini08,blind}. We use the Jitterbug
toolbox~\cite{cervincontroltiming} to compute the control cost
$\controlcost_i$ by providing as inputs the controller and the
characteristics of the sampling--actuation delay. 
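As a rough illustration of the metric (not a substitute for the Jitterbug computation), the time-average quadratic cost of a sampled trajectory can be approximated by a Riemann sum. The trajectory and weight values below are placeholders.

```python
import numpy as np

def quadratic_cost(xs, us, Q, dt):
    """Approximate (1/T) * integral of [x; u]^T Q [x; u] dt for one
    sampled trajectory.

    xs, us: sampled state and input trajectories; Q: positive
    semi-definite weight matrix; dt: sampling interval.
    A small value indicates high control quality.
    """
    T = len(xs) * dt
    total = 0.0
    for x, u in zip(xs, us):
        z = np.concatenate([np.atleast_1d(x), np.atleast_1d(u)])
        total += float(z @ Q @ z) * dt
    return total / T

# Toy check: a constant state/input pair with identity weights, so the
# time average equals the squared norm of the stacked vector [x; u].
cost = quadratic_cost(xs=[np.array([1.0, 0.0])] * 100,
                      us=[np.array([2.0])] * 100,
                      Q=np.eye(3), dt=0.01)
```

In the paper's setting the expectation over the noise processes and the sampling--actuation delay distribution is what Jitterbug evaluates; the sketch above only shows the shape of the integrand.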

\subsection{Distributed platform}
\noindent
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Distributed execution platform
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
The distributed execution platform, on which the control applications
run, comprises a set of computation nodes~$\nodeset$, indexed by
$\indexset_\nodeset$, which are connected to a bus. For the platform
in Figure~\ref{fig:systemexample}(b), we have $\nodeset = \{ \node_A,
\node_B, \node_C, \node_D \}$ ($\indexset_\nodeset = \{A,B,C,D\}$).
We are given a function $\mappinglimitation :
\taskset_\applicationset \longrightarrow \powerset{\nodeset}$ that,
for each task $\task \in \taskset_\applicationset$ in the system,
gives the set of computation nodes $\mappinglimitation(\task)
\subseteq \nodeset$ that task $\task$ can be mapped to.  For example,
tasks that read sensors or write to actuators can only be mapped to
computation nodes that provide input--output interfaces to the needed
sensors and actuators.  Also, some tasks may require specialized
instructions or hardware accelerators that are available only on some
nodes in the platform.  In Figure~\ref{fig:systemexample},
$\task_{11}$ and $\task_{13}$ may be mapped to the nodes indicated by
the dotted line.  Thus, we have $\mappinglimitation(\task_{11}) =
\{\node_A, \node_C\}$ and $\mappinglimitation(\task_{13}) = \{\node_C,
\node_D\}$.  Task $\task_{12}$, on the other hand, can be mapped to any
of the four nodes in the platform (i.e., $\mappinglimitation(\task_{12}) =
\nodeset$). For each task $\task \in \taskset_\applicationset$ and
each computation node $\node \in \mappinglimitation(\task)$, we
consider the best-case and worst-case execution times of task $\task$
on node $\node$ to be given.
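The mapping restriction $\mappinglimitation$ from the example can be encoded and checked as follows. This is a sketch: the node and task names follow the running example, and the feasibility check is an assumed helper, not part of the framework.

```python
# Mapping restriction mu for the example in the figure: each task maps
# to the set of nodes it may run on.
nodes = {"A", "B", "C", "D"}
mu = {
    "tau_11": {"A", "C"},   # sensing task: needs a node with sensor I/O
    "tau_12": nodes,        # pure computation: any node
    "tau_13": {"C", "D"},   # actuating task: needs actuator I/O
}

def feasible(mapping, mu):
    """A candidate task-to-node mapping is feasible if every task is
    placed on one of its allowed nodes."""
    return all(node in mu[task] for task, node in mapping.items())
```

Any mapping produced at design time (or after reconfiguration) must pass such a check before execution-time analysis is meaningful.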

\begin{figure}
  \centering
  \includegraphics[width=0.42\textwidth]{figures/configurationdiagram}
  \caption{Hasse diagram of configurations for a system with four nodes. The set of possible configurations due to faults is partially ordered under the subset relation.}
  \label{fig:configurationdiagram}
\end{figure}

At any moment in time, the system has a set of computation nodes
$\nodeconfig \subseteq \nodeset$ that are available.  The remaining
nodes $\nodeset \setminus \nodeconfig$ have failed and are not
available for computation. We shall refer to $\nodeconfig$ as a
\emph{configuration} of the distributed platform. The complete set of
configurations is the power set $\configurationset =
\powerset{\nodeset}$ of $\nodeset$ and is a partially ordered set
under the subset relation. The partial order of configurations is
shown in Figure~\ref{fig:configurationdiagram} as a Hasse diagram of
configurations for our example with four computation nodes in
Figure~\ref{fig:systemexample} (note that we have excluded the empty
configuration $\emptyset$ because it is of no interest to consider the
scenario where all nodes have failed). For example, the configuration
$\{\node_A, \node_B, \node_C\}$ indicates that $\node_D$ has failed
and only the other three nodes are available for computation.  In a
typical system, it is highly unlikely that a very large number of
nodes will become unavailable due to faults. To model realistic
settings, we assume that the designer specifies the minimum number of
nodes $\minimumavailablenodes$ that are available for execution at any
point in time. Thus, at most $\setsize{\nodeset} -
\minimumavailablenodes$ nodes have failed and are unavailable at the
same time\footnote{If the designer does not specify the minimum number
  of available nodes, our formulation reduces to the special case
  $\minimumavailablenodes = 1$.}. We assume that the
communication protocol of the system ensures fault-tolerance for
messages~\cite{navet05,kopetz97}.
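The set of configurations considered at design time can be enumerated directly from $\nodeset$ and $\minimumavailablenodes$. The sketch below instantiates the four-node example with at least two operational nodes.

```python
from itertools import combinations

def configurations(nodes, r_min):
    """All non-empty subsets of nodes with at least r_min operational
    nodes -- the part of the Hasse diagram the design must cover."""
    nodes = sorted(nodes)
    return [set(c)
            for k in range(r_min, len(nodes) + 1)
            for c in combinations(nodes, k)]

# Four nodes, at least two operational:
# C(4,2) + C(4,3) + C(4,4) = 6 + 4 + 1 = 11 configurations.
confs = configurations({"A", "B", "C", "D"}, r_min=2)
```

Raising $\minimumavailablenodes$ prunes the lower levels of the Hasse diagram and thus reduces the number of configurations for which schedules and controllers must be synthesized.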


We consider that the computing platform implements appropriate
mechanisms for fault detection. The failure of a node must be detected
and all remaining operational nodes must know about such
failures~\cite{kopetz97}. Similarly, when a node has been repaired,
this is detected by the other nodes in the system, so that each
operational node knows the current configuration.  Other
approaches rely on failure prediction by observing deviations and
anomalies in the system behavior and performance~\cite{williams07}.
Adaptation due to failed or repaired nodes involves switching
schedules and control algorithms that are optimized for the available
resources in the new configuration (Section~\ref{sec:heuristic}). This
information is stored in the nodes of the
platform~\cite{srivastava05,kopetz97}. Another phase during system
reconfiguration is task migration~\cite{Lee10}, which takes place when
tasks running on failed nodes must be activated on other
nodes in the system. Each node stores information about the tasks it
must migrate over the bus to other nodes when the system adapts to a
new configuration. This information is generated at
design time (Section~\ref{sec:mappingrealization}).
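A minimal sketch of the configuration-switch step, assuming the per-configuration schedules computed at design time are stored in a lookup table keyed by the set of available nodes. The table contents and names are placeholders.

```python
# Hypothetical design-time table: one precomputed schedule (and set of
# control laws) per configuration, keyed by the available-node set.
schedule_table = {
    frozenset({"A", "B", "C", "D"}): "schedule_full",
    frozenset({"A", "B", "C"}):      "schedule_no_D",
}

def adapt(available_nodes, table):
    """On a detected node failure or repair, switch to the schedule
    stored for the newly detected configuration."""
    return table[frozenset(available_nodes)]
```

Because every operational node knows the current configuration, each node can perform this lookup locally and switch consistently with the others.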
