\section{Per-flow server graph}\label{sec:SSG}


\begin{figure*}[!ht]
\centering
\includegraphics[width=.80\linewidth]{ssg2.png}
\caption{SSG $\SSG_{i,1}$ obtained by running Algorithm~\ref{algo:createSSG} with input $\ExecutionFlow_{i,1}$.}
\label{fig:ssg}
\end{figure*}

For each execution flow $\ExecutionFlow_{i,j}$ of every task $\Task_i$, we derive a synchronous DAG of servers, referred to as a \emph{synchronous server graph} (SSG) and denoted by $\SSG_{i,j}$. Formally, we define an SSG as follows:

\begin{Definition}[SSG] 
\label{def:ssg}
A Synchronous Server Graph is a synchronous DAG of nodes (here, the nodes are servers) organized as a set $\{ \Segment_1, \Segment_2, \ldots, \Segment_{r} \}$ of $r$ segments. Each segment $\Segment_{\ell}$ with $\ell \in [1, r]$ is characterized by a pair $\left\langle \Budget_\ell, \NbServersInSegment_\ell \right\rangle$, where $\NbServersInSegment_\ell$ is the number of servers in $\Segment_\ell$ and $\Budget_\ell$ is the cpu-budget associated with each of these $\NbServersInSegment_\ell$ servers. Directed edges exist only between nodes of adjacent segments. Specifically, every node within a segment is connected to every node of the next segment (if any). 
\end{Definition}
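To fix ideas, note that an SSG admits a very compact representation: since the edges always form a complete bipartite graph between adjacent segments, the ordered list of $\left\langle \Budget_\ell, \NbServersInSegment_\ell \right\rangle$ pairs fully determines the graph. A minimal sketch (the concrete values below are hypothetical):

```python
# Minimal sketch of an SSG per the definition above: the graph is fully
# determined by its ordered list of <budget, number-of-servers> pairs,
# because every server of a segment is connected to every server of the
# next segment. The values are hypothetical.
ssg = [(3, 1), (2, 4), (5, 2)]  # r = 3 segments

# Total cpu-budget the SSG can supply over one activation.
total_budget = sum(theta * n for theta, n in ssg)
print(total_budget)  # 3*1 + 2*4 + 5*2 = 21
```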

Informally, the purpose of the method developed in this section is to represent each execution flow of a given task $\Task_i$ as a synchronous DAG of servers such that, when $\Task_i$ takes one of its execution flows $\ExecutionFlow_{i,j}$ at run-time, the corresponding SSG $\SSG_{i,j}$ provides the budget required to finish the execution of all the sub-tasks of $\ExecutionFlow_{i,j}$ without violating any precedence constraint. This is an \emph{intermediate} step in our approach; in the next section we develop a second step (built on this first one) that assigns a single synchronous DAG to \emph{each task}, rather than one SSG per flow. 


The mechanism to handle these per-flow SSGs at run-time works as follows: each segment of an SSG is a collection of servers whose budget is used exclusively to execute the ready sub-tasks of the execution flow from which the SSG has been derived. The servers are the entities that compete for, and are scheduled on, the $m$ cores by the scheduling algorithm of the operating system. Each time a server is granted a core, its budget is used to execute a ready sub-task. Each time a job of a task is released (and thus executes one of its execution flows), the first segment of the corresponding SSG ``releases'' all its servers, in the sense that they become ready to provide budget to the sub-tasks of that particular flow. Then, each subsequent segment releases all its servers only after all the servers of the previous segment have exhausted their budgets. That is, servers belonging to a segment $\Segment_\ell$ are allowed to provide cpu-budget to the sub-tasks of the dedicated execution flow only when all the servers of segment $\Segment_{\ell-1}$ have exhausted their budget. Since at some point in time we may have several ready-to-execute sub-tasks from the same execution flow and several servers in the corresponding SSG that are ready to provide budget, there must be a mapping rule defining which sub-task is granted budget from which server. 
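The segment-by-segment release policy just described can be sketched as follows. This is an idealized simulation under the assumption that all servers of a segment execute fully in parallel (i.e., a segment never has more servers than available cores); the function name is illustrative.

```python
# Idealized sketch of the run-time release mechanism: a segment releases
# its servers only once every server of the previous segment has exhausted
# its budget. Assuming all servers of a segment run in parallel, each
# segment takes exactly its budget to drain.
def release_times(ssg):
    """Return, for each segment of `ssg`, the time at which it releases
    its servers; `ssg` is a list of <budget, num_servers> pairs."""
    times, t = [], 0
    for budget, _num_servers in ssg:
        times.append(t)   # released when the previous segment is drained
        t += budget       # all servers of the segment exhaust their budget
    return times

print(release_times([(1, 1), (1, 2), (2, 1)]))  # [0, 1, 2]
```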



First, we state the generic conditions that assert the \emph{validity} of an SSG with respect to a given execution flow (Property~\ref{prt:validity}). Then, we define a simple yet efficient mapping rule, used throughout the paper, to arbitrate the assignment of ready sub-tasks to servers. From that point onward, every reference to a valid SSG implies that the mapping rule given by Definition~\ref{def:rule} is enforced. Finally, we present an algorithm to construct an SSG for each execution flow and prove its correctness.



\begin{Property}[Validity]
For a platform $\Platform$, a scheduling algorithm $\Scheduler$, a mapping rule $\MappingRule$, and an execution flow $\ExecutionFlow_{i,j}$ of a task $\Task_i$, an SSG $\SSG_{i,j}$ is said to be valid for $\ExecutionFlow_{i,j}$ according to $\MappingRule$ if and only if for any schedule of the servers of $\SSG_{i,j}$ produced by $\Scheduler$ on $\Platform$, at run-time all the nodes of $\ExecutionFlow_{i,j}$ are guaranteed to be mapped by $\MappingRule$ to the server nodes of $\SSG_{i,j}$ in such a way that (1) all the dependencies between the nodes of $\ExecutionFlow_{i,j}$ are satisfied, and (2) $\ExecutionFlow_{i,j}$ receives the required budget to execute all its nodes. 
\label{prt:validity}
\end{Property}

\begin{Definition}[Mapping rule]
\label{def:rule}
Let $\ExecutionFlow_{i,j}$ be an execution flow of a task $\Task_i$ and let $\SSG_{i,j}$ be the corresponding SSG constructed by Algorithm~\ref{algo:createSSG}. A server $\Server_{\ell,x} \in \Segment_\ell \subseteq \SSG_{i,j}$, with $x \in [1, \NbServersInSegment_\ell]$, can execute a ready sub-task $\SubTask_k \in \SetofSubTasks_{i,j}$ if and only if $\SubTask_k$ has not already been executed by another server $\Server_{\ell,y} \in \Segment_\ell$ with $y \neq x$.
\end{Definition}
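As an illustration, the rule can be enforced with a simple bookkeeping table that records which server of a segment first picked up each sub-task. This is only a sketch; all names are hypothetical.

```python
# Sketch of the mapping rule: within a segment, a sub-task may receive
# budget from at most one server. A table mapping (segment, sub-task) to
# the server that first picked it up is enough to enforce the rule.
executed_by = {}

def can_execute(segment, server, subtask):
    """True iff `server` may execute `subtask` in `segment` under the rule."""
    owner = executed_by.get((segment, subtask))
    return owner is None or owner == server

def execute(segment, server, subtask):
    assert can_execute(segment, server, subtask)
    executed_by[(segment, subtask)] = server

execute(1, "s_1_1", "v2")                 # server s_{1,1} picks up v2
print(can_execute(1, "s_1_1", "v2"))      # True: same server may continue v2
print(can_execute(1, "s_1_2", "v2"))      # False: another server of S_1
print(can_execute(2, "s_2_1", "v2"))      # True: a server of a later segment
```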

\begin{algorithm}[h]
\footnotesize
\SetKwInOut{Input}{Input}\SetKwInOut{Output}{Output}
   \Input{$\ExecutionFlow_{i,j}$ - An execution flow of task $\Task_i$}
   \Output{$\SSG_{i,j}$ - An SSG for $\ExecutionFlow_{i,j}$}
	 $\SSG_{i,j} \leftarrow \emptyset$ \;
	 %$\Scurr \leftarrow \emptyset$ \;
	 \While{$\SetofSubTasks_{i,j} \neq \emptyset$} {
		 $\Scurr \leftarrow \left\{ \SubTask_k \in \SetofSubTasks_{i,j} | \Predecessor(\SubTask_k) = \emptyset \right\}$\;
		 $\Cmin \leftarrow \min\left\{\WCET_k | \SubTask_k \in \Scurr\right\}$ \;
		 $\SSG_{i,j} \leftarrow \SSG_{i,j} \otimes \left\{\left\langle \Cmin, \left|\Scurr\right|\right\rangle\right\}$ \;
		 \ForEach{$\SubTask_k \in \Scurr$} {
		   	$\WCET_k \leftarrow \WCET_k - \Cmin$\;
			 \If{$\WCET_k = 0$} {
			   $\SetofSubTasks_{i,j} \leftarrow \SetofSubTasks_{i,j} \setminus \left\{\SubTask_k\right\}$ \;
			   $\SetofEdges_{i,j} \leftarrow \SetofEdges_{i,j} \setminus \left\{ (\SubTask_k, *)\right\}$ \;
			 }
		 }
	 }
	 \Return $\SSG_{i,j}$\;
\caption{generateSSG($\ExecutionFlow_{i,j}$)}
\label{algo:createSSG}
\normalsize
\end{algorithm}
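A direct transcription of Algorithm~\ref{algo:createSSG} in Python may help to fix ideas. It is a sketch under the assumption that the flow is given as a map from sub-tasks to WCETs plus a set of precedence edges; the toy flow and all names are illustrative, and the line comments refer to the lines of Algorithm~\ref{algo:createSSG}.

```python
# Sketch of generateSSG: the flow is a dict of remaining WCETs plus a set
# of precedence edges (pred, succ); the SSG is returned as a list of
# <budget, number-of-servers> segments.
def generate_ssg(wcet, edges):
    wcet, edges = dict(wcet), set(edges)    # work on copies
    ssg = []
    while wcet:                                          # line 2
        # ready sub-tasks: those with no remaining predecessor   (line 3)
        ready = [v for v in wcet if all(succ != v for _, succ in edges)]
        c_min = min(wcet[v] for v in ready)              # line 4
        ssg.append((c_min, len(ready)))                  # line 5
        for v in ready:                                  # line 6
            wcet[v] -= c_min                             # line 7
            if wcet[v] == 0:                             # line 8
                del wcet[v]                              # line 9
                edges = {e for e in edges if e[0] != v}  # line 10
    return ssg

# A toy flow: v1 -> {v2, v3} -> v4, with WCETs 1, 2, 1 and 1.
flow_wcet = {"v1": 1, "v2": 2, "v3": 1, "v4": 1}
flow_edges = {("v1", "v2"), ("v1", "v3"), ("v2", "v4"), ("v3", "v4")}
print(generate_ssg(flow_wcet, flow_edges))
# [(1, 1), (1, 2), (1, 1), (1, 1)] -- total budget 5 = flow workload
```

Note how the second segment holds two unit-budget servers (both v2 and v3 are ready once v1 completes), and a third segment finishes the remaining unit of v2 before v4 becomes ready.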



The pseudo-code of the SSG creation algorithm is shown in Algorithm~\ref{algo:createSSG}, whereas Fig.~\ref{fig:ssg} depicts the resulting SSG\footnote{Coincidentally, in this example all segments of the SSG have unitary budgets, although this is not the case in general.} for the execution flow $\ExecutionFlow_{i,1}$ illustrated in Fig.~\ref{fig:model}. The algorithm takes an execution flow $\ExecutionFlow_{i,j}$ of task $\Task_i$ as input and outputs an SSG $\SSG_{i,j}$ for that flow. It traverses the DAG $\ExecutionFlow_{i,j}$ starting at its unique entry node (first iteration at line~3). At each iteration of the while loop, the algorithm appends a new segment to $\SSG_{i,j}$ (line~5). The addition is represented by the operator $\otimes \left\langle \Budget, \NbServersInSegment \right\rangle$, which appends a segment of $\NbServersInSegment$ servers, each with a budget of $\Budget$. This new segment has as many servers as there are sub-tasks with no predecessor in $\SetofSubTasks_{i,j}$ (i.e., ready sub-tasks), so the number of servers per segment matches the number of sub-tasks that are guaranteed to be ready at that point at run-time. Each of these servers is assigned a budget equal to the minimum remaining execution requirement among these sub-tasks (computed at line~4). The algorithm then updates the DAG $\ExecutionFlow_{i,j}$ by ``simulating'' the execution of its sub-tasks within the created servers, i.e., the execution time of each sub-task with no predecessor is decreased by $\Cmin$ time units (line~7), thus reflecting its execution within the dedicated server. Sub-tasks whose execution requirement reaches zero are removed from the input DAG $\ExecutionFlow_{i,j}$, together with their outgoing edges (lines~8--10). Algorithm~\ref{algo:createSSG} is guaranteed to terminate since $\SetofSubTasks_{i,j}$ eventually becomes empty. 

We now prove that the SSG output by Algorithm~\ref{algo:createSSG} is always valid (see Property~\ref{prt:validity}) for its input execution flow.

\begin{Lemma}
\label{lem:workload}
$\Workload(\SSG_{i,j}) = \Workload(\ExecutionFlow_{i,j})$
\end{Lemma}
\begin{proof}
At each iteration of the $\mathrm{while}$ loop in Algorithm~\ref{algo:createSSG}, $\left|\Scurr\right| \times \Cmin$ units of workload are added to $\SSG_{i,j}$ (line~5), and the same amount is then iteratively subtracted from $\ExecutionFlow_{i,j}$ (lines~6 and~7). From this and the termination condition of the $\mathrm{while}$ loop, the claim trivially holds.
\end{proof}

\begin{Theorem}
\label{theo:SSG_validity}
The SSG $\SSG_{i,j}$, obtained by running Algorithm~\ref{algo:createSSG} with input $\ExecutionFlow_{i,j}$, is valid for the execution flow $\ExecutionFlow_{i,j}$.
\end{Theorem}
\begin{proof}
According to the validity property, we need to show that (a) all the dependencies in $\ExecutionFlow_{i,j}$ are preserved and (b) $\ExecutionFlow_{i,j}$ is provided the required budget to finish the execution of all its sub-tasks. 

\textbf{Proof of (a):} All the precedence constraints in $\ExecutionFlow_{i,j}$ are preserved by construction, since servers are created only for those sub-tasks that are ready to execute (lines~3--5 in Algorithm~\ref{algo:createSSG}), and the mapping rule ensures that no sub-task is assigned to more than one server within the same segment.

\textbf{Proof of (b):} Let us recall the run-time management mechanism of an SSG: all the servers within a segment become ``ready'' to provide budget only when all the servers of the previous segment have exhausted their budgets. Given this run-time mechanism, we prove by induction on the number of segments that no budget provided by the servers is wasted, i.e., all the servers of each segment use their entire budget to execute ready-to-execute sub-tasks of $\ExecutionFlow_{i,j}$. Therefore, since no budget is wasted and the total amount of budget provided by the SSG equals the workload of $\ExecutionFlow_{i,j}$ (from Lemma~\ref{lem:workload}), the claim holds. The detailed proof follows.

\noindent\textbf{Base case.} In the first iteration of the $\mathrm{while}$ loop, there is only one sub-task with no predecessor (recall that there is only one entry point to any execution flow) and thus only one server is created and added to $\SSG_{i,j}$ at line~5. This server has a budget $\Cmin$ equal to the WCET $\WCET_k$ of that sub-task (line~4), and at run-time this single server provides budget to that single sub-task as soon as it is released, i.e., when $\ExecutionFlow_{i,j}$ is taken for execution. Hence, this first sub-task executes entirely within the budget of that first server and no budget is wasted in this first segment. In addition, the algorithm ``simulates'' the completion of this first sub-task by removing it from $\ExecutionFlow_{i,j}$ at lines~9 and~10, implying that at the next iteration $\SetofSubTasks_{i,j}$ contains only the sub-tasks that have not completed yet.

\noindent\textbf{Inductive step.} Assume that, at run-time, the $\ell$-th segment has just released all its servers and no budget has been wasted by the servers of any previous segment. Also (as mentioned above), at the $\ell$-th iteration of the $\mathrm{while}$ loop, $\SetofSubTasks_{i,j}$ contains only the sub-tasks that have not completed yet, and $\Scurr$ therefore contains the set of all uncompleted sub-tasks that are ready-to-execute at the release of the servers of the $\ell$-th segment. As seen at line~5, Algorithm~\ref{algo:createSSG} creates in segment $\Segment_\ell$ as many servers as there are ready sub-tasks, i.e., $\left|\Scurr\right|$ servers. Each of these $\left|\Scurr\right|$ servers is assigned a budget of $\Cmin$, which corresponds to the minimum remaining WCET over all the ready sub-tasks. At run-time, the mapping rule guarantees that each of the $\left|\Scurr\right|$ ready sub-tasks is allocated to one (and only one) of the $\left|\Scurr\right|$ servers, and they all execute for $\Cmin$ time units, which is ``simulated'' at lines~6 and~7 of Algorithm~\ref{algo:createSSG}. Here again, no budget is wasted in the $\ell$-th segment and, since the sub-tasks that complete at the end of this segment are removed from $\ExecutionFlow_{i,j}$ at lines~8--10, at the next iteration of the $\mathrm{while}$ loop $\SetofSubTasks_{i,j}$ will once more contain only the uncompleted sub-tasks.

The algorithm terminates when $\SetofSubTasks_{i,j}$ is empty, which means that there are no more uncompleted sub-tasks. In every segment, no budget has been wasted and, since $\Workload(\SSG_{i,j}) = \Workload(\ExecutionFlow_{i,j})$ by Lemma~\ref{lem:workload}, all the sub-tasks of $\ExecutionFlow_{i,j}$ have been executed entirely. Therefore, $\SSG_{i,j}$ is valid for $\ExecutionFlow_{i,j}$ under the mapping rule $\MappingRule$ of Definition~\ref{def:rule}.
\end{proof}

Note that, upon applying Algorithm~\ref{algo:createSSG} to each execution flow of a task $\Task_i$, we obtain a set of SSGs for that task, where each SSG is defined and proven valid for one of $\Task_i$'s execution flows. With that, we now describe how to construct a single synchronous DAG of servers for each \emph{task} that accommodates all of its execution flows through its SSGs.