\chapter{Schedulability Analysis}
\label{chapter.Schedulability_Analysis}

\newcommand{\slotsize}{s}	%slot time
\newcommand{\dmatime}{DMA}	%dma time as function of size
\newcommand{\datasize}{data}
\newcommand{\codesize}{code}
\newcommand{\scratchdatasize}{SPM_{D-SIZE}}
\newcommand{\scratchcodesize}{SPM_{C-SIZE}}
\newcommand{\mcycle}{H}
\newcommand{\timencrit}{NC}
\newcommand{\CS}{C_s}
\newcommand{\DMAover}{DMA_{setup}}
\newcommand{\tickover}{t_s}

This chapter discusses how to derive a predictable slotted schedule for critical tasks using our Real-time Scratchpad Memory Management Unit (RSMU) architecture, according to the two-level scheduling mechanism described in Chapter \ref{chapter.system_model}. We assume that the system comprises a set $\Gamma^c$ of $N$ critical tasks $\{\tau_1^c, \ldots, \tau^c_N\}$. Each critical task $\tau_i^c$ is characterized by a period $p^c_i$, a worst-case computation time $e^c_i$, a data size in scratchpad $\datasize_i$, and a code size in scratchpad $\codesize_i$. The computation time $e^c_i$ represents the worst-case execution time of the task, assuming that its data and code are already loaded into the scratchpad and discounting all OS overhead. Non-critical tasks run within a set of $M$ non-critical partitions. We assume that the timing requirements of each partition $\tau^{nc}_i$ are expressed by a tuple $\{Q_i, p^{nc}_i\}$, meaning that the partition must receive at least $Q_i$ units of execution time every $p^{nc}_i$ time units.



Following the example in \cite{Sha2004}, we assume that all task and partition periods are multiples of a base period $H$; this is a common assumption in avionics systems. We then set the length of the minor cycle to $H$. Each task is assigned one slot in every minor cycle. If a task has a period that is a multiple of the minor cycle, it executes for only a fraction of its execution time in each minor cycle. We then construct a schedule with $N+M$ fixed slots in each minor cycle: each of the $M$ partition slots is assigned to a partition $\tau^{nc}_i$ and has length $\frac{Q_i}{p^{nc}_i / \mcycle}$; each of the remaining $N$ slots is assigned to a critical task $\tau_i^c$, which executes for $\frac{e^c_i}{p^c_i / \mcycle}$ time units in each minor cycle. 


We next compute the size $\slotsize^c_i$ of the slot assigned to critical task $\tau_i^c$ during each minor cycle. As detailed in Section \ref{sec:OS_Support}, each task suffers OS overhead due to: 1) the context switch overhead $\CS$; 2) the DMA set-up time $\DMAover$, which is paid three times; and 3) the timer tick overhead $\tickover$. Starting from the per-cycle execution time $\frac{e^c_i}{p^c_i / \mcycle}$, the slot size must thus be enlarged to include all overheads:

\begin{equation} 
\label{eq:slotsize}
\slotsize^c_i = \frac{e^c_i}{p^c_i / \mcycle} + \CS + 3 \DMAover + \lceil \frac{\slotsize^c_i}{1ms}  \rceil \tickover.
\end{equation}

Note that with a $1\,ms$ system tick, $ \lceil \frac{\slotsize^c_i}{1ms}  \rceil$ represents the worst-case number of times that the timer can interrupt the slot execution; since this number depends on $\slotsize^c_i$ itself, the slot size can be computed by iterating Equation \ref{eq:slotsize} to a fixed point, starting from the execution time $\frac{e^c_i}{p^c_i / \mcycle}$. Furthermore, note that the following inequality is an obvious necessary condition for the schedulability of the system:

\begin{equation}
\sum_{1\leq i \leq N} \slotsize^c_i + \sum_{1\leq i \leq M} \frac{Q_i}{p^{nc}_i / \mcycle} \leq \mcycle.
\end{equation}
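As a concrete sketch, the fixed-point iteration for Equation \ref{eq:slotsize} can be written in a few lines of Python. The overhead values used in the example call (context switch, DMA set-up, tick handler cost) are hypothetical placeholders, not measured values from our platform:

```python
import math

def slot_size(e, p, H, cs, dma_setup, tick_over, tick_period=1.0):
    """Iterate the slot-size equation to a fixed point.

    e, p: task WCET and period; H: minor cycle length; all times in
    the same unit (here: ms). The iteration converges as long as
    tick_over < tick_period, since each step grows the slot by less
    than one tick period."""
    base = e / (p / H)                      # per-cycle execution time
    s = base
    while True:
        ticks = math.ceil(s / tick_period)  # worst-case timer interrupts
        s_new = base + cs + 3 * dma_setup + ticks * tick_over
        if s_new == s:                      # fixed point reached
            return s
        s = s_new

# Hypothetical overheads (ms): CS = 0.01, DMA_setup = 0.002, t_s = 0.005
s1 = slot_size(e=1, p=10, H=10, cs=0.01, dma_setup=0.002, tick_over=0.005)
# s1 is approximately 1.026: base 1.0 plus overheads with 2 timer ticks
```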


\begin{table}[h!]
\centering
\caption{An illustrative sample of \CTs \ with $H = \min(p^c_i) = 10$}
\label{tbl:tasks_example}
\scalebox{1}{
\begin{tabular}{c|cccc}

           	&         $\tau^c_1$ &         $\tau^c_2$&         $\tau^c_3$ &         $\tau^c_4$\\
\hline
     $e^c$ \small(time units) &         1 &         4 &         16 &          18 \\

     $p^c$ \small(time units) &         10 &         20 &         40 &         80 \\

     $s^c$ \small(time units) &         1.03602 &   2.03828 &        4.0428 &       2.28828 \\
\hline
\hline
\end{tabular}  
}
\end{table}


As an illustrative example, consider the set of arbitrary critical tasks in Table \ref{tbl:tasks_example}. There are four \CTs \ with different harmonic periods. The length of the minor cycle is set to 10, the base period $H$. Therefore, $\tau^c_1$ completes execution in every minor cycle because its period is equal to $H$. On the other hand, the period of $\tau^c_2$ is twice the base period; as a result, $\tau^c_2$ completes execution every two minor cycles by running for half of its execution time in each minor cycle, occupying one slot per minor cycle. The slot size $s^c$ includes all overheads. Therefore, to ensure schedulability, the sum of all slot sizes must not exceed the length of the minor cycle, which is 10 time units in this example. Figure \ref{fig.MinorCycle} shows how the four tasks can be scheduled using the minor cycle method; each slot shows the fraction of its execution time that the task executes in that slot.
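The schedulability check for this example reduces to a sum over the $s^c$ row of Table \ref{tbl:tasks_example} (no non-critical partition slots are included here; they would consume the remaining budget):

```python
# Necessary condition: the critical slot sizes (overheads included)
# must fit within the minor cycle H. Values are the s^c row of the table.
H = 10.0
slot_sizes = [1.03602, 2.03828, 4.0428, 2.28828]

total = sum(slot_sizes)   # 9.40538 time units
assert total <= H         # schedulable: ~0.59 units left for partitions
```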

\begin{figure}[h!]
\begin{center}
\includegraphics[clip=true,scale=2.5]{../figures/MinorCycle.pdf}
\caption{An illustration showing the slots of four \CTs \ in a minor cycle schedule}
\label{fig.MinorCycle}
\end{center}
\end{figure}

To create the slotted schedule, we finally need to determine the order in which slots are executed within the minor cycle. Note that the order of non-critical partitions does not impact the schedulability of critical tasks in any way. Hence, we focus on computing the slot order for critical tasks; the schedulability of non-critical tasks can then be assessed based on the analysis in \cite{Sha2004}. When deciding the slot order, we need to respect two constraints. First, the size of both the data and code scratchpad must be sufficient to execute the task in the current slot while performing the load/unload operations for the previous and next executed critical task, as detailed in Chapter \ref{chapter:SoftwareBuldFlow}. Second, the size of the slot must be sufficient to complete all DMA operations in time: the data of the previous critical task must be moved from the scratchpad to main memory, and both the code and data of the next critical task must be loaded from main memory. 

To solve the slot assignment problem for critical tasks, we construct a simple satisfiability problem. In the formulation of Equations \ref{eq:onetask}--\ref{eq:code}, $\dmatime(b)$ represents the total time required to move $b$ bytes to or from main memory using DMA, including OS overhead; we detail how to compute $\dmatime(b)$ in Section \ref{sec:Hardware-Evaluation}. The binary indicator variable $x_{i,j}$ is 1 if task $\tau^c_j$ is executed in the $i$-th critical slot, with slots numbered from $0$ to $N-1$. The code scratchpad size $\scratchcodesize$ and the data scratchpad size $\scratchdatasize$ denote the portions of the scratchpads dedicated to \CTs.

\begin{align}
% & \max \sum_{0 \leq i \leq N-1, 1 \leq j \leq N} x_{i,j} \label{eq:SAT} \\
& \forall i, 0 \leq i \leq N-1: \sum_{1 \leq j \leq N} x_{i,j} = 1\label{eq:onetask} \\
& \forall j, 1 \leq j \leq N: \sum_{0 \leq i \leq N-1} x_{i,j} = 1 \label{eq:oneslot} \\
& \forall i, 0 \leq i \leq N-1: \sum_{1 \leq j \leq N} x_{(i-1)\%N, j} \cdot \dmatime(\datasize_j) + \notag \\
& \hspace{5mm} + \sum_{1 \leq j \leq N} x_{(i+1)\%N, j} \cdot \Big( \dmatime(\datasize_j) + \dmatime(\codesize_j) \Big) \leq \sum_{1 \leq j \leq N} x_{i,j} \slotsize^c_j \label{eq:dma} \\
& \forall i, 0 \leq i \leq N-1:  \sum_{1 \leq j \leq N} x_{i, j} \cdot \datasize_j + \notag \\
& \hspace{5mm} + \sum_{1 \leq j \leq N} x_{(i+1)\%N, j} \cdot \datasize_j \leq \scratchdatasize \label{eq:data}\\
& \forall i, 0 \leq i \leq N-1:  \sum_{1 \leq j \leq N} x_{i, j} \cdot \codesize_j + \notag \\
& \hspace{5mm} + \sum_{1 \leq j \leq N} x_{(i+1)\%N, j} \cdot \codesize_j \leq \scratchcodesize \label{eq:code}
\end{align}

Based on Equations \ref{eq:onetask} and \ref{eq:oneslot}, each slot must be assigned exactly one critical task and each critical task must be assigned exactly one slot. Equation \ref{eq:dma} expresses the constraint on the DMA time. The right-hand side $\sum_{1 \leq j \leq N} x_{i,j} \slotsize^c_j$ computes the size of the $i$-th slot in the schedule, based on the task assigned to it. On the left-hand side, $\sum_{1 \leq j \leq N} x_{(i-1)\%N, j} \cdot \dmatime(\datasize_j)$ represents the time required to unload the data of the critical task executed in the previous slot, i.e., slot $(i-1)\%N$, from the data scratchpad to main memory, where $\%$ is the modulo operation. Note that in slot 0, we need to unload the data used by the task executed in slot $N-1$ of the previous minor cycle. Finally, $\sum_{1 \leq j \leq N} x_{(i+1)\%N, j} \cdot \Big( \dmatime(\datasize_j) + \dmatime(\codesize_j) \Big)$ represents the time required to load the data and code of the critical task executed in the following slot $(i+1)\%N$ from main memory into the data and code scratchpads, respectively; again, note that in slot $N-1$, we need to load the code and data used by the task executed in slot $0$ of the next minor cycle.

Finally, Equations \ref{eq:data} and \ref{eq:code} express constraints on the sizes of the data and code scratchpads, respectively. As detailed in Chapter \ref{chapter:SoftwareBuldFlow}, while a critical task is running, one portion of each scratchpad contains the data/code of the running task, while the other portion receives the data/code of the next critical task as it is loaded. Hence, we constrain the size of the data/code scratchpad to be at least the data/code size of the task running in slot $i$ plus that of the task running in slot $(i+1)\%N$.


For instance, consider Table \ref{tbl:tasks_parameters}, which lists the parameters of an arbitrary set of tasks. Assume that each scratchpad has a size of 16 size units and that the total utilization of the task set is less than one. The schedulability of the system then depends on the task execution order. For example, the order \{$\tau^c_1$, $\tau^c_2$, $\tau^c_3$, $\tau^c_4$\} is not schedulable because it violates the DMA time constraint: the slot of $\tau^c_4$ is not long enough for the DMA to finish unloading the data of $\tau^c_3$ and loading the code and data of $\tau^c_1$. On the other hand, an order such as \{$\tau^c_2$, $\tau^c_1$, $\tau^c_3$, $\tau^c_4$\} has no timing violation, but it violates the code scratchpad size constraint because the combined code sizes of $\tau^c_1$ and $\tau^c_3$ exceed the limit of 16 size units. In general, there are many possible permutations of these four \CTs. To find a task order that makes the system schedulable, we feed all system constraints to an SMT (Satisfiability Modulo Theories) solver, which returns a feasible schedule if one exists; one such order is \{$\tau^c_2$, $\tau^c_1$, $\tau^c_4$, $\tau^c_3$\}. The next chapter discusses the experimental results and evaluates the system components: hardware, software and schedulability.

\begin{table}[h!]
\centering
\caption{Illustrative task parameters for an arbitrary set of \CTs }
\label{tbl:tasks_parameters}
\scalebox{1}{
\begin{tabular}{c|cccc}
			    				 &  $\tau^c_1$ &  $\tau^c_2$ & $\tau^c_3$ &  $\tau^c_4$ \\
\hline
     $e^c$ \small(time units) &          11&          15&   		9 &      	10    \\     
     $code$ \small(size units) &         12 &         3 &    		6 &         4    \\     
     $data$ \small(size units) &         6 &          6 &    		4 &       1.5    \\     
     $\dmatime(\codesize)$ \small(time units) & 8 &          2 &    		4 &         3    \\   
     $\dmatime(\datasize)$ \small(time units) & 4 &          4 &    		3 &       2    \\  
\hline
\hline
\end{tabular}  
}
\end{table}