\chapter{System Model}
\label{chapter.system_model}

The solution introduced in this thesis is a simple, novel, and effective hardware-software technique that achieves the predictability required in real-time systems without compromising performance or cost. It targets hard real-time systems running on single-core processors and is expected to scale well to multi-core systems. It is best suited to critical task sets with relatively small working sets, such as the control tasks found in engine controllers.

A system with tasks of different criticalities is considered, similar to the Integrated Modular Avionics system \cite{Sha2004}. For simplicity, only two sets are considered: a set of periodic \CTs \ and a set of \NTs . The main objective is to protect the \CTs \ from interference by other critical or non-critical tasks. The \CTs \ run out of the SPM, and the \NTs \ run out of the cache. With this setup, the \CTs \ have a predictable execution time and a precisely determined WCET, without degrading the performance of the \NTs .

\begin{figure}[h!]
\begin{center}
\includegraphics[clip=true,scale=1]{../figures/2Levels_schedule.pdf}
\end{center}
\caption{Two-level Schedule}
\label{fig.twoLevel_Schedule}
\end{figure}

A two-level scheduling scheme is adopted (Figure \ref{fig.twoLevel_Schedule}). Each critical task ($\tau_i^c$) is assigned one or more fixed-time slots within a system period (major cycle). Non-critical tasks ($\tau_i^{nc}$) are executed within partitions, each of which also gets one or more fixed-time slots. Within a partition, \NTs \ can be scheduled using any scheduling policy; the fixed-priority scheduling used in \cite{Sha2004} is adopted here. Figure \ref{fig.twoLevel_Schedule} illustrates how this scheduling works. The level-1 schedule is fixed (predetermined off-line) and has the highest priority. At this level, \CTs \ ($\tau_i^c$) are executed within their assigned slots; in addition, one or more fixed-time slots, called non-critical partitions, are reserved at this level. The level-2 schedule, in contrast, is priority-based and has a lower priority than level-1. Non-critical tasks, such as $\tau_1^{nc}$ and $\tau_2^{nc}$, are executed at this level within the assigned non-critical partitions. Each \NT \ has a fixed priority, and its slot assignment is determined dynamically at runtime. This scheduling scheme provides the protection needed for \CTs \ in level-1, while retaining the flexibility to schedule \NTs \ using any scheduling policy in level-2.
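The two-level scheme described above can be sketched as a small simulation. The slot layout, task names, and priority values below are purely illustrative (they are not taken from the thesis); lower priority values mean higher priority:

```python
# Level 1: a fixed off-line table of slots for one major cycle.  Each
# slot either names a critical task or marks a non-critical partition
# ("NC") whose occupant is decided only at runtime (level 2).
LEVEL1_TABLE = ["tau1_c", "NC", "tau2_c", "NC"]  # illustrative layout

def pick_nc(priorities, ready):
    """Level-2 fixed-priority choice among ready non-critical tasks.
    priorities: {task: prio}, lower value = higher priority."""
    candidates = [t for t in ready if t in priorities]
    return min(candidates, key=lambda t: priorities[t]) if candidates else None

def schedule_major_cycle(nc_priorities, nc_ready):
    """Return which task occupies each slot of one major cycle."""
    trace = []
    for slot in LEVEL1_TABLE:
        if slot == "NC":                 # level 2 decides at runtime
            trace.append(pick_nc(nc_priorities, nc_ready) or "idle")
        else:                            # level-1 assignment is fixed
            trace.append(slot)
    return trace

print(schedule_major_cycle({"tau1_nc": 2, "tau2_nc": 1},
                           {"tau1_nc", "tau2_nc"}))
# -> ['tau1_c', 'tau2_nc', 'tau2_c', 'tau2_nc']
```

Note how the critical slots are untouched by the runtime state of the non-critical tasks: only the "NC" slots vary, which is the protection property the scheme is built around.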


Figure \ref{fig.SPM_DMA_Loading} depicts the mechanism used to load tasks into the SPM; the SPM contents are shown at the end of each time slot. The SPM is divided into three dynamically sized partitions: one for the operating system and two for \CTs \ (see Chapter \ref{chapter.Real-time Scr}). Figure \ref{fig.SPM_DMA_Loading} shows only the two partitions dedicated to \CTs.
During the execution of a \CT, the system uses DMA to load the code and data of the next scheduled \CT \ into the SPM. To free space, the system also unloads the previously executed \CT \ and writes its modified content (data) back to main memory. For instance, $\tau_2^{c}$ (code + data) is loaded into the SPM while $\tau_1^{c}$ is running; hence, it can start right after $\tau_1^{c}$ with no latency other than that of the context switch. Similarly, the data of $\tau_1^{c}$ is unloaded while $\tau_2^{c}$ is running, so a new task can be loaded in its place. There is no need to unload code from the SPM back to main memory, as code is not modified. As shown in Figure \ref{fig.SPM_DMA_Loading}, no DMA activity is allowed while \NTs \ are running. Consequently, the \NTs, which run out of the cache, are not interfered with by the DMA activity, leading to better performance. Likewise, loading and unloading the SPM while a \CT \ is running does not degrade the performance of the running \CT, since the SPM is dual-ported. Therefore, the \CTs, which run out of the SPM, do not suffer DMA interference either.
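The load/unload pipeline above can be sketched as a ping-pong over the two critical-task SPM partitions: while one task runs from its partition, the DMA writes back the previous task's data and fills the other partition with the next task. The task names and log format below are illustrative only:

```python
def run_critical_sequence(tasks):
    """tasks: ordered critical-task names for one cycle.
    Returns a log of (execute, concurrent DMA activity) per slot."""
    spm = [None, None]           # the two critical-task SPM partitions
    spm[0] = tasks[0]            # first task pre-loaded before the cycle
    log = []
    for i, task in enumerate(tasks):
        cur = i % 2              # partition the running task occupies
        nxt = (i + 1) % 2        # partition the DMA is free to refill
        dma = []
        if i > 0:                # code is never written back, only data
            dma.append(f"writeback {tasks[i-1]} data")
        if i + 1 < len(tasks):   # prefetch the next task in parallel
            dma.append(f"load {tasks[i+1]}")
            spm[nxt] = tasks[i + 1]
        assert spm[cur] == task  # running task is already resident
        log.append((f"run {task}", dma))
    return log

for entry in run_critical_sequence(["tau1_c", "tau2_c", "tau3_c"]):
    print(entry)
# ('run tau1_c', ['load tau2_c'])
# ('run tau2_c', ['writeback tau1_c data', 'load tau3_c'])
# ('run tau3_c', ['writeback tau2_c data'])
```

The assertion captures the key property: at every slot boundary the next critical task is already resident, so the only switching cost left is the context switch itself.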
 
 
  This work differs from \cite{Whitham2012} in that it does not stall the CPU in order to load tasks into the SPM; instead, the DMA transfers are overlapped with the execution of \CTs. This scheme results in better overall schedulability by hiding the latency of the DMA transfers. Details of the schedulability analysis are provided in Chapter \ref{chapter.Schedulability_Analysis}. Realizing the introduced technique requires hardware-software co-design; the next chapter discusses how hardware and software are integrated to achieve the desired goals, with both performance and cost in mind.

\begin{figure}[h!]
\begin{center}
\includegraphics[clip=true,scale=1.4]{../figures/SPM_DMA_Loading.pdf}
\end{center}
\caption{DMA Transfer of \CTs \ to SPM}
\label{fig.SPM_DMA_Loading}
\end{figure}


%End Section ()
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
