\chapter{Background}

Predictability has long been one of the most important issues in the embedded real-time domain, and researchers have paid significant attention to it, attempting to solve it using a variety of approaches.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\section{Off-line Static Approach}
Suhendra and Mitra \cite{Suhendra2008} explored the effect of locking and partitioning a shared L2 cache on predictability. They studied different locking and partitioning schemes, combining static/dynamic locking with task-based/core-based partitioning. The results showed a clear impact of the different configurations on predictability versus performance, and the conclusion is that no single configuration suits all types of applications. The best cache configuration for predictability was static locking with task-based partitioning, since each task has its own cache partition that is unaffected by preemption. However, a task can still invalidate its own cache. The lesson learned here is that this technique can only reduce the problem, not eliminate it.

On the other hand, Pellizzoni et al.\ \cite{Pellizzoni2011} recently introduced the PRedictable Execution Model (PREM), which enforces predictable execution for certain non-preemptable code blocks called Predictable Intervals (PIs). PREM achieves constant execution time for the PIs by injecting code at the beginning of each PI that prefetches all required data into the cache. As a result, not only do PIs not suffer cache misses, but I/O peripherals can also be scheduled to access main memory without suffering bus contention from the CPU side. However, this technique requires code annotation and support from both the compiler and the OS, which may complicate porting applications.
	

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Using SPM in hard real-time systems is very promising due to its attractive features such as predictability, performance, and power efficiency. \cite{Puaut2007} quantitatively compared memory-mapped SPM to locked cache in terms of WCET. They used a compile-time algorithm to dynamically load tasks into on-chip memory as needed. Similarly to \cite{Pellizzoni2011}, they inject code at the boundaries of functions and basic blocks. Although, in some cases, two basic blocks cannot be locked in the cache simultaneously because of conflicting addresses, the measured benchmark WCETs of the two schemes were close to each other. One important reason is that loading tasks into the SPM with this technique adds more CPU overhead than in the cache case.

%%%%%%%%%%End Section Off-line Static Approach%%%%%%%%%%%%%%%%%%%%%%%


\section{Runtime Dynamic Approach}
On the other hand, runtime approaches to managing the SPM towards predictable execution time have also been explored. \cite{Whitham2009} proposed runtime load/store operations using custom instructions. The SMMU (Scratchpad Memory Management Unit) is custom hardware that maps/unmaps data objects (variables) to the SPM. The advantage of this approach over compile-time approaches is that it avoids problems such as dynamic data limitations due to pointer aliasing, pointer invalidation, and unknown object sizes, making whole-program pointer analysis unnecessary. On the other hand, the SMMU is object-based hardware that can map up to N objects, which makes the comparator-array circuit a bottleneck for the system's maximum clock frequency (fmax); therefore, it does not scale well to larger numbers of objects. In addition, mapping data objects is left to the application programmers, which is difficult to manage, especially in multi-tasking systems.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

Other, non-WCET-oriented works have also targeted the SPM to improve power efficiency and average-case execution time. \cite{Francesco2004}\cite{Egger2006}\cite{Park2007}\cite{Muck2011} use a conventional MMU to manage the SPM at runtime; these works are based on instruction-set simulators. \cite{Francesco2004} exposed a set of APIs to programmers in order to load data into the SPM. \cite{Egger2006} uses a compile-time technique to utilise the MMU and loads tasks' code into the SPM dynamically; since this work does not target real-time systems, the sizes of the tasks and the SPM need not be known at design time. \cite{Park2007} automatically loads the system stack into the SPM, by tracking the stack pointer, to achieve better average system performance. \cite{Muck2011} exposed APIs to allocate dynamic data (the system heap) in the SPM.


The advantage of using the MMU is that SPM management can be done by the OS and thus becomes transparent to the programmers. The disadvantage of a conventional MMU is that it is not designed to serve real-time embedded systems and introduces performance barriers. A conventional MMU divides the whole CPU address space into pages to provide each process with a complete virtual address space, which is not needed here and makes the TLB larger and slower. The MMU also takes several CPU cycles to configure a page mapping. Furthermore, the MMU is designed to raise exceptions, which may cause a context switch, in which the exception handler reconfigures the MMU after copying the necessary data to the target. As a result, predictable execution of the running tasks is no longer guaranteed. Our solution, as explained in the following chapters, overcomes all of the aforementioned problems and barriers, while enforcing reasonable restrictions on critical tasks towards guaranteed predictable execution.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

The most recent and appealing work in this domain that has come to our knowledge is \cite{Whitham2012}. This work is based on simulation, and it differs from previous works that tried to dynamically manage the SPM within a task: it was the first to propose how to manage the SPM among different tasks.
It represents the local memory (SPM) as a stack. Tasks are pushed onto the stack (SPM), using DMA, when they are invoked by the OS, and popped when they finish, similarly to function invocation in C/C++. In the case of preemption, this model swaps out some blocks of the SPM, at the beginning of the preempting task's context, to accommodate the preempting task. Before the preempting task finishes, all blocks that were swapped out at its beginning are swapped back in; these blocks can be parts of the preempted task or of other tasks residing in the SPM. Figure \ref{fig.Whitham_fig} shows this mechanism. Although this solution is very promising, it requires the CPU to stall until the DMA transfer is finished.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\begin{figure}[h!]
\begin{center}
\includegraphics[clip=true,scale=1]{../figures/Whitham_swap_tasks.pdf}
\end{center}
\caption{Task Block Swaps During Context Switch as Presented in \cite{Whitham2012}}
\label{fig.Whitham_fig}
\end{figure}


%%%%%%%%%%End Section Runtime Dynamic Approach %%%%%%%%%%%%%%%%%%%%%%%