\chapter{Background}
\label{chapter.Background}
Scratchpad memory (SPM) has received considerable attention in the embedded and real-time communities as an alternative to processor caches. Unlike cache memory, scratchpad memory provides consistent and predictable access times. Therefore, any task that runs from the SPM is guaranteed to have a predictable execution time. The SPM is also better than the cache in terms of area and power efficiency \cite{Banakar2002a}. Figures \ref{fig.SMP_arch} and \ref{fig.Direc_Cache_arch} compare the SPM and cache architectures. The SPM is a simpler digital circuit, and hence occupies a smaller area and consumes less power. On-chip memory (SPM and cache) is usually built using static RAM (SRAM) circuitry \cite{DigitalICs_2002}. 

\begin{figure}[h!]
\begin{center}
\includegraphics[clip=true,scale=0.80]{../figures/SPM_memory_arch.pdf}
\end{center}
\caption{SPM Architecture}
\label{fig.SMP_arch}
\end{figure}

The SPM, as shown in Figure \ref{fig.SMP_arch}, is a simple memory circuit. From an architectural perspective, accessing a memory word requires only a decoder and a multiplexer. Accessing memory and selecting a specific word is called indexing. During an access, the target SRAM cells are selected and then read or written. The cache architecture, on the other hand, is more complicated. It consists of multiple data memories, all of which are indexed at the same time \cite{Banakar2002a}. Each data memory (SRAM array) is indexed in the same way as the SPM. This means a cache is at least n times more complex, and consumes more power, than an SPM, where n is the number of data memories the cache utilizes. For instance, if the cache has 32-byte cache lines (8 words), there will be 8 data memories. In addition, valid-bit and tag arrays are indexed on each access. Moreover, the selection of the addressed word is implemented in two levels. First, one word from each data memory is selected in the same way as in the SPM (N:1 MUX); this selects the whole cache line. Second, the block offset (part of the address) determines which word is selected from the cache line. This multi-level selection mechanism makes cache memory slower than scratchpad memory, and the cache also requires extra storage for tags and valid bits. 
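To make the indexing concrete, the following sketch splits a 32-bit byte address into the fields the cache hardware uses. The 8-word (32-byte) line matches the example above; the 256-set geometry is an assumption chosen for illustration:

```cpp
#include <cstdint>

// Assumed geometry: 32-byte lines (8 x 4-byte words), 256 sets.
constexpr uint32_t kWordBits   = 2;  // byte offset within a 4-byte word
constexpr uint32_t kOffsetBits = 3;  // 8 words per line -> block offset
constexpr uint32_t kIndexBits  = 8;  // 256 sets

struct CacheFields {
    uint32_t blockOffset;  // selects one word from the line (second-level MUX)
    uint32_t index;        // drives the decoder that selects the set
    uint32_t tag;          // compared against the stored tag for hit/miss
};

// Decompose a byte address into the fields used by the indexing hardware.
CacheFields decompose(uint32_t addr) {
    CacheFields f;
    f.blockOffset = (addr >> kWordBits) & ((1u << kOffsetBits) - 1);
    f.index       = (addr >> (kWordBits + kOffsetBits)) & ((1u << kIndexBits) - 1);
    f.tag         =  addr >> (kWordBits + kOffsetBits + kIndexBits);
    return f;
}
```

An SPM, by contrast, needs no tag field at all: the whole address (minus the word offset) directly drives a single decoder.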

\begin{figure}[h!]
\begin{center}
\includegraphics[clip=true,scale=1.3]{../figures/Direct_mapped_cache.pdf}
\end{center}
\caption{Direct-Mapped Cache Architecture (From \cite{ECE3055_cache})}
\label{fig.Direc_Cache_arch}
\end{figure}


In the direct-mapped cache, which is the simplest cache architecture, a single comparator compares the requested tag against the stored tag to determine whether the access is a hit or a miss. Associative caches are more complicated, and thus slower, than the direct-mapped cache. In fully associative caches, as shown in Figure \ref{fig.Full_associative_Cache}, the entire block address is used as the tag. Therefore, one comparator is required for each cache line. The increased number of comparators in the associative cache negatively impacts the maximum clock frequency of the system. However, associative caches deliver better overall system performance than the direct-mapped cache because they reduce the conflict-miss rate. The set-associative cache sits between the direct-mapped cache and the fully associative cache, balancing the trade-off between clock frequency and conflict-miss rate. Banakar et al. \cite{Banakar2002a} compared the energy, area, and performance of the SPM and the cache in the embedded domain. Their study indicates that the SPM performs better than the cache on almost all counts.
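The comparator cost of full associativity can be illustrated with a small software model (not taken from the cited study): the hardware comparator array becomes a loop over every line, and the number of comparisons grows with the number of lines.

```cpp
#include <cstdint>
#include <vector>

// Illustrative model: a fully associative cache must compare the requested
// tag against the stored tag of every line. Hardware performs all the
// comparisons in parallel, one comparator per line, which is what limits
// the maximum clock frequency as the line count grows.
struct Line {
    bool     valid = false;
    uint32_t tag   = 0;
};

// Returns true on a hit.
bool lookupFullyAssociative(const std::vector<Line>& lines, uint32_t tag) {
    for (const Line& l : lines)
        if (l.valid && l.tag == tag)   // one comparator per line
            return true;
    return false;
}
```

A direct-mapped lookup, in contrast, indexes exactly one line and uses a single comparator.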

\begin{figure}[h!] 
\begin{center}
\includegraphics[clip=true,scale=1]{../figures/Full_Associative_cache.pdf}
\end{center}
\caption{Fully Associative Cache Architecture (From \cite{ECE3055_cache})}
\label{fig.Full_associative_Cache}
\end{figure}

Faster access, smaller area, lower power, and predictable access times make scratchpad memories attractive for the embedded and real-time domains. However, SPMs are more difficult to manage. In many cases application programmers need to do low-level programming in order to utilize the SPM, and thus need some knowledge of the hardware platform they are working on. In multitasking systems this becomes very difficult, because programmers must manage the SPM and synchronize their work accordingly. Caches, on the other hand, are self-managed at the hardware level and do not impact the software model. In other words, caches are transparent to programmers. Consequently, applications can be ported to new platforms more easily. 

Researchers in the real-time domain have focused on the predictability problem caused by caches. They attempt to solve it either by using smart allocation algorithms for the cache, or by adding scratchpad memories as a hardware component to the platform. Researchers in the general embedded domain, where predictability is less of a concern, have focused more on ways to utilize scratchpad memories in embedded platforms for their other qualities, such as power efficiency and performance. The following two sections review recent work in the general embedded domain and the real-time embedded domain. 


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{General Embedded Oriented Approach}

As mentioned before, the most important issues in the domain of general embedded systems are power consumption and processing performance. Using scratchpad memories in embedded platforms can yield significant gains in both. Different approaches have been taken to employing scratchpad memories in embedded systems, but the common objective is to make the SPM more usable and more programmer-friendly in order to encourage porting software to the new platform. The works in \cite{Francesco2004, Egger2006, Park2007, Muck2011} used a conventional MMU to manage the SPM at runtime; all of them were evaluated in simulation only. Managing the SPM at runtime allows different applications to utilize the SPM dynamically, resulting in better resource utilization. 
 
Egger et al. \cite{Egger2006} used a compile-time technique to manage the MMU and load task code into the SPM dynamically. In this work, the compiled binary has to be processed by the post-optimizer, a tool they developed, to make the final binary optimized for their memory architecture. The post-optimizer disassembles and statically profiles the code to determine basic-block and function boundaries. It then generates a profiling binary image by injecting profiling code; this image in turn runs in a simulator. By running the profiling image several times, further profiling information can be extracted, such as which parts of the code consume more power and which parts execute more frequently. Finally, the post-optimizer generates the SPM-optimized binary image by grouping the code sections into three main regions. The cached and uncached regions are mapped normally, using the MMU, to physical addresses. The pageable region is mapped by a runtime component called the Scratchpad Memory Manager (SPMM). When a process is first loaded, the SPMM disables all the page-table entries that correspond to the pageable region. When execution reaches code that is not mapped to a physical address, the MMU raises an exception. The runtime, in turn, forwards the exception to the SPMM, which loads the code into the SPM and adds the mapping entries to the page table. One advantage of this technique is that neither the size of the SPM nor the size of the application's code needs to be known at compile time, as loading is done at the granularity of the page size. Although this work is a significant step toward dynamic, runtime-managed SPM, it has some drawbacks. First, it is based on simulation, and it is unclear what complications could occur in a real implementation. Second, it requires off-line profiling of the application in order to optimize it for the SPM. In addition, the CPU itself copies the code from main memory to the SPM in response to MMU exceptions, which can degrade performance. Finally, this work did not investigate multitasking systems, which are more realistic than single-task systems.
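The fault-driven loading just described can be sketched as follows. The handler structure, page size, and table layout here are illustrative assumptions, not the authors' implementation:

```cpp
#include <cstdint>
#include <unordered_map>

constexpr uint32_t kPageSize = 1024;  // assumed SPM page size

// Illustrative SPMM: maps virtual page numbers of the pageable region to
// SPM pages on demand. Pages absent from the table are "unmapped", so
// touching them raises an MMU exception that lands in handleFault().
struct SpmManager {
    std::unordered_map<uint32_t, uint32_t> pageTable;
    uint32_t nextFreePage = 0;
    uint32_t copies = 0;  // CPU-driven copies: the overhead noted above

    // Called by the runtime when the MMU raises an exception for an
    // unmapped pageable address; returns the SPM page backing it.
    uint32_t handleFault(uint32_t vaddr) {
        uint32_t vpn = vaddr / kPageSize;
        auto it = pageTable.find(vpn);
        if (it != pageTable.end()) return it->second;  // already resident
        uint32_t ppn = nextFreePage++;  // allocate an SPM page
        ++copies;                       // CPU copies the code page into SPM
        pageTable[vpn] = ppn;           // add the mapping entry
        return ppn;
    }
};
```

Note that every first touch of a page costs the CPU a full page copy, which is exactly the overhead criticized above.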
 
Francesco et al. \cite{Francesco2004} adopted a hardware-software co-design approach to dynamically manage the SPM at runtime with minimal overhead on the CPU. This study demonstrated the power efficiency of the SPM in dynamic applications, and even showed an improvement in performance. Figure \ref{fig.Integrated_MMU_SPM_DMA} shows the hardware extensions they proposed. The DMA component, located in the transfer engine of the extended design (right side of the figure), relieves the CPU of copying data to and from the SPM. Loading application code into the SPM is not considered in this study, which focuses only on data. The MMU helps map the addresses at runtime. This work exposes a high-level API to manage and allocate data in the SPM. However, it is still the programmer's responsibility to use the API to allocate space in the SPM and move the data back and forth using the DMA.
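A high-level API in this spirit might look like the sketch below. The names, signatures, and semantics are hypothetical illustrations of the programming model, not the authors' actual interface:

```cpp
#include <cstddef>
#include <cstring>
#include <vector>

// Hypothetical SPM allocation API: the programmer reserves SPM space
// explicitly and triggers transfers; in the cited design a DMA engine,
// not the CPU, would perform the copy.
class SpmAllocator {
    std::vector<char> spm_;  // models the on-chip SPM
    std::size_t brk_ = 0;    // simple bump-pointer allocation

public:
    explicit SpmAllocator(std::size_t size) : spm_(size) {}

    // Reserve a region of SPM; returns nullptr when it does not fit.
    void* alloc(std::size_t n) {
        if (brk_ + n > spm_.size()) return nullptr;
        void* p = spm_.data() + brk_;
        brk_ += n;
        return p;
    }

    // Models the DMA copy main memory -> SPM (offloaded from the CPU
    // in the cited hardware; plain memcpy in this software model).
    static void dmaCopy(void* dst, const void* src, std::size_t n) {
        std::memcpy(dst, src, n);
    }
};
```

Even with such an API, the burden of deciding what to place in the SPM, and when, stays with the programmer, which is the drawback noted above.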

\begin{figure}[h!] 
\begin{center}
\includegraphics[clip=true,scale=1]{../figures/Integrated_MMU_SPM_DMA.pdf}
\end{center}
\caption{Hardware extensions for scratchpad management: (left) original, (right) extended. From \cite{Francesco2004}}
\label{fig.Integrated_MMU_SPM_DMA}
\end{figure}


In \cite{Muck2011}, a simple technique at the OS level is used to allocate dynamic data (the application heap) in the SPM. A C++ framework enables the programmer to easily annotate the code. The annotation directs the OS in deciding where to put the data: on the system heap (main memory) or on the application heap (SPM). The C++ \texttt{new} and \texttt{delete} operators are overloaded to handle the programmer's allocation preference. The OS handles the actual allocation depending on the available space; hence there is no guarantee that data is allocated on the desired SPM heap. This work was evaluated on an FPGA platform and showed improvements in both power efficiency and performance. The technique targets only dynamically allocated data; to obtain more significant improvements, it is intended to be coupled with compile-time techniques that allocate code and static data in the SPM. 
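The annotation idea can be sketched with a class-specific \texttt{operator new} overload. The bookkeeping and fallback policy below are simplified assumptions; the cited framework's actual heap management lives in the OS:

```cpp
#include <cstddef>
#include <new>

std::size_t g_spmUsed = 0;                  // models SPM-heap occupancy
constexpr std::size_t kSpmHeapSize = 4096;  // assumed SPM heap capacity

// A type annotated as "prefer the SPM heap": its operator new records the
// preference, but allocation silently falls back to the system heap when
// the SPM heap is full -- a preference, not a guarantee, as in the cited
// work. (Here both paths use the global heap; only the accounting differs.)
struct SpmPreferred {
    static void* operator new(std::size_t n) {
        if (g_spmUsed + n <= kSpmHeapSize)
            g_spmUsed += n;                 // "placed" on the SPM heap
        return ::operator new(n);           // otherwise: system heap
    }
    static void operator delete(void* p) { ::operator delete(p); }

    int payload[8] = {};
};
```

Because the overload is per-class, unannotated types keep using the system heap untouched, which is what makes the scheme easy to retrofit.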


The work presented in \cite{Park2007} is also an interesting effort that automatically loads the system stack into the SPM at runtime to improve system performance and power consumption. It exploits the locality generally observed in stack memory accesses. The basic idea is to continuously map the region around the stack pointer into the SPM using the MMU. Since the active part of the stack, which contains the local variables of the current function, is always near the stack pointer, mapping that region into the SPM improves its performance. If the stack outgrows the SPM, the MMU raises an exception. The exception handler then replaces a segment of the SPM by saving it to main memory and loading the new addresses into the SPM; the page-table entries are modified accordingly. This mechanism is somewhat similar to the one used in \cite{Egger2006}, but the work remains interesting because it needs neither post-compile processing nor specialized hardware beyond the MMU. It is also transparent to application programmers, with no API needed. As in \cite{Egger2006}, however, this work is based only on simulation, not a real platform. Furthermore, there is no DMA component to load and unload the SPM, which reduces performance.
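The stack-window behavior can be modeled as follows. The page size, window size, and FIFO replacement are simplifying assumptions made for illustration; the cited work's replacement policy may differ:

```cpp
#include <cstddef>
#include <cstdint>
#include <deque>

constexpr uint32_t    kPageSize = 4096;  // assumed page size
constexpr std::size_t kSpmPages = 4;     // assumed stack pages the SPM holds

// Illustrative model: the pages nearest the stack pointer live in the SPM.
// When the stack grows onto an unmapped page, the "exception handler"
// evicts the oldest resident page (a FIFO simplification), saving it to
// main memory, and loads the new page into the SPM.
struct StackWindow {
    std::deque<uint32_t> resident;   // virtual page numbers currently in SPM
    uint32_t evictions = 0;          // segments saved back to main memory

    void access(uint32_t sp) {
        uint32_t vpn = sp / kPageSize;
        for (uint32_t p : resident)
            if (p == vpn) return;            // mapped: no exception raised
        if (resident.size() == kSpmPages) {  // SPM full: replace a segment
            resident.pop_front();
            ++evictions;
        }
        resident.push_back(vpn);             // map the new page into the SPM
    }
};
```

Because the CPU itself performs the save/load in this scheme, every eviction stalls execution, which is the missing-DMA drawback noted above.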


% show the architecture of the conventional MMU
The advantage of using the MMU for SPM management is that it can be done dynamically by the OS at runtime, making the SPM transparent to application programmers. The disadvantage is that the conventional MMU is not designed to serve real-time embedded systems and introduces performance barriers, as explained later in Chapter \ref{chapter.System Architecture}. Our solution, introduced in the following chapters, overcomes the aforementioned problems and barriers while enforcing reasonable restrictions on critical tasks toward guaranteed predictable execution.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%End Section General Embedded System Oriented Approach %%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%


\section{Real-time Embedded Oriented Approach}
For the embedded real-time domain, predictable execution time is the most important characteristic of the scratchpad memory. However, obtaining this predictability requires an SPM to be present in the system in the first place. Therefore, researchers in the embedded real-time domain have also explored ways toward predictable execution based on cache utilization alone.

% LockedCache-predictability
Suhendra and Mitra \cite{Suhendra2008} explored the effect of locking and partitioning a shared L2 cache on predictability in multi-core systems. They combined static/dynamic locking with task-based/core-based partitioning. The results showed clear impacts of the different configurations on predictability versus performance, and they concluded that no single configuration suits all types of applications. The best cache configuration for predictability was static locking with task-based partitioning, since each task has its own cache partition that is unaffected by preemption. However, each task gets a smaller partition as the number of tasks increases. It can be concluded from this research that adopting the technique is not desirable, as the predictability gained comes at a significant cost in performance and it does not scale well. In addition, lockable caches are a specialized feature and are not available on all embedded systems. Using an SPM, which is more power efficient and occupies a much smaller silicon area, may be a better approach.


% Cache-predictability
\begin{figure}[h!] 
\begin{center}
\includegraphics[clip=true,scale=1]{../figures/PREM.pdf}
\end{center}
\caption{PREM Predictable Interval with Constant Execution Time (From \cite{Pellizzoni2011})}
\label{fig.PREM}
\end{figure}

Pellizzoni et al. \cite{Pellizzoni2011} recently introduced the PRedictable Execution Model (PREM), which enforces predictable execution for some tasks. In PREM, any task can have one or more critical code partitions, called Predictable Intervals (PIs). The PIs are not preemptible and are preloaded into the cache; PREM therefore achieves constant execution time for the PIs (see Figure \ref{fig.PREM}). PREM is based on code annotation and uses compiler extensions to inject extra code before the PIs in order to prefetch all required data into the cache. This technique divides CPU execution into two phases: a memory phase and an execution phase. In the execution phase, the CPU does not initiate any memory request. As a result, not only do PIs suffer no cache misses, but I/O peripherals can also be scheduled to access main memory without suffering bus contention from the CPU. However, the technique requires code annotation and support from both the compiler and the OS, which might make porting applications a concern.
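The two-phase structure of a predictable interval can be sketched as follows. This is an illustration of the execution model, not the output of the PREM compiler extensions:

```cpp
#include <numeric>
#include <vector>

// Sketch of a PREM-style predictable interval: a memory phase prefetches
// everything the interval will touch, then an execution phase runs on
// prefetched data only, issuing no further main-memory requests.
struct PredictableInterval {
    std::vector<int> local;  // models data resident in cache/SPM

    // Memory phase: the only point where main memory is accessed, so I/O
    // peripherals can be scheduled around it on the shared bus.
    void memoryPhase(const std::vector<int>& mainMemory) {
        local = mainMemory;  // prefetch all required data
    }

    // Execution phase: operates on prefetched data only, so it incurs no
    // cache misses and no bus contention -- constant execution time.
    long executionPhase() const {
        return std::accumulate(local.begin(), local.end(), 0L);
    }
};
```

Separating the phases is what lets the scheduler co-schedule peripheral DMA with the execution phase, since the CPU is guaranteed to stay off the memory bus during it.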

	
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%SPM-VS-lockedCache Performance and pwer
Other works, such as \cite{Puaut2007}, investigated the use of the SPM in embedded systems. The authors of \cite{Puaut2007} quantitatively compared a memory-mapped SPM to a locked cache in terms of WCET. They used a compile-time algorithm to dynamically load tasks into on-chip memory as needed. Similar to the work by Pellizzoni et al. \cite{Pellizzoni2011}, they injected code at the boundaries of functions and basic blocks. Although in some cases two basic blocks cannot be locked in the cache simultaneously due to conflicting addresses, the benchmarks' WCET results for the two schemes were close to each other. However, this comparison might be unfair, as loading tasks into the SPM uses the CPU, which adds more overhead than the cache.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%Predictable-Dynamic-SPM SMMU (Load/Store)
Other approaches attempted to manage the SPM at runtime to achieve predictable execution times. Whitham and Audsley \cite{Whitham2009} proposed simplified runtime load/store operations using custom instructions coupled with custom hardware. The Scratchpad Memory Management Unit (SMMU) is a custom hardware component that maps and unmaps data objects and variables to the SPM. In addition, the SMMU performs DMA functionality, moving data from main memory to the SPM and vice versa. The SMMU makes memory accesses transparent by translating CPU virtual addresses to physical addresses in main memory or in the SPM. The advantage of this approach over compile-time approaches is that it avoids problems such as dynamic-data limitations due to pointer aliasing, pointer invalidation, and object sizing, making program pointer analysis unnecessary. This significantly simplifies WCET analysis, especially for applications that use dynamic data allocation. On the other hand, programmers must use the specialized instructions to load and map data objects to the SPM, which is difficult to manage, especially in multitasking systems. In addition, the hardware structure of the SMMU, shown in Figure \ref{fig.SMMU_hardware}, is object-based and allows only up to N data objects to be mapped at the same time. As a result, the comparator-array circuit of the SMMU can become a bottleneck for the system's operating frequency, resulting in a scalability problem.
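The object-based translation can be modeled as below. The value of N and the entry layout are assumptions for illustration; in hardware the loop is a bank of N parallel comparators, which is the scalability concern:

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <optional>

constexpr std::size_t N = 4;  // assumed maximum number of mapped objects

// One SMMU entry: a mapped object's virtual range and its SPM location.
struct Mapping {
    bool     valid = false;
    uint32_t vbase = 0, size = 0;  // virtual range of the mapped object
    uint32_t spmBase = 0;          // where the object lives in the SPM
};

struct Smmu {
    std::array<Mapping, N> entries;

    // Translate a CPU virtual address: an SPM address when some entry
    // matches, otherwise std::nullopt (the access goes to main memory).
    // In hardware, all N range checks happen simultaneously.
    std::optional<uint32_t> translate(uint32_t vaddr) const {
        for (const Mapping& m : entries)   // models N parallel comparators
            if (m.valid && vaddr >= m.vbase && vaddr < m.vbase + m.size)
                return m.spmBase + (vaddr - m.vbase);
        return std::nullopt;
    }
};
```

Because every access must fan out to all N comparators, enlarging N lengthens the critical path, which is why the design limits how many objects can be mapped at once.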

% Cache-predictability
\begin{figure}[h!] 
\begin{center}
\includegraphics[clip=true,scale=1.5]{../figures/SMMU_hardware.pdf}
\end{center}
\caption{The translation unit structure of the SMMU (From \cite{Whitham2009}) }
\label{fig.SMMU_hardware}
\end{figure}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
The most recent and, to our knowledge, most appealing work in this domain is by Whitham et al. \cite{Whitham2012, WhithamN2012}. It is the first work to consider managing an on-chip scratchpad dynamically in a multitasking system. It introduced a new memory model called Carousel. Carousel acts as a stack of B fixed-size blocks. The top n blocks are stored in the SPM, and the remaining blocks are stored in main memory. A task is divided into several Carousel blocks depending on its size. 

% Carousel Swapping mechanism
\begin{figure}[h!] 
\begin{center}
\includegraphics[clip=true,scale=.45]{../figures/Carousel_swaps.jpg}
\end{center}
\caption{Carousel stack mechanism (From \cite{Whitham2012}) }
\label{fig.Carousel_swaps}
\end{figure}

Starting a task requires pushing all of its blocks onto the top of the Carousel stack, meaning that the task is moved from main memory to the SPM. To free space in the SPM, Carousel swaps out an equal number of blocks to main memory once they are no longer among the top n blocks of the stack. The process is reversed when a task ends: all of the task's blocks are popped from the top of the stack, moving the task back to main memory, and Carousel brings the previously swapped-out blocks back into the SPM in a stack-wise fashion. Figure \ref{fig.Carousel_swaps} shows this mechanism.

Carousel shifts the responsibility for block swapping to the tasks. Each task swaps out some blocks to free space when it starts, then loads (opens) its own blocks, code and data, before it starts executing. After the task finishes executing, it saves (closes) its own blocks back to main memory. Finally, the task swaps in all the blocks it swapped out earlier, so the stack is restored to its state prior to the task invocation. Using Carousel, tasks are scheduled according to rate-monotonic scheduling in a stack-like manner: tasks execute in last-in first-out (LIFO) order, with the highest-priority task at the top of the stack. Carousel allows each task to allocate as much of the SPM as it requires while preventing inter-task interference. However, the cost of moving blocks between main memory and the SPM seems high. Figure \ref{fig.Carousel_overhead} shows the overhead associated with each task invocation. From the figure, assuming that the blocks in each swappable set are adjacent, each task invocation incurs at least six DMA operations: two for the p blocks (code plus data), two for the y code blocks, and two for the z stack blocks.
In addition, a special operating system, called ``Carousel OS'', was designed for the Carousel architecture. Supporting the architecture in existing operating systems would be a better approach, as it facilitates easier software porting.
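The swap behavior of the Carousel stack described above can be illustrated with a small model. The block count n and the accounting are simplifying assumptions; real Carousel tracks code, data, and stack blocks separately:

```cpp
#include <cstddef>
#include <deque>

constexpr std::size_t kSpmBlocks = 4;  // assumed n: blocks resident in SPM

// Illustrative Carousel: a stack of fixed-size blocks whose top n live in
// the SPM. Pushing a starting task's blocks displaces an equal number of
// blocks past position n, i.e. they are swapped out to main memory by DMA.
struct Carousel {
    std::deque<int> stack;   // front = top; values are owning task IDs
    std::size_t swapsOut = 0;  // DMA transfers SPM -> main memory

    void startTask(int task, std::size_t blocks) {
        std::size_t before = stack.size();
        for (std::size_t i = 0; i < blocks; ++i)
            stack.push_front(task);
        std::size_t after = stack.size();
        // Blocks now below position n have been displaced to main memory.
        std::size_t overBefore = before > kSpmBlocks ? before - kSpmBlocks : 0;
        std::size_t overAfter  = after  > kSpmBlocks ? after  - kSpmBlocks : 0;
        swapsOut += overAfter - overBefore;
    }
};
```

Ending a task reverses the process (pop its blocks, swap the displaced ones back in), so each invocation pays the round-trip DMA cost tallied in Figure \ref{fig.Carousel_overhead}.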


% Carousel Overhead
\begin{figure}[h!] 
\begin{center}
\includegraphics[clip=true,scale=0.5]{../figures/Carousel_overhead.jpg}
\end{center}
\caption{Carousel task invocation (From \cite{Whitham2012}) }
\label{fig.Carousel_overhead}
\end{figure}


From a hardware perspective, Carousel uses DMA to move tasks' blocks between the SPM and main memory. However, the CPU stalls until the transfer finishes: the Carousel mechanism, which requires a task to load and unload itself, impedes overlapping DMA transfers with CPU execution. In addition, Carousel uses a translation unit to virtualize block addresses. The translation unit translates all n blocks stored in the SPM in parallel, which is a concern because the unit's performance depends on the number of blocks it translates simultaneously. Having many small blocks decreases the maximum operating frequency of the translation unit, while having few large blocks reduces the space utilization of the SPM. Furthermore, the Carousel architecture requires a separate SPM dedicated to the OS, because the OS has to be in the SPM at all times and cannot share the Carousel SPM with other tasks. This makes the approach less appealing to adopt, as the optimum size ratio between the two SPMs has to be determined at the hardware level.


This chapter reviewed several approaches that either manage the SPM dynamically or address the predictability issue associated with cache memory. Each work contributes to solving the problem we target from a particular perspective. Our work, introduced in the following chapters, aims to be more comprehensive and to address the problem from multiple aspects at once. Our solution avoids most of the problems highlighted in this review and proposes a hardware-software platform that helps bring real-time applications to general-purpose architectures with minimal impact on cost and performance. The next chapter introduces our solution at the system level.


%%%%%%%%%%End Section Real-time Embedded System Oriented Approach %%%%%%%%%%%%%%%%%%%%%%%

%\section{Off-line Static Approach}
  




%\section{Runtime Dynamic Approach}
