\chapter{Platform Evaluation}
\label{chapter.Platform_Evaluation}

In this chapter, the hardware, software, and schedulability of the system are evaluated in order to give a clear view of how the system performs. The hardware and software platforms are first evaluated independently of any running task. Then, several benchmarks are run to evaluate hardware and software combined. Finally, an evaluation of the system's schedulability concludes this chapter.


\section{Hardware Evaluation}
\label{sec:Hardware-Evaluation}

\begin{table}[h!]
\centering
\caption{System-Under-Test Parameters}
\label{tbl:Hardware_parameters}

\begin{tabular}{r|ccc}

 & {\bf Without SPM} & {\bf With SPM} & {\bf Difference \%} \\
\hline
{\bf Area (LE)} & 9707 & 10212 & 5.2 \\

{\bf Memory (bits)} & 36832 & 69600 & 88.9 \\

{\bf Operating frequency (MHz)} & 100 & 98.2 & 1.8 \\
\hline
\hline
\end{tabular}  

\end{table}


The hardware platform is based on Altera's Cyclone II, a 90-nm process FPGA. The native platform uses the fastest Nios-II/f soft-core processor, with separate instruction and data buses. Both the instruction cache and the data cache are 16 KB, direct mapped with a 32-byte cache line, which is the maximum line width on this platform. Making the cache line this wide increases the cache's efficiency, as a smaller cache line forces the CPU to make more trips to main memory. There is a 64-MB off-chip SDRAM, running at 100 MHz, and a DMA core, which comes standard in most embedded platforms. The operating frequency of the native platform is 100 MHz. A 64-bit cycle counter running at the same speed as the CPU is used for timing measurements.

This original platform is modified to form the competing platform. The modified platform has two RSMUs and two SPMs, as shown in Chapter \ref{chapter.Real-time Scr}, Figure \ref{fig.SystemBlocks}. Each SPM is 16 KB in size. The RSMUs are connected to the Nios-II tightly-coupled memory interfaces. As shown in Table \ref{tbl:Hardware_parameters}, the frequency dropped to 98.2 MHz, the area increased by 5.2\% in terms of logic elements, and the memory bits used increased by 88.9\%. As explained in Chapter \ref{chapter.system_model}, the DMA does not interfere with the cache while accessing main memory. As a result, the DMA core is able to transfer 16 KB in 45~$\mu$s from main memory to scratchpad and vice versa. Figure \ref{fig.DMA_Performance} depicts the linear performance of the DMA. Both calculated and measured performance are presented in the figure; the two lines lie on top of each other because the calculated and measured values match. The equation for DMA timing is as follows: \\

\begin{center}
$DMA(b) = 280 + 0.2578 \cdot b$ \\
where $DMA(b)$ is in cycles and $b$ is in bytes.
\end{center}
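As an illustration, the timing model above can be expressed as a short script; the routine names are ours, not part of any driver API, and the 98.2 MHz clock of the modified platform is used to convert cycles to wall-clock time:

```python
# Sketch of the DMA timing model: 280 cycles of fixed transfer overhead
# plus 0.2578 cycles per byte, as given by the fitted equation above.
# The DMA set-up routine is NOT included (see the software parameters).

def dma_cycles(b: int) -> float:
    """Cycles for a DMA transfer of b bytes (set-up excluded)."""
    return 280 + 0.2578 * b

def dma_microseconds(b: int, f_mhz: float = 98.2) -> float:
    """Transfer time in microseconds at a clock of f_mhz MHz."""
    return dma_cycles(b) / f_mhz

# A full 16-KB scratchpad load or unload:
cycles = dma_cycles(16 * 1024)        # about 4504 cycles
usecs = dma_microseconds(16 * 1024)   # about 45.9 us
```

For a 16-KB transfer the model gives roughly 4500 cycles, i.e. about 45~$\mu$s, consistent with the measured figure quoted above.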

% The DMA over head includes the interrupt overhead
Note that the above equation does not include the time required to prepare the DMA transfer (see Table \ref{tbl:Software_parameters}). To measure the DMA performance, the DMA set-up overhead is first determined by reading the cycle counter before and after the set-up routine. The DMA performance is then measured by reading the cycle counter right before the DMA set-up routine and again after the DMA finishes, detected by polling the DMA's status register. Finally, the set-up overhead is subtracted from the measurements.

As mentioned earlier in Chapter \ref{chapter.System Architecture}, the hardware architecture is very simple. Thus, it has been easily integrated into Altera's Nios-II platform without modifying any existing hardware component; only the new components were added. This confirms one of the main design goals, which is to require minimal modifications to the original hardware architecture. Another interesting point about the proposed hardware architecture is its small area footprint. The consumption of memory blocks, on the other hand, is relatively high because the scratchpads are added in addition to the cache. However, other solutions utilizing both cache and scratchpad would require a similar area for memory blocks.

\begin{figure}[h!]
\centering
%\begin{center}
\includegraphics[clip=true,scale=1, trim=0cm 0cm 0cm 0cm]{../figures/DMA_performance.pdf}
%\end{center}
\caption{DMA Performance: the calculated performance matches the measured one}
\label{fig.DMA_Performance}
\end{figure}

%End subSection 
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\section{Software Evaluation}
\label{sec:Software_Evaluation}
The software platform used to evaluate the system consists of a real-time operating system (FreeRTOS) and a set of benchmarks. Table \ref{tbl:Software_parameters} shows the software system parameters. The OS ticks every 1 ms. The system-tick overhead is measured by reading the cycle counter at the beginning of the timer Interrupt Service Routine (ISR) and reading it again before the ISR returns, then adding the time needed to execute the assembly instructions before and after the ISR (the interrupt response and recovery time). This timing applies only to system ticks that do not lead to a context switch. In the case of a context switch, the cycle counter is first read at the beginning of the timer ISR and read again at the beginning of the new context, taking the timer interrupt response time into account. Likewise, the context-switch timing does not include any DMA set-up happening at the context switch. To take the DMA set-up into account, simply add the time required by the DMA set-up routine to the normal context-switch timing, e.g., $450 + 900 = 1350$ cycles.

It is noticeable from the table that the DMA set-up routine requires significant time. The reason is the non-optimized DMA driver provided by the Altera Hardware Abstraction Layer (HAL). The HAL provides Linux-like APIs in which all device drivers are treated in the same way; this abstraction adds many layers between the OS and the hardware. With an optimized DMA driver, 100--150 cycles would be expected for the DMA set-up routine. The RSMU driver is implemented as a macro that allows the kernel to control the RSMUs and maps a \CT \ in eight clock cycles. The software successfully manages the scratchpads at the OS level. As proposed in Chapter \ref{chapter:SoftwareBuldFlow}, the system achieves the second design goal by keeping the software model unchanged. As a result, several applications have been ported to this platform successfully without changing the applications' source code.

\begin{table}[h!]
\centering
\caption{Software Parameters}
\label{tbl:Software_parameters}

\begin{tabular}{l|c}

 & {\bf Cycles} \\
\hline
{\bf Context Switch} & 450 \\

{\bf DMA Set-up} & 900 \\

{\bf System Tick} & 226 \\
\hline
\hline
\end{tabular}  
\end{table}
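To put the parameters in Table \ref{tbl:Software_parameters} in perspective, the following back-of-the-envelope sketch (our own model, not platform code) combines them with the 98.2 MHz clock and the 1 ms tick period:

```python
# Back-of-the-envelope model using the measured software parameters.
CONTEXT_SWITCH = 450    # cycles, plain context switch
DMA_SETUP = 900         # cycles, non-optimized HAL DMA set-up routine
SYSTEM_TICK = 226       # cycles, tick without a context switch
F_CLK_HZ = 98.2e6       # modified platform operating frequency
TICK_PERIOD_S = 1e-3    # the OS ticks every 1 ms

# A context switch that also programs a DMA transfer:
switch_with_dma = CONTEXT_SWITCH + DMA_SETUP   # 1350 cycles, as in the text

# Fraction of CPU time consumed by plain system ticks:
tick_overhead = SYSTEM_TICK / (TICK_PERIOD_S * F_CLK_HZ)   # about 0.23%
```

Even with the non-optimized DMA driver, the periodic tick costs the CPU well under 1\% of its cycles.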


%End subSection 
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\subsection{Benchmark Results}
After evaluating the hardware and software independently, the platform is evaluated by running a selection of benchmarks. A synthetic benchmark is used to evaluate the hardware performance, stressing the efficiency of the memory subsystem. It reads the first word (32 bits) of each cache line: for a 16-KB cache with a 32-byte cache-line width, it iterates over all 512 cache lines and reads only one word from each. The proposed system not only provides predictable execution times for \CTs \ that run out of the scratchpad, but it also shows a performance improvement. From the synthetic benchmark results shown in the last row of Table \ref{tbl:Benchmarks}, the scratchpad performance is about 79\% better than the cold cache. The dirty cache would be worse still, as the cache needs to write the evicted lines back to main memory before bringing the new cache lines in. In the case of the scratchpad, there are no hot- or cold-cache situations, as \CTs \ are preloaded into the scratchpad according to a schedule (refer to Chapter \ref{chapter.system_model}).

A synthetic benchmark is not a real-life application: it is useful for showing the hardware's capability, but does not necessarily predict how the system will behave in real-life applications. Therefore, a set of real benchmarks has been selected to evaluate the platform and its applicability in the embedded real-time domain. The benchmarks were selected for relevant characteristics, such as being memory intensive, to stress the platform's memory subsystem. We chose three benchmarks from the well-known automotive EEMBC benchmark suite: a2time (angle to time conversion), canrdr (response to remote CAN request), and rspeed (road speed calculation), whose characteristics have been studied in \cite{Bak2012}. Two further benchmarks, transitive and corner-turn, are selected from the DIS (Data Intensive Systems) benchmark suite \cite{Musmanno2003}. This selection of real benchmarks is intended to represent several applications used in the embedded real-time domain.
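The access pattern of the synthetic benchmark can be sketched as follows; this is a hypothetical Python model of the pattern, not the benchmark's actual source:

```python
# Model of the synthetic benchmark: read the first 32-bit word of each
# 32-byte cache line in a 16-KB region, so a cold cache misses on every
# access while a hot cache (or the SPM) hits on every access.
CACHE_SIZE = 16 * 1024   # bytes
LINE_SIZE = 32           # bytes per cache line
WORD_SIZE = 4            # bytes per 32-bit word

buffer = list(range(CACHE_SIZE // WORD_SIZE))  # stand-in for the region

def synthetic(mem):
    total = 0
    stride = LINE_SIZE // WORD_SIZE   # one word per line -> 8-word stride
    for i in range(0, len(mem), stride):
        total += mem[i]
    return total

lines_touched = CACHE_SIZE // LINE_SIZE   # 512 lines, one read each
```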

%\begin{sidewaystable}
\begin{table}[!h]
\caption{Benchmarks Results}
\label{tbl:Benchmarks}
%\small
%\tabcolsep=0.01cm
\scalebox{0.57}{
\begin{tabular}{cccccccccc}
\hline
{\bf Benchmark} & {\bf Code size (B)} & {\bf Data size (B)} & {\bf Elements} & {\bf Iterations} & {\bf Cold (cycles)} & {\bf Hot (cycles)} & {\bf SPM (cycles)} & {\bf Difference (cycles)} & {\bf Ratio (\%)} \\
\hline
a2time & 3108 & 5420 & 1298 & 118 & 105162 & 100776 & 100497 & 4386 & 4.17 \\

rspeed & 1956 & 6864 & 1661 & 151 & 59420 & 55624 & 55688 & 3796 & 6.39 \\

canrdr & 2724 & 8030 & 1958 & 178 & 50720 & 47419 & 47280 & 3301 & 6.51 \\

corner-turn & 2032 & 8192  & 2040 & 1020 &  20681 & 16726 & 16728 & 3955 & 19.12 \\

transitive & 2080 & 3024 & 361 & 6859 & 105661 & 102995 & 102898 & 2666 & 2.52 \\

synthetic & 136 & 16384 & 512 & 512 & 12507 & 2592 & 2584 & 9915 & 79.28 \\
\hline
\hline
\end{tabular}  
}
\end{table}
%\end{sidewaystable}


Table \ref{tbl:Benchmarks} is generated by running each benchmark as a \CT, in which each task is limited either by its execution time or by its size. Each benchmark takes more or less space based on the number of iterations it performs \cite{Bak2012}. Each benchmark, depending on its nature, performs a fixed number of ALU instructions per iteration. Similarly, it performs a fixed number of memory operations (reads/writes) per iteration. A memory operation does not necessarily access a new memory location: it may access the same location more than once. The number of elements shows the number of memory locations that the benchmark accesses after performing a certain number of iterations. These locations are not necessarily adjacent or accessed one after another.

Two \CTs \ can share the SPM space as long as the sum of their required sizes is less than or equal to the SPM size; for example, one may occupy one quarter while the other takes three quarters. Task sizes have been taken into account according to the schedulability analysis presented in the previous chapter. Table \ref{tbl:Benchmarks} compares the cold-cache time against the hot-cache time. A small difference in the execution time between the hot cache and the SPM is caused by the dynamic branch predictor\footnote{A dynamic history-based branch predictor can cause unpredictable execution time for a task. The execution time of a task is dependent on the current state of the branch predictor, as mentioned earlier in Chapter \ref{chapter.Introduction}}. Although a dirty cache is the common case in multitasking systems, a comparison to the dirty cache is not considered: the comparison to a cold cache is easier to analyze and gives better feedback on how the proposed system performs in a worst-case scenario.
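The sharing constraint can be sketched as a simple check. One assumption here goes beyond the text: code and data footprints (taken from Table \ref{tbl:Benchmarks}) are checked against separate 16-KB instruction and data SPMs:

```python
# Simplified sharing check: two critical tasks may reside in the SPMs
# together only if their combined code fits one 16-KB SPM and their
# combined data fits the other (an assumption about how the two SPMs
# are used; the text only states the summed-size constraint).
SPM_SIZE = 16 * 1024   # bytes per SPM

# (code size, data size) in bytes, from the benchmark table
footprints = {
    "a2time": (3108, 5420),
    "rspeed": (1956, 6864),
    "canrdr": (2724, 8030),
    "corner-turn": (2032, 8192),
    "transitive": (2080, 3024),
}

def can_share(task_a: str, task_b: str) -> bool:
    code = footprints[task_a][0] + footprints[task_b][0]
    data = footprints[task_a][1] + footprints[task_b][1]
    return code <= SPM_SIZE and data <= SPM_SIZE
```

Under this model every pair from the table fits; the tightest case is canrdr with corner-turn, whose combined data of 16,222 B lies just under the 16-KB limit.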

The results in the table are read as a cache stall ratio: the time the CPU stalls while the cold cache fetches cache lines from main memory, divided by the execution time of the task on a cold cache. A higher ratio means that the execution time of a task improves more due to the improved memory-subsystem performance, which the SPM provides in this case. The corner-turn benchmark, commonly used in digital signal processing, scored the highest ratio, about 19\%, because it performs both unit-stride and non-unit-stride accesses. Figure \ref{fig.cornerturn_ratio} depicts the cache stall ratio of this benchmark on our platform.
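The ratio column of Table \ref{tbl:Benchmarks} can be reproduced directly from the cold- and hot-cache cycle counts, as this short verification script of ours shows:

```python
# Recompute the cache stall ratio from the benchmark table:
# ratio (%) = (cold-cache cycles - hot-cache cycles) / cold-cache cycles.
cold_hot = {
    "a2time": (105162, 100776),
    "rspeed": (59420, 55624),
    "canrdr": (50720, 47419),
    "corner-turn": (20681, 16726),
    "transitive": (105661, 102995),
    "synthetic": (12507, 2592),
}

def stall_ratio(cold: int, hot: int) -> float:
    return 100.0 * (cold - hot) / cold

ratios = {name: round(stall_ratio(c, h), 2)
          for name, (c, h) in cold_hot.items()}
# e.g. ratios["corner-turn"] == 19.12 and ratios["synthetic"] == 79.28
```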

On the other hand, the transitive benchmark scored the lowest ratio, as it performs regular unit-stride accesses $N^3$ times the number of elements. The other benchmarks scored fair ratios depending on each benchmark's nature. Note that, in this particular implementation of the proposed platform, the CPU and the main memory (SDRAM) operate at the same frequency; a significant performance improvement is expected when the CPU operates at a higher frequency than the main memory. It is worth mentioning that performance is not the only objective of our design. The main objective is to achieve predictable execution times for \CTs \ with minimum impact on performance and cost. The proposed design is very cost effective, as it has a tiny hardware implementation that is fast and independent of the number of tasks; it also offers seamless software porting.
 
\begin{figure}[!h]
\centering
\includegraphics[clip=true,scale=.75, trim=0cm 0cm 0cm 0cm]{../figures/cornerTurn_ratio.pdf}
\caption{The cache stall ratio for the corner-turn benchmark}
\label{fig.cornerturn_ratio}
\end{figure}
 
%End subSection 
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%End subSection  

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Schedulability Evaluation}
\label{sec:Schedulability-Evaluation}
The scheduling scheme proposed for the platform, explained in Chapter \ref{chapter.system_model}, is based on hiding the latency of loading and unloading the scratchpads by overlapping the execution of \CTs \ with DMA transfers. This novel technique significantly improves the system schedulability. The theoretical background on how to compute and generate the slotted schedule is detailed in Chapter \ref{chapter.Schedulability_Analysis}. This section evaluates the proposed scheduling scheme and compares it to a recent work \cite{Whitham2012}. As described in Chapter \ref{chapter.Schedulability_Analysis}, the formulation of the schedulability problem is kept simple. The Z3 SMT solver \cite{Moura2008} is used to solve the satisfiability problem and produce a correct working order of tasks within the minor cycle.

The applications in Table \ref{tbl:Benchmarks}, excluding the synthetic benchmark, are used to generate sets of random tasks. Given a target system utilization, an application is randomly selected and assigned a random period from a predefined set of harmonic periods, \{10 ms, 20 ms, 40 ms\}, and the task's utilization is then computed. At every iteration a new task is randomly generated, and the generation stops when the sum of the individual tasks' utilizations reaches the required system utilization. After that, the overhead is added and the slot sizes are computed. All the constraining equations described in Chapter \ref{chapter.Schedulability_Analysis} are then applied to the generated set of tasks using the Z3 SMT solver. If there is a valid schedule, Z3 returns sat (satisfiable) and gives a working assignment for the binary indicators that represent tasks and slots. Otherwise, Z3 returns unsat (unsatisfiable), indicating that there is no valid schedule.
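The task-set generation loop described above can be sketched as follows. The per-application execution times are illustrative placeholders derived from the cold-cache cycle counts at roughly 100 MHz, not measured WCETs, and the constraint-solving step with Z3 is omitted:

```python
import random

APPS = ["a2time", "rspeed", "canrdr", "corner-turn", "transitive"]
PERIODS_MS = [10, 20, 40]   # predefined set of harmonic periods

# Illustrative execution times in ms (cold-cache cycles / ~100 MHz);
# the real generator would use the platform's measured WCETs.
EXEC_MS = {"a2time": 1.05, "rspeed": 0.59, "canrdr": 0.51,
           "corner-turn": 0.21, "transitive": 1.06}

def generate_task_set(target_util, rng=random.Random(0)):
    """Add random tasks until the summed utilization reaches the target."""
    tasks, util = [], 0.0
    while util < target_util:
        app = rng.choice(APPS)
        period = rng.choice(PERIODS_MS)
        u = EXEC_MS[app] / period
        tasks.append((app, period, u))
        util += u
    return tasks, util
```

Each generated set is then extended with the overheads, its slot sizes computed, and handed to Z3 together with the constraints of Chapter \ref{chapter.Schedulability_Analysis}.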

It is obvious from the description in Chapter \ref{chapter.Schedulability_Analysis} that assigning slots within a minor cycle as long as the smallest period incurs more context switches, and hence more overhead. However, we adopt this method for two reasons. First, evaluating the platform requires putting it in a challenging situation to get better feedback on its performance, which also makes the comparison to other scheduling schemes fairer. Second, it is less complex than other schedulability formulations, and solving a complex schedulability problem is not the main focus of this thesis.

On our platform, we simulated randomly generated sets of tasks using two scheduling techniques: the Carousel technique presented in \cite{Whitham2012} and our proposed technique. We set the same system parameters for both schemes, such as the context-switch and DMA times. Carousel schedulability is verified by applying the response-time analysis described by the authors of Carousel. The main difference between the two schemes is that Carousel blocks tasks' execution (the CPU) to complete scratchpad load and unload operations, as explained earlier in Chapter \ref{chapter.Background}. Harmonic periods favour Carousel, because with harmonic periods rate-monotonic scheduling can schedule up to 100\% utilization. Figure \ref{fig.schedulability_comp} depicts the significant improvement in schedulability over the Carousel technique. Each point in the graph represents 100 task sets. Our technique successfully hides the latency of loading and unloading the scratchpad, which leads to this result.


\begin{figure}[!h]
\centering
\includegraphics[clip=true,scale=.70, trim=0cm 0cm 0cm 0cm]{../figures/schedulability_comp.pdf}
\caption{Our schedulability mechanism outperforms the Carousel scheme}
\label{fig.schedulability_comp}
\end{figure}


It is recommended to follow the application design guidelines provided in Chapters \ref{chapter.system_model} and \ref{chapter:SoftwareBuldFlow} in order to get the most out of the platform. However, testing the platform with sets of tasks that violate the size constraint provides further information about the system's behaviour in such situations. This can be done either by increasing the tasks' working sets or by reducing the size of the scratchpad. Evaluating the system schedulability with a reduced scratchpad is interesting, as some tasks will no longer fit into the scratchpad together as they did before. In order to show the pure effect of the scratchpad size on schedulability, a low utilization point (10\%) is selected. Figure \ref{fig.schedulability_SPM_Effect} depicts the effect of changing the size of the scratchpad on the system schedulability. For instance, when the SPM size is reduced to 12 KB, only 20\% of the task sets are schedulable. In addition, it can be noticed from the figure that schedulability is a quadratic function of the scratchpad size until it saturates. Therefore, it is important to consider the cost, which can be expressed as a trade-off between on-chip memory size and system utilization. Unlike a cache, where the size has a similar effect on the average performance, the performance here is guaranteed once a set of tasks has a valid schedule. In Carousel, schedulability is not directly affected by the scratchpad size unless a task cannot entirely fit into the scratchpad. However, Carousel is affected by the time required to swap blocks between the scratchpad and main memory; thus, task sizes have a direct effect on Carousel's schedulability. In addition, Carousel does not utilize a cache besides the scratchpad; therefore, Carousel requires less on-chip memory than we require.

\begin{figure}[!h]
\centering
\includegraphics[clip=true,scale=.75, trim=0cm 0cm 0cm 0cm]{../figures/schedulability_SPM.pdf}
\caption{The effect of changing the size of the scratchpad on schedulability}
\label{fig.schedulability_SPM_Effect}
\end{figure}

Finally, when the set of harmonic periods assigned randomly to the participating tasks is enlarged, some tasks will have large periods and small execution times. As a result, their slot sizes will not be long enough for the DMA to load other tasks into the scratchpads. In addition, tasks with smaller slots incur relatively more overhead compared to their slot sizes. Figure \ref{fig.schedulability_Periods_Effect} shows the effect of including larger periods in the set of harmonic periods. A high utilization point (90\%) is selected in order to show the system's tolerance for overhead.
The graph shows that the system has a good tolerance to the overhead caused by scheduling many tasks with tiny slots. The system drops below 10\% schedulability only when the base period has been doubled 20 times, i.e., with a set of 21 harmonic periods; for a 10 ms base period, the maximum period is then $10 \times 2^{20} = 10{,}485{,}760$ ms.
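The growth of the harmonic period set can be made concrete with a small helper (the function name is ours):

```python
# Build a harmonic period set by repeatedly doubling a base period.
def harmonic_periods(base_ms, count):
    """Return [base, 2*base, 4*base, ...] with `count` entries."""
    return [base_ms * 2**i for i in range(count)]

periods = harmonic_periods(10, 21)   # 10 ms up to 10 * 2**20 ms
# periods[-1] == 10485760 ms, the maximum period quoted above
```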

\begin{figure}[!ht]
\centering
\includegraphics[clip=true,scale=.75, trim=0cm 0cm 0cm 0cm]{../figures/schedulability_Periods.pdf}
\caption{The effect of increasing the set of the harmonic periods on schedulability}
\label{fig.schedulability_Periods_Effect}
\end{figure}

Throughout this chapter, the functionality and performance of the proposed system were evaluated. The proposed system achieved the design objectives, which aim to diminish the gap between general-purpose architectures and real-time applications. In addition, the proposed system achieved significant improvements in both execution performance and system schedulability. The next chapter concludes the thesis and highlights future work. 

%End subSection 
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%End Chapter
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
