\section{Comparison Result}
\vspace{-0.5em}
This chapter presents the comparison results of experiments performed with the DSE framework described in Section \ref{DSEframeworkSection}. First, the benchmark used in the project is introduced. Second, the two deadline calculations, the max-plus algorithm and the Dijkstra algorithm, are compared with respect to the execution cost of the deadline calculation in the preprocessing step. Next, the three scheduling algorithms (SSFP, DSWC and DSPD) are compared and the results are analyzed. In addition, the prioritized deadline used in DSPD is illustrated.
\vspace{-0.5em}
\subsection{Benchmark}
\vspace{-0.5em}
To obtain the experimental results, the Embedded System Synthesis Benchmarks Suite (E3S), version 0.9, is used in this project. E3S is mostly based on data from the Embedded Microprocessor Benchmark Consortium (EEMBC), which is "a non-profit organization formed in 1997 with the aim of developing meaningful performance benchmarks for the hardware and software used in embedded systems"\cite{EEMBC}.

E3S is intended for embedded systems synthesis research, such as automated system-level allocation, assignment, and scheduling. It includes 17 processors, e.g., the IBM PowerPC 405GP, the AMD K6-IIIE, and the Motorola MPC555. "These processors are characterized based on the measured execution times of 47 tasks, power numbers derived from processor datasheets, and additional information, e.g., die sizes, some of which were necessarily estimated, and prices gathered by emailing and calling numerous processor vendors."\cite{benchmark} The task sets follow the organization of the EEMBC benchmarks. E3S contains five application suites: automotive/industrial, consumer, networking, office automation, and telecommunications.

Before the different algorithms are compared, the benchmark configuration used in the experiments is shown in Table \ref{tab:benchmark}. The application suite telecommunications includes 9 threads, and networking includes 4 threads; their task graphs are illustrated in the appendix. The architecture contains seven cores. Table \ref{tab:benchmark} lists the core identifiers used in the benchmark, such as Core\_11, together with the concrete processor type of each core, such as the IDT79RC64575.

\begin{table}[h]
\begin{center}
\begin{tabular}[t]{l|c|c} % l and c stand for column
\hline  % draw a line of table, and between hline is row
\multirow{2}{*}{Application} & \multicolumn{2}{|c}{telecommunications}\\
   & \multicolumn{2}{|c}{networking}\\
\hline
\multirow{7}{*}{Architecture}  & Core\_11 & IDT79RC64575 \\
 & Core\_21 & AMD K6-IIIE+\\
 & Core\_30 & Motorola MPC555 \\
 & Core\_38 & AMD K6-IIIE+ \\
  & Core\_40 & IBM PowerPC 405GP \\
   & Core\_43 & IDT79RC32364 \\
    & Core\_67 & TI TMS320C6203 \\
\hline
\end{tabular}
\end{center}
\caption{Benchmark}
\label{tab:benchmark}
\end{table}
\vspace{-0.5em}
\subsection{Max-Plus Algorithm vs. Dijkstra Algorithm}
\vspace{-0.5em}
In the framework, the max-plus algorithm and the Dijkstra algorithm are used to calculate the deadlines in the preprocessing part. Because this step needs only the application and the mapping options, selecting an architecture from the benchmark is not necessary. The deadline calculation in the framework takes only a very short time; the problem is rather that the task graphs in the benchmark do not contain many tasks. Therefore the application suite telecommunications is chosen, because it has the largest number of threads (9).

\begin{table}[h]
\begin{center}
\begin{tabular}[t]{l|c|c|c|c} % l and c stand for column
\hline  % draw a line of table, and between hline is row
\multicolumn{5}{c}{Execution time in milliseconds}  \\
\hline
Number of Threads & 18 & 27 & 36 & 45   \\
\hline
Dijkstra & 3.8 & 5 & 7.2 & 8.6  \\
Max-Plus & 1.8 & 2.2 & 2.6 & 3 \\
\hline
\end{tabular}
\end{center}
\caption{Execution time of the Dijkstra algorithm and the max-plus algorithm}
\label{tab:MaxPluvsDijkstra}
\end{table}

When the number of threads is 9, the execution time is too small to be measured reliably, so the number of threads is increased to 18, 27, 36 and 45. The results for these four configurations are displayed in Table \ref{tab:MaxPluvsDijkstra}. From these results it can be concluded that the Dijkstra algorithm needs more execution time for the deadline calculation than the max-plus algorithm.

\begin{figure}[htb!]
\centering
\includegraphics[width=10cm]{Chapter5/dijkstravsMaxplus.eps}
\caption{Execution time of the Dijkstra algorithm and the max-plus algorithm}
\label{fig:MaxPluvsDijkstra}
\end{figure}

To complete the analysis, Figure \ref{fig:MaxPluvsDijkstra} illustrates the comparison result as a line graph. As the number of threads grows, the execution time of the max-plus algorithm increases slowly, while the execution time of the Dijkstra algorithm increases rapidly. The difference in execution time between the two algorithms grows from 2 milliseconds at 18 threads to 5.6 milliseconds at 45 threads. This result shows that the max-plus algorithm performs better than the Dijkstra algorithm as the network becomes larger. For a very large network, the Dijkstra algorithm needs too many steps to calculate the deadlines. The Floyd-Warshall algorithm computes the transitive closure of the CDFG: in each step the longest path between a pair of nodes is updated, which the framework implements with a triple loop. Compared with the Dijkstra algorithm combined with the Floyd-Warshall algorithm, the max-plus algorithm needs the same number of steps to obtain the deadlines; the only difference between them is the matrix size. Therefore, the larger the network, the better the max-plus algorithm performs compared with the Dijkstra algorithm with the Floyd-Warshall algorithm.
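As a small illustration of the two longest-path computations discussed above, the following Python sketch computes longest paths on a toy four-node chain, once with a Floyd-Warshall-style triple loop and once with repeated max-plus matrix products. The graph, its weights, and the function names are illustrative assumptions and not taken from the framework.

```python
NEG = float("-inf")

def floyd_warshall_longest(w):
    """Triple-loop transitive closure for longest paths; w[i][j] is the
    edge weight from i to j, or -inf if there is no edge."""
    n = len(w)
    d = [row[:] for row in w]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] > d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

def maxplus_mul(a, b):
    """Matrix product in the max-plus semiring: 'plus' is max, 'times' is +."""
    n = len(a)
    return [[max(a[i][k] + b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Toy chain src -> t1 -> t2 -> sink with execution latencies as edge weights.
w = [[NEG, 3, NEG, NEG],
     [NEG, NEG, 4, NEG],
     [NEG, NEG, NEG, 2],
     [NEG, NEG, NEG, NEG]]

fw = floyd_warshall_longest(w)

# For the max-plus variant, put 0 on the diagonal so shorter paths survive;
# on a DAG, n-1 successive products reach the closure.
wz = [[0 if i == j else w[i][j] for j in range(len(w))] for i in range(len(w))]
mp = wz
for _ in range(len(w) - 1):
    mp = maxplus_mul(mp, wz)
# Both agree: the longest src -> sink path has length 3 + 4 + 2 = 9.
```

Both variants perform the same cubic amount of work per closure; the practical difference lies in the constant factors and the matrix size, which matches the measurements above.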

As explained in Section \ref{BestCaseAverageCaseandWorstCase}, the prioritized deadline in DSPD is only used for the slack calculation in LST scheduling; the actual deadlines of the tasks remain the worst-case deadlines. Currently, DSPD therefore computes the deadlines twice: the worst-case deadline via the Dijkstra algorithm, and the prioritized deadline via the max-plus algorithm. As a consequence, DSPD spends more time on the deadline calculation than DSWC. In the future, the worst-case deadline in DSPD could also be calculated via the max-plus algorithm in order to reduce the run time.

\vspace{-0.5em}
\subsection{Prioritized Deadline}
\vspace{-0.5em}
An important feature of DSPD is that it changes the scheduling of tasks via priority control. Consequently, the type of deadline in the framework changes from the worst-case deadline in DSWC to the prioritized deadline in DSPD. The following example explains the difference between the two types of deadline.

Figure \ref{fig:telecom1} in the appendix shows the task graph of telecom1 in the application suite telecommunications. The deadline of the thread telecom1 is 5000, and the deadline of its end task (sink) is also 5000. Depending on the priority of a thread, the selection of mapping options leads to different execution latencies and thus to different deadlines. Sometimes, however, the number of available mapping options is smaller than the number of different priorities; for example, there may be four different priorities but only three available mapping options. As explained in Section \ref{BestCaseAverageCaseandWorstCase}, the priority range (1-100) is then divided into 3 parts according to the number of available mapping options. Each task determines its part and selects the mapping option corresponding to its priority.
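How a priority in the range 1-100 could be mapped onto one of the available mapping options can be sketched as follows; the partitioning formula and function name are assumptions for illustration, not the framework's exact implementation.

```python
def select_mapping_option(priority, num_options):
    """Map a priority in [1, 100] onto a mapping-option index in
    [0, num_options): the range is split into num_options equal parts."""
    assert 1 <= priority <= 100 and num_options >= 1
    part_size = 100 / num_options           # width of each priority part
    index = int((priority - 1) // part_size)
    return min(index, num_options - 1)      # guard the upper boundary
```

With three available options, priority 1 falls into the first part (worst case), priority 50 into the second (average case), and priority 100 into the third (best case), matching the three columns of the deadline table below.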

Table \ref{tab:DeadlineofTelecom1} shows the deadlines of the tasks in the different situations. The first task (src) has the largest deadline difference among the deadline types: the difference between the average case and the worst case is 879, and between the best case and the worst case it is 3405. Tasks with shorter deadlines obtain higher priority in LST scheduling; they are served earlier and are more likely to be executed on the faster processor cores, especially in a weakly utilized system, e.g. at system start.
\begin{table}[h]
\begin{center}
\begin{tabular}[t]{l|c|c|c} % l and c stand for column
\hline  % draw a line of table, and between hline is row
Thread  & Worst case  & Average Case   & Best Case\\
(telecom1) & (Priority=1) & (Priority=50) & (Priority=100)\\
\hline
src & 3618 & 2739 & 213    \\
\hline
ac2 & 4243 & 3372 & 864   \\
\hline
fpba2 & 4609 & 4158 & 2828 \\
\hline
ce2 & 4740 & 4740 & 4650 \\
\hline
fft2 & 4804 & 4804 & 4714 \\
\hline
sink & 5000 & 5000 & 5000 \\
\hline
\end{tabular}
\end{center}
\caption{Worst/average/best case deadlines of thread telecom1}
\label{tab:DeadlineofTelecom1}
\end{table}
\vspace{-0.5em}
\subsection{Fixed-Priority vs. Dynamic-Priority Scheduling}
\vspace{-0.5em}
The scheduling of tasks in SSFP is based on the fixed-priority scheduling algorithm RMS. RMS considers only the periods of the threads as the scheduling rule and never takes the deadlines of the tasks into account. Some threads have short periods but in fact cannot meet their deadlines; according to RMS, however, they receive higher priority and run first. As a result they occupy the faster processors, and not only these threads but also others miss their deadlines. This situation typically appears when the resources are not sufficient for all tasks, i.e. when the system is under-provisioned.
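The rate-monotonic rule described above can be sketched in a few lines of Python; the thread names and periods below are illustrative only, not taken from the benchmark.

```python
def rms_priorities(threads):
    """threads: dict mapping thread name -> period.
    Returns dict name -> priority, where a larger number means a higher
    priority: shorter periods get higher priorities. Note that the
    deadlines play no role in this assignment."""
    by_period = sorted(threads, key=lambda name: threads[name], reverse=True)
    return {name: rank + 1 for rank, name in enumerate(by_period)}

prio = rms_priorities({"net0": 20, "net1": 5, "net2": 10})
# net1 has the shortest period and therefore the highest priority.
```

Because only the period enters the rule, a short-period thread that cannot meet its deadline anyway still preempts others, which is exactly the failure mode observed for SSFP below.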

\begin{table}[h]
\begin{center}
\begin{tabular}[t]{l|c|c|c|c|c} % l and c stand for column
\hline  % draw a line of table, and between hline is row
Number of Cores & 7 & 6 & 5 & 4 & 3  \\
\hline
SSFP& 12 & 10 & 6 & 1 & 1 \\
DSWC & 12 & 12 & 8 & 4 & 1\\
DSPD & 12 & 12 & 7 & 5 & 3\\
\hline
\end{tabular}
\end{center}
\caption{Number of executed threads}
\label{tab:NumberofExecutedThreads}
\end{table}

This experiment counts the number of executed threads in order to compare SSFP, DSWC and DSPD. Three threads in the application suite "networking" share the same task graph structure, and a large portion of the task types in them are identical, so in a low-provision system they are likely to miss their deadlines. In the experiment, the application suite "networking" is scheduled with the three different scheduling algorithms SSFP, DSWC and DSPD. The architecture consists of seven cores: Core\_11, 21, 30, 38, 40, 43 and 67. The deadline of each thread is also reduced in order to make deadline misses more likely. Table \ref{tab:NumberofExecutedThreads} shows the execution results for the different architecture configurations. At first, all seven cores are used for the execution of the tasks. Then the core with the highest utilization is identified, removed from the architecture together with its entries in the mapping options, and the experiment is run again. This step is repeated until the remaining processor cores can no longer execute the threads. In this way a situation of resource shortage is created in order to examine the performance of the three scheduling algorithms.
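The iterative core-removal procedure above can be sketched as follows. Here `run_schedule` stands in for the framework's scheduler and is an assumed interface returning the per-core utilization and the number of executed threads; the data layout is likewise an assumption.

```python
def core_removal_experiment(cores, mapping, run_schedule):
    """cores: list of core names; mapping: task -> list of (core, wcet)
    options. Repeatedly removes the most utilized core, prunes its mapping
    options, and re-runs the schedule on the remaining architecture."""
    results = {}
    while cores:
        utilization, executed = run_schedule(cores, mapping)
        results[len(cores)] = executed           # record executed threads
        busiest = max(cores, key=lambda c: utilization.get(c, 0.0))
        cores = [c for c in cores if c != busiest]
        # drop mapping options that target the removed core
        mapping = {t: [(c, w) for (c, w) in opts if c != busiest]
                   for t, opts in mapping.items()}
        if any(not opts for opts in mapping.values()):
            break  # some task has no remaining mapping option
    return results
```

The loop terminates either when the core list is exhausted or when a task loses its last mapping option, which models the point where the remaining cores can no longer execute the threads.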

\begin{figure}[htb!]
\centering
\includegraphics[width=10cm]{Chapter5/numberofExecutedThreads.eps}
\caption{Number of executed threads}
\label{fig:NumberofExecutedThreads}
\end{figure}

Based on Table \ref{tab:NumberofExecutedThreads}, Figure \ref{fig:NumberofExecutedThreads} illustrates the relationship between the number of executed threads and the number of cores for the different scheduling algorithms. With seven cores in the architecture, 12 threads are executed with each of the three scheduling algorithms. When the number of cores decreases, however, SSFP executes fewer threads than DSWC and DSPD, except when the number of cores is 3. This result indicates that SSFP lacks any consideration of deadlines: several tasks of a thread are scheduled, while other tasks of the same thread miss their deadlines and are canceled.

Figure \ref{fig:NumberofExecutedThreads} also shows that DSPD performs better than DSWC when the number of processor cores is small. Because DSPD gives higher priority to the threads with shorter periods, these threads have shorter deadlines and run first under LST scheduling; they are therefore more likely to terminate before their deadlines.
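The difference between DSWC and DSPD in LST scheduling can be sketched as follows: both select the ready task with the least slack, but DSPD substitutes the prioritized deadline into the slack. The field names and the numbers in the example are illustrative (the `src` values loosely resemble the telecom1 deadlines above), not the framework's data structures.

```python
def lst_pick(ready_tasks, now, use_prioritized=False):
    """Select the ready task with the least slack, where
    slack = deadline - now - remaining execution time.
    DSWC uses the worst-case deadline; DSPD the prioritized one."""
    def slack(task):
        d = task["prio_deadline"] if use_prioritized else task["wc_deadline"]
        return d - now - task["remaining"]
    return min(ready_tasks, key=slack)

tasks = [
    {"name": "src", "wc_deadline": 3618, "prio_deadline": 213, "remaining": 100},
    {"name": "x",   "wc_deadline": 1000, "prio_deadline": 900, "remaining": 100},
]
# Under worst-case deadlines (DSWC) task 'x' has the least slack; under
# prioritized deadlines (DSPD) 'src' does, so it is served first instead.
```

This is exactly the mechanism by which a high-priority thread in DSPD is pulled forward in the schedule even though its actual (worst-case) deadline is far away.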

\vspace{-0.5em}
\subsection{New vs. Traditional Dynamic-Priority Scheduling}
\vspace{-0.5em}
In DSPD, the threads with shorter periods are given higher priority, so the tasks in these threads are served earlier, which reduces the response time. They are more likely to be executed on the faster processor cores, especially in a weakly utilized system, e.g. at system start.

This experiment compares the response times of threads between DSWC and DSPD, using the application suite "telecommunications"; figures in the appendix illustrate its task graphs. According to their periods, the threads telecom0 to telecom4 are given the lowest priority, while the threads telecom5 to telecom8 have the highest priority. Because the utilization of Core\_11, 30 and 40 is zero or very low when the application is executed on all seven processor cores, only the remaining cores Core\_21, 38, 43 and 67 are used in this experiment, but their number is doubled. The core with the lowest utilization is then removed from the architecture and the mapping options in order to test the algorithms in different architecture configurations. Table \ref{tab:averageResponseTimeofThreads} shows the average response time of the threads in each configuration. With four cores, deadline misses occur and distort the result, so this configuration is not shown in the table. The last row of the table indicates that, except in the three-core configuration, DSPD reduces the average response time of the threads compared with DSWC.

\begin{table}[h]
\begin{center}
\begin{tabular}[t]{l|c|c|c|c|c} % l and c stand for column
\hline  % draw a line of table, and between hline is row
Number of Cores & 8 & 7 & 6 & 5 & 3  \\
\hline
DSWC & 629 & 629 & 629 & 672 & 1696\\
DSPD & 599 & 599 & 599 & 661 & 1698\\
\hline
Difference & 30 & 30 & 30 & 11 & -2\\
\hline
\end{tabular}
\end{center}
\caption{Average response time of threads in DSWC and DSPD}
\label{tab:averageResponseTimeofThreads}
\end{table}

Tasks with higher priorities in DSPD are served earlier and are more likely to be executed on faster processor cores, especially in a weakly utilized system, e.g. at system start. This experiment compares the average response time of the threads with higher priorities in DSWC and DSPD, as shown in Table \ref{tab:averageResponseTimeofThreadswithHhigherPriorities}. Because telecom5 to telecom8 have shorter periods than telecom0 to telecom4, they have higher priorities. According to Table \ref{tab:averageResponseTimeofThreadswithHhigherPriorities}, these threads have a shorter average response time in DSPD than in DSWC.

\begin{table}[h]
\begin{center}
\begin{tabular}[t]{l|c|c|c|c|c} % l and c stand for column
\hline  % draw a line of table, and between hline is row
Number of Cores & 8 & 7 & 6 & 5 & 3  \\
\hline
DSWC & 194 & 194 & 194 & 269 & 791\\
DSPD & 172 & 172 & 172 & 193 & 762\\
\hline
Difference & 22 & 22 & 22 & 76 & 29\\
\hline
\end{tabular}
\end{center}
\caption{Average response time of threads with higher priorities in DSWC and DSPD}
\label{tab:averageResponseTimeofThreadswithHhigherPriorities}
\end{table}

The average utilization of the cores in DSWC and DSPD is displayed in Table \ref{tab:AverageUtilization}. Deadline misses do not influence the comparison of utilization, so the configuration with four cores is included in the table. With four and five cores, DSPD achieves a higher core utilization than DSWC; in the other configurations, the utilization is the same for both algorithms. DSPD is therefore able to improve the utilization of the chip resources.

\begin{table}[h]
\begin{center}
\begin{tabular}[t]{l|c|c|c|c|c|c} % l and c stand for column
\hline  % draw a line of table, and between hline is row
Number of Cores & 8 & 7 & 6 & 5 & 4 & 3  \\
\hline
DSPD & 0.16 & 0.18 & 0.21 & 0.26 & 0.32 & 0.45\\
DSWC & 0.16 & 0.18 & 0.21 & 0.24 & 0.29 & 0.45\\
\hline
Difference & 0 & 0 & 0 & 0.02 & 0.03 & 0\\
\hline
\end{tabular}
\end{center}
\caption{Average utilization of cores in DSWC and DSPD}
\label{tab:AverageUtilization}
\end{table}


\vspace{-0.5em}
\subsection{Summary}
\vspace{-0.5em}
This chapter has presented the experimental results obtained with the benchmark E3S in the framework. The analysis of these results helps to study the different algorithms and their respective advantages and disadvantages.

From the three comparisons among the different algorithms, the following conclusions are obtained. First, for the deadline calculation, the max-plus algorithm requires less time than the Dijkstra algorithm with the Floyd-Warshall algorithm; moreover, as the number of threads rises, the execution time increases slowly for the max-plus algorithm but quickly for the Dijkstra algorithm with the Floyd-Warshall algorithm. Second, SSFP is based on the fixed-priority scheduling algorithm RMS, which considers only the periods of the tasks or threads and never their deadlines. As a result, when the processing capacity of the cores is not sufficient for the tasks or threads, more threads miss their deadlines in SSFP than in DSWC and DSPD. Finally, DSPD is able to reduce the average response time of the tasks or threads with higher priority and to improve the utilization of the chip resources. In the experiments, DSPD also decreases the overall average response time of the threads in some configurations.











\clearpage
