\section{New Dynamic-Priority Scheduling}
\vspace{-0.5em}
This chapter introduces the new dynamic-priority scheduling technique: dynamic-priority scheduling with prioritized deadlines (DSPD), developed by Falko Guderian. The main goal of this thesis is to implement, evaluate and eventually improve this new scheduling technique. Accordingly, this chapter explains its theoretical principle, its functionality, the execution process and the class structure provided in the framework.

The last chapter explained how DSWC works in the framework. The principal difference between DSWC and DSPD is that DSPD accounts for priorities during the deadline calculation, yielding so-called prioritized deadlines. The first crucial point in this chapter is therefore the calculation of deadlines in the framework. In the following, a node in the graph corresponds to a task, and the graph is the task graph, which represents a thread.
\vspace{-0.5em}
\subsection{Deadline and Latency}
\vspace{-0.5em}
Before calculating deadlines in a complex network, a simple situation is used to introduce the technique for calculating task deadlines. To obtain a deadline, the concept of latency has to be known: "Latency is a measure of time delay experienced in a system"; the precise definition depends on the system and the time being measured~\cite{latency}. In this project, latency means the sum of the execution time and the transfer time, as shown in Equation \ref{latency}. The execution time is the time during which the task is executed on a processor, and the transfer time covers loading the input data provided by the task's predecessors as well as transferring the output data to its successors.
\begin{equation}
l=t_{e}+t_{t}
\label{latency}
\end{equation}
where $l$ is the latency, $t_{e}$ is the execution time (also called execution latency), and $t_{t}$ is the transfer time (also called transfer latency).

\begin{figure}[htb!]
\centering
\includegraphics[width=10cm]{Chapter4/latency1.eps}
\caption{Latency calculation}
\label{fig:Latency}
\end{figure}

From Equation \ref{latency}, once the execution latency and the transfer latency are determined, the calculation of the latency is simple, and the deadline is then obtained by a simple subtraction. Figure \ref{fig:Latency} shows a simple system with two tasks and two processors. Assume that the transfer latency is $t_{t}$, the execution latency of task B is $t_{e}$, and the deadline of task B is $d_{B}$.
\begin{equation}
d_{A}=d_{B}-t_{e}-t_{t}
\label{DeadlineCalculation}
\end{equation}
According to Equation \ref{DeadlineCalculation}, the deadline $d_{A}$ of task A is the difference between the deadline $d_{B}$ of task B and the latency $l$ ($t_{e}+t_{t}$). A task must complete the data transfer from its predecessors and its execution on the processor before its deadline.
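As a quick illustration, the relation above can be sketched in a few lines (Python used for illustration; the latency values are hypothetical, not taken from the figure):

```python
def predecessor_deadline(d_succ, t_e, t_t):
    """d_A = d_B - t_e - t_t: a task's deadline derived from its successor's."""
    return d_succ - t_e - t_t

# hypothetical values: task B's deadline 20, execution latency 4, transfer latency 2
print(predecessor_deadline(20, 4, 2))  # → 14
```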

This is only a simple example; in larger application and architecture models a task usually has more than one predecessor and successor, which leads to a more complicated calculation. It is therefore necessary to find a more systematic method to calculate the deadlines. The following introduces the max-plus algorithm applied in DSPD and the Dijkstra algorithm used in DSWC.
\vspace{-0.5em}
\subsection{Scheduling via the Max-Plus Algorithm}
\vspace{-0.5em}
"In max-plus algorithm we work with the max-plus semi-ring which is the set $\mathbb{R}$ =${-\infty}$$\cup$$\mathbb{R}$ together with operations $a\oplus b = max(a,b)$ and $a\otimes b = a+b$. The additive and multiplicative identities are taken to be $\epsilon = -\infty$ and $e = 0$ respectively. Its operations are associative,
commutative and distributive as in conventional algebra."~\cite{maxplus} Though the adjacency matrix of the task graph, the max-plus algorithm is able to get the critical path "which is the the longest necessary path through a network of activities when respecting their interdependencies" ~\cite{criticalpath}. The critical path analysis helps figure out how long the network will complete and then gets the deadlines of tasks. In this project, DSPD uses the max-plus algorithm to achieve the deadline calculation, while DSWC uses the dijkstra algorithm to calculate the deadline.
The folloiwng discusses the max-plus algorithm, the dijkstra algorithm and their comparison.
\vspace{-0.5em}
\subsubsection{Longest Path}
\vspace{-0.5em}
In a network of $n$ nodes there are many paths from node $j$ to node $i$, and the purpose is to find the longest one among them. The simplest method is to compare each path with the others in order to obtain the longest one.

$A_{ij}$ denotes the distance directly from node $j$ to node $i$. In general $A_{ij} \neq A_{ji}$, because $A_{ji}$ refers to the opposite direction. Since the longest path is sought, the max operator is used, so $\oplus$ yields the maximum and $\epsilon=-\infty$~\cite{maxplusScheduling}~\cite{Floyd-Warshall}.

Sometimes the path from $j$ to $i$ via $k$ is longer than going directly from $j$ to $i$, which can be expressed as $A_{ik}+A_{kj}>A_{ij}$~\cite{maxplusScheduling}. It is then necessary to find the longest path with two links, shown in Equation \ref{twolink}. A link is an edge between two nodes.
\begin{equation} \max_{k=1,\ldots,n}(A_{ik}+A_{kj})
\label{twolink}
\end{equation}
In Equation \ref{addtwolink}, $(A^{2})_{ij}$ indicates the longest path with two links.
\begin{equation} (A^{2})_{ij}=\max_{k=1,\ldots,n}(A_{ik}+A_{kj})
\label{addtwolink}
\end{equation}
Since the purpose is to obtain the longest path, it is necessary to compare the one-link paths with the two-link paths and keep the longer ones~\cite{maxplusScheduling}.
\begin{equation*}(A\oplus A^{2})_{ij}\end{equation*}
The longest path is then obtained by selecting the longest one among the paths with one to $n$ links~\cite{maxplusScheduling}.
\begin{equation*}(A\oplus A^{2}\oplus A^{3}\ldots \oplus A^{n})_{ij}\end{equation*}
where $A^{n}=A^{n-1}\otimes A$.

\begin{figure}[htb!]
\centering%
\includegraphics[width=10cm]{Chapter4/MaxPlusScheduling1.eps}
\caption{A simple network}
\label{fig:A simple network}
\end{figure}

The following simple example shows in detail how to obtain the longest path. Figure \ref{fig:A simple network} shows a network; the purpose is to determine the longest paths between its nodes.
\begin{equation*}A=
\begin{pmatrix}
	\epsilon & \epsilon & \epsilon & \epsilon \\
	1 & \epsilon & \epsilon & \epsilon \\
	4 & \epsilon & \epsilon & \epsilon \\
	\epsilon & 2 & 3 & \epsilon  \end{pmatrix} \end{equation*}
First, matrix $A$ describes the paths with one link: $A_{ij}$ denotes the path from node $j$ to node $i$. The next step is to obtain the longest path among the paths with two links, shown in Equation \ref{pathwithtwolink}.
\begin{equation}A^{2}=A\otimes A=
\begin{pmatrix}
	\epsilon & \epsilon & \epsilon & \epsilon \\
	\epsilon & \epsilon & \epsilon & \epsilon \\
	\epsilon & \epsilon & \epsilon & \epsilon \\
	7 & \epsilon & \epsilon & \epsilon
\end{pmatrix}
\label{pathwithtwolink}
\end{equation}
The matrix calculation below is based on the max-plus algorithm and is similar to regular matrix multiplication, with $\oplus$ in place of addition and $\otimes$ in place of multiplication. Equation \ref{A41} shows in detail the calculation of $(A^{2})_{41}$ following the max-plus algorithm.
\begin{equation}
\begin{split}
(A^{2})_{41}=&(A_{41}\otimes A_{11})\oplus (A_{42}\otimes A_{21})\oplus (A_{43}\otimes A_{31})\oplus (A_{44}\otimes A_{41})\\
=&(\epsilon+\epsilon)\oplus (2+1)\oplus (3+4)\oplus (\epsilon+\epsilon)\\
=&\epsilon\oplus3\oplus7\oplus\epsilon\\
=&7
\end{split}
\label{A41}
\end{equation}
Because the longest path in this network has at most two links, Equation \ref{A+} only involves $A$ and $A^{2}$. The resulting matrix $A^{+}$ contains the longest paths in the network. Its last row holds the longest paths to the last node: 7, 2 and 3 are the lengths of the longest paths from nodes 1, 2 and 3, respectively, to node 4.
\begin{equation}
A^{+}=A\oplus A^{2}=
\begin{pmatrix}
	\epsilon & \epsilon & \epsilon & \epsilon \\
	1 & \epsilon & \epsilon & \epsilon \\
	4 & \epsilon & \epsilon & \epsilon \\
	\epsilon & 2 & 3 & \epsilon
\end{pmatrix}
\oplus
\begin{pmatrix}
	\epsilon & \epsilon & \epsilon & \epsilon \\
	\epsilon & \epsilon & \epsilon & \epsilon \\
	\epsilon & \epsilon & \epsilon & \epsilon \\
	7 & \epsilon & \epsilon & \epsilon
\end{pmatrix}
=
\begin{pmatrix}
	\epsilon & \epsilon & \epsilon & \epsilon \\
	1 & \epsilon & \epsilon & \epsilon \\
    4 & \epsilon & \epsilon & \epsilon \\
	7 & 2 & 3 & \epsilon
\end{pmatrix}
\label{A+}
\end{equation}
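The computation of $A^{+}$ above can be reproduced with a small max-plus sketch (Python used for illustration; \texttt{NEG} plays the role of $\epsilon$):

```python
import math

NEG = -math.inf  # epsilon, the max-plus additive identity

def mp_mul(A, B):
    """Max-plus matrix product: (A (x) B)_ij = max_k (A_ik + B_kj)."""
    n = len(A)
    return [[max(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mp_add(A, B):
    """Max-plus matrix sum: element-wise maximum."""
    return [[max(a, b) for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def longest_paths(A):
    """A^+ = A (+) A^2 (+) ... (+) A^n, the longest path matrix."""
    P, S = A, A
    for _ in range(len(A) - 1):
        P = mp_mul(P, A)
        S = mp_add(S, P)
    return S

# adjacency matrix of the four-node example (A_ij: path from node j to node i)
A = [[NEG, NEG, NEG, NEG],
     [1,   NEG, NEG, NEG],
     [4,   NEG, NEG, NEG],
     [NEG, 2,   3,   NEG]]

print(longest_paths(A)[3])  # → [7, 2, 3, -inf], the last row of A^+
```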
\vspace{-0.5em}
\subsubsection{Scheduling}
\label{sectionscheduling}
\vspace{-0.5em}
This section covers scheduling via the max-plus algorithm. Because the last example is very simple, a more complex network, shown in Figure \ref{fig:Graphofproject}, is considered. In this section a different method from the last example is used to calculate the longest paths of the network; it shows how ASAP and ALAP schedules can be computed via max-plus algebra. In the last section node 1 was chosen as the start node, but alternatively the last node, node 6, can be set as the initial node, going back towards node 1. The tasks in Figure \ref{fig:Graphofproject} are scheduled in the max-plus algorithm via ALAP, which depends on the longest path.

\begin{figure}[htb!]
\centering
\includegraphics[width=10cm]{Chapter4/MaxPlusScheduling2.eps}
\caption{Network}
\label{fig:Graphofproject}
\end{figure}
\begin{equation}
\begin{split}
x=&(A^{'}\otimes x)\oplus(B^{'}\otimes u)\\
  =&(A^{'2}\otimes x)\oplus(A^{'}\otimes B^{'}\otimes u)\oplus (B^{'} \otimes u)\\
  =&(A^{'2}\otimes x)\oplus(A^{'}\oplus e)\otimes B^{'} \otimes u\\
  &\vdots\\
  =&(A^{'}\oplus A^{'2}\oplus A^{'3}\oplus A^{'4}\oplus e)\otimes B^{'} \otimes u\\
  =&(A^{'+}\oplus e)\otimes B^{'} \otimes u
\end{split}
\label{maxplusEqu}
\end{equation}
Equation \ref{maxplusEqu} indicates the calculation process via the max-plus algorithm; the final result is shown in Equation \ref{maxplusFinalEqu}. First the notation is introduced. $A^{'}$ denotes the transpose of $A$. $e$ refers to the identity matrix, which has zeros on the diagonal and $\epsilon$'s in all other positions. $B^{'}$ is a column vector whose last element is zero while all others are $\epsilon$'s. $u$ is the negative value of the deadline of the task graph.

\begin{equation}
\hat{x}=-x=-(A^{'+}\oplus e)\otimes B^{'} \otimes u
\label{maxplusFinalEqu}
\end{equation}

From Figure \ref{fig:Graphofproject} the weighted adjacency matrix $A$ is easily obtained. It denotes the weights of the paths with one link in the network.
\begin{equation*}
A=
\begin{pmatrix}
	\epsilon & 5 & 3 & \epsilon & \epsilon & \epsilon \\
	5 & \epsilon & \epsilon & \epsilon & \epsilon & \epsilon \\
	3 & \epsilon & \epsilon & \epsilon & \epsilon & \epsilon \\
  \epsilon & 2 & \epsilon & \epsilon & 5 & \epsilon \\
  \epsilon & 1 & 4 & \epsilon & \epsilon & \epsilon \\
  \epsilon & \epsilon & 8 & 2 & 4 & \epsilon \\
\end{pmatrix}
\end{equation*}
According to Equation \ref{maxplusFinalEqu}, the following calculation needs the transposed matrix $A^{'}$ and the vector $B^{'}$.
\begin{equation*}
A^{'}=
\begin{pmatrix}
	\epsilon & 5 & 3 & \epsilon & \epsilon & \epsilon \\
	\epsilon & \epsilon & \epsilon & 2 & 1 & \epsilon \\
	\epsilon & \epsilon & \epsilon & \epsilon & 4 & 8 \\
  \epsilon & \epsilon & \epsilon & \epsilon & \epsilon & 2 \\
  \epsilon & \epsilon & \epsilon & 5 & \epsilon & 4 \\
  \epsilon & \epsilon & \epsilon & \epsilon & \epsilon & \epsilon \\
	 \end{pmatrix},
B^{'}=
\begin{pmatrix}
\epsilon\\
\epsilon\\
\epsilon\\
\epsilon\\
\epsilon\\
0\\
\end{pmatrix}
\end{equation*}
According to $A^{n}=A^{n-1}\otimes A$, the matrices $A^{'2}$, $A^{'3}$ and $A^{'4}$, which contain the weights of the paths with more than one link, are easily obtained.
\begin{equation*}	
A^{'2}=
\begin{pmatrix}
	\epsilon & \epsilon & \epsilon & 7 & 7 & 11 \\
	\epsilon & \epsilon & \epsilon & 6 & \epsilon & 5 \\
	\epsilon & \epsilon & \epsilon & 9 & \epsilon & 8 \\
  \epsilon & \epsilon & \epsilon & \epsilon & \epsilon & \epsilon \\
  \epsilon & \epsilon & \epsilon & \epsilon & \epsilon & 7 \\
  \epsilon & \epsilon & \epsilon & \epsilon & \epsilon & \epsilon \\
	 \end{pmatrix}
\end{equation*}
\vspace{4pt}
\begin{equation*}
 A^{'3}=
\begin{pmatrix}
	\epsilon & \epsilon & \epsilon & 12 & \epsilon & 11 \\
	\epsilon & \epsilon & \epsilon & \epsilon & \epsilon & 8 \\
	\epsilon & \epsilon & \epsilon & \epsilon & \epsilon & 11 \\
  \epsilon & \epsilon & \epsilon & \epsilon & \epsilon & \epsilon \\
  \epsilon & \epsilon & \epsilon & \epsilon & \epsilon & \epsilon \\
  \epsilon & \epsilon & \epsilon & \epsilon & \epsilon & \epsilon \\
	 \end{pmatrix}
\end{equation*}
\vspace{4pt}
\begin{equation*}
 A^{'4}=
\begin{pmatrix}
	\epsilon & \epsilon & \epsilon & \epsilon & \epsilon & 14 \\
	\epsilon & \epsilon & \epsilon & \epsilon & \epsilon & \epsilon \\
	\epsilon & \epsilon & \epsilon & \epsilon & \epsilon & \epsilon \\
  \epsilon & \epsilon & \epsilon & \epsilon & \epsilon & \epsilon \\
  \epsilon & \epsilon & \epsilon & \epsilon & \epsilon & \epsilon \\
  \epsilon & \epsilon & \epsilon & \epsilon & \epsilon & \epsilon \\
	 \end{pmatrix}
\end{equation*}
Since the longest path in the network contains at most four links, the longest path matrix $A^{'+}$ is obtained as follows.
\begin{equation*}
A^{'+}=A^{'}\oplus A^{'2}\oplus A^{'3}\oplus A^{'4}=
\begin{pmatrix}
	\epsilon & 5 & 3 & 12 & 7 & 14 \\
	\epsilon & \epsilon & \epsilon & 6 & 1 & 8 \\
	\epsilon & \epsilon & \epsilon & 9 & 4 & 11 \\
  \epsilon & \epsilon & \epsilon & \epsilon & \epsilon & 2 \\
  \epsilon & \epsilon & \epsilon & 5 & \epsilon & 7 \\
  \epsilon & \epsilon & \epsilon & \epsilon & \epsilon & \epsilon \\
	 \end{pmatrix}
\end{equation*}
The purpose is to obtain the deadline of each node (task). Here it is assumed that the deadline of the task graph (thread) is 20, i.e., the deadline of the sink task (node 6) is 20. Once the longest paths from the other nodes to node 6 are determined, the deadline of a node (task) is the difference between the deadline of node 6 and that node's distance to node 6. Hence, the distances to node 6 are extracted from the longest path matrix $A^{'+}$.
\begin{equation}
 (A^{'+}\oplus e)\otimes B^{'}=
\begin{pmatrix}
14\\
8\\
11\\
2\\
7\\
0\\
\end{pmatrix}
\label{longestPathC}
\end{equation}
Equation \ref{longestPathC} shows the vector of the longest paths from the other nodes to node 6. The last step is to obtain the deadline of each node (task); $\hat{x}$ is the resulting deadline vector.
\begin{equation*}
\begin{split}
 x=&(A^{'+}\oplus e)\otimes B^{'}\otimes u=
\begin{pmatrix}
-6\\
-12\\
-9\\
-18\\
-13\\
-20\\
\end{pmatrix}\\
\hat{x}=&-x=
\begin{pmatrix}
6\\
12\\
9\\
18\\
13\\
20\\
\end{pmatrix}
\end{split}
\end{equation*}
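The full deadline vector can be checked with a short script following the closed form $\hat{x}=-(A^{'+}\oplus e)\otimes B^{'}\otimes u$ (Python used for illustration; the matrix entries are copied from the example above):

```python
import math

NEG = -math.inf  # epsilon

def mp_mul(A, B):
    """Max-plus matrix product."""
    n = len(A)
    return [[max(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# transposed adjacency matrix A' of the six-node network
Ap = [[NEG, 5,   3,   NEG, NEG, NEG],
      [NEG, NEG, NEG, 2,   1,   NEG],
      [NEG, NEG, NEG, NEG, 4,   8],
      [NEG, NEG, NEG, NEG, NEG, 2],
      [NEG, NEG, NEG, 5,   NEG, 4],
      [NEG, NEG, NEG, NEG, NEG, NEG]]

n = len(Ap)
P, Aplus = Ap, [row[:] for row in Ap]   # A'^+ = A' (+) A'^2 (+) ...
for _ in range(n - 1):
    P = mp_mul(P, Ap)
    Aplus = [[max(a, b) for a, b in zip(r, s)] for r, s in zip(Aplus, P)]

u = -20                       # negative deadline of the task graph
B = [NEG] * (n - 1) + [0]     # B': selects the sink node
# x = (A'^+ (+) e) (x) B' (x) u ; deadline vector = -x
longest = [max(max(Aplus[i][j] + B[j] for j in range(n)), B[i])
           for i in range(n)]
deadlines = [-(v + u) for v in longest]
print(deadlines)  # → [6, 12, 9, 18, 13, 20]
```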
\vspace{-1em}
\subsection{Dijkstra Algorithm}
\vspace{-0.5em}
"Dijkstra's algorithm, conceived by Dutch computer scientist Edsger Dijkstra in 1956 and published in 1959, is a graph search algorithm that solves the single-source shortest path problem for a graph with non-negative edge path costs, producing a shortest path tree."~\cite{dijkstra} In the project, the longest necessary path is calculated to get the deadline. So it is necessary to make an extend for the dijkstra algorithm to make it available to the longest path. In order to explain the extend of the dijkstra algorithm, here take the same example in section \ref{sectionscheduling}. Firstly see Figure \ref{fig:Graphofproject2}. Just like max-plus algorithm, set the node 6 the initial node and the start time zero. Here the node which has start time is called the "scheduled node". Next is to find the predecessor of node 6. The dependence of nodes is used to get the start time of each node and then calculate the deadline.

\begin{figure}[htb!]
\centering
\includegraphics[width=10cm]{Chapter4/MaxPlusScheduling2.eps}
\caption{Network}
\label{fig:Graphofproject2}
\end{figure}

From Figure \ref{fig:Graphofproject2}, the predecessors of node 6 are nodes 3, 4 and 5. For nodes 3--5 it is checked whether all of their successors are scheduled nodes. Node 3 has two successors, nodes 5 and 6, of which node 5 is not yet scheduled. Node 5 also has the successor node 4, which is not yet scheduled. Node 4 has only the successor node 6, which is scheduled. Therefore only node 4 becomes a scheduled node, with start time 2.

Next the predecessors of node 4 are considered, namely nodes 2 and 5. All successors of node 5 are now scheduled nodes, but node 5 has two candidate start times. One comes from the direct edge to node 6: 4 (4+0), the weight between nodes 5 and 6 plus the start time of node 6. The other comes from the edge to node 4: 7 (5+2), the weight between nodes 5 and 4 plus the start time of node 4. In order to obtain the longest path, the larger one is chosen according to ALAP.



The predecessors of node 5 are nodes 2 and 3, whose successors are all scheduled nodes. Starting with node 2, there are two options: via node 4 with start time 4 (2+2), and via node 5 with start time 8 (1+7). According to ALAP, the path via node 5 is selected. Node 3 also has two options: via node 5 with start time 11 (4+7), and via node 6 with start time 8 (8+0). Again the larger one, via node 5, is chosen.

Finally node 1 is considered. Table \ref{tab:DijkstraAlgorithm} shows the calculation of the start time of node 1. Because the path via node 3 yields the larger value, the start time of node 1 is 14.

The final step is to calculate the deadline of each node (task). The deadline of node 6 is 20. Since the start times above correspond to the longest paths, i.e., the longest times from the other nodes to node 6, each deadline is the difference between the deadline of node 6 and the start time of the respective node. Table \ref{tab:DijkstraAlgorithm} shows the whole deadline calculation via the Dijkstra algorithm, and Table \ref{tab:DeadlineResult} the final deadlines.

\begin{table}[hbp!]
\footnotesize
\begin{center}
\begin{tabular}[t]{l|c|c|c|c|c|c} % l and c stand for column
\hline  % draw a line of table, and between hline is row
Node & $N_{6}$ & $N_{5}$ & $N_{4}$ & $N_{3}$ & $N_{2}$ & $N_{1}$ \\
\hline
Start & 0 &   & 2  &   &  &   \\
\hline
Start &     & 7($N_{5}\to N_{4}$)& & &  &\\
 &  & \st{4($N_{5}\to N_{6}$)} & & &  &  \\ %\st{} strikes out discarded values
\hline
Start &     & & & 11($N_{3}\to N_{5}$)& 8($N_{2}\to N_{5}$)  &\\
 &  &  & & \st{8($N_{3}\to N_{6}$)}& \st{4($N_{2}\to N_{4}$)} &  \\
\hline
Start &     & & & &   & 14($N_{1}\to N_{3}$)\\
 &  &  & & &  & \st{13($N_{1}\to N_{2}$)} \\
\hline
\end{tabular}
\end{center}
\caption{Dijkstra Algorithm}
\label{tab:DijkstraAlgorithm}
\end{table}

\begin{table}[h]
\begin{center}
\begin{tabular}[t]{l|c|c|c|c|c|c} % l and c stand for column
\hline  % draw a line of table, and between hline is row
Node & $N_{6}$ & $N_{5}$ & $N_{4}$ & $N_{3}$ & $N_{2}$ & $N_{1}$ \\
\hline
Start & 0 & 7 & 2  & 11  & 8 &  14 \\
\hline
Deadline &  20 & 13 & 18 & 9 & 12  & 6\\
\hline
\end{tabular}
\end{center}
\caption{Deadline result}
\label{tab:DeadlineResult}
\end{table}
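The table-based procedure above can be sketched as a small backward longest-path computation (Python used for illustration; the edge list is read off the figure):

```python
import math

# successors of each node with edge weights, as in the figure
edges = {1: [(2, 5), (3, 3)],
         2: [(4, 2), (5, 1)],
         3: [(5, 4), (6, 8)],
         4: [(6, 2)],
         5: [(4, 5), (6, 4)],
         6: []}

thread_deadline = 20
start = {v: -math.inf for v in edges}
start[6] = 0                 # the sink node is scheduled first

# schedule a node once all of its successors are scheduled,
# keeping the largest (ALAP) start time
pending = set(edges) - {6}
while pending:
    for v in sorted(pending):
        if all(s not in pending for s, _ in edges[v]):
            start[v] = max(w + start[s] for s, w in edges[v])
            pending.remove(v)
            break

deadlines = {v: thread_deadline - start[v] for v in edges}
print(start)      # start times of the nodes
print(deadlines)  # deadline of each node
```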

The Dijkstra algorithm yields the same deadline result as the max-plus algorithm. However, in the Dijkstra calculation every node except the last one must be processed individually, i.e., the number of calculation steps is the number of nodes minus one. For an enormous network the Dijkstra algorithm therefore needs many steps and much time. The max-plus algorithm, in contrast, performs the same sequence of matrix operations regardless of the number of nodes; only the matrix size differs. Chapter 5 presents comparison results between the Dijkstra and the max-plus algorithm, showing that the max-plus algorithm finishes the deadline calculation in a shorter time.
\vspace{-0.5em}
\subsection{Best Case, Average Case and Worst Case}
\label{BestCaseAverageCaseandWorstCase}
\vspace{-0.5em}
Up to this point the only difference between DSWC and DSPD would be the algorithm used for the deadline calculation; even though the max-plus algorithm may outperform the Dijkstra algorithm, both produce the same result and differ only in run time. The following therefore presents the conceptual difference between the two scheduling techniques.

"In computer science, best, worst and average cases of a given algorithm express what the resource usage is at least, at most and on average, respectively"~\cite{bestWostCase}. Average case analysis and worst case analysis are the most applied for the algorithm analysis. Best case analysis is less used, because it describes the performance of the system under the optimal condition. Especially in real-time systems, best case performance is hardly guaranteed. For example, the best case deadline is used in the project, that means, only when all tasks can be executed by the fastest processors, they will not miss the deadlines. But that is only obtained in the deadline calculation, that means, that exists only in the theoretical arithmetic. In the real process, by the reasons of the limited number of the processors, the task can not end before the best case deadline, which leads to the deadline missing. So the best case analysis in the project is of no concern.

The worst case analysis is the opposite of the best case analysis: it describes the system performance in the worst case scenario. Worst case analysis mainly applies to life-critical real-time systems, also called hard real-time systems, such as in aeronautics. Normally, such a system is over-provisioned due to the worst-case assumption, with higher costs in terms of area, power consumption or even reliability as a consequence. Considering average cases allows these problems to be overcome, so an average case analysis is needed. It lies between the best case and the worst case and is closer to the real situation, since the best case and the worst case mainly exist in the theoretical analysis. This work considers soft real-time systems, which are less critical, for example in telecommunications and networking.

The worst case deadline is derived from the worst case, so in most situations it guarantees that the tasks finish before it. DSWC is based on the worst case deadline: in the preprocessing part of the framework, DSWC selects the slowest processors for the tasks from the mapping options and, based on that, obtains the deadlines with the help of the Dijkstra algorithm. This makes the execution process of DSWC uncontrollable: once the application, the architecture, the mapping option and the configuration are determined, the schedule is fixed and nothing can modify it.

In the project, the worst case deadline is the deadline obtained when the tasks are executed by the slowest processors. The worst case analysis has a known problem: "It is typically impossible to determine the exact worst-case scenario. Instead, a scenario is considered such that it is at least as bad as the worst case."~\cite{bestWostCase} This means that in the real execution some tasks may still not terminate before the worst case deadline: the deadline calculation described in the first half of this chapter assumes an unlimited number of processors, while in the real execution the number is limited, so some tasks must wait for processors and occasionally miss their deadlines.

Therefore, in order to analyze the MPSoC, a new scheduling algorithm, DSPD, is built on the average case analysis. Unlike DSWC, DSPD uses a completely different deadline calculation, namely the average case deadline. In DSPD, the thread priorities are assigned before the deadline calculation, similar to RMS.
\vspace{-0.5em}
\begin{verbatim}
<state name="s0">
<method tmpl="telecom0" priority="50" format="YML"/>
</state>
<transition source="s0" target="s0" probability="1.00"/>
\end{verbatim}
\vspace{-0.5em}
The code example above describes the periodic execution of an application (thread) within a state machine. The application is wrapped by a state, including a transition to itself that is responsible for the periodicity. The attribute priority="50" means that the priority of this thread is 50. The priority ranges from 1 to 100; the larger the number, the higher the priority of the thread. The tasks in a thread have the same priority as the thread, so here they have medium priority. The tasks with their priorities are involved in the deadline calculation after the application has been loaded.

\begin{figure}[htb!]
\centering
\includegraphics[width=10cm]{Chapter4/priorityandLatecny.eps}
\caption{Priority and Latency}
\label{fig:priorityandLatecny}
\end{figure}

Figure \ref{fig:priorityandLatecny} describes how the execution latency is obtained from the different task priorities. In DSWC, a task always selects the slowest processor, while in DSPD the priority is the decisive factor in the mapping between tasks and processors. For example, assume three processors can execute task A. First the priority range (1--100) is divided into three parts: low priority (1--33), medium priority (34--66) and high priority (67--100). The priority of task A is 50, i.e., medium, so the mapping of task A onto PE2 is selected, and the corresponding execution time is 10. Because the transfer latency is fixed in the assumed system, the latency is the sum of the execution latency and the transfer latency. Afterwards the deadline of task A is obtained by means of the max-plus algorithm. In the current implementation of DSPD, if the priority of a thread is 1, the task deadlines equal the worst case deadlines; DSWC is thus the special case of DSPD with the lowest priority.
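The priority-band selection described above might be sketched as follows (Python for illustration; \texttt{select\_execution\_latency} and the even split of the priority range are assumptions for this sketch, not the framework's actual interface):

```python
def select_execution_latency(priority, options):
    """Pick an execution latency by priority band.

    `options` lists latencies from slowest to fastest processor;
    the 1-100 priority range is split evenly into len(options) bands.
    (Hypothetical helper -- the framework's real interface may differ.)
    """
    bands = len(options)
    band = min((priority - 1) // (100 // bands), bands - 1)
    return options[band]

# three processors able to execute task A, slowest first
latencies = [20, 10, 5]
print(select_execution_latency(50, latencies))  # → 10 (medium priority, PE2)
print(select_execution_latency(1, latencies))   # → 20 (lowest priority: worst case)
```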

With DSPD, the prioritized deadline makes the execution of threads controllable to a certain extent: according to the priority setting, a task with a high priority obtains a smaller deadline. In the scheduling part of the framework, scheduling is performed via LST. In the framework, the slack time is defined by the following equation:
\begin{equation}
s=d-t_{c}-l
\label{slackTime}
\end{equation}

Here $d$ is the deadline, $t_{c}$ the current time and $l$ the latency. Since $t_{c}$ varies over time and $l$ depends on the current load, a smaller deadline means a smaller slack, so the task is more likely to be started before other tasks. The selected tasks thus start earlier, which means the faster processors are more likely to be available, and consequently they have shorter response times.

However, the average case deadline leads to a problem: it is smaller than the worst case deadline, so tasks are more likely to miss it. Therefore, in the framework the average case deadline is only used for LST; in the other parts of the scheduling, the deadline in DSPD is still the worst case deadline.
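A minimal sketch of the LST selection based on the slack time equation above (Python for illustration; the two ready tasks and their values are hypothetical):

```python
def slack(deadline, current_time, latency):
    """Slack time: s = d - t_c - l, as defined in the text."""
    return deadline - current_time - latency

# two ready tasks at t_c = 0: (prioritized deadline, latency) -- hypothetical values
tasks = {"A": (10, 8), "B": (15, 8)}
winner = min(tasks, key=lambda t: slack(tasks[t][0], 0, tasks[t][1]))
print(winner)  # → A  (smaller prioritized deadline gives smaller slack)
```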

For example, assume a thread with high priority that contains only one task, so the start time of the thread is the start time of the task. Figure \ref{fig:executionPeriod} shows this thread under DSPD and DSWC. From Figure \ref{fig:executionPeriod}, the worst case deadline is 15 and the best case deadline is 10, so the execution interval in DSWC is 15 (15-0). In DSPD, however, the priority of the task is high, so it gets the best case deadline of only 10, and the execution interval is only 10 (10-0), which would lead to deadline misses. To resolve this problem, in the framework the prioritized deadline is only used for LST while the actual deadline of the task remains the worst case deadline. Thus in DSPD a task has the same execution interval as in DSWC.

\begin{figure}[htb!]
\centering
\includegraphics[width=10cm]{Chapter4/executionPeriod.eps}
\caption{Execution Interval}
\label{fig:executionPeriod}
\end{figure}

In the following, the advantage of DSPD is presented using a further example. Table \ref{tab:Latency} shows the mapping options, including the execution times of task A: it can be executed on processors PE1 and PE2 with latencies of 10 and 12, respectively. Figure \ref{fig:DSWCvsDSDP} displays the execution of task A in DSPD and DSWC. Although the start time of task A is time point zero, the actual start time in DSWC is 1: at time zero many tasks start, but the number of processors is insufficient, so some tasks must wait. With its high priority, task A has a smaller deadline in DSPD than in DSWC, so according to LST task A is more likely to be delayed in DSWC than in DSPD; the delay of task A in DSWC is assumed to be 1. For the same reason, task A is executed by PE1 in DSPD but by PE2 in DSWC. The response time of a task is:
\begin{equation}
t_{r}=t_{d}+l
\label{responseTime}
\end{equation}
where $t_{r}$ is the response time of the task and $t_{d}$ is the delay.

\begin{table}[h]
\begin{center}
\begin{tabular}[t]{l|c|c|c} % l and c stand for column
\hline  % draw a line of table, and between hline is row
Task & Start & PE1 & PE2 \\
\hline
A & 0 & 10 & 12  \\
\hline
\end{tabular}
\end{center}
\caption{One example of scheduling}
\label{tab:Latency}
\end{table}

\begin{figure}[htb!]
\centering
\includegraphics[width=10cm]{Chapter4/DSWCvsDSDP.eps}
\caption{DSWC vs DSPD}
\label{fig:DSWCvsDSDP}
\end{figure}

From Figure \ref{fig:DSWCvsDSDP}, the response time of task A is 10 (0+10) in DSPD and 13 (1+12) in DSWC, so task A has a smaller response time in DSPD than in DSWC. In Chapter 5, results from the comparison of DSPD and DSWC confirm the described behavior.
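The response-time comparison can be restated as a tiny computation (Python for illustration, with the delay and latency values from the example):

```python
def response_time(delay, latency):
    """t_r = t_d + l: delay plus latency."""
    return delay + latency

# task A: no delay on PE1 under DSPD; delay 1 and PE2 under DSWC
print(response_time(0, 10))  # → 10 (DSPD)
print(response_time(1, 12))  # → 13 (DSWC)
```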
\vspace{-0.5em}
\subsection{Class Structure}
\vspace{-0.5em}
The following describes the class structure of DSPD. In the scheduling part of the framework, the DSPD functionality equals that of DSWC except for the prioritized deadline used for LST. DSPD needs two different deadlines, so the prioritized deadline is first calculated and saved for LST, and then the worst case deadline is obtained and adopted as the actual deadline of the tasks. To realize DSPD, the calculation of the prioritized deadline has to be implemented within the preprocessing part of the framework and its result saved for LST.

Figure \ref{fig:flowChart} shows the prioritized deadline calculation via the max-plus algorithm. The first step is to obtain the adjacency matrix from the application, described by its task graph. Next, the transfer latency is obtained from the transferred data and the available bandwidth, and the execution latency is selected from the mapping options according to the priority of the task. Summing the transfer latency and the execution latency element-wise for each task yields the latency matrix. Then the max-plus algorithm computes the longest paths of the task graph. Finally, the difference between the deadline of the thread and the longest path gives the final result, the prioritized deadline.

\begin{figure}[htb!]
\centering
\includegraphics[width=9cm]{Chapter4/flowChart.eps}
\caption{Flow chart of the prioritized deadline calculation}
\label{fig:flowChart}
\end{figure}

The following code sketches the calculation of the prioritized deadline via the max-plus algorithm. \texttt{A} is the adjacency matrix of the thread and \texttt{n} is the matrix size. In the double loop, the transfer latency \texttt{l\_t} from node \texttt{j} to node \texttt{i} and the execution latency \texttt{l\_e} of node \texttt{i} are obtained; by adding \texttt{l\_t} and \texttt{l\_e}, matrix \texttt{A} turns from the adjacency matrix into the latency matrix. Then, following the max-plus algorithm, the longest path matrix \texttt{C} is computed, and finally the deadlines are calculated from \texttt{C}.

\clearpage 
\begin{Verbatim}[frame=single, xrightmargin=5cm]
get A;                  // adjacency matrix
for(i = 0; i < n; i++)
{
    for(j = 0; j < n; j++)
    {
        get l_t;        // transfer latency j -> i
        get l_e;        // execution latency of i
        A_ij = l_t + l_e;
    }
}
B = A;
C = A;
for(i = 0; i < n-2; i++)
{
    B = B (x) A;        // max-plus product
    C = C (+) B;        // element-wise maximum
}
calculate deadline via C;
\end{Verbatim}
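For completeness, the pseudocode above could be translated into a runnable sketch (Python used for illustration; the graph encoding and the zero execution latencies in the usage example are assumptions made so the result can be checked against the six-node network discussed earlier in this chapter):

```python
import math

NEG = -math.inf  # epsilon

def mp_mul(A, M):
    """Max-plus matrix product (the '(x)' in the pseudocode)."""
    n = len(A)
    return [[max(A[i][k] + M[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def prioritized_deadlines(edges, l_e, n, thread_deadline):
    """edges[(j, i)]: transfer latency of edge j -> i;
    l_e[i]: execution latency of node i, selected by its priority.
    Nodes are 0-based and node n-1 is the sink."""
    # first double loop: turn the adjacency matrix into the latency matrix
    A = [[NEG] * n for _ in range(n)]
    for (j, i), lt in edges.items():
        A[j][i] = lt + l_e[i]
    B = [row[:] for row in A]
    C = [row[:] for row in A]
    for _ in range(n - 2):
        B = mp_mul(B, A)                                # B = B (x) A
        C = [[max(a, b) for a, b in zip(r, s)]          # C = C (+) B
             for r, s in zip(C, B)]
    # deadline_i = thread deadline - longest path from node i to the sink
    return [thread_deadline - (C[i][n - 1] if C[i][n - 1] > NEG else 0)
            for i in range(n)]

# the six-node network from this chapter; execution latencies are folded
# into the edge weights here, so l_e is all zero (testing assumption)
edges = {(0, 1): 5, (0, 2): 3, (1, 3): 2, (1, 4): 1,
         (2, 4): 4, (2, 5): 8, (3, 5): 2, (4, 3): 5, (4, 5): 4}
print(prioritized_deadlines(edges, [0] * 6, 6, 20))  # → [6, 12, 9, 18, 13, 20]
```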


