\documentclass{sig-alternate}

\begin{document}

\title{02223 Project}
\subtitle{Very Simple Simulator \\
	  Response-time Analysis \\
	  List Scheduling}

\numberofauthors{2}

\author{
% 1st. author
\alignauthor
Julie Meinicke Nielsen, s113842, G09\\
       \affaddr{DTU}\\
       \email{s113842@student.dtu.dk}
% 2nd. author
\alignauthor
Mindaugas Laganeckas, s100972,  G09\\
       \affaddr{DTU}\\
       \email{s100972@student.dtu.dk}
}

\maketitle

During the development phase, the workload was spread evenly and both authors contributed equally to the final software and documentation.
\begin{table}[h!]
\begin{tabular}{l l}
Author &  Part of the project\\  
\hline
Mindaugas Laganeckas &		VSS\\
Julie Meinicke Nielsen &	RTA\\
Mindaugas Laganeckas &		ASAP priority assignment\\
Julie Meinicke Nielsen &	List scheduling\\
Both &						Task duplication approach\\
\end{tabular}
\end{table}

\begin{abstract}
This document describes our solution to the project for the 02223 course, Fundamentals of Modern Embedded Systems. The project focuses on the simulation, analysis and design of modern embedded
systems through the design and implementation of a software tool that can simulate and analyze an embedded application. More specifically, we have implemented:
\begin{enumerate}
	\item A Very Simple Simulator (VSS) for preemptive fixed-priority scheduling as described in \cite{project}.
	\item A Response-Time Analyzer (RTA) for preemptive fixed-priority scheduling as described in \cite{project}.
	\item A Scheduler for a system having more than one processor using list scheduling algorithm  as described in \cite{sinnen2005} and \cite{sinnen2007}.
\end{enumerate}

The remainder of this document presents our technical solution to the problem.
\end{abstract}

\section{Technical solution}
This section gives an overview of our system design and implementation, with a rough description of the main system components. The implementation subsection lists the implementation details of the main system components and the challenges we met.

The system design is described in more detail next.

\subsection{Design of VSS and RTA}
In this part we describe the system as a whole and then the design of the most important classes, without going into the details of the code.

First we will list our general assumptions for VSS and RTA \cite{project}:

\begin{itemize}
 \item Single processor.
 \item Strictly periodic tasks.
 \item All tasks are released as soon as they arrive.
 \item The periods are synchronized: all tasks start at the same time.
 \item Deadlines are equal to the periods.
 \item All tasks are independent.
 \item No precedence or resource constraints.
 \item No task can suspend itself.
 \item All overheads in the kernel are assumed to be zero.
 \item RTA uses the worst-case execution time (WCET) as each task's computation time.
\end{itemize}

The system (VSS and RTA) consists of three parts: the data input framework, the simulation visualization framework, and the main part, which comprises the remaining classes.

The main part consists of the scheduling tool placed in \textit{Simulator}, the analyzing tool placed in \textit{ResponseTimeAnalyzer} and the main class called \textit{SchedulabilityTestManager} for running the project. 

The purpose of the data input framework is to create the data input used by the \textit{Simulator} and the \textit{ResponseTimeAnalyzer}, while the purpose of the simulation visualization framework is to be able to visualize the schedules produced by the \textit{Simulator}.

\subsubsection{Job}
The \textit{Job} class is a data structure representing a job. A job is an actual run of a task in a specific period, and it is used by the simulator. The purpose of having jobs is to emulate the environment's influence on the processor: jobs of the same task often differ from each other, because the computation time of each job is created with some randomness.

\subsubsection{Task}
\textit{Task} class is a data structure representing a task.

\subsubsection{DataGenerator}
\textit{DataGenerator} class is responsible for creating the data set. That way it is created only once even though it is used by both \textit{Simulator} and \textit{ResponseTimeAnalyzer}.

\subsubsection{Simulator}
The \textit{Simulator} class is an implementation of the VSS (Very Simple Simulator) given in the project description \cite{project}. The algorithm takes a task set and a number of cycles as input. Based on the input, a set of jobs is created for each task. The actual length of each job is a random number between the task's best-case and worst-case execution times. The jobs are then sorted by release time and priority. Finally, in each execution time unit, the highest-priority job whose release time is less than or equal to the current time unit is picked. The simulator uses soft deadlines: if a job misses its deadline, the simulator continues to schedule it until the job completes.
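The selection loop described above can be sketched as follows. This is a minimal Python sketch, not the actual (Java) implementation; the names \texttt{Job} and \texttt{simulate} are ours, and we assume integer time units and that a smaller number means a higher priority.

```python
from dataclasses import dataclass

@dataclass
class Job:
    release: int        # release time of the job
    priority: int       # assumption: a smaller number = a higher priority
    remaining: int      # execution time still needed
    finish: int = -1    # completion time, filled in by the simulation

def simulate(jobs, cycles):
    """In each time unit, run the released, unfinished job with the
    highest priority (soft deadlines: a job runs until it completes)."""
    for t in range(cycles):
        ready = [j for j in jobs if j.release <= t and j.remaining > 0]
        if ready:
            job = min(ready, key=lambda j: j.priority)
            job.remaining -= 1
            if job.remaining == 0:
                job.finish = t + 1
```

A higher-priority job preempts a lower-priority one simply by being picked ahead of it in every time unit in which both are ready.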

\subsubsection{ResponseTimeAnalyzer}
The \textit{ResponseTimeAnalyzer} class is an implementation of the RTA (Response Time Analysis) given in the project description \cite{project} and shown in figure \ref{fig-rta}. The algorithm tests the schedulability of a task set $\Gamma$ under Deadline Monotonic. For each task in the task set, it analyzes whether the longest response time $R$ of the task is greater than the deadline $D$ of the task. The longest response time is computed by summing the computation time $C$ of the task with the interference $I$ of the task, which is due to the task being preempted by higher-priority tasks. The interference is computed from the computation times and periods of the tasks with higher priority than the current task; it is therefore important to process the tasks in sorted order. If none of the computed longest response times exceeds the deadline of the corresponding task, the task set is schedulable.

\begin{figure}[h!]
  \centering 
  \includegraphics[scale=0.5]{resources/rta-algorithm.png}
  \caption{Response Time Analysis Algorithm from \cite{project}}\label{fig-rta}
\end{figure}

For the cases where a task set is only sometimes schedulable (since the actual execution time differs between runs), the response time analyzer reports the task set as not schedulable, because we only consider a task set schedulable if it is schedulable in all runs.
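The computation in figure \ref{fig-rta} amounts to iterating the recurrence $R_i = C_i + \sum_{j \in hp(i)} \lceil R_i / T_j \rceil C_j$ to a fixpoint. It can be sketched as follows; this is a Python sketch under our own conventions (tasks indexed in priority order, $-1$ marking a missed deadline as in our result tables), not the actual implementation.

```python
import math

def response_time(C, T, D, i):
    """Worst-case response time of task i; tasks 0..i-1 have higher
    priority.  Iterate R = C_i + interference until a fixpoint is
    reached, or return -1 once R exceeds the deadline D_i."""
    R = C[i]
    while True:
        interference = sum(math.ceil(R / T[j]) * C[j] for j in range(i))
        R_next = C[i] + interference
        if R_next > D[i]:
            return -1          # task i misses its deadline
        if R_next == R:
            return R           # fixpoint: R is the response time
        R = R_next
```

The iteration converges because $R$ only grows, and it is cut off as soon as $R$ passes the deadline.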


\subsection{Design of the list scheduler and the task duplication approach}
In this subsection we discuss the design of the list scheduler with the task duplication approach.
\subsubsection{Scheduling to more than one processor using list scheduling}
Finding the optimal schedule, that is, the one with the shortest length, for executing tasks on more than one processor is an NP-hard problem. Therefore several heuristics for solving the problem have been developed. One of them is list scheduling, which is a greedy algorithm \cite{sinnen2005}.

List scheduling can be used on tasks with either statically or dynamically assigned priorities. We consider list scheduling for tasks with statically assigned priorities.

To define the assignment, we have made the following assumptions:

\begin{itemize}
 \item A processor can communicate and execute a task at the same time.
 \item A processor can communicate with more than one other processor at the same time.
 \item The processor graph is fully connected.
 \item All processors are homogeneous.
 \item The tasks have statically assigned priorities.
 \item The tasks are represented in a directed acyclic graph (DAG).
 \item Information is kept indefinitely on a processor (so no communication is needed if the parent node was scheduled on the same processor a long time ago).
\end{itemize}

In order to minimize communication cost, our scheduler supports the following functionalities:
\begin{itemize}
 \item executing a task on the same processor as its parents
 \item duplicating a parent task so that it is also computed on the same processor as its child
\end{itemize}





\begin{figure}[h!]
  \centering 
  \includegraphics[scale=0.5]{resources/listscheduling-algorithm.png}
  \caption{List scheduling algorithm from \cite{sinnen2005}}\label{fig-ls}
\end{figure}

The scheduler follows Algorithm 1 (\cite{sinnen2005}, page 269), shown in figure \ref{fig-ls}: the tasks are sorted (part 1) and each task, starting with the one having the highest priority, is assigned a processor (part 2). We have chosen to assign priorities to tasks based on the ASAP algorithm \cite{sinnen2007}. The tasks are ordered topologically, and those that do not have a predecessor are assigned the highest priority. For the remaining tasks, the priority depends on the priorities of all predecessors, i.e.\ on the predecessors' computation times and communication costs (for more details see Figure \ref{fig-asap}).
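The ASAP value computation of Figure \ref{fig-asap} can be sketched as follows. This is a Python sketch with names of our own choosing, not the actual implementation: the input is a weighted DAG given as node computation costs and edge communication costs, and each node's value is the latest of its predecessors' values plus their computation and communication costs.

```python
def asap_values(weights, edges):
    """weights: node -> computation cost; edges: (u, v) -> communication
    cost on the edge from u to v.  Returns each node's earliest possible
    start time, processing nodes in topological order (assumes a DAG)."""
    preds = {v: [] for v in weights}
    for (u, v), comm in edges.items():
        preds[v].append((u, comm))
    asap = {}
    while len(asap) < len(weights):        # resolve nodes whose
        for v in weights:                  # predecessors are all done
            if v not in asap and all(u in asap for u, _ in preds[v]):
                asap[v] = max((asap[u] + weights[u] + c for u, c in preds[v]),
                              default=0)   # source nodes start at time 0
    return asap
```

Structurally identical siblings, like tasks 11--16 in our test graph, naturally receive equal values, since they share the same parent, communication cost and computation time.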

The chosen processor is the one on which the task can be executed as early as possible. This can mean either that the task starts executing as early as possible or that it finishes executing as early as possible; the distinction makes no difference in our case, since we assume that all processors are homogeneous.

The list scheduling algorithm is extended to duplicate a parent node, so that it is also computed on the same processor as its child node, if executing the parent there (including the communication from its own parents) is faster than communicating the parent's result to its child node.

\begin{figure}[h!]
  \centering 
  \includegraphics[scale=0.6]{resources/asap-algorithm.png}
  \caption{ASAP priorities algorithm. The input to the algorithm is a weighted DAG. A list \textit{L} of vertices is created based on the graph topology, and for each vertex a value reflecting its earliest possible start time is calculated. \cite{sinnen2007}}\label{fig-asap}
\end{figure}


To define the constraints for the list scheduler, these symbols are used:
\begin{itemize}
 \item $n$: a task
 \item $P$: a processor
 \item $proc(n)$: the processor(s) where $n$ is scheduled
 \item $pred(n)$: the parent(s) of $n$
\end{itemize}

\begin{itemize}
 \item $t_{s}(x)$: start time of $x$
 \item $t_{f}(x)$: finish time of $x$
 \item $t_{dr}(n_{j},P)$: time when all data is ready for task $n_{j}$ on processor $P$
\end{itemize}
where $x$ can be:
\begin{itemize}
 \item $e_{ij}$: communication of data from task $n_{i}$ to task $n_{j}$
 \item $(e_{ij},P_{i},P_{j})$: communication of data from task $n_{i}$ scheduled on processor $P_{i}$ to task $n_{j}$ scheduled on processor $P_{j}$. If tasks $n_{i}$ and $n_{j}$ are executed on the same processor, the communication cost is 0.
 \item $(n_{i},P)$: execution of task $n_{i}$ on processor $P$
\end{itemize}




Creating an efficient schedule is not just about finding the processor that is ready first: the data for the task also needs to be computed and transferred to the processor where the task is going to be executed. This is specified by the following constraints:

\begin{itemize}
 \item A task $n_i$ must finish executing before its results can be communicated: $ t_{s}(e_{ij}) \geq t_{f}(n_{i},P)$
 \item Before a task $n_{j}$ can be executed on a processor $P_{j}$, both the processor and the data must be ready: \[t_{s}(n_{j},P_{j}) \geq \max \{ t_{f}(P_{j}), t_{dr}(n_{j},P_{j}) \}.\]
\end{itemize}

The processor is ready when it has finished the last task scheduled on it so far. The data consists of the results from a task's parent tasks. The data ready time is specified by
\[ t_{dr}(n_{j},P_{j}) = \max_{n_{i} \in pred(n_{j})} \{ \min_{P_{i} \in proc(n_{i})} \{t_{f}(e_{ij},P_{i},P_{j})\} \}, \]
which takes the latest time data from any parent is ready, in order to make sure that the data from all parents is ready. In case a parent is duplicated, the earliest time data from that parent is ready is chosen.
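The data ready time formula can be sketched directly. This is a Python sketch with hypothetical names, not the actual implementation: each parent is given as a map from the processors it ran on (possibly several when duplicated) to its finish time there, plus the communication cost of its result.

```python
def data_ready_time(parents, target_proc):
    """max over parents of (min over the parent's processors of the
    communication finish time); communication cost is zero when the
    parent ran on the child's own processor."""
    t_dr = 0
    for finish_times, comm in parents:
        # for a duplicated parent, take the copy whose data arrives first
        arrival = min(t_f + (0 if p == target_proc else comm)
                      for p, t_f in finish_times.items())
        t_dr = max(t_dr, arrival)        # wait for the slowest parent
    return t_dr
```

For example, a parent finishing at time 4 on processor 0 with communication cost 3 makes its data ready on processor 1 at time 7; a duplicate of the same parent finishing locally at time 6 beats that.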




\subsubsection{Task duplication algorithm}
In this subsection we briefly discuss our task duplication algorithm. The idea of the algorithm was given by Professor Jan Madsen during the lecture: sometimes the communication cost between a parent task and its child tasks is much higher than the parent task's computation cost. Then it is worth (if possible) executing the parent task on each processor where its child tasks are executed. In this way the expensive communication can be avoided.

Our duplication algorithm extends the list scheduling algorithm in the following way:
\begin{enumerate}
	\item determine the current task's start time on a given processor
	\item check whether this start time can be improved by also scheduling one of its parent tasks on the same processor
\end{enumerate}
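The improvement check in step 2 can be sketched as follows. This is a Python sketch with hypothetical names, not the actual implementation: the remote arrival is the time the parent's result would reach this processor via communication, and a duplicated copy re-executes the parent locally once its own inputs and the processor are ready.

```python
def duplicated_finish(parent_exec, parent_inputs_ready, proc_ready):
    """Finish time of a duplicated copy of the parent on the child's
    processor: it starts once its inputs and the processor are ready."""
    return max(parent_inputs_ready, proc_ready) + parent_exec

def should_duplicate(parent_exec, parent_inputs_ready, proc_ready,
                     remote_arrival):
    """Duplicate only if the local copy's result is ready strictly
    before the remote result would finish being communicated over."""
    local = duplicated_finish(parent_exec, parent_inputs_ready, proc_ready)
    return local < remote_arrival
```

For instance, a parent with execution time 2 whose inputs are ready locally at time 3 finishes a local copy at time 5, which is worth it when the remote result would only arrive at time 9.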

The results of the task duplication algorithm will be discussed in the \textit{Analysis} section.


\subsection{Implementation}

\subsubsection{DataGenerator}
The \textit{DataGenerator} class reads the input data set from a GraphML file and creates the corresponding task set. For each task set it also creates the corresponding job sets. The computation time of each job is chosen randomly with a uniform distribution from the range between the \textit{best-case execution time} (BCET) and the \textit{worst-case execution time} (WCET). We use a uniform distribution because we want to model average system behavior; biasing job running times toward the BCET or WCET would give a different kind of analysis tool instead.

The data generator sorts all created jobs by two keys: first by release time, so the earliest release time comes first, and then by priority, so the highest priority comes first.
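The two-key sort can be expressed with a composite sort key. This is a Python sketch, not the actual implementation; we represent a job as a (release, priority) pair and assume a smaller number means a higher priority.

```python
# (release time, priority); assumption: smaller number = higher priority
jobs = [(2, 1), (0, 2), (0, 1)]

# earliest release first; within equal release times, highest priority first
jobs.sort(key=lambda job: (job[0], job[1]))

assert jobs == [(0, 1), (0, 2), (2, 1)]
```

Because the sort is stable and the key is a tuple, the priority only breaks ties between jobs released at the same time.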

Furthermore, the \textit{DataGenerator} class computes the number of time units the simulation shall run. It takes the number of cycles as input (because the user specifies how many cycles to run).


\subsubsection{Simulator}
The job queue is sorted by release time and then by priority, so instead of copying the job queue into another array called the ready list, we skip the ready list: in each time unit cycle, a job is executed if it has the highest priority and its release time is less than or equal to the current time.

To be able to visualize the schedules produced by the \textit{Simulator}, we have added a graphics engine to the project: it uses the SVG Salamander graphics library \cite{SVG} to actually make an image and is designed to employ the composite design pattern \cite{COMPOSITE-DP}.


\subsubsection{ResponseTimeAnalyzer}
As mentioned in the design section, the cases where a task set is only sometimes schedulable are reported as not schedulable. This is achieved by letting the \textit{computation time} be the worst-case execution time.

Besides deciding whether the overall task set is schedulable, the \textit{ResponseTimeAnalyzer} also computes the worst-case response time of each task. The results are stored in an array. For tasks that are not schedulable, the value $-1$ is stored to preserve the order in the array.


\subsubsection{Scheduling to more than one processor using list scheduling}
Since we assume that the system is homogeneous the execution time equals the computation cost of a node (\cite{sinnen2005}, page 2).

When there is more than one candidate processor on which a task can finish executing earliest, we let the task execute on the processor that is always considered first. That way most of the work ends up on the first processors, which means less communication than choosing a processor on which fewer tasks have been executed.

The duplication check considers whether a task can be executed earlier if one of its parents is duplicated. It is only done one step backwards, that is, only the current node's parents are considered for duplication. To find out whether it is worth duplicating a parent task, the communication between the parent and its own parents is taken into account.


\section{System testing}
In this section we will discuss the testing of VSS and RTA and of the list scheduler.

\subsection{VSS and RTA}
We ran the test case taskgraph\_1.graphml, which is schedulable, and taskgraph\_2.graphml and taskgraph\_4.graphml, which are not schedulable.

\begin{figure*}
  \centering 
  \includegraphics[scale=0.6]{resources/simulation-1.png}
  \caption{Schedule of taskgraph\_1.graphml computed by the \textit{Very Simple Simulator}.
  A green arrow indicates the release of a task, a red arrow a deadline, the long horizontal arrow the time axis, and the numbers below the time axis the time scale; T$\lbrace$digit$\rbrace$ is a task name. The tasks are sorted by priority, i.e.\ T1 has the highest priority and T2 the lowest.}\label{img-sched1}
\end{figure*}



\begin{table}
\caption{Worst-case response times for taskgraph\_1 computed by response-time analysis and the Very Simple Simulator.}
\begin{tabular}{r c c}
task &  RTA & VSS \\  
\hline
T1 &	1.0 &	0 \\
T3 &	2.0 &	1 \\
T4 &	4.0 &	2 \\
T5 &	6.0 &	3 \\
T6 &	10.0 &	5 \\
T7 &	28.0 &	8 \\
T2 &	54.0 &	14 \\
\end{tabular}
\label{table1}
\end{table}

\begin{table}
\caption{Worst-case response times for taskgraph\_2 computed by response-time analysis and the Very Simple Simulator.}
\begin{tabular}{r c c}
task &  RTA & VSS \\  
 \hline
T1 &	1.0 &	0 \\
T2 &	3.0 &	1 \\
T3 &	6.0 &	3 \\
T4 &	10.0 &	5 \\
T5 &	15.0 &	9 \\
T6 &	23.0 &	12 \\
T7 &	37.0 &	18 \\
T8 &	49.0 &	28 \\
T9 &	98.0 &	34 \\
T10 &	-1.0 &	39 \\
T11 &	-1.0 &	57
\end{tabular}
\label{table2}
\end{table}


\begin{table}
\caption{Worst-case response times for taskgraph\_4 computed by response-time analysis and the Very Simple Simulator.}
\begin{tabular}{r c c}
task &  RTA & VSS \\  
 \hline
T1 &	4.0 &	2 \\
T3 &	-1.0 &	27 \\
T4 &	-1.0 &	0 \\
T5 &	-1.0 &	0 \\
T6 &	-1.0 &	0 \\
T7 &	-1.0 &	0 \\
T2 &	-1.0 &	0 \\
\end{tabular}
\label{table4}
\end{table}


From table \ref{table1} it can be seen that taskgraph\_1 is schedulable according to the \textit{ResponseTimeAnalyzer}. This can also be seen in the schedule computed by the \textit{Simulator} (figure \ref{img-sched1}), since no deadlines are missed.

From tables \ref{table2} and \ref{table4} it can be seen that taskgraph\_2 and taskgraph\_4, respectively, are not schedulable according to the \textit{ResponseTimeAnalyzer}.

\subsection{The list scheduler}
In this subsection we briefly discuss the testing of our task scheduler. More precisely, we introduce the task graph we used for testing and present the results obtained when testing our ASAP algorithm implementation. The rest of the testing is discussed in the \textit{Analysis} section.

Our task graph (see Figure \ref{img-task-graph}) consists of 16 tasks. The task computation and communication costs are shown on the graph nodes and branches, respectively. It is worth mentioning that tasks 11-16 are identical, i.e.\ they have the same parent task T10, the same communication cost to the parent task and the same computation time.
\begin{figure*}
  \centering 
  \includegraphics[scale=0.8]{resources/task-graph.png}
  \caption{Input task graph. The communication cost is represented as a branch weight. The computation cost as well as the node ID are shown inside the node: $\lbrace$node ID$\rbrace$ / $\lbrace$computation cost$\rbrace$.}\label{img-task-graph}
\end{figure*}

\begin{figure*}
  \centering 
  \includegraphics[scale=0.6]{resources/graffromslides2-graphml-nodup.png}
  \caption{List scheduling with task duplication disabled. Cyan rectangles indicate task execution (T$\lbrace$digit$\rbrace$ is a task name). Purple rectangles represent communication between processors (P$\lbrace$digit$\rbrace$ $\longrightarrow$ P$\lbrace$digit$\rbrace$ gives the source and destination processors). The long horizontal arrow represents the time line.}\label{img-sched-no-dup}
\end{figure*}

\begin{figure*}
  \centering 
  \includegraphics[scale=0.6]{resources/graffromslides2-graphml-dup.png}
  \caption{List scheduling with task duplication enabled. Cyan rectangles indicate task execution (T$\lbrace$digit$\rbrace$ is a task name). Purple rectangles represent communication between processors (P$\lbrace$digit$\rbrace$ $\longrightarrow$ P$\lbrace$digit$\rbrace$ gives the source and destination processors). The long horizontal arrow represents the time line.}\label{img-sched-dup}
\end{figure*}

Before applying the scheduling described in the \textit{Technical solution} section, the tasks must be sorted. As mentioned earlier, we have chosen the ASAP priority assignment algorithm. As part of the testing, this algorithm was applied to our test graph. The results can be found in Table \ref{table-asap}. As expected, tasks 11-16 were assigned the same priority value by the ASAP algorithm.
\begin{table}
\caption{Values assigned to each node of the task graph by ASAP algorithm}
\begin{tabular}{r c}
task &  ASAP value\\  
\hline
0 &		0\\
1 &		8\\
2 &		4\\
3 &		5\\
4 &		6\\
5 &		16\\
6 &		10\\
7 &		14\\
8 &		13\\
9 &		23\\
10 &	27\\
11 &	34\\
12 &	34\\
13 &	34\\
14 &	34\\
15 &	34\\
16 &	34\\
\end{tabular}
\label{table-asap}
\end{table}

The testing of our scheduling algorithm and task duplication will be discussed in the \textit{Analysis} section.

\section{Analysis}
In this section we compare VSS with RTA, and the list scheduler with and without the duplication approach.
\subsection{Simulation and Response Time Analysis}

In this subsection we briefly discuss the results we obtained with the tools we implemented for \textit{rate monotonic} scheduling and response time analysis.

It is easy to determine what situation created the worst-case response time for a particular task during the simulation, i.e.\ which other tasks interrupted it, when, and for how long. This was achieved by using our graphics engine to visualize the simulation results. For example, it is easy to see why and for how long task T2 was preempted in Figure \ref{img-sched1}.


As the next step, we applied our \textit{Simulator} 100 times to a given data set (in our case taskgraph\_1.graphml). After that we ran our \textit{ResponseTimeAnalyzer} on the same task set. The comparison of the results can be seen in Figure \ref{img-sim-vs-analysis}.

\begin{figure}[h!]
  \centering 
  \includegraphics[scale=0.7]{resources/simulation-vs-analysis-100.png}
  \caption{Simulation vs.\ analysis, 100 simulations. For each task, only the maximal worst-case response value of each simulation is depicted on the graph.}\label{img-sim-vs-analysis}
\end{figure}

As expected, the worst-case response times obtained with the \textit{VSS} are always smaller than or equal to the worst-case response times obtained with the \textit{RTA}.

Based on our assumptions (RTA uses the WCET as each task's execution time), the number of simulations \textit{VSS} needs in order to get close to the RTA results depends on the probability of hitting the worst-case execution time for all instances of all tasks in the task set.

\subsection{List scheduler and task duplication}
In this subsection the main results of the scheduling and task duplication algorithm will be discussed.

We applied the list scheduling algorithm to the input task graph (see Figure \ref{img-task-graph}). The results can be found in Figure \ref{img-sched-no-dup}. It is easy to see that the overall time required to complete all tasks is 29 time units. It is also apparent that at least processor P1 could be employed in a more efficient manner.

Next we ran the list scheduling algorithm with task duplication enabled (for details see the \textit{Technical solution} section). The results can be found in Figure \ref{img-sched-dup}. This time the overall time needed to complete all tasks is 24 time units. Tasks \textit{T0} and \textit{T10} were duplicated twice.

Hence, by introducing the task duplication approach we could reduce the overall task set execution time. On the other hand, it resulted in a longer scheduling time.

\section{Conclusion}
In this project we designed and implemented a simulator for rate monotonic scheduling, an analyzer for determining the theoretical worst-case response times of our data set, the list scheduling algorithm, a task duplication approach for improving the list scheduler, and finally a graphics engine to visualize our analysis results.

Before using our tools for analysis, we tested them in the following cases: schedulable and non-schedulable task sets for VSS and RTA, and a task graph for the list scheduler with the task duplication approach. In all cases we presented the results in tables and/or graphically.

Finally, we made a comparison between
\begin{enumerate}
	\item simulation and response time analysis
	\item the list scheduler with and without task duplication approach
\end{enumerate}

We found that analysis is much more reliable than simulation, and we therefore suggest that it always be applied in the design of safety-critical systems.
For scheduling on more than one processor using the list scheduling algorithm, we discovered that better results can be obtained by enabling the task duplication algorithm.



\begin{thebibliography}{99}
	\bibitem{SVG}{SVG Salamander. http://svgsalamander.java.net/}
	\bibitem{COMPOSITE-DP}{The Composite design pattern. http://www.lepus.org.uk/ref/companion/Composite.xml}
	\bibitem{sinnen2005}{O. Sinnen, L. A. Sousa, and F. E. Sandnes. Toward a Realistic Task Scheduling Model. IEEE Transactions on Parallel and Distributed Systems, vol. 17, no. 3, March 2006.}
	\bibitem{project}{P. Pop and D. Tamas-Selicean. 02223 Project, 2011.}
	\bibitem{sinnen2007}{O. Sinnen. Task Scheduling for Parallel Systems. Wiley, 2007, pp. 98-100.}
\end{thebibliography}



\end{document}          
