\section{Analysis}

\subsection{Simulation and Response Time Analysis}

In this section we briefly discuss the results obtained with the tools we implemented for \textit{rate monotonic} scheduling and response time analysis.

First of all, the simulation lets us determine exactly which situation produced the worst-case response time of a particular task, i.e. which other tasks preempted it, when, and for how long. This was achieved by visualizing the simulation results with our graphics engine; see Figure \ref{img-sched1} for an example.
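To make the mechanism concrete, the following is a minimal sketch of a discrete-time preemptive rate monotonic simulator in Python. It is not our actual \textit{Simulator} (which additionally varies execution times and renders the schedule graphically); this sketch assumes every job runs for its full worst-case execution time and that tasks are indexed by priority, i.e. shortest period first:

```python
def simulate_rm(C, T, horizon):
    """Discrete-time preemptive rate monotonic simulation.

    C[i]: worst-case execution time, T[i]: period of task i,
    index 0 = highest priority (shortest period under RM).
    Returns the largest observed response time per task."""
    n = len(C)
    remaining = [0] * n          # remaining work of the current job
    release = [0] * n            # release time of the current job
    worst = [0] * n
    for t in range(horizon):
        for i in range(n):
            if t % T[i] == 0:    # new job of task i released at t
                remaining[i] = C[i]
                release[i] = t
        for i in range(n):       # run the highest-priority pending task
            if remaining[i] > 0:
                remaining[i] -= 1
                if remaining[i] == 0:
                    worst[i] = max(worst[i], t + 1 - release[i])
                break
    return worst
```

With all tasks released together at time zero (the critical instant), one hyperperiod suffices to observe the worst case; e.g. for WCETs $(1,2,3)$ and periods $(4,6,10)$, simulating for 60 time units yields response times $(1,3,10)$.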


Next, we applied our \textit{Simulator} 100 times to a given data set (in our case taskgraph\_1.graphml) and then ran our \textit{ResponseTimeAnalyzer} on the same task set. The comparison of the results is shown in Figure \ref{img-sim-vs-analysis} (NB: for each task, only the maximal worst-case response time over all simulation runs is depicted).

\begin{figure}[h!]
  \centering 
  \includegraphics[scale=0.7]{resources/simulation-vs-analysis-100.png}
  \caption{Simulation vs. Analysis (100 runs)}\label{img-sim-vs-analysis}
\end{figure}

Looking at the comparison of the simulation and analysis results, it is clear that the worst-case response times obtained with \textit{VSS} are always smaller than or equal to those obtained with \textit{RTA}. This is expected: the simulation can only observe response times that actually occur in a particular run, whereas the analysis bounds the worst case from above.
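The analytical bound follows the classical response time recurrence $R_i = C_i + \sum_{j \in hp(i)} \lceil R_i / T_j \rceil C_j$, where $hp(i)$ denotes the set of tasks with higher priority than task $i$; we assume our \textit{ResponseTimeAnalyzer} implements this standard fixed-point iteration. A minimal Python sketch:

```python
import math

def response_times(C, T):
    """Fixed-point response time analysis.

    C[i]: worst-case execution time, T[i]: period of task i,
    index 0 = highest priority (shortest period under RM).
    Returns the worst-case response time of each task,
    or None for a task that can miss its deadline (= period)."""
    R = []
    for i in range(len(C)):
        r = C[i]
        while True:
            # Interference from all higher-priority tasks in a window of length r.
            nxt = C[i] + sum(math.ceil(r / T[j]) * C[j] for j in range(i))
            if nxt == r or nxt > T[i]:
                break
            r = nxt
        R.append(nxt if nxt <= T[i] else None)
    return R
```

For example, for WCETs $(1,2,3)$ and periods $(4,6,10)$ the recurrence converges to response times $(1,3,10)$.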

To conclude the comparison between \textit{VSS} and \textit{RTA}, recall that our \textit{RTA} assumes every task always executes for its worst-case execution time. The simulated values can therefore only reach the analytical bound if the worst-case situation actually arises during a run, and the probability of observing the worst-case response time for all instances of all tasks in the task set grows with the number of simulation runs.

\subsection{Scheduler and Task Duplication}
In this subsection we discuss the main results of the scheduling and task duplication algorithms.

We applied the List scheduling algorithm to the input task graph (see Figure \ref{img-task-graph}). The results are shown in Figure \ref{img-sched-no-dup}. The overall time required to complete all tasks is 29 time units, and it is apparent that at least processor P1 could be utilized more efficiently.
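The core of list scheduling can be sketched as follows. The priority function (static bottom level, including communication costs) and earliest-start-time processor selection shown here are one common choice and may differ in details from our implementation; the task names and costs in the example below are purely illustrative:

```python
def list_schedule(cost, succs, comm, n_procs):
    """List scheduling of a task DAG onto identical processors.

    cost[t]: execution time of task t; succs[t]: list of successors;
    comm[(u, v)]: communication delay if u and v run on different
    processors. Returns {task: (processor, start, finish)}."""
    preds = {t: [] for t in cost}
    for u in succs:
        for v in succs[u]:
            preds[v].append(u)

    blevels = {}                     # memoized bottom levels
    def blevel(t):
        if t not in blevels:
            blevels[t] = cost[t] + max(
                (comm.get((t, s), 0) + blevel(s) for s in succs.get(t, [])),
                default=0)
        return blevels[t]

    proc_free = [0] * n_procs
    sched = {}
    ready = [t for t in cost if not preds[t]]
    while ready:
        t = max(ready, key=blevel)   # highest-priority ready task first
        ready.remove(t)
        best = None
        for p in range(n_procs):     # earliest start time over all processors
            est = proc_free[p]
            for u in preds[t]:
                pu, _, fu = sched[u]
                est = max(est, fu + (comm.get((u, t), 0) if pu != p else 0))
            if best is None or est < best[1]:
                best = (p, est)
        p, start = best
        sched[t] = (p, start, start + cost[t])
        proc_free[p] = start + cost[t]
        for s in succs.get(t, []):   # successors may now become ready
            if all(u in sched for u in preds[s]):
                ready.append(s)
    return sched
```

For instance, with tasks \texttt{T0} (cost 2) preceding \texttt{T1} and \texttt{T2} (cost 3 each, communication delay 1) on two processors, the sketch keeps \texttt{T1} on the same processor as \texttt{T0} and offloads \texttt{T2}, finishing at time 6.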

\begin{figure*}
  \centering 
  \includegraphics[scale=0.6]{resources/graffromslides2-graphml-nodup.png}
  \caption{List scheduling without task duplication}\label{img-sched-no-dup}
\end{figure*}

Next, we ran the List scheduling algorithm with task duplication enabled (for more details see the \textit{Technical solution} section). The results are shown in Figure \ref{img-sched-dup}. This time the overall time needed to complete all tasks is 24 time units; tasks \textit{T0} and \textit{T10} were duplicated twice.

\begin{figure*}
  \centering 
  \includegraphics[scale=0.6]{resources/graffromslides2-graphml-dup.png}
  \caption{List scheduling applying task duplication}\label{img-sched-dup}
\end{figure*}

Hence, by introducing task duplication we reduced the overall execution time of the task set. On the other hand, it resulted in a longer scheduling computation time.
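The source of the improvement can be illustrated with a small hypothetical calculation (the numbers below are illustrative and not taken from our task graph):

```python
# Hypothetical values: task D depends on task U; exec(U) = 2,
# communication delay U -> D across processors = 4.
finish_U = 2

# Without duplication, D on another processor must wait for the message:
start_no_dup = finish_U + 4      # earliest start of D

# With duplication, a local copy of U is re-executed on D's processor,
# so D can start as soon as that copy finishes:
start_dup = finish_U

assert start_dup < start_no_dup  # duplication pays off when comm > re-execution
```

Duplication is therefore profitable exactly when re-executing a predecessor locally is cheaper than waiting for its result to arrive over the interconnect, at the price of extra processor load and a larger search space for the scheduler.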






