\begin{abstract}
This document describes our solution to the project for the 02223 course, Fundamentals of Modern Embedded Systems. The project focuses on the simulation, analysis, and design of modern embedded systems through the design and implementation of a software tool that can simulate and analyze an embedded application. More specifically, we have implemented:
\begin{enumerate}
	\item a simulator for preemptive fixed-priority scheduling;
	\item a response-time analyzer for preemptive fixed-priority scheduling;
	\item a scheduler for systems with more than one processor, using the list scheduling algorithm.
\end{enumerate}

The remainder of this document presents our technical solution.
\end{abstract}

\section{Technical solution}
This section gives an overview of our system design and implementation, with a rough description of the main system components. The implementation section lists the implementation details of the main components and the challenges we met.

The system design is described in more detail below.

\subsection{Design}
In this part we describe the system as a whole and then the design of the most important classes, without going into code-level details.

First, we list our general assumptions for A1 and A2 \cite{project}:

\begin{itemize}
 \item Single processor.
 \item Strictly periodic tasks.
 \item All tasks are released as soon as they arrive.
 \item The periods are synchronized: all tasks start at the same time.
 \item Deadlines are equal to the periods.
 \item All tasks are independent.
 \item No precedence or resource constraints.
 \item No task can suspend itself.
 \item All overheads in the kernel are assumed to be zero.
\end{itemize}

The system (A1 and A2) consists of three parts: the data input framework, the simulation visualization framework, and the main part, which comprises the remaining classes.

The main part consists of the implemented scheduling tool placed in \textit{Simulator}, the implemented analyzing tool placed in \textit{ResponseTimeAnalyzer} and a main class called \textit{SchedulabilityTestManager} for running the project. 

The purpose of the data input framework is to create the data input used by the \textit{Simulator} and the \textit{ResponseTimeAnalyzer}, while the purpose of the simulation visualization framework is to be able to visualize the schedules produced by the \textit{Simulator}.

\subsubsection{Job}
The \textit{Job} class is a data structure representing a job. A job is an instance of a task: an actual run of the task in a specific period, and it is used by the simulator. The purpose of having jobs is to emulate the environment's influence on the processor: jobs of the same task often differ from each other because the computation time of each job is randomized.

\subsubsection{Task}
The \textit{Task} class is a data structure representing a task.
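As an illustration, the two data structures can be sketched as follows in Python. The field and function names are ours, not the project's actual code, and the uniform distribution for the actual computation time is an assumption for the sketch:

```python
import random
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    period: int       # deadline equals the period
    wcet: int         # worst-case execution time
    priority: int     # lower number means higher priority

@dataclass
class Job:
    task: Task        # the task this job is an instance of
    release: int      # release time = start of the period instance
    computation: int  # actual execution time, randomized per job

def release_job(task, instance, rng):
    # The actual computation time varies per job (here uniform in
    # [1, wcet]) to emulate the environment's influence.
    return Job(task, instance * task.period, rng.randint(1, task.wcet))
```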

\subsubsection{DataGenerator}
The \textit{DataGenerator} class is responsible for creating the data set. That way the data set is created only once, even though it is used by both the \textit{Simulator} and the \textit{ResponseTimeAnalyzer}.

\subsubsection{Simulator}
The \textit{Simulator} class is an implementation of the VSS (Very Simple Simulator) given in the project description \cite{project}. The simulator uses soft deadlines: if a job misses its deadline, the simulator continues to schedule time slots for it in the following periods.
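A minimal sketch of such a tick-based preemptive fixed-priority simulator with soft deadlines could look as follows. This is our illustration, not the VSS from \cite{project}; for simplicity, list order encodes priority and computation times are fixed rather than randomized:

```python
def simulate(tasks, horizon):
    """Tick-by-tick preemptive fixed-priority simulation.

    tasks: list of (name, computation_time, period); list order is the
    priority order (index 0 = highest priority).
    Returns the schedule as a list of task names, '-' marking idle ticks.
    Deadlines are soft: unfinished work carries over into later periods.
    """
    remaining = [0] * len(tasks)  # outstanding computation per task
    schedule = []
    for t in range(horizon):
        for i, (_, c, period) in enumerate(tasks):
            if t % period == 0:   # a new job is released at each period start
                remaining[i] += c
        ready = [i for i in range(len(tasks)) if remaining[i] > 0]
        if ready:
            i = min(ready)        # the highest-priority ready task runs
            remaining[i] -= 1
            schedule.append(tasks[i][0])
        else:
            schedule.append('-')
    return schedule
```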

\subsubsection{ResponseTimeAnalyzer}
The \textit{ResponseTimeAnalyzer} class is an implementation of RTA (Response Time Analysis) as given in the project description \cite{project}.
For task sets that are only sometimes schedulable (since the actual execution time differs between runs), the response time analyzer reports the task set as not schedulable, because we only consider a task set schedulable if it is schedulable in all runs.
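The RTA recurrence $R_i = C_i + \sum_{j \in hp(i)} \lceil R_i / T_j \rceil C_j$ can be sketched as a fixed-point iteration. This is the standard formulation, not the project's actual code; list order encodes priority, and the deadline equals the period as assumed above:

```python
import math

def response_time(tasks, i):
    """Worst-case response time of task i via the standard RTA iteration.

    tasks: list of (C, T) pairs sorted by decreasing priority (index 0 is
    the highest priority); the deadline equals the period T.
    Returns the response time, or None if it exceeds the deadline.
    """
    c_i, t_i = tasks[i]
    r = c_i
    while True:
        # interference from all higher-priority tasks
        r_next = c_i + sum(math.ceil(r / t_j) * c_j for c_j, t_j in tasks[:i])
        if r_next > t_i:
            return None  # deadline missed: task i is not schedulable
        if r_next == r:
            return r     # fixed point reached
        r = r_next

def schedulable(tasks):
    return all(response_time(tasks, i) is not None for i in range(len(tasks)))
```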

\subsubsection{A3 - Scheduling to more than one processor using list scheduling}
Finding the optimal schedule, that is, the one with the shortest length, for executing tasks on more than one processor is an NP-hard problem. Therefore, several heuristics for solving the problem have been developed. One of them is list scheduling, which is a greedy algorithm \cite{sinnen2005}.

List scheduling can be used on tasks with either statically or dynamically assigned priorities. We have considered list scheduling for tasks with statically assigned priorities.


To define the assignment, we have made the following assumptions:

\begin{itemize}
 \item A processor can communicate and execute a task at the same time.
 \item A processor can communicate with more than one other processor at the same time.
 \item The processor graph is fully connected.
 \item All processors are identical.
 \item Tasks have statically assigned priorities.
 \item Data remains available on a processor indefinitely, so no communication is needed if a parent task was scheduled on the same processor, no matter how long ago (a simplification).
\end{itemize}
In order to minimize communication cost, our scheduler supports the following functionalities:
\begin{itemize}
 \item executing a task on the same processor as its parents;
 \item duplicating a parent task so that it is also computed on the same processor as its child.
\end{itemize}






The scheduler follows Algorithm 1 (\cite{sinnen2005}, page 269), shown in Figure \ref{fig-ls}, where a processor is assigned to each task in a list of tasks sorted by priority, starting with the task of highest priority. The list scheduling algorithm is extended to duplicate a parent node, so that it is also computed on the same processor as its child node, if executing the parent (including the communication from the parent's own parents) is faster than communicating from the parent to its child node.
 
\begin{figure}[h!]
  \centering 
%  \includegraphics[scale=0.5]{resources/listscheduling-algorithm.png}
  \caption{List scheduling algorithm from \cite{sinnen2005}}\label{fig-ls}
\end{figure}
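The basic loop of the algorithm, without the duplication extension, can be sketched in Python under our assumptions of identical, fully connected processors. All names here are ours, for illustration only:

```python
def list_schedule(tasks, num_procs):
    """Greedy list scheduling on identical, fully connected processors.

    tasks: priority-ordered list of (name, cost, parents), where parents
    maps a parent's name to the cost of communicating its result.
    Returns {name: (processor, start_time, finish_time)}.
    """
    proc_free = [0] * num_procs  # time each processor becomes free
    placed = {}
    for name, cost, parents in tasks:
        best = None
        for p in range(num_procs):
            # Data from a parent on the same processor needs no communication.
            data_ready = max(
                (placed[q][2] + (0 if placed[q][0] == p else comm)
                 for q, comm in parents.items()),
                default=0)
            start = max(proc_free[p], data_ready)
            if best is None or start + cost < best[2]:
                best = (p, start, start + cost)
        placed[name] = best
        proc_free[best[0]] = best[2]
    return placed
```

With a high communication cost relative to execution time, this greedy rule tends to keep a chain of dependent tasks on one processor, which matches the intuition behind the duplication extension discussed later.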

A given task set has to be sorted before applying the list scheduling algorithm. We have chosen to assign priorities to tasks based on the ASAP algorithm \cite{sinnen2007}. Tasks are ordered topologically, and those that do not have a predecessor are assigned the highest priority. For the remaining tasks, the priority depends on the priorities of all predecessors, i.e. on the predecessors' computation times and communication costs (for more details see Figure \ref{fig-asap}).

\begin{figure}[h!]
  \centering 
  \includegraphics[scale=0.6]{resources/asap-algorithm.png}
  \caption{ASAP priorities algorithm \cite{sinnen2007}}\label{fig-asap}
\end{figure}

The processor chosen is the one on which the task can be executed as early as possible. That can mean either that the task starts executing as early as possible or that it finishes as early as possible; in our case it makes no difference, since we assume that all processors are identical.\\

To define the constraints, the following symbols are used:
\begin{itemize}
 \item $n$: a task
 \item $P$: a processor
 \item $proc(n)$: the processor(s) on which $n$ is scheduled
 \item $pred(n)$: the parent(s) of $n$
\end{itemize}

\begin{itemize}
 \item $t_{s}(x)$: start time of $x$
 \item $t_{f}(x)$: finish time of $x$
 \item $t_{dr}(n_{j},P)$: time when all data is ready for task $n_{j}$ on processor $P$
\end{itemize}
where $x$ can be:
\begin{itemize}
 \item $e_{ij}$: communication of data from task $n_{i}$ to task $n_{j}$
 \item $(e_{ij},P_{i},P_{j})$: communication of data from task $n_{i}$ scheduled on processor $P_{i}$ to task $n_{j}$ scheduled on processor $P_{j}$. If $n_{i}$ and $n_{j}$ are executed on the same processor, the communication time is 0.
 \item $(n_{i},P)$: execution of task $n_{i}$ on processor $P$
\end{itemize}




Creating an efficient schedule is not just a matter of finding the processor that is ready first: the data for the task also needs to be computed first and transferred to the processor where the task is going to be executed. This can be specified as the following constraints:

\begin{itemize}
 \item A task must finish executing before its result can be communicated: $t_{s}(e_{ij}) \geq t_{f}(n_{i},P_{i})$.
 \item Before a task $n_{j}$ can be executed on a processor $P_{j}$, both the processor and the data must be ready: $t_{s}(n_{j},P_{j}) \geq \max \{ t_{f}(P_{j}), t_{dr}(n_{j},P_{j}) \}$.
\end{itemize}

The processor is ready when it has finished the last task scheduled on it so far. The data consists of the results from the task's parent tasks. The data ready time is specified by
\[ t_{dr}(n_{j},P_{j}) = \max_{n_{i} \in pred(n_{j})} \{ \min_{P_{i} \in proc(n_{i})} \{t_{f}(e_{ij},P_{i},P_{j})\} \}, \]
which computes the latest time data from a parent is ready in order to make sure that data from all parents are ready. In case a parent is duplicated, the earliest time, data from that parent is ready, is chosen.
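The formula for $t_{dr}$ translates directly into code. A sketch (ours, not the project's actual implementation), where the finish time $t_{f}(e_{ij},P_{i},P_{j})$ is supplied by the caller since it depends on the schedule built so far:

```python
def data_ready_time(parents, proc, comm_finish):
    """t_dr(n_j, P_j): the latest, over all parents, of the earliest
    communication finish time among a parent's (possibly duplicated) copies.

    parents: {parent_name: [processors the parent is scheduled on]}
    comm_finish(n_i, p_i, p_j): finish time of the communication e_ij
    from processor p_i to processor p_j, i.e. t_f(e_ij, P_i, P_j).
    """
    return max(
        (min(comm_finish(n_i, p_i, proc) for p_i in procs)
         for n_i, procs in parents.items()),
        default=0)
```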




\subsubsection{Task duplication algorithm}
In this subsection we briefly discuss our task duplication algorithm. The idea for the algorithm was given by Professor Jan Madsen during a lecture: sometimes the communication cost between a parent task and its child tasks is much higher than the parent task's computation cost. In that case it is worthwhile (if possible) to execute the parent task on each processor where its child tasks are executed; in this way the expensive communication can be avoided.

Our duplication algorithm:
\begin{enumerate}
	\item determines the current task's start time on a given processor;
	\item checks whether this start time can be improved by scheduling one of its parent tasks on the same processor;
	\item otherwise follows the list scheduling algorithm.
\end{enumerate}
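The core comparison in step 2 can be sketched as follows. This is an illustration of the criterion only; it ignores, among other things, whether the child's processor is free to execute the duplicated parent:

```python
def worth_duplicating(parent_cost, parent_data_ready, parent_finish, comm_cost):
    """Decide whether duplicating a parent onto the child's processor pays off.

    Duplication helps when executing the parent locally, after its own
    input data has arrived, beats waiting for its result to be
    communicated from the processor where it was originally scheduled.
    """
    local_finish = parent_data_ready + parent_cost
    remote_arrival = parent_finish + comm_cost
    return local_finish < remote_arrival
```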

The results of the task duplication algorithm will be discussed in the \textit{Analysis} section.
