
\documentclass[11pt]{amsart}
\usepackage[linesnumbered]{algorithm2e}
\usepackage{epstopdf}
\usepackage{listings}

\usepackage{lmodern}
\usepackage{graphicx}
\usepackage{color}
\usepackage{booktabs}
\usepackage[table]{xcolor}
\usepackage{fancyhdr}
\usepackage[top=3cm, bottom=3cm, left=3cm, right=3cm]{geometry} 
\usepackage{hyperref}
\usepackage{url}
\usepackage{cite}
\usepackage{float}
\usepackage{program}
\usepackage{multirow}
\usepackage{array}
\newcommand{\hiddensubsection}[1]{
\stepcounter{subsection}
\subsection*{\arabic{section}.\arabic{subsection}\hspace{1em}{#1}}}
\makeatletter
\renewcommand\section{\@startsection{section}{1}{\z@}%
                                  {-3.5ex \@plus -1ex \@minus -.2ex}%
                                  {2.3ex \@plus.2ex}%
                                  {\normalfont\Large\bfseries}}
\makeatother

%\usepackage{titlesec}
%\titleformat{\section}{\Huge\bfseries}{\thesection}
%\usepackage{sectsty}
%\sectionfont{\Huge\bfseries}
\pagestyle{fancy}
\fancyhead{}% clear headers
\fancyfoot{}% clear footers
\renewcommand{\headrulewidth}{0pt}% eliminate horizontal line
\fancyfoot[CO, CE]{\thepage}
\title{Problem Statement}
\author{Julius Canute}
%\date{}                                           % Activate to display a given date or no date

\lstset{language=XML, basicstyle=\ttfamily}
\begin{document}
%\maketitle
\begin{titlepage}

\begin{center}


% Upper part of the page
%\includegraphics[width=0.15\textwidth]{./logo}\\[1cm]    
\textsc{\LARGE Computing Static Schedule For A Distributed Real Time System}\\[6cm]
\textbf{\emph{{\large A Project Internship Report}}}\\
\textbf{\emph{{\large Submitted in Partial Fulfillment for the Award of}}}\\
\textbf{\emph{{\large M.Tech in Information Technology}}}\\[1cm]
{\large By}\\[1cm]
\textbf{\large Julius Canute}\\
\textbf{\large MT2010058}\\[6cm]
{\large To}\\[0.5cm]
\includegraphics[width=0.07\textwidth]{iiitb.png}\\[0.5cm]    
\textbf{\large International Institute of Information Technology}\\
\textbf{\large Bangalore - 560100}\\[0.5cm]
\emph{\large June 2012}
\end{center}
\end{titlepage}

\begin{center}
\textsc{\LARGE Certificate}\\[2cm]
\end{center}
\thispagestyle{empty}
This is to certify that the internship report titled \textbf{`Computing Static Schedule for a Distributed Real Time System'} submitted by Julius Canute A (MT2010058) is a bona fide work carried out under my supervision at ABB Global Industries and Services Limited, Bangalore, from January 9, 2012 to June 15, 2012, in partial fulfillment of the M.Tech. course of the International Institute of Information Technology, Bangalore.\\[0.5cm]
His performance \& conduct during the internship were satisfactory.\\[3cm]
% Author and supervisor
\begin{center}
\begin{minipage}{1.0\textwidth}
\begin{flushright} \large

\begin{minipage}{0.5\textwidth}

\begin{flushleft} \large
Dr. Atul Kumar\\

ABB Global Industries and Services LTD,\\
Bhoruka Tech Park,\\
Whitefield Road,\\
Mahadevpura PO,\\
Bangalore - 560048\\



\end{flushleft}

\end{minipage}


\end{flushright}


\end{minipage}
\end{center}

\begin{center}
\begin{minipage}{1.0\textwidth}
\begin{flushleft} \large



Date:\\
Place: Bangalore\\
\end{flushleft}
\end{minipage}
\end{center}

\newpage
 
\begin{center}
\textsc{\LARGE Acknowledgment}\\[2cm]
\end{center}
\thispagestyle{empty}

This project has borne fruit because of the valuable suggestions and contributions of several people. I take this opportunity to thank all of them and hope for their guidance in all my future endeavors.\\

First of all, I would like to thank Prof. S. Sadagopan, Director of the International Institute of Information Technology, Bangalore, for providing me a learning experience par excellence during my postgraduate program. If not for his constant support and undying rigor, I would not have had the opportunity to complete my internship at a corporation like ABB GISL.\\

Dr. Srini Ramaswamy has been a great presence in this project as my manager, guiding me during its course and helping me establish a good rapport with my team members.\\

I am greatly indebted to ABB Global Industries and Services Ltd., Bangalore, for sponsoring my scholarship throughout the duration of my M.Tech. and for providing me a great learning opportunity by letting me carry out my project there. I would also like to thank all my ISS team members and the other staff of the organization.\\

I express my heartfelt gratitude to my mentor at ABB GISL, Dr. Atul Kumar, whose thorough knowledge, tireless enthusiasm and valuable suggestions for improvement have greatly contributed to the success of the project.\\

This acknowledgement would be incomplete if I failed to mention the role my parents have played all through. They have been the driving force that kept me going when I faced obstacles, providing valuable advice to overcome each of them. Thank you.\\

\newpage
\begin{center}
\textsc{\LARGE Abstract}\\[2cm]
\end{center}
\thispagestyle{empty}

Scheduling constrained applications on a multiprocessor or distributed environment is an NP-hard problem for which heuristic approaches give only approximate solutions. Distributed real-time applications involve periodic activities and associated timing constraints, which make it hard to compute a schedule. Due to the growing complexity of real-time applications, there is a need to find a scheduling algorithm that can handle the given constraints. In our work we propose a deterministic solution for static scheduling on multiprocessor or distributed systems where, given a set of applications and their associated timing constraints, we try to compute a schedule across all hosts. We model the search over the given set of processors in the form of a tree and use a breadth-first traversal to find a fit for the given applications. The number of computations is cut down by pruning the tree whenever no feasible solution can be found in a given branch. The proposed solution can be used in any cyclic control application used to construct real-time control systems that follow a static scheduling scheme.

\newpage
\thispagestyle{empty}
\tableofcontents
\newpage
\thispagestyle{empty}
\listoffigures
\listoftables
\newpage
\setcounter{page}{1}

\section{Introduction}
Real-time systems are systems that must ensure completion of tasks within stipulated deadlines. Based on the criticality of the tasks, real-time systems can be classified into hard real-time systems and soft real-time systems. In a hard real-time system, failure to meet a task's deadline results in critical damage to the system; for example, if the activation of coolants in a nuclear reactor does not happen within a stipulated time interval, it results in critical damage to the system and to the environment in which the reactor is placed. In a soft real-time system, failure to meet a task's deadline results only in degraded performance; for example, in live video streaming, even if a few frames are not received, the result is merely choppiness of the video.\\

Scheduling on real-time systems is of prime importance, and it can be done in two ways: (i) online scheduling and (ii) offline scheduling. Based on whether a task can be preempted or not, scheduling can again be classified into two types: (i) preemptive scheduling and (ii) non-preemptive scheduling. In online scheduling, where task occurrences are sporadic, tasks can be preempted from execution based on priorities, but the reliability of such systems is hard to ensure. In the case of static schedulers, the applications that will run on the system are known beforehand. The schedule for these applications is precomputed, and the scheduler simply executes the precomputed schedule.\\

This work focuses on the computation of a static schedule across distributed real-time systems, where there can be multiple real-time applications and each application can be divided into a set of components. Each application has a cycle time and a deadline requirement, and each component of an application has a duration for which it executes. In the distributed environment there are multiple hosts, where each host has a cycle time requirement. A schedule should be computed across this distributed setting in such a way that the dependencies among the components of each application are satisfied and each application's cycle time and deadline requirements are met.\\

The system contains a scheduler that executes an arbitrary but pre-defined schedule of blocks in a cyclic fashion at a given frequency. To this end, timer interrupts are generated at regular intervals, each of which triggers one execution of the scheduler. The scheduler then calls some or all blocks according to its schedule. After the last block has been executed, the scheduler waits until the next timer interrupt. During this period of time, other software may run asynchronously, such as an FTP server. This software will be preempted by the operating system as soon as the timer interrupt triggers the scheduler.\\
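The cyclic execution described above can be sketched as follows. This is an illustrative model only: the block names, the returned trace, and the tick loop standing in for timer interrupts are assumptions, not the actual scheduler code.

```python
def run_cyclic_scheduler(schedule, timer_ticks):
    """Execute a fixed schedule of blocks once per timer tick.

    `schedule` is an ordered list of callables (the blocks); each timer
    interrupt triggers one full pass through the schedule, after which the
    scheduler idles until the next tick (the idle time is where asynchronous
    software such as an FTP server would run).
    """
    trace = []
    for tick in range(timer_ticks):
        for block in schedule:
            trace.append((tick, block()))
        # here the real system would wait for the next timer interrupt
    return trace

# Two toy blocks standing in for scheduled components.
blocks = [lambda: "read_sensors", lambda: "update_outputs"]
trace = run_cyclic_scheduler(blocks, timer_ticks=2)
```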

In the work presented in this report, Breadth-First Search (BFS) is used to compute a schedule satisfying all the constraints, if such a schedule exists. When all components corresponding to all applications are scheduled during the breadth-first expansion, the algorithm stops and outputs the first schedule found in the breadth. Any schedule node that violates the constraints of an application is not taken up for expansion while performing BFS. A first-fit strategy is used to find a fit for application components on a host.\\
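The search strategy just described, a breadth-first expansion that never enqueues constraint-violating nodes and stops at the first complete solution, can be sketched generically. The function names and the toy bit-string problem below are illustrative assumptions, not the report's actual scheduler.

```python
from collections import deque

def bfs_first_solution(start, expand, is_goal, violates):
    """Breadth-first search that returns the first goal node found.

    `expand` yields successor nodes; nodes for which `violates` is true are
    pruned (never enqueued), mirroring how schedule nodes that break an
    application's constraints are dropped from the search.
    """
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if is_goal(node):
            return node
        for succ in expand(node):
            if not violates(succ):
                queue.append(succ)
    return None

# Toy example: find the first 3-bit string (in BFS order) whose bits sum
# to 2, pruning any string whose bit-sum already exceeds 2.
result = bfs_first_solution(
    "",
    expand=lambda s: (s + b for b in "01"),
    is_goal=lambda s: len(s) == 3 and s.count("1") == 2,
    violates=lambda s: s.count("1") > 2,
)
```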

\newpage

\section{Literature Survey}
The goals of real-time scheduling are completing tasks within specific time constraints and preventing simultaneous access to shared resources and devices. Although system resource utilization is of interest, it is not a primary driver; in fact, predictability and temporal correctness are the principal concerns. The algorithms used, or proposed for use, in real-time scheduling vary from relatively simple to extremely complex ones. Real-time scheduling algorithms can be studied for either uniprocessor or multiprocessor systems.

\subsubsection{\textbf{Uniprocessor Scheduling Algorithms}}
The set of uniprocessor real-time scheduling algorithms is divided into two major subsets, namely off-line scheduling algorithms and on-line scheduling algorithms. 

\subsubsection{\textbf{Off-line algorithms (Pre-run-time scheduling)}} These algorithms generate scheduling information prior to system execution [22, 30, 32, 27]. The scheduling information is then utilized by the system during runtime. The EDF algorithm and the off-line algorithm provided in [20] are examples of off-line scheduling algorithms.

In systems using off-line scheduling, there is generally, if not always, a required ordering of the execution of processes. This can be accommodated by using precedence relations that are enforced during off-line scheduling.

Off-line algorithms are good for applications where all characteristics are known a priori and change very infrequently. A fairly complete characterization of all processes involved, such as execution times, deadlines, and ready times, is required for off-line scheduling. Off-line algorithms need a large amount of off-line processing time to produce the final schedule, and due to this they are quite inflexible. A major advantage of off-line scheduling is a significant reduction in run-time resources, including processing time, spent on scheduling. However, since off-line scheduling is inflexible, any change requires re-computing the entire schedule [22, 30, 32, 27].

The real advantage of off-line scheduling is that, in a predictable environment, it can guarantee system performance. On-line algorithms, by contrast, generate scheduling information while the system is running [22, 30, 32, 27]. On-line schedulers do not assume any knowledge of the characteristics of processes that have not yet arrived. These algorithms require a large amount of run-time processing time.

However, if different modes or some form of error handling is desired, multiple off-line schedules can be computed, one for each alternate situation. At run-time, a small on-line scheduler can choose the proper one.

One of the severe problems that can occur with priority based preemptive on-line algorithms is priority inversion [22, 32]. This occurs when a lower priority task is using a resource required by a higher priority task, causing the higher priority task to be blocked by the lower priority one.
On-line scheduling algorithms can be divided into Static-priority based algorithms and Dynamic-priority based algorithms.

\subsubsection{\textbf{Static-priority based algorithms}}
Static-priority based algorithms are relatively simple to implement but lack flexibility. They are arguably the most common in practice and have a fairly complete theory. They work well with fixed periodic tasks but do not handle aperiodic tasks particularly well, although there are some methods to adapt the algorithms so that they can also effectively handle aperiodic tasks. Static priority-based scheduling algorithms have two disadvantages, which have received a significant amount of study. Their low processor utilization and poor handling of aperiodic and soft-deadline tasks have prompted researchers to search for ways to combat these deficiencies [22].
On-line static-priority based algorithms may be either preemptive or non-preemptive [35, 22, 32, 3, 11, 10]. For example, the rate-monotonic algorithm and the rate-monotonic deferred server (DS) scheduling algorithm are in the class of preemptive static-priority based algorithms [22, 32].

\subsubsection{\textbf{Dynamic-priority based algorithms}}
Dynamic-priority based algorithms require a large amount of on-line resources. However, this allows them to be extremely flexible. Many dynamic-priority based algorithms also contain an off-line component. This reduces the amount of on-line resources required while still retaining the flexibility of a dynamic algorithm. There are two subsets of dynamic algorithms: planning based and best effort. They attempt to provide better response to aperiodic tasks or soft tasks while still meeting the timing constraints of the hard periodic tasks. This is often accomplished by utilization of spare processor capacity to service soft and aperiodic tasks [22, 32, 27, 26].

\subsubsection{\textbf{Planning Based Algorithms}} Planning based algorithms guarantee that if a task is accepted for execution, the task and all previous tasks accepted by the algorithm will meet their time constraints [22, 32]. Planning based algorithms attempt to improve the response and performance of a system to aperiodic and soft real-time tasks while continuing to guarantee that the deadlines of the hard real-time tasks are met. The traditional way of handling aperiodic and soft real-time tasks in a system containing periodic tasks with hard deadlines is to allow the aperiodic or soft real-time tasks to run in the background. Under this method, the aperiodic or soft real-time tasks get served only when the processor has nothing else to do, and the result is unpredictable and normally rather poor response to these tasks. Planning based algorithms tend to be quite flexible in servicing aperiodic tasks while still maintaining the completion guarantees for hard-deadline tasks.

Earliest Deadline First (EDF) scheduling [37, 32] is one of the first planning based algorithms proposed. It provides the basis for many of the algorithms currently being studied and used. The LLF algorithm is another planning based algorithm. The Dynamic Priority Exchange Server, Dynamic Sporadic Server, Total Bandwidth Server, Earliest Deadline Late Server, and Improved Priority Exchange Server are examples of planning based algorithms that work under EDF scheduling.
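As an illustration of the EDF idea, the following sketch picks, at each decision point, the ready job with the earliest absolute deadline. The job names and the simplifying non-preemptive, all-ready-at-time-zero setting are assumptions made here for illustration.

```python
def edf_order(jobs):
    """Return the execution order chosen by non-preemptive EDF on a single
    processor, for jobs that are all ready at time zero.

    `jobs` maps a job name to its absolute deadline; at every decision point
    the ready job with the earliest deadline runs next.
    """
    remaining = dict(jobs)
    order = []
    while remaining:
        # choose the pending job whose deadline is earliest
        name = min(remaining, key=remaining.get)
        order.append(name)
        del remaining[name]
    return order

# Three jobs with absolute deadlines 7, 3 and 5 time units.
sequence = edf_order({"J1": 7, "J2": 3, "J3": 5})
```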



\subsubsection{\textbf{Multiprocessor Scheduling Algorithms}}
The scheduling of real-time systems has been much studied, particularly upon uniprocessor platforms, that is, upon machines in which there is exactly one shared processor available and all the jobs in the system are required to execute on this single shared processor. In multiprocessor platforms there are several processors available upon which these jobs may execute. Pfair scheduling is one of the few known optimal methods for scheduling tasks on multiprocessor systems [7]. However, the optimal assignment of tasks to processors is, in almost all practical cases, an NP-hard problem [24, 44, 35]. Therefore, we must make do with heuristics. The heuristics cannot guarantee that an allocation will be found that permits all tasks to be feasibly scheduled. All that we can hope to do is allocate the tasks, check their feasibility, and, if the allocation is not feasible, modify the allocation to try to render its schedule feasible.

When checking an allocation for feasibility, we must account for communication costs. For example, suppose that task $t_2$ cannot start before receiving the output of task $t_1$. If both tasks are allocated to the same processor, then the communication cost is zero. If they are allocated to separate processors, the communication cost is positive and must be taken into account while checking for feasibility.

Multiprocessor scheduling techniques fall into two general categories:

\subsubsection{\textbf{Global Scheduling Algorithms}}

Global scheduling algorithms store the tasks that have arrived but not finished their execution in one queue, which is shared among all processors. Suppose there exist $m$ processors. At every moment, the $m$ highest-priority tasks in the queue are selected for execution on the $m$ processors, using preemption and migration if necessary [23, 32].

The focused addressing and bidding algorithm is an example of global scheduling algorithms [32]. The main idea of the algorithm is as follows. Each processor maintains a status table that indicates which tasks it has already committed to run. In addition, each processor maintains a table of the surplus computational capacity at every other processor in the system. The time axis is divided into windows, which are intervals of fixed duration, and each processor regularly sends to its colleagues the fraction of the next window that is currently free.

On the other hand, an overloaded processor checks its surplus information and selects the processor that seems most likely to be able to execute a given task successfully by its deadline. It ships the task out to that processor, which is called the selected processor. However, the surplus information may be out of date, and it is possible that the selected processor will not have the free time to execute the task. To avoid this problem, and in parallel with sending the task to the selected processor, the originating processor asks other lightly loaded processors how quickly they can successfully process the task. The replies are sent to the selected processor. If the selected processor is unable to process the task successfully, it can review the replies to see which other processor is most likely to be able to do so, and it transfers the task to that processor.

\subsubsection{\textbf{Partitioning Scheduling Algorithms}}
Partitioning scheduling algorithms partition the set of tasks such that all tasks in a partition are assigned to the same processor. Tasks are not allowed to migrate; hence the multiprocessor scheduling problem is transformed into many uniprocessor scheduling problems [23, 32]. The next-fit algorithm for RM scheduling is a multiprocessor scheduling algorithm that works based on the partitioning strategy [32]. In this algorithm, we define a set of classes of tasks. The tasks in the same class are guaranteed to satisfy RM schedulability on one processor. We allocate tasks one by one to the appropriate processor class until all the tasks have been assigned. Then, with this assignment, we run the RM scheduling algorithm on each processor.

Global strategies have several disadvantages compared to partitioning strategies. Partitioning usually has a low scheduling overhead compared to global scheduling, because tasks do not need to migrate across processors. Furthermore, partitioning strategies reduce a multiprocessor scheduling problem to a set of uniprocessor ones, to each of which well-known uniprocessor scheduling algorithms can be applied. However, partitioning has two negative consequences. First, finding an optimal assignment of tasks to processors is a bin-packing problem, which is NP-complete; thus, tasks are usually partitioned using non-optimal heuristics. Second, as shown in [13], task systems exist that are schedulable if and only if tasks are not partitioned. Still, partitioning approaches are widely used by system designers. In addition to the above approaches, we can apply hybrid partitioning/global strategies: for instance, each job can be assigned to a single processor, while a task is allowed to migrate.
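A minimal sketch of partitioned scheduling in the spirit of the next-fit RM algorithm is given below. It uses the Liu and Layland utilization bound $n(2^{1/n}-1)$ as the per-processor RM schedulability test; the task set, tuple representation and function name are assumptions made for illustration.

```python
def next_fit_partition(tasks, num_procs):
    """Assign (execution_time, period) tasks to processors next-fit style.

    A task joins the current processor if that processor still passes the
    Liu-Layland rate-monotonic bound n*(2**(1/n) - 1) with the task added;
    otherwise we move on to the next processor and never look back.
    Returns a list of task-index lists, one per processor, or None if the
    tasks do not fit.
    """
    bins = [[] for _ in range(num_procs)]
    current = 0
    for idx, (c, t) in enumerate(tasks):
        while current < num_procs:
            candidate = bins[current] + [idx]
            n = len(candidate)
            util = sum(tasks[i][0] / tasks[i][1] for i in candidate)
            if util <= n * (2 ** (1 / n) - 1):
                bins[current].append(idx)
                break
            current += 1  # next fit: never revisit earlier processors
        else:
            return None  # ran out of processors
    return bins

# Four tasks with utilizations 0.5, 0.25, 0.4 and 0.2 on two processors.
assignment = next_fit_partition([(1, 2), (1, 4), (2, 5), (1, 5)], num_procs=2)
```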

\newpage

\section{Problem Statement}

Each node '$N_{i}$' is represented by the following information: \{ T$_{i}$, $\alpha_{i}$, $\beta_{i}$, $\gamma_{i}$, $E_{i}$ \}, where $0 < i \le m$.

\begin{itemize}

\item 'T$_{i}$' - Cycle time for node 'i'.

\item '$\alpha_{i}$' - Context switch time for node 'i'.

\item '$\beta_{i}$' - Operating system hardware interrupt time for node 'i'.

\item '$\gamma_{i}$' - Management process time for node 'i'.

\item '$E_{i}$' - A schedule representing the starting time of various components pertaining to different applications corresponding to node 'i'.

\end{itemize}
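The node tuple above can be modeled as a simple record. The field names, the integer time units, and the \texttt{overhead} helper (which sums the three fixed per-node overheads) are illustrative assumptions, not part of the formal definition.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One host in the distributed system, as defined above.

    cycle_time (T), context_switch (alpha), interrupt_time (beta) and
    management_time (gamma) are fixed per-node quantities; schedule (E)
    maps a component identifier to its start time and is the unknown
    that must be computed.
    """
    cycle_time: int
    context_switch: int
    interrupt_time: int
    management_time: int
    schedule: dict = field(default_factory=dict)

    def overhead(self):
        # per-cycle time not available to application components (assumed sum)
        return self.context_switch + self.interrupt_time + self.management_time

n1 = Node(cycle_time=100, context_switch=2, interrupt_time=3, management_time=5)
```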
 
\begin{figure}[!ht]
\begin{center}
 \includegraphics[width=0.5\textwidth,totalheight=0.4 \textheight]{Node.pdf} 
\end{center}
\caption{Node Information} 
\end{figure}

An application '$A_{j}$' is sub-divided into a number of components '$c_{jk}$', where $0 < j \le n$ and $0 < k \le l_{j}$; '$l_{j}$' represents the total number of components of application '$j$'. A configuration C$_{j}$ is defined for each application; it specifies the following attributes: $<$ G$_{j}$, S$_{j}$, $\Lambda_{j}$, D$_{j}$ $>$.

\begin{itemize}
\item G$_{j}$ - represents the dependencies among components as a directed acyclic graph. It puts a restriction on the way the components can be scheduled; for example, if there is a directed edge between components c$_{jk}$ and c$_{jk+1}$, represented as c$_{jk}$ $\rightarrow$ c$_{jk+1}$, then c$_{jk+1}$ cannot execute before c$_{jk}$. In general, a component can be taken up for execution only after all its ancestors have completed their execution.
\item S$_{j}$ - is a collection of component groups I$_{x}$, where $0 < x \le g$; each group contains a set of components that are not separable across hosts.
\item $\Lambda_{j}$ - represents the cycle time for application j.
\item D$_{j}$ - represents the deadline for application j.

\end{itemize}

\begin{figure}[!ht]
\begin{center}
 \includegraphics[width=0.8\textwidth,totalheight=0.3 \textheight]{Applications.pdf} 
\end{center}
\caption{Components of Application} 
\label{fig:compofapp}
\end{figure}

The example in figure \ref{fig:compofapp} shows the division into components of the applications A$_{1}$, A$_{2}$ and A$_{3}$.
\begin{itemize}
\item A$_{1}=$ \{ c$_{11}$, c$_{12}$, c$_{13}$, c$_{14}$ \} 
\item A$_{2}=$  \{ c$_{21}$, c$_{22}$, c$_{23}$, c$_{24}$, c$_{25}$, c$_{26}$ \}
\item A$_{3}=$ \{ c$_{31}$, c$_{32}$, c$_{33}$ \} 
\end{itemize}


\begin{figure}[!ht]
\begin{center}
 \includegraphics[width=0.7\textwidth,totalheight=0.3 \textheight]{Schedule.pdf} 
\end{center}
\caption{Schedule} 
\label{fig:schedule}
\end{figure}

Let V$_{jz}$ be the set of possible valid configurations for an application, where $0 < z \le z_{j}$; '$z_{j}$' represents the number of possible arrangements for application '$j$'. A valid configuration is an arrangement of the components of an application that does not violate the dependency constraints specified by the directed acyclic graph. For the example shown in figure \ref{fig:compofdep}, the valid configurations are as follows.
\begin{itemize}
\item  V$_{11}=<$ c$_{11}$, c$_{12}$, c$_{13}$, c$_{14}$ $>$, V$_{12}=<$ c$_{11}$, c$_{13}$, c$_{12}$, c$_{14}$ $>$, V$_{13}=<$ c$_{11}$, c$_{13}$, c$_{14}$, c$_{12}$ $>$
\item  V$_{21}=<$ c$_{21}$, c$_{22}$, c$_{24}$, c$_{25}$, c$_{26}$, c$_{23}$ $>$,V$_{22}=<$ c$_{21}$, c$_{24}$, c$_{25}$, c$_{22}$, c$_{26}$, c$_{23}$ $>$, V$_{23}=<$ c$_{21}$, c$_{24}$, c$_{22}$, c$_{25}$, c$_{26}$, c$_{23}$ $>$ , V$_{24}=<$ c$_{24}$, c$_{21}$, c$_{22}$, c$_{25}$, c$_{26}$, c$_{23}$ $>$, V$_{25}=<$ c$_{24}$, c$_{25}$, c$_{21}$, c$_{22}$, c$_{26}$, c$_{23}$ $>$ , V$_{26}=<$ c$_{24}$, c$_{21}$, c$_{25}$, c$_{22}$, c$_{26}$, c$_{23}$ $>$  
\item  V$_{31}=<$ c$_{31}$, c$_{32}$, c$_{33}$ $>$,  V$_{32}=<$  c$_{32}$, c$_{31}$, c$_{33}$ $>$
\end{itemize}
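The valid configurations of an application are exactly the topological orderings of its dependency DAG. A brute-force enumeration, shown here on the dependency graph of A$_{3}$ from the example (c$_{31}$ and c$_{32}$ are independent and both precede c$_{33}$), makes this concrete; the function name and edge representation are assumptions.

```python
def topological_orders(nodes, edges):
    """Enumerate all valid configurations of an application, i.e. all
    topological orderings of its component dependency DAG.

    `edges` is a set of (u, v) pairs meaning component u must run before v.
    """
    orders = []

    def extend(prefix, remaining):
        if not remaining:
            orders.append(tuple(prefix))
            return
        for comp in remaining:
            # comp is eligible only if every predecessor is already placed
            if all(u in prefix for (u, v) in edges if v == comp):
                extend(prefix + [comp], remaining - {comp})

    extend([], set(nodes))
    return sorted(orders)

# A3 from the example above: both c31 and c32 must precede c33.
configs = topological_orders(
    ["c31", "c32", "c33"],
    {("c31", "c33"), ("c32", "c33")},
)
```

This reproduces the two valid configurations V$_{31}$ and V$_{32}$ listed above.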


'$c_{jk}$' is defined by the following attributes $<$ $\phi$, $\delta$  $>$.

\begin{itemize}

\item $\phi$ - represents starting time constraint on the component.

\item $\delta$ - represents the time duration needed by the component to complete its execution.
 
\end{itemize}

\begin{figure}[!ht]
\begin{center}
 \includegraphics[width=0.8\textwidth,totalheight=0.4 \textheight]{DependentBlock.pdf} 
\end{center}
\caption{Dependencies among components} 
\label{fig:compofdep}
\end{figure}

The example in figure \ref{fig:compofdep} also shows the grouping of components.

\begin{itemize}

\item S$_{1} =$ \{ I$_{1}$ \}, I$_{1}$ = \{ c$_{13}$, c$_{14}$ \}  

\item S$_{2} =$ \{ I$_{2}$ \}, I$_{2}$ = \{ c$_{22}$, c$_{25}$, c$_{26}$  \}  

\item S$_{3} =$ \{ I$_{3}$ \}, I$_{3}$ = \{ c$_{31}$, c$_{32}$ \}  

\end{itemize}


\vspace{10 mm}

The problem is to compute $E_{i}$ using T$_{i}$, $\alpha_{i}$, $\beta_{i}$, $\gamma_{i}$, and C$_{j}$. If $E_{i}$ cannot be computed because some applications conflict with one another no matter how they are scheduled, then deductions must be made about which minimal set of applications could be removed from the schedule. While computing the schedule across nodes, some of the optimization strategies mentioned below can be applied.
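Since every application is periodic, any static schedule must repeat over the least common multiple of the application cycle times (the hyperperiod); this is the quantity that \textbf{computeLCM} in the Algorithm section computes. A minimal sketch, assuming integer cycle times:

```python
from math import gcd
from functools import reduce

def compute_lcm(cycle_times):
    """Least common multiple of the application cycle times: the
    hyperperiod over which the computed static schedule repeats."""
    return reduce(lambda a, b: a * b // gcd(a, b), cycle_times)

# Applications with cycle times 4, 6 and 10 share a hyperperiod of 60,
# so an application with cycle time 6 contributes 60/6 = 10 instances.
hyperperiod = compute_lcm([4, 6, 10])
```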

\begin{center}
\begin{table}
\begin{tabular}{|l|l|l|}
\hline
\multicolumn{3}{|c|}{Schedule Across Nodes} \\
\hline
Schedule & Components & Start Time \\ \hline
\multirow{6}{*}{$E_{1}$} & $_{OS}$ & s$_{11}$ \\
 & c$_{21}$ & s$_{12}$ \\
 & c$_{11}$ & s$_{13}$ \\
 & c$_{31}$ & s$_{14}$ \\
 & c$_{32}$ & s$_{15}$ \\
 & $_{M}$ & s$_{16}$ \\ \hline
 
 \multirow{7}{*}{$E_{2}$} & $_{OS}$ & s$_{21}$ \\
 & c$_{24}$ & s$_{22}$ \\
 & c$_{22}$ & s$_{23}$ \\
 & c$_{25}$ & s$_{24}$ \\
 & c$_{26}$ & s$_{25}$ \\
  & c$_{12}$ & s$_{26}$ \\
 & $_{M}$ & s$_{27}$ \\ \hline

\multirow{6}{*}{$E_{3}$} & $_{OS}$ & s$_{31}$ \\
 & c$_{13}$ & s$_{32}$ \\
 & c$_{14}$ & s$_{33}$ \\
 & c$_{23}$ & s$_{34}$ \\
 & c$_{33}$ & s$_{35}$ \\
 & $_{M}$ & s$_{36}$ \\ \hline

%\caption{Sample Schedule Output} 
\end{tabular}
\caption{Sample Schedule Output}
\end{table}
\end{center}


\section{Algorithm}
\textbf{computeLCM}(appList) in \emph{line 1} computes the LCM of the cycle times of all applications. In \emph{line 2}, \textbf{computeSchedulability}(appList, hostList) computes the schedulability of each application on each host; an application is schedulable on a host if the application's cycle time is a multiple of the host cycle time and there exists at least one component in the application whose execution time does not overrun the host clocking period. In \emph{lines 3-4} we initialize the start node and push it into the queue. BFS is performed in \emph{lines 5-37} to find a suitable schedule that satisfies all the constraints. The top node is dequeued from the queue in \emph{lines 6-7}. \textbf{computeComponentSum}(schedNode) in \emph{line 8} computes the total number of components scheduled in the schedule node; a check is made in \emph{lines 9-12} to ensure that the total number of components scheduled in the schedule node is equal to the total number of components of all the applications, and if so, we serialize the computed schedule. A loop is started in \emph{lines 13-36} which cycles through all schedulable applications on each host. In \emph{line 15}, \textbf{getNextComponent}(app) uses the information in the schedule node to get the next component of the chosen application to be scheduled. In \emph{line 16}, \textbf{firstFitComponent}(component, schedNode.getAssignmentList(host.getHostID())) finds an appropriate place to fit the component on the chosen host, based upon the schedule node information.

In \emph{line 17} a check is done to make sure the component is properly schedulable on the host. Then, in \emph{lines 23-28}, a check is made to ensure that no occurrence of the component within the LCM overlaps on the host. \emph{Line 24} adds the non-overlapping instances of the component to the \textbf{componentList}; if even a single instance of the component fails to be allocated on the host, the entire allocation fails and the \textbf{componentList} is destroyed.


\begin{algorithm}
\KwIn{\\  \emph{appList:} be the set of applications.\\\emph{hostList:} be the set of hosts.}

%\\\emph{schedList:} contains schedulable application on each host.\\\emph{totalComponents:} is the sum of %total number of components of all applications.\\\emph{schedComponents:} is the number of components %scheduled in a schedule node.\\\emph{schedNode:} is the node structure which holds the schedule for %application across hosts.\\\emph{startNode:} the initial node to start the BFS.\\\emph{queue:} for performing %breadth first search.\\\emph{lcm:} is the least common multiple of cycle time of all applications.}

\KwOut{\\\emph{schedNode:} Output a schedule of components across hosts.}

\SetKwFunction{computeLCM}{computeLCM}
\SetKwFunction{computeSchedulability}{computeSchedulability}
\SetKwFunction{pushBack}{pushBack}
\SetKwFunction{isEmpty}{isEmpty}
\SetKwFunction{topElement}{topElement}
\SetKwFunction{popElement}{popElement}
\SetKwFunction{getAppList}{getAppList}

\SetKwFunction{computeComponentSum}{computeComponentSum}
\SetKwFunction{serializeSchedule}{serializeSchedule}
\SetKwFunction{firstFitComponent}{firstFitComponent}
\SetKwFunction{getNextComponent}{getNextComponent}
\SetKwFunction{getAssignmentList}{getAssignmentList}
\SetKwFunction{getHostID}{getHostID}
\SetKwFunction{isValid}{isValid}
\SetKwFunction{testFirstFitComponent}{testFirstFitComponent}
\SetKwFunction{initializeComponentList}{initializeComponentList}
\SetKwFunction{addComponent}{addComponent}
\SetKwFunction{createNewNode}{createNewNode}
\SetKwFunction{destroyComponentList}{destroyComponentList}
\SetKwFunction{initializeStartNode}{initializeStartNode}


\SetKwData{appList}{appList}
\SetKwData{hostList}{hostList}
\SetKwData{schedList}{schedList}
\SetKwData{startNode}{startNode}
\SetKwData{schedNode}{schedNode}
\SetKwData{schedComponents}{schedComponents}
\SetKwData{totalComponents}{totalComponents}
\SetKwData{host}{host}
\SetKwData{app}{app}
\SetKwData{component}{component}
\SetKwData{lcm}{lcm}
\SetKwData{queue}{queue}
\SetKwData{startTime}{startTime}
\SetKwData{cycleTime}{cycleTime}
\SetKwData{overlap}{overlap}
\SetKwData{componentList}{componentList}
\SetKwData{tempSchedNode}{tempSchedNode}

                

\BlankLine


\lcm $\leftarrow$ \computeLCM(\appList)\\
\schedList $\leftarrow$ \computeSchedulability(\appList,\hostList)\\
\initializeStartNode(\startNode,\appList,\hostList)\\
\queue.\pushBack(\startNode)\\
\BlankLine
\While{\!!\queue.\isEmpty()}
{
	%\CommentSty{//Get the top node from the queue.}\\
	\schedNode $\leftarrow$ \queue.\topElement()\\
	\queue.\popElement()\\
	%\CommentSty{//Compute the total number of scheduled components.}\\
	\schedComponents $\leftarrow$ \computeComponentSum(\schedNode)\\
	\If{\schedComponents == \totalComponents}
	{
			

		%\CommentSty{//Serialize schedule to a file.}\\
		\serializeSchedule(\schedNode)\\
		\textbf{break}
	}

	\ForEach{ \host in \schedList}
	{
	
		\ForEach{ \app in \host.\getAppList() }
		{
			%\CommentSty{//Get the next component of the application to be scheduled.}\\
	\component $\leftarrow$ \schedNode.\getNextComponent(\app)\\
			%\CommentSty{//Find first fit of the component on the chosen host.}\\
			\firstFitComponent (\component,\schedNode.\getAssignmentList(\host.\getHostID()))\\
			%\CommentSty{/*Check if a component start time is valid or not, that is if a fit could be found or not.*/}\\

			\If{\isValid (\component.\startTime)}
			{
				
				\startTime $\leftarrow$ \component.\startTime \\
				\overlap $\leftarrow$ \textbf{false} \\
				\For{ i $\leftarrow$ 1 to \lcm / \app.\cycleTime }	
				{
					\initializeComponentList (\componentList,\component)\\
					%\CommentSty{//Compute new start time from initial start time.}\\
					\startTime $\leftarrow$ \startTime + \app.\cycleTime \\
					%\CommentSty{//Test the remaining components to find a fit.}\\
					\eIf{\isValid ( \testFirstFitComponent ( \component, \startTime ) ) }
					{
						\componentList.\addComponent (\component, \startTime)
					}
					{
						%\CommentSty{/*If any of the component overlaps with previously scheduled components we set the overlap flag to true, destroy the list and don't construct a new node.*/}\\
						\overlap $\leftarrow$ \textbf{true} \\
						\textbf{break}\\
					}
					
				}
	\If{!\overlap}
				{
					\tempSchedNode $\leftarrow$ \createNewNode(\schedNode, \componentList)
					\queue.\pushBack(\tempSchedNode)\\
	
				}
				\destroyComponentList(\componentList)
			}
			
		}
	}
	
}
\end{algorithm}
\emph{Lines 30--32:} if none of the component's allocations overlap, a new schedule node is created with the assignment of the component and pushed into the queue.

\section{Example}

Consider two applications, Application1 and Application2. Application1 has two components and a cycle time of 15ms; its component1 has an execution time of 3ms and its component2 an execution time of 6ms. Application2 has two components and a cycle time of 30ms; its component1 has an execution time of 2ms and its component2 an execution time of 5ms. Consider two hosts, host1 and host2, with cycle times of 5ms and 15ms respectively.

First a schedulability check is made to determine which applications are schedulable on which hosts. In this case, since the application cycle times of 15ms and 30ms are integer multiples of the host cycle times of 5ms and 15ms, both Application1 and Application2 are schedulable on both host1 and host2. Next the LCM of the cycle times of all applications is computed, which is 30ms.
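The schedulability check and the LCM computation above can be sketched as follows (a minimal illustration in Python; the function names are ours, not the implementation's):

```python
from functools import reduce
from math import gcd

def compute_lcm(cycle_times):
    """Least common multiple of all application cycle times."""
    return reduce(lambda a, b: a * b // gcd(a, b), cycle_times)

def is_schedulable(app_cycle, host_cycle):
    """An application is schedulable on a host when its cycle time
    is an integer multiple of the host cycle time."""
    return app_cycle % host_cycle == 0

# The example above: LCM(15, 30) = 30, and both applications
# are schedulable on both hosts.
print(compute_lcm([15, 30]))
print(is_schedulable(15, 5), is_schedulable(30, 15))
```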

In the node structure for the schedule we maintain the component start-time information for each host. There are also two columns corresponding to the two applications: the first column corresponds to Application1 and the second to Application2. The first row holds the number of components scheduled in that node for each application; the second row holds the next estimated start time for a component of that application. Initially all entries are zero. The representation is shown in Figure~\ref{fig:expansion1}.
\begin{figure}[!ht]
\begin{center}
 \includegraphics[width=0.8\textwidth,totalheight=0.3 \textheight]{example1.pdf} 
\end{center}
\caption{First Level Expansion} 
\label{fig:expansion1}
\end{figure}

From the start node a breadth-first search is performed. First a fit is found on host1 for all applications schedulable on host1. Component1 of Application1 is tried on host1; since the application's cycle time occurs twice within the LCM computed across all applications, Component1 of Application1 is placed twice, once at the start of the cycle and once after the application's cycle time. The number of components scheduled for the application is marked as 1, and the time after which the next component of that application should start is marked as 3. For the second node from the left, Component1 of Application2 is scheduled at the start of the cycle on host1; the entry for the number of components is marked as 1, and the estimated start time of the next component is marked as 2.

In the second node from the right, Component1 of Application1 is tried on Host2; the entry for the number of components is marked as 1 and the estimated start time for the next component of this application is marked as 3. In the first node from the right, Component1 of Application2 is tried on Host2; the entry for the number of components is marked as 1 and the estimated start time for the next component of the application is marked as 2.
\begin{figure}[!ht]
\begin{center}
 \includegraphics[width=0.8\textwidth,totalheight=0.3 \textheight]{example2.pdf} 
\end{center}
\caption{Second Level Expansion} 
\label{fig:expansion2}
\end{figure}
In the second level, shown in Figure~\ref{fig:expansion2}, only the first-level node that leads to a solution is considered for the sake of the example. First, Component2 of Application1 is checked for a fit on host1; since its execution time overruns the cycle time of the host, that node is not considered for further expansion. Next Component1 of Application2 is tried on host1; the first fit searches for an empty slot where this component can be placed without overlapping the previously scheduled components. Since Component1 of Application1 completes execution after 3ms, Component1 of Application2 is scheduled at 3ms; correspondingly, the number of components scheduled for Application2 is marked as 1 and the time after which the next component of Application2 should start is marked as 5ms.
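The first-fit step used above can be sketched as a scan for the earliest gap in a host's timeline that holds the component without overlap (an illustrative Python sketch under our own interval representation, not the implementation's):

```python
def first_fit(busy, duration, cycle_time, earliest=0):
    """Find the earliest start >= `earliest` so that
    [start, start + duration) overlaps no (start, duration) interval
    in `busy` and finishes within the host cycle; None if no fit."""
    start = earliest
    for b_start, b_dur in sorted(busy):
        if start + duration <= b_start:
            break                            # fits in the gap before this interval
        start = max(start, b_start + b_dur)  # skip past the busy interval
    if start + duration <= cycle_time:
        return start
    return None

# Component1 of Application1 (3ms) occupies [0, 3) on host1 (cycle 5ms);
# Component1 of Application2 (2ms) then fits at 3ms.
print(first_fit([(0, 3)], 2, 5))          # -> 3
# Component2 of Application1 (6ms) must start at or after 3ms on host2
# (cycle 15ms), and overruns host1's 5ms cycle entirely.
print(first_fit([], 6, 15, earliest=3))   # -> 3
print(first_fit([(0, 3)], 6, 5))          # -> None
```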

Next Component2 of Application1 is tried on Host2; since its execution must start after 3ms, it is scheduled on Host2 at the 3rd ms. Since the period of Application1 occurs twice within the LCM, the component is placed twice. Correspondingly the number of components scheduled is marked as 2 and the time after which the next component of this application should start is marked as 9. In the first node from the right, Component1 of Application2 is tried on Host2 and the time after which the next component of this application should start is marked as 2.

\begin{figure}[!ht]
\begin{center}
 \includegraphics[width=0.8\textwidth,totalheight=0.3 \textheight]{example3.pdf} 
\end{center}
\caption{Third Level Expansion} 
\label{fig:expansion3}
\end{figure}

The second node of the second level is expanded, as it leads to the solution for this example. The case where the components of Application1 and Application2 are tried on Host1 fails, so that branch is not expanded any further. Component2 of Application1 is tried on Host2 and the entries are marked accordingly. In the first node from the right, Component1 of Application2 is tried on Host2 and the entries for the number of components and the start time of the application's components are marked accordingly. The state of the expansion is shown in Figure~\ref{fig:expansion3}.


\begin{figure}[!ht]
\begin{center}
 \includegraphics[width=0.2\textwidth,totalheight=0.3 \textheight]{example4.pdf} 
\end{center}
\caption{Fourth Level Expansion} 
\label{fig:expansion4}
\end{figure}

The third node of the third level is chosen for further expansion, as it leads to the solution for this example. Component2 of Application2 is tried on Host2 to find a first fit; since Component2 of Application1 starts at the 3rd ms and ends at 9ms, Component2 of Application2 is scheduled at the 9th ms. Now, since all components of all applications are scheduled, the schedule node is serialized to the file.


\section{Output}
\subsection{Input Schema}
The input schema consists of one file for each host and one file for each application. The schedule file (Listing~\ref{lst:schedule}) contains the following information.
\begin{itemize}
\item \textbf{NodeInfo:} points to the host file.
\item \textbf{ApplicationInfo:} points to the application file.
\end{itemize}
The host file contains the following information:
\begin{itemize}
\item \textbf{Unit:} refers to the unit of time used by the host.
\item \textbf{NodeId:} refers to a unique id given to the node.
\item \textbf{CycleTime:} refers to the cycle time of the host.
\item \textbf{OSHWTime:} time taken by OS hardware interrupt.
\item \textbf{ContextSWTime:} time taken to switch from one application to another.
\item \textbf{ManagementTime:} time taken by management process.
\item \textbf{OffsetSupported:} to mark if an offset feature is supported by a host or not.
\end{itemize}
The application file contains the following information:
\begin{itemize}
\item \textbf{Application:} has the following attributes: the cycle time of the application, represented by \emph{cycle}; the deadline of the application, represented by \emph{deadline}; the number of components in the application, represented by \emph{jobno}; the application name, represented by \emph{appname}; and the timing unit for the application, represented by \emph{unit}.
\item \textbf{Group:} represents the components that go inside the same binary.
\item \textbf{CompName:} each component is represented by a CompName.
\item \textbf{JobDuration:} each component execution time is represented by a JobDuration.
\item \textbf{JobConstraint:} represents a constraint on the component's start time, relative to the start of a cycle.
\end{itemize}
\lstset{
caption=Schedule.xml,label={lst:schedule}
}
\begin{lstlisting}

<Schedule>

  <Nodes no="2" time="0">
    <NodeInfo>Host1.xml</NodeInfo>
    <NodeInfo>Host2.xml</NodeInfo>
    <Applications no="2"/>
    <ApplicationInfo>Application1.xml</ApplicationInfo>
    <ApplicationInfo>Application2.xml</ApplicationInfo>
  </Nodes>

</Schedule>

\end{lstlisting}

\lstset{
caption=Host1.xml
}
\begin{lstlisting}

<Schedule>

  <Host>
    <Unit>ms</Unit>
    <NodeId>0</NodeId>
    <CycleTime>5</CycleTime>
    <OSHWTime>1</OSHWTime>
    <ContextSWTime>1</ContextSWTime>
    <ManagementTime>1</ManagementTime>
    <OffsetSupported>yes</OffsetSupported>
  </Host>

</Schedule>

\end{lstlisting}


\lstset{
caption=Host2.xml
}
\begin{lstlisting}

<Schedule>

  <Host>
    <Unit>ms</Unit>
    <NodeId>1</NodeId>
    <CycleTime>15</CycleTime>
    <OSHWTime>1</OSHWTime>
    <ContextSWTime>1</ContextSWTime>
    <ManagementTime>2</ManagementTime>
    <OffsetSupported>yes</OffsetSupported>
  </Host>

</Schedule>

\end{lstlisting}

\lstset{
caption=Application1.xml
}

\begin{lstlisting}

<Schedule>

  <Application cycle="15" deadline="15" jobno="2" appname="App1" unit="ms">
    <Groups no="1">
      <Group no="1">
        <Item>1</Item>
        <Item>2</Item>
      </Group>
    </Groups>
    <Components>
      <Component>
      	<CompName>Comp1</CompName>
        <JobDuration>3</JobDuration>
        <JobConstraint>0</JobConstraint>
      </Component>
      <Component>
      	<CompName>Comp2</CompName>
        <JobDuration>6</JobDuration>
        <JobConstraint>0</JobConstraint>
      </Component>
    </Components>
  </Application>

</Schedule>

\end{lstlisting}


\lstset{
caption=Application2.xml
}


\begin{lstlisting}


<Schedule>

  <Application cycle="30" deadline="30" jobno="2" appname="App2" unit="ms">
    <Groups no="1">
      <Group no="2">
        <Item>1</Item>
        <Item>2</Item>
      </Group>
    </Groups>
    <Components>
      <Component>
      	<CompName>Comp1</CompName>
        <JobDuration>2</JobDuration>
        <JobConstraint>0</JobConstraint>
      </Component>
      <Component>
      	<CompName>Comp2</CompName>
        <JobDuration>6</JobDuration>
        <JobConstraint>0</JobConstraint>
      </Component>
    </Components>
  </Application>

</Schedule>


\end{lstlisting}
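As a sketch of how such files might be consumed, the following parses the application schema with Python's standard \texttt{xml.etree} (a cut-down copy of Application1.xml is inlined; the \texttt{Groups} section is omitted for brevity, and the parser is ours, not the implementation's):

```python
import xml.etree.ElementTree as ET

# Cut-down copy of Application1.xml from the listing above.
APP_XML = """
<Schedule>
  <Application cycle="15" deadline="15" jobno="2" appname="App1" unit="ms">
    <Components>
      <Component>
        <CompName>Comp1</CompName>
        <JobDuration>3</JobDuration>
        <JobConstraint>0</JobConstraint>
      </Component>
      <Component>
        <CompName>Comp2</CompName>
        <JobDuration>6</JobDuration>
        <JobConstraint>0</JobConstraint>
      </Component>
    </Components>
  </Application>
</Schedule>
"""

def parse_application(text):
    """Return (cycle time, [(component name, execution time), ...])."""
    app = ET.fromstring(text).find("Application")
    cycle = int(app.get("cycle"))
    comps = [(c.findtext("CompName"), int(c.findtext("JobDuration")))
             for c in app.find("Components")]
    return cycle, comps

print(parse_application(APP_XML))
```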


\subsection{Output Schema}
In the output schema, \emph{sequential} indicates that the components execute sequentially on the host. In the \emph{periodic} tag, the attribute \emph{period} gives the number of host clock cycles between successive executions of the component, and the attribute \emph{start} indicates that the component's execution should start during the first cycle.
\lstset{
caption=ScheduleHost1.xml
}
\begin{lstlisting}

<Schedule>

  <sequential>
    <periodic period="3" start="1">
      <execute application="App1" component="Comp1"/>
    </periodic>
    <periodic period="6" start="1">
      <execute application="App2" component="Comp1"/>
    </periodic>
  </sequential>

</Schedule>

\end{lstlisting}


\lstset{
caption=ScheduleHost2.xml
}
\begin{lstlisting}

<Schedule>

  <sequential>
    <execute application="App1" component="Comp2"/>
    <periodic period="2" start="1">
      <execute application="App2" component="Comp2"/>
    </periodic>
  </sequential>

</Schedule>

\end{lstlisting}
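The \emph{period} attributes in the listings above follow directly from the application and host cycle times; a minimal sketch, assuming period = application cycle / host cycle (the function name is illustrative):

```python
def periodic_attrs(app_cycle, host_cycle, start_cycle=1):
    """Derive the <periodic> attributes of the output schema: period is
    the number of host cycles between successive runs of the component."""
    assert app_cycle % host_cycle == 0  # the schedulability condition
    return {"period": app_cycle // host_cycle, "start": start_cycle}

# Matches the listings above:
print(periodic_attrs(15, 5))   # App1 on Host1 -> {'period': 3, 'start': 1}
print(periodic_attrs(30, 5))   # App2 on Host1 -> {'period': 6, 'start': 1}
print(periodic_attrs(30, 15))  # App2 on Host2 -> {'period': 2, 'start': 1}
```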




\subsection{Schedule Graph}
The schedule graph is a Gantt chart for each host, showing the components with their start times and durations across all hosts.

\begin{figure}[!ht]
\begin{center}
 \includegraphics[width=0.8\textwidth,totalheight=0.3 \textheight]{scheduleoutput.pdf} 
\end{center}
\caption{Gantt Chart Representing Schedule.} 
\label{fig:scheduleoutput}
\end{figure}

\section{Future Work}
The current implementation does not support parallel computation. Since each branch taken while performing the BFS is independent of the others, the schedule can be computed in parallel on multiple cores or on a grid. In the current implementation of first fit, the complexity of searching for a fit is O(\emph{n}) and the complexity of inserting an element is O(\emph{n}). The first fit could be improved by using an interval tree, where the complexity of searching for a fit is O(\emph{n}) and the complexity of inserting an element is O(\emph{log n}). The search could also be changed to best-first, following one of the optimization criteria below.
\begin{itemize}
\item Schedule the applications on a minimum number of nodes so that power consumption can be reduced.
\item Schedule in such a way that there is maximum separation among components, so that even if one component misses its deadline it does not affect the components following it.
\item Schedule in such a way that the load is equally distributed among nodes.
\item Schedule in such a way that network communication among components is minimized. In the figure, the dotted arrows represent network communication across nodes.
\end{itemize}
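Since the branches are independent, one level of the BFS could be expanded in parallel; a toy sketch with Python's \texttt{concurrent.futures}, where \texttt{expand} is a hypothetical stand-in for one expansion step:

```python
from concurrent.futures import ThreadPoolExecutor

def expand(node):
    """Hypothetical stand-in for one BFS expansion step: expanding a
    schedule node yields its child nodes (here just labelled tuples)."""
    return [node + (i,) for i in range(2)]

def parallel_bfs_level(frontier, workers=4):
    """Expand all nodes of the current BFS level in parallel; the
    branches are independent, so they can run on separate workers."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        expanded = list(pool.map(expand, frontier))
    return [child for group in expanded for child in group]

print(parallel_bfs_level([(0,), (1,)]))  # -> [(0, 0), (0, 1), (1, 0), (1, 1)]
```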



\begin{figure}[!ht]
\begin{center}
 \includegraphics[width=0.8\textwidth,totalheight=0.3 \textheight]{example5.pdf} 
\end{center}
\caption{Parallelization on Multiple Cores} 
\label{fig:parallel}
\end{figure}

\newpage
\section{Result}
In the current implementation, BFS is performed to compute the schedule, and XML is used as the input/output format of the scheduling algorithm. In the current implementation of first fit, the complexity of searching for a fit is O(\emph{n}) and the complexity of inserting an element is O(\emph{n}). The complexity of the BFS expansion is currently O(\emph{(mn)}$^{c}$), where \emph{m} is the total number of applications, \emph{n} is the total number of hosts and \emph{c} is the total number of components across all applications.
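For the running example (\emph{m} = 2 applications, \emph{n} = 2 hosts, \emph{c} = 4 components in total) this bound evaluates to
\[
(mn)^{c} = (2 \cdot 2)^{4} = 256
\]
nodes in the worst case; in practice, branches whose components overlap are pruned, so far fewer nodes are expanded.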

%\bibliographystyle{IEEEannot}
%\bibliography{generatexml}

\end{document}  



