% TODO Batz: references in the bib file
% TODO Batz: What do you think of the conclusion? Should anything be added?
% TODO All: What should our kernel be called?
% TODO All: Should we give URLs in the text or only in the references? URLs
% appear twice, once for the High Resolution Timer and once for the kernel.

\section{Real-Time}
\label{sec:realtime}
This chapter defines real-time and its components. Furthermore, it gives an
overview of important aspects such as time, scheduling, interrupt
handling and other basic characteristics. The chapter closes with a description
of real-time in combination with Linux.
\subsection{Definition}

\begin{quotation}
``Real-time systems are computing systems that must react within
precise time constraints to events in the environment. As a consequence, the correct behavior
of these systems depends not only on the value of the computation but also
on the time at which the results are produced''\cite{book:hardRealTime}.
\end{quotation}

The integrity and correctness of a real-time computer system depend on two
things: on the one hand on the logical results of its computations, and on
the other hand on the physical instant at which these results are produced.

Such a real-time computer system has to react correctly to stimuli from a
controlled object within time intervals dictated by its environment.

The instant at which a result must be produced is called its deadline. If a
result retains its utility after the deadline has passed, the deadline is
qualified as soft, otherwise it is firm.
A deadline is called \emph{hard} if missing it
could lead to accidents or catastrophes\cite{book:rtdesignprinciples}.

A real-time computer system therefore is called hard real-time computer system
or safety-critical real-time computer system if at least one hard deadline
exists; otherwise it is called soft real-time computer system.

The design of these two kinds of real-time computer systems differs
fundamentally. Hard real-time computer systems must sustain a guaranteed
temporal behavior under all specified load and fault conditions, whereas a
soft real-time system is permitted to miss a deadline occasionally.

A real-time computer system comprises tasks, which determine and organize the
system behavior, as well as resources, which are used by those tasks. This
implies restrictions on the implementation that affect scheduling constraints.
Typical behavioral requirements concern task execution, task allocation and
task priority.

The temporal behavior of a real-time computer system depends heavily on the
environment that the system interacts with. Accordingly, the requirements of
such a system are rarely analyzed for each task individually, but rather for
chains of several tasks constituting a specific function.

\subsection{Important Aspects of a Real-Time System}
This section introduces several components of a real-time system and
describes their definition and function.

\subsubsection{Time}
Time, timers, time constraints and deadlines are significant aspects of a
real-time system. Therefore this section deals with different aspects of time
in such a system: the different kinds of time handling in a Linux
operating system, and the so-called \emph{High Resolution Timer}, which is
significant for a real-time system.

A real-time system can handle time in different ways, such as cyclical
interrupt generation, time measurement, time monitoring or timing (for
services and tasks). These types are explained in the following
paragraphs:

\paperDescription {Cyclical Interrupt Generation} {A timer implemented in a
real-time system is responsible for generating a cyclical interrupt (e.g.
every 10 ms). On this interrupt the connected interrupt service routine
invokes the scheduler, which deals with the different services and tasks.}

\paperDescription {Time Measurement} {Time measurement is used to quantify
the time e.g. between two tasks or operations and is frequently needed in
automation technology. Time measurement is often a safety-related aspect of
real-time systems.}

\paperDescription {Time Monitoring} {Like time measurement, time monitoring
is a safety-related aspect of real-time systems. For example, several tasks
or programs are monitored by control instances of the system. The software
watchdog can be mentioned here as an example of system time monitoring.}

\paperDescription{Timing} {Specific tasks in a system have to be executed at
regular intervals (e.g. backup routines).}

Furthermore, in real-time systems and real-time applications it is necessary
to set and get the current time. For this purpose the system can choose
between two different kinds of timers: absolute time emitters (clocks) and
relative time emitters (timers). Both can be realized in hardware or
software in terms of forward or reverse counters.

On the different levels of a real-time system (hardware, kernel and user)
there are different methods to get the current time.

\paperDescription{Hardware}{The most common form on the hardware level is
the real-time clock. This kind of clock is battery-backed, so that it keeps
running after the system has been shut down. The real-time clock is an
absolute time emitter, which is significant for distributed real-time
systems, so that each system has the same time relative to its time zone.}

\paperDescription{Kernel}{The kernel or operating system level gets its time
from the system timer interrupt. Its period, often called a `tick'
(in Linux a jiffy), is based on the clock on the hardware level. For the
sake of completeness it has to be mentioned that this tick can vary from
system to system. A counter overflow can occur after a longer period of
time, which can have an adverse effect on the real-time system if no
verification is implemented.}

\paperDescription{User}{The user has a number of functions to operate with
time, e.g. \emph{gettimeofday} or \emph{sleep}. On the user level several
verifications have to be implemented to protect applications from failure.
Common cases are the switch between winter and summer time or leap years.}
\cite{url:rtTime}
%TODO: Pavlos - different source, or leave it as is?

%TODO: Pavlos - source for HRT
To complete the picture regarding the topic of time, the high resolution
timer has to be mentioned. This special timer is more powerful than the
normal timer implemented in the Linux operating system. A high resolution
timer can normally operate in the microsecond range with little overhead,
whereas a normal timer only reaches millisecond resolution. This more
precise timer, which can be found in most Linux distributions, is often
needed for special operations and applications, e.g. in the real-time
sector. In combination with special hardware (e.g. High Precision Event
Timers) the Linux kernel can execute preassigned operations at more
accurate intervals, often shorter than one jiffy.
% TODO Pavlos: URL (see above)
The High Resolution Timer can be downloaded from \cite{url:timer}.

\subsubsection{Scheduler}
\label{sec:scheduler}

\paperDescription{\\ Real-Time Task Model}{Real-time tasks are the basic
executable entities which can be scheduled. They can be either periodic or
aperiodic. A task can be defined by four temporal parameters:}

\begin{itemize}
	\item \textit{r}, task release time
	\item \textit{C}, maximum task computational time
	\item \textit{D}, relative deadline of the task
	\item \textit{T}, task period (for periodic tasks)
\end{itemize}
A formal way to describe a task can be written as the following expression:
\begin{equation*}
	\tau(r_{0}, C, D, T)
\end{equation*}
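This task model can be sketched as a plain C structure; the field and helper
names below are our own illustration, not part of any scheduler API:

```c
/* Sketch of the task model tau(r, C, D, T); the fields mirror the four
 * temporal parameters named above. */
typedef struct {
    long r; /* task release time              */
    long C; /* maximum task computation time  */
    long D; /* relative deadline of the task  */
    long T; /* task period (periodic tasks)   */
} rt_task;

/* Absolute deadline of the k-th instance of a periodic task: the k-th
 * release happens at r + k*T, and the deadline lies D after it. */
long absolute_deadline(const rt_task *t, long k)
{
    return t->r + k * t->T + t->D;
}
```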

\paperDescription{Overview \& Classification}{Figure
\ref{fig:Scheduling_Classification} represents a classification of real-time
scheduling algorithms.}

\begin{figure}[!t]
	\centering
		\includegraphics[width=0.45\textwidth]{Figures/Scheduling_Classification.png}
	\caption{Classification of Real-Time Scheduling Algorithms}
	\label{fig:Scheduling_Classification}
\end{figure}

In general a scheduler has to guarantee that all tasks will be executed in a
specific period of time.

\paperDescription{Dynamic Scheduling} {A scheduler is called \emph{dynamic} or
\emph{on-line} if the scheduling decisions are made at run time by selecting
one task out of the current pool of ready tasks. Dynamic schedulers are
flexible and adapt to the running task scenario; they consider only the
current task requests.}

\paperDescription{Static Scheduling}{A \emph{static} or \emph{pre-run-time}
scheduler operates in a different way and makes its decisions at compile
time. It generates a so-called \emph{Dispatching Table} for the run-time
dispatcher off-line. The scheduler needs complete prior knowledge of the
task set characteristics, e.g. maximum execution times, precedence
constraints, mutual exclusion constraints and deadlines. Based on this, the
dispatching table contains all information needed at run time to decide
which task should be scheduled next.}

\paperDescription{Preemptive Scheduling}{In preemptive scheduling a currently
executing task can be preempted, i.e. interrupted, if a task with higher
priority requests resources and services.}

\paperDescription{Nonpreemptive Scheduling}{This kind of scheduling is the
opposite of preemptive scheduling. A currently executing task will not be
interrupted until it releases the allocated resources by itself, normally
after its completion. This kind of scheduling is reasonable in scenarios
where many short tasks have to be executed.}

\paragraph{Scheduling Algorithms}

\paperDescription{\\ Rate Monotonic Scheduling (RM)}{This algorithm is a
dynamic preemptive one based on static task priorities. The following
assumptions are made:}

\begin{enumerate}
	\item The requests for all tasks are periodic.
	\item All tasks are independent of each other.
	\item The deadline interval of every task is equal to its period.
	\item The maximum computational time is known.
	\item The time required to switch context can be ignored. 
	\item The sum of the utilization factors $\mu$ of the \emph{n} tasks
	satisfies
	\begin{equation*}
		\mu = \sum \frac{C_{i}}{T_{i}} \leq n(2^{\frac{1}{n}} - 1)
	\end{equation*}
\end{enumerate}

\emph{Remark}: The term $n(2^{1/n} - 1)$ approaches $\ln 2$, i.e.
about $0.7$, as \emph{n} goes to infinity. This upper bound holds for a set
of \emph{n} tasks with fixed priorities (restriction: the ratio between any
two request periods has to be less than 2).

The RM-Algorithm assigns static priorities based on task periods. The task
with the shortest period gets the highest priority. According to this the task
with the longest period gets the lowest static priority. Based on the given
priority the dispatcher decides at run time which task should be executed. 

If all conditions are satisfied, it is guaranteed that all tasks will be
executed within their deadlines. This algorithm is optimal among
fixed-priority algorithms for single-processor systems.
\begin{align*}
	\intertext{Example}
	\tau_{1}& (0,20,100,100) \\
	\tau_{2}& (0,40,150,150) \\
	\tau_{3}& (0,100,350,350)
	\intertext{Using the given formula in assumption 6, the result is }
	\mu &= \sum \frac{C_{i}}{T_{i}} \leq n\left(2^{\frac{1}{n}} - 1\right) \\
	\mu &= \frac{20}{100} + \frac{40}{150} + \frac{100}{350} \leq 3
	\left(2^{\frac{1}{3}} - 1 \right) \\ 
	\mu &= 0.75238 \leq 0.77976
\end{align*}

\begin{figure}[!t]
	\centering
	\includegraphics[width=0.45\textwidth]{Figures/Scheduling_RMA.png}
	\caption{Example of Rate Monotonic Scheduling}
	\label{fig:Scheduling_RMA}
\end{figure}

\paperDescription{Deadline Monotonic Scheduling (DM)}{This algorithm assigns
priorities to tasks according to their relative deadlines: tasks with
shorter relative deadlines get higher priorities. The algorithm is close to
RM scheduling, except that the relative deadline, rather than the period,
determines the priority assignment. In other words, RM and DM are identical
if each relative deadline is proportional to its period. The relation to
the least upper bound can be expressed as follows:}

\begin{align*}
	\mu &= \sum \frac{C_{i}}{D_{i}} \leq n\left(2^{\frac{1}{n}} - 1\right) \\
	\intertext{Example:}
	\tau_{1}& (0,1,5,10) \\
	\tau_{2}& (0,3,10,15) \\
	\tau_{3}& (0,5,75,100)
	\intertext{Using the given formula, the result is}
	\mu &= \sum \frac{C_{i}}{D_{i}} \leq n\left(2^{\frac{1}{n}} - 1\right) \\
	\mu &= \frac{1}{5} + \frac{3}{10} + \frac{5}{75} \leq 3 \left(2^{\frac{1}{3}}
	- 1 \right) \\ 
	\mu &= 0.5\overline{6} \leq 0.77976
\end{align*}

\paperDescription{Earliest-Deadline-First (EDF) Algorithm}{This algorithm is
an optimal dynamic preemptive one for single-processor systems based on
dynamic priorities, for which assumptions 1-5 of the RM algorithm have to
hold as well.}
 
The processor utilization $\mu$ can go up to 1, even when task periods are
not multiples of the smallest period. After each significant event (release
or completion of a task), the task with the earliest absolute deadline is
assigned the highest dynamic priority. Accordingly, all tasks are ordered by
their deadlines and the dispatcher chooses the task which has to be finished
first.
\begin{align*}
	\intertext{Example}
	\tau_{1}& (0,6,20,20) \\
	\tau_{2}& (4,4,10,10) \\
	\tau_{3}& (6,6,15,15)
\end{align*}
\begin{figure}[!t]
	\centering
		\includegraphics[width=0.45\textwidth]{Figures/Scheduling_EDF.png}
	\caption{Example of Earliest Deadline First Scheduling}
	\label{fig:Scheduling_EDF}
\end{figure}
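The EDF dispatching decision itself is simple to sketch: among the ready
tasks, pick the one with the earliest absolute deadline. The task structure
below is a simplification of our own, not a real scheduler interface:

```c
/* Sketch of the EDF dispatching decision. */
typedef struct {
    int  id;
    long abs_deadline; /* absolute deadline of the current instance */
    int  ready;        /* non-zero if the task requests the CPU     */
} edf_task;

/* Return the index of the ready task with the earliest absolute
 * deadline, or -1 if no task is ready. */
int edf_pick(const edf_task tasks[], int n)
{
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (!tasks[i].ready)
            continue;
        if (best < 0 || tasks[i].abs_deadline < tasks[best].abs_deadline)
            best = i;
    }
    return best;
}
```

In the example above, at time $t = 6$ the absolute deadlines are 20 for
$\tau_{1}$, 14 for $\tau_{2}$ and 21 for $\tau_{3}$, so $\tau_{2}$ is
dispatched.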
\paperDescription{Least-Laxity (LL) Algorithm}{Least-Laxity is another optimal
algorithm for single-processor systems. It makes the same assumptions as the
Earliest-Deadline-First algorithm. At any point of scheduling decision, the
task with the shortest laxity \emph{l}, i.e. the difference between the
deadline interval \emph{D} and the computation time \emph{C},}
\begin{align*}
	D& - C = l
	\intertext{is assigned the highest dynamic priority.}
	\intertext{Example:}
	\tau_{1}& (0,3,7,20) \\
	\tau_{2}& (0,2,4,5) \\
	\tau_{3}& (0,1,8,10)
\end{align*}
\begin{figure}[!t]
	\centering
		\includegraphics[width=0.45\textwidth]{Figures/Scheduling_LL.png}
	\caption{Example of Least-Laxity Scheduling}
	\label{fig:Scheduling_LL}
\end{figure}
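The LL decision can be sketched the same way as the EDF one, this time
comparing laxities $D_i - C_i$ (the function name is our own):

```c
/* Sketch: pick the task with the smallest laxity l = D - C. */
int ll_pick(const long D[], const long C[], int n)
{
    int best = 0;
    for (int i = 1; i < n; i++)
        if (D[i] - C[i] < D[best] - C[best])
            best = i;
    return best;
}
```

For the example task set the laxities are $4$, $2$ and $7$, so $\tau_{2}$
gets the highest dynamic priority.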

\paperDescription{SCHED\_OTHER}{By default the so-called \emph{SCHED\_OTHER}
scheduling strategy is used to handle processes in Unix systems. It treats
all processes as having the static priority 0, which requires no real-time
extension. If the use of dynamic priorities is essential, the scheduler
distributes priority values between -20 (highest) and 19 (lowest). These
so-called \emph{nice values} can be modified afterwards by calling the
\emph{nice()} or \emph{setpriority()} function (raising a priority requires
root privileges). The scheduler uses time slicing, which means that each
task gets a certain amount of time to execute. After that time slice the
task gets interrupted and another one gets executed.}

\paperDescription{SCHED\_FIFO}{The \emph{SCHED\_FIFO} (First In First Out)
scheduler, in contrast to \emph{SCHED\_OTHER}, always works with priorities.
For this purpose values between 1 and 99 are used. It does not support
\emph{time slicing}. If a process \textbf{A} is interrupted by a process
\textbf{B} with higher priority, \textbf{A} has to wait until \textbf{B} has
finished. If no other process with a higher priority than \textbf{A} is
waiting, \textbf{A} gets reactivated. If a process is scheduled by
\emph{SCHED\_FIFO}, all other processes, which are executed under the
\emph{SCHED\_OTHER} scheduling algorithm, are interrupted as soon as
possible.}
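A process can request the \emph{SCHED\_FIFO} policy for itself via the
POSIX scheduling interface. The following is a sketch (the wrapper name is
our own); note that switching to a real-time policy normally requires root
privileges, so the call may fail with \emph{EPERM}:

```c
/* Sketch: switching the calling process to SCHED_FIFO via the POSIX
 * scheduling API (Linux). */
#include <sched.h>

/* Try to become a SCHED_FIFO process with the given priority. Valid
 * priorities range from sched_get_priority_min(SCHED_FIFO) to
 * sched_get_priority_max(SCHED_FIFO), i.e. 1..99 on Linux. Returns 0 on
 * success, -1 with errno set (e.g. EPERM without root) otherwise. */
int become_fifo(int priority)
{
    struct sched_param param = { .sched_priority = priority };
    return sched_setscheduler(0, SCHED_FIFO, &param);
}
```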

% TODO All: is the following sentence correct as it stands?
\paperDescription{Remark}{For further and more detailed information about
scheduling in real-time systems, it is very helpful to take a look at the
book by F. Cottet\cite{book:schedulinginrt} and the paper by C.L.
Liu\cite{url:sched}.}

\subsubsection{Interrupt Handler}
Interrupt handling is a very important topic in real-time and real-time
operating systems. Therefore this section takes a detailed look at the
\emph{Interrupt Handler} and how it is connected to real-time, and
specifies the interrupt handler, its purpose and its function.

The interrupt handler is a routine which is executed when an
interrupt occurs. The source of this interrupt can be an
external device (like a keyboard) or any kind of software. Each interrupt
has its own \emph{Interrupt Service Routine} (ISR), which is executed when
the interrupt occurs. For example, pressing a specific key on the keyboard
calls the ISR for this key, which assigns the corresponding character to
the key and writes it to the keyboard buffer.

During the normal execution flow from one instruction to another, the
processor can receive an interrupt from a specific source. This causes the
CPU to stop the ongoing execution flow and to handle the interrupt, i.e.
execute the linked ISR.

The following steps are executed when an interrupt occurs:
\begin{itemize}
  \item Further interrupts are disabled (whether this is supported depends
  on the architecture of the operating system).
  \item The current state of the program is saved, so that the
  program can continue after the interrupt without any failure.
  \item The interrupt and its connected actions are executed.
  \item Interrupts are enabled again (if they have been disabled
  before).
\end{itemize}

The purpose of interrupt handling is that the CPU does not have to wait for
an event. The CPU gets an interrupt from e.g. a device,
which tells the CPU that something has happened. The CPU then realizes that
this device wants its attention and assigns it processor execution time.

While talking about interrupts, two different types have to be mentioned:
synchronous and asynchronous interrupts. A synchronous interrupt is linked
with the processor clock and follows this clock; such interrupts occur only
relative to the clock of the CPU. In contrast, an asynchronous interrupt
can occur at any time, because it does not follow the processor clock.
Figure \ref{fig:InterHandler} visualizes this difference.

\begin{figure}[!t]
	\centering
		\includegraphics[width=0.45\textwidth]{Figures/interrupt_handler.png}
	\caption{Difference between Synchronous and Asynchronous Interrupt Handling}
	\label{fig:InterHandler}
\end{figure}

An example of an asynchronous interrupt is pressing a key on the
keyboard. An example of synchronous interrupt handling would be a real-time
clock or timer.

In real-time systems and real-time applications it is necessary that an
interrupt is handled within a very short time constraint and therefore in a
rapid way. Furthermore, all real-time systems have to cope with the
so-called \emph{Interrupt Latency}.
\begin{quotation}
{Interrupt latency is the interval of time
from an external interrupt request signal being raised to the first interrupt
service routine instruction being fetched. Interrupt latency is a
combination of the hardware system and the software interrupt
handler\cite{url:interLatency}.}
\end{quotation}

Therefore a real-time system or application has to assure low interrupt
latency. In addition, such systems have to deal with multiple
interrupts which can occur simultaneously. If these two aspects cannot be
guaranteed by a real-time system or application, it may appear slow and the
real-time requirements cannot be ensured. This aspect is very significant
if human life or environmental damage depends on the
system\cite{url:interHandler}.

%TODO: Pavlos - source for memory
\subsubsection{Memory}
In a real-time system the main memory (also called \emph{volatile memory},
because it requires power to maintain the stored information) is a
component which must not be underestimated. This section describes the
function and the purpose of the main memory in a real-time system and takes
a detailed look at the possible problems and challenges related to main
memory in real-time systems.

The main memory of a system is managed by the operating system, in this case a
real-time operating system. The operating system has to organize the access to the main
memory with help of the so called \emph{Memory Management}. At this point two
different memory management systems have to be mentioned. On the one hand there
exists the so called \emph{static} memory management and on the other hand the
\emph{dynamic} memory management. Characteristics of a static memory
management are the following ones:
\begin{itemize}
  \item The necessary amount of memory is allocated when a process starts.
  \item The memory is released when the process terminates.
  \item Memory segments have the same, fixed size.
\end{itemize}

In comparison, the characteristics of dynamic memory management are the
following:
\begin{itemize}
  \item The necessary amount of memory is allocated as required.
  \item Memory is released as required.
  \item Memory segments have different, flexible sizes.
\end{itemize}

In real-time operating systems static memory management is often found,
because of the requirements such a system has to fulfill (especially in a
hard real-time system), like safety, predictability and determinism. But
nowadays, as the complexity of such systems increases, the trend has
shifted: in complex systems static memory management becomes difficult to
maintain and the utilization of the memory is not optimal. Therefore the
demand for more flexible dynamic memory management increases hand in hand
with the increasing complexity of the systems\cite{book:sharedMem}.

% source for Shared Memory
\paperDescription{Shared Memory}{Different time-critical operations,
processes and applications access the main memory of a real-time system. It
is often the case that two different processes access the same memory
segment. The purpose of sharing memory between processes is the so-called
\emph{Inter-Process Communication} (IPC). With shared memory, e.g.
information sharing or computation speedup can be realized in a
comfortable way.}\cite{book:sharedMem}

%TODO Pavlos Quelle fuer Semaphore
\subsubsection{Semaphore}
This section takes a detailed look at the so-called \emph{Semaphore}: the
idea and purpose of a semaphore, and the different operations a semaphore
provides. Following that, the concept of spinlocks will be explained.

A semaphore is a special variable or data structure which is used to
synchronize two or more parallel processes which share a resource.
A semaphore is thus used to manage the access of different parties to
shared resources (e.g. shared memory). The purpose of a semaphore is to
prevent race conditions.

Semaphores can be accessed only with the following two atomic operations:
\begin{itemize}
  \item P-Operation: Decreases the value of a semaphore by one.
  \item V-Operation: Increases the value of a semaphore by one.
\end{itemize} 

The P-Operation is used when a process tries to reserve a shared resource.
If this resource is free, the P-Operation decreases the value of the
semaphore by one to indicate that the resource is reserved. If another
process wants to operate on the same resource, it has to execute the
P-Operation on the semaphore as well. The P-Operation decreases the
semaphore only if its value is greater than zero; otherwise the caller has
to wait until the resource is released. If the resource is no longer
needed, the holder has to execute the V-Operation, which increases the
semaphore value by one to indicate that the resource is free.
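On a POSIX system the two atomic operations map directly onto the unnamed
semaphore API: \emph{sem\_wait} corresponds to P and \emph{sem\_post} to V.
A minimal sketch (the wrapper names mirror the notation above):

```c
/* Sketch: P and V realized with an unnamed POSIX semaphore (Linux). */
#include <semaphore.h>

sem_t resource;

void P(void) { sem_wait(&resource); }  /* reserve: decrease by one */
void V(void) { sem_post(&resource); }  /* release: increase by one */
```

Initialized with the value 1 (`sem_init(&resource, 0, 1)`), this behaves as
a binary semaphore protecting a single shared resource.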

The spinlock concept, which works similarly to the semaphore concept, is
also a mechanism to prevent race conditions and to protect critical
sections. It is a lock on which a process waits in a loop, checking whether
the resource or the critical section is available. If the critical section
is not reserved by another process, the process which wants to enter the
critical section marks the spinlock as active. Any other process then
checks if the spinlock is available; if not, it waits until the spinlock is
free and then enters the critical section\cite{book:sharedMem}.
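The busy-wait loop described above can be sketched with a C11 atomic flag;
this is an illustration of the concept, not the kernel's spinlock
implementation:

```c
/* Sketch: a minimal spinlock built from a C11 atomic flag. */
#include <stdatomic.h>

atomic_flag lock = ATOMIC_FLAG_INIT;

void spin_lock(void)
{
    /* Busy-wait until the previous holder clears the flag; the
     * test-and-set is the atomic "mark the spinlock as active" step. */
    while (atomic_flag_test_and_set(&lock))
        ;   /* spin */
}

void spin_unlock(void)
{
    atomic_flag_clear(&lock);
}
```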

\subsubsection{Critical Section Protection}
The \emph{Critical Section Protection} (CSP) is used to protect program
regions where:
\begin{itemize}
  \item mutual exclusion protection is needed,
  \item it is too expensive to create a mutex and
  \item the given time to spend in a region is very short.
\end{itemize}
Such a critical section is a global object which prevents any thread from
entering the protected regions once a thread is already inside the
section\cite{url:csp}.

\subsubsection{Priority Inheritance}
\emph{Priority Inheritance} is a method to eliminate priority inversion
problems\cite{url:prioinherit}. 

Under priority inheritance a transaction \textbf{A} with low priority which
holds a lock is executed at the priority of the highest-priority
transaction \textbf{B} waiting for that lock, until \textbf{A} releases it.
\textbf{A} therefore cannot be preempted by \textbf{B}, but changes its
priority at run time.

Moreover, a locking transaction may run faster than it would without the
priority change, thus releasing its lock more quickly. Accordingly, the
time for which a high-priority transaction is blocked may be reduced.
In addition, priority inheritance also reduces or, in the best case,
eliminates the problem of CPU resource blocking.
As a result, priority inheritance provides a significant performance
improvement in real-time operating systems.

\subsection{Real-Time Operating System}
Regarding to the given aspects a real-time operating system have to
guarantee and provide the following principles to that kind of system:
\begin{itemize}
  \item It has to be multithreaded and preemptable.
  \item If the operating system is not deadline-driven, a notion of thread
  priority has to exist.
  \item The system has to provide predictable thread
  synchronization mechanisms.
  \item Priority inheritance has to be supported by the system.
  \item The behavior and reaction times of the system should be known.
\end{itemize}
Therefore a sophisticated kernel is necessary for a good and reliable
real-time operating system. In addition, detailed documentation and tools
for developing and tuning applications are essential.