\section{Testing}
After building and installing the real-time kernel, it has to be tested
against real-time requirements in different test cases. The tests are carried
out with several tools and packages to see how predictable the kernel is.

There are several ways and measurement tools to test a real-time operating
system. One possibility is the rt-tests suite written by Thomas Gleixner.
Alternatively, there are the LTP real-time tests, which can be found in the
Ubuntu repository. The rt-tests package includes:
\begin{itemize}
	\item cyclictest
	\item signaltest
	\item pi\_stress test
	\item classic\_pi test
\end{itemize}
The important tests of the LTP real-time test suite are:
\begin{itemize}
	\item sched\_latency
	\item sched\_football
	\item pi-tests
\end{itemize}
Based on the requirement to check a real-time operating system against a
normal operating system, it was decided to compare the cyclictest results of
both.

\subsection{Prerequisites}
As it is necessary to have comparable results, two different hardware
configurations and different kernels were chosen.
The following section presents the prerequisites.

\subsubsection{Hardware}
The following machines were used for the tests:

\paperDescription{Hardware 1}
	{Sony VAIO VGN TZ11XN
	\begin{itemize}
      \item Intel Core 2 Duo (1.06 GHz, 2MB Cache, 533MHz FSB) CPU
      \item 1x2GB DDR2-SDRAM
      \item Ultra ATA/100 (30MB/s tested buffered disk reads with the
      Linux program \emph{hdparm})
\end{itemize}}

\paperDescription{Hardware 2}
	{Asus F8SN-4S022C
	\begin{itemize}
      \item Intel Core 2 Duo (2.50 GHz, 6MB Cache, 800MHz FSB) CPU
      \item 1x2GB DDR2-SDRAM, 1x1GB DDR2-SDRAM
      \item SATA 3Gb/s (62MB/s tested buffered disk reads with the
      Linux program \emph{hdparm})
    \end{itemize}}

The hardware will appear under these names in the following diagrams.

\subsubsection{Software}
The main goal is to compare the real-time kernel with the normal one. The
following kernels were used:

\paperDescription{Ubuntu RT Kernel}{This kernel is the real-time counterpart
of the Ubuntu kernel. It was created to combine the benefits of the patched
Ubuntu kernel with the real-time patch, as opposed to a vanilla kernel with
just the real-time patch applied. As described in section
\ref{subsec:ubuntuKern}, it can be downloaded and installed from the Ubuntu
repository. It is maintained by \emph{Alessio Igor Bogani
<abogani@ubuntu.com>}. The version used for testing is 2.6.31.9.}

\paperDescription{Ubuntu Kernel}{This kernel is the Ubuntu kernel as it is
installed on nearly every Ubuntu desktop or server system. It is the patched
Ubuntu kernel, which supports advanced hardware compatibility and features.
The version used for the tests is 2.6.31.15.}

\paperDescription{\ownkernel}{This kernel is based on the Vanilla kernel
from \cite{url:kernel} and the \emph{CONFIG\_PREEMPT\_RT} kernel patch, as
described in section \ref{subsec:OwnKern}. In addition, this kernel is adapted
to \emph{Hardware 1}: modules and drivers that are not required were left out.
To anticipate the test results: it was decided to exclude this kernel from
further tests, because it did not meet the real-time requirements. The kernel
was compiled and tested several times, but with each iteration it became
slower. This shows that optimizing a kernel from scratch to meet real-time
requirements is not as easy as it looks. The results are shown in Figure
\ref{fig:boxplot}.}

\subsection{Cyclictest}
Cyclictest is a program, written by Thomas Gleixner, which tests real-time
kernels for performance and especially latency. For this purpose, it is
possible to define a number of tasks with a certain priority and an interval,
running for a specified number of cycles. The program offers many more test
options, but only the ones which have been used will be explained.

The cyclictest is being executed on the two machines, as mentioned before. The
following parameters are used for this test:

\paperDescription{-t NUM, --threads=NUM} {Number of test threads (default is 1)}

\paperDescription{-p PRIO, --prio=PRIO} {Sets the priority of the first thread}

\paperDescription{-d DIST, --distance=DIST} {Specifies the distance between
the intervals of the threads}

\paperDescription{-D TIME, --duration=TIME} {Specifies the run time of the
program in seconds (no suffix), minutes (m), hours (h) or days (d)}

\paperDescription{-m, --mlockall} {Locks the memory of the process to prevent
it from being paged out}

\paperDescription{-v, --verbose} {Verbose output for the statistics}

\paperDescription{-o FACTOR, --oscope=FACTOR} {Reduces the output by the
specified factor. Normally used for piping the results to another program (an
oscilloscope), but in this case useful to limit the amount of data produced as
output}

Table \ref{tab:cyclictest} shows four testcases.
All testcases ran for two minutes with a monotonic clock (monotonically
increasing system time) instead of the real-time clock (time of day) and with
mlock activated. For statistical purposes, an output factor of 10 has been
chosen to reduce the amount of data produced by the test. Additionally, the
priority parameter is set in two testcases to specify the priority of the
threads. Regarding this parameter, it has to be mentioned that setting a
priority changes the scheduling policy from SCHED\_OTHER to SCHED\_FIFO.

\begin{table}[!t]
  \centering
  \begin{tabularx}{\tabellenbreite\textwidth}{X||X|X|X|X}
    \textbf{Testcase} & \textbf{1} & \textbf{2} & \textbf{3} & \textbf{4} \\
    \hline
    \hline
    \textbf{Tasks} & 25 & 25 & 100 & 100\\
    \hline
    \textbf{Priority} & - & 80 & - & 80 \\
    \hline
    \hline
    \textbf{Clock} & \multicolumn{4} {c} {real-time or monotonic} \\
    \hline
    \textbf{Time} & \multicolumn{4} {c} {2 mins} \\
    \hline
    \textbf{MLock} & \multicolumn{4} {c} {activated}\\
    \hline
  \end{tabularx}
  \caption{Cyclic Test}
  \label{tab:cyclictest}
\end{table}

In order to automate testing on several machines, a bash script has been created
which is represented in Listing \ref{lst:cyclictest}.

\begin{lstlisting}[label=lst:cyclictest, float=ht, language=bash,
caption=Extract of the Cyclictest Bash Script]
#!/bin/bash
# ... 
cyclictest -t$TASKNUMBER1 -o10 -D$CYCLICTIME -m -v -c$CLOCK > $TESTCASE_1
cyclictest -t$TASKNUMBER1 -o10 -p80 -D$CYCLICTIME -m -v -c$CLOCK > $TESTCASE_2
cyclictest -t$TASKNUMBER2 -o10 -D$CYCLICTIME -m -v -c$CLOCK > $TESTCASE_3
cyclictest -t$TASKNUMBER2 -o10 -D$CYCLICTIME -m -p80 -v -c$CLOCK > $TESTCASE_4
# Further operations on the file are made for 
# compatibility but left out to avoid going 
# beyond the scope of this listing.
\end{lstlisting}

With this script, the following steps are performed:
\begin{itemize}
  \item Create a directory, where the results will be stored.
  \item Start four tests in a row with different parameters (see Table
  \ref{tab:cyclictest}).
  \item Trim the format of the results so that they can be processed with
  other tools (in this case: the programming language \emph{R}\cite{manual:R}).
\end{itemize}
Executing the script yields four files with test results. These files can be
processed with R (for an example of these files, see Listing
\ref{lst:showfile.csv}).

\begin{lstlisting}[label=lst:showfile.csv, float=ht, language=bash,
caption=Extract of a Cyclictest Result]
Task Count Latency
0 0 0
0 1 111
1 1 234
2 1 22
0 2 123
0 3 345
1 2 12
2 2 123
...
\end{lstlisting}
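The trimming step can be sketched with a small awk filter. The raw line
format assumed here (``task: count: latency'') and the sample lines are
hypothetical and may differ between cyclictest versions:

```shell
# Turn assumed raw verbose lines of the form "task: count: latency"
# into the space-separated format of Listing lst:showfile.csv.
printf '%s\n' ' 0:       1:     111' ' 1:       1:     234' |
awk -F':' 'BEGIN { print "Task", "Count", "Latency" }
           { gsub(/ /, ""); print $1, $2, $3 }'
```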

It is necessary to mention that the test hardware has relatively fast CPUs.
Many real-time systems are required to stay minimal in terms of hardware in
order to reduce cost and energy consumption. Running the same tests on such a
minimal machine, the differences would be even more pronounced.

\subsubsection{Test for Latency}
As latency is the most important factor in real-time systems, the latency test
outweighs the other ones. Table \ref{tab:latencyTest} shows the test attributes.

\begin{table}[!t]
  \centering
  \begin{tabularx}{\tabellenbreite\textwidth}{l||X}
  	\textbf{Attribute} & \textbf{Statement} \\
  	\hline
  	\hline
    \textbf{Tests used} & 	\begin{enumerate}
							  \item Real-time kernel runs without generated CPU load.
							  \item Real-time kernel runs on maximum CPU load.
							  \item \ownkernel runs without generated CPU load.
							  \item \ownkernel runs on maximum CPU load.
							  \item Non-real-time kernel runs without generated CPU load.
							  \item Non-real-time kernel runs on maximum CPU load.
							\end{enumerate} \\
    \hline
    \textbf{Expectations} & \begin{itemize}
                              \item It is expected that kernels with maximum
                              CPU load have higher latencies than kernels
                              without.
                              \item It is expected that real-time kernels have
                              a lower mean latency than non-real-time kernels.
                              \end{itemize} \\
    \hline
    \textbf{Display Format} & \begin{itemize}
                                \item \emph{Boxplots} due to advantages in
                                displaying the maximum, minimum, median and
                                mean of the different tests.
                                \item \emph{Lineplots} due to advantages in
                                displaying the progression over a period of
                                time.
                                \end{itemize}
  \end{tabularx}
  \caption{Test for Latency}
  \label{tab:latencyTest}
\end{table}

The results are summed up by an R script, which is shown in Listing
\ref{lst:RSciptboxplot}.

\begin{lstlisting}[label=lst:RSciptboxplot, float=!t, language=R,
caption=R Script for Boxplots]
# Read data
data <- read.table("data.csv", header=TRUE)
# Y-Axis marks defining
yAxisMarks <- c(0, 2000, 4000, 6000, 8000, 10000)
# Define colors for the boxplots
colors <- c("red2", "red4","orange2","orange4","lightblue2", "lightblue4")
# Output file
png("boxplot.png")
# The names of the plots (the maximum of each test column)
theNames <- sapply(data, max)
# Generate the boxplots
boxplot(data, xlab="Maximum values of different kernels in us",
        ylab="Latency in us", col=colors, yaxt="n", names=theNames)
# Add the Y-Axis marks and horizontal lines to the diagram
abline(h=yAxisMarks, col="gray", lwd=0.5)
axis(2, at=yAxisMarks)
# Legend
legend("topleft", inset=.05, title="Latency in different kernels",
       c("(1) RT, no load", "(2) RT, 100% load",
         "(3) Test kernel, no load", "(4) Test kernel, 100% load",
         "(5) Normal, no load", "(6) Normal, 100% load"),
       fill=colors)
# Add a box for a nice view
box()
# Switch the device off to get a safe instance end
dev.off()
\end{lstlisting}

The resulting boxplot is shown in Figure \ref{fig:boxplot}. As expected, the
maximum, median and mean of the non-real-time kernels are much higher than
those of the real-time kernels. The \ownkernel lies between both kernels, but
with such high latencies it does not meet the requirements of a real-time
kernel. These results lead to the conclusion that the \ownkernel will not be
considered in further tests.

The second assumption was that the kernels with a fully loaded CPU have higher
latencies than the ones without load. This hypothesis holds, but the relation
between loaded and non-loaded latencies differs between real-time and
non-real-time kernels, as shown in Table \ref{tab:latencyRelation}. In this
respect, the \ownkernel seems to be better than the real-time kernel. However,
the closer the maximum gets to the minimum, the harder it becomes to keep a
good relation between fully loaded and non-loaded systems.
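The relation shown in Table \ref{tab:latencyRelation} is a simple percentage
calculation, sketched here with two hypothetical maximum latencies:

```shell
# Latency difference between a fully loaded and a non-loaded run;
# the two maxima (in us) are hypothetical example values.
no_load=51
full_load=68
awk -v a="$no_load" -v b="$full_load" \
    'BEGIN { printf "difference=%d (%.0f%%)\n", b - a, (b - a) * 100 / a }'
```

With these example values the script prints ``difference=17 (33\%)''.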

\begin{figure}[!t]
	\centering
	\includegraphics[width=0.48\textwidth]{Figures/boxplot.png}
	\caption{Latency Boxplot}
	\label{fig:boxplot}
\end{figure}

\begin{table}[!t]
  \centering
  \begin{tabular}{l|r}
  		\textbf{Kernels} & \textbf{Latency Difference in us} \\
  		\hline
  		Real-time & 17 (33\%) \\
  		\hline
  		\ownkernel & 142 (16\%) \\
  		\hline
  		Non-real-time & 8055 (361\%)
  \end{tabular}
  \caption[Relation between Latencies of fully- and non-loaded
  Systems]{Relation between Latencies \par of fully- and non-loaded Systems}
  \label{tab:latencyRelation}
\end{table}

The most important attribute of the latency is its maximum, which defines the
worst-case response time of a command. If, for example, a car reacted and
therefore braked only after 100,000 us (0.1 s), this could cause a crash and
even cost human lives. To prevent this, a real-time system test should yield
latencies within an acceptable range, and especially an acceptable maximum.
Each application needs to define its own acceptable maximum; e.g., a
temperature measurement does in most cases not require as low a latency as the
control unit of a space shuttle.
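Checking a result file against such an application-specific maximum can be
sketched as a small filter; the deadline and the sample lines below are
hypothetical:

```shell
# Report whether the observed worst-case latency stays below an
# application-specific deadline (all values are hypothetical).
deadline_us=100000
printf '%s\n' 'Task Count Latency' '0 1 111' '1 1 234' |
awk -v d="$deadline_us" \
    'NR > 1 && $3 > max { max = $3 }
     END { print (max <= d ? "deadline met" : "deadline missed"), "max=" max }'
```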

To verify the results, another test with 30 minutes runtime was run. The
result of this test is shown in Figure \ref{fig:LatencyGraph}.
It indicates that the second expectation from Table
\ref{tab:latencyTest} is correct (``It is expected that real-time kernels
have a lower mean latency than non-real-time kernels.''). The second hardware
acted as a reference for the first. The corresponding R script which
generates this plot is shown in Listing \ref{lst:LatencyGraph}. Due to the
large sample size of this test, the graph shows higher amplitudes with each
task joining the test during runtime.

\begin{figure}[!t]
	\centering
	\includegraphics[width=0.48\textwidth]{Figures/Latency.png}
	\caption{Latency Graph}
	\label{fig:LatencyGraph}
\end{figure}

\begin{lstlisting}[label=lst:LatencyGraph, float=!t, language=R,
caption=R Script for the Latency Graph]
# Read files
# R1 -> H1, R2 -> H2
R1_RT <- read.table("H1RT25.csv", header=TRUE)
R1_No <- read.table("H1No25.csv", header=TRUE)
R2_RT <- read.table("H2RT25.csv", header=TRUE)
R2_No <- read.table("H2No25.csv", header=TRUE)
# Factor of testsamples
# (Reduce the samplesize by this factor)
testsampleFactor <- 1
# Define max values
maxLatency <- max(c(R1_RT$Latency, R2_RT$Latency, R1_No$Latency, R2_No$Latency))
maxTime <- max(c(length(R1_RT$Latency), length(R1_No$Latency)))
# Save our graph
png("Latency.png")
# Start a plot with x- and y-axis labels and limitations
plot(0, 0, type="n", xlim=c(0,maxTime), ylim=c(0,maxLatency), xlab="Cycles", ylab="Latency", xaxt="n", yaxt="n")
# Draw y- and x-axis
yAxisMarks <- c(0, round(0.5*maxLatency), round(maxLatency))
xAxisMarks <- c(0, round(0.25*maxTime), round(0.5*maxTime), round(0.75*maxTime), maxTime)
axis(2, at=yAxisMarks)
axis(1, at=xAxisMarks)
# Colors
colors <- c("red1", "lightblue1", "red3", "lightblue3")
meanColors <- c("lightblue4", "red4")
# Helper: keep every testsampleFactor-th sample of a vector
thin <- function(x) x[c(1:(length(x)/testsampleFactor)*testsampleFactor)]
# Draw the four latency lines (one per kernel and machine)
lines(thin(R1_No$Latency), col=colors[2])
lines(thin(R2_No$Latency), col=colors[4])
lines(thin(R1_RT$Latency), col=colors[1])
lines(thin(R2_RT$Latency), col=colors[3])
# Draw lines for orientation
abline(h=yAxisMarks, col="gray", lwd=0.5)
# Draw the two mean lines
abline(h=mean(c(R1_No$Latency, R2_No$Latency)),
       col=meanColors[1], lwd=3, lty="dotted")
abline(h=mean(c(R1_RT$Latency, R2_RT$Latency)),
       col=meanColors[2], lwd=3, lty="dotted")
# Legend
legend("topleft", inset=.05, title="Hardware 1",
       c("Normal", "Realtime"), fill=c(colors[2], colors[1]))
legend("top", inset=.05, title="Hardware 2",
       c("Normal", "Realtime"), fill=c(colors[4], colors[3]))
legend("left", inset=c(.05, 0), title="Means",
       c("Normal", "Realtime"), col=meanColors, lty="dotted", lwd=3)
# Draw a box around the plot
box()
# Switch the device off to get a safe instance end
dev.off()
\end{lstlisting}

The result of this test is that a real-time system, whether self-made or not,
has much better latency than a non-real-time system and is recommended for
areas of application where the maximum latency of a single task could result
in a hazard.

\subsubsection{Test for Task Distribution}
Another interesting aspect of these tests is the behavior of the scheduler.
Therefore, a test for task distribution was chosen. This test shows the
relation between the number of cycles per task for the testcases with and
without priority. Table \ref{tab:TaskDistribution} shows the test
attributes.

The result of this test is shown in Figure \ref{fig:25Tasks}; the
corresponding R script that produced this figure is shown in Listing
\ref{lst:TaskDistribution}.

As expected, the first task always gets the most cycles. However, it was not
expected that the results of the test run with priority are nearly equal to
those of the run without priority. Table \ref{tab:StatTaskDistribution}
shows the descriptive statistics for this test. The numbers show that the
expectation does not hold in this case. Both testcases are nearly equivalent,
so either the hardware was not forced to use priorities due to the chosen test
parameters, or something went wrong with the scheduling. At this point it has
to be mentioned that activating priorities selects the scheduling algorithm
\emph{SCHED\_FIFO}, which is not one of the optimized real-time schedulers
introduced in section \ref{sec:scheduler}.
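Descriptive statistics like those in Table \ref{tab:StatTaskDistribution} can
be derived directly from the cycles-per-task counts; the three counts below
are hypothetical:

```shell
# Mean and (population) standard deviation of cycles per task,
# computed over three hypothetical cycle counts.
printf '%s\n' 4 4 20 |
awk '{ s += $1; ss += $1 * $1; n++ }
     END { m = s / n; printf "mean=%.2f sd=%.2f\n", m, sqrt(ss / n - m * m) }'
```

With these values the output is ``mean=9.33 sd=7.54''.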

\begin{table}[!t]
  \centering
  \begin{tabularx}{\tabellenbreite\textwidth}{l||X}
  	\textbf{Attribute} & \textbf{Statement} \\
  	\hline
  	\hline
    \textbf{Tests used} & 	\begin{enumerate}
							  \item Real-time kernel test with priority (25 tasks running).
							  \item Real-time kernel test without priority (25 tasks running).
							\end{enumerate} \\
    \hline
    \textbf{Expectations} & \begin{itemize}
                              \item It is expected that the test with priority
                              has a smaller standard deviation than the test
                              without priority.
                            \end{itemize} \\
    \hline
    \textbf{Display Format} & \begin{itemize}
                                \item \emph{Histograms} due to advantages in
                                displaying the quantitative appearance of
                                single tests in this discrete distribution.
                                \end{itemize} \\
  \end{tabularx}
  \caption{Test for Task Distribution}
  \label{tab:TaskDistribution}
\end{table}

\begin{figure}[!t]
	\centering
	\includegraphics[width=0.48\textwidth]{Figures/histogram25.png}
	\caption{Task Distribution with 25 Tasks}
	\label{fig:25Tasks}
\end{figure}

\begin{table}[!t]
  \centering
  \begin{tabular}{l||c|c||r}
  		\textbf{Attribute} & \textbf{With Priority} & \textbf{Without Priority} &
  		\textbf{Difference} \\
  		\hline
  		\hline
  		Mean & 6.75 & 6.75 & 0\\
  		\hline
  		Median & 4 & 4 & 0\\
  		\hline
  		Std. dev. & 6.771521 & 6.771506 & 0.000015
  \end{tabular}
  \caption{Descriptive Statistics for Task Distribution}
  \label{tab:StatTaskDistribution}
\end{table}

\begin{lstlisting}[label=lst:TaskDistribution, float=ht, language=R,
caption=R Script for the Task Distribution]
# load required libs
require(plotrix)
# Read files
Norm25 <- read.table("norm25.csv", header=TRUE)
Norm25P <- read.table("norm25p.csv", header=TRUE)
# Define data and boundaries
maxCycles <- 200000
yAxisMarks <- c(0, round(0.5*maxCycles), round(maxCycles))
colors <- c("lightblue","blue")
# Save our graph
png("histogram25.png")
#postscript("histogram25.ps")
# Histogram (25 Tasks)
data1 <- hist(Norm25$Task, plot=FALSE, breaks=0:24)$counts
data2 <- hist(Norm25P$Task, plot=FALSE, breaks=0:24)$counts
barp(rbind(data1, data2), col=colors, xlab="Task", ylab="Cycles")
abline(h=yAxisMarks, col="gray", lwd=0.5)
# Print the legend
legend("topright", inset=.05, title="Cycles per Task",
       c("With Priority", "Without Priority"), fill=colors)
# Switch the device off to get a safe instance end
dev.off()
\end{lstlisting}

In order to verify the results, two more testcases were generated, as shown in
Table \ref{tab:cyclictest230}. The results are shown in Figures
\ref{fig:230Tasks} and \ref{fig:230Tasksmlock}.

\begin{table}[!t]
  \centering
  \begin{tabular}{l||c|c}
    \textbf{Testcase} & \textbf{1} & \textbf{2} \\
    \hline
    \hline
    \textbf{MLock} & activated & deactivated\\
    \hline
    \hline
    \textbf{Kernel} & \multicolumn{2} {c} {Ubuntu RT kernel} \\
    \hline
    \textbf{Tasks} & \multicolumn{2} {c} {230} \\
    \hline
    \textbf{Priority} & \multicolumn{2} {c} {-} \\
    \hline
    \textbf{Clock} & \multicolumn{2} {c} {real-time} \\
    \hline
    \textbf{Time} & \multicolumn{2} {c} {2 mins} \\
    \hline
  \end{tabular}
  \caption{Cyclic Test with 230 Tasks}
  \label{tab:cyclictest230}
\end{table}

\begin{figure}[!t]
	\centering
	\includegraphics[width=0.48\textwidth]{Figures/histogram230.png}
	\caption{230 Tasks with mlock}
	\label{fig:230Tasks}
\end{figure}

\begin{figure}[!t]
	\centering
	\includegraphics[width=0.48\textwidth]{Figures/histogram230mlock.png}
	\caption{230 Tasks without mlock}
	\label{fig:230Tasksmlock}
\end{figure}

Both testcases show nearly the same task distribution as the first testcase
in Figure \ref{fig:25Tasks}, with one fatal difference: switching mlock off
leads to the last 30 tasks not getting a single cycle during the runtime of
the test. These tasks are starved. This must not happen under any
circumstances in a productive system, because it may cause damage to objects
or even endanger human life. In addition, this behavior violates the
principles of real-time systems as described in section \ref{sec:realtime}.
Further tests need to be run to make qualified statements about this behavior
and to answer the question whether it is an issue at all.
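One candidate for such further tests is the memory-lock resource limit:
whether \texttt{mlockall()} can pin a process at all is bounded by
\texttt{RLIMIT\_MEMLOCK}, which can be inspected from the shell:

```shell
# Show the current memory-lock limit (RLIMIT_MEMLOCK) in KiB; if it
# is too small, mlockall() fails and pages may still be swapped out.
ulimit -l
```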