\section{Benchmark}\label{benchmark}

\subsection{Introduction}
The scope of our benchmark is limited to time as the only metric. We will
therefore not consider memory usage, CPU allocation or other metrics that might
be of interest in a full benchmark of a program. \\

Our analysis covers benchmarking of both data load and search execution
times of the search engine. We consider each of our data structures, focusing
on the medium data file with brief comments on the big data file. Further, we
consider the special case of boolean expression searching. \\

\cite{IBM1} discusses a simple design for benchmarking code-execution:

\begin{itemize}
\item Record the start time.
\item Execute the code.
\item Record the stop time.
\item Compute the time difference.
\end{itemize}

We follow this design, but add a statistical analysis of our results.
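The four steps above can be sketched in Java as follows; the class and method names are our own illustration, not taken from the cited articles:

```java
// A minimal sketch of the four-step timing design; the names Benchmark and
// timeMillis are illustrative, not from \cite{IBM1}.
public class Benchmark {
    static double timeMillis(Runnable task) {
        long start = System.nanoTime();      // 1) record the start time
        task.run();                          // 2) execute the code
        long stop = System.nanoTime();       // 3) record the stop time
        return (stop - start) / 1_000_000.0; // 4) compute the time difference
    }

    public static void main(String[] args) {
        double elapsed = timeMillis(() -> {
            long sum = 0;
            for (int i = 0; i < 1_000_000; i++) sum += i;
        });
        System.out.println(elapsed >= 0.0); // elapsed time is non-negative
    }
}
```

We use \texttt{System.nanoTime()} rather than \texttt{System.currentTimeMillis()} since the former is intended for measuring elapsed time.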

\subsubsection{Experimental Design}
Our mini benchmark is inspired by \cite{IBM1, IBM2, IBM3}, but we have chosen
\emph{not} to use the framework from those articles. Instead, we have chosen a
simpler design for the statistical analysis of our benchmark results, one that
still supports the comparisons we need.\\

We have utilized our MVC pattern to integrate the benchmarking
functionality as a view. This could be interpreted as a break with some of the
principles behind the design pattern: should benchmarking perhaps be
integrated into the model instead? Or, as in \cite{IBM3}, be implemented using
the \texttt{Runnable} interface?\\
 
The benchmark is command-line based and essentially runs random searches a
specified number of times, writing the results to a file in CSV format. The
(pseudo) random search words are generated from the data already loaded in our
model, i.e. the Searcher and its accompanying data structure. We limit the
loading of data to one time and to one place in our program. Further, we can
easily search for any number of random search words, any number of times,
limited essentially only by time.\\

The analysis of the data has taken place in spreadsheet software. We have
limited it to visual inspection of graphs generated from the data and
simple statistical analysis. We compare statistics for our different test
cases, including mean, standard deviation and confidence intervals. We assume
that the data follow the Gaussian (normal) distribution and are independently
and identically distributed. We will not return to or test this assumption in
this paper (\cite{IBM2} uses bootstrapping to generate a sampling
distribution). We will not analyse the small and big data files in detail,
since they add little extra information at great cost (given the scope of the
project).\\
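Concretely, with sample mean $\bar{x}$, sample standard deviation $s$ and sample size $N$, the confidence intervals we report follow the normal-approximation formula at the 95\% level:

```latex
\[
\bar{x} \pm z_{0.975} \cdot \frac{s}{\sqrt{N}},
\qquad z_{0.975} \approx 1.96
\]
```

This uses the standard normal quantile rather than the $t$-distribution, which is a simplification given our small samples.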

\begin{figure}
\begin{center} % centre the table
 \fontsize{9}{11}\selectfont
\begin{tabular}{|l|c|c|c|}
\hline
& \textbf{Mean} & \textbf{Standard Deviation} & \textbf{Confidence Interval}\\
\hline
& & &\\
\textbf{NestedLinkedList} & 98,619 & 2,146 & $\pm$1,330\\
& & &\\
\textbf{StaticHashTable (1000)} & 781 & 30 & $\pm$19\\
& & &\\
\textbf{DynamicHashTable (32768)} & 710 & 28 & $\pm$17\\
& & &\\
\textbf{LinkedList} & 235 & 78 & $\pm$49\\
& & &\\
\hline
\end{tabular}
\end{center}
\caption{Result for load benchmark (medium data file). Mean, standard
deviation and confidence interval (ms). N = 10.}\label{loadtable}
\end{figure}

\subsection{Data Load Benchmarks}
We have chosen a relatively simple research design for the analysis. For the
data load analysis, we have executed 10 trials for each data structure using
the data from the medium file. We are aware that this is a limited sample, but
given the scope of the project, we have opted not to run more trials. See
figure \ref{loadtable}. We have also tested the effect of adding boolean
search, but only for the dynamic hash table.

\subsubsection{Linked List}
Our simplest data structure, the linked list, simply reads everything from the
file and inserts it. Insertion is done at the end of the list and can be done
in $O(1)$ time. Inserting the $n$ words and $m$ URLs of the file should
therefore run in $O(n)$ time, if we assume that $m\in O(n)$. This should be the
fastest algorithm for loading data.
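A tail pointer is what makes appending constant time, since no traversal is needed; the class below is an illustrative sketch, not our actual implementation:

```java
// Illustrative linked list with O(1) tail insertion; not our actual class.
public class SimpleLinkedList {
    static class Node {
        String word, url;
        Node next;
        Node(String word, String url) { this.word = word; this.url = url; }
    }

    Node head, tail;
    int size;

    // Appending at the tail touches only the tail pointer: constant time.
    void insert(String word, String url) {
        Node node = new Node(word, url);
        if (tail == null) head = node;
        else tail.next = node;
        tail = node;
        size++;
    }
}
```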

\subsubsection{Nested Linked List}
Each \texttt{NestedLinkedList} has a \texttt{URLList}, which is a linked list of
URLs. To insert each of the $n$ words and $m$ URLs, we may need to compare with
$O(n)$ already inserted \texttt{NestedLinkedList} objects and $O(m)$ already
inserted \texttt{URLList} objects. The \texttt{put}-method will be called
once for each word (not URL). If $m$ is proportional to $n$, each insertion is
done in $O(n)$. Hence it should take $O(n^2)$ to insert everything. This makes
it very difficult to benchmark the \texttt{NestedLinkedList} insertion with a
larger sample and this is the main reason why we do not benchmark further with
the big data set.
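The quadratic behaviour comes from the linear scan inside each \texttt{put}. A simplified sketch follows, with an \texttt{ArrayList} standing in for our \texttt{URLList} and all names being illustrative:

```java
import java.util.ArrayList;
import java.util.List;

// Simplified sketch of the nested structure; an ArrayList stands in for our
// URLList, and the names are illustrative.
public class NestedListSketch {
    static class Entry {
        String word;
        List<String> urls = new ArrayList<>();
        Entry next;
        Entry(String word) { this.word = word; }
    }

    Entry head;

    // May scan all previously inserted words before inserting: O(n) per call,
    // hence O(n^2) for loading n words.
    void put(String word, String url) {
        for (Entry e = head; e != null; e = e.next) {
            if (e.word.equals(word)) {
                if (!e.urls.contains(url)) e.urls.add(url); // O(m) URL scan
                return;
            }
        }
        Entry entry = new Entry(word); // word not seen before: prepend it
        entry.urls.add(url);
        entry.next = head;
        head = entry;
    }
}
```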
 
\subsubsection{Static Hash Table} 
Our implementation of the static hash table has a fixed array of
lists of \texttt{NestedLinkedList} objects. For inserting into the hash table
we use the fact that the hash code of the String modulo the length of the
array yields an integer between 0 and the length of the array (exclusive). We
then only need to compare against the existing \texttt{NestedLinkedList}
objects at one array index instead of against all previously inserted words.
We have set the array length to 1000 for our benchmarks. In general, in
chained hashing, insertion can be done in $O(1)$ time under the assumption
that some of the insertions can be done directly (if the slot is empty),
\cite{CL} p.~258. However, this does not include checking whether the objects
are already there. In the worst case, insertion is as slow as for
\texttt{NestedLinkedList}, but on average it should be a factor $k$ faster,
where $k$ is the length of the hash table.\\
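The slot computation can be sketched as follows; note that in Java \texttt{String.hashCode()} may be negative, so the sign bit must be cleared before taking the modulus (the class name is illustrative):

```java
public class HashIndex {
    // Maps a word to a slot in [0, tableLength). Clearing the sign bit keeps
    // the result non-negative even for negative hash codes.
    static int indexFor(String word, int tableLength) {
        return (word.hashCode() & 0x7fffffff) % tableLength;
    }

    public static void main(String[] args) {
        int index = indexFor("benchmark", 1000);
        System.out.println(index >= 0 && index < 1000); // always true
    }
}
```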

\subsubsection{Dynamic Hash Table}
The dynamic hash table extends the static hash table with functionality to
enlarge the indexing array whenever the load factor of the array reaches 75\%.
The method for enlarging the array copies each element of the previous array
and adds it to the new array. We would expect to load data more slowly than in
the case of the static hash table due to this process of copying data from one
array to another, but this is not noticeable due to the increasingly larger
array size.\\

In our benchmarks, we have set the dynamic hash table starting size to 1. This
means that there will be a lot of overhead in terms of dynamically enlarging
the hash table in the beginning. We end with a size of 32768.\\


\subsubsection{Conclusion}
Our results in figure \ref{loadtable} and figure \ref{LoadMedium} are close to
what we expected. \texttt{LinkedList} is fastest with a mean of 235 ms. Next
come the two hash tables, which perform almost identically (though the
difference is statistically significant at the 95\% level). It is interesting
to note that the larger array of the dynamic hash table matters little (or is
somewhat outweighed by the cost of enlarging the array). Further, their
standard deviations are smaller than that of \texttt{LinkedList}. One reason
could be that the short running time of the data load for \texttt{LinkedList}
means that classes etc. are not yet properly loaded into memory. A follow-up
test revealed fluctuations around a mean of 40--50 ms after a longer running
session (close to where our trials 7--9 are). This shows that our sample size
might have affected our results here. The \texttt{NestedLinkedList} shows its
drawbacks here as a data structure for loading. Even though hash tables are
more difficult to implement, it makes sense to use them rather than nested
linked lists for data load.

\begin{figure}
\begin{center}
\includegraphics[scale=0.5]{../benchmark/Load_medium.pdf}
\caption{Load time (ms, logarithmic scale) for N = 10 loads.}\label{LoadMedium}
\end{center}
\end{figure}

\subsection{Search Execution Benchmarks}
For the search execution benchmark we do the following: 1) select a random
sample of search words from the already loaded data and save it to an array;
2) run a search for each word in the sample array and time the total
execution; 3) run step 2 a specified number of times; 4) export the data for
each run in step 3 to be analysed in Excel.
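The four steps can be sketched as follows; the names are illustrative, and a \texttt{Consumer<String>} stands in for our Searcher's search method:

```java
import java.util.Random;
import java.util.function.Consumer;

// Illustrative sketch of the four benchmark steps; Consumer<String> stands in
// for our Searcher's search method.
public class SearchBenchmark {
    // Step 1: draw a random sample of search words from the loaded data.
    static String[] sample(String[] loadedWords, int sampleSize, Random rng) {
        String[] words = new String[sampleSize];
        for (int i = 0; i < sampleSize; i++)
            words[i] = loadedWords[rng.nextInt(loadedWords.length)];
        return words;
    }

    // Step 2: time the total execution of one search per sampled word.
    static double timeSearches(String[] words, Consumer<String> search) {
        long start = System.nanoTime();
        for (String word : words) search.accept(word);
        return (System.nanoTime() - start) / 1_000_000.0;
    }

    public static void main(String[] args) {
        String[] loaded = {"alpha", "beta", "gamma"};
        String[] words = sample(loaded, 10, new Random(42));
        // Steps 3 and 4: repeat the timed run and emit one CSV line per run.
        for (int run = 0; run < 3; run++)
            System.out.println(run + ";" + timeSearches(words, w -> {}));
    }
}
```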

\subsubsection{Linked List}
Since all data is loaded directly into the data structure, we need to match
URLs and words in our \texttt{get}-method instead of relying on the data
structure to organize the information for us. We would therefore expect
searching in the linked list to be slower than in the other data structures,
though it still runs in $O(n)$. What slows the search down is that the list is
long, and we need to run through the complete list to find all occurrences of
a word.

\subsubsection{Nested Linked List}
Searching in the nested linked list should be faster than in the regular
linked list. The list to search in should now be shorter. We only have to
compare search words and then return the URLs and their number of occurrences 
when we find the search word in the list. In terms of big-O notation,
searching in a linked list and a nested linked list should be similar. Both
have search times of $O(n)$. However, the shorter list length of the
\texttt{NestedLinkedList} makes a big difference in the end.


\begin{figure}
\begin{center}
\includegraphics[scale=0.5]{../benchmark/Array_size.pdf}
\caption{Search time (ms) for 10,000 searches, 100 times per array size (1 to
30)}\label{ArraySize}
\end{center}
\end{figure}

\subsubsection{Static Hash Table}
The static hash table should increase the speed of the nested linked list
search by a factor of the length of the indexing array, exactly as for
insertion. Figure \ref{ArraySize} shows this relationship. The gains diminish
asymptotically for each increase in array size, moving towards the limit for
searching (e.g. hardware limits). We varied the array size from 1 to 10 and
then in increments of 5 up to a length of 30. Each trial searched for 10,000
words, with 100 trials averaged for each array size. We used the medium data
file for this analysis.

\subsubsection{Dynamic Hash Table}
Since the array has a size ensuring at least 25\% free capacity (in
terms of search words), and therefore relatively few \texttt{NestedLinkedList}
objects at the same array index, we would expect a speed increase over
the static hash table by a factor of the ratio of the two array lengths.
However, with very large array sizes we ran into another problem: such data
structures may no longer fit in the CPU cache and must be fetched from RAM,
with a decrease in speed as a result. We noticed this when comparing the
static hash table with our dynamic version on the big data file (array size
128,000; results not included).\\

We also ran the benchmark with the boolean search option on. Each query joins
two consecutive sample words, $word[n]$ and $word[n+1]$ for $n <$ sample size,
with \textbf{AND} or \textbf{OR} chosen with equal (50\%) probability. In this
way we tested both \textbf{AND}- and \textbf{OR}-nodes, though only with a
tree of height 1.
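The query generation can be sketched as follows (class and variable names are illustrative):

```java
import java.util.Random;

// Sketch of the height-1 boolean query generation described above: each pair
// of consecutive sample words is joined by AND or OR with 50% probability.
public class QueryGenerator {
    static String[] queries(String[] words, Random rng) {
        String[] result = new String[words.length - 1];
        for (int n = 0; n < words.length - 1; n++) {
            String operator = rng.nextBoolean() ? " AND " : " OR ";
            result[n] = words[n] + operator + words[n + 1];
        }
        return result;
    }
}
```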

\subsubsection{Conclusion}
Figures \ref{Search_medium} and \ref{searchtable} (and figure
\ref{ArraySize}) show the results of our search execution benchmark.
It is clear that our hypotheses hold. \texttt{LinkedList} is indeed
the slowest, by far. \texttt{NestedLinkedList} is much faster, but still
nowhere close to our hash tables. As can be seen in our results, their search
time starts out higher for roughly the first two searches before settling
around a mean time. This could be due to the loading of classes into memory
(\cite{IBM2}), and we have chosen to exclude these from our results. Adding
boolean searching, we see a dramatic rise in mean search time. Our
implementation of the concrete syntax tree could probably be optimized
considerably, bringing search time down to a level closer to twice that of
\texttt{DynamicHashTable} without boolean searching. Again following
\cite{IBM2}, we attribute some of the variation, and the one outlier in the
\texttt{NestedLinkedList}, to either the JIT compiler optimizing part of our
code, unexpected events on the benchmarking computer outside our control, or
random variation due to the specific sample of search words searched for.


\begin{figure}
\begin{center}
\includegraphics[scale=0.5]{../benchmark/Search_medium.pdf}
\caption{Search time (ms, logarithmic scale) for 10,000 searches, 100 times per
data structure}\label{Search_medium}
\end{center}
\end{figure}


\begin{figure}
\begin{center} % centre the table
 \fontsize{9}{11}\selectfont
\begin{tabular}{|l|c|c|c|}
\hline
& \textbf{Mean} & \textbf{Standard Deviation} & \textbf{Confidence Interval}\\
\hline
& & &\\
\textbf{LinkedList} & 126,996 & 984 & $\pm$193\\
& & &\\
\textbf{NestedLinkedList} & 2,780 & 60 & $\pm$12\\
& & &\\
\textbf{DynamicHashTable (32768, Boolean)} & 353 & 7 & $\pm$1\\
& & &\\
\textbf{StaticHashTable (1000)} & 63 & 4 & $\pm$1\\
& & &\\
\textbf{DynamicHashTable (32768)} & 53 & 3 & $\pm$1\\
& & &\\
\hline
\end{tabular}
\end{center}
\caption{Result for search benchmark (medium data file). Mean, standard
deviation and confidence interval (ms). N = 100 (98 for the hash tables) (of
10,000 searches each).}\label{searchtable}
\end{figure}

\subsection{Validity and Generalisability}
When testing loading of data, we have not put much effort into ensuring that
no other variables interfere with our experiments. However, we believe that,
even with the sample sizes used, our results can be generalized to some degree
(our results were consistent with tests run on machines other than the
benchmarking setup used in this section).\\

Regarding search execution, it is worth noting that our benchmarking has used
samples drawn only from the population of search words and URLs provided in
the data file being searched. This means that our results may not reflect
everyday use, where some searches may return no results. Searching for a word
not in our data structure would take longer, since we would have to compare
keys with all elements in the data structure; more so for \texttt{LinkedList}
and \texttt{NestedLinkedList} than for the hash tables. However, retrieving
results would be somewhat faster. Because of this we would expect our results
to be biased downwards.
