\subsection{Hash Table Searchers}
In the fourth assignment of the project, we are asked to modify our search engine so that it uses 
a data structure other than a linked list to store and search for data; namely a hash table. 
We have chosen to implement both what we call a static and a dynamic hash table, in classes \texttt{StaticHashTable} 
and \texttt{DynamicHashTable} respectively, and corresponding searchers \texttt{DynamicHashTableSearcher} 
and \texttt{StaticHashTableSearcher} extending superclass \texttt{GenericHashTableSearcher}. 
These concepts and classes will be described in more detail in the following sections.

\subsubsection{Hash Table Data Structure}
 According to \cite{CL}, chapter 11, a hash table is essentially an array of length $m$ in which we 
 can store objects and retrieve them later. We deal with pairs of a value to store and an 
 associated key. To find out where to store a value in the array, we map the key to an index of the 
 array with a hash function $h: K \to \{0\ldots m-1\}$, where $K$ is the set of keys. 
 The hash function computes a hash value $h(k)$ for each key $k\in K$, and the value is then stored at 
 the index of the array equal to the computed hash value. The hash value can be computed in different 
 ways, but it is important that the hash function maps a given key to 
 the same hash value every time it is computed, and that it distributes the keys as uniformly and 
 randomly as possible over the indices.\\
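The two requirements above can be illustrated with a small Java sketch (the class and method names here are invented for the example and are not part of our implementation): a hash function must return the same index for the same key on every call, and the index must lie in $\{0\ldots m-1\}$.

```java
// Minimal illustration of a hash function h: K -> {0..m-1}.
// Class and method names are invented for this example.
public class HashFunctionDemo {
    // Maps a key to an index in {0..m-1}; deterministic for a fixed key.
    static int h(String key, int m) {
        // Math.floorMod always yields a non-negative result,
        // even when hashCode() is negative.
        return Math.floorMod(key.hashCode(), m);
    }

    public static void main(String[] args) {
        int m = 16;
        int i1 = h("banana", m);
        int i2 = h("banana", m);
        System.out.println(i1 == i2);           // same key, same index
        System.out.println(i1 >= 0 && i1 < m);  // index is in range
    }
}
```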
 
 A problem is that a hash function may map different keys 
 to the same hash value, and thereby place different objects (values) at the same index. We can solve 
 this collision problem by putting all elements that map to a given index into a single 
 linked list object, which we store at that index. By \cite{CL}, p. 257 this is called 
 resolution by chaining, since we create a chain (linked list) to store the values in.
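A minimal sketch of chaining in Java could look as follows; the class is invented for illustration and simply stores raw strings, unlike our actual implementation, which stores \texttt{NestedLinkedList}s.

```java
import java.util.LinkedList;

// Sketch of collision resolution by chaining: each array slot holds a
// linked list of all values whose keys hash to that index.
// Names here are invented for illustration only.
public class ChainingDemo {
    final LinkedList<String>[] slots;

    @SuppressWarnings("unchecked")
    ChainingDemo(int m) {
        slots = new LinkedList[m];
    }

    void put(String key, String value) {
        int i = Math.floorMod(key.hashCode(), slots.length);
        if (slots[i] == null) slots[i] = new LinkedList<>();
        slots[i].add(value);        // colliding values share one chain
    }

    public static void main(String[] args) {
        ChainingDemo t = new ChainingDemo(1); // m = 1 forces collisions
        t.put("a", "first");
        t.put("b", "second");
        System.out.println(t.slots[0].size()); // both values end up in one chain
    }
}
```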
   
\subsubsection{Our Implementation}\label{hash_imp}
In our search engine we want to store the URLs on which a given word occurs together with the number of occurrences 
of the word on each URL. We use the search words as keys. Fortunately, we have already made a data structure 
for storing the URL information for each word; a nested linked list. We use chained hashing and 
store \texttt{NestedLinkedList}s based on the hash codes of the search words. The hash code is an
 integer representation of a key, to be used in the hash function. We use the \texttt{String.hashCode()} method
 provided by the Java library. We have not investigated how well this hash code distributes the input, 
 but we assume that it is somewhat uniform. We are aware that this assumption can affect how well our hash table-based searchers perform.
We use the hash function $h: \Z \to \{0\ldots m-1\}$ given by $h(k) = | k \mod m |$, 
where $k$ is the hash code of a key. These hash values can be computed in constant time.
Our hash table data structure is depicted in figure \ref{hashtable}.\\
\begin{figure}
\centering
\includegraphics[scale=0.7]{../hash/hashtable1.pdf}
\caption{Our hash table data structure containing \texttt{NestedLinkedList}s \label{hashtable}}
\end{figure}
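The hash function $h(k) = | k \mod m |$ translates directly to Java; the sketch below (with illustrative names, not our actual code) shows it applied to \texttt{String.hashCode()}. One subtlety worth noting is that the order of operations matters: taking the absolute value \emph{after} the remainder is safe, whereas \texttt{Math.abs(k) \% m} can fail, because \texttt{Math.abs(Integer.MIN\_VALUE)} overflows and stays negative.

```java
// h(k) = |k mod m| from the text, with k = key.hashCode().
// Taking the absolute value AFTER the remainder is safe, since
// k % m always lies strictly between -m and m. The other order,
// Math.abs(k) % m, is unsafe: Math.abs(Integer.MIN_VALUE)
// overflows and remains negative. Names here are illustrative.
public class HashValueDemo {
    static int hashValue(String key, int m) {
        int k = key.hashCode();          // integer hash code of the key
        return Math.abs(k % m);          // hash value in {0..m-1}
    }

    public static void main(String[] args) {
        int m = 11;
        System.out.println(hashValue("search", m) >= 0); // true
        System.out.println(hashValue("search", m) < m);  // true
    }
}
```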

The hash table data structure classes have two important methods: \texttt{put(String key, String value)} for inserting 
new data in the hash table and \texttt{get(String key)} for retrieving data from it. To insert a URL for a given key (search word) in
some nested linked list in the hash table, the \texttt{put} method first computes the hash value of the key.
If the slot in the array corresponding to the hash value is empty, it creates a new \texttt{NestedLinkedList} object and adds it to the slot.
If the slot is not empty, it checks whether the word (key) is already in the \texttt{NestedLinkedList}. If it is, it checks whether
the URL to insert is in the \texttt{URLList} of the \texttt{NestedLinkedList}. If so, the number of occurrences on that URL
is incremented; if not, the URL is added to the \texttt{URLList}. If the search word is not in the \texttt{NestedLinkedList},
it is added at the beginning of the existing list. When we search for a word, we want to get the \texttt{URLList} with
the URLs on which the word occurs. The \texttt{get} method computes the hash code and then the hash value of the key. Now it can look at the right
index and simply search through the nested linked list until the word is found.
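The branching in the \texttt{put} method can be sketched as follows. We do not reproduce \texttt{NestedLinkedList} here; instead each slot is modelled as a map from word to (URL, occurrence count) pairs, which follows the same four cases: empty slot, known word with known URL (increment), known word with new URL, and new word. All names in the sketch are illustrative.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the put logic described in the text, with each slot
// modelled as word -> (URL -> occurrence count). Illustrative only;
// our actual implementation stores NestedLinkedLists in the slots.
public class PutSketch {
    final Map<String, Map<String, Integer>>[] slots;

    @SuppressWarnings("unchecked")
    PutSketch(int m) { slots = new HashMap[m]; }

    void put(String word, String url) {
        int i = Math.abs(word.hashCode() % slots.length);
        if (slots[i] == null)                      // empty slot:
            slots[i] = new HashMap<>();            //   start a new chain
        Map<String, Integer> urls =
            slots[i].computeIfAbsent(word, w -> new HashMap<>()); // new word
        urls.merge(url, 1, Integer::sum);          // new URL or increment count
    }

    int count(String word, String url) {
        int i = Math.abs(word.hashCode() % slots.length);
        if (slots[i] == null || !slots[i].containsKey(word)) return 0;
        return slots[i].get(word).getOrDefault(url, 0);
    }

    public static void main(String[] args) {
        PutSketch t = new PutSketch(8);
        t.put("fish", "http://a.dk");
        t.put("fish", "http://a.dk");   // same word, same URL: increment
        t.put("fish", "http://b.dk");   // same word, new URL
        System.out.println(t.count("fish", "http://a.dk")); // 2
        System.out.println(t.count("fish", "http://b.dk")); // 1
    }
}
```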

\subsubsection{Load Factor and Dynamic Hash Table}
The load factor $\alpha$ of a hash table is defined as $\alpha = \frac{n}{m}$, where $n$ is the total number of objects stored in the array and $m$ is the length of the array. 
Then $\alpha$ is the number of objects that would be at each index in the 
array if the elements were distributed uniformly. Hence we can see it as a measure of how full the 
hash table is. A load factor of $1$ means that there is on average one object
per index, which means that look-ups can be made very fast. Therefore an optimal load factor will be at most $1$.
 On the other hand, a very small load factor means that we use unnecessary space. A load
 factor of at most $0.75$ is commonly considered a good compromise between space
 and time consumption. Thus a hash table that maintains a load factor below
 $0.75$ should perform better than our static hash table, whose load factor grows with the input. Therefore our dynamic hash
 table differs from our static hash table in that its size (the length of the array) is not static but dynamic: it grows when needed. If the load factor
 exceeds $0.75$ after a word is added, the length of the array is doubled and the hash values
 are recomputed. Apart from this, our dynamic hash table works like our static hash table.\\
 
 In the class \texttt{DynamicHashTable} the method \texttt{embiggen(float multiple)} is called with 
\texttt{multiple} equal to \texttt{2f} to double the array if the load factor
exceeds $0.75$. The method creates a new array of double the size of the
original, and then all the objects are put into the new array by calling \texttt{put(String key, String value)}. This
is inefficient; we would only need to re-insert the search words into the new array and
bring their \texttt{URLList}s along, instead of also re-inserting each URL individually.
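The doubling step can be sketched as below. The sketch stores plain strings rather than \texttt{NestedLinkedList}s, and its names are invented for the example; it is meant only to show the mechanism of checking the load factor after each insertion and re-inserting every element so its hash value is recomputed modulo the new array length.

```java
import java.util.LinkedList;

// Sketch of dynamic resizing: when the load factor n/m exceeds 0.75,
// allocate an array of twice the size and re-insert every element so
// that its hash value is recomputed modulo the new length.
// Names and the use of plain String entries are illustrative.
public class ResizeSketch {
    LinkedList<String>[] slots;
    int n;                      // number of stored elements

    @SuppressWarnings("unchecked")
    ResizeSketch(int m) { slots = new LinkedList[m]; }

    void put(String key) {
        int i = Math.abs(key.hashCode() % slots.length);
        if (slots[i] == null) slots[i] = new LinkedList<>();
        slots[i].add(key);
        n++;
        if ((float) n / slots.length > 0.75f) embiggen(2f);
    }

    @SuppressWarnings("unchecked")
    void embiggen(float multiple) {
        LinkedList<String>[] old = slots;
        slots = new LinkedList[(int) (old.length * multiple)];
        n = 0;
        for (LinkedList<String> chain : old)       // re-insert each element;
            if (chain != null)
                for (String key : chain) put(key); // hash value is recomputed
    }

    public static void main(String[] args) {
        ResizeSketch t = new ResizeSketch(4);
        t.put("a"); t.put("b"); t.put("c");
        t.put("d");                         // load factor reaches 1.0 > 0.75
        System.out.println(t.slots.length); // 8: the array has been doubled
        System.out.println(t.n);            // 4: all elements re-inserted
    }
}
```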

\subsubsection{A Design Problem}
With the hash table based searchers we encounter a problem in differentiating consistently between 
search logic and data structure.
Due to the nature of the hash table data structure, where the \texttt{put} and \texttt{get} methods are incorporated in 
the data structure itself, it is not possible to extract and externalize the 
search logic in the same way as is done for the searchers relying on lists. In this case, the
hash table class is the information expert (introduced in section \ref{arc_resp}) that holds the
 information required in order to carry out a search.
 Therefore, we pass the method call representing a search, the method \texttt{get(String key)}, 
 on to the hash table classes themselves. The only other thing the searcher classes do is load the
 file.
 
 \subsubsection{Time Complexity}
   Searching in a chained hash table can clearly be faster than searching in a linked list, 
  since we only need to search the linked list at a single index. However, in the worst case all elements
  are stored at the same index of the array. Then searching for a word is no faster than searching
   in a plain linked list, as this is exactly what needs to be done; the searching time is $O(n)$. 
   Nonetheless, Theorems 11.1 and 11.2 of \cite{CL} tell us that the average-case searching time is 
   $O(1+\alpha)$, whether or not the search is successful, if we assume that an object is equally 
   likely to hash into any of the slots, independently of where the other objects hashed to.
   If $m$ is proportional to $n$, that is $n \in O(m)$, then $\alpha = n/m = O(m)/m = O(1)$, which gives an
    average-case searching time of $O(1)$.