% Chapter Template

\chapter{Adaptive Huffman Implementation} % Main chapter title

\label{Chapter4} % For referencing this chapter elsewhere, use \ref{Chapter4}

\lhead{Chapter 4. \emph{Adaptive Huffman Implementation}} % Header on each page

%----------------------------------------------------------------------------------------
%	SECTION 1
%----------------------------------------------------------------------------------------

The SKA data consists of floating-point values only, and thus the AHC algorithm we designed is specialised for floating-point data.

\section{Tree Structure}
The tree structure for Adaptive Huffman Coding requires the ability to find the most significant node, that is, the leaf node closest to the root node with a specified weight. The fastest way to do this is with a priority queue, kept in memory, that contains pointers to all the leaf nodes. This, however, means that every tree node needs an additional variable: its depth. Unlike Dynamic Huffman Coding, the codes cannot be generated once the tree is complete, since the tree is continually changing. Each node's code is therefore generated when the node is created and updated whenever nodes move, all to save processing time.
\\
\\
The basic node structure for the tree is shown in Figure~\ref{sudoCode:AHCNode}.
The value field can be null, to indicate that the node is a \textit{joiner} node. The weight is required to construct the final tree, the depth and the code are required for AHC tree construction, and the usual links connect a node to its parent and to its left and right children.

\begin{figure}[h!]
\centering
\begin{lstlisting}[frame=single]
Class Node
BEGIN
	// The constructor adds the new bit to the inherited code
	Node(float data, Node parent, int weight, int depth, bool[] code, bool newBit)
	BEGIN
		// Initialise all variables

		IF (newBit is given)
			code.add(newBit);
	END Constructor

	float data; // can be null for joiner nodes

	int weight;
	int depth;

	bool[] code; // keep the code for all nodes

	Node parent;
	Node left;
	Node right;
END
\end{lstlisting}
\caption[Pseudo code for the Adaptive Huffman Coding Node class]{Pseudo code for the Adaptive Huffman Coding Node class used to construct the Huffman tree. The most important things to note are the new depth and code variables, and that the constructor must add a new bit to the code.}
\label{sudoCode:AHCNode}
\end{figure}
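As a concrete illustration, the node structure above could be sketched in Python as follows. The field names mirror the pseudo code; this is an illustrative sketch only, not the thesis implementation.

```python
class Node:
    """A node in the adaptive Huffman tree.

    data is None for internal 'joiner' nodes. Each node stores its
    weight, depth, and current code, so codes need not be rebuilt
    from scratch after every tree change.
    """
    def __init__(self, data, parent, weight, depth, code, new_bit=None):
        self.data = data          # None marks a joiner node
        self.parent = parent
        self.weight = weight
        self.depth = depth
        self.code = list(code)    # copy the parent's code path
        if new_bit is not None:   # the constructor extends the code by one bit
            self.code.append(new_bit)
        self.left = None
        self.right = None

# A root and one right child: the child's code is the root's code plus one bit.
root = Node(None, None, 0, 0, [])
leaf = Node(3.14, root, 1, 1, root.code, True)
print(leaf.code)  # [True]
```

Copying the parent's code in the constructor is what allows codes to be maintained incrementally as nodes are created and moved.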

\section{Encoding Procedure}
The encoding procedure is executed when a new value arrives. This process requires finding the nullPTR node; the search can be accelerated by keeping a pointer to the nullPTR node's parent at all times in the AHC class. Another step that must happen every time a value is encoded is locating the node that contains the same value, if one exists. We improved the speed of this step by keeping every unique value, together with a pointer to its leaf node, in a hash map. The hash map can then be used to determine whether the value has been encoded before and, if so, to find its leaf node quickly.
\\
\\
The final approach used to speed up the AHC algorithm concerns locating the \textit{Most Significant Node}. Another hash map, mapping node weights to priority queues, is used. When a new value is encoded, its node is placed in the correct priority queue; if that priority queue does not exist, it is created. Whenever a node's weight changes, the node is moved to the corresponding priority queue. These priority queues sort their nodes by depth, with the smallest depth at the top. When the highest node with a specific weight is needed, it can therefore be found by looking at the top of the priority queue that the hash map associates with that weight.
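The weight-to-priority-queue mapping can be sketched in Python with a dictionary of binary heaps. The function names here are illustrative, and a real implementation would also need to remove (or lazily invalidate) a node's old entry when its weight changes.

```python
import heapq
from collections import defaultdict

# weight -> min-heap of (depth, node id); the smallest depth sits on top
queues = defaultdict(list)

def add_node(weight, depth, node_id):
    """Place a node in the priority queue for its weight."""
    heapq.heappush(queues[weight], (depth, node_id))

def most_significant(weight):
    """Return the shallowest node with the given weight, or None."""
    q = queues.get(weight)
    return q[0][1] if q else None

add_node(2, 5, "a")
add_node(2, 3, "b")
add_node(1, 1, "c")
print(most_significant(2))  # "b": depth 3 beats depth 5
```

Because the heap orders by depth, the most significant node for any weight is an O(1) peek once the hash map has located the right queue.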
\\
\\
The encoding procedure starts by determining whether the tree has been started. If not, the root node and the first right child are created, leaving the left child as the nullPTR node, and the data is placed into the required hash maps for fast future lookups. If the tree is already constructed, we use the hash map to determine whether the symbol to be added already has a node in the tree. If it does, the code is copied, that node is updated, and the associated binary sequence is returned. If the symbol does not yet have a node associated with it, one is constructed where the nullPTR is, just as the root was created: a new joiner node is inserted with the new leaf node as its right child and the nullPTR as its left child. The required values are saved in their respective hash maps and the tree is updated, as shown in Figure~\ref{sudoCode:AHCEncoding}.

\begin{figure}[h!]
\centering
\begin{lstlisting}[frame=single]
bool[] encodeValue(float symbol){
	IF (The tree has not yet been constructed)
		Construct the tree using the global root pointer
		
		// add to the leaf node hash map
		uniqueHashMap[symbol] = rootNode.right
		listOfNewValues.add(symbol) // this is how we keep the order of the unique values
		
		nullPTRParent = Parent of new nullPTR
		
		checkNodeForDepthHashMap(rootNode.right)
		
		updateNode(rootNode)
		
		RETURN symbol code
	END IF
	
	IF (the symbol has been encoded before)
		leafNode <- Get the leaf node associated with the symbol from the hash map

		bool[] code = leafNode.code // copied since the update process could change it
		
		updateNode(leafNode)
		
		RETURN code
	ELSE
		nullPTRParent.left <- new joiner node
		nullPTRParent.left.right <- new leaf node with the new symbol
		
		// copy the code so that the update process does not change it
		bool[] code = nullPTRParent.left.right.code;
		
		// add to the leaf node hash map
		uniqueHashMap[symbol] = nullPTRParent.left.right
		// add to the order list
		listOfNewValues.add(symbol)
		
		nullPTRParent <- The new parent of the nullPTR
		
		checkNodeForDepthHashMap(nullPTRParent.right)
		
		updateNode(nullPTRParent)
		
		RETURN code	
	END IF							
}
\end{lstlisting}

\caption[Pseudo code for the Adaptive Huffman Coding algorithm's encoding procedure]{Pseudo code showing the Adaptive Huffman Coding algorithm's encoding procedure. The algorithm takes in a symbol, adds it to the tree, updating and swapping nodes if needed, and outputs the associated binary sequence for that symbol.}
\label{sudoCode:AHCEncoding}
\end{figure}
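The control flow of the encoding procedure, in particular the fast hash-map path for repeat symbols versus the nullPTR expansion for new symbols, can be sketched in Python. This sketch deliberately omits the tree and the node-swapping logic, so the codes it produces are placeholders, not real Huffman codes; the class and field names are illustrative.

```python
class EncoderSketch:
    """Control-flow sketch of encodeValue with a plain dict
    standing in for the tree's leaf-node pointers."""
    def __init__(self):
        self.unique_hash_map = {}      # symbol -> code (stands in for a leaf pointer)
        self.list_of_new_values = []   # order in which unique symbols arrived

    def encode_value(self, symbol):
        if symbol in self.unique_hash_map:
            # fast path: symbol seen before, copy its current code
            return list(self.unique_hash_map[symbol])
        # slow path: a new symbol grows the tree at the nullPTR position;
        # here we just assign a placeholder code per arrival order
        code = [False] * len(self.list_of_new_values) + [True]
        self.unique_hash_map[symbol] = code
        self.list_of_new_values.append(symbol)
        return list(code)

enc = EncoderSketch()
a1 = enc.encode_value(3.5)   # new symbol
b1 = enc.encode_value(7.25)  # new symbol
a2 = enc.encode_value(3.5)   # repeat symbol hits the hash map
print(a1 == a2)  # True
```

The point of the sketch is the lookup structure: repeat symbols never touch the nullPTR path, which is what makes the hash map worthwhile.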

\section{Update Procedure}
The update process follows the same steps as the standard AHC algorithm, except that we use the new methods to find the most significant node. The updated method is shown in Figure~\ref{sudocode:updateAHC}.

\begin{figure}[h!]
\centering
\begin{lstlisting}[frame=single]
updateNode(Node node)
BEGIN
	IF (node is not the root node)
		mostSig <- getMostSignificantNode(node.weight); // find the highest node
		
		IF (mostSig not the node AND mostSig not the node's parent AND mostSig not NULL)
			swapNodes(node, mostSig); // this swaps positions, codes, weights etc.
		END IF
	END IF
	
	node.weight++;
	
	IF (node is not the root node)
		checkNodeForDepthHashMap(node); // check for a change in the priority queue
		updateNode(node.parent); // update the parent
	END IF
END
\end{lstlisting}
\caption[Pseudo code for the Adaptive Huffman Coding algorithm's updateNode method]{Pseudo code for the Adaptive Huffman Coding algorithm's update node procedure, updated to use the Find Most Significant Node method. This increases each node's weight and performs any node swaps needed to keep the symbols with the most occurrences closest to the root node.}
\label{sudocode:updateAHC}
\end{figure}
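The recursive shape of the update step can be sketched in Python with the swap and priority-queue operations stubbed out as callbacks. The names are illustrative; only the recursion and the weight increments are exercised here.

```python
class N:
    """Stub node with just the fields the update step touches."""
    def __init__(self, parent=None):
        self.parent = parent
        self.weight = 0

def update_node(node, most_significant, swap_nodes, reindex):
    # Only non-root nodes are candidates for swapping upward.
    if node.parent is not None:
        most_sig = most_significant(node.weight)
        if most_sig is not None and most_sig is not node and most_sig is not node.parent:
            swap_nodes(node, most_sig)   # exchanges positions, codes, depths
    node.weight += 1
    if node.parent is not None:
        reindex(node)                    # move node to its new weight's queue
        update_node(node.parent, most_significant, swap_nodes, reindex)

# A two-node chain: updating the leaf also increments the root's weight.
root = N()
leaf = N(parent=root)
moved = []
update_node(leaf, lambda w: None, lambda a, b: None, moved.append)
print(leaf.weight, root.weight)  # 1 1
```

Because every update walks to the root, each encoded value touches a chain of ancestors, which is the property that later makes parallelisation problematic.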

\section{Feasibility of AHC}
Since every encoded value can potentially swap any of the nodes within the tree, and node weights are updated during every single encoding procedure, parallelising any part of the encoding procedure causes synchronisation problems. If the update procedure is parallelised, the encoding tree could be constructed differently from the decoding tree, and the data would not decode correctly. Another issue arises when one thread is updating the tree, busy increasing the weights of all the parent nodes, while another thread swaps those parent nodes after the first update has passed them; the remaining parent nodes that are updated will then be incorrect, as they have moved.
\\
\\
Since a parallel version of the code is not viable and Adaptive Huffman is slow, averaging ten times slower than Dynamic Huffman, blocking AHC (splitting the data into blocks and running a separate instance of the algorithm on each block at the same time) in order to parallelise it is not worthwhile. It would be easier and faster to block the Dynamic Huffman algorithm.