% Chapter Template

\chapter{Design} % Main chapter title

\label{Chapter3} % Change X to a consecutive number; for referencing this chapter elsewhere, use \ref{ChapterX}

\lhead{Chapter 3. \emph{Design}} % Change X to a consecutive number; this is for the header on each page - perhaps a shortened title

%----------------------------------------------------------------------------------------
%	SECTION 1
%----------------------------------------------------------------------------------------

This chapter investigates how to modify Huffman Coding to suit the characteristics and speed requirements of the SKA radio data.

\section{Overview}
Standard Huffman Coding averages around 2\,MB/s compression speed on an Intel Core i7 950 @ 3.07GHz. In theory this speed should be multiplied by the number of cores used in a parallel approach that scales perfectly. Unfortunately, in most cases this does not hold, since calling on parallel threads carries many overheads, and many sections of code cannot be run in parallel.
\\
Studies show that Huffman Coding does not perform well on a GPU (Graphics Processing Unit) due to the small cache size per SM/SMX, the reliance on tree structures, and the dynamic programming required in many stages of the Huffman process\cite{672291}. Thus a CPU parallel approach will most likely achieve better compression speeds than a GPU approach, mainly due to memory access speeds and host-to-device memory transfer speeds.
\\
Since the standard Huffman Coding algorithm only achieves 2\,MB/s, and in the best case we could only expect that speed to be multiplied by the number of cores a CPU contains, simply parallelising the standard algorithm will not suffice. We therefore need to adapt the standard Huffman Coding algorithm to the SKA processing requirements.
\\
\\
As stated previously, the SKA data tends to span a narrow frequency range with very few spikes, and thus possesses few unique values. This works to Huffman Coding's advantage, as each code can be made shorter and the table smaller, so Huffman Coding will most likely attain a good compression ratio on the SKA data. The issue lies with the speed of the algorithm, and whether it is worth using Huffman Coding over standard tools such as GZIP, BZIP, BZIP2 and WinRAR/WinZIP, the last two being the tools most commonly used by the public. Further comparisons are also required against the other research approaches the SKA has underway, namely a parallel RLE scheme and a predictive scheme.

\section{Algorithms}
In order to test the Huffman Coding scheme fully for streaming tasks, both Adaptive and Dynamic approaches need to be tested.

\subsection{Huffman Coding}
Huffman Coding (also known as Dynamic Huffman Coding) uses a tree structure in order to quickly generate binary sequences for input symbols. The symbols that occur the most frequently are assigned the shortest binary sequence. This method requires the ability to read all values in the file in advance in order to generate the required tree structure.
\\
\\
Huffman Coding generates this tree by first binning the file (finding all unique values and counting how many times each occurs in the file) and then greedily merging the two lowest-count nodes until a single root remains. Once the tree is constructed, all leaf nodes correspond to unique values, and the leaf node closest to the root corresponds to the unique value with the most occurrences. All inner (non-leaf) nodes exist only to determine the structure of the tree.
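As a concrete illustration, this tree construction can be sketched in Python with a priority queue; all names here are hypothetical and the sketch ignores I/O:
\begin{lstlisting}[frame=single]
import heapq
import itertools
from collections import Counter

def build_huffman_tree(data):
    # Bin the input: find every unique value and its count.
    counts = Counter(data)
    # Heap entries are (weight, tie_breaker, node); a leaf node is the
    # 1-tuple (symbol,), an inner node the 2-tuple (left, right).
    order = itertools.count()
    heap = [(w, next(order), (sym,)) for sym, w in counts.items()]
    heapq.heapify(heap)
    # Greedily merge the two lowest-weight nodes until one root remains.
    while len(heap) > 1:
        w1, _, left = heapq.heappop(heap)
        w2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (w1 + w2, next(order), (left, right)))
    return heap[0][2]

def codes_from_tree(node, prefix=""):
    # The path from the root to each leaf is that symbol's code.
    if len(node) == 1:                     # leaf: (symbol,)
        return {node[0]: prefix or "0"}
    left, right = node
    table = codes_from_tree(left, prefix + "0")
    table.update(codes_from_tree(right, prefix + "1"))
    return table
\end{lstlisting}
With this sketch, the most frequently occurring symbol ends up with the shortest code, as required.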
\\
\\
Once the tree is complete, the binary sequence codes need to be constructed. The most efficient way to do this is to start from each leaf node and move up the tree, checking whether the current node is a left or a right child and adding the respective 0 or 1 binary value to that leaf node's code, until the root node has been reached (see Pseudo-code~\ref{sudoCode:codegeneration}).
\begin{figure}[h!]
\centering
\begin{lstlisting}[frame=single]
bool [] generateCode(node leafNode)
BEGIN
	node parentNode = leafNode.parent;
	bool [] codeArray;
	WHILE (parentNode is not NULL)
		IF (leafNode == parentNode.left)
			codeArray.add(0);
		ELSE IF (leafNode == parentNode.right)
			codeArray.add(1);
	
		leafNode = parentNode;
		parentNode = parentNode.parent;
	END WHILE
	
	reverseCode(codeArray);
	
	RETURN codeArray;	
END
\end{lstlisting}
\caption[Pseudo Code for the Code generation method used by Huffman Coding]{Pseudo code for the Code Generation method in Huffman Coding. This code generates all the codes for the leaf nodes.}
\label{sudoCode:codegeneration}
\end{figure}

The reverseCode() method is required because decompression needs a root-to-leaf binary sequence in order to determine the correct unique value associated with that binary sequence. Some algorithms skip this reversal step by using break-point values between codes. However, break points tend to reduce the compression ratio and, for the SKA data, reduce the processing speed too (see Chapters~\ref{Chapter5} and \ref{Chapter6}).
\\
\\
For the encoding procedure, each symbol value is taken and replaced with the binary sequence generated for that symbol. All the binary sequences are then concatenated and written to a file as a sequence of byte values (8 binary bits at a time), with 0s padded at the end to complete the last byte.
\\
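This byte-packing step can be sketched as follows (a minimal Python illustration; the function name is hypothetical):
\begin{lstlisting}[frame=single]
def pack_bits(bit_strings):
    # Concatenate every symbol's code into one bit stream, then pad
    # with 0s so its length is a multiple of 8.
    stream = "".join(bit_strings)
    stream += "0" * ((8 - len(stream) % 8) % 8)
    # Emit one byte per 8 bits.
    return bytes(int(stream[i:i + 8], 2) for i in range(0, len(stream), 8))
\end{lstlisting}
For example, the codes 1, 01 and 001 pack into the single byte 10100100.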
This allows the decoder to start at the beginning of the full binary sequence for the decoding process. Decoding starts at the root node and moves down the tree according to the next binary value in the sequence: 0 for a left move and 1 for a right move. When a leaf node is reached, its symbol is added to the output array and the process restarts from the root node, continuing until there are no binary values left to read (see Pseudo-code~\ref{sudoCode:Decompression}).

\begin{figure}[h!]
\centering
\begin{lstlisting}[frame=single]
symbolType [] decode(bool[] fullBinarySequence)
BEGIN
	node treeNode = rootNode;
	symbolType [] symbols;	
	// NOTE: fullBinarySequence.length must exclude the padding bits,
	// otherwise the padding could decode as extra symbols
	int pos = 0;
	WHILE (pos < fullBinarySequence.length)
		IF (fullBinarySequence[pos] == 0)
			treeNode = treeNode.left;
		ELSE
			treeNode = treeNode.right;
		
		IF (treeNode is leafNode)
			symbols.add(treeNode.symbol);
			treeNode = rootNode;
		END IF
		
		pos++;
	END WHILE
	
	RETURN symbols;
END
\end{lstlisting}

\caption[Pseudo code for the Huffman coding decompression method]{Pseudo code for the Huffman Coding decompression method.}
\label{sudoCode:Decompression}
\end{figure}

Making Huffman Coding parallel entails either parallelising the binning method and the symbol-to-binary-sequence substitution method, or implementing a blocking scheme, where the data is split into blocks of equal size that are compressed simultaneously. The blocking method is expected to be faster, as the tree-construction stage of Huffman Coding is inherently sequential.
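Assuming a generic per-block compressor, the blocking scheme can be sketched as follows; zlib stands in for the Huffman coder here, and the names and thread count are illustrative only:
\begin{lstlisting}[frame=single]
import zlib
from concurrent.futures import ThreadPoolExecutor

def compress_blocks(data, block_size, workers=4):
    # Split the input into equal-sized blocks (the last may be short)
    # and compress the blocks simultaneously.  Threads suffice here
    # because zlib releases the GIL; a pure-Python coder would need
    # processes instead.
    blocks = [data[i:i + block_size]
              for i in range(0, len(data), block_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(zlib.compress, blocks))

def decompress_blocks(compressed):
    # Blocks are independent, so they could also be decompressed in
    # parallel; a serial join is shown for clarity.
    return b"".join(zlib.decompress(b) for b in compressed)
\end{lstlisting}
In a blocked Huffman coder each block would carry its own table, trading some compression ratio for near-linear scaling.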
\\
\\
The output file consists of a table associating each symbol with its count, so that the decoder can reconstruct the same tree, followed by the full sequence of encoded binary values.
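A minimal sketch of such a table, assuming unsigned 32-bit symbols and counts (the exact layout is a design choice, not fixed by the algorithm):
\begin{lstlisting}[frame=single]
import struct

def write_table(counts):
    # Header: entry count, then one (symbol, count) pair per unique
    # value, all as little-endian 32-bit unsigned integers.
    out = struct.pack("<I", len(counts))
    for symbol, count in sorted(counts.items()):
        out += struct.pack("<II", symbol, count)
    return out

def read_table(buf):
    # Returns the counts and the offset where the encoded bits begin.
    (n,) = struct.unpack_from("<I", buf, 0)
    counts = {}
    for i in range(n):
        symbol, count = struct.unpack_from("<II", buf, 4 + 8 * i)
        counts[symbol] = count
    return counts, 4 + 8 * n
\end{lstlisting}
Because the decoder rebuilds the tree from the same counts, both sides produce identical trees, provided ties are broken identically.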

\subsection{Adaptive Huffman Coding}

Adaptive Huffman Coding (AHC) was designed so that Huffman Coding can work on streaming data and does not require full knowledge of the data in advance.
\\
\\
The tree structure differs slightly from Dynamic Huffman Coding's, as the leaf node furthest from the root must always be a NULL placeholder (nullPTR), so that a new symbol can always be added to the tree.

\begin{figure}[htbp]
	\centering
		\includegraphics[width=0.6\textwidth]{Figures/update-process.png}
		\rule{35em}{0.5pt}
	\caption[Adaptive Huffman Coding Update Process]{Update Process for Adaptive Huffman Coding}
	\label{fig:AHC Update procedure}
\end{figure}

The basic principle of AHC is to start with a root node and a nullPTR leaf node, and then begin reading symbols. When a new symbol arrives, the nullPTR node is replaced with a new node of weight 1 that has two children: the nullPTR node and a new leaf node containing the new symbol with a weight of 1; the update procedure is then run on the parent node. When a previously seen symbol arrives, the update procedure is run on the leaf node containing that symbol.
\\
Fig.~\ref{fig:AHC Update procedure} details this update procedure. It entails checking whether the node needs to swap places with a higher node (one closer to the root) because its weight now rivals that node's, incrementing the node's weight, and running the same update procedure on the parent node until the root node is reached~\cite{HuffmanCodingSite}.
\\
\\
Decoding AHC requires the sequence of unique symbols in the order they were found during compression, together with the full binary sequence. It decodes the same way as Dynamic Huffman Coding, with one difference: on arriving at the nullPTR node, it inserts the next unique symbol from the stored sequence exactly as a new symbol would be added during compression. The same update procedure is also run on each leaf node as it is reached.
\\
\\
Because AHC is strictly sequential, with a tree update after every symbol, it is effectively impossible to parallelise. The only alternative would be to block the data in some way, which defeats the purpose of AHC: supporting streaming without requiring the data in advance.
\\
\\
The output file for AHC comprises the unique-symbol sequence in order of first appearance, followed by the full binary sequence.

\section{Prefix Sum}
The major bottleneck within the Huffman Coding algorithm is the conversion from the final binary sequence to a byte array. Usually this cannot be done efficiently because each symbol's binary sequence is stored in its own array and is of arbitrary length.
\\
\\
A prefix sum is an accumulation of the elements of an array under a binary associative operation $\oplus$ with identity value $I$. If $A=[A_1, A_2, \dots, A_{n-1}, A_n]$, where $n$ is the number of values in the array, then the (exclusive) prefix sum is $PS=[I, A_1, (A_1 \oplus A_2), \dots, (A_1 \oplus A_2 \oplus \dots \oplus A_{n-1})]$.\cite{blelloch1990prefix}
\\
For instance, given the following array of integers:
\begin{center}
[1, 6, 9, 10, 2, 3, 12]
\end{center}
the exclusive prefix sum (with addition as $\oplus$ and $I = 0$) is:
\begin{center}
[0, 1, 7, 16, 26, 28, 31]
\end{center}
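A serial Python sketch of this exclusive prefix sum (a parallel version would use e.g. Blelloch's scan):
\begin{lstlisting}[frame=single]
def exclusive_prefix_sum(values, op=lambda a, b: a + b, identity=0):
    # Output element i is the fold of all elements strictly before i.
    out = [identity]
    for v in values[:-1]:
        out.append(op(out[-1], v))
    return out
\end{lstlisting}
With addition as the operator, these outputs serve directly as starting offsets into a flattened array.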

Usually a sequential method is used to generate the bytes one at a time until no binary digits are left. We decided to speed this up by using a prefix sum to flatten the two-dimensional binary sequence array into a one-dimensional binary sequence array in parallel, which then allows the flattened binary sequence array to be converted to a byte array in parallel. This should speed up the binary-sequence-to-byte conversion.
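The flattening step can be sketched as follows; the offsets computed by the length prefix sum let each code be copied independently (names hypothetical):
\begin{lstlisting}[frame=single]
def flatten_codes(code_arrays):
    # Exclusive prefix sum of the per-code lengths: the starting
    # offset of each code in the flat output.
    lengths = [len(code) for code in code_arrays]
    offsets = [0]
    for n in lengths[:-1]:
        offsets.append(offsets[-1] + n)
    total = offsets[-1] + (lengths[-1] if lengths else 0)
    flat = [0] * total
    # Destinations are known in advance, so this loop is trivially
    # parallelisable: no two codes write to the same position.
    for offset, code in zip(offsets, code_arrays):
        flat[offset:offset + len(code)] = code
    return flat
\end{lstlisting}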

\section{GPU based Huffman Coding}
As discussed earlier, GPU-based Huffman Coding algorithms do not fare well. However, we decided to try a combined GPU-CPU algorithm using the Thrust library, since it does not require a lot of work to use and is already optimised. Since Thrust only provides binning methods and prefix sums among the stages we need, these are the two sections of code we decided to test on the graphics card.
\\
\\
The Thrust binning method requires the number of unique values in advance, which in our case is not available. Fortunately, Thrust contains other methods, such as \verb|inner_product()|, which can calculate the number of unique values in an array. The final procedure required to bin the entire data set is as follows. First the data is sorted using Thrust's sort method, because the \verb|inner_product| method requires all equal values to be adjacent. The \verb|inner_product| method is then run in order to size the \verb|unique_values[]| and \verb|unique_values_count[]| arrays. Finally, the binning itself, finding and counting each unique value, is performed with Thrust's \verb|reduce_by_key| method.
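The intent of this pipeline can be mirrored in plain Python; sorting plays the role of Thrust's sort, and the run-length grouping that of \verb|reduce_by_key|:
\begin{lstlisting}[frame=single]
from itertools import groupby

def bin_values(values):
    # Sort first so equal values sit next to each other, as
    # reduce_by_key requires.
    ordered = sorted(values)
    unique_values, unique_values_count = [], []
    # Collapse each run of equal values into (value, run length).
    for value, run in groupby(ordered):
        unique_values.append(value)
        unique_values_count.append(sum(1 for _ in run))
    return unique_values, unique_values_count
\end{lstlisting}
On the GPU, each of these stages corresponds to a single optimised Thrust call.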
\\
\\
The Thrust prefix sum can be used to compute an exclusive scan of the lengths of each array in our two-dimensional binary sequence array. Thrust's \verb|exclusive_scan| method is run on an array containing all these lengths. Once the prefix sum is complete, the values in the prefix sum array are the starting positions of each array within the two-dimensional binary sequence array. This allows us to allocate a flat binary sequence array and then flatten the two-dimensional binary sequence array using the CPU.
\\
Once the binary sequence array has been flattened, it can be converted into byte values to be written to a file.

\section{Benchmarking}
Testing the final stage of the SKA's Huffman coder will entail using sample data provided by the SKA. The main comparisons will be between parallel versions, and finally against standard compression algorithms. This will require multiple runs of each algorithm on the same system: an average time over a minimum of 6 runs will be recorded for every test configuration. These configurations will be 1 thread (linear running), 4 threads (a standard PC for public users), 8 threads and 16 threads. A combined GPU and CPU version will be constructed and compared to the equivalent CPU version (same number of CPU threads for each algorithm).
\\
\\
The final testing stage will be a comparison for the SKA to determine the best possible scheme to use. This will compare all the compression schemes researched for the SKA, and will be made fair by having all algorithms compress the same file.