% Chapter Template

\chapter{Huffman Coding Implementation} % Main chapter title

\label{Chapter5} % Change X to a consecutive number; for referencing this chapter elsewhere, use \ref{ChapterX}

\lhead{Chapter 5. \emph{Huffman Coding Implementation}} % Change X to a consecutive number; this is for the header on each page - perhaps a shortened title

%----------------------------------------------------------------------------------------
%	SECTION 1
%----------------------------------------------------------------------------------------

Since certain parts of the DHC algorithm can be executed faster with CUDA than with any CPU parallel system, both a CPU-only and a combined GPU-CPU version of the algorithm were developed for SKA.

\section{CPU Only Huffman Coding}
The same tree structure used in AHC was reused for Huffman Coding, but the code and depth values were removed, as they are not needed for the construction of the Huffman tree or the codes.
\\
\\
Both a parallel algorithm and a blocked system were created for Huffman Coding. The blocked algorithm was determined to be approximately 10 times faster, so the parallel algorithm was abandoned to allow time to optimise the blocked version of the code.

\subsection{Binning}
The initial step for Huffman Coding is to bin the data, i.e.\ to calculate each unique symbol's probability. This is done by creating a hash map from symbols to symbol counts. The data is read, and when a new symbol is found it is added to the hash map with a count of 1. If that symbol has been seen before, its count value is increased by 1. This can be done in parallel with a few synchronisation checks: an atomic count variable is used, and a critical block ensures that only one thread can construct a new symbol entry at a time.
\\
The binning method developed for our blocked Huffman Coding algorithm is presented in Algorithm~\ref{sudocode:binning}.
\begin{figure}[h!]
\centering
\begin{lstlisting}[frame=single]
BinData(float[] data)
BEGIN
	FOR (int i = 0; i < numberFloats; ++i)
		float value = data[i];
		
		IF (FrequencyHashMap does not contain the value)
				FrequencyHashMap[value] = 1;
		ELSE
			FrequencyHashMap[value]++;
		END IF
	END FOR
END
\end{lstlisting}
\caption[Pseudo code for the Huffman Coding binning method]{Pseudo code for the Huffman Coding binning method. This finds and counts all the unique symbols.}
\label{sudocode:binning}

\end{figure}

The binning algorithm uses a hash map from IEEE 32-bit floats to 32-bit integer values to store the count of each float value.
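To make the binning step concrete, the following is a minimal serial sketch in C++; the function name \verb|binData| and the use of \verb|std::unordered_map| are illustrative assumptions, not the production code.

```cpp
#include <cstdint>
#include <unordered_map>
#include <vector>

// Illustrative sketch of the binning step: count occurrences of each
// unique 32-bit float using a hash map, as in the binning pseudocode.
std::unordered_map<float, std::uint32_t>
binData(const std::vector<float>& data) {
    std::unordered_map<float, std::uint32_t> frequency;
    for (float value : data) {
        // operator[] inserts the value with a zero count on first sight,
        // so a single increment covers both the "new symbol" and the
        // "seen before" branches of the pseudocode.
        ++frequency[value];
    }
    return frequency;
}
```

In the parallel variant described above, the increment would become an atomic update and the insertion of a new symbol would sit inside a critical block.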

\subsection{Tree Construction and Code Generation}
The next step is to construct the tree. As noted in Chapter~\ref{Chapter3}, a dynamic programming scheme is used, so no parallel version could be designed. A priority queue ordered on each tree node's weight is used. An output list of all leaf nodes is also kept for the code construction, as explained in Chapter~\ref{Chapter3}. The only difference is that the tree is deleted after the hash map from symbol to code is constructed. The generate tree method is presented in Algorithm~\ref{sudocode:generateTree}.

\begin{figure}[h!]
\centering

\begin{lstlisting}[frame=single]
GenerateTree(Nodes[] leafNodes)
BEGIN
	PriorityQueue<Node> queue
	
	FOR (all values in FrequencyHashMap)
		node <- new leaf node containing the value 
		queue.push(node)
		leafNodes.add(node)
	END FOR
	
	WHILE (the queue contains more than one node)
		Node a = queue.pop
		Node b = queue.pop
		
		Node parent <- new joiner node
		
		parent.left = a
		parent.right = b
		
		a.parent = parent
		b.parent = parent

		queue.push(parent)				
	END WHILE
	
	DELETE the tree;
END
\end{lstlisting}

\caption[Pseudo code for the Huffman Coding Generate Tree Method]{Pseudo code for the Huffman Coding Generate Tree method. This is how the Huffman tree is constructed using a dynamic programming procedure, with a priority queue used to place symbols that appear more often closer to the root node.}
\label{sudocode:generateTree}

\end{figure}

The generation of the codes can be done in parallel when the data is not blocked, as each leaf node's code can be constructed independently. However, because threads contend for CPU cores when more threads are used than cores are available, it is not recommended to execute any parallel algorithm while running blocks in parallel.
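The tree construction and code generation described above can be sketched as follows. The names (\verb|Node|, \verb|buildCodes|) and the use of \verb|std::priority_queue| with owning pointers are illustrative assumptions, not the SKA implementation; codes are returned as strings of '0'/'1' for readability.

```cpp
#include <cstdint>
#include <map>
#include <memory>
#include <queue>
#include <string>
#include <vector>

// Illustrative tree node: leaves carry a symbol, joiners carry children.
struct Node {
    float symbol = 0.0f;
    std::uint64_t weight = 0;
    Node* left = nullptr;
    Node* right = nullptr;
};

// Comparator so the priority queue behaves as a min-heap on weight.
struct HeavierFirst {
    bool operator()(const Node* a, const Node* b) const {
        return a->weight > b->weight;
    }
};

// Walk from the root, emitting '0' for left and '1' for right, so that
// frequent symbols (merged last, hence near the root) get short codes.
void generateCodes(const Node* n, std::string prefix,
                   std::map<float, std::string>& codes) {
    if (!n->left && !n->right) { codes[n->symbol] = prefix; return; }
    generateCodes(n->left, prefix + "0", codes);
    generateCodes(n->right, prefix + "1", codes);
}

std::map<float, std::string>
buildCodes(const std::map<float, std::uint64_t>& frequency) {
    if (frequency.empty()) return {};
    std::vector<std::unique_ptr<Node>> pool;  // owns every node
    std::priority_queue<Node*, std::vector<Node*>, HeavierFirst> queue;
    for (const auto& [symbol, weight] : frequency) {
        pool.push_back(std::make_unique<Node>(Node{symbol, weight}));
        queue.push(pool.back().get());
    }
    // Repeatedly merge the two lightest nodes under a joiner node.
    while (queue.size() > 1) {
        Node* a = queue.top(); queue.pop();
        Node* b = queue.top(); queue.pop();
        pool.push_back(std::make_unique<Node>(
            Node{0.0f, a->weight + b->weight, a, b}));
        queue.push(pool.back().get());
    }
    std::map<float, std::string> codes;
    generateCodes(queue.top(), "", codes);
    return codes;  // the pool (the tree) is destroyed here, as in the text
}
```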

\subsection{Float-to-code swapping procedure}
The next step is to put the codes in the correct order. This is most easily done by swapping each float for its corresponding code from the hash map. The swapping can be done in parallel by giving each thread the same number of symbols to swap and having all threads swap at the same time. The algorithm uses OpenMP to run the conversion in parallel. This is shown in Algorithm~\ref{sudocode:swapvalues}.

\begin{figure}[h!]
\centering
\begin{lstlisting}[frame=single]
SwapValues(float[] data, bool[][] codes)
BEGIN
	// calculate the number of values to process at a time
	int32 numToProcessPerBlock = ceil(NumberFloats / NumThreads)
	
	// initialise the compressor with the correct frequency map
	compressor(FrequencyHashMAP)

	initialise the compressor tree
	
	# OMP parallel for
	LOOP (The number of threads)
		check that the number to process does not exceed the array bounds
		
		// swap the codes		
		call on the compressor to swap the symbols for their codes for the current block
	END FOR
END

// The compress method in Huffman compressor
compress(float[] data, bool[][] codes, int32 num)
BEGIN
	FOR (int32 i = 0; i < num; ++i)
		codes[i] = CodesHashMap[data[i]];
	END FOR
END
\end{lstlisting}
\caption[Pseudo code for the Huffman Coding Swap Values method]{Pseudo code for the Huffman Coding swap values method. This method swaps all the symbols for their respective binary sequences in parallel.}
\label{sudocode:swapvalues}
\end{figure}
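A minimal sketch of this chunked parallel swap is given below, assuming OpenMP and using strings of '0'/'1' in place of boolean arrays for readability; the names are illustrative. Without OpenMP the pragma is ignored and the loop simply runs serially.

```cpp
#include <algorithm>
#include <cstdint>
#include <string>
#include <unordered_map>
#include <vector>

// Illustrative sketch of SwapValues: split the data into equal chunks and
// let each thread replace its chunk's floats with their codes.
std::vector<std::string>
swapValues(const std::vector<float>& data,
           const std::unordered_map<float, std::string>& codesMap,
           int numThreads) {
    std::vector<std::string> codes(data.size());
    // ceil division: number of values each thread processes
    const std::size_t perChunk =
        (data.size() + numThreads - 1) / numThreads;
    #pragma omp parallel for
    for (int t = 0; t < numThreads; ++t) {
        std::size_t begin = t * perChunk;
        // clamp so the last chunk does not run past the array bounds
        std::size_t end = std::min(begin + perChunk, data.size());
        for (std::size_t i = begin; i < end; ++i)
            codes[i] = codesMap.at(data[i]);
    }
    return codes;
}
```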

\subsection{Binary Sequence to Char conversion}
The final steps are to convert the boolean sequences into char arrays and to place the required data into a file. There are two ways to do this: a dynamic programming approach, and a prefix sum flattening parallel approach~\cite{O'Neil:2011:FDC:1964179.1964189} which has been shown to work very well.
\\
\\
The dynamic programming approach constructs bytes from the binary sequences using the two-dimensional binary sequence array. It moves through each array in the 2D binary sequence array one bit at a time, adding each bit to the current byte until 8 bits have been added. It then appends that byte to the byte array and starts a fresh byte, continuing until no bits are left. This is shown in Algorithm~\ref{sudoCode:dynamiccharconversion}.

\begin{figure}[h!]
\centering
\begin{lstlisting}[frame=single]
ConvertToChar(bool[][] codes, char[] charArray)
BEGIN
	int32 count = 0;
	char c = 0;
	LOOP (the number of float symbols in the data)
		LOOP (all the bits in the current code)
			// create the char bit by bit
			IF (codes[i][z] == 1)
				SHIFT 1 to indexed position
			ELSE
				SHIFT 0 to the indexed position 
			END IF				
			
			count++; // move to next bit
			
			IF (a complete char has been constructed)
				add the char to our output array
				refresh the char
			END IF
		END FOR
	END FOR
END
\end{lstlisting}
\caption[Pseudo code for the Huffman Coding Dynamic programming Char Conversion method]{Pseudo code for the Huffman Coding Char Conversion method which converts the 2 dimensional binary sequence array into a byte array using a Dynamic Programming scheme.}
\label{sudoCode:dynamiccharconversion}
\end{figure}

This dynamic programming method was determined to be the slowest part of the entire algorithm, causing a large bottleneck. We therefore tried the prefix sum flattening parallel approach found in~\cite{O'Neil:2011:FDC:1964179.1964189}. This method uses a prefix sum to enable a fast parallel process that flattens the 2D binary sequence array into a 1D binary sequence array; STL's in-place copy method was used to do this efficiently in parallel. Once the flattened array is constructed, the 1D binary sequence array can be converted into a byte array in parallel, as shown in Algorithm~\ref{sudoCode:prefixlinear}.
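The flattening-and-packing idea can be sketched as follows, using strings of '0'/'1' in place of boolean arrays for readability. This is an illustrative reconstruction, not the production code; the two loops marked as parallelisable are the ones the implementation runs under OpenMP.

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Sketch of the prefix-sum flattening approach: the exclusive prefix sum
// of the code lengths gives each code's start offset in a flat bit array,
// so all codes can be copied out independently (hence in parallel). The
// flat array is padded to a whole number of bytes before packing.
std::vector<std::uint8_t>
convertToChar(const std::vector<std::string>& codes) {
    // exclusive prefix sum; the extra entry holds the total bit count
    std::vector<std::size_t> prefix(codes.size() + 1, 0);
    for (std::size_t i = 0; i < codes.size(); ++i)
        prefix[i + 1] = prefix[i] + codes[i].size();

    // flatten, padding with zero bits up to a multiple of 8
    std::size_t totalBits = prefix[codes.size()];
    std::vector<bool> flat((totalBits + 7) / 8 * 8, false);
    for (std::size_t i = 0; i < codes.size(); ++i)        // parallelisable
        for (std::size_t z = 0; z < codes[i].size(); ++z)
            flat[prefix[i] + z] = (codes[i][z] == '1');

    // pack 8 bits into each byte, most significant bit first
    std::vector<std::uint8_t> bytes(flat.size() / 8, 0);
    for (std::size_t i = 0; i < bytes.size(); ++i)        // parallelisable
        for (int z = 0; z < 8; ++z)
            bytes[i] |= static_cast<std::uint8_t>(flat[i * 8 + z]) << (7 - z);
    return bytes;
}
```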

\begin{figure}
\centering
\begin{lstlisting}[frame=single]
ConvertToChar(bool[][] codes, char[] charArray)
BEGIN
	// The prefix sum is first calculated; note an extra
	// length is added to determine the total number of binary values
	int32[] prefixSum(codes.length + 1);
	
	prefixSum[0] = 0;
	int32 sum = 0;
	LOOP (the length of the code)
		sum += codes[i].length;
		prefixSum[i+1] = sum; // calculate the prefix
	END FOR
	
	// A flattened array of binary values is then calculated (remember padding)
	bool[] flatArray(ceil(prefixSum[codes.length]/8) * 8);
	
	# OMP parallel for
	LOOP (the number of codes in the 2D array)
		copy the code to the correct array starting position
	END FOR
	
	// finally construct the byte array
	
	# OMP parallel for
	LOOP (number of bytes to create)
		unsigned char c = 0;

		FOR (int z = 0; z < 8; ++z)
			IF (flatArray[(i << 3) + z] == 1)
				SHIFT 1 to the index
			ELSE
				SHIFT 0 to the index
			END IF
		END FOR		
		
		charArray[i] = c;
	END FOR
END
\end{lstlisting}
\caption[Pseudo code for the Prefix Sum version of the Huffman Coding char conversion method]{Pseudo code for the Huffman coding char conversion method. This version of the char conversion method uses a linear prefix sum to help flatten the 2D binary sequence array in parallel, for a parallel byte conversion.}
\label{sudoCode:prefixlinear}
\end{figure}

The use of a linear prefix sum on the CPU, rather than a parallel one, follows~\cite{blelloch1990prefix}. The authors state that a CPU-parallel prefix sum tends to slow the operation down, so a linear approach is faster on the CPU.
\\
\\
This version of the char construction method is around 5 times faster than the dynamic programming version, which makes the final CPU-only algorithm more than 3 times faster.
\\
\\
The file is then constructed by writing the tables for the data and then dumping the byte array from memory into the file, to save on writing costs.
\\
In the blocked scheme there will be multiple tables, and the char arrays will have to be constructed separately for each table so that the decompression can differentiate between the codes generated by each tree.

\subsection{Decompression}
The decompression process receives the tables and the character arrays. The parallel version of the algorithm has only one table and one char array, and can thus convert the char array back into boolean arrays in parallel, one char at a time. It then uses the same procedure as compression to construct the tree, this time neither deleting the tree nor constructing the hash map. The decompression procedure discussed in Chapter~\ref{Chapter3} can then decompress the data.
\\
\\
For the blocked Huffman Coding version, each block can be decompressed at the same time, but the conversion to a char array has to be done serially within each block.
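The char-to-boolean conversion mentioned above can be sketched as follows; this is an illustrative reconstruction, not the production code. Because each char maps to a fixed group of 8 bits, every byte can be expanded independently, which is what makes this step parallelisable in the non-blocked version.

```cpp
#include <cstdint>
#include <vector>

// Expand each byte of the compressed stream back into its 8 bits,
// most significant bit first (matching the packing order).
std::vector<bool> unpackBits(const std::vector<std::uint8_t>& bytes) {
    std::vector<bool> bits(bytes.size() * 8);
    for (std::size_t i = 0; i < bytes.size(); ++i)   // parallelisable
        for (int z = 0; z < 8; ++z)
            bits[i * 8 + z] = (bytes[i] >> (7 - z)) & 1;
    return bits;
}
```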

\section{GPU and CPU combined Huffman Scheme}
The GPU and CPU combined version of the code was implemented to find out if certain sections of the code could be accelerated by using the GPU rather than the CPU.
\subsection{Binning}
The first section of code that can be parallelised on the GPU is the binning process. There are many binning algorithms for CUDA. Thrust is a CUDA library designed by NVIDIA which provides all the usual C++ STL (Standard Template Library) algorithms, and many more, ready for use. A common use for Thrust is binning, as it is usually faster to bin data on the GPU than on the CPU.
\\
\\
In order to use the Thrust binning algorithm most effectively, the full 5GB chunk that arrives should be binned rather than split into blocks. This changes the swapping section of the code to match the non-blocked parallel CPU algorithm, where the compressor and its initialisation step are created outside of the swapping for-loop. Another change had to be made in the compressor class: the initialisation step should now \emph{not} bin the data, as this is done through Thrust calls.
\\
\\
The Thrust binning algorithm uses the original data and two extra arrays, for the unique symbols and the counts. The Thrust library provides both a device (GPU memory) array class and a host (CPU memory) array class that can interact with one another. This interaction also allows for easy memory copies to and from the device.\\
The algorithm, as explained in Chapter~\ref{Chapter3}, is shown in Figure~\ref{sudocode:charconvgpu}.

\begin{figure}
\centering
\begin{lstlisting}[frame=single]
BinData(Thrust::device<float> data, Thrust::device<float> uniqueVals,
Thrust::device<int32> counts)
BEGIN
	// sort the data
	Thrust::sort(data.begin(), data.end());
	
	// once the data is sorted, number of unique values can be calculated
	int32 numUnique = Thrust::inner_product(data.begin(), data.end() - 1,
data.begin() + 1, int32(1), Thrust::plus<int32>(), Thrust::not_equal_to<float>());

	// set the size of the device arrays
	uniqueVals.resize(numUnique);
	counts.resize(numUnique);
	
	// find each unique value and count it.
	Thrust::reduce_by_key(data.begin(), data.end(),
Thrust::constant_iterator<int>(1), uniqueVals.begin(), counts.begin());
END
\end{lstlisting}
\caption[Pseudo code for the Thrust binning method]{Pseudo code for the Thrust binning method. This method replaces the CPU binning method, both to speed up the compression and to increase the compression ratio, since fewer tables are needed for each 5GB chunk.}
\label{sudocode:charconvgpu}
\end{figure}

The Thrust binning method finds all unique values and counts the number of unique values through the following procedure:
\begin{enumerate}
	\item First the data is sorted to place all unique symbols together and in consecutive order.
	\item A Thrust \textit{inner product} is used to count how many unique values there are in the file. This is done by increasing an integer value by one each time a value is not equal to the one before it (since the data is sorted, this counts all unique symbols).
	\item Finally, a \emph{reduction} operation is done on the data: each time a new symbol is found, it is placed in the uniqueVals array and a counter is started in the counts array; that counter is then incremented each time the symbol is seen again.
\end{enumerate}
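The three steps above can be followed on the CPU with an STL analogue of the Thrust calls. This sketch mirrors the logic of the sort, the \verb|inner_product| unique-count, and the \verb|reduce_by_key| pass without requiring a GPU; the function name is illustrative.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// CPU analogue of the Thrust binning procedure.
void binOnSortedData(std::vector<float> data,
                     std::vector<float>& uniqueVals,
                     std::vector<std::int32_t>& counts) {
    uniqueVals.clear();
    counts.clear();
    if (data.empty()) return;

    // 1. sort so equal symbols sit next to each other
    std::sort(data.begin(), data.end());

    // 2. count unique values: one, plus one per adjacent unequal pair
    // (this mirrors Thrust::inner_product with plus / not_equal_to)
    std::int32_t numUnique = 1;
    for (std::size_t i = 1; i < data.size(); ++i)
        if (data[i] != data[i - 1]) ++numUnique;
    uniqueVals.reserve(numUnique);
    counts.reserve(numUnique);

    // 3. reduce by key: emit each run of equal values with its length
    for (std::size_t i = 0; i < data.size(); ++i) {
        if (i == 0 || data[i] != data[i - 1]) {
            uniqueVals.push_back(data[i]);
            counts.push_back(1);
        } else {
            ++counts.back();
        }
    }
}
```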

Since the Thrust binning algorithm has many synchronisation points, tests show that regular gaming graphics cards (the GT/GTX series of NVIDIA cards) beat the specialised Tesla series in speed when running this CUDA code. This is most likely because the gaming cards have much higher clock speeds than the Tesla cards. Even though the Tesla cards have many more SMs and threads per SM, the synchronisation in the binning procedure causes most threads to halt and wait for another thread to release access to one of the variables. The higher clock speeds of the gaming cards thus allow threads to release a variable in less time.
\\
\\
This Thrust binning procedure provides a speed up of around 45\%, including all copy times to and from the graphics card.
\\
An important issue arose during the testing of this Thrust binning algorithm. The Thrust::sort method is not an in-place sort, meaning it requires memory equal to double the size of the data being sorted. This is an issue for the SKA data, as the current maximum memory size for NVIDIA graphics cards is 6GB and the SKA data arrives in 5GB chunks. There is thus insufficient memory to sort a whole 5GB chunk. Two modifications were introduced to address this issue:
\\
First, the data was sorted piece by piece using many Thrust::sort calls on small blocks. Sadly, because each kernel call carries an overhead, the use of many Thrust::sort calls caused the 45\% speed up to be lost.
\\
The second modification attempts to fix the problem by utilising pinned memory. Pinned memory links host and device memory together and allows copies between the host and device to run on the fly during a kernel's execution. Thrust can use pinned memory through the
\\\verb|Thrust::experimental::cuda::pinned_allocator| object. This is used in each host and device Thrust array declaration, to tell Thrust to use pinned memory when dealing with the data in those arrays. This pinned memory allocation allows the Thrust::sort method to sort data larger than half the size of the device memory, but causes the speed up to drop to 40\% over the original CPU method, thus losing 5\%.

\subsection{Symbol to Code Swapping}
The next parallel procedure that could be moved to the GPU is the process of swapping the floats for their respective codes. Many issues were encountered while writing a kernel to do this swapping. In particular, CUDA cannot have pointer values pointing to other pointer values, only to data. This means the hash map of symbols to codes cannot be sent as a data structure to the graphics card. We thus had to flatten the codes and symbols into separate arrays: an array for the floats, a long array with all the codes flattened one after another, and a third array giving the starting position in the flattened code array for each unique symbol in the flattened float array.
\\
\\
After the flattening of all the arrays, the data could be sent to the GPU and a kernel could be run. However, since pointers could not be used, a hashing method is impossible, and each symbol had to be found by looping through the entire flattened float array. In the worst case, this means each thread has to loop through all the unique values before finishing.
\\
\\
Due to these issues, the kernel method was on average 5 times slower than the normal CPU swapping procedure, and the GPU method for swapping was therefore discarded.
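For illustration, the flattened layout and the linear search it forces can be sketched on the CPU as follows. The names and the '0'/'1' char representation of the codes are assumptions for readability, not the kernel code itself.

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Illustrative sketch of the flattened layout required by the kernel:
// the symbols, the concatenated codes, and each code's start offset live
// in three flat arrays, so every lookup degenerates to a linear scan.
// starts has one extra entry so starts[i+1] - starts[i] is code i's length.
std::string lookupCode(float symbol,
                       const std::vector<float>& symbols,
                       const std::vector<char>& flatCodes,
                       const std::vector<std::int32_t>& starts) {
    for (std::size_t i = 0; i < symbols.size(); ++i) {  // O(unique) scan
        if (symbols[i] == symbol)
            return std::string(flatCodes.begin() + starts[i],
                               flatCodes.begin() + starts[i + 1]);
    }
    return {};  // symbol not found
}
```

The scan over all unique symbols, repeated by every thread, is what made the kernel roughly 5 times slower than the hash-map lookup on the CPU.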

\subsection{Binary sequence to Character array conversion}
The only part of the final prefix sum byte conversion procedure in the CPU-only method that could be moved to the graphics card is the prefix sum itself. Since the prefix sum has been shown to work very well on the GPU~\cite{blelloch1990prefix}, we tried it using the Thrust method \verb|Thrust::exclusive_scan|. The change removes the linear CPU prefix sum and replaces it with the Thrust call, as shown in Figure~\ref{sudocode:thrustprefix}.

\begin{figure}
\centering
\begin{lstlisting}[frame=single]
ConvertToChar(bool[][] codes, char[] charArray)
BEGIN
	// The prefix sum is first calculated; note an extra
	// length is added to determine the total number of binary values
	thrust::host<int32> prefixSum(codes.length + 1)
	
	// copy the lengths to the host vector
	# OMP parallel for
	LOOP (the length of the 2D code array)	
		prefixSum[i] = codes[i].length;
	END FOR
	
	// copy the lengths to device vector
	thrust::device<int32> prefixDev = prefixSum;
	
	// run the thrust prefix scan
	thrust::exclusive_scan(prefixDev.begin(), prefixDev.end());

	// copy the data back to the host
	prefixSum = prefixDev;
	
	// clear the graphics card memory as its no longer needed	
	prefixDev.clear();
	prefixDev.shrink_to_fit();
	
	// A flattened array of binary values is then calculated (remember padding)
	// note the size of the last value does not include the last length
	bool[] flatArray(ceil((prefixSum[codes.length - 1] + codes[codes.length - 1].length)/8) * 8);
	
	// Rest of the code is the same as the CPU only version
END
\end{lstlisting}
\caption[Pseudo code for the Thrust version of the Prefix sum char conversion]{Pseudo code showing the Huffman coding prefix sum char conversion. This version of the prefix sum char conversion method uses thrust to run the prefix sum in parallel rather than have it run linearly on the CPU.}
\label{sudocode:thrustprefix}
\end{figure}

The thrust implementation of the prefix sum achieves a speed up of around 26\% compared to the CPU based character conversion method.

\subsection{Feasibility of Huffman Coding}
The final data rate achieved by the GPU-Huffman algorithm is around 71MB/s. Although this is much lower than the 5GB/s the SKA requires, tests show that it is much faster than the standard algorithms that are commonly used. The final average compression ratio for SKA data is 41\%, which is very good for long term storage of the data. Finally, as will be shown in Chapter~\ref{Chapter6}, this compression ratio is better than that of any of the standard algorithms available.
\\
\\
This shows that GPU-Huffman is a feasible solution for long term storage of SKA data, but is not a good choice for pipeline compression to save on network throughput.