\documentclass[a4paper]{acm_proc_article-sp}
\usepackage[utf8]{inputenc}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{graphicx}
\usepackage{hyperref}
\author{Brandon James Talbot \\ Department of Computer Science \\ University of Cape Town}
\title{Huffman Squeeze}
\begin{document}
\maketitle

\section{Introduction}
This report discusses the possibility of compressing radio telescope data, without obstructing data flow, using an Entropy Encoder\cite{954501}. All compression techniques work on a similar principle: reducing the output size of the data. This can be achieved by removing unimportant information or by finding another way to represent the data. These two approaches fall into two categories: lossy and lossless compression algorithms. In this case radio data is transmitted over a network and needs to be compressed before storage, so an online approach is required rather than an off-line one.
\\
\\
A lossy compression algorithm removes irrelevant data, or data that could to some extent be recalculated. These algorithms can only be used on data sets that contain excess unneeded data, or data that need not be exact to still serve its purpose. This means the exact input data is not equal to the output data after decompression. Lossless compression algorithms, by contrast, compress the data without any loss, meaning the input to compression is identical to the output of decompression. Entropy encoders fall under the lossless category.
\\
\\ 
Online compression is compression performed on data arriving across a network, or accumulating in memory over time. This means the entire data set is not available from the beginning. Off-line compression, by contrast, is performed on data stored fully in one location, where the entire data set is visible at all times during processing.
\\
\\
Compression algorithms are compared on two fronts: compression ratio and speed.
\\
The compression ratio indicates how much the file has been compressed, and is usually written as a single decimal number: the compressed size divided by the original size. The closer this number is to 0, the better the compression; if it is greater than 1, the algorithm has inflated the file rather than compressed it.
\\
Compression speed is the amount of data the algorithm compresses per second.
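These two metrics can be sketched directly. A minimal illustration follows; the byte counts and timing are made-up figures, not SKA measurements:

```python
# Illustrative helpers for the two metrics above; all figures are made up.
def compression_ratio(original_bytes, compressed_bytes):
    """Ratio < 1 means the data shrank; > 1 means the algorithm inflated it."""
    return compressed_bytes / original_bytes

def compression_speed_mb_s(original_bytes, seconds):
    """Throughput of the compressor in megabytes per second."""
    return (original_bytes / 1_000_000) / seconds

ratio = compression_ratio(10_000_000, 2_500_000)   # 0.25, i.e. good compression
speed = compression_speed_mb_s(10_000_000, 5.0)    # 2.0 MB/s
```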
\\
\\
The algorithm discussed primarily is a parallel version of Huffman Coding. There are many ways to make this algorithm parallel, all of which will be discussed later in the report, along with possible architectures for parallelism.

\subsection{Research Questions}
The creation of such an algorithm can become very complex and raises several further questions. Even though the output task is relatively simple, several questions must be answered to ensure the algorithm is worth deploying:
\begin{enumerate}
\item Is it possible to create a Real Time Entropy Encoding algorithm?
\item Does the algorithm attain speeds or compression to rival standard methods such as BZIP?
\item Is the compression worth the time delay?
\end{enumerate}
The first question depends heavily on the amount of data that needs compressing and the time between arrivals of new data. Real time in this case refers to the ability to compress and store one set of data before the next set arrives for compression.
\\ \\
The second question determines whether the algorithm is better than the current standard algorithms that could be used. If this algorithm cannot beat those, it is not worth the effort.
\\ \\
The third question is entirely at SKA's discretion and depends on the feasibility of the algorithm. If the algorithm barely compresses the file and takes a long time doing so, it could cause problems rather than solve them.

\subsection{Layout}
ETA - TBA

\section{Background}
\subsection{SKA}
SKA South Africa has created plans to construct the largest array of radio telescope dishes in the world. SKA currently has a set of 7 dishes in South Africa called KAT-7, part of its MeerKAT system. This array will be upgraded incrementally to become the full Square Kilometre Array.
\\
\\
SKA radio data arrives at an alarming rate: the current KAT-7 array outputs around 2 TB of data per second, consisting of floating point pairs, one real and one imaginary. In the current set-up, FPGAs process data right at the base of each dish. The data is then sent to the main offices, where further calculations convert it into recognisable floating point values. This transfer is done over a network rated at 36.6 Gigabits per second, which averages around 5 GB per second. \\
SKA has been kind enough to supply example data files for testing.
\\
\\
SKA data tends to sit within a narrow band of the floating point range, with few spikes moving out of this band. This means there are few unique values compared to the total number of values.
\\
\\
Fig.~\ref{MeerKAT_PIPELINE} shows the MeerKAT pipeline from the dishes to the storage location; the proposed compression step must take place just before storage of the data. The process starts with 10-bit data going through FPGA Fourier transforms that convert it into two 18-bit values. These two values then go through beamformers and correlators to generate 16-bit and 32-bit values respectively. The 32-bit values go through a 3-dimensional complex pair manager and the 16-bit values through a 2-dimensional one. These values are then combined into a single 32-bit value pair. This is the data that needs compressing onto the storage device.

\begin{figure}[h!]
 \centering
 \includegraphics[width=0.50\textwidth]{Process.png}
 \caption{High-level overview of the MeerKAT pipeline}
 \label{MeerKAT_PIPELINE}
\end{figure}

\subsection{Compression}
Compression algorithms can be broken down into four major groups. These groups describe how the data is handled in order to be compressed, which determines both the process and the form of the compressed output.
\cite{salomon2004data}

\subsubsection{Basic Methods}
Basic methods tend to be quick but not especially effective. The most common basic method is Run Length Encoding (RLE). RLE finds repetitive data and represents each run by the number of repetitions followed by the value, reducing the number of values to be written and thus compressing the file. These methods do little more than find a shorter representation for the data, without changing much of its structure. \\ Most basic methods are lossless and can work for both online and off-line tasks.\cite[ch.~1]{salomon2004data}
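The run-length idea described above can be sketched in a few lines of Python. This is an illustrative toy, not a production codec; the sample values are made up:

```python
# Minimal RLE sketch: each run of equal values becomes a (count, value) pair.
from itertools import groupby

def rle_encode(data):
    return [(len(list(group)), value) for value, group in groupby(data)]

def rle_decode(pairs):
    return [value for count, value in pairs for _ in range(count)]

encoded = rle_encode([7, 7, 7, 7, 3, 3, 9])   # [(4, 7), (2, 3), (1, 9)]
assert rle_decode(encoded) == [7, 7, 7, 7, 3, 3, 9]
```

Note that RLE only pays off when runs are long; data with no repetition inflates, since each single value becomes a pair.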

\subsubsection{Statistical Methods}
Statistical methods use mathematical principles to either predict or re-represent data in a reversible way. The two most common are entropy coders, which find new representations for the data based on statistics gathered over it, and predictive methods, which predict upcoming values so that only the differences need be stored, in the hope that the differences are smaller and thus take less space.\cite[ch.~2]{salomon2004data}
\\
These methods can be lossless or lossy, and with some modification can handle both online and off-line tasks.

\subsubsection{Transformations}
Transformation methods use wavelets and Fourier transforms to change data into a format that either allows for better compression or allows the removal of data that is unimportant for the reversal. In most cases this lost data causes some degradation.\cite[ch.~5]{salomon2004data}
\\
Transform methods are mostly lossy and tend to require off-line data. Some can work with online data, but tend to lose more information.

\subsubsection{Dictionary methods}
Dictionary methods use sorting and pattern-finding algorithms to locate unique sequences or repeated patterns within the data. This differs from RLE and entropy encoders in that they work on chunks of data values rather than single values at a time.
\\
Dictionary methods tend to be adaptive and thus largely sequential. They can be lossless or lossy but are mostly lossless. Since they require chunks of data, online tasks tend to be difficult, although several Lempel-Ziv (LZ) dictionary methods can work with online data.\cite[ch.~3]{salomon2004data}

\subsection{Entropy Encoders}
There are two well-known entropy encoders: Huffman Coding\cite{672291} and Arithmetic Coding\cite{5974482}. Arithmetic coding differs slightly from Huffman: it converts all the values in the data set into a single decimal value, and decodes that value using a constructed ``table''. The table is built from each unique value and its percentage of occurrence.
\\ \\
This process of generating a single number, rather than replacing many numbers, makes it a dynamic programming\cite{5164745} problem, and thus it cannot easily be done in parallel.
\\
Huffman coding is therefore most likely the faster of the two algorithms.

\subsection{Huffman Coder overview}
Huffman Coding is an entropy encoder found in many commonly used compression schemes such as JPEG\cite{5365034} and MPEG-2/4 AAC\cite{5536972}.
\\
\\
An entropy encoder determines a way to change data into a smaller representation, always giving the smallest representation to the data that occurs most often so as to maximise the compression.
\\ \\
Huffman coding does this by finding and counting all unique values within the data source, then using a tree structure to generate a binary sequence for each unique value that is as short as possible. It then replaces every data value in the file with its binary representation, placing a ``table'' or guide at the front of the data file so that the decompression process knows each data value and its corresponding binary sequence.
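The ``smallest representation to the most frequent value'' idea can be quantified: Shannon entropy gives the theoretical lower bound, in bits per value, that any entropy encoder, Huffman included, can approach. A small illustration (the sample values below are made up, not SKA data):

```python
# Illustrative sketch: entropy as the lower bound for an entropy encoder.
import math
from collections import Counter

def entropy_bits_per_symbol(data):
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A skewed source (few dominant values) has low entropy and compresses well;
# a uniform source does not.
skewed = [1.5] * 90 + [2.5] * 9 + [9.9] * 1
uniform = [1.5, 2.5, 5.5, 9.9]
print(entropy_bits_per_symbol(skewed), entropy_bits_per_symbol(uniform))
```

A data set like the skewed one above needs well under one bit per value on average, whereas the uniform one needs two full bits, which is why the narrow-band SKA data is a promising target.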


\subsection{Huffman Coder changes}
Huffman Coding has been changed in many ways to improve both its speed and its memory usage for specific data. Two JPEG versions\cite{5365034, 1192978} were made faster by using a hash map once the codes were constructed, rather than the tree. This also allowed the tree structure to be deleted to save memory.
\\
\\
Other changes involve removing the weighting totals for each node; these are usually used to stop the tree from becoming too large and thus generating codes longer than the original input. This was done for the JPEG compressor since the maximum number of unique values would be 256 for black and white or $3 \times 256$ for colour, numbers that do not generate codes longer than the original input.

\subsection{Parallelism}
In order to make an algorithm parallel, you have to determine whether there are disjoint sections of code, or whether many sequences can be run at a time where time steps are required: any section of code that can run concurrently without the processes depending on each other. Once you have decided your code can be made parallel, you must decide what hardware to use. There are two common approaches: Graphical Processing Units (GPUs) and ordinary P-Threads (processor or CPU threads). Both have their drawbacks, and the choice comes down to how you intend to parallelise your code.

\subsubsection{CPU}
CPU threads are the most commonly used approach. They tend to be a lot easier to code and allow full use of the computer's Random Access Memory (RAM). CPUs also tend to have larger cache sizes than GPUs, but the average CPU has nowhere near the number of processing cores an average GPU does, so the CPU cannot process as many threads at the same time.
\\
\\
The P-Thread system allows the CPU to run X processes at the same time, where X is determined by the CPU model's number of cores. Each core can run a process without having to wait for other cores, unless it requests a memory block or hardware device currently being used by another core; these are known as synchronisation issues.
\\
\\
Further parallelism on the CPU comes from the SSE and AVX streaming SIMD instruction sets, which allow certain methods to be processed faster. SSE applies the same operation to multiple values packed into a register, as long as the number of output values matches the number of input values; AVX extends this with wider registers and more control over which values sit in the registers at which point. This speeds up certain processes.

\subsubsection{GPU}
GPU programming has been broken into two major sectors: OpenCL for any graphics card manufacturer, and CUDA (Compute Unified Device Architecture) for NVIDIA graphics cards. The major drawbacks of graphics cards are the low memory, the smaller cache per processor (or SMX), the extra time it takes to transfer memory to and from the computer's RAM and, finally, the fixed splitting of memory over the SMXs\cite{6339599}.
\\
\\
The GPU structure is considerably more complex than that of the CPU. The GPU can run X programs, where X is the number of threads each SMX can run at a time multiplied by the number of SMXs the GPU contains. Each thread in the GPU can run into the same synchronisation issues as the CPU.

\section{Design}
The outline of Huffman coding is well known and used in many situations, but this section explains how to specialise and adapt Huffman coding to SKA data and the required speeds. Further detail on the methods and processes that could be used both to speed it up and to achieve better compression ratios will be discussed.

\subsection{Overview}%Talk about benchmarking
Standard Huffman coding averages around 2 MB/s for compression. This can be multiplied by the number of cores used in a fully effective parallel approach. This means that on an average CPU with 4 cores we could expect to attain 8 MB/s, and on the Xeon Phi CPUs approximately 120 MB/s.
\\
The issue lies with the GPU. Research shows that Huffman does not parallelise well on the GPU due to the small cache sizes per SMX and the requirement of a tree structure, and thus dynamic programming, in many stages of the Huffman process\cite{672291}. The CPU parallel approach will therefore most likely achieve better speeds than the GPU approach, given the need to transfer memory to the GPU and back while being unable to parallelise to any good extent on the GPU itself.
\\
This shows that the standard approach cannot be expected to attain speeds above 120 MB/s, nowhere near SKA's required 5 GB/s for real-time compression. But since Huffman can be specialised and tuned for speed, it may be possible to come close.
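The scaling argument above can be checked with back-of-envelope arithmetic. The figures used are the report's own assumptions (2 MB/s per core, a roughly 60-core Xeon Phi, a 5 GB/s target), not measurements:

```python
# Back-of-envelope check of the scaling estimates quoted in the text.
per_core_mb_s = 2                    # assumed single-core Huffman throughput
target_mb_s = 5_000                  # SKA's 5 GB/s target, in MB/s

cores_needed = target_mb_s / per_core_mb_s    # cores at perfect linear scaling
xeon_phi_mb_s = 60 * per_core_mb_s            # matches the ~120 MB/s estimate
print(cores_needed, xeon_phi_mb_s)
```

Even under the optimistic assumption of perfect scaling, 2,500 cores would be needed, which is why the per-core constant itself has to be attacked rather than the core count alone.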
\\
\\
Huffman coding attains good compression ratios for data with few unique values relative to the total data set. As noted earlier, SKA data tends to occupy a narrow band of the floating point range with a few spikes, and thus has few unique values. We can therefore expect good compression ratios, but will they be worth the speed cost?
\\
To answer this, the results will be compared with standard compression schemes, namely GZIP, BZIP 1 and 2, WinRAR and WinZIP, the last two being the freeware versions most commonly used by the public.
\\
Further comparisons will be made against other research approaches SKA has running, namely a parallel RLE and a predictive scheme.


\subsection{Algorithm}
In order to test the Huffman coding scheme to the full extent on an online task, both the adaptive and the dynamic approaches need to be tested.


\subsubsection{Huffman coding}
Huffman coding (also known as Dynamic Huffman Coding) uses a tree structure to quickly generate binary sequences for values, giving the most frequent values the shortest sequences. This method requires the ability to read all values in the file in advance in order to generate the required tree structure.
\\
\\
Huffman coding generates this tree by first binning the file (finding all unique values and counting how many of each there are) and then using a dynamic programming scheme to build the tree. Once the tree is constructed, all leaf nodes are unique values, and the leaf node closest to the root is the unique value with the most occurrences, while all inner (non-leaf) nodes are just joiners that determine the structure of the tree.
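One common textbook way to implement this tree build is with a min-heap, repeatedly merging the two lightest subtrees until one remains. The sketch below (illustrative values, not SKA data, and not necessarily the exact scheme used here) builds the codes during the merges rather than walking the finished tree:

```python
# Min-heap Huffman code construction: merge the two lightest subtrees,
# prefixing 0 onto every code in one and 1 onto every code in the other.
import heapq
from collections import Counter

def huffman_codes(data):
    """Return {value: bit-string}; the most frequent value gets the shortest code."""
    counts = Counter(data)
    if len(counts) == 1:                      # degenerate single-symbol case
        return {next(iter(counts)): "0"}
    # Heap entries: (weight, tiebreak, {value: code-so-far}).
    heap = [(w, i, {v: ""}) for i, (v, w) in enumerate(counts.items())]
    heapq.heapify(heap)
    i = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        merged = {v: "0" + code for v, code in c1.items()}
        merged.update({v: "1" + code for v, code in c2.items()})
        heapq.heappush(heap, (w1 + w2, i, merged))
        i += 1
    return heap[0][2]

codes = huffman_codes([1.5] * 4 + [2.5] * 2 + [9.9])
```

With these counts (4, 2, 1) the dominant value 1.5 receives a one-bit code while the rarer values receive two bits, exactly the behaviour the text describes.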
\\
\\
Once this tree is constructed, the binary sequence codes need to be generated. The most efficient way to do this is:
\begin{enumerate}
 \item Start at the leaf node holding the value you wish to convert.
 \item Move to the parent node.
 \item Determine whether the previous node was the parent's left or right child, adding 0 for a left child or 1 for a right child to the binary sequence.
 \item Continue from step 2, treating the parent node as the new child node, until the parent node is null (the current node is the root).
 \item Use the output binary sequence as the code, or reverse it for the root-down-to-leaf version.
\end{enumerate}
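The five steps above can be run on a tiny hand-built tree kept as parent pointers; the node names here are purely illustrative:

```python
# Tiny hand-built tree: the root joins leaf A (left) and an inner node
# (right); the inner node joins leaf B (left) and leaf C (right).
parent = {"A": "root", "inner": "root", "B": "inner", "C": "inner"}
side = {"A": "left", "inner": "right", "B": "left", "C": "right"}

def leaf_to_root_code(leaf):
    bits, node = [], leaf
    while node in parent:                     # stop once we reach the root
        bits.append("0" if side[node] == "left" else "1")
        node = parent[node]
    return "".join(reversed(bits))            # reverse for the root-to-leaf code

assert leaf_to_root_code("A") == "0"
assert leaf_to_root_code("B") == "10"
assert leaf_to_root_code("C") == "11"
```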
In most cases the code must be converted to the root-to-leaf binary sequence, as the decode procedure requires it this way. Some schemes need not do this, since they add ``break points'' to the operation; these tell the decoder when a new code starts and ends, and can even give the length of the next code.
\\
\\
For the encoding procedure, each float value is taken and replaced with its generated code. All binary sequences are placed one after another and then written to file as char values (8 binary bits at a time), with zero padding at the end to complete the last char value.
\\
This allows the decoder to start at the beginning of this full binary sequence and walk down the tree: for each 0 bit it moves to the left child and for each 1 bit to the right child, until it reaches a leaf node, at which point it outputs that float value and restarts at the root for the next bit.
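The pack-and-pad step and the corresponding decode can be sketched as follows. The codes are illustrative, and for brevity the decoder matches against the code table directly rather than walking a tree node by node:

```python
# Pack codes into bytes with zero padding; decode by prefix matching.
codes = {1.5: "0", 2.5: "10", 9.9: "11"}      # illustrative prefix-free codes

def pack(values):
    bits = "".join(codes[v] for v in values)
    bits += "0" * (-len(bits) % 8)            # zero-pad to a whole byte
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

def unpack(data, count):
    bits = "".join(f"{byte:08b}" for byte in data)
    decode = {code: v for v, code in codes.items()}
    out, current = [], ""
    for bit in bits:
        current += bit
        if current in decode:                 # reached a "leaf": emit and restart
            out.append(decode[current])
            current = ""
            if len(out) == count:             # ignore the trailing padding bits
                break
    return out

values = [1.5, 2.5, 1.5, 9.9, 1.5]
assert unpack(pack(values), len(values)) == values
```

Note the decoder needs to know when to stop (here a value count), since the zero padding is otherwise indistinguishable from real codes.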
\\
Making Huffman coding parallel entails a parallel binning scheme as well as a parallel replacement of the floating point values. A blocking scheme can also be used, where the data to be compressed is split into equal-sized chunks and Huffman is run on each chunk separately. The blocking method is expected to be faster than the normal parallel version, but to compress less.
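The parallel binning step can be sketched as a map-and-merge: each chunk is counted independently and the partial counts are summed, since counting is associative. (In pure Python the GIL limits any real speedup; this only illustrates the decomposition, and the thread count and chunk size are arbitrary choices.)

```python
# Parallel binning sketch: count chunks independently, then merge.
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def parallel_bin(data, chunks=4):
    size = max(1, -(-len(data) // chunks))    # ceiling division
    blocks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=chunks) as pool:
        partials = pool.map(Counter, blocks)  # one count per chunk
    total = Counter()
    for p in partials:                        # merge the partial histograms
        total += p
    return total

data = [1.5] * 10 + [2.5] * 5 + [9.9]
assert parallel_bin(data) == Counter(data)
```

The same merge structure carries over to the blocking scheme, except that there each chunk keeps its own counts (and its own tree) instead of merging them.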
\\
\\
The output file consists of a table listing all the unique values and their counts, needed to reconstruct the same tree, followed by the full sequence of binary values.


\subsubsection{Adaptive Huffman}
Adaptive Huffman Coding (AHC) was designed so that Huffman coding could work on online source data (streaming data) without requiring full knowledge of the data in advance.
\\
\\
The tree structure differs slightly from Dynamic Huffman Coding, as the leaf node furthest from the root must always be a NULL placeholder, so that a new value can always be added to the tree.

\begin{figure}[h!]
 \centering
 \includegraphics[width=0.4\textwidth]{update-process.png}
 \caption{Update procedure for Adaptive Huffman Coding}
 \label{Update Procedure}
\end{figure}


The basic principle of AHC is to start with a root node and a nullPTR leaf node, then begin reading values. When a new value arrives, a new node is added with that value and a weight of 1, and a new nullPTR node is placed at the furthest point from the root. When a previously seen value arrives, the weight of the leaf node containing that value is increased and the update procedure is run.
\\
As shown in Fig.~\ref{Update Procedure}, the update procedure entails checking whether the node needs swapping with a higher node (one closer to the root) because its weight now rivals that node's, then increasing the parent node's weight and running the same update procedure on the parent, until the root node is reached.\cite{HuffmanCodingSite}

Decoding AHC requires the sequence of unique values in the order they were found during compression, together with the full binary sequence. It decodes in the same way as Dynamic Huffman Coding, with a slight difference: when the program arrives at the nullPTR node, it inserts the next unique value from the sequence just as it would when compressing, and runs the same update procedure. Likewise, when it arrives at a leaf node and outputs its value, it must run the update procedure on that node as it did when compressing.
\\
Because AHC is so sequential, with a tree update after every value, it is nearly impossible to make AHC parallel. The only way would be to block the data in some fashion, which defeats the purpose of AHC: streaming operation without the data in advance, and a single tree structure to keep the compression ratio down.
\\
\\
The output file for AHC comprises the unique value sequence in the correct order, followed by the full binary sequence.

\subsection{Benchmarking}
The final stage of testing SKA's Huffman coder will use the sample SKA data provided by Jason Mannley (senior analyst at SKA South Africa).
\\
The main tests will compare the parallel versions against each other and, finally, against standard compression algorithms. This will require multiple runs of each, on multiple systems: an average time over a minimum of 6 runs for every test configuration. The configurations will be 1 thread (linear running), 4 threads (a standard public PC), 8 threads, 16 threads, 32 threads and finally 64 threads.
\\
\\
Since no GPU version will be constructed due to the known issues, it is not necessary to compare the two architectures with each other. The final testing stage will be a comparison to help SKA determine the best possible scheme to use: the designed Huffman coder against the other research projects SKA has running. The results will be compared and recommendations devised, but only the comparison will be displayed so that SKA can determine the best algorithm for its plans.

\bibliographystyle{plain}
\bibliography{bibtex}

\end{document}