% Chapter Template

\chapter{Results} % Main chapter title

\label{Chapter6} % Change X to a consecutive number; for referencing this chapter elsewhere, use \ref{ChapterX}

\lhead{Chapter 6. \emph{Results}} % Change X to a consecutive number; this is for the header on each page - perhaps a shortened title

%----------------------------------------------------------------------------------------
%	SECTION 1
%----------------------------------------------------------------------------------------
This chapter presents a series of tests run to determine whether our Huffman Coding algorithm is feasible for SKA data compression. Since external factors could skew the results, two requirements were set for all tests:
\begin{itemize}
	\item Each test must be run 10 times, with outlier times removed and the remaining times averaged, to exclude interference from other processes using the RAM or CPU cores allocated to the test process.
	\item Each test must be run on the same data and the same hardware as every test it is compared against.
\end{itemize}
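The first requirement amounts to a trimmed mean of the run times. A minimal Python sketch (illustrative only; the assumption here is that the outlier criterion simply drops the fastest and slowest run):

```python
def trimmed_mean(times):
    """Average a list of run times after dropping the fastest and
    slowest run, reducing the influence of transient system load."""
    if len(times) < 3:
        raise ValueError("need at least 3 runs to trim outliers")
    trimmed = sorted(times)[1:-1]  # drop the minimum and maximum
    return sum(trimmed) / len(trimmed)

# Example: 10 timed runs of one test, in seconds; one run (15.3s)
# was disturbed by background load and is discarded by the trim.
runs = [12.1, 11.9, 12.0, 12.2, 11.8, 12.0, 15.3, 12.1, 11.9, 12.0]
print(round(trimmed_mean(runs), 3))
```

Dropping only the extremes keeps the estimate stable while still averaging most of the runs.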

\section{Tests}
\subsection{Feasibility of Huffman Coding in parallel}
The first set of tests (Figure~\ref{fig:CPUThreads}) was run to determine whether parallelising Huffman Coding is possible and worth the hardware requirements.
\\
This test was run on a 1GB binary file of floating-point data containing 1200 unique float values. Since the aim was to measure processing speed, no output file was created and the time taken to read the input file was discarded.
\\
The UCT HEX cluster was used for this test. Its specifications are as follows:
\begin{itemize}
	\item CPU - Intel Xeon E5-2650 @ 2GHz (16 cores).
	\item RAM - 64GB of 1600MHz RAM.
	\item GPU - NVIDIA Tesla M2090 graphics card with 6GB GDDR5 memory.
\end{itemize}
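For context, the core of any Huffman coder — building a prefix-code table from symbol frequencies — can be sketched sequentially as follows (purely illustrative Python; the implementation under test operates on 32-bit float symbols and parallelises the encoding stage, neither of which is shown here):

```python
import heapq
from collections import Counter

def huffman_code_table(data):
    """Build a prefix-code table {symbol: bitstring} using the classic
    greedy merge of the two least-frequent subtrees."""
    freq = Counter(data)
    if len(freq) == 1:  # degenerate single-symbol input
        return {next(iter(freq)): "0"}
    # Heap entries are (frequency, tie-breaker, tree); a tree is either
    # a leaf symbol or a (left, right) pair of subtrees.
    heap = [(f, i, sym) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, tie, (left, right)))
        tie += 1
    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):  # internal node
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:                        # leaf symbol
            codes[tree] = prefix
    walk(heap[0][2], "")
    return codes

table = huffman_code_table("aaaabbc")
print({s: table[s] for s in "abc"})  # 'a' is most frequent, shortest code
```

The tree construction is inherently sequential; it is the frequency counting and the per-symbol encoding that offer the parallelism examined in this chapter.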

\begin{figure}[h!]
\centering
\begin{tikzpicture}
	\begin{axis}[
	   	xmin = 0,
	    xmax = 17,
	    minor x tick num=1,
	    %xticklabels from table={data/SKAspeed.dat}{threads},
	    ylabel={\bfseries Speed (MB/s)},
	    xlabel={\bfseries Number of Threads},
	    legend pos = outer north east,
	    nodes near coords,
   		height=9cm,
   		width=9cm
	]
	\addplot [every node near coord/.append style={font=\tiny,shift={(10pt,-12pt)}}, mark=square*, color=red] table [x=threads, y=rate] {data/cputhread.dat};
	\addlegendentry{Compression}
	\addplot [every node near coord/.append style={font=\tiny,shift={(0pt,0pt)}}, mark=triangle*, color=blue] table [x=threads, y=drate] {data/cputhread.dat};
	\addlegendentry{Decompression}
	\addplot [every node near coord/.append style={font=\tiny,shift={(-5pt,-1pt)}}, mark=*, color=purple] table [x=threads, y=gpurate] {data/cputhread.dat};
	\addlegendentry{GPU Compression}
\end{axis}
\end{tikzpicture}
	\rule{35em}{0.5pt}
	\caption[Huffman Coding graph of Speed to number of Threads]{Graph showing the speed-up produced by additional threads for our Huffman Coding algorithm. Parallelising the compression stage is a feasible approach to increasing throughput, and using the GPU to accelerate small sections of the algorithm provides a further increase. Decompression, however, is only slightly accelerated by parallelism.}
	\label{fig:CPUThreads}
\end{figure}


\subsection{Comparison to standard compression programs}
The next test (Figures~\ref{fig:CompressionTimes} and \ref{fig:CompressionRatios}) was performed to determine whether the produced Huffman Coding algorithm performs better than compression programs already available. Since the HEX cluster uses a networked hard drive and is not set up for high disk IO, these tests were run on a 1GB file of floating-point data on a different system, with the following specifications:
\begin{itemize}
\item CPU - Intel Haswell Core i7-4800MQ @ 2.7GHz (4 cores, 8 threads)
\item RAM - 8GB of 1600MHz RAM
\item GPU - NVIDIA GTX 770M with 3GB GDDR5 memory
\item Hard Drive - Western Digital 500GB hard drive (10000RPM)
\end{itemize}

\begin{figure}[h!]
\centering
\begin{tikzpicture}
	\begin{axis}[
	    ybar,           % Vertical bars
	    ymin=0,         % Start y axis at 0
	    ymax=170,
	    xtick=data,     % Use as many tick labels as x coordinates
	    xticklabels from table={data/standard.dat}{Algorithm},
	    xticklabel  style={rotate=90},
%	    title={\Large\bfseries Table to compare Standard Algorithms to the GPU Huffman 	Coder},
	    ylabel={\bfseries Time (s)},
	    xlabel={\bfseries Algorithm},
	    xlabel style={yshift=-10ex},
	    legend pos = outer north east,
	    nodes near coords,
    		every node near coord/.append style={anchor=mid west, rotate=70},
   		nodes near coords align={vertical},
   		height=9cm,
   		width=9cm
	]
	\addplot table [x expr=\coordindex, y=Compress] {data/standard.dat};
	\addlegendentry{Compression}
	\addplot table [x expr=\coordindex, y=Decompress] {data/standard.dat};
	\addlegendentry{Decompression}
\end{axis}
\end{tikzpicture}
	\rule{35em}{0.5pt}
	\caption[Graph of Compression times for GPU-Huffman vs Standard algorithms]{Graph showing compression and decompression times of the GPU-CPU and CPU-only Huffman Coding algorithms versus standard tools. Both versions compress faster than any of the standard compression tools tested. In decompression they beat BZIP2 but not GZIP or ZIP. Our Huffman Coding algorithm is thus a much better fit for the SKA data.}
	\label{fig:CompressionTimes}
\end{figure}

\begin{figure}[h!]
\centering
\begin{tikzpicture}
	\begin{axis}[
	    ybar,           % Vertical bars
	    ymin=0,         % Start y axis at 0
	    ymax=100,
	    xtick=data,     % Use as many tick labels as x coordinates
	    xticklabels from table={data/standardratio.dat}{Algorithm},
	    xticklabel style={rotate=90},
	    ylabel={\bfseries Output File Percentage (\%)},
	    xlabel={\bfseries Algorithm},
	    xlabel style={yshift=-10ex},
	    nodes near coords,
    		%every node near coord/.append style={font=\tiny},
   		nodes near coords align={vertical}
	]
	\addplot table [x expr=\coordindex, y=percentage] {data/standardratio.dat};
\end{axis}
\end{tikzpicture}
	\rule{35em}{0.5pt}
	\caption[Graph of Compression ratios for GPU-Huffman vs standard algorithms]{Graph showing compression ratios for GPU-Huffman vs standard algorithms. Both our Huffman Coding algorithms beat all the standard compression tools in compression ratio for the SKA data. The combined GPU-CPU method has a slightly better compression ratio than the CPU-only version, since it only has to store one table and pad a single binary sequence. Huffman Coding is thus a much better algorithm for the SKA data.}
	\label{fig:CompressionRatios}
\end{figure}

\subsection{Comparison of all SKA compression projects}
The final test (Figures~\ref{fig:SKAComSpeeds}, \ref{fig:SKADecomSpeeds} and \ref{fig:SKACompressionRatios}) was run to compare all the currently running SKA compression projects, namely a Predictive scheme, an RLE scheme and our Huffman Coding scheme. All algorithms in this test were run on the UCT HEX GPU cluster on a 756MB file of floats supplied by the SKA.

\begin{figure}[h!]
\centering
\begin{tikzpicture}
	\begin{axis}[
	    %xtick=data,     % Use as many tick labels as y coordinates
	    xmin = 0,
	    xmax = 17,
	    ymax = 6,
	    minor x tick num=1,
	    %xticklabels from table={data/SKAspeed.dat}{threads},
	    ylabel={\bfseries Compression Speed (GB/s)},
	    xlabel={\bfseries Number of Threads},
	    legend pos = outer north east,
	    nodes near coords,
   		height=9cm,
   		width=9cm
	]
	\addplot [every node near coord/.append style={font=\tiny,shift={(10pt,-12pt)}}, mark=square*, color=red] table [x=threads, y=huff] {data/SKAspeed.dat};
	\addlegendentry{Huffman Coding}
	\addplot [every node near coord/.append style={font=\tiny,shift={(12pt,-6pt)}}, mark=triangle*, color=blue] table [x=threads, y=pred] {data/SKAspeed.dat};
	\addlegendentry{Predictive}
	\addplot [every node near coord/.append style={font=\tiny,shift={(-3pt,0pt)}}, mark=*, color=purple] table [x=threads, y=rle] {data/SKAspeed.dat};
	\addlegendentry{Run Length Encoding}
\end{axis}
\end{tikzpicture}
	\rule{35em}{0.5pt}
	\caption[Graph comparing all SKA research projects with respect to compression speed]{Graph showing compression speeds for all SKA research projects. RLE is the fastest algorithm, and the only one achieving the required 5GB/s throughput; the Huffman Coding algorithm is the slowest of the three. For SKA's required throughput, RLE is therefore the best candidate.}
	\label{fig:SKAComSpeeds}
\end{figure}

\begin{figure}[h!]
\centering
\begin{tikzpicture}
	\begin{axis}[
		xmin = 0,
	    xmax = 17,
	    ymax = 6,
	    minor x tick num=1,
	    %xticklabel  style={rotate=90},
	    ylabel={\bfseries Decompression Speed (GB/s)},
	    xlabel={\bfseries Number of Threads},
%	    xlabel style={yshift=-10ex},
	    legend pos = outer north east,
	    nodes near coords,
   		height=9cm,
   		width=9cm
	]
	\addplot [every node near coord/.append style={font=\tiny,shift={(10pt,-12pt)}}, mark=square*, color=red] table [x=threads, y=dhuff] {data/SKAspeed.dat};
	\addlegendentry{Huffman Coding}
	\addplot [every node near coord/.append style={font=\tiny,shift={(12pt,-6pt)}}, mark=triangle*, color=blue] table [x=threads, y=dpred] {data/SKAspeed.dat};
	\addlegendentry{Predictive}
	\addplot [every node near coord/.append style={font=\tiny,shift={(-3pt,0pt)}}, mark=*, color=purple] table [x=threads, y=drle] {data/SKAspeed.dat};
	\addlegendentry{Run Length Encoding}
\end{axis}
\end{tikzpicture}
	\rule{35em}{0.5pt}
	\caption[Graph comparing all SKA research projects with respect to decompression speed]{Graph showing decompression speeds for all SKA research projects. The RLE and Predictive schemes are greatly accelerated by parallelism, while Huffman Coding decompression remains close to linear. The Predictive and RLE algorithms can therefore decompress the data faster than it was compressed, while the Huffman Coding algorithm tends to decompress slower than it compresses.}
	\label{fig:SKADecomSpeeds}
\end{figure}

\begin{figure}[h!]
\centering
\begin{tikzpicture}
	\begin{axis}[
	    ybar,           % Vertical bars
	    ymin=0,         % Start y axis at 0
	    ymax=100,
	    xtick=data,     % Use as many tick labels as x coordinates
	    xticklabels from table={data/SKAratio.dat}{algorithm},
	    xticklabel style={rotate=90},
	    ylabel={\bfseries Output File Percentage (\%)},
	    xlabel={\bfseries Algorithm},
	    xlabel style={yshift=-10ex},
	    nodes near coords,
    		%every node near coord/.append style={font=\tiny},
   		nodes near coords align={vertical}
	]
	\addplot table [x expr=\coordindex, y=perc] {data/SKAratio.dat};
\end{axis}
\end{tikzpicture}
	\rule{35em}{0.5pt}
	\caption[Graph of Compression ratios for all SKA research projects]{Graph showing compression ratios for all SKA research projects. Our Huffman Coding algorithm compresses the file far more than both the Predictive and RLE algorithms, achieving over double their compression ratio. The Predictive and RLE methods' small compression ratios make them poor choices for long-term storage. Our Huffman Coding algorithm is thus the best candidate for the SKA to use as a long-term storage compression tool.}
	\label{fig:SKACompressionRatios}
\end{figure}

\section{Discussion}
\subsection{Feasibility of Huffman Coding in parallel and for SKA}
The first test (Figure~\ref{fig:CPUThreads}) shows a near-linear speed-up for both the GPU and CPU compression. This means we can expect additional threads to keep increasing compression speed, up to a memory-throughput limit. Decompression, on the other hand, gains a large speed increase from the first few threads, but since the only parallel section of the algorithm is the conversion of bytes read from the file into the full binary sequence, the speed-up diminishes as the number of threads increases.
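The parallel portion of decompression — expanding bytes read from the compressed file into the full binary sequence — is a per-byte mapping with no data dependencies, which is why it parallelises trivially. A minimal sketch (illustrative Python; the real implementation works on raw buffers rather than strings):

```python
def bytes_to_bits(chunk):
    """Expand each byte into its 8-bit string. Every byte maps
    independently, so separate chunks can go to separate threads."""
    return "".join(format(b, "08b") for b in chunk)

print(bytes_to_bits(b"\xa5\x0f"))  # -> 1010010100001111
```

The remaining decompression work, walking the Huffman tree bit by bit, is sequential, which is what limits the overall speed-up seen in the graph.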
\\
\\
Another noticeable feature is that the CPU compression speed scales better with thread count than the graphics card's. This means that a high-end Intel Xeon Phi CPU, with more than 100 threads, might process the CPU-only version of the generated Huffman Coding scheme faster than the GPU-CPU algorithm. Only testing Xeon Phis and top gaming graphics cards side by side could confirm this; it is left for future work. One should note, however, that with the CPU-only algorithm the header table grows with the number of threads: its size is $NumThreads \times NumUniqueSymbols \times 2 \times 4 + 4$ bytes, so the compression ratio worsens as more threads are used.
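As a concrete illustration, the table-size formula can be evaluated for the 1200 unique symbols of the first test file (a direct transcription of the formula; the 2 × 4 bytes are assumed to be a pair of 4-byte fields per symbol per thread, plus a 4-byte header):

```python
def cpu_table_size_bytes(num_threads, num_unique_symbols):
    """Header overhead of the CPU-only scheme:
    NumThreads * NumUniqueSymbols * 2 * 4 + 4 bytes."""
    return num_threads * num_unique_symbols * 2 * 4 + 4

# Overhead for the 1200-unique-float test file at several thread counts
for threads in (1, 16, 64):
    print(threads, cpu_table_size_bytes(threads, 1200))
```

Running this shows the overhead growing linearly with thread count.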
\\
\\
The top speed achieved is 79.3MB/s with 16 threads on a Tesla M2090 graphics card. If we assume that increasing the number of threads continues to produce a near-linear speed-up, then 64 threads would achieve approximately 120MB/s and 128 threads around 140MB/s. With current hardware it is therefore impossible to reach the transfer rate of the SKA data, which is 5GB/s, so Huffman Coding cannot be done in real time for SKA data.
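A quick back-of-the-envelope check of the gap (taking 5GB/s as 5000MB/s, an assumption about the unit convention):

```python
measured_mbps = 79.3    # best observed rate: 16 threads with the Tesla M2090
required_mbps = 5000.0  # SKA ingest rate of 5GB/s, taken as 5000MB/s
print(round(required_mbps / measured_mbps))  # speed-up factor still needed
```

A further ~63x speed-up would be required, far beyond what the extrapolation above suggests is reachable.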

\subsection{Comparison to Standard Compression programs}
The next test was done to determine whether this Huffman Coding algorithm is better than any currently available to the SKA. Figures~\ref{fig:CompressionTimes} and \ref{fig:CompressionRatios} show the findings. The algorithm was tested against the most commonly used publicly available tools: BZIP2, GZIP and ZIP. As the graphs show, the GPU and CPU versions both beat all the standard algorithms in compression speed and compression ratio; both are around 3 times faster than BZIP2 and slightly better in compression ratio. Their decompression speeds, however, only beat BZIP2: GZIP and ZIP decompress the data very quickly but have poor compression ratios. An interesting finding was that GZIP and ZIP compress .H5 (HDF5) files faster than BZIP2, whereas BZIP2 compresses binary-stored floats and plain-text floats faster than GZIP and ZIP; for this reason the test was run on a file in which the floats were stored as binary data.
\\
This shows that the created Huffman Coding scheme is much better than any standard compression program currently available to the SKA, since it provides both better speed and a better compression ratio.
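A small-scale version of this comparison can be reproduced with Python's built-in gzip and bz2 codecs (illustrative only: the chapter's tests timed the command-line tools on a 1GB binary file, and the synthetic data below merely mimics the low unique-symbol count of the test file):

```python
import bz2
import gzip
import random
import struct

random.seed(0)
# Synthetic stand-in: binary floats drawn from a small symbol set,
# mimicking the 1200 unique values of the SKA test file.
symbols = [random.random() for _ in range(1200)]
data = b"".join(struct.pack("<f", random.choice(symbols))
                for _ in range(100_000))

for name, codec in (("gzip", gzip.compress), ("bzip2", bz2.compress)):
    out = codec(data)
    # Output-file percentage, the metric used in the graphs above
    print(name, round(100 * len(out) / len(data), 1))
```

Absolute numbers on synthetic data will of course differ from the 1GB benchmark; the point is only how the output-file percentage metric is computed.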

\subsection{Comparison of SKA compression projects}
The final test was run to determine whether the other two SKA projects' algorithms, a Predictive scheme and RLE, could beat the generated Huffman Coding algorithm. Figures~\ref{fig:SKAComSpeeds} and \ref{fig:SKADecomSpeeds} show that both the RLE and Predictive schemes beat the Huffman Coding scheme in throughput, with RLE the fastest and Huffman Coding the slowest for both compression and decompression. Of the three, RLE is the only algorithm to achieve a throughput equal to the SKA transfer rate. (Note: the Predictive scheme does achieve the required throughput on Intel Haswell and AMD processors, as these provide the \textit{LZCNT} leading-zero-count machine instruction, which speeds up the predictive method considerably.)
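For reference, the core idea of the RLE scheme can be sketched in a few lines (a generic run-length encoder; the actual SKA RLE project's on-disk format is not reproduced here):

```python
def rle_encode(values):
    """Collapse runs of equal values into (value, run_length) pairs."""
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1  # extend the current run
        else:
            runs.append([v, 1])  # start a new run
    return [(v, n) for v, n in runs]

print(rle_encode([0.0, 0.0, 0.0, 1.5, 1.5, 2.0]))
# -> [(0.0, 3), (1.5, 2), (2.0, 1)]
```

Each input value is touched once with no bit-level work, which is why RLE reaches a much higher throughput than Huffman Coding, at the cost of a far weaker compression ratio on data without long runs.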
\\
\\
However, Figure~\ref{fig:SKACompressionRatios} shows that the Huffman Coding algorithm supplies over double the compression ratio of the Predictive and RLE schemes.
Determining which of the three SKA projects is the best algorithm overall is thus much harder. The RLE algorithm is the fastest but supplies the worst compression ratio; Huffman Coding is the slowest yet supplies by far the best compression ratio. Taking the SKA pipeline into account (Figure~\ref{fig:MeerKAT Pipeline}), we notice that either the RLE or the Predictive scheme could be placed between the complex pairs and the compute nodes. This is where the data is sent over a slow network from the station housing the telescope antennae to the main offices. Since the RLE and Predictive schemes are very fast they would not create a bottleneck and would speed up transfer times, but an output file of 93\% or more of the original size is very poor for long-term storage. The Huffman Coding algorithm, which achieves a very good compression ratio, can instead be used for long-term storage.