% Chapter Template

\chapter{Conclusion} % Main chapter title

\label{Chapter7} % For referencing this chapter elsewhere, use \ref{Chapter7}

\lhead{Chapter 7. \emph{Conclusion}} % Header on each page

%----------------------------------------------------------------------------------------
%	SECTION 1
%----------------------------------------------------------------------------------------

This report set out to determine whether a Huffman Coding scheme could achieve high throughput on data produced by the SKA, which is currently under construction in South Africa. Once completed, the SKA infrastructure will produce up to 1 Petabyte of frequency data every 20 seconds. A fast compression scheme with a good compression ratio is thus required, both to decrease the required network bandwidth and to reduce storage space requirements.
\\
The research set out to answer three questions:
\begin{enumerate}
	\item Can a real-time compression algorithm for SKA data be built using Huffman Coding?
	\item Is Huffman Coding a better choice than the compression schemes already available?
	\item Is the compression ratio worth the time overhead it introduces?
\end{enumerate}

Several parallel algorithms were produced: a blocked CPU version, a parallel CPU version, and a combined GPU-CPU algorithm. The combined GPU-CPU algorithm proved the best of the three for both compression ratio and throughput, achieving 73MB/s and a 41\% compression ratio on a 16-core system; the blocked scheme ran at roughly \sfrac{3}{4} of that speed with a fractionally worse compression ratio. Several methods were used to accelerate the algorithm: parallel swapping of data, Thrust-implemented binning with pinned memory, a prefix-sum implementation for flattening arrays in parallel, and the replacement of tree structures with hash-map look-ups. Hardware differences were also significant: since the algorithm is memory bound, the speed of the memory and motherboard played a major role. Furthermore, because most of the GPU-based processes involve heavy synchronisation, gaming graphics cards (which have higher clock rates but fewer cores) ran much faster than specialist Tesla GPUs. Although every process that could be parallelised was parallelised, our Huffman Coding algorithm could not achieve the required data throughput. Huffman Coding thus cannot be run in real time on SKA data.
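One of the accelerations mentioned above, replacing pointer-chasing tree traversal with a flat hash-map codebook, can be sketched as follows. This is a minimal single-threaded illustration, not the project's implementation; all names are our own, and the tree is built only once to derive the codebook, after which encoding needs no tree at all:

```cpp
#include <cassert>
#include <cstdint>
#include <queue>
#include <string>
#include <unordered_map>
#include <vector>

// Minimal Huffman node, used only while deriving the codebook.
struct Node {
    uint64_t freq;
    int symbol;              // -1 marks an internal node
    Node *left = nullptr, *right = nullptr;
};

struct Cmp {
    bool operator()(const Node* a, const Node* b) const { return a->freq > b->freq; }
};

// Walk the tree once, recording each leaf's bit pattern as a string of '0'/'1'.
static void fill_codes(const Node* n, std::string prefix,
                       std::unordered_map<int, std::string>& codes) {
    if (!n->left && !n->right) {
        codes[n->symbol] = prefix.empty() ? "0" : prefix;  // single-symbol corner case
        return;
    }
    if (n->left)  fill_codes(n->left,  prefix + '0', codes);
    if (n->right) fill_codes(n->right, prefix + '1', codes);
}

// Build a symbol -> bitstring codebook from symbol frequencies.
// After this returns, the tree structure is no longer needed.
std::unordered_map<int, std::string>
build_codebook(const std::unordered_map<int, uint64_t>& freqs) {
    std::priority_queue<Node*, std::vector<Node*>, Cmp> pq;
    for (const auto& [sym, f] : freqs) pq.push(new Node{f, sym});
    while (pq.size() > 1) {
        Node* a = pq.top(); pq.pop();
        Node* b = pq.top(); pq.pop();
        pq.push(new Node{a->freq + b->freq, -1, a, b});
    }
    std::unordered_map<int, std::string> codes;
    fill_codes(pq.top(), "", codes);
    return codes;
}

// Encoding then becomes one O(1) hash-map lookup per symbol.
std::string encode(const std::vector<int>& data,
                   const std::unordered_map<int, std::string>& codes) {
    std::string out;
    for (int s : data) out += codes.at(s);
    return out;
}
```

The sketch leaks the temporary tree nodes for brevity; the point is that per-symbol encoding touches only the hash map, which parallelises far better than walking a pointer-based tree.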
\\
\\
Once the investigations into improving throughput had concluded, compression ratio was considered. Attempts to use break points in the binary sequences ended up both reducing the compression ratio and slowing throughput, because the data had to be padded for every binary sequence. We also found that the blocked CPU-only scheme's compression ratio worsens as more threads are used, since each thread creates its own table and must pad more than one binary sequence. The initial Thrust implementation could not sort a full 5GB chunk in place on any graphics card; the experimental Thrust pinned-memory allocator, which allows any dataset smaller than the graphics card's total RAM to be sorted in place, removed this limit. Only a single table and a single binary sequence are therefore needed per 5GB chunk. The algorithm produces a 41.18\% average compression ratio on SKA radio data, which is better than any other algorithm available, and it also beats standard compression tools such as BZIP2 in data throughput. Our Huffman Coding algorithm is therefore a better choice than any compression tool currently available.
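The padding cost behind these observations can be bounded with simple arithmetic: each independently emitted binary sequence must be rounded up to a byte boundary, wasting up to 7 bits, so splitting one chunk's output across many per-thread sequences multiplies that waste. A sketch, with function names of our own choosing:

```cpp
#include <cassert>
#include <cstddef>

// Round a bit count up to the next byte boundary, as an encoder must do
// before a binary sequence can be written out as whole bytes.
constexpr std::size_t pad_to_byte(std::size_t bits) {
    return (bits + 7) / 8 * 8;
}

// Worst-case padded size when one logical stream of `total_bits` is emitted
// as `n_streams` independent per-thread sequences: every extra sequence can
// waste up to 7 further bits of padding (table overhead not counted here).
constexpr std::size_t worst_case_padded(std::size_t total_bits, std::size_t n_streams) {
    return pad_to_byte(total_bits) + (n_streams - 1) * 7;
}
```

On a 5GB chunk this per-sequence waste is negligible in absolute terms, but combined with one code table per thread it explains why the blocked scheme's compression ratio degrades as the thread count grows, and why a single table and single sequence per chunk is preferable.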
\\
\\
The SKA is experimenting with two other compression methods: a Predictive scheme and an RLE scheme. Both have higher throughput than our Huffman Coding algorithm, and both achieved the required 5GB/s rate, but each compresses far less effectively. The Predictive method's compression ratio is about 3\% better than the RLE method's, yet at 93\% it still allows little storage-space reduction. For long-term storage our Huffman Coding algorithm is therefore the best algorithm available, despite its lower throughput; the RLE or Predictive schemes could instead be used to increase network throughput without causing bottlenecks in the pipeline.
\\
\\
Future work would entail adding a Lempel-Ziv scheme to the Huffman Coder in order to compress recurring patterns rather than single symbols. An FPGA-based Huffman Coding implementation could also be tested for higher throughput.