% Chapter Template

\chapter{Introduction} % Main chapter title

\label{Chapter1} % For referencing this chapter elsewhere, use \ref{Chapter1}

\lhead{Chapter 1. \emph{Introduction}} % This is for the header on each page - perhaps a shortened title

%----------------------------------------------------------------------------------------
%	SECTION 1
%----------------------------------------------------------------------------------------

\section{Radio Astronomy}
This report investigates the real-time compression of radio telescope data. Radio astronomy typically uses an array of large antennae to generate images of the sky, by collecting the radio frequency signals emitted by astronomical sources. Many objects in space emit characteristic radio frequency signatures that help identify what the object is. Once a large range of frequencies has been collected, the data can be processed to determine what is in that section of the sky.
\\
\\
Observations span many hours, since more data allows for a better reconstruction of the sky. 

\section{SKA and South Africa}
The Square Kilometer Array (SKA) project currently operates an array of antennae in South Africa called KAT-7. This array consists of 7 antennae and produces around 2TB of frequency data every second. Since each observation spans several hours, and data is received at 2TB per second, the final size of an observation may run into the exabyte range. The SKA project intends to upgrade KAT-7 into MeerKAT in 3 phases, ending with 90+ antennae, and the final SKA goal is a full square kilometer of antennae in South Africa. The data rate grows quadratically with the number of antennae, so the final square kilometer of antennae will produce 1PB every 20 seconds.
\\
\\
Since exabytes of hard disk space are not cheap, the SKA project requires an efficient compression ratio. Since the data arrives at such a high rate, the compressor must also be able to compress a continuous stream of data.
\\
\\
Currently SKA does not have any compression in place; this is mainly due to the project's very demanding requirements for speed and accuracy. Since the radio frequency data may contain both external interference spikes and spikes caused by objects SKA is actually interested in, the data cannot be compressed with a lossy compression algorithm.
\\
\\
In a lossless compression scheme the process is completely reversible: the original data can be recovered exactly. Lossy compression is useful in circumstances where subtle changes to the data are acceptable, such as removing high-frequency components from images in JPEG compression\cite{5365034}.
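The lossless round-trip property can be checked directly. The sketch below is purely illustrative: it uses Python's built-in \texttt{zlib} compressor (not our Huffman coder) to show that decompressing the compressed stream must reproduce the original bytes exactly.

```python
import zlib

# Any lossless compressor must satisfy decompress(compress(x)) == x
# for every input x. We demonstrate with zlib on some sample data.
original = bytes(range(256)) * 4096      # ~1MB of sample data
compressed = zlib.compress(original)
restored = zlib.decompress(compressed)

assert restored == original              # lossless: nothing was thrown away
print(len(compressed) / len(original))   # ratio: output size / input size
```

A lossy scheme, by contrast, would only guarantee that \texttt{restored} is an acceptable approximation of \texttt{original}, which is exactly what rules it out for SKA's spike-sensitive data.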

\section{Compression Scheme Chosen}
Entropy Encoding\cite{954501} was chosen for this compression task since it is a lossless form of compression. Huffman Coding\cite{5536972} was the algorithm selected among entropy encoders since it is the most suitable option for a parallel design. Many other entropy coders rely on dynamic programming, which is not easy to parallelise. Huffman Coding has a smaller dynamic-programming component and many procedures that can be parallelised.
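To make the scheme concrete, the following is a minimal, purely illustrative sketch of static Huffman code construction in Python (it is not our implementation, and the function name \texttt{huffman\_codes} is hypothetical): symbols are counted, the two lowest-frequency nodes are repeatedly merged, and codewords are read off the resulting binary tree.

```python
import heapq
from collections import Counter

def huffman_codes(data: bytes) -> dict:
    """Build a Huffman code table mapping each symbol in `data` to a bit string."""
    freq = Counter(data)
    # Heap entries are (frequency, tie-breaker, node); a leaf node is a
    # 1-tuple (symbol,), an internal node is a 2-tuple (left, right).
    heap = [(f, i, (sym,)) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    if not heap:
        return {}
    if len(heap) == 1:                      # degenerate case: one distinct symbol
        return {heap[0][2][0]: "0"}
    counter = len(heap)                     # unique tie-breaker for heap ordering
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)   # two lowest-frequency nodes...
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, counter, (left, right)))  # ...are merged
        counter += 1
    codes = {}
    def walk(node, prefix=""):
        if len(node) == 1:                  # leaf: record its codeword
            codes[node[0]] = prefix
        else:                               # internal: 0 = left, 1 = right
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
    walk(heap[0][2])
    return codes
```

For an input such as \texttt{b"abracadabra"}, the most frequent symbol receives the shortest codeword, and the resulting code is prefix-free by construction. The frequency count and the per-symbol encoding are independent across data blocks, which is the property that makes a parallel design attractive; only the tree-merging loop is inherently sequential.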
\\
\\
Our Huffman Coder achieves a very good compression ratio: the output file is only 41\% of the input size, and it compresses at an average rate of 48MB/s (including disk IO). This beats the standard compression programs available, such as BZIP2 and GZIP, in both speed and compression ratio.
\\
Compression schemes are usually compared by their compression ratio and speed. In this report we define the compression ratio as $\frac{\textrm{Output File Size}}{\textrm{Input File Size}}$, and the speed as the data rate, in megabytes per second, that the compression algorithm achieves.
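As a sketch of how these two metrics can be measured (using Python's \texttt{zlib} as a stand-in compressor; the helper name \texttt{benchmark} and the convention 1MB $= 2^{20}$ bytes are our own assumptions):

```python
import time
import zlib

def benchmark(data: bytes):
    """Return (compression ratio, speed in MB/s) for zlib on `data`."""
    start = time.perf_counter()
    out = zlib.compress(data)
    elapsed = time.perf_counter() - start
    ratio = len(out) / len(data)              # output size / input size
    speed = len(data) / (1024 ** 2) / elapsed  # MB of input consumed per second
    return ratio, speed
```

A lower ratio is better (0.41 means the output is 41\% of the input), and the speed measures how much input the compressor consumes per second, which is the figure that must keep up with the telescope's data rate.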

\section{Research Questions}
The choice of Huffman Coding raises several research questions:
\begin{enumerate}
	\item Is it possible to create a SKA Real Time compression algorithm using Huffman Coding?
	\item Is Huffman Coding a better choice than the compression schemes already available?
	\item Is the compression ratio worth any time overhead it produces?
\end{enumerate}

The first question concerns the speed of the algorithm relative to the arrival rate of the data blocks the SKA radio dishes produce. For real-time execution at SKA, the compression speed needs to be at least approximately 5GB/s.
\\
\\
The second question relates to algorithms SKA could already use, such as BZIP2, GZIP and many others. Our algorithm has to be faster than these to be a viable alternative for SKA.
\\
\\
The final question is very hard to answer. If the compression takes too long it will not be worthwhile, but if it produces a very good compression ratio without slowing the SKA pipeline too much, it may still be useful.

\section{Report Structure}
Chapter~\ref{Chapter2} introduces the SKA's infrastructure, how compression schemes work, previous compression work using the Huffman Coding scheme, and the fundamentals of parallel computing. Chapter~\ref{Chapter3} outlines the design of the Huffman Coding algorithm for SKA, in terms of both an Adaptive and a Dynamic approach. Chapter~\ref{Chapter4} describes the implementation of the SKA Adaptive Huffman Coding scheme, whereas Chapter~\ref{Chapter5} describes the Dynamic Huffman Coding scheme implementation. Chapter~\ref{Chapter6} presents our results, and Chapter~\ref{Chapter7} discusses the final decision on the algorithm's feasibility.