% Chapter Template

\chapter{Background} % Main chapter title

\label{Chapter2} % For referencing this chapter elsewhere, use \ref{Chapter2}

\lhead{Chapter 2. \emph{Background}} % This is for the header on each page

%----------------------------------------------------------------------------------------
%	SECTION 1
%----------------------------------------------------------------------------------------

\section{Square Kilometre Array in South Africa}
SKA South Africa will soon commence construction of the largest array of radio telescope dishes in the world. As mentioned previously, SKA currently operates a seven-dish array called KAT-7 as part of its MeerKAT system. This array will be incrementally upgraded until it becomes the full Square Kilometre Array.
\\
\\
The radio frequency data arrives from the KAT-7 array at 2 TB/s and consists of floating-point pairs, each representing one real and one imaginary component. The current system uses FPGAs\cite{6530082} to process the data at the base of each dish. The data is then sent to the main offices, where further calculations convert it into the received floating-point values. This transfer is done over a network running at 36.6 gigabits per second, which equates to roughly 4.6 GB/s.

\begin{figure}[htbp]
	\centering
		\includegraphics[width=0.7\textwidth]{Figures/Process.png}
		\rule{35em}{0.5pt}
	\caption[SKA MeerKAT Pipeline]{High-level overview of the MeerKAT pipeline, showing how the data is converted from radio frequency data into 32-bit floating point values before being stored.}
	\label{fig:MeerKAT Pipeline}
\end{figure}
The aforementioned floating-point data tends to span a rather narrow range, with very few values falling far outside it ("spikes"). This means there will be very few unique values relative to the total number of values in the output data. Such data plays to an entropy encoder's strengths, since entropy encoders exploit the frequencies of unique symbols to compress data.
\\
\\
Figure~\ref{fig:MeerKAT Pipeline} shows the MeerKAT pipeline from the dishes to the storage location. It shows the conversion steps from the 10-bit frequency input to the final 32-bit floating-point output. The proposed compression step will take place between the computation nodes at the main office and the storage facility where the data is to be stored.

\section{Compression Schemes}
Compression algorithms can be broken down into four major groups\cite{salomon2004data}. We discuss each of these groups below.

\subsection{Basic Methods}
Basic compression schemes are usually fast but not very effective, since they tend to be building blocks that more complex schemes combine for better compression ratios. The most widely used basic compression scheme is Run-Length Encoding (RLE), which compresses data by finding repetitions of a symbol and representing each repetition by the symbol and its repeat count. Doing so reduces a run of two or more values to a single integer and a symbol. If a value does not repeat, only that symbol is recorded, so as not to increase the size of the data.
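A minimal RLE sketch makes this concrete (Python; the output format of (symbol, count) pairs for runs and bare symbols otherwise is one illustrative choice, not a standard):

```python
def rle_encode(data):
    """Run-Length Encoding: runs longer than 1 become (symbol, count) pairs;
    lone symbols are recorded as-is so they do not grow the output."""
    out = []
    i = 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i]:
            j += 1                       # extend the current run
        run = j - i
        out.append((data[i], run) if run > 1 else data[i])
        i = j
    return out

print(rle_encode("aaabccd"))  # [('a', 3), 'b', ('c', 2), 'd']
```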
\\
\\
Basic schemes work for both streaming and non-streaming data and could thus be suitable for SKA radio data. However, since basic schemes seldom achieve good compression ratios, one was not chosen for this report.

\subsection{Statistical Methods}
Statistical methods use mathematical principles to change, or map, the data into a smaller representation that can be reversed. The two most widely used schemes under statistical methods are Entropy Encoders and Predictive methods\cite{1514415}.
\\
\\
Predictive schemes use the probability of the next symbol taking a certain value in order to compress data. A predicted value is generated and then compared to the actual next value. Most commonly, the difference between the predicted and actual values is stored in a more compact binary representation. This difference will in most cases be smaller than the actual value, and will therefore have more leading zeroes in its binary representation; these zeroes can be truncated in the recorded value.
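A minimal sketch of this idea, using the simplest predictor "the next value equals the previous one" (plain delta coding, in Python; the leading-zero truncation step is left out for brevity):

```python
def delta_encode(values):
    # Predictor: each value is predicted to equal the previous one;
    # only the residual (prediction error) is stored
    out, prev = [], 0
    for v in values:
        out.append(v - prev)
        prev = v
    return out

def delta_decode(residuals):
    # Reverse the prediction by accumulating the residuals
    out, prev = [], 0
    for r in residuals:
        prev += r
        out.append(prev)
    return out

samples = [1000, 1002, 1001, 1005]
print(delta_encode(samples))  # [1000, 2, -1, 4]
```

The residuals 2, -1 and 4 need far fewer significant bits than the original four-digit values, which is where the compaction comes from.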
\\
\\
Entropy Encoders will be discussed in depth in the next section since the chosen algorithm for this paper falls into this category.

\subsection{Transformations}
Transformation methods typically use Wavelets or Fourier Transforms to change data into a format that either allows for better compression or makes it possible to remove data that will not noticeably affect the signal. Because of this removal of data, transformation methods are used primarily for visual data, where a small artefact will not cause a significant problem\cite{6196449}.

\subsection{Dictionary Methods}
Dictionary methods use sorting and pattern-finding algorithms to find unique sequences of data or repeated patterns. These methods then use one of the other schemes to compress these patterns of symbols, rather than compressing each symbol on its own. This usually gives a better compression ratio than using each scheme on its own. Because the patterns must occur in the data more than once, the data has to be known in advance, and these methods thus tend not to work well on streaming data.
\\
The most widely used pattern finding methods for Dictionary methods are from the Lempel Ziv (LZ) compression scheme\cite{613235}.
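As an illustrative sketch of the dictionary idea, here is a minimal LZW encoder (a member of the LZ family; the Python code below is my own simplified version, not a specific library's implementation):

```python
def lzw_encode(data):
    """LZW: the dictionary starts with all single bytes and grows with each
    new pattern seen, so repeated patterns compress down to single codes."""
    dictionary = {chr(i): i for i in range(256)}
    next_code = 256
    w, out = "", []
    for c in data:
        wc = w + c
        if wc in dictionary:
            w = wc                        # keep extending the current match
        else:
            out.append(dictionary[w])     # emit the longest known pattern
            dictionary[wc] = next_code    # learn the new, longer pattern
            next_code += 1
            w = c
    if w:
        out.append(dictionary[w])
    return out

print(lzw_encode("ababab"))  # [97, 98, 256, 256]
```

Note how the repeated pattern "ab" is emitted as the single learned code 256 after its first occurrence.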

\section{Entropy Encoders}
Entropy Encoders use the probability of each symbol occurring in the data to find more compact representations for each symbol, or for the entire data set. Huffman Coding and Arithmetic Coding\cite{5974482} are the most widely known Entropy coders.
\\
\\
Huffman coding was the algorithm chosen for this report and will be discussed in depth in the next section.
\\
\\
Arithmetic coding calculates the probability of each symbol within the data and represents it as a range of values between 0 and 1. For example, if the symbol \textit{2} was found to comprise 20\% of the data, it would be assigned all the numbers from 0 up to (but not including) 0.2. Once all the probabilities are found, they are sorted, with the least probable block starting at 0 and the most probable block ending at 1. The data is then read in order: for each symbol, the block representing that symbol is found and the table is recreated between that block's starting and ending numbers. This is repeated until the last value has updated the table's starting and ending values. Either the upper or lower value of the final table is then used to represent the entire dataset, together with a table of all the symbols and their probabilities. The data is decompressed in the same way, except that each next table is chosen by where the given decimal value falls\cite{488381}.
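A minimal sketch of this interval-narrowing process (Python; the symbol set and probabilities are made up for illustration, and floating-point precision limits this toy version to short messages):

```python
def arith_encode(data, intervals):
    # Narrow [low, high) once per symbol; any number inside the final
    # interval identifies the whole message
    low, high = 0.0, 1.0
    for s in data:
        span = high - low
        s_low, s_high = intervals[s]
        low, high = low + span * s_low, low + span * s_high
    return (low + high) / 2

def arith_decode(code, intervals, n):
    out = []
    low, high = 0.0, 1.0
    for _ in range(n):
        span = high - low
        x = (code - low) / span           # position inside current interval
        for s, (s_low, s_high) in intervals.items():
            if s_low <= x < s_high:       # which symbol's block is it in?
                out.append(s)
                low, high = low + span * s_low, low + span * s_high
                break
    return "".join(out)

# Hypothetical probabilities: a = 50%, b = 25%, r = 25%
intervals = {"a": (0.0, 0.5), "b": (0.5, 0.75), "r": (0.75, 1.0)}
code = arith_encode("abra", intervals)
assert arith_decode(code, intervals, 4) == "abra"
```

Each decoding step depends on the interval produced by the previous one, which is exactly the sequential dependency discussed below.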
\\
\\
The requirement of repeated steps in the correct order means that Arithmetic coding is a Dynamic Programming problem\cite{5164745}, and thus not easily parallelisable; it is therefore unlikely to be accelerated much by a parallel implementation, which is a high priority for this task.

\section{Huffman Coding Overview}
Huffman Coding uses the probability of each unique symbol to determine codes (binary sequences) that map each symbol to a new binary representation. It gives the symbol that occurs the most frequently the shortest code. It does this most effectively through the use of a tree structure. Huffman Coding is most widely used for JPEG compression\cite{5365034} and MPEG-2/4 AAC (MPEG audio channel) compression\cite{5536972}.
\\
\\
Once the Huffman coder has determined the codes for all unique symbols, it replaces each symbol with its code. These codes are concatenated into a single binary sequence, which is saved together with a table of each unique symbol and its respective probability. The decompression scheme uses this table to regenerate the same tree structure that generated the codes, and then converts the binary sequence back into its original symbols.
\\
\\
For instance, let us define each symbol as a single character and try to compress the following data: "abracadabra".
\\
Firstly the data needs to be binned (each unique char counted):
\begin{itemize}
	\item a - 5
	\item b - 2
	\item r - 2
	\item c - 1
	\item d - 1
\end{itemize}
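This binning step can be reproduced with Python's standard library (a sketch, not part of the actual pipeline):

```python
from collections import Counter

# Count each unique character in the input data
counts = Counter("abracadabra")
print(dict(counts))  # {'a': 5, 'b': 2, 'r': 2, 'c': 1, 'd': 1}
```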

The tree is then constructed using the probabilities of each character. Since "a" has the most occurrences, it must be the leaf node (a node with no children) closest to the root of the tree. "b" and "r" must be the next two nodes closest to the root, and "c" and "d" can be the furthest from it. The easiest way to achieve this is to take the two values in our list with the lowest counts and create a node, called a joiner, to join them, its weight being the sum of its children's weights, as shown in Figure~\ref{fig:Huffman Tree 1}.

\begin{figure}[h!]
	\centering
		\includegraphics[width=0.3\textwidth]{Figures/hufftree1.png}
		\rule{35em}{0.5pt}
	\caption[Tree showing the first step of ABRACADABRA compression]{Huffman Coding tree example for "abracadabra" after the first step}
	\label{fig:Huffman Tree 1}
\end{figure}

We then add that joiner node "1" to the list with its given weight:
\begin{itemize}
	\item a - 5
	\item b - 2
	\item r - 2
	\item 1 - 2
\end{itemize}
The same process is repeated, joining "1" and "r" together to create the next tree, shown in Figure~\ref{fig:Huffman Tree 2}.

\begin{figure}[h!]
	\centering
		\includegraphics[width=0.3\textwidth]{Figures/hufftree2.png}
		\rule{35em}{0.5pt}
	\caption[Tree showing the second step of ABRACADABRA compression]{Huffman Coding tree example for "abracadabra" after the second step}
	\label{fig:Huffman Tree 2}
\end{figure}

The weight list is now changed to:
\begin{itemize}
	\item a - 5
	\item b - 2
	\item 2 - 4
\end{itemize}

Since more than one item remains in the weights list, two more joins must occur, shown in Figures~\ref{fig:Huffman Tree 3} and \ref{fig:Huffman Tree 4}.

\begin{figure}[h!]
	\centering
		\includegraphics[width=0.3\textwidth]{Figures/hufftree3.png}
		\rule{35em}{0.5pt}
		\caption[Tree showing the third step of ABRACADABRA compression]{Huffman Coding tree example for "abracadabra" after the third step}
	\label{fig:Huffman Tree 3}
		\includegraphics[width=0.3\textwidth]{Figures/hufftree4.png}
		\rule{35em}{0.5pt}
	\caption[Tree showing the fourth step of ABRACADABRA compression]{Huffman Coding tree example for "abracadabra" after the fourth step}
	\label{fig:Huffman Tree 4}
\end{figure}

Note that the nodes "b" and "a" were placed to the left of the joiner node. This is because all children of joiners must follow a simple rule: the left child must have a weight smaller than or equal to that of the right child.
\\
\\
From the final tree the codes for our symbols can be calculated; the blue numbers along each line joining the nodes indicate the binary value added for that path. Thus the code for "a" is "0", while the code for "c" is "1101". The resulting code table is:
\begin{itemize}
	\item a - 0
	\item b - 10
	\item r - 111
	\item d - 1100
	\item c - 1101
\end{itemize}
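The whole construction can be reproduced in a short sketch (Python; heap tie-breaking may place nodes differently from the figures, so individual codes can differ from the table above, but Huffman's optimality guarantees the total stays at 23 bits):

```python
import heapq
from collections import Counter

def huffman_codes(data):
    counts = Counter(data)
    # Heap entries: (weight, tie-break id, tree); a tree is either a symbol
    # (leaf node) or a (left, right) pair (joiner node)
    heap = [(w, i, s) for i, (s, w) in enumerate(counts.items())]
    heapq.heapify(heap)
    uid = len(heap)
    while len(heap) > 1:
        w1, _, t1 = heapq.heappop(heap)   # the two lowest weights...
        w2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (w1 + w2, uid, (t1, t2)))  # ...joined under one node
        uid += 1
    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")   # left edge adds a 0
            walk(tree[1], prefix + "1")   # right edge adds a 1
        else:
            codes[tree] = prefix or "0"
    walk(heap[0][2], "")
    return codes

codes = huffman_codes("abracadabra")
encoded = "".join(codes[c] for c in "abracadabra")
print(len(encoded))  # 23 bits, as in the worked example
```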

The next step is to replace each character in our data with its code. The full output binary sequence will be:
\begin{tabular}{lllllllllll}
a & b & r & a & c & a & d & a & b & r & a \\
0 & 10 & 111 & 0 & 1101 & 0 & 1100 & 0 & 10 & 111 & 0
\end{tabular}
\\
This gives a total length of 23 binary digits, meaning 3 bytes will be needed to store them, since a byte is made up of 8 binary values. The output binary sequence is therefore smaller than the 11 bytes of the original data.


\section{Huffman Coder Extensions}
Huffman Coding has been extended in many ways to improve both speed and memory usage for specific data. JPEG encoders\cite{5365034, 1192978} have been accelerated by using a hash map once the codes have been constructed, rather than traversing the tree to find each symbol's leaf node. This allows the tree to be deleted entirely, saving memory.
\\
\\
Other changes involve capping the weight totals for each node. Such caps are usually used to stop a tree from becoming too deep and thus generating codes longer than the original input symbols.
\\
\\
Adaptive Huffman coding is an extension that makes Huffman Coding work on streaming data, where the probabilities cannot be calculated in advance because the entire dataset cannot be read first. It achieves this with a recalculation step each time a symbol is encoded and its probability changes: once a symbol has been encoded, it occurs more often in the file than it previously did, so it is now more probable and its binary code must be shortened before it is used again. The tree is therefore re-arranged after each symbol is encoded. Decompression follows the same process at each decoding step.

\section{Parallelism}
Parallel execution is the process of running many instances of code at the same time. In order for code to run in parallel and still produce the desired result, the parallel processes must be disjoint: no process may require data that is changed by another process running in parallel. Issues typically arise when every parallel process needs to update the same variable, or when all processes must finish one stage before any may continue to the next. The first issue is fixed by atomic variables: when a single shared variable must be updated, atomicity makes all other parallel processes wait while the variable is being changed by one of them. The second issue is fixed by a synchronisation barrier, which forces every parallel process to wait until all of them have reached the barrier.
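Both fixes can be sketched in Python, with a lock standing in for an atomic update and a barrier separating two phases (the worker counts and variable names are illustrative):

```python
import threading

NUM_WORKERS = 4
counter = 0
lock = threading.Lock()                   # serialises updates to the shared variable
barrier = threading.Barrier(NUM_WORKERS)  # all workers meet here between phases

def worker():
    global counter
    # Phase 1: every worker updates the same shared variable
    for _ in range(1000):
        with lock:
            counter += 1                  # without the lock, updates could be lost
    # Phase 2 must not start until every worker has finished phase 1
    barrier.wait()

threads = [threading.Thread(target=worker) for _ in range(NUM_WORKERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 4000
```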
\\
\\
Parallelisation can be done using the Central Processing Unit (CPU) or the Graphics Processing Unit (GPU); we now discuss each of these architectures.

\subsection{Parallelism via the CPU}
CPU threads are the most commonly used approach to accelerating execution, since they tend to be a lot simpler to program and allow full use of the computer's Random Access Memory (RAM). CPUs also tend to have larger caches than GPUs, but far fewer processing cores, meaning the CPU cannot process as many threads at the same time.
\\
\\
To reach its full potential, the CPU should run no more processes (threads) in parallel than it has cores; if more threads are run at the same time, the process scheduler must multiplex several threads onto a single core, causing overhead.
\\
\\
Further parallelisation on the CPU can be achieved using SSE (Streaming SIMD Extensions). These instructions allow vectorised data to be processed in parallel, applying a single basic instruction to multiple data elements at the same time. For example, to divide 1000 values in an array by 2, the SSE instructions can compute all 1000 divisions within a few instruction steps, rather than issuing 1000 division instructions sequentially. AVX is an upgrade to SSE that allows larger registers and memory blocks to be used.

\subsection{Parallelism via the GPU}
The GPU has a different architecture to the CPU: it has a single large memory block (the GPU RAM), and a separate cache, shared memory and texture memory for each SMX\cite{6339599}, whereas the CPU has a single cache hierarchy shared by all cores. Each GPU SMX can also run many threads at a time, meaning algorithms can be parallelised over blocks of memory, and then over threads within each block.
\\
\\
One of the problems with GPU parallelism is the requirement of small data blocks per SMX, since each SMX has only a very small cache and shared memory; another is the overhead of sending data to the main graphics card RAM and back once processed.
\\
\\
There are two main programming languages for parallel graphics processing: OpenCL and CUDA. OpenCL (Open Computing Language) is an open standard that works on both AMD and NVIDIA graphics cards. CUDA (Compute Unified Device Architecture) is designed for NVIDIA graphics cards and is easier to program than OpenCL. CUDA also provides many libraries of optimised, ready-made algorithms, such as Thrust.
\\
\\
A method written to run on the graphics card is called a kernel. Only the host (CPU) can instruct the graphics card to execute a kernel, except on newer NVIDIA graphics cards of compute capability 3.5 and above, which added the ability for kernels to launch other kernels. When a kernel is called, the number of thread blocks and threads per block must be specified. The number of threads per SMX required for best efficiency varies greatly between graphics cards due to hardware differences.
\\
\\
Thrust is a library of CUDA algorithms and data structures that helps copy data to and from the graphics card. This allows algorithms to be programmed more easily and quickly than writing a custom kernel for each process. Thrust also launches the correct number of blocks and threads per block for the compute capability it is compiled for, so there is no need to determine the best configuration for the current hardware.