\section{Implementation}
\label{implementation}

The implementation of this project was written in a combination of C and CUDA, NVIDIA's GPU programming model. For ease of testing, all libraries were written to interface directly with the benchmark tools, so the nomenclature follows that of Plank's FGAL~\cite{PlankPFGA}.  The main computation of concern is multiplication, so in each of the following libraries we focused our efforts on the functions related to multiplication. 

Section~\ref{imp:stupid} describes the naive implementation of Galois field arithmetic, in which all calculations are computed directly. In Section~\ref{imp:gpu_quirks}, we discuss some of the difficulties encountered when programming on GPUs.  Section~\ref{imp:gpu_plank} discusses the implementation of a Galois field arithmetic library that makes use of both table lookups, as is the typical convention, and GPU parallelization.  Finally, Section~\ref{imp:gpu_notables} explains the implementation of a Galois field arithmetic library that performs all computations directly (i.e., without table lookups) and makes use of the GPU.

\subsection{Naive Implementation of Galois Field Arithmetic} 
\label{imp:stupid}
As discussed in Section~\ref{Design}, we implemented a Galois field arithmetic library that does all computations directly as a baseline for comparisons among the other libraries.  This implementation was written entirely in C. This library does not create multiplication, division, log or inverse log tables, nor does it store any tables at all, other than the primitive polynomials discussed in Section~\ref{background}. Unlike the other libraries, in which the computations differ according to the value of $w$, here the computations are identical across all values of $w$.  The only change for different values of $w$ is the data type that is operated on.  The data types and their sizes for the most common values of $w$ are given in Table~\ref{tab:data_types}.

\begin{table}
\centering
\begin{tabular}{ | c || c | c |}
\hline
$w$ & Data Type & Size in Bytes \\
\hline
\hline
8 & char & 1 \\ \hline
16 & short & 2 \\ \hline
32 & int & 4 \\ \hline
\end{tabular}
\caption{Values of $w$ and the associated data types on which Galois arithmetic operations are performed.}
\label{tab:data_types}
\end{table}
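As a concrete illustration of the direct approach, a multiplication in $GF(2^8)$ can be sketched in plain C as a shift-and-add loop that reduces by the primitive polynomial whenever the intermediate product overflows eight bits. This is a sketch, not the library's actual code; the function name and the polynomial value \texttt{0x11D} are our own illustrative choices.

```c
#include <stdint.h>

/* Direct (table-free) multiplication in GF(2^8): shift-and-add,
   reducing modulo the primitive polynomial on overflow.
   0x11D is a hypothetical choice of primitive polynomial. */
static uint8_t gf8_mult(uint8_t a, uint8_t b, unsigned prim_poly)
{
    unsigned product = 0;
    unsigned shifted = a;   /* a * 2^i in GF(2^8), reduced as we go */
    int i;

    for (i = 0; i < 8; i++) {
        if (b & 1)              /* add (XOR) this shift of a */
            product ^= shifted;
        b >>= 1;
        shifted <<= 1;
        if (shifted & 0x100)    /* overflow: subtract (XOR) the polynomial */
            shifted ^= prim_poly;
    }
    return (uint8_t)product;
}
```

The same loop generalizes to $w = 16$ and $w = 32$ by changing the data type and the loop bound, which is why the naive library's code is identical across values of $w$.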

\subsection{Programming for GPUs}
\label{imp:gpu_quirks}

When programming on GPUs, computation is divided into threads. For our purposes, each thread computes a single value, whether a value in a table or the result of a multiplication over a region of data.  On GPUs, these threads are divided into blocks, which are mapped to multithreaded Single Instruction Multiple Data (SIMD) processors on the GPU chip itself.  Within a block, threads are divided into warps, the scheduling unit used within each multithreaded SIMD processor. It is best to specify the number of threads per block as a multiple of the warp size, so that no computational unit is idle when the threads are scheduled. The computation completed on each thread is specified by a function called a kernel. The element operated on in the kernel is determined by an index, calculated from the indices of both the thread and the block~\cite{530book, cuda}. When a kernel is enqueued, the number of blocks and threads is specified in the following format:

\begin{scriptsize}
\begin{verbatim}
kernel_name<<<num_blocks, num_threads>>>(arg1, arg2, ...);
\end{verbatim}
\end{scriptsize}

For our version of CUDA, the number of threads per block is limited to 512 and the warp size is 32, while the number of blocks per kernel launch is capped at 65536. All of our CUDA code is written and organized with these limits in mind. 
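Under these limits, choosing the block count for a given number of threads is a round-up division. The following host-side helper is a sketch (the function name is ours, not part of CUDA or the library):

```c
#include <assert.h>

/* Hypothetical helper: number of blocks needed to cover n threads,
   checked against the 512-thread and 65536-block limits noted above. */
static int num_blocks(int n, int threads_per_block)
{
    int blocks = (n + threads_per_block - 1) / threads_per_block; /* ceiling */
    assert(threads_per_block <= 512);
    assert(blocks <= 65536);
    return blocks;
}
```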

The GPU contains its own memory hierarchy, including global memory, caches, shared memory and registers.  Threads within the same block have access to the same shared memory, and all threads on the GPU have access to the global memory; access to global memory is much slower than access to shared memory.  Each thread has its own registers.  The GPU does not have direct access to the CPU's memory: any data required by computations on the GPU must be transferred via \texttt{cudaMemcpy} from the CPU's memory to the GPU's memory, and once a result is computed and stored in the GPU's memory, it has to be copied back to the CPU's memory.  These memory transfers travel over a PCI bus, so their speed depends on the specific bus used~\cite{530book, cuda}.

One of the quirks of programming for GPUs is that conditional statements cause a significant decrease in performance. All threads within a warp execute the same instruction, so when threads in a warp satisfy different conditions, the warp executes each branch path in turn, with the threads not on that path idle. It is therefore in the programmer's best interest to remove as many conditional statements as possible, including unrolling small loops to avoid checking conditions in for and while loops~\cite{530book, cuda}. 
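As a small illustration of this last point, unrolling a fixed-length loop removes the per-iteration condition check entirely. The plain C sketch below uses function names of our own choosing:

```c
/* XOR-accumulate four bytes with a loop: the loop condition is a
   branch tested on every iteration. */
static unsigned char xor4_loop(const unsigned char *p)
{
    unsigned char acc = 0;
    int i;
    for (i = 0; i < 4; i++)
        acc ^= p[i];
    return acc;
}

/* The same computation unrolled: no conditions at all. */
static unsigned char xor4_unrolled(const unsigned char *p)
{
    return p[0] ^ p[1] ^ p[2] ^ p[3];
}
```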


\subsection{Using the GPU and Table Lookups}
\label{imp:gpu_plank}

For this section, Plank's fast Galois field arithmetic library~\cite{PlankPFGA} was modified in several ways to make use of the GPU. The tables used in this library are multiplication, division, log and inverse log tables, all of which speed up calculations in Galois fields.  The sizes of these tables depend on the value of $w$ in $GF(2^w)$.  Table~\ref{tab:table_sizes} shows the sizes of these lookup tables for some typical values of $w$. 

\begin{table}
\centering
\begin{tabular}{|c||c|c|c|c|}
\hline
$w$ & Multiplication & Division & Log & Inverse Log \\ \hline
\hline
8 & $2^{16}$  & $2^{16}$ & $2^8$ & $3 \cdot 2^8$ \\ \hline
16 & $2^{32}$ & $2^{32}$ & $2^{16}$ & $3 \cdot 2^{16}$ \\ \hline
32 & $2^{64}$ & $2^{64}$ & $2^{32}$ & $3 \cdot 2^{32}$ \\ \hline
\end{tabular}
\caption{Number of entries in various tables for $w \in \{8, 16, 32\}$.}
\label{tab:table_sizes}
\end{table}
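The growth in Table~\ref{tab:table_sizes} follows directly from the definitions: a full multiplication (or division) table has one entry per pair $(x, y)$, i.e. $2^w \cdot 2^w = 2^{2w}$ entries, which is why full product tables are impractical beyond $w = 8$. As a one-line C sketch (the function name is ours):

```c
#include <stdint.h>

/* Entries in the full multiplication (or division) table for GF(2^w):
   one per (x, y) pair, i.e. 2^(2w).  Valid for w <= 16 here; for
   w = 32 the count (2^64) overflows a 64-bit shift. */
static uint64_t mult_table_entries(unsigned w)
{
    return (uint64_t)1 << (2 * w);
}
```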

The values in these tables are, for the most part, calculated independently of one another.  So, these calculations can be completed in parallel on the GPU, rather than sequentially on the CPU.  Figure~\ref{fig:seq_mult_init} shows the original code for initializing multiplication and division tables. In contrast, Figure~\ref{fig:gpu_mult_init} shows the associated kernel, which is executed as a thread for each value in the table. These threads can be executed in parallel on the GPU, whereas the original code must be executed sequentially. 

\begin{figure}[h]
\begin{scriptsize}
\begin{verbatim}
j = 0;
mult_tables[w][j] = 0;   // y = 0 
div_tables[w][j] = -1;
j++;
for (y = 1; y < nw[w]; y++) {   // y > 0
  mult_tables[w][j] = 0;
  div_tables[w][j] = 0;
  j++;
}


for (x = 1; x < nw[w]; x++) {  // x > 0 
  mult_tables[w][j] = 0; // y = 0 
  div_tables[w][j] = -1;
  j++;
  logx = log_tables[w][x];
  for (y = 1; y < nw[w]; y++) {  // y > 0 
    mult_tables[w][j] = ilog_tables[w][logx+log_tables[w][y]]; 
    div_tables[w][j] = ilog_tables[w][logx-log_tables[w][y]]; 
    j++;
  }
}
\end{verbatim}
\end{scriptsize}
\caption{Original code to initialize multiplication and division table.}
\label{fig:seq_mult_init}
\end{figure}

\begin{figure}[h]
\begin{scriptsize}
\begin{verbatim}
__global__ void init_mult_div_tables(int *nw, 
    int *mult, 
    int *div, 
    int *log, 
    int *ilog) 
{
  int j, x, y, logx;
  j = blockIdx.x*blockDim.x+threadIdx.x;
  x = j/(*nw);
  y = j%(*nw);
  logx = log[x];

  mult[j] =  (x*y > 0) ? ilog[logx+log[y]] : 0;
  div[j] = (x*y > 0) ? ilog[logx-log[y]] : 0;
  div[j] = (y > 0) ? div[j] : -1;
}
\end{verbatim}
\end{scriptsize}
\caption{Kernel to initialize multiplication and division tables.}
\label{fig:gpu_mult_init}
\end{figure}
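The kernel in Figure~\ref{fig:gpu_mult_init} replaces the sequential counter \texttt{j} with a per-thread computation: each thread recovers its $(x, y)$ pair from its flat index, mirroring the row-major $j = x \cdot nw + y$ ordering of the nested loops. The decomposition itself, as a plain C sketch (the function name is ours):

```c
/* Recover (x, y) from the flat table index j, where the table is
   laid out row-major with nw entries per row (j = x*nw + y). */
static void flat_to_xy(int j, int nw, int *x, int *y)
{
    *x = j / nw;
    *y = j % nw;
}
```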

The major computations completed in both benchmarks tested, as well as in general applications, are multiplications of regions of data, so the functions that compute region multiplies were our primary focus for performance enhancement. In these functions, each element of the region can be computed separately, so the computation can be parallelized. Different values of $w$ use different tables, and thus different computations: for example, $w = 8$ uses the multiplication and division tables, while $w = 16$ uses the log and inverse log tables.  Because these tables are required for the computations, they must be transferred to the GPU. 

While implementing this version of the library, we made several design choices.  First, we noted that the size of the region determines the number of threads created. As noted above, it is best for the number of threads to be a multiple of the warp size, so we padded the region to be multiplied up to a multiple of 32, the warp size in our version of CUDA.  
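The padding computation is a round-up to the next multiple of the warp size, sketched below in plain C (the function name is ours):

```c
/* Pad nbytes up to the next multiple of the warp size (32 here),
   so the launched threads fill every warp completely. */
static int pad_region(int nbytes, int warp_size)
{
    return ((nbytes + warp_size - 1) / warp_size) * warp_size;
}
```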

Second, we noted that memory transfers accounted for a large portion of total computation time, so to minimize them, we transferred each table to the GPU only once, where it remained for the duration of program execution.  Third, we observed that memory allocation and deallocation on the GPU also require a significant amount of time.  Since the region size was constant throughout all benchmark applications, we reduced the number of allocations and deallocations by allocating space for the source and destination regions only once. The new source and destination data still have to be transferred with each new region, but some of the memory management overhead is eliminated. 

Fourth, we observed that increasing the number of threads per block improved overall performance.  We attribute this to a reduction in effective memory latency: since thread operations are interleaved, the latency of a memory access in one thread can be hidden by context switching to another thread.  With more threads in a single block, there are more opportunities for switching, and thus more opportunities for hiding memory latency.

Over the course of implementation, we tried several other changes to see their effect on performance.  One technique we tried was increasing the amount of computation per thread: rather than computing a single element of the region, each thread computed two or four elements.  The rationale behind this technique is that it decreases the number of threads, and thus the context switching overhead.  This change, however, decreased performance. Based on the observation above, we attribute this decrease to the memory accesses: for this particular application, it was beneficial to have more threads because the memory access overhead was larger than the context switching overhead.

Another observation we made during implementation was that the CPU sat idle during arithmetic computations.  One of the advantages of using a GPU is that the CPU and GPU can work simultaneously, so we attempted to split computation between them: while memory was being transferred to the GPU and the GPU was performing its computations, the CPU performed computations as well.  However, it is difficult to synchronize these operations properly, and extremely difficult to load balance between the two processors well enough to benefit from this technique, so this change also decreased performance. We speculate that the calculation required to determine the proper load balance would outweigh the advantage of using both processors.  


\subsection{Using the GPU without Table Lookups}
\label{imp:gpu_notables}

The third library implemented in this project is a Galois field arithmetic library that performs all computations directly (without tables) and performs these computations on the GPU when possible.  The main benefit of this approach is a significant reduction in the amount of memory required, as the large tables listed in Table~\ref{tab:table_sizes} do not have to be created or stored. A secondary benefit is that the computation can be implemented naively; that is, this implementation does not require the intricate details associated with the table lookup approaches, which greatly reduces development time for the library.  

While this version eliminates some memory overhead, each multiplication requires many more instructions. The kernel for the table-based library (described in the previous section) for $w = 8$ is given in Figure~\ref{fig:kernel_w8_table}, while the kernel for $w = 8$ in this library is given in Figure~\ref{fig:kernel_w8_notable}.  As these code snippets show, this library performs significantly more operations per multiplication, but fewer memory accesses.  

\begin{figure}[h]
\begin{scriptsize}
\begin{verbatim}
__global__ void mult_w8_null(int *nbytes_d, 
    unsigned char *ur1_d, 
    unsigned char *ur2_d, 
    int *mult_d, 
    int *srow_d) 
{
  int i = blockIdx.x*blockDim.x + threadIdx.x;
  ur2_d[i] = (i < *nbytes_d) ? mult_d[*srow_d+ur1_d[i]] : 0;
}
\end{verbatim}
\end{scriptsize}
\caption{Kernel that uses tables for computing multiplication results.}
\label{fig:kernel_w8_table}
\end{figure}

\begin{figure}[h]
\begin{scriptsize}
\begin{verbatim}
__global__ void mult_w8_null(unsigned char *ur1_d, 
    unsigned char *ur2_d, 
    int *multby_d, 
    int *prim_poly_d, 
    int *nwm1_d) 
{
  int index = blockIdx.x*blockDim.x + threadIdx.x;
  unsigned char x, y, prod;
  unsigned char scratch[8];   /* y * 2^i in GF(2^8) */
  int i, j, k, ind;

  x = ur1_d[index];
  y = *multby_d;

  /* precompute y shifted by each power of two, reducing by
     the primitive polynomial when the high bit overflows */
  for (i = 0; i < 8; i++) {
    scratch[i] = y;
    if (y & (1 << 7)) {
      y = ((y << 1) ^ prim_poly_d[8]) & nwm1_d[8];
    } else {
      y = y << 1;
    }
  }

  /* XOR together the shifted copies selected by the bits of x */
  prod = 0;
  for (i = 0; i < 8; i++) {
    ind = (1 << i);
    if (ind & x) {
      j = 1;
      for (k = 0; k < 8; k++) {
        prod = prod ^ (j & scratch[i]);
        j = (j << 1);
      }
    }
  }
  /* scratch[0] holds the original y, which the loops above
     destroyed; prod is already 0 when x or y is 0 */
  ur2_d[index] = (x == 0 || scratch[0] == 0) ? 0 : prod;
}
\end{verbatim}
\end{scriptsize}
\caption{Kernel that directly computes multiplication results.}
\label{fig:kernel_w8_notable}
\end{figure}

Again, the main focus of this library was improving the performance of multiplications of regions of memory.  We made the same performance improvements as in the previously discussed library, such as reducing memory allocations and deallocations and increasing the number of threads per block. 

