\section{GPGPU Techniques and Results}
We described an initial GPGPU CUDA kernel in chapter 5 and reported an initial run time of 40 ms.  This chapter presents techniques to accelerate that result further.
\subsection{Identifying the Bottleneck}
The first task in accelration is to identify the bottleneck of the algorithm.  In our CUDA kernel, the bottleneck identified as the third for loop: the calculation of $\left|x[k]-x[i]\right|^{\alpha[j]}$ by threads 1-500 of the each block and the summing of these results to find $X_{\theta}^+ and X_{\theta}^-$.

%-------- Figure -----------------------------------
\begin{figure}[h!]
\epsfig{file=figures/bottleneck.eps}
\caption{Identifying the Bottleneck}
\label{fig:bottleneck}
\end{figure}

%------- End Figure---------------------------------------

Breaking this down in more detail, there are three sections of the CUDA code.  Section 1 is the pre- and post-processing: the part of the code where GPU memory is allocated, data is copied to and from the GPU by the CPU, and the kernels are invoked.  Section 2 is the bottleneck.  Section 3 is the code that calculates the likelihood and MLE functions.

Section 1 takes slightly under 1 ms, i.e. it takes about 1 ms to sort the input data, copy the data to the GPU, call the kernels, and copy the data back.  Section 2 accounts for the remainder: the total run time minus the time for section 1.  The calculations in section 3 have minimal effect on run time.
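On the host side, this sort of breakdown comes from bracketing each section with timers. A generic C++ sketch is shown below (illustrative only; the function name is ours, and CUDA kernels themselves are better timed with cudaEvent\_t, since kernel launches are asynchronous with respect to the host):

```cpp
#include <chrono>

// Measure the wall-clock time of one code section in milliseconds.
// For GPU kernels, cudaEvent_t / cudaEventElapsedTime should be used
// instead, because kernel launches return before the kernel finishes.
template <typename F>
double time_section_ms(F&& section) {
    auto start = std::chrono::steady_clock::now();
    section();   // run the section under test
    auto stop = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(stop - start).count();
}
```

Timing each of the three sections separately is what exposes section 2 as the dominant cost.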

\subsection{General GPGPU techniques}
We identified several CUDA guidelines in the CUDA review chapter. We used these, plus some additional common-sense guidelines, in accelerating the CUDA program.

\textbf{Threads should not be idle} We do not want idle threads waiting with nothing to do, in CUDA or in CPU multi-threaded programming.  One direct application of this principle is to separate the execution of sections 2 and 3.  Section 2 launches 500x80 blocks of 512 threads each.  Section 3 consists of 2 for loops, so we may launch 80 blocks of 500 threads each, or even 500x80 blocks of 1 thread each.  Either way, the two sections are separated so that the threads launched for section 2 are not idle while the tasks of section 3 are performed.

We must remember that the execution of the section 3 code depends on the results of section 2.  Specifically, each block of section 2 calculates 2 variables, $X_{\theta}^+$ and $X_{\theta}^-$, that section 3 needs in order to execute.

\textbf{Minimize copying between CPU and GPU} One traditional bottleneck in GPGPU programming is the length of time it takes to copy values from CPU to GPU and vice versa.  Developers do well to minimize the amount of copying.  Modeling the algorithm described in section 3 and shown in Figure~\ref{fig:system}, there need to be a minimum of 500 initial copies from CPU to GPU, and 4 final copies from GPU to CPU at the end of the algorithm.
 
%-------- Figure -----------------------------------
\begin{figure}[h!]
\includegraphics[scale = 0.75]{figures/system.eps}
%\epsfig{file=figures/system.eps}
\caption{System View of Algorithm}
\label{fig:system}
\end{figure}

%------- End Figure---------------------------------------   

Keeping the number of copies this low means that the time taken in copying is negligible relative to the total run time of the program.  As mentioned before, section 3 depends on the results of section 2, more specifically the 80,000 values of $X_{\theta}^+$ and $X_{\theta}^-$.  Copying these would require a significant amount of run time, but luckily there is never a need for these values to be copied back to the CPU.  Providing section 3 with the pointers to where section 2 has written the values is enough to avoid these copies.

\subsection{Acceleration Techniques: Summing GPGPU threads}

The bottleneck in section 2 of the algorithm is the main focus of our techniques.  Notice that there are two parts in the bottleneck. Bottleneck part 1 calculates $\left|x[k]-x[i]\right|^{\alpha[j]}$ for each thread of each block (remember that there are 500x80 blocks with 512 threads each, more than 20 million threads overall). Bottleneck part 2 sums up the calculations of part 1 in each block and writes the result to global memory (the 500x80 blocks write 40,000 values to global memory). The CUDA code for parts 1 and 2 is shown:
\linespread{1}
\begin{verbatim}
  // Declare variables in shared memory for fast thread read/write
  __shared__ double temp_X_minus [threadsPerBlock];
  __shared__ double temp_X_plus [threadsPerBlock];

  // Part 1: each thread takes a power according to 
  // i:dataIndex, k:tid, j:alpha(j)
  if (tid <= dataIndex){
    temp_X_minus[tid] = pow(fabs(dev_data[tid]-p1), alpha);
    temp_X_plus[tid] = 0;
  }
  else if (tid >dataIndex && tid < windowSize){
    temp_X_minus[tid] = 0;
    temp_X_plus[tid] = pow((dev_data[tid]-p1), alpha);
  }
  else {
    temp_X_minus[tid] = 0;
    temp_X_plus[tid] = 0;
  }
  __syncthreads();


  // Part 2: sum up the results of each thread, standard summing function 
  int i = threadsPerBlock/2;  // threadsPerBlock = number of threads per block
  while (i != 0) {
    if (tid < i){
      temp_X_minus[tid] += temp_X_minus[tid + i];
      temp_X_plus[tid] += temp_X_plus[tid + i];
    }
    __syncthreads();
    i /= 2;
  }

  // write result back to global memory, 
  // each block produces unique result stored
  // in unique location defined by blockID
  if (tid == 0){
    dev_X_plus[blockID] = temp_X_plus[0];
    dev_X_minus[blockID] = temp_X_minus[0];
  }
\end{verbatim}
\linespread{2}
It then makes sense to determine whether each part can be accelerated. Part 1 requires each thread to perform a double-precision power function and place the result into a shared variable. This part may be accelerated if the developer knows certain facts about the input. This is more of a pre-processing acceleration and will be covered in the next chapter.

Part 2 requires calculating the sum of all the values calculated in part 1 (now stored in the shared variable) and writing it to global memory. In other words, the task of each block is to calculate
$X_\theta^+ = \sum_{k} (x[k]-x[i])^{\alpha[j]}$:
part 1 calculates
$shared[k] = (x[k]-x[i])^{\alpha[j]}$ while part 2 calculates
$X_\theta^+ = \sum_{k} shared[k]$.
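The two parts can be modeled sequentially on the CPU. The following C++ sketch (illustrative only; the function name and signature are ours, not the kernel's) computes one block's $X_\theta^+$ and $X_\theta^-$ exactly as parts 1 and 2 do together:

```cpp
#include <cmath>
#include <vector>

// Sequential CPU model of bottleneck parts 1 and 2 for one (i, j) pair.
// x: the window of sorted data points, i: the block's data index,
// alpha: the block's alpha[j] value (names follow the thesis text).
void bottleneck_model(const std::vector<double>& x, int i, double alpha,
                      double& X_plus, double& X_minus) {
    int n = (int)x.size();
    std::vector<double> shared_minus(n), shared_plus(n);
    // Part 1: one "thread" per k fills the shared arrays
    for (int k = 0; k < n; ++k) {
        if (k <= i) {
            shared_minus[k] = std::pow(std::fabs(x[k] - x[i]), alpha);
            shared_plus[k] = 0.0;
        } else {
            shared_minus[k] = 0.0;
            shared_plus[k] = std::pow(x[k] - x[i], alpha);
        }
    }
    // Part 2: sum the shared arrays into the block's two outputs
    X_plus = X_minus = 0.0;
    for (int k = 0; k < n; ++k) {
        X_plus += shared_plus[k];
        X_minus += shared_minus[k];
    }
}
```

The GPU version differs only in that part 1's loop body runs as 512 concurrent threads and part 2's sum is parallelized, as discussed next.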

The CUDA code for part 2, repeated here,
\linespread{1}
\begin{verbatim}
  // Part 2: sum up the results of each thread, standard summing function 
  int i = threadsPerBlock/2;  // threadsPerBlock = number of threads per block
  while (i != 0) {
    if (tid < i){
      temp_X_minus[tid] += temp_X_minus[tid + i];
      temp_X_plus[tid] += temp_X_plus[tid + i];
    }
    __syncthreads();
    i /= 2;
  }
\end{verbatim}
\linespread{2}

uses a 'folding' method to calculate the sum over all threads. In the first iteration of the while loop, half of the threads are summed into the other half: the sum gets 'folded' in half. In the second iteration, the half that was folded gets folded/summed again.  The process is repeated until a single sum remains in shared[0].  The method is illustrated in Figure~\ref{fig:folding}.
%-------- Figure -----------------------------------
\begin{figure}[h!]
\epsfig{file=figures/folding.eps}
\caption{GPGPU 'Folding' summing process}
\label{fig:folding}
\end{figure}

%------- End Figure---------------------------------------   
This summing method is a typical GPU technique that takes advantage of the number of threads available on the GPU. It is faster than a typical CPU for-loop sum because multiple threads work in parallel, especially at the beginning of the folding process.

There are 512 threads launched for each block in our CUDA kernel (as a general rule, keep the number of threads a power of 2 so operations like 'folding' may be performed without if-else statements slowing down the code). During the first pass of the folding process, 256 of 512 threads are performing addition; during the second pass, 128 of 512 threads are working. This part of the algorithm takes advantage of CUDA's parallel nature.
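The folding loop can be sanity-checked with a host-side C++ model (an illustrative sketch with our own names, not the kernel itself): it folds a 512-element array exactly as the while loop does and records how many 'threads' do useful work in each pass:

```cpp
#include <vector>

// CPU model of the GPU 'folding' reduction: in each pass, element tid
// absorbs element tid + i, halving the number of active "threads".
// Returns the final sum; active_counts records the workers per pass.
double folding_sum(std::vector<double> data, std::vector<int>& active_counts) {
    int i = (int)data.size() / 2;   // data.size() must be a power of two
    while (i != 0) {
        for (int tid = 0; tid < i; ++tid)   // these are the active threads
            data[tid] += data[tid + i];
        active_counts.push_back(i);
        i /= 2;
    }
    return data[0];                 // the sum has been folded into element 0
}
```

For 512 elements the recorded counts are 256, 128, ..., 1: nine passes, with fewer and fewer threads doing work each time.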

However, toward the end of the summing method there are many more threads idle than working. In the last step there is 1 thread working and 511 idle; the second-to-last step has 2 working threads and 510 idle threads.  All these idle threads sitting around in the GPGPU suggest that inefficiencies exist in the code.

Recognizing this inefficiency, the summing process can be stopped short of the last iteration, e.g. at i = 4 instead of i = 0.  The CUDA code is altered accordingly:

\linespread{1}
\begin{verbatim}
  // Part 2: sum up the results of each thread, stopping early
  int i = threadsPerBlock/2;  // threadsPerBlock = number of threads per block
  while (i >= 4) {   // run the i = 4 pass, leaving 4 partial sums
    if (tid < i){
      temp_X_minus[tid] += temp_X_minus[tid + i];
      temp_X_plus[tid] += temp_X_plus[tid + i];
    }
    __syncthreads();
    i /= 2;
  }

  // write result back to global memory, 
  // each block produces unique result stored
  // in unique location defined by blockID
  if (tid == 0){
    dev_X_plus[blockID] = temp_X_plus[0] + temp_X_plus[1] +
                          temp_X_plus[2] + temp_X_plus[3];
    dev_X_minus[blockID] = temp_X_minus[0] + temp_X_minus[1] +
                           temp_X_minus[2] + temp_X_minus[3];
  }
\end{verbatim}
\linespread{2}
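The early-stopping variant can be modeled the same way. This host-side C++ sketch (illustrative, with our own names) folds down to a chosen stop value and then finishes with a serial sum over the remaining partial results; every stop value yields the same total, which is why only the run time changes:

```cpp
#include <vector>

// CPU model of the early-stopping fold: fold while i >= stop, then add
// the remaining 'stop' partial sums serially (the single-line finish).
double folding_sum_early_stop(std::vector<double> data, int stop) {
    if (stop < 1) stop = 1;         // stop = 1 reproduces the full fold
    int i = (int)data.size() / 2;   // data.size() must be a power of two
    while (i >= stop) {             // run the i == stop pass, then quit
        for (int tid = 0; tid < i; ++tid)
            data[tid] += data[tid + i];
        i /= 2;
    }
    double sum = 0.0;               // serial finish over 'stop' partials
    for (int k = 0; k < stop; ++k)
        sum += data[k];
    return sum;
}
```

Since every stop value is numerically equivalent, the choice between them is purely a performance trade-off between idle folding passes and the serial finish.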

The effect of altering the stopping value of i on the algorithm is shown in Table~\ref{folding table} and graphically in Figure~\ref{fig:foldGraph}.

%---------- Table --------------------------------------------
\begin{table}[h!]
\begin{center}
\begin{tabular}{|l|r|}
  \hline
  stop position & savings in milliseconds \\   \hline
  i = 0 & base \\   \hline
  i = 4 & 9.4 \\   \hline
  i = 8 & 10.8 \\   \hline
  i = 16 & 9.9 \\   \hline
  i = 32 & 9.5 \\    \hline
  i = 64 & 8.3 \\   \hline
\end{tabular}
\caption{Effect of folding architecture on runtime}
\label{folding table}
\end{center}
\end{table}

%--------------end of Table-----------------------------------


%-------- Figure -----------------------------------
\begin{figure}[h!]
\includegraphics{figures/foldingGraph.eps}
%\epsfig{file=figures/foldGraph.eps}
\caption{GPGPU 'Folding' Savings}
\label{fig:foldGraph}
\end{figure}

%------- End Figure---------------------------------------   

As Table~\ref{folding table} shows, the 'best' summing process is to stop the folding at i = 8 and add the remaining elements with a single line of code. A word of caution to readers following this thesis: this result is not universal. It will vary across GPGPU cards as well as across algorithms. It is best to run this sort of test for each algorithm and each GPGPU card being used. The general spirit of what is described should hold (stopping before i = 0 will yield savings), but the exact best stopping place depends on the kernel being launched as well as the GPGPU card.

\subsection {Acceleration Techniques: GPU Memory}
The GPGPU review chapter presented the four types of GPGPU memory.  In the current algorithm, 500 data points are copied to GPGPU global memory, where they are accessed by threads 1-500 of every block in the bottleneck portion of the algorithm.  The bottleneck kernel is launched with 500x80 blocks of 512 threads each, so there are 500*500*80 = 20 million accesses to global memory.

 \textbf{Constant vs. Global Memory:}

Constant memory and global memory are both off-chip memory. Constant memory is read-only, whereas global memory is read/write. Constant memory access is much faster than global memory access if threads are looking to access the same memory location multiple times.

In the current algorithm, using constant instead of global memory did not yield any performance improvement.  While each constant memory address is accessed multiple times (500x80 times for each of the 500 data point memory addresses), the repeated accesses do not come from threads within the same block; every thread within a block accesses a different memory address. This means the special properties of constant memory are not being utilized at all and there are no savings to be had.  We expected this result before trying constant memory, but as with all things GPGPU, actually testing the code is the only way to see whether theory and practice match.


\textbf{Limiting Access to Global Memory:}

Knowing that memory access slows performance and that the data accessed by all blocks is the same, we look for ways to limit the accesses to global memory. In practice, this means changing the launch architecture of the GPGPU calls: instead of launching 500x80 blocks, launch fewer and have those blocks re-use data accesses.

The limit to reducing the number of blocks launched can be seen by looking at the calculations within the bottleneck.  In the 500x80 block configuration, each block calculates $X_{\theta}^+ = \sum_{k} (x[k]-x[i])^{\alpha[j]}$, where i, j are the id numbers of the current block.  There are 80 unique values of $\alpha[j]$, and that is the minimum number of blocks that should be launched: 80 blocks of 500 threads each.  The way to have 80 blocks perform the same calculations as 500x80 blocks is to put a for loop (from 0 to 499) inside each block:

\linespread{1}
\begin{verbatim}
  // Declare variables in shared memory for fast thread read/write
  __shared__ double temp_X_minus [threadsPerBlock];
  __shared__ double temp_X_plus [threadsPerBlock];
  __shared__ double data[windowSize];

  // Access global memory once, put the data into shared memory
  if (tid < windowSize)
     data[tid] = dev_data[tid];
  __syncthreads();  // data[] must be complete before any thread reads it

  // for loop here goes through 500 iterations, one per data index i
  for (int k = 0; k < windowSize; k++){
    p1 = data[k];  
    
    // Part 1: each thread takes a power; the loop variable k now
    // plays the role of the data index i, and tid plays k
    if (tid <= k){
      temp_X_minus[tid] = pow(fabs(data[tid]-p1), alpha);
      temp_X_plus[tid] = 0;
    }
    else if (tid > k && tid < windowSize){
      temp_X_minus[tid] = 0;
      temp_X_plus[tid] = pow((data[tid]-p1), alpha);
    }
    else {
      temp_X_minus[tid] = 0;
      temp_X_plus[tid] = 0;
    }
    __syncthreads();


    // Part 2: sum up the results of each thread
    // into temp_X_plus[0], temp_X_minus[0] using the 
    // fast summing function (8 partial sums of 64 elements); code omitted

    // write results back to global memory; 
    // each block produces 500 unique results
  
    if (tid == 0){
      dev_X_plus[k*sizeAlpha + alphaIndex] = temp_X_plus[0];
      dev_X_minus[k*sizeAlpha + alphaIndex] = temp_X_minus[0];
    }
    __syncthreads();
    __syncthreads();

 } //end for loop

\end{verbatim}
\linespread{2}

As the code snippet shows, the for loop now writes 500 values to global memory instead of one unique value.  Each block begins by accessing the data in global memory and placing it into a shared variable.  This shared variable is then read on each iteration of the for loop. Overall, there are now 80x500 = 40,000 accesses to global memory instead of 20 million in the 500x80 block configuration.

The results of limiting access to global memory are surprising: the performance of the algorithm decreased, with an extra millisecond added to the running time. Other versions of this launch were tried as well, e.g. 80x2 blocks, each running for loops from 0 to 249. These other configurations also did not yield the expected savings. One hypothesis is that the cost of having for loops in the code outweighed the clock cycles gained by limiting global memory access.

\subsection{Acceleration Techniques: OPENMP + CUDA}

CUDA 4.0 added compatibility with OPENMP. The CPU can now launch the connected GPGPU cards in parallel. Figure~\ref{fig:openmp} shows the fork-join model of OPENMP combined with CUDA.
%-------- Figure -----------------------------------
\begin{figure}[h!]
\includegraphics[scale = 0.8]{figures/openmp.eps}
%\epsfig{file=figures/openmp.eps}
\caption{OPENMP + CUDA Fork Join System }
\label{fig:openmp}
\end{figure}

%------- End Figure---------------------------------   

Figure~\ref{fig:openmp} shows how OPENMP operates with two GPGPU cards. A general process for using OPENMP with CUDA on this algorithm is to:
\linespread{1}
\begin{verbatim}
1) Find the number of CUDA compatible devices available. 
2) Set the number of CPU threads as equal to the number of GPU devices 
   available. 
3) Perform pre-processing on both devices: allocate memory and copy 
   variables to both devices. 
4) Launch appropriate CUDA kernels on both devices.
5) Copy calculated parameters back to CPU memory, join threads, 
   find the overall minimum parameters.
\end{verbatim}
\linespread{2} 
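The five steps can be sketched with plain CPU threads (a model of the fork-join pattern only, with our own names; real code would use an OPENMP parallel region plus cudaSetDevice and per-device kernel launches, which we do not reproduce here): each worker handles half of the parameter space, and after the join the per-device results are reduced to the overall minimum:

```cpp
#include <thread>
#include <vector>
#include <algorithm>
#include <cfloat>

// CPU model of the fork-join pattern: one worker per "device", each
// searching its slice of the parameter space for the minimum cost,
// followed by a join and a final reduction to the overall best value.
double fork_join_min(const std::vector<double>& costs, int num_devices) {
    std::vector<double> best(num_devices, DBL_MAX);
    std::vector<std::thread> workers;
    size_t chunk = (costs.size() + num_devices - 1) / num_devices;
    for (int d = 0; d < num_devices; ++d) {
        workers.emplace_back([&, d] {          // step 4: "launch" device d
            size_t lo = d * chunk;
            size_t hi = std::min(lo + chunk, costs.size());
            for (size_t k = lo; k < hi; ++k)   // each writes only best[d]
                best[d] = std::min(best[d], costs[k]);
        });
    }
    for (auto& w : workers) w.join();          // step 5: join the threads,
    return *std::min_element(best.begin(), best.end()); // overall minimum
}
```

The reduction step is cheap because only one candidate per device survives the join; the redesign effort goes into making each half-kernel produce correct partial results.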

This process is similar to any other OPENMP program. The algorithm savings and added complexity occur in step 4. Because there are two CUDA GPGPU devices instead of 1, it makes sense for each device to perform half of the algorithm. Each device doing half of what it did before means savings in performance time. It also means that the developer has to redesign the algorithm to maintain correct results: the developer has to clearly state the purpose of each half-kernel and make sure the results are right. This is not a simple case of OPENMP programming, as CPU-only programming sometimes is.

The reward of OPENMP + CUDA is in the performance: the running time of the algorithm is reduced from 30 ms to under 20 ms.  This suggests that the run time may be further reduced if additional devices were available. However, there comes a point of saturation when adding new devices; remember that the minimum setup time for a device is 1 ms. So one has to carefully weigh the cost of additional devices (dollar cost, power cost, space cost) against the performance gain.
 