\documentclass[12pt]{article}

\topmargin -0.5in
\footskip 0.7in
\textwidth 6.5in
\textheight 9.0in
\oddsidemargin 0.1in
\evensidemargin 0.1in
\parindent0pt\parskip1ex

\usepackage{amsmath,algorithmic,comment,subfigure,graphicx,ifthen,epsfig}
\usepackage[ruled,vlined]{algorithm2e}
\newcommand{\tight}{\baselineskip=8pt}


\tolerance=750


\title{CS 267 Homework 2, Part 2}

\author{ Benjamin Lipshitz (lipshitz@berkeley.edu) \\
Edgar Solomonik (solomon@eecs.berkeley.edu) \\
Brian Van Straalen (bvs@eecs.berkeley.edu) \\ 
} 

\begin{document}

\maketitle
\section{Introduction}

   This homework required a faster implementation of the short-range particle simulation using a CUDA-based GPU.  Our team produced two versions.  The first takes the same {\tt particle\_t*} binning scheme used to make an $O(n)$ algorithm in serial and executes it on the GPU.  The second uses the same logical binning approach, but linearizes the bins into a sorted array of {\tt particle\_t} plus a secondary array of bin offsets.

\section{Original Performance}

  The example code scales as $O(n^2)$ with increasing particle count.
  
\begin{table}[htdp]
\caption{Original $O(n^2)$ code}
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
particles & time (s) on single Hopper core & time (s) on Fermi processor\\
\hline
5k  & 139.68 & 9.15 \\
10k & 590.20 & 34.15  \\
20k &   -    & 140.80 \\
40k &   -    & 557.92 \\
\hline
\end{tabular}
\end{center}
\label{original}
\end{table}


\section{$O(n)$ Implementation 1}

This code version is a direct port of the $O(n)$ algorithm developed in Part 1 of Homework 2 to {\tt \_\_device\_\_} code on the GPU.

We change our bins to be device-allocated:
\begin{verbatim}
    cudaMalloc((void **) &binArray, MAX_BIN*numBins* sizeof(particle_t*));
    cudaMalloc((void **) &bins, numBins* sizeof(particle_t**));
    cudaMalloc((void **) &num_particles_in_bin, numBins * sizeof(int));
\end{verbatim}

Then set up the pointers on the device. These are our short-range interaction bins.
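That pointer setup can itself be done on the device.  A minimal sketch is below, assuming the allocations above; the kernel name {\tt setup\_bin\_pointers\_gpu} is ours, not part of the homework code:

\begin{verbatim}
// Sketch (hypothetical kernel name): bin i owns MAX_BIN consecutive
// pointer slots in the flat binArray allocation, so each thread aims
// one bin at its slice.
__global__ void setup_bin_pointers_gpu(particle_t ***bins,
                                       particle_t **binArray,
                                       int numBins)
{
  int tid = threadIdx.x + blockIdx.x * blockDim.x;
  if (tid >= numBins) return;
  bins[tid] = binArray + tid * MAX_BIN;
}
\end{verbatim}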

When the threads are filling bins we have to be thread-safe; to manage this we used {\tt atomicAdd}.  Starting from a zeroed {\tt num\_particles\_in\_bins}, each thread looks at one particle and atomically increments that bin's count.  {\tt atomicAdd} also returns the value the counter held before the increment, which is precisely the index at which to store this particle.

\begin{verbatim}
 int tid = threadIdx.x + blockIdx.x * blockDim.x;
 if(tid >= n) return;


 // Map this thread's particle to its bin.
 int bin_x = (int)(particles[tid].x/cell_size);
 int bin_y = (int)(particles[tid].y/cell_size);
 int bin = bin_y*bins_dim+bin_x;

 // The old count is this particle's slot within the bin.
 int index = atomicAdd(num_particles_in_bin+bin, 1);
 particle_t** thisBin = bins[bin];
 thisBin[index] = particles+tid;
\end{verbatim}

Like the serial code, this version runs in $O(n)$ time up to about 10M particles, at which point we appear to run out of memory.


\begin{table}[htdp]
\caption{Implementation 1 $O(n)$ code}
\begin{center}
\begin{tabular}{|c|c|}
\hline
particles &  time (s) on Fermi processor\\
\hline
5k  & 0.2979 \\
10k & 0.4336 \\
20k & 0.7292 \\
40k &  1.329 \\
80k &  2.539 \\
320k&  9.907 \\
640k&  19.746 \\
1.28m & 39.400 \\
2.56m & 79.599 \\
5.12m & 162.84 \\
\hline
\end{tabular}
\end{center}
\label{impl1}
\end{table}

Since the different calls to {\tt \_\_device\_\_} code use different thread-block arrangements, we have to place
{\tt cudaThreadSynchronize()} calls between the host-side launches.  This code is now probably limited mostly by the fact that all data accesses go to GPU global memory, which carries a latency of roughly 400 cycles per access.  Attempts to use the CUDA Profiler to examine the {\tt gld\_coherent}, {\tt gld\_incoherent}, {\tt gst\_coherent}, and {\tt gst\_incoherent} events produced only errors saying the hardware does not support these compute-capability-1.x events; we were fairly sure Fermi had these hardware counters.




\section{Implementation 2 of $O(n)$ code}
For this implementation we wanted to put the particles in bin-order at
each time-step to provide contiguous memory access and to reduce the
number of layers of pointers required to access the particles.  Hence
the binning process is divided into 3 steps:

\begin{enumerate}
\item We calculate the bin that each particle belongs to in
  parallel.  As in implementation~1, we use {\tt atomicAdd} to keep
  track of {\tt num\_particles\_in\_bins} and record the return value
  to keep track of where the particle will reside relative to the
  start of its bin in the array {\tt particle\_offsets\_in\_bins}.
\item We calculate the {\tt prefix\_sum} of {\tt
    num\_particles\_in\_bins} so that it now stores the index in a
  sorted particle list of the start of each bin.
\item Finally we reorder the list, knowing the starting index of each
  bin from {\tt num\_particles\_in\_bins} and the offset of each
  particle relative to the start of its bin from {\tt
    particle\_offsets\_in\_bins}.
\begin{verbatim}
 int tid = threadIdx.x + blockIdx.x * blockDim.x;
 if(tid >= n) return;

 particle_t p = old_particles[tid];
 new_particles[bin_positions[p.bin]+particle_offsets[tid]] = p;
\end{verbatim}
\end{enumerate}
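Step~1 above can be sketched as follows, consistent with the reorder snippet in step~3; the kernel name {\tt assign\_bins\_gpu} and the {\tt bin} field of {\tt particle\_t} are our assumptions:

\begin{verbatim}
// Sketch of step 1 (kernel name and the .bin field are assumptions):
// each thread bins one particle; the atomicAdd return value is the
// particle's offset from the start of its bin.
__global__ void assign_bins_gpu(particle_t *particles, int n,
                                int *num_particles_in_bins,
                                int *particle_offsets,
                                double cell_size, int bins_dim)
{
  int tid = threadIdx.x + blockIdx.x * blockDim.x;
  if (tid >= n) return;

  int bin_x = (int)(particles[tid].x / cell_size);
  int bin_y = (int)(particles[tid].y / cell_size);
  int bin   = bin_y * bins_dim + bin_x;

  particles[tid].bin = bin;   // cached for the reorder step
  particle_offsets[tid] = atomicAdd(num_particles_in_bins + bin, 1);
}
\end{verbatim}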

Because particles in neighboring bins are now contiguous in memory,
{\tt compute\_forces\_gpu} can access them faster.  In particular, as
long as the particle under consideration is not near the edge of the
domain, the three adjacent bins in each row occupy one contiguous
range, so we can accumulate the force contributions from all three
with a single bin lookup.
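Concretely, for an interior particle the loop over one row of three neighboring bins collapses into a single contiguous scan.  The sketch below uses our naming: {\tt bin\_positions} holds the prefix-summed bin starts, {\tt r} and {\tt c} are the bin row and column of particle {\tt p}, and {\tt apply\_force\_gpu} stands in for the pairwise force routine:

\begin{verbatim}
// Sketch (our naming): bins (r, c-1), (r, c), (r, c+1) are contiguous
// in the sorted particle array, so one index range covers all three.
int row_first = bin_positions[r * bins_dim + (c - 1)];
int row_last  = bin_positions[r * bins_dim + (c + 2)]; // one past (r, c+1)
for (int j = row_first; j < row_last; j++)
  apply_force_gpu(p, particles[j]);
\end{verbatim}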

\begin{table}[htdp]
\caption{Implementation 2 $O(n)$ code}
\begin{center}
\begin{tabular}{|c|c|}
\hline
particles &  time (s) on Fermi processor\\
\hline
5k  & 0.2394 \\
10k & 0.3289 \\
20k & 0.5336 \\
40k &  0.9344 \\
80k &  1.780 \\
160k & 3.429 \\
320k&  6.687 \\
640k&  13.474 \\
1.28m & 26.917 \\
2.56m & 55.015 \\
5.12m & 111.079 \\
\hline
\end{tabular}
\end{center}
\label{impl2}
\end{table}

With these optimizations, implementation~2 runs about 30\% faster than
implementation~1.  However, all memory accesses still go to GPU global
memory; to get better performance we needed to make use of {\tt
  \_\_shared\_\_} memory where possible.


\subsection{Shared memory implementation of prefix\_sum}

We profiled our implementations using the environment variable {\tt CUDA\_PROFILE}.  This allowed us to
quickly determine that our prefix sum implementation was suffering overhead from launching a separate
kernel for each parallel-prefix stride.  Moreover, even apart from this problem, the overall
execution time of the prefix sum was as large as that of the other kernels.  So, we decided to focus on
optimizing the prefix sum kernel in order to improve overall performance.

Our new prefix sum algorithm makes only three kernel invocations, which carry out the following steps (the first two steps share a kernel):
\begin{enumerate}
\item Do a prefix sum within each thread block on some contiguous sections of the data as follows
  \begin{enumerate}
  \item Declare a shared memory buffer the size of the number of threads in the block and load a
	distinct shared buffer location with each thread in a fully coalesced fashion.
  \item Do a parallel prefix sum on the data in the shared memory buffer using the \_\_syncthreads() primitive
	before each read and before each write.
  \item Use a size 1 shared memory buffer to maintain a ongoing offset that is equal to the total
	sum of all the counts seen so far, and add this offset before each write back to global memory.
  \end{enumerate}
\item Write the total sum of counts computed by each block into a unique location in a global buffer.
\item Do a prefix sum over the per-block totals written in the previous step.  This prefix sum has length
      equal to the number of thread blocks used, which is at most 30, so a single block suffices.
\item Invoke the same thread blocks as in step~1 and iterate over all the counts, offsetting each by the
      global offset computed in the previous step.
\end{enumerate}
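A simplified sketch of the per-block scan in step~1 follows.  The kernel name is ours, and for brevity it scans only one chunk per block, whereas our actual kernel loops over several chunks while carrying a running offset in a one-element shared buffer:

\begin{verbatim}
#define THREADS 256

// Simplified sketch (names ours): each block does an inclusive
// Hillis-Steele scan of one chunk of counts in shared memory, then
// writes its chunk total into block_totals for the step-3 scan.
__global__ void scan_blocks_gpu(int *counts, int *block_totals, int n)
{
  __shared__ int buf[THREADS];
  int i = blockIdx.x * blockDim.x + threadIdx.x;

  // Coalesced load into shared memory (zero-pad past the end).
  buf[threadIdx.x] = (i < n) ? counts[i] : 0;
  __syncthreads();

  // Sync before each read and before each write, as described above.
  for (int stride = 1; stride < blockDim.x; stride *= 2) {
    int v = (threadIdx.x >= stride) ? buf[threadIdx.x - stride] : 0;
    __syncthreads();
    buf[threadIdx.x] += v;
    __syncthreads();
  }

  if (i < n) counts[i] = buf[threadIdx.x];
  if (threadIdx.x == blockDim.x - 1)
    block_totals[blockIdx.x] = buf[threadIdx.x]; // total of this chunk
}
\end{verbatim}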

This method launches far fewer kernels and is much more efficient.
After this optimization we achieved the performance shown in Table~\ref{opt}.
The scaling is clearly linear, and the performance is strictly better than that of the previous version.
\begin{table}[htdp]
\caption{Implementation 2 with optimized prefix sum}
\begin{center}
\begin{tabular}{|c|c|}
\hline
particles &  time (s) on Fermi processor\\
\hline
5k & 0.168 \\
10k & 0.253 \\
20k &  0.477 \\
40k &  0.837 \\
80k & 1.577 \\
160k&  2.982 \\
320k&  5.829 \\
640k & 11.56 \\
1.28m & 23.05 \\
2.56m & 46.25 \\
5.12m & 93.10 \\
\hline
\end{tabular}
\end{center}
\label{opt}
\end{table}

\section{The Mysteries of Verification}

Early on in the project we struggled to get our GPU code to generate the same results as our known serial benchmark.  We had already changed the seed of the random generator so that we were certain the codes had exactly the same starting conditions.  On finer investigation we discovered an interesting trend in our computations: comparing our various GPU code versions against the reference (non-GPU) version, we observed the growing deviations shown in Table~\ref{verification}.  It would seem that our homework had unwittingly (or perhaps wittingly) performed the kind of benchmark described by D. Bailey, J. Demmel, W. Kahan, G. Revy, and K. Sen, {\it Techniques for the automatic debugging of scientific floating-point programs}, 2010 ({\tt www.eecs.berkeley.edu/\~{}ballard/projects/CS263paper.pdf}).  One suspect was that Fermi might not execute IEEE-compliant floating point, though the manual says it does.  Another is the integrator: velocity Verlet integration is higher order and generally used (though it does have a stability constraint), but the reality is that the code uses symplectic Euler integration, not velocity Verlet.  This system has no damping in it, except implicitly through the force cut-offs.  A symplectic integrator should be the gold standard in stability, but here it is not.  It would be good to run a single-precision and a double-precision version of the code and check for exponential divergence.  Exponential divergence is a hallmark of two different phenomena: an unstable integrator, or a chaotic system.  An experiment with a finer timestep could discriminate between these options, and the Lyapunov exponent could be calculated.
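For reference, the two integrators differ as follows (with timestep $\Delta t$ and acceleration $a$): symplectic Euler uses the freshly updated velocity in the position update, while velocity Verlet averages the accelerations at the two endpoints of the step:
\begin{align*}
\text{symplectic Euler:}\quad
  v_{n+1} &= v_n + a(x_n)\,\Delta t, &
  x_{n+1} &= x_n + v_{n+1}\,\Delta t;\\
\text{velocity Verlet:}\quad
  x_{n+1} &= x_n + v_n\,\Delta t + \tfrac{1}{2}a(x_n)\,\Delta t^2, &
  v_{n+1} &= v_n + \tfrac{1}{2}\bigl(a(x_n)+a(x_{n+1})\bigr)\,\Delta t.
\end{align*}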

\begin{table}[htdp]
\caption{Verification study of GPU algorithm}
\begin{center}
\begin{tabular}{|c|c|}
\hline
time step & deviation from reference code \\
\hline
100 &   10th decimal place \\
600 &    7th decimal place \\
1000 &   3rd decimal place \\
\hline
\end{tabular}
\end{center}
\label{verification}
\end{table}


\end{document}
