%
%  CS267 Homework 2 Part 1 submission
%
%  Created by Yangqing Jia, Nils Peters, and Ian Tullis.
%  Copyright (c) 2011 . All rights reserved.
%
\documentclass[]{article}

% Use utf-8 encoding for foreign characters
\usepackage[utf8]{inputenc}

\usepackage{times}

% Setup for fullpage use
\usepackage{fullpage}

% Multipart figures
\usepackage{subfig}

% More symbols
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{amsthm}

% Surround parts of graphics with box
\usepackage{boxedminipage}

% Package for including code in the document
\usepackage{color}
\usepackage{listings}
\lstset{language=C,numbers=left,numbersep=5pt,frame=single}
\usepackage{fancyvrb}

% This is now the recommended way for checking for PDFLaTeX:
\usepackage{ifpdf}

\newif\ifpdf
\ifx\pdfoutput\undefined
\pdffalse % we are not running PDFLaTeX
\else
\pdfoutput=1 % we are running PDFLaTeX
\pdftrue
\fi

\ifpdf
\usepackage[pdftex]{graphicx}
\else
\usepackage{graphicx}
\fi

% Use \todo to mark todo items
\newcommand{\todo}[1]{{\color{red} TODO: #1}}
% Use \note to add notes
\newcommand{\note}[1]{{\color{blue} NOTE: #1}}
% use \hide{something} if you do not want something to show up, but 
% still want to keep them in the source file
\newcommand{\hide}[1]{}


\title{CS267 HW 2: Parallelize Particle Simulation}
\author{ Yangqing Jia, Nils Peters, Ian Tullis\\
 		( jiayq@eecs., nils@icsi., itullis@ ) berkeley.edu}

\begin{document}

\ifpdf
\DeclareGraphicsExtensions{.pdf, .jpg, .tif}
\else
\DeclareGraphicsExtensions{.eps, .jpg}
\fi

\maketitle

\section{Introduction}

In this report, we first outline the algorithm that achieves $O(n)$ complexity in serial code. We then discuss our OpenMP, MPI, and GPU implementations, and empirically evaluate their performance.

\section{The $O(n)$ Serial Algorithm}
The main problem with the naive code is that it computes the pairwise forces between every two particles during each step of the simulation, which yields a complexity of $O(n^2)$. As described on the assignment webpage, the particles in the simulation area are sufficiently sparse that only $O(n)$ interactions are expected. (For a simulation of $n$ particles, the area is set to $O(n)$ according to the code; thus the edge length of the square is $O(\sqrt{n})$.)

Our idea is to segment the area into small bins, where each bin is expected to contain only a constant number of particles. A natural choice is to give each bin an edge length equal to the range of the interaction forces ({\tt cutoff} in the provided code). There will then be $O(n)$ such bins, each containing $O(1)$ particles. Now, to compute the force applied to a given particle, we only need to check the particles in the same bin as the given particle or in the 8 neighboring bins. Thus, we only need to check $O(1)$ particles per force computation, which enables us to compute the forces for all particles in $O(n)$ time.

We used a C++ vector of length $O(n)$ to store the bins, where each bin is a vector of particle indices in that bin.  We will show that the complexity of necessary bookkeeping never exceeds $O(n)$. Specifically, for each simulation step, we need to do the following:
\begin{enumerate}
    \item Assign each particle to a bin. This requires us to first clear each bin, then add each particle to the corresponding bin based on its position, which takes $O(n)$ time in total.
    \item Compute forces. As described above, for each particle we only need to consider the particles in the same bin or in the 8 surrounding bins, so this takes $O(1)$ time per particle and $O(n)$ in total.
    \item Update particle positions. This depends only on the total force on each particle, so it is $O(1)$ per particle and $O(n)$ in total.
\end{enumerate}
Thus, the whole program runs in $O(n)$ time.
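The bin structure and the per-step bookkeeping above can be sketched as follows. This is a minimal, self-contained illustration; the struct and all names ({\tt BinGrid}, {\tt rebin}, {\tt candidates}) are ours for exposition, not code from the actual submission.

```cpp
#include <vector>

// Simplified stand-in for the assignment's particle struct.
struct particle_t { double x, y; };

// A flat, row-major vector of bins; each bin holds the indices of the
// particles currently inside it.
struct BinGrid {
    int bps;                                // bins per side, O(sqrt(n))
    double bin_size;                        // bin edge length = cutoff
    std::vector<std::vector<int>> bins;

    BinGrid(int bps, double bin_size)
        : bps(bps), bin_size(bin_size), bins(bps * bps) {}

    int bin_index(const particle_t &p) const {
        return (int)(p.y / bin_size) * bps + (int)(p.x / bin_size);
    }

    // Step 1 of each simulation step: clear all bins, then reassign every
    // particle by position -- O(bins + n) = O(n) work in total.
    void rebin(const std::vector<particle_t> &parts) {
        for (auto &b : bins) b.clear();
        for (int i = 0; i < (int)parts.size(); ++i)
            bins[bin_index(parts[i])].push_back(i);
    }

    // Step 2: the force candidates for a particle are the particles in its
    // own bin plus the 8 neighboring bins -- O(1) candidates per particle.
    std::vector<int> candidates(const particle_t &p) const {
        std::vector<int> out;
        int bx = (int)(p.x / bin_size), by = (int)(p.y / bin_size);
        for (int dy = -1; dy <= 1; ++dy)
            for (int dx = -1; dx <= 1; ++dx) {
                int nx = bx + dx, ny = by + dy;
                if (nx < 0 || ny < 0 || nx >= bps || ny >= bps) continue;
                for (int j : bins[ny * bps + nx]) out.push_back(j);
            }
        return out;
    }
};
```

Because {\tt candidates} touches at most 9 bins of expected constant occupancy, the force pass over all $n$ particles stays $O(n)$.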

To empirically show that the proposed algorithm runs in linear time, we ran both the naive implementation and our implementation with different numbers of particles. The log-log plot (Figure \ref{fig:serialspeed}) demonstrates that the complexity is linear. (Note that the slope of our method's curve is half that of the naive implementation, consistent with $O(n)$ versus $O(n^2)$ complexity.) An increase in computation time can be observed toward the right end of the blue curve, which we attribute to cache misses; after a size of about 20,000 particles, the slope returns to normal.

\begin{figure}
    \centering
    \includegraphics[width=0.5\textwidth]{figures/speed_serial}
    \caption{The log-log figure showing the computation time of our method (blue) and the naive implementation (red) vs.\ the number of particles. The experiment was carried out on a MacBook Pro with a 2.66 GHz CPU.}\label{fig:serialspeed}
\end{figure}

\section{The OpenMP Implementation}
To parallelize the serial code with OpenMP primitives, we explored parallelizing the for loops in the program. Specifically, there are three major loops over particles in the program (the loop over simulation steps obviously cannot be parallelized, so we do not count it), which we discuss separately below.

\begin{enumerate}
	\item Move particles. This is the most straightforward loop, since moving a particle does not affect the others.
	\item Compute forces. Although forces act between pairs of particles, the computation of the force on a specific particle can be easily parallelized. Furthermore, thanks to the binning trick introduced above, only a small subset of particles needs to be considered when computing the force on a specific particle.
	\item Assign each particle to a bin. This is the trickiest part. There are two possible ways to do the assignment: one is to loop over bins, and for each bin collect the particles that belong to it. This turned out to be highly inefficient, as there are many more bins than particles per bin to check. Instead, we adopted the second approach: loop over particles, and push each particle into the bin it belongs to.
	
	However, when parallelizing this loop, race conditions may occur: two threads might push two particles into the same bin at the same time. We used locks, one per bin, to solve this problem. When a thread needs to push a particle into a bin, it first requests the lock corresponding to that bin; after obtaining the lock, it pushes the particle and then releases the lock.
\end{enumerate}
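The lock-per-bin scheme can be illustrated with the following sketch. Note this is not our actual submission code: we used OpenMP locks ({\tt omp\_lock\_t}), whereas the sketch below uses {\tt std::mutex} and {\tt std::thread} so that it stands alone, but the request/push/release pattern is the same.

```cpp
#include <mutex>
#include <thread>
#include <vector>

// One mutex per bin; a thread must hold a bin's lock while pushing into it.
struct LockedBins {
    std::vector<std::vector<int>> bins;
    std::vector<std::mutex> locks;          // one lock per bin

    explicit LockedBins(int nbins) : bins(nbins), locks(nbins) {}

    void push(int bin, int particle) {
        std::lock_guard<std::mutex> guard(locks[bin]);  // request the bin's lock
        bins[bin].push_back(particle);                  // push while holding it
    }                                                   // lock released here
};
```

Because each bin has its own lock, two threads pushing into the same bin serialize on that bin only; pushes into different bins still proceed fully in parallel.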

The comparison between the OpenMP version and the serial version is shown in Figure \ref{fig:omp}; both run the $O(n)$ algorithm, since comparing OpenMP against the naive $O(n^2)$ serial version would not be meaningful. Since the complexity is linear, we plot the curves on a normal scale. To show that OpenMP introduces additional overhead (since it needs to deal with locks), we also plotted the OpenMP code running with only one thread. For this experiment, we used our own 16-core machine, allowing a maximum of 16 concurrent threads.

Several interesting findings may be observed from the figure:
\begin{enumerate}
	\item It is interesting to see that the measured complexities of all the methods are actually slightly higher than the theoretical linear complexity. We infer that increasing memory access cost (due to factors like cache misses) as the number of particles grows might be the cause.
	\item The OpenMP code does introduce overhead, which can be observed in all cases. For example, the OpenMP code with 2 threads provides virtually no benefit when the number of particles is below 10,000.
	\item As the number of particles increases, the speedup of the OpenMP code approaches its theoretical limit. For a fixed number of particles, adding more threads becomes less and less effective, possibly due to more frequent lock contention.
\end{enumerate}

\begin{figure}
	\centering
	\includegraphics[width=0.95\textwidth]{figures/omp_speed}
	\caption{Speed comparison between the serial and OpenMP codes with different numbers of threads. The left figure shows absolute computation time; the right figure shows the speedup ratio with the serial code as the baseline.}\label{fig:omp}
\end{figure}


\section{The MPI Implementation}
	Our mpi\_eff.cpp code is a straightforward adaptation of the provided mpi.cpp code to our binned serial\_eff method. As in mpi.cpp, we divide the particles among processors. Each processor has its own copy of the full bin data and the full particle data, plus a local buffer for modifying its own particles. After each processor has calculated the forces (from all other particles) on all of its own particles, we collect and distribute this data with MPI\_Allgatherv().

The results (particle positions) produced by our MPI code were close to, but not the same as, the results produced by our OpenMP and serial codes, which were identical to each other.\footnote{We compared results by setting the random seed to a fixed number instead of time(NULL) and then using diff on the output files.} However, they are the same as the results produced by the provided mpi.cpp code, so we think that that code is either numerically unstable (which seems unlikely) or has a small bug (which we were unable to find in time).

As the comment in mpi.cpp suggests, these ``all'' MPI methods are slow, and we were not surprised to find that this MPI code (which does not really do justice to the capabilities of MPI) performed worse than our slightly more sophisticated OpenMP code. The MPI speedup relative to our binned serial code was always substantially worse than the corresponding OpenMP speedup, whether we ran on Franklin or Hopper. Figure \ref{fig:mpi} shows the computation time of our MPI code compared against the $O(n)$ serial implementation. As the figure shows, the MPI implementation does achieve some speedup, but it remains far from the theoretical limit. The underlying algorithm on each processor is still $O(n)$ despite the multiple processors: the MPI\_Allgatherv() call and its implicit barrier incur too much communication, and the separate processors do too much redundant work (for instance, all of them assign the updated particles to bins independently), which prevents us from fully utilizing the power of parallelization.

\begin{figure}
	\centering
	\includegraphics[width=0.8\textwidth]{figures/performance-final-MPI}
	\caption{Speed comparison between the serial and MPI codes.}\label{fig:mpi}
\end{figure}

We were unable to correctly implement a more ambitious strategy that would have distributed bins, rather than particles, among processors. In this strategy, each processor keeps track of a subset of bins and all the particles therein. The bins are partitioned evenly among processors such that the simulation area is divided into horizontal bands, and (provided there are enough processors) each band is within particle-force range of at most two other bands. Therefore, each processor only needs to communicate with at most two other processors, and only needs up-to-date information for the particles contained in the bins handled by those neighbors. Neighboring processors also need to ``pass'' particles to one another whenever particles move between bands. We would have implemented this using point-to-point methods such as send and recv rather than ``all'' methods, and would have needed a barrier to ensure that no processor started the next simulation step before the others finished the current one. If we find time to implement this strategy successfully, we will submit the results.
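The band partition in this unimplemented design can be made concrete with a small ownership function. This is only a sketch of the intended decomposition; {\tt owner\_of} and its arguments are illustrative names, not code from our submission.

```cpp
// Bin rows are split into contiguous horizontal bands, one band per rank.
// A rank owning rows [first, last] only ever exchanges particles with the
// owners of the adjacent rows, owner_of(first - 1) and owner_of(last + 1).
int owner_of(int bin_row, int n_rows, int n_procs) {
    int rows_per_band = (n_rows + n_procs - 1) / n_procs;  // ceiling division
    return bin_row / rows_per_band;
}
```

Since a band's particles can only interact with particles at most one bin row away, each rank's communication partners are fixed and at most two, regardless of the total number of processors.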


\section{The GPU Implementation}
The GPU code is largely similar to the OpenMP code, since both follow the shared-memory model. We will therefore not elaborate on the implementation details, but one thing deserves mention: handling the race condition. In OpenMP we solved it by introducing locks, but in CUDA there is actually a better solution: atomic operations. Specifically, we represent each bin as an array of indices of the particles belonging to that bin, and keep a record of the current tail of the array. Whenever a thread pushes a particle into a bin, it first atomically increments the current tail of the array using atomicInc(), which returns the old tail position; the thread can then safely write the index of the inserted particle at that position. This operation is the counterpart of the lock code in OpenMP, and guarantees that the push operation is atomic.
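The atomic tail-increment pattern can be mimicked on the CPU with {\tt std::atomic} (a sketch of the idea only; in the actual kernel the atomic increment is performed on a per-bin tail counter in device memory, and the names below are ours):

```cpp
#include <atomic>
#include <vector>

// A fixed-capacity bin: a slot array plus an atomic tail counter.
// fetch_add(1) returns the old tail, handing each inserter a unique slot --
// the CPU analogue of the kernel's atomic increment on the bin's tail.
struct AtomicBin {
    std::vector<int> slots;
    std::atomic<int> tail{0};

    explicit AtomicBin(int capacity) : slots(capacity) {}

    bool push(int particle) {
        int pos = tail.fetch_add(1);                 // atomically reserve a slot
        if (pos >= (int)slots.size()) return false;  // bin is full
        slots[pos] = particle;                       // safe: pos is unique to us
        return true;
    }
};
```

No two inserters can receive the same value of {\tt pos}, so concurrent pushes never overwrite each other, without any locks.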

As a comparison between the naive and linear versions of the GPU code, we plotted the log-log speed curves in Figure \ref{fig:GPU}. Theoretically, the comparison should look similar to that between the naive and linear serial codes, and the results confirm this.

Further, we explored the influence of the number of threads per block on running speed. To this end, we chose the number of particles to be 100,000 and 1,000,000 respectively, and varied the number of threads per block from 16 to 256.\footnote{As recommended by the CUDA programming guide, it does not make sense to use a value that is not a power of 2.} The computation time is shown in Figure \ref{fig:gputhreads}. It can be observed that there is no universal setting that fits all cases. Unlike OpenMP, increasing the number of threads does not necessarily increase speed; we infer that the number of blocks as well as the number of threads per block both affect the speed, and tuning these parameters is tricky.

During the GPU programming we made the following observations about the advantages and disadvantages of programming with GPUs:
\begin{enumerate}
	\item Using a GPU makes it easy to scale the number of cores to hundreds while still maintaining the advantages of the shared-memory model. For example, we do not need to do message passing, and race conditions can be easily handled with atomic operations (at least in simple cases). As a result, the speedup can be significant: the GPU implementation can simulate 1 million particles in around 25 seconds, which is difficult for all the other methods.
	\item There seem to be limited ways to synchronize threads within a kernel function. Admittedly we can synchronize threads within each block, but it is impossible to synchronize across blocks. This forces us to write multiple kernel functions and call them separately, synchronizing in the main function, which may hurt performance since each kernel launch introduces significant overhead. Also, a kernel launch returns immediately in the main function, forcing us to call cudaThreadSynchronize() every time a kernel function is invoked.
	\item Debugging CUDA programs is not as easy as debugging normal programs, even with the help of cuda-gdb.
\end{enumerate}

As a final note, while testing the program we found some strange bugs whose causes we were unable to locate:
\begin{enumerate}
	\item When we run the CUDA program on dirac via batch job submission, sometimes the stdout output does not get returned. However, if we request an interactive job and execute the program there, the output is normal; this is how we ran the experiments in this report.
	\item When the number of particles is within a specific range, the program sometimes halts. We traced the program and it seemed to halt in the move() function. With 256 threads per block, this ``halt range'' is about 400--2,000 particles; outside this range, the program runs normally. However, when we tested the program on our own GPU (a GTX 295), the problem disappeared. We are not sure whether it is caused by some specific setting on the cluster, but we believe it is worth mentioning.
\end{enumerate}

\begin{figure}
    \centering
    \includegraphics[width=0.5\textwidth]{figures/speed_gpu}
    \caption{The log-log figure showing the computation time of our method (blue) and the naive GPU implementation (red) vs. the number of particles. The experiment was carried out on dirac, using 256 threads per block.}\label{fig:GPU}
\end{figure}

\begin{figure}
    \centering
    \includegraphics[width=0.8\textwidth]{figures/gpu_speed_vs_nthreads}
    \caption{The computation time vs.\ the number of threads per block, with 1,000,000 particles (left) and 100,000 particles (right). The best configuration is marked in each case.}\label{fig:gputhreads}
\end{figure}

\end{document}

