\documentclass[12pt]{article}

\topmargin -0.5in
\footskip 0.7in
\textwidth 6.5in
\textheight 9.0in
\oddsidemargin 0.1in
\evensidemargin 0.1in
\parindent0pt\parskip1ex

\usepackage{amsmath,algorithmic,comment,subfigure,graphicx,ifthen,epsfig}
\usepackage[ruled,vlined]{algorithm2e}
\newcommand{\tight}{\baselineskip=8pt}


\tolerance=750


\title{CS 267 Homework 2}

\author{ Benjamin Lipshitz (lipshitz@berkeley.edu) \\
Edgar Solomonik (solomon@eecs.berkeley.edu) \\
Brian Van Straalen (bvs@eecs.berkeley.edu) \\ 
} 

\begin{document}

\maketitle

\section{$O(n)$ Serial Implementation}

To turn this algorithm into an $O(n)$ implementation we need to exploit the
force cut-off distance.  Space is partitioned into enough equal-sized bins
that each bin contains only $O(1)$ particles, with each bin at least as large
as the force interaction distance.
The algorithm then proceeds as follows:

\begin{verbatim}
for each STEP
  sort each particle into its bin             time = O(n)
  for mybin in binList                        time = O(n)
     for each nbin that is mybin or a neighbor of mybin   time = O(1)
        for each particle p in mybin          time = O(1)
           for each particle np in nbin       time = O(1)
              apply_force(p, np)
           end
        end
     end
  end
  for each particle p                         time = O(n)
     move(p)
  end
end
\end{verbatim}

We used an array of vectors of pointers to particles as our data structure.
The array can be allocated once we determine the granularity of our bins and
does not need dynamic resizing. We used STL vectors to keep track of the
particles belonging to each bin, since they are efficient, easy to use, and most
importantly resizable. We stored pointers to particles in these vectors
to minimize copy overhead.
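
A minimal sketch of this data structure, with hypothetical names (the real
particle type comes from the assignment harness), might look like:

```cpp
#include <algorithm>
#include <vector>

// Hypothetical particle type; the real one comes from the homework harness.
struct particle_t { double x, y; };

// Flat array of bins, allocated once after the bin size (>= the force
// cut-off) is chosen; each bin is a resizable vector of particle pointers.
struct Grid {
    double bin_size;
    int nbins;                                   // bins per side
    std::vector<std::vector<particle_t*> > bins;

    Grid(double domain, double cutoff) {
        nbins    = std::max(1, (int)(domain / cutoff));
        bin_size = domain / nbins;
        bins.resize(nbins * nbins);
    }

    // O(1) per particle; bins are cleared and refilled every step.
    void insert(particle_t* p) {
        int bx = std::min(nbins - 1, (int)(p->x / bin_size));
        int by = std::min(nbins - 1, (int)(p->y / bin_size));
        bins[by * nbins + bx].push_back(p);
    }
};
```

Storing pointers rather than particle copies means only 8 bytes move per
particle during the per-step re-binning.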

We also modified both the example serial program and ours to use the same
fixed random-number-generator seed.  This lets us diff the outputs of the two
programs and verify the correctness of our optimized code.

\begin{figure}[t]
\centering
\includegraphics[width=3.0in]{plots/serial}
\caption{Performance of original and optimized serial implementation.}
\label{fig:serial}
\end{figure}

Figure~\ref{fig:serial} demonstrates the performance of our binned serial implementation.
The scaling of our implementation is $O(n)$ instead of $O(n^2)$ and we perform significantly
better for all problem sizes tested. These results suggest that our binning strategy
suffers little overhead and achieves the expected scaling.


\section{OpenMP Implementation}
To convert our serial code into a shared memory version, we had to
parallelize three parts: binning, force calculation, and particle
movement.  The parallelization of each is implemented
slightly differently, and the threads are synchronized after each of
the three steps.

The bins are divided into $p$ (the number of threads)
rectangular blocks.  For the binning, each thread looks at all of the
particles, but only adds to the bins it controls.  This limits our
ability to scale to a large number of threads, but gets the binning
done cleanly without race conditions or locks.  To improve cache
performance, the threads read through the particles in different
orders.

To calculate the force, each thread calculates the forces on
the particles in the bins it controls, reading from other threads'
bins as necessary on the boundaries.  Finally for the move step the
particles are simply divided among the threads.
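
The lock-free binning trick can be sketched as follows, in a simplified
one-dimensional form with hypothetical names (without {\tt -fopenmp} the
pragmas are ignored and the code runs serially with identical results):

```cpp
#include <algorithm>
#include <vector>
#ifdef _OPENMP
#include <omp.h>
#endif

struct particle_t { double x; };

// Every thread scans the whole particle array but appends only to the
// contiguous block of bins it owns, so no locks or atomics are needed.
void bin_particles(const std::vector<particle_t>& parts,
                   std::vector<std::vector<int> >& bins, double bin_size) {
    for (auto& b : bins) b.clear();
    int nbins = (int)bins.size();
    #pragma omp parallel
    {
#ifdef _OPENMP
        int t = omp_get_thread_num(), T = omp_get_num_threads();
#else
        int t = 0, T = 1;                  // serial fallback
#endif
        int lo = t * nbins / T, hi = (t + 1) * nbins / T;  // bins owned by t
        for (int i = 0; i < (int)parts.size(); i++) {
            int b = std::min(nbins - 1, (int)(parts[i].x / bin_size));
            if (b >= lo && b < hi) bins[b].push_back(i);
        }
    }
}
```

In our actual code each thread additionally reads through the particles in a
different order to improve cache behavior.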

\begin{figure}[t]
\centering
\includegraphics[width=3.0in]{plots/openmp}
\caption{Performance of original and optimized OpenMP implementation.}
\label{fig:openmp1}
\end{figure}

\begin{figure}[t]
\centering
\includegraphics[width=3.0in]{plots/openmpstrong}
\caption{Strong scaling of our OpenMP implementation.}
\label{fig:openmp2}
\end{figure}
 
As shown in Figure~\ref{fig:openmp1}, this implementation is also
\(O(n)\) up to a very large number of particles.  As seen in Figure~\ref{fig:openmp2}, however, it does not
scale well to more than a few threads.  For up to 6 threads, it achieves
over 50\% efficiency, but 8 threads gives almost no improvement, and
running on more than 8 cores is actually slower than running on
fewer.  This is because our binning procedure is not truly parallel,
but other shared-memory binning methods, such as lock-based ones, would
also have trouble scaling to many cores.

\section{MPI Implementation} 

Our MPI parallelism strategy is to divide the problem domain into equal rectangles and assign each MPI process its own large bin.  Within each MPI bin we then apply our $O(n)$ algorithm from Part 1.  With the initial distribution, and after each advance, the particles are re-sorted at both levels of binning.  The extra complexity in the distributed-memory version of the code is that each bin partially overlaps its neighbors by the prescribed interaction cut-off distance, so particles can appear on several processors.  {\tt apply\_force} is executed on all local particles; the local particle list is then pruned of particles lying in the processor's ghost region, leaving only the locally owned particles to be moved.

This performs well as we add nodes to our parallel job, but the
overhead makes it very inefficient when run on only a small number of
processors.  See Figure~\ref{fig:mpi}.  As long as there are more than
about 500 particles per process, the code scales well to many processors.

\begin{figure}[t]
\centering
\includegraphics[width=3.0in]{plots/mpistrong}
\caption{Strong scaling of our MPI implementation.}
\label{fig:mpi}
\end{figure} 

Within any individual Hopper node, however, there are problems scaling
this code to many particles; see Table~\ref{mpi_linear}.

\begin{table}[htdp]
\caption{Single-node ($p=24$) MPI runs: time as a function of problem size for fixed $p$. Scaling is roughly linear up to 20k particles but degrades beyond that.}
\begin{center}
\begin{tabular}{|c|c|}
\hline
particles & time (s) \\
\hline
5k  & 0.187 \\
10k & 0.334 \\
20k & 0.647 \\
40k & 1.76  \\
80k & 3.87  \\
\hline
\end{tabular}
\end{center}
\label{mpi_linear}
\end{table}

Further instrumenting the code with TAU produced more information.  The situation deteriorated as more and more particles were added to the simulation: at 640k particles, TAU reported that most of our time was spent gathering the global histogram of which particles were to be moved.  To see whether this was an effect of {\tt MPI\_Allgather} or of load imbalance, an {\tt MPI\_Barrier} was placed before the call to {\tt MPI\_Allgather}. The TAU results are now more informative (see Figure~\ref{tau}).
\begin{figure}
\small
\begin{verbatim}
FUNCTION SUMMARY (mean):
---------------------------------------------------------------------------------------
%Time    Exclusive    Inclusive       #Call      #Subrs  Inclusive Name
              msec   total msec                          usec/call
---------------------------------------------------------------------------------------
100.0          683       11,592           1           1   11592771 main int (int, char **)
 94.1        4,970       10,909           1         400   10909519 main loop
 37.5           10        4,341         100         600      43415 migrate
 29.1        3,369        3,369         100           0      33691 Barrier before allgather
  7.9          917          917         100           0       9174 apply_force loop
  6.7          771          771         100           0       7715 bin sort
  5.5          633          633         100           0       6339 local binning
  1.5          174          174         100           0       1745 Isend+Irecv
  0.4           46           46         100           0        464 move
  0.1           10           10         100           0        105 Waitall
  0.0            4            4         100           0         43 allgather
  0.0        0.735        0.735         100           0          7 build counters
\end{verbatim}
\caption{TAU output from instrumented MPI implementation. p=24, n=640k, 100 steps}
\label{tau}
\end{figure}

So, either synchronization within a Hopper node is terrible, or a very large amount of imbalance is generated within each iteration.

The summaries alone are not enough to diagnose things further, so our code was instrumented to generate traces itself (TAU on Hopper does not appear to be configured properly for trace generation).

First, a trace of the time spent in {\tt MPI\_Barrier} was made, shown in Figure~\ref{barrier}. Keeping in mind that an entire iteration takes roughly 0.1 seconds, the effect is enormous.  The effect is also not random: rank 0 is always stalled for a long time at the barrier, while rank 23 spends no time at the barrier.  So the job looks load imbalanced.  Next, we look at how much load we are giving each processor.  Figure~\ref{particles} shows a trace of how many particles each process has.  We can see four distinct groupings, corresponding to processors in the corners, those adjacent to an $x$-edge, those adjacent to a $y$-edge, and the interior processors.  While evocative, the figure shows that the particle load is balanced to within 1\%, nowhere near enough to explain what we are seeing.  So it is not {\it load imbalance} but {\it execution imbalance}: processes 0 and 23 see nearly identical workloads but have dramatically different execution times between bulk-synchronization points.  There also appears to be some non-uniformity in the initial particle distribution, with slightly lower density near the range extremes.

\begin{figure}
\includegraphics[width=5.0in]{plots/barrier}
\caption{Time spent in Barrier for MPI. p=24, n=640k, 100 steps}
\label{barrier}
\end{figure}

\begin{figure}
\begin{verbatim}
#include <time.h>
#include <list>
#include <mpi.h>
// read_timer() is the wall-clock timing helper from the assignment harness.

int main(int argc, char** argv)
{
  MPI_Init(&argc, &argv);
  timespec a, b;
  a.tv_sec  = 0;
  a.tv_nsec = 20000;            // sleep ~20 microseconds between barriers
  std::list<double> t;
  int rank;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  for (int i = 0; i < 100; i++)
    {
      double dd = read_timer();
      MPI_Barrier(MPI_COMM_WORLD);
      dd = read_timer() - dd;
      if (i > 15)               // discard warm-up iterations
        t.push_back(dd);
      nanosleep(&a, &b);
    }
  MPI_Finalize();
  return 0;
}
\end{verbatim}
\caption{Simple Barrier benchmark example}
\label{barrierBenchmark}
\end{figure}


\begin{figure}
\includegraphics[width=5.0in]{plots/particles}
\caption{Particles for each process. p=24, n=640k, 100 steps}
\label{particles}
\end{figure}


\begin{figure}
\includegraphics[width=5.0in]{plots/applyForce}
\caption{Time spent in the apply\_force loop. p=24, n=640k, 100 steps}
\label{applyForce}
\end{figure}

Given the same particle load, how does each core on the Hopper die perform?  One possibility is a profound bias in the execution of communication that favors a swift return of process 0 over high-rank processes.  A simple benchmark code, shown in Figure~\ref{barrierBenchmark}, was implemented to test this.  The benchmark shows that the imbalance of the barrier itself is on the order of $10^{-4}$ seconds, so communication effects are not likely to be the culprit.  If we trace just the {\tt apply\_force} portion of the job (Figure~\ref{applyForce}), we see that a load imbalance of less than 1\% produces a performance variation of over 20\%.  If local bin sorting, MPI bin sorting, and moving all suffer a similar effect, then scaling on-node is difficult with a static balanced execution model.  This suggests that on-node performance might benefit from a more dynamic execution model.

\section{Hybrid MPI+OpenMP implementation and further optimizations}

Since our pure OpenMP implementation did not overcome the NUMA effects of the 4-socket Hopper node
and the original MPI version showed synchronization problems due to noise on the node, we
implemented a hybrid version and added further optimizations. Our original guess was that a
single MPI process per Hopper NUMA node, running with 6 OpenMP threads, would be the best
approach. However, we quickly found some interesting trade-offs.

\subsection{Architecture of the hybrid code}

Designing a hybrid MPI+OpenMP code forced us to deviate from some of the approaches we took for
the pure codes. These deviations included sequential optimizations, as we found that some of our
parallel approaches prevented the code from scaling as $O(n)$.

\subsubsection{Further sequential optimizations}

In order to efficiently add OpenMP to our MPI code, we restructured and optimized the MPI implementation
to reveal more parallelism and fewer sequential bottlenecks. Our experience and performance analysis
with the MPI code suggested that noise was creating significant load imbalance. We refactored the code
to avoid potentially noisy parts. The optimizations included:

\begin{enumerate}

\item On allocation of our particle buffers/vectors we reserved extra space to avoid reallocation of buffers.
      These types of reallocations could have been creating significant copy and memory allocation work on a select
      few processors during the migrate stage and therefore causing significant noise.

\item Our migration code simultaneously sends the particles in the buffer region as well as those that have
      moved into a different region. The buffered particles are later pruned out, since they should really only
      belong to one processor. We reconfigured the pruning to explicitly copy to a new buffer and rotate buffers
      rather than simply using the STL vector erase() function. This optimization avoided the noise associated
      with removing a sparse set of particles from the array and also allowed the pruning to be parallelized.

\item We changed the migration code to explicitly manage the set of particles that are not migrating and to only
      copy those particles once. By explicitly keeping track of this set of particles, we were also able to
      parallelize their management explicitly so that no thread gets stuck managing the entire set.

\end{enumerate}

The above optimizations seem to have removed much of the noise we previously saw in the code. They also
facilitated an OpenMP parallelization of the MPI code that scales as $O(n/p)$.

\subsubsection{Parallelizing MPI code with OpenMP}

Our OpenMP parallelization was very aggressive. We defined the entire loop, including the call to migrate,
as a parallel OpenMP region and explicitly managed anything that had to be done atomically. The computation
that was previously parallelized with OpenMP was handled in the same fashion. The pruning code was parallelized
by first accumulating per-thread counts of the particles that need to stay, performing a prefix sum on those
counts, and then copying in parallel. We also parallelized the copying of the particles that are staying
in the migrate function. This type of parallelization may seem pointless, since copying is memory-bandwidth
bound; in practice, however, it can speed up the code due to better overlap, more total L1 cache, and less
reliance on synchronization.
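
The count/prefix-sum/copy scheme can be sketched as follows (hypothetical
names; assuming the parallel regions run with {\tt omp\_get\_max\_threads()}
threads, and falling back to serial execution without {\tt -fopenmp}):

```cpp
#include <vector>
#ifdef _OPENMP
#include <omp.h>
#endif

// Each thread counts the survivors in its block, an exclusive prefix sum
// over the per-thread counts gives each thread its output offset, and the
// final copy then proceeds with no synchronization.
std::vector<int> parallel_prune(const std::vector<int>& in,
                                const std::vector<char>& keep) {
#ifdef _OPENMP
    int T = omp_get_max_threads();    // assumed number of region threads
#else
    int T = 1;
#endif
    int n = (int)in.size();
    std::vector<int> counts(T + 1, 0);
    #pragma omp parallel
    {
#ifdef _OPENMP
        int t = omp_get_thread_num();
#else
        int t = 0;
#endif
        int lo = t * n / T, hi = (t + 1) * n / T;
        for (int i = lo; i < hi; i++) counts[t + 1] += keep[i];
    }
    for (int t = 0; t < T; t++) counts[t + 1] += counts[t];  // prefix sum
    std::vector<int> out(counts[T]);
    #pragma omp parallel
    {
#ifdef _OPENMP
        int t = omp_get_thread_num();
#else
        int t = 0;
#endif
        int lo = t * n / T, hi = (t + 1) * n / T, pos = counts[t];
        for (int i = lo; i < hi; i++)
            if (keep[i]) out[pos++] = in[i];   // each thread writes its own range
    }
    return out;
}
```

The compaction is stable: the surviving particles keep their original order.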

We also parallelized the actual MPI communication calls: each OpenMP thread performs a portion of the sends and
waits on a portion of the receives. Such parallelization can be managed by the Hopper machine with the
proper set of flags: the code must be compiled and linked with {\tt -lmpich\_threadm}, and the job script must
contain {\tt setenv MPICH\_MAX\_THREAD\_SAFETY multiple}. The PGI compiler seems to have had more difficulty
with this than GCC: a static allocation of dynamic size in a parallel region appears to be incompatible
with PGI, and we also had to remove the SSE flag to get it to run. In the end, our results were collected with
GCC, since it had fewer problems and performed better.

\subsection{Performance of the hybrid code}

A large increase in performance came from this aggressive campaign to reduce variability in execution time.  For the pure MPI case (no OpenMP), the revised TAU output from a 640k-particle run is shown in Figure~\ref{revised_tau}.  Our prime suspect for the remaining variability is the memory subsystem in the Cray CNL kernel, but confirming that will take further investigation.

\begin{figure}
\small
\begin{verbatim}
NODE 0;CONTEXT 0;THREAD 0:
---------------------------------------------------------------------------------------
%Time    Exclusive    Inclusive       #Call      #Subrs  Inclusive Name
              msec   total msec                          usec/call
---------------------------------------------------------------------------------------
100.0        1,056        4,780           1           1    4780727 main int (int, char **)
 77.9          654        3,724           1         400    3724292 main loop
 24.2          874        1,156         100         400      11566 migrate
 23.3        1,114        1,114         100           0      11141 apply_force loop
 14.4          689          689         100           0       6894 local binning
  5.5          264          264         100           0       2645 Barrier before allgather
  2.3          109          109         100           0       1094 move
  0.3            7           12         100         100        127 Isend+Irecv
  0.1            5            5         100           0         54 Waitall
  0.1            4            4         100           0         49 allgather
  0.0        0.392        0.392         100           0          4 build counters

\end{verbatim}
\caption{TAU output from instrumented optimized MPI implementation. p=24, n=640k, 100 steps}
\label{revised_tau}
\end{figure}


\subsubsection{Intra-node performance}

\begin{figure}[t]
\centering
\includegraphics[width=3.0in]{plots/one_node}
\caption{Strong scaling of our hybrid implementation within a single node. 'tpp' is the number of OpenMP threads per process.}
\label{fig:one_node}
\end{figure}

Figure~\ref{fig:one_node} details the strong scaling performance of our hybrid code with 100,000 particles for different
numbers of threads per process. We can see that there is overhead in going from one MPI process to two, but this
overhead is reduced relative to that observed in the pure MPI version. We also see that the code seems to scale
super-linearly when going to multiple processes; in fact, the pure MPI version performs better than any hybrid.
This is likely caused by cache effects: partitioning the data explicitly, as the MPI code does, may produce more
consistent cache behavior or even completely in-cache execution.

\subsubsection{Inter-node performance}

\begin{figure}[t]
\centering
\includegraphics[width=3.0in]{plots/multi_node}
\caption{Strong scaling of our hybrid implementation across nodes. 'tpp' is the number of OpenMP threads per process.}
\label{fig:multi_node}
\end{figure}

Figure~\ref{fig:multi_node} details the strong scaling
performance of the code to multiple nodes. Again we measure the performance for various numbers of threads per process.
We raise the number of particles to 400,000 since these runs are at a larger scale. We could have raised the problem
size even further and obtained better scaling, but this analysis reveals scaling bottlenecks at smaller core counts.

As we see in Figure~\ref{fig:multi_node}, the story changes when we strong-scale to multiple nodes. The use of
OpenMP now finally pays off: the MPI version is faster on a single node but quickly loses this advantage.
This phenomenon is likely due to the larger number of MPI ranks that must be managed in a flat MPI version,
as well as the finer partitioning of the data. Each MPI rank ends up with a small number of particles, so latency
and load-imbalance costs kick in. Further, the perimeter-to-area ratio of each rank's subdomain grows, so the
MPI version does more redundant work in the ghost region.
Our hybrid version, on the other hand, scales nicely to a few hundred cores
and is able to operate efficiently at very fine granularity. An interesting observation here is that our results
indicate it is better to have just 3 threads per process rather than 6. This coincides neither with our
original hypothesis nor with the advice of NERSC.

\subsubsection{Problem size scaling performance}

\begin{figure}[t]
\centering
\includegraphics[width=3.0in]{plots/problem_scale}
\caption{Performance as a function of problem size of the hybrid code on 1 core and on a full node of Hopper with 6 threads per process.}
\label{fig:problem_scale}
\end{figure}

As a sanity check, let us make sure we did not break the $O(n)$ scaling anywhere in the parallel code. Figure~\ref{fig:problem_scale}
shows performance as a function of problem size on one core and on a full node; both curves confirm that the linear scaling is preserved.





















\end{document}
