% \documentstyle[12pt]{report}
\documentclass[12pt]{article}

% \sffamily
% \renewcommand{\familydefault}{cmss}

\tolerance=750

\usepackage{amsmath,algorithmic,comment,subfigure,graphicx,ifthen,epsfig}
\usepackage[ruled,vlined]{algorithm2e}


\title{CS 252 Project: Execution Variability for Multicore Processors and Remedies}

\author{ Edgar Solomonik \\ 
       Brian Van Straalen } 

\begin{document}

\maketitle
The scalability of most parallel programs in HPC is limited by imbalances between synchronization points.  On Hopper we have built and instrumented a simple MPI program modeling a particle-interaction problem.  With only short-range forces this is a very parallel-friendly problem; nonetheless it loses a large amount of time at bulk synchronizations.

Instrumenting the code with TAU produced some more information.  The situation deteriorated as more and more particles were added to the simulation.  At 640K particles TAU reported that most of our time was spent gathering the global histogram of which particles were to be moved.  To determine whether this was an effect of {\tt MPI\_Allgather} or of load imbalance, an {\tt MPI\_Barrier} was placed before the call to {\tt MPI\_Allgather}.  The TAU results are now more informative (see figure \ref{tau}).
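The idea behind this diagnostic is that a barrier placed immediately before the collective absorbs any arrival skew, so wait time attributable to imbalance is separated from the cost of the collective itself.  A minimal sketch of the pattern (a fragment, not our actual instrumentation; {\tt read\_timer()} is assumed to return wall-clock seconds as in our benchmark code, and {\tt local\_counts}, {\tt global\_counts}, and {\tt nbins} are hypothetical names for the histogram buffers):
\begin{verbatim}
double t0 = read_timer();
MPI_Barrier(MPI_COMM_WORLD);          /* absorbs arrival skew        */
double t_imbalance = read_timer() - t0;

t0 = read_timer();
MPI_Allgather(local_counts,  nbins, MPI_INT,
              global_counts, nbins, MPI_INT, MPI_COMM_WORLD);
double t_collective = read_timer() - t0;

/* t_imbalance large, t_collective small => imbalance between ranks  */
/* t_imbalance small, t_collective large => the collective is slow   */
\end{verbatim}
In our runs the barrier term dominated, which is exactly what the TAU summary in figure \ref{tau} shows.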
\begin{figure}
\small
\begin{verbatim}
FUNCTION SUMMARY (mean):
---------------------------------------------------------------------------------------
%Time    Exclusive    Inclusive       #Call      #Subrs  Inclusive Name
              msec   total msec                          usec/call
---------------------------------------------------------------------------------------
100.0          683       11,592           1           1   11592771 main int (int, char **)
 94.1        4,970       10,909           1         400   10909519 main loop
 37.5           10        4,341         100         600      43415 migrate
 29.1        3,369        3,369         100           0      33691 Barrier bf allgather
  7.9          917          917         100           0       9174 apply_force loop
  6.7          771          771         100           0       7715 bin sort
  5.5          633          633         100           0       6339 local binning
  1.5          174          174         100           0       1745 Isend+Irecv
  0.4           46           46         100           0        464 move
  0.1           10           10         100           0        105 Waitall
  0.0            4            4         100           0         43 allgather
  0.0        0.735        0.735         100           0          7 build counters
\end{verbatim}
\caption{TAU output from instrumented MPI implementation. p=24, n=640k, 100 steps}
\label{tau}
\end{figure}

So either synchronization within a Hopper node is terrible, or a very large amount of imbalance is generated within each iteration.

The summaries are not helpful beyond that for diagnosing things better.  Our code was instrumented further to generate traces (TAU on Hopper does not appear to be configured properly for trace generation, so we wrote our own).
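Our hand-rolled tracing is not shown in full; a minimal self-contained sketch of the approach, with one timestamped sample per iteration dumped to a per-rank file (rank fixed at 0 here to keep the sketch serial; the timed region and file-name scheme are illustrative assumptions), might look like:
\begin{verbatim}
#define _POSIX_C_SOURCE 199309L
#include <stdio.h>
#include <time.h>

/* wall-clock time in seconds */
static double read_timer(void)
{
  struct timespec ts;
  clock_gettime(CLOCK_MONOTONIC, &ts);
  return ts.tv_sec + 1.0e-9 * ts.tv_nsec;
}

int main(void)
{
  enum { NSTEPS = 100 };
  double trace[NSTEPS];
  for (int i = 0; i < NSTEPS; i++) {
    double t0 = read_timer();
    /* ... timed region, e.g. MPI_Barrier ... */
    trace[i] = read_timer() - t0;
  }
  /* one trace file per rank; rank 0 in this serial sketch */
  char name[32];
  snprintf(name, sizeof name, "trace.%d", 0);
  FILE* f = fopen(name, "w");
  for (int i = 0; i < NSTEPS; i++)
    fprintf(f, "%d %.9f\n", i, trace[i]);
  fclose(f);
  printf("wrote %d samples\n", NSTEPS);
  return 0;
}
\end{verbatim}
Plotting such per-rank files against iteration number is what produces traces like figures \ref{barrier} and \ref{particles}.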

First, a trace of the time spent in {\tt MPI\_Barrier} was made, shown in figure \ref{barrier}. Keeping in mind that an entire iteration takes roughly 0.1 seconds, the effect is enormous. It is also not random: rank 0 is always stalled for a long time at the barrier, while rank 23 spends no time there.  So the job looks load imbalanced.  Next, we look at how much load each processor is given.  Figure \ref{particles} shows a trace of how many particles each process has.  We can see four distinct groupings, corresponding to processors in the corners, those adjacent to the x-edge, those adjacent to the y-edge, and the interior processors.  While evocative, the figure shows that the particle load is balanced to within 1\%, nowhere near enough to explain what we are seeing.  So it isn't {\it load imbalance} but {\it execution imbalance}: process 0 and process 23 see nearly identical workloads, but have dramatically different execution times between bulk-synchronization points.  There also appears to be some non-randomness in the initial particle distribution, with slightly lower probability near the range extremes.

\begin{figure}
\includegraphics[width=5.0in]{plots/barrier}
\caption{Time spent in Barrier for MPI. p=24, n=640k, 100 steps}
\label{barrier}
\end{figure}

\begin{figure}
\begin{verbatim}
  // read_timer() returns wall-clock time in seconds.
  MPI_Init(&argc, &argv);
  struct timespec a, b;
  a.tv_sec  = 0;
  a.tv_nsec = 20000;            // sleep 20 us between barriers
  std::list<double> t;
  int rank;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  for (int i = 0; i < 100; i++)
    {
      double dd = read_timer();
      MPI_Barrier(MPI_COMM_WORLD);
      dd = read_timer() - dd;   // time this rank spent in the barrier
      if (i > 15)               // discard warm-up iterations
        t.push_back(dd);
      nanosleep(&a, &b);
    }
\end{verbatim}
\caption{Simple Barrier benchmark example}
\label{barrierBenchmark}
\end{figure}


\begin{figure}
\includegraphics[width=5.0in]{plots/particles}
\caption{Particles for each process. p=24, n=640k, 100 steps}
\label{particles}
\end{figure}


\begin{figure}
\includegraphics[width=5.0in]{plots/applyForce}
\caption{Time spent in the apply\_force loop. p=24, n=640k, 100 steps}
\label{applyForce}
\end{figure}

Given the same particle load, how does each core on the Hopper die perform?  One possibility is a profound bias in the execution of communication that favors a swift return of process 0 over high-rank processes.  A simple benchmark code was implemented, shown in figure \ref{barrierBenchmark}.  This benchmark shows that the imbalance of the barrier is on the order of $10^{-4}$ seconds, so the communication effects are not likely to be the culprit.  Our current best guess is the heap manager provided by Cray's Compute Node Linux; Brian has previously seen a similar effect on the Cray XT4 and XT5 systems.  Experiments and finer measurements would need to be done to confirm this hypothesis.  If this turns out to be the problem, then a new user-space dynamic allocation system will need to be deployed, hopefully general enough to offer as an alternative package on production Cray systems for codes that need dynamic memory.  It might also be possible to eliminate some dynamic memory problems by using C99 stack-based dynamic allocation, which might also be better for performance since it is visible to the compiler.
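To illustrate the C99 alternative (a sketch only; our production code does not currently do this, and {\tt bin\_sum} is a hypothetical stand-in for a per-iteration kernel), a scratch buffer sized at run time can be a variable-length array on the stack instead of a {\tt malloc}/{\tt free} pair, removing the heap manager from the inner loop entirely:
\begin{verbatim}
#include <stdio.h>

/* Sum bin counts through a stack-allocated scratch buffer sized at
   run time.  With malloc/free here, every call would touch the heap
   manager; the VLA keeps the allocation on the stack. */
static double bin_sum(int nbins, const int counts[nbins])
{
  double scratch[nbins];            /* C99 variable-length array */
  double total = 0.0;
  for (int b = 0; b < nbins; b++) {
    scratch[b] = (double)counts[b];
    total += scratch[b];
  }
  return total;
}

int main(void)
{
  int counts[] = {3, 1, 4, 1, 5};
  printf("%.1f\n", bin_sum(5, counts));   /* prints 14.0 */
  return 0;
}
\end{verbatim}
The usual caveat applies: VLAs live on the stack, so this only works for buffers comfortably smaller than the stack limit.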


   When {\it that} issue is resolved, there is still a serious imbalance effect on this Magny-Cours processor.  We have established that the variable workload is balanced to within roughly 1\%, yet a trace of a flop-intensive section of code shows over 20\% runtime difference between ranks (Figure \ref{applyForce}).  This might also be a memory-management effect, but the heap manager is not used here.  A close look reveals that core 23 is consistently slower than core 0, despite the fact that they have almost identical computations to perform.  This is not load imbalance but execution imbalance.  The variability is far higher than traditional system-noise benchmarks reveal for this architecture.  The effect appears in other code sections, and it is large enough to affect scaling.  If this bias cannot be eliminated with code improvements, then the work distribution needs to be improved by some combination of performance prediction and dynamic load balancing.
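   Any such dynamic load balancer would need a measure of the imbalance as input.  One common choice, sketched below with made-up per-rank times showing a $\sim$20\% spread like that of Figure \ref{applyForce}, is the ratio of the maximum per-rank time to the mean; $1.0$ means perfectly balanced, and $1 - 1/\mathrm{factor}$ is the fraction of machine time lost at the next synchronization point:
\begin{verbatim}
#include <stdio.h>

/* imbalance factor = max(t_i) / mean(t_i) over p ranks */
static double imbalance(int p, const double t[p])
{
  double sum = 0.0, max = t[0];
  for (int i = 0; i < p; i++) {
    sum += t[i];
    if (t[i] > max) max = t[i];
  }
  return max * p / sum;
}

int main(void)
{
  /* illustrative per-rank times for one flop-intensive section */
  double t[4] = {1.0, 1.0, 1.0, 1.2};
  printf("%.2f\n", imbalance(4, t));   /* prints 1.14 */
  return 0;
}
\end{verbatim}
In practice the times would come from the per-rank traces already described, gathered with an {\tt MPI\_Allgather} so every rank can decide how to shift work.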
   
\end{document}
