\section{Parallel Implementation}
\label{sec:parallel}

We parallelized our improved serial Barnes-Hut particle simulation using a shared-memory OpenMP implementation.  The force-calculation and position-update steps of the simulation were parallelized.  These steps account for the largest share of the computational effort in large particle simulations, and are therefore the most important targets for an efficient parallelization of the Barnes-Hut algorithm.

After the quadtree is constructed and the total mass and center-of-mass coordinates of each node are computed, the tree resides in shared memory and is accessible to every thread.  The force computation is performed by a loop that calls the force-calculation function for each particle.  This loop is divided by particle index among the number of threads specified during initialization.  This parallelization scheme is suitable because no inter-thread communication is required once the quadtree is constructed.  The threads are synchronized by the barrier at the end of the OpenMP ``parallel for'' loop, which requires each thread to wait until all threads are idle before moving on to the next operation.  The position-update step is parallelized in the same manner.
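The loop structure described above can be sketched as follows.  This is a minimal illustration, not the paper's actual code: the particle type, field names, and the \texttt{compute\_force} placeholder (which here simply pulls each particle toward the origin, standing in for the quadtree traversal) are all hypothetical.

\begin{verbatim}
#include <math.h>
#include <stddef.h>

/* Hypothetical particle type; field names are illustrative. */
typedef struct {
    double x, y;    /* position */
    double fx, fy;  /* accumulated force */
} Particle;

/* Stand-in for the per-particle quadtree force traversal described
 * in the text; a trivial placeholder so the sketch is self-contained. */
static void compute_force(Particle *p) {
    double r = sqrt(p->x * p->x + p->y * p->y) + 1e-12;
    p->fx = -p->x / r;
    p->fy = -p->y / r;
}

void force_step(Particle *particles, size_t n) {
    /* Each iteration writes only particles[i]; the shared quadtree
     * would be read-only here, so no locking is needed in the loop.
     * The implicit barrier at the end of the parallel for joins all
     * threads before the next step begins. */
    #pragma omp parallel for
    for (long i = 0; i < (long)n; i++) {
        compute_force(&particles[i]);
    }
}
\end{verbatim}

Because the pragma is advisory, the same code compiles and runs serially when OpenMP is disabled, which is one reason this scheme is convenient to retrofit onto a working serial implementation.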

\begin{figure}[h!]
\centering
\includegraphics[width=\textwidth]{./plots/parallel_timing.pdf}
\caption{Timing of the parallel implementation as the number of processors increases.}
\label{fig:parallel_timing}
\end{figure}

We tested our implementation by running it in parallel on four processors and compared it with the provided parallel implementation of the naive particle simulation.  The corresponding timing results are shown in Figure~\ref{fig:parallel_timing}.  The parallelized naive runs show that the scaling of the direct-force algorithm becomes unfavorable even when it is ideally parallelized.  Our OpenMP implementation scales roughly linearly with system size and is more computationally efficient than the direct algorithm.  The crossover point of our implementation is shifted to larger system sizes relative to that of our serial implementation (500 particles), because the quadtree construction and mass computation are still performed in serial.  In its current state, our algorithm achieves 47\% of the theoretical peak speed-up at 4 processors and 10000 particles.  This could be improved in a future implementation by constructing the quadtree and computing each node's center of mass and total mass in parallel.
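The cost of the serial tree construction can be related to the observed efficiency through Amdahl's law (the serial fraction $s$ below is a rough back-of-the-envelope estimate inferred from the measured efficiency, not a measured quantity):
\[
S(p) \;=\; \frac{1}{\,s + \dfrac{1-s}{p}\,}.
\]
The observed efficiency of $47\%$ at $p = 4$ corresponds to a speed-up of $S(4) \approx 1.9$, which under this model implies a serial fraction of $s \approx 0.38$; shrinking $s$ by parallelizing the tree construction and mass computation would raise the attainable speed-up accordingly.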

