% THIS IS SIGPROC-SP.TEX - VERSION 3.0
% WORKS WITH V3.1SP OF ACM_PROC_ARTICLE-SP.CLS
% JUNE 2007
%
% It is an example file showing how to use the 'acm_proc_article-sp.cls' V3.1SP
% LaTeX2e document class file for Conference Proceedings submissions.
% ----------------------------------------------------------------------------------------------------------------
% This .tex file (and associated .cls V3.1SP) *DOES NOT* produce:
%       1) The Permission Statement
%       2) The Conference (location) Info information
%       3) The Copyright Line with ACM data
%       4) Page numbering
% ---------------------------------------------------------------------------------------------------------------
% It is an example which *does* use the .bib file (from which the .bbl file
% is produced).
% REMEMBER HOWEVER: After having produced the .bbl file,
% and prior to final submission,
% you need to 'insert'  your .bbl file into your source .tex file so as to provide
% ONE 'self-contained' source file.
%
% Questions regarding SIGS should be sent to
% Adrienne Griscti ---> griscti@acm.org
%
% Questions/suggestions regarding the guidelines, .tex and .cls files, etc. to
% Gerald Murray ---> murray@acm.org
%
% For tracking purposes - this is V3.0SP - JUNE 2007

\documentclass{acm_proc_article-sp}
\usepackage{graphicx}

\begin{document}

\title{Comparison of Cell and POWER5 Architectures for a Flocking Algorithm: A Performance and Usability Study}
\subtitle{CS267 Final Project}
%
% You need the command \numberofauthors to handle the 'placement
% and alignment' of the authors beneath the title.
%
% For aesthetic reasons, we recommend 'three authors at a time'
% i.e. three 'name/affiliation blocks' be placed beneath the title.
%
% NOTE: You are NOT restricted in how many 'rows' of
% "name/affiliations" may appear. We just ask that you restrict
% the number of 'columns' to three.
%
% Because of the available 'opening page real-estate'
% we ask you to refrain from putting more than six authors
% (two rows with three columns) beneath the article title.
% More than six makes the first-page appear very cluttered indeed.
%
% Use the \alignauthor commands to handle the names
% and affiliations for an 'aesthetic maximum' of six authors.
% Add names, affiliations, addresses for
% the seventh etc. author(s) as the argument for the
% \additionalauthors command.
% These 'additional authors' will be output/set for you
% without further effort on your part as the last section in
% the body of your article BEFORE References or any Appendices.

\numberofauthors{3} %  in this sample file, there are a *total*
% of EIGHT authors. SIX appear on the 'first-page' (for formatting
% reasons) and the remaining two appear in the \additionalauthors section.
%
\author{
% You can go ahead and credit any number of authors here,
% e.g. one 'row of three' or two rows (consisting of one row of three
% and a second row of one, two or three).
%
% The command \alignauthor (no curly braces needed) should
% precede each author name, affiliation/snail-mail address and
% e-mail address. Additionally, tag each line of
% affiliation/address with \affaddr, and tag the
% e-mail address with \email.
%
% 1st. author
\alignauthor
Jonathan Ellithorpe
% 2nd. author
\alignauthor
Mark Howison
% 3rd. author
\alignauthor
Daniel Killebrew
}
\date{19 May 2008}
% Just remember to make sure that the TOTAL number of authors
% is the number that will appear on the first page PLUS the
% number that will appear in the \additionalauthors section.

\maketitle
\begin{abstract}
We have parallelized and optimized an agent-based simulation of emergent flocking behavior for two shared-memory architectures: the POWER5 and the Cell Broadband Engine. The goal of this study was to explore the affordances and trade-offs of both architectures and their available APIs, especially for addressing the load-balancing issues caused by the uneven spatial distribution of flocking agents. We found that for this particular algorithm, the OpenMP API and POWER5 architecture provided higher performance. OpenMP also proved more usable because of its high-level constructs.
\end{abstract}

% A category with the (minimum) three required fields
%\category{H.4}{Information Systems Applications}{Miscellaneous}
%A category including the fourth, optional field follows...
%\category{D.2.8}{Software Engineering}{Metrics}[complexity measures, performance measures]

%\terms{Delphi theory}

\keywords{Cell Broadband Engine, POWER5, OpenMP, shared memory, flocking, agent-based model, particle simulation} % NOT required for Proceedings

\section{Introduction}
In our agent-based model of flocking, individual flocking agents adjust their velocity and acceleration according to simple rules involving the properties of nearby agents. As the simulation evolves, these straightforward interactions lead to an emergent flock formation. That is, flock formation can be viewed as a decentralized phenomenon resulting from individual agents' behaviors; it isn't necessary to have a central ``leader bird'' orchestrating the formation of a flock. Simulations of group behavior can often be cast as agent-based models, for instance in applications in virtual reality, computer games, robotics and artificial life \cite{zhou}. Agent-based modeling is also an important methodology for the nascent field of complexity studies \cite{holland}, and has found applications in research in a diverse range of fields, from material science to social psychology \cite{smith+confrey} to evolutionary biology \cite{kauffman} to education \cite{blikstein:cas}.

Although there exist agent-based modeling environments (e.g., NetLogo \cite{netlogo}) and toolkits (e.g., Swarm \cite{swarm}), they are aimed primarily at uniprocessor desktop/workstation architectures. As modeling needs scale, there will likely be an increased demand for optimized parallel implementations of agent-based models, especially with the rising availability of dual- and multi-core processors in workstations. This work is a step in that direction, exploring the obstacles to agent-based modeling on multi-processor architectures.

\section{Flocking Algorithm}
Our flocking algorithm consists of the following three phases:

\emph{Interaction}. An individual agent's neighbors are defined as the agents lying within a predefined sight radius and angle range (one eighth the world size and $\pm 100^{\circ}$ in our implementation). An outer loop fixes an agent with position $x$ and velocity $v$, while an inner loop runs through that agent's $n$ neighbors to calculate the sum of difference vectors,
\[dx = \sum_{i=1}^{n} dx_i.\]
This aggregate difference vector is used to calculate four vectors based on the following simple rules for agent interaction (see \cite{reynolds:flocks}, \cite{reynolds:steering}, \cite{tanner}, and \cite{zhou}):
\begin{enumerate}
\item \emph{Don't crowd other agents}. The crowding vector points away from the neighbors' average heading, and is given by
\[v_1 = \frac{-dx}{n_c|dx|^4},\]
where $n_c$ is the number of neighbors within the crowding radius (one sixteenth the world size in our implementation).
\item \emph{Align your velocity with your neighbors' average velocity}. The alignment vector,
\[v_2 = \frac{1}{n}dx,\]
points toward the neighbors' average velocity.
\item \emph{Move toward the center of gravity of your neighbors}. The local center of gravity vector is calculated as the average position of the neighbors minus the fixed agent's position:
\[ v_3 = \left( \frac{1}{n}\sum_{i=1}^{n} x_i \right) - x.\]
\item \emph{Move stochastically.} A unit vector $v_4$ with random direction is calculated.
\end{enumerate}
Using these vectors, each agent is assigned in swap space a new velocity
\[v_{new} = C_1v_1 + C_2v_2 + C_3v_3 + C_4v_4\]
and a new acceleration $a_{new} = M \cdot v_{new}$. In our implementation, these weights have the values,
\begin{eqnarray*}
C_1 & = & 10^{-8},\\
C_2 & = & 1.0,\\
C_3 & = & 0.1,\\
C_4 & = & 0.05,\\
M & = & 1.0,
\end{eqnarray*}
which were hand-tuned by trial-and-error, using animations of the resulting flocking behavior as feedback.

\emph{Time Step Calculation}. A new time step $dt$ is calculated as the inverse of the largest magnitude among all the agents' position, velocity, and acceleration vectors.

\emph{Move}. Each agent is assigned a new position $x_{new} = x + v_{new} \cdot dt$ in swap space. The swap space then becomes the data for the next iteration.

Our serial implementation spatially decomposes the world into a moving grid that follows the agents' center of gravity. This spatial decomposition reduces the complexity of the interaction phase of the algorithm. The effect is especially noticeable during flock formation, when agents are more uniformly distributed; thus, Figure \ref{fig:serial_speedup} shows a higher speed-up at shorter times of our moving grid serial implementation over a naive, non-spatially-decomposed implementation.

\begin{figure}
\centering
\scalebox{0.33}{\includegraphics{serial_speedup.pdf}}
\caption{Performance of the moving grid vs. the naive serial implementation at $n=50$.}
\label{fig:serial_speedup}
\end{figure}

The moving grid has additional benefits for parallel implementations. When a stable flock forms, many of the agents are moving roughly in tandem with the center of gravity, which minimizes the frequency of reallocating agents from one thread's grid box to another thread's.

Because we are more interested in exploring the load-balancing issues that arise in non-uniform spatial distributions of agents, we created initial states in which stable flocks had already formed, and used these to seed the experiments reported hereafter.
\section{POWER5}
\subsection{Performance of Grid Implementation}
We parallelized the moving grid serial implementation for an 8-node POWER5 system using the OpenMP API. Our naive implementation uses a grid size specified at compile time, with the grid box size equal to the sight radius (an 8x8 grid). Initially, we parallelized the interaction procedure using OpenMP's \texttt{parallel for} construct, which assigns an entire row to each thread. This row layout significantly impairs load-balancing, since a flock moving within a single row is assigned entirely to one thread. We addressed this limitation in our next implementation by using a Hilbert curve to assign grid boxes to processors (see Figure \ref{fig:hilbert_curve}). This layout improves performance because a flock in any location, headed in any direction, is likely to span grid boxes assigned to different threads. A look-up table for the Hilbert curve coordinates is calculated at run-time to reduce the overhead of resolving which grid boxes belong to which threads.

\begin{figure*}
\centering
\scalebox{0.22}{\includegraphics{hilbert_curve.png}}
\caption{The Hilbert curve layout for 8x8, 16x16, and 32x32 grid sizes.}
\label{fig:hilbert_curve}
\end{figure*}

Even with the improved layout, an 8x8 grid has individual grid boxes large enough to contain nearly an entire flock at smaller flock sizes, given the fixed sight radius in the algorithm. In our final implementation, we added a runtime parameter for the grid size. The implementation was most efficient (see Figure \ref{fig:bassi_speedup}) at larger flock sizes (weak scaling) and finer grid subdivisions, which increase the likelihood that each thread will own a similarly sized portion of the flock. However, for each grid box we initially allocate enough memory to hold the entire number of fish. Given our 48B agent structure and the additional 48B of swap space for each agent, a simulation with 2048 agents and a 32x32 grid requires a reasonable 48MB of memory, but scaling this implementation to, say, one million agents would require an unreasonable 24.5GB of memory. Clearly, an implementation capable of handling larger flocks would require a shared-memory reallocation mechanism that our current implementation lacks. The cache hierarchy of the POWER5 plays a role in performance as well; the dip at 512 agents in Figure \ref{fig:bassi_speedup} coincides with the agent data exceeding the size of the POWER5's L1 cache (32KB).

\begin{figure*}
\centering
\scalebox{0.65}{\includegraphics{bassi_speedup.pdf}}
\caption{Speed-ups for OpenMP implementations using different grid sizes and layouts.}
\label{fig:bassi_speedup}
\end{figure*}

As the size of the flock increases, the proportion of time our implementation spends on moving/sorting the fish and on the reduction calculation of the next time step decreases (see Figure \ref{fig:bassi_decomp}). The serial implementation, on the other hand, spends almost all of its time in the interaction phase. This profile explains why we were able to achieve 97\% of peak performance for 2048 fish with a 32x32 grid: with those parameters, the algorithm's profile is dominated by the interaction phase, which requires only shared-memory reads that take advantage of locally cached views of shared memory. Since every phase of the algorithm could be parallelized, peak performance should theoretically be attainable according to Amdahl's law.

\begin{figure}
\centering
\scalebox{0.33}{\includegraphics{bassi_decomp.pdf}}
\caption{The breakdown of time spent calculating fish interactions, calculating the next time step (using a reduction in the OpenMP version), and moving/sorting the fish.}
\label{fig:bassi_decomp}
\end{figure}

\subsection{Performance of Quadtree Implementation}
As an alternative to the grid, we developed an algorithm based on quadtrees. The two-dimensional space that the birds move through is recursively and dynamically broken up into quads, based on the number of birds in each quad. The intuition behind the quadtree is that it is a form of adaptive grid refinement: where there are more birds, the grid becomes more detailed (see Figure \ref{fig:quadtree}). This follows from the primary rule of quadtrees: break a quad up into four constituent quads when its occupancy reaches a predefined level.
\begin{figure}
  \centering
  \includegraphics[width=20pc]{quadtree}
  \caption{A quadtree with maximum occupancy of two}
  \label{fig:quadtree}
\end{figure}


The quadtree implementation took two parameters as inputs: the maximum number of birds in a quad before the quad was decomposed, and the minimum number of birds in a parent quad. Below this minimum, a parent quad would collapse the subquads beneath it, making itself the owner of all the birds. The minimum threshold prevents birds from checking inside empty quads for interactions; it also prevents birds from checking multiple smaller quads when checking a single larger quad would suffice. Optimizing these two parameters was difficult in practice, because the optimal values depend on the spatial density of the birds. If birds are very sparse, a relatively low maximum is desirable, because at low densities the number of birds in a quad close enough to interact is also low. If that same maximum were kept for a much denser flock, a bird would have to check against many nearby quads for interactions, instead of just a few larger neighboring quads that would have sufficed. The choice of minimum follows similar logic. In the end, we chose to err on the side of a smaller maximum, with a minimum of about 70\% of the maximum. The intuition is that it is better to check against more quads than necessary, all of which contain birds within interaction range, than to check against quads containing birds outside interaction range.

While the quadtree serves to limit the number of excess interactions that are calculated, it also provides a way of distributing birds to threads for computation. At the beginning of the simulation, the quadtree is created and threads are assigned quads in such a way as to give all threads their fair share of birds. This results in each thread owning a certain area of the simulation space. If quadtree ownership were not redistributed from time to time, load imbalance would occur: whether the simulation started randomly or after flocking had already occurred, once the birds started moving they would soon end up on one or two processors, because they were so tightly clumped. This severe load imbalance would greatly hurt scaling, as shown in Figure \ref{fig:balancing}. The balanced algorithm redistributes the quadtrees whenever any single thread indicates that too much imbalance is present (for example, when a thread has more than 150\% of the birds it started with). This balancing algorithm demonstrates nearly linear speedup, in contrast to the algorithm that never performs quadtree reassignment. The downside of our rebalancing algorithm is that quadtree reassignment is done serially, limiting the potential maximum speedup. In practice, we did not have enough processors available to see the effect of this serial portion on our speedup.

One feature of our quadtree implementation that we thought particularly clever was the way birds that had moved out of their owning thread's region were handled. During the move phase, if a bird was detected to have moved into another thread's quad, it was put into a special data structure: each thread keeps one list for each of the other threads. For example, if a bird in thread 1's quadtree moved into one of thread 2's quads, it was put into thread 1's ``list of birds moving to thread 2''. At the end of the move phase, all threads flushed their data to memory to ensure consistency. Then each thread moved the birds that now belonged to thread myThreadID + 1 into that thread's quads. All threads barriered, then transferred birds to thread myThreadID + 2, and so on. This maximally parallelizes the transfer of birds between threads, in contrast to the naive approach in which a single master thread performs every inter-thread transfer.

\begin{figure}
  \centering
  \includegraphics[width=20pc]{balancing}
  \caption{Speedup of balanced vs. imbalanced algorithms}
  \label{fig:balancing}
\end{figure}

\subsection{Usability}
Our implementation takes advantage of the built-in work-sharing, data-scoping, and synchronization constructs in OpenMP. Three features of our implementation required more customization. First, the \texttt{C} implementation of OpenMP has a limited number of built-in reduction operators, and we had to implement our own reduction for the time step calculation. Second, the default thread layout using the \texttt{parallel for} construct was not suitable for load-balancing, and we implemented the Hilbert curve layout described above. Third, we took advantage of the OpenMP threading model with the quadtree algorithm by giving each thread ownership of a set of birds, making it responsible for moving those birds. The shared memory model made calculating interactions across thread boundaries very easy, and threads were synchronized only when necessary, to maximize performance.

\section{Cell Broadband Engine}

\subsection{Iterative Refinement of Parallelization}

\subsubsection{Basic Parallelization of the Algorithm - Function-Offload Model}
The flocking algorithm, at its heart, performs two operations each iteration: calculating the forces the agents impose on each other, and using those forces to update the agents' velocities and positions. Our first attempt at parallelizing the algorithm used a Function-Offload model for the part of the algorithm that calculates agent forces (the interact\_fish() function). At each iteration, the SPEs transfer their agents and neighboring agents from main memory into their local stores, calculate all forces on the fish that they own (those in their assigned bucket), and write the agents in their bucket back to main memory. The PPE, in the meantime, waits until all SPEs have finished, then applies the calculated forces (the move\_fish() function), sorts the fish into their new buckets, and repeats.

\subsubsection{Extending the Functional Offloading}
Our second version of parallelization offloaded the move operation as well. The difficulty was that a gather-scatter operation was required on the part of the PPE to send the SPEs the appropriate $dt$, the amount of time to advance the simulation. To reduce the error introduced by using discrete time steps, the time steps themselves become smaller and smaller as the fish move faster and faster. To accomplish this, the algorithm looks at the maximum of all movement vectors in the simulation (velocity and acceleration) and uses it to calculate the current iteration's $dt$ (which is inversely related to the maximum of all velocities and accelerations). It therefore becomes necessary for the SPEs, which each hold only a partial state of the algorithm, to find the maximum of their own agents' velocities and accelerations, report it to the PPE, and wait for the PPE to calculate and broadcast the iteration's $dt$. This gather-scatter operation is performed entirely through the CBEA's mailbox system, a way to communicate single 32-bit messages from the SPEs to the PPE, or from the PPE to the SPEs. Thus, the PPE waits on all SPEs to send their maxima, and the SPEs then wait for the PPE to send them $dt$. Given the ring-structured interconnection network of the Cell cores, a more efficient approach would have been to have the SPEs perform an all-to-all scatter and calculate $dt$ themselves, avoiding the overhead and latency of having the PPE send back its calculated $dt$. Unfortunately, this is easier said than done, because the Cell has no default support for SPE-to-SPE mailbox communication; it requires setting up DMA transfers between pre-arranged areas of each SPE's local store.

\subsubsection{Using more SPEs}
The first versions of the code were limited to using only 4 of the 7 SPEs available on the Cell platform on which we were working. Normally a Cell processor has 8 SPEs, but our development platform for this project was a Sony PlayStation 3, on which the 8th SPE is disabled. We chose 4 because it is a perfect square, which significantly simplified the code's implementation on the architecture. Using a fixed rather than generic number of processors is simpler because of the intimacy of the code with the hardware (or rather, the lack of hardware abstraction). Generalizing the code to a parameterized number of SPEs involved careful hand coding of pointer passing and ensuring that DMA transfers were always a multiple of 16 bytes, a constraint imposed by the architecture. In fact, some grid layouts (a 2x3 grid versus a 2x4 grid, for example) require manually editing the code to meet the 16-byte alignment constraint.

The development environment does, however, provide a construct for virtualizing the physical SPEs, called SPE ``contexts''. If the number of contexts in use exceeds the number of physical SPEs, the Cell context scheduler swaps contexts in and out pre-emptively. This context scheduling inhibited our attempts to run 8 SPEs with versions of the code that required all SPEs to communicate with the PPE, since a swapped-out SPE would stall the program indefinitely. Thus, the maximum number of SPEs we were able to successfully utilize was 6, in a 2x3 grid.

\subsubsection{Single-Precision versus Double-Precision Floating Point Arithmetic}
The original flocking simulation used double-precision floating-point arithmetic, although the Cell's SPEs are optimized for single-precision operations. Double-precision instructions have a 13-cycle latency, the first 6 cycles of which are unpipelined; no other instructions of any type can be issued during these 6 cycles. Single-precision instructions, however, are fully pipelined and can be issued back to back for an IPC of 1.

\subsubsection{Shifting the dt Calculation}
Without sacrificing any perceivable amount of simulation accuracy, we circumvented the reduction of $dt$ by simply calculating $dt$ before the SPEs fetch the fish from memory and calculate accelerations. Our study of the error introduced by this change revealed that over a 1000-step run, the first difference in $dt$ appeared at step 326, and even then the error amounted to only 0.003\% (see Figure \ref{fig:percent_error}).

\begin{figure}
\centering
\scalebox{0.33}{\includegraphics{percent_error.pdf}}
\caption{Percent Error In dt Over 1000 Iterations}
\label{fig:percent_error}
\end{figure}

In any case, we consistently measured code performance over only 100 steps for each version of the Cell code, and so experienced the same $dt$ values as with a proper reduction.

\subsubsection{Moving from Function-Offload Model to Streaming Model}
For simplicity of synchronization, our early attempts at parallelization used the SPEs in an essentially remote-procedure-call manner: each iteration created new contexts, loaded the code, ran it, and destroyed the contexts. In moving to a streaming model, it became necessary to synchronize all the SPEs at every time step. This was done with simple mailbox messages, in which the PPE sends either a ``GO'' or ``STOP'' message to each SPE. The SPE then DMAs the new agent data into its local store, processes it, DMAs it back, and waits for the next ``GO'' or ``STOP'' message. In this way we avoid creating and destroying contexts at every iteration.

\subsection{Performance}
With these 6 revisions of the code, each building upon the last, we measured the simulation times shown in Figure \ref{fig:total_time} for a 100-step simulation.

\begin{figure}
\centering
\scalebox{0.33}{\includegraphics{total_time.pdf}}
\caption{Total Time To Complete 100 Iterations of Algorithm}
\label{fig:total_time}
\end{figure}

For reference, the versions are:
\begin{enumerate}
\item Function-Offload Model Parallelization of Acceleration Vector Calculation
\item Parallelized the Movement of Agents
\item Moved from 4 SPEs to 6 SPEs
\item Moved from Double Precision to Single Precision
\item Circumvent $dt$ reduction
\item Implement Streaming Model with Mailbox Synchronization
\end{enumerate}

We actually experienced a decline in performance when moving from version 1 to version 2. Upon profiling the code, we found that the SPEs were spending a significant portion of execution time waiting for the $dt$ reduction (see Figure \ref{fig:cell_profile}).

\begin{figure}
\centering
\scalebox{0.33}{\includegraphics{cell_profile1.pdf}}
\scalebox{0.33}{\includegraphics{cell_profile2.pdf}}
\scalebox{0.33}{\includegraphics{cell_profile3.pdf}}
\caption{Timing Breakdown of Versions 1, 2, and 3}
\label{fig:cell_profile}
\end{figure}

Thus, in version 5 of the algorithm we found a way to circumvent this reduction without sacrificing noticeable accuracy. We can also see from the breakdown of time for version 3 that the $dt$ reduction becomes even more of a bottleneck, as expected, since 6 SPEs are now synchronizing on one data item instead of 4.

Moving from double-precision to single-precision floating-point operations had a significantly positive impact on performance, helping larger simulations more than smaller ones. This makes sense, since larger simulations spend proportionally more time calculating agent interactions.

\begin{figure}
\centering
\scalebox{0.33}{\includegraphics{error100.pdf}}
\caption{Magnitude of Error Vector over 100 Iterations}
\label{fig:error100}
\end{figure}

For 100 simulated time steps (see Figure \ref{fig:error100}), we can see that the error in a single agent's position, measured as the magnitude of the vector from its position at step $n$ under double-precision arithmetic to its position at step $n$ under single-precision arithmetic, increases with every time step. Running for 1000 steps generates Figure \ref{fig:error1000}.

\begin{figure}
\centering
\scalebox{0.33}{\includegraphics{error1000.pdf}}
\caption{Magnitude of Error Vector over 1000 Iterations}
\label{fig:error1000}
\end{figure}

Our next two steps were to circumvent the $dt$ reduction and to implement the flocking calculations in a streaming data model, continually feeding data to continually running SPEs. Once we realized that the communication of $dt$ could be eliminated without incurring too much error, doing so naturally reduced the time spent in communication to practically nothing and boosted performance. The streaming SPE implementation provided a further boost, as we no longer needed to start up and stop the SPEs at every iteration. Profiling this version, we found that our performance bottleneck had become waiting on mailboxes (see Figure \ref{fig:dma}).

\begin{figure*}
\centering
\scalebox{0.66}{\includegraphics{dma1.pdf}}
\scalebox{0.66}{\includegraphics{dma2.pdf}}
\caption{Time Doing Computation, Waiting for DMA Transfers, and Waiting on a Mailbox Message}
\label{fig:dma}
\end{figure*}

Additionally, we can see a load-balancing problem in the fact that SPE 4 spends much more of its time performing computation than waiting on its mailbox. This, in turn, causes the other SPEs to show a spike in mailbox wait time, because SPE 4 must finish before the next simulation step can begin. What is surprising here is that DMA transfers contribute nearly zero percent of the time spent waiting.

\section{Conclusions}
Ultimately, the highest parallel performance we achieved was the 7.75x speed-up of the Hilbert curve OpenMP implementation running on 8 POWER5 nodes with 2048 agents (the quadtree achieved a 7x speed-up). This is a good example of weak scaling, since the same implementation performed below 4x speed-up for 64 and 128 agents. However, 64 or 128 agents can already be simulated in real time using similar NetLogo flocking models on uniprocessor architectures. The reason to implement this algorithm on multi-processor architectures is to increase the problem size, so we were pleased that our implementation performed so well at 1024 and 2048 agents.

The Cell provides a plethora of options for boosting code performance. Some optimizations not explored here include branch-hint statements for the SPEs, SIMDization of the code, exploiting SPE-to-SPE communication, and multibuffering DMA accesses (which would become important for future simulations involving a larger number of agents). We hit some performance bottlenecks straight away, including time spent waiting for SPEs to synchronize, large SPE code size limiting the size of the problem state, the use of double-precision arithmetic, and the overhead associated with SPE program loading. We easily overcame the SPE program loading overhead by moving to a stream-processing model, and we reduced communication to some extent by changing our algorithm without losing much accuracy. Furthermore, for small time steps we were able to use single-precision arithmetic in place of double-precision while avoiding large errors in agent positions. In the end we still face a large communication bottleneck, which further work on load balancing using a quadtree would likely solve. SPE-to-SPE communication would also help, as it eliminates the need to synchronize with the PPE and would allow asynchronous updates to SPE local store memory. Additionally, SIMDization of the code could yield a peak performance increase of 4x by working on 4 fish at a time in a single instruction.

In terms of what it takes to get these performance boosts, a programmer must be reasonably well acquainted with the hardware. Setting up explicit transfers between known memory addresses can be tricky to keep track of, especially when those addresses must themselves be DMA-transferred into an SPE's local store. At the sacrifice of programmer efficiency, however, such control also means flexibility and more opportunity for performance tuning. A dedicated instruction cache would have freed up space in the local store and made the programmer's job easier, though, and we wonder why the Cell designers did not implement such an option.

%\end{document}  % This is where a 'short' article might terminate

%ACKNOWLEDGMENTS are optional

%
% The following two commands are all you need in the
% initial runs of your .tex file to
% produce the bibliography for the citations in your paper.
\bibliographystyle{abbrv}
\bibliography{final_report}  % final_report.bib is the name of the Bibliography in this case
% You must have a proper ".bib" file
%  and remember to run:
% latex bibtex latex latex
% to resolve all references
%
% ACM needs 'a single self-contained file'!
%
%APPENDICES are optional
%\balancecolumns

\end{document}
