\section{Discussion of Parallelization Techniques}
\label{sec:disc}

In this section we briefly discuss three frameworks, or abstractions, for parallel programming.  Two of them use a shared-memory abstraction, in which separate threads of control communicate through shared variables, much as two people might communicate by writing on a blackboard.  The third uses a distributed-memory abstraction, in which separate processes communicate by passing messages.

\subsection{Shared Memory}

We first review two shared memory frameworks.

The first is POSIX threads, or pthreads~\cite{barney:11}.  Each thread represents a distinct execution path through the program, with its own private stack and registers, but all threads in a process share the same address space and coordinate through shared variables.  Thus, the main challenges when using threads are synchronizing access to shared variables and avoiding data races.

The second is OpenMP~\cite{quinn:04}.  Like pthreads, OpenMP is a threading specification, but it hides many of the low-level details the programmer must manage with pthreads.  The programmer marks parallel regions of the program with compiler directives, and the compiler and runtime handle thread creation and work distribution, applying whatever synchronization constructs the programmer specifies.  We chose OpenMP for our parallel implementation because of its ease of use: parallelizing a loop often requires only a single directive.

\subsection{Distributed Memory}

MPI is a well-known framework for distributed-memory parallel programming~\cite{quinn:04}.  It relies on message passing to synchronize and share data among processes running on separate processors, so when using MPI it helps to understand the network topology connecting the machines over which the computation is distributed.  Because communication can proceed asynchronously, message passing requires care on the programmer's part, for example to avoid deadlock and to avoid reusing a buffer before its communication completes.  We chose not to implement a distributed-memory parallel version of our code.
