Our approach is divided into two phases: classification and implementation. We classify algorithms based on their communication patterns in order to determine whether a particular algorithm is appropriate for implementation on the GPU. After classification, we translate the communication primitives. Once this preliminary work is done, we can apply our conversion algorithm.

The communication patterns associated with a parallel algorithm indicate how well that algorithm can be ported to the GPU.  Many parallel algorithms are designed around a particular architecture, or have a clear performance advantage on one architecture over another.  For example, most reduction algorithms perform well on tree-like architectures such as the hypercube.  Because of this, we classify them along two axes.  Consider the class of MPI programs that are iterative in nature.  The underlying communication graph (with messages forming the edges between task/process nodes) is either static across iterations or dynamic, and the number (or size) of messages received by a process is either constant or variable.  The combination of these two aspects yields four categories.  Table~\ref{tab:commpatterns} illustrates each category with example algorithms.
\input{tabs/tab-commpatterns.tex}


It is important to consider this classification when porting an MPI algorithm, because it determines how complex the communication primitives need to be.  Static communication patterns make the source and target of each message predictable, which in turn affects buffer sizes and synchronization.
\subsection{Translating the Communication Primitives}
Here we introduce two methods for translating the MPI primitives to GPU counterparts.  Both methods use a buffer together with a lock or flag.  The first method, which we call b1, performs communication within a single kernel: a send deposits its message, and the matching receive waits until a message arrives.  The second method uses multiple kernel calls, so the kernel is broken between a corresponding send and receive.  The extra launches take more time, but the approach scales much further.
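The buffer-and-flag mechanism can be sketched as a pair of device functions. The names \texttt{gpu\_send}/\texttt{gpu\_recv}, the single-\texttt{float} payload, and the one-mailbox-per-block layout are our illustrative assumptions, not a definitive implementation:

```cuda
// Hypothetical device-side counterparts to MPI_Send/MPI_Recv.
// buf and flag live in global memory, one slot per destination block,
// and are passed into every kernel that communicates.
__device__ void gpu_send(volatile float *buf, volatile int *flag,
                         int dest, float value)
{
    buf[dest] = value;        // deposit the message
    __threadfence();          // make the write visible to other blocks
    flag[dest] = 1;           // raise the flag: a message is waiting
}

__device__ void gpu_recv(volatile float *buf, volatile int *flag,
                         int self, float *out)
{
    while (flag[self] == 0)   // spin until a sender raises our flag
        ;
    *out = buf[self];         // consume the message
    __threadfence();
    flag[self] = 0;           // clear the flag for the next message
}
```

The spin loop in \texttt{gpu\_recv} is what restricts the single-kernel method: the sending block must be resident on the hardware at the same time as the spinning receiver, or the loop never terminates.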
The first method ports a communication section to a single kernel and synchronizes across blocks as shown in [6].  The second method breaks a communication section across multiple kernels, using the launch boundary as a global synchronization point.
For the single kernel call, communication across blocks requires that all blocks be active at the same time; therefore there cannot be more blocks than there are physical multiprocessors.  However, a single kernel call is much faster than multiple kernel calls.
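A software barrier across resident blocks, in the spirit of the inter-block synchronization cited as [6], might look like the following sketch. The identifiers and the goal-value protocol are our assumptions; correctness depends on every block being resident, which is exactly the constraint above:

```cuda
#define MAX_BLOCKS 30   // no more blocks than multiprocessors (e.g. GTX280)

__device__ volatile int g_arrive[MAX_BLOCKS];
__device__ volatile int g_release;

// Call with a goal value that increases on every barrier (e.g. an
// iteration counter), so the arrival slots never need resetting.
__device__ void gpu_barrier(int goal)
{
    __syncthreads();
    if (threadIdx.x == 0) {
        __threadfence();
        g_arrive[blockIdx.x] = goal;            // announce this block's arrival
    }
    if (blockIdx.x == 0) {
        if (threadIdx.x < gridDim.x)
            while (g_arrive[threadIdx.x] != goal)   // block 0 polls every block
                ;
        __syncthreads();
        if (threadIdx.x == 0)
            g_release = goal;                   // let all blocks proceed
    }
    if (threadIdx.x == 0)
        while (g_release != goal)               // spin until released
            ;
    __syncthreads();
}
```

If the grid exceeded the number of multiprocessors, unscheduled blocks could never arrive and the polling loops would deadlock, which motivates the second method.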
Because the second method uses the implied barrier between kernel calls to synchronize, the number of blocks per grid is not bounded by the physical architecture.  A grid can therefore contain potentially millions of blocks, and with up to 512 threads per block this allows billions of threads across a single sequence of kernel calls.
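On the host side, the second method amounts to replacing an in-kernel barrier with a launch boundary: kernels issued to the same stream execute in order, so every send from one launch completes before any receive in the next. A minimal sketch, with kernel names and iteration count of our own choosing:

```cuda
__global__ void compute_and_send(float *buf, int *flag);    // assumed
__global__ void receive_and_update(float *buf, int *flag);  // assumed

int main(void)
{
    const int ITERATIONS = 100;       // illustrative
    dim3 grid(1 << 16);               // far more blocks than multiprocessors
    dim3 block(512);                  // up to 512 threads per block

    float *buf; int *flag;
    cudaMalloc(&buf,  grid.x * sizeof(float));
    cudaMalloc(&flag, grid.x * sizeof(int));
    cudaMemset(flag, 0, grid.x * sizeof(int));

    for (int it = 0; it < ITERATIONS; ++it) {
        compute_and_send<<<grid, block>>>(buf, flag);
        // implicit ordering on the default stream acts as the barrier
        receive_and_update<<<grid, block>>>(buf, flag);
    }
    cudaDeviceSynchronize();
    return 0;
}
```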
	The actual translation has three parts.  First, we introduce into the code two global variables, the flag and the buffer, and two device functions, send and receive; the flag and the buffer are added to the signature and call site of every kernel that performs a send or receive.  Next, we replace all MPI send and receive calls with our GPU send and receive calls.  Finally, we make a decision based on the scalability of our algorithm and our architecture.  If the algorithm fits within the architecture, we need only choose a grid that matches it, that is to say, there should not be more blocks than multiprocessors.  For example, the GTX280 has 30 MPs, so a 4-by-4 grid (16 blocks) would be appropriate and would not exceed the architecture.  In this case, however, some of the MPs would not be used.
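The first two steps can be illustrated on a toy ring exchange. All identifiers here are hypothetical, and \texttt{gpu\_send}/\texttt{gpu\_recv} stand for our translated device-side primitives, with the block index playing the role of the MPI rank:

```cuda
// Original MPI code (one float passed around a ring):
//     MPI_Send(&x, 1, MPI_FLOAT, (rank + 1) % size, tag, MPI_COMM_WORLD);
//     MPI_Recv(&y, 1, MPI_FLOAT, (rank - 1 + size) % size, tag,
//              MPI_COMM_WORLD, &status);
//
// After translation: the buffer and flag appear in the kernel
// signature, and each MPI call becomes its GPU counterpart.
__device__ void gpu_send(volatile float *, volatile int *, int, float);
__device__ void gpu_recv(volatile float *, volatile int *, int, float *);

__global__ void step(float *data, volatile float *buf, volatile int *flag)
{
    int rank = blockIdx.x;                       // one block per MPI process
    float x = data[rank], y;
    gpu_send(buf, flag, (rank + 1) % gridDim.x, x);
    gpu_recv(buf, flag, rank, &y);
    data[rank] = y;
}
```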
	However, if we need the algorithm to scale to a large number of processes, then we must break up any kernels that contain inter-block communication.  This is fairly easy, but not trivial.  The simplest way is to make several sub-copies of the kernel, keeping local variables and local computations in each, but partitioning the global computation at the communication calls.
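Splitting at a communication call might look like the following sketch (identifiers ours). Local variables that must survive the split are promoted to a global array so the second sub-kernel can reload them:

```cuda
// Sub-copy 1: local computation plus the "send" half.
__global__ void step_part1(float *data, float *buf, float *saved)
{
    int rank = blockIdx.x;
    float local = data[rank] * 2.0f;       // local computation stays here
    buf[(rank + 1) % gridDim.x] = local;   // the send
    saved[rank] = local;                   // persist state across the split
}

// Sub-copy 2: the "receive" half plus the remaining computation.
__global__ void step_part2(float *data, float *buf, float *saved)
{
    int rank = blockIdx.x;
    float local = saved[rank];     // restore state
    float received = buf[rank];    // the receive: safe without a flag,
                                   // because the launch boundary guarantees
                                   // every send has already completed
    data[rank] = local + received;
}
```

Note that no flag is needed across the split: the implicit barrier between launches plays that role.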
Both of these methods have advantages and disadvantages that we will examine in our results.  Here we have described only the Send and Receive primitives, but other message-passing functions can either be built on top of Send and Receive or implemented in a similar fashion.

