\subsubsection{Phase 5: Finalize Send/Receive}
\paragraph{Purpose}

The last stage is the most important for the MPI program: this is where the MPI primitives are replaced with their appropriate GPU counterparts.
 
Here we introduce two methods for translating the MPI primitives to GPU counterparts.  Both methods use a buffer together with a lock or flag.  The first, which we will call method 1, ports an entire communication section to a single kernel: a call to send is made, and the corresponding call to receive waits inside the same kernel until the message arrives.  The second method, method 2, breaks a communication section across multiple kernel calls, so there is a break in the kernel between a corresponding send and receive.  The extra kernel launches take more time, but the result has much greater scalability.

For the single kernel call, communication across blocks requires that all blocks be active at the same time; therefore there can not be more blocks than there are physical multiprocessors.  In exchange, a single kernel call is much faster than multiple kernel calls.  Because the second method uses the implied barrier of the extra kernel call to synchronize, the number of blocks per grid is not bound by the physical architecture, so a grid can potentially contain millions of blocks.  With up to 512 threads per block, this allows for billions of threads across a single sequence of kernel calls.

The final translation involves two things, both related to the algorithm itself.  One concerns the layout of the threads.  The other is replacing the MPI primitives with their GPU counterparts.

There are two ways to ensure communication is complete.  We can use a global lock and synchronize across blocks, as was shown in [Xiao 2010], or we can use kernel breaks to synchronize.  For either method we need to add the device functions gpu\_send and gpu\_recv.  These use two global variables that must be declared in the main declaration section.
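The bodies of gpu\_send and gpu\_recv are not dictated by the translation itself; the following is one possible sketch, assuming one flag slot and one message-sized buffer slot per destination.  The spin-wait loop and the \verb|__threadfence| calls are illustrative choices, not part of the original scheme.

\begin{cudablock}
// Sketch only: assumes one flag and one buffer slot per destination.
__device__ static void gpu_send(int *value, int length, int tid,
                                volatile int *flag, int *buffer)
{
    for (int i = 0; i < length; i++)  // copy the message into the slot
        buffer[i] = value[i];
    __threadfence();                  // make the data visible to other blocks
    *flag = 1;                        // raise the flag: message is ready
}

__device__ static void gpu_recv(int *value, int length, int nid,
                                volatile int *flag, int *buffer)
{
    while (*flag == 0)                // busy-wait until the sender raises the flag
        ;
    __threadfence();
    for (int i = 0; i < length; i++)  // copy the message out of the slot
        value[i] = buffer[i];
    *flag = 0;                        // clear the flag for the next message
}
\end{cudablock}

Note that the busy-wait in gpu\_recv only terminates if the sending block is resident on the GPU at the same time, which is exactly why method 1 must not launch more blocks than there are multiprocessors.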

At this point we need to make a decision based on the scalability of our algorithm and our architecture.  If the algorithm fits inside the architecture, then all we need to do is choose a grid that matches the architecture; that is, there should not be more blocks than multiprocessors.  For example, the GTX 280 has 30 MPs, so a 4 by 4 grid would be appropriate and would not exceed the architecture.  In this case, however, some of the MPs would go unused.
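The multiprocessor count does not have to be hard-coded; it can be queried from the CUDA runtime at startup.  A minimal sketch (the 4 by 4 grid shape is just the example from above):

\begin{cudablock}
cudaDeviceProp prop;
cudaGetDeviceProperties(&prop, 0);
// e.g. 30 on a GTX 280; method 1 must not launch more blocks than this
int numMPs = prop.multiProcessorCount;

dim3 grid(4, 4);   // 16 blocks <= 30 MPs, so all blocks can be resident at once
\end{cudablock}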

But if we need the program to scale to a large number of processes, then we must break up any kernel that has inter-block communication.  This is fairly easy, but not trivial.  The simplest way is to make several sub-copies of the kernel, keep the local variables and local computations, and partition the global computations at the communication calls.
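On the host side, the split becomes two consecutive launches of the sub-kernels; the return to the host between them is the implied barrier.  A sketch, using the kernel names from the sample code in this section (the grid and pointer variables are illustrative):

\begin{cudablock}
// Host side: the first half of the computation ends with gpu_send.
myKernel_1<<<grid, threads>>>(d_A, length, d_buffer, d_flag);
// Launches on the same stream serialize: myKernel_2 cannot start until
// every block of myKernel_1 has finished -- this is the implied barrier.
myKernel_2<<<grid, threads>>>(d_A, length, d_buffer, d_flag);
\end{cudablock}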

\paragraph{Output}



\paragraph{Meta Algorithm}

\begin{description}
\item [Step 1]

\item [Step 2] 

\item [Step 3] 

\item [Step 4]
The last part is to finish setting up the grid and block layout.  In MPI programs, the number of processors varies, as does the size of the input.  In a CUDA program, however, we generally want to utilize as much of the GPU as possible, so setting up the grid depends on which method you choose and on the size of the input.  If you use method 1 with the single kernel, map one block to each MP and scale the input appropriately.  For method 2, maximize the number of threads per block, for example all 512 threads, or 256 if a 16 by 16 thread-block pattern is needed, and then scale the grid to the size of the input.


\end{description}
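The grid setup of Step 4 for method 2 can be sketched as follows, assuming the input holds \verb|N| elements (a name introduced here for illustration):

\begin{cudablock}
int threadsPerBlock = 512;  // or 256 for a 16 x 16 thread-block pattern
// Round up so the grid covers all N input elements.
int numBlocks = (N + threadsPerBlock - 1) / threadsPerBlock;
myKernel_1<<<numBlocks, threadsPerBlock>>>(d_A, length, d_buffer, d_flag);
\end{cudablock}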




\paragraph{Sample Code}


\input{tabs/tab-algorithm_phase5}



\begin{cudablock}
[Main.cu Code]
int *d_flag, *d_buffer;
// one flag slot and one buffer slot per process
// (nprocs is the process count; length as in the kernels)
cudaMalloc((void**)&d_flag, nprocs*length*sizeof(int));
cudaMalloc((void**)&d_buffer, nprocs*length*sizeof(int));


[kernel.cu Code]
__device__ static void gpu_send(int *value, int length, int tid, int *flag, int *buffer);


__device__ static void gpu_recv(int *value, int length, int nid, int *flag, int *buffer);


MPI_Send(&A, length, MPI_INTEGER, targetA, tag, MPI_COMM_WORLD);
// Becomes
gpu_send(&A[tid*length], length, tid, &flag[targetA*length], &buffer[targetA*length]);


\end{cudablock}

\begin{cudablock}
// Method 1 with a single kernel
__global__ static void myKernel(int *A, int length, int *buffer, int *flag)
{
int id=threadIdx.x;
int sourceA, targetA;
float myLocal;

sourceA=id%length;
targetA=id*length;

...
gpu_send(&A[id*length], length, id, &flag[targetA*length], &buffer[targetA*length]);

...

gpu_recv(&A[id*length], length, sourceA, &flag[id*length], &buffer[id*length]);

...

}


// Method 2 with multiple kernels

__global__ static void myKernel_1(int *A, int length, int *buffer, int *flag)
{
int id=threadIdx.x;
int sourceA, targetA;
float myLocal;

sourceA=id%length;
targetA=id*length;
...
...
gpu_send(&A[id*length], length, id, &flag[targetA*length], &buffer[targetA*length]);
}

__global__ static void myKernel_2(int *A, int length, int *buffer, int *flag)
{
int id=threadIdx.x;
int sourceA, targetA;
float myLocal;

sourceA=id%length;
targetA=id*length;

gpu_recv(&A[id*length], length, sourceA, &flag[id*length], &buffer[id*length]);
...
...

}
\end{cudablock}

