\input{sec/sec-phase1}

\input{sec/sec-phase2}






Phase 3: Separate Code
Purpose:
This phase moves the code that will be translated into kernels, and will therefore run on the device, into a separate kernel file.

GPU programs contain code that runs on the host machine as well as code that runs on the GPU device, so at this point it is time to separate our code.  This is a little more complicated than moving computation code into a kernel file: we also need to separate the variables, which means allocating memory and determining where data needs to be moved between device and host memory.  Finally, one last change is made to the preprocessor directives: we add an include for the new kernel file.

    There are three parts to separating the code: the computation, the variables, and the preprocessor directives.
Separating the computation is fairly easy.  We create a new file, let's call it 'mykernels.cu', and add '#include "mykernels.cu"' to the end of the preprocessor directives.  Then we move the sections of device code to this file.  As we move each section of device code over, we leave markers: one marker goes into the host code, and another goes at the head of each section of device code.  In the case of function calls, all function calls of the same type are replaced by the same marker.
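As a sketch, the marker technique might look like the following (the marker name 'KERNEL: compute' and the surrounding code are illustrative, not prescribed by the method):

```cuda
/* main.cu -- host file after the device code has been moved out */
#include <stdio.h>
#include "mykernels.cu"      /* include added for the new kernel file */

int main(int argc, char **argv)
{
    /* ... host setup code ... */

    ///// KERNEL: compute    /* marker left in the host code */

    /* ... host output code ... */
    return 0;
}

/* mykernels.cu -- the moved device code */
///// KERNEL: compute        /* matching marker at the head of the section */
/* ... the moved computation code ... */
```

In a later phase the host-side marker becomes a kernel call and the device-side marker becomes a kernel signature, so matching marker names make the correspondence easy to track.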
    Separating the variables is a little more involved.  Host variables are left alone, although they can be clarified as host variables by adding an 'h_' prefix.  Device variables are given a 'd_' prefix, and their memory allocation is changed to the 'cudaMalloc()' function.  The more complicated case is the mixed variables: these need a copy on the host as well as a copy on the device, so there will be two versions of the variable, following the naming conventions above with the appropriate prefix.  Then 'cudaMemcpy()' calls are inserted.  For input variables, they are inserted after initialization or after input is read.  For output variables, they are inserted before any output is used.
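The variable rules above can be sketched as follows (the variable names and the size N are illustrative):

```cuda
#include <stdio.h>

#define N 1024

int main(void)
{
    int h_scale = 2;           /* host-only variable: 'h_' prefix */
    float h_data[N];           /* mixed variable: host copy ...   */
    float *d_data;             /* ... and device copy, 'd_' prefix */

    cudaMalloc((void **)&d_data, N * sizeof(float));

    for (int i = 0; i < N; i++)          /* initialization on the host */
        h_data[i] = (float)(i * h_scale);

    /* input variable: copy host -> device after initialization */
    cudaMemcpy(d_data, h_data, N * sizeof(float), cudaMemcpyHostToDevice);

    /* ... kernel calls that read and write d_data ... */

    /* output variable: copy device -> host before the output is used */
    cudaMemcpy(h_data, d_data, N * sizeof(float), cudaMemcpyDeviceToHost);
    printf("%f\n", h_data[0]);

    cudaFree(d_data);
    return 0;
}
```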


Output:

Output will be two files.  The first file is the main code that will run on the host; its extension will be '.cu'.  The other file, 'mykernels.cu', will contain the device code.

The main file will have the following changes ...

The Kernel file will contain ...




Meta Alg:
Step 1: 







Sample Code:


///// Preproc Begin
#include <stdio.h>  /// Keep
#include <mpi.h> /// Change
///// Preproc End


///// Preproc Begin
#include <stdio.h>  /// Keep
#include <mpi.h> /// Change
#include "mykernels.cu" /// kernel
///// Preproc End

/////Declarations 

/////Computation




##
     

Phase 4: Identify Kernels (Choose Model)
Purpose:

Now that the code has been separated, the device code will be divided into specific kernels and the appropriate variables will be assigned to each.  On the host side, the stub markers are replaced by the appropriate kernel calls.

In/Out:







Intent:

The identification of the kernels is one of the most important steps.  Too much granularity causes poor performance; too little hurts scalability.
Clearly, all device functions turn into kernels; they just need to be preceded by the '__global__' or '__device__' keyword, depending on where they are called from.
Next, any device-side initialization functions should be created.  This is a fairly trivial operation.
The rest of the device code will be the main computation.  Here the kernels can be identified as independent sections of code.

After the kernels sections have been identified, they need to be turned into kernels.  
First, add a thread ID; this will be the local variable that replaces the process ID.

MPI_Comm_rank(MPI_COMM_WORLD, &id);
->
int id = threadIdx.x;

'id' will now be a local variable for every kernel.  
Then, turn the section into a function with the obvious signature, where device variables are passed as pointers to arrays that are indexed by the new 'id' local variable.  In front of the signature, the keyword '__global__' needs to be added.  Any local variables should be declared in each kernel as necessary.
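Applied to a hypothetical section that scales an array (the names 'scaleKernel', 'd_data', and 'factor' are illustrative), the result looks like:

```cuda
/* mykernels.cu -- one identified section turned into a kernel */
__global__ static void scaleKernel(float *d_data, float factor)
{
    int id = threadIdx.x;       /* replaces the MPI process ID */
    float tmp;                  /* local variables are declared per kernel */

    tmp = d_data[id] * factor;  /* device arrays are indexed by 'id' */
    d_data[id] = tmp;
}
```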

Finally, the host code needs to correspond with the kernels.  First, we need to add two variables that describe the grid of blocks and the blocks of threads.  We can call these 'grid' and 'block', where 'grid' is the logical layout of the blocks and 'block' is the logical layout of the threads within a block.

dim3 grid(g1,g2,1);
dim3 block(b1,b2,b3);

Now we can create the kernel calls.  These are based on the kernel signatures, with the chevron syntax added to pass the 'grid' and 'block' variables.


myKernel<<<grid, block>>>(d_input, d_output, ... );



Meta Alg:
Step 1:









Sample Code:















##

Phase 5: Finalize (Send/Receive)
Purpose:

The last stage is the most important for MPI programs.  This is where the MPI primitives are replaced with their appropriate GPU counterparts.
 
Here we will introduce two methods for translating the MPI primitives to GPU counterparts.  Both methods involve using a buffer and a lock or flag.  The first method makes a call to send, and a corresponding call to receive that waits for the message.  The second method uses multiple kernel calls, so there is a break in the kernel between a corresponding send and receive.  This takes more time because of the extra calls, but it has much greater scalability.
In other words, one method ports a communication section to a single kernel, while the other breaks a communication section up across multiple kernels to allow for a global synchronization.
For the single kernel call, in order for communication to happen across all of the blocks, they must all be active at the same time; therefore there cannot be more blocks than there are physical multiprocessors.  However, a single kernel call is much faster than multiple kernel calls.
Because the second method uses the implied barrier of the extra kernel call to synchronize, the number of blocks per grid is not bound by the physical architecture.  Therefore there can potentially be millions of blocks in a grid.  With up to 512 threads in a block, this allows for billions of threads across a single series of kernel calls.




In/Out:







Intent:

The final translation involves two things, both related to the algorithm itself.  One is the layout of the threads.  The other is replacing the MPI primitives with their GPU counterparts.

There are two ways to ensure communication is complete.  We can use a global lock and synchronize across blocks as was shown in [Xiao 2010], or we can use kernel breaks to synchronize.  For either method we need to add the device functions gpu_send and gpu_recv.  These will use two global variables that need to be declared in the main declaration section.

[Main.cu Code]
int *d_flag, *d_buffer;
/* illustrative sizes: one flag per thread, one buffer slot per message element;
   num_threads and length are assumed to be defined by the program */
cudaMalloc((void **)&d_flag, num_threads * sizeof(int));
cudaMalloc((void **)&d_buffer, num_threads * length * sizeof(int));


[kernel.cu Code]
__device__ static void gpu_send(int *value, int length, int tid, int *flag, int *buffer);

__device__ static void gpu_recv(int *value, int length, int sid, int *flag, int *buffer);


The flag and the buffer are added to the signatures and calls of every kernel that uses a send or receive.   
Now we replace all MPI send and receive calls with our GPU send and receive calls.

MPI_Send(&A, length, MPI_INT, targetA, tag, MPI_COMM_WORLD);
// Becomes
gpu_send(&A[tid*length], length, tid, &flag[targetA*length], &buffer[targetA*length]);
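A minimal sketch of what the send/receive pair might look like, assuming a flag protocol of 0 = empty and 1 = full with one flag per destination; note that the spin-wait only completes if the sending and receiving blocks are resident on the device at the same time, which is exactly why Method 1 bounds the block count by the number of multiprocessors:

```cuda
/* kernel.cu -- illustrative bodies for the declared primitives.
   tid/sid are kept only to match the signatures above. */
__device__ static void gpu_send(int *value, int length, int tid,
                                int *flag, int *buffer)
{
    while (atomicCAS(flag, 0, 0) != 0)   /* wait until the slot is empty */
        ;
    for (int i = 0; i < length; i++)     /* copy the message into the buffer */
        buffer[i] = value[i];
    __threadfence();                     /* make the data globally visible */
    atomicExch(flag, 1);                 /* mark the slot as full */
}

__device__ static void gpu_recv(int *value, int length, int sid,
                                int *flag, int *buffer)
{
    while (atomicCAS(flag, 1, 1) != 1)   /* wait for a message to arrive */
        ;
    for (int i = 0; i < length; i++)     /* copy the message out */
        value[i] = buffer[i];
    __threadfence();
    atomicExch(flag, 0);                 /* mark the slot as empty again */
}
```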

At this point we need to make a decision based on the scalability of our algorithm and our architecture.  If our algorithm can live inside the architecture, then all we need to do is choose our grid to match the architecture; that is to say, there should not be more blocks than multiprocessors.  For example, the GTX 280 has 30 multiprocessors, so a 4 by 4 grid (16 blocks) would be appropriate and would not exceed the architecture.  In this case, however, some of the multiprocessors would not be used.

But if we need it to scale to a large number of processes, then we need to break up any kernels that have inter-block communication.  This is fairly easy, but not trivial.  The simplest way to do this is to make several sub-copies of the kernel, keep the local variables and local computations in each, and partition the global computations at the communication calls.

// Method 1 with a single kernel
__global__ static void myKernel(int *A, int length, int *buffer, int *flag)
{
    int id = threadIdx.x;
    int sourceA, targetA;
    float myLocal;

    sourceA = id % length;
    targetA = id * length;

    ...
    gpu_send(&A[id*length], length, id, &flag[targetA*length], &buffer[targetA*length]);

    ...

    gpu_recv(&A[id*length], length, sourceA, &flag[id*length], &buffer[id*length]);

    ...
}


// Method 2 with multiple kernels

__global__ static void myKernel_1(int *A, int length, int *buffer, int *flag)
{
    int id = threadIdx.x;
    int sourceA, targetA;
    float myLocal;

    sourceA = id % length;
    targetA = id * length;
    ...
    ...
    gpu_send(&A[id*length], length, id, &flag[targetA*length], &buffer[targetA*length]);
}

__global__ static void myKernel_2(int *A, int length, int *buffer, int *flag)
{
    int id = threadIdx.x;
    int sourceA, targetA;
    float myLocal;

    sourceA = id % length;
    targetA = id * length;

    gpu_recv(&A[id*length], length, sourceA, &flag[id*length], &buffer[id*length]);
    ...
    ...
}



The last part is to finish setting up the grid and block layout.  In MPI programs, the number of processors will vary, as will the size of the input.  But in a CUDA program, we generally want to utilize as much of the GPU as possible.  Therefore, setting up the grid is based on which method you choose and on the size of the input.  If you use Method 1 with the single kernel, then you should map one block to a multiprocessor and scale the input appropriately.  For Method 2, maximize the number of threads per block; for example, all 512 threads, or 256 threads if a 16 by 16 thread-block pattern is needed.  Then scale the grid to the size of the input.
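The Method 2 sizing rule can be sketched as follows (the input size N, the 256-thread block shape, and the variable names d_A, d_buffer, and d_flag are illustrative):

```cuda
/* main.cu -- scale the grid to the input size for Method 2 */
int N = 1 << 20;                       /* illustrative input size */
dim3 block(16, 16, 1);                 /* 16 by 16 = 256 threads per block */
int threadsPerBlock = block.x * block.y;
int numBlocks = (N + threadsPerBlock - 1) / threadsPerBlock;   /* round up */
dim3 grid(numBlocks, 1, 1);

myKernel_1<<<grid, block>>>(d_A, length, d_buffer, d_flag);
/* the boundary between the two launches is the implied global barrier */
myKernel_2<<<grid, block>>>(d_A, length, d_buffer, d_flag);
```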



Meta Alg:
Step 1:







Sample Code:















##


                                  Advantages         Disadvantages
Method 1: Single Kernel           Speed              Limited by architecture
Method 2: Multiple Kernel Calls   Highly scalable    Slower

