\paragraph{Purpose}

The final stage is the most important for MPI programs: this is where the MPI primitives are replaced with their appropriate GPU counterparts, following the translation method described above.
The last step is to finish setting up the grid and block layout.  In an MPI program, the number of processors varies, as does the size of the input.  In a CUDA program, however, we generally want to utilize as much of the GPU as possible, so the grid configuration depends on which method is chosen and on the size of the input.  With method 1, the single-kernel approach, map one block to each multiprocessor and scale the input accordingly.  With method 2, maximize the number of threads per block, for example 512 threads, or 256 threads when a 16-by-16 thread-block pattern is needed, and then scale the grid to the size of the input.