\subsubsection{Phase 4: Choose Model and Identify Kernels}
\paragraph{Purpose}
Now that the code has been separated into host and device portions, the device code is organized into specific kernels and the appropriate variables are assigned to each.  On the host side, the stubs are replaced by the corresponding kernel signatures.

\paragraph{Output}
Identifying the kernels is one of the most important steps.  Too fine a granularity causes poor performance, while too coarse a granularity hurts scalability.
All device functions become kernels or device helpers; they simply need to be preceded by the \texttt{\_\_global\_\_} or \texttt{\_\_device\_\_} keyword, depending on whether they are called from the host or from the device.
Next, any device-side initialization functions should be created.  This is a fairly trivial operation.
The rest of the device code is the main computation.  Here the kernels can be identified as independent sections of code.
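As a sketch of the keyword rule (the function names here are hypothetical, chosen only for illustration), a helper called from device code takes \texttt{\_\_device\_\_}, while a function launched from the host becomes a kernel and takes \texttt{\_\_global\_\_}:

\begin{cudablock}
// Callable only from device code, e.g. from inside a kernel.
__device__ float scale(float x, float factor)
{
    return x * factor;
}

// Launched from the host; this is a kernel.
__global__ void scaleKernel(float *data, float factor)
{
    int id = threadIdx.x;
    data[id] = scale(data[id], factor);
}
\end{cudablock}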

After the kernel sections have been identified, they need to be turned into kernels.
First, add a thread ID; this local variable will replace the MPI process ID.
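As a sketch of this replacement (assuming a one-dimensional launch; the kernel name and data layout are hypothetical), the rank lookup of the MPI version maps onto a thread index computed from CUDA's built-in variables:

\begin{cudablock}
// MPI version: each process asks the runtime for its rank.
int id;
MPI_Comm_rank(MPI_COMM_WORLD, &id);

// CUDA version: each thread computes its ID locally.
// Within a single block, threadIdx.x suffices; across a
// grid of blocks, the global index is usually what is needed.
__global__ void computeKernel(float *data)
{
    int id = blockIdx.x * blockDim.x + threadIdx.x;
    data[id] = ... ;  // work formerly done by process 'id'
}
\end{cudablock}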




\paragraph{Meta Algorithm}

\begin{description}
\item [Step 1] Identify the kernel candidates: the independent sections of the device-side computation.

\item [Step 2] In each section, declare a thread ID as a local variable to replace the MPI process ID.

\item [Step 3] Turn each section into a function preceded by the \texttt{\_\_global\_\_} keyword (\texttt{\_\_device\_\_} for helpers called from device code), passing device variables as pointers to arrays indexed by the thread ID.

\item [Step 4] On the host, declare the grid and block layout variables and replace the stubs with the corresponding kernel calls.

\end{description}




\paragraph{Sample Code}


\input{tabs/tab-algorithm_phase4}



\begin{cudablock}
MPI_Comm_rank(MPI_COMM_WORLD, &id);
->
int id = threadIdx.x;
\end{cudablock}

\texttt{id} will now be a local variable in every kernel.
Then, turn the section into a function with the obvious signature, where device variables are passed as pointers to arrays indexed by the new \texttt{id} local variable.  In front of the signature, the keyword \texttt{\_\_global\_\_} needs to be added.  Any local variables should be declared in each kernel as necessary.

Finally, the host code needs to correspond with the kernels.  First, we need to add two variables that describe the grid of blocks and the blocks of threads.  We can call these \texttt{grid} and \texttt{block}, where \texttt{grid} is the logical layout of the blocks and \texttt{block} is the logical layout of the threads within a block.

\begin{cudablock}
dim3 grid(g1, g2, 1);
dim3 block(b1, b2, b3);
\end{cudablock}

Now we can create the kernel calls.  These are based on the signatures of the kernels, with the chevron syntax carrying the \texttt{grid} and \texttt{block} variables.

\begin{cudablock}
myKernel<<<grid, block>>>(d_input, d_output, ... );
\end{cudablock}
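Putting the pieces together, a minimal end-to-end sketch might look as follows.  The kernel name, sizes, and the doubling computation are hypothetical, and error checking is omitted for brevity; the point is only the shape of the host/kernel correspondence.

\begin{cudablock}
#include <cuda_runtime.h>

// Kernel: each thread handles one array element.
__global__ void myKernel(float *d_input, float *d_output)
{
    int id = blockIdx.x * blockDim.x + threadIdx.x;
    d_output[id] = 2.0f * d_input[id];
}

int main(void)
{
    const int n = 1024;
    float *d_input, *d_output;
    cudaMalloc(&d_input,  n * sizeof(float));
    cudaMalloc(&d_output, n * sizeof(float));

    // Logical layout: 4 blocks of 256 threads covers n elements.
    dim3 grid(4, 1, 1);
    dim3 block(256, 1, 1);
    myKernel<<<grid, block>>>(d_input, d_output);
    cudaDeviceSynchronize();

    cudaFree(d_input);
    cudaFree(d_output);
    return 0;
}
\end{cudablock}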


