\section{CUDA programming review}
Compute Unified Device Architecture (CUDA) is a parallel computing architecture
developed by Nvidia.  It is the computing engine in Nvidia graphics processing units (GPUs).  CUDA C gives developers access to the virtual instruction set and memory of the GPU, making GPUs accessible for general-purpose computation in much the same way as CPUs.

\subsection{CUDA threads architecture and control}
GPUs are massively parallel devices that allow many threads to execute concurrently.  The CUDA C language allows the developer to launch as many threads as needed in one kernel (method/algorithm/function) for the device.  If the number of threads launched exceeds the maximum for the device (65{,}535 for Tesla-class hardware), the threads wrap around and execute in large sequential batches of at most that many threads each.
%-------- First figure -----------------------------------
\begin{figure}[h!]
\epsfig{file=figures/fig1.eps}
\caption{Organization of threads in GPU}
\label{fig:gpuThreads}
\end{figure}
%-- End figure -------------------------------------------
Figure~\ref{fig:gpuThreads} shows how threads are organized for a typical CUDA kernel.  The kernel, named Example here, is launched with the code:
\linespread{1}
\begin{verbatim}
dim3 grid(2,1);
dim3 block(6,4);
Example<<<grid, block>>>();
\end{verbatim}
The lines 
\begin{verbatim}
dim3 grid(2,1);
dim3 block(6,4);
\end{verbatim}
\linespread{2}
define the dimensions, in terms of threads, of the kernel being launched.  Each kernel launch consists of a single grid, which contains a 2-dimensional (up to 3-dimensional) array of blocks; each block in turn contains a 2-dimensional (up to 3-dimensional) array of threads.  The keyword dim3 allows the developer to specify the exact dimensions; in the kernel Example, 2 blocks of 24 threads each are launched in a particular configuration.
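Inside the kernel, each thread can compute a unique index from the built-in variables blockIdx, blockDim, and threadIdx.  A minimal sketch (this kernel body is illustrative, not part of the original Example):
\begin{verbatim}
__global__ void Example()
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    /* For the launch above, x ranges over 0..11
       (2 blocks of 6 threads) and y over 0..3.   */
}
\end{verbatim}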

\subsection{GPU memory}
An important aspect of optimizing the performance of CUDA code is optimizing the way data is accessed.  The developer must understand and correctly use the various types of CUDA memory available.  CUDA C makes access to the different types of memory simple with keywords such as constant.  Figure~\ref{fig:gpuMem} shows a graphical representation of the GPU memories.

%-------- First figure -----------------------------------
\begin{figure}[h!]
\epsfig{file=figures/GPUmem.eps}
\caption{GPU memories}
\label{fig:gpuMem}
\end{figure}
%-- End figure -------------------------------------------
  
\textbf{Global Memory} resides in off-chip DRAM.  It is read/write enabled, but access is slower than for the other types of memory.  All threads in all blocks have access to global memory.
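Global memory is typically allocated and filled from the host before the kernel is launched.  The sketch below assumes a host array hostData of N floats (both names are illustrative):
\begin{verbatim}
float *devData;
cudaMalloc((void**)&devData, N * sizeof(float));
cudaMemcpy(devData, hostData, N * sizeof(float),
           cudaMemcpyHostToDevice);  /* host -> global memory     */
Example<<<grid, block>>>(devData);   /* any thread may read/write */
cudaFree(devData);
\end{verbatim}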

\textbf{Local Memory} is allocated when a CUDA C program declares local variables in a kernel.  This memory is read/write accessible only to a single thread.  It is important to remember that local memory is backed by two types of storage: register memory and global memory.  Register memory is kept on-chip and read/write access takes a single cycle.  Global memory is the same as described above, except that only one thread has access to that memory.

Local memory is mapped to global memory when too many local variables are declared, when a structure takes too much space, or when the compiler cannot determine how arrays are indexed.
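The distinction can be seen in a kernel body.  In this illustrative sketch, the scalar is expected to stay in register memory, while the large, dynamically indexed array is likely mapped to (thread-private) global memory:
\begin{verbatim}
__global__ void LocalExample(int n)
{
    float s = 0.0f;    /* scalar: register, single-cycle access */
    float big[1000];   /* large array: likely spilled to global */
    big[n % 1000] = s; /* index unknown at compile time         */
}
\end{verbatim}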

\textbf{Constant Memory} resides in off-chip DRAM.  As the name suggests, constant memory is read-only on the device after it has been set by the CPU through a CUDA API call.  In other words, the CPU may write to constant memory, but the CUDA kernel cannot.  The advantage of constant memory over global memory is that when multiple threads access the same memory location, constant memory has as low as one-cycle latency thanks to a caching system.  Actual constant-memory access times depend on how the threads use this memory.
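A constant-memory variable is declared at file scope with the __constant__ qualifier and written from the host with cudaMemcpyToSymbol; the names here are illustrative:
\begin{verbatim}
__constant__ float coeffs[16];   /* device side: read-only */

__global__ void Scale(float *out)
{
    /* every thread reads the same location, so the read
       is served from the constant cache                  */
    out[threadIdx.x] *= coeffs[0];
}

/* host side: only the CPU may write, before the launch:    */
/* cudaMemcpyToSymbol(coeffs, hostCoeffs, sizeof(coeffs)); */
\end{verbatim}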
 
\textbf{Texture Memory} resides in off-chip DRAM.  It is written by the host, read-only from the kernel's point of view, and mostly used in graphics processing.  Latency will be lower than for global memory if threads that are ``close'' to each other access memory locations that are ``close'' to each other.
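With the legacy texture-reference API (deprecated in later CUDA releases), a texture is declared at file scope, bound to a region of global memory on the host, and fetched in the kernel through the texture cache; names here are illustrative:
\begin{verbatim}
texture<float, 1, cudaReadModeElementType> tex;  /* file scope */

__global__ void Fetch(float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = tex1Dfetch(tex, i);  /* cached read */
}

/* host side, after cudaMalloc/cudaMemcpy of devData:        */
/* cudaBindTexture(NULL, tex, devData, n * sizeof(float));   */
\end{verbatim}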

\textbf{Shared Memory} resides in dedicated on-chip hardware.  Each block on the GPU has a limited amount of shared memory (compared to the much larger amount of global memory).  Access times to shared memory are low.  The memory is read/write accessible to all threads in a block, but not to threads in other blocks.
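Shared memory is declared inside the kernel with the __shared__ qualifier, and the threads of a block synchronize with __syncthreads() before reading values written by their neighbors.  A minimal sketch for the 6-by-4 block configuration above (the kernel name and access pattern are illustrative):
\begin{verbatim}
__global__ void SharedExample(float *g)
{
    __shared__ float tile[24];        /* one copy per block */
    int t = threadIdx.y * blockDim.x + threadIdx.x;
    tile[t] = g[blockIdx.x * 24 + t]; /* stage from global  */
    __syncthreads();                  /* whole block waits  */
    /* any of the 24 threads in this block may now read any
       entry of tile[]; threads in other blocks cannot.     */
    g[blockIdx.x * 24 + t] = tile[23 - t];
}
\end{verbatim}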

The CUDA code in this project is optimized to take advantage of constant and local register memory.  The code minimizes the number of reads and writes to the off-chip memories by maximizing the use of shared and local register memory.  Various measurements examine the trade-offs between fewer global memory accesses and more loop iterations, and between global memory and constant memory.  The results are included in Section 5.