In this section, we briefly describe the architecture of modern GPUs.\\

\textbf{GPU Architecture}

A modern GPU is a high-performance co-processor
with a large number of processing cores and high memory bandwidth.
Mainstream GPUs are provided mainly by two vendors: NVIDIA and AMD.
Here we use NVIDIA's terminology
to illustrate the GPU architecture, but the concepts also apply to AMD's GPUs.

In NVIDIA GPUs, processing cores are organized into multiple Streaming Multiprocessors (SMs).
Each SM can run hundreds of threads concurrently.
The SM partitions these threads into warps of 32 parallel threads
and manages thread execution at warp granularity.
All threads in a warp execute in lockstep.

The GPU has a hierarchical memory subsystem. Each SM has its own on-chip L1 cache,
and all SMs share an off-chip L2 cache on top of the device memory. Users can explicitly disable
the L1 cache for their applications. When threads in a warp
access device memory, the SM computes the required memory transactions from the requested locations
and sizes. Requests are then issued per warp (per half-warp on early NVIDIA GPUs).

\begin{comment}
\textbf{GPGPU Programming}

Users can program on GPU with high level languages 
such as, CUDA \cite{cuda} for NVIDIA GPU and OpenCL \cite{opencl} for CPU and GPU.

\end{comment}

