\documentclass[conference]{IEEEtran}
%\usepackage{times}
\usepackage{algorithm}
\usepackage{algorithmic}
\usepackage{graphicx}
%\usepackage{caption}
\usepackage{subcaption}
\usepackage{epstopdf}
\usepackage{fixltx2e}
\usepackage{multirow}
\usepackage{comment}
\usepackage{flushend}
\usepackage{color}
\pagestyle{plain}

\def\code#1{{\tt\footnotesize #1}}
\sloppy


\begin{document}
\pagenumbering{arabic}

%\conferenceinfo{} {}
%\CopyrightYear{}
%\crdata{}
%\clubpenalty=10000
%\widowpenalty = 10000


\title{Characterizing and Improving the Cache Performance on Massively Multithreaded GPUs}


%
\def\sharedaffiliation{%
\end{tabular}
\begin{tabular}{cc}}
%




% For named submission
% author names and affiliations
% use a multiple column layout for up to three different
% affiliations
%\author{
%\IEEEauthorblockN{Kyoshin Choo\\William Panlener\\Orevaoghene Addoh\\Byunghyun Jang}
%\IEEEauthorblockA{Computer and Information Science\\
%The University of Mississippi\\
%University, MS 38677\\
%\{kchoo,wpanlener,oaaddoh\}@go.olemiss.edu\\
%bjang@cs.olemiss.edu}
%\and
%\IEEEauthorblockN{Minsu Choi}
%\IEEEauthorblockA{Electrical and Computer Engineering\\
%Missouri University of Science \& Technology\\
%Rolla, MO 65409\\
%choim@mst.edu\\}
%}

% For blind submission
\author{
\IEEEauthorblockN{ \\ \\ \\ \\}
\IEEEauthorblockA{ \\ \\ \\ \\ \\}
\and
\IEEEauthorblockN{ }
\IEEEauthorblockA{ }
}







\maketitle

\begin{abstract}
As GPUs evolve into general-purpose co-processors, cache is becoming a more important component in overall hardware design and performance. Unlike on CPUs, where only a few threads access memory simultaneously, memory access contention among the hundreds or thousands of threads on GPUs is significantly higher. Despite this different behavior, there is little research that investigates cache behavior and performance on GPUs in depth. In this paper, we present an extensive study on the characterization and improvement of GPU cache behavior and performance targeting general-purpose workloads. All our experiments have been conducted on a cycle-accurate, ISA-level GPU architectural simulator that models one of the latest GPU architectures, Graphics Core Next (GCN) from AMD.

Our study makes the following observations and improvements. First, we observe that the L1 vector data cache hit rate is substantially lower on GPUs than on CPUs, and the main culprit is compulsory misses caused by a lack of data reuse among the massive number of simultaneous threads. Second, there is significant memory access contention in the shared L2 data cache, which accounts for up to 19\% of total accesses in some benchmarks. Despite high hit rates, this contention in the L2 data cache remains a main performance barrier. Third, we demonstrate that memory coalescing plays a critical role in reducing memory traffic. Finally, we find that there exists variable inter-workgroup locality depending on the workgroup assignment policy, which affects cache behavior and performance. Based on these observations, we propose two improvements. One is a shared L1 vector data cache, where multiple compute units share a single cache. The other is clustered workgroup scheduling, where workgroups with consecutive IDs are assigned to the same compute unit. Our experiments show that both techniques improve cache performance considerably. We also show that combining the two techniques yields even better cache performance.
\end{abstract}

%%%%%%%%%%%%%%%%%%%
\section{Introduction} \label{sec:intro}

The success of GPGPU (General-Purpose computing on Graphics Processing Units) has made high performance computing affordable on any platform from workstations to hand-held devices. Its small form factor and cost effectiveness (i.e., computing capability per dollar (GFLOP/\$) and per watt (GFLOP/Watt)) are unprecedented in the history of parallel computing. Data-parallel portions of applications are offloaded onto GPUs and accelerated easily by several orders of magnitude. When asynchronously combined with the task-parallel processing of multi-core CPUs, this computing approach simultaneously exploits different forms of parallelism (i.e., instruction-, task-, and data-level parallelism (ILP, TLP, DLP)). This new computing paradigm is known as heterogeneous computing.

One of the features distinguishing massively multithreaded GPUs from traditional multithreaded CPUs is their memory hierarchy. Designed to process a massive number of graphics shaders, which tend to have little control flow, GPUs used to shun high-capacity, sophisticated caches in favor of much larger arithmetic functional units. Instead, GPUs use a wide memory bus for a high-bandwidth connection to off-chip memory, and the resulting high latency penalty is minimized by hiding memory latency through thread switching. Traditionally, memory performance (effective latency in particular) is improved by introducing multiple levels of cache between functional units and off-chip memory. Although GPUs are bandwidth optimized, low latency is highly desired in many workloads (especially non-graphics ones), and the recent use of GPUs as general-purpose accelerators has urged hardware vendors to include more caches to better handle the irregular and complicated memory access patterns of such workloads.

Most modern GPUs have two levels of cache in addition to software-managed scratchpad memories~\cite{amd2012GcnWhitepaper, Gebhart2011EnergyEffMechanmisms, Jia2012DemandCache, nvidia2012FermiWhitepaper}. For example, AMD introduced a 16 KB L1 vector data cache per compute unit (CU) and a 384 KB to 768 KB shared L2 cache in their latest GPU architecture, GCN (a.k.a. Southern Islands). NVIDIA also introduced a relatively large (up to 48 KB per streaming multiprocessor), configurable L1 cache and a shared L2 cache in their latest Kepler architecture. Although there has been research reporting the benefits of caches, little work studies cache behavior and performance from a hardware design perspective. Since the memory access patterns of compute workloads are quite different from those of traditional graphics shaders, it is unclear how caches behave and what impact each configuration would have.

In this paper, we thoroughly study the cache behavior and performance on modern GPUs using a cycle-accurate ISA-level detailed architectural simulator. The target architecture we investigate is GCN from AMD. Using representative compute benchmark workloads, we present the following findings and improvements in this paper.

\begin{itemize}
\item The hit rate of the private L1 vector data cache is considerably lower than on conventional processors. Such a low cache hit rate increases traffic to the lower-level cache, degrading overall memory performance significantly.
\item The majority of L1 vector data cache misses are compulsory misses, caused by a lack of data reuse among the massive number of active threads on a compute unit.
\item Memory access contention\footnote{In this work, contention refers to a situation where multiple memory requests are made and only some of them are served while others are blocked and keep retrying.} is significantly high in the shared L2 data cache, accounting for up to 19\% of total accesses in some benchmarks. Despite the high hit rate of the L2 data cache, this contention remains a main performance barrier at the L2 level.
\item Due to massive and simultaneous thread execution and the resulting memory requests, memory coalescing plays a critical role in overall memory performance. Data requests to the same address or to contiguous linear addresses are fully coalesced, and we demonstrate that this reduces memory traffic significantly.
\item There exists variable inter-workgroup locality depending on the workgroup assignment policy, and it affects cache behavior and performance. We demonstrate that the default policy (i.e., first-availability basis) does not exploit this type of locality present in workloads.
\end{itemize}

Based on the observations listed above, we propose the following improvements.

\begin{itemize}
\item A shared L1 vector data cache among multiple CUs (instead of private caches) is proposed to reduce compulsory misses and memory contention. It also enhances inter- as well as intra-workgroup coalescing. Additionally, the increased number of cache sets or higher associativity resulting from the increased cache size further reduces other types of cache misses.
\item A clustered workgroup scheduling policy is proposed to enhance inter-workgroup data locality and to improve coalescing among workgroups.
\item Combining the two schemes shows even more improvement in cache hit rate, memory traffic reduction, and total execution cycles.
\end{itemize}

%The remainder of this paper is structured as follows. Section~\ref{sec:background} provides background on GPU hardware, OpenCL thread mapping, and evaluation methods used in this work. Section~\ref{sec:disec} presents our detailed evaluation of GPU cache behavior and performance. Section~\ref{sec:scheme} proposes two cache optimizations: shared L1 vector data cache and clustered workgroup scheduling. We present experimental results of each proposed scheme as well as a combined scheme. Section~\ref{sec:related} discusses related works, and we conclude the paper in Section~\ref{sec:conclusions}.

%%%%%%%%%%%%%%%%%%%%%%
\section{Background and Evaluation Methods} \label{sec:background}

This section gives background on GPU architectures and thread mapping, and presents evaluation methods used in this work.

%\begin{figure}[t]
%\centering
%\includegraphics[scale=0.85]{images/GPU_memory_hierarchy.eps}
%\caption{A typical GPU memory hierarchy}
%\label{fig:GPU_mem_hierarchy}
%\end{figure}

\subsection{Target GPU Architecture}

Our target hardware under study is Graphics Core Next (GCN, Radeon HD7970) from AMD, one of the most advanced and powerful commercial GPUs in the industry. In order to deliver balanced performance on graphics and general-purpose compute workloads, it employs a hybrid execution engine where vector and scalar operations are executed in different hardware units~\cite{amd2012GcnWhitepaper}. The compute unit (CU), the basic computational building block of the GPU, illustrated in Fig.~\ref{fig:gcn_cu}, consists of four SIMD (Single Instruction Multiple Data) vector units, a scalar unit, a branch unit, and memory subsystems (local memory, L1 cache, L2 cache, etc.). Each SIMD unit executes one instruction across 16 work items per cycle, completing a wavefront of 64 threads in four cycles. The scalar unit consists of a scalar ALU and an 8KB register file for executing operations, such as simple arithmetic and control flow, that are uniform across a wavefront; this avoids having every thread within a wavefront redundantly execute exactly the same computation. A CU supports up to 40 wavefronts in flight and can issue up to five different types of instructions each cycle.

\begin{figure}[t]
\centering
\includegraphics[scale=0.85]{images/gcn_cu.eps}
\caption{GCN compute unit~\cite{amd2012GcnWhitepaper}}
\label{fig:gcn_cu}
\end{figure}

The memory hierarchy of GCN is designed to better support parallelism. Each CU has a 16KB, 4-way set-associative L1 read/write vector data cache (L1VD). Four CUs share a 16KB L1 scalar data cache (L1SD) and a 4-way set-associative, read-only 32KB L1 instruction cache. The L1 vector and scalar data caches are backed by a unified 768KB L2 cache. This L2 data cache (L2D) is 16-way set-associative with 64B cache lines and LRU replacement. It is physically partitioned into slices, each coupled to a memory channel, and access to these slices is provided through a crossbar fabric between the CUs and the cache and memory partitions. The L2 cache is connected to the off-chip main memory.
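The L1VD geometry above implies $16\,\mathrm{KB} / (4~\mathrm{ways} \times 64\,\mathrm{B}) = 64$ sets, matching Table~\ref{tbl:arch_config}. A minimal sketch of the resulting address-to-set mapping (the helper name is ours, not part of the hardware specification):

```python
# Sketch: address-to-set mapping for a GCN-like L1VD cache.
# Parameters from the text: 16 KB capacity, 4-way, 64 B lines.
LINE_SIZE = 64
NUM_WAYS = 4
CACHE_SIZE = 16 * 1024
NUM_SETS = CACHE_SIZE // (NUM_WAYS * LINE_SIZE)  # 16384 / 256 = 64 sets

def set_index(addr):
    """Set a byte address maps to; the remaining upper bits form the tag."""
    return (addr // LINE_SIZE) % NUM_SETS

assert NUM_SETS == 64
# Addresses NUM_SETS * LINE_SIZE = 4 KB apart alias to the same set:
assert set_index(0x0000) == set_index(0x1000)
```

With only four ways per set, a fifth line mapping to the same set evicts an earlier one, which matters for the stride-induced conflict misses discussed later.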

%While global synchronization between wavefronts can be done using the L2 cache, GCN also provides a mechanism for synchronization between workgroups. This is done using a 64KB Local Data Share (LDS, local memory in OpenCL term). Depending on the product, this LDS can have 16 or 32 banks with 512 32-bit wide entries per bank. The LDS is used to perform full rate interpolation on texture data in graphics applications. For general purpose compute applications, it is used as a scratchpad memory to avoid polluting the cache hierarchy.


\begin{table}[!t]
%% increase table row spacing, adjust to taste
\renewcommand{\arraystretch}{1.1}
\small
\caption{Baseline GPU architectural configuration.}
\label{tbl:arch_config}
\centering
\begin{tabular}{|c|c|c|}
\hline
\multicolumn{3}{|c|}{Compute Unit}\\ \hline\hline
\multicolumn{2}{|c|}{\# of Compute Unit on GPU} & 32\\ \cline{1-3}
\multicolumn{2}{|c|}{\# of SIMD engines per CU} & 4\\ \cline{1-3}
\multicolumn{2}{|c|}{Maximum active wavefronts per CU} & 40 \\ \hline\hline
\multicolumn{3}{|c|}{Memory Hierarchy}\\ \hline\hline
\multirow{5}{2.1cm}{L1 vector \\ data cache \\ (one per CU) \\ 32 units} & \# of sets & 64 \\ \cline{2-3}
 & Associativity & 4 \\ \cline{2-3}
 & Line size & 64B \\ \cline{2-3}
 & Latency (cycles) & 1 \\ \cline{2-3}
 & Total L1 cache size & 512KB \\ \hline
\multirow{5}{2.1cm}{L1 scalar\\ data cache \\ (one per 4 CUs) \\ 8 units} & \# of sets & 64 \\ \cline{2-3}
 & Associativity & 4 \\ \cline{2-3}
 & Line size & 64B \\ \cline{2-3}
 & Latency (cycles) & 1 \\ \cline{2-3}
 & Total L1 cache size & 128KB \\ \hline
\multirow{5}{2.1cm}{L2 unified \\ data cache \\ 6 units} & \# of sets & 128 \\ \cline{2-3}
 & Associativity & 16 \\ \cline{2-3}
 & Line size & 64B \\ \cline{2-3}
 & Latency (cycles) & 10 \\ \cline{2-3}
 & Total L2 cache size & 768KB \\ \hline
\multirow{4}{2.1cm}{Global Memory}  & Line size & 64B \\ \cline{2-3}
 & Bus width & 256 bit \\ \cline{2-3}
 & Latency (cycles) & 100 \\ \cline{2-3}
 & Total memory size & 4GB \\ \hline
%\multirow{5}{2.1cm}{L1 \\ instruction cache \\ (one per 4 CUs) \\ 8 units} & \# of sets & 128 \\ \cline{2-3}
% & Associativity & 4 \\ \cline{2-3}
% & Line size & 64B \\ \cline{2-3}
% & Latency (cycles) & 1 \\ \cline{2-3}
% & Total L1 cache size & 256KB \\ \hline
\end{tabular}
\end{table}


\subsection{Thread Configuration and Mapping}

In OpenCL programming, threads in an NDRange are sub-grouped into workgroups (WGs) of equal size. At run time, workgroups are assigned to CUs by hardware, making the workgroup the unit of allocation. Once assigned, a workgroup stays on its CU until it completes. Threads in a WG can share data through local memory or the L1 cache. Once allocated for execution, a WG is subdivided into smaller groups called wavefronts (AMD)\footnote{It is known as a warp on the NVIDIA platform.} whose size is a hardware design parameter. As multiple workgroups can be assigned to the same CU depending on hardware resource usage, there can be multiple wavefronts from different workgroups on a CU at the same time. The hardware wavefront scheduler then picks an execution-ready wavefront from the pool, which is known as thread switching.
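The decomposition above can be sketched for a 1-D NDRange, assuming the 64-thread wavefront size of AMD hardware (the function and its return convention are illustrative, not an OpenCL API):

```python
# Sketch: how a 1-D OpenCL NDRange decomposes into workgroups,
# wavefronts, and SIMD lanes (wavefront size 64 on AMD GCN).
WAVEFRONT_SIZE = 64

def thread_mapping(global_id, workgroup_size):
    wg_id = global_id // workgroup_size        # which workgroup
    local_id = global_id % workgroup_size      # position inside the WG
    wavefront_id = local_id // WAVEFRONT_SIZE  # wavefront within the WG
    lane = local_id % WAVEFRONT_SIZE           # lane within the wavefront
    return wg_id, wavefront_id, lane

# A 256-thread workgroup holds 256/64 = 4 wavefronts; global thread 300
# is thread 44 of workgroup 1, in that WG's first wavefront:
assert thread_mapping(300, 256) == (1, 0, 44)
```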


\subsection{Simulation and Benchmarks}

For accurate simulation, we use a fully-verified, cycle-accurate, detailed CPU-GPU heterogeneous processor architectural simulation infrastructure called Multi2Sim~\cite{m2s_url, Ubal2012Multi2sim} for all experiments in this paper. Multi2Sim is an open-source, modular, and fully configurable simulator that models several commercial CPUs and GPUs. It implements all important hardware blocks and provides very detailed performance statistics for each block and memory subcomponent. It also allows users to modify the hardware configuration using a standard \textit{ini} file format. Table~\ref{tbl:arch_config} lists the baseline hardware configuration used in this study, which matches the AMD HD7970 GPU specification.

We use representative benchmark workloads provided in the AMD APP SDK~\cite{app-sdk}. For a more accurate assessment, input sizes and parameters are carefully chosen to fully occupy all hardware resources.


\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.5\textwidth}
  \centering
  \setlength{\abovecaptionskip}{2pt}
  \includegraphics[width=8.4cm, height=2.7cm]{images/CacheHitRate_ScalarSeperate_L1.eps}
  \caption{L1 data cache hit rates}
  \label{fig:cachehitratio_various_benchmarks_1}
\end{subfigure}
%\vskip+2ex
\begin{subfigure}[b]{0.5\textwidth}
  \centering
  \setlength{\abovecaptionskip}{2pt}
  \includegraphics[width=8.4cm, height=2.7cm]{images/CacheHitRate_ScalarSeperate_L2.eps}
  \caption{L2 data cache hit rates}
  \label{fig:cachehitratio_various_benchmarks_2}
\end{subfigure}%
\caption{Comparison of L1 and L2 data cache hit rate.}
%Note that L1 vector cache is used to represent GPU L1 while L2 is shared by scalar and vector units.
\label{fig:cachehitratio_various_benchmarks}
%Caption of subfigures \subref{fig:subfig1},\subref{fig:subfig2} and \subref{fig:subfig3}}
\end{figure}

%%%%%%%%%%%%%%%%%%%%%%
\section{Dissection of GPU Cache Behavior and Performance} \label{sec:disec}

In this section, we present our evaluation and analysis of cache behavior and performance. For comparison purposes, we also simulate serial versions of the benchmarks using the generic x86 model implemented in Multi2Sim. The detailed CPU configuration can be found in~\cite{m2s_url}.

\subsection{L1 vector data cache hit rate is considerably low}

Fig.~\ref{fig:cachehitratio_various_benchmarks} compares L1 vector data (L1VD) cache hit rates between CPU and GPU for the benchmarks tested. Experimental results show that, in almost all benchmarks, L1VD cache hit rates on the GPU are considerably lower than those on the CPU; the average hit rate is 0.49 (49\%) on the GPU versus 0.88 (88\%) on the CPU.

Given the same benchmarks, the L2 data cache exhibits a higher hit rate: on average 0.73 (73\%) on the GPU versus 0.93 (93\%) on the CPU. Both L1 and L2 hit rates are thus considerably lower (by 45\% and 21\%, respectively) on the GPU than on the CPU, and such low cache hit rates result in higher memory traffic at lower levels of the memory hierarchy (L2 or off-chip global memory), degrading overall performance.


\subsection{L1 vector and scalar data caches exhibit different cache behavior}
A separate L1 scalar data (L1SD) cache is used for data requested by scalar memory instructions; its hit rate for the same benchmarks is shown in Fig.~\ref{fig:cachehitratio_l1scalar}. Compared with L1VD, it is considerably higher (96\% on average). One outlier is the \textit{RS} benchmark, in which the memory access stride for scalar memory instructions varies from 1 to $2^k$ (the length of the input string) in steps that are multiples of 2. When the memory access stride aligns with the number of sets of the cache, conflict misses occur.
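The stride-alignment effect can be seen in a small sketch, assuming a 64-set, 4-way, 64B-line cache as in Table~\ref{tbl:arch_config} (the access counts and helper are illustrative, not taken from the RS kernel itself):

```python
# Sketch: power-of-two strides that align with the set count concentrate
# accesses into few sets of a 64-set, 4-way, 64 B-line cache.
LINE, SETS, WAYS = 64, 64, 4

def sets_touched(stride_bytes, n_accesses=256):
    """Distinct cache sets hit by a strided access stream starting at 0."""
    return {((i * stride_bytes) // LINE) % SETS for i in range(n_accesses)}

# A small stride (one 4 B word) spreads accesses over many sets:
assert len(sets_touched(4)) == 16
# A stride of SETS * LINE = 4096 B maps every access to a single set,
# so anything beyond WAYS = 4 distinct lines causes conflict evictions:
assert len(sets_touched(4096)) == 1
```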

% and even higher than that of CPU (88\% and 93\% for L1 and L2 cache, respectively)

\begin{figure}[t]
\centering
\includegraphics[width=8.4cm, height=2.7cm]{images/CacheHitRate_L1Scalar.eps}
\caption{Cache hit rates for GPU L1 scalar data cache}
\label{fig:cachehitratio_l1scalar}
\end{figure}

\begin{figure}[t]
\centering
\includegraphics[width=8.4cm, height=2.7cm]{images/CompulsoryMissRate.eps}
\caption{Contribution of compulsory miss in L1 vector data cache miss.}
\label{fig:compulsory_miss_rate}
\end{figure}

We observe that L1SD exhibits a higher hit rate for two reasons. First, four CUs share a single cache, and such a configuration increases inter-thread locality significantly. Second, scalar operations have a good amount of both temporal and spatial locality, as scalar memory instructions are typically used for accessing common memory buffer descriptors or uniform\footnote{Uniform refers to the situation where all threads within a wavefront behave in the same way (e.g., accessing the same data).} data.

%%%%%%%%%%%
\subsection{Compulsory miss dominates in L1 vector data cache} \label{subsec:compulsory}

According to our classification of cache misses, compulsory misses are the major culprit behind the high L1VD cache miss rate, as shown in Fig.~\ref{fig:compulsory_miss_rate}. On average, compulsory misses account for 68\% of total misses. This behavior contradicts the conventional wisdom that compulsory misses are negligibly small on traditional multi-core processors~\cite{Roh1995storagehierarchy}.

\begin{figure}[t]
\centering
\includegraphics[width=8cm, height=2.7cm]{images/MatMultDiagram.eps}
\caption{Data usage in matrix calculation of DCT}
\label{fig:matrixmultiplication_diagram}
\end{figure}

Although each thread is expected to work on a different data set within a CU, there is a certain degree of data reuse between threads on different CUs. For example, consider the matrix multiplication used in the DCT benchmark. Fig.~\ref{fig:matrixmultiplication_diagram} shows the data use patterns of three arrays. Assuming that there are 16 CUs ($CU0 \sim CU15$) available and 16 workgroups total ($WG0 \sim WG15$) to execute, one workgroup is assigned to each CU. Suppose that $WG0$, which calculates C\textsubscript{00}, is assigned to $CU0$, $WG1$, which calculates C\textsubscript{01}, is assigned to $CU1$, and so on. $CU0$ uses data A\textsubscript{00}, A\textsubscript{01}, A\textsubscript{02}, A\textsubscript{03}, B\textsubscript{00}, B\textsubscript{10}, B\textsubscript{20}, and B\textsubscript{30}, and $CU1$ uses data A\textsubscript{00}, A\textsubscript{01}, A\textsubscript{02}, A\textsubscript{03}, B\textsubscript{01}, B\textsubscript{11}, B\textsubscript{21}, and B\textsubscript{31}. In this scenario, $CU0$, $CU1$, $CU2$, and $CU3$ clearly use the same data A\textsubscript{00}, A\textsubscript{01}, A\textsubscript{02}, A\textsubscript{03} for their calculations. When this data use pattern occurs across all CUs, every first access to the same data by each CU results in a compulsory miss on that CU. For the FW and RS benchmarks, the contribution of compulsory misses is relatively smaller than in other benchmarks because reuse of pre-loaded data is relatively high.
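The cross-CU reuse above can be quantified with a toy first-touch model: with private L1 caches, the shared row of $A$ is compulsory-missed once per CU, whereas a cache shared by the four CUs misses it only once in total. The function and its parameters are our illustration, not the simulator's miss classifier:

```python
# Toy model: count first-touch (compulsory) misses for the DCT example,
# with CUs grouped so that `sharing_factor` CUs share one cache.
def compulsory_misses(requests_per_cu, sharing_factor=1):
    misses = 0
    for start in range(0, len(requests_per_cu), sharing_factor):
        seen = set()  # contents of one (possibly shared) cache
        for cu_requests in requests_per_cu[start:start + sharing_factor]:
            for addr in cu_requests:
                if addr not in seen:   # first touch -> compulsory miss
                    seen.add(addr)
                    misses += 1
    return misses

row_a = ["A00", "A01", "A02", "A03"]           # reused by CU0..CU3
cus = [row_a + [f"B{r}{c}" for r in range(4)]  # B column is private per CU
       for c in range(4)]

assert compulsory_misses(cus, sharing_factor=1) == 32  # 8 misses x 4 CUs
assert compulsory_misses(cus, sharing_factor=4) == 20  # row A fetched once
```

The drop from 32 to 20 misses is exactly the reuse that private L1 caches fail to capture, motivating the shared L1VD cache proposed in Section~\ref{sec:scheme}.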

%%%%%%%%%%%%%%%%%
\subsection{L2 data cache access contention is significant} \label{subsec:contention}

\begin{figure}[t]
\centering
\includegraphics[scale=1.0]{images/ContentionDiagram.eps}
% where an .eps filename suffix will be assumed under latex,
% and a .pdf suffix will be assumed for pdflatex; or what has been declared
% via \DeclareGraphicsExtensions.
\caption{Illustration of L2 vector data cache memory access contention on GPUs.}
\label{fig:contention_diagram}
\end{figure}

\begin{figure}[t]
\centering
\includegraphics[width=8.4cm, height=2.7cm]{images/ContentionDCTExample.eps}
% where an .eps filename suffix will be assumed under latex,
% and a .pdf suffix will be assumed for pdflatex; or what has been declared
% via \DeclareGraphicsExtensions.
\caption{Contention traffic in L2 cache in DCT benchmark}
\label{fig:contention_dct_example}
\end{figure}

Our detailed profiling shows that there is significant contention in the L2 data cache (L2D), for two main reasons. First, the shared L2D receives a high volume of simultaneous memory requests from multiple CUs due to the high L1VD miss rate. This is inevitable unless L2D supports as many ports as there are CUs; implementing that many ports, however, is not only very expensive but also does not help with cache misses and memory writes. Second, off-chip global memory accesses have long latency. On a miss, all other threads with requests to L2D have to wait a very long time while retrying, because the cache is locked for the thread being served.

Fig.~\ref{fig:contention_diagram} illustrates L2 vector data cache memory contention. First, wavefronts from different CUs request data with the same memory addresses (in this example) from their own private caches. If they all miss, they then issue requests to the lower-level L2D cache simultaneously or within a very small time interval. Of these multiple requests, one ($CU0$'s in this example) succeeds and locks the L2D cache. The requests from $CU1$, $CU2$, and $CU3$ are blocked and continue to retry until the L2D cache is released. Due to the long latency of main memory, this results in many retries. Once $CU0$ releases L2, the retries from $CU1$, $CU2$, and $CU3$ succeed with hits and the contention ends. Fig.~\ref{fig:contention_dct_example} shows the contribution of contention traffic to total L2 cache traffic in the DCT benchmark: 19\% of total L2 cache traffic is caused by contention, and the average contention traffic across all benchmarks is 8\%.
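A back-of-the-envelope model shows why a single miss generates so much retry traffic. Assuming (as a simplification of the scenario above) that each blocked CU retries once per cycle for the full 100-cycle memory latency of Table~\ref{tbl:arch_config}:

```python
# Toy model of the contention scenario: N CUs miss on the same line; one
# locks it, the others retry every cycle until the memory access completes.
MEM_LATENCY = 100  # global memory latency in cycles (Table 1)

def contention_fraction(num_cus):
    """Fraction of this line's L2 traffic that is retry (contention) traffic."""
    useful = num_cus                       # one eventual fill/hit per CU
    retries = (num_cus - 1) * MEM_LATENCY  # blocked CUs retry each cycle
    return retries / (retries + useful)

# With 4 CUs contending, almost all per-line traffic is retries:
assert round(contention_fraction(4), 2) == 0.99
```

Real contention fractions are far lower (8\% on average across benchmarks) because only some lines are contended; the model only illustrates why contended lines are so expensive.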

%%%%%%%%%%%%%%%
\subsection{Coalescing occurs massively}

\begin{figure}[t]
\centering
\includegraphics[scale=0.85]{images/SIVectorCache.eps}
% where an .eps filename suffix will be assumed under latex,
% and a .pdf suffix will be assumed for pdflatex; or what has been declared
% via \DeclareGraphicsExtensions.
\caption{AMD Southern Island L1 vector data cache unit structure~\cite{amd2012GcnWhitepaper}}
\label{fig:SI_L1_cache}
\end{figure}

GPUs achieve high performance through massive thread execution, and these threads consequently issue data requests to their own L1 caches. The GPU memory subsystem is equipped with a hardware block called the \textit{coalesce unit} in the L1VD cache (Fig.~\ref{fig:SI_L1_cache}), which merges memory requests within a certain address range during a certain time period into a smaller number of transactions. Our experiments show that such coalescing occurs massively and plays a critical role in reducing memory traffic.

% For example, AMD's Southern Island hardware supports the maximum of 81,920 threads in parallel and NVIDIA's Kepler supports 16,384 threads.

\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.5\textwidth}
  \centering
  \setlength{\abovecaptionskip}{2pt}
  \includegraphics[scale=0.90]{images/CoalescedAccess_1.eps}
  \caption{Same address.}
  \label{fig:coalesced_1}
\end{subfigure}
%\vskip+2ex
\begin{subfigure}[b]{0.5\textwidth}
  \centering
  \setlength{\abovecaptionskip}{2pt}
  \includegraphics[scale=0.90]{images/CoalescedAccess_2.eps}
  \caption{Contiguous and linear address.}
  \label{fig:coalesced_2}
\end{subfigure}%
\caption{Two example cases of memory coalescing.}
\label{fig:coalesced}
%Caption of subfigures \subref{fig:subfig1},\subref{fig:subfig2} and \subref{fig:subfig3}}
\end{figure}

Fig.~\ref{fig:coalesced} illustrates two coalescing cases. Fig.~\ref{fig:coalesced_1} shows a case where each CU issues memory requests with the same address. In this case, memory requests for addresses $A00 \sim A07$ are merged into a single request. Cache line size is a major factor that determines the degree of coalescing: if the range of requested addresses exceeds the cache line size, requests are divided into several partially coalesced requests. Fig.~\ref{fig:coalesced_2} shows a different case where each CU issues memory requests with contiguous, linear addresses ($A00 \sim A07$ from $CU0$, $A08 \sim A15$ from $CU1$, and so on). Requests $A00 \sim A07$ are coalesced into a single request for $CU0$, requests $A08 \sim A15$ into a single request for $CU1$, and so on. These coalesced requests can be further coalesced together if the cache line size of the L2D cache is larger than that of the L1VD cache.
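The two cases above reduce to a simple rule: requests falling in the same cache line become one transaction. A minimal sketch of that rule (a simplification of the hardware coalesce unit, ignoring its time-window behavior):

```python
# Sketch: requests within the same 64 B cache line merge into one transaction.
LINE = 64

def coalesced_transactions(addresses):
    """Number of line-sized transactions needed for a list of byte addresses."""
    return len({addr // LINE for addr in addresses})

# Case (a): eight lanes read the same address -> 1 transaction
assert coalesced_transactions([0x100] * 8) == 1
# Case (b): sixteen lanes read contiguous 4 B words -> one 64 B line
assert coalesced_transactions([4 * i for i in range(16)]) == 1
# Strided access spanning many lines coalesces poorly: 8 transactions
assert coalesced_transactions([128 * i for i in range(8)]) == 8
```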

%In case of miss, as described in section~\ref{subsec:contention}, only the memory requests from one CU are served, and the others are blocked and continue to retry.


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Workgroup assignment impacts locality}
%Figure showing assignment locality problem
Our study shows that there exist \textit{intra-workgroup} and \textit{inter-workgroup} localities. The former can be exploited by a programmer via thread indexing, while the latter is governed by the hardware's workgroup assignment policy. To the best of our knowledge, the default workgroup assignment policy employed by all modern GPUs is to assign workgroups to the first available CU (a so-called \textit{first-availability basis}). This guarantees good compute unit utilization but may miss an opportunity to exploit inter-workgroup locality. Our experiment demonstrates that an alternative scheme where contiguous workgroups are assigned to the same CU can improve the L1VD cache hit rate by up to 17\%. This higher inter-workgroup locality can be understood from the example shown in Fig.~\ref{fig:matrixmultiplication_diagram}. If $WG0 \sim WG3$ were assigned to the same CU, they would reuse the first row ($A00 \sim A03$) of array $A$, and $WG4 \sim WG7$ would reuse the second row ($A10 \sim A13$). Note that array $B$ would not make any significant difference between the two cases.
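The difference between the two policies can be sketched on the matrix-multiplication reuse pattern above (16 WGs on 4 CUs, where WG $i$ reuses row $i/4$ of array $A$). The round-robin stand-in for first-availability and the row-count metric are our simplifications:

```python
# Sketch: default vs. clustered workgroup assignment on the DCT-like
# pattern where WG i reuses row i//4 of array A.
def assign(num_wgs, num_cus, clustered):
    if clustered:
        # consecutive WG IDs land on the same CU
        return {wg: wg // (num_wgs // num_cus) for wg in range(num_wgs)}
    # round-robin approximates first-availability when WGs finish in order
    return {wg: wg % num_cus for wg in range(num_wgs)}

def rows_fetched(mapping, num_cus):
    """Total distinct rows of A fetched, summed over per-CU private caches."""
    return sum(len({wg // 4 for wg, cu in mapping.items() if cu == c})
               for c in range(num_cus))

# Default spreads every row across all CUs; clustering keeps each row on one CU:
assert rows_fetched(assign(16, 4, clustered=False), 4) == 16
assert rows_fetched(assign(16, 4, clustered=True), 4) == 4
```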

% Locality from workgroup scheduling cannot be optimized by using any one scheme in practice. Using a certain scheme with no knowledge of the underlying structure of the data risks losing locality that may have appeared using a different scheme. Ideally, a scheme must be chosen depending on the layouts of data and their memory access pattern among workgroups. If the hardware exposed different assignment policies, then this could be handled either explicitly by the programmer using a vendor specific extension, or this decision could be made by compiler by analyzing the kernel and sending feedback to the driver for dynamic hardware setting. The improved cache performance from a well-chosen workgroup assignment policy can be further enhanced with techniques such as the cache aware wavefront scheduling described in~\cite{WavefrontScheduling}.

%%%%%%%%%%%%
\section{Proposed Schemes} \label{sec:scheme}
Based on our observations and understanding of cache behavior and performance discussed in the previous section, we present our two proposed approaches in this section: \textit{shared L1 vector data cache} and \textit{clustered workgroup scheduling}.

\begin{figure}[tbh!]
\centering
\includegraphics[scale=0.85]{images/SharedCacheDiagram.eps}
\caption{Example of shared L1VD cache with the sharing factor of 4.}
\label{fig:L1_shared_cache}
\end{figure}

%\begin{figure}[tb]
%\centering
%\begin{subfigure}[t]{0.235\textwidth}
%  \centering
%  %\setlength{\abovecaptionskip}{-9pt}
%  \includegraphics[width=4.2cm, height=2.7cm]{images/HitRateFW_1.eps}
%  \caption{L1 vector data cache hit rates}
%  \label{fig:hit_rate_fw_1}
%\end{subfigure}
%\begin{subfigure}[t]{0.235\textwidth}
%  \centering
%  %\setlength{\abovecaptionskip}{-9pt}
%  \includegraphics[width=4.2cm, height=2.7cm]{images/HitRateFW_2.eps}
%  \caption{Normalized total \# of memory accesses and cache misses}
%  \label{fig:hit_rate_fw_2}
%\end{subfigure}%
%\caption{FW benchmark results by various sharing factor}
%\label{fig:hit_rate_fw}
%%Caption of subfigures \subref{fig:subfig1},\subref{fig:subfig2} and \subref{fig:subfig3}}
%\end{figure}

\subsection{Shared L1 vector data cache}

We have shown that the majority of cache misses in the L1VD cache are compulsory misses. Compulsory misses are generally unavoidable in traditional task-parallel CPU architectures because the data of each task is both independent of and exclusive to that task (i.e., there is little data sharing or locality between tasks). On GPUs, on the other hand, there is a great deal of data reuse among threads (e.g., across wavefronts and workgroups): the same data sections are used by multiple threads. Our proposal to address this challenge is to make an L1VD cache shared by multiple CUs.


%\begin{figure*}[t]
%\centering
%\includegraphics[width=18cm,, height=2.6cm]{images/SharedVectorResult_VectorCacheHit.eps}
%% where an .eps filename suffix will be assumed under latex,
%% and a .pdf suffix will be assumed for pdflatex; or what has been declared
%% via \DeclareGraphicsExtensions.
%\caption{L1 vector data cache hit rates across different sharing factors.}
%\label{fig:shared_vector_result_vectorhitrate}
%\end{figure*}
%
%\begin{figure*}[t]
%\centering
%\includegraphics[width=18cm,, height=2.6cm]{images/SharedVectorResult_TotalMemoryAccess.eps}
%% where an .eps filename suffix will be assumed under latex,
%% and a .pdf suffix will be assumed for pdflatex; or what has been declared
%% via \DeclareGraphicsExtensions.
%\caption{Normalized total number of memory accesses across different sharing factors.}
%\label{fig:shared_vector_result_memoryaccess}
%\end{figure*}
%
%\begin{figure*}[t]
%\centering
%\includegraphics[width=18cm,, height=2.6cm]{images/SharedVectorResult_TotalExecutionCycles.eps}
%% where an .eps filename suffix will be assumed under latex,
%% and a .pdf suffix will be assumed for pdflatex; or what has been declared
%% via \DeclareGraphicsExtensions.
%\caption{Normalized total number of memory accesses across different sharing factors.}
%\label{fig:shared_vector_result_memoryaccess}
%\end{figure*}

\begin{figure*}[t]
\centering
\begin{subfigure}[b]{1\textwidth}
  \centering
  \setlength{\abovecaptionskip}{2pt}
  \includegraphics[width=18cm, height=2.6cm]{images/SharedVectorResult_VectorCacheHit.eps}
  \caption{L1 vector data cache hit rates}
  \label{fig:shared_vector_result_vectorhitrate}
\end{subfigure}
\vskip-1ex
\begin{subfigure}[b]{1\textwidth}
  \centering
  \setlength{\abovecaptionskip}{2pt}
  \includegraphics[width=18cm, height=2.6cm]{images/SharedVectorResult_TotalMemoryAccess.eps}
  \caption{Normalized total number of memory accesses}
  \label{fig:shared_vector_result_memoryaccess}
\end{subfigure}
\vskip-1ex
\begin{subfigure}[b]{1\textwidth}
  \centering
  \setlength{\abovecaptionskip}{2pt}
  \includegraphics[width=18cm, height=2.6cm]{images/SharedVectorResult_TotalExecutionCycles.eps}
  \caption{Normalized total execution cycles}
  \label{fig:shared_vector_result_totalcycles}
\end{subfigure}%
\caption{Performance results for different sharing factors}
\label{fig:final_result_various_benchmarks}
%Caption of subfigures \subref{fig:subfig1},\subref{fig:subfig2} and \subref{fig:subfig3}}
\end{figure*}




\begin{table}
% increase table row spacing, adjust to taste
\renewcommand{\arraystretch}{1.1}
\small
\caption{Memory performance improvements by various sharing factors.}
\label{tbl:result_l1c_cache_hit2}
\centering
\begin{tabular}{|c||r|r|r|r|}
\hline
\multicolumn{5}{|c|}{Improvements in L1VD cache hit rate}\\ \hline
Sharing factor &    2 &     4 &     8 &    16  \\ \hline
BS   &  0\% &   0\%	&   0\%	&   1\%	 \\ \hline
DCT  & 28\% &  44\%	&  49\%	&  37\%	 \\ \hline
RG   & 15\% &  25\%	&  36\%	&  43\%	 \\ \hline
FW   & 53\%	&  73\%	&  89\%	& 105\%	 \\ \hline
FWT  &  2\%	&   3\%	&   5\%	&   6\%	 \\ \hline
MT   & 21\%	&  34\%	&  46\%	&  51\%	 \\ \hline
SLA  & 15\%	&  36\%	&  37\%	&  39\%	 \\ \hline
RED  &  5\%	&  11\%	&  13\%	&   6\%	 \\ \hline
RS   &  9\%	&  29\%	&  44\%	&  51\%	 \\ \hline
HIST & -1\%	&   0\%	&  -4\%	& -10\%	 \\ \hline
BLS  & 56\%	& 127\%	& 169\%	& 227\%	 \\ \hline
BO   & 18\%	&  31\%	&  31\%	&  31\%	 \\ \hline\hline
\multicolumn{5}{|c|}{Improvements in \# of mem. accesses}\\ \hline
Sharing factor &    2 &     4 &     8 &    16  \\ \hline
BS   & 50\% &  75\% &  87\% &  90\%  \\ \hline
DCT  & 49\% &  55\% &  59\% &  63\%  \\ \hline
RG   &  8\% &  13\% &  17\% &  18\%  \\ \hline
FW   & 24\% &  33\% &  40\% &  47\%  \\ \hline
FWT  &  8\% &  13\% &  16\% &  17\%  \\ \hline
MT   & 14\% &  21\% &  26\% &  28\%  \\ \hline
SLA  &  4\% &   4\% &   4\% &   4\%  \\ \hline
RED  & -1\% &  -3\% &  -2\% &  -2\%  \\ \hline
RS   &  1\% &   4\% &   6\% &   6\%  \\ \hline
HIST &  0\% &   0\% &  -1\% &  -1\%  \\ \hline
BLS  &  0\% &  -1\% &  -7\% & -18\%  \\ \hline
BO   & 30\% &  60\% &  60\% &  60\%  \\ \hline
\end{tabular}
\end{table}


Fig.~\ref{fig:L1_shared_cache} illustrates our proposed L1VD cache structure, where 4 CUs share a single L1 vector data cache (i.e., a sharing factor of 4). While data requests from active wavefronts within a single CU are coalesced in a private L1VD cache, a shared L1VD cache allows data requests from multiple CUs to be coalesced together. For full benefit, the shared L1VD cache needs multiple read/write ports to interface with several CUs. As requests from multiple CUs are merged, contention between requests (and thus the total number of memory accesses) is substantially decreased. The cache hit rate increases with a higher sharing factor, but the benefit relative to the hardware implementation cost is expected to diminish at some point.

% Since caches are implemented on-chip (on the same silicon die), the cost of the added ports to the caches is trivial.

When merging L1VD caches with a sharing factor of 4, for example, the merged shared L1VD cache is 4 times the original size. The merging can be implemented by either increasing the number of sets or increasing the set associativity. Even though the total cache traffic to the shared L1VD cache remains unchanged, the merged L1VD cache helps decrease capacity and conflict misses. Our experimental results clearly demonstrate this improvement. As a good example, although the compulsory miss rate is relatively low in two benchmarks, FW and RS, under the private configuration (Fig.~\ref{fig:compulsory_miss_rate}), the shared L1VD cache improves their hit rates by 73\% and 29\%, respectively, with a sharing factor of 4 (Fig.~\ref{fig:shared_vector_result_vectorhitrate}).
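To make the two merging options concrete, the following Python sketch shows how the set index of an address changes when four private caches are merged by multiplying the number of sets, versus leaving the index function unchanged by multiplying the associativity. The cache geometry below (line size, sets, ways) is assumed for illustration only, not our exact simulated configuration.

```python
# Illustrative sketch of merging four private L1VD caches (sharing factor 4).
# All geometry constants are assumptions for illustration, not simulator values.

LINE_SIZE = 64        # bytes per cache line (assumption)
PRIVATE_SETS = 64     # sets in one private L1VD cache (assumption)
PRIVATE_WAYS = 4      # associativity of one private cache (assumption)
SHARING_FACTOR = 4

def set_index(addr: int, num_sets: int) -> int:
    """Set index of a physically indexed cache with the given number of sets."""
    return (addr // LINE_SIZE) % num_sets

# Option 1: multiply the number of sets. Addresses spread over 4x more sets,
# which reduces conflict misses but changes the index function.
option1_sets = PRIVATE_SETS * SHARING_FACTOR      # 256 sets x 4 ways

# Option 2: multiply the associativity. The index function is unchanged and
# each set simply holds 4x more lines (64 sets x 16 ways).
option2_ways = PRIVATE_WAYS * SHARING_FACTOR

addr = 0x1A2C0
private_idx = set_index(addr, PRIVATE_SETS)   # index in a private cache
merged_idx = set_index(addr, option1_sets)    # index under option 1
```

Under option 2, an address maps to the same set as in a private cache, so lines formerly replicated across four private caches now compete as ways within one larger set; under option 1, the same total capacity is spread across more sets.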

%Fig.~\ref{fig:hit_rate_fw} shows the changes in cache hit rates, memory accesses, and cache misses among different sharing factor, for FW benchmark. It shows that as the sharing factor increases, L1VD cache hit rates increase, and the total number of memory accesses and cache misses decreases, but their improvements tend to gradually slow down.

Fig.~\ref{fig:shared_vector_result_vectorhitrate} and Fig.~\ref{fig:shared_vector_result_memoryaccess} show experimental results for a collection of benchmarks at five different sharing factors. As shown in Fig.~\ref{fig:shared_vector_result_vectorhitrate}, hit rates increase as the sharing factor increases. In eight benchmarks (DCT, RG, FW, MT, SLA, RS, BLS, and BO), the cache hit rate improves by over 35\% at a sharing factor of 16. Only one benchmark, HIST, gets slightly worse because each thread processes a distinct set of data (no data reuse) and the other types of misses increase slightly when the caches are merged. Overall, the hit rate improves by 37\% on average. The total number of memory accesses is shown in Fig.~\ref{fig:shared_vector_result_memoryaccess}; the BS benchmark shows the largest decrease (90\% when comparing a sharing factor of 16 with the non-sharing case), and memory traffic is reduced by 22\% on average. Table~\ref{tbl:result_l1c_cache_hit2} gives the detailed statistics. Fig.~\ref{fig:shared_vector_result_totalcycles} shows the improvement in total execution cycles using the latency information in Table~\ref{tbl:arch_config}: most benchmarks (11 out of 12) improve, and the overall improvement is 13\% in total execution cycles.


%Due to magnitude difference of the number of accesses, graphs are represented as normalized number of accesses, where the number of accesses for shared factor case 1 is a basis for each benchmarks.

%%%%%%%%%%%%%%%%%%
\subsection{Clustered workgroup scheduling}

%\begin{figure}[t]
%\centering
%\begin{subfigure}[t]{0.235\textwidth}
%  \centering
%  %\setlength{\abovecaptionskip}{-9pt}
%  \includegraphics[scale=0.95]{images/HitRateFW_wWF1_1.eps}
%  \caption{L1 vector data cache hit rates}
%  \label{fig:hit_rate_fw_wWF1_1}
%\end{subfigure}
%\begin{subfigure}[t]{0.235\textwidth}
%  \centering
%  %\setlength{\abovecaptionskip}{-9pt}
%  \includegraphics[scale=0.95]{images/HitRateFW_wWF1_2.eps}
%  \caption{Normalized total \# of memory accesses and cache misses}
%  \label{fig:hit_rate_fw_wWF1_2}
%\end{subfigure}%
%\caption{FW benchmark results by various sharing factor}
%\label{fig:hit_rate_fw_wWF1}
%%Caption of subfigures \subref{fig:subfig1},\subref{fig:subfig2} and \subref{fig:subfig3}}
%\end{figure}

Motivated by inter-workgroup locality, we propose a workgroup scheduling (assignment) policy in which workgroups with contiguous IDs are assigned to the same CU. Given a workgroup ID, the destination CU is determined by:

% ; C : Set of compute units
% ; B : Set of workgroup buffer lists
% ; B_{i} is the buffer list for C_{i}
%
% initialAssignment
%   wg_per_cu = floor((|WG| + |C| - 1) / |C|)
%   for i from 0 to |C| - 1
%     B_{i}.head = i * wg_per_cu
%     if i == |C| - 1
%       B_{i}.tail = |WG| - 1
%     else
%       B_{i}.tail = (i + 1) * wg_per_cu - 1

% More readable
\begin{displaymath}
CU_{ID} = \left\lfloor \frac{WG_{ID}}{\left\lceil |WG| / |C| \right\rceil} \right\rfloor
\end{displaymath}
where $CU_{ID}$ is a compute unit ID, $WG_{ID}$ is a workgroup ID, $|WG|$ is the number of workgroups, and $|C|$ is the number of compute units. The denominator, which can be computed with integer arithmetic as $\lfloor (|WG| + |C| - 1)/|C| \rfloor$, is the number of contiguous workgroups assigned to each CU.
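As a minimal sketch, the assignment formula above can be written in a few lines of Python (variable names mirror the notation; the example numbers in the comments are illustrative):

```python
# Contiguous workgroup-to-CU assignment: num_wgs = |WG|, num_cus = |C|.

def wg_per_cu(num_wgs: int, num_cus: int) -> int:
    # floor((|WG| + |C| - 1) / |C|), i.e., ceil(|WG| / |C|)
    return (num_wgs + num_cus - 1) // num_cus

def cu_id(wg_id: int, num_wgs: int, num_cus: int) -> int:
    # Workgroups with contiguous IDs map to the same CU.
    return wg_id // wg_per_cu(num_wgs, num_cus)

# Example: 100 workgroups on 8 CUs gives 13 workgroups per CU;
# workgroups 0-12 go to CU 0, 13-25 to CU 1, and so on.
```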

To avoid any undesired idle CUs, the idea of ``workgroup borrowing'' is implemented as shown in Algorithm~\ref{fig:activate}. An idle CU takes half of the inactive workgroups from the queue of a non-idle CU. This leads to some thrashing of the workgroup queues, as very few workgroups remain inactive near the end of kernel execution, but the effect is minimal assuming large input sizes.

% activateWorkgroup
%   for i from 0 to |C| - 1
%     if available C_{i}
%       if B_{i}.head > B_{i}.tail
%         for all j mod |C| != i from (i + 1) mod |C|
%           if B_{j}.head <= B_{j}.tail
%             B_{i}.head = B_{j}.head
%             B_{i}.tail = (B_{j}.head + B_{j}.tail) / 2
%             B_{j}.head = B_{i}.tail + 1
%       if B_{i}.head <= B_{i}.tail
%         activate WG_{B_{i}.head} on C_{i}
%         B_{i}.head = B_{i}.head + 1

% More readable
\begin{algorithm}
\caption{Choosing next workgroup to activate on a CU.}
\label{fig:activate}
\begin{algorithmic}
\FORALL{compute units C}
\IF{C is available}
\IF{C's queue is empty}
\STATE Borrow half of the workgroups from the next non-idle CU
\ENDIF
\IF{C's queue is non-empty}
\STATE Activate and remove a workgroup from C's queue
\ENDIF
\ENDIF
\ENDFOR
\end{algorithmic}
\end{algorithm}
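A minimal Python sketch of Algorithm~\ref{fig:activate} follows, keeping each CU's queue as a contiguous head/tail range of workgroup IDs. The class and function names are ours, for illustration only.

```python
# Sketch of clustered workgroup scheduling with workgroup borrowing.
# Each CU owns a contiguous range [head, tail] of workgroup IDs.

class WGQueue:
    """Contiguous range of workgroup IDs assigned to one CU."""
    def __init__(self, head: int, tail: int):
        self.head = head
        self.tail = tail

    def empty(self) -> bool:
        return self.head > self.tail

def initial_assignment(num_wgs: int, num_cus: int) -> list:
    """Split workgroup IDs into contiguous per-CU ranges."""
    per_cu = (num_wgs + num_cus - 1) // num_cus   # ceil(|WG| / |C|)
    queues = []
    for i in range(num_cus):
        head = i * per_cu
        tail = num_wgs - 1 if i == num_cus - 1 else (i + 1) * per_cu - 1
        queues.append(WGQueue(head, min(tail, num_wgs - 1)))
    return queues

def next_workgroup(queues: list, cu: int):
    """Pick the next workgroup for an available CU, borrowing if its queue is empty."""
    q = queues[cu]
    if q.empty():
        # Borrow half of the remaining workgroups from the next non-idle CU.
        for j in range(cu + 1, cu + len(queues)):
            donor = queues[j % len(queues)]
            if not donor.empty():
                q.head = donor.head
                q.tail = (donor.head + donor.tail) // 2
                donor.head = q.tail + 1
                break
    if q.empty():
        return None        # no work left anywhere
    wg = q.head
    q.head += 1
    return wg
```

Because a borrower always takes the lower half of a donor's contiguous range, borrowed workgroups still have contiguous IDs, preserving the inter-workgroup locality the policy is designed to exploit.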

% Benchmark
%
% Tag
% L : Locality
% B : Baseline
%
% Benchmark
% 01 : BitonicSort
% 02 : Reduction
% 03 : DCT
% 04 : MatrixTranspose
% 05 : BinomialOption
% 06 : Histogram
% 07 : FastWalshTransform
% 08 : ScanLargeArrays
% 09 : RecursiveGaussian
% 10 : RadixSort
% 11 : FloydWarshall
%
%	Scalar			Vector			L2
% Bench	Acc	Hits	%	Acc	Hits	%	Acc	Hits	%
% 01L	111291	108795	0.9776	484734	480137	0.9905	36241	35673	0.9843
% 02L	16977	16658	0.9812	138858	47605	0.3428	99683	114817	0.5750
% 03L	109987	109177	0.9926	605803	298155	0.4922	660634	529111	0.8009
% 04L	105372	104113	0.9881	2257418	1039325	0.4604	3194903	2286582	0.7157
% 05L	17230	17206	0.9986	5252	3740	0.7121	2676	2161	0.8075
% 06L	24916	19761	0.7931	6049693	1822294	0.3012	1.067e7	6428324	0.6024
% 01B	8587	6827	0.7950	112214	110678	0.9863	22745	22461	0.9875
% 02B	5624	5496	0.9772	74164	24205	0.3264	100627	56818	0.5646
% 03B	770	730	0.9481	40426	13000	0.3216	41667	33466	0.8032
% 04B	21645	21389	0.9882	560311	271750	0.4850	619159	521506	0.8423
% 05B	6567	6543	0.9963	2581	1589	0.6157	2329	2070	0.8888
% 06B	942	846	0.8981	470759	206567	0.4388	572435	302405	0.5283
% 07B	952	592	0.6218	74628	66898	0.8964	18048	15955	0.8840
% 08B	12529	12319	0.9832	117516	49105	0.4179	152315	102491	0.6729
% 09B	38240	37819	0.9890	1296739	584794	0.4510	1667109	1344660	0.8066
% 10B	1276	858	0.6724	1.654e7	2571498 0.1555	2.178e7	17191e7 0.7892
% 11B	232142	226994	0.9778	8045566	3453166	0.4292	8775690	8402193	0.9574

\begin{figure}[t]
\centering
\includegraphics[width=8.4cm, height=2.7cm]{images/HitRate_ClusteredWGS.eps}
% where an .eps filename suffix will be assumed under latex,
% and a .pdf suffix will be assumed for pdflatex; or what has been declared
% via \DeclareGraphicsExtensions.
\caption{L1VD cache hit rate for clustered workgroup scheduling.}
\label{fig:ClusteredWGS}
\end{figure}

To investigate the impact of workgroup clustering with the borrowing policy, five benchmarks that have enough workgroups to exercise this scheme were chosen from the AMD APP SDK. The results are illustrated in Fig.~\ref{fig:ClusteredWGS}. Four of the five benchmarks exhibit an improvement in cache hit rate; one is impacted negatively. Analysis of the individual kernels reveals that the positive impact is associated with kernels that have a relatively small memory access stride (and therefore more inter-workgroup locality), whereas the negative impact is associated with kernels that have a relatively large stride (which hurts inter-workgroup locality).

% Using this observation, a hybrid scheduling policy with knowledge of data access stride would result consistent cache hit increase. Using the default policy for large strides and the clustering with borrowing policy for small strides will lead to 3-6\% cache hit improvement in general in our experiment.

%%%%%%%%%%%%%%%%%%
\subsection{Combined scheme: shared L1 vector data cache $+$ clustered workgroup scheduling}

When the two proposed schemes are combined, the result is even better. For example, in the FW benchmark, the shared L1VD cache with a sharing factor of 8 combined with the proposed clustered workgroup scheduling policy improves the cache hit rate by 118\% and reduces memory traffic by 54\%
(labeled as `FW\_8WF' in Fig.~\ref{fig:hit_rate_fw_wWF2}).

Our experiments show that the combined scheme reduces cache misses and memory traffic in most cases. Nonetheless, there are outliers depending on the specific memory access patterns of workloads. In particular, workloads with large-stride memory access patterns or no data reuse among workgroups do not benefit from the proposed scheme.

\begin{figure}[tb]
\centering
\begin{subfigure}[t]{0.235\textwidth}
  \centering
  \setlength{\abovecaptionskip}{2pt}
  \includegraphics[width=4.1cm, height=2.7cm]{images/HitRateFW_wWF2_1.eps}
  \caption{Cache hit rates}
  \label{fig:hit_rate_fw_wWF2_1}
\end{subfigure}
\begin{subfigure}[t]{0.235\textwidth}
  \centering
  \setlength{\abovecaptionskip}{2pt}
  \includegraphics[width=4.1cm, height=2.7cm]{images/HitRateFW_wWF2_2.eps}
  \caption{Normalized total \# of memory accesses and cache misses}
  \label{fig:hit_rate_fw_wWF2_2}
\end{subfigure}
\caption{Impact of the combined scheme in FW benchmark.}
\label{fig:hit_rate_fw_wWF2}
%Caption of subfigures \subref{fig:subfig1},\subref{fig:subfig2} and \subref{fig:subfig3}}
\end{figure}



%%%%%%%%%%%%
\section{Related Work} \label{sec:related}

Despite a plethora of GPU memory optimization studies at the programming level~\cite{Baskaran:2008:CFO,bjang2011AccessPattern,Sung:2010:DLT,Yang:2010:GCM,Zhang:2010:SGA}, few research works deal with the behavior or performance of the GPU memory hierarchy from a hardware architecture perspective. We believe this is due to the absence of proper tools and methods, such as a detailed architectural simulator or hardware profiler, that would allow researchers to pinpoint problems and collect useful performance data.

Choi~\cite{Choi2012ReducingOffchipTraffic} proposed two cache management schemes to reduce off-chip memory accesses. One is a write-buffering scheme that utilizes the shared cache for inter-block communication instead of off-chip DRAM accesses; the other is a read-bypassing scheme that prevents the shared cache from being polluted by streamed data that are consumed only within a thread block. Their work, however, differs from ours in two respects. First, they used a simulator at the intermediate-language level (NVIDIA's PTX); although simulation at that level can provide some insights, it often misses the real behavior and performance of detailed hardware structures. Second, our work focuses on the classical and fundamental cache configuration that hardware designers must understand, rather than introducing additional hardware blocks on top of the default cache configuration.

Jia~\cite{Jia2012DemandCache} investigated memory traffic on an NVIDIA GPU and characterized it with a taxonomy that introduces three types of locality: within-warp locality, within-block locality, and cross-instruction data reuse. Based on this classification, they proposed a compile-time algorithm that determines whether or not to use the L1 cache. Their approach differs from ours in that they develop a software algorithm to turn the use of cache memories on and off, whereas we characterize and optimize the cache behavior and performance itself.


To the best of our knowledge, no prior work reveals the detailed and accurate cache behavior and performance of emerging GPUs. Our study characterizes memory traffic at the cache level from the standpoint of massively parallel thread execution, and proposes a better cache configuration as well as a workgroup scheduling policy to improve the memory performance of modern GPUs.



%%%%%%%%%%%%
\section{Conclusions} \label{sec:conclusions}

As caches play an increasingly important role in GPU performance, hardware designers need to understand their exact behavior and performance. Programmers also need to know the impact of different memory access patterns on memory performance to better optimize their kernels.

In this paper, we characterize and analyze memory traffic at the cache level using a cycle-accurate architectural simulator. Our study reports several interesting observations and improvements. The cache hit rate is noticeably lower on GPUs than on CPUs; in particular, the private L1 vector data cache suffers from a high number of misses. A major culprit turns out to be compulsory misses due to low data reuse. Because the L1VD cache is private to each CU, inter-workgroup data reuse cannot be captured, which causes many compulsory misses. This privateness also causes high contention in the L2 data cache and generates unnecessary memory request traffic to the lower levels of the memory hierarchy.

The parallel SIMT execution model creates massive opportunities for memory coalescing. Due to inter-workgroup and intra-workgroup locality, many threads request the same or nearby data from the lower levels of the memory hierarchy, and these access patterns do not benefit from private caching. The current round-robin workgroup scheduling does not exploit this data locality either.

To cope with the limited locality and reusability of data, we present a multi-CU shared L1 vector data cache scheme and clustered workgroup scheduling. Our experiments demonstrate that these schemes exploit inter-workgroup locality and, as a result, reduce compulsory misses and contention in the L2 data cache.



%%%%%%%%%%%%
\section*{Acknowledgment}
We thank the Multi2Sim developer team, especially Rafael Ubal and Dana Schaa, for their valuable support with the Multi2Sim simulator.

\bibliographystyle{abbrv}
\bibliography{paper}
\end{document}
