\section{Evaluation}
\label{section:eval}


\subsection{Experiment Setup}
To evaluate the \sys\ design, we use a
Simics-based~\cite{simics} full-system execution-driven simulator
that models the SPARC architecture. For cache and memory simulation,
we use Ruby from the GEMS toolset~\cite{gems}. Our baseline is a
16-tile multicore with private L1 caches and a 16-way shared
inclusive L2 distributed across the tiles.
We employ a $4\times4$ mesh network with virtual cut-through routing. We
simulate two packet types: 8-byte control packets for coherence
messages and 72-byte payload packets for data messages.
Table~\ref{parametersCore} shows the parameters of our simulation
framework.



We use a wide range of workloads, including commercial server
workloads~\cite{simulate-commercial} (Apache and SPECjbb2005),
scientific applications (SPLASH2~\cite{wooohara95}), and multimedia applications
(PARSEC~\cite{bienia11benchmarking}). We also include two microbenchmarks,
migratory and producer-consumer, with known
sharing patterns.
Table~\ref{problem-sizes} lists all the benchmarks
and the inputs used in this study. The table also includes
the maximum number of sharing patterns for each application, which
correlates with the performance of a given \sys\ directory size.


\begin{table*}[!ht]
\begin{minipage}[t]{0.3\linewidth}\centering
\def\post{\rule{0pt}{7pt}}
{
  \caption{Target system parameters}
  \label{parametersCore}
  \centerline {
  \begin{tabular}{|@{\hspace{4pt}}r@{\hspace{4pt}}|@{\hspace{4pt}}p{1.5in}@{\hspace{4pt}}|}
    \hline
    \multicolumn{2}{|c|}{Cores: 16, in-order, 3.0~GHz} \\
    \multicolumn{2}{|c|}{L1D/I: each 64KB, 2-way, 64-byte blocks} \\ 
 \hline
    \multicolumn{2}{|c|} {Shared Tiled L2 Cache} \\
    \hline
    \multicolumn{2}{|c|} {16 banks, 4MB/tile, 16-way, 14 cycles}  \\
    \hline
    \multicolumn{2}{|c|} {Interconnect: $4\times4$ mesh} \\
    \multicolumn{2}{|c|}{128-bit wide, 2-cycle links} \\ 
   \hline  
  \multicolumn{2}{|c|} {Main memory: 500 cycles} \\
  \hline
  \end{tabular}
  }
}
\end{minipage}
\begin{minipage}[t]{0.6\linewidth}\centering
\def\post{\rule{0pt}{7pt}}
{
\centering
{
\caption{Application Characteristics}
\label{problem-sizes}
\scriptsize
%\begin{tabular}{|l|p{.5in}|p{.5in}|p{0.5in}|}
\begin{tabular}{|l|p{2.5in}|r|@{\hspace{1pt}}r|}
  \hline\post
  \multirow{2}{*}{Benchmark} &
  \multirow{2}{*}{Setup} & 
  \# of   & Network \\
  & & sharing patterns & Utilization \\
  \hline\post
  Apache & 80000 requests fastforward, 2000 warmup, and 3000 for data
  collection & 1657& 11.6\% \\
  \hline\post
  JBB2005 & 350K Tx fastforward, 3000 warmup, and 3000 for data collection  &1054 & 8.5\% \\
  \hline\post
  Barnes & 8K particles; run-to-completion &707 & 3.3\% \\
  \hline\post
  Cholesky & lshp.0; run-to-completion &364 & 2.6\% \\
  \hline\post
  FFT & 64K points; run-to-completion  & 104& 3.7\% \\
  \hline\post
  LU & 512x512 matrix,16x16 block; run-to-completion & 249& 1.9\% \\
  \hline\post
  MP3D & 40K molecules; 15 parallel steps; warmup 3 steps &181 & 6.1\% \\
  \hline\post
  Ocean & 258x258 ocean & 208 & 5.7\% \\
  \hline\post
  Radix & 550K 20-bit integers, radix 1024 &169 & 5.0\% \\
  \hline\post
  Water & 512 molecules; run-to-completion &75 &2.7\% \\
  \hline\post
  Migratory & 512 exclusive access cache lines &63 & 0.6\% \\
  \hline\post
  ProdCon & 2K shared cache lines and 8K private cache lines &82 &1.5\% \\
  \hline\post
  Blackscholes & 4096 options &450 &3.5\% \\
  \hline\post
  Bodytrack & 4 cams, 100 particles, 5 layers, 1 frame & 2087 &2.2\% \\
  \hline\post
  Canneal & 100K elements, 10K swaps per step, 32 steps &313 &4.3\% \\
  \hline\post
  X264 & 640 x 360 pixels, 8 frames &590 &2.2\% \\
  \hline
\end{tabular}
}
}
\end{minipage}
\end{table*}

\begin{figure*}[!ht]
\begin{center}
\subfigure[Tagless false positives]{
\includegraphics[width=0.75\textwidth]{figure/false_pos_TL.pdf}
}
\subfigure[\sys\ false positives]{
\includegraphics[width=0.75\textwidth]{figure/false_pos_64.pdf}
}
\caption{(a) Average number of false positives per reference with tagless approach. 
 (b) Average number of false positives per reference with \sys\ approach.}
\label{false-positives}
\end{center}
\end{figure*}




%\begin{table*}[!ht]
%\def\post{\rule{0pt}{7pt}}
%\end{table*}


We compare against the following coherence directory designs:



\paragraph{Tagless Directory (TAGLESS)}
This design evaluates the original tagless approach presented at MICRO
2009. The number of hash functions is fixed at two, and the number of buckets
per set is varied from 16 to 64.
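The tagless lookup behaves like a Bloom filter over the cached tags. The following sketch is our own illustration, not the authors' hardware: the class name, toy salted hashes, and bucket organization are assumptions. It shows the key property that aliasing can only add false sharers, never hide a true one.

```python
# Illustrative model of a tagless-style directory: each of k hash functions
# indexes a bucket holding a P-bit sharing vector; lookup ANDs the vectors,
# so aliasing yields false positives but never false negatives.

class TaglessDirectory:
    def __init__(self, num_cores=16, num_buckets=64, num_hashes=2):
        self.num_cores = num_cores
        self.num_buckets = num_buckets
        self.num_hashes = num_hashes
        # One bucket array per hash function; each bucket is a P-bit vector.
        self.banks = [[0] * num_buckets for _ in range(num_hashes)]

    def _indices(self, addr):
        # Simple salted hashes stand in for the hardware hash functions.
        return [hash((salt, addr)) % self.num_buckets
                for salt in range(self.num_hashes)]

    def add_sharer(self, addr, core):
        for bank, idx in zip(self.banks, self._indices(addr)):
            bank[idx] |= 1 << core

    def lookup(self, addr):
        # ANDing across hash functions filters out some aliasing noise.
        vec = (1 << self.num_cores) - 1
        for bank, idx in zip(self.banks, self._indices(addr)):
            vec &= bank[idx]
        return {c for c in range(self.num_cores) if vec & (1 << c)}
```

More buckets or more hash functions reduce the chance that two distinct blocks collide in every bank at once, which is why the 64-bucket, 2-hash point in our sweep shows negligible false positives.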


\paragraph{SPATL-N (TAGLESS-SPACE Approach)}
We also study a range of \sys\ design points, varying the directory
table from 512 to 2048 entries. We evaluate two versions:
\sys{}-NOUPDATE (SPATL1024noupdate in the charts) and \sys{}. \sys{}-NOUPDATE
is a baseline design for the combined approach. \sys{} includes extra
optimizations (discussed in Section~\ref{section:pattern}) geared to
eliminating the transient false positives that arise from conflicts
in the TAGLESS table. In the \sys\ design, each tile contains a
segment of the directory table. We charge a 2-cycle penalty for
each \sys\ lookup.
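The second level of compression, replacing each bucket's full sharing vector with a pointer into a table of distinct patterns, can be sketched as follows. This is an illustrative model with our own naming, not the paper's implementation:

```python
# Illustrative pattern-table indirection: buckets store small pointers into a
# table of distinct P-bit sharing patterns, so M tracked lines with only a few
# distinct patterns cost far fewer bits than M full vectors.

class PatternTable:
    def __init__(self):
        self.patterns = []      # index -> sharing bit-vector
        self.index_of = {}      # sharing bit-vector -> index

    def intern(self, pattern):
        # Return a pointer (index), allocating an entry only for new patterns.
        if pattern not in self.index_of:
            self.index_of[pattern] = len(self.patterns)
            self.patterns.append(pattern)
        return self.index_of[pattern]
```

For example, ten thousand lines all shared by cores 0 and 1 collapse to a single table entry, with each line paying only the cost of a short pointer.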



\subsection{How Accurate is \sys{}?}

\emph{\sys\ achieves false-positive rates similar to the base Tagless
  design. We do require extra logic to eliminate the pollution arising
  from the two levels of compression. We eliminate this pollution with
  simple ``pattern recalculation'' messages that recompute the sharing
  pattern. These messages are multicast to possible sharers off the
  critical path, at eviction time.
}




In our first set of experiments, we estimate the accuracy of the sharing
patterns maintained in \sys{}. In a directory-based coherence
protocol, coherence operations consult the sharer information to
forward coherence messages, and the accuracy of sharer tracking
affects overall network utilization and hence the
energy spent in communication. For cases in which the sharing pattern
is represented inaccurately, we evaluate the average number of extra
false sharers experienced on each directory probe.

Our baseline, shown in Figure~\ref{false-positives}a, evaluates the
tagless directory approach with different numbers of hash functions and
buckets. 64 buckets with 2 hash functions appears to be the optimal
design, with negligible false positives. Figure~\ref{false-positives}b
shows the \sys\ approach. As we can see, \sys{}-noupdate (naively
combining TAGLESS with a pattern table) introduces many false
positives. Once we introduce the optimization to recalculate the
sharing pattern on evictions, we reduce the false positives and
approach the accuracy of tagless. In applications
including MP3D, FFT, and Water, the \sys\ design adds no
inaccuracy on top of the tagless design. This is due to the
over-provisioning of entries in the pattern table, which must
support other applications as well. In the baseline \sys{}-noupdate
design, Apache, Barnes, Bodytrack, and SPECjbb experience the lowest
accuracy: their relatively large numbers of sharing patterns
merge in complex ways and introduce many false sharers.




The directory is referenced in two scenarios. First, a cache miss in
the L1 looks up the directory to determine which cache can provide the
data. If \sys\ were integrated with an inclusive shared L2 cache, data
for all misses can be sourced from the L2, except when one of the
caches holds a modified copy. If \sys\ were integrated with a
non-inclusive shared cache, a miss may need to source the data from
one of the L1s and requires the directory to determine the possible
sharers. Second, write misses (get-exclusive and upgrade messages)
probe the directory for sharer information to forward invalidations.
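The read-miss half of this sourcing decision can be summarized as a small decision function. This is a hedged sketch with our own naming; the actual protocol details live in the coherence controller:

```python
# Illustrative sourcing decision for an L1 read miss: with an inclusive L2,
# reads are served from the L2 unless some L1 holds a modified copy; with a
# non-inclusive cache, the directory must name the candidate L1 sharers.

def read_miss_source(inclusive_l2, modified_owner, directory_sharers):
    """Return (source level, L1 caches to probe) for a read miss."""
    if modified_owner is not None:
        return ("l1", {modified_owner})       # fetch from the dirty owner
    if inclusive_l2:
        return ("l2", set())                  # inclusive L2 holds the line
    return ("l1", set(directory_sharers))     # probe possible L1 sharers
```

This is why an inclusive integration makes \sys\ insensitive to false positives on read misses: the directory is consulted only to find a modified owner, not to enumerate sharers.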


\begin{figure}[!h]
\centering
\hspace{-30pt}
\begin{minipage}{0.35\textwidth}
\centering
\includegraphics[width=1.1\textwidth]{figure/ref_inv_false.pdf}
\caption{False sharers on coherence write invalidations.}
\label{ref-inv}
\end{minipage}
\end{figure}

%of two figures.
\begin{figure*}[!ht]
\centering
\begin{minipage}{0.75\textwidth}
\centering
\includegraphics[width=1.1\textwidth]{figure/inv_update_flits.pdf}
\caption{Extra interconnect traffic. The four bars, from left to right,
         show traffic under the Every, Random, Count, and Sharer policies.}
\label{bandwidth}
\end{minipage}
\end{figure*}

Figure~\ref{ref-inv} demonstrates an interesting trend: the average
number of false-positive sharers is much smaller for invalidation
probes. Most \sys\ false sharers are introduced by probes on read
misses. If \sys\ were integrated with a non-inclusive cache, it would
need to satisfy both read misses and forwarded invalidations,
requiring 1024 entries in the pattern table. If \sys{} were integrated
with an inclusive shared L2 cache, we can eliminate all the false
positives due to cache misses and reduce the pattern table size by
4$\times$.



\subsection{Interconnect Traffic} 
In this section, we study the interconnect traffic of applications
under \sys{} and show that the \sys\ directory introduces a minimal
increase in on-chip traffic.

\sys\ increases traffic relative to a fully accurate directory
in two ways. The false positives per reference generate additional
messages that are on the critical path of invalidations and
lookups. In addition, ``pattern recalculation'' (presented in
Section~\ref{section:pattern}) multicasts messages to sharers at
eviction time. Figure~\ref{bandwidth} plots the increase in traffic
due to false positives and recalculation. In applications with few
sharing patterns, the traffic caused by both false positives and
recalculation is minimal; this is the case for Blackscholes,
Canneal, and all the scientific benchmarks except Barnes, where the
additional traffic is less than 2\%. Recalculation multicasts occur
only when there is a hint of pollution (i.e., the pattern table
indicates that more than one sharing pattern has been ORed into the
entry), so both types of traffic remain minimal. In applications
with many sharing patterns (i.e., Apache, JBB, Bodytrack), the traffic
overhead due to false positives is limited to 5\%, while traffic due
to multicasts increases by up to 15\%. The multicast traffic is
off the critical directory-lookup path, so its impact on
performance should be small compared to the traffic due to false
positives. Note that overall network utilization is moderate for most
applications, which allows the network to absorb the increase in
traffic.


The key to reducing this traffic is the frequency of pattern
recalculation. Recalculating on every eviction can be wasteful,
because the multiple hash functions already filter out some of the
false positives, making the recalculation traffic redundant in such
cases. Recalculating lazily and infrequently, on the other hand, leads
to a heavily polluted pattern table and introduces further conflicts. We
explore three simple techniques to reduce the recalculation traffic.
\textbf{Random} sends the recalculation message on every
third eviction. \textbf{Count} sends the recalculation
message only when the number of entries pointing to the pattern reaches
a threshold (48 in our experiments). \textbf{Sharer} sends the
message only when the pattern indicates more than 4
sharers.
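A minimal sketch of the three policies, using the thresholds stated above; the surrounding counter bookkeeping is our own illustration:

```python
# Illustrative throttling policies for pattern recalculation, with the
# thresholds from the text: every third eviction (Random), 48 entries
# pointing at the pattern (Count), and more than 4 sharers (Sharer).

def should_recalculate(policy, eviction_count, entries_pointing, num_sharers):
    if policy == "every":
        return True                        # recalculate on every eviction
    if policy == "random":
        return eviction_count % 3 == 0     # every third eviction
    if policy == "count":
        return entries_pointing >= 48      # pattern is heavily shared in table
    if policy == "sharer":
        return num_sharers > 4             # pattern already names many sharers
    raise ValueError(policy)
```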
Figure~\ref{bandwidth} evaluates the effect of the three methods on
the traffic caused by both recalculation and false positives.
In general, less frequent recalculation leads to more false
positives, and therefore a slight increase in false-positive traffic.
The simple Random method is very effective, reducing the recalculation
traffic to less than 7\% for all applications while adding less than
1\% traffic from false positives. The Count method does not perform
better than Random because the number of entries pointing to a pattern
does not reflect how frequently the pattern is referenced. The Sharer
method has the highest accuracy; however, its traffic reduction is
limited.



\subsection{Area, Energy, Delay}
This section reports the area, energy, and access time of the \sys\ directory.
We use CACTI 6.0~\cite{cacti6-micro-2007} to estimate delay and energy
for a 32nm process technology. The estimates for 16 cores
are shown in Table~\ref{cacti_dir}.
The additional cost of accessing the small pattern table adds little
to the access time and energy. An access completes within two CPU
cycles, and both access time and energy are significantly
better than alternative directory designs, including a FULL directory cache
and shadow tags.


The last column in Table~\ref{cacti_dir} shows the relative area of
the \sys\ directory, which includes both the buckets of pointers and
the pattern table. On top of the tagless directory, \sys\ further
compresses the directory by 25\% to 42\% at 16 cores. This translates
to 28\% to 37\% of the area of a FULL directory cache.
Leakage power is proportional to the size of the memory structures;
we estimate a 74\% reduction in leakage power for \sys\ with 512
entries compared to a FULL directory cache.


\begin{table}[!ht]
\def\post{\rule{0pt}{7pt}}
\centering
{
\footnotesize
\caption{\bf\footnotesize CACTI estimates for various directory settings.
         (The access time and read energy for \sys\  include access of the
          pointer in SPACE buckets and the pattern table entry.)
        }
\centering
\begin{tabular}{|l|r|r|r|}\hline
Configuration &Access&Read & Storage Relative\\
              &Time(ns)& Energy(fJ) & to Tagless \\
\hline
FULL dir cache &  0.55  &  16812 & 2.03$\times$ \\
Shadow tags & 0.92 & 67548 & 1.53$\times$\\
Tagless-lookup &  0.27  &  4104 & 1$\times$\\
\sys{}-512 &  0.40  & 4299 & 0.58$\times$ \\
\sys{}-1024 &  0.41 & 4394 & 0.66$\times$\\
\sys{}-2048 & 0.43 & 4486 & 0.75$\times$\\

\hline
\end{tabular}
\label{cacti_dir}
}
\end{table}
\vspace{-10pt}
\subsection{Scalability}
The performance of the \sys\ directory depends directly on the number
of sharing patterns present in the cache. This is mainly influenced by
the application's characteristics, the parallelization strategy, and
programming patterns. However, in most architectures the cache
block size and cache size also have a key influence on the observed
sharing patterns, since they affect properties like false sharing and
the working set held in the cache. Figure~\ref{L1size} shows the
influence of the L1 cache parameters on false positives. As the L1
cache size increases, the average number of false positives grows
because more sharing patterns appear; however, the increase is minor
once the working-set size is reached. The influence of larger cache
lines is mixed, because false sharing can either increase or decrease
the number of sharing patterns: false positives increase as the line
size grows from 32B to 64B, then decrease as it grows further to
128B. Characterizing the influence of false sharing on sharing
patterns is beyond the scope of this work.

\begin{figure}[!h]
\includegraphics[width=0.5\textwidth]{figure/L1size.pdf}
\caption{Average false positives under varying L1 cache settings. The group on the
         left keeps the cache-line size constant (64B) and varies the number of sets.
         The group on the right keeps the number of sets constant and varies the
         cache-line size.}
\label{L1size}
\end{figure}

To study the scalability of the \sys\ directory, we simulate three
multicore systems (8-core, 16-core, and 32-core CMPs). For each system,
we experiment with three \sys\ directory setups by varying the size of
the pattern table. Figure~\ref{8_16_32} shows that
\sys\ with a limited number of pattern entries consistently performs
similarly to FULL. The network traffic is within 5\% for \sys{}-128 at
8 cores, \sys{}-1024 at 16 cores, and \sys{}-4096 at 32 cores.
Interestingly, to achieve an effective directory, \sys\ appears to
need a pointer size of $K\log_2 P$ bits ($K = 2.4$ in our experiments).
On top of the tag compression by TAGLESS,
a directory of size $M \times P$ is thus further compressed to $M \times K\log_2 P$.
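This compression can be made concrete with a quick back-of-the-envelope calculation (our own arithmetic, using the fitted $K$ from above):

```latex
% Illustrative storage arithmetic: replacing each P-bit sharing vector by a
% K log2(P)-bit pattern pointer across M tracked entries gives
\[
  \frac{M \times P}{M \times K\log_2 P}
  = \frac{P}{K\log_2 P}
  = \frac{32}{2.4 \times 5} \approx 2.7
  \quad (P = 32,\ K = 2.4),
\]
% a reduction that widens with core count, since P / (K log2 P) grows with P.
```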

Figure~\ref{scale} projects the size of the directory for systems of up
to 512 cores. Compared to the tagless directory, \sys\ further
compresses the directory by 34\% at 16 cores and by 78\% at 64 cores.
We also show the size of the ideal directory for 8, 16, and 32 cores.
The ideal directory is a directory cache that magically holds only the
tags present in the L1 caches; it represents the minimum storage for an
accurate directory cache. Its size varies across applications and over
an execution, and we show the maximum size captured. \sys\ consumes
less area than the ideal directory cache, with minimal penalty.


\paragraph{Accelerator-based Manycore Architectures}
An important design decision in \sys\ is the size of the pattern table
(fixed at design time), which determines how many unique sharing
patterns can be supported simultaneously. In our experience, we
observed large variations between the different workload suites,
and in some cases outliers even within a suite (e.g., Barnes
in SPLASH2). In our current set of experiments, we assume
general-purpose multicores that can target any of these
workloads; hence the pattern table is sized to
support commercial applications like Apache and SPECjbb, which have
myriad read-sharing patterns. Unfortunately, this severely
over-provisions the pattern table for workloads such as SPLASH2. We
now consider accelerator-like manycore architectures that target only
data-parallel algorithms like SPLASH2. We found that a 32-entry
pattern table is sufficient for many SPLASH2 applications (other than
Barnes) to perform optimally at 16 cores. Assuming linear growth in
patterns with core count (a reasonable assumption for data-parallel
workloads), we require only a 2048-entry pattern table for
1024 cores. We believe a cache coherence directory for a
hypothetical 1024-core accelerator (64KB L1 per core) would
require only $\simeq$0.6MB, less than 1\% of the total L1 capacity.
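A rough sanity check on this figure (our own breakdown; the assumed 1024-bit pattern vectors and the split between the two structures are illustrative):

```latex
% At 1024 cores, a 2048-entry pattern table of 1024-bit sharing vectors costs
\[
  2048 \times 1024\ \mathrm{bits} = 2^{21}\ \mathrm{bits} = 256\ \mathrm{KB},
\]
% leaving roughly 0.35 MB of the quoted ~0.6 MB total for the tagless buckets,
% each of which holds an 11-bit pattern pointer (log2 of 2048 entries).
```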

\begin{figure}[!h]
\includegraphics[width=0.5\textwidth]{figure/traffic_8p16p32p.pdf}
\caption{Interconnect traffic for \sys\ normalized to a full-map
  in-cache directory. The stacked bars show
         the extra traffic caused by false positives and the extra traffic
         caused by pattern recalculation. The x-axis represents three
         multicore systems (8-core, 16-core, and 32-core); for each, we
         experiment with three \sys\ pattern table sizes. The second x-axis
         gives the number of pattern-pointer bits $b$; the pattern table has
         $2^{b}$ entries.}
\label{8_16_32}
\end{figure}


\begin{figure}[!h]
\includegraphics[width=0.5\textwidth]{figure/scale.pdf}
\caption{Storage requirements for the FULL directory cache, Tagless, \sys{}, and the
         ideal (all unique tags) directory cache. Each core has a 64KB private L1 cache.}
\label{scale}
\end{figure}
