\section{SPATL: \\ Hybrid Coherence Directory}
\label{sec-design}

\begin{figure}[!htb]
\subfigure[Maximum number of sharing patterns.]{
\includegraphics[width=0.4\textwidth]{figure/max_patterns.pdf}
}
\subfigure[Number of cache blocks for each of N sharers.]{
\includegraphics[width=0.4\textwidth]{figure/patterns_over_blocks.pdf}
}
\caption{(a) The maximum number of sharing patterns present for each
  application. (b) The distribution of cache blocks over the
  number of sharers, for Apache.
    For example, $\simeq$9,000 cache blocks have the
  private access pattern (only one processor accesses the block).}
\label{max-patterns}
\end{figure}



\begin{figure*}[!htb]
\centering
\hspace{-20pt}
\begin{minipage}{0.5\textwidth}
\centering
\includegraphics[width=1.2\textwidth]{figure/diagram/hybrid.pdf}
\caption{Left: tagless directory approach with one hash function.  Right:
hybrid tagless--pattern directory approach, in which each bucket includes a
pointer to its sharing pattern.}
\label{spatl-arch}
\end{minipage}
\hfill
\begin{minipage}{0.4\textwidth}
\centering
\includegraphics[width=1.3\textwidth]{figure/diagram/insert_SPATL.pdf}
\caption{Steps involved in inserting a new cache line.}
\label{insert-example}
\end{minipage}
\end{figure*}

\pagebreak
\subsection{Sharing Patterns in the Directory}

At PACT 2010, the SPACE~\cite{zhao-pact-2010} design was proposed 
as a promising technique that compresses directory space for inclusive 
cache designs. 
SPACE was based on observations of application semantics that 
showed the regular nature of inter-thread sharing, 
resulting in many cache blocks having the same or
similar sharing patterns. Thus, the in-cache directory has a lot of
redundancy and replicates the same pattern for many cache
blocks. SPACE decouples the sharing vectors from the L2 tag and
stores the unique sharing patterns in 
a pattern table; multiple cache lines with the same pattern would
point to a common entry in the pattern table. The sharing bit vector
per cache tag is replaced with a pointer whose size is proportional to
the number of unique sharing patterns.  Unfortunately, while this
provides better scalability than the base in-cache directory design
(reducing the directory overhead to $\simeq$40KB for the Niagara2),
lines not present in the L1s continue to bear the pointer overhead, which
limits the overall benefit.
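The indirection SPACE introduces can be sketched in a few lines (an illustrative Python model; the class and variable names are ours, not from the original design):

```python
# Simplified model of SPACE's indirection: each cache tag stores a
# small pointer into a table of unique P-bit sharing patterns instead
# of a full P-bit sharing vector.

class PatternTable:
    def __init__(self):
        self.patterns = []                  # unique sharing bit vectors

    def intern(self, pattern):
        """Return a pointer (index) for `pattern`, adding it if new."""
        if pattern in self.patterns:
            return self.patterns.index(pattern)
        self.patterns.append(pattern)
        return len(self.patterns) - 1

table = PatternTable()
# Three cache lines, two with the same pattern: only two table
# entries are allocated, and each line keeps only a pointer.
ptr_a = table.intern(0b0000000000000001)    # private to core 0
ptr_b = table.intern(0b0000000000000001)    # same pattern, same entry
ptr_c = table.intern(0b1111111111111111)    # shared by all 16 cores
```

Two lines with the same pattern end up pointing to a single table entry, which is the source of the compression.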

In this work, we extend the idea of eliminating sharing pattern 
redundancy to the tagless
buckets. Each bucket in the tagless directory represents a union of
the sharing patterns of cache blocks that hash to that bucket.  It is
likely that the same sharing pattern gets replicated across multiple
hash table buckets since the cache blocks that map to the bucket
happen to have the same sharing pattern.  Figure~\ref{max-patterns}a
shows the maximum number of patterns exhibited by an application
during its execution (system configuration described in
Table~\ref{parametersCore} in Section~\ref{section:eval}).  The
relatively small number of patterns present in the applications
compared to the total number of possible patterns suggests an
opportunity to design a directory that holds the present sharing patterns 
without assuming that each bucket demonstrates a unique pattern.
In the tagless directory, each bucket combines and holds the union of
sharing patterns of addresses that map to that bucket. This in some
cases causes an overall increase in the total number of patterns since
two addresses with different sharing patterns could map to the same
bucket (causing false positives).
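This union behavior can be modeled in a few lines of Python (an illustrative sketch; the modulo hash stands in for the real hash functions):

```python
# Two blocks with different sharing patterns that hash to the same
# tagless bucket force the bucket to hold the UNION of the patterns,
# introducing false-positive sharers.

N_BUCKETS = 64

def bucket_of(tag):
    return tag % N_BUCKETS            # stand-in for the real hash

buckets = [0] * N_BUCKETS             # one P-bit pattern per bucket

def record(tag, core):
    buckets[bucket_of(tag)] |= 1 << core

record(0x100, core=0)                 # block A, private to core 0
record(0x100 + N_BUCKETS, core=3)     # block B collides with A's bucket

# The shared bucket now reports cores 0 and 3 for BOTH blocks, even
# though each block is actually private to a single core.
union = buckets[bucket_of(0x100)]
```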


%An important metric of interest is the distribution of cache lines
%with the same sharing patterns.  If there exists good sharing pattern
%locality (many cache lines display the same sharing pattern), it would
%increase the effectiveness of a directory based on common sharing
%patterns.  A single entry can describe the sharing pattern of many
%cache lines. 

Figure~\ref{max-patterns}b shows the degree
of sharing for a snapshot of Apache with and without the
use of the tagless directory.
Each bar in the histogram gives the number of cache
lines whose sharing pattern has a given number of sharers.
Private cache lines
are the dominant sharing pattern for Apache, exhibited by over 75\%
of the cache lines. With the tagless directory, the percentage of
cache lines tagged as private drops to 40\%:
some cache lines with private patterns are tagged as 2-sharer or 3-sharer
because of conflicts in the tagless directory's Bloom filter buckets.


Based on our analysis, we observe that many cache blocks have a
common sharing pattern, and that the number of frequently referenced
patterns is small (as corroborated by the results
in~\cite{zhao-pact-2010}).
Correspondingly, many buckets in the tagless directory exhibit
the same sharing pattern.
We therefore propose to scale the tagless directory to
large numbers of cores by eliminating the
redundant copies of sharing patterns. The number of patterns that a
directory needs to support for a real application is 18$\times$
(Apache, SPECjbb), 126$\times$ (SPLASH2), and 26$\times$ (PARSEC)
smaller than the number of buckets in the tagless approach.
The compression of sharing patterns thus
complements the compression achieved by the tagless directory.



% [AS]
% Do we need to talk about access distribution. I think it could
%  confuse because here we have to then describe when the directory is
%  accessed. For inclusive cache on every coherence operation while
%  exclusive cache we access it for every miss. 




\subsection{SPATL Architecture}
In the conventional tagless directory, every bucket
in the Bloom filter specifies the sharing pattern for the blocks mapping
to that bucket. We propose to decouple the sharing pattern from each
bucket and hold the unique sharing patterns observed in the
tagless directory in a separate pattern directory table. This
eliminates the redundancy in the tagless directory, where the
same sharing pattern is replicated across different buckets.  With the
directory table storing the patterns, each bucket now includes a
pointer to an entry in the directory rather than the actual pattern
itself.


We organize the directory table as a two-dimensional structure with
$N_{Dir. ways}$ ways and $N_{Dir. sets}$ sets. Each bucket points to
exactly one entry in the directory table and multiple buckets pointing
to the same entry essentially map to the same sharing pattern. The
size of the pattern directory table is fixed (derived from the
application characteristics in Section~\ref{section:pattern}) and is
entirely on-chip. Hence, when the table capacity is exceeded, we have
a dynamic mechanism to collate patterns that are similar to each other
into a single entry.


In this section, we describe our directory implemented on a multicore
with 16 processors, with 64KB private, 4-way L1 caches per core, and a
tagless directory with 2 hash functions and 64 buckets per hash
function.  The conventional tagless directory design incurs an
overhead of $Hash_N \times N_{buckets} \times P$ bits $= (2 \times 64) \times 16 = 128 \times 16$ bits per
set of the L1 cache.  Figure~\ref{spatl-arch} illustrates the \sys\ approach. We
have a table with $N_{Dir. entries}$ ($ = N_{Dir. ways} * N_{Dir. sets}$)
entries, each entry corresponding to a sharing pattern, which is
represented by a P-bit vector.  For each bucket in the tagless
directory, we replace the sharing vector with a $\lceil
\log_{2}(N_{Dir. entries}) \rceil$-bit pointer to indicate the sharing
pattern.  Every time the sharer information is needed, the bucket is
first hashed into, and the associated pointer is used to index into
and get the appropriate bitmap entry in the directory table, which
represents the sharer bitmap for the cache tags that map to that
bucket.  The main area savings in \sys\ comes from the replacement
of the P-bit vector per bucket with a $\lceil \log_{2}(N_{Dir. entries})
\rceil$-bit pointer. The directory table itself is an $N_{Dir. entries} \times
P$ bit array; at a moderate number of cores, it does not constitute
the dominant overhead.
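These overheads can be checked with a short calculation. The following Python sketch uses the configuration from the text (16 cores, 2 hash functions, 64 buckets per hash function); the 256-entry pattern-table size is an assumed value for illustration, not a number from our evaluation:

```python
import math

P = 16                                    # cores (sharing-vector width)
n_hash = 2                                # hash functions
n_buckets = 64                            # buckets per hash function
buckets_per_set = n_hash * n_buckets      # 128 buckets per L1 set

# Conventional tagless directory: one P-bit vector per bucket.
tagless_bits_per_set = buckets_per_set * P            # 128 * 16

# SPATL: one small pointer per bucket plus a shared pattern table.
n_entries = 256                           # assumed pattern-table size
ptr_bits = math.ceil(math.log2(n_entries))            # 8-bit pointer
spatl_bits_per_set = buckets_per_set * ptr_bits       # 128 * 8
table_bits = n_entries * P                # shared across all sets
```

Under these assumptions the per-set state halves (2048 bits down to 1024), while the pattern table adds a single shared 4Kbit array.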

The next two sections describe how \sys\ inserts entries into the tagless 
buckets and directory table, how patterns are dynamically collated when there
aren't any free entries, and how sharing patterns are recalculated on 
cache block evictions.

\subsection{Cache Block Insertion} 

When a cache line is brought in and a sharing pattern changes (or
appears for the first time), the block needs to modify the sharing
pattern associated with its bucket in the tagless directory.
To achieve this, the set index of the cache line is used to
select the specific Bloom filter, and the tag is used to map to
the specific bucket.
%the set index of the cache line is used to
%index into the specific bloom filter. 
When a cache line is inserted into core $i$'s cache, it logically
sets Core $i$'s sharing bit in the bucket the line maps to.  This operation
is carried out in \sys\ as a sharing pattern change.  The current
sharing pattern pointed to by the pointer in the bucket is accessed,
and Core $i$'s bit is set in the pattern to form the new pattern.
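The insertion path can be sketched as follows (illustrative Python; `intern` stands in for the pattern-table insertion, and the single bucket models one tagless bucket):

```python
# A cache-line insertion in SPATL is handled as a sharing-pattern
# CHANGE: read the pattern the bucket points to, set the inserting
# core's bit, and repoint the bucket at the (possibly new) pattern.

patterns = [0b0000]                 # entry 0: the NULL (all-zeros) pattern
bucket_ptr = 0                      # bucket initially points to NULL

def intern(pattern):
    """Return the index of `pattern` in the table, adding it if new."""
    if pattern in patterns:
        return patterns.index(pattern)
    patterns.append(pattern)
    return len(patterns) - 1

def insert_line(core):
    global bucket_ptr
    old = patterns[bucket_ptr]      # do NOT modify in place: other
    new = old | (1 << core)         # buckets may share this entry
    bucket_ptr = intern(new)

insert_line(core=2)                 # bucket's pattern becomes 0100
insert_line(core=0)                 # bucket's pattern becomes 0101
```

Note that the old entry is left untouched; the bucket's pointer simply swings to the entry holding the new pattern.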

%ZHZ reorganzie a bit
%To enable this
%operation, it involves accessing the current sharing pattern pointed
%to by the bucket corresponding to the address and setting Core i's bit
%in the pattern.  When inserting a new cache tag into a cache set S in
%Core i, it essentially recalculates the bloom filter for set S in Core
%i.  

%Core i's bloom filter for a set S, can be obtained by collating
%the bit corresponding to Core i in each of the sharing vectors pointed
%to by each bucket.  

%ZHZ redundant as previous para
%Inserting a new address in the filter essentially involves setting the
%bit corresponding to Core i's in the pattern pointed to by the bucket
%into which the cache tag maps to.
%into which cache tag in the set maps to. We treat the setting of Core
%i's bit in the buckets as a sharing pattern change.

% Note that, we don't
%directly modify the sharing vector in the pattern table, as there
%ould be other buckets pointing to the same pattern. Instead we treat
%this is as a change in the sharing pattern of the addresses' bucket
%and find a place to re-insert it in the pattern directory
%table. Initially, all buckets point to the NULL sharing pattern (all
%0s).

The newly generated pattern needs to be inserted into the directory table.
The
pattern table is organized as a two-dimensional table with $N_{rows}$ rows and
$N_{cols}$ columns.  The incoming sharing pattern first hashes into a
particular row and is then compared against the patterns that already
exist in that row (Figure~\ref{spatl-arch}).  Once a free entry is found in the
directory table, the bucket uses the row index and column location
to access the specific entry.  Intuitively, the hash function
that calculates the row index in the pattern table has to be unbiased
so as not to increase pollution in any given row.  We also require
that similar patterns map to the same row, to enable useful
collation of sharing patterns that differ in only a few bits when the
protocol runs out of free directory entries.

To satisfy these two seemingly contradictory goals, we use a simple
hash function to calculate the row index into the pattern table: a
coarse bit-vector representation of the original sharing pattern.
For example, in a pattern directory with 16 rows, we could
use a coarse-grain four-bit encoding indicating
which of the four possible core clusters is caching
the data. This ensures that patterns mapping to the same row
differ only in topologically adjacent bits, enabling intelligent
collation of patterns when there are no free entries available, i.e.,
without excessive extra traffic due to false sharers, since that
traffic is limited to neighbors or a specific set of
sharers. Since private and globally-shared (all processors cache a
copy) patterns appear to be common across all the
applications, \sys\ dedicates explicit directory indices to these
$P+1$ patterns (where $P$ is the number of processors).
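A minimal sketch of such a coarse row hash for 16 cores and 16 pattern-table rows (our own illustrative code, not the hardware logic):

```python
# The row index into the pattern table is a coarse version of the
# sharing pattern: with 16 cores in four 4-core clusters, each of the
# four index bits records whether ANY core in that cluster is a
# sharer.  Patterns mapping to the same row therefore differ only
# within clusters, which keeps forced merges topologically local.

def row_index(pattern, n_cores=16, cluster=4):
    idx = 0
    for c in range(n_cores // cluster):
        mask = ((1 << cluster) - 1) << (c * cluster)
        if pattern & mask:          # any sharer in cluster c?
            idx |= 1 << c
    return idx

# Two private patterns within the same cluster hash to the same row,
# so a forced merge between them adds only nearby false sharers.
same_row = row_index(1 << 0) == row_index(1 << 3)
```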

\paragraph{Eviction of cache blocks} 

When a cache block is evicted from a core $i$, the bloom filter must 
be modified accordingly. Merely accessing the bucket 
to which the block hashes and resetting the bit corresponding to 
core $i$ in the sharing pattern specified by the bucket does not suffice, 
since other blocks in the same core's cache set may map to the same bit. 
Instead, the tagless directory
will recalculate the $i$th bit (associated with core $i$) 
of the bloom filter buckets by rehashing all tags in the set to detect 
collisions.  

In \sys, we cannot simply recalculate and reset (if necessary)
core $i$'s bit in the sharing
pattern pointed to by the bucket, since other buckets could be pointing
to this same sharing pattern. Instead, we treat such recalculations of
the Bloom filter as sharing pattern changes. When core $i$'s
bit needs to be reset in a bucket, we first access the sharing pattern
pointed to by the bucket. Following this, we reset core $i$'s bit and
reinsert the result into the pattern table as a new sharing pattern.
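The hazard that forces this recalculation can be seen in a small sketch (illustrative Python; the modulo hash stands in for the real one):

```python
# On an eviction from core i we cannot blindly clear the bucket's bit:
# another tag in the same L1 set may hash to the same bucket.  The bit
# is instead recomputed from the tags that remain in the set.

N_BUCKETS = 8

def bucket_of(tag):
    return tag % N_BUCKETS           # stand-in for the real hash

def core_summary(tags):
    """Recompute one core's Bloom summary for the tags of one L1 set."""
    bits = [0] * N_BUCKETS
    for t in tags:
        bits[bucket_of(t)] = 1
    return bits

before = core_summary([3, 11])   # tags 3 and 11 share bucket 3
after  = core_summary([11])      # evict tag 3: bucket 3 must STAY set
```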
 
\pagebreak
\paragraph{Illustration: Cache line insertion and eviction}

Figure~\ref{insert-example} illustrates the steps involved in
inserting a cache line in \sys{}.  Set S currently holds cache lines
$X_1$,$X_2$, ...$X_N$ in its N ways, and we would like to insert cache
line Y, displacing cache line $X_N$.  Not all buckets are affected by
this change, only the buckets that $X_N$ and $Y$ hash
into. Hence, in \spot{1} the L1 cache at core P calculates the
current Bloom summary, the new Bloom summary with Y inserted in place
of $X_N$, and the difference between the two summaries. The
difference includes at most two buckets.  If Y is the first address
in the set to hash into a bucket from the set S, then Core P's bit in
that bucket needs to be set (indicated by Set-Bucket in Figure~\ref{insert-example}). If no
other address hashes into the same bucket as $X_N$, then Core P's bit
needs to be reset in that bucket to prune out false positives
(indicated by Reset-Bucket). In \spot{2}, the tuple consisting of (Core id (P), Set
id (S), Set-Bucket, Reset-Bucket) is sent to the tagless directory. In
\spot{3} the tagless directory reads the pattern pointed to by the
Reset-Bucket, resets Core P's bit, inserts the new pattern into the
pattern table, and swings Reset-Bucket's pointer to the new
pattern. We do not reset Core P's bit directly in the pattern table
because other buckets could be pointing to the same
pattern. In \spot{4} the tagless directory reads the pattern
pointed to by the Set-Bucket, sets Core P's bit, inserts the new
pattern into the pattern table, and swings Set-Bucket's pointer to
the new pattern. We do not set Core P's bit directly in the
pattern table because doing so could induce an extra false sharer for
buckets already pointing to that entry.



\subsection{Merging Patterns} 

A key challenge of a fixed-size superset representation is
combining patterns from different cache blocks. In the hybrid
approach, which combines tagless and the pattern directory, sharing
patterns are merged at two different levels. At the first
level, the tagless directory associates a single sharing
pattern vector with each bucket.  When cache blocks with different
sharing patterns hash into the same bucket, the tagless directory
must store the union of the sharing patterns of each cache
block. Essentially this arises from the false positives introduced
by Bloom filters.


The other form of merging occurs when there are more sharing
patterns in the system than the pattern directory can
support. Figure~\ref{insert-pattern} illustrates the process of
inserting a pattern into the pattern table.  When inserting a pattern
in the directory, we index into the pattern table and search for a
matching entry. If there are no free entries that can be allocated
from the set, the incoming pattern is combined with some existing
pattern. Note that this merging does introduce extra false positives
for buckets that already point to that entry. The pattern directory
therefore tries to minimize the pollution of existing entries:
the incoming pattern merges with the sharing pattern that is closest in
terms of Hamming distance (the number of sharers in which they
differ). This ensures that the extra false sharers caused by merging
the incoming pattern are kept to a minimum. Existing tagless
directory buckets that point to the merged sharing pattern will experience
new false positives, but by ensuring that the patterns which merge are
similar to each other, we limit the number of false sharers as well
as the potential extra traffic caused by the existence of the false sharers.
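A minimal sketch of the minimum-Hamming-distance merge (our own illustrative code, not the hardware implementation):

```python
# When the target row of the pattern table has no free entry, the
# incoming pattern is ORed into the existing entry at minimum Hamming
# distance, bounding the number of new false sharers introduced.

def hamming(a, b):
    """Number of sharer bits in which two patterns differ."""
    return bin(a ^ b).count("1")

def merge_into_row(row, incoming):
    """Merge `incoming` into the closest existing pattern in `row`."""
    victim = min(range(len(row)), key=lambda i: hamming(row[i], incoming))
    row[victim] |= incoming     # union adds at most hamming() sharers
    return victim

row = [0b1100, 0b0011]          # a full pattern-table row
i = merge_into_row(row, 0b0111) # closest entry is 0b0011 (distance 1)
```

Merging with the closest entry here adds exactly one false sharer; merging with the other entry would have added three.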

\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.4\textwidth]{figure/diagram/merge.pdf}
\caption{\bf\footnotesize Inserting and Merging a pattern into the
  pattern table.}
\label{insert-pattern}
\end{center}
\end{figure}


\paragraph{Removal of sharing patterns}
%%ZHZ should it be a subsection?
The last challenge is ensuring that
entries in the directory become reusable once no bucket points to
the sharing pattern they hold. We use a simple reference-counting
scheme to detect when an entry becomes stale. A counter is associated with each
entry in the directory; it is incremented when a new bucket
starts pointing to the entry and decremented when a bucket pointing
to the entry changes its pointer.
%(bloom filter bucket reset on cache
%line eviction evicted or false positives in the bucket change the
%sharing pattern). 
The entry is reclaimed when the counter reaches
zero.
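The reference-counting scheme can be sketched as follows (illustrative Python; names are ours):

```python
# Each pattern-table entry carries a count of the buckets pointing at
# it; the entry is reclaimed for reuse when the count reaches zero.

class Entry:
    def __init__(self, pattern):
        self.pattern = pattern
        self.refcount = 0

def repoint(bucket, entries, new_idx):
    """Swing a bucket's pointer while maintaining reference counts."""
    old_idx = bucket["ptr"]
    if old_idx is not None:
        entries[old_idx].refcount -= 1
        if entries[old_idx].refcount == 0:
            entries[old_idx] = None      # stale entry is reclaimed
    entries[new_idx].refcount += 1
    bucket["ptr"] = new_idx

entries = [Entry(0b0001), Entry(0b1111)]
b = {"ptr": None}
repoint(b, entries, 0)     # bucket adopts entry 0: refcount 0 -> 1
repoint(b, entries, 1)     # entry 0's count hits 0 and is reclaimed
```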


\pagebreak
\subsection{Directory Accesses} 

An interesting challenge that \sys\ introduces is that it is possible
for the directory to provide an inaccurate list of sharers to the
coherence protocols. Coherence protocols use the sharers list in
multiple ways. On a write access, the sharing pattern is used to
forward invalidations and obtain the latest version of a cache block
if any of the processors is holding a modified version. In such cases,
we adopt a parallel multicast approach in which the pattern directory
pings all possible sharers indicated by the sharing pattern. Cores
respond based on their state: whether they hold a modified copy,
have simply read the block, or do not cache it at all.
%Read operations
%are handled similarly, although if any of the L1s is caching a copy
%it automatically supplies the data. 

Whether the shared cache is inclusive or exclusive
determines whether the information in the directory is needed to
retrieve data on read misses. Consider an inclusive cache in our baseline
system with private caches and shared L2. With an inclusive L2 cache,
the shared L2 has a copy of each L1 cache block. In case of a read miss,
the L2 can directly source the data and save the effort of forwarding
messages to one of the L1 sharers. We only need to add information in
the coherence directory about the new sharer. The directory
information is needed only for invalidation on write misses.  With a
non-inclusive (or exclusive) shared L2, on a read miss that doesn't
find the block at the L2 level, we can't separate the condition when
the block doesn't reside at all on-chip from when the block is
cached by one of the L1s without examining the directory. 
We have no choice but to check the directory
and ping each of the sharers to see if they have a copy. False sharers in
the directory therefore hurt read-miss performance, and the directory design
has to be comparatively more robust than for inclusive caches.

\subsection{Challenge : Two-Level Conflicts}
\label{section:pattern}
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.4\textwidth]{figure/diagram/pollution.pdf}
\caption{\bf\footnotesize Two levels of false positives.  Dashed lines
  indicate operations.}
\label{pollution-pattern}
\end{center}
\end{figure}


The base \sys\ design without optimization exhibits a much higher
false-positive rate than the tagless design. The reason for
the increase in false positives is the double conflicts in the tagless
buckets and the pattern table. As we can see from
Figure~\ref{max-patterns}b, the tagless table in general introduces new
sharing patterns because of conflicts at the tagless buckets.  The
pattern table introduces further false positives after merging patterns. The
conflicts themselves are not a problem if the original patterns can be
recovered when a cache line is evicted, as in the tagless design.
Unfortunately, the base hybrid design loses this recovery ability,
since the pattern table creates new sharing patterns by
ORing with other, unknown patterns.
 
To illustrate the problem, consider the example with 4 processors
shown in Figure~\ref{pollution-pattern} (bit vectors are written with
Core 0 as the leftmost bit).  Cache line A has the private
pattern 1000, while cache line B has the private pattern 0001.  A and B
map to the same bucket in the tagless directory. This causes the first
level of false positives: the bucket creates the pattern 1001 and
inserts it into the pattern table (\spot{1}). In the pattern table,
the pattern gets merged into an existing entry, producing pattern
1101. Pattern 1101 is now stored in the pattern table, and the bucket
stores a pointer to that pattern (\spot{2}).
 
Now consider what happens when Core 0 evicts cache line A.  In the \sys\ design, on
a cache line eviction, we read the pattern table entry (1101) and
reset Core 0's bit, which leaves us with 0101. The false-positive sharer
Core 1, introduced by merging patterns in the pattern table, cannot be cleared,
since we do not know whether Core 1's bit was set as pollution in
the tagless buckets or in the pattern table. With private patterns being the
common patterns, this situation occurs frequently, leading to
pattern table pollution; soon the pattern table runs out of
free entries, leading to yet more pollution.
In the tagless design, by contrast, the signature is
recalculated on eviction, and the pattern naturally returns to 0001,
the accurate pattern.
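The example can be replayed in a few lines of Python (an illustrative sketch of our own; bit vectors are written with Core 0 as the leftmost bit, matching the eviction discussion):

```python
# Worked version of the two-level-conflict example with 4 cores:
# merging in the pattern table makes a false sharer unrecoverable by
# local bit clearing, while pattern recalculation restores precision.

def bit(core, P=4):
    """Sharing-vector bit for `core`, with Core 0 as the leftmost bit."""
    return 1 << (P - 1 - core)

A = bit(0)                    # line A private to Core 0: 1000
B = bit(3)                    # line B private to Core 3: 0001

bucket = A | B                # level-1 conflict in the bucket: 1001
stored = bucket | bit(1)      # level-2 merge in the pattern table:
                              # 1101, with Core 1 a false sharer

# Evicting A and clearing only Core 0's bit leaves pollution (0101):
local_clear = stored & ~bit(0)

# Pattern recalculation: the remaining sharers rehash their sets, so
# only B's true owner (Core 3) reports the line -> precise 0001.
recalculated = B
```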

To clean up polluted entries, we use \textit{pattern recalculation}
messages at cache-eviction time. On an eviction, we look up
the pattern table and multicast a \textit{pattern
recalculation} message to the other sharing processors (in this case, when
Core 0 evicts A, the tagless directory multicasts messages to Core 1
and Core 3, as indicated by the pattern). Each processor
recalculates its signature for the set and sends back the
information. We can then reconstruct the precise pattern for
the set and place it in the pattern table; in this example,
the pattern is precisely recalculated as 0001.
The recalculation increases the number of messages in the system, as shown
in Figure~\ref{bandwidth}.  However, the messages are not on the
critical path because they are incurred only on cache evictions.
We investigated a
few simple optimizations to address this increase in traffic.
Instead of recalculating the pattern on
every eviction, we use simple decision logic to decide when to
recalculate based on the importance of the cache line. We assess
whether a pattern recalculation is needed using
information such as the number of sharers in the pattern and the
number of buckets pointing to the pattern. All this information
already exists in the base design, and we demonstrate that employing
such optimizations minimizes the bandwidth cost of \textit{pattern
recalculation} messages.


%We do measure the extra bandwidth cost (see
%Figure~\ref{bandwidth}) and evaluates optimizations over the simple approach
%of multicast on every eviction.
 


