\section{Background}
\label{section:background}
In a multicore chip like the one shown in Figure~\ref{baseline-arch}, 
there are private caches associated with each core (or set of cores).
In our baseline design, we also have a shared L2 cache that is tiled across the
various cores. 
While conceptually a centralized structure, the
directory is distributed across the various tiles. 
Each cache block is
assigned a home tile and the directory associated with the home tile
is assigned the task of providing sharer information for cache blocks
that map to that tile.
For maximum precision, the coherence directory must maintain sharing 
information for each unique tag in the private caches. 

\begin{figure}
\includegraphics[width=0.4\textwidth]{figure/diagram/tile.pdf}
\caption{Tiled 16-processor multicore with the coherence directory
  distributed across the tiles.}
\label{baseline-arch}
\end{figure}


Designs that use an inclusive shared L2 cache piggyback on the L2 tags
to implement the tags required by the directory.  This requires the
addition of a $P$-bit sharing vector ($P$: number of cores) per L2 tag.
Unfortunately, since shared caches are many times larger
than private caches, many entries contain no sharing information. For
example, if the Niagara2 (8 cores, 8KB L1/core, 4MB shared L2) were to
implement an in-cache directory, it would consume 64KB of space, which
is 100\% of the cumulative size of the L1 caches across all 8
cores.
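
The 100\% figure follows directly from the cache geometry. The following is a minimal sanity check of that arithmetic, assuming 64-byte cache lines (an assumption; the line size is not stated above):

```python
# Back-of-the-envelope check of the in-cache directory overhead for a
# Niagara2-like design. LINE_BYTES is an assumed parameter.
CORES = 8
L1_BYTES = 8 * 1024           # 8KB private L1 per core
L2_BYTES = 4 * 1024 * 1024    # 4MB shared L2
LINE_BYTES = 64               # assumed cache-line size

l2_lines = L2_BYTES // LINE_BYTES     # one directory entry per L2 tag
directory_bits = l2_lines * CORES     # P-bit sharing vector per entry
directory_bytes = directory_bits // 8

aggregate_l1_bytes = CORES * L1_BYTES
print(directory_bytes // 1024)                       # 64 (KB)
print(100 * directory_bytes // aggregate_l1_bytes)   # 100 (%)
```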

An alternative to piggybacking on the L2 tags is to use a directory
cache that maintains information only for lines present in the L1s.
Since each cache line in each core could be unique, a directory cache
that guarantees no loss of information would need as many entries as
the total number of cache lines across all the L1s, with an
associativity at least equal to the aggregate associativity of the L1s
(i.e., even on the 8-core Niagara2, we would need a 32-way directory
cache). Practical directory cache designs have much lower
associativity and pay the penalty of associativity-related eviction of
directory information for some blocks. While there have been recent
proposals~\cite{cuckoo-dir} to use sophisticated hash functions to
eliminate associativity conflicts, optimizing the directory cache
organization remains a hard problem.
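
The lossless sizing requirement can be made concrete with a short calculation. The 16-byte line size and 4-way L1 associativity below are assumptions chosen to be consistent with the 32-way figure quoted above:

```python
# Sizing a lossless directory cache for an 8-core Niagara2-like design.
# LINE_BYTES and L1_WAYS are assumptions consistent with the 32-way
# aggregate-associativity figure in the text.
CORES = 8
L1_BYTES = 8 * 1024
LINE_BYTES = 16
L1_WAYS = 4

l1_lines_per_core = L1_BYTES // LINE_BYTES
entries_needed = CORES * l1_lines_per_core   # one entry per resident L1 line
ways_needed = CORES * L1_WAYS                # aggregate associativity

print(entries_needed, ways_needed)  # 4096 entries, 32 ways
```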


Many current multicore chips (e.g., Niagara2) use a simplified form of
directory cache consisting of replicas of the L1 tag arrays (i.e.,
they maintain shadow tags).
An associative search of the shadow tags generates the sharer vector
on the fly. Although shadow tags achieve good compression by maintaining
information only for lines present in the private caches, the associative
search used to generate the sharer vector imposes a significant energy
penalty.

Recently,
the Tagless coherence directory~\cite{zebchuk-micro-2009} was proposed
to eliminate this associative lookup. Instead of
representing each tag exactly, a Bloom filter concisely
summarizes the contents of each set in every L1 cache. Overall, only
$N_{L1\,sets} \times P$ Bloom filters (32--64 bits per filter) are
needed to represent the information in all the L1 caches.  The
per-L1 probing required by shadow tags is replaced with a simple
read of a Bloom filter, which eliminates the complex
associative search entirely. Unfortunately, for large
multicores the cost of the Bloom filters grows proportionally with the
core count (as does the sharing pattern vector) and constitutes a
significant overhead. For example, an 8-core Niagara2 would require 3KB
(per hash function), but extrapolating to 1024 cores would
require 3MB, which imposes a significant area and energy penalty on
every sharing pattern access. We briefly describe the overall
architecture of Tagless below and highlight its challenges.


\subsection{Tagless Coherence Directory}
The Tagless coherence directory uses a set of Bloom filters to summarize
the contents of the private caches. Figure~\ref{tagless-arch} shows the
Bloom filter associated with each set of a private L1 cache.  Essentially,
the Tagless directory consists of $N_{L1\,sets} \times P$ Bloom filters
($N_{L1\,sets}$: number of sets in the L1 cache; $P$: number of
cores). Each per-set Bloom filter is a partitioned design consisting
of $hash_N$ hash functions, each of which maps into its own $k$-bucket
($k$-bit) filter. If the
size of the Bloom filter is comparable to a cache tag, this
essentially improves the space over shadow tags by a factor of
$\frac{N_{L1\,ways}}{hash_N}$.



The Tagless directory uses this representation to simplify the insertion
and removal of cache tags from the Bloom filter. Each Bloom filter
summarizes the cache tags in a single cache set.  Inserting a cache
block's address requires hashing the address and setting the
corresponding bucket (note that, for each hash function, an address
maps to exactly one bucket). Testing for set membership consists of
reading the bucket corresponding to the cache tag in the set-specific
Bloom filter of each processor and collating the results to construct
the sharing pattern (in Figure~\ref{tagless-arch}, each bucket
represents a sharing pattern). Having a Bloom filter per set also
enables the Tagless directory to recalculate the filter directly on
cache evictions.  While conceptually the Tagless directory consists of
$N_{L1\,sets} \times P$ Bloom filters, these filters can be combined
since every core uses the same Bloom filter organization. A given
cache block address maps to a unique set and a unique bucket in the
Bloom filter. Combining the buckets from all the Bloom filters yields
a $P$-bit sharing pattern, similar to the sharing pattern in a
conventional full-map directory.
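
The insert and membership-test operations described above can be sketched as follows. This is a software model under assumed parameters, not the Tagless hardware: the hash functions, filter sizes, and set-index mapping are all illustrative choices.

```python
# Sketch of per-set partitioned Bloom filters and sharing-pattern lookup.
# Parameters and hash functions are illustrative assumptions.
HASH_N, K, N_SETS, P = 2, 16, 128, 8

# filters[core][set][h] is a K-bit bitmap, one per hash function
filters = [[[0] * HASH_N for _ in range(N_SETS)] for _ in range(P)]

def bucket(addr, h):
    # each (address, hash function) pair maps to exactly one bucket
    return hash((addr, h)) % K

def set_index(addr):
    return addr % N_SETS

def insert(core, addr):
    # set the address's bucket in each hash function's bitmap
    s = set_index(addr)
    for h in range(HASH_N):
        filters[core][s][h] |= 1 << bucket(addr, h)

def sharers(addr):
    # collate one bit per core; AND across hash functions
    # to filter out many false positives
    s = set_index(addr)
    pattern = 0
    for core in range(P):
        hit = all((filters[core][s][h] >> bucket(addr, h)) & 1
                  for h in range(HASH_N))
        pattern |= hit << core
    return pattern

insert(0, 0x1000)
insert(3, 0x1000)
print(bin(sharers(0x1000)))  # 0b1001: cores 0 and 3
```

Reading `sharers` touches only one bucket per hash function per core, which is the simple indexed read that replaces the associative shadow-tag search.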

\begin{figure}
\includegraphics[width=0.4\textwidth]{figure/diagram/tagless.pdf}
\caption{Tagless Coherence Directory~\cite{zebchuk-micro-2009}.}
\label{tagless-arch}
\end{figure}

Multiple addresses can hash to the same bucket and hence
introduce false positives.  Using multiple hash functions allows such
addresses to map to different buckets and thereby eliminates many false
positives: simply ANDing the sharing vectors from the buckets that an
address maps to under each hash function filters out most of
them. Consider an implementation with $hash_N$ hash functions, $k$
buckets per hash function, $N_{sets}$ L1 cache sets, and $P$
cores. The Tagless directory requires a $P$-bit pattern for each of the
$k$ buckets, giving rise to an overhead of
$hash_N \times k \times P \times N_{sets}$ bits.
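
Plugging concrete numbers into this formula reproduces the figures quoted earlier. The parameter values below ($N_{sets}=128$, $k=24$, and the $8\times$ bucket growth at 1024 cores) are assumptions chosen to match those figures, not values stated in the Tagless paper:

```python
# Tagless storage per the formula hash_N * k * P * N_sets (bits).
# Parameter values are illustrative assumptions.
def tagless_bits(hash_n, k, p, n_sets):
    return hash_n * k * p * n_sets

# Niagara2-like: 8 cores, 128 L1 sets, k = 24 buckets, one hash function
per_hash_bits = tagless_bits(1, 24, 8, 128)
print(per_hash_bits // 8 // 1024)  # 3 (KB per hash function)

# 1024 cores, with k grown 8x (assumed, to hold the false-positive rate)
big_bits = tagless_bits(1, 192, 1024, 128)
print(big_bits // 8 // 1024 // 1024)  # 3 (MB per hash function)
```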

\paragraph{Scalability Challenges}

For large multicore chips (256+ cores), the storage overhead of the
Tagless directory is dominated by $P$, which challenges scalability
with increasing core counts.
Furthermore, reading a large $P$-bit-wide vector from the coherence
directory is not energy efficient.
Figure~\ref{tagless-scale}a shows the per-core area of the Tagless
directory as the number of cores increases.
Since the number of addresses mapped to a Bloom filter grows
with the number of cores, the probability of false positives increases
for a fixed Bloom filter size.
We therefore increase the number of buckets
per Bloom filter so as to maintain the same false positive rate
as our baseline design.
If we project a Niagara2-style design to 256--2048 cores, the Tagless
directory adds significant
overhead. At 2048 cores, the total directory overhead is 16MB, which
is 100\% overhead since the aggregate size of all the L1s in this
system is 16MB. We assume that the directory is uniformly distributed
among all the cores; the per-core overhead therefore grows more
gradually, from 2KB at 256 cores to 8KB at 2048
cores. Figure~\ref{tagless-scale}b plots the energy overhead of reading
from a directory tile.  The size of the sharing pattern block read
varies linearly with the number of cores. We see a significant increase
in the read energy, from 5pJ at 256 cores to 12pJ at 2048 cores.
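
The 2048-core numbers are internally consistent, as a quick arithmetic check shows (the 16MB total and 8KB-per-core L1 size are taken from the text above):

```python
# Arithmetic check of the 2048-core figures quoted above.
CORES = 2048
L1_KB = 8                        # 8KB private L1 per core
directory_total_kb = 16 * 1024   # 16MB total Tagless directory

aggregate_l1_kb = CORES * L1_KB
print(directory_total_kb // CORES)                     # 8 (KB per core)
print(100 * directory_total_kb // aggregate_l1_kb)     # 100 (%)
```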


\begin{figure}
\includegraphics[width=0.22\textwidth]{figure/tagless-scale-a.pdf}
\hfill
\includegraphics[width=0.22\textwidth]{figure/tagless-scale-b.pdf}
\caption{Left (a): Storage overhead of Tagless directory per core; X
  axis: \# of cores (Bloom Filter size); Y axis: KB of coherence
  directory per core.  Right (b): Access energy of Tagless directory tile
  per core; X axis: (\# of cores); Y axis: pJ.}
\label{tagless-scale}
\end{figure}

