\section{Introduction}

To utilize the growing on-chip real estate, designers are
increasingly turning toward larger numbers of independent compute
engines, or cores, whether homogeneous or heterogeneous.
To provide fast data access, data is replicated (cached) in core-local
storage to exploit locality.
Further, to ease communication among the cores, the multiple copies
of data that may result are often kept coherent in hardware.
Larger core counts require more bandwidth both for data access and
for keeping the caches coherent.
Cache coherence must track information about the various copies of
cached blocks in order to keep them consistent with each other.
A directory is typically used to provide precise information on the presence 
of replicas so as to minimize coherence communication. 


A typical directory-based coherence protocol~\cite{Censier_dir}
maintains a bit vector (the sharing pattern) per coherence unit,
representing the processors that currently share the corresponding
memory location, resulting in space overhead proportional to both the
number of cores and the size of the shared level of memory.  By
limiting communication to a multicast among the actual sharers rather
than a broadcast, a directory-based protocol's bandwidth requirement
scales better than that of typical snoop-based protocols.
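The per-unit bit vector can be sketched as follows (a hypothetical
illustration, not any specific machine's implementation; the class and
method names are ours):

```python
class FullMapDirectory:
    """One sharer bit per core, per coherence unit (full-map directory)."""

    def __init__(self, num_cores):
        self.num_cores = num_cores
        self.vectors = {}  # block address -> sharer bit vector

    def add_sharer(self, addr, core):
        # Set the bit for `core`; storage grows with cores x blocks.
        self.vectors[addr] = self.vectors.get(addr, 0) | (1 << core)

    def sharers_of(self, addr):
        """Cores to multicast to, instead of broadcasting to all cores."""
        vec = self.vectors.get(addr, 0)
        return [c for c in range(self.num_cores) if vec & (1 << c)]

d = FullMapDirectory(16)
d.add_sharer(0x1000, 2)
d.add_sharer(0x1000, 5)
print(d.sharers_of(0x1000))  # -> [2, 5]
```

An invalidation for block \texttt{0x1000} would thus be sent only to
cores 2 and 5; the cost is one bit per core for every tracked block.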



Several optimizations to reduce the area overhead of the directory
have been proposed.  For example, a directory
cache~\cite{tag_dir,acacio_level} stores sharing information for only a
subset of the lines in the shared memory.  A compressed sharer
vector~\cite{Gupta90reducingmemory,simoni_thesis,Choi99segmentdirectory}
uses fewer bits to represent sharer information, losing some precision
in determining the exact sharers; such techniques can also represent
only a limited number of sharing patterns and suffer sharp performance
losses for specific types of sharing patterns.
Pointers~\cite{sgi_origin,limited-pointer-dir} provide precise sharing
information for a limited number of sharers of each cache line, but
require extra hardware and software mechanisms when the number of
sharers exceeds the number of hardware-provided pointers.


Alternatively, shadow tags are used, for example, in Niagara2~\cite{niagara2}:
the tags of the lower-level caches are replicated at the shared level,
and an associative search of the shadow tags generates the sharer vector
on the fly. Although shadow tags achieve good compression by maintaining
information only for lines present in the lower-level caches, the
associative search used to generate the sharer vector is energy hungry,
especially at larger core counts.
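The on-the-fly lookup can be sketched as below (a hypothetical
illustration of the shadow-tag idea, not Niagara2's actual design;
the function name, array layout, and sizes are ours):

```python
def sharer_vector(shadow_tags, addr_tag, set_index):
    """shadow_tags[core][set_index] lists the tags cached by `core`.

    Every core's replicated tag array is probed on every request;
    this all-core associative search is the energy cost at scale.
    """
    vec = 0
    for core, tag_array in enumerate(shadow_tags):
        if addr_tag in tag_array[set_index]:  # associative compare
            vec |= 1 << core
    return vec

# 4 cores, 2 sets each; cores 0 and 1 both cache tag "A" in set 0.
shadow = [[["A"], []], [["A"], ["B"]], [[], []], [["C"], []]]
print(bin(sharer_vector(shadow, "A", 0)))  # -> 0b11
```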


Recently, two different approaches have been used to achieve directory
compression without loss of precision or extra energy consumption.
The Tagless directory~\cite{zebchuk-micro-2009} starts with the shadow
tag design and uses Bloom filters, one per private-level cache set, to
encode the presence of tags in each private-level cache.
The buckets of the Bloom filter represent the sharing pattern.
This approach has two advantages: it eliminates the energy-hungry
on-the-fly generation of the sharing pattern, and the storage required
is no longer proportional to the size of the tags.
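The Tagless idea can be sketched as follows (a minimal, hypothetical
illustration: the bucket count, the single hash function, and all names
are ours, and real Bloom filters typically use several hash functions):

```python
BUCKETS = 16  # assumed filter size per cache set

def bucket(tag):
    return hash(tag) % BUCKETS  # illustrative single hash function

class TaglessSet:
    """One cache set's Bloom-filter bits, kept per core."""

    def __init__(self, num_cores):
        self.bits = [[False] * BUCKETS for _ in range(num_cores)]

    def insert(self, core, tag):
        # Record that `core` caches `tag` in this set.
        self.bits[core][bucket(tag)] = True

    def sharers(self, tag):
        """Superset of the true sharers: a Bloom filter never misses
        a sharer, but hash collisions can add false ones."""
        b = bucket(tag)
        return [c for c, row in enumerate(self.bits) if row[b]]

s = TaglessSet(4)
s.insert(0, "A")
s.insert(2, "A")
assert set(s.sharers("A")) >= {0, 2}  # true sharers always included
```

Reading one bucket position across all cores yields the sharer vector
with no associative tag comparison.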

SPACE~\cite{zhao-pact-2010} was designed for inclusive caches and
leverages the observation that many memory locations in an application
are accessed by the same subset of processors and hence have identical
sharing patterns. Moreover, the number of such patterns is small,
though it varies across applications and over time. SPACE therefore
stores the patterns in a sharing pattern table, with individual cache
lines holding pointers into the table, and degrades gracefully in
precision when the table's capacity is exceeded.
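The compression at work can be sketched as below (a hypothetical
illustration of pattern deduplication only; table eviction and the
pattern merging SPACE uses on overflow are omitted, and all names are
ours):

```python
pattern_table = []   # deduplicated sharer bit vectors
line_pointer = {}    # block address -> index into pattern_table

def record(addr, sharer_vec):
    # Lines with identical patterns share one table entry; each line
    # stores only a short pointer instead of a full bit vector.
    if sharer_vec not in pattern_table:
        pattern_table.append(sharer_vec)
    line_pointer[addr] = pattern_table.index(sharer_vec)

# Many lines, few distinct patterns:
for addr in range(0x100, 0x110):
    record(addr, 0b0011)   # 16 blocks shared by cores 0 and 1
record(0x200, 0b1000)      # one block private to core 3

print(len(line_pointer), len(pattern_table))  # -> 17 2
```

Seventeen lines are tracked with only two stored bit vectors; the
per-line cost is a pointer whose width grows with the table size, not
the core count.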


In this paper, we extend to non-inclusive caches the observation made
in~\cite{zhao-pact-2010} that sharing pattern commonality across
memory locations can be used to compress the directory without
significant loss of precision.  Specifically, we combine the energy
and compression benefits of the Tagless and SPACE approaches in a
system we call SPATL (Sharing-pattern based Tagless Directory).  As in
the Tagless approach, tags within individual sets are combined in a
Bloom filter; however, rather than containing sharer vectors, the
individual buckets of the Bloom filter contain pointers to a table of
sharing patterns.  As in SPACE, only the sharing patterns actually
present due to current access to shared data are represented in the
sharing pattern table.  This combination allows directory compression
with graceful degradation in precision for both inclusive and
non-inclusive cache organizations. Our results show that a sharing
pattern table can be used to compress the Tagless directory, yielding
compounded area reductions without significant loss of precision:
\sys\ occupies 66\% and 36\% of the area of the Tagless directory
at 16 and 32 cores, respectively.
We study
multiple strategies to periodically eliminate the false sharing that
comes from combining sharing pattern compression with Tagless, and
demonstrate that \sys\ can achieve the same level of false sharers as
Tagless with $\simeq$5\% extra bandwidth. Finally, we demonstrate that
\sys\ scales even better than an idealized directory and can support
1024-core chips with less than 1\% of the private cache space for
data-parallel applications.
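The combination described above can be sketched as follows (a minimal,
hedged illustration: the bucket count, hash function, and table
management are ours and do not reflect SPATL's actual parameters or
overflow handling):

```python
BUCKETS = 16
pattern_table = [0]        # index 0 reserved for "no sharers"
buckets = [0] * BUCKETS    # one set's buckets: pointers, not bit vectors

def intern(vec):
    # Deduplicate sharing patterns, as in SPACE.
    if vec not in pattern_table:
        pattern_table.append(vec)
    return pattern_table.index(vec)

def insert(tag, core):
    # Bloom-filter insertion, as in Tagless, but the bucket stores a
    # short pointer into the pattern table rather than a sharer vector.
    b = hash(tag) % BUCKETS
    merged = pattern_table[buckets[b]] | (1 << core)
    buckets[b] = intern(merged)

def sharers(tag):
    return pattern_table[buckets[hash(tag) % BUCKETS]]

insert("A", 0)
insert("A", 3)
assert sharers("A") & 0b1001 == 0b1001  # cores 0 and 3 always included
```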





