
Two paramount challenges for designing power-efficient and high-performance multicore architectures
in deep submicron technologies are the memory wall and wire delay problems~\cite{BurgerGK96,
Agarwal00, Dally-DAC}. The memory wall problem is critical because of two factors: memory
density/size and power dissipation. Large-scale multicores will require large on-chip caches and an
efficient memory hierarchy, which in turn may further increase power consumption (due to the
significant increase in leakage current beyond the 45nm technology node). The wire delay problem is
important because the communication delay between components on a chip will become dominant
compared to logic delays. Consequently, ingenious technology, circuit, and architectural solutions are
required to meet the power and performance requirements of future multicore systems.

To address the memory wall problem, emerging memory technologies, such as Magnetic RAM~(MRAM),
Phase-Change RAM~(PCRAM), and Resistive RAM~(RRAM), are being explored as potential alternatives to
existing memories such as SRAM and DRAM. As these emerging memory
technologies mature, it is important for circuit and architecture designers to understand
their pros and cons so as to exploit them to significantly improve the performance/power
envelope of future computing systems. Among the various emerging memory technologies, Spin-Torque
Transfer RAM (STT-RAM)~\cite{MRAM:HYY+05, MRAM:KTM+07} combines the speed of SRAM,
the density of DRAM, and the non-volatility of Flash memory
with excellent scalability.
Furthermore, it has been demonstrated that, with 3D stacking~\cite{xie-jtc-2006, 3D-micro},
STT-RAM can be integrated with conventional CMOS logic~\cite{xydong-dac}.
Thus, STT-RAM is an attractive candidate to replace traditional
on-chip SRAM~\cite{gsun-hpca} and address the memory wall problem.

Compared to SRAM-based cache designs, STT-RAM based on-chip caches offer
lower leakage and higher density. While SRAM cell leakage power becomes dominant as
technology scales, an STT-RAM cell consumes no standby leakage power because its
non-volatile nature retains data even without a power supply. The higher density of STT-RAM
cells comes from their ``1T1J'' structure (one transistor and one magnetic tunnel junction), whereas
the traditional SRAM cell is a ``6T'' structure built from six transistors. Recent
work~\cite{xydong-dac,gsun-hpca} has shown that an STT-RAM cache occupies only about 25\% of the
area of an SRAM cache of the same capacity. On the
other hand, STT-RAM suffers from longer write latency and higher write energy
than SRAM. The write energy and duration are higher because writing a ``0'' or ``1'' into an
STT-RAM cell requires a strong current to force the storage node (the Magnetic Tunnel Junction, or
MTJ) to reverse its magnetization direction. The required current amplitude is determined by the size
of the MTJ and the write pulse duration. Although Spin-Torque Transfer technology can scale down the
critical current as the device feature size shrinks, the critical current remains much
higher and the {\it write latency is about four times higher} than that of SRAM cells~\cite{MRAM:HYY+05,
MRAM:ZBM+06}.

To address the latency and energy overheads associated with write operations in STT-RAM, researchers
have proposed various mitigation techniques at both the circuit and architectural levels. For
example, circuit-level approaches such as eliminating redundant bit-writes~\cite{MRAM:ICCAD09:Zhou}
and data inverting~\cite{pcm-date-2010} have been proposed to reduce write energy, while
architectural techniques such as a read-preemptive write-buffer design and a hybrid
STT-RAM/SRAM cache architecture can also help mitigate the latency and energy overheads in
STT-RAM~\cite{gsun-hpca}. While these techniques target the memory wall problem, they do not address
the wire delay problem.
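The redundant bit-write elimination idea mentioned above can be sketched in a few lines: a
read-before-write compares the stored word with the incoming one, and the expensive MTJ write current
is applied only to the bit positions that actually change. The function below is an illustrative
behavioral model, not the circuit from~\cite{MRAM:ICCAD09:Zhou}; the array/word representation is
hypothetical.

```python
def write_with_redundant_bit_elimination(cell_array, addr, new_word):
    """Write new_word to cell_array[addr], flipping only the bits that differ.

    Illustrative model: old and new data are XORed, and only the 1-bits of
    the difference mask would pay the costly MTJ write current in hardware.
    """
    old_word = cell_array[addr]       # cheap read of the current contents
    changed = old_word ^ new_word     # 1-bits mark positions that must flip
    bits_written = bin(changed).count("1")
    cell_array[addr] = new_word       # only `bits_written` cells are actually written
    return bits_written
```

For example, overwriting 0b1000 with 0b1010 flips a single bit, so only one cell pays the write
energy instead of the full word width.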

To address the wire delay problem, on-chip networks, also known as Network-on-Chip (NoC)
architectures, have become a major research thrust~\cite{Peh-dally, evc, topology-hpca, dragonfly,
rca-hpca}. Several high-performance and low-power NoC architectures have recently been proposed for
designing scalable multicores. In this context, 3D NoC architectures appear quite promising for
mitigating area, power, and scalability issues~\cite{xu-radix,gsun-hpca,mira}.
We believe it is possible to address both the memory wall and wire delay problems by
integrating STT-RAM in a 3D stack and providing architectural solutions to hide the
long write latency of STT-RAM; this is the motivation of this paper.

To this end, we investigate a 3D on-chip network design and propose to redesign the network-on-chip
routers to mitigate the system-level latency and energy overheads associated with the long write
operations in STT-RAM. Our scheme is based on the observation that cache accesses to idle STT-RAM
banks can be prioritized over cache requests to STT-RAM banks that are serving a long-latency write
request. This allows us to {\it selectively grant network resources} to cache requests destined for
idle STT-RAM banks by prioritizing them over requests to busy banks. We examine
the cache access distributions of different commercial and server workloads and observe that, on
average, 17\% of accesses to an STT-RAM bank can be delayed to give priority to other
cache accesses and thereby hide the memory latency.
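The core arbitration idea can be sketched as a filter in front of an ordinary arbiter: among the
requests competing for an output port, those whose destination bank is believed idle win over those
headed to a bank still serving a write. This is a minimal behavioral sketch under assumed data
structures (a `(vc_id, dest_bank)` request tuple and a `busy_until` map), not our router
microarchitecture.

```python
def stt_aware_arbitrate(requests, busy_until, now):
    """Grant one request: prefer requests whose destination STT-RAM bank is
    idle over those headed to banks still busy with a long-latency write.

    requests:   list of (vc_id, dest_bank) tuples competing this cycle
    busy_until: dict mapping a bank id to the cycle its current write ends
    now:        the current cycle
    """
    idle = [r for r in requests if busy_until.get(r[1], 0) <= now]
    pool = idle if idle else requests  # no idle-bank request: fall back to the baseline
    return min(pool)  # deterministic stand-in for the round-robin tie-break
```

When every competing request targets a busy bank, the filter is a no-op and the arbiter degenerates
to its baseline behavior, so the policy never starves requests outright.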

Despite this simple observation and our early analysis showing the potential benefits of selectively
prioritizing requests in a network, determining which cache banks are busy and which are
idle in a distributed environment is non-trivial. Putting this observation into
\emph{practice} poses the challenges of accurately determining the busy/idle status of each cache bank
and then prioritizing the right cache request. In particular, how does one develop a scheme to
\emph{predict} the status of a cache bank from a router in the NoC that is, for instance, two hops
away from that bank, and, having done so, how should this information guide a prioritization
policy? This paper makes the following \emph{contributions} in exploiting and then hiding the
long write latency of STT-RAM:

\textbf{STT-RAM aware router arbitration}. \emph{First}, it describes why simple round-robin
arbitration is not optimal when SRAM is replaced by STT-RAM. \emph{Second}, it elaborates our
proposal for packet reordering in a 3D network. \emph{Third}, it presents an in-depth analysis of the
cache access distribution across STT-RAM banks and highlights the potential of selective
prioritization. \emph{Fourth}, it describes a scheme for facilitating prioritization in a 3D network
by (a) partitioning the STT-RAM layer into a number of logical regions, (b) restricting the path
diversity, and (c) estimating the busy duration of an STT-RAM bank using three novel schemes: a
simple scheme (SS), a regional congestion aware scheme (RCA), and a window based scheme (WB).
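To make the estimation step concrete, the sketch below models a per-router busy-duration tracker in
the spirit of the simple scheme (SS): whenever the router forwards a write toward a logical region,
it extends that region's presumed-busy window by one write latency. The class name, region ids, and
the latency constant are all illustrative assumptions, not the paper's exact mechanism or measured
timing.

```python
STT_WRITE_LATENCY = 4  # illustrative cycle count, not a measured value

class SimpleSchemeEstimator:
    """Hypothetical per-router busy-duration estimator (simple-scheme style).

    Each router keeps, per logical region of the STT-RAM layer, the cycle
    until which that region's banks are presumed busy with writes.
    """
    def __init__(self):
        self.busy_until = {}  # region id -> cycle when it is presumed idle again

    def on_forward_write(self, region, now):
        # queued writes to the same region serialize behind one another
        start = max(self.busy_until.get(region, 0), now)
        self.busy_until[region] = start + STT_WRITE_LATENCY

    def is_busy(self, region, now):
        return self.busy_until.get(region, 0) > now
```

An arbiter would consult `is_busy` when deciding which competing request to grant; coarser regions
cost less state per router, while finer regions give a sharper busy/idle picture.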

\textbf{Quantifying the benefits of STT-RAM aware router arbitration}. In Section 4, we present
experimental results with a two-layer, 64-core, 64-STT-RAM-bank configuration and show that our
proposed approach can lead to an average 14\% improvement in IPC and a 54\% reduction in energy
compared to an SRAM-based cache implementation. Additionally, we find that logically partitioning the
cache layer into more regions helps our schemes by providing finer-grained control of the packets and
better congestion estimates, which improves performance further. We find that reordering requests at
a router two hops away from the destination STT-RAM bank and sub-dividing the cache layer into eight
logical regions gives the best performance (19\% IPC improvement) for our proposed prioritization
schemes. We also conduct several sensitivity analyses to examine the scalability and inflection
points of our proposed approach.

\textbf{Benefits of our scheme over a prior proposal}. We show that our scheme fares better than a
recently proposed design~\cite{gsun-hpca} that advocates the use of write buffers in
every STT-RAM bank and employs read preemption at the cache bank level. Our scheme is simple and
provides a 6\% additional network latency reduction over this proposal. To the best of our knowledge,
this is the first work to study {\it on-chip network design} targeting an STT-RAM based cache
architecture.


%The rest of the paper is organized as follows: Section~\ref{sec:background} provides a background on
%the technologies discussed in this paper; Section~\ref{sec:arch_details} presents a motivating
%example and details our proposed schemes; Section~\ref{sec:exp_eval} discusses the experimental setup
%and results using a diverse suite of benchmarks; Section~\ref{sec:prior} summarizes related work; and
%finally Section~\ref{conclusion} concludes the paper.
