\begin{singlespace}

Technology scaling has exacerbated the \emph{memory wall} and \emph{wire
delay} problems in designing energy-efficient, high-performance multi-core architectures. The
\emph{memory wall} problem demands solutions that increase on-chip caching capacity while
minimizing energy consumption (which grows with leakage current as technology scales). The \emph{wire
delay} problem dictates scalable, area- and power-efficient on-chip interconnects to
integrate hundreds of functional units on a chip. Emerging memory technologies such as Spin-Transfer
Torque RAM~(STT-RAM) have shown promise as a replacement for conventional SRAM/DRAM memory
technology to address the memory density problem. However, the long write latency of STT-RAM is a
{\it systemic} concern from both high-performance and low-power-consumption standpoints.
While architectural techniques such as write buffering can mitigate these write overheads, we propose
a different solution: using the on-chip network to hide the long write latency, without any additional resource requirements.

In this paper, we investigate the integration of STT-RAM in a 3D multicore environment and propose
network-level solutions to the systemic long-write-latency problem of STT-RAM. Our scheme is
based on the observation that network resources can preferentially be granted to cache requests
destined for STT-RAM banks that are not currently serving a write request. This allows us to
prioritize cache accesses to idle banks while delaying accesses to STT-RAM cache banks that are
occupied by a long-latency write. Through a detailed characterization of the cache access
patterns of 42 applications, we propose an efficient mechanism to facilitate such delayed accesses to a
cache bank by (a) partitioning the cache layer, which aids us in estimating the busy time of each
cache bank, and (b) prioritizing packets in a router that request access to idle banks. Through
detailed evaluations of a 64-core, 64-STT-RAM-bank 3D architecture, we show that our proposed
approach provides a 14\% average IPC improvement and a 54\% energy reduction over an area-equivalent
SRAM implementation.

\end{singlespace}
