%In this section, we present a motivational example followed a detailed description of our proposed
%scheme.

\begin{figure*} %[t]
%\begin{tabular}{c}
\centering
 \psfig{figure=figures/motiv.eps, width=6.25in, height=1.65in}
%\end{tabular}
 \hrule
 \caption{\label{fig:motiv} \scriptsize \bf (a) Example request sequence at R0 in a 2x2 mesh
 topology; arbitration sequence using (b) a simple round-robin arbiter and (c) an STT-RAM bank-aware arbiter.}
\end{figure*}

\SubSection{A Case for STT-RAM aware router arbitration}

To motivate the importance of STT-RAM aware router arbitration, let us consider the example
illustrated in Figure~\ref{fig:motiv}. In this example, the network consists of 4 routers connected
in a 2x2 mesh, where router R0 is connected to a processing node (P) and the other routers (R1-R3)
are connected to STT-RAM cache banks. Consider the case where R0 receives multiple write requests
from P over time, as shown in Figure~\ref{fig:motiv}(a). The resulting arbitration sequence at R0,
which employs traditional round-robin (RR) arbitration, routes multiple requests to R1 before
forwarding requests to R2 and R3 (see Figure~\ref{fig:motiv}(b)). Since writes to STT-RAMs have a
long latency, subsequent accesses that arrive within a short duration of the first write to the
STT-RAM module connected to R1 are queued at the STT-RAM module interface (possibly at the network
interface). This STT-RAM-oblivious arbitration not only degrades performance (since the banks
connected to R2 and R3 remain idle) but also causes unwanted network congestion at R1 (which can
affect other flows passing through R1 in a larger network).

In contrast, with an STT-RAM aware arbitration scheme, R0 can prioritize requests to the STT-RAM
modules connected to R2 and R3 over a request to R1. This can be done by recognizing that the module
connected to R1 is still busy with the write request recently sent to it. Such a scheme can yield a
performance improvement by (a) prioritizing requests to idle banks and (b) shifting the buffering of
requests to busy STT-RAM modules from the module interface to the network router buffers. An example
schedule resulting from the STT-RAM aware arbitration is shown in Figure~\ref{fig:motiv}(c). This
prioritization scheme is particularly beneficial for STT-RAM structures (and possibly for other
memory technologies with long bank access times) but is not attractive for conventional SRAM cache
banks: write latencies in SRAMs are typically on the order of router and network latencies, so
delaying a write request to a busy SRAM bank would not overlap with the bank's service time and
would therefore hurt performance.

\SubSection{Our proposal: Re-ordering accesses to STT-RAM banks}

Since write latencies for STT-RAMs are around 11x the router hop latency, we prioritize requests to
banks other than the one to which a write request was recently forwarded. The key intuition is that
the bank to which a write request was sent will be busy servicing that request, and sending more
requests to it would only queue them. Thus, not all requests are equally critical from a network
standpoint, and a router can schedule requests by prioritizing a few over the others. Another aspect
of this re-ordering and selective prioritization is that, if all cache banks to which packets in a
particular router are destined are busy, we can \emph{prioritize coherence packets and packets
destined to memory controllers} (see Figure~\ref{fig:motiv}(c)), further boosting performance.
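As a concrete illustration, this selection policy can be sketched in software. The sketch below is a
minimal behavioral model, not the router hardware; all names (\texttt{Request},
\texttt{busy\_until}, and so on) are our own illustrative choices.

```python
from dataclasses import dataclass

@dataclass
class Request:
    dest_bank: int            # destination cache bank id, or -1 for non-bank traffic
    is_coherence: bool = False
    to_mem_ctrl: bool = False

def select_request(queue, busy_until, now):
    """Pick the next request to forward: prefer idle banks; if every
    destination bank is busy, prefer coherence/memory-controller packets;
    otherwise fall back to plain FIFO order."""
    for req in queue:                      # 1) oldest request to an idle bank
        if req.dest_bank >= 0 and busy_until.get(req.dest_bank, 0) <= now:
            return req
    for req in queue:                      # 2) all destination banks busy
        if req.is_coherence or req.to_mem_ctrl:
            return req
    return queue[0] if queue else None     # 3) FIFO fallback
```

In this sketch, \texttt{busy\_until} plays the role of the per-bank busy-time estimate that later
subsections describe how to obtain.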

Next, two questions need to be answered for a {\it practical} implementation of our proposal: (1)
\emph{How} long should a packet be delayed? and (2) \emph{Where} should a packet be re-ordered,
i.e., how far from its destination? These are critical issues since, ideally, we want to delay a
packet to a busy bank such that the next request arrives just as the busy bank finishes servicing
the current write request.
In other words, the network and queuing delays of the subsequent (delayed) packets should overlap
with the service time of the first packet to avoid performance degradation. This depends
significantly on our ability to detect the busy durations of STT-RAM banks in the network. If we
were to prioritize and re-order requests in a router far away from the requests' destination, then
estimating the congestion in the network and the busy time of the destination cache bank becomes
difficult. In contrast, if we were to re-order packets in a router very close to the destination
cache bank, then that router would not have many requests to re-order and would hence be forced to
route a packet to a busy cache bank (only for it to be queued later at that bank). After extensive
sensitivity analysis, we choose to selectively delay request packets in a router whose destination
is \emph{2 hops} away from the current router\footnote{Later, in our analysis section, we perform a
sensitivity study of this aspect and justify this decision.}, provided that the destination STT-RAM
bank is busy. Qualitatively, there are two reasons behind this choice: (1) estimating the busy time
of a destination bank that is 2 hops from the current router is relatively easy (shown later in
Section~\ref{sec:sensitivity_anyl}) and (2) it gives us significant opportunity to prioritize
coherence traffic, memory-controller-bound traffic and any other packets destined to routers more
than 2 hops away.

Two key factors decide whether a re-ordering/prioritizing scheme such as ours can be successful:
(1) how separated in time two consecutive accesses to a cache bank are, i.e., the {\it difference}
in cycles between consecutive accesses to a cache bank, and (2) the {\it number} of requests
buffered in a router requesting accesses to different cache banks in the network at any point in
time. The first factor exposes an application property and quantifies the reason behind our
proposal's benefit: it indicates how busy cache banks are. If two requests to a particular cache
bank are spread out in time such that the second request arrives long after the first has been
serviced, then our re-ordering scheme would not be effective. The second factor indicates the
potential of the re-ordering scheme (and also dictated our decision to choose 2 hops as the
distance for re-ordering). If all requests in a router are destined to a single bank, then
re-ordering those requests is not useful. However, if at any point in time a router holds requests
destined to various banks in the vicinity, then there is an opportunity for re-ordering that can be
exploited. The plots discussed in the following subsection quantify these two factors.

\subsection{Cache access distribution}

For different applications, we analyzed the distribution (in cycles) of consecutive accesses to
STT-RAM banks and the average number of request packets buffered in a router in the cache layer
whose destination is 2 hops away from the current router. In this analysis, we have 64 cores in one
layer and 64 STT-RAM banks in the other layer (see Section~\ref{sec:exp_platform} for details on our
simulation test-bed). Figure~\ref{fig:mram_access} shows the result of this analysis, where the
vertical axis represents the percentage of all accesses to a cache bank following a write access to
that bank and the horizontal axis corresponds to the latency in cycles. The graphs are plotted with
33-cycle intervals (except the first bin), with the leftmost bar indicating the percentage of
accesses that are separated by 0 $\leq$ latency $<$ 16 cycles after a write request. The rightmost
bar represents the percentage of all accesses that are separated by more than 165 cycles.

This plot quantitatively shows the burstiness in applications. For ferret, around 8\% of the
accesses occur within 16 cycles after a write request is initiated to a cache bank, and 10\% occur
within 33 cycles. Considering that write service times in STT-RAMs last 33 cycles
(Table~\ref{table:comparison}), all these subsequent requests inevitably get queued in the network
interface of the router or the bank controller before getting serviced. Requests that are separated
by at least 33 cycles (after a write request to a cache bank) are not queued and can directly
proceed to be serviced. These requests are represented in the 66, 99, 132 and 165+ bins of the
histograms shown. The figure also shows that not all applications are bursty. For instance, requests
in x264 are spread out, and the percentage of requests following a write request to a bank that can
potentially get queued (i.e., in bins 16 and 33) is only 4\%. However, across all the applications
we analyzed, on average, 17\% of requests are queued behind a write operation.
Figure~\ref{fig:mram_access} also shows the number of request packets in a router in the cache layer
whose destination is 2 hops away and that follow a write access request. We find that there are
almost always about \emph{3 requests} in the router following a write packet, which, if scheduled
soon after the write request, would end up queued at the STT-RAM banks. These are the request
packets that can potentially be \emph{delayed} to {\it hide} the long-latency write operations of STT-RAMs.
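The binning used in this analysis can be reproduced with a short script. The bin edges below follow
our reading of the figure (16, 33, 66, 99, 132, 165, with an open-ended 165+ bin); everything else
is an illustrative sketch.

```python
# Bin the gaps (in cycles) between a write to a bank and the next access
# to that bank into histogram bins: [0,16), [16,33), [33,66), [66,99),
# [99,132), [132,165) and 165+.

BIN_EDGES = [16, 33, 66, 99, 132, 165]   # upper edges; last bin is open-ended

def bin_gap_percentages(gaps):
    counts = [0] * (len(BIN_EDGES) + 1)
    for g in gaps:
        for i, edge in enumerate(BIN_EDGES):
            if g < edge:
                counts[i] += 1
                break
        else:
            counts[-1] += 1              # 165+ bin
    return [100.0 * c / len(gaps) for c in counts]
```

The fraction falling into the first two bins (below the 33-cycle write service time) is exactly the
fraction of requests that would be queued behind an in-progress write.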
%After having seen the potential of prioritizing requests to idle banks by delaying accesses to busy
%The following sub-sections describe the architecture/implementation details of our prioritization
%scheme.

\begin{figure*} [t] \centering
\begin{tabular}{ccccc}
\psfig{figure=figures/ferret_bin.eps, width=1.0in, height=1.0in} &
\psfig{figure=figures/facesim_bin.eps, width=1in, height=1.0in} &
\psfig{figure=figures/sclust_bin.eps, width=1in, height=1.0in} &
\psfig{figure=figures/x264_bin.eps,width=1in, height=1.0in} &
\psfig{figure=figures/parsec_bin.eps, width=1in, height=1.0in} \\

\psfig{figure=figures/libq_bin.eps, width=1.0in, height=1.0in} &
\psfig{figure=figures/lbm_bin.eps,width=1in, height=1.0in} &
\psfig{figure=figures/sphinx_bin.eps,width=1in, height=1.0in} &
\psfig{figure=figures/hmmer_bin.eps,width=1in, height=1.0in} &
\psfig{figure=figures/spec2k_bin.eps, width=1in, height=1.0in} \\

\psfig{figure=figures/sap_bin.eps, width=1.0in, height=1.0in} &
\psfig{figure=figures/sjas_bin.eps,width=1in, height=1.0in} &
\psfig{figure=figures/tpcc_bin.eps,width=1in, height=1.0in} &
\psfig{figure=figures/sjbb_bin.eps,width=1in, height=1.0in} &
\psfig{figure=figures/server_bin.eps, width=1in, height=1.0in} \\

\end{tabular}
 \hrule
 \caption{\scriptsize \bf Plots showing the distribution of consecutive accesses to STT-RAM banks in
 different applications following a write access (the last column shows the average across the whole benchmark suite).
 The horizontal axis represents the access latencies in cycles and the vertical axis represents
 the percentage of accesses. The inset in each plot gives the average number of request packets in a router in the cache layer destined to STT-RAM banks 2 hops away.}
\label{fig:mram_access}
\end{figure*}

\SubSection{Facilitating prioritization}

To prioritize requests to idle banks and delay requests to busy banks, a router should be able to
know which banks in its vicinity are idle and which are busy. One way of achieving this is to have a
global network that transmits this information to every router in the network on a cycle-by-cycle
basis. This approach would, however, be highly expensive for resource- and power-constrained NoCs.
An alternative approach, at least conceptually, is to route all packets destined to a particular
cache bank through a particular router in the network - a serialization point. This serialization
point can serve as a secondary source for the cache bank: all other nodes wanting to communicate
with the cache bank would send their requests to it. However, this is not the case in current
implementations of 3D NoCs. In a typical 3D network, each router in the core layer is connected to a
router in the cache layer below. When using a deterministic routing algorithm like Z-X-Y or X-Y-Z
routing, owing to the path diversity, each cache may receive requests from various cores along
different routes. For example, consider the illustration shown in Figure~\ref{flat-3D}, with the
core layer on top of the cache layer and core 0 connected to router 0, core 1 connected to router 1,
and so on. If core 0 wants to send a request to cache node 64, the request packet is routed
vertically downwards. If core 63 wants to send a request to cache node 0, assuming Z-X-Y routing,
the request packet is routed vertically to router 127, followed by X-direction routing from router
127 to router 120 and then Y-direction routing to router 0. Since these two routes never overlap,
there is no single point in the network where we can re-order requests if cache bank 64 were busy.

We introduce a novel scheme to provide such a serialization point in the network. Our scheme involves
(1) dividing the cache layer into logical regions and (2) limiting the path-diversity in the 3D NoC.

\begin{figure*} [t]
\centering
\begin{tabular}{c}
 \psfig{figure=figures/legend.eps, width=0.20in, height=3.5in, angle=-90}
\end{tabular} \\
\begin{tabular}{ccc}
 \psfig{figure=figures/core-lay.eps, width=1.10in, height=1.25in, angle=-90} &
 \psfig{figure=figures/cache-lay.eps, width=1.10in, height=1.25in, angle=-90} &
 \psfig{figure=figures/cache-lay-marked.eps, width=1.15in, height=1.25in, angle=-90} \\
 \scriptsize (a) & \scriptsize (b) & \scriptsize (c)
\end{tabular}
 \hrule
 \caption{\scriptsize \bf Two layers of the 3D CMP: (a) Core Layer (b) Cache Layer
 (c) Cache layer showing the child nodes of parent nodes.}
\label{flat-3D}
\end{figure*}

%\subsubsection {Partition the cache layer for reducing path-diversity}
\textbf{Partition the cache layer for reducing path-diversity:} We partition the cache layer of 64
cache banks into a few logical regions - 4 to demonstrate the concept. We then designate one
vertical through-silicon bus (TSB)\footnote{We call a collection of TSVs a TSB.} in each region,
through which all cores communicate their request packets with any cache bank in the corresponding
region of the cache layer. Figure~\ref{flat-3D} shows the logical partition and the 4 TSBs, each
connecting one router in the core layer to one router in the cache layer. Use of X-Y routing
together with these TSBs serializes packets to the routers in each region. Hence, a packet routed
from the core layer to the cache layer is first routed using X-Y routing to a particular router in
the core layer, followed by a TSB traversal in the vertical direction to a router in the cache
region. Finally, the packet follows X-Y routing again in the cache layer to the destination cache
bank. No such path restriction is imposed when communicating from the cache layer to the core
layer, i.e., all 64 TSBs can be used in that direction. Also, coherence traffic is not constrained
to go through the 4 TSBs alone and can use all 64 TSBs.
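The restricted core-to-cache route can be expressed as a short routing function. The sketch below
assumes an 8x8 mesh per layer with cache-layer ids equal to core-layer ids plus 64, consistent with
the node numbering discussed around Figure~\ref{flat-3D}; the exact TSB router placement per region
is our illustrative assumption.

```python
MESH = 8                 # 8x8 routers per layer; cache-layer id = core id + 64

# One designated TSB router per 4x4 region, placed at the region corner
# nearest the network center (illustrative; region 0's TSB is core node 27,
# directly above cache node 91).
REGION_TSB = {0: 27, 1: 28, 2: 35, 3: 36}

def coords(node):
    n = node % 64
    return n % MESH, n // MESH

def region_of(cache_node):
    x, y = coords(cache_node)
    return (1 if x >= 4 else 0) + (2 if y >= 4 else 0)

def xy_route(src, dst):
    """Dimension-ordered X-Y hop list (node ids) within one layer."""
    base = (src // 64) * 64
    (sx, sy), (dx, dy) = coords(src), coords(dst)
    path = []
    while sx != dx:                       # route X first
        sx += 1 if dx > sx else -1
        path.append(base + sy * MESH + sx)
    while sy != dy:                       # then Y
        sy += 1 if dy > sy else -1
        path.append(base + sy * MESH + sx)
    return path

def core_to_cache_route(core_node, cache_node):
    tsb = REGION_TSB[region_of(cache_node)]
    return (xy_route(core_node, tsb)            # X-Y in the core layer
            + [tsb + 64]                        # vertical TSB traversal
            + xy_route(tsb + 64, cache_node))   # X-Y in the cache layer
```

With this restriction, every core-to-cache request for region 0 crosses core node 27 and enters the
cache layer at router 91, which is what creates the serialization point.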

Note that the innermost corner router in a region does not form a serialization point for {\it all}
traffic in that region. Each router manages traffic for all routers 2 hops away in the region. We
call the router that manages traffic to a destination STT-RAM bank 2 hops away a {\it parent node},
and a router connected to an STT-RAM bank 2 hops away from its parent node a {\it child node}.
Thus, a few routers in each region serve as parent nodes, where the prioritization of core-to-cache
packets occurs, and these parent nodes maintain a predicted estimate of the busy times of the banks
2 hops away from them (Section~\ref{sec:busy_time} describes these prediction schemes). As an
example, in Figure~\ref{flat-3D}, router 91 manages traffic to cache banks 75, 82 and 89, and
router 90 manages traffic to cache banks 74, 81 and 88. Since all packets in the cache layer use
X-Y routing, using 4 TSBs and partitioning the cache layer into logical regions help each parent
router estimate the busy times of the banks 2 hops away from it. The three innermost corner nodes
in each region that lie at the center of the network (e.g., nodes 83, 90 and 91 of region 0) are
managed by the region-TSB node vertically above them in the core layer (i.e., node 27).

Clearly, restricting the routes from the core layer to the cache layer increases the hop count and
hurts performance. To reduce the performance penalty of the increased hop count, instead of sending
one flit at a time through the core-to-cache-layer TSBs, we combine a few flits and transmit them
simultaneously. Our design is similar to the XShare technique proposed in~\cite{topology-hpca},
where a NoC router combines two small flits (coherence and header flits) and sends them over the
link to the next router. In our scheme, however, whenever possible, we combine two 128b data flits
and transfer them simultaneously over the high-density TSB (256b in our design).
%Simultaneous transmission of two flits
%requires minimal modification to the credit based flow control scheme - instead of requiring credit
%for a single flit in the upstream router, the downstream router now needs two credits in the upstream
%router.

%\begin{figure*} [t]
%\centering
%\begin{tabular}{ccccc}
% \psfig{figure=figures/xbar.eps, width=1.18in, height=1.15in} &
% \psfig{figure=figures/base_buff.eps,width=1.0in, height=1.15in} &
% \psfig{figure=figures/xshare_buff.eps, width=1.20in, height=1.15in} &
% \psfig{figure=figures/base_sa.eps,width=1.18in, height=1.15in} &
% \psfig{figure=figures/xshare_sa.eps, width=1.20in, height=1.15in} \\
% (a) & (b) & (c) & (d) & (e)\\
%\end{tabular}
% \hrule
% \caption{\small \bf  XShare design: (a) Crossbar design showing two ports merging to 128b flits and sending them onto a 256b link (b) Baseline buffer organization
% (c) XShare buffer organization (d) Baseline SA stage (e) XShare SA stage}
%\label{fig:xshare}
%\end{figure*}


%When two combined 128 bits arrive at an upstream router port in the cache layer, the individual flits
%are demuxed based on the virtual channel ids (VCIDs) of the two flits. This requires that the data
%path of the input DEMUX and the switch MUX to be split into two separable halves. Similarly, to
%combine two flits, a switch allocation (SA) control logic now sends two demux signals that determine
%which half of the crossbar a flit needs to be routed (Upper Half is Data Set1 (DSET1) and Lower Half
%is Data Set2 (DSET2)). The buffers are still maintained as 128 bit FIFO buffers requiring no
%additional modifications. They are treated as two separate logical halves (DSETs) by the MUX/DEMUX
%logic. The primary overhead comes from including the second layer of smaller muxes (shaded black in
%the figures). Typically, the buffer read / write stages are the shortest stages in a generic router
%pipeline, and hence, have sufficient slack for two 2:1 muxes without affecting the router cycle time.

\begin{wrapfigure}{l}{3.60in}
%\begin{figure*} [t]
\centering
\begin{tabular}{c}
\psfig{figure=figures/2d_layer.eps, width=3.60in, height=3.30in} \\
\end{tabular}
 \hrule
 \caption{\scriptsize \bf Proposed 3D architecture with cores in the top layer and STT-RAM banks
 (partitioned into 4 logical regions) in the bottom layer. The bold arrows show the route taken by requests from core to cache bank.}
\label{fig:3D}
%\end{figure*}
\end{wrapfigure}

%\subsection{Putting it all together}
\textbf{Putting it all together:} Figure~\ref{fig:3D} shows our resulting 3D NoC design. In this
design, all communication from the core layer to the cache layer occurs through the 4 high-density
TSBs (256b), while communication from the cache layer to the core layer can use all of the 64 128b
TSBs. The figure shows the paths taken by requests generated at cores 7, 46 and 48 to communicate
with cache banks 89, 82 and 75, respectively. As shown in the figure, since the destination cache
node for all these requests lies in region 0, they are first routed using X-Y routing to node 27 in
the same layer, followed by a vertical TSB traversal, and are finally buffered in the secondary
source router 91. Router 91 now sees all these requests and, since it is the only router through
which all processor requests to routers 75, 82 and 89 pass, it has an estimate of which of these
cache banks are currently busy or idle and can thus selectively prioritize requests to them. The
next subsection describes how each parent router estimates the busy times of its child nodes. This
estimate helps the parent node delay a request packet to a child node as long as that child node is
busy.

\subsection{Estimation of busy time} \label{sec:busy_time}

Restricting the path diversity by allowing only 4 TSBs when communicating from the cores to the
caches, together with X-Y routing in the cache regions, helps each parent node estimate the busy
times of its child nodes. This is because each child node receives requests only from its parent
node, except for coherence traffic, which can be received from any router in the network. Ideally,
a parent node should delay a request to its child node such that, as soon as the child node is done
servicing a request, another request arrives from its parent node. The latency of a packet from a
parent router to its 2-hop-away destination consists of router delay, link traversal delay and
delay due to congestion. Since the destination is two hops away, there is one intermediate router
with a 2-cycle delay (we assume a 2-stage router) and 2 cycles of link traversal delay (1 cycle
each). The only unknown component is the congestion at the intermediate and destination nodes.
Thus, following a write request, each parent node should delay the next request by 4 cycles +
estimated congestion cycles + the write service time in the STT-RAM bank (= 33 cycles). We use
three heuristics for estimating this congestion:

{\bf Simplistic Scheme (SS):} In the simplistic scheme, the parent node delays a request packet to
a {\it busy} cache bank for 33 cycles following a write request, ignoring congestion. While
delaying packets for this duration, the parent node prioritizes packets destined to other parent
nodes, coherence traffic and traffic to memory controllers. Clearly, since congestion is not
modeled in this scheme, a packet is not sufficiently delayed when congestion at the destination
bank is significant and arrives at its destination only to be queued.
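The release time implied by the delay expression above (4 cycles of router and link delay, plus the
congestion estimate, plus the 33-cycle write service time) can be sketched as follows; under SS the
congestion term is simply zero. The constants follow the text; the function names are ours.

```python
ROUTER_DELAY = 2     # one intermediate 2-stage router
LINK_DELAY = 2       # two 1-cycle link traversals
WRITE_SERVICE = 33   # STT-RAM write service time, in cycles

def release_cycle(write_sent_at, congestion_est):
    """Earliest cycle at which a parent node should forward the next
    request to a bank that was sent a write at cycle write_sent_at."""
    return write_sent_at + ROUTER_DELAY + LINK_DELAY + WRITE_SERVICE + congestion_est

def ss_release_cycle(write_sent_at):
    return release_cycle(write_sent_at, congestion_est=0)   # SS ignores congestion
```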

{\bf Regional Congestion Aware (RCA) scheme:} In the RCA scheme, we use information from
neighboring nodes to estimate congestion. This scheme is based on the proposal of Grot et
al.~\cite{rca-hpca}, where a coarse view of regional congestion is obtained by aggregating
congestion estimates from neighboring routers. In our RCA-based scheme, an aggregation module
resides at each network interface, and the inputs to the aggregation module come from downstream
routers and the local router's congestion estimate. The aggregation logic then combines the two
congestion values, potentially weighting one value differently than the other, and feeds the result
to a propagation module that propagates the congestion estimate of a router to its neighbors. To
estimate local congestion, a router uses the buffer utilization of the port along which congestion
estimates are propagated. Similar to \cite{rca-hpca}, we weight the local and neighboring
congestion information equally. The RCA scheme requires additional wires for propagating the
congestion estimates among neighbors; based on \cite{rca-hpca}, we assume 8 extra bits of wiring
between adjacent nodes. Among the three schemes we use for congestion estimation, RCA provides the
best estimate, albeit at the cost of additional wires.
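The equal-weight aggregation just described amounts to a simple recurrence; the sketch below is
illustrative, with buffer utilization standing in for the local congestion estimate.

```python
def local_congestion(buf_occupied, buf_size):
    """Local estimate: buffer utilization of the port along which
    congestion estimates are propagated."""
    return buf_occupied / buf_size

def aggregate(local, downstream, w_local=0.5):
    """Combine local and downstream estimates; w_local = 0.5 gives the
    equal weighting used in our scheme (following the RCA proposal)."""
    return w_local * local + (1.0 - w_local) * downstream
```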

{\bf Window-Based (WB) scheme:} The third scheme is a window-based scheme that does not require any
additional wiring to pass congestion information from child nodes to the parent. In this scheme,
for every N packets, the parent node tags a packet with a B-bit time stamp and starts a B-bit
counter before dispatching it to the destination node. The destination (child) node, after
receiving the tagged packet, sends an ACK packet carrying the received time stamp back to the
parent node. The B-bit counter is updated every cycle until the parent node receives the ACK packet
with the B-bit time stamp from the child node. After receiving the ACK packet, the parent node
estimates the congestion as half of the difference between the current time and the time stamp
received.
This scheme is similar to the window-based scheme used in the TCP/IP protocol, except that the
window size is just 1. In our case, we chose N=100 and B=8, and the time stamp\footnote{We take
counter saturation and roll-over into account using minimal additional logic.} is appended to the
header flit. A header usually carries source-destination information and is typically shorter (64b)
than data flits (128b); appending an 8-bit time stamp to the header flit requires no additional
wires and introduces no significant overhead. Congestion can vary with program phases, and our
analysis shows that updating the congestion information every 100 packets provides reasonably
accurate congestion estimates. The overhead of the WB scheme is that of maintaining B-bit counters
in each router and communicating 1-bit ACK messages. Our implementation of the counter scheme in
Synopsys shows a minimal gate-count increase in each router; hence, the WB scheme is simple to
\emph{implement} in the network.
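The WB estimate can be sketched as follows. The modular subtraction handles the 8-bit counter
roll-over mentioned in the footnote; as in the text, the one-way congestion is taken as half the
measured round trip. All names are illustrative.

```python
B = 8
MASK = (1 << B) - 1      # 8-bit time stamps wrap at 256
N = 100                  # tag every Nth request packet

def needs_tag(pkt_seq):
    """Only every Nth packet carries a time stamp in its header flit."""
    return pkt_seq % N == 0

def wb_congestion(now, echoed_stamp):
    """Congestion estimate on ACK receipt: half of the elapsed time
    between tagging the packet and receiving its echoed time stamp."""
    rtt = (now - echoed_stamp) & MASK   # modular difference survives roll-over
    return rtt // 2
```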
