\section{Interconnect network}\label{sec:interconnet}

In this section, we give an overview and describe the design details of
the interconnect layer. We also show how to utilize this layer to
achieve improved performance and power, greater flexibility, and
reduced manufacturing cost.

\subsection{Overview of the service layer}

In existing 3D stacking architectures for CMPs, there are
logic-to-logic stacking~\cite{3d:BMN+2006} and
logic-to-cache/logic-to-memory
stacking~\cite{hybrid-cache,3d-memory-isca08}. A typical
logic-to-cache/logic-to-memory structure is shown in
Figure~\ref{fig:typical-3d}. In this structure, the cache/memory
layer is stacked on top of the compute (processor) layer, and the
interconnect network is integrated into the compute and cache/memory
layers. As mentioned in Section~\ref{sec:introduction}, the
interconnect fabric is normally customized and optimized for one
type of chip. It therefore lacks flexibility and is inefficient at,
or incapable of, supporting other chip designs that require a
different on-chip interconnect. Moreover, the compute and storage
(cache/memory) layers are tightly coupled in this architecture. To
improve on-chip networks in terms of performance, power,
reliability, flexibility, and cost, we decouple the interconnect
fabric from the computing (core) and storage (cache) layers into a
single dedicated layer in the 3D stack, which we call the ``service
layer''.

\begin{figure}[htbp]
%\vspace{-8pt}
\centering
\includegraphics[width=3in]{./figure/typical-3d.eps}
%\vspace{-10pt}
\caption{Typical logic-to-cache/logic-to-memory stacking in 3D.}
\label{fig:typical-3d}
%\vspace{-12pt}
\end{figure}

Figure~\ref{fig:service-layer} illustrates one example of the 3D
stacking structure and the components in the service layer. Assume
there are up to 36 cores in the compute (processor) layer. The
interconnect layer consists of routers and links that connect the
compute layer and the cache layer. Since the whole layer is
dedicated to the interconnect, it can support multiple networks,
such as mesh, ring, and hierarchical topologies. We present a
hierarchical topology in Figure~\ref{fig:service-layer}, which is
evaluated in Section~\ref{sec:result}. There are two mesh topologies
in the interconnect layer: a 6x6 fine-grained mesh and a 3x3
coarse-grained mesh, highlighted in blue and yellow, respectively.
We choose mesh topologies because of their simplicity and their
scalability for global networks~\cite{noc:hpca09}. (This layer can
also support ring and crossbar topologies; the objective here is to
show one example of how to design the layer.) When there are 36
cores, each router in the 6x6 fine-grained mesh is connected to one
core, while each router in the 3x3 coarse-grained mesh serves a
cluster of four cores. The two meshes can be connected by a bus, a
crossbar, or a point-to-point interface. Each mesh supports typical
communication traffic, such as heavy local and global communication.
With this interconnect layer, we can stack a processor layer with 36
cores (6x6) as well as one with 9 cores (3x3), flexibly reusing the
same interconnect layer across chip stackings.
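As a sanity check on the structure just described, the following sketch (illustrative only, not from the paper) builds the link sets of the two meshes and confirms the router count of the hybrid interconnect layer:

```python
# Illustrative sketch: enumerate the undirected links of an n x n mesh
# and count the routers in the hybrid (6x6 + 3x3) interconnect layer.

def mesh_links(n):
    """Return the set of undirected horizontal/vertical links in an n x n mesh."""
    links = set()
    for x in range(n):
        for y in range(n):
            if x + 1 < n:
                links.add(((x, y), (x + 1, y)))  # horizontal link
            if y + 1 < n:
                links.add(((x, y), (x, y + 1)))  # vertical link
    return links

fine = mesh_links(6)    # 6x6 fine-grained mesh: 36 routers
coarse = mesh_links(3)  # 3x3 coarse-grained mesh: 9 routers

print(len(fine))    # 60 links in the fine-grained mesh
print(len(coarse))  # 12 links in the coarse-grained mesh
print(36 + 9)       # 45 routers total in the interconnect layer
```

The link counts follow the usual mesh formula $2n(n-1)$, so the dedicated layer hosts 45 routers and 72 links in total for this configuration.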

\begin{figure}[htbp]
%\vspace{-8pt}
\centering
\includegraphics[width=4.5in]{./figure/service-layer.eps}
%\vspace{-10pt}
\caption{An example of 3D cache stacking and the service layer with
designs of up to 36 (6x6) cores.} \label{fig:service-layer}
%\vspace{-12pt}
\end{figure}

The two meshes can be used in coordination to improve performance.
For 6x6 multi-core integration, the 3x3 coarse-grained mesh provides
a fast cross-chip path. For example, for communication between the
upper-left and lower-right corner cores, 10 hops are needed if no
3x3 coarse-grained mesh is integrated. With the coarse-grained mesh,
only 6 hops are needed in the worst case.
Figure~\ref{fig:cache-bank} elaborates on supporting 3x3 cores with
6x6 cache banks. Each core has four corresponding cache banks. The
fine-grained interconnect fabric supports local data communication
between neighboring cores/caches, while the coarse-grained
interconnect supports faster global data communication between
cores/caches that are farther apart. For example, communication
between Core 0 and Core 8 takes fewer hops and enjoys higher link
bandwidth. The two networks can also operate independently, since
each is a full-fledged network by itself. In addition, one network
can be turned off for power saving while the other keeps operating
for performance. The layer also supports heterogeneous processor
designs.
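The hop counts above can be reproduced with a small model. This is a hedged sketch: it assumes each 2x2 cluster of fine-grained routers shares one coarse-grained router and that crossing the fine/coarse interface costs one hop in each direction, which matches the figures quoted in the text:

```python
# Hedged hop-count model for the hybrid topology (assumptions: 2x2
# clustering onto the 3x3 mesh, one hop per fine<->coarse crossing).

def fine_hops(src, dst):
    """Manhattan distance between routers in the 6x6 fine-grained mesh."""
    return abs(src[0] - dst[0]) + abs(src[1] - dst[1])

def hybrid_hops(src, dst):
    """One hop up to the coarse mesh, a coarse traversal, one hop down."""
    cs = (src[0] // 2, src[1] // 2)   # source cluster in the 3x3 mesh
    cd = (dst[0] // 2, dst[1] // 2)   # destination cluster
    coarse = abs(cs[0] - cd[0]) + abs(cs[1] - cd[1])
    return 1 + coarse + 1

print(fine_hops((0, 0), (5, 5)))    # 10 hops on the fine mesh alone
print(hybrid_hops((0, 0), (5, 5)))  # 6 hops via the coarse fast path
```

Corner-to-corner traffic thus drops from 10 hops to 6 under these assumptions, in line with the worst-case figure stated above.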

\begin{figure}[htbp]
%\vspace{-8pt}
\centering
\includegraphics[width=4.5in]{./figure/cache-bank.eps}
%\vspace{-10pt}
\caption{Elaboration on supporting 3x3 cores with 6x6 cache banks.}
\label{fig:cache-bank}
%\vspace{-12pt}
\end{figure}


\subsection{Hardware support of the service layer}

In this subsection, we present the design details of the
interconnect layer: the interface between the two meshes, the router
design supporting 3D technology, the TSV connections, and the logic
that facilitates performance and power improvements.

The key concept is to use NoC routers for communication within the
interconnect layer, while using a specific through-silicon bus (TSB)
for communication among different layers. Figure~\ref{fig:tsb}
illustrates an example of this structure. There are 4 cores in the
core layer, 4 routers in the interconnect layer, and 16 cache banks
in the cache layer, and all layers are connected by the
through-silicon bus (TSB), which is implemented with TSVs. This
interconnect style has the advantage of the short vertical
connections provided by 3D integration. It has been reported that
the vertical latency of traversing a 20-layer stack is only
12~ps~\cite{3d:dac06}; thus, the latency of the TSB is negligible
compared to the latency of 2D NoC routers. Consequently, it is
feasible to have single-hop vertical communication by utilizing
TSBs. In addition, hybridizing 2D NoC routers with TSBs requires
only one (instead of two) additional link on each NoC router,
because a TSB can move data both upward and
downward~\cite{3d:isca06}. In this 3D stacking, TSV buses (6x6 + 3x3
= 45) pierce through all three layers; we evaluate their area in
Section~\ref{sec:result}.
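The figures quoted above can be checked with back-of-the-envelope arithmetic. The sketch below assumes (as a rough approximation, not a claim from the cited work) that the reported 20-layer vertical latency scales linearly per layer:

```python
# Back-of-the-envelope check of the TSB figures quoted in the text.

fine_routers = 6 * 6    # one TSB per fine-grained router
coarse_routers = 3 * 3  # one TSB per coarse-grained router
tsb_count = fine_routers + coarse_routers
print(tsb_count)  # 45 TSV buses pierce all three layers

# Reported: 12 ps to traverse a 20-layer stack; assume linear scaling
# per layer (an approximation) to estimate our 3-layer stack.
per_layer_ps = 12 / 20
three_layer_ps = per_layer_ps * 3
print(three_layer_ps)  # about 1.8 ps to cross the 3-layer stack
```

Even under this crude linear-scaling assumption, the vertical traversal is orders of magnitude below a typical multi-cycle 2D router pipeline, supporting the single-hop-vertical design point.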

\begin{figure}[htbp]
%\vspace{-8pt}
\centering
\includegraphics[width=4.5in]{./figure/tsb.eps}
%\vspace{-10pt}
\caption{Elaboration of TSB in 3x3 cores with 6x6 cache banks
structure.} \label{fig:tsb}
%\vspace{-12pt}
\end{figure}

The interface between the two networks can be implemented using a
bus or a 5x5 crossbar connecting the 4 local routers and 1 global
router. For performance and power improvement, each network can
operate independently of the other: one can be turned off, when not
in use, to save power, or both can operate together to improve
performance thanks to reduced hop counts. The logic needed to turn
one of the networks on or off is simple, consisting of a control
signal and a gate (gated Vdd).
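The on/off coordination can be sketched behaviorally. This is a minimal hypothetical model, not the gated-Vdd circuit itself: one enable bit per network stands in for the control signal, and path selection falls back to the fine-grained mesh when the coarse-grained mesh is gated off:

```python
# Minimal behavioral sketch (hypothetical, not RTL) of per-network
# power gating: an enable bit models the gated-Vdd control signal.

class MeshNetwork:
    def __init__(self, name):
        self.name = name
        self.enabled = True  # models the gated-Vdd control signal

def pick_network(fine, coarse, hops_fine, hops_coarse):
    """Use the coarse fast path only when it is powered and shorter."""
    if coarse.enabled and hops_coarse < hops_fine:
        return coarse.name
    if fine.enabled:
        return fine.name
    raise RuntimeError("no powered network available")

fine, coarse = MeshNetwork("fine 6x6"), MeshNetwork("coarse 3x3")
print(pick_network(fine, coarse, 10, 6))  # prefers the coarse fast path
coarse.enabled = False                    # gate off for power saving
print(pick_network(fine, coarse, 10, 6))  # falls back to the fine mesh
```

Because each mesh is a complete network on its own, this fallback preserves connectivity whenever at least one network remains powered.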

From a cost point of view, the die area of each layer is reduced
compared to the 2D case, which may provide a cost benefit even
though extra bonding cost is introduced by 3D integration. The
design also enables various compute and storage layers to be
integrated: decoupling the storage layer from the compute layer
allows each to use a different on-chip network and allows mixing and
matching of various storage layers with one compute layer, and vice
versa. It also potentially increases the production volume of
individual layers and reduces their manufacturing cost.

The dedicated layer can also potentially support rerouting around a
broken link, providing a degree of reconfigurability.

\subsection{Routing in stacking networks}

\subsection{Deadlock avoidance}

Deadlock can be avoided by (1) restricted routing or (2) virtual
channels.
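As an illustration of option (1), the standard restricted-routing scheme for meshes is dimension-order (XY) routing: by forbidding Y-to-X turns, it breaks all cyclic channel dependencies and is therefore deadlock-free. A minimal sketch:

```python
# Dimension-order (XY) routing in a 2D mesh: traverse the X dimension
# completely before the Y dimension. Forbidding Y->X turns eliminates
# cyclic channel dependencies, so the routing is deadlock-free.

def xy_route(src, dst):
    """Return the sequence of routers visited from src to dst."""
    x, y = src
    path = [(x, y)]
    while x != dst[0]:            # X dimension first
        x += 1 if dst[0] > x else -1
        path.append((x, y))
    while y != dst[1]:            # then Y dimension
        y += 1 if dst[1] > y else -1
        path.append((x, y))
    return path

print(xy_route((0, 0), (2, 1)))  # [(0, 0), (1, 0), (2, 0), (2, 1)]
```

The same discipline applies to both meshes in the service layer; virtual channels, option (2), instead break dependency cycles by splitting each physical link into separately flow-controlled buffers.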


