%\begin{figure}[htbp]
%%\vspace{-8pt}
%\centering
%\includegraphics[width=3in]{./figure/typical-3d.eps}
%%\vspace{-10pt}
%\caption{The Typical and logic-cache/logic-to-memory stacking in 3D.}
%\label{fig:typical-3d}
%%\vspace{-12pt}
%\end{figure}

\section{Interconnect Service Layer}\label{sec:interconnet}
%In this section, we describe the overview, advantages and the design
%details of ISL.

%\subsection{Overview}

Typical 3D stacking architectures include logic-to-logic
stacking~\cite{3d:BMN+2006} and logic-to-cache/memory
stacking~\cite{hybrid-cache}. %,3d-memory-isca08}.
Figure~\ref{fig:3d-stacking-types}(a) illustrates a logic-to-cache
structure, in which the cache layer is stacked on the computing
(processor) layer and the interconnect network is integrated into
both the computing and cache layers.
%In order to improve on-chip networks in terms
%of performance-power, reliability, flexibility and cost we decouple
%the interconnect fabric with computing (core) and storage (cache)
%layers as a single layer, called \emph{"interconnect service
%layer"}, in 3D stacking.
Figure~\ref{fig:3d-stacking-types}(b) illustrates the proposed 3D
stacking structure with an \emph{interconnect service layer} (ISL).
The interconnect layer consists of routers and links that connect
the computing layer and the cache layer. Since the entire layer is
dedicated to the interconnect, it has room to support multiple
networks of different granularities and topologies, such as mesh,
ring, and hierarchical topologies.

%\begin{figure}[htbp]
%\begin{tabular}{cc}
%\centering
%\includegraphics[width=1.8in]{./figure/typical-3d.eps}
%&
%\includegraphics[width=1.8in]{./figure/proposed-3d.eps}
%\end{tabular}
%\caption{(a) Logic-cache 3D stacking. (b) 3D stacking with interconnect service layer.}
%\label{fig:3d-stacking-types}
%\end{figure}

\begin{figure}[htbp]
\centering
\includegraphics[width=3in]{./figure/proposed-3d.eps}
\caption{(a) Logic-cache 3D stacking. (b) 3D stacking with
interconnect service layer.}
\label{fig:3d-stacking-types}\vspace{-8pt}
\end{figure}

\subsection{Advantages}

%The interconnect fabric is normally built for \emph{providing best
%average case performance} across a generic set of applications. The
%reason being, no fixed on-chip network design efficiently supports
%all different types of communication requirements. Therefore it
%lacks flexibility and is unable to \emph{adapt to dynamically
%changing and special communication requirements at runtime}. In
%addition, in 2D as well as 3D designs the interconnect fabric is
%tightly coupled with computing (core) and storage (cache)
%components. This pushes chip designers to adopt low complexity,
%structured and regular interconnect design to \emph{limit
%pre-fabrication and post-fabrication verification efforts and
%costs}.

A separate \emph{interconnect service layer} (ISL) decouples the
active logic components (computing and storage) from the interconnect
fabric, greatly reducing pre-fabrication and post-fabrication
verification complexity, and hence cost. The ISL can be designed,
manufactured, and tested as a separate IP component. This allows
network architects to design a more flexible, adaptive, and
reconfigurable interconnect fabric. We propose \emph{multiple
superimposed heterogeneous networks} in the ISL. The ISL can
potentially consist of $M$ networks, each providing a different
degree of flexibility. One or more of these $M$ networks can be
active simultaneously at runtime. We enumerate the benefits of the
ISL as follows:
%\begin{itemize}
%\item

\textbf{Latency.} One or more of the $M$ networks can be optimized
to provide low latency for latency-critical applications. For
example, concentrated and richly connected topologies (e.g.,
flattened butterfly and hierarchical topologies) are best suited to
providing low latency.
%Similarly a bufferless networks ~\cite{bless-isca09}
% can provide ultra low latency. Bufferless networks are known to saturate quickly and hence are suitable
% only for latency critical and less intensive applications.

%\item
\textbf{Bandwidth.} One or more of the $M$ networks can be optimized
to provide high bandwidth. For example, a flat topology with many
routers and many buffers can provide high throughput: the throughput
provided by a mesh can be 2X higher than that of a butterfly or
hierarchical topology.
%although they are not scalable in terms of latency. However bandwidth oriented applications are insensitive
%to latency as their performance is determined by rate of packet delivery rather than individual packet latency.

%\item
\textbf{Power.} One or more of the $M$ networks can be optimized for
low power to meet the stringent power constraints imposed by the
computing and storage layers. The power efficiency of a network can
be tuned by operating it in different frequency domains, as well as
by using smaller data path widths for routers and channels.
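As an illustration of this tuning knob, the following sketch applies the standard first-order dynamic-power relation $P_{dyn} \propto \alpha C V^2 f$, with switched capacitance taken proportional to the datapath width. The function name and parameter values are our own assumptions, not part of the proposed design.

```python
# Hedged sketch (our own first-order model, not the paper's): dynamic
# power P_dyn ~ alpha * C * V^2 * f, where the switched capacitance C
# is assumed proportional to the router/channel datapath width.
def relative_dynamic_power(width_bits, freq_ghz, vdd=1.0, activity=0.5):
    """Relative (unitless) dynamic power of one network configuration."""
    capacitance = width_bits  # C assumed proportional to datapath width
    return activity * capacitance * vdd**2 * freq_ghz

# A network at half the width and half the frequency of a 128-bit,
# 2 GHz baseline dissipates about a quarter of its dynamic power.
baseline  = relative_dynamic_power(width_bits=128, freq_ghz=2.0)
low_power = relative_dynamic_power(width_bits=64,  freq_ghz=1.0)
assert low_power / baseline == 0.25
```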

%\item \textbf{Worst Case Guaranteed Service} The networks which have been designed
%to provide worst case performance guarantees usually provide suboptimal performance
%for best effort traffic ~\cite{gsf-isca08}. One or more of
%"M" multiple networks can be designed to provide bandwidth/latency guarantees and need be
%active only for real time applications. In addition guaranteed service designs assume fixed topologies
%based on pre-determined application flows ~\cite{aethereal-date06}. To allow such irregular topologies at run time,
%for one of the superimposed networks, some topology connections can be disabled at runtime
%to match the flow requirements.
%\end{itemize}

\subsection{ISL Example}

In this section, we describe one specific example of the ISL design
with two meshes of different granularities. We assume a range of CMP
designs with up to 36 processor cores.
Figure~\ref{fig:service-layer} illustrates a possible configuration
of the proposed 3D stacking architecture.
%We present hierarchical (change the name, overlay/stacking/hybrid?) topology in
%Figure~\ref{fig:service-layer}, which is evaluated in Section~\ref{sec:result}.
There are two superimposed meshes in the interconnect layer: a 6x6
fine-grained mesh and a 3x3 coarse-grained mesh, highlighted in dark
and light, respectively. We choose mesh topologies because of their
simplicity and their scalability for global
networks~\cite{noc:hpca09}.
%Note that this
%layer can also support ring and crossbar topologies but the
%objective here is to show one example to design the layer.
In a 36-core design, each router in the 6x6 fine-grained mesh is
connected to one core, and each router in the 3x3 coarse-grained
mesh serves a cluster of four cores. A bus, crossbar, or
point-to-point interface can be used to connect the two meshes. Each
mesh is self-sufficient and supports typical communication traffic.
With this interconnect layer, we can stack a processor layer with 36
small cores (6x6) and their caches, 9 big cores (3x3) with their
caches, or 3 big cores and 24 small cores with their caches, etc.,
all of which flexibly use the same interconnect layer for chip
stacking.
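The clustering above can be sketched as a simple address mapping. This is our own illustration (function names are hypothetical, not from the design): each coarse-grained router covers a 2x2 block of fine-grained nodes, i.e., four cores.

```python
# Hedged sketch (our illustration): mapping between the 6x6 fine-grained
# mesh and the 3x3 coarse-grained mesh, where each coarse router serves
# a 2x2 cluster of fine-grained nodes (four cores).
FINE_DIM, COARSE_DIM = 6, 3

def coarse_router_of(fine_x, fine_y):
    """Coarse-grained (3x3) router serving a fine-grained (6x6) node."""
    return fine_x // 2, fine_y // 2

# Nodes (0,0), (1,0), (0,1), (1,1) form one cluster under coarse
# router (0,0); the opposite corner node maps to coarse router (2,2).
assert coarse_router_of(0, 0) == coarse_router_of(1, 1) == (0, 0)
assert coarse_router_of(5, 5) == (2, 2)
```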

\begin{figure}[htbp]
\vspace{-8pt} \centering
\includegraphics[width=3in]{./figure/service-layer.eps}
\vspace{-8pt} \caption{An ISL example for 3D chip designs of up to
36 (6x6) cores.} \label{fig:service-layer} \vspace{-10pt}
\end{figure}

The two meshes can be used in coordination to improve performance.
For 6x6 multi-core integration, the 3x3 coarse-grained mesh provides
a fast-path cross-chip interconnect. For example, communication
between the upper-left corner core and the lower-right corner core
requires 10 hops if no 3x3 coarse-grained mesh is integrated; with
the coarse-grained mesh, at most 6 hops are needed.
Figure~\ref{fig:cache-bank} elaborates on supporting 3x3 cores with
6x6 cache banks. Each core has four corresponding cache banks. The
fine-grained interconnect fabric supports local data communication
between neighboring cores/caches, whether or not the cores belong to
the same cluster under a router in the coarse-grained mesh. The
coarse-grained interconnect supports faster global data
communication between cores/caches that are farther apart. For
example, communication between Core 0 and Core 8 traverses fewer
hops and enjoys higher link bandwidth.
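The hop counts above can be checked with a short sketch. The model is our own simplification (not spelled out in the paper): Manhattan-distance routing in each mesh, with one fine-grained hop to enter and one to leave the coarse-grained fast path.

```python
# Hedged sketch of the hop-count argument. Assumptions (ours):
# dimension-ordered routing in each mesh, one hop to reach/leave the
# nearest coarse-grained interface router, 2x2 cores per coarse router.
def fine_hops(src, dst):
    """Manhattan hop count in the 6x6 fine-grained mesh."""
    return abs(src[0] - dst[0]) + abs(src[1] - dst[1])

def hybrid_hops(src, dst):
    """Worst-case hops using the 3x3 coarse-grained mesh as a fast
    path: one hop up, the coarse Manhattan distance, one hop down."""
    cs = (src[0] // 2, src[1] // 2)
    cd = (dst[0] // 2, dst[1] // 2)
    coarse = abs(cs[0] - cd[0]) + abs(cs[1] - cd[1])
    return 1 + coarse + 1

# Corner-to-corner traffic: 10 hops on the fine-grained mesh alone,
# 6 with the coarse-grained fast path, matching the text.
assert fine_hops((0, 0), (5, 5)) == 10
assert hybrid_hops((0, 0), (5, 5)) == 6
```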
%They can be also
%operated independently since each is a full-bloomed network by
%itself.
%In addition, one network can be turned off for power saving
%while others keep operations for performance...It also supports
%heterogeneous processor design.

\begin{figure}[htbp]
\vspace{-8pt} \centering
\includegraphics[width=3in]{./figure/cache-bank.eps}
\vspace{-8pt} \caption{Elaboration on supporting 3x3 cores with 6x6
cache banks.} \label{fig:cache-bank} \vspace{-10pt}
\end{figure}

\subsection{Architecture}
In this section, we present the design of the ISL, including the
router design, the TSV connections, and the routing scheme that
supports the superimposed mesh topologies.


%the interface between two meshes, the router design to support 3D
%technology, the TSVs connections, and the logic to facilitate
%performance-power improvement.


\subsubsection{Router Microarchitecture}
The key concept is to use NoC routers for communication within the
interconnect layer, while using a dedicated through-silicon bus
(TSB) for communication among the different layers.
Figure~\ref{fig:tsb} illustrates an example of this structure. There
are 4 cores in the core layer, 4 routers in the interconnect layer,
and 16 cache banks in the cache layer; all layers are connected by
TSBs, which are implemented with TSVs. This interconnect style takes
advantage of the short vertical connections provided by 3D
integration. It has been reported that the vertical latency of
traversing a 20-layer stack is only 12\,ps~\cite{3d:dac06}, so the
latency of a TSB is negligible compared to that of 2D NoC routers.
Consequently, single-hop vertical communication via TSBs is
feasible. In addition, hybridizing 2D NoC routers with TSBs requires
only one (instead of two) additional port on each NoC router,
because a TSB can move data both upward and
downward~\cite{3d:isca06}. %In this 3D stacking, TSVs buses (6x6 +
%3x3 = 45) pierce through all three layers.
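The port accounting above can be made explicit with a small sketch. This is our own bookkeeping, consistent with the text: a 2D mesh router has four cardinal ports plus a local port, and the bidirectional TSB adds exactly one more port rather than separate up and down ports.

```python
# Hedged sketch (our own accounting, consistent with the text): port
# count of a hybrid 2D-NoC router in the interconnect layer.
def router_ports(mesh_ports=4, local_ports=1, tsb_attached=True):
    """Ports on a mesh router, optionally augmented with a TSB port."""
    # One port suffices for the TSB because it moves data both upward
    # and downward through the stacked layers.
    return mesh_ports + local_ports + (1 if tsb_attached else 0)

assert router_ports(tsb_attached=False) == 5  # plain 2D mesh router
assert router_ports(tsb_attached=True) == 6   # one extra TSB port, not two
```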

\begin{figure}[htbp]
\vspace{-8pt} \centering
\includegraphics[width=2.8in]{./figure/tsb.eps}
\vspace{-8pt} \caption{Elaboration of TSBs and routers in a setup of
2x2 cores with 4x4 cache banks.} \label{fig:tsb} \vspace{-10pt}
\end{figure}

%The interface between two networks can be implemented using bus or
%5x5 crossbar topology.%For performance and power improvement, each
%network can operate independent of others, such that one can be
%turned off, if not used, to save power or operate together to
%improve performance due to reduced hops. The logic to support turn
%on/off one of the networks is simple, which includes a control
%signal and a gate (gated Vdd).
%From cost point of view, the die area
%for each layer is reduced compared to 2D case so that it may provide
%cost benefit even though extra bonding cost is introduced in 3D. It
%also enables various compute and storage layers to be integrated and
%potentially increases the volume of individual layers and reduce the
%total cost.

%(Decouple storage layer from compute layer, Compute layer may use
%one on-chip network, Storage layer may use another on-chip network.
%Allow mixing/matching various storage layers with one compute layer,
%and vice versa).

%
%(and support rerouting if there is a broken link?
%reconfigurability?)

\subsubsection{Routing}

The interconnect service layer provides routing both \emph{within}
each superimposed network and \emph{between} them. Routing between
the independent networks facilitates flexibility and improves the
utilization of network resources. Intra-network communication can be
supported by simple baseline routing schemes: a simple extension of
default dimension-ordered routing can be implemented if the network
has a regular topology, such as the mesh topologies of our earlier
example, which we evaluate in this paper. If any of the networks has
an irregular topology, an application-specific architecture, or a
flow-based topology mapping for guaranteed services, its routers
need to support table-based routing and arbitration. To enable
inter-network communication, special inter-network input/output
ports are provided at \emph{specific routers} of each independent
network, connecting it to the other networks. Thus each independent
superimposed network has a few designated ``interface routers'' that
connect it to the other networks. To avoid the possibility of
deadlock at interface routers, egress and ingress traffic use
dedicated virtual channels in these routers. In addition to
operating the superimposed networks independently, the interface
routers' routing tables can also be programmed to fuse multiple
networks into a larger monolithic network. We omit the rich details
of this design space due to limited space.
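The intra-network scheme can be sketched as follows. This is our illustration of standard dimension-ordered (XY) routing, plus a hypothetical distance threshold (our assumption, not specified in the design) for deciding when a packet should cross to the coarse-grained network at an interface router.

```python
# Hedged sketch (our illustration, not the paper's implementation):
# dimension-ordered (XY) routing inside one mesh, with a simple
# distance threshold selecting the coarse-grained network.
def xy_next_hop(cur, dst):
    """Dimension-ordered routing: correct X first, then Y."""
    x, y = cur
    if x != dst[0]:
        return (x + (1 if dst[0] > x else -1), y)
    if y != dst[1]:
        return (x, y + (1 if dst[1] > y else -1))
    return cur  # arrived

def use_coarse_network(src, dst, threshold=4):
    """Send long-distance traffic through the coarse-grained mesh."""
    return abs(src[0] - dst[0]) + abs(src[1] - dst[1]) >= threshold

# Route (0,0) -> (2,1): X is corrected before Y.
route, cur, dst = [], (0, 0), (2, 1)
while cur != dst:
    cur = xy_next_hop(cur, dst)
    route.append(cur)
assert route == [(1, 0), (2, 0), (2, 1)]
assert use_coarse_network((0, 0), (5, 5))
assert not use_coarse_network((0, 0), (1, 1))
```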

%\subsubsection{Deadlock avoidance}
%1) restricted routing, 2)virtual channels

%interconnect: enable cache reuse when the cores are not
%working.(other purpose: increase core voltage when it requires
%higher voltage to work, reliability service...) the area, power,
%performance evaluation of the interconnect layer. different
%topologies: shared bus/crossbar, mesh/ring architecture.
%fully-connected/hierarchical
