\section{Introduction} \label{sec:introduction}

%With the technology scaling, the number of cores and the total die
%area in CMP increase. Consequently, the yield of functional cores is
%reduced, resulting higher cost. 3D may provide cost efficiency with
%smaller stacking die size when the corresponding 2D die has large
%area~\cite{dong:aspdac09}.

%To further improve performance/cost, the interconnect between cores
%and caches can be designed for the reconfigure and fault-tolerant
%purpose so that when some cores are not functional their caches can
%be used by any other core or neighboring cores (different complexity
%of interconnect). (expand this section later: CMP, yield, 3d
%technology, communication node, and cost) Technology from
%130nm-90nm-65nm-45nm, the number of cores can be from 4 cores-8
%cores-16 cores-32 cores-64 cores, cache size may also change. The
%number of tiers in 3D is from 2-3-4 layers. focus on one technology
%first.

%JL: start

The diminishing returns of efforts to increase clock frequency and
exploit instruction-level parallelism in a single processor have led
to the advent of chip multiprocessors (CMPs). %~\cite{cmp:pact05}.
As the number of cores in CMPs increases in pursuit of higher
computation throughput, the die size gradually increases as well.
%~\cite{noc:isscc07}.
Consequently, manufacturing yield suffers, leading to higher
manufacturing cost. Meanwhile, the network-on-chip (NoC) has emerged
as a promising and scalable solution for interconnecting the cores
in CMPs.
%Literature research includes a variety of interconnects
%such as shared bus~\cite{noc:isca05},
%mesh~\cite{noc:isscc07,noc:isca04}, and ring~\cite{noc:comp07}. 2D
%mesh topology is popular for tile CMPs due to its low complexity.
%In~\cite{noc:ics06,noc:micro07}, a high radix topology is proposed
%to minimize hop count and improve performance. Recently, a
%hierarchical NoC is proposed to achieve performance and power
%improvement~\cite{noc:hpca09}.

A rich collection of NoC literature exists.
% JL: We probably do not need too many references for DAC?
%, such as~\cite{noc:isca05,noc:isscc07,noc:isca04,noc:comp07,noc:ics06,noc:micro07,noc:hpca09} to name a few.
Nonetheless, challenges for future many-core CMP design remain. In
particular, current NoC designs lack flexible support for
cost-effective power-performance improvement of future many-core
CMPs. First, the interconnect fabric consumes substantial chip area
and power as the number of cores grows, which makes the chip larger
and more power hungry and constrains both the number of cores
(computing) and the capacity of on-chip cache memory
(storage)~\cite{noc:isca05}. %,orion2.0}.
Second, the current
interconnect fabric is typically fixed for a given chip design, since
it is integrated with the processors and cache memory within a single
die. Reusing the interconnect fabric across chip generations
is therefore difficult, resulting in both design and manufacturing overhead.
%For example, one specific network topology to
%one chip design may not be efficient or able to support other chip
%designs requiring different on-chip interconnect.

On the other hand, three-dimensional (3D) integration technology has emerged
as a promising means to mitigate the power-performance problems of conventional 2D chips,
such as dominant interconnect delay and power
consumption~\cite{3D:DWM+05,3D:YPG+06}. %Several 3D integration
%technologies have been explored, including wire bonded, microbump,
%contactless (capacitive or inductive), and through-silicon-via (TSV)
%vertical interconnects~\cite{3D:DWM+05}. TSV based 3D integration
%has the potential of greatest vertical interconnect
%density; therefore, it is the most promising one among all the
%vertical interconnect technologies.
In 3D ICs based on through-silicon-via (TSV) technology, multiple
active device layers are stacked together (through wafer stacking or
die stacking) with direct vertical TSV
interconnects~\cite{3D:YPG+06}. One important benefit of 3D ICs is
the opportunity to stack dies from different technologies, processes,
and vendors~\cite{3D:YPG+06}. 3D technology also reduces the die area
of each layer and may provide a cost-efficiency
benefit~\cite{dong:aspdac09}.

Putting these together, we propose to decouple the interconnect
fabric from the computing (core) and storage (cache) layers into a
separate layer in a 3D stack, called the \emph{Interconnect Service
Layer} (ISL). This decoupling can reduce manufacturing cost, since
each layer in 3D has a smaller die area. It can also offer a more
reliable and flexible interconnect layer than its traditional 2D
counterparts. The decoupled ISL has the real estate for more than
one on-chip network; for example, it can host multiple on-chip
networks, such as mesh, ring, and hierarchical topologies, in a
single die. With the ISL, the constraints on router area and link
bandwidth in 2D can be relaxed. The ISL also allows a different
manufacturing volume for each die in a 3D stack, reducing the
overall cost. For example, our proposed ISL can be manufactured at a
much larger volume than the computing and storage layers, and then
bonded to layers of varied designs, such as different numbers of
cores and storage capacities.
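As a rough illustration of this mix-and-match flexibility, a single ISL die exposing a fixed grid of vertical connection points could be bonded to compute layers of different granularities. The grid size, core counts, and helper names below are hypothetical, chosen only to make the idea concrete; they are not the configurations evaluated in this paper:

```python
# Hypothetical sketch: one ISL die exposes a 6x6 grid of TSV connection
# points. A compute layer of 36 small cores uses one point per core,
# while a layer of 9 large cores uses a 2x2 patch (4 points) per core.
ISL_POINTS = 6 * 6

def bondable(num_cores: int, points_per_core: int) -> bool:
    """A compute layer matches the ISL when its cores together
    occupy exactly the ISL's grid of connection points."""
    return num_cores * points_per_core == ISL_POINTS

many_small = bondable(36, 1)   # 36 small cores, one point each
few_large = bondable(9, 4)     # 9 large cores, 2x2 patch each
mismatch = bondable(16, 3)     # 48 points needed: does not fit
```

Both core organizations bond to the same ISL design, which is what lets the ISL be manufactured at high volume independently of any one compute layer.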

%(can explain an example here: one service layer can be
%bonded to 9 larger cores or 36 smaller cores with different network
%topologies, which is already in the service layer).
%Additionally, flexible power-performance tradeoff can be achieved by
%operating each network topology independently, e.g., only one
%network is active for reduced power consumption or activating all
%the network topologies to improve performance due to reduced number
%of hops. %In summary, our objective is first design ISL and then
%utilize this layer judiciously for improved performance, power,
%flexibility and reduced manufacturing cost.

%Problem: One important benefit of 3D is integrating chips from
%different technologies, different processes and different vendors.
%However, customized interconnect layer lacks flexibility. Usually
%optimized for one type of chip. Inefficient or unable to support
%other chip designs requiring different on-chip interconnect. Compute
%and storage layers are tightly coupled with fixed match between
%them.

%Observation: The interconnect layer has estate for more than one on-chip network
%
%Goal: To improve on-chip networks in the interconnect layer for
%flexible 3D integration and optimization

%Benefits:
%
%Enable various compute and storage layers to be integrated
%Decouple storage layer from compute layer
%Compute layer may use one on-chip network
%Storage layer may use another on-chip network
%Allow mixing/matching various storage layers with one compute layer, and vice versa
%Potentially increase the volume of individual layers and reduce their manufacturing cost

%JL: end

This paper makes the following contributions:

%\begin{itemize}

1) We map the computing, communication, and storage (including
cache) functions to different layers in a 3D stack. In particular,
we extract the communication functions into a single layer, called
the ISL, to enable flexible 3D integration.
%Note that we use communication and interconnect interchangeably.

2) The ISL can be designed, manufactured, and tested as a separate
IP component. This enables more flexible and reconfigurable
interconnect fabric designs. In particular, we propose an
architecture with \emph{multiple superimposed heterogeneous
networks} for the ISL. The ISL can consist of $M$ networks, each
providing a separate degree of flexibility and communication
capacity, and one or more of these $M$ networks can be active
simultaneously at runtime.
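The runtime selection among the $M$ superimposed networks can be sketched as follows. The topology names, hop counts, and power figures are illustrative assumptions, not measured values from our evaluation:

```python
# Hypothetical ISL hosting M = 3 superimposed heterogeneous networks.
# Any subset can be powered on at runtime to trade communication
# capacity against power. All numbers below are illustrative.
NETWORKS = {
    "mesh_6x6": {"avg_hops": 4.0, "power_w": 6.0},  # fine-grained, local traffic
    "ring":     {"avg_hops": 9.0, "power_w": 2.5},  # low-power fallback
    "hier":     {"avg_hops": 2.5, "power_w": 8.0},  # low hop count, global traffic
}

def active_profile(active):
    """Total power of the enabled networks, and the best (lowest)
    average hop count a packet can achieve among them."""
    power = sum(NETWORKS[n]["power_w"] for n in active)
    hops = min(NETWORKS[n]["avg_hops"] for n in active)
    return hops, power

low_power = active_profile(["ring"])               # one network active
high_perf = active_profile(["mesh_6x6", "hier"])   # multiple networks active
```

Activating only the ring minimizes power, while enabling several networks at once lowers the hop count seen by traffic, which is the power-performance trade-off the ISL exposes.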

%The decoupled interconnect layer supports multiple on-chip networks
%of different granularity, such as a 3x3 mesh and a 6x6 mesh in a
%36-core CMP or different topologies, such as a PowerBus and a ring
%or mesh.

%\item We enhance ISL for better power-performance by judiciously
%selecting one network or coordinate networks for improved
%communication capability or activating only one network for reduce
%power consumption. (no evaluation for this part)

3) We evaluate one specific instance of the ISL and demonstrate its
cost benefit and performance improvement. The evaluation results
show that the cost reduction of our proposed architecture can be up
to 40\% compared to the 2D case. The performance improvements are
21\% and 6.5\% on average compared to 2D and to 3D without the ISL
design, respectively.

4) We extend existing 3D cost models by modeling the number of TSVs
required for power delivery, %and clock distribution,
differentiating the cost models of different functional layers, and
accounting for the production volume
of each layer in 3D integration. %, and address (production) time factor of each layer.
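The cost intuition behind these contributions can be sketched with a standard negative-binomial yield model. All parameter values below (defect density, wafer cost, NRE, volumes) are illustrative assumptions, not the calibrated model developed later in this paper, and bonding/TSV costs are deliberately omitted here:

```python
import math

def die_yield(area_cm2, d0=0.5, alpha=3.0):
    """Negative-binomial yield model: Y = (1 + A*D0/alpha)^(-alpha)."""
    return (1.0 + area_cm2 * d0 / alpha) ** (-alpha)

def dies_per_wafer(area_cm2, wafer_diam_cm=30.0):
    """First-order die count on a round wafer, accounting for edge loss."""
    r = wafer_diam_cm / 2.0
    return int(math.pi * r**2 / area_cm2
               - math.pi * wafer_diam_cm / math.sqrt(2.0 * area_cm2))

def die_cost(area_cm2, wafer_cost=3000.0, nre=2.0e6, volume=1.0e6):
    """Per-die cost: recurring wafer cost divided by good dies,
    plus non-recurring engineering cost amortized over the volume."""
    recurring = wafer_cost / (dies_per_wafer(area_cm2) * die_yield(area_cm2))
    return recurring + nre / volume

# Splitting a 4 cm^2 2D die into two 2 cm^2 3D layers improves the yield
# of each layer; a high-volume ISL layer further amortizes its NRE.
cost_2d = die_cost(4.0)
cost_3d = die_cost(2.0, volume=1.0e6) + die_cost(2.0, volume=5.0e6)
```

Under these assumed numbers the two smaller 3D dies come out cheaper than the single large 2D die, which is the yield and volume effect the extended cost model quantifies.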

%(relaxed router area and bandwidth constraints, judiciously select
%one network or coordinate networks for improved communication
%capability, for example, operate the network topologies
%independently, e.g., only one network is active for improved power
%or make them work together for improved performance), cost reduction
%(reduced hop counts), flexibility (multiple alternative connection
%points to the storage and compute layers) via multiple networks with
%interfacing points among them. Fast near-neighbor communication
%using a fine-grained interconnection network and fast global
%communication using a coarse-grained interconnection network. (add
%reconfigurability and recovery?)

 %(different technology for
%memory/core, various storage size under the same area constraint,
%evaluate cost, performance and power) (can also add verification
%cost, since interconnect layer is considered as a IP, the
%verification cost is lower than if we integrate the complicated
%interconnect with cores in 2D case)

%\end{itemize}

%The rest of the paper is organized as follows:
%Section~\ref{sec:cost-model} describes the cost model for 3D
%technology. Section~\ref{sec:interconnet} presents the design
%details of the interconnect layer. Section~\ref{sec:methodology}
%provides the methodology for evaluating the interconnect layer
%design in terms of performance, power, and cost.
%Section~\ref{sec:result} shows the evaluation results of this
%decoupling compared to 2D case and 3D stacking without decoupling.
%Section~\ref{sec:related} presents related prior work. Finally,
%Section~\ref{sec:conclusion} presents conclusions and outlines
%directions for future work.

%Plans for the paper:
%
%1. Modify the existing cost model: if it's possible to separate the
%cost for each layer, i.e., differentiate the cost for
%logic/memory/communication module. (By Xiangyu)
%
%2. Communication layer: evaluate different topologies: shared
%bus/crossbar, mesh/ring architecture, fully-connected/hierarchical.
%enable cache reuse when the cores are not working. (research on
%different topologies, area/performance comparison, provide this
%information for experiments: By Xiaoxia)
%
%3. Evaluation experiments: environment setup using Gems/Simics.
%performance/power evaluation using different workloads. (various
%experiments with different parameters, e.g., number of cores, size
%of the cache, the number of layers, communication latency, cache
%access latency from Cacti with different technologies) (By Guangyu)
%
%4. Things need to be discussed: selection of topologies, service
%function, experiment baseline (number of cores, communication,
%baseline is also 3D?), reasonable footprint, the workloads,
%comparison cases... (have some old results but need to be changed
%based on the update)
