\chapter{System-level Design Space Exploration for Three-Dimensional (3D)
SoCs}\label{chapter:esl}

Chapter~\ref{chapter:3d} presented a physical planning framework for 3D ICs
incorporating high-level synthesis, which iteratively improves the quality of
physical planning results and reduces the system cost of 3D ICs. Due to the
increasing transistor count and complexity of System-on-Chip (SoC) designs,
system-level synthesis via design space exploration plays a key role in
reducing the design effort and thus the time-to-market of products. 3D
integration provides additional architectural and technology-related design
options for future SoC designs, making early design space exploration even
more critical. This chapter proposes a system-level design partitioning and
hardware/software co-synthesis framework for 3D SoC integration. The proposed
methodology can be used to explore the enlarged design space and to identify
the optimal design choices under given design constraints, including form
factor, performance, power, and yield. As shown in Chapter~\ref{chapter:3d},
statistical high-level
synthesis can be used as a tuning knob during the design planning and help
achieve the best results by pruning those design options leading to higher
overall costs.

\section{Introduction}\label{sec:C7-intro}

\begin{figure}
\centering
  % Requires \usepackage{graphicx}
  \includegraphics[width=0.5\textwidth]{Chapter-7/Figures/challenge.pdf}\\
  \caption{Design space exploration in both technology and architecture options for 3D SoCs.}\label{fig:challenge}
\end{figure}

As we pack more and more transistors into a single chip, the pace of
productivity gains has not kept up with the increase in design
complexity. Consequently, there has been a recent trend of moving design
abstraction to a higher level, with an emphasis on \textbf{Electronic System
Level (ESL)} design methodologies. Electronic System Level is an established
approach built upon high-level abstracted languages such as C/C++, and is now
being used increasingly in System-on-Chip (SoC) design. From its genesis as an
algorithm modeling methodology with ``no links to implementation'', ESL is
evolving into a set of complementary methodologies that enable embedded system
design, verification, and debugging through to the hardware and software
implementation of custom SoC systems~\cite{Bailey2007}.

A common theme that runs through current thinking in EDA and
system-level design is that complex designs are best addressed at the
architectural level, very early in the design phase, rather than later in the
design cycle. Consequently, there has been intensive research on architectural
design space exploration for SoCs, with an emphasis on design partitioning and
hardware/software co-synthesis. In conventional system-level exploration,
designers consider trade-offs in the way hardware and software components of a
system work together to exhibit a specified behavior, given a set of
performance goals and a technology. In the scenario of 3D SoC integration, the
stacking strategies and 3D-related technology options will further complicate
the design space exploration, as shown in Fig.~\ref{fig:challenge}. It is
believed that if ESL is important for 2D designs, it will be critical for 3D
designs. A system-level design space exploration methodology that helps make
the decisions at the early stage of 3D SoC design is therefore of great
importance.

This chapter describes a methodology that explores the system-level design
space of 3D SoCs and identifies the design options leading to minimal
implementation cost under given design constraints.

\section{Related Work}
Recently we have seen a trend of moving design abstraction to a higher
level, with an emphasis on \emph{Electronic System Level (ESL)} design
methodologies. Araki et al.~\cite{Araki2010} proposed a model-based SoC design
flow using an ESL environment. Su et al.~\cite{Su2010} and Schafer et
al.~\cite{Schafer2010} presented case studies of ESL design methodologies on a
GSM edge algorithm and complex image processing systems, respectively.
Nevertheless, the majority of ESL research targets conventional 2D SoC
architectures.

Cost analysis for 3D ICs has been addressed in several prior works.
Mercier et al.~\cite{Mercier2006} first looked at the yield modeling of 3D IC
stacking with regard to stacking yield loss. Dong et al.~\cite{Dong2009}
proposed a system-level cost analysis and design exploration for 3D ICs,
estimating the implementation cost of different stacking options given the gate
count of a design. Chen et al.~\cite{Chen2010} extended the cost analysis of 3D
ICs by considering the testing cost of different design choices.


\section{Preliminaries and Motivational Example}\label{sec:C7-backgroud}

This section provides some preliminaries on 3D IC stacking and architectural
co-design, and presents a simple case discussion which motivates the work in
this chapter.

\subsection{Preliminaries on 3D IC Stacking}

\begin{figure}
\centering
  % Requires \usepackage{graphicx}
  \includegraphics[width=0.75\textwidth]{Chapter-7/Figures/3dexample.pdf}\\
  \caption{An illustration of 3D stacking technology.}\label{fig:3D}
\end{figure}

3D ICs can provide advanced system integration by stacking different dies into
a single chip.  The layers could be connected with wire bonding, TSV,
microbump, or even inductive/capacitive contact~\cite{Davis2005}. TSV-based 3D
technology, as shown in Fig.~\ref{fig:3D}, provides high-density
interconnection between the layers by creating vertical connections through
the silicon substrate, and is consequently the focus of the majority of
current research on 3D integration technologies.
\textit{Die-to-wafer (D2W)} and \textit{wafer-to-wafer (W2W)} are two different
ways to bond multiple dies in TSV-based 3D integration. W2W bonding stacks
whole wafers before the individual 3D chips are sliced and packaged, while D2W
bonding mounts the dies of different layers onto the base wafer sequentially.

\subsection{Preliminaries on Architectural Co-design}

\begin{figure}
\centering
  % Requires \usepackage{graphicx}
  \includegraphics[width=0.5\textwidth, angle=-90]{Chapter-7/Figures/ESL.pdf}\\
  \caption{Conventional ESL design flow for 2D chips.}\label{fig:ESL}
\end{figure}

Nowadays SoCs are implemented as mixed software-hardware systems. The software
components usually run on general processor cores, while hardware components
consist of accelerators, custom IPs, etc. Generally, software is used for
features and flexibility, while hardware is used for performance. While a given
functionality could be implemented on either hardware or software, the two
choices might have different impacts on performance, power or other metrics,
leading to different costs. Therefore, a unified architectural
hardware/software co-synthesis methodology is required to minimize the
implementation cost while satisfying all design constraints. Fig.~\ref{fig:ESL}
shows a typical architectural synthesis flow for 2D chips. The flow takes as
input a task graph and a component library including both hardware and software
models, determines the assignment of tasks to different components, and
evaluates the performance iteratively to identify the best design options.


\subsection{A Motivational Example}

\begin{figure}
\centering
  % Requires \usepackage{graphicx}
  \includegraphics[width=0.45\textwidth]{Chapter-7/Figures/example.pdf}\\
  \caption{A motivational example of design choice exploration for 3D SoCs.}\label{fig:C7-example}
\end{figure}

For a conventional 2D SoC design, the logic circuits and on-chip memory are
placed on one single layer, and the flexibility during the early design stage
is limited. With the introduction of 3D ICs, the design space is enlarged by
additional design choices: the number of layers, the stacking strategy, and the
interconnect bandwidth provided by the 3D stacking technology can all be
decided during the early design phase. Fig.~\ref{fig:C7-example} shows a
motivational example. This simplified SoC design uses a two-layer
logic-to-memory stacking, with a micro-processor in the lower layer and DRAM
memory in the upper layer. Designers can choose either a high-frequency memory
(528MHz DDR with a 32-bit interface) with a small number of TSVs, or a
low-frequency memory (66MHz SDR with a 4$\times$128-bit parallel interface)
with a larger number of TSVs. Even with an 8 times lower frequency, the second
implementation can still provide the same bandwidth as the first one (note
that DDR memory transfers data at double rate). The tradeoff between these two
implementations is the number of TSVs (100 vs. 1000) with the corresponding
chip area overhead, and the chip power consumption. These can be translated to
different implementation costs. This chapter presents optimized design choices
based on various designer preferences, using the proposed 3D integration
synthesis framework.
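The bandwidth equivalence claimed in this example can be verified with a small calculation; the sketch below uses only the numbers stated above.

```python
# Peak bandwidth = frequency x bus width x transfers per clock cycle.
def bandwidth_mbit_s(freq_mhz, bus_bits, transfers_per_cycle):
    return freq_mhz * bus_bits * transfers_per_cycle

# Option 1: 528 MHz DDR memory with a 32-bit interface (~100 TSVs).
# DDR transfers data on both clock edges, hence 2 transfers per cycle.
ddr = bandwidth_mbit_s(528, 32, 2)

# Option 2: 66 MHz SDR memory with a 4 x 128-bit parallel interface (~1000 TSVs).
sdr = bandwidth_mbit_s(66, 4 * 128, 1)

# Despite the 8x frequency gap, both options deliver the same peak bandwidth.
assert ddr == sdr == 33792  # Mbit/s
```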

\section{System-Level Synthesis Framework for 3D ICs}\label{sec:C7-framework}

This section introduces the design flow of the synthesis framework. The
resource allocation, task scheduling, and layer assignment methodologies are
presented later in this section.

\subsection{Architecture Synthesis Framework}

Fig.~\ref{fig:framework} shows the design flow of our proposed architecture
synthesis framework. Similar to 2D synthesis flow, the synthesis tool takes a
task graph and a component library as inputs, and outputs the optimal 3D
architecture suggestions. A task graph is a directed graph whose edges are
oriented from source task cells to destination task cells, indicating the
direction of data flow. A component library
contains corresponding hardware and software implementations for each cell.

The resource allocation step assigns a suitable component to each task based on
its functionality requirements. After resource allocation, the chip area, power
consumption, and basic resource cost can be evaluated based on the selected
components. According to the functionality and estimated area of the
components, 3D partitioning is deployed to determine the optimal partitioning
of the system components of the 3D IC. While different partitioning methods
lead to different system costs and interconnect delays, 3D floorplanning helps
decide the layer assignment of each component block and generates the chip
layout of each layer. Combining 3D partitioning and floorplanning in the
synthesis flow helps optimize the chip area and mitigate the heat dissipation
problem introduced by chip stacking. Accurate interconnect information obtained
from the partitioning/floorplanning stage is fed to the next stage for
fine-grained task scheduling. Finally, the generated synthesis results are
presented in sorted order, based on a cost function specified according to the
designer's preference.

\begin{figure}
\centering
  % Requires \usepackage{graphicx}
  \includegraphics[width=0.5\textwidth]{Chapter-7/Figures/framework.pdf}\\
  \caption{System-level exploration framework for 3D SoCs.}\label{fig:framework}
\end{figure}

\subsection{Resource Allocation}
Some cells in the task graph can be implemented in either hardware or software.
As the number of tasks implemented in software increases, the average cost is
reduced. However, in terms of performance, software implementations are usually
inferior to their hardware counterparts. On the other hand, hardware
implementations enlarge the chip area and increase power consumption. Moreover,
the heat dissipation problem in 3D ICs becomes more severe as more hardware
components are used. With a cost function that integrates all these aspects,
the proposed resource allocation algorithm balances the usage of hardware and
software and achieves minimal cost for the hardware/software co-synthesis.

In our work, a genetic algorithm is used as the optimization method for
resource allocation. The advantage of a genetic algorithm is that it can
provide several optimized solutions in a short time by removing high-cost
solutions and passing the remaining solutions on to the next
generation~\cite{Kenji1995}. In our genetic algorithm, a single component that
can implement the required functionality of one task cell is called a gene. A
DNA is a small architecture built from several genes according to design
principles. For example, if a software implementation of block encoding is
chosen as one gene, then memory and processors are needed as the necessary
hardware support. These genes (the software implementation, processors, and
memories) together integrate into a DNA. A set of DNAs connected according to
architecture design principles forms a complete architecture solution, which
is called a chromosome. All the implementations of the task cells constitute
the gene pool of the task graph. The goal of the genetic algorithm is to
construct, from the gene pool, possible architectures with minimal system cost.
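The gene/DNA/chromosome hierarchy can be pictured with a minimal data model. Class and field names here are illustrative, not taken from the actual tool:

```python
from dataclasses import dataclass, field

@dataclass
class Gene:
    # one concrete implementation choice for a single task cell
    task: str
    component: str
    is_software: bool

@dataclass
class DNA:
    # a gene plus the support components it requires
    # (e.g. a software gene needs a processor and memory)
    gene: Gene
    support: list

@dataclass
class Chromosome:
    # a complete architecture: a set of DNAs connected
    # according to architecture design principles
    dnas: list = field(default_factory=list)

    def cost(self, component_cost):
        # total base cost over all genes and their support components
        return sum(component_cost[d.gene.component]
                   + sum(component_cost[s] for s in d.support)
                   for d in self.dnas)
```

For instance, a software block-encoding gene together with a CPU and a memory would form one DNA in this model.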

At the beginning of the algorithm, a set of architectures of the required
population size is generated by a random allocation strategy.
Fig.~\ref{fig:allocation outline} shows the outline of the proposed random
resource allocation strategy. The allocation begins by searching along the task
edges from the source cell. For each task cell, the algorithm tries to assign
it to one of the existing allocated components. If none of the allocated
components is compatible with the current task in terms of both functionality
and timing, a new component is allocated by randomly picking a compatible
component from the component library. The corresponding timing information is
then extracted for use in later resource allocation and in task scheduling.
This allocation strategy can reduce task stalling time and lower system cost by
increasing each component's utilization.

\begin{figure}[htbp]
\centering
\footnotesize
\rule[-1mm]{\textwidth}{0.01in}
\vspace{-15pt}
\begin{codebox}
\Procname{$\proc{ResourceAllocation}(task\ graph)$}
\zi \Comment initialization
\li source cell start time = 0;
\zi \Comment main loop for resource allocation
\li \While (!end of task graph)
\li \Do search allocated component list;
\li     \If (comp function == cell function \&\& \
\zi         comp available time $<$ cell start time)
\li     \Then obtain comp pointer;
\li     \Else random select comp from comp lib; \
\zi         obtain comp pointer;
        \End
\li     insert cell to comp task list;
\li     task finish time = cell start time + comp delay;
\li     comp available time = task finish time;
\li     next cell start time = \
\zi         task finish time + connection delay;
    \End
\end{codebox}
\vspace{-5pt}
\rule[1mm]{\textwidth}{0.01in}
\caption{Outline of the resource allocation algorithm}\label{fig:allocation outline}
\end{figure}
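The greedy pass in Fig.~\ref{fig:allocation outline} corresponds roughly to the following Python sketch; the dictionary-based data structures and the \texttt{connection\_delay} parameter are simplifying assumptions, not the tool's actual interface.

```python
import random

class Component:
    def __init__(self, function, delay):
        self.function = function
        self.delay = delay
        self.available_time = 0
        self.tasks = []

def resource_allocation(task_graph, component_library, connection_delay=1):
    """Greedy random allocation: reuse an already-allocated component when it
    is functionally compatible and free in time, otherwise randomly draw a
    compatible component from the library."""
    allocated = []
    start_time = 0  # source cell start time
    for cell in task_graph:  # cells visited along the task edges
        comp = next((c for c in allocated
                     if c.function == cell["function"]
                     and c.available_time < start_time), None)
        if comp is None:
            choices = [c for c in component_library
                       if c["function"] == cell["function"]]
            template = random.choice(choices)
            comp = Component(template["function"], template["delay"])
            allocated.append(comp)
        comp.tasks.append(cell["name"])
        finish_time = start_time + comp.delay
        comp.available_time = finish_time
        start_time = finish_time + connection_delay
    return allocated
```

As in the pseudocode, reusing an idle component instead of allocating a new one is what raises component utilization and lowers the resource cost.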

The random allocation is performed in multiple runs to generate an initial
set of architectures. After this, the evolution process begins, as shown in
Fig.~\ref{fig:flowchart2}. All the generated architectural solutions are sorted
by the cost calculated from the cost function. Among all the architecture
solutions, the first third are treated as good chromosomes and reserved for the
next generation. The second third of the architectures are selected to
participate in evolution. The last third of the solutions are discarded, and
new architectures are generated to replace them. The architectures selected for
evolution then go through mutation task by task.

\begin{figure}
\centering
  % Requires \usepackage{graphicx}
  \includegraphics[width=0.5\textwidth]{Chapter-7/Figures/flowchart2.pdf}\\
  \caption{Genetic algorithm flow chart.}\label{fig:flowchart2}
\end{figure}

As chromosomes with high-quality genes and DNAs are kept during generation
evolution, the system cost of each population gradually falls into a
considerably small range. This leads to the convergence condition of the
algorithm. The cost standard deviation is computed after every evolution step,
and if the standard deviation falls below a user-set threshold, the convergence
condition is met and the evolution process terminates. In case the standard
deviation does not converge, the algorithm terminates after a certain number of
iterations.
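A condensed sketch of this evolution loop, assuming the thirds split and the standard-deviation stopping rule described above; the \texttt{mutate} and \texttt{fresh} callbacks stand in for the actual mutation and random-generation steps.

```python
import statistics

def evolve(population, cost_fn, mutate, fresh, threshold=0.01, max_iters=1000):
    """Keep the best third, mutate the middle third, replace the worst third
    with fresh random solutions; stop when the cost standard deviation falls
    below the threshold, or after max_iters as a fallback."""
    for _ in range(max_iters):
        population.sort(key=cost_fn)
        n = len(population) // 3
        population = (population[:n]                       # good chromosomes
                      + [mutate(s) for s in population[n:2 * n]]
                      + [fresh() for _ in population[2 * n:]])
        if statistics.pstdev(cost_fn(s) for s in population) < threshold:
            break  # convergence condition met
    return min(population, key=cost_fn)
```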

\subsection{Layer Assignment}

Layer assignment is closely related to the stacking strategy decided by 3D
partitioning. Designers can choose their preferred stacking strategy, or the
partitioning granularity of components, in this step. Issues and concerns such
as balancing I/O pins and choosing optimal TSV counts can also be tackled
during 3D partitioning~\cite{Sawicki2009}. After partitioning, 3D floorplanning
takes the layer assignment of each component as input and generates a virtual
layout of each layer. Various design issues, such as interconnect routing,
power/ground network design~\cite{Falkenstern2010}, and thermal
dissipation~\cite{Cong2004}, are taken into account in 3D floorplanning. The
partitioning and floorplanning step has a significant impact on the cost of 3D
ICs. For example, if designers choose a logic-to-memory stacking strategy, then
during resource allocation, once a software implementation is allocated, the
memory space allocated to this software resides in a different layer. The
memory operation delay, originally determined by the memory bandwidth, is now
constrained by the TSV bandwidth between layers. In contrast, if logic-to-logic
stacking is chosen, hardware implementations with higher memory bandwidth
requirements are placed closer to the on-chip memory in the same layer.

\subsection{Task Scheduling}

After the 3D partitioning and floorplanning of each component, the task
finishing time is validated through scheduling to check whether the task can
meet the timing constraint. Task cell execution time and communication delay
together determine the final task completion time. During resource allocation,
the execution time of each task cell is evaluated. The communication delay,
including bus delay, port delay, and TSV delay (if two cells are in different
layers), is computed during scheduling once all the routing and bandwidth
information is available. The number of TSVs used in a design affects the
bandwidth between layers, so the interconnect delay can potentially be reduced
by integrating more TSVs. An ASAP (as-soon-as-possible) scheduling strategy is
applied to obtain the total completion time of each task. If the synthesized
architecture can satisfy the timing constraint, the solution is marked as
feasible and accepted by the synthesis tool. The total latency of the task is
also a factor that influences the sorting of solutions.
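The ASAP strategy on the task graph can be sketched as follows; the delay maps and argument names are illustrative, since the real tool derives them from the floorplan and TSV counts.

```python
def asap_schedule(tasks, preds, exec_time, comm_delay):
    """ASAP scheduling on a DAG: every task starts as soon as all of its
    predecessors have finished and their data has arrived over the bus,
    ports, or TSVs.  `tasks` must be in topological order; `comm_delay`
    maps (pred, task) pairs to the interconnect delay between them."""
    finish = {}
    for t in tasks:
        start = max((finish[p] + comm_delay.get((p, t), 0)
                     for p in preds.get(t, [])), default=0)
        finish[t] = start + exec_time[t]
    return finish  # total completion time = max(finish.values())
```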

\subsection{Cost Function}

In architecture synthesis, the cost function is the key metric used to sift the
final architecture solutions. Different design requirements lead to different
cost functions, and it is impractical to find a universal cost function that
satisfies every aspect of consideration. Based on this rationale, we use a
flexible cost function formulation in which the cost terms and the weights for
each term are determined by the designer. Equation~\ref{eq:cost function} shows
the general form of the cost function.

\begin{equation}
%\scriptsize
	\label{eq:cost function}
	Cost = \left( \omega_1 X_1 + \omega_2 X_2 + \omega_3 X_3 + \cdots + \omega_n X_n \right) + Cost_{\mathrm{3D\ stacking}}
\end{equation}

The terms in the parentheses of Equation~\ref{eq:cost function} are decided by
the user. $\omega_n$ is the weight of each design factor, and $X_n$ represents
a factor that needs to be considered during design, such as area, fabrication
cost, power consumption, etc. Designers can choose to use either the value of a
factor or its reciprocal, $X_n \in \{f, 1/f\}$. The last term in this equation
stands for the system cost of 3D IC stacking. In our work, we mainly consider
the wafer-to-wafer bonding method for the 3D model and TSVs for inter-layer
communication. The cost of a chip built using wafer-to-wafer bonding is given
in Equation~\ref{eq:w2w bonding}~\cite{Dong2009}. The 3D cost is proportional
to the number of layers, and directly related to the cost of each die and the
cost of bonding between layers. As the number of layers increases, the total 3D
cost also increases. In addition to the number of layers, the yield of each
layer and the yield of 3D bonding also influence the final 3D stacking cost. If
area is one of the design concerns, the area overhead of using TSVs should be
taken into consideration. Equation~\ref{eq:3D area}~\cite{Dong2009} shows that
the TSV area overhead is related to the connection bandwidth between layers;
the bandwidth is determined once resource allocation and layer assignment are
completed. The goal of the synthesis tool is to minimize the total cost defined
by the user. By choosing suitable factors and adjusting the associated weights,
designers can easily create a cost function that best describes their design
requirements.

\begin{equation}
	\label{eq:w2w bonding}
	C_{w2w}=\frac{\sum_{i=1}^{N}C_{die_i}+\left ( N-1 \right )C_{bonding}}{\left ( \prod_{i=1}^{N}Y_{die_i} \right ) Y_{bonding}^{N-1}}
\end{equation}

\begin{equation}
	\label{eq:3D area}
	A_{3D} = A_{die} + N_{TSV/die}A_{TSV}
\end{equation}
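Equations~\ref{eq:w2w bonding} and~\ref{eq:3D area} translate directly into a small cost model; the die costs and areas used in the example call below are illustrative, not taken from a real process.

```python
from math import prod

def w2w_cost(die_costs, die_yields, bonding_cost, bonding_yield):
    """C_w2w: total die and bonding cost divided by the compound yield of
    all N dies and the N-1 bonding steps (wafer-to-wafer bonding)."""
    n = len(die_costs)
    compound_yield = prod(die_yields) * bonding_yield ** (n - 1)
    return (sum(die_costs) + (n - 1) * bonding_cost) / compound_yield

def area_3d(die_area, n_tsv_per_die, tsv_area):
    """A_3D: die area plus the area overhead of the TSVs on that die."""
    return die_area + n_tsv_per_die * tsv_area

# Example: a 2-layer stack with 0.95 die/bonding yields and bonding cost 5,
# as assumed in the case study of the next section.
stack_cost = w2w_cost([10.0, 10.0], [0.95, 0.95], 5.0, 0.95)
```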

\section{Analysis and Case Study of 3D ESL\\ Exploration}\label{sec:C7-analysis}

Based on the synthesis flow presented above, we take a GSM edge algorithm for
base stations as the target application and analyze the performance of our
proposed 3D ESL tool. The target application contains complex data processing
steps, including data ciphering and deciphering, data encoding and decoding,
data formatting, etc. Most of the task cells can be implemented in either
software or dedicated hardware. Table~\ref{tab:spec} lists the specification of
this case study, including the component library size, the component cost
assumptions, and the 3D stacking cost assumptions. The component library is
classified by component functionality into 23 component classes. Among these
classes, excluding CPU and memory, 6 functions can only be implemented in
software and 4 only in hardware. The basic cost of each component is calculated
from its silicon area and divided into four levels. The 3D system cost using
wafer-to-wafer bonding is given under the assumption that the die yield and
bonding yield are both 0.95 and the bonding cost is 5.

\begin{table}
\footnotesize
\centering
\caption{Case study specification}
	\label{tab:spec}
\vspace{5pt}
	\begin{tabular}{|c|c|c|}
	\hline
	Library size & \multicolumn{2}{|c|}{23} \\ \hline
	Task size & \multicolumn{2}{|c|}{43} \\ \hline
	\multirow{5}{*}{Base Cost} & small & 0.4 \\
	& medium & 0.8 \\
	& large & 1.67 \\
	& CPU & 32.29 \\
	& Memory & 32.29 \\ \hline
	\multirow{3}{*}{3D Cost} & die yield & 0.95 \\
	& bonding yield & 0.95 \\
	& bonding cost & 5 \\ \hline
	\end{tabular}
\end{table}

In total, 43 task cells need to be allocated for this application. The solution
suggested by our 3D ESL tool allocates 32 components, including one on-chip
memory component and one DSP. A two-layer logic-to-memory stacking architecture
with a single bus is used in this application. All the hardware implementations
are placed in the bottom layer, while the on-chip memory is placed in the
second layer. Since the target application is memory-intensive, memory
bandwidth is the bottleneck, and logic-to-memory stacking can increase the
available memory bandwidth through TSV connections. In this application, the
system cost (including both the 3D integration cost and the component cost),
the application finish time, and the chip area are taken into consideration.
The weight for system cost is 0.5; chip area and application delay are equally
weighted. Since the system cost is the main factor, the optimal design is built
with 12 components implemented in hardware and 20 in software. We compared the
results with a 2D implementation, a 3D implementation with 100 TSVs, and a 3D
implementation with 1000 TSVs. Table~\ref{tab:results} shows the three
suggested architecture designs for each case. The final system cost, estimated
chip area, and completion time of each architecture are listed in Columns
2--4, respectively.

\begin{table}
\centering\footnotesize
 \caption{Architecture synthesis results}
\vspace{5pt}
	\label{tab:results}
	\begin{tabular}{|c|p{14mm}|p{14mm}|p{16mm}|p{16mm}|}
	\hline
	\textbf{Case} & \textbf{System Cost} & \textbf{Chip Area} & \textbf{Finish Time} & \textbf{HW/SW ratio} \\ \hline\hline
	\multirow{3}{*}{2D} & 60.30 & 60266 & 4201733 & 12/20 \\
	& 62.59 & 67266 & 4299533 & 15/17 \\
	& 62.77 & 68766 & 4201733 & 14/18 \\ \hline
	\multirow{3}{*}{3D with 100 TSVs} & 61.46 & 65000 & 4195733 & 12/20 \\
	& 63.75 & 72000 & 4291533 & 15/17 \\
	& 63.94 & 73500 & 4195733 & 14/18 \\ \hline
	\multirow{3}{*}{3D with 1000 TSVs} & 72.71 & 110000 & 4193633 & 12/20 \\
	& 74.99 & 117000 & 4288733 & 15/17 \\
	& 75.18 & 118500 & 4193633 & 14/18 \\ \hline
	\end{tabular}
\end{table}

Since the on-chip memory from the given component library is relatively small,
building 3D chips does not yield much area benefit; the chip area even
increases when the number of TSVs is large. It can be seen that the three cases
produce almost the same suggestions when more weight is put on the resource
cost. As shown in Table~\ref{tab:results}, compared with the 2D implementation,
the 3D implementations achieve better performance at the cost of a modest
increase in chip area. The performance is further improved with a larger number
of TSVs, but the area increases as well. These results still stand when we
consider chip area as the main design metric, since the chip area is dominated
by the hardware implementations. If we consider the application finish time as
the main design metric, the optimal design is the 3D implementation with 1000
TSVs; its hardware/software ratio is 19/13, with a system cost of 98.06.
However, if 100 TSVs are used instead, the cost decreases by 11.46\% with only
a very small increase in delay.

\section{Summary}
With the adoption of 3D IC technology, designers have more choices in the
design space, which makes it harder to reach optimal design options by human
effort alone. For design space exploration, a system-level 3D SoC design and
hardware/software co-synthesis framework is proposed in this chapter. The
proposed framework aims at providing designers with optimized architectures in
a short time under user-defined design goals. We demonstrated that our approach
can generate optimal design choices by combining 3D partitioning and
floorplanning with task allocation and scheduling in system-level synthesis. An
analysis of the influence of 3D integration on the synthesized results was
presented, and a real-world case study using the proposed framework shows the
effectiveness of our proposed methodology.
