%SourceDoc ../YourName-Dissertation.tex
\chapter{Introduction to Statistical High-Level Synthesis} \label{chapter:intro}

%\section{Introduction} \label{sec:introduction}

Technology scaling improves chip performance and increases integration density.
For example, the latest Intel Xeon processor is built upon the most advanced
45nm technology with 2.3 billion transistors on a single chip, while the 32nm
technology is expected to be used in volume production at the end of 2009.
Integrating billions of transistors on a single chip with nano-scale
transistors has resulted in two major challenges for chip designers.
\begin{enumerate}
\item \textit{The increasing gap between design complexity and design
    productivity.} Consequently, we have seen a recent trend of moving
    design abstraction to a higher level, with an emphasis on \textbf{ESL
    (Electronic System Level)} design methodologies. ESL design is largely
    enabled by high-level synthesis, the process of translating a behavioral
    description into a register transfer level (RTL) description. For
    example, the Mentor Graphics high-level synthesis tool
    \textit{Catapult-C}~\cite{HLS:Wolfgang09} has been adopted by many
    companies in their design flows. Other high-level synthesis tools, such
    as \emph{C-to-silicon} from Cadence and \emph{Bluespec}, have also
    gained considerable attention~\cite{HLS:Wolfgang09}.

\item \textit{Process variability resulting in significant performance and
    power variations.} The challenges in fabricating transistors with very
    small feature size have resulted in significant variations in
    transistor parameters (such as transistor channel length, gate-oxide
    thickness, and threshold voltage) across identically designed
    neighboring transistors (this variation is called {\it within-die
    variation}) and across different identically designed chips (this
    variation is called {\it inter-die variation}). These manufacturing
    variations can cause significant performance and power variations in
    chip design. For example, Intel has shown that a 30\% variation in chip
    frequency and a 20$\times$ variation in chip leakage were observed
    across 1000 sample chips fabricated in 180nm
    technology~\cite{HLS:Intel08}. In the latest 45nm technology, the
    relative process variations are reported to be even
    worse~\cite{HLS:Intel08}. Consequently, dealing with variability has
    become one of the major design focuses for nano-scale VLSI design.
\end{enumerate}

Traditionally, performance/power variations are handled by a combination of
\textit{speed/power binning} and \textit{design
margining}~\cite{PV:michigan-book}. Speed/power binning tests all fabricated
chips; those with slower speed or excessive power are either discarded or sold
at a reduced price. Design margining uses worst-case process corners to
guarantee the design requirements. However, these solutions are becoming
insufficient as variability increases with technology scaling, and may no
longer be viable when the variability encountered in new process technologies
becomes very significant. Moreover, cost sensitivity makes designing for the
worst-case manufactured hardware unacceptable. \emph{As a result, a shift in
the design paradigm, from today's deterministic design to statistical or
probabilistic design, is critical for deep sub-micron design.} Industry and
academia have already recognized the need for such a shift, and there has been
substantial research on statistical variation-aware design
methodologies~\cite{PV:michigan-book}.

The majority of the existing analysis and optimization techniques related to
process variations are at the lower level (device or logic gate
level)~\cite{PV:SPR04,PV:ABZ03b,PV:RVW04}. In the domain of high-level
synthesis, process-variation-aware research is still in its
infancy~\cite{xie:iccad06}.
It is important to raise process variation awareness to a higher level,
because the benefits from higher-level optimization often far exceed those
obtained through lower-level optimization. Furthermore, higher-level
statistical analysis enables early design decisions to take lower-level
process variations into account, avoiding late surprises and possibly
expensive design iterations. Statistical high-level synthesis opens a novel
research direction: investigating the impact of process variations at early
design stages and moving ESL design methodologies from deterministic to
probabilistic design. It complements the existing research on statistical
analysis and optimization at lower design levels (device level or logic
level), and brings process variation awareness to the Electronic System Level
design paradigm.

\section{The Influence of Process Variations on HLS}

In high-level synthesis, the design specification is usually written as a
behavioral description that is translated into an internal representation such
as parse trees or control-data flow graphs (CDFG), which are then mapped to the
functional units selected from the resource library to optimize design goals
(such as performance, area, and power). The high-level synthesis process
usually consists of module selection, scheduling, resource binding, and clock
selection~\cite{hls-newbook}.

\begin{figure}[htbp]
\centering
\includegraphics[width=.7\textwidth]{Chapter-1/Figures/adder-delay-variation.pdf}\\
\caption{The delay variation (normalized sigma/mean) for 16-bit
adders in IBM Cu-08(90nm) technology (Courtesy of K. Bernstein,
IBM).} \label{fig:IBM_fig}
\end{figure}

Traditionally, worst-case delay/power parameters for the resources are used to
facilitate design space exploration in high-level synthesis. However, this
approach is becoming inappropriate as larger variability is encountered in new
process technologies. For example, Fig.~\ref{fig:IBM_fig} shows the delay
variations (depicted as normalized sigma/mean) for 11 different types of
16-bit adders that span a range of circuit architectures and logic evaluation
styles. Under the influence of process variation, the existing deterministic
worst-case design methodologies in high-level synthesis can overestimate the
resources needed to meet the performance goal, result in unexpected
performance discrepancies or pessimistic performance/power estimations, or end
up using excess resources to guarantee design constraints. As shown in
Fig.~\ref{fig:PDF_fig}, due to resource sharing, a multiplexer and an adder
are connected in cascade. Assume that the delays of the adder ($D_{add}$) and
the multiplexer ($D_{mux}$) follow independent Gaussian distributions
$N(\mu, \sigma)$, where $\mu$ and $\sigma$ denote the mean and standard
deviation of the delay distribution, respectively. The parameters are as shown
in the figure. In conventional worst-case analysis, the worst-case execution
time (WCET) of a component is calculated as $(\mu + 3\sigma)$, so the
execution time for the path is
$(\mu_{Dmux}+3\sigma_{Dmux}+\mu_{Dadd}+3\sigma_{Dadd})$, which is $91ps$.
However, based on the statistical information, the delay of the path follows a
new Gaussian distribution
$N(\mu_{Dadd}+\mu_{Dmux}, \sqrt{\sigma^2_{Dadd}+\sigma^2_{Dmux}})$, and
the $3\sigma$ delay of the path is $85ps$. Compared with the $91ps$ obtained
using WCET analysis, the statistical approach yields a tighter estimate of the
circuit performance.
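The arithmetic can be reproduced with a few lines of code. The delay
parameters below are hypothetical (the figure's actual values are not restated
here) and are chosen so that the worst-case path delay comes out to the quoted
$91ps$:

```python
import math

# Hypothetical delay distributions (ps), NOT the figure's actual numbers;
# they are chosen so the worst-case path delay matches the quoted 91 ps.
mu_mux, sigma_mux = 20.51, 3.414
mu_add, sigma_add = 50.00, 3.414

# Worst-case (corner) analysis: add the (mu + 3*sigma) corners of each stage.
wcet = (mu_mux + 3 * sigma_mux) + (mu_add + 3 * sigma_add)

# Statistical analysis: the sum of independent Gaussians is Gaussian,
# with means adding and variances adding.
mu_path = mu_mux + mu_add
sigma_path = math.sqrt(sigma_mux**2 + sigma_add**2)
stat_3sigma = mu_path + 3 * sigma_path

print(f"worst-case: {wcet:.1f} ps, statistical 3-sigma: {stat_3sigma:.1f} ps")
```

The statistical estimate is always tighter, because $3(\sigma_1+\sigma_2)$
strictly exceeds $3\sqrt{\sigma_1^2+\sigma_2^2}$ whenever both deviations are
nonzero.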

\begin{figure}[htbp]
\centering
\includegraphics[width=.7\textwidth]{Chapter-1/Figures/CDF.pdf}\\
\caption{The comparison of worst-case execution time (WCET) based and statistical
analysis based approaches.} \label{fig:PDF_fig}
%\vspace{-5pt}
\end{figure}


\begin{figure}[htbp]
\centering
\includegraphics[width=.65\textwidth]{Chapter-1/Figures/CCPDF.pdf}\\
\caption{An example illustrating the effectiveness of the performance
yield metric.} \label{fig:Yield_fig}
%\vspace{-5pt}
\end{figure}


\begin{figure}[htbp]
\centering
  % Requires \usepackage{graphicx}
  \includegraphics[width=0.7\textwidth]{Chapter-1/Figures/newyield.pdf}
  \caption{An example of how scheduling, resource sharing, and clock selection affect performance yield.}\label{fig:newyield}
\end{figure}

To bring the process-variation awareness to the high-level synthesis flow, the
concept of \textit{parametric yield} is proposed~\cite{HLS:Wang08}. The
\textit{performance yield} is defined as the probability of the synthesis
results meeting the clock cycle time constraints under the latency constraints
and resource constraints. The \textit{power yield} is defined as the
probability that the total power of the synthesis result is less than the power
limit under latency and resource constraints.

Fig.~\ref{fig:Yield_fig} demonstrates the effectiveness of the performance
yield metric even for a simple example. Assume that we have two synthesized
results with critical path delay distributions $D1(t)$ and $D2(t)$,
respectively. When the clock cycle time is set to $T1$, the synthesis result
with $D1(t)$ is better than that with $D2(t)$ in terms of performance yield.
However, when the clock cycle time is set to $T2$, the synthesis result with
$D2(t)$ is better than that with $D1(t)$. In contrast, if we use worst-case
delay to evaluate the results, we always choose the synthesis result with
$D1(t)$.
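The flip in ranking is easy to verify numerically. The sketch below uses
hypothetical Gaussian delay parameters and clock cycle times (not the figure's
actual curves), chosen so that worst-case analysis always prefers the first
design, yet the yield ordering depends on the clock cycle time:

```python
from math import erf, sqrt

def yield_at(T, mu, sigma):
    """Performance yield = P(delay <= T) for a Gaussian delay."""
    return 0.5 * (1 + erf((T - mu) / (sigma * sqrt(2))))

# Hypothetical delay distributions (ns), not the figure's actual curves.
mu1, s1 = 10.0, 0.5   # D1: larger mean, tighter spread
mu2, s2 = 9.0, 1.2    # D2: smaller mean, wider spread

# Worst-case (mu + 3*sigma) comparison always favors D1 here:
assert mu1 + 3 * s1 < mu2 + 3 * s2

T1, T2 = 11.5, 9.8    # hypothetical clock cycle times
print(yield_at(T1, mu1, s1), yield_at(T1, mu2, s2))  # D1 wins at T1
print(yield_at(T2, mu1, s1), yield_at(T2, mu2, s2))  # D2 wins at T2
```

A single worst-case number hides the spread of the distribution, so it cannot
capture this clock-dependent ranking.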

The parametric yield of the HLS resultant hardware depends on all steps of
high-level synthesis: scheduling, module selection, resource sharing, and clock
selection. For example, in Fig.~\ref{fig:newyield}, the same DFG can be either
scheduled into 4 clock cycles (CC1-CC4 in (a)) with clock cycle time $T_{S}$,
or scheduled into 2 clock cycles (CC1-CC2 in (b)) with a longer clock cycle
time $T_{L}$. The computation time is $4\times T_{S}$ and $2\times T_{L}$,
respectively. In Fig.~\ref{fig:newyield}(a), the synthesized underlying
architecture would be one multiplier and one adder, while in
Fig.~\ref{fig:newyield}(b), the synthesized underlying architecture would be
two adders and one multiplier, with \emph{adder2} connected in series after
\emph{mult} and \emph{adder1} (for illustration purposes, the possible
multiplexers and registers are omitted in this example). Since the delay
distributions of the adder and the multiplier are independent of each other,
the performance yield of Fig.~\ref{fig:newyield}(a) is calculated as
Equation~(\ref{eqn:hls-yield-a}):

\begin{equation} \label{eqn:hls-yield-a}
  Y_a=\int_{0}^{T_{S}}D_{adder}(t)dt\times\int_{0}^{T_{S}}D_{mult}(t)dt
\end{equation}

where $D_{adder}$ and $D_{mult}$ are the probability density functions (PDFs)
of the adder and the multiplier, respectively. The PDFs of the function units
are affected by the module selection step in HLS. In
Fig.~\ref{fig:newyield}(b), since both $add$ operations are scheduled in the
same clock cycle, resource sharing with a single adder is not possible, and
two adders are needed. In addition, since both \emph{mult} and \emph{adder1}
feed \emph{adder2}, \emph{adder2} cannot start execution until the outputs of
both \emph{mult} and \emph{adder1} are available. The total delay in this
clock cycle is therefore the \emph{max} of the delays of \emph{mult} and
\emph{adder1}, plus the delay of \emph{adder2}. Accordingly, a $max$ operation
is applied to the delay distributions of \emph{mult} and \emph{adder1} to
obtain the \emph{maximum-delay} distribution of these two operations.
Consequently, the overall performance yield is calculated as
Equation~(\ref{eqn:hls-yield-b}):

\begin{equation} \label{eqn:hls-yield-b}
  Y_b=\int_{0}^{T_{L}}(D_{adder2}(t)\times\int_{0}^{T_{L}-t}D_{max}(s)ds)dt
\end{equation}
where $D_{max}$ is the \emph{maximum-delay} distribution of \emph{mult}
and \emph{adder1}. The equation shows that when \emph{adder2} needs time
$t$ (which varies from $0$ to $T_L$) to finish execution, \emph{adder1}
and \emph{mult} may take at most $T_{L}-t$ to execute in order not to
violate the timing of that cycle.
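The two yield expressions can be checked numerically. The sketch below
evaluates $Y_a$ in closed form and estimates $Y_b$ by Monte Carlo, using
hypothetical Gaussian delay parameters (the $T_S$, $T_L$, and unit delays are
illustrative assumptions, not values from the figure):

```python
import random
from math import erf, sqrt

def cdf(T, mu, sigma):
    """P(delay <= T) for a Gaussian delay distribution."""
    return 0.5 * (1 + erf((T - mu) / (sigma * sqrt(2))))

# Hypothetical Gaussian delay parameters (ns), for illustration only.
mu_add, s_add = 2.0, 0.2
mu_mul, s_mul = 3.5, 0.4
T_S, T_L = 4.0, 6.5

# Schedule (a): the adder and the multiplier must each fit within one short
# cycle; their delays are independent, so the yields multiply (Eq. Y_a).
Y_a = cdf(T_S, mu_add, s_add) * cdf(T_S, mu_mul, s_mul)

# Schedule (b): max(mult, adder1) + adder2 must fit within one long cycle
# (Eq. Y_b). Estimate by Monte Carlo over the three independent delays.
random.seed(0)
N = 200_000
hits = 0
for _ in range(N):
    d_mul = random.gauss(mu_mul, s_mul)
    d_a1 = random.gauss(mu_add, s_add)
    d_a2 = random.gauss(mu_add, s_add)
    if max(d_mul, d_a1) + d_a2 <= T_L:
        hits += 1
Y_b = hits / N

print(f"Y_a = {Y_a:.4f}, Y_b = {Y_b:.4f}")
```

With these particular numbers the longer-cycle schedule (b) happens to have
the higher yield; different parameters can reverse the outcome, which is
precisely why the yields must be evaluated per candidate solution.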

The examples in Fig.~\ref{fig:Yield_fig} and Fig.~\ref{fig:newyield}
illustrate that the parametric yield of the hardware resulting from HLS
depends on all HLS steps (scheduling, module selection, resource sharing, and
clock selection), which usually interact tightly with each other during
high-level synthesis and jointly influence the final parametric yield of the
design.

\section{Key Issues in Statistical High-level Synthesis}

 \subsection{Library Characterization and Statistical Analysis}

 In order to facilitate the design space exploration while considering process variations,
the resource library of functional units for HLS has to be characterized for
delay/power variations.  Under the influence of process variations, the delay
and power of each component are no longer fixed values, but represented by a
probability density function (PDF). Consequently, the characterization of
function units with delay and power variations requires statistical analysis
methodologies.

\begin{itemize}

\item \textbf{Delay Characterization.} Gate-level statistical timing
    analysis tools, such as Synopsys PrimeTime \emph{VX} or IBM
    \emph{Einstimer/Einstat}, can be used to characterize the delay
    variations. These variation-aware timing tools increase the accuracy of
    timing analysis by considering the statistical distribution of device
    parameters (such as channel length and gate-oxide thickness). For
    example, using PrimeTime \emph{VX}, one can characterize the delay PDF
    of function units with the following steps.
\begin{enumerate}
\item All the gates in a standard cell library (such as NAND gates or
    NOR gates) for a specific technology node (such as 45nm) are
    characterized using the gate-level characterization tool \emph{NCX}
    from Synopsys.
\item The function units used in the HLS resource library are then
    synthesized to a gate-level netlist with the standard cell library.
\item Statistical timing analysis for the function units is performed
    using PrimeTime \emph{VX}, and the parameters of the delay
    distributions are reported.
\end{enumerate}

\item \textbf{Power Characterization.} Statistical power characterization
    for function units in the resource library can be done using Monte Carlo
    analysis in SPICE. The power consumption of function units consists of
    dynamic and leakage power. While dynamic power is relatively immune to
    process variation, leakage power is affected greatly and becomes
    dominant as technology continues to scale down. Consequently, the
    leakage power of each gate in the standard cell library is characterized
    statistically via Monte Carlo analysis. Similar to delay
    characterization, after each component in the HLS resource library is
    mapped to the standard cell library as a gate-level netlist, its power
    PDF can be estimated via gate-level power characterization.

\end{itemize}
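The leakage portion of the power characterization can be sketched as a small
Monte Carlo experiment. The exponential leakage model and all numbers below
are illustrative assumptions, not a calibrated SPICE deck:

```python
import math
import random
import statistics

# Illustrative subthreshold leakage model: gate leakage depends
# exponentially on the threshold-voltage deviation dVth.
I0 = 10.0          # nominal gate leakage, nA (hypothetical)
n_vt = 0.04        # slope factor * thermal voltage, V (hypothetical)
sigma_vth = 0.02   # std-dev of threshold-voltage variation, V (hypothetical)

random.seed(1)
samples = []
for _ in range(50_000):
    d_vth = random.gauss(0.0, sigma_vth)   # sample the process variation
    samples.append(I0 * math.exp(-d_vth / n_vt))

mean = statistics.fmean(samples)
std = statistics.stdev(samples)
print(f"leakage mean = {mean:.2f} nA, sigma = {std:.2f} nA")
# Note the mean exceeds the nominal 10 nA: exponentiating a Gaussian
# yields a right-skewed lognormal distribution.
```

The same sampling loop, driven by a circuit simulator instead of a closed-form
model, is what the SPICE-based Monte Carlo characterization performs for each
gate.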

\subsection{Statistical Timing and Power Analysis for HLS}
\label{sec:sstaDFG}

Similar to gate-level statistical timing analysis~\cite{PV:michigan-book},
one can use a first-order canonical model, rather than a plain PDF, to model
the delay of a function unit in the resource library. In this model, the delay
of a component is expressed as
\begin{equation} \label{eqn:delaymodel}
D_{m}=d_0+\sum_{i=1}^{n}d_i X_i+d_{n+1}X_m
\end{equation}
where $d_0$ is the nominal delay of a component in the resource library, and
$X_i$ and $X_m$ are independent normally distributed random variables that
model the variations in process parameters. The $X_i$ terms capture the
correlated components of these variation parameters, such as channel length,
gate-oxide thickness, and metal-line width, while $X_m$ is the purely random
component.

In statistical timing analysis, the $max$ and $sum$
operations~\cite{PV:michigan-book} are used to propagate the delay
distributions through the synthesized results of HLS. While the $sum$ of two
canonical forms can be computed by accumulating the corresponding normal
distributions, the $max$ can be computed using the tightness probability and
moment matching~\cite{PV:Cla61}. The results of both operations are maintained
in the canonical form, so the delay of the circuit is also expressed in the
canonical form of Equation~(\ref{eqn:delaymodel}). For instance, in
Fig.~\ref{fig:newyield}(b), the $max$ operation is performed over the delays
of the first adder and the multiplier to obtain the delay distribution at the
input of the second adder; the $sum$ operation is then performed over this
first-stage delay and the delay of the second adder to obtain the path delay
for the entire clock step. The results of the $max$ and $sum$ operations are
also expressed in the linear canonical form~\cite{HLS:Wang08}.
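A minimal sketch of these two operations is given below, following Clark's
moment-matching approximation. The \texttt{Canonical} class, the sensitivity
values, and the way leftover variance is folded back into the purely random
term are illustrative assumptions, not the exact formulation
of~\cite{PV:Cla61}:

```python
from math import erf, exp, pi, sqrt

def phi(x):   # standard normal PDF
    return exp(-x * x / 2) / sqrt(2 * pi)

def Phi(x):   # standard normal CDF
    return 0.5 * (1 + erf(x / sqrt(2)))

class Canonical:
    """First-order canonical delay: d0 + sum(d_i * X_i) + dr * X_r."""
    def __init__(self, d0, sens, dr):
        self.d0, self.sens, self.dr = d0, list(sens), dr
    def var(self):
        return sum(d * d for d in self.sens) + self.dr * self.dr

def ssta_sum(a, b):
    # Sum of canonical forms: means and correlated sensitivities add;
    # the independent random parts add in root-sum-of-squares fashion.
    sens = [x + y for x, y in zip(a.sens, b.sens)]
    return Canonical(a.d0 + b.d0, sens, sqrt(a.dr**2 + b.dr**2))

def ssta_max(a, b):
    # Clark's moment-matching max, kept in canonical form.
    cov = sum(x * y for x, y in zip(a.sens, b.sens))  # via shared X_i only
    theta = sqrt(max(a.var() + b.var() - 2 * cov, 1e-12))
    x = (a.d0 - b.d0) / theta
    t = Phi(x)                      # tightness probability P(A > B)
    mean = a.d0 * t + b.d0 * (1 - t) + theta * phi(x)
    second = ((a.d0**2 + a.var()) * t + (b.d0**2 + b.var()) * (1 - t)
              + (a.d0 + b.d0) * theta * phi(x))
    var = max(second - mean * mean, 0.0)
    # Interpolate the correlated sensitivities by tightness; put the
    # remaining variance into the purely random term.
    sens = [t * u + (1 - t) * v for u, v in zip(a.sens, b.sens)]
    rvar = var - sum(d * d for d in sens)
    return Canonical(mean, sens, sqrt(max(rvar, 0.0)))

# Clock step of Fig. (b): max(mult, adder1) followed by adder2.
mult   = Canonical(3.5, [0.3], 0.2)   # hypothetical sensitivities (ns)
adder1 = Canonical(2.0, [0.1], 0.1)
adder2 = Canonical(2.0, [0.1], 0.1)
step = ssta_sum(ssta_max(mult, adder1), adder2)
print(f"mean = {step.d0:.3f}, sigma = {sqrt(step.var()):.3f}")
```

Because both operations return a \texttt{Canonical}, they compose freely, so
the delay of an arbitrary path through the scheduled DFG stays in canonical
form.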

Similar to gate-level statistical power analysis~\cite{PV:michigan-book},
the statistical leakage power of a function unit can be expressed as
\begin{equation}  \label{eqn:leakagegate}
P_{m}=\exp(a_0+\sum_{i=1}^{n}a_i X_i+ a_{n+1}X_m)
\end{equation}
where $\exp(a_0)$ is the nominal leakage power of the function unit.
Similarly, $X_i$ and $X_m$ are independent normally distributed random
variables that model the variations in process parameters.
The total power of a circuit is computed as the sum of the power of all
components in the circuit; the result of the summation can be expressed in the
same form as Equation~(\ref{eqn:leakagegate})~\cite{PV:michigan-book}. The
total power of the synthesized result in HLS is therefore computed by
iteratively adding the leakage power of the function units.
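The iterative summation can be sketched with Wilkinson's moment matching for
sums of lognormals. This is a simplified illustration: the coefficients are
hypothetical, and for brevity the shared correlated terms $X_i$ are ignored
(the units are treated as independent), unlike the full canonical form above:

```python
import math

def lognormal_moments(mu, sigma):
    """Mean and variance of exp(N(mu, sigma^2))."""
    m = math.exp(mu + sigma**2 / 2)
    v = (math.exp(sigma**2) - 1) * math.exp(2 * mu + sigma**2)
    return m, v

def wilkinson_sum(units):
    """Approximate a sum of independent lognormals by one lognormal
    with matched first and second moments (Wilkinson's method)."""
    total_m = sum(lognormal_moments(mu, s)[0] for mu, s in units)
    total_v = sum(lognormal_moments(mu, s)[1] for mu, s in units)
    # Invert the moment formulas to recover (mu, sigma) of the result.
    sigma2 = math.log(1 + total_v / total_m**2)
    mu = math.log(total_m) - sigma2 / 2
    return mu, math.sqrt(sigma2)

# (mu, sigma) of log-leakage for three hypothetical function units.
units = [(1.0, 0.3), (1.2, 0.25), (0.8, 0.4)]
mu_tot, sigma_tot = wilkinson_sum(units)
mean_tot, _ = lognormal_moments(mu_tot, sigma_tot)
print(f"total leakage ~ lognormal(mu={mu_tot:.3f}, sigma={sigma_tot:.3f}),"
      f" mean={mean_tot:.2f}")
```

By construction the matched lognormal preserves the exact mean and variance of
the total leakage, which is what keeps the result in the same form as
Equation~(\ref{eqn:leakagegate}).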

 \subsection{Existing Work on Variation-aware High-level\\ Synthesis}

With the characterized variation-aware resource library and the statistical
analysis methods, the yield-driven HLS algorithms are able to perform design
space exploration statistically, and search for solutions to improve
performance and/or power yield.

One of the early attempts at variation-aware HLS~\cite{xie:iccad06} introduces
the performance yield concept into HLS. The HLS framework is based on a
simulated annealing engine, and its statistical timing analysis is based on a
discrete delay distribution for each component in the resource library. A
variation-aware clock cycle time selection algorithm is proposed to improve
the utilization of clock slack and reduce the sensitivity to timing
variations. The performance yield is integrated into the cost function to
guide the synthesis, while the other steps (such as scheduling and resource
binding/sharing) remain conventional. The work demonstrates that integrating
the performance yield concept can satisfy the yield requirement with an
average resource reduction of 14\%, compared to conventional HLS approaches.

Jung and Kim~\cite{HLS:Jung07} use a similar statistical discrete delay
distribution analysis method, and propose a heuristic algorithm for
variation-aware scheduling and resource binding. Their heuristic focuses on
improving performance yield under a latency constraint by iteratively
searching the DFG for \emph{yield-equivalent} operation patterns and binding
these patterns to the same combination of resource units. Within these
\emph{yield-equivalent} patterns, the work implicitly exploits
\emph{time borrowing} via operation chaining, improving performance yield by
sharing timing slack between faster and slower operations. Moreover, enhanced
resource sharing reduces module usage and consequently improves the
performance yield, given that the overall performance yield is approximately
the product of the yields of all module instances used in the design.

The SA-based HLS framework in~\cite{xie:iccad06} is extended to make all the
HLS steps variation-aware~\cite{HLS:Wang08}, with the integration of
statistical power analysis. This framework optimizes both performance yield
and power yield during the HLS design iteration. It takes a DFG, constraints
(latency, resource, clock cycle time, and power constraints), and a resource
library as inputs, and generates a synthesized DFG that is power optimized
while satisfying the performance constraints. Since the subtasks of high-level
synthesis are strongly interdependent and jointly affect the parametric yield
(as shown in Fig.~\ref{fig:newyield}), this work brings performance and power
variation awareness into the subtasks of high-level synthesis by performing
yield-driven module selection, resource sharing, and scheduling
simultaneously. Experiments show that the yield-driven HLS framework achieves
on average a 31\% power yield improvement with only a 1\% performance yield
loss, compared to traditional worst-case-analysis-based approaches.

\section{High-Level Synthesis for Three Dimensional Integrated Circuits}

To further improve integration density and to tackle the interconnect challenge
as technology continues scaling, researchers have been pushing forward
three-dimensional (3D) IC stacking~\cite{Davis2005,Xie2006}. In a 3D IC,
multiple device layers are stacked together with direct vertical interconnects
through substrates, as shown in Figure~\ref{fig:3dexample}. 3D ICs offer a number of advantages over traditional
two-dimensional (2D) design, such as (1) higher packing density and smaller
footprint; (2) shorter global interconnect due to the short length of
through-silicon vias (TSVs) and the flexibility of vertical routing, leading to
higher performance and lower power consumption of interconnects; and (3)
support for heterogeneous integration: each die can be implemented in a
different technology.


\begin{figure}[htbp]
\centering
  % Requires \usepackage{graphicx}
  \includegraphics[width=0.85\textwidth]{Chapter-1/Figures/3dexample.pdf}
  \caption{Illustration of three-dimensional integrated circuits.}\label{fig:3dexample}
\end{figure}

A common theme running through current thinking in EDA and system-level
design is that complex designs are best addressed at the architectural level,
very early in the design phase, rather than later in the design flow.
Consequently, there has been intensive research on architectural design space
exploration for SoCs. In the scenario of 3D SoC integration, the stacking
strategies and 3D-related technology options further complicate the design
space exploration. It is believed that if ESL is important for 2D designs, it
will be critical for 3D designs. A system-level design space exploration
methodology that helps make decisions at the early stages of 3D SoC design is
therefore of great importance.

\section{Contributions and Organization}

The previous sections showed the importance of the shift from a deterministic
to a statistical design methodology in high-level synthesis. The research
presented in this thesis aims at extending current variation-aware behavioral
synthesis into a comprehensive solution with new optimization techniques,
coverage of new variation sources, and integration with emerging technologies.

We start in Chapter~\ref{chapter:ILP} with a 0-1 integer linear programming
(ILP) formulation that aims at reducing the impact of timing variations in
high-level synthesis, by integrating overall timing yield constraints into
scheduling and resource binding. The proposed approach focuses on how to
achieve the maximum performance (minimum latency) under given timing yield
constraints with affordable computation time. Experimental results show that
significant latency reduction is achieved under different timing yield
constraints, compared to the traditional worst-case-based approach.

Chapter~\ref{chapter:latch} proposes a methodology to replace the
edge-triggered flip-flops in circuits with transparent latches, to exploit
latches' extra ability to pass time slack and tolerate delay variations. We
then discuss the benefits and overheads of the replacement, and propose an
optimization framework for latch replacement in the high-level synthesis
design flow. Experimental results show that the latch-based design can achieve
an average 27\% improvement in timing yield compared with the traditional
flip-flop-based design.

It has been shown that multiple threshold and supply voltages assignment
(multi-Vth/Vdd) is an effective way to reduce power dissipation. However, most
of the prior multi-Vth/Vdd optimizations are performed under deterministic
conditions. With the increasing process variability that has significant impact
on both the power dissipation and performance of circuit designs, it is
necessary to employ statistical approaches in analysis and optimizations for
low power. Chapter~\ref{chapter:multiv} studies the impact of process
variations on the multi-Vth/Vdd technique at the behavioral synthesis level.
Experimental results show that significant power reduction can be achieved with
the proposed variation-aware framework, compared with traditional worst-case
based deterministic approaches.

Chapter~\ref{chapter:nbti} presents an NBTI-aware synthesis framework that
minimizes leakage power of circuits with bounded delay degradation (thus
guaranteed life-time). A fast evaluation approach for NBTI-induced degradation
of architectural function units is proposed, and multi-Vth resource libraries
are built with degradation characterized for each function unit. We then
propose an aging-bounded high-level synthesis framework, within which the
degraded delays are used to guide the synthesis, and leakage power is
optimized through the proposed aging-aware resource rebinding algorithm.

In Chapter~\ref{chapter:3d} and Chapter~\ref{chapter:esl}, the statistical
behavioral synthesis is extended and integrated into the design of new emerging
three-dimensional (3D) ICs. 3D integration brings numerous benefits as well
as challenges in power density, thermal dissipation, and variation modeling.
Previous chapters show that statistical high-level synthesis can explore the
trade-offs between parametric yields and performance/power of the VLSI
circuits. These two chapters show that statistical high-level synthesis could
be used as an effective tuning knob for module-level 3D integration, and
combining statistical high-level synthesis and 3D IC design could significantly
improve the quality of 3D IC designs.

Finally, Chapter~\ref{chapter:con} concludes the thesis and discusses future
work.
