\chapter{Minimizing Leakage Power in Aging-Bounded High-level
\\Synthesis}\label{chapter:nbti}

Previous chapters have all focused on process variations, which are static and
arise at manufacturing time of VLSI circuits. Dynamic variations such as
Negative Bias Temperature Instability (NBTI), which occur during the operation
of VLSI circuits, cause temporal degradation of the transistor threshold
voltage and have also become major design concerns for deep-sub-micron
(DSM) designs. Meanwhile, leakage power dissipation becomes dominant in total
power as technology scales. While multi-threshold voltage assignment has been
shown to be an effective way to reduce leakage, the NBTI degradation rate varies
with the initial threshold voltage assignment, which motivates
the co-optimization of leakage reduction and NBTI mitigation. This chapter
minimizes leakage power during high-level synthesis of circuits with bounded
delay degradation (and thus guaranteed lifetime), using multi-$V_{th}$ resource
libraries. We first propose a fast evaluation approach for the NBTI-induced
degradation of architectural function units, and build multi-$V_{th}$ resource
libraries with the degradation characterized for each function unit. We
then propose an aging-bounded high-level synthesis framework, within which the
degraded delays are used to guide the synthesis and leakage power is optimized
through the proposed aging-aware resource rebinding algorithm. Experimental
results show that the proposed techniques effectively reduce leakage power,
achieving an extra 26\% leakage reduction compared to the traditional
aging-unaware multi-$V_{th}$ assignment approach.

\section{Introduction}\label{sec:C5-intro}

As technology scales, Negative Bias Temperature Instability (NBTI) has become a
major reliability concern for circuit designers. NBTI manifests itself as an
increase in the transistor threshold voltage, causing the logic gates to slow
down, and the critical paths may no longer be able to meet the timing
constraints. Circuit level simulations have shown that NBTI can result in a
10\% circuit delay degradation after 10 years of service time~\cite{Kumar2006,
Bhardwaj2006}. Meanwhile, the leakage power of circuits has an exponential
dependence on threshold voltage~\cite{PV:CLL+00}. During circuit operation
time, NBTI-induced threshold voltage degradation may severely affect the
leakage power. Therefore, ways to accurately analyze and reduce leakage power
under the impact of NBTI need to be explored.

Recently, various techniques have been proposed to mitigate the
impact of NBTI, including gate sizing~\cite{Kang2007},
synthesis~\cite{Kumar2007}, Input Vector Control
(IVC)~\cite{Wang2007}, and Internal Node Control
(INC)~\cite{Wang2009}. Most of these techniques, however, operate at
the gate level or the physical design level. As the number of
transistors integrated on a single chip reaches billions, the pace of
designer productivity gains has not kept up with the increase in
design complexity. Consequently, we have seen a recent trend of
moving design abstraction to a higher level. \textbf{High-level
synthesis (HLS)}, which is also known as behavioral synthesis,
enables this shift by providing automation to generate
optimized hardware from a high-level description of the
functionality or algorithms to be implemented in hardware. During
HLS, many circuit optimization techniques can be applied at the
higher abstraction level (module level), such as Multiple Supply
Voltage (multi-$V_{dd}$)~\cite{hls:shiue00}, Multiple Threshold
Voltage (multi-$V_{th}$)~\cite{HLS:Tang05,HLS:Khouri02}, and
Adaptive Body Biasing (ABB)~\cite{HLS:Wang082}. HLS provides an
optimization platform for tackling the NBTI degradation problem
with reduced tuning complexity.

A principal approach for NBTI mitigation is \emph{guardbanding}, in which extra
delay headroom is reserved at design time, allowing the circuit to be degraded
to a bounded extent. In order to keep the degraded delay under the bound,
circuits can be adaptively adjusted at run time, using Adaptive Supply Voltage
(ASV)~\cite{zhang09,chen09} or Adaptive Body Biasing
(ABB)~\cite{Kumar09,Tiwari08}. As run-time tuning usually incurs extra control
overhead, this work focuses on design-time optimization using multi-$V_{th}$
assignment. Multi-$V_{th}$ assignment has been shown as an effective way to
reduce circuit leakage power~\cite{Sundararajan99,HLS:Tang05,HLS:Khouri02}.
However, prior research did not take into account the temporal degradation of
threshold voltage. In terms of NBTI mitigation, this work is based on the fact
that high-$V_{th}$ circuits degrade more slowly than low-$V_{th}$
circuits~\cite{Bhardwaj2006,Kumar09,Tiwari08}. Therefore, the difference
between the delay of a high-$V_{th}$ circuit and that of its low-$V_{th}$
equivalent shrinks as degradation proceeds. Given
the same delay guardband, high-$V_{th}$ and low-$V_{th}$ circuits may reach the
delay bound at about the same time (i.e., both circuits guarantee the same
lifetime), while the high-$V_{th}$ circuits consume much less leakage power.
Therefore, compared with NBTI-unaware multi-$V_{th}$ techniques, more
high-$V_{th}$ circuits can be used in favor of leakage power savings. In this
work, the dependencies of leakage and degradation rate on initial $V_{th}$
settings are explored at the behavioral synthesis level, yielding minimal
leakage power under the given aging bound.

This chapter starts with the accurate evaluation of delay degradation and leakage
power of architectural function units under different threshold voltages,
using the long-term dynamic NBTI model considering the impact of input signal
probabilities~\cite{Bhardwaj2006,Kumar2006}. After the multi-$V_{th}$ library
is characterized, an HLS framework with aging bounds is presented, within which
an initial scheduling and resource binding is done according to the anticipated
degraded delay of units at the attainable lifetime bound, using only
low-$V_{th}$ resource units. Static timing analysis is then performed on the
scheduled and bound result, generating timing slack information based on the
degraded delay. The ``anticipated'' slacks are used to guide a resource
rebinding algorithm, which iteratively replaces resource units with their
high-$V_{th}$ equivalents wherever the delay difference fits into the
slacks, reducing leakage power without violating the timing constraints. The
effectiveness of the proposed technique is demonstrated on a set of industrial
HLS benchmarks, and the improvements are compared with the conventional
aging-unaware multi-$V_{th}$ implementations.

To the best of our knowledge, this is the first work to tackle the
co-optimization problem that minimizes leakage power and mitigates
aging effects simultaneously at the behavioral synthesis level. The
contributions of this chapter can be summarized as follows:
\begin{enumerate}
  \item A fast evaluation approach for NBTI-induced degradation of
      architectural function units is introduced, to build multi-$V_{th}$
      resource libraries with modeled NBTI-induced degradation;
  \item A framework for high-level synthesis with aging bounds is
      established based on conventional HLS design flow;
  \item A heuristic resource binding algorithm is proposed to minimize
      leakage power under given aging bounds, using multi-$V_{th}$ resource
      libraries.
\end{enumerate}

\section{NBTI and Leakage Characterization}\label{sec:character}
This section presents the NBTI model and the characterization flow
to capture the degradation and leakage power of architectural
resource units under different threshold voltages. A multi-$V_{th}$
resource library with degraded delay and leakage information is
built for the proposed aging-bounded high-level synthesis.

\subsection{NBTI Modeling}

\begin{figure}
  \centering
  % Requires \usepackage{graphicx}
  \includegraphics[width=0.65\textwidth]{Chapter-5/Figures/nbti.pdf}\\
  \caption{Threshold voltage degradation during stress and recovery cycles.}\label{fig:nbti}
%\vspace{-10pt}
\end{figure}

NBTI can be described as the generation of interface charges at the $Si/SiO_2$
interface~\cite{Huard06}. Depending on the bias condition of the PMOS
transistor, NBTI has two phases: a stress phase and a recovery phase. In the
stress phase (gate at logic 0, i.e., $V_{gs} = -V_{dd}$), the holes in the
channel weaken the $Si$-$H$ bonds, which results in the generation of positive
interface charges and hydrogen species; correspondingly, the threshold voltage
($V_{th}$) of the PMOS transistor increases. During the recovery phase (gate at
logic 1, i.e., $V_{gs} = 0$), the interface traps can be annealed by the
hydrogen species, and thus the $V_{th}$ degradation ($\Delta V_{th}$) is
partially recovered. The dynamic NBTI model~\cite{Bhardwaj2006}
captures the degradation when the PMOS transistor undergoes alternate stress
and recovery periods, as shown in Fig.~\ref{fig:nbti}.

In order to predict the long term threshold voltage degradation ($\Delta
V_{th}$) due to NBTI, a compact model based on reaction-diffusion is proposed
in~\cite{Bhardwaj2006}, in which $\Delta V_{th}$ is modeled as a function of
the cycle period $T_{clk}$, duty ratio $\alpha$, and circuit running time $t$:


\begin{displaymath}
%\scriptsize
|\Delta V_{th,t}| = \Big(\frac{\sqrt{K_v^2 \alpha T_{clk}}}{1-\beta_t^{1/2n}}\Big)^{2n},
\beta_t =  1 - \frac{2\xi_1 t_e + \sqrt{\xi_2 C (1-\alpha) T_{clk}}}{2t_{ox} + \sqrt{Ct}}
\end{displaymath}
\begin{equation}\label{eq:eq1}
%\scriptsize
K_v = \Big(\frac{qt_{ox}}{\epsilon_{ox}}\Big)^3K^2C_{ox}(V_{gs}-V_{th})\sqrt{C}\exp{\Big(\frac{2E_{ox}}{E_o}\Big)}
\end{equation}
All model parameters not defined above are constants obtained from real
measurements or data fitting. For the sake of brevity, their meanings and values
are not listed here (they can be found in~\cite{Bhardwaj2006}). The model
assumes a periodic rectangular waveform as the gate input signal, while in real
circuits the signal waveforms are usually random. In~\cite{Kumar2006}, it is
analytically proven that these random waveforms can be converted to equivalent
periodic rectangular signals by ensuring that the signal probability of the
random waveform and that of the deterministic periodic waveform are the same.
Therefore, the above model is applicable to any input waveform, with the duty
ratio $\alpha$ set to the signal probability\footnote{Here signal
probability (SP) is defined as the probability that the signal is at logic 0,
since NBTI stress on PMOS devices is caused by logic 0 signals.}. The model
also shows an exponential dependence between $\Delta V_{th}$ and initial
$V_{th}$. For a single PMOS transistor at 45nm technology, the threshold
voltage degradation at 10-year lifetime against input signal probabilities and
initial threshold voltages is plotted in Fig.~\ref{fig:spvth}, which shows the
maximum $V_{th}$ degradation varies from $0.11V$ to $0.16V$ across different
input signal probabilities, and demonstrates the necessity of considering the
impact of signal probabilities during NBTI modeling, especially for
low-$V_{th}$ circuits.
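As a concrete illustration, the long-term model in Eq.~(\ref{eq:eq1}) can be evaluated numerically. The sketch below uses illustrative placeholder values for the fitted constants ($K_v$, $\xi_1$, $\xi_2$, $t_e$, $C$, $t_{ox}$, $n$); real values must be taken from measurement or data fitting as in~\cite{Bhardwaj2006}, so the returned magnitudes are only indicative of the model's qualitative trends.

```python
import math

def delta_vth(t, alpha, T_clk, n=1/6,
              K_v=2.5e-3, xi1=0.9, xi2=0.5, t_e=1e-9,
              C=1e-18, t_ox=1.2e-9):
    """Long-term NBTI threshold-voltage shift, following Eq. (5.1).

    t      : total operating time in seconds
    alpha  : duty ratio (signal probability of logic 0)
    T_clk  : clock period in seconds
    All other parameters are illustrative placeholders, NOT fitted values.
    """
    # beta_t = 1 - (2*xi1*t_e + sqrt(xi2*C*(1-alpha)*T_clk)) / (2*t_ox + sqrt(C*t))
    beta = 1 - (2 * xi1 * t_e + math.sqrt(xi2 * C * (1 - alpha) * T_clk)) \
               / (2 * t_ox + math.sqrt(C * t))
    # |dVth| = ( sqrt(K_v^2 * alpha * T_clk) / (1 - beta^(1/2n)) )^(2n)
    return (math.sqrt(K_v**2 * alpha * T_clk)
            / (1 - beta**(1 / (2 * n))))**(2 * n)

TEN_YEARS = 10 * 365 * 24 * 3600
print(delta_vth(TEN_YEARS, alpha=0.5, T_clk=1e-9))
```

Consistent with Fig.~\ref{fig:spvth}, the sketch reproduces the qualitative behavior: degradation grows with operating time and with input signal probability.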

\begin{figure}
  \centering
  % Requires \usepackage{graphicx}
  \includegraphics[width=0.65\textwidth]{Chapter-5/Figures/SPvsVTH.pdf}\\
  \caption{Threshold voltage degradation against input signal probabilities and initial threshold voltages for a PMOS transistor, showing higher input signal probability and lower initial threshold voltage lead to larger degradation. }\label{fig:spvth}
%\vspace{-10pt}
\end{figure}


\subsection{NBTI and Leakage Characterization for Multi-$V_{th}$ Library Components}

With transistor-level NBTI models, gate-level NBTI evaluation can
be done by propagating gate input signal probabilities to internal
transistors, computing the corresponding $V_{th}$ degradation, and
calibrating the gate delay by SPICE simulations with the updated
$V_{th}$ values~\cite{Kumar2007}. However, for large-scale
circuits such as architectural function units with thousands of
gates, a fast and efficient evaluation flow utilizing existing
analysis tools is needed.

Fig.~\ref{fig:chara} shows the NBTI and leakage characterization flow used to
characterize architectural function units in this work. The flow starts with
the creation of NBTI-characterized technology libraries. Operating conditions,
such as the initial threshold voltage and the anticipated circuit lifetime (e.g., 10
years), are set in advance. Gate-level NBTI models together with netlists of
library cells are then fed to the library characterization tool \emph{Liberty
NCX} from Synopsys~\cite{URL:NCX}, generating standard cells with nominal
delays to serve as a baseline, as well as degraded cells with delays based on
the appropriate $\Delta V_{th}$ resulting from the cells' input probabilities.
The names of the degraded cells are annotated with the corresponding input
probabilities as suffixes. Leakage power of each cell is also characterized
according to the degraded threshold voltage. All the characterized cells are
then compiled into technology libraries for targeting and linking in subsequent
analysis steps.

Following cell library creation, synthesis is performed in \emph{Design
Compiler} taking as input the Verilog/VHDL description of a function unit,
using the standard cells with nominal delay to produce a cell netlist of the
desired unit. The synthesized netlist is then fed to \emph{Primetime PX} to
propagate the primary input probabilities to the internal nodes of the netlist.
As the signal probability of each internal node is reported, the cells taking
that node as input are annotated with the signal probability value. In the case
that a cell takes multiple inputs, the value corresponding to the worst-case
NBTI-degradation is selected. With the annotated cell netlist, static timing
and power analysis using \emph{Primetime} is performed against the
NBTI-characterized technology libraries, generating the NBTI-induced delay and
leakage power values for the given function unit.
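The worst-case input selection used during annotation can be made concrete with a small sketch. Since NBTI stress corresponds to logic 0, a higher signal probability (probability of logic 0) means a larger stress fraction; the helper below, a hypothetical illustration rather than part of the actual tool flow, therefore picks the maximum SP among a cell's inputs.

```python
def annotate_cell_sp(input_sps):
    """Choose the signal probability used to annotate a multi-input cell.

    input_sps: signal probabilities (prob. of logic 0) of the cell's inputs.
    The worst case for NBTI is the largest SP, since a higher probability
    of logic 0 means more time under stress for the PMOS devices.
    """
    if not input_sps:
        raise ValueError("cell must have at least one input")
    return max(input_sps)

# A 3-input cell whose inputs have SPs 0.3, 0.8 and 0.5 is annotated with 0.8.
print(annotate_cell_sp([0.3, 0.8, 0.5]))
```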

\begin{figure}[tbph]
%\vspace{-5pt}
  \centering
  % Requires \usepackage{graphicx}
  \includegraphics[width=0.7\textwidth]{Chapter-5/Figures/flow.pdf}\\
  \caption{NBTI and leakage power characterization flow for function units. Design Compiler, Primetime and Liberty NCX are commercial tools from Synopsys.}\label{fig:chara}
%  \vspace{-5pt}
\end{figure}

As implementing multiple threshold voltages on a single chip
incurs extra manufacturing cost, only three voltage levels are used
in this work to limit the tuning overhead, and the multi-$V_{th}$
technique is applied at the granularity of function units. That
is, all the gates inside a function unit operate at the same
threshold voltage, and the threshold voltage varies only from function
unit to function unit. Correspondingly, the components in the
resource library are characterized under low-$V_{th}$ (LVT),
medium-$V_{th}$ (MVT) and high-$V_{th}$ (HVT) settings
respectively, using the flow presented in the previous subsection. For
a given set of resource library components, the characterized
results are compiled into a multi-$V_{th}$ resource library, in
which each unit has multi-$V_{th}$ implementations with equivalent
functionalities but different NBTI-induced delay and leakage power
values. Note that a finer-granularity multi-$V_{th}$ assignment
for function units could be done at the gate level, which would
increase the complexity of the characterization and design space
exploration at high-level synthesis.

\subsection{Motivation Example}

Using the proposed characterization flow, the delay degradation of
a 16-bit adder (as a representative function unit) is sampled and
interpolated with different initial threshold voltages at
different circuit running times, as shown in
Fig.~\ref{fig:vthtime}. The figure shows that, high-$V_{th}$
circuits have a smaller degradation rate than low-$V_{th}$
circuits.

\begin{figure}
  \centering
  % Requires \usepackage{graphicx}
  \includegraphics[width=0.65\textwidth]{Chapter-5/Figures/VTHvsTime.pdf}\\
  \caption{NBTI-induced delay degradation of a 16-bit adder against
different threshold voltages and circuit running time, showing
that the high-$V_{th}$ adder has a larger initial delay but a lower
degradation rate.}\label{fig:vthtime}
\end{figure}

\begin{figure}
  \centering
  % Requires \usepackage{graphicx}
  \includegraphics[width=0.8\textwidth]{Chapter-5/Figures/delay2.pdf}\\
  \caption{The conceptual comparison of different optimization
strategies: (a) conventional dual-$V_{th}$ assignment with the tighter
timing constraint $D1$ at design time; (b) NBTI-aware dual-$V_{th}$
assignment with timing constraint $D_{req}$ at $T_{life}$. Due to
NBTI, gates with higher $V_{th}$ have slower degradation.}\label{dif_path}
%  \vspace{-10pt}
\end{figure}


Fig.~\ref{dif_path} shows a motivation example for this work. The circuit has
two paths with a different number of function units in each path. Given a
performance (delay) requirement $D_{req}$, a high-level synthesis tool would
try to assign high $V_{th}$ to as many function units as possible, such that
the leakage reduction is maximized while the delay requirement is still met.
However, under the influence of NBTI, during the $V_{th}$ assignment, one must
consider the delay degradation as time goes by, and make sure that any path
delay during the specific product life time $[0, T_{life}]$ is not larger than
the performance requirement $D_{req}$.

Fig.~\ref{dif_path}(a) shows a simple \emph{guardbanding} solution that takes
into account the delay degradation due to NBTI, in which extra delay
headroom is reserved at design time. One can simply tighten the
performance constraint to include the aging effect. For example,
by simply setting a new timing constraint $D1$ ($D1 = D_{req}-
\Delta D$, in which $\Delta D$ is the maximum (worst-case) delay
degradation) at \textit{design time}, one can obtain a $V_{th}$
assignment as shown in Fig.~\ref{dif_path}(a), with
$D_{path_1}=D_{path_2} \leq D1$ at design time.

However, such a simple solution ignores the fact that
\textit{function units with lower $V_{th}$ tend to age faster,
while function units with higher $V_{th}$ degrade more
slowly}~\cite{Bhardwaj2006}. For example, in
Fig.~\ref{dif_path}(a), path 2 has a function unit with high
$V_{th}$, while all function units in path 1 are assigned low
$V_{th}$. Consequently, path 2 has a slower aging rate. Being aware
of such a difference, one may more aggressively assign high $V_{th}$
to additional function units on path 2
(Fig.~\ref{dif_path}(b)), even making it slower than path 1 at
\textit{design time} (Fig.~\ref{dif_path}(b), $D2 > D1$), as long
as the path delays $D_{path_1}$ and $D_{path_2}$ at \textit{the end
of the lifetime} $T_{life}$ can still meet the timing constraint
$D_{req}$. Such an approach achieves extra leakage savings
(Fig.~\ref{dif_path}(b) has one more high-$V_{th}$ function unit
than Fig.~\ref{dif_path}(a)). Note that Fig.~\ref{dif_path} is only
a simple illustrative example, without considering resource sharing
and pipelining, which would make the performance/power analysis more complicated.

Consequently, based on the fact that \textit{function units with
lower $V_{th}$ tend to age faster, while function units with
higher $V_{th}$ degrade more slowly}, together with the leakage and
delay degradation characterization of the resource library (Section~\ref{sec:character}),
we propose a leakage-optimizing behavioral
synthesis framework that considers the aging effect. In this framework, we use a new
timing constraint (i.e., $D_{path_i}$ at the end of the lifetime
$T_{life}$ must meet the delay requirement $D_{req}$) instead of a
design-time timing constraint (i.e., $D_{path_i}$ at design
time must meet the delay requirement $D_{req} - \Delta D$), such
that extra leakage savings can be achieved by using more
high-$V_{th}$ function units.

\section{Leakage Optimization in Aging-bounded \\High-Level Synthesis}\label{sec:optim}
In this section, we present the aging-bounded high-level synthesis framework,
and then propose the resource rebinding algorithm for leakage power
minimization under aging bounds.

\subsection{Aging-bounded HLS}

High-level synthesis (HLS) is the process of transforming a behavioral
description into an RTL description. Operations
such as additions and multiplications in the data flow graph (DFG)
are scheduled into control steps. During the resource allocation and
binding stages, operations are bound to corresponding function units
in the resource library meeting resource type and latency requirements.

In a conventional HLS flow, given the clock cycle period $D_{clk}$ (which is
usually required by the design specification), the timing requirement can be
represented as follows:
\begin{equation}\label{eq:eq3}
 \forall i \in 1 \ldots n, \quad Slack_i \doteq D_{clk} - D_i \geq 0
\end{equation}
where $n$ is the number of control steps, $D_i$ and $Slack_i$ are the total
delay (the maximum arrival time) and slack at control step $i$, respectively.
The resource binding step binds operations in each control step to optimal
function units from the resource library, ensuring that all control steps have
non-negative slacks.

In the case of NBTI, the circuit is degraded and the delay $D_i$ gradually
increases as time goes on. Eventually the slacks are used up by the NBTI
induced delay degradations, and the circuit fails due to a timing violation. A
common way to prevent circuits from failing is \emph{guardbanding}, which
reserves extra timing headroom at design time by relaxing $D_{clk}$ (and thus
lowering the frequency), allowing circuits to degrade to a certain extent. Generally, users
will set a lower bound $T$ on the circuit lifetime, expecting the circuit to
work flawlessly until time $T$. How to set the optimal guardband on $D_{clk}$
under the attainable lifetime constraint and perform the design accordingly,
are the key problems to be solved in \emph{Aging-bounded HLS}.

\emph{Aging-bounded HLS} takes as input the lower bound of attainable circuit
lifetime, computes the ``anticipated'' degraded delay of function units at the
lifetime bound, and uses the degraded delay values to guide the resource
selection and binding. In this case, given an attainable lifetime bound of 10
years, the timing requirement in Expression (\ref{eq:eq3}) will be changed to:
\begin{equation}\label{eq4}
 \forall i \in 1 \ldots n, \quad Slack_{i, 10year} \doteq D_{clk} - D_{i, 10year} \geq 0
\end{equation}
where $D_{i, 10year}$ and $Slack_{i, 10year}$ are the total delay and slack at
control step $i$, measured at the service time of 10 years considering
degradation, respectively. The resource library characterized in
Section~\ref{sec:character} is used in aging-bounded HLS, and resource units are
selected according to the new timing requirement. The circuit lifetime is then
guaranteed to be more than 10 years by this requirement. As mentioned in
Section~\ref{sec:character}, an optimized resource selection among
multi-$V_{th}$ function units considering both delay degradation and leakage
power, will lead to better design decisions.
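The shift from Expression (\ref{eq:eq3}) to Expression (\ref{eq4}) can be illustrated with a minimal sketch. All per-step delays and the clock period below are hypothetical numbers chosen for illustration: the same binding passes the design-time check but fails the 10-year check.

```python
def slacks(d_clk, step_delays):
    """Slack_i = D_clk - D_i for every control step."""
    return [d_clk - d for d in step_delays]

def meets_timing(d_clk, step_delays):
    """A binding is valid iff every control step has non-negative slack."""
    return all(s >= 0 for s in slacks(d_clk, step_delays))

# Hypothetical per-step delays (ns): fresh vs. degraded at the 10-year bound.
fresh    = [3.8, 4.0, 3.5]
aged_10y = [4.1, 4.4, 3.9]
d_clk = 4.2

# A conventional flow would accept this binding (fresh delays fit)...
print(meets_timing(d_clk, fresh))
# ...but aging-bounded HLS rejects it: step 2 violates timing at 10 years.
print(meets_timing(d_clk, aged_10y))
```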

\subsection{Leakage Optimization in Aging-bounded HLS}
With the NBTI-characterized multi-$V_{th}$ resource library and the
aging-bounded HLS framework, the leakage power optimization problem can be
solved using traditional low-power resource binding algorithms such as integer
linear programming~\cite{Shiue2000} and maximum weight independent
set~\cite{HLS:Tang05}. However, the use of multiple threshold voltages
multiplies the size of the design space and increases the computational complexity.
Consequently, this work uses a greedy search-and-replace heuristic,
which has been shown to be practical and effective~\cite{HLS:Khouri02}.

The basic flow for leakage optimization in this work is shown in
Fig.~\ref{fig:hlsflow}. The flow takes DFG descriptions of circuits as input,
performs initial scheduling and resource binding with conventional HLS
algorithms under the new lifetime bound, using only the basic (low-$V_{th}$)
resource units from the resource library characterized in
Section~\ref{sec:character}. After that, the leakage power optimization is
performed in the steps listed below.

\begin{figure}[!htbp]
  \centering
  % Requires \usepackage{graphicx}
  \includegraphics[width=0.65\textwidth]{Chapter-5/Figures/hlsflow.pdf}
  \caption{The flow of leakage optimization in aging-bounded HLS.}\label{fig:hlsflow}
\end{figure}

\textbf{Slacks Analysis in HLS.} Timing slacks of each control step at a given
lifetime bound are defined in Expression (\ref{eq4}). However, most HLS
tools use operation chaining, which schedules multiple chained operations
into one control step. In this case, each operation may have its own non-zero
slack. We borrow the methodology of gate-level slack analysis and apply it
to HLS. The chained operations are classified into levels according to
their ``logic'' (actually architectural) depths. At each level, the maximum
arrival time at the output nodes is calculated, and each operation's slack is
calculated as the difference between its arrival time and the maximum arrival
time at its level.
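This levelized slack computation can be sketched as follows. The sketch simplifies by assuming every operation at a level starts after the slowest operation of the previous level finishes; the operation names and delays are hypothetical.

```python
def levelized_slacks(levels):
    """Per-operation slacks for chained operations in one control step.

    `levels` is a list of levels in architectural-depth order; each level
    is a dict {op_name: delay}. An operation's arrival time is its own
    delay plus the maximum arrival time of the previous level; its slack
    is the maximum arrival time at its level minus its own arrival time.
    """
    slacks = {}
    prev_arrival = 0.0
    for level in levels:
        arrivals = {op: prev_arrival + d for op, d in level.items()}
        level_max = max(arrivals.values())
        for op, t in arrivals.items():
            slacks[op] = level_max - t
        prev_arrival = level_max
    return slacks

# Two chained adders feeding one multiplier (delays in ns, illustrative):
print(levelized_slacks([{"add1": 2.0, "add2": 1.5}, {"mul1": 4.0}]))
```

Here \emph{add2} finishes 0.5 ns before the slowest operation of its level, so it receives a 0.5 ns resource slack, while \emph{add1} and \emph{mul1} are on the critical chain and get zero slack.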


\begin{figure}[!htbp]
  \centering
%  \vspace{-5pt}
  % Requires \usepackage{graphicx}
  \includegraphics[width=0.65\textwidth]{Chapter-5/Figures/move.pdf}
  \caption{Resource replacements used in the rebinding: (a) Replacing according to resource slacks; (b) Replacing according to control step slacks.}\label{fig:moves}
%  \vspace{-8pt}
\end{figure}

\textbf{Resource Replacements Used in the Rebinding.} Corresponding to the
slack analysis, in order to fully explore the design space, two types of
resource replacements are used in the search for resource binding:
\begin{itemize}
\item \textbf{Replacing according to resource slacks}, as shown in
    Fig.~\ref{fig:moves}-(a). In this case, the slacks are dedicated to the
    target resources to be replaced. Therefore, the replacement is
    straightforward: find function-equivalent units whose delay
    difference fits into the slacks.
\item \textbf{Replacing according to control step slacks}, as shown in
    Fig.~\ref{fig:moves}-(b). In this case, the slacks can be shared
    among the chained operations, which complicates the problem. In order
    to find the best combination of resource bindings, we assign the
    whole slack to each level of the chained operations in turn, converting
    the control step slack to resource slacks in that level, and keep
    the assignment that yields the best result. We do not consider
    distributing the control step slack among multiple levels (which would
    enable simultaneous replacements in several levels), based on the
    observation that control step slacks are usually not significantly
    larger than the delay differences of resource units.
\end{itemize}
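The second strategy, granting the whole control-step slack to one level at a time and keeping the best outcome, can be sketched as below. The `saving_fn` callback stands in for the library search; the toy search and all numbers are illustrative assumptions, not the actual library data.

```python
def best_level_assignment(levels, cstep_slack, saving_fn):
    """Try granting the whole control-step slack to each level in turn
    and keep the assignment with the largest leakage saving.

    levels   : list of levels, each a list of (op, resource_slack) pairs
    saving_fn: callback that, given one level's ops with augmented slacks,
               returns (leakage_saving, replacements); it stands in for
               the multi-Vth library search and is an assumption here.
    """
    best = (0.0, None, [])  # (saving, winning level index, replacements)
    for j, level in enumerate(levels):
        augmented = [(op, s + cstep_slack) for op, s in level]
        saving, replaces = saving_fn(augmented)
        if saving > best[0]:
            best = (saving, j, replaces)
    return best

def toy_search(level_ops):
    # Toy stand-in: any op whose slack reaches 1.0 ns can be swapped to a
    # higher-Vth unit, each swap saving 5 (arbitrary leakage units).
    repl = [op for op, s in level_ops if s >= 1.0]
    return 5.0 * len(repl), repl

levels = [[("add1", 0.2), ("add2", 0.7)], [("mul1", 0.0)]]
print(best_level_assignment(levels, 0.5, toy_search))
```

With a 0.5 ns control-step slack, granting it to the first level lifts \emph{add2}'s slack to 1.2 ns and enables one replacement, whereas granting it to the second level enables none, so the first assignment wins.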


\begin{figure}%[htbp]
\centering
\footnotesize
\rule[-1mm]{\textwidth}{0.01in}
\begin{codebox}
\Procname{$\proc{NBTI-Binding}(DFG, T\_lifetime)$}
\zi \Comment Initialization
\li Multi-Vth Library Characterization with Lifetime Bound $T\_lifetime$
\li List Scheduling under Resource Usage Constraints
\li Initial Binding to $LVT$ resources
\zi \Comment Aging-aware resource binding
\li \For $i \gets 1$ to $NumCSteps$
\zi \Do \{
\li    Levelize Operations in $CStep i$
\li     Perform Slack Analysis
\li     \For $j \gets 1$ to $NumLevels$
\zi         \Do \{
\li             Save $Slacks(1..Levels, 1..Ops)$
\li             $Slacks(j, -) \gets Slacks(j, -) + CStepSlack(i)$
\li             $PSaving(j), Replaces(j) \gets \proc{ResReplace}(Slacks)$
\li             Restore $Slacks(1..Levels, 1..Ops)$
\zi             \} \End
\li    Find $j$ so that $PSaving(j) = Max(Gain(1..NumLevels))$
\li    Apply $Replaces(j)$ to DFG
\li    $TotalPSaving \gets TotalPSaving + PSaving(j)$
\zi    \}\End
\li     Report $TotalPSaving$ and Updated DFG
\zi
\li $\proc{ResReplace}(Slacks)$
\li \For all $ops$ in Current $CStep$
\zi \Do \{
\li    Find Best Resource Candidates from $MVT$ and $HVT$ Libraries
\zi     according to $Slacks$
\li    Perform Resource Replacement for $ops$
\li    Record $Replaces$ and Compute $PSaving$
\zi \} \End
\li Return $Replaces$ and $PSaving$
\end{codebox}
\rule[1mm]{\textwidth}{0.01in}
\vspace{-5pt}
\caption{Outline of the aging-aware resource binding algorithm}\label{fig:C5-algorithm}
% \vspace{-14pt}
\end{figure}

\textbf{The Resource Rebinding Algorithm.} According to the resource replacing
strategies discussed above, a resource rebinding algorithm is proposed to find
out all the low-$V_{th}$ candidates, and to replace them with high-$V_{th}$
equivalents for leakage power reduction. The outline of the algorithm is shown
in Fig.~\ref{fig:C5-algorithm}, where a DFG is initially scheduled and bound to
low-$V_{th}$ (LVT) resource units, under a given lifetime bound (Lines 1-3).
The algorithm then traverses all the control steps (Line 4). In each control
step, chained operations are levelized and slack analysis is performed (Lines
5-6), followed by the assignment of the control step slack to each level of
operations in turn and the corresponding updating of resource slacks (Line 9). Resource
rebinding is done by replacing the low-$V_{th}$ (LVT) units with the optimal
equivalents found by searching the medium-$V_{th}$ (MVT) and high-$V_{th}$
(HVT) libraries, whose anticipated delay differences fit into
the slacks (Lines 10, 16-21). The optimal assignment of the control step slack is
then determined by comparing the leakage savings resulting from the
corresponding resource replacements (Lines 12-14).

As for computational complexity, in the proposed algorithm the
levelization and slack analysis of each control step can be done by depth-first
search with complexity $O(|V|\log|V|)$, and the sizes of the
graphs ($|V|$) within each control step of the DFG are usually small. For the
resource replacement, according to the slack-updating strategy, each operation
can have at most two slack values. Assuming that for each slack value the
resource libraries are searched exhaustively, the maximum loop depth with
respect to all operations is 2. Therefore, the overall run time of the
proposed resource binding algorithm is $O(n^2)$.

%\vspace{5pt}

\section{Experiments and Result Analysis}\label{sec:C5-analysis}

\begin{figure}
  % Requires \usepackage{graphicx}
  \centering
  \includegraphics[width=0.7\textwidth]{Chapter-5/Figures/results1-1.pdf}
%   \vspace{-6pt}
  \caption{NBTI-induced delay degradation of function units with different initial threshold voltages, at the circuit lifetime of 10 years. The pattern-filled bars show the original delays without degradation, and the error bars show the NBTI-induced degradations. }\label{fig:chardelay}
%  \vspace{-8pt}
\end{figure}

\begin{figure}
  % Requires \usepackage{graphicx}
  \centering
  \includegraphics[width=0.7\textwidth]{Chapter-5/Figures/results1-2.pdf}
%  \vspace{-8pt}
  \caption{Leakage power of function units with different initial threshold voltages. The $y$ axis is logarithmically plotted. The pattern-filled bars show the leakages at the circuit lifetime of 10 years, and the error bars show the change of leakage power due to NBTI-induced $V_{th}$ degradation.}\label{fig:charactpower}
%  \vspace{-12pt}
\end{figure}


In this section, we present the experimental results of our leakage power
optimization framework for aging-bounded high-level synthesis.

We first show the NBTI-induced delay degradation and leakage power
characterization of function units. The work is based on a 45nm
technology, and the NCSU FreePDK 45nm cell library~\cite{URL:freepdk}
is used for the characterization. The threshold voltages are set
as: \emph{LVT} = $0.200V$, \emph{MVT} = $0.315V$, \emph{HVT} =
$0.423V$, while the supply voltage is $V_{dd} = 1.1V$. %These
%voltage values are taken from a commercial 45nm multi-$V_{th}$
%technology library.
Figs.~\ref{fig:chardelay} and \ref{fig:charactpower} show
example characterization results for a set of function units in
our resource library, including two 16-bit adders (\emph{bkung16}
and \emph{kogge16}), two 32-bit adders (\emph{bkung32} and
\emph{kogge32}), and two multipliers
(\emph{pmult8x8} and \emph{booth9x9}). Note that the
characterization results of other components (such as multiplexers and
registers) are not depicted here due to space limitations.

From Fig.~\ref{fig:chardelay}, we can see that the differences between the
initial delays of function units with different $V_{th}$ values are
significant; however, as the NBTI-induced degradation (shown by the error
bars) accumulates, the anticipated degraded delays become much closer.
Meanwhile, Fig.~\ref{fig:charactpower} lists the leakage power of function
units under different initial threshold voltages, both at the time of first
use and after a service time of 10 years. Note that the $y$ axis is plotted
logarithmically, which highlights the potential for leakage power savings
when high-$V_{th}$ units are used in place of low-$V_{th}$ units; this
observation motivates the resource rebinding work presented in this chapter.
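The narrowing of the delay gap can be sanity-checked with the standard alpha-power-law gate delay model, $d \propto V_{dd}/(V_{dd}-V_{th})^{\alpha}$. The threshold and supply voltages below are taken from the text; the per-device NBTI shifts (a larger shift for the LVT device than the HVT device, reflecting the faster degradation of low-$V_{th}$ units) and $\alpha = 1.3$ are assumed values for illustration, not characterized data.

```python
VDD, ALPHA = 1.1, 1.3  # Vdd from the text; alpha is an assumed fit parameter

def rel_delay(vth):
    # Sakurai-Newton alpha-power law: gate delay ~ Vdd / (Vdd - Vth)^alpha
    return VDD / (VDD - vth) ** ALPHA

# LVT = 0.200 V and HVT = 0.423 V from the text; the +60 mV and +25 mV
# NBTI shifts are illustrative assumptions (LVT units degrade faster).
lvt_fresh, hvt_fresh = rel_delay(0.200), rel_delay(0.423)
lvt_aged,  hvt_aged  = rel_delay(0.200 + 0.060), rel_delay(0.423 + 0.025)

print(hvt_fresh / lvt_fresh)  # HVT delay penalty when new
print(hvt_aged / lvt_aged)    # smaller penalty after aging
```

Under these assumptions the HVT-to-LVT delay ratio shrinks with aging, which is exactly the effect that lets more high-$V_{th}$ units satisfy the degraded-delay constraints.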

\begin{table}[tbp]
%  \vspace{-10pt}
\centering \footnotesize \caption{Benchmark profile and initial scheduling
results}\label{table:bench}
 \vspace{5pt}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
Name & \# nodes & \# edges & \# CCs & \# adders& \# multipliers \\ \hline\hline
PR & 44 & 132 & 12 & 4 & 2\\ \hline
WANG & 52 & 132 & 14 & 4 & 2 \\ \hline
MCM & 96 & 250 & 18 & 7 & 3 \\ \hline
HONDA & 99 & 212 & 15 & 6 & 6 \\ \hline
DIR & 150 & 312 & 16 & 8 & 8 \\ \hline
STEAM & 222 & 470 & 19 & 11 & 10 \\ \hline
CHEM & 348 & 729 & 29 & 9 & 10 \\ \hline
\end{tabular}
\end{table}


With the NBTI-aware multi-$V_{th}$ resource library characterized, our proposed
resource rebinding algorithm for leakage minimization is applied to a set of
industrial HLS benchmarks. The profiles of the benchmarks, as well as the
initial scheduling results, are listed in Table~\ref{table:bench}, where the
2nd and 3rd columns show the number of nodes and edges in each benchmark,
respectively, the 4th column shows the number of control steps resulting from
the initial scheduling, and the last two columns show the number of resource
instances used in each schedule.

The proposed resource binding algorithm is implemented in C++, and the
experiments are conducted on a Linux workstation with an Intel Xeon 3.2GHz
processor and 2GB of RAM. All experiments complete in less than 10s of CPU
time. All leakage reduction values in the experimental results are computed
against single-$V_{th}$ implementations that use only low-$V_{th}$ units.

Fig.~\ref{fig:result2-1} compares the total leakage energy reduction with
that of the traditional aging-unaware multi-$V_{th}$ assignment. In the
aging-unaware approach, multi-$V_{th}$ assignment is performed according to
the original (non-degraded) delays of the function units, yielding an average
leakage reduction of 14\%; in our proposed aging-bounded approach, the
degraded delays at the lifetime bound of 10 years are used to guide the
resource rebinding, and an average leakage reduction of 26\% is achieved. The
comparison shows that, with the proposed aging-bounded approach, leakage
power consumption can be reduced more effectively without affecting the
attainable circuit lifetime.


\begin{figure}[!tbp]
  % Requires \usepackage{graphicx}
  \centering
  \includegraphics[width=0.7\textwidth]{Chapter-5/Figures/results2-1.pdf}
    \vspace{-10pt}
  \caption{Total leakage energy reduction under a lifetime bound of 10 years, compared with the traditional aging-unaware multi-$V_{th}$ assignment.}\label{fig:result2-1}
    \vspace{-6pt}
\end{figure}

\begin{figure}[tbp]
  % Requires \usepackage{graphicx}
  \centering
  \includegraphics[width=0.8\textwidth]{Chapter-5/Figures/results2-2.pdf}
    \vspace{-10pt}
  \caption{Leakage reduction against aging-bounded single-$V_{th}$ approach with different lifetime bounds.}\label{fig:result2-2}
    \vspace{-6pt}
\end{figure}


\begin{figure}[!btp]
%    \vspace{-8pt}
  % Requires \usepackage{graphicx}
  \centering
  \includegraphics[width=0.8\textwidth]{Chapter-5/Figures/results2-3.pdf}
%    \vspace{-10pt}
  \caption{Leakage reduction against aging-bounded single-$V_{th}$ approach with different threshold voltage settings, under a lifetime bound of 10 years.}\label{fig:result2-3}
%    \vspace{-10pt}
\end{figure}

Fig.~\ref{fig:result2-2} explores the impact of different lifetime bounds on
the effectiveness of the proposed leakage reduction technique, against
aging-bounded single-$V_{th}$ implementations. The average leakage energy
reductions under lifetime constraints of 5, 10, and 15 years are 11\%, 26\%,
and 32\%, respectively. This suggests that higher lifetime bounds are more
favorable for leakage energy reduction. The reason is that as the circuit's
running time grows, the delay difference between high-$V_{th}$ and
low-$V_{th}$ units decreases, so more high-$V_{th}$ units can be used in the
design under higher lifetime bounds. However, higher lifetime bounds also
require more guardbanding against the larger overall degradations, yielding
lower clock frequencies. Therefore, the tradeoff between leakage power
reduction and circuit performance needs to be balanced.
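The sublinear growth of degradation with time, which underlies this tradeoff, can be illustrated with the widely used long-term NBTI power law $\Delta V_{th}(t) = A \cdot t^{n}$ with $n \approx 1/6$ from the reaction-diffusion model. The prefactor $A$ below is an assumed illustrative constant, not fitted to the characterization data; only the relative trend across lifetime bounds matters.

```python
SECONDS_PER_YEAR = 3.15e7
A, N = 3.0e-3, 1.0 / 6.0  # assumed prefactor (V); n ~ 1/6 diffusion exponent

def dvth(years):
    """Illustrative long-term NBTI threshold shift (V) after `years` of stress."""
    return A * (years * SECONDS_PER_YEAR) ** N

for y in (5, 10, 15):
    print(y, "years:", round(dvth(y), 4), "V")
```

Because the shift grows only as $t^{1/6}$, tripling the lifetime bound from 5 to 15 years adds far less than triple the degradation, so the extra guardband cost of a longer bound grows slowly while the HVT/LVT delay gap keeps shrinking.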

Fig.~\ref{fig:result2-3} explores the impact of different threshold voltage
settings on the effectiveness of the proposed leakage reduction technique,
against aging-bounded single-$V_{th}$ implementations. In the comparison
experiments, the number of threshold voltage levels is reduced to two, so
that only \emph{Medium-}$V_{th}$ units or \emph{High-}$V_{th}$ units can be
used for replacement; the attainable lifetime bound is set to 10 years. The
average leakage energy reductions under the three threshold voltage settings
L-M-$V_{th}$ (using LVT and MVT), L-H-$V_{th}$ (using LVT and HVT), and
multi-$V_{th}$ (using all three levels) are 16\%, 17\%, and 26\%,
respectively. This comparison is motivated by the manufacturing overhead of
multi-$V_{th}$ technology. Since the total leakage reduction is determined by
both the number of units replaced and the leakage saving of each individual
replacement, when only two threshold voltage levels are allowed, the results
depend on which factor dominates. Fig.~\ref{fig:result2-3} shows that in some
cases the L-M-$V_{th}$ scheme beats the L-H-$V_{th}$ scheme because more LVT
units can be replaced with MVT units, while in other cases L-H-$V_{th}$ is
more favorable because the leakage saving brought by each HVT unit is more
significant. This leads to the problem of optimal second threshold voltage
selection, which is left for future work. Nevertheless, multi-$V_{th}$ design
using three levels of threshold voltages can exploit more benefits, at the
cost of extra manufacturing overhead.
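The two competing factors can be made explicit with a toy computation. The unit counts and leakage values below are hypothetical and chosen only to show that neither two-level scheme dominates: total saving is the product of how many units get replaced and how much each replacement saves.

```python
# Hypothetical per-unit leakage values (uW), ordered LVT > MVT > HVT.
LVT_LEAK, MVT_LEAK, HVT_LEAK = 100.0, 20.0, 5.0

def saving(n_replaced, new_leak):
    """Total leakage saved by replacing n_replaced LVT units."""
    return n_replaced * (LVT_LEAK - new_leak)

# Slack-rich schedule: timing admits many MVT swaps but few HVT swaps,
# so the replacement count dominates and L-M-Vth wins (640 vs 285).
print(saving(8, MVT_LEAK), saving(3, HVT_LEAK))

# Slack-poor schedule: the same few units fit either way, so the
# per-unit saving dominates and L-H-Vth wins (240 vs 285).
print(saving(3, MVT_LEAK), saving(3, HVT_LEAK))
```

Which scheme wins thus depends on the slack profile of the particular design, matching the mixed results in Fig.~\ref{fig:result2-3}.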

\section{Summary}
This chapter explored the impact of different threshold voltages on the rates
of NBTI-induced circuit degradation, from the perspective of high-level
synthesis. As the delay difference between low-$V_{th}$ circuits and their
high-$V_{th}$ equivalents diminishes with degradation, more high-$V_{th}$
units can be used in the design, creating significant potential for leakage
power savings. The author then proposes a framework to accurately evaluate
the delay degradation as well as the leakage power of architectural units, to
perform synthesis under a new metric, the \emph{Lifetime Bound}, and to
optimize the leakage power consumption during the new synthesis process.
Experimental results show that, compared to the traditional aging-unaware
multi-$V_{th}$ assignment approach, the proposed techniques can more
effectively reduce the leakage power under a given attainable circuit
lifetime bound.

%\vspace{5pt}
