\chapter{ILP-based Scheme for Timing Variation-aware Scheduling and Resource
Binding}\label{chapter:ILP}

Chapter~\ref{chapter:intro} introduced the preliminaries on statistical
high-level synthesis, including the variation characterization of modules and
the statistical timing and power analysis in high-level synthesis.  This
chapter presents a 0-1 integer linear programming (ILP) formulation that
reduces the impact of timing variations in high-level synthesis by integrating
overall timing yield constraints into scheduling and resource binding. The
proposed approach focuses on achieving the maximum performance (minimum
latency) under a given timing yield penalty and with affordable computation
time. Experimental results show that an average latency reduction of 23\% is
achieved under a 90\% timing yield constraint.

\section{Introduction}
Technology scaling has resulted in significant deviations from the nominal
values of transistor parameters, such as channel length, threshold voltage, and
metal linewidth. As a result, traditional deterministic worst-case analysis is
no longer adequate: it introduces too much pessimism into circuit performance
estimates and requires substantial extra resources and design effort to meet
design constraints. Instead, statistical static timing analysis (SSTA) has
gained great favor~\cite{PV:VRK+04,PV:DK03}. In SSTA, the delay of circuit units is
modeled as a distribution rather than a fixed worst-case value, and the overall
timing performance is calculated by propagating these distributions through the
circuit, resulting in a much more accurate estimation of actual circuit
performance.

Most previous work on variation addresses lower levels of the design flow. In
this chapter, we examine the problem at the behavioral synthesis level, in
particular, how to integrate process variation awareness into scheduling and
resource binding. Traditionally, scheduling and resource
binding are performed based on deterministic worst-case analysis. In the era of
statistical timing analysis, binding an operation scheduled within fixed clock
cycles to a function unit with a statistical delay distribution naturally
results in performance yield loss. Compared to the requirement of 100\%
performance yield in worst-case based high-level synthesis (HLS), the basic
objective of variation-aware HLS is to minimize latency with an acceptable
sacrifice of performance yield.

Both heuristic algorithms and integer linear programming (ILP) based algorithms
are used to solve the scheduling and resource binding problem in HLS. Although
the number of variables and inequalities in an ILP formulation grows
exponentially as the problem size scales up, the motivation of this work is to
obtain an optimal solution to variation-aware high-level synthesis, which can
serve as a reference for evaluating the performance of the various heuristic
algorithms that are more practical in real designs.

In this chapter, we first define \textbf{timing yield} as the probability that
a design can finish execution in given clock steps without timing constraint
violation. We then discuss how to compute the overall timing yield of a
control/data flow graph (CDFG) from delay distributions of nodes in the graph.
After that, we formulate the conventional scheduling and resource binding
problem in an ILP framework and integrate the timing yield constraint into it.
Because the overall timing yield is a non-linear function that cannot be
represented directly in an ILP formulation, we adopt a linear approximation of
it. Finally, we conduct experiments using a commercial ILP solver and show the
performance of our new approach on a set of HLS benchmarks.

\section{Related Work} \label{sec:C2-related}

ILP-based high-level synthesis has been explored over the past decade.
Chaudhuri et al.~\cite{hls:chau94} gave a formal analysis of the constraints of
ILP-based scheduling and presented a well-structured ILP formulation of the
scheduling problem to reduce the computation time. Recent work on ILP-based HLS
has concentrated on extending the basic ILP formulation to cope with additional
design parameters, such as power, reliability, and manufacturability.
Shiue~\cite{hls:shiue00} extended the traditional ILP approach to include peak
power optimization. Tosun et al.~\cite{hls:Tosun05} presented an ILP-based
approach to the soft error-aware HLS problem.

Recently various design techniques to tackle the variation problem have been
proposed on the basis of statistical timing analysis, such as buffer insertion,
gate sizing, and threshold voltage assignment. However, most of these works
focus on the device or gate level. Borkar~\cite{PV:borkar05-micro}
demonstrated that to efficiently reduce the impact of process variations, it is
better to incorporate variability at higher levels of circuit design. Hung et
al.~\cite{hls:weilun06} proposed a \emph{simulated-annealing}-based high-level
synthesis framework that takes process variations into account. Jung et
al.~\cite{HLS:Jung07} made use of statistical timing information to perform
high-level synthesis based on heuristics, implicitly utilizing the
``time-borrowing''~\cite{timing:smo90} technique, which schedules groups of
operations into multiple clock cycles so that slower resource units can utilize
the time slacks of faster units in these groups.

\section{Timing Variation-Aware High-Level\\ Synthesis} \label{sec:C2-problem}
High-level synthesis (HLS) is the process of translating a behavioral
description into a register level structure description. Scheduling and
resource binding are key steps in the synthesis process. The scheduler
sequences the operations of a control/data flow graph (CDFG) in the correct
order and schedules as many operations as possible in the same control step to
extract parallelism. The binding process binds operations to hardware units in
the resource library, completing the mapping from abstract circuit descriptions
to practical designs.

The resource library consists of hardware units with different delay and area
properties, which makes it possible to perform design exploration for more
optimized results during the synthesis process. Traditionally, the worst-case
latency of each function unit is provided to HLS algorithms. However, such
worst-case parameters are becoming inappropriate as larger variability is
encountered in new process technologies.

As the magnitude of process variations grows rapidly, worst-case based analysis
and optimization are no longer acceptable, since they introduce too much
pessimism into the design and lead to greatly increased design effort to meet
the latency requirement. As a result, statistical descriptions and analysis of
function unit delay are introduced to tackle the timing problem in high-level
synthesis.

\subsection{Yield Aware Resource Partitioning} \label{subsec:partition}

\begin{figure}
\centerline{\includegraphics[width=0.55\textwidth]{Chapter-2/Figures/t.pdf}}
\caption{A demonstration of timing yield of resource units.} \label{fig:yield}
\end{figure}

In statistical timing analysis, the delay of a resource unit is described by a
probability density function (PDF). First we define \textbf{timing yield} as
the probability that a resource unit can finish execution in a given period,
that is, the cumulated probability in the PDF, as shown in
Equation~(\ref{eq:yield0}). Figure \ref{fig:yield} shows an example of how to
calculate the timing yield of a resource unit and how to choose between
resource units with different timing yields. If traditional worst-case analysis
is used during resource selection, we should choose \emph{Adder2}, since it has
the smaller worst-case latency. However, under a given clock cycle $T_{clk}$,
the timing yield of \emph{Adder1} is clearly larger than that of \emph{Adder2},
so \emph{Adder1} is the better choice in timing-yield-aware HLS.
\begin{equation}\label{eq:yield0}
%
TimingYield_{i} = P(Delay_{i} < T_{clk}) =  \int_{0}^{T_{clk}} PDF_{i}(t)\,dt
\end{equation}
Empirical data show that the delay of resource units approximately conforms to
a Gaussian distribution~\cite{hls:weilun06}. For instance, given the delay of
an adder conforming to a Gaussian distribution with $\mu=38\,ns$ and
$\sigma=2.5\,ns$, the timing yield at $T_{clk}=40\,ns$ is
$\Phi((40-38)/2.5)\approx 78.81\%$. Therefore, if the distribution
parameters and clock cycle are given in advance, we can easily calculate the
timing yield property of resources in the resource library. Note that for a
multi-cycle function unit, one should replace $T_{clk}$ with $N*T_{clk}$ in
Equation~(\ref{eq:yield0}) ($N$ is the nominal number of clock cycles of
execution time for that particular multi-cycle function unit).
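To make the calculation concrete, the timing yield of Equation~(\ref{eq:yield0}) under the Gaussian delay model reduces to evaluating the standard normal CDF. The sketch below is our own illustration using only the parameters quoted above; the function name is ours, not part of any HLS tool.

```python
from math import erf, sqrt

def timing_yield(mu, sigma, t_clk, n_cycles=1):
    """P(Delay < N * T_clk) for a Gaussian delay: the standard normal
    CDF evaluated at z = (N * T_clk - mu) / sigma, as defined in the text."""
    z = (n_cycles * t_clk - mu) / sigma
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Adder example from the text: mu = 38 ns, sigma = 2.5 ns, T_clk = 40 ns
print(round(timing_yield(38.0, 2.5, 40.0), 4))                # 0.7881
# A multi-cycle budget (N = 2) pushes the yield to essentially 1.0
print(round(timing_yield(38.0, 2.5, 40.0, n_cycles=2), 4))    # 1.0
```

This is how the yield column of a resource library entry would be precomputed once $T_{clk}$ is fixed.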

\subsection{Calculation of Overall Timing Yield}\label{subsec:sharing}
The \textbf{overall timing yield} of a CDFG is the probability that the entire
design can finish execution in the given clock steps without timing constraint
violation. In traditional worst-case based analysis, the overall timing yield
is always 100\%, resulting in an overly conservative design with a pessimistic
performance estimate. With statistical timing analysis, however, our proposed
variation-aware approach explores the tradeoff between overall timing yield and
circuit latency, and a significant improvement in circuit performance can be
achieved by balancing this tradeoff well.

To calculate the overall timing yield of a CDFG, conventional
approaches~\cite{hls:weilun06}  rely on the \emph{sum} and \emph{max}
operations on discrete probability density functions of delay of resources.
However, the \emph{max} operation is not linear and cannot be handled by an ILP
solver. To simplify the problem, we can apply the product rule of probability
and calculate the overall timing yield as:
\begin{equation}\label{eq:yield1}
%
Yield_{Timing} = \prod_{i=1}^{M}Yield_{Timing}(i)
\end{equation}
where $\{i=1,2\ldots M\}$ represents the operations in the CDFG. However, the
precondition of Expression (\ref{eq:yield1}) is that all operations are
independent of each other in timing, which can hardly be satisfied due to
\emph{resource sharing} in HLS. For operations that share the same resource
sequentially, the delay distributions are not independent but identical, so by
the principle of conditional probability the timing yield of the shared
resource should be counted only once in Expression (\ref{eq:yield1}).
Therefore, the overall timing yield can be computed as the product of the
timing yields of all resource instances used in the CDFG, as shown in
Expression (\ref{eq:yield2}):
\begin{equation}\label{eq:yield2}
%
TotalYield = \prod_{r=1}^{N_{res}}Yield(r)^{A(r)}
\end{equation}
where $\{r=1,2\ldots N_{res}\}$ represents the different kinds of resources
used in the synthesized CDFG, $A(r)$ stands for the number of instances of
resource $r$ concurrently needed in the design, and $Yield(r)$ is the timing
yield of resource $r$. Expression (\ref{eq:yield2}) is still not linear.
However, since each $Yield(r)$ is a constant, taking logarithms of both sides
linearizes it so that it can be handled by any LP solver.
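As a sanity check, Expression (\ref{eq:yield2}) and its logarithmic linearization can be compared numerically. The sketch below is illustrative; the resource names and instance counts describe a hypothetical design in the style of this chapter's examples, not measured data.

```python
from math import log, prod

# A(r) and Yield(r) for a hypothetical synthesized design using
# one Add2 instance and two Mul2 instances (illustrative values).
counts = {"Add2": 1, "Mul2": 2}
yields = {"Add2": 0.95, "Mul2": 0.98}

total = prod(yields[r] ** counts[r] for r in counts)          # product form
log_total = sum(counts[r] * log(yields[r]) for r in counts)   # linearized form

print(round(total, 4))   # 0.9124
# The sum of logs equals the log of the product, so a linear constraint
# on log_total is equivalent to a yield constraint on total.
assert abs(log_total - log(total)) < 1e-12
```

The linearized sum is exactly the left-hand side used later in the ILP yield constraint.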

By dynamically calculating the overall timing yield of the CDFG during the
scheduling and binding process, the proposed approach can explore the
yield-aware resource library to find the latency-minimal solution among
different combinations of resource units, under a given constraint on overall
timing yield.

\section{ILP Formulation with Timing Yield\\ Constraint} \label{sec:algorithm}

\begin{table*}[!bt]
\centering \caption{The constants and variables used in our ILP formulation}
\label{table:ILPformulation} \vspace{5pt}\footnotesize
\begin{tabular}{|c|p{9cm}|c|}
\hline
\textbf{Notation} & \textbf{Definition} & \textbf{Type} \\
\hline\hline
$N_{op}$ & Number of operations in the data-flow graph & Constant \\
\hline
$N_{clock}$ & \textbf{Estimated} number of clocks needed to schedule all operations & Constant \\
\hline
$N_{res}$ & Number of resource entries in the resource library & Constant \\
\hline
$i$, $j$, $r$ & Indices of operations, clock steps, and resource library entries, respectively.  & Variable \\
%\hline
%$j$ & Subscript of clock steps. $1\le j\le N_{clock}$ & Variable \\
%\hline
%$r$ & Subscript of resource entries in the resource library. $1\le r\le N_{res}$ & Variable \\
\hline
$E(i,i')$ & Operation dependence of the data-flow graph. If operation $i'$ is dependent on operation $i$, $E(i,i')=1$, otherwise, $E(i,i')=0$& Constant Matrix  \\
\hline
$OPType(i)$ & Type of operation $i$, e.g., for adder $OPType(i)=1$, for multiplier $OPType(i)=2$ & Constant Array \\
\hline
$Type(r)$ & Type of resource $r$. Corresponding to $OPType$. & Constant Array\\
\hline
$Dura(r)$ & Delay of resource $r$. Unit is clock(s). & Constant Array\\
%\hline
%$Area(r)$ & Area of resources $r$. & Constant Array\\
\hline
$Yield(r)$ & Timing yield of resource $r$. & Constant Array\\
\hline
$A(r)$ & Number of instances of resource $r$ used in the design. & Variable Array \\
\hline
$X(i,j,r)$ & Variable associated with schedule and binding information of operation $i$. $X(i,j,r)=1$ if and only if operation $i$ is scheduled to clock $j$ and bound to resource $r$; $0$ otherwise. & Variable Matrix \\
\hline
\end{tabular}
\end{table*}

Integer linear programming (ILP) is a common optimization methodology. In this
section we propose a new ILP-based scheme for concurrent scheduling and
resource binding with awareness of timing yield.
\subsection{Problem Definition}
A control/data flow graph (CDFG) is given as a directed acyclic graph $G(V,E)$, where the vertex set $V=\{i=1,\ldots n\}$ represents the operations in the CDFG and the edge set $E=\{(i, j):i,j\in 1,\ldots n\}$ represents the data dependencies between operations. Given an unscheduled CDFG and a resource library annotated with timing yield information, we must find a synthesized CDFG such that the timing yield constraint is met and the overall latency is minimized.

\subsection{ILP Formulation}

\subsubsection{Basic ILP Framework}
To make our ILP formulation easy to follow, we start by presenting the notation
used in our formulation. Table \ref{table:ILPformulation} lists all the
constants and variables used, and their definitions.

Now we present the constraints. First, we must ensure that the decision
variable $X(i,j,r)$ is Boolean. Unlike formulations in prior
work~\cite{hls:Tosun05} with separate decision variables for scheduling and for
binding, here $X(i,j,r)$ is introduced as a single variable that represents the
scheduling and binding relations simultaneously, in order to reduce the size of
the variable space and simplify the formulation. We then present the
unique-slot constraint of scheduling and binding in Expression (\ref{eq:unique}):
\begin{equation}\label{eq:unique}
%
\sum_{j=1}^{N_{clock}}\sum_{r=1}^{N_{res}}X(i,j,r) =1 \qquad \forall i \in [1, N_{op}]
\end{equation}
%\begin{equation}
%\sum_{r=1}^{N_{res}}B(i,r) =1 \qquad \forall i \in [1, N_{op}]
%\end{equation}
Expression (\ref{eq:start}) calculates the clock step at which an operation is
scheduled to start execution. Expression (\ref{eq:dura}) gives the actual delay
of an operation after it is bound to a resource unit. Expression
(\ref{eq:dependent}) enforces the execution-time constraint imposed by data
dependencies between operations.
\begin{equation}\label{eq:start}
%
Start(i)=\sum_{j=1}^{N_{clock}}\sum_{r=1}^{N_{res}}j\cdot X(i,j,r) \qquad \forall i \in [1, N_{op}]
\end{equation}
\begin{equation}\label{eq:dura}
%
Delay(i)=\sum_{j=1}^{N_{clock}}\sum_{r=1}^{N_{res}}X(i,j,r)\cdot Dura(r) \quad \forall i \in [1, N_{op}]
\end{equation}
\begin{eqnarray}\label{eq:dependent}
% \small
Start(i') - Start(i) \geq Delay(i) \nonumber\\ \qquad \qquad \forall i,i'\in [1,N_{op}]\,|\, E(i,i')=1
\end{eqnarray}
%
%\begin{equation}
%X(i, j) + B(i, r) - 1 \leq K(i, j, r) \qquad \forall i,j,r
%\end{equation}
%
Expression (\ref{eq:concurrent}) constrains the maximum number of instances of
each resource unit used concurrently in a design. Due to the existence of
multi-cycle resource units, we must consider not only the instances used in the
same control step, but also those that begin execution within a window (the
delay of the resource unit) prior to that clock step.
\begin{eqnarray}\label{eq:concurrent}
%
%sum(i in TASKS,l in if(j-Dura(r)+1>0, j-Dura(r)+1, 1)..j) K(i, l, r) <= A(r)		
\sum_{i=1}^{N_{op}}\sum_{l=\max(1,\,j-Dura(r)+1)}^{j}X(i,l,r) \leq A(r) \nonumber\\ \qquad\qquad \forall r\in[1,N_{res}], \forall j \in [1,N_{clock}]
\end{eqnarray}
Expression (\ref{eq:optype}) presents the operator-type constraint for resource
binding, that is, an operation must be bound to a resource unit of the
corresponding operation type. To the best of our knowledge, this constraint is
newly introduced in this work and has not appeared in prior publications.
\begin{equation}\label{eq:optype}
%
%	sum( r in RESES) B(i,r)*Type(r) = OPTYPE(i)
\sum_{j=1}^{N_{clock}}\sum_{r=1}^{N_{res}}X(i,j,r)\cdot Type(r) = OPType(i) \quad \forall i \in [1, N_{op}]
\end{equation}
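On a toy instance, the interplay of the unique-slot, dependency, and operator-type constraints can be checked by exhaustive enumeration rather than an ILP solver. The sketch below is our own illustration (two chained additions, two candidate adders); the concurrency constraint (\ref{eq:concurrent}) is omitted because $A(r)$ is left unbounded here.

```python
from itertools import product

# Toy instance: two dependent additions, op 0 -> op 1.
N_CLOCK = 6
DURA   = [1, 2]        # Dura(r): delay of resource r in clocks
RTYPE  = [1, 1]        # Type(r): both resources are adders
OPTYPE = [1, 1]        # OPType(i): both operations are additions
EDGES  = [(0, 1)]      # E(i, i') = 1

slots = [(j, r) for j in range(1, N_CLOCK + 1) for r in range(len(DURA))]

best = None
# The unique-slot constraint means each operation picks exactly one (j, r).
for assign in product(slots, repeat=2):
    start = [j for j, _ in assign]
    delay = [DURA[r] for _, r in assign]
    # operator-type constraint: bound resource must match the op type
    if any(RTYPE[r] != OPTYPE[i] for i, (_, r) in enumerate(assign)):
        continue
    # dependency constraint: Start(i') - Start(i) >= Delay(i)
    if any(start[b] - start[a] < delay[a] for a, b in EDGES):
        continue
    if best is None or start[-1] < best:
        best = start[-1]

print(best)   # 2: op 0 on the 1-clock adder at step 1, op 1 at step 2
```

An ILP solver performs the same feasibility pruning implicitly, but the enumeration makes the constraint semantics easy to verify on small cases.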

\subsubsection{Timing Yield Constraints and Objective Functions}\label{subsubsec:constraints}
The calculation of the overall timing yield was presented in Section
\ref{subsec:sharing}; therefore, to set the timing yield constraint we only
need to impose a lower bound on Expression (\ref{eq:yield2}) and integrate it,
in logarithmic form, into the basic ILP formulation, as shown in Expression
(\ref{eq:yieldconstraint}):
\begin{equation}\label{eq:yieldconstraint}
%
\sum_{r=1}^{N_{res}}\ln(Yield(r))\cdot A(r) \geq \ln(\textsc{MinYield})
\end{equation}
The optimization objective of this formulation is latency minimization. The
resource usage is constrained by limiting the number of instances of resource
units of the same type (e.g., adder or multiplier) in the design, as shown in
Expression (\ref{eq:alternate}):
{\setlength\arraycolsep{2pt} %\vspace{-8pt}
\small
\begin{eqnarray}\label{eq:alternate}
%
\sum_{r=1 | Type(r)=1}^{N_{res}}A(r) & \leq & \textsc{NumofAdder} \nonumber \\
\sum_{r=1 | Type(r)=2}^{N_{res}}A(r) & \leq & \textsc{NumofMultiplier}
\end{eqnarray}}
and the optimization objective is:
\begin{equation}\label{eq:mindelay}
%
\textrm{minimize}\quad Start(N_{op})
\end{equation}
subject to Constraints (\ref{eq:unique})--(\ref{eq:optype}) and (\ref{eq:yieldconstraint})--(\ref{eq:alternate}).
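To illustrate the yield/latency tradeoff that this objective and the yield constraint encode, consider two chained additions sharing a single adder instance drawn from the resource library of Table~\ref{table:reslib}. Since only one instance is used, the overall yield of Expression (\ref{eq:yield2}) is just $Yield(r)$, and the latency is $2 \cdot Dura(r)$. The sketch below is a hand enumeration of this special case, not the general ILP.

```python
# Dura(r) in clocks and Yield(r) for the three adders of the resource
# library table in the text.
LIBRARY = {"Add1": (1, 0.90), "Add2": (2, 0.95), "Add3": (3, 1.00)}

def best_latency(min_yield):
    """Latency-minimal adder whose yield meets the constraint, for two
    chained additions sharing one adder instance (yield counted once)."""
    feasible = [(2 * dura, name)
                for name, (dura, y) in LIBRARY.items() if y >= min_yield]
    return min(feasible)

print(best_latency(1.00))   # (6, 'Add3'): the worst-case based design
print(best_latency(0.90))   # (2, 'Add1'): 67% latency cut at 90% yield
```

Relaxing the yield bound from 100\% to 90\% admits the faster adder, which is precisely the mechanism the full ILP exploits across the whole CDFG.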

\section{Experimental Results} \label{sec:analysis}
In this section, we present experimental results for the ILP formulation of
timing variation-aware scheduling and binding described in Section
\ref{sec:algorithm}. We conduct the experiments on a set of high-level
synthesis benchmarks, namely, a differential equation solver (DES), an FIR
filter (FIR), a 16-point elliptic wave filter (EWF), an autoregressive lattice
filter (AR), and an IIR filter used in industry (CHEM)~\cite{hls:weilun06}. The proposed
ILP formulation is solved by a commercial LP solver
Xpress-MP~\cite{xpress-book} running on a PC with an Intel Pentium-M 1.7GHz
processor.
\begin{table}[!bhp]
\centering \caption{Resource library with timing yield
information}\label{table:reslib} \vspace{5pt}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
\textbf{Name} & \textbf{Dura} & \textbf{Yield} & & \textbf{Name} & \textbf{Dura} & \textbf{Yield} \\
\hline\hline
Add1 & 1 clock & 0.90 && Mul1 & 4 clocks & 0.92 \\
Add2 & 2 clocks & 0.95 && Mul2 & 6 clocks & 0.98 \\
Add3 & 3 clocks & 1.00 && Mul3 & 7 clocks & 1.00 \\
\hline
\end{tabular}
%\vspace{-8pt}
\end{table}

The resource library used in the experiments contains the delay distribution
for each functional unit. In this work, six functional units (three adders and
three multipliers) are included in the resource library. Each functional unit's
delay distribution was characterized in SPICE using the Monte Carlo method to
model both intra-die and inter-die variations. Given a specific clock cycle
time $T_{clk}$ as a constraint, the timing yield of each function unit can be
obtained using the method described in Section~\ref{subsec:partition}.
Table~\ref{table:reslib} shows an example of the resource library with timing
yield information. Note that all function units except $Add1$ are multi-cycle
ones.

\begin{figure}[!th]
\centering
\includegraphics[width=0.5\textwidth]{Chapter-2/Figures/cdfg.pdf}
\caption{Scheduled results for DES by worst-case based HLS (a) and variation aware HLS
(b).}\label{fig:cdfg}
\end{figure}

We first show the synthesis results of DES produced by two different
approaches, respectively. The timing yield constraint is set to 90\%. In
addition, the resource usage constraint discussed in Section
\ref{subsubsec:constraints}, Expression (\ref{eq:alternate}) is set as:
\textsc{NumofAdder} = 3, \textsc{NumofMultiplier} = 3. The synthesized CDFGs
are shown in Figure \ref{fig:cdfg}. The left part of the figure presents the
worst-case based scheduling and binding, in which only resource units with
100\% timing yield can be selected for binding, resulting in a timing yield of
100\% and a completion latency of 21 clock steps. In contrast, our
variation-aware HLS algorithm in Figure~\ref{fig:cdfg}-(b) uses 1~\emph{Add2}
and 2~\emph{Mul2}, so the timing yield is 0.95 $\times$ 0.98 $\times$ 0.98
$\approx$ 0.9124. The completion latency is 17 clocks, a reduction of 19\% due
to the dynamic exploration and selection of resource units under the timing
yield constraint.


\begin{table}
\centering \caption{Latency reduction with different timing yield constraints:
(A) DES and (B) EWF} \label{table:result1} \vspace{5pt}\footnotesize
\begin{tabular}{c}
\begin{minipage}{5.2in}
\centering %
\begin{tabular}{|c|c|c|c|c|}
\hline
\multicolumn{2}{|c|}{Timing Yield} & Latency & Reduc. & Run \\\cline{1-2}
Constraint & Actual & \#CC & \% & Time(s)\\ \hline
1.00 & 1.0000 & 21 & - & 0.1 \\ \hline
0.95 & 0.9500 & 19 & 9.5\% & 0.6 \\ \hline
0.90 & 0.9124 & 17 & 19.0\%& 3.3 \\ \hline
0.85 & 0.8644 & 16 & 23.8\%& 3.8 \\ \hline
0.80 & 0.8075 & 15 & 28.5\%& 4.1 \\ \hline
\end{tabular}
\end{minipage} \\
(A) DES (12 ops) \\ \\

\begin{minipage}{5.2in}
\centering %
\begin{tabular}{|c|c|c|c|c|}
\hline
\multicolumn{2}{|c|}{Timing Yield} & Latency & Reduc. & Run \\\cline{1-2}
Constraint & Actual & \#CC & \% & Time(s)\\ \hline
1.00 & 1.0000 & 37 & - & 7.9 \\ \hline
0.95 & 0.9500 & 29 & 21.62\% & 42.2 \\ \hline
0.90 & 0.9000 & 18 & 51.35\%& 68.8 \\ \hline
0.85 & 0.8550 & 17 & 54.05\%& 74.7 \\ \hline
0.80 & 0.8100 & 15 & 59.46\%& 107.0 \\ \hline
\end{tabular}
\end{minipage} \\
(B) EWF (26 ops) \\
\end{tabular}
\end{table}

\begin{figure}[!th]
\centering
\includegraphics[width=0.7\textwidth]{Chapter-2/Figures/result.pdf}
\caption{Latency reductions under different yield points for all benchmarks}\label{fig:benchmark}
\end{figure}
In the second experiment, we evaluate the performance of our ILP-based
algorithm on different benchmarks. Due to space limitations, only the results
for DES and EWF are shown in Table \ref{table:result1}, where the first two
columns present the timing yield constraint and the actual timing yield of the
synthesized CDFG, respectively. The latency in \#CC indicates the total number
of clock steps needed for completion, and the fourth column shows the latency
reduction gained compared to the result of worst-case based HLS, which is shown
in the first row with the timing yield constraint set to 1. The last column
records the running time of the algorithm.

As shown in Table \ref{table:result1}(B) for EWF, up to 59\% latency reduction
can be achieved under a timing yield of 80\%. The reason is that all operations
in the CDFG of EWF are ALU operations; according to the resource library, the
delay of ALU units (adders) decreases rapidly as the timing yield requirement
is lowered.

Figure \ref{fig:benchmark} shows the latency improvements of all benchmarks at
different performance yields. In this figure, the average latency reductions
are 10\%, 23\%, and 30\% for timing yields of 95\%, 90\%, and 85\%,
respectively, demonstrating the effectiveness of our ILP-based scheme for
variation-aware HLS.

\section{Summary}

This chapter presented an ILP-based scheme for concurrent scheduling and
binding with awareness of timing variations. The ILP formulation incorporates
the timing yield calculation and is able to achieve a significant reduction in
latency under a given timing yield constraint, compared to traditional
worst-case based HLS algorithms.

Currently the ILP formulation can only produce schedules with all operations
synchronized; that is, data exchange between operations happens only at clock
edges. However, if we consider sequentially chained operations and incorporate
``slack stealing'' techniques, the circuit latency can be further reduced. This
is our future direction for improving the performance of ILP-based
variation-aware high-level synthesis.
