\chapter{Preliminaries}\label{CH::PRE}
\markboth {Chapter \ref{CH::PRE}. Preliminaries}{}

The behavior of a circuit can be specified using a
high-level hardware description language or a common software programming language; this specification
is then translated into a suitable intermediate
format, e.g. a flow graph, so that the design specification can be managed and analyzed efficiently.
This translation is critical for enabling detailed flow analysis, for exposing
intermediate values to HLS tasks (like register allocation) and for making global decisions
about code motion.

\vspace{1em} \noindent
Several types of intermediate representations, together with the Static Single Assignment form, have been proposed to capture the flow properties of a program. Each of these previously unrelated techniques lends efficiency and power to a useful class of dataflow computations (e.g. liveness analysis), to the execution of HLS tasks, and to resource optimizations (e.g. speculative execution).

\vspace{1em} \noindent
This Chapter is organized as follows. In Section \ref{pre:IR}, Intermediate Representation will be introduced, and then the most used flow graphs and the Static Single Assignment form will be presented; in Section \ref{pre:liveness}, the standard Liveness Analysis will be explained; in Sections \ref{pre:sched}, \ref{pre:RA} and \ref{pre:controller}, the three high-level synthesis tasks and the speculative techniques will be defined; and at the end, in Section \ref{pre:concl}, conclusions will be summarized.

\section{Intermediate Representation}\label{pre:IR}
Language based specifications are usually translated into \emph{intermediate
representations} to efficiently manage and analyze the design specification.
Such intermediate representation (IR) is a kind of representation that is independent of the details of source and target languages. Therefore, all transformations can be applied to this IR without any modifications due to different details in the languages. This approach is similar to the one used by the compilers oriented to software production, as detailed in Figure~\ref{fig:compiler}.

\begin{figure}[htb]
\centering
\begin{minipage}[l]{0.9\textwidth}
\includegraphics[width=\columnwidth]{./chapters/preliminaries/images/compiler.jpg}
\end{minipage}
\caption[Compilers for software and hardware]{Analogies between compilers oriented to software synthesis (a) and hardware synthesis (b)}\label{fig:compiler}
\end{figure}

\vspace{1em} \noindent
The intermediate representation has to be simple enough to be easily analyzed by the HLS sub-tasks but, at the same time, it has to store all the necessary information. Graph-based representations have been chosen since graphs are the most powerful and clear representation: in fact, it is very easy to reduce high-level synthesis algorithms to simpler graph-theoretic formulations.

\vspace{1em} \noindent
Graphs are defined as follows:
\begin{itemize}
 \item The vertices $v \in V_o$ are the operations that have to be executed in the behavioral
specification;
 \item The edges $e \in E$ describe relations between source operations and target
ones. Two vertices $v_1$ and $v_2$ are connected by a directed edge $e$ in
graph $G$ if the two operations are related by the property that graph $G$ represents.
\end{itemize}

\vspace{1em} \noindent
\begin{definition}\label{hls:operation_type}
\textbf{Operation type}: the \textnormal{operation type} function $\tau : V_o\rightarrow \Pi(\Xi)$ determines for each operation vertex $v \in V_o$ the operation type that it represents in the behavioral specification.
\end{definition}

\vspace{1em} \noindent
The intermediate representation can be detailed in terms of:
\begin{itemize}
   \item \emph{Flow Graphs}, i.e. the graph-based intermediate representations of the behavioral description;
   \item \emph{Static Single Assignment form}, i.e. an intermediate representation form that enjoys several useful properties and enables many well-known optimizations.
\end{itemize}

\subsection{Flow Graphs}\label{pre:graphs}
Several types of intermediate representations have been proposed in the literature, each one targeting
different types of applications. In this Section a detailed description of the Data Flow Graph (DFG), Control Flow Graph (CFG), Hierarchical Task Graph (HTG), Control Dependence Graph (CDG) and Control/Data Flow Graph (CDFG) will be presented. All the flow graph examples presented below will refer to the behavioral code specified in Section \ref{hls:inputs:desc}, except the hierarchical task graph example.

\subsubsection{\underline{Data Flow Graph}}\label{pre:graphs:dfg}
\index{Data Flow Graph|textbf}
A \textit{Data Flow} language or architecture executes a computation only when all of its operands are available. This technique allows parallel computation to be specified at a very low level, usually in a bi-dimensional graph representation: instructions that can be computed
simultaneously are ordered horizontally, while sequential ones are ordered vertically. Data
dependences between operations are represented by directed edges, and instructions allow data to be transmitted directly
from source operations to target ones. Thus, a node that defines a variable has
outgoing edges to all nodes that will use that variable, as Figure \ref{hls::dfg_example} shows.
\begin{figure}[htb]
\centering
  \includegraphics[width=0.65\textheight]{./chapters/preliminaries/images/dfg.jpg}
  \caption{Data Flow Graph}\label{hls::dfg_example}
\end{figure}

\vspace{1em} \noindent
A \textit{Data Flow Graph} (DFG) (also called \emph{Data Dependence Graph}) is a directed acyclic graph $G_{DFG}(V_{DFG},E_{DFG})$, where the vertices $V_{DFG}=\{op_i|i=1,...,n_{ops}\}$ are the operations in the design, and the edges represent the flow data dependencies between operations. In particular, a directed edge $e_{ij}=(op_{i},op_{j})$, where $op_{i}, op_{j} \in V_{DFG}$, exists in $E_{DFG}$ if data produced by operation $op_{i}$ is used by operation $op_{j}$.
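To make the definition concrete, the edge set $E_{DFG}$ can be derived from a linear list of operations by tracking, for each variable, the operation that last produced it. The following Python sketch is purely illustrative (the triple encoding of an operation as an identifier, a target variable and a list of source variables is a hypothetical convention, not one used elsewhere in this work):

```python
def build_dfg(ops):
    """Build DFG edges from an ordered list of (op_id, target, sources):
    an edge (op_i, op_j) is added when op_j uses the value produced by op_i."""
    producer = {}               # variable -> operation that last defined it
    edges = set()
    for op_id, target, sources in ops:
        for v in sources:
            if v in producer:   # inputs of the design have no producer node
                edges.add((producer[v], op_id))
        producer[target] = op_id
    return edges
```

For instance, the sequence $x = a + b$, $y = x \cdot 2$, $z = x + y$ yields the three flow dependence edges $(1,2)$, $(1,3)$ and $(2,3)$.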

\subsubsection{\underline{Control Flow Graph}}\label{pre:graphs:cfg}
\index{Control Flow Graph|textbf}
The \textit{Control Flow Graph} (CFG) is a data
structure widely used by compilers. It is an abstract representation of a program.
Each vertex of the graph is an operation (or a basic block), and branches in the control flow are
represented by directed edges. In particular, there is an edge from node A to node B if there is an execution in which B is executed immediately after A.

\vspace{1em} \noindent
There are also two special vertices: the ENTRY node (where
control enters the flow) and the EXIT node (where the flow ends). The CFG is
a static representation and it only represents the different
control flows present in the behavioral specification (e.g.: see Fig. \ref{hls::cfg_example}).

\begin{figure}[htb]
\centering
\includegraphics[width=0.13\textheight]{./chapters/preliminaries/images/cfg.jpg}
\caption{Control Flow Graph}\label{hls::cfg_example}
\end{figure}

\subsubsection{\underline{Hierarchical Task Graph}}\label{pre:graphs:htg}
\index{Hierarchical Task Graph|textbf}
Whereas the CFG is useful for maintaining the flow of control between basic blocks, the \textit{Hierarchical Task Graph} (HTG) is employed to maintain the structure of the design description. In fact, HTGs have been defined as intermediate parallel program
representations that encapsulate minimal data and control dependences,
and can be used to extract and exploit functional
and task-level parallelism.

\vspace{1em} \noindent
An HTG is a directed acyclic graph
$G_{HTG}(V_{HTG},E_{HTG})$, where the vertices $V_{HTG}=\{htg_i|i=1,2,...,n_{htgs}\}$ can be one of the three types: \textit{single, compound} and \textit{loop} nodes.
\begin{enumerate}
\item \textit{Single nodes} represent nodes that have no subnodes and are used
        to encapsulate basic blocks;
\item \textit{Compound nodes} are recursively defined as HTGs, i.e., they
        contain other HTG nodes. They are used to represent structures
        like if-then-else blocks, switch-case blocks or a series of HTGs.
\item \textit{Loop nodes} are used to represent the various types of loops (for,
        while-do, do-while). Loop nodes consist of a loop head and a
        loop tail that are single nodes and a loop body that is a compound
        node.
\end{enumerate}

\begin{figure}[htb]
\centering
\includegraphics[width=0.6\textheight]{./chapters/preliminaries/images/htg.jpg}
\caption{Hierarchical Task Graph example}\label{hls::htg}
\end{figure}

\vspace{1em} \noindent
The edge set $E_{HTG}$ in $G_{HTG}$ represents the flow of control between HTG nodes. An edge $(htg_i,htg_j)$ in $E_{HTG}$, where $htg_i, htg_j \in V_{HTG}$, signifies that $htg_j$ executes after $htg_i$ has finished execution. Each node $htg_i$ in $V_{HTG}$ has two distinguished nodes, $htg_{start}$ and $htg_{stop}$, belonging to $V_{HTG}$ such that there exists a path from $htg_{start}$ to every node in $htg_{i}$ and a path from every node in $htg_{i}$ to $htg_{stop}$.
The $htg_{start}$ and $htg_{stop}$ nodes for all compound and loop HTG nodes are always single nodes. The $htg_{start}$ and $htg_{stop}$ for a loop HTG node are the loop head and the loop tail respectively and those of a single node are the node itself.

\vspace{1em} \noindent
An example of an HTG can be seen in Figure~\ref{hls::htg}. Figure \ref{hls::htg}.a shows an example of a C description and Figure \ref{hls::htg}.b shows the corresponding HTG representation. An HTG representation with the control and data flow graphs overlaid on top can be seen in Figure \ref{hls::htg}.c. Here, an empty basic block $bb_4$ is added to the CFG in the \textit{Join} node of the If-HTG node.

\subsubsection{\underline{Control Dependence Graph}}\label{pre:graphs:cdg}
\index{Control Dependence Graph|textbf}
A \textit{Control Dependence Graph} (CDG)
is a directed graph $G$ where each node represents an operation in the behavioral
specification. It represents control dependencies among operations, i.e., which
operation controls the execution of another operation.
Before defining the CDG, the \textit{post-dominator} definition has to be presented.

\vspace{1em} \noindent
\begin{definition}
\textbf{Post-domination}: a node $V$ is \textnormal{post-dominated} by a node $W$
in graph $G$ if every directed path from $V$ to the $EXIT$ node (not including $V$)
contains $W$.
\end{definition}

\vspace{1em} \noindent
Note that this definition of post-dominance does not include the initial node on
the path. In particular, a node never post-dominates itself.

\vspace{1em} \noindent
\begin{definition}
\textbf{Control dependence}: let $G$ be a control flow graph and let $X$ and $Y$ be
nodes in $G$. $Y$ is \textnormal{control dependent} on $X$ if and only if:
\begin{itemize}
 \item there exists a directed path $P$ from $X$ to $Y$ with every $Z$ in $P$
(excluding $X$ and $Y$) post-dominated by $Y$, and
 \item $X$ is not post-dominated by $Y$.
\end{itemize}
\end{definition}

\begin{figure}[tb]
\centering
      \includegraphics[width=0.60\textheight]{./chapters/preliminaries/images/cdg.jpg}
  \caption{Control Dependence Graph example}\label{hls::cdg_example}
\end{figure}

\vspace{1em} \noindent
If $Y$ is control dependent on $X$ then $X$ must have two exits. Following one of
the exits from $X$ always results in $Y$ being executed, while taking the other exit
may result in $Y$ not being executed. \textit{Condition 1} can be satisfied by a
path consisting of a single edge. \textit{Condition 2} is always satisfied when
$X$ and $Y$ are the same node (e.g.: see Fig.~\ref{hls::cdg_example}).
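The two conditions above can be checked algorithmically: post-dominator sets are computed by a standard iterative fixpoint on the CFG, and control dependences then follow from examining, for each node, the post-dominators of its successors. The Python sketch below is illustrative only; it assumes the EXIT node is the unique sink of the CFG, and for convenience its internal sets treat every node as a post-dominator of itself (the definition above is then recovered by the strictness check in the last function):

```python
from functools import reduce

def post_dominators(succ, exit_node):
    """Iterative fixpoint: pdom[n] is the set of post-dominators of n,
    here including n itself (reflexive convention, for convergence)."""
    nodes = set(succ)
    pdom = {n: set(nodes) for n in nodes}
    pdom[exit_node] = {exit_node}
    changed = True
    while changed:
        changed = False
        for n in nodes - {exit_node}:
            new = {n} | reduce(set.__and__, (pdom[s] for s in succ[n]))
            if new != pdom[n]:
                pdom[n], changed = new, True
    return pdom

def control_dependents(succ, exit_node):
    """Y is control dependent on X iff X has a successor S with Y in pdom(S)
    and Y does not strictly post-dominate X."""
    pdom = post_dominators(succ, exit_node)
    cd = {n: set() for n in succ}
    for x in succ:
        for s in succ[x]:
            for y in pdom[s]:
                if y == x or y not in pdom[x]:
                    cd[x].add(y)
    return cd
```

On a diamond-shaped CFG (a conditional in node A branching to B and C, which join in D before EXIT), the sketch reports that B and C are control dependent on A, as expected.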

\subsubsection{\underline{Control/Data Flow Graph}}\label{pre:graphs:cdfg}
\index{Control/Data Flow Graph|textbf}
A \textit{Control/Data Flow Graph} (CDFG) (also called \emph{System Dependence Graph}) is a commonly used internal representation to capture the behavior.
It is the union of the two graphs: Control Dependence Graph (CDG) and Data Flow Graph (DFG).
The control-dependence graph (CDG) portion of the CDFG captures sequencing, conditional branching, and looping constructs in the behavioral description, and the data-flow graph
(DFG) portion captures data-manipulation activity described by a set of assignment statements (operations).

\begin{figure}[htb]
\centering
  \includegraphics[width=0.60\textheight]{./chapters/preliminaries/images/cdfg.jpg}
  \caption{Control/Data Flow Graph example}\label{hls::cdfg_example}
\end{figure}

\vspace{1em} \noindent
A CDFG is a directed graph, whose nodes represent operations, and
edges represent dependencies between operations. In CDFG descriptions, the dependencies are of two types: data and control. An edge represents a data dependency if the source node of the edge
produces data that the sink node consumes. Existence of a control dependency between nodes implies that the execution of the sink node depends on the outcome of the execution of the source
node. Data dependencies in the CDFG are indicated by blue arcs, and control dependencies, by red arcs. Variable declarations and initializations do not correspond to operations in the
CDFG.

\vspace{1em} \noindent
The CDFG is used since it represents both data and control dependences in a single
graph, without containing false control dependences as the control flow graph
does (e.g.: see Fig.~\ref{hls::cdfg_example}).

\subsection{Static Single-Assignment Form}\label{pre:SSA}
Many liveness and dataflow analyses need to find the use sites of each defined variable or the definition sites of each variable used in an expression. The \emph{def-use chain} and the \emph{use-def chain} are data structures that make this efficient: for each statement in the flow graph, it is easy to obtain a list of pointers to all the \emph{use} sites of the variables defined there, and a list of pointers to all the \emph{definition} sites of the variables used there. In this way it is possible to hop quickly from a use to its definitions and from a definition to its uses.

\vspace{1em} \noindent
An improvement on the idea of def-use chains and use-def chains is \emph{Static Single-Assignment form}, or \emph{SSA form}, an intermediate representation in which each variable has only one definition in the program text. The one (static) definition-site may be in a loop that is executed many (dynamic) times, thus the name \emph{static} single-assignment form instead of single-assignment form (in which variables are never redefined at all).

\vspace{1em} \noindent
The SSA form is useful for several reasons:
\begin{enumerate}
  \item Liveness and dataflow analysis and optimization algorithms can be made simpler when each variable has only one definition.
  \item If a variable has N uses and M definitions (which occupy about N + M instructions in a program), it takes space (and time) proportional to N$ \cdot $M to represent def-use chains - a quadratic blowup. For almost all realistic programs, the size of the SSA form is linear in the size of the original program.
  \item Uses and defs of a variable in SSA form relate in a useful way to the dominator structure of the control-flow graph, which simplifies algorithms such as interference graph construction.
  \item Unrelated uses of the same variable in the source program become different variables in SSA form, eliminating needless relationships. An example is the program

      $\mathbf{for}\ i \leftarrow 1\ \mathbf{to}\ N\ \mathbf{do}\ A[i] \leftarrow 0$

      $\mathbf{for}\ i \leftarrow 1\ \mathbf{to}\ M\ \mathbf{do}\ s \leftarrow s + B[i]$

      where there is no reason that both loops need to use the same machine register or intermediate-code temporary variable to hold their respective loop counters, even though both are named $ i $.
\end{enumerate}

\vspace{1em} \noindent
Informally, the code for a procedure is said to be in SSA form if it meets
two criteria:
\begin{enumerate}
  \item each name has exactly one definition point, and
  \item each use refers to exactly one name.
\end{enumerate}

\vspace{1em} \noindent
The first criterion creates a correspondence between names and definition points. The
second criterion forces the insertion of new definitions at points in the code where
multiple values, defined along different paths, come together.

\vspace{1em} \noindent
To satisfy the first criterion, the compiler must rewrite the code by inventing new
names for each definition and substituting these new names for subsequent uses of the
original names. To build SSA form from a straight-line fragment of code is trivial; each
time a name gets defined, the compiler invents a new name that it then substitutes
into subsequent references. At each re-definition of a name, the compiler uses the next
new name and begins substituting that name. For example, consider the code in the
left column of Figure~\ref{fig:ssa_1}.

\begin{figure}[htb]
  \centering
  \includegraphics[width=10cm]{./chapters/preliminaries/images/ssa_straight.JPG}
  \caption{Straight-line code and its conversion to SSA form}\label{fig:ssa_1}
\end{figure}

\vspace{1em} \noindent
Conversion to SSA form produces the code in the right column: this means that in straight-line code, such as within a basic block, each instruction can define a fresh new variable instead of redefining an old one.
Each new definition of a variable (such as $ x $) is modified to define a fresh new variable ($x_{0}$, $x_{1}$), and each use of the variable is modified to use the most recently defined version.
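The straight-line renaming just described can be sketched in a few lines of Python, assuming a hypothetical three-address representation in which each statement is a (target, operand, operator, operand) tuple (this encoding is an assumption made only for the illustration):

```python
from collections import defaultdict

def to_ssa_straightline(code):
    """Rename a straight-line list of (target, op1, operator, op2) tuples so
    that each target is defined exactly once; each operand is replaced by the
    most recently defined SSA name of that variable."""
    version = defaultdict(int)   # next subscript to use for each base name
    current = {}                 # base name -> current SSA name
    out = []
    for tgt, a, op, b in code:
        a = current.get(a, a)    # constants and undefined names pass through
        b = current.get(b, b)
        new = f"{tgt}{version[tgt]}"
        version[tgt] += 1
        current[tgt] = new
        out.append((new, a, op, b))
    return out
```

For example, the sequence $x = a + b$; $y = x \cdot 2$; $x = x + 1$ is renamed to $x_0 = a + b$; $y_0 = x_0 \cdot 2$; $x_1 = x_0 + 1$, so the redefinition of $x$ becomes a fresh name.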

\vspace{1em} \noindent
The presence of control flow complicates both the renaming process and the interpretation
of the resulting code. In fact, when two or more control-flow paths merge together, it is not obvious how to have only one assignment for each variable. Where a statement has more than one predecessor, there is no notion of "most recent".
\begin{figure}[htb]
  \centering
  \includegraphics[width=10cm]{./chapters/preliminaries/images/ssa_more.JPG}
  \caption{Conversion to SSA form in the presence of more control flows}\label{fig:ssa_2}
\end{figure}

\vspace{1em} \noindent
If a name in the original code is defined along two
converging paths, multiple SSA names for it reach any subsequent reference.
To solve this problem, the construction introduces a new definition point at
the merge point in the control-flow graph (CFG). The definition uses a pseudo function,
called a $\phi$-function. The arguments of the $\phi$-function are the names flowing into
the convergence, and the $\phi$-function defines a single, new name. Subsequent uses of
the original name will be replaced with the new name defined by the $\phi$-function. This
ensures the second criterion stated earlier: each use refers to exactly one name (i.e., the single-assignment property). To understand the impact of $\phi$-functions, consider the code fragment shown in Figure~\ref{fig:ssa_2}.
Two different definitions of $v$ reach the use. The construction inserts a $\phi$-function
for $v$ at the join point; it selects from its arguments based on the path that executes at run-time.

\vspace{1em} \noindent
Conceptually, the SSA construction involves two steps.
\begin{enumerate}
\item The first step decides where
$\phi$-functions are needed. At each merge point in the CFG, it must consider, for each
value, whether or not to insert a $\phi$-function.
\item The second step systematically renames
all the values to correspond to their definition points. For a specific definition, this
involves rewriting the left-hand side of the defining statement and the right-hand side
of every reference to the value. At a merge point, the value may occur as an argument
to a $\phi$-function. When this happens, the name propagates no further along that path.
(Subsequent uses refer to the name defined by the $\phi$-function.)
\end{enumerate}
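One classical way to decide where $\phi$-functions are needed (the first step above) places them at the iterated dominance frontier of each variable's definition sites, following Cytron et al.; the text above does not fix a particular algorithm, so the sketch below is one possible realization, assuming the dominance frontiers of the CFG nodes have already been computed:

```python
def place_phis(def_sites, dom_frontier):
    """Iterated-dominance-frontier phi placement: for each variable, insert a
    phi at every node in the iterated dominance frontier of its def sites."""
    phis = {v: set() for v in def_sites}
    for v, sites in def_sites.items():
        work = list(sites)
        while work:
            n = work.pop()
            for f in dom_frontier[n]:
                if f not in phis[v]:
                    phis[v].add(f)
                    work.append(f)   # the phi is itself a new definition of v
    return phis
```

On the diamond example of Figure~\ref{fig:ssa_2}, a variable defined in both branches of the conditional receives a single $\phi$-function at the join block, which is exactly the dominance frontier of the two branch blocks.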

\section{Liveness Analysis}\label{pre:liveness}
The behavior of a circuit is translated into a suitable intermediate
format, e.g. a flow graph, that uses an unbounded number of temporaries. This behavior must run on a digital circuit with a bounded number of registers.

\vspace{1em} \noindent
Two temporaries $a$ and $b$ can fit into the same register if $a$ and $b$ are never "in use" at the same time. Thus, many temporaries can fit in few registers; if they do not all fit, a larger number of registers is required.

\vspace{1em} \noindent
Therefore, it is necessary to analyze the intermediate representation program of the behavioral description to determine which temporaries are in use at the same time. A variable is \emph{live} if it holds a value that may be needed in the future, so this analysis is called \emph{liveness analysis}.
Liveness of variables "flows" around the edges of the flow graph; determining the live range of each variable is an example of a \emph{dataflow} problem.

\vspace{1em} \noindent
An assignment to a variable or temporary \emph{defines} that variable. An occurrence of a variable on the right-hand side of an assignment (or in other expressions) \emph{uses} the variable.

\vspace{1em} \noindent
The \emph{def} of a variable is the set of graph nodes that define it; conversely, the \emph{def} of a graph node is the set of variables that it defines; and similarly for the \emph{use} of a variable or graph node.

\vspace{1em} \noindent
A variable is \emph{live} on an edge if there is a directed path from that edge to a \emph{use} of the variable that does not go through any \emph{def}. A variable is \emph{live-in} at a node if it is live on any of the in-edges of that node; it is \emph{live-out} at a node if it is live on any of the out-edges of the node.
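These definitions translate directly into a backward dataflow fixpoint: the live-out set of a node is the union of the live-in sets of its successors, and the live-in set is the node's uses plus whatever is live-out but not defined there. The following Python sketch is illustrative only (the standard equations and their solution strategies are surveyed later in the document):

```python
def liveness(succ, use, defs):
    """Backward fixpoint over a flow graph given as a successor map:
       live_out[n] = union of live_in[s] over successors s of n
       live_in[n]  = use[n] | (live_out[n] - defs[n])"""
    live_in = {n: set() for n in succ}
    live_out = {n: set() for n in succ}
    changed = True
    while changed:
        changed = False
        for n in succ:
            out = set().union(*(live_in[s] for s in succ[n])) if succ[n] else set()
            new_in = use[n] | (out - defs[n])
            if new_in != live_in[n] or out != live_out[n]:
                live_in[n], live_out[n] = new_in, out
                changed = True
    return live_in, live_out
```

For the straight-line fragment $a = 1$; $b = a + 1$; $\mathit{return}\ b$, the fixpoint yields $a$ live on the first edge and $b$ live on the second, and nothing live after the last node.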

\vspace{1em} \noindent
Different approaches to liveness analysis equations and solutions will be presented in Section \ref{soa:liveness}.

\vspace{1em} \noindent
Liveness information is used for several kinds of optimization. For some optimizations, it is necessary to know exactly which variables are live at each node in the flow graph. In particular, for many optimizations it is important to know whether a specific assignment to a temporary $ t $ can directly affect the value of $ t $ at another point in the program.

\vspace{1em} \noindent
An \emph{unambiguous definition} of $ t $ is a particular statement in the program of the form $ t \leftarrow a \oplus b $ or $ t \leftarrow M[a] $. Given such a definition $ d $, it can be said that $ d $ reaches a statement $ u $ in the program if there is some path of the flow graph from $ d $ to $ u $ that does not contain any other unambiguous definition of $ t $.
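Reaching definitions can be computed as a forward dataflow fixpoint dual to liveness: a definition reaches a node if it is generated on some path into the node and not killed along the way. The sketch below is illustrative, assuming per-node gen/kill sets have been precomputed (the kill set of a node contains the other unambiguous definitions of the variable it defines):

```python
def reaching_definitions(succ, gen, kill):
    """Forward fixpoint over a flow graph given as a successor map:
       in[n]  = union of out[p] over predecessors p of n
       out[n] = gen[n] | (in[n] - kill[n])"""
    pred = {n: set() for n in succ}
    for n, ss in succ.items():
        for s in ss:
            pred[s].add(n)
    rin = {n: set() for n in succ}
    rout = {n: set() for n in succ}
    changed = True
    while changed:
        changed = False
        for n in succ:
            new_in = set().union(*(rout[p] for p in pred[n])) if pred[n] else set()
            new_out = gen[n] | (new_in - kill[n])
            if new_in != rin[n] or new_out != rout[n]:
                rin[n], rout[n] = new_in, new_out
                changed = True
    return rin, rout
```

For two consecutive unambiguous definitions of the same variable, the second kills the first, so only the second reaches any later use.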

\vspace{1em} \noindent
Information about reaching definitions can be kept as \emph{use-def chains}, that is, for each use of a variable $ x $, a list of the definitions of $ x $ reaching that use. Use-def chains do not allow faster dataflow analysis per se, but allow efficient implementation of the optimization algorithms that use the results of the analysis.

\vspace{1em} \noindent
A generalization of use-def chain is static single-assignment form, described in Section \ref{pre:SSA}. SSA form not only provides more information than use-def chains, but the dataflow analysis that computes it is very efficient.

\vspace{1em} \noindent
One way to represent the results of liveness analysis is via \emph{def-use chains}: a list, for each definition, of all possible uses of that definition. As previously seen in Section \ref{pre:SSA}, SSA form also contains def-use information.

\section{Operation Scheduling}\label{pre:sched}
A \textit{scheduling function} $\theta : V_o \rightarrow \Pi(\mathbb{N}^n)$ assigns to each intermediate representation node $v \in V_o$ a sequence of cycle steps in which the node is executed. If these cycle steps are contiguous, they form the \textit{execution interval} of the operation \textit{v}. A schedule is called a \textit{simple} schedule if all operations have an execution interval of length one. In this work, only execution in contiguous cycle steps will be considered.

\vspace{1em} \noindent
Most scheduling techniques can be classified into two broad categories: resource-constrained and time-constrained.
\begin{description}
\item[Resource-constrained scheduling techniques] assume that the
set of resources (functional units and/or registers) that will
be used to implement the design is specified, and attempt to
minimize the number of clock cycles required to perform the
computation;
\item[Time-constrained scheduling techniques] assume
that a fixed number of clock cycles are available to perform the
computation, and attempt to minimize the number of resources
required.
\end{description}
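A common resource-constrained technique is list scheduling: at each cycle, the operations whose predecessors have completed compete for the available functional units. The following Python sketch is a deliberately simplified illustration (single-cycle operations, one generic unit type, priority simply by operation name; none of these choices is prescribed by the text above):

```python
def list_schedule(deps, n_units):
    """Resource-constrained list scheduling sketch for a dependence DAG:
    each operation takes one cycle; at most n_units operations issue per
    cycle; an operation is ready once all its predecessors have finished."""
    scheduled = {}               # operation -> assigned cycle step
    remaining = set(deps)
    cycle = 0
    while remaining:
        ready = [op for op in remaining
                 if all(p in scheduled and scheduled[p] < cycle
                        for p in deps[op])]
        for op in sorted(ready)[:n_units]:   # naive priority: by name
            scheduled[op] = cycle
            remaining.discard(op)
        cycle += 1
    return scheduled
```

With two units, two independent operations issue together in the first cycle, while their common successor and its own successor are serialized in the following cycles; with one unit, the independent pair would be serialized too, illustrating the cycles-versus-resources tradeoff.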

\vspace{1em} \noindent
Scheduling techniques and tools for data-flow-intensive designs
are primarily concerned with exploiting the tradeoff between
parallelism (or concurrency) and resource requirements, while
scheduling techniques and tools for control-flow-intensive designs
are based on exploiting the mutual exclusion of operations
in the description that is imposed by conditional constructs.

\vspace{1em} \noindent
\begin{definition}\label{hls:mutual_exclusion}
\textbf{Mutual exclusion:} two operations are called mutually exclusive if they are executed under mutually exclusive conditions. A \textnormal{mutual exclusion function} $m : V_o \rightarrow \Pi(\mathbb{N})$ is defined such that:
\begin{equation}
   m(v_i) \wedge m(v_j) = 0
\end{equation}
 when operations $v_i$ and $v_j$ are executed under mutually exclusive conditions.
\end{definition}

\vspace{1em} \noindent
For example, operations of the same type (e.g., addition,
subtraction, etc.) that are mutually exclusive may be scheduled
in the same clock cycle without requiring a separate functional
unit to implement each of them. Similarly, mutually exclusive
paths of computation may be scheduled independently, optimizing
each path differently.

\subsection{Speculative Execution Definition}\label{pre:spec}
The quality of synthesis results (in terms of circuit delay and area) is adversely affected
by the presence of conditionals and loops in the behavioral specification. Designers are often given minimal control over the transformations that affect these results.

\vspace{1em} \noindent
To alleviate the problem of poor synthesis results in the presence
of complex control flow in designs, there is a need for high-level and
compiler transformations that can optimize the synthesis results irrespective
of the choice of control flow in the input description.

\vspace{1em} \noindent
Several scheduling algorithms have been proposed to address this issue. They
employ beyond-basic-block code motion techniques, such as \emph{speculation},
to extract the inherent parallelism in designs, increase resource
utilization (\cite{BIB::Waka_glob}, \cite{BIB::SP_1}) and thereby reduce the total number of control steps.

\vspace{1em} \noindent
As explained in Section~\ref{pre:liveness}, a liveness analysis of a flow graph collects information about the execution of the program. The results of these analyses can be used to make optimizing transformations of the specified behavioral description: this Section presents speculative techniques to improve operation scheduling during high-level synthesis.

\vspace{1em} \noindent
Generally, \textit{speculation} refers to the unconditional execution of operations
that were originally supposed to have executed conditionally.
However, there are situations when there is a need to
move operations into conditionals. This may be done by \textit{reverse speculation},
where operations before conditionals are duplicated into subsequent
conditional blocks and, hence, executed conditionally, or this
may be done by \textit{conditional speculation}, where an operation from
after the conditional block is duplicated up into preceding conditional
branches and executed conditionally. Another code-motion technique, called \textit{early condition execution}, evaluates conditional checks as soon as their data dependencies are satisfied. In this way,
all of the operations in the branches of the conditional are ready to be
scheduled immediately.

\vspace{1em} \noindent
A number of similar code transformations have been proposed
for compilers as well. Whereas compilers often pursue maximum
parallelization by applying speculative-code motions, in high-level
synthesis, such code transformations have to be selected and guided
based on their effects on the control, interconnect, and area costs.

\subsubsection{Speculative Execution}\label{sp:def:exec}
\emph{Speculative execution} or \emph{speculation} refers to the execution of an operation
before the branch condition that controls the operation has been
evaluated. In the approach to speculation for high-level synthesis described in \cite{BIB::SP_6}, the result of a speculated operation is stored in a new register. If the condition
of the speculated operation evaluates to true, then the
stored result is committed to the variable from the original operation;
otherwise, the stored result is discarded.

\vspace{1em} \noindent
Speculation can be demonstrated by the example represented in Fig.~\ref{fig:sp_1}. In Fig.~\ref{fig:sp_1}(a),
variables $d$ and $g$ are calculated based on the result of the calculation of
the conditional $c$. Since the operations that produce $d$ and $g$ execute on
different branches of a conditional block, these operations are \textit{mutually
exclusive}. Hence, these operations can be scheduled on the same hardware
resource with appropriate multiplexing of the inputs and outputs,
as shown by the circuit in Fig.~\ref{fig:sp_1}(a).
\begin{figure}[ht]
  \centering
  \includegraphics[width=12cm]{./chapters/preliminaries/images/speculation.JPG}
  \caption{Speculation example}\label{fig:sp_1}
\end{figure}

\vspace{1em} \noindent
Now, consider that an additional adder is available. Then, the operations
within the conditional branches can be calculated speculatively
and concurrently with the calculation of the conditional $c$, as shown
in Fig. \ref{fig:sp_1}(b). The corresponding hardware circuit is also shown in this
Figure. Based on the evaluation of the conditional, one of the results
will be discarded and the other committed. It is evident from the corresponding
hardware circuits in Fig.~\ref{fig:sp_1}(a) and (b) that as a result of this
speculation, the longest path gets shortened from being a sequential
chain of a comparison followed by an addition to being a parallel computation
of the comparison and the additions.

\vspace{1em} \noindent
This example also demonstrates the additional costs of speculation.
Speculation requires more functional units and potentially more storage
for the intermediate results. Uncontrolled aggressive speculation can
also lead to worse results due to multiplexing and control overheads.
On the other hand, judicious use of speculation can improve resource
utilization.

\subsubsection{Reverse Speculation}\label{sp:def:rev}
\emph{Reverse speculation} refers to moving an operation $op_i$ from its basic
block $bb_j$ into the successors of $bb_j$. This code motion is used
to duplicate operation $op_i$ into the branches of an If-HTG when the
If-HTG is the successor of $bb_j$.

\vspace{1em} \noindent
Reverse speculation is useful in instances where an operation inside
a branch of an If-HTG is on the longest path through the design,
whereas an operation before the If-HTG is not \cite{BIB::SP_6}.

\vspace{1em} \noindent
Reverse speculation is demonstrated
by the example in Fig.~\ref{fig:sp_2}(a). In this design, operation $b$, which is on the
shorter dependency path ($<b; g; h>$), is placed in the basic block before
the If-HTG, whereas operation $d$, which is on the longer dependency
path ($<d; e; f; h>$), is placed in the true branch of the If-HTG. If operation $b$ is reverse
speculated into the conditional branches, as shown in
Fig.~\ref{fig:sp_2}(b), the adder in basic block $bb_0$ is left idle. This enables the speculative
execution of operation $d$ in $bb_0$, as shown in Fig.~\ref{fig:sp_2}(c). The dashed
lines in Fig.~\ref{fig:sp_2} demarcate the state assignments (S0 through S4) for the
three designs. Clearly, the final design in Fig.~\ref{fig:sp_2}(c), after reverse speculation
of $b$ and speculation of $d$, requires one state less than the original
design in Fig.~\ref{fig:sp_2}(a).
\begin{figure}[ht]
  \centering
  \includegraphics[width=12cm]{./chapters/preliminaries/images/reverse_speculation.JPG}
  \caption{Reverse Speculation example}\label{fig:sp_2}
\end{figure}

\vspace{1em} \noindent
Note that, while applying reverse speculation in the example above,
a data dependency analysis determines that the result of operation $b$ is
used only in the false branch of the If-HTG. Hence, instead of duplicating
$b$ into both branches, $b$ is moved only into the false branch of
the If-HTG, as shown in Fig.~\ref{fig:sp_2}(b). In the general case, reverse speculation
leads to duplication of the operation into both branches of an If-HTG. It is also important to make a distinction between moving
operations into a later scheduling step and the downward operation duplication
done by reverse speculation. When an operation encounters
a fork node while being moved down, it has to be duplicated into all
the control paths that lead out of the fork node (unless its result is not
needed in one of the branches).

\subsubsection{Early Condition Execution}\label{sp:def:early}
Reverse speculation can be coupled with another novel
transformation, namely, \emph{early condition execution}. This transformation attempts to
schedule operations such that the conditional check can be evaluated
or scheduled as soon as possible. Any operations before the conditional
check that are unscheduled are moved into the branches of the
If-HTG by reverse speculation. Evaluating a conditional check early
using early condition execution resolves the control dependency for operations
within branches of the If-HTG. These operations are, thereby,
available for scheduling sooner \cite{BIB::SP_6}.
\begin{figure}[ht]
  \centering
  \includegraphics[width=12cm]{./chapters/preliminaries/images/early_condition.JPG}
  \caption{Early Condition example}\label{fig:sp_3}
\end{figure}

\vspace{1em} \noindent
Early condition execution is demonstrated by the example in
Fig.~\ref{fig:sp_3}(a). In this example, comparison operation $c$ computes a conditional
that is checked in basic block $bb_1$ (the Boolean conditional
check is denoted by a triangle). This comparison operation can be scheduled concurrently with operation $a$ in state S0 in basic block
$bb_0$, as shown in Fig.~\ref{fig:sp_3}(b). Now, the conditional check in basic block
$bb_1$ can be executed early in state S1. However, operation $b$ in
basic block $bb_0$ has not been scheduled as of yet. Therefore, this
operation is reverse speculated into basic block $bb_3$ (and not into $bb_2$
since its result is used only in $bb_3$). These code motions lead to an
overall shorter schedule length, as shown by the state assignments in
Figs.~\ref{fig:sp_3}(a) and (b).

\subsubsection{Conditional Speculation}\label{sp:def:cond}
Often design descriptions have instances where there are idle resources
in the scheduling steps of the basic blocks that comprise the
branches of an If-HTG. Speculating out of If-HTGs also leaves resources
idle in the basic blocks of the conditional branches. To utilize
these idle resources, the authors of \cite{BIB::SP_6} propose duplicating operations that lie in
basic blocks after the conditional branches up into the basic blocks that
comprise the conditional branches. They call this code motion \emph{conditional
speculation}. This is similar to the duplication-up code motion
used in compilers and the node duplication transformation discussed
by Wakabayashi et al. \cite{BIB::Waka_glob}.
\begin{figure}[ht]
  \centering
  \includegraphics[width=12cm]{./chapters/preliminaries/images/conditional_speculation.JPG}
  \caption{Conditional Speculation example}\label{fig:sp_4}
\end{figure}

\vspace{1em} \noindent
Conditional speculation can be demonstrated by the example in Fig.~\ref{fig:sp_4}(a).
In this example, operations $x$ and $y$ both write to the variable $a$ in the
conditional branches $bb_1$ and $bb_2$. Consider that this design is allocated
one adder, one subtracter, and one comparator. Then, operations
$x$ and $y$ can be speculatively executed as shown in Fig.~\ref{fig:sp_4}(b). The speculation
of these operations leaves the resources in basic blocks $bb_1$ and
$bb_2$ idle. Hence, the operation $z$ that lies in basic block $bb_4$ can be
duplicated up or conditionally speculated (CS) into both branches of
the If-HTG and scheduled on the idle adder, as illustrated in Fig.~\ref{fig:sp_4}(c).
Operation $z$ is dependent on either the result of operation $x$ or operation
$y$, depending on how the condition evaluates (since operation $z$
is dependent on the variable $a$). Hence, the duplicated operations $z1$
and $z2$ directly read the results of operations $x$ and $y$, respectively. Clearly, for this example, this set
of code motions leads to a design that requires one less state to execute.

\section{Resource Allocation and Binding}\label{pre:RA}
The resource allocation and binding task is concerned with assigning operations and values to hardware components and then interconnecting them using connection elements (the process of datapath generation).

\vspace{1em} \noindent
\begin{definition}\label{hls:datapath}
 \textbf{Datapath:} The datapath ($DP$) is a graph $DP(M_o\cup M_s \cup M_i,I)$ where
 \begin{itemize}
  \item a set $M = M_o\cup M_s\cup M_i$, whose elements, called modules, are the nodes of the graph, with
  \begin{itemize}
     \item a set $M_o$ of \textnormal{operational} modules like adders, multipliers and ALUs,
     \item a set $M_s$ of \textnormal{storage} modules like registers and register files,
     \item a set $M_i$ of \textnormal{interconnection} modules like multiplexers, demultiplexers, busses and bus drivers;
 \end{itemize}
  \item an interconnection relation $I\subseteq M\times M$, whose elements are interconnection links. These are the edges of the datapath graph.
 \end{itemize}
\end{definition}

\vspace{1em} \noindent
Each module $m\in M$ specifies:
\begin{itemize}
 \item the library component of which this module is an instance,
 \item the pins $P=\{p_1,\ldots ,p_k\}$ of the module.
\end{itemize}
For each interconnection link, the pins of the modules it connects are specified.
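\vspace{1em} \noindent
As a small illustration, the datapath of Definition~\ref{hls:datapath} can be sketched as a data structure. The sketch below is written in Python purely for exposition; the module names, library components and pin names are invented for the example and do not come from an actual component library.

```python
# Minimal sketch of Definition (Datapath): a graph whose nodes are
# operational, storage and interconnection modules, and whose edges are
# interconnection links between module pins. All names are illustrative.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Module:
    name: str        # instance name, e.g. "add0"
    kind: str        # "operational", "storage" or "interconnection"
    component: str   # library component this module instantiates
    pins: tuple      # pin names p_1, ..., p_k

@dataclass
class Datapath:
    modules: dict = field(default_factory=dict)  # name -> Module
    links: set = field(default_factory=set)      # ((mod, pin), (mod, pin))

    def add_module(self, m: Module):
        self.modules[m.name] = m

    def connect(self, src, dst):
        # src and dst are (module_name, pin_name) pairs: one edge of I
        self.links.add((src, dst))

dp = Datapath()
dp.add_module(Module("add0", "operational", "ADDER", ("a", "b", "out")))
dp.add_module(Module("r1", "storage", "REGISTER", ("d", "q")))
dp.add_module(Module("mux0", "interconnection", "MUX2", ("i0", "i1", "sel", "o")))
dp.connect(("add0", "out"), ("r1", "d"))
dp.connect(("r1", "q"), ("mux0", "i0"))
```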

\vspace{1em} \noindent
As said in Section \ref{hls:tasks:RA}, the allocation of a datapath in a high-level synthesis system consists of four interdependent subproblems:
\begin{enumerate}
\item The \textit{storage value insertion};
\item The \textit{module allocation and binding};
\item The \textit{register allocation and binding};
\item The \textit{interconnection allocation}.
\end{enumerate}

\subsection{Storage value insertion}
The storage value insertion phase inserts additional nodes in the scheduled data flow graph. Each edge that crosses a cycle step boundary represents a value that has to be stored somewhere. The storage allocation function can therefore be defined as the following transformation:

\vspace{1em} \noindent
Given a scheduled data flow graph $G(V_o,E,C)$:
\begin{definition}
 \textbf{Storage value insertion}: the \textnormal{storage value insertion} is a transformation $G(V_o,E,C)\rightarrow G(V_o\cup V_s,E',C)$, which adds storage values $v\in V_s$ to the graph such that every edge $e\in E$ that crosses a cycle step boundary is connected to a storage value.
\end{definition}
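\vspace{1em} \noindent
This transformation can be sketched as follows. The Python fragment is for illustration only; the encoding of the scheduled DFG as an edge list plus a step map, and the node names, are assumptions of the example.

```python
# Sketch of storage value insertion: every edge of the scheduled DFG whose
# producer and consumer lie in different cycle steps is split by a fresh
# storage value node. Graph encoding and node names are illustrative.
def insert_storage_values(edges, step):
    """edges: list of (producer, consumer); step: node -> cycle step.
    Returns (new_edges, storage_values)."""
    new_edges, storage = [], []
    for u, v in edges:
        if step[u] != step[v]:        # value crosses a cycle-step boundary
            s = f"st_{u}_{v}"         # fresh storage value node in V_s
            storage.append(s)
            new_edges += [(u, s), (s, v)]
        else:
            new_edges.append((u, v))
    return new_edges, storage

edges = [("mul1", "add1"), ("add1", "add2")]
step = {"mul1": 0, "add1": 1, "add2": 1}
new_edges, storage_values = insert_storage_values(edges, step)
# mul1 -> add1 crosses the step 0/1 boundary, so one storage value is inserted;
# add1 -> add2 stays within step 1 and is left untouched
```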

\subsection{Module allocation and binding}
Given a data path $DP(M_o\cup M_s \cup M_i,I)$, a scheduled DFG $G(V_o\cup V_s,E,C)$ and a module library $\Lambda(T,L)$:
\begin{definition}\label{hls:allocation}
 \textbf{Module allocation}: the \textnormal{module allocation} function $\mu :V_o\rightarrow \Pi(M_o)$ determines which module performs a given operation.
\end{definition}
\vspace{1em} \noindent
Note that a module allocation $\mu(v_i)=m, m\in M_o, v_i\in V_o$ can only be valid if $m\in \lambda(\tau (v_i))$, i.e. the module $m$ is capable of executing the operation type of $v_i$.
\begin{definition}\label{hls:binding}
 \textbf{Module binding}: a \textnormal{resource binding} is a mapping $\beta : V_o\rightarrow M_o \times \mathbb{N}$, where $\beta(v_o) = (t,r)$ denotes that the operation corresponding to $v_o \in V_o$, with type $\tau(v_o)\in \lambda^{-1}(t)$ (i.e. component $t\in L$ can execute the operation represented by vertex $v_o$), is executed on the component $t = \mu(v_o)$, with $r< \sigma(t)$ (i.e. the operation is implemented by the \textit{r-th} instance of resource type $t$ and this instance is available in the datapath).
\end{definition}
\vspace{1em} \noindent
A simple case of binding is a dedicated resource. Each operation is bound to one resource, and the resource binding $\beta$ is a one-to-one function.

\vspace{1em} \noindent
A resource binding may associate one instance of a resource type with more than one operation. In this case, that particular resource is shared and the binding is a many-to-one function. A necessary condition for a resource binding to produce a valid circuit implementation is that the operations sharing a resource do not execute concurrently, i.e., they are mutually exclusive in the sense of Definition~\ref{hls:mutual_exclusion}.
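\vspace{1em} \noindent
A minimal sketch of this necessary condition follows. Here concurrency is simplified to ``same cycle step''; the full mutual-exclusion test of Definition~\ref{hls:mutual_exclusion} also accounts for mutually exclusive control paths. The operation names, resource types and steps are invented for the example.

```python
# Sketch: two operations bound to the same resource instance (t, r) must
# not execute in the same cycle step. "Same step" is a simplification of
# the full mutual-exclusion test; all names are illustrative.
def binding_is_valid(binding, step):
    """binding: op -> (type, instance); step: op -> cycle step."""
    used = {}                        # (type, instance, step) -> op
    for op, (t, r) in binding.items():
        key = (t, r, step[op])
        if key in used:              # same instance claimed twice in one step
            return False
        used[key] = op
    return True

binding = {"x": ("ADD", 0), "y": ("ADD", 0), "z": ("SUB", 0)}
valid = binding_is_valid(binding, {"x": 0, "y": 1, "z": 0})  # sequential sharing
clash = binding_is_valid(binding, {"x": 0, "y": 0, "z": 0})  # concurrent clash
```

Sharing the adder between \texttt{x} and \texttt{y} is valid in the first schedule, where they occupy different steps, and invalid in the second, where they would need the same instance simultaneously.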

\vspace{1em} \noindent
When binding constraints are specified, a resource binding must be compatible with them. In particular, a partial binding may be part of the original specification.

\subsection{Register allocation and binding}
The register allocation problem can be formulated as the allocation of a storage module $m\in M_s$ for each storage value $v\in V_s$:

\begin{definition}
\textbf{Register allocation}: the \textnormal{register allocation} function $\psi : V_s\rightarrow \Pi(M_s)$ identifies the storage module holding a value from the set $V_s$.
\end{definition}

\vspace{1em} \noindent
Binding information is needed to evaluate and/or perform register optimization. Therefore, an accurate estimation of the number of registers requires both
scheduling and binding.

\vspace{1em} \noindent
If the register allocation problem is considered in isolation, the goal is to minimize the number of storage modules.

\vspace{1em} \noindent
Register allocation is one of the most important applications of liveness analysis: there is a set of temporaries $a, b, c, \ldots$ that must be allocated to registers $r_{1}, \ldots, r_{k}$. A condition that prevents $a$ and $b$ from being allocated to the same register is called an \emph{interference}.

\vspace{1em} \noindent
The most common kind of interference is caused by overlapping live ranges: when $ a $ and $ b $ are both live at the same control step, they cannot be put in the same register.

\vspace{1em} \noindent
Interference information can be expressed as an undirected graph, the \emph{conflict graph}, with a node for each variable, and edges connecting variables that interfere.

\begin{definition}
\textbf{Conflict Graph}: the \textnormal{conflict graph} $ G_s(V_s,W) $ (also called \textit{interference graph}) is an undirected graph whose nodes $ V_s $ are the storage values to be stored, and whose edges $W$ are pairs of values that cannot be assigned to the same storage module because they are live at the same time.
\end{definition}

\vspace{1em} \noindent
This means that storage values that are adjacent in $ G_s(V_s,W) $ cannot be stored in the same register because their live intervals overlap. Thus, two vertices are joined by an edge if they are in conflict.

\begin{definition}
\textbf{Compatibility Graph}: the \textnormal{compatibility graph} $ \bar{G}_s(V_s,\bar{W}) $ is the complement of the conflict graph $ G_s(V_s,W)$.
\end{definition}

\vspace{1em} \noindent
In the general approach to register allocation \cite{BIB::STOK}, the edges $\bar{W}$ of the compatibility graph $ \bar{G}_s(V_s,\bar{W}) $ are defined as follows:
\begin{equation}
\bar{W}=\lbrace (v_{i},v_{j}) \mid \ll w(v_{i}),P(v_{i}) \gg \parallel \ll w(v_{j}),P(v_{j}) \gg = false\rbrace
\end{equation}
where
\begin{itemize}
\item $w(v)$ is the cycle step in which the storage value $v \in V_{s}$ is written;
\item $P(v)$ is the last cycle step in which the storage value $v \in V_{s}$ is read;
\item the operator $\parallel$ returns true if two intervals overlap and false otherwise:
\begin{equation}
\ll x_{1},y_{1} \gg \parallel \ll x_{2},y_{2} \gg = \left\{
\begin{array}{cc}
false & \mbox{if } y_{1}<x_{2} \vee y_{2}<x_{1},\\
true & \mbox{otherwise}.
\end{array}
\right.
\end{equation}
\end{itemize}

\vspace{1em} \noindent
Thus, storage values that are adjacent in $ \bar{G}_s(V_s,\bar{W}) $ can be stored in the same register without overwriting each other's values. In fact, two vertices of the compatibility graph are joined by an edge if they are compatible.
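\vspace{1em} \noindent
As a small illustration of this construction, the sketch below implements the $\parallel$ operator and the compatibility edge set $\bar{W}$ in Python. The storage values and their intervals $\ll w(v),P(v)\gg$ are invented for the example.

```python
# Sketch of the compatibility-graph construction: intervals are pairs
# (w(v), P(v)) = (write step, last read step); the || operator returns
# True when two intervals overlap. Values and intervals are illustrative.
def overlap(i1, i2):
    (x1, y1), (x2, y2) = i1, i2
    return not (y1 < x2 or y2 < x1)   # false only when the intervals are disjoint

def compatibility_edges(intervals):
    """intervals: value -> (write step, last read step). Returns W_bar."""
    vs = list(intervals)
    return {(u, v) for i, u in enumerate(vs) for v in vs[i + 1:]
            if not overlap(intervals[u], intervals[v])}  # edge iff || is false

intervals = {"a": (0, 2), "b": (1, 3), "c": (3, 4)}
W_bar = compatibility_edges(intervals)
# "a" dies in step 2 before "c" is written in step 3, so (a, c) is a
# compatibility edge: a and c may share a register
```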


\subsection{Interconnection allocation}
All registers and modules have to be connected so that data can be transferred between their ports.

\begin{definition}
\textbf{Interconnection allocation}: the \textnormal{interconnection allocation} function $\iota :E\rightarrow \Pi(M_i)$ describes how the modules and registers are connected and which interconnection is assigned to which data transfer.
\end{definition}

\vspace{1em} \noindent
Datapath connectivity synthesis consists of defining the interconnections among
resources, steering logic circuits (multiplexers or busses), memory resources (registers
and memory arrays), input/output ports and the control unit. Therefore, a complete binding is required. Datapath connectivity synthesis also specifies the interconnection of the datapath to its environment through the input/output ports.

\section{Controller Synthesis}\label{pre:controller}
The controller derived from high-level synthesis specifies the logic that issues datapath operations. In particular, it provides the signals that enable the registers and that control the steering circuits (i.e., multiplexers and busses). Sequential resources require a \textit{start} (and sometimes a \textit{reset}) signal. Hence the execution of the operations requires a set of \textit{activation} signals.

\vspace{1em} \noindent
In addition, the control unit receives some \textit{condition} signals from the datapath that evaluate the clauses of some branching and iterative constructs. Condition signals provided by data-dependent operations are called \textit{completion} signals.

\subsection{State Diagrams}
To address the control unit architecture described above, the behavioral view of sequential circuits at the logic level can be expressed by finite-state machine transition diagrams. A finite-state machine can be described by:

\begin{itemize}
\item A set of primary input patterns, $X$.
\item A set of primary output patterns, $Y$.
\item A set of states, $S$.
\item A \textit{state transition} function, $\delta : X \times S \rightarrow S$.
\item An \textit{output function}, $\lambda : X \times S \rightarrow Y$ for Mealy models or $\lambda : S \rightarrow Y$ for Moore models.
\item An initial state.
\end{itemize}

\vspace{1em} \noindent
The state transition table is a tabulation of the state transition and output functions. Its corresponding graph-based representation is the \textit{state transition diagram}. The state transition diagram is a labeled directed multi-graph $G_t(V,E)$, where the vertex set $V$ is in one-to-one correspondence with the state set $S$ and the directed edge set $E$ is in one-to-one correspondence with the transitions specified by $\delta$.

\vspace{1em} \noindent
In particular, there is an edge $(v_i,v_j)$ if there is an input pattern $x\in X$ such that $\delta(x,s_i)=s_j$, for $i,j = 1,2,\ldots ,\vert S\vert$. In the Mealy model, such an edge is labeled by $x/\lambda(x,s_i)$. In the Moore model, that edge is labeled by $x$ only, and each vertex $v_i$ is labeled by the output $\lambda(s_i)$ of the corresponding state.
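\vspace{1em} \noindent
As an illustration of the Mealy model, the sketch below encodes a toy state transition function $\delta$ and output function $\lambda$ as lookup tables and steps through an input sequence. The state names, input patterns and activation-signal outputs are placeholders, not taken from an actual controller.

```python
# Sketch of a Mealy machine: delta maps (state, input) to the next state,
# lam maps (state, input) to the output, so the output depends on both the
# current state and the current input. All names are illustrative.
delta = {("S0", "go"): "S1", ("S1", "done"): "S0", ("S1", "wait"): "S1"}
lam   = {("S0", "go"): "start_add", ("S1", "done"): "load_reg",
         ("S1", "wait"): "nop"}

def run(inputs, state="S0"):
    """Apply an input sequence; return the output trace and final state."""
    outputs = []
    for x in inputs:
        outputs.append(lam[(state, x)])  # Mealy output: function of state and input
        state = delta[(state, x)]
    return outputs, state

trace, final = run(["go", "wait", "done"])
# trace = ["start_add", "nop", "load_reg"], final = "S0"
```

A Moore variant would instead index the output table by the state alone, matching the $\lambda : S \rightarrow Y$ form above.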


\section{Conclusions}\label{pre:concl}
In this Chapter, the preliminaries and basic knowledge that will be useful in the remainder of this thesis have been presented. In particular, Intermediate Representations and the Liveness Analysis technique have been detailed, with emphasis on the SSA-form representation.
Operation Scheduling, Resource Allocation and Binding, and Controller Synthesis (i.e., the high-level synthesis subtasks) have also been introduced, and speculative execution has been presented as a well-known solution for improving design performance.

