\chapter{Preliminaries}\label{CH::SP}
\markboth {Chapter \ref{CH::SP}. Preliminaries}{}

\framebox{TODO: fix the figures}

The behavior of a circuit can be specified using a
high-level hardware description language or a common software programming language; this
specification is then translated into a suitable intermediate
format, e.g., a flow graph, so that the design specification can be managed and analyzed efficiently.
This translation is critical for enabling detailed flow analysis, for exposing
intermediate values to HLS tasks (such as register allocation), and for making global decisions
about code motion.

Recently, Static Single Assignment form and several other types of intermediate representations have been proposed to capture the flow properties of a program. Each of these previously unrelated techniques lends efficiency and power to a useful class of computation, such as dataflow analysis, or to resource optimization, such as speculative execution.

\framebox{TODO: define precisely the purpose of the chapter's sections and then carefully describe the thesis problem}

This chapter is organized as follows. In Section \ref{sp:graphs}, the most commonly used flow graphs will be detailed; in Section \ref{sp:dataflow}, standard dataflow analysis will be explained; in Section \ref{sp:SSA}, the Static Single Assignment form will be introduced; and finally, in Section \ref{sp:def}, speculation techniques will be described.

\section{Flow Graphs Definition}\label{sp:graphs}
Language-based specifications are usually translated into intermediate
representations so that the design specification can be managed and analyzed efficiently.
Several types of intermediate representations have been proposed in the literature, each targeting
different types of applications: the data flow graph (DFG), the control flow
graph (CFG), and the hierarchical task graph (HTG). These graphs will be detailed in Section \ref{sp:graphs:main}. 

Other types of flow graphs can be derived from the previous ones; they represent a merge of two different graphs, or a particular view of one of them, or simply a subset of edges of the parent graph. These derived graphs will be presented in detail in Section \ref{sp:graphs:derived}.

\subsection{Main Flow Graphs}\label{sp:graphs:main}


\subsubsection{\underline{Data Flow Graph}}\label{hls:dfg}
\index{Data Flow Graph|textbf}
A \textit{Data Flow} language or architecture executes a computation only when all of its operands are available. This technique allows parallel computation to be specified at a very low level, usually in a two-dimensional graph representation: instructions that can be simultaneously
computed are ordered horizontally, while sequential ones are ordered vertically. Data
dependences between operations are represented by directed edges, which allow data to be transmitted directly
from source operations to target ones. Thus, a node that defines a variable has
outgoing edges to all nodes that will use that variable, as Figure~\ref{hls::dfg_example} shows.

A \textit{Data Flow Graph} (DFG) (also called \emph{Data Dependence Graph}) is a directed acyclic graph $G_{DFG}(V_{DFG},E_{DFG})$, where the vertices $V_{DFG}=\{op_i|i=1,...,n_{ops}\}$ are the operations in the design, and the edges represent the flow data dependencies between operations. In particular, a directed edge $e_{ij}=(op_{i},op_{j})$, where $op_{i}, op_{j} \in V_{DFG}$, exists in $E_{DFG}$ if data produced by operation $op_{i}$ is used by operation $op_{j}$.
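The edge-existence rule above can be sketched directly in code. The following is a minimal illustration (not an implementation from this thesis): operations are encoded as tuples of a hypothetical name, the variable they define, and the variables they use, and an edge $(op_i, op_j)$ is emitted whenever $op_j$ uses a value last defined by $op_i$.

```python
# Minimal sketch of DFG edge construction from a straight-line operation
# list. The encoding (name, defined_variable, used_variables) and the tiny
# program below are illustrative assumptions.

def build_dfg(operations):
    """Return the edge set E_DFG: an edge (i, j) exists when op_i
    defines a variable that op_j later uses."""
    edges = set()
    last_def = {}  # variable -> index of the operation that last defined it
    for j, (_, defined, used) in enumerate(operations):
        for var in used:
            if var in last_def:
                edges.add((last_def[var], j))
        last_def[defined] = j
    return edges

# b := i * 2;  c := a + b;  d := c / 2
ops = [("op0", "b", ["i"]),
       ("op1", "c", ["a", "b"]),
       ("op2", "d", ["c"])]
print(sorted(build_dfg(ops)))  # [(0, 1), (1, 2)]
```

Note that this sketch handles only straight-line code; with branches, the dependence analysis must follow the control flow graph discussed next.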

% % 
% % \begin{figure}[bt!]
% % \begin{center}
% %   \begin{minipage}[c]{.35\textwidth}
% %   \begin{small}
% %   b := i * 2; \\
% %   c := a + b: \\
% %   \textbf{if} (a $<$ b) \textbf{then} \\
% %   \ \ \ \ d := 1 - c;\\
% %   \textbf{else}\\
% %   \ \ \ \ d := c / 2;\\
% %   \textbf{endif}\\
% %   d := d + a;
% %   \end{small}
% %   \end{minipage}
% %   \begin{minipage}[c]{.10\textwidth}
% %   \end{minipage}
% %   \begin{minipage}[c]{.35\textwidth}
% %     \centering
% %   \includegraphics[width=0.25\textheight]{./chapters/speculation/images/dfg.jpg}
% %   \end{minipage}
% % \end{center}
% %   \caption{Data Flow Graph}\label{hls::dfg_example}
% % \end{figure}
% 

\subsubsection{\underline{Control Flow Graph}}\label{2:cfg}
\index{Control Flow Graph|textbf}
The \textit{Control Flow Graph} (CFG) is a data
structure widely used by compilers as an abstract representation of a program.
Each vertex of the graph is an operation, and branches in the control flow are
represented by directed edges. There are also two distinguished vertices: the ENTRY vertex (where
control enters the flow) and the EXIT vertex (where the flow ends). This is
a static representation: it only captures the different
control flows present in the behavioral specification (e.g., see Fig.~\ref{hls::cfg_example}).
% 
% \begin{figure}[bt!]
% \begin{center}
%   \begin{minipage}[c]{.35\textwidth}
%   \begin{small}
%   b := i * 2; \\
%   c := a + b: \\
%   \textbf{if} (a $<$ b) \textbf{then} \\
%   \ \ \ \ d := 1 - c;\\
%   \textbf{else}\\
%   \ \ \ \ d := c / 2;\\
%   \textbf{endif}\\
%   d := d + a;
%   \end{small}
%   \end{minipage}
%   \begin{minipage}[c]{.10\textwidth}
%   \end{minipage}
%   \begin{minipage}[c]{.35\textwidth}
%     \centering
%     \includegraphics[width=0.16\textheight]{./chapters/speculation/images/cfg.jpg}
%   \end{minipage}
% \end{center}
%   \caption{Control Flow Graph example}\label{hls::cfg_example}
% \end{figure}

\subsubsection{\underline{Hierarchical Task Graph}}\label{hls:htg}
\index{Hierarchical Task Graph|textbf}
Whereas the CFG is useful for maintaining the flow of control between basic blocks, the \textit{Hierarchical Task Graph} (HTG) is employed to maintain the structure of the design description. In fact, HTGs have been defined as intermediate parallel program
representations that encapsulate minimal data and control dependences
and can be used to extract and exploit functional
and task-level parallelism. 

An HTG is a directed acyclic graph
$G_{HTG}(V_{HTG},E_{HTG})$, where each vertex in $V_{HTG}=\{htg_i|i=1,2,...,n_{htgs}\}$ can be of one of three types: \textit{single}, \textit{compound}, and \textit{loop} nodes.
\begin{enumerate}
\item \textit{Single nodes} represent nodes that have no subnodes and are used
        to encapsulate basic blocks;
\item \textit{Compound nodes} are recursively defined as HTGs, i.e., they
        contain other HTG nodes. They are used to represent structures
        like if-then-else blocks, switch-case blocks or a series of HTGs.
\item \textit{Loop nodes} are used to represent the various types of loops (for,
        while-do, do-while). Loop nodes consist of a loop head and a
        loop tail that are single nodes and a loop body that is a compound
        node.
\end{enumerate}

% \begin{figure}[t!]
% \centering
% \includegraphics[width=0.6\textheight]{./chapters/speculation/images/htg.jpg}
% \caption{Hierarchical Task Graph example}\label{hls::htg}
% \end{figure}
% 
The edge set $E_{HTG}$ in $G_{HTG}$ represents the flow of control between HTG nodes. An edge $(htg_i,htg_j)$ in $E_{HTG}$, where $htg_i, htg_j \in V_{HTG}$, signifies that $htg_j$ executes after $htg_i$ has finished execution. Each node $htg_i$ in $V_{HTG}$ has two distinguished nodes, $htg_{start}$ and $htg_{stop}$, belonging to $V_{HTG}$ such that there exists a path from $htg_{start}$ to every node in $htg_{i}$ and a path from every node in $htg_{i}$ to $htg_{stop}$.
The $htg_{start}$ and $htg_{stop}$ nodes for all compound and loop HTG nodes are always single nodes. The $htg_{start}$ and $htg_{stop}$ for a loop HTG node are the loop head and the loop tail respectively and those of a single node are the node itself. 

An example of an HTG can be seen in Figure~\ref{hls::htg}. Figure \ref{hls::htg}.a shows an example of a C description, and Figure \ref{hls::htg}.b the corresponding HTG representation. The HTG representation with the control and data flow graphs overlaid on top can be seen in Figure \ref{hls::htg}.c. Here, an empty basic block $bb_4$ is added to the CFG in the \textit{Join} node of the If-HTG node.

\subsection{Derived Flow Graphs}\label{sp:graphs:derived}

\subsubsection{\underline{Control Dependence Graph}}\label{hls:cdg}
\index{Control Dependence Graph|textbf}
A \textit{Control Dependence Graph} (CDG)
is a directed graph $G$ where each node represents an operation in the behavioral
specification. It represents control dependencies among operations, i.e.,
which operation controls the execution of each single operation.
Before defining the CDG, the notion of \textit{post-dominator} has to be presented.

\begin{definition}
\textbf{Post-domination}: a node $V$ is \textnormal{post-dominated} by a node $W$
in graph $G$ if every directed path from $V$ to $EXIT$ node (not including $V$)
contains $W$.
\end{definition}

Note that this definition of post-dominance does not include the initial node on
the path. In particular, a node never post-dominates itself.

\begin{definition}
\textbf{Control dependence}: let $G$ be a control flow graph and let $X$ and $Y$ be
nodes in $G$. $Y$ is control dependent on $X$ if and only if:
\begin{itemize}
 \item there exists a directed path $P$ from $X$ to $Y$ such that every node $Z$ in $P$
(excluding $X$ and $Y$) is post-dominated by $Y$, and
 \item $X$ is not post-dominated by $Y$.
\end{itemize}
\end{definition}

% \begin{figure}[bt!]
% \begin{center}
%   \begin{minipage}[c]{.35\textwidth}
%   \begin{small}
%   b := i * 2; \\
%   c := a + b: \\
%   \textbf{if} (a $<$ b) \textbf{then} \\
%   \ \ \ \ d := 1 - c;\\
%   \textbf{else}\\
%   \ \ \ \ d := c / 2;\\
%   \textbf{endif}\\
%   d := d + a;
%   \end{small}
%   \end{minipage}
%   \begin{minipage}[c]{.10\textwidth}
%   \end{minipage}
%   \begin{minipage}[c]{.50\textwidth}
%     \centering
%     \includegraphics[width=0.35\textheight]{./chapters/speculation/images/cdg.jpg}
%   \end{minipage}
% \end{center}
%   \caption{Control Dependece Graph example}\label{hls::cdg_example}
% \end{figure}
% 
If $Y$ is control dependent on $X$, then $X$ must have at least two exits. Following one of
the exits from $X$ always results in $Y$ being executed, while taking another exit
may result in $Y$ not being executed. \textit{Condition 1} can be satisfied by a
path consisting of a single edge. \textit{Condition 2} is always satisfied when
$X$ and $Y$ are the same node, since a node never post-dominates itself (e.g., see Fig.~\ref{hls::cdg_example}).
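The post-dominance relation the CDG is built on can be computed by fixed-point iteration over the CFG. The sketch below is illustrative (not from this thesis): it computes the strict post-dominators of each node, matching the definition above in which a node never post-dominates itself; the diamond-shaped CFG is an assumed example.

```python
# Hedged sketch: strict post-dominators by fixed-point iteration.
# pdom[n] = (intersection over successors s of ({s} | pdom[s])) - {n},
# initialized to "all nodes" and shrunk until nothing changes.
# Assumes every non-exit node has at least one successor (a CFG property).

def post_dominators(succ, exit_node):
    nodes = set(succ) | {exit_node}
    pdom = {n: set(nodes) for n in nodes}
    pdom[exit_node] = set()   # EXIT has no strict post-dominators
    changed = True
    while changed:
        changed = False
        for n in nodes - {exit_node}:
            new = set.intersection(*({s} | pdom[s] for s in succ[n])) - {n}
            if new != pdom[n]:
                pdom[n] = new
                changed = True
    return pdom

# Diamond CFG: 0 -> {1, 2}; 1 -> 3; 2 -> 3; 3 -> 4 (EXIT)
succ = {0: [1, 2], 1: [3], 2: [3], 3: [4]}
pdom = post_dominators(succ, 4)
print(pdom[0])  # {3, 4}: the join node and EXIT post-dominate the branch
```

In this example the branch targets 1 and 2 do not post-dominate node 0 (condition 2 of the definition holds), so both are control dependent on the branch at node 0.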

\subsubsection{\underline{Control/Data Flow Graph}}\label{hls:cdfg}
\index{Control/Data Flow Graph|textbf}
A \textit{Control/Data Flow Graph} (CDFG) (also called a \emph{System Dependence Graph}) is a commonly used internal representation for capturing the behavior. 
It is the union of two graphs: the Control Dependence Graph (CDG) and the Data Flow Graph (DFG).
The CDG portion of the CDFG captures the sequencing, conditional branching, and looping constructs in the behavioral description, while the DFG
portion captures the data-manipulation activity described by a set of assignment statements (operations).

% \begin{figure}[bt!]
% \begin{center}
%   \begin{minipage}[c]{.35\textwidth}
%   \begin{small}
%   b := i * 2; \\
%   c := a + b: \\
%   \textbf{if} (a $<$ b) \textbf{then} \\
%   \ \ \ \ d := 1 - c;\\
%   \textbf{else}\\
%   \ \ \ \ d := c / 2;\\
%   \textbf{endif}\\
%   d := d + a;
%   \end{small}
%   \end{minipage}
%   \begin{minipage}[c]{.10\textwidth}
%   \end{minipage}
%   \begin{minipage}[c]{.50\textwidth}
%     \centering
%     \includegraphics[width=0.35\textheight]{./chapters/speculation/images/cdfg.jpg}
%   \end{minipage}
% \end{center}
%   \caption{Control/Data Flow Graph example}\label{hls::cdfg_example}
% \end{figure}
% 
A CDFG is a directed graph, whose nodes represent operations, and
edges represent dependencies between operations. In CDFG descriptions, the dependencies are of two types: data and control. An edge represents a data dependency if the source node of the edge
produces data that the sink node consumes. Existence of a control dependency between nodes implies that the execution of the sink node depends on the outcome of the execution of the source
node. Data dependencies in the CDFG are indicated by blue arcs, and control dependencies, by red arcs. Variable declarations and initializations do not correspond to operations in the
CDFG.

This representation is used because it captures both data and control dependences in a single
graph without containing false control dependences, as the control flow graph
does (e.g., see Fig.~\ref{hls::cdfg_example}).


\section{Dataflow Analysis}\label{sp:dataflow}
The behavior of a circuit is translated into a suitable intermediate
format, e.g., a flow graph, that uses an unbounded number of temporaries. This behavior must, however, run on a digital circuit with a bounded number of registers.

Two temporaries $a$ and $b$ can fit into the same register if $a$ and $b$ are never ``in use'' at the same time. Thus, many temporaries can fit in few registers; if they do not all fit, more registers are required.

Therefore, it is necessary to analyze the intermediate representation of the behavioral description to determine which temporaries are in use at the same time. A variable is \emph{live} if it holds a value that may be needed in the future, so this analysis is called \emph{liveness analysis}.

Liveness of variables "flows" around the edges of the flow graph; determining the live range of each variable is an example of a \emph{dataflow} problem.

An assignment to a variable or temporary \emph{defines} that variable. An occurrence of a variable on the right-hand side of an assignment (or in other expressions) \emph{uses} the variable.

The \emph{def} of a variable is the set of graph nodes that define it; conversely, the \emph{def} of a graph node is the set of variables that it defines; and similarly for the \emph{use} of a variable or graph node.

A variable is \emph{live} on an edge if there is a directed path from that edge to a \emph{use} of the variable that does not go through any \emph{def}. A variable is \emph{live-in} at a node if it is live on any of the in-edges of that node; it is \emph{live-out} at a node if it is live on any of the out-edges of the node.

Liveness information (\emph{live-in} and \emph{live-out}) can be calculated from \emph{use} and \emph{def} as the following dataflow equations show:

\begin{equation}
 in[n] = use[n] \cup (out[n] - def[n])
\end{equation}
\begin{equation}
 out[n] =  \bigcup_{s \in succ[n]} in[s]
\end{equation}

These dataflow equations for liveness analysis mean that:
\begin{enumerate}
  \item If a variable is in $use[n]$, then it is \emph{live-in} at node $n$. That is, if a statement uses a variable, the variable is live on entry to that statement.
  \item If a variable is \emph{live-in} at node $n$, then it is \emph{live-out} at all nodes in $pred[n]$.
  \item If a variable is \emph{live-out} at node $n$, and not in $def[n]$, then the variable is also \emph{live-in} at $n$. That is, if someone needs the value of $a$ at the end of statement $n$, and $n$ does not provide that value, then $a$'s value is needed even on entry to $n$.
\end{enumerate}

A solution to these equations can be found by iteration: $in[n]$ and $out[n]$ are initialized to the empty set $\{\}$, then the equations are repeatedly treated as assignments until a fixed point is reached.

The convergence of this algorithm can be sped up significantly by ordering the nodes properly; this can easily be done with a postorder traversal.
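The iteration described above can be sketched directly. The following is an illustrative solver (not a specific implementation from this thesis); the three-statement flow graph used as input is an assumed example.

```python
# Direct iterative solver for the liveness equations:
#   in[n]  = use[n] | (out[n] - def[n])
#   out[n] = union of in[s] over successors s of n
# repeated as assignments until a fixed point is reached.

def liveness(succ, use, defs):
    live_in = {n: set() for n in succ}
    live_out = {n: set() for n in succ}
    changed = True
    while changed:
        changed = False
        for n in succ:
            out_n = set().union(*(live_in[s] for s in succ[n])) if succ[n] else set()
            in_n = use[n] | (out_n - defs[n])
            if in_n != live_in[n] or out_n != live_out[n]:
                live_in[n], live_out[n] = in_n, out_n
                changed = True
    return live_in, live_out

# 0: a := 0;  1: b := a + 1;  2: return b   (illustrative program)
succ = {0: [1], 1: [2], 2: []}
use  = {0: set(), 1: {"a"}, 2: {"b"}}
defs = {0: {"a"}, 1: {"b"}, 2: set()}
live_in, live_out = liveness(succ, use, defs)
print(live_out[0])  # {'a'}: a is live on the edge from node 0 to node 1
```

Since liveness flows backward, visiting the nodes in an order that follows the flow edges in reverse (e.g., postorder) makes each pass propagate information further and reduces the number of iterations.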

Liveness information is used for several kinds of optimization. For some optimizations, it is necessary to know exactly which variables are live at each node in the flow graph, as explained in the next subsections.

\subsection{Register Allocation}

One of the most important applications of liveness analysis is \emph{register allocation}: a set of temporaries $a, b, c, \ldots$ must be allocated to registers $r_{1}, \ldots, r_{k}$. A condition that prevents $a$ and $b$ from being allocated to the same register is called an \emph{interference}.

The most common kind of interference is caused by overlapping live ranges: when $ a $ and $ b $ are both live at the same control step, they cannot be put in the same register.

Interference information can be expressed as an undirected graph, the \emph{conflict graph}, with a node for each variable, and edges connecting variables that interfere.

\begin{definition}
\textbf{Conflict Graph}: the \textnormal{conflict graph} $ G_s(V_s,W) $ (also called the \textit{interference graph}) is an undirected graph whose nodes $ V_s $ are the storage values to be stored, and whose edges $W$ are pairs of values that cannot be assigned to the same storage module because they are live at the same time.
\end{definition}

This means that storage values that are adjacent in $ G_s(V_s,W) $ cannot be stored in the same register because their lifetime intervals overlap. Thus, two vertices are joined by an edge if they are in conflict.

\begin{definition}
\textbf{Compatibility Graph}: the \textnormal{compatibility graph} $ \bar{G}_s(V_s,\bar{W}) $ is the complement of the conflict graph $ G_s(V_s,W)$.
\end{definition}

In the general approach to register allocation, the edges $\bar{W}$ of the compatibility graph $ \bar{G}_s(V_s,\bar{W}) $ are defined as follows:
\begin{equation}
\bar{W}=\lbrace (v_{i},v_{j}) \mid \ll w(v_{i}),P(v_{i}) \gg \parallel \ll w(v_{j}),P(v_{j}) \gg = false\rbrace
\end{equation}
where
\begin{itemize}
\item $w(v)$ is the cycle step in which the storage value $v \in V_{s}$ is written;
\item $P(v)$ determines the last cycle step in which the storage value $v \in V_{s}$ is read;
\item the operator $\parallel$ returns true if two intervals overlap and false otherwise:
\begin{equation}
\ll x_{1},y_{1} \gg \parallel \ll x_{2},y_{2} \gg = \left\{
\begin{array}{cc}
false & \mbox{if } y_{1}<x_{2} \vee y_{2}<x_{1},\\
true & \mbox{otherwise}.
\end{array}
\right.
\end{equation}
\end{itemize}
Thus, the storage values that are adjacent in $ \bar{G}_s(V_s,\bar{W}) $ can be stored in the same register without overwriting each other's values. In fact, two vertices of the compatibility graph are joined by an edge if they are compatible.
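The interval test and the edge definition above translate directly into code. The sketch below is illustrative (the variable names and lifetime intervals $\ll w(v), P(v) \gg$ are assumed, not taken from the text):

```python
# Sketch of the || operator and of compatibility-edge construction.
# An interval is the pair (w(v), P(v)): the write step and the last read step.

def overlap(i1, i2):
    """The || operator: two intervals overlap unless one ends before
    the other starts (false iff y1 < x2 or y2 < x1)."""
    (x1, y1), (x2, y2) = i1, i2
    return not (y1 < x2 or y2 < x1)

def compatibility_edges(intervals):
    """Edges of the compatibility graph: pairs of storage values whose
    lifetime intervals do NOT overlap."""
    names = sorted(intervals)
    return {(u, v) for i, u in enumerate(names) for v in names[i + 1:]
            if not overlap(intervals[u], intervals[v])}

# Illustrative lifetimes: a lives in steps 0-2, b in 3-4, c in 1-3
intervals = {"a": (0, 2), "b": (3, 4), "c": (1, 3)}
print(compatibility_edges(intervals))  # {('a', 'b')}
```

Here only $a$ and $b$ are compatible (their intervals are disjoint), so only they may share a register; $c$ conflicts with both.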

\subsection{Reaching Definitions and Use-Def/Def-Use Chains}
For many optimizations it is important to know whether a particular assignment to a temporary $ t $ can directly affect the value of $ t $ at another point in the program. An \emph{unambiguous definition} of $ t $ is a particular statement in the program of the form $ t \leftarrow a \oplus b $ or $ t \leftarrow M[a] $. Given such a definition $ d $, we say that $ d $ \emph{reaches} a statement $ u $ in the program if there is some path in the flow graph from $ d $ to $ u $ that does not contain any other unambiguous definition of $ t $.

Information about reaching definitions can be kept as \emph{use-def chains}: for each use of a variable $ x $, a list of the definitions of $ x $ reaching that use. Use-def chains do not speed up dataflow analysis per se, but they allow efficient implementation of the optimization algorithms that use the results of the analysis.

A generalization of use-def chains is static single-assignment form, described in Section~\ref{sp:SSA}. SSA form not only provides more information than use-def chains, but the dataflow analysis that computes it is also very efficient.

One way to represent the results of liveness analysis is via \emph{def-use chains}: a list, for each definition, of all possible uses of that definition. SSA form also contains def-use information.
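Reaching definitions is itself a forward dataflow problem, solvable with the same fixed-point scheme used for liveness; the use-def chain of a use is then just the set of reaching definitions of its variable at that node. The sketch below is illustrative, and the three-statement program is an assumed example.

```python
# Sketch of reaching definitions as forward dataflow:
#   in[n]  = union of out[p] over predecessors p of n
#   out[n] = gen[n] | (in[n] - kill[n])
# iterated as assignments until a fixed point is reached.

def reaching_defs(pred, gen, kill):
    rin = {n: set() for n in pred}
    rout = {n: set() for n in pred}
    changed = True
    while changed:
        changed = False
        for n in pred:
            in_n = set().union(*(rout[p] for p in pred[n])) if pred[n] else set()
            out_n = gen[n] | (in_n - kill[n])
            if in_n != rin[n] or out_n != rout[n]:
                rin[n], rout[n] = in_n, out_n
                changed = True
    return rin, rout

# d0: t := 1;  d1: t := t + 1;  node 2 uses t   (illustrative program)
pred = {0: [], 1: [0], 2: [1]}
gen  = {0: {"d0"}, 1: {"d1"}, 2: set()}
kill = {0: {"d1"}, 1: {"d0"}, 2: set()}
rin, _ = reaching_defs(pred, gen, kill)
print(rin[2])  # only d1 reaches the use of t at node 2
```

The use-def chain for $t$ at node 2 therefore contains only $d1$: the redefinition at node 1 kills $d0$ along the only path to the use.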


\section{Static Single-Assignment Form}\label{sp:SSA}
Many dataflow analyses need to find the use sites of each defined variable or the definition sites of each variable used in an expression. The \emph{def-use chain} and the \emph{use-def chain} are data structures that make this efficient: for each statement in the flow graph, it is easy to obtain a list of pointers to all the \emph{use} sites of the variables defined there, and a list of pointers to all the \emph{definition} sites of the variables used there. In this way it is possible to hop quickly from a use to its definitions and from a definition to its uses.

An improvement on the idea of def-use chains and use-def chains is \emph{Static Single-Assignment form}, or \emph{SSA form}, an intermediate representation in which each variable has only one definition in the program text. The one (static) definition-site may be in a loop that is executed many (dynamic) times, thus the name \emph{static} single-assignment form instead of single-assignment form (in which variables are never redefined at all).

The SSA form is useful for several reasons:
\begin{enumerate}
  \item Dataflow analysis and optimization algorithms can be made simpler when each variable has only one definition.
  \item If a variable has $N$ uses and $M$ definitions (which occupy about $N + M$ instructions in a program), it takes space (and time) proportional to $N \cdot M$ to represent def-use chains: a quadratic blowup. For almost all realistic programs, the size of SSA form is linear in the size of the original program.
  \item Uses and defs of variable in SSA form relate in a useful way to the dominator structure of the control-flow graph, which simplifies algorithms such as interference graph construction.
  \item Unrelated uses of the same variable in the source program become different variables in SSA form, eliminating needless relationships. An example is the program

      \textbf{for} $i \leftarrow 1$ \textbf{to} $N$ \textbf{do} $A[i] \leftarrow 0$

      \textbf{for} $i \leftarrow 1$ \textbf{to} $M$ \textbf{do} $s \leftarrow s + B[i]$

      where there is no reason for both loops to use the same machine register or intermediate-code temporary to hold their respective loop counters, even though both are named $i$.
\end{enumerate}

Informally, the code for a procedure is said to be in SSA form if it meets
two criteria:
\begin{enumerate}
  \item each name has exactly one definition point, and
  \item each use refers to exactly one name.
\end{enumerate}

The first criterion creates a correspondence between names and definition points. The
second criterion forces the insertion of new definitions at points in the code where
multiple values, defined along different paths, come together.

To satisfy the first criterion, the compiler must rewrite the code by inventing new
names for each definition and substituting these new names for subsequent uses of the
original names. To build SSA form from a straight-line fragment of code is trivial; each
time a name gets defined, the compiler invents a new name that it then substitutes
into subsequent references. At each re-definition of a name, the compiler uses the next
new name and begins substituting that name. For example, consider the code in the
left column of Figure~\ref{fig:ssa_1}. 

% \begin{figure}[ht]
%   \centering
%   \includegraphics[width=8cm]{./chapters/speculation/images/ssa_straight.jpg}
%   \caption{Straight-line code and its conversion to SSA form}\label{fig:ssa_1}
% \end{figure}

Conversion to SSA form produces the code in the right column: in straight-line code, such as within a basic block, each instruction can define a fresh new variable instead of redefining an old one.
Each new definition of a variable (such as $x$) is modified to define a fresh variable ($x_{0}$, $x_{1}$), and each use of the variable is modified to refer to the most recently defined version.
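The straight-line renaming step just described can be sketched as follows. This is an illustrative encoding (statements as pairs of a target and its used variables), not an implementation from this thesis:

```python
# Sketch of straight-line SSA renaming: each definition gets a fresh
# subscripted name, and each use is rewritten to the most recent version.
# Valid only for straight-line code (no merge points).

def to_ssa_straightline(stmts):
    counter = {}   # base name -> next subscript to assign
    current = {}   # base name -> most recent SSA name
    out = []
    for target, used in stmts:
        new_used = [current.get(v, v) for v in used]   # rewrite uses first
        counter[target] = counter.get(target, -1) + 1
        new_name = f"{target}{counter[target]}"        # fresh definition
        current[target] = new_name
        out.append((new_name, new_used))
    return out

# x := ...;  y := x;  x := x + y   (illustrative straight-line fragment)
ssa = to_ssa_straightline([("x", []), ("y", ["x"]), ("x", ["x", "y"])])
print(ssa)  # [('x0', []), ('y0', ['x0']), ('x1', ['x0', 'y0'])]
```

Note that the uses of a statement are rewritten before its own definition is renamed, so a statement like $x \leftarrow x + y$ correctly reads the old version $x_0$ and defines the new version $x_1$.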

The presence of control flow complicates both the renaming process and the interpretation
of the resulting code. In fact, when two or more control-flow paths merge together, it is not obvious how to have only one assignment for each variable. Where a statement has more than one predecessor, there is no notion of "most recent".

% \begin{figure}[ht]
%   \centering
%   \includegraphics[width=8cm]{./chapters/speculation/images/ssa_more.jpg}
%   \caption{Conversion to SSA form in the presence of more control flows}\label{fig:ssa_2}
% \end{figure}
% 
If a name in the original code is defined along two
converging paths, the SSA form of the code has multiple names when it reaches a reference.
To solve this problem, the construction introduces a new definition point at
the merge point in the control-flow graph (CFG). The definition uses a pseudo function,
called a $\phi$-function. The arguments of the $\phi$-function are the names flowing into
the convergence, and the $\phi$-function defines a single, new name. Subsequent uses of
the original name will be replaced with the new name defined by the $\phi$-function. This
ensures the second criterion stated earlier: each use refers to exactly one name (i.e., the single-assignment property). To understand the impact of $\phi$-functions, consider the code fragment shown in Figure~\ref{fig:ssa_2}.
Two different definitions of $v$ reach the use. The construction inserts a $\phi$-function
for $v$ at the join point; it selects from its arguments based on the path that executes at run-time.

Conceptually, the SSA construction involves two steps. 
\begin{enumerate}
\item The first step decides where
$\phi$-functions are needed. At each merge point in the CFG, it must consider, for each
value, whether or not to insert a $\phi$-function. 
\item The second step systematically renames
all the values to correspond to their definition points. For a specific definition, this
involves rewriting the left-hand side of the defining statement and the right-hand side
of every reference to the value. At a merge point, the value may occur as an argument
to a $\phi$-function. When this happens, the name propagates no further along that path.
(Subsequent uses refer to the name defined by the $\phi$-function.)
\end{enumerate}
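The decision in the first step can be illustrated in a much-simplified form: at a merge point, a $\phi$-function is needed for every variable whose current SSA name differs across the incoming paths. The per-path name maps below are illustrative assumptions; the real construction decides placement globally using dominance frontiers rather than inspecting one merge at a time.

```python
# Simplified sketch of phi-function insertion at a single merge point.
# join_versions holds, for each incoming path, the map from base variable
# names to their current SSA names along that path.

def phi_arguments(join_versions):
    """Return, for each variable whose name differs across the incoming
    paths, the list of names its phi-function must merge."""
    variables = set().union(*(set(m) for m in join_versions))
    return {v: [m.get(v) for m in join_versions]
            for v in sorted(variables)
            if len({m.get(v) for m in join_versions}) > 1}

# v was renamed v0 on one path and v1 on the other: v2 = phi(v0, v1)
print(phi_arguments([{"v": "v0"}, {"v": "v1"}]))  # {'v': ['v0', 'v1']}
```

When both paths carry the same name, no $\phi$-function is inserted, since every subsequent use already refers to exactly one name.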


\section{Speculative Execution Definition}\label{sp:def}

\framebox{TODO: fix all bibliographic references}

Computationally expensive portions of several classes of applications
are characterized by the presence of a considerable number of
unpredictable branches. These control constructs limit the amount of
instruction-level parallelism that can be exploited from the input description. There are usually not enough operations available
for execution to utilize all the resources in each cycle or scheduling
step. Hence, there are a number of idle resources in a basic block.
A resource is said to be \emph{idle} in a scheduling step if there is no operation
scheduled to execute on that resource in that scheduling step (the
converse of an idle resource is a busy resource). Idle resources can be
utilized by moving and scheduling operations from subsequent or preceding
basic blocks. The candidate operations for these code motions
are operations whose data dependencies are satisfied, but the conditions
under which they execute may not have been evaluated. One of the key
enabling transformations for such type of code motions is speculation.

As explained in the previous Section~\ref{sp:dataflow}, a dataflow analysis of a flow graph collects information about the execution of the program. The results of these analysis can be used to make optimizing transformations of the specified behavior description: this Section presents speculative techniques to improve operation scheduling during high-level synthesis.

Generally, \textit{speculation} refers to the unconditional execution of operations
that were originally supposed to have executed conditionally.
However, there are situations when there is a need to
move operations into conditionals. This may be done by \textit{reverse speculation},
where operations before conditionals are duplicated into subsequent
conditional blocks and, hence, executed conditionally, or this
may be done by \textit{conditional speculation}, where an operation from
after the conditional block is duplicated up into preceding conditional
branches and executed conditionally. Another code-motion technique, called \textit{early condition execution}, evaluates conditional checks as soon as their data dependencies are satisfied. In this way,
all of the operations in the branches of the conditional are ready to be
scheduled immediately.
A number of similar code transformations have been proposed
for compilers as well. Whereas compilers often pursue maximum
parallelization by applying speculative-code motions, in high-level
synthesis, such code transformations have to be selected and guided
based on their effects on the control, interconnect, and area costs.

\subsection{Speculative Execution}\label{sp:def:exec}
\emph{Speculative execution} or \textbf{speculation} refers to the execution of an operation
before the branch condition that controls the operation has been
evaluated. In the approach to speculation for high-level synthesis described in~\cite{SP_6}, the result of a speculated operation is stored in a new register. If the condition
that the operation was to execute under evaluates to true, then the
stored result is committed to the variable from the original operation,
else the stored result is discarded.

In~\cite{SP_6} speculation is demonstrated by the example in Fig.~\ref{fig:sp_1}. In Fig.~\ref{fig:sp_1}(a),
variables $d$ and $g$ are calculated based on the result of the calculation of
the conditional $c$. Since the operations that produce $d$ and $g$ execute on
different branches of a conditional block, these operations are \textit{mutually
exclusive}. Hence, these operations can be scheduled on the same hardware
resource with appropriate multiplexing of the inputs and outputs,
as shown by the circuit in Fig.~\ref{fig:sp_1}(a).

% \begin{figure}[ht]
%   \centering
%   \includegraphics[width=12cm]{./chapters/speculation/images/speculation.JPG}
%   \caption{Speculation example}\label{fig:sp_1}
% \end{figure}
% 
Now, suppose an additional adder is available. Then the operations
within the conditional branches can be computed speculatively,
concurrently with the computation of the conditional $c$, as shown
in Fig.~\ref{fig:sp_1}(b). The corresponding hardware circuit is also shown in this
figure. Based on the evaluation of the conditional, one of the results
will be discarded and the other committed. It is evident from the corresponding
hardware circuits in Fig.~\ref{fig:sp_1}(a) and (b) that as a result of this
speculation, the longest path gets shortened from being a sequential
chain of a comparison followed by an addition to being a parallel computation
of the comparison and the additions.
This example also demonstrates the additional costs of speculation.
Speculation requires more functional units and potentially more storage
for the intermediate results. Uncontrolled aggressive speculation can
also lead to worse results due to multiplexing and control overheads.
On the other hand, judicious use of speculation can improve resource
utilization.
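The commit/discard scheme just described has a direct software analogy, sketched below under the assumption that both operations are side-effect free (the operations and values are illustrative, not the circuit from the figure): both branch results are computed before the condition is known, and evaluating the condition afterwards only selects which result is committed.

```python
# Software sketch of speculation: both candidate operations execute
# unconditionally into fresh temporaries (the "new registers" of the
# scheme above); the condition then commits one result and discards
# the other.

def speculate(cond, then_op, else_op):
    t_then = then_op()    # speculatively computed before cond is evaluated
    t_else = else_op()    # speculatively computed before cond is evaluated
    return t_then if cond() else t_else   # commit one, discard the other

# d := 1 - c if a < b else c / 2   (illustrative values)
a, b, c = 5, 3, 10
d = speculate(lambda: a < b, lambda: 1 - c, lambda: c / 2)
print(d)  # 5.0
```

As in the hardware case, the cost of this transformation is the extra functional unit and temporary for the discarded result; the benefit is that the condition and both operations proceed in parallel instead of sequentially.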

\subsection{Reverse Speculation}\label{sp:def:rev}
\emph{Reverse speculation} refers to moving an operation $op_i$ from its basic
block $bb_j$ into the successors of $bb_j$. This code motion is employed
to duplicate operation $op_i$ into the branches of an If-HTG when the
If-HTG is the successor of $bb_j$. Reverse speculation has been referred
to as lazy execution \framebox{[??]} and as duplicating down \framebox{[??]} in past literature.
Reverse speculation is useful in instances where an operation inside
a branch of an If-HTG is on the longest path through the design,
whereas an operation before the If-HTG is not. In~\cite{SP_6} this is demonstrated
by the example in Fig.~\ref{fig:sp_2}(a). In this design, operation $b$, that is on the
shorter dependency path ($<b; g; h>$), is placed in the basic block before
the If-HTG, whereas operation $d$, that is on the longer dependency
path ($<d; e; f; h>$), is placed in the true branch of the If-HTG. If operation $b$ is reverse
speculated into the conditional branches, as shown in
Fig.~\ref{fig:sp_2}(b), the adder in basic block $bb0$ is left idle. This enables the speculative
execution of operation $d$ in $bb0$, as shown in Fig.~\ref{fig:sp_2}(c). The dashed
lines in Fig.~\ref{fig:sp_2} demarcate the state assignments (S0 through S4) for the
three designs. Clearly, the final design in Fig.~\ref{fig:sp_2}(c), after reverse speculation
of $b$ and speculation of $d$, requires one state less than the original
design in Fig.~\ref{fig:sp_2}(a).

% \begin{figure}[ht]
%   \centering
%   \includegraphics[width=12cm]{./chapters/speculation/images/reverse_speculation.JPG}
%   \caption{Reverse Speculation example}\label{fig:sp_2}
% \end{figure}
% 
Note that, while applying reverse speculation in the example above,
a data dependency analysis determines that the result of operation $b$ is
used only in the false branch of the If-HTG. Hence, instead of duplicating
$b$ into both branches, $b$ is moved only into the false branch of
the If-HTG, as shown in Fig.~\ref{fig:sp_2}(b). In the general case, reverse speculation
leads to duplication of the operation into both the branches of an If-HTG. It is also important to make a distinction between moving
operations into a later scheduling step and the downward operation duplication
done by reverse speculation. When an operation encounters
a fork node while being moved down, it must be duplicated into all
the control paths leading out of the fork node, except those paths in
which its result is not needed.
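The dependency-driven variant of this motion can be sketched as follows. This is a minimal software illustration with hypothetical operands, not the design of Fig.~\ref{fig:sp_2}:

```python
def before_motion(p, x, y):
    b = x + 1            # operation computed before the If on every path
    if p:
        return y * 2     # the true branch never reads b
    return b - y         # only the false branch reads b

def reverse_speculated(p, x, y):
    # Reverse speculation: b is moved down past the fork. Dependency
    # analysis shows its result is read only on the false path, so it
    # is moved (not duplicated) into that branch alone, leaving the
    # resource before the If idle for other speculative work.
    if p:
        return y * 2
    b = x + 1
    return b - y
```

Had both branches read `b`, the motion would instead have duplicated it into each of them.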

\subsection{Early Condition Execution}\label{sp:def:early}
Reverse speculation can be coupled with another novel
transformation, namely, \emph{early condition execution}. This transformation attempts to
schedule operations such that the conditional check can be evaluated
or scheduled as soon as possible. Any operations before the conditional
check that are unscheduled are moved into the branches of the
If-HTG by reverse speculation. Evaluating a conditional check early
using early condition execution resolves the control dependency for operations
within branches of the If-HTG. These operations are, thereby,
available for scheduling sooner.
% 
% \begin{figure}[ht]
%   \centering
%   \includegraphics[width=12cm]{./chapters/speculation/images/early_condition.JPG}
%   \caption{Early Condition example}\label{fig:sp_3}
% \end{figure}
% 
In~\cite{SP_6} early condition execution is demonstrated by the example in
Fig.~\ref{fig:sp_3}(a). In this example, comparison operation $c$ computes a conditional
that is checked in basic block $bb_1$ (the Boolean conditional
check is denoted by a triangle). This comparison operation can be scheduled concurrently with operation $a$ in state S0 in basic block
$bb_0$, as shown in Fig.~\ref{fig:sp_3}(b). Now, the conditional check in basic block
$bb_1$ can be executed early in state S1. However, operation $b$ in
basic block $bb_0$ has not yet been scheduled. Therefore, this
operation is reverse speculated into basic block $bb_3$ (and not into $bb_2$
since its result is used only in $bb_3$). These code motions lead to an
overall shorter schedule length, as shown by the state assignments in
Figs.~\ref{fig:sp_3}(a) and (b).
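The combination of the two transformations can be sketched in software terms; the operand names below are hypothetical and the example only mirrors the structure of the motion, not the actual design of Fig.~\ref{fig:sp_3}:

```python
def late_condition(x, y):
    a = x + 1            # operation a
    b = x - 2            # operation b, scheduled before the check
    c = y < 0            # the conditional is computed last
    if c:
        return a * y
    return b * y         # only the false path reads b

def early_condition(x, y):
    # Early condition execution: the comparison is evaluated as soon as
    # its operand is ready, concurrently with operation a. The still
    # unscheduled operation b is reverse speculated into the false
    # branch, the only path reading its result, so the control
    # dependency on the branches is resolved earlier.
    c = y < 0
    a = x + 1
    if c:
        return a * y
    b = x - 2
    return b * y
```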

\subsection{Conditional Speculation}\label{sp:def:cond}
Often design descriptions have instances where there are idle resources
in the scheduling steps of the basic blocks that comprise the
branches of an If-HTG. Speculating out of If-HTGs also leaves resources
idle in the basic blocks of the conditional branches. To utilize
these idle resources, the authors of~\cite{SP_6} propose duplicating operations that lie in
basic blocks after the conditional branches up into the basic blocks that
comprise the conditional branches. They call this code motion \emph{conditional
speculation}. This is similar to the duplication-up code motion
used in compilers and the node duplication transformation discussed
by Wakabayashi et al. \framebox{[??]}.
% 
% \begin{figure}[ht]
%   \centering
%   \includegraphics[width=12cm]{./chapters/speculation/images/conditional_speculation.JPG}
%   \caption{Conditional Speculation example}\label{fig:sp_4}
% \end{figure}
% 
In~\cite{SP_6} the authors demonstrate conditional speculation by the example in Fig.~\ref{fig:sp_4}(a).
In this example, operations $x$ and $y$ both write to the variable $a$ in the
conditional branches $bb_1$ and $bb_2$. Consider that this design is allocated
one adder, one subtracter, and one comparator. Then, operations
$x$ and $y$ can be speculatively executed as shown in Fig.~\ref{fig:sp_4}(b). The speculation
of these operations leaves the resources in basic blocks $bb_1$ and
$bb_2$ idle. Hence, the operation $z$ that lies in basic block $bb_4$ can be
duplicated up or conditionally speculated (CS) into both branches of
the If-HTG and scheduled on the idle adder, as illustrated in Fig.~\ref{fig:sp_4}(c).
Operation $z$ is dependent on either the result of operation $x$ or operation
$y$, depending on how the condition evaluates (since operation $z$
is dependent on the variable $a$). Hence, the duplicated operations $z_1$
and $z_2$ directly read the results of operations $x$ and $y$, respectively. In~\cite{SP_6}
the authors also show the state assignments (S0, S1, and S2) for the three
designs using dashed lines in Fig.~\ref{fig:sp_4}. Clearly, for this example, this set
of code motions leads to a design that requires one less state to execute.
Note that correctness issues place a number of constraints on the
kinds of code motions that can be performed. The authors omit these for
brevity; they are detailed in \framebox{[??]} and are also dealt with to some
extent in \framebox{[??]} and \framebox{[??]}.
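The renaming performed during duplication can be sketched in software terms; as before, this is a minimal illustration with hypothetical operands, not the design of Fig.~\ref{fig:sp_4}:

```python
def after_join(p, u, v, w):
    if p:
        a = u + v        # operation x writes a in the true branch
    else:
        a = u - v        # operation y writes a in the false branch
    z = a + w            # operation z, in the block after the join
    return z

def conditionally_speculated(p, u, v, w):
    # x and y are speculated out of the branches; z is then duplicated
    # up (conditionally speculated) into both branches as z1 and z2,
    # each reading the result of x or y directly, so the resources left
    # idle in the branches by the speculation are reused.
    x = u + v
    y = u - v
    if p:
        z1 = x + w       # duplicated copy of z in the true branch
        return z1
    z2 = y + w           # duplicated copy of z in the false branch
    return z2
```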

\section{Conclusions}\label{sps:concl}

In this Chapter, the background material for this thesis has been presented: the flow graphs used as intermediate representations, the Static Single Assignment form, the standard dataflow analysis framework, and the speculative code motions proposed in literature.

\framebox{Conclusions closing this ``informative'' chapter and introducing the next one,}
\framebox{the one detailing all the algorithms already present in literature}

