\chapter{State of the Art}\label{CH::SOA}
\markboth {Chapter \ref{CH::SOA}. State of the Art}{}
This Chapter presents the current state of the art for the steps addressed in this thesis. In particular, liveness analysis techniques, scheduling algorithms (with emphasis on speculative ones) and the existing approaches for register allocation and binding are surveyed and analyzed.

\vspace{1em} \noindent
It is important to note that the different intermediate representations have not been analyzed, since DFG, CFG and CDG are the de facto standard for all high-level synthesis tools. Controller synthesis has not been addressed either, since the translation from the finite-state machine representation to the corresponding digital circuits is usually performed by the logic synthesis tool.

\vspace{1em} \noindent
This Chapter is organized as follows. In Section \ref{soa:liveness}, liveness analysis algorithms are analyzed; in Section \ref{soa:sched}, scheduling algorithms are presented; in Section \ref{soa:spec}, a detailed description of speculative algorithms is given; and in Section \ref{soa:register}, register allocation and binding approaches are described. Except for Section \ref{soa:sched}, every Section includes a dedicated part on SSA-based algorithms. Finally, in Section \ref{soa:concl}, conclusions are summarized.

\section{Liveness Analysis Algorithms}\label{soa:liveness}
Many liveness analyses can be expressed using simultaneous equations over finite sets. The equations can usually be set up so that they can be solved by \emph{iteration}: the equations are treated as assignment statements, and all the assignments are repeatedly executed until none of the sets changes any more.

\vspace{1em} \noindent
Liveness information (\emph{live-in} and \emph{live-out}) can be calculated from \emph{use} and \emph{def} as the dataflow equations by Appel (\cite{BIB::APPEL}) show:

\begin{eqnarray}
    in[n]  &  =  &  use[n] \cup (out[n] - def[n]) \label{eqn:incoming}\\
    out[n] &  =  &  \bigcup_{s \in succ[n]} in[s] \label{eqn:outcoming}
\end{eqnarray}

\vspace{1em} \noindent
These dataflow equations for liveness analysis mean that:
\begin{enumerate}
  \item If a variable is in \emph{use[n]}, then it is \emph{live-in} at node $ n $. That is, if a statement uses a variable, the variable is live on entry to that statement.
  \item If a variable is \emph{live-in} at node $ n $, then it is \emph{live-out} at all nodes in \emph{pred[n]}.
  \item If a variable is \emph{live-out} at node $ n $, and not in \emph{def[n]}, then the variable is also \emph{live-in} at $ n $. That is, if someone needs the value of $ a $ at the end of statement $ n $, and $ n $ does not provide that value, then $a$'s value is needed even on entry to $ n $.
\end{enumerate}

\vspace{1em} \noindent
Dataflow Equations \ref{eqn:incoming} and \ref{eqn:outcoming} can be solved by iteration using the algorithm presented below:

\begin{algorithmic}[1]
\FORALL{$n$}
\STATE $in[n] \leftarrow \{\}; out[n] \leftarrow \{\}$
\ENDFOR
\REPEAT
    \FORALL{$n$}
        \STATE $in'[n] \leftarrow in[n]; out'[n] \leftarrow out[n]$
        \STATE $in[n] \leftarrow use[n] \cup (out[n] - def [n])$
        \STATE $out[n] \leftarrow \bigcup_{s \in succ[n]} in[s]$
    \ENDFOR
\UNTIL{$in'[n] = in[n]$ and $ out'[n] = out[n] $ for all $n$}
\end{algorithmic}

\vspace{1em} \noindent
In this algorithm, \emph{in[n]} and \emph{out[n]} are initialized to the empty set \{\} for all $n$, and then the equations are repeatedly treated as assignments until a fixed point is reached.

\vspace{1em} \noindent
The convergence of this algorithm can be significantly sped up by ordering the nodes properly, which can easily be done with a postorder traversal. When solving dataflow equations by iteration, the order of computation should follow the ``flow''. Since liveness flows \emph{backward} along control-flow arrows, and from ``out'' to ``in'', so should the computation.
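The iterative solution of the dataflow equations above can be sketched in a few lines of Python. This is a minimal illustration, not part of any cited tool; the CFG encoding as a map from node to its \emph{use} set, \emph{def} set and successor list is a hypothetical choice.

```python
# Minimal sketch of the iterative liveness algorithm.
# cfg: node -> (use set, def set, list of successor nodes)  [hypothetical encoding]
def liveness(cfg):
    live_in = {n: set() for n in cfg}    # in[n]  starts as the empty set
    live_out = {n: set() for n in cfg}   # out[n] starts as the empty set
    changed = True
    while changed:                       # repeat until a fixed point is reached
        changed = False
        for n, (use, defs, succs) in cfg.items():
            # out[n] = union of in[s] over all successors s
            out_n = set().union(*(live_in[s] for s in succs)) if succs else set()
            # in[n] = use[n] ∪ (out[n] - def[n])
            in_n = use | (out_n - defs)
            if in_n != live_in[n] or out_n != live_out[n]:
                live_in[n], live_out[n] = in_n, out_n
                changed = True
    return live_in, live_out
```

On Appel's classic loop example ($a \leftarrow 0$; $b \leftarrow a+1$; $c \leftarrow c+b$; $a \leftarrow b \cdot 2$; branch back; return $c$) this converges to $c$ being live-in at the entry node, as expected.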

\vspace{1em} \noindent
Another formulation of liveness analysis can be found in \cite{BIB::RA_3} and \cite{BIB::RA_SSA_1}: \begin{definition}
A variable $v$ is \emph{live} at point $p$ if it has been defined along a path from the
procedure's entry to $p$ and there exists a path from $p$ to a use
of $v$ along which $v$ is not redefined.
\end{definition}

\vspace{1em} \noindent
To compute liveness information with this formulation, \emph{out(n)}
is defined recursively as:
\begin{eqnarray}
 out(n_{f}) & = & \emptyset \label{eqn:liveness2} \\
 out(n) & = & \bigcup_{m \in succ(n)} UEVar(m) \cup (out(m) \cap \overline{VarKill(m)})
\end{eqnarray}
where \emph{n$_{f}$} is the exit node of the control flow graph.
If \emph{n} is a basic block in a CFG, then \emph{out(n)} is the set
of all such variables that are live upon exiting \emph{n}. Intuitively,
\emph{out(n)} contains those variables that are defined either
in \emph{n} or some other node \emph{n'} from which \emph{n} is reachable, and are
used in some CFG node \emph{n''} reachable from \emph{n}. Liveness analysis
computes \emph{out(n)} for every basic block in the program.

\vspace{1em} \noindent
This second liveness analysis formulation requires two additional sets of variables to compute \emph{out(n)}: \emph{UEVAR(n)} and \emph{VARKILL(n)}.
\begin{itemize}
  \item \emph{UEVAR(n)} is defined to be the set of upward-exposed variables
in \emph{n}. \emph{UEVAR(n)} contains all the variables that are used
in \emph{n} but are not defined in \emph{n}; some block that precedes \emph{n} during
program execution must define each variable in \emph{UEVAR(n)}.
  \item \emph{VARKILL(n)} is the set of all variables that are defined in \emph{n}.
\end{itemize}
Both \emph{UEVAR(n)} and \emph{VARKILL(n)} can be constructed by a
linear traversal of the operations in \emph{n}.
Let \emph{succ(n)} be the set of successor CFG nodes of \emph{n}. In
other words, there must be some (conditional or unconditional)
control transfer from \emph{n} to each node in \emph{succ(n)}.
During liveness analysis, these equations are repeatedly solved for each
basic block in the program until stability is reached.
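This second formulation can also be sketched compactly: \emph{UEVAR(n)} and \emph{VARKILL(n)} are built by a single linear traversal of each block, and \emph{out(n)} is then solved iteratively. The block encoding as a list of (defined variable, used variables) pairs is a hypothetical choice for illustration.

```python
# Hedged sketch of the UEVar/VarKill liveness formulation.
# blocks: node -> list of operations, each operation being (defined_var, used_vars)
# succ:   node -> list of successor nodes
def build_sets(ops):
    uevar, varkill = set(), set()
    for dst, uses in ops:                                # linear traversal of the block
        uevar |= {v for v in uses if v not in varkill}   # upward-exposed uses
        if dst is not None:
            varkill.add(dst)                             # defined in this block
    return uevar, varkill

def live_out(blocks, succ):
    sets = {n: build_sets(ops) for n, ops in blocks.items()}
    out = {n: set() for n in blocks}     # out(n_f) stays empty: the exit has no successors
    changed = True
    while changed:                       # repeat until stability is reached
        changed = False
        for n in blocks:
            new = set()
            for m in succ.get(n, []):
                uevar_m, varkill_m = sets[m]
                new |= uevar_m | (out[m] - varkill_m)
            if new != out[n]:
                out[n], changed = new, True
    return out
```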

\subsection{SSA-Based Algorithms}
To build an interference graph for a CFG in SSA form, a minor modification to the standard liveness analysis described in Equation \ref{eqn:liveness2} is required (\cite{BIB::RA_SSA_1}).
For programs in SSA form, in fact, let \emph{preds(m)} be the set of
predecessors of CFG node \emph{m}. Then, for each CFG node \emph{n} $\in$ \emph{preds(m)}, \emph{UEVAR(m)} is replaced with \emph{UEVAR(m, n)}: the
set of upward-exposed variables from \emph{m} to \emph{n}. This allows \emph{m}
to expose different $\phi$ function parameters to predecessors on
different incoming control paths, i.e., mutual exclusion. There
may also be variables that are originally in the \emph{UEVAR(m)} set
that are not defined by a $\phi$ function in \emph{m}. These variables are
still live upon entry to \emph{m}. All such variables are added to every
set \emph{UEVAR(m, n)} for each predecessor \emph{n} of \emph{m}.
For a program in SSA form, the recursive equation for
\emph{out(n)} sets is rewritten as
\begin{eqnarray}
 out(n) & = & \bigcup_{m \in succ(n)} UEVar(m,n) \cup (out(m) \cap \overline{VarKill(m)})
\end{eqnarray}
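The per-edge sets can be illustrated with a small sketch (block and variable names below are hypothetical): each predecessor sees only its own $\phi$ argument, together with the upward-exposed variables of the block that are not $\phi$ targets.

```python
# Sketch of per-edge UEVar(m, n) for an SSA block m.
# phis:        list of (phi_target, {predecessor_name: argument_var}) pairs
# plain_uevar: upward-exposed variables of m that are not defined by a phi
# preds:       predecessor names of m
def uevar_per_edge(phis, plain_uevar, preds):
    table = {}
    for n in preds:
        exposed = set(plain_uevar)       # these are live along every incoming edge
        for _target, args in phis:
            exposed.add(args[n])         # only this edge's phi argument is exposed
        table[n] = exposed
    return table
```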


\section{Scheduling Algorithms}\label{soa:sched}
This section presents some basic scheduling techniques, such as the ILP formulation, ASAP and ALAP Scheduling, List Scheduling, Force-Directed Scheduling, Kernighan-Lin Scheduling and the recent Wavesched technique.

\vspace{1em} \noindent
As said in Chapter \ref{CH::PRE}, there are two classes of scheduling problems: \textit{time-constrained scheduling} and \textit{resource-constrained scheduling}. Time-constrained scheduling minimizes the hardware cost when all operations are to be scheduled into a fixed number of control steps, and resource-constrained scheduling minimizes the number of control steps needed for executing all operations given a fixed amount of hardware.

\begin{itemize}
\item \textbf{Integer Linear Programming (ILP)} formulations for both resource-constrained scheduling and time-constrained scheduling have been proposed. However, the execution time of the algorithm grows exponentially with the number of variables and the number of inequalities. In practice the ILP approach is applicable only to very small problems.
\end{itemize}
Since the ILP method is impractical for large designs, heuristic methods that run efficiently at the expense of design optimality have been developed. Heuristic scheduling algorithms generally use two techniques: a \textit{constructive approach} and \textit{iterative refinement}. There are many approaches for constructive scheduling; they differ in the selection criteria used to choose the next operation to be scheduled.
\begin{itemize}
\item The simplest constructive approach is the \textbf{As Soon As Possible (ASAP) scheduling}. First, operations are sorted into a list according to their topological order. Then, operations are taken from the list one at a time and placed into the earliest possible control step.
\item The other simple approach is the \textbf{As Late As Possible (ALAP) scheduling}. The ALAP value for an operation defines the latest control step into which an operation can possibly be scheduled. In this approach, given a time constraint in terms of the number of control steps, the algorithm determines the latest possible control step in which an operation must be executed.
\end{itemize}
The critical paths within the flow graph can be found by taking the intersection of the ASAP and ALAP schedules, such that the operations that appear in the same control steps in both schedules are on the critical paths.
In both ASAP and ALAP scheduling, no priority is given to operations on the critical path, so those operations may be mistakenly delayed when resource limits are imposed on scheduling.
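The ASAP/ALAP computation and their critical-path intersection can be sketched as follows. This is a minimal illustration assuming unit-delay operations and a DFG encoded as a map from node to its predecessor (data-dependency) list; both choices are hypothetical.

```python
# Minimal ASAP/ALAP sketch over a DFG: node -> list of predecessor nodes.
def asap(dfg):
    steps = {}
    def visit(n):  # earliest step: one after the latest predecessor
        if n not in steps:
            steps[n] = 1 + max((visit(p) for p in dfg[n]), default=0)
        return steps[n]
    for n in dfg:
        visit(n)
    return steps

def alap(dfg, latency):
    succs = {n: [m for m in dfg if n in dfg[m]] for n in dfg}
    steps = {}
    def visit(n):  # latest step: one before the earliest successor
        if n not in steps:
            steps[n] = min((visit(s) for s in succs[n]), default=latency + 1) - 1
        return steps[n]
    for n in dfg:
        visit(n)
    return steps

def critical_path_ops(dfg):
    a = asap(dfg)
    l = alap(dfg, max(a.values()))
    return {n for n in dfg if a[n] == l[n]}  # same step in both schedules: zero mobility
```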
\begin{itemize}
\item \textbf{List Scheduling} overcomes this problem by using a global criterion for selecting the next operation to be scheduled. Examples of global priority function are mobility (which is defined as the difference between the ASAP and ALAP values of an operation) and urgency (which is defined as the minimum number of control steps from the bottom at which an operation can be scheduled before a timing constraint is violated). In list scheduling, a list of ready operations is ordered according to the priority function and processed for each state.
\item The \textbf{Force Directed Scheduling (FDS)} is another example that uses a global selection criterion to choose the next operation for scheduling. The FDS algorithm relies on the ASAP and ALAP scheduling algorithms to determine the range of control steps for every operation. The main goal of the FDS algorithm is to reduce the total number of functional units used in the implementation of the design. The algorithm achieves the objectives by uniformly distributing operations of the same type into all the available states.
\end{itemize}
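The list-scheduling idea described above can be sketched with mobility (ALAP minus ASAP) as the priority function. The sketch assumes a single operation type, unit delays, $k$ functional units, and precomputed mobility values; all of these simplifications are hypothetical illustration choices.

```python
# Hedged sketch of resource-constrained list scheduling.
# dfg:      node -> list of predecessor operations
# k:        number of functional units (single operation type assumed)
# mobility: node -> ALAP - ASAP value (lower = more urgent)
def list_schedule(dfg, k, mobility):
    scheduled, step = {}, 1
    while len(scheduled) < len(dfg):
        # ready list: all predecessors already finished in an earlier step
        ready = [n for n in dfg if n not in scheduled
                 and all(p in scheduled and scheduled[p] < step for p in dfg[n])]
        ready.sort(key=lambda n: mobility[n])   # order by the priority function
        for n in ready[:k]:                     # fill the k available units
            scheduled[n] = step
        step += 1
    return scheduled
```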
Algorithms similar to FDS are called \textquotedblleft constructive\textquotedblright because they construct a solution without performing any backtracking. The decision to schedule an operation into a control step is made on the basis of a partially scheduled Data Flow Graph (DFG); it does not take into account future scheduling of operations into the same control step. Due to the lack of a lookahead scheme and the lack of compromises between early and late decisions, the resulting solution may not be optimal. This weakness can be addressed by iteratively rescheduling some of the operations in the given schedule.
\begin{itemize}
\item One example of this approach is based on the paradigm originally proposed for the graph-bisection problem by \textbf{Kernighan and Lin (KL)}. In this approach, an initial schedule is obtained by any scheduling algorithm. New schedules are obtained by rescheduling a sequence of operations that maximally reduces the scheduling cost. If no improvement is attainable, the process halts.
\end{itemize}

\vspace{1em} \noindent
A novel scheduling algorithm targeted
towards minimizing the average execution time of control-flow intensive
behavioral descriptions is presented in \cite{BIB::SP_1} by Lakshminarayana et al.
\begin{itemize}
  \item The algorithm presented in Figure \ref{fig:wavesched}, called \textbf{Wavesched}, accepts as input a CDFG, a target clock period, and an allocation constraint; it produces a scheduled CDFG which is optimized
for average execution time.
\end{itemize}


\begin{figure}
\centering
\scriptsize
\begin{verbatim}
   WaveSched(CDFG G, Allocation_Constraints C, clock_period clk, Unroll_bound U, STG S)
   {

0:    SET<OPERATION> initial = get_1_level_operations(G);
1:    STATE S0;
2:    STATE parent_state = S0;
3:    QUEUE<STATE> State_q;
4:    ARRAY<SET<COMPOSITE_OPERATION>> Unscheduled_immediate_successors;
5:    SET<COMPOSITE_OPERATION> initial_composite = Make_composite(initial);

6:    loop_forever()
      {
7:       SET<COMPOSITE_OPERATION> condition_inputs = composite operations whose outputs 
         are control dependency edges feeding operations in initial composite;
8:       foreach combination of conditions (condition, condition_inputs)
         {
9:          SET<COMPOSITE_OPERATION> S_condition = 
            under_condition(condition, initial_composite);
10:         STATE new_st;
11:         loop_forever()
            {
12:            COMPOSITE_OPERATION new_C =
               select_composite_operation(S_condition, C, clk, U);
13:            if (new_C == NULL) break;
               else
               {
14:               add_composite_operation_to_state(new_C, new_st);
15:               add_schedulable_successors(new_C, S_condition);
16:               remove_composite_operation(S_condition, new_C);
               }
            }
17:         S_successors = S_condition;
18:         if (new_st is identical to an existing state, P)
19:            add an arc in S, labeled condition, from parent state to P;
            else
            {
20:            add an arc in S, labeled condition, from parent state to new_st;
21:            Unscheduled_immediate_successors[new_st] = S_condition;
22:            append(new_st,State_q);
            }
         }
23:      if (is empty(State_q) == 1) break;
24:      STATE s = dequeue_top(State_q);
25:      Initial_composite = Unscheduled_immediate_successors[s];
      }
   }
\end{verbatim}
  \caption{Wavesched Scheduling Algorithm}\label{fig:wavesched}
\end{figure}

While the CFG model is well suited to capture the execution of instructions on a general-purpose uniprocessor, it has been shown to be inadequate for exploiting the parallelism inherent in typical control-flow intensive (CFI) applications. Thus, by operating on a CDFG, Wavesched is a scheduling technique that exploits both parallelism and mutual exclusion, and incorporates comprehensive data-dependent loop optimizations, including dynamic loop unrolling and winding, and parallel execution of data-dependent loops.


\section{Speculative Algorithms}\label{soa:spec}
The complexity of modern designs has grown enormously in recent years. This means that the standard techniques for high-level synthesis can be considered obsolete for a certain number of new designs. To cope with this problem, recent research results have demonstrated, for example, the effectiveness of speculative code transformations on mixed control-data flow designs in reducing the length of the resulting schedules. In this Section some speculative approaches are presented, both based and not based on the SSA intermediate representation.

\begin{itemize}
  \item Cordone, Ferrandi et al., in \cite{BIB::SP_3}, analyze the scheduling problem
by formulating an approach based on \textbf{Integer Linear Programming
(ILP)} to minimize the number of control steps given the amount of resources.
Their work proposes an approach based on a new
data structure, the control and data dependence graph, that allows
a better exploitation of parallelism present in the original
specification.
They improve the already proposed ILP scheduling approaches
by introducing a new conditional resource sharing constraint which is
then extended to the case of speculative computation. The ILP formulation
has been solved by using a Branch and Cut framework which
provides better results than standard branch and bound techniques.
In general, the quality of the solution provided by the heuristic approach
is nearly optimal showing that the data structure based on the
combined CDG+DDG is better than other intermediate representations
previously proposed in literature.

\item
Techniques to integrate speculative execution
into scheduling during high-level synthesis of control-flow intensive
designs are presented in \cite{BIB::SP_2}. In that context, Lakshminarayana et al. demonstrate that not using
information such as resource constraints and branch probabilities,
while deciding when to speculate, can lead to significantly
suboptimal performance. They also demonstrate that it is necessary
to perform speculative execution along multiple paths at
a fine-grain level, in order to obtain maximal benefits. In other
words, the paths for speculation need to be decided dynamically,
during the course of scheduling, in accordance with the criticality
of individual operations and the availability of resources.
In their work, the authors present techniques to automatically manage the
additional speculative results that are generated by speculatively
executed operations. They show how to incorporate speculative
execution into a generic scheduling methodology (Figure \ref{fig:generic_sched}), and in particular
present its \textbf{integration into Wavesched} \cite{BIB::SP_1}.

\begin{figure}[htb]
\centering
      \includegraphics[width=0.55\textheight]{./chapters/soa/images/generic_sched.JPG}
  \caption{Pseudocode for a generic scheduling algorithm.}\label{fig:generic_sched}
\end{figure}

\vspace{1em} \noindent
Lakshminarayana's technique, graphically summarized in Figure \ref{fig:wave_spec}, works as follows: speculatively executed operations are annotated with the conditional operations whose results they depend upon.
The results generated by such speculatively executed operations are called speculative
results, and may or may not be used depending on the
evaluation of conditional operations during later clock cycles or
control states. The speculation condition of a speculative result
is defined to be the speculation condition of the operation that
generates it. When a conditional operation \emph{c} is executed, the proposed
scheduler automatically generates code to resolve all speculative
results whose speculation conditions involve \emph{c}.

\begin{figure}[htb]
\centering
      \includegraphics[width=0.55\textheight]{./chapters/soa/images/wave_spec.JPG}
  \caption{Flow diagram of the scheduling algorithm presented in \cite{BIB::SP_2}.}\label{fig:wave_spec}
\end{figure}

\vspace{1em} \noindent
The benefit of
incorporating speculative execution into the scheduling process
(as opposed to applying it as a pre-processing step to scheduling)
is that detailed information that is available during scheduling,
such as resource constraints, branch probabilities, etc.,
can be factored in when making decisions involving speculation.

\vspace{1em} \noindent
Experimental results demonstrate
that the presented techniques can improve the performance of
the generated schedule significantly. Schedules produced using
speculative execution were, on average, 2.1 times faster than
schedules produced without its benefit.

\item
The quality of synthesis results for most high-level synthesis
approaches is strongly affected by the choice of control flow (through conditions
and loops) in the input description. This leads to a need for high-level
and compiler transformations that overcome the effects of programming
style on the quality of generated circuits. To address this issue, Gupta et al. in \cite{BIB::SP_6} have developed
a set of speculative code-motion transformations that enable movement
of operations through, beyond, and into conditionals with the objective
of maximizing performance. They have implemented these code transformations,
along with supporting code-motion techniques and variable
renaming techniques, in a high-level synthesis research framework called
\textbf{Spark}.

\vspace{1em} \noindent
Gupta et al. find interesting results. They find that the speculative-code motions
lead to reductions between 36\% and 59\% in the number of states in the
finite-state machine (controller complexity) and the cycles on the longest
path (performance) compared with the case when only nonspeculative code
motions are employed. Also, logic synthesis results show fairly constant critical
path lengths (clock period) and a marginal increase in area.

\vspace{1em} \noindent
In fact, they demonstrate that enabling just the nonspeculative
code motions across hierarchical blocks of code and early condition execution lead to modest improvements in the
number of FSM states and in the cycles on the longest path.
The largest improvements in performance (cycles) are obtained
by employing speculation and conditional speculation.
They also show that the total delay is almost halved when
all the code motions are enabled over when code motions only within
basic blocks are allowed and that the clock period does not increase by applying
these code motions. The constant critical path length, coupled with
large decreases in cycles on the longest path, leads to large decreases
in the total delay through the circuit.
However, code motions such as speculation and conditional speculation
can lead to an increase in area, due to the increasing complexity of the interconnect
(multiplexers and associated control logic) that is a product of the
shorter schedule lengths produced by the speculative code motions.
Shorter schedule lengths mean that resource utilization and resource
sharing increases and this leads to an increase in the complexity of the
multiplexers and associated control logic. This complexity increase is
particularly large due to conditional speculation because it duplicates
operations and, thus, more operations are mapped to the same number
of resources as before.
\end{itemize}


\subsection{SSA-Based Algorithms}
Speculative execution, such as control speculation and data
speculation, is an effective way to improve program performance.
Using edge/path profile information or simple heuristic rules,
existing compiler frameworks can adequately incorporate and
exploit control speculation. However, very little has been done so
far to allow existing compiler frameworks to incorporate and
exploit data speculation effectively in various program
transformations beyond instruction scheduling.
In addition, very little has been done so far to manage speculation algorithms with SSA representation.

\vspace{1em} \noindent
Lin, Chen et al. \cite{BIB::SP_SSA_1} propose a \textbf{speculative SSA form} to incorporate information from alias profiling and/or heuristic rules for data speculation, thus
allowing existing program analysis frameworks to be easily
extended to support both control and data speculation.

\begin{figure}[htb]
\centering
      \includegraphics[width=0.50\textheight]{./chapters/soa/images/framework.JPG}
  \caption{Lin, Chen et al. framework of speculative analysis and
optimizations.}\label{fig:framework}
\end{figure}

\vspace{1em} \noindent
Authors address the issues of how to incorporate
information for data speculation into an existing compiler analysis
framework (Figure \ref{fig:framework}) and thus enable aggressive speculative optimizations.
They use profiling information and/or simple heuristic rules to
supplement traditional non-speculative compile-time analysis.
Such information is then incorporated into the SSA form.

\vspace{1em} \noindent
Through the
experimental results on speculative register promotion, they have
demonstrated the usefulness of this speculative compiler
framework and promising performance potential of speculative
optimizations.



\section{Register Allocation and Binding Algorithms}\label{soa:register}
Data path allocation, part of the High-Level Synthesis
flow, deals with the allocation and binding of
functional, storage, and interconnection units to
operations, variables, and connections in a
design respectively.

\vspace{1em} \noindent
Since the resources have several operations and variables mapped
to them, there exist opportunities to reduce the number of inputs
to, and hence, the complexity of, the (de)multiplexers between
these resources by resource binding techniques \cite{BIB::SP_5}. Fewer inputs not
only mean smaller interconnects but also simpler associated control
logic. This section describes resource allocation and binding methodologies
to minimize all registers, interconnect and control costs.

\vspace{1em} \noindent
Recent studies (\cite{bib::RA_1xx}) have demonstrated that interconnection costs have to be taken into account, since the area of multiplexers and interconnection elements has by far outweighed the area of functional units and registers. Register binding and allocation is therefore not necessarily confined to finding the minimal number of registers, but aims at a more complex mapping of variables onto a larger number of registers: this can lead to an optimal datapath implementation that does not have the minimum number of registers \cite{BIB::INT_2}.

\vspace{1em} \noindent
Register binding and allocation techniques usually aim at allocating a minimal number of registers to hold the design's variables. Some of these techniques, such as Clique Partitioning \cite{bib::RA_alg_1} and the Left-Edge Algorithm \cite{bib::RA_alg_2}, ignore the costs of register binding and the implied interconnection. Other techniques, such as the extended Clique Partitioning and the Weighted-Bipartite approach \cite{bib::RA_alg_3}, take interconnection costs into consideration while performing register binding. In either case, the obtained results always tend to minimize the number of registers, at the expense of interconnection costs.

\vspace{1em} \noindent
In addition, the control and interconnect overheads
incurred due to possible speculative transformations can be minimized by resource
binding targeted at interconnect minimization. This leads to lower
area, without adversely affecting the latency of the final hardware
generated by logic synthesis tools.

\vspace{1em} \noindent
These improvements hold even though, under an interconnect minimization strategy, it is sometimes better to allocate more registers if this leads to a reduction in the steering logic: the reductions in interconnect complexity dominate any increases due to higher register requirements.

\vspace{1em} \noindent
There are three approaches to solve the allocation problem:
\begin{enumerate}
\item \textbf{Constructive approaches}, which progressively construct a design while traversing the CDFG;
\item \textbf{Decomposition approaches}, which decompose the allocation problem into its constituent parts and solve each of them separately;
\item \textbf{Iterative methods}, which try to combine and interleave the solution of the allocation subproblems.
\end{enumerate}
A constructive approach starts with an empty datapath and builds the datapath gradually by adding functional, storage, and interconnection units as necessary. Although constructive algorithms are simple, the solution they find can be far from optimal. \\In order to improve the quality of the results, some researchers have proposed a decomposition approach, where the allocation process is divided into a sequence of independent tasks; each task is transformed into a well-defined graph-theoretical problem: clique partitioning, the left-edge algorithm, and the weighted bipartite matching algorithm are examples.
\begin{itemize}
\item Tseng and Siewiorek \cite{bib::RA_alg_1} divided the allocation problem into three tasks of storage, functional-unit, and interconnection allocation, which are solved independently by mapping each task to the well-known problem of \textbf{graph clique-partitioning}. In the graph formulation, operations, values, or interconnections are represented by nodes. An edge between two nodes indicates that those two nodes can share the same hardware. Thus, an allocation problem, such as storage allocation, can be solved by finding the minimal number of cliques in the graph. Because finding the minimal number of cliques in a graph is an NP-hard problem, Tseng and Siewiorek \cite{bib::RA_alg_1} take a heuristic approach.

\vspace{1em} \noindent
For register allocation, Tseng and Siewiorek build a graph where each vertex
represents a variable and an edge exists between two vertices if,
and only if, the two corresponding variables can share a same
register (i.e. they have disjoint lifetimes, from dataflow analysis results). The graph is then
partitioned into a number of cliques (a clique is a complete subgraph).
The number of cliques partitioned is the number of
registers needed and a register is allocated for those variables
corresponding to the vertices in each clique.
\end{itemize}
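The register-allocation use of clique partitioning described above can be illustrated with a small greedy sketch. The encoding of lifetimes as half-open $[start, end)$ intervals is a hypothetical choice; in a real flow these would come from the dataflow analysis results, and the greedy clique growth stands in for the heuristic of \cite{bib::RA_alg_1}.

```python
# Hedged sketch of clique-partitioning register allocation.
# lifetimes: variable -> (start, end) half-open lifetime interval
def clique_partition(lifetimes):
    def disjoint(a, b):  # disjoint lifetimes => the two variables are compatible
        return a[1] <= b[0] or b[1] <= a[0]
    registers = []       # each register holds one clique of pairwise-compatible vars
    for v, live in sorted(lifetimes.items()):
        for reg in registers:
            # v may join a clique only if compatible with every member
            if all(disjoint(live, lifetimes[u]) for u in reg):
                reg.append(v)
                break
        else:
            registers.append([v])   # no compatible clique: allocate a new register
    return registers
```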
Although the clique-partitioning method, when applied to storage allocation, can minimize the storage requirements, it totally ignores the interdependence between storage and interconnection allocation. The previous method can thus be extended by augmenting the graph edges with weights that reflect the impact on interconnection complexity due to register sharing among variables \cite{bib::RA_alg_3}. In this way it takes interconnection costs into consideration while performing register binding, and it can find a coloring scheme such that the weight of the global scheme is minimized.
\begin{itemize}
\item \textbf{Left-edge algorithm} \cite{bib::RA_alg_2} can be applied to solve the register-allocation problem. Unlike the clique-partitioning problem, which is NP-complete, the left-edge algorithm has a polynomial time complexity. Moreover, this algorithm allocates the minimum number of registers. However, it cannot take into account the impact of register allocation on the interconnection cost, as can the weighted version of the clique-partitioning algorithm.
\item Both the register and functional-unit allocation problems also can be transformed into a \textbf{weighted bipartite-matching algorithm} \cite{bib::RA_alg_3}. In this approach, a bipartite graph is first created that contains two disjoint subsets (e.g., one subset of registers and one of variables, or one subset of operations and one of functional units), and an edge connecting two nodes in different subsets represents the node in one subset that can be assigned to the node of the other subset. Thus, the problem of matching each variable to a register is equivalent to the classic job-assignment problem.
    For register allocation, this approach guarantees minimal
usage of registers while being able to take the interconnection
cost into account.
\end{itemize}
The matching algorithm, like the left-edge algorithm, allocates a minimum number of registers. It also takes partially into consideration the impact of register allocation on interconnection allocation since it can associate weights with the edges.
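The left-edge algorithm itself can be sketched in a few lines. This is a hedged illustration assuming variable lifetimes given as half-open $[start, end)$ intervals with nonnegative start times; interconnection costs are ignored, exactly as noted above.

```python
# Sketch of the left-edge algorithm for register allocation.
# lifetimes: variable -> (start, end) half-open interval, start >= 0 assumed
def left_edge(lifetimes):
    # sort intervals by their left edge
    intervals = sorted(lifetimes.items(), key=lambda kv: kv[1][0])
    registers = []
    while intervals:
        reg, last_end, remaining = [], 0, []
        for v, (start, end) in intervals:
            if start >= last_end:       # fits after the last interval packed here
                reg.append(v)
                last_end = end
            else:
                remaining.append((v, (start, end)))
        registers.append(reg)           # one register per greedy packing pass
        intervals = remaining
    return registers
```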

\vspace{1em} \noindent
Given a datapath synthesized by constructive or decomposition methods, a further improvement may be achieved by reallocation, an iterative refinement approach.
\begin{itemize}
\item The most straightforward approach could be a simple assignment exchange using the \textbf{pairwise exchange} or the \textbf{simulated annealing} method, which is an approximate solution for exhaustive search.
\item In addition, a more sophisticated \textbf{branch-and-bound} method can be applied by reallocating a group of different types of entities for datapath refinement.
\end{itemize}

\vspace{1em} \noindent
An integer linear programming (\textbf{ILP}) formulation for the allocation and binding problem in high-level synthesis is presented in \cite{BIB::RA_alg_4}. Given a behavioral specification and a time-step schedule of operations, the formulation minimizes wiring and multiplexer areas. It was the first ILP model for minimizing multiplexer and wiring areas to be mathematically formulated and optimally solved. The model handles chaining, multi-cycle operations, pipelined modules and conditional branches, and trades off wiring area against resource area.
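Schematically (and not the exact model of \cite{BIB::RA_alg_4}), a binding ILP of this kind can be written with binary variables $x_{v,r}$ ($x_{v,r}=1$ iff variable $v$ is bound to register $r$), a cost $c_{v,r}$ abstracting the multiplexer and wiring contribution of the binding, and the set $E$ of interfering variable pairs:

\begin{eqnarray}
\min \sum_{v \in V} \sum_{r \in R} c_{v,r}\, x_{v,r} & & \nonumber \\
\textrm{s.t.} \quad \sum_{r \in R} x_{v,r} = 1, & & \forall v \in V \nonumber \\
x_{u,r} + x_{v,r} \leq 1, & & \forall r \in R,\ \forall (u,v) \in E \nonumber \\
x_{v,r} \in \{0,1\} & & \nonumber
\end{eqnarray}

\noindent
The pairwise constraints encode that two simultaneously live variables cannot share a register; the actual model in \cite{BIB::RA_alg_4} is considerably richer, covering chaining, multi-cycling, pipelined modules and conditional branches.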

\vspace{1em} \noindent
Other techniques for register allocation and binding can be found in the literature, such as the Linear Scan algorithm, the Min-Cost Max-Flow algorithm, and the k-cofamily-based algorithm.
\begin{itemize}
\item In \cite{BIB::INT_2}, a technique for datapath allocation whose main aim is to optimize the interconnections is proposed: the \textbf{Min-Cost Max-Flow (MCMF) Algorithm}. Both the module allocation problem and the register allocation problem are solved using a flow-network model, with the flow cost representing the interconnection cost. The MCMF method not only optimizes the datapath and interconnection area, but can also reduce the interconnection further by appropriately increasing the number of registers. Although the formulation is efficient, it may use more registers than the minimum required.
\item Poletto and Sarkar \cite{BIB::RA_alg_6} describe a fast global register allocation algorithm called \textbf{Linear Scan}. This algorithm is not based on graph coloring: rather than coloring an interference graph, it allocates registers to variables in a single linear-time pass over coarse live-interval information. Linear scan is considerably faster than graph-coloring allocators, is simple to implement, and produces code that is almost as efficient as that obtained with more complex and time-consuming register allocators based on graph coloring.
  \item In \cite{BIB::RA_1xx}, Chen and Cong first formulate a \textbf{k-cofamily-based register binding algorithm} that is guaranteed to use the minimum number of registers while optimizing the multiplexers, and then further reduce the multiplexer width through an efficient port-assignment algorithm.
Given a compatibility graph $G_{c}(V_{c},A_{c})$, their objective for register binding is to find a subset of $A_{c}$ that covers all the vertices in $V_{c}$ such that the total weight of the edges in the subset is minimized, under the constraint that the vertices are bound to at most \emph{k} registers.
Their experimental results show that the k-cofamily-based register binding algorithm is overall better than the left-edge register binding algorithm in total multiplexer-input usage, and also better than a bipartite-graph-based algorithm.
\end{itemize}
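The basic Poletto--Sarkar scan can be sketched as follows (a simplified sketch, assuming precomputed inclusive live intervals and the original ``spill the interval that ends furthest away'' heuristic; names are hypothetical):

```python
def linear_scan(intervals, num_regs):
    """Single linear pass over live intervals sorted by start point.

    intervals: list of inclusive (start, end) pairs.
    Returns a dict: interval index -> register number, or "spill"
    when no register is free and the interval is spilled to memory.
    """
    order = sorted(range(len(intervals)), key=lambda i: intervals[i][0])
    active = []                       # (end, index, reg), sorted by end
    free = list(range(num_regs))
    where = {}
    for i in order:
        start, end = intervals[i]
        # expire intervals that ended before the current one starts
        while active and active[0][0] < start:
            _end, _idx, r = active.pop(0)
            free.append(r)
        if free:
            r = free.pop()
            where[i] = r
            active.append((end, i, r))
            active.sort()
        else:
            # spill heuristic: evict the active interval ending last
            last_end, j, r = active[-1]
            if last_end > end:        # current interval wins the register
                where[i] = r
                where[j] = "spill"
                active[-1] = (end, i, r)
                active.sort()
            else:                     # current interval is spilled
                where[i] = "spill"
    return where
```

With two registers and the intervals $(0,10)$, $(1,3)$, $(2,8)$, $(4,6)$, the long interval $(0,10)$ is evicted in favor of $(2,8)$, and the register freed by $(1,3)$ is reused by $(4,6)$.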

\subsection{SSA-Based Algorithms}
In addition to all the algorithms described above, coloring allocators have been developed as part of a new generation of backends which include global optimization based on the creation of a static single assignment (SSA) program representation. The essence of this transformation is to ensure that the definition-use relationship of values in a program is one-to-many rather than the usual many-to-many.

\vspace{1em} \noindent
In particular, in SSA form, each defining point of a register variable defines a separate live range, which may be colored independently, overlapped by code motion optimizations, and if necessary spilled independently.

\vspace{1em} \noindent
Optimization based on SSA form inherently requires graph coloring to transform the optimized program back into implementable form. Conversely, SSA form enhances the advantages of the coloring algorithms. In particular it allows the live ranges originating from different definition points to be allocated independently. Almost all global optimizations are simplified in this formulation.

\vspace{1em} \noindent
Three definitions will help to better understand the following approaches:

\begin{definition}
A graph is \textbf{chordal} if every cycle with four or more edges has a chord, that is, an edge which is not part of the cycle but which connects two vertices on the cycle. Chordal
graphs are also known as `triangulated', `rigid-circuit', `monotone transitive',
and `perfect elimination' graphs.
\end{definition}

\begin{definition}
In a 1-perfect graph,
the chromatic number, that is, the minimum number of colors necessary to color
the graph, equals the size of the largest clique. A \textbf{perfect graph} is a 1-perfect
graph with the additional property that every induced subgraph is 1-perfect.
Every chordal graph is perfect, and every perfect graph is 1-perfect.
\end{definition}

\begin{definition}
A \textbf{strict program} ensures that every variable is assigned a
value before the variable is used in a computation along every
possible path of program execution. Thus, a strict program is one in which every path from the initial block until the use of a variable \emph{v} passes through a definition of \emph{v}.
\end{definition}

\vspace{1em} \noindent
Since chordal graphs are perfect, they inherit all the properties of perfect graphs, the most important being that the chromatic number of the graph equals the size of the largest clique. Even stronger, this property holds for each induced subgraph of a perfect graph. In other words, chordality ensures that the local register pressure is not only a lower bound for the true register demand but a precise measure: determining the point in the program where the most variables are live gives the number of registers needed for a valid register allocation of the program.
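This measure (often called MAXLIVE) can be computed directly from the live ranges; a small sketch, assuming inclusive integer intervals:

```python
def max_live(intervals):
    """Maximum number of simultaneously live intervals.

    For interval (and, more generally, chordal) interference graphs
    this pressure bound is exact: it equals the minimum number of
    registers needed.  intervals are inclusive (start, end) pairs.
    """
    points = sorted({p for s, e in intervals for p in (s, e)})
    return max(sum(1 for s, e in intervals if s <= t <= e) for t in points)
```

For the intervals $(0,3)$, $(1,4)$, $(4,6)$, $(5,7)$, at most two ranges overlap at any point, so two registers suffice.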

\vspace{1em} \noindent
Chordal graphs have several useful properties. Problems such as minimum coloring, maximum clique, maximum independent set and minimum covering by cliques, which are NP-complete in general, can be solved in polynomial time for chordal graphs. In particular, optimal coloring of a chordal graph $G = (V, E)$ can be done in $O(|E| + |V|)$ time.
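The linear-time bound relies on greedy coloring along a perfect elimination ordering: coloring the vertices in the \emph{reverse} of such an ordering with the smallest available color is optimal for chordal graphs. A compact sketch (names hypothetical):

```python
def greedy_color(vertices, adj):
    """Greedy coloring of a graph along the given vertex order.

    vertices: iteration order; adj: dict vertex -> set of neighbors.
    If `vertices` is the reverse of a perfect elimination ordering
    of a chordal graph, the number of colors used is the minimum,
    i.e., the size of the largest clique.
    """
    color = {}
    for v in vertices:
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:              # smallest color not used by neighbors
            c += 1
        color[v] = c
    return color
```

For example, on the chordal graph with edges $a\!-\!b$, $a\!-\!c$, $b\!-\!c$, $b\!-\!d$, $c\!-\!d$, coloring in the order $c, b, d, a$ (the reverse of the perfect elimination ordering $a, d, b, c$) uses exactly three colors, the size of the largest clique.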

\vspace{1em} \noindent
Recently, Brisk et al. \cite{BIB::Brisk_pol} proved that strict programs in SSA form have perfect
interference graphs; independently, Hack \cite{BIB::RA_SSA_4} proved the stronger result
that strict programs in SSA form have chordal interference graphs (chordal graphs being a subclass of perfect graphs).
Since perfect and chordal graphs can be colored in polynomial time, these results have led to several \textbf{coloring algorithms for chordal graphs}:

\begin{itemize}
  \item A formulation of a coloring algorithm for chordal graphs is presented by Brisk et al. in \cite{BIB::RA_SSA_1}. This work shows that, given a CFG in SSA form, an optimal color assignment can be computed without explicitly constructing an interference graph (pseudocode in Figure \ref{fig:pseudocose_RA_SSA_1}). By sidestepping the construction of the interference graph, optimal register sharing can run faster than linear scan allocation. Register sharing reduces the overall area of the design and increases the utilization of registers: an optimal solution to the register sharing problem minimizes the number of registers in the resulting datapath, yielding a more compact design with increased register utilization.

\begin{figure}[htb]
\centering
      \includegraphics[width=0.35\textheight]{./chapters/soa/images/pseudocose_RA_SSA_1.JPG}
  \caption{Pseudocode for optimal color assignment without building interference graph (Brisk approach).}\label{fig:pseudocose_RA_SSA_1}
\end{figure}

  \item Another coloring approach is described by Hack and Goos in \cite{BIB::RA_SSA_2} (further details in \cite{BIB::RA_SSA_4} and \cite{BIB::RA_SSA_6}). The authors show that the interference graphs of strict SSA-form programs are chordal, which leads to a coloring algorithm running in quadratic time. Furthermore, like the previous formulation, the coloring algorithm does not need the interference graph to be materialized: it uses a coloring sequence induced by the dominance relation of the program. In addition, the authors show how a register allocation of an SSA-form program using \emph{m} registers can be turned into a register allocation of the corresponding non-SSA program that also uses no more than \emph{m} registers.

\begin{figure}[htb]
\centering
      \includegraphics[width=0.50\textheight]{./chapters/soa/images/alg_RA_SSA_3.JPG}
  \caption{Overview of the Quintao Pereira proposed algorithm. Grey boxes are optional extensions.}\label{fig:alg_RA_SSA_3}
\end{figure}

  \item An additional SSA-based register allocation approach is presented in \cite{BIB::RA_SSA_3}. In that work, the authors describe the many phases that constitute the register allocation process (from the assignment of physical registers to variables to the generation of running code), and then present a collection of methods that can improve the quality of the code produced by the proposed algorithm (an overview is given in Figure \ref{fig:alg_RA_SSA_3}).
\end{itemize}
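The common idea behind these approaches, coloring in dominance order without materializing the interference graph, can be sketched as follows (a simplified sketch, assuming liveness at each definition point has been precomputed; names are hypothetical):

```python
def ssa_color(defs_in_dom_order, live_at_def):
    """Color SSA variables without building an interference graph.

    defs_in_dom_order: variables listed in the order their
    definitions are visited along the dominance relation.
    live_at_def[v]: set of variables live where v is defined.
    Because the interference graph of a strict SSA program is
    chordal, assigning the smallest free color at each definition
    yields an optimal coloring.
    """
    color = {}
    for v in defs_in_dom_order:
        used = {color[u] for u in live_at_def[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color
```

For instance, if \texttt{b} and \texttt{c} are each defined while only \texttt{a} is live, they interfere with \texttt{a} but not with each other, and can share a second color.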

\vspace{1em} \noindent
There are situations in which graph coloring is too slow, for example in
a just-in-time (JIT) compiler that translates an intermediate program representation to
machine code at load time or even at run time. JIT compilers must do their job in
almost no time but should still produce high-quality code. This conflict led to the
Linear Scan register allocation technique, already presented above in this Section.
A significant contribution to this approach is given in \cite{BIB::RA_SSA_5}: in this work, the authors base the \textbf{Linear Scan allocator on programs in SSA form}. This
simplifies data-flow analysis and tends to produce shorter live intervals, but requires
modifications to the original linear scan algorithm.

\vspace{1em} \noindent
The adaptations for SSA form are done in a preprocessing step in which moves are
inserted into the instruction stream to neutralize the $\phi$-functions. After this
step, SSA form no longer affects linear scan register allocation, since $\phi$-functions do
not show up in the live intervals any more.
In contrast to Poletto and Sarkar \cite{BIB::RA_alg_6}, this linear scan algorithm can deal with
lifetime holes and fixed intervals, which makes it more complicated: in addition to the
three sets \emph{unhandled}, \emph{handled} and \emph{active}, a fourth set, \emph{inactive}, is needed to hold
intervals with a hole into which the start of the current interval falls. Registers
occupied by overlapping fixed intervals must also be excluded from register
selection. Otherwise, the algorithm is very close to the one described in \cite{BIB::RA_alg_6}.
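The $\phi$-neutralization preprocessing step can be sketched as follows (a simplified sketch with a hypothetical block representation; a production implementation must additionally resolve $\phi$-cycles with parallel copies and split critical edges):

```python
def eliminate_phis(blocks):
    """Replace phi-functions with copies in the predecessor blocks.

    blocks: dict name -> {"phis": [(dest, {pred_name: source})],
                          "code": [...instructions...]}.
    Each phi (d, {p: s}) becomes a move d <- s appended to block p;
    afterwards no phi remains, so phis no longer appear in the
    live intervals used by linear scan.
    """
    for _name, blk in blocks.items():
        for dest, sources in blk["phis"]:
            for pred, src in sources.items():
                blocks[pred]["code"].append(("mov", dest, src))
        blk["phis"] = []
    return blocks
```

For a join block with $x \leftarrow \phi(a, b)$, the pass appends \texttt{mov x, a} and \texttt{mov x, b} to the two predecessors and removes the $\phi$.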


\section{Conclusions}\label{soa:concl}
This Chapter has given an overview of the literature related to the major topics involved in this thesis. In particular, liveness analysis techniques, scheduling algorithms (with emphasis on speculative approaches), and register allocation and binding techniques have been analyzed. Chapter \ref{CH::ALG} will present the methodology proposed by this thesis, focusing on all implementation details. Specifically, starting from the state of the art presented in this Chapter, this thesis proposes a high-level synthesis optimization that combines speculative execution, SSA-based liveness analysis and heuristics to optimize resource allocation and binding.

