\chapter {Implementation Details}\label{details}
\markboth {Chapter \ref{details}. Implementation Details}{}

\begin{flushright}
\sl
The significant problems we face cannot be solved at the same level of thinking we were at when we created them.
\end{flushright}

\begin{flushright}
\sl
Albert Einstein
\end{flushright}
\par\vfill\par


In this Chapter, the proposed methodology will be detailed to better understand how the features have been implemented. As in the previous Chapter, the description is divided into two steps:
\begin{enumerate}
 \item the \textbf{synthesis flow} will be described and detailed;
\item the methodology for \textbf{design space exploration} will be analysed.
\end{enumerate}
At the end, the interaction between these two sub-tasks will be presented. This Chapter is thus organized as follows. In Section~\ref{details:hls}, the high-level synthesis methodology proposed in Section~\ref{mixed:flow} will be further detailed. In Section~\ref{details:nsga}, the use of the NSGA-II genetic algorithm to explore the design space will be analysed, with particular attention to the implementation details of each step of the genetic algorithm and its interactions with the synthesis flow. To better understand how the different components work, they will be applied to an example: the \textit{Kim} benchmark~\cite{Kim-HRA-94}, well known in the high-level synthesis literature. The \textit{Kim} benchmark is composed of 32 operations (among them 16 additions and 9 subtractions) and 3 branching blocks.

\section{High-level synthesis details}\label{details:hls}

This section explains how the design flow presented in Section~\ref{mixed:flow} has been implemented. Each sub-task of the synthesis process will be analysed and further described.

\subsection{From GIMPLE to Internal Representation}\label{details:gimple}

The first step in the synthesis flow is the translation of the behavioral specification from the GIMPLE internal representation of the \textit{GCC} compiler to the \textbf{internal graph representation} that will be used in the presented methodology. 
This sub-project is a part of the PandA framework~\cite{panda}, where this methodology has been integrated (see Section~\ref{results:implementation}).

The supported specifications range from C and C++ to SystemC descriptions, and the levels of abstraction considered go from logic to system level. Since SystemC is an extension of the C++ programming language, it was decided to reuse an existing C++ front-end; after a deep analysis of the existing C/C++ compilers and SystemC analyzers, the front-end capabilities of the GNU GCC compiler were adopted. Starting from version 3.5/4.0, the GCC front-ends parse the source language producing GENERIC trees, which are then turned into GIMPLE~\cite{gimple}.

\begin{figure}[ht]
\centering
\begin{minipage}[l]{0.90\textwidth}
\includegraphics[width=\columnwidth]{./chapters/details/images/gimple.png}
\end{minipage}
\caption{\textit{GCC} internal structure}\label{fig:gimple}
\end{figure}
The first intermediate representation, GENERIC, is a common, language-independent representation used as an interface between the parser and the optimizer. GIMPLE is also a language-independent, tree-based representation of the source specification, but it is used for target- and language-independent optimizations (e.g., inlining, constant propagation, tail call elimination, redundancy elimination, etc.). With respect to GENERIC, GIMPLE is more restrictive: its expressions have no more than three operands, it has no control flow structures (everything is lowered to gotos) and expressions with side effects are only allowed on the right-hand side of assignments. Although GIMPLE has no control flow structures, GCC also builds the control flow graph (CFG) to perform language-independent optimizations. This information is all that is needed to perform a static analysis of the design specification code. Instead of integrating the front-end into the \textit{GCC} compiler, a modular design style has been followed: the GIMPLE data structure is saved (see Fig.~\ref{fig:gimple}) into an ASCII file by exploiting the debugging features of GCC (i.e., the \textit{-fdump-tree-oplower-raw} \textit{GCC} option). Since the dump produced by this option is performed on a per-function basis, several GIMPLE tree nodes are unnecessarily duplicated; to avoid this problem, the tree dump functions of GCC have been slightly modified, removing some duplication and simplifying the format of the ASCII file. Following the grammar of these files, a parser has been built that rebuilds the GIMPLE data structure inside the framework, thus allowing an analysis of the GCC data structures independent of GCC itself. Obviously, the extraction of GIMPLE information from GCC introduces some overhead, but it also allows a modular decoupling between the GCC compiler and the toolset.
The GCC analysis and the GIMPLE parsing correspond to the first two steps performed by the framework to analyze the design specification.

\begin{figure}[ht]
\centering
\begin{minipage}[l]{0.50\textwidth}
\includegraphics[width=\columnwidth]{./chapters/details/images/parsing_flow.png}
\end{minipage}
\caption{Analysis flow performed by the framework}\label{fig:parsing_flow}
\end{figure}
The next step, \textit{Graphs and Structural info extraction} (see Fig.~\ref{fig:parsing_flow}), builds a layer of functions and data structures providing, for C and C++ specifications, the CFG of each function present in the specification, together with helper functions giving the size and type of all data present in the C/C++ specifications. For SystemC specifications, it provides the hierarchy of SC\_MODULEs and SC\_CHANNELs and their connection bindings, identified during the analysis of the module constructors hierarchically instantiated from the \textit{sc\_main}.

The CFG (see Section~\ref{hls:cfg}), the same as the one extracted from GCC, represents the sequencing of the operations as described in the language specification. Each CFG node has an identifier, the list of variables read and written, and a reference to the corresponding GIMPLE node. Function calls are also associated with the control flow graph of the body of the called function, if present in the specification. Given this information, a data dependency analysis is performed to identify the correlations between variable uses and definitions, which are used to create the DFG (see Section~\ref{hls:dfg}) graph representation.
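This def-use analysis can be sketched as follows. The fragment below is a minimal, illustrative Python sketch (the framework itself is written in C++; the node names and the read/write sets are assumptions for the example): it derives data-flow edges from the per-node read and written variable lists of a straight-line sequence of CFG nodes.

```python
def data_flow_edges(order, reads, writes):
    """order: CFG nodes in sequential order; reads/writes: node -> set of
    variables. An edge (d, u) is created when u reads a variable whose most
    recent definition is in d (a simple straight-line def-use analysis)."""
    last_def = {}   # variable -> node holding its most recent definition
    edges = set()
    for n in order:
        for v in reads[n]:
            if v in last_def:
                edges.add((last_def[v], n))   # def-use dependency
        for v in writes[n]:
            last_def[v] = n                   # this node redefines v
    return edges
```

For instance, a sequence \texttt{a=1; b=a+1; c=a+b} yields edges from the definition of \texttt{a} to both uses and from the definition of \texttt{b} to its use, which are exactly the DFG arcs.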

In addition to the control and data flow graphs, other graphs have also been analysed, such as the system dependency graph (SDG, see Section~\ref{hls:sdg}).
All graphs are managed by a data structure that contains the behavioral specification; it will be called \textbf{behavioral\_manager} from now on. Figure~\ref{details:kim_cfg} shows the CFG built after parsing the source code of the \textit{Kim} benchmark, while Figure~\ref{details:kim_sdg} represents the obtained SDG (CDG+DFG data structure) graph.

\begin{figure}
\begin{minipage}[l]{0.5\textwidth}
   \centering
   \includegraphics[scale=0.15]{./chapters/details/images/sdg.png}
   \caption{Kim example: CDG+DFG data structure.}\label{details:kim_sdg}
\end{minipage}
~
\begin{minipage}[l]{0.5\textwidth}
  \centering
   \includegraphics[scale=0.15]{./chapters/details/images/fcfg.png}
   \caption{Kim example: Control flow graph.}\label{details:kim_cfg}
\end{minipage}
\end{figure}

\subsection{Resource Library and Design Constraints}\label{details:resource}

The \textbf{resource library} depends on the technology of the synthesizer tool or the target device. The library is therefore loaded from an external file containing all the information about the components, so that different libraries can easily be provided to support different synthesizer tools or target devices. The format chosen to represent the elements is the eXtensible Markup Language (XML), since it can easily store hierarchical information and several libraries are available that allow this structure to be simply read and written, reconstructing the organization of the information. For each component, the stored information is:
\begin{itemize}
\item Area $A$ of the component, in Configurable Logic Blocks (CLBs);
\item Set of operation types $T$ that the component is able to perform;
\item External interface of the structural representation of the component; it will be used to interface the component with its connected elements.
\end{itemize}
For each operation type $t \in T$ that the component is able to execute, the following additional information is provided:
\begin{itemize}
\item Cycle steps taken to execute an operation $o$ of type $t$ (i.e. $\tau(o) = t$, according to Definition~\ref{hls:operation_type});
\item Initiation time taken to start an operation $o$ of type $t$;
\item Power consumption spent to execute an operation $o$ of type $t$.
\end{itemize}
The information about the area $A$ occupied using a given technology (or target device) has been retrieved by synthesizing the structural representation of the component with a synthesis tool (e.g. Altera~\cite{Altera} or Xilinx~\cite{Xilinx}). Information on power consumption can be obtained either by simulation, with \textit{power} models plugged into the environment, or by profiling the component on a real device. For instance, a module representing an adder can be defined as follows:

\begin{footnotesize}
\begin{verbatim}
     <functional_unit functional_unit_name="ADDER" area="32.0">
      <operation operation_name="plus_expr" execution_time="1"
                 initiation_time="1" power_consumption="1"/>
      <circuit>
        <component_c id="ADDER" > 
            <structural_type_descriptor id_type="ADDER"/> 
            <port_o id="data_in_1" dir="IN"> 
               <structural_type_descriptor type="INT" size="32"/> 
            </port_o>
            <port_o id="data_in_2" dir="IN"> 
               <structural_type_descriptor type="INT" size="32"/> 
            </port_o>
            <port_o id="data_out" dir="OUT"> 
               <structural_type_descriptor type="INT" size="32"/> 
            </port_o>
          <NSC_functionality LIBRARY="plus_expr"/>
        </component_c>
      </circuit>
    </functional_unit>
\end{verbatim}
\end{footnotesize}

This component represents an \textit{adder}. The area of the component is \textit{32 CLBs} and the only operation type it can implement is \textit{plus\_expr}, which represents an addition, following the naming convention of the GIMPLE internal representation. The section of the component description called \textit{circuit} represents the interface of the component, which has two 32-bit inputs and one 32-bit output. This is the only information that the framework needs to interface with the component. The information retrieved from this file is stored in a data structure used to manage and represent the resources that can be used; it will therefore be called \textbf{resource\_manager}.
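As a sketch of how such a description could be loaded into a \textbf{resource\_manager}-like structure, the following illustrative Python fragment (the actual framework is written in C++; the tag and attribute names follow the example shown above) parses the unit's attributes and operations with the standard \texttt{xml.etree} module:

```python
import xml.etree.ElementTree as ET

def load_functional_unit(xml_text):
    """Parse a <functional_unit> description into a plain dictionary:
    component name, area in CLBs, and per-operation timing/power data."""
    fu = ET.fromstring(xml_text)
    info = {'name': fu.get('functional_unit_name'),
            'area': float(fu.get('area')),
            'operations': {}}
    for op in fu.findall('operation'):   # one entry per supported op type
        info['operations'][op.get('operation_name')] = {
            'execution_time': int(op.get('execution_time')),
            'initiation_time': int(op.get('initiation_time')),
            'power_consumption': float(op.get('power_consumption'))}
    return info
```

Applied to the adder above, this yields name \texttt{ADDER}, area $32.0$ and a single \texttt{plus\_expr} entry; the \texttt{circuit} section, which only describes the structural interface, is left to a separate parsing pass.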

The \textbf{design constraints} are represented with an XML file as well. This file lists all the constraints imposed on the design. For instance, the maximum number of instances of a component is defined as a pair:

\begin{equation}
(id, num)
\end{equation}

where $id$ is the name associated with the component (e.g. \textquotedblleft ADDER\textquotedblright~in the previous component example) and $num$ is the maximum number of instances of the component $id$ that can be allocated by the algorithm.
In the same way, it is easy to express the maximum number of registers or multiplexers that can be used in the final design.
The following example shows how to specify a set of constraints on the maximum number of adders and arithmetic logic units:

\begin{footnotesize}
\begin{verbatim}
<constraints>
   <HLS_constraints clock_period="1">
      <tech_constraints fu_name="ADDER" n="1"/>
      <tech_constraints fu_name="ALU"   n="2"/>
   </HLS_constraints>
</constraints>
\end{verbatim}
\end{footnotesize}

The functional unit named \textquotedblleft ADDER\textquotedblright~can be allocated only once (there will not be more than one adder in the final design) and the functional unit named \textquotedblleft ALU\textquotedblright~cannot be allocated more than twice. If no constraint is specified for a resource type, it is considered \textit{unconstrained}, i.e. infinite resources are available for that component.
A clock period constraint has also been specified, expressed in \textit{nanoseconds}.

The resulting information is then stored in a data structure named \textbf{hls\_constraint}.
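The constraint file above can be read in the same way as the resource library. The fragment below is an illustrative Python sketch of an \textbf{hls\_constraint}-like loader (the real framework is C++; the tag names follow the example shown):

```python
import xml.etree.ElementTree as ET

def load_constraints(xml_text):
    """Parse a <constraints> file into clock period and per-unit
    maximum instance counts; units not listed remain unconstrained."""
    root = ET.fromstring(xml_text)
    hls = root.find('HLS_constraints')
    constraints = {'clock_period': float(hls.get('clock_period')),
                   'max_instances': {}}
    for tc in hls.findall('tech_constraints'):
        constraints['max_instances'][tc.get('fu_name')] = int(tc.get('n'))
    return constraints
```

Run on the example above, this produces a clock period of $1$ and the limits $\{ADDER \mapsto 1, ALU \mapsto 2\}$; a lookup miss in \texttt{max\_instances} then naturally models the \textit{unconstrained} case.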

\subsection{Allocation and resource binding constraints}\label{details:binding}

At this point the behavioral specification has been parsed and the \textbf{behavioral\_manager} data structure has been filled, and the XML files containing the information about resources and constraints have been loaded, filling the \textbf{resource\_manager} and \textbf{hls\_constraint} data structures as well: the synthesis process has all the information needed to start. Before starting, additional information about the \textit{resource binding constraints} can be added to the \textbf{hls\_constraint} data structure.
The information about a partial binding, as it has already been described in Section~\ref{mixed:partial_binding}, is defined as:
\begin{equation}
 \beta(v) = (r,t)
\end{equation}
where $v$ is one of the operations in the behavioral specification, $r$ is a resource component and $t$ is the instance to which the operation will have to be bound. Note that this number has to be less than the number stored in \textbf{hls\_constraint}, which represents the maximum number of components that can be allocated.
In the \textit{Kim} example, for instance, the designer could decide that operations \textquotedblleft \textit{+2}\textquotedblright~and \textquotedblleft \textit{+3}\textquotedblright~will be bound to the same instance of the functional unit named \textit{ADDER}, identified by number \textit{0}. So the two additional constraints are defined as:
\begin{eqnarray}
 \beta(<+2>) = (<ADDER>,0) \nonumber \\
 \beta(<+3>) = (<ADDER>,0) \nonumber
\end{eqnarray}
This information is then added to the \textbf{hls\_constraint} data structure as additional constraints to be met by the final design. Note that the following steps will be executed as usual on the operations for which no binding constraints have been specified: the algorithms will have the freedom to assign those operations to any admissible and free functional unit. On the contrary, where constraints have been imposed, the algorithms will have to satisfy them, since they can be considered a request formulated by the designer for the final design to be feasible.
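As an illustration, the partial binding $\beta$ and its consistency with the allocation constraints can be sketched as follows (illustrative Python; the dictionary layout is an assumption for the example, not the framework's actual data structure):

```python
def check_partial_binding(binding, max_instances):
    """binding: op -> (resource_name, instance). A fixed binding is valid
    only if its instance index is below the maximum allocation for that
    resource (instances are numbered from 0); unconstrained resources
    accept any index."""
    for op, (res, inst) in binding.items():
        if inst >= max_instances.get(res, float('inf')):
            raise ValueError(f'{op}: instance {inst} exceeds limit for {res}')
    return True

# Kim example: +2 and +3 share instance 0 of the unit named ADDER
binding = {'+2': ('ADDER', 0), '+3': ('ADDER', 0)}
check_partial_binding(binding, {'ADDER': 1})
```

With at most one \texttt{ADDER} allowed, instance $0$ is the only legal index, so the two constraints above are accepted, while a binding to instance $1$ would be rejected.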

\subsection{Scheduling with resource binding constraints}\label{details:scheduling}

Once all constraints have been stored in the proper data structure, the synthesis can start to perform the \textbf{scheduling} sub-task. The scheduling has two objectives:
\begin{enumerate}
\item assign each operation to the control step where its execution will start, respecting data and control dependences;
\item assign each operation to a functional unit, such that there is no conflict in the use of resources and the number of components used does not exceed the maximum allowed (specified by the constraints).
\end{enumerate}
This is a \textit{resource-constrained} approach. The \acf{ILP} formulation considers all the information coming from the behavioral specification and the design constraints as a set of equalities and inequalities to be solved to obtain a solution. If the problem is quite small, the ILP formulation can provide an optimal solution, despite a rather large computation time.

Branch and cut is a refinement of the standard linear programming based
branch and bound approach~\cite{Nemhauser88}. It starts by solving the
continuous relaxation of an ILP formulation, thus obtaining a
fractional solution. At this point, the standard branch and bound
algorithm would split the current problem into subproblems by fixing
some fractional variable to an integer value. The branch and cut
approach, on the contrary, first looks for linear inequalities which
are violated by the current fractional optimal solution but are
respected by all feasible integer solutions of the problem. These
inequalities are named cuts or valid inequalities. They are added to
the ILP formulation and the continuous relaxation is solved once again,
achieving a tighter bound and a different (hopefully less
fractional) solution. The process can be repeated several times. It can
even be proved that after a finite number of iterations, the solution
will be integer and it will be the optimal solution of the original
ILP. The disadvantage of such a method however is that the number of
iterations required is exponential and the formulation size grows
correspondingly, so that solving it becomes too expensive. Therefore,
at some stage the generation of valid inequalities is interrupted and
standard branching is performed.

There are several standard techniques to generate valid inequalities,
both for general ILPs and for specific families of problems. The node
packing approach of Gebotys is one of the latter. The open source
package \textit{COIN-OR}~\cite{COIN-OR-2001} provides a set of tools
among which an ILP solver with the capability of generating the most
important families of valid inequalities.

However, on large examples, this approach leads to unacceptable computation time.

The heuristic \textit{list-based} scheduling algorithm is a good solution to the problem: it produces solutions very close to the optimal ones in a rather short time. On the set of benchmarks used for validation, no significant differences have been found between the two algorithms. The \textit{list-based} algorithm implemented in this methodology will now be presented.

The algorithm maintains a priority list $PLIST_{tk}$ for each kind of operation $t_{tk}$, composed of the \textquotedblleft available\textquotedblright~nodes. A node is considered available if it has no predecessors or if all of them have already been scheduled in a previous control step. At each iteration, the operations at the beginning of each list are assigned to the current control step until the $N_{tk}$ available resources able to execute operations of type $t_{tk}$ are exhausted; the list is ordered according to a priority function, used to resolve conflicts in the use of resources. In fact, if there is a conflict in the use of a resource, the operation with higher priority is assigned to the resource, while the operations with lower priority are kept in the list to be tested in the following control steps. The assignment of an operation to a control step can make other operations become \textquotedblleft available\textquotedblright, and they will be added to the lists according to their priority values. The quality of the result highly depends on the priority function that is used. A priority function often used is the \textit{mobility range}. The \textit{mobility} function is defined as follows:

\begin{definition}
 \textbf{Mobility}: let $E_i$ be the first control step where the operation $o_i$ can be scheduled (computed with the ASAP algorithm) and let $L_i$ be the last control step where the operation $o_i$ can be scheduled (computed with the ALAP algorithm); the \textnormal{mobility} $M_i$ of the operation $o_i$ is computed as:

\begin{equation}
  M_i = L_i - E_i
\end{equation}

\end{definition}
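This definition can be sketched by computing $E_i$ and $L_i$ with unit-latency ASAP and ALAP traversals of the operation DAG. The following illustrative Python fragment (an assumption-laden sketch, not the framework's C++ implementation; \texttt{latest} is the last control step, i.e. the ASAP latency of the graph's sink) makes the computation explicit:

```python
def mobility(ops, preds, latest):
    """preds: op -> set of predecessor ops (a DAG, unit latencies);
    latest: last available control step. Returns op -> L_i - E_i."""
    succs = {o: set() for o in ops}
    for o in ops:
        for p in preds[o]:
            succs[p].add(o)
    E, L = {}, {}
    def asap(o):   # earliest step: one past the latest predecessor
        if o not in E:
            E[o] = max((asap(p) + 1 for p in preds[o]), default=0)
        return E[o]
    def alap(o):   # latest step: one before the earliest successor
        if o not in L:
            L[o] = min((alap(s) - 1 for s in succs[o]), default=latest)
        return L[o]
    return {o: alap(o) - asap(o) for o in ops}
```

For a two-operation chain plus one independent operation with $latest = 1$, the chained operations get mobility $0$ (they are on the critical path) while the independent one gets mobility $1$.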

The operations on critical paths have $M_i = 0$, since their ASAP and ALAP control steps coincide. The operations with lower mobility are preferred for assignment over those with higher mobility, since keeping them in the list would very likely extend the overall latency.
To assign an operation to the list belonging to resource $r_j$, the algorithm has to check whether the resource $r_j$ is able to execute the operation type $t_{tk}$, i.e. $t_{tk} \in \lambda^{-1}(r)$, according to Definition~\ref{def:libray}, that is $t_{tk} \in T$, where $T$ is the operation type set described for the component when the library has been loaded. If an operation has been bound to a specific functional unit, it can be inserted only in the list of that unit, and so it can be scheduled only on it, when that functional unit becomes free. In the worst case, only one operation is assigned at each iteration, so the complexity of the algorithm is $O(N)$, where $N$ is the number of operations to be scheduled.
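The loop just described can be sketched in Python as follows. This is a simplified illustration, not the framework's C++ implementation: operation chaining is omitted, every operation is assumed to take a single cycle, and all names, mobility values and resource counts are assumptions for the example.

```python
def list_schedule(ops, preds, optype, mobility, n_units, binding=None):
    """ops: operation ids; preds: op -> set of predecessors; optype: op ->
    resource type; mobility: op -> priority (lower scheduled first);
    n_units: type -> available instances; binding: op -> fixed instance."""
    binding = binding or {}
    theta, beta = {}, {}        # op -> control step, op -> (type, instance)
    done = set()
    cs = 0
    while len(done) < len(ops):
        # "available": all predecessors scheduled in an earlier step
        ready = [o for o in ops if o not in done
                 and all(theta.get(p, cs) < cs for p in preds[o])]
        busy = {t: set() for t in n_units}          # instances used this step
        for o in sorted(ready, key=lambda o: mobility[o]):
            t = optype[o]
            if o in binding:                        # partial binding constraint
                inst = binding[o]
                if inst in busy[t]:
                    continue                        # its only unit is taken: wait
            else:
                free = [i for i in range(n_units[t]) if i not in busy[t]]
                if not free:
                    continue                        # kept in list for a later step
                inst = free[0]
            busy[t].add(inst)
            theta[o], beta[o] = cs, (t, inst)
            done.add(o)
        cs += 1
    return theta, beta

# Tiny Kim-like fragment: two source ops, two dependent subtractions
ops = ['!=1', '+11', '-1', '-8']
preds = {'!=1': set(), '+11': set(), '-1': {'!=1'}, '-8': {'+11'}}
optype = {'!=1': 'CMP', '+11': 'ADD', '-1': 'SUB', '-8': 'SUB'}
mob = {'!=1': 0, '+11': 5, '-1': 2, '-8': 5}
theta, beta = list_schedule(ops, preds, optype, mob,
                            {'CMP': 1, 'ADD': 2, 'SUB': 1})
```

With a single subtractor, $<-1>$ (mobility 2) wins the conflict at step 1 and $<-8>$ (mobility 5) is deferred to step 2, mirroring the behaviour described above.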

\begin{figure}[t!]
\centering
\includegraphics[width=0.7\columnwidth]{./chapters/details/images/ASAP.jpg}
\caption{Kim example: ASAP scheduling}\label{fig:ASAP}
\end{figure}

\begin{figure}[t!]
\centering
\includegraphics[width=0.7\columnwidth]{./chapters/details/images/ALAP.jpg}
\caption{Kim example: ALAP scheduling}\label{fig:ALAP}
\end{figure}

\subsubsection{An illustrative example: the \textit{Kim} benchmark scheduling}

If the scheduling algorithm just described is applied to the \textit{Kim} benchmark, it executes as follows. The ASAP and ALAP schedulings are shown in Fig.~\ref{fig:ASAP} and Fig.~\ref{fig:ALAP}, respectively. Consider a resource library composed of two adders, a subtractor and a comparator. Consider also the following resource binding constraints, as presented in Section~\ref{details:binding}, for operations \textit{+2} and \textit{+3}:

\begin{eqnarray}
 \beta(<+2>) = (<ADDER>,0) \nonumber \\
 \beta(<+3>) = (<ADDER>,0) \nonumber
\end{eqnarray}

At the beginning, only operations \textit{!=1} and \textit{+11} are available:

\begin{small}
\begin{quote}
   $PList^{+}(0) = \{<+11>(5)\}$ \\
   $PList^{-}(0) = \{\ \}$    \\
   $PList^{!=}(0) = \{<!=1>(0)\}$ \\
\end{quote}
\end{small}

The operation $<+11>$ has its mobility value set to $5$, since in the ALAP it is scheduled at control step $L = 5$ while in the ASAP it is scheduled at control step $E = 0$. The operation $<!=1>$ is on the critical path, so its mobility is set to $0$ (the ASAP and ALAP control steps are the same). The two operations can be assigned to functional units able to compute them. The operation $<!=1>$ is assigned to the only functional unit that can compute comparisons. There are two adders (both free), so the operation $<+11>$ can be assigned to the current control step and bound to either of the two functional units. Therefore, a feasible binding can be:

\begin{eqnarray}
 \beta(<+11>) = (<ADDER>,0) \nonumber \\
 \beta(<!=1>) = (<CMP>,0) \nonumber
\end{eqnarray}

The scheduling function is updated as follows:

\begin{eqnarray}
 \theta(<+11>) = 0 \nonumber \\
 \theta(<!=1>) = 0 \nonumber
\end{eqnarray}

Operation $<if1>$, which is the test (performed by the controller) of the condition computed by operation $<!=1>$, is chained to it, so it can be scheduled in this control step. At this point, new operations become \textquotedblleft available\textquotedblright~as well and the lists are updated. In the following control step the lists are:

\begin{small}
\begin{quote}
   $PList^{+}(1) = \{<+16>(5)\}$\\
   $\longrightarrow \beta(<+16>) = (<ADDER>,0)$  \\
   $PList^{-}(1) = \{<-1>(2),<-8>(5)\}$ \\
   $\longrightarrow \beta(<-1>) = (<SUB>,0)$    \\
   $PList^{!=}(1) = \{<!=2>(0)\}$ \\
   $\longrightarrow \beta(<!=2>) = (<CMP>,0)$
\end{quote}
\end{small}

The algorithm goes on as just described. Note that, in this case, only one subtractor is available while there are two operations that could be scheduled; the operation $<-1>$ is preferred since it has a lower mobility, which means that it is more critical for the overall latency than the other one, which has a higher mobility.

\begin{small}
\begin{quote}
   $PList^{+}(2) = \{<+1>(1),<+12>(3)\}$ \\
   $\longrightarrow \beta(<+1>) = (<ADDER>,0)$ \\
   $\longrightarrow \beta(<+12>) = (<ADDER>,1)$ \\
   $PList^{-}(2) = \{<-2>(0),<-6>(2),<-8>(4)\}$ \\
   $\longrightarrow \beta(<-2>) = (<SUB>,0)$  \\
\end{quote}
\end{small}
\begin{small}
\begin{quote}
   $PList^{+}(3) = \{<+2>(0),<+4>(2),<+13>(3)\}$ \\
   $\longrightarrow \beta(<+2>) = (<ADDER>,0)$ \\
   $\longrightarrow \beta(<+4>) = (<ADDER>,1)$ \\
   $PList^{-}(3) = \{<-4>(1),<-6>(1),<-5>(1),<-8>(3)\}$ \\
   $\longrightarrow \beta(<-4>) = (<SUB>,0)$  \\
   $\longrightarrow \beta(<-6>) = (<SUB>,0)$  \\
\end{quote}
\end{small}

The operation $<+2>$ could be assigned to either of the two functional units named \textit{ADDER}, but the binding constraint forces it to be assigned to unit \textit{ADDER,0}. Operations $<-4>$ and $<-6>$ can be assigned to the same functional unit since they are mutually exclusive (according to Definition~\ref{hls:mutual_exclusion}) and will never be executed together. This is the only difference imposed by the binding constraints: if the unit had been busy, the operation would simply have been treated as if no free unit could implement it (in fact, due to the constraint, that is the \textit{only} unit that can implement it) and it would have been kept in the list to be tested later. The algorithm goes on and, at the end, each operation is assigned to a control step and bound to a functional unit. A feasible scheduling of the \textit{Kim} example respecting the constraints that have been specified is shown in Fig.~\ref{details:fig:kim_sched}.

\begin{figure}[t!]
\centering
\includegraphics[width=0.7\columnwidth]{./chapters/details/images/sched.jpg}
\caption{Kim example: feasible \textit{resource-constrained} scheduling}\label{details:fig:kim_sched}
\end{figure}

\subsection{Creation of the Finite State Machine Graph}\label{details:fsm}

After the scheduling has been completed, the \textbf{finite state machine graph} is created. The FSM model is used to represent the evolution of the specification based on the control flow, when the design will be implemented and executed. It is constructed starting from the scheduling results and from the original control dependences among operations. In fact, after the scheduling has been performed, each operation has been assigned to a control step and all the operations executed in the same control step are known. The problem is that some of these operations are mutually exclusive (see Definition~\ref{hls:mutual_exclusion}) and it could be necessary to know, at each instant, which operations are really executed, by evaluating the control conditions computed in the previous control steps. The construction of the FSM graph therefore takes into account the following objectives:

\begin{itemize}
 \item group in the same state the operations executed together in a single control step, respecting the mutual exclusion property;
\item reconstruct the control flow based on the evaluation of the control conditions computed by the datapath.
\end{itemize}

This data structure is built with a \textit{constructive} approach, similar to the one adopted in the \textit{list-based} algorithm. To better explain how the algorithm works, it is useful to start from an example; the algorithm will be described by applying it to the \textit{Kim} benchmark specification. At the beginning, when the control step $CS$ is $0$, all the \textit{available} operations are inserted into a list $Q$ that represents the operations to be tested. Note that in this situation no control conditions have been tested yet. If an operation $q_i \in Q$ has been scheduled in the control step currently considered, the operation will be in the current state of the finite state machine graph, so it is added to a set $S_0$ that contains all the operations executed in the first control step. After an operation has been added to the set $S$, new operations can become free (as happens in the \textit{list-based} algorithm). These operations are immediately tested to check whether they have to be added to the current state, since they could be chained to previous ones. For instance, in the \textit{Kim} benchmark, scheduled as shown in Fig.~\ref{details:fig:kim_sched}, the operations $<!=1>$ and $<+11>$ are executed in the first control step, so they are added to the control state $S$. Operations $<if1>$, $<+16>$ and $<-8>$ become \textquotedblleft available\textquotedblright~and, among them, the operation $<if1>$ is executed in the current control step, since it is chained to operation $<!=1>$; so it is added to the state $S$ too. Operations $<!=2>$ and $<-1>$ are added to the list $Q$, which is now composed as follows:

\begin{equation}
  Q = \{ <+16>, <-8>, <!=2>, <-1>\}\label{details:fsm_state0}
\end{equation}

 Now there are no operations in the list that are executed in the current control step $0$. The set $S$ represents the new state to be added to the finite state machine graph. No conditions have been tested yet, so none needs to be stored in the graph. On the other hand, the state contains a control condition evaluation, which causes a branch in the original control flow graph (see Fig.~\ref{details:kim_cfg}) and will therefore cause a bifurcation in the finite state machine graph as well. Two flows are created at this point, the first one related to the \textit{true} value of the condition and the second one related to the \textit{false} value. The list $Q$ (see Eq.~\ref{details:fsm_state0}) is inherited by both flows. At this moment, a clean-up of the lists is performed. In fact, in the list belonging to the \textit{true} path, the operation $<-1>$ will never be executed, since it is executed only when the \textit{false} condition has been evaluated, so it can be erased from this path. Likewise, in the other path, referring to the \textit{false} condition, the operation $<!=2>$ will never be executed. The two lists, after the clean-up, are composed as follows:

\begin{eqnarray}
\begin{array}{lcl}
 Q^{<!=1;T>} & = & \{ <+16>, <-8>, <!=2>\} \nonumber \\
 Q^{<!=1;F>} & = & \{ <+16>, <-8>, <-1>\} \nonumber \\
\end{array}
\end{eqnarray}

Operations $<+16>$ and $<-8>$ are kept in both flows since they belong to a computation that is parallel and independent of the test of the condition: they will be executed regardless of the control condition values. The algorithm goes on in the same way just described. Since two flows are \textit{active}, two different states will be created for the control step $CS=1$:

\begin{eqnarray}
\begin{array}{lcl}
 S^{<!=1;T>} & = &  \{ <+16>, <!=2>, <if2>\} \nonumber \\
 S^{<!=1;F>} & = & \{ <+16>, <-1>\} \nonumber \\
\end{array}
\end{eqnarray}

Note that in the state $S^{<!=1;T>}$ a new conditional value is tested, so two new flows are created from it. The algorithm is repeated until all flows become empty. The resulting finite state machine graph is shown in Fig.~\ref{details:fsm_kim}.
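The flow splitting and list clean-up just described can be sketched as follows. This is an illustrative Python fragment, not the framework's implementation; the \texttt{guard} map, which records on which condition values an operation may execute, is an assumed helper derived from the control dependences.

```python
def split_flow(Q, cond, guard):
    """Duplicate the pending-operation list Q for the true/false paths of
    condition `cond`, dropping the operations that can only execute on the
    other path. `guard` maps an operation to the (condition, value) pairs
    it depends on; unguarded operations stay on both paths."""
    def keep(op, value):
        # an op survives a path unless it requires the opposite value
        return (cond, not value) not in guard.get(op, set())
    q_true = [op for op in Q if keep(op, True)]
    q_false = [op for op in Q if keep(op, False)]
    return q_true, q_false

# Kim example after the first state: Q = {+16, -8, !=2, -1}
guard = {'!=2': {('!=1', True)},    # executed only on the true path
         '-1':  {('!=1', False)}}   # executed only on the false path
qt, qf = split_flow(['+16', '-8', '!=2', '-1'], '!=1', guard)
```

The two resulting lists reproduce the clean-up of the example: $<-1>$ disappears from the \textit{true} path and $<!=2>$ from the \textit{false} one, while $<+16>$ and $<-8>$ survive on both.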

\begin{figure}[t!]
\centering
\includegraphics[width=0.5\columnwidth]{./chapters/details/images/FSM.jpg}
\caption{Kim example: resulting finite state machine graph}\label{details:fsm_kim}
\end{figure}

Some properties can be derived from the analysis of this short example:

\begin{itemize}
 \item let $N$ be the number of conditional evaluations executed in a state $S$: the number of resulting flows outgoing from $S$ will be $2^N$. For instance, if there are two conditional evaluations, 4 flows will leave state $S$, one for each combination of boolean values (TT, TF, FT, FF);
\item let $e$ be the edge incoming into a state $S$: it is labelled with the information $Info$ about the conditional evaluations performed from the $ENTRY$ node to the state $S$. So, the state $S$ can be considered to be executed under the conditions $Info$;
\item all the operations belonging to a control state are always executed together, since they are related to the same conditional values $Info$ and the same control step;
\item let $d_{S_i}$ be the distance between the $ENTRY$ node and the $S_i$ node: the $S_i$ node will be executed in the control step $d_{S_i} - 1$. This is because the first state is executed at control step $CS=0$;
\item the length of the longest path from the $ENTRY$ node to the $EXIT$ node is equal to the worst-case execution time (the \textit{latency} of the behavioral specification).
\end{itemize}
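As a minimal illustration of the last two properties, the worst-case latency can be computed as the longest $ENTRY$-to-$EXIT$ path in the (acyclic) finite state machine graph. The following sketch assumes the graph is given as a plain edge list; all names are hypothetical:

```python
from collections import defaultdict

def longest_path_length(edges, entry, exit_node):
    """Length (in edges) of the longest ENTRY->EXIT path in an acyclic
    FSM graph, i.e. the worst-case latency in control steps."""
    succ = defaultdict(list)
    indeg = defaultdict(int)
    nodes = {entry, exit_node}
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1
        nodes.update((u, v))
    # topological order (Kahn's algorithm)
    order, queue = [], [n for n in nodes if indeg[n] == 0]
    while queue:
        n = queue.pop()
        order.append(n)
        for s in succ[n]:
            indeg[s] -= 1
            if indeg[s] == 0:
                queue.append(s)
    # relax distances in topological order
    dist = {n: float("-inf") for n in nodes}
    dist[entry] = 0
    for n in order:
        for s in succ[n]:
            dist[s] = max(dist[s], dist[n] + 1)
    return dist[exit_node]
```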

The structure of the finite state machine is similar to a \acf{CFG}, so it can be used wherever algorithms need this kind of information (e.g. the relations among operations before and after a control step boundary). It can therefore be used to build the controller-FSM that issues the operations according to the control condition evaluations.

\subsection{Register Allocation on the Finite State Machine Graph}\label{details:register}

The \textbf{register allocation} task assigns storage elements to the values that are alive across a cycle step boundary. As explained in the Section~\ref{mixed:register}, the register allocation phase is performed on the finite state machine graph, since it represents a flow similar to a Control Flow Graph. The dataflow analysis is based on the equations presented by Appel~\cite{Appel} for compiler analyses. The equations have been extended in the following way:

\begin{eqnarray}
   \begin{array}{rcl}
    in[n]  &  =  &  use[n] \cup (out[n] - def[n]) \label{eqn:incoming}\\ 
    out[n] &  =  &  \bigcup_{s \in succ[n]} in[s] \label{eqn:outcoming}
   \end{array}
\end{eqnarray}

where $n$ is a vertex of the finite state machine graph. In Equation~\ref{eqn:incoming}:

\begin{itemize}
\item $use[n]$ are the variables that are used by the operations executed in the state $n$;
\item $out[n]$ are the variables live-out at the current state;
\item $def[n]$ are the variables defined by the operations executed in the current state.
\end{itemize}

In Equation~\ref{eqn:outcoming}:

\begin{itemize}
\item $in[s]$ are the variables live-in at the state $s$, where $s$ is a successor of the state $n$ in the finite state machine graph.
\end{itemize}

A solution to these equations can be found by iteration: $in[n]$ and $out[n]$ are initialized to the empty set, then the equations are repeatedly treated as assignments until a fixed point is reached.
The convergence of this algorithm can be significantly sped up by ordering the nodes properly; this can easily be done with a postorder ordering.
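A fixed-point solver for the two equations above can be sketched in a few lines (state and variable names are hypothetical):

```python
def liveness(succ, use, defs):
    """Iterate  in[n] = use[n] | (out[n] - def[n])  and
    out[n] = union of in[s] over successors s, until a fixed point."""
    live_in = {n: set() for n in succ}
    live_out = {n: set() for n in succ}
    changed = True
    while changed:
        changed = False
        # a postorder visit would converge faster; any order is correct
        for n in succ:
            new_out = set().union(*(live_in[s] for s in succ[n]))
            new_in = use[n] | (new_out - defs[n])
            if new_in != live_in[n] or new_out != live_out[n]:
                live_in[n], live_out[n] = new_in, new_out
                changed = True
    return live_in, live_out
```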

The second step, needed to complete the dataflow analysis, is a forward analysis where each variable definition is propagated up to the end of its live interval, that is, until the variable is \textit{live-in} but not \textit{live-out} at an operation vertex, or until there is a new definition of the same variable in a vertex. This means the previous value is not used anymore: that is the end of its live interval. Note that if this step is not computed, the dataflow equation results are still correct. However, if it is computed, more information is available for the synthesis and better results can be obtained (the conflict graph can be greatly reduced by pruning unnecessary conflict edges).

\subsubsection{Conflict graph creation}

After the dataflow analysis has been computed, the \textit{conflict graph} can be created. During graph creation, each edge of the finite state machine graph is taken into account. The source vertex and the target one are scheduled in different control steps by definition. This means that a register is needed for each variable living out of the source vertex, to keep the value alive until the target vertex uses it. So, in such a situation, a conflict edge can be set between each pair of such variables: they cannot use the same register module.
This algorithm is able to detect \textit{alias} variables. In fact, theoretically, each vertex of the behavioral specification can execute only one operation and so store only one result. Multiple definitions are allowed only by statements like:

\begin{equation}
 a = b = c + d \nonumber
\end{equation}

In this way, variable \textit{a} and variable \textit{b} are different, but they contain the same value, so they can share the same register. In the proposed solution, they can be detected because definitions have been forwarded. So, during the analysis of an edge, if two variables are both alive and their definition vertex is the same, they can be considered \textit{aliases} and no conflict edge is added between them.
Also, there is no conflict when the defining vertices are in mutual exclusion: at run-time, if a variable is defined by one vertex, the other one will not be defined, so registers could be shared between them. However, this situation never happens in the finite state machine graph defined here, since, by definition, all operations in the same state are executed together. If a conditional branch occurs, the finite state machine graph bifurcates above it and two different states are created (see Section~\ref{details:fsm} for details).
Constant values and input variables do not need any register.

Any pair of variables that are alive together and do not satisfy any of the above properties are in conflict, so an edge between them must be added to the conflict graph.
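The construction just described can be sketched as follows; `def_site` maps each variable to its defining vertex (so aliases share the same entry) and constants/inputs are skipped. All names are hypothetical:

```python
def build_conflict_graph(fsm_edges, live_out, def_site, constants):
    """For each FSM edge (u, v), every pair of distinct variables live
    across the edge conflicts, unless they are aliases (same defining
    vertex) or constants/inputs (which need no register)."""
    conflicts = set()
    for u, _v in fsm_edges:
        alive = [x for x in live_out[u] if x not in constants]
        for i, a in enumerate(alive):
            for b in alive[i + 1:]:
                if def_site.get(a) == def_site.get(b):
                    continue  # alias: same value, can share a register
                conflicts.add(frozenset((a, b)))
    return conflicts
```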

Now the problem is to assign each node of the conflict graph to a storage module such that no neighbor of the node has the same storage module. This is exactly the formulation of the graph coloring problem, with storage modules being the colors. It is well known that graph coloring is NP-complete~\cite{np_complete}, so an exact solution is not feasible in general. An approximate solution, with better runtime behavior, has to be implemented.

\subsubsection{Graph coloring problem}

The graph (or vertex) coloring problem, which involves assigning colors to the vertices of a graph such that adjacent vertices receive distinct colors, arises in a number of scientific and engineering applications such as scheduling, register allocation, optimization and parallel numerical computation.
Mathematically, a proper vertex coloring of an undirected graph $G=(V,E)$ is a map $c: V \rightarrow S$ such that $c(u) \neq c(v)$ whenever there exists an edge $(u,v)$ in $G$. The elements of the set $S$ are called the available colors. The problem is often to determine the minimum cardinality (the number of colors) of $S$ for a given graph $G$, or to ask whether graph $G$ can be colored with a certain number of colors. The proposed methodology tries to find the minimum cardinality of the set $S$. The cardinality $N_{reg} = \Vert S\Vert$ represents the number of registers needed to correctly store the values under the given constraints. Obviously, if this number is greater than the maximum number of registers allowed by the constraints, the constraints did not allow the synthesizer to build a design that could be implemented with the given number of registers (more are needed). So the synthesis fails, since, if the number of registers is not sufficient to implement the specification, memory access should be provided, a feature that is not supported up to now. In fact, memory access requires the scheduling to deal with operations of unknown latency, and the proposed formulation does not support it.

A widely-used greedy approach~\cite{graph_coloring} is, starting from an ordered vertex enumeration $v_1, \dots, v_n$ of $G$, to assign $v_i$ the smallest possible color, for $i$ from 1 to $n$. This is the approach that has been followed. A simple example is shown in Fig.~\ref{details:fig:conf}, which shows how different solutions are feasible for the same vertex coloring problem.
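A sketch of this greedy scheme, assuming the conflict graph is given as a set of `frozenset` edges:

```python
def greedy_coloring(vertices, conflicts):
    """Assign each vertex the smallest color not used by an already
    colored neighbor; colors correspond to registers."""
    color = {}
    for v in vertices:  # the order v_1 .. v_n affects solution quality
        taken = {color[u] for u in vertices
                 if frozenset((u, v)) in conflicts and u in color}
        c = 0
        while c in taken:
            c += 1
        color[v] = c
    return color
```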

\begin{figure}[t!]
\centering
\includegraphics[width=0.5\columnwidth]{./chapters/details/images/conf.jpg}
\caption{An example of different solutions to the same coloring problem}\label{details:fig:conf}
\end{figure}

Since the vertex coloring heuristic can take a long time and, in some situations, a simpler heuristic can also lead to good overall results, other algorithms have been provided too. For instance, a left edge heuristic has been implemented. In this algorithm, the lifetimes of all values are represented by intervals. The register allocation problem can then be viewed as the problem of assigning the intervals to registers along a horizontal line, such that two intervals in the same register do not overlap. These intervals can be seen as wires which have to be assigned to tracks (the registers), which makes this problem analogous to the channel routing problem without vertical constraints. A left edge algorithm can be used to solve it.
% \begin{quote}
%   Left edge algorithm $G_s (V_s , W)$ 
%   $\\.sort\_values(v\in V_s,\omega (v)); \ \ \ M_s=\emptyset; \\
%   .\mathbf{foreach} \ v \in \ V_s \ \mathbf{do} \\
%   .\ \ \ \mathbf{foreach}\ r \in \ M_s\ \mathbf{do} \\
%   .\ \ \ \ \ \ \ \mathbf{if} \ \omega (v) \ > P(r) \ \mathbf{then} \\
%   .\ \ \ \ \ \ \ \ \ \ \ \psi(v)=r; \ \mathbf{then} \\
%   .\ \ \ \ \ \ \ \ \ \ \ P(r)=P(v);\\
%   .\ \ \ \ \ \ \ \mathbf{endif} \\
%   .\ \ \ \mathbf{endfor} \\
%   .\ \ \ \mathbf{if}\ \psi(v)=0 \ \mathbf{then} \\
%   .\ \ \ \ \ \ \ M_s = M_s \cup \{ r \} ; \ \ \textrm{ // add new register } \\
%   .\ \ \ \ \ \ \ \psi(v)=r;\\
%   .\ \ \ \ \ \ \ P(r)=P(v);\\
%   .\ \ \ \mathbf{endif} \\
%   .\mathbf{endfor}$
% \end{quote}
Since the list of registers is pre-sorted, checking whether a new value overlaps with the values in a register can simply be done by comparing the write time of the value with the last cycle step in which the register is occupied. This cycle step is called the last read step of a register: $P(r)$. Sorting is the most complex step of this algorithm, so the left-edge algorithm can be performed with complexity $O(n \log n)$, where $n$ is the number of values to be stored.
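A compact sketch of this left edge scheme; each lifetime is a hypothetical `(write_step, last_read_step)` pair:

```python
def left_edge(intervals):
    """intervals: list of (write_step, last_read_step) lifetime pairs.
    Returns a register index per interval; O(n log n) due to sorting."""
    order = sorted(range(len(intervals)), key=lambda i: intervals[i][0])
    last_read = []          # P(r): last step each register is occupied
    assignment = {}
    for i in order:
        write, read = intervals[i]
        for r, p in enumerate(last_read):
            if write > p:   # register r is free again: reuse it
                assignment[i] = r
                last_read[r] = read
                break
        else:               # no free register: allocate a new one
            assignment[i] = len(last_read)
            last_read.append(read)
    return assignment
```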


\subsection{Interconnection Allocation and Optimization}\label{details:interconnection}

In datapath synthesis, each register output needs to be transferred to the input of a functional unit, and each functional unit outputs to the input of a register (or to the input of another functional unit, if chaining occurs). \textbf{Interconnection binding} maps the data transfers to the interconnection paths (in the target architecture presented here, they are direct connections supported by multiplexers). The objective of interconnection binding is to maximize the sharing of interconnection units, while still supporting the conflict-free data transfers required by the register-transfer description. Since the connections of a datapath usually occupy a substantial silicon area in a microchip and a significant number of Configurable Logic Blocks in an FPGA device~\cite{interconnection_cong}, the cost of the datapath connections has to be reduced.

The first step in the \textbf{interconnection allocation} task is the computation of the paths that have to be constructed in the final design. A data transfer is defined by:

\begin{itemize}
 \item a source element $obj_{src}$; it can be an input port, the output of a register or of a functional unit;
\item a target element $obj_{tgt}$; it can be an output port, the input of a register or of a functional unit;
\item the pins $P$ of the target element $obj_{tgt}$ where the communication path will be attached;
\item a value to be transferred;
\item a control step in which the data transfer can be executed;
\item the operation that requires the data transfer.
\end{itemize}

This information can easily be retrieved from the results of the previous steps. In fact, data transfers are requested when an operation is executed, since its input values need to be transferred from their locations to the inputs of the functional unit to which the operation is bound. Besides, locations can depend on the control flow in which the operation is executed. For instance, considering the \textit{Kim} benchmark and the finite state machine created in the Section~\ref{details:fsm}, the operation $<+16>$ is executed in both branches of the $<if1>$ statement, but the target locations of the result can be different, depending on the optimization performed by the register allocation algorithm on the two branches. So, the execution of an operation has to be related to the state where it can be executed. In the cited example, there are two \textquotedblleft different\textquotedblright~$<+16>$: the first refers to the \textit{true} branch of the \textit{if1} statement, the second to the \textit{false} branch. It is therefore natural to associate each operation with the control states where it can be executed. Then, if operation $o_i \in S_j$ is executed, its input values have to be retrieved. This information is obtained from the \acf{DFG}, since it describes the data dependences among operations. From the DFG, the operations that produce the input values for operation $o_i$ are easily recognized: if the operation $o_i$ needs a value $a$ produced by operation $o_k$, there will be a data edge from the operation $o_k$ to the operation $o_i$, labelled with $a$ (the name of the variable that creates the dependence). A connection is needed from the location where this value is stored, provided that the operation $o_k$ has actually been executed. In fact, consider the following pseudocode:

\begin{algorithmic}[1]
 \IF {Cond} 
 \STATE a = b + 1;\alglabel{prima}
 \ELSE
 \STATE a = 2 * b;\alglabel{seconda}
 \ENDIF
 \STATE c = a * b;\alglabel{terza}
\end{algorithmic}

the operation defined at line \algref{terza} has a data dependence from the operation at line \algref{prima} and the one at line \algref{seconda}. Starting from the evaluation of the control condition, one of the two data dependences is really involved. The finite state machine of this short example can be constructed as shown in Fig.~\ref{fig:simple_fsm}.

\begin{figure}[t!]
\centering
\includegraphics[width=0.3\columnwidth]{./chapters/details/images/fsm_inter.jpg}
\caption{Example of a finite state machine graph}\label{fig:simple_fsm}
\end{figure}

The operation defined at line \algref{terza} is replicated in the two branches: in the \textit{true} one, it depends on the operation at line \algref{prima}, and, in the \textit{false} branch, it depends on the operation at line \algref{seconda}. So two different paths have to be created, one for each dependence involved. These data transfers are subject to conditions: the dependence between the operation $a = b + 1$ and the operation $c = a * b$ is subject to the evaluation of the control condition as \textit{true}, while the dependence between the operation $a = 2 * b$ and the operation $c = a * b$ is subject to the evaluation of the control condition as \textit{false}. Now the connections for the value $a$, from the source element where the value is stored to the pins of the first operand of the operation $c = a * b$, can be created.

In general, and also in this short example, more than one connection can be created to the same operand of a functional unit. This can happen in different situations:

\begin{itemize}
 \item the value involved in the connection can come from different locations (as in the short example shown above);
\item the functional unit executes more than one operation and the operations require values from different locations.
\end{itemize}

When there is more than a single connection to an operand of a functional unit (or to a register input), a multiplexer is needed. In general, if there are $N$ connections coming to the same input, a multiplexer having $N$ inputs and one output is needed. A multiplexer with $N$ inputs can easily be converted into a tree of $N - 1$ multiplexers having only two inputs. The solution with two-input multiplexers is preferred since a unique module has to be specialized in the resource library, regardless of the number of inputs. Moreover, with the two-input solution, it is easier to calculate the logic function used to perform the selection.
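The reduction of an $N$-input multiplexer to $N-1$ two-input ones can be illustrated with a small sketch; the nested-tuple representation of the tree is purely illustrative:

```python
def mux_tree(inputs):
    """Reduce an N-input multiplexer to a tree of (N-1) two-input
    multiplexers, represented as nested ('mux', a, b) tuples."""
    count = 0
    level = list(inputs)
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level) - 1, 2):
            nxt.append(("mux", level[i], level[i + 1]))
            count += 1
        if len(level) % 2:      # odd element passes through to next level
            nxt.append(level[-1])
        level = nxt
    return level[0], count
```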

The logic that performs the selection is based only on the enable signals and on the information about conditional evaluations, all coming from the controller unit. In the example described above, when the operation $c = a * b$ has to be executed, the related enable signal is raised by the controller. It is evident that the enable signal is not sufficient to know which of the \textquotedblleft two\textquotedblright~operations $c = a * b$ is executed. So the conditional evaluation coming from the controller and related to the \textit{if} statement allows the datapath to recognize which operation is actually executed and, thus, which connection is actually involved. The selection function is created as a truth table that takes into account the enable signals and the conditional evaluation signals to select the right input to be transferred to the input of the functional unit.

To avoid undesired side-effects on memory elements, a write enable signal has also been provided. Therefore, the writing on a register can be performed only when the related enable signal has been raised, i.e. only when the operation that produces the data value is executed. Starting from this information, the truth table for the write enable signal can be computed in the same way as it has been computed for the selectors of the multiplexers.


% Communications paths, including buses and multiplexers, must be
% chosen so that the functional units and registers are connected as
% necessary to support the data transfers required by the specification
% and the schedule. The most simple type of communication path
% allocation is based only on multiplexers. Buses, which can be seen
% as distributed multiplexem, offer the advantage of requiring less
% wiring, but they may be slower than multiplexem. Depending on 
% the application, a combination of both may be the best solution.


\subsection{Datapath and Controller Circuit Generation}\label{details:circuit_generation}

After all the components have been interconnected (as described in Section~\ref{details:interconnection}), the structural representations of the datapath and the controller are created (see Section~\ref{mixed:circuit}). In this way, the back-end to the different hardware description languages is independent from the implementation details, since it only has to interface with this intermediate representation (see Section~\ref{mixed:backend}).

The \textbf{datapath} representation has been created as a graph representation, as defined by Definition~\ref{hls:datapath}. In fact, starting from the top component, named obviously \textit{datapath}, each component is defined as a graph $G(V,E)$, where:

\begin{itemize}
 \item the vertices $V$ are the internal elements of the component. Each vertex $v \in V$ can itself be a component, or it can be either a memory element or an interconnection element. At the lowest level of the hierarchy there is always a resource library component such as a functional unit, a register or a physical interconnection element (e.g. multiplexers and logic gates for boolean logic implementation);
\item the edges $E$ are the interconnections among the vertices $V$. Each $e\in E$ (where $e=(v,u)$ and $u,v \in V$ are components of the circuit) represents a connection path, as computed in the Section~\ref{details:interconnection}.
\end{itemize}

Primary input ports are all represented together in a unique node, called \textit{ENTRY}. Similarly, primary output ports are represented in the \textit{EXIT} node. In such a representation, the computation is defined as:

\begin{definition}
 \textbf{Datapath computation}: let $PI$ be the set of values presented at the input ports; the datapath carries out its computation according to its description to produce the values $PO$ for the output ports. The set of input values $PI$ represents the inputs coming from the external world, but also the signals coming from the controller.
\end{definition}

The \textbf{controller} representation is created as a finite state machine, starting from the finite state machine graph representation (as built in the Section~\ref{details:fsm}) and based on the Moore model. In fact, given the finite state machine graph $FSMG(V,E)$, the related finite state machine (as defined in the Section~\ref{hls:fsm}) is defined as follows:

\begin{itemize}
\item the vertices $V$ represent the states $S$ of the finite-state machine;
\item the conditional values to be tested, coming from the datapath, that cause branches in the finite state machine graph, represent the inputs $X$ of the finite-state machine;
\item the enable signals for the operation execution and the conditional values to be sent to the data path to aid the routing of the information inside the datapath (as detailed in the Section~\ref{details:interconnection}) represent the outputs $Y$ of the finite-state machine.
\item the state transition function $\delta : X\times S\rightarrow S$ is defined as follows:

\begin{definition}
 \textbf{State transition function}: let $u \in V$ be a vertex in the finite state machine graph. If $out\_degree(u,FSMG) = 1$ (where $out\_degree(u,FSMG)$ is the function that returns the number of outgoing edges of the node $u$ in the graph $FSMG$), the state transition function of the state $u$ is defined as:

 \begin{equation}
   \delta(u,\bullet) = v
 \end{equation}

where the symbol $\bullet$ means that the controller inputs are not taken into account and $v$ is the target vertex of the only edge $e \in E$ outgoing from vertex $u$. If $out\_degree(u,FSMG) > 1$, the state $u$ contains $N = \log_2(out\_degree(u,FSMG))$ conditional evaluations (which have generated the $2^N$ outgoing edges). These $N$ conditional evaluations are a subset $I_u =(i_1, i_2, \dots, i_N)$ of the inputs $X$ and the state transition function of the state $u$ is defined as:

\begin{equation}
  \delta(u,[I_u]) = \{v_1,v_2,\dots,v_{2^N}\}
\end{equation}

where $[I_u]$ represents all possible boolean combinations of the $I_u$ conditional evaluations and $\{v_1,v_2,\dots,v_{2^N}\}$ is the list of the related target vertices of the $2^N$ edges outgoing from the state $u$.
\end{definition}

\item the output function $\lambda : S \rightarrow Y$ (since a Moore model has been implemented) is defined as follows:

\begin{definition}
 \textbf{Output function}: let $u \in V$ be a vertex in the finite state machine graph. The output function is defined as:

\begin{equation}
  \lambda(u) = \left[ 
                     \begin{array}{c}
                      Y_o \\
                      Y_c
                     \end{array}
               \right]
\end{equation}

where $Y_o$ are the outputs corresponding to the signals used to enable the execution of the operations in the datapath and $Y_c$ are the outputs corresponding to the signals used to help the datapath recognize the flow where the control is.
Since the state $u$ contains a set $O_u \subseteq V_o$ of operations to be executed in the current state and an output value $Y_o^{i}$ is associated to each operation $v_i \in V_o$, the output function of the state $u$, referred to the operations, is defined as:

\begin{eqnarray} 
    Y_o = \left\{ 
          \begin{array}{lcl}
           Y_o^{i} = 1 & if & v_i \in O_u \nonumber \\
           Y_o^{i} = 0 & if & v_i \notin O_u \nonumber
          \end{array}
        \right.
\end{eqnarray}

The conditional evaluations performed in the past are stored in the (unique) edge incoming into node $u$. So, this edge is labelled with $M = \Vert Info_e \Vert$ conditions, where $M$ is the number of conditional evaluations already performed. These values form the remaining part $Y_c$ of the output vector:

\begin{eqnarray}
    Y_c = \left\{ 
          \begin{array}{lcl}
           Y_c^{i} = c_i & if & c_i \in Info_e \nonumber \\
           Y_c^{i} = \bullet & if & c_i \notin Info_e \nonumber
          \end{array}
        \right.
\end{eqnarray}

where $c_i$ is the value of the $i$-th evaluation condition in the behavioral specification. If $c_i\notin Info_e$, that condition has not been evaluated yet, so its value is not meaningful.
\end{definition}

\item the initial state $S_0$ is the only vertex $v$ connected to the $ENTRY$ node, i.e. such that there exists an edge $e \in E$ where $e = (ENTRY,v)$. Note that, by the way the finite state machine graph has been constructed in the Section~\ref{details:fsm}, $out\_degree(ENTRY,FSMG) = 1$ in all situations.
\end{itemize}

Following this formulation, the controller-FSM can easily be constructed using only the information coming from the finite state machine graph.
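The state transition function defined above can be sketched as a lookup-table construction. The graph and condition names are hypothetical, and the outgoing edges are assumed to be listed in the order of the enumerated boolean combinations:

```python
from itertools import product

def transition_table(fsm_graph, conds_in_state):
    """Build delta: a state with one outgoing edge transitions
    unconditionally (key None); a state testing N conditions has 2**N
    successors, one per boolean combination of its condition values."""
    delta = {}
    for state, succs in fsm_graph.items():
        if len(succs) <= 1:
            delta[state] = {None: succs[0]} if succs else {}
        else:
            n = len(conds_in_state[state])
            assert len(succs) == 2 ** n
            delta[state] = {combo: succs[i]
                            for i, combo in enumerate(
                                product((True, False), repeat=n))}
    return delta
```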

\subsection{Estimations of the results}\label{details:evaluation}

After the synthesis is complete, it could be interesting to perform estimations about some figures of merit, such as performance or area.

The \textbf{performance} can be estimated as the number of control steps needed to compute the longest (\textit{critical}) path in the behavioral specification. Since scheduling is the task that tries to minimize the latency, an estimation of the performance of a final design can be obtained from the resulting number of control steps. The same figure of merit can be retrieved as the length of the longest path in the finite state machine, since it models the evolution of the system step by step. For instance, the design obtained for the \textit{Kim} example in the Section~\ref{details:scheduling} takes 8 control steps.

The estimation of the \textbf{area} occupied by the component is more difficult to compute, since it depends on the optimizations made by the RTL synthesizer tool and on the target device technology. An existing area model~\cite{brandolese} has been adapted to FPGA design to generate the required values.

The area model used for the estimation is now further described. For each architecture $A$ the model divides the area into two main parts: the flip-flop part and the Look-Up Table (LUT) part. While the flip-flop part is easy to estimate, since it is composed of the data registers used in the datapath and of the flip-flops used for the state encoding of the FSM controller, the LUT part is a little more complex.

For the \textit{controller}, the flip-flops are used for the state encoding, so, with a common encoding format, the required elements are:

{\footnotesize
\begin{equation}
\#FF_{FSM} = \lceil \log_2(A.FSM.NumControlStates) \rceil
\end{equation}
}

where $A.FSM.NumControlStates$ is the number of states of the finite state machine (as described in Section~\ref{details:fsm}).
The LUTs used in the controller are estimated as:

{\footnotesize
\begin{eqnarray}
\begin{array}{lcl}
\#LUT_{FSM} & = & \lceil 1.99*A.FSM.NumControlStates - 0.24*A.FSM.Inputs+ \nonumber \\
 & & +1.50*A.FSM.Outputs - 9.97 \rceil
\end{array}\label{eqn:lut_fsm}
\end{eqnarray}
}

where:

\begin{itemize}
\item $A.FSM.Inputs$ are the inputs coming from the datapath to the controller (i.e. the conditional values to be evaluated when a branch occurs);
\item $A.FSM.Outputs$ are the outputs from the controller to the datapath, i.e. the enable signals that issue the execution of the operations and the values of the conditional branches already evaluated, which help the datapath perform the multiplexer selection.
\end{itemize}

The \textit{datapath} flip-flops correspond to the registers used to store values during the computation, so their number is computed with the equation:

{\footnotesize
\begin{equation}
\#FF_{DataPath} = \sum_{R \in A.DataPath.Registers} sizeof(R)
\end{equation}
}

where $A.DataPath.Registers$ is the set of registers allocated during the \textit{register allocation} phase (see Section~\ref{details:register}) and $sizeof(R)$ is the size (in flip-flops) of register $R$.

The contribution of the LUTs is harder to estimate, since interconnection and glue logic are difficult to predict. The LUTs occupied by the datapath come from three elements: the functional units, the interconnection elements and the glue logic. The functional unit area is computed as:

{\footnotesize
\begin{equation}
\#LUT_{FU} = \sum_{F \in A.DataPath.FunctionalUnits} F.Area\label{eqn:lut_fu}
\end{equation}
}

where $A.DataPath.FunctionalUnits$ are the functional units allocated in the final design and $F.Area$ is the area occupied by each of them, as stored in the \textit{resource library} (see Section~\ref{details:resource} for details).
The area of the interconnection elements is estimated as:

{\footnotesize
\begin{equation}
\#LUT_{MUX} = \sum_{M \in A.DataPath.Mux}  \lceil (0.59 * M.Input - 0.3*sizeof(M)) \rceil\label{eqn:lut_mux}
\end{equation}
}

where $A.DataPath.Mux$ is the set of multiplexers allocated by the \textit{interconnection allocation} task (see Section~\ref{details:interconnection}), $M.Input$ is the number of inputs of each of them and $sizeof(M)$ is the size of the data that passes through the multiplexer. This estimation already includes the contribution of the decoding logic needed to choose the right input.
The glue logic, due to connections of the datapath with the controller and the write-enable signals for the registers, is estimated as:

{\footnotesize
\begin{equation}
\#LUT_{Glue} = \lceil 0.7*\#LUT_{FSM} + 9.99*A.DataPath.NumRegisters \rceil\label{eqn:lut_glue}
\end{equation}
}

where $\#LUT_{FSM}$ is the number of LUTs used to implement the controller-FSM, as shown by Eq.~\ref{eqn:lut_fsm}, and $A.DataPath.NumRegisters$ is the number of registers allocated.

The overall area is the sum of the contributions due to flip-flops and LUTs:

{\footnotesize
\begin{eqnarray}
\begin{array}{lcl}
A.Area.FF & = & \#FF_{FSM} + \#FF_{DataPath} \\
A.Area.LUT & = & \#LUT_{FSM} + \#LUT_{FU} + \#LUT_{MUX} + \#LUT_{Glue}
\end{array}
\end{eqnarray}
}

The three parts related to the FSM (see Eq.~\ref{eqn:lut_fsm}), the MUXes (see Eq.~\ref{eqn:lut_mux}) and the glue logic (see Eq.~\ref{eqn:lut_glue}) are obtained using a regression-based approach.
The coefficient extraction and the model validation have been carried out using the set of benchmarks presented in \cite{ferrandi}.
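The equations above can be combined into a short sketch. The coefficients are those of the regression model, while the input figures used below are purely hypothetical:

```python
from math import ceil, log2

def estimate_area(num_states, fsm_inputs, fsm_outputs,
                  reg_sizes, fu_areas, muxes):
    """Combine the area-model equations; `muxes` is a list of
    (num_inputs, data_width) pairs, `reg_sizes` the register widths."""
    ff_fsm = ceil(log2(num_states))
    lut_fsm = ceil(1.99 * num_states - 0.24 * fsm_inputs
                   + 1.50 * fsm_outputs - 9.97)
    ff_dp = sum(reg_sizes)
    lut_fu = sum(fu_areas)
    lut_mux = sum(ceil(0.59 * n_in - 0.3 * width) for n_in, width in muxes)
    lut_glue = ceil(0.7 * lut_fsm + 9.99 * len(reg_sizes))
    return {"FF": ff_fsm + ff_dp,
            "LUT": lut_fsm + lut_fu + lut_mux + lut_glue}
```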
For the example described in this section, the \textit{Kim} benchmark, the values estimated are shown in Fig.~\ref{fig:results_kim}:

\begin{figure}[ht]
{\footnotesize
\begin{eqnarray}
\begin{array}{lcl}
 \#FF_{FSM} & = & 5 \nonumber \\ \nonumber
 \#LUT_{FSM} & = & 76 \\ \nonumber
 \#FF_{DataPath} & = & 160\\ \nonumber
 \#LUT_{FU} & = & 112 \\ \nonumber
 \#LUT_{MUX} & = & 739 \\ \nonumber
 \#LUT_{Glue} & = & 264 \\ \nonumber
 A.Area.FF & = & 165 \\ \nonumber
 A.Area.LUT & = & 1191 \\
 A.Area & = & A.Area.FF + A.Area.LUT = 1356\label{value:kim_area}
\end{array}
\end{eqnarray}
}
\caption{Estimation results for the \textit{Kim} benchmark}\label{fig:results_kim}
\end{figure}

where the value $A.Area$ has been computed under the assumption that a Configurable Logic Block is not used for sequential and combinational logic at the same time. This is not true in all cases, but it is an acceptable approximation.
Performing a real RTL synthesis of the resulting code with the Xilinx ISE ver. 8.1i tool~\cite{Xilinx} on a Virtex II-PRO XC2VP30 FPGA device, the results obtained are:

{\footnotesize
\begin{eqnarray}
  \begin{array}{lcl}
  A.Area.FF & = & 179~(+7.82\%) \nonumber \\
  A.Area.LUT & = & 1169~(-1.88\%) \nonumber \\
  A.Area & = & 1348~(-0.59\%) 
  \end{array}
\end{eqnarray}
}

which shows an overall error of less than 1.00\%, a good result for the methodology.
Further details on the validation of the area model will be presented in the Section~\ref{results:evaluations}.


\section{Design Space Exploration}\label{details:nsga}

In this section it is explained how the components of the genetic algorithm have been implemented to perform the design space exploration for the high-level synthesis problem. The elements composing the genetic algorithm are detailed and the interaction with the high-level synthesis flow is described. In Section~\ref{details:ga_kim}, the proposed methodology is applied to the \textit{Kim} benchmark to compare its results with those of the standard high-level synthesis flow described above.

\subsection{Chromosome encoding: resource binding and optimization algorithms}\label{details:encoding}

The \textbf{chromosome} encodes all the information about the resource binding, since the synthesis flow presented in this thesis can easily support operations that have previously been bound to functional units.
Since different algorithms have been proposed for each high-level synthesis sub-task, the genetic algorithm can combine them to find which one best fits the specification. In fact, in some situations, in the register allocation problem, the left edge algorithm can produce results that reduce the overall number of interconnection elements (i.e. multiplexers) despite the use of a greater number of registers. So the information about the algorithms used to solve the different tasks has also been encoded. The resulting chromosome encoding is formed by two parts, as shown in Fig.~\ref{fig:encoding} for the \textit{Kim} benchmark. The first part represents the operations to be bound to functional units; the second part stores the information about the algorithms that will be used in the synthesis flow.

The part of the chromosome related to operations is a vector, where each element corresponds to an operation in the behavioral specification. Its value is the resource binding constraint that is specified for the synthesis flow, as defined in Section~\ref{hls::constraints} and explained in Section~\ref{details:binding}. Therefore, a resource binding constraint is defined for each operation in the behavioral specification.

\begin{figure}[ht]
\centering
\includegraphics[width=\columnwidth]{./chapters/details/images/encoding.jpg}
\caption{An example of chromosome encoding for \textit{Kim} benchmark}\label{fig:encoding}
\end{figure}

Additional genes are added to compose the second part of the chromosome, related to the algorithms used to solve the high-level synthesis steps. Each gene represents a synthesis sub-task and its value is an index identifying the algorithm that will be used to solve the related step. For instance, the gene related to the \textit{scheduling} phase has two feasible values over which it can evolve: $1$ and $2$. The value $1$ represents the \textit{ILP formulation} and the value $2$ the \textit{list-based approach}. If the chromosome has this gene set to $1$, the synthesis flow will solve the scheduling problem with the ILP formulation; if it is set to $2$, the list-based algorithm will be used. Genes for scheduling, register allocation and interconnection allocation have been added. This formulation has been designed to be easily extended in the future: when further algorithms become available, the developer only needs to add a new index value to the related gene, among which the genetic algorithm can perform its choice.
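The two-part encoding described above can be sketched as follows. This is a minimal illustrative Python model, not the actual data structures of the synthesis tool: the names \texttt{Chromosome}, \texttt{binding} and \texttt{algorithms} are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class Chromosome:
    # First part: one gene per operation, holding the binding
    # constraint as a <functional unit type, instance index> pair.
    binding: List[Tuple[str, int]]
    # Second part: one gene per synthesis sub-task, holding the
    # index of the algorithm selected to solve that step.
    algorithms: Dict[str, int] = field(default_factory=dict)

# Example for a tiny specification with three operations:
# two additions bound to the same adder instance, one subtraction.
c = Chromosome(
    binding=[("ADDER", 0), ("ADDER", 0), ("SUB", 0)],
    algorithms={"scheduling": 1,           # 1 = ILP formulation
                "register_allocation": 2,  # hypothetical index values
                "interconnection": 1},
)
```

Adding a new algorithm for a sub-task then amounts to allowing one more index value in the corresponding \texttt{algorithms} gene.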

\subsection{High-level synthesis as fitness function}\label{details:fitness}

In this phase, there is the greatest interaction between the high-level synthesis problem and the genetic algorithm. In fact, at this point, the goodness of an individual (i.e. a design solution) has to be evaluated and a vector of fitness values (as defined in Section~\ref{mixed:cost_function}) has to be returned to the algorithm. To evaluate a solution, the high-level synthesis flow presented in Section~\ref{mixed:flow} is executed as described in Section~\ref{details:hls}. The behavioral specification and the resource library are obviously shared by all individuals. A new \textbf{hls\_constraint} data structure (see Section~\ref{details:resource}) is created for each fitness evaluation from a base one, in which all general constraints are always valid (e.g. constraints on the number of resources, registers or total area).
For each operation in the behavioral specification, the resource binding information is extracted from the related gene and translated into a constraint that is added to the \textbf{hls\_constraint} data structure, as defined in Section~\ref{details:binding}. The scheduling algorithm, selected by the value of the related gene, is then performed, taking into account all these constraints (see Section~\ref{details:scheduling} for details). After the scheduling, all operations are assigned to a control step and bound to the functional units specified by the chromosome encoding. Note that the scheduling results are strongly affected by the binding provided. Since all the following steps are based on the results of the scheduling and binding tasks, the motivation for encoding the chromosome as binding constraints and designing a flow based on this information becomes clear: by evolving the chromosome, different binding solutions can be obtained and new design solutions can be produced and evaluated. At this point, the synthesis flow runs up to the end and the results are evaluated. The fitness of a solution is the vector defined in Eq.~\ref{eq:cost_function}, whose elements are computed as described in Section~\ref{details:evaluation}. This is the most critical component of the genetic algorithm, since its execution time highly depends on the complexity of the behavioral specification.
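The data flow of the fitness evaluation (shared base constraints, per-individual binding constraints, fitness vector returned to the algorithm) can be sketched as below. This is a toy stand-in: the \texttt{evaluate} function and its trivial area proxy are hypothetical, whereas the real flow runs the full scheduling, binding, register allocation, interconnection allocation and evaluation steps described in this chapter.

```python
def evaluate(binding_genes, base_constraints):
    """Toy fitness evaluation for one individual.

    binding_genes: list of (fu_type, instance) pairs, one per operation,
                   taken from the first part of the chromosome.
    base_constraints: general constraints shared by all individuals.
    Returns a vector (tuple) of objectives to minimize.
    """
    # Per-individual constraint structure derived from the base one,
    # mirroring the per-evaluation copy of hls_constraint.
    constraints = dict(base_constraints)
    for op_index, gene in enumerate(binding_genes):
        constraints[("binding", op_index)] = gene

    # Toy objectives standing in for the real cost function: the number
    # of distinct functional unit instances, and the number of operations
    # that share an instance (a rough proxy for multiplexer pressure).
    instances = set(binding_genes)
    n_fu = len(instances)
    n_shared = len(binding_genes) - n_fu
    return (n_fu, n_shared)

fitness = evaluate([("ADDER", 0), ("ADDER", 0), ("SUB", 0)], {})
```

The genetic algorithm only sees the returned vector, so the synthesis flow behind \texttt{evaluate} can be changed without touching the exploration engine.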

\subsection{Initial population}\label{initial_population}

The initial population represents the set of points from which the exploration starts. These individuals can be randomly generated from the set of feasible values for the resource binding problem (see Definition~\ref{hls:binding}). The maximum number of resources that could be needed is obtained by applying the ASAP algorithm (see Section~\ref{hls::scheduling}). In fact, since this algorithm works under the assumption of infinite resources, for each resource type $k$ the maximum number $N_k$ of needed resources is the sum, over each operation type $t$ supported by resources of type $k$, of the maximum number of operations of type $t$ scheduled in the same control step by the ASAP algorithm. An operation of type $t$ can be bound to any functional unit type that can implement the operation type $t$ (according to Definition~\ref{def:libray}). Once it has been bound to a functional unit type $k$, the functional unit instance where it can be allocated is a value $p : 0 \leq p < N_k$.
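The computation of the bounds $N_k$ from the ASAP schedule can be sketched as follows. The function name and dictionary layout are hypothetical; only the rule itself (sum over supported operation types of the per-control-step peak) comes from the text above.

```python
from collections import Counter, defaultdict

def max_needed(asap_schedule, op_type, supports):
    """Compute N_k for each resource type k.

    asap_schedule: {operation: control step assigned by ASAP}
    op_type:       {operation: operation type}
    supports:      {resource type k: set of operation types it implements}
    """
    # Count, for every control step, how many operations of each type
    # the ASAP algorithm scheduled there.
    per_step = defaultdict(Counter)
    for op, step in asap_schedule.items():
        per_step[step][op_type[op]] += 1

    # Peak concurrency for each operation type t over all control steps.
    peak = {}
    for counts in per_step.values():
        for t, n in counts.items():
            peak[t] = max(peak.get(t, 0), n)

    # N_k: sum of the peaks over the operation types supported by k.
    return {k: sum(peak.get(t, 0) for t in ts) for k, ts in supports.items()}

nk = max_needed({"a": 0, "b": 0, "c": 1},
                {"a": "plus_expr", "b": "plus_expr", "c": "minus_expr"},
                {"ALU": {"plus_expr", "minus_expr"}, "ADDER": {"plus_expr"}})
```

With this bound, an instance gene for a unit of type $k$ can safely evolve over $0 \leq p < N_k$.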

To help the algorithm perform a larger exploration of the design space, some interesting and known individuals can be added to the initial population. The design solutions with the minimal use of functional units (only one instance for each type) are difficult for the genetic algorithm to find, since the probability that, for each functional unit type, all operations of the same type are mapped onto the same functional unit instance is very low. However, this is a region of the design space that could be interesting to explore, and its points are easy to compute: all operations of a type are mapped onto the same functional unit instance; for example, all \textit{plus\_expr} operations are bound to the same resource $<ADDER>,0$, where $ADDER$ is a functional unit type that can implement the operation type \textit{plus\_expr}. A fraction of the initial population can be initialized with these values, or with random mutations of them (see Section~\ref{details:operators}), to obtain points located near them in the design space.
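The seeding of these known individuals can be sketched as below. The helper names (\texttt{minimal\_seed}, \texttt{mutated\_copy}) and dictionary layouts are hypothetical; the idea is the one just described: one instance per functional unit type, plus perturbed copies nearby.

```python
import random

def minimal_seed(op_types, fu_for):
    """Minimal-functional-unit individual.

    op_types: list of operation types, one entry per operation.
    fu_for:   {operation type: functional unit type implementing it}.
    Every operation of the same type goes to instance 0 of one FU type.
    """
    return [(fu_for[t], 0) for t in op_types]

def mutated_copy(seed, n_instances, rate=0.05, rng=random):
    """Randomly move a few operations to other feasible instances,
    producing points located near the seed in the design space.
    n_instances: {FU type: N_k bound on instances}."""
    return [(fu, rng.randrange(n_instances[fu])) if rng.random() < rate
            else (fu, inst)
            for fu, inst in seed]

seed = minimal_seed(["plus_expr", "plus_expr", "minus_expr"],
                    {"plus_expr": "ADDER", "minus_expr": "SUB"})
```

A fraction of the population is filled with such seeds and their mutated copies; the rest is generated uniformly at random within the feasible ranges.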

\subsection{Genetic operators}\label{details:operators}

The classical genetic operators, such as \textit{crossover} and \textit{mutation}, have been applied without any modification, as explained in Section~\ref{mixed:operators}. In fact, it can be easily observed that any modification of the chromosome encoding performed by the operators creates feasible solutions, provided the values of the resource binding remain in the feasible range (as computed in Section~\ref{initial_population}). So there is no reason to create new formulations for these operators.

\textit{Uniform crossover} is applied with a high probability $P_c = 90\%$ as the method to produce offspring. The individuals are chosen as parents with \textit{tournament selection}, with only two candidates at each round. This operator swaps the genes at the same index (hence related to the same operation or to the same high-level synthesis task, according to the encoding defined in Section~\ref{details:encoding}) with a probability of $P_{cu} = 50\%$. This means that each offspring can inherit 50\% of its genes from the first parent and 50\% from the second one.
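A textbook version of two-candidate tournament selection and uniform crossover is sketched below; the function names are hypothetical and assume, for illustration, a minimized scalar fitness rather than the full NSGA-II ranking.

```python
import random

def tournament(population, fitness, rng=random):
    """Pick two random candidates; the one with better (lower)
    fitness wins the round and becomes a parent."""
    a, b = rng.sample(population, 2)
    return a if fitness(a) <= fitness(b) else b

def uniform_crossover(parent1, parent2, p_swap=0.5, rng=random):
    """Swap genes at the same index with probability p_swap (P_cu)."""
    child1, child2 = list(parent1), list(parent2)
    for i in range(len(child1)):
        if rng.random() < p_swap:
            child1[i], child2[i] = child2[i], child1[i]
    return child1, child2
```

Because genes are swapped only position-by-position, a binding gene is always exchanged with another binding gene for the same operation, and an algorithm gene with one for the same sub-task, so offspring remain feasible.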

\textit{Uniform mutation} is applied with a low probability $P_m = 10\%$ to generate offspring. The individuals to be mutated are randomly chosen from the set of parent individuals. Each gene of the chromosome is mutated, according to the encoding defined in Section~\ref{details:encoding}, with a very low probability $P_{mu} = 0.01\%$. This allows the algorithm to explore new regions of the design space without going too far from the parent individuals.
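The per-gene mutation can be sketched as follows; the names are hypothetical and \texttt{feasible\_values} is an assumed callback returning the feasible range of each gene (binding instances within $N_k$, or valid algorithm indices).

```python
import random

def uniform_mutation(chromosome, feasible_values, p_gene=0.0001, rng=random):
    """Replace each gene, with very low probability p_gene (P_mu),
    by a random value drawn from its own feasible range, so the
    mutated individual is always a feasible solution."""
    return [rng.choice(feasible_values(i)) if rng.random() < p_gene else g
            for i, g in enumerate(chromosome)]
```

With $P_{mu}$ this low, most offspring differ from their parent in at most a handful of genes, which keeps the search local while still allowing occasional jumps.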

\subsection{Design space exploration results}\label{details:ga_kim}

Applying the design space exploration algorithm to the \textit{Kim} benchmark, a unique Pareto-optimal solution, according to Definition~\ref{def:pareto_optimal}, has been found. This is due to the fact that this benchmark has a strictly sequential structure, as shown by the SDG graph (see Fig.~\ref{details:kim_sdg}). As a consequence, the algorithm can extract little parallelism and the variability in the design space is poor. However, the results are interesting. In fact, the best solution found uses two comparators, one adder and three subtractors. Table~\ref{tab:hls_mixed} shows the comparison with the results obtained in Section~\ref{details:evaluation}, where only one comparator, two adders and one subtractor are used. The comparison shows that reducing the number of functional units is not always a good way to reduce the overall area. In fact, better results are obtained by adding functional units and performing a proper resource binding: in this way, the number of interconnection and memory elements is heavily reduced. The results show that, by applying the mixed design space exploration, the total occupied area is reduced by about 20\%. If the design produced by the standard synthesis flow is replaced by the one produced with the proposed methodology, about three hundred LUTs are freed and become available for other uses.

\begin{table}
\centering
 \caption{\label{tab:hls_mixed} Comparison between results obtained with the standard high-level synthesis flow and the results obtained with the genetic design space exploration}
\begin{tabular}{|l|r|r|r|}
\hline
\textbf{ } & \textbf{Std HLS flow} & \textbf{Mixed HLS flow} & \textbf{Difference} \\
\hline
$\#FF_{FSM}$ & 5 & 5 & ==\\
$\#LUT_{FSM}$ & 76 & 78 & +2.63\% \\
$\#FF_{DataPath}$ & 160 & 96 & -40.00\%\\
$\#LUT_{FU}$ & 112 & 160 & +42.85\%\\
$\#LUT_{MUX}$ & 739 & 568 & -30.11\%\\
$\#LUT_{Glue}$ & 264 & 181 & -31.82\%\\
$A.Area.FF$ & 165 & 101 & -38.79\%\\
$A.Area.LUT$ & 1191 & 987 & -17.13\%\\
\hline
\textbf{A.Area}           & \textbf{1356} & \textbf{1088} & \textbf{-19.76\%} \\
\hline
\end{tabular}
\end{table}

\section{Conclusions}

In this Chapter, the algorithms used to solve each task of the high-level synthesis flow have been presented and described, and the interactions among all the components have been detailed. The implementation choices have been motivated and the flow has been applied to a common benchmark, as an example, to illustrate how it works. Then, the implementation details of the genetic algorithm have been described as well, with particular attention to the interaction with the synthesis flow described before. It has been shown that the two components can be easily composed to create the proposed methodology, which performs high-level synthesis and design space exploration. The resulting mixed flow has then been applied to the same benchmark and the results show significant improvements, saving about 20\% of the global area on a real FPGA device.