\chapter {Mixed High-Level Synthesis}\label{mixed}
\markboth {Chapter \ref{mixed}. Mixed High-Level Synthesis}{}

\begin{flushright}
\sl
I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth.
\end{flushright}

\begin{flushright}
\sl
Umberto Eco
\end{flushright}
\par\vfill\par


The goal of this thesis is to create a methodology for high-level synthesis that is able to deal with multi-objective optimization, in particular when area constraints have to be met. The problem can be formulated as two different sub-tasks:
\begin{enumerate}
\item a \textbf{synthesis flow} from the behavioral specification and a set of constraints to an RTL design;
\item a methodology to explore the \textbf{design space} that is able to reach the best trade-offs among the different objectives.
\end{enumerate}

In this chapter, the proposed methodology is introduced and motivated. All implementation details are further described in Chapter~\ref{details}. This chapter is organized as follows. In Section~\ref{mixed:flow}, the high-level synthesis flow is introduced; then, in Section~\ref{mixed:ga}, the design space exploration phase, implemented using a genetic algorithm, is presented. Finally, in Section~\ref{mixed:mixed}, the resulting mixed flow is surveyed.


%####################### FLUSSO HLS #######################
\section{High-level synthesis flow}\label{mixed:flow}

In this section, the synthesis from a behavioral specification to the related RTL design is described, and the implementation choices are presented and motivated.


\subsection{Intermediate representation}\label{mixed:ir}

The first step is the translation of the behavioral specification into a proper \textbf{internal representation}, as explained in Section~\ref{hls::IR}. The internal representation has to be simple to analyse for the following high-level synthesis sub-tasks but, at the same time, it has to store all the information that could be needed. A graph-based representation has been chosen, since it is powerful and clear and it allows the high-level synthesis algorithms to be recast as simpler graph-theoretic formulations. To construct this internal representation, a parser of the \textit{GCC} compiler intermediate representation (GIMPLE~\cite{gimple}) has been implemented, since many relationships can be directly exploited, based on the analyses performed by the \textit{GCC} compiler. This reduces the programming effort on the front-end implementation and makes it possible to introduce further code transformations (e.g., constant propagation or dead code elimination) that reduce or simplify the following steps or the final design. The result of this parsing is a set of graphs, each of which represents a set of properties. For instance, the \acf{DFG} is used to represent data dependences among operations, as described in Section~\ref{hls:dfg}, which any valid scheduling algorithm will have to respect.


\subsection{Resource Library and Design constraints}

The \textbf{resource library} (see Section~\ref{hls::resource}) is the list of all the components that can be used to implement the behavioral specification. Since a different set of operation types has to be associated with each library component, as stated by Definition~\ref{def:op_execution}, this information has to be stored together with other component-specific information, such as area, execution time (in cycle steps) and the power consumed when executing a specific operation type.
Therefore, a generic component stored in the resource library has the following information associated with it:
\begin{itemize}
\item Area occupied by the component (useful to estimate the total area occupied by the final design);
\item Set of operation types that can be executed by the component;
\item Structural representation of the component (used when the circuit representation is created).
\end{itemize}
Then, for each operation type executed on a component, further information has to be stored:
\begin{itemize}
 \item Execution time of an operation of the selected type on the component (in cycle steps);
 \item Initialization time of an operation of the selected type on the component (in cycle steps);
 \item Power consumption needed for the execution of an operation.
\end{itemize}
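For illustration, the resource library entry described above can be sketched as follows. All names and values are hypothetical, not the actual data structures of the implemented tool:

```python
from dataclasses import dataclass, field

@dataclass
class OperationTiming:
    execution_cycles: int   # execution time, in cycle steps
    initiation_cycles: int  # initialization time, in cycle steps
    power: float            # power consumed when executing the operation

@dataclass
class Component:
    name: str
    area: float       # area units occupied (used for total-area estimation)
    structure: str    # reference to the structural (RTL) template
    operations: dict = field(default_factory=dict)  # op type -> OperationTiming

# An ALU that can execute additions and subtractions:
alu = Component("alu", area=120.0, structure="alu.v")
alu.operations["add"] = OperationTiming(execution_cycles=1, initiation_cycles=1, power=0.5)
alu.operations["sub"] = OperationTiming(execution_cycles=1, initiation_cycles=1, power=0.5)
```

A library is then simply a collection of such components, queried by operation type during allocation and binding.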

The \textbf{design constraints} (see Section~\ref{hls::constraints}) represent the constraints imposed by the designer (or by the architecture of the target device) on the final design. For instance, for FPGA designs, one constraint that can be imposed is the maximum number of area units that can be used by the final RTL design, e.g., the maximum number of Configurable Logic Blocks (CLBs). Further constraints can be the maximum number of instances that can be allocated for a resource type (e.g., the maximum number of ALUs or registers).


\subsection{Partial binding}\label{mixed:partial_binding}

Once the internal representation has been created and the information about resources and constraints has been loaded, some additional information can be added to ensure that operations will be bound to specific resources. In fact, as described in many books (e.g., De Micheli~\cite{book:MicheliSynthOpt}), the allocation task can be performed before the scheduling one, and a \textit{partial binding} (see Section~\ref{hls::constraints}) can be introduced to partially control the area occupation of the final design. A partial binding is defined as a relation $\beta$ between an operation vertex of the behavioral specification and the functional unit instance on which it will have to be executed (according to Definition~\ref{hls:binding}):
\begin{equation}
\beta(v_l) = (t,r)
\end{equation}
where $v_l\in V_o$ is an operation of type $l$ to be executed, $t$ is a functional unit that is able to execute the operation type $l$ and $r$ is an integer representing the instance of the functional unit in the design. The following algorithms accept this information as a constraint, without investigating why it has been imposed.
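A partial binding can be sketched as a plain mapping from operation vertices to functional unit instances; the operation and unit names below are purely illustrative:

```python
# beta: operation vertex -> (functional unit type, instance index).
partial_binding = {
    "mul1": ("multiplier", 0),   # beta(mul1) = (multiplier, instance 0)
    "mul2": ("multiplier", 0),   # forced to share the same multiplier instance
    "add1": ("alu", 1),
}

def bound_instance(op):
    """Return (type, instance) if the operation is partially bound, else None."""
    return partial_binding.get(op)
```

Operations absent from the mapping are left free, and the following synthesis steps may assign them to any admissible instance.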


\subsection{Scheduling}

Once the partial binding information (if any) has been added to the specification, the \textbf{scheduling} phase can take place. The scheduling is performed using different algorithms:
\begin{itemize}
 \item an integer linear programming formulation that is able to produce an optimal result;
\item a specialization of the list-based algorithm that is able to provide solutions in a fairly short time.
\end{itemize}
 The execution time of this phase highly depends on the constraints imposed by the previous resource binding phase and on the algorithms chosen to solve the problem. In fact, if constraints on functional units have been imposed, the \acf{ILP} formulation can still reach the optimal solution for the problem, but it takes much more time, since many more equalities and inequalities have to be solved.
A heuristic \textit{list-based} algorithm, based on Hu's algorithm~\cite{listbased}, has been introduced, since it can easily manage the partial binding information and can reach a sub-optimal solution in a fairly short time. The usual list-based formulation selects the next operation to be scheduled among a set of operations that can \textit{potentially} be executed. An operation can potentially be executed if all its inputs have already been computed. The operation is then selected for execution if there is a free resource that is able to execute its operation type. If the operation has a partial binding to a specific functional unit instance, the algorithm simply has to check whether that specific instance (not a generic one) is free. If it is free, the operation can be assigned to its functional unit instance and scheduled in the current control step; if the resource is busy, the operation is kept in the list and the next one is tested. When a partial binding has not been specified for some operations, the algorithm performs the usual procedure to associate them with free resources.
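The list-based procedure just described can be sketched as follows. This is a minimal illustration (unit-latency operations, an acyclic dependence graph, at least one instance per required type), not the thesis implementation:

```python
def list_schedule(ops, preds, op_type, num_instances, binding=None):
    """Minimal list-based scheduler honouring a partial binding.

    ops: operation names; preds: op -> set of predecessor ops;
    op_type: op -> operation type; num_instances: type -> unit count;
    binding: optional partial binding op -> (type, instance).
    Every operation is assumed to take one control step."""
    binding = binding or {}
    schedule, done, step = {}, set(), 0
    while len(done) < len(ops):
        busy = {}  # (type, instance) pairs occupied in the current step
        ready = [o for o in ops if o not in done and preds[o] <= done]
        for op in ready:
            t = op_type[op]
            if op in binding:                 # bound to a specific instance
                unit = binding[op]
                if unit in busy:
                    continue                  # bound instance busy: keep op in the list
            else:                             # pick any free instance of the right type
                free = [i for i in range(num_instances[t]) if (t, i) not in busy]
                if not free:
                    continue
                unit = (t, free[0])
            busy[unit] = op
            schedule[op] = step
        done |= set(busy.values())
        step += 1
    return schedule
```

With one ALU, two independent additions are serialized into successive control steps; a partial binding can force such serialization even when more instances are available.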


\subsection{Finite State Machine creation}

After the scheduling phase, the operations have been assigned to the control steps where they will be executed and, within each control step, they have been associated with the functional units that will execute them. To perform further analysis, it is useful to reconstruct a flow similar to the \acf{CFG}. In fact, the CFG (see Section~\ref{hls:cfg}) represents, at each moment, which operation is executed, together with the previous and the next one. In the high-level synthesis problem, it is necessary to know, for each control step, which operations are executed together and which are the previous and next ones. For instance, this information is needed by the register allocation task, which needs to know which values are alive between two control steps in order to store them in the storage elements.
This feature is provided by the \textbf{Finite State Machine} (FSM), created according to Gajski's FSMD model~\cite{fsmd}. This model describes the evolution of the system based on the control flow. The FSM is defined as a graph where:
\begin{itemize}
 \item the vertices represent sets of operations, all executed at the same time under certain control conditions;
\item the edges are the transitions from each control step to the following ones, possibly labeled with the control conditions that have to occur for the transition to be performed.
\end{itemize}
When an operation produces a branch in the control flow (e.g., an \textit{if} statement), the resulting condition will produce a branch in the finite state machine graph as well.
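For a branch-free schedule, the construction of such a graph reduces to grouping operations by control step and chaining the steps; a branch would simply add one outgoing edge per condition value. A minimal sketch, under these simplifying assumptions:

```python
def build_fsm(schedule):
    """Build a straight-line FSM graph from a schedule (no branches).

    schedule: op -> control step.
    Returns (states, transitions): states maps each step to the set of
    operations executed together in it; transitions is a list of
    (from_step, to_step, condition), with condition None here."""
    states = {}
    for op, step in schedule.items():
        states.setdefault(step, set()).add(op)
    steps = sorted(states)
    transitions = [(s, t, None) for s, t in zip(steps, steps[1:])]
    return states, transitions
```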

This is not the only approach that can be used to create the finite state machine. For instance, Kuehlmann and Bergamaschi~\cite{fsm_bergamaschi} create the RTL specification of the controller even before the scheduling step is performed: this way it is possible to generate a schedule which takes into account also the area, or the speed of the resulting controller.



\subsection{Register allocation}\label{mixed:register}

The key to providing good solutions to the \textbf{register allocation} problem is the procedure used to recognize the overlapping of the lifetime intervals. Since a register is needed for each value alive between two control steps, the analysis can easily be performed on the finite state machine graph. In fact, a vertex in this graph represents all the operations executed in a control step. A value that is needed later will be alive across the edges outgoing from this vertex. Moreover, an edge represents the change from a control step to the next one, so the values alive between two control steps are the values alive on this edge. The dataflow analysis presented by Appel~\cite{Appel}, and then also used by Brisk~\cite{brisk}, allows the computation, for each edge, of which variables are alive. These variables will be the vertices of a \textit{conflict graph}, which is a graph defined as follows:
\begin{itemize}
 \item the vertices are the variables that will have to be stored in a register (since they are alive between two control steps);
\item an edge connects two variables if they are alive at the same moment, that is, if they are alive on the same edge.
\end{itemize}
The resulting conflict graph is minimal with respect to the number of conflicting variables (i.e., the number of edges in the graph). In this way, the solution to the register allocation problem will use the minimum number of registers. The problem can be formulated as a clique-covering problem (the search for the largest cliques in the compatibility graph, the complement of the conflict one) and it can easily be solved with a heuristic vertex coloring of the conflict graph. A heuristic coloring assigns different colors to the source and the target of each edge. Variables alive at the same moment are thus all connected by conflict edges, so they will be colored differently. Since each color represents a different register in the final design, these variables will be assigned, as required, to different registers. When variables are not alive together, there are no conflict edges between them, so the algorithm may assign them the same color. This means that they can share the same register, since their values are not alive at the same moment.
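A greedy first-fit coloring of the conflict graph can be sketched as follows (a generic heuristic, not necessarily the exact one implemented in the tool):

```python
def color_conflict_graph(variables, conflicts):
    """Greedy heuristic coloring: each color is one register.

    variables: ordered list of variable names;
    conflicts: set of frozensets {u, v} for variables alive on the
    same FSM edge (i.e., alive at the same moment)."""
    color = {}
    for v in variables:
        # Colors already taken by conflicting, already-colored neighbours.
        used = {color[u] for u in color if frozenset((u, v)) in conflicts}
        c = 0
        while c in used:
            c += 1          # first-fit: smallest color not in conflict
        color[v] = c
    return color            # variables sharing a color share a register
```

In the test below, `x` and `y` conflict and get distinct registers, while `z`, alive at a different moment, reuses the register of `x`.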


\subsection{Interconnection allocation}\label{mixed:interconnection}

At this point, all the elements in the circuit have to be connected. A mux-based architecture has been chosen for the implementation. The algorithm that performs interconnection allocation is quite simple. It computes where values are produced (or stored) and where they are directed, e.g., storage elements or functional units for a subsequent computation. Then it creates a connection between the source object and the target one. Once the connection has been created, it has to be specialized. In fact, if a target object is connected to only one source, a direct connection can be used. Instead, if a target object is connected to different source objects, a multiplexer is needed to select, during the execution, the connection that is needed at each moment. The \textit{moment} is defined as the state of the finite state machine in which that connection has to be active. The information associated with this state (e.g., the operations executed and the conditions) is unique, and it can be used to construct the decoding logic for the multiplexer selector.
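The wire-versus-mux decision can be sketched as follows; element names are illustrative:

```python
def allocate_interconnect(connections):
    """Sketch of mux-based interconnection allocation.

    connections: list of (source, target) pairs. A target fed by a single
    source gets a direct wire; a target fed by several sources gets a
    multiplexer whose selector is driven by FSM-state decoding logic."""
    by_target = {}
    for src, dst in connections:
        by_target.setdefault(dst, []).append(src)
    result = {}
    for dst, srcs in by_target.items():
        result[dst] = ("wire", srcs[0]) if len(srcs) == 1 else ("mux", srcs)
    return result
```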


\subsection{Controller design}

The controller is defined as the part of the circuit that computes the evolution of the control flow, based on the conditional inputs coming from the evaluation of the control constructs present in the initial specification. The controller is therefore created by translating the finite state machine graph into a structural representation of a finite state machine, where the states are the vertices of the graph, the inputs are the control condition evaluations coming from the datapath computations and the outputs are the enables for the operations to be executed at each step by the datapath. To help the construction of the decoding logic of the multiplexers (as described in Section~\ref{mixed:interconnection}), information about the actual conditions is also provided as an output directed to the datapath.


\subsection{Structural circuit representation}\label{mixed:circuit}

Since the representation of the circuit could be affected by dependencies coming from the choice of a specific hardware description language, a further intermediate representation has been introduced. This representation is created by reading the final data structure produced by the synthesis flow and converting it into a graph that represents the circuit elements and their connections, according to Definition~\ref{hls:datapath}.

\subsection{Hardware Description Languages Backend}\label{mixed:backend}

The backend has to be independent from all the previous phases: it only has to read the circuit representation described in Section~\ref{mixed:circuit} to produce the related RTL code. Since the representation abstracts from the details of the hardware description languages, the backend can be adapted to produce code in different hardware description languages (e.g., SystemC, VHDL or Verilog) starting from the same intermediate representation. Two different backends have been provided so far: the first one produces SystemC code, well suited for functional validation through simulation, and the second one produces Verilog code, which can be used as input for RTL synthesis tools, such as the Xilinx one~\cite{Xilinx}. The Verilog output is used for structural validation through synthesis.


%############### DESIGN SPACE EXPLORATION ##########################à
\section{Design space exploration using evolutionary computation}\label{mixed:ga}

In this section, the use of a genetic algorithm as a method to explore the design space is presented. Moreover, the features of the NSGA-II algorithm (see~\cite{deb00fast} and Section~\ref{state_art::NSGA}) implemented in the resulting genetic algorithm are surveyed.

\subsection{Chromosome encoding}\label{mixed:encoding}

Chromosomes encode all the information that is necessary for the synthesis
computation. In the proposed methodology, a chromosome is simply a vector composed of two parts.
In the first part, each gene describes the mapping between an operation of the
behavioral specification and the functional unit on which it will be
executed. With this formulation, allocation and binding information is
encoded here. In the second part, additional genes are added to represent the algorithms that are
used to complete the other steps of the high-level synthesis (scheduling, register
allocation and interconnection allocation). Since these algorithms
are deterministic, their results can be retrieved by applying the algorithms,
so they do not need to be encoded in the chromosome.
With this encoding, all genetic operators create
feasible solutions. In fact, recombination of \textit{operation bindings}
simply results in a new allocation or binding, and recombination of
\textit{completing algorithms} results in using different high-level synthesis
algorithms. This makes it possible to produce good solutions even if common genetic operators are used.

The algorithm is thus fast, in that it does not require any recovery procedure to produce a feasible chromosome encoding. Moreover, the use of different completing algorithms allows the genetic algorithm to choose and use the best algorithm to solve each single high-level synthesis step.
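The two-part chromosome described above can be sketched as follows; operation names, admissible bindings and algorithm labels are all illustrative placeholders:

```python
import random

# Part 1: one gene per operation, giving its functional unit binding.
OPERATIONS = ["add1", "mul1", "add2"]

# Part 2: one gene per remaining HLS step, giving the algorithm to use.
ALGORITHM_GENES = {
    "scheduling": ["ilp", "list_based"],
    "register_allocation": ["vertex_coloring"],
    "interconnection": ["mux_based"],
}

def random_chromosome(bindings_for):
    """bindings_for: op -> list of admissible (unit type, instance) pairs.
    Any choice from the admissible values yields a feasible chromosome,
    so no repair procedure is needed."""
    binding_part = [random.choice(bindings_for[op]) for op in OPERATIONS]
    algorithm_part = [random.choice(v) for v in ALGORITHM_GENES.values()]
    return binding_part + algorithm_part
```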


\subsection{Fitness Function}\label{mixed:cost_function}

As described in Section~\ref{state_art::EC}, the right implementation of the fitness function is critical for leading the genetic algorithm to produce good results. The fitness function is a multi-objective vector that contains the values to be minimized. It is defined as follows:
\begin{eqnarray}
 F(x) = \left[ \begin{array}{c}
               f_1(x) \\
               f_2(x) \\
               \end{array}
        \right]
      = \left[ \begin{array}{c}
               Area(x) \\
               Time(x) \\
               \end{array}
        \right]\label{eq:cost_function}
\end{eqnarray}
where $Area(x)$ is an estimation of the area occupied by solution $x$, computed using an area model that depends on the final results of the synthesis process and also on the target device, and $Time(x)$ is an estimation of the worst case execution time of solution $x$. To obtain the final synthesis results from which the worst case execution time and the occupied area are computed, the binding information encoded in the chromosome is used as \textit{resource binding constraints}, as described in Section~\ref{mixed:partial_binding}, and the flow described in Section~\ref{mixed:flow} can be performed up to the creation of the intermediate representation of the circuit, where all the information about the number of functional units, registers and interconnection elements used to implement the structural representation has been computed. Since the created representation is directly synthesizable (it only has to be dumped by the backend into a proper hardware description language), the estimations based on these results can be considered the most complete information that can be obtained from the circuit.
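As a minimal sketch of how the two objectives might be evaluated from the circuit representation, assuming a simple additive area model (all field names and area constants below are hypothetical, not the actual model of the tool):

```python
# Hypothetical per-element area constants, purely for illustration.
REGISTER_AREA = 8.0
MUX_AREA = 4.0

def estimate_fitness(design):
    """Two-objective fitness vector (Area, Time), both to be minimized.

    design: dict summarizing the synthesized circuit: 'unit_areas'
    (areas of the allocated functional units), 'registers', 'muxes',
    'cycles' (worst case schedule length) and 'clock_ns'."""
    area = (sum(design["unit_areas"])
            + design["registers"] * REGISTER_AREA
            + design["muxes"] * MUX_AREA)
    time = design["cycles"] * design["clock_ns"]
    return (area, time)
```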

\subsection{Genetic Operators}\label{mixed:operators}

To explore and exploit the design space, the usual genetic operators are
used: the \textit{unary mutation} and the \textit{binary crossover}.
The two operators are applied iteratively according to their probabilities.
The \textit{crossover} mates two parent chromosomes and produces two child chromosomes.
Given two chromosomes, \textit{uniform} crossover is applied with a high probability.
The crossover mechanism can mix the bindings of the solutions, but it can also mix the genes that represent the algorithms with which the synthesis phases are computed. The \textit{mutation} operator is used to find new points in the search space or to change the algorithm used to solve the related task.
Mutation has been implemented with a relatively low rate. A gene in a chromosome is changed according to its admissible values: if the selected gene is related to an operation, mutation results in a new binding for that operation; on the other hand, if the gene corresponds to a completing
algorithm among scheduling, register allocation and interconnection binding,
mutation changes the algorithm used to solve the corresponding synthesis step.
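The two operators can be sketched as follows (a generic formulation of uniform crossover and point mutation, consistent with the description above):

```python
import random

def uniform_crossover(parent_a, parent_b):
    """Uniform crossover: each gene position is taken from either parent
    with equal probability, producing two complementary children."""
    child_a, child_b = [], []
    for ga, gb in zip(parent_a, parent_b):
        if random.random() < 0.5:
            child_a.append(ga); child_b.append(gb)
        else:
            child_a.append(gb); child_b.append(ga)
    return child_a, child_b

def mutate(chromosome, admissible, rate=0.05):
    """Point mutation with a low rate: a gene is replaced by another of
    its admissible values (a new binding, or a different completing
    algorithm). admissible: gene index -> list of admissible values."""
    return [random.choice(admissible[i]) if random.random() < rate else g
            for i, g in enumerate(chromosome)]
```

Because every gene is only ever replaced by an admissible value, both operators preserve feasibility, as discussed in Section~\ref{mixed:encoding}.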

\subsection{Initial population}\label{mixed:initial_population}

At the beginning of each run, an initial population of admissible resource bindings
is created. It can be created by random generation or by starting from a first
admissible binding. The latter allows the algorithm to start from some interesting points (e.g.,
minimum number of functional units or minimum latency) and then explore around them. The rest of the population can be created by random mutation of these predefined individuals.

\subsection{Ranking and Selection}\label{mixed:ranking}

Solutions are sorted into different levels according to their fitness values.
The idea is that a ranking selection method can emphasize good solution points.
The non-dominated solutions are classified into the first level.
Then they are discarded and a new classification is performed among the remaining
ones. The ranking has been accelerated using the \textit{fast-non-dominated-sort}
algorithm available in the NSGA-II algorithm (see paragraph~\ref{nsga:fast_sorting}).


%################ RESULTING FLOW ##################
\section{Mixed high-level synthesis flow}\label{mixed:mixed}

In this section, the resulting flow is presented and described. The algorithm accepts as input a behavioral specification in the C language, a resource library and a set of constraints.

The information contained in the behavioral specification is obtained from the parser, extracted and translated into an internal representation (as described in Section~\ref{mixed:ir}) made of a set of graphs, for instance the SDG (also known as CDG+DFG) data structure (see Section~\ref{hls:sdg}). The NSGA-II algorithm can start by
creating a first resource allocation and binding (as described in Section~\ref{mixed:initial_population}). This binding defines the resource constraints (see Section~\ref{mixed:partial_binding}) for the following
scheduling phase, performed using the algorithm defined by the associated gene.
The solution is finally generated by applying the rest of the high-level synthesis flow
(e.g., FSM creation, register allocation and interconnection allocation) according to the gene
encoding (see Section~\ref{mixed:encoding}). At this point, estimations of the area and the execution time are computed (see Section~\ref{mixed:cost_function}). Once the fitness function values have been obtained, the solutions are sorted into non-dominated fronts (see Section~\ref{nsga:fast_sorting}), the crowding distance is computed (see Section~\ref{nsga:crowded_distance}) and the solutions are also sorted inside each front based on this crowding distance.

The first parent population is now ready.
Tournament selection is then performed to choose the best elements for crossover,
and random selection is used to choose the individuals for mutation.
The operators are then used to create a new child population, as described in Section~\ref{mixed:operators}.
The offspring population is then added to the parent one and the resulting set
is sorted by non-domination. In this way, parent solutions are ranked together with
the offspring ones, and good solutions can be maintained through the generations.

The process goes on as described above until the termination criteria have been met. The termination criterion that has been implemented is the number of generations to be evolved: when $N$ generations have been computed, the algorithm stops and returns the solution points found so far.
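The overall loop can be sketched as follows. This is a deliberately simplified shape of the elitist NSGA-II generational loop: `make_children` stands for selection plus crossover and mutation, and `rank` stands for non-dominated sorting with crowding distance (in the illustrative usage, a plain scalar sort takes their place):

```python
def evolve(initial_population, make_children, rank, n_generations, pop_size):
    """Simplified sketch of the exploration loop.

    make_children: population -> offspring (selection, crossover, mutation);
    rank: population -> the same individuals, sorted by non-dominated front
    and, within a front, by crowding distance.
    Termination: a fixed number of generations, as in the text."""
    population = initial_population
    for _ in range(n_generations):
        offspring = make_children(population)
        combined = population + offspring   # parents compete with offspring
        population = rank(combined)[:pop_size]  # elitist survivor selection
    return population                       # approximated Pareto set
```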

%\ \\
\section{Conclusions}

% ######## CONCLUSIONE DEL CAPITOLO.... da sistemare ###############
In this chapter, the proposed methodology has been surveyed. The high-level synthesis flow has been presented and described. Then, the NSGA-II features that have been exploited to provide the design space exploration have been presented, also considering how the information coming from the high-level synthesis flow is taken into account. Finally, the evolution of the resulting algorithm has been described, with particular attention to the use of the described high-level synthesis and genetic components and to the interaction among them.