\chapter {State of the art}\label{state_art}
\markboth {Chapter \ref{state_art}. State of the art}{}

\begin{flushright}
\sl
What moves men of genius, or rather what inspires their work, is not new ideas, but their obsession with the idea that what has already been said is still not enough.
\end{flushright}

\begin{flushright}
\sl
E. Delacroix
\end{flushright}
\par\vfill\par


%%################ INTRODUZIONE ALLO STATO DELL'ARTE #####################
As explained in the previous Chapter, synthesis on FPGA devices requires fast and easily
modifiable designs, so new design methodologies have been a very hot research topic over the past two decades. The different high-level synthesis sub-tasks (i.e. scheduling and resource allocation) can be solved separately, with canonical algorithms, or considered together. If they are considered separately, solutions are far simpler, but overall results are poor, since the problems are closely interdependent.
Conversely, if some of them are considered together, the complexity increases, but results are better. The main goal of a design methodology is thus to find the implementation representing the best trade-off between algorithm execution time and overall quality of results.\\
%In the lecterature, there are many different approaches to the problem. It will be now analysed.

This Chapter presents the current state of the art of the high-level synthesis problem
and is organized as follows: in Section~\ref{state_art::HLS}, previous works on
the high-level synthesis problem are described and, in Section~\ref{state_art::EC},
the most important works on evolutionary computation are analysed.
Finally, in Section~\ref{state_art::HLS_EC}, some applications of evolutionary computation to the high-level synthesis problem are presented.

\section{High-Level Synthesis}\label{state_art::HLS}

In this Section, works on the different sub-tasks of the high-level synthesis problem are surveyed.

%%################ BEHAVIORAL REPRESENTATION #####################
\subsection{Different behavioral specifications}
The behavior of the system to be synthesized is usually specified at the algorithmic
level, by using either a programming language such as C, Pascal, FORTRAN, or Ada,
or a hardware description language (or behavioral domain language) such as DSL~\cite{camposano_circuits}, ISPS~\cite{barbacci}, DAISY~\cite{thesis:johnson}, MIMOLA~\cite{mimola}, and VHDL~\cite{camposano}. After the behavior of the system has been specified, it has to be compiled into internal representations (see Section~\ref{hls::IR}), which usually are data-flow graphs and control-flow graphs. A recent approach to behavioral specification compilation is based on interfacing with the internal representation of the \textit{GCC} compiler~\cite{GCC} (i.e. GIMPLE~\cite{gimple}). In fact, starting from version 3.5, \textit{GCC} provides the possibility of dumping to file the syntax tree structure representing the initial source code. The use of \textit{GCC} allows the introduction of several compiler optimization
techniques into a high-level synthesis framework, such as loop unrolling, constant propagation, dead code elimination, common subexpression elimination, etc., without any additional programming effort. %Another approach, parse tree, is used in~\cite{gajski_tools}.

%% rappresentazioni intermedie
\subsubsection{Intermediate representations}
Camposano~\cite{camposano_dfg} introduces and defines the \textit{\acf{DFG}} (see Section~\ref{hls:dfg}) representation, to expose maximum parallelism in the input description. Orailoglu and Gajski~\cite{graph_gajski} consider this information insufficient, so they detail the use of both control and data flow graphs to represent specifications. They use the data flow graph representation to establish area/performance bounds and to make area/performance trade-offs. The control flow graph is used, instead, to generate a control unit design.

%% pdg
Ferrante et al.~\cite{ferrante_pdg} present the \textit{\acf{CDG}} (see Section~\ref{hls:cdg}) representation, which makes explicit both the data and control dependences for each operation in a
program. Data dependences have been used to represent only the relevant data flow relationships of
a program. Control dependences are introduced to analogously represent only the essential control
flow relationships of a program. Control dependences are derived from the usual control flow graph. Many traditional optimizations operate more efficiently on the CDG representation.

%% modello fsmd
Zhu and Gajski~\cite{fsmd} define the formal FSMD model to represent the target architecture for a behavioral synthesis. The overall target architecture of the high-level synthesis flow is based on a data path and a control path. The control path implements a finite state machine which generates a set of control signals, called the control word, at every clock cycle. The data path performs the computational tasks specified by the control signals by transforming data values in its storages.

%%################ INTRODUZIONE GENERALE #####################
%% demicheli e gajski
\subsection{High-level synthesis approaches}
A detailed description about high-level synthesis problems can be found in~\cite{book:MicheliSynthOpt} and~\cite{book:Gajski-HLS}. These works address the generation of detailed specifications of digital circuits from architectural or logic models and the optimization of some figures of merit, such as performance and area. 
% They introduce each sub-problems and they present the classical ways to solve them.
% Early approaches to high-level synthesis were oriented to reduce the worst 
% case execution time on critical paths. So the \textit{scheduling problem} had 
% been the focus problem addressed for quite large time, also since there were 
% many works on this topic coming from other research fields (e.g: time scheduling 
% for machine jobs or airplain flights). Then, when high-level synthesis designs  
% became more and more complex, the designer has also to consider and reduce 
% area occupation.

In these works and in Lin~\cite{recent_devel}, two decades of developments for scheduling, resource allocation and controller synthesis problems are analysed. Here, the techniques that had been proposed in the previous years for the various sub-tasks of high-level synthesis are surveyed. They mainly consider solving the sub-tasks separately, in order to obtain simpler algorithms, well suited to graph representations (see also Gajski~\cite{gajski_scheduling,gajski_function,gajski_interconnection}).

Early trends were oriented to solving the synthesis problem with deterministic algorithms, and two main approaches have been used: mathematical formulations and heuristic solutions.

%%################ APPROCCIO ESATTO #####################
\subsubsection{Mathematical formulations}
Mathematical programming formulations, among them \textbf{exhaustive search methods} and \textbf{\acf{ILP}}, are oriented to get the best overall result, despite the complexity of the algorithms and the growth of their execution time. The first approach to high-level synthesis scheduling, an \textit{exhaustive search method} proposed by Barbacci~\cite{barbacci_exaustive} (1973), tried all possible combinations of serial and parallel transformations and chose the best design found. This method is complete but computationally very expensive. It was later improved by using \textit{branch-and-bound} techniques~\cite{branch}.

%%################ APPROCCIO ILP #####################
%% approccio ILP
\acf{ILP} formulations have been used to solve a wide range of problems in high-level synthesis in an optimum way, beginning with Hafer's early scheduling formulation~\cite{hafer}, presented in 1981.
%% HAFER
This work presented a methodology to develop a formal model to address RTL design. For example, to model an operation, conditions about input values have to be formalized. So, if an operation $O_3$ requires the values $i_{1,3}$ and $i_{2,3}$, where $i_{1,3}$ is stored in register $s_1$ after being produced by operation $O_1$, and $i_{2,3}$ is directly produced by operation $O_2$, the relations are:
\begin{equation}
  T_{IA}(i_{1,3}) \geq T_{OA}(O_1) + D_{SP}(s_1) \nonumber
\end{equation}
\begin{equation}
  T_{IA}(i_{2,3}) \geq T_{OA}(O_2) \nonumber
\end{equation}
where $T_{IA}(i_{j,3})$ is the time when the input $i_{j,3}$ is available, $T_{OA}(O_v)$ is the time when the output of operation $O_v$ is available, and $D_{SP}(s_1)$ is the propagation time from clock to outputs of register $s_1$.
\acf{ILP} formulations generally need additional constraints to be placed when alternative control flows (branches, switches or loops) and synchronization operations are specified. The introduction of boolean 0/1 variables into the model relations allows implementation decisions to be included in the model. The 0/1 variables are used to represent the mapping of operations and values from a data flow description onto a set of hardware operators and registers, and to specify how operation inputs are accessed. In order to ensure the completeness of the implementation, several additional constraint types also have to be added.
The result is a set of equalities and inequalities that can be solved by a proper ILP solver; this set grows exponentially with the number of operations, alternative control flow constructs and constraints on resources. The major limitation is that, even for very small specifications, the run-time of generating a design explodes rapidly with complexity.
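A toy instance in the spirit of these 0/1 formulations can be sketched as follows. The data-flow graph, operation types and resource limits are invented for illustration, and the precedence/resource constraints are checked by brute-force enumeration of the candidate schedules rather than by an actual ILP solver:

```python
from itertools import product

# Hypothetical data-flow graph: operation -> list of predecessors
deps = {"m1": [], "m2": [], "a1": ["m1", "m2"], "a2": ["a1"]}
kind = {"m1": "mul", "m2": "mul", "a1": "add", "a2": "add"}
limit = {"mul": 1, "add": 1}      # resource constraint: one unit of each type
STEPS = range(4)                  # candidate control steps

def feasible(sched):
    # precedence: an operation starts strictly after its predecessors
    if any(sched[o] <= sched[p] for o in deps for p in deps[o]):
        return False
    # resource bound: concurrent operations of each type per control step
    for s in STEPS:
        for k in limit:
            if sum(1 for o in deps if sched[o] == s and kind[o] == k) > limit[k]:
                return False
    return True

ops = list(deps)
candidates = (dict(zip(ops, t)) for t in product(STEPS, repeat=len(ops)))
best = min((s for s in candidates if feasible(s)),
           key=lambda s: max(s.values()))
print(best, "latency:", max(best.values()) + 1)
```

Even on this four-operation graph the search space has $4^4$ points; the exponential growth with the number of operations is exactly the limitation discussed above.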

%% formulazioni ilp
\acf{ILP} formulations for both \textit{resource constrained} scheduling and \textit{time constrained} scheduling~\cite{formal_approach} have then been proposed. Papachristou and Konuk~\cite{papachristou_linear} present a methodology to drive the scheduling and allocation problems using a Linear Programming approach. Then, an algorithm for interconnection optimization is applied, based on operand flipping to share and reduce multiplexers. This optimization is presented also by Chen and Cong~\cite{interconnection_cong}. In fact, if an operand is required by a resource, there has to be a connection between the location where this value is stored (or produced) and the port where it will be read by the functional unit.
\begin{figure}[t!]
\begin{minipage}[c]{0.5\textwidth}
\centering
\includegraphics[width=0.40\columnwidth]{./chapters/state_art/images/mux_01.jpg}
\caption[Port assignment without operand flipping]{Port assignment without \protect \\operand flipping}\label{fig:mux01}
\end{minipage}
~
\begin{minipage}[c]{0.5\textwidth}
\centering
\includegraphics[width=0.40\columnwidth]{./chapters/state_art/images/mux_02.jpg}
\caption[Port assignment with operand flipping]{Port assignment with \protect \\operand flipping}\label{fig:mux02}
\end{minipage}
\end{figure}
It can happen that a value is required, for example, as the first operand of a resource, while the second input port already has a connection to the source where this value is stored. This forces the second value of the operation, which comes from a different source, to share a multiplexer with the previous connection (see Fig.~\ref{fig:mux01}). If the operation implemented is commutative, the two values can be flipped, so the first one will be connected to the second port and the common source will be shared, avoiding the use of a multiplexer (see Fig.~\ref{fig:mux02}).
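As a loose illustration of the operand-flipping idea (the operations and source names below are invented), the following sketch greedily chooses the operand order of a commutative operation so that sources already wired to a functional-unit port are reused, counting one multiplexer input per distinct source on each port:

```python
# Hypothetical sketch: choose the operand order of commutative operations
# so that sources already wired to a port are reused, saving mux inputs.

def bind_inputs(ops, commutative):
    port = {0: set(), 1: set()}          # sources wired to each FU input port
    for left, right in ops:
        straight = (left not in port[0]) + (right not in port[1])
        flipped = (right not in port[0]) + (left not in port[1])
        if commutative and flipped < straight:   # operand flipping
            left, right = right, left
        port[0].add(left)
        port[1].add(right)
    # a port needs one multiplexer input per distinct source
    return len(port[0]) + len(port[1])

ops = [("r1", "r2"), ("r3", "r1")]       # second op reuses r1 on the wrong side
print("no flipping:", bind_inputs(ops, commutative=False))  # 4 mux inputs
print("flipping   :", bind_inputs(ops, commutative=True))   # 3 mux inputs
```

Here the second operation needs `r1` on the first port, but `r1` is already wired to the second port; flipping its operands reuses the existing connection, as in Fig.~\ref{fig:mux02}.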

%% optimal algorithm to reduce interconnection
Rim et al.~\cite{rim_optimal} present a formulation to reduce wiring and multiplexer
area with an ILP formulation, given a scheduled graph. They define an interconnection as a point-to-point wire, connected to multiplexers. The function to be minimized is the sum of module, register and multiplexer areas and an estimation of wiring length based on floorplan information. The solutions obtained can be considered optimal, despite requiring a large number of inequalities.

%% ilp per scheduling
Cordone et al.~\cite{ferrandi} have recently presented an ILP Branch and Cut approach to improve scheduling using speculative computation. This technique is based on the idea that free
resources can be used to pre-compute some variables before the run-time conditional is
evaluated, in order to achieve better performance. This approach can lead to better overall latency, at the cost of using more resources.

%% ILP per low-power
ILP techniques have also recently been applied to scheduling problems oriented to low power (Shiue and Chakrabarti~\cite{ilp_lowpower}) and peak power minimization (Shiue~\cite{ilp_peakpower}).
They consider peak power since it adversely affects the lifetime of the devices, causes device damage and increases the package cost. The relative importance of resource minimization, power minimization and area minimization is controlled through cost variables.

%#### commento finale su approccio ilp
\acf{ILP} formulations can solve problems exactly but, even if
ILP solvers are relatively efficient, in practice this approach is
applicable only to very small problems: the complexity exponentially
increases with the number of nodes and with the control constraints to be considered, so the
execution time of these algorithms grows to unacceptable values.

%%################ APPROCCIO BDD #####################
% %#### approccio con BDD - 
%estendere spiegando creazione automa e BDD
\textbf{Symbolic BDD-based manipulations} have attained interesting
results as an alternative to ILP and heuristic techniques. The key idea of
the symbolic approach is to use a set of nondeterministic finite automata 
to describe design alternatives for highly constrained control dominated 
models, in which complex if-then-else patterns constitute the body of kernel 
loops. Operations are modeled as small NFA (non-deterministic finite
automata) which encode the operation's temporal input and output behavior. Edges are 
labeled with the input requirements: a 1 means that an operand
must be available in that cycle for the transition to take place. 
\begin{figure}[t!]
\begin{minipage}[c]{0.5\textwidth}
\centering
\includegraphics[width=0.35\columnwidth]{./chapters/state_art/images/single_NFA.jpg}
\caption[Single cycle operation]{Single cycle \protect \\operation}\label{fig:single_NFA}
\end{minipage}
~
\begin{minipage}[c]{0.5\textwidth}
\centering
\includegraphics[width=0.41\columnwidth]{./chapters/state_art/images/control_NFA.jpg}
\caption[Single cycle control operator]{Single cycle \protect \\control operator}\label{fig:control_NFA}
\end{minipage}
\end{figure}
A single-cycle operation that takes one input can be modeled as in Fig.~\ref{fig:single_NFA}. Note that the non-determinism is used to model the unknown start time of the operation. After it starts, the model executes in a single cycle and ends in its final state. A single cycle \textit{control operation} is shown in Fig.~\ref{fig:control_NFA}. Execution traces are represented by unique sequences of states of the modeling automaton. Since distinct control cases can have distinct traces, every control bifurcation has a corresponding bifurcation in the modeling state space to represent alternative traces. These traces are distinguished by a bit (boxed) whose value indicates the branch selection (true or false path) in the single cycle control operator model of Fig.~\ref{fig:control_NFA}.

Brewer~\cite{brewer} and Haynal~\cite{haynal} present a set of techniques
for representing the high-level behavior of a digital subsystem as a collection
of nondeterministic finite automata, yielding a symbolic formulation of the
scheduling problem based on finite automata. The technology is similar to that used in symbolic
model checking and it has been extended and formalized by Cabodi et
al.~\cite{cabodi}, who describe a SAT-based formulation of automata-based
scheduling and propose a resolution algorithm based on SAT solvers and \acf{BMC}.

%%################ METODI EURISTICI #####################
% %#### metodi euristici
\subsubsection{Heuristic approaches}
Since the ILP method is impractical for large designs, \textbf{heuristic methods} that run efficiently at the expense of design optimality have been developed. Heuristic methods are
by far the most commonly used in high-level synthesis.

There are usually two techniques to approach scheduling with heuristic algorithms: the \textit{constructive approach} and \textit{iterative refinement}. There are many approaches for constructive scheduling. They all arrange a schedule by assigning one operation at a time until all operations have been scheduled. The differences among the various algorithms lie in how the next operation to be scheduled is selected and where it is located in the scheduling process. Two
simple approaches are ASAP (as-soon-as-possible) and ALAP (as-late-as-possible) (see Section~\ref{hls::scheduling}). Both the ASAP and ALAP scheduling approaches assume unlimited resources; therefore, they can only be used for finding schedule bounds. Despite that, they can supply the fastest implementation. It
should be noted that these simple models assume that each operation uses exactly
one control step. Chaining (scheduling more than one dependent operation in
one control step) and multistep operations (operations requiring multiple steps to
complete) are not considered here.
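The ASAP/ALAP pair can be sketched in a few lines; the data-flow graph below is invented, operations are assumed single-cycle and resources unlimited, as stated above. The difference between the two schedules gives each operation's \textit{mobility}, used later by force-directed scheduling:

```python
# Minimal ASAP/ALAP sketch: hypothetical DFG, single-cycle operations,
# unlimited resources.
deps = {"a": [], "b": [], "m": [], "c": ["a", "b"], "d": ["c", "m"]}

def asap(deps):
    step = {}
    def visit(o):                        # earliest step after all predecessors
        if o not in step:
            step[o] = max((visit(p) + 1 for p in deps[o]), default=0)
        return step[o]
    for o in deps:
        visit(o)
    return step

def alap(deps, length):
    succs = {o: [s for s in deps if o in deps[s]] for o in deps}
    step = {}
    def visit(o):                        # latest step before all successors
        if o not in step:
            step[o] = min((visit(s) - 1 for s in succs[o]), default=length - 1)
        return step[o]
    for o in deps:
        visit(o)
    return step

early = asap(deps)
length = max(early.values()) + 1         # critical-path length in steps
late = alap(deps, length)
mobility = {o: late[o] - early[o] for o in deps}
print(early, late, mobility)
```

Operations with zero mobility lie on the critical path; here only the side operation `m` can slide by one step.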

To overcome the limitations of ASAP, another approach called \textit{list scheduling}, based
on Hu's algorithm~\cite{listbased}, has been established. In this approach, the next operation to be
scheduled is selected over a more global range. The operations to be scheduled into
the control step are kept in a list and ordered by a priority function. Each operation
in the list is scheduled in turn, provided the resources are available in that particular
step; otherwise, it has to wait for the next step. When no more operations can be
scheduled, the algorithm continues with the next control step: the available operations
are found and ordered, and the process is repeated. Such iteration continues until
all the operations have been scheduled.
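The loop described above can be sketched as follows. The graph, operation types and resource limits are hypothetical, and the priority function is the longest path from an operation to a sink (one of the classical choices, similar in spirit to BUD's):

```python
# List-scheduling sketch: single-cycle operations, hypothetical DFG,
# priority = length of the longest path from the operation to a sink.
deps = {"m1": [], "m2": [], "m3": [], "a1": ["m1", "m2"], "a2": ["a1", "m3"]}
kind = {"m1": "mul", "m2": "mul", "m3": "mul", "a1": "add", "a2": "add"}
avail = {"mul": 1, "add": 1}             # resources per control step

succs = {o: [s for s in deps if o in deps[s]] for o in deps}

def priority(o):                          # longest path to a sink
    return 1 + max((priority(s) for s in succs[o]), default=0)

sched, done, step = {}, set(), 0
while len(done) < len(deps):
    used = {k: 0 for k in avail}
    # ready list: operations whose predecessors are all scheduled,
    # ordered by decreasing priority
    ready = sorted((o for o in deps if o not in done
                    and all(p in done for p in deps[o])),
                   key=priority, reverse=True)
    placed = []
    for o in ready:
        if used[kind[o]] < avail[kind[o]]:
            used[kind[o]] += 1
            sched[o] = step
            placed.append(o)              # otherwise: wait for the next step
    done.update(placed)
    step += 1
print(sched)
```

With one multiplier available, the three multiplications are serialized in priority order, and each addition starts as soon as its operands and an adder are free.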

List scheduling is used in several schedulers with different purposes in the
priority function. BUD~\cite{mcfarland} gives the priority to the longer path from the operation
to the end of the block containing the operation. Elf~\cite{Elf} uses
the ``urgency'' of an operation, which is the length of the shortest path from that
operation to the nearest local constraint. 

\textit{Force-directed scheduling} (FDS)~\cite{forcedirected} defines force as a priority during scheduling. It can specify a global time constraint and minimize the resources required to meet the constraint. The strategy is
to set similar operations in different control steps in order to balance the concurrency 
of the operations assigned to the units without increasing the total execution
time. By balancing the concurrency of operations, each structural unit has a high
utilization, which in turn decreases the total number of units required. This procedure 
involves the following three steps: (1) The time frame of each operation is
determined by evaluating the ASAP and ALAP scheduling (see Section~\ref{hls::scheduling}). The frame represents
the probability that the operation will eventually be placed in some time slot. (2)
A distribution graph (DG) is created by adding the probabilities of each type of
operation for each control step of the control-flow or data-flow graph. The DG indicates the concurrency of similar operations. (3) The force associated with every
feasible control step assignment of each operation is calculated. The force between
an operation and a particular control step is proportional to the number of operations of the same type that could go in that control step. The DG shows, for each
control step, how heavily loaded that step is, given that all possible schedules are
equally likely. The probability of operations in a given control step is calculated
by using \textit{mobility}. FDS supports the synthesis of data paths that have a near-minimum cost under fixed timing constraints, but does not consider hardware constraints.
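Steps (1)-(3) above can be illustrated on a toy example. The ASAP/ALAP values below are invented placeholders for precomputed schedules; within its time frame, each operation is assumed equally likely in every step, and the self force of fixing an operation at a step is computed with the usual simplified formula (change in probability times distribution-graph value):

```python
# Force-directed scheduling sketch: time frames from ASAP/ALAP and the
# distribution graph (expected operations of each type per control step).
asap = {"m1": 0, "m2": 0, "a1": 1}       # hypothetical precomputed schedules
alap = {"m1": 0, "m2": 1, "a1": 2}
kind = {"m1": "mul", "m2": "mul", "a1": "add"}
steps = range(3)

def prob(op, s):                          # uniform probability inside frame
    lo, hi = asap[op], alap[op]
    return 1.0 / (hi - lo + 1) if lo <= s <= hi else 0.0

dg = {k: [sum(prob(o, s) for o in kind if kind[o] == k) for s in steps]
      for k in set(kind.values())}
print(dg)

def self_force(op, s):
    # force of fixing op at step s: positive = crowded step, negative = good
    k = kind[op]
    return sum(((1.0 if t == s else 0.0) - prob(op, t)) * dg[k][t]
               for t in steps)

print(self_force("m2", 0), self_force("m2", 1))
```

The multiplier distribution graph is heaviest at step 0, so fixing `m2` at step 1 yields a negative (favourable) force: FDS balances concurrency exactly this way.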

Camposano~\cite{PBS-Camp91} proposed a \textit{path-based scheduling} algorithm that
could minimize the number of control steps (or control state) under given constraints 
such as timing and area. The basic idea of the so-called as-fast-as-possible
(AFAP) algorithm is that each possible path is scheduled separately to obtain the
minimal number of control states for the given constraints, then the schedules for
each path are overlapped in a similar optimal way. Loops and conditional branches
are handled as an integrated part of the problem. A path is scheduled by cutting
it at different points, where each cut represents the start of a new control step.
The minimum number of cuts will schedule the path in the minimum number of
control steps.

Some scheduling algorithms for conditional resource sharing have been reported. Kim et al.~\cite{Kim-HRA-94} presented a scheduling algorithm for DFGs with nested
conditional branches. It consists of three steps: (1) transform a DFG with conditional branches into a DFG without conditional branches, (2) use an FDS algorithm to obtain a schedule, and (3) transform the schedule obtained into a schedule
for the original DFG. This overcomes some problems existing in similar
algorithms, such as path-based ones, but it still cannot exploit all the
potential parallelism.

% introduzione allo scheduling
%Walker and Chaudhuri~\cite{walker95highlevel} ...

Lakshminarayana et al.~\cite{wavesched} create the finite state
machine during the scheduling phase, under allocation constraints. This algorithm
tries to minimize the average execution time and preserves the parallelism
inherent in the application.

Vijayakumar and Brewer~\cite{weighted} present a technique to weight the scheduling choices based on control flow probabilities. In this way, flows with a higher probability of being executed will be optimized at the expense of those with a lower probability.

% scheduling loops
%Baxter et al.~\cite{baxter} ...
\subsubsection{Allocation techniques}

Global \textbf{allocation techniques} include graph theoretic formulations, branch-and-bound algorithms, and mathematical programming techniques. 

Stok~\cite{stok} presented a set of heuristic algorithms to sequentially solve each step of datapath creation on scheduled data flow graphs, based on graph formulations. The main idea is to represent operations as vertices of an (undirected) graph. A \textit{clique} is a subgraph in which all vertices are connected to each other. If, for example, the edges represent the mutual exclusion property (see Definition~\ref{hls:mutual_exclusion}), all operations belonging to the same clique are mutually exclusive, and can therefore share the same hardware unit. If the graph is covered by a minimum number of
cliques, called a \textit{minimal clique covering}, the minimum amount of hardware will
be required (one unit per clique). A so-called clique-finding problem is to find
sets of nodes in the graph, all of whose members are connected to one another.
All of the elements in such a set can share the same hardware without conflict.

In practice, heuristics are frequently used because finding a minimal
clique covering is an NP-hard problem~\cite{k-racp-72}. The branch-and-bound technique is used to find an optimal solution for the allocation problem. Because it can update the best current solution and prune
the remaining search accordingly, the branch-and-bound technique is more
efficient than a general search and can deal with three kinds of assignment tasks.
The assignment, also called binding (see Section~\ref{hls:allocation}), is conducted for each control step. By looking ahead at the effects of assignment for multiple control steps, a more global plan can be achieved. However, heuristics are still needed to reduce the search complexity in practical use while maintaining an optimal result. The MIMOLA~\cite{mimola}
system, for instance, uses this method in its allocation.
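A simple greedy heuristic of the kind alluded to above can be sketched as follows: each operation joins the first existing clique it is compatible with, otherwise it opens a new one, and one functional unit is then allocated per clique. The compatibility graph is invented for illustration, and this greedy pass is only a stand-in for a true minimal clique covering:

```python
# Greedy clique-partitioning sketch on a (symmetric) compatibility graph:
# pairwise compatible (e.g. mutually exclusive) operations share a unit.
compat = {                               # hypothetical compatibility graph
    "o1": {"o2", "o3"},
    "o2": {"o1", "o3"},
    "o3": {"o1", "o2"},
    "o4": set(),
}

def clique_cover(compat):
    cliques = []
    for op in sorted(compat):            # deterministic visiting order
        for c in cliques:
            # op may join c only if compatible with every current member
            if all(op in compat[m] for m in c):
                c.add(op)
                break
        else:
            cliques.append({op})
    return cliques

units = clique_cover(compat)
print(len(units), "functional units:", units)
```

Here `o1`, `o2`, `o3` are pairwise compatible and collapse into one clique, while `o4` conflicts with everything and needs its own unit.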

% register allocation
Stok~\cite{stok} uses a clique-covering algorithm based on a compatibility graph to reduce the number of registers. Two variables are compatible (i.e. they are not in \textit{conflict}) if their corresponding lifetime intervals do not overlap. The lifetime interval is defined as the interval between the control step where the variable is defined and the control step where the variable is last used. Compatible variables can share the same register, since their values are never alive at the same time. The problem can thus be translated into the creation of the minimal conflict graph, which generates the maximal compatibility graph.
Brisk et al.~\cite{brisk} and Appel~\cite{Appel} present an approach based on dataflow analysis on the control flow graph that allows finding the optimal conflict graph among live variables (based on solving dataflow equations). This allows the creation of the minimal conflict graph and so of the maximal compatibility graph, which leads to larger cliques and so to the minimum number of registers. The problem is that a complete dataflow analysis can take a long time in specifications with many control constructs, since there are many operation vertices to be analysed. Besides, if the implementation of the dataflow analysis is based on the control flow graph, it is not able to take parallel computations into account, so the conflict graph may not be computed correctly.
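Because lifetime intervals on a straight-line schedule form an interval graph, register sharing can also be computed with the classical left-edge scan rather than an explicit clique cover. The sketch below uses invented lifetime intervals, treated as half-open `[definition_step, last_use_step)`:

```python
# Left-edge sketch for register allocation: variables whose lifetime
# intervals [def_step, last_use_step) do not overlap share a register.
lifetimes = {                            # hypothetical lifetime intervals
    "v1": (0, 2), "v2": (1, 3), "v3": (2, 4), "v4": (3, 5),
}

def left_edge(lifetimes):
    registers = []                       # per register: step it is free from
    binding = {}
    for var, (start, end) in sorted(lifetimes.items(), key=lambda kv: kv[1]):
        for r, free_from in enumerate(registers):
            if start >= free_from:       # no overlap: reuse register r
                registers[r] = end
                binding[var] = r
                break
        else:                            # conflict with all: new register
            registers.append(end)
            binding[var] = len(registers) - 1
    return binding, len(registers)

binding, nregs = left_edge(lifetimes)
print(binding, "registers:", nregs)
```

Four variables with pairwise-overlapping chains fold onto two registers: `v1`/`v3` share one and `v2`/`v4` the other.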

%% interconnection
%Chen and Cong~\cite{interconnection_cong} ...
% Very few algorithms consider also interconnection costs in cost evaluation. 
% Typical algorithms~\cite{Linear,interconnection} only optimize and minimize 
% interconnection cost after termination of the other phases.

%% fsm
\subsubsection{Finite state machine descriptions}
Kuehlmann and Bergamaschi~\cite{fsm_bergamaschi} propose a method and algorithms for exploring the design space between the register-transfer and behavioral levels. The goal is to be able to describe a design using a behavioral hardware-description language in such a way that extra control information can be specified by the designer, and taken into account
during synthesis. To achieve this objective, a new high-level representation has been formulated.
A behavioral description can be represented as a control-flow and a data-flow graph. Given a behavioral specification, it is possible to partition it with respect to its control and data-flow characteristics.
Given a partition $B=(B_1,B_2,\dots B_n)$ of the behavioral specification, a \textit{High-Level State Machine} is defined as $HLSM = [S, C, B, \delta, \rho]$, where: $S =
(S_1, S_2, \dots S_n)$ is the set of high-level states; $C =
(C_1,C_2, \dots C_m)$ is the set of control conditions which result from conditional operations; $\delta : S \times C \rightarrow S$ is the high-level state transition function, which specifies
the next state depending on the present state and the
conditions; and $\rho : S \leftrightarrow B$ is the partition relation which
assigns each high-level state $S_i$ to a unique part $B_i$ of
the behavioral description, which is to be performed if
$S_i$ is active.

The HLSM control-flow graph is generated using the following algorithm:
\begin{itemize}
\item Generate the original control and data-flow graphs from the behavioral description.
\item Select the state variable (user defined) and determine the set of values assumed by it. 
   These values constitute the set of high-level states $S$. The initial value of the state variable corresponds to the initial state of the HLSM.
\item Traverse the original control-flow graph and determine the partitions $B_i$ which are associated with each value of the state variable.
\item Determine the high-level state transitions from the operations which assign values to the state variable.
\item Reconnect the $B_i$ according to the high-level state
  transition diagram. The new graph is the HLSM control-flow graph.
\end{itemize}
\noindent By construction, each high-level state transition in
the HLSM corresponds to exactly one control-flow edge
in the HLSM control-flow graph. By marking these
edges the original HLSM boundaries can be recognized
by the following scheduling algorithm.

% stime
% Harmanani and Saliba~\cite{saliba} give an estimation of 
% the controller area based on a PLA model implementation, with number of PLA 
% inputs and outputs. This can be considered as an approximate approach because 
% some datapath design optimizations can hardly affect controller cost in terms 
% of components and control logic.

%%################ ALTRI TOOLS #####################
%% riferimenti a spark & co.?
\subsubsection{Existing tools}
There are some existing tools that compute RTL design starting from a behavioral specification. SPARK~\cite{Spark-Sys-VLSI03} is one of them. It is a C-to-VHDL high-level synthesis framework that employs a set of parallelizing compiler and synthesis transformations to improve the quality of high-level synthesis results. The compiler transformations have been re-instrumented for synthesis by incorporating ideas of mutual exclusivity of operations, resource sharing and hardware cost models. The SPARK parallelizing high-level synthesis methodology (see Fig.~\ref{fig:spark}) is particularly targeted to multimedia and image processing applications along with control-intensive microprocessor functional blocks. 
\begin{figure}[t!]
\centering
\includegraphics[width=0.40\columnwidth]{./chapters/state_art/images/spark.jpg}
\caption{SPARK synthesis framework}\label{fig:spark}
\end{figure}
SPARK takes behavioral ANSI-C code as input, schedules it using speculative code motions and loop transformations, runs an interconnect-minimizing resource binding pass and generates a finite state machine for the scheduled design graph. Finally, a backend code generation pass outputs synthesizable register-transfer level (RTL) VHDL. This VHDL can then be synthesized using logic synthesis tools into an ASIC or mapped onto an FPGA. One of the most important limitations is that it cannot perform design space exploration, since it works on a fixed architecture.


%%###########################################################
%%############# Evolutionary computation ####################
\section{Evolutionary Computation}\label{state_art::EC}
%%################ INTRODUZIONE GENERALE #####################
%% Holland & goldberg
The basic principles of genetic algorithms (GAs) were first laid down rigorously by Holland~\cite{holland}, and are well described in many texts (e.g.~\cite{Dav87, Dav91, Gre86, Gre90, goldberg, Mic92, mitchell98introduction}). GAs simulate those processes in natural populations which are essential to evolution (see Section~\ref{hls:ga}). 

When designing a GA application, it has to be considered far more than just the theoretical aspects described in Section~\ref{hls:ga}. Each application will need its own fitness function, but there are also less problem-specific practicalities to deal with. Most of the steps in the traditional GA can be implemented using a number of different algorithms. For example, the initial population may be generated randomly, or using some heuristic method (\cite{Gre87, SG90}).

%% citazioni da generale sul genetico bea...
Much research has concentrated on optimising all the parts of a GA, since improvements can be applied to a variety of problems. Grefenstette~\cite{Gre86} sought an ideal set of parameters (in terms of crossover and mutation probabilities, population size, etc.) for a GA, but concluded that the basic mechanism of a GA was so robust that, within fairly wide margins, parameter settings were not critical. 
Frequently, however, it has been found that only small improvements in performance can be made. What \textit{is} critical in the performance of a GA, however, is the \textbf{fitness} function, and the coding scheme used.
Ideally, the fitness function should be smooth and regular, so that chromosomes with reasonable fitness are close (in parameter space) to chromosomes with slightly better fitness. For many problems of interest, unfortunately, it is not possible to construct such ideal fitness functions (if it were, hill-climbing algorithms could simply be used). Nevertheless, if GAs (or any search technique) are to perform well, ways must be found of constructing fitness functions that do not have too many local maxima, or a very isolated global maximum.
The general rule in constructing a fitness function is that it should reflect the value of the chromosome in some \textquotedblleft real\textquotedblright\ way. As stated above, for many problems, the construction of the fitness function may be an obvious task. For example, if the problem is to design a fire-hose nozzle with maximum through flow, the fitness function is simply the amount of fluid which flows through the nozzle in unit time. Computing this may not be trivial, but at least it is known \textit{what} needs to be computed, and the knowledge of \textit{how} to compute it can be found in physics textbooks.
Unfortunately, the \textquotedblleft real\textquotedblright\ value of a chromosome is not always a useful quantity for guiding genetic search. In combinatorial optimisation problems with many constraints (such as the high-level synthesis problem), most points in the search space often represent invalid chromosomes, and hence have zero \textquotedblleft real\textquotedblright\ value.
%This can also be a problem, since invalide chromosomes can be different located into design spaces, for instance, the first can be closer to valid region then the other one. If the GA sets the fitness value to zero, it is not able to sort them and find the one closer to valid region.
An example of such a problem is the construction of school timetables. For a GA to be effective in this case, a fitness function must be invented in which the fitness of an invalid chromosome reflects how good it is at leading the search towards valid chromosomes. %This, of course, is a Catch-22 situation. 
It must also be known where the valid chromosomes are, so that nearby points can be given good fitness values and far away points poor ones; but if the location of the valid chromosomes is not known, this cannot be done.
Cramer~\cite{Cra85} suggested that if the natural goal of the problem is all-or-nothing, better results can be obtained by inventing meaningful sub-goals and rewarding those. In the timetable problem, for example, a reward might be given for each class which has its lessons allocated in a valid way.
Another approach which has been taken in this situation is to use a penalty function, which represents how \textit{poor} the chromosome is, and construct the fitness as (constant - penalty)~\cite{goldberg}. Richardson et al.~\cite{RPLH89} give some guidelines for constructing penalty functions. They say that those which represent the amount by which the constraints are violated are better than those which are based simply on the number of constraints which are violated. Good penalty functions, they say, can be constructed from the expected completion cost. That is, given an invalid chromosome, how much will it \textquotedblleft cost\textquotedblright to turn it into a valid one? DeJong and Spears~\cite{DS89} describe a method suitable for optimising boolean logic expressions. There is much scope for work in this area.
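The penalty-function construction just described can be made concrete with a small Python sketch. The capacity constraint, the chromosome encoding and the constant are purely illustrative; the point is only that the penalty measures the \textit{amount} by which the constraint is violated, as Richardson et al. recommend, rather than merely counting violated constraints:

```python
# Illustrative sketch: penalty-based fitness of the form (constant - penalty).
# The "capacity" constraint and the constant are hypothetical examples.

def penalty(chromosome, capacity):
    """Amount by which a toy capacity constraint is violated (0 if feasible)."""
    load = sum(chromosome)
    return max(0, load - capacity)

def fitness(chromosome, capacity, constant=100.0):
    """Fitness as (constant - penalty); higher is better."""
    return constant - penalty(chromosome, capacity)

feasible = [2, 2, 2, 2]  # load 8, capacity 10 -> penalty 0
mild = [3, 3, 3, 3]      # load 12 -> penalty 2
severe = [9, 9, 9, 9]    # load 36 -> penalty 26

# A marginally infeasible chromosome scores better than a badly infeasible
# one, guiding the search towards the feasible region.
assert fitness(feasible, 10) > fitness(mild, 10) > fitness(severe, 10)
```

Here an invalid chromosome still receives a graded fitness, so the GA can distinguish near-feasible solutions from hopeless ones.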
\textit{Approximate function evaluation} is a technique which can sometimes be used if the fitness function is excessively slow or complex to evaluate. If a much faster function can be devised which approximately gives the value of the \textquotedblleft true\textquotedblright\ fitness function, the GA may find a better chromosome in a given amount of CPU time than when using the \textquotedblleft true\textquotedblright\ fitness function. If, for example, the simplified function is ten times faster, ten times as many function evaluations can be performed in the same time. An approximate evaluation of ten points in the search space is generally better than an exact evaluation of just one, and a GA is robust enough to converge in the face of the noise introduced by the approximation. This technique, described by Goldberg~\cite{goldberg}, has been further refined to find better ways of adapting the approximate function and bringing it closer to the real one. Sastry et al.~\cite{sastry01dont,sastry01evaluationrelaxation} propose an approach based on fitness inheritance as an evaluation-relaxation scheme. In fitness inheritance, the fitness values of some individuals are inherited from their parents rather than computed through a costly evaluation function, thereby reducing the total function-evaluation cost. They also propose~\cite{sastry2006} an approach based on linear estimation and substructural information (schemata matching).
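A minimal Python sketch of fitness inheritance, under the illustrative assumption that an offspring inherits the mean of its parents' fitness with a fixed probability, could look as follows; \texttt{true\_fitness} and \texttt{p\_inherit} are placeholders, not the actual functions or settings used in the cited works:

```python
import random

# Illustrative sketch of fitness inheritance as an evaluation-relaxation
# scheme: with probability p_inherit an offspring inherits the mean of its
# parents' fitness values instead of paying for a true evaluation.

def true_fitness(x):
    return sum(x)  # stand-in for an expensive evaluation function

def offspring_fitness(child, f_parent_a, f_parent_b,
                      p_inherit=0.5, rng=random):
    """Return (fitness, truly_evaluated) for one offspring."""
    if rng.random() < p_inherit:
        return (f_parent_a + f_parent_b) / 2.0, False  # inherited, cheap
    return true_fitness(child), True                   # costly evaluation

random.seed(0)
evaluated = 0
for _ in range(1000):
    _f, was_true = offspring_fitness([1, 2, 3], 5.0, 7.0)
    evaluated += was_true
# Roughly a fraction (1 - p_inherit) of the costly evaluations remain.
assert 0 < evaluated < 1000
```

The saving in evaluation cost grows with \texttt{p\_inherit}, at the price of noisier fitness information.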

A classical problem with GAs is that the genes from a few comparatively highly fit (but not optimal) individuals may rapidly come to dominate the population, causing it to converge on a local maximum. Once the population has converged, the ability of the GA to continue to search for better solutions is effectively eliminated: crossover of almost identical chromosomes produces little that is new. Only mutation remains to explore entirely new ground, and this simply performs a slow, random search~\cite{Gol89b}.
The schema theorem says that reproductive trials (or opportunities) should be allocated to individuals in proportion to their relative fitness. But when this is done, premature convergence occurs, because the population is not infinite. In order to make GAs work effectively on \textit{finite} populations, the way individuals are selected for reproduction has to be modified.
The basic idea is to control the number of reproductive opportunities each individual gets, so that it is neither too large, nor too small. The effect is to compress the
range of fitnesses, and prevent any \textquotedblleft super-fit\textquotedblright individuals from suddenly taking over.
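One classic way of compressing the range of fitnesses is linear fitness scaling, described by Goldberg~\cite{goldberg}. The sketch below is a simplified version (it does not guard against negative scaled values, which a complete implementation must handle): the scaled fitness $f' = a f + b$ is chosen so that the population average is preserved and the best individual receives at most \texttt{c\_mult} times the average.

```python
# Simplified sketch of linear fitness scaling: preserves the average
# fitness while capping the best individual at c_mult * average, so a
# "super-fit" individual cannot flood the mating pool.

def linear_scale(fitnesses, c_mult=2.0):
    f_avg = sum(fitnesses) / len(fitnesses)
    f_max = max(fitnesses)
    if f_max == f_avg:                 # flat population: nothing to scale
        return list(fitnesses)
    a = (c_mult - 1.0) * f_avg / (f_max - f_avg)
    b = f_avg * (1.0 - a)
    return [a * f + b for f in fitnesses]

raw = [1.0, 2.0, 3.0, 10.0]            # one dominant individual
scaled = linear_scale(raw)
# Average preserved (4.0), best capped at 2 * 4.0 = 8.0 instead of 10.0.
assert abs(sum(scaled) / 4 - 4.0) < 1e-9
assert abs(max(scaled) - 8.0) < 1e-9
```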

Another problem that can arise with a GA is \textit{slow convergence}, the converse of the premature convergence described above. After many generations, the population will have largely converged, but may still not have precisely located the global maximum. The average fitness will be high, and there may be little difference between the best and the average individuals. Consequently, there is an insufficient gradient in the fitness function to push the GA towards the maximum. The same techniques used to combat premature convergence also combat slow finishing: they do so by expanding the effective range of fitnesses in the population. 

\textbf{Parent selection} is the task of allocating reproductive opportunities to each individual. In principle, individuals from the population are copied to a \textquotedblleft mating pool\textquotedblright, with highly fit individuals being more likely to receive
more than one copy, and unfit individuals being more likely to receive no copies. Under a strict generational replacement scheme, the size of the mating pool is equal to the size of the population. After this, pairs of individuals are taken out of the mating pool at random, and mated. This is repeated until the mating pool is exhausted.
The behaviour of the GA very much depends on how individuals are chosen to go into the mating pool. \textit{Fitness ranking} is one of the most commonly employed methods, and it overcomes the reliance on an extreme individual. Individuals are sorted in order of raw fitness, and then reproductive fitness values are assigned according to rank. This may be done linearly~\cite{Bak85} or exponentially~\cite{Dav89}. This method also ensures that the fitnesses of intermediate individuals are regularly spread out. Because of this, the effect of one or two extreme individuals will be negligible, irrespective of how much greater or less their fitness is than the rest of the population. The number of reproductive trials allocated to, say, the fifth best individual will always be the same, whatever the raw fitness values of those above (or below). The effect is that overcompression ceases to be a problem. Srinivas and Deb~\cite{srinivas94multiobjective} propose a method based on ranking individual fitness when a multi-objective problem is involved. This method has since been extended by Deb et al.~\cite{deb00fast}. The resulting algorithm presents some clear advantages with respect to other genetic algorithms, as further described in Section~\ref{state_art::NSGA}.
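A linear ranking scheme in the spirit of Baker~\cite{Bak85} can be sketched as follows; the selection-pressure parameter \texttt{sp} (between 1 and 2) is illustrative. Note how an extreme raw fitness value has no effect on the assigned reproductive fitness:

```python
# Illustrative sketch of linear fitness ranking: reproductive fitness is
# assigned by RANK alone, so one extreme raw value cannot distort selection.
# The average assigned fitness is always 1; the best individual gets sp,
# the worst gets 2 - sp.

def linear_ranking(raw_fitnesses, sp=1.5):
    n = len(raw_fitnesses)
    order = sorted(range(n), key=lambda i: raw_fitnesses[i])
    ranks = [0] * n                     # rank 0 = worst, n - 1 = best
    for rank, i in enumerate(order):
        ranks[i] = rank
    return [2 - sp + 2 * (sp - 1) * r / (n - 1) for r in ranks]

# The outlier 1000.0 gets exactly the same reproductive fitness as a 4.0
# would in its place: only the rank matters.
assert linear_ranking([1.0, 2.0, 3.0, 1000.0]) == \
       linear_ranking([1.0, 2.0, 3.0, 4.0])
```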

\textit{Tournament selection}~\cite{Bri81, GD91} is another technique to perform parent selection. There are several variants. In the simplest, \textit{binary tournament selection}, pairs of individuals are picked at random from the population. Whichever has the higher fitness is copied into a mating pool (and then both are replaced in the original population). This is repeated until the mating pool is full. Larger tournaments may also be used, where the best of several randomly chosen individuals is copied into the mating pool. Using larger tournaments has the effect of increasing the selection pressure, since below average individuals are less likely to win a tournament, while above average individuals are \textit{more} likely to. A further generalisation is probabilistic binary tournament selection, in which the better individual wins the tournament with probability $p$, where $0.5 < p < 1$. Using lower values of $p$ has the effect of decreasing the selection pressure, since below average individuals are comparatively more likely to win a tournament, while above average individuals are less likely to. By adjusting the tournament size or the win probability, the selection pressure can be made arbitrarily large or small.
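Probabilistic binary tournament selection can be sketched in a few lines; the win probability $p = 0.8$ below is only an example:

```python
import random

# Illustrative sketch of probabilistic binary tournament selection: two
# individuals are drawn at random and the fitter one wins with probability
# p (0.5 < p < 1); lowering p lowers the selection pressure.

def binary_tournament(population, fitness, p=0.8, rng=random):
    a, b = rng.sample(population, 2)
    better, worse = (a, b) if fitness(a) >= fitness(b) else (b, a)
    return better if rng.random() < p else worse

def fill_mating_pool(population, fitness, p=0.8, rng=random):
    """Strict generational scheme: pool size equals population size."""
    return [binary_tournament(population, fitness, p, rng)
            for _ in range(len(population))]

random.seed(1)
pop = list(range(10))                    # fitness = the value itself
pool = fill_mating_pool(pop, fitness=lambda x: x)
assert len(pool) == len(pop)
```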

Goldberg and Deb~\cite{GD91} compare different schemes. They conclude that, by suitable adjustment of parameters, all these schemes can be made to give similar performances, so there is no absolute \textquotedblleft best\textquotedblright\ method, even if some of them, under certain conditions, can perform better than others.

%%citazioni da articolo di sastry su evaluation-relaxation
Estimation of distribution algorithms (EDAs)~\cite{eda2002,hboa} replace traditional variation operators of genetic algorithms by building a probabilistic model of promising solutions (that survive
selection) and sampling the corresponding probability distribution to generate the offspring population. The extended compact genetic algorithm (eCGA)~\cite{harik99linkage} uses a product of marginal distributions on a disjoint partition of variables of the problem to model highly-fit individuals and sample new ones. Each partition of variables corresponds to a \textit{linkage group}, so that important substructures can be effectively recombined as in a population-wise building block crossover. In eCGA, new solutions are generated from the following probability distribution:
\begin{equation}
p(X) = \prod_{i=1}^m p(X_{I_i})
\end{equation}
where $X = (X_1, X_2, \dots, X_n)$ is a vector that contains all
the variables of the problem and $I_i$ is the index set containing the indices of the variables that belong to the $i^{th}$ marginal distribution.
This kind of probability distribution belongs to a class of probabilistic models known as marginal product models (MPMs)~\cite{ack87,bertsekas96incremental,barthelemy93,bjor96}.
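As an illustration of sampling from a marginal product model, the following sketch tabulates raw subvector frequencies over given linkage groups from a toy set of selected individuals and samples each group independently. The linkage groups are assumed known here, whereas eCGA actually searches for the best partition with a model-complexity criterion:

```python
import random

# Illustrative sketch: learn an MPM p(X) = prod_i p(X_{I_i}) as raw
# frequencies per linkage group, then sample new individuals group by group.

def learn_mpm(selected, groups):
    """For each index group I_i, tabulate frequencies of observed subvectors."""
    model = []
    for I in groups:
        counts = {}
        for x in selected:
            key = tuple(x[i] for i in I)
            counts[key] = counts.get(key, 0) + 1
        model.append((I, counts))
    return model

def sample(model, n_vars, rng=random):
    """Draw one new individual by sampling each group independently."""
    x = [0] * n_vars
    for I, counts in model:
        keys = list(counts)
        weights = [counts[k] for k in keys]
        chosen = rng.choices(keys, weights=weights, k=1)[0]
        for i, bit in zip(I, chosen):
            x[i] = bit
    return x

selected = [[1, 1, 0, 0], [1, 1, 1, 1], [0, 0, 1, 1]]  # toy "fit" individuals
groups = [[0, 1], [2, 3]]                              # assumed linkage groups
model = learn_mpm(selected, groups)
random.seed(3)
child = sample(model, 4)
# Sampled subvectors only ever reproduce observed group patterns,
# recombining substructures as in a building-block crossover.
assert tuple(child[:2]) in {(1, 1), (0, 0)}
assert tuple(child[2:]) in {(0, 0), (1, 1)}
```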
%(EDA = Estimation of Distribution Algorithm).

Sastry, Pelikan, and Goldberg~\cite{sastryecga2004} and Pelikan and Sastry~\cite{pelikan2004} proposed a fitness inheritance method for EDAs, specifically for eCGA and the Bayesian optimization algorithm (BOA)~\cite{hboa}.


% \subsection{Other optimization methods}
% A number of other general purpose techniques have been proposed for use in connection with search and optimisation problems. Like a GA, they all assume that the problem is de ned by a tness function, which must be maximised (All techniques can also deal with minimisation tasks, but to avoid confusion we will assume, without loss of generality, that maximisation is the aim). There are a great many optimisation techniques, some of which are only applicable to limited domains, for example, \textit{dynamic programming} [Bel57]. This is a method for solving multi-step control problems which is only applicable where the overall fitness function is the sum of the fitness functions for each stage of the problem, and there is no interaction between stages. This is not applicable to high-level synthesis, since it has been already described the high interaction among each stage.

%% libro melanie mitchell
%Mitchell~\cite{mitchell98introduction} ...

%% genetic programming
%Koza~\cite{koza94genetic} ...

%% overview genetici
%Beasley et al.~\cite{beasley93overview} ...

%%################ MULTIOBIETTIVO #####################
%% commento su computazione multiobiettivo

%% confronto sugli algoritmi multiobiettivo citazioni da ziztler
Two major problems must be addressed when an evolutionary algorithm is applied to
multi-objective optimization (see Section~\ref{hls:multi_obj}):
\begin{enumerate}
 \item How to accomplish fitness assignment and selection, respectively, in order to guide the search towards the Pareto-optimal set.
\item How to maintain a diverse population in order to prevent premature convergence and achieve a well distributed trade-off front.
\end{enumerate}
Often, different approaches are classified with regard to the first issue, where one can
distinguish between criterion selection, aggregation selection, and Pareto selection~\cite{horn97}. Methods performing criterion selection switch between the objectives during the selection phase. Each time an individual is chosen for reproduction, potentially a different objective will decide which member of the population will be copied into the mating pool. Aggregation selection is based on the traditional approaches to multiobjective optimization where the multiple objectives are combined into a parametrized single objective function.
The parameters of the resulting function are systematically varied during the same run in order to find a set of Pareto-optimal solutions. Finally, Pareto selection makes direct use of the dominance relation from Definition~\ref{multiobj01}; Goldberg~\cite{goldberg} was the first to suggest a Pareto-based fitness assignment strategy. 
Pareto-based techniques seem to be most popular in the field of evolutionary multi-objective optimization (Van Veldhuizen and Lamont~\cite{vanveldhuizen98multiobjective}). In particular, the algorithm presented by Fonseca and Fleming~\cite{fonsecagenetic}, the Niched Pareto Genetic Algorithm (NPGA) (Horn and Nafpliotis~\cite{horn93multiobjective}; Horn et al.~\cite{npga}), and the Nondominated
Sorting Genetic Algorithm (NSGA) (Srinivas and Deb~\cite{srinivas94multiobjective}, see also Section~\ref{state_art::NSGA}) appear to have achieved
the most attention in the EA literature and have been used in various studies. Furthermore, a recent elitist Pareto-based strategy, the Strength Pareto Evolutionary Algorithm (SPEA) (Zitzler and Thiele~\cite{zitzler99multiobjective}), which outperformed other multi-objective EAs on an extended 0/1 knapsack problem, has to be considered a valid algorithm to use.

Deb~\cite{deb99} has identified several features that may cause difficulties for multi-objective EAs in 1) converging to the Pareto-optimal front and 2) maintaining diversity within the population. Concerning the first issue, multimodality, deception, and isolated optima are well-known problem areas in single-objective evolutionary optimization. The second issue is important in order to achieve a well distributed nondominated front. However, certain characteristics of the Pareto-optimal front may prevent an EA from finding diverse Pareto-optimal solutions: convexity or nonconvexity, discreteness, and nonuniformity. Different studies~(\cite{multiobjective_analysis}) have demonstrated that the NSGA algorithm~\cite{srinivas94multiobjective} (and its further refinement, the NSGA-II algorithm~\cite{deb00fast}) is one of the better algorithms to deal with multi-objective optimization. So, in the following Section, some implementation details are presented to better understand its capabilities.


%%################ ALGORITMI PROPOSTI #####################
%% BOA
%Pelikan et al.~\cite{pelikan99boa} ...

%%################ NSGA #####################
% NSGA/NSGA-II
\subsection{Elitist Nondominated Sorting Genetic Algorithm}\label{state_art::NSGA}
\index{NSGA-II|textbf}
Since evolutionary algorithms (EAs) work with a population of solutions, a simple EA can be extended to maintain a diverse set of solutions. With an emphasis on moving toward the true Pareto-optimal region, an EA can be used to find multiple Pareto-optimal solutions in one single simulation run. The non-dominated sorting genetic algorithm (NSGA) proposed in~\cite{srinivas94multiobjective} was one of the first such EAs. This work has been further extended by Deb et al.~\cite{deb00fast} into the NSGA-II algorithm to better deal with multi-objective optimization, for instance with a different and faster algorithm for individual sorting, presented in Section~\ref{nsga:fast_sorting}, and a mechanism to preserve diversity in the population, presented in Section~\ref{nsga:crowded_distance}.

\subsubsection{Fast Nondominated Sorting algorithm}\label{nsga:fast_sorting}

In order to rank a population of size $N$ according to the level of non-domination, each solution must be compared with every other solution in the population to find if it is dominated. This requires $O(MN)$ comparisons for each solution, where $M$ is the number of objectives. When this process is continued to find all members of the first non-dominated class for all population members, the total complexity is $O(MN^2)$. At this stage, all individuals in the first non-dominated front are found. In order to find the individuals of the next front, the solutions of the first front are temporarily discounted and the above procedure is performed again. The procedure is repeated to find the subsequent fronts. In the worst case (when there exists only one solution in each front), the complexity of the algorithm without any book-keeping is $O(MN^3)$. The NSGA-II algorithm presents a fast non-dominated sorting approach which requires at most $O(MN^2)$ computations (see Fig.~\ref{nsga:code}).

First, two entities are calculated for each solution: 1) the domination count $n_p$, the number of solutions which dominate the solution $p$, and 2) $S_p$, the set of solutions that the solution $p$ dominates. This requires $O(MN^2)$ comparisons.
All solutions in the first nondominated front have a domination count of zero. Now, for each solution $p$ with $n_p = 0$, each member $q$ of its set $S_p$ is visited and its domination count is reduced by one. In doing so, if for any member $q$ the domination count becomes zero, it is put in a separate list $Q$. These members belong to the second nondominated front. The above procedure is then continued with each member of $Q$, and the third front is identified. This process continues until all fronts are identified.

For each solution $p$ in the second or higher level of nondomination, the domination count $n_p$ can be at most $N - 1$. Thus, each solution $p$ will be visited at most $N - 1$ times before its domination count becomes zero. At this point, the solution is assigned a nondomination level and will never be visited again. Since there are at most $N - 1$
such solutions, the total complexity is $O(N^2)$. Thus, the overall complexity of the procedure is $O(MN^2)$.
It is important to note that although the time complexity has reduced to $O(MN^2)$, the storage requirement has increased to $O(N^2)$.

\begin{figure}
 \linespread{1}
\begin{footnotesize}
\underline{fast-non-dominated-sort($P$)}
\begin{algorithmic}[0]
  \FORALL {$p\in P$}
  \STATE $S_p = \emptyset$
  \STATE $n_p$ = 0
    \FORALL{$q \in P$}
      \IF {$p\prec q$}
        \STATE $S_p = S_p \cup \{q\}$ \COMMENT{Add \textit{q} to the set of solutions dominated by \textit{p}}
        \ELSE
           \IF {$q\prec p$}
              \STATE $n_p = n_p + 1$ \COMMENT{Increment the dominator counter of \textit{p}}
           \ENDIF
        \ENDIF
     \ENDFOR
     \IF {$n_p = 0$}
        \STATE $p_{rank} = 1$          \COMMENT{\textit{p} belongs to the first front}
        \STATE $\mathcal{F}_1 = \mathcal{F}_1 \cup \{p\}$
     \ENDIF
  \ENDFOR
  \STATE $i = 1$
  \WHILE {$\mathcal{F}_i \neq \emptyset$}
     \STATE $Q = \emptyset$ \COMMENT{Used to store the members of the next front}
     \FORALL {$p\in \mathcal{F}_i$}
        \FORALL {$q\in S_p$}
           \STATE $n_q = n_q - 1$
           \IF {$n_q = 0$}
              \STATE $q_{rank} = i + 1$ \COMMENT{\textit{q} belongs to the next front}
              \STATE $Q = Q \cup \{q\}$
           \ENDIF
        \ENDFOR
     \ENDFOR
     \STATE $i = i + 1$
     \STATE $\mathcal{F}_i = Q$
  \ENDWHILE
\end{algorithmic}
\end{footnotesize}
\linespread{1.3}
\caption{Pseudocode for the \textit{fast-non-dominated-sort} algorithm}\label{nsga:code}
\end{figure}
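The procedure of Fig.~\ref{nsga:code} can be transcribed almost line by line into Python. The sketch below assumes minimization of all objectives and, for simplicity, returns the fronts as lists of population indices rather than assigning explicit rank fields:

```python
# Illustrative transcription of fast-non-dominated-sort (minimization).
# Solutions are plain tuples of objective values.

def dominates(p, q):
    """p dominates q: no worse in every objective, better in at least one."""
    return (all(a <= b for a, b in zip(p, q)) and
            any(a < b for a, b in zip(p, q)))

def fast_non_dominated_sort(pop):
    S = [[] for _ in pop]          # S[p]: indices of solutions dominated by p
    n = [0] * len(pop)             # n[p]: number of solutions dominating p
    fronts = [[]]
    for i, p in enumerate(pop):
        for j, q in enumerate(pop):
            if dominates(p, q):
                S[i].append(j)
            elif dominates(q, p):
                n[i] += 1
        if n[i] == 0:              # i belongs to the first front
            fronts[0].append(i)
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:
            for j in S[i]:
                n[j] -= 1
                if n[j] == 0:      # j belongs to the next front
                    nxt.append(j)
        k += 1
        fronts.append(nxt)
    return fronts[:-1]             # drop the trailing empty front

pop = [(1, 5), (2, 3), (4, 1), (3, 4), (5, 5)]
fronts = fast_non_dominated_sort(pop)
# (1,5), (2,3) and (4,1) are mutually nondominated, hence the first front.
assert fronts[0] == [0, 1, 2]
```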

\subsubsection{Elitism and crowded-comparison operator}\label{nsga:crowded_distance}
An important feature of NSGA-II is the elitist mechanism to preserve diversity in the population. 
In fact, besides convergence to a Pareto-optimal set, it is also desirable that 
the genetic algorithm maintains a good diversity among the obtained solutions. 
Hence, NSGA-II implements a \textit{crowded-comparison} operator based on a density 
estimation. From the density estimation, a new value is associated with each 
solution: the crowding distance, based on an estimate of the perimeter of the cuboid 
formed by using the nearest neighbors as vertices. So, each solution has two associated 
values: a non-domination rank and the crowding distance. 
This allows the algorithm to rank solutions also inside a non-dominated level: 
if two solutions have the same rank, they belong to the same front, and the solution 
located in the less crowded region is preferred by the selection process.
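A common form of the crowding-distance computation for one nondominated front, in the spirit of the NSGA-II paper, can be sketched as follows; for each objective, the front is sorted, the boundary solutions receive infinite distance, and the interior solutions accumulate the normalized side lengths of the cuboid through their nearest neighbors:

```python
# Illustrative sketch of the NSGA-II crowding-distance assignment for a
# single front of objective-value tuples.

def crowding_distance(front):
    """Return one crowding distance per solution in the front."""
    n = len(front)
    m = len(front[0])
    dist = [0.0] * n
    for obj in range(m):
        order = sorted(range(n), key=lambda i: front[i][obj])
        f_min = front[order[0]][obj]
        f_max = front[order[-1]][obj]
        dist[order[0]] = dist[order[-1]] = float("inf")  # boundary solutions
        if f_max == f_min:
            continue                                     # degenerate objective
        for k in range(1, n - 1):
            i = order[k]
            dist[i] += (front[order[k + 1]][obj] -
                        front[order[k - 1]][obj]) / (f_max - f_min)
    return dist

front = [(1.0, 5.0), (2.0, 3.0), (4.0, 2.0), (5.0, 1.0)]
d = crowding_distance(front)
assert d[0] == d[3] == float("inf")   # boundary solutions always kept
assert d[1] > d[2]                    # (2,3) sits in a less crowded region
```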

\subsubsection{Constraint Handling}
In the presence of constraints, solutions may be feasible or 
infeasible; a solution is infeasible if it violates at least one constraint. 
So, a different definition of \textit{domination} has to be considered to let such 
solutions be compared. The NSGA-II definition of \textit{domination} between solutions \textit{i} and \textit{j} has been applied:
\begin{definition}
 A solution \textit{i} is said to constrain-dominate a solution \textit{j}, 
 if any of the following conditions is true:
\begin{enumerate}
 \item Solution \textit{i} is feasible and solution \textit{j} is not.
 \item Solution \textit{i} and solution \textit{j} are both infeasible, 
 but solution \textit{i} has a smaller overall constraint violation.
 \item Solution \textit{i} and solution \textit{j} are both feasible, 
 and solution \textit{i} dominates solution \textit{j} in the usual Pareto sense.
\end{enumerate}
\end{definition}
The effect of using this definition is that any feasible solution has a better 
non-domination rank than any infeasible solution. All feasible solutions are 
ranked according to their objective function values, while, among infeasible ones, 
the solution with the smaller violation has the better rank. This is useful because an 
infeasible solution that violates a constraint only marginally is classified in a 
better non-dominated level. As a result, the algorithm moves quickly through the infeasible 
search region towards the feasible one.
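The constrain-domination test can be sketched directly. Following the NSGA-II formulation, feasibility is checked first, then the overall constraint violation, and ordinary Pareto dominance only when both solutions are feasible; the violation is assumed to be precomputed as a nonnegative number, with zero meaning feasible:

```python
# Illustrative sketch of NSGA-II constrain-domination (minimization).

def dominates(p, q):
    """Ordinary Pareto dominance on objective tuples (minimization)."""
    return (all(a <= b for a, b in zip(p, q)) and
            any(a < b for a, b in zip(p, q)))

def constrain_dominates(obj_p, viol_p, obj_q, viol_q):
    """viol_* is the overall constraint violation (0 means feasible)."""
    if viol_p == 0 and viol_q > 0:       # 1) feasible beats infeasible
        return True
    if viol_p > 0 and viol_q > 0:        # 2) smaller violation wins
        return viol_p < viol_q
    if viol_p == 0 and viol_q == 0:      # 3) both feasible: Pareto dominance
        return dominates(obj_p, obj_q)
    return False

# A mildly infeasible solution ranks above a badly infeasible one:
assert constrain_dominates((9, 9), 0.1, (1, 1), 5.0)
# Any feasible solution ranks above any infeasible one:
assert constrain_dominates((9, 9), 0.0, (1, 1), 5.0)
```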


%%###########################################
%%############# EC & HLS ####################
\section{Genetic Algorithms and High-Level Synthesis}\label{state_art::HLS_EC}

Evolutionary algorithms are thus good candidates for high-level synthesis because they iteratively 
improve a set of solutions, they do not require the quality (cost) function to be linear (e.g.\ the 
time-area product), and they are known to work well on problems with large and non-convex search 
spaces. To deal with multiple objectives, a single weighted cost function has proved not to 
be useful, because the convergence of the algorithm depends on the weight assigned to each cost term 
rather than on the nature of the problem. Some implementations of genetic algorithms used to solve the high-level synthesis problem are now presented. For each one, the most important details and features are described to better understand how they tackle the problem.

%% harmanani e saliba
Harmanani and Saliba~\cite{saliba} use a genetic approach to solve the allocation problem. This work starts from a scheduled DFG\index{Data Dependence Graph}, so it does not take into account control constructs such as IF statements, and it needs a previous scheduling step, which is accomplished using the force-directed algorithm~\cite{forcedirected}\index{Scheduling!force directed}. The chromosome encoding is a representation of an intermediate datapath solution. It is a vector where each element is a node of the DFG; the element also contains information about the clock cycle in which the operation is executed (given by the scheduling already performed), the hardware unit it is mapped to, and the corresponding storage and interconnection elements. Then, during each generation, to reduce the datapath cost, two or more nodes may be merged to share resources, provided that two conditions are satisfied:
\begin{itemize}
  \item There is no conflict in the use of the two operators (e.g.\ they do not require the same functional unit in the same control step); this holds when they have been assigned to different time steps in the schedule.
 \item The merger of the nodes results in a module that is in the resource library; otherwise, the merging cannot take place.
\end{itemize}
If these two conditions are satisfied, the nodes are called \textit{compatible} and they can be merged. Since scheduling has already been performed but operations can change the unit they are mapped to, all hardware units are assumed to have the same execution time. In fact, if an operation were mapped to a unit with an execution time different from the previous one, the scheduling could change, which cannot happen in this formulation. To explore and exploit the design space, the usual genetic operators are applied (crossover and mutation) and then a heuristic is used to avoid infeasible solutions.
It is a time-constrained algorithm: the scheduling is given, and it tries to minimize the use of resources, with a cost function represented as the sum of the datapath size and the controller size. The datapath size is the sum of the costs of the individual elements. The controller is modeled as a PLA, so its size is estimated from the width and the height of the PLA. This algorithm depends so heavily on the scheduling that it needs special operators to check whether the merging of nodes leads to feasible or infeasible solutions, which can take considerable execution time on large examples. Moreover, the estimation of the datapath is based on the area that each component requires when synthesized standalone, without any information about the packing that synthesis tools can perform on an FPGA design, even if this may be acceptable for an ASIC design. The fitness of a solution $i$ is defined as:
\begin{equation}
 fitness(i) = \frac{1}{\alpha \cdot A_{datapath} + (1 - \alpha) \cdot A_{controller}}
\end{equation}
where $\alpha$ is a parameter that controls the desired trade-off between datapath and controller.
Since scheduling is provided, the goal is only to reduce total area.

%% papa and silc - multiobiettivo
Papa and \v{S}ilc~\cite{papasilc} propose a multi-objective genetic algorithm to solve the scheduling\index{Scheduling!genetic} problem with respect to allocation. This work is one of the earliest presenting a multi-objective approach. The algorithm, called MOGS (Multi-Objective Genetic Scheduling), considers different schedules and their impact on the allocation process. Even though it is presented as a multi-objective algorithm, the cost function is one-dimensional, with the different criteria combined by weights.

%% mandal
Mandal et al. present two genetic algorithms~\cite{mandal96design,gabind} to approach the high-level synthesis problem.
The first one~\cite{mandal96design} is a scheduling algorithm that tries to 
minimize area cost and design latency. 
The second one~\cite{gabind} is an allocation and binding algorithm that works on a 
scheduled graph. 
They consider the two phases, and hence the two problems, separately, even if the two algorithms are able to work together, with the allocation algorithm~(\cite{gabind}) applied to the results produced by the other one~(\cite{mandal96design}).

%% grewel
Grewal et al.~\cite{Hierarchical} implement a hierarchical genetic algorithm, where genetic module allocation sits at the top level, followed, at a lower level, by genetic scheduling. This algorithm is able to explore a wider region of the design space, at the price of a longer execution time, since genetic algorithms are known to be slower than the usual heuristic algorithms.

%% system-level synthesis
%Blickle et al.~\cite{blickle96systemlevel} use an evolutionary approach to system-level synthesis.

%% uso delle BNF per generare direttamente HDL
Ara\'ujo et al.~\cite{araujo} use a genetic programming approach, where solutions are represented as tree productions (rephrased rules of the HDL grammar), to directly create SFL (Structured Function description Language) programs. This algorithm is interesting since it can directly produce a synthesizable design, but genetic programming, being based on the manipulation of trees (see Koza~\cite{koza94genetic} for further details), is more complicated to implement, and the results are poor with respect to the supported features.

%\ \\
\section{Conclusions}

% ######## CONCLUSIONE DEL CAPITOLO.... da sistemare ###############
In this Chapter, the most relevant works on the high-level synthesis process have been introduced and analysed. Different algorithms have been examined to better understand which are best suited for each sub-task of the problem. Then, different implementation details of genetic algorithms have been presented and motivated. Multi-objective optimization algorithms are the class of algorithms that best matches the high-level synthesis problem; in particular, the NSGA-II algorithm has proved to be a good candidate to solve the design space exploration problem when multi-objective optimization is involved. Finally, some works that apply evolutionary computation to the high-level synthesis problem have been surveyed to better understand the current state of the art and its perspectives.