\chapter{Preliminaries}\label{preliminaries}
\markboth {Chapter \ref{preliminaries}. Preliminaries}{}

\begin{flushright}
\sl
What are you working for? I maintain that the only purpose of science is to ease
the hardship of human existence.
\end{flushright}

\begin{flushright}
\sl
B. Brecht, \textit{Leben des Galilei}
\end{flushright}
\par\vfill\par

Forty years ago, Electronics Magazine asked Intel co-founder Gordon Moore to 
write an article summarizing the state of the electronics industry. While 
writing the article, Moore noted that the number of devices (which then 
included transistors and resistors) inside chips was doubling every year, 
largely because engineers could shrink the size of transistors. This meant 
that the performance and capabilities of semiconductors were growing exponentially 
and would continue to do so. In 1975, Moore amended the law to state that the number 
of transistors doubled about every 24 months (see Fig.~\ref{fig:moore}).

\begin{figure}[ht]
\centering
\begin{minipage}[l]{0.4\textwidth}
\includegraphics[width=\columnwidth]{./chapters/hls/images/moore.jpg}
\end{minipage}
\caption[Moore's law original sketch]{The sheet of graph paper that shook the world. While 
writing the article for Electronics Magazine, Moore sketched out his prediction 
of the pace of silicon technology. Credit: Intel}\label{fig:moore}
\end{figure}

Very-large-scale integration (VLSI) is the process of creating integrated 
circuits (IC) by combining thousands of transistor-based circuits into a single 
chip. VLSI began in the 1970s when complex semiconductor and communication 
technologies were being developed.
Modern ICs are enormously complicated (see Fig.~\ref{fig:4004} and Fig.~\ref{fig:pentium4}). Within a few years, a large chip may well have 
more transistors than there are people on Earth: VLSI technology provides densities of several million logic gates per chip. Furthermore, the rules for what can and cannot be 
manufactured are also extremely complex: an integrated circuit process as of 
2006 may well have more than 600 rules.
% Furthermore, since the manufacturing process itself is not completely predictable 
% designers must account for its statistical nature. 

Chips of such complexity are very difficult, if not impossible, to design using the 
traditional \textit{capture-and-simulate} design methodology. Moreover, time 
to market is usually as important as area or speed, if not more so. 
The industry has therefore started looking at the product development cycle comprehensively, 
to reduce the design time and to gain a competitive edge in the time-to-market 
race. 
% The complexity of modern IC design, as well as market pressure to produce designs 
% rapidly, has led to the extensive use of automated design tools in the IC design process.
As the complexity of chips increases, so will the need for design automation 
at higher levels of abstraction, where functionality is easier to understand and 
trade-offs are more influential. There are several advantages to automating part 
or all of the design process and moving automation to higher levels. First, 
automation ensures a much shorter design cycle. Second, it allows more exploration 
of different design styles, since different designs can be generated and evaluated 
quickly. Finally, design automation tools may out-perform average human designers 
in meeting most design constraints and requirements.
Computer-aided tools provide an effective means for designing microelectronic circuits
that are economically viable products. Synthesis techniques speed up the design cycle
and reduce the human effort. Optimization techniques enhance the design quality. At
present, synthesis and optimization techniques are used for most digital circuit designs.
Nevertheless, their power is not yet exploited in full and much of the work is still done 
by hand. 

\textit{Synthesis} is the generation of a circuit model, starting
from a less detailed one. Models can be classified in terms of levels of abstraction
and views (see Fig.~\ref{fig:flow}). We consider here three main abstractions, namely: \textit{architectural}, \textit{logic} and
\textit{geometrical}. The levels can be visualized as follows. At the \textit{architectural level} a circuit
performs a set of operations, such as data computation or transfer. At the \textit{logic level},
a digital circuit evaluates a set of logic functions. At the \textit{geometrical level}, a circuit
is a set of geometrical entities. 

\begin{figure}[t]
\centering
%\begin{minipage}[c]{0.4\textwidth}
\includegraphics[height=0.4\columnwidth]{./chapters/hls/images/flusso.jpg}
%\end{minipage}
\caption{Synthesis flow from behavioral specification to physical design}\label{fig:flow}
\end{figure}
The architectural-level synthesis consists of generating a structural view of an architectural-level model. This corresponds to determining an assignment of the circuit
functions to operators, called resources, as well as their interconnection and the
timing of their execution. It has also been called \textit{high-level synthesis} or \textit{structural synthesis}, because it determines the macroscopic (i.e., block-level) structure of the circuit. A behavioral architectural-level model can be abstracted as a set of operations and dependencies. Architectural synthesis entails identifying the hardware resources that can implement the operations, scheduling the execution time of the operations and binding them to the resources. In other words, synthesis defines a structural model of a data path, as an interconnection of resources, and a logic-level model of a control unit, that issues the control signals to the data path according to the schedule.
After the architectural-level synthesis, the \textit{logic-level synthesis} step has to be performed. Logic-level synthesis is the task of generating a structural view of a logic-level
model. Logic synthesis is the manipulation of logic specifications to create logic
models as an interconnection of logic primitives. Thus logic synthesis determines
the microscopic (i.e., gate-level) structure of a circuit. The task of transforming a
logic model into an interconnection of instances of library cells, i.e., the back end
of logic synthesis, is often referred to as library binding or technology mapping.
A logic-level model of a circuit can be provided by a state transition diagram of a finite-state machine, by a circuit schematic or equivalently by an HDL model. It may be specified by a designer or synthesized from an architectural-level model.
The logic synthesis tasks may be different according to the nature of the circuit
(e.g., sequential or combinational) and to the initial representation (e.g., state diagram
or schematic). Since a circuit admits many possible configurations, optimization plays
a major role, in connection with synthesis, in determining the microscopic figures
of merit of the implementation. The final outcome of logic synthesis is a fully structural representation, such as a gate-level netlist.
The final step is the \textit{geometrical-level synthesis}, that consists of creating a physical view at the geometric level. It entails the specification of all geometric patterns defining the physical
layout of the chip, as well as their position. It is often called physical design, and
we shall call it so. Physical design consists of generating the layout of the chip.
The layers of the layout are in correspondence with the masks used for chip fabrication.
Therefore, the geometrical layout is the final target of microelectronic circuit design.
Physical design depends much on the design style. On one end of the spectrum, for
custom design, physical design is handcrafted by using layout editors. This means
that the designer renounces the use of automated synthesis tools in the search for
optimizing the circuit geometries by fine hand-tuning. On the opposite end of the
spectrum, in case of prewired circuits, physical design is performed in a virtual
fashion, because chips are fully manufactured in advance. 
% Instead, chip personalization
% is done by a fuse map or by a memop map. 
The major tasks in physical design are placement and wiring, called also routing. Cell generation is essential in the particular case of macro-cell design, where cells are synthesized and not extracted from a library.

Logic-level and physical synthesis steps have already been largely automated; for instance, logic synthesis is performed well by the tools that Altera~\cite{Altera} and Xilinx~\cite{Xilinx} provide for FPGA design. The main problem is that these tools require, as input, an RTL design described in a hardware description language. A further step in design automation is therefore to develop tools able to bridge the gap between behavioral specification and RTL design. Such tools have to produce an RTL design in a reasonably short time while respecting the design constraints. Besides, they have to explore larger and larger regions of the design space to find better and better solutions with respect to the design goals. This is why \textit{high-level synthesis} has been a very hot research topic over the past two decades. 

However, the design space is too large for any exact method to handle the problem efficiently. Genetic algorithms, with their \textit{quasi-random} search, are a good candidate to tackle such complex explorations.
The remainder of this chapter is organized as follows. In Section~\ref{hls:hls}, the high-level synthesis problem is introduced and described. In Section~\ref{hls:ga}, genetic algorithms are presented, to provide a better understanding of their implementation and of their later use.

\begin{figure}[t!]
\begin{minipage}[l]{0.5\textwidth}
\includegraphics[width=\columnwidth,height=1.2\columnwidth]{./chapters/hls/images/4004.jpg}
\caption[Intel 4004, 1971]{Intel's first micro-processor, the 4004, appeared in 1971 and powered calculators. It featured 2,300 transistors. Credit: Intel}\label{fig:4004}
\end{minipage}
~
\begin{minipage}[l]{0.5\textwidth}
\includegraphics[width=\columnwidth,height=1.2\columnwidth]{./chapters/hls/images/pentium4.jpg}
\caption[Intel Pentium 4, 2000]{The Pentium 4, which debuted in 2000, sported 42 million transistors. Dual-core Itaniums have more than a billion. Credit: Intel}\label{fig:pentium4}
\end{minipage}
\end{figure}


\section{High-Level Synthesis}\label{hls:hls}

\textbf{High-Level Synthesis} (HLS) is defined as a translation process from 
a behavioural description into a register-transfer-level (RTL) structural description (see Fig.~\ref{fig:hls_rtl}). It can be considered the automation of the first step of the design flow, the architectural-level synthesis.

\begin{figure}[ht]
\centering
\begin{minipage}[l]{0.25\textwidth}
\includegraphics[width=\columnwidth]{./chapters/hls/images/hls.jpg}
\end{minipage}
\caption{Translation from behavioral specification to RTL design}\label{fig:hls_rtl}
\end{figure}

The inputs to a typical HLS tool include a \textit{behavioral description}, 
a \textit{resource library} describing available hardware resources and a set 
of \textit{design constraints}.

The output from a high-level synthesizer consists of two parts: a datapath 
structure at the register-transfer level (RTL) and a specification of the finite 
state machine to control the datapath. At the RTL level, a datapath is composed 
of functional units, storage and interconnection elements. The finite state 
machine specifies the set of microoperations to be performed by the datapath 
during each control step. The output can then be synthesized using related tools.

The goal is to optimize one or more design objectives, such as minimizing the 
total occupied area, the latency or the power consumption.

\textit{Behavioral description} specifies behaviour in terms of operations, 
assignment statements, and control constructs in a common high-level language 
(e.g. C language). 

In the \textit{resource library}, there may be several alternatives 
among which the synthesizer must select the one that best matches the design 
constraints and maximizes the optimization objective. 

The \textit{constraints} can be on the maximum number of available units of each resource, 
on the total area or on the latency of the execution of the specification.

There are two main tasks in high-level synthesis: operation scheduling and resource 
allocation. \textit{Scheduling}\index{Scheduling} determines the control steps in which 
operations start their execution. \textit{Resource allocation}\index{Allocation} 
is concerned with assigning operations and values to hardware components and 
interconnecting them using connection elements. Solving these problems efficiently is 
a non-trivial matter because of their NP-complete nature~\cite{np_complete}. Furthermore, the objectives are usually conflicting, and the 
best high-level synthesis design flow cannot be known a priori, since it depends heavily 
on the nature of the problem: on some examples, executing scheduling before allocation leads to better results; on others, it leads to worse ones.

\subsection{Behavioral specification}\label{hls::IR}
A behavioral specification is used as the input for the high-level synthesis flow. 
It specifies behavior in terms of operations, assignment statements, and control 
constructs in a common software programming language (e.g., the C language). A description 
in a hardware description language (e.g., VHDL or Verilog) can also be used, since 
it allows timing and concurrency to be expressed in hardware, but it requires a translation 
from the initial programming representation into this hardware description one. 
%This work extracts concurrency by itself analysing dependences between operations.

An \textbf{intermediate representation} (IR) is a representation that is 
independent of the details of the source and target languages. Therefore, all transformations can be applied to this representation without any modifications due to language-specific details. This approach is similar to the one used by compilers oriented to software production, and the architecture can be similar as well (see Fig.~\ref{fig:compiler}).

\begin{figure}[ht]
\centering
\begin{minipage}[l]{0.7\textwidth}
\includegraphics[width=\columnwidth]{./chapters/hls/images/compiler.jpg}
\end{minipage}
\caption[Compilers for software and hardware]{Analogies between compilers oriented to software synthesis (a) and hardware synthesis (b)}\label{fig:compiler}
\end{figure}

The commonly used intermediate representation is composed of several graphs, for instance the control flow graph (CFG) and the data flow graph (DFG), where all data dependences among operations are stored.
% The front end used is a kind of compiler that reads intermediate representation 
% of GCC dump. It builds, 
The graphs are defined as follows:


\begin{itemize}
 \item The vertices $v \in V_o$ are the operations that have to be executed in the behavioral 
specification;
 \item The edges $e \in E$ describe relations between source operations and target 
ones. Two vertices $v_1$ and $v_2$ are connected by a directed edge $e$ in 
graph $G$ if the two operations are related by the property that graph $G$ represents.
\end{itemize}
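As an illustrative sketch (the class and vertex names are hypothetical, not taken from any particular tool), such a graph $G(V_o, E)$ can be held as a simple adjacency structure:

```python
# Minimal sketch of a dependence graph G(V_o, E); all names are illustrative.
class DependenceGraph:
    def __init__(self):
        self.vertices = set()   # operations of the behavioral specification
        self.edges = set()      # directed (source, target) relations

    def add_edge(self, v1, v2):
        """Connect v1 -> v2 when the two operations are related."""
        self.vertices |= {v1, v2}
        self.edges.add((v1, v2))

    def successors(self, v):
        return {t for (s, t) in self.edges if s == v}

g = DependenceGraph()
g.add_edge("mul1", "add1")   # add1 depends on the result of mul1
g.add_edge("add1", "sub1")
```

Each specialized graph (CFG, DFG, and so on) then differs only in the property that its edges represent.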

\begin{definition}\label{hls:operation_type}
 \textbf{Operation type}: the \textnormal{operation type} function $\tau : V_o\rightarrow \Pi(T)$ determines for each operation vertex $v \in V_o$ the operation type that it represents in the behavioral specification.
\end{definition}


\noindent Now, the most important graphs will be described. First, the Control 
Flow Graph will be described (see Section~\ref{hls:cfg}) with its transformation into 
the Control Dependence Graph (see Section~\ref{hls:cdg}). Then the Data Flow Graph will be 
presented (see Section~\ref{hls:dfg}) and finally the System Dependence Graph, the data 
structure resulting on merging CDG and DFG, will be introduced (see Section~\ref{hls:sdg}).


\subsubsection{Control Flow Graph}\label{hls:cfg}
\index{Control Flow Graph|textbf} The \textit{Control Flow Graph} (CFG) is a data 
structure widely used by compilers. It is an abstract representation of a program. 
Each vertex of the graph is an operation, and branches in the control flow are 
represented by directed edges. There are also two special vertices: the ENTRY vertex (where 
control enters the flow) and the EXIT vertex (where the flow ends). This is 
a static representation: it only represents the different
control flows present in the behavioral specification (e.g., see Fig.~\ref{hls::cfg_example}).

\begin{figure}[bt!]
\begin{center}
  \begin{minipage}[c]{.35\textwidth}
  \begin{small}
  b := i * 2; \\
  c := a + b; \\
  \textbf{if} (a $<$ b) \textbf{then} \\
  \ \ \ \ d := 1 - c;\\
  \textbf{else}\\
  \ \ \ \ d := c / 2;\\
  \textbf{endif}\\
  d := d + a;
  \end{small}
  \end{minipage}
  \begin{minipage}[c]{.10\textwidth}

  \end{minipage}
  \begin{minipage}[c]{.35\textwidth}
    \centering
    \includegraphics[width=0.125\textheight]{./chapters/hls/cfg}
  \end{minipage}
\end{center}
  \caption{Control Flow Graph example}\label{hls::cfg_example}
\end{figure}

\subsubsection{Control Dependence Graph}\label{hls:cdg}
\index{Control Dependence Graph|textbf}A \textit{Control Dependence Graph} (CDG) 
is a directed graph $G$ where each node represents an operation in the behavioral 
specification. It represents control dependencies among operations, i.e., by 
which operation the execution of each operation is controlled.
Before defining the CDG, the \textit{post-dominator} definition has to be presented.

\begin{definition}
\textbf{Post-domination}: a node $V$ is \textnormal{post-dominated} by a node $W$ 
in graph $G$ if every directed path from $V$ to $EXIT$ node (not including $V$) 
contains $W$.
\end{definition}

Note that this definition of post-dominance does not include the initial node on 
the path. In particular, a node never post-dominates itself.

\begin{definition}
\textbf{Control dependence}: let $G$ be a control flow graph. Let $X$ and $Y$ be 
nodes in $G$. $Y$ is control dependent on $X$ if and only if:
\begin{itemize}
 \item there exists a directed path $P$ from $X$ to $Y$ with every $Z$ in $P$ 
(excluding $X$ and $Y$) post-dominated by $Y$, and
 \item $X$ is not post-dominated by $Y$.
\end{itemize}
\end{definition}

If $Y$ is control dependent on $X$, then $X$ must have two exits. Following one of 
the exits from $X$ always results in $Y$ being executed, while taking the other exit 
may result in $Y$ not being executed. \textit{Condition 1} can be satisfied by a 
path consisting of a single edge. \textit{Condition 2} is always satisfied when 
$X$ and $Y$ are the same node (e.g., see Fig.~\ref{hls::dfgcdg_example}.a).
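The two conditions above can be checked mechanically. The following sketch (node names are illustrative and follow the if-then-else of the running example) computes reflexive post-dominator sets by fixed-point iteration and then derives the control dependences:

```python
# Post-dominators and control dependence on a small CFG; node names are
# illustrative. succ maps each node to the list of its CFG successors.
def post_dominators(succ, exit_node):
    """Reflexive post-dominator sets, computed by fixed-point iteration."""
    nodes = set(succ) | {exit_node}
    pdom = {n: set(nodes) for n in nodes}
    pdom[exit_node] = {exit_node}
    changed = True
    while changed:
        changed = False
        for n in nodes - {exit_node}:
            succs = succ.get(n, [])
            if not succs:
                continue
            new = {n} | set.intersection(*(pdom[s] for s in succs))
            if new != pdom[n]:
                pdom[n] = new
                changed = True
    return pdom

def control_dependences(succ, exit_node):
    """Pairs (Y, X) such that Y is control dependent on X."""
    pdom = post_dominators(succ, exit_node)
    deps = set()
    for x, succs in succ.items():
        if len(succs) < 2:            # X must have at least two exits
            continue
        for s in succs:
            for y in pdom[s]:         # condition 1: Y post-dominates the
                if y != exit_node and y not in pdom[x] - {x}:  # path via s;
                    deps.add((y, x))  # condition 2: Y does not strictly
    return deps                       # post-dominate X

# CFG of:  if (a < b) then d := 1 - c else d := c / 2
cfg = {"if": ["then", "else"], "then": ["join"],
       "else": ["join"], "join": ["EXIT"]}
deps = control_dependences(cfg, "EXIT")
```

On this CFG, both branch bodies come out control dependent on the conditional, while the join node does not, matching the intuition above.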

\subsubsection{Data Dependence Graph}\label{hls:dfg}
\index{Data Dependence Graph|textbf}A \textit{Data Flow} language or architecture 
executes a computation only when all its operands are available. It is a technique 
that makes it possible to specify parallel computation at a very low level, usually in a 
two-dimensional graph representation, where instructions that can be computed 
simultaneously are ordered horizontally and sequential ones are ordered vertically. Data 
dependences between operations are represented by directed edges. Operations do not 
refer to memory accesses, since instructions allow data to be transmitted directly 
from source operations to target ones. Hence a node that defines a variable has 
outgoing edges to all the nodes that use that variable (e.g., see 
Fig.~\ref{hls::dfgcdg_example}.b).
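As a sketch of how such def-use edges arise, the following hypothetical helper scans a straight-line sequence of assignments (part of the running example) and links each definition to its later uses:

```python
# Def-use edges for a straight-line fragment of the example: each statement
# defining a variable gets an outgoing edge to every later use of it.
def data_flow_edges(stmts):
    """stmts: list of (target, operands) pairs; returns directed def-use
    edges between statement indices."""
    last_def = {}       # variable -> index of its most recent definition
    edges = set()
    for i, (target, operands) in enumerate(stmts):
        for op in operands:
            if op in last_def:
                edges.add((last_def[op], i))
        last_def[target] = i
    return edges

stmts = [("b", ["i"]),        # b := i * 2
         ("c", ["a", "b"]),   # c := a + b
         ("d", ["c"]),        # d := 1 - c
         ("d", ["d", "a"])]   # d := d + a
edges = data_flow_edges(stmts)   # {(0, 1), (1, 2), (2, 3)}
```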

\begin{figure}[t!]
\begin{center}
  \begin{minipage}[c]{.40\textwidth}
    \centering
    \includegraphics[width=0.3\textheight]{./chapters/hls/cdg.jpg}
  \end{minipage}
~
  \begin{minipage}[c]{.40\textwidth}
    \centering
    \includegraphics[width=0.175\textheight]{./chapters/hls/data.jpg}
  \end{minipage}\\
  \begin{minipage}[c]{.40\textwidth}
  \centering \small (a)
  \end{minipage}
~
  \begin{minipage}[c]{.40\textwidth}
  \centering \small (b)
  \end{minipage}
\end{center}
  \caption{(a) Control Dependence Graph; (b) Data Flow Graph}\label{hls::dfgcdg_example}
\end{figure}

\subsubsection{System Dependence Graph}\label{hls:sdg}
\index{System Dependence Graph|textbf}The \textit{System Dependence Graph} (SDG) is 
the union of two graphs: the Control Dependence Graph (CDG) and the Data Flow Graph (DFG). 
It is used because it represents both data and control dependences in a single 
graph, without containing spurious control dependences such as those of the control 
flow graph (e.g., see Fig.~\ref{hls::sdg_example}).
\begin{figure}[t!]
 \centering
 \includegraphics[width=0.23\textheight]{./chapters/hls/sdg.jpg}
 \caption{System Dependence Graph}\label{hls::sdg_example}
\end{figure}


\subsection{Resource Library}\label{hls::resource}
The behavioral specification describes the input design to a high-level synthesis 
system. The system also needs information on which modules can be used during the 
synthesis. A collection of these modules is represented by a \textit{library}\index{Library|textbf}. It is common to use a set of modules, each 
of which can only execute a limited set of operations. 

Modules implement different types of functions in hardware. They can be broadly
classified as follows:
\begin{itemize}
 \item Functional resources process data. They implement arithmetic or logic functions
     and can be grouped into two subclasses:
\begin{itemize}
\item Primitive resources are subcircuits that are designed carefully once and used often. Examples are arithmetic units and some standard logic functions, such as encoders and decoders. Primitive resources can be stored in libraries. Each resource is fully characterized by its area and performance parameters.
\item Application-specific resources are subcircuits that solve a particular subtask. An example is a subcircuit servicing a particular function of a processor. In general such resources are the implementation of other HDL models. When synthesizing hierarchical sequencing graph models bottom up, the implementation of the entities at lower levels of the hierarchy can be viewed as resources.
\end{itemize}

\item Memory resources store data. Examples are registers, read-only and read-write memory arrays. Requirements for storage resources are implicit in the graph model. In some cases, access to memory arrays is modeled as transfer of data across circuit ports.
\item Interface resources support data transfers. Interface resources include busses, which
may be used as a major means of communication inside a data path. External
interface resources are I/O pads and interfacing circuits.
\end{itemize}

\noindent The \textit{library} provides a way to describe the relation between the types of operations and the modules.

\begin{definition}\label{def:libray}
  \textbf{Library}: the library $\Lambda(T,L)$ is defined by the set of operation 
types $T$ and the set of library components $L$. The \textnormal{library function} 
$\lambda : T \rightarrow \Pi(L)$ determines for each operation type $t \in T$ by 
which library components it can be performed.
\end{definition}

\noindent For example, if the library $\Lambda$ contains the following components:
\begin{equation}
  L = \lbrace ripple\_carry\_adder, carry\_look\_ahead\_adder, ALU, array\_multiplier \rbrace \nonumber
\end{equation}

\noindent The library function for the plus-operation is then
\begin{equation}
 \lambda(+) = \lbrace ripple\_carry\_adder, carry\_look\_ahead\_adder, ALU \rbrace \nonumber
\end{equation}

\noindent The function $\lambda^{-1} : L \rightarrow \Pi(T)$ describes for each component 
which types of operations it can execute. The set $\lambda^{-1}(l)$ will be called 
the \textit{operation type set} of \textit{l}. Operations whose type belong to the 
same operation type set can share a module.
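The library function $\lambda$ and its inverse $\lambda^{-1}$ can be sketched as plain mappings. The \texttt{+} entry below follows the example above; the \texttt{-} and \texttt{*} entries are assumptions added for illustration, not given in the text:

```python
# The library function lambda as a plain mapping from operation types to the
# set of components that can perform them.
LIBRARY = {
    "+": {"ripple_carry_adder", "carry_look_ahead_adder", "ALU"},
    "-": {"ALU"},                  # assumption, not given in the text
    "*": {"array_multiplier"},     # assumption, not given in the text
}

def inverse_library(lam):
    """lambda^{-1}: component -> the operation type set it can execute."""
    inv = {}
    for op_type, components in lam.items():
        for c in components:
            inv.setdefault(c, set()).add(op_type)
    return inv

inv = inverse_library(LIBRARY)
```

Under these assumptions, the operation type set of the ALU contains both \texttt{+} and \texttt{-}, so a plus-operation and a minus-operation could share an ALU module.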

% binding - assignement to module (not yet to module instance)
\begin{definition}\label{def:op_selection}
 \textbf{Operation selection}: the \textnormal{operation selection function} $\sigma_o 
 : V_o \rightarrow L$ determines on which type of library component an operation 
 is executed.
\end{definition}

% allocation target
\begin{definition}\label{def:num_components}
 \textbf{Module selection}: the \textnormal{module selection function} $\sigma : L 
\rightarrow \mathbb{N}^{\vert L\vert}$, where $\mathbb{N}$ is the set of natural 
numbers, determines how many library components of each type are instantiated from 
the library $\Lambda$ in the data path $P$.
\end{definition}

\begin{definition}\label{def:op_execution}
 \textbf{Operation type set}: the \textnormal{operation type set function} $\Omega : 
L \rightarrow \Pi(T)$ gives for each library component $l \in L$ the operation types 
it can execute. The sets $\Omega(l), l\in L$ will be called the \textnormal{operation 
type sets} of the library $\Lambda(T,L)$.
\end{definition}

\subsection{Design Constraints}\label{hls::constraints}
The distinction between constraints and objectives is straightforward: a constraint 
is a design target that must be met in order for the design to be considered 
successful. For example, a chip may be required to run at a specific frequency in 
order to interface with other components in a system. In contrast, an objective is a 
design target where more (or less) is better. For example, yield is generally an 
objective, which is maximized to lower manufacturing cost. 
Constraints in architectural synthesis can be classified into two major groups: \textit{interface
constraints} and \textit{implementation constraints}.

Interface constraints are additional specifications to ensure that the circuit can
be embedded in a given environment. They relate to the format and timing of the I/O
data transfers. The data format is often specified by the interface of the model. The
timing separation of I/O operations can be specified by timing constraints that can
ensure that a synchronous I/O operation follows/precedes another one by a defined
number of cycles in a given interval. Timing constraints can also specify data rates for pipelined systems. 

Implementation constraints reflect the desire of the designer to achieve a structure with some properties. Examples are area constraints and performance constraints, e.g., cycle-time and/or latency bounds. 

A different kind of implementation constraint is a \textit{resource binding constraint}.
In this case, a particular operation is required to be implemented by a given resource.
These constraints are motivated by the designer's previous knowledge, or intuition,
that one particular choice is the best and that other choices do not need investigation.
Architectural synthesis with resource binding constraints is often referred to as synthesis from partial structure. Design systems that support such a feature allow a designer to specify a circuit in a wide spectrum of ways, ranging from a full behavioral model to a structural one. This modeling capability may be useful to leverage previously designed components.

Design constraints are formulated as equalities or inequalities. For example, design constraints can be:
\begin{itemize}
\item $Area(x) \leq Area_{max}$; where $Area(x)$ is area occupied by the design solution 
$x$ and $Area_{max}$ is the maximum allowed for total occupied area;
\item $Time(x) \leq Time_{max}$; where $Time(x)$ is latency of the solution $x$ and 
$Time_{max}$ is maximum latency allowed to consider the solution admissible.
\end{itemize}
\noindent A solution that does not meet all design constraints is not considered feasible
and has to be discarded.
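Such a feasibility check is a direct test of the constraint inequalities. The sketch below (field names and bound values are illustrative) discards any candidate that violates a constraint:

```python
# Feasibility check: a candidate solution is kept only if every constraint
# inequality holds. Field names and bounds are illustrative.
def is_feasible(solution, area_max, time_max):
    """solution: dict holding the measured area and latency of a candidate."""
    return solution["area"] <= area_max and solution["time"] <= time_max

candidates = [{"area": 120, "time": 10},   # violates the area bound
              {"area": 90, "time": 15},    # feasible
              {"area": 80, "time": 25}]    # violates the latency bound
feasible = [c for c in candidates if is_feasible(c, area_max=100, time_max=20)]
```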

\subsection{Design Space}
The \textbf{design space} is the collection of all feasible structures corresponding to a circuit specification. The \textit{Pareto points} are those points of the design space that are not dominated by others in all objectives of interest. Hence they represent the spectrum of implementations of interest to designers. Their image determines the trade-off curves in the design evaluation space.
Realistic design examples have trade-off curves that are not smooth, for
two reasons. First, the design space is a finite set of points, since the macroscopic
structure of a circuit is coarse grained. For example, a hypothetical circuit may have
one or two multipliers, and its area changes in correspondence with this choice.
Second, several non-linear effects are compounded in determining the
objectives as a function of the structure of the circuit. Due to the lack of compactness
of the design space and of smoothness of the design evaluation space, architectural optimization problems are hard and their solution relies in general on solving some
related subproblems.
Architectural exploration consists of traversing the design space and providing
a spectrum of feasible non-inferior solutions, among which a designer can pick the
desired implementation. Exploration requires the solution of constrained optimization
problems. Architectural synthesis tools can select an appropriate design point, according to some user-specified criterion, and construct the corresponding data path and
control unit.
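Since the design space is finite, its Pareto points can be extracted by a direct dominance test. The sketch below assumes two minimization objectives, area and latency, with illustrative values:

```python
# Pareto points of a finite design space under two minimization objectives
# (area, latency): a point is dominated when another distinct point is no
# worse in both objectives. Design points are illustrative.
def pareto_points(points):
    def dominates(p, q):
        # p dominates q: no worse in both objectives and not the same point
        return p[0] <= q[0] and p[1] <= q[1] and p != q
    return [q for q in points if not any(dominates(p, q) for p in points)]

designs = [(100, 30), (120, 20), (150, 15), (130, 25), (100, 35)]
front = pareto_points(designs)   # [(100, 30), (120, 20), (150, 15)]
```

The image of these non-dominated points is exactly the trade-off curve shown to the designer.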

\subsection{The Temporal Domain: Scheduling}\label{hls::scheduling}
\index{Scheduling|textbf}
The scheduling problem is the problem of determining the order in which the operations in the behavioral description will be executed. Within a control step, a separate functional unit is required to execute each operation assigned to that step. Thus, the total number of functional units required in a control step directly corresponds to the number of operations scheduled into it. If more operations are scheduled into each control step, more functional units are necessary, which results in fewer control steps for the design implementation. On the other hand, if fewer operations are scheduled into each control step, fewer functional units are sufficient, but more control steps are needed. Thus, scheduling determines the trade-off between design cost and performance.
A \textit{scheduling function} $\theta : V_o \rightarrow \Pi(\mathbb{N}^n)$ assigns to each DFG node $v \in V_o$ a sequence of cycle steps in which the node is executed. If these cycle steps are contiguous, the sequence is called the \textit{execution interval} of the operation $v$. A schedule is called a \textit{simple} schedule if all operations have an execution interval of length one. In this work, only execution in contiguous cycle steps will be considered.
\begin{definition}\label{hls:mutual_exclusion}
  \textbf{Mutual exclusion:} two operations are called mutually exclusive if they are executed under mutually exclusive conditions. A \textnormal{mutual exclusion function} $m : V_0 \rightarrow \Pi(\mathbb{N})$ is defined such that:
\begin{equation}
   m(v_i) \cap m(v_j) = \emptyset
\end{equation}
 when operations $v_i$ and $v_j$ are executed under mutually exclusive conditions.
\end{definition}

There are two classes of scheduling problems: \textit{time-constrained} scheduling and \textit{resource-constrained} scheduling. Time-constrained scheduling minimizes the hardware cost when all operations are to be scheduled into a fixed number of control steps. Resource-constrained scheduling, instead, minimizes the number of control steps needed for executing all operations given a fixed amount of hardware.

The simplest constructive approach is the as soon as possible (ASAP\index{Scheduling!ASAP|textbf}) scheduling. First, operations are sorted into a list according to their topological order. Then, operations are taken from the list one at a time and placed into the earliest possible control step. The other simple approach is the as late as possible (ALAP\index{Scheduling!ALAP|textbf}) scheduling. The ALAP value for an operation defines the latest control step into which an operation can possibly be scheduled. In this approach, given a time constraint in terms of the number of control steps, the algorithm determines the latest possible control step in which an operation must begin its execution. The \textit{critical paths}\index{Scheduling!critical path} within the flow graph can be found by taking the intersection of the ASAP and ALAP schedules such that the operations that appear in the same control steps in both schedules are on the critical paths.
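The ASAP and ALAP schedules, and the critical path obtained from their intersection, can be sketched as follows (a minimal sketch assuming unit-delay operations; node names are illustrative, \texttt{deps} maps each operation to its predecessors and \texttt{order} is a topological order):

```python
# ASAP and ALAP schedules of a DFG with unit-delay operations, and the
# critical path as the intersection of the two schedules.
def asap(deps, order):
    """Earliest control step of each operation (steps start at 1)."""
    step = {}
    for v in order:
        step[v] = 1 + max((step[p] for p in deps.get(v, [])), default=0)
    return step

def alap(deps, order, num_steps):
    """Latest control step, given a time constraint of num_steps steps."""
    succs = {}
    for v, preds in deps.items():
        for p in preds:
            succs.setdefault(p, []).append(v)
    step = {}
    for v in reversed(order):
        step[v] = min((step[s] for s in succs.get(v, [])),
                      default=num_steps + 1) - 1
    return step

def critical_path(deps, order, num_steps):
    """Operations whose ASAP and ALAP steps coincide."""
    a, l = asap(deps, order), alap(deps, order, num_steps)
    return {v for v in order if a[v] == l[v]}

deps = {"add": ["mul"], "sub": ["add"], "cmp": []}   # chain mul -> add -> sub
order = ["mul", "cmp", "add", "sub"]
```

Here the chain of dependent operations is forced into fixed steps, while the independent comparison has slack and stays off the critical path.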

\subsection{The Spatial Domain: Datapath Architecture}
\index{Datapath|textbf}
The data path describes a register transfer level design analogous to a schematic diagram. The data path is described in terms of modules and their interconnections.

\begin{definition}\label{hls:datapath}
 \textbf{Datapath:} The datapath ($DP$) is a graph $DP(M_o\cup M_s \cup M_i,I)$ where
 \begin{itemize}
  \item a set $M = M_o\cup M_s\cup M_i$, whose elements, called modules, are the nodes of the graph, with
  \begin{itemize}
     \item a set $M_o$ of \textnormal{operational} modules like adders, multipliers and ALUs,
     \item a set $M_s$ of \textnormal{storage} modules like registers and register files,
     \item a set $M_i$ of \textnormal{interconnection} modules like multiplexers, demultiplexers, busses and bus drivers;
 \end{itemize}
  \item an interconnection relation $I\subseteq M\times M$, whose elements are interconnection links. These are the edges of the datapath graph.
 \end{itemize}
\end{definition}

%The set $L={l_1,\ldots , l_j}$ is defined as the set of the library components. 
Each module $m\in M$ specifies:
\begin{itemize}
 \item the library component of which this module is an instance,
 \item the pins $P=\{p_1,\ldots ,p_k\}$ of the module.
\end{itemize}
For each interconnection link, the pins of the modules it connects are specified.

\subsubsection{Storage value insertion}
The storage value insertion phase inserts additional nodes in the scheduled data flow graph. Each edge that crosses a cycle step boundary represents a value that has to be stored somewhere. The storage allocation function can therefore be defined as the following transformation:

Given a scheduled data flow graph $G(V_o,E,C)$:
\begin{definition}
 \textbf{Storage value insertion}: the \textnormal{storage value insertion} is a transformation $G(V_o,E,C)\rightarrow (V_o\cup V_s,E',C)$, which adds storage value $v\in V_s$ to the graph such that all edges $e\in E$ which cross a cycle step boundary are connected to a storage value.
\end{definition}
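Assuming simple (single-cycle) operations and a node-level schedule, this transformation can be sketched as a pass over the edge list: every edge whose producer is scheduled in an earlier cycle step than its consumer is split through a fresh storage-value node. All names below are illustrative.

```python
# A sketch of storage value insertion on a scheduled DFG.
def insert_storage_values(edges, cycle):
    """edges: list of (producer, consumer); cycle: node -> cycle step."""
    new_edges, storage_values = [], []
    for u, v in edges:
        if cycle[u] < cycle[v]:          # edge crosses a cycle-step boundary
            s = f"store_{u}_{v}"         # fresh storage-value node in V_s
            storage_values.append(s)
            new_edges += [(u, s), (s, v)]
        else:
            new_edges.append((u, v))
    return new_edges, storage_values
```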

\subsubsection{Module allocation and binding}\label{hls:allocation_binding}

Given a data path $DP(M_o\cup M_s \cup M_i,I)$, a scheduled DFG $G(V_o\cup V_s,E,C)$ and a module library $\Lambda(T,L)$:

\begin{definition}\label{hls:allocation}
 \textbf{Module allocation}: the \textnormal{module allocation} function $\mu :V_o\rightarrow \Pi(M_o)$ determines which module performs a given operation.
\end{definition}

Note that a module allocation $\mu(v_i)=m, m\in M_o, v_i\in V_o$ can only be a valid allocation if $m\in \lambda(\tau (v_i))$, i.e. the module $m$ is capable of executing operations of the type of $v_i$.

\begin{definition}\label{hls:binding}
 \textbf{Module binding}: a \textnormal{resource binding} is a mapping $\beta : V_o\rightarrow M_o \times \mathbb{N}$, where $\beta(v_o) = (t,r)$ denotes that the operation corresponding to $v_o \in V_o$, whose type satisfies $\tau(v_o)\in \lambda^{-1}(t)$ (i.e. component $t\in L$ can execute the operation represented by vertex $v_o$), is executed on the component $t = \mu(v_o)$, with $r< \sigma(t)$ (i.e. the operation is implemented by the \textit{r}-th instance of resource type $t$, and this instance is available in the datapath).
\end{definition}

A simple case of binding is a dedicated resource. Each operation is bound to one resource, and the resource binding $\beta$ is a one-to-one function.

A resource binding may associate one instance of a resource type to more than one operation. In this case, that particular resource is shared and binding is a many-to-one function. A necessary condition for a resource binding to produce a valid circuit implementation is that the operations corresponding to a shared resource do not execute concurrently, i.e. they are in mutual exclusion, with respect to definition~\ref{hls:mutual_exclusion}.
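Under a simple schedule, this necessary condition reduces to a direct check: no two operations bound to the same resource instance may occupy the same cycle step. The sketch below assumes single-cycle operations; the data layout is illustrative.

```python
from collections import defaultdict

# A sketch of the validity check for a (possibly shared) binding.
def binding_is_valid(binding, cycle):
    """binding: op -> (resource_type, instance); cycle: op -> cycle step."""
    used = defaultdict(set)              # (type, instance) -> occupied steps
    for op, inst in binding.items():
        if cycle[op] in used[inst]:
            return False                 # two concurrent ops share an instance
        used[inst].add(cycle[op])
    return True
```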

When binding constraints are specified, a resource binding must be compatible with them. In particular, a partial binding may be part of the original specification, as described in Section~\ref{hls::constraints}. This corresponds to specifying a binding for a subset of the operations $U\subseteq V_o$. A resource binding is compatible with a partial binding when its restriction to the operations $U$ is identical to the partial binding itself.

%Common constraints on binding are upper bounds on the resource usage of each type, as represented by the allocation function (see Def.~\ref{hls:allocation}) of instances for each resource type.

\subsubsection{Register allocation}

The register allocation problem can be formulated as the allocation of a storage module $m\in M_s$ for each storage value $v\in V_s$:

\begin{definition}
 \textbf{Register allocation}: the \textnormal{register allocation} function $\psi : V_s\rightarrow \Pi(M_s)$ identifies the storage module holding a value from the set $V_s$.
\end{definition}

The binding information is needed for evaluating and/or performing the register optimization. Therefore, the accurate estimation of the number of registers requires both
scheduling and binding.

\subsubsection{Interconnection allocation}

All registers and modules have to be connected to transfer all the data between their ports.

\begin{definition}
 \textbf{Interconnection allocation}: the \textnormal{interconnection allocation} function $\iota :E\rightarrow \Pi(M_i)$ describes how the modules and registers are connected and which interconnection is assigned to each data transfer.
\end{definition}

\noindent Data path connectivity synthesis consists of defining the interconnection among
resources, steering logic circuits (multiplexers or busses), memory resources (registers
and memory arrays), input/output ports and the control unit. Therefore, a complete binding is required. Data path connectivity synthesis also specifies the interconnection of the data
path to its environment through the input/output ports.

The interface to the control circuit is provided by the signals that enable the registers and that control the steering circuits (i.e., multiplexers and busses). Sequential resources require a \textit{start} (and sometimes a \textit{reset}) signal. Hence the execution of each
operation requires a set of \textit{activation} signals. 

In addition, the control unit receives some \textit{condition} signals from the data path
that evaluate the clauses of some branching and iterative constructs. Condition signals
provided by data-dependent operations are called \textit{completion} signals. The ensemble
of these control points must be identified in data path synthesis.




% A datapath can be constructed in two steps: \textit{unit allocation} and \textit{unit binding}. Unit allocation determines the number and types of library components to be used in the design. Since a real RT component library may contain multiple types of resources, each with different characteristics (e.g. operation that can be performed, size, delay and power dissipation), unit allocation needs to determine the number and type of different functional and storage units from a component library. Unit binding maps the operations, variables and data transfers in the behavioral description into the functional, storage and interconnection units, respectively. For register allocation two main algorithm can be selected: an heuristic left edge and an optimal algorithm for register allocation (where a careful analysis is performed to choose the best algorithm with respect to the problem). For module allocation, a weighted graph coloring has been introduced, where operations that read or write on same registers have much probability to be bound to same functional units. So interconnection will be reduced. Interconnection elements are so allocated and decoding logic is computed.

\subsection{Control unit synthesis}\label{hls:fsm}

The behavioral view of sequential circuits at the logic level can be expressed by
finite-state machine transition diagrams. A finite-state machine can be described
by:
\begin{itemize}
\item A set of primary input patterns, $X$.
\item A set of primary output patterns, $Y$.
\item A set of states, $S$.
\item A state transition function, $\delta : X\times S\rightarrow S$.
\item An output function, $\lambda : X\times S\rightarrow Y$ for Mealy models or $\lambda : S \rightarrow Y$ for Moore models.
\item An initial state.
\end{itemize}

The state transition table is a tabulation of the state transition and output functions. Its corresponding graph-based representation is the state transition diagram. The state transition diagram is a labeled directed multi-graph $G_t(V,E)$, where the vertex set $V$ is in one-to-one correspondence with the state set $S$ and the directed edge set $E$ is in one-to-one correspondence with the transitions specified by $\delta$. In particular, there is an edge $(v_i,v_j)$ if there is an input pattern $x\in X$ such that $\delta(x,s_i)=s_j, \forall i,j = 1,2,\ldots ,\vert S\vert$. In the Mealy model, such an edge is labeled by $x/\lambda(x,s_i)$. In the Moore model, that edge is labeled by $x$ only; each vertex $v_i\in V$ is labeled by the corresponding output value $\lambda(s_i)$.
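A Mealy machine can be sketched directly from its transition table. The machine below, a detector that outputs 1 whenever two consecutive 1s are read, is an illustrative example, not one taken from the text.

```python
# A sketch of a Mealy machine driven by a transition table:
# delta maps (state, input) -> next state, lam maps (state, input) -> output.
def run_mealy(delta, lam, s0, inputs):
    """Return the output sequence produced from initial state s0."""
    s, outputs = s0, []
    for x in inputs:
        outputs.append(lam[(s, x)])  # Mealy: output depends on state AND input
        s = delta[(s, x)]            # state transition
    return outputs

# '11' detector: state B remembers that the previous input was a 1.
delta = {('A', 0): 'A', ('A', 1): 'B', ('B', 0): 'A', ('B', 1): 'B'}
lam   = {('A', 0): 0,   ('A', 1): 0,   ('B', 0): 0,   ('B', 1): 1}
```

A Moore variant would replace `lam[(s, x)]` with a lookup keyed on the state alone.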


% We consider in this section synthesis of control units. We assume that there are $n_ack$
% activation signals to be issued by the control unit and we do not distinguish among their
% specific function (e.g., enable, multiplexer control, etc). From a circuit implementation
% point of view, we can classify the control-unit model as microcode based or hard
% wired. The former implementation style stores the control information into a read-
% only memory (ROM) array addressed by a counter, while the latter uses a hard-
% wired sequential circuit consisting of an interconnection of a combinational circuit
% and registers. From a logic standpoint, synchronous implementation of control can
% be modeled as a finite-state machine.
% 
% Finite State Machines (FSM), also known as Finite State Automation (FSA), at their simplest, are models of the behaviors of a system or a complex object, with a limited number of defined conditions or modes, where mode transitions change with circumstance.
% 
% Finite state machines consist of 4 main elements:
% 
% \begin{itemize}
% \item states which define behavior and may produce actions
% \item state transitions which are movement from one state to another
% \item rules or conditions which must be met to allow a state transition
% \item input events which are either externally or internally generated, which may possibly trigger rules and lead to state transitions
% \end{itemize}
% 
% A < machine must have an initial state which provides a starting point, and a current state which remembers the product of the last state transition. Received input events act as triggers, which cause an evaluation of some kind of the rules that govern the transitions from the current state to other states. The best way to visualize a FSM is to think of it as a flow chart or a directed graph of states, though as will be shown; there are more accurate abstract modeling techniques that can be used.
% 
% There are two main methods for handling where to generate the outputs for a < machine. They are called a Moore Machine and a Mearly Machine, named after their respective authors.
% 
% A Moore Machine is a type of finite state machine where the outputs are generate as products of the states.
% 
% A Mearly Machine, unlike a Moore Machine is a type of < machine where the outputs are generated as products of the transition between states.

%\subsection{FPGA synthesis}

\section{Genetic Algorithms}\label{hls:ga}
\index{Genetic Algorithms|textbf}
\textbf{Genetic Algorithms} (GAs) are adaptive methods which may be used to solve search and optimization problems. They are based on the genetic processes of biological organisms. Over many generations, natural populations evolve according to the principles of natural selection and \textquotedblleft survival of the fittest\textquotedblright, first clearly stated by Charles Darwin in \textit{The Origin of Species}. By mimicking this process, genetic algorithms are able to \textquotedblleft evolve\textquotedblright\ solutions to real-world problems, if these have been suitably encoded. As such, they represent an intelligent exploitation of random search applied to optimization problems.

% For example, GAs can be used to design bridge structures, for maximum strength/weight ratio, or to determine the least wasteful layout for cutting shapes from cloth. They can also be used for online process control, such as in a chemical plant, or load balancing on a multi-processor computer system.

The basic principles of GAs were first laid down rigorously by Holland~\cite{holland}, and are well described
in many texts~(e.g.: \cite{goldberg,mitchell98introduction}). GAs simulate those processes in natural populations which are essential to evolution. Exactly which biological processes are \textit{essential} for evolution, and which processes have little or no role to play is still a matter for research; but the foundations are clear. 
In nature, individuals in a population compete with each other for resources such as food, water and shelter. Also, members of the same species often compete to attract a mate. Those individuals which are most successful in surviving and attracting mates will have relatively larger numbers of offspring. Poorly performing individuals will produce few or even no offspring at all. This means that the genes from the highly adapted, or \textquotedblleft fit\textquotedblright, individuals will spread to an increasing number of individuals in each successive generation. The combination of good characteristics from different ancestors can sometimes produce \textquotedblleft superfit\textquotedblright\ offspring, whose fitness is greater than that of either parent. In this way, species evolve to become better and better suited to their
environment.

GAs use a direct analogy of natural behaviour. They work with a population of \textit{individuals}, each representing a possible solution to a given problem. Each 
individual is assigned a \textit{fitness score}, according to how good a solution 
to the problem it is. For example, the fitness score might be the strength/weight 
ratio for a given bridge design (In nature this is equivalent to assessing how effective an organism is at competing for resources). The highly fit individuals are given opportunities to \textquotedblleft reproduce\textquotedblright, by \textquotedblleft cross breeding\textquotedblright with other individuals in the population. This produces new individuals as \textquotedblleft offspring\textquotedblright, which share some features taken
from each \textquotedblleft parent\textquotedblright. The least fit members of the population are less likely to get selected for reproduction, and so \textquotedblleft die out\textquotedblright.
A whole new population of possible solutions is thus produced by selecting the best individuals from the current \textquotedblleft generation\textquotedblright, and mating them to produce a new set of individuals. This new generation contains a higher proportion of the characteristics possessed by the good members of the previous generation. In this way, over many generations, good characteristics are spread throughout the population, being mixed and exchanged with other good characteristics as they go. By favouring the mating of the more fit individuals, the most
promising areas of the search space are explored. If the GA has been designed well, the population will converge to an optimal solution to the problem.
% Although randomised, GAs are by no means random, instead they exploit historical information to direct the search into the region of better performance within the search space. The basic techniques of the GAs are designed to simulate processes in natural systems necessary for evolution, specially those follow the principles first laid down by Charles Darwin of \textquotedblleft survival of the fittest\textquotedblright. Since in nature, competition among individuals for scanty resources results in the fittest individuals dominating over the weaker ones.
% GAs simulate the survival of the fittest among individuals over consecutive generation for solving a problem. Each generation consists of a population of character strings that are analogous to the chromosome that we see in our DNA. Each individual represents a point in a search space and a possible solution. The individuals in the population are then made to go through a process of evolution.
% 
% GAs are based on an analogy with the genetic structure and behaviour of chromosomes within a population of individuals using the following foundations:

% \begin{itemize}
% \item Individuals in a population compete for resources and mates.
% \item Those individuals most successful in each 'competition' will produce more offspring than those individuals that perform poorly.
% \item Genes from 'good' individuals propagate throughout the population so that two good parents will sometimes produce offspring that are better than either parent.
% \item Thus each successive generation will become more suited to their environment. 
% \end{itemize}

\subsection{Coding}

It is assumed that a potential solution to a problem may be represented as a set of parameters. These parameters, known as \textit{genes}, are joined together to form a string of values, often referred to as a \textit{chromosome} (see Fig.~\ref{fig:gene}). Holland first showed~\cite{holland}, and many still think~\cite{mitchell98introduction}, that the ideal is to use a binary alphabet for the string. In genetics terms, the set of parameters represented by a particular chromosome is referred to as a \textit{genotype}. The genotype contains the information required to construct an organism - which is referred to as the \textit{phenotype}.
The same terms are used in GAs. The fitness of an individual depends on the performance of the phenotype. This can be inferred from the genotype - i.e. it can be computed from the chromosome, using the fitness function.

\begin{figure}[ht]
\centering
\begin{minipage}[l]{0.6\textwidth}
\includegraphics[width=\columnwidth]{./chapters/hls/images/gene.jpg}
\end{minipage}
\caption[Coding terms]{Coding terms}\label{fig:gene}
\end{figure}

\subsection{Fitness Function}

A fitness function must be devised for each problem to be solved. Given a particular chromosome, the fitness function returns a single numerical \textquotedblleft fitness\textquotedblright or \textquotedblleft figure of merit\textquotedblright~which is supposed to be proportional to the
\textquotedblleft utility\textquotedblright or \textquotedblleft ability\textquotedblright~of the individual which that chromosome represents. For many problems, particularly function optimization, it is obvious what the fitness function should measure - it should just be the value of the function.

\subsection{Reproduction}

During the reproductive phase of the GA, individuals are selected from the population and recombined, producing offspring which will comprise the next generation. Parents are selected randomly from the population using a scheme which favours the more fit individuals. Good individuals will probably be selected several times in a generation, while poor ones may not be selected at all.
Having selected two parents, their chromosomes are recombined, typically using the mechanisms of crossover and mutation. The most basic forms of these operators are described here.

\subsubsection{Crossover}
Crossover takes two individuals, and cuts their chromosome strings at some randomly chosen positions (one or more), to produce segments. The segments are then swapped over to produce
two new full length chromosomes. Each of the offspring inherits some genes from each parent.
Crossover is not usually applied to all pairs of individuals selected for mating. A random choice is made, where the likelihood of crossover being applied is typically between 0.6 and 1.0. If crossover is not applied, offspring are produced simply by duplicating the parents. This gives each individual a chance of passing on its genes without the disruption of crossover.

\subsubsection{Mutation}
Mutation is applied to each child individually after crossover. It randomly alters each gene with a small probability (typically 0.001). The traditional view is that crossover is the more important of the two techniques for rapidly exploring a search space. Mutation provides a small amount of random search, and helps ensure that no point in the search space has a zero probability of being examined.
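The two operators can be sketched on binary list chromosomes as follows; the default crossover and mutation rates are the typical values quoted above, and all names are illustrative.

```python
import random

# Single-point crossover: with probability `rate` the parents are cut
# at a random position and the tails are swapped; otherwise they are copied.
def crossover(p1, p2, rate=0.7):
    if random.random() > rate:
        return p1[:], p2[:]              # no crossover: duplicate the parents
    cut = random.randint(1, len(p1) - 1) # cut point strictly inside the string
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

# Bitwise mutation: flip each gene independently with a small probability.
def mutate(chrom, rate=0.001):
    return [1 - g if random.random() < rate else g for g in chrom]
```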

\subsection{Convergence}

If the GA has been correctly implemented, the population will evolve over successive generations so that the fitness of the best and the average individual in each generation increases towards the global optimum. Convergence is the progression towards increasing uniformity. A gene is said to have converged when 95\% of the population share the same value. The population is said to have converged when all of the genes have converged. As the population converges, the average fitness will approach that of the best individual.

\subsection{Why Genetic Algorithms Work}

Holland's schema theorem~\cite{holland} was the first rigorous explanation of how GAs work. A \textit{schema} is a pattern of gene values which may be represented (in a binary coding) by a string of characters over the alphabet $\{0,1,\#\}$. A particular chromosome is said to contain a particular schema if it matches that schema, with the \textquotedblleft \#\textquotedblright\
symbol matching anything. So, for example, the chromosome \textquotedblleft 1010\textquotedblright\ contains, among others, the schemata \textquotedblleft 10\#\#\textquotedblright, \textquotedblleft \#0\#0\textquotedblright, \textquotedblleft \#\#1\#\textquotedblright\ and \textquotedblleft 101\#\textquotedblright. The order of a schema is the number of non-\# symbols it contains
(2, 2, 1 and 3 respectively in the example). 
%The \textit{defining length} of a schema is the distance between the outermost non-\# symbols (2, 3, 1, 3 respectively in the example).
The \textbf{schema theorem} explains the power of the GA in terms of how schemata are processed. Individuals in the population are given opportunities to reproduce, often referred to as \textit{reproductive trials}, and produce offspring. The number of such opportunities an individual receives is in proportion to its fitness - hence the
better individuals contribute more of their genes to the next generation. It is assumed that an individual's high fitness is due to the fact that it contains good schemata. By passing on more of these good schemata to the next generation, we increase the likelihood of finding even better solutions. Holland showed that the optimum way to explore the search space is to allocate reproductive trials to individuals in proportion to their fitness relative to the rest of the population. In this way, good schemata receive an exponentially increasing number of trials in successive generations. This is called the schema theorem.
He also showed that, since each individual contains a great many different schemata, the number of schemata which are effectively being processed in each generation is of the order $n^3$, where $n$ is the population size. This property is known as \textbf{implicit parallelism}, and is one of the explanations for the good performance of GAs.
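The schema-matching relation and the order of a schema translate into a few lines of code; the sketch below uses the example strings from the text.

```python
# A chromosome contains a schema if every non-'#' position agrees.
def matches(chrom, schema):
    return all(s == '#' or s == c for c, s in zip(chrom, schema))

# The order of a schema is its number of fixed (non-'#') positions.
def order(schema):
    return sum(1 for s in schema if s != '#')
```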

% \subsection{Search Space}
% A population of individuals are is maintained within search space for a GA, each representing a possible solution to a given problem. Each individual is coded as a finite length vector of components, or variables, in terms of some alphabet, usually the binary alphabet {0,1}. To continue the genetic analogy these individuals are likened to chromosomes and the variables are analogous to genes. Thus a chromosome (solution) is composed of several genes (variables). A fitness score is assigned to each solution representing the abilities of an individual to 'compete'. The individual with the optimal (or generally near optimal) fitness score is sought. The GA aims to use selective 'breeding' of the solutions to produce 'offspring' better than the parents by combining information from the chromosomes.

% The GA maintains a population of n chromosomes (solutions) with associated fitness values. Parents are selected to mate, on the basis of their fitness, producing offspring via a reproductive plan. Consequently highly fit solutions are given more opportunities to reproduce, so that offspring inherit characteristics from each parent. As parents mate and produce offspring, room must be made for the new arrivals since the population is kept at a static size. Individuals in the population die and are replaced by the new solutions, eventually creating a new generation once all mating opportunities in the old population have been exhausted. In this way it is hoped that over successive generations better solutions will thrive while the least fit solutions die out.
% 
% New generations of solutions are produced containing, on average, more good genes than a typical solution in a previous generation. Each successive generation will contain more good 'partial solutions' than previous generations. Eventually, once the population has converged and is not producing offspring noticeably different from those in previous generations, the algorithm itself is said to have converged to a set of solutions to the problem at hand.
% 
% 
% \subsection{Implementation Details}
% 
% After an initial population is randomly generated, the algorithm evolves the through three operators:
% \begin{itemize}
%    \item selection which equates to survival of the fittest;
%    \item crossover which represents mating between individuals;
%    \item mutation which introduces random modifications. 
% \end{itemize}
% 
% \subsubsection{Selection Operator}
% 
% The key idea of selection operator is to give preference to better individuals, allowing them to pass on their genes to the next generation. The goodness of each individual depends on its fitness. The fitness may be determined by an objective function or by a subjective judgement.
% 
% \subsubsection{Crossover Operator}
% 
% This is the prime distinguished factor of GA from other optimization techniques. Two individuals are chosen from the population using the selection operator. Then, a crossover site along the encoded chromosomes is randomly chosen. The values of the two chromosome are exchanged up to this point. For example, if S1=000000 and s2=111111 and the crossover point is 2 then S1'=110000 and s2'=001111. The two new offspring created from this mating are put into the next generation of the population. The main concept is that, by recombining portions of good individuals, this process is likely to create even better individuals.
% 
% \subsubsection{Mutation Operator}
% 
% With some low probability, a portion of the new individuals will have some of their gene flipped. Its purpose is to maintain diversity within the population and inhibit premature convergence that could avoid to explore some interesting space regions. In fact, the mutation alone induces a random walk through the search space. Instead, mutation and selection (without crossover) create a parallel, noise-tolerant, hill-climbing algorithms.
% 
% \subsubsection{Effects of Genetic Operators}
% 
% Using selection alone will tend to fill the population with copies of the best individual from the population. Using selection and crossover operators will tend to cause the algorithms to converge on a good but sub-optimal solution. %(soluzione di nicchia)
% Using mutation alone induces a random walk through the search space. Using selection and mutation creates a parrallel, noise-tolerant, hill climbing algorithm.
% 
\subsection{The Algorithm}

The $t^{th}$ generation evolution of a genetic algorithm can be summarized with the following pseudocode (see also Fig.~\ref{fig:simple_ga}):

\begin{quote}
Evolve generation(t): 
\begin{algorithmic}[1]
  \STATE randomly initialize population(t)
  \STATE determine fitness of population(t)
  \WHILE {termination criteria have not been met}
     \STATE select parents from population(t)
     \STATE perform crossover on parents creating population(t+1)
     \STATE perform mutation of population(t+1)
     \STATE determine fitness of population(t+1)
     \STATE $t \leftarrow t + 1$
  \ENDWHILE
\end{algorithmic} 
\end{quote}

\begin{figure}[ht]
\centering
\begin{minipage}[l]{0.85\textwidth}
\includegraphics[width=\columnwidth]{./chapters/hls/images/simple_ga.jpg}
\end{minipage}
\caption{Flowchart of a simple genetic algorithm}\label{fig:simple_ga}
\end{figure}

\noindent Different termination criteria can be used, such as:
\begin{itemize}
\item the best individual is good enough (this requires knowing what a \textit{good} individual looks like),
\item the solutions no longer improve,
\item a fixed number of generations have been evolved.
\end{itemize}
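The pseudocode above can be sketched as a complete, minimal GA. The toy problem (OneMax: maximize the number of 1s in a bit string), the roulette-wheel selection scheme and all parameter values below are illustrative choices, not prescribed by the text.

```python
import random

def onemax(chrom):
    """Fitness: count the 1s in the chromosome."""
    return sum(chrom)

def select(pop, fits):
    """Roulette-wheel selection: probability proportional to fitness."""
    return random.choices(pop, weights=fits, k=1)[0]

def evolve(pop_size=20, length=16, generations=50,
           p_cross=0.7, p_mut=0.01):
    # randomly initialize population(t)
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):              # fixed-generation termination
        fits = [onemax(c) + 1 for c in pop]   # +1 avoids all-zero weights
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = select(pop, fits), select(pop, fits)
            if random.random() < p_cross:     # single-point crossover
                cut = random.randint(1, length - 1)
                p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            # bitwise mutation of both offspring
            nxt += [[1 - g if random.random() < p_mut else g for g in c]
                    for c in (p1, p2)]
        pop = nxt[:pop_size]                  # population(t+1)
    return max(pop, key=onemax)
```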
% 
% 
% A genetic algorithm (or short GA) is a search technique used in computing to find true or approximate solutions to optimization and search problems. Genetic algorithms are categorized as global search heuristics. Genetic algorithms are a particular class of evolutionary algorithms that use techniques inspired by evolutionary biology such as inheritance, mutation, selection, and crossover (also called recombination).
% 
% Genetic algorithms are implemented as a computer simulation in which a population of abstract representations (called chromosomes or the genotype or the genome) of candidate solutions (called individuals, creatures, or phenotypes) to an optimization problem evolves toward better solutions. Traditionally, solutions are represented in binary as strings of 0s and 1s, but other encodings are also possible. The evolution usually starts from a population of randomly generated individuals and happens in generations. In each generation, the fitness of every individual in the population is evaluated, multiple individuals are stochastically selected from the current population (based on their fitness), and modified (recombined and possibly mutated) to form a new population. The new population is then used in the next iteration of the algorithm.
%% breve descrizione di ogni passo
\subsection{Multi-objective optimization}\label{hls:multi_obj}
Optimization problems involving multiple, conflicting objectives are often approached by
aggregating the objectives into a scalar function and solving the resulting single-objective
optimization problem. In contrast, it can be interesting to find a set of
optimal trade-offs, the so-called \textit{Pareto-optimal set}. In the following, this
well-known concept is formalized, and the difference between local and global Pareto-optimal
sets is defined.
A multiobjective search space is partially ordered in the sense that two arbitrary solutions are related to each other in one of two possible ways: either one dominates the other, or neither dominates the other.

\begin{definition}\label{multiobj01}
\textbf{Multi-objective optimization}: consider, without loss of generality, a multiobjective minimization problem with $m$ decision variables (parameters) and $n$ objectives:
\begin{equation}
\begin{array}{ll}
   \textnormal{Minimize } &  \textbf{y}=f(\textbf{x}) = (f_1(\textbf{x}),f_2(\textbf{x}),\ldots, f_n(\textbf{x}) )\\ \nonumber
   \textnormal{where } & \textbf{x} = (x_1, x_2, \ldots, x_m) \in X \\ \nonumber
                       \ & \textbf{y} = (y_1, y_2, \ldots, y_n) \in Y
\end{array}
\end{equation}
and where $\textbf{x}$ is called \textnormal{decision vector}, $X$ \textnormal{parameter space}, \textbf{y} \textnormal{objective vector}, and $Y$ \textnormal{objective space}. A decision vector  $\textbf{a} \in X$ is said to \textnormal{dominate} a decision vector $\textbf{b} \in X$ (also written as $\textbf{a} \prec \textbf{b}$ ) if and only if:
\begin{equation}
\begin{array}{cc}
     \forall i \in \{1, \ldots, n\} : f_i(\textbf{a}) \leq f_i(\textbf{b}) & \wedge \\ \nonumber
     \exists j \in \{1, \ldots, n\} : f_j(\textbf{a}) < f_j(\textbf{b})
\end{array}
\end{equation}
Additionally, it is said that $\textbf{a}$ \textnormal{covers} $\textbf{b}$ ($\textbf{a} \preceq \textbf{b}$) if and only if $\textbf{a} \prec \textbf{b}$ or $f(\textbf{a}) = f(\textbf{b})$.
\end{definition}
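The dominance and cover relations of Definition~\ref{multiobj01} translate directly into two small predicates. The following Python fragment is an illustrative sketch only (the function names \texttt{dominates} and \texttt{covers} are our own, not part of any standard library); objective vectors are plain tuples and minimization is assumed, as in the definition.

```python
def dominates(a, b):
    """a dominates b (a < b): a is no worse than b in every objective
    and strictly better in at least one (minimization assumed)."""
    return (all(ai <= bi for ai, bi in zip(a, b))
            and any(ai < bi for ai, bi in zip(a, b)))

def covers(a, b):
    """a covers b (a <= b): a dominates b or their objective vectors coincide."""
    return dominates(a, b) or tuple(a) == tuple(b)
```

Note that, per the definition, a vector never dominates itself, but it does cover itself.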
Based on the above relation, the nondominated and Pareto-optimal solutions can be defined as:
\begin{definition}\label{def:pareto_optimal}
\textbf{Pareto-optimal solutions}: let $\textbf{a} \in X$ be an arbitrary decision vector:
\begin{itemize}
 \item The decision vector $\textbf{a}$ is said to be \textnormal{nondominated} regarding a set $X'\subseteq X$ if and only if there is no vector in $X'$ which dominates $\textbf{a}$; formally
\begin{equation}
 \nexists \textbf{a'} \in X' : \textbf{a'} \prec \textbf{a}
\end{equation}
%\noindent If it is clear within the context which set $X'$ is meant, we simply leave it out.
\item The decision vector $\textbf{a}$ is \textnormal{Pareto-optimal} if and only if $\textbf{a}$ is nondominated regarding $X$.
\end{itemize}
 \end{definition}
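For a finite set of candidate solutions, the nondominated subset of Definition~\ref{def:pareto_optimal} can be extracted by pairwise comparison. The sketch below is a straightforward $O(n^2)$ illustration under the same minimization assumption as above; the helper names are ours:

```python
def dominates(a, b):
    # a dominates b: no worse in every objective, strictly better in at least one
    return (all(ai <= bi for ai, bi in zip(a, b))
            and any(ai < bi for ai, bi in zip(a, b)))

def nondominated(points):
    """Return the objective vectors of `points` that no other member dominates."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]
```

For example, among the vectors $(1,3)$, $(2,2)$, $(3,1)$, and $(2,3)$, only $(2,3)$ is dominated (by $(1,3)$), so the other three form the nondominated set.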
Pareto-optimal decision vectors cannot be improved in any objective without causing
a degradation in at least one other objective; they represent, in our terminology, globally optimal solutions. However, analogous to single-objective optimization problems, there may also be local optima which constitute a nondominated set within a certain neighborhood. This corresponds to the concepts of global and local Pareto-optimal sets:
\begin{definition}
 \textbf{Pareto-optimal set}: consider a set of decision vectors $X' \subseteq X$.
\begin{enumerate}
 \item The set $X'$ is denoted as a \textnormal{local Pareto-optimal set} if and only if
\begin{equation}
 \forall \textbf{a'} \in X' : \nexists \textbf{a} \in X : \textbf{a} \prec \textbf{a'} \wedge \Vert \textbf{a} - \textbf{a'} \Vert < \epsilon \wedge \Vert f(\textbf{a}) - f(\textbf{a'}) \Vert < \delta
\end{equation}
where $\Vert \cdot \Vert$ is a corresponding distance metric and $\epsilon > 0,\delta > 0$.
\item The set $X'$ is called a \textnormal{global Pareto-optimal set} if and only if
\begin{equation}
   \forall \textbf{a'}\in X' : \nexists \textbf{a} \in X : \textbf{a} \prec \textbf{a'}
\end{equation}
\end{enumerate}
\end{definition}
\noindent Note that a global Pareto-optimal set does not necessarily contain all Pareto-optimal solutions. When referring to the entirety of the Pareto-optimal solutions, the term \textquotedblleft Pareto-optimal set\textquotedblright\ is used; the corresponding set of objective vectors is denoted as the \textquotedblleft Pareto-optimal front\textquotedblright.

%\ \\
\section{Conclusions}

In this chapter, the high-level synthesis process has been introduced and motivated, and its most important features and sub-tasks have been defined and detailed. The problem turns out to be far too complicated to be solved optimally, since the sub-tasks are heavily dependent on one another. Finally, a short description of a generic genetic algorithm has been presented to clarify the main reasons that led to the choice of this particular optimization technique: genetic algorithms are good candidates for such problems, for which deterministic heuristics have been shown to produce poor results.