\chapter{Proposed Methodology}\label{CH::ALG}
\markboth {Chapter \ref{CH::ALG}. Proposed Methodology}{}
The speculation techniques, presented in Section \ref{pre:spec} and detailed in Section \ref{soa:spec}, can improve the performance of the generated schedule by reducing the number of control steps and hence the latency. On the other hand, they require more registers to store the additional intermediate results.
Relevant works on register allocation have been widely presented, discussed and compared in Chapter~\ref{CH::SOA}. However, this comparative analysis has revealed that no existing approach adequately tackles the register overhead introduced by speculation.

\vspace{1em} \noindent
In this thesis, a new methodology is proposed to overcome some limits of the previous approaches to the register allocation problem.
The liveness analysis has been completely revised with respect to the literature: a new approach is presented, based on the state transition graph (STG) and on the SSA form. This new methodology required several changes to state-of-the-art algorithms and appears to have been insufficiently explored in the past. Moreover, the resulting liveness analysis is optimal and the minimum number of registers can be derived directly from the STG, without resorting to time-consuming algorithms (such as clique partitioning or vertex coloring).

\vspace{1em} \noindent
However, the proposed approach could introduce undesired register moves, which usually require additional logic elements (i.e., multiplexers). In particular, if a variable is alive across more than one cycle step boundary, assignments to different registers should be avoided as much as possible (i.e., it should always be assigned to the same register). For this reason, in order to minimize interconnection elements, the variables live out of each cycle step have to be properly assigned to registers. This methodology also includes a simple heuristic that tries to reduce such interconnections.

\vspace{1em} \noindent
To summarize the implementation details and to better understand how experimental results will be obtained, the entire high-level synthesis flow is surveyed with respect to the chosen target architecture.

\vspace{1em} \noindent
This Chapter is organized as follows. Section~\ref{pro:target} presents the target architecture addressed by this methodology, while the other Sections present the steps to obtain it. In particular, Section~\ref{pro:IR} presents the intermediate representation used in this methodology and Section~\ref{pro:constraints} explains how to specify the resource library and the design constraints. The speculative scheduling algorithm is detailed in Section~\ref{pro:scheduling} and the creation of the related speculative state transition graph is described in Section~\ref{pro:stg}.
The implementation details and limitations of liveness analysis and register allocation, the proposed register binding heuristic, the interconnection allocation and the final controller synthesis approach are presented from Section \ref{sec:proposed:liveness} to Section \ref{sec:proposed:controller}.


\section{Target Architecture}\label{pro:target}
Generally, the result of high-level synthesis is a description of a digital synchronous system in the structural domain at the register-transfer level. The target architecture can be characterized by:
\begin{itemize}
  \item a data part;
  \item a control part;
  \item communication via flags and control signals;
  \item discrete time steps (control steps);
  \item the mapping of the data and control flow in two dimensions: time and area.
\end{itemize}

\vspace{1em} \noindent
In this thesis, a \emph{datapath/controller multiplexer-based architecture} has been addressed. This means that the considered target architecture consists of an interacting datapath (the data part) and a controller described by a state transition graph (the control part). In addition, because of the choice of a faster and simpler multiplexed architecture, storage is provided by distributed registers and interconnection is established by multiplexers and nets. An example of the interaction between datapath and controller can be seen in Figure~\ref{fig:block_diagram}.

\begin{figure}[htb]
\centering
      \includegraphics[height=0.5\columnwidth]{./chapters/proposed_algorithm/images/block_diagram.JPG}
  \caption{High-level block diagram}\label{fig:block_diagram}
\end{figure}

\vspace{1em} \noindent
Behaviorally, the controller is viewed as a state transition graph that specifies the time steps in which the data operations are done. The controller guides the datapath in making computations by supplying it with control signals governing which path is selected through each multiplexer and which registers are loaded at each time step. Further, the datapath may provide the controller with status signals when control flow depends on data-dependent conditionals. Other inputs and outputs to the system include the data inputs and outputs to the datapath, and start and done signals to the controller to synchronize the exchange of input data and output results with the outside world.
An example of datapath/controller multiplexer-based architecture can be seen in Figure~\ref{fig:target_arch}.

\begin{figure}[htb]
\centering
      \includegraphics[height=0.4\columnwidth]{./chapters/proposed_algorithm/images/target_arch.JPG}
  \caption{Example of data path and state transition graph for a multiplexer-based architecture}\label{fig:target_arch}
\end{figure}

\section{Intermediate Representation}\label{pro:IR}
The behavioral specification is usually translated into an intermediate representation in order to be efficiently managed and analyzed, as detailed in Section \ref{pre:IR}.
The input considered in this work is a specification written in C language;
to build the internal representation, the GNU/GCC compiler has been interfaced and a parser of \textit{GIMPLE} has been implemented.

\vspace{1em} \noindent
GIMPLE is a simple but complete, target-independent intermediate representation. It is derived from the SIMPLE representation used by the McCAT compiler developed at McGill University~\cite{SIMPLE:IR}. The SIMPLE intermediate representation has been designed to support alias and dependency analysis and high-level loop and parallelization transformations.
Moreover, complex compound statements are broken into three-address form, using temporaries. All complex references to variables are broken into simple references, thus simplifying the alias analysis. The basic simple statements can be classified as assignments or call expressions. Moreover, GIMPLE follows the same policy as SIMPLE, avoiding implicit side effects in the conditions of control statements.
SIMPLE has some restrictions on control flow statements: they have to be compositional (only structured control flow is considered).
Since GCC aims at supporting all kinds of control statements, this assumption is not acceptable.
Therefore, GIMPLE lowers all control flow statements to \texttt{goto} and if-then-else statements.
Moreover, GCC manages loops by adding some annotations to the intermediate representation.


\subsection{From Specification to Internal Representation}\label{pro:gimple}
The first step in the synthesis flow is the translation of the behavioral specification from the GIMPLE internal representation of the \textit{GCC} compiler to the \textbf{internal graph representation} that will be used in the presented methodology. The specifications currently analyzed and supported range from C and C++ to SystemC descriptions, and the levels of abstraction considered go from logic to system level.
The GIMPLE data structure is dumped into an ASCII file by exploiting the debugging features of GCC (i.e., the \textit{-fdump-tree-oplower-raw} \textit{GCC} option). The dump produced with this option is generated on a per-function basis and therefore several GIMPLE tree nodes are unnecessarily duplicated. To avoid this problem, the tree dump functions of GCC have been slightly modified, removing some duplication and simplifying the format of the ASCII file. Following the grammar of these files, a parser has been built that rebuilds the GIMPLE data structure inside the framework, thus allowing an analysis of the GCC data structures independent of GCC itself. Obviously, the extraction of the GIMPLE information from GCC introduces some overhead, but it also allows a modular decoupling between the GCC compiler and the toolset. The GCC analysis and the GIMPLE parsing correspond to the first two steps performed by the framework to analyze the design specification.

\vspace{1em} \noindent
In the next step, a layer of functions and data structures is built, providing, for C and C++ specifications, the CFG of each function present in the specification, along with auxiliary functions providing information on the size and type of all data present in the C/C++ specifications. The CFG, the same extracted from GCC, represents the sequencing of the operations as described in the language specification. Each CFG node has an identifier, the list of variables read and written, and a reference to the corresponding GIMPLE node. Function calls are also associated with the control flow graph of the called function's body, if it is part of the specification. Given this information, a data dependency analysis is performed to identify the correlations between variable uses and definitions, used to create the DFG. At this point, the SSA-form representation has already been introduced by the compiler, so this dataflow analysis step is very fast. In addition to the control and data flow graphs, other graphs have also been analyzed, such as the system dependency graph (SDG). All graphs are then managed by a data structure used to contain the behavioral specification and all the related information.
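Since the specification is already in SSA form, the use/def analysis reduces to a lookup of each variable's unique definition site. The following is a minimal Python sketch of this idea; the statement format and names are illustrative, not the actual framework's data structures.

```python
def build_dfg(statements):
    """Each statement is (id, defined_var, used_vars). In SSA form every
    variable has exactly one definition, so a def-use edge goes from the
    unique defining statement to each statement reading the variable."""
    def_site = {}                      # variable -> id of its defining statement
    for sid, defined, _uses in statements:
        if defined is not None:
            def_site[defined] = sid
    edges = set()                      # (producer, consumer) data dependencies
    for sid, _defined, uses in statements:
        for v in uses:
            if v in def_site:          # primary inputs have no definition site
                edges.add((def_site[v], sid))
    return edges

# a_1 = x + y; b_1 = a_1 * 2; c_1 = a_1 + b_1   (x, y are primary inputs)
stmts = [(0, "a_1", ["x", "y"]),
         (1, "b_1", ["a_1"]),
         (2, "c_1", ["a_1", "b_1"])]
print(sorted(build_dfg(stmts)))   # [(0, 1), (0, 2), (1, 2)]
```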

\section{Resource Library and Design Constraints}\label{pro:constraints}
The \textbf{resource library} is the list of all components that can be used to implement the behavioral specification and is based on the technology of the synthesis tool or of the target device. Since a different set of operation types has to be associated with each library component, this information has to be stored together with other component-specific information, such as the area, latency and power consumption required to execute a specific operation type. A generic component stored in the resource library will therefore have the following information associated with it:
\begin{itemize}
\item The area occupied by the component, used to estimate the area occupied by the final design (in CLBs for FPGA designs and in $mm^2$ for ASIC designs).
\item The set of operation types that can be executed by the component.
\item The structural representation of the component (used to create the structural representation of the circuit). In particular, the external interface of the component will be used to interface it with its connected elements.
\end{itemize}
Then, for each operation type executed on a component, further information is stored:
\begin{itemize}
\item the execution time of an operation of the selected type on the component (in nanoseconds);
\item the initialization time of an operation of selected type on the component (in nanoseconds);
\item the power consumption needed for the execution of an operation (in $mW$).
\end{itemize}
The library is provided as an external file containing all the above information about the components. In this way, different libraries can be easily provided to support different synthesis tools or target devices. The format chosen to represent the library is the eXtensible Markup Language (XML), since it can easily store information in a hierarchical way and different libraries are available that make it simple to read and write this structure and to reconstruct the semantics of the library file.

\vspace{1em} \noindent
The \textbf{design constraints} represent the constraints imposed by the designer (or by the architecture of the target device) on the final design. For instance, for FPGA designs, a constraint that can be imposed is the maximum number of area units that can be used by the final RTL design, i.e., the maximum number of Configurable Logic Blocks (CLBs). Further constraints can be the maximum number of instances that can be allocated for a resource type (e.g., the maximum number of ALUs or registers).

\vspace{1em} \noindent
For each component, the following information is stored:
\begin{itemize}
\item the area $A$ of the component, in Configurable Logic Blocks (CLBs);
\item the set of operation types $T$ that the component is able to perform.
\end{itemize}
For each operation type $t \in T$ that the component is able to execute, additional information is provided:
\begin{itemize}
\item the cycle steps taken to execute an operation $o$ of type $t$ (i.e., $\tau(o) = t$, according to Definition~\ref{hls:operation_type});
\item the initialization time taken to start an operation $o$ of type $t$;
\item the power consumption spent to execute an operation $o$ of type $t$.
\end{itemize}
The information about the area $A$ occupied using a given technology (or target device) has been retrieved through the synthesis of the structural representation of the component with a synthesis tool (e.g., Altera~\cite{Altera} or Xilinx~\cite{Xilinx}). Information on power consumption can be obtained through simulation, where \textit{power} models have been plugged into the environment, or by profiling the component on a real device.
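As an illustration, a library entry describing an adder could be stored and read back as follows. The XML schema sketched here is hypothetical, since the exact format is not reproduced in this text, and Python's standard \texttt{xml.etree.ElementTree} stands in for whichever XML library is actually used.

```python
import xml.etree.ElementTree as ET

# Hypothetical library entry for an adder: the tag and attribute names
# are illustrative, the real schema may differ.
ADDER_XML = """
<library>
  <component name="adder" area="12">
    <operation type="plus" cycles="1" init_time="0.5" power="3.2"/>
    <operation type="minus" cycles="1" init_time="0.5" power="3.4"/>
  </component>
</library>
"""

def load_library(xml_text):
    """Rebuild the component descriptions from the XML text."""
    components = {}
    for comp in ET.fromstring(xml_text).iter("component"):
        ops = {op.get("type"): {"cycles": int(op.get("cycles")),
                                "init_time": float(op.get("init_time")),
                                "power": float(op.get("power"))}
               for op in comp.iter("operation")}
        components[comp.get("name")] = {"area": int(comp.get("area")),
                                        "operations": ops}
    return components

lib = load_library(ADDER_XML)
print(lib["adder"]["area"])                # 12
print(sorted(lib["adder"]["operations"]))  # ['minus', 'plus']
```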

\section{Scheduling}\label{pro:scheduling}

\fbox{Cordone article}

\section{Speculated State Transition Graph}\label{pro:stg}

\begin{figure}
\centering
\scriptsize
\begin{verbatim}
   WaveSched(CDFG G, Allocation_Constraints C, clock_period clk, Unroll_bound U, STG S)
   {

0:    SET<OPERATION> initial = get_1_level_operations(G);
1:    STATE S0;
2:    STATE parent_state = S0;
3:    QUEUE<STATE> State_q;
4:    ARRAY<SET<COMPOSITE_OPERATION>> Unscheduled_immediate_successors;
5:    SET<COMPOSITE_OPERATION> initial_composite = Make_composite(initial);

6:    loop_forever()
      {
7:       SET<COMPOSITE_OPERATION> condition_inputs = composite operations whose outputs
         are control dependency edges feeding operations in initial composite;
8:       foreach combination of conditions (condition, condition_inputs)
         {
9:          SET<COMPOSITE_OPERATION> S_condition =
            under_condition(condition, initial_composite);
10:         STATE new_st;
11:         loop_forever()
            {
12:            COMPOSITE_OPERATION new_C =
               select_composite_operation(S_condition, C, clk, U);
13:            if (new_C == NULL) break;
               else
               {
14:               add_composite_operation_to_state(new_C, new_st);
15:               add_schedulable_successors(new_C, S_condition);
16:               remove_composite_operation(S_condition, new_C);
               }
            }
17:         S_successors = S_condition;
18:         if (new_st is identical to an existing state, P)
19:            add an arc in S, labeled condition, from parent state to P;
            else
            {
20:            add an arc in S, labeled condition, from parent state to new st;
21:            Unscheduled_immediate_successors[new_st] = S_condition;
22:            append(new_st,State_q);
            }
         }
23:      if (is_empty(State_q) == 1) break;
24:      STATE s = dequeue_top(State_q);
25:      initial_composite = Unscheduled_immediate_successors[s];
      }
   }
\end{verbatim}
  \caption{StgCreator Algorithm}\label{fig:stgcreator}
\end{figure}

Starting from the algorithms proposed in~\cite{BIB::SP_1,BIB::SP_2}, an algorithm has been implemented that creates the state transition graph (STG) associated with the specification. The STG will be used to represent the scheduling and the flow of control during the execution.
The motivation is that, in those algorithms, the design constraints that the designer may specify are difficult to take into account.

\vspace{1em} \noindent
For this reason, the proposed algorithm, described by the pseudo-code shown in Figure~\ref{fig:stgcreator}, integrates the principles of the \textit{wavesched} algorithm with the capabilities of the scheduling formulation described in Section~\ref{pro:scheduling}.

\vspace{1em} \noindent
The algorithm creates the state transition graph as follows. The first step is the identification of the set of operations, \textit{initial}, that can be scheduled initially. The elements of this set are operations that depend only upon primary inputs. At this stage, the conditions under which different subsets of \textit{initial\_composite} execute are captured and the states responsible for scheduling the corresponding subsets are created. This is done by first identifying the control dependency edges feeding the composite operations in \textit{initial\_composite} (statement \texttt{7}). The analysis can be easily performed on the CDG by looking at the edges incoming to those operations. Each of these control edges can evaluate either to \textit{true} or \textit{false}. Different combinations of truth values on the control dependency edges result in the activation of different subsets of \textit{initial\_composite}. Statement \texttt{8} identifies the different combinations of truth values on the control dependency edges and statement \texttt{9} extracts the subset of \textit{initial\_composite} that is activated by each combination of truth values. Then the operations to be executed are chosen. In this way, the state grows as the scheduling algorithm orders the operations in \textit{S\_condition} and their successors. This expansion continues until no more operations are scheduled in the same control step by the scheduling algorithm. Once the new state has been created, it has to be checked whether it is identical to any existing state. If it is, an arc labeled with the condition is added from the parent state to the existing one; otherwise, an arc is added between the parent state and the newly created one. If no truth values are associated with this new state, there is an unconditional transition between these two states. The immediate successors of \textit{new\_st} are invested with the responsibility of scheduling the composite operations remaining in \textit{S\_condition}.
\textit{new\_st} is then appended to \textit{State\_q} and eventually dequeued. Its frontier constitutes \textit{initial\_composite}, which is handled in the manner described above.
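The worklist structure of the algorithm can be sketched in a few lines of Python. This is a strongly simplified skeleton: conditional handling is abstracted away, a state is identified by the set of operations it executes, and \texttt{select} and \texttt{enable} are placeholders for the scheduler-driven selection and for the dependency analysis.

```python
from collections import deque

def create_stg(initial, select, enable):
    """select(ready) -> set of operations executed in the new state;
    enable(done) -> operations made schedulable by executing `done`."""
    states, arcs = [], []          # states[i]: ops of state i; arcs: (parent, child)
    index = {}                     # state contents -> state id (merges identical states)
    queue = deque([(None, frozenset(initial))])
    while queue:
        parent, ready = queue.popleft()
        if not ready:
            continue
        executed = frozenset(select(set(ready)))
        if executed in index:      # identical to an existing state: only add the arc
            sid = index[executed]
        else:
            sid = len(states)
            states.append(executed)
            index[executed] = sid
            leftover = (set(ready) - executed) | set(enable(executed))
            queue.append((sid, frozenset(leftover)))
        if parent is not None:
            arcs.append((parent, sid))
    return states, arcs

# Tiny example: a and b execute first and enable c.
states, arcs = create_stg(
    {"a", "b"},
    select=lambda ready: set(ready),
    enable=lambda done: {"c"} if "a" in done else set())
print(len(states), arcs)   # 2 [(0, 1)]
```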

\subsection{Selecting the state operations}

The main difference between the \textit{wavesched} and the \textit{stgcreator} algorithms is how they select the operations to be executed in each state. In fact, since the process of selecting the operations can be computationally intensive, different approaches can be adopted.

\vspace{1em} \noindent
The \textit{wavesched} algorithm uses a heuristic based on the fact that operations in the CDFG which feed primary outputs through long paths are more critical (i.e., less mobile) than operations which feed primary outputs through short paths and, therefore, the former need to be scheduled earlier. The length of a path is measured as the sum of the delays of its constituent operations. This approach is interesting since it integrates the scheduling inside the state transition graph creation, but it has significant drawbacks. In fact, in this way, the resulting scheduling is optimized from the latency point of view, but the algorithm cannot control the impact on the final area. The result can be a design with a huge global area due to an inefficient assignment of operations to control steps.

\vspace{1em} \noindent
For this reason, in this thesis, the scheduling is performed beforehand and its results are used during the state transition graph creation. In this way, the problem has been decoupled and the scheduling can be addressed more efficiently. In fact, as described in Section~\ref{pro:scheduling}, the proposed scheduling technique is able to take design constraints into account simply by adding further constraints to the ILP formulation. Once the scheduling results have been obtained, the selection of operations during state creation is very simple. The candidate operations are partitioned by the control steps in which they are scheduled and the earliest operations are chosen. For example, if operations \textit{A}, \textit{B} and \textit{C} are ready for execution, the algorithm looks at the scheduling results. In this way, all the effects of scheduling are taken into account. If operations \textit{A} and \textit{C} are scheduled in control step \textit{2} and \textit{B} in control step \textit{1}, the state will simply be composed of operation \textit{B}. Then, in the next state, operations \textit{A} and \textit{C} are still candidates, together with the operations that become ready after the execution of operation \textit{B}. Continuing the example, if operation \textit{B} enables the execution of operation \textit{D}, which the scheduling has assigned to control step \textit{3}, the next state will be composed of operations \textit{A} and \textit{C}, since they have been scheduled earlier.
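The selection rule can be sketched in a few lines of Python, reproducing the example with operations \textit{A}, \textit{B}, \textit{C} and \textit{D}; this is a hypothetical illustration of the criterion, not the actual implementation.

```python
def select_earliest(ready, cstep):
    """Partition the ready operations by their assigned control step and
    pick those scheduled in the earliest one."""
    first = min(cstep[o] for o in ready)
    return {o for o in ready if cstep[o] == first}

# Hypothetical scheduling results for the example in the text.
cstep = {"A": 2, "B": 1, "C": 2, "D": 3}
print(sorted(select_earliest({"A", "B", "C"}, cstep)))   # ['B']
print(sorted(select_earliest({"A", "C", "D"}, cstep)))   # ['A', 'C']
```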

\section{Liveness Analysis and Register Allocation}\label{sec:proposed:liveness}

The key to providing good solutions to the \textit{register allocation} problem is the procedure used to recognize the overlapping of lifetime intervals. Since a register is needed for the values alive between two control steps, the analysis can be easily performed on the finite state machine graph. In fact, a vertex in this graph represents all the operations executed in a control step. The values that will be needed later are alive across the edges outgoing from this vertex. Moreover, an edge represents the transition from one control step to the next, so the values alive between two control steps are the values alive on this edge. The liveness analysis, presented by Appel~\cite{BIB::APPEL} and then used also by Brisk~\cite{brisk}, makes it possible to compute, for each edge, which variables are alive.
In such a way, the solution to the register allocation problem will use the minimum number of registers. When variables are not alive at the same time, there are no conflict edges between them, so the algorithm may assign the same color to both variables. This means that they can share the same register, since their values are never alive at the same moment.

\vspace{1em} \noindent
In this Section, the liveness analysis is presented. The algorithm described here is the most innovative part of this thesis, since the application of liveness analysis to the state transition graph has never been explored in the past.
The liveness analysis proposed in this thesis is based on the liveness equations presented in \cite{BIB::APPEL}, modified to take into account the concurrent execution of different operations in the same state and the effect of speculation. Moreover, since at this point of the analysis a data dependence graph has already been computed, the information about read and written variables can be exploited to compute also the \textit{use/def} chains with a reduced computational effort.
This analysis uses two different data structures: the DFG and the CDG (see Section \ref{pre:graphs} for details). The former provides information about the definitions and uses of the variables in the operations, while the latter contains information about control dependencies.

\vspace{1em} \noindent
The algorithm is based on a set of backward, recursive equations that, starting from the $Exit$ state, compute the variables alive at each state transition until a fixed point is reached.
The equations have been extended in the following way:
\begin{eqnarray}
in[n] & = & use[n] \cup (out[n] - def[n])\label{eqn:incoming} \\
out[n] & = & \bigcup_{s \in succ[n]} in[s]\label{eqn:outcoming}
\end{eqnarray}
where $n$ is a vertex of the state transition graph.

\vspace{1em} \noindent
The terms of Equation~\ref{eqn:incoming} are defined as follows:
\begin{itemize}
\item $use[n]$ are the variables that are used by the operations executed in the current state $n$;
\item $out[n]$ are the variables live-out at the current state $n$;
\item $def[n]$ are the variables defined by the operations executed in the current state $n$.
\end{itemize}

\vspace{1em} \noindent
The term of Equation~\ref{eqn:outcoming} is defined as follows:
\begin{itemize}
\item $in[s]$ are the variables live-in at the state $s$, where $s$ is a successor of state $n$ in the state transition graph.
\end{itemize}

\vspace{1em} \noindent
A solution to these equations can be found by iterating: $in[n]$ and $out[n]$ are initialized to the empty set, then the equations are repeatedly treated as assignments until a fixed point is reached. The convergence of this algorithm can be significantly sped up by ordering the nodes properly; this can be easily done with a post-order ordering.
Additional features can be exploited since the DFG and the CDG have already been computed. In particular, when the $use[n]$ set is computed, each variable is also annotated with its definitions and the related uses. Let $n = \lbrace o_1, o_2, \ldots, o_m \rbrace$ be the set of operations executed in the state $n$; each operation $o_i$ is analyzed to obtain the set of variables it reads. Once the variables are obtained, the defining operations (sources of the incoming edges to operation $o_i$) can be retrieved from the DFG. In this way, let $a$ and $b$ be the two variables read by operation $o_i$ and defined respectively by operations $o_a$ and $o_b$; the contribution to the set $use[n]$ due to operation $o_i$ is defined as follows:
\begin{eqnarray}
\begin{array}{cccccc}
a & \lbrace & (o_a) & ; & (o_i) & \rbrace \\ 
b & \lbrace & (o_b) & ; & (o_i) & \rbrace
\end{array}
\end{eqnarray}
where the former set represents the \textit{defs} and the latter represents the \textit{uses} for that variable. The remaining operations are modified according to this definition. In particular, the computation of $out[n] - def[n]$ is modified as follows. Let $o_i$ be an operation executed in the state $n$ and contained in the \textit{def} set of one of the variables contained in $out[n]$; since this means that the definition occurs in this state, the operation $o_i$ is removed from the set of operations defining the variable. If the set becomes empty, no more definitions have to be found and, therefore, the variable's life is terminated: a \textit{kill} occurs and the variable is not propagated anymore.

\vspace{1em} \noindent
According to the \textit{kill} definition just provided, the control flow and, in particular, the speculation can easily be taken into account. In fact, a definition is also killed when the variable is propagated along a \textit{wrong} control flow. In particular, when Equation~\ref{eqn:outcoming} is computed, the variables incoming to each state $s_i$, successor of state $n$, are propagated. However, some variables and, in particular, some definitions may not be compatible with this propagation. For example, let $s_i$ be a join node of the state transition graph, closing an \textit{if} condition, and let state $n$ be the last state of the \textit{true} branch of that conditional evaluation. The definitions of variables that refer to the \textit{false} branch must not be propagated on the \textit{true} branch, since the related operations will never be found there. This situation can be easily tested on the CDG. In fact, when a variable is back-propagated from the state $s$ to the state $n$, the edge between $n$ and $s$ is evaluated and the related conditions are considered. Then, the operations in the definition set of the variable that are not compatible with these conditions are removed. Therefore, if the definition set becomes empty, the variable is not propagated.

\vspace{1em} \noindent
In this way, the \textit{def/use} chains are computed while the liveness analysis is performed and, for each variable, the actual lifetime interval can be obtained. Then, since an edge in the state transition graph represents a cycle step boundary, the variables alive on that edge are exactly the variables alive across that cycle boundary. For this reason, the compatibility/conflict graph does not have to be created anymore, since the register allocation can be performed directly: let $n$ be the maximum number of variables alive between two states (i.e., alive on an edge of the state transition graph); the minimum number of registers needed to store all the intermediate results is equal to $n$.
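A minimal Python sketch of this analysis on a toy three-state STG follows; it implements only the basic fixed point of Equations~\ref{eqn:incoming} and~\ref{eqn:outcoming}, without the \textit{def/use} annotations and the speculation-aware kills described above.

```python
def liveness(states, succ, use, define):
    """Iterate in[n] = use[n] | (out[n] - def[n]) and
    out[n] = union of in[s] over s in succ[n] until nothing changes."""
    live_in = {n: set() for n in states}
    live_out = {n: set() for n in states}
    changed = True
    while changed:
        changed = False
        for n in states:           # a post-order visit would converge faster
            new_out = set().union(*[live_in[s] for s in succ[n]])
            new_in = use[n] | (new_out - define[n])
            if new_in != live_in[n] or new_out != live_out[n]:
                live_in[n], live_out[n] = new_in, new_out
                changed = True
    return live_in, live_out

def min_registers(states, succ, live_in):
    """Variables alive on edge (n, s) are those live-in at s; the register
    bound is the maximum over all STG edges."""
    return max((len(live_in[s]) for n in states for s in succ[n]), default=0)

# Toy STG: S0 defines a and b, S1 uses a and defines c, S2 uses b and c.
states = ["S0", "S1", "S2"]
succ = {"S0": ["S1"], "S1": ["S2"], "S2": []}
use = {"S0": set(), "S1": {"a"}, "S2": {"b", "c"}}
define = {"S0": {"a", "b"}, "S1": {"c"}, "S2": set()}
live_in, _ = liveness(states, succ, use, define)
print(sorted(live_in["S1"]), min_registers(states, succ, live_in))  # ['a', 'b'] 2
```

Here two registers suffice: $a$ and $b$ are alive on the edge $S0 \rightarrow S1$, while $b$ and $c$ are alive on $S1 \rightarrow S2$.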


\section{Register Binding Heuristic}\label{sec:proposed:registerbinding}

The final step is an effective assignment of variables to the registers in order to minimize the interconnection elements to be allocated.
Therefore, a heuristic has been implemented to bind each variable to a register, reducing the register moves as much as possible.

\fbox{heuristic description?}

\section{Interconnection Allocation}\label{sec:proposed:interconnection}

At this point, all the elements in the circuit have to be connected. In datapath synthesis, each register output needs to be transferred to the input of a functional unit, and each functional unit output to the input of a register (or to the input of another functional unit, if chaining occurs).
Since a mux-based architecture has been chosen, the objective of the interconnection allocation algorithm is to maximize the sharing of interconnection units, while still supporting the conflict-free data transfers required by the register-transfer description.
The algorithm computes the data transfers that can occur and creates a connection between each source element and the corresponding target one.
A data transfer is defined by:
\begin{itemize}
\item a source element $obj_{src}$; it could be an input port, the output of a register or a functional unit;
\item a target element $obj_{tgt}$; it could be an output port, the input of a register or a functional unit;
\item the pins $P$ of the target element $obj_{tgt}$ where the communication path will be attached;
\item a value to be transferred;
\item a set of control steps when the data transfer could be required;
\item the operation that requires the data transfer.
\end{itemize}
This information can be easily retrieved from the results of the previous steps. In fact, data transfers are requested when an operation is executed, since its input values need to be transferred from their locations to the inputs of the functional unit where the operation is executed. Moreover, these locations can depend on the control flow and on which operations have already been executed.
So it is natural to associate each operation with the control states where it can be executed.
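The fields listed above can be collected in a simple record; the following Python sketch is purely illustrative, with hypothetical field and object names.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DataTransfer:
    source: str          # input port, register output, or functional unit output
    target: str          # output port, register input, or functional unit input
    pins: tuple          # pins of the target where the path is attached
    value: str           # value to be transferred
    steps: frozenset     # control steps in which the transfer may be required
    operation: str       # operation that requires the transfer

# Hypothetical transfer: value a_1 moves from a register to an adder input.
t = DataTransfer("reg0.out", "adder0.in", ("op_a",), "a_1", frozenset({2}), "op3")
print(t.value, t.target)   # a_1 adder0.in
```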

%\begin{algorithmic}[1]
%\IF {Cond}
%\STATE a = b + 1;\alglabel{prima}
%\ELSE
%\STATE a = 2 * b;\alglabel{seconda}
%\ENDIF
%\STATE c = a * b;\alglabel{terza}
%\end{algorithmic}

Once the connection has been created, it will have to be specialized. In fact, if a target object is connected to only one source object, a direct connection can be implemented. Instead, if a target object is connected to different source objects, a multiplexer has to be introduced to choose, during the execution, which connection is required at each moment. The \textit{moment} is defined as the state of the finite state machine in which that connection will have to be active. Thus, the information associated with this state (e.g., the operations executed and the related conditions) is unique and can be used to construct the decoding logic for the multiplexer selector.

In general, more connections can be created to the same operand of a functional unit. This can happen in different situations:
\begin{itemize}
\item the value involved in the connection can come from different locations;
\item the functional unit executes more than one operation and the operations require values from different locations.
\end{itemize}
When there is more than a single connection to an operand of a functional unit (or to a register input), a multiplexer is needed. Otherwise, a direct connection can be created. In general, if there are $N$ connections coming to the same input, a multiplexer having $N$ inputs and one output is needed. The multiplexer with $N$ inputs can be easily converted into a tree of $N - 1$ multiplexers having only two inputs. The solution with two-input multiplexers is preferred since a single module has to be specialized in the resource library, independently of the number of inputs. Moreover, with the two-input solution, it is easier to calculate the logic function used to perform the selection. The selection logic is based only on the enable signals and on the information on conditional evaluations coming from the controller. In fact, when an operation has to be executed, the related enable signal is raised by the controller. The enable signal alone is not sufficient to identify in which state of the state transition graph the operation is executed. So the conditional evaluations coming from the controller and related to the \textit{if} statements allow the datapath to recognize which operation is actually executed and, thus, which connection is actually involved. The selection function is created as a \textit{truth table} that takes into account the enable signals and the conditional evaluation signals to select the right input to be transferred to the input of the functional unit.
To avoid undesired side effects on memory elements, a write enable signal has also been provided. Therefore, the writing on a register can be performed only when the related enable signal has been raised, i.e., only when the operation that produces the data value is executed. Starting from this information, the truth table for the write enable signal can be computed in the same way as for the multiplexer selectors, based on the activation and control evaluation signals.
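The conversion of an $N$-input multiplexer into $N - 1$ two-input multiplexers can be sketched as follows; this is an illustrative Python model that only builds the tree and counts the allocated units, while the selection truth tables are not modeled.

```python
def mux_tree(inputs):
    """Reduce the input list pairwise; each pairing allocates one
    two-input mux, so N inputs need exactly N - 1 of them."""
    allocated = 0
    layer = list(inputs)
    while len(layer) > 1:
        nxt = []
        for i in range(0, len(layer) - 1, 2):
            nxt.append(("mux2", layer[i], layer[i + 1]))
            allocated += 1
        if len(layer) % 2:               # odd input forwarded to the next layer
            nxt.append(layer[-1])
        layer = nxt
    return layer[0], allocated           # root of the tree, number of 2-input muxes

tree, count = mux_tree(["in0", "in1", "in2", "in3", "in4"])
print(count)   # 4  (N - 1 two-input multiplexers for N = 5)
```

With a single input the function returns the input itself and allocates no multiplexer, which corresponds to the direct connection case.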

\section{Controller Synthesis}\label{sec:proposed:controller}

\fbox{I think this can be removed: the synthesizer does it, not us}

The controller is the part of the circuit that computes the evolution of the control flow, based on the conditional inputs coming from the evaluation of the control constructs in the initial specification. The controller is created by translating the state transition graph into a finite state machine, where the states are the vertices of the graph, the inputs are the control condition evaluations coming from the datapath computations and the outputs are the activation signals for the operations that have to be executed at each step by the datapath. To help the construction of the decoding logic of the multiplexers (as described in Section~\ref{sec:proposed:interconnection}), the information about the actual values of the control conditions is also provided as output from the controller to the datapath.


\section{Conclusions}\label{sec:proposed:conclusions}

In this Chapter, the proposed methodology has been presented and detailed.
