%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%2345678901234567890123456789012345678901234567890123456789012345678901234567890
%        1         2         3         4         5         6         7         8

\documentclass[letterpaper, 10 pt, conference]{ieeeconf}  % Comment this line out
                                                          % if you need a4paper
%\documentclass[a4paper, 10pt, conference]{ieeeconf}      % Use this line for a4
                                                          % paper

\IEEEoverridecommandlockouts                              % This command is only
                                                          % needed if you want to
                                                          % use the \thanks command
\overrideIEEEmargins
% See the \addtolength command later in the file to balance the column lengths
% on the last page of the document



% The following packages can be found on http://www.ctan.org
\usepackage{graphicx} % for pdf, bitmapped graphics files
\usepackage{epsfig} % for postscript graphics files
\usepackage{mathptmx} % assumes new font selection scheme installed
\usepackage{amsmath} % assumes new font selection scheme installed
\usepackage{times} % assumes new font selection scheme installed
\usepackage{algorithm}
\usepackage{algorithmic}
%\usepackage{amsmath} % assumes amsmath package installed
%\usepackage{amssymb}  % assumes amsmath package installed
  \usepackage{colortbl}
  \usepackage{wasysym}
  \usepackage{multirow}


\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmictrue}{\textbf{template}}
\renewcommand{\algorithmicfalse}{\textbf{int}}

\algsetup{linenosize=\footnotesize}

\title{\LARGE \bf
Evaluating Regularity Extraction Pre and Post Logic Synthesis
}

%AUTHORS -- it is acceptable to add lines for affiliations (e.g., department).

\author{ \parbox{3.4 in}{\centering Fabrizio Ferrandi, Gerardo Gallucci, Christian Pilato, \\ Angelo Rosiello and Donatella Sciuto\\
         \textit{DEI - Politecnico di Milano}\\
	 \textit{\{ferrandi,pilato,sciuto\}@elet.polimi.it}}
         \hspace*{0.0 in}
         \parbox{3.4 in}{ \centering Davide Pandini\\
         \textit{STMicroelectronics}\\
         \textit{davide.pandini@st.com}}
}
% For authors with multiple affiliations, use multiple centered parboxes, as follows...
%\author{ \parbox{2 in}{\centering Huibert Kwakernaak\\
%         \textit{University of Twente}\\
%         \textit{h.kwakernaak@autsubmit.com}}
%         \hspace*{0.5 in}
%         \parbox{2 in}{ \centering Pradeep Misra\\
%         \textit{Wright State University}\\
%         \textit{pmisra@cs.wright.edu}}
%}
%


\begin{document}


\maketitle
\thispagestyle{empty}
\pagestyle{empty}


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{abstract}

Modern chip integration scales require further investigation to improve performance, power,
functionality, yield, and manufacturability. In this paper, we propose a new and effective
algorithm for structural regularity extraction from logic netlists, and we analyze the
advantages and disadvantages of introducing regularity extraction into a standard ASIC
design flow. In particular, we compare the results obtained with regularity extraction
\emph{pre} and \emph{post} logic synthesis on a set of ISCAS benchmarks, and we show
that \fbox{\ldots}. Finally, we present the results of a case study (the DLX microprocessor)
to show how the proposed methodology can be effectively applied to
real designs.

\end{abstract}


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Introduction}

Most modern VLSI designs are strongly dominated by datapath circuitry, which makes it possible to
satisfy classic performance design metrics. One of the main characteristics of datapath circuitry is
the presence of many regular structures (i.e., \textit{templates}) that, if preserved and exploited
during the design flow, allow high-density layouts in
placement~\cite{Nijssen1997}\cite{Arikati1997}\cite{Kutzschebauch2000},
improved delay~\cite{Kutzschebauch2000}, and better yield and manufacturability~\cite{Kheterpal2005}.
Only a limited number of solutions to exploit regularity during the datapath design stages have been
proposed in recent years.
Datapath compilers (DPCs) profit from the regularity in the datapath specification and generally
use a bit-slice structure for the implementation. In this case, it is necessary to separate the
datapath design phases from the rest of the system design, typically binding the solution to the
technology and adding non-negligible complexity due to the subsequent integration step. This kind
of solution also decreases flexibility if the original requirements of the system change slightly
during the design life cycle. New methodologies that extract and exploit regularity while improving
standard ASIC design flows are therefore needed.

Several authors have proposed techniques for regularity extraction from the behavioral and structural
description of datapath-dominated circuits. Many of them address the problem of covering a circuit
given a pre-defined library of templates~\cite{Nijssen1997}\cite{Arikati1997}\cite{Rao1992}\cite{Rao1993}.
The major problem with these approaches is the creation of suitable sets of templates. Another class
of solutions, proposed by Chowdhary et al.~\cite{Chowdhary1998}, is based on the automatic generation
of templates for a given circuit, followed by a covering phase. Even though interesting results were
produced, strong limitations derive from the high computational complexity of the proposed
algorithms and from the restrictive assumptions adopted. To reduce the time complexity, Chowdhary et
al. do not identify all multi-output templates but only a subset of them, i.e., single
principal-output graphs.
Similar techniques are used by graph mining approaches such as Subdue~\cite{Holder1994}. The aim
of~\cite{Holder1994}, however, is not the detection and improvement of regularity but the minimization
of the graph representation, much as multi-level synthesis area minimization approaches do.

Despite these approaches to regularity extraction, regularity preservation throughout the design
flow has rarely been considered. Kutzschebauch and Stok~\cite{Kutzschebauch2000} address this
problem, showing very good results in terms of the identified structural regularity and the
amount of regularity exploitable during physical design. The weakness of this approach is
the extraction algorithm, which identifies only a low degree of regularity. For
this reason we introduce a new algorithm that extracts a very large set of
templates (compared to the existing approaches) by performing a regular backward and forward
expansion, starting from a set of compatible gates (i.e., the seeds). The most suitable subset
of all the identified templates is then used to cover the circuit, expressing it in a hierarchical
and regular form. Once regularity has been extracted, the resulting hierarchical circuit contains
regular structures that can be placed in rows and columns, producing physical clusters that
yield much denser layouts and also simplify the overall placement task~\cite{Kutzschebauch2000}.

To the best of our knowledge, while Kutzschebauch and Stok~\cite{Kutzschebauch2000} propose
a regularity-driven logic synthesis approach, no significant investigation has quantified
the impact of regularity extraction on the standard ASIC design flow in terms of
area or delay. Moreover, even if it is widely acknowledged that logic synthesis destroys a
considerable amount of regularity \cite{Chowdhary1998}\cite{Chan99challenges}\cite{Kutzschebauch1999},
no quantifiable results have been reported. In this paper we address these issues by analyzing the
advantages and disadvantages of extracting and preserving regularity pre and post logic synthesis. In
particular, we extract regularity pre logic synthesis by constraining the logic synthesis tool
to work on a hierarchical circuit structure, where ungrouping or flattening is not allowed.
On the other hand, we identify regular subcircuits working on the post logic synthesis netlist,
which also reduces the graph size and hence the complexity of the regularity extraction problem.
Finally, we compare the obtained results in terms of the amount of extracted regularity
and the area penalties.
The remainder of the paper is structured as follows.\fbox{sistema}

\section{Formulation of Regularity Extraction and Clustering Problem}\label{sec:problem}
Given an input circuit $C$, the problem of regularity extraction consists of identifying
recurrent structurally equivalent subcircuits. To formalize the problem, the circuit can be
expressed as a directed labeled graph $G(V,E)$~\cite{DeMicheli1994}, where $V$ is the
set of vertices corresponding to the circuit logic functions, while $E \subseteq V \times V$
is the set of edges representing the interconnections among the circuit components. Each
graph vertex is labeled by a function that maps the circuit logic functions onto the graph
vertices.
A subgraph $G_i(V_i, E_i)$ of $G$ is \emph{consistent} if and only if (iff) $V_i \subseteq V$,
$E_i \subseteq E$, and $G_i$ is connected. Hence, a consistent
subgraph of $G$ is a subcircuit of $C$. Two subgraphs $G_i$ and $G_j$ are \emph{equivalent} iff:
\begin{itemize}
\item They are isomorphic;
\item The functionalities of corresponding vertices are the same.
\end{itemize}

The \textit{template generation problem} can be defined as follows:

\begin{quote}
\textbf{Template Generation Problem}: \emph{Given a circuit $C$ expressed by a graph $G(V,E)$,
find the consistent subgraphs of $G$ that are not completely included in any other subgraph
and that have at least one other equivalent subgraph in $G$.}
\end{quote}

Given a circuit $C$, the objective of our methodology is to identify a set of regular groups
that represent the clusters for the physical design stage. As shown in \fbox{Fig. 1}, a \emph{regular
group} is a rectangular block containing the subgraphs of one identified template. The size of a
regular group, $S_{grp}$, is given by the product of the height $h$ and width $w$ of the considered block.
The height and width of each block are computed from the heights and widths of the components of the
template. For example, considering the group in \fbox{Fig. 1}, the height and width are given by:

\[\left\{
\begin{array}{l}
  h = 3 \cdot height(AND2)\\
  w = width(AND2) + width(NOT) + width(BUF)~+ \\
   ~~~~~ width(XOR2) + width(FFD) \\
\end{array} \right. \]

Notice that the height of the cells in the blocks is defined in the technology library and is a constant
depending on the adopted technology. Thus, $height(AND2) = height(NOT) = height(BUF) = \ldots$ for all cells in the library.
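As an illustration, the group size computation above can be sketched in C++ (the \texttt{Cell} structure and function names are ours, for illustration only, and do not reflect the actual PandA data structures):

```cpp
#include <cassert>
#include <vector>

// Illustrative cell dimensions; in a standard-cell library all cell
// heights are equal, as noted above.
struct Cell { double height; double width; };

// Height of a regular group: the template instances are stacked in rows,
// so the group height is the number of instances times the (common) cell
// height, and the group width is the sum of the template's cell widths.
double group_height(int n_instances, const std::vector<Cell>& tmpl) {
    return n_instances * tmpl.front().height;
}
double group_width(const std::vector<Cell>& tmpl) {
    double w = 0.0;
    for (const Cell& c : tmpl) w += c.width;
    return w;
}
// Group size S_grp = h * w, as defined in the text.
double group_size(int n_instances, const std::vector<Cell>& tmpl) {
    return group_height(n_instances, tmpl) * group_width(tmpl);
}
```

For the example above (three instances of a five-cell template), the group height is three cell heights and the group width is the sum of the five cell widths.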

To measure the amount of structural regularity that improves area and delay cost functions in the 
final physical layout we introduce the \textit{regularity physical index} $RI_p$ as defined in \cite{Kutzschebauch2000}:

\[
 RI_p = \frac{1}{n_{grp} + n_{nr}}\cdot\left(n_{nr} + \sum_i \left( S_{grp_i} \cdot \frac{2\cdot\sqrt{S_{grp_i}}}{h_i + w_i} \right) \right) -1
\]

where $n_{grp}$ is the number of regular groups and $n_{nr}$ is the number of gates not belonging to any regular group, i.e., the glue logic, where each gate forms a group of size 1. As explained in \cite{Kutzschebauch2000}, the larger the height or width of a group, the more likely it is to become frozen at an early placement stage, probably in a bad position. The physical regularity index takes this into account through the factor $\frac{2\cdot\sqrt{S_{grp}}}{h + w}$, which penalizes elongated groups: for a fixed size $S_{grp}$, the denominator $h + w$ is minimized by a square, so the factor is maximized (equal to 1) when the group is square.
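The $RI_p$ computation can be sketched as follows (a minimal C++ rendering of the formula above; the \texttt{Group} structure and function name are illustrative, not part of the actual implementation):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// One regular group, of size S_grp = h * w (see the definition above).
struct Group { double h, w; };

// Physical regularity index RI_p as defined in the text: n_nr is the
// number of gates outside any group (glue logic, each of size 1); each
// group contributes its size weighted by the shape factor
// 2*sqrt(S) / (h + w), which equals 1 for a square group.
double physical_regularity_index(const std::vector<Group>& groups, int n_nr) {
    double sum = n_nr;
    for (const Group& g : groups) {
        double s = g.h * g.w;                           // S_grp
        sum += s * (2.0 * std::sqrt(s)) / (g.h + g.w);  // shape-weighted size
    }
    return sum / (groups.size() + n_nr) - 1.0;
}
```

Note that a circuit made only of glue logic yields $RI_p = 0$, and a single square group of size $S$ with no glue yields $RI_p = S - 1$, consistent with the formula.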

The \textit{regularity index} ($RI$) is another widely used index that measures the amount of extracted regularity before the technology mapping step, without considering any geometrical aspect of the templates. It was introduced in \cite{Chowdhary1998} and is defined as the area of all the templates in the cover, given by $\sum_i area[S_i]$, as a percentage of the total post-mapping area of the graph $G$, given by $\sum_i |S_i|\cdot area[S_i]$.
Given a set of templates, the \textit{graph covering problem} can be expressed as:

\begin{quote}
\textbf{Template Graph Covering Problem}: \emph{Given a circuit $C$ represented by a graph $G(V, E)$ and a set of templates $T$, find the optimal subset of $T$ to cover $G$ with respect to the objective function.}
\end{quote}

Since the problem of finding an optimum graph cover is NP-complete \cite{DeMicheli1994}, we solve it heuristically by choosing one template at each iteration and removing (covering) all the corresponding subgraph instances in the input graph, following a main criterion.

\section{Regularity Extraction Algorithm}\label{sec:algorithm}
In recent years, many algorithms have been proposed to extract structural regularity from datapath circuits. All the existing solutions are either computationally very expensive or not effective enough. Starting from one of the most recent effective and efficient algorithms \cite{Rosiello2007}, we drastically speed up its performance by introducing a constrained local optimal search driver, without substantially deteriorating its effectiveness.
The solution proposed in \cite{Rosiello2007} identifies isomorphic subgraphs by means of cryptographic hash functions, thus reducing the problem of matching two equivalent subgraphs to the detection of hash collisions. During the matching process, the graph is explored in a backward and forward fashion, allowing the identification of multi-output subgraphs and of particular shapes, such as stars.
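The idea of reducing subgraph matching to hash comparison can be illustrated with the following C++ sketch (a toy, unmemoized version: \texttt{std::hash} stands in for the cryptographic hash of \cite{Rosiello2007}, and the \texttt{Vertex} representation is ours, not PandA's; equal signatures only flag candidate matches, which must then be confirmed):

```cpp
#include <algorithm>
#include <cassert>
#include <functional>
#include <string>
#include <vector>

// Toy DAG: each vertex has a cell-type label and a list of fanin vertices.
struct Vertex { std::string label; std::vector<int> fanin; };

// Recursive structural signature of the cone rooted at v: the hash of the
// vertex label combined with the (order-independent) signatures of its
// fanins. Two structurally equivalent cones with equal labels produce the
// same signature, so matching reduces to detecting equal hash values.
std::size_t signature(const std::vector<Vertex>& g, int v) {
    std::vector<std::size_t> child;
    for (int u : g[v].fanin) child.push_back(signature(g, u));
    std::sort(child.begin(), child.end());   // ignore fanin ordering
    std::size_t h = std::hash<std::string>{}(g[v].label);
    for (std::size_t c : child) h = h * 1000003u + c;  // simple combine
    return h;
}
```

Two AND gates fed by primary inputs, for instance, receive identical signatures and are therefore recognized as instances of the same template seed.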

\subsection{Template Generation}
As described in \cite{Rosiello2007}, we identify a set of \textit{seeds} representing the window of the template roots but, in order to speed up the algorithm, we do not always consider all the seeds of the window. To accomplish this, we keep track of the successes and failures of the graph explorations. An exploration is successful if the identified template is \textit{better} (with respect to the objective function) than the best one already found and stored; otherwise it is a failure. Every time a good template is found, we assign the algorithm a positive score, otherwise a negative one. In this way, if the algorithm is finding good solutions, we give it the chance and the time to perform more explorations by analyzing more seeds; otherwise, when the overall score reaches zero, the algorithm is halted. The algorithm is formalized in Table~\ref{tab:algorithm}. The input values $\{k,p,q\}$ are user-definable at runtime and represent, respectively, the starting goodness value ($k$) and the scores assigned to a successful ($p$) and to a failed ($q$) exploration.
To evaluate the goodness of an exploration we consider as objective function the maximization of the $RI_p$, thus, to compare two templates we consider the value given by $S_{grp_i}\cdot\frac{2\cdot\sqrt{S_{grp_i}}}{h_i + w_i}$, as introduced in Section~\ref{sec:problem}. Therefore, the template $T_i$ is better than $T_j$ if and only if (iff):

\begin{equation}
 S_{grp_i}\cdot\frac{2\cdot\sqrt{S_{grp_i}}}{h_i + w_i} > S_{grp_j}\cdot\frac{2\cdot\sqrt{S_{grp_j}}}{h_j + w_j}\label{eq:formula}
\end{equation}

To illustrate the execution of the algorithm, consider the example in Fig.~\ref{fig:regularity}, which shows the graph representing a circuit described by a gate-level netlist. Let us suppose $k=5$, $p=2$, $q=1$, and let the vector of compatible seeds be $V=\{(V_3,V_8),(V_4,V_9)\}$. The regularity extraction algorithm identifies the template highlighted in orange in Fig.~\ref{fig:regularity} and, since no better template was found previously, the goodness value $k$ is incremented by $p$, i.e., $k=k+p=7$. Then, when the pair $(V_4,V_9)$ is analyzed, the associated template, depicted in green, is compared with the previously stored one using Eq.~\ref{eq:formula}. If the new template is \textit{better} than the one already found, the goodness value $k$ is incremented; otherwise it is decremented by $q$. This procedure is repeated until $k$ reaches zero or all the compatible seeds in the vector $V$ have been analyzed.

\begin{table}
\caption{Constrained Local Optimal Search Algorithm}\label{tab:algorithm}
\begin{algorithm}[H]
  \caption{Constrained Local Optimal Search}
  \label{alg2}
  \begin{algorithmic}[1]
    \REQUIRE vector$<vertex>$ seeds, int $k$, int $p$, int $q$

    \STATE \TRUE~$better\_template$
    \STATE \FALSE~goodness=k
    \FOR {($int~i=0; i < seeds.size(); i++$)}
       \FOR {($int~j=0; j < seeds.size(); j++$)}
          \IF{template $(i,j)$ is better than $better\_template$}
             \STATE $better\_template$ = template $(i,j)$
             \STATE goodness = goodness + $p$
          \ELSE
             \STATE goodness = goodness - $q$
          \ENDIF
          \IF{goodness = $0$}
              \STATE exit()
          \ENDIF
       \ENDFOR
    \ENDFOR
  \end{algorithmic}
\end{algorithm}
\end{table}
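The core of the constrained search can be sketched in C++ as follows. This is a simplified rendering of the algorithm in Table~\ref{tab:algorithm}: each seed pair is abstracted as a precomputed candidate template (the actual implementation builds templates by backward and forward expansion), and the \texttt{Template} structure and function names are illustrative:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Illustrative template descriptor: height and width of its regular group.
struct Template { double h, w; };

// Template score from Eq. (1): S * 2*sqrt(S) / (h + w), with S = h * w.
double score(const Template& t) {
    double s = t.h * t.w;
    return s * 2.0 * std::sqrt(s) / (t.h + t.w);
}

// Constrained local optimal search: explore candidates while the running
// "goodness" credit (start k, +p on a successful exploration, -q on a
// failed one) stays positive; return the index of the best template found,
// or -1 if the candidate list is empty.
int constrained_search(const std::vector<Template>& candidates,
                       int k, int p, int q) {
    int best = -1;
    double best_score = -1.0;
    int goodness = k;
    for (std::size_t i = 0; i < candidates.size(); ++i) {
        if (score(candidates[i]) > best_score) {   // successful exploration
            best_score = score(candidates[i]);
            best = static_cast<int>(i);
            goodness += p;
        } else {                                   // failed exploration
            goodness -= q;
        }
        if (goodness <= 0) break;                  // budget exhausted: halt
    }
    return best;
}
```

With the parameters of the example above ($k=5$, $p=2$, $q=1$), a run of good candidates keeps the budget growing, so more seeds are explored; a run of failures exhausts it and halts the search early.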


\subsection{Template Graph Covering}
Given the set of templates generated during the template generation phase, the graph covering is solved heuristically by choosing a template from the set and then removing all its instances from the graph. The procedure is repeated iteratively on the reduced graph until either no more templates can be extracted or the graph is completely covered. To select the template we use two main heuristics, aiming to maximize the $RI_p$ and $RI$ values, respectively:
\begin{itemize}
 \item \textit{BSFF} (Best Shape Fit First) – the objective of the template selection phase is to maximize the physical regularity index ($RI_p$); thus, given the set of templates $T$, BSFF selects the template $T_i$ that is \textit{better} (according to Eq.~\ref{eq:formula}) than every other $T_j \in T$.
 \item \textit{LFF} (Largest Fit First) – the objective of the template selection is to maximize the regularity index ($RI$); therefore, given a set of templates $T$, LFF selects the template $T_i$ with the largest area.
\end{itemize}
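The two selection heuristics can be sketched as follows (a minimal C++ illustration; the \texttt{Template} summary and function names are ours, not the actual PandA API):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Illustrative template summary: group height/width and total cell area.
struct Template { double h, w, area; };

// Eq. (1) score, S * 2*sqrt(S) / (h + w), used by BSFF.
double shape_score(const Template& t) {
    double s = t.h * t.w;
    return s * 2.0 * std::sqrt(s) / (t.h + t.w);
}

// BSFF: pick the template maximizing the physical shape score.
std::size_t select_bsff(const std::vector<Template>& ts) {
    return std::max_element(ts.begin(), ts.end(),
        [](const Template& a, const Template& b) {
            return shape_score(a) < shape_score(b);
        }) - ts.begin();
}

// LFF: pick the template with the largest area.
std::size_t select_lff(const std::vector<Template>& ts) {
    return std::max_element(ts.begin(), ts.end(),
        [](const Template& a, const Template& b) {
            return a.area < b.area;
        }) - ts.begin();
}
```

The two heuristics can disagree: a large but elongated template wins under LFF, while a smaller, more square one wins under BSFF.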

\begin{figure}[tpb]
   \centering
   \includegraphics[width=0.9\columnwidth]{regularity}
   \caption{Example of gate-level circuit netlist}
   \label{fig:regularity}
\end{figure}

\subsection{Analyzing regularity pre and post logic synthesis}

One of the main contributions of this paper is the evaluation of the effects of regularity extraction and preservation in a standard ASIC design flow, identifying advantages and disadvantages. \fbox{Fig. 3(a)} shows the standard ASIC design flow, while the new flows that include regularity extraction are reported in \fbox{Fig. 3(b) and Fig. 3(c)}. Extracting regularity before applying logic synthesis, as depicted in \fbox{Fig. 3(b)}, means having more regularity to extract \cite{Chowdhary1998}\cite{Chan99challenges}\cite{Kutzschebauch1999}, but it also has some considerable drawbacks, such as the area penalty after technology mapping. In fact, since we bind and restrict logic synthesis to the identified templates, suboptimal solutions are found (during both technology-independent and technology-dependent optimizations) compared to the traditional design flow, where the solution search space is larger.
If the flow depicted in \fbox{Fig. 3(c)} is followed, no penalties are incurred during logic synthesis, since we work directly on the structural netlist after technology mapping. All the results are reported in the experimental results section.

\section{Experimental Results}
The presented regularity extraction algorithm and methodology were implemented in C++ within the synthesis framework \textit{PandA}.
From \fbox{Table X} it is possible to evaluate and compare the flows of \fbox{Fig. 3(b) and Fig. 3(c)}, quantifying the amount of extracted regularity and the pre-logic synthesis penalties obtained using the algorithm described in Section~\ref{sec:algorithm}.
Physical synthesis is performed by Synopsys' IC Compiler~\cite{iccompiler}.

This framework receives as input an RTL netlist, described in
the \textit{bench} or \textit{edif} format, and provides as output a Verilog circuit. This circuit
is composed of $n + 1$ modules ($n$ modules are the template instances found, while
the remaining module represents the glue logic), held together in the top
module.
All technology mapping processes are performed by the standard-cell technology
mapping functionality of Synopsys Design Compiler~\cite{dcompiler}, using a 90~nm CMOS
technology library. To evaluate the proposed flows,
this methodology was initially applied to a subset of well-known ISCAS89 test cases. It was then applied to an actual design, the DLX processor, to show how it can obtain significant results in a real-world application.

\subsection{Experimental setup}

To better evaluate the performance provided by the whole synthesis flow, the physical synthesis
has been divided into three successive steps.
\begin{enumerate}
\item During the first phase (see Figure~\ref{fig18}(a)), physical synthesis is executed on a mapped circuit from which no regularity has been extracted (i.e., a circuit provided by nRAS). Place\&route gives an initial area result. If the tasks that verify place\&route (LVS, DRC, etc.) return \textit{five}\footnote{~This is the maximum acceptable number of errors, since they can still be repaired by hand.} or fewer errors, then the place\&route task is executed again with the constraint ``die area reduced by 5\%''. The procedure continues until the design is no longer routable (i.e., the number of errors is greater than five). The last routable design is saved, together with its area, its error count, and the pin constraints (in particular, the pin order).
\item During the second phase (see Figure~\ref{fig18}(b)), physical synthesis is executed on the same circuit after regularity has been extracted (i.e., a circuit provided by the REM or MER synthesis flow), loading the constraints (die area and pin order) obtained in the previous step. In this step too, the number of errors is saved.
\item The last phase is similar to the first one (area reduction until the design is no longer routable), but the loaded circuit is a regular one (after the REM/MER synthesis flow) and the pin disposition is fixed (loaded at startup).
\end{enumerate}
In the end, for each mapped circuit, the best obtained place\&route is returned, either with regularity (pre or post logic synthesis) or without regularity (see Table~\ref{t1}).

\section {nRAS Results}
\input{tables/clock.tex}
\input{tables/nRAS_logic.tex}
No Regularity-Aware Synthesis (nRAS) requires only two steps: technology mapping (logic synthesis) and placement (physical synthesis). Mapping is executed starting from a user-defined constraint, the clock period, which sets the final circuit speed. The \textit{Required Time} (RT), provided by synthesis tools after technology mapping, is the time within which the circuit must work correctly, while the \textit{Arrival Time} (AT) is the actual estimated time of the working circuit. The \textit{slack} is the difference $RT-AT$. If the slack is too high, synthesis tools run several optimizations to reduce area, until the AT is within the RT value. To avoid these optimizations, the clock frequency has to be increased (decreasing the clock period given as input to the synthesis tools). As the clock frequency increases, the slack approaches zero, but more logic (and hence more cell area) is required to complete the mapping, creating a longer critical path: this is a typical problem of multilevel synthesis. Table~\ref{t0} shows the tradeoff between mapped circuit area and clock frequency: higher performance constraints lead to larger circuit areas. Clock period ($T$) values are reported in ns: in this example, related to the s1196 ISCAS89 circuit, the optimal result is $T=1$~ns, i.e., the slack is reduced to zero by a clock speed of $f=1$~GHz.
\par Once the best mapped circuit has been chosen to be routed (i.e., the one with minimum slack), the physical synthesis can be performed. Table~\ref{t2} shows the logic synthesis results. The cell area is measured in $\mu$m$^2$ and is obtained as the sum of the combinatorial and non-combinatorial areas. AT and clock speed are reported in ns and GHz, respectively.\\
\input{tables/nRAS_phys.tex}
\input{tables/nRAS_power.tex}
Table~\ref{t3} shows physical synthesis results.
\begin{itemize}
\item The reported area, measured in $\mu$m$^2$, is the smallest place\&route area obtained with few errors (see Figure~\ref{fig18}(a));
\item the \textit{aspect} and \textit{CUF} columns report the aspect ratio (a value equal to 1 represents a square) and the \textit{Core Utilization Factor}, i.e., the cell area over the total area;
\item the last columns (\textit{Track Usage} x and y) report the average track usage ratio over horizontal tracks (x) and vertical tracks (y). This last measure is related to the final track congestion of the IC: a design is said to exhibit routing congestion when the demand for routing resources in some region of the design exceeds their supply.
\end{itemize}
The last table, Table~\ref{t4}, reports the power results of the whole synthesis process. \textit{Cell Leakage Power} is measured in nW, while the other power measures (\textit{Cell Internal Power}, \textit{Net Switching Power} and \textit{Total Dynamic Power}) are reported in $\mu$W.

\section {REM Results\label{due}}
To better evaluate the performance obtained with the REM synthesis flow, the logic synthesis has been performed as shown in Figure~\ref{fig19}.
\begin{figure}
\includegraphics[width=0.90\columnwidth]{images/REM}
\caption[REM experimental setup]{REM experimental setup.\label{fig19}}
\end{figure}
The value \textit{n} represents the minimum template length during \textit{Koala} regularity extraction. Its starting value is set to two, because a smaller value leads to trivial templates; a similar drawback arises for huge \textit{n}-values: it makes no sense to search for very big templates, since fabs cannot design by hand ``\textit{Regular Brick}s'' of such a size (i.e., port number). As shown in Table~\ref{t02}, if \textit{n} is small, a greater number of nodes is covered during regularity extraction, which leads to a higher degree of regularity. Moreover, a small \textit{n} leads to a small \textit{Glue Logic}, due to the greater number of retrieved templates (and, consequently, to a larger portion of covered circuit).\\
ATL, in Table~\ref{t02}, is the \textit{Average Template Length} resulting from a regularity extraction process. It is obtained as the average length of all the templates that do not belong to the \textit{Glue Logic}. The ATL value is obviously always greater than the \textit{n}-value, because \textit{n} is the minimum length for a template to be extracted.\\
Tables~\ref{t01} and~\ref{t01b} show the results of the whole REM process on a single netlist. The chosen regular, mapped and routed design is the one that satisfies the following conditions:
\begin{itemize}
\item its area is acceptable if and only if it is comparable to the nRAS die area (see Table~\ref{t01}) augmented by 20\%; in this example, the augmented area is 2231~$\mu$m$^2$;
\item its timing is acceptable if and only if its Arrival Time (AT) is less than or equal to the nRAS Arrival Time (in this example, $AT \leq 2.21$).
\end{itemize}
This task corresponds to the \textit{evaluate} task in Figure~\ref{fig19}.
\input{tables/chooseREM2.tex}
In Tables~\ref{t02},~\ref{t01} and~\ref{t01b}, the Fast part of the tables reports the REM results for all the \textit{n}-values from zero to six (even though the values zero and one are unnecessary). This is possible thanks to the high speed of the \textit{Koala} Fast heuristic, and better results can be obtained: increasing the value of \textit{n}, area and AT approach the corresponding nRAS area and AT values, while the covered nodes (and thus the regularity) and the found templates (and thus the found subgraphs) tend to zero.
\par Further considerations:
\begin{table}
\centering
\begin{tabular}{l||c|c|c|c}
\multirow{2}{*}{s9234} & \multicolumn{2}{c|}{\textit{n}-value = 8} & \multicolumn{2}{c}{\textit{n}-value = 9} \\
          & Backward & Forward &  Backward &  Forward \\
\hline \hline
coverage  &    21.77 &   35.50 &     18.54 &    33.12 \\
\hline
\#t (\#s) & 52 (108) & 90 (185)& ~~40 (80) & 71 (147) \\
\hline
area      &    14407 &   18165 &     12938 &    17275 \\
\end{tabular}
\caption[Comparisons between backward and forward expansions]{Comparisons between backward and forward expansions.\label{to}}
\end{table}
\begin{itemize}
\item increasing the \textit{n}-value, the Combinatorial area approaches the nRAS Combinatorial area value, while the No Combinatorial area reaches its lower bound (i.e., the nRAS No Combinatorial area value) in a very short time;
\item the Fast heuristic reaches optimal area\&timing values with a very small \textit{n}-value, while the noFast heuristic requires higher \textit{n}-values, and thus a bigger \textit{Glue Logic};
\item increasing the \textit{n}-value, the noFast heuristic always takes the same time (in this example, about 30 minutes on a 64-bit RHEL server farm), while the Fast heuristic can obtain better results in a very short time (in this example, from 5 minutes with a small \textit{n}-value down to 11 seconds with the final \textit{n}-value, on the same server farm);
\item a very important feature of the \textit{Koala} Fast heuristic is the \textit{Koala Window}. The most important difference between the Fast and noFast heuristics is the extent of their ``view'': the noFast heuristic looks for templates all over the graph representing the circuit RTL, while the Fast heuristic looks for templates only within a well-defined region. This region is the \textit{Koala Window}, and it is used to:
\begin{itemize}
\item reduce the time required to perform regularity extraction: large windows make the Fast heuristic similar to noFast (indeed it finds a higher ATL), while small window values lead Koala to look for templates within small ``subgraphs'' (finding a smaller ATL);
\begin{figure}
\centering
\includegraphics[width=0.90\columnwidth]{images/KoalaWin}
\caption[AT reduction by Koala Window]{AT reduction by Koala Window.\label{fig20}}
\end{figure}
\item reduce the final RT after the P\&R phase (see Figure~\ref{fig20}), thanks to the core area reduction;
\item reduce the core area before and after the P\&R phase. This last effect is due to the higher coverage percentage achieved during Fast regularity extraction with a small window, which leads to a smaller \textit{Glue Logic} and ATL.
\end{itemize}
\begin{figure}
\centering
\includegraphics[width=0.90\columnwidth]{images/StartingCUF}
\caption[AT reduction by initial CUF]{AT reduction by initial CUF.\label{fig21}}
\end{figure}
\item a way to increase or reduce the final die area is to run the P\&R phase changing the initial Core Utilization Factor. Since technology mapping provides a set of standard cells with a precise area, if P\&R is run starting from a small initial CUF, the final die area is huge, and vice versa. By default, physical synthesis tools choose a die size which is not the smallest one.
\newpage A typical report of this initial operation, produced by the place and route tool, is the following:
\begin{table}[!h]
\centering
\scriptsize
\begin{tabular}{l}
\texttt{Planner Summary:}\\
\texttt{This floorplan is created by using tile name (unit).}\\
\texttt{Row Direction = HORIZONTAL}\\
\texttt{Control Parameter = Aspect Ratio}\\
\texttt{Core Utilization = 0.703}\\
\texttt{Number Of Rows = 108}\\
\texttt{Core Width = 424.76}\\
\texttt{Core Height = 423.36}\\
\texttt{Aspect Ratio = 0.997}\\
\texttt{Double Back ON}\\
\texttt{Flip First Row = NO}\\
\texttt{Start From First Row = YES}\\
\texttt{Planner run through successfully.}\\
\end{tabular}
\end{table}
\par As shown in Figure~\ref{fig21}, an increase in area implies an increase in AT (obviously, if the die is bigger, the critical path is longer and the arrival time is higher). Although Figure~\ref{fig21} shows an oscillating AT trend (due to the small and very dense sample), it is clear that the linear regression of that curve is a straight line whose minimum is at ``1'', i.e., the ideal value at which the standard cells produced by technology mapping are placed with no space for interconnections.
\item the Koala Forward option is the most interesting, but the least used, among the Koala options. While ``basic'' regularity extraction looks for templates only backward (it runs a Backward Expansion, matching hash-based signatures only among the fanin vertices), Forward regularity extraction looks for templates both backward and forward, and runs a template merging function. This option leads to better results in terms of regularity extraction, but also to worse results (as already shown) in terms of core area after technology mapping. Table~\ref{to} reports the percentage of regularity extracted, the number of retrieved templates and subgraphs, and the die area, for backward and forward expansions with two different \textit{n}-values.
\item usually, the noFast heuristic is not able to reach optimal results, while the Fast heuristic always is. This is due to the higher regularity found by the noFast heuristic: given the same \textit{n}-value, noFast reaches a much higher degree of regularity than Fast. Table~\ref{tf} reports the percentage of regularity extracted by the Koala noFast and Fast heuristics for different \textit{n}-values (Fast results are obtained with the best possible \textit{Koala Window}).
\end{itemize}
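The floorplan report shown earlier can be sketched as a small geometry computation. The utilization and aspect ratio below are taken from that report, while the total standard-cell area is a hypothetical value chosen only to land near the reported core size (the report does not state it):

```python
import math

# Hypothetical sketch: derive the planner's core geometry from a target
# utilization and aspect ratio (aspect = height / width, as in the report).
def core_geometry(cell_area, utilization, aspect):
    core_area = cell_area / utilization   # core must hold cells plus routing space
    height = math.sqrt(core_area * aspect)
    width = core_area / height
    return width, height

# 126418.0 is a made-up cell area; 0.703 and 0.997 come from the report above.
w, h = core_geometry(126418.0, 0.703, 0.997)
```

With these inputs the sketch yields a core of roughly 425 x 423 units, consistent with the reported Core Width and Core Height.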
\begin{table}
\centering
\begin{tabular}{l||c|c|c|c|c|c|c|c}
~ & \multicolumn{2}{c|}{\textit{n}-value = 2} & \multicolumn{2}{c|}{\textit{n}-value = 3} & \multicolumn{2}{c|}{\textit{n}-value = 4} & \multicolumn{2}{c}{\textit{n}-value = 5} \\
~      & noFast & Fast & noFast & Fast & noFast & Fast & noFast & Fast \\
\hline \hline
~s9234 & 67.90 & 36.30 & 47.70 & 13.26 & 37.04 & ~4.87 & 31.66 & 4.20 \\
\hline
s13207 & 68.64 & 29.43 & 51.70 & 15.38 & 40.22 & 12.74 & 35.93 & 4.10 \\
\hline
s15850 & 68.35 & 26.72 & 51.38 & 12.07 & 41.41 & ~7.92 & 36.96 & 9.13 \\
\end{tabular}
\caption[Comparisons between noFast and Fast heuristics]{Comparisons between noFast and Fast heuristics.\label{tf}}
\end{table}
\begin{figure}
\centering
\includegraphics[width=0.90\columnwidth]{images/tradeoffAreaNvalue}
\caption[Tradeoff between die area and \textit{n}-value]{Tradeoff between die area and \textit{n}-value.\label{fig22}}
\end{figure}
As shown in Figure~\ref{fig22}, design constraints (i.e., die area) are very sensitive to the minimum size of the templates used to cover the circuits. It is not surprising that the choice of the subgraphs used to implement the logic affects performance and area costs. A regularity-aware design flow incurs otherwise unnecessary area and performance penalties, which can be reduced by the use of ``\textit{Regular Brick}s'' (which decrease the transistor count), as reported in [\ref{syn3}], [\ref{syn5}], [\ref{syn6}] and [\ref{fabrics}].
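The difference between Backward and Forward expansion discussed above can be sketched on a toy netlist. The encoding and node names are hypothetical, and the real Koala algorithm accepts a neighbour only when its hash-based signature matches; here every neighbour is accepted, just to show the direction of cluster growth:

```python
# Toy netlist: node -> (gate type, fanin list). Hypothetical example.
netlist = {
    "a": ("AND", []),
    "b": ("AND", []),
    "x": ("XOR", ["a", "b"]),    # seed node
    "o": ("OR",  ["x"]),         # fanout side, reachable only going forward
    "y": ("XOR", ["x"]),
}

def fanout_map(nl):
    # invert the fanin lists to obtain, for each node, its fanout nodes
    fo = {v: [] for v in nl}
    for v, (_, fanins) in nl.items():
        for u in fanins:
            fo[u].append(v)
    return fo

def expand(nl, seed, forward=False):
    # Backward Expansion walks fanins only; the Forward option also
    # walks fanouts, so the cluster grows on both sides of the seed.
    fo = fanout_map(nl)
    cluster, frontier = {seed}, [seed]
    while frontier:
        v = frontier.pop()
        neighbours = list(nl[v][1]) + (fo[v] if forward else [])
        for u in neighbours:
            if u not in cluster:
                cluster.add(u)
                frontier.append(u)
    return cluster

backward = expand(netlist, "x")               # fanin cone only
both = expand(netlist, "x", forward=True)     # fanin and fanout sides
```

On this example the backward cluster stops at the fanin cone of the seed, while the forward variant also absorbs the fanout gates, which is why Forward extraction finds larger templates.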
\input{tables/chooseREM1.tex}
\input{tables/chooseREM3.tex}
\par The results of logic synthesis over several circuits are reported in Table~\ref{t5}, Table~\ref{t6}, Table~\ref{t7} and Table~\ref{t8}. These tables report the best results for all the analyzed circuits, i.e., the results obtained with an \textit{n}-value that leads to acceptable area and timing values. Some noFast results (on the larger circuits) have been omitted because they are too expensive to obtain, while their performance does not improve (area and timing constraints are always violated).
\input{tables/REM_eR_regex.tex}
\input{tables/REM_F_regex.tex}
\input{tables/REM_eR_logic.tex}
\input{tables/REM_F_logic.tex}
\par Finally, once the optimal mapped netlist to place has been selected, it is possible to proceed with physical synthesis.
\input{tables/REM_eR_phys.tex}
\input{tables/REM_F_phys.tex}

\section {MER Results\label{tre}}
The initial concept of regularity extraction post technology mapping is to build a ``standard cell-like'' netlist. Given a mapped netlist, the idea is to organize it in a more regular way, spending most of the computational effort on building a single ``non-standard cell'', and then repeating that ``cell''. This idea can lead to a set of problems, mainly due to layout tools: many tools are not able to maintain the hierarchy, and so the flow becomes corrupted. A good physical synthesis tool should first place the I/O pads of this single cell, and then use this information (as if it were a real standard cell) to build the final layout. Whether this synthesis flow would lead to benefits or disadvantages was still not known.\\
The relevance of building this kind of new ``cell'' comes from the need to exploit the retrieved regularity exhaustively. With a traditional standard cell place and route, it is not possible to make deep use of regularity information. Consider an example: given a mapped and regular netlist composed of four blocks, regular among themselves and with regular interconnections, a traditional standard cell place and route makes them occupy four times the area needed for the single block. So, when the single block is synthesized, the traditional flow does not exploit any border optimizations, and there is an area penalty due to this lack. It should be possible, anyway, to look for regularity among the interconnections of the given initial netlist: currently, even when a mapped and regular netlist composed of four blocks is given, the interconnections can be very complicated, and so all the benefits obtained from functional regularity are lost in the routed layout.\\
The idea of the MER synthesis flow is that, given a mapped netlist, it is possible to look for regularity in that netlist. Moreover, given an optimal technology (e.g., the technology provided by the \textbf{fabrics} explained in [\ref{fabrics}], which includes a mechanism to reduce the area), it is possible to decrease the area penalty by decreasing the transistor count, thanks to ad-hoc built cells.
\par Differently from REM synthesis, the MER flow requires the smallest possible \textit{n}-value: $one$. Since mapping precedes regularity extraction, the mapped netlist given to \textit{Koala} has a smaller number of nodes (every node is a standard cell, and contains one, two or more of the initial ports). This results in a smaller circuit, which needs a small \textit{n}-value for extraction. Especially on the smaller circuits, when starting with an \textit{n}-value bigger than $one$, regularity extraction finishes with no templates found and no subgraphs extracted. Again differently from REM synthesis, a small \textit{n}-value is also needed to avoid a huge area penalty. Since the circuit is mapped before regularity extraction, the core area remains the same after the extraction. Only the timing values change: they are decidedly always below the nRAS values. Besides, due to the pre-extraction mapping, the Fast and noFast heuristics give the same results.
\par The results of logic synthesis over all the circuits (obtained with $one$ as the \textit{n}-value) are reported in Table~\ref{t9} and Table~\ref{t10}.
\input{tables/MER_eR_regex.tex}
\input{tables/MER_F_regex.tex}
\par Now it is possible to proceed with the physical synthesis. Its results are shown in Table~\ref{t15}, Table~\ref{t16}, Table~\ref{t17} and Table~\ref{t18}.
\input{tables/MER_eR_phys.tex}
\input{tables/MER_F_phys.tex}
\input{tables/MER_eR_power.tex}
\input{tables/MER_F_power.tex}

\section {REM/MER comparisons\label{qua}}
Table~\ref{tc} reports the main features of a synthesis process, and shows the differences between the two implemented flows.\\
\begin{table}
\centering
\begin{tabular}{l||c|c}
Average values      & REM synthesis flow & MER synthesis flow\\
\hline \hline
$\Delta$ Area       & $\sim +30\%$       & $\sim +10\%$ \\
\hline
$\Delta$ Timing     & $\sim +10\%$       & $\sim 0$ \\
\hline
$\Delta$ Congestion & $\sim -10\%$       & $\sim -5\%$ \\
\hline
$\Delta$ TDP        & $\sim +5\%$        & $\sim 0$ \\
\hline
$\Delta$ CLP        & $\sim +20\%$       & $\sim +5\%$ \\
\hline
$\Delta$ Frequency  & $\checked$         & $\times$ \\
\hline
ATL                 & $\sim 10$          & $\sim 5$ \\
\hline
\textit{n}-value    & $\sim 5$           & 1 \\
\hline
Coverage percentage & $\sim 25\%$        & $\sim 10\%$ \\
\hline
Best Heuristic      & Fast               & noFast \\
\end{tabular}
\caption[Comparisons between REM and MER synthesis flows]{Comparisons between REM and MER synthesis flows.\label{tc}}
\end{table}
As can be noticed in that table, extracting regularity pre or post synthesis leads to very different results, mainly due to the fact that the mapped circuit in the MER flow is smaller than the corresponding one considered in the REM flow. In particular:
\begin{itemize}
\item area and timing are higher in the REM flow, since the resulting circuit has more logic;
\item congestion, although reduced in both flows, is higher in the MER flow, as predicted (the interconnections are very complicated, and so all the benefits obtained with regularity extraction are lost in the routing);
\item although both flows introduce a small increment (with respect to the nRAS circuit) in power dissipation, the origin of this power consumption is interesting to analyze. The Total Dynamic Power is the sum of two contributions: Cell Internal Power and Net Switching Power. While the first contribution is always positive (+12\% in REM and +2\% in MER), the second is always negative (-15\% in REM and -10\% in MER). This is a very good result: Net Switching Power, a value proportional to the product of net capacitance and signal switching rate, is the largest source of power dissipation in an IC, usually accounting for 40\% to 80\% of the total power consumption;
\item a further source of power dissipation is the Cell Leakage Power, due to spurious currents in the non-conducting state of a transistor. The REM and MER flows both introduce an increase in this power level, but the MER contribution is very small, again because of the smaller amount of logic;
\item the analysis of frequency is very important as well. On this point, the REM and MER flows are deeply different. The REM synthesis flow can speed up or slow down a circuit, thanks to the different kinds of regularity extraction performed before the technology mapping, which sets the final timing constraints of the circuit; the MER synthesis flow, instead, cannot speed up or slow down the circuit at all, since the technology mapping has been performed before the regularity extraction. In particular, the REM synthesis flow can speed up small and medium circuits, while it cannot speed up very large circuits (sometimes it has even been necessary to slow down the clock frequency);
\item ATL and the \textit{n}-value are strongly related: increasing the \textit{n}-value leads to a higher ATL. However, while the REM flow requires a higher \textit{n}-value for larger netlists (and so the ATL increases), due to the well-known tradeoff between \textit{n}-value and core area, the MER flow can run regularity extraction with the smallest possible \textit{n}-value: ``one''. It is important to note that ATL and \textit{n}-value have very different meanings in the two flows: while in the REM flow these values count the real nodes of the original netlist (so a value of ``$x$'' indicates ``$x$'' gates), in the MER flow every node of the input graph contains a set of mapped gates (so ``$x$'' should be multiplied by the average number of nodes in a standard cell, i.e., about 3). Due to this feature, the MER flow is much more sensitive to the netlist size than the REM flow (in fact, on very small netlists, the MER flow cannot be performed at all). The same consideration applies to the coverage percentage;
\item the results show that regularity extraction performs best with the Fast heuristic in the REM synthesis flow, and with the noFast heuristic in the MER synthesis flow. In REM, it has often not been possible to obtain a good place and route result with the noFast heuristic, while in MER, thanks to the smaller area, it has always been possible to apply the noFast heuristic without any excessive area penalty.
\end{itemize}
\par Finally, in both synthesis flows, \textit{Glue Logic} and templates have also been placed in two successive steps, but the results were not as good: the area, timing and power penalties are always above the average behaviour obtained by placing the whole regular mapped netlist at once. For this reason, this approach has not been further investigated.

\section {Case studies}
Since the aim of this thesis is to extract regularity and to check whether this regularity (given by the several template instances) is preserved throughout the following place and route phase, it is very interesting to evaluate the results obtained on a circuit that is regular by construction, such as a multiplier or a pipelined processor.

\subsection {Full Adder block}
\begin{figure}
\centering
\includegraphics[width=0.90\columnwidth]{images/cellMult}
\caption[Basic building multiplier block]{Basic building multiplier block.\label{fig23}}
\end{figure}
The chosen netlist represents a symmetric four-bit (4x4) multiplier (an $n$-bit multiplier multiplies two $n$-bit numbers). Its basic ``cell'', shown in Figure~\ref{fig23}(a), is a simple Full Adder, a logic circuit that performs an addition over three binary digits, producing a sum and a carry value, both binary digits. The circuit diagram of this Full Adder contains two And-gates, two Xor-gates, and an Or-gate, arranged as in Figure~\ref{fig23}(b). The same Figure shows the results of a simple regularity extraction performed over a simple circuit made up of four parallel Full Adders, held together by an And-gate. Note that two templates can be found, corresponding to the two circuit outputs: the sum (given by a Xor operation over the two inputs and the carry-in) and the carry-out (given by a composition of And-gates and an Or-gate).\\
Summarizing:
\begin{itemize}
\item The first found template covers the first fanout net, corresponding to $$ S = (A \oplus B) \oplus C_{in} $$
\item The second found template covers the second fanout net, corresponding to $$ C_{out} = (A \cdot B)+(A \cdot C_{in})+(B \cdot C_{in})$$
\end{itemize}
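The two template equations above can be checked exhaustively against integer addition with a plain Python sketch:

```python
# Behavioural check of the Full Adder templates: S = (A xor B) xor Cin
# and Cout = A*B + A*Cin + B*Cin, verified over all 8 input combinations.
def full_adder(a, b, cin):
    s = (a ^ b) ^ cin                           # sum template (two Xor-gates)
    cout = (a & b) | (a & cin) | (b & cin)      # carry template (And/Or gates)
    return s, cout

for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            s, cout = full_adder(a, b, cin)
            assert 2 * cout + s == a + b + cin  # the pair encodes the 2-bit sum
```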

\subsection {Multiplier architecture}
\begin{figure}
\centering
\includegraphics[width=0.90\columnwidth]{images/complMult}
\caption[4x4 combinatorial multiplier]{4x4 combinatorial multiplier.\label{fig24}}
\end{figure}
When multiple full adders are chained together through their carry-ins and carry-outs, it is possible to create a logic circuit that adds multiple-bit numbers (each full adder receives as carry-in the carry-out of the previous one). Note that the first-row full adders are replaced by half adders, i.e., full adders with the carry-in input fixed to the And-gate controlling value: zero. The complete multiplier plan is shown in Figure~\ref{fig24}.\\
Figure~\ref{fig25} shows the result of the regularity extraction (with an \textit{n}-value equal to one, and so with a minimum template size equal to two). It is worth noting that the multiplier is a combinatorial circuit: there is no clock, and so no slack exists between Arrival Time and Required Time. Therefore, the whole area, both cell and die, contains only combinatorial cells.
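A behavioural sketch of this array multiplier follows: partial products are generated by And-gates, and each output column is then reduced with chained full adders, a full adder with one input tied to zero playing the half-adder role described above. The column-by-column structure is illustrative, not the exact netlist of Figure~\ref{fig24}:

```python
# 4x4 array multiplier built only from And-gates and full adders.
def full_adder(a, b, cin):
    s = (a ^ b) ^ cin
    cout = (a & b) | (a & cin) | (b & cin)
    return s, cout

def mult4(x, y):
    A = [(x >> i) & 1 for i in range(4)]
    B = [(y >> i) & 1 for i in range(4)]
    cols = [[] for _ in range(8)]                # bits grouped by output weight
    for i in range(4):
        for j in range(4):
            cols[i + j].append(A[i] & B[j])      # partial-product And-gates
    result = 0
    for k in range(8):
        col = cols[k]
        while len(col) > 1:                      # reduce column to a single bit
            if len(col) >= 3:
                s, c = full_adder(col.pop(), col.pop(), col.pop())
            else:
                s, c = full_adder(col.pop(), col.pop(), 0)   # half adder
            col.append(s)
            if k + 1 < 8:
                cols[k + 1].append(c)            # carry chains to next column
        if col:
            result |= col[0] << k
    return result
```

An exhaustive check over all pairs of 4-bit operands (`all(mult4(x, y) == x * y for x in range(16) for y in range(16))`) passes under this sketch.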
\begin{figure}
\centering
\includegraphics[width=0.90\columnwidth]{images/remMult}
\caption[Regularity extraction over the 4x4 multiplier]{Regularity extraction over the 4x4 multiplier.\label{fig25}}
\end{figure}
Although the multiplier has a very regular structure, regularity extraction finds templates that do not cover any Full Adder. As shown in Figure~\ref{trends}, during the initial steps of the algorithm the templates found are very big. This breaks the default regularity of the circuit, and so its behaviour under regularity extraction is identical to that of any other circuit.

%tabella RegEx MULT
\begin{table}[!h]
\centering
\begin{tabular}{c|c|c|c|c|c|c}
{\bf Covered / Total}&{\bf Covered}&{\bf Templates}&{\bf Subgraphs}&{\bf ET}&{\bf \textit{n}-value}&{\bf ATL}\\
\hline \hline
             42 / 88 &       47.73 &             7 &            14 &    1'' &                    1 &       3 \\
\end{tabular}
\caption[Regularity extraction results over multiplier]{Regularity extraction results over multiplier.\label{tm1}}
\end{table}

%tabella TM&PhS MULT
\begin{table}[!h]
\centering
\begin{tabular}{l|c|c||c|c|c|c}
~    & {\bf Cells Area} & {\bf AT} & {\bf Die Area} & {\bf Aspect} & {\bf CUF} & {\bf AT} \\
\hline \hline
nRAS &           392.94 &     1.39 &            553 &         1.00 &      0.69 &     4.06 \\
\hline
REM  &           470.87 &     1.75 &            553 &         1.00 &      0.73 &     4.95 \\
\end{tabular}
\caption[Technology mapping and physical synthesis over multiplier]{Technology mapping and physical synthesis over multiplier.\label{tm2}}
\end{table}

%tabella TU&Pow MULT
\begin{table}[!h]
\centering
\begin{tabular}{l|c|c|c|c|c|c}
~    & {\bf TUx} & {\bf TUy} & {\bf CIP} & {\bf NSP} & {\bf TDP} & {\bf CLP} \\
\hline \hline
nRAS &      0.35 &      0.19 &    122.07 &    102.03 &    224.10 &      1.21 \\
\hline
REM  &      0.35 &      0.17 &    154.65 &    95.23  &    249.88 &      1.37 \\
\end{tabular}
\caption[Physical synthesis over multiplier]{Physical synthesis over multiplier.\label{tm3}}
\end{table}

\subsection {DLX processor architecture}
\begin{figure}
\centering
\includegraphics[width=0.90\columnwidth]{images/PipelineDLX}
\caption[DLX pipeline]{DLX pipeline.\label{fig26}}
\end{figure}
The DLX is a RISC\footnote{~This acronym, which stands for Reduced Instruction Set Computer, denotes a CPU design strategy emphasizing the insight that simplified instructions which ``do less'' may still provide higher performance if this simplicity can be exploited to make instructions execute very fast.} processor, a reduced and simplified MIPS with a simple 32-bit load/store architecture. Like the MIPS design, it bases its performance on the use of an instruction pipeline. This pipeline (shown in Figure~\ref{fig26}) contains five stages:
\begin{itemize}
\item \textbf{IF}, acronym for Instruction Fetch unit, loads the instruction;
\item \textbf{ID}, acronym for Instruction Decode unit, gets the instruction from IF and extracts the opcode and operands from it. It also retrieves register values if requested by the operation;
\item \textbf{EX}, acronym for Execution unit, executes the instruction;
\item \textbf{MEM}, acronym for Memory access unit, fetches data from main memory, under the control of the instructions from ID and EX;
\item \textbf{WB}, acronym for WriteBack unit, writes back the result.
\end{itemize}
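The five stages above can be modeled as a minimal pipeline sketch, with one instruction advancing one stage per cycle and no hazards or stalls (an idealization of the real DLX behaviour):

```python
# Ideal five-stage pipeline: pipe[0] = IF, ..., pipe[4] = WB.
STAGES = ["IF", "ID", "EX", "MEM", "WB"]

def run(program):
    pipe = [None] * 5
    pending, completed, cycles = list(program), [], 0
    # keep cycling while instructions are waiting or still in flight
    while pending or any(s is not None for s in pipe[:4]):
        pipe = [pending.pop(0) if pending else None] + pipe[:4]  # advance
        cycles += 1
        if pipe[4] is not None:          # instruction finishing WB this cycle
            completed.append(pipe[4])
    return completed, cycles

done, cycles = run(["i1", "i2", "i3"])
```

With no stalls, $n$ instructions retire in order after $n + 4$ cycles, which is the throughput the intrinsic pipeline regularity of the DLX is built around.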
\par The results of regularity extraction are very interesting. The REM synthesis flow extracts several templates, which cover the whole datapath of the DLX, creating the well-known bit-staged (due to the intrinsic pipeline) and bit-sliced (due to the regularity extraction across the pipelined dataflow) structure of Figure~\ref{fig9}. This is the most desired result. The MER synthesis results, instead, on a circuit of this size, are not as good. The well-known problem due to interconnections can easily be noticed: analyzing the physical synthesis results, it can be seen how much metal is present (about 25\% more than nRAS). This increases the die area, the final AT and the power consumption. The MER synthesis flow, therefore, needs to be strongly improved. Searching also for interconnection-level regularity, besides gate-level regularity, has become relevant: in this way it is possible to increase the Core Utilization Factor and decrease the area penalty.

%tabella RegEx DLX
\begin{table}
\centering
\begin{tabular}{l|c|c|c|c|c}
~          & {\bf Covered / Total} & {\bf Covered} & {\bf \#Ts (\# Subs)} & {\bf ET} & {\bf ATL} \\
\hline \hline
REM noFast &         14880 / 19250 &         77.30 &           372 (2611) &   3h 38' &         6 \\
\hline
REM Fast   &         ~5968 / 19250 &         31.00 &           127 (1500) &      38' &         4 \\
\hline
MER noFast &         ~2737 / 8630~ &         31.71 &           ~86 (398)~ &   2h 25' &         8 \\
\hline
MER Fast   &         ~5114 / 8630~ &         59.26 &           215 (1562) &       6' &         3 \\
\end{tabular}
\caption[Regularity extraction results over DLX]{Regularity extraction results over DLX.\label{td1}}
\end{table}

%tabella TM&PhS DLX
\begin{table}
\centering
\begin{tabular}{l|c|c||c|c|c|c}
~          & {\bf Cells Area} & {\bf AT} & {\bf Die Area} & {\bf Aspect} & {\bf CUF} & {\bf final AT} \\
\hline \hline
nRAS       &         80046.69 &     5.82 &         145008 &         1.00 &      0.56 &           4.25 \\
\hline
REM noFast &        125602.76 &     5.90 &         151801 &         0.99 &      0.56 &           3.99 \\
\hline
REM Fast   &         97643.59 &     5.86 &         138994 &         1.00 &      0.55 &           3.56 \\
\hline
MER noFast &                - &        - &         272254 &         1.00 &      0.32 &           5.27 \\
\hline
MER Fast   &                - &        - &         316267 &         0.99 &      0.29 &           5.07 \\
\end{tabular}
\caption[Technology mapping and physical synthesis over DLX]{Technology mapping and physical synthesis over DLX.\label{td2}}
\end{table}

%tabella TU&Pow DLX
\begin{table}
\centering
\begin{tabular}{l|c|c|c|c|c|c}
~          & {\bf TUx} & {\bf TUy} & {\bf CIP} & {\bf NSP} & {\bf TDP} & {\bf CLP} \\
\hline \hline
nRAS       &      0.67 &      0.64 &     12.90 &     15.45 &     28.35 &    273.07 \\
\hline
REM noFast &      0.66 &      0.55 &     13.05 &     13.62 &     26.67 &    298.15 \\
\hline
REM Fast   &      0.64 &      0.53 &     13.02 &      9.16 &     22.18 &    254.25 \\
\hline
MER noFast &      0.59 &      0.53 &     13.51 &     17.29 &     30.80 &    292.83 \\
\hline
MER Fast   &      0.59 &      0.53 &     13.74 &     20.50 &     34.24 &    311.64 \\
\end{tabular}
\caption[Physical synthesis over DLX]{Physical synthesis over DLX.\label{td3}}
\end{table}
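As a consistency check on the power columns of Table~\ref{td3}: Total Dynamic Power is the sum of Cell Internal Power and Net Switching Power, while Cell Leakage Power is reported separately. A small sketch over the table's rows:

```python
# Rows of the DLX power table as (CIP, NSP, TDP) tuples.
rows = {
    "nRAS":       (12.90, 15.45, 28.35),
    "REM noFast": (13.05, 13.62, 26.67),
    "REM Fast":   (13.02,  9.16, 22.18),
    "MER noFast": (13.51, 17.29, 30.80),
    "MER Fast":   (13.74, 20.50, 34.24),
}
# TDP = CIP + NSP must hold in every row (up to rounding in the table).
for name, (cip, nsp, tdp) in rows.items():
    assert abs((cip + nsp) - tdp) < 0.005, name
```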



\section{Conclusion and Future Work}


\bibliographystyle{IEEEtran}
\bibliography{regularity}



\end{document}
