\chapter{A Hybrid Algorithm for the Single-Mode Trace Signal Selection Problem}
In this chapter, we present a new hybrid trace signal selection algorithm
that combines the right blend of metrics and simulation. It relies on a new
set of proposed metrics which can be evaluated significantly faster than
running a simulation. The experimental results show that our algorithm matches
or exceeds the solution quality of the simulation-based algorithms
while being as fast as the metric-based algorithms.

Before discussing our trace signal selection algorithm, we first present
the details of the algorithmic procedure for X-Simulation in Section
\ref{sec:xsim}. It will be used as a core ingredient in various stages of
our algorithm, as well as to accurately evaluate the solution quality. The
details of our algorithm are discussed in Section
\ref{sec:alg_sm}. Extensions of the algorithm for scalability over large
benchmarks are discussed in Section \ref{sec:large}. Simulation results are
presented in Section \ref{sec:results_sm}.
\newpage

% \section{Introduction}
% For trace buffer-based debugging infrastructure, the set of signals for
% tracing should be decided before the debugging process starts. The trace
% signal selection problem aims to select a subset of state elements to
% trace, such that the values of not-traced state elements can be reproduced
% based on the values of the traced state elements as much as
% possible. Different algorithms have been proposed to solve the trace signal
% problem and can generally be categorized as metric-based and
% simulation-based algorithms. As described in Section
% \ref{sec:prvious_work}, these two categories of algorithms have their own
% advantages and disadvantages. We thus propose a hybrid algorithm which
% takes advantages of the both, to generate high quality solutions in terms of
% the state restoration ratio (SRR) with good runtime-scalability.

\section{Algorithmic Procedure of X-Simulation}\label{sec:xsim}
The set of signals which can be restored using the trace and control
signals is determined using an X-Simulator. Specifically, X-Simulation
restores the values of not-traced signals, within the observation window,
from the values of the signals selected for tracing and the values of the
control signals. X-Simulation is an important step to
enhance the visibility inside a chip before other bug analysis techniques
are applied. We briefly discussed how to
implement and accelerate X-Simulation in Section \ref{sec:sr}.
We now explain X-Simulation in more detail based on the
specific implementation used in this work.

Algorithm \ref{alg:core} describes our variation of the X-Simulator given in \cite{KoN09}, which we refer to as the {\tt XSim-core} procedure. The input to the algorithm is a set of signals at a single cycle $n$, denoted by $\cS^n_I$, which are assumed to have known values of 0 or 1. For example, $\cS^n_I$ could be a combination of the trace and control signals at cycle $n$.

\begin{algorithm}[!ht]
\caption{{\tt XSim-core}($\cS^n_I$, $\&\cS_R$, {\tt restore-once})} \label{alg:core}
\small
\begin{algorithmic}[1]
\STATE $\cS^n_R=\emptyset$;~~$Q=\emptyset$;~~visited$[p]$=false~~$\forall p$
\FOR{each $s:=(p,v,n)\in \cS^n_I$}
    \STATE Enqueue($Q$, $s$);~~ visited$[p]=$ {\tt restore-once};
\ENDFOR
\WHILE{$Q\neq \emptyset$}
%       \STATE $s_i:=(p_i,v_i,0)$
       \STATE $s_i \leftarrow$ Dequeue($Q$)
       \FOR{each $p\in \{FI_{p_i}\cup FO_{p_i}\}$ and !visited$[p]$}
            \STATE $s := (p,v=x,0)$
            \IF{$s$ can be restored using $\{\cS^n_I\cup\cS_R\}$}
                \STATE update the value and cycle fields of $s$
                \STATE $\cS_R\leftarrow \cS_R\cup \{s\}$
                \STATE Enqueue($Q$, $s$);~~visited$[p]=$ {\tt restore-once};
           \ENDIF
       \ENDFOR
%       \STATE if the values of all the signals remain unchanged set $Q=\emptyset$
\ENDWHILE
\end{algorithmic}
\end{algorithm}

In our variation of the X-Simulator, we also introduce a binary ``{\tt
  restore-once}'' flag as input. It controls whether restoration of an (unknown)
signal should stop as soon as the signal takes a known value ($v \neq x$); otherwise, a signal may be restored multiple times. More details about the use of this flag in our algorithm are given in Section \ref{sec:alg_sm}. The output, denoted by $\cS_R$, is the set of signals which can be restored using the input signals. Note, these signals may be restored at any clock cycle, which could be the same as, before, or after $n$.
The X-Simulator records the cycle at which a signal is restored. Moreover, if {\tt restore-once} is false, a pin may be restored more than once, corresponding to values taken at different clock cycles. In this case, more than one restored signal will be recorded by the algorithm for that pin.

The procedure starts by marking each pin as not visited, except for the pins corresponding to $\cS^n_I$, whose visited flags are set to the {\tt restore-once} flag. The signals in $\cS^n_I$ are also added to a queue $Q$. (See lines 1-3.)

At each step, a signal $s_i$ is dequeued from the head of the queue. Then a signal $s:=(p,v=x,0)$, corresponding to a fanin or fanout pin $p$ of $s_i$, is considered for restoration. If $p$ has not been visited before, its value is evaluated given the values of the other fanins and fanouts which are known. (See line 9.) Such fanins and fanouts either belong to $\cS^n_I$ or have been restored in a previous step of the algorithm, so they belong to the current set of restored signals $\cS_R$. If $s$ is restored, the restored value $v \neq x$ and the corresponding restoration cycle are recorded and $s$ is added to $\cS_R$. (See lines 10-11.) Next, visited$[p]$ takes the value of the {\tt restore-once} flag and $s$ is enqueued. The process terminates when the queue is empty. The algorithm outputs the final $\cS_R$ as the set of signals restored using $\cS^n_I$.

If the {\tt restore-once} flag is true,  as soon as a signal is restored,
the algorithm stops considering it for further restoration. As a result,
fewer signals may be restored but the algorithm terminates much faster.

\begin{figure}[t]
   \centering
   \includegraphics[width=2.0in]{figs/example.eps}
   \caption{Example to illustrate Algorithm \ref{alg:core}}
   \label{fig:example-xsim}
\end{figure}

{\noindent \bf Example:} Consider the sequential circuit in Figure \ref{fig:example-xsim}. Assume $\cS^n_I$ only consists of the trace signal ($f_1$,1,2), i.e., the value of $f_1$ is 1 in cycle $n=2$. In the first round of the while loop, Algorithm \ref{alg:core} restores the value of $g_1$ at cycle 1 to 1 and the value of $g_2$ at cycle 2 to 1, so it enqueues signal ($g_1$,1,1) and then ($g_2$,1,2). Next, ($g_1$,1,1) is dequeued, which restores $f_2=1$ at cycle 1, so ($f_2$,1,1) is enqueued. Next, ($g_2$,1,2) is dequeued, which restores $g_3=0$ at cycle 2 and enqueues ($g_3$,0,2). Next, ($f_2$,1,1) is dequeued, which attempts to restore $g_3$ to value 1 at cycle 0. Here, if {\tt restore-once}=true, then ($g_3$,1,0) won't be enqueued; otherwise we enqueue ($g_3$,1,0) and the algorithm continues.
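The queue-based propagation in Algorithm \ref{alg:core} can be sketched in a few lines. The sketch below is a deliberately simplified, hypothetical model: it uses a two-gate combinational netlist and a small forward/backward implication rule set, and ignores clock cycles and the {\tt restore-once} flag; the netlist and rule table are illustrative only, not the circuit of the figure.

```python
from collections import deque

# Illustrative netlist: output pin -> (gate type, input pins).
GATES = {"c": ("AND", "a", "b"), "d": ("NOT", "c")}

def imply(known):
    """Return pin -> value pairs newly implied by the current known values."""
    new = {}
    for out, (kind, *ins) in GATES.items():
        vals = [known.get(p) for p in ins]
        if kind == "AND":
            if all(v == 1 for v in vals):
                new[out] = 1                  # forward: all inputs 1 -> output 1
            elif any(v == 0 for v in vals):
                new[out] = 0                  # forward: any input 0 -> output 0
            if known.get(out) == 1:           # backward: output 1 -> every input 1
                for p in ins:
                    new[p] = 1
        elif kind == "NOT":
            if vals[0] is not None:
                new[out] = 1 - vals[0]        # forward implication
            if known.get(out) is not None:
                new[ins[0]] = 1 - known[out]  # backward implication
    return {p: v for p, v in new.items() if p not in known}

def xsim_core(initial):
    """BFS-style restoration: start from the known signals and propagate."""
    known = dict(initial)
    queue = deque(initial)
    while queue:
        queue.popleft()                       # dequeue a restored signal
        for pin, val in imply(known).items():
            known[pin] = val                  # record the restored value
            queue.append(pin)                 # and propagate from it
    return known

restored = xsim_core({"c": 1})                # "trace" c = 1
print(restored)                               # {'c': 1, 'a': 1, 'b': 1, 'd': 0}
```

Knowing only that the AND output `c` is 1 is enough to imply both of its inputs and, from there, the NOT output, mirroring how one known signal value can cascade through fanins and fanouts.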

\section{A Hybrid Trace Signal Selection Algorithm}
\label{sec:alg_sm}

In this section, we start with a general overview of our algorithm, then present our proposed metrics, and finally discuss the steps of the algorithm in detail.

\begin{figure}[t]
   \centering
   \includegraphics[width=4.0in]{figs/overview.eps}
   \caption{Overview of our trace signal selection algorithm}
   \label{fig:overview}
\end{figure}

\subsection{Overview}
Figure \ref{fig:overview} shows an overview of our trace signal selection
algorithm. The procedure starts by initializing a set of metrics which will be
elaborated shortly. Then either Method (i) or Method (ii) is used to select the
next trace signal. Some of the metrics are updated after each selection, and
the procedure repeats until the required number of signals $B$, which is the
trace buffer width, has been selected.

The procedure for selecting the next trace signal involves two
methods. Method (i) is based on a forward greedy selection strategy. However,
Method (i) excludes a subset of state elements, which we refer to as ``islands'', when selecting the next trace signal at each step. Method (ii) is applied after every few steps, e.g., after every 8 selected trace signals in our simulation framework. It explicitly considers adding an island signal to the set of the currently-selected trace signals. The definition of island state elements is given in the next subsection.


\subsection{Proposed Metrics}
We propose a set of metrics to quickly and accurately identify a small set
of signals as top candidates, which are then sent to X-Simulation to
measure the state restoration ratio (SRR). The candidate with the highest SRR
is selected as the next trace signal. The metrics build upon each other and
are elaborated one by one, from the lowest level to the highest.

\subsubsection{{\bf $L^v_f$: Reachability list of state element $f$ taking value $v$}}\label{sec:rl_sm}
Given state element $f$, we consider the case where it takes a known value of 0 or 1. We denote the reachability list by $L^v_f$ and define it as the set of state elements which can be restored if state element $f$ takes a
known value $v \neq x$ while the values of the rest of the state elements are unknown. Note this definition is not associated with any particular clock cycle. In other words, the reachability list $L^v_f$ identifies those state elements whose values can be restored if state element $f$ takes a known value (at the same or a different clock cycle) while the values of the other state elements are initially unknown (but may become restored).

\begin{algorithm}[t]
\caption{{\tt Reachability-list}($f, v, \cS_C, \&L^v_f$)}
\label{alg:RL}
\small
\begin{algorithmic}[1]
\STATE $s:=(p_f,v\neq x,0)$
\STATE {\tt XSim-core}($\{s\}\cup\cS_C, \&\cS_R$, {\tt restore-once}=true)
\STATE {\tt XSim-core}($\cS_C, \&\cS_{R_C}$, {\tt restore-once}=true)
\STATE $\cS_R = \cS_R\setminus \cS_{R_C}$
\STATE Return $L^v_f$ as the set of state elements in $\cS_R$
\end{algorithmic}
\end{algorithm}

For each state element $f$, two reachability lists with values $v=0$ and $v=1$ are computed. Algorithm \ref{alg:RL} shows the procedure. The inputs are the state element $f$, its considered value $v\neq x$, and the set of input control signals denoted by $\cS_C$. (The notation does not associate the control signals with a clock cycle because we assume they remain constant within the capture window.) The output is $L^v_f$.

First, signal $s$ is defined which corresponds to state element $f$. (Since no particular cycle matters, an arbitrary cycle such as `0' is used for this field of $s$.) In line 2, the {\tt XSim-core} procedure is called with the union of the control signals and $s$ as its input to identify the set of signals $\cS_R$ which can be restored. Since some of the signals in $\cS_R$ may be restorable solely using $\cS_C$, they must be removed to identify the additional signals which can be restored when $s$ is also used. To remove these signals, in line 3, the {\tt XSim-core} procedure is called again, this time using only $\cS_C$ as input. The restored signals, denoted by $\cS_{R_C}$, are then removed from $\cS_R$ (line 4). $L^v_f$ is the set of state elements corresponding to the signals in $\cS_R$.
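The set-difference structure of Algorithm \ref{alg:RL} can be sketched as follows, where {\tt xsim} stands in for the {\tt XSim-core} procedure; the canned oracle and signal names below are purely illustrative.

```python
def reachability_list(f, v, controls, xsim):
    """L^v_f = restored(controls + {f=v}) minus restored(controls), minus f."""
    restored_with_f = xsim(controls | {(f, v)})
    restored_base = xsim(controls)            # restorable by controls alone
    return {s for (s, _) in restored_with_f - restored_base if s != f}

# Illustrative stand-in for XSim-core: the control signal alone restores f4;
# adding f1 = 1 additionally restores f2 and f3.
def fake_xsim(inputs):
    out = set(inputs) | {("f4", 0)}
    if ("f1", 1) in inputs:
        out |= {("f2", 1), ("f3", 0)}
    return out

print(sorted(reachability_list("f1", 1, {("en", 1)}, fake_xsim)))  # ['f2', 'f3']
```

Subtracting the baseline run keeps only what $f=v$ contributes beyond the control signals, exactly as in lines 2-4 of the algorithm.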

For example, in Figure \ref{fig:example} we have $L^0_1=\{f_2,f_5\}$, $L^1_1=\{f_2,f_3\}$, $L^0_2=\{f_1, f_5\}$, $L^1_2=\{f_1, f_3\}$, $L^0_3=\{f_1, f_2, f_4, f_5\}$, $L^1_3=\emptyset$, $L^0_4=\{f_5, f_3\}$, $L^1_4=\emptyset$, $L^0_5=\emptyset$, $L^1_5=\{f_1,f_2, f_3, f_4\}$.

Note, when calling the {\tt XSim-core} procedure, the {\tt restore-once} flag is set to true. In the example of Figure \ref{fig:example-xsim}, $g_3$ is only restored once (by $f_1=1$) and subsequent restorations of $g_3$ won't result in enqueue-ing any more signals. This is because for computing the reachability list $L_1^1$, we only need to identify whether $g_3$ can be restored by $f_1$; we don't need to know which value $g_3$ will be restored to. Setting the {\tt restore-once} flag to true thus ensures that the reachability lists can be computed quickly (and only once) as a pre-processing step of our trace signal selection algorithm.

It is possible that some state elements have an empty reachability list for both values of $v$ which we refer to as {\bf island} state elements. More precisely, $f$ is an island state element if $L^0_f=L^1_f=\emptyset$.
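Island detection is then a direct check on the two reachability lists of each state element; the lists below are hypothetical values, not taken from a real benchmark.

```python
# Hypothetical reachability lists for three state elements, keyed by (f, v).
L = {
    ("f1", 0): {"f2"}, ("f1", 1): {"f2", "f3"},
    ("f2", 0): set(),  ("f2", 1): {"f1"},
    ("f3", 0): set(),  ("f3", 1): set(),
}

def islands(state_elements, L):
    """f is an island iff both L^0_f and L^1_f are empty."""
    return [f for f in state_elements if not L[(f, 0)] and not L[(f, 1)]]

print(islands(["f1", "f2", "f3"], L))  # ['f3']
```

Here `f2` is not an island: one empty list is not enough, since $f_2=1$ can still restore another element.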

\subsubsection{{\bf $r_f$: Restorability rate of state element $f$}}\label{sec:r}
This metric reflects the probability that a single state element $f$ can be restored using the trace signals identified so far. The probability is computed within an observation window of $M$ clock cycles by repeatedly calling the {\tt XSim-core} procedure.

\begin{algorithm}[t]
\caption{{\tt Restorability-FFs} ($F, \cS_C, \cS^1_T, \ldots, \cS^M_T, \&r_{f,\forall f\in F}$)}
\label{alg:RF}
\small
\begin{algorithmic}[1]
\STATE $\cS_R=\emptyset$
\STATE {\tt XSim-core}($\cS_C, \&\cS_{R_C}$, {\tt restore-once}=false)
\FOR{n=1 to $M$}
    \STATE {\tt XSim-core}($\cS_{R_C}\cup \cS^n_T, \&\cS_{R^n}$, {\tt restore-once}=false)
    \STATE $\cS_R=\cS_R \cup \cS_{R^n}$
%    \FOR{each $f \in F$}
%            \STATE $r_f=r_f+1$ if $f$ is a state element in $\cS_R$
%    \ENDFOR
\ENDFOR
\FOR{all $f\in F$}
    \STATE $\cS_{R_f}=\{s =(f,~.,~.)\in \cS_R\}$
    \STATE $r_f=\frac{|\cS_{R_f}|}{M}$
\ENDFOR
\STATE return $r_f~\forall f\in F$
\end{algorithmic}
\end{algorithm}

Algorithm \ref{alg:RF} shows the details of computing the restorability rates for all the state elements. The inputs to the algorithm are the set of unselected state elements $F$, the control signals (denoted by $\cS_C$), and traced signals within an observation window of $M$ cycles (denoted by $\cS_T^1,\ldots,\cS_T^M$). The output of Algorithm \ref{alg:RF} is the restorability rate of state element $f$, denoted by $r_f$ $\forall f\in F$.

Recall that $s=(p,v,n)\in \cS_T^n$ if $v\neq x$ and $p\in \cP_F$, indicating that $s$ corresponds to a state element which is traced in cycle $n$. This means the value of each traced state element at cycle $n$ within the observation window is known. Note, the set $\cS_T^n$ includes the trace signals corresponding to the (same) state elements selected so far, i.e., up to the current iteration of our trace signal selection procedure, $\forall n$ within the observation window.

While in practice the observation window corresponding to the trace buffer depth should be the capture window (ranging from 1K to 8K cycles), it has been shown in \cite{ChatterjeeMB11} that a much smaller observation window (e.g., $M=64$ cycles) provides sufficient accuracy for decision making within the simulation-based procedure. Similarly, we also use $M=64$ to internally approximate the SRR in this work.
Specifically, the trace signals $\cS_T^n$ for $n=1,2,\ldots,M$, for any call to this function, are computed using a one-time simulation done in the initial pre-processing step of the flow shown in Figure \ref{fig:overview}. The circuit is simulated for 10K cycles and the values of all state elements for each cycle are stored in the initialization step.

To compensate for the sensitivity of the error to a small observation window, similar to \cite{ChatterjeeMB11}, Algorithm \ref{alg:RF} is called three times with three different observation windows (three non-overlapping subsets of the capture window) to compute three restorability rates for each state element. The final restorability rate of a state element is then the average of its three values.
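The averaging step itself is straightforward; the per-window rates below are hypothetical values.

```python
def averaged_rates(per_window):
    """Average the per-window restorability rate of each state element
    over the three non-overlapping observation windows."""
    return {f: sum(w[f] for w in per_window) / len(per_window)
            for f in per_window[0]}

print(averaged_rates([{"f1": 0.75}, {"f1": 0.50}, {"f1": 1.00}]))
# {'f1': 0.75}
```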


According to Algorithm \ref{alg:RF}, first, the set of control signals $\cS_C$ is used to identify a set of restored signals denoted by $\cS_{R_C}$ using the {\tt XSim-core} procedure. (See line 2.) At each step, the {\tt XSim-core} procedure is used to identify the signals which can be restored using the trace and control signal values at cycle $n$, for $n=1,2,\ldots, M$. Note these new signals may be restored at the same cycle $n$ or at a different cycle, smaller or larger than $n$. For example, in Figure \ref{fig:example-xsim}, when Algorithm \ref{alg:core} is called with {\tt restore-once} set to false, we use the trace signal ($f_1$,1,2) at cycle $n=2$, and restore state element $g_3$ twice, at cycles 2 and 0, corresponding to signals ($g_3$,0,2) and ($g_3$,1,0).

The set of signals restored using the trace and control signals at cycle $n$ is denoted by $\cS_{R^n}$. This set is added to the set of restored signals identified so far in the for loop. (See line 5.) The restorability rate $r_f$ is computed for each state element in lines 7-10; for each state element $f$, we collect the subset of restored signals $\cS_{R_f}$ containing all signals whose pin field corresponds to $f$. The restorability rate $r_f$ is then the number of elements in $\cS_{R_f}$ divided by the observation window length $M$.
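The final loop of Algorithm \ref{alg:RF} reduces to counting, per state element, the restored (pin, cycle) pairs; the restored set below is made up for illustration.

```python
# Made-up restored signals over a window of M = 4 cycles: (pin, cycle) pairs.
M = 4
restored = {("f1", 1), ("f1", 2), ("f1", 3), ("f2", 2)}

def restorability_rates(state_elements, restored, M):
    """r_f = |S_{R_f}| / M, where S_{R_f} are the restored signals of pin f."""
    return {f: sum(1 for (p, _) in restored if p == f) / M
            for f in state_elements}

print(restorability_rates(["f1", "f2", "f3"], restored, M))
# {'f1': 0.75, 'f2': 0.25, 'f3': 0.0}
```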

%The input trace signals $\cS_T^n$ for $n=1,2,\ldots,M$ , for any call to this function, are computed using a one-time simulation done in the initial pre-processing step of the flow shown in Figure \ref{fig:overview}. Specifically, the circuit is simulated for 1K cycles
%and the values of all state elements for each cycle are stored in the initialization step. Next, at each call of Algorithm \ref{alg:RF}, the values of the corresponding trace signals are looked up from the stored patterns for the $M$ cycles. Specifically, given the small size of the observation window, i.e., $M=64$, we randomly select 3 non-overlapping observation windows with different starting cycles from the 10K simulated cycles. For each window, Algorithm \ref{alg:RF} is called once and the final stored value of $r_f$ is computed as the average of these three values. (These 3 runs of Algorithm \ref{alg:RF} are implemented as 3 threads running on a multi-core machine.) %This idea of averaging over multiple computations was also used in \cite{ChatterjeeMB11} but for computing the SRR when $M$ is small. In addition, the calls to the {\tt XSim-core} procedure are made by setting the {\tt restore-once} flag to false. Recall by setting this flag to false, all the possible restorable signals will be identified. So it allows exact evaluation if $f$ can be restored by the current trace signals.


\subsubsection{{\bf$d_{i,f}^v$: Demand of state element $i$ from state element $f$ taking value $v$}}\label{sec:demand}
%If state element $i$ is not fully restored (i.e., $r_i<1$) we would like to quantify its demand if it is restored by another state element $f$.

Here we consider evaluating a not-traced state element $f$ with value $v$ as a candidate to be selected for tracing. We are interested in finding out how much $f$, with value $v$, can further contribute to the restoration of $i$, given that $i$ is already restored at rate $r_i$ by the already-traced state elements.

We define $d_{i,f}^v$ as the demand of $i$ to get fully restored by $f$ when $f$ has a known value $v\neq x$.

In practice, the state elements $f$ which allow restoring a state element $i$ form only a small subset of the entire set of state elements. In other words, considering every state element as $f$ results in many unnecessary and time-consuming computations. Therefore, we limit $f$ to the state elements which include $i$ in their reachability list for either value 0 or 1 (i.e., $i\in L^v_f$, $v\in\{0,1\}$). We define the demand $d_{i,f}^v$ as follows.
\begin{equation}
d_{i,f}^v=\min(1-r_i,\ a_f^v),\quad \forall i\in L^0_f ~\mbox{or}~ i\in L^1_f
\label{eq:d}
\end{equation}
where $a_f^v$ is the probability that state element $f$ takes value $v$. The probability $a_f^v$ is accurately computed in the initialization step using circuit simulation for a suitable number of clock cycles (e.g., 10K in this work with random values for non-control input vectors). %In Equation \ref{eq:d}, the quantity $1-r_i$ reflects the remaining restoration demand of $i$. If it is larger than $a_f^v$, then the demand is given by $a_f^v$ which is the likelihood that $f$ takes value $v$. %We note Equation \ref{eq:d} is an upper bound approximation which can be computed fast. Otherwise, accurate computation of the demands requires many time-consuming simulations and would be impractical for realization of a fast and scalable algorithm.

In the above equation, the two arguments of $\min$ are as follows. First, $1-r_i$ represents the demand of state element $i$ (potentially from all unselected state elements) in order to reach full restoration. The second argument $a_f^v$, gives an estimate on how much this demand can be supplied by $f$. Parameter $a_f^v$ is an upper bound estimate because in the best-case, $i$ is always restored when $f$ takes value $v$.

The minimum of the remaining demand of $i$ (from all state elements) and the supply provided by $f$ with value $v$ defines $d_{i,f}^v$, approximating the degree of contribution of $f$ to restoring $i$. Specifically, when $a_f^v > 1-r_i$ we have $d_{i,f}^v=1-r_i$, indicating that $i$ won't need more than $1-r_i$ to be fully restored. When $a_f^v < 1-r_i$ we have $d_{i,f}^v=a_f^v$, indicating that $f$ cannot supply beyond this amount.
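Both cases of Equation \ref{eq:d} are captured by a one-line function; the rate and probability values below are hypothetical.

```python
def demand(r_i, a_f_v):
    """d^v_{i,f} = min(1 - r_i, a^v_f): remaining demand of i capped by the
    supply f can provide when it takes value v."""
    return min(1.0 - r_i, a_f_v)

# Case 1: i is almost restored (r_i = 0.9), so its remaining demand 0.1 binds.
print(round(demand(0.9, 0.75), 3))  # 0.1
# Case 2: f rarely takes value v (0.5 < 1 - 0.2), so the supply binds.
print(demand(0.2, 0.5))             # 0.5
```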

The above equation is only an approximation of the actual demand of $i$. To measure this demand accurately, we would need to call Algorithm \ref{alg:RF} twice, with $f$ with value $v$ first included in and then excluded from the set of input trace signals $\cS^n_T$. The difference in the two restorability rates of state element $i$ obtained from these two calls gives the demand of $i$ from $f$ more accurately because it is calculated using X-Simulation. In our experiments in Section \ref{sec:demand-exp}, we show that Equation \ref{eq:d} still allows obtaining a good solution compared to the above simulation-based procedure for calculating the demands, with a significant runtime benefit due to avoiding simulations for each pair of not-traced state elements $i$ and $f$.


\subsubsection{{\bf $w_f$: Impact weight of state element $f$}}
The impact weight of a state element $f$ captures the amount of restoration achieved if $f$ is selected as the next trace signal. A key point in computing this impact weight is accounting for the remaining restoration demand of the not-fully-restored state elements, which is given by the demand metric. Specifically, the impact weight of state element $f$ is defined as follows.
\begin{equation}
w_f=\sum_{v\in\{0,1\}}\ \sum_{i\in L_f^v} d^{v}_{i,f}
\label{eq:w}
\end{equation}
In the above equation, the demands of the state elements in the reachability lists of $f$, for values 0 and 1, are added. A higher impact weight for a state element $f$ indicates that more state elements can be restored if $f$ is selected as the next trace signal, while accounting for the amount of restoration already provided by the selected trace signals.

As an example, in Figure \ref{fig:example}, the impact weight of state element $f_2$ is given by $w_2=d_{1,2}^0+d_{1,2}^1+d_{3,2}^1+d_{5,2}^0$. At the beginning, when no trace signal is selected, the restorability rates of all the state elements are 0, so each demand reduces to a probability term according to Equation \ref{eq:d}. Assuming the two primary inputs of this circuit are independent and each has a probability of 0.5 to be 0 or 1, we obtain the probabilities $a^0_1=a^1_1=0.5$, $a^1_3=0.75$, $a^0_5=0.75$, and $w_2=2.5$.
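The worked example for $w_2$ can be checked numerically. Following the example's convention, the probability table below is keyed by the restored element and its value, using the values quoted in the text; the reachability lists of $f_2$ come from the earlier example.

```python
# Reachability lists of candidate f2 (from the earlier example) and the
# probabilities quoted in the worked example; all r_i are 0 initially.
L = {0: ["f1", "f5"], 1: ["f1", "f3"]}
r = {"f1": 0.0, "f3": 0.0, "f5": 0.0}
a = {("f1", 0): 0.5, ("f1", 1): 0.5, ("f3", 1): 0.75, ("f5", 0): 0.75}

def impact_weight(L, r, a):
    """w_f: sum of demands min(1 - r_i, a) over both reachability lists."""
    return sum(min(1.0 - r[i], a[(i, v)]) for v in (0, 1) for i in L[v])

print(impact_weight(L, r, a))  # 0.5 + 0.75 + 0.5 + 0.75 = 2.5
```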

\subsection{Steps of the Algorithm}
We now discuss the details of different steps of our algorithm. These are shown in bold in Figure \ref{fig:overview}.

\subsubsection{{\bf Initialization}} \label{sec:init}
In this step, first, the circuit is simulated for 10K cycles using random values for the non-control primary input vectors. The simulation results are used to compute the probability $a_f^v$ for each state element. As mentioned before, they are also used to provide the trace signal values which are fed as inputs to Algorithm \ref{alg:RF} to compute $r_f$ for each state element $f$. Next, the demands and the impact weights are computed, as in the example above.

\subsubsection{{\bf Method (i): Trace Signal Selection Ignoring Islands}}
\label{sec:method1}
At each step of the algorithm, Method (i) is first used to identify the next trace signal among all the not-traced state elements which are not islands. In general, a state element with a higher impact weight is a better candidate for the next trace signal. However, simply selecting the state element with the maximum weight may not be the best choice: based on our observations, there may be other state elements with slightly smaller weight values which result in a higher state restoration ratio (SRR). Therefore, we evaluate the top $k=5\%$ of the state elements with the highest impact weights. To select the next trace signal from this subset, we consider adding each candidate to the current set of selected trace signals and directly measure the SRR for an observation window of $M=64$ cycles. (See Section \ref{sec:xsim} for the computation of the SRR.) The next trace signal is the one which yields the maximum SRR. In our simulation results, we show the effectiveness of the impact weights in identifying the top candidates (compared to using pure simulation). We also show the need for X-Simulation to pick the next trace signal among the best candidates: using the impact weight alone degrades the solution quality.

Since computing the SRR involves X-Simulation, the parameter $k$ should be set to a small value to keep the runtime of the algorithm feasible. In our implementation of Method (i), we identify the top $5\%$ of the state elements with the highest weights as the candidates for the next trace signal. Based on our observations, this value is large enough to include any state element with a relatively high impact weight, yet small enough that the few extra X-Simulations impose a negligible runtime overhead in our framework.
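Method (i) can be sketched as a two-stage selection, with a stand-in SRR oracle in place of the real X-Simulation; the candidate names, weights, and oracle below are synthetic.

```python
import math

def select_next(weights, selected, measure_srr, k_frac=0.05):
    """Shortlist the top k% of candidates by impact weight, then pick the
    one whose addition maximizes the (X-Simulated) SRR."""
    ranked = sorted(weights, key=weights.get, reverse=True)
    k = max(1, math.ceil(k_frac * len(ranked)))
    return max(ranked[:k], key=lambda f: measure_srr(selected + [f]))

# 40 hypothetical candidates with synthetic weights; the fake SRR oracle
# happens to prefer f1 over the slightly heavier f0.
weights = {f"f{i}": 4.0 - 0.1 * i for i in range(40)}
fake_srr = lambda sel: 0.9 if sel[-1] == "f1" else 0.5
print(select_next(weights, [], fake_srr))  # f1
```

The point of the sketch is that the maximum-weight candidate (`f0`) is not automatically chosen; the shortlist is re-ranked by measured SRR.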

\subsubsection{{\bf Method (ii): Trace Signal Selection Considering Islands}} Recall from Section \ref{sec:rl_sm} that an island state element has empty reachability lists for both values 0 and 1. This means that, on its own, an island state element is not able to restore any other state element. Therefore, as shown in Figure \ref{fig:overview}, after selecting every 8 trace signals, we consider adding an island signal. For example, for a typical trace buffer bandwidth of 64 bits (the maximum bandwidth considered for trace buffers in prior works), adding an island is considered seven times throughout the course of the algorithm.

Specifically, to select an island signal, we simply add each island signal individually to the current list of selected trace signals and measure the SRR for an observation window of 64 cycles. This is feasible because we observed that the number of islands is typically very small, so the runtime overhead of computing the SRRs is not significant. Once the SRRs are computed, the island with the maximum SRR is identified, and if its SRR is higher than a threshold, the island is added to the set of trace signals. In that case, two trace signals are added within one step of the algorithm. However, if no island is selected, adding an island is postponed until eight additional trace signals have been selected.
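Method (ii) reduces to a guarded argmax over the islands; the island names, SRR oracle, and threshold values below are illustrative assumptions (the text does not fix a threshold value).

```python
def maybe_add_island(islands, selected, measure_srr, threshold):
    """Return the island with the highest SRR if it clears the threshold,
    otherwise None (selection is postponed for 8 more signals)."""
    if not islands:
        return None
    scores = {f: measure_srr(selected + [f]) for f in islands}
    best = max(scores, key=scores.get)
    return best if scores[best] > threshold else None

fake_srr = lambda sel: {"i1": 0.6, "i2": 0.4}[sel[-1]]
print(maybe_add_island(["i1", "i2"], [], fake_srr, threshold=0.5))  # i1
print(maybe_add_island(["i1", "i2"], [], fake_srr, threshold=0.7))  # None
```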

\subsubsection{{\bf Updating the Metrics}} Method (i) relies on using the impact weights corresponding to the most recent set of trace signals. Therefore, at each iteration of the loop in Figure \ref{fig:overview}, these weights are updated. Specifically, the core metric to be updated is the restorability rate of each state element which is used to compute the demand in Equation \ref{eq:d} and the weight in Equation \ref{eq:w}. In order to update the $r_f$ values, Algorithm \ref{alg:RF} is called which now takes the new sets of trace signals as inputs as explained in Section \ref{sec:r}. Note, the reachability lists do not change after the initialization step.

We further discuss the complexity of some of the steps for computing and updating the weights. First, updating the demands and the impact weight of one state element can be done in constant time once the restorability rates ($r_f$) are updated; this can be observed from Equation \ref{eq:d}. For the impact weight given by Equation \ref{eq:w}, in practice we observe a constant computational complexity because each state element is only contained in the reachability lists of a small number of state elements, much smaller than the total number of state elements in the circuit. The computational complexity is dominated by updating the $r_f$ values; however, this only requires calling the {\tt XSim-core} procedure $3\times64$ times in Algorithm \ref{alg:RF} for all the not-traced state elements.

\section{Improvements for Scalability}
\label{sec:large}
In this section, we present a set of improvements to the scalability of our trace signal selection algorithm. These improvements are two-fold: they decrease the runtime and improve the solution quality as the design size grows.


\subsection{Acceleration via Incremental Update of Restoration Map}\label{sec:extension1}
To accelerate the trace signal selection process, an efficient implementation of the X-Simulator is necessary, because based on our source code profiling, the SRR measurement using X-Simulation takes a significant portion of the runtime. We first review an existing acceleration technique introduced in \cite{KoN09}, which we also implemented in our framework. In \cite{KoN09}, the authors propose a bit-wise implementation of the X-Simulator which uses one bit to store the value of a signal at each cycle; bit-wise operations are then applied to restore the other signal values within the observation window. For example, an observation window of 64 cycles can be serially packed into a 64-bit integer data type. Then, starting from the trace signals, bit-wise operations are applied to evaluate all 64 cycles simultaneously. This implementation not only reduces the memory usage for storing the signal values, but also accelerates the restoration process by taking advantage of fast bit-wise operations.
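The packing idea can be illustrated in a few lines; note that a real X-Simulator additionally needs a second ``known'' mask per signal to encode $x$ values, which this sketch omits, and the bit patterns are arbitrary examples.

```python
MASK = (1 << 64) - 1   # one 64-cycle observation window per integer

a_bits = 0b1011        # value of signal a in cycles 0..3 (bit n = cycle n)
b_bits = 0b1101        # value of signal b in the same window

c_bits = a_bits & b_bits   # c = AND(a, b) for all 64 cycles at once
d_bits = ~a_bits & MASK    # d = NOT(a) for all 64 cycles at once

print(bin(c_bits))         # 0b1001: c is 1 exactly in cycles 0 and 3
```

A single machine instruction thus restores a gate output across the whole window, which is where the speedup over cycle-by-cycle evaluation comes from.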

Now, we introduce a technique for incremental update of the restoration map to further accelerate the trace signal selection process. Within our algorithm, the restoration maps of one iteration can be reused to accelerate the computation for selecting the next trace signal in the following iteration. This is because two restoration maps corresponding to consecutive iterations have commonalities, which grow as the iteration count increases.

Specifically, when a signal value is restored at an iteration, it will
remain restored for the remaining iterations. This is because the subset of
trace signals (selected up to the current iteration) has already
restored/will continue to restore this signal in the remaining
iterations. New signal values may be restored only at the cycles when their
values were unknown in the previous iteration.
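This monotonicity is what makes the incremental update a simple bit-wise OR per signal; in the sketch below, bit $n$ of each integer marks restoration at cycle $n$, and the values are hypothetical.

```python
def update_map(stored, new_bits):
    """OR newly restored cycle-bits into the stored restoration map; bits
    restored in earlier iterations are never cleared."""
    merged = dict(stored)
    for sig, bits in new_bits.items():
        merged[sig] = merged.get(sig, 0) | bits
    return merged

m0 = {"f1": 0b0010}                               # f1 restored at cycle 1
m1 = update_map(m0, {"f1": 0b0100, "f2": 0b0001})
print(m1 == {"f1": 0b0110, "f2": 0b0001})         # True: old bit kept
```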

\begin{figure}[t]
   \centering
   \includegraphics[width=3.2in]{figs/resmap_S35932.eps}
   \caption{\small{Restoration map of {\tt S35932} for four consecutive iterations; the
       red points represent the state elements just restored at
       that iteration (and not from the previous iterations) while the green
       points represent the state elements restored from the previous iterations.}}
   \label{fig:resmap_S35932}
 \end{figure}

For example, Figure \ref{fig:resmap_S35932} shows the restoration maps of benchmark circuit {\tt S35932} for four consecutive iterations labeled (a)-(d). The green points represent the state elements that are restored from the previous iterations (with (a) as the initial iteration). The red points represent the state elements that are just restored in the current iteration. The red points turn green in subsequent iterations, and the locations of all the green points remain unchanged.

Our process for the incremental update of restoration maps across consecutive iterations is as follows. At the end of each iteration, after a new trace signal is selected, the restoration map for each set of intervals is temporarily stored. (Recall, in order to approximate SRR for an observation window smaller than the capture window, we store 3 intervals of 64 cycles each, and then compute and average the SRR over these 3 intervals.) When a new iteration starts, these restoration maps are fetched and used to measure the SRR when new trace candidates are added; the candidates only need to further extend the previously stored restoration maps. To reduce the memory usage for storing the restoration maps, we use a bit-wise implementation similar to the one used for the X-Simulator, as described earlier in this section.

By temporarily storing and reusing the restoration maps, we observe a significant speedup compared to the strategy used in \cite{LiD13,LiD14TCAD}, in which the restoration maps were computed from scratch at each iteration and for each trace signal candidate. We note that this acceleration technique may not be suitable for selection strategies other than the forward-greedy method.

\subsection{Extension of Metrics to Improve SRR}\label{sec:extension2}
When experimenting with some of the larger benchmarks (IWLS'05 \cite{IWLS05} and ISPD'12 \cite{OzdalAABWZ12}), we observed that in some cases the sizes of the reachability lists shrink compared to the smaller benchmarks, to the extent that the number of islands can become much higher, e.g., over 90\% of the total number of state elements. We noticed that this increase in the number of islands was due to longer combinational paths, which make it more difficult for a state element to restore its neighboring state elements (without the help of the other state elements). For example, the ratio of the number of combinational gates to the number of state elements in the large benchmark {\tt b22} \cite{IWLS05} is 60X higher than in the largest three benchmarks of the ISCAS'89 \cite{ISCAS89} benchmark circuits.


The increase in the number of islands degrades the quality of our algorithm because islands are not considered at each step. Moreover, the iterations which consider adding an island can take much longer because the number of islands is already high, thereby degrading the runtime of the algorithm.

More importantly, an increase in the number of islands is also accompanied by a decrease in the size of the reachability lists of the other state elements. Consequently, identifying the top candidates for tracing using the impact weight (which is defined over the reachability list of each state element, as given in Equation \ref{eq:w}) becomes error-prone.

To address the above issues, we observe that for larger benchmarks, even though a state element may not directly restore its neighboring state elements, it may still restore a high fraction of the combinational gates on the path connecting it to its neighboring state elements. Guided by this observation, we extend the definition of the reachability list as follows.

We introduce a parameter $\lambda$ between 0 and 1 to control the extent to which the not-restored neighboring state elements may be included in the reachability list of another state element, considered as the ``source''. A neighboring state element is one that is connected to the source by at least one (solely) combinational path. %For example, in our simulation framework we set $\lambda=0.4$ in the larger benchmark suites.

The extension of the reachability list is as follows. Given a source state element $f$ with value $v$ (0 or 1) and a neighboring state element $i$ which is not restored by $f$, we first count the number of not-restored combinational gates on the shortest path connecting $f$ and $i$. We denote this count by $H^{nr}_i$. Next, we define $H_f$ as the length of the longest path connecting $f$ to any of its neighboring state elements. We then define the fractional parameter $\beta_{i,f}$ as follows.
\begin{equation}
\beta_{i,f}=1-\frac{H^{nr}_i}{H_f}
\label{eq:rl-ext}
\end{equation}
We then include $i$ in the reachability list of $f$ if $\beta_{i,f}>\lambda$.
In our simulations, we show the impact of varying $\lambda$ on the solution quality.

With the above extension of the reachability list, it is also appropriate to extend the definition of the restoration demand in a similar way, as given by the equation below.
\begin{equation}
  d_{i,f}^v=\min(1-r_i,\; \beta_{i,f} \times a_f^v),\quad \forall i\in L^0_f \;\mathrm{or}\; i\in L^1_f
\label{eq:modify-d}
\end{equation}
where $\beta_{i,f}$ is given by Equation \ref{eq:rl-ext}. Here, the parameter $\lambda$ controls the size of the reachability lists, and thus the number of $d_{i,f}^v$ quantities defined between pairs of state elements.
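Equations \ref{eq:rl-ext} and \ref{eq:modify-d} translate directly into code. The sketch below (function and variable names are ours) shows the membership test for the extended reachability list and the extended restoration demand.

```cpp
#include <algorithm>

// beta_{i,f} = 1 - H^nr_i / H_f (Equation for the extended list):
// hNr = not-restored combinational gates on the shortest f-to-i path,
// hF  = length of the longest path from f to any neighboring element.
double beta(int hNr, int hF) {
    return 1.0 - static_cast<double>(hNr) / hF;
}

// Neighbor i joins the extended reachability list of f when beta > lambda.
bool inExtendedList(int hNr, int hF, double lambda) {
    return beta(hNr, hF) > lambda;
}

// Extended restoration demand d_{i,f}^v = min(1 - r_i, beta_{i,f} * a_f^v),
// where r_i is the restorability rate of i and a_f^v the availability
// of f at value v.
double demand(double r_i, double beta_if, double a_fv) {
    return std::min(1.0 - r_i, beta_if * a_fv);
}
```

With $\lambda$ close to 1, the extended list degenerates to the original one; lowering $\lambda$ admits neighbors whose connecting paths are mostly, but not fully, restored.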

\section{Simulation Results}
\label{sec:results_sm}

Our hybrid trace signal selection algorithm (referred to as HYBR) was implemented in C++ and tested for trace buffers of various bandwidths. To measure the solution quality, the State Restoration Ratio (SRR) with a capture window of 4096 cycles was used, with random values for the non-control primary inputs. Note that this capture window size is also assumed in prior work and is considered feasible for on-chip implementation of the trace buffer. %Our implementation of HYBR included the feature to improve runtime described in Section \ref{sec:extension1}.

We compare with other trace signal selection algorithms. We note that due to the technology library used for synthesizing the benchmarks in our experiments, a direct comparison with other existing algorithms (by looking at the SRR values reported in the related publications) would not be accurate. Therefore, we first implemented and compared our results with the following trace signal selection algorithms. Later in this section, we compare with more techniques based on the original (un-synthesized) ISCAS'89 benchmarks.
\begin{itemize}
\item METR: metric-based \cite{ShojaeiD10}\footnote{For fair comparison, we
did not specify any state elements as critical state elements when
comparing with \cite{ShojaeiD10}.},
\item SIM-B: simulation-based with backward pruning of state elements \cite{ChatterjeeMB11},
\item SIM-F: simulation-based with forward greedy selection of signal traces,
\item SA: simulated annealing-based iterative perturbation of an initial solution generated using HYBR.
\end{itemize}
In SIM-F, we use a forward greedy strategy to select the trace signal that leads to the highest increase in SRR at each iteration. Both SIM-B and SIM-F use simulation to estimate SRR during trace signal selection, while METR uses the ``visibility'' metric from \cite{LiuX09}. As a result, METR is significantly faster than SIM-B and SIM-F; however, SIM-B and SIM-F achieve a higher solution quality because they use a more accurate SRR model. The goal of comparing with these alternative techniques is mainly to use METR as a reference for runtime and SIM-B/SIM-F as references for solution quality. Among the metric-based techniques, we select \cite{ShojaeiD10} due to its fast execution runtime, which is similar to \cite{LiuX12} but faster than \cite{KoN09} and \cite{BasuM13}. Both \cite{ChatterjeeMB11} and \cite{ShojaeiD10} use the X-Simulator in their internal procedures; for fair comparison, we use the same procedure (i.e., {\tt XSim-core} given by Algorithm \ref{alg:core}) which provides the efficient bit-wise implementation of \cite{KoN09}. When implementing \cite{ChatterjeeMB11}, custom parameter selection was done the same way as reported in \cite{ChatterjeeMB11}. All experiments ran on an Intel quad-core 3.4GHz CPU with 12GB of memory.

In SA, we first use HYBR to generate an initial solution. Then the
algorithm iteratively perturbs the current solution in order to improve the
solution quality in terms of SRR. Specifically, at each iteration, $r$
($1\leq r\leq R$, with $R=3$) trace signals that contribute least to restoration are
first eliminated; then the same number of new signals are added for tracing.
A probabilistic acceptance criterion is applied to accept a solution.

Elimination of the $r$ signals is done in two ways. With `deterministic'
elimination, X-Simulation is applied to measure the SRR value when each one
of the already-selected trace state elements is removed. A ranking of all
already-selected trace state elements is then generated, and the $r$ trace
signals whose removal yields the highest SRR values (i.e., the signals
contributing least to restoration) are eliminated. With `random'
elimination, $r$ trace signals are randomly selected for removal.

At each iteration, deterministic elimination is first applied with
$r=1$. If the new solution after perturbation is not accepted, $r$ is
increased by one and the process repeats until $r=R$. If no solution is
accepted, the algorithm switches from deterministic elimination to random
elimination with $r$ set back to 1. This process then repeats until $r=R$
a second time. If the solution is still not accepted at this point, the
algorithm randomly eliminates a signal and adds a new one, without
considering the acceptance criterion. The idea behind this flow is that
the algorithm makes increasingly aggressive perturbations to avoid getting
trapped in a local optimum.

For the addition of new trace signals, HYBR is applied to greedily select
signals until the number of trace signals equals the trace buffer width.
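The escalation schedule described above can be captured in a small helper. This is an illustrative sketch with names of our own choosing; it maps the number of consecutive rejected perturbations to the elimination mode and the value of $r$ for the next attempt.

```cpp
#include <utility>

// Perturbation schedule of the SA flow (sketch, names ours): the first
// R rejected attempts use deterministic elimination with r = 1..R, the
// next R use random elimination with r = 1..R, and after 2R rejections
// the move is forced (random swap, acceptance criterion skipped).
enum class Mode { Deterministic, Random, Forced };

std::pair<Mode, int> nextMove(int rejections, int R) {
    if (rejections < R)     return {Mode::Deterministic, rejections + 1};
    if (rejections < 2 * R) return {Mode::Random, rejections - R + 1};
    return {Mode::Forced, 1};
}
```

Each accepted solution resets the rejection count, so the cheap deterministic moves are always tried first and the forced move remains a last resort.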

The intent of performing SA is to obtain an upper bound on the solution quality in terms of SRR for each benchmark. Runtime is not a concern here, as we want to run SA long enough to generate the best solutions. Therefore, for SA, the runtime limit is set to 8 hours for all benchmarks throughout the experiments.
 
\subsection{Comparison of Runtime}
\begin{sidewaystable}
%\begin{table*}[htbp]
\scriptsize
  \centering
  \caption{Runtime comparison of different algorithms}
    \begin{tabular}{c|c|c@{\hspace{0.5cm}}cc|c@{\hspace{0.5cm}}cc|c@{\hspace{0.5cm}}cc|c@{\hspace{0.9cm}}c@{\hspace{0.8cm}}c}
    \toprule
     &
     & \multicolumn{3}{c|}{\scriptsize METR: Metric-based  \cite{ShojaeiD10}}
     & \multicolumn{3}{c|}{\scriptsize \stackcell{SIM-B: Backward simulation-based \cite{ChatterjeeMB11} \\ (a quad-core machine)}}
     & \multicolumn{3}{c|}{\scriptsize \stackcell{SIM-F: Forward simulation-based \\ (a quad-core machine)}}
     & \multicolumn{3}{c}{\scriptsize \stackcell{HYBR:\\ Our hybrid algorithm}}
    \\
    \midrule
    Benchmark &\#FFs   & 8     & 16    & 32    & 8     & 16    & 32    & 8     & 16    & 32   & 8     & 16    & 32\\
    \midrule
    {\tt S5378}     &163  &  8     & 27    & 66    & 410   & 400  & 330   & 74     & 220    & 722    & 5     & 27    & 28 \\
    {\tt S9234}     &145  &  6     & 17    & 38    & 448   & 365  & 250   & 543    & 2036    & 4914    & 26    & 84    & 86 \\
    {\tt S13207}    &327  &  48    & 117   & 254   & 2892   & 2802  & 2463    & 697    & 2510   & 7780   & 68    & 163   & 166 \\
    {\tt S15850}    &137  &  7     & 18    & 37    & 154   & 123  & 40   & 348    & 877   & 1654   & 83    & 193   & 197 \\
    {\tt S35932}    &1728 &  73    & 167   & 408   & 25980   & 25920  & 25860   & 200   & 561   & 1506   & 139   & 208   & 217 \\
    {\tt S38417}    &1564 &  3690  & 7620  & 13428 & 180300   & 180240  & 180120   & 30600   & 35892  & 44597 & 434   & 2508  & 2521 \\
    {\tt S38584}    &1166 & 53     & 140   & 354   & 59580   & 59520  & 59460   & 862   & 4113   & 13167   & 167   & 741   & 752 \\
    \bottomrule
    \end{tabular}
  \label{tab:runtime_sm}
%\end{table*}
\end{sidewaystable}

In this experiment, we used the ISCAS'89 benchmarks \cite{ISCAS89}, which were synthesized using Synopsys Design Compiler with a 90nm TSMC library. The number of state elements for each benchmark is reported in column 2 of Table \ref{tab:runtime_sm}. (We report the experiments on the larger benchmarks later because some of the alternative approaches were not scalable enough to run on them.)

Within the ISCAS'89 benchmarks, two benchmarks ({\tt S38584} and {\tt S35932}) have control signals as primary inputs. The names and values of these control signals are as follows. For {\tt S38584}, we identified `g35' as an active-low global reset, so it was set to 1, as also pointed out in \cite{KoN09}. For {\tt S35932}, the active-low global reset signal `RESET' was set to 1. Moreover, two control input signals, `TM0' and `TM1', define four operation modes in this benchmark. Therefore, we ran this benchmark four times, once for each operation mode, and measured four separate SRR values. We then report the average of these four SRR values for {\tt S35932} in our experiments, as done in other related publications.

Table \ref{tab:runtime_sm} shows the comparison of the runtime of HYBR with METR, SIM-B, and SIM-F for three buffer bandwidths of 8, 16, and 32 bits and a buffer depth of 4K. The reported runtimes are in seconds. We note that \cite{ChatterjeeMB11} describes a GPU-based implementation of SIM-B which exploits a high degree of parallelism, whereas our implementation of \cite{ChatterjeeMB11} ran on a quad-core CPU using multi-threading, with only up to 8 parallel threads in our setup. Therefore, our reported numbers for SIM-B are higher than those in \cite{ChatterjeeMB11}; nevertheless, they provide a measure to highlight the speedups achieved by HYBR.

As can be seen, HYBR has a runtime comparable to METR and is tremendously faster than SIM-B and SIM-F. Moreover, we note that metric-based algorithms have already been shown to be much faster than simulation-based procedures, even when a GPU-based implementation is used, as reported in \cite{ChatterjeeMB11}. We therefore expect HYBR to remain much faster even against a GPU-based implementation of SIM-B.

To analyze the fast runtime of HYBR and compare it with SIM-B, we compare the number of calls to the {\tt XSim-core} procedure in both algorithms. This procedure is called repeatedly and is the most time-consuming step in both cases. The analysis is as follows. In HYBR, at each step, computation of the restorability rate for \emph{all} the state elements ($r_f~\forall f\in F$) using Algorithm \ref{alg:RF} requires a total of $3\times 64$ calls to the {\tt XSim-core} procedure. Furthermore, at each step of HYBR, we perform SRR computation only for the top 5\% of state elements which have the highest impact weights, and each SRR computation requires $3\times 64$ calls to the {\tt XSim-core} procedure. Therefore, the number of calls to {\tt XSim-core} at each step of HYBR is dominated by the number of SRR computations, which is at most 5\% of the number of state elements, a small number. In contrast, in SIM-F, an SRR is computed at each step for each not-traced state element, so the number of SRR computations is significantly higher. Furthermore, the number of steps in SIM-B is much higher than in HYBR and SIM-F. This is because SIM-B is based on eliminating the least promising state element at each step, and the number of state elements is often much higher than the number of trace signals. %For example, {\tt S38584} has 1166 state elements.
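The counting argument above can be made concrete. The helper below is an illustration with our own names, using the $3\times 64$ calls per SRR evaluation stated in the text.

```cpp
// Per-iteration calls to XSim-core, following the counts in the text:
// computing r_f for all state elements costs 3*64 calls in total, and
// each SRR evaluation costs another 3*64 calls (3 intervals, 64 cycles).
const long kCallsPerSRR = 3 * 64;

long hybrCallsPerIteration(long numStateElements) {
    long topCandidates = (numStateElements * 5 + 99) / 100;  // top 5%, rounded up
    return kCallsPerSRR                   // restorability rates, all elements
         + topCandidates * kCallsPerSRR;  // SRR for the top candidates only
}

long simfCallsPerIteration(long notTraced) {
    return notTraced * kCallsPerSRR;      // one SRR evaluation per candidate
}
```

For a design with 1000 state elements, HYBR makes roughly 20X fewer {\tt XSim-core} calls per iteration than SIM-F under this count.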

\subsection{Comparison of Solution Quality}
In this experiment, we compare the solution quality of our algorithm and the alternative techniques in terms of SRR. As shown in Table \ref{tab:res1}, compared to METR, HYBR results in a significantly higher SRR. This holds for the majority of the benchmarks, except for the smallest one ({\tt S5378}), for which the SRR is quite similar across all the algorithms.

\begin{sidewaystable}
%\begin{table*}[htbp]
  \centering
  \scriptsize\tabcolsep=9pt
  \caption{Comparison of SRR of different algorithms}
    \begin{tabular}{c|ccc|c@{\hspace{0.8cm}}c@{\hspace{0.8cm}}c|
    c@{\hspace{0.8cm}}c@{\hspace{0.8cm}}c|c@{\hspace{0.8cm}}c@{\hspace{0.8cm}}c|c@{\hspace{0.8cm}}c@{\hspace{0.8cm}}c}
    \toprule
          \multicolumn{1}{c|}{}
          & \multicolumn{3}{c|}{\scriptsize METR: Metric-based \cite{ShojaeiD10}}
          & \multicolumn{3}{c|}{\scriptsize \stackcell{\stackcell{SIM-B: Backward \\ simulation-based \cite{ChatterjeeMB11}} \\(quad-core machine)}}
          & \multicolumn{3}{c|}{\scriptsize \stackcell{\stackcell{SIM-F: Forward \\ simulation-based} \\(quad-core machine)}}
          & \multicolumn{3}{c|}{\scriptsize \stackcell{HYBR:\\ Our hybrid algorithm}}
          & \multicolumn{3}{c}{\scriptsize \stackcell{SA: Simulated \\ annealing}}
          \\
          \midrule
    \multicolumn{1}{c|}{Benchmark} & 8   & 16   & 32    & 8     & 16    & 32    & 8     & 16    & 32    & 8     & 16    & 32 & 8     & 16    & 32\\
    \midrule
    \multicolumn{1}{c|}{{\tt S5378}} & 13.7  & 8.1   & 4.1   & 12.8  & 7.1   & 4.4   & 13.5  & 7.9   & 4.2   & 13.6  & 8.0   & 4.2 & 13.8 & 8.3 & 4.4 \\
    \multicolumn{1}{c|}{{\tt S9234}} & 8.4   & 5.8   & 3.4   & 9.1   & 6.6   & 3.6   & 9.8   & 5.9   & 3.5   & 9.8   & 6.8   & 3.6 & 10.0 & 7.2 & 3.6 \\
    \multicolumn{1}{c|}{{\tt S13207}} & 13.8  & 6.8   & 3.5   & 19.3  & 12.2  & 7.8   & 24.2  & 15.8  & 8.4   & 24.5  & 16.3  & 8.9 & 24.5 & 17.4 & 9.0 \\
    \multicolumn{1}{c|}{{\tt S15850}} & 14.4  & 7.6   & 4.1   & 14.5  & 7.8   & 4.1   & 15.4  & 7.9   & 4.0   & 15.6  & 8.1   & 4.1 & 15.6 & 8.1 & 4.1 \\
    \multicolumn{1}{c|}{{\tt S35932}} & 31.1  & 19.4  & 11.6  & 58.1  & 36.2  & 23.1  & 59.3  & 37.4  & 22.3  & 61.4  & 38.3  & 23.4 & 61.4 & 38.3 & 23.4 \\
    \multicolumn{1}{c|}{{\tt S38417}} & 17.6  & 13.1  & 9.7   & 29.4  & 17.8  & 20.0  & 51.5  & 24.0  & 16.8  & 51.3  & 30.1  & 17.5 & 51.4 & 30.4 & 17.7 \\
    \multicolumn{1}{c|}{{\tt S38584}} & 13.5  & 10.8  & 7.1   & 14.9  & 18.1  & 16.4  & 25.1  & 18.6  & 17.5  & 24.0  & 18.5  & 17.5 & 24.1 & 18.5 & 17.5 \\
    \bottomrule
    \end{tabular}%
  \label{tab:res1}
%\end{table*}%
\end{sidewaystable}

Compared to SIM-B, HYBR has a consistently higher SRR for small buffer bandwidths, i.e., 8 and 16 bits. For example, in benchmark {\tt S38417} and for the buffer bandwidth of 8 bits, the SRR of HYBR is 51.3 while the SRR of SIM-B is 29.4. For the bandwidth of 32 bits, the two algorithms have quite similar SRR (with HYBR running significantly faster). The main reason that HYBR performs better for smaller bandwidths is that it selects the most promising state element at each step, while SIM-B eliminates the least promising one at each step. Therefore, in SIM-B, the error associated with the \emph{greedy} backward elimination of state elements grows as the buffer bandwidth decreases. In contrast, in HYBR, the error associated with the greedy forward addition of promising state elements is smallest for the smallest buffer bandwidth.

It can be noticed that SIM-F and HYBR have similar SRR values. Specifically, SIM-F has the same or a slightly higher SRR (in {\tt S38417} and {\tt S38584}) for a buffer bandwidth of 8 bits, while HYBR has a slightly higher or the same SRR for the other bandwidths over all the benchmarks. The main reason that HYBR is slightly better than SIM-F in the majority of the cases, and for larger buffer bandwidths, is the step for adding island state elements, which compensates for the purely-greedy strategy used in the rest of the selection process. For a buffer bandwidth of 8, only one step of island insertion is used. (See Figure \ref{fig:overview} for the flow chart of our algorithm.)

We also notice that for a large trace buffer bandwidth, SIM-B usually has
better or similar solution quality compared to SIM-F, but for a small trace
buffer width, SIM-F consistently performs better. Again, this is because for
smaller bandwidths, the error associated with backward pruning becomes
higher than that of a forward greedy strategy.

Finally, when compared with SA, we can see that SA improves upon HYBR for some benchmarks at certain buffer widths. In
general, the improvement is not significant considering the long runtime (8 hours) that SA takes. This indicates that the solutions
generated by HYBR can be considered good, although it does not mean that these solutions are close to optimal.

\subsection{Impact of Various Features of HYBR}
Next, we perform a set of experiments to show that the metrics and the various steps used in HYBR each contribute to achieving a higher solution quality.

\subsubsection{Effectiveness of the Impact Weight in Identifying the Top Candidates}
Here, we show that the impact weight can correctly identify the top candidates at each iteration. We ran HYBR on benchmark circuit {\tt S38417} and monitored the selection process over 8 consecutive iterations. At each iteration, we recorded the indices of the top candidates identified using the impact weight metric. We then applied a variation of HYBR in which the top candidates at each iteration are found using X-Simulation, without using the impact weights; that is, the same number of state elements with the highest SRR values. We consider this variation, which is purely based on X-Simulation, as our reference case.

At each iteration, we then compared the two sets of top candidates (of equal size) obtained using the impact weights and using the reference case, and report the percentage of state elements common to both.
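The overlap measurement itself is straightforward. A sketch with our own names shows how the reported percentage is obtained from the two candidate sets.

```cpp
#include <set>
#include <vector>

// Percentage of the impact-weight top candidates that also appear in
// the X-Simulation reference set of the same size.
double commonPercentage(const std::vector<int>& byWeight,
                        const std::vector<int>& bySim) {
    std::set<int> ref(bySim.begin(), bySim.end());
    int common = 0;
    for (int idx : byWeight) common += static_cast<int>(ref.count(idx));
    return 100.0 * common / static_cast<double>(byWeight.size());
}
```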

\begin{figure}%[hbt]
   \centering
   \includegraphics[width=3.0in]{figs/topcandidates.pdf}
   \caption{Percentage of correctly-identified top candidates in {\tt S38417}}
   \label{fig:topcandidate}
\end{figure}
Figure \ref{fig:topcandidate} shows this comparison. As can be seen, more than 90\% of the top candidates are common, and thus correctly identified. This is notable given that the runtime of identifying these top candidates using the impact weight is significantly lower than that of the reference case, as shown in our previous experiment.

To further elaborate, in Table \ref{tab:res2} we also report the average size of the reachability lists of the state elements in column 2, along with the percentage of island state elements in column 3. We note that the average size is between 10 and 20 over these benchmarks, which is significantly smaller than the total number of state elements per benchmark. This directly results in fast calculation of the impact weights, because the impact weight of each state element is defined using only the state elements in its reachability list.

\begin{sidewaystable}
%\begin{table*}[htbp]
  \centering
  \scriptsize\tabcolsep=12pt
  \caption{Impact of various steps in HYBR on the SRR}
    \begin{tabular}{c|c|c|c@{\hspace{1.3cm}}c@{\hspace{1.2cm}}c|c@{\hspace{1.0cm}}c@{\hspace{1.0cm}}c|c@{\hspace{1.0cm}}c@{\hspace{1.0cm}}c}
    \toprule

          \multicolumn{1}{c|}{}  & \multicolumn{1}{c|}{$|L^v_f|$} & \multicolumn{1}{c|}{\#Islands}
          & \multicolumn{3}{c|}{\scriptsize \stackcell{HYBR-NOISL: Hybrid (w/o \\ considering islands during selection)}}
          & \multicolumn{3}{c|}{\scriptsize \stackcell{HYBR-NOSIM: Hybrid (w/o \\ simulation for top candidates)}}
          & \multicolumn{3}{c}{\scriptsize \stackcell{HYBR: \\ Our hybrid algorithm}}
          \\
    \midrule
    \multicolumn{1}{c|}{Benchmark} & \multicolumn{1}{c|}{} & & 8     & 16    & 32    & 8     & 16    & 32    & 8     & 16    & 32 \\
    \multicolumn{1}{c|}{{\tt S5378}}  & \multicolumn{1}{c|}{18.7} &17.4\%  & 12.5  & 7.8   & 4.1   & 13.4  & 7.9   & 4.0   & 13.6  & 8.0   & 4.2 \\
    \multicolumn{1}{c|}{{\tt S9234}}  & \multicolumn{1}{c|}{6.2}   &51.3\% & 8.1   & 6.5   & 3.5   & 9.4   & 6.1   & 3.3   & 9.8   & 6.8   & 3.6 \\
    \multicolumn{1}{c|}{{\tt S13207}} & \multicolumn{1}{c|}{15.3}  &18.4\% & 24.5  & 16.3  & 8.9  & 31.6  & 18.9  & 11.3  & 24.5  & 16.3  & 8.9 \\
    \multicolumn{1}{c|}{{\tt S15850}} & \multicolumn{1}{c|}{11.9}  &20.8\% & 15.6  & 8.1  & 4.1  & 18.1  & 10.3  & 5.9   & 15.6  & 8.1   & 4.1 \\
    \multicolumn{1}{c|}{{\tt S35932}} & \multicolumn{1}{c|}{10.1}  &16.7\% & 61.4  & 38.3  & 23.4  & 31.6  & 18.9  & 11.3  & 61.4  & 38.3  & 23.4 \\
    \multicolumn{1}{c|}{{\tt S38417}} & \multicolumn{1}{c|}{15.1}  &53.0\% & 48.2  & 28.7  & 16.7  & 18.1  & 10.3  & 5.9   & 51.3  & 30.1  & 17.5 \\
    \multicolumn{1}{c|}{{\tt S38584}} & \multicolumn{1}{c|}{12.3}  &40.1\% & 23.9  & 18.5  & 17.5  & 18.3  & 14.8  & 10.7  & 24.0  & 18.5  & 17.5 \\
    \bottomrule
    \end{tabular}%
  \label{tab:res2}
%\end{table*}%
\end{sidewaystable}

\subsubsection{Effectiveness of X-Simulation to Find the Best of the Top Candidates}
To show that X-Simulation is necessary to identify the best among the top candidates, we evaluate another variation of HYBR in which, after identifying the top candidates, we use the impact weights (instead of X-Simulation) to determine the best candidate to be the next trace signal. We denote this variation by HYBR-NOSIM. All the other steps remain the same as in HYBR.

Table \ref{tab:res2} compares the SRR of HYBR with HYBR-NOSIM. We observe that the SRR of HYBR is consistently better. Specifically, for {\tt S35932}, {\tt S38417}, and {\tt S38584}, the SRR of HYBR-NOSIM degrades significantly. These results show that the impact weight metric alone is not able to distinguish the best trace signal among the top candidates, and X-Simulation is also necessary.


\subsubsection{Impact of Island Consideration}
Next, to show the impact of adding islands, we evaluate a variation of HYBR in which we remove the step for considering islands, which was referred to as Method (ii) in the flow chart given in Figure \ref{fig:overview}. We refer to this variation of HYBR as HYBR-NOISL. All the other steps remain the same as in HYBR.

The comparison of the SRR of HYBR and HYBR-NOISL is shown in Table \ref{tab:res2}. We observe that HYBR-NOISL performs noticeably worse than HYBR on three benchmarks ({\tt S5378}, {\tt S9234}, and {\tt S38417}), while for the remaining benchmarks it has the same or only slightly worse solution quality. Therefore, HYBR consistently performs at least as well. After further investigation of these three benchmarks, we found that their islands are more crucial to the SRR of the final solution than the other state elements.

The SRR of HYBR-NOISL can also be compared against the SIM-F approach given in Table \ref{tab:res1}. These two approaches are quite similar: both are purely forward-greedy, and the only difference is the use of quick metrics in HYBR-NOISL instead of the long but accurate simulations in SIM-F to drive the trace signal selection process. The comparison shows that they have similar SRR, indicating the effectiveness of the metrics used in HYBR-NOISL (and hence in HYBR as well).

\subsubsection{Impact of Using the Restoration Demand Equation}\label{sec:demand-exp}
Recall that the metric $d_{i,f}^v$ is only an approximation of the demand of state element $i$ from state element $f$ when $f$ takes value $v$. A more accurate demand can be computed using the procedure given at the end of Section \ref{sec:demand}, which invokes Algorithm \ref{alg:RF} and X-Simulation. We refer to the variation of our algorithm that uses this more accurate demand computation as HYBR-DS. All other steps of our algorithm remain the same in HYBR-DS.

\begin{figure}[t]
   \centering
   \includegraphics[width=3.0in]{figs/demand.eps}
   \caption{Impact of using restoration demand on the solution
     quality of benchmark {\tt S5378} compared with more accurate simulation-based calculation of demand}
   \label{fig:approxdemand}
\end{figure}

Here, we ran an experiment to measure the solution quality of HYBR and HYBR-DS over 16 consecutive iterations on benchmark circuit {\tt S5378}. The results are shown in
Figure \ref{fig:approxdemand}. The X-axis is the iteration count and the Y-axis is the SRR computed up to that iteration of the algorithm, for a capture window of $N$=4K cycles. As can be seen, the two results are very close, with HYBR-DS having a slightly higher SRR than HYBR. We conclude that the restoration demand equation is effective in obtaining results similar to HYBR-DS with a significantly faster runtime.

\exclude{
Instead of using Equations (1) and (2) to compute the restoration demand and impact weight, we perform the following experiment to more accurately compute it by
introducing simulation. At each iteration of the
selection process, we test the impact of each not-traced state element on the
restoration of other state elements if it is traced. More specifically, each not-traced state
element is added to the set of already traced state elements and simulation
is performed to accurately compute its contribution to the restoration of
other state elements in the presence of \emph{increase in their
  restorability rate}. To make a fair comparison with the computation method of using Equation
\ref{eq:d} and \ref{eq:d}, we only add the increase of the restorability
rate of the state elements included in the reachability list of a certain
state element to represent the ``new'' impact weight of that state element. For
example, in Figure \ref{fig:example}, $L^0_1=\{f_2,f_5\}$,
$L^1_1=\{f_2,f_3\}$. Suppose $f_1$ is a not-traced state element and the new
restorability rates of $f_2$, $f_3$ and $f_5$ are $r_2'$, $r_3'$ and $r_5'$
respectively if $f_1$ is added to the trace set and accurate simulation is
performed. The impact weight of $f_1$
will then become $w_1 = (r_2'-r_2) + (r_3'-r_3) + (r_5'-r_5)$.

With the introduction of simulation, the new impact weight computation is
expected to be more accurate. However, it is also expected that a much larger
number of simulations are needed. Suppose the total
number of state elements in a design is $n$. To select 32 trace signals,
$n\times(n-1)..(n-31)$ more simulations will be taken, which greatly
increase the runtime for selection. Due to this reason, we applied the new
impact weight computation on the smallest ISCAS'89.*
and reported the solution quality for selecting 16-32 traces when keeping the
rest of the selection process same as HYBR. We compare the products of
different number of traces $\times$ SRR with the ones in HYBR as shown in
Figure \ref{fig:aproxdemand}. As can be seen, the new impact weight
computation is able to constantly improve the solution quality across
different trace buffer widths.
}

\subsection{Experiments on Larger Benchmarks}
In this section, we apply the extensions described in Section \ref{sec:large} to improve the scalability of HYBR for larger benchmarks. We compare two variations, HYBR-WO-EXT and HYBR-EXT. Here, HYBR-EXT includes the extension for improving the solution quality with $\lambda=0.4$, which was described in Section \ref{sec:extension2}. Both variations include the runtime improvement given in Section \ref{sec:extension1}. We then perform our experiments on a set of large benchmarks selected from IWLS'05 \cite{IWLS05} and the ISPD'12 gate sizing contest \cite{OzdalAABWZ12}.

For HYBR-WO-EXT, the algorithm was often not able to select the required number of $B$ trace signals within the first $B$ iterations because most of the state elements were islands. For example, more than 89\% of the state elements were islands in 5 of the 6 large benchmarks. In these cases, in order to create a complete solution, after the first $B$ iterations we continued with a forward greedy selection strategy and selected the next trace using X-Simulation (similar to the SIM-F strategy) until $B$ trace signals were selected.
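For concreteness, this completion step can be sketched as follows. This is a minimal illustration, not our exact implementation; `x_simulate_srr` is a hypothetical stand-in for an evaluator that runs X-Simulation on a candidate trace set and returns its SRR.

```python
def complete_selection(selected, candidates, B, x_simulate_srr):
    """Greedily extend a partial trace set to B signals.

    When the first B iterations leave the trace set short (most
    candidates were islands), repeatedly pick the candidate whose
    addition maximizes the X-Simulation SRR, similar to the SIM-F
    strategy. `x_simulate_srr` is a hypothetical stand-in that
    scores a trace set and returns its SRR.
    """
    selected = list(selected)
    pool = set(candidates) - set(selected)
    while len(selected) < B and pool:
        # Evaluate each remaining candidate with X-Simulation and
        # keep the one yielding the highest SRR.
        best = max(pool, key=lambda f: x_simulate_srr(selected + [f]))
        selected.append(best)
        pool.discard(best)
    return selected
```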

In this experiment, we do not report the results of simulation-based techniques because they failed to run in a reasonable amount of time on these benchmarks.

\begin{table*}[t]
  \centering
  \caption{Comparison of runtime and SRR in larger benchmarks}
  \resizebox{\columnwidth}{!}{%
    \begin{tabular}{rc|cc|ccc|ccc}
    \toprule
          \multicolumn{4}{c|}{Benchmark Info} & \multicolumn{3}{c|}{HYBR-WO-EXT} &
          \multicolumn{3}{c}{HYBR-EXT} \\
    \midrule
    \multicolumn{1}{c}{Benchmark} & Benchsuite & \#Gates & \#FFs &  \%Islands & SRR   &
    Runtime (s) & \%Islands & SRR   & Runtime (s) \\
    \midrule
    \multicolumn{1}{c}{{\tt b17}} & IWLS'05 & 33888  & 1317  & 98.6\% &  1.25  & 600   &
    0.0\% & 1.98  & 156 \\
    \multicolumn{1}{c}{{\tt b18}} & IWLS'05 & 119762   & 3020 & 98.5\% & 1.50  & 2119  &
    0.0\% & 2.93  & 649 \\
    \multicolumn{1}{c}{{\tt b22}} & IWLS'05 & 58192 & 613   & 89.7\% & 1.62  & 201   &
    0.0\% & 1.93  & 301 \\
    \multicolumn{1}{c}{{\tt dsp}} & IWLS'05 & 54730  & 3605 & 90.1\% & 5.06  & 627   &
    0.1\% & 5.35  & 617 \\
    \multicolumn{1}{c}{{\tt DMA}} & ISPD'12 & 36556  & 2192 & 97.0\% & 5.01  & 368   &
    0.0\% & 6.33  & 357 \\
    \multicolumn{1}{c}{{\tt des\_perf}} & ISPD'12 & 149066  & 8802  & 0.3\% & 38.80 & 1941
    & 0.3\% & 38.80 & 1949 \\
    \bottomrule
    \end{tabular}%
  }
  \label{tab:large}
\end{table*}%

\subsubsection{Comparison of Runtime and SRR in Larger Benchmarks}
Table \ref{tab:large} compares the runtime and SRR of HYBR-WO-EXT and HYBR-EXT. Columns 3 and 4 show the total number of combinational gates and state elements for each benchmark. The trace buffer bandwidth was set to 64 bits for these larger benchmarks, with the same buffer depth of 4096 cycles as the capture window. Besides {\tt b22}, all the other benchmarks contain control input signals which define different operation modes. Similar to the previous experiments, we ran each algorithm for each operation mode separately, and then reported the SRR and runtime averaged over all operation modes.
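For reference, the per-mode averaging can be sketched as below, assuming the standard SRR definition (traced plus restored state values, normalized by the traced values); the function names are illustrative.

```python
def srr(num_traced, num_restored):
    """State restoration ratio: traced plus restored state values,
    divided by the traced values alone (standard definition)."""
    return (num_traced + num_restored) / num_traced

def average_srr_over_modes(per_mode_counts):
    """Average the SRR computed separately for each operation mode,
    as done for the benchmarks with control inputs.
    `per_mode_counts` is a list of (traced, restored) count pairs."""
    ratios = [srr(t, r) for t, r in per_mode_counts]
    return sum(ratios) / len(ratios)
```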

We report the number of islands in HYBR-WO-EXT and HYBR-EXT as a percentage of the total number of state elements (averaged over all operation modes) per benchmark in columns 5 and 8. As can be seen, without the extension for solution quality (i.e., HYBR-WO-EXT), the percentage of islands is larger than 89\% in all benchmarks except {\tt des\_perf}. With the extension (i.e., HYBR-EXT), the percentage of islands drops to almost 0\% for all benchmarks.

Columns 6 and 9 report the SRR of the two variations. As can be seen, the SRR of HYBR-EXT is higher than that of HYBR-WO-EXT in all benchmarks. Also notice that for {\tt b17}, {\tt b18}, and {\tt DMA}, which have the highest percentages of islands in HYBR-WO-EXT, the improvement in SRR with HYBR-EXT is larger than for the other benchmarks.

Columns 7 and 10 compare the runtimes. Even though both variations employ the runtime extension, the somewhat higher runtime of HYBR-WO-EXT is due to the greedy selection strategy applied after the first $B$ iterations to create a complete solution.


\begin{figure}%[hbt]
   \centering
   \includegraphics[width=3.0in]{figs/lambda_var.pdf}
   \caption{Impact of varying $\lambda$ on SRR in benchmark {\tt b22}.}
   \label{fig:lambda}
\end{figure}
\subsubsection{Impact of Parameter $\lambda$ on SRR in Larger Benchmarks}
Here we show the impact of varying the parameter $\lambda$ on SRR in {\tt b22}. Figure \ref{fig:lambda} shows SRR as a function of $\lambda$ ranging from 0.1 to 0.9 for two cases: (1) when the restoration demand is extended based on Equation \ref{eq:modify-d}, denoted by D-EXT, and (2) when it is computed from Equation (1), denoted by WO-D-EXT. Setting $\lambda$ to a lower value increases the average size of the reachability list and decreases the number of islands. For example, the average size of the reachability list decreases from 146.3 to 16.3 as $\lambda$ varies from 0.1 to 0.9.

First, in both curves, we observe that as $\lambda$ increases from 0.1 to 0.4, the SRR increases. This is because for very small values of $\lambda$, the average size of the reachability list is very large, yet membership in a reachability list is then no longer a reliable indication that a state element can be easily restored. As $\lambda$ increases beyond 0.4, the SRR decreases. This is because with increasing $\lambda$ the average size of the reachability list continuously shrinks; as a result, the impact weights, which are defined based on the state elements in the reachability list, become less effective in identifying the top candidates.
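One way to read the role of $\lambda$ here is as a restorability cutoff for membership in the reachability list: a lower cutoff admits more state elements, which matches the size trend above. The sketch below illustrates that assumed interpretation only; the names and the restorability estimates are hypothetical.

```python
def reachability_list(estimates, lam):
    """Keep only state elements whose estimated restorability from a
    candidate reaches the cutoff `lam` (assumed interpretation of the
    parameter). A smaller `lam` yields a larger list, matching the
    size trend reported for benchmark b22.
    `estimates` maps state-element names to restorability estimates."""
    return {f for f, r in estimates.items() if r >= lam}
```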


Second, we observe from Figure \ref{fig:lambda} that D-EXT is almost always above WO-D-EXT, indicating that the extension of the restoration demand also helps improve the SRR.



\subsection{Comparison with Other Related Works}

We regenerated our results using the original (un-synthesized) ISCAS'89 benchmarks. This allows comparing with other recent works by directly taking the SRR numbers from the related papers. Three ISCAS'89 benchmarks were considered in the previous works: {\tt S38584}, {\tt S35932}, and {\tt S38417}. The first two contain control signals such as reset signals. The reported SRR values for these two benchmarks vary significantly depending on whether the control signals are changed randomly, and if/how they are changed deterministically. We noticed that previous works have not set these control signals consistently when conducting simulation to compute the SRR. Therefore, we compare with each work separately.

\subsubsection{Comparison with Liu \& Xu}

As reported in \cite{LiuX09}, and further verified with the authors, we used the following setup for comparison. For {\tt S38584}, the signal `g35', which is a global reset, is set to 1 (inactive). For {\tt S35932}, the signal `RESET' is set to 1 (inactive); however, two other control signals, `TM0' and `TM1', are changed randomly. The trace buffer depth was set to 4K. Using the same setup, we regenerated the results using our approach (HYBR-EXT). As can be seen in Table \ref{tab:c1}, our approach yields a significant SRR improvement of 136.02\% on average.

\begin{table}[t]
  \renewcommand{\arraystretch}{0.9}
  \centering
  \caption{Comparison of SRR with Liu \& Xu}
  %\resizebox{\columnwidth}{!}{%
  %\scalebox{0.9}{
    \begin{tabular}{crrrc}
    \toprule
    \textbf{Benchmark} & \multicolumn{1}{c}{\textbf{Bandwidth}} & \multicolumn{1}{c}{\textbf{Liu \& Xu}} & \multicolumn{1}{c}{\textbf{HYBR-EXT}} & \textbf{\%Imp.} \\
    \midrule
    \multirow{3}[6]{*}[0.23cm]{\textbf{{\tt S38584}}} & \multicolumn{1}{c}{8} & \multicolumn{1}{c}{19.24} & \multicolumn{1}{c}{83.00} & 331.39 \\
          & \multicolumn{1}{c}{16} & \multicolumn{1}{c}{13.96} & \multicolumn{1}{c}{45.00} & 222.35 \\
          & \multicolumn{1}{c}{32} & \multicolumn{1}{c}{8.68} & \multicolumn{1}{c}{23.00} & 164.98 \\
\hline
    \multirow{3}[6]{*}[0.23cm]{\textbf{{\tt S35932}}} & \multicolumn{1}{c}{8} & \multicolumn{1}{c}{64.00} & \multicolumn{1}{c}{96.00} & 50.00 \\
          & \multicolumn{1}{c}{16} & \multicolumn{1}{c}{38.13} & \multicolumn{1}{c}{67.00} & 75.71 \\
          & \multicolumn{1}{c}{32} & \multicolumn{1}{c}{21.06} & \multicolumn{1}{c}{44.00} & 108.93 \\
\hline
    \multirow{3}[6]{*}[0.23cm]{\textbf{{\tt S38417}}} & \multicolumn{1}{c}{8} & \multicolumn{1}{c}{18.63} & \multicolumn{1}{c}{52.03} & 179.28 \\
          & \multicolumn{1}{c}{16} & \multicolumn{1}{c}{18.62} & \multicolumn{1}{c}{30.89} & 65.90 \\
          & \multicolumn{1}{c}{32} & \multicolumn{1}{c}{14.21} & \multicolumn{1}{c}{17.86} & 25.69 \\
\hline
    \textbf{Avg} &       &       &       & \textbf{136.02} \\
    \bottomrule
    \end{tabular}%
  \label{tab:c1}%
%}
\end{table}%

%S38584-R: g35 can be 0 or 1.
%S35932-R: RESET can be 0 or 1, TM0 and TM1 can be 0 or 1.
%S38584-M: g35 is 1.
%S35932-M: RESET is 1, TM0 and TM1 can be 0 or 1.

%LiuXu didn't perform the experiments for the "-R" cases. For simulation setup, they did not specify the number of sets of random inputs, trace buffer depth is 4K for SRR evaluation. We use 8 sets of random inputs with trace buffer depth of 4K for SRR evaluation. 				
%\begin{sidewaystable}
%\end{sidewaystable}

%\begin{sidewaystable}				
% Table generated by Excel2LaTeX from sheet 'Sheet2'
%\end{sidewaystable}								

\subsubsection{Comparison with Ko \& Nicolici}	

As elaborated in the dissertation \cite{KoDis}, and further verified with
the authors, we used two different setups, deterministic and random, for
comparison. For random, we append `-R' to the names of the two benchmarks
with control signals. For {\tt S38584-R}, the signal `g35' is changed randomly, and
for {\tt S35932-R}, the signals `RESET', `TM0' and `TM1' are changed randomly. For
deterministic, we append `-D' to the names of the two benchmarks. For
{\tt S38584-D}, signal `g35' is set to 1. For {\tt S35932-D}, signal `RESET' is set to
1. Furthermore, in {\tt S35932-D} two additional control signals, `TM0' and `TM1',
define four operation modes based on being set to `00', `01', `10', and `11'. The
work \cite{KoN09} computes the SRR of these four modes in the deterministic
case separately, and then reports the average of these four values.
Here we run HYBR-EXT with the same random and deterministic variations as above
and use a trace buffer depth of 8K cycles, as in \cite{KoN09}.

\begin{table}[t]
  \renewcommand{\arraystretch}{0.8}
  \centering
  \caption{Comparison of SRR with Ko \& Nicolici}
  \scalebox{0.9}{
    \begin{tabular}{crrrc}
    \toprule
    \textbf{Benchmark} & \multicolumn{1}{c}{\textbf{Bandwidth}} & \multicolumn{1}{c}{\textbf{Ko \& Nicolici \cite{KoN09}}} & \multicolumn{1}{c}{\textbf{HYBR-EXT}} & \textbf{\%Imp.} \\
    \midrule
    \multirow{3}[6]{*}[0.23cm]{\textbf{{\tt S38584-R}}} & \multicolumn{1}{c}{8} & \multicolumn{1}{c}{127.20} & \multicolumn{1}{c}{160.30} & 26.02 \\
          & \multicolumn{1}{c}{16} & \multicolumn{1}{c}{65.57} & \multicolumn{1}{c}{84.11} & 28.28 \\
          & \multicolumn{1}{c}{32} & \multicolumn{1}{c}{37.36} & \multicolumn{1}{c}{43.02} & 15.15 \\
\hline
    \multirow{3}[6]{*}[0.23cm]{\textbf{{\tt S35932-R}}} & \multicolumn{1}{c}{8} & \multicolumn{1}{c}{254.85} & \multicolumn{1}{c}{256.00} & 0.45 \\
          & \multicolumn{1}{c}{16} & \multicolumn{1}{c}{127.77} & \multicolumn{1}{c}{128.80} & 0.81 \\
          & \multicolumn{1}{c}{32} & \multicolumn{1}{c}{64.58} & \multicolumn{1}{c}{64.70} & 0.19 \\
\hline
    \multirow{3}[6]{*}[0.23cm]{\textbf{{\tt S38584-D}}} & \multicolumn{1}{c}{8} & \multicolumn{1}{c}{19.00} & \multicolumn{1}{c}{83.31} & 338.47 \\
          & \multicolumn{1}{c}{16} & \multicolumn{1}{c}{10.56} & \multicolumn{1}{c}{45.14} & 327.46 \\
          & \multicolumn{1}{c}{32} & \multicolumn{1}{c}{6.32} & \multicolumn{1}{c}{23.21} & 267.25 \\
\hline
    \multirow{3}[6]{*}[0.23cm]{\textbf{{\tt S35932-D}}} & \multicolumn{1}{c}{8} & \multicolumn{1}{c}{41.45} & \multicolumn{1}{c}{62.81} & 51.53 \\
          & \multicolumn{1}{c}{16} & \multicolumn{1}{c}{39.31} & \multicolumn{1}{c}{42.32} & 7.66 \\
          & \multicolumn{1}{c}{32} & \multicolumn{1}{c}{24.76} & \multicolumn{1}{c}{27.69} & 11.83 \\
\hline
    \multirow{3}[6]{*}[0.23cm]{\textbf{{\tt S38417}}} & \multicolumn{1}{c}{8} & \multicolumn{1}{c}{19.62} & \multicolumn{1}{c}{52.12} & 165.65 \\
          & \multicolumn{1}{c}{16} & \multicolumn{1}{c}{11.22} & \multicolumn{1}{c}{30.77} & 174.24 \\
          & \multicolumn{1}{c}{32} & \multicolumn{1}{c}{6.73} & \multicolumn{1}{c}{17.82} & 164.78 \\
\hline
    \textbf{Avg} &       &       &       & \textbf{105.32} \\
    \bottomrule
    \end{tabular}%
}
  \label{tab:c2}%
\end{table}%

For the random case, \cite{KoN09} reports four scenarios based on the random seed, and we use the largest of these four values from \cite{KoN09} in Table \ref{tab:c2}. %Also, the SRR numbers reported in \cite{KoN09} include restoration in primary inputs and outputs besides the state elements so we do the same in our comparison. 
As shown in Table \ref{tab:c2}, HYBR-EXT yields a significant SRR improvement of 105.32\% on average.

\begin{sidewaystable}[t]
%\rotatebox{90}{
%\begin{landscape}
%\begin{table*}[H]
%\begin{rotate}{90}
  \small
  \centering
  %\resizebox{\columnwidth}{!}{
  \caption{Comparison of SRR and runtime with Basu \& Mishra}
  \begin{tabular}{c|c|rrcc|rrc}
    \toprule
    & & \multicolumn{4}{c|}{SRR}& \multicolumn{3}{|c}{Runtime (Seconds)}\\
    %\textbf{Benchmark} & \textbf{Bandwidth} & \multicolumn{1}{c}{\textbf{ [3]}} & \multicolumn{1}{c}{\textbf{HYBR-EXT}} & \textbf{\%Imp [3]} & \textbf{\%Imp Sim-B [5]} & \multicolumn{1}{c}{\textbf{HYBR-EXT}} & \multicolumn{1}{c}{\textbf{[3]*}} & \textbf{ [3]*/HYBR-EXT} \\
    \textbf{Benchmark} & \textbf{Bandwidth} & \multicolumn{1}{c}{\textbf{\cite{BasuM11}}} & \multicolumn{1}{c}{\textbf{HYBR-EXT}} & \textbf{\%Imp \cite{BasuM11}} & \textbf{\%Imp Sim-B \cite{ChatterjeeMB11}} & \multicolumn{1}{c}{\textbf{HYBR-EXT}} & \multicolumn{1}{c}{\textbf{\cite{BasuM11}*}} & \textbf{ \cite{BasuM11}*/HYBR-EXT} \\
    \midrule
    \multirow{3}[6]{*}[0.23cm]{\textbf{{\tt S38584-R}}} & 8     & \multicolumn{1}{c}{155} & \multicolumn{1}{c}{156} & 0.6   & -2.5  & \multicolumn{1}{c}{55} & \multicolumn{1}{c}{320} & 5.8 \\
          & 16    & \multicolumn{1}{c}{82} & \multicolumn{1}{c}{83} & 1.2   & -2.4  & \multicolumn{1}{c}{70} & \multicolumn{1}{c}{341} & 4.9 \\
          & 32    & \multicolumn{1}{c}{42} & \multicolumn{1}{c}{42} & 0.0   & -2.3  & \multicolumn{1}{c}{313} & \multicolumn{1}{c}{409} & 1.3 \\
\hline 
    \multirow{3}[6]{*}[0.23cm]{\textbf{{\tt S35932-R}}} & 8     & \multicolumn{1}{c}{188} & \multicolumn{1}{c}{192} & 2.1   & -0.5  & \multicolumn{1}{c}{43} & \multicolumn{1}{c}{336} & 7.8 \\
          & 16    & \multicolumn{1}{c}{96} & \multicolumn{1}{c}{99} & 3.1   & -2.0  & \multicolumn{1}{c}{67} & \multicolumn{1}{c}{378} & 5.6 \\
          & 32    & \multicolumn{1}{c}{50} & \multicolumn{1}{c}{52} & 4.0   & -1.9  & \multicolumn{1}{c}{110} & \multicolumn{1}{c}{411} & 3.7 \\
\hline
    \multirow{3}[6]{*}[0.23cm]{\textbf{{\tt S38584-D}}} & 8     & \multicolumn{1}{c}{78} & \multicolumn{1}{c}{83} & 6.4   & -2.4  & \multicolumn{1}{c}{35} & \multicolumn{1}{c}{322} & 9.2 \\
          & 16    & \multicolumn{1}{c}{40} & \multicolumn{1}{c}{45} & 12.5  & -4.3  & \multicolumn{1}{c}{55} & \multicolumn{1}{c}{354} & 6.4 \\
          & 32    & \multicolumn{1}{c}{20} & \multicolumn{1}{c}{23} & 15.0  & -8.0  & \multicolumn{1}{c}{217} & \multicolumn{1}{c}{421} & 1.9 \\
\hline
    \multirow{3}[6]{*}[0.23cm]{\textbf{{\tt S35932-D}}} & 8     & \multicolumn{1}{c}{95} & \multicolumn{1}{c}{96} & 1.1   & 0.0   & \multicolumn{1}{c}{41} & \multicolumn{1}{c}{345} & 8.4 \\
          & 16    & \multicolumn{1}{c}{60} & \multicolumn{1}{c}{67} & 11.7  & 0.0   & \multicolumn{1}{c}{58} & \multicolumn{1}{c}{389} & 6.7 \\
          & 32    & \multicolumn{1}{c}{35} & \multicolumn{1}{c}{44} & 25.7  & -2.2  & \multicolumn{1}{c}{97} & \multicolumn{1}{c}{441} & 4.5 \\
\hline
    \multirow{3}[6]{*}[0.23cm]{\textbf{{\tt S38417}}} & 8     & \multicolumn{1}{c}{55} & \multicolumn{1}{c}{52} & -5.5  & 0.0   & \multicolumn{1}{c}{116} & \multicolumn{1}{c}{529} & 4.6 \\
          & 16    & \multicolumn{1}{c}{29} & \multicolumn{1}{c}{31} & 6.9   & -3.1  & \multicolumn{1}{c}{256} & \multicolumn{1}{c}{571} & 2.2 \\
          & 32    & \multicolumn{1}{c}{16} & \multicolumn{1}{c}{18} & 12.5  & -10.0 & \multicolumn{1}{c}{443} & \multicolumn{1}{c}{649} & 1.5 \\
%    \textbf{DMA} & 64    & \multicolumn{1}{c}{4.85} & \multicolumn{1}{c}{6.33} & 30.5  & N/A   & \multicolumn{1}{c}{357} & \multicolumn{1}{c}{931} & 2.6 \\
\hline
    \textbf{Avg} &       &       &       & \textbf{6.49} & \textbf{-2.77} &       &       & \textbf{4.98} \\
    \bottomrule
    \end{tabular}%
  %}
    \label{tab:c3}%
%\end{rotate}
%\end{table*}%
\end{sidewaystable}
%\end{landscape}
				
\subsubsection{Comparison with Basu \& Mishra}				

The work of Basu \& Mishra \cite{BasuM13} provides comparisons for both the
random and deterministic cases; however, it does not provide any information
about how the control signals are set up. What we report here is based on
communication with the authors. In the random case (denoted by `-R'), in
{\tt S38584-R}, the signal `g35' is changed randomly, and in {\tt S35932-R},
the signals `RESET', `TM0' and `TM1' are changed randomly. For the deterministic
case (denoted by `-D'), the signal `g35' is set to 1 in {\tt S38584-D}. For
benchmark {\tt S35932-D}, the signal `RESET' is set to 1 but `TM0' and `TM1' are changed randomly. Note this deterministic setup is similar to the one in Liu \& Xu and different from the deterministic setup in Ko \& Nicolici. We ran HYBR-EXT with the exact setup described above and took the SRR values from \cite{BasuM13} for comparison. We also used the same trace buffer depth of 4K as in \cite{BasuM13}.

% Table generated by Excel2LaTeX from sheet 'Sheet3'

Table \ref{tab:c3} shows the SRR comparison with \cite{BasuM13} in columns 3, 4, and 5. The average improvement of HYBR-EXT compared to \cite{BasuM13} is 6.49\%. Here our improvement is not as significant as in the previous cases. However, we argue that both approaches provide very high SRR, likely close to the highest attainable SRR for these ISCAS'89 benchmarks.

To show the above point, and using the above experimental setup, we then generated the results for the SIM-B case, which is our implementation of the backward simulation-based approach of \cite{ChatterjeeMB11}. As discussed before, SIM-B is entirely based on simulation, so it does not use any metric-based approximations. This approach gives the best results in terms of SRR, based on our implementation as well as what is reported in \cite{ChatterjeeMB11}. As can be seen in column 6, HYBR-EXT has only a 2.77\% average degradation in SRR compared to SIM-B. We therefore conclude that the reason our improvement in SRR compared to \cite{BasuM13} is not as significant as in \cite{KoN09} and \cite{LiuX09} is that both are very good solutions which cannot be optimized much further.

Since the two SRRs were close, we next compare the runtimes of the two approaches. Since the binary of \cite{BasuM13} was not available (as we learned after contacting the authors), we implemented the algorithm of \cite{BasuM13} ourselves. The results are given in columns 7-9. (All binaries ran on the same machine, as explained in the beginning of this section.) On average, HYBR-EXT is 4.98X faster. Note, this is based on our HYBR-EXT approach, which includes our proposed extension for speedup. We furthermore note that the approach in \cite{BasuM13} is not scalable in runtime; it relies on computing a controllability metric which requires finding paths between pairs of state elements. We found that this step takes a significantly long time for larger benchmarks. For example, on benchmark {\tt DMA} from the ISPD'12 suite, our runtime was 357 seconds while the runtime of our implementation of \cite{BasuM13} was 931 seconds. We also note that the SRR values computed from our implementation of \cite{BasuM13} were somewhat lower than the ones reported in column 3 of Table \ref{tab:c3}. In fact, for the larger benchmark {\tt DMA}, our SRR was significantly higher (about a 31\% improvement) than that of our implementation of \cite{BasuM13}. Nevertheless, we report the higher numbers directly from the related paper \cite{BasuM13} for the ISCAS'89 benchmarks, and the drawn conclusions remain the same.

Overall, when comparing with \cite{BasuM13}, we obtained similar SRR numbers for the ISCAS'89 benchmarks, which we showed are quite close to the SRR values obtained from the purely simulation-based algorithm of \cite{ChatterjeeMB11}; thus both approaches generate high-quality solutions. However, our runtimes, after incorporating the proposed extensions, were significantly better than those of our implementation of \cite{BasuM13}.
