\section{Terminology and Problem Definition}
Given a sequential circuit, we denote by $s:=(p,v,n)$ the signal at pin $p$ taking value $v$ at clock cycle $n$. The value $v$ can be 0, 1, or $x$ if it is unknown. Pin $p$ designates the output pin of either a combinational gate or a state element; it may also designate a primary input, which may further be a control signal, typically used to select an operation mode of the circuit. We denote the subsets of pins for state elements, gates, and control inputs by $\cP_F$, $\cP_G$, and $\cP_C$, respectively. For a signal $s=(p,v,n)$ in an observation window of $N$ clock cycles, we assume $n=1,2,\ldots,N$. For a control signal $(p,v,n)$, we have $p\in \cP_C$ and its value is known ($v\neq x$).

For pin $p\in \cP_G$, we denote by $FO_p$ the set of its ``fanout pins'', which are the outputs of combinational gates for which $p$ is an input. Similarly, we denote by $FI_p$ the set of ``fanin pins'', which are the inputs of the combinational gate for which $p$ is an output.

A signal $(p,v,n)$ with $p\in \cP_F$, corresponding to a state element, is a \emph{trace signal} if it is captured at run-time during the observation window $1\leq n\leq N$. Since the signal is captured in an on-chip trace buffer, its values are known within the observation window, so $v\neq x$. We denote the set of trace signals by $\cS_T$. The size of $\cS_T$ is $B\times N$ for a trace buffer of bandwidth $B$, which allows simultaneous tracing of $B$ signals over $N$ cycles. As an example, in Figure \ref{fig:example}, we have $\cP_F=\{p_1, p_2, p_3, p_4, p_5\}$ and $\cP_G=\{p_8, p_9, p_{10}\}$. For $p_8$ we have $FI_{p_8}=\{p_1, p_7\}$ and $FO_{p_8}=\{p_3\}$. The highlighted flipflop $f_3$ is traced, so $\cS_T=\{(p_3, v, n)\}$.

A signal $(p,v,n)$ is defined to be \emph{restored} in cycle $n$ if pin $p$ does not correspond to a pin of a trace signal or of a control input signal, and the value $v$ can be restored to 0 or 1 using the values of the trace and control signals. The algorithmic procedure for determining whether a signal can be restored will be explained shortly. The trace selection problem aims to find $B$ trace signals such that the total number of restored signals over the $N$ cycles is maximized.
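As a minimal illustration of this signal model, the following Python sketch encodes signals as $(p,v,n)$ tuples with $x$ as the unknown value (pin names and the window length are made up for the example):

```python
X = 'x'  # the unknown value

def is_known(signal):
    """A signal is a (pin, value, cycle) tuple; value in {0, 1, 'x'}."""
    pin, value, cycle = signal
    return value != X

# control signals must carry known values in every cycle of the window
control = [('mode', 1, n) for n in range(1, 5)]
assert all(is_known(s) for s in control)

# a not-yet-restored gate output is unknown
assert not is_known(('p8', X, 2))
```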

\section{The XSimulator}\label{sec:xsim}
The set of signals which can be restored using the trace and control signals is determined using an XSimulator. Algorithm 1 describes our variation of the XSimulator given in \cite{KoN09}, which we refer to as the {\tt XSim-core} procedure. The input to the algorithm is a set of signals at a single cycle $n$, denoted by $\cS^n_I$, which are assumed to have known values of 0 or 1. For example, $\cS^n_I$ could be a combination of the trace and control signals at cycle $n$.

In our variation of the XSimulator, we also introduce a binary ``{\tt restore-once}'' flag as input. It controls whether restoration of an (unknown) signal should stop as soon as the signal takes a known value ($\neq x$); otherwise, a signal may be restored multiple times. More details about the use of this flag in our algorithm are discussed in Section \ref{sec:algorithm}. The output is the set of signals which can be restored \emph{using} the input signals, denoted by $\cS_R$. Note, these signals may be restored at any clock cycle, which could be the same as, before, or after $n$. However, the XSimulator does not record this cycle and instead uses a `0' for the cycle field of a restored signal.

The procedure starts by marking each pin as not visited, except for the ones corresponding to $\cS^n_I$ which are set to the {\tt restore-once} flag. The signals in $\cS^n_I$ are also added to a queue $Q$. (See lines 1-3.)

\begin{algorithm}[t]
\caption{{\tt XSim-core}($\cS^n_I$, $\&\cS_R$, {\tt restore-once})} \label{alg:core}
\small
\begin{algorithmic}[1]
\STATE $\cS^n_R=\emptyset$;~~$Q=\emptyset$;~~visited$[p]$=false;~~$\forall p$
\FOR{each $s:=(p,v,n)\in \cS^n_I$}
    \STATE Enqueue($Q$, $s$);~~ visited$[p]=$ {\tt restore-once};
\ENDFOR
\WHILE{$Q\neq \emptyset$}
       \STATE $s_i:=(p_i,v_i,0)$
       \STATE $s_i \leftarrow$ Dequeue($Q$)
       \FOR{each $p\in \{FI_{p_i}\cup FO_{p_i}\}$ and !visited$[p]$}
            \STATE $s := (p,v=x,0)$
            \STATE evaluate if $s$ can be restored using $\{\cS^n_I\cup\cS_R\}$
            \IF{$v\neq x$}
                \STATE $\cS_R\leftarrow \cS_R\cup \{s\}$
                \STATE Enqueue($Q$, $s$);~~visited$[p]=$ {\tt restore-once};
           \ENDIF
       \ENDFOR
       \STATE if the values of all the signals remain unchanged set $Q=\emptyset$
\ENDWHILE
\end{algorithmic}
\end{algorithm}

At each step, signal $s_i$ is dequeued from the head of the queue. Then a signal $s:=(p,v=x,0)$, corresponding to a fanin or fanout pin of $s_i$, is considered for restoration. If $p$ has not been visited before, its value is evaluated given the values of the other fanins and fanouts which are known ($\neq x$). (See lines 6-10.) Such fanins and fanouts either belong to $\cS^n_I$ or have been restored in previous steps of the algorithm, in which case they belong to the current set of restored signals $\cS_R$. If $s$ is restored, then $v\neq x$ after the evaluation and $s$ is added to $\cS_R$. Next, visited$[p]$ is set to the value of the {\tt restore-once} flag and $s$ is enqueued. The process terminates when the queue is empty, which happens when all the signals have been dequeued or when there is no change in the values of the signals compared to the previous iteration of the while loop. (See line 16.) The algorithm outputs the latest $\cS_R$ as the set of signals restored using $\cS^n_I$.

If the {\tt restore-once} flag is true,  as soon as a signal is restored, the algorithm stops considering it for further restoration. As a result, fewer signals may be restored but the algorithm terminates much faster. For example in Figure \ref{fig:example}, assume $p_4$ is restored to 1, $p_2$ is not restored, and the {\tt restore-once} flag is true. As a result, $p_{10}$ will not be restored. However if {\tt restore-once} is false, it is possible for $p_4$ to be later restored to 0 (thus enqueued more than once) which allows further restoration of $p_{10}$. Our trace selection procedure utilizes both modes, for quickly finding a subset of restorable signals as well as finding all the restorable signals.
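The forward-and-backward implication at the heart of {\tt XSim-core} can be sketched in Python for a toy netlist of 2-input AND gates. This is only an illustrative sketch: the real procedure handles arbitrary gate types, cycle bookkeeping, and the convergence check of line 16, and the pin names below are made up.

```python
from collections import deque

def infer(gate, vals):
    """Forward/backward implication for one 2-input AND gate.
    gate = (out, (a, b)); returns newly inferred pin -> value."""
    out, (a, b) = gate
    vo, va, vb = vals.get(out, 'x'), vals.get(a, 'x'), vals.get(b, 'x')
    new = {}
    if vo == 'x':                       # forward propagation
        if va == 0 or vb == 0:
            new[out] = 0
        elif va == 1 and vb == 1:
            new[out] = 1
    elif vo == 1:                       # backward: both inputs must be 1
        if va == 'x':
            new[a] = 1
        if vb == 'x':
            new[b] = 1
    elif vo == 0:                       # backward: if one input is 1, the other is 0
        if va == 1 and vb == 'x':
            new[b] = 0
        if vb == 1 and va == 'x':
            new[a] = 0
    return new

def xsim_core(gates, known, restore_once=True):
    """BFS over fanin/fanout pins, in the spirit of Algorithm 1.
    known maps the pins of the input signals (trace + control) to 0/1."""
    touch = {}                           # pin -> gates that contain it
    for g in gates:
        for p in (g[0], *g[1]):
            touch.setdefault(p, []).append(g)
    vals = dict(known)
    restored = {}
    visited = {p: restore_once for p in known}
    q = deque(known)
    while q:
        p = q.popleft()
        for g in touch.get(p, []):
            for pin, v in infer(g, vals).items():
                if visited.get(pin, False):
                    continue             # restore-once: stop reconsidering this pin
                vals[pin] = v
                restored[pin] = v
                visited[pin] = restore_once
                q.append(pin)
    return restored
```

For example, with gates $c=\mathrm{AND}(a,b)$ and $e=\mathrm{AND}(c,d)$, knowing $a=1$ and $e=1$ restores $c$ and $d$ by backward justification, and then $b$ from the first gate.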


\begin{figure}%[hbt]
   \centering
   \includegraphics[width=2.5in]{figs/example-all.eps}\vspace{-2mm}
   \caption{Example for illustration of the notations and metrics}\vspace{-3mm}
   \label{fig:example}
\end{figure}

\subsection{Reachability List Generation}
\begin{algorithm}[t]
\caption{{\tt Reachability-list}($f, v, \cS_C, \&L^v_f$)}
\small
\begin{algorithmic}[1]
\STATE $s:=(p_f,v,0)$ with $v\neq x$
\STATE {\tt XSim-core}($\{s\}\cup\cS_C, \&\cS_R$, {\tt restore-once}=true)
\STATE {\tt XSim-core}($\cS_C, \&\cS_{R_C}$, {\tt restore-once}=true)
\STATE $\cS_R = \cS_R\setminus \cS_{R_C}$
\STATE Return $L^v_f$ as the set of state elements in $\cS_R$
\end{algorithmic}
\end{algorithm}

Given state element $f$, we consider the case \emph{if} it takes a known value of 0 or 1. We denote the reachability list by $L^v_f$ and define it as the set of state elements which can be restored when state element $f$ takes the known value $v\neq x$. Note this definition is not associated with any particular clock cycle. In other words, the reachability list $L^v_f$ identifies those state elements whose values can immediately be restored if state element $f$ takes a known value, without relying on any other (restored) state elements.

For each state element $f$, two reachability lists, for values $v=0$ and $v=1$, are computed. Algorithm 2 shows the procedure. The inputs are the state element $f$, its considered value $v\neq x$, and the set of input control signals denoted by $\cS_C$. (The notation does not associate the control signals with a clock cycle because we assume they remain constant within the observation window.) The output is $L^v_f$.

First, signal $s$ is formed corresponding to state element $f$. (Since no particular cycle is considered, a special value of `0' is used for the clock cycle field of $s$.) In line 2, the {\tt XSim-core} procedure is called with the union of the control signals and $s$ as its input to identify the set of restorable signals $\cS_R$. Since some of the signals in $\cS_R$ may be restorable solely using $\cS_C$, they must be removed so that we identify only the signals which can be restored when $s$ is also used. To remove these signals, in line 3, the {\tt XSim-core} procedure is called again, this time using only $\cS_C$ as input. The restored signals, denoted by $\cS_{R_C}$, are then removed from $\cS_R$ (line 4). The output $L^v_f$ is the set of state elements corresponding to the signals in $\cS_R$.

Note, when calling the {\tt XSim-core} procedure, the {\tt restore-once} flag is set to true. This ensures quick computation of the reachability list within our trace selection algorithm.
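The structure of Algorithm 2 reduces to two restoration calls and a set difference. A minimal Python sketch, where `xsim` stands in for an {\tt XSim-core} call that maps a dict of known signals to the set of restorable state elements (the stub used in the usage below is purely illustrative):

```python
def reachability_list(f, v, controls, xsim):
    """Sketch of Algorithm 2: restore with controls plus f=v, restore
    with controls alone, and keep only what the assumption f=v added."""
    with_f = xsim({**controls, f: v})   # line 2: controls union {s}
    controls_only = xsim(controls)      # line 3: controls alone
    return with_f - controls_only       # line 4: set difference
```

Calling this for every $f$ and both values of $v$ yields the lists $L^v_f$; the islands defined below are exactly the $f$ with both lists empty.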

For example, in Figure \ref{fig:example} we have $L^0_1=\{f_2,f_5\}$, $L^1_1=\{f_2,f_3\}$, $L^0_2=\{f_1, f_5\}$, $L^1_2=\{f_1, f_3\}$, $L^0_3=\{f_1, f_2, f_4, f_5\}$, $L^1_3=\emptyset$, $L^0_4=\{f_5, f_3\}$, $L^1_4=\emptyset$, $L^0_5=\emptyset$, $L^1_5=\{f_2, f_3, f_4\}$.

It is possible that some state elements have an empty reachability list for both values of $v$ which we refer to as ``island'' state elements. More precisely, $f$ is an island state element if $L^0_f=L^1_f=\emptyset$.

\subsection{SRR Measurement}

A common measure for the quality of trace selection is the {\bf State Restoration Ratio (SRR)}, computed within an observation window of $M$ clock cycles. SRR is computed using Algorithm 1 as follows. The input set $\cS^n_I$ consists of the trace signals observed in a window of $M$ clock cycles together with the control signals, and the {\tt restore-once} flag is set to false. The {\tt XSim-core} procedure is then invoked $M$ times, for $n=1,\ldots,M$. After finding $\cS_R$ for each cycle, we have $SRR= \left(B\times M + \sum_{n=1}^M k_n\right)/(B\times M)$, where $B$ is the trace buffer bandwidth and $k_n$ is the number of restored signals corresponding to state elements using $\cS^n_I$; that is, SRR is the ratio of traced-plus-restored state values to traced values. For an example of computing SRR, please refer to related work such as \cite{ChatterjeeMB11}.
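The final ratio itself is a one-liner; a minimal sketch using the traced-plus-restored over traced formulation, with `k` holding the per-cycle restored counts $k_n$ (a hypothetical helper, not the full flow):

```python
def srr(B, M, k):
    """State Restoration Ratio over an M-cycle window for a trace
    buffer of bandwidth B; k[i] is the number of restored state
    values in cycle i+1."""
    return (B * M + sum(k)) / (B * M)
```

For instance, with $B=8$ and $M=4$, restoring 8 additional state values in every cycle doubles the ratio.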

\section{Metrics for Trace Selection}
\begin{figure}[t]
   \centering
   \includegraphics[width=3.0in]{figs/overview.eps}
   \caption{Overview of our trace selection algorithm}
   \label{fig:overview}
\end{figure}

Figure \ref{fig:overview} shows an overview of our trace selection algorithm. The algorithm is driven by computing and continuously updating an Impact Weight, which is defined based on a new metric introduced in this work: the ``restoration demand''. This metric reflects the remaining demand of a state element $i$ to be restored by another state element $f$ when $f$ takes a known value of 0 or 1, given that $i$ may already be partially restored by the set of already-selected trace signals. Computation of this metric is quite fast using the {\tt XSim-core} procedure. After a pre-processing step to compute the initial restoration demands and weights, at each step one or more new trace signals are selected, followed by updating the restoration demands and weights for the next step, until all $B$ trace signals are selected.

The procedure for selecting the next trace signal involves two methods. Method (i) uses the restoration demands and is applied at each step. However, it does not favor a small class of state elements, referred to as ``islands'' in this work, which are poor at restoring other state elements when few or no other trace signals are present. Therefore, every few steps (e.g., every 8 selected signals), method (ii) is used to consider adding an island signal. (See Figure \ref{fig:overview}.)

We first define the restoration demand of each state element and the island state elements before discussing the steps of the algorithm.

\subsection{Restorability Rate}
\label{sec:r}
\begin{algorithm}[t]
\caption{{\tt Restorability-FFs}($F, \cS_C, \cS^1_T,
\ldots, \cS^M_T, \&r_f~_{\forall f\in F}$)}
\small
\begin{algorithmic}[1]
\STATE $r_f=0~~\forall f\in F$
\STATE {\tt XSim-core}($\cS_C, \&\cS_{R_C}$, {\tt restore-once}=false)
\FOR{$n=1$ to $M$}
    \STATE {\tt XSim-core}($\cS_{R_C}\cup \cS^n_T, \&\cS_R$, {\tt restore-once}=false)
    \FOR{each $f \in F$}
            \STATE $r_f=r_f+1$ if $f$ is a state element in $\cS_R$
    \ENDFOR
\ENDFOR
\STATE return $r_f=\frac{r_f}{M}~\forall f\in F$
\end{algorithmic}
\end{algorithm}

The ``restorability rate'' metric reflects the probability that a single state element $f$ can be restored using the trace signals selected so far. The probability is computed within an observation window of $M$ clock cycles by repeatedly calling the {\tt XSim-core} procedure. While in practice the observation window, corresponding to the trace buffer depth, is 1K to 8K cycles, it has been shown in \cite{ChatterjeeMB11} that a much smaller observation window (e.g., $M=64$ cycles) provides sufficient accuracy for evaluations within a simulation-based procedure. Similarly, we use $M=64$.

Algorithm 3 shows the details of computing the restorability rate for all the state elements. The inputs are the set of state elements not yet selected (denoted by $F$), the input control signals (denoted by $\cS_C$), and the trace signals selected so far within an observation window of $M$ cycles (denoted by $\cS_T^1,\ldots,\cS_T^M$). Recall that $s=(p,v,n)\in \cS_T^n$ has $v\neq x$ and $p\in \cP_F$, indicating that $s$ corresponds to a state element traced in cycle $n$. The output is the restorability rate of each state element $f$, denoted by $r_f$. One call to the algorithm is sufficient to compute $r_f$ for \emph{all} $f\in F$.

According to Algorithm 3, initially $r_f$ is set to 0 for all $f\in F$. Next, the set of control signals $\cS_C$ is used to identify a set of restored signals, denoted by $\cS_{R_C}$, using the {\tt XSim-core} procedure. (See line 2.) Then, the {\tt XSim-core} procedure is used to identify the signals which can be restored at each cycle $n$, for $n=1,2,\ldots,M$.

Note in line 4, the input to {\tt XSim-core} is $\cS_T^n\cup\cS_{R_C}$, which accounts for the impact of the control signals in addition to the trace signals in that cycle. The restored set is denoted by $\cS_R$. If a state element $f\in F$ can be restored, then its corresponding signal is in $\cS_R$ and $r_f$ is incremented by 1. In the end, $r_f$ is returned as a probability by dividing it by $M$ for each $f\in F$.

The input trace signals $\cS_T^n$ for $n=1,2,\ldots,M$, for any call to this function, are computed using a one-time simulation performed in the initial pre-processing step of the flow shown in Figure \ref{fig:overview}. Specifically, the circuit is simulated for 10K cycles and the values of all state elements in each cycle are stored during initialization. At each call of Algorithm 3, the values of the corresponding trace signals are looked up from the stored patterns for the $M$ cycles. Given the small size of the observation window (i.e., $M=64$), we randomly select 3 non-overlapping observation windows with different starting cycles from the 10K simulated cycles. For each window, Algorithm 3 is called once, and the final value of $r_f$ is computed as the average of the three results. (These 3 runs of Algorithm 3 are implemented as 3 threads running on a multi-core machine.) This idea of averaging over multiple computations was also used in \cite{ChatterjeeMB11}, but for computing the SRR when $M$ is small. In addition, the calls to the {\tt XSim-core} procedure are made with the {\tt restore-once} flag set to false. Recall that with this flag set to false, all possible restorable signals are identified, which allows exact evaluation of whether $f$ can be restored by the current trace signals.
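The outer loop of Algorithm 3 can be sketched as follows, with `xsim_per_cycle(n)` standing in for the {\tt XSim-core} call on $\cS_{R_C}\cup\cS^n_T$ (the stub in the test is illustrative):

```python
def restorability_rates(F, xsim_per_cycle, M):
    """Sketch of Algorithm 3: the fraction of the M cycles in which
    each state element in F appears in the restored set."""
    r = {f: 0 for f in F}
    for n in range(1, M + 1):
        restored = xsim_per_cycle(n)    # stands in for an XSim-core call
        for f in F:
            if f in restored:
                r[f] += 1
    return {f: r[f] / M for f in F}     # counts become probabilities
```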

\subsection{Restoration Demand and Impact Weight}
If state element $i$ is not fully restored (i.e., $r_i<1$), we would like to quantify its demand to be restored by another state element $f$. We define $d_{i,f}^v$ as the demand of $i$ to get fully restored by $f$ when $f$ takes the known value $v$.

In practice, the state elements $f$ that can restore a given state element $i$ fall within a small subset of the entire set of state elements, and considering every state element as a candidate for $f$ results in many unnecessary and time-consuming computations. Therefore, we limit $f$ to the state elements which include $i$ in a reachability list for either value (i.e., $i\in L^v_f$ for $v\in\{0,1\}$). We approximate the demand $d_{i,f}^v$ as follows.
\begin{equation}
d_{i,f}^v\approx\min(1-r_i,\; a_f^v),~~\forall i\in L^v_f,~v\in\{0,1\}
\label{eq:d}
\end{equation}
where $a_f^v$ is the probability that state element $f$ takes value $v$. The probability $a_f^v$ is computed accurately in the initialization step using circuit simulation for a suitable number of clock cycles (e.g., 10K in this work, with random values for non-control input vectors). In Equation \ref{eq:d}, the quantity $1-r_i$ reflects the remaining restoration demand of $i$. If it is larger than $a_f^v$, the demand is given by $a_f^v$, the likelihood that $f$ takes value $v$. Equation \ref{eq:d} is an upper-bound approximation which can be computed quickly; exact computation of the demands would require many time-consuming simulations and would be impractical for a fast and scalable algorithm.

The Impact Weight of a state element $f$ captures the amount of restoration achieved if $f$ is selected as the next trace signal. A key point in computing this weight is accounting for the remaining restoration of the unrestored state elements, which is given by the restoration demand metric. Specifically, the Impact Weight of state element $f$ is defined as follows.
\begin{equation}
w_f=\sum_{v\in\{0,1\}}\;\sum_{i\in L_f^v}{d_{i,f}^{v}}
\label{eq:w}
\end{equation}
In the above equation, the demands of the state elements in the reachability lists of $f$, for values 0 and 1, are added. A higher Impact Weight for a state element $f$ indicates that more state elements can be restored if $f$ is selected as the next trace signal, while accounting for the restoration already provided by the already-selected trace signals.

As an example, in Figure \ref{fig:example}, the Impact Weight of flipflop $f_2$ is given by $w_2=d_{1,2}^0+d_{1,2}^1+d_{3,2}^1+d_{5,2}^0$. At the beginning, when no trace signal is selected, the restorability rates of all the flipflops are 0, so the demand is $d_{j,2}^v=a_j^v$ according to Equation \ref{eq:d}. Assuming the two primary inputs of this circuit are independent and each is 0 or 1 with probability 0.5, we obtain the probabilities $a^0_1=a^1_1=0.5$, $a^1_3=0.5$, $a^0_5=0.75$, and thus $w_2=2.25$.
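Equations \ref{eq:d} and \ref{eq:w} are straightforward to code. The sketch below reproduces the worked example's weight $w_2=2.25$, using the probability assignments and index conventions listed in the example above (all names are illustrative):

```python
def demand(r_i, a):
    """Equation (1): remaining demand 1 - r_i, capped by the
    probability term a."""
    return min(1.0 - r_i, a)

def impact_weight(f, L, d):
    """Equation (2): sum the demands over both reachability lists of f.
    L maps (f, v) to a list of state elements; d maps (i, f, v) to a demand."""
    return sum(d[(i, f, v)] for v in (0, 1) for i in L[(f, v)])

# reachability lists of f_2 from the figure's example
L = {(2, 0): [1, 5], (2, 1): [1, 3]}
```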

\section{The Trace Selection Process}
We now discuss the details of different steps of our algorithm. These are shown in bold in Figure \ref{fig:overview}.

\subsection{Basic Procedure}
In this step, first, the circuit is simulated for 10K cycles using random values for non-control primary input vectors. The simulation results are used to compute the probability $a_f^v$ for each state element. As mentioned before, they are also used to provide the trace signals which are fed as inputs to Algorithm 3 to compute $r_f$ for each state element $f$. Next, the demands and the Impact Weights are computed, similar to the given example.

\label{sec:method1}
At each step of the algorithm, method (i) is first used to identify the next trace signal. In general, a state element with a higher Impact Weight is a better candidate for the next trace signal. However, simply selecting the state element with the maximum weight may not be the best choice: based on our observations, other state elements with slightly smaller weight values may result in a higher state restoration ratio (SRR). Therefore we evaluate the top $k\%$ of the state elements with the highest Impact Weights. To select the next trace signal from this subset, we consider adding each candidate to the current set of selected trace signals and directly measure the SRR for an observation window of $M=64$ cycles. (See Section \ref{sec:xsim} for computation of SRR.) The next trace signal is the one which yields the maximum SRR.

Since computing SRR involves X-Simulation, the parameter $k$ should be set to a small value to keep the runtime of the algorithm feasible. In our implementation of method (i), we identify the top 5\% of the state elements with the highest weights. We observed that this value is large enough to identify the state elements with high weight values, yet small enough to ensure a negligible runtime overhead.
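Method (i) can be sketched as follows, with `srr_of` standing in for a 64-cycle SRR measurement of a candidate added to the current trace set (the function name and `k_percent` parameter are illustrative):

```python
def select_next(weights, srr_of, k_percent=5):
    """Method (i) sketch: rank candidates by Impact Weight, then
    measure SRR only for the top k% and pick the best of those."""
    ranked = sorted(weights, key=weights.get, reverse=True)
    top = ranked[:max(1, len(ranked) * k_percent // 100)]
    return max(top, key=srr_of)
```

Note that a candidate outside the top $k\%$ is never SRR-evaluated, which is exactly what keeps the per-step cost low.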

We further discuss the complexity of the steps for computing and updating the weights. First, updating the demands and the weight for one state element can be done in constant time once the restorability rates ($r_f$) are updated, as can be observed from Equation \ref{eq:d}. For the weight given by Equation \ref{eq:w}, in practice we observe constant computational complexity because each state element is contained in the reachability lists of only a few state elements, far fewer than the total number of state elements in the circuit. The computational complexity is dominated by updating the $r_f$ values; however, this only requires calling the {\tt XSim-core} procedure $M=64$ times in Algorithm 3 for all the untraced state elements.

\subsection{Dealing with `Island Flipflops'}
An island state element has empty reachability lists for both values 0 and 1. This means that, on its own, an island state element cannot restore any other state element. Therefore, as shown in Figure \ref{fig:overview}, after every 8 selected trace signals we consider adding an island signal. For example, for a typical trace buffer bandwidth of 64 bits, adding an island is considered seven times throughout the course of the algorithm.

Specifically, to select an island signal, we add each island signal individually to the current list of selected trace signals and measure the SRR for an observation window of 64 cycles. We can afford this because the number of islands is typically very small, so the runtime overhead of computing these SRRs is not significant. Once the SRRs are computed, the island with the maximum SRR is identified, and if its SRR is higher than a threshold, the island is added to the set of trace signals. In that case, two trace signals are added to the set within one step of the algorithm. If no island is selected, adding an island is postponed until eight additional trace signals have been identified.
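A sketch of the island step, with `srr_with` standing in for the 64-cycle SRR measurement including one island (the function name and threshold are illustrative; the source does not specify the threshold value):

```python
def try_add_island(islands, srr_with, threshold):
    """Island step sketch: measure the SRR with each island added
    individually; keep the best island only if its SRR clears the
    threshold, otherwise postpone the island decision."""
    if not islands:
        return None
    best = max(islands, key=srr_with)
    return best if srr_with(best) > threshold else None
```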

\subsection{Updating the Weights}
The Impact Weights must remain consistent with the most recent set of trace signals. Therefore, at each iteration of the loop in Figure \ref{fig:overview}, these weights are updated. Specifically, the core metric to update is the restorability rate of each state element, which is used to compute the demand in Equation \ref{eq:d} and the weight in Equation \ref{eq:w}. To update the $r_f$ values, Algorithm 3 is called with the new set of trace signals as input, as explained in Section \ref{sec:r}. Note, the reachability lists do not change after the initialization step.

\section{Simulation Results}
\begin{sidewaystable}[htbp]
  \centering
    \scriptsize\tabcolsep=9pt
  \caption{Comparison of State Restoration Ratio (SRR) of different algorithms}\vspace{1mm}
    \begin{tabular}{c|ccc|ccc|ccc|ccc}
    \toprule
    & \multicolumn{3}{c|}{METR: Metric-based \cite{ShojaeiD10}} &   \multicolumn{3}{c|}{SIM: Simulation-based \cite{ChatterjeeMB11}} &   \multicolumn{3}{c|}{HYBR-NOSIM: Hybrid w/o} &   \multicolumn{3}{c}{HYBR: Hybrid with} \\
    & \multicolumn{3}{c|}{} &   \multicolumn{3}{c|}{} &   \multicolumn{3}{c|}{simulation for top candidates} &   \multicolumn{3}{c}{simulation for top candidates} \\
    \midrule
    & \multicolumn{3}{c|}{Trace Size} & \multicolumn{3}{c|}{Trace Size} & \multicolumn{3}{c|}{Trace Size} & \multicolumn{3}{c}{Trace Size}\\
    Benchmark & 8     & 16    & 32    & 8     & 16    & 32     &8     & 16    & 32    & 8    & 16    & 32\\
    \midrule
    S5378  & 13.7   & 8.1   & 4.1   & 12.8   & 7.1    & 4.4    & 13.4  &7.9  &4    & 13.6 (-0.7\%)       & 8.0 (-1.2\%)        & 4.2 (-4.5\%)       \\
    S9234  & 8.4    & 5.8   & 3.4   & 9.1    & 6.6    & 3.6    & 9.4   &6.1  &3.3  & 9.8  \bf{(+4.3\%)}  & 6.8 \bf{(+3.0\%)}   & 3.6 \bf{(+0.00\%)} \\
    S13207 & 13.8   & 6.8   & 3.5   & 19.3   & 12.2   & 7.8    & 22.2  &14.6 &8.0  & 24.5 \bf{(+10.4\%)} & 16.3 \bf{(+11.6\%)} & 8.9 \bf{(+11.3\%)} \\
    S15850 & 14.4   & 7.6   & 4.1   & 14.5   & 7.8    & 4.1    & 15.0  &7.8  &4.0  & 15.6 \bf{(+4.0\%)}  & 8.1 \bf{(+3.8\%)}   & 4.1 \bf{(+0.00\%)} \\
    S35932 & 31.1   & 19.4  & 11.6  & 58.1   & 36.2   & 23.1   & 31.6  &18.9 &11.3 & 61.4 \bf{(+5.7\%)}  & 38.3 \bf{(+5.8\%)}  & 23.4 \bf{(+1.3\%)} \\
    S38417 & 17.6   & 13.1  & 9.7   & 29.4   & 17.8   & 20.0   & 18.1  &10.3 &5.9  & 51.3 \bf{(+74.5\%)} & 30.1 \bf{(+12.9\%)} & 17.5 (-12.5\%)     \\
    S38584 & 13.5   & 10.8  & 7.1   & 14.9   & 18.1   & 16.4   & 18.3  &14.8 &10.7 & 24.0 \bf{(+31.1\%)} & 18.5 \bf{(+2.2\%)}  & 17.5 \bf{(+6.7\%)} \\
    % Average& 16.10  & 10.23 & 6.21  & 22.59  & 15.11  & 11.34 & 28.60  & 18.01 & 11.31\\
    \bottomrule
    \end{tabular}
  \label{tab:res}\vspace{-5mm}
\end{sidewaystable}

\subsection{Simulation Setup}
Our trace selection algorithm, referred to in short as HYBR in our experiments, was implemented in C++. It was tested on the ISCAS89 benchmarks, synthesized using Synopsys Design Compiler with a 90nm TSMC library, for trace buffers of various bandwidths. The number of flipflops in each benchmark is reported in column 2 of Table \ref{tab:runtime}.

{\bf To measure the solution quality}, the State Restoration Ratio (SRR) with an observation window of 4096 cycles is used, with random values for non-control primary inputs. Note, this observation window is the size that can typically be captured by a trace buffer and is also assumed in all previous works. The procedure to calculate SRR was explained in Section \ref{sec:xsim}.

Furthermore, two of the benchmarks ({\tt S35932} and {\tt S38584}) have control signals as primary inputs. The names and values of these control signals are as follows. For {\tt S38584} we identify `g35' as an active-low global reset, so it was set to 1, as also pointed out in \cite{KoN09}. For {\tt S35932}, the active-low global reset signal `RESET' was set to 1. Moreover, two control input signals `TM0' and `TM1' define four working modes in this benchmark. Therefore, we ran this benchmark four times, once for each working mode, and measured four separate SRR values. We report the average of these four SRR values for {\tt S35932} in our experiments.

We also make {\bf comparisons with other trace selection algorithms}. Due to the technology library used to synthesize the benchmarks in our simulations, directly comparing against the SRR values reported in related publications would be inaccurate. Therefore, we also implemented the following two trace selection algorithms for comparison: 1) METR: metric-based \cite{ShojaeiD10}\footnote{For fair comparison, no critical state elements were specified in \cite{ShojaeiD10}.}, and 2) SIM: simulation-based \cite{ChatterjeeMB11}. METR uses a metric to \emph{approximate} the SRR, while SIM directly uses simulation to accurately compute the SRR throughout the course of the algorithm. As a result, METR is typically much faster than SIM, but SIM has been shown to yield much higher solution quality. We use METR mainly as a reference for runtime and SIM mainly as a reference for solution quality. Among the metric-based algorithms, we select \cite{ShojaeiD10} due to its fast execution time, which is similar to \cite{LiuX12} but faster than \cite{KoN09} and \cite{BasuM11}. Both \cite{ShojaeiD10} and \cite{ChatterjeeMB11} use the XSimulator in their internal procedures, and for fair comparison we use the same procedure (i.e., {\tt XSim-core} given by Algorithm 1), which provides the most efficient implementation. Our implementation of {\tt XSim-core} also exploits the bitwise parallelism for state restoration given in \cite{KoN09}. When implementing \cite{ChatterjeeMB11}, custom parameter selection was done the same way as reported in \cite{ChatterjeeMB11}. All simulations ran on an Intel quad-core 3.4GHz machine with 12GB of memory.

\subsection{Runtime Comparison}

\begin{table*}[htbp]
  \centering
  \scriptsize\tabcolsep=9pt
  \caption{Runtime comparison of different algorithms}\vspace{1mm}
    \begin{tabular}{c|c|ccc|ccc|ccc}
    \toprule
     & & \multicolumn{3}{c|}{METR: Metric-based  \cite{ShojaeiD10}} & \multicolumn{3}{c|}{SIM: Simulation-based \cite{ChatterjeeMB11}} & \multicolumn{3}{c}{HYBR: Hybrid} \\
     & & \multicolumn{3}{c|}{} & \multicolumn{3}{c|}{(implemented on a quad-core machine)} & \multicolumn{3}{c}{} \\
    \midrule
     & \#FFs & \multicolumn{3}{c|}{Trace Size} & \multicolumn{3}{c|}{Trace Size} & \multicolumn{3}{c}{Trace Size} \\
    Benchmark   &   & 8     & 16    & 32    & 8     & 16    & 32    & 8     & 16    & 32 \\
    \midrule
    S5378  &163  &  8     & 27    & 66    & 00:06:50   & 00:06:40  & 00:05:30   & 5     & 27    & 28    \\
    S9234  &145  &  6     & 17    & 38    & 00:07:28   & 00:06:05  & 00:04:10   & 26    & 84    & 86    \\
    S13207 &327  &  48    & 117   & 254   & 00:48:12   & 00:46:42  & 00:41:3    & 68    & 163   & 166   \\
    S15850 &137  &  7     & 18    & 37    & 00:02:34   & 00:02:03  & 00:00:40   & 83    & 193   & 197   \\
    S35932 &1728 &  73    & 167   & 408   & 07:13:00   & 07:12:00  & 07:11:00   & 139   & 208   & 217   \\
    S38417 &1564 &  3690  & 7620  & 13428 & 50:05:00   & 50:04:00  & 50:02:00   & 434   & 2508  & 2521  \\
    S38584 &1166 & 53     & 140   & 354   & 16:33:00   & 16:32:00  & 16:31:00   & 167   & 741   & 752   \\
    \bottomrule
    \end{tabular}
  \label{tab:runtime}\vspace{-5mm}
\end{table*}

Table \ref{tab:runtime} compares the runtime of our HYBR algorithm with the SIM and METR algorithms for three buffer bandwidths of 8, 16, and 32 bits and a buffer depth of 4K. The reported runtimes are in seconds, except for our implementation of \cite{ChatterjeeMB11}, where the format is (hour:minute:second). We note that \cite{ChatterjeeMB11} describes a GPU-based implementation of SIM which exploits a high degree of parallelism, whereas our implementation of \cite{ChatterjeeMB11} ran on a quad-core CPU using multi-threading, with only up to 8 parallel threads in our setup. Therefore, our reported numbers for SIM are higher than those in \cite{ChatterjeeMB11}. Nevertheless, they provide a reference to highlight the significant speedups obtained by HYBR.

As can be seen, HYBR has a runtime comparable to METR and is tremendously faster than SIM. Moreover, metric-based algorithms have already been shown to be much faster than simulation-based procedures, even when a GPU-based implementation is used, as reported in \cite{ChatterjeeMB11}. Hence we expect HYBR to remain much faster even against a GPU-based implementation of SIM.

To analyze the fast runtime of HYBR and compare it with SIM, we compare the number of calls to the {\tt XSim-core} procedure in the two algorithms. This procedure is called repeatedly and is the most time-consuming step in both cases. In HYBR, at each step, computing the restorability rates of \emph{all} the state elements ($r_f~\forall f\in F$) using Algorithm 3 requires a total of $M=64$ calls to the {\tt XSim-core} procedure. Furthermore, at each step of HYBR, SRR computation is used for the top 5\% of state elements with the highest Impact Weights, and each SRR computation requires 64 calls to the {\tt XSim-core} procedure. Therefore, the number of calls to {\tt XSim-core} at each step of HYBR is dominated by the SRR computations, which number at most 5\% of the state elements, a small number. In contrast, in SIM, an SRR is computed at each step for each untraced state element, so the number of SRR computations is significantly larger.
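This call-count argument can be made concrete with a back-of-the-envelope sketch, treating one Algorithm 3 pass and each SRR measurement as $M=64$ {\tt XSim-core} calls (per-step counts only; the 5\% fraction follows the text):

```python
def calls_per_step_hybr(num_ffs, M=64, k=0.05):
    # one Algorithm 3 pass (M calls) plus one M-call SRR
    # measurement per top-k% candidate
    return M + int(k * num_ffs) * M

def calls_per_step_sim(num_ffs, M=64):
    # SIM measures an SRR (M calls) for every untraced state element
    return num_ffs * M
```

For {\tt S38584} with 1166 state elements, this estimate gives roughly 3,776 calls per step for HYBR versus 74,624 for SIM.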

While each step of HYBR is significantly faster than a step of SIM, another reason for the runtime gap is that SIM requires many more steps than HYBR. SIM eliminates the least promising state element one at a time, and the number of state elements is typically much larger than the number of trace signals. For example, {\tt S38584} has 1166 state elements, far more than the number of trace signals.
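Combining the per-step cost with the step counts gives a rough total-work comparison. The sketch below is purely illustrative, assuming the figures quoted in the text (1166 state elements for {\tt S38584}, a 32-bit bandwidth, 64 {\tt XSim-core} calls per SRR batch, a 5\% candidate pool) and that SIM re-evaluates every remaining untraced element at each elimination step; the variable names are ours.

```python
# Illustrative total XSim-core call counts, not measured runtimes.
F, B, BATCH, TOP = 1166, 32, 64, 0.05  # S38584, 32-bit bandwidth

# HYBR: B forward-selection steps; each step runs one batch for all
# Impact Weights plus one SRR batch per top-5% candidate.
hybr_calls = B * (BATCH + int(TOP * F) * BATCH)

# SIM: F - B backward-elimination steps; step k re-evaluates the
# SRR of every one of the F - k remaining untraced elements.
sim_calls = sum((F - k) * BATCH for k in range(F - B))

print(hybr_calls)  # 32 * 3776 = 120832
print(sim_calls)
```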

\subsection{SRR Comparison}
For this experiment, we report results for a new variation of HYBR. Specifically, when selecting the next trace signal using Method (i), this variation skips the SRR-based evaluation of the top 5\% of state elements with the highest Impact Weights and instead directly selects the state element with the maximum Impact Weight, in order to measure the effectiveness of the Impact Weight metric on its own. Since the eliminated SRR-evaluation step involves simulation, we refer to this variation as HYBR-NOSIM.

Table \ref{tab:res} compares the solution quality (i.e., SRR) of the algorithms. (The SRR values are computed over an observation window of $M=$ 4K cycles, corresponding to the buffer depth.) For HYBR, we also report the percentage improvement in SRR for each benchmark, measured relative to the higher of the METR and SIM SRRs. As can be seen, HYBR achieves a significantly higher solution quality than METR on all benchmarks except the smallest one ({\tt S5378}), for which all algorithms perform quite similarly.

Furthermore, HYBR consistently achieves higher solution quality than SIM for small buffer bandwidths (i.e., 8 and 16 bits). For example, on benchmark {\tt S38417} with a buffer bandwidth of 8 bits, the SRR of HYBR is 51.3 while that of SIM is 29.4. For a bandwidth of 32 bits, the two algorithms have quite similar SRRs. The main reason HYBR performs better at smaller bandwidths is that it selects the most promising state element at each step, whereas \cite{ChatterjeeMB11} eliminates the least promising one. Consequently, the error associated with the greedy backward elimination in \cite{ChatterjeeMB11} grows as the buffer bandwidth decreases, while the error associated with the greedy forward selection in HYBR grows as the bandwidth increases.

Comparing HYBR and HYBR-NOSIM, we observe that eliminating the simulation of the top trace candidates significantly degrades solution quality. For the first four benchmarks, HYBR-NOSIM often maintains a better solution quality than SIM; however, SIM outperforms HYBR-NOSIM on the remaining benchmarks. Examining the first four benchmarks further, we found that their initially-generated reachability lists allow a more effective computation of the Impact Weights. (Equation \ref{eq:w} shows the direct dependency on the reachability lists.) This may be due to their smaller sizes or circuit topologies.

We conclude that simulation-based measurement of SRR evaluates the \emph{top} candidates of our algorithm more accurately than the Impact Weights alone. However, this SRR-based evaluation is performed only for a small percentage of the state elements, which are quickly identified using the Impact Weight metric. (In contrast, \cite{ChatterjeeMB11} applies SRR-based evaluation to all candidate state elements.) We thus conclude that our Impact Weight metric effectively and quickly identifies the top candidates, which translates directly into a significant reduction in the number of SRR-based evaluations and in the portion of the runtime spent on simulation, while maintaining high solution quality.

\section{Conclusion}