\chapter{Post-silicon Validation using Trace Buffer: Preliminaries}
In this chapter, we first review the hardware infrastructure that uses a
trace buffer for post-silicon validation. This infrastructure allows
tracing the values of a few selected state elements (also referred to as
trace signals) within a fixed time window. We then introduce the process
of restoring the values of all the state elements in the design from the
values of the trace signals within that time window. Next, we briefly
describe the X-Simulation process that underlies restoration, along with
different implementation strategies to speed it up. Finally, we discuss a
standard metric for measuring the quality of restoration corresponding to
a set of selected trace signals, which we use to define the trace signal
selection problem. We conclude the chapter by categorizing the existing
works on the trace signal selection problem.
 
\section{Overview of Trace Buffer-based PSV Infrastructure}\label{sec:ELA}
%\todo[inline, size=\large, color=green]{ELA}
In the 1990s, external logic analyzers were widely used by industry due to
their capability to let engineers monitor signals inside the chip by
directly connecting them through the I/O pins to the probes of the
analyzers \cite{Horak90, Hammond90}. However, with technology scaling and
the increase in chip operating frequencies, external logic analyzers could
no longer process data fast enough to keep up with the data transmission
rate of the Circuit-under-Debug (CUD). Their use was further limited by
the fact that I/O pins are scarce and only a small number of them can be
devoted to post-silicon validation (PSV) purposes.

As an alternative, the Embedded Logic Analyzer (ELA) has been adopted by
industry \cite{CortiKMPSW04, MacNameeH00, BeenstraRH01, Altera09,
CoreSight} as a complement to scan-chain-based PSV techniques. An ELA is
embedded on-chip to reduce the cost of communicating with the outside of
the chip, while making use of vacant on-chip resources to coarsely process
the data before it is transferred off-chip. Due to the limited on-chip
processing ability of the ELA, some data still needs to be sent off-chip
for analysis. However, the amount of data transferred off-chip is
significantly reduced since most of it has already been coarsely
processed.

\begin{figure}[t] \centering
  \includegraphics[width=4.5in]{figs/ELA.eps}
   \caption{Block diagram of a typical embedded logic analyzer}
   \label{fig:ELA}
\end{figure}

Figure \ref{fig:ELA} shows the basic components of an ELA, which is
usually composed of a sampling unit, a trigger unit, an offload unit, and
a central control unit.

{\bf The sampling unit} is programmed before the debugging process to
specify which subset of signals will be traced. During online operation,
the values of the trace signals are stored into the on-chip trace buffer
inside the sampling unit, at the same frequency as the operating frequency
of the CUD. The signals to trace are pre-decided using automated trace
signal selection algorithms, which are introduced at the end of this
chapter. Note that in a typical trace buffer infrastructure such as the
one in Figure \ref{fig:ELA}, the trace signals must be determined
beforehand and cannot be altered during the debugging process.

{\bf The trigger unit} records the triggering condition indicating when the
process of logging the trace signals should begin. The triggering condition
is pre-programmed before the debugging process begins, and multiple
conditions can be stored to allow multiple invocations of the sampling
unit.

{\bf The control unit} is in charge of monitoring and communicating with
the different units for online data collection. When a trigger condition
is met, the control unit is notified by the trigger unit and in turn
notifies the sampling unit to start the sampling process. Once the trace
buffer is full, the sampling unit informs the control unit, which then
notifies {\bf the offload unit} to transfer the data off-chip through the
I/Os to either a processor or other analysis equipment. Compression is
usually necessary to increase the effective bandwidth of the trace signals
before they are sent out for analysis
\cite{AnisN07,YuanLX12,BasuMVTS11,PrabhakarSH11}. ELAs also allow
real-time checks that verify the correctness of the CUD at certain
check-points by sending the trace data directly to on-chip assertion
checkers \cite{BouleCZ06,TongBZ10,FosterKL04}. Data that are verified to
be correct by the assertion checkers do not need to be sent off-chip, so
the amount of data sent for off-chip analysis is reduced.

{\bf The trace buffer} is the major storage unit inside the sampling unit
of an ELA. It is essentially composed of a portion of the on-chip memory
together with an additional interconnection network between the memory and
the different state elements. Internally, the values of the traced state
elements are directed through the interconnection network to the on-chip
memory for storage. The design of the interconnection network should
exploit the vacant spaces on the chip and disturb the electrical
characteristics of the circuit as little as possible. Due to the limited
on-chip area, not many state elements can be traced; otherwise, closely
located wires used for the interconnection between state elements and the
memory may introduce noise and further lead to electrical bugs. The
limited on-chip memory is another factor that constrains the number of
state elements that can be traced: during online operation, a large
portion of the memory needs to be reserved to avoid unnecessary disk
swaps, which may change the behavior of the CUD relative to running under
real conditions without the interference of the trace buffer. Given these
limitations, the state elements to trace need to be carefully selected to
enhance the visibility inside the chip as much as possible.

After the values of trace signals are collected and sent off-chip, the
state restoration process is applied to make use of these data to reproduce
the values of the remaining state elements as if they were collected during
online operation.

\section{Overview of the State Restoration Process} \label{sec:restoration}
State restoration is the process of restoring the values of the state
elements that are not traced, using the collected values of the trace
signals. It is an important step for increasing the visibility inside the
chip before other debugging techniques for detection and root-cause
analysis are applied. We start by introducing the necessary notations, and
then discuss different strategies to accelerate the restoration process.

\subsection{Notations} 
Given a sequential circuit, we denote a signal $s:=(p,v,n)$ when pin $p$
takes value $v$ at clock cycle $n$. Here the value $v$ could be `$0$',
`$1$', or `$X$' if the value is unknown. Let pin $p$ designate the index of
an output pin of either a combinational gate or a state element. It may
also designate a primary input which may further be a control signal,
typically to select an operation mode of the circuit. We denote the subset
of pins for state elements, gates, and control signals by $\cP_F$, $\cP_G$,
and $\cP_C$, respectively. The number of signals that can be selected is
usually referred to as the \emph{trace buffer width}, and the number of
cycles for which each selected signal is traced as the \emph{trace buffer
depth}. The width and depth form a two-dimensional view referred to as an
``observation window''. In practice, the trace buffer width and depth are
determined by the free on-chip white-space available for adding the
interconnection network and by the available memory, i.e., the portion
left over beyond the memory used by the CUD itself. For example, for the
ISCAS'89 benchmarks, the trace buffer width is at most 32 bits and the
depth can be as large as 8K cycles \cite{KoDis}. For larger benchmark
suites such as IWLS'05 or ISPD'12, the trace buffer width can be as high
as 64 bits, which is still considered reasonable since the circuit area of
benchmarks in these suites is usually more than 2X larger than that of the
largest circuits in ISCAS'89 \cite{ISCAS89,ISPD12,IWLS05,MishchenkoCB06}.

For a control signal $(p,v,n)$, we have $p\in \cP_C$ and its value is known
($v \neq X$) during the debugging process. For pin $p\in \cP_G$, we denote
$FO_p$ as the set of its ``fanout pins'' which are outputs of a
combinational gate for which $p$ is an input. Similarly, we denote by
$FI_p$ the set of ``fanin pins'', which are the inputs of the
combinational gate for which $p$ is the output.

We call a signal $(p,v,n)$ a \emph{trace signal} if $p\in \cP_F$
corresponds to a traced state element. The trace signal is captured at
run-time over an observation window of $1\leq n\leq N$. Also, since the signal is captured in
an on-chip trace buffer, its values are known within the observation window
and we have $v\neq X$. Let us denote the set of the trace signals by
$\cS_T$. The size of $\cS_T$ is $B\times N$ for a trace buffer of bandwidth
$B$ allowing simultaneous tracing of $B$ signals in $N$ cycles. 
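As a minimal illustration of this notation, the following Python sketch (the names are ours, not taken from any tool) models a signal as a $(p,v,n)$ triple, with \texttt{None} standing for the unknown value $X$, and builds the trace set $\cS_T$ for a buffer of width $B$ and depth $N$:

```python
from dataclasses import dataclass

# Illustrative only: a signal (p, v, n), with None standing for 'X'.
@dataclass(frozen=True)
class Signal:
    pin: str                  # pin index, e.g. "p2"
    value: "int | None"       # 0, 1, or None for the unknown value 'X'
    cycle: int                # clock cycle n within the observation window

def trace_set(traced_pins, values_by_pin, depth):
    """Build the trace set S_T for a buffer of width len(traced_pins)
    and the given depth; its size is B x N."""
    return {Signal(p, values_by_pin[p][n], n)
            for p in traced_pins for n in range(depth)}

# Tracing p2 for 5 cycles, as in the running example: B = 1, N = 5.
S_T = trace_set(["p2"], {"p2": [1, 0, 1, 1, 0]}, 5)
```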

\begin{figure}[t]
  \centering
  \includegraphics[width=2.7in]{figs/circuit.eps}
  \caption{Example circuit for explaining the notations}
  \label{fig:example}
\end{figure}

As an example, in Figure \ref{fig:example}, we have $\cP_F=\{p_1, p_2, p_3, p_4,
p_5\}$, $\cP_G=\{p_8, p_9, p_{10}\}$. For $p_8$ we have $FI_{p_8}=\{p_1,
p_7\}$ and $FO_{p_8}=\{p_3\}$. The highlighted state element $f_2$ is traced, so
we have $\cS_T=\{(p_2, v, n)\}$.

A signal $(p,v,n)$ is defined to be \emph{restored} in cycle $n$ if pin $p$
does not correspond to a pin of a trace signal or of a control input
signal, and the value $v$ can be restored to 0 or 1 based on the values
of the trace and control signals. The procedure for determining if
a signal can be restored will be explained shortly.

\subsection{State Restoration Using X-Simulation}\label{sec:sr}
The state restoration process is performed to restore the values of the
signals corresponding to the remaining state elements based on the values
of the trace signals within the observation window.

Consider first a single gate. Here, restoration refers to using the
already-determined value(s) (i.e., `0' or `1') of one or more pins of the
gate to recover the values of its remaining pins. For each gate, two
types of restoration can be performed: forward restoration and backward
restoration \cite{KoN09}. For forward restoration, we rely on the values of
one or more input pins to restore the value at the output pin. For example,
for an AND gate, as long as one of the inputs has value 0, we will have the
output restored to 0. Similarly for backward restoration of an AND gate, as
long as the output has a value 1, all inputs of the gate should be 1. The
above example represents two simple cases in which the restoration solely
relies on either the inputs or the output. More complex cases are those in
which the values of both inputs and the output are used to restore the
values for unknown inputs. For example, for a 2-input AND gate, if the
output has a value of 0 and one of the inputs has a value of 1, it is
certain that the other input should have a value of 0.
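The forward, backward, and mixed restoration rules for a 2-input AND gate can be sketched as a small routine (a Python sketch of our own, with \texttt{None} standing for the unknown value $X$; it is not taken from any published implementation):

```python
def restore_and2(a, b, y):
    """Apply three-valued restoration to a 2-input AND gate.
    a, b are the inputs, y the output; None encodes 'X'."""
    # Forward restoration: any 0 input forces y = 0; both inputs 1 force y = 1.
    if a == 0 or b == 0:
        y = 0
    elif a == 1 and b == 1:
        y = 1
    # Backward restoration: y = 1 forces both inputs to 1.
    if y == 1:
        a, b = 1, 1
    # Mixed case: y = 0 with one input known to be 1 pins the other to 0.
    elif y == 0:
        if a == 1:
            b = 0
        elif b == 1:
            a = 0
    return a, b, y

restore_and2(None, 0, None)   # forward:  (None, 0, 0)
restore_and2(None, None, 1)   # backward: (1, 1, 1)
restore_and2(1, None, 0)      # mixed:    (1, 0, 0)
```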

After the restoration process, a gate can be categorized as
``not-restored'', ``partially-restored'', or ``fully-restored''. A
not-restored gate refers to the case when the output value of
the gate is unknown. A partially-restored gate refers to the case when the output is
known but only a subset of the input pins are known. Finally, a
fully-restored gate refers to the case when the value of the output pin and the values of
all input pins are known. It should be noted that a state element can only be in
two states, either fully-restored or not-restored. This is also true for the
gate types that have only one input (e.g., buffer and NOT gate).

Given the above restoration process for one gate, we now discuss the
restoration process at the circuit-level. Here, the restoration process can
be represented by a two dimensional table, referred to as a ``restoration
map''. 

\begin{figure}[t] \centering
  \includegraphics[width=3.5in]{figs/resmap.pdf}
  \caption{Restoration map for the example circuit}
  \label{fig:resmap}
\end{figure}

Consider the sample circuit shown in Figure \ref{fig:example}, with
its restoration map shown in Figure \ref{fig:resmap}. The horizontal axis
corresponds to the time interval considered for restoration while the
vertical axis lists the names of all signals corresponding to the outputs
of the state elements. An entry in location $(i,j)$ of the table is the
value that signal $i$ is restored to at clock cycle $j$. An entry of $X$
indicates a signal value that cannot be restored through forward or
backward restoration.

Now suppose in Figure \ref{fig:example} that the state element $f_2$ is
getting traced. With a trace depth of 5 cycles, its output signal
corresponding to pin $p_2$ has the traced value sequence
$\left\{1,0,1,1,0\right\}$ from cycle 0 to cycle 4. Starting
from state element $f_2$, using backward restoration, we can restore the
values of the state element $f_1$ at all cycles except for cycle 4. The
value of $f_1$ is $X$ at cycle 4 because the value of $f_2$ is not
available in clock cycle 5. Then forward restoration is used to restore the
values of $f_3$ and $f_5$ at certain cycles initiated by $f_1$ and $f_2$
respectively. More specifically, state element $f_3$ can be restored
through forward restoration when $f_1$ takes a value of 1 and state
element $f_5$ can be restored through forward restoration when $f_2$ takes
a value of 0. In this case, the restored value of $f_3$ will lie in the
subsequent cycle after $f_1$ becomes 1 and the restored value of $f_5$ will
lie in the cycle immediately after $f_2$ becomes 0. This restoration
process is also referred to as ``X-Simulation'' \cite{ChatterjeeMB11}
because it uses simulation to eliminate as many of the `$X$s' within the
observation window as possible.

\subsection{Acceleration of the X-Simulation Process}
X-Simulation can have a high runtime and memory usage if it is not
implemented carefully. The work of \cite{KoN09} first showed that
X-Simulation can be implemented similarly to logic simulation as follows.
Given a signal $s:=(p,v,n)$ and an observation window of size $B \times N$
(where $B$ is the trace buffer width and $N$ is the trace buffer depth),
whenever pin $p$ has a value $v$ at cycle $n$ that is restored from
unknown to known, signal $s$ is inserted into a queue. This
queue contains the signals that have new values restored at certain cycles
within the observation window.  Then each signal will be popped out of the
queue in sequence, to restore the neighboring signals by following the
restoration procedure described in Section \ref{sec:sr}. During this
procedure, signals with newly-restored values at some cycles will be
inserted to the queue. The whole process terminates when the queue is
empty, meaning that no other signal can be restored at any of the cycles
from an unknown value to a known value.
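The queue-driven procedure above can be sketched as follows. This is a simplified sketch of our own that assumes, for brevity, a netlist made of 2-input AND gates and restoration within a single cycle only; propagation across state elements, which shifts the cycle index, is omitted:

```python
from collections import deque

def restore_and2(a, b, y):
    # Three-valued restoration rules for a 2-input AND gate (None = 'X').
    if a == 0 or b == 0: y = 0
    elif a == 1 and b == 1: y = 1
    if y == 1: a, b = 1, 1
    elif y == 0 and a == 1: b = 0
    elif y == 0 and b == 1: a = 0
    return a, b, y

def x_simulate(gates, values):
    """gates: list of (in1, in2, out) pin names; values: {(pin, cycle): 0/1},
    with missing entries meaning 'X'. Fills in all restorable entries."""
    # Seed the queue with every signal whose value is already known.
    queue = deque(values)
    while queue:
        pin, cycle = queue.popleft()
        for i1, i2, out in gates:
            if pin not in (i1, i2, out):
                continue
            before = [values.get((p, cycle)) for p in (i1, i2, out)]
            after = restore_and2(*before)
            for p, old, new in zip((i1, i2, out), before, after):
                if old is None and new is not None:
                    values[(p, cycle)] = new    # newly restored value
                    queue.append((p, cycle))    # may enable more restoration
    return values
```

Each `(pin, cycle)` entry can move from unknown to known at most once, so every signal is enqueued a bounded number of times and the loop terminates, mirroring the argument given below.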

The X-Simulation process is guaranteed to terminate. On one hand, this is
because for a signal $s:=(p,v,n)$, only the values within the observation
window can be restored ($n \in \{1,\dots,N\}$). On the other hand, the
value $v$ at cycle $n$ remains the same once it is restored from unknown
to known. Therefore, the number of times a signal is inserted into the
queue is bounded by $\mathcal{O}(N)$.

Since the signal value can only be `0', `1' or `$X$' at a given cycle, the
work \cite{KoN09} encoded these three values using two 1-bit values to save
memory space. Specifically, ``00'' is used to encode logic value 0 and
``11'' is used to encode logic value 1. Either ``01'' or ``10'' can be used
to encode logic value $X$. It also encapsulated signal values of 64 cycles
into a ``long integer'' (64 bits), where each bit of the long integer
refers to the signal value of a state element at a certain cycle to further
save the memory used for storing the traced data. The work \cite{KoN09}
also derived bit-wise computations to model the restoration process at
gate-level for different gate types where the signal values for 64 cycles
are considered all together during restoration. The condition for
inserting a signal into the queue now becomes that at least one bit of its
value becomes known. This implementation is referred to as the ``bit-wise
implementation of X-Simulation'' in \cite{KoN09}. Note that if more cycles
are needed for simulation (for example, 8K cycles is a typical trace
buffer depth), the same procedure can be applied by defining a custom data
type of that size.
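The two-bit encoding and a bit-parallel forward restoration of an AND gate can be sketched as follows. This is our own illustration of the idea: 64 cycles of one signal are packed into a pair of 64-bit words \texttt{(hi, lo)}, where matching bits encode a known value (``11'' $=$ 1, ``00'' $=$ 0) and mismatched bits encode $X$:

```python
MASK = (1 << 64) - 1  # one bit per cycle, 64 cycles per word pair

def is_one(hi, lo):            # bitmask of cycles known to be logic 1
    return hi & lo

def is_zero(hi, lo):           # bitmask of cycles known to be logic 0
    return ~(hi | lo) & MASK

def and_forward(a, b):
    """Bit-wise forward restoration of y = a AND b for 64 cycles at once."""
    ones = is_one(*a) & is_one(*b)       # both inputs 1 -> y = 1
    zeros = is_zero(*a) | is_zero(*b)    # any input 0  -> y = 0
    unknown = ~(ones | zeros) & MASK     # everything else stays 'X'
    return (ones, ones | unknown)        # 'X' encoded here as hi=0, lo=1

a = (0b11, 0b11)   # cycles 0,1 known 1; all later cycles known 0
b = (0b01, 0b11)   # cycle 0 known 1; cycle 1 = 'X'; later cycles known 0
and_forward(a, b)  # cycle 0 -> 1, cycle 1 -> 'X', later cycles -> 0
```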

When comparing the quality of multiple sets of trace signals, the state
restoration process needs to be performed for each set to determine which
set leads to the most restoration over all the state elements within the
observation window. This process can be time-consuming even with the
bit-wise implementation of X-Simulation described above, because the total
number of sets to compare can be huge. For example, given a design with
$k$ state elements, the total number of combinations for selecting $l$
trace signals is ${k \choose l}$, which is enormous since $k$ is usually
on the order of thousands \cite{ISCAS89,ISPD12,IWLS05}.

A parallel acceleration strategy can be applied to speed up this process
at the cost of extra memory consumption. More specifically, the circuit
can be copied multiple times such that each trace set is attached to one
copy of the circuit. The state restoration process can then be performed
for all the copies in parallel, with no copy affecting the others. It
should be pointed out that during signal selection, state restoration is
usually performed to obtain the metric that measures the quality of the
selected traces, rather than to debug and diagnose using the restored
data. Therefore, a copy of the circuit can be discarded once its metric
value is obtained, and the memory space reused for the X-Simulation of
other sets.
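This parallel evaluation can be sketched as below. The sketch is illustrative only: \texttt{evaluate\_srr} is a placeholder for the real copy-restore-score-discard step, and a thread pool stands in for whatever parallel substrate (processes, or GPUs as in later chapters' references) an actual implementation would use:

```python
from concurrent.futures import ThreadPoolExecutor

def evaluate_srr(trace_set):
    # Placeholder: a real flow would copy the circuit, attach the trace
    # set, run X-Simulation, compute SRR, and discard the copy.
    return float(len(trace_set))  # dummy score, for illustration only

def rank_trace_sets(candidate_sets, workers=4):
    """Evaluate all candidate trace sets in parallel; each task works on
    its own data, so the evaluations do not affect one another."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        scores = list(pool.map(evaluate_srr, candidate_sets))
    best = max(range(len(scores)), key=scores.__getitem__)
    return candidate_sets[best], scores[best]
```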

\section{Trace Signal Selection for Post-Silicon Validation} \label{sec:TS_pre}
As explained in Section \ref{sec:ELA}, the trace signals cannot be altered
during the debugging process. Therefore, they need to be carefully
selected to enhance the observability inside the CUD as much as possible.
We start this section by introducing a standard metric to
measure the quality of trace signal selection, based on which we then
define the trace signal selection problem. We conclude this chapter by 
giving an overview of previous work on trace signal selection.

\subsection{Measuring the Quality of Trace Signal Selection}
When the circuit operates in a single mode $m$, we assume the control
signals take constant and known values within the observation window of the
trace buffer. For a set of control signals $\cS_C$, this would be one out
of at most $2^{|\cS_C|}$ combinations. The quality of trace signal selection is
typically measured by the {\bf State Restoration Ratio} for mode $m$
(denoted by $SRR^m$), which is computed within an observation window of
$M$ clock cycles and is given by Equation \ref{eq:SRR},
\begin{equation}\label{eq:SRR} SRR^m = \frac{B \times M + \sum_{n=0}^{M-1}
k_n}{B\times M}
\end{equation} where $B$ is the trace buffer bandwidth, and $k_n$ indicates
the number of restored signals in cycle $n$ excluding the signals which are
traced. For the restoration map in Figure \ref{fig:resmap} for the circuit
shown in Figure \ref{fig:example}, with $B=1$ and $M=5$, we have
$SRR=\frac{5+(1+1+3+2)}{5} = 2.4$. (Note that this example does not
include any control signals.) Intuitively, SRR is an estimate of the amount of
restoration that can be obtained per trace signal per clock cycle. The
larger it is, the better visibility the selected trace set can offer.
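The computation of Equation \ref{eq:SRR} from a restoration map can be sketched as follows (the map values below are illustrative, chosen so that the untraced rows contain 7 restored values in total as in the example above, rather than copied from Figure \ref{fig:resmap}):

```python
def srr(res_map, traced, B, M):
    """State Restoration Ratio: (B*M + restored) / (B*M), where `restored`
    counts the known values on the untraced state elements over M cycles."""
    restored = sum(v is not None
                   for name, row in res_map.items() if name not in traced
                   for v in row[:M])
    return (B * M + restored) / (B * M)

# Illustrative restoration map (None = 'X'); f2 is the traced signal.
res_map = {
    "f1": [1, 0, 1, 1, None],
    "f2": [1, 0, 1, 1, 0],
    "f3": [None, None, 1, None, 1],
    "f4": [None, None, None, None, None],
    "f5": [None, 1, None, None, None],
}
# srr(res_map, {"f2"}, B=1, M=5) -> (5 + 7) / 5 = 2.4
```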

Measures other than SRR can of course be used to assess the quality of
restoration of a set of selected trace signals. For example, the following
works have proposed various metrics which are specific to a target
debugging process \cite{YangVN12,YangT12,PrabhakarH10,YangT09,HungW12}.

In this work, we prefer using SRR as the measure of quality for our trace
signal selection for the following reasons. First, it has been widely
adopted by most of the existing works on trace signal selection such as
\cite{KoN09, LiD13,LiD14TCAD, LiD14, LiuX12, BasuM11, ChatterjeeMB11, KoN10SCTB,
RahmaniM13, ShojaeiD10}, so it allows a fair comparison of solution
quality among different selection methods; moreover, a larger SRR
indicates more restoration within the observation window, which
intuitively gives a better chance of analyzing and detecting a bug.

\subsection{Problem Statement} \label{sec:prvious_work}
Given a trace buffer of size $B \times N$ and the control signal set $\cS_C$
specified for a single mode $m$, the Single-mode Trace Signal Selection
(SMTS) problem aims to select $B$ state elements in order to maximize
the restoration of the not-traced signals expressed by Equation \ref{eq:SRR}.

It should also be mentioned that all existing works have
targeted solving the trace signal selection problem for a single mode. We now
give a brief overview of the existing works to solve the single-mode trace
signal selection problem.

\subsection{Overview of Previous Work} 
We categorize the existing works on trace signal selection from two
perspectives: 1) the optimization strategy used to select the trace
signals, and 2) the method used to estimate the State Restoration Ratio
(SRR) that drives the optimization.

\vspace{1mm}
{\noindent \bf 1) Optimization Procedure to Select the Trace Signals:} 
Various algorithmic procedures have been proposed to traverse through the 
trace signal candidates in order to select the best $B$ signals. These are summarized below.
\begin{itemize}\vspace{-1mm}
\item {\it Forward Greedy Traversal:} This is an iterative procedure for
  selecting the $B$ trace signals. At each iteration, the most promising 
  trace signal is selected. This is done by estimating the SRR for each
  trace signal candidate in that iteration. For each candidate, the
  estimation uses the candidate together with the trace signals already
  selected in previous iterations. Clearly this greedy strategy for selecting the trace 
  signals is not optimal. However, it is a scalable procedure which is 
  much faster than the other strategies. This is because in practice, the 
  number of iterations (e.g., $B=32$) is much smaller than the total number 
  of trace signal candidates (which is equal to the number of state
  elements). Therefore, the majority of prior works use forward greedy 
  traversal for trace signal selection \cite{BasuM11, HungW12, KoN09, KoN10SCTB,
    LiD13,LiD14TCAD, LiuX12, RahmaniM13, ShojaeiD10}. \vspace{-1mm}
\item {\it Backward Pruning-based Traversal:} This method is also an
  iterative procedure, in which one state element is eliminated per
  iteration; for a circuit with $N$ state elements, a solution is found
  after $N-B$ iterations. The trace signals are the state elements that
  were not eliminated. The elimination procedure is also greedy, so it can
  be thought of as a backward greedy strategy: it estimates the SRR of all
  the state elements not yet eliminated and eliminates the one with the
  smallest estimated SRR. The work \cite{ChatterjeeMB11} uses this method
  for its trace signal selection procedure. The advantage of the
  pruning-based selection strategy is that it is less prone to
  sub-optimality than the forward greedy method, because eliminating the
  worst candidate is less error-prone than selecting the next best one.
  However, given that in practice the trace buffer width is much smaller
  than the total number of state elements ($N$$\gg$$B$), backward
  elimination is extremely time-consuming and does not scale as the
  circuit size grows.
\item {\it Traversal using a Pareto Set:} Similar to the forward greedy
  strategy, this is an iterative procedure with $B$ iterations. However,
  instead of identifying a single next trace signal at each iteration, it
  identifies $K$ ``top'' solutions. A solution at iteration $i$ contains
  the $i$ trace signals identified so far, and $K$ solutions are stored in
  that iteration. At each iteration, each of the $K$ solutions from the
  previous iteration is visited and the next trace signal is decided for
  each solution separately, such that the identified solutions form a
  Pareto-optimal set \cite{ShojaeiWDB10}. This means that besides SRR, a
  secondary criterion can be used to assess the quality of a solution.
  This approach is used in \cite{ShojaeiD10}, where in addition to
  maximizing SRR (over all the state elements), the secondary criterion is
  to maximize the restoration of a subset of \emph{critical} state
  elements. The advantage of the Pareto-based selection strategy is that
  it is less prone to sub-optimality. However, its runtime can still be
  much higher than that of the forward greedy strategy, because the SRR
  (and the secondary criterion) must be evaluated more times by a factor
  of $K$ at each iteration. Also, the improvement in solution quality may
  not offset the runtime overhead.
\end{itemize}
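The forward greedy strategy in the list above can be sketched as follows (a sketch of our own; \texttt{estimate\_srr} stands in for whichever SRR estimator, metric-based or simulation-based, drives the search):

```python
def greedy_select(candidates, B, estimate_srr):
    """Pick B trace signals, one per iteration, each time committing the
    candidate whose addition yields the highest estimated SRR."""
    selected = []
    for _ in range(B):
        remaining = [c for c in candidates if c not in selected]
        best = max(remaining, key=lambda c: estimate_srr(selected + [c]))
        selected.append(best)                 # commit the greedy choice
    return selected

# Illustrative run with a dummy additive estimator. Real estimators are
# not additive; that non-additivity is what makes greedy sub-optimal.
weight = {"f1": 1.0, "f2": 3.0, "f3": 2.0, "f4": 1.5}
pick = greedy_select(list(weight), 2, lambda s: sum(weight[c] for c in s))
# pick == ["f2", "f3"]
```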

\vspace{1mm}
{\noindent \bf 2) Estimation of State Restoration Ratio (SRR):} The
procedures for selecting the trace signals are all based on estimation of
the SRR to compare different solutions and select the best one. Here we
discuss two categories of techniques to estimate SRR: fast but inaccurate,
versus slow but accurate.
\begin{itemize}
\item {\it Metric-based:} This estimation method is based on defining a set
of metrics to approximate the SRR \cite{KoN09, LiuX12, BasuM11,
ShojaeiD10}. These metrics are typically computed fast so they are suitable
to be integrated with more time-consuming optimization strategies. Various
metrics have been proposed in prior works. For example, the work
\cite{KoN09} introduces a ``visibility'' metric which is a probabilistic
measure of the degree of restoration for each gate, for a given set of
trace signals. The visibility for each gate is estimated and then the
summation of visibility metrics of all the gates is used as an
approximation of SRR. 
A shortcoming of this probabilistic measure is its lack of consideration
of the correlation between the input and output signals of each gate. In
practice, this metric has therefore been shown to be inaccurate,
especially for larger circuits. Moreover, as the number of logic levels
increases, the estimation error grows due to the impact of signal
correlations.
\item {\it Simulation-based:} The work \cite{ChatterjeeMB11} directly uses
simulation to compute the SRR. This estimation is usually much more
accurate than that of the metric-based techniques. However, higher accuracy
requires simulation for a higher number of clock cycles. At the same time,
a separate simulation needs to be done for each trace signal candidate
within each iteration of the trace signal selection procedure. Therefore the
number of simulations is very large. Each simulation is also very
time-consuming, especially as the circuit size grows. Therefore an
optimization procedure which is driven by a simulation-based estimation of
SRR takes significantly longer than the one driven by metric-based
estimation.
\end{itemize}

Overall, metric-based estimation is better suited for integration with
the more time-consuming optimization strategies, while simulation-based
estimation is better suited for integration with the forward greedy
strategy. We note that the work \cite{ChatterjeeMB11} integrates
simulation-based estimation with the backward-pruning strategy; the
runtime is extremely long and the method is only demonstrated on small
circuit instances, despite a GPU-based parallel implementation using 480
GPU cores.
