\chapter{Introduction}
\noindent With continuous technology scaling, following Moore's law, more
and more transistors are integrated in modern designs. The higher
integration density greatly increases design complexity. This increase in
complexity, combined with tighter time-to-market constraints, requires
designs to be verified as bug-free in a short period of time during the
post-silicon stage, when the chip has been fabricated but not yet
mass-produced for delivery to market. This is because pre-silicon
verification techniques, which are mainly based on formal verification and
simulation, are no longer able to catch all the bugs before
fabrication. Technology scaling also brings additional uncertainty due to
the more complex manufacturing process, leaving more fabrication-induced
bugs inside the chips. Therefore, effective Post-silicon Validation (PSV)
techniques are becoming vital.  \newpage

\section{Overview of Pre-Silicon Verification and Post-Silicon Validation}
The correctness of a circuit needs to be verified with different emphases
at different stages when building a VLSI (Very-Large-Scale
Integration) circuit from its high-level specifications to the final
product. Specifically, at the pre-silicon stage, which is the stage before the design
is manufactured, formal verification and simulation-based techniques are applied
with the main goal of proving the functional correctness of the design and
finding the functional bugs. However, it is becoming increasingly difficult
to detect all the functional bugs during the pre-silicon verification
phase. This is because the two major methods used at the pre-silicon
stage, namely simulation and formal verification, have reached significant
limitations for modern designs.

First, simulation-based debugging techniques apply input vectors to the
circuit to check whether the outputs are consistent with the design
specifications. However, due to the large size of modern designs, this
technique can only be used for a small subset of valid input vectors.
For instance, a design with $n$ inputs has up to $2^n$ input vectors. For a
circuit with $m$ logic gates, where $m$ is on the order of billions in modern
designs, simulating a single input vector requires
visiting every gate and has a runtime complexity of $\mathcal{O}(m)$. The
simulation time for all input vectors is therefore of complexity $\mathcal{O}(2^n \times m)$. 
Furthermore, for sequential circuits, exhaustive simulation also needs
to consider the total number of state transitions. For example, $l$ state elements may
lead to $2^l$ states, which further increases the simulation time when
the input vectors are applied for each state transition.
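As a toy illustration of the $\mathcal{O}(2^n \times m)$ cost, the sketch below simulates a 3-input, 3-gate combinational circuit over all $2^3$ input vectors and counts the gate evaluations. The netlist format and gate names are our own, purely for illustration:

```python
from itertools import product

# Hypothetical 3-gate netlist (names g1..g3 are ours, for illustration):
# each entry is (gate_name, operation, input_names).
GATES = [
    ("g1", "AND", ("a", "b")),
    ("g2", "OR",  ("g1", "c")),
    ("g3", "NOT", ("g2",)),
]

def simulate(input_values):
    """Evaluate every gate once for one input vector: O(m) work."""
    values = dict(input_values)
    evaluations = 0
    for name, op, ins in GATES:
        operands = [values[i] for i in ins]
        if op == "AND":
            values[name] = all(operands)
        elif op == "OR":
            values[name] = any(operands)
        elif op == "NOT":
            values[name] = not operands[0]
        evaluations += 1
    return values, evaluations

def exhaustive_simulation(input_names):
    """Apply all 2^n input vectors: O(2^n * m) gate evaluations total."""
    total_evaluations = 0
    for bits in product([False, True], repeat=len(input_names)):
        _, evals = simulate(dict(zip(input_names, bits)))
        total_evaluations += evals
    return total_evaluations
```

With $n=3$ inputs and $m=3$ gates, exhaustive simulation performs $2^3 \times 3 = 24$ gate evaluations; at billions of gates and hundreds of inputs, this product is clearly intractable.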

Another factor that limits the extensive usage of simulation is the
increase in runtime when more accurate models are used. Simulation allows
modeling both the functional and electrical attributes of the different
components of a circuit, including wires, gates, the clock network, etc. There
is always a gap between the electrical behavior obtained from the modeled circuit
and that of the real circuit after fabrication. This gap needs to be narrowed for simulation
to be effective. However, applying more accurate electrical models is only
possible at the cost of increased model complexity, which leads to
longer runtime. At the pre-silicon stage, where ensuring functional
correctness is of greater importance, electrical models are preferably kept simple. For
post-silicon validation, complex models should be used to better capture the
behavior of the fabricated circuit, which in turn takes much longer
validation time than in the pre-silicon stage. Online data that capture the
runtime behavior of a fabricated chip can also be collected using various
design-for-debug hardware during real-time operation to detect incorrect
circuit behavior.

Besides simulation, formal verification techniques are the second set of techniques used during
pre-silicon verification. Unlike simulation, which tests correct behavior over
a set of input vectors, formal verification checks the correctness of the circuit using
mathematical models derived from the design
specification. Therefore, the effectiveness of formal verification largely
depends on the accuracy and completeness of the design's specifications
\cite{KernG99}. For modern designs, obtaining a clear
and correct specification may not be easy. This could in
part be due to the massive use of Intellectual Property (IP) cores,
which are combined with other higher-level building blocks on a System
on a Chip (SoC). Since the IP cores may come from different vendors and may not all be
well documented or fully tested, their arbitrary integration in an SoC by
the designers may bring unpredictable risks to the system. Formal
verification techniques also suffer from poor runtime scalability with
increasing design size, similar to simulation-based techniques. 

Due to the above limitations at the pre-silicon stage, it is possible that
some bugs escape the pre-silicon verification phase. Furthermore, new bugs
may be introduced due to the imperfections in the fabrication process. It
is necessary to fix all the bugs prior to the mass fabrication of the
design.  

First, manufacturing test can be used to ensure that the functional and
electrical attributes of a chip remain consistent with the specifications,
and that they are not altered during the fabrication process.  
It should be noted that different categories of manufacturing faults are
dealt with differently during manufacturing test. Among them, the most
common ones are stuck-at faults, bridging or open faults,
and delay faults. A stuck-at fault causes the output of a gate or an
interconnect to be permanently stuck at `0' or `1'. Similarly, a bridging or
open fault causes the output of a gate or an interconnect to be permanently
shorted to $V_{dd}$ or $V_{ss}$. A delay fault happens when the delay of one 
or more paths violates the timing constraints. To detect these
manufacturing faults, appropriate input patterns need to be
applied to the design such that erroneous results are generated and propagated to the primary
outputs. The input pattern generation process is time-consuming and may not
be equally effective for all fault types. Despite significant
progress in Automatic Test Pattern Generation (ATPG) algorithms over
the years, the runtime scalability of the test pattern generation process and
the limitations of the fault models remain two major bottlenecks to the effectiveness of ATPG for PSV. Despite these limitations, 
manufacturing test still remains an indispensable step in PSV.
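To make the fault-detection idea concrete, the sketch below searches for an input pattern that distinguishes a fault-free circuit from a copy with a stuck-at-0 fault injected on an internal net. The toy circuit is our own, and real ATPG algorithms derive patterns structurally rather than by enumeration:

```python
from itertools import product

# Toy circuit: out = (a AND b) OR c.  The names are hypothetical.
def good_circuit(a, b, c):
    return (a and b) or c

def faulty_circuit(a, b, c):
    g1 = False          # output of the AND gate stuck at '0'
    return g1 or c

def find_test_pattern():
    """Find an input vector that activates the fault (a=b=1 makes the
    fault-free AND output differ from the stuck '0') and propagates the
    error to the primary output (c=0 lets the difference pass the OR)."""
    for pattern in product([False, True], repeat=3):
        if good_circuit(*pattern) != faulty_circuit(*pattern):
            return pattern
    return None         # the fault is undetectable
```

The only detecting pattern here is $(a,b,c) = (1,1,0)$; for realistic circuits, enumerating $2^n$ patterns is infeasible, which is exactly the scalability bottleneck noted above.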

After the manufacturing test, there may still be bugs left in the design
which should be identified using other PSV techniques. In order to locate
these bugs, the main challenge is to increase the visibility inside the
chips. This can be hard to achieve nowadays without the
assistance of Design-for-Debug (DFD) hardware. In the remainder of this
section, we first discuss the bug types that are encountered at the
post-silicon stage and then give an overview of various PSV techniques to
increase the visibility inside the chip.

\section{Bug Types at the Post-silicon Stage} Bugs at the post-silicon
stage can be categorized into three types: logic bugs, electric bugs and
system bugs. 

{\bf Logic bugs} are the functional errors that escape the pre-silicon
verification phase. Detection of logic bugs requires collecting online
data as the chip is operating, so that when erroneous values are observed
at the primary outputs, related data can be traced back within a ``suspicious'' 
time frame. Various DFD hardware, including scan chains and trace buffers, is
typically used for this purpose, as explained in Section \ref{sec:debug_techniques}.

{\bf Electric bugs} refer to the mismatches found between the electrical attributes of
the circuit models and the real circuits. These mismatches stem from
process variations during fabrication, dynamic temperature and voltage
variations, as well as inaccuracy of modeling various electrical factors
such as cross-talk between the wires \cite{Joardar94, PaulR02}, or a gate
delay when considering simultaneous switching at its inputs \cite{ChenGB01,
ChouRP94}.

Electric bugs usually have a cumulative effect and can eventually manifest themselves as logic malfunctions. 
For instance, process variations may result in a lowered $V_{dd}$ level for a certain
gate, which translates into a slower signal transition and larger gate
delay. While this slowdown at a single gate may not be sufficient to cause a
timing violation, the slowdowns in many gates on the same combinational
path can accumulate and result in a timing violation.
The timing failure in turn may get manifested as propagation of an
incorrect logic value to the subsequent logic stages, and be viewed as a logic
malfunction. 
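A back-of-the-envelope sketch of this accumulation effect follows, using assumed numbers (100 ps nominal gate delay, 10 ps slowdown per gate from variation, 1.05 ns clock period; none of these figures come from the text):

```python
# Assumed illustrative figures, not measurements from the text.
NOMINAL_GATE_DELAY_PS = 100
VARIATION_SLOWDOWN_PS = 10   # extra delay per gate due to a lowered Vdd
CLOCK_PERIOD_PS = 1050

def path_violates_timing(num_gates_on_path):
    """Sum the slowed per-gate delays along one combinational path and
    compare the total against the clock period."""
    path_delay = num_gates_on_path * (NOMINAL_GATE_DELAY_PS
                                      + VARIATION_SLOWDOWN_PS)
    return path_delay > CLOCK_PERIOD_PS
```

One slowed gate (110 ps) fits comfortably in the cycle, but a ten-gate path accumulates 1100 ps and misses the 1050 ps budget, turning many individually harmless slowdowns into a timing violation.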

One method for detecting electric bugs is to use probing techniques, which
allow monitoring the electrical attributes of the circuit at certain
locations such as specific state elements. 
However, probing is a manual process and can be done for only a very small
number of on-chip sites. Therefore, other techniques are first used to narrow down
the suitable sites for probing. For example, a logic malfunction due to an
electric bug may first be identified at the output of a state element or a
primary output. Next, detection techniques (similar to those used for logic
bugs) can be applied to find the exact location of the malfunction by 
backtracking through the gates in the fan-in cone of the identified spot. Then
probing can be applied at that spot in order to check electrical
attributes such as the pin voltage level, load capacitance, or noise level of the
gate, to identify the potential causes of the bug. The probing process may further
be extended to neighboring gates to identify cumulative factors, for
example in the delay or noise levels, of a group of gates. 
In this way, the workload of probing is reduced, since it does not
need to start right from the spot where the malfunction is initially found.
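The backtracking step can be sketched as a simple reverse traversal over a fan-in map; the netlist below is hypothetical:

```python
# Hypothetical netlist, stored as a fan-in map: node -> its drivers.
FANIN = {
    "ff1": ["g3"],           # state element where the malfunction appeared
    "g3":  ["g1", "g2"],
    "g1":  ["a", "b"],
    "g2":  ["b", "c"],
}

def fanin_cone(start):
    """Backtrack from the spot where the malfunction was observed and
    collect every gate and input that can influence it; probing can
    then be restricted to this candidate set."""
    cone, stack = set(), [start]
    while stack:
        node = stack.pop()
        for driver in FANIN.get(node, []):
            if driver not in cone:
                cone.add(driver)
                stack.append(driver)
    return cone
```

For the failing state element `ff1`, the candidate probing sites reduce to the six nodes in its cone rather than the whole chip.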

Finally, {\bf system bugs} occur when multiple CPUs or design blocks of a system interact
with each other during operation. For example, erroneous outputs from a
CPU core may be stored in a shared memory and later be used by another CPU
core that shares the same memory. Essentially, these erroneous data can either come
from functional bugs (e.g., in the CPU core) or be caused by electric bugs
(e.g., along the data path from the core to the memory). Therefore, system bugs are
usually considered a combination of electric bugs and logic bugs.
 
Due to the complexity of present designs (e.g., out-of-order execution, deep
pipeline stages, long communication latencies within the memory
sub-system), it can take up to billions of clock cycles from the point
when a system bug emerges until incorrect behavior is detected. Because of this
long ``error-detection latency'' \cite{HongLPMLKHNGM10}, current DFD hardware
is typically not directly applied to large systems, since it is usually
only able to track a short window of data of a few thousand clock cycles. Therefore, current detection techniques 
\cite{AdirGLNSSZ11}, \cite{DeOrioWB09}, \cite{HongLPMLKHNGM10},
\cite{HopkinsM06}, \cite{WagnerB08} first coarsely locate the bugs by targeting
different design blocks (mainly in the memory sub-system) and selecting a few 
representative types of bugs for detection based on their working 
mechanisms, before other finer-grained debugging techniques for logic and electric bugs are applied.

To sum up, the detection of system bugs requires specific knowledge
about the working mechanism of the system. Once the bugs have been 
coarsely located within suspicious time windows, other logic and electric 
bug detection techniques can be applied. Meanwhile, electric bugs
eventually manifest themselves as logic malfunctions, so they also require the use of
detection techniques for logic bugs, as explained before. From 
this perspective, detecting logic bugs may be viewed as the major effort
in post-silicon validation.

\section{Overview of Debug Techniques for Post-silicon Validation} \label{sec:debug_techniques}
The post-silicon validation process relies on capturing as many internal signals of
a circuit as possible during the online operation using actual workloads. A
major concern at the post-silicon stage is the lack of access to the
internal signals of the chip \cite{VermeulenG02, HsuTJC06}. Therefore,
different techniques have been proposed to increase the access and enhance 
the observability inside the chip. Below, we briefly explain these techniques.

{\bf Physical probing} tools \cite{PanicciaERY98, SchlangenLKBJMWLK07} can
directly touch the pins of a Circuit-under-Debug (CUD) to obtain
voltage and current statistics, which are then output to an oscilloscope
for online monitoring. Observation of abnormal behavior at a pin (e.g., a lowered voltage level
for logic value `1') will lead to further probing of the neighboring pins
to search for related bugs.

Even though probing has the advantage
of obtaining accurate information about the electrical attributes at the pins,
it needs to be applied manually and can only cover a small portion
of the circuit, since it is impossible to probe every pin of a large modern
design \cite{ChatterjeeMB11}. Therefore, as explained in the previous
section, probing is usually applied in a restricted setup for detecting
electric bugs, after other techniques have been applied to narrow down the scope of probing.

{\bf Scan chains} \cite{DattaSA04, GuWLKC02} have been used extensively
both in manufacturing test and in post-silicon validation. During
manufacturing test, test vectors generated through the Automatic Test
Pattern Generation (ATPG) process are fed into the scan-in channels of 
the scanned state elements, and the outputs of the CUD are shifted out of the scan chains to be checked for the
existence of any manufacturing defects. Scan chains can also be used during
post-silicon validation, where functional data, i.e., input vectors
corresponding to real workloads, are applied to the primary inputs of the CUD and 
the results are dumped off-chip through the scan chains for analysis.

The main drawback of using scan chains is that they require the CUD to
\emph{pause} its execution for up to a few thousand cycles until the internal signal
values have been fully propagated through the scan-out pins \cite{Josephson02}. 
This is because the scanned state elements do not have their own storage to
hold the latched values. If the CUD does not pause
during the scan dump, wrong values may be scanned out. The use of scan
chains is also limited by the number of available I/O pins, which determines
how many scan chains can be deployed. The more scan chains are available,
the fewer state-element values need to be dumped through each scan chain,
and therefore the shorter the scan dump period.
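The trade-off between the number of scan chains and the pause length can be sketched with a small behavioral model; the round-robin partitioning of state elements across chains below is an assumption for illustration:

```python
import math

def scan_dump(state_values, num_chains):
    """Model a scan dump: the state elements are split across num_chains
    chains, and each chain shifts out one bit per cycle, so the CUD must
    stay paused for ceil(len(state_values) / num_chains) cycles while the
    values drain through the scan-out pins."""
    chains = [state_values[i::num_chains] for i in range(num_chains)]
    pause_cycles = max(len(chain) for chain in chains)
    assert pause_cycles == math.ceil(len(state_values) / num_chains)
    dumped = []
    for cycle in range(pause_cycles):
        for chain in chains:
            if cycle < len(chain):
                dumped.append(chain[cycle])
    return dumped, pause_cycles
```

Doubling the number of chains halves the pause, but each chain costs additional scan-out pins, which is exactly the I/O-pin limitation described above.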

{\bf Shadow state elements} are an alternative to scanned state elements 
which enable the continuous execution of the CUD
\cite{ErnstKDPRPZBAFM03, JosephsonG04}. Compared with scanned state elements, shadow state
elements have additional storage to hold the latched values, and thus they do not
interrupt the online operation of the CUD. However, shadow state elements can introduce a large
area overhead \cite{JosephsonG04},
rendering them applicable to only a small number of state elements, because
the design blocks are required to be placed and routed compactly. Also, it
has been pointed out in \cite{YangJH14} that shadow state elements
require extra hold margins at design time (related to the
short-path hold constraint) in order to capture the values of trace 
signals correctly. This further complicates the design of shadow state elements and 
limits their use in post-silicon validation.

{\bf Trace buffers} \cite{AbramoviciBDLMM06} have emerged as an effective DFD
hardware for PSV. A trace buffer is essentially
composed of a small on-chip memory with a routing network 
that connects the state elements to the memory. It can record the signal 
values of a subset of the state elements for a few thousand cycles and 
store them in the on-chip memory. The stored values can then be dumped
off-chip for analysis.
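A minimal behavioral model of a trace buffer follows, assuming a circular memory that overwrites the oldest samples; the interface is our own sketch, not a real debug-IP API:

```python
from collections import deque

class TraceBuffer:
    """Every cycle, sample a fixed subset of state elements into a small
    circular on-chip memory; once 'depth' entries are stored, the oldest
    sample is overwritten, so the buffer always holds the most recent
    window of execution."""

    def __init__(self, traced_signals, depth):
        self.traced = traced_signals
        self.mem = deque(maxlen=depth)   # circular buffer of 'depth' entries

    def capture(self, full_state):
        # Only the traced subset of the full circuit state is recorded.
        self.mem.append({s: full_state[s] for s in self.traced})

    def dump(self):
        # Offload the recorded window for off-chip analysis.
        return list(self.mem)
```

The CUD keeps running while the buffer captures; only the final dump requires off-chip transfer, which contrasts with the execution pause required by scan chains.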

The notion of the trace buffer emerged in the late 1990s to early 2000s, when
there was a transition from external logic analyzers to embedded logic 
analyzers to make debugging more efficient \cite{CortiKMPSW04,
  MacNameeH00, BeenstraRH01}. Trace buffers have been used as the core
storage unit inside embedded logic analyzers since then. Meanwhile,
there has been a growing demand for debugging infrastructure that
increases the visibility into as many internal states of a CUD as possible,
rather than a few monitoring points such as data and address buses
\cite{Stollon10}. Trace buffers are a good fit for this purpose.

Even though trace buffers emerged after scan chains \cite{BrisacherKS05}, 
they have been used increasingly in PSV due to their advantage of online
tracing of data for a longer period of time before dumping the data off the
chip. This addresses the main shortcoming of scan
chains for PSV, namely the need to repeatedly stop the execution of the CUD. 
Therefore, trace buffers have been widely embedded into various
validation sub-systems of FPGAs (e.g., ChipScope in Xilinx FPGAs
\cite{ChipScope}), SoCs and processors (e.g., CoreSight Trace
Macrocells in ARM SoCs and processors \cite{CoreSight}) over the years.
Still, many unexplored research avenues exist to increase the
visibility inside the chip by utilizing trace buffers. In the next chapter, 
we discuss trace buffers in detail.
