\section{\large Related Work}
\label{sec:related}
We conclude by summarizing work closely related to our project, which we classify
into two categories.

\textbf{\textit{Testing-based bug localization.}} Among the rich body of existing
testing-based bug localization techniques~\cite{Wang:2009, Santelices:2009, Wong:2007, Zhang:2009, Abreu:2007}, the most closely
related to our project are those that perform bug localization using probabilistic program
behavior models. Feng and Gupta~\cite{feng:learning} use standard Bayesian network inference
to predict buggy statements. Liu et al.~\cite{soberliu05} and Baah et al.~\cite{baah08jul} applied probabilistic models to
analyze the behavior of predicates in passing and failing runs. Their methods build a distribution
over the outcomes of each predicate in passing runs and in failing runs, and locate the fault by
comparing the two distributions: if the behavior of a predicate in failing runs differs
significantly from its behavior in passing runs, the predicate is likely relevant to the failure.
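As a minimal illustrative sketch of this distribution-comparison idea (not the cited authors' actual algorithms; the predicate names and per-run profiles below are hypothetical), one can score each predicate by the gap between its mean true-evaluation rate in passing and failing runs and rank predicates by that gap:

```python
# Hypothetical sketch of distribution-based predicate ranking:
# each run records, per predicate, the fraction of its evaluations
# that were true; predicates whose mean rate differs most between
# passing and failing runs are ranked as most suspicious.

def suspiciousness(passing_rates, failing_rates):
    """Absolute gap between mean true-evaluation rates."""
    mean_pass = sum(passing_rates) / len(passing_rates)
    mean_fail = sum(failing_rates) / len(failing_rates)
    return abs(mean_fail - mean_pass)

# Toy profiles: per-run true-evaluation rates for two predicates.
profiles = {
    "x > 0":    {"pass": [0.9, 0.8, 0.85], "fail": [0.1, 0.2]},
    "len(a)>n": {"pass": [0.5, 0.55, 0.5], "fail": [0.5, 0.45]},
}

ranked = sorted(profiles,
                key=lambda p: suspiciousness(profiles[p]["pass"],
                                             profiles[p]["fail"]),
                reverse=True)
print(ranked)  # → ['x > 0', 'len(a)>n']
```

Here \texttt{x > 0} behaves very differently in failing runs (gap of 0.7) while \texttt{len(a)>n} barely changes, so the former is ranked first; the published techniques use more principled statistical comparisons of the two distributions than this simple mean gap.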

Many other statistical techniques have been proposed to localize software bugs. Liblit et al.~\cite{Liblit:2005}
proposed a method that samples data during program execution and identifies
predicates relevant to bugs. Jiang and Su~\cite{Jiang:2007} proposed an approach that automatically
generates a faulty control-flow path by clustering correlated predicates; such a path
helps users understand the failure but does not pinpoint the faulty statements. Chilimbi et al.~\cite{Chilimbi:2009}
presented HOLMES, a statistical debugging tool that uses path profiles instead of
predicate profiles to isolate bugs.

Our approach fundamentally differs from existing work, which learns distributions over the dependences
of each statement in a program: we use a Markov logic network (MLN) to build a general model of each program
statement and then use that model to reason about buggy behavior. In addition, our approach
uses MLN inference techniques to overcome the limitations of existing Bayesian network
inference in identifying the bug location.

\vspace{1mm}

\textbf{\textit{Markov logic networks and their applications.}}
As one of the most powerful tools in statistical relational learning, Markov logic~\cite{Richardson06markovlogic} is a
probabilistic extension of first-order logic. It makes it
possible to compactly specify probability distributions over complex relational domains and has
been successfully applied to machine reading~\cite{Poon:2010}, unsupervised machine learning~\cite{poon2009},
structure learning~\cite{Kok:2005}, and many other fields~\cite{TKDE2009, Singla:2006}.
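Concretely, Markov logic attaches a weight $w_i$ to each first-order formula $F_i$ and defines the probability of a possible world $x$ as
\begin{equation}
P(X = x) = \frac{1}{Z} \exp\Big( \sum_i w_i \, n_i(x) \Big),
\end{equation}
where $n_i(x)$ is the number of true groundings of $F_i$ in $x$ and $Z$ is a normalization constant~\cite{Richardson06markovlogic}; formulas with larger weights impose stronger, but still soft, constraints on the worlds they describe.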

To the best of our knowledge, this project is the first to apply Markov logic
to the domain of software engineering; we demonstrate that it can serve as an interface layer
for testing-based bug localization.
