\section{Introduction}



Today's software systems suffer from poor reliability, with software bugs costing the U.S. economy
upwards of \$60 billion annually\footnote{NIST report: \url{http://www.nist.gov/director/prog-ofc/report02-3.pdf}}. Attempts to reduce the number of bugs in software are estimated to
consume 50\% to 80\% of the development and maintenance effort~\cite{CollofelloW89}. Among the tasks required to
reduce the number of software bugs, debugging is one of the most time-consuming, and localizing the
bugs is the most difficult component of this task. \textit{Intelligent} techniques that can reduce the
time required to reason about buggy software behaviors can have a significant impact
on the cost and quality of software development and maintenance.

Recently, researchers
have proposed many approaches to automate this process of searching for bugs.
Some existing approaches use statement coverage information provided by test suites
to identify likely buggy statements~\cite{Jones02visualizationof}. Other approaches leverage
the static program
structure information to perform a binary search of the memory state
to find potential bug locations~\cite{Cleve:2005}.
Still other approaches~\cite{reiss:2003} are based on comparing the statements covered
by passing tests with those covered by failing tests. Several authors
have devised, compared, and learned similarity measures for use in software
bug localization. In a comprehensive study~\cite{Jones:2005}, the statement coverage-based
approach \textsc{Tarantula}~\cite{Jones02visualizationof} consistently performed better than other
state-of-the-art approaches~\cite{reiss:2003,Cleve:2005}.
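To make the coverage-based intuition concrete, the \textsc{Tarantula} suspiciousness score for a statement can be sketched as follows (a minimal sketch; the function and variable names are our own, not taken from the original tool):

```python
def tarantula_suspiciousness(failed_cov, passed_cov, total_failed, total_passed):
    """Suspiciousness of one statement from its test-coverage counts.

    failed_cov / passed_cov: number of failing / passing tests that
    execute the statement; total_failed / total_passed: sizes of the
    failing and passing portions of the test suite.
    """
    # Fraction of failing (resp. passing) tests that cover the statement.
    fail_ratio = failed_cov / total_failed if total_failed else 0.0
    pass_ratio = passed_cov / total_passed if total_passed else 0.0
    denom = fail_ratio + pass_ratio
    # Statements covered mostly by failing tests score close to 1.
    return fail_ratio / denom if denom else 0.0
```

A statement executed by every failing test but few passing tests thus receives a score near 1, flagging it as likely buggy.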


Despite these advances in automated software bug localization research, most approaches
share two common limitations. First, they predict the likelihood of a statement being buggy 
\textit{sequentially} and \textit{separately}. This local search ignores
inter-statement relationships in a program
that are potentially crucial. For example, if a statement outputs
an incorrect value, then the following statements using that incorrect output value
should \textit{not} be blamed for being buggy, even if they produce wrong results. Second,
existing approaches tend to use isolated information to localize bugs and
block the crosstalk between various information sources. In reality, however, evidence
from different information sources is often strongly intermixed, and evidence from one source
can help to resolve the uncertainty of another. For example, prior
bug knowledge should be incorporated to localize new bugs, since bugs repeatedly
found in the same software system can be similar to one another. The prior
knowledge of similar bugs confirmed in the past can be extremely useful
in localizing new bugs. Clearly, a relational-learning model capable of
\textit{joint inference} to exploit this global information can address
these limitations and open up new
avenues for automated software bug localization.
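This kind of inter-statement reasoning can be captured declaratively by a weighted first-order formula; the predicates below are purely illustrative placeholders, not the actual predicates of our model:
\begin{equation*}
w:\;\; \mathit{WrongOutput}(s_1) \wedge \mathit{Uses}(s_2, s_1) \Rightarrow \neg\, \mathit{Buggy}(s_2)
\end{equation*}
where the weight $w$ reflects how strongly the rule should hold: a world that violates the formula is not impossible, only less probable.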


In this paper, we propose an approach based on \textit{Markov logic} to address
the software bug localization problem. Our approach takes
advantage of the recent progress in statistical relational learning (a.k.a.
multi-relational data mining), which provides rich representations and efficient
inference and learning algorithms for non-i.i.d.\ data. In particular,
we use Markov logic, which combines first-order logic and Markov random fields~\cite{Richardson06markovlogic},
with weighted satisfiability testing for efficient inference and a voted perceptron
algorithm for discriminative learning. When applied to the software bug localization
problem, Markov logic allows us to
combine different information sources into a comprehensive solution.
We illustrate this in this paper by combining
a few salient sources, namely statement coverage, static program structure information,
and prior bug knowledge, and applying the resulting method to localize
bugs in a set of real-world programs.
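For instance, each information source could contribute its own weighted formulas; the predicates here are hypothetical sketches rather than our actual model:
\begin{align*}
w_1 &:\;\; \mathit{CoveredByFailedTest}(t, s) \Rightarrow \mathit{Buggy}(s)\\
w_2 &:\;\; \mathit{SimilarToPastBug}(s) \Rightarrow \mathit{Buggy}(s)
\end{align*}
Joint inference over all such formulas then lets evidence from one source reinforce or discount evidence from another.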




This paper makes the following contributions:

\begin{itemize}
\item We propose the \textit{first} approach 
that uses \textit{Markov logic}
to represent and reason with complex software behaviors, and
to localize potential bugs via joint inference.

\item We implement an \textit{intelligent} debugging system, \textsc{MLNDebugger}, for automatically localizing potentially
buggy statements in a software system.
\item We evaluate our approach on four real-world programs.
The results show that our approach achieves
higher accuracy with substantially less engineering effort than previous approaches.
\end{itemize}

We begin by briefly reviewing the necessary background of 
Markov logic. We then describe our proposed approach,
report our experiments, discuss related work,
and outline directions for future work.

