
\section{\large Motivation}

Today's software systems suffer from poor reliability, with software bugs costing the U.S. economy
upwards of \$60 billion annually. Attempts to reduce the number of bugs in software are estimated to
consume 50\% to 80\% of the development and maintenance effort~\cite{CollofelloW89}. Among the tasks required to
reduce the number of software bugs, debugging is one of the most time-consuming, and localizing
bugs is the most difficult part of debugging. Clearly, techniques that reduce the
time required to reason about buggy software behavior and locate bugs can have a significant impact
on the cost and quality of software development and maintenance.

\vspace{1mm}

\textbf{\textit{Manual bug localization is time-consuming and ineffective.}} When fixing a bug, developers consistently perform four tasks: (1) identify statements involved in
failures -- those executed by failed executions (e.g., from failed test cases); (2) narrow the search
by selecting suspicious statements that might contain bugs; (3) form hypotheses about the suspected bugs; and
(4) restore program variables to a specific state to observe the program behavior. This project addresses
the \textbf{second} task --- selecting suspicious statements that may contain the bug. To identify
suspicious statements, programmers typically use debugging tools to \textit{manually} trace the program
with a particular input until they encounter a point of failure, and then backtrack to find the related buggy statements.
This manual process of localizing software bugs can be very time-consuming, so a technique that
automates, or partially automates, the reasoning process can provide significant savings. Specifically,
an automated technique with tool support can let developers concentrate their attention locally,
rather than forcing them to maintain a global view of the software, while still giving them access to that global view. Such a technique
can also provide more useful information by aggregating the results of many executions of the program, helping
developers understand more complex relationships in the system. However, with large programs and large
numbers of executions, the huge amount of data produced by those executions, if reported in a low-level
form, may be prohibitively difficult to interpret.

\vspace{1mm}

\textbf{\textit{Automated bug localization techniques are often defined in an ad-hoc way without
a unified interface layer.}} 
Automated software bug localization techniques have been studied for over 30 years. However,
the state-of-the-art techniques are still very far from creating bug-free software.
This is especially true for today's software, whose
complexity, configurability, portability, and dynamism exacerbate automated bug localization challenges. 
In the last few years, a great number of research techniques have been proposed
that automate or semi-automate several debugging activities~\cite{Ball:2003, Jiang:2007, Liblit:2005,
Jones02visualizationof, Chilimbi:2009, feng:learning, soberliu05, le:2010, le:2011}. Collectively, these
techniques have pushed forward the state of the art in debugging. A major category of
automated bug localization research is \textit{testing-based} techniques, which use
data collected from test executions to determine potentially buggy program entities.
Specifically, these techniques identify potentially faulty code by observing the characteristics of
failing program executions (from tests), often comparing them to the characteristics of
passing executions. These approaches differ from one another in the type of information they
use to characterize executions and statements -- path profiles~\cite{Jiang:2007}, counterexamples~\cite{Ball:2003},
statement coverage~\cite{Jones02visualizationof}, and predicate values~\cite{Liblit:2005} -- and in the specific type of mining performed on
such information. Additional work in this area has investigated the use of
clustering techniques to eliminate redundant executions and facilitate bug localization~\cite{Dickinson:2001}.
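As a concrete illustration of the coverage-based approach, the following sketch ranks statements by a Tarantula-style suspiciousness score~\cite{Jones02visualizationof}: a statement covered mostly by failing tests scores close to 1. The coverage data and names here are hypothetical; a real tool would collect coverage via instrumentation rather than hard-code it.

```python
# Tarantula-style suspiciousness ranking.
# The coverage numbers below are hypothetical illustration data.
def suspiciousness(failed_cov, passed_cov, total_failed, total_passed):
    """failed_cov / passed_cov: number of failing / passing tests covering a statement."""
    fail_ratio = failed_cov / total_failed if total_failed else 0.0
    pass_ratio = passed_cov / total_passed if total_passed else 0.0
    if fail_ratio + pass_ratio == 0:
        return 0.0
    return fail_ratio / (fail_ratio + pass_ratio)

# coverage[s] = (failing tests covering s, passing tests covering s)
coverage = {
    "s1": (2, 8),   # covered by 2 failing and 8 passing tests
    "s2": (2, 1),   # covered mostly by failing tests: likely suspicious
    "s3": (0, 9),   # never covered by a failing test
}
scores = {s: suspiciousness(f, p, total_failed=2, total_passed=10)
          for s, (f, p) in coverage.items()}
ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked[0])  # s2 ranks highest
```

Developers would then inspect statements in descending score order; the low-level coverage matrix is reduced to a single high-level ranking.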

Despite these advances in automated software bug localization research, most techniques are
defined in an ad-hoc way, and \textit{a common, rigorous interface layer separated from any specific
bug localization technique} has unfortunately not been established as a foundation for
continuing automated debugging research. Typically, new bug localization techniques are
described informally. Researchers often need to work with \textit{low-level} data (e.g., test
coverage) to derive \textit{high-level} information (e.g., bug locations). This has been a persistent
problem in the research community.

\vspace{1mm}

\textbf{\textit{Why has a unified interface layer not been established?}} Although the importance of
an interface layer is widely recognized, there has been very little work in the software engineering
community to establish such a layer. An interface layer needs to provide a language of operations that all
the infrastructure needs to support, and all that the higher-level applications need to know about.
A good interface layer also needs to expose important distinctions and hide unimportant
ones. How to do this is often far from obvious, so creating a successful new interface layer
is not easy. Typically, it initially faces skepticism, because it can be less efficient than
existing alternatives and can appear too ambitious, beset by difficulties that are only resolved by
later research. This conventional wisdom and these common beliefs have led to the lack of an interface layer
for automated fault localization techniques.

\vspace{1mm}

\textbf{\textit{Contributions of this project.}} This project argues that a unified interface layer
must be established to push the frontier of testing-based fault localization techniques.
In summary, this project makes the following two contributions:

\begin{itemize}

\item We propose a framework based on \textit{Markov Logic Networks} (MLNs)~\cite{Richardson06markovlogic}
 as an interface layer for \textit{testing-based automated bug localization}. 
Using MLN as an interface layer for bug localization has several benefits: (1)
precisely locating a bug requires analyzing program statements, program executions, program variables\footnote{The simplified
terms ``statements'' and ``executions'' will be used in this project
for program statements and program executions. We will keep using
``program variables'' to distinguish them from variables in Markov logic.}, and their correlations, all of which MLN can model; (2) first-order logic formulas, an
essential part of MLN, are expressive enough to represent observations and
relations between potentially buggy program entities and test executions; (3) recent advances in probabilistic inference for MLNs can be naturally used
to reason about buggy behavior and estimate the probability that a program entity (e.g., a statement, method, or class) is buggy; and (4) the Markov
logic framework and the Alchemy toolkit~\cite{alchemy} provide efficient tool support for learning and
inference. Furthermore, MLN separates researchers' \textit{high-level}
goals from \textit{low-level} details. When designing new automated bug localization techniques, researchers can focus on concisely
encoding their insights in a few first-order rules, instead of dealing with \textit{low-level} details.

\item We instantiate a new automated fault localization technique from the proposed framework, and empirically
demonstrate that it outperforms existing state-of-the-art testing-based bug localization techniques with
little human effort.

\end{itemize}
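To make the interface-layer idea concrete, a localization insight such as ``statements covered by failing tests are likely buggy'' could be written as a single weighted first-order rule. The predicate names below are illustrative choices, not fixed by MLN; the probability definition is the standard one from Markov logic~\cite{Richardson06markovlogic}.

```latex
% Hypothetical weighted rule (predicate names are illustrative):
% "a statement covered by a failing test is likely buggy"
w_1:\quad \mathit{Covered}(s,t) \wedge \mathit{Failed}(t) \Rightarrow \mathit{Buggy}(s)

% Markov logic then defines the joint distribution over all ground atoms:
P(X = x) \;=\; \frac{1}{Z}\,\exp\Big(\sum_i w_i\, n_i(x)\Big)
```

Here $n_i(x)$ is the number of true groundings of rule $i$ in world $x$, $w_i$ is its weight, and $Z$ is a normalizing constant. Inference over this distribution yields, for each statement $s$, the probability that $\mathit{Buggy}(s)$ holds, which is exactly the suspiciousness ranking a bug localization technique needs.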

\vspace{1mm}

\textbf{\textit{Report organization.}} The rest of this report is organized as follows. Section~\ref{sec:background}
briefly introduces the background of testing-based fault localization. Section~\ref{sec:interface} argues that an interface
layer for testing-based fault localization is needed, and explains why Markov Logic Networks (MLNs) are a good fit. Section~\ref{sec:model}
presents the unified framework using MLN and instantiates a new fault localization technique based on
the proposed framework. Section~\ref{sec:example} demonstrates the benefit of our proposed framework on a simple
illustrative example. Section~\ref{sec:evaluation} gives our evaluation on a set of real-world programs. Section~\ref{sec:implication}
discusses the implications of this work. Section~\ref{sec:related} summarizes the related work, and Section~\ref{sec:conclusion}
concludes.

