\section{Bug Localization Approach}

Markov logic~\cite{Richardson06markovlogic} provides an ideal language for representing and reasoning
with complex probabilistic representations. In this section, we describe how to apply it
to the problem of software bug localization.
We first encode the problem as a Markov logic network (MLN) model in which
each predicate serves as a bug predictor. We then
perform inference on the model to localize bugs.



\subsection{Programming Model and Program Features}

Before presenting our approach in detail,
we briefly introduce the
programming model and the salient program features used
in this paper.

\vspace{1mm}

\noindent \textbf{Programming Model.} At a high level, a software system $S$ consists of a sequence of statements $stmts$; each
statement in $stmts$ takes a set of input variables, performs certain
computation, and assigns the computed values to its output variables.
A bug occurs when a statement in $stmts$ assigns an incorrect value to one
of its output variables, and that incorrect value propagates through
other statements until certain buggy behavior
is observed.

In practice, not all statements
are executed when a bug occurs, and a single statement can be executed
multiple times (e.g., in a loop) before a bug becomes apparent. The
goal of our approach is to identify the \textit{root} statements
of the observed bug, rather than all statements that
have been exercised in a failed execution.

\vspace{1mm}

\noindent \textbf{Program Features.} A large volume of data
is produced during the software development process, and it
can be abstracted into various features to facilitate the bug
localization task. To make our approach generally applicable,
we carefully select a few salient features that can be easily
obtained in practice.

\textbf{1. Statement coverage.} Software testing is an integral part of
modern software development. The statement coverage information
collected from passed and failed test executions reflects a software system's
dynamic behavior.

Formally, given a test suite $T$ for a software system $S$, a test $t$
in $T$ exercises a subset of $stmts$ and produces a test execution
result (pass or fail) together with code coverage information.
A test $t$ passes if the actual output of executing $S$ with
$t$ matches the expected output for $t$; otherwise, $t$
fails.
The test suite $T$ may contain one or more failed tests, indicating
potential bugs in the tested software.

Thus, whether a statement $s$ is covered by failed tests (or passed tests)
can be used as a program feature.



\textbf{2. Program structure.}
Statement coverage information reflects the \textit{dynamic}
behavior of a software system at runtime. From a
\textit{static} viewpoint, on the other hand, each statement may also
depend on other statements via control-flow or data-flow
dependence.

We further explore such static program structure
to obtain inter-statement dependence relationships
as program features.


\textbf{3. Prior bug knowledge.} Previous research~\cite{Nguyen:2010} from
the software engineering
community confirms the existence of \textit{recurring bugs}. Bugs
repeatedly found in the same software system as it evolves
can be similar to one another. For a new bug, it is likely that
a similar bug has been confirmed or even fixed in the past, so
knowledge about previous bugs can be used to localize
new bugs. Our approach extracts three common beliefs
about recurring bugs as program features and uses them
as priors.





\subsection{Modelling Buggy Behavior with Markov Logic}

With an MLN, we can encode program features into a joint inference
system and conduct inference efficiently. The model is composed
of first-order logic formulas over predicates and their negations.
We assign each statement a predicate $s_i$; $s_i=true$
indicates that the $i$-th statement is buggy. An
assignment to the predicates is therefore a \textit{state} revealing
the bugs in a software system. The goal of software bug localization is
to estimate the distribution $p(s_1,s_2,\ldots,s_{|S|})$
and then rank statements by their probability of being buggy.

We next encode the three categories of program features
presented in the previous section as first-order logic
formulas to reason about buggy program behaviors.


\textbf{Encoding statement coverage.}
We use a variable $t_k$ to indicate a test: $t_k$ is true
(i.e., $t_k=true$) when the test fails, and false otherwise.

Intuitively, if a statement $s_i$ is covered by failed
tests, it is more likely to be buggy than statements that are not.
Conversely, if $s_i$ is covered by passed tests,
it becomes less suspicious.
We express these two observations as follows:

\begin{equation}
t_k \wedge \mathrm{cover}(t_k,s_i)\Rightarrow s_i
\end{equation}

\begin{equation}
\neg t_k \wedge \mathrm{cover}(t_k,s_i)\Rightarrow \neg s_i
\end{equation}
where $\mathrm{cover}$($t_k$, $s_i$) is a predicate that
returns true if test $t_k$ covers statement $s_i$.
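As a concrete illustration, grounding these two coverage formulas over a test suite can be sketched as follows. This is a hypothetical sketch, not our implementation; the names \texttt{ground\_coverage\_clauses}, \texttt{tests}, and \texttt{coverage} are illustrative assumptions.

```python
# Hypothetical sketch: grounding the two coverage formulas.
# `tests` maps a test id to True iff the test FAILED (t_k = true);
# `coverage` maps a test id to the set of statement ids it covers.

def ground_coverage_clauses(tests, coverage):
    """Return one ground clause per (t_k, s_i) pair with cover(t_k, s_i).

    A clause is (statement_id, implied_truth_value): a failed test
    implies s_i (suspicious), a passed test implies not-s_i (exonerating).
    """
    clauses = []
    for t, failed in tests.items():
        for s in coverage.get(t, ()):
            # failed test covering s_i  => s_i   (raises suspicion)
            # passed test covering s_i  => ~s_i  (lowers suspicion)
            clauses.append((s, failed))
    return clauses
```

Each ground clause later receives a weight, so coverage by many failed tests accumulates evidence against a statement.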


\textbf{Encoding program structure. } Additional evidence
can be gained by exploring a program's static structure.
Specifically, for two statements $s_i$ and $s_j$, if $s_i$
assigns a \textit{wrong} value to its output variable and $s_j$
uses that variable as its input (i.e., $s_j$ has \textit{data dependence}
on $s_i$), then statement $s_j$ should not be regarded as buggy,
even if it produces an incorrect output.
We encode such inter-statement \textit{data dependence} relationships as follows:
\begin{equation}
 s_i \wedge \mathrm{dataDep}(s_i, s_j)
\Rightarrow \neg s_j
\end{equation}

where $\mathrm{dataDep}(s_i,s_j)$ is a predicate that returns
true if $s_j$ is data dependent on $s_i$.

Similarly, if a branching statement (i.e.,
an \texttt{if}, \texttt{else}, \texttt{while}, or \texttt{for} statement)
is buggy, the statements that have
control dependence on it should not be blamed for being
buggy, because the decision that makes the execution fail has already
been made in the branching statement before the dependent code is executed.
We encode such inter-statement \textit{control dependence} relationships as follows:



\begin{equation}
\label{eq_controldep}
s_i \wedge \mathrm{isBranch}(s_i) \wedge \mathrm{controlDep}(s_i,s_j) \Rightarrow \neg s_j
\end{equation}

where $\mathrm{isBranch}(s_i)$ is a predicate that returns
true if $s_i$ is a branching statement and
$\mathrm{controlDep}(s_i,s_j)$ is a predicate that returns
true if $s_j$ has \textit{control dependence} on $s_i$.
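Following the stated intuition, the two dependence formulas act as exoneration rules: a buggy statement clears the statements that depend on it. The sketch below is illustrative only; the names \texttt{exonerated\_by}, \texttt{data\_dep}, \texttt{control\_dep}, and \texttt{is\_branch} are assumptions, not our implementation.

```python
# Illustrative sketch of the dependence formulas as exoneration rules.
# data_dep / control_dep contain pairs (i, j) meaning s_j depends on s_i;
# is_branch maps a statement id to True if it is a branching statement.

def exonerated_by(suspect, data_dep, control_dep, is_branch):
    """Statements that should NOT be blamed if `suspect` is the buggy one."""
    cleared = set()
    for (i, j) in data_dep:
        if i == suspect:                     # s_i ^ dataDep(s_i,s_j) => ~s_j
            cleared.add(j)
    for (i, j) in control_dep:
        if i == suspect and is_branch.get(i):
            cleared.add(j)                   # buggy branch clears dependents
    return cleared
```

In the full MLN these rules are soft (weighted) rather than hard, so exoneration lowers a statement's probability instead of forcing it to zero.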


\vspace{1mm}

\textbf{Encoding prior bug knowledge. }
Our approach also incorporates prior knowledge by encoding
three common observations as priors in our MLN model.
First, statements that involve pointer dereference
operations are more likely to account for pointer-related bugs (e.g.,
a \CodeIn{NullPointer} exception). Second,
statements that involve array accessing operations are more likely to
account for array-related bugs (e.g., an \CodeIn{ArrayOutOfBoundary}
exception). Third, statements that contain more than three variables
are more likely to be buggy than simpler statements.
In our system implementation,
this set of prior bug knowledge can be further enriched by users,
although our experiments did not use this capability.


\subsection{Inference and Learning}
\label{sec_mln_inference}

Although many existing advanced
algorithms~\cite{Domingos04markovlogic} can be used to
train the underlying Markov model, our approach uses
the following simple and effective algorithms because of the characteristics
of our problem domain.


We observe that in modern software development, programmers often
run the regression test suite after a small amount of changes
is made. When a test fails (i.e., a new bug is introduced),
programmers usually fix it before implementing new features.
Thus, in practice, it is possible but unlikely that multiple bugs
are introduced simultaneously. Under the assumption that
there is only one bug in the tested program, exactly one statement
predicate is true in any assignment. In other words, we only need to infer the
probability of the assignments whose cardinality is $1$, i.e.,

\[P(s_i=true,\ s_j=false \mbox{ for all } j\neq i).\]

The search space for inferring the Markov logic model therefore becomes linear,
rather than exponential, in the number of statements.
This permits us to perform learning and inference
efficiently even with an exhaustive algorithm.
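The exhaustive cardinality-1 inference can be sketched as follows: each of the $|S|$ candidate assignments is scored by the total weight of the ground clauses it satisfies. This is a hedged illustration under the single-bug assumption; the names \texttt{rank\_single\_bug} and \texttt{weighted\_clauses} are hypothetical.

```python
# Illustrative sketch of exhaustive single-bug inference.
# weighted_clauses is a list of (statement_id, implied_truth, weight):
# (s, True, w) asserts "s is buggy"; (s, False, w) asserts "s is not buggy".

def rank_single_bug(statements, weighted_clauses):
    """Score each cardinality-1 assignment (exactly `cand` is buggy)
    by the total weight of satisfied clauses; return statements
    sorted from most to least suspicious."""
    scores = {}
    for cand in statements:
        total = 0.0
        for (s, implied_true, w) in weighted_clauses:
            # Clause (s, True) holds iff the candidate bug IS s;
            # clause (s, False) holds iff the candidate bug is NOT s.
            holds = (s == cand) if implied_true else (s != cand)
            if holds:
                total += w
        scores[cand] = total
    return sorted(statements, key=lambda s: -scores[s])
```

Because only $|S|$ assignments are scored, the loop is linear in the number of statements, exactly as argued above.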


To learn the weight for each formula, we adopt two different
strategies: (1) for the statement coverage evidence, we set
the weight for a statement $s$ to the ratio of failed tests
among the tests covering $s$:
\[\frac{\#fail(s)}{\#fail(s)+\#pass(s)}\]
where $\#fail(s)$ is the number of failed tests covering $s$ and
$\#pass(s)$ is the number of passed tests covering $s$.
The intuition is that a statement
is more likely to be buggy if it is covered by more failed tests.
(2) For the program structure and prior formulas,
we use a unified weight $\lambda$ and exhaustively
search for the best $\lambda$ among the value set
$\{0,0.001,0.01,0.1,1,10\}$.
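The two weighting strategies are small enough to sketch directly. The functions below are illustrative, not our implementation; in particular, \texttt{evaluate} stands for a hypothetical validation callback (e.g., the rank achieved by the known bug on held-out data).

```python
# Illustrative sketch of the two weight-learning strategies.

def coverage_weight(fail_count, pass_count):
    """Weight of a coverage formula for a statement:
    #fail(s) / (#fail(s) + #pass(s)), i.e., the fraction of
    covering tests that fail."""
    total = fail_count + pass_count
    return fail_count / total if total else 0.0

def best_lambda(evaluate, grid=(0, 0.001, 0.01, 0.1, 1, 10)):
    """Exhaustive search over the candidate set from the text:
    keep the unified weight with the highest validation score.
    `evaluate` is a hypothetical scoring callback."""
    return max(grid, key=evaluate)
```

Because the grid has only six candidates and inference is linear in the number of statements, the exhaustive search remains cheap.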


Using the above inference and learning strategies,
we demonstrate in the experiment section that our
approach is sufficiently accurate: it significantly outperforms
the baseline algorithm~\cite{Jones02visualizationof}.


