\section{\large Modeling Software Bug Localization with Markov Logic}
\label{sec:model}
%\subsubsection*{Modelling Buggy Behavior with Markov Logic Network}

We next describe how to use a Markov logic network (MLN)
~\cite{Richardson06markovlogic} as an interface layer to the problem
domain of bug localization, and instantiate a new automated bug
localization technique on top of this interface layer.
The key idea is to encode the problem as a multivariate MLN model in
which each predicate represents a bug prediction, and then to perform
inference on the model to localize the bug.

In Section \ref{sec_mln_model}, we
formalize the problem by expressing our preliminary observations in
Markov logic. We then discuss the inference and learning problems in
Section \ref{sec_mln_inference}.

\subsection{Model}
\label{sec_mln_model}

In order to identify the buggy statements in a program, we assign
each statement a predicate $s_i$, where $s_i=true$ indicates that the
statement is buggy.

Evidence for the value of each $s_i$ is collected from existing test
cases. We use the variable $t_k$ to denote a test case; $t_k=true$
when the test case fails. A test case consists of a sequence of
statement executions.

The optimization goal is to estimate the joint distribution over the
statement predicates, i.e.,
\[ p(s_1,s_2,\ldots,s_{|S|}), \]
and then to localize bugs using this distribution.

Program features and observations can be encoded as first-order logic
formulas. In this paper, we use three kinds of formulas: unit-test
evidence, statement relations, and prior knowledge.

\textbf{Unit-test evidence: }The basic evidence for test-case-based
bug localization comes from the execution of the unit tests. The
intuition is that if a statement frequently appears in failed
unit tests, the statement is suspicious. We can state this
observation as
\begin{equation}
t_k \wedge \mathrm{cover}(t_k,s_i)\rightarrow s_i
\end{equation}
where $\mathrm{cover}(t_k,s_i)$ is true when test case $t_k$ covers
statement $s_i$. We discuss the weights of these formulas in
Section \ref{sec_mln_inference}.
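To make this concrete, grounding the evidence amounts to emitting one ground clause per (failing test, covered statement) pair. The following sketch is illustrative: the coverage matrix, test outcomes, and the \texttt{ground\_evidence} helper are our own assumptions, not part of the model.

```python
# Ground the unit-test evidence: for every failing test t_k and every
# statement s_i it covers, emit a ground instance of the implication
# that raises s_i's suspiciousness.

def ground_evidence(coverage, outcomes):
    """coverage: {test: set of statement ids};
    outcomes: {test: True if the test FAILED}."""
    clauses = []
    for t, stmts in coverage.items():
        if outcomes[t]:                 # t_k = true: the test failed
            for s in stmts:
                clauses.append((t, s))  # ground instance (t_k, s_i)
    return clauses

coverage = {"t1": {1, 2, 3}, "t2": {2, 4}}
outcomes = {"t1": True, "t2": False}    # t1 failed, t2 passed
print(sorted(ground_evidence(coverage, outcomes)))
# only statements covered by the failing test t1 are implicated
```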

\textbf{Statement relations: }We explore a program's structure to
gain additional evidence. One intuition is that if a statement assigns a
wrong value to its output variable, then the following statements
that use that variable should not be blamed as buggy, even if
a wrong result is observed. We encode this \textit{data dependence}
as the following formula:
\begin{equation}
\forall i,j:\ s_i \wedge \mathrm{dataDep}(s_i, s_j)
\rightarrow \neg s_j
\end{equation}
where $\mathrm{dataDep}(s_i,s_j)$ is true when $s_j$ uses a
variable that was assigned by $s_i$ beforehand.


\vspace{1mm}

Similarly, if the predicate statement of an \texttt{if}, \texttt{else},
\texttt{while}, or \texttt{for} is buggy, all statements that are
control dependent on that predicate should not be blamed as
buggy, since the decision that makes the execution fail has already
been made before the code guarded by the predicate is executed.
We encode this \textit{control dependence} as the following formula:
\begin{equation}
\label{eq_controldep}
\forall i,j:\ s_i \wedge \mathrm{controlDep}(s_i,s_j) \rightarrow \neg s_j
\end{equation}
where $\mathrm{controlDep}(s_i,s_j)$ is true when $s_i$ is an
\texttt{if}, \texttt{else}, \texttt{while}, or \texttt{for}
statement, and the execution of $s_j$ is controlled by $s_i$ in the
test case.
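The two dependence relations can be pictured on a toy four-statement fragment. In the sketch below, the statement ids, edge sets, and the \texttt{exonerated} helper are illustrative assumptions, not the paper's implementation.

```python
# Toy fragment (statement ids are illustrative):
#   s1: x = compute()      s2: if x > 0:
#   s3:     y = x + 1      s4: print(y)
# dataDep(s1, s3): s3 reads x, which s1 assigned beforehand.
# controlDep(s2, s3): s3 executes only when the predicate at s2 holds.

data_dep = {(1, 3), (3, 4)}   # (definer, user) pairs
control_dep = {(2, 3)}        # (predicate stmt, controlled stmt)

def exonerated(buggy, stmt):
    """A statement reached from a buggy statement via a direct
    dependence edge should not itself be blamed."""
    return any((b, stmt) in data_dep or (b, stmt) in control_dep
               for b in buggy)

print(exonerated({1}, 3))  # s1 buggy and dataDep(s1, s3): s3 not blamed
```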

\vspace{1mm}

\textbf{Prior knowledge: }Bugs found repeatedly in the same software
system as it evolves can be similar to one another. For a new bug,
it is likely that a similar bug has been confirmed or even fixed in
the past. Thus, leveraging prior knowledge is extremely important
for automated bug localization. We reuse such prior knowledge in two
ways: first, the MLN weights can be learned from existing data;
second, a knowledge base can be established that gives extra weight
to certain program elements. For example, statements that
involve pointer dereferences, array accesses, or computations with more
than 3 variables are more likely to be buggy. Such rules can be
easily enriched by users (or even learned automatically) for their
specific projects.
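A minimal sketch of such a knowledge base is a table mapping syntactic features to extra weight; the rule names and weight values below are invented for illustration only.

```python
# Illustrative prior-knowledge rules: each maps a syntactic feature of a
# statement to extra weight added to that statement's prior suspiciousness.
PRIOR_RULES = {
    "pointer_dereference": 0.3,
    "array_access":        0.2,
    "many_variables":      0.1,  # computation with more than 3 variables
}

def prior_weight(features):
    """features: set of feature names detected for one statement."""
    return sum(w for f, w in PRIOR_RULES.items() if f in features)

print(round(prior_weight({"array_access", "many_variables"}), 2))  # 0.3
```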

%Based on the above simple but powerful formulas, this project
%will investigate new models and techniques for bug localization.

So far, we have sketched and instantiated a new testing-based bug
localization technique using the above simple but powerful formulas.
The underlying Markov logic model captures the potential bug
characteristics, can be trained from prior knowledge, and can be used
to localize new bugs.

\subsection{Inference and Learning}
\label{sec_mln_inference}

The goal of our inference algorithm is to identify a ranked list of
likely buggy statements, instead of pinpointing a single buggy
statement. The higher a statement ranks in the output, the more likely
it is buggy. Given the output list, a programmer can inspect the
statements from the top and check whether each is buggy.

We infer the probability of the assignments in which exactly one
statement predicate is true, i.e., we compute

\[P(s_i=true,\ s_j=false\ \ \forall j\neq i).\]


We assume that there is only one buggy statement, since in real
software development, programmers often run the whole test suite after
small changes are made. It is possible but unlikely that multiple bugs
(involving more than one statement) are introduced at the same time.
Under this assumption, the search space is linear in the number of
statements instead of exponentially large, which permits efficient
inference even with a brute-force algorithm.
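The brute-force enumeration can be sketched as follows; the log-linear scoring below is a standard simplification of MLN inference under our assumptions, not the exact procedure, and the function name and example weights are illustrative.

```python
# Brute-force inference under the single-bug assumption: enumerate the
# |S| assignments with exactly one s_i = true, score each assignment by
# the total weight of the formulas it satisfies (log-linear model), and
# rank statements by their normalized posterior.
import math

def rank_statements(weights):
    """weights[i] = total weight of evidence formulas supporting s_i.
    Returns statement ids sorted most-suspicious first."""
    z = sum(math.exp(w) for w in weights.values())   # partition function
    posterior = {s: math.exp(w) / z for s, w in weights.items()}
    return sorted(posterior, key=posterior.get, reverse=True)

print(rank_statements({1: 0.2, 2: 0.9, 3: 0.5}))  # [2, 3, 1]
```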

To set the weights of the formulas, we take two different
strategies: (1) for unit-test evidence formulas, we set the weight to
\[\frac{\#fail(s)}{\#fail(s)+\#pass(s)},\]
where $\#fail(s)$ is the number of failed unit tests covering $s$,
and $\#pass(s)$ is the number of passed unit tests covering $s$;
(2) for dependency and prior formulas, we use a single shared weight
$\lambda$, chosen by exhaustive search over the values
$\{0,0.001,0.01,0.1,1,10\}$.
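The unit-test weight can be computed directly from coverage data; the sketch below is a direct transcription of the fraction above, with illustrative coverage data and a helper name of our own choosing.

```python
# Evidence weight for statement s: fraction of failing tests among all
# tests that cover s, i.e.  #fail(s) / (#fail(s) + #pass(s)).

def evidence_weight(s, coverage, outcomes):
    """coverage: {test: set of statement ids};
    outcomes: {test: True if the test FAILED}."""
    covering = [t for t, stmts in coverage.items() if s in stmts]
    fail = sum(1 for t in covering if outcomes[t])
    return fail / len(covering) if covering else 0.0

coverage = {"t1": {1, 2}, "t2": {2}, "t3": {1, 2}}
outcomes = {"t1": True, "t2": False, "t3": False}  # only t1 failed
print(evidence_weight(1, coverage, outcomes))  # 1 failing of 2 covering
```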

In the following evaluation section, we demonstrate that this simple
inference and learning strategy is sufficiently precise: it
significantly improves on the performance of the baseline
algorithm~\cite{Jones02visualizationof}.
