\section{Experiments}

\subsection{System Implementation}

We implemented an intelligent debugging system, \textsc{MLNDebugger},
and evaluated it on 4 real-world programs.
\textsc{MLNDebugger} is built on top of the
Alchemy framework\footnote{\url{http://alchemy.cs.washington.edu/}}. It takes as \textbf{inputs} the program source code and a set of
passed and failed tests, and works in a fully automatic way. It first instruments the program to
collect statement coverage information by executing
the associated tests. Then, it uses standard
data- and control-flow analysis algorithms~\cite{Nielson:1999:PPA}
to statically compute inter-statement relationships.
After that, \textsc{MLNDebugger} learns a probabilistic
Markov logic model, performs inference to rank each statement
according to its likelihood of being buggy, and finally
\textbf{outputs} a ranked list of suspicious statements that may contain bugs.
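To illustrate the first step, the coverage-collection idea can be sketched in Python (a hypothetical analogue for exposition only; \textsc{MLNDebugger} itself instruments C programs and is not implemented this way):

```python
import sys

def run_with_coverage(func, *args):
    """Execute func under a line tracer, recording which of its
    source lines are hit -- one row of the coverage matrix per test."""
    covered = set()

    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is func.__code__:
            covered.add(frame.f_lineno)
        return tracer

    sys.settrace(tracer)
    try:
        result = func(*args)
    finally:
        sys.settrace(None)  # always detach the tracer
    return result, covered

def buggy_max(x, y):
    if x > y:
        return x
    return y - 1  # seeded bug: should be `return y`

# A passed test exercises the correct branch; a failed test hits the bug,
# so the two runs yield different coverage rows.
ok, cov_pass = run_with_coverage(buggy_max, 5, 2)
bad, cov_fail = run_with_coverage(buggy_max, 2, 5)
```

Comparing the coverage rows of passed and failed runs is exactly the evidence the later inference step consumes.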




\subsection{Subjects}

\begin{table}
\centering
\begin{tabular}{|c|c|c|c|c|}
    \hline
Program &  LOC    &  \# Methods &  \# Versions  & \# Tests   \\
    \hline
    \hline
 replace & 562 & 21 & 31  & 5542\\
 schedule  & 365 & 18 & 9  & 2650 \\
 tcas & 173 & 9 & 41  & 1608\\
 printtokens & 562& 18 & 7 & 4130 \\
    \hline
\end{tabular}
\caption{The 4 programs used to evaluate our \textsc{MLNDebugger}
system. Column ``LOC'' represents the
number of lines of code. Column ``\# Methods'' represents
the number of methods in each subject. Column ``\# Versions''
represents the number of buggy versions. Column ``\# Tests''
represents the number of available tests.
For each subject, all its buggy versions use the same tests.}
\label{table:subjects}
\end{table}


We evaluated \textsc{MLNDebugger} on the well-known Siemens suite~\cite{doESE05}. The Siemens suite consists of a set of real-world programs commonly used to
measure the effectiveness of bug localization approaches, and it has been widely
used in the software engineering community. As indicated on its website\footnote{\url{http://sir.unl.edu}}, the Siemens suite has been used by at least 380 institutions
worldwide and cited by over 250 published research papers.
Table~\ref{table:subjects} summarizes the characteristics of the programs
from the Siemens suite. Each program has a number of buggy versions, which contain
typical software bugs such as
operator and operand mutations, missing and extraneous code, and constant value
mutations. Each buggy version contains one bug, but may have multiple
buggy statements that should be localized.
Each program is also associated with a
comprehensive test suite, consisting of a number of passed and failed tests
for the buggy versions.
We excluded a few programs and their faulty versions for the following three reasons: (1) they did not
produce any failing runs from the provided test cases; (2) they did not
have enough buggy versions for experiments; and
(3) some buggy versions caused infinite loops, which prevented coverage information from being collected.





\subsection{Methodology}

\noindent \textbf{Experiment steps and environment.}
For each program, we randomly split its buggy versions into three subsets,
distributed as evenly as possible. We then used
two subsets for learning the Markov logic model and the
remaining subset as test data. Our system was run on each
test buggy version separately using the learned model, and
the output ranked statement list was manually checked to
determine its correctness.
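The split described above can be sketched as follows (an illustrative Python fragment with hypothetical identifiers, not part of \textsc{MLNDebugger}):

```python
import random

def split_three_ways(versions, seed=0):
    """Shuffle the buggy versions and deal them into three
    near-even folds; two folds train the model, one fold tests it."""
    rng = random.Random(seed)
    shuffled = list(versions)
    rng.shuffle(shuffled)
    return [shuffled[i::3] for i in range(3)]

folds = split_three_ways(range(9))
train = folds[0] + folds[1]   # learn the Markov logic model on these
test = folds[2]               # evaluate on the held-out fold
```

Dealing the shuffled versions round-robin (`[i::3]`) guarantees the fold sizes differ by at most one, which matches the "as evenly as possible" requirement.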


Our experiments were conducted on a 2.67GHz Intel Core PC with 12GB of
physical memory (3GB allocated to the JVM), running Ubuntu 10.04
LTS.


\noindent \textbf{Accuracy measurement.}
We use the following metric to evaluate a bug localization system's accuracy:
we assign a score to the \textit{top} $k$ ranked statements equal to the percentage
of all actual buggy statements they contain. Specifically, suppose
our tool outputs a ranked statement list $S$, $bug(S)$ is the set
of actual buggy statements covered by
$S$, and $bugs_{total}$ is the total number of actual buggy statements.
The score of $S$ is then defined as:


\[
scores(S) = \frac{|bug(S)|}{bugs_{total}} \times 100\%
\]

A higher score indicates that the output ranked statement list $S$ is more accurate,
since it reflects that more irrelevant statements in the program are ignored before the buggy statements are localized.
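As a concrete illustration (a hypothetical sketch, not the evaluation code we used), the metric can be computed as:

```python
def score(ranked, buggy_statements, k):
    """Score of the top-k ranked statements: the percentage of all
    actual buggy statements that appear among them."""
    covered = set(ranked[:k]) & set(buggy_statements)
    return 100.0 * len(covered) / len(buggy_statements)

# Two of the four buggy statements appear in the top 3, giving 50%.
s = score(["s7", "s2", "s9", "s1"], ["s2", "s9", "s4", "s6"], k=3)
```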


\vspace{1mm}

\noindent \textbf{Baseline for comparison.}
We choose the well-established \textsc{Tarantula} approach~\cite{Jones02visualizationof} as
the baseline for comparison. As reported in a comprehensive
study~\cite{Jones:2005}, \textsc{Tarantula}
is the \textit{most} effective approach for localizing software bugs when
compared to other related approaches~\cite{Cleve:2005,reiss:2003},
even though it uses only statement coverage information.

\textsc{Tarantula} uses a well-designed heuristic to infer likely
buggy statements from the statement coverage evidence.
It counts the number of passed tests and failed tests 
that cover a statement, and ranks a statement's
suspiciousness heuristically as follows: 

\[\frac{\%fail(s)}{\%fail(s)+\%pass(s)}\]

where $\%fail(s)$ is the ratio of the number of failed tests
that cover statement $s$ to the total number of failed tests
in the test suite. Likewise, $\%pass(s)$
is the ratio of the number of passed tests
that cover statement $s$ to the total number of passed tests
in the test suite.
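For concreteness, the heuristic can be sketched in a few lines of Python (an illustrative fragment; the original \textsc{Tarantula} is implemented in Java):

```python
def tarantula_suspiciousness(fail_cov, pass_cov, total_fail, total_pass):
    """Suspiciousness of a statement from its coverage counts:
    %fail / (%fail + %pass), following the formula above."""
    pct_fail = fail_cov / total_fail
    pct_pass = pass_cov / total_pass
    if pct_fail + pct_pass == 0.0:
        return 0.0  # statement covered by no test at all
    return pct_fail / (pct_fail + pct_pass)

# Covered by every failed test but no passed test: maximally suspicious.
hot = tarantula_suspiciousness(3, 0, 3, 10)
# Covered equally often by failed and passed tests: neutral (0.5).
warm = tarantula_suspiciousness(1, 2, 2, 4)
```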





\subsection{Results}

Figures \ref{fig:p2} to \ref{fig:p5} show the results
of using \textsc{MLNDebugger} to localize bugs in the Siemens suite.
In each figure, the dashed curve is the result of
\textsc{Tarantula}~\cite{Jones02visualizationof}, and the solid
curve is the result of our \textsc{MLNDebugger} system.

Figures \ref{fig:p2} to \ref{fig:p5} clearly indicate that
\textsc{MLNDebugger} substantially outperforms the \textsc{Tarantula}
approach on average. Specifically, for three of the four programs (tcas,
replace, and schedule), \textsc{MLNDebugger} achieves much
higher accuracy (XXXX). For the remaining printtokens program, \textsc{MLNDebugger}
achieves the same accuracy as \textsc{Tarantula}.

The experimental results also demonstrate that joint inference
consistently outperforms isolated inference. Our Markov logic-based
bug localization model allows different sources of information
(e.g., statement coverage, static program structure, and
prior bug knowledge) to resolve one another's uncertainty.
This crosstalk between information sources is the key reason
for achieving better results than the single heuristic that \textsc{Tarantula}
uses~\cite{Jones02visualizationof}.

\textsc{MLNDebugger} tied with \textsc{Tarantula} on the
printtokens program due to the characteristics of its bugs.
In printtokens, most of the bugs occur in constant
declaration statements.
However, after a C program is compiled into bytecode, such
constant declaration statements are erased by the
compiler. Thus, the standard data flow analysis algorithm
we use cannot collect the corresponding
statement coverage information correctly, and \textsc{MLNDebugger}
falls back to using isolated information. This is
not a limitation of our Markov logic model; rather, this
result reflects the necessity
of performing joint inference in the software bug localization task.
As future work, we plan to use a more precise instrumentation technique,
such as~\cite{Ball:1996} from the software engineering community,
to remedy this problem.




\begin{figure*}[t]
%\hspace{0.5cm}
\begin{minipage}[b]{0.23\linewidth}
\centering
\includegraphics[scale=0.3]{p2}
\caption{replace} \label{fig:p2}
\end{minipage}
\begin{minipage}[b]{0.23\linewidth}
\centering
\includegraphics[scale=0.3]{p3}
\caption{schedule} \label{fig:p3}
\end{minipage}
\begin{minipage}[b]{0.23\linewidth}
\centering
\includegraphics[scale=0.3]{p1}
\caption{tcas} \label{fig:p1}
\end{minipage}
%\hspace{0.5cm}
\begin{minipage}[b]{0.23\linewidth}
\centering
\includegraphics[scale=0.3]{p5}
\caption{printtokens} \label{fig:p5}
\end{minipage}
\caption{Results of using our \textsc{MLNDebugger} system to localize
bugs in 4 real-world programs. In each figure, the
horizontal axis represents the top $k$ suspicious statements
returned by a bug localization system, and the vertical axis
represents the percentage of actual buggy statements they cover (i.e.,
the $scores$ value of the corresponding returned statement list).
The dashed curve is the result of
\textsc{Tarantula}~\cite{Jones02visualizationof} \textit{averaged} across all tested
buggy versions, and the solid curve is the result of
\textsc{MLNDebugger} \textit{averaged} across the same tested
buggy versions. Our \textsc{MLNDebugger} outperforms
\textsc{Tarantula} on three out of four programs, and ties
with \textsc{Tarantula} on the printtokens program.}
\label{fig:results}
\end{figure*}



\subsection{Discussion}

We next discuss several important issues in our experiments.


\noindent \textbf{Time and memory cost.}
\textsc{MLNDebugger} runs in a practical amount of time. For each program evaluated, \textsc{MLNDebugger}
took less than 200 seconds, with 200MB of memory consumption, to output its results.
This makes \textsc{MLNDebugger} usable in real software
development scenarios.
We have not tuned its performance, and we believe
it could be improved further.


\noindent \textbf{System extensibility.}
The current implementation of \textsc{MLNDebugger}
supports a few salient program features.
However, our system is highly extensible.
It is built on top of the Alchemy framework, which permits users
to define new Markov logic rules and plug them into
the system with the corresponding data, or to tailor existing rules for specific usage scenarios.
Thus, building and extending \textsc{MLNDebugger} requires substantially less
engineering effort than previous approaches that
hardcode isolated information. For example, the
\textsc{Tarantula} implementation consists of over 3000 lines of Java code,
ten times larger than the code needed to
add the same statement coverage rules to \textsc{MLNDebugger}.


