\section{\large Evaluation}
\label{sec:evaluation}

We implemented a prototype tool and performed an empirical study on a number of real-world subjects. The prototype takes
as \textbf{inputs} the source code of a program and a set of passing and failing executions.
It first instruments the program, collects execution information, and performs
model inference. The probabilistic model then ranks
the executed statements by the likelihood that they are buggy, and \textbf{outputs}
a ranked list of suspicious statements that may contain bugs.

\subsection{Subjects}

We evaluated our prototype tool on the well-known Siemens benchmarks~\cite{doESE05}. The Siemens benchmarks
consist of a set of programs commonly used to measure the effectiveness of fault-localization techniques.
Table~\ref{table:subjects} shows the characteristics of the seven Siemens programs. The Siemens faulty versions contain
operator and operand mutations, missing and extraneous code, and constant-value mutations. Most faulty
versions have only one faulty statement, but some are seeded with several faults in different
statements. We excluded a few faulty versions for the following two reasons: (1) they did not
produce any failing runs with the provided test cases; or (2) they caused infinite loops, which
prevented coverage information from being collected.


\begin{table}
\centering
\begin{tabular}{|c|c|c|c|c|}
    \hline
Program &  Lines of Code    &  Number of Methods &  Number of Faulty Versions  & Number of Tests   \\
    \hline
    \hline
 tcas  & 138 & 9 & 41 &  1608 \\
 replace & 516 & 21 & 31  & 5542\\
 printtoken & 402 & 18 & 7 & 4130 \\
 printtoken2   & 483 & 19 & 9  & 4115 \\
 schedule  & 299 & 18 & 9  & 2650 \\
 schedule2  & 297 & 16 & 9  & 2710\\
 totinfo & 346 & 7 & 23  & 1052 \\
    \hline
\end{tabular}
\caption{The Siemens benchmark programs. Column ``Lines of Code'' gives the
number of lines of C code (including blank and comment lines). Column ``Number of Methods'' gives
the number of C procedures. For each Siemens program, all its faulty versions use the same test suite.}
\label{table:subjects}
\end{table}

\subsection{Evaluation Procedure}

In our evaluation, we primarily investigate the following research questions:

\begin{itemize}
\item \textbf{Accuracy in bug localization}.  Our tool outputs a ranked list of suspicious statements
that may contain bugs. We use the following metric to evaluate its accuracy: we assign each output a score
equal to the percentage of statements executed by failing executions
that \textit{need to be examined} when the statements are examined in rank order. Specifically, suppose
our tool outputs a ranked list of statements $S$, the actual buggy statement occurs at rank $r$, and
a total of $|S_{f}|$ statements are executed by failing executions. The score of
our output $S$ is then defined as:

\[
score(S) = \frac{r}{|S_{f}|} \times 100\%
\]

A lower score indicates that the ranked output $S$ is more accurate, because it means that
a large portion of the statements executed by failing runs can be skipped before the buggy statement is localized.

\item \textbf{Comparison with existing approaches}. To validate the effectiveness of our model, we also
compare it with a well-established approach called Tarantula~\cite{Jones02visualizationof}. Tarantula simply counts
the number of passing and failing executions that exercise each statement, and heuristically ranks the statements
by suspiciousness. Tarantula has proven to be quite effective in practice. In this experiment,
we use the $score$ metric to compare our technique with Tarantula.

\end{itemize}
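As a concrete illustration, the two metrics above can be sketched in Python. This is a minimal sketch, not the prototype's actual code: the statement names and coverage counts are invented, and the Tarantula suspiciousness follows its standard published definition.

\begin{verbatim}
# score(S): percentage of failing-covered statements examined
# before the bug, inspecting statements in rank order.
def score(ranked, buggy_stmt, executed_by_failing):
    r = ranked.index(buggy_stmt) + 1        # 1-based rank of the bug
    return 100.0 * r / len(executed_by_failing)

# Tarantula suspiciousness of one statement:
# (%failing tests covering it) /
# (%failing covering + %passing covering).
def tarantula(failed_cov, passed_cov, total_failed, total_passed):
    f = failed_cov / total_failed if total_failed else 0.0
    p = passed_cov / total_passed if total_passed else 0.0
    return f / (f + p) if (f + p) > 0 else 0.0

# Toy example: 3 statements, coverage from 10 failing / 90 passing
# tests, given as (failed_cov, passed_cov) per statement.
counts = {"s1": (10, 30), "s2": (4, 60), "s3": (9, 5)}
susp = {s: tarantula(fc, pc, 10, 90) for s, (fc, pc) in counts.items()}
ranked = sorted(susp, key=susp.get, reverse=True)
\end{verbatim}

In this toy example the statement covered by almost all failing runs but few passing runs ranks first, and its score is $100\% \times 1/3$ since one of three failing-covered statements must be examined.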


\subsection{Preliminary Results}
Figures \ref{fig:p1} to \ref{fig:p6} show our preliminary results
of using MLN to localize bugs in the subjects
of Table~\ref{table:subjects}.\footnote{The printtoken2 subject is a variant
of printtoken, and their results are very similar. Due to space limits,
we omit its result for brevity.} In Figures \ref{fig:p1} to \ref{fig:p6}, the
horizontal axis represents the top $K$ suspicious statements
returned by our algorithm, and the vertical axis
represents the percentage of buggy statements they cover. The dashed curve is
the result of Tarantula~\cite{Jones02visualizationof}, and the solid
curve is the result of our MLN-based bug-localization system.

Figures \ref{fig:p1} to \ref{fig:p6} clearly indicate that using
Markov logic for joint inference is useful for bug localization.
For five out of six programs, our system significantly improves the
effectiveness of bug identification. The improvement arises
because MLN permits us to use joint evidence and prior knowledge
to better characterize bug locations and symptoms; such evidence
is often more informative than the single heuristic used by Tarantula~\cite{Jones02visualizationof}.



\begin{figure*}[t]
\begin{minipage}[b]{0.33\linewidth}
\centering
\includegraphics[scale=0.4]{p1}
\caption{tcas} \label{fig:p1}
\end{minipage}
\hspace{0.5cm}
\begin{minipage}[b]{0.33\linewidth}
\centering
\includegraphics[scale=0.4]{p2}
\caption{replace} \label{fig:p2}
\end{minipage}
\begin{minipage}[b]{0.33\linewidth}
\centering
\includegraphics[scale=0.4]{p3}
\caption{schedule} \label{fig:p3}
\end{minipage}
\end{figure*}
\begin{figure*}[t]
\begin{minipage}[b]{0.33\linewidth}
\centering
\includegraphics[scale=0.4]{p4}
\caption{schedule2} \label{fig:p4}
\end{minipage}
\hspace{0.5cm}
\begin{minipage}[b]{0.33\linewidth}
\centering
\includegraphics[scale=0.4]{p5}
\caption{printtokens} \label{fig:p5}
\end{minipage}
\begin{minipage}[b]{0.33\linewidth}
\centering
\includegraphics[scale=0.4]{p7}
\caption{totinfo} \label{fig:p6}
\end{minipage}
\end{figure*}


\subsubsection{Discussion}
One may wonder why the MLN-based approach does not achieve a better
result on the printtokens program. This is due to the characteristics
of the bugs in that program: figuring out the specific buggy
lines requires a deep understanding of the code. For example, one
bug in printtokens mutates an integer constant from 80 to 10.
Such declaration-related bugs are often more difficult to
localize, since a declaration statement cannot be directly executed;
it has only implicit dependences
on other covered statements.  However, for the other subjects, Markov logic
has already shown its potential: it serves as an interface layer that
permits researchers to encode their insights about possible bug locations
as first-order logic formulas, and leverages the Markov logic network to infer
the buggy locations efficiently.
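As an illustration of such an encoding (a hypothetical rule for exposition, not necessarily one of the formulas our system actually uses), the insight ``a statement covered by a failing test is likely buggy'' can be written as a weighted first-order formula:

\[
w \;:\; \mathit{Covered}(s, t) \wedge \mathit{Failed}(t) \Rightarrow \mathit{Buggy}(s)
\]

Here the weight $w$ expresses how strongly the rule holds: the MLN assigns higher probability to worlds that satisfy more groundings of high-weight formulas, so evidence from many such soft rules is combined jointly rather than through a single fixed heuristic.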

