\section{\large Conclusion and Future Work}
\label{sec:conclusion}

The lack of an interface layer for many software engineering tasks has become
a persistent problem in the community: techniques are defined in an ad-hoc
way without a fair basis for evaluation and comparison, and researchers often have to
wrestle with unnecessary low-level details to manually recover high-level information. This
report argues that such a unified interface layer must be established to push the frontier
of software engineering research.

As a first step, this report proposes a unified framework for \textit{testing-based}
fault localization techniques -- an important subfield in software engineering research.
The proposed framework, based on \textit{Markov Logic Networks}, is expressive enough
to capture many existing techniques. Furthermore, using the proposed framework, we instantiate
a new fault localization technique and empirically demonstrate its effectiveness on a set
of real-world programs. In summary, we provide the following evidence:

\begin{itemize}
\item The MLN-based framework is expressive and powerful.
It is a good candidate for an interface layer that unifies testing-based fault localization
techniques.
 
\item The MLN-based framework is easy to use. With moderate effort, users can focus on
high-level ideas, concisely encode new techniques as first-order formulas,
and leverage existing learning and inference algorithms.

\item MLN has the potential to serve as an interface layer for other software engineering tasks,
such as change analysis~\cite{Kim2008:TSE} and software anomaly prediction~\cite{diduce}.

\end{itemize}
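To make the second point concrete, here is a minimal sketch of how a classic coverage-based
intuition might be encoded in our framework. The predicate names are illustrative rather than
the exact ones used in our implementation: $\mathit{Covered}(t,s)$ states that test $t$ executes
statement $s$, $\mathit{Failed}(t)$ that test $t$ fails, and $\mathit{Faulty}(s)$ that statement
$s$ is faulty. Each formula carries a weight that can be learned from data rather than tuned by hand:
\begin{align*}
w_1 &:\quad \mathit{Covered}(t,s) \wedge \mathit{Failed}(t) \Rightarrow \mathit{Faulty}(s)\\
w_2 &:\quad \mathit{Covered}(t,s) \wedge \neg\,\mathit{Failed}(t) \Rightarrow \neg\,\mathit{Faulty}(s)
\end{align*}
Because the formulas are soft constraints, contradictory evidence (a statement covered by both
passing and failing tests) is resolved by probabilistic inference rather than by an ad-hoc
scoring rule.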


\subsection{Future work} Besides general issues such as performance and ease of use, our future
work will concentrate on the following topics:

\begin{itemize}

\item \textbf{Approximate vs. Exact Inference.} Our MLN-based probabilistic model supports
both exact and approximate inference. Since the theoretical time complexity of exact inference
can be prohibitively high, it may be too slow for large commercial software, which is much larger
than our experimental subjects. We therefore need to determine the effectiveness/efficiency trade-offs
of approximate inference. As part of our future work, we want to answer two questions: (1) how many
executions are sufficient to build an accurate probabilistic model? and
(2) can we use approximate inference for bug localization
at a lower cost while still achieving reasonably good results?

\item \textbf{Further evaluation.} The number of subject programs that can be used
for controlled experiments (i.e., with known defects, automated tests that reveal these defects, and
changes that fix the defects) is still limited. Although the subject programs used in this report
have been widely used by researchers in the software engineering community, they may not be representative
enough. Thus, we cannot claim that the results of this report extend to
arbitrary programs. As more such programs become available, we want to gather further experience.

\item \textbf{Further comparison.} In this report, we compared our unified framework with a
state-of-the-art approach called Tarantula~\cite{Jones02visualizationof}. Although Tarantula is an
influential, even de facto, testing-based automated debugging approach, it is still unknown whether the proposed
framework is expressive enough to represent \textit{all} testing-based fault localization techniques
in the literature, and whether it can outperform them. Conducting more comparison experiments is another piece
of our future work.

\item \textbf{Feature learning.} As shown in Section~\ref{sec:model}, our framework requires users
to manually encode their insights as a set of first-order formulas, such as the relations between data
flow dependence and bug locations. A natural question is whether we can leverage recent advances in
feature learning and structure learning to infer useful relations automatically, further improving
the effectiveness of fault localization techniques.

\item \textbf{Other applications of MLN in software engineering.} MLN has been widely and successfully
used in many fields, such as machine reading~\cite{Poon:2010}, unsupervised learning~\cite{poon2009},
structure learning~\cite{Kok:2005}, and entity resolution~\cite{TKDE2009, Singla:2006}. In many software
engineering tasks, the available data (such as code bases, execution traces, historical code changes,
mailing lists, and bug databases) contains a wealth of information about a project's status, progress,
and evolution. Using well-established machine learning techniques, practitioners and researchers can leverage
this valuable data to better manage their projects and to produce higher-quality
software systems delivered on time and on budget. We plan to investigate more applications of MLN
in software engineering to explore its potential for reasoning about complex software behaviors and producing more reliable software systems.

\end{itemize}
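For reference on the comparison baseline mentioned above: Tarantula ranks each statement $s$ by a
suspiciousness score, commonly formulated as
\[
\mathit{susp}(s) \;=\; \frac{\dfrac{\mathit{failed}(s)}{\mathit{totalfailed}}}
{\dfrac{\mathit{passed}(s)}{\mathit{totalpassed}} + \dfrac{\mathit{failed}(s)}{\mathit{totalfailed}}},
\]
where $\mathit{failed}(s)$ and $\mathit{passed}(s)$ count the failing and passing tests that execute
$s$, and $\mathit{totalfailed}$ and $\mathit{totalpassed}$ are the corresponding totals over the whole
test suite. Representing such ratio-based scores, alongside dependence-based techniques, is one
concrete test of expressiveness that our planned comparison experiments will exercise.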

