
We implemented the \SimpleTest algorithm in a tool named
\TestSimplifier. \TestSimplifier takes as input a JUnit test with
a user-specified predicate in the form of a Java assertion. \TestSimplifier
parses the JUnit test, performs semantic test
simplification, and outputs a simplified test that still satisfies
the given predicate.
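For illustration, an input to \TestSimplifier might look like the following test (a hypothetical sketch, not taken from our subjects), where the \CodeIn{assert} statement encodes the user-specified predicate that the simplified test must still satisfy:

```java
import java.util.ArrayList;
import java.util.List;

public class SimplifierInput {
    // Hypothetical TestSimplifier input: setup statements followed by a
    // user-specified predicate in the form of a Java assertion. Statements
    // irrelevant to the predicate (such as building var1) are candidates
    // for removal during semantic simplification.
    public static boolean runTest() {
        List<String> var1 = new ArrayList<>();
        var1.add("irrelevant");           // does not affect the predicate
        List<String> var2 = new ArrayList<>();
        boolean var3 = var2.isEmpty();
        assert var3;                      // the user-specified predicate
        return var3;
    }
}
```

Semantic simplification would keep only the statements the predicate depends on (here, the construction of \CodeIn{var2}).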

We evaluated \TestSimplifier in two ways:

\begin{itemize}

\item We used \TestSimplifier to simplify \testnum failing unit tests
from 7 real-world programs with up to 150KLOC. The tests were
generated by Randoop~\cite{randoopicse}, a state-of-the-art automated
test generation tool, and each reveals a real bug. We compared
\TestSimplifier's effectiveness with four existing test simplification
techniques, namely, static slicing~\cite{},
dynamic slicing~\cite{}, delta debugging~\cite{}, and the combination
of delta debugging and static slicing~\cite{}. This evaluation
is described in Section~\ref{sec:result}.


\item We applied \TestSimplifier to the application domain of
testing-based automated debugging, and investigated how useful
semantically-simplified tests are in defect localization. We chose two
existing techniques with rather different goals, Tarantula~\cite{} and \FailureDoc~\cite{},
to show the usefulness of semantic test simplification. We
demonstrate that semantically-simplified tests improve
these two automated debugging techniques more than
un-simplified tests and tests syntactically simplified by existing
approaches. This evaluation is described in Sections~\ref{sec:tarantual}
and~\ref{sec:autodoc}.

\end{itemize}

\begin{table}[t]
\footnotesize{ \fontsize{8pt}{\baselineskip}\selectfont
\hspace*{-0.2cm}
\begin{tabular}{|p{4.3cm}||p{0.9cm}|p{0.9cm}|p{0.9cm}|}
\hline
 Program (version) & LOC & Classes & Methods  \\
\hline \hline
Time And Money (0.51) & 2372 & 29 & 492  \\
\hline
jdom (1.1) & 8513  & 57 & 807  \\
\hline
Apache Commons Primitives (1.0) & 9368  & 210 & 1739  \\
\hline
Apache Commons Beanutils (1.8.3) & 11382  & 94 & 1081  \\
\hline
Apache Commons Math (2.2) & 14469  & 131 & 1333\\
\hline
Apache Commons Collections (3.2.1) & 55400  & 445 & 5350  \\
\hline
\CodeIn{java.util} package (1.6.0\_12) & 48026  & 191 & 3387 \\
\hline \hline
Total & 149557 & 1163 & 12660 \\
\hline
\end{tabular}
\Caption{{\label{table:subjects} Subject programs. Column ``LOC'' is the number of non-blank, non-comment
lines of code, as counted by LOCC~\cite{locc}. Columns ``Classes'' and ``Methods'' show
the total number of classes and methods in each subject, respectively.} }}% \vspace*{-3mm}}
\end{table}


\begin{table*}[t]
\footnotesize{ \fontsize{8pt}{\baselineskip}\selectfont
\hspace*{-0.2cm}
\setlength{\tabcolsep}{0.55\tabcolsep}
%\begin{tabular}{|p{3.45cm}|p{0.6cm}p{0.6cm}|p{0.6cm}p{0.6cm}|p{0.6cm}p{0.6cm}|p{0.6cm}p{0.6cm}|p{0.6cm}p{0.5cm}|p{0.5cm}p{0.5cm}p{1.0cm}|}
\begin{tabular}{|l|ccc|cc|cc|cc|cc|ccc|}
\hline
 &  \multicolumn{3}{c|}{Failed Tests} & \multicolumn{2}{c|}{Static Slicing}
 & \multicolumn{2}{c|}{Dynamic Slicing}& \multicolumn{2}{c|}{Delta Debugging}& \multicolumn{2}{c|}{DD + Slicing~\cite{Leitner:2007}}
 & \multicolumn{3}{c|}{\textbf{\SimpleTest}}  \\
\cline{2-15}
 Program &  Tests & Bugs & Stmts & Stmts & Time & Stmts  & Time& Stmts  & Time & Stmts  & Time & Stmts  & Time &  Optimal? \\
\hline \hline
Time And Money&  23 & 3 & 1337 & 1337 & 0.1 & xx & xx & 1032 & 68.6 &  1032 & 70.0 & 90 & 20.6 & Y\\
\hline
jdom& 3  & 1 & 194 & 194 & 0.1 & xx & xx & 93 & 16.8 & 93 & 17.2 & 19 & 8.5 & Y\\
\hline
Apache Commons Primitives &  5  & 2 & 377 & 377 & 0.1 & xx & xxx& 142 & 23.8 & 142 & 26.2  & 37 & 20.5 & Y\\
\hline
Apache Commons Beanutils   & 10  & 2 & 317 &317 & 0.1 & xx & xx  & 166 & 10.4 & 166 &  10.1 & 53 & 6.5 & \textbf{N}\\
\hline
Apache Commons Math  & 18  & 3 & 747 & 705 & 0.1 & xx & xx & 490 & 35.8 & 467 & 33.5 & 114 & 19.3 & Y\\
\hline
Apache Commons Collections & 8 & 3 & 399 & 399 & 0.1 & xx & xx  & 283 & 100.9 & 283 & 105.8 & 49 & 59.1 & Y\\
\hline
\CodeIn{java.util} package  & 6  & 2& 178 & 178 & 0.1 & xx & xx & 107 & 10.2 & 107 & 10.4 & 36 & 6.6 & Y\\
\hline \hline
Total & \testnum & 16 & 3549 & 3507 & $<$1s & xx & xx & 2313 & 266.5 & 2290 & 273.2 & 398& 141.1 & \\
\hline
\end{tabular}
\Caption{{\label{table:results} Experimental results of simplifying failing tests.
 Column ``Failed Tests'' describes the original un-simplified failing tests.
Sub-column ``Tests'' is the number of failing tests, sub-column ``Bugs'' is the number of
distinct bugs revealed, and sub-column ``Stmts'' is the total number of test statements (excluding assertions).
Column ``Static Slicing'' shows the results of using static slicing for test simplification. Column ``Dynamic Slicing'' shows the results of
using dynamic slicing for test simplification. Column ``Delta Debugging'' shows
the results of using delta debugging~\cite{Zeller:2002} for test simplification.
Column ``DD + Slicing~\cite{Leitner:2007}'' shows the results of the most
effective existing test simplification approach, which combines static slicing and delta debugging.
Column ``\textbf{\SimpleTest}'' shows the results of the \SimpleTest algorithm
proposed in this paper.  In each column, sub-column ``Stmts'' shows the number
of statements after simplification (lower is better), sub-column ``Time'' shows the
time cost in seconds, and sub-column ``Optimal?'' shows whether the output
of \SimpleTest is optimal.} }}% \vspace*{-3mm}}
\end{table*}



\subsection{Simplifying Failed Tests}
\label{sec:result}

We used 7 real-world subjects, namely, Apache Commons
Collections\footnote{\scriptsize{Apache Commons Collections:
http://commons.apache.org/collections/}},
Primitives\footnote{\scriptsize{Apache Commons Primitives:
http://commons.apache.org/primitives/}},
Math\footnote{\scriptsize{Apache Commons Math:
http://commons.apache.org/math/}}, Beanutils\footnote{\scriptsize{Apache Commons Beanutils:
http://commons.apache.org/beanutils/}}, jdom\footnote{\scriptsize{JDOM:
http://www.jdom.org/}},
Time and Money\footnote{\scriptsize{Time And Money:
http://sourceforge.net/projects/timeandmoney/}}, and
\CodeIn{java.util}\footnote{\scriptsize{JDK 1.6:
http://download.oracle.com/javase/6/docs/api/}},  in this
experiment. Table~\ref{table:subjects} summarizes the subject details.

We ran \tool{Randoop}~\cite{randoopicse} on each subject
to create a regression unit test suite.
\tool{Randoop} works in a fully-automated way without
any human intervention. As \tool{Randoop} generates
regression tests, it checks 5 default Java contracts on every generated test, such
as the reflexivity of \CodeIn{equals} (i.e., for any non-null object \CodeIn{o},
\CodeIn{o.equals(o)} should return true).
Tests that violate a Java contract are classified as failing. We
ran \tool{Randoop} for at most 1000 seconds, stopping once no more
distinct bugs were found. In total, Randoop output \testnum failing tests for
the 7 subjects; each of them reveals a real bug in a subject program.
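One such contract check, the reflexivity of \CodeIn{equals}, can be sketched as follows (a simplified illustration of the idea, not Randoop's actual code):

```java
public class ContractCheck {
    // Sketch of a Randoop-style default contract: for any non-null
    // object o, o.equals(o) must return true (reflexivity).
    public static boolean satisfiesReflexivity(Object o) {
        return o == null || o.equals(o);
    }

    // A generated test whose objects violate the contract is
    // classified as failing.
    public static boolean classifyAsFailing(Object o) {
        return !satisfiesReflexivity(o);
    }

    // Example of a class that violates the contract (hashCode omitted
    // for brevity in this sketch).
    public static class BrokenEquals {
        @Override
        public boolean equals(Object other) {
            return false;   // never equal, not even to itself
        }
    }
}
```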


Like \tool{Randoop}, many automated test generation tools are effective at disclosing
potential bugs, but tend to produce long tests containing many statements irrelevant
to the bug. In our experiment, each failing test has 39 lines of code (excluding
assert statements) on average. Most failing tests involve complex interactions
between classes, and it is difficult to determine the failure cause by
simply reading the source code. Given such tests, programmers often need non-trivial
effort to understand the test behavior and isolate the root cause of the failure.
 %With the wider adoption of automated testing tools, quickly understanding failed tests
%becomes a demanding requirement.

For each failing test, we used \TestSimplifier to simplify it. The results of \TestSimplifier (alongside
4 existing test simplification techniques)
are shown in Table~\ref{table:results}. To evaluate the quality of
\TestSimplifier's results, for each failing test, we manually
wrote an optimally-simplified test (i.e., the shortest sequence that
reveals the same defect) as a comparison. Column ``Optimal?'' in Table~\ref{table:results}
indicates whether \TestSimplifier produced the optimal solution
for a given subject.
As Table~\ref{table:results} indicates, \TestSimplifier is
surprisingly effective: it reduces the
size of the \testnum failing tests from 703 lines of code to 110 lines of
code in 41 seconds. More impressively, 17 of the \testnum simplified
tests are optimal; only 1 test from Apache
Commons Beanutils is sub-optimal (the optimal test size is 6, while
\SimpleTest outputs a test of size 7). This slight difference
is caused by \SimpleTest's greedy heuristic when performing code
transformation.


\vspace{1mm}
\noindent \textit{\textbf{Comparison with Four Existing Approaches.}}
We compared the effectiveness of \SimpleTest with the four existing
test simplification techniques: static slicing~\cite{Weiser:1981},
dynamic slicing~\cite{javaslicer}, delta debugging~\cite{Zeller:2002},
and the combination of static slicing and delta debugging~\cite{Leitner:2007}.
The results of each technique are shown in Table~\ref{table:results}. We summarize and compare
the results as follows:

\begin{itemize}

\item \textbf{Static slicing} is fast but ineffective. Across
all \testnum failing tests, static slicing removes very few
irrelevant statements (42 of 3549 in total). The primary reason is that
static slicing is fundamentally conservative. For a test
that is incrementally built on top of existing method-call sequences,
static slicing often concludes that every statement \textit{might}
affect the test execution result.

\item \textbf{Dynamic slicing} uses a concrete test execution trace
to analyze the dynamic control and data dependences among
statements. It improves on static slicing,
reducing the total failing test size to 223 lines of code.
However, dynamic slicing incurs a huge overhead: it takes 853
seconds to complete all tests.

\item \textbf{Delta debugging} simplifies the tests by systematic
experimentation. It reduces the total failing test size to 415 lines of
code in 125.2 seconds. Two major factors prevent delta debugging
from simplifying the tests further. First, delta debugging
guarantees only a local optimum~\cite{} unless the test inputs satisfy the
\textit{monotonicity} property, i.e., a superset of any failure-inducing
test input should also fail. That assumption does not always hold
for a failing test, since a superset of a simplified test may not
even compile. Second, delta debugging cannot further simplify an already
syntactically-minimized test, in which removing any statement
leads to a compilation error.

\item \textbf{The combination of delta debugging and static slicing}
uses static slicing as a preprocessing step to remove all
statements that can be statically determined to be irrelevant, then
uses delta debugging to further simplify the tests. This combined approach
yields nearly the same test code reduction as delta debugging alone (consistent with results
reported by other researchers~\cite{}), but slightly reduces the overhead.

\item \textbf{\SimpleTest} is both effective and efficient: it substantially
outperforms all existing approaches~\cite{}, and uses a moderate amount
of time (slower than static slicing, but 20X faster than dynamic
slicing, and 3X faster than delta debugging and the combination
of delta debugging with static slicing). The primary reason for this
improvement is that \SimpleTest uses a lightweight algorithm (Figure~\ref{})
to validate at an early stage whether certain statements can be removed safely
after each simplifying action, instead
of executing the new sequence after every removal action as delta debugging does.

\end{itemize}
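The delta-debugging baseline can be illustrated by the following sketch, which greedily drops chunks of statements while the failure is preserved (a simplified variant for illustration; the actual ddmin algorithm~\cite{Zeller:2002} is more sophisticated):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

public class DeltaDebug {
    // Simplified delta-debugging sketch over a list of test statements:
    // try to delete chunks of statements, keeping a deletion whenever the
    // failure predicate still holds, and halving the chunk size when no
    // further deletion succeeds.
    public static <T> List<T> minimize(List<T> stmts, Predicate<List<T>> stillFails) {
        List<T> current = new ArrayList<>(stmts);
        int chunk = Math.max(1, current.size() / 2);
        while (chunk >= 1) {
            boolean reduced = false;
            int start = 0;
            while (start + chunk <= current.size()) {
                List<T> candidate = new ArrayList<>(current);
                candidate.subList(start, start + chunk).clear();
                if (stillFails.test(candidate)) {
                    current = candidate;   // failure preserved: keep the smaller test
                    reduced = true;
                } else {
                    start += chunk;        // this chunk is needed: skip over it
                }
            }
            if (!reduced) {
                chunk /= 2;                // refine granularity (loop exits at 0)
            }
        }
        return current;
    }
}
```

Note that the predicate is re-evaluated by running the candidate after every deletion, which is the per-execution cost that \SimpleTest's early validation avoids.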


\vspace{1mm}
\noindent \textit{\textbf{Breakdown of Contributions.}} We also
investigated the test size reduction contributed by each
action in the \SimpleTest algorithm.
As presented in Section~\ref{sec:algorithm}, \SimpleTest
consists of three major actions: \textit{remove}, \textit{reconstruct},
and \textit{checkProperty}. The \textit{checkProperty} action
verifies whether a simplified test still exhibits the same
behavior as the original one, and does not contribute to
test size reduction. As indicated in Table~\ref{table:breakdown},
the \textit{remove} action
eliminates 88\% of the irrelevant statements,
the \textit{reconstruct} action eliminates a further 11\%, and the
proposed optimization of using delta debugging to
remove irrelevant array elements eliminates the remaining 1\%.
The optimization contributes very little
for the \testnum failing tests.
This is expected, since the optimization is specific to array-related
operations: among the \testnum failing tests, only 1 test from
Apache Commons Math contains an array construct to which the
optimization applies.

\begin{table}[t]
\footnotesize{ \fontsize{8pt}{\baselineskip}\selectfont
\hspace*{-0.2cm}
\begin{tabular}{|p{3.5cm}||p{0.8cm}|p{1.2cm}|p{1.3cm}|}
\hline
 Program & \textit{remove} & \textit{reconstruct} & \textit{optimization} \\
\hline \hline
Time And Money& 1077 & 170 & 0   \\
 \hline
jdom& 172 & 3 & 0  \\
 \hline
 Apache Commons Primitive  & 335 & 5 & 0   \\
  \hline
Apache Commons Beanutils  &  260 & 4 & 0  \\
 \hline
 Apache Commons Math& 501 & 131 & 1  \\
 \hline
Apache Commons Collections & 316 & 34 & 0  \\
\hline
\CodeIn{java.util} package & 127 & 15 & 0  \\
\hline
\hline
Total & 2788 & 362 & 1  \\
\hline
\end{tabular}
\Caption{{\label{table:breakdown}Breakdown of test code
reduction in LOC. Columns ``\textit{remove}'', ``\textit{reconstruct}'',
and ``\textit{optimization}'' show the number of LOC reduced
by the two \SimpleTest actions and by the optimization of using delta debugging to
remove irrelevant array elements.} }}% \vspace*{-3mm}}
\end{table}





\vspace{1mm}

\noindent \textbf{\textit{Summary: semantic test simplification is
more effective in reducing test size than previous approaches
based exclusively on syntax-level simplification.}}



\subsection{Testing-based Fault Localization}
\label{sec:tarantual}

We next show that semantically-simplified tests are useful in the application domain
of automated debugging. The high cost of manual debugging motivates the
development of automated debugging techniques like~\cite{}. Many automated
debugging techniques~\cite{} use test execution information to help
programmers localize potentially buggy program entities.
Tarantula~\cite{Jones:2004}, a well-established testing-based automated debugging
technique, utilizes test execution results and coverage information that are readily available
from standard testing tools: the pass/fail outcome
of each test case, the entities that were executed by
each test case (e.g., statements, branches, methods), and
the source code of the program under test. The intuition
behind Tarantula is that entities in a program that are primarily
executed by failing test cases are more likely to be
faulty than those that are primarily executed by passing test
cases.

Tarantula computes the suspiciousness of each executed statement
based on the assumption that a statement covered by
a failing test is potentially buggy. Statements that are
executed primarily by failing test cases are highly
suspicious of being faulty;
statements that are executed primarily by passing test
cases are unlikely to be faulty; and statements that are executed by a
mixture of passing and failing test cases do not
lend themselves to either suspicion or safety.
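Concretely, the Tarantula suspiciousness of a statement $s$ is computed from coverage counts as
$\frac{\mathit{failed}(s)/\mathit{totalfailed}}{\mathit{passed}(s)/\mathit{totalpassed}+\mathit{failed}(s)/\mathit{totalfailed}}$~\cite{Jones:2004}.
A minimal sketch of the computation:

```java
public class TarantulaScore {
    // Tarantula suspiciousness of one statement, computed from how many
    // passing/failing tests cover it. Scores near 1 mean the statement is
    // executed mostly by failing tests; near 0, mostly by passing tests.
    public static double suspiciousness(int failedCovering, int totalFailed,
                                        int passedCovering, int totalPassed) {
        double failRatio = totalFailed == 0 ? 0.0 : (double) failedCovering / totalFailed;
        double passRatio = totalPassed == 0 ? 0.0 : (double) passedCovering / totalPassed;
        if (failRatio + passRatio == 0.0) {
            return 0.0;   // statement covered by no test at all
        }
        return failRatio / (failRatio + passRatio);
    }
}
```

Any statement covered by a failing test thus receives a nonzero score, which is exactly how irrelevant statements in long failing tests inflate the suspicious set.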

However, in practice a failing test inevitably covers correct
code, and thus introduces \textit{coverage noise} into the diagnosis process.
Bug-free statements covered by multiple failing tests may incorrectly
receive a high suspiciousness score. For example, the buggy statements
revealed by the failing test in Figure~\ref{fig:failedtest} reside in
the \CodeIn{TreeSet.add} method. However, this failing test also covers
statements inside the methods \CodeIn{Arrays.asList}, \CodeIn{List.subList}, and
\CodeIn{Collections.synchronizedSet} that are irrelevant to the bug. Such
statements, covered by failing tests, incorrectly receive a
nonzero suspiciousness score, making the diagnosis results less accurate.

We use semantic test simplification to reduce such coverage noise, and
compare Tarantula's fault localization results
when using the original un-simplified tests, syntactically-simplified
tests, and semantically-simplified tests on the 7 subjects in
Table~\ref{table:subjects}.

In this experiment, we use the whole regression test suite generated by Randoop, including both failing and passing
tests, as the input to Tarantula. Table~\ref{table:tanratularesult}
shows the test suite details and the experimental results.
We use two metrics to measure the effectiveness
of Tarantula: how many statements are classified as suspicious (i.e., potentially buggy,
with a nonzero suspiciousness score), and the average ranking of the buggy statements
by suspiciousness score. The first metric measures Tarantula's precision,
and the second measures Tarantula's effectiveness. Typically, programmers
inspect the statements in the report in descending order of suspiciousness
until the bug is found.


\begin{table*}[t]
\begin{center}
\setlength{\tabcolsep}{0.3\tabcolsep}
%\smaller
\begin{tabular}{|l|cc|ccc|ccc|}
\hline
 & \multicolumn{2}{c|}{Tests} & \multicolumn{3}{c|}{Number of Suspicious Statements} & \multicolumn{3}{c|}{Average Ranking of Buggy Statements} \\
\cline{2-9}
 \multicolumn{1}{|c|}{Subject} & Passing & Failing & Tarantula & Tarantula +~\cite{Leitner:2007} & Tarantula + \SimpleTest & Tarantula & Tarantula +~\cite{Leitner:2007} & Tarantula + \SimpleTest\\
\hline \hline
 Time And Money & 359 & 23 & 1519 & 1196 &  639  &  24 & 24 &  24\\
 \hline
Apache Commons Primitive &  3018 & 5 & 420 & 290 &  281  &  64 & 64 &  64\\
 \hline
Apache Commons Math & 672 & 18 & 1759 & 1312 &  1306  &  31 & 31 &  31\\
 \hline
Apache Commons Collections &  447 & 8 && &   &   & &  \\
 \hline
Apache Commons Beanutils&   2696 & 10 &1969 & 1520 &  312  &  28 & 28 &  27\\
 \hline
jdom &   4268 & 3 &275 & 199 &  76  &  10 & 10 &  9\\
 \hline
\CodeIn{java.util} package &   433 & 6 & &  &    &   &  &  \\
 \hline
  \hline
Total &   11893 & \testnum &XX & XX &  XX  &  XX & XX &  XX\\
   \hline
\end{tabular}
\end{center}
\vspace{-8pt}
\Caption{\label{table:tanratularesult}
Results of using the Tarantula technique~\cite{} to localize defects for the 7 subjects in Table~\ref{table:subjects}.
Columns ``Passing'' and ``Failing'' show the number of passing and failing unit tests used
in this experiment, respectively. The failing tests are those shown in Table~\ref{table:results}.
 Column ``Number of Suspicious Statements''
represents the number of suspicious statements (lower is better) identified by the Tarantula technique.
Column ``Average Ranking of Buggy Statements'' represents the average ranking of
all buggy statements (higher is better) in Tarantula's report. Sub-column ``Tarantula'' shows the
results of using the original un-simplified tests. Sub-column ``Tarantula +~\cite{Leitner:2007}''
shows the results of using the syntactically-simplified tests produced by delta debugging
and static slicing. Sub-column ``Tarantula + \SimpleTest'' shows the results of using the semantically-simplified
tests produced by \SimpleTest.}
\end{table*}

Table~\ref{table:tanratularesult} shows that using the original un-simplified
tests, Tarantula classified XXX statements as suspicious, and the average ranking of all
buggy statements is XXX. Using syntactically-simplified tests,  Tarantula classified XXX
statements as suspicious, and the average ranking of all
buggy statements is XXX. In contrast, using semantically-simplified tests, Tarantula
only classified XXX statements as suspicious, and the average ranking of 
all buggy statements is reduced to XXX. The result indicates that $\spadesuit$ un-simplified
tests XXXX, syntactically-simplified tests XXX


%Tarantula allows some tolerance for
%the fault to be occasionally executed by passed test cases.


\vspace{1mm}

\noindent \textbf{\textit{Summary: semantic test simplification
reduces coverage noise, and improves the accuracy of
testing-based fault localization techniques.}}


\subsection{Testing-based Error Explanation} \label{sec:autodoc}

We further demonstrate that, beyond localizing the exact buggy code,
semantically-simplified tests can
help create more accurate and useful error-diagnosis messages.

We chose \FailureDoc~\cite{failuredoc}, a technique that infers explanatory
documentation to explain why a test fails.
\FailureDoc has a rather different goal than Tarantula: it does not attempt
to pinpoint the exact buggy statement. Instead, it augments a failing test
with debugging clues (also called error-diagnosis messages): code comments that
state potentially useful facts about the failure, helping programmers fix
the bug quickly.

In brief, \FailureDoc works as follows. It repeatedly mutates a
failing test by replacing each expression with alternative
values, then executes the mutated test to observe its outcome. 
\FailureDoc uses a statistical algorithm to correlate the
replaced values with their corresponding outcomes, identifying suspicious
statements and their \textit{failure-correcting} objects. Finally,
\FailureDoc uses a Daikon-like technique~\cite{} to summarize properties
of the observed failure-correcting objects, translating them into
code comments that indicate changes to the failing test that would
cause it to pass, helping programmers understand why the test fails. The
full details of \FailureDoc can be found in~\cite{failuredoc}.
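The core mutate-and-observe loop can be sketched as follows (an illustrative simplification with a hypothetical test predicate, not \FailureDoc's actual implementation):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

public class MutateAndObserve {
    // Sketch of FailureDoc's core loop: substitute alternative candidate
    // values for one expression in a failing test, re-run the mutated
    // test, and collect the candidates that make it pass. These are the
    // "failure-correcting" objects whose common properties are then
    // summarized into an explanatory comment.
    public static <T> List<T> failureCorrecting(List<T> candidates,
                                                Predicate<T> mutatedTestPasses) {
        List<T> correcting = new ArrayList<>();
        for (T candidate : candidates) {
            if (mutatedTestPasses.test(candidate)) {
                correcting.add(candidate);   // this replacement flips the outcome
            }
        }
        return correcting;
    }
}
```

For instance, if the mutated test passes exactly when the replacement implements \CodeIn{Comparable}, all collected objects share that property, which a Daikon-like summarizer could translate into a comment such as ``Test passes if var1 implements Comparable''.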

A critical step in \FailureDoc is finding a set of \textit{failure-correcting}
objects to replace existing values in a failing test.
An object used for replacement must be type-compatible with
the rest of the test code. For example, in Figure~\ref{fig:failedtest}, \FailureDoc
must use a \CodeIn{List}-type object to replace \CodeIn{var6}
at line 6; otherwise, the mutated test would not compile.

However, in practice, an un-simplified failing test containing irrelevant
statements often imposes additional type constraints that prevent \FailureDoc
from finding good replacement candidates. As a result, \FailureDoc either fails
to infer error-diagnosis messages or infers less useful ones. For the un-simplified
failing test in Figure~\ref{fig:failedtest}, \FailureDoc infers only one
comment, above line 8:

\noindent \CodeIn{  //Test passes if var6 is not added to var7}

\noindent \CodeIn{8. boolean var8 = var7.add(var6);}

This comment indicates only \textit{when} the test fails, but
does not show \textit{why} it fails.

Semantic test simplification removes irrelevant statements from
the original failing test, permitting \FailureDoc to choose more diverse
objects as replacement candidates. For the simplified test
in Figure~\ref{fig:simplified}, \FailureDoc can use any \CodeIn{Object}-type
value to replace \CodeIn{var1} at line 8, instead of being constrained to
\CodeIn{List}-type objects. This lets \FailureDoc infer more specific
and accurate error messages. For this test, \FailureDoc generates two
comments, above lines 1 and 8:

\noindent \CodeIn{  //Test passes if var1 implements Comparable}

\noindent \CodeIn{1. Object var1 = new Object();}

\noindent \CodeIn{  //Test passes if var1 is not added to var7}

\noindent \CodeIn{8. boolean var8 = var7.add(var1);}

The comment above line 1 is crucial for programmers to understand the
failure cause, but can \textit{only} be inferred after performing semantic test simplification.



\begin{table}[t]
\footnotesize{ \fontsize{8pt}{\baselineskip}\selectfont
\hspace*{-0.2cm}
\setlength{\tabcolsep}{0.2\tabcolsep}
\begin{tabular}{|l||c|c|c|}
\hline
 Program & Unsimplified & Simplified by~\cite{Leitner:2007} & \textbf{\SimpleTest} \\
\hline \hline
Time And Money& xx & xxx & 0   \\
 \hline
jdom& xx & xx & 0  \\
 \hline
 Apache Commons Primitive  & xxx & xx & xx   \\
  \hline
Apache Commons Beanutils  &  xxx & xxx & xx  \\
 \hline
 Apache Commons Math& xxx & xxx &xxx  \\
 \hline
Apache Commons Collections & xx & xxx & xx0  \\
\hline
\CodeIn{java.util} package & xx & xx & xx  \\
\hline
\hline
Total & xx & xx & xxx  \\
\hline
\end{tabular}
\Caption{{\label{table:failuredoc} \FailureDoc's results
of inferring documentation for the \testnum failing tests from
Table~\ref{table:results}. Column ``Unsimplified'' shows
the number of un-simplified tests for which \FailureDoc can infer documentation.
Column ``Simplified by~\cite{Leitner:2007}'' shows the number
of syntactically-simplified tests for which \FailureDoc can infer documentation.
Column ``\textbf{\SimpleTest}'' shows the number of
semantically-simplified tests for which \FailureDoc can infer documentation.} }}% \vspace*{-3mm}}
\end{table}

We ran \FailureDoc on the \testnum failed tests under three experimental treatments:
originally un-simplified, syntactically-simplified by~\cite{}, and semantically-simplified
by \SimpleTest. Table~\ref{table:failuredoc} summarizes the results. 
$\spadesuit$

We also manually inspected the quality of the inferred comments, and
found $\spadesuit$ need to give some examples here


\vspace{1mm}

\noindent \textbf{\textit{Summary: semantic test simplification
improves the existing error explanation by removing irrelevant
test code, and permits inferring better explanatory
documentation.}}

\subsection{Experiment Discussion and Conclusion}

\noindent \textbf{\textit{Threats to Validity.}} There are four major
threats to validity. First, the 7 programs and the revealed bugs may
not be representative; thus, we cannot claim that the results
generalize to arbitrary programs. Second, we only investigated the
effectiveness of \SimpleTest on a set of automatically-generated tests,
and have not used human-written unit tests in the experiment.
Third, though we show \SimpleTest's usefulness on 2 automated debugging
techniques with rather different goals, we cannot claim that \SimpleTest
is universally applicable to all automated debugging techniques.
Fourth, it is still unknown how useful \SimpleTest-simplified tests are
for real-world programmers in understanding a test's behavior.
The first 3 threats can be mitigated by using more subject programs
and human-written tests, and by applying \SimpleTest to more automated debugging
techniques. The fourth threat can be addressed by a controlled
user study of how real programmers understand a test and fix the
revealed bug. All of these constitute our future work.

\vspace{1mm}

%\noindent \textbf{\textit{Applicability of \SimpleTest.}}


%To address this problem, we propose an algorithm, \SimpleTest, to
%automatically transform a failed test into a simpler test that
%exhibit the same bug.

%The simplified test can then be used in
%debugging instead of the more complicated original one,

%\vspace{1mm}

\noindent \textbf{\textit{Conclusion.}} \SimpleTest is more effective than
existing syntax-based test simplification techniques~\cite{}. It
potentially relieves the programmer of some of the burden
associated with reasoning about irrelevant program states and
code. Semantically-simplified tests also benefit testing-based
automated debugging by improving its accuracy,
effectiveness, and error-diagnosis messages.
