The benefits of unit testing are widely recognized. Unit testing
reveals defects that would be expensive to find
and fix in later development phases, and it verifies that changes to the
code do not break existing functionality. A good unit test must
be effective at finding potential defects, and it must also be
easy to understand. A test that can reveal no defects is useless for
testing.\todo{This wording is unclear.  A test that \emph{can} find none is
  indeed useless.  A test that never fails during the course of a
  development session or even an entire project can still be useful as a
  preventative measure.}
However, a test that is hard to interpret is less useful
in practice. This is particularly important when a test fails:
programmers must understand the cause of the failure before they can fix the bug.

Recently, many automated test generation tools~\cite{randoopicse,eclat,korat,
palus, Baresi:2010, Mariani:2011} have been developed to reduce
the high cost of manual test \emph{creation}. However, few techniques
help programmers \textit{understand} the behavior of an existing test.
In most scenarios, not every statement in a test is equally important
to a programmer. For example, when a test fails by violating a predefined
assertion, programmers may only be interested in the test code that is
related to the assertion violation, and they prefer to
ignore other irrelevant parts. A practical solution, which we follow
in this paper, is to provide programmers a \textit{simplified} but still
\textit{executable} test that requires less comprehension effort. The simplified
test violates the same assertion, but by a much shorter sequence of setup
statements.
Programmers can choose to inspect the simplified version to understand
the failure conditions. Since the simplified test is still executable,
programmers can run it, use a debugger to step into the execution,
and then locate the defect.



Simplifying tests not only helps programmers diagnose a defect,
but also has many other benefits. First, a simplified test takes
 less time and fewer resources to execute.
This is particularly useful for long-running tests~\cite{Zhang:2006}. Second,
a simplified test is easier to explain and to communicate among programmers.
Furthermore, a simplified test can facilitate other software engineering activities,
such as identifying duplicate failures
(multiple tests that expose the same defect)~\cite{Le:2010}, since simplified tests
of the same defect are more likely to contain similar or even identical code.
\todo{This was unclear.  Simplifying two different tests makes them more
  similar, or clarifies their differences?  I could see both arguments.}

Our empirical evaluation (Section~\ref{sec:evaluation}) demonstrates
another use case:  simplified tests aid in defect localization. 
For example, Tarantula~\cite{Jones:2004}
utilizes information obtained from both passing and failing tests
to identify buggy statements. It is based on the assumption that statements covered
by more failing tests are more likely to be buggy. However, a failing test may also cover
correct code, which introduces \textit{coverage noise} into Tarantula's diagnosis process.
\tool{FailureDoc}~\cite{failuredoc}, a recent error explanation technique, uses
\textit{value replacement} to infer explanatory documentation that explains why a test fails.
However, the \textit{irrelevant test code} in a failing test often contributes
meaningless control/data dependences, which prevent \tool{FailureDoc} from finding
suitable value-replacement candidates and result in poor-quality
documentation. A simplified test is likely to contain less
irrelevant code. As we show in our empirical evaluation (Section~\ref{sec:evaluation}),
semantically-simplified tests effectively reduce the \textit{coverage noise}
for \tool{Tarantula}, eliminate \textit{irrelevant
test code} for \tool{FailureDoc}, and thus improve the results of both techniques.
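
For reference, Tarantula ranks each statement $s$ by a suspiciousness score
(this is the standard formulation from the fault-localization literature, stated
here in our notation):
\[
\mathit{susp}(s) \;=\; \frac{\frac{\mathit{failed}(s)}{\mathit{totalfailed}}}
{\frac{\mathit{passed}(s)}{\mathit{totalpassed}} + \frac{\mathit{failed}(s)}{\mathit{totalfailed}}},
\]
where $\mathit{failed}(s)$ and $\mathit{passed}(s)$ count the failing and passing
tests that cover $s$. Coverage noise inflates $\mathit{failed}(s)$ for correct
statements, which is precisely the effect that test simplification mitigates.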


\vspace{1mm}

\noindent \textbf{\textit{Existing approaches.}}
Two existing approaches to simplify a unit test are
delta debugging~\cite{Zeller:2002}
and program slicing~\cite{Weiser:1981}.
Delta debugging~\cite{Zeller:2002, Lei:2005}, a general technique to
simplify failure-inducing inputs, takes as input a set of factors that might
influence the test outcome (here, the statements of a failing test) and
repeatedly reruns the test with subsets of these factors. By discarding
factors whose removal does not change the outcome, it systematically
reduces the set until only relevant
factors remain. Program slicing~\cite{Weiser:1981, Agrawal:1990, Leitner:2007}
uses control/data dependence information to identify a
subset of statements that may affect a given property in a program. However,
a fundamental limitation of both delta debugging and program slicing is
that they are syntactic: each isolates a relevant \textit{subset} of the
original statements. A syntactically-minimized test can
still contain many irrelevant statements, which require extra
comprehension effort. For example, Figure~\ref{fig:failedtest}
shows a syntactically-minimized failing test: if any statement is removed,
the test either fails to compile or no longer triggers the defect. Delta
debugging over statements can no longer simplify this test, and a backward slice
from the failing assertion (line 10) contains every statement.  In fact,
the test in Figure~\ref{fig:failedtest} reveals that the \CodeIn{add} method
in the \CodeIn{TreeSet} class is buggy: it should not accept a non-comparable
object, but it does (line 8). The code in lines 2--6 is entirely irrelevant
to this bug, and should be removed to avoid distracting programmers.
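
To make the statement-subset search concrete, the following is a minimal,
self-contained sketch of the \CodeIn{ddmin} delta debugging loop over test
statements. The statement names and the \CodeIn{stillFails} oracle are
illustrative stand-ins (the oracle abstracts recompiling and rerunning a
candidate test); this is not the implementation of any cited tool:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// Sketch of the ddmin loop over the statements of a failing test.
// "stillFails" abstracts re-running the test: it returns true if the
// given subset of statements still compiles and triggers the failure.
public class DeltaDebug {

    static List<String> ddmin(List<String> stmts, Predicate<List<String>> stillFails) {
        int n = 2; // current granularity: number of chunks
        while (stmts.size() >= 2) {
            boolean reduced = false;
            for (List<String> chunk : partition(stmts, n)) {
                List<String> complement = new ArrayList<>(stmts);
                complement.removeAll(chunk);
                if (stillFails.test(complement)) {
                    stmts = complement;            // smaller failing test found
                    n = Math.max(n - 1, 2);
                    reduced = true;
                    break;
                }
            }
            if (!reduced) {
                if (n >= stmts.size()) break;      // 1-minimal: done
                n = Math.min(stmts.size(), 2 * n); // refine granularity
            }
        }
        return stmts;
    }

    // Split xs into n roughly equal consecutive chunks.
    static List<List<String>> partition(List<String> xs, int n) {
        List<List<String>> chunks = new ArrayList<>();
        int start = 0;
        for (int i = 0; i < n; i++) {
            int end = start + (xs.size() - start) / (n - i);
            chunks.add(new ArrayList<>(xs.subList(start, end)));
            start = end;
        }
        return chunks;
    }

    public static void main(String[] args) {
        // Toy oracle: the failure is triggered iff s3 and s7 are both present.
        Predicate<List<String>> oracle = s -> s.contains("s3") && s.contains("s7");
        List<String> test = List.of("s1", "s2", "s3", "s4", "s5", "s6", "s7", "s8");
        System.out.println(ddmin(new ArrayList<>(test), oracle)); // prints [s3, s7]
    }
}
```

On the already-minimized test of Figure~\ref{fig:failedtest}, every such
complement either fails to compile or no longer triggers the failure, so the
oracle rejects it and the loop returns the test unchanged.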

\begin{figure}[t]
\begin{CodeOut}
\begin{alltt}
public void test1() \{
1.   Object var1 = new Object();
2.   Integer var2 = 0;
3.   Integer var3 = 1;
4.   Object[] var4 = new Object [] \{var1, var2, var3\};
5.   List var5 = Arrays.asList(var4);
6.   List var6 = var5.subList(var2, var3);
7.   TreeSet var7 = new TreeSet();
8.   boolean var8 = var7.add(var6);
9.   Set var9 = Collections.synchronizedSet(var7);
     // This assertion fails
10.  assertTrue(var9.equals(var9));
\}
\end{alltt}
\end{CodeOut}
\vspace*{-3.0ex} \Caption{\label{fig:failedtest} A test automatically generated by Randoop~\cite{randoopicse}
that reveals a bug in JDK 1.6: the call sequence ends with
the creation of an object that is not equal to itself. This test has
already been syntactically minimized; existing approaches~\cite{Lei:2005, Leitner:2007}
cannot simplify it further.} %\vspace{-5mm}
\end{figure}




\vspace{1mm}

\noindent \textbf{\textit{Proposed solution.}} This paper presents
a new test simplification technique that
explores the space of simplified tests at the \textit{semantic}
level. We first introduce and formalize the \textit{semantic
test simplification} problem: simplifying a test to
the minimal number of instructions (steps) required
to reproduce a given property $\phi$. Then, we
prove this problem is NP-hard.\todo{If tests are small in general (as yours
  are), then does NP-hardness matter?  Exhaustive search is possible.  Please
  experimentally compare to exhaustive search.}
We further propose a heuristic algorithm,
\SimpleTest, that runs in time polynomial in the size of a failed test.
Given a failed test that satisfies
a property $\phi$, \SimpleTest automatically transforms it
 into a simpler test that still exhibits $\phi$. To efficiently achieve this,
 \SimpleTest greedily computes a locally optimal simplification instead of the
globally optimal simplification.  A key difference of \SimpleTest
from existing test simplification techniques~\cite{Lei:2005, Leitner:2007} is that
\SimpleTest tries  to \textit{reconstruct}\todo{I don't like this
  terminology --- it's a nonstandard use of the word ``reconstruct''.} an executable
test from the original one, instead of
\textit{carving} a subset of the existing test code. \SimpleTest starts from $\phi$,
follows back data dependencies, and repeatedly \textit{replaces} referred
expressions in each statement with other alternatives.
 After each replacement, \SimpleTest constructs a simpler test.
 The simpler test is immediately executed to validate whether $\phi$ is still
satisfied. \SimpleTest repeats this replacement process until the resulting
test can no longer be simplified. Since \SimpleTest is a heuristic algorithm,
it cannot guarantee an optimal
simplification. However, we empirically show that it often yields an optimally simplified test
in practice (Section~\ref{sec:evaluation}). As an example, for the failed test in Figure~\ref{fig:failedtest},
\SimpleTest outputs the simplified
test in Figure~\ref{fig:simplified}. The simplified version still
triggers the same assertion at line 10, but is much simpler
than the original one. 
A crucial step performed by \SimpleTest is
to replace \CodeIn{var6} with \CodeIn{var1} on line 8.
This replacement makes it possible to further simplify
the test by removing the irrelevant statements on lines 2--6.
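
The greedy replacement loop can be sketched as follows. This is an
illustrative model rather than our implementation: a statement is reduced to a
(defined variable, referenced variables) pair, the oracle \CodeIn{phi} stands
in for actually re-executing a candidate test, and a deliberately conservative
liveness pass stands in for real data-dependence analysis (a statement that
uses a live variable is kept, because the call may mutate that object, as
\CodeIn{var7.add(...)} does):

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.function.Predicate;

// Illustrative model of greedy, replacement-based test simplification.
// A statement is just (defined var, referenced vars); "phi" abstracts
// re-running a candidate test and checking the property of interest.
public class SemanticSimplify {

    record Stmt(String def, List<String> uses) {}

    // Walk back from the assertion; try replacing each referenced variable
    // with an earlier-defined one; keep a candidate only if phi still holds
    // and the candidate is strictly smaller after dead-code removal.
    static List<Stmt> simplify(List<Stmt> test, Predicate<List<Stmt>> phi) {
        List<Stmt> cur = new ArrayList<>(test);
        boolean changed = true;
        while (changed) {
            changed = false;
            search:
            for (int i = cur.size() - 1; i >= 0; i--) {
                Stmt s = cur.get(i);
                for (int u = 0; u < s.uses().size(); u++) {
                    for (int j = 0; j < i; j++) {
                        String alt = cur.get(j).def();
                        if (alt.equals(s.uses().get(u))) continue;
                        List<String> uses = new ArrayList<>(s.uses());
                        uses.set(u, alt);
                        List<Stmt> cand = new ArrayList<>(cur);
                        cand.set(i, new Stmt(s.def(), uses));
                        cand = liveOnly(cand);
                        if (cand.size() < cur.size() && phi.test(cand)) {
                            cur = cand;
                            changed = true;
                            break search;
                        }
                    }
                }
            }
        }
        return cur;
    }

    // Conservative liveness: keep the assertion, any statement whose result
    // is referenced later, and any statement that uses a live variable
    // (its call may mutate that object, e.g. var7.add(...)).
    static List<Stmt> liveOnly(List<Stmt> test) {
        List<Stmt> kept = new ArrayList<>();
        Set<String> live = new HashSet<>();
        for (int i = test.size() - 1; i >= 0; i--) {
            Stmt s = test.get(i);
            boolean keep = i == test.size() - 1
                    || live.contains(s.def())
                    || s.uses().stream().anyMatch(live::contains);
            if (keep) {
                kept.add(0, s);
                live.addAll(s.uses());
            }
        }
        return kept;
    }

    // The test of Figure 1, with a toy oracle: the failure occurs whenever
    // the add() call (defining var8) receives a non-comparable object and
    // the assertion still checks var9.
    static List<String> demo() {
        List<Stmt> test = List.of(
                new Stmt("var1", List.of()),
                new Stmt("var2", List.of()),
                new Stmt("var3", List.of()),
                new Stmt("var4", List.of("var1", "var2", "var3")),
                new Stmt("var5", List.of("var4")),
                new Stmt("var6", List.of("var5", "var2", "var3")),
                new Stmt("var7", List.of()),
                new Stmt("var8", List.of("var7", "var6")),
                new Stmt("var9", List.of("var7")),
                new Stmt("assert", List.of("var9")));
        Set<String> nonComparable = Set.of("var1", "var4", "var5", "var6");
        Predicate<List<Stmt>> phi = t ->
                t.get(t.size() - 1).uses().contains("var9")
                && t.stream().anyMatch(s -> s.def().equals("var8")
                        && s.uses().stream().anyMatch(nonComparable::contains));
        List<String> result = new ArrayList<>();
        for (Stmt s : simplify(test, phi)) result.add(s.def());
        return result;
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints [var1, var7, var8, var9, assert]
    }
}
```

On this model, replacing \CodeIn{var6} with \CodeIn{var1} in the \CodeIn{add}
call makes lines 2--6 dead, and the sketch converges to the five statements of
Figure~\ref{fig:simplified}.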

\todo{After the long, detailed paragraph above, I still don't understand
  the technique.  It should be described more concisely and clearly.  Is it
  simplifications in the AST space?  Is it tree rewriting?}

\begin{figure}[t]
\begin{CodeOut}
\begin{alltt}
public void test1() \{
1.   Object \textbf{\underline{var1}} = new Object();
\sout{2.   Integer var2 = 0;}
\sout{3.   Integer var3 = 1;}
\sout{4.   Object[] var4 = new Object [] \{var1, var2, var3\};}
\sout{5.   List var5 = Arrays.asList(var4);}
\sout{6.   List var6 = var5.subList(var2, var3);}
7.   TreeSet var7 = new TreeSet();
8.   boolean var8 = var7.add(\textbf{\underline{var1}});
9.   Set var9 = Collections.synchronizedSet(var7);
     // This assertion fails
10.  assertTrue(var9.equals(var9));
\}
\end{alltt}
\end{CodeOut}
\vspace*{-3.0ex} \Caption{{\label{fig:simplified} A semantically simplified test
produced by our \SimpleTest algorithm for the test in Figure~\ref{fig:failedtest}.
Line numbers are aligned with Figure~\ref{fig:failedtest}. The \sout{struck-out} statements
do not appear in the final simplified test, and the
replacement variable is highlighted by \underline{\textbf{underlining}}.}} %\vspace{-5mm}
\end{figure}



\vspace{1mm}

\noindent \textbf{\textit{Evaluation.}} We have implemented \SimpleTest in a prototype tool, called \Simplifier, for Java
programs. We present two sets of evaluation results to
demonstrate its usefulness.

 First, we applied \Simplifier to simplify \testnum failing tests from 7
real-world programs totaling up to 150K lines of code. The results
showed that \Simplifier reduces test size by as much as 84\%
within 41 seconds. We also compared \SimpleTest
with four existing techniques, and found that \SimpleTest removed XXX\%
more irrelevant test code and ran 3$\times$ faster than the most effective
existing approach~\cite{Le:2010}.

 Second, we applied \Simplifier to the application domain of
automated debugging.  We demonstrated that semantically-simplified tests
can substantially improve the effectiveness of two existing techniques:
Tarantula~\cite{Jones:2004}, a well-established fault localization technique,
and FailureDoc~\cite{failuredoc}, a recent error explanation technique.



\vspace{1mm}

\noindent \textbf{\textit{Contributions.}} The main contributions of this paper are:

\begin{itemize}

\item \textbf{Problem.} We introduce and  formalize the
\textit{semantic test simplification} problem and prove it is NP-hard (Section~\ref{sec:formal}).

\item \textbf{Technique.} We propose a heuristic test simplification algorithm,
\SimpleTest, which greedily applies three basic actions to
simplify a test (Section~\ref{sec:algorithm}).

\item \textbf{Tool.} We describe an implementation of the \SimpleTest algorithm,
which is integrated into a state-of-the-art automated test generation tool and
is publicly available at \url{http://code.google.com/p/randoop/}.

\item \textbf{Evaluation.} We conducted experiments to show the effectiveness
of our proposed technique (Section~\ref{sec:evaluation}).
\end{itemize}

Section~\ref{sec:related} discusses related work, and Section~\ref{sec:conclusion} concludes.

