\section{Conclusions and Future Work}

This paper shows that delta debugging can be generalized from an
algorithm that reduces the size of failing test cases to an algorithm
that reduces test cases with respect to \emph{any} interesting
effect, a generalization we call \emph{cause reduction}.  Cause
reduction allows us to produce \emph{quick tests}: highly efficient
test suites based on inefficient randomly generated tests.  Reducing
a test case with respect to statement coverage not only (obviously)
preserves statement and function coverage; it also approximately
preserves branch coverage, test failure, fault detection, and
mutation-killing ability, for two realistic case studies (and a small
number of test cases for a third subject, the GCC compiler).
Combining cause reduction by statement coverage with test case
prioritization by additional statement coverage produced, across
30-second and 5-minute test budgets and multiple versions of the
SpiderMonkey JavaScript engine, an effective quick test, with better
fault detection and coverage than running new random tests or
prioritizing a previously produced random test suite.  The efficiency
and effectiveness of reduced tests persist across versions of
SpiderMonkey and GCC up to a year later in development time, a long
period for such actively developed projects.
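To make the generalization concrete, the following minimal Python
sketch (our own simplification for illustration, not the paper's
implementation) runs a \emph{ddmin}-style greedy reduction loop
parameterized by an arbitrary effect predicate, e.g.\ ``the candidate
preserves the original test's statement coverage'' rather than the
classic ``the candidate still fails'':

```python
def cause_reduce(test, effect):
    """Greedy ddmin-style cause reduction (simplified sketch).

    `test` is a list of test components (e.g. API calls or file lines);
    `effect` is ANY predicate of interest -- classic delta debugging uses
    "the test still fails", cause reduction allows e.g. "statement
    coverage is preserved". Assumes effect(test) holds initially.
    """
    n = 2  # number of chunks to partition the test into
    while len(test) >= 2:
        chunk = max(1, len(test) // n)
        reduced = False
        for start in range(0, len(test), chunk):
            # Try the test with one contiguous chunk removed.
            candidate = test[:start] + test[start + chunk:]
            if effect(candidate):  # effect preserved without this chunk
                test = candidate
                n = max(n - 1, 2)  # coarsen slightly after a success
                reduced = True
                break
        if not reduced:
            if n >= len(test):
                break              # 1-minimal: no single chunk removable
            n = min(n * 2, len(test))  # refine granularity, as in ddmin
    return test
```

For instance, with the toy effect ``components 2 and 5 are still
present,'' reducing the test \texttt{[0..7]} yields \texttt{[2, 5]}.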

In future work we first propose to investigate the best strategies
for quick tests across more subjects, to determine whether the
results in this paper generalize.  Second, it is clear from GCC that
cause reduction by coverage is too expensive for some subjects: the
gain in efficiency is relatively small compared to the extraordinary
computational demands of reduction.  Two mitigations come to mind.
First, reduction by even coarser coverage criteria, such as function
coverage, is likely to be much faster (as more passes will reduce the
test case) and to yield better efficiency gains.  Whether cause
reduction based on coarse coverage will preserve other properties of
interest is doubtful, but worth investigating, as statement coverage
preserved other properties much more effectively than we would have
guessed.  Initial experiments with function-coverage-based reduction
of SpiderMonkey tests showed good preservation of failure and fault
detection, but we have not yet investigated how well preservation
carries over to future versions of the software.  A second mitigation
for slow reduction (though not for limited efficiency gains) is to
modify \emph{ddmin} to fit the case where the expected degree of
minimization is much smaller than for failures, and where the
probabilities that contiguous portions of a test case are removable
are essentially independent, rather than correlated as they typically
are for failure; that correlation is what motivates the binary search
in \emph{ddmin}.
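Under this independence assumption, one plausible alternative (a
hypothetical sketch, not an algorithm evaluated in this paper) skips
\emph{ddmin}'s escalating-granularity search entirely and makes a
single linear pass at a fixed, small chunk size, re-checking the
effect after each attempted removal:

```python
def one_pass_reduce(test, effect, chunk=1):
    """Fixed-granularity linear-scan reduction (hypothetical sketch).

    When the removability of contiguous chunks is roughly independent,
    as appears to be the case for coverage-preserving reduction, the
    binary-search escalation of ddmin buys little; a single pass that
    tries to delete each chunk in turn may reduce nearly as well with
    far fewer effect evaluations. Assumes effect(test) holds initially.
    """
    i = 0
    while i < len(test):
        candidate = test[:i] + test[i + chunk:]
        if effect(candidate):
            test = candidate  # chunk removable; retry at the same index
        else:
            i += chunk        # chunk needed; keep it and move on
    return test
```

With \texttt{chunk=1} and a monotone effect this pass is also
1-minimal; larger chunks trade minimality for fewer effect checks.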

We also propose other uses of cause reduction.  While some
applications are relatively similar to coverage-based minimization,
e.g., reducing tests with respect to peak memory usage, security
privileges, or other testing-based predicates, other possibilities
arise.  For example, reduction could be applied to a program itself,
rather than to a test.  A set of tests (or even model checking runs)
could serve as the effect, reducing the program with respect to its
ability to satisfy all tests or specifications.  If the program can be
significantly reduced, this may suggest a weak test suite, abstraction,
or specification.  This approach goes beyond simply examining code
coverage: code that is removed despite being covered by the tests or
model checking runs is truly under-specified, rather than just not
executed (or dead code).  Cause reduction can also be used to produce
``more erroneous'' code when considering ``faults'' with a
quantitative nature.  For example, C++ compilers often produce
unreasonably lengthy error messages for invalid programs using
templates, a problem well known enough and irritating enough to
inspire a contest (\url{http://tgceec.tumblr.com/}) for the shortest
program producing the longest error message.  Using C-Reduce, we
started with code from LLVM, with an effect designed to maximize the
ratio of error message length to code length.  C-Reduce eventually
produced a small program:

{\scriptsize
\begin{verbatim}
struct x0 struct A<x0(x0(x0(x0(x0(x0(x0(x0(x0(x0(_T1,x0(_T1>
  <_T1*, x0(_T1*_T2>      binary_function<_T1*, _T2, x0{ }
\end{verbatim}
}

\noindent This produces a very large error message on the latest {\tt
g++}, and the message doubles in size for each additional {\tt
(x0}.
