\section{Introduction}

%\subsection{An Anecdote and Problem Statement: Using Previous Random Testing to Generate an Efficient ``Quick Test''}

In testing a flash file system implementation that eventually evolved
into the file system for the Mars Science Laboratory
(MSL) project's Curiosity rover \cite{ICSEDiff,AMAI}, one of the
authors of this paper discovered that, while an overnight sequence of
random tests was effective for shaking out even subtle
faults, random testing was not very effective if only a short time was
available for testing.  Each individual random test was a highly
redundant, ineffective use of testing budget.  As a basic sanity
check/smoke test before checking a new version of the file system in,
it was much more effective to run a regression suite
built by applying delta debugging \cite{DD} to a representative test
case for each fault previously found.

Delta debugging (or delta-minimization) is an algorithm
(called \emph{ddmin}) for reducing the size of failing test cases.
Delta debugging algorithms have retained a common core since
the original proposal of Hildebrandt and Zeller \cite{DDISSTA}: use a
variation on binary search to remove individual
components of a failing test case $t$ to produce a \emph{new}
test case $t_{1min}$ satisfying two properties: (1) $t_{1min}$ fails
and (2) removing any component from $t_{1min}$ results in a test case
that does not fail.  Such a test case is called \emph{1-minimal}.
Because 1-minimal test cases are potentially much larger than the
smallest possible set of failing components, we say that \emph{ddmin}
\emph{reduces} the size of a test case, rather than truly minimizing
it.  While the precise details of \emph{ddmin} and its variants can be
complex, the family of delta debugging algorithms can generally be
simply described.  Ignoring caching and the details of an effective
divide-and-conquer strategy for constructing candidate test cases,
\emph{ddmin} for a base failing test case $t_b$ proceeds by iterating the following two steps until termination:

\begin{enumerate}
\item Construct the next candidate simplification of $t_b$, which
we call $t_c$.  Terminate if no candidates remain ($t_b$ is 1-minimal).
\item Execute $t_c$ by calling $\mathit{rtest}(t_c)$.  If $\mathit{rtest}$ returns \ding{55} (the test fails), then $t_c$ is a simplification of $t_b$: set $t_b = t_c$.
\end{enumerate}
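The loop above can be sketched in a few lines of Python.  This is an
illustrative simplification, not the full \emph{ddmin} algorithm: it
removes one component at a time rather than using \emph{ddmin}'s
divide-and-conquer chunking, and the \texttt{rtest} predicate is
assumed to return \texttt{True} when a test case fails.

```python
def one_minimize(t_b, rtest):
    """Greedy reduction loop in the spirit of ddmin (simplified sketch).

    t_b:   a failing test case, represented as a list of components
    rtest: predicate returning True when a test case FAILS
    """
    assert rtest(t_b), "base test case must fail"
    reduced = True
    while reduced:
        reduced = False
        # Step 1: construct candidate simplifications of t_b, here by
        # removing one component at a time (real ddmin tries larger
        # chunks first, in a binary-search-like fashion).
        for i in range(len(t_b)):
            t_c = t_b[:i] + t_b[i + 1:]
            # Step 2: if the candidate still fails, it is a
            # simplification of t_b; adopt it and restart the scan.
            if rtest(t_c):
                t_b = t_c
                reduced = True
                break
    return t_b  # 1-minimal: removing any single component passes

# Toy example: a "test case" fails whenever it contains both 2 and 5.
fails = lambda t: 2 in t and 5 in t
print(one_minimize([1, 2, 3, 4, 5, 6], fails))  # -> [2, 5]
```

The termination condition of step 1 corresponds to the `for` loop
completing without any candidate passing the `rtest` check, at which
point `t_b` is 1-minimal by construction.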

In addition to detecting actual regressions of the NASA code,
\emph{ddmin}-minimized test cases obtained close to 85\% statement
coverage in less than a minute, a level that new random tests often
required hours of execution to match.  Unfortunately, the delta
debugging-based regression suite was often \emph{ineffective} at
detecting \emph{new} faults unrelated to previous bugs.  Inspecting
minimized test cases revealed that, while the tests covered most
statements, they were tightly focused on the corner cases that had
triggered past failures, and sometimes missed very shallow bugs
easily detected by even a brief session of fresh random testing.
While the bug-based regression suite was effective as a pure
\emph{regression} suite, it was ineffective as a quick way to find
\emph{new} bugs; on the other hand, running new random tests was
sometimes very slow for detecting either regressions or new bugs.

The functional programming community has long recognized the value of
very quick, if not extremely thorough, random testing during
development, as shown by the wide use of the QuickCheck tool
\cite{ClaessenH00}.  QuickCheck is most useful, however, for data
structures and small modules, and works best in combination with a
functional style allowing modular checks of referentially transparent
functions.  Even using feedback \cite{Pacheco,ICSEDiff}, swarm testing
\cite{ISSTA12}, or other improvements to standard random testing, it
is extremely hard to randomly generate effective tests for complex
systems software such as compilers \cite{csmith} and file systems
\cite{ICSEDiff,AMAI} without a large test budget.  For example, even
tuned random testers show increasing fault detection with larger
tests, which limits the number of tests that can be run in a small
budget \cite{ASE08,csmith}.  The value of the \emph{ddmin} regressions
at NASA, however, suggests a more tractable problem: \emph{given} a
set of random tests, generate a truly \emph{quick test} for complex
systems software.  Rather than choose a particular test budget that
represents ``the'' quick test problem, we propose that quick testing
is testing with a budget that is at most half as large as a full test
budget, and typically more than an order of magnitude smaller.  Discussion
with developers and the authors' experience suggested two concrete
values to use in evaluating quick test methods.  First, tests that
take only 30 seconds to run can be considered almost without cost, and
executed after, e.g., every compilation.  Second, a 5-minute budget is
too large to invoke with such frequency, but maps well to short breaks
from coding (e.g., the time it takes to get coffee), and is suitable
for use before relatively frequent code check-ins.  The idea of a quick
test is inherent in the concept of test \emph{efficiency}, defined as
coverage/fault detection \emph{per unit time}
\cite{gupta-jalote-sttt06,Harder}, as distinguished from absolute
effectiveness, where large test suites will always tend to win.

%that an ideal quick test maximizes the value of
%test budgets ranging from 30 seconds to 5 minutes.  If more time than
%5 minutes is available, it is often reasonable to run overnight tests,
%in which case simply generating new random tests works.

The primary, practical, contribution of this paper is \emph{a proposed
method for solving the quick test problem}, based on test case
reduction with respect to code coverage (and simple coverage-based
test case prioritization). Generalizing the \emph{effect} in
\emph{ddmin} from preserving failure to \emph{code coverage}
properties makes it possible to apply \emph{ddmin} to improve test
suites containing both failing \emph{and successful} test cases, by
dramatically reducing runtime while retaining code coverage.  This
yields test suites with some of the benefits of the ddmin-regression
discussed above (short runtime) but with better overall testing
effectiveness. We show that retaining statement coverage can
approximate retaining other important effects, including \emph{fault
detection and branch coverage}.  A large case study based on testing
Mozilla's SpiderMonkey JavaScript engine uses real faults to show that
cause reduction is effective for improving test efficiency, and that
the effectiveness of reduced test cases persists even across a long
period of development, without re-running the reduction algorithm.
Even more surprisingly, for the version of SpiderMonkey used to
perform cause reduction and a version of the code from more than two
months later, the reduced suite not only runs almost four times faster
than the original suite, but detects \emph{more} distinct faults.  A
mutation-based analysis of the YAFFS2 flash file system shows that the
effectiveness of cause reduction is not unique to SpiderMonkey: a
statement-coverage reduced suite for YAFFS2 ran in a little over half
the time of the original suite, but killed over 99\% as many mutants,
including 6 not killed by the original suite.
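The generalization at the heart of this contribution is small but
consequential: the predicate that \emph{ddmin} preserves changes from
``the test fails'' to ``the test retains the original coverage.''  The
sketch below illustrates the idea (it is a simplified
one-component-at-a-time reduction, not our actual implementation, and
\texttt{covered} is a hypothetical instrumentation hook returning the
set of statements a test case executes):

```python
def cause_reduce(t_b, covered):
    """Reduce t_b while preserving its statement coverage (sketch).

    covered(t) returns the set of statements executed by test case t.
    The 'effect' preserved is coverage rather than failure, so the
    algorithm applies to successful test cases as well as failing ones.
    """
    target = covered(t_b)  # the effect to preserve
    preserves = lambda t: covered(t) >= target  # still covers everything
    reduced = True
    while reduced:
        reduced = False
        for i in range(len(t_b)):
            t_c = t_b[:i] + t_b[i + 1:]
            if preserves(t_c):  # same structure as ddmin's rtest check
                t_b = t_c
                reduced = True
                break
    return t_b

# Toy coverage function: component n "covers" statement n % 3.
cov = lambda t: {x % 3 for x in t}
print(cause_reduce([3, 1, 4, 1, 5, 9, 2, 6], cov))  # -> [1, 2, 6]
```

Note that the result covers exactly the same statements as the
original eight-component test case, but with redundant components
removed, which is the source of the runtime savings reported above.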

The second contribution of this paper is introducing the idea of cause
reduction, which we believe may have applications beyond improving
test suite efficiency.

\section{The Quick Test Problem}

The \emph{quick test} problem is: given a set of randomly generated
tests, produce test suites for budgets small enough that the suites
can be run frequently during code development, while maximizing:

\begin{enumerate}
\item Code coverage: the most important coverage criterion is probably
statement coverage; branch and function coverage are also clearly
desirable;
\item Failures: automatic fault localization techniques
\cite{Tarantula} often work best in the presence of multiple failing
test cases; more failures also indicate a higher probability of
finding a flaw;
\item Distinct faults detected: finally, the most important evaluation metric is the actual number of \emph{distinct} faults a suite detects; it is generally better to produce 4 failures, each exhibiting a distinct fault, than 50 failures that exhibit only 2 different faults \cite{PLDI13}.
\end{enumerate}
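Because a quick test budget is measured in seconds, the relevant score
for ordering tests is coverage gained \emph{per unit time} rather than
absolute coverage.  A standard baseline is the ``additional greedy''
prioritization shown below; this is a generic sketch of that
well-known technique (the test representation is invented for
illustration), not the specific prioritization used in our tools.

```python
def prioritize(tests):
    """Greedily order tests by additional statement coverage per second.

    tests: list of (name, runtime_seconds, set_of_covered_statements).
    Returns test names, most cost-effective first, scoring each test
    by efficiency (new coverage per unit time) rather than by
    absolute coverage.
    """
    remaining = list(tests)
    covered = set()
    order = []
    while remaining:
        # Score = statements not yet covered / runtime in seconds.
        best = max(remaining,
                   key=lambda t: len(t[2] - covered) / t[1])
        remaining.remove(best)
        covered |= best[2]
        order.append(best[0])
    return order

suite = [("t1", 10.0, {1, 2, 3, 4}),
         ("t2", 1.0, {1, 2}),
         ("t3", 2.0, {3, 4, 5})]
print(prioritize(suite))  # -> ['t2', 't3', 't1']
```

In the toy suite, \texttt{t1} covers the most statements but is so
slow that both cheaper tests are scheduled first; truncating the
ordered suite at any budget then yields a coverage-efficient quick test.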

It is acceptable for a quick test approach to require significant
pre-computation and analysis of the testing already performed if the
generated suites remain effective across significant changes to the
tested code without re-computation.  Performing 10 minutes of analysis
before each 30 second run is clearly unacceptable; performing 10 hours
of analysis once to produce quick test suites that remain useful for a
period of months is fine.  For quick test purposes, it is also
probably more feasible to build a generally good small suite rather
than perform change analysis on-the-fly to select test cases that need
to be executed \cite{YooHarman,SelectTest}; because random tests are
all statistically similar to one another (as opposed to
human-produced tests, which tend to have a goal), in practice
selection methods tend to propose running most stored test cases.  In
addition, compilers and interpreters pose a difficult problem for
change analysis, since optimization passes rely on deep semantic
properties of the test case.

Given the highly parallel nature of random testing, in principle
arbitrarily many tests could be performed in 5 minutes.  In practice,
considerable effort is required to introduce and maintain cloud or
cluster-based testing, and developers often work offline or can only use local
resources due to security or confidentiality concerns.  More
critically, a truly small quick test would enable testing on slow,
access-limited hardware systems; in MSL development, random tests were
not performed on flight hardware due to high demand for access to the
limited number of such systems \cite{scriptstospecs}, and the slowness
of radiation-hardened processors.  A test suite that only requires 30
seconds to 5 minutes of time on a workstation, however, would be
feasible for use on flight testbeds.  We expect that the desire for
high quality random tests for slow/limited access hardware may extend
to other embedded systems contexts, including embedded compiler
development.  Such cases are more common than may be obvious: for
instance, Android GUI random testing \cite{Monkey} on actual mobile
devices can be even slower than on already slow emulators, but is
critical for finding device-dependent problems.  Quick testing's model
of expensive pre-computation to obtain highly efficient execution is a
good fit for the challenge of testing on slow and/or over-subscribed
hardware.

In some cases, the quick test problem might be solved simply by using
test-generation techniques that produce short tests in the first
place, e.g. evolutionary/genetic testing approaches where test size is
included in fitness \cite{BLM10,FA11,AMFL11}, or bounded exhaustive
testing (BET).  BET, unfortunately, performs poorly even for file
system testing \cite{AndrewsTR} and is very hard to apply to compiler
testing.  Recent evolutionary approaches \cite{FA11} are more likely
to succeed, but to our knowledge have not been applied to such complex
problems as compiler or interpreter testing, where hand-tuned systems
requiring expert knowledge are typical \cite{csmith,jsfunfuzz}. \comment{Even
random testers for file systems are often complex, hand-tuned systems
with custom feedback and hardware fault models \cite{ICSEDiff}.}


%\subsection{Contributions}


