\section{Related Work}

\comment{Related work can be divided into two large categories.}  This
paper follows previous work on delta debugging
\cite{DD,DDISSTA,Yesterday} and other methods for reducing failing
test cases.  While previous work has attempted to generalize the
circumstances in which delta debugging can be applied
\cite{IsolThread,LocCause,CauseEffect}, this paper generalizes the
property preserved during reduction from failure to any chosen
effect.  Surveying the full scope of work on
failure reduction in both testing \cite{CReduce,HDD} and model
checking \cite{Gastin04minimizationof,MakeMost} is beyond the scope of
this paper.  The most relevant work considers delta debugging in
random testing \cite{MinUnit,ICSEDiff,AMAI,TCminim}, which tends to
produce complex, essentially unreadable, failing test cases
\cite{MinUnit}.  Random test cases are also highly redundant, and the
typical reduction for random test cases in the literature ranges from
75\% to well over an order of magnitude
\cite{MinUnit,TCminim,ICSEDiff,CReduce,PLDI13}.  Reducing
highly redundant test cases to enable debugging is an essential enough
component of random testing that some form of automated reduction
seems to have been applied even before the publication of the
\emph{ddmin} algorithm, e.g.\ in McKeeman's early
work~\cite{Differential}, and reduction for compiler
testing is an active research area \cite{CReduce}.  Recent work has
shown that reduction has other uses: Chen et al.\ showed that
reduction was required for using machine learning to rank
failing test cases to help users sort out different underlying faults
in a large set of failures \cite{PLDI13}.
\comment{In a larger sense, work on causality could be considered as
related to delta debugging \cite{LewisCause}.}
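The simplified delta-debugging loop underlying this line of work,
generalized so that the preserved property is an arbitrary predicate
rather than failure, can be sketched as follows (an illustrative
Python sketch, not the paper's implementation; all names are ours):

```python
def ddmin(candidate, pred):
    """Reduce the list `candidate` while preserving `pred(candidate)`.

    A sketch of the simplified delta-debugging loop: `pred` is any
    effect to preserve (classically "the test fails"; in cause
    reduction it may be any chosen property, e.g. covering a
    statement).
    """
    assert pred(candidate), "the starting input must exhibit the effect"
    n = 2  # current number of chunks to try removing
    while len(candidate) >= 2:
        chunk = len(candidate) // n
        reduced = False
        # Try deleting each chunk; keep the complement if the effect survives.
        for start in range(0, len(candidate), chunk):
            complement = candidate[:start] + candidate[start + chunk:]
            if pred(complement):
                candidate = complement
                n = max(n - 1, 2)  # a success: coarsen slightly and restart
                reduced = True
                break
        if not reduced:
            if n >= len(candidate):
                break  # single-element granularity reached: 1-minimal
            n = min(n * 2, len(candidate))  # refine granularity
    return candidate
```

Replacing `pred` with "compiles and crashes the compiler" recovers
classic failure reduction; replacing it with a coverage check yields
the effect-preserving reduction discussed above.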

Second, we propose an approach to test suite minimization, selection,
and prioritization that is orthogonal to previous work, which is
covered at length in a survey by Yoo and Harman \cite{YooHarman}.
Namely, while other approaches have focused on minimization
\cite{mcmaster2008call,offutt1995procedures,hsu2009mints,ChenLau},
selection \cite{SelectTest} and prioritization
\cite{RothMin,TimeAware06,CostEffect} at the granularity of entire
test suites, this paper proposes reducing the size of the test cases
composing the suite, a ``finer-grained'' approach that can be combined
with previous approaches.  Previous work on suite minimization has
shown a tendency of minimization techniques to lose fault detection
effectiveness \cite{RothFault}.  While our experiments are not intended to directly
compare cause reduction and suite-level techniques, we note that for
SpiderMonkey, at the 30-second and 5-minute levels, fault detection
was much better preserved by our approach than by
prioritizations based on suite minimization techniques.

The idea of a quick test proposed here also
builds on work that considers not just the effectiveness of a test
suite, but its \emph{efficiency}: coverage/fault detection per unit
time \cite{gupta-jalote-sttt06,Harder}.  Finally, as an alternative to
minimizing or prioritizing a test suite, tests can be constructed with
brevity as a criterion, as in evolutionary testing and bounded
exhaustive testing \cite{BLM10,FA11,AMFL11,AndrewsTR}.  However, the applications
where random testing is most used tend to be precisely those
where ``small by construction'' methods have not been shown to
be as successful, possibly for combinatorial reasons.
