\section{SpiderMonkey JavaScript Engine Case Study}

\begin{table*}
\caption{SpiderMonkey Unlimited Test Budget Results}
\label{tab:unlimited}

\begin{center}
\begin{tabular}{|c||c||c||c|c|c|c|c|c|c|}
\hline
Release & Date & Suite & Size & Time(s) & ST & BR & FN & \#Fail & E\#F \\
\hline
\hline
1.6 & 12/22/06 & F & 13,323 & 14,255.068 & 19,091 & {\bf 14,567} & 966 & 1,631 & 22 \\
\hline
1.6 & 12/22/06 & M & 13,323 & {\bf 3,566.975} & 19,091 & 14,562 & 966 & 1,631 & {\bf 43}\\
\hline
1.6 & 12/22/06 & M$_F$ & 13,323 & 2,636.648 & 18,876 & 14,280 & 966 & 1,627 & 39 \\
\hline
\hline
1.6 & 12/22/06 & DD-Min & 1,019 & 169.594 & 16,020 & 10,875 & 886 & 1,019 & 22 \\
\hline
1.6 & 12/22/06 & GE-ST(F) & 168 & 182.823 & 19,091 & 14,135 & 966 & 14 & 5 \\
\hline
1.6 & 12/22/06 & GE-ST(M) & 171 & 47.738 & 19,091 & 14,099 & 966 & 14 & 8 \\
\hline
1.6 & 12/22/06 & GE-Min & 168 & 25.443 & 19,091 & 13,722 & 966 & 12 & 8 \\
\hline
\multicolumn{10}{c}{} \\
\hline
NR & 2/24/07 & F & 13,323 & 9,813.781 & {\bf 22,392} & {\bf 17,725} & {\bf 1,072} & {\bf 8,319} & 20 \\
\hline
NR & 2/24/07 & M & 13,323 & {\bf 3,108.798} & 22,340 & 17,635 & 1,070 & 4,147 & {\bf 36} \\
\hline
NR & 2/24/07 & M$_F$ & 13,323 & 2,490.859 & 22,107 & 17,353 & 1,070 & 1,464 & 32 \\
\hline
\hline
NR & 2/24/07 & DD-Min & 1,019 & 148.402 & 17,923 & 12,847 & 958 & 166 & 7 \\
\hline
NR & 2/24/07 & GE-ST(F) & 168 & 118.232 & 21,305 & 16,234 & 1,044 & 116 & 5 \\
\hline
NR & 2/24/07 & GE-ST(M) & 171 & 40.597 & 21,323 & 16,257 & 1,045 & 64 & 3 \\
\hline
NR & 2/24/07 & GE-Min & 168 & 25.139 & 20,887 & 15,424 & 1,047 & 8 & 6 \\
\hline
\multicolumn{10}{c}{} \\
\hline
NR & 4/24/07 & F & 13,323 & 16,493.004 & {\bf 22,556} & {\bf 18,047} & {\bf 1,074} & 189 & {\bf 10} \\
\hline
NR & 4/24/07 & M & 13,323 & {\bf 3,630.917} & 22,427 & 17,830 & 1,070 & {\bf 196} & 6 \\
\hline
NR & 4/24/07 & M$_F$ & 13,323 & 2,522.145 & 22,106 & 17,449 & 1,066 & 167 & 5 \\
\hline
\hline
NR & 4/24/07 & DD-Min & 1,019 & 150.904 & 18,032 & 12,979 & 961 & 158 & 5 \\
\hline
NR & 4/24/07 & GE-ST(F) & 168 & 206.033 & 22,078 & 17,203 & 1,064 & 4 & 1 \\
\hline
NR & 4/24/07 & GE-ST(M) & 171 & 45.278 & 21,792 & 16,807 & 1,058 & 3 & 1 \\
\hline
NR & 4/24/07 & GE-Min & 168 & 24.125 & 21,271 & 15,822 & 1,052 & 2 & 2 \\
\hline
\multicolumn{10}{c}{} \\
\hline
1.7 & 10/19/07 & F & 13,323 & 14,282.776 & {\bf 22,426} & {\bf 18,130} & {\bf 1,071} & {\bf 528} & {\bf 15} \\
\hline
1.7 & 10/19/07 & M & 13,323 & {\bf 3,401.261} & 22,315 & 17,931 & 1,067 & 274 & 10\\
\hline
1.7 & 10/19/07 & M$_F$ & 13,323 & 2,474.55 & 22,022 & 17,565 & 1,065 & 241 & 11 \\
\hline
\hline
1.7 & 10/19/07 & DD-Min & 1,019 & 168.777 & 18,018 & 13,151 & 956 & 231 & 12\\
\hline
1.7 & 10/19/07 & GE-ST(F) & 168 & 178.313 & 22,001 & 17,348 & 1,061 & 6 & 2 \\
\hline
1.7 & 10/19/07 & GE-ST(M) & 171 & 43.767 & 21,722 & 16,924 & 1,055 & 5 & 2 \\
\hline
1.7 & 10/19/07 & GE-Min & 168 & 24.710 & 21,212  & 15,942 & 1,045 & 4 & 4 \\
\hline
\multicolumn{10}{c}{} \\
\hline
1.8.5 & 3/31/11 & F & 13,323 & 4,301.674 & {\bf 21,030} & {\bf 15,854} & {\bf 1,383} & {\bf 11} & {\bf 2} \\
\hline
1.8.5 & 3/31/11 & M & 13,323 & {\bf 2,307.498} & 20,821 & 15,582 & 1,363 & 3 & 1\\
\hline
1.8.5 & 3/31/11 & M$_F$ & 13,323 & 1,775.823 & 20,373 & 15,067 & 1,344 & 3 & 2 \\
\hline
\hline
1.8.5 & 3/31/11 & DD-Min & 1,019 & 152.169 & 16,710 & 11,266 & 1,202 & 2 & 1 \\
\hline
1.8.5 & 3/31/11 & GE-ST(F) & 168 & 51.611 & 20,233 & 14,793 & 1,338 & 1 & 1\\
\hline
1.8.5 & 3/31/11 & GE-ST(M) & 171 & 28.316 & 19,839 & 14,330 & 1,327 & 1 & 1 \\
\hline
1.8.5 & 3/31/11 & GE-Min & 168 & 21.550 & 18,739 & 13,050 & 1,302 & 1 & 1 \\
\hline
\end{tabular}
\\
\vspace{0.1in}
{\scriptsize \bf
Legend:  ST = Statement Coverage; BR = Branch Coverage; FN = Function Coverage; \#Fail = Num. Failing Tests; E\#F = Estimated Num. of Distinct Faults\\
F = Original Suite; M = \emph{ddmin}(F, ST Cov.); DD-Min = \emph{ddmin}(F, Failure); GE-ST = Greedy Selection for ST. Cov}
\end{center}
\vspace{-0.2in}
\end{table*}



SpiderMonkey is Mozilla's JavaScript engine, an extremely widely
used, security-critical interpreter/JIT compiler.  SpiderMonkey has
been the target of aggressive random testing for many years.  A
single fuzzing tool, \texttt{jsfunfuzz} \cite{jsfunfuzz}, is
responsible for identifying more than 1,700 previously unknown bugs in
SpiderMonkey \cite{jsfunfuzzbugs}.  SpiderMonkey is (and was) very actively
developed, with over 6,000 code commits in the period from 1/06 to 9/11
(nearly 4 commits/day).  SpiderMonkey is thus an ideal subject for evaluating a
quick test approach, using the last public release of the
\texttt{jsfunfuzz} tool, modified for swarm testing \cite{ISSTA12}.
Figures \ref{fig:before} and \ref{fig:after} show cause reduction by
statement coverage in action.  The first figure shows a short test
generated by {\tt jsfunfuzz}; the second shows a test case based on it,
produced by \emph{ddmin} using statement coverage as the effect.  Both
tests cover the same 9,625 lines of code.  While some reductions are easily
predictable (e.g., removal of the duplicated {\tt throw StopIteration}),
others are highly non-obvious, even to a developer.

\begin{figure}[t]
\begin{code}
tryItOut("with((delete \_\_proto\_\_))
          \{export \_\_parent\_\_;true;\}");
tryItOut("while((false for (constructor in false)))\{\}");
tryItOut("throw \_\_noSuchMethod\_\_;");
tryItOut("throw undefined;");
tryItOut("if(<><x><y/></x></>) \{null;\}else\{/x/;/x/g;\}");
tryItOut("\{yield;export \_\_count\_\_; \}");
tryItOut("throw StopIteration;");
tryItOut("throw StopIteration;");
tryItOut(";yield;");
\end{code}
\caption{{\tt jsfunfuzz} test case before statement coverage reduction}
\label{fig:before}
\end{figure}

\begin{figure}[t]
\begin{code}
tryItOut("with((delete \_\_proto\_\_))
          \{export \_\_parent\_\_;true;\}");
tryItOut("while((false for (constructor in false)))\{\}");
tryItOut("throw undefined;");
tryItOut("if(<><x><y/></x></>) \{null;\}else\{/x/;/x/g;\}");
tryItOut("throw StopIteration;");
tryItOut(";yield;");
\end{code}
\caption{{\tt jsfunfuzz} test case after statement coverage reduction}
\label{fig:after}
\end{figure}
  
%\subsection{Experimental Design}

The baseline test suite for SpiderMonkey is a set of 13,323 random
tests, produced during 4 hours of testing the 1.6 source release of
SpiderMonkey.  These tests constitute what is referred to below as the
{\bf Full} test suite.  Running the {\bf Full} suite is essentially
equivalent to generating new random tests of SpiderMonkey.  A reduced
suite with equivalent statement coverage, referred to as {\bf Min},
was produced by performing cause reduction on every test in {\bf
Full}.  The granularity of minimization was based on the semantic
units produced by \texttt{jsfunfuzz}, with 1,000 such units in each
test in {\bf Full}.  A unit is the code inside each {\tt tryItOut} call, approximately 1 line of code.  After reduction, the average test case size was just
over 122 semantic units, a bit less than an order of magnitude
reduction; while increases in coverage were allowed, in 99\% of cases
coverage was identical to the original test.  The computational cost
of cause reduction was, on contemporary hardware, similar to the costs
of traditional delta debugging reported in older papers, around 20
minutes per test case \cite{MinUnit}.  The entire process completed in
less than 4 hours on a modestly sized heterogeneous cluster (using
fewer than 1,000 nodes).  The initial plan to also minimize by branch
coverage was abandoned when it became clear that statement-based
minimization tended to almost perfectly preserve total suite branch
coverage. Branch-based minimization was also much slower, and typically
reduced test cases to only about 2/3 of their original size, vs. the
nearly 10x reduction achieved with statements.
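The coverage-preserving reduction described above can be sketched as a
standard \emph{ddmin} loop whose effect predicate is generalized from
``the test fails'' to ``the candidate preserves the baseline coverage.''
The following is an illustrative reconstruction, not the authors'
implementation; \texttt{coverage} is a hypothetical helper that would run
a candidate test under instrumentation and return its covered statements.

```python
# Illustrative sketch of cause reduction: a complement-only ddmin whose
# "effect" is any predicate over a candidate list of units.  `units` is
# the list of semantic units (tryItOut calls for jsfunfuzz tests).

def ddmin(units, effect):
    """Reduce `units` to a 1-minimal sublist still satisfying `effect`."""
    assert effect(units)
    n = 2                                   # current partition granularity
    while len(units) >= 2:
        chunk = len(units) // n
        subsets = [units[i:i + chunk] for i in range(0, len(units), chunk)]
        reduced = False
        for i in range(len(subsets)):
            complement = [u for j, s in enumerate(subsets) if j != i
                          for u in s]
            if effect(complement):          # chunk i can be removed
                units = complement
                n = max(n - 1, 2)
                reduced = True
                break
        if not reduced:
            if n >= len(units):             # single-unit granularity reached
                break
            n = min(n * 2, len(units))      # refine the partition
    return units

# For cause reduction, the effect predicate would be something like
#   baseline = coverage(original_units)
#   effect = lambda candidate: coverage(candidate) >= baseline
# where `coverage` (hypothetical) runs the candidate and returns the set
# of covered statements.  The DD-Min suite instead uses the classic
# predicate "the candidate fails with the same failure output".
```

Any monotone effect predicate works with the same loop, which is why the
coverage-preserving and failure-preserving variants share one algorithm.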

A third suite, referred to as {\bf DD-Min} (Delta Debugging
Minimized), was produced by taking all 1,631 failing test cases in
{\bf Full} and reducing them using \emph{ddmin} with the requirement
that the test case fail and produce the same failure output as the
original test case.  After removing numerous duplicate tests, {\bf
DD-Min} consisted of 1,019 test cases, with an average size of only
1.86 semantic units (the largest test contained only 9 units).
Reduction in this case only required about 5 minutes per test case.
Results below show why {\bf DD-Min} was not included in experimental
evaluation of quick test methods (essentially, it provided extremely
poor code coverage, leaving many very shallow bugs potentially
uncaught; it also fails to provide enough tests for a 5 minute
budget).

Two additional small suites, {\bf GE-ST(Full)} and {\bf GE-ST(Min)}
were produced by applying Chen and Lau's GE heuristic \cite{ChenLau}
for coverage-based suite minimization to the {\bf Full} and {\bf Min}
suites.  The GE heuristic first selects all test cases that are
essential (i.e., they uniquely cover some coverage entity), then
repeatedly selects the test case that covers the most additional
entities, until the coverage of the minimized suite is equal to the
coverage of the full suite (i.e., an additional greedy algorithm,
seeded with test cases that must be in any solution).  Ties are broken
randomly in all cases.
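The GE selection just described can be sketched in a few lines, assuming
each test's coverage is available as a set of entities; for illustration,
ties are broken deterministically here rather than randomly.

```python
# Sketch of the GE heuristic (Chen and Lau) as described above: first
# select every "essential" test (the unique coverer of some entity),
# then repeatedly add the test covering the most not-yet-covered
# entities until full-suite coverage is matched.  `suite` maps a test
# id to its set of covered entities.

def ge_select(suite):
    full_cov = set().union(*suite.values())
    selected = []
    for entity in sorted(full_cov):
        coverers = [t for t, cov in suite.items() if entity in cov]
        if len(coverers) == 1 and coverers[0] not in selected:
            selected.append(coverers[0])    # essential: sole coverer
    covered = set().union(set(), *(suite[t] for t in selected))
    while covered < full_cov:               # additional greedy phase
        best = max((t for t in suite if t not in selected),
                   key=lambda t: len(suite[t] - covered))
        selected.append(best)
        covered |= suite[best]
    return selected
```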

Finally, we combined GE-ST and cause reduction to produce the {\bf
GE-Min} suite, which was produced by first selecting tests using the
GE-ST algorithm on the full suite.  These tests were then, in the
order they were selected by the greedy additional algorithm, minimized
with the requirement that they \emph{cover all statements that were
their additional contribution in the greedy selection}.  That is,
rather than requiring each test to preserve its original statement
coverage, each test was only required to preserve coverage such that
the entire GE-Min suite would still have the same overall coverage.  
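The GE-Min construction can be sketched as follows; \texttt{coverage} and
\texttt{reduce\_fn} are hypothetical stand-ins for the instrumented test
run and the coverage-preserving \emph{ddmin}, and all names are
illustrative.

```python
# Sketch of GE-Min: each GE-selected test, taken in selection order, is
# cause-reduced under a weaker requirement -- it need only keep covering
# the statements that were its *additional* contribution during greedy
# selection, not its full original coverage.

def ge_min(ordered_tests, coverage, reduce_fn):
    seen = set()
    reduced_suite = []
    for test in ordered_tests:
        contribution = coverage(test) - seen   # what this test added
        seen |= coverage(test)
        keeps_contribution = lambda cand, c=contribution: c <= coverage(cand)
        reduced_suite.append(reduce_fn(test, keeps_contribution))
    return reduced_suite
```

Because later tests need only their own contribution, they can often be
reduced more aggressively than under full per-test coverage
preservation, while the suite as a whole retains its overall coverage.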

The evaluation measures for suites are: size (in \# tests), statement
coverage (ST), branch coverage (BR), function coverage (FN), number of
failing tests (\#Fail), and estimated number of faults (E\#F).  All
coverage measures were determined by running gcov (which was also used
to compute coverage for \emph{reffect}).  Failures were detected by
the various oracles in \texttt{jsfunfuzz} and, of course, detecting
crashes and timeouts.

Distinct faults detected by each suite were estimated using a binary
search over all source code commits made to the SpiderMonkey code
repository, identifying, for each test case, a commit such that: (1)
the test fails before the commit and (2) the test succeeds after the
commit.  With the provision that we have not performed extensive
hand-confirmation of the results, this is similar to the procedure
used to identify bugs in previous work investigating the problem of
ranking test cases such that tests failing due to different underlying
faults appear early in the ranking~\cite{PLDI13}.  This method is not
always precise. It is, however, uniform and has no obvious problematic
biases.  Its greatest weakness is that if two bugs are fixed in the
same check-in, they will be considered to be ``one fault''; the
estimates of distinct faults are therefore best viewed as \emph{lower}
bounds on actual distinct faults.  In practice, hand examination of
tests in previous work suggested that the results of this method are
fairly good approximations of the real number of distinct faults
detected by a suite.  Some bugs reported may be faults that developers
knew about but gave low priority; however, more than 80 failures
result in memory corruption, indicating a potential security flaw, and
all faults identified were fixed at some point during SpiderMonkey
development.
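The commit-bisection procedure amounts to the following sketch;
\texttt{passes(commit, test)} is a hypothetical helper that would build
that revision and run the test, and the search assumes the test fails
before its fixing commit and passes from it onward (monotone behavior,
which real histories can violate).

```python
# Sketch of the fault-estimation procedure: for each failing test,
# binary search the chronological commit list for the first commit at
# which the test passes; failing tests mapped to the same fixing commit
# count as one estimated fault.

def fixing_commit(commits, test, passes):
    lo, hi = 0, len(commits) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if passes(commits[mid], test):
            hi = mid            # fix is at mid or earlier
        else:
            lo = mid + 1        # still failing: fix is later
    return lo

def estimated_faults(commits, failing_tests, passes):
    return len({fixing_commit(commits, t, passes) for t in failing_tests})
```

Two bugs fixed in one check-in collapse to a single fixing commit, which
is why the estimate is a lower bound on distinct faults.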

In order to produce 30 second and 5 minute test suites (the extremes
of the likely quick test budget), it was necessary to choose subsets
of {\bf Full} and {\bf Min}.  The baseline approach is to randomly
sample a suite, an approach to test case prioritization used as a
baseline in numerous previous test case prioritization and selection
papers \cite{YooHarman}.  While a large number of plausible
prioritization strategies exist, we restricted our study to ones that
do not require analysis of faults, expensive mutation testing, deep
static analysis, or in fact any tools other than standard code
coverage.  As discussed above, we would like to make our methods as
lightweight and generally applicable as possible.  We therefore chose
four coverage-based prioritizations from the literature
\cite{YooHarman,RothMin}, which we refer to as $\Delta$ST, $|$ST$|$,
$\Delta$BR, and $|$BR$|$.  $\Delta$ST indicates a suite ordered by the
incremental improvement ($\Delta$) in statement coverage offered by each test
over all previous tests (an additional greedy algorithm), while
$|$ST$|$ indicates a suite ordered by the absolute statement coverage
of each test case (a pure greedy algorithm).  The first test executed
for both $\Delta$ST and $|$ST$|$ will be the test with the highest
total statement coverage.  $\Delta$BR and $|$BR$|$ are similar, except
ordered by different coverage.
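The two prioritization families can be contrasted in a short sketch;
\texttt{suite} maps a test id to its set of covered entities, and
tie-breaking is deterministic here purely for illustration.

```python
# |ST| orders tests by absolute coverage (pure greedy), while Delta-ST
# repeatedly picks the test adding the most entities beyond those
# already scheduled (additional greedy).  Both therefore begin with the
# single highest-coverage test.

def absolute_order(suite):
    return sorted(suite, key=lambda t: len(suite[t]), reverse=True)

def delta_order(suite):
    remaining, covered, order = dict(suite), set(), []
    while remaining:
        best = max(remaining, key=lambda t: len(remaining[t] - covered))
        order.append(best)
        covered |= remaining.pop(best)
    return order
```

The orders diverge as soon as a high-coverage test adds little beyond
what is already scheduled, which is exactly the redundancy that makes
absolute prioritization weak.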

Finally, a key question for a quick test method is how long quick
tests remain effective.  As code changes, a cause reduction and
prioritization based on tests from an earlier version of the code will
(it seems likely) become obsolete.  Bug fixes and new features
(especially optimizations in a compiler) will cause the same test case
to change its coverage, and over time the basic structure of the code
may change; SpiderMonkey itself offers a particularly striking case of
code change: between release version 1.6 and release version 1.8.5,
the vast majority of the C code-base was re-written in C++.  All
experiments were therefore performed not only on SpiderMonkey 1.6, the
baseline for cause reduction, but applied to ``future'' (from the
point of view of quick test generation) versions of the code.  The
first two versions are internal source commits, not release versions
(NR for non-release), dating from approximately two months (2/24/2007)
and approximately four months (4/24/2007) after the SpiderMonkey 1.6
release (12/22/2006).  When these versions showed that quick tests
retained considerable power, it indicated that a longer lifetime than
we had hoped for might be possible.  The final two versions of
SpiderMonkey chosen were therefore the 1.7 release version
(10/19/2007) and the 1.8.5 release version (3/31/2011).  Note that all
suites were reduced and prioritized based on the 1.6 release code; no
re-reduction or re-prioritization was ever applied.

\subsection{Results:  An Effective Quick Test?}

\begin{table}
\caption{SpiderMonkey 30s Test Budget Mean Results}
\label{tab:run30s}
\centering
\begin{tabular}{|c|c||c|c|c|c|c|c|}
\hline
Ver. & Suite & Size & ST & BR & FN & \#Fail & E\#F \\
\hline
\hline
\input{js1.6.30s.table}
\multicolumn{8}{c}{} \\
\hline
\input{js2.24.30s.table}
\multicolumn{8}{c}{} \\
\hline
\input{js4.24.30s.table}
\multicolumn{8}{c}{} \\
\hline
\input{js1.7.30s.table}
\multicolumn{8}{c}{} \\
\hline
\input{js1.8.5.30s.table}
\multicolumn{8}{c}{} \\
\end{tabular}
\\
{\scriptsize \bf Legend:  ST=Statement Cov.; BR=Branch Cov.; FN=Func. Cov.; \#Fail=Num. Failing Tests; E\#F=Est. Num. Distinct Faults; 
Full/F=Original Suite; Min/M=\emph{ddmin}(Full, ST Cov.); $\Delta$ST=Inc. ST Cov. Prioritization, $\Delta$BR=Inc. BR Prior.}
\vspace{-0.1in}
\end{table}

\begin{table}
\caption{SpiderMonkey 5m Test Budget Mean Results}
\label{tab:run5m}
\centering
\begin{tabular}{|c|c||c|c|c|c|c|c|}
\hline
Ver. & Suite & Size & ST & BR & FN & \#Fail & E\#F \\
\hline
\hline
\input{js1.6.5m.table}
\multicolumn{8}{c}{} \\
\hline
\input{js2.24.5m.table}
\multicolumn{8}{c}{} \\
\hline
\input{js4.24.5m.table}
\multicolumn{8}{c}{} \\
\hline
\input{js1.7.5m.table}
\multicolumn{8}{c}{} \\
\end{tabular}
\vspace{-0.3in}
\end{table}


\begin{figure}
  \centering
  \includegraphics[width=\columnwidth]{scov}
  \caption{ST coverage for 30s and 5m quick tests across SpiderMonkey versions}
  \label{fig:scov}
\vspace{-0.1in}
\end{figure}


Table \ref{tab:unlimited} provides information on the base test suites
across the five versions of SpiderMonkey studied.  Tables
\ref{tab:run30s} and \ref{tab:run5m} show how each proposed quick test
approach performed on each version, for 30 second and 5 minute test
budgets, respectively.  All nondeterministic or time-limited
experiments were repeated 30 times.  The differences between the
minimized (M) and full (F) suites for each method and budget are
statistically significant at the 95\% level, under a two-tailed U-test,
with only one exception: the improvement in fault detection for the
non-prioritized suites on the 4/24 version is not significant.
best results for each suite attribute, SpiderMonkey version, and test
budget combination are shown in bold (ties are only shown in bold if
some approaches did not perform as well as the best methods).  Results
for absolute coverage prioritization are omitted from the table to
save space, as $\Delta$ prioritization always performed much better,
and absolute often performed worse than random selection.  Results for
version 1.8.5 are also omitted from the 5 minute budget results as the
30 second results suffice to show that minimized tests and
prioritizations based on version 1.6 are, as expected, not as useful
after 4 additional years of development, though still sometimes
improving on the full suite.

The results are fairly striking.  First, a purely failure-based quick
test such as was used at NASA ({\bf DD-Min}) produces very poor code
coverage (e.g., covering almost 100 fewer \emph{functions} than the
original suite, and over 3,000 fewer branches).  It also loses fault
detection power rapidly, only finding $\sim$7 distinct faults on the
next version of the code base, while suites based on all tests can
detect $\sim$20-$\sim$36 faults.  Given its extremely short runtime,
retaining such a suite as a pure regression may be useful, but it
cannot be expected to work as a good quick test.  Second, the suites
greedily minimized by statement coverage ({\bf GE-ST(Full)} and {\bf
GE-ST(Min)}) are very quick, and potentially useful, but lose a
large amount of branch coverage and do not provide enough tests to
fill a 5 minute quick test.  In the 30 second and 5 minute budget
experiments, the benefits of suite minimization by statement (or
branch) coverage are represented by the $\Delta$ prioritizations, which
produce the same results; the one difference is that, for short
budgets, tests included because they uniquely cover some entity are
less likely to be selected than under random sampling of the minimized
suites.

The most important total suite result is that the cause reduced {\bf
Min} suite retains (or improves!) many properties of the {\bf Full}
suite that are \emph{not} guaranteed to be preserved by our modified
\emph{ddmin} algorithm.  For version 1.6, only 5 branches are
``lost'', and (most strikingly) the number of failing test cases is
\emph{unchanged}.  Most surprisingly, the estimated distinct fault
detection is \emph{improved}: it has grown from $\sim$22 faults to
$\sim$43 faults.  The difference in results is highly statistically
significant: dividing the test populations into 30 equal-sized
randomly selected test suites for both full and minimized tests we
find that the average minimized suite detects 11.83 distinct faults on
average, while the average full suite only detects 7.6 faults, with a
$p$-value of $5.2 \cdot 10^{-10}$ under U-test.  It is difficult to
believe that any bias in the fault estimation method produces this
strong an effect.  Our best hypothesis as to the cause of the
remarkable failure preservation level is that \emph{ddmin} tends to
preserve failure because failing test cases have unusually \emph{low}
coverage in many cases.  Since the \emph{ddmin} algorithm attempts to
minimize test size, this naturally forces it to attempt to produce
reduced tests that also fail; moreover, some failures execute internal
error-handling code (though many, for example the numerous test cases
violating \texttt{jsfunfuzz} semantic checks, do not).  The
apparent increased diversity of faults, however, is surprising and
unusual, and suggests that the use of \emph{ddmin} as a test
mutation-based fuzzing tool might be a promising area for future
research.  In retrospect, it is obvious that \emph{ddmin} takes as
input a test case and generates a large number of related, but
distinct, new test cases --- it is, itself, a test case generation
algorithm.  It seems safe to say that the new suite is essentially as
good at detecting faults and covering code, with much better runtime
(and therefore better test efficiency \cite{gupta-jalote-sttt06}).  
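The suite-level comparison above rests on a rank-based test; the
following pure-Python sketch computes the Mann-Whitney U statistic over
per-suite fault counts (midranks for ties), leaving the conversion of U
to a $p$-value to a statistics package.

```python
# Sketch of the two-sample comparison used above: the Mann-Whitney U
# statistic for two groups of suites.  Ties receive midranks; only the
# statistic itself is shown, not the p-value computation.

def mann_whitney_u(xs, ys):
    pooled = sorted((v, idx) for idx, v in enumerate(xs + ys))
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j][0] == pooled[i][0]:
            j += 1                      # [i, j) is a block of tied values
        midrank = (i + 1 + j) / 2.0     # average of 1-based ranks i+1..j
        for k in range(i, j):
            ranks[pooled[k][1]] = midrank
        i = j
    r1 = sum(ranks[:len(xs)])           # rank sum of the first sample
    return r1 - len(xs) * (len(xs) + 1) / 2.0
```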

Figure \ref{fig:scov} graphically exhibits the raw differences in
statement coverage for the suites sampled as quick tests, ignoring the
effects of prioritization, with 1 standard-deviation error bars on
points.  The power of coverage-based cause reduction can be seen in
Tables \ref{tab:run30s} and \ref{tab:run5m} by comparing
``equivalent'' rows for any version and budget: results for each
version are split so that {\bf Full} results are the first three rows
and the corresponding prioritizations for the {\bf Min} tests are the
next three rows.  For the first three versions tested, it is almost
always the case that for every measure, the reduced suite value is
better than the corresponding full suite value.  For 30s budgets this
comparison even holds true for the 1.7 version, nearly a year
later. Moving from 1.6 to 1.7 involves over 1,000 developer commits
and the addition of 10,000+ new lines of code (a 12.5\% increase).  In
practice, it is highly unlikely that developers would lack the chance
to produce a better baseline on more similar code over a four year
period (or, for that matter, over any one month period). The absolute
effect size, as measured by the lower bound of a 95\% confidence
interval, is often large -- typically 500+ lines and branches and 10
or more functions, and in a few cases more than 10 faults.

It is difficult to generalize from one subject, but based on the
SpiderMonkey results, we believe that a good initial quick test
strategy to try for other projects would be to combine cause reduction
by statement coverage with test case prioritization by either $\Delta$
statement or branch coverage.  In fact, limitation of quick tests to
very small budgets may not be critical.  Running only 7 minutes of
minimized tests on version 1.6 detects an average of twice as many
faults as running 30 minutes of full tests and has (of course)
indistinguishable average statement and branch coverage.  The
difference is significant with $p$-value of $2.8 \cdot 10^{-7}$ under
a U-test.  In general, for SpiderMonkey versions close to the
baseline, running $N$ minutes of minimized tests, however selected,
seems likely to be much better than running $N$ minutes of full tests.
The real limitation is probably how many minimized tests are available
to run, due to the computational cost of minimizing tests.


\section{YAFFS 2.0 Flash File System Case Study}

\begin{table}[t]
\centering
\scriptsize{
\caption{YAFFS2 Results}
\label{tab:yaffs}
\begin{tabular}{|c||c|c|c|c|c|c|}
\hline
Suite & Size & Time(s) & ST & BR & FN & MUT \\
\hline
\hline
Full & 4,240 & 729.032 & 4,049 & 1,925 & 332 & 616 \\
\hline
Min & 4,240 & 402.497 & 4,049 & 1,924 & 332 & 611 \\
\hline
\hline
\hline
Full(F) & 174.4 & 30.0 & 4,007.367 & 1,844.0 & 332.0 & 568.3 \\
\hline
F+$\Delta$ST& 372.5 & 30.0 & {\bf 4,049.0} & 1,918.0 & 332.0 & 594.0 \\
\hline
F+$\Delta$BR& 356 & 30.0 & {\bf 4,049.0} & {\bf 1,925.0} & 332.0 & {\bf 596.0} \\
\hline
F+$|$ST$|$& 112.5 & 30.0 & 4,028.0 & 1,889.0 & 332.0 & 589.0 \\
%\hline
%F+$|$BR$|$& 112.5 & 30.0 & 4,028.0 & 1,889.0 & 332.0 & 589.0 \\
\hline
\hline
Min(M) & 315.8 & 30.0 & 4,019.7 & 1,860.5 & 332.0 & 559.0 \\
\hline
M+$\Delta$ST & {\bf 514.7} & 30.0 & {\bf 4,049.0} & 1,912.0 & 332.0 & 571.0 \\
\hline
M+$\Delta$BR & 500.0 & 30.0 & {\bf 4,049.0} & 1,924.0 & 332.0 & 575.0 \\
\hline
M+$|$ST$|$& 255.0 & 30.0 & 4,028.0 & 1,879.0 & 332.0 & 552.0 \\
%\hline
%M+$|$BR$|$& 255.0 & 30.0 & 4,028.0 & 1,879.0 & 332.0 & 552.0 \\
\hline
\hline
\hline
Full(F) & 1,746.8 & 300.0 & 4,044.7 & 1,916.0 & 332.0 & 608.7 \\
\hline
F+$\Delta$ST & 2,027.0 & 300.0 & {\bf 4,049.0} & 1,921.0 & 332.0 & 601.0 \\
\hline
F+$\Delta$BR & 2,046.0 & 300.0 & {\bf 4,049.0} & {\bf 1,925.0} & 332.0 & 604.0 \\
\hline
F+$|$ST$|$ & 1,416.0 & 300.0 & 4,042.0 & 1,916.0 & 332.0 & {\bf 611.0} \\
%\hline
%F+$|$BR$|$ & 1,416.0 & 300.0 & 4,042.0 & 1,916.0 & 332.0 & {\bf 611.0} \\
\hline
\hline
Min(M) & 3,156.6 & 300.0 & 4,048.1 & 1,920.0 & 332.0 & 607.1 \\
\hline 
M+$\Delta$ST & {\bf 3,346.0} & 300.0 & {\bf 4,049.0} & 1,924.0 & 332.0 & 601.0 \\
\hline
M+$\Delta$BR & 3,330.0 & 300.0 & {\bf 4,049.0} & 1,924.0 & 332.0 & 605.0 \\
\hline
M+$|$ST$|$ & 2,881.7 & 300.0 & {\bf 4,049.0} & 1,924.0 & 332.0 & {\bf 611.0} \\
%\hline
%M+$|$BR$|$ & 2,881.7 & 300.0 & {\bf 4,049.0} & 1,924.0 & 332.0 & {\bf 611.0} \\
\hline
\end{tabular}
}
\end{table}

YAFFS2~\cite{yaffs2} is a popular open-source NAND flash file system
for embedded use; it was the default image format for early versions
of the Android operating system.  Lacking a large set of real faults
in YAFFS2, we applied mutation testing to check our claim that cause
reduction not only preserves source code coverage, but tends to
preserve fault detection and other useful properties of randomly
generated test cases. The evaluation used 1,992 mutants, randomly
sampled from the space of all 15,246 valid YAFFS2 mutants, using the C
mutation software shown to provide a good proxy for fault
detection~\cite{mutant}, with a sampling rate (13.1\%) above the 10\%
threshold suggested in the literature~\cite{MutRand}.  Sampled mutants
were not guaranteed to be killable by the API calls and emulation mode
tested.  Table \ref{tab:yaffs} shows how full and quick test suites
for YAFFS2 compared.  MUT indicates the number of mutants killed by a
suite.  Results for $|$BR$|$ are omitted, as absolute prioritization
by branch coverage produced an equivalent suite to absolute
prioritization by statement coverage.  Runtime reduction for YAFFS2
was not as high as with SpiderMonkey tests (runtime roughly halved,
vs. cut by 3/4 for SpiderMonkey),
due to a smaller change in test size and higher relative cost of test
startup.  The average length of original test cases was 1,004 API
calls, while reduced tests averaged 213.2 calls.  The most likely
cause of the smaller reduction is that the YAFFS2 tester uses a
feedback~\cite{ICSEDiff} model to reduce irrelevant test
operations. Basic retention of desirable aspects of {\bf Full} was,
however, excellent: only one branch was ``lost'', function coverage
was perfectly retained, and 99.1\% as many mutants were killed.  The
reduced suite killed 6 mutants not killed by the original suite.  We
do not know if mutant scores are good indicators of the ability of a
suite to find, e.g., subtle optimization bugs in compilers.  Mutant
kills \emph{do} seem to be a reliable method for estimating the
ability of a suite to detect many of the shallow bugs a quick test
aims to expose before code is committed or subjected to more testing.
Even with lesser efficiency gains, cause reduction plus
\emph{absolute} coverage prioritization is by far the best way to
produce a 5 minute quick test, maximizing 5-minute mutant kills
without losing code coverage.  \emph{All} differences in methods were
significant, using a two-tailed U-test (in fact, the highest $p$-value
was $0.0026$).
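The mutation-based evaluation amounts to the following sketch;
\texttt{kills(suite, mutant)} is a hypothetical helper that would build
the mutant and run the suite against it, and the sampling here is a
generic fixed-rate draw rather than the paper's exact procedure.

```python
# Sketch of mutant sampling and scoring: draw a fixed-rate random
# sample from the mutant pool, then count how many sampled mutants a
# suite kills (the MUT column of the YAFFS2 table is a count of this
# form over the 1,992 sampled mutants).

import random

def sample_mutants(all_mutants, rate, seed=0):
    rng = random.Random(seed)                  # fixed seed: reproducible
    return rng.sample(all_mutants, round(len(all_mutants) * rate))

def mutation_score(suite, mutants, kills):
    return sum(1 for m in mutants if kills(suite, m))
```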

\section{GCC: The Potentially High Cost of Reduction}

Finally, we attempted to apply cause reduction to test cases produced
by Csmith \cite{csmith} using the GCC 4.3.0 compiler (released
3/5/2008), using C-Reduce \cite{CReduce} modified to attempt only
line-level reduction, since we hypothesized that reducing C programs
would be more expensive than reducing SpiderMonkey or YAFFS2 test
cases, which have a simpler structure.  Our hypothesis proved more
true than we had anticipated: after 6 days of execution (on a single
machine rather than a cluster), our reduction produced only 12 reduced
test cases!  The primary problem is twofold: first, each run of GCC
takes longer than the corresponding query for SpiderMonkey or YAFFS2
tests, due to the size and complexity of GCC (tests are covering 161K+
lines, rather than only about 20K as in SpiderMonkey) and the inherent
start up cost of compiling even a very small C program.  Second, the
test cases themselves are larger --- an average of 2,222 reduction
units (lines) vs. about 1,000 for SpiderMonkey and YAFFS --- and
reduction fails more often than with the other subjects.

While 12 reduced test cases do not make for a particularly useful data
set, the results for these instances did support the belief that
reduction with respect to statement coverage preserves interesting
properties.  First, the 12 test cases selected all crashed GCC 4.3.0
(with 5 distinct faults, in this case confirmed and examined by hand);
after reduction, the test cases were reduced in size by an average of
37.34\%, and all tests still crashed GCC 4.3.0 with the same faults.
For GCC 4.4.0 (released 4/21/2009), no test cases in either suite
caused the compiler to fail, and the reduced tests actually covered
419 \emph{more} lines of code when compiled. Turning to branch
coverage, an even more surprising result appears: the minimized tests
cover an additional 1,034 branches on GCC 4.3.0 and an additional 297
on 4.4.0.  Function coverage is also \emph{improved} in the minimized
suite for 4.4.0: 7,692 functions covered in the 12 minimized tests
vs. only 7,664 for the original suite.  Unfortunately the most
critical measure, the gain in test efficiency, was marginal: for GCC
4.3.0, the total compilation time was 3.23 seconds for the reduced
suite vs. 3.53 seconds for the original suite, though this improved to
6.35s vs 8.78s when compiling with GCC 4.4.0. Even a 37.34\% size
reduction does not produce large runtime improvement, due to the high
cost of starting GCC.  However, the added value of the reduced tests
is high enough that we are (1) rewriting portions of C-Reduce to
execute much faster and (2) planning to devote a large computing
budget to minimizing a high-coverage Csmith-produced suite for the
latest versions of GCC and LLVM.  It is unclear if 5 minutes of
testing, even after coverage prioritization, will be a strong
regression, but a stable, more efficient ``snapshot'' of good random
tests for critical infrastructure compilers will be a valuable
contribution to GCC and LLVM's already high-quality test suites.

\subsection{Threats to Validity}

First, we caution that cause reduction by coverage is intended to be
used on the highly redundant, inefficient tests produced by aggressive
random testing.  While random testing is sometimes highly effective
for finding subtle flaws in software systems, and essential to
security-testing, by its nature it produces test cases open to extreme
reduction.  It is likely that human-produced test cases (or test cases
from directed testing that aims to produce short tests) would not
reduce well enough to make the effort worthwhile.  The quick test
problem is formulated specifically for random testing, though we
suspect that the same arguments also hold for model checking traces
produced by SAT or depth-first-search, which also tend to be long and
redundant.  The primary threat to validity is that experimental
results are based on one large case study on a large code base over
time, one mutation analysis of a smaller but also important and widely
used program, and a few indicative tests on a very large system, the
GCC compiler.
