\chapter{Evaluation}
\label{ch:Evaluation}

\section{Evaluation}

This section presents the evaluation of \tname{}.
Section~\ref{sec:subjects} describes the subjects we used,
Section~\ref{sec:setup} describes the experimental setup,
Section~\ref{sec:results} presents the results, and
Section~\ref{sec:threats} discusses threats to validity.

\begin{table*}[t]
\centering
\input{chapters/eval/subjects}
\vspace{1ex}
\caption{\label{table:main}Subjects organized in
  groups. \textbf{Group~1}: Subjects used in previous studies with one
  documented race.  \textbf{Group~2}: Subjects with errors added by
  volunteers. \textbf{Group~3}: Subjects with more than one race.}
\figtabsep
\end{table*}

\subsection{Subjects}
\label{sec:subjects}

Table~\ref{table:main} describes the \numSubjects{} subjects we used
to evaluate our approach.  We organized the subjects into three
groups.  Group 1 includes subjects of various sizes and complexity
used in related
studies~\citep{doESE05,jpapa,java-grande-forum-web-page,pjbench}.
Group 2 includes subjects, also from a variety of sources, that have
been modified by volunteers.  We asked the volunteers \emph{not} to
use \jpf{} or other tools to guide their changes and asked them to
create faults that could lead to race conditions.  Experience with
concurrent programming varied across volunteers.  Group 3 includes
subjects from the benchmark used to evaluate the PECAN
predictive-analysis tool~\citep{huang-zhang-issta2011}.  In
Table~\ref{table:main}, column ``group'' shows the subject group,
column ``subject'' shows the subject name, column ``source'' indicates
the source from which the subject was obtained, column ``\#loc'' shows
the number of lines of code~(computed with
JavaNCSS~\citep{javancss:2010}), and column ``description'' explains
its purpose.  Each subject contains a test driver (i.e., a main
function) that sets up the execution environment.  The test drivers as
well as the original and faulty versions of each subject are available
online.




\subsection{Setup}
\label{sec:setup}

\vspace{0.5ex}\noindent\textbf{Test drivers.}  We changed the test
drivers of some subjects from group 1 to make the number of threads a
parameter.  For these subjects, we report results for the number of
threads $n$ at which \jpf{} takes more than 10 minutes to find the
error, and also for $n-1$ threads.  The rationale is to show how fast
the \stsp{} grows with the number of threads, up to the point where
exploration time becomes an issue.  We did \emph{not} change the
drivers for the remaining subjects.
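As a concrete illustration of such a parameterized driver, the sketch below spawns a configurable number of threads that perform unprotected writes to a shared field.  The class and field names are hypothetical illustrations, not taken from the actual subjects:

```java
// Hypothetical sketch of a parameterized test driver: the thread count is a
// program argument, so the size of the state space can be scaled up.
public class AccountDriver {
    static int amount = 0;   // shared field; the writes below are unprotected

    static void run(int n) {
        Thread[] workers = new Thread[n];
        for (int i = 0; i < n; i++) {
            workers[i] = new Thread(() -> amount += 10);  // racy read-modify-write
            workers[i].start();
        }
        for (Thread t : workers) {
            try { t.join(); } catch (InterruptedException ignored) { }
        }
    }

    public static void main(String[] args) {
        int n = args.length > 0 ? Integer.parseInt(args[0]) : 2;  // thread count from input
        run(n);
        System.out.println("threads=" + n + " amount=" + amount);
    }
}
```

Growing $n$ multiplies the number of possible interleavings, which is what makes the model checker's exploration time explode for the larger configurations.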

\vspace{0.5ex}\noindent\textbf{JPF.}  We ran each test driver on
\jpf{} until it found the error or ran out of memory.  We set \jpf{}
to use its precise race detector implementation
(\CodeIn{gov.nasa.jpf.listener.PreciseRaceDetector}) and used default
values for all other parameters.  For all experiments, we used an
Intel i5 machine running Ubuntu 11.04 and ran \jpf{} with a maximum of
1GB of memory.
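For reference, this run configuration can be captured in a JPF application-properties file along the following lines; the \CodeIn{target} and \CodeIn{classpath} values are placeholders, and only the listener line reflects the setup described above:

```properties
# Sketch of a JPF application-properties (.jpf) file; target and classpath
# are illustrative placeholders, only the listener reflects our setup.
target = AccountDriver
classpath = build/
listener = gov.nasa.jpf.listener.PreciseRaceDetector
```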

\vspace{0.5ex}\noindent\textbf{Volunteers.}  The volunteers knew the
nature and goals of the experiments but not the details of \tname{}'s
algorithm; they only knew that their changes would be used to evaluate
data-race detection tools.


\newcommand{\cc}{\cellcolor[gray]{0.9}}

\lstset{escapeinside={\%*}{*)}}
\subfiguretopcaptrue
\begin{figure*}[t]
  \centering
  \begin{small}
    \input{results_d}
    \caption{\label{tab:results}Comparison of \tname{} with \jpf{}
      (using depth-first search), FindBugs, and \jlint{}.  The label ``-''
      indicates that the first positive warning was found within 5s, and
      the label X-Y indicates that out of X warnings, Y are positive.}
  \end{small}
  \figtabsep{}
\end{figure*}



\subsection{Results}
\label{sec:results}

This section presents the results of our experiments with \tname{}.
Section~\ref{sec:jpf} evaluates \tname{} and \jpf{} on subjects from
groups 1 and 2, Section~\ref{sec:search-strategies} presents results
of \tname{} when using breadth-first search,
Section~\ref{sec:num-threads} elaborates on the impact of increasing
the number of threads on \tname{}, Section~\ref{sec:num-warnings}
discusses the effect of the progress of the \stsp{} exploration on the
number of reported warnings, Section~\ref{sec:errors-removed}
discusses how \tname{} performs when we remove seeded errors,
Section~\ref{sec:multiple-errors} presents results of \tname{} for
subjects with multiple races (from group 3),
Section~\ref{sec:guided-search} presents the results of our
experiments on using heuristic search guided by the warnings of
\tname{}, Section~\ref{sec:time-overhead} presents the time overhead
of \tname{}, and finally Section~\ref{sec:fbugs-jlint} compares
\tname{} with static pattern-based bug-finding tools that also report
warnings to the user.

%% \subsubsection{\label{sec:comparison}Comparison with \jpf{}, FindBugs,
%%   and \jlint{}.}  

%\vspace{1ex}\noindent\textbf{Comparison with \jpf{}.} 

\subsection{Comparison with \jpf{}}
\label{sec:jpf}
We evaluate \tname{}'s effectiveness by measuring (a) how \emph{fast}
it reports \emph{useful} warnings compared to \jpf{} and (b) how
precise the reported warnings are.  Figure~\ref{tab:results} shows the
results of our comparison.  Column ``subject'' shows the subject name,
column ``err\#'' shows the identifier of the error, and column
``\#thrs.'' shows the number of threads involved in the experiment.
Column ``Rabbit'' shows the results of our approach; subordinate
columns ``5s'' and ``next'' show, respectively, the warnings found
within 5 seconds and the time required to find the first
\emph{positive} warning when none is found within 5 seconds.  The
notation $x-y$ indicates that out of $x$ warnings reported, $y$ are
positive; the difference between these values is the number of false
alarms.  Column ``JPF'' shows the time that \jpf{} required to find
the error under different configurations: column ``default (DFS)''
corresponds to the default configuration of \jpf{}, and columns ``H*''
correspond to runs of \jpf{} that use search heuristics driven by the
output of \tname{}.  Columns ``FB'' and ``\jlint{}'' show the warnings
reported by FindBugs and \jlint{}, respectively.  The notation
``NOERR<<~$t$'' indicates that \jpf{} terminated in time $t$ without
reporting errors, and the label ``OME<<~$t$'' indicates that \jpf{}
terminated in time $t$ with an out-of-memory error.  We make the
following observations from these results:

%\begin{itemize}
\begin{list}{\labelitemi}{\leftmargin=1em}

%% \item Overall, the results for the small subjects from group 1 show
%%   that JPF can find errors very fast for small \stsp{}s.  When we
%%   increase the size of the input (\#~threads) \tname{} is still able
%%   to quickly report a small number of warnings to inspect.

\item All of the cases for which \tname{} reports false alarms result
  from its inability to detect causal dependencies, as discussed in
  Section~\ref{sec:example}.  However, for the subjects we considered,
  we observed that the false-alarm ratios are typically very low.

\item For \CodeIn{alarmclock}, the gap between \tname{} reporting the
  first positive warning and \jpf{} finding the error is not as large
  as in other cases.  This happened because \jpf{} took relatively
  long to cover an important memory access that enabled \tname{} to
  emit a positive report; after covering that access, \jpf{} did not
  take much longer to find the race.

%% \item For \CodeIn{raytracer} error 1, \tname{} ran out of memory (as
%%   JPF) without reporting any warning.  That happened because, as
%%   occurred to \CodeIn{alarmclock}, the sensitive access is not easily
%%   covered with the \stsp{} exploration.

%% \item For 7 out of the 14 cases of large subjects from groups 3 and 4,
%%   JPF found the error in less than 1min.  That occurred either because
%%   the application was highly parallel (e.g., \CodeIn{montecarlo} and
%%   \CodeIn{raytracer}) and JPF could identify thread-safe memory
%%   accesses to avoid generation of non-deterministic choices or because
%%   the volunteer unintentionally created errors that were detected in
%%   the initial states that \jpf{} explored.


\item For \CodeIn{tsp}, \tname{} reported a positive warning for an
  error that \jpf{} missed.  Across the exploration, distinct threads
  access the shared variable holding the current best path while
  solving the TSP instance.  However, in any particular execution
  schedule explored, only one thread writes to that variable: for the
  given input and test driver, the first spawned worker thread always
  finds the best path.  Consequently, \jpf{} misses the error for this
  input and test driver.  \tname{} reports a warning because it
  observes that threads with different ids access the variable without
  proper protection; it ignores the fact that such accesses do not
  coexist in the same run.  This scenario is indeed possible for this
  subject (i.e., multiple worker threads modifying the best-path
  variable) and could be manifested in \jpf{} with different inputs.


%% \item Considering all cases we analyzed, \tname{} missed the error
%%   only in one case (\CodeIn{raytracer} error \# 1) and both the ratio
%%   of false alarms as well as the number of warnings reported to the
%%   user were low.

\item \tname{} reports accurate warnings within 5s for 20 of the 24
  cases we analyzed from Figure~\ref{tab:results}.  For 14 of these 24
  cases, \jpf{} takes more than 10 minutes to find the error (9
  cases), runs out of memory (4 cases), or misses the error (1 case).

%\end{itemize}
\end{list}
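The racy best-path pattern described above for \CodeIn{tsp} can be sketched as follows; the class and method names are ours, not the subject's actual code:

```java
// Illustrative sketch of the tsp best-path pattern: any worker may update the
// shared best value, but for a fixed input only one worker ever wins the check.
class BestPath {
    static int bestLen = Integer.MAX_VALUE;  // shared among workers, unguarded

    static void offer(int len) {
        // check-then-act without a lock: two workers interleaving here form
        // exactly the unprotected access pattern the warning flags, even if,
        // for a given input, the accesses never coexist in one schedule
        if (len < bestLen) {
            bestLen = len;
        }
    }
}
```

With a different input, two workers could both pass the check before either writes, losing an update; this is the scenario \tname{} warns about even though \jpf{} never observes it for the evaluated input.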



\Comment{
Note that the number of threads is a parameter for subjects from group
1.  Therefore, we present results for two cases per subject, one per
row, each one using a different number of threads. In a similar vein,
all but one of the subjects of Group 4 have two different errors, each
one being the cause of different races. For each run of the
experiments, only one of these errors was enabled. Hence, we present
results for each error in a different row.} 




%% \lstset{escapeinside={\%*}{*)}}
%% \subfiguretopcaptrue
%% \begin{figure}[t]
%%   \centering
%%   \begin{small}
%%     \input{results_c}
%%     \caption{\label{tab:results:bigone}Results for larger non-modified
%%       subjects.}
%%   \end{small}
%%   \figtabsep{}
%% \end{figure}

%% \lstset{escapeinside={\%*}{*)}}
%% \subfiguretopcaptrue
%% \begin{figure}[t]
%%   \centering
%%   \begin{small}
%%     \input{results_b}
%%     \caption{\label{tab:results:big}Results for modified subjects.}
%%   \end{small}
%%   \figtabsep{}
%% \end{figure}

%%\subsubsection{\label{sec:noerrors}Accuracy with other search strategies.} 


%\vspace{1ex}\noindent\textbf{Accuracy with other search strategies.}

\subsection{Impact of search strategy}
\label{sec:search-strategies}

We evaluated \tname{} with breadth-first search (BFS).  As expected,
the search strategy can influence the results of \tname{} the same way
it can influence the time \jpf{} takes to find the error.  Notably,
for all of the \badCasesDFS{} cases from Figure~\ref{tab:results}
where \tname{} with depth-first search (DFS) could \emph{not} find a
positive warning within 5s (see column ``next''), \tname{} with BFS
reported the positive warning within 5s.  For other cases, however,
\tname{} with DFS performed better than \tname{} with BFS, as is to be
expected when comparing search strategies.
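Switching the exploration order amounts to changing \jpf{}'s \CodeIn{search.class} property; a configuration along these lines selects a breadth-first strategy, although the exact class name should be checked against the installed jpf-core version:

```properties
# Sketch: select a breadth-first exploration order (class name should be
# verified against the jpf-core version in use; DFSearch is the default).
search.class = gov.nasa.jpf.search.heuristic.BFSHeuristic
```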

%% Considering the variation in results due to the choice of search
%% strategy and the fact that \tname{} often reports positive warning
%% early, we observed that it is beneficial to run \tname{} multiple
%% times with different search strategies for a short period before
%% running it with the user-informed strategy for longer duration.

%% \begin{table*}[t]
%% \centering
%% \input{results_bfs}
%% \vspace{1ex}
%% \caption{\label{table:main}Subjects organized in
%%   groups. \textbf{Group~1}: Small non-modified.  \textbf{Group~2}:
%%   Larger non-modified. \textbf{Group~3}: Small modified.
%%   \textbf{Group~4}: Larger modified.}
%% \end{table*}
%%\subsubsection{\label{sec:noerrors}False alarms with errors
%%removed.}
%\vspace{1ex}\noindent\textbf{Subjects with multiple errors.}

\subsection{Impact of number of threads}
\label{sec:num-threads}
We evaluated the impact of input size, more specifically of the number
of threads involved, on the number and precision of warning reports.
For that, we used the subjects from group 1 that are parametric in the
number of threads and varied the number of threads from 15 to 20.  For
all cases except \CodeIn{alarmclock}, the number and precision of the
reports remained the same.  For \CodeIn{alarmclock}, the size of the
\stsp{}, which grows with the number of threads, was an important
factor in determining the effectiveness of \tname{}.  As discussed
before, for this subject \tname{} only detects the race after the
model checker covers one specific statement of the program, and it
takes longer to cover that statement as the \stsp{} grows.

\subsection{Number of warnings}
\label{sec:num-warnings}
We evaluated how fast the number of reported warnings grows as the
\stsp{} exploration progresses.  We observed that \tname{} finds most
of the errors very fast and rarely reports new findings afterwards.
For example, note from Figure~\ref{tab:results} the small increase in
the number of reports for the cases where \tname{} cannot find the
positive warning within 5s.  Also, for the reported cases where
\tname{} finds the races in less than 5s, no other warning is reported
after that.  This observation suggests that it may be beneficial to
run \tname{} for a short period with different search strategies (see
Section~\ref{sec:search-strategies}).

\subsection{Subjects with errors removed}
\label{sec:errors-removed}
In a typical use of a model checker, the user is not aware of the
errors the application may contain.  It is possible, for example, that
the application does not contain errors that the test driver can
activate.  To simulate such a scenario, we ran \tname{} on versions of
the subjects with the errors removed and found that all false
positives reported in Figures~\ref{tab:results} and~\ref{tab:pecan}
persisted but no others were reported.

\subsection{Subjects with multiple errors}
\label{sec:multiple-errors}
Figure~\ref{tab:pecan} shows the results of \tname{} for subjects from
the PECAN benchmark~\citep{huang-zhang-issta2011}, some of which
contain multiple races.  Column ``\#~errs.'' shows the number of
distinct races; we inspected the reports of PECAN and ignored multiple
manifestations of the same race.  For this experiment, we configured
\jpf{} to not stop at the first error.  Instead, we set \jpf{} to run
for 5 minutes and then forced it to stop.  Figure~\ref{tab:pecan}
shows the results of \tname{} for 5, 10, and 15 seconds.  The notation
X-Y-Z shows the number of warnings reported (X), the number of true
positive warnings (Y), and the number of distinct races found (Z).
Column ``\jpf{}'' shows the number of distinct races \jpf{} finds in 5
minutes.  The symbol * indicates that \jpf{} terminated before the
5-minute budget with an out-of-memory error.  The label ``-'' in a
cell indicates that the result is the same as in the preceding cell of
that row.

\tname{} reports a warning for a race that \jpf{} misses in two cases:
\CodeIn{cache4j-pecan} and \CodeIn{shop}.  With the exception of these
misses, \jpf{} performs well for these subjects overall.  We highlight
in grey the rows that correspond to the cases where \tname{} did not
report, within 5 minutes of exploration, a warning for a race that
PECAN reports.


\lstset{escapeinside={\%*}{*)}}
\subfiguretopcaptrue
\begin{figure}[t]
  \centering
  \begin{small}
    \input{results_pecan}
    \caption{\label{tab:pecan}Results of \tname{} for the subjects
      from PECAN~\citep{huang-zhang-issta2011} (group \# 3).}
  \end{small}
  \figtabsep{}
\end{figure}


\begin{figure}[t]
  \begin{center}
    \includegraphics[scale=0.7]{plots/stack.pdf}
    \caption{\label{fig:performance}Impact of \tname{} in number of
      bytecode instructions executed within 5min.} \figtabsep
  \end{center}
\end{figure}


\subsection{Guided search}
\label{sec:guided-search}

We evaluated the effect of guiding the \jpf{} search towards schedules
that involve the threads associated with the warnings that \tname{}
reports.  To achieve that, we used \jpf{}'s infrastructure for
defining heuristic searches.  More specifically, we instantiated the
\CodeIn{gov.nasa.jpf.search.heuristic.PreferThreads} heuristic search
class and configured \jpf{} to use that object as its search strategy.
This search heuristic takes as input a set of threads and makes \jpf{}
give priority to the schedules that involve the threads from that set.
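A run of this kind could be requested with a configuration along the following lines; the \CodeIn{search.class} value is the class named above, but the property name for passing the thread set is an assumption on our part and may differ across \jpf{} versions:

```properties
# Sketch: use the PreferThreads heuristic with a set of preferred threads.
# The preferredThreads property name is an assumption; thread names are
# illustrative (taken from a warning plus the main thread).
search.class = gov.nasa.jpf.search.heuristic.PreferThreads
search.heuristic.preferredThreads = main,Thread-1,Thread-2
```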

We evaluated two modes of interaction between \tname{} and the new
instance of \jpf{} that uses this heuristic search.  In the first
mode, we considered only the first true warning that \tname{} reports
(i.e., the two threads involved in that warning).  This mode serves as
a reference; note that we needed to manually inspect the warning
before spawning the corresponding heuristic search.  In the second
mode, we considered the other extreme: we spawned our customized
heuristic to focus on all threads that appear in any reported warning.
Recall that symmetric bug reports are ignored (see
Section~\ref{sec:monitoring}).  In addition to the threads from the
warnings, we explicitly added the main thread to the set of preferred
threads, as it is central to setting up the application.

Columns ``H1'' and ``H2'' show the results for the runs of \jpf{}
using heuristic search in modes 1 and 2, respectively.  Note that,
with the exception of \CodeIn{daisy2} with error number 1 (in both
modes), \jpf{} either finds the actual race in a few seconds (12s in
the worst case) or runs out of memory.  The reason for reaching memory
limits in more cases compared to the default run of \jpf{} is that
both heuristics can ignore threads that contribute to creating the
intermediate states from which the race becomes reachable.  This
problem is magnified in mode 1, since those runs give priority to a
smaller number of threads.  In fact, the greedy approach of ``H1''
does not pay off in general: the gain is not significant given the
increased risk of missing important schedules.  Note that
\CodeIn{weblench} was the only case for which the run in mode 2 raised
an out-of-memory error when \jpf{} in standard mode did not.  For this
case, a scenario similar to the one described above provoked the
error: an important thread was not included in the set of preferred
threads.  Overall, we observed with this experiment that for several
cases where the default run of \jpf{} (see column ``default (DFS)'')
takes long to finish, the heuristic search in mode 2 finds the error
very fast.


%%\vspace{1ex}\noindent\textbf{Time Overhead.}

\subsection{Time overhead}\label{sec:time-overhead}
Figure~\ref{fig:performance} uses stacked bars to show the overhead of
using \tname{}.  The subjects in the figure are ordered by name.  We
disabled the seeded errors and ran each experiment for 5 minutes.  The
smaller bar shows the percentage of instructions visited with
\tname{} enabled; the bigger bar, which reaches 100\%, corresponds to
the instructions that \jpf{} visits on its own.  The shaded region
highlights the difference.  We ran each experiment five times and
report averages.  The horizontal line marks the median overhead across
all cases (7.01\%).  There is no apparent correlation between the
runtime overhead and the size of the application or of the \stsp{}
produced; the cost is proportional to the number of memory accesses
made throughout the exploration (see Section~\ref{sec:monitoring}).
Note that \CodeIn{jpapa} is among the cases with the largest overhead
although it is not among the largest applications.

\Comment{
The case of \CodeIn{montecarlo} deviates from the average in the
number of memory accesses made throughout exploration.  This subject
creates many objects and for each one performs several field updates.}

%% Our comparison baseline is JPF but we also
%% compare our results with static pattern-based bug-detection tools,
%% namely FindBugs~\cite{findbugs-web-page} and
%% \jlint{}~\cite{jlint-web-page}, as they also aim to report warnings to
%% the user.

%\vspace{1ex}\noindent\textbf{Comparison with FindBugs and \jlint{}.}

\subsection{Comparison with FindBugs and \jlint{}}
\label{sec:fbugs-jlint}
We also compared \tname{} with pattern-based bug-finding tools, namely
FindBugs~\citep{findbugs-web-page} and \jlint{}~\citep{jlint-web-page}.
We evaluated the precision of the warning reports of these tools
compared to \tname{}.  Note that these tools require different inputs
from the user.  In particular, unlike \jpf{} and \tname{}, they do not
require test drivers as input; they are typically much more efficient,
but they can be very imprecise.  We observed from the results that
FindBugs found the error in only 1 case and \jlint{} in only 2 cases.
Recall that we used the default patterns for concurrency errors
available in the current versions of these tools.  \jlint{} reported a
high number of false alarms for \CodeIn{tsp} and \CodeIn{daisy2}.
That happens because (i) the approach is based on syntactic pattern
matching, so the number of potential matches increases with code size,
and (ii) it repeats the same alarm for distinct accesses of a given
object field.  Across the cases we analyzed, the number of false
warnings was high and the number of positive warnings was very low.
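To illustrate why syntactic pattern matching is imprecise, consider
the following sketch.  It is our own illustrative example (the class
names \CodeIn{Account} and \CodeIn{PatternFalseAlarm} are
hypothetical, not taken from the evaluation subjects): the field
\CodeIn{balance} is written without synchronization only in the
constructor, before any second thread exists, so no race is possible;
a purely syntactic ``field accessed both with and without
synchronization'' pattern would nonetheless match it.

```java
// Hypothetical example, not one of the evaluation subjects.
// The unsynchronized write in the constructor runs before any other
// thread is started, so it cannot race, yet a syntactic
// "inconsistent synchronization" pattern still matches the field.
class Account {
    private int balance;

    Account(int initial) {
        balance = initial;          // unsynchronized, but thread-confined
    }

    synchronized void deposit(int n) {
        balance += n;               // synchronized access
    }

    synchronized int balance() {
        return balance;
    }
}

public class PatternFalseAlarm {
    public static void main(String[] args) throws InterruptedException {
        Account a = new Account(100);               // single-threaded setup
        Thread t = new Thread(() -> a.deposit(50));
        t.start();
        t.join();
        System.out.println(a.balance());            // prints 150
    }
}
```

A pattern matcher sees only the two syntactic access forms, not the
thread-confinement of the constructor write, which is one source of
the false alarms discussed above.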

%% Note also that our approach is sensitive to the input (while FindBugs
%% and \jlint{} are not) but for most cases (see \CodeIn{alarmclock}),
%% result of \tname{} did not change with the increase in number of
%% threads.

\subsection{Threats to validity}
\label{sec:threats}

Threats to validity include the selection of the subjects we used, the
selection of search orders we considered, the possibility of errors in
our implementation, and the possibility that model-checker users are
unwilling to accept an increase in overall exploration time in
exchange for increased responsiveness, or to inspect warnings that may
not correspond to actual errors.

\section{Discussion}
\label{sec:discussion}

The list below summarizes the reasons \emph{in favor} of \tname{}:

%\begin{itemize}
\begin{list}{\labelitemi}{\leftmargin=1em}

\item For 78\% (26/\numCases{}) of the cases \tname{} responds quickly
  with positive warnings, as shown in columns ``$<$5s'' of
  Figure~\ref{tab:results} and Figure~\ref{tab:pecan}.  This is
  particularly relevant for cases where \stsp{} exploration takes a
  long time to find the race;
  
\item Results indicate that the number of warnings reported does not
  grow without bound during \stsp{} exploration.  For the cases we
  considered, the number of distinct warnings is, on average, low and
  stabilizes quickly;
  
\item In one particular case, \tname{} inferred a conflict that
  \jpf{} missed.  This was unexpected and showed utility beyond
  increased responsiveness;
  
\item Results indicate that \tname{} performs differently with
  depth-first and breadth-first search.  However, we often observed
  that when one strategy could not find a positive warning quickly,
  the other could;

\item Results indicate that \tname{}'s output was very effective in
  guiding \jpf{}'s heuristic search, enabling \jpf{} to find the race
  in a few seconds in cases where it would otherwise run out of memory
  or take hours to report the error.
  
  %\end{itemize}
\end{list}

\noindent{}The list below summarizes the reasons \emph{against}
\tname{}:

%\begin{itemize}
\begin{list}{\labelitemi}{\leftmargin=1em}
\item It can report false alarms. \tname{} is not aware of general
  causal relationships across the concurrent events of the application
  under test;

%% \item It could lead to race misses: \tname{} is not aware about the
%%   different ids given to the same object across different exploration
%%   paths of a model checker;

\item It adds overhead to the \stsp{} exploration time.
\end{list}
%\end{itemize}
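A minimal sketch of the false-alarm scenario from the first item
above (our own example, assuming only standard Java semantics; the
class name \CodeIn{JoinOrdering} is hypothetical): the two accesses to
\CodeIn{shared} are ordered by \CodeIn{Thread.join()}, so there is no
race, but an analysis that is unaware of such causal relationships may
still report the pair as conflicting.

```java
// Illustrative only: join() establishes a happens-before edge between
// the writer's update and the main thread's read, so there is no data
// race; an analysis unaware of this causal ordering may still warn.
public class JoinOrdering {
    static int shared;

    public static void main(String[] args) throws InterruptedException {
        Thread writer = new Thread(() -> shared = 42);
        writer.start();
        writer.join();                  // write happens-before the read
        System.out.println(shared);     // prints 42
    }
}
```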

Overall, results show that \tname{} often reports positive warnings
very quickly.  Similarly, \jpf{} can often find races very quickly
when using \tname{}'s guided search.  These observations suggest that it is
beneficial to spawn short-lived instances of \jpf{} in \tname{} mode
using different search strategies with the goal of finding race
\emph{warnings} quickly (see Section~\ref{sec:search-strategies}) and
to spawn short-lived instances of \jpf{} in heuristic-search mode,
using the feedback from these multiple runs of \tname{}, with the goal
of finding/confirming the data-races quickly (see
Section~\ref{sec:guided-search}).  Figure~\ref{fig:future} illustrates
this approach: some instances of \jpf{} produce warnings (left side)
and other instances consume those warnings (right side), focusing the
search on the warning-related threads.

\lstset{escapeinside={\%*}{*)}}
\subfiguretopcaptrue
\begin{figure}[t]
  \centering
  \begin{small}
    \includegraphics[scale=0.55]{figures/future.pdf}
  \end{small}
  \caption{\label{fig:future}Multiple searches for races.}  \figtabsep{}
\end{figure}






% LocalWords:  pre LOC loc JavaNCSS JPF online escapeinside FindBugs pos FB thr
% LocalWords:  runtime alarmclock raytracer montecarlo bytecode parameterizable
% LocalWords:  sooo simplesmente grande zhang issta rax GB jlint thrs DFS NOERR
% LocalWords:  OME tsp
