\section{Evaluation}

This section presents the evaluation of \tname{}.
Section~\ref{sec:subjects} presents the subjects we used,
Section~\ref{sec:setup} presents the experimental setup,
Section~\ref{sec:results} presents the results,
Section~\ref{sec:threats} discusses threats to validity, and
Section~\ref{sec:discussion} concludes with a discussion.

\begin{table*}[t]
\centering
\input{subjects}
\vspace{1ex}
\caption{\label{table:main}Subjects organized in
  groups. \textbf{Group~1}: Small non-modified.  \textbf{Group~2}:
  Larger non-modified. \textbf{Group~3}: Small modified.
  \textbf{Group~4}: Larger modified.}
\figtabsep
\end{table*}

\subsection{Subjects}
\label{sec:subjects}

Table~\ref{table:main} describes the \numSubjects{} subjects we used
to evaluate our approach.  We organized the subjects into four groups
based on size and on the source of the error.  Groups 1 and 2 include
subjects with pre-existing errors, while groups 3 and 4 include
subjects into which we seeded new errors.  Groups 1 and 3 include
smaller subjects ($<$500 LOC) and groups 2 and 4 include larger
subjects ($>$800 LOC).  Column ``group'' shows the subject group,
column ``subject'' shows the application name, column ``source''
indicates where the subject was obtained from, column ``\#loc'' shows
the number of lines of code~(computed with
JavaNCSS~\cite{javancss:2010}), and column ``description'' explains
the purpose of the subject.  Each subject came with a main function
(test driver) that creates the execution environment.


\subsection{Setup}
\label{sec:setup}

We ran each test driver on \jpf{} until it found the error or ran out
of memory.  We set \jpf{} to use its precise race detector
(\CodeIn{gov.nasa.jpf.listener.PreciseRaceDetector}); all other
parameters assume their default values.  For all experiments, we used
an Intel i5 machine running Ubuntu 11.04, and ran \jpf{} with a
maximum of 1GB of memory.

\vspace{0.5ex}\noindent\textbf{Test drivers.}  We changed the test
drivers of the subjects from group 1 to make the number of threads a
parameter.  For the subjects in this group, we report results for a
selection of $n$ threads that makes JPF take more than 10min to find
the error, and also for the selection with $n-1$ threads.  We show
only these two configurations for space reasons.  The goal is to show
how fast the state space grows as the number of threads increases.  We
did \emph{not} change the drivers for the subjects from groups 2, 3,
and 4.
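The parametric drivers follow a simple pattern.  The sketch below is a
hypothetical illustration (not one of the actual subjects): the thread
count is read from the command line, so the size of the state space
explored by the model checker scales with the argument.

```java
// Hypothetical parametric test driver (not an actual subject):
// the number of worker threads is a command-line parameter, so the
// state space explored by the model checker grows with the argument.
public class ParametricDriver {
    static int balance = 0;  // shared field, intentionally unprotected

    public static void run(int numThreads) throws InterruptedException {
        Thread[] workers = new Thread[numThreads];
        for (int i = 0; i < numThreads; i++) {
            workers[i] = new Thread(() -> balance++);  // racy read-modify-write
            workers[i].start();
        }
        for (Thread t : workers) {
            t.join();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        int n = args.length > 0 ? Integer.parseInt(args[0]) : 2;
        run(n);
        System.out.println("final balance: " + balance);
    }
}
```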

\vspace{0.5ex}\noindent\textbf{Seeded errors.} We asked
\numVolunteers{} volunteers to inject data race errors into the
subjects from groups 3 and 4.  We could not find errors with \jpf{} in
the original versions of these subjects.  The volunteers were
instructed to create mutants that could result in race conditions.  We
asked them \emph{not} to use \jpf{} or other tools to guide what to
change; they made changes such as removing \codein{synchronized}
keywords.  Experience with concurrent programming varied across
volunteers.  The faulty subjects are available online.
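The following sketch illustrates the kind of mutant the volunteers
created; the class and method names are hypothetical, not taken from
the actual subjects.

```java
// Hypothetical mutant of the kind created by the volunteers: the
// seeded version drops the 'synchronized' keyword, turning the
// read-modify-write on 'amount' into a potential data race.
class Account {
    private int amount;

    // original, thread-safe version
    synchronized void deposit(int value) { amount += value; }

    // seeded-error version: 'synchronized' removed by a volunteer
    void depositRacy(int value) { amount += value; }

    int amount() { return amount; }
}
```

Under a model checker, two threads calling \CodeIn{depositRacy}
concurrently expose the unprotected accesses to \CodeIn{amount}.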



\newcommand{\cc}{\cellcolor[gray]{0.9}}
\lstset{escapeinside={\%*}{*)}}
\subfiguretopcaptrue
\begin{figure}[t]
  \centering
  \begin{small}
    \input{results_d}
    \caption{\label{tab:results}Comparison of \tname{} with \jpf{}
      (using depth-first search), FindBugs, and \jlint{}.  Label
      ``-'' under column ``1st pos. warning'' indicates that the
      result is the same as reported under ``$<$5s''.  Text in bold
      indicates that \jpf{} did not find the error.}
  \end{small}
  \figtabsep{}
\end{figure}
\subsection{Results}
\label{sec:results}

This section presents the results of our experiments with \tname{}.
We (a) compare \tname{} with related approaches, (b) present results
with other search strategies, (c) discuss how \tname{} performs when
we remove the seeded errors, and (d) present the overhead that
\tname{} adds to the runtime of \jpf{}.



\vspace{1ex}\noindent\textbf{Comparison with \jpf{}, FindBugs, and
  \jlint{}.}  We compared \tname{} with JPF and with static tools that
look for bug patterns in code, namely
FindBugs~\cite{findbugs-web-page} and \jlint{}~\cite{jlint-web-page}.
We evaluate the impact on \emph{exploration time} by comparing
\tname{} with JPF, and the \emph{accuracy of warning reports} by
comparing \tname{} with the static tools.  We want to evaluate how
\emph{fast} \tname{} reports \emph{useful} warnings; we use \jpf{} and
the pattern-matching static tools as baselines for this evaluation.
It is important to note that these tools require different inputs from
the user and provide different guarantees on their output.  For
example, static tools do not require test drivers as input, unlike
\jpf{} and \tname{}; they are typically more efficient but can be very
imprecise.  It is therefore important to keep these differences in
mind when relating these distinct techniques.




Figure~\ref{tab:results} shows the results of our comparison.  Column
``subject'' shows the subject name, column ``err\#'' indicates the
identifier of the error, column ``\#thr'' indicates the number of
threads involved in the experiment, and column ``Rabbit'' shows the
results of our approach; its subordinate columns ``$<$5s'' and ``1st
pos. warning'' show, respectively, the warnings found in less than 5
seconds and the time required to find the first \emph{positive}
warning when one is not found within 5 seconds.  The notation $x(y)$
indicates that, out of $x$ warnings reported, $y$ are positive and
$x - y$ are false alarms.  Column ``JPF'' shows the time JPF required
to find the error, while columns ``FB'' and ``\jlint{}'' show,
respectively, the warnings reported by FindBugs and \jlint{}.  The
value in bold in the \jpf{} column for \CodeIn{tsp} indicates that
\jpf{} finished exploration \emph{without} reporting the error.  The
number of threads is a parameter for the subjects from group 1;
therefore, we present results for two cases per subject, one per row,
each using a different number of threads.  Similarly, all but one of
the subjects of group 4 have two different errors, each causing
different races.  For each run of the experiments, only one of these
errors was enabled; hence, we present results for each error in a
different row.  We make the following observations:




\begin{itemize}

\item FindBugs found the error in only 2 cases and \jlint{} in only 4
  cases.  We recall that we used the default patterns for concurrency
  errors available in the current versions of these tools.  \jlint{}
  reported a high number of false alarms for \CodeIn{tsp} and
  \CodeIn{daisy2}.  That happens because (i) the approach is based on
  syntactic pattern matching, so the number of potential matches
  increases with code size, and (ii) it repeats the same alarm for
  distinct accesses of a given object field.  Overall, we found that
  the number of false warnings was very high and the number of
  positive warnings very low for the cases we analyzed.  Note also
  that our approach is sensitive to the input (while FindBugs and
  \jlint{} are not), but for most cases (see \CodeIn{alarmclock}) the
  result of \tname{} did not change as the number of threads
  increased.

\item The results for the small subjects from group 1 show that JPF
  can find errors reasonably fast for small state spaces.  When we
  increase the size of the input, \tname{} quickly reports a small
  number of warnings to inspect.

\item All of the cases for which \tname{} reported false alarms
  occurred because of its inability to detect causal dependencies as
  indicated in Section~\ref{sec:approach}.

\item For \CodeIn{alarmclock}, the difference between the time when
  \tname{} reported the first positive warning and the time when
  \jpf{} found the error is not as high as for other cases in the same
  group.  That happened because \jpf{} took relatively more time to
  cover an important memory access that enables \tname{} to emit a
  positive report.  \jpf{} did not take much longer to find the race
  after covering that access.

\item For \CodeIn{raytracer} error 1, \tname{} ran out of memory (as
  did JPF) without reporting any warning.  That happened because, as
  with \CodeIn{alarmclock}, the sensitive access is not easily covered
  in the exploration.

\item For 7 out of the 14 cases from the modified subjects (groups 3
  and 4), JPF found the error in less than 1min.  That occurred either
  because the application was highly parallel (e.g.,
  \CodeIn{montecarlo} and \CodeIn{raytracer}) and JPF could identify
  thread-safe memory accesses, avoiding the generation of
  non-deterministic choices, or because the volunteers unintentionally
  created errors that were detected in the initial states \jpf{}
  explored.


\item For \CodeIn{tsp}, \tname{} reported a positive warning for an
  error that \jpf{} missed.  Across the exploration, distinct threads
  access the shared variable denoting the best path.  However, in any
  particular execution schedule explored, only one thread writes to
  that variable: for the given input and test driver, the first
  spawned worker thread always finds the best path.  Consequently,
  \jpf{} misses the error for this input and test driver.  \tname{}
  reports a warning because it observes that threads with different
  ids access the variable without proper protection; it ignores the
  fact that these accesses do not coexist in the same run.  This
  scenario is indeed possible for this subject (i.e., multiple worker
  threads modifying the best-path variable) and could be manifested in
  \jpf{} with different inputs.


\item Considering all cases we analyzed, \tname{} missed the error in
  only one case (\CodeIn{raytracer} error 1), and both the ratio of
  false alarms and the number of warnings reported to the user are low
  overall.

\item In 21 out of the 28 cases we analyzed, \tname{} reported
  accurate warnings in less than 5s.  In 13 of these 21 cases, \jpf{}
  took more than 10min to find the error (9 cases) or ran out of
  memory (2 cases).

\end{itemize}
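The \CodeIn{tsp} observation above can be distilled into a small
pattern.  The sketch below is our own simplification, not the
subject's code: any worker may update the shared best-path length
through an unsynchronized check-then-act, yet for some inputs only one
thread ever improves it, so no single schedule exhibits two
conflicting writes.

```java
// Simplified illustration of the tsp pattern (not the subject's code):
// any worker may update the shared minimum via an unsynchronized
// check-then-act, yet for some inputs only one thread ever writes it.
class BestPath {
    static int best = Integer.MAX_VALUE;  // shared, unprotected

    static void offer(int cost) {
        if (cost < best) {  // unsynchronized check-then-act
            best = cost;    // racy write
        }
    }
}
```

A lockset-style analysis flags the unprotected accesses to
\CodeIn{best} even in schedules where only one thread actually writes.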







\vspace{1ex}\noindent\textbf{Accuracy with other search strategies.}
We evaluated \tname{} with breadth-first search and with random
search.  As expected, the search strategy can influence the results of
\tname{} in the same way it can influence the time \jpf{} takes to
find the error.  Considering the variation in results due to the
choice of search strategy, and the fact that \tname{} often reports
positive warnings early, we observed that it is beneficial to run
\tname{} multiple times with different search strategies for a short
period before running it with the user-informed strategy for a longer
duration.  For the 7 cases where \tname{} with depth-first search
could \emph{not} find a positive warning within 5s, \tname{} with
breadth-first search reported a positive warning in 5 cases.
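The multi-strategy scheme suggested above can be sketched as a simple
portfolio: each strategy runs under a short budget and the first
positive warning wins.  This is our own illustration; the names and
interfaces are hypothetical, not part of the tool.

```java
import java.util.List;
import java.util.Optional;
import java.util.function.Supplier;

// Hypothetical portfolio runner: try each search strategy for a short
// budget; return the first positive warning found, if any.
class StrategyPortfolio {
    static Optional<String> firstWarning(List<Supplier<Optional<String>>> strategies) {
        for (Supplier<Optional<String>> strategy : strategies) {
            Optional<String> warning = strategy.get();  // short-budget run
            if (warning.isPresent()) {
                return warning;
            }
        }
        return Optional.empty();  // fall back to the user-informed strategy
    }
}
```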



\begin{figure}[t]
  \begin{center}
    \includegraphics[scale=0.7]{plots/stack.eps}
    \caption{\label{fig:performance}Impact of \tname{} in number of
      bytecode instructions executed within 5min.}
    \figtabsep
  \end{center}
\end{figure}

\vspace{1ex}\noindent\textbf{False alarms with errors removed.}
In a typical scenario of use of a model checker, the user is not aware
of the errors the application may have.  It is possible, for example,
that the application does not contain errors that the test driver can
exercise.  We observed that \tname{} reports warnings in moderate
numbers.  For example, note from Figure~\ref{tab:results} the low
increase in the number of reports for the cases where \tname{} cannot
find a positive warning within 5s.  We did \emph{not} consider cases
where the subject contains multiple errors; however, we believe that
this factor should not affect our approach.


\vspace{1ex}\noindent\textbf{Scalability.} We evaluated the impact of
input size, more specifically of the number of threads involved, on
the number and precision of warning reports.  For that we used the
subjects from group 1, which are parametric in the number of threads.
For all subjects in that group except \CodeIn{alarmclock}, we varied
the number of threads from 15 to 20.  For all these cases, the number
and precision of the reports remained the same.  \CodeIn{alarmclock}
is one case where the size of the state space (which is proportional
to the number of threads) was a critical factor.  As discussed before,
for this subject \tname{} only detects the race after the model
checker covers one specific statement of the program.

\vspace{1ex}\noindent\textbf{Time overhead.}
Figure~\ref{fig:performance} uses stacked bars to show the overhead of
using \tname{}.  Subjects in the figure are ordered by name.  We
disabled the seeded errors and ran each experiment for 5min, setting
\jpf{} not to stop on the first error found.  The smaller bar
indicates the percentage of instructions visited with \tname{}; the
bigger bar indicates the total number of instructions visited without
\tname{} (100\%).  The shaded region highlights the difference.  We
ran each experiment 5 times and report averages.  The horizontal line
marks the median overhead across all cases (7.01\%).  There is no
apparent correlation between the runtime overhead and the size of the
application or the size of the state space produced.  Cost is
proportional to the number of memory accesses made throughout the
exploration (see Section~\ref{sec:monitoring}).  The case of
\CodeIn{montecarlo} deviates from the average in the number of memory
accesses made throughout exploration: this subject creates many
objects and performs several field updates on each one.
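The proportionality claim above can be made concrete with a toy cost
model.  The sketch below is our own schematic, not the tool's
implementation: monitoring work is charged once per observed memory
access, so a subject with many field updates pays more than a larger
subject with few shared accesses.

```java
// Schematic cost model (not the actual implementation): monitoring
// cost is charged per observed memory access, independent of program
// size or state-space size.
class AccessCostModel {
    long accesses;

    void recordAccess() { accesses++; }  // one bookkeeping step per heap access

    // extra work relative to a baseline instruction count
    double relativeOverhead(long baselineInstructions, double costPerAccess) {
        return (accesses * costPerAccess) / baselineInstructions;
    }
}
```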

\subsection{Threats to validity}
\label{sec:threats}

Threats to validity include the selection of subjects we used, the
possibility of errors in our implementation, and the possibility that
model checker users may be unwilling to accept an increase in overall
exploration time in exchange for increased responsiveness.

\subsection{Discussion}
\label{sec:discussion}

\noindent{}Based on experimental evidence, the list below shows
reasons \emph{in favor} of \tname{}:

\begin{itemize}
  \item As indicated by column ``$<$5s'' of
    Figure~\ref{tab:results}, \tname{} can improve the responsiveness
    of program model checkers by providing earlier feedback to the
    user.  This is especially critical for time- or memory-intensive
    searches;
  \item In one particular case, \tname{} inferred a conflict that
    \jpf{} missed.  This was unexpected and showed utility beyond
    increased responsiveness;
  \item Results suggest that using different search strategies for a
    short period of time can increase the chances that \tname{}
    reports useful warnings earlier.
\end{itemize}

The list below shows reasons \emph{against} \tname{}:

\begin{itemize}
\item It could lead to false alarms: \tname{} is not aware of the
  causal relationships across concurrent events of the system;
\item It could lead to missed races: \tname{} is not aware of the
  different ids given to the same object across different exploration
  paths of the model checker;
\item \tname{} adds time overhead to state-space exploration.
\end{itemize}


\vspace{1ex}
\begin{center}
\fbox{
\parbox[h]{4.1in}{Considering the low average runtime overhead and the
  precision of the race warnings obtained in our experiments with
  \tname{}, we believe that model checker users can benefit from
  receiving earlier feedback from a tool that often takes long to
  respond.}  }
\end{center}
\vspace{0.5ex}




