\section{Conclusions}

This paper presents \tname{}, an approach to assist model checker
users in finding data race errors.  The general goal of our approach
is to increase responsiveness of model checkers, enabling users to
take action before a potentially long search for actual errors
finishes.  \tname{} looks for \emph{potential} races \emph{during}
\stsp{} exploration that can be inferred from the memory accesses it
monitors.  \tname{} uses a search-global identity for objects (and
threads) that enables it to relate objects created across different
exploration paths that the software model checker takes.  We analyzed
\numCases{} cases involving \numSubjects{} subjects of various sizes,
drawn from sources previously used in the analysis of concurrent
systems.
Results indicate that the runtime overhead compared to a regular
state-space exploration is low on average, the number of false
positives is low, and warnings are most often reported to the user
quickly.\Comment{ \tname{} provides a new dimension to existing
  predictive analysis: the analysis of multiple traces.}  The approach
of \tname{} is lightweight so as to handle the high volume of data
associated with the observed memory accesses without severely
affecting the overall exploration time (needed to find actual errors).
Even though \tname{}'s primary goal is to report true warnings
quickly, we observed that our approach can be used effectively to
guide a customized heuristic search and confirm the race warnings that
\tname{} reported in a previous stage.  To the best of our knowledge,
this is the first paper that exploits the synergy between predictive
analysis and program model checking.  Our implementation and the
subjects used in our experiments are available at
\url{http://pan.cin.ufpe.br/rabbit}.

