\newcount\draft\draft=0 % set to 0 for submission or publication
\newcount\cameraready\cameraready=1

\ifnum\cameraready=0
\documentclass[9pt]{sigplanconf}
%\documentclass[9pt,pldicrc]{sigplanconf-pldi15}
\else
\documentclass[9pt]{sigplanconf}
%\documentclass[9pt,pldicrc]{sigplanconf-pldi15}
\fi



%\ifnum\draft=1
 % \input{revision}
  %\usepackage{drafthead}
%\fi
%\usepackage{floatrow}
%\newfloatcommand{capbtabbox}{table}[][\FBwidth]
\usepackage{flushend}
\usepackage{multirow}
\usepackage{graphicx}
\DeclareGraphicsExtensions{.pdf,.jpg,.png}
\graphicspath{{./figs/}}
\usepackage{caption}
\DeclareCaptionType{copyrightbox}
\captionsetup{belowskip=0pt,aboveskip=0pt}
\usepackage[ruled,linesnumbered]{algorithm2e} %% usepackage for algorithm environment
\usepackage{verbatim}
\usepackage{hyperref}
\usepackage{listings}
\lstset{
    language=C,
    emphstyle=\bfseries,
    basicstyle=\ttfamily\small,
    aboveskip=1mm plus 1mm minus 1mm,
    belowskip=1mm plus 1mm minus 1mm,
    mathescape=true,
    xleftmargin=\parindent,
}
\newcommand{\lil}{\lstinline}
\newcommand{\dsp}{DSP\xspace}
\newcommand{\dsps}{DSPs\xspace}

%\renewcommand{\baselinestretch}{.98}

\usepackage{xspace}
\newcommand{\sys}{Symbiosis\xspace}
\hyphenation{Symbiosis}

%Note: I had to change this because floatrow defines \captsize -BL
\newcommand{\captsize}{\footnotesize}
\newcommand{\rnr}{R\&R\xspace}

\usepackage{amsmath}
\usepackage[compact]{titlesec}
\titlespacing*{\section}{2pt}{6pt}{0pt}
\titlespacing*{\subsection}{2pt}{4pt}{0pt}
\titlespacing*{\subsubsection}{2pt}{4pt}{0pt}

\begin{document}
\special{papersize=8.5in,11in}
\setlength{\pdfpageheight}{\paperheight}
\setlength{\pdfpagewidth}{\paperwidth}

\conferenceinfo{PLDI'15}{June 13--17, 2015, Portland, OR, USA}
\CopyrightYear{2015}
\crdata{978-1-4503-3468-6/15/06}
\doi{2737924.2737973}

\title{Concurrency Debugging with \\Differential Schedule Projections}

%\author{\begin{tabular}{c c c}Brandon Lucia & Nuno Machado & Luis Rodrigues\\ Microsoft Research & Microsoft Research\end{tabular}}

\authorinfo{Nuno Machado}{INESC-ID, Instituto Superior T\'{e}cnico, Universidade de Lisboa, Portugal}{nuno.machado@tecnico.ulisboa.pt}
\authorinfo{Brandon Lucia\titlenote{This work was done in part while the author was a Researcher at Microsoft Research.}}{Carnegie Mellon University, USA}{blucia@ece.cmu.edu}

\authorinfo{Lu\'{i}s Rodrigues}{INESC-ID, Instituto Superior T\'{e}cnico, Universidade de Lisboa, Portugal}{ler@tecnico.ulisboa.pt}
%\date{}

\newcommand{\term}[1]{\emph{#1}}

\newcommand{\fakeparagraph}[1]{\vspace{1ex}\noindent \emph{\textbf{#1}}}


\maketitle


\abstract{ We present \sys: a concurrency debugging technique based on
  novel {\em differential schedule projections} (DSPs). A DSP shows
  the small set of memory operations and data-flows responsible for a
  failure, as well as a reordering of those elements that avoids the
  failure.  To build a DSP, \sys first generates a {\em full, failing,
    multithreaded schedule} via thread path profiling and symbolic
  constraint solving.  \sys selectively reorders events in the failing
  schedule to produce a {\em non-failing, alternate schedule}.  A DSP
  reports the ordering and data-flow {\em differences} between the
  failing and non-failing schedules.  Our evaluation on buggy
  real-world software and benchmarks shows that, in practical time,
  \sys generates DSPs that both isolate the small fraction of event
  orders and data-flows responsible for the failure, and show which
  event reorderings prevent failing. In our experiments, DSPs contain
  81\% fewer events and 96\% fewer data-flows than the full
failure-inducing schedules.  Moreover, by allowing developers to
focus on only a few events, DSPs reduce the amount of time required
to find a valid fix.}

\category{D.2.5}{Software Engineering}
                   {Testing and Debugging}
                   [Diagnostics]
\terms
   Algorithms, Design, Reliability
\keywords
   Concurrency, Debugging, Symbolic Execution, Constraint Solving, Differential Schedule Projection

\section{Introduction}

Concurrent and parallel programming are the new norm and are much more
difficult than sequential programming.  Shared-memory multi-threading is
especially widespread, and requires programmers to reason about multiple
threads of execution that interact by reading and writing shared variables.
Operations in different threads are unordered by default, unless ordered by
synchronization, and different executions may non-deterministically adhere to
different {\em schedules} of unordered operations that produce different
results.  A key challenge is that most schedules are correct, but some may
permit a multi-threaded sequence of shared-memory accesses that leads to
undesirable behavior, like a crash.  Such a schedule causes a {\em failure} and
is the result of a {\em concurrency bug} ({\em i.e.}, a mistake in the code
that incorrectly permits a failing schedule).

Eliminating concurrency bugs is extremely difficult.  Failing
schedules may manifest rarely and reproducing them is often difficult.
Prior work has addressed reproducibility with a number of different
strategies, including {\em deterministic record and replay} (\rnr)
(both order-based\,\cite{leap,order,care} and
search-based\,\cite{cooprep,stride,pres}) and {\em deterministic
  execution}\,\cite{kendo,grace,dmp}.  These techniques help produce
an execution that fails, but simply reproducing a failure may provide
no insight into its cause.  The key to debugging is understanding a
failure's {\em root cause}, {\em i.e.}, the set of event orderings
that are {\em necessary} for failure.  The number of events that
comprise a root cause is typically small\,\cite{pct}, but it is often
unclear which events in a full schedule to focus on.  Any operation in
any thread may have led to the failure and blindly analyzing a full
schedule is a metaphorical search for a {\em needle in a haystack}.
Even if the programmer {\em finds} the root cause, they still must
understand how to change the code so that the problematic events
do not execute in the failure-inducing order, which is also difficult.


We present \sys, a system that helps find and understand a
failure's root cause, as well as fix the underlying bug.
Figure~\ref{fig:idea:overview} presents a schematic view of our
system.  
\sys first collects single-threaded path profiles from a concrete, failing execution. The profiles guide a symbolic execution, yielding per-thread symbolic event traces compatible with the failure.
These are then used to generate a Satisfiability Modulo Theory (SMT)
formulation, the solution to which represents a multi-threaded failing
schedule.  To prune irrelevant events from the failing schedule, \sys
generates an {\em unsatisfiable} SMT formulation that encodes the failing
schedule but asserts the absence of the failure. As a result, the SMT solver
reports a subset of constraints that conflict in the unsatisfiable formulation;
their corresponding event orderings are necessary for the failure, and form the pruned {\em
  root cause schedule}.  The root cause schedule is used in another
SMT formulation to compute {\em non-failing, alternative schedules}
that comprise reorderings of the root cause schedule's events. \sys
enhances the debugging utility of the root cause schedule by reporting {\em
  only the important ordering and data-flow differences} between
failing and non-failing schedules.  We call the output of our novel
debugging approach a {\em differential schedule projection} (\dsp).

\dsps simplify debugging for two main reasons.  First, by showing only what
differs between a failing and non-failing schedule, the programmer sees only a
very small number of relevant operations, rather than a full schedule.  Second,
\dsps illustrate not only the failing schedule, but also the way the execution
{\em should} behave in order not to fail.  Seeing the different event orders
side-by-side helps the programmer understand the failure and, often, how to fix the bug.
Critically, \sys produces a \dsp from a {\em single} failing schedule, enabling
its use for failures observed rarely ({\em e.g.}, in production), and does not require
repeated program executions like prior work~\cite{aviso}.  Our evaluation in Section~\ref{sec:eval} shows
that \dsps have, on average, 81\% fewer events than full schedules, and shows
qualitatively, with case studies, that \dsps help developers understand failures
and fix bugs.

\begin{figure*}[htb] \centering
\includegraphics[width=1.0\textwidth]{Symbiosis_Overview}
\caption{\captsize \label{fig:idea:overview}{\bf Overview of
\sys.} Boxes at the top represent processes.  Boxes at the bottom
represent inputs and outputs of processes.  Dashed arrows denote an
input relationship and solid arrows denote an output relationship.
{\bf Boldface} boxes represent the final outputs of \sys. 
} 
\end{figure*}


To summarize, our contributions are:
{\em i)} An SMT constraint formulation, based on the computed failing
schedule, that identifies the sub-schedule that is a failure's root
cause.
{\em ii)} An SMT constraint formulation that systematically varies the
order of root cause events to find alternative non-failing schedules
similar to the original failing schedule.
{\em iii)} A novel {\em differential schedule projection} methodology that
isolates important control and data-flow changes between failing
and non-failing schedules computed by \sys.
{\em iv)} An implementation of \sys for C/C++ and Java and an
evaluation, showing the debugging efficacy of \sys and its
applicability to failure avoidance and failure reproduction.


\section{Background}
\label{background:symbolic}

\sys helps with concurrency debugging by leveraging prior work on symbolic
execution and Satisfiability Modulo Theory (SMT) solving.  This section briefly
reviews these topics.

\fakeparagraph{Concurrency Bugs.}
Concurrency bugs are errors in code that permit multi-threaded schedules that
lead to a failure.  Concurrency bugs have been studied extensively in the
literature\,\cite{avio,velodrome,conmem,conseq,racerx,eraser,fasttrack,recon,aviso,atomaid,colorsafe}.
Data-races\,\cite{eraser,fasttrack,racerx}, atomicity
violations\,\cite{avio,velodrome,atomaid}, and ordering
violations\,\cite{lumistakes,bugaboo,falcon,conmem,conseq} are different types of
concurrency bugs studied by prior work.  These bugs vary in their mechanism and
result.  For example, while data-races may lead to violations of sequential
consistency\,\cite{lamport}, atomicity violations may lead to unserializable
behavior of atomic regions.  We defer to the literature for a detailed
discussion of these bug types and their failure modes. Instead, we just
emphasize that they share the following important characteristic: they lead to
a failure when they permit operations in different threads to execute
in an order that should be forbidden. \sys attacks the debugging problem by
identifying such incorrect operation orderings that constitute the root cause
of a failure.


\fakeparagraph{Symbolic Execution.}
{\em Symbolic execution}\,\cite{king76} explores the space of possible
executions of a program by emulating or directly executing its statements.
During symbolic execution, some variables have concrete values ({\em e.g.}, $12$),
and other variables, like unknown inputs, have {\em symbolic} values.  A
symbolic value represents a set of possible concrete values.  Assignments to
and from symbolic variables and operations involving symbolic variables produce
results that are also symbolic.  
When an execution reaches a branch dependent on a symbolic variable, it spawns two
identical copies of the execution state -- one in which the branch is taken, one in
which the branch is not taken.  Spawned copies continue independently along
these different {\em paths} and the process repeats for every new symbolic
branch.  Each path has a {\em path constraint}, encoding all branch outcomes on
that path.  Thus, the path constraint determines possible concrete values for
symbolic variables that lead execution down a particular path.  As we describe
in Section~\ref{sec:idea:symb}, \sys uses symbolic execution to find a path in
each thread that leads to a failure.  Like unknown inputs, \sys treats {\em
shared variables}  as symbolic, because they might be modified
non-deterministically by any thread during a multi-threaded execution.

\fakeparagraph{SMT Solvers.}
A {\em Satisfiability Modulo Theories} (SMT) solver is a tool that, given a
formula over variables, finds a satisfying assignment of the variables or
reports that it is unsatisfiable.  SMT is based on
boolean satisfiability (SAT), but is more expressive, handling, for
example, theories such as arithmetic.  SAT is NP-complete and SMT is at
least as hard, but decades of research have
produced solvers ({\em e.g.}, Z3\,\cite{Z3}) that solve large problems in practice.
Practical SMT has found use in many areas: hardware\,\cite{hwverification} and
software verification\,\cite{swverification}, program
analysis\,\cite{satanalysis}, and test generation\,\cite{pex}.

CLAP\,\cite{clap} and our work link concurrency, SMT, and symbolic execution. A
symbolic path constraint corresponds to an SMT formula that constrains
variables at each point in a {\em sequential} execution\,\cite{klee,pex}.
Thus, a {\em concurrent} symbolic execution corresponds to {\em i)} a
combination of the SMT formulae for each thread's symbolic execution, and {\em
ii)} additional constraints encoding inter-thread data-flow and
synchronization. CLAP's goal was to reproduce failed executions, so it added
constraints corresponding to a failure's manifestation. \sys adds these constraints as well. 
%To decrease its time to find a solution, CLAP also constrained symbolic execution with a block profile from a concrete failing execution, and limited the number of context switches permitted by a symbolic execution.  

When an SMT formula is unsatisfiable, some SMT solvers\,\cite{Z3} are
able to {\em explain why} by reporting a conflicting subset of constraints,
called an {\em unsatisfiable core} or {\em UNSAT core}.
%As Section~\ref{sec:idea:root} describes, 
BugAssist\,\cite{bugassist} pioneered the use of the UNSAT core to help
isolate errors in sequential programs, but \sys makes novel use of this feature
 to debug concurrency errors and reduce the information it
must analyze when building differential schedule projections.
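As a toy illustration of an UNSAT core (our example, not drawn from \sys), consider the constraint set
\begin{equation*}
c_1: x > 2 \qquad c_2: y \geq 3 \qquad c_3: x + y = 5 \qquad c_4: z \leq 7
\end{equation*}
The set is unsatisfiable: $c_1$ and $c_2$ together force $x + y > 5$, contradicting $c_3$.  A core-producing solver can report $\{c_1, c_2, c_3\}$ and omit the irrelevant $c_4$; \sys applies the same pruning ability to ordering constraints to discard events irrelevant to a failure.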

\section{\sys}

\sys is a technique for concisely reporting the root cause of a failing
multi-threaded execution, alongside a set of non-failing, alternate executions
of the events that make up the root cause.  \sys produces {\em differential
schedule projections}, which reveal bugs' root causes and aid in debugging. \sys
has five phases, as depicted in Figure~\ref{fig:idea:overview}:

\noindent {\bf 1) Symbolic trace collection.} In a concrete, failing program
execution, \sys traces the basic blocks executed in each thread
independently.  The per-thread path profiles are used to guide symbolic
execution, producing a set of per-thread traces with symbolic information
({\em e.g.} path conditions and read-write accesses to shared variables). 

\noindent{\bf 2) Failing schedule generation.} \sys produces an SMT
formula that corresponds to the symbolic execution trace. The
formula includes constraints that represent each thread's path, 
as well as the failure's manifestation, memory access
orderings, and synchronization orderings.  The solution to the SMT
formula corresponds to a complete, failing, multi-threaded
execution. In other words, this solution specifies the ordering
of events that triggers the error. 

\noindent{\bf 3) Root cause sub-schedule generation.}  \sys produces an
SMT formula corresponding to the symbolic trace, but specifies that
the execution should not fail, by negating the failure condition. 
Combined with the constraints
representing the order of events in the full, failing schedule, the
SMT instance is unsatisfiable.  The SMT solver produces an  UNSAT
core that contains the constraints representing the execution event
orders that conflict with the absence of the failure. Those event
orders are necessary for the failure to occur, {\em i.e.}, the failure's
root cause sub-schedule.

\noindent{\bf 4) Alternate sub-schedule generation.}  \sys examines each
pair of events from different threads in the root cause.  For each pair,
\sys produces a new SMT formula, identical to the one used to find
the root cause, but with constraints implying the ordering of the
events in the pair reversed.  When \sys finds an instance
that is satisfiable, the corresponding schedule is very similar to the
failing schedule, but does not fail.  \sys reports the alternate,
non-failing schedule that is identical to the failing 
schedule, but with the pair of events reordered.

\noindent{\bf 5) Differential schedule projections and failure
  avoidance.}  \sys produces a differential schedule projection by 
  comparing the failing schedule and the alternate,
non-failing sub-schedule.  The DSP shows how the two schedules 
differ in the order of their events and in their data-flow
behavior. 
Additionally, as the reordered pair
from the alternate non-failing schedule eliminates an event order
necessary for the failure to occur, it can be leveraged by a
dynamic failure avoidance system\,\cite{aviso} to prevent future
failures.


\subsection{A Running Example}

To illustrate the main concepts of \sys, we use a running example based on
a modified version of the {\tt pfscan} file scanner studied in prior
work\,\cite{concurrit}.  A slightly simplified snippet of
the program's code is depicted in Figure~\ref{fig:example}a.  The program uses
three threads.  The first thread enqueues elements into a shared queue. The two
other threads attempt to dequeue elements, if they exist. A shared variable,
named {\sl filled}, records the number of elements in the queue.
The code in the {\sl get} function checks that the queue is non-empty (reading
{\sl filled} at line 10), decreases the count of elements in the queue
(updating {\sl filled} at line 20), then dequeues the element.  

The code has a concurrency bug because it does not ensure that the check and
update of {\sl filled} execute atomically. The lack of atomicity permits some
unfavorable execution schedules in which the two consumer threads both attempt
to dequeue the queue's last element.  In that problematic case, both consumers
read that the value of {\sl filled} is 1, passing the test at line 10. One of
the threads proceeds to decrement {\sl filled} and dequeue the element.  The
other reaches the assertion at line 19, reads the value 0 for filled, and
fails, terminating the execution.  Figure~\ref{fig:example}b shows the
interleaving of operations that leads to the failure in a concrete execution.

The next sections show how \sys starts from a
concrete failing execution (like the one in Figure~\ref{fig:example}b), 
produces a focused root cause, and reports its significant
differences from alternate non-failing schedules to aid in debugging.



\begin{figure*}[htb] 
\centering
\includegraphics[width=1.0\textwidth]{figs/pfscan_total}
\caption{\captsize {\bf Running example}. a) Source code. b) Concrete failing execution. c) Per-thread symbolic execution traces. d) Failing Constraint Model.}
\label{fig:example}
\end{figure*}

\subsection{Symbolic Trace Collection}

Like CLAP\,\cite{clap}, \sys avoids the overhead of directly recording
the exact read-write linkages between shared variables that lead to a
failure.  Instead, \sys collects only per-thread path profiles from a
failing, concrete execution.  As in prior work~\cite{clap}, \sys's
path profile for a thread consists of the sequence of executed basic
blocks for that thread in the failing execution.

\sys uses the per-thread path profiles to guide a symbolic execution of each
thread and to produce each thread's separate symbolic execution trace.
Symbolic execution normally explores all paths, forking at each symbolic
branch to follow both outcomes.  \sys, in contrast, guides the symbolic execution to correspond to the
per-thread path profiles by considering only paths that are compatible with the
basic block sequence in the profile.  As symbolic execution proceeds, \sys
records information about control-flow, failure manifestation, synchronization,
and shared memory accesses in each per-thread symbolic execution trace.
Together, the  traces are compatible with the
original, failing, multi-threaded execution. 

Each per-thread, symbolic, execution trace contains four kinds of information.
First, each trace includes a path condition that permits the failure to occur.
A trace's path condition is the sequence of control-flow decisions made during
the trace's respective execution.  Second, the trace for the thread that experienced the
failure must include the event that failed ({\em e.g.}, the failing assertion).
Third, the trace must record synchronization operations, noting their type
({\em e.g.}, lock, unlock, wait, notify, fork, join, {\em etc.}), and the
synchronization variable involved ({\em e.g.}, the lock address) if applicable.
Fourth, the trace must record loads from and stores to shared memory locations.
A key aspect of the shared memory access traces is that they are {\em symbolic}:
loads always read fresh symbolic values and stores may write either symbolic or
concrete values.  Recall from Section~\ref{background:symbolic} that a symbolic
value is an expression recording the operations that produced it, and that it
may refer to other symbolic or concrete values.

Note that any technique for collecting concrete path profiles and generating
symbolic traces is adequate.  In our implementation of \sys that targets C/C++, 
we use a technique very similar to the front-end of CLAP\,\cite{clap}:
\sys records a basic block trace and uses KLEE to generate per-thread symbolic
traces conformant with the block sequence. \sys for Java uses Soot\,\cite{soot} to
collect path profiles and JPF\,\cite{jpf} for symbolic execution.  With some
additional engineering effort, \sys could also use Pex\,\cite{pex} for C\#, or general R\&R
techniques\,\cite{leap,stride}.  

\fakeparagraph{Trace Collection Example.}
Figure~\ref{fig:example}c illustrates a symbolic trace collection for
our running example: it shows the execution path followed by
each thread for the failing schedule in Figure~\ref{fig:example}b and
the corresponding symbolic trace produced by \sys.  Each path
condition in the trace represents a control-flow outcome in the
original execution ({\em e.g.}, \textit{filled@2.10 $>$ 0} denotes that
thread T2 should read a value greater than zero from {\sl filled} at
line 10).  Thread T2's trace includes the assertion that leads to the
failure.  Each trace includes both symbolic and concrete values in
its memory access trace, as well as synchronization
operations from the execution.  Note that we slightly simplified the
threads' traces to keep the figure uncluttered.  {\sl enqueue} and
{\sl dequeue} also access shared data but we only show operations that
manipulate {\sl filled} and perform synchronization because they are
sufficient to illustrate the failure.




\subsection{Failing Schedule Generation} 
\label{sec:idea:symb}

The symbolic, per-thread traces do not explicitly encode the multi-threaded
schedule that led to the failure.  \sys uses the information in the symbolic
traces to construct a system of SMT constraints that encode information about
the execution.  The solution to those SMT constraints corresponds to a
multi-threaded schedule that ends in failure and is compatible with each
per-thread symbolic trace.  This section describes how the 
constraints are computed.

The SMT constraints refer to two kinds of unknown variables, namely the {\em
value variables} for the fresh symbolic symbols returned by read operations and
the {\em order variables} that represent the position of each operation from
each trace in the final, multi-threaded  schedule. We notate value variables as
\textsl{var}$_{t.l}$, meaning the value read from variable \textsl{var} by
thread $t$ at line $l$. We notate order variables as \textsl{Op}$_{t.l}$,
meaning the order of instruction \textsl{Op} executed by thread $t$ at line
$l$, where \textsl{Op} can be a read (R), write (W), lock (L), unlock (U), or
other synchronization operations such as wait/signal (our
notation differs slightly from that in \cite{clap} for clarity).

Figure~\ref{fig:example}d shows part of the system of SMT constraints
generated by \sys from the symbolic traces presented in
Figure~\ref{fig:example}c.  The system, denoted $\Phi_{fail}$, can be
decomposed into five sets of constraints:

\vspace{-3mm}
\begin{equation*} \Phi_{fail} = \phi_{path} \wedge \phi_{bug}
\wedge \phi_{sync} \wedge  \phi_{rw} \wedge \phi_{mo}
\end{equation*}

where $\phi_{path}$ encodes the control-flow path executed by each thread,
$\phi_{bug}$ encodes the occurrence of the failure, $\phi_{sync}$ encodes
possible inter-thread interactions via synchronization, $\phi_{rw}$ encodes
possible inter-thread interactions via shared memory, and $\phi_{mo}$ encodes
possible operation reorderings permitted by the memory consistency model.  The
following paragraphs explain how \sys derives each set of constraints from the
symbolic execution traces.
  
\fakeparagraph{Path Constraint ($\phi_{path}$)} The path SMT
constraint encodes branch outcomes during symbolic execution.  \sys
gathers path conditions by recording the branch outcomes along the
basic block trace from the concrete path profile. A thread's path
constraint is the conjunction of the path conditions for the execution
of the thread in the symbolic trace.  The $\phi_{path}$ constraint is
the conjunction of all threads' path constraints.  Each conjunct
represents a single control-flow decision by constraining the value
variables for one or more symbolic operands.  In our running example,
the shared variable \textsl{filled} is symbolic, resulting in a
$\phi_{path}$ with three conjuncts. The three conjuncts express that
the value of \textsl{filled} should be greater than 0 when thread T1
executes lines 10 and 19, as well as when thread T2 executes line 10.
Figure~\ref{fig:example}d shows $\phi_{path}$ for our example.
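Written out in the notation above (a reconstruction from the description, not copied verbatim from the figure), the example's path constraint is:
\begin{equation*}
\phi_{path} = (\textsl{filled}_{1.10} > 0) \wedge (\textsl{filled}_{1.19} > 0) \wedge (\textsl{filled}_{2.10} > 0)
\end{equation*}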

\fakeparagraph{Failure Constraint ($\phi_{bug}$)} 
The failure SMT constraint expresses the failure's necessary conditions.  The
constraint is an expression over value variables for symbolic values returned
by some subset of read operations ({\em e.g.}, those in the body of an {\sl
assert} statement). Figure~\ref{fig:example}d shows $\phi_{bug}$ for the
running example, representing the sufficient condition for the assertion in
thread T2 to fail.
% (namely, that \textsl{filled}$_{2.19} \leq $~0).


\fakeparagraph{Synchronization Constraints ($\phi_{sync}$)} 
There are two types of synchronization SMT constraints: {\em partial order
constraints} and {\em locking constraints}.
Partial order constraints represent the partial order of different threads'
events resulting from {\em fork/join/wait/signal} operations.
For instance, a constraint for {\em fork} states that the first event of a 
child thread must occur after the {\em fork} operation in the parent thread.
Locking constraints represent the mutual exclusion effects of {\em
  lock} and {\em unlock} operations.  We define a locking constraint
for two threads, $t$ and $t'$, stating that each thread executes a
critical section protected by a {\em lock} operation and an {\em
  unlock} operation -- namely $L$/$U$ and $L'$/$U'$.  The constraint
is a disjunction of SMT expressions representing two possible cases.
In the first case, $t$ acquires the lock first and $U$ happens before
$L'$.  In the second case, $t'$ acquires the lock first and $U'$
happens before $L$.  For each such pair, the disjunction that becomes
part of $\phi_{sync}$ is composed of two SMT constraints, defined over
order variables, representing the alternation of the two cases.
Figure~\ref{fig:example}d shows a subset of the locking constraints
for our running example that involve the lock/unlock pair of thread T0
$(L_{0:3},U_{0:6})$.
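In the order-variable notation, the disjunction contributed by one such pair of critical sections is a sketch of the form:
\begin{equation*}
(U < L') \vee (U' < L)
\end{equation*}
where the first disjunct encodes the case in which $t$ acquires the lock first, and the second the case in which $t'$ does.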


\begin{figure*}[t] \centering
\includegraphics[width=0.9\textwidth]{pfscan_3rootcause}
\caption{\captsize \label{fig:idea:root}{\bf Root cause and
Alternate schedule generation}. a) Possible failing schedule
produced by the SMT solver for the constraint system in
Figure~\ref{fig:example}d ({\sl (U$_{2.20}$)} represents a synthetic unlock event). b) Root cause sub-schedule,
which corresponds to the UNSAT core produced by the solver. c)
Candidate pair reordering and respective alternate schedule. d) Differential Schedule Projection generated by Symbiosis.}
\end{figure*}

\fakeparagraph{Read-Write Constraints ($\phi_{rw}$)}
Read-write SMT constraints encode the matching between read and write operations
that leads to a particular read operation reading the value written by a
particular write operation.  Read-write constraints model the possible
inter-thread interactions via shared memory.   
A read-write constraint encodes that, for every read operation $r$ on a
variable $v$, if $r$ is matched to a write $w$ of the same variable, then the
order variables (and hence execution orders) of all other writes to $v$ are
either less than that of $w$ or greater than that of $r$. The constraint also
implies that $r$'s value variable takes on the symbolic value written by $w$.
Note that read-write SMT constraints are special in that they affect both order
variables and value variables.

In the running example, thread T0 reads \textsl{filled} at line 4.  If T0
reads 0 at that point, then the most recent write to \textsl{filled} must be
the one at line 2.   The matching of that read and write implies that the
write must precede the read ({\em i.e.}, $W_{0.2} < R_{0.4}$), and that
all the other writes to \textsl{filled} ({\em e.g.}, $W_{1.20}$) occur either before
$W_{0.2}$ or after $R_{0.4}$. The same reasoning applies to the
remaining reads of symbolic variables in the program.
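As a sketch in the paper's notation, the case just described contributes a conjunct of the form:
\begin{equation*}
(\textsl{filled}_{0.4} = 0) \wedge (W_{0.2} < R_{0.4}) \wedge \big((W_{1.20} < W_{0.2}) \vee (W_{1.20} > R_{0.4})\big)
\end{equation*}
where the value equality reflects that $R_{0.4}$ returns the (here concrete) value written by $W_{0.2}$; the full $\phi_{rw}$ disjoins one such case per candidate write.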

\fakeparagraph{Memory Order Constraints ($\phi_{mo}$)} The
memory-order constraints specify the order in which instructions
execute within each thread.  Although it is possible to express
different memory consistency models\,\cite{clap}, in this paper we opted
not to focus on relaxed memory orderings, instead focusing on
sub-schedule generation and differential schedule
projections. Therefore, we consider sequential consistency only,
meaning statements in a thread execute in program order.  For the
running example in Figure~\ref{fig:example}b, the memory order
constraint requires that operations in thread T0 respect the
constraint $W_{0.2} < L_{0.3} < R_{0.4} < W_{0.4} < U_{0.6}$.
 

\subsection{Root Cause Sub-schedule Generation}
\label{sec:idea:root} 

Each order variable referred to by an SMT constraint represents the position
of a program event from one of the separate, single-threaded symbolic traces.
A binding of values to the SMT order variables corresponds directly to an
ordering of operations in the otherwise unordered, separate, per-thread traces.
Solving the constraint system binds values to variables, producing a
multi-threaded schedule.  The constraint system includes a constraint
($\phi_{bug}$) representing the occurrence of the failure, so the produced
multi-threaded schedule manifests the failure.  By solving the generated SMT
formula, \sys produces a full, failing, multi-threaded schedule, whose event
orders we denote $\phi_{fsch}$.
The entire multi-threaded schedule may be long, complex, and may contain
information that is irrelevant to the root cause of the failure.  \sys uses a
special SMT formulation to produce a {\em root cause sub-schedule} that prunes
some operations in the full schedule, but preserves event orderings that are {\em
necessary} for the failure to occur.  To compute the root cause sub-schedule,
\sys generates a new constraint system, denoted $\Phi_{root}$, that is {\em
designed to be unsatisfiable} in a way that reveals the necessary orderings.
\sys leverages the ability of the SMT solver to produce an explanation of why a
formula was unsatisfiable, reporting only those necessary orderings.

To build the root cause sub-schedule SMT formula, \sys logically inverts the
{\em failure constraint}, effectively requiring the failure not to occur ({\em i.e.}
$\neg\phi_{bug}$).  \sys adds constraints to the formula that directly encode
the event orders in $\phi_{fsch}$ ({\em i.e.} the full, failing schedule that was previously computed).
The complete root cause sub-schedule formula is then written as follows:

\vspace{-3mm}
\begin{equation*} \Phi_{root} = \phi_{path} \wedge \neg\phi_{bug}
\wedge \phi_{sync} \wedge  \phi_{rw} \wedge \phi_{mo}  \wedge
\phi_{fsch} \end{equation*}

The original SMT formula that \sys used to find the full failing schedule
considers {\em all} possible multi-threaded schedules that are consistent with
the symbolic, per-thread schedules.  In contrast, the root cause sub-schedule
SMT formula adds the failing schedule $\phi_{fsch}$ constraint, accommodating
{\em only} the full, failing schedule.  Combining the inverted failure
constraint with the ordering constraints of the full, failing schedule yields an
unsatisfiable constraint formula:  the inverted failure constraint requires the
failure not to occur, while the failing schedule's ordering constraints require
the failure to occur.  

When an SMT solver, like Z3,  attempts to solve the
unsatisfiable formula, it produces an unsatisfiable (UNSAT) core
which is a subset of constraint clauses that conflict, leaving the
formula unsatisfiable.  The UNSAT core for $\Phi_{root}$ encodes the
subset of clauses that conflict because $\phi_{fsch}$ requires the
failure to occur and $\neg\phi_{bug}$ requires the failure not to
occur.  The event orderings that correspond to those conflicting
constraints are the ones in $\phi_{fsch}$ that imply
$\phi_{bug}$. Those orderings are a necessary condition for the
failure; their corresponding constraints, together with
$\neg\phi_{bug}$, are responsible for the unsatisfiability of
$\Phi_{root}$.  Reporting the sub-schedule corresponding to the UNSAT
core yields fewer total events than are in the full, failing schedule,
yet includes event orderings necessary for the failure.

Figure~\ref{fig:idea:root}a shows a possible failing schedule produced by the
constraint system corresponding to the execution depicted in
Figure~\ref{fig:example}d. The failure constraint $\phi_{bug}$ requires
the corresponding execution to manifest the failure.  The generated path and
memory access constraints are compatible with the failure and the system is
satisfiable, producing the failing execution trace shown ($\phi_{fsch}$). Note
that Symbiosis inserts a {\em synthetic unlock} event {\sl(U$_{2.20}$)} in the model,
in order to preserve the correct semantics of synchronization constraints (see \S~\ref{sec:impl}).   

In Figure~\ref{fig:idea:root}b, the failure constraint is {\em negated}, requiring
the corresponding execution not to manifest the failure ({\em i.e.},
$\textsl{filled}_{2.19} > 0$ and the assertion at line 19 does not fail). The
UNSAT core shows why $\Phi_{root}$ is unsatisfiable: the negated
failure constraint conflicts with the subset of ordering constraints from
$\phi_{fsch}$ that cause $\textsl{filled}$ to be no greater than 0 when thread 2
executes line 19 (note that \textsl{R}$_{2.19}$ defines the value of \textsl{filled}$_{2.19}$
read by the assertion).    

In our experience, the UNSAT core produced by Z3 is typically not minimal. As a
result, while helpful, an UNSAT core alone is not sufficient for debugging and
necessitates a Differential Schedule Projection to isolate a bug's root cause.
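The gap between a raw UNSAT core and a minimal one can be illustrated with a toy, brute-force sketch (this is not \sys's implementation; the events, orderings, and bug predicate are invented): a deletion-based pass drops each ordering constraint whose removal keeps the system unsatisfiable, so only the orderings that genuinely force the failure survive.

```python
from itertools import permutations

# Constraints are ordering pairs (a, b) meaning "a before b"; a total
# order is a position map. The "negated bug" predicate must hold for a
# candidate order to be a model.

def satisfiable(events, pairs, predicate):
    """Brute force: does any total order satisfy all pairs + predicate?"""
    for perm in permutations(events):
        pos = {e: i for i, e in enumerate(perm)}
        if all(pos[a] < pos[b] for a, b in pairs) and predicate(pos):
            return True
    return False

def minimize_core(events, pairs, predicate):
    """Deletion-based minimization: drop each pair whose removal
    leaves the formula unsatisfiable."""
    core = list(pairs)
    for p in list(core):
        rest = [q for q in core if q != p]
        if not satisfiable(events, rest, predicate):
            core = rest
    return core

events = ["W_0.2", "L_0.3", "W_1.20", "R_2.19"]
# Failing-schedule orders, including one irrelevant to the failure.
fsch = [("W_0.2", "L_0.3"), ("W_0.2", "W_1.20"), ("W_1.20", "R_2.19")]
# The failure happens exactly when W_1.20 is the write the read sees;
# the negated bug constraint forbids that data-flow.
not_bug = lambda pos: not (pos["W_1.20"] < pos["R_2.19"]
                           and pos["W_0.2"] < pos["W_1.20"])

print(minimize_core(events, fsch, not_bug))
# [('W_0.2', 'W_1.20'), ('W_1.20', 'R_2.19')]  -- the irrelevant
# ordering (W_0.2, L_0.3) is pruned from the core.
```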

\subsection{Alternate Sub-schedule Generation}

In addition to reporting the bug's root cause, \sys also produces {\em
  alternate, non-failing sub-schedules}.  These alternate
sub-schedules are {\em non-failing} variants of the root cause
sub-schedule, with the order of a single pair of events reversed.
Alternate sub-schedules are the key to building {\em differential
  schedule projections} (\S~\ref{sec:idea:diff}).
\sys generates alternate, non-failing sub-schedules after it
identifies the root cause.  To generate an alternate sub-schedule,
\sys selects a pair of events from different threads that were
included in the bug's root cause.  \sys then generates a new
constraint model, like the one used to identify the root cause.  We
call this model $\Phi_{alt}$. The $\Phi_{alt}$ model includes the
inverted failure constraint.  The model also includes a set of
constraints, denoted $\phi_{invsch}$, that encode the original
full, failing schedule, {\em except the constraint representing the
order of the selected pair of events is inverted}.  Inverting the
order constraint for the pair of events yields the following new
constraint model. 

\vspace{-3mm}
\begin{equation*} \Phi_{alt} = \phi_{path} \wedge \neg\phi_{bug}
\wedge \phi_{sync} \wedge  \phi_{rw} \wedge \phi_{mo}  \wedge
\phi_{invsch} \end{equation*}

The new $\Phi_{alt}$ model corresponds to a different, full
execution schedule in which the events in the pair occur in the order opposite
to that in the full, failing schedule.  If this new model is
satisfiable, then reordering the pair of events in the full,
failing schedule produces a new alternate schedule in which the
failure does not manifest, as shown in Figure~\ref{fig:idea:root}c. 

If there are many event pairs in the root cause, then \sys must generate and
attempt to solve many constraint formulae.  \sys systematically evaluates a set
of candidate pairs in a fixed order, choosing the pair separated by the fewest
events in the original schedule first.  The reasoning behind this design choice
is that events in a pair that are further apart are less likely to be
meaningfully related and, thus, less likely to change the failure behavior when
their order is inverted.  By default, we configured \sys to stop after finding
a single alternate, non-failing schedule.  However, the programmer can instruct
\sys to continue generating alternate schedules, given that studying sets of
schedules may reveal useful invariants\,\cite{recon}.
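The closest-first pair-selection heuristic can be sketched as follows (illustrative only; the event-name convention encodes each event's thread, and the helpers are hypothetical):

```python
# Sketch of the candidate-pair ordering: consider pairs of root-cause
# events from different threads, closest in the failing schedule first.

def thread_of(ev):                 # "W_1.20" -> thread "1"
    return ev.split("_")[1].split(".")[0]

def candidate_pairs(root_events, schedule):
    pos = {e: i for i, e in enumerate(schedule)}
    pairs = [(a, b) for i, a in enumerate(root_events)
                    for b in root_events[i + 1:]
                    if thread_of(a) != thread_of(b)]
    # nearby events are more likely meaningfully related
    return sorted(pairs, key=lambda p: abs(pos[p[0]] - pos[p[1]]))

sched = ["W_0.2", "L_0.3", "R_0.4", "W_1.20", "R_2.19"]
roots = ["W_0.2", "W_1.20", "R_2.19"]
print(candidate_pairs(roots, sched))
# [('W_1.20', 'R_2.19'), ('W_0.2', 'W_1.20'), ('W_0.2', 'R_2.19')]
```

Each candidate pair, in this order, would have its order constraint inverted in $\phi_{invsch}$ until a satisfiable $\Phi_{alt}$ is found.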

Arbitrary operation reorderings may yield infeasible schedules.  Reordering may
change inter-thread data flow, producing values that are inconsistent with a
prior branch dependent on those values.   The inconsistency between the data
and the execution path makes the execution infeasible.  Symbiosis produces only
feasible schedules by including path constraints in its SMT model.  If a
reordering leads to inconsistency, the SMT
path constraints become unsatisfiable and Symbiosis produces no schedule.

\subsection{Differential Schedule Projection} 
\label{sec:idea:diff} 

{\em Differential schedule projection} (DSP) is a novel debugging
methodology that uses root cause sub-schedules and non-failing
alternate sub-schedules.  The key idea behind debugging with a \dsp
is to show the programmer the salient differences between failing,
root cause schedules and non-failing, alternate schedules.  Examining
those differences helps the programmer understand how to {\em fix} the
bug, rather than merely understand the failure, as with
techniques that solely report failing schedules.

Concretely, a DSP consists of a pair of sub-schedules
decorated with several pieces of additional information.  The first
sub-schedule is the root cause sub-schedule, which is the {\em source}
of the projection.  The second sub-schedule is an excerpt from the
alternate, non-failing schedule, which is the {\em target} of the
projection.

The order of memory operations differs between the schedules and,
as a result, the outcome of some memory operations may differ. A
read may observe a different write's result in one schedule than
it observed in another, or two writes may update memory in a
different order in one schedule than in another, leaving memory in
a different final state.  These differences are precisely the
changes in data-flow that contribute to the failure's occurrence.
\sys highlights the differences by reporting {\em data-flow
variations}: data-flows between operations that occur in the source
sub-schedule but not in the target sub-schedule, and
\emph{vice versa}.   
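A data-flow variation can be computed by linking each read to its most recent preceding write to the same variable and diffing the resulting edge sets, sketched below with invented event tuples:

```python
# Sketch: derive data-flow edges (write -> read that observes it) from
# a schedule, then report edges present in one sub-schedule but not
# the other, in both directions.

def dataflow(schedule):
    last_write, edges = {}, set()
    for kind, var, name in schedule:
        if kind == "W":
            last_write[var] = name
        elif kind == "R" and var in last_write:
            edges.add((last_write[var], name))
    return edges

failing   = [("W", "filled", "W_0.4"), ("W", "filled", "W_1.20"),
             ("R", "filled", "R_2.19")]
alternate = [("W", "filled", "W_1.20"), ("W", "filled", "W_0.4"),
             ("R", "filled", "R_2.19")]

print(dataflow(failing) - dataflow(alternate))   # {('W_1.20', 'R_2.19')}
print(dataflow(alternate) - dataflow(failing))   # {('W_0.4', 'R_2.19')}
```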

To simplify its output, \sys reports only a subset of operations in
the source and target sub-schedules.  An operation is included if
it makes up a data-flow variation or if it is one of a pair of
operations that occur in a different order in one sub-schedule than
in the other.  Alternate, non-failing schedules vary in the order
of a single pair of operations, so all operations that precede both
operations in the pair occur in the same order in the source and
target sub-schedules.  \sys does not report operations in the common
prefix, unless they are involved in a data-flow variation.  By
selectively including only operations related to data-flow and
ordering differences, a DSP focuses
programmer attention on the changes to a failing execution that
lead to a non-failing execution.  Understanding those changes is
the key to changing the program's code to fix the bug.
For instance, the DSP in Figure~\ref{fig:idea:root}d shows 
that the data-flow \textsl{W}$_{1.20}\rightarrow$ \textsl{R}$_{2.19}$ 
(in $\phi_{fsch}$) changes to \textsl{W}$_{0.4}\rightarrow$ \textsl{R}$_{2.19}$
(in $\phi_{invsch}$). This data-flow variation is the actual bug's root cause. 
In addition, note that, by reordering the events, the DSP also suggests 
that the block of operations \textsl{L}$_{2.9}$--\textsl{(U}$_{2.20}${\sl)} should
execute atomically, which indeed fixes the bug. 

%Besides representing the actual bug's root cause, this data-flow variation also suggests executing atomically lines 9 to 19 of T2. This hint does fix the bug.

%\xxx[BL]{data-flow in-degree changes, data-flow out-degree
%changes, ignore common prefix between r-c and alts} 

%\xxx[BL]{Need to mention infeasible schedules here}

%\subsection{Automatic Failure Avoidance} \label{sec:idea:avoid}



\section{Implementation}
\label{sec:impl}
\subsection{Instrumenting Compiler and Runtime}

Our Symbiosis prototype implements trace collection for both C/C++ and
Java programs. C/C++ programs are instrumented via an LLVM function
pass. Java programs are instrumented using Soot\,\cite{soot}, which injects
path logging calls into the program's bytecode.
Like CLAP, we assign every basic block a static identifier and,
at the beginning of each block, we insert a call to a function that
updates the executing thread's path. The function logs each block as
the tuple \emph{(thread Id, basic block Id)} whenever the block
executes. The path logging function is implemented in a custom library
that we link into the program.  Although our prototype is fully
functional, it has not been fully optimized. For instance, lightweight
software approaches ({\em e.g.}, Ball-Larus\,\cite{ball_larus}) or
hardware-accelerated approaches ({\em e.g.}, Vaswani {\em et
  al.}\,\cite{vaswani}) could be used to improve the efficiency of
path logging. 
The \sys prototype is publicly available at
\url{https://github.com/nunomachado/symbiosis}.

\subsection{Symbolic Execution and Constraint Generation}

Symbiosis' guided symbolic execution for C/C++ programs has been
implemented on top of KLEE\,\cite{klee}. Since KLEE does not support
multithreaded executions, similarly to CLAP, we fork a new instance of
KLEE's execution to handle each new thread created. We also disabled
the part of KLEE that solves path conditions to produce test inputs
because Symbiosis does not use them.  For Java programs, we have used
Java PathFinder (JPF)\,\cite{jpf}. In this case, we disabled the
handlers for \emph{join} and \emph{wait} operations to allow threads
to proceed with their symbolic execution independently, regardless of the
interleaving. Otherwise, we would have to explore different possible
thread interleavings upon these operations in order to find
one conforming with the original execution.

Additionally, we made the following changes to both symbolic execution
engines. First, we ignore states that do not conform with the threads'
path profiles traced at runtime, guiding the symbolic
execution along the original paths only.  Second, we generate and
output a per-thread symbolic trace containing read/write accesses to
shared variables, synchronization operations, and path conditions
observed across each execution path.


\fakeparagraph{Consistent Thread Identification.} Symbiosis must ensure that threads are
consistently named between the original failing execution and the symbolic
executions. We adopt a technique from jRapture\,\cite{jrapture}
that relies on the observation that each thread spawns its children threads in
the same order, regardless of the global order among all threads.  Symbiosis
instruments thread creation points, replacing the original PThreads/Java thread
identifiers with new identifiers based on the parent-child creation
order.  For instance, if a thread $t_i$ forks its $j$th child thread,
the child thread's identifier will be $t_{i:j}$.
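The naming scheme can be sketched in a few lines (a hypothetical helper; identifiers are strings for illustration):

```python
from collections import defaultdict

_fork_count = defaultdict(int)   # parent id -> children forked so far

def child_id(parent_id):
    """Name the next child of parent_id by its fork order, so names
    are stable across runs regardless of global thread scheduling."""
    _fork_count[parent_id] += 1
    return f"{parent_id}:{_fork_count[parent_id]}"

print(child_id("0"))     # 0:1    -- first child of the root thread
print(child_id("0"))     # 0:2
print(child_id("0:1"))   # 0:1:1  -- first grandchild
```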


\fakeparagraph{Marking Shared Variables As Symbolic.}  
Precisely identifying accesses to shared data, in order to mark shared
variables as symbolic, is a difficult program analysis problem.
Although it is possible to conservatively mark all variables as
symbolic, the number of symbolic variables directly affects the size and
complexity of the constraint system.
For C/C++ programs we manually marked shared variables as symbolic,
like prior work\,\cite{clap}. We also marked variables symbolic if
their values were the result of calls to external libraries not
supported by KLEE.
For Java programs, we use Soot's {\em thread-local objects} (TLO)
static escape analysis strategy\,\cite{tlo}, which soundly
over-approximates the set of shared variables in a program ({\em i.e.}, some
non-shared variables might be marked shared). At instrumentation time, Symbiosis
logs the code point of each shared variable access.  During the
symbolic execution, whenever JPF attempts to read or write a variable,
it consults the log to check whether that variable is shared or not.
If so, JPF treats the variable as symbolic.

\fakeparagraph{Locks Held at Failure Points.}  
If a thread holds a lock when it fails, a reordering of operations in the
critical region protected by the lock may lead to a deadlocking schedule.
Other threads will wait indefinitely attempting to acquire the failing thread's
held lock because the failing thread's execution trace includes no release.  We
skirt this problem by adding a {\em synthetic} lock release for each lock held
by the failing thread at the failure point.  The synthetic releases allow the
failing thread's code to be reordered without deadlocks.  
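The synthetic-release step can be sketched as follows (the trace representation and helper are invented for illustration):

```python
def add_synthetic_unlocks(trace):
    """Append a synthetic release for every lock the failing thread
    still holds at its failure point, closing open critical regions
    in LIFO order so reorderings cannot deadlock."""
    held = []
    for op, lock in trace:
        if op == "lock":
            held.append(lock)
        elif op == "unlock":
            held.remove(lock)
    return trace + [("unlock", l) for l in reversed(held)]

# A thread that fails while holding lock m (cf. the synthetic U_2.20):
t2 = [("lock", "m"), ("assert_fail", None)]
print(add_synthetic_unlocks(t2))
# [('lock', 'm'), ('assert_fail', None), ('unlock', 'm')]
```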

\subsection{Schedule Generation and DSPs}

We implemented failing and non-failing, alternate schedule generation, as well
as differential schedule projections from scratch in around 4000 lines of C++ code. 
After building its SMT formula, Symbiosis solves it using Z3\,\cite{Z3}.
Symbiosis then parses Z3's output to obtain the solution of the model, or the
UNSAT core, when generating the root cause sub-schedule.
Finally, to pretty-print its output, Symbiosis generates graphical output showing the
differences between the failing and the alternate schedules.    


\section{Evaluation}
\label{sec:eval}

We evaluated \sys with three main goals.  First, we show that \sys efficiently
collects path profiles and symbolic path traces.  Second, we show that \sys
formulates and solves its SMT formulae in a practical, useful amount of time.
Third, we show how differential schedule projections help the programmer understand failure behavior
and point to the code requiring a bug fix.  We substantiate our
results with characterization data and several case studies, using buggy, multithreaded C/C++ and Java  applications,
including both real-world and benchmark programs.  We used four C/C++ test
cases: {\em crasher}, a toy program with an atomicity violation; {\em
stringbuffer}, a C++ implementation of a bug in the Java JDK1.4
\verb|StringBuffer| library, developed in prior work~\cite{flanagan03}; {\em
pfscan}, a real-world parallel file scanner adapted for research
by~\cite{concurrit}; and {\em pbzip2}, a real-world, parallel bzip2 compressor.
We used four Java programs: {\em cache4j}, a real-world Java object cache,
driven externally by concurrent update requests; and three tests from the IBM
ConTest benchmarks\,\cite{contest}: {\em airline}, {\em bank}, and {\em
2stage}. Columns 1-4 of Table~\ref{table:results} describe the test cases.     


\begin{table}[t]
\centering
\scriptsize
\caption{\captsize {\bf Benchmarks and performance.}  Column 2 shows lines of code; Column 3, the number of threads; Column 4, the number of shared program variables; Column 5, the number of accesses to shared variables; Column 6, the overhead of path profiling; Column 7, the size of the profile in bytes; Column 8, the symbolic execution time; Column 9, the number of SMT constraints; Column 10, the number of unknown SMT variables; Column 11, the time in seconds to solve the SMT system. } 
\renewcommand{\tabcolsep}{1pt}
\begin{tabular}{c|r|c|c|c|c||c|c|c||c|c|c|}
\cline{2-12}
 & \multirow{2}{*}{{\bf App.}} & \multirow{2}{*}{{\bf LOC}} & {\bf \#} & {\bf \#Shrd.} & {\bf \#Shrd.} & {\bf Prof.} & {\bf Log} & {\bf Symb.} & {\bf \#SMT} &{\bf \#SMT} & {\bf SMT}\\
 & 	&	& {\bf Thd.} & {\bf Var.} & {\bf Acc.} & {\bf Ovhd.} & {\bf Size} & {\bf Time} & {\bf Const.} & {\bf Var.} & {\bf Time}\\
\hline
\hline
\multirow{6}{*}{\rotatebox{90}{C/C++}}  & crasher & 70 & 6 & 4 & 266 & 25.4\% & 458B & 0.02s  & 22295 & 400 & 1m2s\\
\cline{2-12}
  & sbuff& 151 & 2 & 5 & 69 & 16.7\% & 632B & 0.05s  & 423 & 102 & 1s\\
\cline{2-12}
  & pfscan & 830 & 5 & 9 & 74 & 6.6\% & 3.8K & 1.87s & 678 & 131 & 1s\\
\cline{2-12}
  & pbz (S)  & \multirow{3}{*}{1942} & \multirow{3}{*}{9} & \multirow{3}{*}{14} & 176 & 2.5\% & 1.7K & 11.16s & 1361 & 289 & 1s\\
\cline{6-12}
  & pbz (M)  &  &  &  & 367 &1.3\% & 2.6K & 36.17s  & 6771 & 564 & 26s\\
\cline{6-12}
  & pbz (L)  &  &  &  & 1156 &2.5\% & 9.4K & 7m11s  & 514548 & 2866 & 15h15m\\
\hline
\hline
\multirow{6}{*}{\rotatebox{90}{Java}} & airline & 108 & 8 & 2 & 36 & 22\% & 262B & 1.30s  & 2670 & 84 & 1s\\
\cline{2-12}
  & bank & 125 & 3 & 3 & 115 & 12.4\% & 788B & 1.56s  & 8250 & 197 & 2s\\
\cline{2-12}
  & 2stage & 123 & 4 & 4 & 49 & 14.8\% & 196B & 2.53s & 264 & 88 & 1s\\
\cline{2-12}
  & c4j (S) & \multirow{3}{*}{2344} & \multirow{3}{*}{4} & \multirow{3}{*}{7} & 28 & 7.3\% & 366B & 1.64s  & 122 & 51 & 1s\\
\cline{6-12}
  & c4j (M) &  &  &  & 1247 & 8.6\% & 17K & 4.56s  & 303626 & 1810 & 51s\\
\cline{6-12}
  & c4j (L) &  &  &  & 1411 & 9.3\% & 24K & 4.76s  & 1142120 & 2051 & 1h25m\\
\hline
\end{tabular}
\label{table:results}
\end{table}



We evaluated the scalability of \sys for {\em pbzip2} and {\em
  cache4j} by varying the size of their workload. For {\em pbzip2}, we
compressed input files of different sizes: 80KB (small), 2.6MB
(medium), and 16MB (large).  For {\em cache4j}, we re-ran its test
driver, for update loop iteration counts of 1 (small), 5 (medium), and
10 (large).  In some cases, we inserted calls to the \verb|sleep|
function, changing event timing and increasing the failure rate. Our
work is not targeting the orthogonal failure reproduction problem\,\cite{clap}, so
this change does not taint our results.  We ran our C/C++ experiments
on an 8-core, 3.5GHz machine with 32GB of memory, running Ubuntu
10.04.4. For Java we used a dual-core, 2.8GHz Intel i5 CPU with 8GB of
memory, running OS X.

\subsection{Trace Collection Efficiency}

We measured the time and storage overhead of path profiling relative to native
execution and the time cost of symbolic trace collection.    Columns 6-10 of
Table~\ref{table:results} report the results, averaged over five trials. 
\sys imposes a tolerable path profiling overhead, ranging from 1.3\% in {\em
pbzip2 (medium)} to 25.4\% in {\em crasher}. Curiously, the runtime slowdown is
smaller for real-world applications ({\em pfscan, pbzip2, and cache4j}) than
for benchmarks. The reason is that the latter programs have more basic blocks
with very few operations, making block instrumentation frequent.  The space
overhead of path profiling is also low, ranging from 196B ({\em 2stage}) to 24K
({\em cache4j}).  CLAP\,\cite{clap} showed that recording only threads' path
profiles reduces storage overheads considerably (up to 97\%!) compared to
\rnr ({\em e.g.} LEAP\,\cite{leap}).  \sys enjoys this reduction as well.
\sys collects symbolic traces in just seconds for most test cases.  The only
exception is {\em pbzip2 (large)}, which took KLEE around seven minutes. JPF
quickly produced the symbolic traces for all programs.

%BRANDON: I took this paragraph out because I do not understand it.
%, even {\em cache4j} with the larger workload. 
%We attribute the gap to a \emph{deamon thread} in {\em cache4j} 
%that ends abruptly in JPF, thus not executing all events in its trace (the full 
%workload finishes as expected, though). Additionally, optimizing symbolic
%execution is not a goal of this work.

%We attribute the gap to differing scalability of KLEE and JPF. While large, the difference is not likely to be problematic for \sys
%because both durations are practical.  Additionally, optimizing symbolic
%execution was not a goal of this work.


\begin{table}
\begin{center}
\scriptsize
\caption{\captsize {\bf Differential schedule projections.} Column 2 is the number of
event pairs reordered to find a satisfiable alternate schedule ({\em \#Alt Pairs}).
Column 3 shows the number of events in the failing schedule ({\em \#Evts FS})
and Column 4 shows the number of events in the corresponding differential
schedule projection ({\em \#Evts DSP}). Column 5 shows the number of
data-flow edges in the failing schedule ({\em \#D-F in FS}) and Column 6 shows
the number of data-flow variations in the differential schedule projection
({\em \#D-F in DSP}).  Columns 4 and 6 show the percent change
compared to the full schedule. Column 7 shows the number of operations involved
in the data-flow variations ({\em \#Ops to Grok}). Columns 8-9 show whether the
differential schedule projection explains the failure, and whether it directly
points to a fix of the underlying bug in the code.} 
\renewcommand{\tabcolsep}{1pt}
\begin{tabular}{|r|r|r|r|r|r|r|c|c|}
\hline
\multirow{2}{*}{{\bf App.}} & {\bf \#Alt.} & {\bf \#Evts} & {\bf \#Evts} & {\bf \# D-F } & {\bf \# D-F in} & {\bf Ops to} & {\bf Explan} & {\bf Finds} \\
                            & {\bf Pairs}  & {\bf FS}	  & {\bf DSP ($\Delta$\%)}    & {\bf in FS}     & {\bf DSP ($\Delta$\%)}    & {\bf Grok    } & {\bf atory? } & {\bf Fix?}  \\
\hline
\hline
crasher & 27 & 287 & 8 ({\bf $\downarrow$97}) & 107 & 1 ({\bf $\downarrow$99}) & 3 & Y & Y\\
\hline
sbuff & 9 & 73 & 16 ({\bf $\downarrow$78}) & 28 & 1 ({\bf $\downarrow$96}) & 3 & Y & Y\\
\hline
pfscan & 5 & 93 & 21 ({\bf $\downarrow$77}) & 32 & 1 ({\bf $\downarrow$96}) & 3 & Y & Y\\
\hline
pbz (S) & 1 & 206 & 20 ({\bf $\downarrow$90}) & 29 & 1 ({\bf $\downarrow$96}) & 3 & \multirow{3}{*}{Y} & \multirow{3}{*}{N}\\
\cline{1-7}
pbz (M) & 1 & 397 & 36 ({\bf $\downarrow$91}) & 82 & 1 ({\bf $\downarrow$98}) & 3 & & \\
\cline{1-7}
pbz (L) & 2 & 1223 & 168 ({\bf $\downarrow$86}) & 264  & 2 ({\bf $\downarrow>$99}) & 6 &  & \\
\hline
\hline
airline & 1 & 58 & 7 ({\bf $\downarrow$88}) & 25 & 1 ({\bf $\downarrow$92}) & 3 & Y & Y\\
\hline
bank & 181 & 124 & 56 ({\bf $\downarrow$55}) & 72 & 2 ({\bf $\downarrow$97}) & 6 & Y & Y\\
\hline
2stage & 14 & 60 & 12 ({\bf $\downarrow$80}) & 27 & 1 ({\bf $\downarrow$96}) & 3 & Y & Y\\
\hline
c4j (S) & 1 & 39 & 28 ({\bf $\downarrow$28}) & 11 & 2 ({\bf $\downarrow$82}) & 6 & \multirow{3}{*}{Y} & \multirow{3}{*}{N}\\
\cline{1-7}
c4j (M) & 1 & 1257 & 11 ({\bf $\downarrow>$99}) & 552 & 1 ({\bf $\downarrow>$99}) & 3 &  &\\
\cline{1-7}
c4j (L) & 1 & 1422 & 5 ({\bf $\downarrow>$99}) & 628 & 1 ({\bf $\downarrow>$99}) & 3 &  & \\
\hline
\end{tabular}
\label{table:diffdebug}
\end{center}
\end{table}


\subsection{Constraint System Efficiency}

The last three columns of Table~\ref{table:results} describe the SMT formulae
\sys built for each test case.  The Table also reports the amount of time \sys
takes to solve its SMT constraints with Z3, yielding a failing schedule.  The
data show that solver time is very low ({\em i.e.}, seconds) in most cases.
Solver time often grows with constraint count, but not always. {\em cache4j
(large)} has more than double the constraints of {\em pbzip2 (large)}, but was
around 11 times faster.  
Figure~\ref{fig:breakdown} helps explain the discrepancy by showing
the composition of the SMT formulations by constraint type.  {\em pbzip2} has many {\em
locking} and {\em read-write} constraints, while {\em cache4j} has no {\em
locking} constraints, though many {\em read-write} constraints.  The solution to locking
constraints determines the execution's lock order, which in turn constrains the solution to
the read-write constraints. The formulation's complexity grows not only with the constraint count,
but mainly with the interaction of these constraint types.  
%{\em pbzip2 (medium)} and {\em bank} have the same characteristic.

\begin{figure}[t]
\centering
%\hspace{-6ex}
%\begin{figure}[tb] 
%\centering
%\ffigbox{%
\includegraphics[width=0.8\columnwidth]{plots/fig_constBreakdownR}
%}{%
\caption{\captsize {\bf Breakdown of the SMT constraint types.}}
\label{fig:breakdown}
%}
\end{figure}




{\em \sys's SMT solving times are practical for debugging use}. To produce a
\dsp, \sys requires only a trace from a single, failing execution and does not
require any changes to the code or input.  Our experiments are realistic
because a programmer, when debugging, often has a bug report with a small test
case that yields a short, simple execution. The data suggest that \sys handles
such executions very quickly ({\em e.g.}, {\em pbzip2 (small)}, {\em cache4j (medium)}).
Debugging is a relatively rare development task, unlike compilation, which
happens frequently.  Giving \sys minutes or hours to help solve hard bugs (like
{\em pbzip2 (large)}) is reasonable.  Additionally, \sys could use parallel SMT
solving, like CLAP, or incorporate lock-ordering information,
as in~\cite{bravo13}, to decrease solver time.



\subsection{Differential Schedule Projection Efficacy}

\sys produces a Graphviz visualization of 
its differential schedule projections (DSPs) as a graph whose nodes and
edges are annotated with source code lines and variables.
This information captures both schedule variations and data-flow variations.  

To assess the efficacy of DSPs, we first measured the number of program events
and data-flow edges in the full, failing execution that \sys computes.  We then
compared that measurement to the number of events and data-flow edges in the
differential schedule projection.  

Table~\ref{table:diffdebug} summarizes our results.  The most important result
is that \sys's differential schedule projections are simpler and clearer than
looking at full, failing schedules.
\sys reports a small fraction of the full
schedule's data-flows and program events in its output -- on average, 80.8\%
fewer events and 96.2\% fewer data-flows.  By highlighting only the operations
involved in the data-flow variations, \sys focuses the programmer on just a few
events (3 to 6 in our tests).  Furthermore, all events \sys reports are part
of data-flow or event orders that dictate the presence or absence of the
failure.  DSPs depict those events only,
simplifying debugging. 


\sys finds an alternate, non-failing schedule after reordering few event pairs
-- just 1 in many cases ({\em e.g.}, {\em cache4j}, {\em pbzip2}).  \sys reorders one
 pair at a time, starting with the pairs closest in the schedule to the failure, and the
data show that this usually works well.  {\em bank} is an outlier -- \sys
reordered 181 different pairs before finding an alternate, non-failing
schedule.  The bug in this case is an atomicity violation that breaks a
program invariant that is not checked until later in the execution.  As a
result, \sys must search many pairs, starting from the failure point, to
eventually reorder the operations that cause the atomicity violation. 

Note that, even if a failure occurs only in the presence of a particular 
chain of event orderings, it suffices to reorder any pair in the chain 
to prevent that failure. This phenomenon is called the {\em Avoidance-Testing Duality},
and is detailed in previous work\,\cite{aviso}.

We now use some case studies to illustrate how differential schedule
projections focus on relevant operations and help understand each failure.

\begin{figure}[tb] 
\centering
\includegraphics[width=0.93\columnwidth]{figs/cstudies}
\caption{\captsize {\bf Summary of \sys' output for some of the test cases.} Arrows depict data-flows and dashed boxes depict regions that \sys suggests to be executed atomically.}
\label{fig:casestudies}
\end{figure}

\fakeparagraph{stringbuffer} is an atomicity violation first studied
in~\cite{flanagan03} and its \dsp is depicted in
Figure~\ref{fig:casestudies}.  {\sl T1} reads the length of the string
buffer, {\sl sb}, while {\sl T2} modifies it.  When {\sl T2} erases
characters, the value {\sl T1} read becomes stale and {\sl T1}'s
assertion fails.  The \dsp shows that the cause of the failure is
{\sl T2}'s second write, which interleaves {\sl T1}'s accesses to {\sl
  sb.count}.  Moreover, \sys' alternate schedule suggests that, for
{\sl T1}, the write to {\sl len} and the check of the
assertion should execute atomically in order to avoid the failure.
For this case, this is actually a valid bug fix.

\fakeparagraph{pbzip2} is an order violation studied
in~\cite{tracesimplification}.  Figure~\ref{fig:casestudies}b shows
\sys's \dsp that illustrates the failure's cause. {\sl T1}, the
producer thread, communicates with {\sl T2}, the consumer thread, via
the shared queue, {\sl fifo}. If {\sl T1} sets the {\sl fifo} pointer
to null while the consumer thread is still using it, {\sl T2}'s
assertion fails.  The alternate schedule in
Figure~\ref{fig:casestudies}b explains the failure because reordering
the assignment of {\sl null} to {\sl fifo} after the assertion
prevents the failure. The \dsp is, thus, useful for understanding the
failure.  However, to fix the code, the programmer must order the
assertion with the null assignment using a \verb|join| statement.  The
\dsp does not provide this suggestion, so, despite helping explain the {\em
  failure}, it does not completely reveal how to fix the
bug.

\fakeparagraph{bank} is a benchmark in which multiple threads update a
shared bank balance.  It has an atomicity violation that leads to a
lost update.  Figure~\ref{fig:casestudies}c shows the \dsp for the
failure: {\sl T1} and {\sl T2} read the same initial value of {\sl
  BankTotal} and subsequently write the same updated value, rather
than either seeing the result of the other's update.  The final
assertion fails, because {\sl accountsTotal}, the sum of per-account
balances, is not equal to {\sl BankTotal}.  The figure shows that
\sys's \dsp correctly explains the failure and shows that eliminating
the interleaving of updates to {\sl BankTotal} prevents the failure.
Notably, in this example the atomicity violation is not
{\em fail-stop}: it happens in the middle of the trace. Scanning the
trace to uncover the root cause would be difficult, but the \dsp
pinpoints the failure's cause precisely.

\fakeparagraph{cache4j} has a data race that leads to an uncaught
exception when one thread is in a \verb|try| block and
another interrupts it with the library \verb|interrupt()|
function\,\cite{racefuzzer}. JPF does not support exception replay, so we
slightly modified the code, preserving the original behavior, by replacing the
exception with an assertion about a shared variable.
Figure~\ref{fig:casestudies}d shows our version of the code: {\sl
inTryBlock} indicates whether a thread is inside a \verb|try-catch| block,
and the assertion \verb|inTryBlock == true| replaces the \verb|interrupt()|
call.  The program fails when {\sl T1} is interrupted outside a \verb|try|
block, as in the original code.  The schedule variations reported in the \dsp
explain the cause of the failure: if the entry to the \verb|try| block
({\em i.e.}, \verb|inTryBlock = true|) precedes the assertion, execution succeeds; if
not, the assertion fails.  The involvement of exceptions makes the fix for this
bug somewhat more complicated than simply adding atomicity, but the
understanding that the \dsp provides points to the right part of the code and
illustrates the correct behavior.


\section{Related Work}

The work most closely related to ours is CLAP\,\cite{clap}, a
technique for reproducing concurrency failures via symbolic
constraints. \sys builds on several CLAP features: independent
tracking of per-thread control flow, guided symbolic execution, and
the use of SMT constraints.  Like CLAP, \sys can {\em
  also} reproduce bugs
% --- both yield full, failing schedules
(\S~\ref{sec:idea:root}). Finally, \sys's C/C++ implementation uses
KLEE as well.  Despite these similarities, \sys differs fundamentally
from CLAP in its purpose and mechanisms.  Unlike CLAP, \sys produces
precise root cause sub-schedules and alternate, non-failing
sub-schedules.  \sys's sub-schedule reports include many fewer
operations than full schedules and do not require examining the whole
schedule to find the few operations involved in the failure, as is the
case with CLAP.  \sys's sub-schedule reports are the foundation of
differential schedule projections, a new debugging technique not
explored by CLAP.  \sys's mechanism for computing sub-schedules based
on the SMT solver's UNSAT core is novel, as is \sys's technique for
reordering event pairs to compute alternate, non-failing
sub-schedules.  Note that CLAP does not compute alternate, non-failing
schedules.  Unlike CLAP, we do not limit the context switch count,
because \sys precisely isolates the root cause, regardless of the
number of context switches in the entire execution.  Finally, \sys is
more broadly applicable than CLAP and we have demonstrated prototypes
for Java {\em and} C/C++.

{\em Record and Replay} (\rnr) techniques are also relevant to our
work.  These techniques fall into three categories: {\em i)
  Order-based} techniques record the order of certain events during an
execution and then replay them in the same
order\,\cite{leap,order,care}.
%If the recorded execution saw a failure, the replayed execution reproduces the failure;
{\em ii) Search-based} techniques only trace partial information at
runtime ({\em e.g.}, recording solely the order of write
operations\,\cite{stride}) and then search the space of executions for
one that conforms to the observed
events\,\cite{pres,odr,esd}; {\em iii) Execution-based}
techniques restrict all executions of a program so that, for a given
input, the program's behavior is constrained to be deterministic from
one run to the next\,\cite{grace,kendo,dmp,coredet}.
\sys is mostly orthogonal to the techniques above, but shares some
important characteristics.  %Similar in mechanism to some order and search based techniques, \sys traces events during a failing execution.  
Like \rnr techniques, given a concrete event trace, \sys can produce a
failing schedule that conforms to the traced events, reproducing the
failure.  \sys's precise differential schedule projections and broader
applicability to debugging and failure avoidance make it novel in
contrast to \rnr techniques. Unlike deterministic execution
systems, \sys does not aim to perturb production runs, avoiding
the risks of doing so.


%\paragraph{Concurrency Debugging and Failure Avoidance}

Techniques such as {\em interleaving
pattern-matching} or {\em sub-schedule search} also
bear similarity to \sys. Like \sys, they aim to identify the root cause of a
concurrency bug, either to show the programmer how to fix it or to avoid
it in future executions.
{\em Interleaving pattern-matching}\,\cite{falcon,colorsafe,avio}
techniques search an execution, dynamically or by
reviewing a log, for problematic patterns of memory accesses.
Although often effective, these solutions miss bugs
that do not fit the known patterns.
\sys, in contrast, is not limited to searching
for known patterns.
Sub-schedule search, in turn, is general and not limited to specific
patterns\,\cite{conseq,aviso,defuse}.  Unfortunately, the space of an execution's
possible sub-schedules can be large. For some prior techniques, considering
different sub-schedules requires multiple additional program executions,
combined with statistical analysis to make the search feasible. \sys requires only
a single, failing execution, does not rely on statistical reasoning, and
produces precise results.  Mechanically, these techniques differ from ours in that none
uses SMT to search and none produces a differential view of its result, as \dsps do.

%\paragraph{Testing Approaches}

Finally, other techniques systematically explore the space of possible program
executions to generate test cases.  Java PathFinder\,\cite{jpf},
KLEE\,\cite{klee}, Pex\,\cite{pex}, and Mimic\,\cite{mimic} use symbolic program execution
to search for an {\em input} that induces a failing path constraint.
Chess\,\cite{chess}, PCT\,\cite{pct}, and Concurrit\,\cite{concurrit} run a program for a particular
input and rely on an augmented scheduler to push the execution toward a
potential failure. Con2colic testing\,\cite{con2colic}, on the other hand,
employs {\em concolic execution} and uses heuristics to explore the space of
possible thread interleavings and generate tests for multiple execution paths.

These techniques reveal only full,
failing executions or buggy inputs, and provide neither root cause
information nor differential schedule projections. The technique
of~\cite{issta2002} shares our goal of narrowing down the difference between successful
and failing schedules to pinpoint a bug, but it relies on random jitter
and requires re-executing the program, whereas \sys operates only on
SMT formulations, which are sound and complete and thus provide a
formal guarantee that alternate schedules are non-failing.


\section{Conclusions \& Future Work}

This paper described \sys, a new technique that gets to the bottom of
concurrency bugs.  \sys reports focused {\em sub-schedules}, eliminating the
need for a programmer or automated debugging tool to search through an entire
execution for the bug's root cause.  \sys also reports novel {\em alternate,
non-failing schedules}, which help illustrate {\em why} the root cause is the
root cause and how to avoid failures.  Our novel {\em differential schedule
projection} approach links the root cause and alternate sub-schedules to
data-flow information, giving the programmer deeper insight into the bug's
cause than path information alone.  An essential part of \sys's mechanism is
the use of an SMT solver and, in particular, its ability to report the part of
a formula that makes it unsatisfiable.  \sys carefully constructs a
deliberately unsatisfiable formula so that the conflicting part of that formula
is the bug's root cause.  We built two \sys prototypes, one for C/C++ and one
for Java.  We used them to show, for a variety of real-world and benchmark
programs from the debugging literature, that \sys isolates bugs' root causes and
provides differential schedule projections that show how to fix those root
causes.

\section*{Acknowledgements}
We would like to thank 
the anonymous reviewers for their invaluable feedback.
This work was partially supported by Funda\c{c}\~{a}o para a Ci\^{e}ncia e a Tecnologia (FCT), 
under project UID/CEC/50021/2013. 


\bibliographystyle{abbrvnat}
%\bibliographystyle{plain}
\bibliography{paper_short}



\end{document}

