% !TEX TS-program = pdflatex
% !TEX encoding = UTF-8 Unicode

\documentclass[11pt]{article}

\usepackage[utf8]{inputenc}

\usepackage[margin=1in]{geometry}
\geometry{a4paper}

\usepackage{graphicx}
\usepackage{verbatim}
\usepackage{amssymb}
\usepackage{mathtools}
\usepackage{semantic}
\usepackage{iris}
\usepackage{url}

\newcommand{\ignore}[1]{}

\bibliographystyle{plain}

\title{Verifying Concurrent Programs with VST}
\author{William Mansky}
%\date{}

\begin{document}
\maketitle

\begin{abstract}
The latest version of VST includes integration of most of the features of Iris~\cite{iris1}, allowing us to verify concurrent programs with user-defined ghost state, global invariants, and logically atomic specifications. This document describes how to use Verifiable C to prove the correctness of concurrent programs, using examples that ship with VST. We will assume familiarity with the basics of VST, as described in the VST manual. The final sections also assume some familiarity with Iris: if you haven't used it before, Elizabeth Dietrich's guide~\cite{iris-guide} is a good resource for beginners.
\end{abstract}

\section{Verifying a Concurrent Program with Locks}
A concurrent C program is a sequential C program with a few additional features. It may create new \emph{threads} of execution, which execute functions from the program in parallel, but with a single shared memory: any data on the heap (including global variables and \texttt{malloc}ed memory) can potentially be accessed by every thread. Threads can thus communicate by passing values to each other through memory locations, and threads may also \emph{synchronize}, blocking each other's control flow to ensure that operations happen in a certain order. VST supports two kinds of synchronization by default: locks, and sequentially consistent atomic operations (which we will discuss in a later section). A \emph{lock} data structure supports functions \texttt{acquire} and \texttt{release}, and ensures that it can be held by at most one thread at a time. When a thread tries to \emph{acquire} a lock that is not currently available, it pauses its execution (``blocks'') until the lock becomes available\footnote{Note that the thread that acquires a lock need not be the one that releases it: a locked lock can be passed through another synchronization mechanism to another thread, which then releases it (the ``daring'' concurrency of O'Hearn~\cite{csl}), as demonstrated by the join lock in Section~\ref{threads}. Some authors use ``lock'' to refer specifically to a mutex that must be released by the thread that acquires it.}. Locks can be used to enforce \emph{mutual exclusion}, ensuring that a memory location is only accessed by one thread at a time. VST's locks are declared in the C header file \texttt{concurrency/threads.h}, which should be \texttt{\#include}d in any Verifiable C concurrent program.

The file \texttt{progs64/incr.c}\footnote{We assume that readers are using VST in 64-bit mode, but these examples also appear in the 32-bit folder \texttt{progs}.} contains a simple concurrent C program. It declares a global struct \texttt{c} of type \texttt{counter} with two fields: an integer \texttt{ctr}, and a \texttt{lock} that coordinates access to \texttt{ctr}. The struct has two accessor functions: \texttt{incr}, which increases the value of \texttt{ctr} by one, and \texttt{read}, which reads the current value of \texttt{ctr}. By acquiring the lock before using \texttt{ctr}, these functions ensure that their operations are well synchronized: without the lock, one thread might access \texttt{ctr} while another thread writes to it, causing a \emph{data race}, which is undefined behavior in C. This is reflected in Verifiable C by the use of \emph{shares}, as described in section 44 of the manual. A thread can only write to a memory location if it holds a sufficiently large share of the location that no other thread can possibly read from it. We allow \texttt{ctr} to be modified by multiple threads by moving its ownership share between threads via locks, as described below. A consequence of this share discipline is that if we prove any pre- and postcondition in Verifiable C for a program, we also know that as long as the precondition is met, the program does not have any data races (just as proving correctness of a sequential program also implies that it has no null-pointer dereferences). In this section, we will focus on the verification of the \texttt{incr} and \texttt{read} functions, and demonstrate how to prove correctness of programs with locks.

The proof of correctness for \texttt{incr.c} is in \texttt{progs/verif\_incr\_simple.v}. It has several elements that do not appear in sequential Verifiable C proofs. First, its imports include \texttt{VST.concurrency.lock\_specs}, which defines generic specifications for locks, and \texttt{VST.atomics.verif\_lock}, which provides a verified implementation of the lock specifications. It then associates the \texttt{spawn} function with its standard specification. We will go through the specifications in detail in Section~\ref{lock-specs}.

The first thing we need to do to verify the counter functions is to define a \emph{lock invariant}, a predicate describing the resources protected by the \texttt{lock} field. A lock invariant can be any Verifiable C assertion (i.e., \texttt{mpred}), subject to an exclusivity condition described later. In this case, the lock protects the data in the \texttt{ctr} field. We want to know specifically that \texttt{ctr} always contains an unsigned integer value, so we use the lock invariant \[\mathsf{cptr\_lock\_inv} \triangleq \mathsf{EX}\ z : \mathsf{Z}, \mathsf{field\_at}\ \mathsf{Ews}\ \mathsf{t\_counter}\ [\mathsf{StructField}\ \texttt{\_ctr}]\ (\mathsf{Vint} (\mathsf{Int.repr}\ z))\ \texttt{c}\] We use the $\mathsf{lock\_inv}$ predicate to assert that a lock exists in memory with a given invariant: $\mathsf{lock\_inv}\ \mathsf{sh}\ h\ R$ means that the current thread owns share $\mathsf{sh}$ of a lock with handle $h$ with invariant $R$. The lock handle can contain various information, but always at least contains the location of the lock in memory, accessed as $\mathsf{ptr\_of}\ h$. Shares of a lock can be combined and split in the same way as shares of $\mathsf{data\_at}$, and any nonempty share is enough to acquire or release the lock\footnote{This contrasts with ordinary $\mathsf{data\_at}$, in which we need a writable share to write to a location; multiple threads can try to acquire a lock at the same time, and the lock's built-in synchronization will prevent any race conditions.}.

Now we can give specifications to the functions that manipulate counters.
\begin{verbatim}
 DECLARE _incr
  WITH gv : globals, sh1 : share, sh : share, h : lock_handle
  PRE [ ]
    PROP  (readable_share sh1; sh <> Share.bot)
    PARAMS() GLOBALS(gv)
    SEP   (field_at sh1 t_counter [StructField _lock] (ptr_of h) (gv _c);
         lock_inv sh h (cptr_lock_inv (gv _c)))
  POST [ tvoid ]
    PROP ()
    RETURN ()
    SEP (field_at sh1 t_counter [StructField _lock] (ptr_of h) (gv _c);
         lock_inv sh h (cptr_lock_inv (gv _c)))
\end{verbatim}

\begin{verbatim}
 DECLARE _read
  WITH gv : globals, sh1 : share, sh : share, h : lock_handle
  PRE [ ]
    PROP  (readable_share sh1; sh <> Share.bot)
    PARAMS() GLOBALS(gv)
    SEP   (field_at sh1 t_counter [StructField _lock] (ptr_of h) (gv _c);
           lock_inv sh h (cptr_lock_inv (gv _c)))
  POST [ tuint ]
    EX z : Z,
    PROP ()
    RETURN (Vint (Int.repr z))
     SEP (field_at sh1 t_counter [StructField _lock] (ptr_of h) (gv _c);
          lock_inv sh h (cptr_lock_inv (gv _c))).
\end{verbatim}
These are surprisingly boring specifications! The \texttt{read} function needs to know that we have access to \texttt{lock} and that it has the appropriate invariant, and returns some number about which we know nothing. The \texttt{incr} function does even less, just returning its resources as is. These are enough to prove \emph{safety} of the program, to show that it has no data races, but not enough to learn much about what the program actually computes. This is a result of our invariant: when a thread acquires the lock, the \emph{only} thing it knows about the memory it gains access to is that it satisfies the invariant. This is a well-known limitation of basic concurrent separation logic, and it is generally solved using \emph{ghost state}, which we describe in Section~\ref{ghost}. For now, we will describe how to prove safety for this program; later we will see how the proof of correctness builds on the safety proof.

There is one more important step before we can prove that the counter functions satisfy their specifications. In order to use a resource invariant, we need to show that it is \emph{exclusive}, i.e., that it can only hold once in any given state. This is represented in VST by a property $\mathsf{exclusive\_mpred}\ R \triangleq R * R \vdash \mathsf{FF}$. This allows us to know that if the current thread holds the invariant, it also holds the lock. Fortunately, most common assertions (e.g., $\mathsf{data\_at}$ for a non-empty type) are exclusive, so we can fairly easily prove the desired property $\mathsf{ctr\_inv\_exclusive}$. It is useful to add this lemma to $\mathsf{auto}$'s hint database via \textsf{Hint Resolve}, so that the related proof obligations can be discharged automatically\footnote{In fact, the actual lock functions require a property called $\mathsf{weak\_exclusive\_mpred}$ that follows from $\mathsf{exclusive\_mpred}$ but avoids universe inconsistencies; the $\mathsf{lock\_props}$ tactic automatically makes the necessary transformations.}.

Now we can verify the bodies of \texttt{read} and \texttt{incr}, using the same Verifiable C tactics that we would use for a sequential program. The only new element is the use of the \texttt{acquire} and \texttt{release} functions, which allow threads to interact with locks and transfer ownership of resource invariants. We interact with these functions using the ordinary $\mathsf{forward\_call}$ tactic. Their witnesses take three arguments: the location $\ell$ of the lock, the share $\mathit{sh}$ of the lock it owned by the caller, and the lock invariant $R$. Their pre- and postconditions are as follows\footnote{In fact, this spec for \texttt{release} is a special case of a more general spec, as indicated by the \texttt{release\_simple} argument to \texttt{forward\_call}. We describe the lock specs in detail in Section~\ref{lock-specs}.}:
$$\{!!\mathit{sh} \neq \mathsf{Share.bot} \land \mathsf{lock\_inv}\ \mathit{sh}\ \ell\ R\}\ \texttt{acquire}(\mathsf{ptr\_of}\ \ell)\ \{R * \mathsf{lock\_inv}\ \mathit{sh}\ \ell\ R\}$$
$$\{!!(\mathit{sh} \neq \mathsf{Share.bot}) * (\mathsf{exclusive}\ R) * R * \mathsf{lock\_inv}\ \mathit{sh}\ \ell\ R\}\ \texttt{release}(\mathsf{ptr\_of}\ \ell)\ \{\mathsf{lock\_inv}\ \mathit{sh}\ \ell\ R\}$$
When we acquire the lock, we also gain access to the invariant; when we release the lock, we must re-establish the invariant.

Consider the proof of $\mathsf{body\_read}$: we begin with the usual invocation of $\mathsf{start\_function}$. We then use $\mathsf{forward\_call}$ to process the \texttt{acquire} call, adding $\mathsf{cptr\_lock\_inv}$ to the \textsf{SEP} clause. Unfolding its definition tells us that we now have access to \texttt{ctr}, and the integer stored in it, which we introduce as $z$. We assign $z$ to the local variable \texttt{t}, and then release the lock. We use the $\mathsf{lock\_props}$ tactic to discharge the exclusive obligation of \texttt{release} automatically, so that we need only prove that the invariant holds again. In this case, since have not changed the value of \texttt{ctr}, its value is still $z$. The return value of the function is that same $z$, and the proof is complete. The proof of $\mathsf{body\_incr}$ is almost identical, except that at the call to \texttt{release} the invariant now holds at $z + 1$.

\subsection{Compiling a Concurrent Program}
A concurrent program written for VST relies on \texttt{concurrency/threads.h} and \texttt{atomics/SC\_atomics.h}. To make sure these files are available to e.g. \texttt{gcc} when compiling the program, you should pass the arguments \texttt{-I <path-to-VST>/concurrency} and \texttt{-I <path-to-VST>/atomics}.

\section{Thread Creation and Joining}
\label{threads}
Every C program starts its execution as a single-threaded program. It becomes concurrent when it spawns a new thread, for instance by calling Verifiable C's \texttt{spawn} function. The \texttt{spawn} function takes two arguments: a pointer to a function that the new thread should execute, and a \texttt{void*} that will be passed as an argument to that function. The new thread begins execution at the start of the indicated function, and continues to execute until it returns from that function; until then, it can assign to local variables, perform memory operations, and call other functions just as a single-threaded program would. Each thread has its own local variables, but memory is shared between all threads in a program. In the current version of VST, the starting function for a thread must take a single argument of type \texttt{void*} and return a value of type \texttt{void*}; the value returned is ignored completely, so it will usually be \texttt{NULL}.

The separation logic rule for \texttt{spawn} is:
$$\{P(y) * (f : x.\ \{P(x)\}\{\mathsf{emp}\})\}\ \texttt{spawn}(f, y)\ \{\}$$
where $f : x.\ \{P(x)\}\{\mathsf{emp}\}$ means that the function $f$ takes a parameter $x$, and has precondition $P(x)$ and postcondition $\mathsf{emp}$. From the parent thread's perspective, we give away resources satisfying the precondition of the spawned function $f$ and get nothing back. Those resources now belong to the child thread, whose behavior is invisible to all other threads; the postcondition of \textsf{emp} reflects the fact that any resources held by the thread when it returns will be lost forever.

If we want to \emph{join} with a spawned thread once it finishes, retrieving its resources and learning the results of any computations it performed, we can do so with a lock, which we can either pass as the argument to $f$ or provide as a global variable. In order to recover \emph{all} the resources the thread owned, including the share of the lock that we use for joining, we need to use a \emph{recursive} lock, one whose invariant includes a share of the lock itself. We can make such an invariant with the \textsf{selflock} function, as we can see in the definition of $\mathsf{thread\_lock\_inv}$. Intuitively, $\mathsf{selflock}\ R\ \mathit{sh}\ \ell = R * \mathsf{lock\_inv}\ \mathit{sh}\ \ell\ (\mathsf{selflock}\ R\ \mathit{sh}\ \ell)$. 

In Verifiable C, functions that will be passed to \texttt{spawn} must have specifications of a certain form, as in $\mathsf{thread\_func\_spec}$:
\begin{verbatim}
 DECLARE _thread_func
  WITH y : val, x : share * share * lock_handle * lock_handle * globals
  PRE [ tptr tvoid ]
         let '(sh1, sh, h, ht, gv) := x in
         PROP  (readable_share sh1; sh <> Share.bot; ptr_of ht = y)
         PARAMS (y) GLOBALS (gv)
         SEP   (field_at sh1 t_counter [StructField _lock] (ptr_of h) (gv _c);
                lock_inv sh h (cptr_lock_inv (gv _c));
                lock_inv sh ht (thread_lock_inv sh1 sh (gv _c) h ht))
  POST [ tint ]
         PROP ()
         RETURN (Vint Int.zero)
         SEP ().
\end{verbatim}
The \textsf{WITH} clause must have exactly two elements: one of type \textsf{val} that holds the argument passed to the function, and another that holds the entire rest of the witness, usually as a tuple. We can then destruct the tuple inside the precondition to access the rest of the witness. In the precondition, \textsf{PARAMS} must contain only the argument. The \textsf{PROP} and \textsf{SEP} clauses are unrestricted. The postcondition must return 0, and otherwise be empty. In this example, the thread function takes a share of the \texttt{lock} field and a share of the corresponding $\mathsf{lock\_inv}$ assertion, along with the recursive lock for joining, whose location is passed in the argument. The interesting part of the proof of this specification is the call to \texttt{release}, where we use a different subspecification ($\mathsf{release\_self}$, for recursive locks) to transfer all of the thread's resources (including its share of the lock) into the lock invariant.

Verifying the \texttt{main} function, which spawns the thread, is more complicated. First, we create the locks used by \texttt{ctr} and $\mathtt{thread\_func}$, using the \texttt{makelock} function:
$$\{\mathsf{mem\_mgr}\ \mathit{gv}\}\ \texttt{makelock}()\ \{\ell.\ \mathsf{lock\_inv}\ \mathsf{Tsh}\ \ell\ (R\ \ell)\}$$
We can make a lock with any invariant $R$ at any time, and $R$ can refer to the lock itself. We specifically do \emph{not} need to know that $R$ holds when we call \texttt{makelock}; we create the lock in the locked state, and only need to provide $R$ when we release it. This is particularly convenient for making join-style locks, which are only released once (when the associated thread finishes its computation).

Next, we spawn the child thread using the $\mathsf{forward\_spawn}$ tactic, a \texttt{spawn}-specific wrapper around $\mathsf{forward\_call}$. Its general form is $\mathsf{forward\_spawn}\ \mathit{id}\ \mathit{arg}\ \mathit{w}$, where $\mathit{id}$ is the identifier of the function to be spawned, $\mathit{arg}$ is the value of the provided argument, and $\mathit{w}$ is the rest of the witness for the spawned function. The tactic automatically discharges the proof obligations of the spawn rule, leaving us to prove only the precondition of the spawned function. In this example, we split off a share of the \texttt{lock} field and each of the $\mathsf{lock\_inv}$ assertions and provide them to the spawned thread to satisfy the precondition of $\mathtt{thread\_func}$, while retaining the other shares so that \texttt{main} can invoke the \texttt{incr} function in parallel with the spawned thread.

Finally, we join with the spawned thread by acquiring its lock. Because the lock is recursive, acquiring it allows us to retrieve the other half of both the thread lock and the counter lock, regaining full ownership. This allows us to deallocate the locks with calls to \texttt{freelock} (which like \texttt{release} has recursive and nonrecursive subspecs). We must hold a lock in order to free it, as seen in the nonrecursive \texttt{freelock} rule:
$$\{(\mathsf{exclusive}\ R) * R * \mathsf{lock\_inv}\ \mathsf{Tsh}\ \ell\ R\}\ \texttt{freelock}(\ell)\ \{R\}$$
Once we have freed both locks, the program is finished.

\section{Using Ghost State}
\label{ghost}
In the previous section, we proved that \texttt{incr.c} is safe, but not that the counter is 2 after being incremented twice. To prove that, our threads need to be able to record information about the actions they have performed on the shared state, instead of sealing all knowledge of the value of the \texttt{ctr} field inside the lock invariant. We can accomplish this with \emph{ghost variables}, a simple form of auxiliary state.

In \texttt{verif\_incr.v}, we augment the proof of the previous section with ghost variables and prove that the program computes the value 2. To do so, we use the new $\mathsf{ghost\_var}$ assertion: $\mathsf{ghost\_var}\ \mathit{sh}\ a\ g$ asserts that $g$ is a \emph{ghost name} (\textsf{gname} in Coq) associated with the value $a$, which may be of any type. We can split and join shares of ghost variables in the same way as memory locations, but they are not modified by program instructions. Instead, they can change by \emph{view shifts}, which can be introduced at any point in the proof of a program. Whenever a thread holds full ownership (\textsf{Tsh}) of a ghost variable, it can change the value of the variable arbitrarily. For \texttt{incr.c}, we will add two ghost variables, each tracking the contribution of one thread to the value of the \texttt{ctr} field. We will divide ownership of each ghost variable between the lock invariant and one of the threads. By maintaining the invariant that \texttt{ctr} is the sum of the two contributions, we will be able to conclude that after two increments, the value of \texttt{ctr} is 2.

\subsection{Extending the Specifications}
Previously, the lock invariant for the \texttt{ctr} lock was $$\mathsf{EX}\ z : \mathsf{Z}, \mathsf{field\_at}\ \mathsf{Ews}\ \mathsf{t\_counter}\ [\mathsf{StructField}\ \texttt{\_ctr}]\ (\mathsf{Vint} (\mathsf{Int.repr}\ z))\ \texttt{c}$$
Now, we augment it with shares of two ghost variables:
\begin{align*}&\mathsf{EX}\ z : \mathsf{Z}, \mathsf{field\_at}\ \mathsf{Ews}\ \mathsf{t\_counter}\ [\mathsf{StructField}\ \texttt{\_ctr}]\ (\mathsf{Vint} (\mathsf{Int.repr}\ z))\ \texttt{c}\ * \\&\quad\mathsf{EX}\ x : \mathsf{Z}, \mathsf{EX}\ y : \mathsf{Z}, !!(z = x + y) \ \&\&\ \mathsf{ghost\_var}\ \mathsf{gsh1}\ x\ g1 * \mathsf{ghost\_var}\ \mathsf{gsh1}\ y\ g2\end{align*}
The thread that holds the other half of $g1$ or $g2$ can thus record its contribution to the \texttt{ctr}, but can only change that contribution while holding the lock, and only while maintaining the invariant that $z = x + y$.

Next, we modify each specification to take the ghost variables into account. Our specification for \texttt{incr} now needs to know which ghost variable the caller wants to increment, so it takes a boolean \textsf{left} telling it whether we are looking at the left ($g1$) or right ($g2$) ghost variable. %(In Section~\ref{incrN}, we will generalize this to allow the caller to pass any \textsf{gname} from a list.)
\begin{verbatim}
 DECLARE _incr
  WITH sh1 : share, sh : share, h : lock_handle, g1 : gname, g2 : gname,
       left : bool, n : Z, gv: globals
  PRE [ ]
    PROP  (readable_share sh1)
    PARAMS () GLOBALS (gv)
    SEP   (field_at sh1 t_counter [StructField _lock] (ptr_of h) (gv _c);
           lock_inv sh h (cptr_lock_inv g1 g2 (gv _c));
           ghost_var gsh2 n (if left then g1 else g2))
  POST [ tvoid ]
    PROP ()
    LOCAL ()
    SEP (field_at sh1 t_counter [StructField _lock] (ptr_of h) (gv _c);
         lock_inv sh h (cptr_lock_inv g1 g2 (gv _c));
         ghost_var gsh2 (n+1) (if left then g1 else g2)).
\end{verbatim}
Holding one of the ghost variables is not enough to guarantee anything about the value returned by \texttt{read}, but if we hold both of them, we should be able to predict the result.
\begin{verbatim}
 DECLARE _read
  WITH sh1 : share, sh : share, h : lock_handle, g1 : gname, g2 : gname,
       n1 : Z, n2 : Z, gv: globals
  PRE [ ]
    PROP  (readable_share sh1)
    PARAMS () GLOBALS (gv)
    SEP   (field_at sh1 t_counter [StructField _lock] (ptr_of h) (gv _c);
           lock_inv sh h (cptr_lock_inv g1 g2 (gv _c));
           ghost_var gsh2 n1 g1; ghost_var gsh2 n2 g2)
  POST [ tuint ]
    PROP ()
    RETURN (Vint (Int.repr (n1 + n2)))
    SEP (field_at sh1 t_counter [StructField _lock] (ptr_of h) (gv _c);
         lock_inv sh h (cptr_lock_inv g1 g2 (gv _c));
         ghost_var gsh2 n1 g1; ghost_var gsh2 n2 g2).
\end{verbatim}
Finally, we add ownership of ghost variable $g1$ to the resources passed to \texttt{thread\_func} (and collected by its lock when it terminates):
\begin{verbatim}
 DECLARE _thread_func
  WITH y : val, x : share * share * lock_handle * lock_handle * gname * gname * globals
  PRE [ tptr tvoid ]
    let '(sh1, sh, h, ht, g1, g2, gv) := x in
    PROP  (readable_share sh1; ptr_of ht = y)
    PARAMS (y) GLOBALS (gv)
    SEP   (field_at sh1 t_counter [StructField _lock] (ptr_of h) (gv _c);
           lock_inv sh h (cptr_lock_inv g1 g2 (gv _c));
           ghost_var gsh2 0 g1;
           lock_inv sh ht (thread_lock_inv sh1 sh h g1 g2 (gv _c) ht))
  POST [ tint ]
    PROP ()
    RETURN (Vint Int.zero)
    SEP ().
\end{verbatim}
The value of $g1$ starts at 0, and should be 1 by the time the thread terminates, as reflected in \texttt{thread\_lock\_R}.

\subsection{Proving with Ghost State}
The proof for \texttt{incr} begins in the same way as before: we acquire the lock, unfold the invariant, and introduce the variables $x, y,$ and $z$. This also gains us $\mathsf{gsh1}$ shares of both ghost variables. The code that reads and increments \texttt{ctr} proceeds in the same way as before; even though the value of \texttt{ctr} has increased, the ghost variables are not yet updated. We do the update after the increment, but in fact we are free to do it anytime between acquiring and releasing the lock: the relationship between the values of the ghost variables and the real value in memory is part of the lock invariant, so we can break it freely while the lock is held, as long as we restore it before calling \texttt{release}.

When we are ready, we gather together all the shares of ghost variables that we hold, and use the \textsf{viewshift\_SEP} tactic to update the ghost variables. This tactic is analogous to \textsf{replace\_SEP}, but instead of proving $P \vdash Q$, we instead prove $P \Rrightarrow Q$ (written as \verb+P |-- |==> Q+ in ASCII). This \emph{view shift} relation includes the ordinary derives relation, but also has a number of special rules that allow us to modify ghost state\footnote{Formally, the view shift operator allows us to perform any \emph{frame-preserving update} on ghost state, i.e., any change that could not invalidate any other thread's ghost state. We will discuss this idea further in Section~\ref{custom}.}. Of particular interest here is lemma \textsf{ghost\_var\_update}, which says that $\mathsf{ghost\_var}\ \mathsf{Tsh}\ v\ p \Rrightarrow \mathsf{ghost\_var}\ \mathsf{Tsh}\ v'\ p$. As long as we have total ownership of a ghost variable, we can change its value to anything of the same type.

The ghost state logic performed in the $\mathsf{viewshift\_SEP}$ is captured in the lemma $\mathsf{ghost\_var\_incr}$. We begin by using the lemma $\mathsf{bupd\_frame\_r}$ to frame out the unused ghost variable. Then we use the lemma $\mathsf{ghost\_var\_update'}$ to update the value of both halves of the remaining ghost variable. If one half had value $x$ and the other had value $n$, then the update 1) tells us that $x = n$ and 2) changes the value of both halves to $n + 1$.
Once this operation is complete in the body of \texttt{incr}, we have reestablished the lock invariant: the value of \texttt{ctr} has been changed from $z$ to $z + 1$, and exactly one of $x$ and $y$ has been incremented to match. Because the frame depends on whether we passed in $g1$ or $g2$, we instantiate it before doing the case analysis on \textsf{left}; other than that, the proof is straightforward.

The proof of correctness of \texttt{read} is similar, but we do not need to do a view shift: instead, we use an ordinary \textsf{assert\_PROP} to extract the values of both $x$ and $y$, so that we can compute $z$. The only change we need to make to the proof for \texttt{thread\_func} is to pass the extra arguments to \texttt{incr}, telling it that we are the thread holding $g1$ and its starting value is 0. The remaining interesting change is in the proof of \texttt{main}, where we need to create the ghost variables that we use in the rest of the program. We do this using a \textsf{ghost\_alloc} tactic that takes the ghost assertion we want to allocate, minus its \textsf{gname}; the tactic allocates a new \textsf{gname} at which the assertion holds, which we can then introduce as usual with \textsf{Intro}. Once we allocate the two ghost variables with starting value 0, we can incorporate them into the lock invariants when we call \texttt{makelock}, and the rest of the proof proceeds as before. When we spawn the child thread, we pass it the \textsf{gsh2} share of ghost variable $g1$ along with the shares of the locks, as its precondition now requires. When we reclaim its share of the ghost variable and call \texttt{read}, we can now use our half-shares of both ghost variables with value 1 to conclude that the value of \texttt{t} is 2.

\ignore{\subsection{Generalizing to $N$ Threads}
\label{incrN}

The structure of the ghost state in the previous program limited the number of threads accessing the counter to two. We could pass the ghost variables between threads to enable more than two threads to call \texttt{incr}, but no more than two threads could hold ghost variables at a time, since there were only two ghost variables. In this section, we show how we can make the counter agnostic to the number of ghost variables, extending this limit to an arbitrary $N$.

The code in \texttt{incrN.c} makes a few additions to \texttt{incr.c}. The counter has initialization and destruction functions \texttt{init\_ctr} and \texttt{dest\_ctr}, making the counter more of an independent data structure. (We could now move \texttt{ctr}, \texttt{ctr\_lock}, \texttt{init\_ctr}, and \texttt{dest\_ctr} to a separate file from \texttt{thread\_func} and \texttt{main}.) The \texttt{main} function now spawns \texttt{N} threads, each with its own lock, each executing \texttt{thread\_func} to increment the counter by 1. Once \texttt{main} has joined with all the threads, it reads the final value of \texttt{ctr}, which we expect to be equal to \texttt{N}. Note that none of the counter functions take \texttt{N} as an argument: the number of threads will be a parameter to their specifications, but does not affect the computations they perform, so we could use the counter in multiple programs with different numbers of threads.

To adapt our specifications to $N$ threads, we first generalize the counter lock's invariant to take a list of ghost variables $\mathit{lg}$. The counter value $z$ will then be the sum of the values of all the ghost variables:
\begin{align*}&\mathsf{EX}\ z : \mathsf{Z}, \mathsf{data\_at}\ \mathsf{tuint}\ (\mathsf{Vint} (\mathsf{Int.repr}\ z))\ \texttt{ctr}\ * \\&\quad\mathsf{EX}\ \mathit{lv} : \mathrm{list}\ Z, !!(z = \mathrm{sum}\ \mathit{lv}) \ \&\&\ \circledast_{g \in \mathit{lg}, v \in \mathit{lv}} \mathsf{ghost\_var}\ \mathsf{gsh1}\ v\ g\end{align*}
(In Coq, we can write iterated separating conjunction over a list with $\mathsf{iter\_sepcon}$, or over two lists with $\mathsf{iter\_sepcon}$.) The specifications for the previously existing functions are modified accordingly, taking an index $i$ into the list of ghost variables to indicate which variable will be used to record the calling thread's operations (and \texttt{thread\_func} now takes as its argument the lock it should use to join). The specification for the new function \texttt{init\_ctr} takes the number $N$ of simultaneous threads to support; although $N$ does not appear in the body of the function, in the specification we need to know how many ghost variables to create.
\begin{verbatim}
 DECLARE _init_ctr
  WITH N : Z, gv: globals
  PRE [ ]
     PROP  (0 <= N)
     LOCAL (gvars gv)
     SEP   (data_at_ Ews tuint (gv _ctr); data_at_ Ews tlock (gv _ctr_lock))
  POST [ tvoid ]
    EX lg : list gname,
     PROP (Zlength lg = N)
     LOCAL ()
     SEP (lock_inv Ews (gv _ctr_lock) (cptr_lock_inv lg (gv _ctr));
          iter_sepcon (ghost_var gsh2 0) lg).
\end{verbatim}
After the call, \texttt{ctr} is protected by its lock with the invariant, and $N$ ghost variables have been initialized to 0 (as, by implication, has \texttt{ctr} itself). Destructing the counter does the same thing in reverse:
\begin{verbatim}
 DECLARE _dest_ctr
  WITH lg : list gname, lv : list Z, gv: globals
  PRE [ ]
     PROP  ()
     LOCAL (gvars gv)
     SEP   (lock_inv Ews (gv _ctr_lock) (cptr_lock_inv lg (gv _ctr));
            iter_sepcon2 (fun g v => ghost_var gsh2 v g) lg lv)
  POST [ tvoid ]
     PROP ()
     LOCAL ()
     SEP (data_at Ews tuint (vint (sum lv)) (gv _ctr);
          data_at_ Ews tlock (gv _ctr_lock)).
\end{verbatim}
The \texttt{dest\_ctr} function retrieves the free shares of all $N$ ghost variables ($N = \mathrm{length}\ \mathit{lg}$, so it does not need to be passed explicitly), and frees the lock, guaranteeing that the current value of \texttt{ctr} is the sum of the values of the ghost variables.

The proofs for \texttt{incr} and \texttt{thread\_func} are almost unchanged from the previous version. The proof of \texttt{init\_ctr} is similar to that of the beginning of \texttt{main}, allocating the ghost variables (we use the $\mathsf{ghosts\_alloc}$ tactic to make a list of $N$ ghost variables) and showing that the lock invariant holds in the initial state. In \texttt{dest\_ctr}, we acquire and free the lock and then use the same sort of ghost variable reasoning as in \texttt{read} to show that the list of values associated with the ghost variables inside the lock invariant is the same as the list of values passed in by the caller (and therefore the value of \texttt{ctr} is equal to the sum of that list). At the end of the function, we deallocate the ghost variables: because they are not connected to real memory, we can eliminate them at any time with a view shift.

The proof of correctness of the modified \texttt{main} is slightly more complicated than before, illustrating common patterns for reasoning about programs that spawn several threads performing the same operations. We begin by calling \texttt{init\_ctr} to make the counter lock and the ghost variables. Because each thread needs to know about the counter lock, we use the $\mathsf{split\_shares}$ lemma to divide $\mathsf{Ews}$ into $N + 1$ pieces, one for each spawned thread and one retained by the parent. In the first loop, we give each thread its resources: a share of the counter lock, a ghost variable, and half of a thread lock for joining. In doing so, we gradually use up the data in the \texttt{thread\_lock} array by converting it into $\mathsf{lock\_inv}$ assertions. We use $\mathsf{sublist}\ i\ N$ to describe the list of remaining shares/ghost variables at the $i$th iteration; by the end of the loop, $i = N$ and all shares and ghost variables have been given away. In the second loop, we reverse the process, joining with each thread and reclaiming shares and ghost variables---but since each thread we join with has completed its body, each ghost variable now has a value of 1 instead of 0. So when we call \texttt{dest\_ctr}, we know that the final value of \texttt{ctr} is the sum of a list of $N$ 1's, which simple arithmetic tells us is equal to $N$.}

\section{Defining Custom Ghost State}
\label{custom}
\subsection{The Structure of Ghost State}
The ghost variables of the previous section are a special case of a much more general \emph{ghost state} mechanism. With ghost variables, every thread that holds a share knows the exact value of the variable, but there are many other sharing patterns that may be used in concurrent programs. The key to defining a new pattern is to describe what happens when two elements are joined together, by creating an instance of the \textsf{Ghost} typeclass. Many useful instances can be found in \texttt{concurrency/ghosts.v}. An instance of the \textsf{Ghost} typeclass is a \emph{separation algebra} with a \textsf{join} relation, plus a $\mathsf{valid}$ predicate marking those elements of the algebra that can be used in assertions. For instance, ghost variables of type $A$ are drawn from a separation algebra over the type $\mathsf{option}\ (share * A)$, where valid elements have nonempty shares. An element $\mathsf{Some\ (\mathit{sh}, a)}$ represents a share $\mathit{sh}$ of value $a$, and \textsf{None} represents no ownership or knowledge of the variable. Two \textsf{Some} elements join by combining their shares, but only if they agree on the value; a \textsf{None} element joins with any other element and is the identity.

Every ghost state assertion is a wrapper around the predicate $\mathsf{own}\ g\ a\ \mathit{pp}$, where $g$ is a \textsf{gname}, $a$ is an element of a \textsf{Ghost} instance, and $\mathit{pp}$ is a separation logic predicate\footnote{More accurately, $\mathit{pp}$ is of type $\mathsf{preds}$, a dependent pair of a type signature (possibly including $\mathsf{mpred}$) and a value of that type. This construction is used to embed predicates inside ghost state (as well as function pointers, lock invariants, etc.), which in turn can be the subject of predicates, without circular reference issues.}. For instance, $\mathsf{ghost\_var}\ \mathit{sh}\ v\ g$ is defined as $\mathsf{own}\ g\ (\mathsf{Some}\ (\mathit{sh}, v))\ \mathsf{NoneP}$. (For most kinds of ghost state, $\mathit{pp}$ will be the empty predicate \textsf{NoneP}, but its inclusion also allows us to create \emph{higher-order ghost state}~\cite{hogs}.) The \textsf{own} predicate is governed by a few simple rules:
$$\inference[\textsf{own\_alloc}]{\mathsf{valid}\ a}{\mathsf{emp} \Rrightarrow \mathsf{EX}\ g : \mathsf{gname}, \mathsf{own}\ g\ a\ \mathit{pp}}$$
$$\inference[\textsf{own\_op}]{\mathsf{join}\ a1\ a2\ a3}{\mathsf{own}\ g\ a3\ \mathit{pp} = \mathsf{own}\ g\ a1\ \mathit{pp} * \mathsf{own}\ g\ a2\ \mathit{pp}}$$
$$\inference[\textsf{own\_valid\_2}]{}{\mathsf{own}\ g\ a1\ \mathit{pp} * \mathsf{own}\ g\ a2\ \mathit{pp} \Rrightarrow\ !!(\exists a3, \mathsf{join}\ a1\ a2\ a3 \land \mathsf{valid}\ a3)}$$
$$\inference[\textsf{own\_update}]{\mathsf{fp\_update}\ a\ b}{\mathsf{own}\ g\ a\ \mathit{pp} \Rrightarrow \mathsf{own}\ g\ b\ \mathit{pp}}$$
$$\inference[\textsf{own\_dealloc}]{}{\mathsf{own}\ g\ a\ \mathit{pp} \vdash \mathsf{emp}}$$
Of these rules, \textsf{own\_alloc} and \textsf{own\_dealloc} let us create and destroy ghost state, \textsf{own\_op} lets us split and combine it according to its \textsf{join} relation, \textsf{own\_valid\_2} tells us that any two pieces of ghost state that we hold at the same \textsf{gname} are consistent with each other, and \textsf{own\_update\_ND} lets us do \emph{frame-preserving updates} to our ghost state: we can change its value arbitrarily as long as this does not invalidate any other piece of the same ghost state that might be held by another thread. Formally, $\mathsf{fp\_update}\ a\ b \triangleq \forall c, (\exists d, \mathsf{join}\ a\ c\ d \land \mathsf{valid}\ d) \rightarrow (\exists d, \mathsf{join}\ b\ c\ d \land \mathsf{valid}\ d)$.

The frame-preserving updates allowed by the join relation of each kind of ghost state determines what the ghost state can be used for. For instance, two pieces of a ghost variable only join if they have the same value; thus we can only change the value of a ghost variable when we have all its shares, because then we know that no other thread is restricting its value. Some ghost constructions allow smaller or older values to join with larger or newer ones, so that we can change a value without needing to update the records of all parties; others have extremely restrictive joins that ensure that a piece of ghost state belongs to only one thread at a time. Most concurrent programs can be verified with some combination of the types of ghost state defined in \texttt{ghosts.v}, but we are always free to define new \textsf{Ghost} instances for more complicated patterns of sharing and recording.

\subsection{Example: \texttt{incr} with Unbounded Threads}
\label{incr-gen}
We can put custom ghost state to use in further generalizing the \texttt{incr} example. The program \texttt{incrN.c} uses the same counter data structure, but creates more than two threads that access it simultaneously. We could give each one its own ghost variable, but we can write a simpler proof by recognizing that the value of the counter has nothing to do with which threads accessed it---each call to \texttt{incr} increments its value by 1, regardless of which thread calls \texttt{incr} or how many other threads have access to it. In other words, we should be able to track the counter's value with a single piece of ghost state that simply accumulates the number of calls to \texttt{incr}. In \texttt{verif\_incr\_gen.v}, we define custom ghost state to do exactly that, following the lead of Ley-Wild and Nanevski~\cite{subjective}.

We begin by declaring an instance of the \textsf{Ghost} typeclass. A \textsf{Ghost} instance has three fields: a carrier type $G$, a predicate $\mathsf{valid}$ on $G$, and a join relation $\mathsf{Join\_G}$. It also has three proof obligations: it must be a separation algebra and a permission algebra, and validity of an element must imply validity of its sub-elements according to $\mathsf{Join\_G}$. To be a permission algebra, the join predicate must be functional, associative, commutative, and non-decreasing; to be a separation algebra, it must support a function $\mathsf{core}\::\:G \rightarrow G$ that, for each element, gives a unit for that element. These are not fundamental requirements for ghost state in general, but VST expects them to hold of the heap, and so it is convenient to impose them on ghost state as well.

For our example, we want to count the number of \texttt{incr} calls in two places. First, each time a thread calls \texttt{incr}, it should record that it has made a call. Second, the counter's lock invariant should record the total number of calls made, since that should also be the value of the counter. If we omit the latter record, then our ghost state will count the number of calls made, but there will be nothing to connect this number to the value of \texttt{ctr}. This is a common pattern for ghost state, which we call the \emph{reference} pattern: each thread holds partial information describing its contribution to the shared state, and the shared resource holds a ``reference'' copy that records all of the contributions. We provide a function $\mathsf{ref\_PCM}$ that makes such a reference structure for any \textsf{Ghost} instance. An element of $\mathsf{ref\_PCM}$ is a pair of an optional contribution element $\mathsf{ghost\_part}\ \mathit{sh}\ a$ and an optional reference element $\mathsf{ghost\_reference}\ r$, where $a$ and $r$ are drawn from the underlying \textsf{Ghost} instance, and $\mathit{sh}$ is a nonempty share. To join two elements, we combine the shares and values of the contributions (if any), and require that the elements contain at most one reference between them (ensuring the uniqueness of the reference value). When a contribution element has the full share \textsf{Tsh}, it is guaranteed to be equal to the reference element, since this means we have collected all of the contributions. In general, we start by creating an initial contribution element and reference element, store the reference in the invariant of the shared data, and divide the contribution element into shares that we distribute to each thread. The contribution elements then record all the contributions of every thread, and when we rejoin them at the end of the program we learn exactly what all threads have done collectively and can deduce the state of the shared data. We will work this out in detail in the rest of the example; see \texttt{concurrency/ghosts.v} for the full list of lemmas about the $\mathsf{ghost\_part}$ and $\mathsf{ghost\_reference}$ assertions.

The underlying \textsf{Ghost} instance for the increment program is simply a $\mathsf{nat}$ recording the number of calls to \texttt{incr}. The join operation for the ghost is addition, and all numbers are valid. This $\mathsf{sum\_ghost}$ instance is then passed to $\mathsf{ref\_PCM}$ to make the reference ghost state we need. We also write some local definitions for the kinds of ghost state we expect to use: partial contributions ($\mathsf{ghost\_part}$), reference state ($\mathsf{ghost\_ref}$), and the combination of both ($\mathsf{ghost\_part\_ref}$). (Using these definitions, which specialize the parametric definitions from \texttt{ghosts.v} to the $\mathsf{sum\_ghost}$ instance, avoids relying on Coq to find the right \textsf{Ghost} instance for our ghost assertions.)

We are now ready to write the specifications for our functions. First, we note that every client of the counter should have both a share of the $\mathsf{lock\_inv}$ assertion for the counter lock, and a share of the $\mathsf{ghost\_part}$ assertion to record the number of increments this thread has observed. We can make the proofs simpler by bundling up both these resources in a $\mathsf{ctr\_handle}$ assertion. In more complex examples, clients may need many more resources to call data structure operations, and we can use this pattern to encapsulate details of the data structure's implementation. Now we can say that the \texttt{init\_ctr} function creates a handle with full ownership $\mathsf{Tsh}$ and starting value $0$, \texttt{dest\_ctr} deallocates a full handle and guarantees that its value is the current value of the \texttt{ctr} field, and \texttt{incr} increments the value in a handle by $1$, giving us a very neat summary of the operations on the counter data structure. We can also show that we can combine two $\mathsf{ctr\_handle}$s by adding together their shares and their increment counts.

%Now that we no longer have a list of ghost variables, the specifications for most functions are simpler. \texttt{init\_ctr} gives us a contribution ghost with full share and value 0, and \texttt{dest\_ctr} guarantees that the value of \texttt{ctr} is exactly the value of the total contributions of all threads. The \texttt{incr} and \texttt{thread\_func} functions no longer need indices; they simply take any arbitrary ghost part and increase its value by 1. Since all the parts will be summed to determine the value of the counter, this precisely reflects the fact that \texttt{incr} increases the counter by 1.

Proving the correctness of these specs involves correctly manipulating our new kind of ghost state. We allocate the ghost state in \texttt{init\_ctr}, as a combination of total information $(\mathsf{Tsh}, 0)$ and reference element $0$. This time, \textsf{ghost\_alloc} leaves us with a subgoal: we need to show that our initial element is valid. For a $\mathsf{ref\_PCM}$ instance, this means that the share of the thread contributions is nonempty (which $\mathsf{Tsh}$ is) and the contributions are \emph{completable} to the reference element---i.e., there exists some remaining contribution that could join with the existing contributions to make the reference element. When the share is total and the two elements are equal, this is easy to prove. Now, when we release the counter lock, we establish its invariant by separating the reference copy from the contributions and giving it to the lock.

Conversely, in \texttt{dest\_ctr}, we must relate the total contributions to the value of the counter. The calling thread passes in a total contribution element $\mathsf{ghost\_part}(\mathsf{Tsh}, v)$, and from the lock invariant we receive $\mathsf{ghost\_ref}(z)$, where $z$ is also the value of \texttt{ctr}. Given these two pieces, we can use the lemma $\mathsf{ref\_sub}$ (which is derived from the validity rule \textsf{own\_valid\_2} of general ghost state) to conclude that $z = v$, exactly as desired. %(Note how much simpler this proof is than that of the previous section, in which we needed to prove that each ghost variable in the list had the same value in the thread as it did in the lock invariant.)

In \texttt{incr}, after incrementing the \texttt{ctr} field, we want to simultaneously add 1 to the caller's contribution and the reference ghost state. To do this, we need to show that this addition is a frame-preserving update. Fortunately, $\mathsf{ref\_PCM}$ comes with a lemma $\mathsf{ref\_add}$ for doing just this kind of update: we can add any piece of ghost state to both the contribution and the reference. In general, when we define a new kind of ghost state, we will prove lemmas describing its common forms of frame-preserving update; in the absence of these lemmas, we can use the generic \textsf{own\_update} rule and work with the definition of frame-preserving update directly.

The proof of \texttt{thread\_func} is very similar to the previous versions, but \texttt{main} is slightly more complicated than before, illustrating common patterns for reasoning about programs that spawn several threads performing the same operations. We begin by calling \texttt{init\_ctr} to make the counter lock and the ghost variables. Then, because each of the $N$ threads will need a share of the $\mathsf{ctr\_handle}$ and the \texttt{lock} field, we use the $\mathsf{split\_shares}$ lemma to divide both $\mathsf{Tsh}$ and $\mathsf{Ews}$ into $N + 1$ pieces, one for each spawned thread and one retained by the parent. In the first loop, we give each thread its resources: a share of the \texttt{lock} field, a share of the counter handle with initial value 0, and half of a thread lock for joining. We store each of the locks in the \texttt{thread\_lock} array, so that we can join with each of the spawned threads later. In the second loop, we reverse the process, joining with each thread and reclaiming its shares---but since each thread we join with has completed its body, each reclaimed counter handle now has a value of 1 instead of 0. Adding these together, we get full ownership of a $\mathsf{ctr\_handle}$ with value $N$, so it is easy to show that after we call \texttt{dest\_ctr} the value of the counter is $N$.

\section{Basic Rules of Concurrent Separation Logic}
\label{CSL}
\subsection{Lock Specifications}
\label{lock-specs}
These specifications can be found in \texttt{concurrency/lock\_specs.v}, except for the recursive \textsf{self} variants, which are in \texttt{atomics/verif\_lock.v}.
$$\{\mathsf{mem\_mgr}\ \mathit{gv}\}\ \texttt{makelock}()\ \{\ell.\ \mathsf{lock\_inv}\ \mathsf{Tsh}\ \ell\ (R\ \ell)\}$$
Note that the $R$ passed to \texttt{makelock} is a function from the lock handle to the invariant, so that the invariant can reference the lock's own name (as in $\mathsf{selflock}$).
$$\{!!\mathit{sh} \neq \mathsf{Share.bot} \land \mathsf{lock\_inv}\ \mathit{sh}\ \ell\ R\}\ \texttt{acquire}(\mathsf{ptr\_of}\ \ell)\ \{R * \mathsf{lock\_inv}\ \mathit{sh}\ \ell\ R\}$$
The specs for \texttt{release} and \texttt{freelock} have several variants, to allow for things like recursive locks that contain parts of their own $\mathsf{lock\_inv}$ assertions. Their top-level specs are written in a confusing but flexible style that implies both normal and recursive subspecifications:
$$\{!!\mathit{sh} \neq \mathsf{Share.bot} \land (\mathsf{exclusive}\ R) * \triangleright \mathsf{lock\_inv}\ \mathit{sh}\ \ell\ R * P * (\mathsf{lock\_inv}\ \mathit{sh}\ \ell\ R * P \wand Q * R)\}\ \texttt{release}(\mathsf{ptr\_of}\ \ell)\ \{Q\}$$
$$\{\mathsf{lock\_inv}\ \mathsf{Tsh}\ \ell\ R * P * ((\mathsf{lock\_inv}\ \mathsf{Tsh}\ \ell\ R * P * R \vdash \mathsf{FF}) \land \mathsf{emp})\}\ \texttt{freelock}(\mathsf{ptr\_of}\ \ell)\ \{P\}$$
In nonrecursive use, the ``unknown precondition'' $P$ is simply the invariant $R$, yielding the familiar specs with subspec names $\mathsf{release\_simple}$ and $\mathsf{freelock\_simple}$:
$$\{!!\mathit{sh} \neq \mathsf{Share.bot} \land (\mathsf{exclusive}\ R) * R * \mathsf{lock\_inv}\ \mathit{sh}\ \ell\ R\}\ \texttt{release}(\mathsf{ptr\_of}\ \ell)\ \{\mathsf{lock\_inv}\ \mathit{sh}\ \ell\ R\}$$
$$\{(\mathsf{exclusive}\ R) * R * \mathsf{lock\_inv}\ \mathsf{Tsh}\ \ell\ R\}\ \texttt{freelock}(\mathsf{ptr\_of}\ \ell)\ \{R\}$$
In recursive use, it is instead the nonrecursive part of the invariant, allowing us to give up the $\mathsf{lock\_inv}$ assertion into the invariant (subspec names $\mathsf{release\_self}$ and $\mathsf{freelock\_self}$):
$$\{R * \mathsf{lock\_inv}\ \mathit{sh}\ \ell\ (\mathsf{selflock}\ R\ \mathit{sh}\ \ell))\}\ \texttt{release}(\mathsf{ptr\_of}\ \ell)\ \{\mathsf{emp}\}$$
$$\{!!(\mathit{sh1} + \mathit{sh2} = \mathit{sh}) * \mathsf{self\_part}\ \mathit{sh2}\ \ell * \mathsf{lock\_inv}\ \mathit{sh1}\ \ell\ (\mathsf{selflock}\ R\ \mathit{sh2}\ \ell)\}\ \texttt{freelock}(\mathsf{ptr\_of}\ \ell)\ \{\mathsf{emp}\}$$
 
\subsection{Spawn}
This specification can be found in \texttt{concurrency/semax\_conc.v}.
$$\{P(y) * (f : x.\ \{P(x)\}\{\mathsf{emp}\})\}\ \texttt{spawn}(f, y)\ \{\}$$

\subsection{Ghost Operations}
These rules can be found in \texttt{msl/ghost\_seplog.v}.
$$\inference[\textsf{own\_alloc}]{\mathsf{valid}\ a}{\mathsf{emp} \Rrightarrow \mathsf{EX}\ g : \mathsf{gname}, \mathsf{own}\ g\ a\ \mathit{pp}}$$
$$\inference[\textsf{own\_op}]{\mathsf{join}\ a1\ a2\ a3}{\mathsf{own}\ g\ a3\ \mathit{pp} = \mathsf{own}\ g\ a1\ \mathit{pp} * \mathsf{own}\ g\ a2\ \mathit{pp}}$$
$$\inference[\textsf{own\_valid\_2}]{}{\mathsf{own}\ g\ a1\ \mathit{pp} * \mathsf{own}\ g\ a2\ \mathit{pp} \Rrightarrow !!(\exists a3, \mathsf{join}\ a1\ a2\ a3 \land \mathsf{valid}\ a3)}$$
$$\inference[\textsf{own\_update\_ND}]{\mathsf{fp\_update\_ND}\ a\ B}{\mathsf{own}\ g\ a\ \mathit{pp} \Rrightarrow \mathsf{EX}\ b, !!(B\ b)\ \&\&\ \mathsf{own}\ g\ b\ \mathit{pp}}$$
$$\inference[\textsf{own\_update}]{\mathsf{fp\_update}\ a\ b}{\mathsf{own}\ g\ a\ \mathit{pp} \Rrightarrow \mathsf{own}\ g\ b\ \mathit{pp}}$$
$$\inference[\textsf{own\_dealloc}]{}{\mathsf{own}\ g\ a\ \mathit{pp} \Rrightarrow \mathsf{emp}}$$

\ignore{\section{Implementation of Ghost State and Advanced Concurrency Reasoning}
The implementation of ghost state and related constructs in VST recapitulates the corresponding constructions from Iris, with some slight modifications. Some features are reimplemented on top of VST's separation logic (Verifiable C), some are imported directly from Iris, and some are definitions from Iris added as axioms. In this section, we will give an overview of all the advanced concurrency features of VST and their implementation.

\subsection{Ghost State in the Model}
Verifiable C assertions are predicates on \textsf{rmap}s, step-indexed maps from memory locations to annotated values/predicates. Ghost state is implemented as another component of the \textsf{rmap}: every heap is a step-indexed pair of a resource map and a collection of named pieces of ghost state. Each piece is a dependent tuple of a ghost algebra (an instance of the \textsf{Ghost} typeclass), an element of that algebra, a proof that that element is $\mathsf{valid}$, and a predicate (which allows for simple forms of higher-order ghost state). Every Verifiable C predicate is evaluated on the combination of resource map and ghost state; common assertions like \textsf{data\_at} refer to empty pieces of ghost state, while $\mathsf{own}$ assertions refer to empty resource maps.

This approach is notably different from that of Iris, in which the subject of separation logic predicates is a single piece of ghost state: even points-to assertions are ghost state that is separately linked to assertions about a monolithic physical state, and all the different kinds of ghost state in the program are joined into a single dependent map which itself satisfies the properties of a ghost algebra. VST's approach allowed us to reuse the proofs about points-to assertions with minimal changes, but is not quite as flexible when it comes to higher-order ghost state. On the other hand, VST's approach has a surprising advantage: while in Iris the type of the top-level ghost map is a parameter to the type of predicates, and all reasoning is done in the context of a typeclass asserting that certain kinds of ghost state are present in the map, VST can add new ghost state of any kind at any time during a proof, by adding a new dependent tuple for the desired ghost algebra to the collection.

The \textsf{own} predicate and view shift operator are defined by recapitulating the constructions from Iris: $\mathsf{own}\ g\ a\ \mathit{pp}$ asserts that the ghost state of the \textsf{rmap} is a singleton collection with name $g$, element $a$, and predicate $\mathit{pp}$. Likewise, the view shift is derived from a basic update operator $\upd$ of type $\mathsf{mpred} \rightarrow \mathsf{mpred}$ (written \texttt{|==>} in Coq), where $\upd P$ asserts that $P$ is true on an \textsf{rmap} whose ghost state is a frame-preserving update away from the current state. $P \Rrightarrow Q$ is then syntactic sugar for $P \vdash \upd Q$. These definitions allow us to prove the rules of section 6.2, giving us a ``separation logic with ghost state'' in the style of Iris but independent of the Coq development of Iris.

It still remains to eliminate the ghost update operators: after all, we do not want to prove at the end of the increment program that $\upd\ (\texttt{x} = 2)$, but rather that $\texttt{x} = 2$. This is accomplished by augmenting the semantics of the ``juicy machine'', the operational semantics used by Verifiable C (later erased to the actual CompCert semantics of C). After every step of the juicy machine, the machine can make a frame-preserving update to its ghost state. This leads to an extended rule of consequence:

$$\inference[\textsf{view\_shift\_conseq}]{P \Rrightarrow P' & \{P'\}\ c\ \{Q'\} & Q' \Rrightarrow Q}{\{P\}\ c\ \{Q\}}$$

This rule underlies the \textsf{viewshift\_SEP} tactic, which allows us to prove that $P \Rrightarrow P'$ and then replace $P$ with $P'$ in our precondition.

The constructions of this section are sufficient for the examples described above, but they only capture a small piece of the functionality of modern concurrent separation logics. In the following sections, we describe how more features of Iris have been integrated into VST.}

\section{Global Invariants and Atomic Operations}
\subsection{Invariants and Fancy Updates}
\label{inv}
One of the most powerful applications of ghost state is in defining ``global invariants'', which are similar to lock invariants but are not associated with any particular memory location. Instead, a global invariant is true before and after every step of a program, acting as a publicly accessible resource. We have already seen how to use the $\mathsf{viewshift\_SEP}$ tactic to change the value of ghost state; we can also use it to interact with global invariants. When we call $\mathsf{viewshift\_SEP}$ with current assertion $P$ and target assertion $P'$, we get a goal of the form $P \vdash \pvs[\top] P'$, where $\pvs[\top]$ is a ``fancy update'' operator parameterized by a set of enabled invariants (the full set $\top$ by default). This allows us to create, open, and close invariants in order to prove $P'$, as long as we never open the same invariant twice and always end with all invariants closed. The primary rules for manipulating invariants are:

$$\inference[\textsf{inv\_alloc}]{}{\triangleright P \vdash \pvs[E] \mathsf{EX}\ i : \mathsf{iname}, \knowInv{i}{P}}$$
$$\inference[\textsf{inv\_dup}]{}{\knowInv{i}{P} = \knowInv{i}{P} * \knowInv{i}{P}}$$
$$\inference[\textsf{inv\_open}]{i \in E}{\knowInv{i}{P} \vdash \pvs[E][E \setminus i] \triangleright P * (\triangleright P \wand \pvs[E \setminus i][E] \mathsf{emp})}$$
where $\knowInv{i}{P}$ is written in Coq as \textsf{invariant i P}. We can put any resources we currently own into an invariant, which can then be freely duplicated and shared between threads---but can only be accessed through view shifts (e.g., between program steps). During a view shift, we can open any enabled invariant, but must close it again before taking any steps of execution (except for those that are explicitly marked as atomic; more on this in Section~\ref{phys-atomic}). A global invariant effectively turns its contents into a public resource, freely accessible by all threads as long as they maintain it in its current state at all times (except instantaneously during view shifts).

\subsection{Atomic Operations}
\label{phys-atomic}
A program instruction can use the contents of a global invariant if it can guarantee that no one will ever see an intermediate state in which the invariant does not hold. This means that \emph{atomic} operations can freely access invariants: for instance, if a global invariant holds full ownership of a memory location \texttt{x}, then an atomic operation can read or write the value of \texttt{x}. Theoretically, an atomic operation is any operation that is guaranteed by the language to not result in any visible intermediate states. In C (as of the C11 standard), there is a set of functions that explicitly perform atomic memory operations: \texttt{atomic\_load}, \texttt{atomic\_store}, etc. We can give separation logic rules for these operations, building on the idea that they are like the corresponding nonatomic operations but can also access invariants, and then use those rules to prove correctness of C programs that use them (i.e., lock-free concurrent programs).

The generic proof rule for atomic operations in Iris is:
$$\inference[atomic consequence]{P \vs[E][E \setminus E'] P' & \{P'\}\ e\ \{Q'\} & Q' \vs[E \setminus E'][E] Q & e \text{ atomic}}{\{P\}\ e\ \{Q\}}$$
In other words, we can open a set of invariants $E'$ (using the rule \textsf{inv\_open} from section~\ref{inv}), prove a triple for the atomic operation $e$ using the resources from the invariants, and then restore them. In C, we have a small fixed set of atomic operations, so instead of adding an atomic predicate to the definition of the language, we can just instantiate this rule for each atomic operation. For instance, we know that the rule for a load is $\{x \mapsto v\}\ \texttt{*x}\ \{v.\ x \mapsto v\}$. Plugging this into the Hoare triple in the consequence rule gives us:
$$\inference[atomic load]{P \vs[E][E \setminus E'] x \mapsto_{\mathrm{a}} v * R & x \mapsto_{\mathrm{a}} v * R \vs[E \setminus E'][E] Q}{\{P\}\ \texttt{atomic\_load}(x)\ \{v.\ Q\}}$$
where $\mapsto_{\mathrm{a}}$ is a special atomic points-to assertion (written as $\mathsf{atomic\_int\_at}$ or $\mathsf{atomic\_ptr\_at}$ in VST), and we use the frame $R$ to store everything that isn't the memory location being accessed. We can get around the need to supply the frame by using magic wand instead:
$$\inference[atomic load]{P \vs[E][E \setminus E'] x \mapsto_{\mathrm{a}} v * (x \mapsto_{\mathrm{a}} v \wand \pvs[E \setminus E'][E] Q)}{\{P\}\ y = \texttt{atomic\_load}(x)\ \{v.\ Q\}}$$
$$\inference[atomic store]{P \vs[E][E \setminus E'] x \mapsto_{\mathrm{a}} \_ * (x \mapsto_{\mathrm{a}} v \wand \pvs[E \setminus E'][E] Q)}{\{P\}\ \texttt{atomic\_store}(x, v)\ \{Q\}}$$
Now we can see that if an invariant $I$ contains a points-to assertion $\texttt{x} \mapsto_{\mathrm{a}} v$, we can access and modify the value of \texttt{x} using atomic operations (and \emph{only} atomic operations). The VST specifications corresponding to these rules can be found in \texttt{atomics/SC\_atomics\_base.v}.

Note that, as the name \texttt{SC\_atomics} suggests, these are rules for \emph{sequentially consistent} atomic operations: weaker memory orders have more complicated rules, since a thread is not guaranteed to see an objective ``current value'' held in the invariant. %A rough axiomatization of the rules of iGPS~\cite{igps} (a weak-memory logic build on top of Iris) can be found in \texttt{acq\_rel\_atomics} and \texttt{acq\_rel\_SW} (for single-writer protocols); interested parties are encouraged to refer to the cited paper.

\subsection{Cancelable Invariants}
The invariants of Section~\ref{inv} have a major disadvantage in a memory-managed language like C: they can never be deallocated. If we put a points-to assertion into an invariant, it must be in the invariant for the entire rest of the program, and there is no way to retrieve and deallocate it. However, there is a simple workaround: instead of putting a memory assertion $P$ into an invariant, we can instead put $P \lor G$, where $G$ is an exclusive ghost state token. After we allocate the invariant, we hold $G$, so whenever we access the invariant we can prove that it contains $P$. Once we are done with the invariant, we can put $G$ into it and take $P$ out, so that while $P \lor G$ is still true, it only holds ghost resources $G$ and we are free to deallocate the memory resources $P$. Of course, we want our invariant to be accessed by multiple threads simultaneously, so it should be possible to divide $G$ into shares and distribute it among threads; only once we have recollected all its pieces can we trade it for $P$ and stop using the invariant. This is the \emph{cancelable invariant} trick~\cite{rustbelt-relaxed} that makes invariants usable in malloc-free languages.

Cancelable invariants are defined in VST in \texttt{concurrency/cancelable\_invariants.v}. The ghost resource $G$ is defined as $\mathsf{cinv\_own}\ \mathit{g}\ \mathsf{Tsh}$, where $\mathsf{cinv\_own}$ is custom ghost state whose carrier type is a share and whose join operation is the ordinary share join; a cancelable invariant is defined as $\mathsf{cinvariant}\ i\ g\ P \triangleq \mathsf{invariant}\ i\ (P \vee \mathsf{cinv\_own}\ g\ \mathsf{Tsh})$. The relevant rules are:
$$\inference[\textsf{cinv\_alloc}]{}{\triangleright P \vdash \pvs[E] \mathsf{EX}\ i\ g, \mathsf{cinvariant}\ i\ g\ P * \mathsf{cinv\_own}\ g\ \mathsf{Tsh}}$$
$$\inference[\textsf{cinv\_open}]{\mathit{sh} \neq \mathsf{Share.bot} \land i \in E}{\mathsf{cinvariant}\ i\ g\ P * \mathsf{cinv\_own}\ g\ \mathit{sh} \vdash \pvs[E][E \setminus i] \triangleright P * \mathsf{cinv\_own}\ g\ \mathit{sh} * (\triangleright P \wand \pvs[E \setminus i][E] \mathsf{emp})}$$
$$\inference[\textsf{cinv\_cancel}]{i \in E}{\mathsf{cinvariant}\ i\ g\ P * \mathsf{cinv\_own}\ g\ \mathsf{Tsh} \vdash \pvs[E] \triangleright P}$$

The operations on cancelable invariants closely correspond to those of lock invariants: we can allocate them, split them into shares, pass them to different threads, recollect them, and deallocate them. The difference is that cancelable invariants can only be accessed instantaneously between program steps and by atomic operations, while lock invariants can be accessed in the space between \texttt{acquire} and \texttt{release} calls. In fact, we can use cancelable invariants to implement lock invariants, and this is exactly what we do in \texttt{atomics/verif\_lock.v}. For a lock at location $\ell$ that protects resources $R$, its cancelable invariant is \[\mathsf{inv\_for\_lock}\ \ell\ R \triangleq \exists b.\ \ell \mapsto_{\mathrm{a}} b * \text{if } b \text{ then } \mathsf{emp} \text{ else } R\]
When the lock is held (i.e., $\ell \mapsto_{\mathrm{a}} \mathsf{true}$) the thread that holds it also owns $R$; when the lock is not held ($\ell \mapsto_{\mathrm{a}} \mathsf{true}$) then $R$ must be in the invariant. In \texttt{verif\_lock.v}, we use $\mathsf{inv\_for\_lock}$ to prove that a simple atomic-operation-based spinlock implements the lock specifications of Section~\ref{lock-specs}. This is the standard lock implementation for concurrent VST programs.

\section{Iris in VST}
The ghost state, invariants, and view shifts described in this manual are heavily influenced by Iris, a language-independent separation logic framework. In fact, VST's logic is provably an instance of the Iris framework. Doing this instance proof lets us use many Iris definitions and tactics directly in VST, and makes it much easier to deal with complicated view shifts and derived concepts like logical atomicity. This section assumes basic familiarity with the features of Iris; if you have not used it before and are interested in writing proofs of correctness for complicated concurrent programs, we recommend working through Elizabeth Dietrich's beginner's guide~\cite{iris-guide} first. Also note that we only explain how to \emph{use} the Iris features in VST: for implementation details, please see the technical report on arXiv~\cite{iris-vst-arxiv}.

User beware: Iris and VST are both large and complicated Coq developments, and combining them can lead to notational oddities, universe inconsistencies, and other unpredictable issues. Please report any issues encountered by creating an issue on GitHub or emailing \url{mansky1@uic.edu}. In particular, Iris is built on top of std++, a variant standard library for Coq, and importing any of its files will cause significant changes to your proof environment (new definitions of standard list functions, unicode symbols for $\forall$ and $\exists$, $\lambda$-notation for anonymous functions, and many others). You do not need to write your own definitions in this style, and you may find it easier to read once you get used to it, but it is definitely a change from vanilla Coq!

\subsection{Iris Proof Mode}
One of the best features of Iris is Iris Proof Mode (also called MoSeL), a set of tactics for doing separation logic proofs in the same style as standard Coq proofs~\cite{ipm}. The tactics are described at \url{https://gitlab.mpi-sws.org/iris/iris/blob/master/docs/proof_mode.md}, and tutorials and examples are available on the Iris Project website (\url{https://iris-project.org}). Fortunately, the proof mode is defined in a very generic way, and can be used for any separation logic that is proved to be an instance of Iris's formulation of the logic of bunched implications (BI). The file \texttt{veric/bi.v} contains a proof that Verifiable C is such a logic, allowing us to use IPM/Mosel in VST proofs.

Iris has its own framework for language semantics and associated symbolic execution engine, but it is most naturally suited to functional languages and generally less automatic than VST's \textsf{forward}. On the other hand, the Iris tactics for reasoning about separation logic implications give a much higher degree of control than \textsf{cancel} and \textsf{entailer}, and are more fully featured than \textsf{sep\_apply}. IPM also has a smooth treatment of ``modalities'' such as the basic and fancy updates; while in VST we might need to explicitly apply lemmas about the monotonicity, transitivity, and frame properties of $\pvs$, in IPM we can often eliminate them automatically. Importing \texttt{VST.veric.bi} (or any of the other files mentioned in this section, all of which export \texttt{bi}) gives access to IPM for any separation logic entailment: simply use the \textsf{iIntros} tactic, and the goal will immediately be converted into an Iris proof state where all Iris tactics can be applied. There is also an \textsf{iVST} tactic for switching back to VST's style of entailment if needed.

\subsection{Invariants and Namespaces}
Invariants in Iris have slightly different proof rules from the ones in Section~\ref{inv}: in \textsf{inv\_alloc}, instead of obtaining the new invariant at some existentially quantified name $i$, the user chooses a \emph{namespace} of the form $\mathsf{nroot}\ .@\ \mathit{name}$ (where $\mathit{name}$ is a string) and obtains the invariant at that namespace. Namespaces can also be further qualified ($\mathsf{nroot}\ .@\ \mathit{name}_1\ .@\ \mathit{name}_2\ .@\ ...$). This makes it easier to open invariants at the same time: instead of tracking that two arbitrary $i$'s are different from each other, we can simply observe that two concrete strings are not equal. VST supports this mechanism as well, in \texttt{concurrency/invariants.v}, giving exactly the same interface to invariants as in Iris.

VST's invariants also satisfy the typeclasses necessary to use Iris's tactics for invariants. Invariants are \emph{persistent}, and can be introduced/destructed with the \texttt{\#} pattern to put them in the persistent context, so that they are automatically replicated as needed (at least during the IPM section of a proof). The $\mathsf{iInv}$ tactic can be used to open invariants: instead of explicitly invoking \textsf{inv\_open}, users can (and should) write tactics like $\mathsf{iInv}\ N\ \mathsf{as}\ \texttt{"H"}\ \texttt{"Hclose"}$ to open an invariant with name $N$ (either namespace or hypothesis name), call its contents \texttt{"H"}, and call the closing viewshift \texttt{"Hclose"}.

\subsection{Logical Atomicity}
In Iris, global invariants and fancy updates are used to implement the \emph{logically atomic triples} introduced in the TaDA logic~\cite{tada}. Logically atomic triples are a strong specification for concurrent data structures, expressing the idea that an operation appears to take effect instantaneously at a linearization point, and no intermediate states are visible. Logically atomic operations \emph{appear to execute atomically}, and so they can access the contents of invariants, exactly as if they were atomic operations in the sense of Section~\ref{phys-atomic}.

The general form of an atomic triple is $$\forall a.\ \langle P_l\ |\ P_p(a)\rangle\ c\ \langle Q_l\ |\ Q_p(a)\rangle$$
where $P_l$ and $Q_l$ are \emph{private} or \emph{local} pre- and postconditions, and $P_p$ and $Q_p$ are \emph{public} pre- and postconditions, parameterized by an abstract value $a$. The private pre- and postconditions work in the same way as in an ordinary Hoare triple, but the public ones are different: $P_p$ must be true at \emph{every point} from the beginning of the function until the linearization point, for some value of $a$ that is not known to the function and may change without notice, and $Q_p$ must become true at the linearization point (for the value of $a$ in force at that point), but may not still be true by the end of the function. More concisely, $P_p$ is true until the linearization point, at which point the function atomically transitions from $P_p$ to $Q_p$, and then continues to execute (without changing the data in the public pre/postcondition) until it returns.

In general, the public pre- and postcondition will hold the abstract state of a data structure, which may change as it is accessed by other threads; the private pre- and postcondition will hold the local information that a thread needs to access the structure. For instance, an atomic specification for a map insert function might look like $$\forall m.\ \langle \mathsf{is\_map}(p)\ |\ \mathsf{map\_state}(p, m)\rangle\ \texttt{insert}(p, x, v)\ \langle \mathsf{is\_map}(p)\ |\ \mathsf{map\_state}(p, m[x \mapsto v])\rangle$$
We are not guaranteed that the map state at the end of the insertion will be $m[x \mapsto v]$ where $m$ is the map state when \texttt{insert} is called: rather, we know that $p$ always holds \emph{some} map during the function's execution, and at some point the function will take that map $m$, add $x \mapsto v$, and then eventually return (and in the meantime other threads may have further modified the map). If we provide a similar specification for \texttt{lookup}, then we will have specified a linearizable concurrent map, whose behavior in an execution always matches that of some linear sequencing of the lookups and inserts performed during the execution.

In \texttt{atomics/general\_atomics.v}, we use Iris's definition of atomic updates to build atomic specifications in VST. We provide an atomic variant of the funspec notation:

\begin{verbatim}
ATOMIC TYPE W OBJ a INVS E
WITH ...
PRE [ ... ]
  PROP (...)
  LOCAL (...)
  SEP (P_l) | (P_p)
POST [ ... ]
  PROP ()
  LOCAL ()
  SEP (Q_l) | (Q_p)
\end{verbatim}
where \texttt{W} is the \textsf{TypeTree} representing the type of the WITH clause; \texttt{a} is the quantified abstract state for the triple; \texttt{E} is the mask (set of invariants) needed to implement the triple (often $\mathsf{empty}$), and the public and private pre- and postconditions are as explained above. Atomic triples will always need to be declared with \texttt{Program Definition}, and will have a number of obligations related to nonexpansiveness, but in the current version of VST these should almost always be solved automatically. The notation for atomic specs is slightly brittle, and reasonable-looking specs will sometimes fail to parse: if you encounter this, please create a GitHub issue or email \url{mansky1@uic.edu}.

When \textbf{proving} that a function implements an atomic specification, the precondition will contain an \textsf{atomic\_shift} assertion with the public pre- and postcondition inside it. This atomic shift is a wrapper around Iris's $\mathsf{atomic\_update}$ (\texttt{AU}), and can be opened with the $\mathsf{iMod}$ tactic. It acts similarly to an invariant, except that it always has a choice of \textbf{two} closing view shifts: an \emph{aborting} view shift that can be used when the public precondition is maintained, and restores the atomic shift; and a \emph{committing} view shift that can be used when the public postcondition is satisfied, and returns a black-box assertion $Q$ that is required in order to prove the function's postcondition. We can abort the atomic shift any number of times, treating the public precondition like an invariant, but in order to complete the proof of the function's atomic specification, we must always do a commit to obtain $Q$, after which we lose the atomic shift and can no longer access the public precondition.

We can \textbf{use} an atomic triple with the usual \textsf{forward\_call} tactic. The witness to \textsf{forward\_call} needs to have one additional element: the desired postcondition $Q$. As usual, \textsf{forward\_call} will generate an obligation to prove the precondition of the spec: in this case, the precondition includes both the private precondition $P_l$ and the atomic shift itself. Usually the client will have an invariant containing the abstract state of the data structure (i.e., the public precondition) as well as additional information (e.g., the history in the case of linearizability); proving the atomic shift will then involve proving that the contents of the invariant imply the public precondition, and that the public postcondition can be used to re-establish the invariant while also proving $Q$. Iris's $\mathsf{iAuIntro}$ tactic can be used to open atomic shift proof obligations, turning them into a conjunction of committing and aborting view shifts.

One of the simplest use cases for atomic specifications is locks. Instead of the usual lock-invariant-based specs for locks, atomic triples let us state much simpler specs:
\[\forall b.\ \langle \mathsf{lock\_state}(\ell, b)\rangle\ \texttt{acquire}(\ell)\ \langle b = \mathsf{false} \land \mathsf{lock\_state}(\ell, \mathsf{true})\rangle\]
\[\langle \mathsf{lock\_state}(\ell, \mathsf{true})\rangle\ \texttt{release}(\ell)\ \langle \mathsf{lock\_state}(\ell, \mathsf{false})\rangle\]
where $\mathsf{lock\_state}(\ell, \mathsf{true})$ means that $\ell$ is held by some thread, and $\mathsf{lock\_state}(\ell, \mathsf{false})$ means that it is not. An \texttt{acquire} takes a lock in an unknown state, and at some point when it is not held, atomically sets it to held; a \texttt{release} atomically releases a held lock. These specifications say nothing about who owns $\ell$ or what resources it protects, giving us more flexibility in our reasoning: we can put the lock assertion into a global invariant instead of passing out shares among threads, and we can create a lock in one place and attach an invariant to it in another. These specs also imply the ordinary lock specs. In \texttt{atomics/verif\_lock\_atomic.v}, we re-verify our lock implementation against these specs (with $\mathsf{lock\_state}(\ell, b)$ implemented by an atomic points-to $\ell \mapsto_{\mathrm{a}} b$), and provide a range of subspecifications for using the locks with invariants, atomically or nonatomically.

We use the atomic lock specifications to prove atomic specifications for the counter example as well. In \texttt{progs64/verif\_incr\_atomic.v}, we use atomic specifications to avoid the need to choose a kind of ghost state in advance when verifying the \texttt{incr} and \texttt{read} functions: instead, we prove triples that look like $$\langle \textsf{c.lock} \mapsto \ell\ |\ \textsf{ctr\_state}\ \mathit{gv}\ \ell\ n\ g\rangle\ \texttt{incr()}\ \langle \textsf{c.lock} \mapsto \ell\ |\ \textsf{ctr\_state}\ \mathit{gv}\ \ell\ (n + 1)\ g\rangle$$
that precisely capture the behavior of the function (\texttt{incr} atomically adds 1 to the value of the counter). When we acquire and release the counter's lock in \texttt{incr} and \texttt{read}, we use the atomic invariant-based subspecs $\mathsf{acquire\_inv}$ and $\mathsf{release\_inv}$, which allow us to find the lock invariant in the public state (i.e., the precondition of the $\mathsf{atomic\_shift}$) instead of requiring the calling thread to own it. In the client, we create an invariant $$\knowInv{}{\mathsf{EX}\ x\ y,\ \mathsf{ghost\_var}\ \mathsf{gsh1}\ x\ g_1 * \mathsf{ghost\_var}\ \mathsf{gsh1}\ y\ g_2 * \mathsf{ctr\_val}\ g\ (x + y)}$$
and use it to call the atomic specifications. For instance, when thread 1 performs an increment, it starts with the invariant and its own ghost variable $\mathsf{ghost\_var}\ \mathsf{gsh2}\ 0\ g_1$, and proves that the atomic spec provided by \texttt{incr} leads to a postcondition $\mathsf{ghost\_var}\ \mathsf{gsh2}\ 1\ g_1$ (while re-establishing the invariant). Different clients with different invariants and ghost state could call the same specs for \texttt{incr} and \texttt{read}, as long as their invariants included the \textsf{ctr\_val} assertion (the abstract state of the counter) used in the public pre- and postconditions. We believe that this is the strongest possible specification for the increment data structure (such as it is): rather than tying it to a specific kind of ghost state based on the expected use, we simply show that \texttt{incr} increases a value by 1 and \texttt{read} reads a value, atomically, and then allow clients to build protocols on top of this functionality as they see fit.

A larger and more compelling use case for atomic specifications is for \emph{fine-grained} concurrent data structures, which use multiple locks and/or atomic pointers for different sections of the data structure, allowing multiple threads to perform unrelated operations simultaneously. There is a full example of this approach in \texttt{atomics/verif\_hashtable\_atomic.v}, a verification of a simple lock-free hashtable implemented with SC atomic operations. We prove atomic triples of the form
\[\begin{array}{c}
\langle H.\ h \mapsto \vec{p}\ |\ k \neq 0 \land \mathsf{hashtable}\ H\ \vec{p}\ \vec{g}\rangle\ \texttt{set\_item}(h, k, v) \ \langle h \mapsto \vec{p}\ |\ \mathsf{hashtable}\ H[k \mapsto v]\ \vec{p}\ \vec{g}\rangle
\\
\\
\langle H.\ h \mapsto \vec{p}\ |\ k \neq 0 \land \mathsf{hashtable}\ H\ \vec{p}\ \vec{g}\rangle\ \texttt{get\_item}(h, k)\  \langle v.\ h \mapsto \vec{p}\ |\ H(k) = v \land \mathsf{hashtable}\ H\ \vec{p}\ \vec{g}\rangle
\end{array}\]
showing that the hashtable functions atomically perform the desired operations on the global state of the hashtable, and then attach an invariant asserting that the current state of the hashtable can be reconstructed from a ghost-state history of all operations performed, effectively proving \emph{linearizability} of the hashtable.

%\section{External State}
%Orthogonal to concurrency, we may want to verify programs that interact with the external world---programs that perform console I/O, for instance, or communicate over a network. One way to model this interaction is to have the program carry around a piece of \emph{external state}, state that the program cannot modify itself, but instead passes to external calls that modify it. For instance, the external state may be a trace of I/O operations performed, and each call to \texttt{read} or \texttt{write} might add a corresponding operation to the trace. We could extend the type of state in VST to include such external state, but we already have a mechanism for introducing state of arbitrary type that can only be modified in particular ways: ghost state. In this section, we describe how VST builds on ghost state to provide reasoning principles for external interaction.

%To build the external state, we use the reference pattern we introduced in Section~\ref{incr-gen}, in which we divide the state into a reference copy and a possibly partial local copy. Conceptually, the reference copy is held by the part of the external world that has permission to modify the ghost state (e.g., the console in the case of console I/O), and the program holds the local copy. Because the only nontrivial frame-preserving updates for the reference pattern require possession of the reference copy, this ensures that only the external world can modify this state. The program's knowledge of the external state is held in a $\mathsf{has\_ext}\ z$ assertion, which is just syntactic sugar for $\mathsf{own}(\mathsf{ghost\_part}\ \mathsf{Tsh}\ z)$.

%To describe the interaction between programs and the external world, we give specifications to the relevant external functions, and include the external state in those specifications. For instance, for console I/O, we might define the type of external state as interaction trees (see ?), and give the following specs for \texttt{putchar} and \texttt{getchar}:

%\begin{verbatim}
%Definition putchar_spec :=
%  WITH c : int, k : IO_itree
%  PRE [ 1%positive OF tint ]
%    PROP ()
%    LOCAL (temp 1%positive (Vint c))
%    SEP (has_ext (write c ;; k))
%  POST [ tint ]
%    PROP ()
%    LOCAL (temp ret_temp (Vint c))
%    SEP (has_ext k).
%
%Definition getchar_spec :=
%  WITH k : int -> IO_itree
%  PRE [ ]
%    PROP ()
%    LOCAL ()
%    SEP (has_ext (r <- read ;; k r))
%  POST [ tint ]
%   EX i : int,
%    PROP (- two_p 7 <= Int.signed i <= two_p 7 - 1)
%    LOCAL (temp ret_temp (Vint i))
%    SEP (has_ext (k i)).
%\end{verbatim}
%These specs are written in ``consuming style'', in which the program starts with a list/tree of the allowed I/O operations and uses up an element each time it performs an operation. (We may instead use ``producing style'', in which the program starts with an empty trace and adds to the trace each time it performs an operation.) $\texttt{putchar}(c)$ consumes a $\mathsf{write}\ c$ event and returns $c$; $\texttt{getchar}()$ consumes a $\mathsf{read}$ event and returns an integer in the character-value range, providing the same value to the continuation of the interaction tree.

%To connect these specs to the semantics of external calls, we must construct an \emph{external oracle}. We have seen external oracles in passing before, in lines like $\mathsf{Existing\ Instance\ NullExtension.Espec}$ before proofs of $\mathsf{semax\_prog}$. There are more interesting examples in \texttt{concurrency/semax\_conc.v}, for the lock functions, and \texttt{progs/io\_specs.v}, for \texttt{putchar} and \texttt{getchar}. The most important function of this oracle is to set the type of external state, which will then be used in the specification of \texttt{main} when we specify the program's initial external state.

%Once we have done this setup, we can give specs for functions using this kind of external state fairly straightforwardly. An example for console I/O can be found in \texttt{verif\_io.v}. The program \texttt{io.c} repeatedly prompts the user to enter a digit, then prints the sum of the digits entered so far. To specify the behavior of the whole program, we define an interaction tree representing the I/O behavior of a valid execution ($\mathsf{main\_itree}$), and then use $\mathsf{main\_pre\_ext}$ to indicate that \texttt{main} should start with external state allowing it to perform exactly that I/O:
%\begin{verbatim}
%Definition main_spec :=
% DECLARE _main
%  WITH gv : globals
%  PRE  [] main_pre_ext prog main_itree nil gv
%  POST [ tint ] main_post prog nil gv.
%\end{verbatim}
%The proofs use the same techniques as in programs without I/O; the external state is only used to satisfy the preconditions of calls to \texttt{getchar} and %\texttt{putchar}, and is only modified by those calls. The top-level correctness property for the program is $\mathsf{semax\_prog\_ext}$ instead of $\mathsf{semax\_prog}$, indicating that the program has only been proved correct for a particular starting value of the external state.

\bibliography{sources}

\end{document}
