\pagebreak
\section{Formal UPC Memory Consistency Semantics}
\label{mem-semantics}
\index{memory consistency}
\npf The memory consistency model in a language defines the order in 
which the results of write operations may be observed through read operations.  
The behavior of a UPC program may depend on the timing of accesses to shared
variables, so in general a program defines a set of possible executions,
rather than a single execution.  The memory consistency model
constrains the set of possible executions for a given program;
the user may then rely on properties that are true of all
of those executions.  

\np The memory consistency model is defined in terms of the read and 
write operations issued by each thread in a na\"{\i}ve translation
of the program, i.e., without any program transformations during
translation, where each thread issues operations as defined by 
the abstract machine defined in [ISO/IEC00 Sec. 5.1.2.3].  
[ISO/IEC00 Sec. 5.1.2.3] allows a UPC implementation to perform
various program transformations to improve performance, provided they are not visible 
to the programmer; specifically, provided those transformations do not affect 
the external behavior of the program.
UPC extends this constraint, requiring that the set of externally-visible behaviors
(the input/output dynamics and volatile behavior defined in [ISO/IEC00 Sec. 5.1.2.3])
from any execution of the transformed program be indistinguishable from those
of the original program executing on the abstract machine 
and adhering to the memory consistency model as defined in this appendix.

\np This appendix assumes some familiarity with memory consistency
models, partial orders, and basic set theory.

\subsection{Definitions}
\npf A UPC program execution is specified by a program text and a
number of threads, $T$.  An \emph{execution} is a set of operations $O$,
each operation being an instance of some instruction in the program 
text.  The set of operations issued by a thread $t$ is denoted 
$O_t$.  The program executes memory operations on a set of 
variables (or locations) $L$.  The set $V$ is the set of 
possible values that can be stored in the program variables.
%\footnote{This is the point that we could add an atomicity 
%constraint on what types of values are the fundamental unit
%of a read or write, possibly using something like ISO C's sig\_atomic\_t.
%There are actually two separate issues here, namely atomicity
%and clobbering a.k.a. word tearing.}

\index{shared access}
\index{strict shared read}
\index{strict shared write}
\index{relaxed shared read}
\index{relaxed shared write}
\np A \emph{memory operation} in such an execution is given by a location $l \in
L$ to be written or read and a value $v \in V$, which is the value to
be written or the value returned by the read.  A memory operation $m$
in a UPC program has one of the following forms, as defined in Section~\ref{def-access}:
\begin{list}{ $\bullet$ }{\setlength{\itemsep}{0pt}}
\item a strict shared read, denoted $SR(l,v)$
\item a strict shared write, denoted $SW(l,v)$
\item a relaxed shared read, denoted $RR(l,v)$
\item a relaxed shared write, denoted $RW(l,v)$
\item a local read, denoted $LR(l,v)$
\item a local write, denoted $LW(l,v)$
\end{list}
%(Here shared vs local is determined by the sharing type qualification on the
%expression used to perform the access, and for shared accesses, 
%strict vs relaxed is determined as described in UPC Spec 6.4.2).

\np In addition, each memory operation $m$ is associated with exactly one 
of the $T$ threads, denoted $Thread(m)$, and the accessor $Location(m)$ 
is defined to return the location $l$ accessed by $m$.
%and a non-negative integer 
%$SourcePoint(m)$, which uniquely determines the point in the 
%program text from which the operation was issued.  (Note: we
%may not need this.)

\np Given a UPC program execution with $T$ threads, let $M \subseteq O$ be
the set of memory operations in the execution and $M_t$ be the
set of memory operations issued by a given thread $t$.  Each operation
in $M$ is one of the above six types, so the set $M$ is 
partitioned into the following six disjoint subsets:
\begin{list}{ $\bullet$ }{\setlength{\itemsep}{0pt}}
\item $SR(M)$ is the set of strict shared reads in $M$
\item $SW(M)$ is the set of strict shared writes in $M$
\item $RR(M)$ is the set of relaxed shared reads in $M$
\item $RW(M)$ is the set of relaxed shared writes in $M$
\item $LR(M)$ is the set of local reads in $M$
\item $LW(M)$ is the set of local writes in $M$
\end{list}

\np The set of all writes in $M$ is denoted as $W(M)$:
\begin{eqnarray*}
W(M)\ {def \atop =}\ SW(M)\ \cup\ RW(M)\ \cup\ LW(M)
\end{eqnarray*}
and the set of all strict accesses in $M$ is denoted as $Strict(M)$:
\begin{eqnarray*}
Strict(M)\ {def \atop =}\ SR(M)\ \cup\ SW(M)
\end{eqnarray*}

\subsection{Memory Access Model}
\label{MemoryAccessModel}
\index{StrictPairs}
\index{StrictOnThreads}
\index{AllStrict}
\npf Let $StrictPairs(M)$, $StrictOnThreads(M)$, and $AllStrict(M)$
be unordered pairs of memory operations defined as:
\[
StrictPairs(M) {def \atop =} \left\{\ (m_1, m_2)\ \left|\ 
\begin{array}{l} m_1 \neq m_2\ \land\ m_1 \in Strict(M)\ \land \\
                 m_2 \in Strict(M)\ \end{array} \right. \right\} 
\]
\[
StrictOnThreads(M) {def \atop =} \left\{\ (m_1, m_2)\ \left|\ 
\begin{array}{l} m_1 \neq m_2\ \land\ \\ Thread(m_1) = Thread(m_2)\ \land \\
                 (\ m_1 \in Strict(M)\ \lor\ m_2 \in Strict(M)\ ) \end{array} \right. \right\} 
\]
\[
AllStrict(M) {def \atop =} StrictPairs(M)\ \cup\ StrictOnThreads(M)
\]

\np Thus, $StrictPairs(M)$ is the set of all pairs of strict memory 
accesses, including those between threads, and $StrictOnThreads(M)$ is
the set of all pairs of memory accesses from the same thread in which at least 
one is strict.  $AllStrict(M)$ is their union, which intuitively is the set of
operation pairs for which all threads must agree upon a unique ordering
(i.e. all threads must agree on the directionality of each pair).
In general, the determination of that ordering will depend on the resolution of 
race conditions at runtime.  
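\np The three definitions above can be rendered as a small executable model. The following Python sketch is purely illustrative (it is not part of the normative specification), and its encoding of an operation as a tuple $(id, thread, kind, location)$ is an assumption of the sketch:

```python
# Illustrative model of StrictPairs, StrictOnThreads and AllStrict.
# An operation is modeled as (op_id, thread, kind, location), where
# kind is one of "SR", "SW", "RR", "RW", "LR", "LW".
from itertools import combinations

STRICT = {"SR", "SW"}

def thread(m):
    return m[1]

def is_strict(m):
    return m[2] in STRICT

def strict_pairs(M):
    """Unordered pairs in which both operations are strict."""
    return {frozenset((m1, m2)) for m1, m2 in combinations(M, 2)
            if is_strict(m1) and is_strict(m2)}

def strict_on_threads(M):
    """Unordered same-thread pairs in which at least one operation is strict."""
    return {frozenset((m1, m2)) for m1, m2 in combinations(M, 2)
            if thread(m1) == thread(m2) and (is_strict(m1) or is_strict(m2))}

def all_strict(M):
    return strict_pairs(M) | strict_on_threads(M)

# Thread 0 issues a strict write then a relaxed read; thread 1 a strict read.
M = [(0, 0, "SW", "x"), (1, 0, "RR", "x"), (2, 1, "SR", "x")]
print(len(strict_pairs(M)))       # the cross-thread SW/SR pair
print(len(strict_on_threads(M)))  # the same-thread SW/RR pair
```

Because unordered pairs are modeled as two-element frozensets built with \texttt{combinations}, the $m_1 \neq m_2$ condition is enforced by construction.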
%We later define an {\it ordering} of 
%$AllStrict(M)$ -- a set of ordered pairs that contains all 
%pairs in $AllStrict(M)$ but with an orientation for each pair.  

\index{Conflicting}
\index{DependOnThreads}
\np UPC programs must preserve the serial dependencies within each thread, 
defined in terms of the auxiliary set $Conflicting(M)$ and the set of ordered pairs $DependOnThreads(M)$:
\[
Conflicting(M) {def \atop =} \left\{\ (m_1, m_2)\ \left|\ 
\begin{array}{lc} Location(m_1) = Location(m_2)\ \land \\
             (\ m_1 \in W(M)\ \lor\ m_2 \in W(M)\ ) \end{array} \right. \right\} 
\]
\[
\begin{array}{l}
DependOnThreads(M)\ {def \atop =}
\left\{ \langle m_1, m_2\rangle  \left|\ \begin{array}{l} 
 m_1 \neq m_2\ \land\ \\ Thread(m_1) = Thread(m_2)\ \land \\
 Precedes(m_1, m_2)\ \land \\
 \left( \begin{array}{l} (m_1, m_2) \in Conflicting(M)\ \lor \\
                         (m_1,m_2) \in StrictOnThreads(M) \end{array}\right)\ \end{array} \right. \right\} 
\end{array}
\]
% An alternate formatting possibility:
%\[
%\begin{array}{l}
%DependOnThreads(M) {def \atop =} \vspace{1em} \\
%\left\{ \langle m_1, m_2\rangle \left|\ \begin{array}{l} 
%  m_1 \neq m_2\ \land\ Thread(m_1) = Thread(m_2)\ \land\ Precedes(m_1, m_2) \land \\
%  (\ (m_1, m_2) \in Conflicting(M)\ \lor\ (m_1,m_2) \in StrictOnThreads(M)\ ) \end{array} \right. \right\} 
%\end{array}
%\]

\np $DependOnThreads(M_t)$ establishes an ordering between operations issued 
by a given thread $t$ that involve a data dependence (i.e. those operations in $Conflicting(M_t)$)
-- this ordering is the one maintained by serial compilers and hardware.  
$DependOnThreads(M_t)$ additionally establishes an ordering between 
operations appearing in $StrictOnThreads(M_t)$.
In both cases, the ordering imposed is the one dictated by $Precedes(m_1, m_2)$,
a predicate which intuitively is an ordering relationship defined by serial program order.% 
\footnote{The formal definition of $Precedes$ is given in Section~\ref{MemModelPrecedes}.}
It is important to note that $DependOnThreads(M_t)$ intentionally avoids introducing
ordering constraints between non-conflicting, non-strict operations executed by a single thread 
(i.e. it does not impose ordering between a thread's relaxed/local operations to independent
memory locations, or between relaxed/local reads to any location). As demonstrated in Section~\ref{MemModelExamples},
this allows implementations to freely reorder any consecutive relaxed/local operations issued 
by a single thread, except for pairs of operations accessing the same location where 
at least one is a write; by design this is exactly the condition that is 
enforced by serial compilers and hardware
to maintain sequential data dependences -- requiring any stronger ordering property 
would complicate implementations and likely degrade the performance of relaxed/local accesses.
The reason this flexibility must be directly exposed in the model (unlike
other program transformation optimizations which are implicitly permitted by [ISO/IEC00 Sec. 5.1.2.3]) is because the results of
this reordering may be ``visible'' to other threads in the UPC program (as demonstrated in Section~\ref{MemModelExamples})
and therefore could impact the program's ``input/output dynamics''.
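\np The per-thread dependence sets can likewise be sketched as an executable model. The sketch below is illustrative only (not normative); it assumes the tuple encoding $(id, thread, kind, location)$ for operations and models $Precedes$ as issue order within a thread:

```python
# Illustrative model of Conflicting and DependOnThreads.
# Operations are (op_id, thread, kind, location); Precedes is modeled
# as program (issue) order, i.e. increasing op_id within a thread.
from itertools import permutations

WRITES = {"SW", "RW", "LW"}
STRICT = {"SR", "SW"}

def conflicting(m1, m2):
    # Same location and at least one of the pair is a write.
    return m1[3] == m2[3] and (m1[2] in WRITES or m2[2] in WRITES)

def depend_on_threads(M):
    """Ordered pairs <m1, m2> whose ordering every implementation must preserve."""
    deps = set()
    for m1, m2 in permutations(M, 2):
        same_thread = m1[1] == m2[1]
        precedes = m1[0] < m2[0]                      # program order
        strict_pair = m1[2] in STRICT or m2[2] in STRICT
        if same_thread and precedes and (conflicting(m1, m2) or strict_pair):
            deps.add((m1, m2))
    return deps

# Thread 0: relaxed write x, relaxed write y, relaxed read x.
M = [(0, 0, "RW", "x"), (1, 0, "RW", "y"), (2, 0, "RR", "x")]
deps = depend_on_threads(M)
# The RW(x)/RW(y) pair is non-conflicting and non-strict, so it is left
# unordered: an implementation may freely reorder those two writes.
print((M[0], M[1]) in deps)   # False
print((M[0], M[2]) in deps)   # True: write then read of x conflict
```

The unordered $RW(x)/RW(y)$ pair in the output corresponds directly to the reordering freedom for non-conflicting relaxed operations discussed above.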

\np A UPC program execution on $T$ threads with memory accesses $M$ is 
considered {\it UPC consistent} if there exists a partial order 
$<_{Strict}$ that provides an orientation for each pair in $AllStrict(M)$, and if for 
each thread $t$ there 
exists a total order $<_t$ on $O_t\ \cup\ W(M)\ \cup\ SR(M)$ 
(i.e. all operations issued by thread $t$, together with all writes and strict reads issued by any thread) such that:
\begin{enumerate}
\item $<_t$ defines a correct serial execution.  
In particular:
\begin{itemize}
\item Each read operation returns the value of the ``most recent'' 
preceding write to the same location, where ``most recent'' is defined by $<_t$.
If there is no prior write of the location in question, the read returns the
initial value of the referenced object as defined by [ISO/IEC00 Sec. 6.7.8/7.2.0.3].%
\footnote{i.e. the initial value of an object declared with an initializer
is the value given by the initializer. Objects with static storage duration 
lacking an initializer have an initial value of zero. 
Objects with automatic storage duration lacking an initializer
have an indeterminate (but fixed) initial value.
The initial value for a dynamically allocated object is described by the 
memory allocation function used to create the object.
}

\item The order of operations in $O_t$ is consistent with the 
ordering dependencies in $DependOnThreads(M_t)$. \label{localDepend}
\end{itemize}

\item $<_t$ is consistent with $<_{Strict}$.  In particular, this implies that all threads 
agree on a total order over the strict operations ($Strict(M)$), and the relative ordering 
of all pairs of operations issued by a single thread where at least one is strict ($StrictOnThreads(M)$).
\end{enumerate}

\np The $<_{t}$ orderings that satisfy the above constraints are said to be the 
{\it enabling orderings} for the execution.  An execution is UPC consistent if
each UPC thread has at least one such enabling ordering in this set.
Conformant UPC implementations shall only produce UPC consistent executions.
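\np Condition 1 above (that each $<_t$ defines a correct serial execution) can be checked mechanically for a candidate ordering. The following Python sketch is illustrative only (not normative); it takes the initial value of every location to be zero, as for objects with static storage duration:

```python
# Illustrative check that a candidate total order <_t is a correct serial
# execution: every read returns the value of the most recent preceding
# write to its location, or the initial value (assumed 0) if none exists.
def is_correct_serial_execution(order):
    """order: list of (kind, location, value) tuples, listed in <_t order."""
    memory = {}                          # location -> last written value
    for kind, loc, val in order:
        if kind in ("SR", "RR", "LR"):
            if memory.get(loc, 0) != val:
                return False             # read returned an impossible value
        else:                            # SW, RW, LW
            memory[loc] = val
    return True

# RW(x,1); RR(x,1); RW(x,2) is a correct serial execution:
print(is_correct_serial_execution([("RW", "x", 1),
                                   ("RR", "x", 1),
                                   ("RW", "x", 2)]))   # True
# ... but RR(x,2); RW(x,2) is not: the read precedes the only write of 2.
print(is_correct_serial_execution([("RR", "x", 2),
                                   ("RW", "x", 2)]))   # False
```

A full UPC-consistency checker would additionally have to search for an acyclic $<_{Strict}$ consistent with every $<_t$; this sketch covers only the per-ordering data-flow condition.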

\np The definitions of $DependOnThreads(M)$ and $<_t$ provide well-defined consistency 
semantics for local accesses to shared objects, making them behave similarly to relaxed shared accesses.
Note that private objects by definition may only be accessed by a single thread, 
and therefore local accesses to private objects trivially satisfy the constraints of the model --
provided the serial data dependencies across sequence points mandated by [ISO/IEC00 Sec. 5.1.2.3] are preserved
for the accesses to private objects on each thread.

\subsection{Consistency Semantics of Standard Libraries and Language Operations}

\subsubsection{Consistency Semantics of Synchronization Operations}
\index{memory consistency, locks}
\index{memory consistency, barriers}
\index{memory consistency, fence}

\npf UPC provides several synchronization operations 
in the language and standard library that can be 
used to strengthen the consistency requirements of a program.
Sections \ref{upc_lock} and \ref{upc_barrier} define the consistency
effects of these operations in terms of a ``null strict reference''.
The formal definition presented here is operationally equivalent to that
normative definition, but is more explicit and therefore included here for completeness.

%Under the alternative asymmetric ordering semantics, this formulation would 
%represent a slight relaxation to the current language semantics, (which state
%that {\it upc\_lock}, {\it upc\_unlock}, {\it upc\_notify} and {\it upc\_wait}
%all imply a full {\it upc\_fence}) because it permits more aggressive movement
%of memory operations past synchronization operations, as allowed by many
%architectural memory models, such as release consistency.

\np The memory consistency semantics of the 
synchronization operations are defined in terms of equivalent accesses
to a fresh variable $l_{synch} \in L$
that does not appear elsewhere in the program.%
\footnote{
Note: These definitions do not give the synchronization operations
their synchronizing effects -- they only define the memory 
model behavior.}

\begin{itemize}
\item A $upc\_fence$ statement implies a strict write followed
by a strict read: $SW(l_{synch}, 0)\ ;\ SR(l_{synch}, 0)$  

\item A $upc\_notify$ statement implies a strict write: $SW(l_{synch}, 0)$
immediately after evaluation of the optional argument (if any) and
before the notification operation has been posted.

\item A $upc\_wait$ statement implies a strict read: $SR(l_{synch}, 0)$ 
immediately after the completion of the statement.

\item A $upc\_lock()$ call or a successful $upc\_lock\-\_attempt()$ call 
implies a strict read: $SR(l_{synch}, 0)$ immediately before return.

\item A $upc\_unlock$ call implies a strict write: $SW(l_{synch}, 0)$
immediately upon entry to the function.

\end{itemize}

\np The actual data values involved in these implied strict accesses
are irrelevant.  The strict operations implied by the synchronization operations 
are present only to serve as a consistency point, introducing orderings in 
$<_{Strict}$ that restrict the relative motion in each $<_t$ of any
surrounding non-strict accesses to shared objects issued by the calling thread.

\subsubsection{Consistency Semantics of Standard Library Calls}

\npf Many of the functions in the UPC standard library can be used to 
access and modify data in shared objects, either non-collectively 
(e.g. $upc\_mem\-\{put,get,cpy\}$) or collectively (e.g. $upc\_all\_broadcast$).
This section defines the consistency semantics of the accesses 
to shared objects which are implied to take place within the 
implementation of these library functions, to provide well-defined
semantics in the presence of concurrent explicit reads and writes of the same shared objects.
For example, an application which calls a function such as $upc\_memcpy$ 
may need to know whether surrounding explicit relaxed operations on
non-conflicting shared objects could possibly be reordered relative to the
accesses that take place inside the library call.  This is a subtle but
unavoidable aspect of the library interface which needs to be explicitly
defined to ensure that applications can be written with portably deterministic
behavior across implementations. 

\np The following sections define the consistency semantics of shared accesses
implied by UPC standard library functions, in the absence of any explicit
consistency specification for the given function (which would always take
precedence in the case of conflict).

\paragraph{Non-Collective Standard Library Calls}
\index{memory consistency, non-collective library}

\npf  For {\it non-collective} functions in the UPC standard library (e.g. $upc\_mem\{put,get,cpy\}$),
any implied data accesses to shared objects behave as a set of relaxed shared
reads and relaxed shared writes of unspecified size and ordering, issued by the
calling thread. No strict operations or fences are implied by a non-collective
library function call, unless explicitly noted otherwise.

\np EXAMPLE 1:
\begin{verbatim}
        #include <upc_relaxed.h>

        shared int x, y;      // initial values are zero
        shared [] int z[2];   // initial values are zero
        int init_z[2] = { -3, -4 };
        ...
        if (MYTHREAD == 0) {
            x = 1;
 
            upc_memput(z, init_z, 2*sizeof(int)); 

            y = 2;
        } else {
            #pragma upc strict
            int local_y = y;  
            int local_z1 = z[1];
            int local_z0 = z[0];
            int local_x = x;
            ...
        }
\end{verbatim}

\noindent In this example, all of the writes to shared objects are relaxed 
(including the accesses implied by the library call), and thread 0 
executes no strict operations or fences which would inhibit reordering. 
Therefore, other
threads which are concurrently performing strict shared reads of the 
shared objects ($x, y, z[0]$ and $z[1]$) may observe the updates occurring
in any arbitrary order that need not correspond to thread 0's
program order. 
For example, thread 1 may observe a final result of 
$local\_y == 2$, $local\_z1 == -4$, $local\_z0 == 0$ and $local\_x == 0$,
or any other permutation of old and new values for the result of the strict shared reads.
Furthermore, because the shared writes implied by the library call have unspecified size,
thread 1 may even read intermediate values into $local\_z0$ and $local\_z1$ which 
correspond to neither the initial nor the final values for those shared objects.%
\footnote{This is a consequence of the byte-oriented nature of shared data movement
functions (which is assumed in the absence of further specification) and is
orthogonal to the issue of write atomicity.}
Finally, note that all of these observations remain true even if $z$ had instead been declared as:
\begin{verbatim}
        strict shared [] int z[2];
\end{verbatim}
because the consistency qualification used on the shared object declarator 
is irrelevant to the operation of the library call, whose implied shared 
accesses are specified to always behave as relaxed shared accesses.

\np If $upc\_fence$ operations were inserted in the blank lines immediately
preceding and following the $upc\_memput$ invocation in the example above, then
$<_{Strict}$ would imply that all reading threads would be guaranteed to observe
the shared writes according to thread 0's program order.  Specifically, any
thread reading a non-initial value into $local\_y$ would be guaranteed to read
the final values for all the other shared reads, and any thread reading the
initial zero value into $local\_x$ would be guaranteed to also have read the
initial zero values for all the other shared reads.%
\footnote{However, for threads reading the initial value into $local\_y$ and
the final value into $local\_x$, the writes to $z[0]$ and $z[1]$ could still
appear to have been arbitrarily reordered or segmented, leading to
indeterminate values in $local\_z0$ and $local\_z1$.}
Explicit use of $upc\_fence$ immediately preceding and following non-collective
library calls operating on shared objects is the recommended method for
ensuring ordering with respect to surrounding relaxed operations issued by the
calling thread, in cases where such ordering guarantees are required
for program correctness.

\paragraph{Collective Standard Library Calls}
\index{memory consistency, collective library}

\npf For {\it collective} functions in the UPC standard library, any implied data
accesses to shared objects behave as a set of relaxed shared reads and relaxed
shared writes of unspecified size and ordering, issued by one or more
unspecified threads (unless explicitly noted otherwise).

\np For {\it collective} functions in the UPC standard library
that take a $upc\_flag\_t$
argument (e.g. $upc\_all\_broadcast$), one or more $upc\_fence$ operations
may be implied upon entry to and/or exit from the library call, 
based on the flags selected in the value of the $upc\_flag\_t$ argument, as follows:

\begin{itemize}
\item
{\tt UPC\_IN\_ALLSYNC} and {\tt UPC\_IN\_MYSYNC} imply a $upc\_fence$ operation on 
each calling thread, immediately upon entry to the library function call.

\item
{\tt UPC\_OUT\_ALLSYNC} and {\tt UPC\_OUT\_MYSYNC} imply a $upc\_fence$ operation on 
each calling thread, immediately before return from the library function call.

\item 
No fence operations are implied by {\tt UPC\_IN\_NOSYNC} or {\tt UPC\_OUT\_NOSYNC}.
\end{itemize}


\np The $upc\_fence$ operations implied by the rules above are sufficient to 
ensure the results one would naturally expect in the presence of 
relaxed or local accesses to shared objects issued immediately 
preceding or following an {\tt ALLSYNC} or {\tt MYSYNC} collective
library call that accesses the same shared objects. Without such fences, 
nothing would prevent prior or subsequent non-strict operations 
issued by the calling thread from being reordered relative 
to some of the accesses implied by the library call (which 
might not be issued by the current thread), potentially leading to 
very surprising and unintuitive results. The {\tt NOSYNC} flag
provides no synchronization guarantees between the execution stream of
the calling thread and the shared accesses implied by the collective library call;
therefore, no additional fence operations are required.%
\footnote{Any deterministic program which makes use of {\tt NOSYNC} collective 
data movement functions is likely to be synchronizing access to shared objects 
via other means -- for example, through the use of explicit $upc\_barrier$ or
{\tt ALLSYNC}/{\tt MYSYNC} collective calls that already provide sufficient synchronization
and fences.}

\subsection{Properties Implied by the Specification}

\npf The memory model definition is rather subtle in some points, but
as described in Section 5.1.2.3, most programmers need not worry about these details.  There
are some simple properties that are helpful in understanding 
the semantics.% 
\footnote{Note that the properties described in this section and in Section 5.1.2.3 apply only to 
programs which are ``conforming'' as defined by [ISO/IEC00 Sec. 4] -- 
namely, those where no thread performs an operation which is labelled as 
having undefined behavior (e.g. dereferencing an uninitialized pointer).}
\index{sequential consistency}
The first property is:
\begin{itemize}
\item A UPC program which accesses shared objects using only strict operations%
\footnote{i.e. no relaxed shared accesses, and no accesses to shared objects via pointers-to-local}
will be sequentially consistent.
\end{itemize}

\np This property is trivially true due to the global total order that $<_{Strict}$
imposes over strict operations (which is respected in every thread's $<_t$), but it may
not be very useful in practice, because the exclusive use of strict operations
for accessing shared objects may incur a noticeable performance penalty.
Nevertheless, this property may still serve as a useful debugging mechanism, because even 
in the presence of data races a fully strict program is guaranteed
to only produce behaviors allowed under sequential consistency [Lam79],
which is generally considered the simplest parallel memory model to understand and the one which
na\"{\i}ve programmers typically assume.

\np Of more interest is that programs free of race conditions 
will also be sequentially consistent.  This requires a more
formal definition of race condition, because programmers
may believe their program is properly synchronized using
memory operations when it is not.  

\index{PotentialRaces}

\np $PotentialRaces(M)$ is defined as a set of unordered pairs $(m_1, m_2)$:
\[
PotentialRaces(M) {def \atop =} \left\{(m_1, m_2)\left|\ 
\begin{array}{l} Location(m_1) = Location(m_2)\ \land\\
                 Thread(m_1)\ \neq\ Thread(m_2)\ \land\\
                 (\ m_1 \in W(M)\ \lor\ m_2 \in W(M)\ )\end{array}\right.\right\}
\]

\np An execution is race-free if every $(m_1, m_2) \in PotentialRaces(M)$ is ordered by $<_{Strict}$;
i.e. an execution is race-free if and only if:
\[
\forall (m_1, m_2) \in PotentialRaces(M) : (\ m_1 <_{Strict} m_2\ )\ \lor\ (\ m_2 <_{Strict} m_1\ )
\]

\np Note that this implies that all threads $t$ and all enabling orderings $<_t$ agree upon the ordering of each $(m_1, m_2) \in PotentialRaces(M)$ (so there is no race).  
These definitions allow us to state a very useful property of UPC programs:

\begin{itemize}
\item A program that produces only race-free executions will be sequentially consistent.  
\end{itemize}
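\np The race-free condition can be stated as a short executable check. The sketch below is illustrative only (not normative); it assumes the tuple encoding $(id, thread, kind, location)$ for operations and represents the oriented pairs of $<_{Strict}$ as an explicit set:

```python
# Illustrative model of PotentialRaces and the race-free condition.
# Operations are (op_id, thread, kind, location); strict_order is the
# set of ordered pairs oriented by <_Strict.
from itertools import combinations

WRITES = {"SW", "RW", "LW"}

def potential_races(M):
    """Unordered cross-thread pairs on one location with at least one write."""
    return {frozenset((m1, m2)) for m1, m2 in combinations(M, 2)
            if m1[3] == m2[3]                          # same location
            and m1[1] != m2[1]                         # different threads
            and (m1[2] in WRITES or m2[2] in WRITES)}  # at least one write

def is_race_free(M, strict_order):
    """True iff every potential race is oriented by <_Strict."""
    for pair in potential_races(M):
        m1, m2 = tuple(pair)
        if (m1, m2) not in strict_order and (m2, m1) not in strict_order:
            return False
    return True

w0 = (0, 0, "RW", "x")   # thread 0: relaxed write of x
r1 = (1, 1, "RR", "x")   # thread 1: relaxed read of x
print(is_race_free([w0, r1], set()))        # False: the race is unordered
print(is_race_free([w0, r1], {(w0, r1)}))   # True: e.g. a lock orders the pair
```

The second call illustrates the point made below: synchronization primitives such as locks and barriers introduce orderings in $<_{Strict}$ that can orient every potential race.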

\np Note that UPC locks and barriers constrain $PotentialRaces$ as one would
expect, because these synchronization primitives imply 
strict operations which introduce orderings in $<_{Strict}$ for the operations in question.

%--------------------------------------------------------------------------
\subsection{Examples}
\label{MemModelExamples}
\index{memory consistency, examples}

\npf The subsequent examples demonstrate the semantics of the memory
model by presenting hypothetical execution traces and explaining how the memory
model either allows or disallows the behavior exhibited in each trace. The
examples labelled ``disallowed'' denote a trace which is not UPC consistent and
therefore represent a violation of the specified memory model. Such an
execution trace shall never be generated by a conforming UPC implementation.
The examples labelled ``allowed'' denote a trace which is UPC consistent and
therefore represent a permissible execution that satisfies the constraints of
the memory model. Such an execution trace \emph{may} be generated by a
conforming UPC implementation.%
\footnote{The memory model specifies guarantees
which must be true of any conformant UPC implementation and therefore may be
portably relied upon by users. A given UPC implementation may happen to provide
guarantees which are stronger than those required by the model, thus in general
the set of behaviors which can be generated by a conformant implementation will
be a subset of those behaviors permitted by the model.} 

\np In the figures below, each execution is shown by the linear
graph which is the $Precedes()$ program order for each thread, generated
by an execution of the source program on the abstract machine.  Pairs of 
memory operations that are ordered by the global ordering
over memory operations in $AllStrict(M)$ (i.e. $m_1 <_{Strict} m_2$) are represented
as $m_1 \Rightarrow m_2$.  All threads must agree
upon the relative ordering imposed by these edges in their $<_t$ orderings.  
Pairs ordered by a thread $t$ as in 
$m_1 <_t m_2$ are represented by $m_1 \rightarrow m_2$.\\
Arcs that are implied by transitivity are omitted.  Assume
all variables are initialized to 0.

\np EXAMPLE 1: \textbf{Allowed behavior} 
that would not be allowed under
sequential consistency.  There are only relaxed operations,
so threads need not observe the program order of
other threads.  Because all operations are relaxed,
there are no $\Rightarrow$ orderings between operations.


\begin{tabbing}WWWWW\=WWWWW\=WWWWW\=WWWWW\=\kill
$T0$: \> RR(x,1); \> RW(x,2)\\
$T1$: \> RR(x,2); \> RW(x,1)\\
\end{tabbing}

\bigskip
$<_0$:\hspace{0.25in}
\xymatrix{
RR(x,1) \ar[r] & RW(x,2)  \\
 & RW(x,1) \ar[ul]
}\hspace{.5in}
\parbox[t]{3in}
{$T0$ observes $T1$'s write happening before its own read.}

\bigskip
$<_1:$\hspace{0.25in}
\xymatrix{
 & RW(x,2) \ar[dl] \\
RR(x,2) \ar[r] & RW(x,1) 
}\hspace{.5in}
\parbox[t]{3in}
{$T1$ must observe its own program order for conflicting operations,
but it sees $T0$'s write as the first operation.}

\bigskip
Note that relaxed reads issued by thread $t$ only appear in the $<_t$ of that thread.

\bigskip
%--------------------------------------------------------------------------
\np EXAMPLE 2: \textbf{Disallowed behavior}
which is the same as 
the previous example, but with all accesses
made strict.  All edges in the graph below
must therefore be $\Rightarrow$ edges.
This also implies the program order edges must
be observed in $<_{Strict}$ and the two threads must agree on the
order of the races.  The use of unique values 
in the writes for this example forces an orientation
of the cross-thread edges, so an acyclic 
$<_{Strict}$ cannot be defined that satisfies the
write-to-read data flow requirements for a valid $<_t$.
\begin{tabbing}WWWWW\=WWWWW\=WWWWW\=WWWWW\=\kill
$T0$: \> SR(x,1); \> SW(x,2)\\
$T1$: \> SR(x,2); \> SW(x,1)\\
\end{tabbing}

\bigskip
$<_{Strict}$:\hspace{.2in}
\xymatrix{
SR(x,1) \ar@{=>}[r] & SW(x,2) \ar@{=>}[dl] \\
SR(x,2) \ar@{=>}[r] & SW(x,1) \ar@{=>}[ul]
}\hspace{.2in}
\parbox[t]{3.25in}
{All of the edges shown are required, but this \\
is not a valid $<_{Strict}$, since it contains a cycle.  }

\bigskip
%--------------------------------------------------------------------------
\np EXAMPLE 3: \textbf{Allowed behavior} 
that would be disallowed (as in the first
example) if all of the accesses were
strict.  Again one thread may observe the other's 
operations happening out of program order. 
This is the pattern of memory operations 
that one might see with a spin lock, where $y$ is 
the lock protecting the variable $x$.  The implication
is that UPC programmers should not build synchronization
out of relaxed operations.

\begin{tabbing}WWWWW\=WWWWW\=WWWWW\=WWWWW\=\kill
$T0$: \> RW(x,1); \> RW(y,1)\\
$T1$: \> RR(y,1); \> RR(x,0)\\
\end{tabbing}

\bigskip
$<_0$:\hspace{0.25in}
\xymatrix{
RW(x,1) \ar[r] & RW(y,1)  \\
}\hspace{.5in}
\parbox[t]{3in}
{$T0$ observes only its own writes. \\
 The writes are non-conflicting, so either ordering constitutes a valid $<_0$.}

\bigskip
$<_1:$\hspace{0.25in}
\xymatrix{
RW(x,1) & RW(y,1) \ar[dl] \\
RR(y,1) \ar[r] & RR(x,0) \ar[ul]
}\hspace{.5in}
\parbox[t]{3in}
{To satisfy write-to-read data flow in $<_1$, 
RW(x,1) must follow RR(x,0) and
RR(y,1) must follow RW(y,1).
There are three other valid $<_1$ orderings 
which satisfy these constraints.}

\bigskip
%--------------------------------------------------------------------------
\np  EXAMPLE 4: \textbf{Allowed behavior} 
that would be disallowed
under sequential consistency.  This example is similar
to the previous ones, but involves a read-after-write
on each thread.  Neither thread sees the update by
the other, but in the $<_t$ orderings, each thread
conceptually observes the other thread's operations happening out
of order.

\begin{tabbing}WWWWW\=WWWWW\=WWWWW\=WWWWW\=\kill
$T0$: \> RW(x,1); \> RR(y,0)\\
$T1$: \> RW(y,1); \> RR(x,0)\\
\end{tabbing}

\bigskip
$<_0$:\hspace{0.25in}
\xymatrix{
RW(x,1) \ar[r] & RR(y,0) \ar[dl]  \\
RW(y,1)
}\hspace{.25in}
\parbox[t]{3in}
{The only constraint on $<_0$ is RW(y,1) must follow RR(y,0).
 Several other valid $<_0$ orderings are possible.}

\bigskip
$<_1:$\hspace{0.25in}
\xymatrix{
RW(x,1) \\
RW(y,1) \ar[r] & RR(x,0) \ar[ul]
}\hspace{.25in}
\parbox[t]{3in}
{Analogous situation with a write-after-read, this time on x.
 Several other valid $<_1$ orderings are possible.}

\bigskip
%--------------------------------------------------------------------------
\np EXAMPLE 5: \textbf{Disallowed behavior} 
because with strict accesses,
one of the two writes must ``win'' the race condition.
Each thread observes the other thread's write happening
after its own write, which creates a cycle when one
attempts to construct $<_{Strict}$.
\begin{tabbing}WWWWW\=WWWWW\=WWWWW\=WWWWW\=\kill
$T0$: \> SW(x,2); \> SR(x,1)\\
$T1$: \> SW(x,1); \> SR(x,2)\\
\end{tabbing}

$<_{Strict}$:\hspace{.5in}
\xymatrix{
SW(x,2) \ar@{<=>}[d] \ar@{=>}[r] & SR(x,1) \\
SW(x,1) \ar@{=>}[r] & SR(x,2)
}\hspace{.5in}

\bigskip
%--------------------------------------------------------------------------
\np EXAMPLE 6: \textbf{Allowed behavior} 
where a thread observes its own reads occurring out-of-order.
Reordering of reads is commonplace in serial compilers/hardware, but
in this case an intervening modification by a different thread makes
this reordering visible.
Strengthening the model to prohibit such reordering of 
relaxed reads to the same location would impose serious restrictions on the implementation 
of relaxed reads that would likely degrade performance - 
for example, under such a model an optimizer could not reorder the reads in this example
(or allow them to proceed as concurrent non-blocking operations if they
might be reordered in the network) unless it could statically prove
the reads were to different locations or no other thread was writing the 
location.
\begin{tabbing}WWWWW\=WWWWW\=WWWWW\=WWWWW\=\kill
$T0$: \> RW(x,1); \> SW(y,1); \> RW(x,2)\\
$T1$: \> RR(x,2); \> RR(x,1)\\
\end{tabbing}

\bigskip
$<_{Strict}$:\hspace{0.1in}
\xymatrix{
RW(x,1) \ar@{=>}[r] & SW(y,1) \ar@{=>}[r] & RW(x,2)
}\hspace{0.25in}
\parbox[t]{2.5in}
{$DependOnThreads(M_0)$ implies this is the only valid $<_{Strict}$ ordering
over $StrictOnThreads(M)$}

\bigskip
$<_0$:\hspace{0.2in}
\xymatrix{
RW(x,1) \ar@{=>}[r] \ar@/^1pc/[r] & SW(y,1) \ar@{=>}[r] \ar@/^1pc/[r] & RW(x,2)
}\hspace{0.25in}
\parbox[t]{2.5in}
{$<_0$ conforms to $<_{Strict}$}

\bigskip
$<_1$:\hspace{0.2in}
\xymatrix{
RW(x,1) \ar@{=>}[r] \ar[dr] & SW(y,1) \ar@{=>}[r] \ar@/^1pc/[r] & RW(x,2) \ar[dll] \\
RR(x,2) & RR(x,1) \ar[u]
}\hspace{0.25in}
\parbox[t]{2.5in}
{$<_1$ conforms to $<_{Strict}$.
T1's operations on x do not conflict because they are both reads, 
and hence may appear relatively reordered in $<_1$. 
One other $<_1$ ordering is possible.}

% DOB: dirty hack to fix an ugly page break
\pagebreak
%\bigskip
%--------------------------------------------------------------------------
\np EXAMPLE 7: \textbf{Disallowed behavior} 
similar to the previous example, but in this case
the addition of a relaxed write on thread 1 introduces dependencies in 
$DependOnThreads(M_1)$, such that (all else being equal) the model requires T1's second read
to return the value 3. If T1's write were to any location other than x, 
the behavior shown would be allowed.
\begin{tabbing}WWWWW\=WWWWW\=WWWWW\=WWWWW\=\kill
$T0$: \> RW(x,1); \> SW(y,1); \> RW(x,2)\\
$T1$: \> RR(x,2); \> RW(x,3); \> RR(x,1)\\
\end{tabbing}

\bigskip
$<_{Strict}$:\hspace{0.1in}
\xymatrix{
RW(x,1) \ar@{=>}[r] & SW(y,1) \ar@{=>}[r] & RW(x,2)
}\hspace{0.1in}
\parbox[t]{2.5in}
{$DependOnThreads(M_0)$ implies this is the only valid $<_{Strict}$ ordering
over $StrictOnThreads(M)$}

\bigskip
$<_0$:\hspace{0.1in}
\xymatrix{
RW(x,1) \ar@{=>}[r] \ar@/^1pc/[r] & SW(y,1) \ar@{=>}[r] \ar@/^1pc/[r] & RW(x,2) \ar[dl] \\
 & RW(x,3)
}\hspace{0.25in}
\parbox[t]{2.5in}
{$<_0$ conforms to $<_{Strict}$. Other orderings are possible.}

\bigskip
$<_1$:\hspace{0.1in}
\xymatrix{
RW(x,1) \ar@{=>}[r] \ar@/^1pc/[r] & SW(y,1) \ar@{=>}[r] \ar@/^1pc/[r] & RW(x,2) \ar[dll] \\
RR(x,2) \ar[r] & RW(x,3) \ar[r] & RR(x,?) 
}\hspace{0.25in}
\parbox[t]{2.5in}
{This is the only $<_1$ that conforms to $<_{Strict}$ and $DependOnThreads(M_1)$.
The second read of x cannot return 1 - it must return 3.}

\bigskip
%--------------------------------------------------------------------------
\np EXAMPLE 8: \textbf{Disallowed behavior} 
demonstrating why strict reads appear in every $<_t$,
rather than just for the thread that issued them. If the strict reads were 
absent from $<_0$, this behavior would be allowed.

\begin{tabbing}WWWWW\=WWWWW\=WWWWW\=WWWWW\=\kill
$T0$: \> RW(x,1); \> RW(x,2)\\
$T1$: \> SR(x,2); \> SR(x,1)\\
\end{tabbing}
% DOB: dirty hack to fix an ugly page break
\pagebreak
%\bigskip
$<_{Strict}$:\hspace{0.25in}
\xymatrix{
\\
SR(x,2) \ar@{=>}[r] & SR(x,1) 
}\hspace{0.4in}
\parbox[t]{2.5in}
{$DependOnThreads(M_1)$ implies this is the only valid $<_{Strict}$ ordering
over $StrictOnThreads(M)$}

\bigskip
$<_0$:\hspace{0.5in}
\xymatrix{
RW(x,1) \ar[r] & RW(x,2) \ar[dl] \\
SR(x,2) \ar@{=>}[r] \ar@/^1pc/[r] & SR(x,?) 
}\hspace{0.4in}
\parbox[t]{2.5in}
{This is the only $<_0$ that conforms to $<_{Strict}$ and $DependOnThreads(M_0)$.
The second read of x cannot return 1 - it must return 2.}

\bigskip
%--------------------------------------------------------------------------
\np EXAMPLE 9: \textbf{Allowed behavior} 
similar to the previous example, but the writes are
no longer conflicting, and therefore not ordered by $DependOnThreads(M_0)$.

\begin{tabbing}WWWWW\=WWWWW\=WWWWW\=WWWWW\=\kill
$T0$: \> RW(x,1); \> RW(y,1)\\
$T1$: \> SR(y,1); \> SR(x,0)\\
\end{tabbing}

\bigskip
$<_{Strict}$:\hspace{0.25in}
\xymatrix{
\\
SR(y,1) \ar@{=>}[r] & SR(x,0) 
}\hspace{0.4in}
\parbox[t]{2.5in}
{$DependOnThreads(M_1)$ implies this is the only valid $<_{Strict}$ ordering
over $StrictOnThreads(M)$}

\bigskip
$<_0,<_1$:\hspace{0.25in}
\xymatrix{
RW(x,1) & RW(y,1) \ar[dl] \\
SR(y,1) \ar@{=>}[r] \ar@/^1pc/[r] & SR(x,0) \ar[ul]
}\hspace{0.25in}
\parbox[t]{2.5in}
{The writes are non-conflicting, therefore not ordered by $DependOnThreads(M_0)$.}

\bigskip
%--------------------------------------------------------------------------
\np EXAMPLE 10: \textbf{Allowed behavior} 
showing another case of a thread observing its own relaxed reads out of
order, regardless of the locations accessed.

\begin{tabbing}WWWWW\=WWWWW\=WWWWW\=WWWWW\=\kill
$T0$: \> RW(x,1); \> SW(y,1)\\
$T1$: \> RR(y,1); \> RR(x,1); \> RR(x,0)\\
\end{tabbing}
% DOB: dirty hack to fix an ugly page break
\pagebreak
%\bigskip
$<_{Strict}$:\hspace{0.25in}
\xymatrix{
RW(x,1) \ar@{=>}[r] & SW(y,1) 
}\hspace{0.4in}
\parbox[t]{2.5in}
{$DependOnThreads(M_0)$ implies this is the only valid $<_{Strict}$ ordering
over $StrictOnThreads(M)$}

\bigskip
$<_0$:\hspace{0.25in}
\xymatrix{
RW(x,1) \ar@{=>}[r] \ar@/^1pc/[r] & SW(y,1) 
}\hspace{0.6in}
\parbox[t]{2.5in}
{Relaxed reads from thread 1 do not appear in $<_0$}

\bigskip
$<_1$:\hspace{0.25in}
\xymatrix{
RW(x,1) \ar@{=>}[r] \ar@/^1pc/[r] & SW(y,1) \ar[dl] \\
RR(y,1) \ar[r] & RR(x,1) & RR(x,0) \ar[ull]
}\hspace{0.25in}
\parbox[t]{2in}
{Relaxed reads have been reordered. Other $<_1$ orders are possible.}

\bigskip
%--------------------------------------------------------------------------
\np EXAMPLE 11: \textbf{Disallowed behavior} 
demonstrating that a barrier synchronization 
orders relaxed operations as one would expect.

\begin{tabbing}WWWWW\=WWWWW\=WWWWWWW\=WWWWWW\=\kill
$T0$: \> RW(x,1); \> upc\_notify; \> upc\_wait\\
$T1$: \> \> upc\_notify; \> upc\_wait; \> RR(x,0)\\
\end{tabbing}

\bigskip
$<_{Strict}$:\\
\xymatrix{
RW(x,1) \ar@{=>}[r] & *\txt{$upc\_notify$\\$(=SW*)$} \ar@{=>}[r] \ar@{=>}[d] \ar@{=>}[dr] & *\txt{$upc\_wait$\\$(=SR*)$}\\
& *\txt{$upc\_notify$\\$(=SW*)$} \ar@{=>}[r] \ar@{=>}[ur] & *\txt{$upc\_wait$\\$(=SR*)$} \ar@{=>}[r] \ar@{=>}[u] & RR(x,0)
}\hspace{0.1in}
\parbox[t]{2.3in}
{$DependOnThreads(M)$ and the synchronization semantics of barrier imply
that $<_{Strict}$ must respect all the edges shown.\footnotemark}
\footnotetext{except for the 
edge between the $upc\_wait$ operations and the edge between the $upc\_notify$ operations,
both of which can point either way.}

\bigskip
There is no valid $<_1$ which respects $<_{Strict}$ -- write-to-read data flow
along the chain $RW(x,1) \Rightarrow upc\_notify \Rightarrow upc\_wait \Rightarrow RR(x,0)$
implies the read must return 1 (i.e. because $RW(x,1) <_{Strict} RR(x,0)$ and there are no 
intervening writes of x).
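
The write-to-read data flow rule invoked here (and in several of the preceding
examples) can be stated operationally: in any candidate ordering, a read must
return the value of the most recent preceding write to the same location. A
minimal sketch, assuming an illustrative list-of-tuples trace encoding that is
not part of the formal model:

```python
# Sketch of the write-to-read dataflow rule: given a candidate total order of
# operations as (kind, location, value) tuples, each read must return the
# value of the most recent preceding write to the same location (0 initially).

def dataflow_consistent(order):
    last = {}                      # location -> value of most recent write
    for kind, loc, val in order:
        if kind == "W":
            last[loc] = val
        elif kind == "R" and last.get(loc, 0) != val:
            return False           # read returned a stale/impossible value
    return True

# The chain RW(x,1) => upc_notify => upc_wait => RR(x,0) of Example 11 forces
# the write before the read, so a read of 0 is rejected:
assert not dataflow_consistent([("W", "x", 1), ("R", "x", 0)])
# The same read returning 1 is accepted:
assert dataflow_consistent([("W", "x", 1), ("R", "x", 1)])
```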

\bigskip
%--------------------------------------------------------------------------
\np EXAMPLE 12: \textbf{Disallowed behavior} 
because $<_{Strict}$ is an ordering over the pairs in $AllStrict(M)$, which 
includes an edge between the two $upc\_notify$ operations. Every $<_t$ must conform
to a single $<_{Strict}$ ordering -- all threads agree on a single total
order over $SR(M)\ \cup\ SW(M)$ in general, and in particular they all agree
on the order of the $upc\_notify$ operations. Therefore, at least one of the
read operations must return 1.

\begin{tabbing}WWWWW\=WWWWW\=WWWWWWW\=WWWWW\=WWWWWW\=WWWWW\=\kill
$T0$: \> RW(x,1); \> upc\_notify; \> RR(y,0); \> (upc\_wait not shown)\\
$T1$: \> RW(y,1); \> upc\_notify; \> RR(x,0); \> (upc\_wait not shown)\\
\end{tabbing}

\bigskip
$<_{Strict}$:\\
\xymatrix{
RW(x,1) \ar@{=>}[r] & *\txt{$upc\_notify$\\$(=SW*)$} \ar@{=>}[r] \ar@{=>}[d] & RR(y,0) \\
RW(y,1) \ar@{=>}[r] & *\txt{$upc\_notify$\\$(=SW*)$} \ar@{=>}[r] & RR(x,0) 
}\hspace{0.1in}
\parbox[t]{3in}
{$DependOnThreads(M_0)$ implies these edges in $StrictOnThreads(M)$ must be 
respected by $<_{Strict}$.\footnotemark}
\footnotetext{except the edge between the $upc\_notify$ operations, which can point either way.}

\bigskip
$<_0$:\hspace{0.25in}
\xymatrix{
RW(x,1) \ar@{=>}[r] \ar@/^1pc/[r] & *\txt{$upc\_notify$\\$(=SW*)$} \ar@{=>}[r] \ar@{=>}[d] \ar@/^1pc/[r] & RR(y,0) \ar[dll] \\
RW(y,1) \ar@{=>}[r] \ar@/^1pc/[r] & *\txt{$upc\_notify$\\$(=SW*)$}  
}\hspace{0.4in}
\parbox[t]{2.5in}
{}

\bigskip
$<_1$:\hspace{0.25in}
\xymatrix{
RW(x,1) \ar@{=>}[r] & *\txt{$upc\_notify$\\$(=SW*)$} \ar@{=>}[d]  \\
RW(y,1) \ar@{=>}[r] & *\txt{$upc\_notify$\\$(=SW*)$} \ar@{=>}[r] & RR(x,0) 
}\hspace{0.25in}
\parbox[t]{2in}
{ \vspace{0.5in} Read cannot return 0. }

\bigskip
There is no valid $<_1$ which respects $<_{Strict}$ -- write-to-read data flow
along the chain $RW(x,1) \Rightarrow upc\_notify \Rightarrow upc\_notify \Rightarrow RR(x,0)$
implies the read must return 1 (i.e. because $RW(x,1) <_{Strict} RR(x,0)$ and there are no
intervening writes of x). Reversing the edge between the $upc\_notify$ operations in $<_{Strict}$
causes an analogous problem for y in $<_0$.

%Note that under the alternate asymmetric semantics proposed in section~\ref{asymmetric},
%this behavior would be allowed (because one or both of the relaxed reads could be moved earlier 
%than the upc\_notify's).%
%\footnote{
%CW: The individual
%upc\_notify's in a single collective synchronization operation are totally
%ordered.  I think this is undesirable, as it enforces synchronization
%``too early".  Consider the following example:
%
%\begin{tabbing}WWWWW\=WWWWW\=WWWWWW\=WWWWW\=WWWWW\=WWWWW\=\kill
%$T0$: \> RW(x,1); \> upc\_notify; \> RW(x,2); RR(x,3) \\
%$T1$: \> RW(x,3); \> upc\_notify; \> RW(x,4); RR(x,1) 
%\end{tabbing}
%
%I think this should be allowed, since upc\_notify by itself doesn't imply
%any synchronization; there's no need for T0 to be aware of T1's write,
%and vice versa..  But if the upc\_notify's are ordered, one of the two
%reads will be disallowed. 
%(DOB: again, this is not a problem under the 
%alternate asymmetric semantics.)
%}
\subsection{Formal Definition of Precedes}
\label{MemModelPrecedes}
\index{Precedes}
\index{program order}
\npf This section outlines a formal definition for the $Precedes(m_1,m_2)$
partial order, a predicate which inspects two memory operations in the
execution trace that were issued by the same thread and returns true if and
only if $m_1$ is required to precede $m_2$, according to the sequential
abstract machine semantics of [ISO/IEC00 Sec. 5.1.2.3], applied to the given
thread. Intuitively, this partial order serves to constrain legal serial
program behavior based on the order of the statements a programmer wrote in the
source program. For most purposes, it is sufficient to rely upon an intuitive
understanding of sequential program order when interpreting the behavior of
$Precedes()$ in the memory model - this section provides a more concrete
definition which may be useful to compiler writers.

\np In general, the memory model affects the instructions which are issued (and
therefore, the illusory ``program order'', if we were endeavoring to construct a
total order on memory operations given only a static program). Fortunately,
providing a functional definition for $Precedes()$ does not require us to
embark on the problematic exercise of defining a totally-ordered ``program
order'' of legal executions based only on the static program. All that is
required is a way to determine after the fact (i.e., given an execution trace)
whether two memory operations that \emph{did} execute on a single thread were
generated by source-level operations that are required to have a given ordering
by the sequential abstract machine semantics. Finally, note that $Precedes()$
is a partial order and not a total order - two accesses from a given thread
which are not separated by a sequence point in the abstract machine semantics
will not be ordered by $Precedes()$ (and by extension, their relative order
will not be constrained by the memory model).

\np Given any memory access in the trace, it is assumed that we can decide
uniquely which source-level operation generated the access. One mechanism for
providing this mapping would be to attach an abstract ``source line number'' tag
to every memory access, indicating the source-level operation that generated it.%
\footnote{Compiler optimizations which coalesce accesses or remove them
entirely are orthogonal to this discussion - specifically, the correctness of
such optimizations is defined in terms of behavioral equivalence to the
unoptimized version. Therefore, as far as the memory model is concerned, every
operation in the execution trace is guaranteed to map to a unique operation at
the source level.} 

In practice, this abstract numbering needs to differ slightly from the actual
source line numbers, because the user may have broken a line in the middle of an
expression where the abstract machine guarantees no ordering - but we can
conceptually add or remove line breaks as necessary to make the line numbers
match up with abstract machine sequence points without changing the meaning of
the program (i.e., whitespace is not significant). Also, without loss of
generality we can assume the program consists of a single UPC source file,
and therefore the numbering within this file covers every access the program
could potentially execute.%
\footnote{Multi-file programs are easily accommodated by
stating that the source files are all concatenated into a single master
source file for the purposes of defining $Precedes$. }

\np Now, notice that given the numbering and mapping above, we could
immediately define an adequate $Precedes()$ relation if our program consisted
only of straight-line code (i.e., a single basic block in CFG terminology).
Specifically, in the absence of branches there is no ambiguity about how to
define $Precedes()$ - a simple integer less-than ($<$) comparison of the line
number tags is sufficient. 

Additionally, notice that a program containing only straight-line code and
forward branches is also easily handled by this approach (i.e., the CFG
of our program is a DAG). In this case, the basic blocks can be arranged such
that abstract machine execution always proceeds through line numbers in
monotonically non-decreasing order, so a simple integer less-than ($<$)
comparison of the line number tags attached to the dynamic operations is still
a sufficient definition for $Precedes$.  
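
Under these assumptions, $Precedes()$ reduces to a strict comparison of
line-number tags; equal tags (accesses on the same line, i.e., not separated by
a sequence point) are left unordered. A minimal sketch, where the integer tag
encoding is an assumption for illustration only:

```python
# Sketch: Precedes() for straight-line or forward-branch-only (DAG) programs
# reduces to strict integer comparison of abstract line-number tags.

def precedes(m1_tag, m2_tag):
    """True iff the access tagged m1_tag must precede the one tagged m2_tag.
    Equal tags mean the accesses are not separated by a sequence point, so
    neither precedes the other: Precedes is a partial order, not total."""
    return m1_tag < m2_tag

assert precedes(3, 4)          # line 3 precedes line 4
assert not precedes(4, 3)
assert not precedes(3, 3)      # same line: unordered by Precedes
```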

\np We also need to describe the behavior of programs with backward
branches. We handle them by defining a sequence of abstract rewriting
operations on the original program that generate a new, simplified
representation of the program with equivalent abstract machine semantics but
without any backward branches (thereby reducing to the case above).  The
rewriting steps on the original program are as follows:

\textbf{Step 1}. Translate all the high-level control-flow constructs in the
program into straight-line code with simple conditional or unconditional
branches. Lower all compound expressions into ``simple expressions'' with
equivalent semantics, introducing private temporary variables as necessary.
Each ``simple expression'' should involve at most one memory access to a
location in the original program. Order the simple expressions such that the
abstract machine semantics of the original program are preserved, placing line
breaks as required to respect sequence point boundaries. In cases where the
abstract machine semantics leave evaluation order unspecified, place the
relevant simple expressions on the same line.

At this point the rewritten program consists solely of memory operations,
arithmetic expressions, built-in operations (like $upc\_notify$), and
conditional or unconditional goto operations.  For example, this program:
\begin{verbatim}
1: i = 0;
2: while ( i < 10 ) {
3:   A[i] = i;
4:   i = i + 1;
5: }
6: A[10] = -1;
\end{verbatim}
Conceptually becomes:
\begin{verbatim}
1: i = 0;
2: if ( i >= 10 ) goto 6;
3: tmp_1 = i; A[i] = tmp_1;
4: tmp_2 = i; i = tmp_2 + 1;
5: goto 2;
6: A[10] = -1;
\end{verbatim}
The translation for the other control-flow statements is similarly
straightforward and well-documented in the literature of assembly code
generation techniques for C-like languages. All control flow (including
function call/return, setjmp/longjmp, etc) can be represented as
(un)conditional branches in this manner. Call this rewritten representation the
\textit{step-1 program}.

\textbf{Step 2}. Compute the maximum line number ($MLN$) of the step-1 program
($MLN=6$ in the example). Clone the step-1 program an infinite number of times
and concatenate the copies together, adjusting the line numbering for the 2nd
and subsequent copies appropriately (note, this is an abstract transformation,
so the infinite length of the result is not a practical issue). While cloning,
rewrite all the goto operations as follows: 

For a goto operation in copy $C$ of the step-1 program (zero-based numbering),
which is a copy of line number $N$ in the step-1 program and targeting
original line number $T$: 
\begin{verbatim}
if (T > N) set goto target = C*MLN + T  // step-1 forward branch 
else       set goto target = (C+1)*MLN + T // step-1 backward branch 
\end{verbatim}
In other words, step-1 forward branches branch to the same relative place in
the current copy of the step-1 program, and backward branches become forward
branches to the \emph{next} copy of the step-1 program.  So our example above
conceptually becomes:
\begin{verbatim}
1: i = 0;
2: if ( i >= 10 ) goto 6;
3: tmp_1 = i; A[i] = tmp_1;
4: tmp_2 = i; i = tmp_2 + 1;
5: goto 8;                     // rewritten backward goto 
6: A[10] = -1;

7:  i = 0;
8:  if ( i >= 10 ) goto 12;    // rewritten forward goto 
9:  tmp_1 = i; A[i] = tmp_1;
10: tmp_2 = i; i = tmp_2 + 1;
11: goto 14;                   // rewritten backward goto 
12: A[10] = -1;

13: i = 0;
...
\end{verbatim}
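
The goto-rewriting rule of Step 2 can be expressed as a small function; the
checks below reproduce the targets shown in the example above (the function
name and argument order are illustrative):

```python
# Sketch of the Step-2 goto rewriting: in copy C (zero-based) of the step-1
# program, a goto at original line N targeting original line T becomes a
# forward branch, either within the current copy or into the next copy.

def rewrite_goto_target(C, N, T, MLN):
    if T > N:                      # step-1 forward branch: stay in copy C
        return C * MLN + T
    else:                          # step-1 backward branch: jump to copy C+1
        return (C + 1) * MLN + T

MLN = 6  # maximum line number of the example step-1 program
assert rewrite_goto_target(0, 5, 2, MLN) == 8    # line 5's "goto 2" -> "goto 8"
assert rewrite_goto_target(0, 2, 6, MLN) == 6    # forward "goto 6" unchanged
assert rewrite_goto_target(1, 5, 2, MLN) == 14   # copy 1: line 11 -> "goto 14"
assert rewrite_goto_target(1, 2, 6, MLN) == 12   # copy 1: line 8 -> "goto 12"
```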

After this transformation, all branches are forward branches. Now, the memory
model describes behavior of the step-2 rewritten program, and $Precedes()$ is
defined as a simple integer less-than ($<$) comparison of the step-2 program's
line number tags attached to any two given memory accesses in the execution
trace.

