

%\minitoc

\section{Introduction}

In this chapter we continue our study of the fundamental properties of higher-order process calculi.
We concentrate on 
\emph{asynchrony} (and its relationship with \emph{synchrony}) and \emph{polyadic communication}.
These are two well-understood mechanisms in first-order calculi. 
Asynchronous communication is of practical relevance  
since, e.g., it is easier to establish and maintain than synchronous communication.
It is also of theoretical interest: 
numerous works have studied 
the \emph{asynchronous $\pi$-calculus} 
and the rather surprising effects 
that the absence of output prefix has on the behavioral theory 
and expressiveness of the calculus. 
In a well-known result, 
Palamidessi showed that the asynchronous $\pi$-calculus with separate choice is strictly 
less expressive than the synchronous $\pi$-calculus \citep{Palamidessi03}.
As for polyadic communication ---that is, the passing of tuples of values in a single communication---
it is 
among the most natural and convenient features for modeling purposes;
indeed, it is a stepping stone for the representation of data structures ---such as lists and records--- as processes.


In the $\pi$-calculus without choice, 
both synchronous and polyadic communication are supported by 
\emph{encodings} 
into more basic settings, namely synchronous into asynchronous communication \citep{Boudol92,HondaT91}, 
and polyadic into monadic communication \citep{Milner93}, respectively.
A salient commonality in both encodings is the fundamental r\^{o}le 
played by 
the \emph{communication of restricted names}. 
More precisely, 
both encodings exploit 
the ability that first-order processes have of \emph{establishing private links} between two or more processes
by generating and communicating restricted names.
Let us elaborate further on this point by
recalling the encoding of the polyadic $\pi$-calculus into the monadic one given in \citep{Milner93}: 
\begin{eqnarray*}
 \encpp{x(z_1, \ldots, z_n).P} & = & x(w).w(z_1).\cdots.w(z_n).\encpp{P} \\
 \encpp{\outC{x}\langle a_1, \ldots, a_n\rangle.P} & = & \nu w \, \outC{x}w. \outC{w}a_1.\cdots.\outC{w} a_n.\encpp{P} 
\end{eqnarray*}
(where $\encpp{\cdot}$ is a homomorphism for the other operators). A single $n$-adic synchronization is encoded as $n+1$ monadic synchronizations. The first synchronization establishes a \emph{private link} $w$: the encoding of output creates a private name $w$ and sends it to the encoding of input. 
%In other words, b
As a result of the synchronization on $x$, the scope of $w$ is extruded, and 
each of $a_1,\ldots,a_n$ 
can then be communicated through monadic synchronizations on $w$.
This encoding is very intuitive, and satisfies a tight operational correspondence property: 
a term of the polyadic calculus with 
\emph{one single public} synchronization (i.e., a synchronization on an unrestricted name such as $x$) 
is encoded into a term of the monadic calculus with 
\emph{exactly one public} synchronization on the same name, followed by a number of \emph{internal} synchronizations (i.e., synchronizations on a private name such as $w$). 
That is, not only is the observable behavior preserved, but 
a source term and its encoding in the target language 
perform the exact same number of visible actions.
The crucial advantage of establishing 
a private link on $w$ 
is that the encoding is \emph{robust with respect to interferences}: 
once the private link has been established between two parties, 
no surrounding ---possibly malicious--- context can get access to the monadic communications on $w$.
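To illustrate, consider the biadic case ($n = 2$). A single polyadic synchronization unfolds into one public synchronization on $x$ followed by two internal synchronizations on the private name $w$ (we write $\longrightarrow$ for a synchronization step):
\begin{eqnarray*}
 \encpp{\outC{x}\langle a_1, a_2 \rangle.P \parallel x(z_1, z_2).Q} & = & \nu w \, \outC{x}w. \outC{w}a_1.\outC{w}a_2.\encpp{P} \parallel x(w).w(z_1).w(z_2).\encpp{Q} \\
 & \longrightarrow & \nu w \, (\outC{w}a_1.\outC{w}a_2.\encpp{P} \parallel w(z_1).w(z_2).\encpp{Q}) \\
 & \longrightarrow & \nu w \, (\outC{w}a_2.\encpp{P} \parallel w(z_2).\encpp{Q}\sub{a_1}{z_1}) \\
 & \longrightarrow & \nu w \, (\encpp{P} \parallel \encpp{Q}\sub{a_1}{z_1}\sub{a_2}{z_2})
\end{eqnarray*}
Only the first step is visible to the environment; the two remaining steps occur on $w$, out of reach of any surrounding context.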





The establishment of private links is then seen to arise naturally from 
the interplay of restriction \emph{and} name-passing as available in the $\pi$-calculus.
In this chapter we aim at understanding whether 
the settled situation in the first-order setting carries over to the higher-order one.
More precisely, we study the extent to which 
private links can be established 
in the context of \hocore, a higher-order process calculus \emph{without name passing}.
This appears as a particularly intriguing problem.
In spite of its minimality, \hocore is very expressive: 
not only is it Turing complete, 
but several modelling idioms (disjoint choice, input-guarded replication, lists) 
are also expressible in it as derived constructs.
Hence, the answer to this question is far from obvious.

Here we shall consider 
two extensions of \hocore.
The first one---denoted \rhocore---extends \hocore with restriction 
and polyadic communication; 
the second extension---denoted \shocore---extends \rhocore 
with output prefixes, so as to represent \emph{synchronous} process passing. 
Since both calculi consider polyadic communication, 
\rhocore and \shocore 
actually represent
two \emph{families} 
of higher-order process calculi:
given $n \geq 0$, we use \hopis{n}{×} (resp. \ahopis{n}{×}) 
to denote 
the synchronous (resp. asynchronous) higher-order process calculus
with $n$-adic communication.


It is useful to comment on the consequences of considering restricted names in 
higher-order process calculi
\emph{without name-passing}. 
The most notable one 
is the \emph{partial effect} 
that scope extrusions have.
Let us explain what we mean by this.
In a process-passing setting, received processes can only be executed, forwarded, or discarded. 
Hence, an input context cannot gain access to the (private) names of the processes it receives; 
to the context, received processes are much like a ``black box''. 
Although higher-order communications might lead to scope extrusion of the private names \emph{contained} 
in the transmitted processes, 
such extrusions are vacuous: without name-passing, a receiving context can 
only use the names 
contained in a process 
in a restricted way, namely the way decreed by the sender of
the process.\footnote{In this discussion we consider process-passing that does \emph{not} include abstraction-passing, i.e., 
the communication of functions from processes to processes. As we shall see, the situation is rather different with abstraction-passing.}
The sharing of (private) names one obtains from using 
process-passing only is then incomplete: names can be \emph{sent} as part of processes 
but they cannot be freely used by a recipient.

With the above discussion in mind, we begin by investigating the relationship between synchrony and asynchrony in process-passing calculi.
Our first main result is an encoding of \shocore into \rhocore.
Intuitively, a synchronous output is encoded by an asynchronous output that 
communicates both the communication object and the continuation. 
This \emph{encodability} result is significant:  
it reveals that, even without name-passing, behaviors that in the first-order setting rely on 
name-passing and private links may still be 
expressible with process-passing only. 
In fact, the encoding bears witness to the 
expressive power intrinsic to (asynchronous) process-passing.
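As a first approximation (the precise definition of the encoding appears later in the chapter), the key idea can be sketched as follows. Writing $[\![ \cdot ]\!]$ for the translation and assuming $y \not\in \fv{R}$, a synchronous output is rendered by an asynchronous output that carries the continuation as an extra component:
\begin{eqnarray*}
 [\![ \, \outC{a}\langle Q \rangle.P \, ]\!] & = & \outC{a}\langle [\![ Q ]\!], [\![ P ]\!] \rangle \\
 \mbox{}[\![ \, \inp{a}{x}.R \, ]\!] & = & \inp{a}{x, y}.([\![ R ]\!] \parallel y)
\end{eqnarray*}
Upon synchronization, the receiver releases the sender's continuation by executing the variable $y$; a monadic synchronous communication is thus rendered by a biadic asynchronous one.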

Based on this positive result, we move to examine the situation for polyadic communication in process-passing calculi.
We  consider variants of \shocore with different arity in communications, and study their relative expressive power.
Interestingly, 
we determine that the absence of name-passing does cause an expressiveness loss
when considering polyadic communication.
Our second main contribution is a non-encodability result:
for every $n > 1$, 
\hopis{n}{×} cannot be encoded into \hopis{n-1}{×}.
This way we obtain a \emph{hierarchy} of higher-order process calculi of strictly increasing expressiveness.
Hence, polyadic communication is a striking point of contrast between first-order 
and higher-order process calculi \emph{without name-passing}.

The crux for obtaining 
the above hierarchy 
is a characterization of the 
\emph{stability conditions} of higher-order processes with respect to their sets of private names.
Intuitively, such conditions capture the following insight: 
without name-passing, the set of names that are private to a given process remains invariant along computations.
As such, 
two processes that 
interact respecting the stability conditions
and 
do not share a private name 
will never be able to establish a private link on it.
Focusing on the set of names private to a process is  crucial to characterize 
the private links it can establish. 
Central to the definition of the stability conditions 
is a refined account of internal actions enforced by the 
labeled transition system (LTS)
associated with \shocore.
In fact, the LTS distinguishes the internal actions that result from synchronizations on restricted names
from those that result from synchronizations on public names.
While the former are the only kind of internal actions, the latter are considered visible actions.

The separation result 
for polyadic communication 
depends on a
notion of encoding that 
is defined in accordance 
with the stability conditions and requires 
one visible action in the source language to be matched by at most one visible
action in the target language.
When compared to proposals for ``good'' encodings
in the literature, this requirement might appear rather demanding.
However, we claim a demanding notion of encoding is indispensable in our case, for at least two reasons.
First, 
such a notion allows us to concentrate 
on compositional encodings that are robust with respect to interferences.
As we have discussed, 
these two properties follow naturally 
from the ability of establishing private links in the first-order setting.
Arbitrary, potentially malicious interferences are thus a central issue.
The requirement on visible actions is intended to ensure that
a term and its encoding are exposed to the same \emph{interference points}. 
We argue that such a requirement is 
a reasonable way of 
including arbitrary sources of interference 
in the notion of encoding.
Second,
in the higher-order setting the encoding 
of synchronous communication into asynchronous communication can be seen as a \emph{particular case} of the encoding 
of polyadic communication into monadic communication. 
This way, for instance, \emph{monadic synchronous} communication 
corresponds to the class of \emph{biadic asynchronous} communication in which the second parameter 
(i.e., the continuation of output) is executed only once. 
This observation and the encodability result for synchronous communication into asynchronous communication
suggest that the gap between what can be encoded with process-passing and what cannot
is rather narrow. Therefore, a notion of encoding more discriminating 
than usual is necessary in our case to 
formalize separation results among calculi with different polyadicity.

In the final part of the chapter we consider 
the extension of \shocore with 
\emph{abstractions}. An abstraction is an
 expression of the form $(x)\, P$---it is a parameterized process.
An abstraction  has a {functional} type.  
Applying an  abstraction $( x) P$ of type $T \rightarrow \behtype $ (where $
\behtype$ is the type of all processes)  to an  argument $W$ of 
 type $T$  yields the  process $P \sub W x$. The argument $W$  can itself  be an
 abstraction; therefore the \emph{order} of an abstraction, that is, the level 
 of arrow nesting in its type, can be arbitrarily high. The order can
 also be $\omega$, if there are recursive types.
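For instance, the abstraction $(x)\,(x \parallel x)$ has the order-one type $\behtype \rightarrow \behtype$; applying it to a process argument duplicates that argument:
\[
(x)\,(x \parallel x) \;\; \text{applied to} \;\; \overline{b} \;\; \text{yields} \;\; (x \parallel x)\sub{\overline{b}}{x} \; = \; \overline{b} \parallel \overline{b}
\]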
We consider \bhopis{n}{×}, the extension of \hopis{n}{×} with abstractions of order one 
(i.e., functions from processes to processes).
Our last main result shows that abstraction passing provides \shocore with the ability of 
establishing private links.
Indeed, we show that \hopis{n}{×} can be encoded into \bhopis{1}{×}.
This can be used to demonstrate that 
there is no encoding of \bhopis{n}{×} into \hopis{n}{×}.
This result thus provides further evidence of 
the relationship between 
the ability of establishing private links
and absolute expressiveness.

\paragraph{Related Work.} 
While a number of works address the relationship between synchronous and 
asynchronous communication in first-order calculi (see, e.g., \citep{Palamidessi03,CacciagranoCP07,BeauxisPV08}), 
we are not aware of analogous studies for higher-order process calculi.
A similar situation occurs for the study of polyadic communication:
in the first-order setting the interest has been in characterizing 
fully-abstract translations of polyadic communication into monadic communication (see, e.g., \citep{QuagliaW05,Yoshida96}),
but the case of polyadicity in higher-order communication has not been addressed.

The most closely related work is that of 
\cite{San96int}.
There, the expressiveness of the $\pi$-calculus with respect to the higher-order $\pi$-calculus 
is studied by identifying hierarchies of fragments of 
first-order and higher-order calculi with increasing expressive power. 
The first-order hierarchy is based on fragments of 
the $\pi$-calculus in which mobility is {\em internal}, i.e., where outputs are 
only on private names ---no free outputs are allowed. 
This hierarchy is denoted $\pi \mathrm{I}^n$, where $n$ denotes the degree of
mobility allowed; 
this is formalized by means of \emph{dependency chains} in name creation. 
In this hierarchy, e.g., $\pi \mathrm{I}^1$ does not allow mobility and corresponds to the core of CCS,
while $\pi \mathrm{I}^n$ allows dependency chains of length at most $n$.
The hierarchy in the higher-order case follows a similar rationale, and is based on the
{\em strictly higher-order} $\pi$-calculus, i.e., a higher-order calculus without
name-passing features. Also in this hierarchy, the less expressive language (denoted $\mathrm{HO}\pi^1$) corresponds
to the core of CCS. Sangiorgi shows that $\pi \mathrm{I}^n$ and $\mathrm{HO}\pi^n$ have the same expressiveness,
by exhibiting fully-abstract encodings. 
In contrast to \citep{San96int}, the hierarchy of higher-order process calculi we consider here is not 
given by the degree of mobility allowed, but by the size of the tuples
that can be passed around in polyadic communications.

The distinction between internal and public synchronizations proposed here 
for our notion of encoding 
has been used and/or proposed in other contexts. 
In \citep{Lanese07}, 
labels of internal actions are annotated with the name on which the synchronization occurs,
so as to define \emph{located} semantics, which are then used to study
concurrent semantics for the $\pi$-calculus 
using standard labeled transition systems.
In the higher-order setting, 
\citep{Amadio93} 
obtains a finitely-branching bisimilarity for CHOCS
by means of a reduction to bisimulation for a variant of the $\pi$-calculus.
In such a variant, processes are only allowed to exchange names of \emph{activation channels} (i.e., the channels that trigger a copy of a process in the representation of higher-order communication with first-order communication).
The desired finitely-branching bisimilarity is obtained by relying on a
labeled transition system in which synchronizations on activation channels are distinguished.






\section{The Calculi}\label{s:calculus}
\subsection{A Higher-Order Process Calculus with Restriction and Polyadic Communication}
Here we define 
\rhocore, 
the extension of \hocore with a restriction operator and polyadic communication.
As such, it is asynchronous and does not feature name-passing. 

\begin{mydefi}%[Syntax of \rhocore Processes]
\label{d:procs}
The language of \rhocore processes is given by the following syntax:
\begin{eqnarray*}
P,Q,\ldots & ::= & a(\til x).P \midd \bar{a} \angp{\til Q}  \midd P_1 \parallel P_2 \midd \nu r\, P 
\midd x \midd  \mathbf{0} 
\end{eqnarray*}
where $x,y$ range over process variables, and $a, b, r,s$ denote  names.
\end{mydefi}

Assuming standard notation and properties for tuples of syntactic elements, 
polyadicity in process passing is interpreted as expected:
an output message  $\outC{a}\langle \til Q \rangle$ sends the tuple of processes $\til Q$ on name $a$;
an input-prefixed process $\inp a {\til{x}} . P$ can receive a tuple $\til Q$ on 
name (or channel) $a$ and continue as 
$P \sub{\til Q}{ \til x}$.
In both cases, $a$ is said to be the \emph{subject} of the action.
We sometimes write $| \til x |$ for the length of tuple $\til x$; 
the length of the tuples that are passed around determines the actual \emph{arity}
in polyadic communication.
In interactions, we assume inputs and outputs agree on their arity;
we shall rely on notions of \emph{types} and \emph{well-typed processes} as in \citep{San96int}.
Parallel composition allows processes to interact, and $\nu r \, P$ makes $r$ private (or restricted) to the process $P$.
Notions of bound and free names and variables ($\bn{\cdot}$, $\fn{\cdot}$, $\bv{\cdot}$, and $\fv{\cdot}$, resp.) are defined in the usual way: an input $\inp a {\til x}.P$ binds the free occurrences of variables in $\til x$ in $P$; 
similarly, $\nu r \, P$ binds the free occurrences of name $r$ in $P$.
We abbreviate $a({\til x}).P$ as $a.P$ 
when none of the variables in $\til x$ is in $\fv{P}$, 
and $\outC{ a} \langle \til \nil \rangle$ as $\overline{a}$.
%We sometimes omit the $\mathbf{0}$ in continuations. 
We use notation $\prod^k P$ to represent $k$ copies of process $P$ in parallel.
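For instance, in the biadic case, a communication may duplicate one component of the received tuple and discard the other: the process
\[
\outC{a}\angp{Q_1, Q_2} \parallel \inp{a}{x_1, x_2}.(x_1 \parallel x_1)
\]
synchronizes on $a$ and continues as $(x_1 \parallel x_1)\sub{Q_1}{x_1}\sub{Q_2}{x_2} = Q_1 \parallel Q_1$. Notice that $Q_2$ is simply discarded: in the absence of name-passing, the receiver can execute, forward, or discard the processes it receives, but cannot inspect them.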

\begin{mydefi}%[Structural Congruence]
\label{d:struct} 
The \emph{structural congruence} relation 
for \rhocore processes  
is the smallest congruence 
generated by the following laws: $P \parallel \mathbf{0} \equiv P$, ~~$P_1 \parallel P_2 \equiv P_2 \parallel P_1$,~~$P_1 \parallel (P_2 \parallel P_3) \equiv (P_1 \parallel P_2) \parallel P_3$, ~~$\nu a \, \nu b \, P \equiv \nu b \, \nu a \, P$, ~~$\nu a \, \nil \equiv \nil$,~~
$\nu a \, (P_1 \parallel P_2) \equiv \nu a \, P_1 \parallel P_2$ ---if $a \not \in \fn{P_2}$.
 \end{mydefi} 

The semantics of \rhocore is given in terms of the LTS in Figure \ref{f:lts1}.
There are three kinds of transitions: 
internal transitions $P \arro{~\tau~} P'$, 
input transitions $P \arro{a(\til x)} P'$, and output transitions $P \arro{(\nu \til y)\out{a}{\langle \til Q \rangle}} P'$ (with extrusion of the tuple of names $\til y$), which have the expected meaning. 
We use $\alpha$ to range over actions.
The subject of action $\alpha$, denoted as $sub(\alpha)$, is 
defined as $sub(a(\til x)) = a$, $sub(\out{a}{\langle \til Q \rangle}) = a$, and is
undefined otherwise.
Notions of bound and free names and variables extend to actions as expected.
We sometimes use \seaa to denote a sequence of actions $\alpha_1, \ldots, \alpha_n$.
Weak transitions are defined in the usual way. We write $\Ar{}$ for the reflexive, transitive closure of $\arro{~\tau~}$. Given an action $\alpha$, notation $\Ar{\alpha}$ stands for $\Ar{} \arro{\alpha} \Ar{}$.
Given a sequence $\seaa = \alpha_1, \ldots, \alpha_n$, 
we define $\Ar{\seaa}$ as $\Ar{\alpha_1} \cdots \Ar{\alpha_n}$.



\begin{figure}
\[\mathrm{\textsc{Inp}}~~~{\inp a {\til x}. P} \arr{\ia a {\til x}  }  {P } \qquad \qquad \mathrm{\textsc{Out}}~~~{\outC{a} \langle \til Q \rangle} \arr{\outC{a} \langle \til Q \rangle  }  {\nil}\]
\infrule{\textsc{Act1}~~}{P_1 \arr\alpha P_1' \andalso 
\bv \alpha \cap \fv{P_2} = \emptyset
}{
P_1 \parallel P_2 \arr\alpha P'_1 \parallel P_2 
} 
\infrule{\textsc{Tau1}}{P_1 \arro{(\nu \til y)\out{a}{\langle \til P \rangle}} P_1' \andalso 
P_2 \arr{a(\til x)} P'_2 \andalso \til y \cap \fn{P_2} = \emptyset
}{
P_1 \parallel P_2 \arr{~\tau~}  \nu \til y \,(P'_1 \parallel P'_2 \sub{\til P}{\til x})}
 
%\infrule{\textsc{IntRes}}{P \arr{a\tau} P' }{\nu a \, P \arr{\tau} \nu a \, P}

\infrule{\textsc{Res}}{P \arr{\alpha} P' \andalso r \not \in \n{\alpha}}{\nu r \, P \arr{\alpha} \nu r \, P'}
\infrule{\textsc{Open}}{P \arro{(\nu \til y)\out{a}{\langle \til P'' \rangle}} P' \andalso x\neq a, \, x \in \fn{\til P''}-\til y}{
\nu x \, P  \arro{(\nu x \til y)\out{a}{\langle \til P'' \rangle}}  P'}
\caption[The LTS of \rhocore]{The LTS of \rhocore. We have omitted 
rules \textsc{Act2} and \textsc{Tau2}, 
the symmetric counterparts of rules \textsc{Act1} and \textsc{Tau1}.}\label{f:lts1} 
\end{figure}


\begin{myconv}
In what follows, given $n > 0$, 
we write \ahopis{n}{×} for the higher-order process calculus obtained 
from the syntax in Definition \ref{d:procs}
by fixing the arity of polyadic communication to $n$.
\end{myconv}




%Note also that, in any case, any public synchronization has a weak transition in this sense.

The following definition is standard.

\begin{mydefi}[Strong and Weak Barbs]\label{d:barbs}
Given a process $P$ and a name $a$, we write
\begin{itemize}
\item $P \stbarb{a}$ ---a \emph{strong input barb}--- if $P$ can perform an input action with subject $a$;
\item $P \stbarb{\overline{a}}$ ---a \emph{strong output barb}--- if $P$ can perform an output action with subject $a$.
\end{itemize}
Given $\mu \in \{a, \overline{a}\}$, we define 
a \emph{weak} barb $P \webarb{\mu}$ if, for some $P'$,  $P \Ar{} P' \stbarb{\mu}$.
\end{mydefi}


\subsection{A Higher-Order Process Calculus with Synchronous Communication}
We now  introduce \shocore, the extension of \rhocore with 
%both restriction and 
synchronous communication. As such, processes of \shocore are defined in the same 
way as the processes of \rhocore (Definition \ref{d:procs}), except that output is a prefix:

\begin{mydefi}%[Syntax of \shocore Processes]
\label{d:procs-s}
The language of \shocore processes is given 
by the syntax in Definition \ref{d:procs}, except that the
output message $\outC{a} \langle \til Q \rangle$ is replaced with the output prefix
$\outC{a}\langle \til Q \rangle.P$.
\end{mydefi}

The intended meaning of the output prefix is as expected:
$\outC{a}\langle \til Q \rangle.P$ 
can send the tuple of processes $\til Q$ via name $a$ and then continue as $P$.
All notions on bound variables and names are defined as in \rhocore.

The LTS for \shocore is obtained from that for \rhocore in Figure \ref{f:lts1}
with two modifications. The first one concerns the shape of output actions: rule \textsc{Out} is replaced with 
\[
\mathrm{\textsc{SOut}}~~~{\outC{a} \langle \til Q \rangle.P } \arr{\outC{a} \langle \til Q \rangle  }  {P}
\] 
which formalizes synchronous output. 
The second modification enforces the distinction between 
\emph{internal} and \emph{public} synchronizations hinted at in the introduction.
This distinction is obtained in two steps. First, by
replacing rule \textsc{Tau1} 
with the following one:
\infrule{\textsc{PubTau1}}{P_1 \arro{(\nu \til y)\outC{a}{\langle \til P \rangle}} P_1' \andalso 
P_2 \arr{a(\til x)} P'_2   \andalso \til y \cap \fn{P_2} = \emptyset
}{
P_1 \parallel P_2 \arr{a\tau}  \nu \til y \,(P'_1 \parallel P'_2 \sub{\til P}{\til x})}
(And similarly for \textsc{Tau2}, which is replaced by \textsc{PubTau2}, the analogue of \textsc{PubTau1}.)  
The second step consists in extending the LTS with the following rule:
\infrule{\textsc{IntRes}}{P \arr{a\tau} P' }{\nu a \, P \arr{\tau} \nu a \, P'}

This way we are able 
%Rule \textsc{IntRes} is useful 
to distinguish between \emph{internal} and \emph{public} synchronizations.
The former are given by synchronizations on \emph{restricted} names; 
they are the only source of internal behavior and are denoted as $\arro{~\tau~}$.
The latter are given by synchronization on \emph{public} names: 
a synchronization on the public name $a$ leads to the visible action $\arro{a \tau}$. 
The distinction between internal and public synchronizations has no behavioral consequences; 
it merely provides a more refined view of internal behavior that 
we shall find useful for obtaining the results in Section \ref{s:sepresults}. As a result, 
we have four kinds of transitions: in addition to internal and public synchronizations,
we have input and output transitions as defined for \rhocore.
Accordingly, we extend the definition of the subject of an action to the case of public synchronizations,
and decree that $sub(a\tau) = a$.
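To illustrate the distinction, consider a single communication on $a$ (a sketch in the chapter's notation, with $P$, $Q$, $R$ arbitrary processes). The same synchronization yields a visible action when $a$ is public, and an internal action, via rule \textsc{IntRes}, when $a$ is restricted:

```latex
% Synchronization on the public name a: a visible action with subject a.
\[
\outC{a}\langle Q \rangle.P \parallel a(x).R \;\arro{a\tau}\; P \parallel R \sub{Q}{x}
\]
% Restricting a turns the same step into internal behavior (rule IntRes).
\[
\nu a \, (\outC{a}\langle Q \rangle.P \parallel a(x).R) \;\arro{\tau}\; \nu a \, (P \parallel R \sub{Q}{x})
\]
```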


% We have just defined a calculus with monadic communication. 
% It is indeed a \emph{strictly higher-order} calculus, in that only processes can be communicated; 
% it does not consider names  as communication objects.
% It does not consider \emph{abstractions}, i.e., parameterizable processes, either; we shall consider such an extension later on. 
% The first extension we shall consider for $\mbox{HO}\pi$ is with 
% \emph{polyadicity}, i.e., calculi in which tuples of processes are passed around. 
% For this 
% We then have:

By varying the arity in polyadic communication, 
Definition \ref{d:procs-s} actually gives a \emph{family} of higher-order process calculi. 
We have the following notational convention:


\begin{myconv}
In what follows, for each $n > 0$, we write 
\hopis{n}{-} for the higher-order process calculus obtained 
from the syntax given in Definition \ref{d:procs-s}
in which polyadic communication has arity $n$.
% assume 
% \shocore corresponds to 
% a family of strictly-higher order calculi denoted \hopis{i}{-}, where $i$ stands for the polyadicity in process passing. %, and $j$ stands for the level of abstraction passing. 
% This way, we have that 
% \begin{itemize}
% \item $\hopis{0}{-}$ allows synchronizations only %(no process nor abstraction passing) 
% and is the same as the core of CCS.
% \item For some $n \geq 1$, $\hopis{n}{-}$ represents 
% the higher-order calculus with process-passing of polyadicity $n$. % and no abstraction passing.
% %This way, $\hopis{1}{-}$ corresponds to $\mbox{HO}\pi$, the calculus with monadic process passing just defined.
% %\item For some $n \geq 1$, $\hopis{n}{1}$ is the higher-order calculus with arbitrary polyadicity for process-passing and with abstraction passing of one level (i.e., passing of functions from processes to processes).
%\end{itemize}
\end{myconv}

% \begin{myconv}
% In what follows we shall assume a family of strictly-higher order calculi denoted \hopis{i}{j}, where $i$ stands for the polyadicity in process passing, and $j$ stands for the level of abstraction passing. This way, we have that 
% \begin{itemize}
% \item $\hopis{0}{-}$ allows synchronizations only (no process nor abstraction passing) and is the same as the core of CCS.
% \item For some $n \geq 1$, $\hopis{n}{-}$ represents 
% the higher-order calculus with process-passing of polyadicity $n$ and no abstraction passing.
% This way, $\hopis{1}{-}$ is the calculus defined before, allowing monadic process passing. 
% \item For some $n \geq 1$, $\hopis{n}{1}$ is the higher-order calculus with arbitrary polyadicity for process-passing and with abstraction passing of one level (i.e., passing of functions from processes to processes).
% \end{itemize}
% \end{myconv}
 
% Our working behavioral equivalence shall be {\em weak higher-order bisimilarity}:
% 
% \begin{mydefi}
% \label{d:wb}
% A symmetric relation $\,\mathcal{R}\,$ on  higher-order processes is said to be a {\em weak (higher-order) bisimulation} if
% $P \,\mathcal{R}\, Q$ implies
% \begin{enumerate}
% \item whenever $P \arro{\tau} P'$ then, for some $Q'$, $Q \Ar{} Q'$ and $P' \,\mathcal{R}\, Q'$;
% \item whenever $P \arro{a\tau} P'$ then, for some $Q'$, $Q \Ar{a\tau} Q'$ and $P' \,\mathcal{R}\, Q'$;
% \item whenever $P \arro{a(x)} P'$ then, for some $Q'$, $Q \Ar{a(x)} Q'$ and $P' \,\mathcal{R}\, Q'$;
% \item whenever $P \arro{\out a \langle P'' \rangle } P'$, then for some $Q', Q''$, 
% $Q \Ar{\out a \langle Q'' \rangle } Q'$ with $P'' \,\mathcal{R}\, Q''$ and $P' \,\mathcal{R}\, Q'$.
% \end{enumerate}
% $P$ is \emph{weakly bisimilar} to $Q$, written $P \approx Q$, 
% if $P \mathcal{R} Q$ for some weak bisimulation $\mathcal{R}$.
% \end{mydefi}

\section{An Encodability Result for Synchronous Communication}\label{s:enc-result}

We begin by 
studying the relationship between synchronous and asynchronous communication.
The main result of this section is an encoding of 
\hopis{n}{-} into \ahopis{n}{×}.

%presenting encodings of synchronous communication into asynchronous one.
A naive encoding would simply consist in sending 
both the communication object and the continuation of the output action
in a single synchronization. 
The continuation is sent explicitly as a parameter, 
and so a synchronous calculus with polyadicity $n$ would have to be encoded into 
an asynchronous calculus with polyadicity $n+1$.
To illustrate this, consider 
the 
naive encoding 
of \hopis{1}{-} 
into \ahopis{2}{×}: 
\begin{eqnarray*}
\encpp{\outC{a}\langle P \rangle.S} & = &\outC{a}\langle \encpp{P}, \encpp{S} \rangle\\ % \quad m,n \mbox{~not in~} \fn{P,Q} \\
\encpp{a(x).R} &= &a(x,y).(y \parallel \encpp{R})  
 \end{eqnarray*}
where 
$\encpp{\cdot}$ is a homomorphism for the other operators in \hopis{1}{-}.
This encoding shows how, 
in the higher-order setting, 
the synchronous/asynchronous 
distinction can be seen as a particular case of the polyadic/monadic distinction.
Notice that the fact that the continuation is supposed to be executed only once
is crucial for the simplicity of the encoding.
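As a sanity check (a sketch using the notation above), the naive encoding lets the target mimic a source synchronization on $a$ step by step:

```latex
% Source (\hopis{1}{-}): synchronous output meets input.
\[
\outC{a}\langle P \rangle.S \parallel a(x).R \;\arro{a\tau}\; S \parallel R \sub{P}{x}
\]
% Target (\ahopis{2}{×}): the continuation travels as the second parameter
% and is released exactly once by the receiver.
\[
\outC{a}\langle \encpp{P}, \encpp{S} \rangle \parallel a(x,y).(y \parallel \encpp{R})
\;\arro{a\tau}\; \encpp{S} \parallel \encpp{R} \sub{\encpp{P}}{x}
\]
```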
%In fact, the encoding 

%However, this encoding is not satisfactory as it requires an extra level of polyadicity,
%which, as argued before, cannot be taken for granted in a setting without name-passing.

Interestingly, asynchronous 
process-passing turns out to be expressive enough to
encode synchronous communication of the \emph{same arity}.
%\emph{without} appealing to additional assumptions in polyadicity. 
Intuitively, 
the idea is to send a \emph{single process}
consisting of a guarded choice between a communication object and the continuation of the synchronous output.
For the monadic case the encoding is as follows:

 \begin{eqnarray*}
\encpp{\outC{a}\langle P \rangle.S} & = &\nu k,l \, (\outC{a}\langle k.(\encpp{P} \parallel \outC{k}) + l.(\encpp{S} \parallel \outC{k}) \rangle \parallel \outC{l})\\ % \quad m,n \mbox{~not in~} \fn{P,Q} \\
\encpp{a(x).R} &= &a(x).(x \parallel \encpp{R})  
 \end{eqnarray*}

where ``$+$'' stands for the encoding of disjoint choice in \hocore, presented in Section \ref{ss:core-expres};
$k, l$ are two names not in $\fn{P,S}$; and 
$\encpp{\cdot}$ is a homomorphism for the other operators in \hopis{1}{-}.

The synchronous output action is thus encoded by sending a guarded, disjoint choice between
the encoding of the communication object and the encoding of the continuation of the output.
The encoding exploits the fact that the continuation should be executed
exactly once, while the communication object can be executed zero or more times.
Notice that there is only one copy of the trigger that executes the encoding of the continuation
(denoted $\outC{l}$ in the encoding above),
which guarantees that it is executed exactly once.
This can only occur after the synchronization has taken place, thus ensuring a correct
encoding of synchronous communication. 
Notice that the synchronization on $l$ releases both the encoding of the continuation and a trigger for 
executing the encoding of the communication object (denoted $\outC{k}$); such an execution can only occur
when the choice sent by the encoding of output appears at the top level.
This way, it is easy to see that a trigger $\outC{k}$ is always available.
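To make the mechanism concrete, the following is a sketch of a reduction trace, abbreviating the transmitted choice as $T$:

```latex
% Abbreviate the guarded choice sent by the encoding of output:
\[
T = k.(\encpp{P} \parallel \outC{k}) + l.(\encpp{S} \parallel \outC{k})
\]
% Step 1: the synchronization on a delivers T to the receiver
% (with scope extrusion of k and l):
\[
\encpp{\outC{a}\langle P \rangle.S} \parallel a(x).(x \parallel \encpp{R})
\;\arro{a\tau}\;
\nu k,l \, (\outC{l} \parallel T \parallel \encpp{R} \sub{T}{x})
\]
% Step 2: the unique trigger on l selects the continuation branch of the
% top-level copy of T, releasing the encoding of S and a trigger on k:
\[
\nu k,l \, (\outC{l} \parallel T \parallel \encpp{R} \sub{T}{x})
\;\arro{\tau}\;
\nu k,l \, (\encpp{S} \parallel \outC{k} \parallel \encpp{R} \sub{T}{x})
\]
```

From here, any further copies of $T$ occurring in $\encpp{R} \sub{T}{x}$ can be consumed by $\outC{k}$, each releasing a copy of $\encpp{P}$ and regenerating $\outC{k}$.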
This idea can be generalized to encode synchronous calculi of arbitrary polyadicity 
as follows:

\begin{mydefi}[Synchronous into Asynchronous]\label{d:syn-asyn}
For some $n > 0$, the encoding of \hopis{n}{-} into \ahopis{n}{×} is defined as follows:
 \begin{eqnarray*}
\encpp{\outC{a}\langle P_1, \ldots, P_n \rangle.S} & = &\nu k,l \, (\outC{a}\langle \encpp{P_1}, \ldots, \encpp{P_{n-1}},
T_{k,l}[\, \encpp{P_n}, \encpp{S} \, ] \rangle \parallel \outC{l}) \\
% \parallel \outC{m}) + n.(\encpp{Q} \parallel \outC{m}) \rangle \parallel \outC{n}\\ % \quad m,n \mbox{~not in~} \fn{P,Q} \\
\encpp{a(x_1, \ldots, x_n).R} &= &a(x_1, \ldots, x_n).(x_n \parallel \encpp{R})  
 \end{eqnarray*}
with 
\[
 T_{k,l} [ M_1, M_2 ] = k.(M_1 \parallel \outC{k}) + l.(M_2 \parallel \outC{k})
\]
where 
$\{k, l\} \cap \fn{P_1, \ldots, P_n,S} = \emptyset$, and 
$\encpp{\cdot}$ is a homomorphism for the other operators in \hopis{n}{-}.
\end{mydefi}
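For instance, instantiating Definition \ref{d:syn-asyn} for $n = 2$ shows that only the last parameter is wrapped in the guarded choice, while the other parameters are encoded directly:

```latex
\begin{eqnarray*}
\encpp{\outC{a}\langle P_1, P_2 \rangle.S} & = & \nu k,l \, (\outC{a}\langle \encpp{P_1},
k.(\encpp{P_2} \parallel \outC{k}) + l.(\encpp{S} \parallel \outC{k}) \rangle \parallel \outC{l}) \\
\encpp{a(x_1, x_2).R} & = & a(x_1, x_2).(x_2 \parallel \encpp{R})
\end{eqnarray*}
```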

We now give informal arguments for the correctness of the encoding;
we leave a formal proof for future work. 
Key to a correctness argument is a characterization of the ``garbage'' that the
process leaves along reductions. Such garbage is essentially determined by 
occurrences of the trigger 
that activates a copy of (the encoding of) the last parameter of the polyadic communication 
(denoted $\outC{k}$ in Definition \ref{d:syn-asyn}).
Such occurrences remain as long as the summation sent by the encoding is not at the top level;
some triggers might remain even if all summations have been consumed. Crucially,
since such triggers are on restricted names, they are harmless for the rest of the process,
and so the encoding is correct up to these extra triggers.
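As a hint of what this garbage looks like, consider a sketch for the monadic case: apply the encoding to $\outC{a}\langle P \rangle.S \parallel a(x).R$ where $x \not\in \fv{R}$, so that no further copies of the transmitted choice are requested. After the synchronization on $a$ and the internal step on $l$, one is left with

```latex
% Residual after the synchronization on a and the internal step on l:
\[
\nu k,l \, (\encpp{S} \parallel \outC{k} \parallel \encpp{R})
\;\equiv\;
\encpp{S} \parallel \encpp{R} \parallel \nu k \, \outC{k}
\]
% since k and l do not occur free in the encodings of S and R. The leftover
% trigger on k is the garbage: under the restriction it has no transitions,
% and is thus behaviorally equivalent to the nil process.
```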


% \begin{myprop}
%  Let $P$ be a \hopis{n}{} process and $\encpp{\cdot}$ be the encoding in Definition \ref{d:syn-asyn}.
% Then we have that if $P \arro{~l~} P'$ then $\encpp{P} \Ar{~l~} \encpp{P'}$,
% where $l \in \{\tau, a\tau\}$, for some $a \in \fn{P}$.
% \end{myprop}
% 
% \begin{proof}
%  By induction on the transitions. 
% \end{proof}


The encoding is significant as it provides compelling evidence 
of the expressive power that (asynchronous) process-passing has 
for representing protocols that rely
on the establishment of private links in the first-order setting.
Not only does the encoding bear witness to the fact that 
%Hence, it is not the case that the whole class of 
such protocols can indeed be encoded into calculi with process-passing only;
the observation that the encoding of synchronous into asynchronous communication
is a particular case of that of polyadic into monadic communication
also leaves open the possibility that, 
following a similar structure, 
an encoding of polyadic communication 
(such as the one proposed by Milner) might exist for the case of process-passing.
In the next section we prove that this is \emph{not} the case.


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% \section{An Extended Calculus}\label{s:ext-calculus}
% \subsection{Syntax and Semantics}
% Here we define the higher-order process languages we shall work with. 
% We begin by defining the higher-order $\pi$-calculus, denoted $\mbox{HO}\pi$.
% As mentioned before, it corresponds to the extension of \hocore with a restriction operator;
% as such, it does not feature name-passing. 
% The language of {\em processes} $P,Q.\ldots$ is defined as follows.
% 
% \begin{mydefi}[Syntax of Processes]\label{d:procs}
% \begin{eqnarray*}
% P,Q,\ldots & ::= & a(x).P \midd \bar{a} \angp{Q}.P  \midd P_1 \parallel P_2 \midd \nu r\, P 
% \midd x \midd  \mathbf{0} 
% \end{eqnarray*}
% where $x,y$ range over process variables, and $a, b, r,s$ denote  names.
% \end{mydefi}
% 
% An  input prefixed process 
% $\inp a x . P$ can receive  on
% name (or channel) $a$ a process that will be substituted in the place of $x$ in the body $P$;
% an output message  $\out a\langle Q \rangle.P$ can send $P$ on $a$ and then continue as $P$;  in both cases $a$ is said to be the \emph{subject} of the action.
% Parallel composition allows processes to interact, and $\nu r \, P$ makes $r$ private (or restricted) to the process $P$.
% 
% Notions of bound and free names and variables ($\bn{\cdot}$, $\fn{\cdot}$, $\bv{\cdot}$, and $\fv{\cdot}$, resp.) are defined in the usual way: an input $\inp a x.P$ binds the free occurrences of variable $x$ in $P$; similarly, $\nu r \, P$ binds the free occurrences of name $r$ in $P$.
% We abbreviate $a(x).P$ as $a.P$ 
% when $x \not \in \fn{P}$, 
% and $\out a \langle \nil \rangle.P$ as $\overline{a}.P$.
% We sometimes omit the $\mathbf{0}$ in continuations. 
% We use notation $\prod^k P$ to represent $k$ copies of process $P$ in parallel.
% A notion of structural congruence $\equiv$ is assumed to be defined as expected.
% 
% The LTS for the (monadic) calculus is given in Figure \ref{f:lts}. It is defined in the usual way, with only one noteworthy difference: 
% we distinguish between \emph{internal} and \emph{public} synchronizations.
% The former are given by synchronizations on \emph{restricted} names; 
% they are the only source of internal behavior and are denoted as $\arro{~\tau~}$.
% The latter are given by synchronization on \emph{public} names: 
% a synchronization on the public name $a$ leads to the visible action $\arro{a \tau}$. 
% The distinction between internal and public synchronizations does not have behavioral consequences; 
% it only represents a more refined standpoint of internal behavior. 
% We thus have four kinds of transitions: in addition to internal and public synchronizations,
% we have  
% input transitions $P \arro{a(x)} P'$ and output transitions $P \arro{(\nu \til y)\out{a}{\langle P'' \rangle}} P'$ (with extrusion of the tuple of names $\til y$) have the expected meaning. 
% We use $\alpha$ to range over actions.
% Notions of bound and free names and variables extend to actions as expected.
% We sometimes use \seaa to denote a sequence of actions $\alpha_1, \ldots, \alpha_n$.
% %\as{The label in the input rule seems wrong. Also, rule Res seems wrong: it prevents too many reductions (I'd say only $r \not \in \n{\alpha}$).}
% 
% \begin{myrem}
%  Similar distinctions between public and internal actions have been used and/or proposed in 
% \cite{Amadio93} and \cite{Lanese07}.
% \end{myrem}
% 
% \begin{figure}
% \[\mathrm{\textsc{Inp}}~~~{\inp a x. P} \arr{\ia a x  }  {P } \qquad \qquad \mathrm{\textsc{Out}}~~~{\out a \langle Q \rangle.P } \arr{\out a \langle Q \rangle  }  {P}\]
% \infrule{\textsc{Act1}~~}{P_1 \arr\alpha P_1' \andalso 
% \bv \alpha \cap \fv{P_2} = \emptyset
% }{
% P_1 \parallel P_2 \arr\alpha P'_1 \parallel P_2 
% } 
% \infrule{\textsc{Tau1}}{P_1 \arro{(\nu \til y)\out{a}{\langle P \rangle}} P_1' \andalso 
% P_2 \arr{a(x)} P'_2   
% }{
% P_1 \parallel P_2 \arr{a\tau}  \nu \til y \,(P'_1 \parallel P'_2 \sub{P}{x})}
%  
% \infrule{\textsc{IntRes}}{P \arr{a\tau} P' }{\nu a \, P \arr{\tau} \nu a \, P}
% 
% \infrule{\textsc{Res}}{P \arr{\alpha} P' \andalso r \not \in \n{\alpha}}{\nu r \, P \arr{\alpha} \nu r \, P'}
% \infrule{\textsc{Open}}{P \arro{(\nu \til y)\out{a}{\langle P'' \rangle}} P' \andalso x\neq a, \, x \in \fn{P''}-\til y}{
% \nu x \, P  \arro{(\nu x \til y)\out{a}{\langle P'' \rangle}}  P'}
% \caption{LTS of \hopis{1}{-}. We have omitted the symmetric counterparts of rules \textsc{Act1} and \textsc{Tau1}.}\label{f:lts} 
% \end{figure}
% 
% 
% 
% 
% Weak transitions are defined in the usual way, always taking into account that internal behavior comes only from synchronizations on restricted names. 
% %The distinction between internal and public synchronizations extends to the definition of weak transitions and weak barb predicates.
% %We write $\swtra$ to denote the reflexive, transitive  closure of the \emph{internal} synchronization $\arro{\tau}$. 
% We then write $\Ar{}$ for the reflexive, transitive closure of $\arro{\tau}$. Given an action $\alpha$, notation $\Ar{\alpha}$ stands for $\Ar{} \arro{\alpha} \Ar{}$.
% %Intuitively, these represent weak transitions that are \emph{safe} in that the additional internal behavior is 
% %not subject to external (malicious) interferences.
% %Given an action $\alpha$, notation $\Ar{\alpha}$ stands for $\Rightarrow \arro{\alpha} \Rightarrow$.
% %Hence, $\SAr{\alpha}$ can be seen as the description of internal behavior that, unlike $\Ar{\alpha}$,
% %says nothing about public synchronizations.
% Given a sequence $\seaa = \alpha_1, \ldots, \alpha_n$, 
% we define $\Ar{\seaa}$ as $\Ar{\alpha_1} \cdots \Ar{\alpha_n}$.
% %and $\SAr{\seaa}$ as $\SAr{\alpha_1} \cdots \SAr{\alpha_n}$.
% 
% 
% %Note also that, in any case, any public synchronization has a weak transition in this sense.
% 
% \begin{mydefi}[Strong and Weak Barbs]\label{d:barbs}
% Given a process $P$ and a name $a$, we write
% \begin{itemize}
% \item $P \stbarb{a}$ ---a \emph{strong input barb}--- if $P$ can perform an input action with subject $a$;
% \item $P \stbarb{\overline{a}}$ ---a \emph{strong output barb}--- if $P$ can perform an output action with subject $a$.
% \end{itemize}
% Given $\mu \in \{a, \overline{a}\}$, we define 
% %\begin{itemize}
% %\item 
% a \emph{weak} barb $P \webarb{\mu}$ if, for some $P'$,  $P \Ar{} P' \stbarb{\mu}$.
% %\item a \emph{safe weak} barb $P \swbarb{\mu}$ if, for some $P'$, $P \SAr{} P' \stbarb{\mu}$.
% %\end{itemize}
% \end{mydefi}
% 
% %We assume definitions of input and output barbs as customary; we use notations $P \stbarb{a}$ and $P \stbarb{\outC{a}}$, resp. 
% %Also, we decree $P \webarb{\outC{a}}$ to be defined as $P \Ar{} P' \stbarb{\outC{a}}$, for some $P'$. 
% %Also, we decree $P \wsbarb{\outC{a}}$ to be defined as $P \SAr{} P' \stbarb{\outC{a}}$, for some $P'$. 
% 
% We have just defined a calculus with monadic communication. 
% It is indeed a \emph{strictly higher-order} calculus, in that only processes can be communicated; 
% it does not consider names  as communication objects.
% It does not consider \emph{abstractions}, i.e., parameterizable processes, either; we shall consider such an extension later on. 
% The first extension we shall consider for $\mbox{HO}\pi$ is with 
% \emph{polyadicity}, i.e., calculi in which tuples of processes are passed around. 
% For this we shall assume notions of \emph{sorts} and \emph{well-sorted processes}, as in the standard way (see, e.g., \cite{San96int}).
% We then have:
% 
% 
% \begin{myconv}
% In what follows we shall assume a family of strictly-higher order calculi denoted \hopis{i}{-}, where $i$ stands for the polyadicity in process passing. %, and $j$ stands for the level of abstraction passing. 
% This way, we have that 
% \begin{itemize}
% \item $\hopis{0}{-}$ allows synchronizations only (no process nor abstraction passing) and is the same as the core of CCS.
% \item For some $n \geq 1$, $\hopis{n}{-}$ represents 
% the higher-order calculus with process-passing of polyadicity $n$. % and no abstraction passing.
% This way, $\hopis{1}{-}$ corresponds to $\mbox{HO}\pi$, the calculus with monadic process passing just defined.
% %\item For some $n \geq 1$, $\hopis{n}{1}$ is the higher-order calculus with arbitrary polyadicity for process-passing and with abstraction passing of one level (i.e., passing of functions from processes to processes).
% \end{itemize}
% \end{myconv}
% 
% 


\section{Separation Results for Polyadic Communication}\label{s:sepresults}

In this section we present the separation results for \shocore.
First, in Section \ref{s:encoding}, we introduce the notion of encoding 
on which the results rely and we present its main properties. % and discuss its features.
Then, in Section \ref{s:distforms}, we introduce the notion of \emph{distinguished forms}, which allow us 
to capture a number of \emph{stability conditions} of processes with respect to their sets of private names.
Finally, in Section \ref{s:hierarchy} we present the hierarchy of \shocore calculi based on polyadic communication.

\subsection{The Notion of Encoding}\label{s:encoding}

%\as{You should point to the paper that inspired this.}
%\subsubsection{Definition}
The following definition of encoding is inspired by that of \cite{Gorla08}, who proposed five criteria that a ``good encoding'' should satisfy. %We use four of these criteria; we do not need divergence sensitiveness.

\begin{mydefi}
A {\em language} $\mathcal{L}$ is defined as:
\begin{itemize}
\item  a set of \emph{processes} $\mathcal{P}$; 
\item  a labeled transition relation $\longrightarrow$ on $\mathcal{P}$, i.e., a structure $(\mathcal{P},\mathcal{A}, \longrightarrow)$ for some set $\mathcal{A}$ of \emph{actions} or \emph{labels};
\item a weak behavioral equivalence $\approx$ (i.e., a behavioral equivalence that abstracts from internal actions in $\mathcal{A}$).
\end{itemize}
\end{mydefi}

A \emph{translation} relates two languages, a \emph{source} and a \emph{target}: %\as{why does it have to be injective?}

\begin{mydefi}[Translation]
Given a source language $\mathcal{L}_\mathsf{s} = (\mathcal{P}_\mathsf{s}, \longrightarrow_\mathsf{s}, \approx_\mathsf{s})$ and 
a target language $\mathcal{L}_\mathsf{t} = (\mathcal{P}_\mathsf{t}, \longrightarrow_\mathsf{t}, \approx_\mathsf{t})$, 
a {\em translation} of $\mathcal{L}_\mathsf{s}$ into $\mathcal{L}_\mathsf{t}$ is %an injective 
a
function $\encpp{\cdot}: \mathcal{P}_\mathsf{s} \to \mathcal{P}_\mathsf{t}$.
\end{mydefi}

%The reflexive and transitive closure of the latter kind of transitions is denoted $\Ar{}_i$.

We shall be interested in a class of translations that respect both syntactic and semantic conditions. %\as{Isn't it much stronger than what we used to have (for parallel). Why the change? Also for soundness, it means we rule out translations that take more steps and whose intermediate state cannot be related to the source terms.}

%---Check whether 1 or 2 below is required. It seems we need 2 rather than 1.

\begin{mydefi}[Syntactic Conditions on Translations]
\label{d:syncon}
Let $\encpp{\cdot}: \mathcal{P}_\mathsf{s} \to \mathcal{P}_\mathsf{t}$ be a translation of $\mathcal{L}_\mathsf{s}$ into 
$\mathcal{L}_\mathsf{t}$. 
We say that $\encpp{\cdot}$ is
\begin{enumerate}
\item {\em Compositional}: 
if for every $k$-ary operator $\mathtt{op}$ of $\mathcal{L}_\mathsf{s}$ and 
for all $S_1, \ldots, S_k$ with \\ $\fn{S_1,\ldots,S_k} = N$, 
there exists a $k$-ary context
$C^{N}_{\mathtt{op}} \in \mathcal{P}_\mathsf{t}$ such that 
\[
\encpp{\mathtt{op}(S_1,\ldots,S_k)} = C^{N}_{\mathtt{op}}[\encpp{S_1},\ldots, \encpp{S_k}].
\]
% \item {\em Compositional}: if it is homomorphic with respect to parallel composition, i.e. 
% %\item \emph{Homomorphic with respect to parallel composition}:  
% $\encpp{P \parallel Q} = \encpp{P} \parallel \encpp{Q}$.

\item {\em Name invariant}: 
if $\encpp{\sigma(P)} = \sigma(\encpp{P})$, for any injective permutation of names $\sigma$.
\end{enumerate}
\end{mydefi}


%Now the semantic conditions.

\begin{mydefi}[Semantic Conditions on Translations]
\label{d:opcorr}
Let $\encpp{\cdot}: \mathcal{P}_\mathsf{s} \to \mathcal{P}_\mathsf{t}$ be a translation of $\mathcal{L}_\mathsf{s}$ into 
$\mathcal{L}_\mathsf{t}$. 
%\begin{enumerate}
%\item 
We say that $\encpp{\cdot}$ satisfies \emph{operational correspondence} if the 
following properties hold:
\begin{enumerate}
\item {\em Completeness/Preservation}: For every $S,S' \in \mathcal{P}_\mathsf{s}$ 
and $\alpha \in \mathcal{A}_\mathsf{s}$
such that  
$S \Ar{\alpha}_\mathsf{s} S'$, 
it holds that 
$\encpp{S} \Ar{\beta}_\mathsf{t}\, \approx_\mathsf{t} \encpp{S'}$, where $\beta \in \mathcal{A}_\mathsf{t}$
and $sub(\alpha) = sub(\beta)$.
\item {\em Soundness/Reflection}: 
For every $S \in \mathcal{P}_\mathsf{s}$, $T \in \mathcal{P}_\mathsf{t}$, 
$\beta \in \mathcal{A}_\mathsf{t}$
such that 
$\encpp{S} \Ar{\beta}_\mathsf{t} T$ 
there exists an $S' \in \mathcal{P}_\mathsf{s}$ 
and an action $\alpha \in \mathcal{A}_\mathsf{s}$
such that 
$S \Ar{\alpha}_\mathsf{s} S'$, $T \Ar{} \approx_\mathsf{t} \encpp{S'}$, and $sub(\alpha) = sub(\beta)$.
\end{enumerate}
Furthermore, we shall require {\em adequacy}: if $P \approx_\mathsf{s} Q$ then $\encpp{P} \approx_\mathsf{t} \encpp{Q}$.
\end{mydefi}

Notice that adequacy is necessary because we make no assumptions on the nature of $\approx_\mathsf{s}$ and $\approx_\mathsf{t}$.


\begin{mydefi}
\label{d:enc}
We call  {\em encoding} any translation that satisfies both the syntactic conditions 
in Definition \ref{d:syncon} and the semantic conditions in Definition \ref{d:opcorr}.
\end{mydefi}


\begin{myrem}
Notice that our definition of encoding 
intends to capture the fact that 
%assumes that since 
an action in the source language might not be matched by the exact same 
action in the target language.
%, the encoding extends also to consider actions. (Perhaps a more detailed explanation on this is necessary.)
\end{myrem}









% \begin{myrem}
% \label{r:comp}$T'$ is in ZDF with respect $\til n$ and $P$; 
% by recalling that a ZDF is a special case of MDF we are done.  
% Following Gorla, in the definition of operational correspondence 
% the ``$\approx_\mathsf{t}$'' is merely
% intended to get rid of ``junk processes'' that do not contribute behaviorally in the target side.
% \end{myrem}
% 
% \begin{myrem}
% Notice that the above definition of operational correspondence \emph{does not} disregard synchronizations on public names.
% %These can be captured as a visible action $\alpha$. 
% The important thing to notice %in the definition of operational correspondence 
% is the fact that the extra steps that the encoding might take in mimicking an action should be \emph{internal} synchronizations only.
% \end{myrem}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\paragraph{Some Properties of Encodings.}

\begin{myprop}
\label{p:names}
Let $\encpp{\cdot}$ be an encoding and let $a$ be a name. If $a \in \fn{P}$ then also $a \in \fn{\encpp{P}}$.
\end{myprop}

\begin{proof}
%	\as{I edited this a bit, you should check I did not introduce mistakes.}
By contradiction.
Take two distinct names $a$ and $b$. Suppose $a$ is free in $P$.
Clearly, we have that 
\[
P \sub b a \not = P ~~(*)
\] Suppose, for the sake of contradiction, that $a$ is not free in $\encpp{P}$.
Under that assumption, one has that 
$\encpp{P}\sub b a = \encpp{P}$
as substituting a non-free name with another name is a vacuous operation. Notice that by name invariance one has 
$\encpp{P}\sub b a = \encpp{P \sub b a }$.
Now, considering $(*)$ above, one has that $\encpp{P}\sub b a \not = \encpp{P \sub b a  }$, a contradiction.
\end{proof}

\begin{myprop}
 Let $\encpp{\cdot}$ be an encoding of $\mathcal{L}_\mathsf{s}$ into 
$\mathcal{L}_\mathsf{t}$. Then $\encpp{\cdot}$ satisfies:
\begin{enumerate}
 \item {\em Barb preservation}: for every $S \in \mathcal{P}_\mathsf{s}$ it holds that  $S \webarb{\outC{a}}$ (resp. $S \webarb{a}$) 
if and only if $\encpp{S} \webarb{\outC{a}}$ (resp. $\encpp{S} \webarb{a}$).

\end{enumerate}
\end{myprop}
\begin{proof}
 It follows from operational correspondence in the definition of encoding (Definition \ref{d:enc}).
\end{proof}

\begin{myprop}[Composability of Encodings]\label{p:enc-comp}
If $\mathcal{C}\encpp{\cdot}$ is an encoding of $\mathcal{L}_1$ into $\mathcal{L}_2$, and 
$\mathcal{D}\encpp{\cdot}$ is an encoding of $\mathcal{L}_2$ into $\mathcal{L}_3$  then 
their composition $(\mathcal{D} \cdot \mathcal{C}) \encpp{\cdot}$ 
is an encoding of $\mathcal{L}_1$ into $\mathcal{L}_3$.
\end{myprop}

\begin{proof}
 From the definition of encoding (Definition \ref{d:enc}). The syntactic conditions (compositionality, name invariance) are easily seen to hold for  $(\mathcal{D} \cdot \mathcal{C}) \encpp{\cdot}$; the semantic conditions (operational correspondence, adequacy) rely on the fact that $\approx_1, \approx_2$, and $\approx_3$ are equivalences and hence transitive. Note that adequacy is crucial for establishing operational correspondence of the composed encoding.
\end{proof}






% \begin{myprop}
% \label{p:enc-out}
% Let $\encpp{\cdot}$ be an encoding as in Definition \ref{d:enc}. 
% Suppose a process 
% $P = \outC{a}.P'$, with $a \not \in \fn{P'}$. 
% Then, for some $R$, $\encpp{P} \SAr{\overline{a}} R$, and $R \approx \encpp{P'}$. 
% \end{myprop}
% \begin{proof}
% This proof follows easily from operational correspondence (completeness).
% % The first part ---$\encpp{P}$ has an output on $a$ and evolves into some $R$--- follows immediately by barb preservation.
% % For the second part ---$R \approx \encpp{P'}$--- take the context $\ct{\cdot} = a \parallel \holE$, and consider the process $\ct{P}$.
% % We have that $\ct{P} \arro{a\tau} P'$. Consider now $\encpp{\ct{P}}$. 
% % By completeness one has that, for some $R'$, $\encpp{\ct{P}} \SAr{a\tau} R' \approx \encpp{P'}$.
% % To show that $R \approx \encpp{P'}$, we prove that $R$ and $R'$ coincide.
% % One starts by noticing that by compositionality of $\encpp{\cdot}$,  
% % $\encpp{\ct{P}} = \encpp{C}[\encpp{P}]$ which is equal to $\encpp{a \parallel \holE} [\,\encpp{P}\,]$.
% % Using compositionality again and filling in the hole, we obtain  $\encpp{a} \parallel \encpp{P}$. 
% % %\as{Why? You should give more details.}. 
% % The public synchronization between $\encpp{a}$ and $\encpp{P}$ ---guaranteed by completeness--- 
% % corresponds to the one between the output $\encpp{P}$ has and the input offered by the context.
% % Notice that, by hypothesis,  $a \not \in \fn{P'}$; by using Prop. \ref{p:names},
% % $a \not \in \fn{\encpp{P'}}$. 
% % Therefore, there is only one output action on $a$ in $\encpp{P}$ that can interact with the input in the context.
% % Then the synchronization leads exactly to $\encpp{P'}$, and $R'$ indeed coincides with $R$. 
% % It then follows that $R \approx \encpp{P'}$, as desired.
% \end{proof}





%\newpage 

%\subsubsection{Discussion}






\subsection{Distinguished Forms}\label{s:distforms}
Here we define a number of \emph{distinguished forms} for 
\shocore
%\hopis{n}{-} 
processes.
They are intended to capture the structure of processes along communications, focusing 
on the private names shared among the participants. 


\subsubsection{Definition}
The definition of distinguished forms exploits
%As usual, 
{\em contexts}, that is, processes with a hole. 
%We shall consider contexts with two dimensions: one on the number of processes that should fill each hole, 
%and another one on the number of occurrences of the processes in the hole(s).
%As for the first one, w
We shall consider {\em multi-hole contexts}, that is, 
contexts with possibly more than one hole. 
More precisely, a multi-hole context is $n$-ary if at most $n$ different holes $\holE_1, \ldots, \holE_n$ occur in it. 
(A process is a 0-ary multi-hole context.) 
%As for the second dimension, w
We will assume that 
any hole $\holE_i$ can occur more than once in the context expression. %\footnote{Cacciagrano et al call \emph{singularly-structured} contexts in which each hole occurs exactly once. Perhaps we should follow that terminology.}.
Notions of free and bound names for contexts are as expected; they are denoted $\fn{\cdot}$ and $\bn{\cdot}$, respectively.


\begin{mydefi}\label{d:contexts}
Syntax of (guarded, multi-hole) contexts:
\begin{eqnarray*}
C,C', \ldots & ::= & a(x).D \midd \bar{a} \angp{D}.D \\
D,D', \ldots & ::= & \holE  \midd P \midd C \midd D \parallel D \midd \nu r\, D
\end{eqnarray*}
\end{mydefi}
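
As a simple illustration of the definition, consider the expression
\[
C = a(x).\big(\holE_1 \parallel \bar{a} \angp{\holE_1}.\holE_2\big)
\]
which is a 2-ary guarded multi-hole context in which hole $\holE_1$ occurs twice; filling the holes with processes $R_1$ and $R_2$ yields $C[R_1, R_2] = a(x).(R_1 \parallel \bar{a} \angp{R_1}.R_2)$.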

%\begin{myrem}\label{r:cont}
% In process contexts, as given by Def. \ref{d:contexts}, holes do not admit continuations.
%\end{myrem}




%\as{maybe a better definition would be: $T$ is in DF with respect to $\til n$, $P$ and $\til R$ iff $T \equiv \nu \til n (P \parallel \ct{\til R})$ with $\til{n} \subseteq \fn{P,\til R}$ and $\til{n} \cap \fn{C} = \emptyset$. We can always get there by scope intrusion of $\til{n_2}$ in $C[\til R]$, and renaming $\til {n_1}$ as $\til n$. I'm adding the $P$ part so that the definition makes sense for ZDF (otherwise it may be vacuously satisfied taking $R$ as $\nil$ and alpha renaming $\til n$ to something else.)}

\begin{myrem} \label{r:nonbindcon}
%Let $P$ be any \hopis{i}{j} process. Suppose $P \arro{~a\tau~} P'$ after the communication of some process $R$ on a name $a$.
%In all cases, $P'$ corresponds to a $\ct{R}$, where $C$ is a 
%Because of the nature of the calculi we are considering, w
We are always working with 
\emph{non-binding contexts}, i.e., contexts that do not capture 
the free variables of the processes that fill their holes.
%(or, in other words, contexts whose bound variables do not occur free in $R$).
\end{myrem}

Below we define \emph{disjoint forms}, the main distinguished form we shall use in this chapter.

\begin{mydefi}[Disjoint Form] \label{d:df}
Let $T \equiv \nu \til n (P \parallel \ct{\til R})$ be a \hopis{m}{-} process where 

\begin{enumerate}
%\item there exist sets of names $\tilde{n_1}, \tilde{n_2}$ such that $\til n = \til{n_1} \uplus \til{n_2}$ with $\til{n_1} \subseteq \fn{P,\til R}$ and $\til{n_2} \subseteq \fn{C}$,  $\til{n_2} \cap \fn{P,\til R} = \emptyset$, and $\til{n_1} \cap \fn{C} = \emptyset$; 

\item $\til n$ is a set of names such that $\til{n} \subseteq \fn{P,\til R}$ and $\til{n} \cap \fn{C} = \emptyset$; 

\item $C$ is a $k$-ary (guarded, multihole) context;

\item $\til R$ contains $k$ %guarded, 
closed processes.
\end{enumerate}
We then say that $T$ is in \emph{$k$-adic disjoint form with respect to $\til n$, $\til R$, and $P$.}
\end{mydefi}

The above definition allows the context to have an arbitrary arity.
We shall sometimes say that processes in such a form are in \emph{$n$-adic disjoint form}, or NDF.
By restricting the arity of the context, this general definition can be instantiated:

\begin{mydefi}[Monadic Disjoint Form, MDF]
\label{d:mdf}
Let $T$ be a process in disjoint form with respect to some $\til n$, $\til R$, and $P$.
If $|\tilde{R}| = 1$ then 
$T$ is said to be in {\em monadic disjoint form} (or {\em MDF}) with respect to $\til n$, $R$, and $P$.
\end{mydefi}
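
As a simple (hypothetical) instance, take a name $c$ and let $R = \outC{c}$; then the process
\[
T \;\equiv\; \nu c \, \big( c.\nil \parallel a(x).(R \parallel R) \big)
\]
is in MDF with respect to $\{c\}$, $R$, and $c.\nil$: the (monadic) context is $C = a(x).(\holE \parallel \holE)$, whose single hole occurs twice, and indeed $\{c\} \subseteq \fn{c.\nil, R}$ and $\{c\} \cap \fn{C} = \emptyset$.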

Recall that even if MDFs have monadic contexts, the content of the hole (i.e., the single process $R$) can appear more than once
in the process. It could even be the case that the content does not appear at all.
This latter situation is a special case of MDF, which we define below:

\begin{mydefi}[Zero-adic Disjoint Form, ZDF]\label{d:zdf}
Let $T \equiv \nu \til n \, (P \parallel \ct{R})$  
be in MDF  with respect to $\til n$, $R$, and $P$.
If $C[R] \not\equiv \nil$ and $R = \nil$ then $T$ is said to be 
in \emph{zero-adic disjoint form (ZDF)}
with respect to $\til n$ and $P$.
Moreover, $T$ can be rewritten as 
$T \equiv \nu \til n_1 \, P \parallel \nu \til n_2 \, Q$, for some $Q \equiv C[\nil]$
and for some disjoint sets of names $\til{n_1}$ and $\til{n_2}$
such that $\til n = \til{n_1} \cup \til{n_2}$.
\end{mydefi}
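
As a simple (hypothetical) instance, the process $T \equiv \nu c \, (c.\nil \parallel a(x).\nil)$ is in ZDF with respect to $\{c\}$ and $c.\nil$, taking $C = a(x).\holE$ and $R = \nil$. Indeed, $T$ can be rewritten as $\nu c \, (c.\nil) \parallel a(x).\nil$, with $\til{n_1} = \{c\}$ and $\til{n_2} = \emptyset$.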

% \begin{mydefi}[Zero-adic Disjoint Form, ZDF]\label{d:zdf}
% Let $T \equiv \prod^{k}_{i = 1} \nu \til {n_i} \, P_i$  (with $\til {n_i} \not = \emptyset$) be a \hopis{m}{-}
% process in which for all $i,j \in {1..k}$, if $i \not =j$ then $\til {n_i} \cap \til {n_j} = \emptyset$. 
% We then say that $T$ 
% is in {\em zero-adic disjoint form (or {\em  ZDF}) 
% with respect to $\til {n_i}, \ldots, \til {n_k}$}.
% When the actual sets of names $\til {n_i}, \ldots, \til {n_k}$ are not important, 
% we will alternatively say that $T$ is in ZDF of \emph{degree} equal to $k$.
% \end{mydefi}
% 
% \begin{myrem}\label{r:mdf-are-zdf}
% Notice that if a process $T \equiv \nu \til n \, (P \parallel \ct{R})$  
% is in MDF  with respect to $\til n$, $R = \nil$, and $P$, 
% then it can be rewritten as 
% $T  \equiv \nu \til n_1 \, P \parallel \nu \til n_2 \,Q$, for some $Q \equiv \ct{\nil}$ and for some 
% disjoint sets $\tilde{n_1}$ and $\tilde{n_2}$.
% As such, $T$ is actually in ZDF of degree 2 with respect to $\tilde{n_1}$ and $\tilde{n_2}$.
% \end{myrem}

%\newpage

The following property will be useful in proofs.


\begin{myprop}[Encodings preserve ZDFs]\label{p:dfispres}
Let $\encpp{\cdot}$ be an encoding as in Definition \ref{d:enc}. 
If $T$ is in ZDF with respect to some $\til n$ and $P$
%$ and $P$, then 
then $\encpp{T}$ is in ZDF with respect to 
$\til n$ and $\encpp{P}$.
\end{myprop}
\begin{proof}%[Proof (Sketch)]
%The proof proceeds by exploiting 
%Definition \ref{d:enc} 
%%(notably, using compositionality and name invariance), 
%and   
%%and using the definition of encoding, and 
%Proposition \ref{p:names}.  
%For simplicity, let us assume 
We know that, for some $Q$ and $\til m$, 
$T \equiv \nu \til n \, P \parallel \nu \til m \, Q$ is in ZDF with respect to $\til n$ and $P$, 
and that $\til n \cap \til m = \emptyset$.
By compositionality (Definition \ref{d:syncon}(1)) we have that, for some context $C$,
$\encpp{T} = C[\encpp{\nu \til n \, P}, \encpp{\nu \til m \,Q}]$. 
The delicate point here is to ensure that $\encpp{\nu \til n \, P}$ and 
$\encpp{\nu \til m \, Q}$ do not share private names because of the enclosing context $C$.
There are two cases: the first one is that a name that is free in $\nu \til n \, P$ but 
private to $\nu \til m \, Q$ becomes private in both $\encpp{\nu \til n \, P}$ and $\encpp{\nu \til m \, Q}$ (and the symmetric case);
the second case is that a name that is free in both $\nu \til n \, P$ and $\nu \til m \, Q$
becomes private in both $\encpp{\nu \til n \, P}$ and $\encpp{\nu \til m \, Q}$.
Proposition \ref{p:names} ensures that neither case is possible:
for every name $a$ and process $R$, the proposition guarantees that if $a \in \fn{R}$ then also $a \in \fn{\encpp{R}}$.
As a consequence, even if the context $C$ involved restrictions enclosing both
$\encpp{\nu \til n \, P}$ and $\encpp{\nu \til m \, Q}$, such restrictions would not bind names in them.
Notice that $C[\encpp{\nu \til n \, P}, \encpp{\nu \til m \,Q}]$ can be rewritten as
$\encpp{T} \equiv \nu \til a (\encpp{\nu \til n \, P} \parallel \encpp{\nu \til m \,Q} \parallel S)$,
for some process $S$. 
By the discussion above, the restrictions on $\til a$ bind no names in
$\encpp{P}$ nor in $\encpp{Q}$. 
Hence, 
$\encpp{T}$ is in ZDF with respect to 
$\til n$ and $\encpp{P}$, as desired.
%$C[\encpp{P_1}] \parallel C[\encpp{P_2}] \parallel Q$, for some $Q$

%In fact, Prop. \ref{p:names} ensures that it is not the case that 
%\as{Not really straightforward, as you need the semantics condition to rule out wrong encodings like from $\encpp{T} = \nu a.\outC{a}\angp{T}$. Also, you should make the relation stronger: anything can be in ZDF, by choosing the empty process as second component.}
\end{proof}



%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\subsubsection{Properties of Disjoint Forms I: Stability Conditions}
We are interested in characterizing the transitions that preserve disjoint forms.
We focus on internal and output actions. 
In what follows we discuss properties that apply to arbitrary NDFs; 
for the sake of readability, however, in proofs we sometimes restrict ourselves to the case of MDFs, since 
cases for other disjoint forms are analogous and only differ in notational burden.
%Sometimes we discuss properties for the particular case of MDFs; however, these invariants apply to arbitrary NDFs as well.


The following proposition
formalizes that, up to structural congruence,  
derivatives of NDFs that have unguarded occurrences of some $R_i$ 
can be brought back into an NDF by ``pushing'' such occurrences into the $P$ component of the NDF.

\begin{myprop}\label{p:mdf}
Suppose a process %in disjoint form 
$T \equiv \nu \til n \, (P \parallel C[  \til R])$ such that
\begin{enumerate}
 \item $T$ complies with conditions (1) and (2) in Definition \ref{d:df}; 
\item $\til R$ contains $k$ closed processes and
$C\holE$ is a context with one or more holes in evaluation context. 
\end{enumerate}


Then, there exists $T' \equiv T$ such that: 
(i) $T'  = \nu \til n \, (P' \parallel C'[\til R])$;
(ii) $\fn{P',  \til R} = \fn{P,  \til R}$ and $\fn{C'} = \fn{C}$;
(iii) $T'$ is in DF  with respect to $\til n$, $\til R$, and $P'$. 
\end{myprop}


\begin{proof}
We prove the particular case in which  $T$ is in MDF (i.e., we have a single $R$); the proof is analogous for the other disjoint forms.
We then need to show that such an MDF $T'$ indeed exists. 
Since $T$ adheres to condition (1) in Definition \ref{d:df}, 
%We exploit the fact that by definition of MDF, 
$P$ and $R$ satisfy the conditions on names stated there.
Without loss of generality, we can assume 
that $C[R] \equiv \nu \til n_2 (\prod^{k} R \parallel C'[R])$ where, for some $k \geq 0$,  
$\prod^{k} R $ represents the occurrences of $R$ that are in evaluation context,
$\til n_2 \subseteq \til n$ is the set of private names of $C$, 
and $C'[R]$ represents the part of $C$ in which each occurrence of $R$ is behind a prefix with names in $\fn{C}$. 
That is, $C'[\cdot]$ is the subcontext of $C$ in which top-level holes have been removed.
%\as{Can't you have some news as well around the unguarded $R$?}
Since $R$ and $C$ do not share private names %(condition (1) holds), 
we know that $C[R] \equiv \prod^{k} R \parallel \nu \til n_2 \, C'[R]$. 
Consider the process $T' \equiv  \nu \til n \, (P' \parallel C'[R])$,
structurally congruent to $T$, where  
$P' =  P \parallel \prod^{k} R $. 
We verify that the conditions on names for MDFs hold for $T'$: 
by the above considerations on $C'$, it holds that $\fn{C'} = \fn{C}$; 
%\as{Well, it's a subterm, but removing top-level holes does not change the free names}; 
also, by construction of $P'$, 
it holds that $\fn{P', R} = \fn{P \parallel R, R} = \fn{P, R}$.
Finally, observe that in $C'$ all occurrences of $R$ remain guarded.
We conclude that $T'$ is indeed in MDF with respect to $\til n$, $R$, and $P'$, as desired.
\end{proof}

%Recall that $P \arr{\tau} P'$ stands for the reduction of $P$ from a synchronization on a {\em private} name.

Disjoint forms are \emph{stable} with respect to internal synchronizations.

\begin{mylem}%[NDFs are invariant on internal synchronizations]
\label{l:mdf-inv}
Let $T \equiv \nu \tilde{n}\, (P \parallel \ct{\til R})$ be a process 
in NDF with respect to $\til n$, $\til R$, and $P$.
If $T \arr{\tau} T'$ then: $T' \equiv \nu \tilde{n}\, (P' \parallel C'[\til R])$; 
$\fn{P',\til R} \subseteq \fn{P,\til R}$ and $\fn{C'} \subseteq \fn{C}$; 
 $T'$ is in NDF with respect to $\til n$, $\til R$, and $P'$.
\end{mylem}

\begin{proof}
We proceed by a case analysis on the communicating partners in the transition.

\begin{description}
\item[Transition internal to $P$. ]
We have a transition $P \arro{\tau} P'$, and hence 
$T' \equiv \nu \tilde{n}~( P' \parallel \ct{\til R})$. 
The transition is internal to $P$ and, as such, $\fn{P'} \subseteq \fn{P}$. 
Names in $C$ remain unchanged; we then have that 
$T'$ is in NDF with respect to $\til n$, $\til R$, and $P'$, 
as desired. 


\item[Transition internal to $\ct{\til R}$.] 
We have a transition $\ct{\til R} \arro{\tau} D[\til R]$.
Since $C$ and $\til R$ do not share private names, the transition can only correspond to an internal synchronization on the names private to $C$.
Process  $D[\til R]$ can have two possible forms,  
depending on whether or not  the prefixes involved in (and consumed by) the transition are guarding some occurrence of ${R_i}$.
We thus have two cases. 
%\as{You should say somewhere that the transition is internal to $C$ as there cannot be a communication between it and $R$.}

\begin{enumerate}
\item In the case $D[\til R]$ has no unguarded occurrences of $\til R$ (i.e., there are no holes at the top level of the 
context), we have $D[\til R] \equiv C'[\til R]$, for a context $C'$ that is 
exactly as $C$ except for the two consumed prefixes. 
The transition concerns only names private to $C$; hence, $\fn{C'} \subseteq \fn{C}$ 
and the other conditions on names are not affected.
We then have that  $T' = \nu \tilde{n}~( P \parallel C'[\til R])$ is in NDF with respect to $\til n$, $\til R$, and $P$, as desired. 

\item In the case occurrences of some $R_i$ end up unguarded after the transition, 
with the aid of 
Proposition \ref{p:mdf} we infer that $T'$ is structurally congruent to an NDF with respect to $\til n$, $\til R$, and $P$, 
and we are done.
\end{enumerate}

\item[Transition internal to some $R_i$.] This is not possible as by definition of disjoint form, 
every occurrence of $\til R$ in $\ct{\til R}$ is underneath a prefix.

\item[Communication between $P$ and $\ct{\til R}$.] This is not possible since 
by definition of disjoint form, $P$ and $C$ do not share private names. %\as{and: 
No $R_i$ can evolve, thus there cannot be a communication between $P$ and any $R_i$. %}
\end{description} 
\end{proof}

\begin{mycoro}%%%%%[ZDFs are invariant on internal transitions]
\label{l:zdf-inv}
%Let $T$ be a process in ZDF of degree $k$. If $T \arr{\tau} T'$, then $T'$ is in ZDF of degree $k$ too.
Let $T$ be a process in ZDF with respect to some $\til n$ and $P$. If $T \arr{\tau} T'$, then $T'$ is in ZDF with respect to $\til n$ and $P$ too.
\end{mycoro}


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%


The lemma below asserts that disjoint forms are stable also under output actions
that do not involve extrusion of names.
To see this, consider an MDF $T$: 
the only risk for it after an output action is 
that the $R$ in $\ct{R}$ could be communicated, thereby ``downgrading'' the MDF into a ZDF.
Since, as we have seen, ZDFs are a special case of MDFs, this is not a problem and MDFs are preserved.
Below we say a process $P$ is {\em contained} in a process $Q$ if and only if there exists a context $C$ such that
$Q \equiv \ct{P}$.

%\as{Don't you need to consider output actions with restricted names (scope extrusion)?} 

%Yes, but I wonder if the following is a problem: if you extrude names with an output it could occur that the process after the output is in MDF wrt some $\til n' \subset \til n$, because you could extrude some names that do not occur in the continuation of the output. Is this an issue to be detailed further, or is the below adjustment enough? 


\begin{mylem}%[NDFs are invariant on output action]
\label{l:mdf-out}
Let $T \equiv \nu \til n\, (P \parallel \ct{\til R})$ be a process in NDF with respect to $\til n$, $\til R$, and $P$.
If $T \arr{\outC{a} \langle Q \rangle} T'$ %(with $\til n_1 \subseteq \til n$) 
then: there exist $P'$ and $C'$ so that 
$T' \equiv \nu \tilde{n}\, (P' \parallel  C'[\til R])$; both 
$\fn{P',\til R} \subseteq \fn{P,\til R}$ and $\fn{C'} \subseteq \fn{C}$ hold; 
$T'$ is in NDF with respect to $\til n$, $\til R$, and $P'$.
\end{mylem}
\begin{proof}
By a case analysis on the source of the action. We prove the particular case in which $T$ is in MDF; the proof is analogous for the other disjoint forms.
\begin{itemize}
\item If $P \arr{\outC{a} \langle Q \rangle} P'$ %, for some $\til n_1 \subseteq \fn{P,R}$, 
then $T' \equiv \nu \til n\, (P' \parallel \ct{R})$. 
Since $P'$ is contained in $P$, we have $\fn{P',R} \subseteq \fn{P,R}$.
Conditions on names in $\fn{C}$ are unchanged, and we have that $T'$ is in MDF  with respect to $\til n$, $R$, and $P'$, as desired. 

\item If $\ct{R} \arr{\outC{a} \langle Q \rangle} D[R]$ then we 
reason on $k$, the number of guarded occurrences of $R$ in $D[R]$. 
The thesis is immediate for $k >0$; if $k = 0$ then 
$T'$ is actually in ZDF 
with respect to $\til n$ and $P$;
%some other set of names $\til m \subseteq \fn{D}$ such that $\til n \cap \til m = \emptyset$.
by recalling that a ZDF is a special case of MDF %(Remark \ref{r:mdf-are-zdf}) 
we are done.
% \begin{enumerate}
% \item If $R$ is not contained in $\out a \langle Q \rangle$ %then $\til n_1 \subseteq \fn{C}$ and 
% we consider whether or not $D[R]$ has occurrences of $R$ at the top level.
% If there are no such occurrences, then %since $D[R]$ and 
% $T'$ is trivially in MDF. 
% Otherwise, in the presence of such occurrences, we appeal to Proposition \ref{p:mdf} and deduce there is a 
% process structurally congruent to $T'$ that is in MDF with respect to $\til n$, $R$, and $P$.
% 
% \item If  $R$ is contained in $\out a \langle Q \rangle$ then %$\til n_1 \subseteq \fn{C} \cup \fn{P,R}$, and 
% $k$, the number of occurrences of $R$ in $\ct{R}$, becomes relevant to determine the actual shape of $T'$.
% If $k=1$ then after the output action there are no occurrences of $R$ in $D$, and $R = \nil$. 
% As such, we have that $T'$ is in ZDF with respect $\til n$ and $P$; 
% by recalling that a ZDF is a special case of MDF we are done.  
% If $k >1$ then we proceed exactly as in case (1) above (i.e. considering a context $C'$ with $k-1$ occurrences of $R$)
% and we are done.
% \end{enumerate}
\end{itemize}
\end{proof}

The following property formalizes the consequences public synchronizations have on ZDFs.

\begin{mylem}
\label{l:mdf-pub}
Let $T$ be a \hopis{n}{-} process in ZDF with respect to 
$\til n$ and $P$. 
%$\til n_1$ and $\til n_2$.
Suppose $T \arro{a\tau} T'$ where $\arro{a\tau}$ is a public $n$-adic synchronization. 
Then $T'$ is in $n$-adic disjoint form with respect to 
%either $\til n_1$ or $\til n_2$, 
$\til n$, 
some $\til R$, and $P$.
\end{mylem}
\begin{proof}
The proof proceeds by a case analysis on the rule used to infer $\arro{a\tau}$.
We concentrate on the case in which 
$\arro{a\tau}$ is a monadic public synchronization 
arising from the interaction of two processes that do not share private names;
the other cases are similar or simpler.
There are two cases, corresponding to rules \textsc{Tau1} and \textsc{Tau2}.
We analyze the first one.
Without loss of generality, % \as{well, some processes in parallel are missing}, 
we can assume 
$T \equiv \nu \til n_1 P \parallel \nu \til n_2 Q$, 
%in which
which is in ZDF with respect to $\til n_1 \cup \til n_2$ and $P$.
In $T$, we have that 
$P = \outC{a} \langle R \rangle. P' \parallel P''$, $Q = a(x).Q' \parallel Q''$, 
and $\til n_1$, $\til n_2$ are two disjoint sets of names. % by definition of ZDF.
%that $\til n_1, \til n_2$ are 
%\as{Recall the shape of $T$, to introduce $P$ and $Q$.}
We then have $\nu \til n_1 \, P \arro{(\nu \til n'_1)\outC{a} \langle R \rangle} \nu \til n_1 \, P'$ 
(with $\til n'_1 \subseteq \til n_1$) and $\nu \til n_2\, Q \arro{a(x)} \nu \til n_2 \, Q'$. 
That is, we are assuming  the case in which 
the output on $a$ extrudes some private names $\til n'_1$.
%\as{This seems still wrong: $\til n_1$ is mentioned twice. You should compute the set of extruded names, and split n1 according to it.}
Using rule \textsc{Tau1} we obtain 
$\nu \til n_1 P \parallel \nu \til n_2 \, Q \arro{a\tau} 
\nu \til n_1 \, P' \parallel \nu \til n'_1 \til n_2 \,Q' \sub{R}{x} = T'$. 
%because of $\fn{R} \cap \til n_1 \neq \emptyset$, 
% $\til n_2$ must be in the private names of $Q' \sub{R}{x}$.
% These two lines are wrong, as the names of the context (n_2) are not in the definition of DF
%By noticing that $\til n'_1 \subseteq \til n$ and that $\fn{P'} \cap \til n_2 = \emptyset$, we have that 
%$T' \equiv \nu \til n_1 \til n_2 \, (P' \parallel Q' \sub{R}{x})$.
By noticing that $\til n'_1 \subseteq \til n$ 
%and that $\fn{P'} \cap \til n_2 = \emptyset$, 
we have that 
$T' \equiv \nu \til n_1 \, (P' \parallel \nu \til n_2 \, Q' \sub{R}{x})$,
so $T'$ can be brought into 
an MDF with respect to $\til n_1$, $R$, and some $P'$.
%Indeed, notice that process $Q' \sub{R}{x}$ can be replaced by a context $C\holE$ that is filled with $R$.
First, consider the context that is obtained by replacing each occurrence of $x$ in $Q'$ with a single hole.
Call that context $C\holE$; since we have monadic communication, $C$ is \emph{monadic}.
We can then see that $\nu \til n_2 \, Q' \sub{R}{x}$ corresponds to $\ct{R}$.
The resulting process can thus be written as $\nu \til n_1 \, (P' \parallel \ct{R})$;
in case there are unguarded occurrences of $R$ in $\ct{R}$ (because of top-level occurrences of $x$ in $Q'$), 
 with the aid of Proposition \ref{p:mdf}, the process can be rewritten as an MDF with respect to $\til n_1$, $R$, and some $P''$ containing both $P'$ and a number of copies of $R$.

The case for \textsc{Tau2} is completely analogous, and only differs in the fact that the 
process after the public synchronization is in MDF with respect to $\til n_2$ (rather than to $\til n_1$).
\end{proof}

%\newpage

\subsubsection{Properties of Disjoint Forms II: Origin of Actions}

We now give some properties regarding the order and origin of 
internal and output
actions of processes in DFs.

%\emph{Q: Below, perhaps saying that we care only about the origin of internal and output actions?}

%\emph{Q2: There might be a problem in the second item: asking $a \in \til n$ makes no sense, as by definition of NDF $C$ shares no free names with $\til n$. One should require $a \in \bn{C}$ or something similar.}

\begin{mydefi}\label{d:origin}
Let $T = \nu \til n \, (A \parallel C[\til R])$ be an NDF with respect to $\til n$, $\til R$, and $A$. 
Suppose $T \arro{\alpha} T'$ for some action $\alpha$.
\begin{itemize}
 \item Let $\alpha$ be an output action. We say that $\alpha$ \emph{originates in $A$} if 
$A \arro{\alpha} A'$
occurs as a premise in the derivation of $T \arro{\alpha} T'$, 
and that $\alpha$ \emph{originates in $C$} if $C[\til R] \arro{\alpha} C'[\til R]$
occurs as a premise in the derivation of $T \arro{\alpha} T'$. 
 
\item Let $\alpha = \tau$. We say that $\alpha$ \emph{originates in $A$} if, for some $a \in \til n$, 
$A \arro{a\tau} A'$ occurs as a premise in the derivation of $T \arro{\alpha} T'$, 
and that $\alpha$ \emph{originates in $C$} if %, for some $a \in \til n$, 
$C[\til R] \arro{\tau} C'[\til R]$
occurs as a premise in the derivation of $T \arro{\alpha} T'$. 

\end{itemize}
\end{mydefi}

\begin{myprop}
 Let $T = \nu \til n \, (A \parallel C[\til R])$ be an NDF with respect to $\til n$, $\til R$, and $A$.
Suppose $T \arro{\alpha} T'$, where $\alpha$ is either an output action or an internal synchronization. 
Then $\alpha$ originates in \emph{either} $A$ or $C$.
\end{myprop}

\begin{proof}
 The thesis is immediate for the case of output actions. For internal synchronizations, the thesis follows by noting that, by definition, internal synchronizations take place on private names only. By definition of NDF, $A$ and $C$ do not share private names, and all occurrences of $\til R$ in the context $C$ are guarded, so they cannot interact with $A$. As a result, there is no way $A$ and $C$ can interact through an internal synchronization; such an action must originate in either $A$ or $C$.
\end{proof}

Notice that both $A$ and $C$ can have the same action $\alpha$ (for instance, an output action on a public name that is shared among them). This, however, does not mean that a single instance of $\alpha$ originates in both $A$ and $C$.

The following proposition says that the only consequence an internal transition originated in $C$ might 
have on  the structure of an NDF is to release new copies of the processes in $\til R$:

\begin{myprop}\label{p:copies}
Let $T = \nu \til n \, (A \parallel C[\til R])$ be an NDF with respect to $\til n$, $\til R$, and $A$.
Suppose $T \arro{\tau} T'$, where $\tau$ originates in $C$.
Then, for some $k_1, \ldots, k_n \geq 0$, $T' \equiv \nu \til n \, (A \parallel C'[\til R] \parallel \prod^{k_1} R_1 \parallel \cdots \parallel \prod^{k_n} R_n)$.
\end{myprop}

\begin{proof}
Immediate by recalling that, by definition of NDF, occurrences of $\til R$ appear guarded in $C[\til R]$, and by noticing that an internal synchronization consumes two (complementary) prefixes. The number of copies of any $R_i$ (for $i \in 1..n$) is greater than zero only if the prefixes involved in the synchronization guard an occurrence of $R_i$.
\end{proof}

The following lemma states the conditions under which two actions of a disjoint form can be safely
\emph{swapped}.

\begin{mylem}[Swapping Lemma]\label{l:swapp}
 Let $T = \nu \til n \, (A \parallel C[\til R])$ be an NDF with respect to $\til n$, $\til R$, and $A$.
Consider two actions $\alpha$ and $\beta$ that can be either an output action or an internal synchronization.
Suppose that $\alpha$ originates in $A$, $\beta$ originates in $C$, and that there exists a $T'$ such that $T \arro{\alpha} \arro{\beta}T'$.
%, where $\alpha$ is an output action or an internal synchronization that originates in $A$ and $\beta$ is an output action or an internal synchronization that originates in $B$. 
Then $T \arro{\beta} \arro{\alpha}T'$ also holds, i.e., action $\beta$ can be performed before $\alpha$ without affecting the final behavior.
\end{mylem}

\begin{proof}
 We proceed by a case analysis on $\alpha$ and $\beta$, analyzing their possible combinations.
Since we have two kinds of actions (output actions and internal synchronizations), we have four cases to check.
All of them are easy, and follow by the semantics of parallel composition. 
Consider, for instance, the case in which $\alpha = \tau$ 
arises from a synchronization on a private name $a$,
and $\beta = \tau$ arises from a synchronization on a private name $b$.
%\as{I thought we only considered output and internal actions?}.
Then, for some complementary actions $\alpha_0, \overline{\alpha_0}$ on (private) name $a$, and complementary actions $\beta_0, \overline{\beta_0}$ on (private) name $b$, we have that 
\begin{eqnarray*}
 T & \equiv & \nu \til n \, (\alpha_0.A_1 \parallel \overline{\alpha_0}.A_2 \parallel A' \parallel \beta_0.C_1[\til R] \parallel \overline{\beta_0}.C_2[\til R] \parallel C'[\til R] ) ~~\mbox{and} \\
 T' & \equiv & \nu \til n \, (A_1 \parallel A_2 \parallel A' \parallel C_1[\til R] \parallel C_2[\til R] \parallel C'[\til R] ) 
\end{eqnarray*}
By definition of internal synchronizations, $a$ is a name private to $A$ and $b$ is a name private to $C$.
Since, by definition of NDF, $A$ and $C$ do not share private names, there is no possibility of interference between the prefixes $\alpha_0, \overline{\alpha_0}, \beta_0$, and $\overline{\beta_0}$.  Hence, it is safe to perform $T \arro{\beta}\arro{\alpha} T'$, and the thesis holds.
% 
% \item[Case 2: $\alpha = \out a \langle S \rangle, \beta = b\tau$] Then, for complementary actions $\beta_0, \overline{\beta_0}$ on the  (private) name $b$, we have that 
% \begin{eqnarray*}
%  T & \equiv & \nu \til n \, (\out a \langle S \rangle.A_1 \parallel A' \parallel \beta_0.C_1[\til R] \parallel \overline{\beta_0}.C_2[\til R] \parallel C'[\til R] ) ~~\mbox{and} \\
%  T' & \equiv & \nu \til n \, (A_1 \parallel A' \parallel C_1[\til R] \parallel C_2[\til R] \parallel C'[\til R] ) 
% \end{eqnarray*}
% It is easy to see that performing $\beta$ first represents no harm to the output on $a$. Indeed, this is because actions $\beta_0, \overline{\beta_0}$ are on the private name $b$; they cannot interfere with an output on $a$, which is public.
% It is then safe to perform $T \arro{b\tau}\arro{\out a \langle S \rangle} T'$, and the thesis holds.
% 
% \item[Case 3: $\alpha = a\tau, \beta =  \out b \langle T \rangle$]  Similar to Case 2.
% 
% 
% \item[Case 4: $\alpha = \out a \langle S \rangle, \beta = \out b \langle T \rangle$] Then, we have that 
% \begin{eqnarray*}
%  T & \equiv & \nu \til n \, (\out a \langle S \rangle.A_1 \parallel A' \parallel \out b \langle T \rangle.C_1[\til R] \parallel  C'[\til R] ) ~~\mbox{and} \\
%  T' & \equiv & \nu \til n \, (A_1 \parallel A' \parallel C_1[\til R]  \parallel C'[\til R] ) 
% \end{eqnarray*}
% and clearly  $T \arro{\out b \langle T \rangle}\arro{\out a \langle S \rangle} T'$, so the thesis holds.
% \end{description}
\end{proof}

Notice that the converse of the Swapping Lemma does not hold: since an action $\beta$ originated in $C$ can enable an action $\alpha$ originated in $A$ (e.g., an action enabled by an extra copy of $R$), these cannot be swapped.
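
To see this failure concretely, consider a small (hypothetical) instance with names $d$ and $s$: let $R = \outC{s}$, $C = \nu d \, \big(\bar{d}\angp{\nil}.\nil \parallel d(x).\holE\big)$, and
\[
T \;\equiv\; \nil \parallel \ct{R},
\]
an MDF with respect to $\emptyset$, $R$, and $\nil$. The internal synchronization $\beta$ on the private name $d$ (originated in $C$) releases the copy of $R$, which is then pushed to the $A$ side of the resulting disjoint form (cf. Proposition \ref{p:mdf}); only afterwards can the output $\alpha$ on $s$ be performed. Hence $T \arro{\beta}\arro{\alpha}$ is possible, while $T \arro{\alpha}\arro{\beta}$ is not.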
We now generalize the Swapping Lemma to a sequence of internal synchronizations and output actions.
%This way, all the actions originated in the context $C$ can be safely executed before all the other actions.

\begin{mylem}[Commuting Lemma]\label{l:commute}
Let $T = \nu \til n \, (A \parallel C[\til R])$ be an NDF with respect to $\til n$, $\til R$, and $A$. 
Suppose $T \Ar{\seaa} T'$, where $\seaa$ is a sequence of output actions and internal synchronizations only.
Let $\seaa_C$ (resp. $\seaa_A$) be the sequence of actions that is exactly as $\seaa$ but in which actions originated in $A$ (resp. $C$) or its derivatives are not included.
%Similarly, let $\seaa_A$ be the sequence of actions that is exactly as $\seaa$ but in which actions originated in $B$ are not included. 
Then, there exists a process $T_1$ such that 
\begin{enumerate}
  \item $T \Ar{\seaa_C} T_1 \Ar{\seaa_A} T'$. 
\item $T_1 \equiv \nu \til n \, (A \parallel 
\prod^{m_1} R_1 \parallel \cdots \parallel \prod^{m_n} R_n
\parallel C'[\til R])$, for some $m_1, \ldots, m_n \geq 0$.
\end{enumerate}
\end{mylem}

\begin{proof}
We proceed by an induction on $k$, the number of actions originated in $C$ that occur after an action originated in $A$ in the sequence \seaa.
The base case is when $k=0$; that is, when all the actions after $T_1$ are originated in $A$, and we are done. 
The inductive step requires a second induction on $j$, the number of actions originated in $A$ which precede a single action originated in $C$. 
This induction follows easily exploiting 
%The base case is when $j=1$, and follows by a single application of 
the Swapping Lemma (Lemma \ref{l:swapp}).  %The inductive step exploits the same lemma.
%In this case, the inductive step exploits the Swapping Lemma (Lemma \ref{l:swapp}).
The fact that, for each $i \in 1..n$, $T_1$ involves a number $m_i \geq 0$ of copies of $R_i$ is an immediate consequence of Proposition \ref{p:copies}.
\end{proof}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%


\subsection{A Hierarchy of Synchronous Higher-Order Process Calculi}\label{s:hierarchy}

%\subsubsection{Preliminaries}

We define an expressiveness hierarchy 
for the higher-order process calculi 
in the family given by \shocore.
The hierarchy is defined
in terms of the impossibility of encoding 
\hopis{n}{-}
%a higher-order calculus with polyadicity $n$ into a calculus with polyadicity $n-1$. 
into \hopis{n-1}{-}, according to the definition given in Section \ref{s:encoding}.
We begin by showing the 
impossibility result that sets the basic case of the hierarchy, namely that biadic process passing cannot be encoded into monadic process passing (Lemma \ref{l:biadic}). 
The proof exploits the notion of MDF and its associated stability properties. 
We then state the general result, i.e. the impossibility of encoding \hopis{n+1}{-} into \hopis{n}{-} (Lemma \ref{l:ppas-hier}); this is done by generalizing the proof of Lemma \ref{l:biadic}.

%We stress that our notion of encoding is as in Definition \ref{d:enc}.
%Our main result depends critically on such a notion, in particular on compositionality and on operational correspondence (more precisely, completeness). 


%\subsubsection{The Proof}

\begin{mylem}\label{l:biadic}
There is no encoding of $\hopis{2}{-}$ into $\hopis{1}{-}$.
\end{mylem}

\begin{proof} 
Assume, towards a contradiction, that an encoding $\encpp{\cdot}: \hopis{2}{-} \to \hopis{1}{-}$ does indeed exist.
In what follows, we use $i,j$ to range over $\{1,2\}$, assuming that $i \neq j$. 

Consider the processes $S_i = \outC{m_i}\parallel m_i.\outC{s_i}$ and $S_j = \outC{m_j}\parallel m_j.\outC{s_j}$.
Consider the $\hopis{2}{-}$ process $P = E^{(2)} \parallel F^{(2)}$, where $E^{(2)}$ and $F^{(2)}$ are defined as follows:
\begin{eqnarray*}
E^{(2)} & = & \nu m_1, m_2 \, (\out{a}\langle{S_1,\,S_2}\rangle.\nil) \quad \\
F^{(2)} & = & \nu b \, (a(x_1, x_2).( \out{b}\langle \outC{b_1}.x_1 \rangle. \nil \parallel \out{b}\langle \outC{b_2}.x_2 \rangle. \nil \parallel b(y_1).b(y_2).y_1))
\end{eqnarray*}
where both $b_1,b_2 \not \in \fn{E^{(2)}}$ (with $b_1 \neq b_2$) and $s_1,s_2 \not \in \fn{F^{(2)}}$ (with $s_1 \neq s_2$) hold. 
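
Note that each $S_i$ may only perform a private synchronization on $m_i$, followed by an output on $s_i$ (writing derivatives up to structural congruence):
\[
S_i = \outC{m_i}\parallel m_i.\outC{s_i} \arro{~\tau~} \outC{s_i} \arro{\outC{s_i}} \nil \, .
\]
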
Let us analyze the behavior of $P$. We first have a public synchronization on $a$:
\[
P  \arro{a \tau} \nu m_1, m_2, b \, (\out{b}\langle \outC{b_1}.S_1 \rangle. \nil \parallel \out{b}\langle \outC{b_2}.S_2 \rangle. \nil \parallel b(y_1).b(y_2).y_1) = P_0 \, . 
\]
In $P_0$ we have 
two private synchronizations on name $b$ that implement
an internal choice: 
both processes $\outC{b_1}.S_1$ and $\outC{b_2}.S_2$ are consumed
but only one of them will be executed. 
We then have either $P_0 \arro{~\tau~} \arro{~\tau~} \outC{b_1}.S_1 = P_1$ or 
$P_0 \arro{~\tau~} \arro{~\tau~} \outC{b_2}.S_2 = P'_1$.
Starting in $P_1$ and $P'_1$ we have the following sequences of actions: 
\begin{eqnarray*}
&P_1& \arro{\outC{b_1}} P_2 \arro{~\tau~} \arro{\outC{s_1}} \nil \\
&P'_1 &\arro{\outC{b_2}} P'_2 \arro{~\tau~} \arro{\outC{s_2}} \nil  \, .
\end{eqnarray*}
In both cases, a private synchronization on $m_i$ precedes an output action on $s_i$.
All the above can be summarized as follows:
\begin{eqnarray}
& P & \arro{a\tau} P_0 \arro{~\tau~} \arro{~\tau~}P_1 \arro{\outC{b_1}} P_2 \arro{~\tau~} \arro{\outC{s_1}} \nil \label{der:1} \label{e:trace1}\\
& P & \arro{a\tau} P_0 \arro{~\tau~}\arro{~\tau~} P'_1 \arro{\outC{b_2}} P'_2 \arro{~\tau~} \arro{\outC{s_2}}  \nil \label{der:2} \label{e:trace2}\, .
\end{eqnarray}
These sequences of actions illustrate the effects of the internal choice on $b$ discussed above. Such a choice has a direct influence on: (i) the output action on $b_i$; (ii) the internal synchronization on $m_i$; and (iii) the output action on $s_i$.
Notice that each of these actions enables the following one, and that an output on $b_i$ precludes the possibility of actions on $b_j$, $m_j$, and $s_j$.

Consider now the behavior of $\encpp{P}$ ---the encoding of $P$--- 
with the aid of (\ref{e:trace1}) and (\ref{e:trace2}) above.
By definition of encoding (in particular, completeness) 
%and Prop. \ref{p:enc-out} (correspondence for output actions without name extrusion) 
we have 
the following two mutually exclusive possibilities of behavior:
\begin{eqnarray}
& & \encpp{P} \Ar{a \tau}\approx \encpp{P_0} \Ar{} \approx \encpp{P_1} \Ar{\outC{b_1}} \approx \encpp{P_2} \Ar{\outC{s_1}} \approx \nil \label{der:wtrace1} \label{e:trace3} ~~\mbox{~and}\\
& & \encpp{P} \Ar{a \tau}\approx \encpp{P_0} \Ar{} \approx \encpp{P'_1} \Ar{\outC{b_2}} \approx \encpp{P'_2} \Ar{\outC{s_2}} \approx \nil \label{der:wtrace2} \label{e:trace4} \, .
\end{eqnarray}
%Notice that with respect to (\ref{e:trace1}) and (\ref{e:trace2}), in (\ref{der:wtrace1}) and  (\ref{der:wtrace2})  
%each the two internal synchronizations on $b$ have collapsed into a single
%safe weak transition, and that each $\SAr{\outC{s_i}}$ includes the internal synchronization on $m_i$.
We notice that the first (weak) transition, namely 
\[
\encpp{P} \Ar{a \tau}\approx \encpp{P_0} \, ,
\]
is the same in both possibilities. Let us analyze it by relying on Definition \ref{d:barbs}.
For \hopis{1}{-} processes $T$, $T'$, and $T_0$, it holds that
\begin{equation}
\label{e:comp} 
\encpp{P} \Ar{} T \arro{a \tau} T' \Ar{} T_0 \approx \encpp{P_0} \, .
\end{equation}

We examine the disjoint forms of the processes in (\ref{e:comp}).
We notice that $P$ is in ZDF 
with respect to $\{m_1, m_2, b\}$ 
%and $\{b\}$:
and $E^{(2)}$: 
$m_1, m_2$ do not appear in $F^{(2)}$, and $b$ does not appear in $E^{(2)}$.
From Proposition \ref{p:dfispres} we know that $\encpp{P}$ is also in ZDF with respect to $\{m_1, m_2, b\}$ 
and $\encpp{E^{(2)}}$. 
%and $\{b\}$.
Since DFs are preserved by internal actions (Corollary \ref{l:zdf-inv}), 
we know that $T$ is also a ZDF with respect to $\{m_1, m_2,b\}$ 
%and $\{b\}$.
and $A$, the derivative of $\encpp{E^{(2)}}$. 
%Recall that a ZDF is a special case of a MDF.
%Hence, we can equivalently state that $\encpp{P}$ and $T$ are in MDF with respect to $\{m_1,m_2,b\}$, $\nil$, and $A$.
In the general case, Lemma \ref{l:mdf-pub} ensures that a public synchronization causes a ZDF to become a MDF.
Here, the communication from $E^{(2)}$ to $F^{(2)}$ is mimicked by the encoding;
we then have that $T'$ is in MDF with respect to $\{m_1,m_2\}$, some $R \neq \nil$, and $A'$, 
the derivative of $A$ after the public synchronization. 
Finally, since $T'$ evolves into $T_0$ by means of internal synchronizations only, 
by Lemma \ref{l:mdf-inv}, we know that $T_0$ is also in MDF with respect to $\{m_1,m_2\}$, $R$, and $A_0$, the derivative of $A'$. Indeed, for some context $C_0$ (with private name $b$), we have that
\[
 T_0 = \nu m_1, m_2 \, (A_0 \parallel C_0[R])~.
\]


Notice that (\ref{e:comp}) ensures that process $T_0 \approx \encpp{P_0}$. 
Hence, by definition of $\approx$, $T_0$ should be able to match each action possible from $\encpp{P_0}$ 
by performing either  the sequence of actions given in (\ref{der:wtrace1}) or the one in (\ref{der:wtrace2}). 
We have just seen that $T_0$ is in MDF with respect to $\{m_1,m_2\}$, $R$, and $A_0$.
Crucially, both  (\ref{der:wtrace1}) and  (\ref{der:wtrace2}) involve only internal synchronizations and output actions.
Therefore, by Lemmas \ref{l:mdf-inv} and \ref{l:mdf-out}, every derivative of $T_0$ intended to mimic the behavior of $\encpp{P_0}$ (and its derivatives) is a process in MDF with respect to $\{m_1, m_2\}$, $R$, and some $A_i$.

We now use this information on the structure of the derivatives of $T_0$ to analyze 
the bisimulation game for $T_0 \approx \encpp{P_0}$. 
We use the observability predicates (barbs) as in Definition \ref{d:barbs}. 
We know from (\ref{der:wtrace1}) and  (\ref{der:wtrace2}) that $\encpp{P_0}$ evolves into 
either $\encpp{P_1}$ or $\encpp{P'_1}$ after a weak transition. 
The encoding preserves the 
mutually exclusive internal choice that was discussed for the source term $P_0$; in the encoding, such a choice 
is governed by the encoding of $F^{(2)}$.
Also, as in the source term,  
%As we will see (and from the previous discussion on the effect of the internal choice on the observables), 
the output barb on $b_i$ (resp. $b_j$) available in $\encpp{P_1}$ (resp. $\encpp{P'_1}$) is enough to recognize the result of such a choice.
Process $T_0$ should be capable of mimicking this internal choice, and 
there should exist derivatives $T_1$ and $T'_1$ of $T_0$ such that both
$T_0 \Ar{~} T_1$ with $T_1 \approx \encpp{P_1}$
and $T_0 \Ar{~} T'_1$ with $T'_1 \approx \encpp{P'_1}$ hold. 
%As discussed before, both $T_1$ and $T'_1$ are in MDF with respect to $\{m_1,m_2,b\}$, $R$, and some $A_i$.

Consider now the behavior from $\encpp{P_1}$, one of the two possible derivatives of $\encpp{P_0}$
(given in (\ref{e:trace3})).
After a weak output transition 
%There are two possibilities: \emph{either} (i) to make an output 
on $b_1$, the process evolves into one that is behaviorally equivalent to $\encpp{P_2}$.
 %which in turn, will perform an output action on $s_1$.
%(ii) to make an output on $b_2$ (thus leading to $\encpp{P'_2}$, which will enable an output on $s_2$). 
This output barb gives evidence of the internal choice that took place in $\encpp{P_0}$.
Recall that such a choice is mutually exclusive: once an output barb on $b_1$ is performed, 
the possibility of an output barb on $b_2$ is precluded.
By definition of $\approx$, process $T_1$ should be able to perform a weak output transition on $b_1$, thus evolving into a process $T_2$ behaviorally equivalent to $\encpp{P_2}$. 
The behavior from $\encpp{P'_1}$ (the other derivative of $\encpp{P_0}$, given in (\ref{e:trace4})) is similar: after a weak output transition
on $b_2$, the process evolves into a process behaviorally equivalent to $\encpp{P'_2}$.
 %which in turn,  will perform an output action on $s_2$. 
The \hopis{1}{-} process $T'_1$ should mimic this behavior as expected, and evolve into a $T'_2$ such that $T'_2 \approx \encpp{P'_2}$.
Since MDFs are preserved by output actions (Lemma \ref{l:mdf-out}), both $T_2$ and $T'_2$ 
are in MDF with respect to $\{m_1,m_2\}$, $R$, and some $A_i$.

To complete the bisimulation game, $T_2$ and $T'_2$ should be able to match the internal synchronizations and output actions that are performed by $\encpp{P_2}$ and $\encpp{P'_2}$, respectively. Summing up, 
we have the following behavior from $T_0$:
\begin{eqnarray}
& & T_0 \Ar{} T_1 \Ar{\outC{b_1}} T_2 \Ar{\outC{s_1}} \approx \nil  \label{e:t01}~~\mbox{~and}\\
& & T_0 \Ar{} T'_1 \Ar{\outC{b_2}} T'_2 \Ar{\outC{s_2}} \approx \nil \label{e:t02} .
\end{eqnarray}
where, by definition of $\approx$, $\encpp{P_i} \approx T_i$ for $i \in \{0,1,2\}$ and
$\encpp{P'_j} \approx T'_j$ for $j \in \{1,2\}$.
Call $C_2$ and $C'_2$ the derivatives of $C_0$ in $T_2$ and $T'_2$, respectively.
It is worth noticing that by conditions on names, output actions on $s_1$ and $s_2$ cannot originate in $C_2$ and $C'_2$.

The behavior of $T_0$ described in (\ref{e:t01}) and (\ref{e:t02}) can be equivalently described as 
$T_0 \Ar{\alpha_1} \nil$ and $T_0 \Ar{\alpha_2} \nil$, where $\alpha_1$ contains the outputs on $b_1$ and $s_1$, and $\alpha_2$ the outputs on $b_2$ and $s_2$. Using the Commuting Lemma (Lemma \ref{l:commute}) on $T_0$, we know there exist 
processes $T^*_1$ and $T^*_2$ such that

\begin{enumerate}
\item $T^*_1 \equiv \nu \til n \, (A_0 \parallel \prod^m R \parallel C^*_1[R])$ and $T^*_2 \equiv \nu \til n \, (A_0 \parallel \prod^{m'} R \parallel C^*_2[R])$, for some $m,m' \geq 0$. Recall that these processes are obtained by 
performing every action originated in $C_0$
(which can only be output actions and internal synchronizations);
as a result, we have that $C^*_1[R] \not \arro{}$ and $C^*_2[R] \not \arro{}$. 

\item $T^*_1$ (resp. $T^*_2$) can only perform an output action on $s_1$ (resp. $s_2$) and 
internal actions. % not originated in $C^*_1$  (resp. $C^*_2$). 
Considering this, we have that $T^*_1 \webarb{\overline{s_1}}$, $T^*_1 \not \webarb{\overline{s_2}}$ and $T^*_2 \webarb{\overline{s_2}}$, $T^*_2 \not \webarb{\overline{s_1}}$ should hold.
\end{enumerate}


From item (1) above it is easy to observe that the only difference between $T^*_1$ and $T^*_2$ is 
in $m$ and $m'$, the number of copies of $R$ released as a result of executing first all actions originating in $C_0$.
We then find that 
the number of copies of $R$
has a direct influence on whether an output action on $s_1$ or on $s_2$ can be performed;
in turn, this affects both 
the bisimulation game between $\encpp{P_2}$ and $T_2$ and that 
between $\encpp{P'_2}$ and $T'_2$. We consider three possible cases for the values of $m$ and $m'$:

\begin{description}
 \item[Case 1: $m = m'$.] This is not a possibility, since it would imply that both $T^*_1$ and $T^*_2$ have the same possibilities of behavior, i.e., that outputs on both $s_1$ and $s_2$ are possible from $T^*_1$ and $T^*_2$. 
Clearly, this breaks the bisimilarity condition. 

\item[Case 2: $m > m'$.] Consider the process $T^*_1$.
We have already seen that, in order to play the bisimulation game correctly, it must be the case that $T^*_1 \webarb{\overline{s_1}}$ and $T^*_1 \not \webarb{\overline{s_2}}$. Process $T^*_1$ has more copies of $R$ than $T^*_2$; we can thus rewrite it as 
\[
T^*_1 \equiv \nu \til n \, (A_0 \parallel \prod^{m'} R \parallel \prod^{m-m'} R \parallel C^*_1[R]) \, .
\]
Considering that $C^*_1[R] \not \arro{}$ and $C^*_2[R] \not \arro{}$, we can 
formally state that the $m - m'$ copies of $R$ in $T^*_1$ are the only behavioral difference between
$T^*_1$ and $T^*_2$, i.e. 
\begin{equation} \label{e:contr}
T^*_1 \approx T^*_2 \parallel \prod^{m-m'} R \, . 
%T^*_1 = T^*_2 \parallel \prod^{m-m'} R \, .
\end{equation}

Let us analyze the consequences of this relationship between $T^*_1$ and $T^*_2$.
As argued before, both 
$T^*_1 \webarb{\overline{s_1}}$ and $T^*_2 \webarb{\overline{s_2}}$ must hold.
Notice that because of (\ref{e:contr}), 
if $T^*_2 \webarb{\overline{s_2}}$ 
then 
%$T^*_1 \webarb{\overline{s_1}}$ \emph{and} 
$T^*_1 \webarb{\overline{s_2}}$
holds. 
This would break the bisimilarity game between $\encpp{P_2}$ and $T_2$, since $\encpp{P_2} \not \webarb{\overline{s_2}}$. 
Even in the (contradictory) case that $T^*_2 \webarb{\overline{s_2}}$ did not hold, the bisimilarity game between $\encpp{P_2}$ and $T_2$ would succeed, but the game between $\encpp{P'_2}$ and $T'_2$ would fail, as $\encpp{P'_2}$ could perform an output on $s_2$ that $T'_2$ could not match. Hence, in the case $m > m'$ the bisimilarity game fails.

\item[Case 3: $m < m'$.] This case is completely symmetric to Case 2.
\end{description}

This analysis reveals that there is no way a MDF can faithfully mimic the observable behavior of a \hopis{2}{-} process when such a behavior depends on internal choices implemented with private names. 
%In all cases, there are derivatives of $T_0$ that break the bisimilarity condition,  %Therefore, the completeness statement given by 
%and (\ref{e:comp}) is contradicted.
We then conclude that there is no encoding  $\encpp{\cdot}: \hopis{2}{-} \to \hopis{1}{-}$.
\end{proof}

% \as{In what sense the following encoding is not compositional? Also, you keep talking about SHO, but you have asynchronous output here.}
% 
% \begin{myrem}
% There are encodings of $\hopis{2}{-}$ into $\hopis{1}{-}$ based on communication of pairs, such as the following:
% \begin{eqnarray*}
% \{P,Q\} & = & l.P + r.Q \\
% \encpp{\outC{a} \langle P, Q \rangle.R} & = & \outC{a}\langle \{P,Q\} \rangle.\encpp{R} \\
% \encpp{a(x,y).R} & = & a(z). \encpp{R {\sub {(\outC{l} \parallel z)} x} {\sub {(\outC{r} \parallel z)} y}}
% \end{eqnarray*}
% However, this kind of encodings is not compositional. 
% \end{myrem}


The scheme 
used in the proof of Lemma \ref{l:biadic}
can be generalized to calculi with arbitrary polyadicity.
We therefore have the following.

\begin{mylem}\label{l:ppas-hier}
For every $n>1$, there is no encoding of $\hopis{n}{-}$ into $\hopis{n-1}{-}$.
\end{mylem}
\begin{proof}
The proof proceeds by contradiction, assuming an encoding $\encpp{\cdot}: \hopis{n}{-} \to \hopis{n-1}{-}$ indeed exists.
 Consider the \hopis{n}{-} process $P = E^{(n)} \parallel F^{(n)}$, where $E^{(n)}$ and $F^{(n)}$ are defined as follows:
\begin{eqnarray*}
E^{(n)} & = & \nu m_1, \ldots, m_{n} \, (\out{a}\langle{S_1,\ldots, S_{n}}\rangle.\nil) \quad \\
F^{(n)} & = & \nu b \, (a(x_1, \ldots, x_{n}).( \out{b}\langle \outC{b_1}.x_1 \rangle. \nil \parallel 
\cdots \parallel \out{b}\langle \outC{b_{n}}.x_{n} \rangle. \nil \parallel  b(y_1).\cdots.b(y_{n}).y_1))
\end{eqnarray*}
where, for each $l \in 1..n$, $S_l = \outC{m_l} \parallel m_l.\outC{s_l}$. Also, 
$b_1, \ldots, b_n$ are pairwise different names not in $\fn{E^{(n)}}$ and
$s_1, \ldots, s_n$ are pairwise different names not in $\fn{F^{(n)}}$.

Using this $P$, the analysis follows the same principles and structure as the proof of Lemma \ref{l:biadic}.
After a public synchronization on $a$, $P$ evolves into some $P_0$. In $P_0$ there are $n$ internal synchronizations on the private name $b$, which implement an internal, mutually exclusive choice and lead to the execution of one (and only one) of the $\outC{b_l}.S_l$. 
On the encoding side, using Proposition \ref{p:dfispres}, the \hopis{n-1}{-} process $\encpp{P}$ can be shown to be in ZDF with respect to
$\{m_1,\ldots,m_n,b\}$ and $\encpp{E^{(n)}}$; using Corollary \ref{l:zdf-inv} and the generalization of Lemma \ref{l:mdf-pub} to the case of a public $(n-1)$-adic synchronization, 
$\encpp{P_0}$ can be shown to be behaviorally equivalent to a process $T_0$ that is
in $(n-1)$-adic disjoint form with respect to $\{m_1,\ldots,m_n\}$, some $R_1, \ldots, R_{n-1}$, and some $A_0$. 
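
Schematically, and writing derivatives up to structural congruence, the source-level traces generalize (\ref{e:trace1}) and (\ref{e:trace2}): for each $l \in 1..n$, there is a mutually exclusive sequence
\[
P \arro{a\tau} P_0 \underbrace{\arro{~\tau~} \cdots \arro{~\tau~}}_{n} \outC{b_l}.S_l \arro{\outC{b_l}} S_l \arro{~\tau~} \arro{\outC{s_l}} \nil \, .
\]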

The analysis of the bisimulation game $T_0 \approx \encpp{P_0}$ is similar to the previous one; the only difference is that now there 
are $n$ alternatives for an output action on some $b_i$, each enabling an output action on $s_i$. Process $T_0$ should be able to match any such actions; this exploits the fact that along the bisimulation game the $(n-1)$-adic disjoint form is preserved (by Lemmas \ref{l:mdf-inv} and \ref{l:mdf-out}). The Commuting Lemma (Lemma \ref{l:commute}, which holds for arbitrary NDFs) can then be applied to show that the $(n-1)$-adic disjoint form $T_0$ might perform some observable behavior that $\encpp{P_0}$ is not able to perform. In particular, if $\encpp{P_0}$ executes only some $\outC{b_l}.S_l$, then $T_0$ 
could \emph{also} exhibit barbs associated with some $\outC{b_k}.S_k$, where $k \in 1..n$ and $k \neq l$.
This leads to a contradiction, and the thesis holds.
\end{proof}




% \begin{myrem}
% In \cite{Gorla08} the notion of encoding can be parameterized using different \emph{renaming policies}, which define how a single name is translated. Here we have considered the renaming policy that is used in most (if not all) expressiveness studies:
% a name is simply translated into itself.
% Counterexamples for Lemmas \ref{l:biadic} and \ref{l:ppas-hier} could be devised by using less standard renaming policies.
% For instance, if one assumes 
% a renaming policy under which
% each name $a$ is translated as a pair of names $a_1, a_2$ then one could find an encoding as in Definition \ref{d:enc}, 
% thus contradicting Lemma \ref{l:biadic}. 
% \end{myrem}

\begin{myrem}[A hierarchy for \emph{asynchronous} calculi]
The expressiveness hierarchy 
characterized by Lemma \ref{l:ppas-hier} for calculi in \shocore 
%should hold 
holds for calculi in \rhocore 
%for the asynchronous case 
as well. 
In fact, a detailed proof would simply consist in 
%One would have to check and adapt: 
adapting 
the definition of guarded contexts (Definition \ref{d:contexts}), 
the stability lemmas (Lemmas \ref{l:mdf-inv} and \ref{l:mdf-out}), the conditions under which the 
Swapping Lemma holds (Lemma \ref{l:swapp}), and the counterexample used in Lemma \ref{l:biadic}.
Roughly speaking, there are no substantial differences between the synchronous and the asynchronous case:
%In principle, things should not be that different: 
having one less prefix does not change
the main structure of the proof; the definition of disjoint form becomes somewhat weaker, as copies 
of the process inside context would be only released after an input action.
\end{myrem}



%\newpage

\section{The Expressive Power of Abstraction Passing}\label{s:abstraction}
In this section we show that 
abstraction passing, i.e., parameterizable processes,
is strictly more expressive than process passing.
We consider \bhopis{n}{}, the extension of \hopis{n}{-} with 
the communication of abstractions of one level of 
arrow nesting, i.e., functions from processes into processes.
%\subsection{The Extended Language}
The language of \bhopis{n}{} processes is obtained by extending the syntax of \shocore processes (Definition \ref{d:procs-s}) 
in the following way:

\[
 P, Q, \ldots ::= \cdots \midd (x)P \midd \papp{P_1}{P_2}
\]
 
That is, we consider abstractions of the form $(x)P$ and 
\emph{applications} of 
the form $\papp{P_1}{P_2}$, which allow one to supply an argument $P_2$ to an abstraction $P_1$.
As usual, $(x_1)\ldots (x_n)P$ is abbreviated as $(x_1,\ldots, x_n)P$.
%We assume notions of open and closed processes as expected.
The operational semantics 
of \bhopis{n}{} is that of \shocore, extended with %notions of beta reduction, normal forms, type systems as in \cite{San96int}.
the following rule: % for handling abstractionapplications
%We also extend the definition of (guarded) contexts (Def. \ref{d:contexts}) accordingly:

%\[
% C, C', \ldots :: = \cdots \midd (x)\holE \midd \papp{\holE}{P} \midd \papp{P}{\holE} 
%\]

\[
\textsc{App}~~\frac{}{\papp{(x)P}{Q} \arro{~\tau~} P \sub Q x} \, .
\]
Moreover, for \bhopis{n}{} we rely on notions of types as in \citep{San96int}, 
and consider only well-typed processes.

\begin{example}[Private Link Establishment with Abstraction Passing]
Let us introduce 
a very simple example of the way in which abstraction passing 
is able to model private link establishment on a name. % ($s$ in this case):
Consider the \bhopis{1}{} process $P = S \parallel R$, where $S$ and $R$ are defined as follows:
\begin{eqnarray*}
 S & = & \nu s \, (\outC{a} \langle (y)\outC{s} \langle y \rangle \rangle.s(x).x) \\
R & = & a(x).\papp{x}{Q} \, .
\end{eqnarray*}
We then have that 
a private link between $S$ and $R$ is created 
once they synchronize on $a$; the private link is used to send $Q$ from the derivative of $R$ to that of $S$:
\begin{eqnarray*}
 P & \arro{~a\tau~} & \nu s \, (s(x).x \parallel \papp{(y)\outC{s} \langle y  \rangle}{Q}) \\
 & \arro{~\tau~} & \nu s \, (s(x).x \parallel \outC{s} \langle Q  \rangle ) \\
& \arro{~\tau~} & Q \, .
\end{eqnarray*}

\end{example}

% ============ OLD STUFF ===================================================================
% We now show that abstraction passing increases the expressive power of pure process passing
% in \shocore. 
% Our result is based on the remark below, which shows how 
% \hopis{n}{×} can be encoded into the extension 
% of \hopis{n-1}{×} with abstraction passing.
% Recall that \hopis{n}{×} \emph{cannot} be encoded into \hopis{n-1}{×} 
% with process passing only
% (Lemma \ref{l:ppas-hier}).
% 
% \begin{myrem}[Abstraction-passing can encode polyadic communication]\label{r:procabs}
% There are encodings of 
% \hopis{n}{k} into \bhopis{n-1}{k+1}.
% Consider, for instance, the case of $n=2$: 
% %, i.e. an encoding of \hopis{2}{×}into  \bhopis{1}{×}. 
% %biadic process passing into monadic communication of abstractions of level one:
% \begin{eqnarray*}
% \encpp{\outC{a} \langle P_1, P_2 \rangle.R} & = & \nu r (a(z).\papp{z}{\encpp{P_1},\outC{r}} \parallel r.( \papp{z}{\encpp{P_2},\outC{r} } \parallel r.\encpp{R})) \\
% \encpp{a(x_1,x_2).Q} & = & \nu s( \outC{a} \langle (y_1,y_2)\outC{s} \langle y_1 \rangle.y_2 \rangle. s(x_1).s(x_2).\encpp{Q}) 
% \end{eqnarray*}
% where $\encpp{\cdot}$ is an homomorphism for the other operators in \hopis{2}{×}.
% The encoding of input sends to the encoding of output an abstraction that will communicate $P_1$, $P_2$ from the side of the encoding output. 
% The communication on the public name $a$ is then inverted for this purpose. 
% Crucially, in the encoding of input, the abstraction and the continuation of the output action share a private name ($s$ above).
% The abstraction allows two parameters: the object to be communicated, and a synchronization signal so as to preserve the correct order in communication. 
% %This scheme can be easily generalized to exhibit 
% % for $k \geq 0$ and $m > n$. 
% % The intuition is that the 
% % one extra level of abstraction-passing 
% % is used to communicate the $m-n$ communication objects that make a difference in terms of polyadicity. 
% % Hence, one would 
% % communicate an abstraction of type $k+1$ which is used 
% % $\lceil m/n \rceil$ times to communicate the $m$ objects.  
% % The above case corresponds to the particular case in which $k=0$, $n = 1$, and $m = 2$. 
% \end{myrem}
% 
% % \begin{myrem}
% %  This is more a question: does Remark \ref{r:procabs} mean that polyadicity is no longer an issue once
% % you have at least one level of abstraction passing? In other words: perhaps polyadicity of abstraction 
% % passing is not relevant because one can always encode arbitrary polyadicity of abstraction passing (of whather type)
% % using the trick in Remark \ref{r:procabs}.
% % \end{myrem}
% 
% Remark \ref{r:procabs} leads to the following separation result:
% 
% \begin{myprop}\label{p:sep-abs-pp1}
%  %Take \hopis{n}{2}, the higher-order process calculus with biadic abstraction passing and $n$-adic process passing.
% There is no encoding of \bhopis{n}{1} into \hopis{n}{-}.
% \end{myprop}
% 
% \begin{proof}
% Suppose, for the sake of contradiction, there is an encoding 
% \[
% \mathcal{A}\encpp{\cdot}: \bhopis{n}{1} \to \hopis{n}{-} \, . 
% \]
% By Remark \ref{r:procabs}, we know there is an encoding 
% \[
% \mathcal{B}\encpp{\cdot}: \hopis{n+1}{-} \to \bhopis{n}{1} \, . 
% \]
% Since the composition of two encodings is an encoding (Proposition \ref{p:enc-comp}), this means that 
% $(\mathcal{A}\cdot\mathcal{B})\encpp{\cdot}$ is an encoding of \hopis{n+1}{-} into \hopis{n}{-}.
% However, by Lemma \ref{l:ppas-hier} we know such an encoding does not exist, and we reach a contradiction.
% \end{proof}

We now show that abstraction passing increases the expressive power of pure process passing
in \shocore. 
The result is based on the encoding below.

\begin{mydefi}[Monadic abstraction-passing can encode polyadic communication]\label{r:procabs}
%There exist encodings of \hopis{n}{k} into \bhopis{1}{k+1}. 
%For $n=2$, such an 
The encoding 
$\encpp{\cdot}: \hopis{2}{k} \to \bhopis{1}{k+1}$ 
is defined as:
\begin{eqnarray*}
\encpp{\out a \langle P_1, P_2 \rangle.R} & = & a(z).(\encpp{R} \parallel \nu m , n, c \, ( \outC{n} \parallel 
\papp{z}{n.(\outC{c} \parallel \outC{m}) + m.(\encpp{P_1} \parallel \outC{m})} \parallel c.\papp{z}{\encpp{P_2}})) \\
\encpp{a(x_1,x_2).Q} & = & \nu b \,( \out a \langle (y)\outC{b}\langle y \rangle \rangle \parallel b(x_1).(x_1 \parallel b(x_2).\encpp{Q})) 
\end{eqnarray*}
where $\encpp{\cdot}$ is a homomorphism for the other operators in \hopis{2}{k}.
\end{mydefi}
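
To see the role of the private name $b$ in this encoding, observe that, by rule \textsc{App}, the abstraction communicated on $a$ turns any argument into an output on $b$; for instance, for the second component we have
\[
\papp{(y)\outC{b}\langle y \rangle}{\encpp{P_2}} \arro{~\tau~} \outC{b}\langle \encpp{P_2} \rangle \, ,
\]
which is then consumed by the input prefix $b(x_2)$ in the encoding of the input.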

Definition \ref{r:procabs} 
can be generalized
so as to obtain an encoding $\encpp{\cdot}: \hopis{n}{k} \to \bhopis{1}{k+1}$, 
for any $n > 1$.
This encoding leads to the following separation result:

\begin{mylem}\label{p:sep-abs-pp1}
 %Take \hopis{n}{2}, the higher-order process calculus with biadic abstraction passing and $n$-adic process passing.
There is no encoding of \bhopis{n}{1} into \hopis{n}{-}.
\end{mylem}

\begin{proof}
Let us just consider the case $n=1$; the other cases are similar. 
Suppose, for the sake of contradiction, there is an encoding 
$\mathcal{A}\encpp{\cdot}: \bhopis{1}{1} \to \hopis{1}{-}$. 
By Definition \ref{r:procabs}, we know there is an encoding 
$\mathcal{B}\encpp{\cdot}: \hopis{2}{-} \to \bhopis{1}{1}$. 
Since the composition of two encodings is an encoding (Proposition \ref{p:enc-comp}), this means that 
$(\mathcal{A}\cdot\mathcal{B})\encpp{\cdot}$ is an encoding of \hopis{2}{-} into \hopis{1}{-}.
However, by Lemma \ref{l:ppas-hier} we know such an encoding does not exist, and we reach a contradiction.
\end{proof}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% \subsection{Second Approach}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% This approach relies on disjoint forms rather than on the definition of encoding.
% The hope is that this approach provides a base upon which a proof for a ``vertical hierarchy'' of abstraction passing
% can be formalized.
% 
% We state some auxiliary results first.
% 
% \begin{myfact}\label{f:independence}
%  Let $T = \nu \til n_1 \, P_1 \parallel \nu \til n_2 \, P_2$ be 
% a ZDF of degree $2$ with respect to $\til n_1$ and $\til n_2$.
% The only way in which $\nu \til n_1 \, P_1$ 
% can enable a (weak) output action originated in $\nu \til n_2 \, P_2$ 
% is via a public synchronization. 
% \end{myfact}
% 
% The following proposition formalizes the fact that the content of non-binding contexts
% remain unaffected after transitions.
% 
% \begin{myprop}\label{p:subs-cont}
%  Let $P = \ct{\til S}$ be a \hopis{m}{-} process where $C$ is a guarded, multihole context.
% If $P \arro{~\alpha~} P'$ %, for some $\alpha$ and $P'$, 
% then
% $P' \equiv C'[\til S]$, for some context $C'$.
% %$P \arro{~\alpha~} P \sub{\til T}{\til x} = P'$ then we have that
% %$P' = C\sub{\til T}{\til x}[\til S]$.
% \end{myprop}
% \begin{proof}[Proof (Sketch)]
% The thesis holds by
% noticing that since $C$ is guarded, $\alpha$ cannot originate from processes in $\til S$; 
% hence, $\alpha$ can only originate in context $C$. %, and necessarily $\til x \in \bv{C}$. 
% Since we assume non-binding contexts (see Remark \ref{r:nonbindcon}), 
% processes in $\til S$ result unaffected after $\alpha$, so 
% we obtain that only $C$ can evolve because of $\alpha$, and hence we obtain that $P' = C'[\til S]$.
% %any substitution involving $\til x$ in $C$; we then obtain that $P'$ actually corresponds to $C\sub{\til T}{\til x}[\til S]$.
% \end{proof}
% 
% 
% The following definition formalizes disjoint forms inside parallel contexts that do not share private names with them. 
% 
% \begin{mydefi}[Contextual DFs]\label{d:context-df}
% Let $T \equiv \nu \til n (P \parallel \ct{\til R})$  be a $k$-adic disjoint form with respect to 
% to $\til n$, $\til R$, and $P$.
% Consider a process $S$ that does not share private names with $C$ nor with $P$ and $R$.
% We say that the process $T \parallel S$ is a \emph{contextual disjoint form} 
% with respect to $\til n$, $\til R$, $P$, and $S$. 
% \end{mydefi}
% 
% \begin{myrem}
%  All stability properties of disjoint forms transfer to contextual disjoint forms in the expected way.
% \end{myrem}
% 
% 
% The following lemma formalizes the effect that a particular kind of public synchronizations has 
% over disjoint forms. Intuitively, after such synchronizations, 
% the depth of the disjoint form remains unchanged, while its order increases. 
% 
% \begin{myprop}\label{p:kadic}
%  Let $T = \nu \til n \, (P \parallel a(\til x).\ct{\til S_1})$ 
% be a \hopis{k}{-} process, 
% with $a \not \in  \til n$ and $a \not \in   \fn{P, \til S_1}$.
% Suppose $T$ is in $k$-adic disjoint
% form with respect to $\til n$, $P$, and $\til S_1$.
% Moreover, let $V = \nu \til m \, \outC{a}\langle \til S_2 \rangle. V'$ be a process in which
% \begin{enumerate}
%  \item $a \not \in \til m$ % and $a \not \in \fn{V'}$
% %\item $\til m \cap \til n = \emptyset$
% \item $\fn{V'} \cap \fn{P,\til S_1, C} = \emptyset$
% \end{enumerate}
% Let $U = V \parallel T$. If $U \arro{~a\tau~} U'$ then, for some guarded context $C'$,
% \[
% U' \equiv \nu \til n \til m \, (P \parallel V' \parallel C'[\til S_1, \til S_2])
% \]
% is a $2k$-adic disjoint form with respect to $\til n \cup \til m$, $P \parallel V'$, and
% $\til S_1 \cup \til S_2$.
% \end{myprop}
% 
% \begin{proof}
% 
% If $U \arro{~a\tau~} U'$ then, by conditions on names, 
% the only possibility is that the output on $a$ in $V$ and
% the input on $a$ in $C$ interacted so as to obtain a public synchronization. That is, both 
% \[
%  T = \nu \til n \, (P \parallel a(\til x).\ct{\til S_1}) \arro{~a(\til x)~} \nu \til n \, (P \parallel \ct{\til S_1})
% \]
% and
% \[
%  V = \nu \til m \, \outC{a}\langle \til S_2 \rangle. V' \arro{~\nu \til m_1 \, \outC{a}\langle \til S_2\rangle~} \nu \til m \, V' \quad \mbox{(for some $\til m_1 \subseteq \til m$)}
% \]
% took place with rules \textsc{Inp}  and \textsc{Out}. So, using rule \textsc{Tau1} 
% we have that 
% \[
%  U \arro{~a\tau~} \nu \til m \, V' \parallel \nu \til n \, (P \parallel \nu \til m_1 (C[\til S_1])\sub {\til S_2}{\til x} ) = U' \,. 
% \]
% Prop. \ref{p:subs-cont} ensures that the substitution does not affect processes in $\til S_1$, so we actually have
% that 
% %substitution $$ 
% % relies on the fact that we
% % consider only non-binding contexts (see Remark \ref{r:nonbindcon}). As such, in $a(x).\ct{\til S_1}$
% % occurrences of variable $x$ in $\til S_1$ are distinguished from those in $C$. As a result, 
% % in the substitution $C\sub {\til S_2}{x} [\til S_1]$, the content of the hole remains unaffected. 
% % We then have that $C\sub {\til S_2}{x} [\til S_1]$ 
% %can be rewritten as $\ct{\til S_1, \til S_2}$. 
% %This way, 
% $U' \equiv \nu \til m \, V' \parallel \nu \til n (P \parallel \nu \til m_1 \, C\sub {\til S_2}{\til x}[\til S_1])$.
% Now, by replacing each occurrence of the variables in $\til x$ 
% in $C$ with a context hole, 
% we obtain a context $C'$ whose hole has size $2k$:
% \[
% U' \equiv \nu \til m \, V' \parallel \nu \til n (P \parallel \nu \til m_1 \, C'[\til S_1, \til S_2])
% \]
% Let us now verify that $U'$ is indeed  a disjoint form of order $2k$. First, 
% by conditions on names, we can safely expand the scope of $\til m$, and write 
% \[
% U' \equiv \nu \til m \til n \, (V' \parallel P \parallel C'[\til S_1, \til S_2])
% \]
% Indeed, conditions on names ensure that $P$ and $V'$ do not share names, so now it might appear clear that 
% $U'$ is a $2k$-adic disjoint form with respect to $\til n \cup \til m$, $P \parallel V'$, and
% $\til S_1 \cup \til S_2$, as desired.
% \end{proof}
% 
% \begin{myprop}\label{p:enabling}
%  Let $T$ be a \hopis{m}{-} process such that $T \Ar{\outC{a}} \Ar{\outC{b}} \Ar{\outC{c}}$.
% If $T$ is in $k$-adic disjoint form then it is not possible that 
% \emph{both }$\outC{a}$ enables $\outC{b}$ \emph{and} that $\outC{b}$ enables $\outC{c}$.
% \end{myprop}
% 
% \begin{proof}[Proof (Sketch)]
% The proof proceeds by observing that the 
% enabling of an action is a form of dependency.
% In a DF $T = \nu \til n \, (P \parallel \ct{\til R})$ 
% there is only one level of dependency, the one given by the containment of $\til R$ in the context $C$: performing actions originated in $C$ might enable actions between processes in $\til R$ and $P$. 
% This is precisely the intuition of the Commuting Lemma (Lemma \ref{l:commute}). 
% We identify the different possible options for the 
% origin of the visible actions. There are four cases; we analyze them with the aid of the Commuting Lemma.
% 
% \begin{enumerate}
%  \item All actions originate in $C$ and no action originates in $\til R$. This is not a possibility, as this 
% means that actions on $\outC{a}$, $\outC{b}$, and $\outC{c}$ are all ``at the same level'', 
% in the sense that all three actions could be performed by $T$, with no particular dependencies among them.
% 
% \item Action $\outC{a}$ originates in $C$, and $\outC{b}$ and $\outC{c}$ originate in $\til R$. 
% This is not a possibility either since only one enabling would be enforced, and hence 
% $\outC{b}$ and $\outC{c}$ would end up ``at the same level'', that is, 
% if $T \Ar{\outC{a}} T'$ then in $T'$ both $\outC{b}$ and $\outC{c}$ could be performed 
% without any dependencies among them. 
% \item Actions $\outC{a}$ and  $\outC{b}$ originate in $C$, and $\outC{c}$ originates in $\til R$. 
% This means that in $T$ both $\outC{a}$ and $\outC{b}$ could be performed without any dependencies
% among them. This is not satisfactory, independently of the fact that each of them 
% would have the possibility of enabling $\outC{c}$.
% 
% \item All actions originate in $\til R$ and no action originates in $C$. Analogous to case (1).
% \end{enumerate}
% We then conclude that the structure of a disjoint form is not enough to induce two consecutive enablings of actions.
% \end{proof}
% 
% 
% 
% 
% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% 
% 
% 
% 
% \begin{myprop}
%  %Take \hopis{n}{2}, the higher-order process calculus with biadic abstraction passing and $n$-adic process passing.
% There is no encoding of \hopis{n}{1} into \hopis{n}{-}.
% \end{myprop}
% 
% \begin{proof}[Proof (Sketch)] 
%  The proof
% proceeds by contradiction, assuming such an encoding $\encpp{\cdot}$ indeed exists.
% Consider the $\hopis{n}{1}$ process $V_0 = R \parallel S_1 \parallel S_2$ where 
% $S_1$, $S_2$, and $R$ are defined as follows:
% \begin{eqnarray*}
% S_1 & = & \nu d \, (\outC{a}\langle (w)\outC{d} \parallel d. \outC{s}.w\rangle.\nil )  \\
% S_2 & = & \nu e \, (\outC{b}\langle \outC{e} \parallel e.\outC{t}\rangle.\nil) \\
% R & = & \nu c \, (a(x).b(y).\outC{c} \parallel c.\outC{r}.\papp{x}{y})
% \end{eqnarray*}
% where $s,d \not \in \fn{S_2,R}$; $r,c \not \in \fn{S_1,S_2}$; $t,e \not \in \fn{S_1,R}$; 
% $\fn{S_1} \cap \fn{S_2} = \emptyset$; 
% $a \not \in \fn{S_2}$; and $b \not \in \fn{S_1}$. 
% Intuitively, we have two senders ($S_1$ and $S_2$) and a receiver ($R$).
% The senders cannot communicate to each other; they can only transmit something to the receiver 
% through a public name ($a$ in $S_1$, $b$ in $S_2$). 
% 
% Let us analyze the behavior of $V_0$. We have the following
% \begin{eqnarray*}
%  V_0 = R \parallel S_1 \parallel S_2 & \arro{~a\tau~} & \nu c d \, (b(y).\outC{c} \parallel c.\outC{r}.((w)\outC{d} \parallel d.\outC{s}.\papp{w}{y})) \parallel S_2 = V_1 \\
% & \arro{~b\tau~} & \nu c d e\, (\outC{c} \parallel c.\outC{r}.(\outC{d} \parallel d.\outC{s}.(\outC{e} \parallel e.\outC{t}))) = V_2
% \end{eqnarray*}
% Notice how action $\arro{~a\tau~}$ between $S_1$ and $R$ enables action $\arro{~b\tau~}$ between $S_2$ and $R$. 
% Once in $V_2$, we have the following sequence of actions
% \[
%  V_2 \arro{~\tau~}\arro{~\outC{r}~} V_3 \arro{~\tau~}\arro{~\outC{s}~} V_4 \arro{~\tau~}\arro{~\outC{t}~} \nil
% \]
% where each output action is enabled by an internal synchronization on a private name, i.e.
% $V_2 \not \webarb{\outC{s}}$, $V_2 \not \webarb{\outC{t}}$, and $V_3 \not \webarb{\outC{t}}$ hold.
% 
% Putting together the two sequences above, we have the following:
% \begin{equation}
%  V_0 \arro{~a\tau~} V_1 \arro{~b\tau~} V_2 \arro{~\tau~}\arro{~\outC{r}~}V_3 \arro{~\tau~}\arro{~\outC{s}~}V_4 \arro{~\tau~}\arro{~\outC{t}~} \nil
% \end{equation}
% 
% Consider $\encpp{V_0}$, the encoding of $V_0$. By definition of encoding, $\encpp{V_0}$ has the following behavior:
% \begin{equation}\label{e:ap1}
% \encpp{V_0} \Ar{~a\tau~} \approx \encpp{V_1} \Ar{~b\tau~} \approx \encpp{V_2} 
% \Ar{~\outC{r}~}\approx \encpp{V_3} \Ar{~\outC{s}~}\approx \encpp{V_4} \Ar{~\outC{t}~} \approx \nil
% \end{equation}
% 
% It is convenient to analyze in more detail the first two public synchronizations above, 
% so as to appreciate the disjoint forms involved in them.
% Let us expand the (weak) transitions associated to such synchronizations:
% \[
%  \encpp{V_0} \Ar{~} T_0 \arro{~a\tau~} T_1 \Ar{~} \approx 
% \encpp{V_1} \Ar{~}  T_2 \arro{~b\tau~} T_3 \Ar{~} T_4 \approx \encpp{V_2} 
% \]
% 
% First of all, we begin by noticing that $V_0$ is in ZDF of degree 3, 
% with respect to  $\{c\}$,  $\{d\}$, and $\{e\}$. By Prop. \ref{p:dfispres}, we have that
% $\encpp{V_0}$ is also in ZDF of degree 3, with respect to  $\{c\}$,  $\{d\}$, and $\{e\}$.
% Also, using compositionality of $\encpp{\cdot}$, we know that 
% $\encpp{V_0} = \encpp{R} \parallel \encpp{S_1} \parallel \encpp{S_2}$. 
% Using Lemma \ref{l:mdf-inv}, we know that $T_0$ is in ZDF with respect to  $\{c\}$,  $\{d\}$, and $\{e\}$ as well. 
% 
% Now we analyze the transition $T_0 \arro{a\tau} T_1$. 
% Notice that by conditions on names, public synchronization on $a$ 
% necessarily has to take place between the derivatives of $\encpp{S_1}$ and $\encpp{R}$ after zero or more 
% internal actions. The derivative of $\encpp{S_2}$ does not perform visible behavior.
% Exploiting Lemma \ref{l:mdf-pub} and the fact that 
% (the derivative of)
% $\encpp{S_2}$ does not share private names with 
% (the derivatives of)
% $\encpp{R}$ and $\encpp{S_1}$,
% we know that $T_1$ 
% is a contextual $k$-adic disjoint form (Def. \ref{d:context-df})
% with respect to name $d$, 
% some $k$-tuple of processes $\til N$, $P'_1$ ---the continuation after the output action of the derivative of $\encpp{S_1}$---, 
% and $P'_2$, the derivative of $\encpp{S_2}$ after zero or more internal actions.
% %This is because $d$ is the only private name that 
% %the derivative of $\encpp{S_1}$ can send to the derivative of $\encpp{R}$ through a
% %public synchronization on $a$; n
% %Notice that the derivative of $\encpp{S_2}$ cannot perform any visible action; 
% %thus $T_1$ should also contain the (weak) derivative of $\encpp{S_2}$; 
% %since by conditions on names this derivate does not share names with the other components, 
% Using Lemma \ref{l:mdf-inv} we obtain that $T_2$ is also in contextual $k$-adic disjoint form 
% with respect to name $d$, the tuple $\til N$, 
% the derivative of $P'_1$ after zero or more internal actions, and 
% the derivative of $P'_2$ after zero or more internal actions.
% 
% Notice that in $V_1$ the public synchronization on $b$ becomes enabled; 
% by operational correspondence the same applies to $\encpp{V_1}$. 
% This is reflected in the transition from $T_2 \arro{b\tau} T_3$:
% there is a communication of a $k$-tuple of processes $\til M$ from the derivative
% of $P'_2$ to the derivative of $\encpp{R}$ which, as just described, is part of the 
% contextual $k$-adic disjoint that first arose in $T_1$.
% Notice that the communication of $\til M$ can only extrude name $e$.
% We are thus in a position to use Prop.~\ref{p:kadic} to deduce that 
% $T_3$ is in $2k$-adic disjoint form with respect to $\{d,e\}$, $\til N$ and $\til M$, and 
% the derivative of $P'_1$ in parallel with the continuation of $P'_2$ after the output action on $b$.
% These conditions hold also for $T_4$, using Lemma \ref{l:mdf-inv} yet another time.
% That is, we know that 
% \[
%  T_4 \equiv \nu c,d, e \, (P''_1 \parallel P''_2 \parallel \ct{\til N, \til M})
% \]
% where 
% $P''_1$, $P''_2$, and $C$ correspond to the derivatives 
% of $\encpp{S_1}$, $\encpp{S_2}$, and $\encpp{R}$ (resp.) after the public synchronizations on $a$ and $b$, as well
% after the internal actions associated to the weak transitions. 
% Moreover, $c$ is a private name in $C$.
% Furthermore, we know that 
% \[
%  T_4 \approx \encpp{V_2}
% \]
% which means that $T_4$ should be able to reproduce the behavior stated in Equation \ref{e:ap1}, that is, 
% there should exist derivatives $T_5$, $T_6$, and $T_7$ of $T_4$ such that 
% \[
% T_4 \Ar{\outC{r}} T_5 \Ar{\outC{s}} T_6 \Ar{\outC{t}} T_7
% \]
% and $T_5 \approx \encpp{V_3}$, $T_6 \approx \encpp{V_4}$, $T_7 \approx \nil$ hold.
% 
% Two ways of ending the proof.
% 
% \begin{enumerate}
%  \item \emph{First Way.~} At this point, we can apply the Commuting Lemma (Lemma \ref{l:commute}) on $T_4$.
% Before doing so, it is convenient to recall that such a lemma allows to execute first all the actions (public and internal) 
% that originate in the guarded context of the disjoint form ($C$ in $T_4$), and then executes the remaining actions.
% Therefore, it is worth noticing that starting in $V_2$, each of the outputs on $\outC{r}$, $\outC{s}$, and $\outC{t}$ 
% is enabled by a synchronization on a private name ($c$, $d$, and $e$, respectively).
% This allows to infer that the only output that can originate in the context $C$ of $T_4$ is 
% $\outC{r}$. Hence, the Commuting Lemma ensures the existence of a process $T^*_4$ such that
% the following holds:
% \begin{enumerate}
%  \item We have that $T_4 \Ar{\alpha} T^*_4 \Ar{\beta}$, and the only visible action in $\alpha$ is $\outC{r}$, 
% whereas the only two visible actions in $\beta$ are $\outC{s}$ and $\outC{t}$.
% \item $T^*_4 \equiv \nu d,e \, (P''_1 \parallel \prod^{k_n}_{i=0} N'_i \parallel P''_2 \parallel \prod^{k_m}_{i=0} M'_i \parallel  
% C'[\til N, \til M])$ with $C'[\til N, \til M] \not \arro{~}$. 
% \end{enumerate}
% 
% Notice that in $T^*_4$, $\prod^{k_n}_{i=0} N'_i$ and $\prod^{k_m}_{i=0} M'_i$ (with $k_n, k_m \geq 0$) stand for all those processes in $\til N, \til M$ that are left unguarded by 
% actions in $\alpha$ which consume prefixes in $C$.
% By recalling that $P''_1$ and $\prod^{k_n}_{i=0} N'_i$ share the private name $d$, 
% and that $P''_2$ and $\prod^{k_m}_{i=0} M'_i$ share the private name $e$, we can rewrite $T^*_4$ as 
% \[
%  T^*_4 \equiv \nu d \, (P''_1 \parallel \prod^{k_n}_{i=0} N'_i) \parallel \nu e \, (P''_2 \parallel \prod^{k_m}_{i=0} M'_i)
% \parallel \nu d, e \, (C'[\til N, \til M])
% \]
% recalling that $\nu d, e \, (C'[\til N, \til M]) \not \arro{~}$. 
% This rewriting is useful to see how the first two components are indeed in ZDF of degree 2 with respect to $\{d\}$ and
% $\{e\}$. As a matter of fact, these two components are responsible for performing the two output actions on 
% $\outC{s}$ and $\outC{t}$ that are still necessary to complete the bisimulation game between $T_4$ and $\encpp{V_2}$.
% Again, not only these two actions are enabled by synchronizations on private names, but we 
% know that action $\outC{s}$ enables action $\outC{t}$. Clearly, the former should originate from 
% $\nu d \, (P''_1 \parallel \prod^{k_n}_{i=0} N'_i)$ 
% whereas the latter should originate from $\nu e \, (P''_2 \parallel \prod^{k_m}_{i=0} M'_i)$.
% However, as stated by Fact \ref{f:independence}, 
% an output action
% from $\nu e \, (P''_2 \parallel \prod^{k_m}_{i=0} M'_i)$ ($\outC{t}$ in this case)
% can only be enabled by a public synchronization from
% $\nu d \, (P''_1 \parallel \prod^{k_n}_{i=0} N'_i)$ ---an output action ($\outC{s}$ in this case) 
% is not enough to cause enabling. What this means is that 
% in $T^*_4$ the outputs on $s$ and $t$ are 
% at the same level in the sense they are 
% completely \emph{independent} and as such, the enabling that takes place in $V_3$ cannot be mimicked by $T^*_4$.
% This allows to conclude that the operational correspondence between $V_0$ and $\encpp{V_0}$ does not hold, and as such, 
% we reach a contradiction. 
% 
% \item \emph{Second Way.~}
% We know that $T_4$ is in $2k$-adic disjoint form and, furthermore, that 
% \[
% T_4 \Ar{\outC{r}} T_5 \Ar{\outC{s}} T_6 \Ar{\outC{t}} T_7 \, ,
% \]
% which puts us in the setting of Prop. \ref{p:enabling}.
% Indeed, using such a proposition we know that $T_4$ is not able to reproduce the enabling sequence on 
% $\outC{r}$, $\outC{s}$, and $\outC{t}$ as in $\encpp{V_2}$.
% This is enough to deduce that the operational correspondence 
% between $V_0$ and $\encpp{V_0}$ does not hold, and as such, 
% we reach a contradiction. 
% \end{enumerate}
% 
% \end{proof}
% 

%\newpage


% \section{Discussion: The Notion of Encoding}\label{s:discuss}
% 
% 
% More precisely, our notion of encoding takes a rather strict account of \emph{visible actions}:
% a visible action in the source language is matched by the target language with exactly one action.
% This requirement is complemented by the fact that the LTS of \shocore 
% decrees that 
% internal synchronizations 
% (i.e. synchronizations on restricted names) are the only kind of internal actions.
% Hence, the encoding is not allowed to add more visible actions than those present in the source term.
% The combination of elements from the operational semantics and from the criteria of the encoding
% is aimed at guaranteeing that encoded terms are robust with respect to interferences, 
% a property that 
% the encodings of synchronous and polyadic communication in the first-order case
% enjoy thanks to the establishment of private links.
% Interferences are indeed a crucial issue. In order to see this, take the 
% following (naive) encoding of \hopis{2}{×} into \hopis{1}{×}:
% 
% \begin{eqnarray*}
% \encpp{\outC{a}\langle P_1, P_2 \rangle.S} & = &\outC{a}\langle \encpp{P_1} \rangle. \outC{a}\langle \encpp{P_2} \rangle. \encpp{S}\\ % 
% \encpp{a(x_1,x_2).R} &= &a(x_1).a(x_2).\encpp{R}  
%  \end{eqnarray*}
% 
% where $\encpp{\cdot}$ is an homomorphism for the other constructs in \hopis{2}{×}.
% The encoding sends each parameter of the biadic communication separately, and hence 
% a single biadic synchronization on name $a$ is matched by the above encoding with \emph{two}
% monadic synchronizations on $a$. While this treatment of visible actions could be acceptable in a closed world, 
% it is clearly unsatisfactory under the 
% more reasonable assumption that the encoding executes as
% part of a larger environment with potentially malicious contexts.
% In an encoding such as the given above, a
%  very simple malicious context would be given by a process interfering with an input process on $a$: it could be the case
% that the first process does arrive to its intended recipient but,
% in the absence of name passing and restriction,
%  there is no way of guaranteeing that
% the second process will arrive to the same recipient, let alone the intended one.
% %It is worth noticing how the absence of communication of names makes things harder.
% 
% 
% % %- Compositionality of encodings (we can chain them).
% % 
% % - Public communication may break because of the environment.
% % While interferences are not necessarily harmful, 
% % the encoding of a single public action in the source with more than action in the target
% % might compromise the whole protocol. 
% % It is very difficult to capture the capabilities of an environment.
% % We use two things: (i) synchronizations on public names are distinguished from those on private names,
% % and (ii) a public action is matched with exactly one public action.
% 
% 
% It must be noticed that 
% %Hence, it is worth stressing that 
% our study takes place in a very concrete setting and has a very precise motivation.
% As mentioned before, the encodability result for synchronous communication in Section \ref{s:enc-result} 
% strongly suggests that the gap between encodability and non-encodability
% is very narrow. As a matter of fact, 
% by providing evidence that there exists encodings for the synchronous into asynchronous
% case, such a result is an indicator of the difficulty in
% formalizing the non-encodability result for the case of polyadic communication.
% This narrow gap for encodability calls for the demanding notion of encoding
% we have introduced. 
% 
% 
% 
% 


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\section{Concluding Remarks}\label{s:conc}


\paragraph{Summary.} In first-order process calculi such as the $\pi$-calculus, both
(a)synchronous and polyadic communication are well-understood mechanisms;
they rely on the ability to establish
\emph{private links} for process communication, links that are robust with respect to
external interferences. 
Such an ability is natural
in first-order process calculi, 
as it arises from the interplay of restriction and name passing.
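For intuition, it is worth recalling how this first-order mechanism operates. Up to inessential notational differences, Milner's encoding of polyadic into monadic communication (sketched here for the biadic case, in standard $\pi$-calculus notation) first exchanges a fresh restricted name $w$, and then uses $w$ as a private link over which the parameters travel one by one, free from interference:
\begin{eqnarray*}
\encpp{\outC{a}\langle b_1, b_2 \rangle.P} & = & \nu w \, \outC{a}\langle w \rangle. \outC{w}\langle b_1 \rangle. \outC{w}\langle b_2 \rangle. \encpp{P}\\
\encpp{a(x_1,x_2).Q} & = & a(w).w(x_1).w(x_2).\encpp{Q}
\end{eqnarray*}
It is precisely this interplay of restriction and name passing that is unavailable in the higher-order calculi studied in this chapter.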
In this chapter we have studied synchronous and polyadic communication and their
representability in higher-order process calculi 
\emph{with restriction} but 
\emph{without name-passing}.
Central to our study is the invariance of the set of private names of a process
along certain computations.
We have studied two \emph{families} of higher-order process calculi:
the first one, called \rhocore, extends \hocore with 
restriction and
polyadic communication; 
the second, called \shocore, 
replaces asynchronous communication in \rhocore with synchronous communication. 
Each family defines calculi with different arities in communication, denoted \ahopis{n}{×} and \hopis{n}{×}, respectively.
Our first contribution was an \emph{encodability} result of \hopis{n}{×} into \ahopis{n}{×}.
%By exploiting disjoint choice as available in \hocore, 
Such an encoding 
bears witness to the expressive power of the process-passing communication paradigm
and offers insights into how to represent certain scenarios using process passing only.
With this positive result in hand, we moved on to analyze polyadic communication. 
We showed that in the case of polyadicity the absence of name passing does entail
a loss in expressiveness; this loss is witnessed by the 
non-existence of an encoding of \hopis{n}{×} into \hopis{n-1}{×}.
% Our notion of encoding relies on a distinguishing synchronizations as 
% \emph{public} (i.e. made on a public name and considered visible actions) 
% and \emph{internal} (i.e. made on a restricted name, the only kind of internal actions),
% and requires a visible action in the source language to be matched by exactly one 
% visible action in the target language.
This \emph{non-encodability} result is our second main contribution; it 
determines a
\emph{hierarchy} of higher-order process calculi
based on the arity allowed in process passing communications.
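To see what goes wrong without private links, consider a naive candidate encoding of biadic into monadic process passing, which transmits the two parameters one after the other on the same public name (a sketch; $\encpp{\cdot}$ is assumed to be a homomorphism on the remaining constructs):
\begin{eqnarray*}
\encpp{\outC{a}\langle P_1, P_2 \rangle.S} & = & \outC{a}\langle \encpp{P_1} \rangle. \outC{a}\langle \encpp{P_2} \rangle. \encpp{S}\\
\encpp{a(x_1,x_2).R} & = & a(x_1).a(x_2).\encpp{R}
\end{eqnarray*}
A single biadic synchronization on $a$ is thus matched by \emph{two} monadic synchronizations on the same public name. A surrounding context can interfere between them: even if the first parameter reaches its intended recipient, in the absence of name passing there is no way of guaranteeing that the second one reaches the same recipient.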
Finally, 
we showed 
that unlike process passing, \emph{abstraction passing} provides a way of establishing private links.
As a matter of fact, we gave an encoding of 
\hopis{n}{×} into \hopis{1}{×} extended with 
abstraction passing, and used this result to prove our final contribution:
the non-existence of an encoding of abstraction passing into process passing of any arity.

\paragraph{More on the Notion of Encoding.}
It has become increasingly accepted that a unified, all-embracing notion of encoding
that serves all purposes 
is unlikely to exist, and that the 
exact definition of encoding should depend on the purpose at hand.
Thus, for instance, the criteria adopted in 
encodability results usually differ from those found in separation results.
In this chapter we have adopted a notion of encoding that is arguably more demanding 
than those previously proposed in the literature for separation results.
We argue that such a definition is in line with our overall goal, 
that of assessing the expressiveness of higher-order concurrency with respect to
(a)synchrony and polyadicity and, most importantly, in the absence of name passing.

Interferences are a major concern in our setting, essentially because 
the absence of name passing leaves us without suitable mechanisms for establishing
private links. 
Devising a definition of encoding so as to incorporate a notion of 
potentially malicious context (including techniques for reasoning 
over \emph{every possible} context) 
appears  very challenging.
To this end, we combine suitable elements from the operational
semantics and from the definition of encoding.
It could rightly be argued that not all interferences are necessarily harmful,
and in this sense our approach to interference handling may appear too coarse.
We would like to stress the difficulties inherent in merely \emph{considering}
interferences; attempting to both \emph{consider} and \emph{handle} them in a selective way
seems much more challenging.
Also, even if we do not actually prove that encodings behave 
correctly under every possible context, we think that our approach is an initial effort in that direction.

Notice that we do not claim our notion of encoding should be taken as a reference for
other separation results; it simply intends to capture the ---rather strong--- correctness requirements
(i.e. compositionality and robustness with respect to interferences)
which we consider appropriate and relevant in the restricted setting we are working on.
Similarly, we believe that a strict comparison 
between our notion of encoding and
recent proposals
for ``good'' encodings would not be fair: while it is clear that 
the ``quality'' of an encoding will always be an issue, 
such proposals should be taken primarily as a reference.
%This is particularly true in the case of expressiveness studies aiming at
%clarifying the situation of phenomena in very restricted contexts, such as ours.
%Again, o
Our interest is not in introducing a new notion of encoding 
but in deepening our understanding of the process-passing 
paradigm and its expressive power. Consequently, we feel that our results 
should not be judged solely on the basis of conformance to the requirements of
some ``good'' notion of encoding.
%Having said all the above, it is clear that it would be certainly
%desirable to count with proofs based on a more liberal definition.

\paragraph{Future Work.}
%As for future work, t
There are a number of directions worth investigating.
An immediate issue is to explore whether 
the hierarchy of expressiveness for polyadic communication presented in Section \ref{s:sepresults}
holds for a less constrained definition of encoding.
Here we have focused on deriving the impossibility result based on the 
invariance of private names along certain computations; 
it remains to be explored whether other approaches to the separation result, in particular 
those based on \emph{experiments} and \emph{divergence} 
as in testing semantics \citep{NicolaH84}, could allow for a proof under a less constrained definition of encoding.
We wish to insist that the challenge is to find a notion that enforces the same correctness
guarantees as the ones we have aimed to enforce here. 
Clearly, more relaxed conditions in the definition of encoding
would give more significance to our results. Unfortunately, up to now we have been unable 
to prove the separation results using a less constrained definition.

We have discussed two \emph{dimensions} of expressiveness: a \emph{horizontal} 
dimension 
given 
by the hierarchy based on polyadic communication in Section \ref{s:sepresults}, 
and a \emph{vertical} dimension
that is given by the separation result based on abstraction passing in Section \ref{s:abstraction}.
The horizontal hierarchy has been obtained by identifying a distinguished form
over higher-order processes with process passing only, and by defining a number of
stability conditions over such forms. 
While the horizontal hierarchy has been defined for any arity greater than zero, 
the result in Section \ref{s:abstraction} only provides one ``level'' 
in the vertical hierarchy, i.e. the separation between calculi without abstraction 
passing and calculi with only passing of abstractions of order one.
(Recall that a very similar hierarchy based on abstraction passing has been obtained in \citep{San96int}.)

We believe that an approach based on distinguished forms and stability conditions can 
be given so as to characterize the other levels of the vertical hierarchy. 
As a matter of fact, 
%We plan to extend the approach based on stability conditions so as to formalize 
%arbitrary levels for the vertical hierarchy. That is to say, a hierarchy of 
%higher-order process calculi based on the order of the abstractions that can be passed around.
%
%this suggests that a characterization based on 
%based on the degree of mobility 
we have preliminary results in such an extended approach: 
%We have already explored how to proceed in this case; as a matter of fact,
we already have an alternative 
proof of Proposition \ref{p:sep-abs-pp1}
which relies on an extension of the notion of Disjoint Form (see Definition \ref{d:df}) 
that captures the more complex structure (i.e. an additional level of nesting of processes) 
that processes with abstraction passing may exhibit. 
As in the case of the separation result in Section \ref{s:sepresults},
the alternative proof for Lemma \ref{p:sep-abs-pp1} exploits both the dependencies
induced by nesting of processes in the distinguished form 
and the fact that private names ``remain disjoint'' to a certain extent.
The proof we have at present 
requires 
\emph{three} communication partners that feature
\emph{two} public synchronizations among them 
(one of which communicates an abstraction of order one)
in order to arrive at the distinguished form
for the abstraction passing case.  
This is in contrast to the proof of Lemma \ref{l:ppas-hier} 
which requires only two communication partners and a single public synchronization.
Consequently, the alternative proof involves many more details 
and subtleties 
than that of Lemma \ref{l:biadic}.
Our current intuition is that in order to prove the separation 
between calculi in higher levels of the vertical hierarchy 
we will require a \emph{varying} number 
of communication partners (and hence, of public synchronizations);
the exact number should be proportional to the order of the abstractions in the calculi involved.
Hence, the complexity of the separation is expected to increase as we ``move up'' in the hierarchy.

