\chapter{Preliminaries}
\label{chap:prelim}
\minitoc

This chapter provides the theoretical background for the dissertation.
It consists of three sections. % upon which the other two rely.
In Section \ref{s:pre-basic} we introduce 
the basic terminology and concepts used in the dissertation.
In order to do so, we present a description of CCS
%the calculus of communicating systems by 
\citep{Milner89}
and of 
%With the purpose of presenting some additional concepts, 
%that section also introduces, although in a more succinct way, 
the $\pi$-calculus \citep{MilnerPW92a}.
In Section \ref{s:pre-ho} we introduce \emph{higher-order process calculi}: 
we review their origins and behavioral theory.
The higher-order $\pi$-calculus, as well as Sangiorgi's representability result,
are detailed there.
%, as well as the main reasoning techniques proposed for them.
Section \ref{s:pre-expr} introduces the main issues in the analysis of the expressiveness of concurrent languages.
We give an overview of the most common kinds of expressiveness studies and of the techniques used to carry them out.
Furthermore, previous efforts on studying the expressiveness of higher-order languages
are reviewed. 


\section{Technical Background}\label{s:pre-basic}

\subsection{Bisimilarity}\label{ss:beheq}

Broadly speaking, \emph{behavioral equivalences} allow one to determine when 
the \emph{behavior} of two concurrent systems can be considered \emph{equal}.
There are many plausible motivations for aiming at definitions of behavioral equivalences.
For instance, one would like the behavior of the implementation of a system to be 
behaviorally equivalent to that of its specification; similarly, in a component-based system
it is generally desirable to replace a component with a new one that features \emph{at least} 
the same possibilities for behavior.
Accordingly, many definitions of behavioral equivalences for concurrent systems have been proposed;
notable notions include \emph{trace equivalence} ---which equates two processes if they can perform the 
same finite sequences of transitions--- and the \emph{testing framework} \citep{NicolaH84}, 
in which the behavior of two processes is deemed equal if they \emph{pass the same tests} provided by an 
external \emph{observer}. 
In this context, \emph{bisimilarity}
is widely accepted as the finest behavioral equivalence one would like to impose on processes.
Following \citep{SanBook09}, we now define bisimilarity and state a few of its fundamental properties.
%Then, we comment on the properties of bisimilarity in CCS.
%First bisimilarity. 

A fundamental notion %for giving meaning to process languages 
is that of 
\emph{Labeled Transition System} (LTS in the sequel).
\begin{mydefi}\label{d:lts}
A \emph{Labelled Transition System} (LTS) is a triple
$ (S, T, \{\arro{~t~}: t \in T\} )$
where $S$ is a set of \emph{states}, $T$ is a set of \emph{(transition) labels}, 
and  $\arro{~t~} \subseteq S \times S$ for each $t \in T$ is the \emph{transition relation}.
\end{mydefi}
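Although the development in this dissertation is purely mathematical, a concrete representation may help fix intuitions. The following Python sketch (all names are ours, and the state space is assumed finite) stores the transition relations $\arro{~t~}$ as a map from pairs of states and labels to sets of successor states:

```python
# A finite LTS represented concretely: the transition relation is a
# map from (state, label) pairs to the set of successor states.
from collections import defaultdict

class LTS:
    def __init__(self, transitions):
        # transitions: iterable of (source, label, target) triples
        self.states = set()
        self.labels = set()
        self.succ = defaultdict(set)
        for s, t, s2 in transitions:
            self.states.update({s, s2})
            self.labels.add(t)
            self.succ[(s, t)].add(s2)

    def post(self, state, label):
        """States reachable from `state` by one `label`-transition."""
        return self.succ[(state, label)]

# P --a--> Q --b--> R
lts = LTS([("P", "a", "Q"), ("Q", "b", "R")])
```
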


It is customary to write $P \arro{~\alpha~} Q$ 
to denote the fact that $(P,Q) \in \arro{~\alpha~}$.
In the context of concurrency theory, it is natural to identify states with \emph{processes}, 
and labels with the \emph{actions} processes can perform.
This way, $P \arro{~\alpha~} Q$ 
is indeed a \emph{transition} which represents that process $P$ can perform $\alpha$ and evolve into $Q$.
The transition relation 
for a process language
is generally defined by means of a set of \emph{transition rules} which realize the intended behavior of
each construct of the language. 
%We now introduce \emph{bisimilarity}.
In what follows, we say that a \emph{process relation} is a binary relation on the states of an LTS.

\begin{mydefi}[Bisimilarity]
A process relation $\mathcal{R}$ is a \emph{bisimulation} if, whenever $P \mathcal{R} Q$,
for all $\alpha$ we have that:
\begin{enumerate}
 \item for all $P'$ with $P \arro{~\alpha~} P'$, there is $Q'$ such that $Q \arro{~\alpha~} Q'$ and $P' \mathcal{R} Q'$;
\item the converse, on the transitions emanating from $Q$:
for all $Q'$ with $Q \arro{~\alpha~} Q'$, there is $P'$ such that $P \arro{~\alpha~} P'$ and $P' \mathcal{R} Q'$.
\end{enumerate}
\emph{Bisimilarity}, written $\sim$, is the union of all bisimulations;
thus $P \sim Q$ if there is a bisimulation $\mathcal{R}$ with $P \mathcal{R} Q$.
\end{mydefi}

Given this definition, the \emph{bisimulation proof method}  naturally follows: 
to determine that two processes $P$ and $Q$ are bisimilar, it is sufficient to exhibit
a bisimulation relation containing $(P,Q)$.
It is useful to state a few fundamental properties of bisimilarity.

\begin{mytheo}[Basic Properties of Bisimilarity]\label{th:bisim}
Given $\sim$, it holds that:
\begin{enumerate}
 \item $\sim$ is an \emph{equivalence relation}, i.e., it is reflexive, symmetric, and transitive.
 \item $\sim$ is itself a \emph{bisimulation}.
\end{enumerate}
\end{mytheo}

Item (2) is insightful in that it allows one to grasp the \emph{circular} flavor of bisimilarity:
bisimilarity itself is a bisimulation, and is part of the union on which it is defined. Hence, 
the following theorem holds.

\begin{mytheo}
 Bisimilarity is the largest bisimulation.
\end{mytheo}
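On a finite LTS, the largest bisimulation can be computed by greatest-fixpoint iteration: start from the full relation on states and repeatedly discard the pairs that violate the two clauses of the definition. The following sketch (illustrative only; process names are ours) also exercises the classic distinguishing example $a.b.\nil + a.c.\nil$ versus $a.(b.\nil + c.\nil)$:

```python
# Computing (strong) bisimilarity on a finite LTS by greatest-fixpoint
# iteration: start from the full relation on states and repeatedly
# discard pairs violating the two bisimulation clauses, until stable.
def bisimilarity(states, labels, post):
    rel = {(p, q) for p in states for q in states}

    def matches(p, q, rel):
        for a in labels:
            # every a-derivative of p must be matched by one of q ...
            for p2 in post(p, a):
                if not any((p2, q2) in rel for q2 in post(q, a)):
                    return False
            # ... and, conversely, every a-derivative of q by one of p
            for q2 in post(q, a):
                if not any((p2, q2) in rel for p2 in post(p, a)):
                    return False
        return True

    while True:
        refined = {(p, q) for (p, q) in rel if matches(p, q, rel)}
        if refined == rel:
            return rel
        rel = refined

# The classic example: a.b.0 + a.c.0 (state P)  vs  a.(b.0 + c.0) (state Q)
succ = {
    ("P", "a"): {"P1", "P2"}, ("P1", "b"): {"0"}, ("P2", "c"): {"0"},
    ("Q", "a"): {"Q1"},       ("Q1", "b"): {"0"}, ("Q1", "c"): {"0"},
}
post = lambda s, a: succ.get((s, a), set())
rel = bisimilarity({"P", "P1", "P2", "Q", "Q1", "0"}, {"a", "b", "c"}, post)
```

Here the pair $(P,Q)$ is discarded: the $a$-derivative $P_1$ offers only $b$, whereas $Q_1$ offers both $b$ and $c$.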



\subsection{A Calculus of Communicating Systems}
%A historical account of process algebra can be found in \cite{Baeten05}. 
We introduce a number of relevant concepts of CCS, following the 
presentation in \cite{Milner89}.

CCS departs from theories of sequential computation by focusing on the notion of \emph{interaction}: 
a concurrent system \emph{interacts} with its environment, which accesses the behavior of the system through \emph{observations}.
In CCS ---like in other process calculi such as ACP and CSP---
the overall behavior of a system is entirely determined by the \emph{atomic actions} it performs.
The distinguishing principle in CCS is that the notion of interaction is equated to that of observation:
not only are actions \emph{observable}, but we observe an action produced by the system by \emph{interacting} 
with it, that is, by performing its complementary action, or \emph{coaction}.
We then say that the two participants, system and observer, 
have \emph{synchronized} in the action
by means of this mutual observation.

\paragraph{Syntax.}
We shall assume a set of \emph{names} $\mathcal{N} = \{a, b, c, \ldots\}$, as well as a disjoint set
of \emph{co-names} defined as $\mathcal{\overline{N}} = \{\overline{a} \mid a \in \mathcal{N}\}$.
There is a set of \emph{labels} defined as $\mathcal{L} = \mathcal{N} \cup \mathcal{\overline{N}}$; 
we let $l, l', \ldots$ range over $\mathcal{L}$. 
Labels give an account of the observable behavior of the system.
We shall use $K, L$ for subsets of $\mathcal{L}$; $\overline{L}$ stands for the set of complements of the labels in $L$.
We consider the distinguished symbol $\tau$ 
representing the \emph{internal} or \emph{silent} action that results from synchronizations. 
We then define  $\mathcal{A} = \mathcal{L} \cup \{\tau\}$
to be the set of \emph{actions}; $\alpha, \beta$ range over $\mathcal{A}$.
In the spirit of the above discussion, actions $a$ and $\overline{a}$ are thought of as 
complementary; this way, $\overline{\overline{a}} = a$ and $\overline{\tau} = \tau$. 
%We shall use a set $\mathcal{X}$ of \emph{agent variables}, and a set $\mathcal{K}$ of \emph{agent constants};
%we use $X,Y, \ldots$ to range over $\mathcal{X}$, and $A, B, \ldots$ to range over $\mathcal{K}$.
The set of CCS processes expressing finite behavior is given as follows:

\begin{mydefi}\label{d:ccs}
The set of finite CCS processes %is the smallest set including $\mathcal{X}$ and $\mathcal{K}$, and 
is 
given by the following syntax:
\[
 P, Q, \ldots ::= \sum_{i \in I} \alpha_i.P_i \midd P \backslash a \midd P_1 \parallel P_2  %\midd A(y_1, \ldots, y_n)
\]
where $I$ is an indexing set.
\end{mydefi}

The \emph{summation} $\sum_{i \in I} \alpha_i.P_i$ represents the process that is able to 
perform one and only one of its actions $\alpha_i$, and then behaves as its associated $P_i$.
It is customary to write $\nil$ 
---nil, the process that does nothing--- in case $|I| = 0$, 
$\alpha.P$ if $|I| = 1$, and ``$+$'' for binary sum. 
The restriction $P \backslash a$ behaves exactly as $P$, except that it can offer neither $a$ nor $\overline{a}$
to its surrounding environment. 
%(The singleton restriction $\backslash \{c\}$ is abbreviated as $\backslash c$.)
Both $a$ and $\overline{a}$ are then said to be \emph{bound} in $P$. 
We write $\fn{P}$ for the set of \emph{free names} of $P$, i.e., those that are not bound; 
the \emph{bound names} of $P$, $\bn{P}$, are those with a bound occurrence in $P$.
The \emph{parallel composition} $P \parallel Q$ allows $P$ and $Q$ to run concurrently:
either $P$ or $Q$ may perform an action, or they can synchronize by performing complementary actions. 

%The notion of \emph{alpha-conversion} ---the change of a bound name with a fresh name--- 
%is retained in the standard sense. Alpha-conversion might be necessary in case of name substitutions. 
%For instance, if $P = (\nu b)\, a.b$, then $P \sub b a = (\nu b')\, b.b'$, where the bound occurrence of $b$
%has been alpha-coverted to $b'$. 

% Up to now we have described processes with finite behavior. 
% % Finally, $A(y_1, \ldots, y_n)$ represents a \emph{(process) identifier}
% % of arity $n$; this a way of specifying processes with \emph{infinite behavior}.
% % We assume each identifier to have a unique, possibly recursive, \emph{definition}
% % $A(x_1, \ldots, x_n) = P_A$, with 
% % $\fn{P_A} \subseteq \{x_1, \ldots, x_n\}$. The intuition is that $A(y_1, \ldots, y_n)$ behaves as its \emph{body}
% % $P_A$ with each $y_i$ replacing the \emph{formal parameter} $x_i$. 
% A \emph{constant} $A$ is a process whose meaning is given by a defining equation of the form $A \eqdef P$. 
% Constants can be defined in terms of other constants; this allows to define processes with infinite behavior. 
% Alternative forms of infinite behavior include \emph{recursive definitions} and \emph{replication}.


\paragraph{Semantics and Infinite Behavior.}
The operational semantics of CCS is given by an 
LTS in which the set of processes is 
the set of states, and the set of labels is taken to be $\mathcal{A}$, the set of actions in CCS.
The transition relation is given by the set of transition rules in Figure \ref{f:ccs-ops}.
%It is easy to see how such rules realize the intuitive behavior for each of the constructs of the language.



\begin{figure}[t]
$$
\textsc{Sum} ~~\sum_{i \in I} \alpha_i.P_i \arro{\alpha_j} P_j~~ \mbox{if $j \in I$}
\qquad
\textsc{Res} ~~ \frac{P \arro{\alpha} P'}{P\backslash a \arro{\alpha} P'\backslash a}~~ \mbox{if $a \notin \{\alpha, \overline{\alpha}\}$}
$$
$$
\rightinfer	[\textsc{Par1}]
			{P \parallel Q \arr\alpha P' \parallel Q}
			{P \arr\alpha P' }
\qquad
\rightinfer	[\textsc{Tau}]
			{P \parallel Q \arr\tau   P' \parallel Q'}
			{P \arr{l} P' \andalso Q \arr{\overline{l}} Q'}
$$
\caption[An LTS for CCS]{An LTS for CCS\label{f:ccs-ops}. Rule \textsc{Par2}, the symmetric of \textsc{Par1}, is omitted.}
\end{figure}
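The rules in the figure can be read as a recursive definition of the one-step transitions of a finite term. As an illustration only (the encoding of terms and all names below are ours), the following Python sketch computes them:

```python
# One-step transitions of finite CCS terms, following the transition
# rules Sum, Res, Par1/Par2 and Tau given above.  Terms are nested tuples:
#   ("sum", ((action, continuation), ...))   guarded summation
#   ("par", P, Q)                            parallel composition
#   ("res", a, P)                            restriction  P \ a
# Actions are strings; "~a" denotes the co-action of "a", "tau" is silent.
def co(l):
    return l[1:] if l.startswith("~") else "~" + l

def trans(P):
    """The set of transitions of P, as (action, derivative) pairs."""
    out = set()
    if P[0] == "sum":                              # rule Sum
        for a, cont in P[1]:
            out.add((a, cont))
    elif P[0] == "res":                            # rule Res
        a, body = P[1], P[2]
        for l, b2 in trans(body):
            if l not in (a, co(a)):
                out.add((l, ("res", a, b2)))
    elif P[0] == "par":                            # rules Par1, Par2, Tau
        L, R = P[1], P[2]
        tl, tr = trans(L), trans(R)
        for l, L2 in tl:
            out.add((l, ("par", L2, R)))           # Par1
        for l, R2 in tr:
            out.add((l, ("par", L, R2)))           # Par2
        for l, L2 in tl:
            if l != "tau":
                for m, R2 in tr:
                    if m == co(l):                 # synchronization
                        out.add(("tau", ("par", L2, R2)))
    return out

NIL = ("sum", ())
SYNC = ("par", ("sum", (("a", NIL),)), ("sum", (("~a", NIL),)))
P = ("res", "a", SYNC)   # (a.0 | ~a.0) \ a : only the synchronization remains
```
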

Let us move now to the different ways of expressing \emph{infinite behavior}.
We consider \emph{recursion} and \emph{replication}.
%Let us consider recursion first.
In order to represent \emph{recursion} a denumerable set of \emph{constants}, ranged
over by $D$, is assumed. 
It is also assumed that each constant $D$ has an associated (possibly recursive)
defining equation of the form $D \eqdef P$.
The extension of (finite) CCS with recursion is then
obtained by adding the production $P ::= D$ to the grammar in Definition \ref{d:ccs}, and
by extending the operational semantics in Figure \ref{f:ccs-ops} with the following transition rule
\[
\textsc{Cons} ~~ \frac{P \arro{\alpha} P' \andalso D \eqdef P}{D \arro{\alpha} P'} \, .
\]

As \cite{Busi09} remark, recursive behavior defined by means of constants can be intuitively assimilated to 
infinite behavior ``in depth'', in that process copies can be nested at an arbitrary depth
by using constant application.
This is in sharp contrast to the kind of infinite behavior provided by 
\emph{replication}: by means of 
 the replication operator $!P$ it is possible to obtain an unbounded number of copies of $P$;
such copies, however, are all at the same level, thus defining infinite behavior ``in width''.
The extension of (finite) CCS with replication is obtained by adding the production
$P ::= !P$ to the grammar in Definition \ref{d:ccs}, and by 
extending 
the operational semantics in Figure \ref{f:ccs-ops} with the following transition rule
\[
 \textsc{Repl} ~~ \frac{P \parallel !P \arro{\alpha} P'}{!P \arro{\alpha} P'} \, .
\]

A word on \emph{proof techniques} is most convenient at this point.
Defining the semantics in terms of an LTS provides us automatically with two basic 
proof techniques, both of which are forms of \emph{induction}: one on 
the structure of process terms (\emph{structural induction}), and 
one on the transition rules (\emph{transition induction}).
The finitary character of inductive proof techniques is in contrast with the
infinite behavior that concurrent systems generally exhibit.
%This is in contrast to the  one 
%needs to use 
% As we shall see, this is particularly relevant 
As a result, when addressing the issue of \emph{equality} of concurrent systems, 
one needs to appeal to \emph{coinductive} proof techniques.
% that is to say, \emph{when two systems should be deemed as equal?}
% Not surprisingly, such an issue relies on a different kind of proof techniques; 
% in fact, such techniques rely on \emph{coinduction}, on which we elaborate in
Bisimilarity, as introduced in 
Section \ref{ss:beheq}, 
is probably the most representative coinductive proof technique.




% \paragraph{Value-passing CCS}
% \cite{Milner89} introduces
% an extension of CCS which allows value-passing in communications, and 
% is particularly convenient for modeling purposes.
% It relies heavily on \emph{infinite summations}, i.e., 
% summations %(as in Definition \ref{d:ccs}) 
% indexed over an infinite set.
% The value-passing calculus is defined in terms of a translation 
% into the basic calculus as in Definition \ref{d:ccs}.
% Roughly speaking, a process in the value-passing calculus is represented by
% a \emph{family} of basic processes, which is indexed by a set of values $V$.
% 
% More formally, in the value-passing calculus one has \emph{prefixes}
% $a(x).P$ (input) and $\overline{a}(e).P$ (output), where $x$ is a variable and $e$ is a value expression.
% Let us denote with $\encpp{\cdot}$ the translation of value-passing processes into basic ones;
% in the case of input and output, it is defined as follows:
% \[
% \encpp{a(x).P} = \sum_{v \in V} a_v.\encpp{P\sub v x} \qquad \encpp{\overline{a}(e).P} = \overline{a_e}.\encpp{P} \, .
% \]


\subsection{More on Behavioral Equivalences}
Having introduced the notion of bisimilarity, and some basic notions of CCS, we 
find it useful to informally present some additional concepts on 
behavioral equivalences.
The discussion here is intended to introduce useful terminology; 
technical accounts of the concepts mentioned here can be found elsewhere (see, e.g., \citep{SanBook09,Milner89}).

It is desirable to require bisimilarity to be preserved by all process contexts.
This allows one to replace, in any process expression, a subterm with a bisimilar one.
An equivalence relation with this property is said to be a \emph{congruence}.
Proofs of congruence combine inductive and coinductive arguments:
the former are necessary because the syntax of processes is
defined inductively, whereas the latter are required because bisimilarity is
defined coinductively. 
In the case of CCS we have the following.

\begin{mytheo}
 In CCS, $\sim$ is a congruence relation.
\end{mytheo}

When %In the cases in which 
bisimilarity is decidable, it may be possible to give 
an algebraic characterization of it, or \emph{axiomatization}.
The axiomatization of an equivalence on a set of terms consists essentially 
of some equational axioms that suffice for proving all and only the equations among the terms
that are valid for the given equivalence.
These axioms are used together with rules of equational reasoning, 
which include reflexivity, symmetry, transitivity, and congruence rules that
allow one to replace any subterm of a process with an equivalent one. 
A bit more formally, given a set of axioms $\mathcal{S}$,
it is usual to write $\mathcal{S} \vdash P = Q$ if one can derive
$P = Q$ using the axioms in $\mathcal{S}$ and the laws of equational
reasoning.
The objective is then to show
that the axiomatization is 
a full characterization of bisimilarity, i.e., that it is 
both sound and complete with respect to bisimilarity:
\begin{equation}\label{eq:axiom}
P \sim Q \mbox{~if and only if~} \mathcal{S} \vdash P = Q \,.  
\end{equation}

While establishing soundness (i.e., the backward direction in (\ref{eq:axiom})) is in general easy,
establishing completeness (i.e., the forward direction in (\ref{eq:axiom}))
often involves defining some standard syntactic form for processes and requires more effort.
This is the case of, e.g., finite-state CCS processes as studied by \cite{Milner89}.
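As a concrete illustration, consider the sublanguage of finite summations. Following \cite{Milner89}, the following four axioms are sound for $\sim$ and, together with the laws of equational reasoning, complete for this sublanguage:
\[
P + Q = Q + P \qquad (P + Q) + R = P + (Q + R) \qquad P + P = P \qquad P + \nil = P \,.
\]
Completeness for processes that also feature parallel composition additionally requires the \emph{expansion law}, which rewrites a parallel composition into a summation of its possible transitions.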

We have seen that CCS considers the special action $\tau$ as a form of internal activity.
Often it is useful to describe concurrent behavior by abstracting from 
such internal actions. This gives rise to \emph{weak} transition relations, denoted
$\Ar{~}$ and $\Ar{~\alpha~}$. While $P \Ar{~} Q$ is used to mean
that $P$ can evolve to $Q$ by performing any number of internal actions (even zero),
$P \Ar{~\alpha~} Q$ means that $P$ can evolve to $Q$ as a result of an evolution that
includes an action $\alpha$, but may involve any number of internal actions before
and after $\alpha$. As such, $\Ar{~\tau~}$ is different from $\Ar{~}$ 
as the former guarantees that \emph{at least} one internal action has been performed.
More formally, we have the following.

\begin{mydefi}[Weak transitions]
 \begin{itemize}
  \item Relation $\Ar{~}$ is the reflexive and transitive closure of $\arro{~\tau~}$.
That is, $P \Ar{~} P'$ holds if there is $n \geq 0$ and processes 
$P_1, \ldots, P_n$ with $P_n = P'$ such that $P \arro{~\tau~} P_1 \cdots \arro{~\tau~} P_n$.
(Notice that $P \Ar{~} P$ holds for all processes.)

\item For all $\alpha \in T$, relation $\Ar{~\alpha~}$ is the composition of the relations
$\Ar{~}$,  $\arro{~\alpha~}$, and $\Ar{~}$. That is, 
$P \Ar{~\alpha~} P'$ holds if there are $P_1$, $P_2$ such that
$P \Ar{~} P_1  \arro{~\alpha~} P_2 \Ar{~} P'$.
 \end{itemize}
\end{mydefi}
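For a finite LTS, the weak transition relations can be computed by saturation, closing under $\arro{~\tau~}$ before and after a single strong step. The following sketch (illustrative only; names are ours) follows the two clauses of the definition:

```python
# Weak transitions on a finite LTS, computed by saturation:
# ==>        is the reflexive-transitive closure of --tau--> ;
# ==alpha==> composes  ==> , --alpha--> , and  ==> .
def weak_post(post, s, alpha=None):
    """States reachable from s by ==> (alpha=None) or by ==alpha==>."""
    def tau_reach(start):
        # closure under --tau--> ; reflexive by construction
        seen, stack = {start}, [start]
        while stack:
            p = stack.pop()
            for q in post(p, "tau"):
                if q not in seen:
                    seen.add(q)
                    stack.append(q)
        return seen

    before = tau_reach(s)
    if alpha is None:                 # the relation ==>
        return before
    result = set()
    for p in before:
        for q in post(p, alpha):      # one alpha-step (alpha may be "tau")
            result |= tau_reach(q)
    return result

# s --tau--> s1 --a--> s2 --tau--> s3
succ = {("s", "tau"): {"s1"}, ("s1", "a"): {"s2"}, ("s2", "tau"): {"s3"}}
post = lambda p, l: succ.get((p, l), set())
```

Note that `weak_post(post, s, "tau")` implements $\Ar{~\tau~}$, which requires at least one internal step and hence differs from `weak_post(post, s)`, as remarked above.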

With the aid of weak transitions, it is possible to define \emph{weak bisimulation}
and \emph{weak bisimilarity}, as in the following definition. 

\begin{mydefi}%[Weak bisimilarity]
 A process relation $\mathcal{R}$ is a \emph{weak bisimulation} if, 
whenever $P \mathcal{R} Q$, for each visible action $\alpha$ we have:
\begin{enumerate}
 \item for all $P'$ with $P \Ar{~\alpha~} P'$ there is a $Q'$ such that $Q \Ar{~\alpha~} Q'$ and $P' \mathcal{R} Q'$;
 \item for all $P'$ with $P \Ar{~\tau~} P'$ there is a $Q'$ such that $Q \Ar{~~} Q'$ and $P' \mathcal{R} Q'$;
 \item the converse of (1) and (2), on the actions from $Q$.
\end{enumerate}
$P$ and $Q$ are \emph{weakly bisimilar}, written $P \approx Q$, if $P \mathcal{R} Q$ for some weak bisimulation $\mathcal{R}$.
\end{mydefi}

% 
% The above definition features weak transitions on the challenger side.
% Hence, the work required in proofs is signiu

We now discuss the ideas behind \emph{barbed bisimilarity} \citep{MiSa92}.
A transition $P \arro{~\alpha~} P'$ of an LTS intuitively describes a pure synchronization between
$P$ and its external environment along a port $a$ mentioned in $\alpha$.
This is but one particular form of concurrent interaction; a natural question that arises
is how to adapt the idea of bisimulation to other kinds of interaction. 
The idea is to set up a bisimulation in which the observer has a \emph{minimal}
ability to observe actions and/or process states.
This yields a bisimilarity, namely indistinguishability under such observations, 
which in turn yields a congruence over terms, namely bisimilarity in all contexts.
The bisimilarity is called \emph{barbed bisimilarity}; the congruence is called
\emph{barbed congruence}. 

The main assumption in the barbed setting is the existence of a \emph{reduction relation}
in the language. Such a relation is intended to express an evolution step of a term 
in which no intervention from the environment is required.
In CCS, such a relation is $\arro{~\tau~}$. The reduction relation represents the
most fundamental notion in the operational semantics of a language.
The \emph{reduction semantics} of a language is then an approach to operational semantics 
in which the meaning is only attached to reductions; it explains how a system can evolve independently
of its environment. This approach is then in clear contrast to that underlying 
a labeled transition system.

In barbed bisimilarity the clauses involve challenges only on reductions.
In addition, equal processes should exhibit the same \emph{barbs}---i.e., predicates
representing basic observables of the states. Barbs are of the essence to obtain
an adequate discriminating power. Barbed congruence is a contextual equivalence:
it is the closure of barbed bisimilarity over contexts. 
The definition of barbs we shall be interested in is as follows.

\begin{mydefi}
 Given a visible action $\alpha$, the \emph{observability predicate}
$\dwa_{\alpha}$ holds for a process $P$ if, for some $P'$, $P \arro{~\alpha~} P'$. 
\end{mydefi}

We now define strong barbed bisimulation.

\begin{mydefi}[Barbed bisimilarity]\label{d:sbb}
A symmetric process relation $\mathcal{R}$ is said to be a \emph{barbed bisimulation} 
if, whenever $P \mathcal{R} Q$:
\begin{enumerate}
 \item whenever $P \arro{} P'$ there is $Q'$ such that $Q \arro{} Q'$ and $P' \mathcal{R} Q'$;
 \item for each visible action $\alpha$, if $P \dwa_{\alpha}$ then $Q \dwa_{\alpha}$.
\end{enumerate}
\emph{Barbed bisimilarity}, written $\strongbarbedbis$, is the union of all barbed bisimulations.
\end{mydefi}
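To illustrate the discriminating power of barbs, consider the CCS processes 
$P = (a.\nil \parallel \overline{a}.\nil)\backslash a$ and $Q = \tau.\nil$. 
Neither process exhibits any barb, and each has exactly one reduction, to a state with no barbs and no reductions; hence $P \strongbarbedbis Q$. 
In contrast, $a.\nil$ and $b.\nil$ (with $a \neq b$) have no reductions but exhibit different barbs ($\dwa_{a}$ versus $\dwa_{b}$), and are therefore distinguished.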

The weak version of Definition \ref{d:sbb} is obtained in the standard way.
Let $\Ar{}$ be the reflexive and transitive closure of $\arro{}$, and let
$\Dwa_{\alpha}$ be defined as $\Ar{} \dwa_{\alpha}$. Then, \emph{weak barbed bisimilarity},
written $\barbedbis$,
is defined by replacing the reduction $Q \arro{} Q'$ with $Q \Ar{} Q'$ and 
the predicate $Q \dwa_{\alpha}$ with $Q \Dwa_{\alpha}$. 
As mentioned before, by quantifying over contexts,
we obtain \emph{barbed congruence}:

\begin{mydefi}\label{d:sbc}
 Two processes $P$ and $Q$ are said to be
\emph{strongly barbed congruent}, written
$P \sbc Q$, if for every context $C\holE$, it holds that $C[P] \strongbarbedbis C[Q]$.
\end{mydefi}

We obtain  \emph{weak barbed congruence}, written $\wbc$, by replacing $\strongbarbedbis$ with $\barbedbis$
in the definition above.

A main drawback of the notion of barbed congruence is the universal quantification 
over contexts, which can make it impractical to use in proofs. 
The challenge is then to find tractable characterizations of barbed congruence. 
A well-established approach here is to use (labeled) bisimilarities:
the objective is to find a bisimilarity that is both \emph{sound} and \emph{complete} with respect to 
barbed congruence. That is, a notion of bisimilarity that 
is both \emph{contained in} and \emph{contains} barbed congruence. 
While for the case of CCS and the $\pi$-calculus effective characterizations of
barbed congruence have been thoroughly studied (see, e.g., \citep{SaWabook}), 
we shall see that 
this is not quite the case
for higher-order process calculi, in which the situation is much less clear.



\subsection{A Calculus of Mobile Processes}\label{ss:pi}
We introduce the (polyadic) $\pi$-calculus following the presentation given in \citep{San923,San93cc}; 
this will make the introduction of the higher-order $\pi$-calculus easier.
The reader is referred to \citep{MilnerPW92a,SaWabook} for complete references on the $\pi$-calculus.


The $\pi$-calculus extends CCS with the capability of sending (first-order) values along communication
channels. Its significance derives from the fact that such values include the set of communication
channels; new communication channels can be created dynamically, and shared among processes, possibly in a restricted 
way. This is most useful to represent dynamic communication topologies.

\paragraph{Syntax.}
We use $a, b, c, \ldots, x, y, z, \ldots$ to range over \emph{names} (or \emph{channels}) 
and $P, Q, R, T, \ldots$ to range over processes. 
We use a tilde to represent \emph{tuples} of elements; this way, given a name $y$, $\til y$ stands for a 
tuple of names. 
The set of $\pi$-calculus processes is given by the following definition. 

\begin{mydefi}%[$\pi$-calculus processes]
The set of $\pi$-calculus  processes is given by the following syntax:
\[
 P, Q, \ldots ::= \sum_{i \in I} \alpha_i.P_i \midd P_1 \parallel P_2 \midd (\nu x)\, P \midd [x=y]P \midd D\langle \til x \rangle
\]
where $I$ is any finite indexing set. The set of \emph{prefixes} is given by 
\[
 \alpha ::= x(\til y) \midd \outC{x}\langle \til y \rangle \,.
\]
\end{mydefi}

As in CCS, %in this presentation 
we 
%consider constant applications $D(x_1, \ldots, x_n)$ for expressing processes with infinite behavior: 
assume that each constant $D$ has a defining equation 
of the form $D \eqdef (\til x)P$,
where the parameters $\til x$ collect all names which may occur free in $P$.
%Indeed, in the expression $(\til x)P$ there might be calls to other constants, including $D$ itself. 
Some constraints on tuples in input and output prefixes are in order.
In an input prefix $x(\til y)$, tuple $\til y$ is required to be made of pairwise distinct elements. 
We omit brackets $(\,)$ and $\langle \, \rangle$ when the tuple is empty. 
Also, 
tuple $\til y$ is required to be finite in both input and output prefixes. 
%in both $x(\til y)$ and $\outC{x}\langle \til y \rangle$, 
This is not the case for the tuple $\til x$ in constant definitions and applications,
which may thus be infinite. 

The intuitive semantics of processes is as expected.
An input-prefixed process $x(\til y).P$ waits for a tuple $\til z$ to be transmitted along name $x$; 
once this occurs, the process $P$ in which $\til y$ has been instantiated by $\til z$ executes. 
An output-prefixed process $\outC{x}\langle \til y \rangle.P$ sends tuple $\til y$ along $x$ and then 
behaves like $P$. 
The \emph{matching} operator $[x = y]P$ is used to test for equality of the names $x$ and $y$.
The intuition behind the restriction operator 
is somewhat similar to that in CCS: $(\nu x)\, P$ makes name $x$ local to $P$; thus $x$ becomes a new, unique name, 
distinct from all those external to $P$. 
We often write $(\nu \til x)\,P$ to stand for the process $(\nu x_1)(\nu x_2)\ldots(\nu x_n)\, P$. 
The semantics and notation for (guarded) summation follow those in CCS. 
In particular, we shall use $+$ to represent binary sum. 

We have already commented on the use of constants to represent infinite behavior.
Notice that it is possible to encode replication using constants. 
It is worth noticing that, given $D \eqdef (\til x)P$, in an \emph{application}
$D\langle \til y \rangle$ the tuple $\til y$ must be of the same length as $\til x$.
Such potential disagreements on the arities of tuples, 
as well as some other aspects of the name-passing discipline,
are enforced by the use of appropriate 
\emph{type systems}
on names.\footnote{In early proposals of the $\pi$-calculus (see, e.g., \citep{Milner93}) discipline on names was enforced by the notion of \emph{sorting}. The presentation of the first- and higher-order $\pi$-calculus in \citep{San923} relies on sorts. 
In \citep{PiSa96b} the notion of sort was refined into the notion of typing for processes.}
For the sake of conciseness,
we do not elaborate on the definitions 
and properties of sorts.
Hence, throughout the chapter we always assume well-sorted processes; 
we use notation $x : y$ to mean that names $x$ and $y$ have the same sort. 
If $D \eqdef (\til x)P$ and $\til x$ is not empty then $D$ and $(\til x)P$ are called
%sometimes referred to as 
\emph{abstractions}. Abstractions and processes are \emph{agents}.
We use $F, E, \ldots$ and $A$ to range over abstractions and agents, respectively.

Notions of free and bound names are as expected: in $a(\til b).P$, $(\nu \til b)\,P$, and $(\til b)P$
all free occurrences of names $\til b$ in $P$ are \emph{bound}. 
The sets of free and bound names of an agent  $A$ are denoted
$\fn{A}$ and $\bn{A}$, respectively.
Notice that if $A = D\langle \til x \rangle$ then $\fn{A} = \til x$ and $\bn{A} = \emptyset$.
%We also assume definitions of alpha-conversion and substitution 
%to be defined as expected. 
Name substitution is a function from names to names. 
Given a vector of distinct names $\til x$, we write $\sub {\til y} {\til x}$
for the substitution that maps each name $x_i$ in $\til x$ to the corresponding
name $y_i$ in $\til y$, and maps all names not in $\til x$ to themselves. 
We assume standard definitions of substitution
and $\alpha$-conversion on processes, with possible renamings so as to 
avoid capture of free names.
In what follows we shall be working modulo $\alpha$-conversion, and
hence we decree two processes as equal if one is $\alpha$-convertible into the
other. 
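As a simple example of the interplay between substitution and $\alpha$-conversion, consider 
$P = (\nu b)\, \outC{x}\langle b \rangle.\nil$. Applying the substitution $\sub b x$ naively would capture the bound name $b$; we therefore first $\alpha$-convert $b$ to a fresh name, obtaining 
$P \sub b x = (\nu b')\, \outC{b}\langle b' \rangle.\nil$.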


\paragraph{Operational Semantics.}
We present now a reduction semantics and an LTS for the $\pi$-calculus.
As argued before, the reduction semantics is intended to capture the behavior
that is intrinsic to a process, that is, 
the behavior that does not include the potential interactions
between the process and its environment. 
Central to the reduction semantics is the notion of \emph{structural congruence}
that allows flexibility in the syntactic structure of processes, thus
enabling interactions to occur. 

Structural congruence, denoted $\equiv$, is the smallest congruence over the 
set of $\pi$-calculus processes that satisfies the following rules:

\begin{enumerate}
 \item $P \equiv Q$ if $P$ is $\alpha$-convertible to $Q$;
\item  abelian monoid laws for $+$: $P + \nil \equiv P$,\, $P + Q \equiv Q + P$,\, $(P + Q) + R \equiv P + (Q + R)$;
\item  abelian monoid laws for $\parallel$: $P \parallel \nil \equiv P$,\, $P \parallel Q \equiv Q \parallel P$,\, $(P \parallel Q) \parallel R \equiv P \parallel (Q \parallel R)$;
\item laws for restriction: $(\nu x)\, \nil \equiv \nil$, \, $(\nu x)(\nu y)\, P \equiv (\nu y)(\nu x)\, P$, \, 
$((\nu x)\, P) \parallel Q  \equiv (\nu x)\,(P \parallel Q)$ if $x \not \in \fn{Q}$; 
\item law for match: $[x = x]P \equiv P$;
\item law for constants: if $D \eqdef (\til x)P$ and $\til x : \til y$ then $D \langle \til y \rangle \equiv P\sub{\til y}{\til x}$. (If replication is used instead, the law is $!P \equiv P \parallel !P$.)
\end{enumerate}
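For instance, when $x \notin \fn{Q}$, the laws above justify the following chain:
\[
((\nu x)\,(P \parallel \nil)) \parallel Q \;\equiv\; ((\nu x)\, P) \parallel Q \;\equiv\; (\nu x)\,(P \parallel Q) \,,
\]
using first the monoid laws for $\parallel$ (under the restriction, as $\equiv$ is a congruence) and then the scope extension law for restriction.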

The notion of interaction is formalized by the \emph{reduction rules} given in Figure \ref{f:pi-red}.

\begin{figure}[t]
$$
\textsc{Com} ~~(\cdots + x(\til y).P) \parallel (\cdots + \outC{x}\langle \til z \rangle.Q) \arro{} P\sub {\til z} {\til y} \parallel Q 
$$

$$
\rightinfer	[\textsc{Par}~~]
			{P \parallel Q \arro{} P' \parallel Q}
			{P \arro{} P' }
\qquad
\rightinfer	[\textsc{Res}~~]
			{\nu x P  \arro{} \nu x P'}
			{P \arr{} P' }
$$

$$ 
\rightinfer	[\textsc{Struct}~~]
			{P \arro{} P'}
			{P \equiv Q \andalso Q \arro{} Q' \andalso Q' \equiv P'}
$$
\caption{Reduction semantics for the $\pi$-calculus.\label{f:pi-red}}
\end{figure}
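As an example of the rules at work, a synchronization under a restriction is derived as follows: by rule \textsc{Com}, $x(\til y).P \parallel \outC{x}\langle \til z \rangle.Q \arro{} P\sub{\til z}{\til y} \parallel Q$, and rule \textsc{Res} closes the reduction under the restriction:
\[
\nu x\, (x(\til y).P \parallel \outC{x}\langle \til z \rangle.Q) \arro{} \nu x\, (P\sub{\til z}{\til y} \parallel Q) \,.
\]
Rule \textsc{Struct} would license the same reduction if, e.g., the order of the parallel components were exchanged.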

We now present 
the semantics in terms of 
a labelled transition system. It is actually the 
\emph{early} semantics for the $\pi$-calculus: 
the bound names of an input are instantiated as soon as possible, 
namely in the rule for input. 
(This is in contrast to the \emph{late} semantics, in which such an instantiation
takes place later, in the rule for communication.)
Actions can take three possible forms. 
In addition to the silent action $\tau$ that represents interaction, we have the following:
\begin{description}
 \item[$P \arro{~x\langle \til y \rangle~} P'$] which stands for an \emph{input action}: $x$ is the name at which it occurs, while $\til y$ is the tuple of names which are received.
\item[$P \arro{~(\nu \til{y'})\, \outC{x}\langle \til y \rangle~} P'$] 
which stands for an \emph{output action}, namely the output of names $\til y$ at $x$. It always holds that
$\til{y'} \subseteq \til y - x$. Tuple $\til{y'}$ represents those private names that are emitted from $P$,
carried out of their current scope. This is commonly known as \emph{scope extrusion}. 
\end{description}

In both cases, $x$ is  the \emph{subject} and $\til y$ is the \emph{object} part of the action.
There is a difference in the brackets of input \emph{prefixes}
and input \emph{actions}: they are round in the former and angled in the latter.
This is meant to emphasize the fact that in the input prefix
$x(\til y)$ 
 names in $\til y$
are binders (i.e. placeholders waiting to be instantiated), whereas 
in the input action $x\langle \til y \rangle$ they represent values (i.e. binders already instantiated).

We use $\mu$ to represent the label of a generic action. Given an action $\mu$, 
the bound and free names of $\mu$, denoted $\bn{\mu}$ and $\fn{\mu}$, respectively, are defined
as follows:

\begin{center}
% use packages: array
\begin{tabular}[c]{lll}
$\mu$ & $\fn{\mu}$ & $\bn{\mu}$  \\ \hline
$x\langle \til y \rangle$ & $x$, $\til y$ & $\emptyset$ \\ 
$(\nu \til{y'})\, \outC{x}\langle \til y \rangle$ & $x$, $\til y - \til{y'}$ & $\til{y'}$ \\ 
$\tau$ & $\emptyset$ & $\emptyset$
\end{tabular}
\end{center}

The set of names of $\mu$ is defined as $\n{\mu} = \fn{\mu} \cup \bn{\mu}$.
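The table above can be rendered as a small executable sketch. The datatypes and function names below are ours, chosen for illustration only; they mirror the three action forms and the definitions of $\fn{\mu}$, $\bn{\mu}$, and $\n{\mu}$.

```python
from dataclasses import dataclass

# Hypothetical datatypes for the three action forms (names are ours).
@dataclass(frozen=True)
class Input:            # x<y~>
    subject: str
    objects: tuple

@dataclass(frozen=True)
class Output:           # (nu y~') x-bar<y~>, with y~' a subset of y~
    subject: str
    objects: tuple
    extruded: tuple

@dataclass(frozen=True)
class Tau:              # silent action
    pass

def bn(mu):
    """Bound names: only the extruded names of an output."""
    return set(mu.extruded) if isinstance(mu, Output) else set()

def fn(mu):
    """Free names, following the table for each action form."""
    if isinstance(mu, Input):
        return {mu.subject} | set(mu.objects)
    if isinstance(mu, Output):
        return {mu.subject} | (set(mu.objects) - set(mu.extruded))
    return set()        # tau

def n(mu):
    """All names of an action."""
    return fn(mu) | bn(mu)

# (nu z) x-bar<y, z>: z is extruded, hence bound; x and y are free.
out = Output("x", ("y", "z"), ("z",))
assert fn(out) == {"x", "y"} and bn(out) == {"z"}
assert n(Input("x", ("y",))) == {"x", "y"}
assert n(Tau()) == set()
```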
The labeled transition system is given in Figure \ref{f:lts-pi}.
To conclude this introduction to the $\pi$-calculus, it is worth mentioning
that, up to structural congruence, the reduction semantics $\arro{}$ is
exactly the relation $\arro{~\tau~}$ of the labeled transition semantics.
This result is sometimes referred to as the \emph{harmony lemma} (see, e.g., \cite{SaWabook}).


\begin{figure}
\infrule{\textsc{Alp}~~}{P'\arr\mu Q \andalso 
\mbox{$P$ and $P'$ are $\alpha$-convertible}
}{
P \arr\mu Q}
\[\mathrm{\textsc{Inp}}~~~{\inp x {\til y}. P} \arr{x \langle{\til z} \rangle  }  {P\sub {\til z}{\til y} }, \mbox{  if $|\til z| = |\til y|$}  \qquad \qquad \mathrm{\textsc{Out}}~~~{\outC{x} \langle \til y \rangle}.P \arr{\outC{x} \langle \til y \rangle  }  {P}\]
\[
\textsc{Sum}~~ \frac{P \arr\mu P'}{P + Q \arr\mu P'} \quad \quad \textsc{Par}~~ \frac{P \arr\mu P'}{P \parallel Q \arr\mu P' \parallel Q }~~\bn{\mu} \cap \fn{Q} = \emptyset
\]
\infrule{\textsc{Com}~~}{P \arro{(\nu {\til y'})\outC{x}\langle \til y \rangle} P' \andalso 
Q \arro{x\langle \til y \rangle} Q' \andalso \til{y'} \cap \fn{Q} = \emptyset
}{
P \parallel Q \arr\tau \nu \til{y'} (P' \parallel Q')
} 
%\infrule{\textsc{Tau1}}{P_1 \arro{(\nu \til y)\out{a}{\langle \til P \rangle}} P_1' \andalso 
%P_2 \arr{a(\til x)} P'_2 \andalso \til y \cap \fn{P_2} = \emptyset
%}{
%P_1 \parallel P_2 \arr{~\tau~}  \nu \til y \,(P'_1 \parallel P'_2 \sub{\til P}{\til x})}
 
%\infrule{\textsc{IntRes}}{P \arr{a\tau} P' }{\nu a \, P \arr{\tau} \nu a \, P}
\[
\textsc{Res}~~\frac{P \arr{\mu} P' \andalso r \not \in \n{\mu}}{\nu r \, P \arr{\mu} \nu r \, P'} 
\quad\quad \textsc{Const}~~\frac{P\sub{\til y}{\til x} \arr{\mu} P'}{D\langle \til y \rangle \arr{\mu} P'}~~\mbox{if ~$D \eqdef (\til x)P$}
\]
\[
 \textsc{Open}~~\frac{P \arro{(\nu \til{y'})\outC{z}{\langle \til y \rangle}} P'}{
\nu x \, P  \arro{(\nu x,\til{y'})\outC{z}{\langle \til y \rangle}}  P'}~~  x\neq z, \, x \in \fn{\til y}-\til{y'}
\quad\quad
\textsc{Match}~~\frac{P \arr{\mu} P'}{[x=x]P \arr{\mu} P'}
\]


\caption[The (early) labeled transition system for the $\pi$-calculus]{The (early) labeled transition system for the $\pi$-calculus. The symmetric 
counterparts of rules \textsc{Sum}, \textsc{Par}, and \textsc{Com} are omitted.}\label{f:lts-pi} 
\end{figure}
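To illustrate rules \textsc{Open} and \textsc{Com} at work (the derivation is ours, for illustration), consider the emission of a private name. By rule \textsc{Out}, $\outC{x}\langle y \rangle.\nil \arro{\outC{x}\langle y \rangle} \nil$; then, by \textsc{Open} (since $y \neq x$ and $y \in \{y\}$), $\nu y \, \outC{x}\langle y \rangle.\nil \arro{(\nu y)\outC{x}\langle y \rangle} \nil$. Combining this with the input transition $x(z).Q \arro{x\langle y \rangle} Q\sub{y}{z}$ via rule \textsc{Com} (provided $y \not\in \fn{Q}$), we obtain a scope extrusion:
\[
(\nu y \, \outC{x}\langle y \rangle.\nil) \parallel x(z).Q \arro{~\tau~} \nu y \, (\nil \parallel Q\sub{y}{z}) \, .
\]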

% \begin{itemize}
%  \item Show how the previously defined notions change/adapt. 
% 
% 
% 
% \item barbs, soundness, completeness (we need to introduce pi for these)
% 
% \end{itemize}
% 
% \subsection{Notes}
% A number of studies concerning the different forms of expressing infinite behavior have appeared in the 
% literature. A good survey paper is \citep{Palamidessi05}.





\section{Higher-Order Process Calculi}\label{s:pre-ho}



%\subsection{Generalities}\label{ss:pre-hogen}
% 
% We begin by reviewing the main proposals previous to Sangiorgi's higher-order $\pi$-calculus (henceforth HOpi)
% ---probably the most representative higher-order process calculus. 
% Then in Sect X we review HOpi and comment on its main features. 
% We conclude reviewing other process calculi proposed after HOpi.


{\em Higher-order process calculi} are calculi 
in which  processes  (more generally, values containing processes)
can be communicated. Thus a computation step involves the
instantiation of a variable with a term, which is then
%  In general, the variable may
% occur more than once, in which case  copies 
%The agent being communicated is
copied as many times as there are occurrences of the variable. 
If there are multiple occurrences, the size of a system
may grow. % as a result of the computation step. 
Higher-order process calculi were put forward in the 
early 1990s, with 
 CHOCS \citep{Tho89} and Plain CHOCS (its variant with static binding) 
\citep{Tho90}, and with the  Higher-Order $\pi$-calculus 
\citep{San923}. The basic  operators are those of CCS \citep{Milner89}. 

The appearance of processes inside values usually 
has strong consequences for the semantics: 
 namely on labeled
transition systems (notions of alpha conversion,
higher-order substitutions, scope extrusions) and, especially,  
on behavioral equivalences (e.g.  bisimulation).
%  now two
% equivalent processes  may match each other action in the bisimulation
% game with values  that not identical. 
Higher-order, or process-passing, concurrency is often presented as an
alternative paradigm 
to the first-order, or name-passing, concurrency of the $\pi$-calculus
for the   description of   mobile systems, i.e.\ concurrent systems whose
communication topology may change dynamically. 
Higher-order calculi are formally closer to the
$\lambda$-calculus, whose basic computational step --- $\beta$-reduction ---
involves term instantiation.  
%igher-order calculi are formally closer to, and are inspired by, the
%$\lambda$-calculus, whose basic computational step --- $\beta$-reduction ---
%involves term instantiation. 
As in the $\lambda$-calculus, a computational step in higher-order calculi results in the
instantiation of a variable with a term, which is then
copied as many times as there are occurrences of the variable, resulting in potentially larger terms. 



%Origins, Syntax, Semantics, Languages.
%\subsubsection{Origins}
%\emph{The following classification is mentioned in the Amadio and Dam paper:}


The remainder of this section is structured as follows.
In Section \ref{ss:hopi} we present the higher-order $\pi$-calculus;
this is necessary to introduce Sangiorgi's representability result in Section \ref{ss:sangio-rep}.
Then, in Section \ref{ss:other-ho} we 
%In this section we give an overview on the fundamental notions on higher-order process calculi.
%In Section \ref{ss:pre-hogen} we comment on general aspects of these calculi.
%and
review several proposals of higher-order languages in the literature.
Finally, in Section \ref{ss:pre-beht}, we report on previous 
works on the behavioral theory for languages in the higher-order setting. 
%In Section \ref{ss:pre-other} we overview, for the sake of completeness, other reasoning techniques that have been developed for higher-order process calculi.


\subsection{The Higher-Order $\pi$-calculus}\label{ss:hopi}
Here we present the higher-order $\pi$-calculus, abbreviated \Hopi. % in the following.
We introduce the language by building on the 
notations presented for the $\pi$-calculus in Section \ref{ss:pi}.

Let $Var$ be a set of agent-variables, ranged over by $X,Y$.
In order to obtain \Hopi, the syntax of the $\pi$-calculus (cf. Section \ref{ss:pi}) is modified in two ways.
First, variable application is allowed, so that an abstraction received as input can be provided with appropriate
arguments. Second, tuples in inputs, outputs, applications, and abstractions
may also contain agents or agent-variables. To simplify the notation in the grammar below
we use $K$ to stand for an agent or a name and $U$ to stand for a variable or a name.

\begin{eqnarray*}
P, Q & ::=  & \sum_{i \in I} \alpha_i.P_i \midd P \parallel Q \midd \nu x P \midd [x=y]P \midd D \langle \til K \rangle \midd X \langle \til K \rangle \\
\alpha & ::= & \outC{x} \langle \til K \rangle  \midd x(\til U)
\end{eqnarray*}

Recall that $K$ may be an agent: hence, it may be a process, but also an abstraction of arbitrarily high order.
The grammar of agents is the following:
\[
 A ::= (\til U)P \midd (\til U)X \langle \til K \rangle \midd (\til U)D \langle \til K \rangle
\]
(Notice also that a variable $X$ and a constant $D$ are agents, corresponding to the cases in
which $\til U$ and $\til K$ are empty.)
We make the same assumptions regarding finiteness of tuples as in the $\pi$-calculus.
A variable $X$ which is not underneath some input prefix $x(\til U)$ or an abstraction 
$(\til U)$ with $X \in \til U$ is said to be \emph{free}. An agent containing free variables
is said to be \emph{open}. We use $\fv{A}$ to denote the set of free variables of agent $A$.

In \Hopi the notions of types and type systems are more involved than in the $\pi$-calculus.
For the sake of conciseness, we do not present such details here, and assume well-sorted expressions.
The reader is referred to \citep{San923,San96int} for details. 

Now we present 
reduction and labeled transition semantics 
for \Hopi.
Let us introduce the reduction semantics first.
The structural congruence rules and the reduction rules for \Hopi
are the same as for the $\pi$-calculus.
We only have to generalize the structural congruence rule (6)
and the rule \textsc{Com}, so that the tuples involved may contain agents.
This way, these rules become, respectively:
\[
\mbox{6. If $D \eqdef (\til U)P$ and $\til U : \til K$, then $D \langle \til K \rangle \equiv P \sub{\til K}{\til U}$}
\]
and

\[
 \textsc{Com}\quad (\cdots + x(\til U).P) \parallel (\cdots + \outC{x}\langle \til K \rangle.Q) 
\arro{} P\sub{\til K}{\til U} \parallel Q \, .
\]
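As an example of a higher-order reduction (abbreviating the application $X\langle \rangle$ with an empty tuple simply as $X$), communicating a process $R$ instantiates the receiving variable and copies $R$ once per occurrence:
\[
x(X).(X \parallel X) \parallel \outC{x}\langle R \rangle.\nil \arro{} (R \parallel R) \parallel \nil \, .
\]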

The labeled transition semantics
for \Hopi is given in Figure \ref{f:lts-hopi}.
It arises as a generalization of that for the $\pi$-calculus
given in Figure \ref{f:lts-pi}.
While input actions take the form $x \langle \til K \rangle$,
output actions are of the form $(\nu \til y)\outC{x}\langle \til K \rangle$;
for the latter it holds that $\til y \subseteq \fn{\til K} - x$.
The correspondence between reduction and labeled transition semantics mentioned
for the $\pi$-calculus holds for \Hopi as well. 



\begin{figure}
\infrule{\textsc{Alp}~~}{P'\arr\mu Q \andalso 
\mbox{$P$ and $P'$ are $\alpha$-convertible}
}{
P \arr\mu Q}
\[\mathrm{\textsc{Inp}}~~~{\inp x {\til U}. P} \arr{x \langle{\til K} \rangle  }  {P\sub {\til K}{\til U} }, \mbox{  if $|\til K| = |\til U|$}  \qquad \qquad \mathrm{\textsc{Out}}~~~{\outC{x} \langle \til K \rangle}.P \arr{\outC{x} \langle \til K \rangle  }  {P}\]
\[
\textsc{Sum}~~ \frac{P \arr\mu P'}{P + Q \arr\mu P'} \quad \quad \textsc{Par}~~ \frac{P \arr\mu P'}{P \parallel Q \arr\mu P' \parallel Q }~~\bn{\mu} \cap \fn{Q} = \emptyset
\]
\infrule{\textsc{Com}~~}{P \arro{(\nu {\til y})\outC{x}\langle \til K \rangle} P' \andalso 
Q \arro{x\langle \til K \rangle} Q' \andalso \til{y} \cap \fn{Q} = \emptyset
}{
P \parallel Q \arr\tau \nu \til{y} (P' \parallel Q')
} 
\[
\textsc{Res}~~\frac{P \arr{\mu} P' \andalso r \not \in \n{\mu}}{\nu r \, P \arr{\mu} \nu r \, P'} 
\quad\quad \textsc{Const}~~\frac{P\sub{\til K}{\til U} \arr{\mu} P'}{D\langle \til K \rangle \arr{\mu} P'}~~\mbox{if ~$D \eqdef (\til U)P$}
\]
\[
 \textsc{Open}~~\frac{P \arro{(\nu \til{y})\outC{z}{\langle \til K \rangle}} P'}{
\nu x \, P  \arro{(\nu x,\til{y})\outC{z}{\langle \til K \rangle}}  P'}~~  x\neq z, \, x \in \fn{\til K}-\til{y}
\quad\quad
\textsc{Match}~~\frac{P \arr{\mu} P'}{[x=x]P \arr{\mu} P'}
\]


\caption[The labeled transition system for \Hopi]{The labeled transition system for \Hopi. The symmetric 
counterparts of rules \textsc{Sum}, \textsc{Par}, and \textsc{Com} are omitted.}\label{f:lts-hopi} 
\end{figure}


\subsection{Sangiorgi's Representability Result}\label{ss:sangio-rep}

We now introduce $\mathcal{C}$, the compilation of \Hopi into the $\pi$-calculus, following the presentation
in \citep{San93cc}. Hence, for readability purposes, only the monadic calculus is considered.
Also, it is assumed that agent definitions use a finite number of constants;
this allows the use of replication in place of constants. 
Alternative presentations of the representability result ---focused on asynchronous calculi---
can be found in \citep{San98Udine,SaWabook}.


The translation uses notation $P \{m:=F\}$ to stand for $\nu m (P \parallel !m(U).F\langle U \rangle)$,
where $U$ is a name or a variable. The intuition is that $\mathcal{C}$ replaces the communication 
of an agent with the communication of the access to that agent. This way,
$P_1 \eqdef \outC{a}\langle F \rangle.Q$ is replaced by
$P_2 \eqdef (\outC{a}\langle m \rangle.Q) \{m:=F\}$.
While an agent interacting with $P_1$ may use $F$ directly with, e.g., argument $b$,
an agent interacting with $P_2$ uses $m$ to \emph{activate} $F$ and provide it with $b$. The name
$m$ is called \emph{name-trigger} or simply \emph{trigger}.
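To see the trigger at work, suppose an agent $a(y).\outC{y}\langle b \rangle.\nil$ interacts with $P_2$ (the interaction scenario is ours, for illustration). Unfolding the definition of $P_2$ we obtain, up to structural congruence,
\[
P_2 \parallel a(y).\outC{y}\langle b \rangle.\nil
\arro{}
\nu m \, \big( Q \parallel \outC{m}\langle b \rangle.\nil \parallel !m(U).F\langle U \rangle \big)
\arro{}
\nu m \, \big( Q \parallel F\langle b \rangle \parallel !m(U).F\langle U \rangle \big) \, :
\]
the trigger $m$ is received first and then used to activate a copy of $F$ with argument $b$; the replicated input remains available for further activations.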

The definition of $\mathcal{C}$ is presented in Figure \ref{f:sang-enc}.
Notice that a variable $X$ is translated into a name $x$.
The correctness of $\mathcal{C}$ is studied in depth in \cite{San923}.
There, $\mathcal{C}$ is derived in two steps.
The first is a mapping $\mathcal{T}$ which transforms an agent into a 
\emph{triggered} agent. These are \Hopi agents in which every agent
emitted in an output or expected in an input has the same structure as a trigger.
This gives homogeneity to higher-order communications and simplifies the reasoning
over agents. The agent $\mathcal{T}\encpp{A}$ has the same structure as
$\mathcal{C}\encpp{A}$ and maintains agents in the higher-order setting.
A complementary mapping, denoted $\mathcal{F}$, transforms triggered agents
into first-order processes. 
The compilation $\mathcal{C}$ is \emph{fully-abstract} with respect to 
(weak) barbed congruence, i.e., for each pair of agents $A_1$ and $A_2$,
\[
 A_1 \wbc A_2 \mbox{~if and only if~} \mathcal{C}\encpp{A_1} \wbc \mathcal{C}\encpp{A_2} \, .
\]
This result is then complemented by a statement of \emph{operational correspondence}  between
$P$ and $\mathcal{C}\encpp{P}$ which reveals the way the latter simulates the behavior of the former.
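The essence of the trigger technique can also be conveyed through a programming analogy (entirely ours; a loose sketch, not the formal compilation): instead of sending a function itself, one sends a fresh key under which a reusable ``server'' for that function has been installed.

```python
import itertools

# A toy "network": channel name -> value last sent on it (ours, for illustration).
network = {}
fresh = itertools.count()

def send(chan, value):
    network[chan] = value

def receive(chan):
    return network.pop(chan)

# Higher-order style: the process (here, a Python function) travels itself.
send("a", lambda b: f"F({b})")
f = receive("a")
assert f("b") == "F(b)"

# First-order (triggered) style: only a fresh trigger name travels;
# a replicated server at that name -- cf. !m(U).F<U> -- applies F on demand.
servers = {}

def send_trigger(chan, proc):
    m = f"m{next(fresh)}"
    servers[m] = proc        # install !m(U).F<U>
    send(chan, m)            # emit a<m> instead of a<F>

def activate(m, arg):
    return servers[m](arg)   # spawn one copy of F<arg>

send_trigger("a", lambda b: f"F({b})")
m = receive("a")
assert activate(m, "b") == "F(b)"
assert activate(m, "c") == "F(c)"   # replication: the server is reusable
```

The second half mirrors the shape of $P_2$: the sender emits only the name $m$, and each activation corresponds to one interaction with the replicated input $!m(U).F\langle U \rangle$.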


\begin{figure}[t]
 \begin{eqnarray*}
  \mathcal{C}\encpp{X} & \eqdef & \left\{  
\begin{array}{ll}  
 \mathcal{C}\encpp{(Y)X\langle Y \rangle} & \mbox{if $X$ is a higher-order abstraction}\\  
 \mathcal{C}\encpp{(a)X\langle a \rangle} & \mbox{otherwise} .  
\end{array}\right.  \\
\mathcal{C}\encpp{\alpha.P} & \eqdef & \left\{  
\begin{array}{ll}  
 (\outC{a}\langle m \rangle.\mathcal{C}\encpp{P}) \{m:=\mathcal{C}\encpp{F}\} & \mbox{if $\alpha = \outC{a}\langle F \rangle$ }\\  
a(x).\mathcal{C}\encpp{P} & \mbox{if $\alpha = a(X)$}  \\
 \alpha.\mathcal{C}\encpp{P} & \mbox{otherwise}  
\end{array}\right.
\end{eqnarray*}
 \begin{eqnarray*}
\mathcal{C}\encpp{X \langle F \rangle} ~~\eqdef~~  (\outC{x}\langle m \rangle.\nil) \{m:=\mathcal{C}\encpp{F}\}  \qquad \mathcal{C}\encpp{X\langle b \rangle} ~~\eqdef~~ \outC{x}\langle b \rangle.\nil \\
\mathcal{C}\encpp{P \parallel Q} ~~\eqdef~~  \mathcal{C}\encpp{P} \parallel \mathcal{C}\encpp{Q} \qquad 
\mathcal{C}\encpp{P + Q} ~~\eqdef~~  \mathcal{C}\encpp{P} + \mathcal{C}\encpp{Q} \\
\mathcal{C}\encpp{!P} ~~\eqdef~~ ! \, \mathcal{C}\encpp{P} \qquad \mathcal{C}\encpp{\nu a \,P} ~~\eqdef~~ \nu a \, \mathcal{C}\encpp{P} 
\qquad \mathcal{C}\encpp{\, [a = b]P \, } ~~\eqdef~~ [a=b]\, \mathcal{C}\encpp{P} \\
\mathcal{C}\encpp{(X)P}~~\eqdef~~ (x)\mathcal{C}\encpp{P} \qquad \mathcal{C}\encpp{(a)P}~~\eqdef~~ (a)\mathcal{C}\encpp{P}
 \end{eqnarray*}
\caption{The compilation $\mathcal{C}$ from higher-order into first-order $\pi$-calculus\label{f:sang-enc}}
\end{figure}



\subsection{Other Higher-Order Languages}\label{ss:other-ho}
We review a number of higher-order languages with concurrency, % in their broadest sense, 
following the way they appeared in theoretical computer science:
%In the past, the study of higher-order concurrent languages has found two main motivations:
\begin{itemize}
 \item As a way of studying the foundations of programming languages integrating functional and concurrent paradigms.
This is represented by variants of the $\lambda$-calculus enriched with forms of parallelism
(see, e.g., \citep{Boudol89,Nielson89}).
\item As a way of studying forms of code mobility and mobile agents. 
This is represented by process calculi with 
%complex communication objects.
process-passing (see, e.g., \citep{Tho90,San923,SchmittS04,HilBun04}).
\end{itemize}

In what follows we give an overview of both research strands, 
with an emphasis on the efforts concerning process calculi.

\subsubsection{Functional Languages with Concurrency}
A number of works advocated that a formal model for concurrent, communicating processes 
should contain the $\lambda$-calculus as a simple sub-calculus.

%In this line, 
\cite{Boudol89} proposes the $\gamma$-calculus, a strict extension of the $\lambda$-calculus with CCS-like communication.
The $\gamma$-calculus is a \emph{direct generalization} of the $\lambda$-calculus,
in that a $\beta$-reduction is formally defined to be a particular instance of the communication rule 
(and not something that is representable by a series of communications, for instance).
The calculus is parametrized on a set of ports, and the parallel composition operator of CCS
is split into two constructs:
\emph{interleaving} and \emph{cooperation}, which represent concurrency and communication, respectively. 
The cooperation operator is not associative, so $\lambda$-calculus application is represented by
cooperation and output constructs.
This way the desired (tight) relationship between communication and application is achieved.
%, the latter being a particular instance of the former.
Interestingly, in the $\gamma$-calculus the cooperation operator is ``dynamic'' in that reductions \emph{behind} prefixes
are allowed, much like active outputs in certain modern process calculi.
%\emph{1. Davide remarks in his bisimilarities paper that gamma has dynamic binding. Check this.}
%\emph{2. See \cite{JagadeesanP90} for a denotational account of Boudol's Gamma.}

In a similar spirit, 
\cite{Nielson89} proposes an extension of the \emph{typed} 
$\lambda$-calculus with process communication. 
Here the r\^{o}le of types is indeed prominent, as 
they are meant to record
the communication possibilities of processes, while retaining the usual information about functions and tuples that is provided 
by the typed $\lambda$-calculus. Here ``communication possibilities'' refers essentially to the channels over which communication can occur and the types of the entities that can be communicated over these. 
The type system then results from generalizing the notion of \emph{sort} as defined in CCS;
% As expected the language is endowed with a semantics that respects the type information.
% The type system rules out programs with two class of errors: type mismatches and deadlocks. 
it guarantees that, given some expression $e$ with type $t$, everything $e$ evaluates to will also have type $t$, provided
that $e$ reads values of permissible types. 


The FACILE language framework \citep{GiacaloneMP89,PrasadGM90}
is an integration of functional and concurrent programming.
Unlike other proposals, which enrich one of the programming styles with features of the other,
FACILE intends to be \emph{symmetric} in that a full functional language is integrated with a full concurrent language.
In FACILE concurrent processes communicate through synchronous message passing; processes manipulate data 
in functional style. While the operational definition of FACILE first given in \citep{GiacaloneMP89}
consisted of a translation into a concurrent functional abstract machine, in \citep{PrasadGM90} the semantic
foundations of the framework are given in terms of 
a notion of program behavior that combines the observable behavior of processes and the evaluation of expressions.
Program equivalence is based on a form of contexts called \emph{windows}, which 
are meant to index families of bisimulation relations. 
Roughly speaking, 
two expressions are equivalent if they reduce to equivalent values while producing equivalent behavior.
The higher-order nature of the language is reflected in the observational equivalence by basing the notion 
of equivalent actions on the idea of equivalent values, much like the higher-order bisimilarity defined for CHOCS.

%\emph{Another combination of functions and processes: \cite{Meira89}}

%Two further proposals are worth reviewing.
Finally, 
CML is a concurrent extension of Standard ML \citep{Reppy91,Reppy92}
in which synchronous operations are treated as first-class values. 
Synchronous operations are represented in the set of values by \emph{events}; 
the language provides combinators to construct complex events from event values.
This way a wide range of 
synchronization abstractions 
can be constructed, 
which in turn allows different concurrency idioms to be supported. 
% \cite{Smolka94} proposes a language higher-order concurrent programming
% which includes as primitives logic variables, names, procedural abstractions,
% and cells to represent mutable state.
% The language is shown to correspond to an asynchronous, polyadic $\pi$-calculus 
% with logic variables.

%REVIEW WORKS ON CORE FACILE, FACILE, GAMMA BY BOUDOL \cite{Bou89}.

\subsubsection{Process Calculi with Higher-Order Features}
%Pure CHOCS, POPL paper: \cite{Tho89}

%Thesis: \cite{Tho90}

%Plain CHOCS: \cite{Tho93}

%Long version of the POPL paper: \cite{Thomsen95}


Here we review a number of concurrent languages which 
rely on a process-calculus basis to implement higher-order features.
% As the reader shall notice, we are particularly biased towards the higher-order $\pi$-calculus
% for we present it in more detail.
% This bias is justified by the significance of the higher-order $\pi$-calculus.
% Also, this presentation shall be useful later, when introducing Sangiorgi's representability result.

\paragraph{CHOCS and Plain CHOCS.}
\cite{Thomsen95,Tho90} introduces and develops the basic theory of CHOCS, an extension of CCS with process passing.
Probably the most distinctive feature of CHOCS is the treatment of the restriction operator as a dynamic binder.
This simplifies significantly several aspects of the theory, such as the definition of an algebraic theory and denotational semantics.
As in \citep{Boudol89}, the behavioral equivalence defined for CHOCS (higher-order bisimilarity) 
considers the bisimilarity of the values of the actions, rather than their equality. 
Higher-order bisimilarity is shown to be a congruence, and an algebraic theory for it is also developed. 
The main ideas of the type system developed by \cite{Nielson89} are adapted to CHOCS so as to define a notion of sorts. 
As for the observational equivalence, internal actions are abstracted in the delayed style: an arbitrary number of
internal actions is allowed only before a visible one. 
Using this definition, and similarly as in CCS, the observational equivalence is proven to be a congruence for sum-free processes.
Finally, an extension of the Hennessy-Milner logic is proposed for characterizing higher-order bisimilarity.
The higher-order nature of the calculus is captured by enriching modalities with (i) formulas representing the processes sent or received, and (ii) the state after the transition. 
The expressiveness of the calculus is demonstrated by exhibiting encodings of the lambda calculus (with several
evaluation strategies) into CHOCS, as well as an interpretation of a simple imperative programming language into CHOCS.
%There is an encoding of lambda in chocs. 
%The encoding seem to work well with an evaluation strategy that is lazy and by-name.
%The encoding does not preserve full beta reduction.
%A second encoding of lambda into chocs enforces call-by-value reduction.

Inspired by the $\pi$-calculus, Plain CHOCS \citep{Tho93,Tho90} results from considering 
the restriction operator in CHOCS as a static binder.
The transition system for Plain CHOCS and the bisimilarity developed upon it follow closely those defined for the lazy
$\lambda$-calculus \citep{Abramsky89}. The relationship between Plain CHOCS and the $\pi$-calculus is explored by means of encodings
in the two directions. 
The encoding of Plain CHOCS into the $\pi$-calculus follows the strategy in \cite{MilnerPW92a}: the communication of a process in Plain CHOCS is represented in the $\pi$-calculus as the communication of a link to a trigger construct that provides copies of the communicated process. This encoding is defined for the fragment of Plain CHOCS without renaming. 
An alternative encoding that considers renaming by admitting a set of names as parameter is also proposed.
The encoding of the $\pi$-calculus into Plain CHOCS is more involved: the communication of a name in the $\pi$-calculus is represented by the communication of a Plain CHOCS process that contains the (input, output) capabilities of a name. This process, the so-called \emph{wire}, is meant to be ``plugged'' into the context of a given receiver 
by renaming operations that allow one to ``localize'' the capabilities of the name. 
The encoding is then formalized as a two-level translation.
In the first step, all free names and input-bound names are translated into process variables. Names bound by restriction are simply translated as names in Plain CHOCS. In the second step of the translation, the process variables corresponding to free names in the $\pi$-calculus process are translated into names in Plain CHOCS. 

\cite{Bloom94} proposes a meta-theory for higher-order process calculi that generalizes CHOCS 
by considering constructs for broadcasting communication and interruption of process execution.
Computation is of two kinds, process-algebraic and functional, so the 
meta-theory subsumes the name-passing capabilities of the $\pi$-calculus and reduction as in the $\lambda$-calculus.
The metatheory is typed; types of channels recognize input and output capabilities.
The behavioral equivalence investigated is a generalization of higher-order bisimulation as proposed
by \cite{Boudol89} and \cite{Tho90}, and is shown to be a congruence. 


\paragraph{The Blue Calculus.}
The Blue calculus \citep{Boudol98} provides an integration of the $\lambda$-calculus with the $\pi$-calculus
with the objective of obtaining a direct model for higher-order concurrency
with the same expressive power of the $\pi$-calculus while offering a more convenient programming notation.
The Blue calculus is endowed with a type system that encompasses both Curry's type inference for the $\lambda$-calculus and 
Milner's sorting type system for the $\pi$-calculus. 
In a nutshell, the computational model of Blue is built around a \emph{name-passing} $\lambda$-calculus 
in which asynchronous messages might call for resources (or services) 
available in the form of linear and ``inexhaustible'' declarations. 
Because of the unified rationale of Blue, 
programs in the $\lambda$-calculus and processes in the $\pi$-calculus have 
both a Blue interpretation; this way, the Blue calculus is useful to formalize 
a number of intuitions on the relationship between the $\lambda$-calculus and the $\pi$-calculus 
as well as encodings of evaluation strategies of the former into the latter.




%\paragraph{Other Calculi}
%We now briefly review a few other higher-order languages proposed in the literature. 




\paragraph{The M-calculus.}
\cite{SchmittS03,SchmittS02TR} propose the M-calculus, 
a higher-order distributed calculus that provides the notion of {\em hierarchical, programmable locality}
as a way of representing those distributed systems in which 
localities can be of different kinds and exhibit different kinds of behaviors (e.g. with respect to access control or to failures).
This is in sharp contrast with other calculi for distributed programming (such as the Ambient Calculus \citep{CardelliG00}) 
in which localities are homogeneous, i.e. they are all of the same kind and have the same pre-defined behavior.
These distributed localities with explicit, programmable behavior are called \emph{cells}; 
in combination with higher-order communication and dynamic binding features they allow 
to give a unified view of process migration and communication. 
The design of the M-calculus retains features from the Blue Calculus \citep{Boudol98} and the Join calculus \citep{FournetGLMR96}; 
the M-calculus features a functional character in messages (as in Blue), message patterns within definitions, 
and named cells so as to form a tree-like hierarchy (as in Join).
As a novelty, the M-calculus introduces a \emph{passivation} operator, 
which can ``freeze'' running processes, as well as a type system guaranteeing the uniqueness 
of names of active cells. 

%\emph{Also, M has a form of dynamic binding, yet to be understood--- see INRIA Report 4361.}

\paragraph{The Kell calculus. }
The Kell calculus \citep{SchmittS04} arises as a generalization of the M-calculus, 
defined as a \emph{family} of calculi 
intended to serve as a basis for component-based distributed programming. 
%No bisimilarity theories are proposed for the M-calculus; this is one of the concerns 
%that the authors address in 
%the design of the Kell calculi \citeyearpar{SchmittS04}. This is a family of calculi 
Built around a $\pi$-calculus core, 
%i are parameterized on the language used to define input patterns and 
the main features of the Kell calculus are hierarchical, programmable localities
and local actions. 
While the former are inherited from the M-calculus and allow one to express different semantics for
containment and movement, the latter embody a principle 
under which atomic actions should occur within a locality, or at the boundary between a locality and its enclosing
environment. Also as in the M-calculus, 
in Kell the execution of a process 
within a locality
can be controlled through its \emph{passivation}. 
The Kell calculus can be instantiated by means of \emph{input pattern languages}, i.e. 
the language allowed in input constructs;
this is most useful in defining a generic behavioral theory for the calculus. 
As a matter of fact, under sufficient conditions on substitution properties of such pattern languages, 
a co-inductive characterization of contextual equivalence is provided in terms of 
a form of higher-order bisimulation termed strong context bisimulation.

%As mentioned before, 
\paragraph{Homer. }
Homer \citep{HilBun04} is a higher-order calculus for mobile embedded resources. 
Its main features are active code mobility, explicit nested locations, and local names.
Given a resource (i.e. a process) inside a location, active process mobility refers to the fact that
the resource might be \emph{taken} (or \emph{pulled}) by a suitable complementary prefix.
This kind of movement ---sometimes referred to as \emph{objective mobility}--- is a feature Homer shares with 
the M-calculus and Kell.
Crucially, 
and similarly to Boudol's $\gamma$-calculus reviewed above, 
the resource has the capability of performing internal computations inside locations, 
that is, the resource can evolve on its own before being moved.
This is in sharp contrast to usual process passing and substitution. 
Location addresses are defined by nested names; interactions between resources at arbitrarily nested locations are allowed.
These nested locations come with an involved treatment of local names, scope extension and extrusion. 
% It contains primitives for expressing 
% linear and non-linear process passing; 
% named, nested locations; and local names. Homer is 
% %expressive enough to model the synchronous $\pi$-calculus and is 
% endowed with a type system which distinguished between resources that are 
% affine linear (used at most once) and non-linear. In Homer, 
In Homer, barbed congruence is shown to be characterized by a labeled transition bisimulation congruence.
%GodskesenHS02

\paragraph{HOPLA. }
HOPLA \citep{NygaardW02}
is a higher-order language for non-de\-ter\-mi\-nis\-tic processes that arises
from a proposal for domain theory for concurrency; 
that is, from a denotational/categorical approach for giving meaning to concurrent computation. 
Roughly speaking, 
HOPLA is an extension of the typed $\lambda$-calculus in which 
a process is typed with a collection of its possible \emph{computation paths}. 
The notion of \emph{prefix-sum type} is introduced for this purpose. 
The denotation of a process then relies on set-based operations on paths and their extensions. 
HOPLA has a developed operational semantics and behavioral theory; 
sensible equivalences have been defined for it, including ordinary bisimilarity, applicative bisimilarity, and
higher-order bisimilarity. An advantage of the domain-theoretical approach for concurrency is that it is 
general by definition, and naturally leads to metalanguages for process description. 
This is evidenced in HOPLA, which can encode directly languages such as CCS, CCS with process passing, and mobile ambients. 
An extension of HOPLA that incorporates name generation has been introduced in \citep{WinskelN04}.

% 
% \paragraph{HO for services/coordination}
% In \cite{RadestockE96} Higher-order pi as a foundation for Darwin, a coordination language; they claim plain pi is inadequate.
% 
% \cite{BundgaardGHHN08}
% propose HomeBPEL, a higher-order language for (web) service implementation and orchestration.
% HomeBPEL is intended to formalize scenarios in which 
% disconnected operation of services is achieved by their movement to some local process engine 
% (such as, e.g., a mobile device).
% In HomeBPEL, processes are values that can be stored in variables, passed as messages, and activated 
% as embedded \emph{sub-instances}.
% Sub-instances can be dynamically \emph{frozen} and stored as processes in a variable and, 
% as any other variable content, they can be sent to remote services; 
% upon reception, they can be \emph{thawed} or reactivated as a local sub-instance.
% This is very similar to process passivation as discussed for the M-calculus and Kell.
% Following work for the Homer calculus \citep{BundgaardH06}, the semantics of HomeBPEL is based on binding bigraphical systems as formalized by the BPL Tool; 
% it exploits the close correspondence between bigraphs and XML to provide a formalized 
% run-time format that also constitutes the representation of frozen sub-instances.
% 
% 
% \paragraph{Other Stuff}

\paragraph{KLAIM. }
KLAIM \citep{NFP98} is a {\em process description
language}: as such, it falls between a programming language and a 
process calculus (see \citep{Nicola06} for a short survey on this distinction). 
In its process calculus dimension, 
processes and data can be moved from one computing environment to another. 
KLAIM builds on the Linda tuple space model, and can be seen as an asynchronous higher-order
process calculus whose basic actions are the original Linda primitives enriched with explicit
information about the location of the nodes where processes and tuples are allocated.
%KLAIM supports programming with explicit localities and 
%has associated a simple type system to control access rights.
The behavioral theory for sub-languages of KLAIM focuses on barbed
congruence and may testing; it has been studied by \cite{BorealeNP99}.

% Mobile Ambients  \citep{CardelliG00} is a process calculus for modeling mobile agents in wide-area networks.
% It is a model where the notion of mobility is intimately related to that of barrier crossing: 
% we find computational ambients that are hierarchically structured; 
% agents or processes are confined to ambients and ambients move under the control of agents.
% A context bisimilarity which characterizes barbed congruence in Mobile Ambients
% has been defined  by \citet{MerroN05}.
% BRIEFLY CITE CALCULI SUCH AS SEAL HERE
% 
% 
% Higher-order petri nets in:
% Higher-order Petri net modelling - techniques and applications (2002): \cite{Janneck02}
% 
% This seems to be a more reliable reference for higher-level nets: \cite{Smith96}
% 
% That paper is cited by Hildebrandt, when speaking of business processes, another application of HO stuff
% (see ``Languages and Architectures for Pervasive Business Processes'').
% 
% CHECK ALSO THE PAPER CITED IN THOMAS' COORDINATION'08 PAPER: \cite{HoffmannM02} THOMAS EXPLAINS SOME OF THE DRAWBACKS.
% 
% 

\paragraph{Other proposals.}
In addition to the above-mentioned calculi, other proposals of calculi for higher-order concurrency
can be found in the literature. For the sake of conciseness, we only mention them
without expanding on their details.
\cite{RadestockE96} put forward a higher-order process calculus
for coordination in environments with distributed components.
\cite{OstrovskyPT02} propose a 
higher-order process calculus with broadcasting communication, and study 
its semantic theory in depth.
\cite{MeredithR05,MeredithR05-2} propose 
a \emph{reflective} higher-order process calculus
in which names, as in the $\pi$-calculus, are obtained by \emph{quoting}
processes.
\cite{HennessyRY05} have proposed sophisticated type systems
with dependent and existential types
for a distributed version of the $\pi$-calculus 
with higher-order communication
of parametrized  code.
\cite{MostrousY07,MostrousY09} have studied 
calculi for structured communication
with higher-order communication and 
type disciplines for them.
Higher-order process calculi oriented towards security issues have been put forward by 
\cite{MaffeisAFG08}, who propose a higher-order spi-calculus \citep{AbadiG99}
for code-carrying authorization, and by 
\cite{SatoSumii09}, who 
 define a higher-order calculus 
with cryptographic-like operations over terms such as decomposition.
They rely on environmental bisimilarities ---to be reviewed later on---
for developing the behavioral theory of their calculus.

%CALCULI FOR STRUCTURED COMMUNICATION WITH PROCESS PASSING (Papers by Dimitris and Nobuko).

% 
% A higher order spi calculus : 

%\subsection{Behavioral Theory}
%\subsection{Generalities}
%History, main issues (labeled bisimilarities, soundness, completeness, congruence, barbs).

\subsection{Behavioral Theory} \label{ss:pre-beht}
Here we review some works that have addressed the behavioral theory for higher-order languages.
We concentrate on works for higher-order process calculi.


\citet{San94} studies equivalences for versions of the $\lambda$-calculus possibly involving parallelism.
The objective is to find the finest behavioral equivalence on terms (i.e. the one that discriminates the most).
The starting point is Abramsky's applicative bisimilarity for the lazy $\lambda$-calculus \citep{Abramsky89}.
Two approaches are followed. In the first one, the equivalence induced by the encoding of the lazy $\lambda$-calculus into 
the $\pi$-calculus (so-called lambda observational equivalence) is studied. 
Such an equivalence is shown to be a congruence (using a direct proof, i.e., without appealing to the encoding into the $\pi$-calculus), 
and fully-abstract with respect to Levy-Longo trees, the
tree-like model for lazy $\lambda$-calculus terms. The second approach considers extensions of the pure $\lambda$-calculus.
A rule format for \emph{well-formed operators} is proposed for that purpose; intuitively, the rule format generates operators whose 
behavior only depends on their semantics, and not on their syntax. The most discriminating congruence is obtained when all 
well-formed operators
are admitted; such a congruence (so called rich applicative congruence) is shown to coincide with 
lambda observational equivalence. Non-determinism is shown to be the essential component for obtaining maximal discrimination. 






The definition of a satisfactory notion of  bisimilarity is 
a hard problem for a  higher-order process language.
In ordinary bisimilarity, as e.g. in CCS, 
 two processes  are 
bisimilar if  any action by one of them  can be
matched by an equal  action from the other in such a way that
the resulting  derivatives   are again  bisimilar.  The two
 matching actions  must be  syntactically   {\em identical}. This
condition is unacceptable 
 in higher-order concurrency; for instance it
  breaks  
vital algebraic
laws such as the commutativity of parallel
composition.
The approach taken by \citet{Tho90}, following earlier ideas by
\citet{AsGi88} and \citet{Boudol89},  
 is to require {\em bisimilarity\/}
rather than {\em identity\/} of the processes emitted in  a
higher-order output action.
This weakening is natural for higher-order calculi and the 
bisimulation  checks involved are simple. 
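For instance, given a relation $\mathcal{R}$, the output clause of higher-order bisimilarity can be schematically stated as follows (writing $\bar{a}\langle P' \rangle$ for the emission of process $P'$ along name $a$):
\[
\mbox{whenever } P \arr{ \bar{a}\langle P' \rangle } P'', \mbox{ there exist } Q', Q'' \mbox{ s.t. } Q \arr{ \bar{a}\langle Q' \rangle } Q'', \mbox{ with } P'\, \mathcal{R}\, Q' \mbox{ and } P''\, \mathcal{R}\, Q''.
\]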
\citet{San923} then   argued that this  form of  bisimulation,   called  
{\em higher-order bisimilarity}, is in general troublesome
or over-discriminating  as a behavioral equivalence, and 
 basic properties, such as  congruence, may be very hard to establish. 
He then  proposed {\em context bisimilarity} \citeyearpar{San96H}, a form of bisimilarity 
that
 avoids  the separation between  object part and 
continuation of an output action
  by  explicitly taking into account  the {context} in
which the  emitted  agent is supposed to go.
Context bisimilarity
yields more satisfactory process equalities,
and coincides with contextual equivalence (i.e., barbed congruence). 
However, it
has the drawback of a universal quantification over contexts, which can make it hard,
in practice, to check equivalences. 
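Such a quantification can be appreciated in the (early) output clause of context bisimilarity which, following \citep{San96H} and writing $\bullet$ for the pseudo-application of an abstraction to a concretion, can be schematically stated as:
\[
\mbox{whenever } P \arr{ \bar{a} } C, \mbox{ then for all closed abstractions } F \mbox{ there exists } D \mbox{ s.t. } Q \arr{ \bar{a} } D \mbox{ and } F \bullet C\; \mathcal{R}\; F \bullet D
\]
where the concretions $C$ and $D$ comprise both the emitted process and the continuation, and the abstraction $F$ stands for an arbitrary receiving context.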

  {\em Normal bisimilarity} \citep{San923,JR05,Cao06} is a
   simplification of context bisimilarity without   universal quantifications in
the output clause.
The input clause is simpler too: normal bisimilarity can indeed be
viewed as  a form of \emph{open bisimilarity} \citep{San96acta}, where the
formal parameter of an input is not substituted in the input clause,
and free variables of terms are observable during the bisimulation game.
However, the  definition of the bisimilarity may depend on the
operators in the calculus, and the correspondence with context
bisimilarity may be hard to prove. 
%Interestingly, t
The characterization of context bisimilarity using normal bisimilarity in \citep{San96H}
exploits \emph{triggered bisimilarity}, an intermediate characterization of bisimilarity 
defined over \emph{triggered processes}, i.e. a set of processes in which every 
communication takes place by the exchange of a trigger.
\cite{San96H} obtains this characterization for the \emph{weak} case; 
however, the proof technique based on going through triggered agents does not carry over to the 
strong case, as it adds extra internal actions. 
Recently, \citet{Cao06} showed that 
strong/weak context bisimulation and strong/weak normal bisimulation coincide in
higher-order $\pi$-calculus. To do so, he
goes through {\em indexed bisimilarity}, 
an equivalence defined over a variant of 
the calculus in which every prefix is indexed. 
%{\em Aug 08: 
%In fact, in \citep{San96H} 
%the coincidence of context and normal bisimulation in the strong case is not addressed: 
%the proof technique for the weak setting (defining triggered processes and a suitable {\em triggered} bisimilarity over them) does not
%carry over to the strong case, as this adds extra tau actions. 
Cao then uses indices to distinguish ``internal'' tau actions (those originating inside a component) 
from ``external'' ones (those taking place among different components). 
The former ---which are essentially the kind of actions added by the encoding into triggered processes--- are neglected in the 
indexed versions of the bisimulation games. 
Apart from settling the issue of the coincidence between
normal and context bisimilarities in the strong case, the work in \citep{Cao06} provides a uniform setting for proving
the coincidence of bisimilarities: in the 
index-based proof technique
the coincidence for the weak case results as a particular instance.
% Mikkel says the following paper is wrong:
%In \citep{Cao07} the same indexed language is the basis of a distributed higher-order $\pi$-calculus.
%Based on a modified semantics (which includes variable locations for internal tau actions), three 
%notions of bisimilarity are shown to coincide: normal and context bisimilarities, and a version of barbed equivalence
%that doesn't test for actions in some given names (i.e. barbs) but for actions between certain locations.
%}

A drawback of  
the characterization of context bisimilarity with normal bisimilarity 
in 
\citep{San96H} is that it is restricted to languages with finite types.
\cite{JR05} extend such a characterization to a language with recursive types. 
% Their objective is to dispense with a restriction on finite types: a
%; only for some (asynchronous) subcalculi this restriction can be removed (check this and add details). 
Their approach is based on an enriched labeled transition system 
in which special operators representing references to triggers 
are included in the labels. As a result, a direct proof of soundness is possible: bisimilarity based on this enriched labeled transition system implies context bisimilarity. Completeness also holds; its proof requires the original trigger-based approach. 
%All in all, this work shows that by postponing the interpretation of the triggers until the completeness proof the restriction of finite types can be removed. This way, a fully-abstract characterization of context bisimilarity is possible for full HOpi in the presence of recursive types.

% EXPAND ON THIS, IF TIME PERMITS
% As in the $\pi$-calculus, in higher-order calculi we find the early/late distinction
% to define bisimulations. 
% {\em Fix, Aug 08: Consider the characterization of barbed congruence using 
% context bisimilarity (NOTE: at some point it should be said that the main goal is to
% characterize barbed congruence.). In an {\em early} definition, the evolution that matches an input or output action depends on 
% the choice of the interacting context. This is not the case for in the {\em late} definition, 
% in which evolution and interacting context are independent.}
% The distinction can be observed in 
% %The difference can be intuitively understood by observing 
% the order of the quantifications in the bisimulation clauses. 
% In fact, let us we compare 
% the output clause of context bisimilarity
% in both styles. First the 
% %late style 
% early style
% %($\bullet$ denotes pseudo-application):
% %$$\mbox{whenever } P \arr{ \bar{a} } C, \mbox{ then {\em for all} closed abstractions } F\, \mbox{{\em there exists} } D \mbox{ s.t. } Q \arr{ \bar{a} } D \mbox{ and } F \bullet C\, \R\, F \bullet D.$$
% %}
% Now the late one:
% %\small{ 
% %}
% %\[
% %\mbox{whenever } P \arr{ \bar{a} } C, \mbox{{\em  there exists} } D\, \mbox{s.t. } Q \arr{ \bar{a} } D \mbox{ and } F  \bullet C\, \R\, F \bullet D 
% %\mbox{ {\em for all} closed abstractions } F.
% %\]
% As opposed to the $\pi$-calculus case, in higher-order languages both styles coincide \citep{San96H}.

%Congruence studies are usually done in the late one (explain why!, see Baldamus thesis); nevertheless
%Davide says that the use of howe's technique in the early style (by hilde) is
%novel.
In addition to the definition of a suitable notion of bisimilarity, 
a related hard problem  is the proof that the bisimilarity is a congruence.
In fact, for higher-order languages the ``term-copying'' feature inherited from the $\lambda$-calculus  can
make it hard to  prove that  bisimilarity is a  congruence.
A classical method for proving congruence of higher-order bisimulations is that 
of \citet{howe}. 
%It consists in defining a congruence
%inductively in the structure of the process terms that includes a late labeled bisimulation,
%and then prove it to be included in the labeled bisimulation as well. Howe's method, 
Originally introduced for (lazy) functional programming languages, this method 
was first adapted to higher-order process calculi by 
%applied to a calculus similar to  by 
\citet{Bal98} and %(using a variant of Plain CHOCS with local names and code mobility), 
%Thomsen 
\citet{Tho89,Tho93} who used it for (variants of) CHOCS and Plain CHOCS. 
More recently, it has been used by 
\citet{FerreiraHJ98} ---for a concurrent version of ML--- and by 
Hildebrandt et al. 
\citeyearpar{HilBun04,GH05}, %for untyped and typed versions of 
to show that late and input early bisimilarities are congruences in  untyped and typed versions of 
Homer. %, a calculus of non-linear, higher-order mobile embedded resources (see below).

Recently, as a means of alleviating some of the problems Howe's method entails when used for
concurrent languages (most notably, its lack of flexibility), 
\citet{SangiorgiKS07} proposed {\em environmental bisimulations}, 
a %bisimulation 
method for higher-order languages that 
aims at making proofs of congruence easier and compatible with the so-called up-to techniques \citep{San98MFCS}.
Roughly, an environmental bisimulation 
makes a clear distinction between the terms tested in the bisimulation clauses 
and the environment, that is, an observer's current knowledge. As such, for instance,
in the output clause of the environmental bisimulation for HO$\pi$, the emitted processes become part of the environment;
the extruded names also receive special treatment inside the clause.
This is a more robust technique than previous approaches; it has been applied both to functional languages (the pure $\lambda$-calculus
and a $\lambda$-calculus with information hiding) and to concurrent ones (the higher-order $\pi$-calculus).

Two very recent works further develop the theory of environmental bisimulations. 
\cite{SatoSumii09} adapt and extend it 
in the setting of a higher-order, applied $\pi$-calculus featuring 
cryptographic operations such as encryption and decryption.
\cite{KoutavasH09}
propose a first-order behavioral theory for higher-order processes based on the combination of 
the principles of environmental bisimulations and the improvements to normal bisimilarity proposed by \cite{JR05}.
At the heart of the proposed theory is a novel treatment of name extrusions,
which is formalized as an LTS in which configurations not only contain the current knowledge of the environment and a process, but also information on the names extruded by the process. 
%Interestingly, the environment explicitly records information concerning triggers to emitted processes.
As a consequence, the labels of such an  LTS have a very simple structure.
The weak bisimilarity derived from this LTS is shown to be a congruence, 
fully abstract with respect to contextual equivalence, and 
to have a logical characterization using a very simple Hennessy-Milner logic.


\cite{LengletSS09-F,LSS08,LengletSS09} have 
studied the 
behavioral theory of 
variants of higher-order $\pi$-calculi
with restriction and/or passivation constructs. 
In \citep{LengletSS09-F,LSS08} they show that 
in a higher-order calculus with a passivation 
operator (such as Kell and Homer), the presence of a restriction operator disallows the 
characterization of barbed congruence by means of strong and normal bisimilarities.
They use Howe's method to prove congruence of a weak higher-order bisimilarity 
for a calculus with passivation but without restriction. 
This result is improved in \citep{LengletSS09} where barbed congruence is characterized 
for a higher-order process calculus with both passivation and restriction.
To that end, they exploit Howe's method with the aid of so-called 
\emph{complementary semantics}, which coincide with contextual semantics and 
allow the use of Howe's method to prove soundness of weak bisimilarities.


% CHECK THIS, AND DETERMINE IF IT'S WORTH INCLUDING
% 
% In \citep{LiL04} the authors investigate an alternative LTS for HOpi, following/adapting ideas Sewell (Concur98, and then TCS 2002) 
% and Jeffrey and Rathke (LICS 2000).
% In their approach, labels represent the answer of the process to some test provided by surrounding context.
% Restricted names are assumed to be in the context, and so labels do not need to deal with them.
% As a result, labels are simpler and have a more uniform structure. 
% The LTS includes a commitment relation that represents the possibility a process has of offering a certain test.
% The focus is on the strong bisimiliarity induced by this LTS, which is shown to be a congruence.
% It is also shown to coincide with barbed congruence (ie. the behavioral equivalence induced by the original reduction semantics)
% and with the original bisimilarity (strong and context) proposed by Sangiorgi.
% 
% %mention up-to-techniques by Pous \citep{PousTR07}
% 
% 
% \emph{may 09: It might be interesting to mention applications of coinductive techniques for higher-order languages, notably in works by Hennessy, Merro, Zappa-Nardelli in which the techniques have been used for Ambient-like calculi, and by Parrow, in a proof of correctness of an encoding of spi into pi.}

%\subsubsection{Rule formats for HO languages}
The congruence of bisimilarity can also be approached by means of (syntactic) rule formats (see, e.g., \citep{MousaviRG07} 
for a survey on rule formats and the metatheory of structural operational semantics).
These are syntactic restrictions on operational rules that guarantee that a given notion of bisimilarity is a congruence
for any language whose operational semantics adheres to the format. 
\citet{Bernstein98} proposes a rule format (promoted tyft/tyxt) for languages with higher-order features.
The paper shows that for any language defined in the format, strong bisimulation is a congruence.
The approach is applied to the lazy $\lambda$-calculus, the $\pi$-calculus, and CHOCS.
In all cases, the studied equivalence is bisimilarity; other behavioral equivalences, 
such as applicative bisimulation or higher-order bisimulation, are not considered.
Also, the format imposes a number of restrictions on labels. 
In \citep{MousaviGR05} both these shortcomings are addressed.
The authors build on Bernstein's work and propose a more general and relaxed rule format which induces congruence of (strong) higher-order 
bisimilarity; they use CHOCS to illustrate the format. The definition of suitable rule formats for other, more satisfactory 
notions of bisimilarity (say, normal and context bisimilarity) is left 
 in \citep{MousaviGR05} as an open question. 
%(Check works by Bloom, Howe and Sangiorgi on SOS formats for functional languages; Sangiorgi analyzes concurrent lambda).

% \subsection{Other Reasoning Techniques}\label{ss:pre-other}
% \subsubsection{Denotational Semantics}
% \cite{Ramesh92} proposes a fully-abstract semantics for CHOCS, using Testing. COMPLETE
% 
% Determine how these are related:
% Matthew Hennessy: A Fully Abstract Denotational Model for Higher-Order Processes Inf. Comput. 112(1): 55-95 (1994)
% Matthew Hennessy: Higher-Order Process and Their Models. ICALP 1994: 286-303
% Matthew Hennessy: A Fully Abstract Denotational Model for Higher-Order Processes (Extended Abstract) LICS 1993: 397-408
% 
% \subsubsection{Logics}
% We conclude this part of the review by mentioning two works on logical characterization of bisimulations in higher-order calculi.
% Drawing inspiration from the Hennessy-Milner logic, 
% 
% %MERGE THE FOLLOWING TWO DESCRIPTIONS:\\
% \cite{AmadioD95} characterize strong context bisimilarity in higher-order process calculi with static scoping.
% %Plain CHOCS with a negation-free logic, and provide a complete proof system for it. 
% %\cite{AmadioD95} propose a logical characterization of bisimilarity for Plain CHOCS (ie. static binding).
% Unlike similar works on logic characterizations that rely on denotational approaches, 
% the authors rely on an operational approach, and study extensions for the Hennessy-Milner logic.
% This is motivated by the difficulties in obtaining domain theoretical models for higher-order languages with static scoping. 
% The main problem in extending the logic characterizations proposed for the first-order case 
% is expressing the dependencies on restricted names between the sender of an object and the object itself. 
% (This is reminiscent of the problems that led to the definition of context bisimilarity.)
% An extension of the Hennessy-Milner logic 
% that characterizes bisimilarity in the restriction-free fragment of the language
% is proposed, and two complete proof systems for it are developed.
% 
% ADD XU STUFF \cite{Xu07}.
% 
% %The denotational approach used in other works (WHICH ONES?) is abandoned, and the operational approach given by the HML are the stating point. Two proof systems for the subcalculus without restriction are proposed.  
% 
% %Two motivations for HO calculi are identified: in the first one, the idea is to add parallel/concurrent constructs to functional programming frameworks; in the second, the idea is to model the notion of code transmission.
% 
% Baldamus et al. \citeyearpar{BaldamusD97,Bal98}
% would later 
% improve \citep{AmadioD95} by providing a modal 
% characterization of {\em weak} context bisimulation. For their results, they go through 
% {\em existential bisimilarity}, which is shown to coincide with context bisimilarity.
% EXPAND ON BALDAMUS WORK.
% 
% \subsubsection{Types}
% A series of works by Hennessy and Yoshida on types for higher-order calculi:
% see 
% 
% \cite{YoshidaH02,YoshidaH99,YoshidaH00}
% (the journal version subsumes the two conference papers.)
% 
% The above are improved in \cite{Yoshida04}
% 
% Perhaps also related:
% \cite{HennessyR02}
% 
% Also: termination stuff by Romain: FSEN'09.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%


%\newpage

\section{Expressiveness of Concurrent Languages}\label{s:pre-expr}
% Establishing coherent relationships between different models is a natural concern in science at large.
% From a broad perspective, our interest is in models of concurrent computation and their fundamental aspects. 
% One such aspects concerns their \emph{expressiveness}, or the ability such languages have for describing 
% some precise phenomena.
In this section we give a broad overview of the main approaches to the 
\emph{expressiveness} of concurrent languages. 
We focus on the issues and techniques we shall use in this dissertation; the reader is
referred to, e.g., \citep{Parrow08}, for a recent survey on the area.

We discuss general issues in expressiveness in
Section \ref{ss:general}.
Then, in Section \ref{ss:encoding}, we briefly review some of the notions of encoding that have been proposed in the literature.
A classification of the main kinds 
of expressiveness results and the approaches to obtain them is presented in 
Section \ref{ss:expr-approaches}.
Finally, 
we report on previous efforts on the expressiveness of higher-order concurrent languages (Section \ref{ss:expr-ho}).

Throughout the section, we shall follow a few notational conventions.
We use $\mathcal{L}_1, \mathcal{L}_2, \ldots$ to range over languages;
we use $\approx$ (possibly decorated) to denote a suitable behavioral equivalence.
Also, $\arro{}$ and $\Ar{}$ denote some (reduction) semantics 
and its reflexive, transitive closure, respectively.

%Perhaps arguing something/mentioning along the lines of \cite{Baeten91}

\subsection{Generalities}\label{ss:general}
An important criterion for assessing the significance of a paradigm is its \emph{expressiveness}. 
While in other areas of computer science (most notably, automata theory), 
the notion of  expressiveness is well-understood and settled, in concurrency theory
there is yet no agreement on %a set of notions that 
a formal
characterization of the \emph{expressive power}
of a language, possibly with respect to that of some other language or model.
%allowing one to assert that a language $\mathcal{L}_1$ is more expressive than some other language $\mathcal{L}_2$.
While such a unified theory would certainly be desirable, 
%the very different nature and purpose of 
the wide variety of existing models for concurrency 
(and consequently, of the expressiveness issues inherent to them)
strongly suggests that a single theory for language comparison
embracing them all does not exist. 


%As \cite{Palamidessi03} remarks, 


%Of course, this does not prevent from formulating the set of notions and principles underlying
%the notion of comparison in each case. 


%Expressiveness studies aim at obtaining formal assessments of the capabilities that 
%concurrent languages

The crux of expressiveness studies is the notion of \emph{encoding}, 
i.e., a function (or map) $\enco{\cdot}$ 
%that relates (
from the terms of a \emph{source language} 
into the terms of a \emph{target language} 
that satisfies certain \emph{correctness criteria}.
These criteria enforce both \emph{syntactic} and \emph{semantic} 
conditions on the nature of $\enco{\cdot}$. 
Indeed, 
the main source of difficulty in defining a unified theory for language comparison
lies precisely in the exact definition of these criteria:
depending on the purpose and on the given language(s), 
the set of applicable criteria might vary, and 
some criteria might be more adequate than others. 
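As an illustration, two semantic criteria commonly imposed on an encoding (often gathered under the name of \emph{operational correspondence}) are completeness and soundness with respect to reduction. Under the conventions above, they can be roughly stated as:
\[
\mbox{if } P \arro{} P' \mbox{ then } \enco{P} \Ar{} Q \mbox{ for some } Q \approx \enco{P'}
\qquad
\mbox{if } \enco{P} \arro{} Q \mbox{ then } P \Ar{} P' \mbox{ and } Q \Ar{} Q' \mbox{ for some } P', Q' \mbox{ with } Q' \approx \enco{P'}.
\]
This is but one possible formulation; the precise criteria adopted vary across the literature.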

%At this point, the reader might rightly wonder about what we mean with the 
From the point of view of 
%But, what is 
their \emph{purpose}, % of an expressiveness study?
%Very broadly, 
expressiveness studies 
can be broadly seen as aiming at two kinds of results: \emph{encodability} and 
\emph{non-encodability} (or \emph{impossibility}) results.
As their name suggests, the former are concerned with the \emph{existence} of an encoding, 
whereas the latter address the opposite issue. 
These two kinds of questions are intimately related as, given two languages $\lang{1}$ and $\lang{2}$,
in order to assert that $\lang{1}$ is \emph{more expressive} 
than $\lang{2}$, 
one needs to provide instances of both kinds of results: 
one should exhibit an encoding $\enco{\cdot}: \lang{2} \to \lang{1}$ and, at the same time, 
one should provide a formal argument ensuring that an encoding $\enco{\cdot}: \lang{1} \to \lang{2}$
does not exist. 
That is, it should be made clear that while $\lang{1}$ is able to express all 
the behaviors of $\lang{2}$,
there are some behaviors in $\lang{1}$ that $\lang{2}$ is unable to represent.
It should then be clear that the correctness criteria for an encodability result
should differ from those for an impossibility result.
Indeed, for encodability results 
one would like to exhibit the best encoding possible, 
i.e., one satisfying the most demanding correctness criteria possible;
in contrast, for impossibility results
one would like to rely on the most general formal argument, 
i.e. one satisfying the least demanding correctness criteria possible.
Not surprisingly, the proof techniques involved and the ingenuity
required to obtain either result can be quite different.

Another broad classification of expressiveness studies takes into account whether or not
the expressive power of a given language is analyzed with respect to another language.
In other words, whether one is interested in 
\emph{absolute} 
or in
\emph{relative} expressiveness. 

In studies of absolute expressiveness the interest is therefore in assessing the 
expressive power that is intrinsic to the language and its associated semantics:
as \cite{Parrow08} explains, 
this question entails determining exactly the transition systems 
---as well as the operators on them--- that are expressible in a given language. 
That is, the focus is on the expressiveness of the terms of the language, and 
on the kind of operators that are expressible in it. 
These questions depend on suitable denotations of labeled transition systems, 
which explains the fact that expressiveness results of this kind have been reported 
only for \emph{basic} process calculi, with relatively simple labels \citep{Parrow08}.
A pioneering work in this direction is De Simone's study of the expressive power of the MEIJE process algebra \citep{Simone85}.
%Other yardsticks, such as, e.g., the finite-state behaviors might be also useful in certain settings.
A seemingly widespread approach to absolute expressiveness relies on some standard model of computation
---rather than on the semantic machinery of the language---
to assess the expressive power of a language. 
A common yardstick here is Turing completeness, 
which is generally shown by exhibiting an encoding of some Turing-equivalent model into the given language.
While this approach to absolute expressiveness ---sometimes referred to as \emph{computational expressiveness} (see, e.g., \cite{Aranda09,BusiZ09})---
takes some external model as reference (and as such, it is not entirely ``absolute''),
the fact that such reference models are widely known and/or understood 
often constitutes a satisfactory measure of the intrinsic expressive power of a language.


In relative expressiveness one measures the expressive power of a
given language $\mathcal{L}_1$ by taking some other language 
$\mathcal{L}_2$
as a reference. 
This is particularly appealing when, for instance, 
one wants to show that 
$\mathcal{L}_1$ and $\mathcal{L}_2$
%two different languages 
have the same expressive power.
In this case, 
the objective is 
to obtain two encodability results, one in each direction.
Another common situation 
is when one wishes to determine the influence a particular operator or construct has on the expressiveness of a language
$\mathcal{L}_1$.
In this case, 
the reference language $\mathcal{L}_2$ is the fragment of $\mathcal{L}_1$ without the operator(s) of interest, 
and one aims at showing that $\mathcal{L}_1$ cannot be encoded into $\mathcal{L}_2$.
If this can be done, then the difference in expressive power between the two languages
has been singled out: it lies in the operators that $\mathcal{L}_1$ has but
$\mathcal{L}_2$ lacks.
This is sometimes referred to as a \emph{separation result}, as 
the analyzed construct \emph{separates} the world with 
it from the world without it \citep{Yoshida02}.





\subsection{The Notion of Encoding}\label{ss:encoding}
We present a historical account of 
the evolution of the definition of \emph{encoding}, starting 
from proposals within programming languages at large and concluding with 
the most relevant proposals for concurrent languages.



\subsubsection{Early Approaches to Expressiveness}
%\emph{Definitions, a bit of history, perhaps quickly referring to works in other communities, most notably, functional programming.}

It is instructive to examine the origin of the notions of expressiveness and expressive power in the realm of 
programming languages at large.
The earliest attempts towards a formal notion of ``expressive power'' can be traced back to the late 1960s, when a proliferation of programming languages was first noticed. 
Perhaps the most influential work of that period is due to \cite{Landin66}, who proposed a unified framework aimed at describing \emph{families} of programming languages from which particular languages can be derived by an appropriate choice of primitives.
Main concerns in Landin's formal framework are conventions about user-defined names and functional relationships.

Later on, in the early 1970s, the question of the expressive power was studied by representing families of programs by means of \emph{program schemas}, i.e., abstract representations of programming features with uninterpreted constant and function symbols (see, e.g., \citep{ChandraM76}). This line of research ---sometimes referred to as \emph{comparative schematology}--- is mainly concerned about the expressiveness of single constructs.

\cite{Felleisen91} developed a framework for expressiveness studies in the context of functional languages.
His framework is suited for comparing a language and some \emph{extension} of it; 
hence, it is  suited for studies of relative expressiveness as introduced before.
The framework departs from the idea of \emph{eliminable} syntactic symbols as proposed in logic by Kleene and others.
More concretely, given two languages $\lang{1}$ and $\lang{2}$ such that 
$\lang{1} \subseteq \lang{2}$, if the additional symbols/constructs of $\lang{2}$
are eliminable (with respect to $\lang{1}$) then $\lang{2}$ is said to be a \emph{definitional extension} of $\lang{1}$.
Several notions and concepts that we shall encounter in ``modern'' studies of expressiveness of concurrent languages can be found already in Felleisen's work. 
For instance, the crucial observation that 
the key to (programming) language comparison is a restriction on the set of admissible translations between (programming) languages.
This observation is represented by structural (syntactic) and semantic conditions; 
while the former include notions such as compositionality of translations and homomorphism of a translation with respect to some operator, 
the latter is represented by the preservation of terminating behavior, a natural requirement in a functional setting.
In Felleisen's view, the expressiveness of a programming language is closely related to the programming discipline since, intuitively, programs written in the extension of some core language can be more readable than the programs written in the core language. 


\cite{Mitchell93} compares (functional) languages according to the ability of making sections of a program 
``abstract'' by hiding some details of the internal functioning of the code. 
He defines so-called \emph{abstraction-preserving reductions}, which are compositional translations that preserve
observational equivalence.  Perhaps the simplest reduction of this kind is the one translating program blocks into 
function declaration and calls. Proofs showing that more involved reductions are abstraction-preserving might involve
appealing to the operational and denotational semantics of the languages in question. 
\cite{Riecke93} uses and extends the notion of abstraction-preserving reductions in the study 
of the expressive power of different evaluation strategies in the functional language PCF.
He shows that call-by-value and lazy PCF are equally expressive, and that both are more expressive than call-by-name PCF.


\subsubsection{Encodings Among Concurrent Languages: The Early Days}


\cite{Shapiro89} was the first to study expressiveness issues for concurrent languages. 
He proposed the notion of \emph{embedding} as a way of comparing
%His proposal was first specific to the realm of 
concurrent logic programming languages; 
the languages considered are thus relatively similar, which makes it easy to focus on their differences.
An \emph{embedding} is composed of a \emph{compiler} and a \emph{viewer} (or \emph{decoder}).
Given two languages $\lang{1}$ and $\lang{2}$, the compiler is a function $c$ 
from programs of $\lang{1}$ into programs of $\lang{2}$, whereas the viewer is a function $v$ 
from observables of $\lang{2}$ into observables of $\lang{1}$. 
Both $c$ and $v$ form an embedding of $\lang{1}$ into $\lang{2}$ if the observables of every program $P$ in $\lang{1}$
correspond to the observables of the program obtained by compiling $P$ using $c$ and viewing (or decoding) its behavior using $v$.
In order to define a hierarchy of concurrent logic programming languages, 
this notion of embedding is tailored to the logic programming setting by requiring \emph{natural embeddings}, 
i.e., embeddings in which (a) the unification mechanism of one language is implemented in the unification mechanism of the other, and 
(b) logical variables of one language are mapped into logical variables of the other.
This proposal for language comparison was refined by \cite{Shapiro91}  and by \cite{BoerP90, BoerP94}.
We comment on both refinements next. 

\cite{Shapiro91} claims that no method similar to program schemas exists for comparison of concurrent languages.
 He then proposes a \emph{general framework} for language comparison, 
which relies on the (non) existence of \emph{mappings} that preserve the syntactic and semantic structure of the languages. 
Those mappings adhering to such preservation conditions are called \emph{embeddings}.
The framework is expressed in categorical terms, and is general enough so as to work for any family of languages with syntactic operations and a semantic equivalence.
Shapiro identifies three categories of embeddings that provide an incremental notion on the preservation of the semantic structure of languages: 
\emph{sound embeddings}, i.e. mappings that preserve observable distinctions; 
\emph{faithful embeddings}, i.e. sound embeddings that preserve the semantic equivalence;
\emph{fully-abstract embeddings}, i.e. embeddings that are faithful with respect to the congruence induced by the semantic equivalence.
The work concentrates on the formalization of separation results; 
a so-called \emph{separation schema} arises from considering parallel composition as the sole composition operation and by 
considering three properties:
\emph{compositionality}, i.e. the coincidence of the semantic equivalence with its induced congruence;
\emph{interference-freedom}, which disallows the parallel composition of a program with itself;
\emph{hiding}, i.e. the existence of programs that are semantically different from the trivial program, but whose composition is semantically equivalent to the trivial program. 
The framework for language comparison is used to provide a number of separation results among several concurrent languages and models, including Input/Output Automata, Actors, concurrent Prolog, and (variants of) CCS and CSP. 
In \citep{Shapiro92}, the general framework is also shown to be useful for formalizing 
positive (i.e. encodability) results.

%In \citep{BoerP90} the concept of \emph{modular embedding} was first proposed to compare the expressive power of concurrent logic languages.

After observing that the notion of embedding introduced by Shapiro fell short for 
formalizing certain separation results among concurrent constraint languages, 
\cite{BoerP94} introduced the refined notion of \emph{modular embedding}.
A modular embedding is an embedding that satisfies the following three restrictions.
First, since in the presence of non-determinism the domain of the observables of a language 
is a powerset, the decoder of the embedding 
is required to be defined elementwise on the 
set of observables. Second, the compiler is required to be \emph{compositional} with respect to the parallel composition and the non-deterministic choice
operators. Third, the embedding must be \emph{termination invariant}:  a success (resp. deadlock or failure) in the target language
must correspond to a success (resp. deadlock or failure) in the source language. 
The notion of modular embedding is then used to derive separation results in the context of concurrent constraint languages
with different communication primitives in guarded-choice operators. 
The key idea to achieve separation relies on a semantic argument: 
two variants 
are separated by showing that a certain closure property is satisfied by the semantics of one variant but not by the semantics of the other. 
The notion of modular embedding was also used in \citep{BoerP91} to show separation results for variants of CSP
with different communication primitives in the guards. Indeed, it is shown that asynchronous CSP is strictly less 
expressive than CSP, thus confirming results obtained by \cite{Bouge88}, who exploited the capability each variant has of 
expressing symmetric solutions to the leader election problem. 

\subsubsection{Encodings Among Concurrent Languages: Towards ``Modern'' Criteria}

%Similarly as in several other aspects of concurrency theory, t
The introduction of the $\pi$-calculus in the early 1990s
gave a significant momentum to the study of expressiveness issues in process calculi. 
Indeed, 
the simplicity and flexibility of 
name-passing 
%for the representation of mobile systems 
as embodied in 
the $\pi$-calculus triggered many works proposing variants or extensions of it. 
Such works addressed 
a wide variety of concerns, including, e.g., 
%with, e.g., 
polyadic communication \citep{Milner93},
asynchronous communication \citep{Boudol92,HondaT91},
higher-order communication \citep{Tho90,San923},
stochastic behavior \citep{Priami95}, 
structured communication \citep{HondaVK98},
security protocols \citep{AbadiG99,AbadiF01}.
While some of these variants were mainly only of theoretical interest, 
some others (e.g., \citep{Priami95,AbadiG99}) 
were aimed at 
exploiting working analogies between 
the behavior of mobile systems as in the $\pi$-calculus 
and that of systems in areas such as systems biology and security.

In this context, expressiveness studies 
for the $\pi$-calculus 
were then indispensable to understand 
its fundamental properties,
to identify the intrinsic sources of its expressive power, 
and to discern the relationships between its many variants.
As representative examples of works in these directions, we find studies on the properties of
the translation of polyadic into monadic $\pi$-calculus \citep{Yoshida96,QuagliaW05},
on the relationship between point-to-point and broadcasting communication \citep{EneM99},
on the different kinds of choice operators \citep{Nestmann00,NestmannP00} and, closely related,
on mechanisms for synchronous and asynchronous communication \citep{Palamidessi03,CacciagranoCP07}.
Probably as a consequence of the 
different motivations for approaching expressiveness, 
each of these works advocated its own 
definition of encoding, one in which the set of correctness criteria 
is defined in accordance with some specific working intuition or necessity.
In what follows we review some of those proposals and comment on their main features. 
For the sake of conciseness, we focus on a few, representative proposals 
---namely those by \cite{San923}, \cite{Nestmann96}, \cite{Palamidessi03}, and \cite{Gorla08}---
in order to give a broad overview 
of the area 
and to contrast certain aspects that we judge relevant.

As part of his study on the relationship between first-order and higher-order $\pi$-calculus, 
\cite{San923} identifies three phases in determining that a given source language can be
\emph{represented} in some target language:
\begin{enumerate}
 \item Formal definition of the semantics of the two languages;
\item Definition of the encoding from the source to the target language;
\item Proof of correctness of the encoding with respect to the semantics given.
\end{enumerate}

%The requirements made on each phase appear quite natural. 
Concerning the properties of (2), the only requirement is \emph{compositionality}, that is,
that the encoding of a term should depend only on the encodings of its immediate 
constituents. Given source and target languages $\mathcal{L}_s$ and $\mathcal{L}_t$,
an encoding $\encpp{\cdot} : \mathcal{L}_s \to \mathcal{L}_t$, 
and an $n$-adic construct $\mathtt{op}$ of $\mathcal{L}_s$,  
compositionality can be expressed 
as follows:
\begin{equation}\label{eq:compos}
 \encpp{\mathtt{op}(P_1, \ldots, P_n)} = C^{\mathtt{op}}[\encpp{P_1}, \ldots, \encpp{P_n}]
\end{equation}
where $C^{\mathtt{op}}$ is a valid process context in $\mathcal{L}_t$.
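As a simple illustration (not tied to any particular encoding in the literature), taking $\mathtt{op}$ to be parallel composition and choosing for $C^{\parallel}$ the context that simply puts its two holes in parallel yields a translation that is homomorphic with respect to $\parallel$:
\[
 \encpp{P_1 \parallel P_2} = C^{\parallel}[\encpp{P_1}, \encpp{P_2}] = \encpp{P_1} \parallel \encpp{P_2} \, .
\]
Generic compositionality also admits contexts that surround the translated constituents with additional target-level machinery.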
As for correctness criteria, the main criterion adopted is \emph{full-abstraction}, i.e.,
two terms in the source language should be equivalent if and only if their translations are equivalent:
\begin{equation}
 S_1 \approx_s S_2 \mbox{~if and only if~} \encpp{S_1} \approx_t \encpp{S_2} \,.
\end{equation}
That is, full-abstraction enforces both \emph{preservation} and \emph{reflection} of the equivalence of source terms.
Sangiorgi admits that full-abstraction represents a strong approach to representability.
As we shall elaborate later, 
the purpose of Sangiorgi is to \emph{transfer} reasoning techniques from the 
first-order setting to the higher-order one. 
In this sense, requiring full abstraction turns out to be necessary, given 
that target terms should be usable in any context, and the indistinguishability of two source terms
should imply that of their translations in order to switch from one language to another.
He also acknowledges that full-abstraction alone is not informative enough 
with respect to the relationship between source and target terms. 
To that end, 
he argues that 
full-abstraction should be complemented with some form 
of \emph{operational correspondence} relating a term and its translation. 

Based on his works on the encodability of choice operators into the (choice-free) $\pi$-calculus, 
\cite{Nestmann96} collects a number of desirable correctness criteria for encodings. 
As for full-abstraction, Nestmann comments that it might not be applicable in those cases
in which the source language is not equipped with a notion of equivalence. 
Then, a suitable notion of operational correspondence gains relevance. 
Operational correspondence is usually expressed as two complementary criteria.
The first one, \emph{completeness}, 
ensures the preservation of execution steps, i.e., that the translation is able to simulate all the computations
of the source term:
\begin{equation}
 S_1 \arro{}_s S_2 \mbox{~implies~}  \encpp{S_1} \Ar{}_t \encpp{S_2} \, .
\end{equation}
The second criterion, \emph{soundness}, ensures the reflection of execution steps, i.e.,
that the behavior of a term in the target language can be related to the behavior of its corresponding
term in the source language:
\begin{equation}\label{eq:soundness}
 \encpp{S_1} \Ar{}_t \encpp{S_2}  \mbox{~implies~}  S_1 \Ar{}_s S_2 \, .
\end{equation}
However, soundness as in (\ref{eq:soundness}) is not satisfactory as it 
disregards
the intermediate processes the translation of a source term might need to go through in order to simulate
its behavior. A refinement that considers such intermediate steps is the following:
\begin{equation}
 \mbox{if~} \encpp{S} \arro{}_t T \mbox{~then there is~} S \arro{}_s S' \mbox{~such that~} T \approx_t \encpp{S'} \, .
\end{equation}
A further refinement to soundness is the one that takes into account the \emph{administrative steps}
that an encoding might have to perform \emph{before} simulating a step of the source term: 
\begin{equation}
 \mbox{if~} \encpp{S} \Ar{}_t T \mbox{~then there is~} S \Ar{}_s S' \mbox{~such that~} T \Ar{}_t \encpp{S'} \, .
\end{equation}
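Schematically, the issue addressed by these refinements is that an encoding may simulate a single source step $S \arro{}_s S'$ via several target steps, for instance
\[
 \encpp{S} \arro{}_t R \arro{}_t \encpp{S'} \, ,
\]
where the intermediate process $R$ need not be the translation of any source term, and hence the first target step in isolation cannot be matched in the strict sense of (\ref{eq:soundness}).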

In addition to full-abstraction and operational correspondence, 
\cite{Nestmann96} considers two further correctness criteria:
\emph{effectiveness/efficiency} and 
preservation/reflection of \emph{deadlocks and divergence}.
Let us elaborate only on the latter criterion.
Nestmann regards it as interesting to 
consider both the reflection and the preservation of deadlocks.
The former is quite natural: 
the translation of a term 
should not deadlock if the given source term does not deadlock. 
Preservation of deadlocks is also reasonable as long as potential administrative
steps in the target side 
that might precede deadlock
are taken into account. As for divergence, 
Nestmann distinguishes between the kind of translation performed by
compilers and that performed by encodings. Indeed, while 
a compiler is not expected to add divergent behavior, 
Nestmann finds 
an encoding that adds divergence perfectly acceptable. 
To put this position into context, it is worth noticing that the issue of divergence is central to
the work in \citep{NestmannP00} where a trade-off between atomicity of committing a choice
and divergence is discovered. In fact, 
\cite{NestmannP00} propose 
two encodings of the $\pi$-calculus with input-guarded 
choice into the choice-free fragment: one 
encoding is atomic with respect to choice but introduces divergence; the 
other encoding is divergence-free but replaces the atomic commitment of choice with gradual commitment. 
Therefore, there could be scenarios in which correct encodings that add divergence might still be worth having. 



% \cite{Palamidessi97,Palamidessi03} proposes 
% a notion of good encoding that includes: 
% uniformity (homomorphic, renaming preserving) and semantically reasonable (preserving observables and termination).
% The results depend heavily on the encoding being homomorphic; this makes sense as the setting is of 
% distributed implementations. With generic compositionality things would break.
% Important: as she says: ``since we are interested in negative results, we consider a minimal set of requirements''

A well-known definition of encoding is the one proposed by 
\cite{Palamidessi03} 
as part of a comparison of the expressive power of synchronous and asynchronous communication in the $\pi$-calculus.
In short, she showed that there is no encoding of 
the synchronous $\pi$-calculus with mixed-choice into
the asynchronous $\pi$-calculus without choice. This separation result holds
under a notion of encoding in which syntactic criteria are captured
by the notion of \emph{uniformity}, which is given by the following two conditions:

%and semantic criteria preserve a \emph{reasonable semantics}, 
%as in the following definition.

%\begin{mydefi}[Encoding, \cite{Palamidessi03}]\label{d:pre-good-enc}
%An \emph{encoding} $\enco{\cdot}$ respects the following conditionsquired to be
%\begin{enumerate}
% \item \emph{Uniform}, i.e.
\begin{enumerate}
 \item homomorphism with respect to parallel composition, i.e., $\enco{P \parallel Q} = \enco{P} \parallel \enco{Q}$; 
 \item preservation of renaming, i.e., for any permutation of names $\sigma$ in the domain of the source language, there exists 
a permutation $\theta$ in the domain of the target language such that, 
for every name $i$, $\sigma(i) = \theta(i)$ and $\enco{\sigma(P)} = \theta(\enco{P})$.
\end{enumerate}

Palamidessi argues that uniformity is tailored for the representations of distributed systems, 
in which issues such as connectivity and coordination should be taken into account by any notion of encoding. 
This is particularly evident in requiring 
homomorphism with respect to parallel composition
rather than generic compositionality as in (\ref{eq:compos}) above.
This can be considered as a strong syntactic criterion. 
However, as Palamidessi claims, 
in the context of distributed systems 
homomorphism with respect to parallel composition finds justification 
as it is essential to ensure that the encoding 
preserves the degree of distribution of the system, 
i.e. the encoding of a distributed system does not add 
coordinating processes (or sites).

Furthermore, in Palamidessi's expressiveness results, 
encodings are required to be \emph{semantically reasonable}.
Quoting \cite{Palamidessi03},
encodings are required to preserve 
\begin{quote}
a semantics 
which distinguishes two processes $P$ and $Q$ whenever there exists a (finite or infinite)
computation of $P$ in which the intended observables (some visible actions) 
are different from the observables in any (maximal) computation of $Q$.
\end{quote}

It is worth noticing that this is quite a liberal way of capturing requirements
such as operational correspondence and the reflection/preservation of
deadlocks and divergence, discussed above. 
\cite{Nestmann00} has studied the results in \citep{Palamidessi03} by taking
correctness criteria more precise than ``preservation of a reasonable semantics''.
Indeed, he shows that 
while the $\pi$-calculus with mixed-choice 
can be translated into the asynchronous $\pi$-calculus, 
a trade-off 
between divergence and the exact notion of compositionality arises:
there are encodings that are uniform but that introduce divergence, 
whereas encodings that do not introduce divergence only respect generic compositionality.


% CITE THIS AS AN EXAMPLE OF HOW ENCODING VARY ACCORDING TO THE CIRCUMSTANCES:
% \citep{Yoshida02}. There the notion of \emph{standard encoding} is introduced, in Def 5.1. page 259. 
% As cited by \cite{LaneveV99,LaneveV03}: 
% ...Here a concept of standard encoding
% is used, which means that the encoding is homomorphic, respects injective substitutions,
% and preserves weak observations and reductions. In addition to this, the encodings
% are required to be message-preserving, that is, $[[u x]] \approx u x$...
% 
% The message preserving condition is used to proof negative results. As 
% she says (pg 262): `` the message preserving condition  means that we do not change the basic meaning of
% behavior by translations and is indeed satisÿed in the known fully abstract translations
% of -calculus into the asynchronous -calculus''

Recent works have questioned the r\^{o}le of full-abstraction 
as a correctness criterion in encodings of concurrent languages (see \cite{BeauxisPV08} for an insightful discussion).
Their motivation is that when one is interested in relative expressiveness
---rather than in, for instance, the transference of reasoning tools from one language to another---
full-abstraction is of little significance, as it is too focused on the actual equivalences considered.
This is precisely the motivation 
%of a recent proposal 
for a unified approach to 
correctness criteria in encodings recently proposed by \cite{Gorla08}.

Gorla's proposal defines a kind of meta-theory for relative expressiveness, based 
on a set of encodability criteria formulated in abstract terms. 
As in \citep{Felleisen91}, the criteria are divided into \emph{structural} (i.e., syntactic) and \emph{semantic}.
The former include 
a form of compositionality as in (\ref{eq:compos}) but where the context 
is parametrized by 
the set of free names of the source terms, and 
a condition on the independence from the actual names used in source terms that generalizes condition (2) in the 
definition of uniform encoding given by Palamidessi. 
Semantic criteria include 
a form of operational correspondence that is defined up to the ``garbage terms'' 
that an encoding might produce;
divergence reflection, that is, that the encoding does not add divergence; 
and \emph{success sensitiveness}, i.e., a criterion which, 
relative to some notion of ``successful computation'', requires that 
a source term reaches success if and only if its translation does.
Sensible notions of success include observables such as barbs \citep{MiSa92} 
or the outcomes from tests as in behavioral equivalences/preorders based on testing \citep{NicolaH84}.
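Writing $S \Downarrow$ for ``$S$ reaches success'' under the chosen notion of successful computation, success sensitiveness can be stated succinctly as
\[
 S \Downarrow \mbox{~if and only if~} \enco{S} \Downarrow \, .
\]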
A significant advantage of the proposal in \citep{Gorla08} is that 
it can be exploited by diverse concurrent languages (with different behavioral equivalences)
and, to a certain extent, it can be used to reason abstractly about encodings and their properties. 
In order to illustrate its relevance, the proposal has been instantiated so as to obtain results 
previously proposed in the literature \citep{GorlaTR06}, and to offer more straightforward proofs for other results.

To conclude, these different proposals for the definition of encoding
and its associated correctness criteria only reinforce the idea that a unified notion of encoding 
is unlikely to exist. 
In fact, we have seen how the definitions vary depending on the final purpose of the expressiveness study. 
Hence, a particular definition of encoding should not be judged solely on the basis of its 
differences with respect to other notions of encoding, which will most likely be aimed at different purposes.
A current debate concerns the r\^{o}le of full-abstraction as advocated by, e.g., \cite{San923}.
In our view, 
%We think that 
%if one understands that 
the crucial insight here is to understand that 
(i) the transference of reasoning techniques from one language to another
and (ii) the study of issues of relative expressiveness 
are essentially two \emph{different goals} that expressiveness results can aim at.
As such, 
one cannot expect correctness criteria aimed at (i) 
to 
make sense in settings in which the interest is in (ii), and vice versa. 
% 
% be satisfactory
% then full-abstraction finds justification for issues related to (i). 
% Conversely, 
% when one is interested in studies
% This of course does not mean that full-abstraction should be necessarily considered when addressing issues related to (ii).

\subsection{Main Approaches to Expressiveness}\label{ss:expr-approaches}
Having reviewed some representative definitions of encoding, 
here we propose a very broad classification of 
approaches for obtaining expressiveness results. Our classification is not
intended to be exhaustive or conclusive; it simply provides a way of presenting certain 
commonly used techniques and of emphasizing their differences.

\subsubsection{Encodability of Computational Models}
This is a rather widespread approach to 
studies of absolute expressiveness.
The objective is 
to demonstrate the (full) computational expressiveness
of a language or model by means of the encodability of a
Turing complete model. Notice that, under certain conditions, such an encoding is enough to
demonstrate that most relevant decision problems are undecidable.

Examples of Turing complete models used in expressiveness studies 
are 
Random Access Machines (RAMs) \citep{ShepherdsonS63}, 
Minsky machines \citep{Minsky67}, and Turing machines. 
Roughly speaking, both RAMs and Minsky machines are models composed of \emph{registers}
(or \emph{counters})
that hold natural numbers, a set of labeled \emph{instructions}, 
and a \emph{program counter} indicating the instruction currently in execution. The main difference between the two
is that while a RAM considers a finite set of registers, a Minsky machine requires only two of them
to ensure Turing completeness.
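To make the model concrete, the following sketch (in Python, with a hypothetical instruction format of our own choosing, not taken from any of the cited works) interprets a two-counter Minsky machine: an \texttt{INC} instruction increments a register and jumps, while a \texttt{DECJ} instruction decrements a nonzero register and jumps, or jumps to an alternative label when the register is zero.

```python
# Illustrative sketch: a two-counter Minsky machine interpreter.
# A program maps labels to instructions:
#   ("INC", r, nxt)        -- increment register r, jump to label nxt
#   ("DECJ", r, nxt, zero) -- if register r > 0, decrement it and jump
#                             to nxt; otherwise jump to zero
# The machine halts when the program counter holds an unused label.

def run(program, r0=0, r1=0, start=0, max_steps=100_000):
    regs = [r0, r1]
    pc, steps = start, 0
    while pc in program:
        if steps >= max_steps:
            raise RuntimeError("step budget exhausted (possibly divergent)")
        steps += 1
        instr = program[pc]
        if instr[0] == "INC":
            _, r, nxt = instr
            regs[r] += 1
            pc = nxt
        else:  # "DECJ"
            _, r, nxt, zero = instr
            if regs[r] > 0:
                regs[r] -= 1
                pc = nxt
            else:
                pc = zero
    return regs

# Example program: move the contents of register 0 into register 1.
MOVE = {
    0: ("DECJ", 0, 1, 2),  # r0 > 0: decrement, go to 1; r0 = 0: halt
    1: ("INC", 1, 0),      # increment r1 and loop back
}
```

For instance, \texttt{run(MOVE, r0=3)} returns \texttt{[0, 3]}. Encodings of Minsky machines into process calculi typically mirror exactly this structure: one process per instruction label, together with a representation of each register as a counter.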

One of the first works to use this approach is \citep{BusiGZ00} 
in which 
RAMs are encoded into variants of the coordination language Linda. 
In turn, such work has served as inspiration for a number of works addressing similar concerns
(see, e.g., \citep{BusiZ00,BusiZ04,Busi09,MaffeisP05}). 
The use of complete Turing machines 
(i.e., with a tape, a transition relation, and initial and accepting states) 
has been reported by \cite{HLS02} in their study of the expressiveness of the Ambient logic.
Similarly, \cite{CardelliG00} have reported an encoding of Turing machines in the Ambient calculus.
In addition to Turing complete formalisms, models of computation strictly less expressive 
than Turing machines
have been considered for expressiveness purposes. 
\cite{Christensen93} shows that 
the class of languages generated by Basic Parallel Processes (BPP, a fragment of CCS with neither communication nor restriction) is contained in the class of context-sensitive languages. 
In the realm of 
(process) rewrite systems, efforts towards a general Chomsky-like hierarchy of process languages have been 
made by \cite{Moller96} and by \cite{Mayr00}. 
More recently, 
\cite{ArandaGNV07} have studied fragments of CCS with replication
with respect to context-sensitive, context-free, and regular languages.

The fact that several works have appealed to encodings of Turing complete models has 
raised the question as to what criteria such encodings should satisfy.
That is, the notion of encoding, which is crucial to studies of relative expressiveness,
arises in studies of absolute expressiveness as well.
In this case, the criteria are oriented towards determining
how \emph{faithful} such encodings are with respect to the behavior of a Turing machine.
In fact, notions of Turing completeness 
that are ``weaker'' than the classical one
have been put forward for 
explaining the computational expressiveness of 
certain process calculi. \cite{MaffeisP05} and \cite{Bravetti09}
have analyzed and defined precisely these weaker notions.
Let us recall such criteria, as identified by \cite{Bravetti09}.

\begin{mydefi}[Turing completeness for process calculi, \citep{Bravetti09}]\label{d:tc}
A language $\mathcal{L}$ is said to be \emph{Turing complete} if, given 
a partial recursive function and an input for it,
there is a process (i.e., a term of the language) in $\mathcal{L}$
such that
\begin{enumerate}
 \item If the function \emph{is defined} for the given input, then \emph{every}
 computation of the process \emph{terminates} and makes the corresponding output available;
 \item If the function \emph{is not defined} for the given input, then
 \emph{every computation} of the process \emph{does not} terminate.
\end{enumerate}
\end{mydefi}

There are process calculi in which Turing complete models can be encoded in such a way 
that at least 
the terminating computations respect the computations of the considered model. 
Such calculi satisfy the following weaker criterion.


\begin{mydefi}[Weak Turing completeness for process calculi, \citep{Bravetti09}]\label{d:wtc}
A language $\mathcal{L}$ is said to be \emph{weakly Turing complete} if, given 
a partial recursive function and an input for it,
there is a process (i.e., a term of the language) in $\mathcal{L}$
such that
\begin{enumerate}
 \item If the function \emph{is defined} for the given input, then 
there exists 
\emph{at least} one 
 computation of the process that \emph{terminates} and makes the corresponding output available;
 \item If the function \emph{is not defined} for the given input, then
 \emph{every computation} of the process \emph{does not} terminate.
\end{enumerate}
\end{mydefi}

Notice that the difference between the two notions thus lies in the first item.
Indeed, if the function is defined, then
according to the first notion every computation of the corresponding process terminates,
whereas in the second notion the corresponding process may have computations that do not terminate.
Encodings used to show Turing completeness for process calculi
as in Definition \ref{d:tc} are sometimes called \emph{deterministic} or \emph{faithful}
(see, e.g., \citep{Busi09});
in contrast, encodings used to show weak Turing completeness for process calculi
as in Definition \ref{d:wtc} are called \emph{non-deterministic} or \emph{not faithful}
(see, e.g., \citep{Aranda09}).
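To make the contrast concrete, the first items of the two criteria can be rendered as checks over an abstracted set of computation outcomes. This is a minimal sketch under our own assumptions (the outcome encoding and all names are illustrative, not part of \citep{Bravetti09}):

```python
# Our abstraction, for illustration only: the behavior of a process on a
# defined input is a finite collection of computation outcomes, each one
# either the pair ("terminates", output) or the token "diverges".

def satisfies_tc_item1(outcomes, expected):
    """First item of Turing completeness (Definition d:tc): *every*
    computation terminates and makes the expected output available."""
    return all(o == ("terminates", expected) for o in outcomes)

def satisfies_weak_tc_item1(outcomes, expected):
    """First item of weak Turing completeness (Definition d:wtc): *at
    least one* computation terminates with the expected output; the
    remaining computations may diverge."""
    return any(o == ("terminates", expected) for o in outcomes)

# A deterministic (faithful) encoding: all computations agree ...
faithful = [("terminates", 5), ("terminates", 5)]
# ... versus a non-deterministic (not faithful) one: some diverge.
unfaithful = [("terminates", 5), "diverges"]
```

On these samples, `faithful` satisfies both checks, while `unfaithful` satisfies only the weak one.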



In this dissertation we will consider calculi that satisfy the criteria given by 
Definition \ref{d:tc}, as well as calculi that satisfy the criterion given by
Definition \ref{d:wtc}.
In all cases, we shall exploit encodings of such calculi into 
Minsky machines. We therefore find it convenient to introduce such a model here.

\paragraph{Minsky machines}
A Minsky machine \citep{Minsky67} %(MM in the remainder of the paper) 
is a Turing complete model composed of   
a set of sequential, labeled   
instructions, and two registers.   
Registers $r_j ~(j \in \{0,1\})$ can hold arbitrarily large natural numbers.   
Instructions $(1:I_1), \ldots, (n:I_n)$ can be of two kinds:  
$\mathtt{INC}(r_j)$ adds 1 to register $r_j$ and proceeds to the next instruction;  
$\mathtt{DECJ}(r_j,k)$ jumps to instruction $k$ if $r_j$ is zero, otherwise it decreases register $r_j$ by 1 and proceeds to the next instruction.  

A \mma includes a program counter $p$ indicating the label of the instruction  being executed.   
In its initial state, the machine has both registers set to $0$ and the program counter $p$ set to the first instruction.  
%that the machine starts with zero in both registers and that $(1:I_1)$ is the first instruction to be  
%executed.   
The \mma stops whenever the program counter is set to a non-existent instruction, i.e. $p > n$.   
  
  
%\subsubsection*{Reduction in MMs}  

A \emph{configuration} of a \mma is a tuple $(i,m_0,m_1)$; it consists of the current program counter and the values of the registers. Formally, the reduction relation  
over configurations of a \mma, denoted $\minskred$, is defined in Figure
\ref{fig:mmdef}. 



\begin{figure}  
%\begin{table}  
%{\small
\begin{mathpar}  
\inferrule*[left=M-Inc]{i:\mathtt{INC}(r_j) \\ m_j' = m_j + 1 \\ m_{1-j}' = m_{1-j}}{(i,m_0,m_1)\minskred(i+1,m_0',m_1')}  
\and  
\inferrule*[left=M-Dec]{i:\mathtt{DECJ}(r_j,k) \\ m_j \neq 0 \\  m_j' = m_j - 1 \\ m_{1-j}' = m_{1-j}}{(i,m_0,m_1)\minskred(i+1,m_0',m_1')}
\and  
\inferrule*[left=M-Jmp]{i:\mathtt{DECJ}(r_j,k) \quad m_j = 0}{(i,m_0,m_1)\minskred(k,m_0,m_1)}  
\end{mathpar}  
%}
\caption{Reduction of Minsky machines}  
\label{fig:mmdef}  
\end{figure}  
%\end{table} 
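The reduction relation of Figure \ref{fig:mmdef} is easy to animate. The following is a minimal executable sketch (the instruction encoding and function names are our own assumptions, chosen for illustration only):

```python
def step(prog, cfg):
    """One application of the rules M-Inc, M-Dec, or M-Jmp above.

    prog: list of instructions ("INC", j) or ("DECJ", j, k), with the
          i-th instruction stored at prog[i - 1] (labels start at 1).
    cfg:  a configuration (i, m0, m1).
    Returns the next configuration, or None when the machine stops
    (program counter beyond the last instruction, i.e. p > n)."""
    i, m0, m1 = cfg
    if i > len(prog):
        return None
    regs = [m0, m1]
    instr = prog[i - 1]
    if instr[0] == "INC":            # rule M-Inc
        regs[instr[1]] += 1
        return (i + 1, regs[0], regs[1])
    _, j, k = instr                  # instr is ("DECJ", j, k)
    if regs[j] == 0:                 # rule M-Jmp
        return (k, m0, m1)
    regs[j] -= 1                     # rule M-Dec
    return (i + 1, regs[0], regs[1])

def run(prog, cfg):
    """Iterate the reduction relation until the machine stops.
    (Loops forever on non-terminating machines, like the model itself.)"""
    while (nxt := step(prog, cfg)) is not None:
        cfg = nxt
    return cfg

# 1: INC(r0); 2: INC(r0); 3: DECJ(r1, 5) -- r1 holds 0, so the machine
# jumps to the non-existent instruction 5 > n and stops.
prog = [("INC", 0), ("INC", 0), ("DECJ", 1, 5)]
print(run(prog, (1, 0, 0)))   # (5, 2, 0)
```

Starting from the initial configuration $(1,0,0)$, the sketch applies M-Inc twice and then M-Jmp, stopping in the configuration $(5,2,0)$.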

\subsubsection{Decision/Representative Problems}
This is an approach to separation results. 
As argued by \cite{Gigio09}, the idea is to discriminate the expressiveness of two variants
of the same computational model by investigating the decidability
of some decision problem in the two different settings. 
This allows one to prove that a different interpretation for a given
concurrent computational model, or a simple extension of one
concurrent computational model, strictly increases the expressive
power. 

An example of this line of research is \citep{DufourdFS98}
in which separation results for 
Petri nets with Reset arcs are obtained from the 
(un)decidability of decision problems such as reachability, termination, coverability, and boundedness.
In process calculi, this approach has been pioneered by
the already cited  work on the expressiveness of variants of Linda \citep{BusiGZ00}
where the 
decidability of termination 
is used to prove a separation result between two semantics of the language.
Such a decidability result is obtained 
by endowing the language with a
net semantics (in terms of contextual Place/Transition nets)
and by defining a deadlock-preserving mapping into finite Place/Transition nets.
Another significant application of this approach is 
\citep{Busi09}, 
in which separation results for variants of CCS with different constructs
for infinite behavior are reported. 
In \citep{Busi09} the focus is on the (un)decidability 
of termination and convergence of processes. 
It is shown that while both properties are undecidable for the variant of CCS with recursion, 
termination is decidable for the variant with replication.
While undecidability results are obtained by exhibiting (termination-preserving) 
encodings of RAMs (as described above), 
decidability results are obtained by appealing to the theory
of well-structured transition systems \citep{AbdullaCJT00,Finkel90,FinkelS01}.
In Chapter \ref{chap:forward} we shall apply the separation approach 
of \citep{Busi09} in the context of a higher-order process calculus.

A somewhat related approach to separation results is the one that 
distinguishes two models based on their capability of solving some 
well-established problem. 
That is, a language $\mathcal{L}_1$ is considered to be more expressive than 
$\mathcal{L}_2$ if the problem can be solved in $\mathcal{L}_1$ but not in $\mathcal{L}_2$.
This is a natural approach to follow when the languages at hand
are both known to be Turing complete and hence 
a separation result based on the decidability of some property (as discussed before) is not an option.

Inspired by results of \cite{Bouge88} in the context of CSP, 
this approach was used by \cite{Palamidessi03} to show the separation between
the $\pi$-calculus with mixed choice and the asynchronous $\pi$-calculus
with separate choice. The separation is demonstrated by the fact that, 
under certain conditions,
the \emph{leader election problem} ---a problem of distributed consensus in the realm of distributed computing---
can be solved in the former but not in the latter. 
This approach has been rather successful, for it has been applied to a number
of very diverse calculi (see, e.g., \citep{Bouge88,EneM99,Palamidessi03,VigliottiPP07}).
More recently, the approach based on leader election has been intensively studied by 
\cite{VigliottiThesis04} in the context of the Ambient calculus.
An excellent reference to this approach (and to separation results in general) is \citep{VigliottiPP07}.

Furthermore, while the use of widely known problems is a sensible option for separation results, 
new problems have also been proposed. 
For instance, 
\cite{CarboneM03} have introduced \emph{matching systems} 
so as to define an expressiveness hierarchy of variants of the $\pi$-calculus with polyadic synchronization. 
Also, \cite{VersariBG09} have proposed the \emph{last man standing problem} in order to 
assess the expressive power of variants of CCS with global and local priorities.



\subsubsection{By Combinators}
This is a less studied approach to the expressiveness of concurrent languages.
It aims at 
assessing the expressive power 
of a language by identifying its set of \emph{combinators},
i.e., the elements of the language that are indispensable to represent 
the \emph{whole} behavior realizable in the language.
This is similar to the notion of combinators in the $\lambda$-calculus \citep{Barendregt84}.
Hence, each of the combinators of a language is said to be \emph{essential},
for in the absence of any one of them it is not possible to express the whole language
(possibly up to semantic equivalences).
Studying the expressiveness of a language
based on combinators then appears as a useful method to analyze and categorize its behavior.

The earliest attempt in this direction is by  \cite{Parrow90}, where the focus 
is on the expressiveness of two forms of parallel composition (called 
\emph{disjoint parallelism} and \emph{linking}) in the context of a small 
process calculus with synchronization primitives.
Parrow identifies three ``units'' which are 
responsible for generating all the finite-state behavior
that can be expressed in the language. He also establishes conditions under which
operators for parallel composition in other algebras can be defined. 
\cite{Parrow00} himself took this idea further, to the context of mobile processes. 
In fact, he showed that every process in the synchronous $\pi$-calculus without 
sum and without matching 
can be mapped (up to weak bisimilarity) 
into the parallel composition of a number of \emph{trios}, i.e.,
prefixes of length at most three, possibly replicated. 
It is also shown that \emph{duos}, i.e., prefixes of length at most two, are not
sufficient to produce the same result. A similar result is shown by \cite{LaneveV03}
for the Fusion calculus.


Based on the results in \citep{HondaY-POPL94,HondaY94}, 
\cite{Yoshida02} shows the \emph{minimality}
of \emph{five} concurrent combinators that characterize the 
expressive power of the asynchronous $\pi$-calculus without sum. 
Such combinators correspond to small processes implementing
output of messages, duplication of messages, and generation of links. 
Each of the five combinators is shown to be indispensable to represent the whole
behavior of the calculus. 
Similar ideas were explored by 
% 
% Also,  Raja and Shyamasundar also studies Quine combinators for the asynchronous
% -calculus [46]: 
% N. Raja, R.K. Shyamasundar, Combinatory formulations of concurrent languages, ACM TOPLAS 19
% %     (6) (1997) 899–915.
\cite{RajaS95,RajaS95Quine}.


% As examples of results on the expressiveness of 
% particular operators, mention that \cite{San98MFCS} showed that in the $\pi$-calculus guarded replication can replace all instances 
% of arbitrary replication while preserving strong equivalence.

\subsubsection{Other approaches}
In a slightly different approach to expressiveness issues, 
a number of works have appealed to the generality of structural operational semantics, and to their associated rule formats 
and properties, as a way of gaining insight into the expressive power of languages that fit certain rule formats.
For the sake of conciseness, we do not expand on these, 
and refer the interested reader to, e.g., \citep{Simone85,Vaandrager92,DsouzaB95}.

%Mention as early attempts work by Boudol (\citep{Boudol84}), de Simone, and Gonthier \citep{Gonthier85} which involve MEIJE and (S)CCS.



%\newpage


\subsection{Expressiveness for Higher-Order Languages}\label{ss:expr-ho}
We conclude this section by reviewing a number of proposals that 
address the expressiveness of higher-order languages.

%It is worth commenting on the discussion made in \cite{MilnerPW92a}, page 22, concerning the expressive power of 
%link and process passing. 


% List of works to be reviewed here:
% 
% \begin{enumerate}
% \item Sangiorgi's thesis (encoding of pi and lambda) (already analyzed above)
% \item Work by Vivas and Dam on the encoding of HoPi plus CCS restriction into pi.  
% \item Works on Homer: encoding of pi (already reviewed above), decidable fragments of Homer.
% 
% \end{enumerate}
% 
% 
% he expressiveness of higher-order communication
% has received little attention in the literature. 
% Higher-order  calculi (both sequential and concurrent) have been compared with 
% first-order calculi, but mainly as a way of investigating the
% expressiveness of 
% $\pi$-calculus and similar formalisms.



%\subsubsection{Sangiorgi 1}
Significant studies of the expressiveness of the higher-order communication
paradigm are reported in Sangiorgi's PhD dissertation \citep{San923}.
In Section \ref{ss:sangio-rep}
we have given the main ideas underlying  
the compilation $\mathcal{C}$ from higher-order into first-order processes,
which is central to 
his representability result.
In \citep{San923} the compilation $\mathcal{C}$ is used to study 
encodings of (variants of) the $\lambda$-calculus into the $\pi$-calculus.
An encoding of the lazy $\lambda$-calculus into \Hopi,
 denoted $\mathcal{H}$,  is proposed. 
The encoding $\mathcal{H}$ 
enjoys a tight operational correspondence; in fact, it allows one to determine that
the lazy $\lambda$-calculus is a \emph{sub-calculus} of \Hopi.
Furthermore, 
it is shown that 
the composition of $\mathcal{C}$ with
$\mathcal{H}$ 
coincides with the encoding of the lazy $\lambda$-calculus 
into the $\pi$-calculus proposed by \cite{Milner92}. 
Hence, 
the usefulness of $\mathcal{C}$ is shown by 
providing an alternative way of deriving results
and transferring reasoning techniques between the 
lazy $\lambda$-calculus and the $\pi$-calculus. 
A similar approach is followed for the call-by-value $\lambda$-calculus.


%\subsubsection{Amadio}

\cite{Amadio93} 
obtains a finitely-branching bisimilarity for CHOCS
by means of a reduction into bisimulation for a variant of the $\pi$-calculus.
In such a variant, processes are only allowed to exchange names of \emph{activation channels} (i.e., the channels that trigger a copy of a process in the first-order representation of higher-order communication).
The desired finitely-branching bisimilarity is obtained by relying on a
%finitely branching 
labeled transition system in which synchronizations on activation channels are distinguished.

% %In the search of 
% a finitely branching bisimilarity for CHOCS by %, the paper proposes 
% proposing
% a variant of the calculus in which internal actions associated to process activation are distinguished. Process activation is related to the classic encoding of higher-order communication by means of communication of private names, so-called \emph{activators}. 
% Bisimilarity in the proposed variant is simpler and characterizes bisimilarity in the original CHOCS. 
% The paper is hard to follow. COMPLETE

\cite{Amadio94} investigates 
Core Facile, a $\lambda$-calculus with synchronization primitives, parallel composition, and dynamic creation of names. 
It is intended to serve as an intermediate language between theoretical formalisms 
(such as CHOCS and the $\pi$-calculus) and actual programming languages such as Facile and CML. 
A control operator is introduced to manipulate evaluation contexts and to define a translation of synchronous communication into asynchronous communication. This translation is shown to be adequate, i.e., equivalence of the translated terms implies equivalence of the original terms. 
By means of a Continuation-Passing Style translation into Core Facile, the control operator is shown to be redundant.
A translation of the asynchronous Core Facile into the $\pi$-calculus is also presented; 
this translation is  further studied in \citep{AmadioLT95}. 


%\subsubsection{Sangiorgi 2}
The expressiveness of the $\pi$-calculus with respect to the higher-order $\pi$-calculus was first studied by 
\citet{San96int}, who isolated hierarchies of fragments of 
first-order and higher-order calculi with increasing expressive power. For the former, he identifies 
a fragment of 
the $\pi$-calculus in which mobility is {\em internal}, i.e., where outputs are 
only on private names ---no free outputs are allowed. 
This hierarchy is denoted as $\pi \mathrm{I}^n$, where the $n$ denotes the degree of
mobility allowed; e.g., $\pi \mathrm{I}^1$ does not allow mobility and corresponds to the core of CCS. The 
hierarchy in the higher-order case follows a similar rationale, and is based on the
{\em strictly higher-order} $\pi$-calculus, i.e., a higher-order calculus without
name-passing features. Also in this hierarchy, the less expressive language (denoted $\mathrm{HO}\pi^1$) corresponds
to the core of CCS. Sangiorgi shows that $\pi \mathrm{I}^n$ and $\mathrm{HO}\pi^n$ have the same expressiveness,
by exhibiting fully-abstract encodings. 
The encoding by \citet{SaWabook} of a variant
of the $\pi$-calculus into the higher-order $\pi$-calculus 
relies on the abstraction mechanism of 
the latter calculus (it needs $\omega$-order abstractions). 


%\subsubsection{Vivas}
Vivas et al.\ \citep{VivasD98,VivasY02,Vivas01} study extensions of the higher-order $\pi$-calculus for which the usual encoding of higher-order into first-order processes \citep{San923} does not work.  
This is the case of higher-order calculi involving locations, in which certain operations 
cannot be reduced to reference passing, such as, e.g., 
retrieving some piece of code from a certain location and executing it elsewhere.
This issue is first studied by \cite{VivasD98} who show that Sangiorgi's encoding schema breaks 
if \emph{blocking} ---a form of restriction based on dynamic scoping--- is added to the language.
Their motivation for such a construct is the modeling of cryptographic protocols;
they claim that usual restriction (based on static scoping) 
as found in the first- and higher-order $\pi$-calculus is not adequate for certain security scenarios. 
They consider first- and higher-order calculi with mismatching, and 
show that in the first-order case blocking has the same expressive power as matching and mismatching.
A rather involved schema for compiling higher-order calculi with blocking into first-order calculi is proposed;
it consists in communicating the syntax tree of a process.
\cite{VivasY02} propose an extension of a higher-order process language with a screening operator called \emph{filtering}.
The objective is to represent scenarios of code mobility in which resource access control involves both static and dynamic checkings.
The filtering operator is intended to dynamically restrict the visibility of channels of a process:
a filtered process can only perform actions present in its associated  set of polarized channel names (i.e. channel names with either output or input capabilities).
Similarly to blocking in \citep{VivasD98}, 
the filtering operator exploits dynamic binding to 
implement a form of encapsulation that 
blocks external communication in the filtered channels.
%but allowing internal communication 
In this case, the usual restriction operator is claimed to be inadequate as it might allow for scope extrusion of the filtered channels.
The higher-order language with filtering is studied with respect to the higher-order language proposed 
by \cite{YoshidaH99} (which is, essentially, a call-by-value $\lambda$-calculus augmented with $\pi$-calculus operators). 
This language is endowed with a type system that assigns \emph{interface types} to processes, i.e.,
a type that limits the resources a process might have access to. 
An encoding of the latter into the former is proposed as a way of understanding how dynamic checkings enforced by
the filtering operator can mimic the static checking enforced by the interface types.
The paper shows that the encoding behaves correctly only in the cases in which name extrusion is not involved.




%\subsubsection{Mikkel}
\cite{BundgaardHG06} investigate the expressive power of Homer by
encoding the synchronous $\pi$-calculus. They succeed in showing that
higher-order process passing together with mobile resources 
in (possibly local) named locations is enough to represent $\pi$-calculus name-passing.
In the Homer case, because of the mobile computing resources and the nested
locations, name-passing is a derived notion rather than a primitive one.
Similarly to the encoding by \cite{Tho90},
the encoding 
of the $\pi$-calculus into Homer
is not fully compositional: 
names 
are translated 
at the top level, separately from the translation of processes.


\cite{BundgaardGHH09}
study  two approaches for obtaining finite-control fragments of Homer in which  barbed bisimilarity is decidable.
The first approach is based on a type system that bounds the size of processes in terms 
of their syntactic components (e.g. number of parallel components, location nesting). 
The second approach exploits results for the $\pi$-calculus and uses an encoding of the $\pi$-calculus 
into Homer to transport them in the form of a suitable subcalculus. 

% \subsubsection{Others}
% \citet{Tho90} and \citet{Xu07}
%  have proposed encodings of $\pi$-calculus
% into Plain CHOCS. These encodings make essential use of the
% relabeling operator of Plain CHOCS.  EXPAND ON THESE.
% 
% Another strand of  work on expressiveness 
% has looked at calculi for distributed systems and 
% compared  different  primitives for  migration and movement  of  processes  (or entire
% locations), which can be seen as higher-order constructs. We cite a few works.
%  \citet{PhillipsV04} proved that a fragment
% of mobile ambients (MA) \citep{CardelliG00} without restriction, communication primitives and the capability
% for dissolving ambients (open)
% is not encodable in the $\pi$-calculus with separate choice, i.e. 
% a choice where the summands must be all inputs or all outputs.
% They achieve this using the problem of electing a leader in a symmetric network.
% In the context of Boxed Ambients (BA, a variant of MA without the open capability)
% \citet{BugliesiCMS05} study mechanisms for controlling communication 
% and mobility interferences. They propose NBA, a calculus with modified 
% communication mechanism and capabilities; NBA includes a type system and allows 
% to encode the choice-free fragment of the synchronous $\pi$-calculus.
% 
% One could say that even if there are studies of expressiveness for KLAIM (which can be considered as a higher-order language)
% such studies do not consider the higher-order aspects of the language \cite{NicolaGP06}.
