We now comment on the origin and motivations for the constructs of \evol{}, review related work, and discuss variants of the adaptation problems considered here.

\subsection{On the Constructs for Evolvability}\label{ss:facs}

The origins of the \evol{} calculus can be traced back to our own previous work on expressiveness and decidability results for core 
\emph{higher-order process calculi} (see, e.g., \cite{LaneseIC10,GiustoPZ09,Perez10}); these are calculi in which processes can be passed around in communications.
Below, we overview these previous works and discuss the motivations 
that led us from higher-order communication to adaptable processes.
 
Higher-order (or \emph{process-passing}) concurrency is often presented as an alternative paradigm to the first-order (or \emph{name-passing})
concurrency of the $\pi$-calculus for the description of mobile systems. 
As in the 
$\lambda$-calculus, 
higher-order process calculi 
involve \emph{term instantiation}: 
a computational step 
results in the instantiation of a variable with a term, which is copied as many times as there are occurrences of the variable.
The basic operators of these calculi are usually those of
CCS: parallel composition, input and output prefix, and restriction.
Replication and recursion are often omitted, as they 
can be encoded.
Proposals of higher-order process calculi include
the higher-order $\pi$-calculus~\cite{San923}, 
Homer~\cite{HilBun04}, and Kell~\cite{SchmittS04}. 


With the purpose of investigating expressiveness and decidability issues in 
the higher-order paradigm, 
a \emph{core} higher-order process calculus, called \hocore, was introduced in \cite{LaneseIC10}. 
\hocore  is \emph{minimal}, in that only the operators strictly necessary to obtain
higher-order communications are retained. 
In particular, continuations following output messages are left out, so communication in \hocore is asynchronous.
Most notably, \hocore has no restriction
operator.
Thus all names are global, and
dynamic creation of new names is impossible.
The grammar of \hocore processes is:
\[P :: = \inp a x. P \midd \Ho{a}{P} \midd P \parallel P \midd x \midd \nil \]  
An input process 
$\inp a x . P$ can receive  on
name $a$ a process to be substituted in the place of $x$ in the body $P$;
an output message  $\Ho{a}{P}$ sends the output object $P$ on $a$;  parallel composition 
allows processes to interact. 
As in CCS, processes in \hocore evolve from the interaction of 
complementary actions; this way, e.g., 
$$\Ho{a}{P} \parallel a(x).Q \arro{~~~} Q \sub P x$$ 
is a sample reduction.
(See \cite{LaneseIC10,Perez10} for complete accounts of the theory of \hocore.)
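
As mentioned above, replication can be encoded in this setting. As a sketch (along standard lines for higher-order calculi; since \hocore has no restriction, we must assume $c$ is a name not occurring in $P$), consider the process
\[
\Ho{c}{D_{P}} \parallel D_{P} \,, \qquad \mbox{with } D_{P} = \inp c x . (x \parallel \Ho{c}{x} \parallel P),
\]
which behaves as the replication $!P$: a synchronization on $c$ reduces $\Ho{c}{D_{P}} \parallel D_{P}$ to $D_{P} \parallel \Ho{c}{D_{P}} \parallel P$, i.e., a copy of $P$ in parallel with the original process, ready to be copied again.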

While considerably expressive, \hocore is far from a specification language for settings involving (forms of) higher-order communication.
For instance, it lacks primitives for describing the \emph{localities} into which distributed systems are typically abstracted. 
Similarly, \hocore also lacks constructs for 
expressing
forms of evolvability and/or dynamic reconfiguration.
In order to deal with these aspects, higher-order process calculi such as Homer and Kell provide 
so-called \emph{passivation} 
mechanisms, which allow one to \emph{suspend} running processes. 
Such mechanisms rely on a form of named localities for processes, so-called \emph{suspension (or passivation) units}.
Inside a suspension unit, a process may execute and freely interact with its environment, but it may also be stopped at any time.
More precisely, let us consider the extension of \hocore with process suspension just described.
Let $a[P]$ denote the process $P$ inside the suspension unit $a$.
Assuming an LTS with transitions of the form $P \arro{~\alpha~} P'$, 
the semantics of suspension is captured
by the following two rules:
\[
\inferrule[\rulename{Trans}]{P \arro{~\alpha~} P' }{ a[P] \arro{~\alpha~} a[P']}  \qquad \quad \inferrule[\rulename{Susp}]{}{a[P] \arro{~a\langle P \rangle~} \nil }
\]
where $a\langle P \rangle$ corresponds to the output action in the LTS of \hocore (see \cite{LaneseIC10}).
While rule \rulename{Trans} defines the transparency of suspension units, rule 
\rulename{Susp} implements suspension: the current state of a located process is ``frozen'' as an output action, after which it can no longer evolve on its own.
Hence, in this semantics,
input prefixes may interact not only with output actions but also with suspension units; in fact, 
suspension of a running process is assimilated to regular process communication.
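For instance, combining rule \rulename{Susp} with the communication rule of \hocore, a suspension unit may synchronize with an input prefix:
\[
a[P] \parallel \inp a x . Q \arro{~~~} Q \sub P x
\]
that is, $a[P]$ may be consumed exactly as if it were the output message $\Ho{a}{P}$.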

While this kind of semantics allows for a straightforward definition, 
the dual r\^{o}le of input prefixes induces a form of non-determinism that one may regard as unnatural.
Also, such a definition is only possible for calculi featuring higher-order communication.
In the definition of $\evol{}$ we have opted for a minimal approach: 
we do not assume higher-order communication, and rely instead on a restricted form of term instantiation for defining 
update actions. As a result, $\evol{}$ enforces a separation of concerns, distinguishing 
interaction/communication from actions of dynamic reconfiguration.
This allows us to better concentrate on the fundamental aspects of evolvability for concurrent processes.
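
As a simple example, consider the extension of \hocore with suspension discussed above, and the process
\[
S \,=\, a[P_{1}] \parallel \inp a x .\, b[x \parallel x]
\]
in which the input on $a$ may suspend the content of unit $a$ and relocate two copies of it into unit $b$. Assuming $P_{1}$ evolves into $P_{2}$ inside $a$, a possible evolution of $S$ is the process $b[P_{2} \parallel P_{2}]$, obtained by decoupling suspension and relocation into separate phases. In \evol{}, the same update is written
\[
S' \,=\, \component{a}{P_{1}} \parallel \update{a}{\component{b}{\bullet \parallel \bullet}}
\]
which, after $P_{1}$ evolves into $P_{2}$, reduces to $\component{b}{P_{2} \parallel P_{2}}$ via a single update step in which suspension and relocation occur atomically.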






\subsection{Related Work}
Related work on proof techniques has already been discussed in the Introduction.
Below, we comment on languages/formalisms related to~\evol{}.

Loosely related to \evol{} are process calculi for fault tolerance  (see, e.g., \cite{BergerH00,NestmannFM03,RielyH01,FrancalanzaH07}).
These are variants of the $\pi$-calculus 
tailored for describing algorithms on distributed systems; hence, they include 
explicit notions of sites/locations, network, and failures.
A series of extensions of the asynchronous $\pi$-calculus 
for modeling distributed algorithms is proposed in \cite{BergerH00}.
One such extension, aimed at representing process failure,
is a higher-order operation that defines \emph{savepoints}:
process $\mathsf{save}\langle P \rangle.Q$ defines the savepoint $P$ for the current location; 
if such a location crashes, then it will be restarted with state $P$. 
A value-passing calculus to represent and formalize algorithms of distributed consensus is introduced in \cite{NestmannFM03}; it 
includes a \emph{failure detector} construct $\mathcal{S}(k).P$ which 
executes $P$ if locality $k$ is \emph{suspected} to have failed.
The \emph{partial failure} languages of  \cite{RielyH01,FrancalanzaH07} feature similar constructs; 
such works aim at  developing bisimulation-based proof techniques for distributed algorithms.
Crucially, in the constructs for failure 
proposed in the above works (savepoints, failure detectors), 
the \emph{post-failure} behavior is defined statically, and does not depend on 
runtime behavior. 
Hence, as discussed in Section \ref{s:examp}, these constructs are easily representable in \evol{}.
None of the above works addresses
adaptation properties related to failures, nor studies 
decidability/expressiveness issues 
for the languages considered.

\evol{} relies on transparent localities as a way of structuring communicating processes for update purposes.
The hierarchies induced by transparent localities are rather weak; this is in contrast to 
process hierarchies  in calculi such as Ambients~\cite{CardelliG00} or Seal~\cite{CastagnaVN05}.
The ambients in the Ambient calculus represent \emph{administrative domains} and 
act as containers of concurrent processes.
Ambients may be dissolved using the \textbf{open} primitive; 
transparent localities can only be eliminated in \evol{} by an explicit synchronization with 
a suitable update prefix.
Movement across the ambient hierarchy is achieved via the  \textbf{in}/\textbf{out} primitives; 
it is said to be  \emph{subjective} 
rather than \emph{objective}, as ambients move themselves and are not  moved by their context.
Adapting this distinction to our setting, it is fair to say that \evol{} features a form of \emph{objective update}, 
as an adaptable process
does not contain information on its future update actions: 
it evolves autonomously until it is updated by a suitable update prefix in its context.
A fundamental difference of Ambients 
with respect to higher-order process calculi 
is that movement is \emph{linear}: it is not possible to duplicate an ambient through its movement.
This aspect is one of the main differences between Ambients and Seal, in which process duplication is possible.
A main design guideline in Seal is security; in fact, it is intended as a calculus of sealed objects.
Within the hierarchy of seals, only parent/child communication is allowed, thus establishing a noticeable difference
with respect to the hierarchies of transparent localities in \evol{}.



A suspension-like construct is at the heart of 
MECo \cite{MontesiS10},  
a model for evolvable components.
MECo is
defined as a process calculus in which 
components 
feature a hierarchical structure, rich input/output interfaces, and channel communication.
Evolvability in MECo is enforced by a 
suspension-like 
construct that 
stops a component and \emph{extracts} its ``skeleton''.
Because of its focus on components, adaptation in MECo 
is mostly concerned with consistent changes in input/output interfaces; 
in our case, adaptation is defined in terms of some distinguished observables of the system, which constitutes a rather general way of characterizing correctness.
\textsc{Comp} \cite{LienhardtFMCO10} is another process calculus for component models. It 
is intended as the component model for the ABS modeling language; 
as such, it aims at providing a unified definition of evolvability for objects, components, and runtime modifications of programs. 
In \textsc{Comp}, 
constructs for evolvability
are based on the movement primitives of the Ambient calculus 
rather than on suspension-based constructs, 
as in \evol{} and MECo.
Hence, the semantics of reconfiguration in \textsc{Comp} is quite different from that of \evol{}, which prevents more detailed comparisons.
 
In a broader setting, related to \evol{}
are formalisms for the specification of (dynamic) software architectures.
While some of them are based on process calculi, none of them relies on suspension-like constructs to 
formalize evolu\-tion/adap\-ta\-tion.
Below we review some of them; we refer the reader to \cite{BradburyCDW04,Bradbury04,Kell:jucs,CuestaFBG05} for more extensive reviews.


One of the earliest proposals for giving formal grounds to software architectures is \cite{AllenG97}, which introduces a formal system for architectural components relying on (a fragment of) Hoare's CSP. The approach in \cite{AllenG97}, however, does not consider dynamic architectures. 
Darwin \cite{Magee96} is an Architecture Description Language (ADL)  for distributed systems; it aims at describing 
the \emph{structure} of static and dynamic component
architectures which may evolve at runtime.
The focus is then on the bindings of interacting components; 
the operational semantics of Darwin relies on a $\pi$-calculus model for handling such bindings.
Darwin features a mechanism of dynamic instantiation which allows 
arbitrary changes in the system architecture. 
Associated techniques for analyzing dynamic change in Darwin have been proposed in \cite{KramerM98,Kramer90}.
In comparison to \evol{}, the changes possible in Darwin 
concern the system topology rather than, as in our case, the ``state'' of the 
interconnected entities.
$\pi$-ADL \cite{Oquendo2004} is an ADL for dynamic and mobile architectures.
Formally defined as a typed variant of the higher-order $\pi$-calculus, $\pi$-ADL focuses on  a combination of 
 structural and behavioral perspectives: while the former describes the architecture in terms of components, connectors, and their configurations, 
the latter describes it in terms of actions and behaviors. $\pi$-ADL is at the heart of ArchWare-ADL \cite{MorrisonBOWG07,archware}, a layered ADL for active architectures.
ArchWare-ADL complements $\pi$-ADL with a style layer that allows the specification of components and connectors, and with an analysis layer which enables the specification of constraints on the styles. 
In contrast to \evol{}, $\pi$-ADL does not offer any construct for supporting system evolvability. 
In fact, 
while ArchWare-ADL 
supports forms of evolution (via 
mechanisms for stopping running programs and decomposing them into their main constituents),
these are not provided by the formal framework of $\pi$-ADL but by technologies on top of it \cite{MorrisonKBMOCWSG04}.
Pilar \cite{CuestaFBG05,CuestaFBB02,CuestaFBB01}
is an algebraic, reflective ADL.
Reflection in Pilar (defined as the capability of a system to reason and act upon itself) relies on 
the notion of reification which, roughly speaking, relates entities at different levels of a specification so as to define introspection capabilities.
The semantic foundation of Pilar is a first-order, polymorphic typed variant of the $\pi$-calculus; no constructs for dynamic update such as those in \evol{} are included in Pilar.



We conclude this review by mentioning other works on formal approaches to dynamic update~\cite{Gupta96,Biyani2008,WermelingerF02,StoyleHBSN07,CansadoCSC10}.
They all rely on approaches different from ours.

In~\cite{Gupta96}, an investigation of \emph{on-line software version change} is presented.
There, an on-line change is said to be \emph{valid} 
if the updated program eventually exhibits the behavior of the new version.
The problem of determining the validity of an on-line change is shown to be 
undecidable by relating it to the halting problem. 
The study in \cite{Gupta96}, however, 
is limited to restricted instances of imperative languages. 
Moreover, the notion of validity says very little about correctness and adaptation.
A formal model for adaptation of asynchronous programs in distributed systems is introduced in~\cite{Biyani2008}.
Programs are expressed as guarded commands, and represented 
as automata; adaptation can then be described as transforming one automaton into another.
The focus of \cite{Biyani2008} is the verification of the behavior of the system during adaptation, 
considering the interaction between the new program and the old one. 
The use of graph rewriting/category theory to formalize software architecture reconfiguration
has been studied in~\cite{WermelingerF02}.
In \cite{Bierman}, the \emph{update calculus}, a typed $\lambda$-calculus with a primitive operation for updating modules, is proposed.
A development of this idea was carried out in \cite{StoyleHBSN07}, where  
a calculus for \emph{dynamic update} in typed, imperative  languages is proposed.
There, the focus is on \emph{type-safe} updates---intuitively, the consistent update of type $\tau$ with some new type $\tau'$.
There is no knowledge about future software updates;
type coercion mechanisms
are then used to recast new (in principle, unknown) types to old types.
In contrast, in our case ``update code'' is defined in advance. 
In fact, this is a conceptual difference between 
\emph{update} (as in works such as \cite{StoyleHBSN07})
and \emph{dynamic adaptation}, as we have considered it here.
A framework for structural component reconfiguration with
behavioral adaptation considerations is introduced in~\cite{CansadoCSC10}, 
where component architectures are given by \emph{nets}
of interacting components represented by 
Labeled Transition Systems (LTSs).
Notice that the concept of 
``behavioral adaptation''  in \cite{CansadoCSC10} is different from our notion of adaptation.
The former refers to the changes required in component interfaces so as to achieve effective compositions.
Instead, our notion of adaptation concerns a higher abstraction level, 
as we address the evolution of running processes through built-in adaptation mechanisms. 
