\newcommand{\nm}[1]{\ensuremath{\mathsf{#1}}}
 \newcommand{\nmu}[1]{\ensuremath{\mathtt{#1}}}
 \newcommand{\midef}{\ensuremath{\stackrel{\mathsf{def}}{=}}}

Next we present some concrete scenarios of adaptable processes 
and discuss their representation as \evol{} processes.
We also comment on how the adaptation properties proposed in the paper (and their associated decidability results) 
relate to such scenarios.

%In addition to using $\component{a}{P}$ and $\update{a}{U}$
%to represent adaptable processes and update actions, respectively, we
%use parallel composition $P \parallel Q$, 
%input and output prefixes $a.P$ and $\overline{a}.P$, 
%and replication $!P$.
%In what follows (and in the rest of the paper), 
%we use $\prod_{i=1}^{n} P _{i}$ to abbreviate $P_{1} \parallel \cdots \parallel P_{n}$ and %use 
%$\prod^{k} P$ to denote the parallel composition of $k$ instances of process $P$.\\

\subsection{Mode Transfer Operators}
In \cite{Baeten00}, 
dynamic behavior at the process level is defined by means of two so-called \emph{mode transfer} operators.
Given processes $P$ and $Q$, the \emph{disrupt} operator 
starts executing $P$ but, at any moment, it may abandon $P$ and execute $Q$ instead.
The \emph{interrupt} operator is similar, but 
it returns to execute what is left of $P$ once $Q$ emits a termination signal.
We can represent similar mechanisms in \evol{} as follows:
%\begin{eqnarray*}
$$
\mathsf{disrupt}_{a}(P,Q)  \midef \component{a}{P} \parallel \update{a}{Q} \qquad
\mathsf{interrupt}_{a}(P,Q)  \midef  \component{a}{P} \parallel \update{a}{Q \parallel t_{Q}.\bullet} 
%\end{eqnarray*}
$$

Assuming that $P$ can evolve on its own to $P'$, 
the semantics of \evol{} %given in Section \ref{s:calculi} 
decrees that 
$\mathsf{disrupt}_{a}(P,Q)$ 
may evolve either to $\component{a}{P'} \parallel \update{a}{Q}$ 
(as locality $a$ is transparent)
or to $Q$ (which represents disruption at $a$).
%, thus discarding $P$.
Similarly, 
assuming that $P$ evolves to $P''$ just before being interrupted, process $\mathsf{interrupt}_{a}(P,Q)$ evolves
to $Q \parallel t_{Q}.P''$. 
Above, we assume that $a$ is not used in $P$ and $Q$, and that 
the termination of $Q$ is signaled (only once) at the designated name $t_{Q}$.
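To make these evolutions concrete, consider the hypothetical instantiation $P = b.P'$, so that $P$ can evolve by synchronizing on $b$ (the names $b$ and $P'$ are chosen only for this sketch, and $\longrightarrow$ stands for the reduction relation):
$$
\overline{b} \parallel \mathsf{disrupt}_{a}(b.P',Q) \;\longrightarrow\; \component{a}{P'} \parallel \update{a}{Q} \;\longrightarrow\; Q
$$
The first reduction illustrates the transparency of locality $a$; the second one is the update at $a$, which disrupts $P$ by discarding $P'$.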

These simple definitions show how
defining $P$ as an adaptable process at $a$ is enough to formalize its potential 
disruption/interruption.
It is worth observing that the encoding of 
$\mathsf{interrupt}_{a}(P,Q)$ can only be an \evold{1} process:
in the update action at $a$, 
there is a hole occurring behind a prefix (hence, it is not an \evol{2} process), and the 
topology of adaptable processes is dynamic (since $a$ does not occur in $Q$, the adaptable process cannot be rebuilt after interruption).
In contrast, the encoding of 
$\mathsf{disrupt}_{a}(P,Q)$ is both an $\evold{1}$ and an $\evold{2}$ process, 
as in the update pattern there are no holes in the scope of prefixes (in fact, the update pattern contains no holes at all).

\subsection{Dynamic Update in Workflow Applications}
Designing business/enterprise applications in terms of \emph{workflows} is a common practice nowadays.
A workflow is a conceptual unit that describes how  a 
number of \emph{activities} coordinate to achieve a particular task.
A workflow can be seen as a container of activities; such 
activities 
are usually defined in terms of simpler ones, and 
may be software-based (e.g., ``retrieve credit information from the database'') or may depend on human intervention  
(e.g., ``obtain the signed authorization from the credit supervisor'').  
As such, workflows  are typically long-running and have a transactional character. %, and so they include  compensation activities.
A workflow-based application usually consists of a \emph{workflow runtime engine} 
that contains a number of workflows running concurrently on top of it; a \emph{workflow base library} on which activities may rely; and a number of
 \emph{runtime services}, which are
 application dependent and implement features such as transaction handling and communication with other applications.
 A simple abstraction of a workflow application is the following \evol{} process:
 $$
  App \midef \componentbig{\nm{wfa}}{\, \componentbbig{\nm{we}}{\nmu{WE} \parallel \nm{W}_{1} \parallel \cdots \parallel \nm{W}_{k} \parallel \component{\nm{wbl}}{\nmu{BL}}} \parallel \nmu{S}_{1} \parallel \cdots \parallel \nmu{S}_{j}\,}
 $$
 where the application is modeled as an adaptable process \nm{wfa} 
 which contains a workflow engine \nm{we} and a number of 
 runtime services $\nmu{S}_{1}, \ldots, \nmu{S}_{j}$.
In turn,  the workflow engine contains 
a number of workflows $\nm{W}_{1}, \ldots, \nm{W}_{k}$, a
process \nmu{WE} (which represents the engine's behavior and is left unspecified), 
and an adaptable process \nm{wbl} representing the base library (also left unspecified).
As described before, each workflow is composed of a number of activities.
We model each $\nm{W}_{i}$ as an adaptable process $\nm{w}_{i}$
containing 
a process $\nmu{WL}_{i}$
---representing the workflow's logic---, 
and $n$ activities. Each activity is formalized as an adaptable process $\nm{a}_{j}$ 
together with an \emph{execution environment} $\nm{env}_{j}$:
$$
\nm{W}_{i} = \componentbig{\nm{w}_{i}}{\,  \nmu{WL}_{i} \parallel \prod_{j=1}^n \big(\component{\nm{env}_j}{\nmu{P}_{j}} \parallel \componentbbig{\nm{a}_{j}}{ !u_j. \update{\nm{env}_j}{\component{\nm{env}_j}{\bullet \parallel \nmu{A}_{j}}}}\big) \,}
$$
The current state of activity $j$ is 
represented by 
process $\nmu{P}_{j}$ running in $\nm{env}_{j}$.
Locality $\nm{a}_{j}$ contains an update action for $\nm{env}_{j}$, which is guarded by $u_{j}$ and always available. 
As defined above, such an update action adds process $\nmu{A}_{j}$ to the current state of the execution environment of activity $j$.
Process $\nmu{A}_{j}$ can thus be seen as a procedure that is not yet active, and that becomes active only upon reception
of an output at $u_{j}$ from, e.g., $\nmu{WL}_{i}$.
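To illustrate, assuming an output $\overline{u_{j}}$ offered by $\nmu{WL}_{i}$ (a hypothetical activation signal), the activation of activity $j$ takes two reduction steps: a synchronization at $u_{j}$, which releases one copy of the update action inside $\nm{a}_{j}$, followed by the update at $\nm{env}_{j}$ itself, which rebuilds the execution environment by filling the hole $\bullet$ with the previous state $\nmu{P}_{j}$:
$$
\overline{u_{j}} \parallel \component{\nm{env}_j}{\nmu{P}_{j}} \parallel \componentbbig{\nm{a}_{j}}{ !u_j. \update{\nm{env}_j}{\component{\nm{env}_j}{\bullet \parallel \nmu{A}_{j}}}}
\;\longrightarrow\;\longrightarrow\;
\component{\nm{env}_j}{\nmu{P}_{j} \parallel \nmu{A}_{j}} \parallel \componentbbig{\nm{a}_{j}}{ !u_j. \update{\nm{env}_j}{\component{\nm{env}_j}{\bullet \parallel \nmu{A}_{j}}}}
$$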
Notice that by defining  
update actions on $\nm{a}_{j}$ 
(inside $\nmu{WL}_{i}$, for instance)
we can describe the evolution of the execution environment.
An example of this added flexibility is the process
$$
U_{1} = !\,\nm{replace}_{j}.\updatebig{\nm{a}_{j}}{\componentbbig{\nm{a}_{j}}{!u_{j}.\updatebig{\nm{env}_j}{\component{\nm{env}_j}{\bullet \parallel \nmu{A}_{j}^{2}}}}}
$$
Hence,  
given an output  at $\nm{replace}_{j}$, process 
$ \component{\nm{a}_{j}}{!u_{j}.\update{\nm{env}_j}{\component{\nm{env}_j}{\bullet \parallel \nmu{A}_{j}}}} \parallel U_{1}$
evolves to 
$\component{\nm{a}_{j}}{!u_{j}.\update{\nm{env}_j}{\component{\nm{env}_j}{\bullet \parallel \nmu{A}_{j}^{2}}}}$,
thus discarding $\nmu{A}_{j}$ in favor of $\nmu{A}_{j}^{2}$ in \emph{future} evolutions of $\nm{env}_{j}$.
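The effect of such a replacement only becomes observable in later activations of the activity: under the assumptions above, a subsequent output at $u_{j}$ now adds $\nmu{A}_{j}^{2}$, rather than $\nmu{A}_{j}$, to the execution environment ($\longrightarrow\;\longrightarrow$ abbreviates the two reduction steps of an activation):
$$
\overline{u_{j}} \parallel \component{\nm{env}_j}{\nmu{P}_{j}} \parallel \component{\nm{a}_{j}}{!u_{j}.\update{\nm{env}_j}{\component{\nm{env}_j}{\bullet \parallel \nmu{A}_{j}^{2}}}}
\;\longrightarrow\;\longrightarrow\;
\component{\nm{env}_j}{\nmu{P}_{j} \parallel \nmu{A}_{j}^{2}} \parallel \component{\nm{a}_{j}}{!u_{j}.\update{\nm{env}_j}{\component{\nm{env}_j}{\bullet \parallel \nmu{A}_{j}^{2}}}}
$$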
This kind of \emph{dynamic update} is available in  commercial workflow engines, such as
the Windows Workflow Foundation (WWF) \cite{wwf}.
%, which provides workflow support for Windows applications.
Above, 
for simplicity, 
we have abstracted from 
locking mechanisms that would 
keep concurrent updates on $\nm{env}_{j}$ and $\nm{a}_{j}$ consistent.

In the above processes, it is worth observing that 
if the processes $\nmu{A}_{j}$ and $\nmu{A}_{j}^{2}$ contain no adaptable processes, then $\nm{W}_{i}$ is an $\evols{3}$ process.
This is because the update action at $\nm{env}_j$ recreates the adaptable process, and preserves the previous state with a hole that is in parallel to $\nmu{A}_{j}$.
Otherwise,  $\nm{W}_{i}$ would be an $\evold{3}$ process, as the topology of adaptable processes would change as a result of an update action on  $\nm{env}_j$. 
For the sake of the example, suppose that an emergency activity executes inside the workflow:
process $\nmu{P}_{j}$ would emit a signal representing an urgent request, and 
an update action  at $\nm{env}_j$ would represent a response to the emergency,
implemented as process $\nmu{A}_{j}$.
The two adaptation problems are useful for representing the future state of the workflow in which the emergency has been handled:
\LG refers to an \emph{undetermined} future state in which the request signal disappears (meaning that the emergency will eventually be controlled), 
whereas \OG refers to a \emph{fixed} future state in which the request signal disappears (meaning that the emergency will be controlled within a certain bound).
The previous discussion on the topology of  $\nmu{A}_{j}$ is relevant in the light of our decidability results for these two properties:
if $\nm{W}_{i}$ is given as an $\evols{3}$ process, then both 
\LG and \OG
are decidable; 
if, instead, $\nm{W}_{i}$ is given as an $\evold{3}$ process, then only 
\OG
is decidable.



In the WWF, dynamic update can also take place at the
level of the workflow engine. 
This way, e.g., the engine may %be defined to 
\emph{suspend} those workflows which have been inactive for a certain amount of time.
This optimizes resources at runtime, and favors active workflows. 
We can implement this policy as part of the process \nmu{WE} as follows:
$$
U_{2} = !\,\nm{suspend}_{i}.\updatebig{\nm{w}_{i}}{!\, \nm{resume}_{i}.\component{\nm{w}_{i}}{\bullet}}
$$
This way, given an output signal at $\nm{suspend}_{i}$, 
process 
$\component{\nm{w}_{i}}{\nmu{W}_{i}} \parallel U_{2}$
evolves to the persistent process $!\, \nm{resume}_{i}.\component{\nm{w}_{i}}{\nmu{W}_{i}}$,
which can be reactivated at a later time.
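As a sketch of the intended behavior (with $\longrightarrow$ denoting the reduction relation), suspension takes two steps: a synchronization at $\nm{suspend}_{i}$, which releases the update action in $U_{2}$, followed by the update at $\nm{w}_{i}$, which captures the current state $\nmu{W}_{i}$ behind the prefix $\nm{resume}_{i}$:
$$
\overline{\nm{suspend}_{i}} \parallel \component{\nm{w}_{i}}{\nmu{W}_{i}} \parallel U_{2}
\;\longrightarrow\;\longrightarrow\;
!\, \nm{resume}_{i}.\component{\nm{w}_{i}}{\nmu{W}_{i}} \parallel U_{2}
$$
A later output at $\nm{resume}_{i}$ then recreates the adaptable process $\component{\nm{w}_{i}}{\nmu{W}_{i}}$, reactivating the workflow.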
Observe that if one considers policies such as $U_{2}$, then we end up with an 
\evold{1} process, as the hole and an adaptable process occur guarded behind a prefix.



\subsection{Scaling in Cloud Computing Applications}

In the emerging cloud computing para\-digm, applications are deployed in 
the infrastructure offered by external providers.
Developers act as clients: they only pay for the resources they consume 
(usually measured as the processor time in remote \emph{instances})
and for associated services (e.g., performance metrics or automated load balancing).
Central to the paradigm is 
the goal of optimizing resources for both clients and provider.
An essential feature towards that goal is \emph{scaling}: 
the capability that cloud applications have for expanding themselves in times of high demand, 
and for reducing themselves when the demand is low. 
Scaling can be appreciated in, e.g., the number of running instances supporting the application, 
and may have important financial effects. 
Consequently, cloud providers such as Amazon's Elastic Compute Cloud (EC2) \cite{autoscaling} offer 
libraries, APIs, and services for \emph{autoscaling}; 
also common are external tools that build on the available APIs to implement sophisticated scaling policies.

Here we represent a cloud computing application as adaptable processes.
Our focus is on the formalization of scaling policies, 
drawing inspiration from 
the autoscaling library 
provided by 
EC2.
For scaling purposes, applications in EC2 are divided into \emph{groups}, each defining different scaling policies for 
different parts of the application. This way,  e.g., 
the part of the application deployed in Europe can have different scaling policies
from the part deployed in the US.
Each group is then composed of a number of identical instances implementing the web application, together with active processes implementing the scaling policies.
This scenario can be abstracted in \evol{} as the process
$App  \midef  G_{1} \parallel \cdots \parallel G_{n}$, with 
$$
G_{i}  =  \componentbbig{g_{i}}{\, I \parallel \cdots \parallel I \parallel S_{dw} \parallel S_{up} \parallel \nmu{CTRL}_{i} \,}
$$ 
where each group $G_{i}$ contains a fixed
number of running instances, each represented by 
$I = \component{\nmu{mid}}{\nm{A}}$, a process that abstracts an instance as an adaptable process 
with identifier $\nmu{mid}$ and state \nm{A}.
Also, $S_{dw}$ and $S_{up}$ stand for the processes implementing scaling down and scaling up policies, respectively.
Process $\nmu{CTRL}_{i}$ abstracts the part of the system which controls scaling policies
for group $i$.
In practice, this control relies on
external services (such as, e.g., services that monitor cloud usage and produce appropriate \emph{alerts}).
A simple way of abstracting scaling policies is the following:
%\begin{eqnarray*}
$$
S_{dw}  =  \componentbbig{s_{d}}{\, !\,\nm{alert^{d}}.\prod^{j} \update{\nmu{mid}}{\nil}\, } \qquad
S_{up}  =  \componentbbig{s_{u}}{\, !\,\nm{alert^{u}}.\prod^{k} \updatebig{\nmu{mid}}{\component{\nmu{mid}}{\bullet} \parallel \component{\nmu{mid}}{\bullet}}\, }
$$
%\end{eqnarray*}
where $\prod^{k} P$ denotes the parallel composition
of $k$ instances of process $P$.
Given proper alerts from $\nmu{CTRL}_{i}$, the above processes modify the number of running instances.
In fact, given an output at $\nm{alert^{d}}$ 
(meaning that the number of instances should be decreased by $j$), 
process $S_{dw}$ destroys $j$ instances.
This is achieved by leaving the inactive process as the new state of locality $\nmu{mid}$.
Similarly, given an output at $\nm{alert^{u}}$ 
(meaning that the number of instances should be increased by $k$), 
process $S_{up}$ spawns $k$ update actions, each creating a new instance.
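The effect of a single update action released by these policies can be sketched as follows (the general case simply involves $j$ and $k$ parallel copies of the respective update actions):
$$
\component{\nmu{mid}}{\nm{A}} \parallel \update{\nmu{mid}}{\nil} \;\longrightarrow\; \nil
\qquad
\component{\nmu{mid}}{\nm{A}} \parallel \updatebig{\nmu{mid}}{\component{\nmu{mid}}{\bullet} \parallel \component{\nmu{mid}}{\bullet}} \;\longrightarrow\; \component{\nmu{mid}}{\nm{A}} \parallel \component{\nmu{mid}}{\nm{A}}
$$
On the left, an instance is destroyed by replacing it with the inactive process; on the right, an instance is duplicated, with each copy inheriting the current state \nm{A}.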

Observe that both $S_{dw}$ and $S_{up}$ are \evold{2} processes: since we represent instances as adaptable processes with state,
every modification enforced by the scaling policies will result in a different topology of adaptable processes. 
A correctness guarantee in this setting is that the cloud infrastructure satisfies the scaling requirements of client applications
within a fixed bound. 
More precisely, we would like to ensure that every scaling alert managed by $\nmu{CTRL}_{i}$ (e.g., one requesting additional instances) will disappear within a certain bound, meaning that the scaling request is promptly addressed by the cloud provider. This kind of reliability guarantee can be represented 
in terms of \OG, an adaptation problem which is decidable for \evold{2} processes.
Of course, the decidability of correctness guarantees depends much on their actual representations. Above, we have opted for simple, illustrative representations; clearly, different process abstractions may exploit other decidability results.
%For instance,  a simple variant of $S_{dw}$ which simply replaces the current state \nm{A} with $\nil$, preserving the adaptable process at \nmu{mid}, would be an \evols{2} process. Hence, reliability properties for scaling down policies could exploit the fact that \LG is decidable for \evols{2} processes, and so one could determine if scaling down requests are \emph{eventually} addressed by the provider.




Autoscaling in  EC2 also includes the possibility of \emph{suspending} and \emph{resuming}
the scaling policies themselves. 
To formalize this capability, we proceed as we did for process $U_{2}$ above.
%which implements the suspension of workflows. 
This way, for the scale down policy,
one can assume that $\nmu{CTRL}_{i}$
includes a process 
$U_{dw} =  !\, \nm{susp_{down}}.\update{s_{d}}{! \, \nm{resume_{dw}}.\component{s_{d}}{\bullet}}$
%\qquad
%U_{up} =  !\, \nm{susp_{up}}.\update{s_{u}}{! \, \nm{resume_{up}}.\component{s_{u}}{\bullet}}
%$$
which, given an output signal on \nm{susp_{down}}, captures
the current 
policy and evolves into a process that allows it to be 
resumed at a later stage.
(The case of the scaling up policy is analogous.)
Using the same principle, other modifications to the policies are possible.
For instance, a natural update is one that modifies the scaling policies by 
changing the 
number of instances involved 
(i.e., $j$ in $S_{dw}$ and $k$ in $S_{up}$).
As before, if our specification includes the ability to suspend/resume scaling policies, as implemented by $U_{dw}$, then we obtain an \evold{1} process.

