\chapter{Stochastic process algebra formalization of client-server models}\label{pepach}
In the previous chapter we pointed out the necessity of extending the client-server model in the following directions: 
\begin{itemize}
\item \textbf{Probability distributions and queue types.} Interarrival and service times at a queue of a client-server system may not be exponentially distributed. However, it is well known that providing analytical resolution techniques for non-Markovian systems is a non-trivial task. We were able to find a good approximation for deterministic service times in Chapter~\ref{clientserver}, but what about modeling, for instance, deterministic interarrival times? Moreover, the need to model different service disciplines at a queue may arise, e.g. queues exhibiting a load-dependent behaviour, as we will see in Chapter~\ref{TILERA}.
\item \textbf{Application model.} In the classic model, clients are assumed homogeneous with a fixed ideal interdeparture time $T_P$. However, clients can behave differently from each other. Further, a client itself may exhibit a particularly complex behaviour. For example, a process may alternate between different computational \textit{phases}. A phase is characterized by its own specific probability distribution of issuing memory requests. In the simplest scenario a phase of sequential elaboration is followed by a phase of communication (either point-to-point or collective); clearly, the load generated on the memory system should be modeled according to the phase characterization.
\item \textbf{Hierarchical systems.} Hierarchical architectures are here to stay. In these systems more than one level of the memory hierarchy is shared by the processing nodes (e.g. shared level-2 caches). Conflicts for accessing shared resources may significantly affect the under-load memory access latency. We need a way to measure these conflicts, perhaps enhancing the client-server structure to model a hierarchy of servers (\textit{hierarchical client-server systems with request-reply behaviour}).  
\end{itemize} 
Clearly, if we want to address these aspects, resolution techniques must be improved accordingly. Again, the problem is to determine a trade-off between the complexity of the resolution technique and the quality of the approximation. In light of this, the following methodology is proposed:
\begin{itemize}
\item the client-server model with request-reply behaviour remains the reference paradigm (where needed, servers may be structured on a hierarchy), 
\item but \textit{numerical} resolution techniques will be used to evaluate the under-load memory access latency, in place of the analytical ones.  
\end{itemize}
We advocate that the employment of numerical techniques can overcome the complexity arising from the formalization of analytical ones. The idea is to describe the client-server model at the level of Markov chains. Many resolution methods exist for moderately sized Continuous Time Markov Chain (CTMC) models, while iterative techniques exist for huge models~\cite{PAQA}. Since Markov processes can be difficult to construct directly, we will exploit, as an intermediate description language, a \textit{stochastic process algebra} (SPA). An SPA approach is attractive because the aforementioned aspects can be addressed in a formal and structured way.

The modeling of hierarchical systems and an in-depth analysis of the parallel application impact have been addressed, following the new SPA approach, in~\cite{ALBE}. On the other hand, in this document we will focus on two other concrete advantages: 
\begin{enumerate}
\item \textit{from the model point of view}, an extremely simple solution to integrate load-dependent queues;
\item \textit{from the quantitative analysis perspective}, we will see that results obtained with the numerical resolution technique are better than the ones obtained by means of the analytical techniques of Chapter~\ref{clientserver}.
\end{enumerate} 

The structure of the chapter is the following:
\begin{enumerate}
\item firstly, we introduce and describe the stochastic process algebra PEPA;
\item secondly, we show how to express a basic client-server model with the new formalism;
\item then the accuracy of the new resolution technique will be compared against experimental results;
\item finally, a very simple solution for modeling load-dependent queues is shown. 
\end{enumerate}

\section{PEPA: a process algebra for quantitative analysis}
Performance Evaluation Process Algebra~\cite{HIL} (\textit{PEPA}) is a high-level description language for Markov processes which belongs to the class of \textit{Stochastic Process Algebras}~\cite{SPA} (SPA). Among the wide class of SPAs, we chose PEPA because it is \textit{simple} but at the same time sufficiently expressive for our purposes. The simplicity comes from the structure of the language: PEPA has only a few elements, and a formal interpretation of all expressions can be provided by a structured operational semantics. In this section we introduce the minimal set of PEPA features strictly necessary to model client-server systems with request-reply behaviour; for a deeper understanding the reader is invited to consult~\cite{HIL}.

The quantitative analysis of PEPA models is based on Markovian Theory. A Markov process relies on the memoryless property of the exponential distribution. The following definition is particularly important.
\begin{de}\label{Markov}
\textbf{Markov Process.} A stochastic process $X(t)$, $t \in [0,\infty)$, with discrete state space $S$ is a Markov process if and only if, for $t_0 < t_1 < ... < t_n < t_{n+1}$, the joint distribution of $(X(t_0), X(t_1), ..., X(t_n), X(t_{n+1}))$ is such that
\[
Pr(X(t_{n+1}) = s_{i_{n+1}} \mid X(t_n) = s_{i_n}, ..., X(t_0) = s_{i_0}) = Pr(X(t_{n+1}) = s_{i_{n+1}} \mid X(t_n) = s_{i_n})
\]
\end{de}
Intuitively, this means that the probability that $X$ moves into state $s_{i_{n+1}}$ at time $t_{n+1}$ is \textit{independent} of the behaviour of $X$ \textit{prior} to instant $t_n$. It is important to keep the memoryless property in mind when working with PEPA.
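The memoryless property can be verified directly from the exponential survival function $Pr(X > t) = e^{-\lambda t}$. The following minimal Python check (with arbitrarily chosen rate and times) confirms that $Pr(X > s + t \mid X > s) = Pr(X > t)$:

```python
import math

def p_exceeds(rate, t):
    """P(X > t) for an exponential random variable with the given rate."""
    return math.exp(-rate * t)

rate, s, t = 0.5, 2.0, 3.0

# Conditional probability P(X > s + t | X > s) = P(X > s + t) / P(X > s)
conditional = p_exceeds(rate, s + t) / p_exceeds(rate, s)

# Memoryless property: it equals the unconditional P(X > t)
assert math.isclose(conditional, p_exceeds(rate, t))
```

The exponential is the only continuous distribution with this property, which is what makes the CTMC machinery applicable.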

\subsection{The PEPA language}
A PEPA system is described as the composition of \textit{components} that undertake \textit{actions}. Components correspond to identifiable parts of the system. For instance, in our context, clients and servers will be components of the system. A component may be atomic or may itself be composed of other components. The language is indeed \textit{compositional}, in the sense that new components may be formed through the cooperation of existing ones. Each component can perform a finite set of actions. An action has a duration (or delay), which is a random variable with an \textit{exponential distribution}. Consequently, the \textit{rate} of the action is the parameter of the exponential distribution. For example, the expression 
\begin{displaymath}
\begin{array}{rcl}
  P &\rmdef& (\alpha, r).Q\\
\end{array}
\end{displaymath}
represents the definition of a new component $P$ which can undertake an $\alpha$-action at rate $r$ and evolve into another component $Q$ (defined elsewhere). Since the durations of all actions of the system are exponentially distributed, it is intuitive that the stochastic behaviour of the model is governed by an underlying CTMC. 

The syntax of the PEPA language is formally defined by the following grammar.
\begin{displaymath}
\begin{array}{rcl}
  S & ::= & (\alpha, r).S\ |\ S + S\ |\ C_S \\  
  P & ::= & P \sync{\mathcal{L}} P\ |\ P/L\ |\ C
\end{array}
\end{displaymath}
$S$ denotes a \textit{sequential component} and $P$ denotes a \textit{model component} which executes in parallel. $C$ and $C_S$ stand for constants denoting a model or a sequential component, respectively. The effect of this syntactic separation is to allow building only model components that are cooperations of sequential components. This structuring is a necessary condition for obtaining \textit{ergodic} Markov processes, i.e. processes amenable to steady-state analysis~\cite{HIL}.

\begin{figure}[h]
        \centerline{
               \mbox{\includegraphics[scale=0.70]{Images/pepa-sem}}
        }
        \caption{Structured operational semantics of PEPA.}
        \label{pepa-sem}
\end{figure}

\clearpage

The structured operational semantics is shown in Figure~\ref{pepa-sem}. Below, an intuitive description of the most used PEPA operators is provided. For a complete treatment the reader is invited to consult~\cite{HIL}.
\begin{itemize}
\item \textbf{Prefix ($(\alpha, r).P$)} This is the basic mechanism to express a sequential behaviour in PEPA. As already said, a component performs an $\alpha$-action with rate $r$ and it subsequently behaves as $P$.
\item \textbf{Choice ($P+Q$)} It represents a component that may behave either as $P$ or as $Q$. Assume that $\alpha$ and $\beta$ are the first actions enabled by $P$ and $Q$ respectively, each characterized by its own rate. The idea behind the Choice operator is a race: once one action has completed, the other is discarded. For instance, if the first action to complete is $\beta$, then the component moves to $Q$, ``forgetting'' the other branch.
\item \textbf{Cooperation ($P \sync{\mathcal{L}} Q$)} It denotes the cooperation between $P$ and $Q$ over the cooperation set $\mathcal{L}$, which contains those activities on which the components are \textit{forced} to synchronize. The rate of a shared activity is altered to reflect the slower component in the cooperation (see Figure~\ref{pepa-sem}). It is important to notice that, for actions \textit{not} in $\mathcal{L}$, components proceed \textit{independently} and \textit{concurrently} with their enabled activities. In fact, cooperation is a \textit{multi-way synchronization}, since more than two components are allowed to jointly perform actions of the same type. 

When concurrent components do not have to synchronize, the cooperation set $\mathcal{L}$ is empty; in these cases we use the abbreviation $P || Q$. We will also use a simple syntactic shorthand, denoting an expression like $(P || P || ... || P)$ as $P[N]$, where $N$ is the number of times that $P$ is replicated. 

Finally, we point out that there can be situations in which two components do synchronize, but the rate of the shared activity is determined by only one of the components in the cooperation. In this case the other component is said to be \textit{passive}. The rate of the activity for the passive component is denoted with the symbol $\top$. 
\end{itemize}
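The ``slower component wins'' rule for shared activities can be sketched numerically. The following minimal Python fragment (not PEPA syntax, and restricted to the simple case in which each cooperating component enables a single activity of the shared type) treats the passive rate $\top$ as infinity:

```python
import math

PASSIVE = math.inf  # stands for the PEPA passive rate (written as T in the models)

def shared_rate(r_p, r_q):
    """Rate of a shared activity in a cooperation P <L> Q, single-activity case:
    the slower component determines the rate; a passive component (rate PASSIVE)
    lets its partner fix the rate entirely."""
    return min(r_p, r_q)

# An active component at rate 2.0 cooperating with a passive one
assert shared_rate(2.0, PASSIVE) == 2.0
# Two active components: the slower rate prevails
assert shared_rate(2.0, 5.0) == 2.0
```

In full PEPA the rule is more refined (apparent rates distribute over multiple enabled activities of the same type, as in Figure~\ref{pepa-sem}); the sketch above only captures the basic case used in our models.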

\subsection{On the resolution of PEPA models}
Solving a PEPA model means solving the underlying ergodic CTMC, i.e. computing its steady-state distribution. We wrote and solved PEPA models using a classic tool, the PEPA Workbench~\cite{PEPA-WORK}. This tool provides several numerical resolution techniques; which one to employ depends on the size of the resulting CTMC: if the number of states is huge (hundreds of thousands), iterative yet approximate techniques are preferred. The models that we treat are extremely small (they never exceed a hundred states), thus the steady state has been computed directly with a standard algorithm. In other cases, e.g. when the number of clients grows significantly, a phenomenon known as \textit{state space explosion} may arise. However, thanks to the natural structure of our models, we may take full advantage of both \textit{state-reduction} and \textit{fluid-approximation} techniques~\cite{HAYDEN}. Briefly, these techniques tackle the state space explosion by exploiting potential symmetries in the CTMC. The presence of symmetries can be informally deduced by looking at the PEPA expressions: for instance, in our models a set of homogeneous clients (``$Client[p]$'') induces replicated sub-chains in the underlying CTMC. These replicated subsystems are exploited to restructure the CTMC itself, lowering the state space size. Finally, notice that reduction techniques are automated in tools like the PEPA Workbench.
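To give a concrete idea of what such a direct numerical resolution amounts to, the following Python sketch computes the steady state $\pi$ of a small CTMC with generator matrix $Q$ by solving $\pi Q = 0$, $\sum_i \pi_i = 1$ as a linear system. This is only a minimal stand-in for the algorithms actually implemented in the PEPA Workbench:

```python
import numpy as np

def steady_state(Q):
    """Steady-state distribution pi of an ergodic CTMC with generator Q:
    solve pi Q = 0 subject to sum(pi) = 1 via a direct least-squares solve."""
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])   # pi Q = 0  ->  Q^T pi^T = 0, plus normalization
    b = np.zeros(n + 1)
    b[-1] = 1.0                        # the normalization constraint sum(pi) = 1
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Two-state example: rate a from state 0 to state 1, rate b back
a, b = 1.0, 3.0
Q = np.array([[-a,  a],
              [ b, -b]])
pi = steady_state(Q)
# Known closed form for this chain: pi = (b, a) / (a + b)
assert np.allclose(pi, [b / (a + b), a / (a + b)])
```

For models of the sizes we deal with (under a hundred states), a direct solve of this kind is entirely adequate.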


\section{Fitting general distributions in PEPA terms}
The exponential distribution is not always the most realistic event duration distribution when modeling a system architecture. One critical example involves a memory system that is able to retrieve a requested block in a fixed (constant) amount of time. 

Therefore we need a way to fit general (at least deterministic) distributions into a PEPA model, which is unfortunately entirely based on CTMCs (i.e. on exponential timings). A possible strategy is to use \textit{phase-type distributions} to approximate general distributions within a standard PEPA model~\cite{DETPEPA}. Phase-type distributions are constructed by combining multiple exponential random variables, and can be used to approximate most general distributions. In the following we will concentrate on approximating a deterministic distribution.

The \textit{Erlang} distribution is a commonly used example of a phase type distribution which consists of an exponential distribution repeated $k$ times~\cite{PAP}. The probability density function for the Erlang distribution is as follows:
\[
f(x) = \frac{\lambda^k\ x^{k-1}\ e^{-\lambda x}}{(k-1)!}
\]
Notice that for $k=1$ it reduces to an exponential distribution. In PEPA this is generally modelled as a ticking clock~\cite{DETPEPA}:
\begin{displaymath}
	\begin{array}{rcl}
		\mathit{Clock_i} & \rmdef & (\mathit{tick},\mathit{t}).\mathit{Clock_{i-1}}\ \ \ : 1< i \leq k \\
		\mathit{Clock_1} & \rmdef & (\mathit{event}, t).\mathit{Clock_k}
	\end{array}
\end{displaymath}
where $t = k\lambda$. \textit{We will use the Erlang distribution to approximate deterministic events}: the greater the value of $k$ (i.e. the more ticks), the closer the Erlang distribution approximates a deterministic delay. Clearly the value of $k$ cannot be chosen at will, otherwise the number of states in the underlying CTMC could grow significantly. Fortunately, we will see that a value of $k$ in the range $[2, 4]$ is sufficient to model a deterministic server.
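The quality of the approximation can be quantified through the squared coefficient of variation of the Erlang distribution, which equals $1/k$ when each of the $k$ phases has rate $k\lambda$ (so that the overall mean stays $1/\lambda$). A small Python check, using $T_S = 29\tau$ purely as an illustrative mean:

```python
import math

def erlang_stats(k, lam):
    """Mean and variance of an Erlang with k phases, each at rate k*lam,
    so that the overall mean stays 1/lam regardless of k."""
    phase_rate = k * lam
    mean = k / phase_rate            # = 1/lam
    var = k / phase_rate ** 2        # = 1/(k * lam**2)
    return mean, var

lam = 1.0 / 29.0                     # e.g. a mean delay of 29 tau
for k in (1, 2, 4):
    mean, var = erlang_stats(k, lam)
    assert math.isclose(mean, 29.0)
    # Squared coefficient of variation shrinks as 1/k:
    assert math.isclose(var / mean ** 2, 1.0 / k)
# k = 1 is the exponential (CV = 1); larger k approaches a deterministic delay (CV -> 0)
```

Already at $k = 4$ the coefficient of variation halves with respect to the exponential, which explains why small values of $k$ suffice in practice.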


\section{A PEPA model of client-server systems with request-reply behaviour}\label{pepacliserv}
\subsection{The general model}
A PEPA model for a client-server system with request-reply behaviour is shown below. 
\begin{displaymath}
	\begin{array}{rcl}
		\mathit{Client_{think}} & \rmdef & (\mathit{request},\mathit{r_{request}}).\mathit{Client_{wait}}\\
		\mathit{Client_{wait}} & \rmdef & (\mathit{reply},\top).\mathit{Client_{think}}\\ \\		
		
		\mathit{Server} & \rmdef & (\mathit{request},\top).\mathit{Server}+(\mathit{reply},\mathit{r_{reply}}).\mathit{Server}\\ \\

		\multicolumn{3}{l}{\mathit{Client_{think}}[p]\sync{request,reply}\mathit{Server}}
	\end{array}
\end{displaymath}
Each client (process) operates forever in a simple loop, completing in sequence the two phases $think$ and $wait$. The $request$ action (phase $think$) is shared between clients and server: it models a client sending a request and the server receiving it. On the other hand, the time needed to complete the $reply$ action (phase $wait$) is \textit{unspecified}, i.e. it will be imposed in another PEPA expression through the cooperation with another component. Therefore, $Client$ components see $reply$ as a pure synchronization. 

The server (memory macro-module) can either accept a request from one of the $p$ clients (action $request$) or send them a $reply$. The time to complete a $request$ action is obviously unspecified, because it depends only on the clients. The action $reply$ is shared to model the fact that a client can go back to the $think$ phase as soon as the server has handled its request.

Finally, the last expression instantiates a client-server model with $p$ clients. Notice that the cooperation set contains both shared actions, $request$ and $reply$.

It is useful to highlight that even simpler solutions could be formalized: for instance, the synchronization on the action $request$ is not strictly necessary. However, we decided to keep it for two reasons. First, it helps in understanding the semantics of the whole system (the ``request-reply behaviour''). Second, it will be necessary anyway in further extensions of this basic model. 

\subsection{Applying the model to the processors-memory system}
To instantiate the PEPA client-server model on a processors-memory system we need to know the usual parameters.
\begin{itemize}
\item $T_P$ is the mean time between two consecutive accesses of a processing node to the same memory macro-module;
\item $T_S$ is the average service time of the memory macro-module;
\item $p$ is the average number of processing nodes accessing the same memory macro-module.
\end{itemize}
It is also useful to write the base latency $t_{a0}$ as follows:
\[
t_{a0} = T_{req} + L_S + T_{resp}
\]
That is, we identify the \textit{request phase} $req$, the latency of the memory service $L_S$ and the \textit{response phase} $resp$. Both $T_{req}$ and $T_{resp}$ can be easily determined following the approach of Section~\ref{networks}.

With these parameters, the PEPA model can be instantiated as follows:
\[
r_{request} = \frac{1}{T_P + T_{req} + T_{resp}}
\]
\[
r_{reply} = \frac{1}{T_S}
\]

\subsection{Model resolution}
The steady-state analysis of a PEPA model gives access to:
\begin{itemize}
\item the average population in each state of the underlying CTMC;
\item the throughput of the actions.
\end{itemize} 
To determine the average response time of the memory macro-module $R_{Q_{server}}$ it is sufficient to know:
\begin{itemize}
\item the average number of clients $p_{wait}$ (out of $p$) that reside in the $Client_{wait}$ state;
\item the throughput $\lambda_{reply}$ of the action $reply$.
\end{itemize} 
Applying Little's law (\ref{little}) we can extract the average time that a client spends in the $Client_{wait}$ state, \textit{which actually corresponds to $R_{Q_{server}}$}:
\[
R_{Q_{server}} = \frac{p_{wait}}{\lambda_{reply}}
\]

\textit{It is extremely important to notice that $R_{Q_{server}}$ is not the under-load memory access latency $R_Q$; rather, it is the average time spent by a request at the memory macro-module}. However, to obtain $R_Q$ it is enough to add the latency of the request phase $T_{req}$ and that of the reply phase $T_{resp}$. Finally, we end up with

\begin{equation}\label{rq-pepa}
R_Q = T_{req} + R_{Q_{server}} + T_{resp}
\end{equation}
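For the exponential-server case, the whole resolution pipeline above can be sketched in a few lines of Python: the CTMC underlying the PEPA model is a simple birth-death process whose state $n$ counts the clients in $Client_{wait}$, from which $p_{wait}$, $\lambda_{reply}$ and finally $R_Q$ are derived. The parameter values are illustrative (taken from the experimental setup of the next section, with $T_P = 500\tau$ chosen arbitrarily within the explored range), and the direct linear solve merely stands in for the PEPA Workbench:

```python
import numpy as np

# Illustrative parameters (tau units); T_P is an arbitrary point in the tested range
p, T_S, T_P, T_req, T_resp = 16, 29.0, 500.0, 6.0, 36.0
r_request = 1.0 / (T_P + T_req + T_resp)
r_reply = 1.0 / T_S

# For an exponential server the underlying CTMC is birth-death:
# state n = number of requests at the server (= clients in Client_wait).
Q = np.zeros((p + 1, p + 1))
for n in range(p):
    Q[n, n + 1] = (p - n) * r_request   # one of the p-n thinking clients requests
    Q[n + 1, n] = r_reply               # the server completes a reply
np.fill_diagonal(Q, -Q.sum(axis=1))

# Steady state: solve pi Q = 0 with sum(pi) = 1
A = np.vstack([Q.T, np.ones(p + 1)])
b = np.zeros(p + 2); b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

p_wait = sum(n * pi[n] for n in range(p + 1))   # average clients in Client_wait
lam_reply = r_reply * (1.0 - pi[0])             # throughput of the reply action
R_Q_server = p_wait / lam_reply                 # Little's law
R_Q = T_req + R_Q_server + T_resp               # equation (rq-pepa)
```

The deterministic-server variant would replace the single reply transition with a chain of $k$ Erlang phases, enlarging the state space accordingly.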

\section{Quantitative comparison with respect to other resolution techniques}

\paragraph{Results}
To evaluate the accuracy of the PEPA client-server model we repeat the same kind of test we did for analytical resolution techniques (Section~\ref{cliserv-results}). Therefore we will compare:
\begin{itemize}
\item results of simulations performed with the JMT queueing network simulator~\cite{JMT}, with which a client-server system has been emulated;
\item results of the PEPA model;
\item results obtained with the classical client-server analytical technique. In particular, we will show again the results obtained by means of the system of equations~\ref{system-base} of Section~\ref{standard} (in the graphs, the curve name is $CS-X-S$, where $X$ denotes the distribution of the server service time, either exponential or deterministic).
\end{itemize}
The parametrization of the client-server system is identical to the one adopted in Section~\ref{cliserv-results}:
\begin{itemize}
\item The number of clients is fixed to $p = 16$.
\item The average server service time is $T_S = 29 \tau$. This value is typical of DRAM2 memories, as we will see in Chapter~\ref{TILERA}.
\item The average process $think$ period $T_P$ represents the degree of freedom. The distribution of the $think$ periods is exponential. $T_P$ takes its values in the range $[100\tau-3000\tau]$. Being $p$ fixed, it is necessary to vary $T_P$ in such a way as to emulate all possible load states of the server (unloaded, partially loaded, congested, ...).
\item The request phase is $T_{req} = 6\tau$ while the response phase is $T_{resp} = 36\tau$, which are close to the ones of multi-core architectures, as we will see in Chapter~\ref{TILERA}.
\end{itemize}

\begin{figure}[h]
	\centerline{
		\mbox{\includegraphics[]{Grafici-Tesi/PEPA-EXP/valori-scala-log}}
	}
	\caption{$R_Q$ progress obtained by simulation and by analytical and numerical techniques for an exponential server.}
	\label{PEPA-EXP-valori}
\end{figure}

\begin{figure}[h]    
    \centering
    \subfloat[Absolute error.]{\includegraphics[]{Grafici-Tesi/PEPA-EXP/abs-err}}   
    
    \centering
    \subfloat[Relative error.]{\includegraphics[]{Grafici-Tesi/PEPA-EXP/rel-err}}   
    
    \caption{Errors of resolution techniques for an exponential server.}
    \label{PEPA-EXP-error}
\end{figure}

\begin{figure}[h]
	\centerline{
		\mbox{\includegraphics[]{Grafici-Tesi/PEPA-DET/valori-scala-log}}
	}
	\caption{$R_Q$ progress obtained by simulation and by analytical and numerical techniques for a deterministic server.}
    \label{PEPA-DET-valori}
\end{figure}

\begin{figure}[h]    
    \centering
    \subfloat[Absolute error.]{\includegraphics[]{Grafici-Tesi/PEPA-DET/abs-err}}   
    
    \centering
    \subfloat[Relative error.]{\includegraphics[]{Grafici-Tesi/PEPA-DET/rel-err}}   
    
    \caption{Errors of resolution techniques for a deterministic server.}
    \label{PEPA-DET-error}
\end{figure}

\clearpage

For an exponential server, Figure~\ref{PEPA-EXP-valori} shows the progress of $R_Q$ as $T_P$ ranges from $3000\tau$ down to $100\tau$. Figure~\ref{PEPA-EXP-error} shows the estimation error of each resolution technique with respect to the simulation. The same curves are shown for a deterministic server in Figures~\ref{PEPA-DET-valori} and~\ref{PEPA-DET-error}. The deterministic behaviour is implemented in the PEPA model by means of a $k$-Erlang distribution, $k \in \{2, 4\}$. 

\paragraph{Comments}
The results of the quantitative analysis are in general fairly good. Figure~\ref{PEPA-EXP-error} shows that, for an exponential server, the PEPA approximation matches the simulation with a maximum relative error of $2\%$. Figure~\ref{PEPA-DET-error} shows that approximating the deterministic server service time with a 4-Erlang distribution leads to a maximum relative error of roughly $6\%$. Hence, the overestimation experienced with $CS-X-S$ for loaded servers can definitely be overcome with PEPA models.

\section{An example: load-dependent service times in a client-server model}\label{pepa-load-dependent}
In this section we propose a PEPA model of a \textit{load-dependent server}. This solution will be exploited in Chapter~\ref{TILERA} where a cost model for a real architecture will be derived according to the methodology of client-server systems. 

Load-dependence means that the offered service time varies with the number of requests in the queue. In our context, the larger the number of requests in the queue, the \textit{lower} the experienced service time. This behaviour can be dictated by many factors. For instance, before serving the first of the $X$ requests in the queue, the server could sort them according to some particular criterion. This is what happens in some memory systems~\cite{WIK-DDR, TILE-IO}: since consecutive accesses to ``adjacent'' pages (the ones residing on the same memory row) can be served faster, a group of requests can be reordered beforehand.

An example of a PEPA model for a load-dependent server is shown below. Notice two key elements: on the one hand, the \textit{simplicity} of the model in spite of a meaningful change in the semantics of the server; on the other, the possibility of easily inferring the semantics of the server directly from the PEPA expression. 
\begin{displaymath}
	\begin{array}{rcl}	
		\mathit{Client_{think}} & \rmdef & (\mathit{request},\mathit{r_{request}}).\mathit{Client_{wait}}\\
		\mathit{Client_{wait}} & \rmdef & (\mathit{reply},\top).\mathit{Client_{think}}\\ \\
		
		\mathit{Server_0} & \rmdef & (\mathit{request},\top).\mathit{Server_1}\\
		\mathit{Server_1} & \rmdef & (\mathit{request},\top).\mathit{Server_2}+(\mathit{reply}, \mu).\mathit{Server_0}\\
		\mathit{Server_2} & \rmdef & (\mathit{request},\top).\mathit{Server_3}+(\mathit{reply}, 2\mu).\mathit{Server_1}\\
		\mathit{Server_3} & \rmdef & (\mathit{request},\top).\mathit{Server_4}+(\mathit{reply}, 3\mu).\mathit{Server_2}\\
		\mathit{Server_4} & \rmdef & (\mathit{request},\top).\mathit{Server_5}+(\mathit{reply}, 4\mu).\mathit{Server_3}\\ 
		\mathit{Server_i} & \rmdef & (\mathit{request},\top).\mathit{Server_{i+1}}+(\mathit{reply}, 5\mu).\mathit{Server_{i-1}}\ \ \ : 5 \leq i \leq N\\ \\

		\multicolumn{3}{l}{\mathit{Client_{think}}[N]\sync{request,reply}\mathit{Server_0}}
	\end{array}
\end{displaymath}
$N$ clients can synchronize with a server. Initially the server is empty ($Server_0$). Once a client generates a request, the server moves to $Server_1$ to indicate that one element is being served. If in the meanwhile another request arrives, the server moves to $Server_2$ (one element is being served, the other is logically queued). The server can move up to $Server_N$, since $N$ is the maximum number of requests in the system. Depending on the number of requests $i$ in the server, the exhibited service time $T_S(i)$ varies. When a request is being served and the queue is empty, the service time is maximum, that is $T_S(1)=\frac{1}{\mu}$. As new requests arrive, $T_S(i)$ decreases (in this \textit{specific} example it is inversely proportional to $i$) down to $T_S(i) = \frac{1}{5\mu}$ for $i \geq 5$. The service time stabilizes at this value, even if new requests arrive. 
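Under the same simplifications used earlier, the load-dependent server also induces a birth-death CTMC, with reply rate $\min(i,5)\,\mu$ in state $i$. The following Python sketch (all parameter values are illustrative, not taken from a specific experiment) builds its generator and computes the steady state:

```python
import numpy as np

N, mu, r_request = 16, 1.0 / 29.0, 1.0 / 500.0   # illustrative values

# Birth-death CTMC for the load-dependent server: state i = requests present,
# reply rate min(i, 5) * mu, mirroring the PEPA expressions above.
Q = np.zeros((N + 1, N + 1))
for i in range(N):
    Q[i, i + 1] = (N - i) * r_request       # a thinking client issues a request
    Q[i + 1, i] = min(i + 1, 5) * mu        # load-dependent reply rate
np.fill_diagonal(Q, -Q.sum(axis=1))

# Steady state: solve pi Q = 0 with sum(pi) = 1
A = np.vstack([Q.T, np.ones(N + 1)])
b = np.zeros(N + 2); b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

# Average number of requests at the server; the exhibited service time in
# state i is T_S(i) = 1 / (min(i, 5) * mu)
avg_queue = sum(i * pi[i] for i in range(N + 1))
```

The only change with respect to the fixed-rate model is the state-dependent reply rate, which is exactly what the indexed $Server_i$ definitions express at the PEPA level.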


