\chapter{Stochastic Process Algebra Formalization of Client-Server Model}
\label{pepachapter}
In the previous chapter we determined an elegant cost model for the under-load memory access latency in shared memory architectures. Unfortunately, several problems and assumptions may impair the analytical resolution technique of the client-server model. We point out three key aspects.
\begin{itemize}
\item \textbf{Probability distributions and queue types.} Inter-arrival and service times at a queue of a client-server system may not be exponentially distributed. It is well known that providing analytical resolution techniques for non-Markovian systems is a non-trivial task. However, we know from~\cite{ASE} that a good approximation for deterministic service times (using the $M/D/1$ queue) can be achieved with~\ref{system-base}, but we cannot say the same about the modelling of:
\begin{enumerate}
\item deterministic inter-arrival times
\item distributions different from the deterministic and exponential ones (if they were needed)
\item queues exhibiting a load-dependent behaviour
\end{enumerate}
\item \textbf{Impact of the parallel application.} In the classic model, clients are assumed homogeneous with a fixed ideal inter-departure time $T_{cl}$. However, we have seen that clients may behave differently from each other in order to model heterogeneous processes. Moreover, we have already discussed the alternation of different process phases. A process itself can have a complex behaviour, and modelling it through the mean value of a probability distribution may or may not be an acceptable approximation. In the simplest scenario, a phase of sequential elaboration is followed by a phase of communication (either point-to-point or collective); it should be clear that the load generated on the memory system in these two phases may differ drastically, e.g. even by an order of magnitude.
\item \textbf{Hierarchical shared memory.} If the multi-core trend continues in its current direction, hierarchical shared memory will be a relevant feature. In these architectures, more than one level of the memory hierarchy is shared by processing nodes (see Section~\ref{pn}). Therefore, conflicts for accessing shared resources could become significant with respect to the under-load memory access latency. We also need to measure these kinds of conflicts, perhaps by enhancing the client-server structure to model a hierarchy of servers (\textit{hierarchical client-server systems with request-reply behaviour}).
\end{itemize} 
Clearly, if we want to address these aspects, the resolution techniques must be adequately improved. Again, the problem is to determine the trade-off between the complexity of the resolution technique and the quality of the approximated results. In light of this, the following methodology is proposed:
\begin{itemize}
\item the client-server model with request-reply behaviour remains the reference paradigm (where needed, servers may be structured in a hierarchy),
\item but \textit{numerical} resolution techniques will be used to evaluate the under-load memory access latency, in place of the analytical ones.  
\end{itemize}
We advocate that the employment of numerical techniques can overcome the complexity deriving from the formalization of analytical ones. The idea is to describe the client-server model at the level of Markov Chains. Many direct solution methods exist for moderately sized Continuous Time Markov Chain (CTMC) models, while iterative techniques are available for very large models~\cite{PAQA}. Since Markov processes can be difficult to construct by hand (this holds for human beings, but perhaps not for a compiler), we will exploit a \textit{stochastic process algebra} (SPA) as an intermediate description language. An SPA approach is very appealing because the aforementioned aspects can be addressed in a formal and structured way.

The structure of the chapter is the following:
\begin{enumerate}
\item firstly, we introduce and describe the stochastic process algebra PEPA
\item secondly, we show how to express a basic client-server model with the new formalism
\item finally, the accuracy of the new resolution technique will be compared against experimental results
\end{enumerate}

Hierarchical systems and a methodology for an in-depth analysis of the parallel application impact will be formalized in the next chapter following this approach.

\section{PEPA: a Process Algebra for Quantitative Analysis}
\textit{Performance Evaluation Process Algebra}~\cite{HIL} (PEPA) is a high-level description language for Markov processes which belongs to the class of \textit{Stochastic Process Algebras}~\cite{SPA} (SPA). Among the wide class of SPAs, we choose PEPA because it is \textit{simple} but at the same time has sufficient expressiveness for our purposes. The simplicity comes from the structure of the language: PEPA has only a few elements, and a formal interpretation of all expressions can be provided by a structured operational semantics. In this section we introduce only the minimal set of PEPA features strictly necessary to model client-server systems with feedback; for a deeper understanding the reader is invited to consult~\cite{HIL}.

We recall that Markov processes rely on the memoryless property of the exponential distribution.
\begin{de}\label{Markov}
\textbf{Markov Process.} A stochastic process $X(t)$, $t \in [0,\infty)$, with discrete state space $S$ is a Markov process if and only if, for $t_0 < t_1 < ... < t_n < t_{n+1}$, the joint distribution of $(X(t_0), X(t_1), \ldots, X(t_n), X(t_{n+1}))$ is such that
\[
Pr(X(t_{n+1}) = s_{i_{n+1}} \mid X(t_n) = s_{i_n}, \ldots, X(t_0) = s_{i_0}) =
\]
\[
Pr(X(t_{n+1}) = s_{i_{n+1}} \mid X(t_n) = s_{i_n})
\]
\end{de}
Intuitively, this means that the probability that $X$ moves into the state $s_{i_{n+1}}$ at time $t_{n+1}$ is \textit{independent} of the behaviour of $X$ \textit{prior} to the instant $t_n$ or, in other words, it depends \textit{exclusively} on the state $s_{i_n}$ of $X$ at time $t_n$. It is important to keep this property in mind when working with PEPA.
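The connection with exponential durations can be checked directly: for $T$ exponentially distributed with rate $r$, $Pr(T > s + t \mid T > s) = Pr(T > t)$, i.e. the time already elapsed carries no information. A minimal Python sketch (the values of $r$, $s$ and $t$ are illustrative, not taken from the model):

```python
import math

def tail(r, t):
    """P(T > t) for T exponentially distributed with rate r."""
    return math.exp(-r * t)

# Illustrative parameters.
r, s, t = 0.5, 2.0, 3.0

# P(T > s + t | T > s) = P(T > s + t) / P(T > s)
conditional = tail(r, s + t) / tail(r, s)
unconditional = tail(r, t)
# The two probabilities coincide: the distribution is memoryless.
```

The identity holds for every choice of $r$, $s$ and $t$, which is what allows the state of a PEPA model to summarize all relevant history.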

\paragraph{The Language}
A PEPA system is described as the composition of \textit{components} that undertake \textit{actions}. Components correspond to identifiable parts of the system. For instance, in our context, clients and servers will be the components of the system. A component may be atomic or may itself be composed of other components. The language is indeed \textit{compositional} in the sense that new components may be formed through the cooperation of existing ones. Each component can perform a finite set of actions. An action has a duration (or delay) which is a random variable with an \textit{exponential distribution}. Consequently, the \textit{rate} of the action is given by the parameter of the exponential distribution. For example, the expression
\begin{displaymath}
\begin{array}{rcl}
  P &\rmdef& (\alpha, r).Q\\
\end{array}
\end{displaymath}
represents the definition of a new component $P$ which can undertake an action $\alpha$ at rate $r$ to evolve into another component $Q$ (defined elsewhere). Since the durations of all actions in the system are exponentially distributed, it is intuitive that the stochastic behaviour of the model is governed by an underlying CTMC.

The syntax of the PEPA language is formally defined by the following grammar.
\begin{displaymath}
\begin{array}{rcl}
  S & ::= & (\alpha, r).S\ |\ S + S\ |\ C_S \\  
  P & ::= & P \sync{\mathcal{L}} P\ |\ P/L\ |\ C
\end{array}
\end{displaymath}
$S$ denotes a \textit{sequential component} and $P$ denotes a \textit{model component} which executes in parallel. $C$ and $C_S$ stand for constants denoting a model and a sequential component respectively. The effect of this syntactic separation is to ensure that model components can only be built as cooperations of sequential components, which has been proved in~\cite{HIL} to be a necessary condition for obtaining \textit{ergodic} Markov processes, i.e. processes amenable to steady-state analysis.

\begin{figure}[h]
        \centerline{
               \mbox{\includegraphics[scale=0.70]{Images/pepa-sem}}
        }
        \caption{Structured Operational Semantic of PEPA.}
        \label{pepa-sem}
\end{figure}

\clearpage

The structured operational semantics is shown in Figure~\ref{pepa-sem}. Below, an intuitive description of the most commonly used PEPA operators is provided. For a complete treatment the reader is invited to consult~\cite{HIL}.
\begin{itemize}
\item \textbf{Prefix ($(\alpha, r).P$)} This is the basic mechanism to express a sequential behaviour in PEPA. As already said, a component performs an action $\alpha$ at rate $r$ behaving subsequently as $P$.
\item \textbf{Choice ($P+Q$)} This operator represents a component that may behave either as $P$ or as $Q$. Assume that $\alpha$ and $\beta$ are the actions that enable $P$ and $Q$ respectively, each characterized by its own rate. The idea behind the Choice operator is that once one action has been completed, the other is discarded. For instance, if the first action to be completed is $\beta$ then the component moves to $Q$, ``forgetting'' the other branch.
\item \textbf{Cooperation ($P \sync{\mathcal{L}} Q$)} This operator denotes the cooperation between $P$ and $Q$ over $\mathcal{L}$. $\mathcal{L}$ is the cooperation set that contains those activities on which the components are \textit{forced} to synchronize. The rate of a shared activity has to be altered to reflect the slower component in the cooperation (see Figure~\ref{pepa-sem}). It is important to notice that for actions not in $\mathcal{L}$ the components proceed \textit{independently} and \textit{concurrently} with their enabled activities. Cooperation is actually a \textit{multi-way synchronization}, since more than two components are allowed to jointly perform actions of the same type. When concurrent components do not have to synchronize, the cooperation set $\mathcal{L}$ is empty; in these cases we will use the abbreviation $P || Q$ to denote $P$ and $Q$ running in parallel. We will also use a simple syntactic shorthand to denote an expression like $(P || P || \ldots || P)$ as $P[N]$, with $N$ the number of times that $P$ is replicated. Finally, we point out that there can be situations in which two components do synchronize, but the rate of the shared activity is determined by only one of the components in the cooperation. In this case the other component is defined as \textit{passive}. The rate of the activity for the passive component will be denoted by the symbol $\top$.
\end{itemize}
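The race semantics behind Prefix and Choice can be illustrated numerically: if a component enables $(\alpha, r_\alpha)$ and $(\beta, r_\beta)$ simultaneously, the time until the first action completes is exponentially distributed with rate $r_\alpha + r_\beta$, and $\alpha$ wins the race with probability $r_\alpha/(r_\alpha + r_\beta)$. A Monte Carlo sketch in Python (the rates are illustrative, not taken from the model):

```python
import random

random.seed(1)  # deterministic run

# Illustrative rates for the two competing activities.
r_alpha, r_beta = 2.0, 3.0
n = 100_000

alpha_wins = 0
total_time = 0.0
for _ in range(n):
    t_alpha = random.expovariate(r_alpha)  # sampled duration of (alpha, r_alpha)
    t_beta = random.expovariate(r_beta)    # sampled duration of (beta, r_beta)
    total_time += min(t_alpha, t_beta)     # the faster action completes first
    if t_alpha < t_beta:
        alpha_wins += 1

frac_alpha = alpha_wins / n     # expected: r_alpha / (r_alpha + r_beta) = 0.4
mean_sojourn = total_time / n   # expected: 1 / (r_alpha + r_beta) = 0.2
```

This is exactly why the sojourn time in each state of the underlying CTMC remains exponential, whatever the number of enabled activities.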

\section{A PEPA Formalism for Client-Server Model with Request-Reply Behaviour}
\subsection{Definition}
\label{cs-definition-pepa}
A PEPA program for the classical client-server model with request-reply behaviour (Section~\ref{csmodel}) can be instantiated to model a processors-memory system just knowing the following parameters:
\begin{itemize}
\item $T_P$, the mean time between two consecutive accesses of a processing node (executing a process) to a certain memory macro-module
\item $T_S$, the average service time of that memory macro-module
\item $p$, the average number of processing nodes accessing that memory macro-module
\item $T_{req}$, the base network latency for a memory request
\item $T_{resp}$, the base network latency for a memory reply
\end{itemize}

The resulting PEPA program is shown below.

\begin{displaymath}
\label{pepa-program-cs}
	\begin{array}{rcl}
		\mathit{r_{request}} & = & 1.0/\mathit{T_P}\\
		\mathit{r_{reply}} & = & 1.0/\mathit{T_S}\\	 \\
		
		\mathit{Client_{think}} & \rmdef & (\mathit{request},\mathit{r_{request}}).\mathit{Client_{wait}}\\
		\mathit{Client_{wait}} & \rmdef & (\mathit{reply},\top).\mathit{Client_{think}}\\ \\		
		
		\mathit{Server} & \rmdef & (\mathit{request},\top).\mathit{Server}+(\mathit{reply},\mathit{r_{reply}}).\mathit{Server}\\ \\

		\multicolumn{3}{l}{\mathit{Client_{think}}[p]\sync{request,reply}\mathit{Server}}
	\end{array}
\end{displaymath}

Each client models a process (on a processing node) that operates forever in a simple loop, completing in sequence the two phases $think$ and $wait$ (Figure~\ref{process-pepa}).

\begin{figure}[h]
    \begin{center}
        \includegraphics[]{Images/process-pepa}
    \end{center}
    \caption{A Client alternating $think$ and $wait$ phases.}
    \label{process-pepa}
\end{figure}

As already mentioned, the length of the $think$ phase is $T_P$. At its end, a $request$ action is executed and the client waits for a $reply$, i.e. it starts the $wait$ phase. The $request$ action is shared between the clients and the server, and it models the situation in which a client sends a request and the server receives it. The length of the $wait$ phase is $R_Q$. For this reason, the time needed to complete the $reply$ action (phase $wait$) is initially \textit{unspecified}: it will be imposed in another PEPA expression through the cooperation with another component. Therefore, $Client$ components see $reply$ as a pure synchronization operation.

The server modelling the memory macro-module can either accept a request from one of the $p$ clients (action $request$) or send them a $reply$. The time to complete a $request$ action is obviously unspecified because it depends on clients. The action $reply$ is shared to model the fact that a client can go back to the $think$ phase as soon as the server has handled its request.

Finally, the last expression instantiates a client-server model with $p$ clients running in parallel that synchronize with the server through the cooperation set containing the two shared actions $request$ and $reply$.

It is useful to highlight that even simpler solutions could be formalized: for instance, the synchronization on the action $request$ is not strictly necessary. However, we decided to keep it for two reasons. First, it helps to understand the semantics of the whole system (the ``request-reply behaviour''). Second, it will be necessary anyway in further extensions of this basic model.

\subsection{Quantitative Comparison with respect to other Resolution Techniques}
\paragraph{Preliminary Considerations}
\label{pepaconsiderations}
Solving a PEPA model means solving the underlying ergodic CTMC, i.e. computing its steady-state distribution. We wrote and solved PEPA models using the classic tool PEPA Workbench~\cite{PEPA-WORK}. This tool provides several numerical resolution techniques. Different techniques can be employed depending on the size of the resulting CTMC: if the number of states is huge (hundreds of thousands), iterative yet approximate techniques are preferred. However, the models that we treat are extremely small (they never exceed a hundred states), thus the steady state has been computed directly with a standard algorithm. In other cases, e.g. when the number of clients grows significantly, a phenomenon known as \textit{state space explosion} may arise. However, thanks to the natural structure of our models, we may take full advantage of both \textit{state-reduction} and \textit{fluid-approximation} techniques~\cite{HAYDEN}. Briefly, these techniques tackle the state space explosion by exploiting potential symmetries in the CTMC. The presence of symmetries can be informally deduced by looking at the PEPA expressions: for instance, in our model the set of homogeneous clients (``$Client[p]$'') induces replicated sub-Markov chains in the underlying CTMC. These replicated subsystems can be exploited to restructure the CTMC itself and lower the state space size.
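To give an intuition of what the direct computation involves: since the clients are homogeneous, the CTMC underlying the model can be read as a birth-death (machine-repairman) chain whose state is the number of clients currently at the server, and such chains admit a product-form steady state. The Python sketch below is our own illustration of this reading, not the algorithm used by PEPA Workbench; the value $T_P = 400\tau$ is arbitrary and only illustrative:

```python
# Birth-death reading of the homogeneous client-server model.
# State i = number of clients at the server (0 <= i <= p):
#   birth rate  (p - i) / T_P   (a thinking client issues a request)
#   death rate  1 / T_S         (the server completes a reply)
p = 16
T_P = 400.0  # think time in tau, illustrative value
T_S = 29.0   # service time in tau

# Product form: pi_i proportional to prod_{j<i} (p - j) * T_S / T_P.
weights = [1.0]  # unnormalized pi_i
for i in range(p):
    weights.append(weights[-1] * (p - i) * T_S / T_P)
norm = sum(weights)
pi = [w / norm for w in weights]   # pi[i] = P(i clients at the server)
utilisation = 1.0 - pi[0]          # server busy whenever i > 0
```

For a chain of at most $p+1$ states this computation is immediate, which is consistent with the observation that our models never exceed a hundred states.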

\paragraph{Model Resolution}
\label{pepa-resolution}
Once found, the steady-state information is exploited to derive the average response time $R_{Q_{server}}$ of the server. In particular, this information includes:
\begin{itemize}
\item the average population size of a state
\item the throughput of the actions
\end{itemize}

In our client-server model we are interested in the average number of clients that reside in state $Client_{wait}$ ($p_{wait}$) and in the throughput of the action $reply$ ($\lambda_{reply}$). Indeed, by applying Little's law (\ref{little}), we can extract the average time that a client stays in the state $Client_{wait}$, which corresponds exactly to $R_{Q_{server}}$:
\[
R_{Q_{server}} = \frac{p_{wait}}{\lambda_{reply}}
\]

It is extremely important to notice that $R_{Q_{server}}$ is \textit{not} the under-load memory access latency, but the average time spent by a request at the server. However, to find $R_Q$ it is enough to take into account the base latency of the network, as in~\ref{rq-pepa}.

\begin{equation}\label{rq-pepa}
R_Q = T_{req} + R_{Q_{server}} + T_{resp}
\end{equation}
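Under the birth-death reading of the model sketched earlier, the quantities $p_{wait}$ and $\lambda_{reply}$, and hence $R_Q$, can be computed as follows. This is an illustrative Python sketch of the derivation, not the PEPA Workbench output; the value of $T_P$ and the even split of the base network latency into $T_{req}$ and $T_{resp}$ are our own hypothetical choices:

```python
p, T_P, T_S = 16, 400.0, 29.0      # T_P illustrative; T_S exponential service time
T_req, T_resp = 21.5, 21.5         # hypothetical split of the base network latency

# Product-form steady state (state = number of clients in Client_wait).
w = [1.0]
for i in range(p):
    w.append(w[-1] * (p - i) * T_S / T_P)
Z = sum(w)
pi = [x / Z for x in w]

p_wait = sum(i * pi[i] for i in range(p + 1))  # mean clients in Client_wait
lam_reply = (1.0 - pi[0]) / T_S                # throughput of the action reply
R_Q_server = p_wait / lam_reply                # Little's law
R_Q = T_req + R_Q_server + T_resp              # add the base network latency
```

Note that $R_{Q_{server}}$ can never fall below $T_S$: the mean residence time at the server always includes the mean service time.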

\paragraph{Results}
To evaluate the accuracy of the PEPA client-server model, we have done a test with the following scenario: 
\begin{itemize}
\item the number of clients is fixed to $p = 16$.
\item the average server service time is $T_S = 29 \tau$. This value is typical of DRAM2 memories, as we have already mentioned in Section~\ref{results-het}. We assume it exponentially distributed.
\item the average $think$ period $T_P$ represents the degree of freedom. The distribution of the period is exponential. $T_P$ takes its values in the range $[100\tau-3000\tau]$. Being $p$ fixed, it is necessary to vary $T_P$ in such a way as to emulate all possible load states of the server, e.g. unloaded, congested, partially congested and so on. As already mentioned, since $p$ is fixed to $16$, for $T_P$ values greater than $800\tau$ the server is unloaded, so the cases of interest are in the range $[100\tau-800\tau]$ which, moreover, is the typical range of $T_P$ values that processes exhibit in computational phases over a shared memory architecture.
\item $T_{req}$ and $T_{resp}$ are evaluated on the Tilera Tile64 following the methodology explained in Section~\ref{base}. The overall base memory access latency $t_{a_0}$ is equal to $72 \tau$, as already reported in~\ref{results-het}.
\end{itemize}
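As a sanity check of the chosen $T_P$ range, the server utilisation predicted by the simple birth-death reading of the model can be evaluated at the two extremes of the range. The sketch below is our own back-of-the-envelope approximation, not one of the compared resolution techniques:

```python
def utilisation(p, T_P, T_S):
    """Server utilisation in the birth-death reading of the client-server model."""
    w = [1.0]
    for i in range(p):
        w.append(w[-1] * (p - i) * T_S / T_P)
    return 1.0 - 1.0 / sum(w)   # 1 - pi_0

p, T_S = 16, 29.0
rho_congested = utilisation(p, 100.0, T_S)   # lower end of the T_P range
rho_unloaded = utilisation(p, 3000.0, T_S)   # upper end of the T_P range
```

At $T_P = 100\tau$ the server is essentially saturated, while at $T_P = 3000\tau$ it is lightly loaded, confirming that sweeping $T_P$ over the chosen range covers all load regimes of interest.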

The under-load memory access time found through PEPA has been compared with the result of the following techniques:
\begin{itemize}
\item simulation performed with the JMT~\cite{JMT} Queueing Networks simulator, with which a client-server system has been implemented.
\item analytical resolution of the client-server model reported in~\ref{system-base}. The comparison against more sophisticated resolution techniques, or against a server service time with deterministic behaviour, can be found in~\cite{FABIO}.
\end{itemize}

The graph in Figure~\ref{pepa} shows the behaviour of $R_Q$ for $T_P$ varying in the range $[0 \tau-3000 \tau]$. It contains the curves of all the techniques, i.e. JMT simulation, PEPA and the classical client-server model. The other two graphs (Figure~\ref{pepaerrs}) show, respectively, the absolute and the relative error of the analytical and numerical resolution techniques against the results obtained by simulation. The most important result is visible in Figure~\ref{pepaerrs}(b): the graph shows that, for an exponential server, the PEPA approximation matches the simulation with a maximum relative error of $2\%$.

\begin{figure}[h]
    \begin{center}
        \includegraphics[]{Grafici-Tesi/PEPA/RQpepa}
    \end{center}
    \caption{The $R_Q$ curves}
    \label{pepa}
\end{figure}

\begin{figure}[h]    
    \centering
    \subfloat[Absolute Error]{\includegraphics[]{Grafici-Tesi/PEPA/abserr}}

	\centering
    \subfloat[Relative Error]{\includegraphics[]{Grafici-Tesi/PEPA/relerr}}
    
    \caption{Errors of resolution techniques against the simulation}
     \label{pepaerrs}
\end{figure}
\clearpage

\section{Conclusion}
Besides being an important step toward the modelling of complex parallel application-architecture systems using a compositional and structured high-level approach, PEPA turns out to be useful also with respect to the quality of the approximation, because numerical resolution techniques are involved. Moreover, the complexity of these resolutions is mitigated by advanced techniques and the use of proper tools. What we are going to verify in the next chapters is the possibility for this new formalism to extend the client-server model taking into account application constraints or more complex architectures, e.g. shared memory hierarchies.
