\chapter{Advanced Cost Models: Impact of the Parallel Application}
\label{extensions}

In the last chapter we introduced PEPA, a high-level formalism that generates Markov chains, allowing us to specify and solve the client-server model in a different manner with respect to the classical analytical approach reported in Section~\ref{csmodel}.

In this chapter we study whether (and how) the impact of parallel applications can be modelled using both the analytical and the numerical resolution techniques of the client-server model. We recall that parallel applications are built by composing parallel paradigms in a structured way; this fact will be fundamental for several assumptions made throughout the chapter. A further goal is to verify the price to pay in terms of accuracy for these model enhancements, so comparisons among techniques and against simulation will be made.

In particular, we want to remove some assumptions that we made in Section~\ref{assump}:
\begin{itemize}
\item processes in a parallel application may differ from one another; an advanced cost model should account for this in order to be (more) precise.
\item processes may have a complex internal behaviour that affects the under-load memory access latency. The idea is that drastic phase-dependent changes in the way memory is accessed may heavily alter the congestion on the memory macro-module (the so-called \textit{bursts}).
\end{itemize}

The chapter has the following structure:
\begin{enumerate}
\item firstly, we give some definitions that help us formally distinguish different processes, or different phases of the same process. We then show how this theory can be applied to reduce the complexity of the numerical resolution techniques, in a way orthogonal to the techniques mentioned in Section~\ref{pepaconsiderations}.
\item secondly, we study how to capture the impact of process phases. The results of the analytical and numerical resolution techniques will be compared against the output of a simulator developed at the University of Pisa.
\item finally, we show how to model in PEPA a parallel application composed of different processes. The results obtained will be compared against the JMT simulation and the analytical resolution technique introduced in Section~\ref{newcliserv}.
\end{enumerate}

\section{Process Classes and Process Phases}

Our intent is to model the workload produced by a parallel application executing on a shared memory architecture. We do so because we claim that the under-load memory access latency $R_Q$ can vary considerably across applications.

A first way to take into account the impact of a parallel application is to deal with
\begin{itemize}
\item heterogeneous processes, i.e. processes that differ in their memory request frequency or, equivalently, in their $T_P$
\item the complex behaviour that a process may exhibit internally, i.e. computational phases followed by communication
\end{itemize}

In the previous chapters we have already discussed these aspects informally. In order to give formal definitions for later use, we need a common criterion to decide when processes or phases differ. From the client-server model point of view, processes are modelled through modules with a certain service time, that is, the mean time between two consecutive memory requests. It is important to recall that this model is based on mean values rather than probability density functions, which makes the analysis simpler yet sufficiently accurate for our purposes. Consequently, the common criterion for distinguishing among processes or phases should be exactly what we have called $T_P$. In the following subsections, we give definitions of \textit{class of processes} and \textit{process phase} in such a way that they can be used in later treatments.

\subsection{Classes of Processes} 

The main idea is that processes belonging to the same class can be modelled in the same way, i.e. as homogeneous clients, while processes of different classes should be modelled as heterogeneous clients. There are two main reasons for this classification:
\begin{itemize}
\item to establish formally when a process differs from another one, so that the two can be modelled differently
\item to recognize sets of homogeneous processes that can be modelled so as to reduce the model resolution complexity, e.g. by means of an \textit{aggregate} definition of clients
\end{itemize}

We have already mentioned that, from the cost model point of view, the best parameter for classifying processes is their $T_P$. However, this would mean analysing all processes statically in order to derive each $T_P$. To keep the complexity of this procedure low, we can exploit the benefits of a structured parallel approach. Since even complicated applications are composed from the same fixed set of parallel paradigms, the compiler always reasons about that set. This means that it is able to immediately recognize sets of homogeneous processes inside a parallel application. For instance, consider a farm. The process classification can be done by looking at the classical structure of the farm: all workers belong to the same class because they are \textit{replicated} processes, while the emitter and the collector each belong to a class of their own. The same holds for all the other paradigms used in a structured approach.

In this way we are able to classify processes by looking at the structure of the parallel application. This also means that the internal structure of a process does not matter, because we can distinguish among the processes of a parallel application just by looking at the paradigm used. On the one hand, this approach reduces the complexity of analysing processes; on the other hand, the classification is made on a basis different from that of the client-server model, i.e. it does not take into account the fundamental parameter $T_P$. In fact, it may happen that two processes with the same $T_P$ fill different roles in a parallel application, and thus belong to different classes. This might seem a problem but, again, when a structured approach to parallel programming is followed, such situations are rare and, when they do occur, their impact on the resolution can be considered negligible.

It is worthwhile to notice that how to determine the $T_P$ parameter is a different topic, which we treat in depth in the following. Recall that up to now we have considered only processes characterized by a unique $T_P$, easily determined by profiling. Of course, this does not hold in all cases, and we will see why.

At this point, we have an inexpensive way to recognize when processes differ, in order to decide whether or not to model them as heterogeneous clients. Apparently, this classification does not seem to introduce any further advantage: if a process is modelled as a client with a certain service time, i.e. its $T_P$, two processes with the same $T_P$ will be modelled as equal clients regardless of their class. To see the further advantage, we have to focus on the formalism in which clients are represented in the various resolution techniques.

For instance, consider how clients are represented in PEPA. In general, each client is a component with its own definition. If we are able to recognize when processes are equal, we can decrease the number of such definitions, with a potential benefit in terms of resolution. Suppose we have classified $n$ processes in the same class $C$:

\[
C = \lbrace p_1, \cdots, p_n\rbrace
\]

We know that processes in the same class should be modelled in the same way, so we can define an \textit{aggregate} client component $P$ that holds for all the $n$ processes, instead of having $n$ distinct (but equal) definitions. This approach decreases the size and the complexity of the generated Markov chain, with global benefits in terms of resolution. As already mentioned, it reduces the \textit{state-space explosion} in a way orthogonal to the techniques introduced in Section~\ref{pepaconsiderations}. We will see how to apply this theory in the next section, when we discuss the client-server model with heterogeneous clients.

\subsection{Process Phases}
\label{procphases}

Up to now we have considered processes executing a single computational phase characterized by a certain $T_P$. This model does not properly reflect the behaviour of processes. Structured parallel applications have the further property of being composed of processes that alternate computational phases with communication. A very common example is a process starting with a computational phase (the so-called $think$ period) followed by an inter-process communication (for instance a \textit{send}). As already said, the difference between the $T_P$ of computational phases and the $T_P$ of communication phases can be as large as an order of magnitude.

Our main intent is to understand how phases may impact the derivation of $T_P$. One way to find the $T_P$ parameter of a process is by inspection of the sequential code. We have:

\begin{equation}
\label{TP}
T_P = \frac{T_c}{f}
\end{equation}

where

\begin{itemize}
\item $T_c$ is the completion time of the \textit{sequential} version. It takes into account all the base latencies, e.g. of the interconnection structures, and the various stall times, e.g. bubbles in the CPU pipeline
\item $f$ is the number of faults of the last memory support \textit{exclusively private} to a processing node, i.e. the one immediately before the shared memory hierarchy
\end{itemize}

It is important to notice that this technique is valid when a process is (practically) always working: if the efficiency tends to one, there are no \textit{stopping} periods. Of course, in parallel applications processes are not always working. It may happen that the emitter of a farm, or the processes implementing a \textit{reduce} in a map-reduce, do not have an efficiency that tends to one. According to the definition of module efficiency in~\cite{ASE}, we have:

\begin{equation}
\label{eff}
\xi = \frac{T_{S_{id}}}{T_S}
\end{equation}

where:
\begin{itemize}
\item $T_{S_{id}}$ is the \textit{ideal} service time of the module. For simplicity, if we consider the emitter of a farm, its ideal service time is just the time to execute a send (the rest of its behaviour, e.g. receiving and updating the state of the workers in case the farm operates on demand, can be considered negligible).
\item $T_S$ is the \textit{effective} service time of the module. We know that it can be found as \[T_S=\max \lbrace T_A,\ T_{S_{id}} \rbrace\]
\end{itemize}

At this point, the efficiency of the emitter can be rewritten as:

\[
\xi = \frac{T_{send}}{T_A}
\]

According to the theory, we want the emitter and the workers not to be bottlenecks, so that all requests (each one arriving every $T_A$) are satisfied.

So, considering an optimal parallelism degree and a proper design of the emitter, we have:

\[
\frac{T_{send}}{T_A} < 1 \Longleftrightarrow T_{send} < T_A
\]

The direct consequence of this design is that, on average, the emitter interleaves \textit{send} primitives with \textit{stopping} periods in which it waits for new incoming requests. During the latter no code is executed, so the mean time between two consecutive requests cannot be derived by looking only at the code of the \textit{send}: the stopping period must be taken into account as well. These situations recur frequently in structured parallel applications and motivate solutions where phases are considered explicitly. For this reason, we will give a first attempt to model processes that are not always working after the following treatment of phases.

\section{How to deal with more Phases}

As already mentioned, problems in the derivation of $T_P$ can arise if processes show periods with sudden changes in memory request frequency (the so-called bursts). If a computational phase with a certain $T_{P_1}$ is followed by another one characterized by $T_{P_2}$, the overall completion time $T_c$ can be rewritten as the sum of the completion times $T_{c_1}$ and $T_{c_2}$ of the two phases. Inverting Formula~\ref{TP}, we have that

\[
T_{c_1} = f_1\cdot T_{P_1}
\]
\[
T_{c_2} = f_2\cdot T_{P_2}
\]

where $f_1+f_2=f$. At the end, the overall $T_P$ will be a weighted average based on the number of faults per phase:

\begin{equation}
\label{tp2fasi}
T_P = \frac{T_c}{f} = \frac{f_1 T_{P_1} + f_2 T_{P_2}}{f_1+f_2}
\end{equation}

Since $T_P$ is a value expressed in clock cycles, we simply take the integer part to be correct from a logical point of view.
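As a concrete illustration, the computation of Formula~\ref{tp2fasi} can be sketched as follows. The fault counts and per-phase $T_P$ values below are purely illustrative assumptions, not taken from the test cases of this chapter.

```python
# Sketch of the weighted-average T_P of Formula (tp2fasi).
# Each phase is a (faults, T_P) pair: 'faults' is the number of memory
# faults f_i counted in that phase, T_P its mean time between two
# consecutive memory requests, in clock cycles. Values are hypothetical.
def weighted_tp(phases):
    total_faults = sum(f for f, _ in phases)   # f = f_1 + f_2 + ...
    tc = sum(f * tp for f, tp in phases)       # T_c = sum of f_i * T_P_i
    return tc // total_faults                  # integer part, in clock cycles

# A slow 'think' phase and a fast 'send' phase, an order of magnitude apart.
print(weighted_tp([(100, 400), (300, 20)]))  # -> 115
```

Note how the fast phase dominates the average when it accounts for most of the faults, even though it is much shorter per request.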

In our studies, this was the best $T_P$ evaluation available when several phases are involved. The accuracy of all the resolution techniques presented in the previous chapters worsens if processes exhibit several phases. Before showing the results and our efforts to improve the accuracy, we want to clarify the concept of process phase as applied in this context.

As already said informally, a phase of a process is a lapse of time characterized by a certain $T_P$. At first sight, one might think that there exist many possible levels of detail in determining the beginning and the end of a phase. For instance, a phase could be the entire process life cycle, or just the time between two consecutive memory requests. It is important to stress that we take phases into account in order to be more precise, but a complete and detailed treatment is not necessary, because we recall that
\begin{enumerate}
\item our cost model is based on average values
\item significant variations in $T_P$ values can be recognized only between computational and communication phases.
\end{enumerate}

Given that, we again need a way to establish phase boundaries. A compiler can easily find them by relying on some higher-level aspects, e.g. well-known software boundaries. For instance, every time a process invokes a \textit{send} primitive, we can consider the lapse of time needed to execute it as a phase.

\begin{de}
A process phase is a lapse of time characterized by a certain mean time between two consecutive memory requests and by well recognizable software boundaries.
\end{de}

The above definition puts some constraints but does not cover all cases. For instance, $T_P$ may change inside a computational phase, e.g. when a certain function is followed by a quite different one. In spite of this, we remark that substantial differences among the $T_P$ of computational phases are not present, so the achieved level of detail is a sufficiently good starting point.

\section{Process Phases Modelling}
\label{testmp}

Having the theory to recognize process phases, we can evaluate their impact on the model. In this section we report:

\begin{itemize}
\item firstly, a brief summary of the tests carried out on the impact of process phases modelled according to Formula~\ref{tp2fasi}. It is worthwhile to recall that in this solution the $T_P$ derivation is made by means of a weighted average based on the number of faults.
\item successively, different ways to deal with process phases. For each solution we show results and comparisons with the previous versions.
\end{itemize}

Before doing that, we want to explain the test case. First of all, we have reproduced by simulation the behaviour of processes exhibiting two different phases, i.e. a computational one (called $think$) followed by an inter-process communication ($send$), on a shared memory architecture. We have done so using an architecture simulator developed by our research group at the University of Pisa.

The test case is the following:

\begin{itemize}
\item the number of processes in execution, on the same number of processing nodes, is fixed to $p = 16$.
\item the average memory macro-module service time is $T_S = 29 \tau$. This value is typical of DRAM2 memories, as already mentioned in Section~\ref{results-het}. We assume it exponentially distributed, as usual.
\item the mean time between two consecutive memory requests during the $think$ phase ($T_{P_t}$) is the degree of freedom and takes values in the range $[200\tau-800\tau]$. In the $send$ phase, instead, $T_{P_s}$ is fixed to $20\tau$. Notice that the difference between $T_{P_t}$ and $T_{P_s}$ is an order of magnitude.
\end{itemize}

\subsection{Phases by means of a Weighted Average Value}
We recall that in this technique the value of $T_P$ is found according to Formula~\ref{tp2fasi}. The under-load memory access time found by simulation has been compared with
\begin{itemize}
\item the one found by the classical analytical resolution of the client-server model reported in Section~\ref{csmodel}
\item the one obtained with the numerical resolution via PEPA explained in Section~\ref{cs-definition-pepa}
\end{itemize}

Figure~\ref{rq2tp} shows all the $R_Q$ curves, i.e. SIMULATOR, CS and PEPA-WA (WA stands for weighted average), while Figure~\ref{err2tp} shows the absolute and the relative error, respectively.

\begin{figure}[h]    

    \centering
    \includegraphics[]{Grafici-Tesi/2TP/rq2tp}
    \caption{Under-load Memory Access Latency.}
	\label{rq2tp}
\end{figure}

\begin{figure}[h]    
  	
  	\centering
    \subfloat[Absolute Error.]{\includegraphics[]{Grafici-Tesi/2TP/abserr2tp}}
    
    \centering
    \subfloat[Relative Error.]{\includegraphics[]{Grafici-Tesi/2TP/relerr2tp}}   
    
    \caption{Errors with respect to the simulation.}
    \label{err2tp}
\end{figure}
\clearpage

\paragraph{Comments}
First of all, we notice that for lower $T_P$ the numerical resolution is better than the analytical one, while the opposite holds for higher $T_P$. Both resolutions show a maximum relative error of around $10\%-15\%$ in the range $T_P=[300\tau-700\tau]$, while at the extreme $T_P$ values the relative error is larger. In any case, we would like it to be lower over the whole range.

We tried to model the impact of phases in other ways, in order to reduce the gap between the under-load memory access latency of at least one resolution technique and the simulation. In the next subsections we present these techniques.

\subsection{Explicit Phases}

The first idea is to model process phases explicitly, as shown in Figure~\ref{explicit-phases}. This is exactly what happens to processes: different phases, each issuing memory requests at its own rate, interleave with one another. The idea is to capture the frequency with which a process passes from one generic phase to another.

\begin{figure}[h]
    \begin{center}
        \includegraphics[]{Images/explicit-phases}
    \end{center}
    \caption{The behaviour of a client exploiting a $think$ phase followed by a $send$ one.}
    \label{explicit-phases}
\end{figure}

An example of how this can be achieved in PEPA is reported below. Basically, we change the definition of the client so that it is also possible to pass from one phase to another.

\begin{displaymath}
	\begin{array}{rcl}
		\mathit{C_{think}} & \rmdef & (\mathit{request},\mathit{r_{request_t}}).\mathit{C_{wait_t}}+(\mathit{send},\mathit{r_{ts}}).\mathit{C_{send}}\\
		\mathit{C_{send}} & \rmdef & (\mathit{request},\mathit{r_{request_s}}).\mathit{C_{wait_s}}+(\mathit{think},\mathit{r_{st}}).\mathit{C_{think}}\\
		\mathit{C_{wait_t}} & \rmdef & (\mathit{reply},\top).\mathit{C_{think}}\\
		\mathit{C_{wait_s}} & \rmdef & (\mathit{reply},\top).\mathit{C_{send}}\\
	\end{array}
\end{displaymath}

\ \\Assuming the two phases involved in the figure above, the frequencies $r_{ts}$ and $r_{st}$ are easily found by inverting their periods:

\[
r_{ts} = \frac{1}{f_t\cdot T_{P_t}}
\]

\begin{equation}
\label{rates}
r_{st} = \frac{1}{f_s\cdot T_{P_s}}
\end{equation}

This approach, however, has a problem. As already explained above, we make estimations based on the inspection of the sequential code, so the length of the phase periods is evaluated considering only the number $f$ of faults and the mean time between two consecutive memory requests ($T_P$). The length of a phase is crucial in our treatment because it directly influences the rate of some actions. Of course, problems arise when several processes are in execution, because of the impact of the under-load memory access time.

Consequently, the rate estimations~\ref{rates}, which are found statically, can differ a lot from the effective ones. Phase periods should therefore not be evaluated considering only the sequential version, i.e. $f\cdot T_P$, but must also include the impact of $R_Q$. This can be done in various ways; one solution is to adopt an iterative approach. However, we have to keep in mind that we want to keep the complexity of finding $R_Q$ low, so we prefer to introduce an approximation rather than resorting to complex procedures. Therefore, the length of phases can be estimated using the \textit{base} memory access latency $t_{a0}$, which can easily be derived at compilation time. Accordingly, we set the rates in the following way:

\[
r_{ts} = \frac{1}{f_t\cdot (T_{P_t} + t_{a0})}
\]
\begin{equation}
\label{ratephases}
r_{st} = \frac{1}{f_s\cdot (T_{P_s} + t_{a0})}
\end{equation}
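A minimal numeric sketch of the rate estimation of Equations~\ref{ratephases}: the rate of leaving a phase is the inverse of its estimated length $f\cdot(T_P + t_{a0})$. All numeric values below are illustrative assumptions.

```python
# Rate of leaving a phase: inverse of its estimated length f * (T_P + t_a0).
# faults, tp and ta0 are hypothetical values in clock cycles.
def phase_rate(faults, tp, ta0):
    return 1.0 / (faults * (tp + ta0))

ta0 = 50                            # assumed base memory access latency
r_ts = phase_rate(100, 400, ta0)    # think -> send transition rate
r_st = phase_rate(300, 20, ta0)     # send -> think transition rate
print(r_ts, r_st)
```

Note that $t_{a0}$ lengthens each phase estimate, lowering both rates with respect to the purely static estimate of Equations~\ref{rates}.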

As already explained in Section~\ref{latencies}, $t_{a0}$ is basically the sum of two terms: the latency $L_s$ of the server when no conflicts are taken into account, and the base network latency $T_{net} = T_{req} + T_{resp}$:

\[
t_{a0} = L_s + T_{net}
\]

The idea is to substitute the base latency $L_s$ of the server with the under-load memory access latency $R_{Q_{server}}$ found with PEPA, so as to obtain $R_Q$ instead of $t_{a0}$:

\[
R_Q = R_{Q_{server}} + T_{net}
\]

\paragraph{Results and comments}

Figure~\ref{rq2tp2} shows the under-load memory access latency $R_Q$ of the simulation (SIMULATOR), the previous version (PEPA-WA) and the new version with explicit phases (PEPA-EP). Figure~\ref{err-pepa2-2tp} shows the absolute and relative error. 

\begin{figure}[h]    

    \centering
    \includegraphics[]{Grafici-Tesi/2TP/rq2tp2}
    \caption{Under-load Memory Access Latency.}
	\label{rq2tp2}
\end{figure}

\begin{figure}[h]    
  	
  	\centering
    \subfloat[Absolute Error]{\includegraphics[]{Grafici-Tesi/2TP/abserr2tp2}}
    
    \centering
    \subfloat[Relative Error]{\includegraphics[]{Grafici-Tesi/2TP/relerr2tp2}}   
    
    \caption{Errors against the simulation of both numerical resolutions}
    \label{err-pepa2-2tp}
\end{figure}
\clearpage

We can see that the relative error (we call it $\phi$) of the new version (PEPA-EP) is higher than that of the previous version (PEPA-WA). This is due to the approximation that we introduced in the evaluation of the phase periods. Accordingly, we can also state that the error depends on the difference between $t_{a0}$ and $R_Q$, hence it grows as their ratio grows:

\[
\frac{R_Q}{t_{a0}}\uparrow \Longrightarrow \phi\uparrow
\]

However, the error of this new version is almost constant over the whole range, no longer showing the large variations seen before.

The potential advantage of this technique is that we believe it could be used to model processes that are not always working. In fact, it is sufficient to pass from the $think$ phase to a $stopping$ one in which no memory requests are generated. It is important to notice that the accuracy still depends on the ratio $\frac{R_Q}{t_{a0}}$.

How this can be achieved in PEPA is shown in the following code:

\begin{displaymath}
	\begin{array}{rcl}
		\mathit{C_{think}} & \rmdef & (\mathit{request},\mathit{r_{request_t}}).\mathit{C_{wait_t}}+(\mathit{stop},\mathit{r_{ts}}).\mathit{C_{stop}}\\
		\mathit{C_{stop}} & \rmdef & (\mathit{think},\mathit{r_{st}}).\mathit{C_{think}}\\
		\mathit{C_{wait_t}} & \rmdef & (\mathit{reply},\top).\mathit{C_{think}}\\
	\end{array}
\end{displaymath}

\ \\First of all, we recall that processes with efficiency $\xi<1$ are not bottlenecks, so their effective service time $T_S$ is equal to the inter-arrival time $T_A$. Thanks to the structured parallel approach, we are able to estimate the effective service time of a process by looking at $T_A$, which is a well-known input parameter. Considering that, the rates to switch from one generic phase to another should be set in this way:

\[
r_{ts} = \frac{1}{T_{s_{id}}} = \frac{1}{f_t\cdot (T_{P_t}+t_{a0})}
\]
\[
r_{st} = \frac{1}{T_S - T_{s_{id}}} = \frac{1}{T_A - T_{s_{id}}}
\]

\ \\If we apply the formulas above to the example of Section~\ref{procphases}, we obtain the following parameters for the emitter:

\[
T_{s_{id}} \simeq T_{send} \Longrightarrow \begin{cases} r_{ts}=\frac{1}{T_{send}} \\ r_{st}=\frac{1}{T_A-T_{send}} \end{cases}
\]
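The emitter parameter setting above can be sketched as follows; the values of $T_{send}$ and $T_A$ are hypothetical, chosen only to show the computation.

```python
# Rates for an emitter alternating send periods and stopping periods.
# By design the emitter is not a bottleneck, so T_send < T_A must hold.
# The numeric values are hypothetical, in clock cycles.
def emitter_rates(t_send, t_a):
    assert t_send < t_a, "emitter must not be a bottleneck"
    r_ts = 1.0 / t_send          # rate of completing a send (enter stopping)
    r_st = 1.0 / (t_a - t_send)  # rate of a new request arriving (resume work)
    return r_ts, r_st

r_ts, r_st = emitter_rates(200.0, 1000.0)
print(r_ts, r_st)
```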

Unfortunately, this scenario has not been simulated yet, so we have no results for this hypothesis, but it certainly belongs to future work.

\subsection{Phases by means of Average Clients}

Another way to deal with several phases is to estimate how many processes are, on average, in a certain phase. Consider the test case introduced in Section~\ref{testmp}, with a number of processes equal to $p$. If we are able to recognize that, on average, $p_t$ processes are in the $think$ phase and $p_s$ processes are in the $send$ phase, we can instantiate the client-server model with heterogeneous clients (see Section~\ref{hcpepa}) where $p_t$ clients have $T_{P_t}$ as service time while the other $p_s$ have $T_{P_s}$.

The problem lies in how to evaluate the average number of clients in a certain phase. The way to do so is to consider again the lengths of the phases, so as to find their ratio $r$. Then $r$ can be used to evaluate $p_t$ and $p_s$ as

\[
p_s + r\cdot p_s = p \Longrightarrow p_s = \frac{p}{1+r}
\]
\[
p_t = r\cdot p_s
\]

The problem is again finding the right value of $r$, because it depends on the lengths of the phases. The solution is to consider the base memory access latency $t_{a0}$ once more:

\[
r=\frac{f_t\cdot (T_{P_t} + t_{a0})}{f_s\cdot (T_{P_s}+t_{a0})}
\]

It is important to say that this approach introduces a further approximation. In fact, since $p_s$ and $p_t$ are average numbers of processes in the \textit{send} or in the $think$ phase, they must be integers in order to be used in the client-server model with heterogeneous clients. We therefore round the values found to the nearest integers. Of course, this approximation worsens the accuracy of the technique with respect to the previous one.
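The whole estimate can be sketched numerically as follows; the fault counts, per-phase $T_P$ values and $t_{a0}$ are illustrative assumptions, not measured values.

```python
# Split p clients between 'think' and 'send' using the phase-length
# ratio r = f_t*(T_Pt + t_a0) / (f_s*(T_Ps + t_a0)), then round to
# integers so that the two counts still sum to p. Values are hypothetical.
def avg_clients(p, ft, tpt, fs, tps, ta0):
    r = (ft * (tpt + ta0)) / (fs * (tps + ta0))
    ps = p / (1 + r)           # p_s = p / (1 + r)
    ps_int = round(ps)         # nearest integer, as required by the model
    return p - ps_int, ps_int  # (p_t, p_s), preserving the total p

print(avg_clients(16, 100, 400, 300, 20, 50))  # -> (11, 5)
```

Rounding $p_s$ and taking $p_t = p - p_s$ keeps the total population equal to $p$, which would not be guaranteed by rounding the two values independently.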

\subsection{Explicit Phases with Average Clients}

This solution is a mix of the previous versions. The idea is to use explicit phases in order to determine the average number $p_i$ of clients in a certain phase $i$. Once we have found these values, we calculate $R_Q$ as a weighted average over the number of clients $p$:

\begin{equation}
R_Q = \frac{\sum_i R_{Q_i}\cdot p_i}{p}
\label{rqmedio}
\end{equation}
 
Each phase $i$ has its own $R_{Q_i}$ which, as such, can be evaluated as usual by solving the client-server model considering that phase only. At this point, we know all the $R_{Q_i}$ and we have to evaluate the average number $p_i$ of clients for each phase $i$. We recall that the frequency of passing from a generic phase $i$ to another phase $j$ depends on $R_{Q_i}$ and that, following the approach explained in the Explicit Phases technique, phases can be modelled as explicit ones that interleave with one another. If we are able to model all phases with the associated transition frequencies, the number of clients in a certain phase is given by the population of that phase in the steady-state condition of the system.

We clarify this way of operating with a simple example. We consider only two phases: $think$ and $send$. The former is characterized by a certain $T_{P_t}$, while the latter by $T_{P_s}$. First of all, we apply the classical client-server model in order to derive $R_{Q_t}$ and $R_{Q_s}$, respectively.

At this point we use $R_{Q_t}$ and $R_{Q_s}$ \textit{as input} to a new system composed of $p$ modules, each one characterized by $think$ and $send$ states (Figure~\ref{final-solution}). Of course, the frequencies between phases are no longer influenced by $t_{a0}$ as in Equations~\ref{ratephases}, but are set in this way:

\[
r_{ts} = \frac{1}{f_t\cdot (T_{P_t} + R_{Q_t})}
\]
\[
r_{st} = \frac{1}{f_s\cdot (T_{P_s} + R_{Q_s})}
\]

\begin{figure}[h]
    \begin{center}
        \includegraphics[]{Images/final-solution}
    \end{center}
    \caption{Explicit behaviour of a process.}
    \label{final-solution}
\end{figure}

Once we have found the steady-state solution of the new system, the population in the $think$ state is the average number $p_t$ of processes in that state. The same holds for the $send$ state. At this point we rewrite Equation~\ref{rqmedio} for this specific case, obtaining $R_Q$:

\[
R_Q = \frac{R_{Q_t}\cdot p_t + R_{Q_s}\cdot p_s}{p_t + p_s}
\]
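The final combination step can be sketched as follows; the per-phase latencies and the steady-state populations below are hypothetical placeholders for values obtained from the client-server model and from the two-state system.

```python
# R_Q as a weighted average of the per-phase R_Q_i over the steady-state
# phase populations p_i (Equation rqmedio). All values are hypothetical.
def weighted_rq(rq_per_phase, clients_per_phase):
    total = sum(clients_per_phase)
    return sum(rq * p for rq, p in zip(rq_per_phase, clients_per_phase)) / total

# Assumed R_Q_t = 120, R_Q_s = 60 and populations p_t = 11, p_s = 5:
print(weighted_rq([120.0, 60.0], [11, 5]))  # -> 101.25
```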

\ \\This technique can easily be adopted with PEPA. We have already seen in Section~\ref{cs-definition-pepa} how to define a classical client-server model in this formalism. The code below simply defines the new system composed of $p$ components, each one characterized by a $think$-$send$ behaviour.

\begin{displaymath}
	\begin{array}{rcl}
		\mathit{C_{think}} & \rmdef & (\mathit{send},\mathit{r_{ts}}).\mathit{C_{send}}\\
		\mathit{C_{send}} & \rmdef & (\mathit{think},\mathit{r_{st}}).\mathit{C_{think}}\\ \\
		
		\multicolumn{3}{l}{\mathit{C_{think}}[p]}\\
\end{array}
\end{displaymath}

\paragraph{Results and comments}

Figure~\ref{rq2tp3} shows the under-load memory access latency $R_Q$ of the simulation (SIMULATOR), the versions PEPA-WA and PEPA-EP and the final version PEPA-EPAC. Figure~\ref{err-pepa3-2tp} shows the absolute and relative error.

\begin{figure}[h]    

    \centering
    \includegraphics[]{Grafici-Tesi/2TP/rq2tp3}
    \caption{Under-load Memory Access Latency.}
	\label{rq2tp3}
\end{figure}

\begin{figure}[h]    
  	
  	\centering
    \subfloat[Absolute Error]{\includegraphics[]{Grafici-Tesi/2TP/abserr2tp3}}
    
    \centering
    \subfloat[Relative Error]{\includegraphics[]{Grafici-Tesi/2TP/relerr2tp3}}   
    
    \caption{Errors against the simulation of numerical resolutions}
    \label{err-pepa3-2tp}
\end{figure}
\clearpage

This final solution gives a general improvement in accuracy, since the relative error never exceeds $10\%$. Furthermore, it can also be used to model processes that are not always working. In fact, it is sufficient to evaluate $R_{Q_t}$ and then set the rates as

\[
r_{ts} = \frac{1}{T_{s_{id}}}=\frac{1}{f_t\cdot (T_{p_t}+R_{Q_t})}
\]
\[
r_{st} = \frac{1}{T_A - T_{s_{id}}}
\]

Summarizing, this new version exhibits better accuracy than PEPA-WA and succeeds in modelling processes that are not always working, as PEPA-EP does.

\subsection{Comments}

We have formalized the concept of process class in order to be able to distinguish among processes without doubt and to introduce optimizations that reduce the resolution complexity. A first example of how these definitions are applied is shown in the next section.

We have seen that process classification should be made taking into account the mean time between two consecutive memory requests, i.e. $T_P$, because this parameter is at the base of the client-server model; however, this is not possible without increasing the analysis complexity. Therefore, process classification is made by looking at the structure of the parallel application.

Then, we explained how $R_Q$ can also be derived when processes show a complex internal behaviour, i.e. the so-called phases. To this end, a definition of process phase has been formalized, and some techniques to deal with phases have been proposed and analysed.

\section{Heterogeneous Clients in PEPA}
\label{hcpepa}

In this section we show how to model in PEPA a parallel application running on a shared memory architecture and composed of processes with different $T_P$. We recall from Section~\ref{newcliserv} that an example could be functional partitioning with independent workers. Of course, heterogeneous clients must be involved in order to model processes with different $T_P$.

\subsection{Definition}

The PEPA program taking heterogeneity into account is shown below. Thanks to the compositional approach of PEPA, we can directly reuse the same server component and the definition of a generic client already present in Section~\ref{cs-definition-pepa}. A generic client has the same behaviour as before, which implies that it is unnecessary to add further activities apart from the already used \textit{request} and \textit{reply}. As a consequence of this structured approach, the cooperation set in the last expression also remains the same.

Of course, a change occurs in the number of client definitions. In fact, we want to apply the theory seen above in order to recognize $C$ classes of processes. According to the theory, we do not want one definition per client: in order to keep the resolution complexity low, there must be a number of client definitions equal to $C$. Every definition has its own request rate, which is peculiar to the given class. This rate is the inverse of the $T_P$ characterizing the class and is found according to the techniques explained in the previous section, i.e. by profiling in the easiest cases or by using the explicit-phases technique. The last expression of the program defines the overall system, in which clients of the $C$ classes run in parallel, synchronizing with the server. Each class of clients specifies the number of clients belonging to that class.

\begin{displaymath}
\label{pepa-program-mc}
	\begin{array}{rcl}
	\mathit{Client_{1_{think}}} & \rmdef & (\mathit{request},\mathit{r_{request_1}}).\mathit{Client_{1_{wait}}}\\
	\mathit{Client_{1_{wait}}} & \rmdef & (\mathit{reply},\top).\mathit{Client_{1_{think}}}\\ \\
	...\\ \\
	\mathit{Client_{C_{think}}} & \rmdef & (\mathit{request},\mathit{r_{request_C}}).\mathit{Client_{C_{wait}}}\\
	\mathit{Client_{C_{wait}}} & \rmdef & (\mathit{reply},\top).\mathit{Client_{C_{think}}}\\ \\
	\mathit{Server} & \rmdef & (\mathit{request},\top).\mathit{Server}+(\mathit{reply},\mathit{r_{reply}}).\mathit{Server}\\ \\
	\multicolumn{3}{l}{\mathit{Client_{1_{think}}}[p_1]\parallel ... \parallel\mathit{Client_{C_{think}}}[p_C]\sync{request,reply}\mathit{Server}[1.0]}\\
	\end{array}
\end{displaymath}
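As a cross-check of the program above, the underlying CTMC can also be built and solved directly. The following Python sketch enumerates the states $(n_1,\dots,n_C)$, where $n_i$ counts the waiting clients of class $i$, and solves the global balance equations numerically. It assumes, consistently with passive clients in the PEPA cooperation, that the reply rate $r_{reply}$ is split among waiting clients proportionally to $n_i$; function and variable names are ours and not part of the PEPA tool chain.

```python
import itertools
import numpy as np

def solve_hetero_cs(p, T_P, r_reply):
    """Steady-state distribution of the heterogeneous client-server CTMC.

    p[i]    : number of clients of class i
    T_P[i]  : mean think time of class i (request rate = 1 / T_P[i])
    r_reply : rate of the single memory server

    State = (n_1, ..., n_C), n_i = waiting clients of class i.
    """
    C = len(p)
    states = list(itertools.product(*[range(pi + 1) for pi in p]))
    index = {s: k for k, s in enumerate(states)}
    Q = np.zeros((len(states), len(states)))
    for s in states:
        k = index[s]
        n_tot = sum(s)
        for i in range(C):
            if s[i] < p[i]:
                # a thinking class-i client issues a request
                t = list(s); t[i] += 1
                Q[k, index[tuple(t)]] += (p[i] - s[i]) / T_P[i]
            if s[i] > 0:
                # the server replies to one waiting class-i client;
                # r_reply is shared among the n_tot waiting clients
                t = list(s); t[i] -= 1
                Q[k, index[tuple(t)]] += r_reply * s[i] / n_tot
    np.fill_diagonal(Q, -Q.sum(axis=1))
    # solve pi * Q = 0 together with the normalization sum(pi) = 1
    A = np.vstack([Q.T, np.ones(len(states))])
    b = np.zeros(len(states) + 1); b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return states, pi
```

For instance, with a single class of one client and $T_P = 1/r_{reply}$ the chain reduces to two equally likely states, as expected; larger instances can then be used to extract the per-class waiting populations $p_{wait_i}$.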

\subsection{Quantitative Comparison with respect to other Resolution Techniques}
\paragraph{Model Resolution}

Following the procedure in~\ref{pepa-resolution}, we have to find $R_{Q_{server}}$ in order to evaluate the under-load memory access latency. Having more \textit{wait} states, i.e. one per client definition, the total average number of waiting clients is the sum over all classes, and $R_{Q_{server}}$ follows by Little's law as

\begin{equation}
\label{rqserver}
R_{Q_{server}} = \sum_{i=1}^c \frac{p_{wait_i}}{\lambda_{reply}} = \frac{\sum_{i=1}^c p_{wait_i}}{\lambda_{reply}}
\end{equation}

where $p_{wait_i}$ is the average number of clients in the state $Client_{i_{wait}}$ when the system is in steady state, and $\lambda_{reply}$ is the throughput of the \textit{reply} activity. It is then sufficient to add the base network latencies of the request and of the reply to obtain the under-load memory access latency as usual:

\begin{equation}
R_Q = T_{req} + R_{Q_{server}} + T_{resp}
\end{equation}
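The two formulas above translate directly into code. The following sketch is a minimal illustration: the steady-state populations $p_{wait_i}$, the reply throughput $\lambda_{reply}$ and the network latencies used in the example are hypothetical values, which in practice come from the PEPA resolution.

```python
def under_load_latency(p_wait, lam_reply, T_req, T_resp):
    """R_Q = T_req + R_Q_server + T_resp, with
    R_Q_server = (sum_i p_wait_i) / lambda_reply  (Little's law)."""
    R_Q_server = sum(p_wait) / lam_reply
    return T_req + R_Q_server + T_resp

# hypothetical values (times in tau): three client classes with
# average waiting populations 0.8, 0.5 and 0.1, reply throughput
# 0.02 replies per tau, and 10 tau of network latency each way
R_Q = under_load_latency([0.8, 0.5, 0.1], 0.02, 10.0, 10.0)
```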

\paragraph{Results}

We tested and compared the accuracy of this resolution technique against the results found for the test case defined in Section~\ref{results-het}. We briefly report the features of the scenario:
\begin{itemize}
\item the number of clients is fixed to $p = 16$: seven of them have a certain service time $T_{P_1}$, another seven have $T_{P_2}$, while the last two have a fixed $T_{P}=100 \tau$. The idea is to simulate a functional partitioning with independent workers and two service processes, i.e. a dispatcher and a gather. We recall that processes exhibit only one phase, in analogy to the example in Section~\ref{results-het}.
\item all service times are exponentially distributed. Since $p$ is fixed, the service times of the clients are the degrees of freedom. In particular, in each test $T_{P_1}$ is fixed to a certain value chosen in the range $[100 \tau, 800 \tau]$, while $T_{P_2}$ varies in the same range, so that results can be obtained for different load states of the server.
\end{itemize}

The graphs in Figures~\ref{rq-pepa-mc} and~\ref{rq2-pepa-mc} show the behaviour of $R_Q$ as $T_{P_2}$ varies, with $T_{P_1}$ fixed respectively to $100\tau$, $300\tau$ and $500\tau$. In particular, the graphs show the $R_Q$ shape of the JMT simulation (SIMULATOR), of PEPA and of the analytical resolution introduced in Section~\ref{results-het} (CS). Then, the absolute and relative errors of the PEPA and CS approaches against the results of the simulation are shown and compared.

As already explained in the comments of Section~\ref{results-het}, the CS resolution exhibits in general a gap in the $R_Q$ shape with respect to that of the simulation. This phenomenon worsens when heterogeneous clients are involved, especially if the service times of different clients differ greatly. On the other hand, we pointed out in the previous chapter that the PEPA approach does not introduce errors when homogeneous clients are involved.

On the basis of the results presented here, we can conclude that the PEPA approach does not suffer in accuracy even when heterogeneous clients are introduced. In fact, the maximum relative error in all graphs is below $2\%$, which is the threshold that we found for homogeneous clients in PEPA.

\begin{figure}[h]    
  	
  	\centering
    \subfloat[$T_{P_1} = 100 \tau$]{\includegraphics[]{Grafici-Tesi/MC/rqpepa100}}
    
    \centering
    \subfloat[$T_{P_1} = 300 \tau$]{\includegraphics[]{Grafici-Tesi/MC/rqpepa300}}   
    
    \caption{Under-load Memory Access Latency.}
    \label{rq-pepa-mc}
\end{figure}

\begin{figure}[h]    

    \centering
    \subfloat[$T_{P_1} = 500 \tau$]{\includegraphics[]{Grafici-Tesi/MC/rqpepa500}}
    
    \caption{Under-load Memory Access Latency.}
	\label{rq2-pepa-mc}
\end{figure}

\begin{figure}[h]    
    \centering
    \subfloat[$T_{P_1} = 100 \tau$]{\includegraphics[]{Grafici-Tesi/MC/abspepa100}}   

	\centering
    \subfloat[$T_{P_1} = 300 \tau$]{\includegraphics[]{Grafici-Tesi/MC/abspepa300}}   
        
    \caption{Absolute Error.}
    \label{abs1-pepa-mc}
\end{figure}

\begin{figure}[h]    

    \centering
    \subfloat[$T_{P_1} = 500 \tau$]{\includegraphics[]{Grafici-Tesi/MC/abspepa500}}   
    
    \caption{Absolute Error.}
    \label{abs2-pepa-mc}
\end{figure}

\begin{figure}[h]    

    \centering
    \subfloat[$T_{P_1} = 100 \tau$]{ \includegraphics[]{Grafici-Tesi/MC/relpepa100}}   
    
    \centering
    \subfloat[$T_{P_1} = 300 \tau$]{ \includegraphics[]{Grafici-Tesi/MC/relpepa300}}   

	\caption{Relative Error.}
    \label{rel1-pepa-mc}
\end{figure}

\begin{figure}[h]

    \centering
    \subfloat[$T_{P_1} = 500 \tau$]{ \includegraphics[]{Grafici-Tesi/MC/relpepa500}}
    
    \caption{Relative Error.}
    \label{rel2-pepa-mc}
\end{figure}
\clearpage

\section{Conclusion}

At the basis of the client-server model there is the evaluation of the time between two consecutive memory requests generated by a process in execution on a processing node, i.e. the so-called $T_P$. This derivation is a crucial point in order to obtain accuracy from the involved resolution techniques. In fact, we have seen that good $T_P$ derivations bring results practically identical to the ones obtained by simulation, at least for numerical resolution techniques.

In this sense, we can conclude that the major impact of parallel applications is due to phases, which complicate the derivation of this fundamental parameter. Consequently, the accuracy worsens even if sophisticated techniques are utilized.