\chapter{Advanced Cost Models: Hierarchical Shared Memory}
\label{gerarchicochap}

Hierarchical shared memory is peculiar to multi-cores, especially if the trend continues in the direction taken so far. In these architectures, more than one level of the memory hierarchy is shared by processing nodes (see Section~\ref{pn}). Therefore, conflicts in accessing shared resources can have a significant impact on the under-load memory access latency, because more queues must be traversed. For instance, if the second level of cache is also shared among all (or only a subset of) the cores, this memory constitutes a first level of queue, while the main memory (which continues to be shared) constitutes a second one. This behaviour can be extended to a general number of hierarchy levels, e.g. the third level of cache could also be shared, and so on. Of course, memory requests will traverse all the queues only in some cases.

Our main goal is to measure the impact of this hierarchy of queues in terms of performance indexes, i.e. the under-load memory access time $R_Q$. Of course, we want to do so according to the methodology followed so far, hence the starting point will be the same: the client-server model.

\section{Hierarchical Client-Server Model with Request-Reply Behaviour}
\label{gerarchico1}
Before explaining how to enhance the classical client-server model in order to deal with shared memory hierarchies, we make some initial assumptions:
\begin{itemize}
\item we consider a shared memory hierarchy composed of a second level of cache $L2$ and a main memory $M$
\item $M$ is shared among all the processing nodes. In the following, we concentrate on a single macro-module that we consider shared among all the processing nodes, and we refer to it as $M$
\item $L2$ is shared among \textit{disjoint} subsets of processing nodes. This means that the same (and fixed) number $n$ of processing nodes access their own $L2$
\item the size of $L2$ is big enough to contain the entire working sets needed by the processes to execute their jobs (obviously, we are considering only processes executing on processing nodes that belong to the same subset, i.e. that share the same $L2$). In this way, we abstract from the impact of block replacement techniques, i.e. the different ways to address caches
\end{itemize}

Briefly, under the above assumptions, we can summarize the behaviour of a parallel application executed on a shared memory architecture in the following way.

A processing node $P$ (executing a process), when a cache fault occurs, generates a memory request $req$ toward the second level of cache $L2$ which, as usual, answers the processing node directly if the requested cache block belongs to it. Otherwise, $req$ is forwarded to $M$, and $L2$ has to wait for the reply from the memory before responding to the processing node. It is worthwhile to notice that, while $L2$ waits for an answer from $M$, it can reply to other requests from processing nodes, provided it is able to.

\ \\In the former case, $P$ has to wait for the time needed for
\begin{itemize}
\item the request $req$ to reach $L2$ travelling an interconnection network $P-C$,
\item the request to stay in the queue in front of $L2$, since $L2$ is shared with other processing nodes,
\item to be serviced by $L2$, i.e. an answer $ans$ is generated and sent toward $P$,
\item the answer $ans$ to reach $P$ travelling again the interconnection network $P-C$.
\end{itemize}

\ \\In the latter case, instead, $P$ regains control after a time that could potentially be much greater, because it is aggravated by the impact of a second queue and because the request has to exit the chip to reach $M$. In the end, we have to add the time needed for
\begin{itemize}
\item $L2$ to generate a request toward the main memory,
\item the request to reach $M$ travelling another interconnection network $C-M$ (not the same as before and possibly off-chip),
\item to stay in the queue in front of $M$,
\item to be serviced by $M$, i.e. a reply $rep$ will be generated and sent back to $L2$
\end{itemize}

This behaviour can be modelled by letting clients still represent processing nodes, with the difference that there are now multiple levels of servers, because we have to model multiple levels of shared memory. A view of the enhanced client-server model taking into account the above assumptions is shown in Figure~\ref{gerarchico}. We notice that every module $S_i,\ i=1,\cdots,m$ is at the same time a server toward its $n$ clients and a client toward its server $S$. It is important to keep in mind that all these aspects can be generalized. In particular, we could have further hierarchical levels, e.g. a third one, or we could specify a different number of clients for each hierarchical level. In other words, we can think of a classical client-server model in which clients are realized following a compositional approach, e.g. a client could itself be an entire client-server model.

\begin{figure}[t]
        \centerline{
               \mbox{\includegraphics[scale=0.6]{Images/gerarchico.pdf}}
        }
        \caption{Hierarchical Client-Server Model with Request-Reply Behaviour}
		\label{gerarchico}
\end{figure}

At this point, we could follow two options:
\begin{enumerate}
\item to extend the analytical resolution summarized in Section~\ref{csmodel} in such a way that the hierarchy is taken into account
\item to use the PEPA formalism directly
\end{enumerate}

In principle, we were tempted by the first solution, but some problems emerged. The major obstacle is the increase in the complexity of the analytical resolution due to the extensions needed to accomplish the goal, so that, in the end, iterative or numerical resolution techniques would have to be involved anyway. Moreover, it is difficult to write the system of equations for a generic number of hierarchy levels. Currently, adapting the classical analytical resolution of the client-server model to deal with a server hierarchy is still an open problem.

Instead, exploiting the compositional property mentioned above for the model enhancement, a solution has been found more easily using PEPA. In the next section, we explain our contribution in this direction.

\subsection{Definition}

In the previous sections, we already mentioned that it is possible to statically determine the number of cache faults that will occur during the execution by inspecting the sequential code. This also gives the frequency with which the processing node generates memory requests, the so-called $r_{request}$ (the inverse of $T_P$). It is worthwhile to note that, when shared memory hierarchies are involved, memory requests can be satisfied at any hierarchical level. So a crucial point is to determine how many requests will be satisfied by a certain hierarchical level rather than another. We are able to do this, again, by profiling.

We know that when a cache fault occurs, a memory request is sent toward the upper memory level. Assuming the architecture described above, the request is sent to the second level of cache $L2$. We can easily check statically whether the requested block will belong to $L2$ or not. If so, we can consider $L2$ able to reply; otherwise the memory request will be forwarded to $M$. This way of reasoning allows us to estimate the number of requests satisfied by each hierarchical level and, in the end, to find the probability that a request is satisfied at a certain hierarchical level rather than another.

For instance, suppose that the number of requests satisfied by $L2$ is $c$ while $m$ is the number of requests satisfied by $M$. Letting $p_c$ be the probability of satisfying a request in $L2$ and $p_m$ the probability of satisfying a request in $M$, we have:

\[
p_c=\frac{c}{c+m}
\]

\[
p_m=\frac{m}{c+m}
\]

\ \\In fact, on average, a fraction $p_c$ of the requests is satisfied by $L2$ while a fraction $p_m$ is satisfied by $M$. At this point, we can use this information in the PEPA program to model a shared memory hierarchy.
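
As a minimal sketch of this step (a hypothetical Python helper, with names of our choosing, not part of the model), the probabilities can be derived from the profiled counts as follows:

```python
def hit_probabilities(c, m):
    """Derive p_c and p_m from the statically profiled counts:
    c requests satisfied by L2, m requests forwarded to and
    satisfied by M."""
    total = c + m
    return c / total, m / total
```

For example, with $c=3$ and $m=1$ this yields $p_c=\frac{3}{4}$ and $p_m=\frac{1}{4}$, the values used in the test case of this chapter.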

The idea is to model processing nodes as clients able to generate requests toward either $L2$ or $M$. In other words, this means having clients that can choose between two different actions, namely $request_c$ and $request_m$. Basically, this can be done in two different ways:
\begin{enumerate}
\item the first solution requires writing explicitly when a certain action must be taken by a client
\item the second exploits the probabilities $p_c$ and $p_m$ to drive the choice between the two actions
\end{enumerate}
Before introducing the adopted solutions in depth, it is worth noting that the differences between the two solutions lie exclusively in how clients are built. The other components of the system, i.e. the one modelling $L2$ and the one modelling $M$, are the same for both solutions. Briefly, the PEPA component modelling the memory is identical to the server already defined in previous chapters. Instead, the second level of cache plays two roles: on one side it acts as a server, while on the other side it is a client. As we will see shortly, this behaviour can be easily captured with PEPA.

\paragraph{First Version of Hierarchical Client-Server Model in PEPA}

We introduce the first solution through an example. For the time being, the parameter values are just constants. During the explanation of the test case, we will motivate the choices of values. Suppose the following scenario:
\begin{itemize}
\item $16$ processing nodes, each one generating memory requests at rate $r_{request}$. For simplicity, we are assuming homogeneous clients characterized by a unique phase. Of course, it is important to remark that the theory in Chapter~\ref{extensions} could be applied.
\item as assumed previously, $M$ is shared among all the processing nodes, while each $L2$ is shared among a disjoint subset of $4$ processing nodes.
\item $p_c=\frac{3}{4}$ and $p_m=\frac{1}{4}$.
\end{itemize}

As mentioned above, we want to state explicitly in the program when a processing node generates a request toward $L2$ (action $request_c$) or toward $M$ (action $request_m$). The code below shows how this can be realized.

\begin{displaymath}
	\begin{array}{rcl}
		\mathit{r_{request}} & = & 1.0/\mathit{T_P}\\ \\
		
		\mathit{P_1} & \rmdef & (\mathit{request_c},\mathit{r_{request}}).\mathit{Pwait_1}\\
		\mathit{Pwait_1} & \rmdef & (\mathit{reply},\top).\mathit{P_2}\\
		\mathit{P_2} & \rmdef & (\mathit{request_c},\mathit{r_{request}}).\mathit{Pwait_2}\\
		\mathit{Pwait_2} & \rmdef & (\mathit{reply},\top).\mathit{P_3}\\
		\mathit{P_3} & \rmdef & (\mathit{request_c},\mathit{r_{request}}).\mathit{Pwait_3}\\
		\mathit{Pwait_3} & \rmdef & (\mathit{reply},\top).\mathit{P_4}\\
		\mathit{P_4} & \rmdef & (\mathit{request_m},\mathit{r_{request}}).\mathit{Pwait_4}\\
		\mathit{Pwait_4} & \rmdef & (\mathit{answer},\top).\mathit{P_1}\\ \\
		
		\mathit{Cache} & \rmdef & (\mathit{request_c},\top).(\mathit{reply},\mathit{r_{cache}}).\mathit{Cache}+(\mathit{request_m},\top).(\mathit{ask},\mathit{r_{ask}}).\mathit{Cache}\\ \\
		
		\mathit{Memory} & \rmdef & (\mathit{ask},\top).\mathit{Memory}+(\mathit{answer},\mathit{r_{memory}}).\mathit{Memory}\\ \\
		
		\multicolumn{3}{l}{\mathit{P_1}[16.0]\sync{request_c,request_m,reply}\mathit{Cache}[4.0]\sync{ask,answer}\mathit{Memory}[1.0]}\\
	\end{array}
\end{displaymath}

First of all, we notice that $r_{request}$ is evaluated in the same way as in previous sections; in fact, this is the general way to obtain it. A processing node is defined as a client composed of a sequence of states $P_i$, each followed by a waiting state $P_{wait_i}$. In a generic state $P_i$ a client performs an action that can be either $request_c$ or $request_m$. Instead, during a waiting state, a client can perform either a $reply$ or an $answer$. If a client, in a generic state $P_i$, performs the action $request_c$, it will wait for a $reply$ in the following waiting state $P_{wait_i}$; if a $request_m$ is performed, the client will instead wait for an $answer$. The length of this sequence, i.e. the parameter $i$, as well as the frequency with which a certain action is performed rather than another, is derived from the probabilities $p_c$ and $p_m$. In this case, $p_c=\frac{3}{4}$ and $p_m=\frac{1}{4}$, so we can define a client as a sequence of $4$ states $P_1,\ P_2,\ P_3,\ P_4$ followed by the respective $P_{wait_i},\ i=1,\cdots,\ 4$. Three states perform the action $request_c$ (followed by a $reply$) while the remaining one performs a $request_m$ (followed by an $answer$).
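
The construction of the explicit client can be sketched programmatically. The following Python fragment (the function name and tuple encoding are ours, purely illustrative) builds the interleaved sequence of states from $p_c$ expressed as a fraction:

```python
from fractions import Fraction

def client_sequence(p_c):
    """Build the explicit client of the first PEPA version: for
    p_c = h/n, a sequence of n states P_1..P_n, the first h issuing
    request_c (awaiting reply), the rest request_m (awaiting answer)."""
    p_c = Fraction(p_c)
    n, h = p_c.denominator, p_c.numerator
    return [("P_%d" % i, "request_c", "reply") if i <= h
            else ("P_%d" % i, "request_m", "answer")
            for i in range(1, n + 1)]
```

For $p_c=\frac{3}{4}$ this yields the four states of the program above; for a probability such as $\frac{3}{17}$ it would yield seventeen, which anticipates the state-space problem discussed later in this section.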

The component $Cache$ realizes the first server level, i.e. it acts as $L2$. If a request that it is able to satisfy is performed, it replies directly to the client by executing a $reply$. Otherwise, it forwards the request to the upper server level through the action $ask$. These actions have their own rates, which are the inverses of the service times required by the $Cache$ to perform them. It is worthwhile to say that these values are architectural details that can be easily found.

As already mentioned, the component $Memory$ is the same as in previous programs, while the last expression defines the entire system.

\subparagraph{Comments}

There are various considerations about the solution just introduced. First of all, it is important to say that the accuracy of the obtained results is very good, as we will see in the following section. In spite of this, some problems emerge. The major constraint is given precisely by this explicit way of defining the behaviour of a client. In fact, the sequence of interleaved $P_i-P_{wait_i}$ states is built taking into account the probabilities $p_c$ and $p_m$. Up to now we were dealing with $\frac{3}{4}$ and $\frac{1}{4}$, so a sequence of length $4$ can easily be written. Of course, the same approach could not be used if, for instance, $p_c=\frac{3}{17}$. The reason is simple, and it does not lie in the complexity of writing the sequence in PEPA but in how the underlying Markov chain is generated. In fact, longer sequences of actions lead to Markov chains with a more and more rigid structure. Because of this property, the number of states composing such rigid chains grows, and solving those chains becomes very expensive.

To accommodate any probabilities $p_c$ and $p_m$ and to keep the number of states of the generated Markov chain reasonable, we need a more relaxed solution, which we explain in the next paragraph. Obviously, we will pay a price for this.

\paragraph{Second Version of Hierarchical Client-Server Model in PEPA}

The idea is to change the PEPA definition of the client in order to drop the rigid structure arising in the underlying Markov chain. Of course, all the other components remain the same. This goal can be accomplished as shown in the following code.

\begin{displaymath}
	\begin{array}{rcl}
		
		\mathit{P} & \rmdef & (\mathit{request_c},\mathit{r_{request_c}}).\mathit{P_{wait_c}}+(\mathit{request_m},\mathit{r_{request_m}}).\mathit{P_{wait_m}}\\
		\mathit{P_{wait_c}} & \rmdef & (\mathit{reply},\top).\mathit{P}\\
		\mathit{P_{wait_m}} & \rmdef & (\mathit{answer},\top).\mathit{P}\\ \\
		
		\mathit{Cache} & \rmdef & (\mathit{request_c},\top).(\mathit{reply},\mathit{r_{cache}}).\mathit{Cache}+(\mathit{request_m},\top).(\mathit{ask},\mathit{r_{ask}}).\mathit{Cache}\\ \\
		
		\mathit{Memory} & \rmdef & (\mathit{ask},\top).\mathit{Memory}+(\mathit{answer},\mathit{r_{memory}}).\mathit{Memory}\\ \\
		
		\multicolumn{3}{l}{\mathit{P}[16.0]\sync{request_c,request_m,reply}\mathit{Cache}[4.0]\sync{ask,answer}\mathit{Memory}[1.0]}
	\end{array}
\end{displaymath}

A processing node is a component that can perform, in a non-deterministic way, either the action $request_c$ or the action $request_m$. Of course, it is necessary to specify the rates of these actions, which are respectively $r_{request_c}$ and $r_{request_m}$. A way to do so is to evaluate the mean time between two consecutive requests toward $L2$ ($tp_c$) or toward $M$ ($tp_m$) as
\[
tp_c = \frac{T_P}{p_c}
\]
\[
tp_m = \frac{T_P}{p_m}
\]
and finally to invert them to obtain the rates:
\[
r_{request_c} = \frac{1}{tp_c}
\]
\[
r_{request_m} = \frac{1}{tp_m}
\]

It is worthwhile to note that $tp_c$ and $tp_m$ could be estimated directly by profiling, without evaluating the probabilities $p_c$ and $p_m$, which will instead be used to evaluate $T_{req}$ and $T_{resp}$ as reported below.
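
As a sketch, the rate computation for this second client can be written as the following hypothetical Python helper (the function name is ours). Note that, defined this way, the race between the two actions selects $L2$ with probability $p_c$ and the total outgoing rate equals the overall request rate $r_{request}=1/T_P$, consistently with the first version:

```python
def client_rates(T_P, p_c, p_m):
    """Rates of request_c and request_m for the second PEPA client.
    The mean time between consecutive requests toward L2 (resp. M)
    is tp_c = T_P / p_c (resp. tp_m = T_P / p_m); the rates are
    their inverses, i.e. p_c / T_P and p_m / T_P."""
    tp_c = T_P / p_c
    tp_m = T_P / p_m
    return 1.0 / tp_c, 1.0 / tp_m
```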

\subsection{Quantitative Comparison against the Simulation}

\paragraph{Model Resolution} 

The way to evaluate the under-load memory access latency in the steady state of the system is basically the same as in the previous cases for both versions. Again, we rely on Little's law to find the so-called $R_{Q_{server}}$. The difference lies in having more waiting states and more incoming rates to those states. Anyway, we can easily adjust Formula~\ref{rqserver} in this way:

\begin{equation}
R_{Q_{server}} = \sum_{i=1}^w \frac{p_{wait_i}}{\lambda_{reply}+\lambda_{answer}} = \frac{\sum_{i=1}^w p_{wait_i}}{\lambda_{reply}+\lambda_{answer}}
\end{equation}

where 
\begin{itemize}
\item $p_{wait_i}$ is the average number of clients in the state $P_{wait_i}$ in the steady state of the system
\item $w$ is the number of waiting states. For instance, it equals $4$ in the example of the first version, while it is equal to $2$ in the second one.
\end{itemize}
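
A minimal sketch of this computation (a hypothetical Python helper; the steady-state quantities are taken as given, e.g. from the PEPA tool output):

```python
def rq_server(p_wait, lambda_reply, lambda_answer):
    """Little's law over the waiting states: the mean total number of
    waiting clients divided by the throughput of the reply and answer
    actions gives the server-side residence time R_Q_server."""
    return sum(p_wait) / (lambda_reply + lambda_answer)
```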

Finally, as usual, we have to add the impact of the interconnection structures to obtain the under-load memory access latency:

\[
R_{Q} = R_{Q_{server}}+T_{req}+T_{resp}
\]

It is important to recall that multiple interconnection structures are involved in hierarchical shared memory architectures, e.g. the already mentioned $P-C$ and $C-M$, so we have to take them into account in a proper way. The best solution is to consider, again, the interconnection structures as logically belonging to the server subsystem, with the difference that the base network latencies $T_{req}$ and $T_{resp}$ are evaluated by applying the definition of mean value:

\[
T_{req} = T_{req_{P-C}}\cdot p_c + T_{req_{C-M}}\cdot p_m
\]
\[
T_{resp} = T_{resp_{P-C}}\cdot p_c + T_{resp_{C-M}}\cdot p_m
\]
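
Putting the pieces together, the final latency can be sketched as follows (a hypothetical Python helper with names of our choosing, directly transcribing the two formulas above):

```python
def underload_latency(rq_srv, t_req_pc, t_req_cm,
                      t_resp_pc, t_resp_cm, p_c, p_m):
    """R_Q as the queueing part plus the mean base network latencies,
    weighted by the probability that a request is satisfied at L2
    (network P-C) or must reach M (network C-M)."""
    t_req = t_req_pc * p_c + t_req_cm * p_m
    t_resp = t_resp_pc * p_c + t_resp_cm * p_m
    return rq_srv + t_req + t_resp
```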

\paragraph{Results}

We have simulated via JSIM, and expressed through the two PEPA versions, the following scenario:

\begin{itemize}
\item $16$ processing nodes as in the previous tests
\item $M$ is shared among all the processing nodes and its service time $T_S$ is exponentially distributed with mean value $29\tau$, as in the previous tests
\item $L2$ is shared among disjoint groups of $4$ processing nodes. It is assumed to be big enough to contain the whole working set needed by the processes executing on the same subset of processing nodes. Further, it spends on average $10\tau$ to reply directly to the processing node with the requested cache block, while it forwards the request to the upper level in $4\tau$ on average. Both these values are estimated considering second-level caches with a size suitable to be shared among several processing nodes.
\item $p_c=\frac{3}{4}$ and $p_m=\frac{1}{4}$. These constants are chosen so as to make it possible to evaluate the impact of the first hierarchical level of servers, i.e. the second-level cache. Of course, it is important to note that, in real architectures, a second level of cache will not have such a high fault probability, because of techniques and optimizations applied at compile time, e.g. prefetching.
\item $T_P$ is our degree of freedom. It takes values in the range $[25 \tau, 3000 \tau]$. The reason for choosing very low $T_P$ values is quite simple: we are dealing with a primary cache, so the average time between two faults is lower with respect to the cases in previous chapters.
\end{itemize}
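
To give a concrete flavour of the scenario, the following is a minimal discrete-event sketch in Python. It is emphatically not the actual JSIM model: it ignores network latencies, lets the memory reply reach the client directly instead of traversing $L2$, and all names are ours, so absolute numbers differ from the reported results; it only reproduces the queueing structure described above.

```python
import heapq
import itertools
import random

def simulate(n_clients=16, per_cache=4, Tp=200.0, p_c=0.75,
             t_hit=10.0, t_fwd=4.0, t_mem=29.0,
             horizon=300000.0, seed=1):
    """Closed network: each client thinks ~Exp(Tp), then queues at its
    L2 (shared by per_cache clients); with probability p_c the L2
    replies (~Exp(t_hit)), otherwise it forwards (~Exp(t_fwd)) to the
    single shared memory (~Exp(t_mem)) and is free again as soon as it
    has forwarded, as described in the text.  Returns the mean request
    residence time, an estimate of R_Q_server."""
    rng = random.Random(seed)
    tick = itertools.count()                       # heap tie-breaker
    cache_free = [0.0] * (n_clients // per_cache)  # next idle time per L2
    mem_free = 0.0                                 # next idle time of M
    events = []                                    # (time, tick, kind, client, issued)
    for c in range(n_clients):
        t = rng.expovariate(1.0 / Tp)
        heapq.heappush(events, (t, next(tick), "req", c, t))
    waited, served = 0.0, 0
    while events:
        t, _, kind, c, issued = heapq.heappop(events)
        if t > horizon:
            break
        if kind == "req":                          # arrival at the client's L2
            k = c // per_cache
            start = max(t, cache_free[k])          # FIFO queue at L2
            if rng.random() < p_c:                 # hit: L2 replies directly
                done = start + rng.expovariate(1.0 / t_hit)
                cache_free[k] = done
            else:                                  # miss: forward to M
                fwd = start + rng.expovariate(1.0 / t_fwd)
                cache_free[k] = fwd
                heapq.heappush(events, (fwd, next(tick), "mem", c, issued))
                continue
        else:                                      # arrival at the memory
            start = max(t, mem_free)               # FIFO queue at M
            done = start + rng.expovariate(1.0 / t_mem)
            mem_free = done
        waited += done - issued                    # request residence time
        served += 1
        nxt = done + rng.expovariate(1.0 / Tp)     # think, then next request
        heapq.heappush(events, (nxt, next(tick), "req", c, nxt))
    return waited / served
```

Even this rough sketch exhibits the qualitative behaviour of Figure~\ref{rqger}: for large $T_P$ the residence time approaches the bare service times, while for $T_P$ near $25\tau$ the contention on the shared memory makes it grow sharply.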

Figure~\ref{rqger} shows the under-load memory access latency for the simulation (SIMULATION) and the two PEPA versions (respectively PEPAv1 and PEPAv2), while Figure~\ref{errore-ger} shows the absolute and relative errors of the two versions against the simulation.

\begin{figure}[h]
	\centerline{
		\mbox{\includegraphics[]{Grafici-Tesi/GER/rqgerarchico}}
	}
	\begin{center}
		\caption{Under-load Memory Access Latency.}
		\label{rqger}	
	\end{center}
\end{figure}
\begin{figure}[h]    
    \centering
    \subfloat[Absolute Error]{\includegraphics[]{Grafici-Tesi/GER/absger}}   
    
    \centering
    \subfloat[Relative Error]{\includegraphics[]{Grafici-Tesi/GER/relger}}   
    
    \caption{Errors of PEPA with respect to the simulation.}
    \label{errore-ger}
\end{figure}
\clearpage

First of all, we notice that both versions underestimate the simulation. The reason probably lies in the probabilistic way some action rates are estimated. Confirming this, the first version is very close to the simulation because it introduces little probabilistic behaviour. The second solution, instead, deviates from the simulation only for the lowest $T_P$ values. In fact, the curve of the second version approximates the simulation very well over the whole $T_P$ range, except for the first two values, i.e. $25\tau$ and $50\tau$. For these $T_P$ values, the maximum relative error is registered, as reported in Figure~\ref{errore-ger}(b).

On the other hand, we already know that the first version cannot handle general cases, because the structure of the generated Markov chains is very rigid, and this is not suitable in terms of resolution. The second one, instead, is able to accommodate general cases without increasing the complexity. Therefore, we can conclude that the second solution models hierarchical shared memory architectures well, being a good trade-off between accuracy and complexity.

\section{Conclusion}

Due to the trend that multi-cores have taken, modelling hierarchical shared memory is becoming an important topic. In this chapter we have seen that this goal can be accomplished starting from the classical client-server model. Of course, enhancements are needed, and they can be achieved in two principal ways: either by extending the analytical resolution technique introduced in~\cite{ASE} or by using the new formalism explained in Chapter~\ref{pepachapter} (PEPA). We opted for the latter, because extending the analytical resolution leads to a complexity increase that is not worthwhile. We have thus seen two PEPA solutions showing how hierarchical shared memory can be treated. On the basis of the tests we have performed, we concluded that the first one is very precise compared to the simulation, but problems arise in terms of the generated Markov chain: as already mentioned, the structure of the underlying chain is rigid, and this makes it difficult to solve. On the other hand, the second version is a good trade-off between accuracy and complexity, so it should be taken into consideration.