\chapter{Introduction}

The \textit{High Performance Computing} (HPC) field studies the interaction between hardware and software, together with applications characterized by requirements of high processing bandwidth, low response time, high efficiency and scalability.

Currently, \textit{multiprocessors} and \textit{multi-cores} represent an important evolution, if not a revolution, from the technological point of view. These architectures are very complex and heterogeneous systems in which parallelism is exploited at the process level. The trend in multi-core architectures suggests that the number of cores per chip will double every two years. The idea is to replace a few complex and power-consuming CPUs with many smaller and simpler CPUs that can deliver better performance per watt. An important role is played by high-bandwidth, low-latency interconnection structures with limited degree (especially on-chip), while shared memory starts to be organized in hierarchies.

All these aspects have enormous implications from the software point of view. We point out that these architectures can be exploited efficiently only if applications are able to do so. In spite of this relevant architectural change, current programming tools are still very low-level for a programmer without profound knowledge of the HPC field. Further, performance prediction and/or performance portability is missing or still in an initial phase. Summarizing, a wide gap still exists between shared memory architectures and parallel programming development tools.

We advocate that a structured and methodological approach can reach these targets by means of \textit{structured parallel programming} (or \textit{skeleton}-based parallel programming), in which a limited set of paradigms provides standard and effective rules for composing parallel computations in a machine-independent manner. Programmers have to use these paradigms to realize the parallel application. The freedom of the programmer is limited, but if paradigms allow composition, parametrization and ad-hoc parallelism, they become very easy to use from the programmer's point of view and very amenable to optimization from the compiler's point of view. In fact, given a fixed set of paradigms, the compiler can ``reason'' entirely about them, inserting optimizations that may be platform-dependent or choosing the best implementation for the underlying architecture. All this means performance improvement without direct intervention of programmers.
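As an illustrative sketch (the names and the composition API are hypothetical, not those of any real skeleton framework), paradigms can be seen as composable higher-order functions, which is exactly what makes them easy to combine for the programmer and easy to replace with architecture-specific implementations for the compiler:

```python
from functools import reduce

# Hypothetical sketch of skeleton composition: each skeleton is a
# function from input to output, so paradigms compose freely and a
# compiler could substitute any of them with an architecture-specific
# parallel implementation.

def seq(f):
    """Wrap a sequential function as a skeleton."""
    return f

def pipe(*stages):
    """Pipeline: the output of each stage feeds the next one."""
    return lambda x: reduce(lambda acc, stage: stage(acc), stages, x)

def farm(worker):
    """Farm: apply the same worker to every item of a collection.
    (A real implementation would dispatch items to parallel workers.)"""
    return lambda xs: [worker(x) for x in xs]

# A pipeline whose first stage increments a stream of items and whose
# second stage is a farm of squaring workers.
app = pipe(seq(lambda xs: [x + 1 for x in xs]),
           farm(lambda x: x * x))

print(app([1, 2, 3]))  # [4, 9, 16]
```

The point of the sketch is that the program is expressed only through the paradigms; nothing in `app` depends on the target machine.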

This important target is both application and architecture dependent and can be accomplished by a \textit{performance cost model} in association with a simplified view of the concrete architecture, i.e. the so-called \textit{abstract architecture}~\cite{ASE}. Considering that, a parallel compiler must be supplied with
\begin{itemize}
\item an \textbf{abstract architecture}, that is, a simplified view of the concrete architecture able to describe the essential performance properties while abstracting from all the others that are not needed. It aims to discard details specific to different concrete architectures and to emphasize the most important and general ones. An abstract architecture for shared memory architectures could be the one in Figure~\ref{mimdint}, in which there are as many processing nodes as processes and the interconnection structure is fully connected.
\item a \textbf{cost model} associated with the abstract architecture. This cost model has to sum up all the features of the concrete architecture, the interprocess communication run-time support and the impact of the parallel application. Further, we strongly advocate that a cost model should be easy to use and conceptually simple to understand.
\end{itemize}

\begin{figure}[t]
        \centerline{
               \mbox{\includegraphics[scale=0.65]{Images/mp.pdf}}
        }
        \caption{Simplified view of Shared Memory Architecture}
		\label{mimdint}
\end{figure}

We remark that a complete and accurate cost model for these architectures is still missing, and the aim of this thesis is precisely to give a contribution in this direction. We want to study how a detailed, shared-memory-architecture-dependent cost model for parallel applications can be realized, with particular care for the impact of the parallel application.

The aim is to use cost models in compiler technology in order to statically perform optimizations for parallel applications, in the same way that compilers nowadays do for sequential code. This should allow programmers to write parallel applications more easily, i.e. using high-level and user-friendly tools, while still exploiting the underlying architecture well, because compilers are able either to choose the right implementation or to use low-level libraries, which are very important from the performance point of view. Further, performance portability should be maintained among different concrete architectures. To our knowledge, there is no other work moving in this specific direction apart from our main source of reference~\cite{ASE}.

At the process level, a parallel application can be viewed as a collection of processes cooperating via message passing. Formally, it is a graph in which nodes are processes and arcs are communication channels among processes. This graph can be the result of a first, totally architecture-independent compilation phase, and it can then be easily mapped onto the abstract architecture for shared memory, because it has the same topology. All the outstanding concrete features are captured in two functions called $T_{send}$ and $T_{calc}$. These functions are evaluated taking into account several characteristics of the concrete architecture, e.g. the interconnection structure, the processing node, the memory access latency and so on. At this point, the parallel compiler has all the elements to introduce the architecture dependency according to the cost model. As already said, this way of operating allows optimizations or choices among various implementations, so that performance predictability and/or portability can be achieved.
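The mapping just described can be sketched as follows. All names, the linear communication cost model and the numeric values are purely illustrative assumptions, not measured figures; the sketch only shows how a compiler could annotate the process graph with $T_{calc}$ and $T_{send}$ and locate the bottleneck process:

```python
# Hypothetical sketch: a parallel program as a graph of processes and
# channels, annotated with the two architecture-dependent cost
# functions T_calc (computation) and T_send (communication).
# All values are illustrative, not measured.

processes = {"emitter": 2.0e-6, "worker": 8.0e-6, "collector": 1.0e-6}  # T_calc (s)
channels = [("emitter", "worker"), ("worker", "collector")]

def t_send(msg_size):
    # toy linear model: setup latency plus a per-word transfer cost
    return 0.5e-6 + 0.01e-6 * msg_size

def service_time(proc, out_degree, msg_size):
    """Estimated service time: computation plus one send per output channel."""
    return processes[proc] + out_degree * t_send(msg_size)

out_deg = {p: sum(1 for src, _ in channels if src == p) for p in processes}
times = {p: service_time(p, out_deg[p], msg_size=128) for p in processes}
bottleneck = max(times, key=times.get)
print(bottleneck, times[bottleneck])
```

With such annotations in hand, the compiler could compare alternative implementations of the same paradigm by recomputing the service times.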

However, the idea of summing up all the salient features of a concrete architecture in only two functions is, on one side, very powerful and easy to use but, on the other side, these functions are not simple to derive.

In shared memory architectures various kinds of resources are shared, e.g. memory modules and interconnection structures. The shared memory characteristic has, at the same time, pros and cons. On the one hand, it allows an easy way to design the \textit{run-time support} for interprocess communication, i.e. the implementation of \textit{send} and \textit{receive}, as an extension of the uniprocessor run-time support that takes into account important issues peculiar to shared memory architectures, like \textit{synchronization} or \textit{cache coherence}. On the other hand, since all the processing nodes have to access the shared memory to load data or to communicate, the memory becomes a source of performance degradation due to conflicts. So the effectiveness of the shared memory approach depends on the \textit{latency} incurred on memory accesses as well as on the \textit{bandwidth} of information transfer that can be supported. We can consider conflicts on shared memory the major source of performance degradation in these architectures. Consequently, $T_{send}$ and $T_{calc}$ will be principally affected by this phenomenon, so a cost model should describe this situation properly in order to ensure at least performance prediction.

Formally, the impact of shared memory conflicts can be modelled as a \textit{client-server} queuing system in which the clients $C_i$ are the processing nodes accessing the same macro-module, while the server $S$ is exactly that memory module; thus the under-load memory access latency is the \textit{server response time} (conventionally called $R_Q$). Figure~\ref{csint} shows this model, which will be the focus of interest throughout the thesis.

\begin{figure}[h]
	\centerline{
		\mbox{\includegraphics[scale=0.7]{Images/cliserv}}
	}
	\caption{Client-Server System with Request-Reply behaviour.}
	\label{csint}
\end{figure}

In~\cite{ASE} the model is described through the following system of equations:

\begin{equation}
\left\{ \begin{array}{l}
T_{cl} = T_P + R_Q\\[6pt]
R_Q = W_Q(T_S,T_A) + t_{a0}\\[6pt]
\rho = \frac{T_S}{T_A}\\[6pt]
T_A = \frac{T_{cl}}{p}\\[6pt]
\rho < 1
\end{array} \right.
\end{equation}

Each client $C_i$ generates the next request only when the result of the previous one has been received. The behaviour of a client can be considered cyclic: local computation periods of average length $T_P$ alternate with waiting ones ($R_Q$), leading to a certain client average inter-departure time $T_{cl}$. Once we know $T_{cl}$, we can determine the server average inter-arrival time $T_A$ as $\frac{T_{cl}}{p}$ by applying the \textit{Aggregate inter-arrival time} theorem. Finally, the server response time $R_Q$ is given by the average waiting time $W_Q$ in the queue $Q$ plus a constant known in advance, the base latency $t_{a0}$ of the server. Of course, $W_Q$ depends on the type of queue placed in front of the server. The last expression points out that the system has a self-stabilizing behaviour (the utilization factor $\rho$ of the server is less than one), so a steady-state solution exists. In this analytical approach we can find $R_Q$ by solving a second-degree equation in $\rho$.
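As a concrete sketch, assume an M/M/1-like queue with exponential service, so that $W_Q = \rho T_S/(1-\rho)$ (this specific queue discipline and the parameter values below are illustrative assumptions, not prescribed by the model). Substituting $\rho = p\,T_S/(T_P + R_Q)$ into the system then yields the second-degree equation in $\rho$, which can be solved directly:

```python
import math

# Sketch of the analytical resolution, assuming an M/M/1-like queue
# with W_Q = rho * T_S / (1 - rho).  Substituting rho = p*T_S/(T_P + R_Q)
# into R_Q = W_Q + t_a0 gives a second-degree equation in rho:
#   (T_P + t_a0 - T_S)*rho^2 - (p*T_S + T_P + t_a0)*rho + p*T_S = 0
# (The sketch assumes T_P + t_a0 > T_S, so the leading coefficient is positive.)

def solve_rq(Tp, Ts, ta0, p):
    a = Tp + ta0 - Ts
    b = -(p * Ts + Tp + ta0)
    c = p * Ts
    disc = math.sqrt(b * b - 4 * a * c)
    # keep the root with rho < 1, so that a steady state exists
    rho = min((-b - disc) / (2 * a), (-b + disc) / (2 * a))
    rq = p * Ts / rho - Tp          # from T_A = T_cl / p and rho = T_S / T_A
    return rho, rq

# Illustrative values: T_P = 10, T_S = 1, t_a0 = 0.5, p = 4 clients.
rho, rq = solve_rq(Tp=10.0, Ts=1.0, ta0=0.5, p=4)
print(rho, rq)
```

The returned $R_Q$ satisfies both defining relations at once, which is exactly the fixed point the system of equations describes.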

In the following, the client-server model will be described in other formalisms, e.g. as a closed queuing network or as a Continuous Time Markov Chain (CTMC), and $R_Q$ will be predicted through several resolution techniques, e.g. analytical and numerical ones. The reason is that we want to find a way to enhance the model for new behaviours and to improve its accuracy without increasing the complexity of the resolution too much.

From this point of view, we know that Markov chains are a very powerful mathematical tool able to represent the behaviour of complex and concurrent systems, such as the Processors-Memory subsystem in shared memory architectures. Further, many numerical resolution techniques exist for moderately sized CTMCs, while iterative methods can be applied when huge sizes are involved. Of course, Markov chains are difficult to build, so we want to abstract both from them and from their resolution techniques.
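For this particular system the CTMC is small enough to write down directly: under the assumption of exponential think and service times (an illustrative assumption, with illustrative rates below), the client-server system is a birth-death chain whose states count the outstanding memory requests, and its steady state has a closed product form:

```python
# Sketch: the client-server system as a birth-death CTMC with states
# n = number of outstanding requests at the server (0..p).  Each of the
# p - n thinking clients issues requests at rate lam = 1/T_P; the server
# completes them at rate mu (base latency folded into the service time
# for simplicity).  Rates below are illustrative.

def solve_ctmc(p, lam, mu):
    # product-form steady-state probabilities of the birth-death chain:
    # pi[n+1]/pi[n] = (p - n)*lam / mu
    pi = [1.0]
    for n in range(p):
        pi.append(pi[-1] * (p - n) * lam / mu)
    z = sum(pi)
    pi = [x / z for x in pi]
    n_mean = sum(n * x for n, x in enumerate(pi))   # mean requests at server
    throughput = mu * (1.0 - pi[0])                 # completions per unit time
    r = n_mean / throughput                         # Little's law: response time
    return pi, r

pi, r = solve_ctmc(p=4, lam=0.1, mu=1.0)
print(r)
```

Note that this finite-source chain is exact for the closed system, whereas open-queue formulas approximate the arrival process; comparing the two is one way to assess accuracy.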

For this purpose, throughout the thesis we will use a high-level description language for Markov chains called \textit{Performance Evaluation Process Algebra} (PEPA). It belongs to the class of \textit{Stochastic Process Algebras}, and its usability comes from the formal interpretation of its expressions provided by an operational semantics. As we will see in Chapter~\ref{pepachapter}, PEPA is a paradigm for specifying Markov chains that allows a complex system to be expressed as a composition of smaller components. These characteristics, in addition to the high-level approach, also make PEPA a suitable formalism to enhance and to solve the client-server model. To our knowledge, this is the first attempt to use PEPA for performance modelling in the HPC field.
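To give a flavour of the compositional style, a minimal PEPA description of the client-server system could look as follows (the structure and rate names are an illustrative sketch: $\lambda$ abbreviates $1/T_P$, $\mu$ the service rate, and $\top$ denotes the passive rate, so that clients merely wait for the server during the shared activity $serve$):

```latex
% Minimal PEPA sketch: p clients alternating a local think phase with a
% passive use of the server, cooperating with it on the activity "serve".
\[
\begin{array}{rcl}
\mathit{Client} & \stackrel{\mathrm{def}}{=} &
    (\mathit{think},\lambda).(\mathit{serve},\top).\mathit{Client}\\[4pt]
\mathit{Server} & \stackrel{\mathrm{def}}{=} &
    (\mathit{serve},\mu).\mathit{Server}\\[4pt]
\mathit{System} & \stackrel{\mathrm{def}}{=} &
    \mathit{Client}[p] \mathrel{\bowtie_{\{\mathit{serve}\}}} \mathit{Server}
\end{array}
\]
```

The underlying CTMC is derived automatically from the operational semantics, which is precisely the abstraction from Markov chains we are after.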

We advocate that PEPA is a \textit{flexible} formalism for the client-server model, able to reach \textit{accuracy} in under-load memory access latency estimations and able to \textit{accommodate} parallel application constraints.

By flexibility we mean that the formalism is able to adapt itself nimbly to even drastic architectural and/or application-dependent changes. This ability is necessary in order to deal with changes with little effort and without greatly increasing the resolution complexity of the model. A notable example is the architectural passage from non-hierarchical shared memory to shared memory hierarchies, which are very common in multi-core architectures. Given its relevance, this aspect will be treated in depth in a dedicated chapter.

Further, the accuracy aspect is very important for quantitative reasons. In order to be used, a performance cost model has to be precise. From this point of view, both analytical and numerical resolution techniques have been analysed and compared in the thesis. Of course, the most accurate solution is not always the best choice in terms of complexity, so a good trade-off between these two contrasting requirements is needed.

Finally, we want a formalism that is also able to take into account the impact of the parallel application executed on the shared memory architecture. In other words, this means satisfying application constraints. Notable examples are an application composed of different processes, or processes exhibiting a complex internal behaviour. We will treat these topics in depth.

\paragraph{Organization of the Thesis}

The thesis is organized in 8 chapters. The first one is this Introduction, which focuses on the context, the objectives and the structure of the thesis.

Chapter 2 provides an overview of the main concepts about multiprocessors and multi-cores exploiting parallelism at the process level.

Chapter 3 summarizes the most important results of Queuing Theory that will be useful in later chapters. We recall that the client-server model with request-reply behaviour reported in~\cite{ASE} is also based on Queuing Theory.

Chapter 4 introduces two cost models for the Processors-Memory system: the former maps the system onto a closed queuing network, while the latter is the client-server model already introduced. We will see the pros and cons of both and their resolutions, and we will propose a first variant of the second one taking into account a first application constraint: heterogeneous processes.

In order to enhance the model to take into account new architectural or application-dependent aspects without increasing its complexity, the PEPA formalism will be proposed in Chapter 5. Then, analyses and comparisons with other resolution techniques will be shown.

Chapter 6 examines the impact of parallel applications, i.e. applications composed either of different processes or of processes exhibiting a complex internal behaviour. Also in this case, the theoretical contribution will be accompanied by experiments.

The modelling of shared memory hierarchies and the related results will be treated in Chapter 7.

Finally, Chapter 8 draws the conclusions.
