\chapter{Conclusion and Future Works}

Performance on shared-memory architectures is dictated by the interplay of concrete architectural details, parallel application constraints and the run-time support of concurrency mechanisms. Exploiting these systems efficiently therefore requires a methodological, structured approach that handles all these aspects.

On the one hand, structured parallel programming makes it possible to create parallel applications independently of the underlying architecture. Moreover, building parallel applications from a fixed set of paradigms enables optimizations, modularity and a sound methodology without a significant increase in complexity. On the other hand, from the performance point of view, a cost model associated with an abstract architecture is needed.

We have tried to contribute in this direction by enhancing the cost model for shared-memory architectures and its accuracy, with particular care for parallel application constraints. Our starting point has been the queueing-based client-server model with request-reply behaviour. We have explored alternative ways to express it, aiming at a formalism that combines flexibility, simplicity and a high-level approach. Of course, its resolution had to be a trade-off between complexity and accuracy.

We have chosen a Stochastic Process Algebra language (PEPA) as the formalism to describe the processors-memory subsystem in an elegant way. Moreover, its numerical resolution technique is very accurate.
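As an illustration, a request-reply client-server scenario can be written in PEPA roughly as follows; the component names, rates $\lambda$, $r$, $\mu$ and the cooperation set are illustrative, not the exact components developed in this thesis. Here $\top$ denotes the passive rate:
\begin{align*}
\mathit{Client} &\;\stackrel{\mathrm{def}}{=}\; (\mathit{think}, \lambda).(\mathit{request}, r).(\mathit{reply}, \top).\mathit{Client}\\
\mathit{Server} &\;\stackrel{\mathrm{def}}{=}\; (\mathit{request}, \top).(\mathit{reply}, \mu).\mathit{Server}\\
\mathit{System} &\;\stackrel{\mathrm{def}}{=}\; \mathit{Client}[n] \bowtie_{\{\mathit{request},\,\mathit{reply}\}} \mathit{Server}
\end{align*}
The continuous-time Markov chain underlying such a model can then be solved numerically to obtain steady-state measures such as server utilization.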

The next step has been to verify how PEPA could enhance the classical model. To this end, advanced cost models have been defined in this thesis. In particular, we have focused on:

\begin{enumerate}
\item \textbf{shared memory hierarchies}. This architectural feature is increasingly common in multi-core architectures, so an advanced cost model was needed. Multiple levels of shared memory can affect performance; once a cost model is available, a way to deal with this hierarchical organization can be devised. Furthermore, we believe that a hierarchically organized parallel application could exploit these architectures very efficiently, so the hierarchical client-server model with request-reply behaviour could be a good starting point for studying this topic.

\item \textbf{impact of the parallel application}. A first direct impact of the parallel application on the client-server model is its influence on the $T_P$ values. We recall that $T_P$ is an input parameter of the client-server model, so its derivation is fundamental for accuracy. In this thesis we have seen that this value is influenced in different ways: either by the complex internal behaviour of processes (the so-called process phases) or by heterogeneity among processes.

Following the structured parallel programming approach, heterogeneous processes appear only in some paradigms, and their impact can usually be considered negligible with respect to the other processes involved in the parallel application. Nevertheless, a PEPA cost model for heterogeneous processes has been formalized in this thesis in order to be more precise from at least two points of view: heterogeneous processes are modelled explicitly (rather than abstracted away), and the numerical resolution techniques are accurate. We also recall that this cost model can be used orthogonally to deal with process phases. It is worth noting that recognizing the different processes is easily achieved by looking at the structure of the application. Once processes are subdivided into classes, optimizations can be introduced to reduce the complexity of the resolution technique.

Process phases are also easily recognizable. In the structured parallel programming approach, processes always interleave computational phases with inter-process communications, and establishing when a \textit{send} starts or ends is quite simple because these limits are software boundaries. The difference in $T_P$ between computational and communication phases can be as large as an order of magnitude, while within a computational phase $T_P$ varies only moderately. It therefore makes sense to model only the so-called \textit{think} and \textit{send} phases. Different approaches to dealing with phases have been shown.

Furthermore, it has been explained how phase-dependent cost models can also be used for processes that are not always active, i.e. whose efficiency is less than one. It is worth recalling that for these processes the $T_P$ derivation cannot be carried out by looking only at the sequential code.
\end{enumerate}
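To make the role of $T_P$ concrete, the following sketch solves the simplest queueing-based client-server model numerically. It is a hypothetical illustration, assuming the classic closed M/M/1//$n$ (machine-repairman) form with exponential think and service times; the function name and parameters are ours, not the notation of this thesis beyond $T_P$.

```python
# Illustrative sketch (assumed model): n client processes alternate
# between a think phase of mean duration T_P and a request served by
# a single server (e.g. a memory module) with mean service time T_S.

def server_utilization(n, T_P, T_S):
    """Return (state probabilities, server utilization) for the closed
    M/M/1//n queue: state k = number of outstanding requests."""
    lam, mu = 1.0 / T_P, 1.0 / T_S
    # Unnormalized steady-state probabilities of the birth-death chain:
    # in state k, the n-k thinking clients generate requests at rate
    # (n-k)*lam, and the server completes them at rate mu.
    pi = [1.0]
    for k in range(n):
        pi.append(pi[-1] * (n - k) * lam / mu)
    total = sum(pi)
    pi = [p / total for p in pi]
    # The server is busy whenever at least one request is present.
    return pi, 1.0 - pi[0]

pi, rho = server_utilization(n=8, T_P=10.0, T_S=1.0)
```

As expected, increasing $T_P$ (clients compute longer between requests) lowers the server utilization, which is exactly why an accurate derivation of $T_P$ matters.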
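Item 1 above suggests viewing the memory hierarchy as a chain of servers visited by each request. One minimal way to sketch this, assuming a single-class closed queueing network with hypothetical per-level service demands (not the actual hierarchical client-server model of this thesis), is exact Mean Value Analysis:

```python
# Illustrative sketch (assumed model): n processes think for T_P on
# average, then traverse two memory levels (e.g. shared cache, then
# external memory) with per-request service demands D[i].

def mva(n, T_P, D):
    """Exact single-class MVA: returns throughput X and mean queue
    lengths Q for n customers, think time T_P, service demands D."""
    Q = [0.0] * len(D)                  # mean queue length per level
    X = 0.0
    for k in range(1, n + 1):
        # Residence time per level (arrival theorem): service demand
        # inflated by the queue seen on arrival.
        R = [D[i] * (1.0 + Q[i]) for i in range(len(D))]
        X = k / (T_P + sum(R))          # system throughput
        Q = [X * r for r in R]          # Little's law per level
    return X, Q

X, Q = mva(n=8, T_P=10.0, D=[0.5, 2.0])
```

The throughput is capped by the slowest memory level ($X \le 1/\max_i D_i$), which makes quantitatively visible why additional levels of shared memory can dominate performance.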
