\chapter{Applying the cost model to a concrete architecture}\label{TILERA}
We apply the fundamental results of the previous chapters to a concrete architecture. The structure of the chapter is shown below.
\begin{enumerate}
\item The \textit{Tilera TILEPro64} multi-core architecture (from now on Tile64) is introduced. The main features of the architecture are described and discussed according to the terminology of Chapter~\ref{shared}. 
\item The modeling of the memory system is of central importance. A problem is that the technical information available in the literature is not sufficient to understand the behaviour of the memory and predict its service time. We will show how to derive it by following a reverse-engineering approach, i.e. by analysing the memory traces retrieved by means of the Tile64 profiler.
\item The whole performance-evaluation methodology is revisited with respect to a parallel program executed on the Tile64 architecture. We show how to instantiate the client-server model to predict the performance of the application. The predicted under-load memory access latency will be validated against experimental results.
\end{enumerate}

\section{Tilera TILEPro64 architecture overview}
The Tile64 architecture is a NUMA Chip MultiProcessor in which multiple two-dimensional mesh networks interconnect 64 processing nodes (also known as \textit{tiles}). It is a notable example of a \textit{Network Processor}, i.e. a general-purpose architecture oriented mainly toward network packet processing. Figure~\ref{tile64} illustrates the firmware organization of the architecture, with emphasis on the structure of an individual processing node.
\begin{figure}[h]
	\centerline{
		\mbox{\includegraphics[]{Images/tilera}}
	}
	\caption{The Tile64 firmware architecture}
	\label{tile64}
\end{figure}

\paragraph{Processing node}
Each processing node contains a RISC, out-of-order, pipelined CPU $P$, a private cache hierarchy $C$ and an interface unit $W$, which is basically a switch implementing deterministic network routing. The structure of $C$ deserves particular attention. It is a private two-level cache hierarchy: a 16KB $L1$ cache (split into 8KB $Data$ and 8KB $Instructions$) and a 64KB $L2$, for a total of 5MB of on-chip cache across the 64 tiles. $L1$-$Data$ works in write-through mode, while $L2$ works in write-back mode. The processing-node architecture is therefore extremely simple: notice, in this perspective, the lack of both hardware multithreading and a floating-point unit. 

\paragraph{Interconnection network}
The five mesh networks are implemented through the $W$ units of the processing nodes. The flow control is \textit{wormhole}, while the communication protocol between firmware units is based on \textit{timeslot}. Therefore, the latency experienced by a stream of $m$ flits to travel across $d$ units of the network is equal to (see equation~\ref{latency-timeslot} of Section~\ref{networks}):
\begin{equation}\label{traverse}
L = (m + d - 2) \ \tau
\end{equation}
Particularly important is the fact that each of the five meshes addresses a specific functionality. For instance, a mesh $MDN$ is dedicated to the flow of memory access requests/replies between processing nodes and memory interfaces, another is reserved to cache-coherence messages, and so on. An important consequence of this organization is that conflicts on $MDN$ are limited to memory requests and replies. This aspect, together with the fact that $MDN$ is extremely fast compared to the external memory (see also Section~\ref{netonchip}), will allow us to simplify the evaluation of the under-load memory access latency by abstracting from the potential overhead of network conflicts.
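As a minimal illustration, the latency formula above can be encoded directly; times are in clock cycles ($\tau$), and the message sizes used in the example are illustrative.

```python
# Wormhole pipeline latency of equation (traverse): L = (m + d - 2) tau,
# for a stream of m flits crossing d units with unit routing latency.
def wormhole_latency(m, d):
    """Latency, in clock cycles (tau), of m flits over d traversed units."""
    return m + d - 2

# e.g. a 4-flit message traversing 6 units:
print(wormhole_latency(4, 6))  # -> 8
```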

\paragraph{Cache coherence}
The Tile64 implements a directory-based automatic cache-coherence technique. As we said, bandwidth for cache-coherence messages is reserved on a dedicated mesh. However, users are allowed to disable the automatic cache coherence in favour of their own algorithm-dependent strategies. This is particularly interesting, since it provides a flexibility that is lacking in many state-of-the-art CMPs. In the following, we will not go into the details of cache-coherence mechanisms and their potential overhead on the architectural cost model, which is left as an open topic. 

\paragraph{Memory system}
Four memory controllers $MINF_i$ are placed at two opposite edges of the chip. The role of $MINF$ is to interface the chip with the external memory $M$, which is a 64-bit DDR2 DRAM. It is worth noticing that the service time of $M$ is \textit{not deterministic}. For instance, \textit{consecutive} requests for adjacent addresses, i.e. ones that reside on the same memory row, are served faster than random ones, since no overhead is paid for the so-called ``row activation''. Locality properties, together with many others~\cite{TILE-IO}, are useful to optimize the global performance of the memory system, from both the bandwidth and the latency point of view. By exploiting these properties, $MINF$ implements a smart scheduling algorithm that re-orders outstanding memory requests, instead of forwarding them to $M$ in a classical FIFO manner. The re-ordered stream of requests helps $M$ offer a better average service time. Intuitively, the larger the number of outstanding memory requests, the more effective the scheduling algorithm and, consequently, the lower the exhibited service time. As we will see, this behaviour will be modeled by representing the memory system as a load-dependent server.

\paragraph{Summary}
A summary of the Tile64 architectural details is shown in Table~\ref{tile-data}.
\begin{table}[h]
\begin{center}
\begin{tabular}{|c||c|}\hline
CPU clock & 800MHz  \\\hline
L1 size &16KB (8KB D, 8KB I) \\\hline
L1-Data block size ($\sigma_{L1}$) & 16B \\\hline
L2 size & 64KB \\\hline
L2 block size ($\sigma_{L2}$) & 64B \\\hline
W routing latency & 1$\tau$ \\\hline
\end{tabular}
\caption{Architectural characteristics of a Tile64 processing node}
\label{tile-data}
\end{center}
\end{table}

\section{Memory access latency in Tilera architectures}
\subsection{Methodology}
The evaluation of the under-load memory access latency $R_Q$ is done according to the methodology of Chapter~\ref{pepach}. Summarizing, it consists of three steps.
\begin{enumerate}
\item Firstly, \textit{architecture-dependent} parameters are determined. Basically, we are interested in three parameters: the average service time of the memory system $T_S$, the base latency of the request phase $T_{req}$ and the base latency of the reply phase $T_{resp}$.
\item Then, \textit{application-dependent} parameters can be extracted from the specific parallel application. In particular, we have seen the importance of knowing $T_P$ and $p$.
\item Finally, the above parameters can be used to instantiate a \textit{PEPA client-server model}, by means of which $R_Q$ will be derived.
\end{enumerate}
In this section we focus on points 1) and 3), while the analysis of application-dependent parameters will be addressed in the next section.

\subsection{Base latency}
\paragraph{Memory controller delay}
The internal firmware structure of $MINF$ is particularly complex~\cite{TILE-IO}, mainly because of the request scheduler. However, we have experimentally verified that $MINF$ almost always introduces a \textit{constant} delay to forward either memory requests toward $M$ or memory replies toward the chip (equivalently: the queueing overhead at $MINF$ is negligible). In the case of a request, the experienced latency is
\[
T_{MINF-REQ} = 2\tau
\]
while in the case of a reply we have 
\[
T_{MINF-RESP} = 41\tau
\]
The imbalance between these two latencies could be due to several factors: for instance, the scheduler overhead is accounted for in the reply phase. Another, simpler possibility is that it is due to a profiler approximation.

\paragraph{Base latency of request and reply phases}
$T_{req}$ and $T_{resp}$ are functions of the position $(x,y)$ of the processing node inside the mesh ($x, y \in \lbrace 0, \dots, 7\rbrace$) with respect to a specific memory interface controller $MINF_z$ ($z \in \lbrace 0, \dots, 3\rbrace$). In the following, we fix $x$, $y$ and $z$ to exemplify the evaluation approach. We will assume $(x,y,z) = (3,3,0)$.

The units traversed during $T_{req}(3,3,0)$ and $T_{resp}(3,3,0)$ are highlighted in Figure~\ref{tile-path}.
\begin{itemize}
\item $T_{req}$ begins in the instant in which $L2$ puts a memory request into the network. The size of the request is \[m_1 = 4\] flits~\cite{TILE-IO}. The phase ends as soon as the request gets to $M$. 
\item $T_{resp}$ begins in the instant in which $M$ sends the first word of the reply $r$. The phase ends as soon as $L2$ receives \textit{the first $m_2$ flits of $r$}. With $H = 3$ being the size of the message header in flits, and $\sigma_{L1}$ corresponding to $4$ flits, we have \[m_2 = \sigma_{L1} + H = 7\] This is due to the fact that the stall period of $P$ ends as soon as the word for which the fault has been generated is available in $L1$. Therefore, it is not necessary to wait for the whole $L2$ block, carried by a message of size $m = \sigma_{L2} + H$. 
\end{itemize} 
From Figure~\ref{tile-path} we understand that the distance between $L2$ and $MINF_0$, in terms of traversed units, is $d = 6$. Equation~\ref{traverse} can be used to determine the pipeline latency for the paths $L2-MINF_0$ and $MINF_0-L2$. Adding the latency of $MINF_0$, that is $T_{MINF-REQ}$ for the request phase and $T_{MINF-RESP}$ for the reply phase, we can determine $T_{req}(3,3,0)$ and $T_{resp}(3,3,0)$ as follows.
\[
T_{req}(3,3,0) = (m_1 + d - 2) \ \tau + T_{MINF-REQ} = 10\tau
\]
\[
T_{resp}(3,3,0) = (m_2 + d - 2) \ \tau + T_{MINF-RESP} = 52\tau
\]
\begin{figure}[]
	\centerline{
		\mbox{\includegraphics[scale=0.7]{Images/tile-path}}
	}
	\caption{Traversed units during the request phase (red line) and reply phase (blue line)}
	\label{tile-path}
\end{figure}

Average values of $T_{req}$ and $T_{resp}$ for a set of tiles $S$ can also be meaningful. They can be derived in an analogous way, by using the average distance $d_{avg}$ between the tiles of $S$ and a specific $MINF$ unit in place of $d$. 
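The base-latency evaluation above can be sketched as follows. The constants come from the text; any distance other than $d = 6$ is purely illustrative.

```python
# Base-latency evaluation for the Tile64 (all times in clock cycles, tau).
T_MINF_REQ = 2    # constant MINF delay, request direction
T_MINF_RESP = 41  # constant MINF delay, reply direction
M1 = 4            # memory request size, in flits
M2 = 7            # sigma_L1 (4 flits) + header (3 flits)

def t_req(d):
    """Base latency of the request phase for a tile at distance d."""
    return (M1 + d - 2) + T_MINF_REQ

def t_resp(d):
    """Base latency of the reply phase: only the first M2 flits unblock P."""
    return (M2 + d - 2) + T_MINF_RESP

# Tile (3,3) with respect to MINF_0, i.e. d = 6:
print(t_req(6), t_resp(6))  # -> 10 52
```

For a set of tiles, calling the same functions with `d_avg` in place of `d` yields the average base latencies.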

\paragraph{Approximation} The previous evaluation hides a subtle approximation. Consider the two following transients.
\begin{itemize}
\item At the \textit{beginning} of the \textit{request} phase, \textit{before stalling}, $P$ exploits out-of-order execution to run instructions for a total of $X$ additional cycles. 
\item At the \textit{end} of the \textit{reply} phase, the first $\sigma_{L1}$ flits of the block loaded from $M$ have been received by $L2$. Before using the requested word, $P$ actually remains stalled for $Y$ additional cycles, waiting for the $\sigma_{L1}$ words to be transferred from $L2$ to $L1$. 
\end{itemize}
However, we have experimentally verified that $X \approx Y$, so the previous evaluation still represents a very good approximation. 

\subsection{Under-load latency}
\paragraph{Model of the memory system}
For the reasons explained in the previous section, the external memory macro-module $M$ \textit{and} its $MINF$ unit can be modeled as a \textit{load-dependent single-server queue}. We will indicate with $T_S(q)$ the average service time when $q$ requests are outstanding at the server. Since no technical information is available in the literature, we determined $T_S(q)$ through an extensive experimentation: we performed a large number of simulations, each subjecting the memory to a different workload. By sampling the queue size during each service, we obtained a large set of pairs $\langle q$, $T_S(q)\rangle$. Then, for each fixed $q$, we took the average of the sampled $T_S(q)$, leading to the values of Table~\ref{load-dependent-queue}. 
\begin{table}[h]
\begin{center}
\begin{tabular}{| c | c |}\hline
$q$ &  $T_S(q)$ ($\tau$) \\\hline\hline
1 & 32.41 \\\hline
2 & 24.49 \\\hline
3 & 20.61 \\\hline
4 & 16.88 \\\hline
5 & 15.43 \\\hline
6 & 15.15 \\\hline
7 & 14.26 \\\hline
$\geq 8$ & 14.00 \\\hline
\end{tabular}
\caption{Service times for the Tile64 memory system}
\label{load-dependent-queue}
\end{center}
\end{table}

\paragraph{Under-load memory access latency} By taking into account the load-dependent nature of the memory system, $R_Q$ can be determined by resorting to the PEPA model for load-dependent client-server systems of Section~\ref{pepa-load-dependent}. We have already found the architecture-dependent parameters needed to instantiate the model, that is $T_{req}$, $T_{resp}$ and $T_S(q)$. In the next section we will focus on a specific parallel application to exemplify the prediction of $R_Q$ and to validate it against experimental results.
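As a numerical cross-check, the load-dependent client-server system can also be solved directly. The sketch below is not the PEPA solution itself but a Markovian birth-death chain with the same parameters, under exponential assumptions; the function `predict_rq` and the example values of $T_{req}$ and $T_{resp}$ are ours.

```python
# Birth-death approximation of the load-dependent client-server model
# (exponential assumption). All times in clock cycles, tau.

def predict_rq(p, t_p, t_req, t_resp, t_s):
    """Predict the under-load memory access latency R_Q for p clients.

    t_p is the client think time between consecutive requests;
    t_s(q) is the memory service time with q outstanding requests.
    """
    # Unnormalised steady-state probabilities pi[q] of q outstanding
    # requests: birth rate (p - q)/t_p, death rate 1/t_s(q).
    pi = [1.0]
    for q in range(1, p + 1):
        pi.append(pi[-1] * (p - q + 1) / t_p * t_s(q))
    norm = sum(pi)
    pi = [x / norm for x in pi]
    # Throughput generated by the thinking clients.
    x = sum((p - q) * pi[q] for q in range(p + 1)) / t_p
    # Mean number of outstanding requests, then Little's law.
    n_bar = sum(q * pi[q] for q in range(p + 1))
    return t_req + n_bar / x + t_resp

# Service times of the table above; >= 8 outstanding requests -> 14 tau.
ts_table = {1: 32.41, 2: 24.49, 3: 20.61, 4: 16.88,
            5: 15.43, 6: 15.15, 7: 14.26}
t_s = lambda q: ts_table.get(q, 14.0)

# Example: p = 16 clients, d_avg = 4, so T_req = 8 tau, T_resp = 50 tau.
print(predict_rq(16, 1054, 8, 50, t_s))
```

The chain has at most $p$ outstanding requests, so the state space is finite and the stationary distribution is computed in closed form.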

\section{Performance evaluation of a parallel application}
A parallel application will be executed on the Tile64 Architecture Simulator (from now on TAS)~\cite{TILE-REF}. In the following, we will distinguish between the memory latency $R_{Q_{sim}}$ sampled with the TAS profiler and the latency $R_Q$ predicted by the Tile64 cost model. The parallel application will be executed for different values of the parallelism degree $N$. At the end of the test we will compare the behaviour of $R_{Q_{sim}}$ and $R_Q$.

\paragraph{Parallel application}
The application in question implements a farm paradigm. $N$ identical worker processes execute a classic sequential algorithm, namely the \textit{matrix-vector multiplication}. Each worker receives, from an emitter process, a matrix of size $M=256$KB ($65536$ elements of $4$ bytes each), then executes the algorithm and finally sends the resulting vector to a collector process. 

\paragraph{Sequential performance}
A parameter we need to derive is $T_P$, i.e. the average time interval between two consecutive accesses to the same memory macro-module originated by the same processing node. Clearly, $T_P$ is an algorithm-dependent parameter. To determine $T_P$ we use the following formula. 
\[
T_P = \left\lceil\frac{\textit{Execution time} + \textit{Stall time}}{\#\ \textit{L2 faults}}\right\rceil
\]
Table~\ref{matrix-vector-data} shows some performance data of the sequential matrix-vector multiplication algorithm, retrieved by means of TAS. Instantiating the previous formula with these data, we obtain
\[
T_P = 1054\tau
\]
\begin{table}[h]
\begin{center}
\begin{tabular}{| c | c || c | c |}\hline
Execution time (not stalled) & 3152529$\tau$ & L2 Read faults & 4149 \\\hline
Pipelining	stall & 837122$\tau$ & L2 Write faults & 34 \\\hline
$L1$-Data stall &  94767$\tau$ & L2 Inst faults & 13 \\\hline
$L2$ stall	& 337210$\tau$ & & \\\hline
\end{tabular}
\caption{Performance data for the sequential Matrix-Vector multiplication executed on the Tile64 architecture.}
\label{matrix-vector-data}
\end{center}
\end{table}
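Plugging the figures of Table~\ref{matrix-vector-data} into the formula, the computation of $T_P$ reduces to the following (the variable names are ours):

```python
import math

# Profiler data for the sequential matrix-vector multiplication (in tau).
execution_time = 3152529
stall_time = 837122 + 94767 + 337210   # pipeline + L1-Data + L2 stalls
l2_faults = 4149 + 34 + 13             # read + write + instruction faults

t_p = math.ceil((execution_time + stall_time) / l2_faults)
print(t_p)  # -> 1054
```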

\paragraph{On the value of $p$}
The other important point to discuss is the number $p$ of processors (out of $N$) that share a memory macro-module. We should find a low-$p$ mapping for \textit{this specific application}, that is, a smart strategy for allocating data structures (either shared or private) to memory macro-modules so as to minimize $p$~\cite{VAN}. Indeed, we know that performance problems, though related to network-distance effects, are mainly influenced by \textit{contention effects}. One possibility is to concentrate the data structures of a worker, both those of the program and those of the runtime support, \textit{on a single memory macro-module}. For instance, if the parallelism degree of the application were maximum, i.e. $N=64$, we would have $p=18$, since each of the four memory macro-modules would be shared by 16 workers plus the emitter and the collector. However, since we are interested in \textit{predicting} performance rather than in the performance itself, we are going to assume $p=N$, that is, all the processes will share the same macro-module. This is probably the worst strategy from the \textit{performance} viewpoint, but it allows us to analyse $R_{Q_{sim}}$ for more intensive workloads, since $p$ may be chosen at will in the range $[1, 64]$. Without loss of generality, the data structures of the $p$ workers will be allocated on the macro-module interfaced by $MINF_0$.

\paragraph{Assumption}
The parallel application is composed of three kinds of processes: emitter, collector and workers. This would require instantiating a client-server model with \textit{heterogeneous} clients, which is outside the scope of our analysis; systems of this kind have been treated in~\cite{ALBE}. To simplify the study, we decided to focus only on the worker processes. Hence:
\begin{itemize}
\item in the TAS experiments we assume an infinite input stream of matrices, with an interarrival time such that workers never stall waiting for data to process. The collector simply discards the received elements. Therefore, the workload on the memory system is generated by the worker processes. 
\item in the cost model we will have homogeneous clients (worker processes).
\end{itemize} 
This is not a restrictive approximation. Especially for high values of $N$, the impact of both the emitter and the collector becomes negligible with respect to that of the $N-2$ workers. This consideration has been verified experimentally: for $N \geq 16$ there is no meaningful change in the workload generated by $N$ workers with respect to $N-2$ workers plus the emitter and the collector.

\paragraph{Structure of the test}
The TAS provides users with several primitives to manage aspects such as the allocation of the program's data structures and the process-to-tile (PTT) mapping. This allows us to set important parameters at will, such as $p$ and the average distance between processes and a memory macro-module.
The application has been executed for $p \in \lbrace 4, 8, 16, 24, 32, 48, 64\rbrace$. Some PTT mappings and the corresponding average distance $d_{avg}$ between tiles and $MINF_0$ are illustrated in Figure~\ref{used-tiles}. 
\begin{figure}[h]    
    \centering
    \subfloat[$p$ = 4, $d_{avg}$ = 2.5]{\includegraphics[]{Images/PTT-4}}   
    \hspace*{20pt}  
    \subfloat[$p$ = 16, $d_{avg}$ = 4]{\includegraphics[]{Images/PTT-16}}   
    
    \centering
    \subfloat[$p$ = 32, $d_{avg}$ = 5]{\includegraphics[]{Images/PTT-32}}   
    \hspace*{20pt}  
    \subfloat[$p$ = 64, $d_{avg}$ = 7]{\includegraphics[]{Images/PTT-64}}   
    
    \caption{Process-to-tile mappings.}
    \label{used-tiles}
\end{figure}


\paragraph{Results and comments}
Table~\ref{tile-pepa-summary} summarizes the constants and parameters used to instantiate the PEPA model for a load-dependent client-server system.
\begin{table}[h]
\begin{center}
\begin{tabular}{| c || c |}\hline
$T_P$ & $1054\tau$\\\hline
$T_{req}$ & $(4+d_{avg})\tau$\\\hline
$T_{resp}$ & $(46+d_{avg})\tau$\\\hline
$T_S(i)$ & load-dependent, see Table~\ref{load-dependent-queue} \\\hline
$p$ & $\lbrace$4, 8, 16, 24, 32, 48, 64$\rbrace$\\\hline
\end{tabular}
\caption{Cost model constants and parameters.}
\label{tile-pepa-summary}
\end{center}
\end{table}

Figures~\ref{tile-resultss} and~\ref{tile-errors} compare the simulated and the predicted under-load memory access latency, indicated with $R_{Q_{sim}}$ and $R_Q$ respectively. Thanks to both the fast on-chip interconnection network and the efficient memory system, even for high values of $p$ the experienced $R_{Q_{sim}}$ does not show a dramatic increase with respect to the base latency. Clearly, we should interpret this result in light of both the assumptions we made and the application we are modeling. For instance, we notice the absence of meaningful communication patterns between processes, which could lead to a remarkable degradation of $R_{Q_{sim}}$. Another example is that, for certain algorithms, the out-of-order behaviour of the processing nodes may lead to a significantly lower $T_P$ (and consequently to a greater workload on the memory system) by issuing more than one memory access request before stalling. In general, we stress that even a small change in either the sequential algorithm or the parallel paradigm may drastically alter the overall performance. 

The Tile64 cost model, at least for this specific application, provides a very good approximation of $R_{Q_{sim}}$. Besides the asymptotic behaviour of the two curves, which is identical, the maximum absolute error of $R_Q$ does not exceed $10\tau$, corresponding to a percentage error lower than $10\%$. 
\begin{figure}[h]
	\centerline{
		\mbox{\includegraphics[]{Grafici-Tesi/TILERA/Rq}}
	}
	\caption{Under-load memory access latency experienced with the simulation ($R_{Q_{sim}}$) and predicted with the cost model ($R_Q$).}
	\label{tile-resultss}
\end{figure}
\begin{figure}[h]    
    \centering
    \subfloat[Absolute error]{\includegraphics[]{Grafici-Tesi/TILERA/AbsoluteError}}   
    
    \centering
    \subfloat[Relative error]{\includegraphics[]{Grafici-Tesi/TILERA/RelativeError}}   
    
    \caption{The Tile64 cost model estimation error with respect to the simulation for a matrix-vector multiplication task farm.}
    \label{tile-errors}
\end{figure}




