\chapter{Queueing Theory Concepts}
We want to formalize a cost model based on Queueing Theory concepts. Thus, in this chapter we review and summarize important results regarding both simple and intermediate queueing systems; the reader may consult~\cite{KLEIN, INGTLC} for a deeper treatment of the concepts that are only reviewed here.

\section{Description and Characterization of a Queue}
\paragraph{Description of Queues}
A queueing system models the behaviour of a server $S$ where clients (often called jobs or client requests) arrive and ask for a service. In general, clients have to spend some time in a queue $Q$ waiting for $S$ to be ready to serve them.
\begin{figure}[h]
	\centerline{
		\mbox{\includegraphics[]{Images/queue}}
	}
	\caption{A queue}
	\label{queue}
\end{figure}
The scheme in Figure~\ref{queue} is a \textit{logical} one, not necessarily corresponding to the real structure of the system we are modelling. For instance, $Q$ might not physically exist, or it could even be distributed among the clients. Even in such cases, however, it often turns out to be easier to study the whole system as a single logical queue. This kind of approximation can drastically reduce the complexity of the analysis and makes it possible to obtain an approximate evaluation, which is nevertheless meaningful provided that the mathematical and stochastic assumptions are validated. We will use and explain this approach in the next sections.

Queue models are classified according to the following characteristics.
\begin{itemize}
\item The stochastic process $A$ that describes the arrivals of clients. In particular, we are interested in the probability distribution of the random variable $t_A$ generated by $A$: $t_A$ represents the \textit{inter-arrival time}, that is the time interval between two consecutive client arrivals. Its mean value is denoted by $T_A$, its standard deviation by $\sigma_A$, and the mean arrival rate by $\lambda = \frac{1}{T_A}$.
\item The stochastic process $B$ that describes the service of $S$. $B$ generates the random variable $t_S$, the \textit{service time} of $S$, that is the time interval between the beginnings of the executions of two consecutive requests. Its mean value is denoted by $T_S$, its standard deviation by $\sigma_S$, and the mean service rate by $\mu = \frac{1}{T_S}$.
\item The \textit{number of servers or channels} $r$ of $S$, that is the parallelism degree of $S$. In the following, except for some specific cases, we will assume $r = 1$, that is a sequential server.
\item The \textit{queue size} $d$, that is the number of positions available in $Q$ for storing requests. Notice that in computer systems this size is necessarily finite. Unfortunately, most of the results in Queueing Theory have been derived for infinite-length queues. However, the results provided for infinite queues approximate the finite case well enough, under assumptions that we will discuss case by case.
\item The \textit{population} $e$ of the system, which can be either infinite or finite.
\item The \textit{service discipline} $x$, that is the rule specifying which of the queued requests will be served next. We will use the classical FIFO discipline.
\end{itemize}
Based on this information, queues can be classified according to the standard Kendall notation (see~\cite{KLEIN} for more details). For instance, we will denote by $M/M/1$ a queue with a single server where both the arrival and the service processes are Poisson.

\paragraph{Inter-departures Process}
The stochastic process $C$ that represents the departures from the system (\textit{inter-departure process}) depends on the nature of the queue. For $A/B/1$ queues, denoting by $T_P$ the average inter-departure time, an evident result is that $T_P = \max(T_A, T_S)$.

A first interesting property is the following (see~\cite{ASE} for a simple proof):
\begin{thm}
\label{aggregate}
\textbf{Aggregate inter-arrival time.} If a queue $Q$ has multiple sources (i.e. multiple arrival flows), each with an average inter-departure time $T_{p_i}$, the total average inter-arrival time at $Q$ is given by:
\[
T_A = \frac{1}{\sum_{i=1}^{N}\frac{1}{T_{p_i}}}
\]
\end{thm}
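As a minimal numerical sketch of this theorem (the function name is ours, chosen for illustration), the aggregate arrival rate is simply the sum of the individual rates, since $\frac{1}{T_A} = \sum_{i=1}^{N} \frac{1}{T_{p_i}}$:

```python
def aggregate_interarrival_time(source_times):
    """Mean inter-arrival time at a queue fed by several sources,
    each with mean inter-departure time T_p_i: rates add up."""
    return 1.0 / sum(1.0 / t for t in source_times)

# Two sources emitting a request every 4s and every 6s on average:
# aggregate rate = 1/4 + 1/6 = 5/12, hence T_A = 12/5 = 2.4s
t_a = aggregate_interarrival_time([4.0, 6.0])  # 2.4
```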

\paragraph{Characterization of Queues}
A first average measure of the \textit{traffic intensity} at a queue is expressed through the \textit{utilization factor} $\rho$.
\[
\rho = \frac{\lambda}{\mu} = \frac{T_S}{T_A}
\]
For our purposes an extremely important situation is $\rho < 1$: in this case the system \textit{stabilizes}, and it becomes possible to determine the so-called \textit{steady-state} behaviour of the system.

Other metrics of interest to evaluate the performance of a queueing system are:
\begin{itemize}
\item the \textit{mean number of requests in the system}, $N_Q$: the average number of client requests in the system including the one being served;
\item the \textit{waiting time distribution}: the time spent by a request in the waiting queue. We are mainly interested in its mean value $W_Q$;
\item the \textit{response time distribution}: compared with the waiting time distribution, it also includes the time spent in the service phase. We will denote its mean value by $R_Q$. Notice that $R_Q = W_Q + L_S$, where $L_S$ is the average service latency.
\end{itemize}

A very general result that applies to many different kinds of scenarios (not just Queueing Theory) is Little's theorem.
\begin{thm}
\label{little}
\textbf{Little's law.} Given a \textit{stable} system ($\rho < 1$) where clients arrive with a rate $\lambda$ and the mean number of clients in the system $N_Q$ is finite, the average time spent by a client in the system $R_Q$ is equal to
\[
R_Q = \frac{N_Q}{\lambda}
\]
\end{thm}
The reasoning behind this theorem is intuitive, while the proof is quite involved. The interested reader may consult~\cite{KLEIN} for a deeper explanation.
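The intuition behind Little's law can be checked directly on a simulated sample path. The sketch below (a minimal illustration, with names of our own choosing) simulates a FIFO $M/M/1$ queue and measures both sides of the identity over a horizon that ends at the last departure, so that every counted arrival has also departed:

```python
import random

def mm1_little_check(lam=0.8, mu=1.0, n=10_000, seed=42):
    """Push n jobs through a FIFO M/M/1 queue and measure both
    sides of Little's law, N_Q = lambda * R_Q, on the sample path."""
    rng = random.Random(seed)
    arrival = 0.0
    prev_departure = 0.0
    total_response = 0.0
    for _ in range(n):
        arrival += rng.expovariate(lam)        # next arrival instant
        start = max(arrival, prev_departure)   # FIFO, single server
        prev_departure = start + rng.expovariate(mu)
        total_response += prev_departure - arrival  # sojourn of this job
    horizon = prev_departure                   # time of the last departure
    n_mean = total_response / horizon          # time-average jobs in system
    lam_hat = n / horizon                      # observed arrival rate
    r_mean = total_response / n                # observed mean response time
    return n_mean, lam_hat * r_mean

n_mean, little_rhs = mm1_little_check()
```

Over such a horizon the two sides coincide exactly (both equal the area under the number-in-system curve divided by the horizon), which is precisely the sample-path argument behind the theorem.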

\section{Notable Queues}
\label{specialqueues}
Queueing Theory is extensive and treats an incredibly large number of special queues (that is, queues with a specific configuration $A/B/r/d/e/x$), some of which are particularly complicated. In order to limit the complexity of deriving the architecture cost model, we will restrict ourselves to a minimal (yet meaningful) subset of these queues. Therefore, in this section we illustrate the main results for only two particular configurations: the $M/M/1$ and the $M/G/1$ queues.

\subsection{The $M/M/1$ Queue}
In an $M/M/1$ queue arrivals occur according to a Poisson process with parameter $\lambda$, and service times are exponentially distributed with rate $\mu$. The memoryless property of the exponential distribution, besides being simple to model, is very important in our context because it allows us to approximate many different meaningful scenarios. The service discipline is FIFO and the queue size is assumed to be infinite. It can be shown that the average number of requests in the system is equal to
\[
N_Q = \frac{\rho}{1-\rho}
\]
Applying Little's law we obtain:
\[
W_Q = \frac{\rho}{\mu (1-\rho)} 
\]
\[
R_Q = \frac{1}{\mu (1-\rho)}
\]
It can also be proved that, even if the queue has a finite size $k$, the previous formulas still give an acceptable approximation, provided that the probability that a request is rejected because the queue is full is negligible.
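The three formulas above are straightforward to evaluate. The following sketch (function name and parameter values are ours, for illustration only) computes the steady-state $M/M/1$ metrics; note that $N_Q = \lambda R_Q$, consistently with Little's law:

```python
def mm1_metrics(lam, mu):
    """Steady-state M/M/1 metrics for arrival rate lam < service rate mu."""
    rho = lam / mu
    assert rho < 1, "queue is unstable: no steady state"
    n_q = rho / (1 - rho)          # mean number of requests in the system
    w_q = rho / (mu * (1 - rho))   # mean waiting time
    r_q = 1 / (mu * (1 - rho))     # mean response time
    return n_q, w_q, r_q

# rho = 0.8: about 4 requests in the system on average,
# R_Q = 0.5 time units, of which 0.4 are spent waiting.
n_q, w_q, r_q = mm1_metrics(lam=8.0, mu=10.0)
```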

\subsection{The $M/G/1$ Queue}
Although very common, the hypothesis of exponentially distributed service times may not be applicable in some concrete cases of interest. For instance, there are architectures in which the memory subsystem takes a \textit{constant} amount of time to handle a processor request. In these cases we are interested in the deterministic distribution.

We introduce the $M/G/1$ queue, where the symbol $G$ stands for \textit{general} distribution. All assumptions and considerations made for the $M/M/1$ queue still hold, except for the distribution of the service times: indeed, with an $M/G/1$ queue we are able to model any service time distribution. For this queue we get the following fundamental results (derived from the so-called \textit{Pollaczek-Khinchine formula}):
\[
N_Q = \frac{\rho}{1-\rho}\ [1-\frac{\rho}{2}(1-\mu^2 \sigma_S^2)]
\]
Applying Little's law:
\[
R_Q = \frac{1}{\mu (1-\rho)}\ [1-\frac{\rho}{2}(1-\mu^2 \sigma_S^2)]
\]
A particular case of interest is the $M/D/1$ queue, where the service time distribution is \textit{deterministic}, that is its variance is zero. Setting $\sigma_S = 0$ in the previous formula, we get the expression of the average response time for an $M/D/1$ queue.
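The Pollaczek-Khinchine response time can be sketched in a few lines (function name ours). Setting $\sigma_S = \frac{1}{\mu}$ recovers the $M/M/1$ formula, while $\sigma_S = 0$ gives the $M/D/1$ case:

```python
def mg1_response_time(lam, mu, sigma_s):
    """Mean response time of an M/G/1 queue, Pollaczek-Khinchine formula:
    R_Q = 1/(mu(1-rho)) * [1 - rho/2 * (1 - mu^2 sigma_S^2)]."""
    rho = lam / mu
    assert rho < 1, "queue is unstable: no steady state"
    return (1 / (mu * (1 - rho))) * (1 - (rho / 2) * (1 - mu**2 * sigma_s**2))

lam, mu = 8.0, 10.0
r_mm1 = mg1_response_time(lam, mu, sigma_s=1 / mu)  # exponential: sigma_S = T_S
r_md1 = mg1_response_time(lam, mu, sigma_s=0.0)     # deterministic service
# r_mm1 is about 0.5, matching the M/M/1 formula;
# r_md1 is about 0.3: a deterministic server cuts the response time by 40%.
```

The comparison illustrates a classical fact: at equal rates, reducing the service time variance reduces queueing, and the deterministic server is the best case.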

\section{Networks of Queues}
\paragraph{Queueing Networks in general}

A queueing network is a system where a set of queues is interconnected in an arbitrary way. Figure~\ref{tandem} shows the simplest queueing network: two $M/M/1$ queues connected in sequence. The arrival process at the second queue is exactly the output process of the first; thus it is more correct to denote the second queue by $./M/1$, to express the fact that its arrival process depends on the rest of the network.

\begin{figure}[h]
	\centerline{
		\mbox{\includegraphics[scale=0.60]{Images/tandem}}
	}
	\caption{Two $./M/1$ queues in series.}
	\label{tandem}
\end{figure}

There exist different classes of queueing networks. A first distinction can be made between cyclic and acyclic networks. It is also useful to distinguish between open and closed networks. This classification is particularly useful because the literature offers several theorems showing that, for specific classes of networks, a so-called \textit{product-form} solution can be derived. Solving a queueing network in product form means that the performance of the whole system can be analytically derived in a compositional way, starting from the analysis of the single queues in isolation. The key point is that many different algorithms exist to evaluate the performance of product-form networks. This means that if we were able to model an architecture as a product-form queueing network, then we could apply such an algorithm to extract some parameters of interest, like the system waiting time, and use them to estimate the under-load memory access latency. Unfortunately, we will see that things are not so simple. In the following we describe the particularly meaningful class of closed queueing networks and present an important result known as the \textit{BCMP theorem}.

\paragraph{Closed Queueing Networks}
In a closed queueing network there are neither arrivals from nor departures to the outside: the population of the network is constant. Equivalently, for reasons that will become clear in the next section, we can think of these networks as systems where \textit{a new request is allowed to enter only when another request departs from the network}. Figure~\ref{closedtandem} shows the simplest closed queueing network.

\begin{figure}[h]
	\centerline{
		\mbox{\includegraphics[scale=0.60]{Images/closedtandem}}
	}
	\caption{A closed system: two $./M/1$ queues in series with cycle.}
	\label{closedtandem}
\end{figure}

For a closed queueing network it is useful to introduce the concept of \textit{class}: all clients belonging to a specific class share the same routing policy at a queue. This means that clients belonging to different classes may be routed to different queues once served at the same queue.

We end this overview by presenting one of the main results of the BCMP theorem~\cite{INGTLC}, which will be useful in the next chapter.
\begin{thm}
\label{BCMP}
\textbf{BCMP networks.} Consider a \textit{closed} queueing network in which clients can belong to different classes. Assume that all queues of the network have:
\begin{itemize}
\item FIFO service discipline;
\item exponential service times;
\end{itemize}
then for this kind of network it is possible to derive a product-form solution.
\end{thm}
This (part of the) theorem is a generalization of the \textit{Gordon-Newell} theorem~\cite{INGTLC}; the difference lies in the possibility of using classes of clients. However, notice that claiming that a product-form solution exists does not mean that it is also easy to determine: for instance, adding the client classification in general remarkably increases the complexity of the solution.
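To give a flavour of how a product-form solution is actually computed, the sketch below implements Buzen's convolution algorithm for the simplest case: a single-class closed network of FIFO exponential queues (the Gordon-Newell setting). The function name and example figures are ours; $x_i = v_i / \mu_i$ denotes the relative service demand of station $i$:

```python
def buzen_convolution(service_demands, jobs):
    """Normalization constants G(0..M) of a single-class closed
    product-form network (Gordon-Newell), via Buzen's algorithm.
    service_demands[i] = v_i / mu_i; jobs = population M.
    Cost is O(len(service_demands) * jobs)."""
    g = [1.0] + [0.0] * jobs
    for x in service_demands:           # fold stations in one at a time
        for m in range(1, jobs + 1):
            g[m] += x * g[m - 1]
    return g

# Two identical stations in a cycle (demand x = 1 each), M = 3 jobs:
g = buzen_convolution([1.0, 1.0], 3)    # G = [1, 2, 3, 4]
throughput_ratio = g[2] / g[3]          # X = mu * G(M-1)/G(M) = 0.75 * mu
```

For two equal stations the algorithm reproduces the known closed-form result $X = \mu \frac{M}{M+1}$, and the same $G$ vector also yields the per-station utilizations, $\rho_i = x_i \, G(M-1)/G(M)$.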