\documentclass[11pt]{article}
\usepackage[margin=0.8in]{geometry}                % See geometry.pdf to learn the layout options. There are lots.
\geometry{letterpaper}                   % ... or a4paper or a5paper or ... 
%\geometry{landscape}                % Activate for rotated page geometry
%\usepackage[parfill]{parskip}    % Activate to begin paragraphs with an empty line rather than an indent

\usepackage[colorlinks=true]{hyperref}
\usepackage{caption}
\usepackage{subcaption}
\usepackage{graphicx}
\usepackage{amssymb}
\usepackage{epstopdf}

\DeclareGraphicsRule{.tif}{png}{.png}{`convert #1 `dirname #1`/`basename #1 .tif`.png}

\title{Project 2 Tasks}
\author{Saurabh V. Pendse (ID: 001026185, Unity ID : svpendse)}
\date{\today}
%\date{}                                           % Activate to display a given date or no date

\begin{document}
\maketitle
\section{Task 1}
\begin{figure}[h]
        \centering
        \begin{subfigure}[b]{0.48\linewidth}
                \centering
                \includegraphics[width=\linewidth]{../task1/figure_task1_waiting_fcfs.eps}
                \caption{Average waiting time vs $\rho$ for FCFS}
                \label{fig:task1fcfs}
        \end{subfigure}
        \begin{subfigure}[b]{0.48\linewidth}
                \centering
                \includegraphics[width=\linewidth]{../task1/figure_task1_waiting_lcfs.eps}
                \caption{Average waiting time vs $\rho$ for LCFS}
                \label{fig:task1lcfs}
        \end{subfigure}
        
        \begin{subfigure}[b]{0.48\linewidth}
                \centering
                \includegraphics[width=\linewidth]{../task1/figure_task1_waiting_sjf.eps}
                \caption{Average waiting time vs $\rho$ for SJF}
                \label{fig:task1sjf}
        \end{subfigure}
        \begin{subfigure}[b]{0.48\linewidth}
                \centering
                \includegraphics[width=\linewidth]{../task1/figure_task1_waiting_prinp.eps}
                \caption{Average waiting time vs $\rho$ for Priority-NP}
                \label{fig:task1prinp}
        \end{subfigure}
\caption{Results for Task 1}
\label{fig:task1a}
\end{figure}

\begin{figure}[h]
	\begin{subfigure}[b]{0.48\linewidth}
                \centering
                \includegraphics[width=\linewidth]{../task1/figure_task1_waiting_prip.eps}
                \caption{Average waiting time vs $\rho$ for Priority-P}
                \label{fig:task1prip}
        \end{subfigure}
        \begin{subfigure}[b]{0.48\linewidth}
                \centering
                \includegraphics[width=\linewidth]{../task1/waiting_prinp_queues.eps}
                \caption{Queue-wise average waiting time vs $\rho$ (Priority-NP)}
                \label{fig:task1prinp2}
        \end{subfigure}

        \begin{subfigure}[b]{0.48\linewidth}
                \centering
                \includegraphics[width=\linewidth]{../task1/waiting_prip_queues.eps}
                \caption{Queue-wise average waiting time vs $\rho$ for Priority-P}
                \label{fig:task1prip2}
        \end{subfigure}
        \begin{subfigure}[b]{0.48\linewidth}
                \centering
                \includegraphics[width=\linewidth]{../task1/waiting_all.eps}
                \caption{Average waiting times (for each discipline) vs $\rho$}
                \label{fig:task1all}
        \end{subfigure}

\caption{Results for Task 1}
\label{fig:task1b}
\end{figure}

The plots are shown in Figures \ref{fig:task1a} and \ref{fig:task1b}. Figures \ref{fig:task1fcfs}, \ref{fig:task1lcfs}, \ref{fig:task1sjf}, \ref{fig:task1prinp} and \ref{fig:task1prip} depict the average waiting time vs $\rho$ for each individual service discipline.

Figures \ref{fig:task1prinp2} and \ref{fig:task1prip2} depict the queue-wise average waiting times for the Priority-NP and Priority-P disciplines respectively, whereas Figure \ref{fig:task1all} shows the average waiting times for all service disciplines.

It can be observed that the SJF service discipline consistently results in the lowest waiting times, while FCFS and LCFS exhibit near-identical average waiting time characteristics.

The Priority-NP discipline generally results in a higher average waiting time than SJF, FCFS and LCFS; however, for larger values of $\rho$ (0.85, 0.95) it results in a lower average waiting time than FCFS and LCFS. Thus, as $\rho$ approaches $1$, the FCFS and LCFS waiting times increase exponentially, whereas the Priority-NP waiting time increases at a slower (almost linear) rate.

The Priority-P discipline clearly results in the highest average waiting time. The bulk of this comes from the fact that, due to the pre-emptive nature of the discipline, customers in the lower-priority queues (especially queue 4) are almost never served. This is evident in Figure \ref{fig:task1prip2}, where the average waiting time for queue 4 increases exponentially with increasing arrival rates. The same trend is observed in the Priority-NP discipline, but the magnitude of the waiting time is much smaller than for Priority-P.

In conclusion, if the average waiting time is the optimization metric, then SJF is the best service discipline. FCFS, LCFS and Priority-NP perform worse, while Priority-P is the worst.
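The qualitative gap between FCFS and SJF can be reproduced with a small single-server simulation. The sketch below is a simplified stand-in for the actual simulator (hypothetical parameters $\lambda = 0.9$, $\mu = 1$, infinite queue capacity); it compares the average waiting times of the two disciplines on the same arrival trace:

```python
import heapq
import random

def avg_wait(discipline, jobs):
    """Non-preemptive single-server queue with infinite capacity.
    jobs: list of (arrival_time, service_time), sorted by arrival time.
    discipline: 'fcfs' (serve in arrival order) or 'sjf' (shortest job first)."""
    pending = []                  # heap of (priority_key, arrival, service)
    t = 0.0                       # time at which the server is next free
    total_wait, i, n = 0.0, 0, len(jobs)
    while i < n or pending:
        if not pending:
            t = max(t, jobs[i][0])            # server idle: jump to next arrival
        while i < n and jobs[i][0] <= t:      # admit everyone who arrived by t
            a, s = jobs[i]
            heapq.heappush(pending, (s if discipline == 'sjf' else a, a, s))
            i += 1
        _, a, s = heapq.heappop(pending)      # serve the head of the line
        total_wait += t - a
        t += s
    return total_wait / n

rng = random.Random(42)
lam, mu = 0.9, 1.0                # hypothetical rates, rho = 0.9
t, jobs = 0.0, []
for _ in range(20000):
    t += rng.expovariate(lam)
    jobs.append((t, rng.expovariate(mu)))

print(avg_wait('fcfs', jobs), avg_wait('sjf', jobs))
```

On a heavy-load trace like this, the SJF average is consistently below the FCFS average, mirroring the ordering of the curves observed in the experiments.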

\section{Task 2}

\begin{figure}[h]
        \centering
        \begin{subfigure}[b]{0.48\linewidth}
                \centering
                \includegraphics[width=\linewidth]{../task2/figure_task2_CLR_fcfs.eps}
                \caption{CLR vs $\rho$ for FCFS}
                \label{fig:task2clrfcfs}
        \end{subfigure}
        \begin{subfigure}[b]{0.48\linewidth}
                \centering
                \includegraphics[width=\linewidth]{../task2/figure_task2_CLR_lcfs.eps}
                \caption{CLR vs $\rho$ for LCFS}
                \label{fig:task2clrlcfs}
        \end{subfigure}
        
        \begin{subfigure}[b]{0.48\linewidth}
                \centering
                \includegraphics[width=\linewidth]{../task2/figure_task2_CLR_sjf.eps}
                \caption{CLR vs $\rho$ for SJF}
                \label{fig:task2clrsjf}
        \end{subfigure}
        \begin{subfigure}[b]{0.48\linewidth}
                \centering
                \includegraphics[width=\linewidth]{../task2/figure_task2_CLR_prinp.eps}
                \caption{CLR vs $\rho$ for Priority-NP}
                \label{fig:task2clrprinp}
        \end{subfigure}
\caption{Results for Task 2}
\label{fig:task2a}
\end{figure}

\begin{figure}[h]
	\begin{subfigure}[b]{0.48\linewidth}
                \centering
                \includegraphics[width=\linewidth]{../task2/figure_task2_CLR_prip.eps}
                \caption{CLR vs $\rho$ for Priority-P}
                \label{fig:task2clrprip}
        \end{subfigure}
        \begin{subfigure}[b]{0.48\linewidth}
                \centering
                \includegraphics[width=\linewidth]{../task2/CLR_queues_prioritynp.eps}
\caption{Queue-wise CLR vs $\rho$ for Priority-NP}
                \label{fig:task2clrprinp2}
        \end{subfigure}
        
        \begin{subfigure}[b]{0.48\linewidth}
                \centering
                \includegraphics[width=\linewidth]{../task2/CLR_queues_priorityp.eps}
\caption{Queue-wise CLR vs $\rho$ for Priority-P}
                \label{fig:task2clrprip2}
        \end{subfigure}       
        \begin{subfigure}[b]{0.48\linewidth}
                \centering
                \includegraphics[width=\linewidth]{../task2/CLR_all.eps}
\caption{CLR vs $\rho$ for all service disciplines}
                \label{fig:task2clrall}
        \end{subfigure}      
        \caption{Results for Task 2}
        \label{fig:task2b}
\end{figure}
\begin{figure}        
        \begin{subfigure}[b]{0.48\linewidth}
                \centering
                \includegraphics[width=\linewidth]{../task2/figure_task2_running_fcfs.eps}
\caption{Running time vs $\rho$ for FCFS}
                \label{fig:task2runfcfs}
        \end{subfigure}
        \begin{subfigure}[b]{0.48\linewidth}
                \centering
                \includegraphics[width=\linewidth]{../task2/figure_task2_running_lcfs.eps}
                \caption{Running time vs $\rho$ for LCFS}
                \label{fig:task2runlcfs}
        \end{subfigure}
               
	\begin{subfigure}[b]{0.48\linewidth}
                \centering
                \includegraphics[width=\linewidth]{../task2/figure_task2_running_sjf.eps}
                \caption{Running time vs $\rho$ for SJF}
                \label{fig:task2runsjf}
        \end{subfigure}
        \begin{subfigure}[b]{0.48\linewidth}
                \centering
                \includegraphics[width=\linewidth]{../task2/figure_task2_running_prinp.eps}
                \caption{Running time vs $\rho$ for Priority-NP}
                \label{fig:task2runprinp}
        \end{subfigure}
        
        \begin{subfigure}[b]{0.48\linewidth}
                \centering
                \includegraphics[width=\linewidth]{../task2/figure_task2_running_prip.eps}
                \caption{Running time vs $\rho$ for Priority-P}
                \label{fig:task2runprip}
        \end{subfigure}
        \begin{subfigure}[b]{0.48\linewidth}
                \centering
                \includegraphics[width=\linewidth]{../task2/running_all.eps}
                \caption{Running time vs $\rho$ for all disciplines}
                \label{fig:task2runall}
        \end{subfigure}

\caption{Results for Task 2}
\label{fig:task2c}
\end{figure}

The CLR plots are shown in Figures \ref{fig:task2a} and \ref{fig:task2b}. Figures \ref{fig:task2clrfcfs}, \ref{fig:task2clrlcfs}, \ref{fig:task2clrsjf}, \ref{fig:task2clrprinp} and \ref{fig:task2clrprip} show the overall CLR for each service discipline, while Figures \ref{fig:task2clrprinp2} and \ref{fig:task2clrprip2} depict the queue-wise CLR for the Priority-NP and Priority-P disciplines respectively.

From the individual plots, it can be seen that both FCFS and LCFS exhibit similar CLR characteristics. SJF results in the lowest CLR values (of the order of $10^{-4}$). This is due to the inherent bias of this discipline towards finishing the shortest-service-time customers first. This means that, on average, there are likely to be fewer customers in the queue (due to the preference given to the shortest-service-time customers), and hence the queue can accommodate more incoming customers. It is therefore less likely for a customer to be lost under this discipline than under the others.

The Priority-P and Priority-NP disciplines also exhibit similar CLR characteristics (in the shape of the curve), but their CLR is much higher than that of either FCFS or LCFS. This is because the higher-priority queues are served immediately, whereas the lower-priority queues are rarely served. Since customers are equally likely to enter any queue, the lower-priority queues eventually become full, and this dramatically increases the customer loss rate. This effect is evident in Figures \ref{fig:task2clrprinp2} and \ref{fig:task2clrprip2}: the high-priority queues have a negligible CLR, but the lower-priority queues have a significantly higher CLR, especially for larger values of $\rho$.

The effect is further magnified in the pre-emptive discipline, since even when a customer from a lower-priority queue is in service, it can be pre-empted by a customer from a higher-priority queue. If a new customer then arrives in the pre-empting customer's queue, it has to be served before the original customer can be serviced again; pre-emptions can also be nested.
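For a single FCFS queue, the loss behaviour can also be cross-checked analytically. The sketch below (hypothetical parameters $\rho = 0.8$, $K = 5$; the closed-form M/M/1/K blocking formula is the same one applied to the I/O queues in Task 5) compares a simulated loss rate against the analytical CLR:

```python
import math
import random

def mm1k_clr(rho, K):
    """Closed-form CLR (blocking probability) of an M/M/1/K FCFS queue."""
    return (1 - rho) * rho**K / (1 - rho**(K + 1))

def mm1k_clr_sim(lam, mu, K, n_arrivals, seed=1):
    """Estimate the CLR by discrete-event simulation: an arrival that
    finds K customers already in the system is lost."""
    rng = random.Random(seed)
    next_arr, depart = rng.expovariate(lam), math.inf
    in_system, lost = 0, 0
    for _ in range(n_arrivals):
        while depart <= next_arr:              # process departures first
            in_system -= 1
            depart = depart + rng.expovariate(mu) if in_system else math.inf
        if in_system >= K:
            lost += 1                          # queue full: customer lost
        else:
            in_system += 1
            if in_system == 1:                 # server was idle: start service
                depart = next_arr + rng.expovariate(mu)
        next_arr += rng.expovariate(lam)
    return lost / n_arrivals

print(mm1k_clr(0.8, 5), mm1k_clr_sim(0.8, 1.0, 5, 200_000))
```

With a long enough run, the two values agree closely (by PASTA, the fraction of blocked arrivals equals the stationary probability of a full system), which is the same consistency check underlying the empirical CLR curves.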

The running time plots (each point represents the time for 30 runs, \textit{as mentioned on the message board}) are shown in Figures \ref{fig:task2runfcfs}, \ref{fig:task2runlcfs}, \ref{fig:task2runsjf}, \ref{fig:task2runprinp} and \ref{fig:task2runprip}, one for each service discipline; Figure \ref{fig:task2runall} shows the running times for all service disciplines in a single plot. We observe that the running time increases with $\rho$ for all service disciplines. This can be explained by the fact that as $\rho$ increases, the inter-arrival times decrease, thereby increasing the frequency of arrivals. This leads to an increased average occupancy of the queue, with more customers being lost because the queue is full. This in turn results in a longer simulation running time in order to serve $C = 100{,}000$ customers (due to the greater number of losses).

\section{Task 3}

\begin{figure}[h]
	\begin{subfigure}[b]{0.48\linewidth}
                \centering
                \includegraphics[width=\linewidth]{../task3/clr.eps}
                \caption{CLR vs $\rho$ for Priority-P}
                \label{fig:task3clr}
        \end{subfigure}
        \begin{subfigure}[b]{0.48\linewidth}
                \centering
                \includegraphics[width=\linewidth]{../task3/waiting.eps}
                \caption{Average waiting time for each queue vs $\rho$.}
                %The confidence interval for the I/O queue 3 are extremely small, hence might not be readily visible.}
                \label{fig:task1waiting}
        \end{subfigure}

\caption{Results for Task 3}
\label{fig:task3}
\end{figure}
\begin{enumerate}
\item The results are shown in Figure \ref{fig:task3}. The CLR for the CPU queue shoots up at higher values of $\rho$ because of an increased effective arrival rate, due both to external arrivals and to customers re-entering from each of the three I/O queues. The approximate arrival rate is greater than $1.3\lambda$; hence for $\lambda > \frac{1}{1.3} \approx 0.77$, the CLR increases significantly. This effect is evident for the data points corresponding to $\rho = 0.85$ and $\rho = 0.95$. The increased CPU CLR also results in an increased overall CLR. On the other hand, the CLR for the I/O queues is $0$ for all values of $\rho$, since each I/O queue has a capacity of $30$ customers while its arrival rate is only $0.1\mu = 0.1$. This is effectively equivalent to each I/O queue having infinite capacity, which explains the ``no losses'' observed.
\item The waiting time follows a similar trend for the CPU, the I/O queues and the overall system. As $\rho$ increases, there is increased queueing at the CPU, which causes the CPU waiting time to increase exponentially up to about $18$ seconds; the overall waiting time behaves similarly. Queueing is much less pronounced at the I/O queues due to their significantly lower arrival rates. Hence, the waiting time at each I/O queue is much smaller than that at the CPU queue, and flattens out at about $1$ second.
\end{enumerate}
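The $1.3\lambda$ figure above can be tightened with a simple flow-balance sketch: in steady state (assuming, hypothetically, negligible losses), the total CPU arrival rate $\lambda_{CPU}$ satisfies

```latex
\begin{displaymath}
\lambda_{CPU} = \lambda + 3 \cdot (0.1\,\lambda_{CPU})
\quad\Longrightarrow\quad
\lambda_{CPU} = \frac{\lambda}{0.7} \approx 1.43\lambda,
\end{displaymath}
```

which is indeed greater than $1.3\lambda$ and places the saturation point near $\lambda = 0.7$, consistent with the sharp CLR increase observed at $\rho = 0.85$ and $\rho = 0.95$.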
\newpage
\section{Task 4}

\begin{table}[h]
\centering
\begin{tabular}{|c|c|c|c|c|c|}
\hline
$\rho$ & I/O 1 queue size & I/O 2 queue size & I/O 3 queue size & Max. observed CLR & Min. $K_{IO}$ \\
\hline
0.1 & 3 & 3 & 3 & $1.645 \times 10^{-5}$ & 3\\
\hline
0.2 & 4 & 4 & 4 & $9.377 \times 10^{-6}$ & 4\\
\hline
0.3 & 5 & 5 & 5 & $1.656 \times 10^{-5}$ & 5\\
\hline
0.4 & 5 & 6 & 5 & $3.227 \times 10^{-5}$ & 6\\
\hline
0.5 & 6 & 5 & 5 & $4.666 \times 10^{-5}$ & 6\\
\hline
0.6 & 5 & 6 & 6 & $6.315 \times 10^{-5}$ & 6\\
\hline
\end{tabular}
\caption{Minimum queue size for each I/O queue, the maximum observed CLR, and the minimum value of the parameter $K_{IO}$ for which the loss rate of each queue is below $0.01\%$.}
\end{table}
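As a rough analytical cross-check of the table, the minimum $K_{IO}$ can also be computed from the M/M/1/K CLR formula. The sketch below rests on simplifying assumptions: the per-queue I/O arrival rate is approximated as $0.1\lambda/0.7$ (via the flow-balance argument used in Task 5) with $\mu_{IO} = 0.5$, so that $\rho_{IO} = \rho/3.5$.

```python
def min_kio(rho, target=1e-4):
    """Smallest K such that the M/M/1/K CLR of one I/O queue is below target.
    Assumes (hypothetically) lambda_IO = 0.1 * (rho / 0.7) and mu_IO = 0.5,
    so that rho_IO = rho / 3.5."""
    rho_io = rho / 3.5
    K = 1
    while (1 - rho_io) * rho_io**K / (1 - rho_io**(K + 1)) >= target:
        K += 1
    return K

print([min_kio(r / 10) for r in range(1, 7)])
```

This yields minimum sizes that agree with the empirically obtained values in the table to within one unit; the small differences are plausibly due to simulation variance and transient bursts that the steady-state formula does not capture.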

\section{Task 5}

\begin{figure}
	\begin{subfigure}[b]{0.48\linewidth}
                \centering
                \includegraphics[width=\linewidth]{../task5/cpu_visits.eps}
\caption{Average number of visits to the CPU queue vs $\rho$}
                \label{fig:task5a}
        \end{subfigure}
        \begin{subfigure}[b]{0.48\linewidth}
                \centering
                \includegraphics[width=\linewidth]{../task5/io_visits.eps}
\caption{Average number of visits to each I/O queue vs $\rho$}
                \label{fig:task5b}
        \end{subfigure}
        
         \begin{subfigure}[b]{0.48\linewidth}
                \centering
                \includegraphics[width=\linewidth]{../task5/all_visits.eps}
\caption{Average number of visits to each queue vs $\rho$}
                \label{fig:task5c}
        \end{subfigure}

        \caption{Results for Task 5}
	\label{fig:task5}
\end{figure}

%\textit{Note : Even though confidence intervals are not required for the plots, they are shown in order to demonstrate the variance observed for the runs corresponding to every value of $\rho$.}
%\\ \\
Let us theoretically derive the average number of visits to the CPU and I/O queues:
\begin{enumerate}
\item Let us first derive the average number of times a customer visits the CPU queue. If a customer enters the web server, it must visit the CPU queue at least once. Upon exiting the CPU queue, the customer leaves with probability $0.7$ and stays in the system with probability $0.3$ (entering one of the I/O queues). Because of the fixed feedback structure of the system, i.e.\ a customer in any of the I/O queues must re-enter the CPU queue, the probability of a third CPU visit is $0.3^{2}$, and so on. Let $\overline{V_{CPU}}$ denote the average number of times a customer visits the CPU queue.
\begin{eqnarray}
\label{eq:task51} \therefore \overline{V_{CPU}} &=& 1 + 1(0.3) + 1(0.3)^{2} + \ldots  \\
&=& \sum_{i=0}^{\infty} (0.3)^{i} \\
&=& \frac{1}{1 - 0.3} \\
&=& \frac{1}{0.7} \\
&\approx& 1.4286
\end{eqnarray}

Note that this result is derived assuming an infinite queue capacity at the CPU, in which case it is independent of the parameter $\rho$ and is constant. However, it serves as a good approximation of the empirical results obtained with the finite capacity $K_{CPU} = 50$: the empirical values of $\overline{V_{CPU}}$ lie in the tight range $[1.427, 1.4305]$ across values of $\rho$, as shown along with the confidence intervals in Figure \ref{fig:task5a}. The residual variation can be attributed to the inherent randomness of the web server simulation. In some runs at higher $\rho$, losses in the CPU queue increase, so re-entering customers are more likely to be lost upon return and the average number of visits dips slightly; in other runs, newly arriving customers are lost more often than re-entering ones, pushing the average slightly up. This explains the variance across values of $\rho$.

%This can be seen by the relatively widespread confidence intervals for each value of $\rho$.

%This can explained by the fact that as $\rho$ increases, the losses in the CPU queue increase, and hence the number of re-entering customers tends to decrease, since there is a greater probability of them being lost while entering back into the CPU queue. Thus the average number of visits goes slightly down as $\rho$ increases. However, the variance in the number of CPU observed was relatively large for each batch of $30$ runs for every value of $\rho$. This is probably due to the fact the in some instances the reentering customers might not be lost, instead, the new incoming customers might have been lost in which case the $\overline{V_{CPU}}$ is a bit higher.

\item For our experiments, we choose $K_{IO} = 30$. Given the CPU service rate $\mu = 1$, the arrival rate to any I/O queue is $\lambda_{IO} = 0.1\mu = 0.1$. With the I/O service rate $\mu_{IO} = 0.5\mu = 0.5$, we get $\rho_{IO} = 0.2$. The I/O queues follow the FCFS service discipline, hence the CLR is given by:

\begin{displaymath}
CLR = \frac{(1-\rho_{IO})\,\rho_{IO}^{K_{IO}}}{1 - \rho_{IO}^{K_{IO} + 1}}
\end{displaymath}

For the given parameter values, $CLR = 8.59 \times 10^{-22}$. Hence, this is effectively equivalent to the infinite-queue-capacity scenario.

Using the result of equation \ref{eq:task51}, each time a customer arrives at the CPU queue, the probability of it subsequently visiting a particular I/O queue is $0.1$. Thus, the average number of visits to each I/O queue should be $10\%$ of the visits to the CPU queue, i.e.\ $\overline{V_{IO}} = 0.1 \cdot \overline{V_{CPU}} \approx 0.1429$. This is consistent with the empirically observed values, which lie in the range $[0.1415, 0.1445]$ (the spread again being attributed to the inherent randomness of the simulation) and show no clear increasing or decreasing trend with $\rho$. %This is shown along with the confidence intervals in the Figure \ref{fig:task5b}. 

Figure \ref{fig:task5c} shows all the visit counts on the same plot. We observe approximately constant behavior of the average number of visits (for each queue) vs $\rho$.
\end{enumerate}
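The geometric-series arguments above are easy to sanity-check with a short Monte Carlo routing sketch. This is a stand-in for the full simulator, not the simulator itself: it follows only the $0.7/0.1/0.1/0.1$ routing probabilities and ignores queueing entirely (which is valid here because visit counts depend only on the routing):

```python
import random

def visit_counts(n_customers=200_000, seed=7):
    """Count CPU and per-I/O-queue visits per customer under the
    0.7 exit / 0.1 + 0.1 + 0.1 feedback routing rule."""
    rng = random.Random(seed)
    cpu, io = 0, [0, 0, 0]
    for _ in range(n_customers):
        while True:
            cpu += 1
            u = rng.random()
            if u < 0.7:                               # customer leaves the server
                break
            io[min(2, int((u - 0.7) / 0.1))] += 1     # route to one of 3 I/O queues
    return cpu / n_customers, [v / n_customers for v in io]

v_cpu, v_io = visit_counts()
print(v_cpu, v_io)   # roughly 1.4286 and roughly 0.1429 per I/O queue
```

The estimates land close to the analytical values $\overline{V_{CPU}} = 1/0.7$ and $\overline{V_{IO}} = 0.1/0.7$, matching the flat trends seen in Figure \ref{fig:task5}.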
%\item Now, let us derive the average number of times a customer visits each I/O queue. Note that since the arrival probabilities for each of the three I/O queues are equal, the average number should be the same for all the three I/O queues. In other words, this derivation is representative of all the three I/O queues. If the customer enters the system, the has to visit the CPU queue. However, he can visit a I/O queue with a probability of $0.1$. He then again goes back to the CPU and can again visit the same I/O queue with a probability of $0.1$. Thus, the expression in this case is identical to \ref{eq:task51}, but the first term in the geometric series is $0.1$, instead of $1$. Let $\overline{V_{IO}}$ denote the average number of times a customer visits the I/O queue.
%\begin{eqnarray}
%\label{eq:task51} \therefore \overline{V_{IO}} &=& 1(0.1) + 1(0.1)^{2} + 1(0.1)^{3} + \ldots  \\
%&=& \sum_{i=1}^{\infty} (0.1)^{i} \\
%&=& \frac{0.1}{1 - 0.1} \\
%&=& \frac{1}{9} \\
%&=& 0.1111
%\end{eqnarray}
%\end{enumerate}
%Thus, even if customers arrive at a higher rate in the system for increasing $\rho$, the average number of times a customer visits each queue remains constant. 
%
%The values in the Figure \ref{fig:task5} also validate the 70-10-10-10 rule. The average number of visits in each of the I/O queues i.e. $0.1428$ is approximately $10\%$ of the average number of visits to the CPU queue i.e. $1.432$.


\end{document}  