\documentclass{article}

\setlength{\textheight}{25.7cm}
\setlength{\textwidth}{16cm}
\setlength{\unitlength}{1mm}
\setlength{\topskip}{2.5truecm}
\topmargin 260mm \advance \topmargin -\textheight
\divide \topmargin by 2 \advance \topmargin -1in
\headheight 0pt \headsep 0pt \leftmargin 210mm \advance
\leftmargin -\textwidth
\divide \leftmargin by 2 \advance \leftmargin -1in
\oddsidemargin \leftmargin \evensidemargin \leftmargin
\parindent=0pt

\frenchspacing

\usepackage[english,dutch]{babel}
\selectlanguage{english}

\usepackage{listings}
\usepackage{graphicx}

\lstset{language=C++, showstringspaces=false, basicstyle=\small,
  numbers=left, numberstyle=\tiny, numberfirstline=false,
  stepnumber=1, tabsize=8,
  commentstyle=\ttfamily, identifierstyle=\ttfamily,
  stringstyle=\itshape}

\title{Computer Architecture - Assignment 3}
\author{Ricardo Meijer (0801496) \& Sander van Rijn (0935972)}

\begin{document}

\maketitle

\section{Influence of the assumption}

The assumption we worked with for this assignment is that the cache/memory is the bottleneck of the program. This assumption is not far-fetched, as it actually holds for many programs.\\
In terms of conflicts/misses, there is no difference between our assumption and any other case. No matter how many instructions are executed between memory calls, if the required data is not available in the registers/cache, it is a miss and a memory operation has to be performed.\\
Bandwidth is a slightly different story, since the number of instructions executed between memory calls does influence the bandwidth. However, the bandwidth can only be lower than under our assumption: if memory is already the bottleneck, no fewer instructions can be executed in between. Thus, the bandwidth measured under this assumption is an upper bound on the average over the entire program.\\

\section{Discussion of results}

Appendix A shows the results generated by our program for both the lisp and spic test files. The first thing to notice is that for each file and each set of Dinero settings, the number of conflicts is the same across all memory designs. This is expected, as the program and cache settings are the same for each run, resulting in the same number of requests to the memory, regardless of which memory is simulated.\\

The next most obvious observation is that for both test files, the number of conflicts is dramatically high and the bandwidth dramatically low when there is no cache and only a 1-word-deep write buffer. It is not difficult to see why the results are so bad for this configuration: every request goes straight to memory, making the memory an even bigger bottleneck than usual.\\

The results also show that bank memory has a clearly higher bandwidth than standard or DRAM memory. A likely explanation is that a request can be issued to each bank simultaneously, without having to switch in between as with DRAM memory. Although DRAM memory does have a short next-access time, this is countered by the high cost of switching pages. Depending on the cache options and page size used, DRAM can be slightly faster than standard memory, but it is often slower. This indicates that most requests require the DRAM memory to switch pages rather than staying on the same page. If requests could be ordered such that they stay on the same page more often, the bandwidth would be significantly higher.\\

Finally, it can be seen that 8-bank memory works better than 4-bank memory, and that DRAM with a page size of 1024 words beats the 64-word pages. For bank memory, this again shows that being able to process multiple requests simultaneously is a distinct advantage. With 8-bank memory, the chance of a request stalling all following requests is halved, since each residue modulo 4 splits into two residues modulo 8 (e.g. if $n \bmod 4 = 1$, then $n \bmod 8$ is either 1 or 5). For DRAM memory, a larger page size means that a request is less likely to force the memory to switch pages. It also helps that the next-access time for the larger page size is one cycle shorter, which matters when dealing with thousands of requests.\\

\section*{Appendix}

\subsection*{A: Results}

These are the results of our program for the given files:\\
\lstinputlisting{res.txt}\ \\

\subsection*{B: Code}

Shell script:\\
\lstinputlisting{calculate.sh}\ \\

Makefile:\\
\lstinputlisting{Makefile}\ \\

Read.cc:\\
\lstinputlisting{read.cc}\ \\

Control.h:\\
\lstinputlisting{control.h}\ \\

Control.cc:\\
\lstinputlisting{control.cc}\ \\

Std.cc:\\
\lstinputlisting{std.cc}\ \\

Bank.cc:\\
\lstinputlisting{bank.cc}\ \\

DRAM.cc:\\
\lstinputlisting{DRAM.cc}
\end{document}