\section{Results}
\subsection{Speed}
Emulating only the CPU execution, speeds of approximately 10 MHz are achievable 
without any major optimizations. The CPU was tested using a small custom program 
written in assembly code. The program loaded all registers with \textit{0xFF} 
and contained three nested loops decrementing these registers, for a total of 
approximately 83 million cycles. 

At that point, a program was considered finished when it encountered a 
\textit{no operation} (NOP) instruction, which in this test case was placed after 
the loops to terminate the execution. The 10 MHz result was obtained on an 
800 MHz dual-core laptop running GNU/Linux, with the code compiled with 
\textit{-O2} optimizations. The compilers used were GHC 6.10.2 and 6.8.2. Since 
the program was written in a strict manner at that point, the GHC 6.10 bug 
(see below) did not manifest itself. \\

Testing the PPU was difficult, as it required a fully working graphics core 
running in tandem with the CPU core, all synchronized through the 
\textit{Enterprise Pulling} component. Even then, the results had to be compared 
visually with the expected ones. Together with printouts and tracing of the 
program, these tests proved good enough. Being able to print the current 
VRAM address, while seeing how colours and tiles appeared on the screen, 
was a great aid in correcting PPU bugs. \\

Due to our solution of having separate components glued together with 
\textit{Enterprise Pulling}, lazy evaluation was a must. This, together with a lack 
of strictness annotations due to time constraints, resulted in a slowdown of 
roughly a factor of five compared to a strict CPU. We managed to minimize this
factor by running only the communication lazily, keeping the CPU and PPU strict.
\cite{ST} \\
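
As a minimal sketch of what such strictness annotations look like (a hypothetical 
accumulator loop, not code from the emulator), a bang pattern forces each 
intermediate value instead of building a chain of thunks:
\begin{code}
{-# LANGUAGE BangPatterns #-}

-- The bang pattern on acc forces the running sum at every
-- step, so no chain of unevaluated thunks is built up.
sumStrict :: [Int] -> Int
sumStrict = go 0
  where
    go !acc []     = acc
    go !acc (x:xs) = go (acc + x) xs
\end{code}
Without the bang, \textit{go} would accumulate a nested thunk that is only 
forced at the very end, costing both time and space.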

The original NES has a framerate of 60 \textit{frames per second} (FPS), whereas our 
implementation reaches around 10 FPS. This does not mean that lazy programs 
inherently execute slower than strict ones, but rather that lazy evaluation 
requires more annotations and careful profiling to achieve the same speed. We have 
profiled the code to some extent, but we have not had time to analyze the results 
thoroughly enough to make them really useful, so optimizations have not been a 
priority. \cite{RWH}

\subsection{Correctness}
The CPU has been very carefully implemented to imitate the original behaviour of 
the 2A03. This has been straightforward, as the 2A03 is well documented. Not only 
has the expected behaviour been implemented, but also various bugs that exist in 
the hardware. For example, when executing the \textit{push status to stack} (PHP) 
instruction, the break flag is set. \cite{nesbugs}
With the PPU, on the other hand, one can only compare the graphics output visually,
making it hard to test whether the result is the expected one.
We have used both the real hardware and other emulators for this comparison, with satisfactory results.

\section{Delimitations \& Choices}
When working on a project with a very limited time frame, there are many things 
that need to be taken into account, and many things one may not have time to 
implement. We had to decide both what to include and exclude in the emulator, 
and most importantly how to do it. \\

Very early in the project we realized that the APU of the NES is not only poorly 
documented, but also a completely analogue device. This led to the decision not 
to include it at all, unless we were essentially finished with the more 
important parts and had time to spare. Another decision made early in the 
project was to code for simplicity and readability rather than for speed.

\subsection{Memory Mappers}
We have chosen not to implement memory mappers. There are many different mappers, 
and implementing them would have been a very time-consuming ordeal that would 
have taken focus away from the more important parts of the project. Adding mappers is 
discussed in the future work section.

\subsection{Functional vs Monadic}
Purely functional programs have the benefit of not having any side effects 
whatsoever. A CPU, however, is highly stateful, and passing a state as an explicit 
argument to each function would be tiresome. Therefore, the program is written using 
monads that can hold an implicit state; explicit argument passing was discarded 
early in the planning phase. In addition, monads can be layered to achieve more 
functionality. \cite{monads}
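
To illustrate the difference (with a deliberately tiny, hypothetical register 
record rather than the emulator's real one), an instruction written against an 
implicit state monad avoids threading the state by hand:
\begin{code}
import Control.Monad.State

-- A tiny stand-in for the real CPU state.
data Regs = Regs { regA :: Int, regX :: Int }

-- Decrement the X register; the state is passed implicitly.
dex :: State Regs ()
dex = modify (\r -> r { regX = regX r - 1 })
\end{code}
The purely functional alternative, \textit{dex :: Regs -> Regs}, would force 
every caller to thread the record explicitly through each instruction.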

\subsection{CPU}
The CPU has been one of the more central parts of the project, which naturally led 
to many insights during development. We had to change several things in the 
implementation of the CPU upon realizing that we needed a specific feature, for 
example faster mutable arrays. \cite{arrays}
The main changes in the CPU have concerned data structures. We have gone through 
several ``phases'' in modelling the CPU: 

\subsubsection{State Monad}
A state monad was used for the CPU, holding a record with the memory and internal 
state such as the various registers.
At this point the memory was a purely functional array, \textit{DiffUArray},
residing inside the state record. All instructions operated more or less directly 
on the state. After a while we realized it would be much better to have a shell 
of library functions operating on the CPU, so that the underlying implementation 
could be changed without affecting the instruction set.
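
A sketch of such a shell (function names hypothetical): instructions only call 
accessor functions, so the representation behind them can be swapped without 
touching the instruction set:
\begin{code}
import Control.Monad.State

newtype CPUEnv = CPUEnv { envA :: Int }
type CPU a = State CPUEnv a

-- The shell: the only functions that know the representation.
getA :: CPU Int
getA = gets envA

setA :: Int -> CPU ()
setA v = modify (\e -> e { envA = v })

-- An instruction written purely against the shell.
lda :: Int -> CPU ()
lda = setA
\end{code}
When the representation later changed from a state monad to the ST monad, only 
the shell functions had to be rewritten.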

\subsubsection{ST Monad}
Switching to the ST Monad allowed us to use \textit{STUArray}, as \textit{DiffUArray} proved to be 
all too slow for our needs. When using mutable arrays, we had to change the 
implementation of the CPU type from 
\begin{code}
State CPU a 
\end{code}
to
\begin{code}
ReaderT (STRef s CPUEnv) (ST s) a
\end{code}
Adding the \textit{reader monad transformer} enabled us to simplify the code and 
increase readability by keeping the state implicit.
This implementation became the final one, with only minor changes.
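
A minimal sketch of this layering (the record here holds only one register for 
illustration; the real \textit{CPUEnv} also contains the memory arrays):
\begin{code}
{-# LANGUAGE RankNTypes #-}
import Control.Monad.Reader
import Control.Monad.ST
import Data.STRef

data CPUEnv = CPUEnv { envA :: Int }

type CPU s a = ReaderT (STRef s CPUEnv) (ST s) a

-- The reader layer keeps the STRef implicit.
getA :: CPU s Int
getA = ask >>= \ref -> lift (envA <$> readSTRef ref)

setA :: Int -> CPU s ()
setA v = ask >>= \ref -> lift (modifySTRef ref (\e -> e { envA = v }))

runCPU :: (forall s. CPU s a) -> a
runCPU m = runST (newSTRef (CPUEnv 0) >>= runReaderT m)
\end{code}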

\subsubsection{Memory}
The implementation of the memory has also changed several times during the project. 
At first, while working with the state monad, the memory was simply a pure 
array from \textit{0x0000} to \textit{0xFFFF}. Since this array proved to be very 
memory consuming, being purely functional with all its different versions residing 
in memory, we switched to the mutable ST array library. At the same time, 
we decided it would be logical to divide the memory into several 
smaller arrays, partly because this would make mirroring a lot easier, but also 
because the memory in the real NES is divided into logical sections (fig \ref{cpumap}).
In the working implementation there are three parts of the memory: 
\textit{lowmem} (0x0-0x800), \textit{ppumem} (0x2000-0x2007) and \textit{uppmem} (0x4000-0xFFFF). 
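
The dispatch on an address could look roughly like the following sketch (the 
mirroring rules shown here are schematic, not the exact ones the emulator uses):
\begin{code}
data Bank = Low Int | Ppu Int | Upp Int
  deriving (Eq, Show)

-- Map a CPU address to a bank and an offset within it.
-- lowmem is mirrored every 0x800 bytes below 0x2000, and the
-- eight PPU registers are mirrored up to 0x4000.
decode :: Int -> Bank
decode addr
  | addr < 0x2000 = Low (addr `mod` 0x800)
  | addr < 0x4000 = Ppu (addr `mod` 8)
  | otherwise     = Upp (addr - 0x4000)
\end{code}
With the memory kept in one flat array, every read and write would instead have 
to apply these mirroring rules to absolute addresses.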

\subsection{PPU}
The PPU component did not go through as many changes as the CPU, since we had 
learned a lot about how to proceed from modelling the CPU. From the start we 
used the same memory layout with mirroring, and implemented the whole PPU 
environment in a fashion similar to the CPU. Furthermore, we based the graphics on 
per-scanline rendering instead of per-pixel rendering, due to the simpler approach 
this entailed, as explained in the PPU implementation section. Lastly, we did not 
have time to finish the sprite rendering, which is thus left as future work.

\subsection{Lazy vs. Strict}
Our implementation relies on Haskell being lazy, because the two communicating components, the CPU and the PPU, 
are constantly consuming each other's lists while producing more data. 
Writing this in a strict manner would not have been possible, since one would 
have to compute the whole list before using it, and computing an infinite list 
is impossible. In our testing, running things strictly was generally faster because computational thunks are minimized, 
whereas lazy computations demand more profiling and optimization to achieve the same speeds.\cite{RWH}
When we worked solely with the CPU, using a strict monad was not 
a problem, since there was no need for synchronization or communication.
However, when producing lists in a lazy manner such as:
\begin{code}
cpu = do
    opCode <- fetch
    x      <- execute opCode
    ~xs    <- cpu
    return (x:xs)
\end{code}
there is no way to do this strictly, since a strict version of the function 
would try to evaluate xs fully and would never terminate.

\subsection{Communication}
We had several choices for handling the communication between the CPU and the PPU. 
Besides running both components in the same loop, three different techniques 
were feasible: \textit{STM}\cite{stm}, \textit{MVars}\cite{mvars} and \textit{Enterprise Pulling}.
Running both units together in the same loop (\textit{unified looping}) was considered first, but we found it
unsatisfactory as we wanted to avoid an inelegant monolithic approach. \\

STM is an interesting technique because it allows changes to be made as transactions, 
thus avoiding the readers/writers problem and similar concurrency-related problems. 
However, a major drawback of STM is that all actions have to be performed in the 
IO monad, which is one of the things we wanted to avoid as much as possible.\cite{RWH7}
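
The issue is visible in the types: \textit{atomically} has type 
\textit{STM a -> IO a}, so every transaction must be run from IO. A hypothetical 
channel between the components might look like:
\begin{code}
import Control.Concurrent.STM

-- Push a value onto a shared list as one transaction.
-- Note the IO in the result types: there is no way around it.
push :: TVar [Int] -> Int -> IO ()
push chan x = atomically (modifyTVar' chan (x:))

-- Drain the shared list atomically.
pull :: TVar [Int] -> IO [Int]
pull chan = atomically $ do
  xs <- readTVar chan
  writeTVar chan []
  return xs
\end{code}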

Another possible way to communicate would be to use MVars, which are part of the 
concurrency library in GHC. This was of course considered, but since MVars are the 
more low-level way of doing concurrency, with locks and semaphores, they were 
directly disregarded in favour of STM.

The reason we chose the Enterprise Pulling technique over the others is that 
it is completely free from side effects, as it relies only on lazy lists being 
consumed and produced. The \textit{unified looping} style would of course also be free 
from side effects, but where Enterprise Pulling is elegant, unified 
looping looks awful. 
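
A toy version of the scheme shows why no side effects are needed: each component 
is just a lazy list defined in terms of the other's output (the arithmetic here 
stands in for real emulation steps):
\begin{code}
-- Two mutually recursive producers; laziness lets each one
-- consume the other's list while it is still being built.
cpuOut :: [Int]
cpuOut = 0 : map (+ 1) ppuOut   -- react to the PPU's output

ppuOut :: [Int]
ppuOut = map (* 2) cpuOut       -- react to the CPU's output
\end{code}
Evaluating \textit{take 4 cpuOut} forces the two lists in lock-step, which is 
exactly the synchronization behaviour the emulator relies on.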

\section{GHC 6.10}
When we ran the CPU code with the strict ST monad, all worked well. However, when 
changing to the lazy ST monad, which was needed to make the communication with the 
PPU possible, we encountered a bug in GHC 6.10. As it turns out, lazy ST is broken 
in that version of GHC, making recursive loops such as the one needed by our main 
loop yield $\ll$loop$\gg$ as output instead of the intended result. Normally, 
$\ll$loop$\gg$ is output when the next thunk of data refers to the current one, 
thus entering an infinite loop without progress. In our program, however, every 
loop modifies the current state, so the current thunk can never refer to the 
previous one. When this was first encountered, we thought some part of our code 
was erroneous, and a lot of time was spent on debugging. Finally we posted on the 
haskell-cafe mailing list, where the code was found to be correct, and a bug was 
filed. \cite{ghcbug}

\section{Related Work}
    \subsection{BeaNES \cite{emuBeanes}}
        A NES emulator written in Java. It does not seem to be under development 
        any more; only an early alpha has been released. 
        The implementation is straightforward overall, and the code is easy to read.
    \subsection{Nintendulator \cite{emuNinlator}}
        A NES emulator written in C.
        The emulation is precise, but the code is not readable because of heavy 
        optimizations and a pixel-based PPU.
    \subsection{OmegaGB \cite{emuOmega}}
        A GameBoy emulator written in Haskell.
        It has a very messy structure and unmaintainable code, 
        and uses a state monad.
    \subsection{Coroutines \cite{coroutines}}
        Coroutines are a concept that can be used much like Enterprise Pulling, 
        but they are mostly employed in an imperative setting. Coroutines are 
        commonly used to implement lazy lists, producer/consumer patterns, etc.

\section{Future Work}
\subsection{Sound}
Adding sound would probably be the highest-priority feature to implement, 
since sound brings another level of immersion to the gameplay. 
Implementing sound would be similar to the implementation of the other components, 
with the communication still managed by the \textit{enterprise pulling} technique.

\subsection{Mappers}
One thing that was not considered at all in the project was the use of memory 
mappers. It would be very interesting to add this feature in the future, as 
most NES games use them. One approach would be to write a 
type class for mappers, with read and write functions. This would be similar
to an interface in Java, which is how BeaNES does it.
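
Such a type class might be sketched as follows (names and types hypothetical; 
real mappers would also carry bank-switching state):
\begin{code}
import Data.Word

-- Each mapper decides how reads and writes hit the cartridge.
class Mapper m where
  readMem  :: m -> Word16 -> Word8
  writeMem :: m -> Word16 -> Word8 -> m

-- NROM (mapper 0) has no bank switching: writes are ignored.
newtype NROM = NROM (Word16 -> Word8)

instance Mapper NROM where
  readMem (NROM rom) addr = rom addr
  writeMem m _ _          = m
\end{code}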

\subsection{State saving}
State saving would likely be the easiest feature to implement, as one would only 
need to dump the current CPU and PPU state. The hard part 
would probably be choosing a good representation of the data and writing the code 
for loading external states.

\subsection{Netplay}
Netplay would be an interesting extension that would make it possible to play against other 
people online, thus simulating the feature of several controllers when using the real NES.

\subsection{Parallelism}
Since the CPU and PPU are two separate components, it would be interesting to 
parallelize the program. Haskell has support for parallelizing programs using 
the \textit{par}\cite{par} function.
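
A sketch of how one emulation step of each component could be sparked in 
parallel, using \textit{par} and \textit{pseq} from the parallel package (the 
arithmetic stands in for real CPU and PPU steps):
\begin{code}
import Control.Parallel (par, pseq)

-- Spark the CPU's step while evaluating the PPU's, then pair
-- the results; with -threaded the spark may run on another core.
step :: Int -> Int -> (Int, Int)
step c p = cpuR `par` (ppuR `pseq` (cpuR, ppuR))
  where
    cpuR = c + 1
    ppuR = p * 2
\end{code}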

\subsection{PPU Improvements}
Currently the rendering in the PPU is based on scanlines. This makes some trickery 
used by programmers to create special effects impossible, since those effects rely 
on changing values in PPU memory in the middle of a scanline. To support this, one 
would have to be even more stateful in the PPU and keep track of the x and y 
coordinates on the screen. \cite{emuNinlator} As mentioned previously, very few 
games rely on such precise timing, so this has been disregarded.

Another thing we would like to work on in the future is to rewrite the PPU into a 
more readable implementation, since the current algorithm is a very low-level one 
obtained by reverse engineering \cite{loopyppu}. One way to optimize the PPU would 
be simulation instead of emulation: one would then not need to store all the 
low-level details that are stored at the moment, but could focus on what is output.
