\documentclass[english,a4paper,titlepage]{article}
\usepackage[utf8]{inputenc}
%\usepackage[T1]{fontenc}
\usepackage{babel}


%   Matematik symboler
\usepackage{amsmath}

%   Specielle fonte
\usepackage{amsfonts} 

%   Flere symboler
\usepackage{amssymb}      

%   Tekst inden i matematik
%   \text{}
\usepackage{amstext}

%   Grafik
\usepackage{graphicx}

%   Header
\usepackage{fancyhdr}
%---------------Hvilke makroer vil vi bruge; eksempler
%-------------------------

%  Groups, rings, etc.
   \newcommand{\A}{{\mathbb A}}
   \newcommand{\N}{{\mathbb N}}
   \newcommand{\Z}{{\mathbb Z}}
   \newcommand{\Q}{{\mathbb Q}}
   \newcommand{\R}{{\mathbb R}}
   \newcommand{\C}{{\mathbb C}}
   \newcommand{\fg}{{finitely generated }}
   \newcommand{\ann}{{\operatorname{ann}}}

%  Frequently used functions, functors, etc.
   \newcommand{\Spec}{\operatorname{Spec}}
   \newcommand{\gr}{\operatorname{gr}}
   \newcommand{\Hom}{\operatorname{Hom}}
   \newcommand{\CH}{\operatorname{CH}}

%  Shortcuts for finite sums and products
   \newcommand{\nsum}{{\sum_{i=1}^n}}
   \newcommand{\ncap}{{\cap_{i=1}^n}}
   \newcommand{\bncap}{{\bigcap_{i=1}^n}}
   \newcommand{\bnoplus}{{\bigoplus_{i=1}^n}}
   \newcommand{\noplus}{{\oplus_{i=1}^n}}
   \newcommand{\nprod}{{\prod_{i=1}^n}}
   \newcommand{\ncup}{{\cup_{i=1}^n}}
   \newcommand{\bncup}{{\bigcup_{i=1}^n}}

%  Definer evt. selv flere paa samme maade,
%  eller slet de som du ikke skal bruge
\newcommand{\qed}{\hfill $\square$}
%\newcommand{\qed}{\hfill \mbox{\raggedright \rule{.07in}{.1in}}}

\newcommand{\code}[1]{\texttt{#1}}


\title{A Python VM in C}
\author{Henrik Krogh Nielsen and Torin Finnemann Jensen}
\date{\today}

\pagestyle{fancy}
\newcommand{\tstamp}{\today}   
%\renewcommand{\chaptermark}[1]{\markboth{#1}{}}
\renewcommand{\sectionmark}[1]{\markright{#1}}
\lhead[\fancyplain{}{\thepage}]         {\fancyplain{}{\rightmark}}
\chead[\fancyplain{}{}]                 {\fancyplain{}{A Python VM in C}}
\rhead[\fancyplain{}{\rightmark}]       {\fancyplain{}{\thepage}}
%\lfoot[\fancyplain{}{}]
%{\fancyplain{\tstamp}{\tstamp}}
%\cfoot[\fancyplain{\thepage}{}]         {\fancyplain{\thepage}{}}
%\rfoot[\fancyplain{\tstamp} {\tstamp}]  {\fancyplain{}{}}
%\markboth{Header}{Header}

\begin{document}
\maketitle
\thispagestyle{empty}
\newpage
\tableofcontents
\newpage
%\listoffigures
%\listoftables
%\newpage

\section{How to run the code}
To run the code, do the following:\\
Unzip the \texttt{pyvm22.tgz} file on a Linux machine, enter the
directory and type \texttt{make}.\\
The \texttt{pyvm} VM takes as its single argument the name of a
\texttt{.py} file with Python code to be run (on an x86 machine).\\
Benchmark programs can be found in the \texttt{performanceTests}
directory.

\section{Introduction}
Our goal from the beginning was to create a virtual machine supporting
the core functionality of Python. The VM would be written in C and
consist of a parser, a JIT-compiler compiling to native x86 code, and a
garbage collector.\\
The main focus would be on the JIT-compiler, so we would either write a
very simple interpreter or skip it completely. The parsing itself also
felt less interesting, so we would, if possible, acquire a parser from
somewhere else. We did not decide exactly how complex we wanted the
garbage collector to be; instead we would let time decide.


\section{Status}
Our final VM reflects our starting ambitions. We have a parser, a
JIT-compiler and a simple garbage collector. It runs and supports a
subset of Python. For several reasons we have been forced to exclude
more Python functionality than we had hoped. We were too slow to get
started on the project, and while we were still at an early stage, one
of the original group members dropped out of the group (and the course
altogether), which left us with more work to do per person. In the end
this meant that we have had too little time to reach all of our
ambitions.

\subsubsection*{What we would have supported}
The goal was to support the object-oriented aspects of Python, having
everything as objects -- i.e. classes, methods, functions and
primitives as well as regular objects. We found good reasons for
supporting exceptions and figured out a way of handling these as
well. We also wanted to support a large subset of the expressions and
statements in the language.


\subsubsection*{What we support}
Things did not go as well as we would have liked, and we have left out
larger parts of the language than we hoped we would. A number of
things are implemented and believed to be working correctly. These
include most of the basic simple statements, such as expression
statements, assignments and continue statements, and to some degree
break statements. Continue and break would not make much sense without
for or while loops, and these compound statements are implemented as
well, as is the if statement. These are all implemented in assembly,
and we produce the native code for these operations directly. The
print statement is implemented as well, but relies on a call to a
function implemented in C. Apart from the print statement, only
garbage collection will invoke C code.

We support most binary operations on integer literals, including
arithmetic, shifting and bitwise operations. The support for string
literals is very limited, though, pretty much only allowing them to be
printed.
%%%
%Supported features:\\
%-atomic integer expressions
%-expression statement
%-assignment statement\\
%-print statement (through call to c method)\\
%-break statement\\
%-continue statement\\
%-if statement \\
%-while statement \\
%-for statement \\
%-arithmetic binop
%-bit wise binop
%-shifting binop


\subsubsection*{What is not supported}
There may be one of two reasons for something not being supported: we
deliberately decided not to do it, or we dropped it for lack of
time.\\ From the beginning we decided to exclude dictionaries, which
are associative arrays mapping keys of any immutable type to
values. The same goes for anonymous functions: Python makes it
possible to declare a lambda function within a function, which we for
simplicity did not implement. Neither did we look into implementing
support for generator objects and the yield statement.\\ Lack of time
prohibited implementation of the class system. This was one of the
things we really wanted to do, but as the deadline came close it
proved unfeasible. Later in this text we will try to outline the ideas
we had for an implementation.

Apart from the things not implemented, some parts of the existing code
have not had much testing, so they might not work correctly. The break
statement is not tested to the full extent. Also, code for augmented
assignment (\texttt{i += 42}), tuple assignment (\texttt{(x, y) = (2,
3)}) and indexing (\texttt{a[42]}) might not work properly -- and list
comprehension may compute a lot, but does not generate lists. Worse
still is exception handling, which makes up a lot of code but probably
has very big holes in it. Along with exceptions, the code lacks proper
Python error handling -- that is, there are plenty of safety checks
where they are needed, but no exceptions are raised as they should
be. All in all, the debugging time has been saved for the most
important parts. Unfortunately, passing arguments to functions had to
be dropped in the end to make the VM work.

\section{Architecture}
The VM is kept to a very simple architecture. We have a main program
starting up the virtual machine. It takes a Python source file as
argument and issues a call to our parser. The parser returns an
abstract syntax tree (AST), which we feed to our JIT-compiler. The JIT
scans the AST and builds an environment to make lookups fast. Then it
starts compilation.\\
During run-time we execute in much the same manner as a stack
machine. We use a heap to store objects, although for convenience we
have chosen to keep specific objects outside of the heap. Heap space
for an object is allocated based on class information about the object
size, and garbage collection is done in a stop-and-copy manner when
space cannot be allocated within the current heap.\\
The final VM is rather compact: the main function and initialization
live in \texttt{pyvm.c}. The VM is logically divided into the garbage
collector in \texttt{gc.c}, the assembler in \texttt{asm.c}, the
JIT-compiler in \texttt{jit.c}, \texttt{core.c}, which handles the
environments for the JIT during compilation, and \texttt{obj.c}, which
lets the generated code call back into C.

\section{Techniques}
This section presents and discusses the various techniques used in
each of the elements of the VM described in the architecture section
above.

\subsection{Parsing}
As mentioned, doing the parsing ourselves was not a priority to
us. Instead we found a Python parser for C at
\texttt{http://evanjones.ca/software/pyparser.html}. The parser is
from 2005, but it has been extracted from the official Python code. It
does not support all Python types, but it supports enough for our
needs. \\


\subsection{The heap, objects and garbage collection}

\subsubsection*{The heap}
The heap is basically just a contiguous space in memory which we
allocate at startup and later use for storing our objects. In general
all objects are kept in the heap; however, there are a few exceptions
to this, namely class objects and code objects. Whenever a new object
is to be created, a fitting amount of heap space is allocated. During
runtime the garbage collector may allocate another space in memory for
the heap and copy live objects to the new space, as described
later. In our VM the heap size is hard-coded to a fixed size of
128KB. It can be changed in the \texttt{init\_gc()} function if
testing with a smaller size is needed.
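The allocation scheme above can be sketched in pure Python (the real
implementation is in C; \texttt{Heap} and \texttt{alloc} are
illustrative names, not the actual identifiers from \texttt{gc.c}):

```python
HEAP_SIZE = 128 * 1024          # hard-coded, as in init_gc()

class Heap:
    """Bump-pointer allocation over one contiguous space (sizes in bytes)."""
    def __init__(self, size=HEAP_SIZE):
        self.size = size
        self.free = 0           # offset of the next free byte

    def alloc(self, nbytes):
        """Return the offset of a fresh block, or None if the collector
        would have to run first."""
        if self.free + nbytes > self.size:
            return None         # caller must invoke garbage collection
        addr = self.free
        self.free += nbytes
        return addr

heap = Heap()
a = heap.alloc(16)
b = heap.alloc(16)
assert (a, b) == (0, 16)          # allocation is a simple pointer bump
assert Heap(8).alloc(16) is None  # no room: a collection would be triggered
```

Allocation is thus just a bounds check and a pointer bump, which is
why the scheme is fast as long as the heap has room.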


\subsubsection*{Objects}
Objects consist firstly of a header with two pointers, one to the
object's class and one to a dictionary, and secondly a number of
pointers, pointing to any object attributes that have been found in
advance through a simple scan of the class.\\
\begin{figure}[h]
\centering
\includegraphics[width=30mm]{object.png}
\caption{A heap object}
\label{fig:obj}
\end{figure}
The class pointer of the header is included for two reasons. It
enables identification of the listed attributes, and it makes it
possible for the garbage collector to look up the size the current
object takes up in the heap.\\
To cope with Python's ability to extend objects dynamically, the
object header also contains a reference to a dictionary that in turn
contains references to the attributes we didn't allocate space for
directly in the heap. This means using declared class variables will
be faster than using variables added later, which we find an
acceptable price for keeping garbage collection easy: we avoid either
fragmenting the single object or possibly having to move objects
around at every allocation.\\
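The object layout described above can be modelled in Python (the heap
and the word-level layout are simulated; \texttt{PyClass},
\texttt{HeapObject} and the field names are illustrative, not the
actual C structures):

```python
class PyClass:
    """Sketch of the class info the VM keeps outside the heap."""
    def __init__(self, name, attr_names):
        self.name = name
        self.attr_names = attr_names     # found by scanning the class

class HeapObject:
    """Mirror of the heap layout: two header words, then attribute slots."""
    def __init__(self, cls):
        self.cls = cls                   # header word 1: class pointer
        self.extra = None                # header word 2: dict for late additions
        self.slots = [None] * len(cls.attr_names)  # preallocated attributes

    def set_attr(self, name, value):
        if name in self.cls.attr_names:  # fast path: fixed slot
            self.slots[self.cls.attr_names.index(name)] = value
        else:                            # slow path: overflow dictionary
            if self.extra is None:
                self.extra = {}
            self.extra[name] = value

    def get_attr(self, name):
        if name in self.cls.attr_names:
            return self.slots[self.cls.attr_names.index(name)]
        return None if self.extra is None else self.extra.get(name)

point = PyClass("Point", ["x", "y"])
p = HeapObject(point)
p.set_attr("x", 3)
p.set_attr("colour", "red")    # not preallocated: goes to the dictionary
assert p.get_attr("x") == 3 and p.get_attr("colour") == "red"
```

The fast path is a fixed offset known at compile time; only attributes
added after creation pay for a dictionary lookup.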

\subsubsection*{The garbage collector}
Python guarantees no garbage collection: \begin{quote}An implementation is
allowed to postpone garbage collection or omit it altogether - it
is a matter of implementation quality how garbage collection is
implemented, as long as no objects are collected that are still
reachable.\cite{pythonref}\end{quote} We did, however, implement a simple
stop-and-copy garbage collector as described in \cite{wilson} to free
up heap space when needed. The garbage collector does not operate on
the objects stored outside the heap, though, and these will persist
throughout execution.\\

The garbage collector is invoked whenever a new object is about to be
allocated but is too large to fit in the remaining heap space. It
works by first allocating a new heap space (to-space) and then
scanning the stack for pointers into the old heap space (from-space),
evacuating any objects found (the root set) to the new space. The new
heap space is then scanned, and if more objects are found in the old
heap these are evacuated as well. All references are updated along the
way. Finally the old heap space is freed to the system. The choice to
allocate and free memory to the system every time garbage is collected
may slow us down a bit and could potentially cause a problem if the
system cannot give us what we need -- on the other hand we think this
is nicer behavior towards the system. It would be no real issue to
change the implementation to reuse and keep the same two heap
spaces.\\

\begin{figure}[h]
\centering
\includegraphics[width=115mm]{gc.png}
\caption{Garbage collection process}
\label{fig:gc}
\end{figure}

In figure \ref{fig:gc} three stages of a garbage collection are
illustrated: before, during and after. In the before image we see two
root elements referred to by the stack, two more live objects and one
dead object.\\
In the during image, the root elements have been copied to the
to-space along with one of the live objects. In the from-space, where
the copied objects once lay, there are now merely forwarding pointers
to the to-space copies. We also note how pointers from the stack are
updated to the new locations, as are pointers internally between the
objects -- except of course pointers to objects still to be copied,
which refer to the from-space objects.\\
In the after image the process is completed, and all that is left to
be done is to free the from-space.\\
This scheme ensures fast allocation whenever the heap has room for the
object in question, and it avoids both internal and external
fragmentation. On the downside, the garbage collector, when running,
halts execution in the VM for a time proportional to the number of
live objects. Also, while the garbage collector runs, the VM requires
twice the memory that it makes available to the program it runs.\\
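The evacuation and scanning steps described above can be simulated
compactly in Python (objects are lists of fields, integer fields act
as pointers into the heap, and forwarding is recorded in place; all
names are illustrative, not the actual code from \texttt{gc.c}):

```python
def collect(roots, from_space):
    """Stop-and-copy collection over a simulated heap.

    `roots` holds indices into `from_space` (the simulated stack);
    a copied object is replaced by a forwarding marker ('fwd', new_index).
    """
    to_space = []

    def evacuate(idx):
        obj = from_space[idx]
        if isinstance(obj, tuple) and obj[0] == 'fwd':
            return obj[1]                    # already copied: follow forwarding
        to_space.append(list(obj))           # copy the object to to-space
        new_idx = len(to_space) - 1
        from_space[idx] = ('fwd', new_idx)   # leave a forwarding pointer
        return new_idx

    # Evacuate the root set found on the stack, then scan to-space for
    # remaining references into from-space and evacuate those too.
    new_roots = [evacuate(r) for r in roots]
    scan = 0
    while scan < len(to_space):
        obj = to_space[scan]
        for i, field in enumerate(obj):
            if isinstance(field, int):       # field is a heap pointer
                obj[i] = evacuate(field)
        scan += 1
    return new_roots, to_space

# objects: 0 points at 1, 1 has no pointers, 2 is dead garbage
from_space = [[1], [], ['dead']]
roots, to_space = collect([0], from_space)
assert len(to_space) == 2          # the dead object was not copied
assert to_space[roots[0]] == [1]   # the root now points at the to-space copy
```

Note how the dead object is simply never copied; the cost of a
collection depends only on the live objects, as stated above.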


\subsection{JIT-compilation and the stack}
One efficient way of gaining performance in a VM is to strip away the
layers of abstraction and indirection of an interpreter and use the
CPU directly. By compiling just-in-time before execution, most of the
overhead of interpreting a language like Python can be optimized
away. Apart from a relatively big performance overhead from the actual
compilation, the greatest downside of this strategy is the complexity
of handling the CPU in an efficient manner -- choosing the right
opcodes, using all the registers and controlling the use of the stack
are important for performance, but are difficult to accomplish.

\subsubsection*{JIT-compilation}
Our JIT-compiler is a two-stage, method-by-method compiler.
Method-by-method compilation was chosen mainly to ease the
implementation of the VM. Aiming for a tiny VM, there is no room for a
great deal of static analysis and optimizing steps, nor for letting an
interpreter handle any of the initial or less frequent parts of the
Python programs -- and certainly not if the goal is for an interpreter
to supply the compiler with hints for performance. The compiler simply
compiles an entire program in one go, and it is then difficult,
without analysis, to split up a program any other way than method by
method.

The first stage -- the ``check'' stage -- of the compilation has a
dual purpose. The primary goal is to separate the functions and other
code from each other in such a way that each block of Python code is
laid out in a single block in memory. To accomplish this, each
function is compiled on its own, with its own ``check'' and
``compile'' stages. That is, the innermost functions are compiled on
their own before the surrounding code is compiled. But the ``check''
stage is also used to collect information about variables in the block
about to be compiled. This way the number of local variables in a
function is known before the actual compilation. To ease comparison
and lookup of variable names, they are also all put into a constant
string pool during this stage.

The second stage is the ``compilation'' stage. The compiler must first
add ``clean-up'' instructions at the beginning and end of each
function / block of code to be compiled. This mainly consists of
setting up the stack and returning properly from a function call. The
actual compilation of the Python code can now (almost) be completed in
a single scan, outputting assembler while reading in the Python
code. The few exceptions to this simple design mostly concern the need
to jump in the assembler to a location not yet reached in the
compilation. Here the code is simple to patch when the point is
actually reached. A complicated example of this is the \texttt{break}
statement, where multiple breaks must jump to the same undetermined
location, so they all have to be patched up at the end -- linked lists
must be used for this. But it is more complicated when one tries to
\texttt{break}, \texttt{continue} or return out of an enclosing
\texttt{try - finally} block. We attempt to solve this by duplicating
the \texttt{finally} parts where needed. Unfortunately, as each
\texttt{finally} part can itself break out of other \texttt{finally}
parts, exponential code blow-up is a rare, but very real, risk.
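The back-patching of \texttt{break} jumps can be sketched as follows
(in the real VM the pending jump sites are threaded through the
emitted machine code itself as a linked list; here a Python list and
tuple ``instructions'' stand in, and all names are illustrative):

```python
class Assembler:
    """Sketch of jump back-patching; one 'instruction' per list entry."""
    def __init__(self):
        self.code = []

    def emit_jump_placeholder(self):
        self.code.append(('jmp', None))      # target not yet known
        return len(self.code) - 1            # position to patch later

    def patch(self, pos, target):
        self.code[pos] = ('jmp', target)

asm = Assembler()
pending_breaks = []                          # the "linked list" of break sites
asm.code.append(('loop_body',))
pending_breaks.append(asm.emit_jump_placeholder())   # a `break`
asm.code.append(('more_body',))
pending_breaks.append(asm.emit_jump_placeholder())   # another `break`
loop_end = len(asm.code)                     # now the target is known
for pos in pending_breaks:                   # patch every pending break
    asm.patch(pos, loop_end)
assert all(asm.code[p] == ('jmp', loop_end) for p in pending_breaks)
```

Storing the pending sites inside the code itself avoids any auxiliary
allocation, which is why linked lists fit the real implementation.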

As construction of classes in Python is an almost entirely dynamic
process, it is an advantage to see the class definitions as
independent blocks of code as well. Unlike functions they are executed
at once, but by allowing classes to execute in their own local
environment, the local variables can be collected afterwards to
comprise the methods of the class. This way no conditional or loop
statement in a class definition can confuse the actual class
creation. But given the bases of the class to inherit from and the
methods to support, the difficult part is still the post-processing of
these to create a proper class.

To support the JIT-compiler, an assembler has been created. Much like
the V8 assembler, this one is little more than a collection of
functions to be called when a specific assembler instruction is to be
formatted and written into the code. But to get a bit of abstraction,
specific functions for getting/setting/pushing/popping values to and
from local variables, object attributes and registers have been
implemented.
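The flavor of such an assembler can be sketched in Python: each
function appends the byte encoding of one x86 instruction to a growing
code buffer. The function names are made up, but the opcode bytes are
the standard 32-bit x86 encodings:

```python
def emit_push_eax(code):
    code.append(0x50)                    # push %eax

def emit_push_imm32(code, value):
    code.append(0x68)                    # push $imm32
    code += value.to_bytes(4, "little")

def emit_pop_ebx(code):
    code.append(0x5B)                    # pop %ebx

def emit_pop_eax(code):
    code.append(0x58)                    # pop %eax

def emit_add_eax_ebx(code):
    code += bytes([0x01, 0xD8])          # add %ebx, %eax

# Generate the stack-machine step "pop two operands, add, push result":
code = bytearray()
emit_pop_ebx(code)
emit_pop_eax(code)
emit_add_eax_ebx(code)
emit_push_eax(code)
assert code == bytearray([0x5B, 0x58, 0x01, 0xD8, 0x50])
```

The real assembler works the same way, but writes into an executable
code buffer instead of a \texttt{bytearray}.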

But not everything in the compiled Python code is assembled. Vital
parts such as garbage collection and the print statement have had to
be implemented in C. For this to work, functionality has been
implemented to make it possible for the generated assembly to safely
call back into C. Since the calling back and forth is very limited,
there are no problems with handling pointers for garbage collection
and other complicating factors in the transition.

\subsubsection*{The stack}
To make a fast and tiny JIT-compiler, we chose to use the x86
architecture as a stack machine. Even in places where the x86 has
unused registers, values will still be pushed on the stack. Here the
simplicity of knowing where one's values are stored is chosen over a
more efficient use of the registers. This greatly simplifies the
compilation of even the most complicated expressions. Temporary
variables for handling \texttt{for} loops are also pushed on the
stack.
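A sketch of why this simplifies expression compilation: every
subexpression leaves its value on top of the stack, so a single
recursive walk over the AST suffices. This is a Python model of the
code generator; the instruction tuples stand in for the emitted x86:

```python
def compile_expr(node, emit):
    """Compile an expression tree to stack-machine 'instructions'.

    Operands are pushed; a binary operator pops its two operands and
    pushes the result, just as our generated x86 does with push/pop
    and the ALU.
    """
    if isinstance(node, int):
        emit(('push', node))
    else:
        op, left, right = node
        compile_expr(left, emit)         # value of left ends up on the stack
        compile_expr(right, emit)        # value of right on top of it
        emit(('binop', op))              # pops two, pushes one

out = []
compile_expr(('+', 1, ('*', 2, 3)), out.append)
assert out == [('push', 1), ('push', 2), ('push', 3),
               ('binop', '*'), ('binop', '+')]
```

No register allocation or temporary naming is needed: nesting depth in
the source maps directly to stack depth at run-time.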

The stack is also used consistently to store local variables. Not much
work has gone into optimizing, e.g., the use of registers here. This
choice makes it the job of the compiler to set up the local variables
on the stack before a function executes and to tear them down
afterwards. By initializing all local variables before they are ever
used, the generated code is guaranteed to know the locations of all
variables -- but a check to see whether a variable has been assigned
to is still needed before it can be read.

Of course, arguments to function calls are also intended to be pushed
on the stack. But as Python supports a variable number of arguments
for functions, and even default values for missing arguments, some
overhead is needed to handle this. One choice would be to locate this
in the initialization of a function's code. But since the return
address of the call will always be pushed on top of the actual
arguments, it is perhaps easier to locate the correct setup of
arguments at the call site -- making the caller check the function's
argument count and perhaps put excess arguments in a tuple. The main
downside of this is of course somewhat larger code.

But when the arguments are properly set up on the stack, it is much
easier to know the size of the stack frame and the location of all
variables -- the base pointer is not even needed. By relying only on
the stack pointer during normal execution, it is tempting to use the
base pointer for marking the \texttt{try - except - finally} points on
the stack. Assigning the base pointer to the stack pointer will then
pop the stack down to the frame of the exception handling. A saved
base pointer can then be popped, and a \texttt{return} will lead to a
jump to the exception handler code.

\subsubsection*{Class-based object orientation}
As Python is a class-based object oriented language it would have been
nice to have implemented support for user-defined classes and objects
including object attributes and method dispatching.

Even though Python is as dynamic as JavaScript when it comes to object
attributes, one does not \emph{have} to resort to implementing maps to
handle attributes efficiently for most programs. Maps give a very
efficient way of mapping attribute names to memory positions as more
and more are added dynamically. On the other hand, as Python is still
class-based, the methods of the class and its ancestors give a very
good idea of which attributes they expect of their own objects, and
the ``check'' stage of the VM is actually able to make an analysis of
which attributes should be preallocated when creating a new object.

As an easy extension to this, the VM is also able to figure out which
methods and attributes to make room for in the class, by using the
same analysis of local variables as for functions. As Python supports
multiple inheritance, it would be a good idea to duplicate the methods
inherited from super-classes when creating a new class, both to
eliminate the need to look up a method recursively \emph{and} to bring
the methods closer to the actual object. One small problem with this
is the Python feature of assigning new methods to classes. As even the
slightest change in methods can cause major changes down the
hierarchy, the program must either know all the duplications to change
or, a bit simpler, tag the name of the method so that \emph{no} method
of that name is looked up in the fast way. The same duplication of
attribute tables is possible, but there should be no need for a
fallback as there is for methods.

But all of this demands an efficient way of looking up an attribute or
method name for both the object and the class. The only real choices
seem to be hash tables or sorted tables searched with binary
search. In this project the focus has been on the
latter. Unfortunately, the binary search has no implemented tables to
search in yet.
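The sorted-table approach would look roughly as follows (a sketch
only, since the tables themselves were never implemented; the function
names are made up):

```python
import bisect

def make_attr_table(names):
    """Build the sorted attribute table once, at class-creation time."""
    return sorted(names)

def attr_index(table, name):
    """Binary search; returns the slot index, or -1 when absent."""
    i = bisect.bisect_left(table, name)
    return i if i < len(table) and table[i] == name else -1

table = make_attr_table(["y", "x", "colour"])
assert table == ["colour", "x", "y"]
assert attr_index(table, "x") == 1
assert attr_index(table, "z") == -1
```

The table is built once per class, so lookups cost $O(\log n)$
comparisons with no hashing and no extra memory beyond the sorted
names.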

\subsection{Optimizations}
From the beginning of the project we decided to put our main focus on
a JIT-compiler. While we had other optimizations in mind as well, we knew
getting a functioning JIT-compiler running was first priority and
would take the longest time. Other considered optimizations were
inline caching and method inlining.

\subsubsection*{Inline caching and method inlining}
We never found the time to actually do any implementation in this
area, however, we have had it in mind during the construction of the
VM, and an implementation could be done as described in the following.

There are actually two values of interest for caching. The more
important might actually be the index resulting from an argument or
method lookup. Secondly, when a method has been looked up, further
speedup can be accomplished by inline caching the actual address to
call, allowing the CPU to know where to go well ahead of time.

When caching a value, it is vital to have the correct safety checks to
know whether the value is correct for a given object, recalculating
the value otherwise. Checking whether your argument or method index or
method address is correct can easily be handled by checking whether
the class of the current object is the same as the class of the
previous one. Since inherited methods are sometimes overridden, some
will vary a lot even between classes in the same hierarchy. Here in
particular, polymorphic inline caching of method addresses may be a
good idea, allowing more than one method address to be cached at each
call site.
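A monomorphic inline cache with this class-identity safety check can
be sketched in Python (in the generated code the cache would live at
the call site itself; \texttt{CallSite} and its fields are
illustrative):

```python
class CallSite:
    """Monomorphic inline cache: remember the class and looked-up method."""
    def __init__(self):
        self.cached_class = None
        self.cached_method = None
        self.misses = 0

    def call(self, obj, name):
        cls = type(obj)
        if cls is not self.cached_class:   # safety check: same class as before?
            self.misses += 1               # slow path: full lookup, then cache
            self.cached_class = cls
            self.cached_method = getattr(cls, name)
        return self.cached_method(obj)     # fast path: cached address

class A:
    def f(self): return "A"

class B:
    def f(self): return "B"

site = CallSite()
results = [site.call(o, "f") for o in [A(), A(), A(), B()]]
assert results == ["A", "A", "A", "B"]
assert site.misses == 2                    # one miss per class change
```

A polymorphic inline cache would keep a small list of
(class, method) pairs at the site instead of a single pair, trading a
few comparisons for fewer slow-path lookups at megamorphic sites.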

Handling the safety checks for argument indexes is not as obvious when
considering multiple inheritance. With single inheritance a solution
could be to lay out the inherited attributes before the ones just
added, so the objects in the hierarchy at least look alike. Each class
can then have a safety table listing all the classes its objects look
like (that would be the class hierarchy). When looking up an attribute
index, one would now just need to get a bit more information out of
it: in what class the attribute is first seen and how far down the
hierarchy that class is -- the index and its expected value in the
safety table for the check to succeed. It will not be too difficult to
use the same scheme under multiple inheritance, except it will not be
as precise. The question is now just whether the same system can be
used for methods, and whether it can compete with polymorphic inline
caching.

\section{Performance}

As we spend some time compiling at the start of the runtime, we
suspect we are somewhat slower on programs that are short-running
relative to their code length, as opposed to programs with, for
instance, a lot of looping. We have written two very simple
iteration-based programs, one computing the 45th Fibonacci number
several times and one calculating the moves to solve the Towers of
Hanoi puzzle.\\
Due to the limited language support we have been restricted to these
relatively simple test programs, and have thus unfortunately not done
any heavy object manipulation or recursion.
\subsection{Fibonacci}
We calculated the 45th Fibonacci number because it fits in 32
bits. The official Python VM is not limited to 32 bits, so in order
not to cheat by having Python do a lot of extra number calculation, we
decided to stay within 32 bits. Instead we repeated the calculation a
varying number of times, as shown in the table below.\\
In the test we used a 128KB heap size, which would be plenty if not
for the fact that we actually allocate new objects in the heap every
time we assign to a variable. So some garbage collection did take
place during the testing, but it did not seem to have any large impact
on the results.\\
The Python source code for this test is included in \texttt{fib.py}.\\
\begin{center}
  \begin{tabular}{ r | r | r | r }
    Runs & Python runtime (ms) & PyVMiC runtime (ms) & ratio\\ \hline
    10 & 16 & 2 & 8 \\ \hline
    100 & 22 & 4 & 5.5 \\ \hline
    1000 & 66 & 5 & 13.2 \\ \hline
    10000 & 498 & 15 & 33.2 \\ \hline
    100000 & 4775 & 109 & 43.8\\ \hline
    1000000 & 46965 & 1033 & 45.5 \\ \hline
    10000000 & 468752 & 10323 & 45.8 \\ \hline
    \hline
  \end{tabular}
\end{center}

The table shows us that at fewer runs the speed gain from the
compilation is less significant, due to the overhead of the
compilation itself. In the last three cases, with 100k runs or more,
the ratio between our runtime and Python's runtime seems to stabilize.

\begin{figure}[h]
\centering
\includegraphics[width=90mm]{graphfib.png}
\caption{Fibonacci comparison}
\label{fig:fib}
\end{figure}


\subsection{Towers of Hanoi}
The algorithm only calculates the moves needed to move the discs from
pole zero to pole three in the classic Towers of Hanoi problem; the
code does not output the moves, in order to measure the speed more
accurately. As for the Fibonacci test, we ran with a 128KB heap size,
which means some garbage collection takes place when the number of
discs is sufficiently high.\\
The Python source code for this test is included in \texttt{toh.py}.\\
\begin{center}
  \begin{tabular}{ r | r | r | r }
    Discs & Python runtime (ms) & PyVMiC runtime (ms) & ratio\\ \hline
    4 & 15 & 2 & 7.5 \\ \hline
    8 & 16 & 3 & 5.3 \\ \hline
    12 & 25 & 3 & 8.3 \\ \hline
    16 & 120 & 11 & 10.9 \\ \hline
    20 & 1937 & 101 & 19.2\\ \hline
    24 & 28926 & 1541 & 18.8 \\ \hline
    25 & 55408 & 3080 & 18.0 \\ \hline
    26 & 115695 & 6224 & 18.6 \\ \hline
    27 & 215647 & 12556 & 17.1 \\ \hline
    \hline
  \end{tabular}
\end{center}

This table confirms the story from the Fibonacci test. We are faster
even for small instances of the problem, but the gain from compilation
really kicks in when we reach 20 discs (about 1.05 million moves) or
more, where we seem to stabilize at almost 20 times faster execution
than Python.

\begin{figure}[h]
\centering
\includegraphics[width=90mm]{graphtoh.png}
\caption{Towers of Hanoi comparison}
\label{fig:toh}
\end{figure}

\subsection{Conclusions}
The two tests we have done show that we can do simple arithmetic
operations in loops very fast compared to the Python VM. We see
speedups of between 20 and 40 times, which we find very good. Of
course, similar performance gains will be harder to achieve in
programs where execution involves less repeated code, as the relative
penalty of compiling everything would be higher.


\section{Conclusions}
\subsection{Course objectives}
\subsubsection*{Describe the challenges of implementing a virtual
  machine for a language without classes}
Python does in fact have classes, so in our project we have not dealt
explicitly with this issue. However, Python is not statically typed,
and this means we have faced issues similar to what one might come
across in a language without classes.\\
While we never really got into the business of implementing the Python
class system, one major focus point for us in the design phase was to
find a way of determining whether an object has a given method, and to
see if we could find an efficient way of doing method
inlining. Python allows dynamically adding and overwriting methods of
classes, subclasses and single object instances during execution. One
method for handling this is presented in \cite{selfpaper}. Here maps
are used as a way of grouping and describing clones, i.e. objects that
share structure and differ only in the values assigned to their
variables. This allows compilation with good use of inlining for all
clones belonging to a map, whereas compiling for each object would be
unlikely to pay off.\\
Our idea for this was not to implement maps, but instead to rely on
the classes of the objects for optimization, even though these are
mutable. Our approach would be good in the standard cases where
objects do not stray away from their class, which we believe or hope
is the frequent case. When Python's dynamic capabilities are used, we
would be forced to fall back to a slower way of doing things, with
less inlining possible.

  
 
\subsubsection*{Evaluate the pros and cons of the various types of JIT
  compilation}
\cite{adaptive} discusses various optimizations for virtual machines,
specifically selective optimization and various feedback-directed
optimizations. The basis for doing selective optimization is running
an interpreter and then, on the fly, deciding which parts to compile
with a JIT-compiler. They mention counting and sampling as standard
techniques for finding these hot spots, which are typically entire
methods or functions. \cite{tracebased} describes another way of
lazily discovering hot code parts, or traces, during interpretation,
as a way of determining what to compile.\\
Both articles, however, assume that the virtual machine runs as an
interpreter which relies on a JIT-compiler to improve performance in
the frequently executed code parts. We do not have an interpreter;
instead we compile everything. This of course gives us a slower start
up, as we do a lot of compilation before we execute. On the other
hand, all our code should in principle run faster, we have no need of
profiling mechanisms to identify hot spots, and we won't have
compilation pauses during execution.

\subsubsection*{Explain techniques for optimization and deoptimization
  for supporting debugging}
In the previous paragraphs we have explained a bit about the
optimizations we do, or have thought of doing, for example
JIT-compilation and inline caching. \\
We have decided not to focus on support for debugging. We have no way
of doing deoptimization, nor do we do any profiling. It has been more
important to us to get the VM working and supporting more and more of
the Python language.



\subsubsection*{Evaluate the pros and cons of various garbage
  collection algorithms}
We were from the beginning keen on having at least a simple garbage
collector, and we discussed how much time we wanted to invest in
implementing one. We ended up implementing the stop-and-copy garbage
collector as described in \cite{wilson}. The pros of this scheme are
its simplicity and that it automatically avoids fragmentation. On the
downside, it halts the VM during execution and has to copy all live
objects at every run. It also requires twice as much memory as the VM
can offer to the program it runs. For the tests we have done, we have
not felt either of the drawbacks, but these tests were not really
memory intensive, so that was expected.

\subsection{Interesting conclusion}
As a positive surprise, the stop-and-copy garbage collector turned out
to be quite easy to implement, and didn't even seem like much of a
bottleneck for the programs tested on the VM. Even scanning through
the stack -- initiated by C, but partly taken over by assembly -- was
no big problem.

All in all, the calling back and forth between generated assembler and
C code didn't cause as many problems as feared, so implementing an
assembler while keeping the garbage collector and print handling in C
turned out to be an OK idea. The biggest challenge was getting the
stack pointer correct everywhere in the generated code -- otherwise
nothing works.

Creating an assembler pretty much from scratch turned out to work OK
as well. In fact, because Python is such a big and complicated
language, getting all the necessary parts of the JIT-compiler in place
was a much bigger problem than making the compiler call the assembler.

\subsection{Final words}
While we did manage to get a JIT-only virtual machine running, with a
garbage collector and performing quite fast on the supported
operations, we unfortunately did not have the time to implement as
much as we would have liked. We had ideas for the missing features,
and even have code in place for some parts of them, just not enough to
support them. The main reason for the lack of time has not been any
specific feature or problem delaying us, but rather the fact that we
got started a bit too late. The code is in many ways prepared for the
missing features, however, as we have had them in mind from the
beginning, and we would therefore not expect their implementation to
be too cumbersome.\\
Apart from adding the missing features, there is still room for many
optimizations on the compiled code, and options for choosing a more
advanced garbage collection scheme, but with the implementation we
have in place, we feel we have a relatively fast and competitive basis
for making these improvements.

\newpage
\bibliographystyle{unsrt}
\bibliography{ref}
\end{document}
