pyOODSM has been developed in an iterative manner: an initial, simple version was constructed and subjected to a performance analysis, which revealed the strengths and weaknesses of the design. The performance analysis then formed the basis for developing a series of improvements, namely:

\begin{itemize}
\item Changing the object location model to a home based design.
\item Introducing the possibility of pinning \cite{pinning} objects to clients.
\item Introducing a simple prefetching module.
\end{itemize}

The effects of these improvements have been evaluated by repeating the performance analysis, and the results for the initial version are compared with the results for the versions containing the various improvements.

\subsection{General design concepts}
The general concept on which pyOODSM is designed is the fact that Python offers full introspection into objects. In particular, every Python object contains a dictionary of its instance attributes, namely \texttt{\_\_dict\_\_}. This dictionary maps attribute names to actual values, which provides an easy way to clone objects:

\begin{lstlisting}[frame=single, language=python]
myinstance = myclass()
mynewinstance = mydummyclass()
mynewinstance.__dict__ = myinstance.__dict__
\end{lstlisting}

The \texttt{\_\_dict\_\_} attribute can be treated like any other attribute of the object, with the exception that Python's accessor and mutator methods for attributes, \texttt{\_\_setattr\_\_}, \texttt{\_\_getattr\_\_} and \texttt{\_\_delattr\_\_}, are not invoked. This means that the dictionary can be passed as an argument to a function, and hence can also be exchanged between processes by serializing it.
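As an illustration (the class and attribute names here are hypothetical), an object's state can be serialized with the standard \texttt{pickle} module and restored into another instance:

\begin{lstlisting}[frame=single, language=python]
import pickle

class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

# Serialize the attribute dictionary, as one would before sending
# it to another process.
original = Point(3, 4)
payload = pickle.dumps(original.__dict__)

# On the receiving side, restore the state into a fresh instance
# without invoking __init__.
clone = Point.__new__(Point)
clone.__dict__ = pickle.loads(payload)
\end{lstlisting}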

The only part missing is to realize that all Python objects inherit the methods \texttt{\_\_setattr\_\_}, \texttt{\_\_getattr\_\_} and \texttt{\_\_delattr\_\_}, for setting, reading and deleting attributes respectively. This means that it is possible to implement a super class for shared objects, which overloads these methods with versions that can determine whether an object is local or has been migrated to another node in the pyOODSM network. Furthermore, this super class has to implement the ability to send the contents of the \texttt{\_\_dict\_\_} dictionary, leaving the object as a stub. By a stub we mean an object whose \texttt{\_\_dict\_\_} contains only a number of control attributes (five) that should not be migrated, as they describe the state of the local object.
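The migration of a \texttt{\_\_dict\_\_}, leaving a stub behind, can be sketched as follows (the class and the particular control attributes are illustrative, not pyOODSM's actual names):

\begin{lstlisting}[frame=single, language=python]
# Control attributes that stay behind in the stub (illustrative;
# the real system keeps five such attributes).
CONTROL_ATTRIBUTES = ('_oid', '_is_local', '_home', '_pinned', '_lock')

class SharedObject:
    def __init__(self, oid):
        self._oid = oid
        self._is_local = True
        self._home = None
        self._pinned = False
        self._lock = None

    def migrate(self):
        """Return the migratable state, leaving this object as a stub."""
        state = {k: v for k, v in self.__dict__.items()
                 if k not in CONTROL_ATTRIBUTES}
        for name in state:
            del self.__dict__[name]
        self._is_local = False     # the stub is no longer local
        return state

obj = SharedObject(oid=1)
obj.value = 42                     # a normal, migratable attribute
state = obj.migrate()              # state == {'value': 42}
\end{lstlisting}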

A general algorithm for accessing or mutating an object's attributes can then be written as follows, sketched here for the accessor \texttt{\_\_getattr\_\_} (the mutator and deleter follow the same pattern, and the control attribute names are illustrative):

\begin{lstlisting}[frame=single, language=python]
def __getattr__(self, name):
    with self._lock:       # lock access to the local object
        if not self._is_local:
            # fetch the object's attributes from the server
            state = self._server.fetch(self._oid)
            # insert the fetched attributes into the object's __dict__
            self.__dict__.update(state)
            # update the control attributes: the object is now local
            self._is_local = True
        # perform the operation as usual
        return self.__dict__[name]
\end{lstlisting}

This leads to the basic design, depicted in figure \ref{design}. From figure \ref{design} it is clear that the system consists of the following components:

\begin{itemize}
\item Central server
\item Manager that implements the clients
\item Super class
\end{itemize}

To ease the task of running parallel programs on a distributed memory architecture, such as a NOW-based cluster, a class for starting processes on nodes, using ssh, has been implemented. This is not a part of pyOODSM as such, but it enables programmers to write entire parallel programs, including startup and deployment scripts, in Python.
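A minimal sketch of such a starter, built on the standard \texttt{subprocess} module (the host name, script name and interpreter path are placeholders, not pyOODSM's actual interface):

\begin{lstlisting}[frame=single, language=python]
import subprocess

class ProcessStarter:
    """Start Python processes on remote nodes over ssh (sketch)."""

    def __init__(self, python='python'):
        self.python = python

    def build_command(self, host, script, *args):
        # ssh <host> python <script> <args...>
        return ['ssh', host, self.python, script] + list(args)

    def start(self, host, script, *args):
        # Launch the remote process without waiting for it to finish.
        return subprocess.Popen(self.build_command(host, script, *args))

starter = ProcessStarter()
cmd = starter.build_command('node01', 'worker.py', '--id', '0')
\end{lstlisting}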

The fetching of objects is described in sections \ref{cs} and \ref{hb}, as different versions have been implemented.

One could speculate about the possibility of accessing an object remotely, i.e. sending the data to the object instead of sending the object to the data. This is not a part of pyOODSM in its present form, as it would pull pyOODSM away from a DSM design towards an RMI design. A consequence of the migration approach is that if a number of nodes access an object in a contentious manner, the object will ping-pong between the nodes. Depending on the nature of the object and the data in question, a remote access strategy might yield better results, but such a consideration should be part of the analysis that leads to choosing the system used for implementing a given task. It is, however, important to note that pyOODSM is able to handle such a ping-pong, as shown when considering the n-body dynamics problem.

\begin{figure}
\begin{center}
\includegraphics[width=0.6\textwidth]{design.eps}
\end{center}
\caption{Design of the initial version of pyOODSM}\label{design}
\end{figure}

\subsection{Interprocess communication}
When developing a DSM system, it quickly becomes clear that a fundamental part of the system is the ability to communicate between processes that do not share memory. An easy and effective way to do IPC under these conditions is to use an RPC system. pyOODSM uses PYRO \cite{pyro} to implement IPC. PYRO has the advantage that it is written completely in Python, making it easy to use for IPC. Furthermore, PYRO shows quite good performance.
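PYRO hides these details behind transparent proxies, but the underlying pattern can be illustrated with the standard library's \texttt{xmlrpc} modules (the exposed \texttt{fetch} method is a placeholder, not pyOODSM's actual interface):

\begin{lstlisting}[frame=single, language=python]
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

# A toy RPC server exposing a single method, standing in for the
# role an RPC system such as PYRO plays in pyOODSM.
server = SimpleXMLRPCServer(('localhost', 0), logRequests=False)
port = server.server_address[1]
server.register_function(lambda oid: {'value': oid * 2}, 'fetch')
threading.Thread(target=server.serve_forever, daemon=True).start()

# A client holds a proxy and calls methods as if they were local.
proxy = ServerProxy('http://localhost:%d' % port)
state = proxy.fetch(21)
server.shutdown()
\end{lstlisting}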


\subsection{Initial version -  Central server}\label{cs}
The initial version of pyOODSM is based on a central server through which all communication is done. This design is described in \cite{pastset} as the simplest distribution model for a DSM system. Unfortunately, this design does not scale very well, but being simple to understand and implement, a central server design can be used as an initial design that can be changed later on.

The central server handles all operations on the pyOODSM network, including transporting objects between nodes and maintaining lists of objects and clients.

A client has to register on the central server in order to be a part of the pyOODSM network. Clients can leave the network at any time by sending a request to the server to be unregistered.

It is quite clear that the central server becomes a bottleneck in almost any situation where there is contention, either on a single object or on many objects. Even in the situation where many objects are accessed in a non-contentious manner, there will still be contention on the central server.

\subsection{Home based design}\label{hb}
In order to eliminate the bottleneck of the central server, a better distribution model for pyOODSM has to be applied. This section is about choosing a better design for pyOODSM that will allow for heavier contention on single objects as well as on many different objects.

According to \cite{pastset}, the location model that offers the best scalability versus system complexity is a home based design. In a home based design, every object has an assigned home node. This home node must hold the location of the object at all times. By location, we mean the location of the last node that requested the object, as this ensures that objects are migrated in the order in which they were requested. A home based design comes in two versions:

\begin{enumerate}
\item The home node holds the location of its objects. When an object is requested, the home node fetches the object to itself, and then sends it to the requesting node.
\item The home node holds the location of its objects. When a node requests an object from the home node, the home node returns the location of the object. The requesting node can then fetch the object from the node that holds it.
\end{enumerate}

For pyOODSM, the second version of the home based design is chosen. This ensures minimum network traffic, as an object location is small and of fixed size, whereas the objects themselves can potentially be quite large.
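The second version can be sketched as a two-step protocol (the data structures and method names are illustrative, not pyOODSM's actual ones):

\begin{lstlisting}[frame=single, language=python]
class HomeNode:
    """Per-home-node table mapping object id -> node currently
    holding the object (sketch)."""

    def __init__(self):
        self.locations = {}

    def register(self, oid, node):
        self.locations[oid] = node

    def lookup(self, oid, requester):
        # Return the current holder and record the requester as the
        # next holder, so objects migrate in request order.
        holder = self.locations[oid]
        self.locations[oid] = requester
        return holder

home = HomeNode()
home.register('obj1', 'nodeA')
assert home.lookup('obj1', 'nodeB') == 'nodeA'  # nodeB fetches from nodeA
assert home.lookup('obj1', 'nodeC') == 'nodeB'  # nodeC fetches from nodeB
\end{lstlisting}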

It is quite clear that, using a home based location model, it becomes important to choose the objects' home nodes in a ``good'' way. By ``good'' we mean the distribution of home nodes that yields the minimum contention on the individual home nodes. From \cite{first_touch} it seems that the most effective way to choose the home node for an object is a First Touch strategy. First Touch means that the first client that registers the object becomes the home node for the object. This strategy yields good results in \cite{first_touch} and is quite simple to implement, making it a good choice for pyOODSM. The First Touch strategy does, however, have one problem, namely the risk that one client becomes home node for all the objects in the network, effectively reintroducing the central server design. In most cases, however, the distribution of home nodes will be significantly better.
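A First Touch assignment can be sketched as follows (class and method names are illustrative):

\begin{lstlisting}[frame=single, language=python]
class CentralServer:
    """Sketch of First Touch home-node assignment."""

    def __init__(self):
        self.homes = {}

    def register_object(self, oid, client):
        # The first client to register an object becomes its home
        # node; later registrations leave the assignment unchanged.
        self.homes.setdefault(oid, client)
        return self.homes[oid]

central = CentralServer()
assert central.register_object('matrix', 'nodeA') == 'nodeA'
assert central.register_object('matrix', 'nodeB') == 'nodeA'  # unchanged
\end{lstlisting}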

The central server in pyOODSM is kept, but its role is greatly reduced when moving to a home based design. In the home based version of pyOODSM, the server's only role is to perform operations that affect the pyOODSM network as a whole. This includes keeping track of clients, keeping a list of the shared objects' home nodes and shutting down the network. Basically, the only difference in the server's role is that object migrations are performed directly between client nodes and not through the server.

\subsection{Pinning}
To further reduce the network traffic, pyOODSM is equipped with the ability to let clients pin objects. Pinning an object means that the object is locked to the client until the client releases (unpins) the object. This feature provides a more effective way of synchronizing processes than polling, which is the only option in the initial version of pyOODSM.

Adding the ability to pin objects in pyOODSM is quite easy. pyOODSM objects are simply equipped with an extra lock object, through which access must be acquired prior to a migration. Pinning an object is then simply the task of locking the object with this lock. If the object is not local at the time of pinning, the object is fetched and then locked.
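A sketch of this mechanism, using a standard \texttt{threading.Lock} as the extra lock (class and attribute names are illustrative):

\begin{lstlisting}[frame=single, language=python]
import threading

class PinnableObject:
    """Sketch of pinning via an extra lock guarding migration."""

    def __init__(self):
        self._pin_lock = threading.Lock()  # guards migration
        self._is_local = True

    def pin(self):
        if not self._is_local:
            self._fetch()                  # fetch the object first
        self._pin_lock.acquire()           # now locked to this client

    def unpin(self):
        self._pin_lock.release()

    def migrate(self):
        # A migration must acquire the pin lock, so a pinned object
        # stays with the client until it is unpinned.
        with self._pin_lock:
            self._is_local = False

    def _fetch(self):
        self._is_local = True

obj = PinnableObject()
obj.pin()      # no migration can occur until obj.unpin() is called
obj.unpin()
\end{lstlisting}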

Introducing pinning enables the programmer to construct a deadlock in the same way as with a mutex lock in a normal shared memory threading environment, as the top level semantics of pinning an object are the same as locking an object to a Python process. At first glance, this sounds like a usability drawback. However, the programmer is no worse off than if he were to write a normal multi-threaded program, meaning that it is no more difficult to write a program for a distributed memory architecture, using pyOODSM, than to write the same program for a shared memory architecture.

It might be interesting to have read-only objects, but since the point of pyOODSM is to offer normal object semantics in a distributed memory environment, read-only objects would not really fit into this scheme, as read-only objects are not a part of Python semantics.

\subsection{Prefetching}
According to \cite[chapter 25]{parallel}, it is possible to achieve some improvement of a DSM system by introducing a prefetching module that utilizes waiting time in the clients to prefetch data that is likely to be used in the near future. According to \cite{prefetching}, the following prefetching strategies exist:

\begin{itemize}
\item History prefetching
\item Aggregate prefetching
\item Effective prefetching
\item Adaptive data granularity
\end{itemize}  

\subsubsection{History prefetching}
History prefetching works by collecting a history of fetches, from which the most likely next object is chosen. This strategy is quite simple and works because most programs access objects in some sort of repetitive manner. However, the strategy will also give rise to a number of false prefetches, i.e. the wrong objects are prefetched. In particular, the strategy suffers from the Accumulated Waiting Phenomenon (AWP) and the Waiting Synchronization Phenomenon (WSP) \cite{prefetching}.

\subsubsection{Aggregate prefetching}
This strategy is related to the history model. A limited history is collected, and some sort of pattern matching is then applied to the history to produce a better guess at which object the client needs next. This strategy suffers from the same problems as the history strategy, but these will be suppressed to some degree, because the pattern matching should yield better guesses \cite{prefetching}.

\subsubsection{Effective prefetching}
This strategy works by tagging objects with status tags and then using these tags, alongside a history, to determine which object to prefetch. The status tags provide better ways to determine whether an object really should be prefetched, and their use suppresses the effects of the AWP and WSP significantly, but does not eliminate these phenomena \cite{prefetching}.

\subsubsection{Adaptive data granularity}
The adaptive data granularity strategy works by transferring only the parts of the data that need to be prefetched. This strategy makes little or no sense in an object oriented DSM system, unless the system implements full replication, which pyOODSM does not.

\subsubsection{Prefetching strategy used in pyOODSM}
pyOODSM is equipped with a prefetching module that implements a simple history prefetching strategy. The history reaches only one fetch operation back in time.

This strategy will naturally lead to some false prefetches, but it is quite easy to implement, and there is still a good chance that a prefetch will be correct, as most programs access objects in some sort of repetitive manner, giving robust histories and hence good chances of correct prefetches.
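Such a one-step history can be sketched as follows (class and method names are illustrative, not pyOODSM's actual ones):

\begin{lstlisting}[frame=single, language=python]
class HistoryPrefetcher:
    """One-step history prefetching: remember which object last
    followed which, and predict accordingly (sketch)."""

    def __init__(self):
        self.follows = {}    # last observed successor of each object
        self.previous = None

    def record_fetch(self, oid):
        if self.previous is not None:
            self.follows[self.previous] = oid
        self.previous = oid

    def predict(self):
        # Best guess for the next fetch, or None if no history yet.
        return self.follows.get(self.previous)

p = HistoryPrefetcher()
for oid in ['a', 'b', 'a', 'b', 'a']:
    p.record_fetch(oid)
assert p.predict() == 'b'    # 'a' was last followed by 'b'
\end{lstlisting}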



