To measure the performance of pyOODSM, and the effects of the applied improvements, a number of experiments have been performed. These consist of using pyOODSM to write parallel versions of computationally heavy scientific problems. The problems chosen are:

\begin{itemize}
\item Tumor radiation.
\item Protein folding.
\item Heat equation.
\item Heat equation run for a fixed number of time steps.
\item N-body simulation.
\end{itemize}

All these problems are run on a cluster of 8 Intel Core 2 Quad 2.26 GHz processors, giving a grand total of 32 cores. The nodes are connected by gigabit Ethernet.

%For all the problems, the runtime is measured and speedup and CPU utilization is presentet.

The results are presented as speedups relative to a sequential version of the same problem, written in Python. This approach is chosen in order to show the effect of the DSM system, regardless of what other optimizations might be invoked. In this work we have not compared pyOODSM directly with other systems such as MPI, PVM or the like. However, it is possible to compare the speedups we have achieved with published results for other systems of interest.


\subsection{Tumor radiation}
This problem is about simulating the radiation treatment of a brain tumor. The input is a CT image of a brain containing a tumor, along with the positions of five radiation cannons, and the output is a CT image of the brain marked with the positions where radiation particles have deposited their energy.

When a particle makes its way through the brain, the chance that the particle deposits its energy at a given position depends on the density of the matter at that position. On a CT image, a pixel gets darker as the matter becomes denser, which means that the pixel value can be used to determine the likelihood that the particle deposits its energy at a given position \cite{pycsp}.

The problem is a variation of Monte Carlo simulation, and should therefore be an example of a problem that can achieve good speedups when parallelized.

\subsubsection{Decomposition}
At first glance, it seems obvious to parallelize the problem by letting the cannons be processes, but even though a real-world example would consist of more cannons, there would almost always be significantly more processors than cannons. Therefore, the problem is parallelized by dividing the number of particles evenly among the processors, letting each processor simulate all the cannons, but for only a $\frac{1}{n}$ fraction of the total particles, $n$ being the number of processors.
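This even split of the particles can be sketched in plain Python; the function below is illustrative and not part of pyOODSM:

```python
# Hypothetical sketch of the static Monte Carlo decomposition: each
# processor simulates all cannons, but only its own share of the particles.

def particles_for_rank(total_particles, n_procs, rank):
    """Return how many particles processor `rank` should simulate,
    spreading any remainder so no processor gets more than one extra."""
    base, rem = divmod(total_particles, n_procs)
    return base + (1 if rank < rem else 0)

# Example: 1_000_003 particles on the 32 cores of the test cluster.
counts = [particles_for_rank(1_000_003, 32, r) for r in range(32)]
assert sum(counts) == 1_000_003          # no particle is lost
assert max(counts) - min(counts) <= 1    # near-perfect balance
```

Because the particles are independent, this decomposition needs no communication during the simulation itself; only the final energy maps have to be combined.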

%\subsubsection{Results}
\begin{figure}
\begin{center}
\includegraphics[width=0.6\textwidth]{radspeedup.eps}
\end{center}
\caption{Speedup for the tumor radiation problem}\label{radspeedup}
\end{figure}

\subsection{Protein folding}
In recent years, research into protein folding has become a hot topic. This interest arises from the recognition that a number of chronic diseases are caused by misfolding of important proteins within the human body.

Simulating the folding of a protein has proven to be a very computationally heavy task. Luckily, the problem is easy to parallelize and shows good speedups on parallel hardware.

A protein molecule consists of a number of amino acids, of which there are 20 different types. A protein molecule can be seen as a string of pearls, where the pearls are the amino acids. The folding we seek is the one that ``rolls up the string of pearls'' to a minimal unbound energy state \cite{pycsp}.

As an experiment to examine pyOODSM, it is too cumbersome to develop a complete protein folding program. Therefore, the simplifications stated in \cite{pycsp} have been adopted. These are:

\begin{itemize}
\item There are only two types of amino acids: hydrophilic (P) and hydrophobic (H).

\item Folds are only done in 2D, and only at angles that are multiples of 90 degrees.
\end{itemize}
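Under these simplifications, a fold can be scored by counting hydrophobic contacts on a 2D lattice. The sketch below shows one plausible formulation; the representation of a fold as a list of lattice points, and the contact-counting score, are assumptions for illustration, not taken from pyOODSM:

```python
# Illustrative sketch of the simplified HP model: a fold is a self-avoiding
# walk on a 2D lattice, and its score counts hydrophobic (H) residues that
# are lattice neighbours without being sequence neighbours.

def hp_score(sequence, positions):
    """sequence: string over {'H', 'P'}; positions: one lattice point per residue."""
    assert len(sequence) == len(positions)
    assert len(set(positions)) == len(positions)   # fold must be self-avoiding
    score = 0
    for i in range(len(sequence)):
        for j in range(i + 2, len(sequence)):      # skip sequence neighbours
            if sequence[i] == sequence[j] == 'H':
                (xi, yi), (xj, yj) = positions[i], positions[j]
                if abs(xi - xj) + abs(yi - yj) == 1:  # adjacent lattice sites
                    score += 1
    return score

# A 4-residue chain folded into a square: the two H residues touch.
print(hp_score("HPPH", [(0, 0), (1, 0), (1, 1), (0, 1)]))  # 1
```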

\subsubsection{Decomposition}
The problem can be solved as a tree search, much like the travelling salesman problem (TSP). It can be parallelized by a producer-consumer network: the producer folds the protein to a certain depth, which gives rise to a number of partially folded proteins that the consumers can then fold to completion. All that is needed afterwards is to compare the scores of all the folded proteins, in order to return the one with the highest score as the result of the algorithm.

To parallelize this problem, a shared data structure that can contain proteins is needed. In this version, a distributed queue class is implemented as a pyOODSM object. The class is implemented as a normal queue, but being a pyOODSM object, it automatically becomes a distributed queue. Two instances of this class are used: one containing partially folded proteins, and one containing proteins that have been folded completely.
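A minimal sketch of such a queue, written as an ordinary Python class on the premise stated above that any such class becomes distributed when used as a pyOODSM object. The class and method names are illustrative, not the actual pyOODSM API:

```python
# Hypothetical protein queue: a plain FIFO class that pyOODSM could wrap.
class ProteinQueue:
    def __init__(self):
        self._items = []

    def put(self, protein):
        self._items.append(protein)

    def get(self):
        """Return the oldest item, or None when the queue is empty."""
        return self._items.pop(0) if self._items else None

    def empty(self):
        return not self._items

# One instance for partially folded proteins (producer -> consumers),
# and one for completely folded proteins awaiting score comparison.
partial, finished = ProteinQueue(), ProteinQueue()
partial.put("HPPH")             # work item from the producer
finished.put(("HPPH", 1))       # (fold, score) from a consumer
```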


%\subsubsection{Results}

\begin{figure}
\begin{center}
\includegraphics[width=0.5\textwidth]{proteinspeedup.eps}
\end{center}
\caption{Speedup for the protein folding problem}\label{proteinspeedup}
\end{figure}

\subsection{Heat equation}\label{sor}
This problem is about simulating the heat distribution in a block of matter surrounded by cold and hot reservoirs. The block is modeled as a matrix of points, each containing a local temperature. The simulation uses successive over-relaxation (SOR), and the temperature in each point is computed by the following formula:

\begin{equation}\label{heateqn}
h_{i,j} = \frac{h_{i-1, j} + h_{i+1, j} + h_{i, j-1} + h_{i, j+1}}{4}
\end{equation}

All points are computed over and over again, until some criterion of convergence is reached. In this version, convergence is reached when the points change less than some $\epsilon$ from one time step to the next. Convergence is determined by the following formulas:

\begin{eqnarray}
\Delta & = & \sum_{i,j} \left| h^{new}_{i, j} - h^{old}_{i,j} \right| \\
\epsilon & = & 0.008 \cdot height \cdot width \\
Convergence & = & \left\{ \begin{array}{rl}
  True & \textrm{if } \Delta < \epsilon\\
  False & \textrm{if } \Delta \geq \epsilon
\end{array} \right.
\end{eqnarray}

In the formulas, $height$ and $width$ are the height and width of the block that is the subject of the simulation.


\subsubsection{Decomposition}
The problem is parallelized by a red-black coloring of the points. This method is described by many as a good way of dividing the points such that any given point only relies on points that have already been computed \cite{parallel}.
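A sequential sketch of one red-black sweep over the grid, using NumPy for the matrix of points; the reservoir layout and grid size are illustrative:

```python
import numpy as np

def red_black_sweep(h):
    """One sweep of the four-point average: first all 'red' interior points
    (i + j even), then all 'black' points (i + j odd). Because a red point
    only reads black neighbours and vice versa, each half-sweep is free of
    read-after-write conflicts and can be split among processors."""
    for parity in (0, 1):
        for i in range(1, h.shape[0] - 1):
            for j in range(1, h.shape[1] - 1):
                if (i + j) % 2 == parity:
                    h[i, j] = (h[i-1, j] + h[i+1, j] + h[i, j-1] + h[i, j+1]) / 4
    return h

grid = np.zeros((8, 8))
grid[0, :] = 100.0                        # hot reservoir along the top edge
for _ in range(200):
    old = grid.copy()
    red_black_sweep(grid)
    if np.abs(grid - old).sum() < 1e-6:   # the Delta-based convergence test
        break
```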

%\subsubsection{Results}

\begin{figure}
\begin{center}
\includegraphics[width=0.6\textwidth]{sordeltaspeedup.eps}
\end{center}
\caption{Speedup for the heat equation problem}\label{sordeltaspeedup}
\end{figure}

\subsection{Heat equation with fixed number of time steps}
This problem is the same as the heat equation problem described in section \ref{sor}, but in this version the algorithm is run for a fixed number of time steps, which eliminates the shared delta object. The problem is parallelized exactly as the normal heat equation problem.

%\subsubsection{Results}

\begin{figure}
\begin{center}
\includegraphics[width=0.6\textwidth]{sorspeedup.eps}
\end{center}
\caption{Speedup for the heat equation problem - fixed timesteps}\label{sorspeedup}
\end{figure}


\subsection{N-body simulation}
This problem is about simulating the gravitational interaction of a number of particles. This type of problem is often used within the field of e-science, because it can be used to model a number of scenarios within physics, chemistry and astronomy.

The algorithm is very simple, but also computationally very heavy:

\begin{enumerate}
\item For each particle, the force on the particle from all the other particles is calculated.
\item Each particle is moved according to the resulting force on the particle, calculated above.
\end{enumerate}

This is done for a predetermined number of time steps. 

It is clear that, because all particles depend on all the other particles, each time step has a computational complexity of $O(N^{2})$.
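The two steps above can be sketched in plain Python; the particle representation (mass, position, velocity tuples) and the constants are illustrative assumptions:

```python
import math

G, DT = 6.674e-11, 1.0   # gravitational constant and an illustrative time step

def step(particles):
    """One naive O(N^2) time step over (mass, x, y, vx, vy) tuples."""
    forces = []
    # 1. For each particle, sum the force from every other particle.
    for i, (mi, xi, yi, _, _) in enumerate(particles):
        fx = fy = 0.0
        for j, (mj, xj, yj, _, _) in enumerate(particles):
            if i == j:
                continue
            dx, dy = xj - xi, yj - yi
            r = math.hypot(dx, dy)
            f = G * mi * mj / r**2
            fx += f * dx / r
            fy += f * dy / r
        forces.append((fx, fy))
    # 2. Move each particle according to the resulting force on it.
    moved = []
    for (m, x, y, vx, vy), (fx, fy) in zip(particles, forces):
        vx += fx / m * DT
        vy += fy / m * DT
        moved.append((m, x + vx * DT, y + vy * DT, vx, vy))
    return moved
```

The double loop over all pairs is exactly what makes each time step $O(N^{2})$.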

\subsubsection{Decomposition}
A number of techniques exist to reduce the complexity of the N-body algorithm, among them the Barnes-Hut method \cite{parallel}. In this experiment, however, the algorithm is parallelized in its naive form, in order to determine how well pyOODSM performs in a situation where shared objects are used by all the processes.

To parallelize the problem, the particles are divided into sub-galaxies, where a galaxy is just a number of particles, with no respect to these particles' positions. In each iteration, every process reads all the galaxies and then performs the algorithm on the sub-galaxy assigned to that process. The galaxies are accessed in a barrel-shifted manner, in order to lower the contention on the individual galaxies.
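The barrel-shifted read order itself fits in a few lines; the function name is illustrative:

```python
# Sketch of the barrel-shifted access pattern: process p starts with its own
# sub-galaxy and then visits the others in rotated order, so at any given
# step of the iteration, no two processes fetch the same galaxy.

def access_order(rank, n_procs):
    return [(rank + k) % n_procs for k in range(n_procs)]

# With 4 processes, every step of the iteration touches 4 distinct galaxies:
orders = [access_order(p, 4) for p in range(4)]
for k in range(4):
    assert len({order[k] for order in orders}) == 4
```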


%\subsubsection{Results}

\begin{figure}
\begin{center}
\includegraphics[width=0.6\textwidth]{nbodyspeedup.eps}
\end{center}
\caption{Speedup for the N-body problem}\label{nbodyspeedup}
\end{figure}

