\section{Evaluation}
\label{sec:eval}
In this section we present an early evaluation of our methodology. Before automating the refactoring process, we
first performed manual code transformations to derive exact refactoring rules. We started with an application, Pi, that has
been mentioned in previous work on integrating Hadoop and APARAPI. It is a Monte Carlo approach to estimating the number Pi:
sample points are generated uniformly at random inside a square of side length 1, and Pi is then estimated from the fraction
of points that fall inside the square's inscribed circle. An implementation of this application in MapReduce is provided in
the Hadoop example files. In that implementation, the JobTracker spawns multiple map processes, each of which generates a portion of the random numbers
and works on them. Our implementation, however, follows previous work in which mappers read points from a file and perform the computation on
them. In this case, for each point a mapper reads two numbers from disk and does a small amount of computation, so the map is not compute intensive.
This pattern is common in simple MapReduce applications. We intentionally chose this variant of computing Pi to test how well APARAPI
can help us gain performance for a wide class of \textit{simple} MapReduce applications. If we obtain comparable performance in these cases, we
can then argue for the benefits of our methodology in more complex MapReduce applications with compute-intensive maps.
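To make the per-point map computation concrete, the following is a minimal, self-contained Java sketch of the Monte Carlo estimator described above. It is an illustration only, not the Hadoop or APARAPI code used in our experiments; the class and method names (\texttt{PiEstimate}, \texttt{mapPoint}, \texttt{estimatePi}) are hypothetical, and the sequential loop stands in for the map phase, with the running sum playing the role of the reduce step.

```java
import java.util.Random;

public class PiEstimate {
    // Map step for one sample point: returns 1 if (x, y) falls inside the
    // quarter circle of radius 1 inscribed in the unit square, else 0.
    // Note how little work is done per point: one multiply-add and a compare.
    static int mapPoint(double x, double y) {
        return (x * x + y * y <= 1.0) ? 1 : 0;
    }

    // Estimate Pi from numSamples random points; summing the map outputs
    // plays the role of the reduce step in the MapReduce formulation.
    static double estimatePi(long numSamples, long seed) {
        Random rng = new Random(seed);
        long inside = 0;
        for (long i = 0; i < numSamples; i++) {
            inside += mapPoint(rng.nextDouble(), rng.nextDouble());
        }
        // Area ratio: (quarter circle) / (unit square) = Pi / 4.
        return 4.0 * inside / numSamples;
    }

    public static void main(String[] args) {
        System.out.println(estimatePi(1_000_000, 42L));
    }
}
```

The tiny body of \texttt{mapPoint} is exactly what makes this a stress test for offloading: the arithmetic per point is dwarfed by the cost of reading the two coordinates and moving them to the device.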

In order to evaluate our methodology quantitatively, we chose a moderate system configuration. The evaluation system is a single-node machine
running Ubuntu Linux inside a chroot environment. The machine has an Intel Xeon E5410 @ 2.33GHz and an NVIDIA GeForce 9800 GX2. Note that the
CPU is fairly powerful, while the GPU is not intended for high-performance computing.

Fig.~\ref{fig:pi_graph} shows the initial results of running the Pi application in two modes. The Hadoop mode uses normal
Hadoop for the computation and relies only on the CPU. The Hadoop+APARAPI mode uses APARAPI to offload map operations to the GPU. The graph shows
that Hadoop+APARAPI achieves performance comparable to plain Hadoop. Recall that the GPU in our machine is not powerful, and that the
application we chose follows the typical computation pattern of most simple MapReduce applications. The performance penalty has two causes.
The first is the redundant copying of data between host and device: this happens once before the APARAPI kernel is invoked, to copy the input data
to the kernel, and once more after the kernel finishes execution. The second is that the map function of our Pi implementation is not compute intensive.
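The copy-compute-copy pattern behind the first cause can be sketched in plain Java. This is a host-side model of what, to our understanding, APARAPI does implicitly around a kernel call; the class \texttt{CopyOverheadSketch} and the use of a second host array to stand in for device memory are our own illustrative assumptions, not APARAPI code.

```java
import java.util.Arrays;

public class CopyOverheadSketch {
    // Models the implicit transfers around an offloaded kernel call:
    // (1) copy inputs host-to-device, (2) run the kernel, (3) copy results
    // device-to-host. "Device memory" is simulated by separate host arrays,
    // which is enough to show that two bulk copies bracket a tiny kernel.
    static int[] offloadedMap(double[] xs, double[] ys) {
        // 1) host-to-device copy of both input arrays
        double[] devXs = Arrays.copyOf(xs, xs.length);
        double[] devYs = Arrays.copyOf(ys, ys.length);
        int[] devOut = new int[xs.length];
        // 2) the "kernel": one multiply-add and a compare per point,
        //    far less work than the two bulk copies surrounding it
        for (int i = 0; i < devOut.length; i++) {
            devOut[i] = (devXs[i] * devXs[i] + devYs[i] * devYs[i] <= 1.0) ? 1 : 0;
        }
        // 3) device-to-host copy of the results
        return Arrays.copyOf(devOut, devOut.length);
    }
}
```

For a map this light, steps (1) and (3) touch every byte of the data while step (2) performs only a few operations per element, which is why the transfer overhead is visible in Fig.~\ref{fig:pi_graph} even though the kernel itself runs on the GPU.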


\begin{figure}
 \vspace{1.0ex}
 \centering
 \begin{minipage}[t]{1.10\columnwidth}
   \includegraphics[width=\columnwidth]{figures/pi_graph}
 \end{minipage}
 \caption{Comparison of execution times of running Pi with Hadoop and Hadoop+APARAPI}
 \label{fig:pi_graph}
 \vspace{-4.0ex}
\end{figure}

\section{Future Plan}
\label{sec:plan}
This report presented an early evaluation and feasibility study of our methodology.
There are several directions going forward in this project. First, we need to change our system configuration and run the whole system on a cluster
of machines with GPUs that are intended for high-performance computing and comparable in price to recent CPUs. Second, we should look at more complex
MapReduce applications with more compute-intensive map phases. Machine learning algorithms (such as the ones implemented in Mahout) are good candidates,
and we plan to work on several Mahout applications. Third, we should automate the whole refactoring process, which is the final goal of the project.

