\section{Introduction}
\label{sec:intro}

In recent years, dealing with datasets on the order of
terabytes, petabytes, or even exabytes has become a necessity
in both research and industry, in areas including machine
learning, bioinformatics, and social network analysis.
Practitioners need to examine large amounts of unstructured
data of varied types to discover unknown patterns, correlations,
and other important information~\cite{website:big-data-analytics}.

Big data analytics requires a framework to distribute the
work among hundreds or thousands of machines. Cloud
computing, together with the MapReduce programming model and
runtime, can successfully bridge this gap. Cloud computing has
recently become a mainstream commodity in industry: computing
that was once done locally can now be done in the ``cloud'',
so an increasing amount of computation is being pushed into
the cloud while data volumes grow rapidly.
MapReduce~\cite{map-reduce-paper} is a programming model
proposed by Jeffrey Dean and Sanjay Ghemawat of Google
Inc. that combines \textit{map} and \textit{reduce},
two higher-order functions from functional languages. The
\textit{map} function takes an input key/value pair and
produces a list of intermediate key/value pairs. The
intermediate values associated with the same key are grouped
together and then passed to the \textit{reduce} function.
The \textit{reduce} function takes an intermediate key with
its list of values and processes them to form a new list of
values. The MapReduce model can scale to thousands of machines
to meet real-world computation requirements, and a wide range
of computing problems, including machine learning and data
mining, can be expressed in this model.
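The model just described can be sketched in plain Java with the
canonical word-count example (a self-contained simulation with no
Hadoop dependency; the class and helper names here are ours, chosen
for illustration):

```java
import java.util.*;

public class WordCountModel {
    // map: one input value -> list of intermediate (key, value) pairs
    static List<Map.Entry<String, Integer>> map(String line) {
        List<Map.Entry<String, Integer>> out = new ArrayList<>();
        for (String word : line.split("\\s+")) {
            out.add(new AbstractMap.SimpleEntry<>(word, 1));
        }
        return out;
    }

    // reduce: intermediate key plus all of its values -> reduced value
    static int reduce(String key, List<Integer> values) {
        int sum = 0;
        for (int v : values) sum += v;
        return sum;
    }

    public static void main(String[] args) {
        List<String> input = Arrays.asList("big data", "big analytics");
        // shuffle phase: group intermediate values by key
        Map<String, List<Integer>> groups = new TreeMap<>();
        for (String line : input) {
            for (Map.Entry<String, Integer> kv : map(line)) {
                groups.computeIfAbsent(kv.getKey(), k -> new ArrayList<>())
                      .add(kv.getValue());
            }
        }
        for (Map.Entry<String, List<Integer>> e : groups.entrySet()) {
            System.out.println(e.getKey() + "=" + reduce(e.getKey(), e.getValue()));
        }
        // prints: analytics=1, big=2, data=1
    }
}
```

In a real MapReduce runtime the grouping step (the ``shuffle'') is
performed by the framework across machines; only the two user-defined
functions are supplied by the programmer.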
Apache Hadoop~\cite{website:apache-hadoop} is a MapReduce
framework that runs on clusters of large numbers of commodity
machines to conduct cloud computing. The Hadoop MapReduce
runtime distributes the input data, schedules program
execution, and handles inter-machine communication,
machine failures, load balancing, and data locality,
relieving the user of the burden of managing these details.
The simplicity of the MapReduce programming model and the
quality of service provided by the Hadoop MapReduce ecosystem
improve performance and reduce the programming
cost of big data analytics.

However, the MapReduce architecture still suffers from several
limitations~\cite{website:azinta-blog}. For example, all
intermediate results from the \textit{map} or \textit{reduce} stage
are written to disk before they can be used by the next stage, which
causes a performance penalty. Furthermore, each node in a MapReduce
cluster is typically a multicore CPU with no GPU co-processor attached,
so in order to obtain a processing speed-up, hundreds or
thousands of servers must be involved, resulting in substantial
investment and high ongoing power consumption. To overcome these
constraints, the big data community has turned towards different
parallel implementation environments and architectures.

The GPU was originally designed for graphics processing.
It is a many-core processor with multiple SIMD multiprocessors
(SMs) that can run thousands of concurrent threads. Because of this
highly parallel architecture, it has developed into a more general-purpose
processor, the GPGPU (General-Purpose GPU), for scientific and engineering
applications; GPGPUs are powerful, programmable,
and highly parallel~\cite{Owens:2008:GC}.
The low cost of GPUs enables small companies and
organizations to compete with large businesses that can afford to
deploy very large Hadoop CPU clusters for big data analytics.
According to~\cite{website:gpu-big-data}, a growing number of people
are using GPUs for big data analytics to make better, real-time decisions.

% Fig.~\ref{fig:gpu-1} shows the speedup by using GPUs in neural networks
% application.

% \begin{figure}
%  \vspace{1.0ex}
%  \centering
%  \begin{minipage}[t]{0.90\columnwidth}
%    \includegraphics[width=\columnwidth]{figures/gpu-1}
%  \end{minipage}
%  \caption{10x Speedup on image detection using neural networks}
%  \label{fig:gpu-1}
%  \vspace{-6.0ex}
% \end{figure}

%% \begin{figure}
%%  \centering
%%  \begin{minipage}[t]{0.90\columnwidth}
%%    \includegraphics[width=\columnwidth]{figures/gpu-2}
%%  \end{minipage}
%%  \caption{World's largest artificial neural networks with GPUs}
%%  \label{fig:gpu-2}
%%  \vspace{-5.0ex}
%% \end{figure}

In this project, we use the parallel power of
GPUs to improve the performance of real-time big data analytics
at a considerably lower cost than CPU clusters. We leverage the
APARAPI framework~\cite{website:aparapi} to support the
execution of Java-based Hadoop applications on heterogeneous
architectures, without the effort of rewriting the entire program.
The APARAPI framework provides the functionality of translating Java
bytecode to OpenCL code, but it has many restrictions, and an
original Hadoop application cannot be processed by APARAPI
directly: it must be modified beforehand. In our approach, we propose
an automatic source-to-source refactoring from the original
Java code of the map function to a modified version that
is friendly to the underlying APARAPI framework. The process is
completely transparent to users, so they do not need
to spend effort rewriting their applications. We believe that our
approach can help the majority of existing Hadoop applications
directly take advantage of GPU computing power to improve performance
while adding no programming complexity for the user.
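As a rough illustration of the kind of transformation involved (the
actual refactoring rules are given later; APARAPI itself is not needed
to run this sketch, and the class and method names below are ours), a
record-at-a-time map body can be rewritten over flat primitive arrays
with an explicit index, which matches the restricted Java subset that
APARAPI's bytecode translator accepts (no object allocation inside the
kernel, primitive arrays only):

```java
public class MapRefactorSketch {
    // Original record-at-a-time style: one call per input value.
    static int mapOne(int value) {
        return value * value;
    }

    // APARAPI-friendly style: flat primitive arrays, explicit index.
    // In APARAPI this body would live in Kernel.run(), with the index
    // obtained from getGlobalId().
    static void mapKernel(int[] in, int[] out, int i) {
        out[i] = in[i] * in[i];
    }

    public static void main(String[] args) {
        int[] in = {1, 2, 3, 4};
        int[] out = new int[in.length];
        // On the CPU fallback path this is a plain loop; on a GPU,
        // each index would execute as a separate OpenCL work-item.
        for (int i = 0; i < in.length; i++) {
            mapKernel(in, out, i);
        }
        System.out.println(java.util.Arrays.toString(out)); // [1, 4, 9, 16]
    }
}
```

The refactoring our approach automates produces code in the second
style, so that APARAPI can dispatch one OpenCL work-item per array
index instead of invoking the map function once per record.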

% We will explore different
% algorithms commonly used in big data analytics and examine if there
% are sufficient amount of computationally intensive work that can
% transplant from CPUs to GPUs and can be completed in a way to hide
% memory and network latency. There are previous work done on GPU+MapReduce
% with the effort to build a general platform for such problems, however,
% complexity and optimization challenges is an issue with many of these
% solutions. Instead, we propose application specific solutions for commonly
% used big data analytics algorithms. We expect optimization to be the main
% challenge and we plan to provide an easy to use framework for the end user
% to mask the complexity GPU and CPU computation environments. 

The remainder of this paper is organized as follows: Sec.~\ref{sec:background}
introduces the related background on MapReduce, OpenCL, and Aparapi; Sec.~\ref{sec:method}
describes the core methodology, including the rules for automatic refactoring
and the technical challenges; Sec.~\ref{sec:eval} presents results from
the evaluation; and Sec.~\ref{sec:related} discusses other work on optimizing
MapReduce on different platforms.
