\section{Proposal}
\label{sec:proposal}
Big data is an inherently I/O-intensive problem, while many analytical applications are computation intensive, which makes big data analytics both I/O and computation intensive. The majority of proposed solutions target each of these requirements individually: one class of systems deploys computation close to storage to overcome the I/O challenge, while others employ various parallel techniques to improve processing time.

Big data analytics requires new platforms with improved storage and computation capabilities; however, these requirements vary with the application and the nature of the data. We propose application-specific solutions for commonly used big data analytics algorithms. In the next section we discuss a preliminary list of these algorithms.

\subsection{Candidate Algorithms}
In this section, we list several algorithms that are commonly used
in big data analytics and have the potential to be optimized in GPU
environments.

\subsubsection{Canopy Clustering}
Canopy Clustering is a simple, fast, and accurate method for grouping
objects into clusters~\cite{website:mahout} that is commonly used
in Machine Learning (ML). During processing,
all objects are represented as points in a multidimensional space;
the algorithm creates a ``Canopy'' containing a group of nearby points and
iterates over the remainder of the dataset until the initial set is
empty, accumulating a set of ``Canopies'', each of which contains
one or more points. The iterative nature of the algorithm requires
repeated passes over the large dataset, leading to prohibitive run
times. A GPU accelerator may bridge this gap.
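The canopy construction described above can be sketched as follows. This is a minimal illustrative sketch (not the Mahout implementation), assuming the standard two-threshold formulation with a loose threshold \texttt{t1} and a tight threshold \texttt{t2}; the per-point distance computations are the part a GPU could parallelize.

```python
import numpy as np

def canopy_clustering(points, t1, t2):
    """Group points into canopies. Assumes t1 > t2 (loose/tight thresholds).

    Illustrative sketch only: a real implementation would pick canopy
    centers more carefully and use an approximate, cheap distance.
    """
    remaining = list(range(len(points)))
    canopies = []
    while remaining:
        # Pick the next available point as the canopy center.
        center = points[remaining[0]]
        dists = np.linalg.norm(points[remaining] - center, axis=1)
        # Every point within the loose threshold t1 joins this canopy.
        canopies.append([remaining[i] for i in range(len(remaining))
                         if dists[i] < t1])
        # Points within the tight threshold t2 leave the candidate set,
        # shrinking the dataset on each iteration until it is empty.
        remaining = [remaining[i] for i in range(len(remaining))
                     if dists[i] >= t2]
    return canopies
```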

\subsubsection{Self-Organizing Maps}
The Self-Organizing Map (SOM)~\cite{Kohonen-2001}, also known as the
Kohonen feature map (KFM), is a special type of Artificial Neural
Network (ANN) model. It consists of one layer of $n$-dimensional units
(neurons) connected to the network input. During the training
stage, the input data is presented to the map repeatedly and the winning
neuron (the map neuron closest in terms of Euclidean distance) is
determined. The computation-intensive part of the
algorithm is the determination of the Euclidean distance.
The required synchronization of map updates
is the limiting factor for any distributed-memory implementation.
However, by using specific hardware support for global synchronization
in GPUs, SOM performance has the potential to be improved on GPU architectures.
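A minimal sketch of the winner search and update described above, assuming a flat weight matrix of shape (neurons, dimensions). The distance computation in \texttt{find\_winner} is the compute-intensive step identified in the text and is independent per neuron, which is what makes it a GPU candidate; the update sketched here moves only the winner (a full SOM also updates a neighborhood around it).

```python
import numpy as np

def find_winner(weights, x):
    """Return the index of the winning neuron (best matching unit):
    the row of `weights` with minimum Euclidean distance to input x."""
    dists = np.linalg.norm(weights - x, axis=1)
    return int(np.argmin(dists))

def train_step(weights, x, lr=0.1):
    """One simplified training step: pull the winner toward the input.
    Updating `weights` in place is the shared state whose
    synchronization limits distributed-memory implementations."""
    w = find_winner(weights, x)
    weights[w] += lr * (x - weights[w])
    return w
```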

\subsubsection{High Energy Physics Data Analysis}
The goal of the analysis is to execute a set of analysis
functions on a collection of data files produced by high-energy physics experiments.
The data analysis framework is ROOT, and the
analysis functions are written using an interpreted language
of ROOT named CINT. 
After processing each data file,
the analysis produces a histogram of identified features.
These histograms are then combined to produce the final
result of the overall analysis. This data analysis task is both
data and compute intensive and is well suited to GPUs. Fig.~\ref{fig:HEP} shows the program flow of
this analysis once it is converted to a MapReduce implementation. 
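The MapReduce decomposition described above can be sketched as follows. This is a hypothetical stand-in, not the ROOT/CINT analysis itself: \texttt{map\_file} plays the role of the per-file analysis function producing a histogram of identified features, and \texttt{reduce\_histograms} combines the per-file histograms into the final result.

```python
from collections import Counter

def map_file(feature_values, bin_width=1.0):
    """Map stage: histogram the features found in one data file.
    `feature_values` stands in for the features that the per-file
    analysis function would extract (hypothetical placeholder)."""
    hist = Counter()
    for value in feature_values:
        hist[int(value // bin_width)] += 1
    return hist

def reduce_histograms(histograms):
    """Reduce stage: merge the per-file histograms bin by bin
    to produce the final result of the overall analysis."""
    total = Counter()
    for h in histograms:
        total.update(h)
    return total
```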

\begin{figure}
  \vspace{1.0ex}
  \centering
  \begin{minipage}[t]{0.90\columnwidth}
    \includegraphics[width=\columnwidth]{figures/hep}
  \end{minipage}
  \caption{MapReduce for the High Energy Physics data analysis}
  \label{fig:HEP}
  \vspace{-6.0ex}
\end{figure}
