\begin{abstract}
Big data is inherently an I/O-intensive problem, while many
analytical applications are computation intensive, making
big data analytics both I/O and computation intensive. The majority
of proposed solutions target each of these requirements
individually: one class of systems deploys computation close
to storage to overcome the big data challenge, while others employ
various parallel techniques to improve processing time.
Combining the best of both worlds, parallel computing and distributed
systems, has been a long-standing challenge. In this project,
we leverage the APARAPI framework to explore how to support
the execution of Hadoop applications written in Java on heterogeneous
architectures without the effort of rewriting the entire program.
We believe that our approach can improve the performance of
Hadoop applications by taking advantage of the compute power of
accelerators without increasing programming complexity
for the user.
\end{abstract}
