
%Data warehouse systems are playing critical roles in the society
%for various businesses, scientific research, and government activities,
%which demand high computing and data accessing powers
%to process complex analytical queries over huge amounts of data.
%The rapid growth of data makes query processing tasks even more challenging.
%Researchers in database community have followed the advancement of
%many-core Graphic Processing Unit (GPU) for unique opportunities to meet the challenge.

In the past decade, special-purpose graphics processing units (GPUs) %originally designed for computer entertainment applications
have evolved into general-purpose computing devices,
with the advent of efficient parallel programming models, such as CUDA \cite{cuda} and OpenCL \cite{opencl}.
Because of their high computational power, accelerating various workloads on GPUs
has been a major research topic in both the high-performance computing area and the database area
\cite{DBLP:conf/vldb/BandiSAA04,
DBLP:conf/sigmod/GovindarajuLWLM04,
he:primitive,
cwi:uva,
jimgray:terasort,
satish:fastsort,
he:compression,
ross:olap,
DBLP:journals/pvldb/HeY11,
he:gdb,
tim:uva,
DBLP:journals/pvldb/AoZWSWLLL11,
Haicheng:fusion,
DBLP:conf/icde/LiebermanSS08,
DBLP:journals/pvldb/WangHLWZS12}.
In the high-performance computing area,
GPUs have already been widely deployed as accelerators for performance-critical tasks.
For example, according to the June 2013 Top500 list,
more than 50 supercomputers were equipped with accelerators/coprocessors (mostly NVIDIA GPUs),
compared to fewer than five six years earlier.
%\footnote{http://www.slideshare.net/fullscreen/top500/presentation-of-the-40th-top500-list/1}.
However, in the database area,
hardly any major data warehousing system (e.g., Teradata, DB2, Oracle, SQL Server)
or MapReduce-based data analytical system (e.g., Hive, Pig, Tenzing)
has truly adopted GPUs in production,
despite the many research papers that optimize various database operations on GPUs
and demonstrate significant performance benefits.
%Therefore, the critical question raised in the database area is:
%{\it why the general-purpose, high performance GPUs
%have not been used for critical query processing in data warehousing systems?}

To understand the reasons behind this gap, this paper addresses the following questions
on both technical and experimental grounds:

\begin{itemize}

\item Where does time go when processing warehousing queries on GPUs? (Section 4.1)
\item How do existing software optimization techniques affect query performance on GPUs? (Sections 4.2--4.4)
\item Under what conditions will a GPU significantly outperform a CPU for warehousing queries? (Section 5.1)
%\item When programming warehousing queries on GPUs, which programming model is more suitable and more supportive, CUDA or OpenCL?
\item How do different GPU hardware platforms and their supporting systems affect query performance? (Section 5.2)

\item How does the advancement of GPU hardware affect query performance? (Section 6)

\end{itemize}


\begin{comment}
A deep analysis of the GPU architecture and data warehousing query characteristics
is needed to gain insightful answers to this question.
GPUs have a large number of processing cores
with high computational powers and high memory bandwidths \cite{wiki:gpgpu}.
For instance, NVIDIA's Kepler based GPU GTX 680 can achieve a peak performance of more than 3 TFLOPS,
with a peak memory bandwidth of more than 180GB/s \cite{cuda}.
The rich data parallelism in warehousing queries, which naturally matches GPU's parallel architecture,
makes GPUs to be very promising devices to accelerate query processing in data warehousing environments.
However, it is unclear how these claimed performance numbers of GPUs can be
converted into the real query performance
in practice when processing complex warehousing queries with diverse query characteristics.

When processing queries on GPUs, data need to be transferred from the host memory to the GPU device memory
through a PCIe bus before the queries can be executed on GPUs,
and the results must be transferred back to deliver to users
after queries finish execution on GPUs
\footnote{In this paper, we don't consider the GPU integrated in the CPU (e.g., AMD APU), since such an architecture is mostly designed for desktop or mobile systems}.
With the data-intensive nature, the performance of warehousing queries on GPUs
is largely determined by two fundamental factors as follows,
which we call the {\it Yin and Yang} of
query processing on GPUs (In ancient Chinese philosophy, Yin and Yang represent two opposite forces that are interconnected and interdependent in the natural world):
 
\begin{itemize}

\item The {\it Yin} of PCIe Data Transfer: Moving data between host memory and GPU device memory.
\item The {\it Yang} of Kernel Execution: Executing the required query computations on the data resident in the GPU device memory.

\end{itemize}

The PCIe data transfer overhead (the negative aspect), and the actual query execution time on GPUs (the positive aspect),
vary from query to query.
The high dynamics of the warehousing queries at run time imply that gaining benefits
from processing queries on GPUs can be conditional and workload dependent.
Therefore, the purpose of this paper is to provide a comprehensive study of
how the performance of warehousing queries are affected by various factors
including query characteristics, software optimization techniques and GPU capabilities.

\end{comment}

\subsection{The Framework of Our Study}

The key to answering these questions is to fundamentally understand
how the two basic factors of GPU query processing,
which we call the {\it Yin and Yang}\footnote{In ancient Chinese philosophy, Yin and Yang represent two opposite forces that are interconnected and interdependent in the natural world.},
are affected by query characteristics, software optimization techniques, and hardware environments.
The {\it Yin} represents PCIe data transfer, which transfers data between host memory and GPU device memory.
The {\it Yang} represents kernel execution, which executes the query on the data stored in the GPU device memory.
To characterize these two factors,
we have conducted a comprehensive three-dimensional study of query processing on GPUs,
as shown in Figure \ref{fig:overview}.
We target star schema queries
because they are typical workloads in practical warehousing systems \cite{DBLP:conf/cidr/StonebrakerBCCGHHLRZ07}.
Our study is based on the following three sets of research efforts.
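The two-phase pattern above can be illustrated with a minimal, host-only Python sketch (no GPU required). All names here are illustrative assumptions: explicit list copies stand in for the PCIe transfers (the Yin), and a simple selection stands in for kernel execution on device-resident data (the Yang).

```python
def run_selection_query(host_column, threshold):
    """Host-only analogy of GPU query processing (names are illustrative).

    The two copies model the Yin (PCIe transfers into and out of device
    memory); the selection over the "device-resident" copy models the
    Yang (kernel execution on data already in GPU device memory).
    """
    device_column = list(host_column)      # Yin: host -> device transfer
    device_result = [v for v in device_column if v < threshold]  # Yang: selection kernel
    return list(device_result)             # Yin: device -> host transfer
```

For a selection-dominated query, both transfer steps touch data volumes comparable to what the kernel touches, which is why the PCIe term can easily dominate end-to-end time.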

\begin{figure}
\centering
\epsfig{file=graph/motivation/Overview.eps,width=0.30\textwidth}
\vspace{-0.15in}
\caption{Research Overview: A Three-Dimensional Study of Processing Warehousing Queries on GPUs.}
\vspace{-0.15in}
\label{fig:overview}
\end{figure}

\textbf{Implementation of a GPU Query Engine:}
We have designed and implemented a GPU query engine using CUDA and OpenCL,
which can execute on NVIDIA and AMD GPUs as well as Intel CPUs.
Based on the algorithms proposed in prior research,
we have made our best effort to implement various warehousing operators.
%We have also optimized the engine with a set of software optimization techniques.
%Although we know it is more trustful for a performance measurement research
%to target more objective and mature systems,
%we have no choice but implementing our own engine at the current early stage of hardly finding available GPU-optimized query execution systems.

\textbf{Experimental Evaluation and Performance Comparison:}
%Based on our GPU query engine,
%we have conducted intensive experiments to examine
%how query performance, e.g., the PCIe data transfer overhead and the kernel execution time,
%will be affected when varying the factors in the three dimensions in Figure \ref{fig:overview}.
Based on our GPU query engine,
1) we studied warehousing query behaviors and analyzed the effects of various software optimization techniques;
2) we compared the performance of warehousing queries on GPUs
with MonetDB \cite{DBLP:journals/debu/IdreosGNMMK12},
a representative high-performance analytical query engine;
and 3) we investigated how different GPU hardware and programming models affect the performance of warehousing workloads.


\textbf{Modeling and Predictions:}
We have proposed an analytical model to characterize and quantify
query execution time on GPUs.
The model's accuracy is verified by detailed experiments with different hardware parameters.
Based on the model,
we predict how possible advancements in future GPU hardware
will improve query performance.
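To make the shape of such a model concrete, here is a minimal sketch of a bandwidth-based decomposition into a PCIe-transfer term and a kernel term. The parameters and the functional form are illustrative assumptions for exposition, not the paper's actual model or measured values.

```python
# Illustrative hardware parameters (assumed, not measured):
PCIE_BW = 6.0e9          # effective PCIe bandwidth with pinned memory, bytes/s
DEVICE_MEM_BW = 180.0e9  # GPU device memory bandwidth, bytes/s

def estimated_query_time(input_bytes, output_bytes, kernel_bytes_touched):
    """Decompose query time into a PCIe (Yin) term and a kernel (Yang) term.

    A memory-bound simplification: the kernel term charges every byte the
    query touches in device memory at peak bandwidth; real kernels also
    pay compute and memory-latency costs.
    """
    pcie_time = (input_bytes + output_bytes) / PCIE_BW
    kernel_time = kernel_bytes_touched / DEVICE_MEM_BW
    return pcie_time + kernel_time
```

Under these assumed bandwidths, a scan-like query that transfers 1\,GB in and touches it once in device memory spends far more time on the PCIe transfer than on the kernel, which matches the intuition that transfer cost can dominate simple queries.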


\subsection{Contributions of Performance Insights}

Our comprehensive study quantitatively demonstrates that:
1) GPUs significantly outperform CPUs (4.5x--6.5x speedups) only for certain queries
when data are prepared in pinned host memory;
2) GPUs achieve limited speedups (around 2x) for queries dominated
by selection or by intensive random device memory accesses,
or when data are not in pinned host memory;
3) the major obstacle to OpenCL portability is vendors' subtly different implementations of the specification,
which can cause both performance and functional problems for warehousing workloads;
and 4) the peak performance increases across GPU generations bring limited
benefits for processing warehousing queries.

\begin{comment}
\textbf{Contribution of the Model}:
Our proposed model can accurately capture the behaviors of query execution on GPUs.
%The model can be used to predict how hardware parameters
%in future GPU advancement can bring performance increase for data warehousing queries.
%Based on the accuracy validation of the model,
%we show that the performance of GPU query executions is both understandable and predictable.
%With the model, we predict future query execution performance 
%with 1) up to \textbf{10\%} performance increase by doubling device memory bandwidth
%and 2) up to \textbf{30\%} increase by doubling PCIe bandwidth.


\textbf{Contribution of the Testbed}:
Our GPU query engine can be used to conduct further GPU query research,
such as performance test, algorithm development, and hybrid system building.
\end{comment}


The rest of the paper is organized as follows.
In Section 2 we present the implementation of our GPU query engine.
Section 3 describes the experimental environment. 
We study the warehousing query behaviors and the effects of software techniques in Section 4
and conduct detailed performance comparisons in Section 5.
In Section 6 we introduce our cost model
and explore the impacts of GPU hardware advancement on query processing.
We discuss related work in Section 7
and conclude in Section 8.


