


In the past decade, special-purpose graphics processors originally designed for computer entertainment applications
have evolved into general-purpose graphics processing units (GPUs),
driven by the rise of efficient programming models and tools such as CUDA and OpenCL.
How to accelerate various workloads on GPUs
has been a major research topic in both the high-performance computing (HPC) domain and the data processing domain.
In the HPC domain,
GPUs have already been widely deployed as co-processors for performance-critical tasks.
For example, according to the latest Top 500 list,
more than 60\% of the machines are equipped with co-processors (mostly NVIDIA GPUs),
compared to less than 5\% six years ago.
In the data processing domain, however, GPUs
have hardly been adopted in major data warehousing systems (e.g., Teradata, DB2, Oracle, SQL Server)
or in MapReduce-based data analytical systems (e.g., Hive, Pig, Tenzing),
despite the many research papers optimizing various database operations on GPUs.
This raises an important question: why have {\it general-purpose} GPUs
not been used for critical query processing in data warehouse systems?

Answering this question requires a deep analysis of both the GPU architecture and the characteristics of data warehousing queries.
GPUs contain large numbers of Single-Instruction-Multiple-Data (SIMD) processors,
offering high computing power and high memory bandwidth.
The rich data parallelism in analytical queries naturally matches the GPU's SIMD architecture,
which makes GPUs very promising devices for accelerating query processing in data warehouse environments.
However, it is unclear to what extent these nominal performance numbers translate into real query performance
in practice, when processing large data warehousing queries with diverse characteristics.

Due to the data-intensive nature of the computation,
processing data warehousing queries on GPUs
is significantly more complicated than the pure acceleration of loop computations in HPC workloads.
The performance of data processing is determined by
the following two fundamental factors,
which we call the {\it Yin and Yang} of GPU query execution (in ancient Chinese philosophy, Yin and Yang represent two opposite forces that are interconnected and interdependent in the natural world):

\begin{itemize}

\item The {\it Yin} of Data Transfer: Moving data between host memory and GPU device memory\footnote{In this work, we do not consider GPUs integrated into the CPU (e.g., AMD APUs), since such architectures are mostly designed for desktop or mobile systems.}.

\item The {\it Yang} of Kernel Execution: Executing the required computations on the data resident in the GPU device memory.

\end{itemize}

As a separate device in the host machine, a GPU can only execute kernels after the required data objects have been transferred to the device memory,
and the computation results must be transferred back to the host memory for the final output to users.
We call such data transfers the {\it Yin} (the negative factor) because their performance cannot be improved
by increasing GPU capabilities (e.g., the number of GPU cores),
but only by the PCIe bus bandwidth.
On the other hand, the {\it Yang} (the positive factor),
the performance of kernel execution on data in the device memory, is in general determined by the computing power of the GPU.
However, understanding how this factor is affected by diverse query features and GPU architectures requires
a deep study of how queries are executed on the SIMD cores of GPUs.
Therefore, the purpose of this paper
is to provide a comprehensive study of
how these two factors are affected by query characteristics, query execution engine techniques, and GPU capabilities.
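
To make the interplay between the two factors concrete, the following back-of-envelope estimate compares the Yin and the Yang for a simple memory-bound scan. This is a minimal sketch with hypothetical bandwidth numbers, not measurements from our experiments:

```python
# Back-of-envelope estimate of the two factors for a hypothetical scan query.
# All bandwidth parameters below are illustrative assumptions.

def transfer_time_s(bytes_moved, pcie_gbps):
    """Yin: host <-> device transfer time, bounded by PCIe bandwidth."""
    return bytes_moved / (pcie_gbps * 1e9)

def kernel_time_s(bytes_touched, dev_mem_gbps):
    """Yang: kernel time for a memory-bound scan, bounded by device memory bandwidth."""
    return bytes_touched / (dev_mem_gbps * 1e9)

table_bytes = 4 * 2**30                                  # a 4 GiB column slice
t_yin  = transfer_time_s(table_bytes, pcie_gbps=6.0)     # ~effective PCIe 2.0 x16
t_yang = kernel_time_s(table_bytes, dev_mem_gbps=150.0)  # ~GTX 480-class memory bw

print(f"transfer: {t_yin:.3f} s, kernel: {t_yang:.3f} s")
```

With these illustrative numbers, the PCIe transfer takes an order of magnitude longer than the scan kernel itself, which is exactly why the Yin can dominate end-to-end query time.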

In this work, we target star schema queries
because they are typical workloads in practical data warehousing systems.
We use the workload defined in the Star Schema Benchmark (SSBM),
which has been widely used in data warehousing research.
Our goal is to answer the following questions with technical and experimental evidence:

\begin{enumerate}

\item How do different query characteristics affect query performance on GPUs? What are the reasons behind this?

\item How do existing execution optimization techniques affect performance of different types of queries on GPUs?

\item As the GPU's peak performance continues to grow,
will the performance of SSBM queries increase correspondingly?
How does the evolution of GPU hardware affect SSBM query performance?

\end{enumerate}

To answer these questions, we have conducted a comprehensive three-dimensional study of query execution performance
by varying query characteristics, software techniques, and hardware parameters,
as shown in Figure \ref{fig:overview}.
Our study is based on the following three kinds of research efforts.

\begin{figure}
\centering
\epsfig{file=overview.eps,width=0.40\textwidth}
\vspace{-0.15in}
\caption{Research Overview: A Three-Dimensional Study of Processing Warehousing Queries on GPUs.}
\vspace{-0.15in}
\label{fig:overview}
\end{figure}


\textbf{Engine Implementation:} 
We have designed and implemented a GPU-optimized query engine.
Based on algorithms proposed in prior research,
we made our best effort to implement the basic operators for the various operations.
We also optimized the engine with a set of software optimization techniques,
including data compression, join optimization, and transfer overlapping.


\textbf{Experimental Evaluation:} 
Based on our GPU query execution engine,
we conducted extensive experiments to examine
how query performance, i.e., the data transfer performance ({\it the Yin}) and the kernel execution performance ({\it the Yang}),
is affected when varying the factors in the three dimensions.
First, we examined how the two factors are affected under various settings for single operations (e.g., join and aggregation) and for
the Star Schema Benchmark (SSBM) queries.
Second, we studied how different queries can be accelerated by applying the software optimization techniques,
and analyzed how these techniques optimize a query according to its characteristics.
Finally, by conducting our experiments on three generations of NVIDIA GPUs (GTX 480, 580, and 680),
we studied how hardware variations affect query execution performance.


\textbf{Modeling and Predictions:}
To further understand the query execution behaviors measured in the experiments
and to answer the question of ``where does the time go'',
we have proposed, based on our extensive experimental results,
an analytical model that characterizes the execution time of a query
in terms of data movements between host memory and device memory, and basic operations inside the GPU device.
For a given GPU hardware configuration and a given query,
the model can be used to predict the query execution performance.
To examine the accuracy of our model,
we conducted careful experiments with different hardware parameters for various complex SSBM queries.
Based on the proposed model,
we predict how possible improvements of future GPU hardware parameters
can increase the performance of processing data warehousing queries on GPUs.
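
The general shape of such an additive model can be sketched as follows. The step breakdown and the efficiency parameters here are illustrative assumptions for exposition, not the calibrated model used in our experiments:

```python
# A minimal sketch of an additive cost model: end-to-end time is the PCIe
# transfer cost plus a per-kernel device-memory cost. All numbers below are
# hypothetical; a real model must be calibrated against measured kernels.

def predict_query_time_s(transfer_bytes, kernel_steps, pcie_gbps, dev_mem_gbps):
    """Predict end-to-end time as PCIe transfer plus per-kernel memory cost.

    kernel_steps: list of (bytes_accessed, efficiency) pairs, where efficiency
    in (0, 1] models how well a kernel exploits the device memory bandwidth
    (irregular accesses -> low efficiency).
    """
    t_transfer = transfer_bytes / (pcie_gbps * 1e9)
    t_kernels = sum(b / (eff * dev_mem_gbps * 1e9) for b, eff in kernel_steps)
    return t_transfer + t_kernels

# Hypothetical query plan: scan a fact column, then probe a hash table.
steps = [(2 * 2**30, 0.8),   # sequential scan: near-peak effective bandwidth
         (2 * 2**30, 0.2)]   # hash probe: irregular access, low efficiency
t = predict_query_time_s(4 * 2**30, steps, pcie_gbps=6.0, dev_mem_gbps=150.0)
```

The efficiency term is what lets such a model distinguish scan-dominated queries from join-dominated ones: with the same bytes accessed, the irregular hash probe costs several times more kernel time than the sequential scan.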

To the best of our knowledge, this work is the first comprehensive study
to present quantitative results on how complex data warehousing queries are executed on GPUs
under diverse software optimization techniques and GPU hardware configurations.
Specifically, our study makes the following findings.

1. Data and query related issues:
\begin{itemize}
\item A GPU-friendly table structure should choose column data types and widths that avoid data alignment issues.

\item The most time-consuming queries on GPUs are dominated by hash joins with both high selectivity and irregular memory accesses to dimension table data (e.g., Q3.1).

\end{itemize}


2.  Software optimization related issues:
\begin{itemize}
\item Unfortunately, data compression cannot accelerate the kernel execution of the aforementioned query type (e.g., Q3.1),
although data compression generally decreases data transfer time.
\item While the invisible join can significantly accelerate the kernel execution of the aforementioned query type,
it degrades the performance of queries with very low selectivity (e.g., Q3.2 - Q3.4).
\item The CUDA UVA programming technique is most suitable for accelerating queries with low selectivities
(e.g., Q3.2 - Q3.4, Q2.2 - Q2.3), since their one-pass, scan-dominated nature creates better opportunities to exploit transfer overlapping.
\end{itemize}

3. GPU architecture related issues:
\begin{itemize}
\item SSBM query executions on GPUs are dominated by the PCIe bandwidth and the device memory bandwidth,
and query kernels with intensive irregular data accesses are unable to fully exploit the device memory bandwidth.

\item The performance of SSBM queries is significantly accelerated by the transition from PCIe 2.0 to PCIe 3.0,
but does not improve correspondingly across the three recent generations of NVIDIA GPUs (GTX 480, 580, and 680).
This implies that SSBM queries are not likely to benefit from possible advancements of GPU computing hardware
in the near future.
\end{itemize}


