We have comprehensively evaluated GPU query execution performance
with detailed analysis and comparisons between GPUs and CPUs.
We conclude that the reasons why GPUs have not been adopted in data warehouse systems
include: 1) GPUs significantly outperform CPUs only for certain kinds of queries, and only when the data are already available in pinned memory;
2) considering both performance and portability, current programming models do not adequately support warehousing workloads;
and 3) the performance of warehousing queries does not improve in step with the rapid advancement of GPU hardware.
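As a minimal illustrative sketch (not taken from our query engine), the pinned-memory dependence above comes from the CUDA requirement that asynchronous PCIe transfers use page-locked host buffers; a column allocated with \texttt{cudaHostAlloc} can then be streamed to the device while a scan kernel runs:

```cuda
// Hypothetical sketch: pinned host memory enables asynchronous PCIe
// transfers, which can overlap column copies with kernel execution.
#include <cuda_runtime.h>

// Simple scan-style predicate kernel over one integer column.
__global__ void scanFilter(const int *col, int *out, int n, int lo, int hi) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = (col[i] >= lo && col[i] <= hi) ? 1 : 0;
}

int main() {
    const int n = 1 << 20;
    int *hCol, *dCol, *dOut;
    // Pinned (page-locked) allocation: required for truly asynchronous
    // copies and higher PCIe bandwidth than pageable host memory.
    cudaHostAlloc(&hCol, n * sizeof(int), cudaHostAllocDefault);
    for (int i = 0; i < n; i++) hCol[i] = i % 100;
    cudaMalloc(&dCol, n * sizeof(int));
    cudaMalloc(&dOut, n * sizeof(int));

    cudaStream_t s;
    cudaStreamCreate(&s);
    // Copy and kernel are issued on one stream here; with several streams,
    // later column chunks could transfer while earlier chunks are scanned.
    cudaMemcpyAsync(dCol, hCol, n * sizeof(int), cudaMemcpyHostToDevice, s);
    scanFilter<<<(n + 255) / 256, 256, 0, s>>>(dCol, dOut, n, 10, 50);
    cudaStreamSynchronize(s);

    cudaFree(dCol); cudaFree(dOut); cudaFreeHost(hCol); cudaStreamDestroy(s);
    return 0;
}
```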

However, our analysis and comparisons suggest two clear R\&D directions for adopting GPUs where they fit best.
First, a CPU/GPU hybrid query engine can maximize the efficiency of the hardware combination by scheduling tasks either at the query level or at the operator level.
Second, GPUs should run the query engine of a main-memory database system to support real-time business intelligence analytics, with minimal interference to transactions executing on CPUs.
Furthermore, the role of GPUs could also change
considering the potential of the NVIDIA GPUDirect technology,
which enables more efficient communication between GPU devices and storage devices.
An important future research topic is how to let GPUs directly process data stored on permanent storage media.


The query engine is publicly available and can be accessed at
\url{http://code.google.com/p/gpudb/}.



\begin{comment}
We summarize major findings as follows.

1. Data and query related issues:
\begin{itemize}
\item A GPU-friendly table structure should choose column data types and widths that avoid data alignment issues.

\item The most time-consuming query on GPUs is dominated by a hash join with both high selectivity and irregular memory accesses to dimension table data (e.g., Q3.1).

\end{itemize}


2.  Software related issues:
\begin{itemize}
\item Although data compression generally decreases data transfer time, it unfortunately cannot accelerate
the kernel execution of the aforementioned query type (e.g., Q3.1).
\item While the invisible join can significantly accelerate the kernel execution of the aforementioned query type,
it degrades the performance of queries with very low selectivity (e.g., Q3.2--Q3.4).
\item The CUDA UVA programming technique is most suitable for accelerating queries with low selectivities
(e.g., Q3.2--Q3.4, Q2.2--Q2.3), since their one-pass, scan-dominated nature creates better opportunities to overlap data transfers with kernel execution.
\item OpenCL is a more suitable programming model, given its performance comparable to CUDA and its better functional portability.

\end{itemize}

3. GPU architecture related issues:
\begin{itemize}
\item NVIDIA and AMD GPUs with similar PCIe transfer bandwidth and GPU device memory bandwidth exhibit similar
PCIe transfer overhead and kernel execution times for processing warehousing queries, but the NVIDIA GPU delivers better
overall performance.

\item SSBM executions on GPU devices are dominated by PCIe bandwidth and device memory bandwidth,
and query kernels with intensive irregular data accesses cannot fully exploit the device memory bandwidth.

\item The performance of SSBM queries is significantly accelerated by the transition from PCIe 2.0 to PCIe 3.0,
but does not improve accordingly across the three most recent generations of NVIDIA GPUs (GTX 480, 580, and 680).
This implies that SSBM queries are unlikely to benefit from future advances in GPU hardware
in the near term.
\end{itemize}

\end{comment}


