There is already a body of research on optimizing various database operations on GPUs
\cite{DBLP:conf/vldb/BandiSAA04,
DBLP:conf/sigmod/GovindarajuLWLM04,
he:primitive,
cwi:uva,
jimgray:terasort,
satish:fastsort,
he:compression,
ross:olap,
DBLP:journals/pvldb/HeY11,
he:gdb,
tim:uva,
DBLP:journals/pvldb/WangHLWZS12,
DBLP:journals/pvldb/AoZWSWLLL11,
Haicheng:fusion,
DBLP:conf/icde/LiebermanSS08}.
Our unique contribution in this paper is a comprehensive study of complex
data warehousing queries under different software optimizations and hardware configurations.
Several existing studies are related to our software optimization work.
Data compression on GPU has been studied in \cite{he:compression},
and transfer overlapping has been studied in \cite{tim:uva,cwi:uva}.
Compared to these studies, our work focuses on how
these techniques can optimize different types of complex queries.

A cost model for GPU query processing was proposed in \cite{he:gdb}. 
The essential difference between that model and ours
lies in how the time spent on GPU device memory access is estimated,
which is the most important step in accurately estimating the cost of GPU query processing.
The previous model assumes a fixed uncoalesced bandwidth that applies to all
uncoalesced memory accesses. However, this assumption does not hold on
current NVIDIA GPUs, where uncoalesced accesses with
certain patterns can still achieve 100\% memory bus utilization \cite{cuda:memory}.
Our model takes a hardware-feature-oriented methodology
to estimate the actual memory transactions in GPU device memory,
which yields a better estimate of GPU memory bus utilization.
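The distinction can be illustrated with a minimal sketch. Assuming, as on many NVIDIA GPUs, 32-thread warps and 128-byte aligned memory segments, the number of device-memory transactions a warp issues is the number of distinct segments its accesses touch; the constants and function names below are illustrative assumptions for exposition, not the exact model used in this paper. A fixed-uncoalesced-bandwidth model would charge the permuted pattern below the same cost as the strided one, even though it touches only one segment.

```python
WARP_SIZE = 32        # threads per warp (assumed)
SEGMENT_BYTES = 128   # bytes per coalesced memory transaction (assumed)

def warp_transactions(byte_addresses):
    """Count distinct 128-byte segments touched by one warp's accesses."""
    segments = {addr // SEGMENT_BYTES for addr in byte_addresses}
    return len(segments)

def bus_utilization(byte_addresses, bytes_per_access):
    """Fraction of transferred bytes the warp actually requested."""
    useful = len(byte_addresses) * bytes_per_access
    transferred = warp_transactions(byte_addresses) * SEGMENT_BYTES
    return useful / transferred

# Fully coalesced: 32 consecutive 4-byte words -> one transaction.
coalesced = [4 * t for t in range(WARP_SIZE)]
# Permuted within one segment: "uncoalesced" order, yet still one transaction.
permuted = list(reversed(coalesced))
# Strided by 128 bytes: each thread hits its own segment -> 32 transactions.
strided = [128 * t for t in range(WARP_SIZE)]

print(warp_transactions(coalesced))  # 1
print(warp_transactions(permuted))   # 1
print(warp_transactions(strided))    # 32
print(bus_utilization(strided, 4))   # 0.03125
```

Counting actual transactions per warp, rather than applying a single penalty to every uncoalesced access, is what allows a model to recognize patterns that still reach full bus utilization.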


\begin{comment}

In this section, we will briefly describe the previous research work 
related to analytical query processing on GPU.
\subsection{Analytical Query Processing on GPU}
\subsubsection{Join}
As join is considered the most time-consuming operation in the data warehouse environment,
many studies have focused on accelerating its performance on GPU.
Researchers in \cite{he:primitive} design a set of highly optimized GPU primitives to implement relational
join algorithms that efficiently utilize the GPU's high parallelism and high memory bandwidth.
The GPU implementation is 2-7X faster than its multi-core CPU counterpart.
To better utilize the memory bandwidth, a multi-channel data processing technique is proposed in \cite{cwi:uva} to efficiently execute foreign-key joins in CPU/GPU hybrid environment.
The sequentially accessed table is placed in host memory while the randomly accessed table is placed in device memory.
The data are fetched to GPU from different channels when join is executed on GPU.
The work in \cite{tim:uva} adopts a similar data placement strategy as \cite{cwi:uva} but further improves the join performance
by better utilizing the PCI-e bandwidth between CPU and GPU through CUDA Unified Virtual Address \cite{cuda}.

\subsubsection{Sort}
Sorting is another key operation in data warehouse systems.
Existing work has proposed efficient designs and implementations of various sorting algorithms on GPU.
Researchers in \cite{jimgray:terasort} present a hybrid sorting architecture to efficiently sort large amounts of data with wide keys on disk.
The sorting algorithm implemented on GPU is a hybrid radix-bitonic sort, which is fast and offers good price/performance.
The work in \cite{satish:fastsort} describes the design and implementation of SIMD-friendly
radix sort and merge sort algorithms on both CPU and GPU devices. Merge sort is expected to outperform radix sort on future many-core
architectures due to its SIMD-friendly nature.

\subsubsection{Cost Model}
An accurate cost model is important for query optimization. In the GPU scenario, the most important part 
is to accurately estimate the kernel execution cost of a given query on GPU.
Researchers in \cite{he:gdb} view the GPU as a black box and model the kernel execution cost of a query as computation cost plus
memory access cost, where the computation cost is estimated using a calibration-based method and the memory access cost is
estimated based on whether the memory accesses can be coalesced.
To generate a query execution plan on GPU, a plan must be first generated for CPU
and then the cost model can help decide whether each operator of the query plan should be executed on CPU, GPU or in a hybrid way.
The work in \cite{cwi:uva} proposes a theoretical model to estimate the execution cost of unpartitioned hash join 
on GPU based on memory access patterns.

\subsubsection{Other Work}
Several algorithms are proposed in \cite{2004:query} to accelerate selection and aggregation operations
by efficiently utilizing the GPU's pixel processing engines.
Database compression techniques,
which can effectively reduce memory footprints,
have been evaluated on GPU for column-store databases in \cite{he:compression}. The work demonstrates that query processing performance can be further improved
through efficient compression, since the cost of transferring data over the low-bandwidth PCI-e bus is significant.
The work \cite{ross:olap} proposes algorithms to resolve GPU shared memory bank conflicts for foreign-key join and aggregation by
reordering the fact table.
Researchers in \cite{heimel:plan} take a different approach, using the GPU as an accelerating processor for query optimization instead of query execution.

%\subsection{Limitations of Existing Work}
%Previous research has several limitations when we consider executing complex analytical queries on GPU.
%\begin{itemize}
%\item Some query characteristics that have a great influence on query performance on GPU are not studied in the existing literature,
%for instance, the number of projected columns and the attribute sizes of the projected columns for the join operation.
%\item The technique which is effective in one query operator may not be useful for a complex query. For instance, the UVA technique
%proposed for foreign-key join will be ineffective if the results of the join will be used by another join operator.
%\item What is the best query execution plan in CPU/GPU hybrid environment is unknown.
%\end{itemize}

%We address these limitations by proposing an analytical model for complex query execution on GPU which will be introduced in Section 3.

\end{comment}
