
\begin{figure}[b]
\centering
\epsfig{file=graph/setup/gpudb.eps,width=0.30\textwidth}
\caption{GPU Query Engine Architecture}
\vspace{-0.15in}
\label{fig:gpudb}
\end{figure}


\subsection{Engine Structure and Storage Format}
Figure \ref{fig:gpudb} shows the architecture of our query engine.
It consists of an SQL parser, a query optimizer, and an execution engine.
The parser and optimizer share the same code base with YSmart \cite{DBLP:conf/icdcs/LeeLHWHZ11}.
The execution engine consists of a code generator and pre-implemented query operators written in CUDA/OpenCL.
The code generator can generate either CUDA driver programs or OpenCL driver programs,
which are then compiled and linked with the pre-implemented operators.

The engine adopts a push-based,
block-oriented execution model that executes a given query plan tree in post-order.
It keeps data in GPU device memory as long as possible,
until all operations on the data have finished.
%This principle is similar to the idea proposed in \cite{thomas:llvm}
%which keeps data in CPU cache when executing a query.

We choose a column store for our engine since we target data warehousing workloads.
In our implementation, each table is stored as a collection of columns,
where each column is stored in a separate file on the disk.
Our engine uses the late materialization technique \cite{abadi:materialization}
and performs tuple re-construction through a special GPU kernel
when projecting the final results.
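The gather step behind late materialization can be sketched as follows. This is a serial illustration of the per-thread logic, not the engine's actual kernel; the function and variable names are assumptions:

```c
#include <assert.h>
#include <stddef.h>

/* Serial sketch of the gather step used for late materialization: only
 * when the final result is projected does each selected row ID pull its
 * value out of a base column.  On the GPU, one thread handles one output
 * slot; the names here are illustrative, not the engine's API. */
void gather_column(const int *column, const int *row_ids, size_t n_ids,
                   int *out)
{
    for (size_t i = 0; i < n_ids; i++)
        out[i] = column[row_ids[i]];
}
```

Because each output slot depends only on its own row ID, the gather parallelizes trivially across GPU threads.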

In our engine, the code executed on the CPU is responsible for allocating and releasing GPU device memory,
transferring data between the host memory and the GPU
device memory, and launching the GPU kernels.


\subsection{Query Operators}
Our engine implements the four operators required by star schema queries,
each with representative state-of-the-art algorithms.
%Our purpose is not to compare the performance of different algorithms, but to examine how different query characteristics affect query performance.

\textbf{Selection.}
The first step of selection is to sequentially scan all the columns referenced in the predicates
and evaluate the predicates, storing the result in a 0-1 vector.
The second step is to use the vector to filter the projected columns.
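The two steps above can be sketched serially as follows; in the engine each step runs as a GPU kernel over the column, and the predicate ``value less than threshold'' and all names are illustrative assumptions:

```c
#include <assert.h>
#include <stddef.h>

/* Serial sketch of the two selection steps.  The engine runs each step as
 * a separate GPU kernel; this single-threaded version shows the logic. */
size_t select_lt(const int *col, size_t n, int threshold,
                 char *flags, int *out)
{
    /* Step 1: evaluate the predicate into a 0-1 vector. */
    for (size_t i = 0; i < n; i++)
        flags[i] = (col[i] < threshold) ? 1 : 0;

    /* Step 2: use the vector to filter the projected column. */
    size_t k = 0;
    for (size_t i = 0; i < n; i++)
        if (flags[i])
            out[k++] = col[i];
    return k;  /* number of qualifying rows */
}
```

Separating predicate evaluation from filtering lets the first pass run with one thread per element and no inter-thread coordination.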

\textbf{Join.}
We implement the unpartitioned hash join algorithm, which has been shown to
perform well for star schema queries on multi-core and many-core
platforms \cite{eth:join,yinan:join,tim:uva}.
We implement the hash table with both cuckoo hashing \cite{DBLP:journals/tog/AlcantaraSASMOA09}
and chained hashing.
For chained hashing, hash conflicts can in theory be avoided by making the hash table
twice as large as the cardinality of the input data, given a perfect hash function \cite{DBLP:books/cu/MotwaniR95}.
In our study, chained hashing performs better than cuckoo hashing.
This is because star schema queries have low join selectivities,
and cuckoo hashing needs more key comparisons than chained hashing when there is no
match for the key in the hash table.
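A chained hash table sized at twice the build-side cardinality can be sketched as below. The layout (a bucket array of chain-head indices plus a flat entry array) is an assumption for illustration, not necessarily the engine's:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Serial sketch of a chained hash table with 2x as many buckets as
 * build-side tuples.  All types and names here are illustrative. */
typedef struct { int key; int val; int next; } Entry;  /* next = -1 ends a chain */
typedef struct { int *heads; Entry *entries; size_t n_buckets, n_used; } Chained;

void chained_init(Chained *h, size_t cardinality)
{
    h->n_buckets = 2 * cardinality;
    h->heads = malloc(h->n_buckets * sizeof(int));
    h->entries = malloc(cardinality * sizeof(Entry));
    h->n_used = 0;
    for (size_t i = 0; i < h->n_buckets; i++)
        h->heads[i] = -1;
}

void chained_insert(Chained *h, int key, int val)
{
    size_t b = (size_t)key % h->n_buckets;
    Entry *e = &h->entries[h->n_used];
    e->key = key;
    e->val = val;
    e->next = h->heads[b];          /* chain in front on a conflict */
    h->heads[b] = (int)h->n_used++;
}

/* Probe: with a low join selectivity, most probes find an empty bucket
 * (heads[b] == -1) and return immediately without any key comparison. */
int chained_probe(const Chained *h, int key, int *val)
{
    for (int i = h->heads[(size_t)key % h->n_buckets]; i != -1;
         i = h->entries[i].next)
        if (h->entries[i].key == key) { *val = h->entries[i].val; return 1; }
    return 0;
}
```

The immediate exit on an empty bucket is exactly the case cuckoo hashing cannot match, since it must inspect multiple candidate locations before declaring a miss.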

\textbf{Aggregation.}
We implement hash-based aggregation, which involves two steps.
The first step is to sequentially scan the group-by keys and calculate the hash value for each key.
The second step is to sequentially scan the hash value and the aggregate columns
to generate aggregation results.
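The logic can be sketched serially as follows for a SUM aggregate. For brevity the two scans are fused into one loop and conflicts are resolved by linear probing into a fixed-size table; these choices, like all names here, are illustrative assumptions rather than the engine's implementation (on the GPU, the accumulation into a slot would be an atomic add):

```c
#include <assert.h>
#include <stddef.h>

#define N_SLOTS 16  /* assumed hash table size for the illustration */

/* Serial sketch of hash-based aggregation computing SUM per group-by key. */
void hash_aggregate(const int *keys, const int *vals, size_t n,
                    int *slot_key, long *slot_sum, char *slot_used)
{
    for (size_t i = 0; i < n; i++) {
        size_t s = (size_t)keys[i] % N_SLOTS;      /* hash the group-by key */
        while (slot_used[s] && slot_key[s] != keys[i])
            s = (s + 1) % N_SLOTS;                 /* linear probing */
        slot_used[s] = 1;
        slot_key[s] = keys[i];
        slot_sum[s] += vals[i];                    /* atomic add on the GPU */
    }
}
```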

\textbf{Sort.}
The sort operator sorts the keys first. After the keys are sorted,
the results can be projected based on the sorted keys, which is a gather operation.
Since sort is usually performed after aggregation, the number of tuples
to be sorted is usually small, which can be handled efficiently by bitonic sort.
%We adopt the GPU merge sort algorithm presented in \cite{satish:fastsort}.
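The bitonic sorting network can be sketched serially as follows (for an input length that is a power of two); on the GPU, every compare-and-swap of the innermost loop is an independent thread working in shared memory, which is why the network suits the small inputs left after aggregation:

```c
#include <assert.h>
#include <stddef.h>

/* Serial sketch of bitonic sort; n must be a power of two.  The two outer
 * loops enumerate the phases of the network, and all iterations of the
 * inner loop within one phase are independent (one GPU thread each). */
void bitonic_sort(int *a, size_t n)
{
    for (size_t k = 2; k <= n; k <<= 1)           /* bitonic sequence size */
        for (size_t j = k >> 1; j > 0; j >>= 1)   /* compare distance */
            for (size_t i = 0; i < n; i++) {
                size_t p = i ^ j;                 /* partner element */
                if (p > i) {
                    int asc = (i & k) == 0;       /* direction of this block */
                    if ((asc && a[i] > a[p]) || (!asc && a[i] < a[p])) {
                        int t = a[i]; a[i] = a[p]; a[p] = t;
                    }
                }
            }
}
```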

\begin{comment}
It is implemented in three steps:
building a hash table,
probing the hash table,
and generating join results.
The hash table contains two parts: hash buckets and hash entries, both of which are stored in a continuous memory region.
Each hash bucket contains the number of hash entries belong to this bucket
and the starting position for its first entry, where each entry contains a (key, value) tuple.
Hash conflicts are resolved through separate chaining technique.
\end{comment}


\subsection{Implementation Details}
\textbf{Use of GPU Memory.} Our engine utilizes both device memory and 
local shared memory. For selection, only device memory
is utilized. For join and aggregation, the hash table is placed in the local
shared memory when it is small enough to fit there.
For sort, all the keys are sorted and merged in the local shared memory.

\textbf{Data Layout.} Each column is stored in a contiguous region of GPU device memory
in the Array-of-Structures (AOS) format.
The Structure-of-Arrays (SOA) format, which can provide
coalesced accesses when scanning irregular data,
does not provide
performance benefits for our workloads, because accesses to irregular data (string data from dimension tables)
are dominated by random accesses during join operations.
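The difference between the two layouts can be illustrated with the flat-index formulas for a fixed-length string column (an assumed layout for illustration only): under SOA, threads reading the same character position of consecutive rows touch adjacent addresses and coalesce, but under the random row accesses of a join neither layout coalesces, so the simpler AOS addressing suffices.

```c
#include <assert.h>
#include <stddef.h>

#define LEN 8  /* assumed fixed string length for the illustration */

/* AOS: the characters of one value are contiguous. */
size_t aos_index(size_t row, size_t ch)                 { return row * LEN + ch; }
/* SOA: the same character position across all rows is contiguous. */
size_t soa_index(size_t row, size_t ch, size_t n_rows)  { return ch * n_rows + row; }
```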

\textbf{GPU Thread Configurations.} The thread block size is configured to be at least 128, and
the maximum number of thread blocks is configured to be 4096. Each thread in a thread block
processes a set of elements from the input data based on its global thread ID.
For example, with 256 threads per block and 2048 thread blocks,
the thread with global ID 0 processes the elements at indices 0, 2048*256,
2*2048*256, and so on, until the end of the data.
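This strided assignment can be emulated serially as below; a thread with global ID \texttt{tid} out of \texttt{n\_threads} total (blocks times threads per block) visits indices \texttt{tid}, \texttt{tid + n\_threads}, \texttt{tid + 2*n\_threads}, and so on. Summing is just a stand-in workload, and the names are assumptions:

```c
#include <assert.h>
#include <stddef.h>

/* Serial emulation of one GPU thread's strided traversal of the input. */
long strided_sum(const int *data, size_t n, size_t tid, size_t n_threads)
{
    long s = 0;
    for (size_t i = tid; i < n; i += n_threads)
        s += data[i];
    return s;
}
```

Striding by the total thread count keeps every pass over the data load-balanced regardless of the input size.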

\textbf{GPU Thread Output.}
Our engine avoids synchronization among threads when they write to the same memory region at the same time.
This is achieved by first letting each thread count the number of results it will generate,
and then performing a prefix sum over the counts.
In this way, each thread knows its starting position in the region and can
write its results without synchronization.
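The offset computation can be sketched as follows: an exclusive prefix sum over the per-thread counts yields the position where each thread may write without coordination. The engine would compute this with a parallel prefix-sum kernel; this loop is the serial equivalent, with illustrative names:

```c
#include <assert.h>
#include <stddef.h>

/* Serial sketch of the exclusive prefix sum over per-thread result counts. */
size_t exclusive_prefix_sum(const size_t *counts, size_t n_threads,
                            size_t *offsets)
{
    size_t total = 0;
    for (size_t t = 0; t < n_threads; t++) {
        offsets[t] = total;     /* thread t starts writing here */
        total += counts[t];
    }
    return total;               /* total size of the output region */
}
```

The returned total also tells the host exactly how much device memory the output region needs before the write kernel is launched.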



