In this section,
we present the implementation of our GPU query execution engine,
which is specifically designed for executing star schema queries such as those in the SSBM benchmark.
The engine is essentially an automatic translator from SQL to CUDA programs written in the C language.
Its components for SQL parsing and query optimization
are based on the code of our existing YSmart SQL-to-MapReduce translator \cite{DBLP:conf/icdcs/LeeLHWHZ11}.


\subsection{Data Layout and Engine Structure}
%When processing queries on GPU,
%data need to be copied from host memory to GPU's device memory before processing.
%As data transfer is an expensive operation,
Minimizing unnecessary data transfers between host memory and GPU device memory
is an important goal when designing the data layout.
We therefore choose a column-store format for our GPU query engine,
given its performance advantages in data warehouse environments.
In our implementation, each table is stored as a collection of columns,
and each column is stored in a separate file on disk.
Tuples can be reconstructed by merging the attributes at the same row position across columns.
Our engine uses late materialization \cite{abadi:materialization}, deferring tuple reconstruction
until the final results of the query are projected.

Our query engine adopts a push-based,
block-oriented execution model that executes a given query plan tree in post-order.
Because PCIe data transfers are expensive, we keep data in GPU device memory as long as possible,
until all operations on the data are done.
This principle is similar in spirit to the approach of \cite{thomas:llvm},
which keeps data in the CPU cache while executing a query.

\subsection{Query Operators}
We have implemented the four data warehouse operators required by SSBM queries: selection, join, aggregation, and sort.
For each operator, we implement a representative algorithm based on the state of the art.
%Our purpose is not to compare the performance of different algorithms, but to examine how different query characteristics affect query performance.

\textbf{Selection.}
Selection is implemented in two steps.
The first step sequentially scans all the columns referenced by the predicates and evaluates the predicates.
The predicate evaluation results are stored in a vector used to filter the projected columns.
The second step sequentially scans the filter
and outputs attributes from the projected columns only at positions where the filter indicates that the predicates hold.\\

\textbf{Join.}
We implement the unpartitioned hash join algorithm, which has been shown to be
competitive with other algorithms and
effective at handling data skew on multi-core and many-core
platforms \cite{yinan:join, tim:uva}.
It is implemented in three steps:
building a hash table,
probing the hash table,
and generating join results.
The hash table contains two parts, hash buckets and hash entries, both stored in a contiguous memory region.
Each hash bucket records the number of hash entries belonging to it
and the starting position of its first entry; each entry contains a (key, value) pair.
Hash conflicts are resolved by separate chaining.
Since dimension tables are usually small, we keep hash conflicts rare by making the hash table
twice the cardinality of the dimension table.
\\

\textbf{Aggregation.}
We implement hash-based aggregation, which involves two steps.
The first step sequentially scans the group-by keys and computes their hash values.
The second step sequentially scans the group-by keys and the aggregate columns
to generate the aggregation results.\\

\textbf{Sort.}
The sort operator sorts the keys first. After the keys are sorted,
the remaining result columns can be projected according to the sorted key order, which is a gather operation.
Different algorithms have been proposed to effectively sort the keys.
We adopt the GPU merge sort algorithm presented in \cite{satish:fastsort}.
\\

When implementing each operator, synchronization is needed among threads
that write to the same memory region at the same time.
Because synchronization is expensive on GPUs,
we reduce the number of synchronization operations as much as possible
by letting each thread write its results to a separate region of the result buffer.
Take selection as an example.
In the first step, as each thread evaluates the predicates,
it also counts how many of its evaluated tuples satisfy them.
After the first step, each thread thus knows how many tuples it will output.
A prefix sum over these counts then gives
each thread its starting output position in the result buffer, so that in the second step it can write its attributes
to the buffer directly, without synchronization.
This principle applies to the implementation of the other operators as well.


