In this section we describe the measurement workloads and their parallel implementations.

In a data warehouse environment, data are organized according to a star schema \cite{stonebraker:dw}, and
the typical analytical query is a star join query, which performs several foreign-key joins between the fact table
and the dimension tables, followed by aggregation and sort.
To understand the impact of various characteristics on query performance on the GPU,
we start our measurements with single query operators. We choose selection, join (hash join),
aggregation and sort as the representative operators and comprehensively study each operator's performance under
various conditions.
We then measure the performance of analytical queries on the GPU using the Star Schema Benchmark queries,
which are representative of the query characteristics in a data warehouse environment.

\begin{comment}

\begin{table*}
\centering
\caption{Notations for the Cost Model}
\begin{tabular}{|c|c|} \hline
Notations&Descriptions \\ \hline
\begin{math}B_p\end{math}&The transfer bandwidth between host memory and device memory\\\hline
\begin{math}B_g\end{math}&The bandwidth of the device memory\\\hline
\begin{math}C_g\end{math}&The size of a cache line in the device memory\\\hline
\begin{math}T\end{math}&The total cost of executing a query on GPU\\\hline
\begin{math}N\end{math}&The number of operations of executing a query on GPU\\\hline
\begin{math}T_t\end{math}&The PCI-e transfer cost of executing a query on GPU\\\hline
\begin{math}T_s\end{math}&The sequential memory access cost when executing a query on GPU\\\hline
\begin{math}T_r\end{math}&The random memory access cost when executing a query on GPU\\\hline
\end{tabular}
\label{table:notation}
\end{table*}

The total execution cost of a query depends on the query execution plan and the execution cost
of each query operator, the estimation of which needs the statistics about the relations, like data size
and selectivity \cite{microsoft:model}. We assume these statistics are available and can be directly used
by our model.
We will first introduce how to model the cost of each operator
and then describe how to calculate the cost for the entire query.
The notations used in this section are listed in Table~\ref{table:notation}.

\end{comment}

\subsection{Single Query Operator}

\subsubsection{Selection}
Selection can be represented in the following format:\\
\\\textbf{Select} L1, L2,...\\
\textbf{From} R\\ 
\textbf{Where} predicate;
\\

The semantics of the selection operator are simple: it sequentially scans a table
and projects the tuples that satisfy the predicate.
The parallel algorithm that implements selection on the GPU consists of the following steps:\\

\textbf{Step 1: }Each thread reads one tuple from the predicate columns and evaluates the
predicate. A filter vector is used to store the result of the predicate evaluation.
Each thread also maintains a local counter and increments it if the current tuple satisfies the predicate.
This process is repeated until all the tuples have been evaluated.

\textbf{Step 2: }Perform a prefix sum over the threads' local counters
to obtain the total number of projected tuples and
each thread's starting output position in the result buffer.

\textbf{Step 3: }Each thread evaluates one element of the filter vector.
If the value indicates that the tuple satisfies the predicate, the thread reads the tuple from the
projected columns and writes it to the result buffer.
This process is repeated until all the elements of the filter vector have been processed.\\
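The three steps above can be sketched sequentially in Python; the thread count, function names and data are illustrative, not the measured GPU implementation:

```python
# Sequential sketch of the three-step parallel selection over columnar
# data; "threads" are simulated as chunks of the input (illustrative).
from itertools import accumulate

def parallel_selection(predicate_col, projected_cols, pred, num_threads=4):
    n = len(predicate_col)
    chunk = (n + num_threads - 1) // num_threads

    # Step 1: evaluate the predicate into the filter vector and count
    # qualifying tuples per thread.
    filter_vec = [1 if pred(v) else 0 for v in predicate_col]
    local_counts = [sum(filter_vec[t*chunk:(t+1)*chunk])
                    for t in range(num_threads)]

    # Step 2: exclusive prefix sum gives each thread's start offset in
    # the result buffer; the total is the result cardinality.
    offsets = [0] + list(accumulate(local_counts))[:-1]
    total = sum(local_counts)

    # Step 3: each thread scatters its qualifying tuples to the result
    # buffer starting at its offset.
    result = [[None] * total for _ in projected_cols]
    for t in range(num_threads):
        pos = offsets[t]
        for i in range(t*chunk, min((t+1)*chunk, n)):
            if filter_vec[i]:
                for c, col in enumerate(projected_cols):
                    result[c][pos] = col[i]
                pos += 1
    return result

price = [5, 20, 7, 30, 1, 25]
ids = [0, 1, 2, 3, 4, 5]
out = parallel_selection(price, [ids, price], lambda v: v > 10)
```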

The key factors that affect selection performance include the selectivity, the projected tuple size, the number of predicates
and the complexity of each predicate.

\begin{comment}
In Step 1 all the predicate columns are sequentially scanned and the filter vector is sequentially written. There are
$||R||$*m predicate evaluations and $||R||$*(m-1) logic operations to generate the filter vector. 
In Step 2 the cost of the prefix sum is trivial and not related to the query characteristics.
In Step 3 the filter
vector is sequentially scanned but the projected columns are randomly read and written. The filter vector is evaluated $||R||$
times.

\textbf{Cost calculation}

Based on the above analysis, we can calculate the cost of the selection operator.
The following notations are used.\\
\\
$||R||$ - cardinality of the table R \\ 
n - number of projected columns\\
\begin{math}K_i\end{math} - the attribute size of the ith project column\\
m - number of predicate columns\\
\begin{math}P_i\end{math} - the attribute size of the ith predicate column\\
r - selectivity of the predicates\\

The number of operations:
\begin{displaymath}
N = ||R||*m + ||R||*(m-1) + ||R||
\end{displaymath}

The sequential device memory access cost:
\begin{displaymath}
T_s = \dfrac {||R||*\sum_{i=1}^{m}P_i + 2*||R||} {B_g} 
\end{displaymath}

The random device memory access cost:
\begin{displaymath}
T_r = \dfrac {2*r*||R||*\sum_{i=1}^{n}K_i * C_g} {B_g}
\end{displaymath}

The PCI-e transfer cost is:
\begin{displaymath}
T_t = \dfrac{||R||*(\sum_{i=1}^{n}K_i + \sum_{i=1}^{m}P_i) + r*||R||*(\sum_{i=1}^{n}K_i)} {B_p}
\end{displaymath}
\end{comment}

\subsubsection{Join}
Join can be represented in the following format:\\
\\\textbf{Select} R1,R2..., S1,S2...\\
\textbf{From} R, S\\
\textbf{Where} R.key = S.key;
\\

Our measurement focuses on two representative hash join algorithms: the unpartitioned hash join
and the radix hash join.

\textbf{Unpartitioned hash join}

The unpartitioned hash join first builds a hash table for the entire dimension table. Then it sequentially
scans the fact table and probes the hash table to find a match. A tuple is projected by the operator
only if a match is found during the probe. The parallel algorithm that implements the unpartitioned
hash join on the GPU consists of the following steps:\\

\textbf{Step 1: }Each thread reads one primary key from the dimension table and calculates its hash value,
then increments the count of keys hashed to that value. After all the primary keys have been processed,
perform a prefix sum to calculate the starting write position for each hash value.

\textbf{Step 2: }Each thread reads one primary key from the dimension table and calculates its hash value.
The primary key and the row id are inserted into the corresponding bucket of the hash table. This process is
repeated until all the primary keys have been processed.

\textbf{Step 3: }Each thread reads one foreign key from the fact table and calculates its hash value, then probes the
hash table to find a match. If there is a match, a filter vector stores the row id of the matching primary key
in the dimension table.
Each thread also maintains a local counter and increments it whenever one of its keys finds a match.
This process is repeated until all the foreign keys have been processed.

\textbf{Step 4: }Perform a prefix sum over the threads' local counters to calculate the number of result tuples
and the starting output position for each thread in the result buffer. 

\textbf{Step 5: }Each thread evaluates one element of the filter vector. If the value indicates a join match, the thread reads the tuples from
the projected columns of the fact table and the dimension table and writes them to the result buffer. This process is
repeated until all the elements of the filter vector have been evaluated.\\
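The build and probe phases above can be sketched sequentially in Python; the bucket count, modulo hash and names are illustrative assumptions, and the prefix-sum output placement of Steps 4--5 is collapsed into simple appends:

```python
# Sequential sketch of the unpartitioned hash join steps, using a
# simple modulo hash and chained buckets (sizes/names illustrative).
NUM_BUCKETS = 8

def build_hash_table(dim_keys):
    # Steps 1-2: insert (key, row id) pairs into their hash buckets.
    buckets = [[] for _ in range(NUM_BUCKETS)]
    for row_id, key in enumerate(dim_keys):
        buckets[key % NUM_BUCKETS].append((key, row_id))
    return buckets

def probe_and_materialize(fact_keys, buckets, fact_col, dim_col):
    # Step 3: probe; the filter vector stores the matching dimension
    # row id, or -1 when no match is found.
    filter_vec = []
    for key in fact_keys:
        match = next((rid for k, rid in buckets[key % NUM_BUCKETS]
                      if k == key), -1)
        filter_vec.append(match)

    # Steps 4-5: in the parallel version a prefix sum over per-thread
    # match counts assigns output positions; here we simply append.
    result = []
    for i, rid in enumerate(filter_vec):
        if rid >= 0:
            result.append((fact_col[i], dim_col[rid]))
    return result

buckets = build_hash_table([10, 11, 12])
joined = probe_and_materialize([11, 99, 10], buckets,
                               [100, 200, 300], ['a', 'b', 'c'])
```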

\begin{comment}
In Step 1, the primary keys of the dimension table are sequentially scanned but the keys and row ids are randomly written to the hash table.
A hash value is calculated for each primary key. In Step 2, the foreign keys of the fact table are sequentially scanned 
and the filter vector is sequentially written. A hash value is calculated for each foreign key.
In Step 3, the filter vector is sequentially scanned but the projected tuples are randomly read and written.
Each element in the filter vector is evaluated.\\
\end{comment}

\textbf{Radix hash join}

The radix hash join first partitions both the dimension table and the fact table into the same number of partitions.
Then each pair of partitions is joined to generate the projected columns. For good
performance, the average partition size of the dimension table should not exceed the cache size. 
The parallel algorithm that implements the radix hash join consists of the following steps:\\

\textbf{Step 1: }Determine the number of partitions based on the cache size and the size of the dimension table.
Partition the two tables.

\textbf{Step 2: }For each pair of partitions, read the smaller table into the cache. Each thread then reads one
foreign key and tries to find a match by searching the dimension table partition in the cache. As in the
unpartitioned hash join, a filter vector stores whether a match is found.
This process is repeated until all the elements have been processed.

\textbf{Step 3: }Each thread evaluates one element of the filter vector. If the value indicates a join match, the thread reads the tuples from
the projected columns of the fact table and the dimension table and writes them to the result buffer. This process is
repeated until all the elements of the filter vector have been evaluated.\\
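A minimal sequential sketch of the radix join, assuming the partition count is derived from low-order key bits and using a dictionary to stand in for the cached dimension partition (names and the 2-bit radix are illustrative):

```python
# Sketch of the radix join: partition both tables on the low-order key
# bits so each dimension partition fits in cache, then join pairwise.
RADIX_BITS = 2  # 4 partitions; chosen from cache size vs. table size

def partition(keys, payloads):
    parts = [[] for _ in range(1 << RADIX_BITS)]
    for key, payload in zip(keys, payloads):
        parts[key & ((1 << RADIX_BITS) - 1)].append((key, payload))
    return parts

def radix_join(dim_keys, dim_payload, fact_keys, fact_payload):
    dim_parts = partition(dim_keys, dim_payload)    # Step 1
    fact_parts = partition(fact_keys, fact_payload)
    result = []
    # Step 2: for each pair of partitions, the (cached) dimension
    # partition is searched for every foreign key in the fact partition.
    for dpart, fpart in zip(dim_parts, fact_parts):
        lookup = dict(dpart)  # stands in for the cached partition
        for key, fval in fpart:
            if key in lookup:                        # match recorded
                result.append((fval, lookup[key]))   # Step 3: project
    return result

res = radix_join([1, 2, 3], ['a', 'b', 'c'], [3, 1, 7], [30, 10, 70])
```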

The key factors that impact join performance include the join selectivity, the number of projected columns from the fact table
and the dimension table, the cardinalities of the dimension table and the fact table, and the attribute sizes of the projected columns.

\begin{comment}
\textbf{Cost calculation}

We use the following notations to model the cost of the join operator.\\
\\
$||R||$ - cardinality of the table R\\
$||S||$ - cardinality of the table S\\
n - number of projected columns from fact table\\
m - number of projected columns from dimension table\\
$R_i$ - the attribute size of the ith projected column from dimension table\\
$S_i$ - the attribute size of the ith projected column from fact table\\

We assume the join keys are 32-bit integers and the primary key and foreign key columns are not projected
by the join operator. The PCI-e transfer cost is:
\begin{displaymath}
T_t = \dfrac{(||R||*\sum_{i=1}^{m}R_i + ||S||*\sum_{i=1}^{n}S_i)*(1+r) + 4*(||R||+||S||)} {B_p}
\end{displaymath} 

\end{comment}

\subsubsection{Aggregation}
Aggregation can be represented in the following format:\\
\\\textbf{Select} L1, L2,...\\
\textbf{From} R\\ 
\textbf{Group By} Key;
\\

Two major aggregation methods exist in current systems: hash-based aggregation and sort-based aggregation.
We focus on hash-based aggregation here.

The hash-based aggregation first calculates a hash value for each group-by key.
The tuples that have the same hash value are placed into the same group. Then the aggregation
function can be calculated for each group. The parallel algorithm that implements hash-based aggregation consists of
the following steps:

\textbf{Step 1: }Each thread reads one group-by key and calculates its hash value, then increments the count of
keys hashed to that value. After all the keys have been processed, perform a prefix sum
to calculate the starting write position for each hash value.

\textbf{Step 2: }Each thread reads one group-by key, calculates the key's hash value, and places the tuple
into the corresponding position. This process is repeated until all the tuples have been processed.

\textbf{Step 3: }Each group is processed by one thread group. Each thread in the thread group reads one tuple 
and calculates the aggregation functions. This process is repeated until all the tuples have been processed.
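The three steps above can be sketched sequentially in Python for a SUM aggregate; the bucket count, the modulo hash and all names are illustrative assumptions:

```python
# Sequential sketch of hash-based aggregation (SUM) over integer
# group-by keys, using a simple modulo hash (illustrative).
from itertools import accumulate

NUM_BUCKETS = 4

def hash_aggregate(keys, values):
    n = len(keys)
    # Step 1: histogram of keys per hash value, then an exclusive
    # prefix sum gives each hash value's region in the reordered table.
    counts = [0] * NUM_BUCKETS
    for k in keys:
        counts[k % NUM_BUCKETS] += 1
    starts = [0] + list(accumulate(counts))[:-1]

    # Step 2: scatter each (key, value) tuple to its group's region.
    cursor = list(starts)
    reordered = [None] * n
    for k, v in zip(keys, values):
        h = k % NUM_BUCKETS
        reordered[cursor[h]] = (k, v)
        cursor[h] += 1

    # Step 3: one "thread group" per hash bucket computes the
    # aggregate; within a bucket, tuples are still split by exact key.
    result = {}
    for h in range(NUM_BUCKETS):
        for k, v in reordered[starts[h]:starts[h] + counts[h]]:
            result[k] = result.get(k, 0) + v
    return result

groups = hash_aggregate([1, 2, 1, 5, 2], [10, 20, 30, 40, 50])
```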

The key factors that affect aggregation performance include the width of the group-by key, the complexity
of the aggregation function and the number of projected columns. 

\subsubsection{Sort}
Sort can be represented in the following format:\\
\\\textbf{Select} L1, L2,...\\
\textbf{From} R\\ 
\textbf{Order By} Key;
\\

The representative sort algorithms are radix sort and merge sort.

\textbf{Radix sort}

Radix sort processes the sort key a fixed number of bits per pass, from the least significant bits to the most significant bits.
Each pass builds a histogram of the current digit, performs a prefix sum to determine each tuple's output position, and then
scatters the tuples in parallel.

\textbf{Merge sort}

Merge sort first sorts small blocks of tuples locally and then repeatedly merges pairs of sorted blocks in parallel until
a single sorted sequence remains.
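The histogram/prefix-sum/scatter pattern of radix sort can be sketched sequentially in Python (the 8-bit digit width and names are illustrative; on the GPU each pass runs these three phases in parallel):

```python
# Sketch of an LSD radix sort on non-negative 32-bit keys, processing
# 8 bits per pass; each pass is a histogram, a prefix sum and a
# stable scatter, mirrored here sequentially.
from itertools import accumulate

def radix_sort(keys, bits_per_pass=8, key_bits=32):
    mask = (1 << bits_per_pass) - 1
    for shift in range(0, key_bits, bits_per_pass):
        # Histogram of the current digit.
        counts = [0] * (1 << bits_per_pass)
        for k in keys:
            counts[(k >> shift) & mask] += 1
        # Exclusive prefix sum gives each digit's output offset.
        offsets = [0] + list(accumulate(counts))[:-1]
        # Stable scatter into the next buffer.
        out = [0] * len(keys)
        for k in keys:
            d = (k >> shift) & mask
            out[offsets[d]] = k
            offsets[d] += 1
        keys = out
    return keys

sorted_keys = radix_sort([170, 45, 75, 90, 802, 24, 2, 66])
```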

The key factors that affect sort performance include the width of the sort key and the size of the projected
columns. 

\subsection{Analytical Query}
We use the Star Schema Benchmark queries as the workloads to measure the performance of analytical queries on the GPU. 
