
\begin{table*}
\centering
\caption{Notations for the Cost Model}
\begin{tabular}{|c|c|} \hline
Notations&Descriptions \\ \hline
\begin{math}B_r\end{math}&The read bandwidth of the device memory\\\hline
\begin{math}B_w\end{math}&The write bandwidth of the device memory\\\hline
\begin{math}B_i\end{math}&The transfer bandwidth from host memory to device memory\\\hline
\begin{math}B_o\end{math}&The transfer bandwidth from device memory to host memory\\\hline
\begin{math}C_r\end{math}&The read segment size of the device memory\\\hline
\begin{math}C_w\end{math}&The write segment size of the device memory\\\hline
\begin{math}S_i\end{math}&The size of input data\\\hline
\begin{math}S_o\end{math}&The size of result\\\hline
\begin{math}W\end{math}&The number of threads in a thread group\\\hline
\begin{math}T_i\end{math}&The device memory access cost of the $i$th step of the query on the GPU\\\hline
\begin{math}T_t\end{math}&The data transfer cost\\\hline
\end{tabular}
\label{table:notation}
\vspace{-0.15in}

\end{table*}

We first model the cost of each query operator in our query engine. Then we evaluate our models
on a GTX 680 with pinned memory for different query characteristics.
The notations used here are listed in Table \ref{table:notation}.

For each query operator, we model the memory access cost for each of its major kernel operations.
The memory cost is calculated as the number of memory transactions multiplied by the size of the memory segment.
While the size of the memory segments is constant for a given GPU, the estimation of the number
of memory transactions depends on how the data are accessed.
Consider two extreme cases. When a column of 1000 integers is sequentially scanned, the
number of memory transactions is \begin{math}\dfrac{1000}{W}\times \lceil\dfrac{4\times W}{C_r}\rceil\end{math}.
On the other hand, when the scan of the aforementioned column is totally random, the number of memory
transactions is 1000.
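These two cases can be checked with a small numeric sketch. The helper below is our illustration, not code from the engine; it assumes each group of $W$ threads issues one request per $C_r$-byte segment it touches, and rounds the group count up since a partial group still issues a request:

```python
import math

def seq_transactions(n_rows, W, C_r, attr_size=4):
    """Transactions for a coalesced sequential scan: each group of W
    threads reads W contiguous values, which span
    ceil(attr_size * W / C_r) read segments."""
    groups = math.ceil(n_rows / W)
    return groups * math.ceil(attr_size * W / C_r)

def random_transactions(n_rows):
    """Worst case: every access lands in a distinct segment, so each
    of the n_rows reads is its own transaction."""
    return n_rows
```

With $W = 32$ and $C_r = 128$ bytes, the sequential scan of 1000 integers needs 32 transactions versus 1000 for the fully random scan.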

\subsection{Selection}

\textbf{Cost estimation}

The following notations are used to estimate the cost of selection.\\
\\
$||R||$ - cardinality of the table R, \\
n - number of projected columns,\\
\begin{math}K_i\end{math} - the attribute size of the $i$th projected column,\\
m - number of predicate columns,\\
\begin{math}P_i\end{math} - the attribute size of the $i$th predicate column, and\\
r - selectivity of the predicates.\\

In the first step of selection, each column in the predicate is sequentially scanned
and evaluated, and the results are stored in a filter. 
The approximate device memory access cost is calculated as:
\begin{align}
T_1 &=\sum_{i=1}^{m} (\lceil \dfrac {P_i\times W} {C_r} \rceil \times  \dfrac {||R||} {W}
+ \lceil \dfrac{4\times W}{C_r}\rceil \times  \dfrac{||R||}{W} ) \times \dfrac{C_r} {B_r} \nonumber\\
&+\sum_{i=1}^{m} \lceil\dfrac{4\times W}{C_w} \rceil \times  \dfrac{||R||}{W} \times \dfrac{C_w} {B_w}
\nonumber .
\end{align}

In the second step, the filter is sequentially scanned for each projected column, but
the projected column is randomly read and written. The number
of tuples read by each thread group depends on the data distribution of the predicate
columns. Since we do not sort any column involved in the predicate,
we assume data are uniformly distributed.
Then the device memory access cost is:
\begin{align}
T_2&=\sum_{i=1}^{n}
	(\lceil \dfrac{4\times W}{C_r}\rceil 
	\times  
		\dfrac{||R||}{W}
+ 	\lceil\dfrac{K_i}{4} \rceil
	\times
	\dfrac{||R||} {W}) \times \dfrac {C_r}{B_r}
 \nonumber \\
&+ ||R|| \times  r \times
	\sum_{i=1}^{n} 
		\lceil\dfrac{K_i}{4} \rceil 
		\times 
		\dfrac{C_w}{B_w}
		\nonumber .
\end{align}
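A corresponding sketch for $T_2$ under the same illustrative conventions (selectivity $r$ as a fraction, $K$ as the list of projected attribute sizes in bytes):

```python
import math

def selection_step2_cost(R, r, K, W, C_r, C_w, B_r, B_w):
    """Device memory cost T_2 of projecting the selected tuples."""
    groups = R / W
    # Sequential filter scan plus the (word-counted) column read.
    read = sum((math.ceil(4 * W / C_r) + math.ceil(k / 4))
               * groups * C_r / B_r for k in K)
    # Random writes for the r-fraction of tuples that pass the filter.
    write = R * r * sum(math.ceil(k / 4) for k in K) * C_w / B_w
    return read + write
```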

The size of the input data and the size of the result can be calculated as follows:
\begin{displaymath}
S_i = ||R||\times  (\sum_{i=1}^{n}K_i + \sum_{i=1}^{m}P_i) .
\end{displaymath}

\begin{displaymath}
S_o = r\times  ||R||\times  (\sum_{i=1}^{n}K_i)
\end{displaymath}

The data transfer cost can be calculated as:
\begin{displaymath}
T_t = \dfrac{S_i} {B_i} + \dfrac{S_o} {B_o} .
\end{displaymath}
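The three transfer formulas above combine into one short helper; this is again an illustrative sketch with our own parameter names:

```python
def selection_transfer_cost(R, r, K, P, B_i, B_o):
    """PCIe transfer cost T_t: copy the projected and predicate
    columns in, copy the selected fraction of projected columns out."""
    S_i = R * (sum(K) + sum(P))   # input size in bytes
    S_o = r * R * sum(K)          # result size in bytes
    return S_i / B_i + S_o / B_o
```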

\subsection{Join}
The unpartitioned hash join first builds a hash table for the entire dimension table. Then it sequentially
scans the fact table and probes the hash table to find a match.
A filter is used to store the ids of matched dimension tuples.
Tuples are projected
only if the corresponding element of the filter indicates a match.

\textbf{Cost estimation}

The notations related to the join operator are listed as follows.
\\
r - join selectivity,\\
$||R||$ - cardinality of the fact table R,\\
$||S||$ - cardinality of the dimension table S,\\
n - number of projected columns from fact table,\\
m - number of projected columns from dimension table,\\
$R_i$ - the attribute size of the $i$th projected column from the fact table, and\\
$S_i$ - the attribute size of the $i$th projected column from the dimension table.\\

When calculating the join cost, we assume the join keys are 4-byte integers and are not projected
by the join operator.

When building the hash table, the primary keys of the dimension table are scanned twice.
The first scan is to calculate the start output position for each hash key.
The second scan is to write the primary keys to the hash table with the tuple ids.
While primary keys are sequentially scanned, writes to the hash table can be considered random.
Then the approximate cost of memory access is:
\begin{align}
T_1 = 2\times
		\dfrac{||S||} {W}
	\times
		\lceil \dfrac {4\times  W} {C_r} \rceil
	\times 
		\dfrac {C_r} {B_r} \nonumber \\
+ ||S|| 
	\times 
		\lceil \dfrac{4\times  2} {C_w}\rceil 
	\times
		\dfrac {C_w} {B_w}
	\nonumber .
\end{align}
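The build cost $T_1$ can be sketched as follows (our illustration; the $4\times2$ bytes are the key and tuple id written per hash entry):

```python
import math

def hash_build_cost(S, W, C_r, C_w, B_r, B_w):
    """Device memory cost T_1 of building the hash table: two
    sequential scans of the 4-byte primary keys, plus one random
    8-byte write (key and tuple id) per dimension tuple."""
    read = 2 * (S / W) * math.ceil(4 * W / C_r) * C_r / B_r
    write = S * math.ceil(4 * 2 / C_w) * C_w / B_w
    return read + write
```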

When probing the hash table, the foreign keys of the fact table are sequentially scanned.
For each foreign key, its hash value is calculated and the number of hash entries
for the corresponding bucket is read.
If the number is greater than 0, the positions and the actual values of the ids of the
corresponding dimension tuples are read.
The filter is sequentially written with either the id of the matching dimension tuple or 0.
The scan of foreign keys, and the read and write of filter are sequential, while
other requests can be considered random.
Thus the approximate memory access cost is estimated as:
\begin{align}
T_2 &= (
	\dfrac{||R||} {W} 
	\times  
	\lceil \dfrac {4\times W} {C_r} \rceil 
	+ ||R||
	+ 3 \times 
		||R||
	\times r 
	\times 
			\lceil \dfrac{4} {C_r}\rceil 
	) \times \dfrac {C_r}{B_r}
	\nonumber \\
&+ \lceil \dfrac{4\times W}{C_w} \rceil
	\times
		\dfrac{||R||}{W} 
	\times
		\dfrac {C_w}{B_w}
	\nonumber .
\end{align}
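The probe cost $T_2$ mirrors the three read terms and one write term of the equation; this sketch is illustrative only:

```python
import math

def hash_probe_cost(R, r, W, C_r, C_w, B_r, B_w):
    """Device memory cost T_2 of probing the hash table."""
    read = ((R / W) * math.ceil(4 * W / C_r)  # sequential foreign-key scan
            + R                               # random read of bucket sizes
            + 3 * R * r * math.ceil(4 / C_r)  # 3 random reads per match
            ) * C_r / B_r
    # Sequential write of the filter, one 4-byte entry per fact tuple.
    write = (R / W) * math.ceil(4 * W / C_w) * C_w / B_w
    return read + write
```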

When projecting the join results, the filter is sequentially scanned.
The read of the fact table depends on the data distribution of the foreign keys,
while the read of the dimension table can be considered completely random.
We consider the worst case, in which the foreign keys are uniformly distributed.
Then the approximate cost of memory access is calculated as:
\begin{align}
T_3 &= \sum_{i=1}^{n}(
	\lceil \dfrac{4\times W}{C_r} \rceil \times  \dfrac{||R||}{W} +
	\lceil \dfrac {R_i} {4} \rceil 
	\times  \dfrac{||R||}{W}
	) \times \dfrac{C_r}{B_r}
\nonumber \\
& +(
	\sum_{i=1}^{m}\lceil \dfrac{4\times W}{C_r} \rceil \times \dfrac{||R||}{W} 
	+||R|| \times r \times
	\sum_{i=1}^{m} \lceil\dfrac{S_i}{C_r} \rceil 
	)
	\times \dfrac{C_r}{B_r}
	\nonumber \\
& + ||R||\times r\times (\sum_{i=1}^{n}\lceil\dfrac{R_i}{4}\rceil+ 
	\sum_{i=1}^{m}\lceil \dfrac {S_i} {4} \rceil)
	\times \dfrac {C_w}{B_w}
\nonumber .
\end{align}

The input data size and the result size can be calculated as:
\begin{displaymath}
S_i = ||R||\times \sum_{i=1}^{n}R_i + ||S||\times \sum_{i=1}^{m}S_i + 4\times (||R||+||S||) .
\end{displaymath}

\begin{displaymath}
S_o = (||R||\times \sum_{i=1}^{n}R_i + ||R||\times \sum_{i=1}^{m}S_i) \times r  .
\end{displaymath}

The data transfer cost can be calculated as:
\begin{displaymath}
T_t = \dfrac {S_i}{B_i} + \dfrac{S_o}{B_o} .
\end{displaymath}
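The join transfer formulas combine into a helper analogous to the one for selection; parameter names are our own:

```python
def join_transfer_cost(R, S, r, R_attr, S_attr, B_i, B_o):
    """PCIe transfer cost T_t for the join: both tables' projected
    columns plus their 4-byte join keys go in; the matched fraction
    of the projected columns comes back."""
    S_i = R * sum(R_attr) + S * sum(S_attr) + 4 * (R + S)
    S_o = (R * sum(R_attr) + R * sum(S_attr)) * r
    return S_i / B_i + S_o / B_o
```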

\begin{comment}
\textbf{2. Device can hold the entire dimension table but only part of the fact table.}

In this case the fact table needs to be partitioned based on the current available device memory and
each partition will be joined with the dimension table on GPU.
The total cost is calculated as the sum of the cost of the join between the dimension table and each partition. 

\textbf{3. Device memory can neither hold the dimension table nor the fact table.}

In this case, both the dimension table and fact table need to be partitioned
and the join will be executed similar to the nested loop join. 
Each pair of dimension table partition and fact table partition is joined on GPU.
Same data will be transferred to GPU multiple times.
To reduce the data transfer cost, the number of dimension table partitions should be kept low.
The total cost is calculated as the sum of all the join cost, each of which is calculated similar to situation 1.
\end{comment}


\subsection{Aggregation}

\textbf{Hash based aggregation}

The hash based aggregation first calculates a hash value for each group by key.
The aggregation functions are then computed over the tuples that share the same hash value.

\textbf{Cost estimation}

We use the following notations to model the cost of the aggregation operator:\\
\\
$||R||$ - cardinality of the table R, \\
n - number of projected columns,\\
\begin{math}K_i\end{math} - the attribute size of the ith projected column,\\
w - the width of the group by key, and\\
v - the number of groups.\\

In the first step the group by keys are sequentially scanned. The number of
groups and the start output position of each group are calculated.
The approximate device memory access cost can be calculated 
as:
\begin{displaymath}
T_1 = 
\dfrac{||R||}{W} \times \lceil \dfrac {w} {4}\rceil 
\times \dfrac{C_r}{B_r}
+||R||\times \lceil\dfrac{w}{4}\rceil
\times \dfrac{C_w}{B_w}
\end{displaymath}
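The first aggregation step can be sketched as below (our illustration; the key width $w$ is counted in 4-byte words, following the equation):

```python
import math

def agg_step1_cost(R, w, W, C_r, C_w, B_r, B_w):
    """Device memory cost T_1 of the first aggregation step:
    sequential read of the group by keys plus one write per tuple."""
    read = (R / W) * math.ceil(w / 4) * C_r / B_r
    write = R * math.ceil(w / 4) * C_w / B_w
    return read + write
```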

In the second step all the group by keys are sequentially scanned.
The aggregation results are randomly written
to each group based on the hash value of the group by keys.
The device memory access cost is:
\begin{align}
T_2 &= (
	\dfrac{||R||}{W}
	\times \lceil \dfrac {4	\times  W} {C_r}\rceil
	+ \dfrac{||R||}{W} \times  \sum_{i=1}^{n} \lceil \dfrac {W \times K_i} { C_r}\rceil
	)
	\times \dfrac{C_r}{B_r}
\nonumber \\
& + (||R||\times  \lceil\dfrac{w}{4} \rceil 
	+ 2\times ||R||\times  \sum_{i=1}^{n}\lceil\dfrac{K_i}{4}\rceil
	)
	\times \dfrac {C_w}{B_w}
\nonumber .
\end{align}

The size of input data and the size of result can be calculated as:
\begin{displaymath}
S_i = ||R|| \times  (w+\sum_{i=1}^{n}K_i) .
\end{displaymath}

\begin{displaymath}
S_o = v \times  (w+\sum_{i=1}^{n}K_i) .
\end{displaymath}

The data transfer cost is:
\begin{displaymath}
T_t = \dfrac {S_i} {B_i} + \dfrac {S_o} {B_o} .
\end{displaymath}

\subsection{Sort}

The merge sort divides the order by keys into a set of partitions and sorts each partition.
The size of each partition should be small enough that each partition fits
into the device's cache.
After the sorting step, all partitions are merged together.
The last step is to project the columns to the result buffer in the right order.

\textbf{Cost estimation}

We use the following notations to model the cost of the sort operator:\\
\\
$||R||$ - cardinality of the table R, \\
n - number of projected columns,\\
\begin{math}K_i\end{math} - the attribute size of the ith projected column,\\
w - the width of the order by key,\\
p - the number of partitions, and\\
d - the number of keys in each partition.\\

We assume that the order by keys are projected by the operator.
The required size of the device memory is approximately twice the size of the input data:
\begin{displaymath}
S_r = 2\times  ||R|| \times  \sum_{i=1}^{n}K_i .
\end{displaymath}

The cost of the key partition step is trivial.
In the sorting step, each partition is sequentially scanned and the result is sequentially written
to the result buffer.
The sorting itself takes place in the device's cache.
Thus the device memory access cost is:
\begin{displaymath}
T_1 = \dfrac {||R||}{W} \times \lceil\dfrac {4\times w} {C_r}\rceil
	\times \dfrac{C_r}{B_r}
	+ \dfrac {||R||}{W} \times \lceil\dfrac {4\times w} {C_w}\rceil
	\times \dfrac{C_w}{B_w}.
\end{displaymath}

The merge phase requires $\log p$ merge steps.
Each merge step involves a sequential scan of the two partitions
and multiple random reads of the two partitions. The number of random reads
depends on the partition size.
The device memory cost is:
\begin{align}
T_2 &=\log p \times  \dfrac{||R||}{W}\times \lceil\dfrac {W\times w} {C_r}\rceil \times \dfrac{C_r}{B_r} \nonumber\\
&+ \sum_{i=1}^{\log p} d\times  2^{i-1} \times \log ( d\times  2^{i-1}) \times  2^{\log p - i}\times \lceil\dfrac{w}{C_r}\rceil
\times \dfrac{C_r}{B_r} \nonumber \\
&+\log p \times \dfrac{||R||}{W}\times \lceil\dfrac {W\times w} {C_w}\rceil \times \dfrac{C_w}{B_w} \nonumber 
\end{align} 
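The merge cost can be sketched numerically. This is our illustration, under stated assumptions: $\log$ is base 2, $p$ is a power of two, $d = ||R||/p$, and the sequential write term mirrors the sequential read term (i.e. $||R||/W$ thread groups):

```python
import math

def merge_cost(R, p, d, w, W, C_r, C_w, B_r, B_w):
    """Device memory cost T_2 of the log2(p) merge passes."""
    levels = int(math.log2(p))
    # Every pass sequentially reads and writes all keys.
    seq_read = levels * (R / W) * math.ceil(W * w / C_r) * C_r / B_r
    seq_write = levels * (R / W) * math.ceil(W * w / C_w) * C_w / B_w
    # Random reads at level i: 2^(levels-i) merges of partitions
    # holding d * 2^(i-1) keys each.
    rand_read = sum(d * 2 ** (i - 1) * math.log2(d * 2 ** (i - 1))
                    * 2 ** (levels - i) * math.ceil(w / C_r)
                    for i in range(1, levels + 1)) * C_r / B_r
    return seq_read + seq_write + rand_read
```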

When generating the final result, the projected columns are randomly read and sequentially written
to the result buffer. We consider the worst case, in which no two reads fall in the same memory segment.
Then the device memory access cost is:
\begin{align}
T_3 &= (
	\dfrac{||R||}{W} 
	\times \lceil \dfrac{4 \times W}{C_r}\rceil
	+ ||R||\times \sum_{i=1}^{n}\lceil\dfrac{W\times K_i}{4}\rceil
	) \times \dfrac{C_r}{B_r}
	\nonumber \\
&+ \dfrac{||R||}{W} 
	\times \sum_{i=1}^{n}\lceil\dfrac {K_i} {4}\rceil
	\times \dfrac{C_w} {B_w}
	\nonumber .
\end{align}

The data transfer cost is:
\begin{displaymath}
T_t = \dfrac {S_r} {B_p} .
\end{displaymath}


\subsection{Evaluation}

We evaluate our models on a GTX 680 for different query characteristics; the results
are shown in Figure \ref{fig:eval}.

As the figures show, the models achieve high accuracy in estimating the
actual query performance. The only exception is when the query has many operations
on irregular data such as strings, because current GPUs are inefficient at handling
such data.

\begin{figure*}[ht]
\centering
\subfigure[\# of columns]{
	\includegraphics[width=1.2in,height=1.6in,angle=270]{graph/model/select/1.ps}
	\label{fig:modelselect1}
}
\subfigure[width of columns]{
	\includegraphics[width=1.2in,height=1.6in,angle=270]{graph/model/select/2.ps}
	\label{fig:modelselect2}
}
\subfigure[\# of predicates]{
	\includegraphics[width=1.2in,height=1.6in,angle=270]{graph/model/select/3.ps}
	\label{fig:modelselect3}
}
\subfigure[Selectivity (\%)]{
	\includegraphics[width=1.2in,height=1.6in, angle=270]{graph/model/select/5.ps}
	\label{fig:modelselect5}
}
\subfigure[Selectivity (\%)]{
	\includegraphics[width=1.2in,height=1.6in,angle=270]{graph/model/join/1.ps}
	\label{fig:modeljoin1}
}
\subfigure[\# of fact columns]{
	\includegraphics[width=1.2in,height=1.6in,angle=270]{graph/model/join/2.ps}
	\label{fig:modeljoin2}
}
\subfigure[\# of dim columns]{
	\includegraphics[width=1.2in,height=1.6in,angle=270]{graph/model/join/3.ps}
	\label{fig:modeljoin3}
}
\subfigure[Dim column width]{
	\includegraphics[width=1.2in,height=1.6in, angle=270]{graph/model/join/6.ps}
	\label{fig:modeljoin6}
}
\subfigure[Width of group by keys]{
	\includegraphics[width=1.2in,height=1.6in,angle=270]{graph/model/agg/1.ps}
	\label{fig:modelagg1}
}
\subfigure[\# of agg columns]{
	\includegraphics[width=1.2in,height=1.6in,angle=270]{graph/model/agg/2.ps}
	\label{fig:modelagg2}
}
\subfigure[\# of sort columns]{
	\includegraphics[width=1.2in,height=1.6in,angle=270]{graph/model/sort/1.ps}
	\label{fig:modelsort1}
}
\caption {Evaluation of the cost models for different query characteristics}
\label{fig:eval}
\end{figure*}



