In this section we first introduce our modeling methodology
and then present a case study of modeling the join operator.
After evaluating the model's accuracy,
we use the model to predict the impact of GPU hardware advancements on warehousing query performance.

\subsection{Model Methodology}
Our model targets an architecture in which the GPU is connected to the CPU through a PCIe bus.
Data must be transferred to the device memory before a query executes,
and results are transferred back to the host memory after the query finishes.
We assume that the data are already available in the host memory
and are laid out in a column-store format.
Estimating the query execution cost requires statistics such as table sizes and
selectivities. We assume that these statistics are available
and can be used directly by our model.

The total cost of executing a query on a GPU consists of the PCIe data transfer cost and the kernel execution cost.
While the PCIe data transfer cost can be estimated from the available table statistics,
the key challenge is estimating the query's kernel execution cost.
As we have already discussed,
the performance of data warehousing queries on GPUs is mainly bounded by GPU device memory accesses.
We therefore focus on estimating the device memory access cost,
and use the device memory access time as the metric representing the query kernel execution cost.

The memory access time of a given query
can be calculated as the amount of data actually accessed in GPU device memory
divided by the bandwidth of the GPU device memory.
Our model views the GPU device memory as a sequence of contiguous memory segments,
each of which is the basic access unit of the device memory.
We use the concept of a thread group to represent the basic thread management unit of the GPU,
similar to NVIDIA's warp and AMD's wavefront.
Threads in the same thread group execute in lockstep; when they need to access the device memory,
the number of required memory segments is calculated and the corresponding segments are fetched for the thread group.
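This segment-counting rule can be sketched in a few lines of Python. The function name and the 128-byte segment size in the usage comment are illustrative assumptions, not parameters of our engine:

```python
import math

def segments_touched(w, width, seg_size, coalesced):
    """Estimate how many device memory segments a thread group of w
    threads fetches when each thread reads `width` bytes.

    coalesced=True : threads read adjacent addresses, so the group
                     touches ceil(w * width / seg_size) segments.
    coalesced=False: worst case, every thread hits a distinct segment.
    """
    if coalesced:
        return math.ceil(w * width / seg_size)
    return w * math.ceil(width / seg_size)

# A 32-thread group reading adjacent 4-byte keys from 128-byte segments
# needs a single memory transaction; fully scattered reads need 32.
```

The ceiling terms in the cost formulas below ($\lceil 4 \times W / C_r \rceil$ and similar) are instances of exactly this calculation.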

With the above abstraction of the GPU device, we can estimate the number of actual device memory
transactions and calculate the amount of accessed data.
We do not distinguish between coalesced and uncoalesced accesses to device memory, because
the memory bus utilization is determined by the number of actual memory transactions,
not by whether an access can be coalesced.
For a given operator, the estimated number of memory transactions depends on the implementation
of the query operator and on the distribution of the data.
The cost of a complex query is estimated from the cost of
each individual query operator: the total cost is the sum of the costs
of all the operators.
In the next section, we model the join operator of our query engine as a case study
of our methodology. Due to limited page space, the models and evaluations for the other query operators
are presented at \url{http://www.cse.ohio-state.edu/~yuanyu/report.html}.

\subsection{Cost Model for Join}

\begin{table*}
\centering
\caption{Notations for the Cost Model}
\begin{tabular}{|c|c|} \hline
Notation&Description \\ \hline
\begin{math}B_r\end{math}&The read bandwidth of the device memory\\\hline
\begin{math}B_w\end{math}&The write bandwidth of the device memory\\\hline
\begin{math}B_i\end{math}&The transfer bandwidth from host memory to device memory\\\hline
\begin{math}B_o\end{math}&The transfer bandwidth from device memory to host memory\\\hline
\begin{math}C_r\end{math}&The read segment size of the device memory\\\hline
\begin{math}C_w\end{math}&The write segment size of the device memory\\\hline
\begin{math}S_i\end{math}&The size of input data\\\hline
\begin{math}S_o\end{math}&The size of result\\\hline
\begin{math}W\end{math}&The number of threads in a thread group\\\hline
\begin{math}T_i\end{math}&The device memory access cost of the \begin{math}i\end{math}th step of query execution on GPU\\\hline
\begin{math}T_t\end{math}&The data transfer cost\\\hline
\end{tabular}
\label{table:notation}
\vspace{-0.15in}

\end{table*}

\begin{figure*}[ht]
\centering
\subfigure[Selectivity (\%)]{
	\includegraphics[width=1.2in,height=1.6in,angle=270]{graph/model/join/1.ps}
	\label{fig:modeljoin1}
}
\subfigure[\# of fact columns]{
	\includegraphics[width=1.2in,height=1.6in,angle=270]{graph/model/join/2.ps}
	\label{fig:modeljoin2}
}
\subfigure[\# of dim columns]{
	\includegraphics[width=1.2in,height=1.6in,angle=270]{graph/model/join/3.ps}
	\label{fig:modeljoin3}
}
\subfigure[Dim column width]{
	\includegraphics[width=1.2in,height=1.6in, angle=270]{graph/model/join/6.ps}
	\label{fig:modeljoin6}
}
\begin{comment}
\subfigure[Attribute Size of Fact Table]{
	\includegraphics[width=1.2in,height=1.6in, angle=270]{graph/exp/join/7.ps}
	\label{fig:join7}
}
\end{comment}
\vspace{-0.15in}
\caption {Evaluation of the join model under different query characteristics}
\label{fig:evaljoin}
\vspace{-0.28in}

\end{figure*}



\begin{comment}
We first model the cost of query execution on our query engine. Then we evaluate our model
on different GPUs and investigate how the evolvement of GPU affects the query processing on GPU. 

\subsection{Single Query Operator}

\subsubsection{Selection}

\textbf{Cost estimation}

The following notations are used to estimate the cost of selection.\\
\\
$||R||$ - cardinality of the table R, \\
n - number of projected columns,\\
\begin{math}K_i\end{math} - the attribute size of the ith projected column,\\
m - number of predicate columns,\\
\begin{math}P_i\end{math} - the attribute size of the ith predicate column, and\\
r - selectivity of the predicates.\\

In the first step of selection, each column in the predicate is sequentially scanned
and evaluated, and the results are stored in a filter. 
The approximate number of device memory accesses is calculated as:
\begin{align}
T_1 &=\sum_{i=1}^{m} (\lceil \dfrac {P_i\times W} {C_r} \rceil \times  \dfrac {||R||} {W}
+ \lceil \dfrac{4\times W}{C_r}\rceil \times  \dfrac{||R||}{W} ) \times \dfrac{C_r} {B_r} \nonumber\\
&+\sum_{i=1}^{m} \lceil\dfrac{4\times W}{C_w} \rceil \times  \dfrac{||R||}{W} \times \dfrac{C_w} {B_w}
\nonumber .
\end{align}

In the second step, the filter is sequentially scanned for each project column, but
the projected column is randomly read and written. The number
of tuples read by each thread group depends on the data distribution of the predicate
columns. Since we don't sort any column involved in the predicate,
we assume data are uniformly distributed.
Then the approximate number of device memory accesses is:
\begin{align}
T_2&=\sum_{i=1}^{n}
	(\lceil \dfrac{4\times W}{C_r}\rceil 
	\times  
		\dfrac{||R||}{W}
+ 	\lceil\dfrac{K_i}{4} \rceil
	\times
	\dfrac{||R||} {W}) \times \dfrac {C_r}{B_r}
 \nonumber \\
&+ ||R|| \times  r \times
	\sum_{i=1}^{n} 
		\lceil\dfrac{K_i}{4} \rceil 
		\times 
		\dfrac{C_w}{B_w}
		\nonumber .
\end{align}

The size of the input data and the size of the result which can be calculated as follows:
\begin{displaymath}
S_i = ||R||\times  (\sum_{i=1}^{n}K_i + \sum_{i=1}^{m}P_i) .\\
\end{displaymath}

\begin{displaymath}
S_o = r\times  ||R||\times  (\sum_{i=1}^{n}K_i)
\end{displaymath}

The data transfer cost can be calculated as:
\begin{displaymath}
T_t = \dfrac{S_i} {B_i} + \dfrac{S_o} {B_o} .
\end{displaymath}

When the size of the input data is larger than the size of the available device memory,
the input data should be partitioned so that each partition can be processed on GPU.
The cost of processing each partition can be estimated based on the above analysis.

\subsubsection{Join}

The unpartitioned hash join first builds a hash table for the entire dimension table. Then it sequentially
scans the fact table and probes the hash table to find a match.
A filter is used to store the ids of matched dimension tuples.
Tuples are projected
only if the corresponding element of the filter indicates a match.

\textbf{Cost estimation}
\end{comment}

The notations specific to the join operator are listed as follows,
while the other notations are listed in Table \ref{table:notation}.\\
\\
r - join selectivity,\\
$||R||$ - cardinality of the fact table R,\\
$||S||$ - cardinality of the dimension table S,\\
n - number of projected columns from the fact table,\\
m - number of projected columns from the dimension table,\\
$R_i$ - the attribute size of the $i$th projected column from the fact table, and\\
$S_i$ - the attribute size of the $i$th projected column from the dimension table.\\

When calculating the join cost, we assume the join keys are 4-byte integers and are not projected
by the join operator, as is the case for all SSBM queries.

When building the hash table, the primary keys of the dimension table are scanned twice.
The first scan calculates the starting output position for each hash key.
The second scan writes the primary keys and tuple ids to the hash table.
While the primary keys are scanned sequentially, writes to the hash table can be considered random.
The approximate memory access cost is then:
\begin{align}
T_1 = 2\times
		\dfrac{||S||} {W}
	\times
		\lceil \dfrac {4\times  W} {C_r} \rceil
	\times 
		\dfrac {C_r} {B_r} \nonumber \\
+ ||S|| 
	\times 
		\lceil \dfrac{4\times  2} {C_w}\rceil 
	\times
		\dfrac {C_w} {B_w}
	\nonumber .
\end{align}

When probing the hash table, the foreign keys of the fact table are scanned sequentially.
For each foreign key, its hash value is calculated and the number of hash entries
in the corresponding bucket is read.
If the number is greater than zero, the position and the actual values of the ids of the
corresponding dimension tuples are read.
The filter is written sequentially, either with the ids of the matching dimension tuples or with 0.
The scan of the foreign keys and the read and write of the filter are sequential, while
the other requests can be considered random.
Thus the approximate memory access cost is estimated as:
\begin{align}
T_2 &= (
	\dfrac{||R||} {W} 
	\times  
	\lceil \dfrac {4\times W} {C_r} \rceil 
	+ ||R||
	+ 3 \times 
		||R||
	\times r 
	\times 
			\lceil \dfrac{4} {C_r}\rceil 
	) \times \dfrac {C_r}{B_r}
	\nonumber \\
&+ \lceil \dfrac{4\times W}{C_w} \rceil
	\times
		\dfrac{||R||}{W} 
	\times
		\dfrac {C_w}{B_w}
	\nonumber .
\end{align}

When projecting the join results, the filter is scanned sequentially.
The read pattern on the fact table depends on the data distribution of the foreign keys,
while the read of the dimension table can be considered fully random.
We consider the worst case, in which the foreign keys are uniformly distributed.
The approximate memory access cost is then calculated as:
\begin{align}
T_3 &= \sum_{i=1}^{n}(
	\lceil \dfrac{4\times W}{C_r} \rceil \times  \dfrac{||R||}{W} +
	\lceil \dfrac {R_i} {4} \rceil 
	\times  \dfrac{||R||}{W}
	) \times \dfrac{C_r}{B_r}
\nonumber \\
& +(
	\sum_{i=1}^{m}\lceil \dfrac{4\times W}{C_r} \rceil \times \dfrac{||R||}{W} 
	+||R|| \times r \times
	\sum_{i=1}^{m} \lceil\dfrac{S_i}{C_r} \rceil 
	)
	\times \dfrac{C_r}{B_r}
	\nonumber \\
& + ||R||\times r\times (\sum_{i=1}^{n}\lceil\dfrac{R_i}{4}\rceil+ 
	\sum_{i=1}^{m}\lceil \dfrac {S_i} {4} \rceil)
	\times \dfrac {C_w}{B_w}
\nonumber .
\end{align}

The input data size and the result size can be calculated as:
\begin{displaymath}
S_i = ||R||\times \sum_{i=1}^{n}R_i + ||S||\times \sum_{i=1}^{m}S_i + 4\times (||R||+||S||) .
\end{displaymath}

\begin{displaymath}
S_o = (||R||\times \sum_{i=1}^{n}R_i + ||R||\times \sum_{i=1}^{m}S_i) \times r  .
\end{displaymath}

The data transfer cost can be calculated as:
\begin{displaymath}
T_t = \dfrac {S_i}{B_i} + \dfrac{S_o}{B_o} .
\end{displaymath}
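As a numerical cross-check, the three memory access terms and the transfer cost can be evaluated directly. The sketch below is a transcription of $T_1$, $T_2$, $T_3$, $S_i$, $S_o$, and $T_t$ from the formulas above; the function name and any concrete parameter values used with it are illustrative placeholders, not measured GTX 680 values:

```python
import math

def join_cost(R, S, r, fact_cols, dim_cols,
              Br, Bw, Bi, Bo, Cr, Cw, W):
    """Device memory access cost and PCIe transfer cost of the hash join.
    R, S      : cardinalities of the fact and dimension tables
    r         : join selectivity
    fact_cols : attribute sizes R_i of projected fact-table columns
    dim_cols  : attribute sizes S_i of projected dimension-table columns
    Br, Bw    : device memory read/write bandwidth (bytes/s)
    Bi, Bo    : host-to-device / device-to-host bandwidth (bytes/s)
    Cr, Cw    : read/write segment sizes (bytes); W: thread group size
    """
    # T1: build the hash table (two sequential key scans, random writes
    # of (key, tuple id) pairs).
    t1 = (2 * (S / W) * math.ceil(4 * W / Cr) * (Cr / Br)
          + S * math.ceil(4 * 2 / Cw) * (Cw / Bw))
    # T2: probe (sequential foreign-key scan, random bucket reads,
    # sequential filter write).
    t2 = (((R / W) * math.ceil(4 * W / Cr) + R
           + 3 * R * r * math.ceil(4 / Cr)) * (Cr / Br)
          + math.ceil(4 * W / Cw) * (R / W) * (Cw / Bw))
    # T3: projection (sequential filter scans, distribution-dependent
    # fact-table reads, random dimension-table reads, sequential writes).
    t3 = (sum(math.ceil(4 * W / Cr) * (R / W)
              + math.ceil(Ri / 4) * (R / W) for Ri in fact_cols)
          * (Cr / Br)
          + (sum(math.ceil(4 * W / Cr) * (R / W) for _ in dim_cols)
             + R * r * sum(math.ceil(Si / Cr) for Si in dim_cols))
          * (Cr / Br)
          + R * r * (sum(math.ceil(Ri / 4) for Ri in fact_cols)
                     + sum(math.ceil(Si / 4) for Si in dim_cols))
          * (Cw / Bw))
    # PCIe transfer: inputs include the 4-byte join key columns.
    s_in = R * sum(fact_cols) + S * sum(dim_cols) + 4 * (R + S)
    s_out = (R * sum(fact_cols) + R * sum(dim_cols)) * r
    tt = s_in / Bi + s_out / Bo
    return t1 + t2 + t3, tt
```

Because every $r$-dependent term in $T_2$, $T_3$, and $S_o$ grows with the selectivity, the estimated cost is monotonically increasing in $r$, matching the trend in Figure \ref{fig:modeljoin1}.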

\begin{comment}
\textbf{2. Device can hold the entire dimension table but only part of the fact table.}

In this case the fact table needs to be partitioned based on the current available device memory and
each partition will be joined with the dimension table on GPU.
The total cost is calculated as the sum of the cost of the join between the dimension table and each partition. 

\textbf{3. Device memory can neither hold the dimension table nor the fact table.}

In this case, both the dimension table and fact table need to be partitioned
and the join will be executed similar to the nested loop join. 
Each pair of dimension table partition and fact table partition is joined on GPU.
Same data will be transferred to GPU multiple times.
To reduce the data transfer cost, the number of dimension table partitions should be kept low.
The total cost is calculated as the sum of all the join cost, each of which is calculated similar to situation 1.


\subsubsection{Aggregation}

\textbf{Hash based aggregation}

The hash based aggregation first calculates a hash value for each group by key.
Then the tuples that have the same hash value are calculated for the aggregation functions.

\textbf{Cost estimation}

We use the following notations to model the cost of the aggregation operator:\\
\\
$||R||$ - cardinality of the table R, \\
n - number of projected columns,\\
\begin{math}K_i\end{math} - the attribute size of the ith projected column,\\
w - the width of the group by key, and\\

In the first step the group by keys are sequentially scanned. The number of
groups and the start output position of each group is calculated.
The approximate number of device memory accesses can be calculated 
as the number of accessed to scan the group by keys:
\begin{displaymath}
T_1 = 
\dfrac{||R||}{W} \times \lceil \dfrac {w} {4}\rceil 
\times \dfrac{C_r}{B_r}
+||R||\times \lceil\dfrac{w}{4}\rceil
\times \dfrac{C_w}{B_w}
\end{displaymath}

In the second step all the group by keys are sequentially scanned.
The aggregation results are randomly written
to each group based on the hash value of the group by keys.
The number of device memory accesses is:
\begin{align}
T_2 &= (
	\dfrac{||R||}{W}
	\times \lceil \dfrac {4	\times  W} {C_r}\rceil
	+ \dfrac{||R||}{W} \times  \sum_{i=1}^{n} \lceil \dfrac {W \times K_i} { C_g}\rceil
	)
	\times \dfrac{C_r}{B_r}
\nonumber \\
& + (||R||* \lceil\dfrac{w}{4} \rceil 
	+ 2\times ||R||\times  \sum_{i=1}^{n}\lceil\dfrac{K_i}{4}\rceil
	)
	\times \dfrac {C_w}{B_w}
\nonumber .
\end{align}

The size of input data and the size of result can be calculated as:
\begin{displaymath}
S_i = ||R|| \times  (w+\sum_{i=1}^{n}K_i) .
\end{displaymath}

\begin{displaymath}
S_o = v \times  (w+\sum_{i=1}^{n}K_i) .
\end{displaymath}

The data transfer cost is:
\begin{displaymath}
T_t = \dfrac {S_i} {B_i} + \dfrac {S_o} {B_o} .
\end{displaymath}

When the current available device memory is not large enough, we need to partition the
table so that each partition can be handled by GPU, the cost of which can be calculated as above.
After grouping each partition, all the partitions
need to be merged together and the final aggregation will be calculated. The cost of this
step depends on the size of the table. If the data size is much larger than the size of the
available device memory, multiple merging will be needed which will increase the total cost.

\subsubsection{Sort}

The merge sort divides the order by keys into a set of partitions and sorts each partition.
The size of each partition should be small enough such that each partition can be fit
into the device's cache.
After the sorting step, all partitions are merged together.
The last step is to project the columns to the result buffer in the right order.

\textbf{Cost estimation}

We use the following notations to model the cost of the sort operator:\\
\\
$||R||$ - cardinality of the table R, \\
n - number of projected columns,\\
\begin{math}K_i\end{math} - the attribute size of the ith projected column,\\
w - the width of the order by key,\\
p - the number of partitions, and\\
d - the number of keys in each partition\\

We assume that the order by keys are projected by the operator.
The required size of the device memory is approximately twice the size of the input data:
\begin{displaymath}
S_r = 2\times  ||R|| \times  \sum_{i}^{n}K_i .
\end{displaymath}

The cost of the key partition step is trivial.
In the sorting step, each partition is sequentially scanned and the result is sequentially written
to result buffer.
The sorting process operates on the device's cache.
Thus the number of device memory accesses is:
\begin{displaymath}
T_1 = \dfrac {||R||}{W} \times \lceil\dfrac {w} {4}\rceil
	\times {C_r}{B_r}
	+ \dfrac {||R||}{W} \times \lceil\dfrac {w} {4}\rceil
	\times {C_w}{B_w}.
\end{displaymath}

In the merging step $\log p$ merge steps are needed.
When merging two partitions, each element's output position is first
calculated which can be done by comparing each element with the elements
in the other partition. The comparing process can be efficiently done using
binary search method since all the partitions are already sorted. 
Thus the merging steps involves a sequential scan of the two partitions
and multiple random reads of the two partitions. The number of random reads
depends on the partition size.
The number of device memory accesses is:
\begin{displaymath}
T_2 =\log p \times  \dfrac{||R||}{W}\times \lceil\dfrac {W\times w} {C_g}\rceil
+ \sum_{i=1}^{\log p} d\times  2^{i-1} \times \log k \times  2^{\log p - i}\times \lceil\dfrac{w}{C_g}\rceil .
\end{displaymath} 

When generating the final result, the projected columns are randomly read and sequentially written
to the result buffer. We consider the worst case that no two reads come from the same memory segment.
Then the number of device memory accesses is:
\begin{align}
T_3 &= (
	\dfrac{||R||}{W} 
	\times \lceil \dfrac{4 \times W}{C_R}\rceil
	+ ||R||\times \sum_{i=1}^{n}\lceil\dfrac{W\times K_i}{4}\rceil
	) \times \dfrac{C_r}{B_r}
	\nonumber \\
&+ \dfrac{||R||}{W} 
	\times \sum_{i=1}^{n}\lceil\dfrac {K_i} {4}\rceil
	\times \dfrac{C_w} {B_w}
	\nonumber .
\end{align}

The data transfer cost is:
\begin{displaymath}
T_t = \dfrac {S_r} {B_p} .
\end{displaymath}

When the input data size is larger than the size of available device memory, the data needs to be partitioned
so that each partition can be sorted by GPU, the cost of which can be calculated as above.
After all the partitions have been sorted, we need to further divide each partition into chunks such that
we can merge one chunk from each partition in GPU at a time. For each merging step, the cost can be calculated similar to
 the merging of two partitions in the merging step.

\end{comment}

\vspace{-0.3in}
\subsection{Model Evaluation}


\begin{figure}
\centering
\includegraphics[width=1.2in, height=2.4in, angle=270]{graph/model/evaluation.ps}
\vspace{-0.15in}
\caption{Error rate of the estimated performance on GTX 680.
}
\label{fig:evalssb}
\vspace{-0.20in}
\end{figure}

We evaluate our cost model on both the join operator and the SSBM queries,
using an NVIDIA GTX 680 with pinned host memory as the platform.
For the join operator, we compare the estimated and measured performance
under various query characteristics.
When evaluating the model on SSBM queries,
we define the error rate as:
\begin{displaymath}
error\_rate = \dfrac {measured\_time - estimated\_time} {measured\_time}
\end{displaymath}

Figure \ref{fig:evaljoin} and Figure \ref{fig:evalssb} present the evaluation results.
As shown in the figures,
the estimated performance is very close to the measured performance in most cases,
which demonstrates the effectiveness of our cost model.
Two factors generally account for the remaining differences between
the estimated and measured performance.
First,
much of the work executed on the GPU needs to be initiated by the CPU,
in which case control information and some data must be transferred between
the CPU and the GPU.
Second, the GPU is inefficient at handling irregular data accesses.
For example, on the GTX 680, GPU memory transactions must be aligned to 32-byte boundaries.
When threads inside a warp issue unaligned memory access requests,
more memory transactions are generated even if
the threads access the data in a coalesced manner.
This makes the memory accesses difficult to estimate accurately,
as can be seen for Q3.1 and Q4.1
in Figure \ref{fig:evalssb}.

\subsection {Impact of Hardware Advancements}

\begin{figure}
\centering
\includegraphics[width=1.2in, height=2.4in, angle=270]{graph/model/improve.ps}
\vspace{-0.15in}
\caption{Kernel execution time normalized to GTX 480}
\label{fig:modelnorm}
\vspace{-0.2in}

\end{figure}

To study the impact of GPU hardware on query performance,
we run the SSBM queries with pinned memory on three generations of NVIDIA GPUs: GTX 480, 580, and 680.
We focus on the kernel execution time, which is determined by the GPU's internal architecture.
We normalize the kernel execution times to those measured on the GTX 480.
The results are shown in Figure \ref{fig:modelnorm}.
As can be seen,
for most queries the kernel execution times differ by only around 10\%
across the three GPUs.
Compared to the improvement in the GPUs' peak performance (more than 2x from GTX 480 to 680), this performance
gain is very small.
The reason is that the performance of warehousing queries is mainly bounded by GPU device memory accesses,
so they cannot benefit much from the increased computational power.


\begin{figure}
\centering
\includegraphics[width=1.2in, height=2.4in, angle=270]{graph/model/predict.ps}
\vspace{-0.15in}
\caption{Estimated SSBM performance with different GPU hardware configurations}
\vspace{-0.3in}
\label{fig:modelpredict}

\end{figure}

To predict the possible impact of GPU hardware advancements on query performance,
we use our model, which has been shown to be effective in estimating query performance on GPUs,
to estimate query performance under different GPU hardware configurations.
Starting from the GTX 680's hardware parameters, we double the PCIe transfer
bandwidth and the GPU device memory bandwidth independently
to see how the overall SSBM performance changes.
We use the performance of the SSBM queries on the GTX 680 as the baseline.

The results are shown in Figure \ref{fig:modelpredict}.
Doubling the PCIe transfer bandwidth is more effective than doubling
the device memory bandwidth for most queries,
as most queries are still dominated by the PCIe data transfer.
As the PCIe transfer bandwidth increases, however, the query execution time spent
on PCIe transfer and on kernel execution become comparable.
Moreover,
in real-world scenarios, the bandwidth of GPU device memory
grows at a much slower pace than its peak computational performance.
In this case,
the performance of data warehouse queries is not likely to benefit much
from the advancement of GPU hardware.
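This trade-off follows directly from the two-term structure of the model: total time is PCIe transfer time plus memory-bound kernel time, and scaling a bandwidth only shrinks its own term. A minimal sketch, where the 70/30 split in the usage comment is an illustrative assumption rather than a measured SSBM breakdown:

```python
def speedup(t_pcie, t_kernel, pcie_x=1.0, mem_x=1.0):
    """Predicted speedup when PCIe bandwidth is scaled by pcie_x and
    device memory bandwidth by mem_x, for a query whose baseline time
    splits into PCIe transfer and (memory-bound) kernel execution."""
    baseline = t_pcie + t_kernel
    return baseline / (t_pcie / pcie_x + t_kernel / mem_x)

# For a query spending 70% of its time on PCIe transfer, doubling the
# PCIe bandwidth yields a larger speedup than doubling the device
# memory bandwidth; as pcie_x grows, the kernel term dominates instead.
```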


