\begin{table*}
\centering
\caption{Default workload configuration for each query operator}
\begin{tabular}{|c|c|c|c|c|c|} \hline
Operator&Selectivity&Number of projected columns&Predicate/Agg Function&Type&Number of tuples\\ \hline
Selection&10\%&1&1 with min comparison&Integer&40 million\\\hline
Join&10\%&1 from each table&N/A&Integer&5 million and 80 million\\\hline
Agg&1\%&1&1 with SUM function&Integer&20 million\\\hline
Sort&N/A&1&N/A&Integer&2 million\\\hline
\end{tabular}
\label{table:workload}
\vspace{-0.25in}
\end{table*}

\begin{table*}
\centering
\caption{Factors to study for each query operator}
\begin{tabular}{|c|c|c|c|} \hline
Operator&Selectivity&Number of columns&Column width \\ \hline
Selection&Selectivity&Projected column and predicates&Projected column\\\hline
Join&Selectivity&Projected column&Projected column\\\hline
Agg&N/A&Agg column&Group by keys\\\hline
Sort&N/A&Projected column&Sort keys\\\hline
\end{tabular}
\label{table:factors}
\vspace{-0.15in}
\end{table*}



In this section we explore the impact of
different query characteristics on query performance using our GPU query engine.
We first study the four major warehousing operators (selection, join, aggregation, and sort)
and then investigate the behavior of complex queries using the SSBM queries.


\subsection{Test Environment and Measurement}
We conduct our study on an NVIDIA GTX 680 GPU with 2048 MB of device memory.
It has 8 streaming multiprocessors (SMs), each of which has 192 processing cores.
Each SM has a configurable 16KB/48KB on-chip L1 cache and shared memory,
and all SMs share an off-chip 768KB L2 cache on top of the device memory.
Users can explicitly disable the L1 cache when compiling their programs;
in our experiments we keep the L1 cache on at all times.
The host machine is equipped with an Intel Core i7-3770K quad-core 3.5GHz processor and 32 GB of memory,
running Red Hat Enterprise Linux 6.4.
The GTX 680 is connected to the host through a 16x PCIe 3.0 bus with a theoretical peak bandwidth of 16.0 GB/s.

Our GPU query engine is implemented in CUDA and compiled with the CUDA 5.0 toolkit.
To understand query behavior on the GPU,
we profile our programs with NVIDIA's command-line profiling tool \textit{nvprof}, included in the CUDA 5.0 toolkit.
The \textit{nvprof} tool can collect the activities of all GPU kernels and of data transfer operations between host and device
for a given GPU program. It can also access GPU hardware counters and capture various events.
In our experiments, we measure \textit{gst\_request} and \textit{gld\_request},
which count the memory requests issued in the GPU device,
and \textit{global\_store\_transaction}, \textit{l1\_global\_load\_hit}, and \textit{l1\_global\_load\_miss},
which count the memory transactions that actually occur in the GPU device.

Before conducting detailed experiments,
we first measure the PCIe transfer bandwidth
and the GPU device memory bandwidth,
since data access bandwidth has a large impact on analytical query performance.
To measure the PCIe transfer bandwidth, we transfer 256MB of data from host to device
and from device to host independently.
Both host memory and device memory are initialized before the data are transferred.
To measure the bandwidth of GPU device memory, we launch two simple kernels:
one reads 256MB of integers from GPU device memory in a coalesced manner,
and the other writes 256MB of integers to GPU device memory in a coalesced manner.
The measured results are shown in Table \ref{table:parameter}.
In all experiments we use pageable host memory unless otherwise specified.
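For reference, the bandwidth numbers in Table \ref{table:parameter} are simply the bytes moved divided by the elapsed time. A minimal host-side C sketch of this calculation follows (our engine itself is written in CUDA; the elapsed time in the usage comment is a hypothetical value, not a measurement):

```c
#include <stddef.h>

/* Effective bandwidth in GB/s: bytes moved divided by elapsed seconds. */
static double bandwidth_gbs(size_t bytes, double seconds) {
    return (double)bytes / seconds / 1e9;
}

/* Usage (hypothetical timing):
 *   bandwidth_gbs(256u * 1024u * 1024u, 0.0406)  ->  about 6.61 GB/s  */
```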
\begin{table}
\centering
%\caption{PCIe transfer and GPU device memory bandwidth for GTX 580}
\caption{GTX 680 transfer and memory bandwidth}
\begin{tabular}{|c|c|} \hline
Parameters&Value \\ \hline
Device memory sequential read&129.9 GB/s\\\hline
Device memory sequential write&150.4 GB/s\\\hline
PCIe HtoD for pageable memory&6.30 GB/s\\\hline
PCIe DtoH for pageable memory&6.20 GB/s\\\hline
PCIe HtoD for pinned memory&6.65 GB/s\\\hline
PCIe DtoH for pinned memory&6.64 GB/s\\\hline
\end{tabular}
\label{table:parameter}
\vspace{-0.2in}
\end{table}


\subsection{Study On Single Operators}
\begin{comment}
\begin{figure}
\centering
\includegraphics[width=1.8in, height=2.4in, angle=270]{graph/exp/cpu/1.ps}
\label{fig:efficiency}
\vspace{-0.15in}

\caption{Efficiency}
\end{figure}
\end{comment}


\subsubsection{Workloads}
Different workloads are needed to study the impact of query characteristics on
each query operator's performance.
For each operator, the workloads share the same format and differ only in the factor under study.
We therefore describe only the default workload for each operator and the performance factors we study:
when studying factor A, only factor A is changed from the default workload,
and all other factors remain the same.

Table \ref{table:workload} presents the configuration of the default workload
for each query operator,
and Table \ref{table:factors} presents the query factors we study.
We use selection as an example to illustrate the two tables.
In our experiments, the default selection workload runs on a
table with 40 million tuples. The selection operation has one predicate with a min comparison
between a table column and a constant, with a selectivity of 10\%.
One column is projected by the selection operator, and all columns are integers.
The main query characteristics we study for selection are the selectivity,
the number of projected columns, the number of predicates,
and the width of the projected column.
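As a concrete illustration, the core of the default selection workload can be sketched as the following single-threaded C loop (the actual engine parallelizes this in CUDA across thousands of threads; the data values in the test below are hypothetical):

```c
#include <stddef.h>

/* Simplified selection: keep tuples whose column value exceeds a constant
 * (one predicate), projecting a single integer column into `out`.
 * Returns the number of selected tuples, so selectivity = result / n. */
static size_t select_gt(const int *col, size_t n, int threshold, int *out) {
    size_t count = 0;
    for (size_t i = 0; i < n; i++)
        if (col[i] > threshold)
            out[count++] = col[i];  /* the result write is irregular on GPU */
    return count;
}
```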

\subsubsection{Execution Time Breakdown}

\begin{figure*}[ht]
\centering
\subfigure[Selection Time Breakdown]{
	\includegraphics[width=1.2in,height=1.6in,angle=270]{graph/exp/select/breakdown.ps}
	\label{fig:selectbreakdown}
}
\subfigure[Join Time Breakdown]{
	\includegraphics[width=1.2in,height=1.6in,angle=270]{graph/exp/join/breakdown.ps}
	\label{fig:joinbreakdown}
}
\subfigure[Agg Time Breakdown]{
	\includegraphics[width=1.2in,height=1.6in, angle=270]{graph/exp/agg/breakdown.ps}
	\label{fig:aggbreakdown}
}
\subfigure[Sort Time Breakdown]{
	\includegraphics[width=1.2in,height=1.6in, angle=270]{graph/exp/sort/breakdown.ps}
	\label{fig:sortbreakdown}
}
\subfigure[Selection Memory Access]{
	\includegraphics[width=1.2in,height=1.6in,angle=270]{graph/exp/select/mem.ps}
	\label{fig:selectmem}
}
\subfigure[Join Memory Access]{
	\includegraphics[width=1.2in,height=1.6in,angle=270]{graph/exp/join/mem.ps}
	\label{fig:joinmem}
}
\subfigure[Agg Memory Access]{
	\includegraphics[width=1.2in,height=1.6in, angle=270]{graph/exp/agg/mem.ps}
	\label{fig:aggmem}
}
\subfigure[Sort Memory Access]{
	\includegraphics[width=1.2in,height=1.6in, angle=270]{graph/exp/sort/mem.ps}
	\label{fig:sortmem}
}
\vspace{-0.15in}
\caption {Execution time breakdown and GPU device memory characteristics for query operators}
\label{fig:breakdown}
\end{figure*}

\begin{figure*}[ht]
\centering

\subfigure[Selectivity (\%)]{
	\includegraphics[width=1.2in,height=1.6in, angle=270]{graph/exp/select/5.ps}
	\label{fig:select5}
}
\subfigure[\# of projected columns]{
	\includegraphics[width=1.2in,height=1.6in,angle=270]{graph/exp/select/1.ps}
	\label{fig:select1}
}
\subfigure[Projected column width]{
	\includegraphics[width=1.2in,height=1.6in,angle=270]{graph/exp/select/2.ps}
	\label{fig:select2}
}
\subfigure[\# of Predicates]{
	\includegraphics[width=1.2in,height=1.6in,angle=270]{graph/exp/select/3.ps}
	\label{fig:select3}
}
\vspace{-0.15in}
\caption {Selection performance for different query characteristics}
\label{fig:select}
\vspace{-0.15in}
\end{figure*}

\begin{figure*}[ht]
\centering
\subfigure[Selectivity (\%)]{
	\includegraphics[width=1.2in,height=1.6in,angle=270]{graph/exp/join/1.ps}
	\label{fig:join1}
}
\subfigure[\# of fact columns]{
	\includegraphics[width=1.2in,height=1.6in,angle=270]{graph/exp/join/2.ps}
	\label{fig:join2}
}
\subfigure[\# of dim columns]{
	\includegraphics[width=1.2in,height=1.6in,angle=270]{graph/exp/join/3.ps}
	\label{fig:join3}
}
\subfigure[Dim column width]{
	\includegraphics[width=1.2in,height=1.6in, angle=270]{graph/exp/join/6.ps}
	\label{fig:join6}
}
\begin{comment}
\subfigure[Attribute Size of Fact Table]{
	\includegraphics[width=1.2in,height=1.6in, angle=270]{graph/exp/join/7.ps}
	\label{fig:join7}
}
\end{comment}
\vspace{-0.18in}
\caption {Join performance for different query characteristics}
\label{fig:join}
\vspace{-0.12in}
\end{figure*}


\begin{figure*}[ht]
\centering
\subfigure[Groupby key width]{
	\includegraphics[width=1.2in,height=1.6in, angle=270]{graph/exp/agg/1.ps}
	\label{fig:agg1}
}
\begin{comment}
\subfigure[Number of Distinct keys(\%)]{
	\includegraphics[width=1.2in,height=1.6in, angle=270]{graph/exp/agg/2.ps}
	\label{fig:agg2}
}
\end{comment}
\subfigure[\# of agg columns]{
	\includegraphics[width=1.2in,height=1.6in, angle=270]{graph/exp/agg/3.ps}
	\label{fig:agg3}
}
\subfigure[Sorted key width]{
	\includegraphics[width=1.2in,height=1.6in, angle=270]{graph/exp/sort/2.ps}
	\label{fig:sort1}
}
\subfigure[\# of sort columns]{
	\includegraphics[width=1.2in,height=1.6in, angle=270]{graph/exp/sort/1.ps}
	\label{fig:sort2}
}
\vspace{-0.2in}
\caption {Aggregation and sort performance for different query characteristics}
\label{fig:aggsort}
\vspace{-0.25in}
\end{figure*}



To understand the behavior of each query operator,
we break down the execution time of running each operator's default
workload on the GPU into PCIe host-to-device transfer time (HtoD),
PCIe device-to-host transfer time (DtoH),
and kernel execution time, which is further divided according to each query operator's
major kernel operations. For each kernel operation,
we also measure the number of issued GPU device memory access requests
and the number of memory access transactions that actually occur in the device.
The difference between these two numbers indicates
the kernel operation's memory access pattern
and its utilization of the GPU device memory bandwidth.
In the ideal case, the number of issued memory requests
equals the number of memory transactions that occur in the GPU,
which indicates a good memory access pattern and a high utilization
of the device memory bandwidth.
For example, when a query sequentially scans an integer column, the GPU device
memory accesses can be coalesced and the two numbers are the same.
On the other hand, when a query's access pattern on GPU device memory becomes irregular,
the number of actual memory transactions becomes larger than the number of issued requests.
The larger the difference,
the poorer the utilization of GPU device memory bandwidth.
The results are shown in Figure \ref{fig:breakdown}.
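The request-vs-transaction gap can be illustrated with a back-of-the-envelope model: a warp of 32 threads each loading a 4-byte word touches a single 128-byte memory segment when the accesses are consecutive, but up to 32 segments when they are strided. Below is a simplified C sketch of this counting (the 128-byte segment size is the common case on our GPU; the model ignores caching effects):

```c
/* Count the distinct 128-byte segments touched by one 32-thread warp,
 * where thread t loads the 4-byte word at index t * stride.
 * stride == 1 gives a fully coalesced access (1 transaction);
 * large strides degenerate to one transaction per thread (32). */
static int warp_transactions(int stride) {
    enum { WARP = 32, SEG = 128, WORD = 4 };
    int segments[WARP];
    int count = 0;
    for (int t = 0; t < WARP; t++) {
        int seg = (t * stride * WORD) / SEG;  /* segment index of the access */
        int seen = 0;
        for (int i = 0; i < count; i++)
            if (segments[i] == seg) { seen = 1; break; }
        if (!seen)
            segments[count++] = seg;
    }
    return count;
}
```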

The figure shows that the execution time of a kernel is correlated with
the number of actual memory transactions:
the larger the number of transactions,
the more time is spent in the kernel.
This implies that all of these query operators are bound by GPU device memory accesses.
%which is similar to what have been discussed in CPU environments \cite{ailamaki:dbms, cwi:radix}.

Comparing the PCIe data transfer time with the kernel execution time,
the access frequency of the transferred data and the data access pattern determine the share of
kernel execution time in the total execution time.
For the selection operator, the transferred data are accessed only once,
and most of the accesses are coalesced. Only the generation of selection results accesses the device memory
irregularly,
producing more write transactions than write requests,
as seen in Figure \ref{fig:selectmem}.
Consequently, most of the execution time is spent on PCIe data transfer.
The picture is different for join, aggregation, and sort.
For the join operator, although most of the transferred data are still accessed once, a relatively large portion
of the accesses, specifically the reads that probe the hash table
as shown in Figure \ref{fig:joinmem}, are random and generate many more read transactions
than read requests. This makes the kernel execution time comparable
to the PCIe transfer time for the join operator.
The portion of irregular accesses among all data accesses
is even larger for the aggregation operator, as seen in Figure \ref{fig:aggmem},
so more time is spent on kernel execution than on PCIe transfer.
The sort operator also spends more time on kernel execution, as shown in Figure \ref{fig:sortbreakdown}.
As seen in Figure \ref{fig:sortmem}, the kernel that merges sort keys generates far more memory
requests than the other kernel operations,
because the transferred data are accessed multiple times in this step.
This explains why the execution time of sort is dominated by kernel execution.
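The multiple accesses in the merge step follow from the structure of merge sort: after the keys are first sorted in fixed-size chunks, every merge pass reads and writes each key once, and the number of passes grows logarithmically with the input size. A small C sketch of the pass count (the 1024-element chunk size in the comment is a hypothetical per-thread-block sort size, not necessarily the engine's parameter):

```c
#include <stddef.h>

/* Number of pairwise merge passes needed to combine n elements that were
 * first sorted in runs of `chunk` elements.  Each pass reads and writes
 * every key once, so the keys are touched (1 + passes) times in total. */
static int merge_passes(size_t n, size_t chunk) {
    size_t runs = (n + chunk - 1) / chunk;  /* number of initial sorted runs */
    int passes = 0;
    while (runs > 1) {
        runs = (runs + 1) / 2;              /* each pass halves the run count */
        passes++;
    }
    return passes;
}

/* For the default sort workload (2 million tuples) with 1024-element runs,
 * this gives 11 passes, i.e. the keys are read and written 12 times. */
```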

\subsubsection{Performance Variations}

To investigate the impact of query characteristics on query performance,
we vary individual characteristics from the default workloads
and study the resulting performance.
The results are shown in Figure \ref{fig:select} through Figure \ref{fig:aggsort}.

For the selection operator, PCIe data transfer time dominates the execution time in all cases, as shown in Figure \ref{fig:select},
which indicates that a selection-only query is a very expensive operation on the GPU.
Among all query characteristics, selectivity is the only one with relatively little impact on
PCIe data transfer but a larger impact on kernel execution.
This is because increasing the selectivity increases the amount of selection results written to device memory,
which is accessed irregularly.

For join, we observe that query characteristics have different impacts on PCIe data
transfer time and kernel execution time.
Varying characteristics related to the fact table, namely the number of projected columns and the width
of the projected column, has a larger impact on PCIe data transfer than on kernel execution,
as shown in Figure \ref{fig:join2}.
On the other hand, varying characteristics related to the dimension table
has a larger impact on kernel execution than
on PCIe data transfer, as seen in Figure \ref{fig:join3} and Figure \ref{fig:join6}.
This is due to the random accesses to the dimension table during the join.
Join selectivity also has a greater impact on kernel execution time:
as selectivity increases, both the number of random reads when probing the hash table
and the number of join results randomly written to GPU device memory increase,
which significantly increases the kernel execution time.
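The random reads during hash-table probing can be seen in a simplified sketch of the foreign key join. Each fact-table key hashes to an essentially arbitrary slot of the dimension hash table, so neighboring threads read non-adjacent memory. The single-threaded C version below uses open addressing with linear probing as an illustrative choice, not necessarily the engine's exact scheme:

```c
#include <stddef.h>

#define EMPTY (-1)

/* Build phase: insert dimension keys and payloads into an open-addressing
 * hash table of capacity cap (a power of two larger than the key count). */
static void ht_build(const int *keys, const int *vals, size_t n,
                     int *hkeys, int *hvals, size_t cap) {
    for (size_t i = 0; i < n; i++) {
        size_t h = (size_t)keys[i] & (cap - 1);
        while (hkeys[h] != EMPTY)
            h = (h + 1) & (cap - 1);        /* linear probing */
        hkeys[h] = keys[i];
        hvals[h] = vals[i];
    }
}

/* Probe phase: for each fact-table key, look up the dimension payload.
 * The reads of hkeys/hvals jump around memory, which is why probing
 * generates many more transactions than requests on the GPU. */
static size_t ht_probe(const int *fact, size_t n, const int *hkeys,
                       const int *hvals, size_t cap, int *out) {
    size_t matches = 0;
    for (size_t i = 0; i < n; i++) {
        size_t h = (size_t)fact[i] & (cap - 1);
        while (hkeys[h] != EMPTY) {
            if (hkeys[h] == fact[i]) { out[matches++] = hvals[h]; break; }
            h = (h + 1) & (cap - 1);
        }
    }
    return matches;
}
```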

Similarly, for aggregation and sort, when a query characteristic
affects a kernel operation that either accesses the transferred data multiple times or accesses the
data irregularly, it has a larger impact on kernel execution
than on PCIe data transfer, as is the case for
the width of the sort key shown in Figure \ref{fig:sort1} and
the number of aggregated columns shown
in Figure \ref{fig:agg3}.
Otherwise, it has a larger impact on PCIe data transfer than on kernel execution,
as for the width of the aggregation keys shown in Figure \ref{fig:agg1} and the number of projected
columns for sort shown in Figure \ref{fig:sort2}.

\subsection{Study on SSBM Queries}
We now investigate how different query characteristics affect the performance of complex queries
using the SSBM benchmark.
We run the benchmark with a scale factor of 10,
which generates a fact table with 60 million tuples.

\begin{figure}
\centering
\includegraphics[width=1.4in, height=2.4in, angle=270]{graph/exp/ssb/base.ps}
\vspace{-0.15in}
\caption{SSBM query performance on GPU}
\vspace{-0.3in}


\label{fig:ssbbase}
\end{figure}

Figure \ref{fig:ssbbase} shows the query performance.
We break down the execution time into
PCIe transfer time (Transfer) and kernel execution time (Kernel) to get a high-level picture
of how time is spent on SSBM queries.
As can be seen in the figure, queries in the same query flight have almost the same PCIe data transfer time,
because they process the same amount of data from the fact table, which dominates the PCIe data transfer.
However, the queries have different kernel execution times due to differences in their query characteristics.
We analyze the kernel execution time at the granularity of query flights,
since queries in the same flight have similar characteristics.

Queries in query flight 1 contain selections on both the dimension table and the fact table,
and a foreign key join on the selection results followed by an aggregation.
Since the fact table is much larger than the dimension table,
the execution time of all queries in this flight is dominated by the selection
on the fact table.
As we have already discussed, PCIe transfer time always dominates the execution time
of the selection operator,
which explains the high share of transfer time in the total execution time for queries in query flight 1.

Queries in query flights 2 to 4 contain several selections on dimension tables
and several foreign key joins between the fact table and dimension tables, followed by aggregation and sort.
Their kernel execution times are all dominated by the time spent on foreign key joins.

For queries in flight 2, one key difference among their query characteristics
is the join selectivity,
which decreases from query 2.1 to query 2.3.
Since higher join selectivity implies longer kernel execution time, the kernel execution time decreases
from query 2.1 to query 2.3, as shown in the figure.
Comparing PCIe data transfer time and kernel execution time,
although these queries are dominated by join operations, most of their execution time is still spent
on PCIe data transfer.
Two join characteristics account for this: the low join selectivity (the highest join selectivity is only 4\%)
and the several columns projected from the fact table.
As already discussed for the join operator, both characteristics increase the portion
of data that is accessed once in a coalesced manner, which makes PCIe transfer dominate
the total execution time.
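A rough cost model makes the dominance of PCIe transfer here concrete: the transfer term grows linearly with the number of fact-table columns shipped over PCIe, independent of selectivity. The sketch below uses the pageable-memory bandwidth from Table \ref{table:parameter}; the column count in the example is an illustrative value, not the exact count for any particular SSBM query:

```c
#include <stddef.h>

/* Estimated PCIe transfer time in seconds for shipping c integer columns
 * (4 bytes per value) of an n-tuple fact table at bw GB/s.  With low join
 * selectivity the kernel term stays small, so this term dominates. */
static double transfer_seconds(size_t n, int c, double bw_gbs) {
    return (double)n * (double)c * 4.0 / (bw_gbs * 1e9);
}

/* Example: 60 million tuples, 4 columns, 6.3 GB/s -> about 0.15 s. */
```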

The query performance in query flight 3 can be divided into two groups: queries 3.2 to 3.4,
whose execution time is dominated by PCIe data transfer, and query 3.1, whose kernel execution time
is even longer than its PCIe data transfer time. We use query 3.1 as an example to
illustrate the performance difference.

\begin{verbatim}
Query 3.1 from SSBM:
select c_nation, s_nation,
      d_year, sum(lo_revenue) as revenue
from  customer, lineorder, supplier, date
where lo_custkey = c_custkey
      and lo_suppkey = s_suppkey
      and lo_orderdate = d_datekey
      and c_region = 'ASIA'  and s_region = 'ASIA'
      and d_year >= 1992 and d_year <= 1997 
group by c_nation, s_nation, d_year
order by d_year asc, revenue desc;
\end{verbatim}

Two factors contribute to query 3.1's long kernel execution time.
First, query 3.1 accesses string data from dimension tables,
specifically \textit{c\_nation} from \textit{customer} and
\textit{s\_nation} from \textit{supplier}, both of which have a width of 15 bytes.
Because of the random accesses to dimension table data and the GPU's inefficiency in handling
string data, the kernel execution time increases.
Second, query 3.1 has a high join selectivity:
the join selectivities for \textit{customer} and \textit{supplier} are both 20\%,
and higher selectivity increases the number of accesses to the dimension tables.
In fact, a large portion of query 3.1's kernel execution time is spent on accessing
the string data.
Queries 3.2 to 3.4 share many characteristics with query 3.1 but have much
lower kernel execution times.
The reason is that they have much lower join selectivities and several columns projected
from the fact table, which make their execution time dominated by PCIe transfer, similar to
the queries in flight 2.

Queries in flight 4 have characteristics similar to those in flight 3.
Queries 4.1 and 4.2 both have a high join selectivity and operate on string data from dimension tables,
similar to query 3.1,
while query 4.3 has a relatively low join selectivity.
Nevertheless, the kernel execution time of query 3.1 is longer than those of the queries in flight 4.
The main reason is that the first join executed in each flight 4 query does not access any column from
a dimension table, while query 3.1's does.

\textbf{Summary.} For join-dominated queries, higher selectivity and more projections of irregularly accessed data from dimension tables significantly increase the kernel execution time.

