\begin{table*}[htbp]
\centering
\caption{\textbf{SSBM Summary}.
The table lists the major operations and the filter factors (FF) of each SSBM query.
L denotes the fact table {\it lineorder}; D, S, C and P denote the four dimension
tables {\it date}, {\it supplier}, {\it customer} and {\it part}.
}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline
Query&Operation&FF on L&FF on D&FF on S&FF on C&FF on P&Overall Selectivity\\\hline
q1.1&$\sigma(L) \bowtie \sigma(D)$&0.47*3/11&1/7&-&-&-&0.019\\\hline
q1.2&$\sigma(L) \bowtie \sigma(D)$&0.2*3/11&1/84&-&-&-&0.00065\\\hline
q1.3&$\sigma(L) \bowtie \sigma(D)$&0.1*3/11&1/364&-&-&-&0.000075\\\hline
q2.1&$L \bowtie \sigma(P) \bowtie \sigma(S) \bowtie D $&-&-&1/5&-&1/25&0.008\\\hline
q2.2&$L \bowtie \sigma(P) \bowtie \sigma(S) \bowtie D $&-&-&1/5&-&1/125&0.0016\\\hline
q2.3&$L \bowtie \sigma(P) \bowtie \sigma(S) \bowtie D $&-&-&1/5&-&1/1000&0.0002\\\hline
q3.1&$L \bowtie \sigma(C) \bowtie \sigma(S) \bowtie \sigma(D) $&-&6/7&1/5&1/5&-&0.034\\\hline
q3.2&$L \bowtie \sigma(C) \bowtie \sigma(S) \bowtie \sigma(D) $&-&6/7&1/25&1/25&-&0.0014\\\hline
q3.3&$L \bowtie \sigma(C) \bowtie \sigma(S) \bowtie \sigma(D) $&-&6/7&1/125&1/125&-&0.000055\\\hline
q3.4&$L \bowtie \sigma(C) \bowtie \sigma(S) \bowtie \sigma(D) $&-&1/84&1/125&1/125&-&0.00000076\\\hline
q4.1&$L \bowtie \sigma(S) \bowtie \sigma(C) \bowtie \sigma(P) \bowtie D $&-&-&1/5&1/5&2/5&0.016\\\hline
q4.2&$L \bowtie \sigma(S) \bowtie \sigma(C) \bowtie \sigma(P) \bowtie \sigma(D) $&-&2/7&1/5&1/5&2/5&0.0046\\\hline
q4.3&$L \bowtie \sigma(S) \bowtie \sigma(C) \bowtie \sigma(P) \bowtie \sigma(D) $&-&2/7&1/25&1/5&1/25&0.000091\\\hline
\end{tabular}
\label{table:workload}
\end{table*}

\subsection{Workloads}
We use the Star Schema Benchmark (SSBM) \cite{oneil:ssb}
which has already been widely used in various data warehousing research studies \cite{DBLP:conf/sigmod/AbadiMH08,DBLP:journals/pvldb/CandeaPV09}.
It has one fact table {\it lineorder} and four dimension tables {\it date}, {\it supplier}, {\it customer} and {\it part},
which are organized in a star schema fashion, as shown in Figure \ref{fig:starschema}.
There are a total of 13 queries in the benchmark, divided into 4 query flights.
Table \ref{table:workload} summarizes the major characteristics of the SSBM queries.
In our experiments, we run the benchmark with a scale factor
of 10, which generates a fact table with 60 million tuples.
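The overall selectivity reported in Table \ref{table:workload} is simply the product of a query's per-table filter factors. The following small script (an illustrative check, not part of any query engine) reproduces two of the table entries:

```python
from fractions import Fraction

def overall_selectivity(*filter_factors):
    """Fraction of fact-table tuples surviving all dimension predicates:
    the product of the per-table filter factors."""
    product = Fraction(1)
    for ff in filter_factors:
        product *= Fraction(ff)
    return float(product)

# q2.1: FF 1/5 on supplier, 1/25 on part
q21 = overall_selectivity(Fraction(1, 5), Fraction(1, 25))
print(q21)  # 0.008, as reported for q2.1

# q3.1: FF 6/7 on date, 1/5 on supplier, 1/5 on customer
q31 = overall_selectivity(Fraction(6, 7), Fraction(1, 5), Fraction(1, 5))
print(round(q31, 3))  # 0.034, as reported for q3.1
```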

\begin{figure}
\centering
\epsfig{file=graph/setup/ssbm.eps,width=0.40\textwidth}
\caption{Schema of SSBM}
\label{fig:starschema}
\end{figure}


\begin{table}
\centering
\caption{Hardware Specifications}
\begin{tabular}{|c|c|c|c|} \hline
Processors&\# of Cores&GFLOPS&Bandwidth (GB/s)\\\hline
NVIDIA 480&480&1345&177.4\\\hline
NVIDIA 580&512&1581.1&192.4\\\hline
NVIDIA 680&1536&3090.4&192.256\\\hline
AMD 7970&2048&3788.8&264\\\hline
Intel Core i7&4&112&25.6\\\hline
\end{tabular}
\label{table:hardware}
\end{table}

\subsection{Experimental Environments}

\subsubsection{Hardware Platforms}
We conduct our experiments on four GPUs: NVIDIA GTX 480, GTX 580, GTX 680 and AMD HD 7970.
The NVIDIA GTX 480 and 580 support only PCIe 2.0, while the NVIDIA GTX 680 and AMD HD 7970
support PCIe 3.0.
Each GPU is connected to a PCIe 3.0 bus when we conduct experiments on it.
The host machine is equipped with an Intel Core i7-3770K quad-core 3.5GHz processor and 32GB of memory.
Table \ref{table:hardware} lists the major hardware parameters for these processors.

\subsubsection{Software Platforms}
All the experiments are conducted under Red Hat Enterprise Linux 6.4 (kernel 2.6.32-358.2.1).
The NVIDIA GPUs use NVIDIA Linux driver 310.44 with CUDA SDK 5.0.35.
The AMD HD 7970 uses AMD Linux driver Catalyst 13.1 with AMD APP SDK 2.8.
We use the query performance of MonetDB (version 11.15.3) to represent the state of the art of query performance on the CPU.
The OpenCL query engine on the Intel Core i7 is compiled with the Intel 2013 XE beta SDK.

\subsection{Measurement}

\subsubsection{Methodology and tools}
When measuring the overall query execution time,
we assume that data are already in the host memory and exclude the disk loading time.

We use NVIDIA's command-line profiling tool \textit{nvprof} from the CUDA 5.0 toolkit
to profile query behavior on the NVIDIA GPUs.
For the OpenCL query engine, we use OpenCL events to collect kernel execution times and PCIe transfer times.
When measuring query performance on MonetDB, we place the data in a ramdisk to exclude disk loading time.

\subsubsection{Measurement of bandwidth}

\begin{table}
\centering
\caption{GPU Bandwidth Measurement}
\begin{tabular}{|c|c|c|c|c|} \hline
&480&580&680&7970\\\hline 
Read(GB/s)&114.59&129.95&127.65&202.76\\\hline
Write(GB/s)&138.34&150.41&153.43&116.44\\\hline
HtoD pageable(GB/s)&6.30&6.30&6.30&9.80\\\hline
HtoD pinned(GB/s)&6.65&6.65&12.28&11.13\\\hline
DtoH pageable(GB/s)&6.20&6.20&6.22&9.19\\\hline
DtoH pinned(GB/s)&6.64&6.64&12.75&11.81\\\hline
\end{tabular}
\label{table:bandwidth}
\end{table}


Before conducting detailed experiments on all the GPUs,
we first measure two critical parameters: the PCIe transfer bandwidth and the GPU device memory bandwidth.
To measure the former, we transfer 256MB of data between host memory and GPU device memory,
distinguishing pageable host memory from pinned host memory.
To measure the latter, we launch two GPU kernels that read/write 256MB of integers from/to GPU device memory in a coalesced manner.
The measured results are reported in Table \ref{table:bandwidth}.
As shown in Table \ref{table:bandwidth},
the PCIe transfer bandwidth is higher when the host memory is pinned (e.g., nearly doubled for the GTX 680).
The reason is that data in pinned memory can be transferred directly by the GPU's DMA engine,
whereas data in pageable memory must first be copied into a pinned DMA buffer
before being transferred by the DMA engine \cite{amdsdk}.
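Effective bandwidth here is simply bytes moved divided by elapsed time, and the pinned-versus-pageable speedup can be checked directly against the host-to-device figures in Table \ref{table:bandwidth} (a small illustrative script, not part of the measurement harness):

```python
def effective_bandwidth_gbs(bytes_moved, seconds):
    """Effective bandwidth in GB/s: bytes moved divided by elapsed time."""
    return bytes_moved / seconds / 1e9

# Host-to-device bandwidth (GB/s), taken from Table 3.
htod_pageable = {"480": 6.30, "580": 6.30, "680": 6.30, "7970": 9.80}
htod_pinned   = {"480": 6.65, "580": 6.65, "680": 12.28, "7970": 11.13}

# Pinned memory yields ~1.95x the pageable bandwidth on the GTX 680,
# i.e., nearly doubled, while the gain on the 480/580 is marginal.
for gpu in htod_pageable:
    speedup = htod_pinned[gpu] / htod_pageable[gpu]
    print(f"GTX/HD {gpu}: pinned is {speedup:.2f}x pageable")
```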


