\label{sec:comparison}
Having studied query execution behaviors and the effects of software optimizations on the GPU,
we are now in a position to compare
our GPU query engine with its CPU counterpart under different conditions.
Our purpose is to reveal the advantages and disadvantages of the GPU engine,
as well as the support and limitations of current GPU programming environments.
Specifically, we answer the following questions:


\begin{itemize}

\item Under what conditions will GPU significantly outperform CPU for processing warehousing queries?
(Section \ref{sec:cpugpu})

\item 
Which programming model, CUDA or OpenCL, is more suitable and supportive for programming warehousing queries on the GPU?
(Section \ref{sec:cudaopencl})

\item 
How do different GPU hardware platforms and their supporting software systems affect
query performance when their basic hardware parameters are similar?
(Section \ref{sec:nvidiaamd})

\item
Given the functional portability of OpenCL, how does the OpenCL query engine designed for GPUs
perform compared with MonetDB?
(Section \ref{sec:cpus})

\end{itemize}

\begin{figure}
\centering
\includegraphics[width=1.2in, height=2.4in, angle=270]{graph/perfComp/overall.ps}
\vspace{-0.15in}
\caption{SSBM performance comparison. For Intel Core i7,
the numbers for Q4.1 and Q4.2 are from the OpenCL engine, while
the rest are from MonetDB.}
\label{fig:overall}
\end{figure}

\begin{figure}
\centering
\includegraphics[width=1.2in, height=2.4in, angle=270]{graph/perfComp/speedup.ps}
\vspace{-0.15in}
\caption{SSBM performance speedup}
\label{fig:speedup}
\end{figure}


\subsection{Comparisons of GPU and CPU}
\label{sec:cpugpu}

\textit{
\begin{itemize}
\item The GPU query engine outperforms the CPU query engine for processing all SSBM queries.
However, the performance speedup varies significantly depending on query characteristics and system setups.
\item The key to obtaining high query execution performance on the GPU is to prepare the data
in the pinned memory, where 4.5x-6.5x speedups can be observed for certain queries.
When data are in the pageable memory, the speedups are only 1.2x-2.6x for all SSBM queries.
\item The GPU achieves limited speedups (around 2x) for queries that are 1) dominated by selection operations,
or 2) dominated by random accesses to dimension tables caused by high join selectivities and
projected columns from dimension tables.
\end{itemize}
}

Our comparisons are based on the following two kinds of performance numbers.
First, the GPU performance is that of the CUDA engine on the NVIDIA GTX 680.
%The performance comparisons between different programming models and between NVIDIA GPU and AMD GPU
%are discussed in Section \ref{sec:cudaopencl} and Section \ref{sec:nvidiaamd}.
Second, the CPU performance for each query is the better of
MonetDB and our OpenCL query engine on the Intel Core i7.
We conduct the experiments under two conditions:
1) data are available in the pinned memory;
and 2) data are available in the pageable memory.
Figure \ref{fig:overall} shows the execution time of SSBM queries and
Figure \ref{fig:speedup} shows the performance speedup of GPU over CPU.

\subsubsection{Data are available in the pinned memory}

When data are available in the pinned memory, both the data compression
technique and the transfer overlapping technique can be utilized to accelerate the query execution on GPU.
As can be seen in Figure \ref{fig:speedup}, the GPU outperforms the CPU on all SSBM queries.
However, the performance speedup varies significantly,
and the differences come from differences in query characteristics.
Whether we can gain a significant speedup when processing a query on the GPU depends on whether the query can fully benefit from
the different software optimization techniques and whether it can utilize the GPU hardware effectively.
We divide the performance speedups into two categories.

\textbf{Category of Low speedup.}
For Q1.1 to Q1.3 and Q3.1, processing on the GPU gains only around a 2x speedup, as shown in Figure \ref{fig:speedup}.
Queries in flight 1 are dominated by selection operations,
which cannot benefit from the transfer overlapping technique.
Although the data compression technique can reduce the PCIe transfer overhead,
the kernel execution performance is not improved.
Since selection involves little computation, processing it on the GPU yields no significant speedup.
Q3.1 is dominated by random accesses to data from dimension tables. It cannot benefit
much from either the data compression technique or the transfer overlapping technique.
Furthermore, the random accesses cannot effectively utilize the bandwidth of the GPU device memory.
In this case, we cannot gain a significant performance speedup.
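To make the cost of selection concrete, the following is a minimal CPU-side C sketch of the filter logic, not the engine's actual kernel; the column names mirror the SSBM flight-1 predicates and are illustrative. Each element requires only a handful of comparisons, so the pass is bound by memory bandwidth rather than computation.

```c
#include <stddef.h>

/* Illustrative sketch of a selection scan over two columns.
 * Each element costs a few comparisons and one flag write,
 * so throughput is limited by memory bandwidth, not arithmetic. */
static size_t selection_scan(const int *quantity, const int *discount,
                             int *flags, size_t n)
{
    size_t matches = 0;
    for (size_t i = 0; i < n; i++) {
        /* Q1.1-style predicate: quantity < 25 AND 1 <= discount <= 3 */
        flags[i] = (quantity[i] < 25) && (discount[i] >= 1) && (discount[i] <= 3);
        matches += flags[i];
    }
    return matches;
}
```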

\textbf{Category of High speedup.}
For Q2.1 to Q2.3, Q3.2 to Q3.4 and Q4.1 to Q4.3, processing on the GPU gains a 4.5x to 6.5x speedup,
as shown in Figure \ref{fig:speedup}.
The kernel execution time of Q2.1 to Q2.3 is dominated by the hash probing phase of the join,
which can benefit from both the data compression technique and the transfer overlapping technique.
The kernel execution time of Q3.2 to Q3.4 and Q4.1 to Q4.3 is dominated by both the hash probing
and the projection of join results from the fact table,
and the projection of join results can benefit from the transfer overlapping technique.
In summary, queries dominated by hash probing and by projecting join results from
the fact table gain a significant speedup when processed on the GPU.
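For readers unfamiliar with the operation, here is a simplified CPU-side C sketch of the hash probing that dominates these kernels, assuming a pre-built open-addressing table of dimension keys; the names and the linear-probing scheme are ours, not the engine's actual implementation. The probe walks the fact-table foreign-key column sequentially, which is why it pipelines well with PCIe transfer overlapping.

```c
#include <stddef.h>

#define EMPTY (-1)   /* sentinel for an unused bucket */

/* Probe each fact-table foreign key against a dimension hash table
 * (open addressing with linear probing). Returns the match count and
 * records per-row match flags for the later projection step. */
static size_t hash_probe(const int *fact_fk, size_t n_fact,
                         const int *dim_keys, size_t n_buckets,
                         int *match_flags)
{
    size_t matches = 0;
    for (size_t i = 0; i < n_fact; i++) {
        size_t b = (size_t)fact_fk[i] % n_buckets;   /* simple modulo hash */
        int hit = 0;
        while (dim_keys[b] != EMPTY) {               /* probe until empty bucket */
            if (dim_keys[b] == fact_fk[i]) { hit = 1; break; }
            b = (b + 1) % n_buckets;
        }
        match_flags[i] = hit;
        matches += hit;
    }
    return matches;
}
```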

\subsubsection{Data are available in the pageable memory}

When data are available in the pageable memory,
only the data compression technique can be utilized to accelerate query execution on the GPU.
As can be seen in Figure \ref{fig:speedup},
the performance speedup degrades greatly compared with the pinned-memory case.
Most SSBM queries gain only around a 2x speedup,
and for Q1.2 and Q1.3 the speedups are only 1.15x.
The main reason is that the PCIe transfer bandwidth cannot be fully utilized
when data are in the pageable memory.
The benefits of the GPU's high memory bandwidth and high computational power are mostly counteracted
by the high PCIe transfer overhead.

\subsection{Impacts of programming models and GPU hardware}


\textit{
\begin{itemize}
\item From both the performance and the programming perspective,
CUDA is more suitable and supportive for processing warehousing queries.
\item Without using the pinned memory, the NVIDIA OpenCL query engine
achieves performance similar to the CUDA engine.
However, NVIDIA's OpenCL implementation does not fully support pinned host memory.
\item The performance slowdown when porting from NVIDIA CUDA (on the GTX 680) to AMD OpenCL (on the HD 7970)
is caused not by differences in hardware efficiency (PCIe transfer time or kernel execution),
but by AMD's OpenCL implementation of GPU memory management.
\item The major obstacle to OpenCL portability is not a performance slowdown of GPU kernel executions but
subtle differences among vendor implementations of the OpenCL specification.
\end{itemize}
}

\begin{comment}

\begin{figure}
\centering
\includegraphics[width=1.2in, height=2.4in, angle=270]{graph/exp/ssb/cudaopencl.ps}
\vspace{-0.15in}

\caption{Performance of SSBM using CUDA and OpenCL}
\label{fig:cudaopencl}
\vspace{-0.15in}

\end{figure}

\begin{figure}
\centering
\includegraphics[width=1.2in, height=2.4in, angle=270]{graph/exp/ssb/openclBreak.ps}
\vspace{-0.15in}

\caption{Normalized OpenCL performance on CUDA performance without transfer overlapping}
\label{fig:openclbreak}
\vspace{-0.15in}

\end{figure}

\end{comment}

\begin{figure}
\centering
\includegraphics[width=1.2in, height=2.4in, angle=270]{graph/exp/ssb/openclBreakPageable.ps}
\vspace{-0.15in}

\caption{Normalized OpenCL performance over CUDA}
\label{fig:openclbreakpageable}
\vspace{-0.15in}

\end{figure}

\subsubsection{Comparisons of CUDA and NVIDIA OpenCL}
\label{sec:cudaopencl}

To compare these two programming models, we focus on the NVIDIA GTX 680, which can run both CUDA and OpenCL programs.

\textbf{Programming differences.}
Since CUDA and OpenCL share many design concepts,
the programming efforts are similar for warehousing queries.
However, NVIDIA's OpenCL implementation makes it impossible to apply all
software optimization techniques when running the OpenCL engine on an NVIDIA GPU.
The problem is that NVIDIA OpenCL does not fully support pinned host memory.
In our experiments we find that on NVIDIA GPUs,
the sum of regularly allocated device memory
and the memory allocated with the flag CL\_MEM\_ALLOC\_HOST\_PTR,
which should be allocated in pinned host memory \cite{nvidiaopencl},
cannot exceed the total size of the GPU device memory.
In this case, we cannot prepare the data in the pinned memory before query execution
because of the large data size,
and can only utilize the data compression technique.

\begin{comment}
We first compare the performance of SSBM queries on our CUDA and OpenCL implemented engines,
both with software optimizations. The results are shown in Figure \ref{fig:cudaopencl}.

What is surprising here is that SSBM queries perform better on the OpenCL implemented engine.
To understand the reason behind it, we remove the transfer overlapping optimization from our engine
so that we can break down the execution time.
We re-run all the SSBM queries (with pinned host memory and data compression)
and we break down the execution time into PCIe Transfer time (Transfer),
GPU kernel execution time (Kernel) and Other (Other).
We normalize the OpenCL performance on CUDA for each part.
The results are shown in Figure \ref{fig:openclbreak}.

As can be seen in Figure \ref{fig:openclbreak}, SSBM queries have similar kernel execution time and CPU time on both engines.
However, there is a big difference in the PCIe transfer performance. The OpenCL implemented
engine has much lower PCIe transfer overhead.
The reason is that when we allocate pinned host memory using CL\_MEM\_ALLOCATE\_HOST\_PTR in OpenCL,
the NVIDIA implemented OpenCL allocates the memory in GPU device instead of the host memory.
In this case, there are no PCIe transfer for data that should be in the pinned memory.

To fairly compare these two programming models, we further replace the pinned memory
with the pageable memory. We also normalize OpenCL performance on CUDA for each part and
the results are shown in Figure \ref{fig:openclbreakpageable}.

When using pageable memory, the two programming models have almost the same performance.
This indicates that OpenCL is a better programming model.
\end{comment}

\textbf{Performance differences.}
Considering the above limitation,
we compare the query performance with pageable host memory and the data compression
technique.
The CUDA query engine and the OpenCL query engine use the same thread configurations
and the same algorithms.
We break down the execution time into PCIe transfer (Transfer), kernel execution (Kernel),
and other (Other), which mainly includes allocating and releasing GPU device memory and other operations on the CPU.
We normalize the OpenCL performance to the CUDA performance for each part.
%The results are shown in Figure \ref{fig:openclbreakpageable}.

As shown in Figure \ref{fig:openclbreakpageable}, warehousing queries implemented in CUDA
and in OpenCL have almost the same performance. This differs from results for HPC applications,
where a significant performance difference exists when simply porting a CUDA implementation
to OpenCL \cite{DBLP:journals/pc/DuWLTPD12}.
The difference is mainly determined by the characteristics of warehousing workloads,
whose performance is bounded not by computation but by PCIe data transfers and memory accesses.
First, neither programming model affects the PCIe transfer bandwidth.
Second, both programming models support the GPU memory hierarchy well.
The computation-oriented optimization techniques of the CUDA compiler reported in \cite{DBLP:journals/pc/DuWLTPD12}
do not apply to warehousing workloads.

\subsubsection{Comparisons of NVIDIA and AMD GPUs}
\label{sec:nvidiaamd}

\begin{figure}
\centering
\includegraphics[width=1.2in, height=2.4in, angle=270]{graph/exp/ssb/nvidiaamd.ps}
\vspace{-0.15in}

\caption{NVIDIA Versus AMD}
\label{fig:nvidiaamd}
\vspace{-0.15in}

\end{figure}


\begin{figure}
\centering
\includegraphics[width=1.2in, height=2.4in, angle=270]{graph/exp/ssb/amdBreak.ps}
\vspace{-0.15in}

\caption{Normalized SSBM performance on AMD GPU}
\label{fig:amdBreak}
\vspace{-0.15in}

\end{figure}

\begin{comment}
NVIDIA and AMD are the two major GPU vendors.
GPUs from these two vendors differ in their internal architectures.
For NVIDIA and AMD GPUs with similar PCIe transfer bandwidth and GPU device bandwidth, it is unclear how the warehousing queries will perform on them.
To answer this question, we compare the performance of SSBM queries on the CUDA query engine on NVIDIA GTX 680
\end{comment}
We compare the performance of SSBM queries on the CUDA query engine on the NVIDIA GTX 680
with their performance on the OpenCL query engine on the AMD HD 7970.
Both engines are optimized with the data compression technique and the transfer overlapping technique.
The results are shown in Figure \ref{fig:nvidiaamd}.

As can be seen in Figure \ref{fig:nvidiaamd},
SSBM queries on the NVIDIA GTX 680 outperform those on the AMD HD 7970,
and the performance gap is almost constant across all SSBM queries.
To understand the performance differences, we remove the transfer overlapping technique from both engines so that we can break down
the execution time into PCIe transfer (Transfer), GPU kernel execution (Kernel),
and other (Other), which mainly includes allocating and releasing GPU device memory and other operations on the CPU.
For each part, we normalize the performance to that of the AMD GPU. The results are shown in Figure \ref{fig:amdBreak}.

\begin{comment}
\begin{algorithm}
\caption{Execution time profiling}
\label{profile}

Allocate a buffer from GPU Device memory;

Record startTime;

Transfer data from host memory to the buffer;

Wait for the transfer to finish;

Record endTime;

Release the buffer from GPU Device memory;
\end{algorithm}
\end{comment}

As shown in Figure \ref{fig:amdBreak}, the two GPUs have comparable performance for PCIe data transfer and kernel execution.
Since the two GPUs have comparable hardware, we would expect them to have similar performance for SSBM queries.
However, for the Other component, the AMD GPU has a much longer execution time.
To explain more clearly why this CPU-side time is much longer on the AMD GPU,
we use a simple data transfer experiment
to illustrate where the time is spent.

We first allocate a buffer from GPU device memory and then
transfer the data to the buffer. We measure the transfer time in two different ways. In the first,
the total time is recorded as the difference
between the time when we launch the PCIe transfer and the time when the transfer finishes, both
measured using wall-clock time.
In the second, we measure the transfer time using OpenCL events.
The time measured with OpenCL events matches what we expect based on
the measured PCIe transfer bandwidth. However, the wall-clock time is much longer,
which accounts for the overall performance gap between processing SSBM queries
on NVIDIA and AMD. The reason may relate to AMD OpenCL's implementation of
memory object management:
when allocating memory from GPU device memory, the AMD driver defers the allocation until the
memory object is first used. When the memory initialization cost is high, the performance suffers.
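The deferred-allocation behavior we suspect can be illustrated with a small C simulation; this is a stand-in for what we believe the driver does, not vendor code, and the struct and function names are ours. A wall-clock timer around the first transfer would include the hidden allocation cost, while a timer covering only the device-side copy (as OpenCL events do) would not.

```c
#include <stdlib.h>
#include <string.h>

/* Simulated lazily-allocated buffer: the "allocation" call only records
 * the requested size; the backing store is created on first use, so the
 * hidden cost is charged to the first transfer, not to the allocation. */
typedef struct {
    size_t size;
    char  *backing;   /* NULL until the buffer is first used */
} lazy_buffer;

static lazy_buffer lazy_alloc(size_t size)
{
    lazy_buffer b = { size, NULL };   /* allocation deferred */
    return b;
}

static void lazy_write(lazy_buffer *b, const char *src, size_t len)
{
    if (b->backing == NULL)           /* hidden cost paid on first use */
        b->backing = malloc(b->size);
    memcpy(b->backing, src, len);
}
```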

\subsection{Comparisons of OpenCL query engine on CPU with MonetDB}
\label{sec:cpus}


\textit{
\begin{itemize}
\item Porting the OpenCL query engine from GPUs to the CPU works well after changing each thread's memory access pattern
and the thread configurations.
\item MonetDB outperforms the OpenCL query engine for selection-dominated queries
and for join-dominated queries with low selectivities.
\item The OpenCL query engine has comparable or better performance for join-dominated queries
with high selectivities.
\end{itemize}
}

\begin{figure}
\centering
\includegraphics[width=1.2in, height=2.4in, angle=270]{graph/exp/ssb/openclMonet.ps}
\vspace{-0.15in}

\caption{SSBM performance comparison on CPU}
\label{fig:openclMonet}
\vspace{-0.15in}

\end{figure}

Our OpenCL query engine can run on both CPUs and GPUs.
Studying its performance on CPUs will help partition
the workload between CPUs and GPUs.
We compare the performance of the SSBM queries on the OpenCL query engine
with their performance on MonetDB.
For the OpenCL query engine, we adapt our GPU-based algorithms
to the CPU architecture
by changing the access pattern of each thread: when running on the CPU, each thread accesses
a contiguous range of the data instead of accessing the data in a strided fashion.
All other algorithms remain the same.
The execution time is shown in Figure \ref{fig:openclMonet}.
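The access-pattern change above can be sketched in plain C; the thread count and function names are illustrative, not the engine's code. On the GPU, thread $t$ of $T$ reads elements $t, t+T, t+2T, \ldots$ so that adjacent threads touch adjacent words (coalesced accesses); on the CPU, each thread instead takes one contiguous chunk, which matches cache-line prefetching.

```c
#include <stddef.h>

/* GPU-style strided partition: thread `tid` of `nthreads`
 * visits elements tid, tid+nthreads, tid+2*nthreads, ... */
static long sum_strided(const int *data, size_t n, size_t tid, size_t nthreads)
{
    long s = 0;
    for (size_t i = tid; i < n; i += nthreads)
        s += data[i];
    return s;
}

/* CPU-style contiguous partition: thread `tid` takes one
 * contiguous chunk of roughly n/nthreads elements. */
static long sum_contiguous(const int *data, size_t n, size_t tid, size_t nthreads)
{
    size_t chunk = (n + nthreads - 1) / nthreads;
    size_t lo = tid * chunk;
    size_t hi = (lo + chunk > n) ? n : lo + chunk;
    long s = 0;
    for (size_t i = lo; i < hi; i++)
        s += data[i];
    return s;
}
```

Both partitions cover every element exactly once; only the per-thread access order, and hence the memory-system behavior, differs.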

MonetDB significantly outperforms our OpenCL query engine on the CPU for selection-dominated queries
such as Q1.1 to Q1.3.
The performance gap is caused by the inefficiency of the GPU algorithms when executed on the CPU.
In the GPU implementation of the selection operator, the where predicates must be evaluated
and the number of results counted before the selection results can be generated.
Thus, when a column appears both in the where predicates and in the projected columns,
as is the case for Q1.1 to Q1.3 in SSBM,
the GPU selection algorithm scans the column twice, which is unnecessary on the CPU.
This increases the execution time of our OpenCL query engine on the CPU.
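The two-pass (count-then-write) structure can be sketched in C as follows; this is a simplified single-threaded illustration of the idea, not the engine's kernel, and the function name is ours. The first pass counts matches so an exact-size result buffer can be allocated; the second pass re-evaluates the predicate to emit results, so a column that is both filtered and projected is scanned twice.

```c
#include <stdlib.h>

/* Two-pass selection: count matches first (so the result buffer can be
 * sized exactly, as a GPU kernel must), then re-scan to write results.
 * Selects elements of `col` in the inclusive range [lo, hi]. */
static size_t select_twopass(const int *col, size_t n, int lo, int hi,
                             int **out)
{
    size_t count = 0;
    for (size_t i = 0; i < n; i++)          /* pass 1: count matches */
        count += (col[i] >= lo && col[i] <= hi);

    *out = malloc(count * sizeof(int));
    size_t j = 0;
    for (size_t i = 0; i < n; i++)          /* pass 2: re-scan and write */
        if (col[i] >= lo && col[i] <= hi)
            (*out)[j++] = col[i];
    return count;
}
```

A CPU engine can instead materialize results in a single pass into a growable buffer, avoiding the second scan.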
 
The performance gap between our OpenCL query engine on the CPU and MonetDB for join-dominated
queries is determined by join selectivities.
We observe that the performance advantage of MonetDB over our OpenCL query engine
decreases as the join selectivity increases.
For example, for Q3.2 to Q3.4, which have very low selectivities, MonetDB is more than 2x faster than
our OpenCL engine. However, for Q4.1 and Q4.2, which have high selectivities, our OpenCL engine even
outperforms MonetDB. This is further confirmed by Q2.1 to Q2.3, as shown in Figure \ref{fig:openclMonet}:
the performance gap between our OpenCL query engine and MonetDB increases as the query selectivity decreases
from 0.008 to 0.00016.
We believe this is because MonetDB can effectively utilize the CPU cache when join selectivities are low.
Optimizing our OpenCL query engine for the CPU cache is future work.


\begin{comment}
\subsection{Performance Portability}
\label{sec:portable}

\begin{figure}
\centering
\includegraphics[width=1.2in, height=2.4in, angle=270]{graph/exp/ssb/port.ps}
\vspace{-0.15in}

\caption{SSBM performance on different processors}
\label{fig:portability}
\vspace{-0.15in}

\end{figure}

OpenCL has already provided functional portability among different computing platforms.
However, it is unclear whether the performance of OpenCL implemented queries is also
portable considering the differences in the processor architectures, in the compilers
and in their implementations of OpenCL.

To examine the portability of the OpenCL implemented warehousing queries, we compare the performance
of SSBM queries on our OpenCL implemented engine on Intel Core i7, NVIDIA GTX 680 and AMD HD 7970.
The results are shown in Figure \ref{fig:portability}.

As shown in the figure, the relative performance among SSBM queries are the same on different
computing platforms although the absolute performance numbers differ.
This is because the performance of warehousing queries is mainly bounded by the PCIe bandwidth
and the GPU device memory bandwidth.
This implies that we can develop a cost model to estimate the performance of warehousing queries
that work well on different computing platforms.

\end{comment}
