In this section we examine how different software
optimization techniques affect SSBM query performance on the GPU.
We focus on three techniques:
data compression and invisible join,
which have been proven effective in CPU environments,
and CUDA Unified Virtual
Addressing (UVA), which can improve the performance of GPU programs.
All experiments in this section are carried out on a GTX 580 GPU.

\subsection{Effect of Data Compression}

\begin{figure}[b]
\centering
\includegraphics[width=1.4in, height=2.4in, angle=270]{graph/exp/ssb/compression.ps}
%\caption{Performance comparison between running SSBM on uncompressed data and on compressed data}
\vspace{-0.15in}

\caption{Performance of data compression.}
\label{fig:ssbcompression}
\vspace{-0.15in}

\end{figure}

Our GPU query engine supports three lightweight data compression schemes widely used in column-store systems:
Run Length Encoding, Bit Encoding, and Dictionary Encoding.
All three schemes achieve good compression ratios without incurring
much computational cost.
\\

\textbf{Run Length Encoding.}
We apply Run Length Encoding to sorted columns. Elements with the same value stored in
consecutive positions in the column are replaced with a (value, count) pair,
where value is the repeated value and count is the number of elements
replaced. A header is added to the compressed column to indicate the compression type
and the number of distinct values in the column.
\\
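As an illustration of the scheme described above (a CPU-side Python sketch, not our GPU implementation; the function name and header layout are ours), Run Length Encoding of a sorted column can be written as:

```python
def rle_compress(column):
    """Compress a sorted column into (value, count) pairs.

    Illustrative sketch only: the header records the compression
    type and the number of distinct values, as in the engine.
    """
    runs = []
    for v in column:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([v, 1])       # start a new run
    header = {"scheme": "RLE", "distinct": len(runs)}
    return header, [(v, c) for v, c in runs]

header, runs = rle_compress([1, 1, 1, 2, 2, 3])
# runs == [(1, 3), (2, 2), (3, 1)]
```

Because the column is sorted, every distinct value forms exactly one run, which is why sorted foreign key columns compress so well under this scheme.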

\textbf{Bit Encoding.}
The Bit Encoding scheme uses the fewest bits possible to represent a value.
When using Bit Encoding, we first determine the maximum number of bits needed to represent any value in the column.
All values in the column are then stored using that number of bits.
In our engine, Bit Encoding is usually combined with the Dictionary Encoding scheme.
\\

\textbf{Dictionary Encoding.}
We apply Dictionary Encoding to columns that have only a limited number of distinct values.
To compress a column with the Dictionary Encoding scheme, we first find all the distinct values in the column
and store them in an array at the beginning of the column.
Each value in the column is then replaced with its index in the array.
Since the column usually has a limited number of distinct values, we can further apply
the Bit Encoding scheme to achieve a better compression ratio.
\\
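The combination of the two schemes can be sketched as follows (CPU-side Python for illustration only; the packing layout and names are our assumptions, not the engine's on-disk format):

```python
def dict_bit_compress(column):
    """Dictionary-encode a column, then bit-encode the indices.

    Sketch: distinct values go into a dictionary array; each value
    is replaced by its index, and the indices are stored with the
    fewest bits that can hold the largest index.
    """
    dictionary = sorted(set(column))
    index = {v: i for i, v in enumerate(dictionary)}
    codes = [index[v] for v in column]
    # Bit Encoding: width = bits needed for the largest index.
    width = max(1, max(codes).bit_length())
    packed = 0
    for c in codes:                    # pack codes into one integer
        packed = (packed << width) | c
    return dictionary, width, packed, len(codes)

dictionary, width, packed, n = dict_bit_compress(["US", "UK", "US", "FR"])
```

With three distinct values, each entry needs only two bits instead of a full-width string, which is the source of the compression ratios reported below.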

For performance reasons,
the fact table is stored on disk in multiple copies,
each sorted on a different foreign key column.
All sorted columns are compressed using the Run Length Encoding scheme.
The remaining columns
are compressed using the Dictionary Encoding and Bit Encoding schemes whenever possible.
Dimension tables are not compressed, since their size is much smaller
than that of the fact table.
Table \ref{table:compression} shows the compression ratio
for the fact table columns used in SSBM queries. 

\begin{table}
\centering
\caption{Compression ratio for fact table columns when sorted
on different foreign keys}
\begin{tabular}{|c|c|c|c|} \hline
Column / Sort key&lo\_custkey&lo\_partkey&lo\_suppkey \\ \hline
lo\_custkey&1\%&100\%&100\%\\\hline
lo\_partkey&100\%&3\%&100\%\\\hline
lo\_suppkey&50\%&50\%&0.1\%\\\hline
lo\_orderdate&50\%&50\%&50\%\\\hline
lo\_extendedprice&100\%&100\%&100\%\\\hline
lo\_quantity&25\%&25\%&25\%\\\hline
lo\_discount&25\%&25\%&25\%\\\hline
lo\_revenue&100\%&100\%&100\%\\\hline
lo\_supplycost&50\%&50\%&50\%\\\hline
\end{tabular}
\label{table:compression}
\vspace{-0.25in}

\end{table}

Our engine operates directly on compressed data whenever possible.
One representative operation that can work directly on compressed data
is hash probing in the join operator,
which scans the compressed foreign keys and probes the hash table.
Since foreign key columns are usually compressed with high compression ratios,
operating directly on the compressed data significantly reduces the number
of hash probe operations.
On the other hand, some operations, such as result projection,
must decompress the data during execution.
Decompression generates many irregular device memory accesses,
which makes it an expensive operation.
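To make the probe-count reduction concrete, here is a hedged CPU-side Python sketch (names and data are hypothetical) of probing a join hash table directly on an RLE-compressed foreign key column: each (value, count) run needs a single probe instead of count probes.

```python
def probe_rle(runs, hash_table):
    """Probe a join hash table on RLE-compressed foreign keys.

    One probe per (value, count) run replaces `count` probes on
    the uncompressed column. Sketch only: hash_table maps a key
    to its dimension-table payload.
    """
    matches = []
    for value, count in runs:
        payload = hash_table.get(value)   # a single probe per run
        if payload is not None:
            matches.append((value, count, payload))
    return matches

table = {1: "Asia", 3: "Europe"}
out = probe_rle([(1, 3), (2, 2), (3, 1)], table)
# out == [(1, 3, "Asia"), (3, 1, "Europe")]
```

On a column compressed to 1\% of its original size, this cuts the number of probes by roughly two orders of magnitude.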

When running queries on compressed data, we choose the copy of the data most likely to reduce
the query execution time. For example, for queries with multiple foreign key joins,
we use the copy in which the foreign key column used in the first join
is compressed with the Run Length Encoding scheme.
Figure \ref{fig:ssbcompression} shows the kernel execution time and PCIe transfer time
when running SSBM queries on uncompressed data (BaseKernel and BaseTransfer)
and on compressed data (CompressedKernel and CompressedTransfer).
As shown in the figure,
data compression effectively reduces the PCIe transfer time for all queries
due to the reduced amount of transferred data.
However, its impact on kernel execution varies.

For queries dominated by join operations, the characteristics of the first executed
join largely determine how much the kernel benefits from running on compressed data.
Generally speaking, the benefit is determined by the fraction of total query execution
time spent on hash probing.
Join selectivity and the number of projected columns, especially columns projected from
dimension tables,
affect this fraction.
Queries that project more columns and at the same time have higher selectivities
are less likely to benefit from data compression,
because their execution time is more likely to be dominated by
result projection, which benefits little from data compression
and thus diminishes the gains obtained in hash probing.
Query 3.1 is one example. As shown earlier, it has a selectivity of 20\% for
the join between \textit{lineorder} and \textit{customer}, and it projects \textit{c\_nation}
from \textit{customer}, a column of width 15.
A large portion of its kernel time is spent projecting \textit{c\_nation}, which
cannot benefit from data compression.
As the selectivity becomes lower, the execution time of result projection decreases at a
faster rate than that of other kernel operations, so the benefit from data compression
increases, as is the case for the queries in query flight 2.
When the selectivity becomes even lower, hash probing incurs few random accesses,
and the benefit from data compression shrinks again, as shown for queries 3.2 to 3.4.

For queries dominated by selection operations, as is the case for the queries in query flight 1,
we observe that kernel execution time benefits little from data compression.
Considering the kernel behavior of the selection operation,
most of its accesses to GPU device memory are coalesced.
Although some kernel operations, such as generating the result filter,
can work directly on compressed data, the performance benefit is limited because
the GPU already handles coalesced memory accesses well.
On the other hand, the decompression performed when generating selection results
turns the original coalesced memory accesses into irregular ones, which
increases the kernel execution time.

\textbf{Summary. }%1) Data compression technique can improve query performance.
%2) Considering kernel execution time,
Queries dominated by selection, and join-dominated queries with higher selectivities and more projected columns,
are less likely to benefit much from data compression.

\subsection{Effect of Invisible Join}

\begin{figure}
\centering
\includegraphics[width=1.2in, height=2.4in, angle=270]{graph/exp/ssb/invisible.ps}
%\caption{SSBM query performance w/o and w/ invisible join technique}
\vspace{-0.15in}

\caption{Performance of invisible join}
\label{fig:invisiblejoin}
\vspace{-0.3in}

\end{figure}

Invisible join is an optimization technique proposed in \cite{abadi:join} to improve the
performance of star schema joins in CPU environments.
It rewrites the foreign key joins into predicates on the fact table,
which can all be evaluated before the final results are generated.
One benefit of this technique is that the number of random accesses to dimension tables can
be greatly reduced.
When rewriting the joins,
a between-predicate-rewriting technique
can transform hash table probing into selection
operations on the fact table's foreign key columns, provided the qualifying primary keys lie in a continuous value range.
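The between-predicate rewriting can be sketched as follows (an illustrative CPU-side Python fragment with hypothetical data; in the engine this becomes a selection kernel on the foreign key column):

```python
def rewrite_join_as_between(fact_fk, key_lo, key_hi):
    """Invisible join via between-predicate rewriting (sketch).

    If the dimension rows satisfying a predicate have primary keys
    in the contiguous range [key_lo, key_hi] (possible because the
    dimension table is sorted), the hash probe on the foreign key
    column becomes a simple range selection on the fact table.
    """
    return [key_lo <= fk <= key_hi for fk in fact_fk]

# Hypothetical data: dimension keys 4..6 satisfy the predicate.
bitmap = rewrite_join_as_between([2, 5, 6, 9, 4], 4, 6)
# bitmap == [False, True, True, False, True]
```

The resulting bitmap plays the same role as the hash-probe result, but is produced with purely sequential accesses to the foreign key column.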

Our query engine currently does not support rewriting the joins automatically at run time,
so we manually rewrite all the queries before examining their performance.
To make the primary keys lie in a continuous range, all dimension tables are sorted on
the corresponding columns.
After the rewrite, for the queries in query flight 1, selections on dimension tables
and the hash join operation are completely
replaced with selections on the fact table.
For the remaining queries, selections on dimension
tables and the hash probing operations are replaced with selections on the fact table.

Figure \ref{fig:invisiblejoin}
shows the performance of SSBM queries on the GPU without the invisible join technique (BaseTransfer and BaseKernel)
and with it (inviTransfer and inviKernel).

%As seen in the figure, invisible join has no impact on PCIe transfer time since
%it doesn't change the amount of transferred data from fact table.

Considering kernel execution time, the invisible join changes kernel behavior
in three ways: fewer random accesses during hash probing,
fewer accesses to dimension tables, and more sequential accesses to the foreign key
columns.
Whether a query benefits from this optimization depends on whether its execution time is
dominated by hash probing or by operations on dimension table data.
Queries with high selectivities and with operations on dimension table data
are the most likely to improve; queries 3.1 and 4.1 are examples.
A large portion of their execution time is spent accessing dimension table data,
so reducing the number of such accesses significantly improves their performance.
On the other hand, queries with low selectivities and multiple foreign key joins
benefit little from the invisible join.
In the worst case it even degrades performance, as for queries 3.2 to 3.4.
These queries make very few accesses to dimension tables,
so the benefit brought by the invisible join is too small to offset
the kernel time added by the selection operations on the foreign key columns.
For selection-dominated queries, as is the case for the queries in flight 1, kernel
execution times remain almost the same.

\textbf{Summary. }Queries dominated by joins with higher selectivities
and projections of columns from dimension tables are more likely to benefit from
the invisible join.

\subsection{Effect of UVA}

The CUDA UVA (Unified Virtual Addressing) technique \cite{cuda}
provides a unified address space for host memory and GPU device memory.
With UVA, kernels can directly access data in host memory
once the corresponding host memory is pinned.
The performance benefits of UVA generally come from two sources:
the higher PCIe transfer bandwidth of pinned memory compared to pageable memory,
and the overlap of kernel execution with PCIe transfer.

To examine the impact of UVA on query performance, we pin the host memory used to store
all columns of the fact table. There are two reasons for this choice.
First, data accessed through UVA should be accessed in a coalesced way to fully utilize the PCIe bandwidth.
Second, the fact table is much larger than the dimension tables, and most of the PCIe transfer
time is spent transferring fact table data.
Figure \ref{fig:uva} shows SSBM query performance on the GPU
without (Base) and with the UVA technique (UVA).
Since the PCIe transfer operations become implicit when UVA is used,
we present only the total execution time here.

\begin{figure}
\centering
\includegraphics[width=1.2in, height=2.4in, angle=270]{graph/exp/ssb/uva.ps}
%\caption{SSBM query performance w/o and w/ UVA technique}
\vspace{-0.15in}

\caption{Performance of UVA}
\label{fig:uva}
\vspace{-0.2in}

\end{figure}



For the queries in query flight 1, performance degrades after we apply the UVA technique.
This is because some fact table columns
are accessed more than once by the kernel. When such data are accessed through UVA, the
low PCIe bandwidth relative to the GPU device memory bandwidth
increases the query execution time.
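This trade-off follows from a simple cost model (our own toy model with illustrative, not measured, bandwidth numbers): the baseline pays one PCIe copy and then reads the column from fast device memory, while UVA crosses PCIe on every access.

```python
def access_time(col_bytes, accesses, pcie_bw, dev_bw):
    """Toy model of the trade-off (hypothetical numbers).

    Baseline: one PCIe copy, then every access reads device memory.
    UVA: every access crosses the PCIe bus.
    """
    base = col_bytes / pcie_bw + accesses * col_bytes / dev_bw
    uva = accesses * col_bytes / pcie_bw
    return base, uva

# Illustrative bandwidths: PCIe ~8 GB/s, device memory ~150 GB/s.
GB = 1e9
base1, uva1 = access_time(1 * GB, 1, 8 * GB, 150 * GB)  # read once
base2, uva2 = access_time(1 * GB, 2, 8 * GB, 150 * GB)  # read twice
```

Under these assumptions, UVA wins when a column is read once (it avoids the separate copy) and loses as soon as the column is read twice, matching the behavior observed above.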

When the data are accessed only once through UVA, as for the queries in query flights 2 to 4,
query performance improves.
As shown in the figure, the performance gains differ across queries.
Because the gains mainly come from the sequential access of fact table columns,
the more time a query spends on these operations, the more it gains.
Thus queries with low selectivities and more columns from the fact table are more likely
to benefit from the UVA technique, such as queries 2.2 and 2.3 and queries 3.2 to 3.4.
For the remaining queries, increased selectivity and more projected columns from dimension tables
shift kernel time toward hash probing and accesses to dimension table data,
which cannot benefit from UVA because of their random access
pattern.

\textbf{Summary. } Queries dominated by joins with lower selectivities and
more projected columns from the fact table benefit more from the UVA technique.
