\label{sec:software}
In this section we characterize the query behaviors and the effects of software optimizations
observed when executing SSBM queries on the NVIDIA GTX 680.

\subsection{SSBM Query Behaviors}

\begin{figure}
\centering
\includegraphics[width=1.4in, height=2.4in, angle=270]{graph/exp/ssb/base.ps}
\vspace{-0.15in}
\caption{Baseline performance of SSBM queries on the NVIDIA GTX 680.
}
\vspace{-0.15in}
\label{fig:ssbbase}
\end{figure}


Figure \ref{fig:ssbbase} shows the baseline performance of the SSBM queries on the GTX 680 with pinned memory.
We break down the execution time into PCIe transfer (Transfer), kernel execution (Kernel),
and other (Other), which mainly covers the time spent initializing data structures on the CPU before
launching the kernels and allocating and releasing GPU device memory.
As shown in Figure \ref{fig:ssbbase}, most of the execution time of the SSBM queries is spent on PCIe transfer
and kernel execution.
To understand the query behaviors, we further break down the execution time and show the percentage of the major
operations for the SSBM queries in Figure \ref{fig:ssbbreak}.

\begin{figure}
\centering
\epsfig{file=graph/exp/ssb/breakdown.eps,width=0.45\textwidth}
\vspace{-0.15in}
\caption{SSBM execution time breakdown}
\vspace{-0.2in}
\label{fig:ssbbreak}
\end{figure}


%\textbf{PCIe transfer time.}
Since the fact table is much larger than the dimension tables,
the number of fact table columns used in a query determines its PCIe transfer time.
Queries in the same query flight have almost the same PCIe transfer time
since they process the same amount of data from the fact table.

\textbf{Query flight 1.}
The kernel execution time of queries in flight 1 is dominated by selection operations,
as shown in Figure \ref{fig:ssbbreak}.
Most of the kernel execution time is spent on evaluating the selection predicates (predicateEval)
and generating the selection results (genSelectRes).

\textbf{Query flight 2.}
For queries in flight 2, a large portion of the kernel execution time is spent on
the hash probing operation in the join operator (hashProbe) and on generating the join results (genJoinFact and genJoinDim), as shown in Figure \ref{fig:ssbbreak}.
One key difference among these queries
is the join selectivity,
which decreases from Q2.1 to Q2.3.
Since a higher join selectivity implies a longer kernel execution time, the kernel execution time decreases
from Q2.1 to Q2.3, as shown in Figure \ref{fig:ssbbase}.

\textbf{Query flight 3.}
The query behaviors in flight 3 can be divided into two groups: Q3.1, and Q3.2 to Q3.4.
The kernel execution time of Q3.1 is dominated by the accesses to the dimension tables when
generating the join results (genJoinDim), while the kernel execution time of Q3.2 to Q3.4
is dominated by the hash probing operation (hashProbe) and by accessing fact table data
when generating the join results (genJoinFact). We use Q3.1 as an example to illustrate the differences.

\begin{verbatim}
Q3.1 from SSBM:
select c_nation, s_nation,
      d_year, sum(lo_revenue) as revenue
from  customer, lineorder, supplier, date
where lo_custkey = c_custkey
      and lo_suppkey = s_suppkey
      and lo_orderdate = d_datekey
      and c_region = 'ASIA'  and s_region = 'ASIA'
      and d_year >= 1992 and d_year <= 1997 
group by c_nation, s_nation, d_year
order by d_year asc, revenue desc;
\end{verbatim}

Q3.1 has a high join selectivity:
the join selectivities for \textit{customer} and \textit{supplier} are both 20\%.
Each of these two joins needs to access data in the dimension tables to form the results,
specifically \textit{c\_nation} from \textit{customer} and
\textit{s\_nation} from \textit{supplier}.
Since the join selectivities are high, there are many random accesses to the dimension tables,
and a large portion of the kernel execution time is spent on this part.
Q3.2 to Q3.4 share many characteristics with Q3.1 but have very low join selectivities.
Their execution time is dominated by the sequential scans of fact table data in the hash probing
and result generation operations.

\textbf{Query flight 4.}
The execution time of queries in flight 4 is dominated by hash probing and by
generating join results from the fact table.
Q4.1 and Q4.2 both have high join selectivities, while Q4.3 has a relatively low join selectivity.
Q4.1 has similar query characteristics to Q3.1 but doesn't spend much time accessing data from the dimension tables.
The main reason is that the first join executed for Q4.1 doesn't access any column from
its dimension table, while that of Q3.1 does.

%\textbf{Summary.} For join dominated query, higher selectivity and more projections of irregular data from dimension tables will significantly increase the kernel execution time.

\begin{figure*}[ht]
\centering
\subfigure[Speedup of data compression]{
        \includegraphics[width=1.2in,height=2in,angle=270]{graph/exp/ssb/compression.ps}
        \label{fig:ssbcompression}
}
\subfigure[Speedup of transfer overlapping]{
        \includegraphics[width=1.2in,height=2in,angle=270]{graph/exp/ssb/uva.ps}
        \label{fig:uva}
}
\subfigure[Speedup of invisible join]{
        \includegraphics[width=1.2in,height=2in, angle=270]{graph/exp/ssb/invisible.ps}
        \label{fig:invisiblejoin}
}
\vspace{-0.15in}
\caption {Effects of different software optimization techniques}
\vspace{-0.15in}
\label{fig:breakdown}
\end{figure*}

\subsection {Effects of data compression}

Our GPU query engine supports three lightweight data compression schemes
that have already been widely used in column-store systems:
Run Length Encoding (RLE), Bit Encoding, and Dictionary Encoding.
%All these schemes can achieve an effective compression ratio without incurring
%high computation costs.

\begin{comment}

\textbf{Run Length Encoding.}
We apply RLE to sorted columns. The elements, which are stored in
continuous positions in the column with the same value, are replaced with a tuple (value, count)
where value represents the repeated value and count represents the number of elements being
replaced. A header is added to the compressed column to indicate the compression type
and the number of distinct values in the column.
\\

\textbf{Bit Encoding.}
The Bit Encoding scheme tries to use the least number of bits to represent a value.
When using Bit Encoding, we first identify the largest number of bits needed to represent a value in the column.
Then all the values in the column will be stored using this number of bits.
Bit Encoding is usually used with Dictionary Encoding compression scheme in our engine.
\\

\textbf{Dictionary Encoding.}
We apply Dictionary Encoding to columns that only have a limited number of distinct values.
To compress a column using Dictionary Encoding scheme, we first find out all the distinct values in the column,
which are stored in an array at the beginning of the column.
Then each value in the column is replaced with the value's index in the array.
Since the column usually has a limited number of distinct values, we can further apply
the Bit Encoding scheme to achieve a better compression ratio.
\\
\end{comment}

For performance reasons,
the fact table is stored on disk in multiple copies,
each sorted on a different foreign key column.
All sorted columns are compressed using RLE.
The remaining columns
are compressed using the other two schemes whenever possible.
Dimension tables are not compressed since they are much smaller
than the fact table.
Table \ref{table:compression} shows the compression ratios
for the fact table columns used in the SSBM queries.

\begin{table}
\centering
\caption{Compression ratios of fact table columns when the fact table is sorted
on different foreign key columns.}
\begin{tabular}{|c|c|c|c|} \hline
Column&lo\_custkey&lo\_partkey&lo\_suppkey \\ \hline
lo\_custkey&1\%&100\%&100\%\\\hline
lo\_partkey&100\%&3\%&100\%\\\hline
lo\_suppkey&50\%&50\%&0.1\%\\\hline
lo\_orderdate&50\%&50\%&50\%\\\hline
lo\_extendedprice&100\%&100\%&100\%\\\hline
lo\_quantity&25\%&25\%&25\%\\\hline
lo\_discount&25\%&25\%&25\%\\\hline
lo\_revenue&100\%&100\%&100\%\\\hline
lo\_supplycost&50\%&50\%&50\%\\\hline
\end{tabular}
\label{table:compression}
\vspace{-0.25in}

\end{table}

A query engine can obtain significant performance benefits by working directly
on compressed data \cite{DBLP:conf/sigmod/AbadiMF06}.
Thus our engine operates directly on the compressed data whenever possible.
One representative operation that can work directly on compressed data
is the hash probing operation in the join operator:
it scans the compressed foreign keys and probes the hash table.
As foreign key columns are usually compressed with high compression ratios,
operating directly on the compressed data significantly reduces the number
of hash probing operations.
On the other hand, some operations, such as result projection,
have to decompress the data during execution.
The decompression generates many irregular device memory accesses,
which makes it expensive.

\begin{comment}
When running queries on compressed data, we choose the data that are most likely to reduce
the query execution time. For example, for queries with multiple foreign key joins,
we use the compressed data where the foreign key column used in the first join
is compressed using RLE scheme.
\end{comment}

Figure \ref{fig:ssbcompression} shows the speedup of the PCIe data transfer,
the kernel execution and the overall performance when the data are compressed.
Disk loading time is not included
in the total execution time.
For all queries, data compression effectively reduces the PCIe transfer time
because less data needs to be transferred.

For selection dominated queries, as is the case for queries in flight 1,
the kernel execution time cannot benefit much from data compression.
Most of their kernel operations access data in a coalesced manner.
Although some kernel operations, such as generating the filter vector,
can work directly on compressed data, the performance benefit is small since
the GPU already handles coalesced memory accesses well.

For queries dominated by join operations,
when a large portion of the kernel execution time is spent on generating the join results,
the kernel execution time cannot benefit much from data compression,
as is the case for Q3.1, Q4.1 and Q4.2.
These queries usually have high join selectivities and several projected columns from
both the fact table and the dimension tables.
When a large portion of the kernel execution time is spent on hash probing,
the kernel execution time can benefit greatly from data compression,
as is the case for queries in flight 2.
When queries have very low selectivities and several projected columns from the fact table,
as is the case for Q3.2 to Q3.4,
their execution time is dominated by coalesced accesses to fact table data,
and they cannot benefit much from data compression.

%\textbf{Summary. }%1) Data compression technique can improve query performance.
%2) Considering kernel execution time,
%Queries dominated by selection and by join with higher selectivities and more projected columns
%are less likely to benefit much from data compression.

\subsection{Effects of Transfer Overlapping}
\label{sec:uva}

\begin{comment}
In this section we investigate the effect of another important optimization technique:
transfer overlapping.
\end{comment}

Both OpenCL and CUDA support a unified address space for host memory and GPU device memory.
GPU kernels can directly access data stored in pinned host memory,
so no explicit PCIe data transfer is needed.
We use transfer overlapping to refer to this technique.

The performance benefits of transfer overlapping come from two sources:
the higher PCIe transfer bandwidth of pinned memory compared with pageable memory,
and the overlapping of PCIe transfers with kernel execution.

To examine its impact on query performance, we pin the host memory that stores
the fact table data. There are two reasons for this choice.
First, host resident data should be accessed in a coalesced way to fully utilize the PCIe bandwidth.
Second, the fact table is much larger than the dimension tables, and most of the PCIe transfer
time is spent transferring fact table data.

We compare the performance of the SSBM queries with transfer overlapping against the baseline;
the speedup is shown in Figure \ref{fig:uva}.
Since the PCIe transfer operations become implicit with transfer overlapping,
we only present the speedup of the total execution time.

For queries in flight 1, the performance doesn't improve.
This is because some fact table columns are accessed more than once by the kernels.
In this case, the relatively low PCIe bandwidth compared with the GPU device memory bandwidth
counteracts the benefit of overlapping kernel execution with the PCIe data transfer.

When data are accessed only once over the PCIe bus, as for queries in flights 2 to 4,
query performance improves.
Since the performance gains mainly come from the sequential accesses to fact table columns,
the more time a query spends on these operations, the larger the gain.
Thus queries with low selectivities and more columns from the fact table,
such as Q2.2, Q2.3 and Q3.2 to Q3.4, are more likely to benefit from transfer overlapping.
Higher selectivities and more projected columns from the dimension tables
increase the kernel time spent on hash probing and on accessing dimension table data,
which cannot benefit from this technique because of their random access
patterns, as is the case for the rest of the queries.

%\textbf{Summary. } Queries dominated by join with lower selectivities and
%more number of project columns from fact table will benefit more from UVA technique.

\subsection{Effects of Invisible Join}
\label{sec:invisible}

Data compression and transfer overlapping can improve the performance of a wide range of queries.
However, they are not effective for queries dominated by random accesses to dimension table data, such as Q3.1.
Invisible join is an optimization technique that can improve the
performance of this kind of query.

Invisible join was proposed in \cite{DBLP:conf/sigmod/AbadiMH08} to improve the
performance of star schema joins in CPU environments.
It rewrites the foreign key joins into predicates on the fact table,
which can be evaluated together before generating the final results.
One benefit of this technique is that the number of random accesses to the dimension tables can
be greatly reduced.
When rewriting the joins,
a between-predicate rewriting technique
can transform the hash table probing into selection
operations on the fact table's foreign key columns if the primary keys lie in a continuous value range.

Currently our query engine doesn't support rewriting joins automatically at run time,
so we manually rewrite all the queries before examining their performance.
To make the primary keys lie in a continuous range, all the dimension tables are sorted on
the corresponding columns.
After query rewriting, the selection on the dimension table
and the hash join operation are completely
replaced with selections on the fact table for queries in flight 1.
For the remaining queries, the selections on dimension
tables and the hash probing operations are replaced with selections on the fact table.

Figure \ref{fig:invisiblejoin}
shows the speedup of the PCIe data transfer, the kernel execution and the overall performance when invisible join is enabled.
Since invisible join doesn't change the amount of data transferred from the fact table,
it has no impact on the PCIe transfer time.

Whether the kernel execution time of a query can benefit from this optimization
depends on whether its execution time is
dominated by the hash probing operation or by operations on dimension table data.
Queries with high selectivities and with operations on dimension table data
are more likely to be improved, as is the case for Q3.1.
On the other hand, queries with low selectivities and multiple foreign key joins
cannot benefit much from the invisible join technique.
In the worst case, the kernel execution time even degrades, for example for Q3.3 and Q3.4.
These queries have a very limited number of accesses to the dimension tables,
so the benefit brought by invisible join is too small to offset
the kernel time added by the selection operations on the foreign key columns.
For selection dominated queries, as is the case for queries in flight 1, the kernel
execution times remain almost the same.

To apply the invisible join technique, both the dimension tables and the foreign keys in the fact table
need to be reorganized, which may be very costly for general purpose warehousing systems.
Therefore, we do not include invisible join in the following performance studies.

%\textbf{Summary. }Queries dominated by join with higher selectivities
%and projection of columns from dimension tables are more likely to benefit from
%invisible join.


