\documentclass{vldb}
\usepackage{graphicx}
\usepackage{color}
\usepackage{balance}
\usepackage{subfigure}
\usepackage[ruled,vlined]{algorithm2e}
\usepackage{verbatim}
\usepackage{url}
\usepackage{setspace}

\newtheorem{theorem}{Theorem}

\begin{document}

\title{On Packing Very Large R-trees}

\numberofauthors{1}
\author{
\alignauthor
Haoyu Tan{$~^{\dag}$}, Wuman Luo{$~^{\dag}$}, Huajian Mao{$~^{\ddag}$}, Lionel M. Ni{$~^{\dag}$}\\
       \affaddr{$~^{\dag}$Hong Kong University of Science and Technology, China}\\
       \affaddr{$~^{\ddag}$National University of Defense Technology, China}\\
       \email{$~^{\dag}$\{hytan, luowuman, ni\}@cse.ust.hk, $~^{\ddag}$huajianmao@nudt.edu.cn}
}

\maketitle

\begin{abstract}
Many emerging applications require analyzing large spatial datasets. In
these applications, efficient query processing relies on spatial access
methods such as R-trees. For datasets that are fairly static, R-trees are
often built during data loading using packing techniques. However,
traditional R-tree packing algorithms can only run on a single machine
and thus cannot scale to very large datasets. In this paper, we
design and implement a general framework for parallel R-tree packing
using MapReduce. This framework sequentially packs each R-tree level from
bottom up. For lower levels that have a large number of rectangles,
we propose a partition-based algorithm for parallel packing. We also
discuss two spatial partitioning methods that can efficiently handle
heavily skewed datasets.

To evaluate the performance, we conduct extensive
experiments using large real datasets. The size of the datasets is up
to 100GB and the number of spatial objects is up to 2 billion. Besides
range queries, k-nearest neighbor searches and spatial joins are
also used for evaluation. To the best of our knowledge, this is the first
work to evaluate the query performance of packed R-trees on such
large datasets with spatial queries other than range queries. The
results confirm the scalability of our proposed framework and parallel
packing algorithms. It is also shown that our packed R-trees have
good query performance and optimal space utilization.
\end{abstract}

\section{Introduction}
In recent years, many analytical applications require processing
terabyte-scale or even petabyte-scale spatial datasets
collected from sensors, mobile phones, GPS devices, artificial
satellites, and others. Examples include location-based
web services~\cite{GoogleLatitude}\cite{FacebookPlaces},
smart transportation~\cite{Shanghaigrid}, spatial data
mining~\cite{Arase:2010}, scientific research~\cite{SDSS}\cite{Earth:2007},
etc. One of the most typical queries in these applications is to
retrieve all objects within a given rectangle (or hyper-rectangle), which
is referred to as \emph{range query}. Other frequent queries include
\emph{k-nearest neighbor search} (k-NN query) and \emph{spatial join}. To
enable efficient processing of the above queries, it is often necessary
to make use of spatial access methods such as R-trees~\cite{Guttman:1984}.

Dynamic versions of R-trees such as the original
R-tree~\cite{Guttman:1984}, the R+-tree~\cite{Sellis:1987}, and the
R*-tree~\cite{Beckmann:1990} are often built by inserting one object
at a time. In analytical applications where the data are fairly
static, R-trees are often built using \emph{packing} techniques instead
of dynamic insertions. Packing techniques consider the input dataset
as a whole and construct an R-tree once and for all. Compared with
dynamic versions of R-trees, the main advantages of packed R-trees
include faster construction, higher space utilization, and better R-tree
structure~\cite{Leutenegger:1997}. Existing R-tree packing algorithms fall
into two categories: bottom-up and top-down. The bottom-up algorithms,
including X-Sort~\cite{Roussopoulos:1985}, Hilbert-Sort~\cite{Kamel:1993}
and Sort-Tile-Recursive (STR)~\cite{Leutenegger:1997}, sequentially pack
each R-tree level from the leaf nodes up to the root node. Others
such as VAMSplit~\cite{chen:2002} and Top-Down Greedy Split
(TGS)~\cite{Garcia:1998} pack an R-tree from the highest level to the
lowest by recursively splitting the data.

Since all existing packing algorithms were designed for non-parallel
environments, they become inefficient as the data size grows. For
example, all bottom-up approaches involve sorting all rectangles at least
once. When the data cannot fit into memory, an external sort is required.
However, external sorting of large datasets is extremely slow on a single
machine due to frequent I/O operations. For today's data-intensive
applications, there is an urgent need to introduce parallelism into the
R-tree packing process. To deal with vast amounts of data, the MapReduce
paradigm originally proposed by Google~\cite{MapReduce} has received
growing interest for being a scalable shared-nothing parallel processing
platform. Its open-source implementation, Hadoop~\cite{Hadoop}, has
been widely used in large clusters with thousands of machines in Yahoo!,
Facebook, Twitter, etc. In this paper, we use Hadoop as the underlying
parallel processing platform.

Based on MapReduce, we design and implement a general framework for
packing very large R-trees. Our framework packs an R-tree from bottom
up. Specifically, parallel algorithms are used to pack the leaf nodes and
a number of higher levels. Once the number of rectangles is small
enough to be packed efficiently in memory, all remaining levels are
constructed using serial packing algorithms. To analyze the cost of
packing R-trees using our framework, we propose a simple formula to
demonstrate that the time of packing the first level (leaf nodes)
dominates the total packing time.

To efficiently pack a single level, especially the first level, we
propose a partition-based parallel algorithm that is easy to implement
using the MapReduce programming model.  We also describe two spatial
data partitioning strategies for parallel packing. We study how to
divide heavily skewed data into small partitions that can fit into
memory. Each partition is then packed separately using any serial packing
algorithm. Our partitioning strategies consider both spatial similarity
and storage efficiency. As a result, R-trees packed in parallel
have query performance and space utilization similar to those packed
using serial algorithms.

We select four representative types of R-trees packed by our framework for
evaluation. They differ in data partitioning strategies, serial packing
algorithms, or both. The datasets for evaluation are extracted from the
TIGER dataset and a GPS dataset collected over two years from about 6,000
taxis in Shanghai. The data volume is up to 100GB and the number of
spatial objects is up to 2 billion. Unlike previous works
that consider only range queries, we also report the query performance
of k-NN queries and spatial joins. To the best of our knowledge,
no existing work has measured the R-tree performance with all these queries
on such large datasets.

The rest of the paper is organized as follows. In Section~\ref{sec:2} we
briefly review the existing serial packing algorithms and give an overview
of the MapReduce paradigm. In Section~\ref{sec:3},  we propose the general
R-tree packing framework and analyze its performance. Parallel algorithms
for packing one R-tree level are described in Section~\ref{sec:4}. In
Section~\ref{sec:5}, we present the experimental results. Finally,
our conclusion is drawn in Section~\ref{sec:6}.

\section{Related Work}\label{sec:2}
\begin{figure*}[t]
\centering
\subfigure[X-Sort R-tree]{\label{fig1:1}\includegraphics[width=0.32\textwidth]{figures/fig1_1.eps}}
\subfigure[Hilbert-Sort R-tree]{\label{fig1:2}\includegraphics[width=0.32\textwidth]{figures/fig1_2.eps}}
\subfigure[STR R-tree]{\label{fig1:3}\includegraphics[width=0.32\textwidth]{figures/fig1_3.eps}}
\caption{MBRs of level-3 nodes of three packed R-trees. The original dataset
contains 1,283,750 MBRs. Each node contains at most 75 children,
which leads to 234 MBRs at level 3.}
\label{fig:1}
\end{figure*}

In this section, first we review the existing R-tree packing algorithms and
then briefly describe the MapReduce paradigm.

\subsection{Serial Packing Algorithms}
For the convenience of discussion, we assume that the dataset contains
$N$ MBRs (minimum bounding rectangles), each associated with a unique
identifier (ID). In addition, we use $m$ to denote the \emph{node capacity},
i.e., the maximum number of entries in a node. Unless otherwise
specified, we consider 2-d MBRs (rectangles). Higher dimension cases
are discussed when generalizing the 2-d case is not trivial.

Existing packing algorithms can be divided into two categories: bottom-up
approaches and top-down approaches. Bottom-up algorithms build an R-tree
one level at a time, from the lowest level (the leaf nodes) to the highest
one (the root node). The MBRs of the resulting nodes of each level are used
as input to build the nodes of the next level. Algorithms falling into this
category include X-Sort R-tree~\cite{Roussopoulos:1985}, Hilbert-Sort
R-tree~\cite{Kamel:1993}, and STR R-tree~\cite{Leutenegger:1997}. On the
other hand, top-down algorithms build an R-tree from the root node down
to the leaf nodes. Examples include VAMSplit R-tree~\cite{chen:2002}
and Top-Down Greedy Split (TGS)~\cite{Garcia:1998}. An extensive
survey of R-tree packing and bulk-loading techniques can be found
in~\cite{Manolopoulos:2005}. In this paper, we focus on the bottom-up
packing algorithms.

\smallskip
\textbf{X-Sort R-tree:}\enspace
The X-Sort R-tree was proposed by Roussopoulos and Leifker~\cite{Roussopoulos:1985}.
The main process of this approach is summarized as follows:
\begin{enumerate}
\item Sort all the rectangles by the $x$-coordinate of their
centroid or one of the corners. 
\item Divide the sorted sequence into $\lceil N/m\rceil$
equal-sized consecutive groups so that all rectangles of each
group can be packed into one node.
\item Pack the rectangles of each group into a node.
Assign an ID to the node and calculate its MBR.
\item Recursively pack the calculated MBRs into nodes using the above
steps. This process terminates when there is only one group, of which the
corresponding node is the root of the packed R-tree.
\end{enumerate}
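These steps can be sketched in a few lines of Python. This is an illustrative simplification rather than the authors' implementation; it assumes each entry is an \texttt{(id, mbr)} pair with \texttt{mbr} given as \texttt{(xmin, ymin, xmax, ymax)}:

```python
def pack_level_xsort(entries, m):
    """Pack one level: sort entries by the x-coordinate of their MBR
    centroids, cut the sorted sequence into groups of up to m entries,
    and return one (node_id, node_mbr, children) triple per group."""
    ordered = sorted(entries, key=lambda e: (e[1][0] + e[1][2]) / 2.0)
    nodes = []
    for i in range(0, len(ordered), m):
        group = ordered[i:i + m]
        # The node MBR is the minimum rectangle enclosing all child MBRs.
        mbr = (min(e[1][0] for e in group), min(e[1][1] for e in group),
               max(e[1][2] for e in group), max(e[1][3] for e in group))
        nodes.append((len(nodes), mbr, [e[0] for e in group]))
    return nodes

def pack_xsort_rtree(entries, m):
    """Recursively apply one-level packing until a single root remains."""
    levels = []
    while True:
        nodes = pack_level_xsort(entries, m)
        levels.append(nodes)
        if len(nodes) == 1:  # only one group left: its node is the root
            return levels
        entries = [(nid, mbr) for nid, mbr, _ in nodes]
```

For ten unit-height rectangles laid out along the x-axis with $m=3$, this produces levels of 4, 2, and 1 nodes, with the root MBR covering the whole extent.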

It has been shown in~\cite{Kamel:1993} that the Hilbert-Sort R-tree outperforms
the X-Sort R-tree in almost all cases. However, it is worth mentioning that
this was the first packed R-tree proposed, and it inspired its successors.

\smallskip
\textbf{Hilbert-Sort R-tree:}\enspace
The Hilbert-Sort R-tree~\cite{Kamel:1993} makes use of the Hilbert
Space-Filling Curve which is capable of roughly clustering points that are
relatively close in the space. The process of this algorithm is almost the
same as X-Sort R-tree. The only difference is that the centroids of the
rectangles are sorted by their Hilbert values, that is, the distance from
the origin to the point along the filling curve. Experiments showed that
Hilbert-Sort R-tree is significantly better than X-Sort R-tree and the
original R-tree in terms of both query efficiency and storage utilization.

Since calculating a Hilbert value uses the coordinates of all
dimensions, the Hilbert-Sort packing algorithm can be directly applied to
MBRs of higher dimensionality. However, the main drawback of this
algorithm is the high computational cost of calculating Hilbert
values~\cite{Butz:1971}. This issue becomes worse as the data
size and the dimensionality grow.
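For reference, the Hilbert value of a 2-d point on a $2^k\times 2^k$ grid can be computed with the standard bit-manipulation procedure sketched below; in practice the floating-point centroids would first be quantized to grid coordinates:

```python
def hilbert_value(n, x, y):
    """Map grid cell (x, y) to its distance d along the Hilbert curve
    filling an n-by-n grid, where n is a power of two."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        # Rotate/reflect so the next iteration sees a canonical sub-curve.
        if ry == 0:
            if rx == 1:
                x = n - 1 - x
                y = n - 1 - y
            x, y = y, x
        s //= 2
    return d
```

Sorting entries by the Hilbert value of their quantized centroids then yields the Hilbert-Sort order; consecutive values along the curve correspond to adjacent grid cells.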

\smallskip
\textbf{STR R-tree:}\enspace
STR (Sort-Tile-Recursive) is another bottom-up packing
algorithm~\cite{Leutenegger:1997}. The basic idea is to divide the space into
$\lceil N/m\rceil$ tiles, each containing approximately $m$
rectangle centroids. The division is performed on all dimensions,
with each dimension having approximately the same number of slices. We
illustrate the process in 2-d space as an example. First, the centroids of the
rectangles are sorted by their x-coordinates. Then $r$ equal-sized slices
are created with respect to the x-coordinate, where $r=\lceil\sqrt{N/m}\rceil$.
Next, within each slice, the centroids are sorted again by their y-coordinates
and $r$ equal-sized groups are created with respect to the y-coordinate.
For $k$-d cases, the slicing is performed $k$ times. The division process
is applied repeatedly until all R-tree levels are packed.
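A minimal sketch of one level of 2-d STR packing (illustrative only, not the reference implementation), again assuming each entry is an \texttt{(id, mbr)} pair with \texttt{mbr} given as \texttt{(xmin, ymin, xmax, ymax)}:

```python
import math

def pack_level_str(entries, m):
    """One level of 2-d STR: sort by centroid x, cut into vertical
    slices of r*m entries each (r = ceil(sqrt(N/m))), then sort each
    slice by centroid y and cut it into runs of m entries per node."""
    def cx(e): return (e[1][0] + e[1][2]) / 2.0
    def cy(e): return (e[1][1] + e[1][3]) / 2.0
    r = math.ceil(math.sqrt(len(entries) / m))
    by_x = sorted(entries, key=cx)
    nodes = []
    for i in range(0, len(by_x), r * m):            # vertical slices
        vslice = sorted(by_x[i:i + r * m], key=cy)
        for j in range(0, len(vslice), m):          # runs of m -> nodes
            group = vslice[j:j + m]
            mbr = (min(e[1][0] for e in group), min(e[1][1] for e in group),
                   max(e[1][2] for e in group), max(e[1][3] for e in group))
            nodes.append((len(nodes), mbr, [e[0] for e in group]))
    return nodes
```

On a uniform $4\times 4$ grid of unit squares with $m=4$, this yields four nodes whose MBRs are the four $2\times 2$ quadrant blocks.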

The performance of the STR R-tree is not always better than that of the
Hilbert-Sort R-tree; in fact, the latter can be significantly better in
some cases~\cite{RTreeBook}. In addition, as STR packing involves a large
number of sorting runs, expensive I/O operations make it inefficient when
the data size far exceeds the memory size.

An illustration of the packed R-trees using bottom-up methods is shown in
Figure~\ref{fig:1}. As we can see, the X-Sort packing algorithm results
in very thin MBRs, which hurts query efficiency. The
quality of the other two R-trees depends on the data distribution as
well as the query loads. Detailed evaluations and comparisons can be
found in~\cite{Leutenegger:1997}.

\subsection{Hadoop MapReduce}
MapReduce is a simplified data processing model originally proposed
by Google for data-intensive applications. Usually, both the input and
the output data are stored on a distributed file system optimized for
sequential access such as the Google File System (GFS)~\cite{GFS}. The
Apache Hadoop~\cite{Hadoop} project provides an open-source implementation
of MapReduce along with the Hadoop Distributed File System (HDFS)
which is similar to GFS. Our R-tree packing framework is built on
the Hadoop version of MapReduce and the HDFS.

\subsubsection{HDFS}
The HDFS is designed to scale to tens of petabytes of storage. It runs
on top of the local file systems of the cluster nodes. An HDFS cluster
consists of a \emph{namenode}, a \emph{secondary namenode}, and a number
of \emph{datanodes}. The content of a file is split into blocks. To
optimize sequential access to large files, the block size is usually
set to 64MB (or larger). Each block is replicated and stored on different
datanodes. The default replication level is 3, which implies that at
least three datanodes must fail for a block to be lost.
The metadata, such as the mapping from a file to its blocks, is stored
on the namenode. The secondary namenode periodically creates a snapshot
of the namenode. It is used to restart a failed namenode without having
to replay the entire journal of file system actions.

\subsubsection{MapReduce}
MapReduce provides a simple programming model for data-intensive
applications. This programming model treats data as key/value pairs. In
most cases, users only need to specify two functions: the \emph{map}
function and the \emph{reduce} function. The map function is applied to
each input record $(k1,v1)$ and generates a list of output
pairs $list(k2,v2)$. The elements of $list(k2,v2)$ are then grouped by
key; that is, pairs with the same key are assigned to the same group. The
grouping is performed automatically by the MapReduce platform. After grouping,
$list(k2,v2)$ is transformed to $list(k2,list(v2))$ and the reduce
function is applied to each $(k2,list(v2))$ to generate $list(k3,v3)$
which is the final result. The whole process can be summarized as below.
\begin{quote}
\begin{tabular}{llr}
Map & \texttt{(k1, v1)} & $\longrightarrow$ \texttt{list(k2, v2)}\\
Reduce & \texttt{(k2, list(v2))} & $\longrightarrow$ \texttt{list(k3, v3)}\\
\end{tabular}
\end{quote}
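This contract can be mimicked by a few lines of single-machine driver code; the word-count example below is purely illustrative and is not Hadoop code:

```python
from collections import defaultdict

def run_mapreduce(records, map_fn, reduce_fn):
    """Simulate MapReduce on one machine: apply map_fn to every input
    pair, group the emitted (k2, v2) pairs by key, then apply reduce_fn
    to each (k2, list(v2)) group."""
    groups = defaultdict(list)
    for k1, v1 in records:
        for k2, v2 in map_fn(k1, v1):      # map: (k1,v1) -> list(k2,v2)
            groups[k2].append(v2)          # shuffle/sort: group by key
    out = []
    for k2 in sorted(groups):
        out.extend(reduce_fn(k2, groups[k2]))  # reduce -> list(k3,v3)
    return out

# Word count: map emits (word, 1); reduce sums the counts per word.
counts = run_mapreduce(
    [(0, "a b a"), (1, "b c")],
    lambda off, line: [(w, 1) for w in line.split()],
    lambda word, ones: [(word, sum(ones))])
```

The grouping loop plays the role of the shuffle/sort phase that the platform performs automatically.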

\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{figures/mroverview.eps}
\caption{Data flow in MapReduce}
\label{fig:mroverview}
\end{figure}

Because of its simplicity, this programming model can be easily realized
in parallel, as has been done in Hadoop. Most importantly, it allows
elegant handling of failures. The data flow of a MapReduce job is
illustrated in Figure~\ref{fig:mroverview}. In the map phase, each map task is assigned
a portion of the input files called a \emph{split}. By default, a split
contains a single HDFS block and the number of map tasks equals the
number of file blocks. The output pairs of each map task are partitioned
and sorted by key. Each partition corresponds to a reduce task and is
saved on the local file system. In the shuffle/sort phase, each reduce task
fetches the corresponding partitions from all map tasks. After receiving
all the partitions, the reduce task merges the partitions to generate
a single partition that is sorted by key. Then the pairs can be easily
grouped in their sorted order. Finally, each reduce task invokes the
reduce function for each group and writes the result pairs to HDFS,
which is referred to as the reduce phase.

During the job execution, tasks may fail at any time. If a map task
fails, all intermediate results already generated will be discarded and
a new copy of the map task will be started. If a reduce task fails,
after the new copy of the reduce task is started, all partitions belonging
to it will be fetched and merged again. Note that recovering from failures
in MapReduce does not require restarting the whole job. As such,
applications using MapReduce can easily scale to thousands of machines,
where failures are normal.

\section{General Framework}\label{sec:3}
\begin{figure*}[t]
\centering
\scalebox{0.9}{\input{packing_overview.pstex_t}}
\caption{An overview of the bottom-up parallel R-tree packing framework}
\label{fig:packing_overview}
\end{figure*}
Packing R-trees in real systems can be considered an \emph{extract,
transform and load} (ETL) process. Initially, spatial objects are stored
in raw files, relational database tables, Key/Value stores, or any other
form of storage. Since MapReduce is best at processing large files on
HDFS, a pre-packing MapReduce job is executed to extract the ID, MBR
and other required attributes from each spatial object and write them to
files on HDFS. Next, taking the generated files as input, R-tree packing
algorithms are performed to generate an R-tree saved in intermediate
binary files.  Finally, a specific algorithm reads the intermediate
binary files and loads the R-tree nodes into the target system.
In the rest of this section, we focus on how to efficiently perform the
\emph{transform} (R-tree packing) stage in parallel. For simplicity,
we assume that only the ID and the MBR of each spatial object will
be stored in the leaf nodes of the packed R-tree. We use \emph{entry}
to refer to an (ID,MBR) pair.

\subsection{Framework Design}
Based on bottom-up packing approaches, we propose a flexible framework
that aims at packing very large R-trees. An overview of the framework is
shown in Figure~\ref{fig:packing_overview}. To be precise, we refer to
parallel packing and in-memory serial packing as \emph{external packing}
and \emph{internal packing}, respectively. Since packing level $l$
takes the result entries of level $l-1$ as input, it is not likely to pack
different levels at the same time. Therefore, to speedup the packing
process, we put effort in introducing parallelism into packing a single
level. It is obvious that after packing several levels, the total size
of the nodes will be sufficiently small to fit into the main memory of
a single machine. Once reaching this point, we use traditional in-memory
packing algorithms to pack all the remain levels. The detailed process
is decribed as follows.

\begin{enumerate}
\item Prepare the initial input files which contain key/value pairs in
the form of $(e,null)$, where $e$ is an entry extracted from the original
dataset and $null$ is a dummy value. To save the cost of parsing text,
we use the \emph{sequence file} supported by Hadoop~\cite{White:2009}
to store the entries.
\item Calculate the total size of the input files. If all the input files
can be loaded into memory, then go to the last step. Otherwise, a
parallel packing algorithm (Step 3--4) is performed over the input files.
\item Execute one or more MapReduce jobs to pack the input entries into
R-tree nodes. Each node is assigned a unique ID and saved in
an intermediate binary file on HDFS. Note that there are multiple binary R-tree
files for each level, as nodes are generated in parallel.
\item Extract the ID and MBR of each packed node and write them to
sequence files on HDFS in the same form mentioned in Step 1. Go to Step
2 with these files as the input.
\item Load all entries into memory and perform a serial packing algorithm
to pack all the remaining levels. The nodes of each level are saved in a
separate binary file. After the root node is packed, some information
about the R-tree, including the ID of the root node, the node capacity, and the
storage size of each node, is written to a metadata file. This information
is needed for loading the intermediate R-tree files into a target system.
\end{enumerate}
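The loop structure of Steps 2--5 can be sketched as follows; the two callables are hypothetical placeholders standing in for one MapReduce packing round (Steps 3--4) and a serial in-memory packer (Step 5):

```python
def pack_rtree(entry_files, memory_budget, total_size,
               run_parallel_level, pack_in_memory):
    """Driver loop for the bottom-up framework. `run_parallel_level`
    stands for one round of external packing and returns the new
    (ID, MBR) files plus their total size; `pack_in_memory` is any
    serial algorithm. Both are hypothetical callables."""
    level = 1
    # Step 2: keep packing externally while the entries exceed memory.
    while total_size > memory_budget:
        entry_files, total_size = run_parallel_level(level, entry_files)
        level += 1
    # Step 5: the rest fits in memory; pack all remaining levels serially.
    return pack_in_memory(entry_files)
```

The loop terminates because each external round shrinks the entry count by roughly a factor of $m$.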

Parallel packing algorithms (Step 3--4) will be discussed in detail
in Section~\ref{sec:4}.  It is worth pointing out that we can use
different packing algorithms for different levels. In particular, when
performing internal packing, top-down approaches or even dynamic insertion
algorithms can be used. There are many possible combinations of packing
algorithms, which lead to various types of packed R-trees. Without loss
of generality, we assume that for a specific R-tree, the same external
packing algorithm is used.

%We use notation $A^{l*}\text{--}B$ R-tree to
%denote an R-tree of which the first $l_*$ levels are packed externally
%using parallel algorithm $A$ and the remain levels are packed internally
%using serial algorithm $B$.

\subsection{Cost Analysis}
Before we propose concrete packing algorithms, we demonstrate that
in most situations, nearly all the cost is spent on packing the first
level. We begin with the following theorem.

\begin{theorem}\label{thm:1}
Let $n$ denote the number of input MBRs and $m$ denote
the node capacity. The computation and I/O costs of externally
packing a single level are $\Omega(n)$. For internally packing a single level,
the computation cost is $\Omega(n)$ and the I/O cost is $\Omega(n/m)$.
\end{theorem}

\begin{proof}
To pack a single level, we need to process each MBR at least once,
thus the computation cost is $\Omega(n)$ for both external and internal
packing. For external packing, because all MBRs are read from HDFS, the
I/O cost is $\Omega(n)$. For internal packing, the I/O cost is
$\Omega(n/m)$, since the number of resulting MBRs is at least $\lceil n/m \rceil$.
\end{proof}

Note that Theorem~\ref{thm:1} gives a very loose lower bound for the
computation cost. In real situations, the computation cost
is usually $\Theta(n\log n)$, as sorting is often used as a subroutine. However,
if query efficiency is of no concern, the lower bound $\Omega(n)$
can be matched by packing MBRs sequentially in the input order.

\begin{theorem}
Let $C_l$ denote the cost of externally packing level $l$. Then we have
\begin{equation*}
\frac{C_l}{C_{l+1}}\in\Omega(m)\;.
\end{equation*}
\end{theorem}

\begin{proof}
Suppose $C_l=n\cdot g(n)$; then $C_{l+1}=(n/m)\cdot g(n/m)$. According to
Theorem~\ref{thm:1}, $C_l\in\Omega(n)$, hence $g(n)\in\Omega(1)$. Since
$g$ is non-decreasing, it follows that
\begin{equation*}
\frac{C_l}{C_{l+1}}=m\cdot\frac{g(n)}{g(n/m)}\in\Omega(m)\;,
\end{equation*}
which completes the proof.
\end{proof}

The above theorem shows that externally packing a lower level costs
significantly more than externally packing a higher one. For example,
assuming other overhead is negligible and the node capacity is $50$,
if it takes $45$ minutes to pack the leaf nodes, then packing
the second level is expected to take only $54$ seconds or less. Again,
this bound is tight only when the complexity of
the packing algorithm is $\Theta(n)$. In common cases, the gap between $C_l$
and $C_{l+1}$ is even larger.

To study the execution time of packing the leaf nodes, the following
theorem takes the effect of parallelism into account:

\begin{theorem}\label{thm:3}
Let $S$ denote the degree of concurrency in parallel external packing, let $l_*\in\{1,
2,...,\lceil\log_m N\rceil\}$ denote the last level using external
packing, and let $T_l$ denote the execution time of packing level $l$.
Assume that both the external packing algorithm and the internal packing algorithm
have computation complexity $O(n\log n)$ and I/O complexity
$O(n)$. Then it holds that
\begin{equation*}
\frac{T_1}{T_{total}}\ge\frac{1}{(1+\frac{S}{m^{l_*}})\cdot\frac{m}{m-1}}
\approx\frac{1}{1+\frac{S}{m^{l_*}}}\;,
\end{equation*}
where $T_{total}=\sum_{l=1}^d T_l$ and $d$ is the total number of R-tree levels.
\end{theorem}

\begin{proof}
See Appendix.
\end{proof}

We can conclude from the last theorem that packing the leaf
nodes accounts for most of the total execution time. For example, if the
concurrency level is $100$, the node capacity is $50$, and only the first
two levels are packed externally, then we can estimate that the time for
packing the leaf nodes is at least $94\%$ of the total time. Therefore,
according to Amdahl's law, the key to decreasing the execution time is
to speed up packing the first level. We hence focus on parallel single-level
packing in the following discussion.

\section{Parallel Single Level Packing Algorithms}\label{sec:4}
In this section, we propose parallel algorithms for packing one level of
an R-tree. All our algorithms use a similar framework which conforms to
the MapReduce programming model.

\subsection{General Algorithm}
Briefly, the input entries are divided into a number of partitions
such that each partition is sufficiently small to fit into memory. Then
each partition is packed separately using any serial packing algorithm.
The process can be done using only one MapReduce job. Specifically, for
each input entry $e$, the \texttt{map} function emits a pair ($p$, $e$),
where $p$ is a partition number decided by a certain partitioning method.
After shuffle/sort, the pairs are grouped by the partition number.
As a result, each time the \texttt{reduce} function is invoked, it
is given all entries falling into the same partition as the input.
The \texttt{reduce} function then uses a serial packing algorithm to
generate packed R-tree nodes. Each packed node is assigned a unique
ID. The packed nodes are serialized to a binary file and their (ID,MBR)
pairs are taken as the job output. Note that the formats of the job input
and output are essentially the same, which enables seamless chaining
of algorithms for different levels. The sketch of the algorithm
is shown in Algorithm~\ref{alg:general_algorithm}.

\begin{algorithm}[t]
\label{alg:general_algorithm}
\caption{The general algorithm}
\SetAlgoLined

\textbf{Function:} \texttt{map}\\
\SetKwFunction{GetPartitionNumber}{GetPartitionNumber}
\SetKwFunction{Emit}{Emit}
\SetKwData{vnull}{null}

\KwData{$(e,\vnull)$ in which $e$ is an entry (ID,MBR).}
\Begin{
  $p \longleftarrow \GetPartitionNumber(e.\text{MBR})$\;
  $\Emit(p, e)$\;
}
\bigskip
\textbf{Function:} \texttt{reduce}\\
\SetKwFunction{AnySerialPackingAlgorithm}{AnySerialPackingAlgorithm}
\SetKwFunction{Emit}{Emit}
\KwData{$(p,E)$ in which $p$ is a partition number and
$E=\{e_1,e_2,...\}$ is the set of all entries in partition $p$.}
\Begin{
  $packedNodes \longleftarrow \AnySerialPackingAlgorithm(E)$\;
  create an R-tree file $rfile$\;
  \ForEach{$node$ in $packedNodes$}{
    serialize $node$ and append the result to $rfile$\;
    $e' \longleftarrow (node.\text{ID},node.\text{MBR})$\;
    $\Emit(e',\vnull)$\;
  }
  close $rfile$\;
}
\end{algorithm}

\subsection{Data Partitioning Methods}
To specialize the general algorithm, it remains to decide how to
partition the input entries (i.e., the implementation of function
\texttt{GetPartitionNumber} in Algorithm~\ref{alg:general_algorithm}).
Specifically, given $n$ input entries, we aim at dividing them into
$r$ partitions (denoted by $P_1,P_2,...,P_r$) in a way that preserves spatial
similarity. In other words, if two entries belong to the same partition, then
their MBRs tend to be close in the space. Besides, the data partitioning method
must satisfy the following two constraints.
\begin{enumerate}
\item \emph{Memory Constraint:}\enspace
Each partition contains at most $n_c$ entries, where $n_c$ is the maximum
number of entries that can fit into memory. Namely,
\begin{equation*}
|P_i|\le n_c,\,
\end{equation*}
where $i\in \{1,2,...,r\}$.
\item \emph{Storage Constraint:}\enspace
The storage utilization of the packed nodes is near-optimal. Namely,
\begin{equation*}
\frac{\lceil\frac{n}{m}\rceil}{\sum_1^r \left\lceil\frac{|P_i|}{m}\right\rceil} \approx 1\;.
\end{equation*}
Note that the above condition assumes that the serial packing algorithm that
packs each partition achieves optimal space utilization.
\end{enumerate}
To satisfy both constraints, care must be taken to decide the
number of partitions, i.e., the value of $r$. For example, if
$r<\lceil\frac{n}{n_c}\rceil$, then the memory constraint cannot
be satisfied, according to the pigeonhole principle.  On the other
hand, as the value of $r$ increases, the storage utilization tends to
decrease. Consider the extreme case in which each partition contains only
one entry. It is obvious that the storage utilization is $\frac{1}{m}$
which is much smaller than $1$. In the following discussion, we describe
two scalable data partitioning methods that satisfy the above constraints.
Note that both methods partition the input entries according to the
centroids of their MBRs.
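As a quick numeric check of the storage constraint, the utilization ratio can be computed directly from the partition sizes; the helper below is illustrative only:

```python
import math

def storage_utilization(partition_sizes, m):
    """Ratio of the minimum possible node count, ceil(n/m), to the
    node count produced when each partition is packed separately and
    the serial algorithm fills every node it can."""
    n = sum(partition_sizes)
    return math.ceil(n / m) / sum(math.ceil(s / m) for s in partition_sizes)
```

Two exactly full partitions give a utilization of $1$, while the extreme case of one entry per partition degrades to $\frac{1}{m}$, as noted above.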

\subsubsection{Hilbert Partitioning}
The Hilbert space-filling curve defines a one-to-one mapping between a $k$-d
space and a 1-d curve that preserves spatial locality fairly well. It
is straightforward to use the Hilbert curve to partition the input entries.
The steps are listed below. We assume that $i\in\{1,2,...,n\}$.
\begin{enumerate}
\item For each entry $e_i$, calculate the Hilbert value $h_i$ of the
centroid of $e_i$.MBR.
\item Assume all $h_i$ fall in range $[H_0,H_r)$. We can safely use $0$
and $+\infty$ as the value of $H_0$ and $H_r$ respectively. Split the
range $[H_0,H_r)$ into $r$ subranges. The splitting points are denoted
by $H_j$ in ascending order where $j\in\{1,2,...,r-1\}$.
\item Assign each entry to a partition according to the corresponding
Hilbert value. Specifically, if $H_{j-1}\le h_i<H_j$ where
$j\in\{1,2,...,r\}$, then entry $e_i$ is assigned to partition $P_j$.
\end{enumerate}

We fix the value of $r$ to $2\lceil \frac{n}{n_c} \rceil$ and use a sampling
technique to decide the splitting points. In particular, we randomly
select a small portion of input entries and sort them by the Hilbert
value. Then we calculate the $r$-quantiles of the sample and use them as
the splitting points. As a rule of thumb, sample quantiles are very close
to the population quantiles as long as the size of the sample is not too
small. In practice, we set the sampling probability to $0.001$, which is
enough to result in approximately equal-sized partitions. Therefore,
in real situations the number of entries in each partition should not
deviate far from $n_c/2$ (almost impossible to exceed $n_c$), which
ensures the memory constraint. For the storage utilization constraint, it
is trivially satisfied since the number of partitions is relatively small.
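The sampling-based choice of splitting points and the partition assignment can be sketched as follows (illustrative; the Hilbert values are assumed to be precomputed):

```python
import bisect
import random

def splitting_points(hilbert_values, r, sample_prob=0.001):
    """Sample the Hilbert values with probability sample_prob and
    return the r-quantiles of the sample as r-1 splitting points."""
    sample = sorted(h for h in hilbert_values if random.random() < sample_prob)
    if not sample:
        return []
    return [sample[len(sample) * j // r] for j in range(1, r)]

def partition_of(h, splits):
    """Assign Hilbert value h to partition j such that H_{j-1} <= h < H_j."""
    return bisect.bisect_right(splits, h)
```

For example, with splitting points $[10, 20, 30]$, the values $5$, $10$, $25$, and $99$ are assigned to partitions $0$, $1$, $2$, and $3$ respectively.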

Hilbert partitioning has many advantages, of which the most important
is that the partitions have approximately the same number of objects,
regardless of the data skewness. However, the major drawback of Hilbert
partitioning is that the overlapped regions between the MBRs of partitions
can be very large, which may affect the query performance of the packed
R-trees.

\subsubsection{Recursive Grid Partitioning}
Let $U$ denote the MBR of all centroids of input MBRs. Consider
an $a\times a$ grid that decomposes $U$ into $a^2$ equal-sized
\emph{cells}. It is obvious that each cell would contain at most $n_c$
points when $a$ is sufficiently large. Therefore, we may simply
use the grid as a partitioning method. However, the cell size is
determined by the densest region, which may result in a large portion
of cells each containing only a few points. As a consequence, the
packed R-tree may have a large number of non-full nodes, violating the
storage utilization constraint.

Recursive grid partitioning mitigates this problem by treating dense
regions differently. Specifically, we use an $a_1\times a_1$ grid,
where $a_1$ is relatively small, to decompose $U$ into $a_1^2$ level-1
cells. Each cell is associated with an id calculated from its row and
column numbers. Next, for each level-1 cell that contains more than
$n_c$ points, we use an $a_{2_{id}}\times a_{2_{id}}$ grid to further
decompose it into level-2 cells. This process continues until all cells,
regardless of their level, contain at most $n_c$ points. An illustrative
example of recursive grid partitioning is shown in
Figure~\ref{fig:grid}.

\begin{figure}[t]
\centering
\includegraphics{figures/grid.eps}
\caption{An example of recursive grid partitioning}
\label{fig:grid}
\end{figure}

Below, we describe a heuristic algorithm to determine the value of $a$ for
a given cell.
\begin{enumerate}
\item Let $a=2^b$, where
$b=\left\lceil\log_2\sqrt{\frac{n}{n_c}}\right\rceil$, so that
$a^2\ge\frac{n}{n_c}$.
\item Use an $a\times a$ grid to decompose the cell. Count the number of
points in each subcell and save the result in a matrix $M$.
\item Using matrix $M$, estimate the storage utilization of the packed nodes.
If it is acceptable (e.g., greater than $99\%$), return the value of $a$.
Otherwise, let $a=a/2$ and go back to Step 2.
\end{enumerate}
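The heuristic above can be sketched as follows. This is a simplified in-memory version operating on centroids; we take $a=2^{\lceil\log_2\sqrt{n/n_c}\rceil}$ as the intended reading of Step 1, and `estimate_utilization` is one plausible way to estimate utilization from $M$ (each non-empty cell needs $\lceil c/n_c\rceil$ nodes), not necessarily the exact formula used in the system:

```python
import math

def count_matrix(points, bbox, a):
    """Decompose bbox with an a x a grid and count the points in each cell."""
    (x0, y0, x1, y1) = bbox
    M = [[0] * a for _ in range(a)]
    for (x, y) in points:
        col = min(int((x - x0) / (x1 - x0) * a), a - 1)
        row = min(int((y - y0) / (y1 - y0) * a), a - 1)
        M[row][col] += 1
    return M

def estimate_utilization(M, n_c):
    """Fraction of node slots filled if each cell is packed separately."""
    counts = [c for row in M for c in row if c > 0]
    if not counts:
        return 1.0
    nodes = sum(-(-c // n_c) for c in counts)   # ceil(c / n_c) nodes per cell
    return sum(counts) / (nodes * n_c)

def choose_a(points, bbox, n_c, threshold=0.99):
    """Halve a until the estimated storage utilization is acceptable."""
    a = 2 ** max(1, math.ceil(math.log2(math.sqrt(len(points) / n_c))))
    while a > 1:
        M = count_matrix(points, bbox, a)
        if estimate_utilization(M, n_c) >= threshold:
            break
        a //= 2
    return a
```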

By determining all the $a$'s, we in effect build a small spatial
index for data partitioning. Compared to Hilbert partitioning,
recursive grid partitioning avoids overlap between partitions. When
the MBRs themselves, instead of their centroids, are considered,
overlapping regions still exist, but they tend to be much smaller than
those in Hilbert partitioning.

By choosing different data partitioning methods and serial packing
algorithms, our framework can generate various types of packed
R-trees. We select four representative algorithms for evaluation as
listed in Table~\ref{tbl:algs}. Note that we use the same serial packing
algorithm in both external and internal packing.

\begin{figure*}[t]
\centering
\subfigure[TIGER datasets]{\label{fig:TIGER-dist}\includegraphics[width=0.48\textwidth]{figures/TIGER-Point-Data-Distribution}}
\subfigure[GPS datasets]{\label{fig:GPS-dist}\includegraphics[width=0.48\textwidth]{figures/GPS-Point-Data-Distribution}}
%\subfigure[TIGER datasets]{\label{fig:TIGER-dist}\includegraphics[width=0.48\textwidth]{figures/blank}}
%\subfigure[GPS datasets]{\label{fig:GPS-dist}\includegraphics[width=0.48\textwidth]{figures/blank}}
\caption{Spatial distribution of MBRs}
\label{fig:data-dist}
\end{figure*}

\begin{table*}[ht]
\begin{minipage}{0.5\textwidth}
\renewcommand{\arraystretch}{1.2}
\caption{Representative Packing Algorithms}
\medskip
\centering
\begin{tabular}{|l|l|r|}
\hline
\textbf{Algorithm} & \textbf{Data Partitioning} & \textbf{Serial Packing}\\ \hline
HH & Hilbert & Hilbert-Sort \\ \hline
HSTR & Hilbert & STR \\ \hline
GH & Recursive Grid & Hilbert-Sort \\ \hline
GSTR & Recursive Grid & STR \\ \hline
\end{tabular}
\label{tbl:algs}
\end{minipage}
\begin{minipage}{0.5\textwidth}
\renewcommand{\arraystretch}{1.2}
\caption{Datasets Summary}
\medskip
\centering
\begin{tabular}{|l|l|r|r|}
\hline
\textbf{Name} & \textbf{Type} & \textbf{Lines/Points} & \textbf{Size}\\ \hline
TIGER-Region & Region Data & 707,633,320 & 39GB\\ \hline
TIGER-Point & Point Data & 782,256,291 & 44GB\\ \hline
GPS-Region & Region Data & 1,806,997,382 & 101GB\\ \hline
GPS-Point & Point Data & 1,918,581,554 & 108GB\\ \hline
\end{tabular}
\label{tbl:datasets}
\end{minipage}
\end{table*}

\section{Evaluation}\label{sec:5}
In this section, we present the experimental results of the packing
performance of our framework, as well as the query performance and the
storage utilization of the packed R-trees.

\subsection{Experiment Settings}
\subsubsection{Hadoop Cluster}
To perform the evaluation, we use a cluster of 13 machines. Each machine
has a single quad-core Intel Core i7-950 3.0GHz processor, 8GB DRAM, and
two 2TB 7.2K RPM hard disks. All machines are hosted in a single rack and
interconnected by a gigabit Ethernet switch. Each machine runs Hadoop
version 0.20.2 on Ubuntu Linux 10.04 LTS (64-bit). One machine is
configured as both jobtracker and namenode; the others are configured as
computing nodes (tasktracker and datanode). Both the map and reduce slots
of each computing node are set to $4$, in accordance with the number of
cores. As a consequence, at most 48 map tasks and 48 reduce tasks can run
concurrently in our cluster. Each map or reduce task can use up to 2GB of
virtual memory. The HDFS block size is set to 64MB and each block is
replicated 3 times for fault tolerance.

\subsubsection{Datasets}
We use four real datasets in our experiments (summarized in
Table~\ref{tbl:datasets}). They are extracted from two data sources:

\begin{itemize}
\item \textbf{2010 Census TIGER/Line$^\text{\textregistered}$
Shapefiles}~\cite{TIGERLine:2010}: This data source contains 74,872
shapefiles in 47 categories, with a total size of 67GB in ZIP
format. Because it is usually not meaningful to mix data from
different categories, we only use EDGES, the largest category,
which contains 3,234 shapefiles totaling 11GB in ZIP
format. In the EDGES data, each spatial object is a polyline representing
an edge of a geometric region of the USA, such as a state, city, district,
road, or mountain. We extract the MBRs of all line segments and the
points from the EDGES polylines to form the TIGER-Region and TIGER-Point
datasets respectively. The two datasets contain approximately 0.7
and 0.8 billion MBRs respectively.

\item \textbf{Taxi GPS Trace Data}:
This data source contains GPS traces collected over a period of two years
from around 6,000 taxis in Shanghai. Similar to the TIGER data, we
extract the MBRs of the line segments and the points from the taxi
trajectories to create the other two datasets. The two datasets contain
approximately 1.8 and 1.9 billion MBRs respectively.
\end{itemize}

Figure~\ref{fig:data-dist} shows the spatial distribution of the
MBRs. The two graphs contain 10,000 points sampled from the TIGER-Point
and GPS-Point datasets respectively. In order to show more detail, we
only plot the region that contains most of the data. It can be seen that
both datasets are heavily skewed. The MBR distribution of the region
datasets is similar to that of the corresponding point datasets.

\subsubsection{R-tree Parameters}
In our experiments, an R-tree entry is a 54-byte data structure consisting
of a 20-byte ID, a 2-byte integer indicating the dimensionality (fixed
to 2), and four 8-byte floating-point numbers indicating the coordinates
of the MBR. The page size of our packed R-trees is set to 4KB. As a
result, an R-tree node can store at most 76 entries, one of which saves
the ID and MBR of the node itself, while the other 75 are reference
entries to child nodes. As we consider static R-trees, the node capacity
is therefore set to 75 to maximize storage utilization.

\subsection{Speedup and Scaleup}
\begin{figure*}[ht]
\centering
\begin{minipage}{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/Jobtime-Detail}
\caption{Detailed execution time}
\label{fig:jobtime}
\end{minipage}
\begin{minipage}{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/Speedup}
\caption{Speedup}
\label{fig:speedup}
\end{minipage}
\begin{minipage}{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/Scaleup}
\caption{Scaleup}
\label{fig:scaleup}
\end{minipage}
\end{figure*}

We use GPS-Point, the largest dataset, to evaluate the packing
performance. The first two levels are packed externally using MapReduce
and the other levels are packed internally. Given the size of the dataset
and the node capacity, we find that this packing plan is the most
efficient one. External packing of each level involves two MapReduce
jobs. The first job performs preprocessing. For Hilbert partitioning, it
samples the input entries and decides the Hilbert values that will be
used as splitting points. For recursive grid partitioning, it creates a
small index describing the structure of the space decomposition. The
output of the first job is a file stored on HDFS. The second job uses
this file for data partitioning in the map phase and then packs each
partition in the reduce phase.
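As a rough single-process analogue of the second job (the function names and the dictionary-based shuffle are illustrative; in the actual system the MapReduce framework performs the grouping, and `pack` stands for any serial packing algorithm such as Hilbert-Sort or STR):

```python
import bisect
from collections import defaultdict

def map_phase(entries, split_points):
    """Map: tag each entry with its partition id (Hilbert partitioning shown)."""
    for entry in entries:
        yield bisect.bisect_right(split_points, entry["hilbert"]) + 1, entry

def reduce_phase(grouped, pack):
    """Reduce: pack each partition independently with a serial algorithm."""
    return {pid: pack(group) for pid, group in grouped.items()}

def external_pack_level(entries, split_points, pack):
    # The grouping below is what the MapReduce shuffle does for us.
    grouped = defaultdict(list)
    for pid, entry in map_phase(entries, split_points):
        grouped[pid].append(entry)
    return reduce_phase(grouped, pack)
```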

Let $T^{pre}_l$ and $T^{pack}_l$ denote the execution time of the two jobs
of externally packing level $l$, and let $T^{in}$ denote the time for
internal packing. With our settings, the overall packing time is
\begin{equation*}
T_{total}=T^{pre}_1+T^{pack}_1+T^{pre}_2+T^{pack}_2+T^{in}\;.
\end{equation*}

Figure~\ref{fig:jobtime} shows the detailed packing time. In all cases,
packing the first level accounts for most of the packing time, which
confirms Theorem~\ref{thm:3}. We can also see that GSTR and GH are
slightly slower than HSTR and HH. The difference is mainly caused by the
preprocessing job. The preprocessing jobs of both partitioning methods
scan all entries in the map phase. In the reduce phase, however, Hilbert
partitioning directly writes out the sampled entries, while recursive grid
partitioning needs extra time to compute the grid index. For the packing
job, all algorithms run at a similar speed.

Figure~\ref{fig:speedup} shows the performance gain as the number
of tasktrackers increases. All algorithms share a similar speedup
trend. When the number of tasktrackers is less than or equal to 8, the
speedup is almost linear. With 12 tasktrackers, however, the speedup
is around 4.5, 25\% below the ideal value. We suspect this is because
the number of datanodes is fixed at 12. Specifically, since the main
cost of the packing job is writing intermediate R-tree files to HDFS,
the speedup remains near-linear as long as the I/O bandwidth of the
datanodes is unsaturated. Once the I/O bandwidth is saturated, the
performance gain of adding more tasktrackers comes merely from reducing
computation time. An educated guess is that if the number of datanodes
scaled at the same rate as the number of tasktrackers, the speedup of
the overall packing time would be near-linear.

Figure~\ref{fig:scaleup} shows the scaleup trend of the algorithms. We
scale both the size of the dataset and the number of tasktrackers at
the same rate. The result shows that the scaleup of all algorithms is
almost linear. Therefore, our packing framework has excellent scalability
and is able to scale to very large datasets.

\subsection{Query Performance}
We use the number of accessed pages to measure the query performance of
the packed R-trees. Unlike query time, page accesses are unaffected by
factors such as I/O latency, CPU speed, and programming language. To
evaluate queries on skewed datasets, regions of different \emph{density}
should be treated differently, where the density of a region is defined
as the ratio of the number of MBR centroids to the region area. For each
dataset, we divide the region shown in Figure~\ref{fig:data-dist} into
$256\times256$ subregions. We further define three types of subregions,
namely \emph{low-density}, \emph{med-density}, and
\emph{high-density}. For TIGER data, the density of these subregions is
200, 2,000, and 20,000 times the average density respectively. For GPS
data, it is 0.1, 1, and 100 times the average density respectively. These
settings are determined according to frequent queries and dataset
characteristics. For each density type, we randomly select 10 subregions
and then randomly select 10 points in each subregion. We use the term
\emph{QP set} (query point set) to refer to these points (100 for each
density type). They will be used as MBR centroids in the following
evaluation.
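The selection of query points can be sketched as follows. This is a simplified in-memory version; the tolerance parameter `tol` is an assumption, since the exact matching rule for the target densities is not spelled out above:

```python
import random

def qp_set(points, bbox, target_ratio, tol=0.5, grid=256,
           n_regions=10, n_points=10, seed=7):
    """Select n_regions subregions whose density is ~target_ratio times the
    average, then sample n_points query points from each of them."""
    (x0, y0, x1, y1) = bbox
    cells = {}
    for (x, y) in points:
        key = (min(int((x - x0) / (x1 - x0) * grid), grid - 1),
               min(int((y - y0) / (y1 - y0) * grid), grid - 1))
        cells.setdefault(key, []).append((x, y))
    avg = len(points) / (grid * grid)          # average points per subregion
    matching = [c for c in cells.values()
                if abs(len(c) / avg - target_ratio) <= tol * target_ratio]
    rng = random.Random(seed)
    chosen = rng.sample(matching, min(n_regions, len(matching)))
    return [p for cell in chosen
            for p in rng.sample(cell, min(n_points, len(cell)))]
```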

\subsubsection{Range Query}
\begin{figure*}[ht]
\centering
\begin{minipage}{\textwidth}
\centering
\subfigure[Low-density regions]{\includegraphics[width=0.32\textwidth]{figures/TIGER-Region-Range-Query-Low-Density}}
\subfigure[Med-density regions]{\includegraphics[width=0.32\textwidth]{figures/TIGER-Region-Range-Query-Med-Density}}
\subfigure[High-density regions]{\includegraphics[width=0.32\textwidth]{figures/TIGER-Region-Range-Query-High-Density}}
\caption{Range queries on TIGER-Region}
\label{fig:range-TIGER}
\end{minipage}
\begin{minipage}{\textwidth}
\centering
\subfigure[Low-density regions]{\includegraphics[width=0.32\textwidth]{figures/GPS-Region-Range-Query-Low-Density}}
\subfigure[Med-density regions]{\includegraphics[width=0.32\textwidth]{figures/GPS-Region-Range-Query-Med-Density}}
\subfigure[High-density regions]{\includegraphics[width=0.32\textwidth]{figures/GPS-Region-Range-Query-High-Density}}
\caption{Range queries on GPS-Region}
\label{fig:range-GPS}
\end{minipage}
\end{figure*}

We evaluate range query performance with varying region sizes. We limit
the number of result MBRs to 100,000; hence the largest query window
should contain approximately 100,000 MBRs in high-density regions. We
define $1/100$ of the area of the largest query window as a \emph{unit
area}, so the window size ranges from 1 to 100 unit areas. Given a window
size and a point in a QP set, we can then calculate a query window. Since
the QP set of each density type contains 100 points, 100 range queries
are performed for each window size. We report the average number of page
accesses.

Figure~\ref{fig:range-TIGER} and Figure~\ref{fig:range-GPS} show
the performance of range queries over TIGER-Region and GPS-Region
respectively. The performance of range queries over the point datasets is
similar to that of the region datasets. For TIGER data, GH and GSTR
R-trees outperform HSTR and HH R-trees, while for GPS data, the opposite
result is observed. This difference may be attributed to MBR
overlapping. In TIGER data, line segments are extracted from
non-intersecting polylines. As a result, most MBRs are either disjoint or
adjacent rather than intersecting. Clearly, when grid partitioning is
used, the MBRs of grid partitions tend not to overlap each other. With
Hilbert partitioning, by contrast, it is common that two MBRs are
assigned to different partitions even when they are spatially
close. Consequently, the MBRs of Hilbert partitions tend to overlap
regardless of the distribution of the original MBRs. Therefore, grid
partitioning outperforms Hilbert partitioning by taking advantage of the
non-intersecting feature of TIGER data. In the GPS data, however, most
MBRs intersect nearby MBRs, which presumably makes Hilbert partitioning
superior to grid partitioning.

We also observe that the gap in range query performance between the
different R-trees narrows as the region density increases. This is
because every leaf page containing a result MBR must be accessed. When
the result set is large, such pages account for most of the page
accesses, no matter which specific R-tree is used.

\subsubsection{Spatial Join}
\begin{figure*}[ht]
\centering
\begin{minipage}{\textwidth}
\centering
\subfigure[Low-density regions]{\includegraphics[width=0.32\textwidth]{figures/TIGER-Region-Join-Query-Low-Density}}
\subfigure[Med-density regions]{\includegraphics[width=0.32\textwidth]{figures/TIGER-Region-Join-Query-Med-Density}}
\subfigure[High-density regions]{\includegraphics[width=0.32\textwidth]{figures/TIGER-Region-Join-Query-High-Density}}
\caption{Spatial join queries on TIGER-Region}
\label{fig:join-TIGER}
\end{minipage}
\begin{minipage}{\textwidth}
\centering
\subfigure[Low-density regions]{\includegraphics[width=0.32\textwidth]{figures/TIGER-Point-kNN-Query-Low-Density}}
\subfigure[Med-density regions]{\includegraphics[width=0.32\textwidth]{figures/TIGER-Point-kNN-Query-Med-Density}}
\subfigure[High-density regions]{\includegraphics[width=0.32\textwidth]{figures/TIGER-Point-kNN-Query-High-Density}}
\caption{k-NN queries on TIGER-Point}
\label{fig:knn-TIGER}
\end{minipage}
\end{figure*}

We use the following query to evaluate the R-tree performance with regard
to spatial joins:
\begin{verbatim}
SELECT e1, e2 FROM dataset WHERE e1.ID != e2.ID
  AND qr.contains(e1.MBR) AND qr.contains(e2.MBR)
  AND e1.MBR.intersects(e2.MBR);
\end{verbatim}
where \texttt{qr} denotes a query range. This query is a spatial join
with a range constraint. The result contains all pairs of entries whose
MBRs intersect each other and lie within range \texttt{qr}. The query
ranges are the same as those used in the previous experiments.

Our spatial join implementation is based on the \texttt{SpatialJoin1}
algorithm proposed in~\cite{Brinkhoff:1993}. Initially, the root page
is loaded. For any pair of child entries, if they intersect each other
and both intersect the query range, the join is performed over them
recursively until the leaf nodes are reached. When two leaf nodes are
joined, the join condition is checked over each pair of entries. Note
that the page buffer size is unlimited in our implementation, which
guarantees that any page is loaded from disk only once during a query.
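A minimal in-memory sketch of this recursion (assuming both nodes are at the same level, as in a packed R-tree, and ignoring paging and buffering entirely):

```python
def intersects(a, b):
    """MBRs are (xmin, ymin, xmax, ymax)."""
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def contains(qr, m):
    return qr[0] <= m[0] and qr[1] <= m[1] and m[2] <= qr[2] and m[3] <= qr[3]

def spatial_join(n1, n2, qr, out):
    """SpatialJoin1-style recursion over two nodes at the same level.
    A node is (is_leaf, [(mbr, child_or_id), ...])."""
    leaf, entries1 = n1
    entries2 = n2[1]
    for m1, c1 in entries1:
        if not intersects(m1, qr):
            continue
        for m2, c2 in entries2:
            if not (intersects(m2, qr) and intersects(m1, m2)):
                continue
            if leaf:
                if c1 != c2 and contains(qr, m1) and contains(qr, m2):
                    out.append((c1, c2))
            else:
                spatial_join(c1, c2, qr, out)
```

A self-join starts as `spatial_join(root, root, qr, out)`; each qualifying pair is reported in both orders, mirroring the `e1.ID != e2.ID` predicate of the query above.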

Figure~\ref{fig:join-TIGER} shows the page accesses of spatial join
queries over TIGER data. The results for GPS data are not shown because
such spatial joins are not meaningful when there are too many overlaps
between nearby MBRs. We can see that the GSTR R-tree outperforms the
other three in all cases. With the GSTR R-tree, our spatial join query
reads $10\%$ to $25\%$ fewer pages than with the second best one. Most
importantly, this advantage of the GSTR R-tree does not fade in
high-density regions. We can also see that the HH R-tree does not perform
well on MBR-intersection spatial joins. We argue that this is because the
HH R-tree spreads intersecting MBRs across more nodes than the other
packed R-trees. Additionally, it is worth pointing out that other
datasets may yield different results.

\subsubsection{k-NN Query}
Besides range queries and spatial joins, R-trees can also effectively
support nearest neighbor search. To evaluate k-NN query performance, we
implement the R-tree-based k-NN algorithm originally proposed
in~\cite{Roussopoulos:1995}. This algorithm uses Euclidean distance as
the distance measure. As with spatial joins, we cache all pages loaded
into memory during the execution of a k-NN query. Note that other k-NN
algorithms may yield different results.
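As an illustration, the following sketch uses a best-first traversal ordered by MINDIST; it is a simplified variant in the spirit of, not identical to, the branch-and-bound algorithm of~\cite{Roussopoulos:1995}, and it ignores paging:

```python
import heapq

def mindist2(q, mbr):
    """Squared MINDIST from point q = (x, y) to an MBR (xmin, ymin, xmax, ymax)."""
    dx = max(mbr[0] - q[0], 0.0, q[0] - mbr[2])
    dy = max(mbr[1] - q[1], 0.0, q[1] - mbr[3])
    return dx * dx + dy * dy

def knn(root, q, k):
    """Best-first k-NN; a node is (is_leaf, [(mbr, child_or_id), ...])."""
    heap = [(0.0, 0, True, root)]   # (distance^2, tiebreak, is_node, payload)
    result, counter = [], 1
    while heap and len(result) < k:
        d, _, is_node, item = heapq.heappop(heap)
        if not is_node:
            result.append((item, d ** 0.5))   # a data entry: report it
            continue
        is_leaf, entries = item
        for mbr, child in entries:
            heapq.heappush(heap, (mindist2(q, mbr), counter, not is_leaf, child))
            counter += 1
    return result
```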

Figure~\ref{fig:knn-TIGER} shows the page access results on the
TIGER-Point dataset. The value of $k$ increases from 1 to 100. For each
$k$, we run 100 k-NN queries using the points in the QP set, and report
the average page accesses over all queries with the same $k$. It can be
seen that in low-density and med-density regions, the page accesses of
the HH R-tree are about $80\%$ of those of the GH R-tree, which is the
second best. In high-density regions, the performance of the HH and GH
R-trees tends to converge, and they access about $15\%$ fewer pages than
the other two. In high-density regions, partitions generated by grid
partitioning contain more points than those in regions of lower
density. This implies that a k-NN search is more likely to be completed
within fewer partitions, whose points are packed using Hilbert-Sort in
the GH algorithm. We thus speculate that Hilbert packing is particularly
suitable for the k-NN algorithm.

It is worth mentioning that the page accesses of k-NN queries are less
sensitive to region density than those of range queries and spatial
joins. From Figures~\ref{fig:range-TIGER}--\ref{fig:join-TIGER} we can
see that page accesses and region density are almost linearly
correlated. In contrast, the page accesses of k-NN queries are
relatively stable across all regions.
\begin{table}[b]
\renewcommand{\arraystretch}{1.2}
\caption{Storage Utilization}
\medskip
\centering
\begin{tabular}{|l|c|c|c|c|}
\hline
\textbf{Dataset} & \textbf{HH} & \textbf{HSTR} & \textbf{GH} & \textbf{GSTR}\\ \hline
TIGER-Region & 100\% & 99.75\% & 100\% & 99.24\%\\ \hline 
TIGER-Point & 100\% & 99.60\% & 100\% & 99.29\%\\ \hline 
GPS-Region & 100\% & 99.91\% & 98.34\% & 96.37\%\\ \hline 
GPS-Point & 100\% & 99.75\% & 98.22\% & 96.87\%\\ \hline 
\end{tabular}
\label{tbl:storage}
\end{table}

\subsection{Storage Utilization}
Table~\ref{tbl:storage} compares the storage utilization of the packed
R-trees. We observe that HH R-trees have optimal storage utilization on
all datasets because entries are packed sequentially along the
space-filling curve; the only storage loss occurs at partition
boundaries and is negligible. Recursive grid partitioning and STR packing
cause storage loss because they may generate partitions and strips
containing very few entries. Nevertheless, the storage utilization of all
our packed R-trees is higher than $96\%$, which is desirable for most
applications.

\section{Conclusion and Future Work}\label{sec:6}
In this paper, we designed an R-tree packing framework using MapReduce.
This framework packs each R-tree level separately from the bottom up. For
levels that cannot be efficiently packed on one machine, we designed
a general parallel packing algorithm that first divides the input entries
into partitions and then uses traditional serial packing algorithms to
pack each partition. We proposed two data partitioning methods, namely
Hilbert partitioning and recursive grid partitioning. By combining
different partitioning methods and serial packing algorithms, our
framework can support various types of packed R-trees. In addition,
we proposed a cost model to demonstrate that packing the leaf nodes
accounts for most of the packing cost.

Based on the framework, we implemented four representative R-trees,
namely HH, HSTR, GH, and GSTR. To evaluate the packed R-trees, we used
four real datasets containing up to 2 billion spatial objects with a
total size of up to 100GB. The experimental results showed that our
R-tree packing framework has desirable scalability, with an almost
linear scaleup trend. We also conducted an extensive comparison of query
performance between the packed R-trees. Three types of queries, namely
range queries, spatial joins, and k-NN queries, were evaluated. The
results showed that none of the packed R-trees is superior to the others
in all situations. Therefore, the flexibility of the framework is an
important feature for changing scenarios. Additionally, the storage
utilization of all evaluated packed R-trees is near-optimal, which is
desirable for very large datasets.

Our future work has several directions. First, we plan to develop
a bulk-insertion technique for very large R-trees. This issue has
been discussed in~\cite{Chen:1998} and \cite{Lee:2006}; however,
similar to traditional packing techniques, these methods cannot scale
to large datasets. Second, we plan to design efficient spatial query
processing algorithms in MapReduce using R-trees. Since MapReduce and
HDFS are primarily designed for sequential scans, it might be challenging
to integrate a tree-structured index, which requires many random
accesses, without much loss of efficiency. Last, we argue that the R-tree
and its variants might not be the most suitable indexes for analytical
batch processing workloads. We plan to investigate the use of
flat-structured and append-only indexing techniques in the context of
very large spatial databases.

\begin{spacing}{1.08}
\bibliographystyle{unsrt-abbrv}
\bibliography{rtreepack} 
\end{spacing}

\bigskip
\begin{appendix}
Proof of Theorem~\ref{thm:3}:
\begin{proof}
Let $T^E$ and $T^I$ denote the total time of external packing and
internal packing, respectively. Assume that when packing a level without
parallelism, the computation time is $t_a\cdot n\log n$ and the I/O time
is $t_b\cdot n$, where $t_a$ and $t_b$ are constant factors.

Let $s$ denote the speedup factor ($s\le S$ since the optimal speedup
cannot exceed the concurrency level). We have $T^E=T^E_a+T^E_b$ where
\begin{align*}
& T^E_a= \frac{1}{s}\sum_{l=1}^{l_*} t_a\cdot \frac{n}{m^{l-1}}\log\frac{n}{m^{l-1}}\;,\\
& T^E_b= \frac{1}{s}\sum_{l=1}^{l_*} t_b\cdot\frac{n}{m^{l-1}}\;;
\end{align*}
and $T^I=T^I_a+T^I_b$ where
\begin{align*}
& T^I_a= \sum_{l=l_*+1}^d t_a\cdot \frac{n}{m^{l-1}}\log\frac{n}{m^{l-1}}\;,\\
& T^I_b= \sum_{l=l_*+1}^d t_b\cdot\frac{n}{m^{l-1}}\;.
\end{align*}
It is easy to see that
\begin{equation*}
\frac{t_a\cdot n\log n}{T^E_a+T^I_a}\ge\frac{t_b\cdot n}{T^E_b+T^I_b}\;,
\end{equation*}
from which (by the mediant inequality) it follows that
\begin{align*}
\frac{T_1}{T_{total}} &= \frac{t_a\cdot n\log n+t_b\cdot n}{s\cdot(T^E_a+T^I_a)+s\cdot(T^E_b+T^I_b)} \\
&\ge \frac{t_b\cdot n}{s\cdot(T^E_b+T^I_b)} \\
&= \frac{1}{\sum_{l=1}^{l_*} \frac{1}{m^{l-1}}+s\cdot\sum_{l=l_*+1}^d\frac{1}{m^{l-1}}} \\
&\ge \frac{1}{\sum_{l=1}^{+\infty}\frac{1}{m^{l-1}}+S\cdot \sum_{l=l_*+1}^{+\infty} \frac{1}{m^{l-1}}} \\
&\ge \frac{1}{(1+\frac{S}{m^{l_*-1}})\cdot\frac{m}{m-1}} \\
&\approx\frac{1}{1+\frac{S}{m^{l_*-1}}}\;.
\end{align*}
This completes the proof.
\end{proof}
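As a sanity check, instantiating the bound with the settings used in our
experiments ($m=75$ from the node capacity, $S=48$ from the maximum task
concurrency, and $l_*=2$ since the first two levels are packed
externally) gives a lower bound of approximately
\begin{equation*}
\frac{1}{1+\frac{48}{75}}\approx 0.61\;,
\end{equation*}
which is consistent with Figure~\ref{fig:jobtime}, where packing the
first level dominates the overall packing time.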

\end{appendix}

\end{document}
