\section{Introduction\label{sec:intro}}
Ubiquitous graph data, coupled with advances in graph analysis
techniques, is pushing the database community to pay more attention to
graph databases.  Efficiently managing and answering queries against
very large graphs is becoming an increasingly important research topic,
driven by many emerging real-world applications, including XML
databases, GIS, web mining, social network analysis, ontologies,
and bioinformatics.

Among them, the graph reachability query has attracted a lot of research
attention.  Given two vertices $u$ and $v$ in a directed
graph, a reachability query asks whether there is a path from $u$ to
$v$.  Graph reachability is one of the most common
queries in a graph database.  In many other applications where graphs
are used as the basic data structure (e.g., XML data management), it
is also one of the fundamental operations. Thus, efficient processing
of reachability queries is a critical issue in graph databases.

\subsection{Applications}
Reachability queries are very important for many XML databases.
Typical XML documents are tree-structured; in such cases,
a reachability query simply corresponds to an ancestor-descendant search (``//'').
However, with the widespread use of ID and IDREF attributes,
which represent relationships not captured by a strict tree structure,
it is often more appropriate to represent
XML documents as directed graphs.
Queries on such data often invoke reachability queries.
For instance, on bibliographic data containing a paper citation network, such as CiteSeer,
we may ask whether author A is influenced by paper B, which can be expressed as the simple path expression
${\tt //B//A}$. A typical way to process this query is to
obtain (possibly through some index on elements) elements A and B and then test whether author A is
reachable from paper B in the XML graph.
Clearly, it is crucial to provide efficient support for
reachability testing due to its importance for complex XML queries.

Querying ontologies is becoming increasingly important as many large domain ontologies are being constructed.
One of the best-known ontologies is the Gene Ontology (GO)\footnote{http://www.geneontology.org}.
GO can be represented as a directed acyclic graph (DAG) in which nodes are concepts (vocabulary terms) and edges are relationships ({\em is-a} or {\em part-of}).
It provides a controlled vocabulary of terms to describe a gene product, e.g., a protein or RNA, in any organism.
For instance, we may query whether a certain protein is related to a certain biological process or has a certain molecular function.
In the simple case, this can be transformed into a reachability query on two vertices over the GO DAG.
As a protein can be directly associated with several vertices in the DAG, the entire query process may actually invoke several reachability queries.

Recent advances in systems biology have accumulated a large amount of graph data, namely various kinds of biological networks, ranging from gene-regulatory networks, protein-protein interaction networks, and signal transduction networks to
metabolic networks.  Many databases are being constructed to maintain these data.
Biology and bioinformatics are in fact becoming a key driving force for graph databases.
Here again, reachability is one of the fundamental queries frequently used.
For instance, we may ask whether one gene is (indirectly) regulated by another gene, or
whether there is a biological pathway between two proteins.

\subsection{Prior Work}
\label{prior}

In order to tell whether a vertex $u$ can reach another vertex $v$
in a directed graph $G=(V,E)$, we can use two ``extreme'' approaches.
The first approach traverses the graph (by DFS or BFS), which
takes $O(n+m)$ time, where $n=|V|$ (the number of vertices) and $m=|E|$
(the number of edges).  This is clearly too slow for large graphs.  The
other approach precomputes the transitive closure of $G$, i.e., it
records the reachability between every pair of vertices in advance.
While this approach can answer reachability queries in $O(1)$ time,
computing the transitive closure takes
$O(nm)$ time~\cite{Simon88} and the storage cost is $O(n^2)$.  Both are
unacceptable for large graphs.  Existing research tries
to reduce the precomputation time and storage cost
while keeping the query answering time reasonable.
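
As a concrete illustration of the first extreme, reachability by plain BFS can be sketched as follows (a minimal Python sketch; the adjacency-list dictionary is our own illustrative representation, not tied to any cited system):

```python
from collections import deque

def reaches(adj, u, v):
    """Return True iff v is reachable from u; O(n+m) time, O(n) space."""
    if u == v:
        return True
    visited = {u}
    queue = deque([u])
    while queue:
        x = queue.popleft()
        for y in adj.get(x, ()):   # successors of x
            if y == v:
                return True
            if y not in visited:
                visited.add(y)
                queue.append(y)
    return False

# Tiny example DAG: 0 -> 1 -> 3 and 0 -> 2
adj = {0: [1, 2], 1: [3]}
```

Every query pays the full traversal cost, which is exactly what the indexing approaches surveyed below try to avoid.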

A key idea explored in existing research is to utilize
simpler graph structures, such as chains or trees, within the original
graph to compute and compress the transitive closure and/or to help
answer reachability queries.

\paragraph*{\bf Chain Decomposition Approach}
Chains are the first simple graph structure studied, in
both the graph theory and database literature, to improve the efficiency of
transitive closure computation~\cite{Simon88} and to compress the
transitive closure matrix~\cite{Jagadish90}.  The basic idea of chain
decomposition is as follows: the DAG is partitioned into several
pairwise disjoint chains (each vertex appears in one and only one
chain).  Each vertex in the graph is assigned its chain number and its
sequence number within the chain.  For each vertex $v$ and each chain $c$,
we record at most one vertex $u$, namely the smallest vertex (in
terms of $u$'s sequence number) on chain $c$ that is reachable from
$v$.  To tell whether a vertex $u$ reaches a vertex $v$, we only need
to check whether $u$ reaches some vertex $v'$ on $v$'s chain whose
sequence number is no greater than $v$'s.
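
The query side of this scheme can be sketched as follows (a hypothetical Python sketch: {\tt chain\_of}, {\tt seq\_of}, and {\tt first\_reach} are our own illustrative encodings of the chain labels and the per-chain successor table described above, not structures from the cited papers):

```python
def chain_reaches(u, v, chain_of, seq_of, first_reach):
    """Chain-decomposition query: u reaches v iff u reaches some vertex
    on v's chain whose sequence number is no greater than v's.
    first_reach[u] maps a chain number c to the smallest sequence
    number on chain c reachable from u (absent if none)."""
    s = first_reach[u].get(chain_of[v])
    return s is not None and s <= seq_of[v]

# Tiny example: chain 0 holds a (seq 0) and b (seq 1); chain 1 holds c.
# Suppose a reaches b (along its chain) and also reaches c.
chain_of = {"a": 0, "b": 0, "c": 1}
seq_of = {"a": 0, "b": 1, "c": 0}
first_reach = {"a": {0: 0, 1: 0}, "b": {0: 1}, "c": {1: 0}}
```

Each query inspects a single chain, so with $k$ chains every vertex stores at most $k$ entries.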


Currently, Simon's algorithm~\cite{Simon88}, which uses chain decomposition
to compute the transitive closure, has worst-case complexity
$O(k\cdot e_{red})$, where $k$ is the width of the chain decomposition
(the total number of chains) and $e_{red}$ is the number of edges in the
transitive reduction of the DAG $G$ (the transitive reduction of $G$
is the smallest subgraph of $G$ which has the same transitive closure
as $G$; $e_{red}\leq m$).  Jagadish {\em et al.}~\cite{Jagadish90} applied chain
decomposition to reduce the size of the transitive closure matrix.  Their
method finds the minimal number of chains in $G$ by transforming the
problem into an equivalent network flow problem, which can be solved in
$O(n^3)$ time, where $n$ is the number of vertices of the DAG $G$.  Several
heuristic algorithms have been proposed to reduce the computational
cost of chain decomposition.

Even though chain decomposition can help compress the
transitive closure, its compression rate is limited by the fact that,
within a chain, each vertex has at most one immediate successor.  In many
applications, even though the graphs are rather sparse, each vertex can
have multiple immediate successors, of which the chain decomposition can
cover at most one.

\paragraph*{\bf Tree Cover Approach}
Instead of using chains, Agrawal {\em et al.} use a (spanning) tree
to ``cover'' the graph and compress the transitive closure matrix. They
show that the tree cover can beat the best chain
decomposition~\cite{SIGMOD:AgrawalBJ:1989}.  The proposed algorithm finds the tree
cover that maximally compresses the transitive closure.  The cost of
this procedure, however, is equivalent to computing the transitive
closure.
 
The idea of tree cover is based on interval labeling.  Given a tree,
we assign each vertex a pair of numbers (an interval).  If vertex $u$
can reach vertex $v$, then the interval of $u$ contains the interval
of $v$.  The intervals can be obtained by performing a postorder
traversal of the tree: each vertex $v$ is associated with an interval
$(i,j)$, where $j$ is the postorder number of vertex $v$ and $i$ is
the lowest postorder number among its descendants (each vertex being a
descendant of itself).
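
The postorder interval assignment can be sketched as follows (a minimal Python sketch over a dictionary-based tree; the function name and representation are our own):

```python
def interval_labels(tree, root):
    """Assign each vertex (i, j): j is its postorder number, i the
    lowest postorder number among its descendants.  Then u reaches v
    in the tree iff i_u <= j_v <= j_u (interval containment)."""
    label = {}
    post = [0]                       # mutable postorder counter

    def dfs(x):
        low = None
        for child in tree.get(x, ()):
            child_low = dfs(child)
            if low is None:          # first child carries the lowest number
                low = child_low
        post[0] += 1
        if low is None:              # leaf: its own postorder number
            low = post[0]
        label[x] = (low, post[0])
        return low

    dfs(root)
    return label

# Tree: 0 -> {1, 2}, 1 -> {3}
labels = interval_labels({0: [1, 2], 1: [3]}, 0)
```

Here vertex $0$ gets $(1,4)$, which contains every other interval, matching the fact that the root reaches all vertices of the tree.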
 
Assume we have found a tree cover (a spanning tree) of the given DAG
$G$, and the vertices of $G$ are indexed by their interval labels.  Then,
for each vertex, we only need to remember those vertices it can reach
whose reachability is not already captured by the interval labels.
Thus, the transitive closure can be compressed.  In other words, if
$u$ reaches the root of a subtree, then we only need to record the
root vertex, since the interval of any other vertex in the subtree is
contained in that of the root vertex.  To answer whether $u$ can reach $v$,
we check whether the interval of $v$ is contained in any interval
associated with the vertices we have recorded for $u$.
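
The compressed query can be sketched as follows (a hypothetical Python sketch: {\tt extra} is our own name for the per-vertex list of recorded subtree roots, and the labels are hand-constructed for a three-vertex spanning tree with one non-tree edge):

```python
def tree_cover_reaches(u, v, label, extra):
    """u reaches v iff v's interval lies inside u's own interval
    (tree reachability) or inside the interval of some recorded
    subtree root w in extra[u] (non-tree reachability)."""
    lo, hi = label[u]
    _, vp = label[v]                 # v's postorder number
    if lo <= vp <= hi:
        return True
    return any(label[w][0] <= vp <= label[w][1] for w in extra.get(u, ()))

# Spanning tree 0 -> {1, 2} with postorder interval labels, plus a
# non-tree edge 2 -> 1 recorded by listing subtree root 1 for vertex 2.
label = {1: (1, 1), 2: (2, 2), 0: (1, 3)}
extra = {2: [1]}
```

The compression comes from {\tt extra} recording only subtree roots rather than every reachable vertex.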

\paragraph*{\bf Other Variants of Tree Covers (Dual-Labeling, Label+SSPI, and GRIPP)}
Recently, several studies have tried to address the deficiencies of the tree
cover approach of Agrawal {\em et al.} Wang {\em et al.}~\cite{Wang06}
developed the Dual-Labeling approach, which tries to improve the query
time and index size for sparse graphs, for which the original tree cover
costs $O(n)$ and $O(n^2)$, respectively.  For very sparse
graphs, they assume the number of non-tree edges $t$ is much smaller
than $n$ ($t\ll n$).  Their approach reduces the index size to
$O(n+t^2)$ and achieves constant query answering time.  The major
idea is to build a transitive link matrix, which can be thought of as
the transitive closure of the non-tree edges.  Basically, each
non-tree edge is represented as a vertex, and a pair of them is linked
if the start vertex of one edge can be reached from the end vertex of the
other through the interval index (i.e., the former is the latter's
descendant in the tree cover).  They develop approaches that use this matrix to answer
reachability queries in constant time.  In addition, the tree
generated in Dual-Labeling differs from the optimal tree cover, since
here the goal is to minimize the number of non-tree edges.  This is essentially
equivalent to computing the transitive reduction, which has been proved to
be as costly as computing the transitive closure.  Thus, the total
index time of their approach (including the transitive reduction) is
$O(nm+t^2)$.  Clearly, the major issue of this approach is that it
depends heavily on the number of non-tree edges: if $t>n$ or
$m_{red} \geq 2n$, it helps with neither the computation
of the transitive closure nor the compression of the index.

Label+SSPI~\cite{Chen05} and GRIPP~\cite{Trissl07} aim at minimizing the index
construction time and index size.  They achieve $O(m+n)$ index
construction time and $O(m+n)$ index size.  However, this comes at the
expense of query time, which costs $O(m-n)$.  Both
algorithms start by extracting a tree cover.  Label+SSPI uses
pre- and postorder labeling for a spanning tree plus an additional
data structure for storing non-tree edges.  GRIPP builds the cover
using a depth-first traversal, and each vertex that has
multiple incoming edges is duplicated accordingly in the tree
cover; in some sense, its non-tree edges are recorded as extra
vertex instances in the tree cover.  To answer a query, both methods
deploy an online search over the index to see whether $u$ can reach
$v$.  GRIPP provides a couple of heuristics that exploit the
interval property to speed up the search.
%In addition, they show that if the directed graph contains some large connected components, then their algorithm can achieve almost constant query time. 
%However, as we can easily compress any strongly connected component into one vertex and then answer reachability over the DAG, this approach will not be able to deal with very large DAG which is the essential difficulty of reachability query. 
%% In addition, we note that GRIPP employ heuristics to extract the tree
%% cover as its goal is try to help speedup the search process.  
%% It also implemented their indexing approach over relational
%% database system using store procedure.

\paragraph*{\bf 2-Hop Labeling}
The 2-hop labeling method proposed by Cohen {\em et al.}~\cite{cohen2hop}
represents a quite different approach. Intuitively, it tries to
identify a subset of vertices $V_s$ in the graph which ``best''
captures the connectivity information of the DAG.  Then, for each
vertex $v$ in the DAG, we record a list of vertices in $V_s$ which can
reach $v$, denoted $L_{in}(v)$, and a list of vertices in $V_s$
which $v$ can reach, denoted $L_{out}(v)$.  These two sets record
all the information needed to infer the reachability of any pair of
vertices $u$ and $v$: $u \rightarrow v$ if and only if $L_{out}(u)
\cap L_{in}(v) \neq \emptyset$.  For a given labeling,
the index size is $I=\sum_{v \in V}(|L_{in}(v)|+|L_{out}(v)|)$.  They
propose an approximate (greedy) algorithm based on set cover which
produces a $2$-hop cover whose size is within a logarithmic factor of
the minimum possible $2$-hop cover.  The minimum $2$-hop
cover is conjectured to have size $\tilde{O}(nm^{1/2})$.  However, their
original algorithm requires computing the transitive closure first
and has $O(n^4)$ time complexity.
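
The query step of $2$-hop labeling thus reduces to a single set intersection (a minimal Python sketch; the label sets below are hand-constructed for the path $a\rightarrow b\rightarrow c$ with hop set $V_s=\{b\}$, and by convention every vertex appears in its own labels):

```python
def two_hop_reaches(u, v, L_out, L_in):
    """u reaches v iff some hop vertex lies in both L_out(u) and L_in(v)."""
    return bool(L_out[u] & L_in[v])

# Path a -> b -> c, hop set {b}; each vertex also labels itself.
L_out = {"a": {"a", "b"}, "b": {"b"}, "c": {"c"}}
L_in = {"a": {"a"}, "b": {"b"}, "c": {"b", "c"}}
```

The difficulty, as discussed above, lies not in the query but in choosing $V_s$ and the label sets so that their total size stays small.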

Recently, several approaches have been proposed to reduce the
construction time of $2$-hop labelings.  Schenkel {\em et al.} propose the HOPI
algorithm, which applies a divide-and-conquer strategy to compute the
$2$-hop labeling~\cite{hopiedbt}.  They reduce the $2$-hop labeling complexity
from $O(n^4)$ to $O(n^3)$, which is still very expensive for large
graphs.  Chen {\em et al.} propose a geometry-based algorithm to
produce a $2$-hop labeling~\cite{ChengYLWY06}.  Their algorithm does not require
computing the transitive closure, but it does not carry the
approximation bound on labeling size achieved by Cohen's approach.



\begin{table}
{\small
  \begin{tabular}{llll}\hline
&Query time&Index time&Index size\\ \hline
 Transitive Closure&$O(1)$&$O(nm)$\footnotemark[1]&$O(n^2)$\\
 Optimal Chain Cover\footnotemark[2]&$O(k)$& $O(nm)$ &$O(nk)$\\
 Optimal Tree Cover\footnotemark[3]&$O(n)$& $O(nm)$ &$O(n^2)$\\
 2-Hop\footnotemark[4]&$\tilde{O}(m^{1/2})$ & $O(n^4)$&$\tilde{O}(nm^{1/2})$\\
 HOPI\footnotemark[4]& $\tilde{O}(m^{1/2})$ & $O(n^3)$&$\tilde{O}(nm^{1/2})$\\
 Dual Labeling&$O(1)$& $O(n+m+t^3)$&$O(n+t^2)$\\
 Labeling+SSPI & $O(m-n)$ & $O(n+m)$ & $O(n+m)$ \\
 GRIPP & $O(m-n)$ & $O(n+m)$ & $O(n+m)$ \\
 \hline
  \end{tabular}
}

\caption{Complexity comparison}
  \label{tab:cmp}
\end{table}
\footnotetext[1]{$m$ is the number of edges; the complexity is $O(n^3)$ if using the Floyd-Warshall algorithm~\cite{CORMEN90}.}
\footnotetext[2]{$k$ is the width of the chain decomposition; query time can be improved to $O(\log k)$ (assuming binary search), in which case the index time becomes $O(mn+n^2\log n)$, which includes the cost of sorting.}
\footnotetext[3]{Query time can be improved to $O(\log n)$, in which case the index time becomes $O(mn+n^2\log n)$.}
\footnotetext[4]{The index size is still a conjecture.}

\subsection{Our Contribution}
\label{contribution}
Table~\ref{tab:cmp} shows the indexing and querying complexity of
different reachability approaches.  From this comparison
and several existing studies~\cite{Trissl07,Wang06,hopiedbt}, we can
see that even though the $2$-hop approach is theoretically appealing,
it is rather difficult to apply to very large graphs due to its
computational cost.  Meanwhile, since most large graphs are
rather sparse, tree-based approaches seem to provide a good starting
point for compressing the transitive closure and answering reachability queries.  Most
of the recent studies try to improve different aspects of the tree-based
approach~\cite{SIGMOD:AgrawalBJ:1989,Wang06,Chen05,Trissl07}.
Note that since any directed graph can be transformed into a DAG
by contracting each strongly connected component into a single vertex, and the DAG suffices to answer reachability queries,
we focus on DAGs in the rest of the paper.

Our study is motivated by a list of challenging issues which
tree-based approaches do not adequately address.  First of all, the
computational cost of finding a good tree cover can be rather
high.  For instance, it costs $O(nm)$ to extract
Agrawal's optimal tree cover~\cite{SIGMOD:AgrawalBJ:1989}
or Wang's Dual-Labeling tree~\cite{Wang06}.  Second, a tree cover
cannot represent some rather common types of DAGs well, for instance the
grid DAG~\cite{hopiedbt}, where each vertex in the graph links to its right
and upper neighbors.  For a $k\times k$ grid, a tree cover can
cover at most half of the edges, and the compressed transitive
closure is almost as big as the original one.  We believe the
difficulty here is that the strict tree structure is too limited to
express many different types of DAGs even when they are very sparse.
From another perspective, most of the existing methods which utilize
a tree cover are greatly affected by how many edges are left
uncovered.

Driven by these questions, in this paper we propose a novel graph
structure, referred to as a {\em path-tree}, to cover a DAG.  It is a
tree structure in which each node represents a path in the
original graph.  This potentially doubles our capability to cover
DAGs.  Given that many real-world graphs are very sparse, with
no more than twice as many edges as
vertices, the path-tree provides a better tool to cover the DAG.
In addition, we develop a labeling scheme in which each label has only
$3$ elements to answer reachability queries over the path-tree.  We
show that a good path-tree cover can be constructed in $O(m+n\log n)$ time.
Theoretically, we prove that the path-tree always compresses
the transitive closure at least as well as the
optimal tree cover and chain decomposition approaches.
Finally, we note that our approach can be combined with existing methods to
handle non-path-tree edges.
We have performed a detailed experimental evaluation on both real and synthetic datasets.  Our results show that the path-tree cover can significantly reduce the transitive closure size and improve query answering time.

The rest of the paper is organized as follows. 
In Section~\ref{pathtree}, we introduce the path-tree concept and an algorithm to construct a path-tree from the DAG. 
In Section~\ref{theory}, we investigate several optimality questions related to path-tree cover. 
In Section~\ref{experiments}, we present the experimental results. 
We conclude in Section~\ref{conc}. 



\comment{
We also introduce a heuristic, referred to as the {\em Fat-Node} technique,
to achieve fast transitive closure computation and reachability query
answering.  Essentially, the fat-node is the vertices in the DAG whose
transitive closure is bigger than a certain threshold.  We propose
approaches to treat these types of vertices separately to achieve the best
compression and computation results for the transitive closure.
}




\comment{
Two important lessens: 
1. The strict optimization is likely to involve computing the transitive closure. 
Recent works tries to utilize heuristics. 
What will be the good criteria for the underlying tree?
2. How to generalize the tree? 
   How about planar graph? Yang's argument. Extract the maximal and the maximal number of edges, and the cost of compression $O(nlogn)$. 
   Not much better than tree. 
3. What is the good strategy to handle the non-tree edge or uncovered edge?
   How this problem may relate with $2-HOP$? 

The lessen learns from $2$-Hop: some nodes are more important than other nodes. 
With respect to $2$-Hop, reduce the maximal number of transitive closure pairs...
}







\comment{
The major ideas to handle reachability query is to build indexes based on the reachability labeling. 

Simply speaking, each vertex in the graph is assign with certain labels and the reachability 
between any two vertices can be quickly determined based on their labels. 
  Several approaches have been proposed for this purpose, such as interval labeling (tree cover) and 2-hop labeling, etc. 
  However, due to the huge number of vertices in many real world graphs (some large graphs easily contains millions of vertices), the computational cost and/or the labeling (index) size are still too expensive for the existing methods to be practically usable. 
}