\section{Our Approach: Mquery}
\label{sec:approach}

%provide api to load the data. memory and file mode. automatically build the index and graph and compress it.
%methods to build the index and compress it
%methods to build the graph and compress it
%provide api to do the query
%api1. keywords and type to node. algorithm, function
%api2. two nodes, find the shortest paths, brief
%api3. two nodes, find all paths length < l, brief
%api4. a node, find all certain type of nodes within length, algo

%exp
%memory cost (certain api, index size, small data, large data)
%time cost (certain api, before and after compression? rdql?)
%case study (photo -> related person contact?)
%scenario

Mquery is a toolkit designed for efficient query processing and
search on mobile devices, which can benefit the development of
further mobile applications. Before running queries, the toolkit
first needs to load graph data. It provides APIs to load graph data
from the file system, a database, or the main memory of the device.
After loading the graph data, the toolkit constructs an inverted
index over the content information of the graph nodes. Since the raw
index is too large for mobile devices, a compression algorithm is
applied in our toolkit to greatly reduce the memory cost. After this
initialization step, the toolkit can process different kinds of
queries. In this work, we focus on the following four major APIs.

\begin{enumerate}
\item API1: Given a set of query keywords and a node type,
rank the nodes of such type based on the content information.
\item API2: Given two nodes, find the shortest path between them.
\item API3: Given two nodes and a length bound, find all paths between them within the bound.
\item API4: Given a node, a node type and a length bound, find all nodes of such type within the
bound.
\end{enumerate}

In the following sections, we first show how the compression
algorithm for the inverted index works in our toolkit. Then we
introduce the implementation of the four APIs mentioned above.
Finally, we present some interesting mobile applications that can be
easily implemented with our APIs.

\subsection{Index and Graph Compression}
There are many state-of-the-art methods for compressing an inverted
index. In our work, we want to strike a balance
between memory cost and time cost. After evaluating different
index compression techniques, we choose Simple9 coding
[reference, Inverted index using word-aligned binary codes] to
compress the inverted index. In the experimental section, we will
see that the index size decreases significantly while the
time cost for compression and decompression is trivial.

The key idea of Simple9 coding is to pack as many values as possible
into a 32-bit word. This is done by dividing each word into
4 status bits and 28 data bits, where the data bits can be
partitioned in 9 different ways. For example, if the next 7
values are all less than 16, then we can store them as 7 4-bit
values. Or if the next 3 values are less than 512, we can store
them as 3 9-bit values (leaving one data bit unused). Simple9
uses 9 ways to divide up the 28 data bits: 28
1-bit numbers, 14 2-bit numbers, 9 3-bit numbers (one bit
unused), 7 4-bit numbers, 5 5-bit numbers (three bits unused),
4 7-bit numbers, 3 9-bit numbers (one bit unused), 2 14-bit
numbers, or 1 28-bit number. The 4 status bits store which
of the 9 cases is used. Decompression can be optimized by
hardcoding each of the 9 cases using fixed bit masks, and
using a switch operation on the status bits to select the case.
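The packing scheme described above can be sketched as follows (a
minimal illustrative Python version, not the toolkit's actual code):

```python
# The nine layouts of the 28 data bits: (values per word, bits per value).
CASES = [(28, 1), (14, 2), (9, 3), (7, 4), (5, 5), (4, 7), (3, 9), (2, 14), (1, 28)]

def simple9_encode(values):
    """Pack non-negative integers (each < 2**28) into 32-bit words."""
    words, i = [], 0
    while i < len(values):
        for selector, (count, bits) in enumerate(CASES):
            chunk = values[i:i + count]
            # Pick the densest layout that fits the next `count` values.
            if len(chunk) == count and all(v < (1 << bits) for v in chunk):
                word = selector << 28               # 4 status bits
                for j, v in enumerate(chunk):
                    word |= v << (j * bits)         # 28 data bits
                words.append(word)
                i += count
                break
        else:
            raise ValueError("value does not fit in 28 bits")
    return words

def simple9_decode(words, n):
    """Unpack the first n values from a list of 32-bit words."""
    out = []
    for word in words:
        count, bits = CASES[word >> 28]             # read the status bits
        mask = (1 << bits) - 1
        for j in range(count):
            if len(out) < n:
                out.append((word >> (j * bits)) & mask)
    return out
```

A production decoder would replace the inner loop with nine hardcoded
cases selected by a switch on the status bits, as noted above.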

In the inverted index, each word $w_i$ has a sequence of data node ids $d_1, d_2, \ldots, d_{s_i}$. Before
compression, storing these ids requires $s_i$ integers. After packing these $s_i$ numbers with Simple9 coding,
the memory cost decreases significantly. However, we go a step further to increase the compression ratio.
We sort the sequence so that $d_1 < d_2 < \ldots < d_{s_i}$, and then store $d_1, d_2-d_1, \ldots, d_{s_i}-d_{s_i-1}$ instead of
the original sequence. Because the average value of the sequence becomes smaller, a 32-bit word is expected
to hold more values, so the compression ratio becomes higher.
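The gap transformation and its inverse can be written as (hypothetical
helpers, assuming a non-empty posting list):

```python
def to_gaps(ids):
    """Sort a (non-empty) posting list and store consecutive differences."""
    ids = sorted(ids)
    return [ids[0]] + [b - a for a, b in zip(ids, ids[1:])]

def from_gaps(gaps):
    """Invert to_gaps: a running sum recovers the sorted ids."""
    ids, total = [], 0
    for g in gaps:
        total += g
        ids.append(total)
    return ids
```

The gap sequence is then fed to the Simple9 encoder in place of the
raw ids.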

We also apply Simple9 coding to compress the graph storage. The graph is stored as adjacency lists, so if the graph
contains $n$ nodes, the storage is actually $n$ sequences of integers, which can be compressed in the same way.

\subsection{API1}
\label{secsub:index construction}

Once we have stored the graph and built the inverted index, APIs are provided to handle certain basic queries.
The first API finds the nodes related to a given query; it is intended for basic content-based
search. For each data node $d$, suppose its description is a set of words $D_d=\{w_1, w_2, \ldots\}$.
Given a query $q$, we support the following three scoring functions to rank the candidate nodes:

\begin{enumerate}
\item[f1] $s_d(q) = \Sigma_{w_i \in q}{tf_d(w_i)}$
\item[f2] $s_d(q) = \frac{\Sigma_{w_i \in q}{tf_d(w_i)tf_q(w_i)idf^2(w_i)}}{\sqrt{\Sigma_{w_i \in D_d}{(tf_d(w_i)idf(w_i))^2}}\sqrt{\Sigma_{w_i \in q}{(tf_q(w_i)idf(w_i))^2}}}$
\item[f3] $s_d(q) = \Sigma_{w_i \in q}{tf_d(w_i)idf(w_i)}$
\end{enumerate}

Here $tf_d(w)$ is the term frequency of word $w$ in node $d$, $tf_q(w)$ is its frequency in the query, and $idf(w)$ is the inverse document frequency [reference ].
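As an illustration, scoring function f3 might be implemented as
follows (a sketch with hypothetical data structures; the idf variant
shown is one common choice, and the exact definition follows the
cited reference):

```python
import math

def score_f3(query, description, doc_freq, n_nodes):
    """Rank score of one node under f3: sum of tf * idf over query words.

    query       -- list of query words
    description -- list of words in the node's description (D_d)
    doc_freq    -- word -> number of nodes containing it (hypothetical index)
    n_nodes     -- total number of nodes
    """
    score = 0.0
    for w in query:
        tf = description.count(w)                  # term frequency tf_d(w)
        df = doc_freq.get(w, 0)
        if tf and df:
            score += tf * math.log(n_nodes / df)   # idf(w) = log(N / df)
    return score
```

In practice the term frequencies come from the compressed inverted
index rather than a word list per node.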

\subsection{API2}
The function of this API is to find the shortest path between two nodes. Finding the shortest path is one
of the basic functions in graph querying, and it is especially useful for revealing how close two nodes
are in a structural sense. We assume that all edge weights in the graph are positive, so we
can use Dijkstra's algorithm to find the shortest path. In our implementation, we use a bidirectional Dijkstra
search to make it faster. Moreover, we provide a
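A bidirectional Dijkstra search of the kind we use can be sketched as
follows (an illustrative Python version that returns only the
distance; it assumes an undirected graph given as adjacency lists
with positive weights):

```python
import heapq

def bidirectional_dijkstra(adj, src, dst):
    """Shortest-path distance between src and dst, or None if unreachable.

    adj maps node -> list of (neighbor, weight); weights must be positive,
    and the graph is assumed undirected so the backward search reuses adj.
    """
    if src == dst:
        return 0
    INF = float("inf")
    dists = [{src: 0}, {dst: 0}]        # tentative distances: forward, backward
    done = [set(), set()]               # settled nodes per direction
    heaps = [[(0, src)], [(0, dst)]]
    best = INF                          # best src-dst distance seen so far
    while heaps[0] and heaps[1]:
        # Stop once the two frontiers cannot improve the meeting point.
        if heaps[0][0][0] + heaps[1][0][0] >= best:
            break
        # Expand the direction whose frontier is nearer.
        side = 0 if heaps[0][0][0] <= heaps[1][0][0] else 1
        d, u = heapq.heappop(heaps[side])
        if u in done[side]:
            continue                    # stale heap entry
        done[side].add(u)
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dists[side].get(v, INF):
                dists[side][v] = nd
                heapq.heappush(heaps[side], (nd, v))
            # If v is reached from both ends, it is a candidate meeting point.
            if v in dists[1 - side]:
                best = min(best, dists[side][v] + dists[1 - side][v])
    return best if best < INF else None
```

Recovering the path itself only requires keeping a predecessor map per
direction.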

\subsection{API3}
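API3, as defined above, enumerates all paths between two nodes whose
total length stays within a bound. A minimal sketch (a depth-first
search over simple paths; hop counts serve as lengths when all
weights are 1):

```python
def all_paths_within(adj, src, dst, max_len):
    """All simple paths from src to dst with total weight <= max_len.

    adj maps node -> list of (neighbor, weight). Paths stop at dst,
    i.e. dst appears only as the final node of each reported path.
    """
    paths = []
    def dfs(u, path, length):
        if u == dst:
            paths.append(list(path))
            return
        for v, w in adj.get(u, []):
            if v not in path and length + w <= max_len:
                path.append(v)
                dfs(v, path, length + w)
                path.pop()              # backtrack
    dfs(src, [src], 0)
    return paths
```

The length bound prunes the search early, which keeps the
exponential worst case manageable for the small bounds typical of
mobile queries.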

\subsection{API4}
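API4, as defined above, collects all nodes of a given type within a
length bound of a start node. A sketch using breadth-first search
over hop counts (the node-type map is a hypothetical structure; for
weighted bounds one would substitute a Dijkstra search with a
cutoff):

```python
from collections import deque

def typed_nodes_within(adj, node_type, src, wanted, max_hops):
    """Nodes of type `wanted` reachable from src within max_hops edges.

    adj maps node -> list of (neighbor, weight); node_type maps
    node -> type label. Hop counts are used as lengths.
    """
    seen = {src: 0}                 # node -> hop distance from src
    queue = deque([src])
    found = []
    while queue:
        u = queue.popleft()
        if u != src and node_type.get(u) == wanted:
            found.append(u)
        if seen[u] < max_hops:      # only expand inside the bound
            for v, _w in adj.get(u, []):
                if v not in seen:
                    seen[v] = seen[u] + 1
                    queue.append(v)
    return found
```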

\subsection{Application}

\subsubsection{app 1}

\subsubsection{app 2}

\subsubsection{app 3}

\subsubsection{app 4}
