\section{Proposed approach of Mquery}
\label{sec:approach}


%exp
%memory cost (certain api, index size, small data, large data)
%time cost (certain api, before and after compression? rdql?)
%case study (photo -> related person contact?)
%scenario


Mquery is a toolkit designed for efficient query and search on
mobile devices, which will benefit the development of further
mobile applications. To run queries, the toolkit first needs to
load graph data. It provides APIs to load graph data from the file
system, a database, or the device's main memory. After loading the
graph data, the toolkit constructs an inverted index over the
content information of the nodes in the graph. Since the raw index
is too large for mobile devices, a compression algorithm is applied
to greatly decrease the memory cost. After this initialization
step, the toolkit can process different kinds of queries. In our
work, we mainly focus on the following four major APIs.

\begin{itemize}
\item Key Query API: Given a set of query keywords $\{w_1,\ldots,w_n\}$ and a node type $t$,
rank the nodes of type $t$ by the relevance of their descriptions to the keywords.

\item Neighbor Query API: Given a node $v_s$, a node type $t$ and a length
bound $l$, find all nodes $v_t$ of type $t$ such that the
shortest-path distance from $v_s$ to $v_t$ is less than $l$.

\item Shortest Path Query API: Given two nodes $v_s$ and $v_t$, find the shortest path from $v_s$ to $v_t$.

\item Subgraph Query API: Given two nodes $v_s$, $v_t$ and a length bound $l$, find all paths from $v_s$ to $v_t$ whose length is less than $l$.
\end{itemize}

In the following sections, we first show how the compression
algorithm for the inverted index works in our toolkit. Then we
introduce the implementation of the four APIs mentioned above.
Finally, we present some interesting mobile applications that can
be easily implemented with our APIs.

\subsection{Index and Graph Compression}
There are many state-of-the-art methods for compressing an inverted
index. For example, \cite{Zhang:performance} implements
Variable-Byte coding, Simple9 (S9) coding, Rice coding, PForDelta
coding, etc. In our work, we want to strike a balance between memory
cost and time cost. After evaluating different index compression
techniques, we choose Simple9 coding \cite{Anh:inverted}
to compress the inverted index. In the experimental section, we will
see that the index size decreases significantly while the
time cost for compression and decompression is trivial.

The key idea of Simple9 coding is to pack as many values as possible
into a 32-bit word. Each word is divided into 4 status
bits and 28 data bits. Simple9 uses nine ways to divide up the 28
data bits: twenty-eight 1-bit numbers, fourteen 2-bit numbers, nine
3-bit numbers (one bit unused), seven 4-bit numbers, five 5-bit numbers
(three bits unused), four 7-bit numbers, three 9-bit numbers (one
bit unused), two 14-bit numbers, or one 28-bit number. The 4 status
bits record which of the nine cases is used. Decompression can be
optimized by hardcoding each of the nine cases with fixed bit
masks.
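As a concrete illustration, the packing scheme above can be sketched as follows. This is an illustrative Python sketch, not the toolkit's actual implementation; in particular, placing the 4 status bits at the low end of the word is our own layout choice.

```python
# Minimal Simple9 sketch: each 32-bit word carries a 4-bit selector
# plus 28 data bits split into equal-width fields, chosen from the
# nine (count, width) cases below.

MODES = [(28, 1), (14, 2), (9, 3), (7, 4), (5, 5),
         (4, 7), (3, 9), (2, 14), (1, 28)]

def s9_pack(values):
    """Greedily pack non-negative integers into 32-bit words,
    always trying the densest case first."""
    words, i = [], 0
    while i < len(values):
        for selector, (count, width) in enumerate(MODES):
            chunk = values[i:i + count]
            if len(chunk) == count and all(v < (1 << width) for v in chunk):
                word = selector          # status bits (layout is a choice)
                for j, v in enumerate(chunk):
                    word |= v << (4 + j * width)
                words.append(word)
                i += count
                break
        else:
            raise ValueError("value needs more than 28 bits")
    return words

def s9_unpack(words, n):
    """Decode the first n integers back from the packed words."""
    out = []
    for word in words:
        count, width = MODES[word & 0xF]
        mask = (1 << width) - 1
        for j in range(count):
            out.append((word >> (4 + j * width)) & mask)
    return out[:n]
```

For the figure's example, the four numbers 13, 20, 50, 100 all fit in 7 bits, so the greedy packer selects the four-7-bit case and emits a single word.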

\begin{figure}[h]
\centering
  \includegraphics[width=8.5cm]{Figures/s9.eps}\\
  \caption{An example of Simple9 coding}\label{s9}
\end{figure}

Figure \ref{s9} shows an example of compressing the four numbers 13, 20,
50 and 100. Since the maximum number 100 is less than $2^7$
but larger than $2^5$, we cannot store 100 using just 5
bits. To maximize the number of integers stored in one word, Simple9
chooses the case of four 7-bit numbers to pack
them. The storage size thus decreases from four 32-bit
integers to one 32-bit integer.

In the inverted index for the content information, each word $w_i$
has a sequence of data node ids $d_1, d_2, \ldots, d_{s_i}$, meaning
that $w_i$ is contained in the content information of data node
$d_j$, $j \in \{1, 2, \ldots, s_i\}$. Before compression, storing
these ids requires $s_i$ integers. After packing these $s_i$ numbers
with Simple9 coding, the memory cost decreases significantly.
We then go a step further to increase the compression ratio.
We sort the sequence so that $d_1 < d_2 <
\ldots < d_{s_i}$, and store $d_1, d_2-d_1, \ldots,
d_{s_i}-d_{s_i-1}$ instead of the original sequence. Because the
average value of the sequence becomes smaller, a 32-bit word is
expected to store more values, so the compression ratio becomes
higher.
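The gap transform described above can be sketched as follows (illustrative Python with hypothetical function names, not the toolkit's API):

```python
# Gap-encoding sketch: store the first id plus successive differences.
# Smaller values need fewer bits per field, so a Simple9-style packer
# fits more of them into each 28-bit payload.

def to_gaps(postings):
    """Sort node ids and replace them by first id + deltas."""
    postings = sorted(postings)
    return [postings[0]] + [b - a for a, b in zip(postings, postings[1:])]

def from_gaps(gaps):
    """Recover the sorted id sequence by prefix-summing the gaps."""
    out, acc = [], 0
    for g in gaps:
        acc += g
        out.append(acc)
    return out
```

For instance, the ids $1000, 1003, 1010, 1020$ become $1000, 3, 7, 10$; three of the four stored values now fit in 4 bits or fewer.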

We also apply Simple9 coding to compress the graph storage. In our
toolkit, the graph is stored as adjacency lists, so if the graph
contains $n$ nodes, the storage is actually $n$ sequences of integers,
which can be compressed in the same way.

\subsection{Key Query API}

Once we have stored the graph and built the inverted index, four APIs
are implemented in our toolkit to process basic queries. The
first API discovers the nodes (objects) related to a given query
based on their content information.


It is intended for basic content-based search. Suppose the
description of each node is a set of words. Let $W = \{w_1, w_2,
\ldots, w_n\}$ be the set of all words which appear in the
descriptions. Given a query $q$, which is also a set of keywords, we
propose three functions to calculate the score of each description
$d$:
\begin{equation*}
s_d^1(q) = \Sigma_{w_i \in q}{tf_d(w_i)}
\end{equation*}
\begin{equation*}
s_d^2(q) = \frac{\Sigma_{w_i \in q,w_i \in
d}{tf_d(w_i)tf_q(w_i)idf^2(w_i)}}{\sqrt{\Sigma_{w_i \in
d}{(tf_d(w_i)idf(w_i))^2}}\sqrt{\Sigma_{w_i \in
q}{(tf_q(w_i)idf(w_i))^2}}}
\end{equation*}
\begin{equation*}
s_d^3(q) = \Sigma_{w_i \in q}{tf_d(w_i)idf(w_i)}
\end{equation*}
The notations are defined as follows:

\begin{itemize}
\item {$tf(w_i)$} is the term frequency of word $w_i$; $tf_d(w_i)$ denotes
the term frequency of word $w_i$ in the description of node $d$, and
$tf_q(w_i)$ its frequency in the query $q$.
\item {$idf(w_i)$} is the inverse document frequency of word $w_i$. It can be
thought of as the usefulness in bits of a keyword to a keyword
retrieval system \cite{Kenneth:inverse}. Its value is
$\log{N/N(w_i)}$, where $N$ is the total number of nodes and $N(w_i)$
is the number of nodes whose descriptions contain word $w_i$.
\end{itemize}

With the help of the inverted index, the scores of all three
functions can be computed in $O(\Sigma_{w_i \in q}{N(w_i)})$ time,
where $N(w_i)$ is as in the definition of $idf(w_i)$.

$s_d^1(q)$ is the sum of the term frequencies. It is the simplest and
fastest function, but its scoring quality is not as good as that of
$s_d^2(q)$ and $s_d^3(q)$. $s_d^2(q)$ calculates the cosine similarity of the
tf-idf vectors \cite{Kenneth:inverse} of the description and the
query. It gives the most reasonable scoring results among the three
functions but costs relatively more time. $s_d^3(q)$ strikes a balance
between scoring quality and time cost.
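The three scoring functions can be sketched on a toy dataset as follows. All data, names and the dictionary-based index here are illustrative assumptions, not the toolkit's actual API.

```python
import math
from collections import Counter

# Toy node descriptions standing in for the graph's content information.
docs = {
    "photo1": ["beach", "sunset", "family"],
    "photo2": ["beach", "party"],
    "note1":  ["meeting", "family"],
}
N = len(docs)
tf = {d: Counter(ws) for d, ws in docs.items()}            # tf_d(w)
df = Counter(w for ws in docs.values() for w in set(ws))   # N(w)
idf = {w: math.log(N / df[w]) for w in df}                 # log(N / N(w))

def s1(q, d):
    """Sum of term frequencies."""
    return sum(tf[d][w] for w in q)

def s3(q, d):
    """Sum of tf-idf weights."""
    return sum(tf[d][w] * idf.get(w, 0.0) for w in q)

def s2(q, d):
    """Cosine similarity of the tf-idf vectors of q and d."""
    qtf = Counter(q)
    num = sum(tf[d][w] * qtf[w] * idf.get(w, 0.0) ** 2 for w in q)
    nd = math.sqrt(sum((tf[d][w] * idf[w]) ** 2 for w in tf[d]))
    nq = math.sqrt(sum((qtf[w] * idf.get(w, 0.0)) ** 2 for w in qtf))
    return num / (nd * nq) if nd and nq else 0.0
```

For the query `["beach", "family"]`, all three functions rank `photo1` above `photo2` and `note1`, each of which matches only one keyword.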

With this API, we can implement basic keyword-based search
applications. For example, photos may carry tags as
their descriptions, so we can search for the photos we want based on
their tags. Similarly, we can find short messages or calendar events
by typing in related keywords.

\subsection{Neighbor Query API}
Given a starting node $v_s$, a node type $t$ and a length bound $l$,
this API finds the set of all nodes
$V_t=\{v_t \mid v_t\in V, d(v_s,v_t) < l\}$ of type $t$, where
$d(v_s,v_t)$ denotes the shortest-path distance between $v_s$ and $v_t$.
This API is designed to find certain neighbors of a given node,
which is a quite common query in structural recommendation and
search, and can support navigational graph queries. For example,
it helps a lot when we want to find data of a certain type
related to a photo, a short message or a calendar event.
We apply Dijkstra's algorithm and implement it with a heap,
because the graph is generally sparse and we store it as
adjacency lists. Our implementation differs slightly from the
original Dijkstra's algorithm because we do not need the shortest path
between a specific pair of nodes: the algorithm terminates as soon as the
minimum distance label is no less than the bound $l$.

There is also a special case: when all edge weights are 1, we use
breadth-first search (BFS) instead of Dijkstra's algorithm. BFS also
finds the correct $V_t$ in this case, while its complexity is lower,
so the time cost can be reduced.
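The bounded traversal described above can be sketched roughly as follows. This is an illustrative Python sketch; the adjacency-dict graph layout and the node-type map are our own assumptions, not the toolkit's data structures.

```python
import heapq

def neighbors_within(graph, types, v_s, t, l):
    """Nodes of type t whose shortest distance from v_s is < l.
    graph: {node: [(neighbor, weight), ...]}, types: {node: type}."""
    dist = {v_s: 0}
    heap = [(0, v_s)]
    result = set()
    while heap:
        d, v = heapq.heappop(heap)
        if d >= l:
            break              # every remaining label is >= l: terminate early
        if d > dist.get(v, float("inf")):
            continue           # stale heap entry
        if types.get(v) == t and v != v_s:
            result.add(v)
        for u, w in graph.get(v, []):
            nd = d + w
            if nd < dist.get(u, float("inf")):
                dist[u] = nd
                heapq.heappush(heap, (nd, u))
    return result
```

The only change from textbook Dijkstra is the early `break` once the smallest tentative distance reaches the bound.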


\subsection{Shortest Path Query API}
This API finds the shortest path between two nodes $v_s$ and $v_t$.
Finding shortest paths is one of the basic functions in graph
querying, and is especially useful for revealing how structurally
close two nodes are. We assume that all edge weights in the graph
are positive, so we can use Dijkstra's algorithm to find the
shortest path. In our implementation, we use bi-directional Dijkstra
to make it faster. Moreover, we provide an approximation algorithm
to deal with really large graphs. The key idea is to first generate
a much smaller candidate subgraph and then run Dijkstra's algorithm
on it. We carefully grow the neighborhoods around the two nodes:
initially the candidate subgraph is empty, and each time we select a
node and add it to the subgraph. The expansion terminates when a
stopping condition is reached. Suppose the set of nodes expanded
from $v_s$ is $V_s$ and the set of nodes expanded from $v_t$ is
$V_t$. We set a connectivity threshold $\theta$, and the stopping
condition is $|V_s \cap V_t| \ge \theta$. Algorithm
\ref{algorithm:shortest path} gives the high-level pseudocode.

\incmargin{1.8em}
\restylealgo{boxruled} \linesnumbered
\begin{algorithm}
 \caption{ Shortest Path Query  \label{algorithm:shortest path}}
\SetLine \KwIn{ A data graph $G=(V,E)$, two nodes $v_s$ and $v_t$,
threshold $\theta$} \KwOut{An approximate shortest path from $v_s$
to $v_t$} \BlankLine

 $V_s\leftarrow \{v_s\}, V_t\leftarrow\{v_t\}$

\While{$|V_s \cap V_t| < \theta$}{

// Expand $V_s$ and $V_t$

pick a node $u \notin V_s$ such that $\exists v \in V_s$, $(v, u)\in E$, and add $u$ to $V_s$

pick a node $u' \notin V_t$ such that $\exists v \in V_t$, $(v, u')\in E$, and add $u'$ to $V_t$

}

Run bi-directional Dijkstra on the subgraph of $G$ induced by $V_s
\cup V_t$

\end{algorithm}

\decmargin{1.8em}

\normalsize


There is also a special case: when all edge weights are 1, we use
breadth-first search to find the shortest path instead of Dijkstra's
algorithm. Since the complexity of BFS is lower, the time cost can
be reduced.
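Algorithm \ref{algorithm:shortest path} can be sketched roughly as follows. This is illustrative Python, not the toolkit's code: for simplicity the final search is a plain rather than bi-directional Dijkstra, and the adjacency-dict graph layout is an assumption.

```python
import heapq

def approx_shortest_path(graph, v_s, v_t, theta):
    """Grow V_s and V_t until they share >= theta nodes, then run
    Dijkstra restricted to the candidate subgraph.
    graph: {node: [(neighbor, weight), ...]} (treated as undirected here)."""
    V_s, V_t = {v_s}, {v_t}
    frontiers, sets = [[v_s], [v_t]], [V_s, V_t]
    while len(V_s & V_t) < theta:
        expanded = False
        for V, frontier in zip(sets, frontiers):
            while frontier:                      # add one new adjacent node
                v = frontier[0]
                new = next((u for u, _ in graph.get(v, []) if u not in V), None)
                if new is None:
                    frontier.pop(0)              # this node is exhausted
                else:
                    V.add(new)
                    frontier.append(new)
                    expanded = True
                    break
        if not expanded:
            break                                # nothing left to expand
    allowed = V_s | V_t
    # Dijkstra on the induced candidate subgraph.
    dist, prev = {v_s: 0}, {}
    heap = [(0, v_s)]
    while heap:
        d, v = heapq.heappop(heap)
        if v == v_t:                             # reconstruct the path
            path = [v]
            while path[-1] != v_s:
                path.append(prev[path[-1]])
            return path[::-1], d
        if d > dist.get(v, float("inf")):
            continue
        for u, w in graph.get(v, []):
            if u in allowed:
                nd = d + w
                if nd < dist.get(u, float("inf")):
                    dist[u], prev[u] = nd, v
                    heapq.heappush(heap, (nd, u))
    return None, float("inf")
```

A larger $\theta$ grows a bigger candidate subgraph, trading more search time for a better chance of containing the true shortest path.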

\subsection{Subgraph Query API}
This API finds all paths between $v_s$ and
$v_t$ whose lengths are less than a length bound $l$. It is
actually an extension of the Shortest Path Query. Finding such paths is
meaningful when we want to explore more information about the
connection between the two nodes. Depth-first search (DFS) alone suffices to
find all required paths; however, the time cost of DFS
without any optimization is unbearably high. To solve this problem,
we use our Neighbor Query API to prune the search. We
first apply the Neighbor Query API to find the set of nodes $V_t = \{v \mid
d(v,v_t)<l\}$, where $d(v,v_t)$ denotes the shortest-path distance between
$v$ and $v_t$. The set $V_t$ and the values $dist(v)=d(v, v_t)$ for each $v\in
V_t$ serve as lower bounds for pruning during the DFS. In this way, we can
cut many useless search branches, and the algorithm becomes much
faster. The pseudocode is given in Algorithm
\ref{algorithm:subgraph}.

\incmargin{1.8em} \restylealgo{boxruled} \linesnumbered
\begin{algorithm}
 \caption{ Subgraph Query  \label{algorithm:subgraph}}
\SetLine \KwIn{ A data graph $G=(V,E)$, two nodes $v_s$ and $v_t$,
bounded length $l$} \KwOut{All paths from $v_s$ to $v_t$ whose
lengths are less than $l$} \BlankLine

// Use the Neighbor Query API to find $V_t$ and $dist(v)$ for each $v\in
V_t$ first.

// Then start DFS with $node = v_s$ and $length = 0$

\SetKwFunction{KwFn}{DFS} \KwFn{node, length}

\Begin {

\If {$node = v_t$} {report the current path}

\ForEach{$v \in child(node)$} {

\If {$length+weight(node,v)+dist(v)<l$} {

\KwFn{$v$, $length+weight(node,v)$}

}

}

}
\end{algorithm}

\decmargin{1.8em}

\normalsize
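Algorithm \ref{algorithm:subgraph} can be sketched roughly as follows. This is illustrative Python, not the toolkit's code: the distance table is computed here with a direct Dijkstra from $v_t$ (on an undirected toy graph) instead of the Neighbor Query API, and we restrict enumeration to simple paths, which the pseudocode leaves implicit.

```python
import heapq

def all_bounded_paths(graph, v_s, v_t, l):
    """All simple paths from v_s to v_t with total length < l,
    pruned by the exact distance-to-target lower bound.
    graph: {node: [(neighbor, weight), ...]} (treated as undirected)."""
    # Step 1: dist(v) = shortest distance from v to v_t.
    dist = {v_t: 0}
    heap = [(0, v_t)]
    while heap:
        d, v = heapq.heappop(heap)
        if d > dist.get(v, float("inf")):
            continue
        for u, w in graph.get(v, []):
            if d + w < dist.get(u, float("inf")):
                dist[u] = d + w
                heapq.heappush(heap, (d + w, u))

    # Step 2: DFS, cutting any branch that cannot stay below l.
    paths = []
    def dfs(v, length, path):
        if v == v_t:
            paths.append(path[:])          # report the current path
        for u, w in graph.get(v, []):
            if u in path:                  # keep paths simple (our assumption)
                continue
            if length + w + dist.get(u, float("inf")) < l:
                path.append(u)
                dfs(u, length + w, path)
                path.pop()
    dfs(v_s, 0, [v_s])
    return paths
```

The pruning test `length + w + dist(u) < l` is exactly the bound in Algorithm \ref{algorithm:subgraph}: since $dist(u)$ is the true shortest distance from $u$ to $v_t$, no path discarded this way could have finished under the bound.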
