\section{Experimental Results}
\label{sec:exp}

\subsection{Cost Analysis}
In graph query processing, the major concern is the query response
time. In our work, the query response time is defined as
\begin{equation}\label{response_time}
T_{response} = T_{search} + T_{rdql} + T_{ranking}
\end{equation}
where $T_{search}$ is the time spent in the search step, $T_{rdql}$
is the time spent by the standard RDQL query processor after the
reconstruction of graph query $q$, and $T_{ranking}$ is the time
required to rank the final result set. Usually, the RDQL processing
time dominates Equation~\ref{response_time}: in our experiments,
$T_{rdql}$ is roughly 8 to 9 times $T_{search}$, while the ranking
time remains almost constant for a given query. Thus, the key to
improving query response time is to minimize $T_{rdql}$, and one way
to reduce it is to shrink the query size, i.e., to substitute as many
frequent subgraphs in the query as possible by their id nodes.


%The proposed approach for social influence analysis is very general and can be applied to analyze different kinds of networking data.
In this section, we validate the efficiency and effectiveness of our
proposed MQuery through experiments. Our experiments demonstrate that:
\begin{enumerate}
\item[1.] For different data sizes, the inverted index and the graph
storage cost less memory than their uncompressed counterparts.
\item[2.] For different data sizes, our toolkit costs almost the same
time on the basic operations as the uncompressed version.
\item[3.] The applications mentioned in Section XX can be implemented
effectively and efficiently with our toolkit.
\end{enumerate}

We apply our MQuery to two kinds of datasets: a photo dataset and an
SMS dataset.

 \begin{enumerate}
 \item[1.]
 \item[2.]
 \end{enumerate}

All our experiments are performed on an Intel PC (2.2GHz CPU, 2GB
memory) running Ubuntu 9.10, except one performed on a Nokia N900
(600MHz CPU, 256MB RAM) running the Maemo Linux operating system.
The programs are written in C/C++ and compiled with gcc/g++.
 %and another is to recommend tags for
 %web pages (del.icio.us\footnote{\url{http://delicious.com/}})

\subsection{Memory Cost of the MQuery}
Our toolkit costs less memory because we apply compression to the
storage. In this section, we present the memory cost of MQuery on the
two datasets. To explore and demonstrate the performance more
carefully, we do not conduct experiments on the whole datasets only.
Instead, we randomly sample subsets of the data so that we can test
on different data sizes. For example, our first dataset contains more
than ten thousand photos; we randomly select 100, 500, 1000, 1500,
$\ldots$, 9500, 10000 photos and record the memory cost before and
after compression.
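The sampling step above can be sketched as follows. This is a minimal
illustration, not the paper's code: the function name
\texttt{sample\_subset} and the fixed seed are our assumptions.

```cpp
#include <algorithm>
#include <cassert>
#include <iterator>
#include <numeric>
#include <random>
#include <vector>

// Draw a uniform random subset of k item ids out of n, mirroring the
// experimental setup (subsets of 100, 500, ..., 10000 photos).
// The function name and seeding are illustrative, not from the paper.
std::vector<int> sample_subset(int n, int k, unsigned seed) {
    std::vector<int> ids(n);
    std::iota(ids.begin(), ids.end(), 0);   // ids 0, 1, ..., n-1
    std::mt19937 gen(seed);                 // fixed seed => repeatable runs
    std::vector<int> subset;
    subset.reserve(k);
    // std::sample (C++17) picks k elements without replacement,
    // preserving their relative (here: ascending) order.
    std::sample(ids.begin(), ids.end(), std::back_inserter(subset), k, gen);
    return subset;
}
```

Fixing the seed makes each subset reproducible across the compressed
and uncompressed runs, so both measurements see the same data.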

To exclude other factors that may influence the memory footprint and
to show the memory performance more clearly, we first report the
index size alone, before and after compression. The index size is
measured by the number of 32-bit integers needed to store the whole
inverted index.
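The text does not name the compression scheme, so the following is
only a plausible sketch: delta gaps plus variable-byte coding, a
standard way to shrink a sorted inverted-index posting list. The
function names are ours, not the toolkit's API.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Delta-encode a sorted posting list, then pack each gap with
// variable-byte coding: 7 payload bits per byte, with the high bit
// set on the final byte of each value.
std::vector<uint8_t> compress_postings(const std::vector<uint32_t>& ids) {
    std::vector<uint8_t> out;
    uint32_t prev = 0;
    for (uint32_t id : ids) {
        uint32_t gap = id - prev;   // gaps are small for dense lists
        prev = id;
        while (gap >= 128) {
            out.push_back(static_cast<uint8_t>(gap & 0x7F));
            gap >>= 7;
        }
        out.push_back(static_cast<uint8_t>(gap | 0x80)); // terminator byte
    }
    return out;
}

std::vector<uint32_t> decompress_postings(const std::vector<uint8_t>& bytes) {
    std::vector<uint32_t> ids;
    uint32_t prev = 0, value = 0;
    int shift = 0;
    for (uint8_t b : bytes) {
        if (b & 0x80) {             // last byte of this gap
            value |= static_cast<uint32_t>(b & 0x7F) << shift;
            prev += value;
            ids.push_back(prev);
            value = 0;
            shift = 0;
        } else {
            value |= static_cast<uint32_t>(b) << shift;
            shift += 7;
        }
    }
    return ids;
}
```

Most gaps in a dense list fit in one byte instead of a full 32-bit
integer, which is consistent with the roughly three-fold size
reduction reported below.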

Two figures

Similarly, the following figures compare the graph size before and
after compression. The graph size is measured by the number of
integers needed to store the whole graph.
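Counting graph storage in integers suggests a flat adjacency layout.
As one hypothetical illustration (the paper does not describe its
layout), a compressed-sparse-row representation stores an $n$-node,
$m$-edge graph in exactly $n + 1 + m$ integers:

```cpp
#include <cassert>
#include <cstdint>
#include <utility>
#include <vector>

// CSR sketch: all adjacency lists are flattened into one edge array,
// with an offset array marking where each node's neighbors begin.
// Names (CSRGraph, build_csr) are illustrative only.
struct CSRGraph {
    std::vector<uint32_t> offsets;  // offsets[v]..offsets[v+1] index into edges
    std::vector<uint32_t> edges;    // concatenated neighbor lists
};

CSRGraph build_csr(uint32_t n,
                   const std::vector<std::pair<uint32_t, uint32_t>>& edge_list) {
    CSRGraph g;
    g.offsets.assign(n + 1, 0);
    for (const auto& e : edge_list) ++g.offsets[e.first + 1];  // out-degrees
    for (uint32_t v = 0; v < n; ++v)
        g.offsets[v + 1] += g.offsets[v];                      // prefix sums
    g.edges.resize(edge_list.size());
    std::vector<uint32_t> cursor(g.offsets.begin(), g.offsets.end() - 1);
    for (const auto& e : edge_list)
        g.edges[cursor[e.first]++] = e.second;                 // scatter edges
    return g;
}
```

The neighbor ids inside each list could then be gap-compressed the
same way as the posting lists, which is one way the reported graph
shrinkage could arise.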

Two figures

In addition, the total memory cost shows that the overall memory
consumption decreases on both datasets.

a chart

From the figures above, we can see that on average both the index
size and the graph size shrink to about one third of their original
sizes after compression. From the charts, we can also see that the
compression saves about XXXK (XX percent) of the memory needed to
store the index and the graph.

\subsection{Time Cost of the MQuery}

Though compression successfully decreases the memory cost, does the
additional time spent on compression and decompression make the
overall time cost unbearable? In this section, we verify that
compression increases the time cost only slightly, and that all APIs
run at a very low time cost, which ensures that using our APIs will
not hurt the overall efficiency.

First, we show the time cost of storing the graph and building the
index. We conduct the experiment on both the PC and the N900 with the
photo dataset. From the table we can see that compressing the index
and the graph accounts for XXms on the PC and XXXms on the N900. The
compression time is about XX percent of the original time, which we
consider acceptable.

A table

Second, we execute API1 to evaluate the time cost of decompressing
the inverted index. Because a single execution of API1 takes so
little time that it is hard to measure accurately, we execute the
same query 1000 times. The following table shows the experimental
result. From the table we can see that the additional time needed for
decompression accounts for only XX percent of the total cost, so the
performance is also acceptable.

A table
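The amortized-timing methodology above can be sketched in a few
lines. The helper name \texttt{avg\_micros} is ours; it is not part
of the toolkit's API.

```cpp
#include <cassert>
#include <chrono>

// A single API call is below clock resolution, so we amortize:
// run the same call `reps` times and divide, as the experiments
// do with 1000 repetitions of the same query.
template <typename F>
double avg_micros(F&& call, int reps) {
    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < reps; ++i) call();
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::micro>(t1 - t0).count() / reps;
}
```

Using \texttt{steady\_clock} rather than the wall clock avoids
skew from clock adjustments during the run.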

Similarly, we execute API3 to evaluate the time cost of decompressing
the graph, again executing the same query 100 times. Table XX shows
that decompression likewise accounts for only a small fraction of the
time. Therefore, compression performs well on both memory and time,
and improves our toolkit.

Finally, Table XX, which includes the time cost of each API, shows
that every API is time efficient. We choose the photo dataset and,
for each API, randomly generate 1000 queries and measure the
execution time. From the average time cost, we see that every API
finishes within Xms.

A table

\subsection{Performance of Application}
We have proposed several applications that can be implemented by
composing our APIs. Here we demonstrate that our toolkit helps
applications run effectively and efficiently.
