%!TEX root = main.tex
\begin{table}[t]
    \centering
    \caption{All parameter settings used in the experiments.}
    \label{tab:Parameter}
    \begin{tabular}{|c|l|l|l|l|l|l|}
    \hline
    Parameters & \multicolumn{6}{c|}{Pool of Values} \\
    \hline
    Datasets & \multicolumn{3}{c}{Twitter} & \multicolumn{3}{|c|}{News} \\
    \hline
    \# of users & \underline{\textbf{1M}} & 5M & 10M & \underline{\textbf{0.1M}} & 0.5M & 1M \\
    \ max vertex degree & \underline{\textbf{0.7M}} & 0.7M & 0.7M & \underline{\textbf{16K}} & 29K & 30K \\
    \ average vertex degree & \underline{\textbf{81.6}} & 82.5 & 46.1 & \underline{\textbf{9.2}} & 6.8 & 7.0 \\
    \hline
    \# of records & \multicolumn{3}{c}{\underline{\textbf{10M}},15M,20M} & \multicolumn{3}{|c|}{\underline{\textbf{0.5M}},1M,5M} \\
    \hline
    keyword frequency & \multicolumn{6}{c|}{low,medium,\underline{\textbf{high}}} \\
    \hline
    degree of query user & \multicolumn{6}{c|}{low,medium,high} \\
    \hline
    top-k & \multicolumn{6}{c|}{1,\underline{\textbf{5}},10,15,$\ldots$,50} \\
    \hline
    dimension  & \multicolumn{6}{c|}{\underline{\textbf{(0.1,0.1,0.1)}}(0.1,0.3,0.5)(0.1,0.5,0.3)(0.3,0.1,0.5)} \\
    coefficient ($\alpha,\beta,\gamma$)  &
    \multicolumn{6}{c|}{(0.3,0.5,0.1)(0.5,0.1,0.3)(0.5,0.3,0.1)} \\
    \hline
    \end{tabular}
\end{table}
%
\begin{figure*}[t]
\centering
\begin{subfigure}[b]{0.24\textwidth}
  \centering
  \includegraphics[width=\textwidth]{pics/tweet_frequency}
  \caption{Vary frequency: Twitter}
  \label{fig:tweet_frequency}
\end{subfigure}%
\hspace{0.4mm}
\begin{subfigure}[b]{0.24\textwidth}
  \centering
  \includegraphics[width=\textwidth]{pics/tweet_user}
  \caption{Vary user degree: Twitter}
  \label{fig:tweet_user}
\end{subfigure}%
\hspace{0.4mm}
\begin{subfigure}[b]{0.24\textwidth}
  \centering
  \includegraphics[width=\textwidth]{pics/tweet_topk}
  \caption{Vary top-k: Twitter}
  \label{fig:tweet_topk}
\end{subfigure}%
\hspace{0.4mm}
\begin{subfigure}[b]{0.24\textwidth}
  \centering
  \includegraphics[width=\textwidth]{pics/tweet_param}
  \caption{Vary parameters: Twitter}
  \label{fig:tweet_param}
\end{subfigure}
% twitter above
\\
\begin{subfigure}[b]{0.24\textwidth}
  \centering
  \includegraphics[width=\textwidth]{pics/news_frequency}
  \caption{Vary frequency: News}
  \label{fig:news_frequency}
\end{subfigure}%
\hspace{0.4mm}
\begin{subfigure}[b]{0.24\textwidth}
  \centering
  \includegraphics[width=\textwidth]{pics/news_user}
  \caption{Vary user degree: News}
  \label{fig:news_user}
\end{subfigure}%
\hspace{0.4mm}
\begin{subfigure}[b]{0.24\textwidth}
  \centering
  \includegraphics[width=\textwidth]{pics/news_topk}
  \caption{Vary top-k: News}
  \label{fig:news_topk}
\end{subfigure}%
\hspace{0.4mm}
\begin{subfigure}[b]{0.24\textwidth}
  \centering
  \includegraphics[width=\textwidth]{pics/news_param}
  \caption{Vary parameters: News}
  \label{fig:news_param}
\end{subfigure}
\vspace{1mm}
\caption{Performance for various settings}
\label{fig:full_data}
\vspace{-5mm}
\end{figure*}


\section{Experiment Results}\label{sec:ExperimentResult}
We implement the proposed solution on a CentOS server (Intel i7-3820 3.6GHz CPU with 60GB RAM) and compare it with the baseline solutions on two large yet representative real-world datasets from SNAP\footnote{http://snap.stanford.edu/}: Twitter and Memetracker. The original Twitter dataset contains 17M users and 476M tweets. Memetracker is an online news dataset that contains 9M media and 96M records. Twitter encapsulates a large underlying social graph but short text (the average number of distinct non-stopword keywords in a tweet is 7); Memetracker has a smaller social graph but is rich in text (the average number of distinct non-stopword keywords in a news record is 30). These datasets with contrasting characteristics are used to test our proposed solution. Since both raw social graphs contain many isolated components, we sample the users that form a connected component to demonstrate the effectiveness of our solution. We then filter the documents/tweets based on the sampled users, resulting in the datasets used in our experiments. Table \ref{tab:Parameter} lists all the parameters used in our experiments; the underlined bold values denote the default settings unless specified otherwise. For scalability tests, we draw three samples of the social graph and three samples of the text documents for each dataset. For query keywords, we randomly sample queries with 1, 2 or 3 keywords. Four factors may impact the overall performance: keyword frequency, query user, top-k and the coefficients of the different dimensions in the ranking function (Equation \ref{eq:RankingFunction}).

As no existing work supports searching along all three dimensions, we compare the proposed solution with several baseline methods to demonstrate its effectiveness.
\begin{itemize}
\item \textbf{Time Pruning (TP)}: The state of the art in real-time social keyword search \cite{Twitter:Earlybird,ProvenanceDiscovery,NUS:TI} sorts the inverted
lists in reverse chronological order so as to return the latest results. We therefore implement a TA approach that exploits this order to retrieve
the top-k results.
%\item \textbf{Time and Distance Pruning (TDP)}: This method builds on top of the \textbf{TP} scheme, while the difference is: \textbf{TP} uses the direct Dijkstra's algorithm, whereas \textbf{TDP} computes the social distance with the improved Dijkstra that has distance pruning features as described in Section \ref{sec:ShortestPath}.
\item \textbf{Frequency Pruning (FP)}:
    Without considering efficient updates in a real-time social network, traditional approaches to personalized keyword search sort the inverted lists
    by keyword frequency \cite{PersonSearch:Schenkel:2008:SIGIR,PersonSearch:Sihem:2008:VLDB}. \textbf{FP} is the TA implementation over this type of inverted list.
%\item \textbf{Frequency and Distance Pruning (FDP)}: Similar to the different between \textbf{TP} and \textbf{TDP}, we add the distance pruning rules to \textbf{FP} to form \textbf{FDP}. The main purpose to include \textbf{TDP} and \textbf{FDP} is demonstrate the effectiveness of our 3D inverted list and cube TA algorithm against existing schemes without taking into consideration the effect of the distance pruning methods.
\item \textbf{3D Index (3D)}: The complete solution proposed in this paper, i.e., CubeTA (Algorithm \ref{algo:cubeTA}) combined with our optimized distance query computation (Algorithm \ref{algo:2HopDistancePrune}).% is used to retrieve the top-k results.
%\item \textbf{Simple 3D inverted list without Distance Pruning (S3D)}: This method answers the query by using our proposed 3D inverted lists. The processing is the same as CubeTA, except that \textbf{S3D} computes the social score by calling the naive Dijkstra's algorithm. The intention of this benchmark is to show the effectiveness of the distance pruning techniques.
\end{itemize}
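
All three methods share the same threshold-style access pattern over inverted lists sorted by a pruning key (recency for \textbf{TP}, keyword frequency for \textbf{FP}). The following minimal Python sketch illustrates the pattern; the list layout, scoring and names are hypothetical and for illustration only, not the actual implementation:

```python
import heapq

def ta_topk(lists, score, k):
    """Threshold-style top-k over inverted lists sorted by a pruning key.
    Each list holds (key, doc) pairs in descending key order, where the key
    upper-bounds the doc's full score; scanning stops once the key at the
    current depth cannot beat the current k-th best score."""
    topk, seen, depth = [], set(), 0
    while any(depth < len(l) for l in lists):
        threshold = 0.0
        for l in lists:
            if depth >= len(l):
                continue
            key, doc = l[depth]
            threshold = max(threshold, key)
            if doc not in seen:
                seen.add(doc)
                s = score(doc)
                if len(topk) < k:
                    heapq.heappush(topk, (s, doc))
                elif s > topk[0][0]:
                    heapq.heapreplace(topk, (s, doc))
        if len(topk) == k and threshold <= topk[0][0]:
            break  # no unseen doc can enter the top-k
        depth += 1
    return sorted(topk, reverse=True)
```

The baselines differ only in which key sorts the lists; the deeper point of the paper is that a single scalar key (time or frequency) prunes one dimension at a time, whereas CubeTA prunes all three simultaneously.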

\noindent All methods consist of two parts of computation when retrieving the top-k results. The first part is the time to evaluate all candidates' social relevance scores, which we refer to as \emph{social}. The second part is the time spent on everything other than the social relevance computation, which we refer to as \emph{text}. We adopt these two notions in all figures in this section.


\begin{figure}[htp]
\centering
\begin{minipage}[b]{0.23\textwidth}
    \centering
    \includegraphics[width=\textwidth]{pics/partition_compare}
    \caption{Effect of increasing social partitions for both datasets}
    \label{fig:partition_compare}
\end{minipage}
\hspace{0.1mm}
\begin{minipage}[b]{0.23\textwidth}
    \centering
    \includegraphics[width=\textwidth]{pics/tree_compare}
    \caption{Hierarchical tree partition vs. static partition}
    \label{fig:tree_compare}
\end{minipage}
\end{figure}
\vspace{-3mm}

\subsection{Discussion on Social Partitions}
Before presenting the results that compare all the methods, we investigate the query performance under different numbers of social partitions in the 3D list. Although more social partitions in the 3D list bring better accuracy in the distance estimates, the accuracy gain diminishes due to the small-world phenomenon and the high clustering of the social network. Moreover, the query processor needs to visit more cubes when there are additional social partitions. Thus, as shown in Fig. \ref{fig:partition_compare}, there is an optimal number of social partitions for both datasets: 32 for the Twitter dataset and 64 for the news dataset. Even though the Twitter dataset has a larger social graph than the news dataset, it also has a higher average degree and hence a higher degree of clustering, which makes it harder to distinguish vertices in terms of their social distances.
%by more social partitions.

We also study the impact of the hierarchical partition (proposed in Sec. \ref{sec:HierarchicalPartition}) on query processing. Three factors impact the performance: (1) the number of documents within a time slice, (2) the height of the partition tree $pTree$, and (3) the number of partitions kept in the 3D list. Since memory is usually constrained by a maximum limit, we cannot keep too many partitions. Therefore, we fix the number of partitions to the optimal setting just mentioned. Fig. \ref{fig:tree_compare} shows the performance on the Twitter dataset when we vary the number of documents within a time slice and the height of the partition tree. We find: (1) more fine-grained social partitions lead to better query performance, but they slow down index updates because allocating records to the 3D list requires a tree traversal; (2) an optimal setup exists for the number of documents allocated to a time slice (10000 in this case).
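
The update cost in point (1) stems from the per-record tree traversal; a toy Python sketch of such an allocation (a hypothetical structure, not the actual $pTree$ implementation):

```python
class PNode:
    """Toy hierarchical partition node: internal nodes route a user id to
    one of their children; leaves are the partitions kept in the 3D list."""
    def __init__(self, members, children=None):
        self.members = set(members)
        self.children = children or []

def allocate(root, user):
    """Descend the partition tree to the leaf partition containing `user`.
    The cost grows with the tree height, which is why finer-grained
    partitions slow down index updates."""
    node = root
    while node.children:
        node = next(c for c in node.children if user in c.members)
    return node
```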

The hierarchical partition achieves better performance than the static partition, but it involves many additional parameter choices. Due to the space constraint, the static partition is used in $\textbf{3D}$ when comparing with the two baselines $\textbf{TP}$ and $\textbf{FP}$. We have seen that the time slice size and the number of social partitions have optimal setups, and we adopt these settings throughout the experiments. Besides, we use 10 intervals for the text dimension.

\vspace{-4mm}
\noindent\begin{figure}[htp]
\centering
\begin{subfigure}[b]{0.24\textwidth}
  \centering
  \includegraphics[width=\textwidth]{pics/tweet_prune}
  \caption{Twitter data}
  \label{fig:tweet_prune}
\end{subfigure}%
\hspace{0.4mm}
\begin{subfigure}[b]{0.24\textwidth}
  \centering
  \includegraphics[width=\textwidth]{pics/news_prune}
  \caption{News data}
  \label{fig:news_prune}
\end{subfigure}
\caption{Effect of distance pruning}
\label{fig:distance_prune_results}
\end{figure}
\vspace{-5mm}

\subsection{Efficiency Study}\label{subsec:efficiency}
%\begin{figure}[hf]
%\centering
%\begin{subfigure}{0.25\textwidth}
%  \centering
%  \includegraphics[width=\textwidth]{pics/tweet_first}
%  \caption{twitter data}
%  \label{fig:tweet_first}
%\end{subfigure}%
%\begin{subfigure}{0.25\textwidth}
%  \centering
%  \includegraphics[width=\textwidth]{pics/news_first}
%  \caption{news data}
%  \label{fig:news_first}
%\end{subfigure}
%\caption{Performance for default settings}
%\label{fig:first}
%\end{figure}
\subsubsection{Evaluating Our Pruning Techniques}

Fig. \ref{fig:distance_prune_results} shows the experimental results for both datasets with the default settings. In general, \textbf{3D} outperforms the others by a wide margin. In addition, pruning based on the time dimension (\textbf{TP}, \textbf{3D}) performs better than pruning based on the textual dimension (\textbf{FP}): searching along the text dimension is yet another multi-dimensional search over multiple keywords, and the well-known curse of dimensionality reduces the pruning effect along that dimension. In contrast, the time dimension is a scalar, so it is more efficient to prune.

To see the effect of the proposed distance optimizations (direct, in-circle and out-of-circle pruning), we apply them to $\textbf{TP}$, $\textbf{FP}$ and $\textbf{3D}$. We find that, with better distance pruning, the time for distance queries is greatly reduced for all methods. Moreover, \textbf{3D} works better with the optimized distance query computation and enables more efficient pruning than the other two. When all methods employ the optimized distance query computation (equipped with all pruning techniques in Sec. \ref{sec:ShortestPath}), \textbf{3D} achieves a 4x speedup over \textbf{TP} and an 8x speedup over \textbf{FP}.
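
The intuition behind such bound-based distance pruning can be sketched in a few lines of Python; the bounds, budget and names below are illustrative assumptions, not the exact rules of Algorithm \ref{algo:2HopDistancePrune}:

```python
def social_distance(user, cand, lower, upper, budget, exact):
    """Sketch of bound-based distance pruning.  `lower`/`upper` are cheap
    distance bounds (e.g. derived from partition labels); `budget` is the
    largest distance that can still put `cand` into the top-k; `exact` is
    the expensive shortest-path routine, called only when the bounds
    cannot decide."""
    if lower(user, cand) > budget:
        return None            # out-of-circle: candidate can never qualify
    ub = upper(user, cand)
    if ub <= budget:
        return ub              # in-circle: the bound is already good enough
    return exact(user, cand)   # only here do we pay for an exact search
```

The savings come from how often the first two branches fire, which is exactly what the figure measures.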

In the rest of this section, we investigate the query processing time (of CubeTA with the fully optimized distance computation) by varying the keyword frequencies, the query user, the choice of top-k and the dimension weights respectively.
%Since out-of-circle pruning has superior performance effect for all three methods: $\textbf{TP}$, $\textbf{FP}$ and $\textbf{3D}$, we only present results for methods that use the out-of-circle pruning.

\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.24\textwidth}
  \centering
  \includegraphics[width=\textwidth]{pics/tweet_graph}
  \caption{Twitter data}
  \label{fig:tweet_graph}
\end{subfigure}%
\hspace{0.4mm}
\begin{subfigure}[b]{0.24\textwidth}
  \centering
  \includegraphics[width=\textwidth]{pics/news_graph}
  \caption{News data}
  \label{fig:news_graph}
\end{subfigure}
\caption{Scalability: Effect of increasing social graph size}
\label{fig:vary_graph}
\end{figure}
%\begin{table}[t]
%    \centering
%    \caption{Performance for evaluating social distances with different pruning disabled. The units are in milliseconds.}
%    \label{tab:PruningCheck}
%    \begin{tabular}{|c|S|L|S|c|}
%    \hline
%    Twitter & No Warm-up Queue & No Early-Determination & No Early-Pruning & all \\ \hline
%    Graph:1M  & 266.12 & 1340.7 & 271.85 & 31.75  \\ \hline
%    Graph:5M  & 257.41 & 2377.2 & 531.31 & 124.8 \\ \hline
%    Graph:10M & 1986.5 & 6018.4 & 2407.2 & 69.34  \\ \hline
%    News & No Warm-up Queue & No Early-Determination & No Early-Pruning  & all  \\ \hline
%    Graph:0.1M  & 2.56 & 17.5 & 1.34 & 0.80  \\ \hline
%    Graph:0.5M  & 38.3 & 67.6 & 55.9 & 11.2  \\ \hline
%    Graph:1M    & 38.7 & 129.0 & 52.0 & 12.8 \\ \hline
%    \end{tabular}
%\end{table}
\subsubsection{Varying Keyword Frequency}
Fig. \ref{fig:tweet_frequency} and \ref{fig:news_frequency} show the query processing time over different ranges of keyword frequency. Keywords with high and medium frequencies are the top 100 and top 1000 popular keywords respectively, whereas the low-frequency keywords are the rest that appear at least 1000 times. In both datasets, we observe the following: (1) \textbf{3D} dominates all other approaches for all frequency ranges. (2) As the query keywords become more popular (i.e. have higher frequencies), the speedup of \textbf{3D} over the other methods grows larger. Intuitively, more popular query keywords yield more candidate documents; \textbf{3D} effectively trims down the candidates and retrieves the results as early as possible.

\subsubsection{Varying Query User}

We further study the performance w.r.t. users with degrees from high to low. The high, medium and low degree users denote the upper, middle and lower thirds of the social graph respectively. We randomly sample users from each category to form queries. The results are reported in Fig. \ref{fig:tweet_user} and \ref{fig:news_user}. We find: (1) \textbf{3D} achieves a constant speedup over the other methods regardless of the query user's degree. (2) The social relevance computation takes longer for \textbf{TP} and \textbf{FP} than for \textbf{3D}, even though they use the same distance pruning technique as \textbf{3D}. This is because \textbf{3D} prunes the social dimension more aggressively using the time and text dimensions, whereas the other two methods have only one dimension to prune with. As illustrated later in Sec. \ref{sec:scalability}, this advantage is magnified as the graph grows larger.

\subsubsection{Varying Top-k}
We vary the top-k value from 1 to 50. As shown in Fig. \ref{fig:tweet_topk} and \ref{fig:news_topk}, the performance slowly drops as more results are required. Since we are performing a high-dimensional search, the candidate scores tend to be close to each other; it is therefore more meaningful to identify results quickly for smaller top-k values, which also explains why we set the default top-k to 5. \textbf{3D} retrieves the results extremely fast compared to the other methods and scales well even for larger top-k values. The performance of \textbf{TP} on the news dataset is very poor for larger top-k values: the news dataset contains more text information, so time pruning becomes less effective. Lastly, \textbf{FP} almost exhausts all documents in the inverted lists to get the top-k results, which is close to a linear scan.

\subsubsection{Varying Dimension Coefficients}
Lastly, we assign different weights to the three dimensions we consider, i.e. time, social and text, to test the flexibility of our scheme. As shown in Fig. \ref{fig:tweet_param} and \ref{fig:news_param}, \textbf{3D} remains superior to the other methods. Even when the weight of the time dimension is small, i.e. 0.1, \textbf{3D} is still better than \textbf{TP}, although both methods use the time dimension as the primary axis. $\textbf{3D}$ does not perform as well when the weight of the social dimension is the largest among the three, because the \textbf{small world} property makes it hard to differentiate users effectively in terms of social distance.
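
For illustration, assuming the ranking function combines the three normalized per-dimension scores linearly (a simplification; the exact form is Equation \ref{eq:RankingFunction}), changing the coefficients simply reorders the candidates:

```python
def top_candidates(cands, alpha, beta, gamma, k):
    """Rank candidates by a weighted combination of per-dimension scores.
    Each candidate is (doc, time_s, social_s, text_s) with scores assumed
    normalized to [0, 1]; (alpha, beta, gamma) weight time, social and
    text respectively, as in Table \ref{tab:Parameter}."""
    scored = [(alpha * t + beta * s + gamma * x, d) for d, t, s, x in cands]
    return sorted(scored, reverse=True)[:k]
```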

\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.24\textwidth}
  \centering
  \includegraphics[width=\textwidth]{pics/tweet_text}
  \caption{Twitter data}
  \label{fig:tweet_text}
\end{subfigure}%
\hspace{0.4mm}
\begin{subfigure}[b]{0.24\textwidth}
  \centering
  \includegraphics[width=\textwidth]{pics/news_text}
  \caption{News data}
  \label{fig:news_text}
\end{subfigure}
\caption{Scalability: Effect of increasing text information}
\label{fig:vary_text}
\end{figure}


%\begin{table}[t]
%	\centering
%	\caption{Comparison of size and construction time for different indices on Twitter dataset}
%	\label{tab:TweetIndexResource}
%	\begin{tabular}{|c|c|c|c|c|c|c|}
%	\hline
%	&	\multicolumn{3}{|c|}{Index Size (Gb)} & \multicolumn{3}{|c|}{Update Time (Secs)} \\
%	\hline
%	Vary Graph Size &	\textbf{TP} & \textbf{FP} & \textbf{3D} & \textbf{TP} & \textbf{FP} & \textbf{3D} \\
%	\hline
%	Graph:1M & 3.17 & 4.22 & 4.95 & 6.49 & 120.81 & 23.42 \\
%	Graph:5M & 3.18 & 4.22 & 5.08 & 7.43 & 126.52 & 28.13\\
%	Graph:10M & 3.10 & 4.09 & 5.35 & 16.91 & 123.47 & 23.40  \\
%	\hline
%	Vary Tweet Size & \textbf{TP} & \textbf{FP} & \textbf{3D} & \textbf{TP} & \textbf{FP} & \textbf{3D} \\
%	\hline
%	Tweet:10M & 3.17 & 4.22 & 4.95 & 6.27  & 118.41 & 23.09 \\
%	Tweet:15M & 4.75 & 6.34 & 6.80 & 9.54  & 200.17 & 34.67 \\
%	Tweet:20M & 6.34 & 8.65 & 8.18 & 12.64 & 284.63 & 46.29 \\
%	\hline
%	\end{tabular}
%\end{table}
\vspace{-1mm}
\subsection{Scalability}\label{sec:scalability}

%\begin{table}[t]
%	\centering
%	\caption{Comparison of resources usage and construction performance for different indices on News dataset}
%	\label{tab:NewsIndexResource}
%	\begin{tabular}{|c|c|c|c|c|c|c|}
%	\hline
%	&	\multicolumn{3}{|c|}{Index Size (Gb)} & \multicolumn{3}{|c|}{Construction Time (Secs)} \\
%	\hline
%	Vary Graph Size &	\textbf{TP} & \textbf{FP} & \textbf{3D} & \textbf{TP} & \textbf{FP} & \textbf{3D} \\
%	\hline
%	Graph:0.1M & 0.53 & 0.79 & 2.05 & 1.11 & 12.00 & 4.61 \\
%	Graph:0.5M & 0.53 & 0.79 & 2.11 & 1.15 & 10.27 & 5.09 \\
%	Graph:1M   & 0.46 & 0.73 & 1.91 & 1.15 & 10.18 & 5.04 \\
%	\hline
%	Vary News Size & \textbf{TP} & \textbf{FP} & \textbf{3D} & \textbf{TP} & \textbf{FP} & \textbf{3D} \\
%	\hline
%	News:0.5M & 0.53 & 0.79 & 2.05 & 1.10  & 12.00  & 4.66\\
%	News:1M   & 1.12 & 1.52 & 2.77 & 2.23  & 29.48  & 8.65\\
%	News:5M   & 6.01 & 7.99 & 8.00 & 11.81 & 225.03 & 42.97\\
%	\hline
%	\end{tabular}
%\end{table}

To test the scalability of our proposed solution, we scale along the social graph size and the number of user-posted documents respectively.

First, we test the scalability when the social graph grows while the number of records remains the same. This follows the common scenario in which users issue real-time queries for the most recent posts in a very large social network.
%we have a very big social network while query users only concern the records posted most recently for a real time query he/she issues.
%
In Fig. \ref{fig:vary_graph}, we find that both \textbf{TP} and \textbf{FP} spend much longer on the social distance computation. By contrast, \textbf{3D} keeps the distance query time minimal thanks to the simultaneous three-dimensional pruning over the 3D list. Therefore, \textbf{3D} is capable of handling the increasing volume of the social graph.
We also test what happens if one of the three pruning techniques, i.e. early-determination, early-pruning and the warm-up queue, is disabled. The results are shown in Table \ref{tab:PruningCheck}, where we observe that early-determination is the most powerful pruning. Nevertheless, all prunings must work together to ensure an efficient distance computation as the graph size increases.
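
The spirit of these prunings on top of a Dijkstra-style search can be sketched as follows (illustrative only; the actual rules are given in Sec. \ref{sec:ShortestPath}):

```python
import heapq

def bounded_dijkstra(adj, src, targets, budget):
    """Pruned Dijkstra sketch: stop as soon as every target is settled
    (in the spirit of early-determination) or the frontier distance
    exceeds `budget`, beyond which no candidate can affect the top-k
    (in the spirit of early-pruning)."""
    dist = {src: 0}
    todo = set(targets)
    heap = [(0, src)]
    while heap and todo:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue               # stale heap entry
        if d > budget:
            break                  # frontier too far: stop early
        todo.discard(u)            # target settled
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return {t: dist.get(t) for t in targets}
```

The warm-up queue of the paper additionally reuses state across successive distance queries, which this single-query sketch does not model.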

Second, we test each approach w.r.t. a varying number of posted records while fixing the social graph to its default size. As shown in Fig. \ref{fig:vary_text}, \textbf{3D} remains the best and maintains near-constant performance. This verifies that our proposed method is also scalable against a high text volume.

%\begin{figure}[t]
%    \centering
%        \includegraphics[width=0.5\textwidth]{pics/update_query}
%    \caption{The query time (milliseconds) when records are being ingested.}
%    \label{fig:update}
%\end{figure}
%
%\begin{figure}[t]
%    \centering
%        \includegraphics[width=0.5\textwidth]{pics/update_time}
%    \caption{Average index update time (seconds/million records) when records are being ingested.}
%    \label{fig:update}
%\end{figure}
%
%\begin{figure}[t]
%    \centering
%        \includegraphics[width=0.5\textwidth]{pics/update_size}
%    \caption{Index size (Gigabytes) when records are being ingested.}
%    \label{fig:update}
%\end{figure}

\begin{figure*}[htp]
\begin{minipage}[c]{0.32\textwidth}
    \captionof{table}{Performance of social distance evaluation with different prunings disabled (in milliseconds).}
    \label{tab:PruningCheck}
    \scriptsize
    \begin{tabular}{|c|c|c|c|}
    \hline
    \textbf{Twitter Graph}               & 1M & 5M & 10M  \\ \hline

    No Warm-up Queue            & 266.1& 257.4 & 1987 \\ \hline
    No Early-Determination      & 1341 & 2377  & 6018 \\ \hline
    No Early-Pruning            & 271.9& 531.3 & 2407 \\ \hline
    all                         & 31.75& 124.8 & 69.34 \\ \hline

    %Graph:1M  & 266.12 & 1340.7 & 271.85 & 31.75  \\ \hline
%    Graph:5M  & 257.41 & 2377.2 & 531.31 & 124.8 \\ \hline
%    Graph:10M & 1986.5 & 6018.4 & 2407.2 & 69.34  \\ \hline
    \textbf{News Graph}                  & 0.1M & 0.5M & 1M \\ \hline
    No Warm-up Queue            & 2.56 & 38.3 & 38.7 \\ \hline
    No Early-Determination      & 17.5 & 67.6 & 129 \\ \hline
    No Early-Pruning            & 1.34 & 55.9 & 52.0 \\ \hline
    all                         & 0.80 & 11.2 & 12.8 \\ \hline
%    Graph:0.1M  & 2.56 & 17.5 & 1.34 & 0.80  \\ \hline
%    Graph:0.5M  & 38.3 & 67.6 & 55.9 & 11.2  \\ \hline
%    Graph:1M    & 38.7 & 129.0 & 52.0 & 12.8 \\ \hline
    \end{tabular}
\end{minipage}
\hspace{1mm}
\begin{minipage}[c]{0.21\textwidth}
    \includegraphics[width=\textwidth]{pics/update_query}
    \caption{The query time when records are being ingested.}
    \label{fig:update_query}
\end{minipage}
\hspace{1mm}
\begin{minipage}[c]{0.21\textwidth}
    \includegraphics[width=\textwidth]{pics/update_time}
    \caption{The index update rate with incoming records.}
    \label{fig:update_time}
\end{minipage}
\hspace{1mm}
\begin{minipage}[c]{0.21\textwidth}
    \includegraphics[width=\textwidth]{pics/update_size}
    \caption{The index size with incoming records.}
    \label{fig:update_size}
\end{minipage}
\end{figure*}
\vspace{-1mm}
\subsection{Handling Fast Update}\vspace{-1mm}
Lastly, we study how our framework deals with fast and continuously arriving new data. The experiment is conducted as follows: we measure the query time, the online index update rate and the index size while new tweets are continuously ingested, growing from 10M to 20M tweets; the results are presented in Fig. \ref{fig:update_query}, \ref{fig:update_time} and \ref{fig:update_size} respectively. The results for the news dataset are similar to Twitter and are not presented due to the space limit. We find: (1) For query time, time-based pruning methods achieve fairly stable performance while text pruning degrades as tweets are ingested; \textbf{3D} outperforms \textbf{TP} with a 5-7x speedup and is even more stable against index updates. (2) For online index update time, \textbf{TP} is the most efficient method due to its simple index structure. Nevertheless, \textbf{3D} also demonstrates its fast-update capability: it clearly outperforms \textbf{FP}, and the margin against \textbf{TP} is quite small. (3) For index size, the space occupied by \textbf{3D} is close to \textbf{TP}, as we store the 3D list as time slices of trees (Sec. \ref{sec:Index}) to avoid empty cubes. \textbf{FP} requires more space, as a large tree structure is used to maintain a list sorted w.r.t. keyword frequencies.

In summary, equipped with its fast-update and efficient space management features, $\textbf{3D}$ has a clear advantage over the other methods in handling real-time personalized queries.

%As we are dealing with real time query, this phenomenon naturally follows as people care about documents that are posted most recent.







