%!TEX root = main.tex
\begin{figure}[t]
\begin{minipage}[b]{0.24\textwidth}
    \centering
        \includegraphics[width=\textwidth]{pics/hirarchical_division}
    \captionof{figure}{Tree partition of the social graph in Fig. \ref{fig:social_graph}.}
    \label{fig:binary_graph_part}
\end{minipage}
\begin{minipage}[b]{0.24\textwidth}
    \centering
        \includegraphics[width=\textwidth]{pics/partition_builder}
    \captionof{figure}{Example of building time slice 1 of the 3D list for \quotes{icde} on the partition tree in Fig. \ref{fig:binary_graph_part}.}
    \label{fig:partition_builder}
\end{minipage}
\end{figure}
\section{3D index optimization}\label{sec:HierarchicalPartition}

In the 3D index, the social dimension is statically partitioned.
Intuitively, the larger the number of partitions, the more accurate the
estimate of the social score, which translates into more effective
pruning. However, a large number of partitions on the social dimension
severely taxes system resources and does not scale to large social
networks. Moreover, a static partitioning strategy does not capture the
nature of online social activities: over a given period of time, people
who are socially related are likely to publish similar information on the
online network. A fine-grained yet time-aware social partition is required for more efficient pruning. Therefore, we develop a dynamic partitioning strategy on the social dimension using a hierarchical graph partition.

\subsection{Hierarchical Graph Partition Index}
We improve the static social division scheme with a hierarchical graph partition organized as a binary tree, denoted $pTree$. Fig.~\ref{fig:binary_graph_part} shows an example of a partition tree for the social graph in Fig.~\ref{fig:social_graph}. The root of $pTree$ contains all vertices in $G$. Each child node in the tree represents a sub-partition of $G$, denoted $G_{[h,idx]}$, where $h$ is the level in $pTree$ and $idx$ is the position of $G_{[h,idx]}$ at level $h$. For the 3D list, we still keep $c$ partitions as described in Sec.~\ref{sec:Index}. The improvement is that the $c$ partitions are formed by nodes of $pTree$ instead of uniform graph partitions.
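To make the structure concrete, the following sketch builds such a binary partition tree. The node class, the \texttt{split} routine, and all names are illustrative assumptions rather than the paper's implementation; any graph-bisection method could serve as \texttt{split}.

```python
class PNode:
    """A node of pTree representing sub-partition G[h, idx] (assumed layout)."""
    def __init__(self, vertices, h=0, idx=0):
        self.vertices = set(vertices)   # users contained in this sub-partition
        self.h, self.idx = h, idx       # level in pTree and position at that level
        self.left = self.right = None


def build_ptree(vertices, split, h=0, idx=0):
    """Recursively bisect the vertex set into a binary partition tree.

    `split` is any graph-bisection routine returning two vertex halves
    (e.g. a min-cut based partitioner); it is a placeholder here.
    """
    node = PNode(vertices, h, idx)
    if len(vertices) > 1:
        left_v, right_v = split(vertices)
        node.left = build_ptree(left_v, split, h + 1, 2 * idx)
        node.right = build_ptree(right_v, split, h + 1, 2 * idx + 1)
    return node
```

With this numbering, the children of $G_{[h,idx]}$ are $G_{[h+1,2\,idx]}$ and $G_{[h+1,2\,idx+1]}$, matching the level/position notation above.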

The 3D list dynamically updates the social dimension when new records are inserted. The update procedure maintains $c$ partitions within a time slice. When a user $u_r$ posts a new record $r$, within each time slice into which $r$ is inserted, we locate the sub-partition $G_{[h,idx]}$ that should store $r$. Traversing from the root of $pTree$, if $G_{[h',idx']}$ already contains some records or $G_{[h',idx']}$ is a leaf node of $pTree$, we insert $r$ into $G_{[h',idx']}$; otherwise we descend to the child partition of $G_{[h',idx']}$ that contains $u_r$. For any two nodes $G_{[x,idx_x]}$ and $G_{[y,idx_y]}$ in $pTree$, we denote their lowest common ancestor by $LCA(G_{[x,idx_x]},G_{[y,idx_y]})$. After insertion, if there are $c+1$ non-empty sub-partitions, we merge the two sub-partitions $G_{left}$ and $G_{right}$ whose ancestor $G_{[h^{*},idx^{*}]} = LCA(G_{left},G_{right})$ lies lowest in $pTree$ (i.e., has the largest level $h^{*}$) among all possible merges.
If there is a tie in $h^{*}$, the merge that involves the fewest records is executed.
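Under the implicit binary numbering of $G_{[h,idx]}$, the LCA computation and the merge selection can be sketched as follows; the \texttt{(h, idx)} pair representation and the function names are our own assumptions for illustration, not the paper's code.

```python
def lca(a, b):
    """LCA of two pTree positions (h, idx): the parent of (h, idx) is
    (h - 1, idx // 2), so walk the deeper node up until the positions meet."""
    (h1, i1), (h2, i2) = a, b
    while h1 > h2:
        h1, i1 = h1 - 1, i1 // 2
    while h2 > h1:
        h2, i2 = h2 - 1, i2 // 2
    while i1 != i2:
        h1, i1 = h1 - 1, i1 // 2
        i2 //= 2
    return (h1, i1)


def best_merge(parts):
    """parts: dict mapping an (h, idx) position to its list of records.

    Choose the pair whose LCA lies lowest in the tree (largest level h*),
    breaking ties by the smallest number of records involved in the merge.
    """
    best = None
    keys = sorted(parts)
    for i in range(len(keys)):
        for j in range(i + 1, len(keys)):
            anc = lca(keys[i], keys[j])
            size = len(parts[keys[i]]) + len(parts[keys[j]])
            cand = (-anc[0], size, keys[i], keys[j], anc)
            if best is None or cand < best:
                best = cand
    return best[2], best[3], best[4]   # (G_left, G_right, their LCA)
```

For the example below, merging $G_{[2,0]}$ and $G_{[2,1]}$ into their LCA $G_{[1,0]}$ is preferred over any merge whose LCA is the root, because it sacrifices the least partition granularity.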

Recall Example~\ref{exmp:cubeTA}; Fig.~\ref{fig:partition_builder} demonstrates how the social partition is divided dynamically using the hierarchical partition for the 3D list of the keyword \quotes{icde} in Fig.~\ref{fig:text_3D_inverted}. Since we still build three partitions within a time slice, $r6$--$r10$ are inserted into the leaves of the partition tree as they arrive chronologically. When $r11$ arrives, we first identify that it should be inserted into $G_{[2,1]}$. However, since we can only keep three partitions, $G_{[2,0]}$ and $G_{[2,1]}$ are merged to form $G_{[1,0]}$, as shown in Fig.~\ref{fig:partition_builder}, when $r11$ is inserted. Note that within a time slice we only store sub-partitions that contain records; the tree structure is just a virtual view.

The advantage of our hierarchical partition is that we obtain fine-grained social partitions, which improve the estimate of a cube's relevance score. This creates more opportunities for powerful pruning along the social dimension. At the same time, we still have $c$ partitions, so the resource requirement does not increase. Again in Fig.~\ref{fig:binary_graph_part}, the score under each tree node represents the social distance estimated from $u_1$. The static partition scheme estimates the distance from $u_1$ to $u_7$ as 0.2, whereas the hierarchical partition scheme gives 0.5, which is closer to the true value (0.6).

\begin{figure}[t]
    \centering
        \includegraphics[width=0.5\textwidth]{pics/hirarchical_inverted_index}
    \caption{Reordered inverted list for keyword \quotes{icde} by using the hierarchical partition w.r.t. user $u_1$.}
    \label{fig:hierarchical_inverted_list}
\end{figure}
\subsection{CubeTA on Hierarchical Graph Partition Index}
CubeTA has to be extended to incorporate the hierarchical partition and improve pruning efficiency. Since the partition scheme is extended from a single layer to multiple layers, we need to change the way social distances are estimated. In the pre-processing step, the pairwise distances between all leaf nodes of the partition tree $pTree$ are computed. When a user $u$ submits a query, we first identify the leaf node $G_{u}$ that contains $u$. Then the distance $\textbf{SD}(u,G_{[h,idx]})$ from $u$ to any partition $G_{[h,idx]}$ is estimated as $\min \textbf{SD}(u,G_{[h',idx']})$ over all nodes $G_{[h',idx']}$ that have $G_{[h,idx]}$ as an ancestor in $pTree$. Suppose user $u_1$ submits a query; the social distances from $u_1$ to the other sub-partitions are estimated as in Fig.~\ref{fig:binary_graph_part}. The distances from $G_{u_1}$ to all leaf nodes are first retrieved from the pre-computed index, and the values are shown below the nodes. Then the distances from $u_1$ to $G_{[1,0]}$ and $G_{[1,1]}$ are estimated as 0.1 and 0.4, respectively, by taking the minimum value over their leaf nodes.
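The estimation step reduces to a minimum over precomputed leaf distances, as in the following sketch. The individual leaf distance values are illustrative assumptions chosen only so that the resulting minima (0.1 and 0.4) match the example above; the dictionary names are hypothetical.

```python
def estimate_sd(leaf_dist, node_leaves, node):
    """Estimate SD(u, G[h,idx]) as the minimum precomputed distance from
    u's leaf G_u to any leaf descendant of G[h,idx].

    leaf_dist:   leaf position -> precomputed distance from G_u (assumed)
    node_leaves: node position -> its leaf descendants in pTree (assumed)
    """
    return min(leaf_dist[leaf] for leaf in node_leaves[node])


# Hypothetical leaf distances from u1 (not taken from the figure).
leaf_dist = {(2, 0): 0.1, (2, 1): 0.3, (2, 2): 0.4, (2, 3): 0.5}
node_leaves = {(1, 0): [(2, 0), (2, 1)], (1, 1): [(2, 2), (2, 3)]}

estimate_sd(leaf_dist, node_leaves, (1, 0))   # -> 0.1
estimate_sd(leaf_dist, node_leaves, (1, 1))   # -> 0.4
```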

After reordering the social dimension w.r.t. user $u_1$, the 3D list of \quotes{icde} can be visualized as in Fig.~\ref{fig:hierarchical_inverted_list}. For each social partition, the estimated social distance is listed in the cell. The partitions may vary across different time slices, but the number of partitions remains the same. CubeTA can therefore be applied directly to the hierarchical index.


This also demonstrates the flexibility of the 3D list and CubeTA, both of which easily incorporate the hierarchical index for efficient query processing.
