
%%%%%%%%%%%%%%%%%%%%%%% file typeinst.tex %%%%%%%%%%%%%%%%%%%%%%%%%
%
% This is the LaTeX source for the instructions to authors using
% the LaTeX document class 'llncs.cls' for contributions to
% the Lecture Notes in Computer Sciences series.
% http://www.springer.com/lncs       Springer Heidelberg 2006/05/04
%
% It may be used as a template for your own input - copy it
% to a new file with a new name and use it as the basis
% for your article.
%
% NB: the document class 'llncs' has its own and detailed documentation, see
% ftp://ftp.springer.de/data/pubftp/pub/tex/latex/llncs/latex2e/llncsdoc.pdf
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\documentclass[runningheads,a4paper]{llncs}

\usepackage{amsmath}
\usepackage{amssymb}
\setcounter{tocdepth}{3}
\usepackage{graphicx}
\usepackage{diagbox}
\usepackage{epstopdf}
\usepackage{epsfig}
\usepackage[ruled, vlined, linesnumbered]{algorithm2e}
\usepackage{subfigure}

\usepackage{url}
\newcommand{\keywords}[1]{\par\addvspace\baselineskip
\noindent\keywordname\enspace\ignorespaces#1}

\newtheorem{lem}{Lemma}

\begin{document}

\mainmatter  % start of an individual contribution

% first the title is needed
\title{Multi-objective Spatial Keyword Query with Semantics}

% a short form should be given in case it is too long for the running head
%\titlerunning{Lecture Notes in Computer Science: Authors' Instructions}

% the name(s) of the author(s) follow(s) next
%
% NB: Chinese authors should write their first names(s) in front of
% their surnames. This ensures that the names appear correctly in
% the running heads and the author index.
%
\author{Jing Chen%
%\thanks{Please note that the LNCS Editorial assumes that all authors have used
%the western naming convention, with given names preceding surnames. This determines
%the structure of the names in the running heads and the author index.}%
\and Jiajie Xu}
%
%\authorrunning{Lecture Notes in Computer Science: Authors' Instructions}
% (feature abused for this document to repeat the title also on left hand pages)

% the affiliations are given next; don't give your e-mail address
% unless you accept that it will be published
\institute{Department of Computer Science and Technology, Soochow University\\
\url{20164227012@mail.suda.edu.cn}}

%
% NB: a more complex sample for affiliations and the mapping to the
% corresponding authors can be found in the file "llncs.dem"
% (search for the string "\mainmatter" where a contribution starts).
% "llncs.dem" accompanies the document class "llncs.cls".
%

\toctitle{Lecture Notes in Computer Science}
%\tocauthor{Authors' Instructions}
\maketitle


\begin{abstract}
Nowadays, spatial keyword query is one of the most important applications in location based services. Early research on spatial keyword query mainly focused on the spatial and textual similarities of a single objective. Although subsequent work improved on this by supporting multiple objectives instead of a single one, it still ignored the semantic understanding of the textual descriptions in spatial web objects and queries. To address this issue, this paper studies the problem of semantic based multi-objective spatial keyword query. It aims to find the object set with minimum distance in both the spatial and the semantic context. To achieve this goal, we propose a novel indexing structure named LIR-tree, which integrates spatial and semantic information in a hierarchical manner, so as to prune the search space effectively during query processing. Extensive experiments are carried out to evaluate the proposed algorithms.
\keywords{Spatial keyword query, Multiple objectives, Probabilistic topic model, Topic distribution, Semantic similarity}
%\keywords{Multiple Objections, Collective Query, Semantic Similarity, Query Optimization, Locality Sensitive Hashing}
\end{abstract}

\section{Introduction}

%Early research mainly focused on spatial keyword approximate query, but xxx [drawbacks]. [zhihu] proposed an LDA-based semantic expansion that supports semantic matching between object texts and the query.

%In real application scenarios, queries are often multi-objective. [Cao Xin] targeted this problem, but only supports exact textual matching, xxx. Index and query processing methods that support multiple objectives while considering semantics are needed.

Spatial keyword query is widely used in location based service (LBS) systems to recommend to users the services they need or the places to visit. The study of this topic has attracted a great deal of attention so far. Existing methodologies mainly study the efficient retrieval of spatial web objects that best match the query in terms of both spatial and textual relevance. The spatial keyword query itself sometimes has multiple objectives, which may lead to no or few objects that can fully match the keywords in the query. To address this problem, \cite{cao2011collective} returns a group of objects that together cover all required keywords with a reasonable spatial distribution. However, keyword matching cannot help us find objects with highly related semantics but low similarity in spelling, such as $market$ and $Wal$-$Mart$. This limitation motivates us to investigate approaches that capture semantic relatedness in multi-objective spatial keyword queries.
\begin{figure}[htbp]
  \centering
  \includegraphics[width=8cm,height=5cm]{image/figure2.pdf}
  \caption{Distribution of Spatial Web Objects}
  \label{fig:distribution_of_spatial_web_objects}
\end{figure}

\vspace{5pt}
\noindent \emph{Example 1.} Figure~\ref{fig:distribution_of_spatial_web_objects} shows an example with ten spatial web objects, each of which has a geographical location and a set of keywords. A user issues a query $q$ with a set of query objectives $\{(market),(western \  restaurant),(cinema)\}$ to find collective spatial web objects. By using traditional methods \cite{cong2009efficient,de2008keyword} to process each objective of the query independently, the objects \{$O_2,O_4,O_9$\} are returned because of their spatial and textual similarities to the query. Alternatively, the collective keyword querying method \cite{cao2011collective} tends to return a more qualified result such as \{$O_1,O_2,O_3$\}, because these objects are spatially coherent and collectively close to the query. However, if we check the semantics on top of the keywords more carefully, we can easily observe that \{$O_6,O_7,O_8$\}, instead of \{$O_1,O_2,O_3$\}, is the set of objects that should be returned, because it is the best match spatially and all objectives in the query are fully matched in semantics. The key issue is how to take the semantics into account and process the query efficiently.

To represent the semantics of each spatial web object and query objective, we can apply powerful tools from the field of machine learning, such as probabilistic topic models or word embeddings. By running them on the textual descriptions, query objectives (e.g., $market$ in $q$ of Fig.~\ref{fig:distribution_of_spatial_web_objects}) and spatial web objects are represented as high dimensional vectors in semantic space, called topic distributions. A topic distribution indicates the semantic relevance between a textual description and each latent topic, and accordingly, the similarity between an object and a query objective can be measured on top of their topic distributions. In this way it is possible to find the collective object set that satisfies all query objectives while being spatially coherent and close to the query point.

While the incorporation of semantics helps us return more meaningful results, query processing becomes more challenging and time-consuming for three main reasons. Firstly, finding the optimal result (the best subset according to spatial and semantic similarity) is an NP-complete problem, which cannot be solved in polynomial time. Secondly, existing spatial keyword indices, such as the IR-tree \cite{cao2010retrieving}, cannot be used directly to organize the information of spatial web objects because of their difficulty in representing the semantic topic distributions. Last but not least, the high dimensionality of the vectors (topic distributions) deteriorates the pruning effectiveness in query processing due to the large dead space.

To address all the above difficulties, we propose a novel query processing mechanism with good efficiency and precision. To ensure the pruning effect in semantic space, we take advantage of locality sensitive hashing (LSH) to hash the objects by their high dimensional topic distributions. Each bucket is understood as a semantic tag, and the LSH mechanism ensures that objects in the same bucket have consistent semantic meanings. We design a candidate bucket set oriented searching mechanism to reduce the search space. It retrieves and compares a local result for each candidate bucket set, and finally derives the global optimum among them. In addition, a more efficient approach is proposed to avoid checking all candidate bucket sets while ensuring high accuracy of the result. The main contributions of the paper can be briefly summarized as follows:
\begin{itemize}
 \item We formalize a probabilistic topic model based similarity measure between a multi-objective query and a set of objects; 

 \item We design a semantic hashing based algorithm on top of the LSH method, so that reasonable results can be derived by making use of collective spatial keyword querying technologies.

 \item We propose a novel mechanism that starts directly from a good result, and then improves it with certain swap operations to ensure accuracy.

 \item We conduct an extensive experimental analysis based on real spatial databases, make comparisons with baseline algorithms, and demonstrate the efficiency of our method.
\end{itemize}

The rest of this paper is organized as follows. We present some necessary concepts and formally define the problem in Section 2. Section 3 presents a baseline method. Two advanced solutions are introduced in Section 4 and Section 5 respectively. Section 6 reports the experimental observations. This paper is concluded in Section 8 after a brief review of related work in Section 7.

\section{Preliminaries and problem definition}

In this section, we introduce some preliminaries about probabilistic topic model and then formalize the problem of this paper.

\subsection{Probabilistic Topic Model}

The probabilistic topic model is an effective technique for theme interpretation and document classification. In this paper, we apply one of the most frequently used probabilistic topic models, i.e., the \emph{Latent Dirichlet Allocation (LDA)} model, to understand the semantic meanings of textual descriptions over latent topics. Each latent topic, or topic in short, is a feature that represents a semantic meaning derived by LDA. By carrying out statistical analysis on a large amount of textual descriptions, the LDA model automatically derives the semantic relevance of a text to all latent topics, known as its topic distribution, defined as follows:

\vspace{5pt}
\noindent \textbf{\emph{Definition 1.}} (\emph{Topic Distribution}) Given a text, a topic distribution derived by LDA is a high dimensional vector that describes the semantic relevance between the text and each latent topic. We use $TD_W$ to denote the topic distribution of a text $W$ over a finite set of latent topics, and a component $TD_W[i]$ indicates the relevance between $W$ and the $i$-th latent topic.

\begin{table}[!htbp]
\centering
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
\diagbox[width=12em,trim=l]{textual descriptions}{topics} & exercise & movie & drink & shop & food \\
\hline
market (in $O_1$) & 0.09 & 0.09 & 0.09 & 0.64 & 0.09 \\
\hline
restaurant (in $O_2$) & 0.04 & 0.04 & 0.16 & 0.04 & 0.72 \\
\hline
cinema (in $O_3$) & 0.07 & 0.72 & 0.07 & 0.07 & 0.07 \\
\hline
noodles (in $O_5$) & 0.07 & 0.07 & 0.07 & 0.07 & 0.72 \\
\hline
Wal-Mart (in $O_6$) & 0.07 & 0.07 & 0.07 & 0.72 & 0.07 \\
\hline
theater (in $O_7$) & 0.04 & 0.84 & 0.04 & 0.04 & 0.04 \\
\hline
KFC (in $O_8$) & 0.03 & 0.03 & 0.03 & 0.03 & 0.88 \\
\hline
\end{tabular}
\setlength{\abovecaptionskip}{8pt}
\caption{Topic distributions of textual descriptions}
\label{table:topic_distributions}
\end{table}

\noindent \emph{Example 2.} Table~\ref{table:topic_distributions} shows the LDA interpretation of the spatial web objects in Figure~\ref{fig:distribution_of_spatial_web_objects}. Each tuple in Table~\ref{table:topic_distributions} is a topic distribution over five topics, and each component is the relevance between the text and a specific topic; for example, $TD_{market}[1]=0.09$ means that the relevance between $market$ and $exercise$ is 0.09. We can learn from Table~\ref{table:topic_distributions} that on the topic $movie$, $TD_{cinema}[2]=0.72$ and $TD_{theater}[2]=0.84$, which means $theater$ has high coherence with $cinema$, while $TD_{Wal-Mart}[2]=0.07$, which means $Wal$-$Mart$ is distinct from $cinema$.

\subsection{Problem Definition}

A spatial web object is a place of interest in LBS systems, and it is formalized as $o=(o.\lambda,o.\psi)$ where $o.\lambda$ is the position of $o$ and $o.\psi$ is the textual information for describing $o$. A user issues a multi-objective query $q = (q.\lambda, q.\Psi)$, where $q.\lambda$ represents a geographical location, and $q.\Psi$ is a set of query objectives which are textual descriptions for describing an activity intention. In the rest of this paper, we simply use \emph{objects} to represent \emph{spatial web objects}.

\vspace{5pt}
\noindent \textbf{\emph{Definition 2.}} (\emph{Spatial Distance}) The objects in the result set are supposed to be not only close to the query, but also close to each other. We thus follow the collective spatial keyword query \cite{cao2011collective} and measure the spatial distance $D_S$ from a query $q$ to an object set $O$ as follows:
\begin{eqnarray}
\mathcal{D}_{S}(q,O) = \beta \times &max_{o_i\in O}&{(||q.\lambda,o_i.\lambda||)} + \nonumber\\
			&  &(1-\beta)\times max_{o_i,o_j\in O}{(||o_i.\lambda,o_j.\lambda||)}
\end{eqnarray}
where $\beta \in [0,1]$ is a user-specified weight parameter. The spatial measure allows us to find a set of objects that is close to the query and spatially coherent. That is, the objects are rationally distributed in space when $D_S(q, O)$ is small.
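To make the measure concrete, the following Python sketch (ours, not the paper's implementation; function and argument names are illustrative) evaluates Eq. (1) over Euclidean coordinates:

```python
import math

def spatial_distance(q, objects, beta=0.5):
    # Eq. (1): beta * (max distance from the query to any object)
    # + (1 - beta) * (max pairwise distance within the set).
    # Locations are (x, y) tuples; Euclidean distance is assumed.
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    to_query = max(dist(q, o) for o in objects)
    pairwise = max(dist(a, b) for a in objects for b in objects)
    return beta * to_query + (1 - beta) * pairwise
```

For a query at the origin and objects at (3,0) and (0,4) with $\beta=0.5$, the first term is 4 and the pairwise diameter is 5, giving a distance of 4.5.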

\vspace{5pt}
\noindent \textbf{\emph{Definition 3.}} (\emph{Semantic Distance}) The semantic distance $\mathcal{D}_{T}$ between a query $q$ and an object set $O$ is measured on top of their topic distributions derived by LDA. We transform the Euclidean distance between the high dimensional vectors with a sigmoid-style function so that each pairwise distance $d_T$ ranges in $[0,1)$:
\begin{eqnarray}
\mathcal{D}_{T}(q,O) = \sum_{q.\Psi_{i} \in q.\Psi}{min_{o_j \in O} (d_T(q.\Psi_{i}, o_j)) }
\end{eqnarray}
\begin{eqnarray}
d_T(q_i, o_j) = \frac{2}{1 + e^{-\sqrt{\Sigma{(TD_{q_i}[z]-TD_{o_j}[z])^{2}}}}}-1
\end{eqnarray}
where $d_T(q_i,o_j) \in [0,1)$. Obviously, a smaller semantic distance means that the query $q$ and the corresponding objects are more relevant in semantics.
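A minimal sketch of Eqs. (2) and (3), assuming topic distributions are plain Python lists (names are our own, not the paper's):

```python
import math

def d_T(td_q, td_o):
    # Eq. (3): sigmoid-style transform of the Euclidean distance
    # between two topic distributions; the result lies in [0, 1).
    eu = math.sqrt(sum((a - b) ** 2 for a, b in zip(td_q, td_o)))
    return 2.0 / (1.0 + math.exp(-eu)) - 1.0

def semantic_distance(query_tds, object_tds):
    # Eq. (2): each query objective is matched to its closest object.
    return sum(min(d_T(tq, to) for to in object_tds) for tq in query_tds)
```

Identical distributions yield a distance of exactly 0, and the per-pair value approaches 1 as the Euclidean distance grows.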

\vspace{5pt}
\noindent \textbf{\emph{Definition 4.}} (\emph{Distance}) By combining the spatial distance $\mathcal{D}_{S}(q,O)$ and the semantic distance $\mathcal{D}_{T}(q,O)$, we define the distance $\mathcal{D}ist(q,O)$ between a query $q$ and an object set $O$ in the equation below:
\begin{eqnarray}
\mathcal{D}ist(q,O) = \alpha \times \mathcal{D}_{S}(q,O) + (1-\alpha) \times \mathcal{D}_{T}(q,O)
\end{eqnarray}
where $\alpha \in [0,1]$ is a user-specified weight parameter that balances the spatial distance $\mathcal{D}_{S}$ and the semantic distance $\mathcal{D}_{T}$.

\vspace{5pt}
\noindent \textbf{\emph{Problem Statement.}} Given an object set $O$ and a query $q=(q.\lambda,q.\Psi)$, the multi-objective spatial keyword query (MoSKQ, in short) studied in this paper aims to return a subset $O'$ of $O$ ($O' \subset O$, $|O'|\leqslant|q.\Psi|$) that has minimum distance and collectively matches all query objectives in semantics, such that $\forall O''\subset O$ with $|O''|\leqslant |q.\Psi|$, $\mathcal{D}ist(q,O')\leqslant \mathcal{D}ist(q,O'')$.

\section{Baseline Algorithm}

In this section, we propose a baseline algorithm that seeks the optimal result within a subspace in an incremental fashion. A lower bound and an upper bound are used to terminate the search early when possible.

Starting from a search region centered at the query $q$ with radius $r$, we execute an exhaustive search to get the best object set $R$ that minimizes the distance function ($|R| \leqslant |q.\Psi|$). Then we enlarge the search radius to $r=r+\Delta r$ and search the new region for a better object set $R'$. During this process, we use a set $S$ to store all the solutions found, and an upper bound $\mathcal{UB}=min_{R \in S}(\mathcal{D}ist(q,R))$ and a lower bound $\mathcal{LB}=\alpha \times \beta \times r$ are dynamically maintained. 
%Spatial distance between q and a subset $O'$ of all objects is calculated by $D_S'(q,O')=\beta \times max_{o_i\in O}{(||q.\lambda,o_i.\lambda||)}$. 
%Then we measure the distance to query of all possible subsets $O'$ where $|O'|\leqslant |q.\Psi|$ and all query objectives can be matched. Next, we set the subset with minimum distance as inital object set. The distance of inital object is the upper bound $\mathcal{UB} = min_{O' \in O}(\mathcal{D}ist(q,O'))$.
%Obviously, the final returned object set must have a distance less than $\mathcal{UB}$ and higher than a lower bound $\mathcal{LB} = \alpha \times \beta \times R$.

During the process, if $\mathcal{UB} < \mathcal{LB}$ or the search radius reaches the most distant object, the algorithm terminates and returns the best group in the solution set $S$, because no untouched object can lead to a lower distance. Obviously, however, this algorithm may still require an exhaustive search because the bound is relatively loose. Therefore, more efficient approaches are required to find competitive results.
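The baseline loop can be sketched as follows; this is our own simplification, with the two distance functions injected as callables (`obj_dist(o)` stands in for $||q.\lambda,o.\lambda||$ and `set_dist(S)` for Eq. (4)):

```python
import itertools

def baseline_search(objects, obj_dist, set_dist, k, alpha, beta, dr):
    # Grow the radius by dr, exhaustively enumerate subsets of size
    # at most k = |q.Psi| inside it, and stop once the lower bound
    # alpha * beta * r exceeds the best distance found so far (UB).
    r, ub, best = dr, float("inf"), None
    r_max = max(obj_dist(o) for o in objects)
    while True:
        inside = [o for o in objects if obj_dist(o) <= r]
        for size in range(1, k + 1):
            for subset in itertools.combinations(inside, size):
                d = set_dist(subset)
                if d < ub:
                    ub, best = d, subset
        if ub < alpha * beta * r or r >= r_max:
            return best, ub
        r += dr
```

As noted above, the bound is loose: in the worst case the radius grows until the whole region has been scanned.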

\section{Semantic Hashing Based Algorithm}

In this section, we propose a novel solution called the semantic hashing based algorithm (SH-algorithm in short) to speed up the querying process. In Section~\ref{subsec:index_structure}, we introduce the details of the LDA based indexing structure. Section 4.2 presents the search algorithm over the index.

\subsection{Index Structure}
\label{subsec:index_structure}

In this subsection, we devise a new index, namely the LIR-tree, based on LSH and the IR-tree. The index is composed of two parts, i.e., the LSH part and the IR-tree part. As is known, LSH \cite{buhler2001efficient,datar2004locality,slaney2008locality} is a method widely used for similarity search in high dimensions. We first utilize LSH to preprocess all the objects in the dataset, i.e., to hash the objects into buckets based on their topic distributions. Every bucket in LSH can be regarded as a tag of the semantic meanings of the objects in it; that is to say, the objects in the same bucket are considered similar in semantics. In this way, every object in the dataset obtains the bucket ids it is hashed into, which makes semantic similarity search based on bucket ids possible. Then we use the IR-tree \cite{chen2006efficient,rocha2011efficient,yao2010approximate} to organize the objects according to their geographical locations and corresponding bucket ids for a given query with a specified location and bucket ids.

\begin{figure}[htbp]
	\centering
	\includegraphics[width=1\textwidth]{image/figure3.pdf}
	\caption{An example of LIR-tree}
	\label{fig:LSH_Index_and_LIR_tree}
\end{figure}

\vspace{5pt}
\noindent
\textbf{\emph{LSH part.}} LSH is a well-known index scheme for high-dimensional similarity search, whose basic idea is to use a family of locality-sensitive hash functions to map similar objects into the same buckets with high probability. LSH hash families have the property that objects close to each other have a higher colliding probability than those far apart, where closeness is determined by the chosen distance measure. In this paper, we use the hash family proposed by Datar et al. \cite{datar2004locality} based on $p$-stable distributions \cite{indyk1998approximate,zolotarev1986one}, which is defined as:
\begin{eqnarray}
h\left(p\right) = \lfloor\frac{a\cdot p + b}{W}\rfloor
\end{eqnarray}
where $a$ is a random vector whose entries are drawn from a $p$-stable distribution, $W$ represents the width of the hash function, and $b$ is a random variable drawn uniformly from $[0,W]$. 
%Note that each hash function $h_{a,b}(p)$ maps $p$ to a line. The line is divided into slots of length $W$, which are the buckets that objects will be hashed into. 
All the objects in the dataset are divided into corresponding buckets based on their topic distributions. Each bucket can be considered as the semantic tag of the objects in it, and the objects in the same bucket have high proximity in semantics. We record the geographical location and the bucket ids that each object in the dataset is hashed into.
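A single hash function of Eq. (5) can be sketched in Python as follows (a sketch under the common choice of Gaussian, i.e. 2-stable, projections; names are illustrative):

```python
import math
import random

def make_hash(dim, W, rng):
    # One hash function of Eq. (5): h(p) = floor((a . p + b) / W),
    # with the entries of `a` drawn i.i.d. from a 2-stable (Gaussian)
    # distribution and b drawn uniformly from [0, W).
    a = [rng.gauss(0.0, 1.0) for _ in range(dim)]
    b = rng.uniform(0.0, W)
    return lambda p: math.floor((sum(x * y for x, y in zip(a, p)) + b) / W)
```

A table of $L$ such functions buckets each topic distribution; identical distributions always collide, and nearby distributions collide with high probability.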

\vspace{5pt}
\noindent
\textbf{\emph{IR-tree part.}} The IR-tree part of the LIR-tree is similar to the conventional inverted R-tree, except that we store the inverted lists of the buckets of the objects derived by LSH, rather than the keywords that describe the objects. All the objects in the dataset are organized in the R-tree according to their geographical locations. Since each object also carries the bucket ids it is hashed into, we build the inverted lists of the R-tree nodes in a bottom-up fashion. The inverted list of both leaf and non-leaf nodes consists of the \emph{buckets} and, for each bucket, the \emph{objects} hashed into it.

\subsection{Search Algorithm}

In this subsection, we propose a search algorithm that prunes the search space effectively over the proposed index. The pruning is accomplished on the topic layer and the spatial layer respectively.

Let us first consider how to match all query objectives. Recall that all objects in the dataset have a topic distribution after applying the LDA model to interpret their textual descriptions. The objects are then hashed into buckets by LSH on top of their topic distributions. By using LSH, objects in the same bucket are supposed to be consistent in semantics, so each bucket can be understood as a semantic tag. Given a query $q$, we can derive a topic distribution for each objective in $q$, and then hash the query objectives into the LSH buckets in the same way as the objects. By taking advantage of the LSH structure, we can simply evaluate whether an object matches a query objective by checking whether they share a semantic tag (i.e., fall in the same LSH bucket), and accordingly, the semantic distance can be rewritten as:
\begin{eqnarray}
d_{T}(q.\Psi_{i},o) = 
\begin{cases}
    0 & \text{if $\exists B_{ij}: q.\Psi_{i} \in B_{ij}$ and $o \in B_{ij}$}\\
    \infty & \text{otherwise}
\end{cases}
\end{eqnarray}
where $q$ and $o$ are the query and the object respectively. Equation 6 means that the objects in the buckets that $q$ is hashed into have high semantic proximity to the query, while the objects in other buckets are not semantically relevant. Objects that are semantically independent of the query can be pruned directly. 

On top of the LSH based semantic distance defined above, the next issue is to judge whether a given object set is semantically relevant to all query objectives. Conceptually, this requires that, for each query objective, some object from the set shares a bucket with it (i.e., carries the same semantic tag). To define the semantic relevance more clearly, we further introduce the concept of \emph{candidate bucket set} as follows.

\vspace{5pt}
\noindent 
\textbf{\emph{Definition 5.}} (\emph{Candidate bucket set}) A candidate bucket set is a smallest unit of buckets that ensures relevance to all objectives of a query. Given a query $q$, a candidate bucket set $cbs$ satisfies the following two requirements:
(1) containment: $cbs$ contains at least one bucket of each query objective $q_{i}$ $\in$ $q.\Psi$, i.e., BS($q_{i}$) $\cap$ $cbs$ $\neq$ $\emptyset$, where BS($q_{i}$) is the set of buckets $q_{i}$ is hashed to; 
(2) minimality: the containment condition fails for every proper subset $cbs'\subsetneqq cbs$. 

The candidate bucket set ensures that all query objectives can be matched. Given a set of objects $O$, it can be returned if and only if the union of the buckets containing an object in $O$ covers a candidate bucket set of the query $q$. In Figure~\ref{fig:LSH_Index_and_LIR_tree}, $\{b_{11},b_{22},...,b_{LM}\}$ is a candidate bucket set, whereas $\{b_{11},b_{21},...,b_{L1}\}$ covers only $\{TD_{q1},TD_{q3}\}$ and misses $TD_{q2}$, which violates the containment property.

Here we describe the searching mechanism. The target of the SH-algorithm is to find the object set such that: (1) its related bucket set covers at least one candidate bucket set, so that all query objectives are met; (2) the overall distance is minimum. We use a candidate bucket set oriented searching mechanism. For each candidate bucket set, if each bucket is regarded as a keyword (denoting a semantic tag), our problem can be transferred to the well studied collective spatial keyword query \cite{cao2011collective}. We apply the Top-Down Search algorithm \cite{cao2011collective} to obtain the best object set for the given candidate bucket set. The basic idea of the Top-Down Search algorithm is to perform a best-first search on the IR-tree to find the covering node sets, such that some objects from these nodes can constitute a group covering all required buckets in the set. We process the covering node set with the lowest cost to find covering node sets from their child nodes. When reaching a covering node set consisting of leaf nodes, a group of objects with the lowest cost can be found by an exhaustive search. The Top-Down Search algorithm returns the exact result set, and it is invoked $L^{|q.\Psi|}$ times according to Lemma~\ref{candidatebucketset}.


\begin{lem}\label{candidatebucketset}
	There are $L$ hash tables and the query $q$ has $|q.\Psi|$ textual descriptions. The SH-based Algorithm retrieves collective objects $L^{|q.\Psi|}$ times.
\end{lem}

\textbf{Proof:} Assume that there are $L$ hash tables, so each textual description is hashed into $L$ buckets. The $|q.\Psi|$ textual descriptions in the query correspond to $|q.\Psi|$ bucket sets of the same size $L$, i.e., $l(BS_i)=L, i\in[1,|q.\Psi|]$. All candidate bucket sets are produced by the Cartesian product $BS_1 \times BS_2 \times ... \times BS_{|q.\Psi|}$, and obviously the size of this set is $L^{|q.\Psi|}$. 

The SH-based Algorithm takes each candidate bucket set as an argument to retrieve the corresponding collective objects. Therefore, this query step is repeated $L^{|q.\Psi|}$ times, which proves Lemma 1. ~~$\blacksquare$
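The enumeration in this proof can be sketched directly; the bucket ids below are illustrative placeholders:

```python
import itertools

def candidate_bucket_sets(bucket_sets):
    # Lemma 1: the candidate bucket sets are the Cartesian product
    # BS_1 x ... x BS_n, choosing one bucket per query objective, so
    # |CBS| = L^{|q.Psi|} when every BS_i has size L.
    return [set(choice) for choice in itertools.product(*bucket_sets)]
```

With two objectives hashed into two buckets each, this yields $2^2 = 4$ candidate bucket sets.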

\begin{algorithm}[htb]
\caption{\emph{SH} based Search Algorithm}
\KwIn  {IR-tree $ir$, query $q$, $\lambda$\\}
\KwOut {a set of objects $O$\\}

	$O=\emptyset$;\\

	$CBS$ $\leftarrow$ $BS(q_1) \times BS(q_2) ... \times BS(q_n)$;\\
	
	\For{each $cbs$ in $CBS$}
	{
		$O'$ and $Dist(q,O')$ $\leftarrow$ Top-Down Search($q$, $cbs$);\\
		\If {$Dist(q,O')$ $<$ $Dist(q,O)$}
		{
			$Dist(q,O)$ = $Dist(q,O')$;\\
			$O$ = $O'$;\\
		}
	}		
    return $O$ and $Dist(q,O)$;\\
\end{algorithm}

This method avoids the worst situation in which the whole search region needs to be scrutinized. The search process examines at most $|BS(q_1)| \times ... \times |BS(q_n)|$ candidate bucket sets, each of which calls for a Top-Down Search whose time complexity is $|q.\Psi|-|N|+1$, where $N$ represents the set of nodes that cover the query keywords and each node contributes at least one object to the final result \cite{cao2011collective}. The time complexity of the SH-based Algorithm is thus $L^{|q.\Psi|} \times (|q.\Psi|-|N|+1)$, which is still not polynomial. The computational overhead therefore increases rapidly as the number of query objectives grows.

\section{Distance Based Replacement Algorithm} 

This section presents a novel strategy called the Distance Based Replacement (DBR) Algorithm, which starts from the SH-based algorithm but aims to find a high quality result more efficiently. Instead of taking every possible candidate bucket set as input, the DBR algorithm randomly samples a number of candidate bucket sets and derives a result based on the SH algorithm. It then aims to improve the result by iterative replacement. The DBR Algorithm also takes both the spatial and the topic dimension into consideration, and the LIR-tree is still used to manage the objects.

At the beginning of DBR, we randomly sample some candidate bucket sets and choose the object set with minimum distance through the SH-based Algorithm. Objects in this set are then replaced individually and iteratively until a stable object set is found according to Lemma~\ref{distancereplacement}. A stable object set is obtained by iteratively replacing objects until no reduction of the distance can be achieved. The critical question is how to set the criterion for replacement. In each replacement, an object in the current object set is replaced by an object outside this set that has a larger $\mathcal{MG}$ than the others, where the \emph{marginal gain} $\mathcal{MG}$ is defined as
\begin{eqnarray}
\mathcal{MG}(o_i,o_j) = Dist(q,O)-Dist(q,O')
\end{eqnarray}
where $O=\{o_1,..,o_i,..,o_n\}$ is the current object set and $O'=\{o_1,..,o_j,..,o_n\}$ is the object set after $o_i$ is replaced by $o_j$. The object $o_i$ should be replaced by the object $o_j \notin O$ that attains $\max \limits_{o_j \notin O}(\mathcal{MG}(o_i,o_j))$ with $\mathcal{MG}(o_i,o_j)>0$. The replacement strategy terminates when no object with a positive marginal gain can be found, i.e., for each object $o_i$ in the object set and each $o_j \notin O$, $\mathcal{MG}(o_i,o_j) \leqslant 0$. The stable object set found by repeating the replacement strategy is the final result of the DBR Algorithm, and it may have a lower distance than the object set found by the SH-based Algorithm on the sampled candidate bucket sets.

\begin{lem} \label{distancereplacement}
	Given the dataset and the bucket sets related to the query objectives, a stable object set can be found through the distance based replacement strategy.
\end{lem}

\textbf{Proof:} Assume that the object set obtained by the distance based replacement strategy is not stable. Then there must exist an object $o_i$ in this object set that can still be replaced, i.e., we can find an object $o_j$ with $\mathcal{MG}(o_i,o_j) > 0$. This contradicts the termination condition. Therefore, the object set must be stable, and Lemma 2 is proved. ~~$\blacksquare$

\begin{algorithm}[htb]
	\caption{Distance based Replacement Search Algorithm}
	\KwIn  {IR-tree $ir$, query $q$, $\lambda$\\}
	\KwOut {a set of objects $V$\\}
	
	$\mathcal{D}$ist(q,V)=$\infty$;\\
	$CBS$ $\leftarrow$ CartesianProduct(q,O);\\
	
	tempSet $\leftarrow$ $TopDownSearch(randomCombination(CBS))$;\\
	
	V = $\min(Dist(q,tempSet))$;
	
	\For{object $o_i$ in $V$}
	{
		\For{object $o_j$ in $O$}
		{
			$(o_i',o_j')$=$argmax(\mathcal{MG}(o_i,o_j))$;\\
			V=Replace($o_i$,$o_j'$);\\
		}
	}
	return V;\\
\end{algorithm}

The pseudocode is shown in Algorithm 2. Firstly, we obtain all candidate bucket sets $CBS$ by calculating the Cartesian product of the buckets corresponding to each query objective (line 2). Next, we randomly select some candidate bucket sets and search for the object set $V$ with minimum distance (lines 3-4). Then, we compute $\mathcal{MG}(o_i,o_j)$ to find an object $o_j$ with maximum marginal gain and apply the DBR strategy to find a stable object set (lines 5-8). Through the DBR strategy, the time-consumption problem caused by the massive number of candidate bucket sets is alleviated. Assuming there are $n$ objects in the dataset and the stable object set is found after $m$ replacements, the time complexity of the DBR algorithm is $O(n \times m)$.
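The replacement phase can be sketched as below; this is our own simplification, with $\mathcal{D}ist(q,\cdot)$ injected as a callable `dist_fn` and illustrative names throughout:

```python
def dbr_refine(initial, universe, dist_fn):
    # Replacement phase of DBR: repeatedly swap one member of the
    # current set for the outside object with the largest positive
    # marginal gain MG (Eq. 7), until no swap reduces the distance.
    current = list(initial)
    while True:
        base = dist_fn(current)
        best_gain, best_swap = 0.0, None
        for i in range(len(current)):
            for oj in universe:
                if oj in current:
                    continue
                trial = current[:i] + [oj] + current[i + 1:]
                gain = base - dist_fn(trial)  # MG(o_i, o_j)
                if gain > best_gain:
                    best_gain, best_swap = gain, (i, oj)
        if best_swap is None:  # stable: every MG <= 0 (Lemma 2)
            return current
        i, oj = best_swap
        current[i] = oj
```

Each iteration strictly decreases the distance, so the loop terminates in the stable set of Lemma 2.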

\section{Experiment Study}

In this section, we conduct extensive experiments on real datasets to evaluate the performance of our proposed algorithms.

\subsection{Experiment Settings}

We use a real object dataset created from the online check-in records of Foursquare within the area of New York City. Each record contains the user ID, a venue with its geographical location (place of interest) and the tips written in English. We concatenate the records belonging to the same object to form its textual description, and the textual description of each place is interpreted as a probabilistic topic distribution by the \emph{LDA} model. The dataset contains 206,097 objects in total.
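As a rough illustration of this preprocessing step (the paper does not name a particular LDA implementation; scikit-learn and the toy corpus below are our own assumptions), the concatenated tips of each object can be turned into a topic distribution as follows:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy textual descriptions: all tips of one object concatenated together.
docs = [
    "great coffee cozy cafe espresso latte",
    "espresso latte coffee beans roastery",
    "pizza pasta italian wine dinner",
    "wine pasta italian restaurant dinner",
]

# Bag-of-words counts, then LDA with t latent topics (t = 50 in the paper;
# 2 topics suffice for this toy corpus).
counts = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
theta = lda.fit_transform(counts)  # one topic distribution per object

# Each row of `theta` is a probability vector over the latent topics.
for row in theta:
    assert abs(sum(row) - 1.0) < 1e-6
```

The resulting rows of `theta` are the probabilistic topic distributions on which the semantic distance between objects and queries can then be computed.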

\begin{table}[!htbp]
	\centering
	\begin{tabular}{c|c|c}
		\hline
		\textbf{Parameter} & \textbf{Default Value} & \textbf{Description} \\
		\hline
		$|q.\Psi|$ & 3 & number of query objectives \\
		\hline
		$\alpha$ & 0.5 & weight factor for spatial distance\\
		\hline
		$\beta$ & 0.5 & weight for distance function \\
		\hline
		$L$ & 100 & number of hash tables \\
		\hline
		$M$ & 8 & number of \emph{LSH} functions \\
		\hline
		$t$ & 50 & number of latent topics \\
		\hline
	\end{tabular}
	\setlength{\abovecaptionskip}{8pt}
	\caption{Default values of parameters}
	\label{table:default_values}
\end{table}

We compare the query time cost of the proposed algorithms. The default values of the parameters are given in Table~\ref{table:default_values}. All algorithms are implemented in Java and run on a PC with an Intel Core CPU at 2.5GHz and 4GB memory.

\subsection{Performance Evaluation}

\noindent \emph{(1) Comparisons of proposed methods}

In this part, we vary parameters in Table~\ref{table:default_values} to compare SH-based Algorithm and DBR Algorithm.

\noindent \textbf{Effect of $|q.\Psi|$.} We investigate the effect of $|q.\Psi|$ on the efficiency and accuracy of the proposed algorithms. As shown in Figure~\ref{fig:effect_of_query}, the query time of the baseline algorithm and the SH-based Algorithm is higher than that of the DBR Algorithm. As the number of query objectives increases, all algorithms incur more time cost and visit more objects, since they all utilize a high dimensional index to retrieve candidate objects around each query objective, so the search is repeated for each additional objective. The query time increases dramatically when the number of query objectives exceeds 6 due to the massive number of candidate bucket sets created by the Cartesian product.

\noindent \textbf{Effect of $t$.} We proceed to examine the effect of the number of topics in the topic model by plotting the query time and the average distance between the returned object set and the query. As shown in Figure~\ref{fig:topicsnumber}, the query time shows an ascending tendency as $t$ goes up. The reason is that with the increase of $t$, each calculation of the semantic distance takes more time.

\noindent \textbf{Effect of $\alpha$.} Figure~\ref{fig:AlgorithmLamda} shows the influence of the weight parameter $\alpha$, which ranges over $[0,1]$. The curves show a unified tendency: the query time first increases and then decreases once $\alpha$ exceeds a certain value. The SH-based Algorithm always takes the most time to complete the search process, while the DBR Algorithm has the best time performance. Both the SH-based Algorithm and the DBR Algorithm are more accurate than the baseline algorithm. The distance cost increases smoothly as $\alpha$ grows because $\alpha$ has a stronger effect than $\beta$.

\noindent \textbf{Effect of $\beta$.} Comparing Figure~\ref{fig:AlgorithmLamda} and Figure~\ref{fig:AlgorithmBeta}, $\alpha$ and $\beta$ show the same trend in query time. However, the influence of $\beta$ is smaller than that of $\alpha$, since $\alpha$ weights the spatial distance while $\beta$ weights the distance function.
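For intuition about how such a weight parameter acts, the sketch below shows a hypothetical convex combination of a spatial and a semantic distance controlled by $\alpha$. The paper's actual distance function, and the precise role of $\beta$ within it, are defined in its earlier sections; this is purely illustrative.

```python
import math

def combined_dist(q_loc, q_topic, o_loc, o_topic, alpha=0.5):
    """Illustrative distance mixing a spatial and a semantic term.

    A hypothetical convex combination, not the paper's definition:
    inputs are assumed normalized so both terms lie in [0, 1].
    """
    spatial = math.dist(q_loc, o_loc)  # Euclidean distance of locations
    # Total-variation distance between the two topic distributions.
    semantic = sum(abs(a - b) for a, b in zip(q_topic, o_topic)) / 2
    return alpha * spatial + (1 - alpha) * semantic
```

Under this reading, growing $\alpha$ shifts the ranking toward spatially close objects at the expense of semantic similarity, which is consistent with the monotone trends observed in the figures.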
\begin{figure}
	\setlength{\abovecaptionskip}{8pt}
	%	\captionsetup{belowskip=-14pt}
	\begin{minipage}[t]{0.49\linewidth}
		\begin{minipage}[t]{0.49\linewidth}
			\centering
			\includegraphics[width=1.22in,height=0.9in]{image/AlgorithmQvalueNYC.eps}
			~~(a) Efficiency
		\end{minipage}
		\begin{minipage}[t]{0.49\linewidth}
			\includegraphics[width=1.22in,height=0.9in]{image/Accofqpsi.eps}
			\centering{(b) Accuracy}
		\end{minipage}
		\caption{Effect of $|q.\Psi|$}
		\label{fig:effect_of_query}
	\end{minipage}%
	\begin{minipage}[t]{0.49\linewidth}
		\begin{minipage}[t]{0.49\linewidth}
			\centering
			\includegraphics[width=1.22in,height=0.9in]{image/topicsnumber.eps}
			~~(a) Efficiency
		\end{minipage}
		\begin{minipage}[t]{0.49\linewidth}
			\includegraphics[width=1.22in,height=0.9in]{image/Accoft.eps}
			\centering(b) Accuracy
		\end{minipage}
		\caption{Effect of $t$}
		\label{fig:topicsnumber}
	\end{minipage}
\end{figure}
\begin{figure}
	\setlength{\abovecaptionskip}{8pt}
	%	\captionsetup{belowskip=-14pt}
	\begin{minipage}[t]{0.49\linewidth}
		\begin{minipage}[t]{0.49\linewidth}
			\centering
			\includegraphics[width=1.22in,height=0.9in]{image/lamdaNYC.eps}
			~~(a) Efficiency
		\end{minipage}
		\begin{minipage}[t]{0.49\linewidth}
			\includegraphics[width=1.22in,height=0.9in]{image/distanceNYC.eps}
			\centering{(b) Accuracy}
		\end{minipage}
		\caption{Effect of $\alpha$}
		\label{fig:AlgorithmLamda}
	\end{minipage}%
	\begin{minipage}[t]{0.49\linewidth}
		\begin{minipage}[t]{0.49\linewidth}
			\centering
			\includegraphics[width=1.22in,height=0.9in]{image/betaNYC.eps}
			~~(a) Efficiency
		\end{minipage}
		\begin{minipage}[t]{0.49\linewidth}
			\includegraphics[width=1.22in,height=0.9in]{image/distanceLA.eps}
			\centering(b) Accuracy
		\end{minipage}
		\caption{Effect of $\beta$}
		\label{fig:AlgorithmBeta}
	\end{minipage}
\end{figure}

\vspace{5pt}
\noindent 
\emph{(2) Evaluations of LSH parameters}

\vspace{5pt}
The performance of the SH-based Algorithm and the DBR Algorithm is mainly influenced by the LIR-tree, which is characterized by the number of hash tables $L$ and the number of hash functions $M$. In this part, we tune these parameters in sequence to evaluate the performance of the LIR-tree. Since LSH is an approximate similarity index structure, the result returned for a query may be inexact. We therefore analyze the performance of the LIR-tree in terms of search quality and search speed. Ideally, the LIR-tree should achieve high-quality search results at high speed. Search quality is measured by the distance to the query; search speed is measured by the query time, i.e., the time spent answering a query; and the space requirement is measured by the total number of hash tables needed.
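As background for these two parameters, the sketch below shows a generic sign-random-projection LSH with $L$ tables, each keyed by an $M$-bit signature over topic vectors. The paper's LIR-tree combines such hashing with an IR-tree; this sketch covers only the hashing layer, and all helper names are our own.

```python
import random

def build_lsh(vectors, L=100, M=8, dim=50, seed=0):
    """Generic sign-random-projection LSH: L tables of M-bit signatures.

    Illustrative only; the paper's LIR-tree layers this kind of hashing
    over an IR-tree. `vectors` are, e.g., topic distributions of objects.
    """
    rng = random.Random(seed)
    # M random hyperplanes per table; each contributes one sign bit.
    planes = [[[rng.gauss(0, 1) for _ in range(dim)] for _ in range(M)]
              for _ in range(L)]

    def signature(v, table):
        bits = 0
        for plane in planes[table]:
            dot = sum(p * x for p, x in zip(plane, v))
            bits = (bits << 1) | (dot >= 0)  # sign bit of the projection
        return bits

    # Index every vector into all L tables under its per-table signature.
    tables = [dict() for _ in range(L)]
    for idx, v in enumerate(vectors):
        for t in range(L):
            tables[t].setdefault(signature(v, t), []).append(idx)

    def query(v):
        # Candidates: union of the buckets v falls into across all tables.
        cand = set()
        for t in range(L):
            cand.update(tables[t].get(signature(v, t), []))
        return cand

    return query
```

In this construction a larger $L$ enlarges the union of candidate buckets (higher recall, more space and probing time), while a larger $M$ lengthens signatures and makes each bucket more selective, which matches the trade-offs examined below.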

\begin{figure}
	\setlength{\abovecaptionskip}{8pt}
	%	\captionsetup{belowskip=-14pt}
	\begin{minipage}[t]{0.49\linewidth}
		\begin{minipage}[t]{0.49\linewidth}
			\centering
			\includegraphics[width=1.22in,height=0.9in]{image/AlgorithmLvalueNYC.eps}
			~~(a) Efficiency
		\end{minipage}
		\begin{minipage}[t]{0.49\linewidth}
			\includegraphics[width=1.22in,height=0.9in]{image/Accofl.eps}
			\centering{(b) Accuracy}
		\end{minipage}
		\caption{Effect of $L$}
		\label{fig:Algorithm_SH_mvalue}
	\end{minipage}%
	\begin{minipage}[t]{0.49\linewidth}
		\begin{minipage}[t]{0.49\linewidth}
			\centering
			\includegraphics[width=1.22in,height=0.9in]{image/AlgorithmMvalueNYC.eps}
			~~(a) Efficiency
		\end{minipage}
		\begin{minipage}[t]{0.49\linewidth}
			\includegraphics[width=1.22in,height=0.9in]{image/Accofm.eps}
			\centering(b) Accuracy
		\end{minipage}
		\caption{Effect of $M$}
		\label{fig:Algorithm_cbr_mvalue}
	\end{minipage}
\end{figure}

\noindent \textbf{Effect of $L$.} $L$ is the number of hash tables, which impacts both the accuracy of the result set and the space consumption. Intuitively, a larger $L$ indicates more information provided by the LIR-tree, which improves the accuracy of the returned object sets. To achieve higher quality at lower cost, we vary $L$ from 10 to 50. As shown in Figure~\ref{fig:Algorithm_SH_mvalue}, the query time increases smoothly as $L$ grows. The experiment indicates that the DBR Algorithm takes less query time than the SH-based Algorithm for the same number of topics.

\noindent \textbf{Effect of $M$.} $M$ is a fundamental parameter for constructing a hash table. A group of experiments is conducted to evaluate the performance of the LIR-tree under different values of $M$. We vary $M$ from 6 to 10 with $L$ fixed at 30. The space requirement varies only slightly across these settings with fixed $L$, so we are only concerned with the accuracy of the returned collective object set, measured by its distance. Figure~\ref{fig:Algorithm_cbr_mvalue} indicates that both the query time and the accuracy are indeed affected by $M$. The query time goes up smoothly as $M$ increases. Similar to the effect of $L$, the DBR Algorithm costs much less time to obtain the result due to its lower time complexity.

To sum up, comparing the SH-based Algorithm with the DBR Algorithm, the DBR Algorithm achieves relatively high accuracy within a short time in all settings, and takes less time to obtain a high quality collective object set.

\section{Related Work}

With the prevalence of spatial objects associated with textual information on the Internet, spatial keyword queries that exploit both location and textual description are gaining in prominence. A spatial keyword query takes a user location and user-supplied keywords as arguments and returns web objects that are spatially and textually relevant to these arguments. Many contributions have already been made in the literature studying different aspects of spatio-textual querying. Some efforts support the \emph{SKBQ} \cite{cong2009efficient,de2008keyword,li2012desks,zhang2016inverted}, which requires exact keyword matches and may therefore lead to few or no results being found. To overcome this problem, much work has been done to support the \emph{SKAQ} \cite{li2013spatial,rocha2011efficient,yao2010approximate}, which ensures that the query results are no longer sensitive to spelling errors and conventional spelling differences. Many novel index structures have been proposed to support efficient processing of \emph{SKBQ} and \emph{SKAQ}, such as the \emph{IR-tree} \cite{cong2009efficient}, \emph{IR$^2$-tree} \cite{de2008keyword}, \emph{MHR-tree} \cite{yao2010approximate}, \emph{S2I} \cite{rocha2011efficient}, etc. Numerous works study the problem of spatial keyword querying in the settings of \emph{collective querying} \cite{cao2011collective}, \emph{why-not questions} \cite{tran2010conquer}, \emph{continuous querying} \cite{barbieri2009c}, \emph{interactive querying} \cite{jin2010interactive}, etc.
Specifically, \cite{zhang2016inverted} addresses a more challenging problem on spatial keyword top-$k$ queries, where some known object is unexpectedly missing from a result; \cite{tran2010conquer} investigates a novel problem, namely, continuous top-$k$ spatial keyword queries on road networks; \cite{jin2010interactive} eliminates the requirement that users explicitly specify their preferences between spatial proximity and keyword relevance by enhancing the conventional queries with interaction; \cite{cao2011collective} studies the problem of retrieving a group of spatial objects such that the keywords in that group cover all those in the query, and the group of objects is nearest to the query location with the lowest inter-object distances; and \cite{qian2016efficient} proposes methods to retrieve objects that are semantically related to the query keywords. However, as far as we know, none of these existing approaches can retrieve spatial objects that are semantically relevant but morphologically different and that collectively cover all user-supplied keywords. Therefore, in this paper, we investigate topic model based collective spatial keyword querying to recommend to users collective spatial objects that have both high spatial and semantic similarities to the query.



To the best of our knowledge, the works that retrieve groups of spatial web objects relate to the CSKQ \cite{cao2011collective} and the mCK query \cite{zhang2009keyword,zhang2010locating}, both of which take a set of keywords as arguments. The CSKQ algorithm takes a query with a set of keywords and returns a group of objects that exactly match the query keywords. Our algorithm not only takes the rational spatial distribution into consideration but is also committed to matching collective objects in the semantic dimension, drawing on the experience of \cite{qian2016efficient}.

%The processing of $k$ nearest neighbor queries in spatial databases is a classical subject. Our work is related to top-k query processing\cite{}.

\section{Conclusion}

This paper addresses the problem of retrieving a group of spatial web objects more effectively and reasonably by converting keyword matching into topic distribution matching. A probabilistic topic model is utilized to interpret the textual descriptions attached to spatial objects and user queries as topic distributions. To support efficient top-$k$ spatial keyword queries in the spatial, topic and textual dimensions, we propose a novel method for managing objects that combines the IR-tree and LSH index structures, together with effective search algorithms that prune the high dimensional search space with regard to spatial, semantic and textual similarities. Extensive experimental results on real datasets demonstrate the efficiency of our proposed methods.

\section*{Acknowledgement.}

\bibliographystyle{plain}
\bibliography{refer}



\end{document}