\section{Search-based Query Suggestion}
\label{sec:suggestion}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Query Suggestion vs. Web Search}\label{sect:suggestion:framework}

To provide suggestions for both popular and unpopular queries, we introduce a unified feature representation for queries, leveraging both the corresponding search results and query log sessions as context information.  Based on this representation, query suggestion can be performed in much the same way as ranking Web pages in general Web search.

More specifically, candidate queries to be suggested can be regarded as Web pages to be ranked.  To rank the suggestions, the similarity between each candidate and the original query and the candidate's query frequency are computed, serving respectively as the dynamic rank and the static rank (PageRank) of a Web page.  The two ranks are combined linearly.  Just as the snippets of Web search results help users judge relevance, we also generate a picture and a short text describing the relationship between the given query and each suggestion. A comparison of the Web search and query suggestion processes can be seen in Fig.~\ref{Fig:Fig2} and Fig.~\ref{Fig:Fig3}.

\begin{figure}
\centerline{\includegraphics[width=3.5in]{fig2}} \caption{The flowchart of Web search.}
\label{Fig:Fig2}
\end{figure}

\begin{figure}
\centerline{\includegraphics[width=3.5in]{fig3}} \caption{The flowchart of search-based query suggestion.}
\label{Fig:Fig3}
\end{figure}

Since the query log session based and search result based feature representations can handle popular and unpopular queries respectively (except for a few unpopular queries with neither log entries nor search results), we can calculate the relevance between any given query and the queries in the log and fetch the relevant suggestions.  Consequently, both popular and unpopular queries can be handled effectively.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Query Representation}\label{sect:suggestion:representation}

Given that most queries are short, representing them effectively is not a trivial task. A straightforward way is to extract features from the queries themselves, but there are at least two reasons why this is ineffective. On the one hand, some similar queries have low literal similarity.  For example, ``Microsoft CEO'' and ``Steve Ballmer'' are literally very different. On the other hand, some queries with high literal similarity are not similar at all.  For example, ``java island'' and ``java programming'' refer to different things. A more effective way of representing queries is to leverage context information. In the following, we introduce two kinds of context information: search results and query log sessions.

\subsubsection{Search Result Context}
Given a search engine, the top search results are usually highly related to the input query and can be used to represent it.  We use the vector space model to represent queries. More specifically, a query $q_i$ is represented by a vector:
\begin{equation}
\mathcal{F}^{sr}_{q_i} = (r^{sr}_{i,1}, r^{sr}_{i,2}, \cdots, r^{sr}_{i,M})
\end{equation}
where $r^{sr}_{i,j}$ represents the relevance between $q_i$ and the $j^{th}$ phrase, and $M$ is the number of all possible phrases in a given language such as English. We first introduce the phrase generation process. Note that phrase-level representations are used instead of the word-level representations in \cite{Sahami:06}.  Phrase-level representations have several advantages over word-level ones.  For example, for the query ``tiger'', the related phrases will be ``tiger woods'', ``white tiger'', etc.  With a word-level representation, the related words would include ``woods'' and ``white''.  As a result, queries whose results contain ``woods'' or ``white'' would be considered relevant suggestions, which is inappropriate.

To obtain the phrase-level query vector, there are three steps:
\begin{enumerate}
\item Extract keywords.
\item Combine keywords into key phrases.
\item Calculate the relevance of the key phrases.
\end{enumerate}

To be efficient and practical, instead of using whole Web pages, only the titles and snippets of the top search results are used to generate keywords.  Words that appear in these titles and snippets more often than a predefined threshold are considered related keywords.  Considering that some unpopular queries may have only a few results, the normalized frequency of each word is used:
\begin{equation}
f_{norm}(w) = \frac{|D(w)|}{|D|}
\end{equation}
where $w$ is a word, $D$ is the set of titles and snippets of the current query $q$, and $D(w)$ is the subset of $D$ whose elements contain $w$. If $f_{norm}(w)$ is larger than a threshold, $w$ is considered a relevant keyword of $q$.
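The keyword extraction step above can be sketched as follows; the threshold value and the whitespace tokenization are illustrative assumptions, not prescribed by this paper.

```python
# Sketch of keyword extraction from search-result titles and snippets.
# The threshold (0.4 by default) and the simple whitespace tokenizer
# are illustrative assumptions.
def extract_keywords(docs, threshold=0.4):
    """docs: list of title/snippet strings for the current query.
    Returns a dict mapping each relevant keyword to f_norm(w)."""
    words = {w for doc in docs for w in doc.lower().split()}
    keywords = {}
    for w in words:
        # f_norm(w) = |D(w)| / |D|
        f_norm = sum(1 for doc in docs if w in doc.lower().split()) / len(docs)
        if f_norm > threshold:
            keywords[w] = f_norm
    return keywords
```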

Based on the extracted keywords, key phrases are further determined by combining sequential keywords as shown in Table~\ref{Table:Table1}.
\begin{table}
\begin{center}
\caption{The Key Phrase Generation Algorithm}
\label{Table:Table1}
\begin{tabular}{cl}
\hline
  & \textbf{KeyPhraseGenerate}($W$) \\
\hline\hline
1. & Initialize $W$ as the set of all keywords. \\
2. & \textbf{do} \\
3. & \hspace{0.2in} \textbf{for} each two items $a$ and $b$ in $W$ \\
4. & \hspace{0.4in} $cor_{ocurr}(ab) = \frac{|D(ab)|}{\max(|D(a)|,|D(b)|)}$ \\
5. & \hspace{0.4in} \textbf{if} $cor_{ocurr}(ab) > threshold$ \textbf{then}\\
6. & \hspace{0.6in} $f_{norm}(ab) = \frac{|D(ab)|}{|D|}$\\
7. & \hspace{0.6in} $W = W \bigcup \{ab\}$\\
8. & \hspace{0.4in} \textbf{endif}\\
9. & \hspace{0.2in} \textbf{endfor} \\
10. & \textbf{while} (there are any new phrases generated). \\
11. & Return $W$ as the key phrase set. \\
\hline
\end{tabular}
\end{center}
\end{table}

In the algorithm, $cor_{ocurr}(ab)$ represents the relevance of two items $a$ and $b$ (either keywords or phrases generated in earlier iterations). Items $a$ and $b$ are combined into a new phrase only if $cor_{ocurr}(ab)$ is larger than a predetermined threshold ($0.5$ in the current implementation).  The normalized frequency of a phrase is defined in the same way as that of a keyword. By treating each newly generated phrase as a word, the process is iterated until no new phrase is generated.
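The KeyPhraseGenerate algorithm of Table~\ref{Table:Table1} can be sketched as follows; the tokenization and the way $D(\cdot)$ is counted (contiguous token subsequences) are our assumptions.

```python
# Sketch of the KeyPhraseGenerate algorithm (Table 1). D(x) counts the
# documents containing item x as a contiguous token subsequence; the
# co-occurrence threshold of 0.5 follows the text.
def key_phrase_generate(docs, keywords, threshold=0.5):
    docs = [d.lower().split() for d in docs]

    def count(item):  # |D(item)|
        toks = item.split()
        n = len(toks)
        return sum(1 for d in docs
                   if any(d[i:i + n] == toks for i in range(len(d) - n + 1)))

    W = set(keywords)
    changed = True
    while changed:                     # iterate until no new phrase appears
        changed = False
        for a in list(W):
            for b in list(W):
                ab = a + " " + b
                if ab in W or count(a) == 0 or count(b) == 0:
                    continue
                cooccur = count(ab) / max(count(a), count(b))
                if cooccur > threshold:
                    W.add(ab)
                    changed = True
    return W
```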

Similar to TF-IDF weighting, we weight each phrase by the product of its phrase frequency and its inverse query frequency. The inverse query frequency deemphasizes general phrases that are related to most queries, e.g., ``contact us''.
\begin{equation}
r^{sr}_{i,j} = \frac{f_{norm}(j)}{f_{global}(j)}
\end{equation}
where $f_{norm}(j)$ is the normalized frequency of the $j^{th}$ phrase and $f_{global}(j)$ is the number of queries to which the $j^{th}$ phrase is related.  $f_{global}(j)$ is set to $1$ if the $j^{th}$ phrase is not related to any query.  For phrases not in the set $W$, the $r^{sr}$ values are $0$.
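The weighting above can be illustrated with a small sketch; the sparse dict representation and the input names are ours.

```python
# Sketch of the search-result phrase weighting r^sr = f_norm / f_global.
# f_norm: phrase -> normalized frequency for this query (the set W);
# f_global: phrase -> number of queries the phrase is related to
# (defaulting to 1, as in the text). Phrases outside W implicitly get 0.
def search_result_vector(f_norm, f_global):
    return {p: f / max(f_global.get(p, 1), 1) for p, f in f_norm.items()}
```

Note how a generic phrase like "contact us", related to many queries, receives a much smaller weight than a query-specific phrase.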

\subsubsection{Query Log Session Context}
The other kind of context is the query log session.  Queries that co-occur with the submitted query in a sufficient number of query log sessions can be used as key phrases.

Table~\ref{Table:Table2} shows two typical query log sessions containing the query ``tiger''.  In the first session, user-1 wanted to search for the animal ``tiger''.  He first used ``tiger'' as the query.  Then he realized that most of the search results were related to ``tiger woods'' or Apple's Mac OS.  So he appended ``animal'' to the query and later used the more specific query ``Bengal tiger'' to refine the search results. In the second session, user-2 wanted information about animals.  He started with ``tiger'' and then followed with ``lion'' and other animals.
\begin{table}
\begin{center}
\caption{Query Log Session Examples in a Query Log}
\label{Table:Table2}
\begin{tabular}{ccc}
\hline
 & \textbf{User-1} & \textbf{User-2} \\
\hline\hline
$1^{st}$ entry & tiger & tiger \\
$2^{nd}$ entry & tiger animal & lion \\
$3^{rd}$ entry & Bengal tiger & snake \\
\hline
\end{tabular}
\end{center}
\end{table}

Queries are related if they appear together in a substantial number of user query log sessions (as consecutive queries). Query log session based algorithms leverage the query usage history of numerous users. Therefore, useful queries that do not contain the original query string can be suggested. For example, ``Nicole Kidman'' is suggested for ``Tom Cruise''.

Representing a query by its query log session context is similar to using the search result context. We again represent the query $q_i$ with a vector space model:
\begin{equation}
\mathcal{F}^{log}_{q_i} = (r^{log}_{i,1}, r^{log}_{i,2}, \cdots, r^{log}_{i,M})
\end{equation}
where $M$ is the number of all possible phrases. Similarly its relevance function can be expressed as follows:
\begin{equation}
r^{log}_{i,j} = \frac{N_{session}(q_i,j)}{N_{session}(j)}
\end{equation}
where $N_{session}(q_i,j)$ represents the number of sessions containing both the $j^{th}$ phrase and $q_i$, and $N_{session}(j)$ is the number of sessions containing the $j^{th}$ phrase. $r^{log}_{i,j}$ is set to $0$ if the $j^{th}$ phrase is not contained in any session.
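The session-based relevance above can be sketched as follows, assuming each session is given as a list of query strings.

```python
# Sketch of the session-context relevance r^log: the fraction of the
# sessions containing the phrase that also contain the query.
# Sessions are lists of query/phrase strings (cf. Table 2).
def session_relevance(sessions, query, phrase):
    n_phrase = sum(1 for s in sessions if phrase in s)
    if n_phrase == 0:
        return 0.0          # phrase appears in no session
    n_both = sum(1 for s in sessions if phrase in s and query in s)
    return n_both / n_phrase
```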

\subsubsection{Query Representation}
We combine the two representations of a query $q_i$ as:
\begin{eqnarray}
\mathcal{F}_{q_i} & = & \alpha \times \mathcal{F}^{sr}_{q_i} + \beta \times \mathcal{F}^{log}_{q_i} \nonumber \\ 
& = & \alpha \times (r^{sr}_{i,1}, r^{sr}_{i,2}, \cdots, r^{sr}_{i,M}) + \\
& & \beta \times (r^{log}_{i,1}, r^{log}_{i,2}, \cdots, r^{log}_{i,M}) \nonumber
\end{eqnarray}

We use a constant $0.5$ for both $\alpha$ and $\beta$ in the experiments. In the next section, we use a propagation process to smooth the weights between phrases from the search result context and the query log session context.
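The linear combination above, with sparse vectors stored as dicts, amounts to:

```python
# Combining the two sparse representations with alpha = beta = 0.5, as
# stated in the text. Vectors map phrases to weights; missing phrases
# count as 0.
def combine(sr_vec, log_vec, alpha=0.5, beta=0.5):
    phrases = set(sr_vec) | set(log_vec)
    return {p: alpha * sr_vec.get(p, 0.0) + beta * log_vec.get(p, 0.0)
            for p in phrases}
```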

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Relevance Propagation}\label{sect:suggestion:propagation}
Using common phrases to compute the relevance between queries is risky, since most query pairs share only a few common phrases. Many phrases have the same meaning but different literal expressions, such as ``bike'' and ``bicycle'', or ``NY'' and ``New York''. Increasing the size of the phrase list for each query can improve the result, but the effect is limited. Moreover, phrases from the two different sources are linearly combined with an empirical constant; we also wish to smooth their weights.

In fact, all the queries can be represented together as a large matrix $P$, where $P_{i,j} = r^{sr}_{i,j} + r^{log}_{i,j}$. Thus we can represent the queries with a bipartite graph, as illustrated in Fig.~\ref{Fig:Fig4}, where $P$ is the adjacency matrix of the graph. To solve the two problems above, we propagate relevance between queries and phrases over this bipartite relevance graph.
\begin{figure}
\centerline{\includegraphics[height=3in]{fig4}} \caption{Query representation with bipartite graph.}
\label{Fig:Fig4}
\end{figure}

\subsubsection{Relevance in bipartite graph}
Each query can be expressed as a ranked list of key phrases.  Assuming there are $N$ queries, we define a query relevance matrix $R$, in which $R_{i,j}$ represents the relevance between the pair of queries $q_i$ and $q_j$:
\begin{equation}
R_{i,j} = \sum_{1\leq k \leq M}P_{i,k}\times P_{j,k}
\end{equation}

Similarly, we define the phrase relevance in an $M$-by-$M$ matrix $K$, in which $K_{i,j}$ represents the relevance between phrases $p_i$ and $p_j$:
\begin{equation}
K_{i,j} = \sum_{1\leq k \leq M}P_{k,i}\times P_{k,j}
\end{equation}

The relevance between each pair of nodes in the bipartite graph $G$ can be represented as:
\begin{equation}
G = \left[\begin{array}{cc} R & P \\ P^{T} & K \end{array}\right]
\end{equation}

We can construct the adjacency matrix $A$ of $G$ using $P$ easily:
\begin{equation}
A = \left[\begin{array}{cc} 0 & P \\ P^{T} & 0 \end{array}\right]
\end{equation}

Suppose we traverse the graph $G$ starting from node $a$. The probability of taking a particular edge $\langle a, b\rangle$ is proportional to its edge weight relative to all edges from $a$. Thus we have the normalized Markov transition matrix $A^{N}$:
\begin{equation}
A^N_{a,b} = \frac{A_{a,b}}{\sum^{N+M}_{i=1}A_{a,i}}
\end{equation}
Therefore, the relevance propagation problem can be converted into a traditional random-walk problem to compute the relevance between each pair of query and phrase \cite{Sun:05}.
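The row normalization above can be sketched with NumPy; handling of zero-degree rows is our assumption.

```python
import numpy as np

# Row-normalizing the adjacency matrix A into the Markov transition
# matrix A^N: each row is divided by its sum. Rows with no edges are
# left as all zeros (an assumption; the paper does not cover this case).
def normalize_transition(A):
    A = np.asarray(A, dtype=float)
    row_sums = A.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0     # avoid division by zero
    return A / row_sums
```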

\subsubsection{Propagation algorithm}
First, we construct $G$ by Eq. 9. The parts $R$, $P$, $P^{T}$, and $K$ of $G$ can be constructed by Eq. 7 and Eq. 8. For the part $K$, we initialize $K_{i,j}$ to $1$ if $i=j$, and $0$ otherwise. The propagation process can be expressed as follows:
\begin{equation}
G^{(t+1)} = (1-c)\times G^{(t)} \times A^{N} + c\times G
\end{equation}
where $G^{(0)} = G$ and $c$ is the probability of restarting the random walk from the original graph (we use $c=0.15$) \cite{Sun:05}.
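The propagation update above (a random walk with restart) can be sketched as follows; iterating to a fixed tolerance rather than a fixed iteration count is our choice.

```python
import numpy as np

# Sketch of random walk with restart: repeatedly apply
#   G_next = (1-c) * G_cur @ A_norm + c * G
# until the change falls below a tolerance. c = 0.15 as in the text.
def propagate(G, A_norm, c=0.15, tol=1e-6, max_iter=200):
    G = np.asarray(G, dtype=float)
    G_cur = G.copy()
    for _ in range(max_iter):
        G_next = (1 - c) * G_cur @ A_norm + c * G
        if np.abs(G_next - G_cur).max() < tol:
            return G_next
        G_cur = G_next
    return G_cur
```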

In our system, we employ two million top queries as suggestion candidates and extract five million key phrases. This results in a very large relevance matrix $G$ and makes the propagation impractical on a single machine. To be scalable, we adopt a distributed solution that compresses the transition matrix $A$ and speeds up the process. For any query, since the related phrases discovered from search results and query log sessions are limited, most entries in the vector space model are $0$, and the matrix $A$ can be significantly compacted. In each propagation round, we only need to propagate the relevance with the compacted $A$. We then partition the matrix into ten parts and distribute the computation over ten machines for each propagation round. To make $(1-c)^{k}$ equal to $10^{-4}$, we need approximately $k=57$ iterations \cite{Strang:03, Sun:05}.

Though $G$ is huge, the propagation process can be computed offline and thus does not affect online efficiency. For a newly submitted query, we only need to search the candidates with the steady-state matrix $G$, as discussed in the next section.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Query Search}\label{sect:suggestion:search}

For a newly submitted query $q^{sub}$, we represent the query using the method presented in Section 3.2. Since the relevance between queries and phrases has already been propagated in Section 3.3, we can compute relevance scores easily by Eq. 7 and avoid the problem of sparse common phrases. Query suggestion thus becomes a search problem, as outlined in Section 3.1.

Based on the query representation and similarity function, the relevance score of a suggested query $q^{sug}$ for the user submitted query $q^{sub}$ is:
\begin{equation}
score(q^{sug}, q^{sub}) = \delta \times R_{q^{sug}, q^{sub}} + \psi \times \log{Q_{q^{sug}}}
\label{Eq:relevance}
\end{equation}
The first term $R$ represents the similarity between the queries $q^{sug}$ and $q^{sub}$, and the second term $Q$ represents the quality (popularity) of $q^{sug}$. As in a search engine, both the dynamic rank and the static rank are considered and linearly combined.

To accelerate the search process, we build an inverted list of phrases, as shown in Fig.~\ref{Fig:Fig5}. When a user submits a query, we first obtain its phrase representation $R(p|q)$. Since the number of phrases is limited, we can join the corresponding query lists from each inverted list, rank them with the score function in Eq.~\ref{Eq:relevance}, and return the top queries to the user.
\begin{figure}
\centerline{\includegraphics[height=3in]{fig5}} \caption{Inverted list for searching queries.}
\label{Fig:Fig5}
\end{figure}
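The inverted-list lookup can be sketched as follows; `delta`, `psi`, and the popularity table are illustrative stand-ins for the quantities in the score function above.

```python
import math
from collections import defaultdict

# Sketch of suggestion search over an inverted phrase index: for each
# phrase of the submitted query, collect candidate queries, accumulate
# the similarity part of the score, then rank with a popularity term.
def suggest(query_vec, inverted, popularity, delta=1.0, psi=0.1, top_k=5):
    """query_vec: phrase -> weight for the submitted query.
    inverted: phrase -> {candidate query -> weight}.
    popularity: candidate query -> frequency count (>= 1)."""
    scores = defaultdict(float)
    for phrase, w in query_vec.items():
        for cand, cw in inverted.get(phrase, {}).items():
            scores[cand] += w * cw          # similarity part R
    ranked = sorted(scores,
                    key=lambda q: delta * scores[q]
                                  + psi * math.log(popularity.get(q, 1)),
                    reverse=True)
    return ranked[:top_k]
```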

Since the set of candidate suggestions is huge, the time cost of the above method without any optimization is still unacceptable. The phrases of $q$ usually have a long tail: the top 20\% of phrases may carry 80\% of the relevance energy of the whole list, while the rest has little impact on the final ranking. To rank the related queries efficiently, we can stop once we reach 80\% of the relevance energy of the whole list. Thus we first rank the phrases and only use the top $t$ phrases, where $t$ satisfies:
\begin{equation}
\sum^{t}_{j=1}P_{i,j} > 0.8 \times \sum_{j}P_{i,j}
\end{equation}

Similarly, we can rank the inverted query list of each phrase $p_j$ and retain only the top $s$ queries, where $s$ satisfies:
\begin{equation}
\sum^{s}_{i=1}P_{i,j} > 0.8 \times \sum_{i}P_{i,j}
\end{equation}
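The two truncations above follow the same pattern, which can be sketched as:

```python
# Truncating a ranked weight list at a fraction (80% here) of its total
# "relevance energy", as used for both the phrase list and the inverted
# query lists. Weights are assumed non-negative.
def energy_cutoff(weights, fraction=0.8):
    ranked = sorted(weights, reverse=True)
    target = fraction * sum(ranked)
    total, t = 0.0, 0
    for w in ranked:
        total += w
        t += 1
        if total > target:
            break
    return ranked[:t]
```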

Each inverted list can be computed offline, and the phrase list of $q$ is quite small, which further accelerates the search.  In this way, the system is able to respond within one second for any query, whereas it takes about $80$ seconds without cutting the long tail.

Since almost all user queries have search results, in the worst case we can represent a query using the search result context alone and still suggest relevant queries. In this way, our method can handle both popular and unpopular queries.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Suggestion Snippet}\label{sect:suggestion:snippet}
Even though the suggestion list is well organized, some obstacles remain. For example, ``Nicole Kidman'' is suggested for the query ``Tom Cruise'', but users may not know the relationship between ``Tom Cruise'' and ``Nicole Kidman'' and may be confused by this suggestion. Describing the relationship between the suggested query and the input query can help users further formulate their queries, as shown in Fig.~\ref{Fig:Fig6}.
\begin{figure}
\centerline{\includegraphics[width=3.5in]{fig6}} \caption{An example of the suggestion snippet in our system.}
\label{Fig:Fig6}
\end{figure}

Search engines favor term proximity: when two queries are submitted together as a single query, results that contain both of them are ranked higher than other results. We therefore treat the snippets of these search results as candidates from which relationship information can be extracted, converting the relationship extraction problem into a result ranking problem.

Similar to representing queries by the search result context, we represent the joint query $q^{joint}$ (the two queries joined by a space) with a vector space model:
\begin{equation}
\mathcal{F}_{q^{joint}} = (r_1, r_2, \cdots, r_M)
\end{equation}
where $r_i$ represents the relevance between $q^{joint}$ and the $i^{th}$ phrase.

Then the relevance for each snippet $T$ and $q^{joint}$ can be expressed as follows:
\begin{equation}
\mathcal{R}(T, q^{joint}) = \sum_{i|\mbox{$i^{th}$ phrase appears in $T$}}r_i. 
\end{equation}
Since $r_i$ denotes the importance of the $i^{th}$ phrase to the joint query, the function $\mathcal{R}(T, q^{joint})$ assigns higher scores to snippets containing more important content, and we choose the top-ranked snippet to show.
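The snippet ranking above can be sketched as follows; the case-insensitive substring match for "phrase appears in $T$" is a simplifying assumption.

```python
# Sketch of suggestion-snippet selection: a snippet's score is the
# summed weight r_i of the joint-query phrases it contains, and the
# highest-scoring snippet is shown. Substring matching is a
# simplification of phrase occurrence.
def best_snippet(snippets, joint_vec):
    def score(snippet):
        text = snippet.lower()
        return sum(r for phrase, r in joint_vec.items() if phrase in text)
    return max(snippets, key=score)
```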

Similar to the textual relationship snippet, we also submit the joint query to an image search engine and use the first picture as a pictorial description. Textual and pictorial descriptions of the relationship between the query and the suggestion are provided to help users understand the suggestions.
