\documentclass{sig-alt-release2}

\renewcommand{\baselinestretch}{0.965}
\renewcommand{\floatsep}{3pt}
\renewcommand{\textfloatsep}{12pt}

\usepackage{cite}
\usepackage{graphicx}
\usepackage{mathrsfs}

\begin{document}

\conferenceinfo{CIKM'08,} {October 26--30, 2008, Napa Valley,
California, USA.} \CopyrightYear{2008}
\crdata{978-1-59593-991-3/08/10}

\title{Search-based Query Suggestion}

\numberofauthors{1}
\author{
Jiang-Ming Yang$^\dagger$, Rui Cai$^\dagger$, Feng Jing$^\ddagger$, Shuo Wang$^\dagger$, Lei Zhang$^\dagger$ and Wei-Ying Ma$^\dagger$\\
        \affaddr{$^\dagger$ Microsoft Research Asia}\\
        \email{\{jmyang, ruicai, shuowang, leizhang,
wyma\}@microsoft.com}\\
        \affaddr{$^\ddagger$ NeoGrid Inc}\\
        \email{scenery\_jf@hotmail.com}
}

\maketitle

\begin{abstract}

In this paper, we propose a unified strategy that combines query
logs and search results for query suggestion. In this way, we
leverage both users' search intentions for popular queries and the
power of search engines for unpopular queries. The suggested
queries are ranked according to their relevance and quality, and
each suggestion is presented with a rich snippet including a photo
and a related description.

\end{abstract}

\category{H.3.3}{Information Storage and Retrieval}{Information
Search and Retrieval}[query formulation, search process]

\terms{Algorithms, Design, Experimentation}

\keywords{Query suggestion, query representation, log session
mining}


\section {Introduction}
\label{sec:introduction}

In Web search, users tend to issue short keyword queries rather
than natural-language questions. They also seldom adopt advanced
options such as Boolean operators, preferring instead to refine
the queries themselves. Query suggestion is therefore essential to
help users formulate queries; it has become a must-have feature of
commercial search engines as well as an active topic in academic
research. Existing query suggestion approaches can be classified
into two categories based on the data they use: log-based
approaches \cite{Zhang:06, Zhao:06} and search result-based
approaches \cite{Sahami:06}. Each has its own merits and
weaknesses, which make them suitable for different kinds of
queries. This observation motivates us to investigate a new method
that integrates both in a real system.

\section{Search-based Query Suggestion}
\label{sec:suggestion}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

To provide suggestions for both popular and unpopular queries, we
introduce a unified feature representation for queries that
leverages both the corresponding search results and the query log
sessions as context information. To rank the suggestions, we then
compute both similarities and query frequencies, which play roles
analogous to the dynamic ranks and static ranks (e.g., PageRank)
of Web pages, respectively. The overall process is illustrated in
Fig.~\ref{Fig:Fig3}.

\begin{figure}
\centerline{\includegraphics[width=3.5in]{figure1}} \caption{The
flowchart of search-based query suggestion, which is analogous to
Web search.} \label{Fig:Fig3}
\end{figure}


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Query Representation}\label{sect:suggestion:representation}

In the following, we will introduce two kinds of context
information: search results and query log sessions.

\subsubsection{Search Result Context}
Instead of using whole Web pages, we use only the titles and
snippets of the top search results to generate keywords. A query
$q_i$ can then be represented as:
\begin{equation}
\mathcal{F}^{sr}_{q_i} = (r^{sr}_{i,1}, r^{sr}_{i,2}, \cdots,
r^{sr}_{i,M})
\end{equation}
where $r^{sr}_{i,j}$ represents the relevance between $q_i$ and
the $j^{th}$ phrase and $M$ is the number of all possible phrases
in a given language such as English:
\begin{equation}
r^{sr}_{i,j} = \frac{D_{i,j}}{\sum_{j}{D_{i,j}}} \cdot
\log\frac{|Q|}{1+|\{q_i:p_j\in SR_i\}|}
\end{equation}

Considering that some non-popular queries may have only a few
results, the normalized frequency of phrases is used in the first
factor, where $p_j$ is a phrase and $D_{i,j}$ is the number of
occurrences of $p_j$ in the set of titles and snippets of the
current query $q_i$. Similar to tf-idf weighting, we multiply the
phrase frequency by an inverse query frequency to weight phrases,
where $Q$ is the query set and $SR_i$ is the search result content
corresponding to $q_i$.
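As an illustrative sketch (not the authors' implementation), the
tf-iqf-style weighting above can be computed from phrase lists as
follows; the function name and data layout are our own
assumptions. The same routine also applies to the log-session
weighting in the next subsection, with session phrases in place of
snippet phrases.

```python
import math
from collections import Counter

def context_weights(own_phrases, all_query_phrases):
    """Weight each phrase for a query by normalized phrase frequency
    times inverse query frequency, as in the equation above.

    own_phrases:       list of phrases extracted for the current query q_i
                       (from search-result titles/snippets, or log sessions).
    all_query_phrases: dict mapping every query in Q to its phrase list.
    """
    counts = Counter(own_phrases)            # D_{i,j}
    total = sum(counts.values())             # sum_j D_{i,j}
    n_queries = len(all_query_phrases)       # |Q|
    weights = {}
    for phrase, d in counts.items():
        # number of queries whose context contains this phrase
        df = sum(1 for ps in all_query_phrases.values() if phrase in ps)
        weights[phrase] = (d / total) * math.log(n_queries / (1 + df))
    return weights
```

With real data, `own_phrases` would be extracted from the titles
and snippets of the top search results for $q_i$.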

\subsubsection{Query Log Session Context}
The other context is the query log session. Queries that co-occur
with the user-submitted query in a sufficient number of query log
sessions (sequences of consecutive queries) can be used as key
phrases; two queries are considered related if they appear
together in a substantial number of sessions. Log session-based
algorithms thus leverage the query usage history of numerous
users, so useful queries that do not contain the original query
string can still be suggested. We again represent $q_i$ as:
\begin{equation}
\mathcal{F}^{log}_{q_i} = (r^{log}_{i,1}, r^{log}_{i,2}, \cdots,
r^{log}_{i,M})
\end{equation}
where $M$ is the number of all possible phrases. Similarly, its
weighting function is:
\begin{equation}
r^{log}_{i,j} = \frac{N_{i,j}}{\sum_{j}{N_{i,j}}}\cdot
\log\frac{|Q|}{1+|\{q_i:p_j\in log_i\}|}
\end{equation}
where $p_j$ is a phrase, $N_{i,j}$ is the number of occurrences of
$p_j$ in the sessions of current query $q_i$, $Q$ is the
query set, and $log_i$ is the log session content corresponding to
query $q_i$.

\subsubsection{Query Representation}
We can combine the two parts of phrases for a query $q_i$ together as:
\begin{eqnarray}
\mathcal{F}_{q_i} & = & \alpha \times \mathcal{F}^{sr}_{q_i} + \beta \times \mathcal{F}^{log}_{q_i} \\
& = & \alpha \times (r^{sr}_{i,1}, r^{sr}_{i,2}, \cdots, r^{sr}_{i,M}) + \beta \times (r^{log}_{i,1}, r^{log}_{i,2}, \cdots,
r^{log}_{i,M}) \nonumber
\end{eqnarray}

In our experiments, both $\alpha$ and $\beta$ are set to a
constant $0.5$.
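As a sketch, the combination above can be written over sparse
phrase-to-weight dictionaries (an assumed data layout; the paper
does not prescribe one):

```python
def combine_representations(f_sr, f_log, alpha=0.5, beta=0.5):
    """Linearly combine the search-result and log-session phrase
    vectors; alpha = beta = 0.5 as in the experiments."""
    phrases = set(f_sr) | set(f_log)
    return {p: alpha * f_sr.get(p, 0.0) + beta * f_log.get(p, 0.0)
            for p in phrases}
```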


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Query Search}\label{sect:suggestion:search}

For a newly submitted query $q^{sub}$, we can represent it through
the method presented in
Section~\ref{sect:suggestion:representation}. The relevance score
of a suggested query $q^{sug}$ for the submitted query $q^{sub}$
is:
\begin{equation}
score(q^{sug}, q^{sub}) = \delta \times R_{q^{sug}, q^{sub}} +
\psi \times \log{Q_{q^{sug}}} \label{Eq:relevance}
\end{equation}
The first term $R_{q^{sug}, q^{sub}}$ represents the similarity
between the queries $q^{sug}$ and $q^{sub}$, and the second term
$Q_{q^{sug}}$ represents the quality (popularity) of $q^{sug}$. As
in a Web search engine, both a dynamic rank and a static rank are
considered and linearly combined.
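A minimal ranking sketch under two assumptions the paper leaves
open: cosine similarity for $R_{q^{sug}, q^{sub}}$, and
placeholder values for $\delta$ and $\psi$:

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse phrase-weight dicts
    (an assumed choice; the paper does not name its measure)."""
    dot = sum(w * v.get(p, 0.0) for p, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def rank_suggestions(f_sub, candidates, freqs, delta=1.0, psi=1.0):
    """Score each candidate by delta * similarity + psi * log(frequency),
    mirroring the relevance score above, and sort best-first."""
    scored = [(q, delta * cosine(f_sub, f) + psi * math.log(freqs[q]))
              for q, f in candidates.items()]
    return sorted(scored, key=lambda t: t[1], reverse=True)
```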


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Suggestion Snippet}\label{sect:suggestion:snippet}
Users may not know the relationship between the suggestions and
the original query. Describing the relationship between each
suggested query and the input query can help users further
formulate their queries, as shown in Fig.~\ref{Fig:Fig6}.
\begin{figure}
\centerline{\includegraphics[width=3.5in]{screen}}
\caption{Suggestion snippet in our system.} \label{Fig:Fig6}
\end{figure}

Following the proximity strategy of search engines, we submit the
two queries together (joined with a blank) as a joint query
$q^{joint}$, which can be represented in the same way as above.
The relevance of a snippet $S$ can then be expressed as:
\begin{equation}
\mathcal{R}(S, q^{joint}) = \sum_{i}{r_i} \quad (\mbox{for all the phrases in $S$}).
\end{equation}
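Snippet selection under this scoring can be sketched as follows;
whitespace tokenization and the function name are simplifying
assumptions of ours:

```python
def best_snippet(snippets, joint_weights):
    """Pick the snippet whose phrases accumulate the highest total
    weight under the joint-query representation."""
    def relevance(snippet):
        # sum r_i over all phrases in S (phrases approximated by tokens)
        return sum(joint_weights.get(tok, 0.0)
                   for tok in snippet.lower().split())
    return max(snippets, key=relevance)
```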

\section{Evaluations}
\label{sec:evaluations}

In the experiments, we used three months of query log data from a
commercial search engine, collected in 2007. We divided the top
queries into seven groups based on their frequencies, and randomly
sampled 10 queries from each group for testing. Four commercial
search engines were selected for comparison.
Seven subjects were invited to rate the suggestions on a 5-point
scale (``Precisely related'', ``Approximately related'', ``Somehow
related, but unclear or useless'', ``Approximately unrelated'',
and ``Clearly unrelated''). The higher the score, the more
relevant the suggestion. We randomly selected the suggestions of
$20$ queries for each subject, and the suggestions of each query
were labeled by at least two subjects. We transformed the 5-point
scores to precision by $(Score-1)/{4} \times 100\%$, and computed
recall as ${N_{i}}/{N_{T}} \times 100\%$, where $N_{T}$ is the
number of queries in a group and $N_{i}$ is the number of queries
with at least one suggestion. The F-measures of the different
groups are shown in the top part of Fig.~\ref{Fig:exp}. Our method
consistently achieves the best performance.
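The score-to-metric mappings used here can be sketched as follows;
the harmonic-mean F1 combination is our assumption, since the text
does not spell out its F-measure formula:

```python
def precision(score):
    """Map a 5-point rating to a percentage: (score - 1) / 4 * 100."""
    return (score - 1) / 4 * 100

def recall(n_with_suggestion, n_total):
    """Fraction of queries in a group that received at least one
    suggestion, as a percentage: N_i / N_T * 100."""
    return n_with_suggestion / n_total * 100

def f_measure(p, r):
    # Harmonic mean of precision and recall (assumed F1).
    return 2 * p * r / (p + r) if (p + r) else 0.0
```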
\begin{figure}
\centerline{\includegraphics[width=3.5in]{exp1}}
\centerline{\includegraphics[width=3.5in]{exp2}}
\caption{Comparison of the F-measures and satisfactions for different frequency
groups and systems.} \label{Fig:exp}
\end{figure}

High relevance does not necessarily mean a better user experience.
For example, for ``Tom Cruise'', suggestions such as ``Tom Cruise
Picture'' and ``Tom Cruise Photos'' are relevant but duplicated.
Therefore, diversity and other properties that cannot be directly
reflected by relevance are also important for query suggestion,
so we also conducted a blind test. We built a labeling tool with
five columns, each corresponding to one suggestion method, and
assigned the methods to the columns randomly. For each method,
only the top 8 suggestions were kept. Users were asked to rank the
five results with scores from $1$ to $5$. The satisfaction ratio
was then calculated as $(Score-1)/{4} \times 100\%$. The results
are presented in the bottom part of Fig.~\ref{Fig:exp}. Our method
performs well for both top queries and long-tail queries.

\begin{thebibliography}{1}

\bibitem{Sahami:06}
M.~Sahami and T.~D. Heilman.
\newblock A {Web}-based kernel function for measuring the similarity of short
  text snippets.
\newblock In {\em WWW 2006}, pages 377--386.

\bibitem{Zhang:06}
Z.~Zhang and O.~Nasraoui.
\newblock Mining search engine query logs for query recommendation.
\newblock In {\em WWW 2006}, pages 1039--1040.

\bibitem{Zhao:06}
Q.~Zhao, S.~C.~H. Hoi, et~al.
\newblock Time-dependent semantic similarity measure of queries using
  historical click-through data.
\newblock In {\em WWW 2006}, pages 543--552.

\end{thebibliography}


\balancecolumns

\end{document}
