\section{Evaluations}
\label{sec:evaluations}

\subsection{Experiment Setup}

In the experiment, we use three months of query log data from a commercial search engine in 2007 and generate the top query list. We use the top two million queries as suggestion candidates. The thresholds of the inverted list and the phrase list were set to a consistency score of $0.8$ for each query. Due to the computational complexity, the relevance in matrix $G$ was propagated only until an acceptable result was reached.

We divide the top ten million queries in the query log into seven frequency groups based on query frequency: top $1K$, $1K\sim 10K$, $10K\sim 1M$, $1M\sim 2M$, $2M\sim 3M$, $3M\sim 10M$ and $>10M$.  For each group, $10$ queries were randomly sampled, resulting in a testing set of $70$ queries.
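The grouping and sampling procedure above can be sketched as follows; the group boundaries mirror those in the text, while the cap on the $>10M$ group and all identifier names are illustrative assumptions:

```python
import random

# (label, lowest rank, highest rank) for the seven frequency groups;
# capping ">10M" at 20M is an assumption made purely for illustration.
GROUPS = [
    ("top 1K",  1,          1_000),
    ("1K~10K",  1_001,      10_000),
    ("10K~1M",  10_001,     1_000_000),
    ("1M~2M",   1_000_001,  2_000_000),
    ("2M~3M",   2_000_001,  3_000_000),
    ("3M~10M",  3_000_001,  10_000_000),
    (">10M",    10_000_001, 20_000_000),
]

def sample_test_set(per_group=10, seed=0):
    """Draw `per_group` distinct random ranks from each group (7 x 10 = 70)."""
    rng = random.Random(seed)
    return {label: sorted(rng.sample(range(lo, hi + 1), per_group))
            for label, lo, hi in GROUPS}
```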

To evaluate the proposed search-based suggestion method, a representative set of queries was chosen. Commercial search engines including Google.com, Yahoo.com, Ask.com and Live.com were selected for comparison, since all of them provide a query suggestion feature that suggests semantically related queries.  We fetched the search result pages for the selected queries from these search engines and extracted their query suggestions for comparison.

\subsection{Query Suggestion Coverage}
To compare the suggestion coverages of the proposed system and other commercial search engines, the percentage of queries with at least one suggestion is calculated by:
\begin{equation}
coverage = \frac{N_{i}}{N_{T}} \times 100\%
\label{Eq:Coverage}
\end{equation}
where $N_{T}$ is the number of queries in the current frequency group and $N_{i}$ is the number of queries with at least one suggestion. The result is shown in Fig.~\ref{Fig:Fig7}.  The horizontal axis corresponds to the query frequency groups and the vertical axis to the coverage. From Fig.~\ref{Fig:Fig7}, we can see that our method generates suggestions for both popular and unpopular queries, while the other systems only generate suggestions for popular ones. For example, for all the testing queries ranked beyond $1M$, Google.com has no suggestions. Moreover, the coverage of Yahoo.com and Live.com decreases rapidly as the popularity of the queries decreases.
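Eq.~\ref{Eq:Coverage} can be sketched directly; the function and input names are illustrative:

```python
def coverage(suggestions_per_query):
    """Percentage of queries with at least one suggestion (Eq. Coverage).

    `suggestions_per_query` maps each test query in the current frequency
    group to its (possibly empty) list of suggestions.
    """
    n_total = len(suggestions_per_query)
    n_covered = sum(1 for s in suggestions_per_query.values() if s)
    return 100.0 * n_covered / n_total
```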
\begin{figure}
\centerline{\includegraphics[width=3.5in]{fig7}} \caption{Coverage of 70 queries in different frequency groups.}
\label{Fig:Fig7}
\end{figure}
 
\subsection{Suggestion Results}
To explicitly show the suggestion results, some representative queries were chosen based on their query frequencies.  

For high-frequency queries, we chose the names of two celebrities: ``Tom Cruise" and ``Tiger Woods".  Both expansion suggestions and non-expansion suggestions could be generated for these popular queries.  For ``Tom Cruise", the expansion suggestions include ``Tom Cruise Movies" and ``Tom Cruise Interview", and the non-expansion suggestions include ``Nicole Kidman" and ``Katie Holmes", his ex-wife and wife respectively.  The expansion suggestions help the user formulate his search intention more precisely, while the non-expansion suggestions might trigger other related searches.

For unpopular queries, we chose two long queries: ``who is the richest man in the world" and ``function of Microsoft excel worksheet".  For the question-like query ``who is the richest man in the world", our method not only provides some alternative questions, such as ``richest man" and ``world's richest man", but also provides some answers, e.g., ``bill gates fortune", ``bill gates worth" and ``warren buffet".

\subsection{Search Result and Query Log Session}
To justify the necessity of using search result context to improve query suggestions generated only from query log sessions, we built two systems: one leveraged only the search result context, and the other leveraged only the query log session. To compare their performance, suggestions from the two systems were intermixed for a user study.  Seven subjects were invited to rate the suggestions with 5-point relevance scores.  The subjects are skillful search engine users but have no knowledge of the proposed method. The scores correspond to five relevance types between two queries: ``Precisely related", ``Approximately related", ``Somehow related, but unclear or useless", ``Approximately unrelated" and ``Clearly unrelated".  To unify the labeling standard, we also prepared some well-defined samples of each relevance type for the subjects. The higher a score is, the more relevant the suggestion is to a query.  For example, score $5$ means ``Precisely related" and score $1$ means ``Clearly unrelated".  We randomly selected suggestions of $20$ queries for each subject, with each suggestion set labeled by at least two subjects. We transform the 5-point scores to the percentage precision by:
\begin{equation}
precision = \frac{Score-1}{4} \times 100\%
\label{Eq:Precision}
\end{equation}
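The score-to-precision mapping of Eq.~\ref{Eq:Precision} can be sketched as follows; the function names are illustrative:

```python
def precision(score):
    """Map a 5-point relevance score (1..5) to a percentage precision."""
    return (score - 1) / 4 * 100.0

def average_precision(scores):
    """Average percentage precision over a list of 5-point scores."""
    return sum(precision(s) for s in scores) / len(scores)
```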

The average precisions from the search result context and the query log session are presented in Fig.~\ref{Fig:Fig8}. Suggestions from the query log session are more relevant and precise for top queries, but the query log session-based system cannot handle unpopular queries beyond the $3M$ frequency group. In contrast, the search result context-based system can always produce acceptable suggestions, even for long tail queries.
\begin{figure}
\centerline{\includegraphics[width=3.5in]{fig8}} \caption{Average precisions of suggestions from search result context and query log session in different frequency groups.}
\label{Fig:Fig8}
\end{figure}

\subsection{Relevance Evaluation}
In this section, we compare the performance of our method with that of the other four commercial search engines. Similar to the process in Section 4.5, suggestions from the five systems were intermixed and then labeled by seven subjects.

Even though all five systems are designed to suggest semantically related queries, a system may have no suggestions for an unpopular query. To be fair, we only include the queries that have at least one suggestion in all five systems.

Since some systems share no common queries with others in certain frequency groups, as shown in Fig.~\ref{Fig:Fig9}, we compute the precision as follows:
\begin{itemize}
\item We process the common part of the frequency groups top $1K$, $1K\sim 10K$ and $10K\sim 1M$ for all five systems.
\item We process the common part of the frequency groups $1M\sim 2M$, $2M\sim 3M$ and $3M\sim 10M$ for the systems excluding Google.com.
\item We process the common part of the frequency group $>10M$ for three systems: our method, Yahoo.com and Live.com.
\end{itemize}
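The ``common part" selection described above amounts to intersecting the sets of covered queries across the systems being compared; a minimal sketch, with an assumed input mapping:

```python
def common_part(covered_by_system, systems):
    """Return the queries covered (at least one suggestion) by every
    system in `systems`, where `covered_by_system` maps a system name
    to its set of covered queries."""
    return set.intersection(*(covered_by_system[s] for s in systems))
```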

\begin{figure}
\centerline{\includegraphics[width=3.5in]{fig9}} \caption{Average Precisions when all the systems have results.}
\label{Fig:Fig9}
\end{figure}

Based on Eq.~\ref{Eq:Precision}, the average precisions of the five systems are presented in Fig.~\ref{Fig:Fig9}. The average precisions of all five systems are between 80\% and 90\%.  Even though Google.com only provides suggestions for the queries within the top $1M$, its average precision is still the lowest. Since Yahoo.com always generates expansion suggestions, which are more likely to be considered relevant, its average precision is the highest.

Fig.~\ref{Fig:Fig10} shows the trends of the five systems when the query frequency decreases. Yahoo.com, Live.com and our method have similar precisions and outperform Ask.com and Google.com.
\begin{figure}
\centerline{\includegraphics[width=3.5in]{fig10}} \caption{Precisions when all the systems have results in different frequency groups.}
\label{Fig:Fig10}
\end{figure}

We also evaluate the systems with F-measure score defined as follows:
\begin{equation}
F = 2\times \frac{P\times R}{P+R}
\end{equation}
where $P$ is calculated by Eq.~\ref{Eq:Precision} over all the queries with at least one suggestion in a frequency group, and $R$ is the coverage ratio given by Eq.~\ref{Eq:Coverage}.  The results for the different frequency groups of the five systems are shown in Fig.~\ref{Fig:Fig11}. Our method consistently has the best performance in all the groups.
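Combining the two measures, the F-measure computation can be sketched as:

```python
def f_measure(precision_pct, coverage_pct):
    """F = 2*P*R / (P + R), with P from Eq. (Precision) and R the
    coverage ratio from Eq. (Coverage), both expressed in percent."""
    if precision_pct + coverage_pct == 0:
        return 0.0  # a system with no precision and no coverage scores zero
    return 2.0 * precision_pct * coverage_pct / (precision_pct + coverage_pct)
```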
\begin{figure}
\centerline{\includegraphics[width=3.5in]{fig11}} \caption{F-measure when all the systems have results in different frequency groups.}
\label{Fig:Fig11}
\end{figure}

\subsection{Blind Test}
High relevance does not necessarily mean a better user experience.  For example, for ``Tom Cruise", relevant suggestions might be ``Tom Cruise Picture", ``Tom Cruise Photos", ``Tom Cruise family", ``Tom Cruise Wife" and so on.  Since all these suggestions are very similar, users might not be very satisfied.  Therefore, diversity and other properties that cannot be directly reflected by relevance are also important for query suggestion.

A blind test was conducted to evaluate the user experience with different query suggestion systems.  For each system, only the top eight suggestions were used and the others discarded.  Since Ask.com has three categories of suggestions, the number of suggestions for each category was chosen according to the ratio of the number of suggestions in the category to the total number of suggestions.  We built a labeling tool with five columns, each corresponding to a suggestion method.  To avoid the impact of column order, the results of the five methods were assigned to the columns randomly. The subjects were asked to rank the five results. We assign scores $1\sim 5$ to the ranked results, where $5$ means the most satisfactory result and $1$ the most unsatisfactory. The satisfaction ratio is calculated by:
\begin{equation}
satisfaction = \frac{Score-1}{4} \times 100\%
\end{equation}
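For a system with several suggestion categories (such as Ask.com), the proportional selection of the top eight suggestions can be sketched with largest-remainder rounding; the rounding scheme itself is our assumption, since the text only states that the allocation is proportional:

```python
def allocate_slots(category_counts, total=8):
    """Split `total` suggestion slots across categories in proportion to
    each category's suggestion count, using largest-remainder rounding
    (an assumed rounding scheme)."""
    n = sum(category_counts.values())
    quotas = {c: total * k / n for c, k in category_counts.items()}
    alloc = {c: int(q) for c, q in quotas.items()}
    # Hand the remaining slots to the largest fractional remainders.
    leftover = total - sum(alloc.values())
    by_remainder = sorted(quotas, key=lambda c: quotas[c] - alloc[c],
                          reverse=True)
    for c in by_remainder[:leftover]:
        alloc[c] += 1
    return alloc
```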

The result is presented in Fig.~\ref{Fig:Fig12}. The performance of our method is quite good for both top queries and long tail queries.
\begin{figure}
\centerline{\includegraphics[width=3.5in]{fig12}} \caption{Satisfaction of 70 queries in different frequency groups.}
\label{Fig:Fig12}
\end{figure}
