
%%%%%%%%%%%%%%%%%%%%%%% file query_expansion.tex %%%%%%%%%%%%%%%%%%%%%%%%%
%
% This is the LaTeX source for the instructions to authors using
% the LaTeX document class 'llncs.cls' for contributions to
% the Lecture Notes in Computer Sciences series.
% http://www.springer.com/lncs       Springer Heidelberg 2006/05/04
%
% It may be used as a template for your own input - copy it
% to a new file with a new name and use it as the basis
% for your article.
%
% NB: the document class 'llncs' has its own and detailed documentation, see
% ftp://ftp.springer.de/data/pubftp/pub/tex/latex/llncs/latex2e/llncsdoc.pdf
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%


\documentclass[runningheads,a4paper]{llncs}

\usepackage{amssymb}
\usepackage{multirow}
\usepackage{subfigure}

\setcounter{tocdepth}{3}
\usepackage{graphicx}

\usepackage{url}
\urldef{\mailsa}\path|{xzwang, cyliu, grxue, yyu}@apex.sjtu.edu.cn|
\newcommand{\keywords}[1]{\par\addvspace\baselineskip
\noindent\keywordname\enspace\ignorespaces#1}

\begin{document}

\mainmatter  % start of an individual contribution

% first the title is needed
\title{Click Prediction for Product Search on C2C Web Sites}

% a short form should be given in case it is too long for the running head
\titlerunning{Click Prediction for Product Search on C2C Web Sites}

% the name(s) of the author(s) follow(s) next
%
% NB: Chinese authors should write their first names(s) in front of
% their surnames. This ensures that the names appear correctly in
% the running heads and the author index.
%
\author{Xiangzhi Wang\and Chunyang Liu \and Guirong Xue\and Yong Yu%
}
%
\authorrunning{XZ. Wang et al.}
% (feature abused for this document to repeat the title also on left hand pages)

% the affiliations are given next; don't give your e-mail address
% unless you accept that it will be published
\institute{Department of Computer Science and Engineering,
Shanghai Jiao Tong University, Shanghai 200240, China\\
\mailsa
}

%
% NB: a more complex sample for affiliations and the mapping to the
% corresponding authors can be found in the file "llncs.dem"
% (search for the string "\mainmatter" where a contribution starts).
% "llncs.dem" accompanies the document class "llncs.cls".
%

\toctitle{Click Prediction for Product Search on C2C Web Sites}
\tocauthor{Xiangzhi Wang, Chunyang Liu, Guirong Xue, Yong Yu}
\maketitle


\begin{abstract}
A turnover of millions of dollars is generated every day on popular e-commerce web sites. In China, more than 30 billion dollars in transactions were generated in the online C2C (Customer-to-Customer) market in 2009. With the booming of this market, predicting the click probability of search results is crucial for user experience as well as conversion probability. The objective of this paper is to propose a click prediction framework for product search on C2C web sites. Click prediction has been researched in depth for sponsored search and web search; however, to the best of our knowledge, few studies have addressed the domain of online product search. We validate the performance of state-of-the-art techniques used in sponsored search for predicting click probability on C2C web sites. In addition, significant C2C site-based features are developed based on the characteristics of product search, and a combined model is trained. Extensive experiments are performed on the click log of a popular C2C web site, and the results demonstrate that the combined model improves both precision and recall significantly.
\keywords{Click Prediction, Logistic Regression, Ecommerce, C2C}
\end{abstract}
\section{Introduction} \label{sec:intro}
%%%%%add sub sections
The past decade has been a period of exponential growth for online e-commerce business. China, as one of the fastest growing markets in the world, generated 30 billion dollars of trade volume on online C2C web sites in 2009. More than 200 million registered users attract more than 200,000 new sellers to set up online stores on popular C2C web sites every month. With the booming number of products in the database, each search might return thousands of items matching the user's query. While more and more products are laid out in front of the users, it is increasingly hard for a product search engine to rank the item list and predict which item a buyer will click and explore further. We argue that the problem of predicting the click probability of search results is crucial for product search, with three motivations:
\begin{itemize}
 \item Click probability can be used as an important score to rank the list of search results. As the cascade model in \cite{nick} illustrates, the higher the position at which an item with high click probability is ranked, the less effort the user spends making a click decision, which in turn improves user experience.

 \item Simply, the conversion probability can be defined as:
 \begin{equation} \label{eq:buy}
 P(purchase|item) = P(purchase|click)\cdot P(click|item)
\end{equation}
So if items with higher click probability are selected into the result page, a purchase is more likely to be concluded, which in turn increases the revenue of the company. Of course this point might be debatable, as C2C web sites should also consider the fairness of presenting items on the search result page.
 \item The click prediction information can be leveraged in both browsing and searching advertisements.
\end{itemize}
Therefore, in this paper, we address the problem of predicting the click probability of items for online C2C web sites. One of the most straightforward methods is to predict click probability from historical CTR (click-through rate); however, this historical information is unavailable for newly published items. Moreover, historical information is sometimes biased, as product popularity changes frequently over time.

Click prediction has recently been researched in depth in the domain of sponsored search. \cite{regelson,matthew,haibin} proposed state-of-the-art methodologies that can serve as possible solutions. We build a model with features extracted from the log of a C2C web site as proposed in these papers and validate its performance on the search results of a C2C web site. The evaluation results show that the techniques used in sponsored search also work for C2C product search. However, the characteristics of online C2C web sites differ from those of web search engine sites as follows:
\begin{itemize}
\item Unlike users of web search engines, who have definite goals when formulating queries, more than half of online buyers surf C2C web sites without any definite search objective. Buyers can even issue a search without any query terms by clicking links while browsing the web pages.
 \item With the complicated search patterns provided by C2C web sites, online buyers have more control over the result page. As shown in Figure \ref{figure:ebay}, most C2C web sites allow buyers to select a \textit{sort type} for the result list, including price ascending, price descending, credit ascending, etc. Buyers can also add various \textit{filters} to the retrieval process, such as price range, item location and so on.
 \item As shown in Figure \ref{figure:ebay}, the result page of a C2C web site is more elaborate than a web search result page. With the item picture, price, seller credit rank, shipping rate and other item-related information presented on the search result page, buyers can compare items comprehensively before delivering a click action.
\end{itemize}
Based on the above points, a novel model more suitable for product search can be built. We explore a significant set of features based on the characteristics of C2C web sites as described above and combine them with the original model. A large data corpus was collected from a real online C2C web site, and extensive experiments were performed. The results demonstrate that significant improvements are obtained after the C2C site-based features are combined into the model. To analyze the features more deeply, we group them into four different dimensions: \textit{search, buyer, seller and item}. Models trained with the features of each group are tested to compare the contribution of each feature group. We also test models trained with each single feature and present the top 5 to identify the most important features. Finally, an interesting discussion is presented on the problem of unbalanced class distribution encountered in our research.
\begin{figure}
\centering
\includegraphics[width=0.9\textwidth]{./image/ebay2.eps}
\caption{Search result page in C2C web site}
\label{figure:ebay}
\end{figure}
The rest of this paper is organized as follows. Section \ref{sec:related} provides a brief description of related work. The details of our methodology are presented in Section \ref{sec:alg}. We address the data set, evaluation metrics and experiments in Section \ref{sec:exp}. Finally, conclusions and future work are discussed in Section \ref{sec:conc}.

\section{Related Work}\label{sec:related}
Predicting the probability that a user clicks on an advertisement is crucial for sponsored search because the probability can be used to improve the ranking, filtering, placement, and pricing of ads. Thus the click prediction problem in the domain of sponsored search has been researched in depth in recent years. Regelson \textit{et al.} showed that different terms have different potential of receiving a sponsored click and estimated click probability based on term-level CTR \cite{regelson}. CTRs of clusters of related terms were also used for less frequent or novel terms. Besides term-level information, Richardson \textit{et al.} explored various features related to ads, including the page the ad points to and statistics of related ads, and built a model to predict the click probability for new ads \cite{matthew}. Departing from the user-independent models of previous work, Cheng \textit{et al.} developed user-specific and demographic-based features and built a user-dependent model to predict click probability \cite{haibin}. The topic of predicting click probability for advertisements is very similar to the problem addressed in this paper; however, as discussed in Section \ref{sec:intro}, the characteristics of product search make this task more complicated, so valuable features can be developed based on these specialties.

As people search the web, certain of their actions are logged by the search engine. Click logs are representative resources for extracting patterns of user behavior. Xue \textit{et al.} incorporated user behavior information to optimize web search ranking \cite{guirong}. In \cite{nick}, the click bias caused by document position in the results page is discussed and the dependence of click probability on position is modeled. We adopt this proposal and adjust for the bias in CTR-related features. \cite{georges} developed explicit hypotheses on user browsing behavior and derived a family of models to explain search engine click data. \cite{eugene_learn} generalized an approach to model user behavior beyond click-through for predicting web search result preferences.

To the best of our knowledge, little research has focused on the domain of product search on online C2C web sites, and this task becomes more and more crucial with the booming of the business. Based on the characteristics of C2C web sites, Wu \textit{et al.} developed features similar to ours to predict conversion probability \cite{xiaoyuan}. Though both this paper and \cite{xiaoyuan} focus on online e-commerce business, the problems to be solved are completely different. Moreover, the models are different because of the different formulations of the problems. Wu \textit{et al.} formulated their problem as $P(purchase|item)$, which is independent of query, search and buyer, while our formulation is $P(click|query, search, buyer, item)$, which depends on query, search and buyer.
\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{./image/workflow.eps}
\caption{System Framework}
\label{figure:framework}
\end{figure}
\section{Click Prediction} \label{sec:alg}
The framework of our click prediction system is straightforward, though the process by which a user makes a click decision is complicated. As shown in Figure \ref{figure:framework}, a set of items is selected by the product search engine according to the query the user submitted. The complicated process of item retrieval is out of the scope of this paper. Here we assume the input of our system is a set of query-item pairs with features extracted from log data. Click probability is estimated by the model trained during the offline phase. Finally, this score can be used in ranking, filtering or advertisement matching components.
\subsection{Logistic Regression Model} \label{sec:model}
We formulate click prediction as a supervised learning problem. To classify a query-item pair into the \textit{CLICK/NON-CLICK} classes, a logistic regression model is utilized. It is a generalized linear model used for binomial regression, which makes it well suited to this problem. In statistics, logistic regression predicts the probability of occurrence of an event by fitting data to a logistic curve via the \textit{logit} function \cite{applied}. The simple logistic model has the form:
\begin{equation} \label{eq:logit}
logit(Y) =\log\frac{p}{1-p}=\beta_{0}+\sum_{i=1}^{n}\beta_{i}\cdot X_{i}
\end{equation}
where $p$ is the probability of occurrence of an event $Y$ and $X_1,X_2,...,X_n$ are the independent variables (predictors); $\beta_{0}$ is called the \textit{intercept} and $\beta_{i}$ the \textit{regression coefficient} of $X_{i}$. The maximum likelihood method is used to learn the intercept and regression coefficients. Applying the antilog to both sides, we can transform the predictor values into a probability:
\begin{equation} \label{eq:probability}
p=P(Y|X=X_1,X_2,...,X_n)=\frac{e^{\beta_{0}+\sum_{i=1}^{n}\beta_{i}\cdot X_{i}}}{1+e^{\beta_{0}+\sum_{i=1}^{n}\beta_{i}\cdot X_{i}}}
\end{equation}
where each regression coefficient $\beta_{i}$ represents the contribution of the corresponding predictor value $X_{i}$. A positive (or negative) regression coefficient $\beta_{i}$ means that a larger $X_i$ is associated with a larger (or smaller) logit of $Y$, which in turn increases (or decreases) the probability of the outcome.

With the regression model, we can formulate our click prediction model as:
\begin{equation} \label{cp_formula}
P(click|query,item)=\frac{\exp{(\vec{w}\cdot \vec{f})}}{1+\exp{(\vec{w}\cdot \vec{f})}}
\end{equation}
where $\vec{f}$ represents the feature vector extracted from the query-item pair, while $\vec{w}$ is the corresponding regression coefficient vector.
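As a concrete sketch, the prediction step of the model above can be written as follows; the function name, feature layout and coefficient values are purely illustrative, not the coefficients learned from our log data:

```python
import math

def predict_click(w, f):
    """P(click | query, item) from the logistic model above.

    w: coefficient vector with the intercept stored in w[0];
    f: feature vector of the query-item pair (illustrative layout).
    """
    # Linear score: intercept plus the dot product w . f
    z = w[0] + sum(wi * fi for wi, fi in zip(w[1:], f))
    # Logistic transform maps the score into a probability in (0, 1)
    return math.exp(z) / (1.0 + math.exp(z))

# Illustrative coefficients: a positive weight on a feature raises
# the predicted click probability as that feature grows
p = predict_click([0.1, 1.2, -0.8], [0.5, 1.0])
```

Because the logistic transform is monotonic in the linear score, the sign of each coefficient directly determines whether its feature pushes the click probability up or down.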
\subsection{Features} \label{sec:alg.features}
\setcounter{secnumdepth}{3}
Previous work proposed a wide range of features for building click prediction models in sponsored search. Beyond those, we analyze the characteristics of product search as well as user behavior and develop a significant set of online C2C web site-based features. To make the presentation clearer and more systematic, we group the features into four dimensions, \textit{search, buyer, item, seller}, corresponding to the different roles in product search on C2C web sites. In Section \ref{sec:exp}, we compare the different models built with the features of each dimension. To clearly distinguish C2C site-specific features from features that are widely used in sponsored search, we underline the C2C site-specific features in the rest of this section.
\subsubsection{Search Features} \label{search}
Unlike on a web search engine, besides the text query submitted to the search engine, users are allowed to define complicated search patterns. We group such features into search features to represent user behavior information.
\begin{description}
  \item[\underline{Sort Type}] To compare items more intuitively, most C2C web sites allow users to select a sort pattern for the result list. We believe the sort pattern the user selected reflects the status of the user when submitting the query. Our statistics indicate that different CTRs are obtained depending on the sort type selected. Nominal values including \textit{price ascending, price descending, time left, seller credit} are extracted for each query-item pair.
  \item[\underline{Search Filter}] Abundant types of filters are provided on C2C web sites.
    \begin{itemize}
    \item \textit{Category}: the user can specify from which category items should be returned.
    \item \textit{Price interval}: the user can define a price range to filter items.
    \item \textit{Sale type}: when an item is published to an online store, the seller decides whether the item is sold in auction style or at a fixed price.
    \end{itemize}
\item[\underline{View Type}] Users can choose the presentation form for the result list: \textit{list} presents the items in a list style; \textit{grid} presents the items in a matrix style with an enlarged picture in each position.
\item[Query] The complexity of a query reflects the clarity of the buyer's purchasing intention. We extract the query term count and unique term count separately to represent this information. In addition, we extract the historical CTR of each term to represent the click potential of the specific terms the buyer submitted:
\begin{equation}
CTR_{t} = \frac{\sum_{q,t\in q}c(q)}{\sum_{q,t\in q}pv(q)}
\end{equation}
where $q$ is query that contains term $t$, while $c(q)$ and $pv(q)$ represent the click count and page view count received from the query respectively.
\end{description}
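The term-level CTR above can be computed in one pass over aggregated log records. A minimal sketch follows; the input layout (one (terms, clicks, page views) tuple per query) is our assumption, not the paper's log format:

```python
from collections import defaultdict

def term_ctr(records):
    """CTR_t: total clicks over total page views, summed across all
    queries q that contain term t (see the equation above)."""
    clicks = defaultdict(int)
    views = defaultdict(int)
    for terms, c, pv in records:
        for t in set(terms):  # count each query once per distinct term
            clicks[t] += c
            views[t] += pv
    return {t: clicks[t] / views[t] for t in views if views[t] > 0}

# Two queries: "red shoes" (3 clicks / 100 views), "shoes" (2 / 50)
ctr = term_ctr([(["red", "shoes"], 3, 100), (["shoes"], 2, 50)])
```

A term appearing in many queries accumulates clicks and page views from all of them, so frequent terms receive smoother CTR estimates than any single query would give.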
\subsubsection{Item Features} \label{item}
While search features determine to a certain degree the click potential of a search, item features influence which item the user is more likely to click.
\begin{description}
  \item[\underline{Price}] One of the most significant factors that drives a user to click is the price of the item. With other aspects being equal, the user will apparently choose the cheapest item in the list. We calculate the mean price of the item list for the query, and, for each item, the deviation from the mean price. The shipping rate of an item is also an important factor, especially for relatively cheap items, so the same method is applied to the shipping rate.
  \item[\underline{Category CTR}] For C2C web sites, one of the most important differences from web search engines is that all items (documents) are labeled with a product category by sellers. The CTR of a category reflects the click potential of the items that belong to it. As categories can be very sparse (according to eBay, 30,000 leaf categories are maintained on its US site), we extract the CTR of the root category for each item:
\begin{equation}
CTR(rc)=\frac{\sum_{j\in rc}c(j)}{\sum_{j\in rc}pv(j)}
\end{equation}
where $rc$ is a root category, and each $j$ is an item whose root category is $rc$.
  \item[Explicit Matching] Features representing the degree of apparent matching of each query-item pair are also extracted. These features represent the presentational relevance of the query-item pair on the result page.
      \begin{itemize}
\item \textit{Text similarity}: cosine similarity is calculated between the item title and the query to represent lexical match.
\item \underline{\textit{Location match}}: though not all users apply a location filter to their queries, items in the same location as the user are preferred.
\end{itemize}
\end{description}
\subsubsection{Buyer Features} \label{buyer}
As illustrated in \cite{haibin}, a personalized model is effective in sponsored search for predicting click probability. We follow this proposal and extract both demographic and user-specific features for each buyer.
\begin{description}
  \item[Demographic] Gender, age and location of the buyer are extracted as nominal values.
  \item[Buyer CTR] Click-through rate of the buyer is calculated for a period of one week.
  \item[\underline{Purchase Count}] The number of times the buyer has successfully purchased products on the site.
  \item[\underline{Buyer credit rank}] Some C2C web sites provide a mechanism to rank credit or reputation for buyers and sellers through their transaction feedback logs. The rank is determined by a score calculated from the counts of positive and negative feedback in historical transaction records.
\end{description}
\subsubsection{Seller Features} \label{seller}
We develop seller-related features for two main reasons: firstly, the information of a seller reflects the seller's devotion to the online store, which can implicitly impact the attractiveness of his items in the result list; secondly, buyers are concerned about the reputation or credit of sellers.
\begin{description}
  \item[Demographic] Gender, age and location of the seller are extracted as nominal values.
  \item[Seller CTR] Click-through rate of the seller is calculated over a period of one week.
  \item[\underline{Product count}] This reflects the business scale of the seller; professional online sellers usually possess a large number of items.
  \item[\underline{Seller credit rank}] As described in Buyer features.
  \item[\underline{Attractiveness degree}] Popular C2C web sites provide the functionality of `saving the store to favorites' or `watching the item'. The number of times a seller is saved reflects the attention buyers pay to this seller.
\end{description}
%****************************************************
%\subsection{Feature Pre-processing} \label{sec:alg.bias}
%A great proportion of features described in Section \ref{sec:alg.features} are sparse numbers which is a common problem in machine learning. To conquer this phenomenon, we perform log transformation on all the numeric features and discretize them into nominal values with Fayyad's  MDL method \cite{fayyad}. As show in Equation \ref{eq:mdl}, the algorithm leverage entropy of candidate partitions to select binary boundaries for discretization:
%\begin{equation} \label{eq:mdl}
%E(F,T,S)=\frac{|S_{1}|}{|S|}Entropy(S_{1})+\frac{|S_{2}|}{|S|}Entropy(S_{2})
%\end{equation}
%where $F$ represents the feature to discretized, $S$ is the set of instances and $T$ means a partition boundary. For a given feature $F$, the boundary $T_{min}$, is selected as a binary discretization boundary that minimizes entropy function over all possible partition boundaries. Recursive iteration is performed on both the partitions divided by $T_{min}$ until certain stopping criteria matched. In our experiment, we limit the number of partitions to avoid skewed nominal values.
%*******************************************************************************
\subsection{Correction of Position-Biased CTR}
Various CTRs were extracted in the previous section. However, as shown in \cite{nick}, position bias also exists in buyers' click actions: the probability of a click is influenced by the position at which the item is presented in the result list, and buyers prefer to click items ranked at the top. We use a position-normalized statistic known as clicks over expected clicks (COEC) to account for this position bias in all CTR-related features \cite{haibin}.
\begin{equation}
COEC=\frac{\sum_{pos=1}^{R}c(pos)}{\sum_{pos=1}^{R}pv(pos)\cdot CTR_{avg}(pos)}
\end{equation}
where the numerator is the number of clicks received at each position $pos$; the denominator is the number of clicks that would be expected on average after being presented $pv(pos)$ times at $pos$, and $CTR_{avg}(pos)$ is the average click-through rate for $pos$ in the result page.
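A sketch of the COEC computation for a single entity (e.g. a seller or a root category); the position-indexed lists and the site-wide average CTR per position are assumed inputs, and the numbers are illustrative:

```python
def coec(clicks, views, avg_ctr):
    """Clicks over expected clicks, as in the equation above.

    clicks[i], views[i]: clicks and page views at position i+1;
    avg_ctr[i]: site-wide average CTR at position i+1.
    """
    observed = sum(clicks)
    # Expected clicks: impressions at each position weighted by that
    # position's average CTR, which discounts top-position exposure.
    expected = sum(pv * c for pv, c in zip(views, avg_ctr))
    return observed / expected

# 12 observed clicks vs. 10 position-adjusted expected clicks
score = coec([10, 2], [100, 100], [0.08, 0.02])
```

A COEC above 1 means the entity attracted more clicks than its positions alone would predict, so the normalized statistic is comparable across entities shown at different ranks.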
\section{Experiments} \label{sec:exp}
We implemented the click prediction system as described in the previous section. A large data corpus was collected from a real, popular C2C web site, and extensive experiments were performed to evaluate our methodology. In this section, we first give a brief description of our data set. Then the evaluation metrics are introduced. Finally, we present the results of experiments performed on different aspects.
\subsection{Data Set} \label{sec:exp.dataset}
The training and test data used in our experiments were sampled from a real, popular C2C web site over a period of one week. Each sample is a query-item pair labeled \textit{click} or \textit{non-click}, extracted from the click log of this web site. We removed the records of users who searched or clicked more than 300 times a day to clear out robot or spam data. On the other hand, records of users with fewer than 5 actions a day were also removed, as this kind of data contains little statistical information and can be treated as noise. After that, 235,007 page views were randomly collected, with 130,865 clicked items and 9,504,443 non-clicked items.

As on most other C2C web sites, more items are presented on each result page than on a web search engine, which makes the problem of unbalanced class distribution more critical: less than $1.5\%$ of the items in this corpus were labeled as clicked. Unbalanced data is a common problem in machine learning. If we built the model in the usual way, aiming to minimize the error rate, it could achieve more than $98\%$ accuracy by predicting all items as non-click. Our approach is to down-sample the non-clicked examples to even up the classes. While tuning the proportion of clicked samples, the important thing to consider is the cost of misclassification, that is, the cost of incorrectly classifying a click sample as non-click and vice versa. We leave this to marketing strategy, as the business considerations are out of the scope of this paper. An experiment was performed to evaluate how performance changes with the proportion of positive examples in the training data set, and a brief discussion is presented in Section \ref{sec:exp.unbalance}.
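The down-sampling step can be sketched as follows; the function name, the fixed seed and the list-based data layout are our illustrative choices, not details of the paper's pipeline:

```python
import random

def downsample(positives, negatives, pos_ratio=0.5, seed=42):
    """Down-sample non-click examples so that positives make up
    `pos_ratio` of the resulting training set."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    # Number of negatives needed to hit the target positive ratio
    n_neg = int(len(positives) * (1.0 - pos_ratio) / pos_ratio)
    kept = rng.sample(negatives, min(n_neg, len(negatives)))
    data = [(x, 1) for x in positives] + [(x, 0) for x in kept]
    rng.shuffle(data)  # mix the classes before training
    return data

# 100 clicks, 10,000 non-clicks -> a balanced 200-example set
train = downsample(list(range(100)), list(range(10000)))
```

Varying `pos_ratio` reproduces the experiment of Section 4.4, where the trade-off between positive and negative classification performance is measured across sampling ratios.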

\subsection{Evaluation Metrics}
As this is a binary classification problem, we define clicked and non-clicked items as positive and negative examples respectively, and use \textit{TP rate}, \textit{FP rate}, \textit{Precision}, \textit{Recall} and \textit{F-Measure} as evaluation metrics. The \textit{TP (True Positive) rate} is the proportion of examples classified as class \textit{x} among all examples which truly have class \textit{x}. The \textit{FP (False Positive) rate} is the proportion of examples classified as class \textit{x}, but belonging to a different class, among all examples which are not of class \textit{x}. The \textit{F-Measure} score is defined as:
\begin{equation}
\mathit{F}\mbox{-}\mathit{Measure}= \frac{2\cdot \mathit{Precision}\cdot \mathit{Recall}}{\mathit{Precision}+\mathit{Recall}}
\end{equation}
We are interested in effectiveness on both the positive and negative classes, and corresponding results are provided in the rest of this section. All evaluation results presented below were obtained through 10-fold cross-validation.
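The per-class metrics above follow directly from the binary confusion matrix; a small sketch with illustrative counts:

```python
def metrics(tp, fp, fn, tn):
    """TP rate, FP rate, precision and F-Measure for the positive
    class, given binary confusion-matrix counts."""
    tp_rate = tp / (tp + fn)   # equals recall of the positive class
    fp_rate = fp / (fp + tn)
    precision = tp / (tp + fp)
    f_measure = 2 * precision * tp_rate / (precision + tp_rate)
    return tp_rate, fp_rate, precision, f_measure

# Illustrative counts; not values from the paper's experiments
tp_rate, fp_rate, precision, f1 = metrics(tp=60, fp=40, fn=40, tn=60)
```

Swapping which class counts as positive (Click vs. Non-Click) re-labels the same confusion matrix, which is why Table 1 reports each metric once per class.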
\subsection{Performance Evaluation}
The features proposed for sponsored search described in Section \ref{sec:alg.features} (those not underlined) were extracted to build a baseline model, which we refer to as \textit{Web} in the rest of this paper. The C2C site-based features (underlined) were then added to the \textit{Web} model, yielding what we refer to as the \textit{C2C} model. With $50\%$ positive examples sampled in the training data set, 10-fold cross-validation was performed on both models. The evaluation results are presented in Table \ref{table:evaluation}.
\begin{table*}
\centering
\caption{Evaluation}\label{table:evaluation}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{} & \multicolumn{2}{|c|}{TP rate} & \multicolumn{2}{|c|}{FP rate} & \multicolumn{2}{|c|}{Precision} & \multicolumn{2}{|c|}{Recall}  & \multicolumn{2}{|c|}{F-Measure}\\
\cline{2-11}
 & \textit{C2C} & \textit{Web} & \textit{C2C} & \textit{Web} & \textit{C2C} & \textit{Web} & \textit{C2C} & \textit{Web} & \textit{C2C} & \textit{Web} \\
\hline
Click & 0.546 & 0.533 & 0.329 & 0.401 & 0.624 & 0.571 & 0.546 & 0.533 & 0.582 & 0.551\\
\hline
Non-Click & 0.671 & 0.599 & 0.454 & 0.467 & 0.597 & 0.562 & 0.671 & 0.599 & 0.632 & 0.580\\
\hline
\end{tabular}
\end{table*}

Table \ref{table:evaluation} shows that state-of-the-art techniques used in sponsored search are also suitable for predicting the click probability for product search on C2C web sites. Precision for \textit{Click} and \textit{Non-Click} is promising, while the recall of the \textit{Click} class is somewhat poor. After the C2C site-based features are added, performance improves significantly on all evaluation metrics. Precision and recall for \textit{Click} improve by $9.3\%$ and $2.4\%$ respectively; for \textit{Non-Click}, improvements of $6.2\%$ and $12.0\%$ are obtained for precision and recall. These results clearly demonstrate the effectiveness of C2C site-specific features for the click prediction problem.
\subsection{Feature Analysis} \label{sec:exp.case}
According to the different roles in product search, we grouped the features into four dimensions: \textit{search}, \textit{item}, \textit{buyer}, \textit{seller}. To analyze the contribution of each dimension, we trained a separate model on each feature group. The comparison is presented in Figure \ref{figure:dimension}.
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{./image/dimension.eps}
\caption{Feature group comparison}
\label{figure:dimension}
\end{figure}

From the figure, we can conclude that features related to items and sellers contribute most to the combined model. For product search, the user-based model did not work as effectively as in sponsored search. However, this is not unexpected. In sponsored search, the goal of the user on a web search engine is to find information relevant to the query rather than to browse advertisements, so habits, age, gender and other user-specific features strongly impact the click-through rate of the advertisements presented \cite{haibin}. By contrast, in product search, users focus on comparing items and sellers. Though users differ in demographics and personal characteristics, the criteria for evaluating an item are largely shared: a better price, a trustworthy seller, an attractive description and so on. So the contribution of user-specific features is weakened while the influence of item-related features is enhanced. The figure also validates our claim that search styles reflecting the status of the user impact click actions.

Besides the analysis of feature groups, we also evaluated models trained with each feature separately to rank the importance of each single feature. The top 5 most important features are \textit{Seller CTR}, \textit{Sort Type}, \textit{Item Left Time}, \textit{Root Category CTR} and \textit{Seller Credit}. Contrary to our expectation, features related to price and explicit matching did not show the effectiveness we had supposed.

%\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|}
%  \hline
  % after \\: \hline or \cline{col1-col2} \cline{col3-col4} ...
%  & Seller CTR & Sort Type & Left Time & Root Category CTR & Seller Credit & Buyer CTR & Search View Type & Buyer Location & Query Count & Search Time \\
%  \hline
%  Accuracy Rate & 58.9\% & 54.7\% & 53.3\% & 53\% & 52.8\% & 52.6\% & 51.9\% & 51.7\% & 51.5\% & 51.3\% \\
%\end{tabular}
\subsection{Unbalance Data Re-sample} \label{sec:exp.unbalance}
As discussed in Section \ref{sec:exp.dataset}, the proportion of positive examples impacts prediction performance for both the positive and negative classes. Though we do not attempt to decide the best cut-off point, we analyzed how performance changes with the re-sampling process. As shown in Figure \ref{figure:unbalance}, as the positive example ratio increases, the prediction on positive instances becomes more accurate while the performance on negative classification decreases, as expected. We believe a proportion of \textit{Click} examples in the range of 45\% to 55\% is reasonable for an online prediction system.
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{./image/proportion.eps}
\caption{Unbalanced data re-sample. Horizontal axis represents the proportion of positive examples in the training data set.}
\label{figure:unbalance}
\end{figure}
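The re-sampling step itself amounts to down-sampling the majority (non-click) class until positives reach a target proportion. A minimal sketch, assuming examples are simple (features, label) pairs rather than the authors' actual pipeline:

```python
import random

def resample(examples, labels, target_pos_ratio, seed=0):
    """Down-sample negatives so positives make up target_pos_ratio."""
    rng = random.Random(seed)
    pos = [i for i, y in enumerate(labels) if y == 1]
    neg = [i for i, y in enumerate(labels) if y == 0]
    # Number of negatives that makes positives exactly the target fraction.
    n_neg = int(len(pos) * (1 - target_pos_ratio) / target_pos_ratio)
    kept = pos + rng.sample(neg, min(n_neg, len(neg)))
    rng.shuffle(kept)
    return [examples[i] for i in kept], [labels[i] for i in kept]

# Toy data: 100 clicks among 1000 impressions (10% positive rate).
labels = [1] * 100 + [0] * 900
examples = list(range(1000))
_, new_labels = resample(examples, labels, target_pos_ratio=0.5)
print(sum(new_labels) / len(new_labels))  # -> 0.5
```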
\section{Conclusion and Future Work} \label{sec:conc}
In this paper, we proposed a click prediction solution for product search based on the characteristics of online C2C web sites. Both C2C-specific features and state-of-the-art features from sponsored search are developed, and a regression model is trained on the combined feature set. We summarize the contributions of this paper as follows:
\begin{itemize}
\item We present a novel problem in the domain of product search. To the best of our knowledge, few studies have been reported in this domain.
\item We validate the feasibility of transferring techniques used in sponsored search to the domain of product search.
\item Significant features based on the characteristics of C2C web sites are developed, and experiments show that the combined model obtains promising improvements.
\item Our methodology for predicting click probability is general and extensible to other C2C web sites.
\end{itemize}
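The combined model can be sketched as a standard regression pipeline over the heterogeneous feature set (CTR statistics, sort types, time features, and so on). The snippet below is an illustration with synthetic data, assuming logistic regression as the underlying model (cf. \cite{applied}); it is not the authors' production code:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical combined feature matrix: columns could stand for
# seller_ctr, item_left_time, root_category_ctr, buyer_ctr, ...
rng = np.random.default_rng(42)
X = rng.normal(size=(2000, 4))
# Synthetic labels: click probability driven mainly by the first two columns.
logits = 1.5 * X[:, 0] + 0.8 * X[:, 1]
y = (rng.random(2000) < 1 / (1 + np.exp(-logits))).astype(int)

# Standardize features, then fit a logistic-regression click model.
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)
proba = model.predict_proba(X)[:, 1]  # predicted click probability per example
print(model.score(X, y))  # training accuracy, well above the 0.5 baseline
```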
However, as this is a newly opened problem in the domain, more potential data can be mined for this task in the future. For example, buyers are more easily attracted by distinctive items with delicate pictures or with descriptions containing plenty of adjectives. There is also a trend for C2C web sites to bring B2C stores into their business, which might change search styles and user behavior. Finally, an interesting open problem is how to adjust the ratio of positive examples while taking business factors into consideration.
\begin{thebibliography}{10}
\bibitem{xiaoyuan} Xiaoyuan Wu, Alvaro Bolivar: Predicting the conversion probability for items on C2C ecommerce sites. CIKM 2009: 1377--1386
\bibitem{guirong} Gui-Rong Xue, Hua-Jun Zeng, Zheng Chen, Yong Yu, Wei-Ying Ma, WenSi Xi, WeiGuo Fan: Optimizing web search using web click-through data. CIKM 2004
\bibitem{regelson} M. Regelson, D. Fain: Predicting click-through rate using keyword clusters. Second Workshop on Sponsored Search Auctions, 2006
\bibitem{matthew} Matthew Richardson, Ewa Dominowska, Robert Ragno: Predicting clicks: estimating the click-through rate for new ads. WWW 2007
\bibitem{thorsten} Thorsten Joachims, Laura Granka, Bing Pan, Helene Hembrooke, Geri Gay: Accurately interpreting clickthrough data as implicit feedback. SIGIR 2005
\bibitem{eugene_imrove} Eugene Agichtein, Eric Brill, Susan Dumais: Improving web search ranking by incorporating user behavior information. SIGIR 2006
\bibitem{haibin} Haibin Cheng, Erick Cantú-Paz: Personalized click prediction in sponsored search. WSDM 2010
\bibitem{nick} Nick Craswell, Onno Zoeter, Michael Taylor, Bill Ramsey: An experimental comparison of click position-bias models. WSDM 2008
\bibitem{massimiliano} Massimiliano Ciaramita, Vanessa Murdock, Vassilis Plachouras: Online learning from click data for sponsored search. WWW 2008
\bibitem{eugene_learn} Eugene Agichtein, Eric Brill, Susan T. Dumais, Robert Ragno: Learning user interaction models for predicting web search result preferences. SIGIR 2006
\bibitem{georges} Georges Dupret, Benjamin Piwowarski: A user browsing model to predict search engine click data from past observations. SIGIR 2008
\bibitem{applied} David W. Hosmer, Stanley Lemeshow: Applied Logistic Regression (Wiley Series in Probability and Statistics). Wiley-Interscience, 2000
\end{thebibliography}
\end{document}
