\documentclass{article}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{graphicx}
\textheight=10in
\pagestyle{empty}
\usepackage[margin=1.5cm]{geometry}
\usepackage{float}

\begin{document}
\title{Personalized search using online Bayesian updating}
\date{}
\author{Rohith K Menon and Varun Loiwal}
\maketitle 
\section{Abstract}


With the information explosion we see nowadays, it is increasingly important to get access to relevant data. Web search relevance is a thoroughly studied problem, and many algorithms and models have been developed for it. While access to relevant data is important, the notion of relevance is not the same for every user, and a one-size-fits-all solution does not work well in all conditions. Personalizing document relevance is itself a well-researched area. Motivated by the applications of online Bayesian learning, we apply it to the problem of personalizing search results by learning from user feedback. Specifically, we address the problem of personalizing search within a book by incorporating user feedback.

\section{Introduction}

Relevant web search results based on state-of-the-art models for computing page ranks were a big win. But page rank is a metric that is good on average; there will still be users who are not happy with the results. This is where personalized search comes in. Different people have different preferences, and if the search results incorporate the preferences of a user, the relevance of the results goes up. Learning the preferences of a user has a wide variety of applications.

We apply this concept to personalizing search within a book. Suppose you want to search for some topic and figure out which chapter gives you the best information about it. If the model already knows which chapters you previously liked, or, in a broader picture, which books you like, it can provide better suggestions for chapters and books. With each search, the model learns from the user's interaction, so the user no longer needs to remember his or her preferences.

\section{Problem Statement}


We are given a book with a set of chapters $C_i$ and a set of words $T_{ij}$, where $T_{ij}$ denotes the word $T_j$ in chapter $C_i$. Given a search term $t$, we wish to order the chapters of the book according to their relevance to $t$. Once the search results are displayed, the user interacts with them to indicate preferences by liking certain results $(C_i, t)$. The user feedback is then used to recompute the relevance scores for the chapters, and the new ordering of the results is shown to the user.

\section{Model without user feedback}

We model the ordering of the search results for chapters given a search term using Bayes' rule.

\begin{align*}
P(Chapter = C_i \mid Term = t)
&= \frac{P(Term = t \mid Chapter = C_i) \cdot P(Chapter = C_i)}{P(Term = t)} \\
&\propto P(Term = t \mid Chapter = C_i) \cdot P(Chapter = C_i)
\end{align*}

The naive model, in which user likes are not taken into account, is given above. Here a term is simply a word in the chapter. The probability $P(t \mid C)$ is computed from the word counts in the chapter, and the chapter probability $P(C)$ is computed from the number of words the chapter has, where $N$ denotes the total number of words in the book.

\begin{align*}
P(word = T_{ij} \mid Chapter = C_i) &= \frac{n(T_{ij})}{n(C_i)} \\
P(Chapter = C_i) &= \frac{n(C_i)}{N}
\end{align*}

Given a term $t$, we compute $P(Chapter = C_i \mid t)$ for all chapters and order the search results in decreasing order of $P(C \mid t)$. This is a static model: it produces the same results for the same term, with no user feedback. In the next version, we allow users to interact with the search results and design a model that learns from user feedback in an online fashion.
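The static ranking might be sketched as follows. The chapter contents here are hypothetical illustrative data, and the function name is our own; the report does not show the actual implementation.

```python
from collections import Counter

# Hypothetical chapters, each represented as a list of tokens.
chapters = {
    "Chapter 1": ["security", "information", "network", "security"],
    "Chapter 2": ["information", "retrieval", "search", "information"],
}

def rank_chapters(term, chapters):
    """Rank chapters in decreasing order of P(t|C) * P(C)."""
    total_words = sum(len(words) for words in chapters.values())  # N
    scores = {}
    for name, words in chapters.items():
        counts = Counter(words)
        p_t_given_c = counts[term] / len(words)  # P(t|C) = n(T_ij) / n(C_i)
        p_c = len(words) / total_words           # P(C)   = n(C_i) / N
        scores[name] = p_t_given_c * p_c
    return sorted(scores, key=scores.get, reverse=True)
```

With this data, searching for "information" ranks Chapter 2 first, since the term occurs there twice out of four words.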

\section{Model with user feedback}

The static model discussed in the previous section, which does not allow for user interaction, is augmented with the capability to incorporate user likes. A like is represented as a (chapter, word) tuple $(C_i, T_{ij})$. With subsequent searches, the user provides more such like tuples. The idea is to reorder the search results so that they account for all the likes the user has performed. Let $(C_i^{(k)}, T_{ij}^{(k)})$ denote the $k^{th}$ like the user has performed.

We assume the following distributions for the chapters as well as the words within those chapters. The chapters are assumed to be distributed according to a multinomial distribution with parameter $\mu_c$.

\begin{center}
\fbox{
\begin{minipage}[b]{1in}
\begin{align*}
  C \mid \mu_c &\sim \text{Multinomial}(\mu_c) \\
  \mu_c &\sim \text{Dirichlet}(\alpha_c)
\end{align*}
\end{minipage}
}
\end{center}

For words within a chapter, we also assume a multinomial distribution with a Dirichlet prior. The parameters of the Dirichlet distribution are initially set to $n(T_{ij})$ for word $T_{ij}$ in chapter $C_i$.

\begin{center}
\fbox{
\begin{minipage}[b]{1in}
\begin{align*}
  T \mid C, \mu_{T|C} &\sim \text{Multinomial}(\mu_{T|C}) \\
  \mu_{T|C} &\sim \text{Dirichlet}(\alpha_{T|C})
\end{align*}
\end{minipage}
}
\end{center}

We assume that the parameter $\mu_c$ of the chapters follows a Dirichlet distribution with parameters $\alpha_c$. All chapters are initially assumed equally likely, and hence $\alpha_c^{(0)} = 1$ for all chapters. Recursive Bayesian updating is performed to learn from the likes the user performs. For every like, we update the Dirichlet parameters and recompute the probabilities, then produce a new ordering of the search results based on the likes the user has performed. This process involves two phases: an update phase and a predict phase.
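Concretely, the priors might be initialized as in the sketch below. The chapter data is hypothetical and the variable names are our own; this only mirrors the initialization described above ($\alpha_c^{(0)} = 1$, word-count seeds for $\alpha_{T|C}$).

```python
from collections import Counter

# Hypothetical chapters, each represented as a list of tokens.
chapters = {
    "Chapter 1": ["security", "information", "network", "security"],
    "Chapter 2": ["information", "retrieval", "search", "information"],
}

# alpha_C^(0) = 1 for every chapter: all chapters equally likely a priori.
alpha_C = {name: 1.0 for name in chapters}

# alpha_{T|C}^(0) = n(T_ij): Dirichlet parameters seeded with word counts.
alpha_TC = {name: dict(Counter(words)) for name, words in chapters.items()}
```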

\subsection{Update phase}

This is the phase where the probabilities are updated according to the likes made by the user. The update essentially amounts to adding a count of 1 to the Dirichlet parameters of the liked chapter and of the liked word within that chapter. The update equations are given by:

\begin{center}
\fbox{
\begin{minipage}[b]{1in}
\begin{align*}
  \mu_C \mid C^{(1)} = c_i, T^{(1)} = t_j &\sim \text{Dirichlet}(\alpha_C^{(1)}) \\
  \text{where }\alpha_C^{(1)} &= \alpha_C^{(0)} + \mathbb{1}[C^{(1)} = c_i] \\
  \mu_{T|C} \mid C^{(1)} = c_i, T^{(1)} = t_j &\sim \text{Dirichlet}(\alpha_{T|C}^{(1)}) \\
  \text{where }\alpha_{T|C}^{(1)} &= \alpha_{T|C}^{(0)} + \mathbb{1}[T^{(1)} = t_j \text{ and } C^{(1)} = c_i]
\end{align*}
\end{minipage}
}
\end{center}
The computed posteriors become the new priors and are used to recursively compute the subsequent posteriors.
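This update can be sketched in a few lines of code. The data structures and initial values here are hypothetical, mirroring the Dirichlet parameters described above.

```python
# Dirichlet parameters before any likes (hypothetical initial values:
# alpha_C = 1 per chapter, alpha_TC seeded with word counts).
alpha_C = {"Chapter 1": 1.0, "Chapter 2": 1.0}
alpha_TC = {
    "Chapter 1": {"security": 2, "information": 1, "network": 1},
    "Chapter 2": {"information": 2, "retrieval": 1, "search": 1},
}

def update(alpha_C, alpha_TC, liked_chapter, liked_term):
    """Recursive Bayesian update for one like (C_i, T_ij):
    the posterior counts become the new prior."""
    alpha_C[liked_chapter] += 1  # + 1[C = c_i]
    alpha_TC[liked_chapter][liked_term] = (
        alpha_TC[liked_chapter].get(liked_term, 0) + 1  # + 1[T = t_j and C = c_i]
    )

# The user likes ("Chapter 1", "security").
update(alpha_C, alpha_TC, "Chapter 1", "security")
```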

\subsection{Predict Phase}

Prediction is based on the assumed multinomial distributions of the chapters and of the words given chapters. From these distributions and the computed posteriors, which serve as the priors, we obtain the following equations for the probability of a chapter given a term and all the previous likes.

\begin{center}
\fbox{
\begin{minipage}[b]{1in}
\begin{align*}
  P(C = c_i \mid T_{1..k}, C_{1..k-1}, \mu) &\propto \mu_{T|C}^{(k-1)}(T_k \mid C = c_i) \cdot \mu_C^{(k-1)}(C = c_i) \\
  \text{where } \mu_C^{(l)}(C = c_i) &= \frac{\alpha_C^{(l)}(C = c_i)}{\sum_{c}{\alpha_C^{(l)}(C = c)}} \\
  \mu_{T|C}^{(l)}(T = t_j \mid C = c_i) &= \frac{\alpha_{T|C}^{(l)}(T = t_j, C = c_i)}{\sum_{t}{\alpha_{T|C}^{(l)}(T = t, C = c_i)}}
\end{align*}
\end{minipage}
}
\end{center}

These equations follow naturally from the expected values of the Dirichlet distributions over the parameters of the multinomial distributions for chapters and for words given chapters.

\section{Experiments \& Results}

\subsection{Experiment 1: Start server (no previous knowledge)}
\begin{itemize}
\item Search for term "Information"
\item Search for term "Security"
\item Like one result for the term "Security".
\item Search for term "Information" again to see the change.
\end{itemize}

\begin{figure}[H]
\centering
\includegraphics[width=3.5in]{../images/1.png}
\caption{Search term="Information" with no learning}
\end{figure}

\begin{figure}[H]
\centering
\includegraphics[width=3.5in]{../images/2.png}
\caption{Search term="Security", with no learning.}
\end{figure}

\begin{figure}[H]
\centering
\includegraphics[width=3.5in]{../images/3.png}
\caption{Search term="Security", after liking result 3, i.e. Chapter 1. The result at rank 3 moved to rank 1.}
\end{figure}

\begin{figure}[H]
\centering
\includegraphics[width=3.5in]{../images/4.png}
\caption{Search term="Information", after one like of a result for the term "Security". The result at rank 3 moved to rank 1.}
\end{figure}

\subsection{Experiment 2: Restart server to wipe out previous knowledge}
\begin{itemize}
\item Search for term "Information"
\item Search for term "Security"
\item Like multiple (here 3) results for the term "Security".
\item Search for term "Information" again to see the change.
\end{itemize}

\begin{figure}[H]
\centering
\includegraphics[width=3.5in]{../images/1.png}
\caption{Search term="Information" with no learning}
\end{figure}

\begin{figure}[H]
\centering
\includegraphics[width=3.5in]{../images/2.png}
\caption{Search term="Security", with no learning.}
\end{figure}

\begin{figure}[H]
\centering
\includegraphics[width=3.5in]{../images/5.png}
\caption{Search term="Security", after liking the results at ranks 3, 4, and 5. The ordering was the same as before, except that the scores of these three results increased.}
\end{figure}

\begin{figure}[H]
\centering
\includegraphics[width=3.5in]{../images/6.png}
\caption{Search term="Information", after multiple likes of results for the term "Security". The result at rank 3 moved to rank 2 (i.e. a smaller improvement).}
\end{figure}

\section{Conclusion}

Over time, our model learned the user's preferences, and this was reflected even in terms searched for the first time. As we had envisioned, the model gave better results. The next step would be to extend it to multiple books.

\end{document}
