\subsection{Naive Bayesian}
Naive Bayesian is a simple classifier that assumes that the influence of each attribute on the class is independent of the influence of any other attribute. Although this assumption does not hold for most datasets, Naive Bayesian has been shown to perform well \cite{zhangbayes}. The Naive Bayesian model is as follows:

We want to predict: 

\begin{equation}
P(C|F_1,F_2,\ldots,F_n)
\end{equation}

i.e., the probability of a class ($+1$ or $-1$ for our classification problems) given the feature values. Using Bayes' theorem,
\begin{equation}
P(C|F_1,F_2,\ldots,F_n) = \frac{p(C)\,p(F_1,\ldots,F_n|C)}{p(F_1,\ldots,F_n)}
\end{equation}
Now, applying the independence assumption,
\begin{equation}
P(C|F_1,F_2,\ldots,F_n) = \frac{p(C)\,p(F_1|C)\,p(F_2|C)\cdots p(F_n|C)}{p(F_1,\ldots,F_n)}
\end{equation}

Thus, for $k$ classes, the Naive Bayesian classifier is:

\begin{equation}
\mathrm{classify}(f_{1},f_{2},\ldots,f_{n}) = \arg\max_{c}\; p(C=c)\prod_{i=1}^{n}p(F_{i}=f_{i}|C=c)
\end{equation}
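The classifier above can be sketched in a few lines of Python. This is an illustrative implementation of the maximum-likelihood estimates and the argmax rule, not the exact code used in our experiments; the toy weather features in the usage comment are hypothetical.

```python
from collections import Counter, defaultdict

def train(rows, labels):
    """Estimate p(C) and p(F_i = f | C) from categorical training data."""
    n = len(labels)
    class_count = Counter(labels)
    prior = {c: k / n for c, k in class_count.items()}
    # cond[(i, f, c)] = number of times feature i takes value f within class c
    cond = defaultdict(int)
    for row, c in zip(rows, labels):
        for i, f in enumerate(row):
            cond[(i, f, c)] += 1
    likelihood = {key: v / class_count[key[2]] for key, v in cond.items()}
    return prior, likelihood

def classify(row, prior, likelihood, fallback):
    """Return argmax_c p(C=c) * prod_i p(F_i=f_i | C=c)."""
    best_c, best_p = None, -1.0
    for c, pc in prior.items():
        p = pc
        for i, f in enumerate(row):
            # use a fallback probability when p(F_i=f | C=c) was never observed
            p *= likelihood.get((i, f, c), fallback)
        if p > best_p:
            best_c, best_p = c, p
    return best_c

# Toy usage (hypothetical weather data):
#   prior, likelihood = train(rows, labels)
#   classify(("sunny", "hot"), prior, likelihood, fallback=1e-3)
```

The denominator $p(F_1,\ldots,F_n)$ is omitted since it is constant across classes and does not affect the argmax.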

\subsubsection{Adapting Naive Bayesian}
\begin{itemize}
\item \textbf{Discretization} - The Naive Bayesian model relies on variables being categorical. Hence, in order to adapt our numerical data to the Naive Bayesian model, we discretized it. We tried two different discretization approaches:
\begin{itemize}
\item \textbf{Supervised} - Supervised discretization was performed using Fayyad \& Irani's MDL method \cite{fayyad1992hcv}.
\item \textbf{Unsupervised} - Unsupervised discretization was performed using simple binning with the number of bins set to 10.
\end{itemize}
For our dataset, supervised discretization performs poorly due to the extremely skewed distribution of the data, creating only 2--3 bins in most cases. Hence we rely on unsupervised discretization for our results.
\item \textbf{Feature Selection} - We performed feature selection by calculating the Information Gain Ratio for each attribute and then ranking the attributes from highest to lowest. Section 5 presents the results of using different subsets of these ranked attributes in the Naive Bayesian calculation.

\item \textbf{Handling Missing Data} - We handle missing data using the techniques outlined in \cite{domingos1996bic}. We use ``?'' as the value for all missing data. Given semantic information about the data, it would be possible to try alternative approaches such as imputing the mean or median value. Also, to account for a feature value with a missing conditional probability $p(F_i|C)$, we use $p(C_i)/N$, where $N$ is the number of instances.
\end{itemize}
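The unsupervised discretization step with 10 bins can be sketched as follows. We assume equal-width binning here (the text says only "simple binning"), and the function name is illustrative.

```python
def equal_width_bins(values, n_bins=10):
    """Discretize one numeric column into n_bins equal-width bins,
    returning a bin index in [0, n_bins - 1] for each value."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins or 1.0  # guard against a constant column
    # clamp so the maximum value falls into the last bin rather than bin n_bins
    return [min(int((v - lo) / width), n_bins - 1) for v in values]
```

Each numeric column is replaced by its bin indices, which the Naive Bayesian model then treats as categorical values.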
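The Information Gain Ratio used for feature ranking can be sketched as below, assuming the features have already been discretized; this is a standard formulation (gain divided by split information), not our exact experimental code.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy H of a list of discrete values, in bits."""
    n = len(labels)
    return -sum((k / n) * math.log2(k / n) for k in Counter(labels).values())

def gain_ratio(feature_col, labels):
    """Information gain of a categorical feature, normalized by its split information."""
    n = len(labels)
    by_value = {}
    for f, c in zip(feature_col, labels):
        by_value.setdefault(f, []).append(c)
    # conditional entropy H(C | F)
    cond = sum(len(ys) / n * entropy(ys) for ys in by_value.values())
    gain = entropy(labels) - cond
    split_info = entropy(feature_col)  # H(F)
    return gain / split_info if split_info > 0 else 0.0
```

Attributes are then ranked from highest to lowest gain ratio, and prefixes of this ranking form the feature subsets evaluated in Section 5.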
