% please textwrap! It helps svn not have conflicts across a multitude of
% lines.
%
% vim:set textwidth=78:

\section{Results and Evaluation}
\label{sec:results}
In this section, we present and evaluate the final models from each of the two methods.

\subsection{Linear SVM}

\begin{figure}[!t]
\begin{center}
\begin{tabular}{l}
$- 0.025 *$ RFA\_2\_1 = 4 \\
$+ 0.010 *$ RFA\_2\_1 = 1 \\
$- 0.008 *$ RFA\_2\_1 = 3 \\
$- 0.006 *$ RFA\_2\_2 = E \\
$+ 0.010 *$ RFA\_2\_2 = G \\
$+ 0.004 *$ RFA\_2\_2 = F \\
$- 0.013 *$ RFA\_2\_2 = D \\
$- 0.005 *$ RFA\_3\_0 = S \\
$+ 0.016 *$ RFA\_3\_1 = 4 \\
$+ 0.004 *$ RFA\_3\_1 = 3 \\
$+ 0.898$ \\
\end{tabular}
\end{center}
\caption{\figtitle{Linear SVM Model.}
The model produced by the linear SVM using the optimal cost value $C$ found.}
\label{fig:linear_model}
\end{figure}

Figure \ref{fig:linear_model} shows the final model produced by our linear
SVM. We suspect that the weights are small because the data set carries
little signal for positive examples, as was also found in
\cite{efficient_auc_optimization}. The high bias corresponds to the features
that we were unable to utilize due to the memory overhead of training on such
a large data set. When training on smaller subsets of examples with more
features, we observe that the bias drops and that the extra features are
assigned more weight. These results suggest that the linear model could be
improved if all $95,412$ examples were trained with more features.
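As a worked illustration, the model in Figure \ref{fig:linear_model} scores an
example by adding the weight of every indicator feature that fires to the
bias. The sketch below (the dictionary encoding of the RFA indicators is an
assumption for illustration, not our actual pipeline) shows why such small
weights leave the decision value close to the bias:

```python
# Sketch: scoring one example with the linear SVM weights from the figure.
# The dict encoding of the RFA indicator features is assumed for illustration.

WEIGHTS = {
    ("RFA_2_1", "4"): -0.025, ("RFA_2_1", "1"): +0.010, ("RFA_2_1", "3"): -0.008,
    ("RFA_2_2", "E"): -0.006, ("RFA_2_2", "G"): +0.010, ("RFA_2_2", "F"): +0.004,
    ("RFA_2_2", "D"): -0.013, ("RFA_3_0", "S"): -0.005,
    ("RFA_3_1", "4"): +0.016, ("RFA_3_1", "3"): +0.004,
}
BIAS = 0.898

def svm_score(example):
    """Decision value: bias plus the weight of every indicator that fires."""
    return BIAS + sum(w for (feat, val), w in WEIGHTS.items()
                      if example.get(feat) == val)
```

Even when the two largest positive weights fire together (RFA\_3\_1 $= 4$ and
RFA\_2\_2 $= G$), the score only moves from $0.898$ to $0.924$, so the
attributes barely perturb the output.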

\begin{figure*}[h!]
\centering
\vspace{-2mm}
\includegraphics[height=2.5in]{img/linearROC.eps}
\vspace{-2mm}
\caption{\figtitle{The Linear SVM ROC.}
The linear SVM's receiver operating characteristic (ROC) shows the
relationship between the true positive and false positive rates. We
hypothesize that the curve lies close to the diagonal, an even rate of true
positives to false positives, because of the low signal in the data set and
the low weights in our model.
}
\label{fig:linear_roc}
\end{figure*}

The low weights in our final model are also reflected in our receiver
operating characteristic (ROC) graph. The ROC curve in Figure
\ref{fig:linear_roc}, obtained using the optimal $C$ value, lies only
slightly above the diagonal, with an AUC of \optauc{}. This concurs with the
low weights: even large variations in the attribute patterns have little
effect on the output of the model.
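The AUC itself has a direct interpretation as the probability that a randomly
chosen positive example outranks a randomly chosen negative one, which is why
a near-diagonal ROC yields a value near $0.5$. A minimal sketch of this rank
statistic (the scores and labels below are toy values, not drawn from our
data set):

```python
# Sketch: AUC as the rank statistic P(score(pos) > score(neg)), ties = 1/2.
# Toy scores and labels for illustration only.

def auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

For scores that barely separate the classes, such as
\texttt{auc([0.6, 0.5, 0.5, 0.4], [1, 0, 0, 1])}, the statistic comes out at
exactly $0.5$: the chance level that a diagonal ROC represents.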

\subsection{Naive Bayes Learner}
We use a Naive Bayes learner to compare against the performance of the linear
SVM model. We choose this learner because it is fast compared to alternatives
such as logistic regression and a non-linear SVM with an RBF kernel. Since
the learner requires no algorithmic settings, there is nothing to optimize
for this model. For an accurate comparison we use the same set of features as
the linear SVM, and we obtain the AUC using $10$-fold cross-validation. The
ROC curve is shown in Figure \ref{fig:bayes_roc}. The curve's closeness to
the diagonal is consistent with the low signal in this data set and with the
small number of features used by the model.
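Because Naive Bayes has no settings to tune, training reduces to counting
feature values per class. The sketch below (a toy categorical Naive Bayes
scorer with Laplace smoothing; the feature names are stand-ins, and this is
not our actual implementation) illustrates the learner we compare against:

```python
# Sketch: a minimal categorical Naive Bayes scorer with Laplace smoothing.
# Feature names/values are toy stand-ins, not the actual KDD features.
from collections import Counter, defaultdict
import math

def train_nb(rows, labels):
    """rows: list of dicts (feature -> value); labels: 0/1."""
    prior = Counter(labels)
    counts = defaultdict(Counter)        # (feature, label) -> value counts
    for row, y in zip(rows, labels):
        for f, v in row.items():
            counts[(f, y)][v] += 1
    return prior, counts

def score_nb(model, row):
    """Log-odds of the positive class; positive score favors class 1."""
    prior, counts = model
    s = math.log(prior[1] / prior[0])
    for f, v in row.items():
        for y, sign in ((1, 1), (0, -1)):
            c = counts[(f, y)]
            n_vals = len(c) + 1          # observed values plus one unseen bucket
            s += sign * math.log((c[v] + 1) / (prior[y] + n_vals))
    return s
```

With this formulation, each fold of the cross-validation simply recounts the
training portion and scores the held-out portion, which is what makes the
learner fast relative to iterative methods.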

\begin{figure*}[h!]
\centering
\vspace{-2mm}
\includegraphics[height=2.5in]{img/AUCnaivebayes598.eps}
\vspace{-2mm}
\caption{\figtitle{The Naive Bayes Learner ROC.}
The Naive Bayes learner's receiver operating characteristic (ROC) shows the
relationship between the true positive and false positive rates.}
\label{fig:bayes_roc}
\end{figure*}

\begin{figure*}[h!]
\centering
\vspace{-2mm}
\includegraphics[height=2.5in]{img/bayes604.eps}
\vspace{-2mm}
\caption{\figtitle{The ROC for Naive Bayes with $75$ Features.}
By increasing the number of features from $10$ to $75$, the AUC improves from
\optaucb{} to $0.604$.
}
\label{fig:bayes_roc_75}
\end{figure*}

\subsection{Comparison}
Comparing the AUC obtained from the two models, we find that the linear SVM
performs better than Naive Bayes with the limited set of $10$ features. In
the case of Naive Bayes, the AUC improves to about $0.604$ with $75$ features
and $10$-fold cross-validation, as shown in Figure \ref{fig:bayes_roc_75}. We
could not verify the same behavior for the linear SVM due to memory
limitations.

\subsection{Limitations}
We note that the models we have built may not accurately predict responses to
the direct mailing, because we did not have a held-out test set on which to
validate them.

Our models are built using all $95,412$ examples from the training data, but
they do not contain all of the features that improve AUC values. Those
features were identified using a smaller number of examples for the linear
SVM and all examples for the Naive Bayes learner; we excluded them because
the memory footprint of training the linear SVM with all of them became
prohibitively large. For a fair comparison, the Naive Bayes learner uses the
same feature set, so it too omits features that improve AUC.


