% please textwrap! It helps svn not have conflicts across a multitude of
% lines.
%
% vim:set textwidth=78:

\section{Methodology}
\label{sec:methodology}
In this section, we explain our methodology for finding the optimal set of
parameters to produce our models.

\subsection{Test Flow}
To begin working with this large training set, we first establish a flow to
evaluate whether a decision makes a positive impact on training accurate
models. To measure performance, we choose AUC since it is the metric we
ultimately try to optimize. We then set up cross validation using stratified
sampling to evaluate the models produced by our linear SVM and Naive Bayes
methods.
We start with different numbers of folds in our cross validation, varying
them with the number of examples we use to test our flow. We then establish
the flow as explained in Section \ref{sec:flow} by trial and error, except
that we move normalization and missing-value replacement inside the cross
validation so that their statistics are computed only on the training folds.
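A minimal sketch of this flow (the actual experiments were run in
RapidMiner; the helper names below are illustrative): stratified folds keep
the donor/non-donor ratio stable across folds, and AUC can be computed from
ranks.

```python
import numpy as np

def stratified_folds(y, k, seed=0):
    """Assign each example to one of k folds, keeping the class ratio of
    the 0/1 labels in y roughly equal across folds."""
    rng = np.random.default_rng(seed)
    folds = np.empty(len(y), dtype=int)
    for cls in np.unique(y):
        idx = np.flatnonzero(y == cls)
        rng.shuffle(idx)
        folds[idx] = np.arange(len(idx)) % k  # deal class round-robin
    return folds

def auc(y_true, scores):
    """Rank-based AUC (Mann-Whitney); tied scores are not midranked."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = y_true == 1
    n_pos = pos.sum()
    n_neg = len(y_true) - n_pos
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

Any normalization or missing-value replacement would be fitted on the
training folds of each split and only applied to the held-out fold, which
is why those operators sit inside the cross validation in our flow.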

\subsection{Selecting Features}
\label{sec:identify}
We first eliminate the amount donated, TARGET\_D, from the set of features
we select because it leaks the label: a nonzero donation amount directly
reveals that somebody donated. Using TARGET\_D would therefore produce an
invalid model.

Next, we reduced our runtime by cutting the number of examples down to $20,000$
and the number of folds in our cross validation down to $3$. This
reduction in the training set gave us the ability to test many combinations of
features.

To begin sifting through the numerous attributes, we replaced our Fast Large
Margin SVM with a linear regression model with a high ridge factor of $10^7$
as suggested in \cite{message_board}. We recoded feature types as needed to
build our model using a linear regression. We ran through all $480$
features, using only about $5$ to $20$ at a time, and identified the
features with a high absolute t-score or a low p-value.
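A numpy sketch of this screening step (a plain least-squares stand-in; the
actual runs used RapidMiner's linear regression, whose large ridge factor
mainly stabilizes the matrix inversion):

```python
import numpy as np

def ols_t_scores(X, y):
    """t-statistics of OLS coefficients for a small feature subset;
    features with high |t| (equivalently, low p-value) are kept."""
    n = len(y)
    Xd = np.column_stack([np.ones(n), X])       # add intercept column
    XtX_inv = np.linalg.inv(Xd.T @ Xd)
    beta = XtX_inv @ Xd.T @ y                   # least-squares fit
    resid = y - Xd @ beta
    sigma2 = resid @ resid / (n - Xd.shape[1])  # residual variance
    se = np.sqrt(sigma2 * np.diag(XtX_inv))     # coefficient std errors
    return beta[1:] / se[1:]                    # skip the intercept
```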

After identifying all the features that predicted TARGET\_B well with a
linear regression model, we used the same features to build our linear SVM
and Naive Bayes models. We then added and removed features from this set by
trial and error to improve AUC. We aimed for a very small set of features to
make runs feasible in RapidMiner with the limited amount of memory available
to us.

After establishing our features, we validated that they still produced
reasonable AUC values with 10-fold cross validation when the training set
was increased to sizes of $50,000$ to $95,412$ examples.

\subsection{Identifying the Optimal Value of $C$ in the Linear SVM Model}
\label{sec:grid_search}

\begin{figure}[!t]
\begin{center}
\begin{tabular}{|c|c|}   \hline
{\bf C} & {\bf AUC} \\ \hline
$2$     & $0.600$   \\ \hline
$4$     & $0.601$   \\ \hline
$8$     & $0.591$   \\ \hline
$16$    & $0.597$   \\ \hline
$32$    & $0.599$   \\ \hline
$64$    & $0.590$   \\ \hline
$128$   & $0.592$   \\ \hline
$256$   & $0.594$   \\ \hline
$512$   & $0.608$   \\ \hline
$1024$  & $0.596$   \\ \hline
\end{tabular}
\end{center}
\caption{\figtitle{Linear SVM C vs. AUC.}
The results of grid optimization stepping from $2^1$ to $2^{10}$ in powers of
two show that a value of $C = $\optc{} produces the best AUC value of \optauc{}.}
\label{fig:c_vs_auc_linear_tab}
\end{figure}

To find the optimal value of the cost $C$ for our linear SVM, we used a grid
search. We wrapped our 10-fold cross validation inside the grid search and
tried the values of $C$ shown in Figure \ref{fig:c_vs_auc_linear_tab},
restricting ourselves to powers of two to search the space quickly. As the
table shows, the optimal value of $C = $\optc{} produces an AUC of \optauc{}
for the linear SVM. This high value of $C$ corresponds to weak
regularization, which is expected when the number of training examples is
large, similar to what is explained in \cite{class_notes}.
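The selection step itself reduces to an argmax over the grid; a minimal
sketch using the AUC values reported in
Figure \ref{fig:c_vs_auc_linear_tab}:

```python
# C grid: powers of two from 2^1 to 2^10, matching the table.
grid = [2 ** k for k in range(1, 11)]

# AUC obtained for each C; in the real flow each entry is the result of a
# 10-fold cross validation of the linear SVM wrapped inside the search.
auc_for_c = {2: 0.600, 4: 0.601, 8: 0.591, 16: 0.597, 32: 0.599,
             64: 0.590, 128: 0.592, 256: 0.594, 512: 0.608, 1024: 0.596}

best_c = max(grid, key=lambda c: auc_for_c[c])
```

A finer search around the best power of two could refine $C$ further, but
we stopped at powers of two to keep the runtime manageable.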

