% please textwrap! It helps svn not have conflicts across a multitude of
% lines.
%
% vim:set textwidth=78:

\section{Methodology}
\label{sec:methodology}
In this section, we explain our methodology for finding the optimal set of
parameters to produce our models.

\subsection{Selecting Features}
\label{sec:identify}
The features that are important in predicting TARGET\_D are not
necessarily important for the SVM model that predicts TARGET\_B. The
features that were good predictors for TARGET\_D derived their
importance from the subset of records with TARGET\_B = 1, whereas here
we must predict TARGET\_B itself, for which the same correlation does
not hold. In other words, a feature may take high values for both
TARGET\_B = 1 and TARGET\_B = 0 and therefore contribute nothing to the
prediction. We also eliminated TARGET\_D itself, since its direct
correlation with TARGET\_B would leak information into the prediction.

We selected 20 features at a time from the list of 480 features and
used a high value of the parameter C to identify the features with the
largest positive and negative weights. We then took these features and
incrementally added others that improved the recall of the positive
class. In particular, we observed that the RFA* features had a positive
impact on recall, which indicates that Recency, Frequency, and Amount
are directly correlated with TARGET\_B = 1.
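The weight-based screening step above can be sketched as follows. This
is a hypothetical illustration using scikit-learn's \texttt{LinearSVC}
on synthetic data: a batch of 20 candidate features is fit with a large
C (weak regularization), and the features with the largest weight
magnitudes, positive or negative, are retained. The data, feature
indices, and cutoff of five features are illustrative assumptions, not
values from our experiments.

```python
# Sketch of weight-based feature screening (synthetic data; illustrative only).
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))           # one batch of 20 candidate features
y = (X[:, 3] - X[:, 7] > 0).astype(int)  # stand-in for TARGET_B

# High C keeps regularization weak so informative weights stand out.
svm = LinearSVC(C=100.0, max_iter=10000)
svm.fit(X, y)

weights = svm.coef_.ravel()
# Keep the features with the largest |weight|, positive or negative.
keep = np.argsort(-np.abs(weights))[:5]
print(sorted(keep.tolist()))
```

On this synthetic example the two features that actually determine the
label dominate the learned weights, which is the behavior the screening
step relies on.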

\subsection{Identifying Optimal Values of C and $\upgamma$}
\label{sec:grid_search}
To identify the optimal cost value C and $\upgamma$, we used a grid search
optimization. For tuning our linear SVM, we performed our grid search for C
from \fix{number} to \fix{number} in increments of \fix{number}. Figure
\ref{linear_grid} shows the results of the search for the linear case.

To find the best values of C and $\upgamma$ for the RBF kernel, we used a grid
search from \fix{number} to \fix{number} in increments of \fix{number}. Figure
\ref{rbf_grid} shows the results of the search. These figures demonstrate
that the minimum lies within the searched region.
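The grid search described above can be sketched with scikit-learn's
\texttt{GridSearchCV}. The parameter ranges below are placeholders
standing in for the \texttt{\textbackslash fix} values in the text, and
the data are synthetic; this is an assumed sketch of the procedure, not
our exact configuration.

```python
# Sketch of the grid search over C and gamma for the RBF kernel.
# Grid bounds are placeholders; data are synthetic.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 4))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1.5).astype(int)

param_grid = {
    "C": [0.1, 1.0, 10.0, 100.0],  # placeholder range
    "gamma": [0.01, 0.1, 1.0],     # placeholder range
}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=2)
search.fit(X, y)
print(search.best_params_)
```

The same structure applies to the linear case, with a one-dimensional
grid over C only.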

To give a measure of optimality when choosing these values, we modified
our flow to use cross validation on the learning data. Within each grid
search, we performed 2-fold cross validation; we chose two folds to
reduce runtime. We also moved the missing-value replacement and z-score
normalization inside the cross validation, so that each fold is
normalized independently rather than computing the normalization
statistics over all of the data.
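The fold-wise preprocessing described above can be sketched with a
scikit-learn \texttt{Pipeline}, which refits the imputer and the
z-score scaler on each training split so that no statistics from the
held-out fold leak into normalization. The data, missing-value rate,
and classifier settings below are illustrative assumptions.

```python
# Sketch of per-fold preprocessing: imputation and z-score normalization
# live inside the cross validation via a Pipeline (synthetic data).
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 6))
X[rng.random(X.shape) < 0.1] = np.nan    # inject some missing values
y = rng.integers(0, 2, size=100)

pipe = Pipeline([
    ("impute", SimpleImputer(strategy="mean")),  # replace missing features
    ("zscore", StandardScaler()),                # normalization per fold
    ("svm", LinearSVC(C=1.0, max_iter=10000)),
])
scores = cross_val_score(pipe, X, y, cv=2)       # 2-fold CV, as in the text
print(scores)
```

Because the pipeline is refit within each fold, the imputation means and
z-score parameters come only from that fold's training portion.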
