% please textwrap! It helps svn not have conflicts across a multitude of
% lines.
%
% vim:set textwidth=78:


\section{Modeling}
\label{sec:flow}

\begin{figure*}
\center
\includegraphics[height=2.5in]{img/svmProcess.eps}
\vspace{-2mm}
\caption{\figtitle{The Final Linear SVM.}
We first load the data and recode any features as needed. Features like RFA\_3
are split into recency, frequency, and amount. We identify and recode
missing values in ways that help classification. We convert all nominal values to
binominals, then to numericals. We replace missing values,
normalize everything, and pass everything to the SVM.
}
}
\label{fig:svm_process}
\end{figure*}

In this section, we explain the flow of data we used to find our best
model. Figure \ref{fig:svm_process} shows the flow used in the linear
SVM. The Naive Bayes method uses the same preprocessing and simply replaces the
Fast Large Margin operator with a Naive Bayes operator.

\subsection{Initializing Data}
We start our flow by loading the raw data and manually assigning each useful
feature a type such as numeric, real, or binominal. We pick each attribute's
type depending on the transformations we plan to perform to improve its
usefulness.
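This step can be sketched in Python with pandas; the column names
\texttt{AGE} and \texttt{GENDER} and the inline data are hypothetical
stand-ins for the raw file, and the dtype mapping mirrors the
numeric/binominal distinction above:

```python
import pandas as pd
from io import StringIO

# Hypothetical excerpt of the raw data file; RFA_3 is a real attribute
# from the dataset, AGE and GENDER are illustrative placeholders.
raw = StringIO("AGE,GENDER,RFA_3\n34,F,A1F\n61,M,S2G\n")

# Manually assign each useful feature a type up front.
df = pd.read_csv(raw, dtype={"AGE": "float64",
                             "GENDER": "category",
                             "RFA_3": "string"})
```

Assigning types at load time, rather than after the fact, keeps later
transformations (splitting, binning, one-hot encoding) unambiguous.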

\subsection{Improving Feature Usefulness}
To improve each feature's usefulness, we first convert compound types
into distinct features. For example, we split each recency, frequency, and
amount field (e.g., RFA\_3) into three separate features and remove
the original attribute.
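A minimal sketch of this split, assuming each RFA code is a
three-character string whose positions encode recency, frequency, and
amount (the example values below are hypothetical):

```python
import pandas as pd

# Hypothetical two-example frame holding one compound RFA attribute.
df = pd.DataFrame({"RFA_3": ["A1F", "S2G"]})

# One new feature per component of the compound code...
df["RFA_3_R"] = df["RFA_3"].str[0]  # recency
df["RFA_3_F"] = df["RFA_3"].str[1]  # frequency
df["RFA_3_A"] = df["RFA_3"].str[2]  # amount

# ...then remove the original compound attribute.
df = df.drop(columns=["RFA_3"])
```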

Next, we transform missing values into predictors of TARGET\_B.
We accomplish this by replacing the missing values with special values that
capture what the absence means. To illustrate, a missing value in the
date a particular gift was returned indicates that the example did not donate
during that period. Thus, replacing missing values with zero and discretizing
by binning into a zero and a non-zero bin creates a new attribute indicating
whether a person donated during the period.
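The zero-versus-non-zero recoding described above can be sketched as
follows; the gift-date values are hypothetical, with \texttt{NaN}
standing for a missing date:

```python
import pandas as pd

# Hypothetical gift-date column; NaN means no donation in the period.
gift_date = pd.Series([9606.0, None, 9512.0, None])

filled = gift_date.fillna(0)         # replace missing values with zero
donated = (filled != 0).astype(int)  # bin into zero vs. non-zero

# donated is the new binary attribute: 1 = donated during the period.
```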

After this step, we convert all of our nominal values to binominal values to
maintain their independence. We eventually convert all of our features to
numerical values, and the default nominal-to-numerical method would map each
nominal value to an integer rank, which imposes a linear ordering that the
values do not have. By not converting to binominals first, we would
incorrectly encode each feature's meaning.
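The contrast between the two encodings can be sketched with pandas; the
\texttt{STATE} attribute and its values are hypothetical:

```python
import pandas as pd

state = pd.Series(["CA", "NY", "CA", "TX"], name="STATE")

# Rank encoding (the default nominal-to-numerical mapping) imposes a
# false linear ordering: CA=0 < NY=1 < TX=2.
ranked = state.astype("category").cat.codes

# Binominal (one-hot) encoding keeps the values independent instead:
# one 0/1 column per nominal value.
binominal = pd.get_dummies(state, prefix="STATE")
```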

Next, we convert all of our features to numerical values so that the SVM can
build a model from them. We then normalize the data using a z-score
transformation and replace all remaining missing values with the feature's
average.
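A minimal sketch of these final steps on one feature, assuming missing
values are imputed with the feature mean before the z-score
transformation (the sample values are hypothetical):

```python
import numpy as np

x = np.array([1.0, 2.0, np.nan, 4.0])

# Replace the remaining missing values with the feature's average...
x = np.where(np.isnan(x), np.nanmean(x), x)

# ...then apply the z-score transformation: zero mean, unit variance.
z = (x - x.mean()) / x.std()
```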

\subsection{Constructing the Model}
After preprocessing all of our data, we construct two models using all
$95,412$ examples. We build one using a Fast Large Margin learner with an
optimal cost $C$ of \optc{} and the other using a Naive Bayes learner. How we
identified this optimal value is explained in Section \ref{sec:grid_search},
and the resulting models are presented in Section \ref{sec:results}.
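The flow uses RapidMiner's Fast Large Margin operator; a rough
scikit-learn analogue of the linear-SVM step is sketched below. The toy
dataset and the cost value \texttt{C=0.01} are placeholders, not the
paper's data or its optimal \optc{}:

```python
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

# Toy stand-in for the 95,412 preprocessed, normalized examples.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# C=0.01 is a hypothetical cost; the paper's value is given by \optc.
clf = LinearSVC(C=0.01).fit(X, y)
```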
