% please textwrap! It helps svn not have conflicts across a multitude of
% lines.
%
% vim:set textwidth=78:


\section{Modeling}
\label{sec:flow}
In this section, we describe the flow of data we used to arrive at the best
model we could find.

\subsection{Initializing Data}
We start our flow by loading the raw data and manually assigning each useful
feature a type, such as numeric, real, or binominal. We explain how to
identify useful features in Section \ref{sec:identify}. We choose each
feature's type according to the transformations we plan to perform to improve
its usefulness.

\subsection{Filtering Examples}
\label{sec:filter_examples}
After setting up the initial types, we filter the examples to produce a set
that we can train on. We want a learning set with roughly the same number of
positive and negative examples, so that a model cannot achieve high accuracy
by trivially guessing the overwhelming majority class. We also want the
learning set to be large, so that we have more examples to train on. To
satisfy both requirements, we keep all of the positive examples and randomly
sample twice as many negative examples, since the number of negative examples
dwarfs the number of positive ones. By making only two thirds of the examples
negative, we prevent our models from reaching high accuracy by trivially
predicting the negative class, while still keeping some of the
disproportionality.
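The sampling step above can be sketched in a few lines. This is an
illustrative Python sketch, not the paper's actual implementation; the toy
\texttt{examples} data and the 50/1000 class counts are assumptions chosen
only to demonstrate the 1:2 ratio.

```python
import random

# Hypothetical labeled examples: (features, label). As in the paper's
# setting, negative examples (label 0) dwarf positive ones (label 1).
random.seed(0)
examples = [((i,), 1) for i in range(50)] + [((i,), 0) for i in range(1000)]

positives = [e for e in examples if e[1] == 1]
negatives = [e for e in examples if e[1] == 0]

# Keep every positive example and randomly sample twice as many negatives,
# so negatives make up exactly two thirds of the learning set.
sampled_negatives = random.sample(negatives, 2 * len(positives))
learning_set = positives + sampled_negatives
random.shuffle(learning_set)
```

The resulting set is three times the size of the positive class, which keeps
it reasonably large while capping the class imbalance at 1:2.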

\subsection{Improving Feature Usefulness}
Now that we have reduced our runtime by filtering examples, we perform
transformations to improve the usefulness of our features. We find that
converting all of the features into binominals makes them more useful than
equal ranking does, because binominal indicators preserve the independence of
the feature values, whereas equal ranking invalidly relates them linearly.
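The distinction can be made concrete with a small sketch. The helper
\texttt{to\_binominals} below is a hypothetical illustration, assuming that
the binominal conversion works like a one-indicator-per-category expansion:
each category becomes its own true/false feature, so no artificial ordering
is imposed the way a single equally ranked numeric code would impose one.

```python
def to_binominals(values):
    """Expand a nominal feature into one binary indicator per category,
    so categories stay independent instead of being ordered linearly."""
    categories = sorted(set(values))
    return [{c: (v == c) for c in categories} for v in values]

# A nominal feature with three categories becomes three binominal features.
colors = ["red", "green", "red", "blue"]
indicators = to_binominals(colors)
```

Equal ranking would instead map \texttt{blue}, \texttt{green}, \texttt{red}
to 1, 2, 3, implying \texttt{red} is "closer" to \texttt{green} than to
\texttt{blue}, a relationship the data does not support.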

After this step, we convert all of our nominal values to numerical values so
that the SVM can build a model. We then replace missing values with the
feature's average and normalize our data by applying a z-transformation. We
found that performing these steps improves the accuracy of the generated
model.
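The imputation and normalization steps can be sketched as follows. This is a
minimal Python sketch under the assumption that missing values are encoded as
\texttt{None} and that the z-transformation is the usual
$(x - \mu) / \sigma$ standardization; it is not the paper's implementation.

```python
import statistics

def impute_and_standardize(column):
    """Replace missing values (None) with the column mean, then apply a
    z-transformation: subtract the mean and divide by the std deviation."""
    observed = [x for x in column if x is not None]
    mean = statistics.mean(observed)
    # Filling with the mean leaves the column mean unchanged.
    filled = [mean if x is None else x for x in column]
    std = statistics.pstdev(filled)
    return [(x - mean) / std for x in filled]

# The imputed entry maps to 0 after standardization.
z = impute_and_standardize([1.0, None, 3.0, 2.0])
```

Because the imputed value equals the mean, it lands at exactly zero after
standardization, so imputation does not shift the distribution's center.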

\subsection{Constructing the Model}
After preprocessing all of our data, we construct two models. One uses an L2
SVM primal solver with a cost factor $C$ of \fix{number}; the other uses a
C-SVC SVM with a radial basis function (RBF) kernel.
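The two model configurations above can be approximated in scikit-learn as a
sketch; this is an assumption about comparable solvers, not the paper's
setup, and the synthetic data and the value $C=1.0$ are placeholders, not the
paper's cost factor.

```python
from sklearn.datasets import make_classification
from sklearn.svm import SVC, LinearSVC

# Hypothetical stand-in data; the paper trains on its own learning set.
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# Model 1: L2-regularized linear SVM solved in the primal
# (dual=False selects the primal formulation).
primal_svm = LinearSVC(C=1.0, loss="squared_hinge", dual=False).fit(X, y)

# Model 2: C-SVC with a radial basis function (RBF) kernel.
rbf_svm = SVC(C=1.0, kernel="rbf").fit(X, y)
```

The linear primal solver scales well to many examples, while the RBF kernel
lets the second model capture non-linear decision boundaries at a higher
training cost.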
