\section{Methodology}
\label{sec:methodology}

% TODO: reorganize this information possibly
\ignore{
The original feature values were independent and should not be related linearly. Converting everything to
binominals first preserves this independence; converting the features directly to
numerical values instead imposes an artificial linear ordering, and the equal
rankings obtained that way do not produce useful results. Since some of the features
had missing values, we applied the replace-missing-values operator, which substituted the mode for nominal features and
the mean for numeric and real features. We then stored the data as a new repository.
Finally, we discretized all of the integer and binary values into $10$
bins to \fix{EXPLAIN WHY WE DISCRETIZED}.

\subsection{Preprocessing Data}
As part of the RapidMiner process that loads the raw data from a CSV file, we assign types such as numeric, real, and binominal to the
attributes. Attributes such as dates and categories like Income are assigned the `numeric' type since we
eventually want to discretize them, and some of the features guessed as polynominal are converted to binominal.
The data read in this way are stored as a repository that later processes retrieve,
which speeds up computation and avoids reprocessing the raw file.
After this step we retrieve the repository, filter the examples that have $TARGET\_B = 1$,
and discretize all of the numeric values except real values into $10$ bins.
We then remove all useless features, convert all nominal features into
binominal, and finally convert all nominal features to numerical.

\subsection{Building a Model}
\label{sec:model}
After formatting the feature set and storing the preprocessed data as explained in Section, we prepare our model for linear
regression. We first retrieve the stored data and use grid-search optimization together with
cross validation. Within the cross validation we normalize the numeric and real values separately in the training and test segments using z-score transformations,
to prevent leakage from the training folds into the test folds.
In the training stage we use the linear regression operator and pass the model to the test stage, which applies
the weights learnt during training to evaluate the prediction. We search the ridge parameter over the range $0$ to $8000$ in $50$ steps,
using 10-fold cross validation with shuffled sampling to separate the examples. The grid-search operator then outputs the RMSE and the optimum ridge parameter.
The plots of RMSE versus the ridge parameter are shown in Fig. The final coefficients learnt from linear regression applied to the entire data set are shown in Table.
The different processes that we used are also shown in Fig.
}

\subsection{Identifying Optimal Ridge Values}
To identify the optimal ridge value, we ran a grid search over ridge parameters from $0$ to $8000$ in $50$ steps, evaluating each candidate with 10-fold cross validation and selecting the value that minimized the RMSE.
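The grid search described above can be sketched as follows. This is a minimal Python illustration using scikit-learn in place of the RapidMiner operators; the data here are synthetic placeholders, not the actual data set:

```python
# Sketch of the ridge-parameter grid search: 0 to 8000 over 50 steps,
# scored by RMSE under 10-fold shuffled cross validation.
# Assumes scikit-learn; X and y are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                       # placeholder features
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + rng.normal(size=200)

ridge_grid = np.linspace(0, 8000, 50)               # 0..8000 in 50 steps
cv = KFold(n_splits=10, shuffle=True, random_state=0)

def mean_rmse(alpha):
    # cross_val_score returns negated RMSE, so flip the sign
    scores = cross_val_score(Ridge(alpha=alpha), X, y,
                             scoring="neg_root_mean_squared_error", cv=cv)
    return -scores.mean()

rmses = [mean_rmse(a) for a in ridge_grid]
best = ridge_grid[int(np.argmin(rmses))]            # ridge minimizing RMSE
print(f"optimal ridge = {best:.1f}")
```

The same pattern applies to the real data once the preprocessed repository is exported; only the loading of `X` and `y` changes.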

\subsection{Identifying Useful Features}
In order to identify useful features, we applied various techniques that
filtered attributes at different granularities. We started by filtering
examples as described in Section \ref{sec:filter_examples},
which reduced our set from about $95{,}000$ examples to $4843$.
We then sieved through the set of features and their explanations given in [1] and
reduced the feature set to $187$.

To further reduce this subset of features, we ran sets of ten
features at a time through 3-fold cross validation on a linear regression
with a high ridge value ($10^7$), performing normalization inside the cross
validation. With the high ridge value, we identified our potential candidate
features by choosing the ones that had high coefficients with low standard error in the linear model.
Because these features had high coefficients and low standard errors, they
were likely to affect the final model and the prediction greatly.
We then kept all of the features that had low p-values, since the p-values
indicate how useful a feature is to the prediction; these form our final
set of useful features. These results can be seen in Table
\ref{table:training_coefficients} and are discussed in Section
\ref{sec:training_coefficients}.
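The batched screening step can be illustrated as follows. This is a sketch in Python with NumPy and SciPy standing in for the RapidMiner linear-regression operator; the data, feature indices, and the $0.05$ p-value cutoff are illustrative assumptions, not the actual KDD features or threshold:

```python
# Sketch of the batched feature screen: fit a linear model on ten
# features at a time and keep the ones with low p-values.
import numpy as np
from scipy import stats

def ols_pvalues(Xb, y):
    """Fit OLS with an intercept; return two-sided p-values per feature."""
    n = len(y)
    Xd = np.column_stack([np.ones(n), Xb])          # prepend intercept column
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    dof = n - Xd.shape[1]
    sigma2 = resid @ resid / dof                    # residual variance
    se = np.sqrt(sigma2 * np.diag(np.linalg.inv(Xd.T @ Xd)))
    t = beta / se
    return 2 * stats.t.sf(np.abs(t), dof)[1:]       # drop the intercept entry

rng = np.random.default_rng(1)
n, p = 300, 30
X = rng.normal(size=(n, p))
# synthetic target: only features 0, 7, and 23 truly matter
y = 2.0 * X[:, 0] - 1.5 * X[:, 7] + 3.0 * X[:, 23] + rng.normal(size=n)

kept = []
for start in range(0, p, 10):                       # ten features per batch
    cols = list(range(start, min(start + 10, p)))
    pvals = ols_pvalues(X[:, cols], y)
    kept.extend(col for col, pv in zip(cols, pvals) if pv < 0.05)

print("kept features:", kept)
```

Features with genuinely predictive coefficients survive the screen, while most irrelevant ones are rejected; a small number of false positives at the chosen cutoff is expected.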

\ignore{
\subsection{Limitations}
}
