% please textwrap! It helps svn not have conflicts across a multitude of
% lines.
%
% vim:set textwidth=78:

\section{Results and Evaluation}
\label{sec:results}
In this section, we present the final models for predicting TARGET\_B and
TARGET\_D and evaluate the performance of these models on the test data.

\subsection{Linear Regression}

\begin{figure*}[!h]
\centering
\includegraphics[height=3in]{img/ridgermse.eps}
\vspace{-2mm}
\caption{\figtitle{Ridge parameter vs.\ RMSE for linear regression.}
Using a grid parameter search, the optimal value of the ridge parameter
$\lambda$ that minimizes RMSE is \optridge{}.
}
\label{fig:ridge_rmse}
\end{figure*}

\begin{figure*}[!t]
\centering
\includegraphics[height=3in]{img/linearmodel.eps}
\vspace{-2mm}
\caption{\figtitle{Final linear regression model.}
The final linear regression model trained on the entire training data.
}
\label{fig:linear_model}
\end{figure*}

\begin{figure*}[!h]
\centering
\includegraphics[height=3in]{img/finalprocess_lin.eps}
\vspace{-2mm}
\caption{\figtitle{Final process for linear regression.}
}
\label{fig:linear_process}
\end{figure*}

Figure \ref{fig:linear_model} shows the final model produced by our linear
regression learner. This model predicts the donation amount (TARGET\_D) for
each example with TARGET\_B = 1. We used the same features as in our linear
regression model from Assignment 1 and trained the model with the same ridge
parameter value of \optridge{}. As shown in Figure \ref{fig:ridge_rmse}, the
RMSE incurred by this model during 10-fold cross-validation on the entire
training data is \optrmse{}.

\subsection{Logistic Regression}

\begin{figure*}[h!]
\centering
\includegraphics[width=5in]{img/recal.eps}
\vspace{-2mm}
\caption{\figtitle{Confusion matrix for logistic regression.}
}
\label{fig:confusion_matrix}
\end{figure*}

\begin{figure*}[!t]
\begin{center}
\begin{tabular}{l l}
$- 0.099 *$ RFA\_2\_1 = 4 \\
$+ 0.110 *$ RFA\_2\_1 = 1 \\
$- 0.044 *$ RFA\_2\_1 = 3 \\
$- 0.015 *$ RFA\_2\_1 = 3 \\
$- 0.066 *$ RFA\_2\_2 = E \\
$+ 0.105 *$ RFA\_2\_2 = G \\
$+ 0.027 *$ RFA\_2\_2 = F \\
$- 0.105 *$ RFA\_2\_2 = D \\
$- 0.058 *$ RFA\_3\_0 = S \\
$+ 0.009 *$ RFA\_3\_1 = 4 \\
$+ 0.001 *$ RFA\_3\_1 = 3 \\
$+ 0.012 *$ ADATE\_10 \\
$- 0.019 *$ ADATE\_11 \\
$+ 0.034 *$ ADATE\_12 \\
$- 0.064 *$ NGIFTALL \\
$+ 0.002 *$ LASTGIFT \\
$- 0.110 *$ LASTDATE \\
$- 0.020 *$ AVGGIFT \\
$+ 0.688$ \\
\end{tabular}
\end{center}
\caption{\figtitle{Fast Large Margin Logistic Regression Model.}
The model produced by the linear SVM using the optimal cost value C found.}
\label{fig:logistic_model}
\end{figure*}

\begin{figure*}[!h]
\centering
\includegraphics[height=3in]{img/finalprocess_log.eps}
\vspace{-2mm}
\caption{\figtitle{Final process for logistic regression.}
}
\label{fig:logistic_process}
\end{figure*}

Figure \ref{fig:logistic_model} shows the final model produced by our logistic
regression learner. We use the Fast Large Margin operator with the L2 logistic
regression kernel to train our model for predicting TARGET\_B, i.e.\ whether
an individual will donate. We chose this operator over RapidMiner's Logistic
Regression operator because of its faster runtimes and lower memory
requirements. We use the same features for our logistic regression model as we
used for the linear SVM model in Assignment 2 and the Naive Bayes model in
Assignment 3. We reason that since the prediction target, TARGET\_B, is the
same for these classifiers, the features that were found to be good predictors
before should be good predictors of TARGET\_B in our logistic regression
learner as well. We used a grid parameter search to find the value of the cost
parameter C that produces the best performance under 3-fold cross-validation.
However, varying the value of C did not cause a significant difference in
performance, so we assigned C the value \optc{}.

We use positive-class recall and precision to measure the performance of our
logistic regression model. Positive-class recall is the fraction of actual
positive examples that are predicted positive, which makes it a good metric
for the accuracy of TARGET\_B = 1 predictions. Positive-class precision is
likewise the fraction of predicted-positive examples that are actually
positive, which is also a good metric of TARGET\_B = 1 prediction accuracy.
As shown in Figure \ref{fig:confusion_matrix}, the positive-class recall and
precision obtained by our logistic regression model are \optrecall{} and
\optprecision{}, respectively.
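Both metrics follow directly from the confusion-matrix counts. A small sketch
(the count values in the usage example are illustrative, not our actual
results):

```cpp
// Sketch: positive-class recall and precision from confusion-matrix
// counts: tp = true positives, fn = false negatives, fp = false positives.
double pos_recall(int tp, int fn)    { return static_cast<double>(tp) / (tp + fn); }
double pos_precision(int tp, int fp) { return static_cast<double>(tp) / (tp + fp); }
```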


\subsection{Applying Final Models to Test Data}

The two regression models are then applied to the test data as shown in
Figure \ref{fig:linear_process} and Figure \ref{fig:logistic_process}. The
logistic regression model predicts the probability of donation, i.e.\ the
probability that TARGET\_B = 1; we call this probability $p(x)$. The linear
regression model estimates the donation amount, i.e.\ the value of
TARGET\_D, given that TARGET\_B = 1; we call this estimate $a(x)$. The
resulting $p(x)$ and $a(x)$ values for each example $x$ are written into
separate CSV files, which are then processed by a C++ program. The program
reads the two files line by line and computes the product of the two
predictions, $p(x) \cdot a(x)$. We choose to solicit each example $x$ for
which this product is greater than the cost of solicitation, $\$0.68$. For
each example $x$ to be solicited, the program looks up the actual donation
amount in valtargt.txt. These donation amounts are then summed to give the
final donation total obtained by soliciting the subset of examples satisfying
this criterion.
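The core of this post-processing step can be sketched as follows. In our real
program $p(x)$, $a(x)$, and the actual donation come from the two CSV files
and valtargt.txt; here they are passed in directly for illustration, and the
example values in the test are made up:

```cpp
#include <vector>

// One test example: p = predicted donation probability, a = estimated
// donation amount, actual = true donation amount from valtargt.txt.
struct Example { double p, a, actual; };

// Sketch: solicit x when the expected donation p(x)*a(x) exceeds the
// $0.68 mailing cost, and sum the actual donations of those solicited.
double total_donations(const std::vector<Example>& examples,
                       double cost = 0.68) {
    double total = 0.0;
    for (const Example& e : examples)
        if (e.p * e.a > cost)   // expected value exceeds solicitation cost
            total += e.actual;  // count this donor's actual donation
    return total;
}
```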

By applying the models obtained from linear regression training and logistic
regression training to the test data, we obtain a net donation amount of
\optdonation{} from \optsol{} solicitations, which results in a net profit
of \optprofit{} after subtracting the total cost of solicitation, $\$0.68$
times the number of solicitations.

\subsection{Limitations}
The winners of the KDD Cup 98 competition \cite{kdd_cup_winners} obtained a
profit of $\$14,712$ by soliciting $56,330$ donors, whereas the baseline
profit of $\$10,560$ is obtained by soliciting all donors. We achieve a
profit of \optprofit{} by soliciting \optsol{} donors. The number of
individuals solicited by our model is very high, indicating a weak logistic
regression model that attributes a high probability of responding to a
solicitation to most donors. We chose the Fast Large Margin operator's
logistic regression kernel, which is more memory efficient than the original
Logistic Regression operator in RapidMiner. However, this left us fewer
parameters to tune to obtain the best recall on the dataset, and we had to
depend only on the available features for prediction. We hypothesize that
this limited the predictions of our logistic regression model. The linear
regression model, on the other hand, achieves an RMSE of \optrmse{} on the
training dataset; since the final prediction is a product of the two models'
outputs, this supports our hypothesis that the logistic regression model is
the weaker component.

