\subsection{SVM Results}

Table~\ref{table:all} lists the results of using all 230 attributes for classification. The linear, polynomial, radial, and sigmoid kernels all assigned $-1$ to every test case. When the sigmoid kernel was parameterized, its BAC dropped slightly, but it was able to identify some positive examples.
As discussed earlier, we suspected this behaviour was caused by the way we convert the categorical variables, and this was confirmed by training an SVM on the categorical data alone: even after parameterization it performed poorly, as shown in Table~\ref{table:ocategorical}.
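The degenerate ``all $-1$'' behaviour and the BAC numbers reported in the tables can be reproduced in miniature with the following sketch. This is not our actual pipeline: scikit-learn's \texttt{SVC} stands in for whatever SVM implementation was used, and the data is synthetic, with roughly 7\% positives and high-cardinality categorical codes mapped to arbitrary integers, mimicking the naive encoding discussed above.

```python
# Hypothetical sketch: an SVM on imbalanced data with naively
# integer-encoded categorical features, plus the sensitivity /
# specificity / BAC computation used in the tables.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)

# Synthetic stand-in data: ~7% positives, categorical levels coded
# as arbitrary integers (the encoding suspected of causing trouble).
n = 1000
X = rng.integers(0, 500, size=(n, 5)).astype(float)
y = np.where(rng.random(n) < 0.07, 1, -1)

# C is the SVM cost parameter (the "c=2" setting in the tables).
clf = SVC(kernel="sigmoid", C=2.0)
clf.fit(X, y)
pred = clf.predict(X)

# Confusion matrix with rows = true class, columns = predicted class,
# ordered [-1, +1], so ravel() yields tn, fp, fn, tp.
tn, fp, fn, tp = confusion_matrix(y, pred, labels=[-1, 1]).ravel()
sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
specificity = tn / (tn + fp) if (tn + fp) else 0.0
bac = (sensitivity + specificity) / 2
print(sensitivity, specificity, bac)
```

With a severely imbalanced sample like this, predicting $-1$ everywhere already achieves specificity $1$ and BAC $0.5$, which is exactly the pattern the unparameterized kernels show in Table~\ref{table:all}.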

\begin{table}[h!]
\caption{SVM classification using all 230 variables (C = churn, A = appetency, U = upselling)}
\centering
\resizebox{10cm}{!} {
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|}
\hline 
230 Vars & \multicolumn{3}{c|}{Sensitivity} & \multicolumn{3}{c|}{Specificity} & \multicolumn{3}{c|}{BAC}\tabularnewline
\hline
Method & C & A & U & C & A & U & C & A & U\tabularnewline
\hline
Linear & 0 & 0 & 0 & 1 & 1 & 1 & 0.5 & 0.5 & 0.5\tabularnewline
\hline 
Poly & 0 & 0 & 0 & 1 & 1 & 1 & 0.5 & 0.5 & 0.5\tabularnewline
\hline 
Radial & 0 & 0 & 0 & 1 & 1 & 1 & 0.5 & 0.5 & 0.5\tabularnewline
\hline 
Sigmoid & 0 & 0 & 0 & 1 & 1 & 1 & 0.5 & 0.5 & 0.5\tabularnewline
\hline 
Sigmoid, $C=2$ & 0.078 & 0.05 & 0 & 0.928 & 0.996 & 0.929 & 0.50 & 0.49 & 0.49\tabularnewline
\hline
\end{tabular}
}
\label{table:all}
\end{table}


\begin{table}[h!]
\caption{SVM Classification using categorical data only}
\centering
\resizebox{9cm}{!} {
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|}
\hline 
Categorical  & \multicolumn{3}{c|}{Sensitivity} & \multicolumn{3}{c|}{Specificity} & \multicolumn{3}{c|}{BAC}\tabularnewline
\hline
Method & C & A & U & C & A & U & C & A & U\tabularnewline
\hline 
Linear & 0 & 0 & 0 & 1 & 1 & 1 & 0.5 & 0.5 & 0.5\tabularnewline
\hline 
Poly & 0 & 0 & 0 & 1 & 1 & 1 & 0.5 & 0.5 & 0.5\tabularnewline
\hline 
Radial & 0 & 0 & 0 & 1 & 1 & 1 & 0.5 & 0.5 & 0.5\tabularnewline
\hline 
Sigmoid & 0 & 0 & 0 & 1 & 1 & 1 & 0.5 & 0.5 & 0.5\tabularnewline
\hline 
Sigmoid, $C=2$ & 0 & 0 & 0 & 1 & 1 & 1 & 0.5 & 0.5 & 0.5\tabularnewline
\hline
\end{tabular}
}
\label{table:ocategorical}
\end{table}


Using only the numerical data, after normalization, produced better results than before (Table~\ref{table:nnumerical}), and on churn the SVM outperformed both naive Bayes and the decision tree. In a further experiment, we used the top two categorical variables for each of churn, appetency, and upselling, selected by information gain ratio. For appetency and churn this again assigned $-1$ to all test cases, but for upselling a radial kernel with $\gamma = 2$ gave results comparable to using the numerical data alone. This suggests that a better numeric representation of categorical variables with such a huge state space could improve the SVM scores.
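The numeric-only pipeline described above can be sketched as follows. This is a minimal illustration, not our actual code: the min-max normalization is an assumption (the text does not specify the scheme), the data is synthetic, and scikit-learn's \texttt{SVC} again stands in for the SVM implementation used.

```python
# Hypothetical sketch: normalize numeric features, then fit an
# RBF-kernel SVM with a tuned gamma (gamma=2 as in the upselling run).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(loc=50.0, scale=10.0, size=(600, 8))  # toy numeric features
y = np.where(rng.random(600) < 0.07, 1, -1)          # ~7% positives

# Min-max normalize each column to [0, 1]; without scaling, columns
# with large ranges dominate the RBF kernel's distance computation.
mn, mx = X.min(axis=0), X.max(axis=0)
Xn = (X - mn) / (mx - mn)

clf = SVC(kernel="rbf", gamma=2.0)
clf.fit(Xn, y)
acc = clf.score(Xn, y)  # plain accuracy; BAC would be computed as above
```

Normalization matters here because the RBF kernel $\exp(-\gamma \lVert x - x' \rVert^2)$ is driven entirely by Euclidean distance, so unscaled features effectively re-weight the kernel in an uncontrolled way.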


\begin{table}[h!]
\caption{SVM Classification using normalized numerical data}
\centering
\resizebox{12cm}{!} {
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|}
\hline 
Numerical  & \multicolumn{3}{c|}{Sensitivity} & \multicolumn{3}{c|}{Specificity} & \multicolumn{3}{c|}{BAC}\tabularnewline
\hline 
Method & C & A & U & C & A & U & C & A & U\tabularnewline
\hline
Linear & 0 & 0 & 0 & 1 & 1 & 1 & 0.5 & 0.5 & 0.5\tabularnewline
\hline 
Poly & 0 & 0 & 0 & 1 & 1 & 1 & 0.5 & 0.5 & 0.5\tabularnewline
\hline 
Radial & 0 & 0 & 0.042 & 1 & 1 & 0.999 & 0.5 & 0.5 & 0.521\tabularnewline
\hline 
Sigmoid & 0.599 & 0.564 & 1 & 0.454 & 0.459 & 0.007 & 0.527 & 0.511 & 0.503\tabularnewline
\hline 
Sigmoid, $C=2$ & 0.304 & 0.128 & 0.008 & 0.835 & 0.925 & 0.999 & 0.57 & 0.527 & 0.503\tabularnewline
\hline
\end{tabular}
}
\label{table:nnumerical}
\end{table}
