\subsection{Support Vector Machines}
\setlength{\parskip}{0.2cm}
\noindent Support vector machines\cite{svm} are a family of supervised learning methods used for classification. In an SVM, each instance in the training set has one ``target value'' (the class label) and several ``attributes'' (features). The goal of SVM is to produce a model that predicts the target value of instances in the testing set given only their attributes.
For our task, the training file contains the information about the customers, and each customer carries a label for each of appetency, churn, and upselling. We therefore give SVM the training data combined with the labels of the class we want to predict, and obtain a model that can be used to predict the labels of the test data.

\noindent SVM works by mapping the data points into a higher-dimensional space using the feature vectors built from their attributes, and then finding a hyperplane in that space that separates the points into two sets: one containing the positive class labels and the other the negative class labels.

\noindent Thus, given a training set of instance--label pairs $(x_{i}, y_{i})$, $i = 1, \ldots, l$, where $x_{i} \in R^{n}$ and $y_{i} \in \{1, -1\}$, support vector machines (SVM) (Boser et al., 1992; Cortes and Vapnik, 1995) yield a decision function of the form:


$f(x) = w^{T}x + b = \sum_{k}\alpha_{k}y_{k}\langle x_{k}, x\rangle + b$

   
\noindent where $w$ is the normal to the separating hyperplane, $b$ is its offset, and $\langle x_{k}, x\rangle$ is the inner product that carries the comparison into the higher-dimensional space. SVM seeks the hyperplane with maximum margin between the two planes parallel to it that bound the two sets of data points. The $\alpha_{k}$ are the coefficients of the support vectors, the training points that lie on the edge of those parallel planes. The sign of $f(x)$, $+1$ or $-1$, gives the predicted class.
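As a concrete illustration of this decision function, here is a minimal sketch with hypothetical support vectors, multipliers, and bias (toy values, not taken from our trained model):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def decision_function(x, support_vectors, alphas, labels, b):
    # f(x) = sum_k alpha_k * y_k * <x_k, x> + b
    return sum(a * y * dot(xk, x)
               for a, y, xk in zip(alphas, labels, support_vectors)) + b

# Toy example with two hypothetical support vectors.
sv = [[1.0, 1.0], [-1.0, -1.0]]
alphas = [0.5, 0.5]   # hypothetical multipliers
labels = [+1, -1]
b = 0.0               # hypothetical bias

print(decision_function([2.0, 0.5], sv, alphas, labels, b))  # 2.5, so class +1
```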

\noindent In linear SVM, the mapping between points uses a plain dot product, but there exist non-linear kernels that use other functions to achieve this task. This is needed because the distribution of the data is sometimes such that no linear separating hyperplane can be found even in higher dimensions. The kernel functions that we explored are as follows:


Polynomial : $(x \cdot x' + 1)^{d}$

Radial : $\exp(-\gamma\|x - x'\|^{2})$

Sigmoid : $\tanh(\kappa\, x \cdot x' + c)$
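The three kernels can be sketched directly from their formulas; a minimal illustration on plain Python lists (the default parameter values here are arbitrary, not the ones SVMLight uses):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def polynomial_kernel(x, xp, d=2):
    """(x . x' + 1)^d"""
    return (dot(x, xp) + 1.0) ** d

def rbf_kernel(x, xp, gamma=0.5):
    """exp(-gamma * ||x - x'||^2)"""
    sq = sum((a - b) ** 2 for a, b in zip(x, xp))
    return math.exp(-gamma * sq)

def sigmoid_kernel(x, xp, kappa=1.0, c=0.0):
    """tanh(kappa * x . x' + c)"""
    return math.tanh(kappa * dot(x, xp) + c)

x, xp = [1.0, 2.0], [0.5, -1.0]
print(polynomial_kernel(x, xp))  # (0.5 - 2 + 1)^2 = 0.25
```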


\noindent We used the software SVMLight for classifying our dataset with SVM. Our data was first processed into the format required by SVMLight\cite{svmlight}\cite{fast_svm}.
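SVMLight expects one instance per line in a sparse ``label index:value'' format with 1-based feature indices and zero-valued features omitted. A minimal sketch of such a conversion (the helper name is ours):

```python
def to_svmlight_line(label, features):
    """Format one instance as an SVMLight line: '<label> idx:val ...'.
    Feature indices are 1-based; zero values are omitted (sparse format)."""
    parts = [f"{label:+d}"]
    parts += [f"{i}:{v:g}" for i, v in enumerate(features, start=1) if v != 0]
    return " ".join(parts)

print(to_svmlight_line(+1, [0.5, 0.0, 3.0]))  # "+1 1:0.5 3:3"
```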

\subsubsection{Adapting SVM}
\begin{itemize}
\item \textbf{SVM with default library values} SVM works on numeric data, and as we had categorical data we had to convert it to numerical form. The first approach was to give each distinct value in the state space of a variable a distinct number. We then tested SVM with default parameters for linear and non-linear kernels.
Our dataset is very skewed, and with just the default parameters the resulting model assigns $-1$ to all the test data. It was thus clear that we had to tune the parameters before building the model. After tuning the kernels, the sigmoid kernel gave us models that were able to detect positive examples in the test data. This made it clear that our data is not linearly separable\cite{kernel}.

\noindent The results obtained from the above experiments were not convincing, and on revisiting our data processing we concluded that our approach of converting categorical data into the SVM format might be wrong. This was confirmed by using just the categorical data to obtain an SVM model and classify the test data.
\item \textbf{Normalization} We then used only the numerical data from our training set to obtain the model for classifying the test data. This data was normalized to bring all features to the same scale, because different features had very different ranges: for example, one attribute ranged up to 16286 whereas another reached just 28. To keep the attribute with the larger range from dominating the one with the smaller range, we normalized the data to $[-1,1]$. Another advantage is avoiding numerical difficulties during the computation: because kernel values usually depend on inner products of feature vectors, e.g.\ the linear and polynomial kernels, large attribute values can cause numerical problems. Normalization was done using the min-max method.



Min-Max Normalization : $v' = \frac{v - \min_{A}}{\max_{A} - \min_{A}}(\mathit{newmax}_{A} - \mathit{newmin}_{A}) + \mathit{newmin}_{A}$


 
\noindent The other methods considered for normalization were Z-score and decimal scaling, but since those two do not preserve the relationships between the data values, min-max normalization was used\cite{normalization}. This gave us good results, which are discussed in the results section.
                                                          
\item \textbf{Categorical Data} The Naive-Bayesian algorithm performed very well on the categorical data, so we considered including the categorical data when building the SVM model.
The other approach we followed for converting categorical data to numerical form was to make every distinct value in the state space of a feature a new feature in the dataset, and for every customer to set that new feature to 1 whenever the original feature takes that value, and to 0 otherwise. For example, if the attribute Var1 had the 3 distinct values \{acdb, aber, aqwe\}, we create 3 new features \{acdb, aber, aqwe\}, and a customer whose Var1 value is ``acdb'' has the feature ``acdb'' set to 1 and the rest set to 0. This was not feasible for us because the state space of many of our attributes was 30000, which introduced far more columns in the dataset than we could process for SVM. To avoid this we took the 2 categorical attributes with the highest ranking by information-gain ratio. We experimented with this, and the scores obtained for appetency and churn were not good, but the score for upselling was comparable to the one using just the numerical data. This suggests that including categorical data can give a better score if we can use all the categorical attributes in deriving the SVM model. In future work we would therefore like to explore better ways of converting categorical data with a large state space to numerical form.
\end{itemize}
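The min-max rescaling used above can be sketched as a small helper; a minimal illustration on one attribute column (the function name and defaults are ours, and the value 16286 echoes the range example above):

```python
def min_max_scale(values, new_min=-1.0, new_max=1.0):
    """Rescale a list of numbers to [new_min, new_max] via min-max normalization:
    v' = (v - min) / (max - min) * (new_max - new_min) + new_min."""
    lo, hi = min(values), max(values)
    if hi == lo:  # constant attribute: map everything to the lower bound
        return [new_min for _ in values]
    return [(v - lo) / (hi - lo) * (new_max - new_min) + new_min for v in values]

print(min_max_scale([0.0, 8143.0, 16286.0]))  # [-1.0, 0.0, 1.0]
```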
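The categorical expansion described in the last item can be sketched as follows. This is only an illustration with hypothetical helper names, and it uses value frequency as a crude stand-in for the information-gain-ratio ranking we actually used to pick the top attributes:

```python
from collections import Counter

def one_hot_columns(rows, column, top_k=None):
    """Expand one categorical column into 0/1 indicator features.
    rows: list of dicts; column: name of the categorical attribute.
    If top_k is given, keep only the top_k most frequent values -- a crude
    stand-in for the information-gain-ratio selection described above."""
    counts = Counter(r[column] for r in rows)
    values = [v for v, _ in counts.most_common(top_k)]
    return [{f"{column}={v}": int(r[column] == v) for v in values} for r in rows]

rows = [{"Var1": "acdb"}, {"Var1": "aber"}, {"Var1": "acdb"}]
print(one_hot_columns(rows, "Var1"))  # first row: Var1=acdb -> 1, Var1=aber -> 0
```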
