\subsection{Decision Trees}
\setlength{\parskip}{0.2cm} Decision trees are tree structures in which leaves represent classifications and branches represent the conjunctions of features that lead to those classifications. We used a Java-based toolset, Weka~\cite{weka}, to generate the decision trees.


\noindent Weka is a collection of machine learning algorithms for data mining tasks. It contains tools for data pre-processing, classification, regression, clustering, association rules, and visualization. Weka requires both the training and test sets to be supplied in a specialized input format (.arff)~\cite{arff}. Hence, before applying any of these methods, we had to convert the raw data into the .arff format.
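For concreteness, a minimal .arff file has the following shape (the relation, attribute names, and values below are illustrative, not the actual competition attributes; missing values are written as \texttt{?}):

```text
% illustrative example only; attribute names are made up
@relation example

@attribute Var1 numeric
@attribute Var2 {catA,catB}
@attribute class {-1,1}

@data
23.5,catA,-1
?,catB,1
```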


\noindent The algorithm used for building decision trees was C4.5~\cite{quinlan}. The Weka classifier package has its own implementation of C4.5, known as J48. In J48 (C4.5), decision trees are built from the training data using the concept of information entropy\footnote{A measure of the uncertainty associated with a random variable \cite{entropy}}. In more detail, each attribute of the data is used to make a decision that splits the data into smaller subsets. At each step, we choose the attribute that yields the largest reduction in entropy to make the decision. The algorithm then recurses on the smaller subsets.
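The entropy-driven split selection can be sketched as follows (a minimal Python illustration of the idea, not Weka's code; the function names are ours):

```python
import math

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    if not labels:
        return 0.0
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def information_gain(rows, labels, attr, value):
    """Entropy reduction from a binary split on rows[attr] == value."""
    left = [y for row, y in zip(rows, labels) if row[attr] == value]
    right = [y for row, y in zip(rows, labels) if row[attr] != value]
    n = len(labels)
    remainder = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
    return entropy(labels) - remainder
```

The algorithm evaluates every candidate split this way, picks the best one, and recurses on the resulting subsets (C4.5 in fact normalises the gain by the entropy of the split itself, i.e.\ uses the gain ratio, but the core quantity is the same).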

\subsubsection{Adapting Decision Trees}

\begin{itemize}

\item \textbf{Binary Splits} - By default, J48 uses multiway splits for categorical attributes. Since the state space of many of the categorical variables was on the order of thousands, this repeatedly caused insufficient-memory problems, so we used binary splits for the categorical attributes instead. The decision trees were built using all 230 attributes and 50,000 instances without any prior feature selection; feature selection is embedded in the C4.5 algorithm itself, which at each split chooses the attribute yielding the largest reduction in entropy.

\item \textbf{Handling Skewness} - Since the dataset for all the applications is skewed, the classifier ends up classifying the majority class much better than the minority class, as can be seen in the results in Section 4.2. Sensitivity takes the worst hit in the case of appetency, where the data is the most skewed. To improve the results for appetency, we tried the following strategies:
\begin{itemize}
\item Reducing pruning: Nitesh Chawla~\cite{chawla} suggests that pruning can collapse (small) leaves belonging to the minority class. Hence we increased the confidence factor of J48, which controls the extent of pruning, from 0.25 to 0.4 in order to decrease post-pruning.
\item Sampling~\cite{weiss}: An alternative strategy for dealing with skewed data is to use sampling to change the class distribution of the training data.
\begin{itemize}
\item Undersampling: Undersampling removes majority-class examples from the training data so that the class distribution is altered in favour of the minority classes. The main disadvantage of this scheme is that it discards potentially useful training data.

\item Oversampling: Oversampling replicates examples of the minority classes. The main disadvantage of this scheme is that, by making exact copies of existing examples, it is likely to introduce overfitting into the model.
\end{itemize}
\item Using a cost matrix~\cite{weiss}: By default, the decision tree algorithm treats every misclassification the same. In the case of skewed datasets, treating both misclassification costs equally leads to low sensitivity. Hence, we set the cost of misclassifying a minority-class example greater than the cost of misclassifying a majority-class example by using different cost matrices. This was achieved using the meta classifier CostSensitiveClassifier in Weka, with J48 as the base classifier.
\end{itemize}
\end{itemize}  
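The adaptations above can be sketched as follows (a simplified Python illustration of the ideas rather than Weka's implementation; the function names and the example cost values in the comments are ours):

```python
import random

def binary_split(rows, attr, value):
    """Binary split on a categorical attribute: attr == value vs. attr != value.
    Always yields two branches, however large the attribute's state space."""
    eq = [row for row in rows if row[attr] == value]
    ne = [row for row in rows if row[attr] != value]
    return eq, ne

def undersample(majority, minority, seed=0):
    """Drop majority-class examples until the classes are balanced.
    Assumes the majority list is at least as long as the minority list."""
    rng = random.Random(seed)
    return rng.sample(majority, len(minority)) + list(minority)

def oversample(majority, minority, seed=0):
    """Replicate minority-class examples (with replacement) up to the majority size."""
    rng = random.Random(seed)
    extra = [rng.choice(minority) for _ in range(len(majority) - len(minority))]
    return list(majority) + list(minority) + extra

def min_cost_class(posterior, cost):
    """Predict the class with the lowest expected misclassification cost.
    cost[i][j] = cost of predicting class i when the true class is j."""
    expected = [sum(cost[i][j] * posterior[j] for j in range(len(posterior)))
                for i in range(len(cost))]
    return min(range(len(expected)), key=expected.__getitem__)
```

With the cost of missing the minority class set much higher than the reverse (e.g.\ a cost matrix of \texttt{[[0, 10], [1, 0]]}), even a modest minority-class posterior is enough to flip the prediction; this is the effect the CostSensitiveClassifier wrapper achieves around J48.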

\noindent Missing values were represented by ``?'' in the .arff files.
