% !TEX root = ArticoloRF.tex
%information gain evaluations
\subsection{Information Gain Evaluations}
The results for classification error using the Information Gain based algorithm are shown in Figure~\ref{fig:InformationGainError}.
Tests have been performed by varying the tree depth and the number of trees, using a randomly drawn threshold.
\begin{figure}[htbp]
\begin{center}
\includegraphics[keepaspectratio=true,width=.5\textwidth]{InformationGainError}
\caption{Information Gain Classification Error}
\label{fig:InformationGainError}
\end{center}
\end{figure}
The graph shows that the tree depth is a key parameter of this algorithm, since the procedure aims at finding a distinctive feature for the dataset obtained through the bagging technique.
Setting the tree depth to low values leaves the leaf nodes with high entropy, i.e. high disorder in the data records, and consequently yields high classification error rates.
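For clarity, the split criterion can be stated explicitly; the notation below is the standard one for entropy-based splitting and is assumed here, not quoted from the implementation. For a node holding a set $S$ of records with class proportions $p_c$, the entropy is
\[
H(S) = -\sum_{c} p_c \log_2 p_c ,
\]
and the Information Gain of splitting $S$ on feature $f$ at threshold $t$ into subsets $S_L$ and $S_R$ is
\[
IG(S, f, t) = H(S) - \frac{|S_L|}{|S|}\, H(S_L) - \frac{|S_R|}{|S|}\, H(S_R).
\]
Shallow trees stop growing while $H$ is still high at the leaves, which accounts for the error rates observed at low depths.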

For the available dataset, setting the tree depth to 5 levels yields a classification error rate of $27\%$, whereas increasing this parameter to 10 levels decreases it to $7\%$.

Varying the number of trees in the forest has a strong impact when the tree depth is low: the error improves substantially as the number of trees increases. This behavior mirrors that of the extremely randomized version of the classifier.

This algorithm is computationally heavier than the extremely randomized version, especially when the IG maximization is carried out over both the feature and a range of thresholds.

Table~\ref{tab:InformationGainTimes} reports run times for different combinations of the algorithm's distinguishing parameters, i.e. tree depth and number of trees in the forest. The tested variant randomly selects a feature among those available in the dataset and then computes the optimal threshold by maximizing the Information Gain for that feature.
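As an illustration of this threshold search, the following is a minimal sketch assuming a uniform scanning grid; the function names and interface are ours, not the original implementation's.

```python
import numpy as np

def entropy(labels):
    """Shannon entropy (base 2) of a label array."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(feature, labels, threshold):
    """IG obtained by splitting the records at `threshold` on one feature."""
    left = labels[feature <= threshold]
    right = labels[feature > threshold]
    if len(left) == 0 or len(right) == 0:
        return 0.0  # degenerate split: no gain
    n = len(labels)
    return entropy(labels) - (len(left) / n) * entropy(left) \
                           - (len(right) / n) * entropy(right)

def best_threshold(feature, labels, lo, hi, step):
    """Scan [lo, hi] with the given step; keep the IG-maximizing threshold."""
    candidates = np.arange(lo, hi + step, step)
    gains = [information_gain(feature, labels, t) for t in candidates]
    i = int(np.argmax(gains))
    return candidates[i], gains[i]
```

The cost of this scan, repeated at every node of every tree, explains the gap between the random-threshold and IG-maximizing run times reported below.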

\begin{table}[htdp]
\caption{Training and Classification Times for Information Gain}
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
	Configuration&Training [s]&Classification [s]\\
	\hline
10 trees of 5 levels & 2.83 & 0.013\\
\hline
100 trees of 5 levels & 29.69 & 0.158\\
\hline
10 trees of 20 levels & 5.27 & 0.085\\
\hline
100 trees of 20 levels &53.21 & 0.705\\
\hline
10 trees of 30 levels & 5.93 & 0.130\\
\hline
100 trees of 30 levels & 58.18 & 1.596\\ 
\hline
\end{tabular}
\end{center}
\label{tab:InformationGainTimes}
\end{table}%

In this algorithm the following parameters can be specified:
\begin{itemize}
\item the number of dataset features forming the pool from which one is randomly picked at every splitting step during the decision tree buildup;
\item whether to choose the threshold randomly or to scan a specified range with a given step, keeping the value that maximizes the computed IG.
\end{itemize}

Decreasing the number of features used for the index computation yields a substantial performance improvement with minimal variation in classification error, as long as it is not lowered below a certain threshold. If the number of employed features is set to 1, the algorithm behaves equivalently to the extremely randomized version.
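The interaction of the two parameters can be sketched as follows; this is an illustration only, with assumed names and a simplified scanning interface, not the paper's implementation.

```python
import random
import numpy as np

def _entropy(y):
    # Shannon entropy (base 2) of a label vector
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def _gain(col, y, t):
    # information gain of splitting column `col` at threshold `t`
    mask = col <= t
    left, right = y[mask], y[~mask]
    if len(left) == 0 or len(right) == 0:
        return 0.0
    n = len(y)
    return _entropy(y) - len(left) / n * _entropy(left) \
                       - len(right) / n * _entropy(right)

def pick_split(data, labels, pool_size, scan):
    """At one node, draw `pool_size` candidate features at random and keep
    the feature/threshold pair with the highest information gain, scanning
    thresholds over `scan = (lo, hi, step)`."""
    pool = random.sample(range(data.shape[1]), pool_size)
    best = (-1.0, None, None)  # (gain, feature index, threshold)
    lo, hi, step = scan
    for f in pool:
        col = data[:, f]
        for t in np.arange(lo, hi + step, step):
            g = _gain(col, labels, t)
            if g > best[0]:
                best = (g, f, t)
    return best
```

With `pool_size` set to 1 the candidate feature is effectively random and only the threshold is optimized, which is the degenerate case compared above to the extremely randomized version; larger pools trade run time for a better-informed choice of feature.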

\begin{table}[htdp]
\caption{Classification Error \% and Run Times for the Information Gain Based Algorithm}
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
\multicolumn{3}{|c|}{8 Features}\\
\hline
Training [s] & Test [s] & Error \% \\
\hline
22.44 & 0.28 & 8.12\\
\hline
\multicolumn{3}{|c|}{32 Features}\\
\hline
Training [s] & Test [s] & Error \%\\
\hline
38.74 & 0.17 & 8.12\\
\hline
\multicolumn{3}{|c|}{64 Features}\\
\hline
Training [s] & Test [s] & Error \%\\
\hline
60.19 & 0.17 & 7.84\\
\hline
\end{tabular}
\end{center}
\label{tab:InformationGainError}
\end{table}

An additional test has been performed, fixing the tree depth to 30 levels and the number of trees to 100, while varying the number of employed features among 8, 32 and 64; its results are shown in Table~\ref{tab:InformationGainError}.
