% !TEX root = ArticoloRF.tex
% fisher evaluations
\subsection{Fisher’s LD Evaluations}
Fisher’s LD algorithm has been adopted for tree generation, and the resulting forest has been used to classify the test set.
Figure \ref{fig:FisherError} shows the classification error percentage as the tree depth and the number of trees vary.
\begin{figure}[htbp]
\begin{center}
\includegraphics[keepaspectratio=true,width=.5\textwidth]{FisherError}
\caption{Fisher’s LD Classification Error}
\label{fig:FisherError}
\end{center}
\end{figure}
For this algorithm to classify correctly, the depth of the trees must be at least the number of classes of the data records to be classified. This is because at every level, assuming a good separating plane has been found, one class is split off from all the others.
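The per-node separation step can be sketched as follows. This is a minimal NumPy illustration, not the implementation evaluated here: the function name and the synthetic data are ours. It computes the Fisher direction $\mathbf{w} = \mathbf{S}_{w}^{+}(\boldsymbol{\mu}_a - \boldsymbol{\mu}_b)$ and thresholds the projected records so that one group (one class versus all the others) falls on one side of the split.

```python
import numpy as np

def fisher_direction(X_a, X_b):
    """Fisher's LD direction w = S_w^+ (mu_a - mu_b), separating
    group A (one class) from group B (all remaining classes)."""
    mu_a, mu_b = X_a.mean(axis=0), X_b.mean(axis=0)
    # Within-class scatter: sum of the two groups' scatter matrices.
    S_w = ((X_a - mu_a).T @ (X_a - mu_a)
           + (X_b - mu_b).T @ (X_b - mu_b))
    # The pseudo-inverse copes with a singular S_w.
    return np.linalg.pinv(S_w) @ (mu_a - mu_b)

# Synthetic two-group example: project onto w and split at the
# midpoint of the two projected group means.
rng = np.random.default_rng(0)
X_a = rng.normal(0.0, 1.0, (100, 5))   # records of the class being split off
X_b = rng.normal(5.0, 1.0, (100, 5))   # records of all the other classes
w = fisher_direction(X_a, X_b)
threshold = (X_a @ w).mean() / 2 + (X_b @ w).mean() / 2
```

Applied recursively, one such split per level removes one class at a time, which is why the tree needs at least as many levels as there are classes.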
As Figure \ref{fig:FisherError} shows, the classification error rate improves abruptly when moving from trees shallower than the number of classes to deeper trees.
Once the minimum depth is reached, classification yields good results regardless of the number of trees, with the classification error settling around $5\%$.
Table \ref{tab:FisherError} reports some cases of classification error values.
\begin{table}[htbp]
\caption{Classification Error \% for Fisher’s LD}
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
Error \%	&10 trees& 50 trees &100 trees\\
	\hline
5 levels & 51.92& 42.40 & 42.13\\
\hline
20 levels & 5.56&4.17 & 4.28\\
\hline
30 levels & 4.84&4.17 & 4.17\\
\hline
\end{tabular}
\end{center}
\label{tab:FisherError}
\end{table}%


Varying the number of trees has almost no effect for this algorithm because, during the training phase, all the features of a record are used in the computations, whereas the other implemented algorithms consider only one feature at a time.
As a consequence there is little variation among the trees of the forest: if bagging were not employed, all the trees would be identical and the classification error would be exactly the same regardless of how many trees the forest contains.
Employing bagging introduces some variation in the training records used to build each tree, but these variations have little impact on the classification error for this dataset.
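The bagging step can be sketched as follows, assuming plain bootstrap sampling (drawing, with replacement, a sample of the same size as the training set) before growing each tree; the function name and the synthetic data are illustrative, not taken from the implementation.

```python
import numpy as np

def bootstrap_sample(X, y, rng):
    """Draw a bootstrap sample of the training set: n records drawn
    with replacement, so each tree sees a slightly different set."""
    idx = rng.integers(0, len(X), size=len(X))
    return X[idx], y[idx]

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 64))   # 200 records, 64 features
y = rng.integers(0, 10, size=200)    # 10 classes
X_s, y_s = bootstrap_sample(X, y, rng)
```

On average a bootstrap sample contains about $63.2\%$ distinct records ($1 - 1/e$); the duplicated records are the only source of diversity among the trees when every split uses all the features.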

This algorithm is computationally very expensive because a large number of features is involved: for this dataset the computations operate on $64\times64$ matrices of double-precision real values.
The most expensive operation is the pseudo-inverse of the within-class scatter matrix $\mathbf{S}_{w}$, which is computed via the singular value decomposition \cite{svdpseudoinverse}.
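The SVD-based pseudo-inverse can be sketched as follows; this is a minimal NumPy version of the standard Moore--Penrose construction (the tolerance value and the synthetic matrix are illustrative choices, not taken from the implementation). Singular values below the tolerance are discarded rather than inverted, since $\mathbf{S}_{w}$ may be singular.

```python
import numpy as np

def pinv_via_svd(S_w, tol=1e-10):
    """Moore-Penrose pseudo-inverse of S_w via SVD: invert only the
    singular values above a relative tolerance, zero the rest."""
    U, s, Vt = np.linalg.svd(S_w)
    s_inv = np.zeros_like(s)
    keep = s > tol * s.max()          # discard (near-)zero singular values
    s_inv[keep] = 1.0 / s[keep]
    return Vt.T @ np.diag(s_inv) @ U.T

# Rank-deficient 64x64 scatter-like matrix, matching the 64 features
# of this dataset (synthetic data for illustration only).
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 32))
S_w = A @ A.T                          # rank 32, hence singular
S_pinv = pinv_via_svd(S_w)
```

The cost is dominated by the SVD of the $64\times64$ matrix at every node of every tree, which explains the training times reported below.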

As shown in Table \ref{tab:FisherTimes}, training times grow with the size of the forest, from 2.71 seconds for 10 trees of 5 levels up to 79.05 seconds for 100 trees of 30 levels.
Classification times range from approximately 0.5 seconds for 10 trees to 5 seconds for 100 trees, almost independently of the tree depth.
For reasons of space, Table \ref{tab:FisherTimes} reports only a selection of cases.
\begin{table}[htbp]
\caption{Training and Classification Times for Fisher’s LD}
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
	&Training [s]&Classification [s]\\
	\hline
10 trees of 5 levels & 2.71 & 0.53\\
\hline
100 trees of 5 levels & 26.74 & 5.76\\
\hline
10 trees of 20 levels & 9.15 & 0.46\\
\hline
100 trees of 20 levels & 81.34 & 5.33\\
\hline
10 trees of 30 levels & 8.24 & 0.46\\
\hline
100 trees of 30 levels & 79.05 & 5.16\\ 
\hline
\end{tabular}
\end{center}
\label{tab:FisherTimes}
\end{table}%

Compared to the other algorithms, there is a tradeoff: low classification error rates are achieved with few trees, at the cost of high training and classification times due to the heavy computations involved.

