% !TEX root = ArticoloRF.tex
%random evaluations
\subsection{Extremely Randomized Trees Evaluations}
Extremely Randomized Trees is the simplest of the tree-generation algorithms implemented.
The application of this method to build a forest for the classification of the test set yields the results shown in Figure \ref{fig:RandomError}.
\begin{figure}[htbp]
\begin{center}
\includegraphics[keepaspectratio=true,width=.5\textwidth]{RandomError}
\caption{Extremely Randomized Trees Classification Error}
\label{fig:RandomError}
\end{center}
\end{figure}

The graph makes clear that, in terms of classification error, there are benefits both in increasing the number of trees in the forest and in increasing the tree depth; the larger improvements come from the number of trees, provided the trees are sufficiently deep.
As Figure \ref{fig:RandomError} shows, the classification error steadily decreases as the values of the two parameters of the classifier increase. With only 10 trees of 30 levels, the classification error for this dataset is around $11 \%$; increasing the number of trees to 100 at the same depth reduces the error to around $4 \%$.
Table \ref{tab:RandomError} reports some representative classification error values.
\begin{table}[htdp]
\caption{Classification Error \% for Extremely Randomized Trees}
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
Error \%	&10 trees& 50 trees &100 trees\\
	\hline
5 levels & 48.74& 30.99 & 28.66\\
\hline
20 levels &10.57&5.01 & 4.01\\
\hline
30 levels &11.63 &4.84&3.73\\
\hline
\end{tabular}
\end{center}
\label{tab:RandomError}
\end{table}%


This algorithm is very fast because, for each node to be split, it simply chooses a random feature and compares each record's value for that feature to a threshold picked at random from a uniform distribution over the range bounded by the minimum and maximum values the feature takes within the records associated with the node. It is therefore viable to use a large number of trees to build a good classifier without paying too much in training and classification time.
These times range from a few tens of milliseconds for a small number of trees to about 2 seconds for a large number of trees, as shown for a few cases in Table \ref{tab:RandomTimes}.
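The random split step described above can be sketched as follows. This is a minimal illustration, not the actual implementation evaluated here; the function name `random_split` and the list-of-lists record representation are assumptions for the example.

```python
import random

def random_split(records, n_features):
    """One Extremely-Randomized-Trees split (illustrative sketch):
    pick a random feature, then a threshold drawn uniformly between
    that feature's minimum and maximum over the records reaching
    this node, and partition the records accordingly."""
    feature = random.randrange(n_features)
    values = [r[feature] for r in records]
    lo, hi = min(values), max(values)
    threshold = random.uniform(lo, hi)
    left = [r for r in records if r[feature] < threshold]
    right = [r for r in records if r[feature] >= threshold]
    return feature, threshold, left, right
```

Because no split score is optimized over candidate features or thresholds, the cost per node is a single pass over the node's records, which is what keeps training so fast.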

\begin{table}[htdp]
\caption{Training and Classification Times for Extremely Randomized Trees}
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
	&Training [s]&Classification [s]\\
	\hline
10 trees of 5 levels & 0.02 & 0.23\\
\hline
100 trees of 5 levels & 0.13 & 2.31\\
\hline
10 trees of 20 levels & 0.17 & 0.01\\
\hline
100 trees of 20 levels &  1.93 & 0.33\\
\hline
10 trees of 30 levels & 0.24 & 0.01\\
\hline
100 trees of 30 levels & 1.63& 0.25\\ 
\hline
\end{tabular}
\end{center}
\label{tab:RandomTimes}
\end{table}%
