The Random Forest technique was first developed by Leo Breiman\cite{Statistics01randomforests} and Adele Cutler. The implementation of the random forest algorithm used in this paper is based on the random forest library for Matlab developed by Abhishek Jaiantilal\cite{RFCode}, and the algorithm we use follows directly from Breiman and Cutler's pseudocode:

\begin{enumerate}
\item The total number of training samples available to us is N and the number of variables in the classifier is M.
\item Choose the number m $<<$ M of input variables used to determine the splitting decision at each node of the tree.
\item Form a training set for this tree by choosing n samples with replacement from all N training samples. This technique is called bootstrapping. 
\item For each node of the tree, randomly select m variables and decide the best split based on these m variables.
\item Each tree is fully grown and not pruned.
\item We use the tree for predictions on new samples. A new sample is pushed down the tree and is assigned the label of the training sample in the terminal node it ends up in. 
\item The \emph{random forest prediction} is the average vote of all trees in the forest.
\end{enumerate}
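The steps above can be sketched in code. The following is a minimal illustration in Python (not the Matlab library we actually used\cite{RFCode}): for brevity each tree is only a depth-1 stump, but the bootstrap sampling (step 3), the random choice of $m$ candidate variables at each split (step 4), and the majority vote (step 7) are all present.

```python
import random
from collections import Counter

def majority(labels):
    """Most common label in a list."""
    return Counter(labels).most_common(1)[0][0]

def grow_stump(X, y, m, rng):
    """Depth-1 tree: try m randomly chosen variables (step 4) and keep the
    threshold split that misclassifies the fewest bootstrap samples."""
    best = None
    for f in rng.sample(range(len(X[0])), m):
        for t in sorted({row[f] for row in X}):
            left = [lbl for row, lbl in zip(X, y) if row[f] <= t]
            right = [lbl for row, lbl in zip(X, y) if row[f] > t]
            if not left or not right:
                continue
            errors = sum(lbl != majority(left) for lbl in left) \
                   + sum(lbl != majority(right) for lbl in right)
            if best is None or errors < best[0]:
                best = (errors, f, t, majority(left), majority(right))
    if best is None:                      # degenerate bootstrap: constant leaf
        label = majority(y)
        return lambda row: label
    _, f, t, left_label, right_label = best
    return lambda row: left_label if row[f] <= t else right_label

def grow_forest(X, y, n_trees, m, seed=0):
    """Steps 3-5: each tree is grown on its own bootstrap sample."""
    rng = random.Random(seed)
    n = len(X)
    forest = []
    for _ in range(n_trees):
        idx = [rng.randrange(n) for _ in range(n)]   # sample with replacement
        forest.append(grow_stump([X[i] for i in idx],
                                 [y[i] for i in idx], m, rng))
    return forest

def predict(forest, row):
    """Steps 6-7: push the sample down every tree and take the majority vote."""
    return majority([tree(row) for tree in forest])
```

On real data each tree would be fully grown rather than a single stump; the structure of the training loop is the same.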

Most of the running time of this algorithm is spent creating the forest; everything else can be neglected in the complexity analysis. Since the complexity of creating a single tree is $O(mn \log n)$, creating a forest of $K$ trees takes $O(Kmn \log n)$. The algorithm is therefore efficient; however, memory constraints must be taken into account, since the forest grows very large rather quickly.
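To make the bound concrete, the following sketch (using a hypothetical operation count with the constant factor dropped) shows that doubling either the number of trees $K$ or the split parameter $m$ doubles the predicted training cost, while the number of samples $n$ enters only log-linearly:

```python
from math import log2

def forest_cost(K, m, n):
    """Operation count implied by the O(K m n log n) bound,
    with the constant factor dropped (illustrative only)."""
    return K * m * n * log2(n)
```

For example, forest_cost(1000, 8, n) is exactly twice forest_cost(500, 8, n) for any $n$, while multiplying $n$ by 10 multiplies the cost by only slightly more than 10.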

The random forest algorithm performs very well on phoneme classification, as mentioned in Ioannis Atsonios' paper\cite{Atsonios}. The reason is that well-constructed decision trees are very good classifiers.
However, finding the optimal decision tree is an NP-complete problem. Therefore, we exploit the power of decision trees in a different way, namely by creating a forest of them: we build multiple suboptimal trees, consider the result produced by each, and report the prediction of the committee of trees.

The dataset on which we carried out the experiments to test Random Forest's accuracy and performance is the TIMIT Acoustic-Phonetic Continuous Speech Corpus. In particular, we used Yaodong Zhang's feature set, obtained from the first method of feature extraction given in the description of the project\cite{DataSet}. We used the files ``halS1\_train.X'' and ``train.Y48'' to generate the random forests, and then tested the classification accuracy of the resulting forest using the files ``halS1\_dev.X'' and ``dev.Y48''. Note that we reduced the 48 phoneme classes to 39 before calculating the error rate.

The hardware used is a MacBook Pro with a 2.4 GHz Intel Core 2 Duo processor and 4 GB of 667 MHz DDR2 SDRAM; the hard disk capacity was sufficient to avoid storage issues.

We executed the algorithm multiple times while varying the two main parameters: the number of trees in the generated forest and the number of variables, $m$, considered at every node split. At first we kept one of the two parameters constant and varied the other to see how each affects the prediction. However, we came to the conclusion that the two parameters are not completely unrelated; thus, we ran a few experiments changing both at once, trying to find an optimal combination. Among all of our test cases, the optimal combination is 500 trees with $m = 8$. Slightly better error rates can be achieved at the cost of longer training times. The results are summarized in Table \ref{table: RFtable}. We also include plots concerning the importance of each of the $M$ variables.
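The 48-to-39 reduction works by folding groups of phoneme classes together before scoring. A minimal sketch of the scoring step follows; the folding dictionary shown is only a partial, illustrative fragment of the standard table, not the full mapping we used.

```python
# Partial fragment of the standard 48-to-39 phoneme folding used for scoring;
# classes not listed here map to themselves. (Illustrative, not the full table.)
FOLD = {'ix': 'ih', 'ax': 'ah', 'el': 'l', 'en': 'n', 'zh': 'sh', 'ao': 'aa'}

def fold(label):
    """Map a 48-class label to its 39-class equivalent."""
    return FOLD.get(label, label)

def error_rate(predicted, reference):
    """Fraction of samples whose folded prediction differs from the
    folded reference label."""
    wrong = sum(fold(p) != fold(r) for p, r in zip(predicted, reference))
    return wrong / len(reference)
```

A prediction of ``ix'' against a reference of ``ih'', for instance, counts as correct after folding, which is why the 39-class error rate is lower than the raw 48-class one.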

\begin{table}[!htb]
\centering
\begin{tabular}{|c|c|c|c|}
\hline
Number of trees & m (variables) & Error Rate & Time (min)\\
\hline
20 & 4 & 0.3772 & 2.88 \\
\hline
20 & 8 & 0.3454 & 3.13 \\
\hline
20 & 20 & 0.3379 & 3.92 \\
\hline
50 & 8 & 0.3100 & 9.63 \\
\hline
50 & 20 & 0.3047 & 7.9 \\
\hline
50 & 40 & 0.3125 & 12.83 \\
\hline
100 & 8 & 0.2946 & 15.28 \\
\hline
100 & 20 & 0.2920 & 19.37 \\
\hline
500 & 8 & 0.2776 & 81.07 \\
\hline
500 & 20 & 0.2839 & 102.5 \\
\hline
1000 & 10 & 0.2782 & 151.43 \\
\hline
\end{tabular}
\caption{Experimental measurements of error rates and running time}
\label{table: RFtable}
\end{table}

\begin{figure}[!htb]
\centering
\includegraphics[scale=0.83]{figures/Importance_plots.pdf}
\caption{Variable Importance Plots}
\label{figure: RFfigure}
\end{figure}

\subsection{Observations}
\begin{enumerate}
\item Increasing the number of trees in the forest usually improves the error rate of the classifier. However, the law of diminishing returns applies: after a certain number of trees, it is hard to improve the error rate by a significant amount. For example, the error rate improved by a larger margin when we increased from a forest of 20 trees to a forest of 50 trees than when we increased from a forest of 100 to a forest of 500 trees. From the table we conclude that 500 trees are enough to produce an acceptable classifier with very reasonable running time and memory requirements.
\item The random forest error rate depends mainly on the \emph{correlation} between any two trees in the forest and the \emph{individual strength} of each tree. Increasing the correlation increases the error rate. Usually, diversification is preferred, because if there is strong correlation between the trees and some of them are wrong, then all the correlated trees will have some undesired errors, too.
\item The number of variables, $m$, used to decide every node split is the adjustable parameter to which the random forest procedure is most sensitive. Reducing $m$ reduces both the correlation and the strength, so there is an obvious trade-off between the two, and an ``optimal'' range of $m$ in which we obtain very good error rates. This was expected: as $m$ increases and approaches $M$, almost all the variables are taken into account at every split, which leads to very similar splits across the various trees and hence to high correlation between them. On the other hand, considering more variables at every node yields ``stronger'' individual trees.
\item For optimal error rates, the right balance has to be found between the number of trees and the number of variables used for splitting nodes. The best way to obtain appropriate values for these two parameters is trial and error. As mentioned earlier, a forest of 500 trees generated with $m = 8$ is a very good classifier.
\item The prediction error is unbalanced across classes, because some classes are larger than others. We tried to balance the per-class error rates, but the overall error rate increased, so we decided to sacrifice balance for overall accuracy.
\item Many statistical problems involve learning the importance or effect of a variable for predicting an outcome of interest from a sample of observations. Therefore, we graphed the ``Mean decrease in accuracy'' (the top plot in Figure \ref{figure: RFfigure}) to gain insight into which features are the important ones. Even though \emph{variable importance} is hard to interpret, it can provide information that allows us to improve the performance of the algorithm: by knowing which features are important, we can ignore the unimportant ones and decrease the training time of our predictor, even though the exact physical meaning of each feature is unclear.
\item The Gini importance is another useful measure of the importance of the variables used. The weighted Gini impurity of two children nodes is always less than or equal to that of their parent. To obtain this second measure of \emph{variable importance}, we add up the Gini decreases of each individual variable over all trees in the forest. The results are shown in the ``Mean decrease in Gini index'' plot (the bottom plot in Figure \ref{figure: RFfigure}), which is in accordance with the top plot.
\end{enumerate} 
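The ``mean decrease in accuracy'' measure discussed above can be sketched as permutation importance: shuffle one feature column, breaking its association with the labels, and record how much the accuracy drops. In the sketch below, the predict argument stands in for any trained classifier (a hypothetical interface, for illustration only).

```python
import random

def accuracy(predict, X, y):
    """Fraction of samples the classifier labels correctly."""
    return sum(predict(row) == label for row, label in zip(X, y)) / len(y)

def permutation_importance(predict, X, y, feature, trials=10, seed=0):
    """Mean decrease in accuracy when the given feature column is randomly
    shuffled; a feature the classifier ignores scores exactly zero."""
    rng = random.Random(seed)
    base = accuracy(predict, X, y)
    total_drop = 0.0
    for _ in range(trials):
        column = [row[feature] for row in X]
        rng.shuffle(column)
        X_perm = [row[:feature] + [v] + row[feature + 1:]
                  for row, v in zip(X, column)]
        total_drop += base - accuracy(predict, X_perm, y)
    return total_drop / trials
```

Features whose permutation barely moves the accuracy are candidates for removal, which is how the importance plot can be used to speed up training.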
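Similarly, the Gini decrease mentioned in the last observation is computed per split; summing it for each variable over all trees gives the ``mean decrease in Gini index''. A minimal sketch:

```python
def gini(labels):
    """Gini impurity: the probability that two labels drawn at random
    (with replacement) from the node disagree."""
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def gini_decrease(parent, left, right):
    """Weighted decrease in Gini impurity from splitting a parent node into
    two children; this quantity is non-negative for any valid split."""
    n = len(parent)
    return (gini(parent)
            - (len(left) / n) * gini(left)
            - (len(right) / n) * gini(right))
```

A split that perfectly separates two equally sized classes, for example, decreases the Gini impurity from 0.5 to 0.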

\subsection{Advantages of random forest}
\begin{itemize}
\item A highly accurate classifier; the best error rate achieved in our experiments is $27.8\%$.
\item Runs efficiently on large databases; we created a forest of 1000 trees in under three hours.
\item Handles many input variables without variable deletion and estimates which of those variables are important for the classification.
\item Every tree is generated separately and independently of every other tree, so the algorithm can be parallelized across multiple machines, which can dramatically improve the running time.
\item Generated forests can be saved for future use on other data. This is important because almost all the effort goes into creating the forest; once we have it, the time required for predictions on new samples is insignificant.
\item Computes proximities between pairs of samples that can be used for clustering, locating outliers, or simply obtaining interesting views of the data.
\item It is an experimental technique for detecting relationships between variables.
\end{itemize}
We have not exploited the last two advantages of this algorithm in our experiments, so the possibility of future work on the data set remains open.
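For reference, the proximity between two samples mentioned above is simply the fraction of trees in which both samples land in the same terminal node. A minimal sketch, assuming each tree exposes a function mapping a sample to a leaf identifier (a hypothetical interface):

```python
def proximity(leaf_of, a, b):
    """Fraction of trees in which samples a and b end up in the same
    terminal node; leaf_of is a list of functions, one per tree, each
    mapping a sample to a leaf identifier (hypothetical interface)."""
    same = sum(fn(a) == fn(b) for fn in leaf_of)
    return same / len(leaf_of)
```

High proximity means the forest treats the two samples as similar, which is what makes the measure useful for clustering and outlier detection.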

\subsection{Disadvantages of random forest}
\begin{itemize}
\item Requires a large amount of memory when dealing with large data sets.
\item Even though it is acceptably fast, there are other types of classifiers that perform significantly faster.
\item No capacity for extrapolation.
\item The algorithm is prone to overfitting, especially when used on a noisy data set like ours: vocal data are affected by gender, ethnicity, accent, and many other variables, and the equipment used to collect the samples is not perfect.
\end{itemize}
However, the advantages outweigh the disadvantages, making Random Forest an appropriate classifier for phonemes.
