% !TEX root = ArticoloRF.tex
% fisher proj. details
\subsection{Fisher Classification}
Fisher Linear Discriminant Analysis \cite{fisherlda} is a variant of Linear Discriminant Analysis that is aimed at minimizing class overlap for classification purposes.
Fisher LDA is a technique for classifying linearly separable data.
The goal of the method is to find a projection of the data onto a space where the interclass variance is maximized and the intraclass variance is minimized.
The resulting classifier is binary, whereas Random Forest is a multi-class classifier.
To employ this method to split nodes a one-vs-rest approach \cite{onevsrest} has to be adopted.

Let $\mathbf{V}$ be the dataset as defined in Section~\ref{sec:relwork} and, accordingly, $\mathbf{X}\in \mathbb{R}^{m\times n}$ the set of data records.
Let $C_{1}$ and $C_{2}$ be the classes of data records to be classified and 
\[\mathbf{m}_{1}=\dfrac{1}{N_{1}}\sum_{\mathbf{x}_{i}\in C_{1}} \mathbf{x}_{i}\] 
\[\mathbf{m}_{2}=\dfrac{1}{N_{2}}\sum_{\mathbf{x}_{i}\in C_{2}} \mathbf{x}_{i}\]  be the means for the two classes, where $N_{1}$ and $N_{2}$ are the corresponding numbers of data records belonging to these classes.
Let then $\mathbf{w}\in \mathbb{R}^{n \times 1}$ be the projection vector along which the class overlap is minimized. The class means for the projected data can then be computed as:
\[\mathbf{\tilde{m}}_{1}=\mathbf{w\tran}\mathbf{m_{1}}\]
\[\mathbf{\tilde{m}}_{2}=\mathbf{w\tran}\mathbf{m_{2}}\]
The scatter of the projected data records along the vector $\mathbf{w}$, which measures the class overlap, is computed as follows for the two classes:
\begin{eqnarray*}
\mathbf{\tilde{s}}_{1}^{2}&=&\sum_{\mathbf{x}_{i}\in C_{1}}\left(\mathbf{w}\tran \mathbf{x}_{i}-\mathbf{w}\tran \mathbf{m}_{1}\right)^{2}\\
&=&\mathbf{w}\tran\left(\sum_{\mathbf{x}_{i}\in C_{1}}\left( \mathbf{x}_{i}-\mathbf{m}_{1}\right)\left( \mathbf{x}_{i}-\mathbf{m}_{1}\right)\tran\right)\mathbf{w}\\
&=&\mathbf{w}\tran \mathbf{S}_{1} \mathbf{w}
\end{eqnarray*}
\begin{eqnarray*}
\mathbf{\tilde{s}}_{2}^{2}&=&\sum_{\mathbf{x}_{i}\in C_{2}}\left(\mathbf{w}\tran \mathbf{x}_{i}-\mathbf{w}\tran \mathbf{m}_{2}\right)^{2}\\
&=&\mathbf{w}\tran\left(\sum_{\mathbf{x}_{i}\in C_{2}}\left( \mathbf{x}_{i}-\mathbf{m}_{2}\right)\left( \mathbf{x}_{i}-\mathbf{m}_{2}\right)\tran\right)\mathbf{w}\\
&=&\mathbf{w}\tran \mathbf{S}_{2} \mathbf{w}
\end{eqnarray*}
To minimize class overlap, the ratio between the squared distance of the projected means and the total within-class scatter has to be maximized. This ratio is known as the Rayleigh coefficient and is computed as:
\[
r(\mathbf{w})=\dfrac{\left(\mathbf{\tilde{m}}_{2}-\mathbf{\tilde{m}}_{1}\right)^{2}}{\mathbf{\tilde{s}}_{1}^{2}+\mathbf{\tilde{s}}_{2}^{2}}=\dfrac{\mathbf{w}\tran\mathbf{S}_{b}\mathbf{w}}{\mathbf{w}\tran\mathbf{S}_{w}\mathbf{w}}
\]
where $\mathbf{S}_{w}=\mathbf{S}_{1}+\mathbf{S}_{2}$ is the intraclass (within-class) scatter matrix and $\mathbf{S}_{b}=\left(\mathbf{m}_{2}-\mathbf{m}_{1}\right)\left(\mathbf{m}_{2}-\mathbf{m}_{1}\right)\tran$ is the interclass (between-class) scatter matrix.
From the above equations, the vector that maximizes the Rayleigh ratio is
\[
\mathbf{w}=\mathbf{S}_{w}^{-1}\left(\mathbf{m}_{2}-\mathbf{m}_{1}\right)
\]
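The closed-form solution above lends itself to a direct implementation. The following is a minimal sketch, not the paper's own code; the function name \texttt{fisher\_direction} and the use of NumPy are assumptions:

```python
import numpy as np

def fisher_direction(X1, X2):
    """Fisher projection vector w = S_w^{-1} (m2 - m1) for two classes.

    X1, X2: arrays of shape (N_k, n), the data records of C_1 and C_2.
    Returns the (unnormalized) direction w and the two class means.
    """
    m1 = X1.mean(axis=0)
    m2 = X2.mean(axis=0)
    # Within-class scatter S_w = S_1 + S_2 (sums of outer products of deviations)
    Sw = (X1 - m1).T @ (X1 - m1) + (X2 - m2).T @ (X2 - m2)
    # Solve S_w w = m2 - m1 rather than forming the inverse explicitly
    w = np.linalg.solve(Sw, m2 - m1)
    return w, m1, m2
```

Solving the linear system instead of inverting $\mathbf{S}_{w}$ is numerically preferable; it assumes $\mathbf{S}_{w}$ is non-singular, which may require regularization when $n$ exceeds the number of records at the node.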

In the context of Random Forest classifiers, this method can be adopted for tree generation.
During the training phase, each class of the data records associated with the considered node is evaluated against all remaining classes and the corresponding Rayleigh ratio is computed.
The class that yields the highest ratio is kept as $C_{1}$ and all other classes of the data records belonging to the node are merged into class $C_{2}$.
The corresponding vector $\mathbf{w}$ is then used to project all the data records of the node onto the new space.
The midpoint of the projected class means, computed as
\[
t=\dfrac{1}{2}\mathbf{w}\tran\left(\mathbf{m}_{1}+\mathbf{m}_{2}\right)
\]
is assigned as the node classification threshold $t$.
The procedure is iterated for every node until tree generation is complete, i.e., either the specified depth $h$ has been reached or each node contains records belonging to a single class.
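The one-vs-rest node-training step described above can be sketched as follows. This is an illustrative, self-contained NumPy snippet with a hypothetical helper name \texttt{train\_node}, not the authors' implementation:

```python
import numpy as np

def train_node(X, y):
    """One-vs-rest Fisher split for a single tree node (illustrative sketch).

    For each class c, treat c as C_1 and the remaining records as C_2,
    compute the Fisher direction and its Rayleigh ratio, and keep the
    class with the highest ratio. Returns the node's (w, t) split.
    """
    best = None
    for c in np.unique(y):
        X1, X2 = X[y == c], X[y != c]
        m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
        # Within-class scatter S_w = S_1 + S_2
        Sw = (X1 - m1).T @ (X1 - m1) + (X2 - m2).T @ (X2 - m2)
        w = np.linalg.solve(Sw, m2 - m1)
        # Rayleigh ratio r(w) = (w^T S_b w) / (w^T S_w w)
        diff = m2 - m1
        r = (w @ diff) ** 2 / (w @ Sw @ w)
        if best is None or r > best[0]:
            t = 0.5 * w @ (m1 + m2)  # node classification threshold
            best = (r, w, t)
    return best[1], best[2]
```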

Let $\mathbf{S}_{L}$ be the subset of data records assigned to class $C_{1}$ and $\mathbf{S}_{R}$ the subset assigned to class $C_{2}$. The splitting function for tree generation is defined as
\[
f(\mathbf{x})=f(\mathbf{x},\mathbf{w})=\mathbf{w}\tran\mathbf{x}
\]
hence, the classification rule takes the following form:
\[
\left \lbrace \begin{array}{ll}
\mathbf{S}_{L}=\mathbf{S}_{L}\cup \mathbf{v} & \mathrm{if} \, \mathbf{w}\tran\mathbf{x}\leq \dfrac{1}{2}\mathbf{w}\tran\left(\mathbf{m}_{1}+\mathbf{m}_{2}\right)\\
\mathbf{S}_{R}=\mathbf{S}_{R}\cup \mathbf{v} & \mathrm{otherwise}
\end{array}\right.\]
where
\[
\mathbf{v}=\begin{bmatrix}
\mathbf{x}\\
y
\end{bmatrix}
\]
and $y$ is the class of the data record $\mathbf{x}$.
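Assuming $\mathbf{w}$ and the threshold $t$ have already been computed for the node, the splitting rule above can be sketched as follows (NumPy and the helper name \texttt{split\_records} are assumptions for illustration):

```python
import numpy as np

def split_records(X, y, w, t):
    """Route records to the left/right child according to w^T x <= t.

    X: (m, n) data records, y: (m,) class labels.
    Returns the (X, y) pairs for S_L and S_R.
    """
    left = X @ w <= t  # boolean mask: records projected at or below t
    return (X[left], y[left]), (X[~left], y[~left])
```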