% !TEX root = ArticoloRF.tex
\label{sec:relwork}
Random Forest \cite{RandomForestsLeoBreimann} is an ensemble multiclass classifier built from decision trees. A decision tree is a tree-shaped data structure in which every node is associated with a set of data records $\mathbf{X} \in \mathbb{R}^{m \times n}$, defined as follows: \[ \mathbf{X} = \begin{bmatrix} \mathbf{x}\tran_1 \\ \mathbf{x}\tran_2 \\ \vdots \\ \mathbf{x}\tran_m \end{bmatrix} \]
Each row of this matrix is a data record defined as:
\[\mathbf{x}\tran_{i}=\begin{bmatrix}x_{i,1}&x_{i,2}&\cdots&x_{i,n}\end{bmatrix}\]
Each column of the matrix $\mathbf{X}$ represents a \emph{feature} of the dataset. A feature is an individual measurable property of the phenomenon being observed.
The dataset also includes a column vector $\mathbf{Y}\in \mathbb{R}^{m\times1}$ whose elements $y_i$ are the class labels of the corresponding data records in $\mathbf{X}$.
The complete dataset can be defined as
\[\mathbf{V}=\begin{bmatrix}\mathbf{X} & \mathbf{Y}\end{bmatrix}\]
Each row of this matrix is defined as
\[\mathbf{v}\tran_{i}=\begin{bmatrix}\mathbf{x}_{i}\tran& y_{i}\end{bmatrix}\]
Every internal node is also associated with a threshold value $t\in \mathbb{R}$.
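As a minimal illustration of these definitions (all variable names and values are hypothetical), the dataset $\mathbf{V}=\begin{bmatrix}\mathbf{X} & \mathbf{Y}\end{bmatrix}$ and a node with its threshold $t$ can be represented as:

```python
import numpy as np

# Hypothetical toy dataset: m = 4 records, n = 2 features.
X = np.array([[1.0, 5.0],
              [2.0, 3.0],
              [4.0, 1.0],
              [5.0, 2.0]])
# Class labels, one per record (the column Y of the dataset).
Y = np.array([0, 0, 1, 1])
# Complete dataset V = [X | Y]; each row is a record v_i = [x_i | y_i].
V = np.column_stack([X, Y])

# A node is associated with a set of records and a threshold t.
node = {"records": V, "threshold": 3.0}
```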


Although decision trees can be either binary or $n$-ary, the most common implementations of Random Forest employ only the former type.
At every node at a given depth, the tree-generation algorithm performs a binary split, so a set of $p$ parent nodes produces $2p$ children.
The split function $f(\mathbf{x})$ is defined as
\[f(\mathbf{x}) : \mathbb{R}^n \rightarrow \mathbb{R}\] and usually outputs the value of a single feature of $\mathbf{x}$.
When the algorithm performs a split, it evaluates $f(\mathbf{x})$ and compares the result to the threshold $t$ associated with the node being split. Let $\mathbf{S}\subseteq\mathbf{V}$ be the set of data records associated with that node. The splitting rule defining the arcs connecting a node to its children is the following:
\[
\left \lbrace \begin{array}{ll}
\mathbf{S}_{L}=\mathbf{S}_{L}\cup \lbrace\mathbf{v}\rbrace & \mathrm{if} \, f(\mathbf{x})\leq t\\
\mathbf{S}_{R}=\mathbf{S}_{R}\cup \lbrace\mathbf{v}\rbrace & \mathrm{if}\, f(\mathbf{x})>t 
\end{array}\right.
\]
where $\mathbf{S}_{L}\subseteq\mathbf{S}$ is the subset of data records that will be associated with the left child and $\mathbf{S}_{R}\subseteq\mathbf{S}$ is the subset that will be associated with the right child. $\mathbf{S}_{L}$ and $\mathbf{S}_{R}$ are subject to the following constraints:
\[
\left \lbrace \begin{array}{l}
	\mathbf{S}_{L} \cap \mathbf{S}_{R}=\emptyset \\
	\mathbf{S}_{L} \cup \mathbf{S}_{R}=\mathbf{S}	
	\end{array} \right.
\]
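The splitting rule above can be sketched in a few lines, assuming $f(\mathbf{x})$ simply selects the value of one feature (the function and variable names are hypothetical):

```python
import numpy as np

def split(V, feature, t):
    """Partition the records V = [X | Y] into S_L and S_R.

    Here f(x) is taken to be the value of one feature; records with
    f(x) <= t go to the left child and the rest go to the right one,
    so S_L and S_R are disjoint and their union is V.
    """
    mask = V[:, feature] <= t
    return V[mask], V[~mask]

# Hypothetical toy dataset V = [X | Y] (last column is the class).
V = np.array([[1.0, 5.0, 0],
              [2.0, 3.0, 0],
              [4.0, 1.0, 1],
              [5.0, 2.0, 1]])
S_L, S_R = split(V, feature=0, t=3.0)
```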

The splitting procedure is iterated until a given depth $h$ is reached. The depth $h$ can be either specified by the user or determined automatically by stopping the splitting process when every node contains a single record or when all the records associated with a node belong to the same class.
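The recursion with its stopping criteria can be sketched as follows. This is illustrative only: a real implementation searches for the best feature and threshold at each node, while here the split choice is a fixed, hypothetical one (feature 0, median threshold).

```python
import numpy as np

def grow(V, depth, max_depth):
    """Recursively split V = [X | Y] until max_depth is reached,
    the node is pure (a single class), or it holds one record."""
    labels = V[:, -1]
    if depth >= max_depth or len(V) <= 1 or len(np.unique(labels)) == 1:
        return {"leaf": True, "records": V}
    feature = 0                   # illustrative: always split on feature 0
    t = np.median(V[:, feature])  # illustrative threshold choice
    mask = V[:, feature] <= t
    if mask.all() or (~mask).all():  # degenerate split: stop early
        return {"leaf": True, "records": V}
    return {"leaf": False, "threshold": t,
            "left": grow(V[mask], depth + 1, max_depth),
            "right": grow(V[~mask], depth + 1, max_depth)}

# Hypothetical toy dataset V = [X | Y] (last column is the class).
V = np.array([[1.0, 5.0, 0],
              [2.0, 3.0, 0],
              [4.0, 1.0, 1],
              [5.0, 2.0, 1]])
tree = grow(V, depth=0, max_depth=3)
```

On this toy dataset both children are already pure after the first split, so the recursion stops well before the depth limit.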

Each leaf node stores a histogram of the classes of the records associated with it, i.e., an estimate of the probability mass function $P(c)$ over the classes $c$.
The output of a decision tree is therefore the probability that a given data record belongs to each class.
In a Random Forest, the predictions of the individual decision trees are combined, typically by majority voting, so that the forest outputs the most common class prediction.
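The leaf histogram and the vote across trees can be sketched as follows (all names are hypothetical):

```python
from collections import Counter

def leaf_pmf(labels):
    """Class histogram of a leaf, normalized into an estimate of P(c)."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {c: k / total for c, k in counts.items()}

def forest_predict(tree_predictions):
    """Majority vote over the class predicted by each tree."""
    return Counter(tree_predictions).most_common(1)[0][0]

pmf = leaf_pmf([1, 1, 0])            # leaf holding two class-1 records and one class-0 record
vote = forest_predict([0, 1, 1, 1])  # four trees vote; class 1 is the most common
```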