\section{Execution and parameters adaptation}
	
	\subsection{Analysis and influence of parameters}
		
		All the following charts are averaged over three executions to provide significant results.		
			
		\subsubsection{Influence of the learning rate}
			
			The influence of the learning rate is tested with the following configuration:
			\begin{itemize}
				\item The first hidden layer has 200 neurons each having 400 inputs.
				\item The second hidden layer has 40 neurons each having 200 inputs.
				\item The output layer has 4 neurons each having 40 inputs.
				\item The global tolerated error on all the network is set to 0.001.
			\end{itemize}
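The network described above can be sketched as a forward pass in Python (a minimal NumPy sketch: the layer sizes come from the list above, while the sigmoid activation and the random initialization are assumptions, not details given in this report):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Weight matrices are (inputs, neurons), matching the configuration:
# 400 inputs -> 200 -> 40 -> 4 outputs.
W1 = rng.normal(scale=0.1, size=(400, 200))  # first hidden layer
W2 = rng.normal(scale=0.1, size=(200, 40))   # second hidden layer
W3 = rng.normal(scale=0.1, size=(40, 4))     # output layer

def forward(x):
    """Forward pass through the three layers."""
    h1 = sigmoid(x @ W1)     # 200 neurons, 400 inputs each
    h2 = sigmoid(h1 @ W2)    # 40 neurons, 200 inputs each
    return sigmoid(h2 @ W3)  # 4 neurons, 40 inputs each

y = forward(rng.normal(size=400))
```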
			
			\begin{figure}[H]
				\begin{minipage}[b]{.5\linewidth}
					\centering
					\includegraphics[scale=0.6]{img/caflrwip.png}
					\label{fig:caflrwip}
				\end{minipage}%
				\begin{minipage}[b]{.5\linewidth}
					\centering
					\includegraphics[scale=0.6]{img/ltflrwip.png}
					\label{fig:ltflrwip}
				\end{minipage}
				\caption{Influence of the learning rate on the classification accuracy and time spent to learn, with image preprocessing}
			\end{figure}
			
			\begin{figure}[H]
				\begin{minipage}[b]{.5\linewidth}
					\centering
					\includegraphics[scale=0.6]{img/caflrwoip.png}
					\label{fig:caflrwoip}
				\end{minipage}%
				\begin{minipage}[b]{.5\linewidth}
					\centering
					\includegraphics[scale=0.6]{img/ltflrwoip.png}
					\label{fig:ltflrwoip}
				\end{minipage}
				\caption{Influence of the learning rate on the classification accuracy and time spent to learn, without image preprocessing}
			\end{figure}
			
			These charts show that the classification accuracy reaches its maximum for a learning rate of 0.15 when images are preprocessed. That learning rate is therefore used in the following tests.
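To make the role of this parameter concrete, the learning rate scales each gradient-descent weight update; a one-line sketch (the weight vector and gradient below are illustrative values only, not data from the experiments):

```python
import numpy as np

learning_rate = 0.15  # value selected from the charts above

w = np.array([0.5, -0.3])    # current weights (illustrative)
grad = np.array([0.2, 0.1])  # gradient of the error w.r.t. w (illustrative)

# One update step: a larger learning rate takes a bigger step per example,
# which speeds up learning but can overshoot the minimum.
w_new = w - learning_rate * grad
```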
			
		\subsubsection{Influence of the number of hidden layers}
			
			The influence of the number of hidden layers is tested with the following configuration:
			\begin{itemize}
				\item The first hidden layer has 200 neurons each having 400 inputs.
				\item Each other hidden layer has 200 neurons each having 200 inputs.
				\item The output layer has 4 neurons each having 200 inputs.
				\item The learning rate is set to 0.15.
				\item The global tolerated error on all the network is set to 0.001.
			\end{itemize}			
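Under this configuration, the weight-matrix shapes as a function of the number of hidden layers can be sketched as follows (a hypothetical helper, not code from the project):

```python
def build_layers(n_hidden):
    """Weight shapes (inputs, neurons) for n_hidden hidden layers,
    following the configuration above: a first 200-neuron layer with
    400 inputs, further 200-neuron layers with 200 inputs each, and
    a 4-neuron output layer with 200 inputs."""
    shapes = [(400, 200)]
    shapes += [(200, 200)] * (n_hidden - 1)
    shapes.append((200, 4))
    return shapes
```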
			
			\begin{figure}[H]
				\begin{minipage}[b]{.5\linewidth}
					\centering
					\includegraphics[scale=0.6]{img/cafnhlwip.png}
					\label{fig:cafnhlwip}
				\end{minipage}%
				\begin{minipage}[b]{.5\linewidth}
					\centering
					\includegraphics[scale=0.6]{img/ltfnhlwip.png}
					\label{fig:ltfnhlwip}
				\end{minipage}
				\caption{Influence of the number of hidden layers on the classification accuracy and time spent to learn, with image preprocessing}
			\end{figure}
			
			\begin{figure}[H]
				\begin{minipage}[b]{.5\linewidth}
					\centering
					\includegraphics[scale=0.6]{img/cafnhlwoip.png}
					\label{fig:cafnhlwoip}
				\end{minipage}%
				\begin{minipage}[b]{.5\linewidth}
					\centering
					\includegraphics[scale=0.6]{img/ltfnhlwoip.png}
					\label{fig:ltfnhlwoip}
				\end{minipage}
				\caption{Influence of the number of hidden layers on the classification accuracy and time spent to learn, without image preprocessing}
			\end{figure}
			
			These charts show a maximum accuracy when the network is composed of two or three hidden layers. Moreover, the learning time is at its minimum with two hidden layers, so in further tests the network is composed of two hidden layers.
			
		\subsubsection{Influence of first hidden layer size}

			The influence of the first hidden layer size is tested with the following configuration:
			\begin{itemize}
				\item The second hidden layer has 40 neurons, each having as many inputs as there are neurons in the first hidden layer.
				\item The output layer has 4 neurons each having 40 inputs.
				\item The learning rate is set to 0.15.
				\item The global tolerated error on all the network is set to 0.001.
			\end{itemize}			
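One reason the first layer size affects the learning time is that it drives the total number of weights to train; this can be sketched as follows (a hypothetical helper based on the configuration above, not code from the project):

```python
def n_weights(first_layer_size):
    """Total weight count for the configuration above: a first hidden
    layer of the given size fed by 400 inputs, a 40-neuron second
    hidden layer whose inputs match that size, and a 4-neuron output
    layer with 40 inputs."""
    shapes = [(400, first_layer_size), (first_layer_size, 40), (40, 4)]
    return sum(inputs * neurons for inputs, neurons in shapes)

# Doubling the first hidden layer roughly doubles the weight count.
print(n_weights(200), n_weights(400))
```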
			
			\begin{figure}[H]
				\begin{minipage}[b]{.5\linewidth}
					\centering
					\includegraphics[scale=0.6]{img/cafsfhlwip.png}
					\label{fig:cafsfhlwip}
				\end{minipage}%
				\begin{minipage}[b]{.5\linewidth}
					\centering
					\includegraphics[scale=0.6]{img/ltfsfhlwip.png}
					\label{fig:ltfsfhlwip}
				\end{minipage}
				\caption{Influence of the first hidden layer size on the classification accuracy and time spent to learn, with image preprocessing}
			\end{figure}
			
			\begin{figure}[H]
				\begin{minipage}[b]{.5\linewidth}
					\centering
					\includegraphics[scale=0.6]{img/cafsfhlwoip.png}
					\label{fig:cafsfhlwoip}
				\end{minipage}%
				\begin{minipage}[b]{.5\linewidth}
					\centering
					\includegraphics[scale=0.6]{img/ltfsfhlwoip.png}
					\label{fig:ltfsfhlwoip}
				\end{minipage}
				\caption{Influence of the first hidden layer size on the classification accuracy and time spent to learn, without image preprocessing}
			\end{figure}
			
			As we can see, when image preprocessing is used, the maximum accuracy is obtained with a first hidden layer of 200 neurons, which also improves the learning time. In further tests, the first hidden layer is therefore composed of 200 neurons.			
			
		\subsubsection{Influence of the second hidden layer size}
			
			The influence of the second hidden layer size is tested with the following configuration:
			\begin{itemize}
				\item The first hidden layer has 200 neurons each having 400 inputs.
				\item The output layer has 4 neurons, each having as many inputs as there are neurons in the second hidden layer.
				\item The learning rate is set to 0.15.
				\item The global tolerated error on all the network is set to 0.001.
			\end{itemize}			
				
				\begin{figure}[H]
				\begin{minipage}[b]{.5\linewidth}
					\centering
					\includegraphics[scale=0.6]{img/cafsshlwip.png}
					\label{fig:cafsshlwip}
				\end{minipage}%
				\begin{minipage}[b]{.5\linewidth}
					\centering
					\includegraphics[scale=0.6]{img/ltfsshlwip.png}
					\label{fig:ltfsshlwip}
				\end{minipage}
				\caption{Influence of the second hidden layer size on the classification accuracy and time spent to learn, with image preprocessing}
			\end{figure}
			
			\begin{figure}[H]
				\begin{minipage}[b]{.5\linewidth}
					\centering
					\includegraphics[scale=0.6]{img/cafsshlwoip.png}
					\label{fig:cafsshlwoip}
				\end{minipage}%
				\begin{minipage}[b]{.5\linewidth}
					\centering
					\includegraphics[scale=0.6]{img/ltfsshlwoip.png}
					\label{fig:ltfsshlwoip}
				\end{minipage}
				\caption{Influence of the second hidden layer size on the classification accuracy and time spent to learn, without image preprocessing}
			\end{figure}
			
			These charts show that the maximum accuracy is reached with a second hidden layer of 50 neurons. Moreover, the learning time reaches its minimum for that value, so in further tests the second hidden layer is composed of 50 neurons.
			
		\subsection{Best parameters}
		
			From the previous tests, the best configuration for the neural network is the following:
			\begin{itemize}
				\item The neural network has two hidden layers.
				\item The first hidden layer is composed of 200 neurons each having 400 inputs.
				\item The second hidden layer is composed of 50 neurons each having 200 inputs.
				\item The output layer has 4 neurons, each having 50 inputs.
				\item The learning rate is initially set to 0.15.
				\item The tolerated error over the network is set to 0.001.
				\item Preprocessing of data is enabled.
			\end{itemize}
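For reference, the retained parameters can be collected in a single configuration object (a hypothetical sketch; the key names are assumptions, not identifiers from the project's actual code):

```python
# Best parameters found in the tests above, gathered in one place.
best_config = {
    "inputs": 400,               # inputs of the first hidden layer
    "hidden_layers": [200, 50],  # neurons in the first and second hidden layers
    "outputs": 4,                # neurons in the output layer
    "learning_rate": 0.15,
    "tolerated_error": 0.001,
    "preprocessing": True,       # image preprocessing enabled
}
```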