\section{Learning algorithm}

Training uses a file containing 300 images, each with a known answer. We therefore chose the backpropagation algorithm to train our network. The main idea is that, for each image, the network tries to guess the answer. The guess is compared with the correct answer, and the error between the two is propagated back through the network: the weights of each neuron are adjusted so as to produce a better answer on subsequent trials. This procedure is applied to every image and repeated until the network reaches a sufficiently small error.

\vspace*{0.7cm}
First, the errors on the output layer are computed:
\begin{equation}
e_k = o_k \times (1 - o_k)(t_k - o_k)
\end{equation}
where $o_k$ is the output of neuron $k$\\
~and $t_k$ is the desired output for this neuron.
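As a minimal sketch of this first step, the output-layer errors can be computed directly from the formula above. This assumes sigmoid activations (the $o_k(1-o_k)$ factor is the sigmoid derivative); the function name and the use of plain Python lists are illustrative choices, not part of the original implementation.

```python
def output_errors(outputs, targets):
    # e_k = o_k * (1 - o_k) * (t_k - o_k) for each output neuron k
    return [o * (1 - o) * (t - o) for o, t in zip(outputs, targets)]
```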
\vspace*{0.7cm}

The weights of the output layer are then updated. To each previous weight we add a correction made up of the error, the corresponding input, and the learning rate. The learning rate controls the learning speed: the larger it is, the more the network learns in a single iteration. This parameter is very important, since too high a rate causes errors, whereas too low a rate makes the network learn too slowly. This point is discussed further later in this report.
\begin{equation}
w_{ik} = w_{ik} + e_k \times learning\_rate \times input_{ik}
\label{upd_weight}
\end{equation}
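The update rule of eq. \ref{upd_weight} can be sketched as follows. The function name and data layout (`w[k][i]` for the weight on input $i$ of neuron $k$) are illustrative assumptions; the rule itself is the one stated above.

```python
def update_weights(w, errors, inputs, learning_rate):
    # w[k][i]: weight on input i of neuron k; errors[k]: error e_k;
    # inputs[i]: value arriving on connection i.
    # Applies w_ik += e_k * learning_rate * input_ik in place.
    for k in range(len(w)):
        for i in range(len(w[k])):
            w[k][i] += errors[k] * learning_rate * inputs[i]
    return w
```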

The same idea is followed for the hidden layers. Errors are computed according to the following formula:
\begin{equation}
e_h = o_h \times (1 - o_h) \times \sum_{k \in K} w_{kh}e_k
\end{equation}
where $o_h$ is the output of neuron $h$\\
~ $e_k$ is the error of neuron $k$\\
~ $w_{kh}$ is the weight in neuron $k$ for the output of neuron $h$\\
~ and $K$ is the set of neurons whose immediate inputs include the output of $h$.
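The hidden-layer error formula can be sketched in the same style as the previous snippets. Here `out_weights[k][h]` is assumed to hold $w_{kh}$, the weight in output neuron $k$ attached to the output of hidden neuron $h$; all names are hypothetical.

```python
def hidden_errors(hidden_outputs, out_weights, out_errors):
    # e_h = o_h * (1 - o_h) * sum over k of w_kh * e_k
    errs = []
    for h, o in enumerate(hidden_outputs):
        back = sum(out_weights[k][h] * out_errors[k]
                   for k in range(len(out_errors)))
        errs.append(o * (1 - o) * back)
    return errs
```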

\vspace*{0.7cm}

The weights are then updated following the same rule as for the output layer.

\vspace*{0.7cm}

It turns out that we did not follow this algorithm exactly. For the output layer, the error used to update the weights (eq. \ref{upd_weight}) is the sum of all the errors computed for that layer. We noticed this mistake quite late, but correcting it made the algorithm take a very long time to run. We therefore chose to keep this variant, as it does not affect the correctness of the algorithm. The reason the corrected version performs worse probably lies in convergence: since our variant modifies the weights more strongly, it converges faster. Moreover, as we worked with this variant throughout the implementation, the parameters were tuned to work this way.

\vspace*{0.7cm}

As said before, this algorithm is run once per image, and repeated until the error over a whole pass is low enough. The output error of each image is summed over the pass. We set this threshold to $10^{-3}$.\\
To be more precise, the error must converge: the learning phase stops only once this threshold has been reached five consecutive times. This forces the algorithm to settle on a genuine solution rather than a local minimum.
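The stopping criterion above can be sketched as a small helper: training halts once the summed error of a pass has been below $10^{-3}$ for five consecutive passes. The function name and the idea of passing the whole error history are illustrative assumptions.

```python
def should_stop(error_history, threshold=1e-3, streak=5):
    # Stop only when the last `streak` pass errors are all below threshold,
    # so a single lucky pass does not end the learning phase.
    if len(error_history) < streak:
        return False
    return all(e < threshold for e in error_history[-streak:])
```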

\vspace*{0.7cm}
To make learning still more efficient, this mechanism is run three times with different learning rates. The rate is initially set to $0.15$, which is quite low. For the second loop it is tripled, and for the third it is halved. In this way a first solution is found after a long learning phase. Since this solution is fairly stable, we then try to improve it with a higher rate, which is less harmful than it would be on a randomly initialized network. The last loop reinforces the learning by helping convergence.
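The three-phase schedule described above (start at $0.15$, triple it, then halve the tripled value) can be written out explicitly. The function name is hypothetical, and the interpretation that the final halving applies to the tripled rate is an assumption based on the wording.

```python
def rate_schedule(initial=0.15):
    # Phase 1: low initial rate; phase 2: tripled; phase 3: the tripled
    # rate halved, reinforcing convergence on the near-final weights.
    return [initial, initial * 3, initial * 3 / 2]
```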