\section{Algorithms}
\subsection{Alpha-beta}
\label{alphabeta}

Evaluating all possible moves, even to a limited depth, often takes a huge amount of time. The aim of the alpha-beta algorithm is to avoid evaluating moves of poor utility.

\vspace{0.7cm}
Here is our alpha-beta function in an easy-to-read form:
\begin{algorithmic}[1]
\If {current node is a leaf or maximum depth reached}
	\State evaluate current node utility
	\State \Return current node utility
\EndIf
\If {current node is a minimize node}
\State $value \gets +\infty$
\ForAll {children $c$ of the current node}
\State value $\gets$ min(value, alphabeta(c, $\alpha$, $\beta$, depth $- 1$, maximize))
\If {$\alpha \geq value$}
\State current node utility $\gets$ value
\State \Return value
\EndIf
\State $\beta \gets \min(\beta, value)$
\EndFor
\Else
\State $value \gets -\infty$
\ForAll {children $c$ of the current node}
\State value $\gets$ max(value, alphabeta(c, $\alpha$, $\beta$, depth $- 1$, minimize))
\If {$value \geq \beta$}
\State current node utility $\gets$ value
\State \Return value
\EndIf
\State $\alpha \gets \max(\alpha, value)$
\EndFor
\EndIf
\State current node utility $\gets$ value
\State \Return value
\end{algorithmic}
\vspace*{0.7cm}
The function signature is 
\begin{algorithmic}
\State alphaBeta(OthelloSearchNode node, int alpha, int beta, int depth, boolean maximize);
\end{algorithmic}
where:
\begin{itemize}
\item[] $node$ is a node of the tree containing all the possible moves,
\item[] $alpha$ is the minimum gain the maximizing player can already guarantee,
\item[] $beta$ is the maximum gain the minimizing player can already guarantee,
\item[] $depth$ is the remaining search depth,
\item[] $maximize$ is a boolean indicating whether the node is a maximizing node.
\end{itemize}
and it is first invoked with the following parameters:
\begin{algorithmic}
\State alphaBeta(search tree root, $-\infty$, $+\infty$, MAX\_DEPTH, true);
\end{algorithmic}
\vspace{0.7cm}

The main idea lies in the alpha and beta parameters. At each step the algorithm keeps track of the lowest and highest gains that can currently be expected. When the algorithm analyzes a minimizing node, if one of the children returns a value lower than or equal to $\alpha$, it is obvious that the maximizing parent would never choose this branch. There is then no interest in analyzing the rest of the children; this is an alpha cut.
Conversely, when the current node is a maximizing node, there is no need to analyze the remaining children once one value is greater than or equal to $\beta$; this is a beta cut.
These operations are called pruning.
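To make the control flow concrete, here is a minimal Java sketch of the pseudocode above. The \texttt{Node} class and its fixed leaf utilities are hypothetical stand-ins for our actual \texttt{OthelloSearchNode}, used only to illustrate the recursion and the two cuts:

```java
import java.util.ArrayList;
import java.util.List;

class AlphaBeta {
    // Hypothetical minimal tree node: internal nodes hold children,
    // leaves hold a fixed utility value (a real engine would evaluate a board).
    static class Node {
        int utility;                          // only meaningful at leaves
        List<Node> children = new ArrayList<>();
        Node(int utility) { this.utility = utility; }
        Node(Node... kids) { for (Node k : kids) children.add(k); }
        boolean isLeaf() { return children.isEmpty(); }
    }

    static int alphaBeta(Node node, int alpha, int beta, int depth, boolean maximize) {
        if (node.isLeaf() || depth == 0) {
            return node.utility;              // evaluate the current node
        }
        int value;
        if (!maximize) {                      // minimizing node
            value = Integer.MAX_VALUE;
            for (Node c : node.children) {
                value = Math.min(value, alphaBeta(c, alpha, beta, depth - 1, true));
                if (alpha >= value) return value;   // alpha cut
                beta = Math.min(beta, value);
            }
        } else {                              // maximizing node
            value = Integer.MIN_VALUE;
            for (Node c : node.children) {
                value = Math.max(value, alphaBeta(c, alpha, beta, depth - 1, false));
                if (value >= beta) return value;    // beta cut
                alpha = Math.max(alpha, value);
            }
        }
        return value;
    }
}
```

On the classic two-level example (a maximizing root over two minimizing nodes with leaves $\{3,5\}$ and $\{2,9\}$), the leaf $9$ is pruned by an alpha cut and the root value is $3$.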

\begin{figure}[h]
\centering
\includegraphics[scale=0.5]{img/alphabeta}
\caption{Example of alphabeta pruning}
\end{figure}


\subsection{Heuristic}

	The engine to beat simply counts the number of discs belonging to each player. To outperform it, our heuristic is based on three factors:
	\begin{itemize}
		\item The number of discs.
		\item The stability, that is to say the number of discs the opponent can no longer take back.
		\item The mobility, i.e. the number of possible moves resulting from the last move.
	\end{itemize}
	
	The final heuristic is a weighted sum of these parameters, given by the following equation:
	\begin{equation}
		h(p) = countDisc(p) \cdot \alpha + stableDisc(p) \cdot \beta + mobility(p) \cdot \gamma
	\end{equation}
	where $\alpha, \beta, \gamma$ are parameters used to adjust the strategy during the game.
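The weighted sum itself is straightforward; a Java sketch follows, where the three feature values are assumed to be precomputed elsewhere for the position $p$:

```java
class Heuristic {
    // Weighted sum h(p) = countDisc*alpha + stableDisc*beta + mobility*gamma.
    // The three feature values are assumed to be computed beforehand.
    static double evaluate(int countDisc, int stableDisc, int mobility,
                           double alpha, double beta, double gamma) {
        return countDisc * alpha + stableDisc * beta + mobility * gamma;
    }
}
```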
	
	\subsubsection*{Disc count}
		
		The number of discs on each side remains an important parameter. At the end of the game, the aim is still to have the most pieces on the board. However this is not enough to win, and other factors must be considered.
		
		The disc count is then the difference between the number of discs in each camp, computed for a given position.
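As a sketch, with a hypothetical board encoding ($1$ for our discs, $-1$ for the opponent's, $0$ for empty), the disc count reduces to a signed sum over the cells:

```java
class DiscCount {
    // Hypothetical 8x8 board encoding: 1 = our disc, -1 = opponent, 0 = empty.
    // The signed sum directly yields (our discs) - (opponent discs).
    static int discCount(int[][] board) {
        int diff = 0;
        for (int[] row : board) {
            for (int cell : row) diff += cell;
        }
        return diff;
    }
}
```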
		
	\subsubsection*{Mobility}
		
		One idea to improve our engine's efficiency is to maximize the player's possibilities while reducing those of the opponent. The mobility consists in counting the number of possible moves for each player and computing the difference between the player's moves and the opponent's moves. That difference is the second term of our heuristic.
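The move count behind this term can be sketched in Java using the standard Othello legality rule: an empty square is playable if, in some direction, a run of opponent discs is followed by one of the player's own discs. The board encoding ($1$ / $-1$ / $0$) is the same hypothetical one as above:

```java
class Mobility {
    static final int[][] DIRS = {{-1,-1},{-1,0},{-1,1},{0,-1},{0,1},{1,-1},{1,0},{1,1}};

    // Count legal moves for `player` (1 or -1); 0 marks an empty square.
    static int countMoves(int[][] board, int player) {
        int n = board.length, moves = 0;
        for (int r = 0; r < n; r++)
            for (int c = 0; c < n; c++)
                if (board[r][c] == 0 && isLegal(board, r, c, player)) moves++;
        return moves;
    }

    static boolean isLegal(int[][] board, int r, int c, int player) {
        int n = board.length;
        for (int[] d : DIRS) {
            int i = r + d[0], j = c + d[1], seen = 0;
            // Walk over a run of opponent discs...
            while (i >= 0 && i < n && j >= 0 && j < n && board[i][j] == -player) {
                i += d[0]; j += d[1]; seen++;
            }
            // ...which must be closed by one of the player's own discs.
            if (seen > 0 && i >= 0 && i < n && j >= 0 && j < n && board[i][j] == player)
                return true;
        }
        return false;
    }

    // Mobility term: our move count minus the opponent's.
    static int mobility(int[][] board, int player) {
        return countMoves(board, player) - countMoves(board, -player);
    }
}
```

On the standard opening position each side has four legal moves, so the mobility term starts at zero.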
			
	\subsubsection*{Stability}
	
	In some situations, some discs are safe from being retaken. Such situations occur when a disc is in a corner, or when a border is filled with discs of the same color. An example is shown in figure \ref{fig:e31}.
		
		\begin{figure}[H]
			\centering
			\includegraphics[scale=1]{img/e31.png}
			\caption{Example of stability}
			\label{fig:e31}
		\end{figure}
	When possible, it appears effective to focus on this safety. The objective is not only to take the most pieces, but also to keep them until the end. A ply might look excellent locally and yet have no lasting effect on the whole game. Stability is there to make the gains durable.
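A full stability analysis is involved; as a conservative illustration, the sketch below counts only the clearly stable discs described above: corners, and unbroken runs of same-colored discs along an edge anchored at a corner. The board encoding is again the hypothetical $1$ / $-1$ / $0$ one:

```java
class Stability {
    // Conservative sketch: a disc counts as stable if it sits in a corner or
    // in an unbroken run of same-colored discs along an edge, anchored at a
    // corner. Interior stable discs are ignored in this simplification.
    static int stableEdgeDiscs(int[][] board, int player) {
        int n = board.length;
        boolean[][] stable = new boolean[n][n];
        markRun(board, stable, player, 0, 0, 0, 1);          // top edge, from left corner
        markRun(board, stable, player, 0, n - 1, 0, -1);     // top edge, from right corner
        markRun(board, stable, player, n - 1, 0, 0, 1);      // bottom edge
        markRun(board, stable, player, n - 1, n - 1, 0, -1);
        markRun(board, stable, player, 0, 0, 1, 0);          // left edge
        markRun(board, stable, player, n - 1, 0, -1, 0);
        markRun(board, stable, player, 0, n - 1, 1, 0);      // right edge
        markRun(board, stable, player, n - 1, n - 1, -1, 0);
        int count = 0;
        for (boolean[] row : stable)
            for (boolean s : row) if (s) count++;
        return count;
    }

    // Mark discs of `player` along direction (dr, dc) until the run breaks.
    static void markRun(int[][] board, boolean[][] stable, int player,
                        int r, int c, int dr, int dc) {
        int n = board.length;
        while (r >= 0 && r < n && c >= 0 && c < n && board[r][c] == player) {
            stable[r][c] = true;
            r += dr; c += dc;
        }
    }
}
```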
	
	
	\subsubsection*{Steps of a game}
	
		From our point of view, a game can be divided into three stages: the opening, the middle game and the late game. In a real game - as human vs. human - strategies often evolve depending on the stage of the game. Following this observation, our heuristic changes its behavior along those three stages, by adjusting the values of $\alpha, \beta, \gamma$. The latter have been chosen by testing.\\
	
		During the opening, one of the main goals is to prevent the opponent from reaching the corners. As long as this condition is respected, the heuristic focuses on limiting the number of moves playable by the opponent. The fewer choices it has, the fewer good plies it can find. Then, in the middle game, the heuristic focuses on stability and disc count. The idea is that, now that the opponent has been limited in its choices, our engine needs to improve its own position. Defense is not sufficient; it must also attack. Finally, during the late game, a good rate of stability should have been reached. The focus is then put on having the most pieces of its color at the end of the game, that is to say, on disc count.
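This stage switching can be sketched as a selection of $(\alpha, \beta, \gamma)$ based on the number of discs on the board. The thresholds and weight values below are illustrative placeholders only; the values actually used were tuned by testing and are not reproduced here:

```java
class StageWeights {
    // Returns {alpha, beta, gamma} for the weighted-sum heuristic.
    // Illustrative placeholder values: the opening favours mobility,
    // the middle game stability, the late game disc count.
    static double[] weights(int discsOnBoard) {
        if (discsOnBoard < 20) return new double[]{0.1, 0.3, 1.0}; // opening
        if (discsOnBoard < 48) return new double[]{0.3, 1.0, 0.4}; // middle game
        return new double[]{1.0, 0.5, 0.1};                        // late game
    }
}
```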

	