\subsection{Search Strategy Evaluation}
\label{sec:eval_search}

As we vary the search strategy across heuristics, we expect agents with stronger search strategies, such as mini-max with $\alpha\beta$ pruning, to search to a deeper ply on a given turn and consequently win more games.  We do not have exact search depths for our agents, as this information was not saved with the game-play data.  Anecdotally, we have seen mini-max reach 5--7 ply, with $\alpha\beta$ pruning improving this to 6--10 ply on a given turn.
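The cutoff mechanics that buy those extra ply can be sketched as follows.  This is a generic mini-max with $\alpha\beta$ pruning over an explicit toy game tree; the function name, the nested-list tree representation, and the leaf values are illustrative stand-ins, not our agents' actual code, which searches Othello board states instead.

```python
import math

# A node is either a number (a leaf's heuristic value) or a list of
# child nodes.  The root is a maximizing node; levels alternate.

def alphabeta(node, depth, alpha=-math.inf, beta=math.inf, maximizing=True):
    if depth == 0 or not isinstance(node, list):
        return node  # leaf: return its heuristic value
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:   # beta cutoff: the remaining siblings cannot
                break           # change the result, so they are never searched
        return value
    value = math.inf
    for child in node:
        value = min(value, alphabeta(child, depth - 1, alpha, beta, True))
        beta = min(beta, value)
        if beta <= alpha:       # alpha cutoff (symmetric case)
            break
    return value

# Classic textbook tree: the minimax value is 3, and the cutoffs skip
# several leaves that plain mini-max would have evaluated.
print(alphabeta([[3, 12, 8], [2, 4, 6], [14, 5, 2]], 2))  # -> 3
```

The skipped subtrees are exactly the savings that let an $\alpha\beta$ agent spend its per-turn time budget on deeper searches rather than wider ones.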

In evaluating the game play of better search strategies, we first note that every agent employing $\alpha\beta$ pruning won more games than it lost against the random agent in the round-robin portion of the tournament.  To better isolate the effect of the search strategy, we ran a series of games between agents that shared the same heuristic but used different search strategies.  The results of these games can be seen in Table~\ref{tbl:sresults}.

\begin{table}
\centering
\begin{tabular}{|l|l|l|c|c|c|}
\hline
Name (seed) & Search & Heuristic & Wins & Losses & Ties \\ \hline \hline
greedystab       & greedy                  & \multirow{3}{*}{stability}  & 0  & 12 & 0 \\ \cline{1-2} \cline{4-6}
stabmax          & mini-max                &                             & 7  & 5  & 0 \\ \cline{1-2} \cline{4-6}
stabab           & mini-max $\alpha\beta$  &                             & 11 & 1  & 0 \\ \hline \hline
greedysmartscore & greedy                  & \multirow{3}{*}{smartscore} & 2  & 10 & 0 \\ \cline{1-2} \cline{4-6}
smartscoremax    & mini-max                &                             & 7  & 5  & 0 \\ \cline{1-2} \cline{4-6}
smartscoreab     & mini-max $\alpha\beta$  &                             & 9  & 3  & 0 \\ \hline \hline
greedyscore      & greedy                  & \multirow{3}{*}{score}      & 0  & 11 & 1 \\ \cline{1-2} \cline{4-6}
scoremax         & mini-max                &                             & 7  & 4  & 1 \\ \cline{1-2} \cline{4-6}
scoreab          & mini-max $\alpha\beta$  &                             & 10 & 2  & 0 \\ \hline \hline
greedymob        & greedy                  & \multirow{3}{*}{mobility}   & 7  & 5  & 0 \\ \cline{1-2} \cline{4-6}
mobimax          & mini-max                &                             & 5  & 6  & 1 \\ \cline{1-2} \cline{4-6}
mobab            & mini-max $\alpha\beta$  &                             & 5  & 6  & 1 \\ \hline
\end{tabular}
\caption{Results of the six-game matches between agents with the same heuristic but different search algorithms.}
\label{tbl:sresults}
\end{table}

The results of most games are what we expect: with the same heuristic, a better search strategy beats a lesser one in most cases.  The losses by agents with better search strategies can be attributed to a source of non-determinism: when several available moves receive the same heuristic weight, an agent chooses among them at random.  A randomly chosen move can therefore look good to a certain ply but suffer from the horizon effect, placing the agent in a bad position.
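The random tie-breaking described above can be sketched as follows; the function and parameter names are illustrative, and the agents' actual code may differ.

```python
import random

def choose_move(moves, value):
    """Pick uniformly at random among the moves tied for the best value.

    `moves` is the list of legal moves and `value` maps a move to its
    searched heuristic score; both are hypothetical stand-ins for the
    engine's interface.
    """
    best = max(value(m) for m in moves)
    return random.choice([m for m in moves if value(m) == best])
```

Because ties at the search horizon are common, this randomness alone is enough for a deeper searcher to occasionally stumble into a horizon-effect trap.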

It is very interesting to note that the mobility heuristic does not appear to improve with deeper search: both mini-max and mini-max with $\alpha\beta$ lost to the greedy approach, which does not look ahead.  The only explanation we have for this is that greedy essentially plays optimally against the other opponents by always choosing the move that leaves them the smallest number of available moves, forcing them to randomly choose bad moves.
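Under that explanation, the greedy mobility agent's move choice amounts to a one-ply minimization of the opponent's reply count, sketched here over a toy reply table; the names and data are illustrative, not our engine's interface, which generates real Othello moves.

```python
# One-ply greedy mobility: pick the move that leaves the opponent the
# fewest legal replies.  `replies` maps each candidate move to the
# opponent's reply count after that move (a hypothetical stand-in).

def greedy_mobility_move(moves, replies):
    return min(moves, key=lambda m: replies[m])

# Move 'b' leaves the opponent only one reply, so greedy takes it.
print(greedy_mobility_move(['a', 'b', 'c'], {'a': 3, 'b': 1, 'c': 4}))  # -> b
```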

Overall, we conclude that our implementations of mini-max and mini-max with $\alpha\beta$ pruning were successful in producing stronger Othello agents.