\documentclass[a4paper,11pt]{article}
\usepackage[T1]{fontenc}
\usepackage[english]{babel}
\usepackage[utf8]{inputenc}
\usepackage{algorithm}
\usepackage{subfig}
\usepackage{hyperref}
\usepackage{graphicx}
\usepackage{amsmath, amsthm, amssymb}
\usepackage{algorithmic}
\usepackage{listings}
\usepackage{geometry}
\geometry{
    a4paper,  % 21 x 29,7 cm,
     body={150mm,230mm}, left=25mm, top=35mm,
    headheight=7mm, headsep=4mm, marginparsep=4mm, marginparwidth=27mm }


\newenvironment{definition}[1][Definition.]{\begin{trivlist}
\item[\hskip \labelsep {\bfseries #1}]}{\end{trivlist}}

\title{Creating an Artificial Intelligence Agent to Solve Sokoban Puzzles}


\begin{document}
\maketitle
\begin{figure}[H]
\includegraphics[width = 13cm]{Members.png}
\end{figure}

\clearpage
\begin{abstract}
Because the Sokoban puzzle is easy to understand and visualize, fun and very challenging, and can be viewed as a simplified robot control problem, it is a very popular Artificial Intelligence problem. This report presents our attempt to create a Sokoban solver in C++ and summarizes the work done by our group on this project. After explaining the principle of the game and its relationship to the AI course, we list the algorithms that were selected and the reasons for choosing them. We then present the development phase and the main structure of the program, before concluding with our results.
\end{abstract}


\clearpage
\tableofcontents


\clearpage
\section{Introduction}
For the course of Artificial Intelligence DD2380, we were assigned a project: to implement an AI agent capable of solving Sokoban puzzles.
In this report we will explain the problem to solve and the methods we used to implement our agent.


\subsection{Description of the Problem}
First of all: what is a Sokoban puzzle?
Sokoban is a type of transport puzzle, in which the player pushes boxes around in a warehouse, trying to get them to storage locations. \cite{ref:wiki}
\newline

\begin{figure}[H]
\includegraphics[width=13cm]{Sokoban.jpg}
\caption{A level of a Sokoban puzzle}
\end{figure}

There are some very basic rules to be followed:
\begin{itemize}
\item{Only one box can be pushed at a time.}
\item{A box cannot be pulled.}
\item{The player cannot walk through boxes or walls.}
\item{The puzzle is solved when all boxes are located at storage locations.}
\end{itemize}


\subsection{Link with Artificial Intelligence}
As solving Sokoban can be compared to designing a robot that moves boxes in a warehouse, it is a very interesting problem for artificial intelligence researchers. The game is considered difficult not only because of its branching factor (which is comparable to that of chess), but also because of its enormous search tree depth: some levels require more than 1000 ``pushes''.
A human player can quickly discard futile or redundant paths and easily recognizes patterns that can be reused several times during the game, which drastically reduces the amount of search; human players also rely heavily on heuristics.
However, even today, the most complex Sokoban levels remain out of reach for the best automated solvers.


\clearpage
\section{Design}
In this part, we will try to establish a clear formalization of the problem using course material. We will introduce the methods that we used during the project, explain why we chose these methods and how they work.

\subsection{Formulation of the problem}
The goal is to find a sequence of actions for the player to reach a solved puzzle state. A puzzle is solved if all designated box locations have a box on them. The possible actions are: going up, down, left and right.
To find such a sequence, possible moves have to be considered and checked. However, due to the high branching factor and search depth of Sokoban, a simple brute-force method quickly becomes unable to find solutions in a reasonable time.
Therefore the main problems that must be dealt with when attempting to create a Sokoban solver are:
 \begin{itemize}
\item{The very high number of possible states.}
\item{The depth is difficult to predict and can be huge.}
\end{itemize}
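To make this formulation concrete, the state and goal test can be sketched in C++ (the project's language). This is a simplified illustration with names of our own choosing, not the actual classes of our solver:

```cpp
#include <cassert>
#include <set>
#include <utility>
#include <vector>

// A hypothetical, simplified model of the search state (names are ours,
// not taken from our solver): the player's position plus the set of box
// positions, with cells given as (row, column) pairs.
using Cell = std::pair<int, int>;

struct State {
    Cell player;
    std::set<Cell> boxes;
};

// The four possible actions: up, down, left and right.
const std::vector<Cell> kActions = {{-1, 0}, {1, 0}, {0, -1}, {0, 1}};

// The puzzle is solved when every goal cell holds a box.
bool isSolved(const State& s, const std::set<Cell>& goals) {
    for (const Cell& g : goals)
        if (s.boxes.count(g) == 0) return false;
    return true;
}
```

An action then simply attempts to move the player by one of the four offsets, pushing a box when one occupies the target cell; the legality checks follow the rules listed above.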

\subsection{Methods of resolution}
One of the very first decisions that was made involved how to model the
game using a search algorithm. The answer for this is, of course, not
unique. One could for example do it considering that the tree is explored
with player positions, in other words, from one level at depth $i$ in the
tree we move to the next one at depth $i+1$ performing a player move.
However, this is not the approach we took. Our search tree models the
movement of the boxes. This could, at first sight, be regarded as an added
difficulty for the search since the branching factor increases
considerably.\footnote{A search tree expanded with player moves has as
maximum, for any node, a branching factor equal to four (move up, down,
right or left). On the other hand, a tree expanded with box moves has this
same branching factor multiplied by the number of boxes in the game.} There
are, nonetheless, several advantages with this type of search tree. One of them
is, for example, the freedom of sorting the possible movements when
expanding the search tree.
%
To address the depth problem, we chose to use an iterative deepening
search, so that long solutions are not searched for when a short one
exists. Our algorithm is designed to explore a search tree, and as a
consequence comes the requirement of pruning duplicate states. The high
branching factor also drove us to prune deadlocks\footnote{We use the somewhat standardized term deadlock to refer to a disposition of boxes from which no solution can be found.} from the search.

\subsection{Explanation of the choice}
\label{ssec:choice}
First of all, we explain how we addressed the problem of \textit{pruning duplicate states}. When can two or more different states of the search be considered the same, so that it is enough to explore one of them and discard the rest? Intuitively, it is quite reasonable to assume that two boards represent the same state if the boxes are located on the same positions. However, there is another, subtler characteristic to be taken into account. To understand it, think of the concept of a state in the search tree: it is not enough to look at a particular node in isolation; it is also important to consider which states can be derived from it, in other words, its children. Putting both restrictions together, the following definition arises:

\begin{definition}
\textit{Two boards represent the same state in the search tree if and only if the boxes are located on the same positions and the player has access to exactly the same regions.}
\end{definition}
%
We continue our explanation with the strategies used to detect \textit{deadlock states}. We made a basic classification of deadlocks and defined \textit{static} and \textit{dynamic} deadlocks according to the moment of the game at which they can be detected~\cite{ref:sokobano}.
%
\begin{itemize}
\item{\underline{Static Deadlocks}. They are defined by the placement of the goals and walls of the board. These are precisely the static elements of the game, since their positions do not change. Because static deadlocks are defined by the static elements of the game, they can be precalculated once at the very beginning, before the search process starts. A box must never be placed on a cell classified as a static deadlock, since this would mean the box could never reach any of the goals. One exception is when such a cell is also a goal: then it is not a deadlock, since one box has to go there in order to complete the level.}
\item{\underline{Dynamic Deadlocks}. During the game, some of the box
movements can lead to a situation in which one or more boxes cannot be
moved any more. If at least one of these boxes is not situated on a goal,
then the game has become unsolvable. }
\end{itemize}
%
Finally, we present the ideas behind the \textit{heuristics} used in our
solver. Heuristic, or informed, search is necessary because, even if we
take into account all the states pruned as deadlocks, the number of nodes
generated in the tree is too high to find a solution in a convenient
amount of time (more precisely, we have a time limit of one minute for
each board). The search algorithm must be guided so that the most
promising nodes are expanded first. Our heuristic function is admissible
because it never overestimates the real value of a board. This choice is
the most appropriate according to the theory, although there are Sokoban
solvers that use non-admissible heuristics with outstanding
results~\cite{ref:rollingStone}. Our heuristic function gives a value to
the board by estimating the number of box moves necessary to find a
solution. The heuristic is based on a metric function that computes
distances from boxes to goals. Different choices of the metric function
give different heuristics. The metric functions implemented in our solver
are:
%
\begin{itemize}
\item{\underline{Manhattan Distance}. If the board is seen as a grid of
cells in a two-dimensional space, the Manhattan distance between two cells
is simply the sum of the absolute values of the differences between their
coordinates. For the Sokoban puzzle this distance has the drawback that it
does not take into consideration that boxes cannot be moved through walls.}
\item{\underline{Wall Distance}. We introduced this distance to include
the restriction that the Manhattan distance ignores. The wall distance
between two cells is the number of simple movements (up, down, right or
left) necessary to get from one cell to the other. This heuristic is more
accurate than the Manhattan distance, but it still does not take into
account that boxes cannot be pulled, nor does it account for the positions
of the boxes at a particular moment of the game or the accessible region
of the player.}
\end{itemize}
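The wall distance is essentially a breadth-first search over the free cells. The following sketch assumes a simple textual grid encoding (walls as '\#'), which is an assumption of ours and not necessarily the representation used in our solver:

```cpp
#include <cassert>
#include <queue>
#include <string>
#include <utility>
#include <vector>

// Wall distance between two cells: the number of up/down/left/right steps
// needed to get from (sr, sc) to (tr, tc), with walls ('#') blocking the
// way. Returns -1 if the target is unreachable. The grid encoding is an
// assumption for this illustration.
int wallDistance(const std::vector<std::string>& grid,
                 int sr, int sc, int tr, int tc) {
    const int R = grid.size(), C = grid[0].size();
    std::vector<std::vector<int>> dist(R, std::vector<int>(C, -1));
    std::queue<std::pair<int, int>> q;
    dist[sr][sc] = 0;
    q.push({sr, sc});
    const int dr[] = {-1, 1, 0, 0}, dc[] = {0, 0, -1, 1};
    while (!q.empty()) {
        auto [r, c] = q.front(); q.pop();
        if (r == tr && c == tc) return dist[r][c];
        for (int k = 0; k < 4; ++k) {
            int nr = r + dr[k], nc = c + dc[k];
            if (nr < 0 || nr >= R || nc < 0 || nc >= C) continue;
            if (grid[nr][nc] == '#' || dist[nr][nc] != -1) continue;
            dist[nr][nc] = dist[r][c] + 1;  // one more simple movement
            q.push({nr, nc});
        }
    }
    return -1;  // separated by walls
}
```

For two cells separated by an interior wall, the wall distance exceeds the Manhattan distance, which is exactly the extra accuracy described above.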
%
Apart from the metric function, there is another issue, related to
\textit{goal booking}, to take into account when designing the heuristic
function. It stems from the requirement for the heuristic to be
admissible: we need to ensure that the estimate of the sum of distances
from boxes to goals is not bigger than the real cost - in terms of box
moves - of the board. The naive approach here is simply to minimize the
distance to the goals for each box independently. Note, however, that this
can lead to quite unrealistic estimates in which more than one box is
assigned to the same goal\footnote{Recall that the rules of Sokoban
require every goal to be covered by a box at the end of the game.}. In
order to avoid this and still obtain an admissible heuristic, we
\textit{minimize the total cost over all possible box-to-goal
assignments}. This is exactly what can be done with the Hungarian, or
Munkres, algorithm~\cite{ref:munkres}.
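To illustrate what is being minimized, the brute-force routine below computes the same value as Munkres on small instances. Our solver relies on a real $O(n^3)$ Munkres implementation~\cite{ref:munkres-code}; this factorial-time version is for exposition only:

```cpp
#include <algorithm>
#include <cassert>
#include <limits>
#include <numeric>
#include <vector>

// Minimum-cost perfect assignment of boxes to goals, computed by brute
// force over all permutations. Munkres obtains the same value in O(n^3);
// this O(n!) version only shows *what* is being minimized.
// cost[i][j] is the (metric) distance from box i to goal j.
int minAssignmentCost(const std::vector<std::vector<int>>& cost) {
    const int n = cost.size();
    std::vector<int> perm(n);
    std::iota(perm.begin(), perm.end(), 0);  // goal assigned to each box
    int best = std::numeric_limits<int>::max();
    do {
        int total = 0;
        for (int i = 0; i < n; ++i) total += cost[i][perm[i]];
        best = std::min(best, total);
    } while (std::next_permutation(perm.begin(), perm.end()));
    return best;
}
```

For the cost matrix \{\{1, 100\}, \{1, 100\}\}, assigning each box independently to its nearest goal would estimate 2, while the assignment value is 101: a much tighter, yet still admissible, bound.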

%
So far we have introduced two different metric functions and two different
ways of assigning boxes to goals to compute the heuristic. Combining both
criteria, we obtain four different heuristic functions that can be used by
the solver. We have tested all of them, and the results are presented in
Section~\ref{sec:results}.

\subsection{Putting it all together}
\label{ssec:alltogether}
Having considered all that, we can now look at our main algorithm.
\vspace{2mm}
\\Algorithm~\ref{algo:dlSearch} describes the main IDA* search. The piece
of pseudocode in Algorithm~\ref{algo:initSearch} then describes how the
search is initialized. Finally, Algorithm~\ref{algo:expandSearch} focuses
on the particular point of expanding a board and emphasizes that we
generate children based on box moves.

\begin{algorithm}
\caption{Main search function}
\label{algo:dlSearch}
\begin{algorithmic}
\STATE DLSearch( $board, depth, costLimit$ ) $return$ a result or DEADEND
\IF {$board$ is solution}
  \RETURN $board$.Path
\ENDIF

\IF {$board$.Value$ + depth \geq CostLimit$}
  \RETURN DEADEND
\ENDIF

\STATE $children \gets board$.Expand

\FORALL{$child \in children$}

  \IF{$child \not\in knownStates$}
    \STATE $knownStates \gets knownStates + child$
    \STATE $result \gets$ DLSearch( $child$, $depth + 1$, $CostLimit$ )

    \IF{ $result \not=$ DEADEND }
      \RETURN $result$
    \ENDIF

  \ENDIF

\ENDFOR

\STATE \COMMENT{ If either no children boards were found or no children boards gave
a solution, then this is a deadend }
\RETURN DEADEND
\end{algorithmic}
\end{algorithm}

\begin{algorithm}
\caption{Initial call to main search function}
\label{algo:initSearch}
\begin{algorithmic}
\STATE $originBoard \gets$ the board received from the server
\STATE $costLimit \gets originBoard$.Value
\STATE $knownStates \gets \emptyset$
\WHILE{$forever$}
  \STATE $knownStates \gets knownStates + originBoard$
  \STATE $result \gets$ DLSearch( $originBoard$, 0, $costLimit$ )

  \IF{$result \not=$ DEADEND}
    \RETURN $result$
  \ENDIF

  \STATE $costLimit \gets costLimit + 2$
  \STATE $knownStates \gets \emptyset$
\ENDWHILE
\end{algorithmic}
\end{algorithm}

\begin{algorithm}
\caption{Function expanding one given board}
\label{algo:expandSearch}
\begin{algorithmic}
\STATE Expand( $motherBoard$ ) $return$ a list of boards sorted with regard
to their values, from the smallest to the biggest
\STATE $possiblePlayerMoves \gets motherBoard$.PossiblePlayerMoves
\STATE $possibleBoxMoves \gets \emptyset$

\FORALL{$box$ in $motherBoard$.Boxes}
  \STATE update $possibleBoxMoves$ with movements for $box$ given
  $possiblePlayerMoves$
\ENDFOR

\STATE $result \gets \emptyset$

\FORALL{ $move$ in $possibleBoxMoves$ }
  \STATE $childBoard \gets motherBoard$.DoMove( $move$ )
  \IF{$childBoard \not= deadlock$}
    \STATE $result \gets result + childBoard$
  \ENDIF
\ENDFOR

\STATE Sort( $result$ )

\RETURN $result$
\end{algorithmic}
\end{algorithm}

Deadlocks are detected using two different algorithms: one detects dynamic deadlocks
while the other takes care of the static ones. As explained in Section~\ref{ssec:choice},
the static deadlocks are detected before starting the game. The strategy followed can easily
be explained in natural language and is presented in Algorithm~\ref{algo:staDeadlocks}. The other type of
deadlock that our solver is capable of detecting and pruning is called a freeze deadlock~\cite{ref:sokobano}
and belongs to the class of dynamic deadlocks introduced before.

\begin{definition}
\textit{A box is frozen if it is frozen on its horizontal and vertical
  axis.}
\end{definition}

\begin{definition}
\textit{A box is frozen on one axis if there is a wall at either of its
  sides, if there are static deadlocks at both of its sides, or if there
  is a frozen box at either of its sides.}
\end{definition}

Once these definitions are introduced, the implementation of the freeze
deadlock detector is quite straightforward, and a pseudocode version is
presented in Algorithm~\ref{algo:freezeDeadlocks}. Even if
Algorithm~\ref{algo:freezeDeadlocks} seems simple, there are a couple of
details that must be addressed.

\begin{itemize}
  \item When a box is checked for being frozen on a particular axis and
  another box is found at one of its sides, only the complementary axis
  has to be checked for that second box. Otherwise, the algorithm quickly
  runs into infinite recursion.
  \item The pseudocode and the strategy presented rely on a four-block
  detector\footnote{This is just a simple detector that removes deadlock
    states caused by the appearance of a 2x2 block of box or wall cells.}
    being applied first. If this is not applied correctly,
    Algorithm~\ref{algo:freezeDeadlocks} would again be trapped in
    infinite recursion.
\end{itemize}
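The four-block detector mentioned above can be sketched as follows. This is our own simplified version, again on an assumed textual grid; for brevity it ignores goal cells, whereas a complete detector must not report a 2x2 block whose boxes all sit on goals:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Simplified four-block detector: after a push, the box at (r, c) is part
// of a deadlock if some 2x2 square containing it is completely filled
// with boxes ('$') and/or walls ('#'). Assumes (r, c) lies inside a
// wall-bounded grid, so all four candidate squares are in range.
bool isFourBlockDeadlock(const std::vector<std::string>& grid, int r, int c) {
    auto blocked = [&](int rr, int cc) {
        char ch = grid[rr][cc];
        return ch == '#' || ch == '$';
    };
    for (int dr : {-1, 0})        // top-left corner of each 2x2 square
        for (int dc : {-1, 0})
            if (blocked(r + dr, c + dc) && blocked(r + dr, c + dc + 1) &&
                blocked(r + dr + 1, c + dc) && blocked(r + dr + 1, c + dc + 1))
                return true;
    return false;
}
```

Note that a box pushed into a wall corner is also caught, since the box plus the two walls and the corner cell fill a 2x2 square.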
%
Finally, there is a small optimization that can be made to the freeze
deadlock detection. Instead of checking all the boxes after every box
move, it is smarter to check only the boxes that could have been affected
by the last movement. This improvement requires the notion of
\textit{connected boxes}: two boxes are connected if they are adjacent or
linked by a path formed by walls and/or boxes. We did not implement this
improvement in our solver due to the difficulty of maintaining this
connection state for the boxes. However, we think it would not entail a
big change in performance, since profiling our solver shows that the time
spent on freeze deadlock detection is not very high even when it is
performed for all the boxes of the board at the creation of every new
node.

%
\begin{algorithm}
  \caption{Detection of Static Deadlocks}
  \label{algo:staDeadlocks}
  \begin{enumerate}
    \item Remove all the boxes and the player from the board.
    \item Select a goal cell and place a box on it. Pull the box to every possible location on the board and mark
          the positions where the box can be situated.
    \item Repeat the previous step for every goal cell.
    \item At the end, all the cells of the board that are not marked are static deadlocks since a box in there could never
          be pushed to a goal.
  \end{enumerate}
\end{algorithm}
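Algorithm~\ref{algo:staDeadlocks} can be realized as a breadth-first search that pulls a box away from each goal. The following sketch again uses an assumed textual grid encoding rather than our solver's actual representation:

```cpp
#include <cassert>
#include <queue>
#include <string>
#include <utility>
#include <vector>

// Mark every cell from which a box could be pushed to some goal by
// running the moves in reverse: start with a box on each goal and "pull"
// it. A pull from (r, c) in direction (dr, dc) is possible when both the
// target cell (r+dr, c+dc) and the player cell behind it, (r+2dr, c+2dc),
// are free of walls. Unmarked floor cells are static deadlocks.
std::vector<std::vector<bool>> markNonDeadlocks(
    const std::vector<std::string>& grid,
    const std::vector<std::pair<int, int>>& goals) {
    const int R = grid.size(), C = grid[0].size();
    std::vector<std::vector<bool>> marked(R, std::vector<bool>(C, false));
    std::queue<std::pair<int, int>> q;
    for (auto [r, c] : goals) { marked[r][c] = true; q.push({r, c}); }
    const int dr[] = {-1, 1, 0, 0}, dc[] = {0, 0, -1, 1};
    while (!q.empty()) {
        auto [r, c] = q.front(); q.pop();
        for (int k = 0; k < 4; ++k) {
            int br = r + dr[k], bc = c + dc[k];          // box target cell
            int pr = r + 2 * dr[k], pc = c + 2 * dc[k];  // player cell
            if (pr < 0 || pr >= R || pc < 0 || pc >= C) continue;
            if (grid[br][bc] == '#' || grid[pr][pc] == '#') continue;
            if (!marked[br][bc]) { marked[br][bc] = true; q.push({br, bc}); }
        }
    }
    return marked;
}
```

Corners and wall-hugging cells from which no goal can be reached by pushes stay unmarked, exactly the cells step 4 of the algorithm classifies as static deadlocks.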

\begin{algorithm}
  \caption{Detection of Freeze Deadlocks}
  \label{algo:freezeDeadlocks}
  \begin{algorithmic}
    \STATE IsFreezeDeadlock() $return$ boolean
    \FORALL{$box \in boxes$}
      \IF{IsBoxFrozenHorizontal($box$) AND IsBoxFrozenVertical($box$)}
        \IF{$box$ is on a goal}
          \RETURN false
        \ELSE
          \RETURN true
        \ENDIF
      \ENDIF
    \ENDFOR
    \RETURN false
  \end{algorithmic}
\end{algorithm}

Now we continue with a few words about the pruning of duplicate states. The
key idea is to assign a key, or short representation, to each node of the
search tree. As explained in Section~\ref{ssec:choice}, a state is fully
characterized by the positions of the boxes and the reachable area of the
player. Of these two features, the second is the more complex one to
represent compactly.
\vspace{2mm}
\\Consider first that, given a certain arrangement of boxes on a board, if
two reachable regions for the player intersect in at least one cell, then
both regions are actually the same. Therefore, we can store the reachable
region of the player by following a convention for which cell represents
the region. In particular, our solver stores, together with the positions
of the boxes, the identifier of the top-most, left-most cell reachable by
the player. This avoids both the cost of storing every reachable cell for
every visited node and the computational cost of comparing those sets each
time we visit a new node and check whether it has already been visited.
\vspace{2mm}
\\From an implementation point of view, the representation, or hash, of a
board is a 64-bit unsigned integer where the first bits (from left to
right) correspond to the positions of the boxes in ascending order and the
right-most bits represent the reachable region of the player.
\vspace{2mm}
\\In the very beginning, we used a set data structure from the STL to store
these hashes for the known states. Later on we profiled the code and found
that the look-up in the set was taking more than half of the total
execution time of the solver. We then decided to change the data structure
to an unordered set (a hash, or transposition, table), and the cost of
this operation decreased by more than half. From that moment on, our
solver started to find solutions for more boards in time.
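A sketch of how such a 64-bit key can be assembled is given below. The 16-bit field width and the function names are our illustration only; the real layout must be sized to the board dimensions and the number of boxes:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <unordered_set>
#include <vector>

// Pack a state into a 64-bit key: the box positions (as cell indices, in
// ascending order) followed by the normalized player position, i.e. the
// top-most, left-most cell reachable by the player. With 16 bits per
// field this toy layout supports up to three boxes.
uint64_t stateKey(std::vector<uint16_t> boxCells, uint16_t normalizedPlayer) {
    std::sort(boxCells.begin(), boxCells.end());  // order-independent key
    uint64_t key = 0;
    for (uint16_t b : boxCells) key = (key << 16) | b;
    return (key << 16) | normalizedPlayer;
}
```

Storing these keys in a \texttt{std::unordered\_set<uint64\_t>} gives expected constant-time membership tests, versus logarithmic for \texttt{std::set}, which matches the speed-up we observed.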

\clearpage
\section{Realization}

\subsection{Organization}
%
In addition to the task of finding a solution for Sokoban puzzles itself, we used and developed auxiliary tools:

\begin{itemize}
\item{Our \href{http://code.google.com/p/ai-sokoban-42/}{Project's webpage} with wiki, scheduler and source code hosted on Google Code}.
\item{We chose git~\cite{ref:git} as our version control software. Its
branching functionality allowed us to parallelize the work on different
parts of the project between members and to experiment easily.}
\item{Visual and colorful debug functions. They became incredibly handy for detecting and correcting bugs in an efficient way.}
\item{We also wrote shell scripts that automate the testing of our solver
on the different board suites provided. Thanks to them we have been able
to test the performance of the solver with different heuristics and choose
the best one.} \end{itemize}

\begin{figure}[H]
\includegraphics[width = 13cm]{print-debugs.jpg}
\caption{Examples of output from our print function for debugging purposes. In the centre, the cells detected as static deadlocks are shown in blue. Below, on the left-hand side, the reachable positions for the player are shown in green, and in red the cells that are both static deadlocks and reachable positions for the player. On the right-hand side, the cells where the boxes can be pushed.}
\label{fig:print-debugs}
\end{figure}

\subsection{A quick word about the project contributors}
To be honest, the distribution of tasks among the members has probably
been the worst aspect of our project. Here we give some information about
what each of us did during the project.

\begin{itemize}
\item Thibaud implemented two functions: one for detecting static
deadlocks, which was not used in the solver because by the time the
function was finished this functionality had already been added, and one
that computes distances on the board using Dijkstra's algorithm.
Unfortunately, the latter is not used either, since it was neither
properly tested nor integrated.
\item Clément and Fernando made the rest of the code development of the
project.
\item Audrey prepared an initial skeleton of the project report including
abstract, introduction and formulation of the problem.
\item Clément and Fernando wrote the rest of the report.
\end{itemize}

\subsection{Classes and general structure}

\subsubsection{CBoard}
The class CBoard holds the data and methods related to a board in a given
state: box and player positions. Since, for a given solve, every created
board is just a variation on a common layout (moving boxes and the player
around), we can store all static information as static class members,
initialized when we receive the initial board from the server: the board's
layout (walls, goals, static deadlocks) and its dimensions.\\
It is responsible for finding deadlocks (static and dynamic) and for
expanding itself.

\subsubsection{CMovePool}
This is a helper class for CBoard. When finding the possible player moves
for a board, its job is to store the reachable positions on the board and
the associated movement strings. A search algorithm can try to insert new
moves into the pool using the corresponding method, and the move pool
takes care of not storing duplicates (and lets the caller know whether the
insertion succeeded).

\subsubsection{CPlayer}
CPlayer communicates with the server: it retrieves the board, solves it
(hopefully), and sends the solution back to the server. It is here that we
implemented our various search experiments.


\clearpage
\section{Results}
\label{sec:results}
Starting from a very basic solver (iterative depth-first search, pruning
only duplicate states), we were able to gradually add and test the several
features (and combinations of features) described in this report.
\vspace{2mm}
\\Every static calculation (done once and for all when receiving the board
to solve) should use the best method available - since it is done only
once, we can afford it to take a little longer if it gives better results.
So taking the walls into account to compute the distance estimate makes
sense.
\vspace{2mm}
\\When it comes to dynamic operations, things are different. Taking the
case of goal-box assignment, using the Munkres method is expensive
(O({$n^3$}), where $n$ is the number of boxes) but pays off when goals are
scattered across a big board. If the goals are grouped and/or the board is
quite small, the outcome is not so evident, and in the worst case we may
actually fail to solve a board using the Munkres method! In the end, after
testing, we chose to use the simpler, faster method (that is, assigning
every box to its nearest goal).
\vspace{2mm}
\\There is also an interesting thing to observe about the starting cost
limit with IDA*. Usually, it is set to the value of the initial board. We
found out, though, that setting it to a much higher value let us find
solutions much more quickly, because the search goes ``straight'' at the
solution, without first exploring whole subtrees containing no solution.
Of course, the solution found will often not be the shortest one, but
since we only have a constraint on time, this is an acceptable trade-off.

\clearpage
\section{Improvements}
The ideas that we have explained so far are mainly the ones we succeeded
in implementing correctly in our solver. However, we have also tried some
others, and thought of extensions that would probably make our solver
perform much better.
\vspace{2mm}
\\Let us start with deadlocks. In Sokoban it is very important to detect
deadlocks as fast as possible, so that the search tree can be reduced to a
very great extent. The techniques we have implemented for deadlock
detection, see Section \ref{ssec:alltogether}, are quite simple and do not
detect a large number of different deadlock patterns.
\vspace{2mm}
\\A very nice strategy that could extend our deadlock detection is the
following. Each time a box is moved, we could select a region of the whole
board roughly centered on the cell where the last box has been placed,
create a new instance of Sokoban with this region and try to solve it.
Note that the new instance of the game should be built in such a way that
there are as many goals as boxes in the extracted region. These goals
should be placed so that the new problem looks easy to solve. If a
solution for it is found quickly, then the extracted region is not a
deadlock. On the contrary, if no solution is found, then we have detected
a state that is in a deadlock and can be pruned.
\vspace{2mm}
\\It is interesting to understand that the power of this strategy resides
in the fact that it cannot lead to false positives (extracted regions
detected as deadlocks when they really are not) because we are always
solving a simplification of the problem; in other words, taking into
account fewer boxes and walls than really exist makes the problem easier.
Therefore, if even the easier, extracted problem has no solution, the full
board that contains it cannot have one either.
\vspace{2mm}
\\To sum up, this deadlock detection strategy spends, to some extent, CPU
cycles solving small subproblems that are not directly the one we are
interested in, but it can also enable the solver to prune big regions of
the search tree. This trade-off can be dealt with by tuning some
parameters, such as the size of the extracted region or how often the
detection is run.
\vspace{2mm}
\\In order to clarify and give a better intuition of this method, below we
present an instance of a big Sokoban problem on the left-hand side and, on
the right-hand side, another one that could be created to detect the
deadlock formed by the pattern in the middle.

\begin{lstlisting}

###############          ##################
#             #          #                #
#  .       $  #          #             .  #
#     $$      #          #     $$      .  # 
#   $# #      #          #    $# #     .  #
#    # #      #          #     # #     .  #
#    $$#      #          #     $$#     .  #
#    . .      #          #                #
#     .       #          #                #
#    . .    * #          ##################
###############

\end{lstlisting}

\vspace{2mm}
From the very start of the project, we wanted to implement a reverse
solver that could work in parallel with the forward solver; our objective
was to culminate this with a bidirectional search. We implemented a
reverse solver in one of the branches of our project, the code is
\href{http://code.google.com/p/ai-sokoban-42/source/browse/?name=pull_solver}{here}.
\vspace{2mm}
\\Our main idea was to reuse most of the code that we already had from the
forward solver and the search algorithm. We managed to introduce a new
class hierarchy where a parent class CBoard is extended into CPullBoard
and CPushBoard, but we failed at the task of reusing the search algorithms
using templates in C++. Nonetheless, our reverse solver is functional for
simple boards (all the ones on port 7778 of the server and also a few
from 7777).
\vspace{2mm}
\\In any case, we are aware that including the reverse solver alongside
the forward solver would likely not have brought a great improvement in
the performance of our solver in terms of solved puzzles. This stems
mainly from the fact that the number of possible states generated by the
search is so big (exponential in the size of the problem) that visiting
twice the number of nodes would not make a big difference.
\vspace{2mm}
\\Finally, we would have liked to include threading in our solver - not to
parallelize the search in the way that a reverse and a forward solver
working together do, but in order to use several heuristics at the same
time. We have seen that, in general, a simple metric distance with the
naive box-to-goal assignment maximizes the total number of boards solved
(in particular, on port 7781 the solver finds around 90 boards using the
simple implementation and 78 using Munkres and/or more elaborate
distances). However, the boards that are solved using one heuristic or
another are not the same. Take for example board 131 on port 7781: it is
easily solved using either implementation of the distance as long as goal
booking is taken into account, i.e. the Munkres algorithm is used to
ensure an admissible heuristic. On the contrary, none of the heuristics
that simply minimize distances for all the boxes is able to find a
solution in time, due to the goal on the center column.
\clearpage
\begin{thebibliography}{9}

\bibitem{ref:wiki}	Wikipedia, Sokoban	\url{http://en.wikipedia.org/wiki/Sokoban}
\bibitem{ref:sokobano}	Sokobano's Wiki	\url{http://sokobano.de/wiki/index.php?title=Main_Page}
\bibitem{ref:rollingStone}	Rolling Stone Solver \url{http://webdocs.cs.ualberta.ca/~games/Sokoban/program.html}
\bibitem{ref:munkres}		TopCoder Algorithm Tutorials, Munkres \url{http://community.topcoder.com/tc?module=Static&d1=tutorials&d2=hungarianAlgorithm}
\bibitem{ref:git}	Git Community Book \url{http://book.git-scm.com/}
\bibitem{ref:munkres-code} Munkres implementation for C++ \url{https://github.com/saebyn/munkres-cpp}

\end{thebibliography}

\end{document}


