\documentclass[runningheads]{llncs}

\usepackage{amssymb}
\setcounter{tocdepth}{3}
\usepackage{graphicx}

\usepackage{url}
\urldef{\mailsa}\path|zepinto@gmail.com|
\newcommand{\keywords}[1]{\par\addvspace\baselineskip
\noindent\keywordname\enspace\ignorespaces#1}

\begin{document}

\mainmatter 

\title{A* Path Planning in the CiberMouse Simulator}

% Appears on top of right pages
\titlerunning{A* Path Planning in the CiberMouse Simulator}

\author{Jos\'e Pinto}
% Appears on top of left pages
\authorrunning{Jos\'e Pinto - MAP-i Consortium}
% (feature abused for this document to repeat the title also on left hand pages)

\institute{MAP-i Doctoral Consortium, Portugal\\
\mailsa\\
\url{http://map.edu.pt/i}}

\toctitle{A* Path Planning in the CiberMouse Simulator}
\tocauthor{Jos\'e Pinto}
\maketitle


\begin{abstract}
In this paper we use the CiberMouse robotics simulator to test an implementation of the A* path-planning algorithm. The map of the world is known a priori, obtained by parsing a file on disk. The map is discretized into a matrix of squares, and A* is then applied through a layered architecture of actuators and estimators.
 

\keywords{Robotics, Path Planning, Software Architecture, Sensor Fusion}
\end{abstract}

\section*{Introduction}

In the context of the \emph{Intelligent Robotics} course of the MAP-i Doctoral Program, our aim is to create a CiberMouse agent capable of determining and following the best path between two points. The CiberMouse competition is held every year in Aveiro, Portugal. Contestants must develop a software agent that controls a robotic entity in a simulated, inaccessible, physical world. A full description of the CiberMouse simulation environment is given in \cite{cibermouse}.\\
In our project, we use the CiberMouse simulator to verify our agent's ability to compute the best path between two points in a known map.\\
This paper is organized as follows: Section 1 describes the data structures used to store the map in memory. Section 2 gives an overview of the software architecture of the agent, along with a description of the most important controllers. Section 3 describes the implemented algorithm in detail. Section 4 presents experimental results, and Section 5 draws conclusions.

\section{World Representation}
CiberMouse simulates a 2D world with polygonal obstacles (walls). Maps are stored as XML files containing information such as \emph{map size}, \emph{obstacles}, \emph{target areas} and \emph{starting positions}. All these entities are parsed by our software agent and translated into its own world representation. Obstacles are stored as instances of the class \emph{java.awt.Shape}, which provides many useful methods such as 2D shape composition and area calculation. Target areas and starting positions are translated into \emph{java.awt.geom.Point2D} instances.\\
After a map is read into memory, it is translated into a matrix of squares, where each square holds a boolean value indicating whether an obstacle exists (true) or not (false) at that position. To compute this matrix, we start by merging all the obstacles into a single Shape (the union of all obstacles in the map) and then test, for each square, whether the area of its intersection with the merged obstacles is zero, in which case the square gets the value false. The result of this approach can be seen in figure~\ref{fig1}.
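As an illustrative sketch of the discretization step described above, the union and zero-area intersection tests can be expressed with \emph{java.awt.geom.Area}. The class and method names here are ours, not the project's actual API, and obstacles are simplified to rectangles:

```java
import java.awt.geom.Area;
import java.awt.geom.Rectangle2D;
import java.util.List;

// Hypothetical sketch: build the boolean obstacle matrix by merging
// all obstacles into one Area and intersecting it with each cell.
public class GridBuilder {

    public static boolean[][] discretize(List<Rectangle2D> obstacles,
                                         int width, int height,
                                         double cellSize) {
        // Union of all obstacles in the map.
        Area merged = new Area();
        for (Rectangle2D obs : obstacles) {
            merged.add(new Area(obs));
        }
        boolean[][] blocked = new boolean[height][width];
        for (int row = 0; row < height; row++) {
            for (int col = 0; col < width; col++) {
                // Intersect the cell square with the merged obstacles;
                // a non-empty (non-zero-area) intersection blocks the cell.
                Area cell = new Area(new Rectangle2D.Double(
                        col * cellSize, row * cellSize, cellSize, cellSize));
                cell.intersect(merged);
                blocked[row][col] = !cell.isEmpty();
            }
        }
        return blocked;
    }
}
```

Note that \emph{Area.isEmpty} tests for zero enclosed area, so a cell that merely touches an obstacle edge is not marked as blocked, matching the zero-area criterion above.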
\begin{figure}
\centering
\includegraphics[height=6cm]{world1.png}
\caption{The map \emph{Ciber2005Manga1Lab.xml} as represented by our matrix of squares (superimposed)}
\label{fig1}
\end{figure}
Currently, we are using a cell width of 1 unit because it matches the size of the robot. This greatly eases the process of calculating the optimal path since every cell with no obstacles becomes a possible state of the search function.

\section{Control Architecture}
The developed agent works by applying an active controller to the currently estimated state. This is achieved with an abstract class (\emph{AbstractController}), which is extended by all existing controllers. Every controller implements a method that returns the intended actuation given the last actuation and the currently estimated state. Switching the active controller is then just a matter of changing a single variable of the system. Some controllers also delegate actuation to subcontrollers, which are themselves \emph{AbstractController} instances stored as instance variables and invoked as needed.\\
An architecture overview can be seen in figure~\ref{figUML}. A brief description of the most important controllers follows.
\begin{figure}
\centering
\includegraphics[height=7cm]{uml.png}
\caption{UML class diagram of MazeWalker controllers}
\label{figUML}
\end{figure}
\begin{description}
\item [StopController] This controller operates in a single execution step. It applies the force needed to bring the robotic agent to a complete stop, using the currently estimated (effective) motor forces at time $t-1$ and applying forces symmetric to the last actuation.
\item [RotateController] This controller rotates the body to a target angle. It continually monitors the currently estimated rotation angle and drives the motors so as to turn the robot toward that direction, always rotating about the same point.
\item [WalkController] This controller is given a distance and applies equal power to both motors in order to walk a straight line of that length.
\item [GoToController] This controller rotates, walks and stops at a given point, using the subcontrollers RotateController, WalkController and StopController.
\item [SequenceController] This controller is given an array of controllers that are executed in sequence (each starting after the previous one completes) until the last controller has finished.
\item [AStarController] Given a maze data structure and two points, this controller computes the best path between those points and generates GoTo subcontrollers to walk the path.
\end{description}
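The delegation pattern described above can be sketched as follows. Only the class names follow the paper; the representation of actuations as a pair of motor powers and the method signatures are assumptions for illustration:

```java
// Illustrative sketch of the controller hierarchy; actuation is modeled
// as {leftPower, rightPower} and the state estimate as a generic vector
// (both are assumptions, not the project's actual types).
abstract class AbstractController {
    /** Computes the next actuation from the last actuation and the
     *  currently estimated state. */
    abstract double[] actuate(double[] lastActuation, double[] estimatedState);

    /** True once this controller has completed its task, so that a
     *  parent (e.g. SequenceController) can switch to the next one. */
    abstract boolean isFinished();
}

class StopController extends AbstractController {
    private boolean done = false;

    @Override
    double[] actuate(double[] lastActuation, double[] estimatedState) {
        done = true; // operates in a single execution step
        // Apply forces symmetric to the last actuation to cancel motion.
        return new double[] { -lastActuation[0], -lastActuation[1] };
    }

    @Override
    boolean isFinished() { return done; }
}
```

A composite controller such as GoToController would hold RotateController, WalkController and StopController as instance variables and forward each actuate call to whichever subcontroller is not yet finished.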

\section{A* Path Planning}
A* is a search algorithm that uses a heuristic so that the first path found to a given state is the optimal path to that state. This holds only when the heuristic underestimates the distance to the goal, that is, when the heuristic returns a value lower than or equal to the cost of the best possible path to that state.\\
A* is usually implemented with a sorted open list of candidate next states and a list of all already visited states. At each step, the algorithm picks the best candidate (the top of the open list) and inserts some of its neighbors into the open list (in order).\\
With the map discretization, our problem is reduced to finding an optimal path between points in a grid. Some points are unreachable because their containing square holds an obstacle (see Section 1).\\ The evaluation function is given by the formula:
\begin{equation}
	H(x) = D_o(x) + D_t(x) + w \cdot |R(x)|
\end{equation}
$D_o(x)$ is the walked distance between the start state (origin) and the current state, $D_t(x)$ is the Euclidean distance to the target, $w$ is a configurable weight, and $R(x)$ is the rotation needed between the parent state and the current state.\\
To generate the neighbor states of the current cell, we iterate over the 8 cells that can be reached in one step (by a horizontal, vertical or diagonal move) and add only the neighbors that contain no obstacles and whose leading paths contain no obstacles (in the case of diagonal moves).\\
We also account for the needed rotation since there usually exist many paths of the same length, and we want to choose the one that takes the least time to rotate between positions (the shortest path is not always the quickest path). In figure~\ref{figPath} the advantage of using this method can be easily observed.
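The grid search described above can be sketched compactly as follows. This is a hypothetical minimal version with names of our own: the open list is a priority queue ordered by walked distance plus the Euclidean heuristic, diagonal moves are rejected when their leading path touches an obstacle, and the rotation-penalty term $w \cdot |R(x)|$ is omitted for brevity:

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.PriorityQueue;

// Minimal grid A* sketch: 8-connected moves, Euclidean heuristic.
public class AStarGrid {
    static final int[][] MOVES = {
        {1,0},{-1,0},{0,1},{0,-1},{1,1},{1,-1},{-1,1},{-1,-1}};

    /** Returns the cost of the cheapest path, or -1 if unreachable. */
    public static double shortestPath(boolean[][] blocked,
                                      int sr, int sc, int gr, int gc) {
        int h = blocked.length, w = blocked[0].length;
        double[] g = new double[h * w];           // best walked distance
        Arrays.fill(g, Double.POSITIVE_INFINITY);
        // Open list sorted by f = g + Euclidean heuristic; entries are
        // {f, row, col}. The heuristic never overestimates, so the first
        // time the goal is popped its path is optimal.
        PriorityQueue<double[]> open =
            new PriorityQueue<>(Comparator.comparingDouble((double[] a) -> a[0]));
        g[sr * w + sc] = 0;
        open.add(new double[] { heuristic(sr, sc, gr, gc), sr, sc });
        while (!open.isEmpty()) {
            double[] node = open.poll();
            int r = (int) node[1], c = (int) node[2];
            if (r == gr && c == gc) return g[r * w + c];
            for (int[] m : MOVES) {
                int nr = r + m[0], nc = c + m[1];
                if (nr < 0 || nr >= h || nc < 0 || nc >= w || blocked[nr][nc])
                    continue;
                // Diagonal moves may not cut past a blocked adjacent cell.
                boolean diagonal = m[0] != 0 && m[1] != 0;
                if (diagonal && (blocked[r][nc] || blocked[nr][c]))
                    continue;
                double ng = g[r * w + c] + (diagonal ? Math.sqrt(2) : 1);
                if (ng < g[nr * w + nc]) {
                    g[nr * w + nc] = ng;
                    open.add(new double[] {
                        ng + heuristic(nr, nc, gr, gc), nr, nc });
                }
            }
        }
        return -1;
    }

    static double heuristic(int r, int c, int gr, int gc) {
        return Math.hypot(gr - r, gc - c);
    }
}
```

Stale queue entries are handled lazily: whenever a cell is expanded, its cost is read from the array of best walked distances, so re-insertions do no harm.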
\begin{figure}
\centering
\includegraphics[height=6cm]{astar1.png}
\caption{A* Fastest Path (A) and A* Shortest Path (B)}
\label{figPath}
\end{figure}
\section{Experimental Results}
Our agent was tested in several different mazes to validate both our algorithm and our navigation (state estimation). Since we developed a state estimator with exactly the same dynamics as the simulator, most of our tests were initially performed in an error-free, estimated world.\\
Without any actuation errors, we verified that all our controllers behaved exactly as expected. In the end, the robot always reached the precise location of the target in the map.\\
When using the simulator, by contrast, the robot was sometimes unable to reach the target due to collisions with obstacles. This happened because the generated paths often pass close to obstacles in the map (the robot being expected to travel tangent to them).\\
To solve the problem of hitting obstacles, the \textit{WalkController} was slightly changed to avoid obstacles while travelling in a nearly straight line. This is done by incrementing or decrementing the power of the motors whenever an obstacle is detected very near the robot body.\\
To improve precision, we opted to stop the robot at the center of each square belonging to the shortest path. This proved to be a wise decision, since the estimation errors remained very low, although it of course increases the time needed to reach the target.
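The motor-power nudge described above might be sketched as follows. The sensor semantics (higher reading means a closer obstacle), the threshold and the power constants are all assumptions for illustration, not the values used by our agent:

```java
// Hypothetical sketch of the WalkController obstacle-avoidance tweak:
// the two motor powers are nudged apart when a side obstacle sensor
// reports a near-range reading.
public class AvoidingWalk {
    static final double BASE  = 0.10; // nominal forward power (assumed)
    static final double NUDGE = 0.03; // steering correction (assumed)
    static final double NEAR  = 1.5;  // proximity threshold (assumed)

    /** Returns {leftPower, rightPower} given the left/right obstacle
     *  sensor readings, where a higher reading means a closer obstacle. */
    public static double[] powers(double leftSensor, double rightSensor) {
        double left = BASE, right = BASE;
        if (leftSensor > NEAR) {
            left += NUDGE;  // obstacle close on the left:
            right -= NUDGE; // speed up left wheel to steer right
        } else if (rightSensor > NEAR) {
            left -= NUDGE;  // obstacle close on the right:
            right += NUDGE; // speed up right wheel to steer left
        }
        return new double[] { left, right };
    }
}
```

When neither sensor crosses the threshold, both motors keep the same power and the robot walks straight, as in the original WalkController.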
\begin{figure}
\centering
\includegraphics[height=6cm]{astar2.png}
\caption{An example of a found optimal path and the robot following it}
\label{figExample}
\end{figure}
\section{Conclusions}

The use of an object-oriented language like Java facilitated our task because it allowed for code reuse, making it possible to create new controllers very easily by delegating actuation to existing subcontrollers.\\
The estimator component was also crucial to the success of this project because it made it possible to test the controllers individually in an error-free world simulation. The correctness of the estimator is also very important, since it is the foundation of the robot's navigation.\\
In the end, our A* implementation proved to be very robust, being capable of solving every maze that comes with the CiberTools package.

\begin{thebibliography}{4}

\bibitem{cibermouse} Lau, N.: CiberRato 2008: Rules and Technical Specifications. November 2007

\bibitem{knudsen} Knudsen, J.: Java 2D Graphics. O'Reilly, May 1999

\bibitem{amitpatel} Patel, A.: Amit's A* Pages, \url{http://theory.stanford.edu/~amitp/GameProgramming/}

\bibitem{costa} Costa, E., Sim\~oes, A.: Intelig\^encia Artificial: Fundamentos e Aplica\c{c}\~{o}es. FCA, February 2004

\end{thebibliography}

\end{document}
