Robots can navigate quite successfully in a known environment. But what does a robot need to do when the environment is unknown?
Autonomous exploration solves this problem: its goal is to build a map of the unknown environment.
The most common map representation is the Occupancy Grid Map, which is explained first in Section \ref{OCM}.
While exploring and building this map, the robot needs to know where to move next. This can be done by identifying frontiers. However, the robot should be as efficient as possible in doing so:
given what it knows of the environment so far, the robot needs to find the most promising frontier. Frontiers and frontier-based exploration are explained in Section \ref{FD}.
Finally, once the most promising frontier is found, the robot needs to navigate towards it. The navigation of the robot is explained in Section \ref{NAV}. Together, these three parts form the autonomous exploration.

\subsection{Occupancy Grid Map}
\label{OCM}
Robotic mapping has been a popular research field over the past years \cite{thrun2002robotic}. Many different methods have been proposed to acquire a spatial model of the environment. Occupancy grid maps are used for this research, because they work well with a laser range finder and the combination is a well-researched topic.

If an environment is static, structured and relatively small, the mapping problem is easily solved. Dynamic and unstructured environments, however, greatly increase the difficulty, and the difficulty also grows with the size of the environment.

To perform the mapping task, a robot needs sensors to perceive the environment. Many different sensors can be used for mapping, including cameras, lasers, radars, sonars and infrared sensors. As mentioned before, the Turtlebot used for the research in this report makes use of a planar laser range finder and odometry.

The main problem with occupancy grid maps (and mapping in general) is how to build a reliable and consistent map from noisy or even incomplete sensor data. Occupancy grid maps address this problem by defining the map with probabilities. The environment is split into cells along a grid, and each cell, denoted by its coordinates $(x,y)$, contains an occupancy value $m_{x,y}$. This occupancy value is continuous (though often stored as an integer), denoting the probability that the cell is occupied.

To decide whether cell $(x,y)$ is occupied, the occupation probability, or posterior, must be calculated. If this probability is higher than a predefined threshold, the algorithm decides that the cell is occupied. The posterior over each cell is calculated using a Bayes filter:
given the poses of the robot $x^{t}$ and the data $z^{t}$ collected up to time $t$, the posterior $p(m_{x,y} \mid z^{t}, x^{t})$ can be calculated.
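In practice, the posterior per cell is usually maintained in log-odds form, which turns the Bayes filter update into a simple addition. A standard formulation (a textbook sketch, not necessarily the exact model used in this work) is
\[
l_{t}(m_{x,y}) = l_{t-1}(m_{x,y}) + \log \frac{p(m_{x,y} \mid z_{t}, x_{t})}{1 - p(m_{x,y} \mid z_{t}, x_{t})} - l_{0},
\]
where $p(m_{x,y} \mid z_{t}, x_{t})$ is the inverse sensor model for the latest measurement and $l_{0}$ is the log-odds of the prior occupancy. The occupation probability is then recovered as $p = 1 - \frac{1}{1 + e^{l_{t}}}$ and compared against the threshold.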
An example of a small Occupancy Grid Map is given in Figure \ref{OCMfig}. The grey cells are empty and the black cells are occupied, for instance by a wall.

\begin{figure}
\centering
\includegraphics[width=200pt]{images/occGridMap}
\caption{Example of an Occupancy Grid Map}
\label{OCMfig}
\end{figure}


\subsection{Frontier-based Exploration}
\label{FD}

Creating an Occupancy Grid Map based on the data collected by sensors is the first part of autonomous exploration. The second part is deciding how the robot collects this data and where it should go. Most of the time, the robot has limited resources, such as time or energy; a robot exploring a disaster area to find survivors, for example, needs to do its work as fast as possible. A strategy is therefore needed to make sure the exploration is done efficiently. This strategy must define where the robot should move next, given its knowledge of the environment and its position.

The strategy used in this research is a frontier-based exploration algorithm. In short, the robot tries to find frontiers on the map and then decides which frontier is the best to move to.
This process is divided into three parts: frontier detection, frontier selection and the exploration strategy.

\subsubsection*{Frontier Detection}
Frontiers are borders on the map between discovered and unknown space. These frontiers mark the positions for the robot in the environment that are most interesting to explore. By moving to one of these frontiers, the robot will gain more knowledge about the environment, since it is then able to perceive a currently unknown part of the environment.

Detecting frontier cells on the occupancy grid is fairly straightforward, since the occupancy grid map already distinguishes occupied and empty cells. The frontier detection algorithm identifies unexplored cells that are adjacent to empty cells and marks them as frontier cells. Unexplored cells that only neighbor occupied cells are not considered frontier cells, because they cannot be reached from an occupied cell. The same unexplored cell can still be classified as a frontier, however, if a neighboring cell on another side is empty.
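This detection can be sketched as follows (a minimal illustration on a NumPy grid; the cell values follow the common ROS convention of $-1$ for unknown, $0$ for free and $100$ for occupied, which is an assumption rather than something specified in this report):

```python
import numpy as np

# Assumed cell values (ROS OccupancyGrid convention), not taken from this report
FREE, OCCUPIED, UNKNOWN = 0, 100, -1

def detect_frontiers(grid):
    """Return the set of unknown cells that border at least one free cell."""
    rows, cols = grid.shape
    frontiers = set()
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] != UNKNOWN:
                continue
            # A cell is a frontier if any 4-connected neighbor is free;
            # occupied or unknown neighbors do not qualify.
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr, nc] == FREE:
                    frontiers.add((r, c))
                    break
    return frontiers
```

Note that an unknown cell next to an occupied cell is still detected if one of its other neighbors is free, matching the rule described above.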


\subsubsection*{Frontier Selection}
Once the frontier nodes are detected, the exploration algorithm has to decide which frontier cell to navigate to. The algorithm has to find the most promising cell(s) for exploration. These cells are called vantage points.

A shading algorithm is used to find cells on the grid map from which the robot can scan as many frontier cells as possible.
First, the shading algorithm sends out a scan from every frontier cell. This scan checks which grid cells can be observed via a direct line of sight from the frontier cell.
These scans exploit the fact that visibility is reciprocal: since there are fewer frontier cells than free cells, scanning from the frontier cells consumes less computation power than scanning from the free cells.
This way, every empty cell on the grid receives a value signifying the number of frontier cells visible from that specific cell.
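The counting step can be sketched with a simple per-cell ray cast (a simplified illustration: the actual shading algorithm works per wall section and limits the scan to the laser range, which this sketch omits; the cell values are the assumed ROS convention):

```python
# Assumed cell values (ROS OccupancyGrid convention), not taken from this report
FREE, OCCUPIED, UNKNOWN = 0, 100, -1

def line_of_sight(grid, a, b):
    """Walk the Bresenham line from cell a to cell b; True if no occupied
    cell blocks the line (endpoints themselves are not checked)."""
    (r0, c0), (r1, c1) = a, b
    dr, dc = abs(r1 - r0), -abs(c1 - c0)
    sr = 1 if r0 < r1 else -1
    sc = 1 if c0 < c1 else -1
    err = dr + dc
    r, c = r0, c0
    while True:
        if (r, c) not in (a, b) and grid[r][c] == OCCUPIED:
            return False
        if (r, c) == (r1, c1):
            return True
        e2 = 2 * err
        if e2 >= dc:
            err += dc
            r += sr
        if e2 <= dr:
            err += dr
            c += sc

def frontier_visibility_counts(grid, frontiers):
    """Scan from every frontier cell and count, per free cell, how many
    frontier cells are visible from it (visibility is symmetric)."""
    counts = {}
    for r in range(len(grid)):
        for c in range(len(grid[0])):
            if grid[r][c] == FREE:
                counts[(r, c)] = sum(
                    1 for f in frontiers if line_of_sight(grid, f, (r, c)))
    return counts
```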

A graphic representation of the steps of the shading algorithm is shown in Figure \ref{shading}. The first picture shows a section of the occupancy grid, with black cells denoting walls and white cells denoting free cells. The second picture shows the different wall sections around the frontier cell, each painted in a different color. The third picture shows the edges of the shaded areas, and the fourth picture shows all cells visible from the center. Finally, the last picture displays the visible cells within the laser scanner's range. These free cells are all used to calculate the value for the vantage points.

After every free cell has been assigned this value, the frontier detection algorithm looks for local maxima, which are classified as vantage points.
Finally, a check is performed to filter out vantage points that are too close to a wall. These vantage points are excluded because the robot cannot reach them. The route-planning algorithm would exclude these points anyway, but it is preferable to exclude them here, because leaving it to the route-planning algorithm would cost more computation time.
After all relevant vantage points are found, they are published as an array for use by the exploration algorithm. The frontiers and the vantage points are also published as an occupancy grid map used by the visualization, and the vantage points are additionally published as a ``PoseArray'' (an array of poses) for better visibility.

\begin{figure}
\centering
\includegraphics[width=0.2\paperwidth]{images/grid_1.png}
\hspace{5pt}
\includegraphics[width=0.2\paperwidth]{images/grid_2.png}
\hspace{5pt}
\includegraphics[width=0.2\paperwidth]{images/grid_3.png}
\\
\vspace{10pt}
\includegraphics[width=0.2\paperwidth]{images/grid_4.png}
\hspace{5pt}
\includegraphics[width=0.2\paperwidth]{images/grid_5.png}

\caption{Shading algorithm to find vantage points}
\label{shading}
\end{figure}

\subsubsection*{Exploration Strategy}
After all relevant vantage points are found, the exploration algorithm can select the next destination.
First, the algorithm requests the travel distance to each vantage point from the navigation node.
Then a value is calculated for each vantage point to classify how good it is:
the value gained from the frontier selection is divided by the square root of the travel distance.
This gives the exploration algorithm a ranking of all the vantage points.
A final check then excludes vantage points within a minimum distance of the robot. This check ensures that the robot actually explores, instead of getting stuck between several close vantage points. If no other vantage points are left, the close vantage points are included again.
Finally, the vantage point with the highest ranking is passed on to the navigation node and the robot moves there.
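The selection step can be sketched as follows (the minimum-distance threshold is an assumed placeholder; the report does not specify its value):

```python
import math

MIN_DISTANCE = 0.5  # meters; assumed placeholder, not specified in the report

def select_vantage_point(points):
    """points: list of (vantage_point, visibility_value, travel_distance).
    Returns the vantage point with the highest value / sqrt(distance) score,
    preferring points beyond the minimum distance when any exist."""
    def score(p):
        _, value, dist = p
        return value / math.sqrt(dist)

    # Exclude points that are too close; fall back to them if none remain.
    far = [p for p in points if p[2] >= MIN_DISTANCE]
    candidates = far if far else points
    return max(candidates, key=score)[0]
```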


\subsection{Navigation}
\label{NAV}

Although ROS has its own navigation stack that is able to navigate a robot between two points, this stack fails in many cases: the robot often ends up stuck in a corner or against an edge or obstacle. Therefore, an alternative navigation algorithm is proposed.
This navigation algorithm is based on the topological maps explained by Thrun and B\"{u}cken \cite{thrun1996integrating}.

Initially, the navigation algorithm builds a Voronoi graph on top of the Occupancy Grid Map using the walls. Two types of navigational aids are then detected along the edges of the Voronoi graph: minimal clearances, such as doorways or corridors, and maximal clearances, which most often represent the centers of rooms. The maximal clearance points should be easily reachable, while the minimal clearance points usually separate larger areas or help the robot navigate through tight passages.

There are two cases in which the navigation algorithm is required to act. The first case is when a frontier cell is chosen by the exploration strategy.
In this case, the navigation algorithm creates a matrix that stores the Euclidean distance between navigational aids that are visible from each other. Visibility is checked first by projecting a direct line between them and, if this line is clear, by generating a mask that covers the cells the robot would touch if it drove straight from one aid to the other. The Bresenham line algorithm and the midpoint circle algorithm are used to generate this mask. The algorithm then adds the robot's position and the destination to the matrix, calculating their distances to the navigational aids in the same way. At this point, the matrix contains the Euclidean distances between all the navigational aids, the robot's position and the destination.
Then an A* search is performed to find the shortest path \cite{russell2010artificial}, using the Euclidean distance between a navigational aid and the destination as its heuristic. This variation of A* works better than an A* search on the grid itself, because it is not restricted to moves at multiples of 45 degrees. It should also outperform Theta* \cite{nash2007theta} and visibility-graph based algorithms, because the graph is much smaller.
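The search over the resulting distance matrix can be sketched as follows (a hypothetical illustration: the graph, coordinates and node names are invented, while the real implementation operates on the matrix of navigational aids described above):

```python
import heapq
import math

def a_star(adj, coords, start, goal):
    """A* over a sparse graph of navigational aids.
    adj: {node: {neighbor: edge_distance}}; coords: {node: (x, y)}.
    The heuristic is the straight-line distance to the goal."""
    def h(n):
        (x0, y0), (x1, y1) = coords[n], coords[goal]
        return math.hypot(x1 - x0, y1 - y0)

    open_heap = [(h(start), 0.0, start, [start])]  # (f, g, node, path)
    best = {start: 0.0}
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return g, path
        for nbr, d in adj.get(node, {}).items():
            ng = g + d
            if ng < best.get(nbr, float("inf")):
                best[nbr] = ng
                heapq.heappush(open_heap, (ng + h(nbr), ng, nbr, path + [nbr]))
    return float("inf"), []
```

The Euclidean heuristic is admissible on such a graph because edge weights are themselves Euclidean distances, so A* returns the shortest path.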

The second case in which the navigation algorithm is used is when the exploration algorithm requests the distances from the robot's position to the different vantage points.
Here, the algorithm calculates the distances between the navigational aids in the same way as above, and the robot's position and the requested vantage points are added similarly. Instead of A*, however, Dijkstra's algorithm is used to find the shortest paths, because A* cannot handle multiple destinations. The stopping condition of Dijkstra's algorithm is altered: instead of stopping when the open set of nodes is empty, the navigation algorithm stops when every destination node is in the closed set or the open set of nodes is empty.
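Dijkstra's algorithm with this altered stopping condition can be sketched as follows (the graph structure and node names are illustrative):

```python
import heapq

def dijkstra_multi(adj, source, destinations):
    """Single-source Dijkstra that stops early, once every destination is in
    the closed set or the open set is empty (the altered stopping condition).
    adj: {node: {neighbor: edge_distance}}."""
    dist = {source: 0.0}
    closed = set()
    remaining = set(destinations)
    heap = [(0.0, source)]
    while heap and remaining:
        d, node = heapq.heappop(heap)
        if node in closed:
            continue  # stale heap entry
        closed.add(node)
        remaining.discard(node)
        for nbr, w in adj.get(node, {}).items():
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return {t: dist.get(t, float("inf")) for t in destinations}
```

Unreachable destinations are reported with an infinite distance, which the exploration algorithm can then skip.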

Furthermore, obstacle detection is important while the robot is moving. If the robot encounters an obstacle in its direct path, the path must be replanned.
Obstacles are detected within a box-shaped area in front of the robot, whose length depends on the robot's current speed.
A 5 cm margin is always maintained between the robot and any obstacle. Replanning is done by running the navigation algorithm from the current position to the previously planned destination.
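The box check can be sketched as follows (in the robot frame, with $x$ forward and $y$ to the left; the robot width and look-ahead time are assumed illustrative values, only the 5 cm margin is taken from this report):

```python
def obstacle_in_path(scan_points, speed, robot_width=0.35,
                     margin=0.05, lookahead_time=1.0):
    """Check laser points (robot frame: x forward, y left, in meters)
    against a speed-dependent box in front of the robot.
    robot_width and lookahead_time are assumed values; only the 5 cm
    margin comes from the report."""
    box_length = speed * lookahead_time + margin  # longer box at higher speed
    half_width = robot_width / 2.0 + margin
    return any(0.0 < x <= box_length and abs(y) <= half_width
               for x, y in scan_points)
```

If this check fires, the navigation algorithm is rerun from the current position to the previously planned destination, as described above.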





