\section{Design}

	\subsection{Hardware}
		\subsubsection{Plans}

The first step in designing our robot was to evaluate the boundary conditions: what does it need to do? What materials are provided? How much mechanical complexity is viable? What constraints does the environment impose on us?

Through this evaluation, we formulated a series of conditions a design would need to meet:
\begin{itemize}
	\item 	Attachment location for a camera at about 15 to 20 centimeters from the ground.
	\item 	To move safely in a 30cm wide corridor, the maximum width should be less than 25cm and ideally no more than 20cm.
	\item 	During turns, the sensors will be unreliable, so a zero turn radius is preferable. No part of the robot should be further from the turn center than the sides.
	\item 	To accommodate the minimum range of some of our sensors, clear line of sight should be available in all directions for at least 5cm from the edge.
	\item 	The interior should be large enough to accommodate the PCBs, as well as at least one battery.
	\item 	To avoid ground echoes, it should be possible to mount the sonar at least 10cm from the ground, with clear line of sight in a wide cone ahead.
	\item 	Since no radial cutting tools were provided, and our base material was sheet metal, a perfectly circular design would be too complex.
 	\item 	For more reliable odometry, the structure should be rigid, and have a low center of gravity.
\end{itemize}

These requirements led to the design featured in figure \ref{fig:robot_design_1}. Ideally, the motors and wheels would be mounted internally, but this proved to be too complex: the castor wheel imposed a minimum ground-plate clearance, which was just high enough to mount the motors underneath it, provided the top of the motor housing could protrude into the main electronics area. This precipitated an unusual mounting, where prefab L-beams were used to guarantee exact $90^\circ$ angles and provide structural support around the ground plate extrusions.

\begin{figure}[htb]
\centering
\includegraphics[width=0.49\textwidth]{img/sensors_3D_1.pdf}
\includegraphics[width=0.49\textwidth]{img/sensors_3D_2.pdf}
\caption{Robot Design}
\label{fig:robot_design_1}
\end{figure}

A multitude of prefab L-beams were used to connect all sheet-metal elements; this ensured rigidity and facilitated alignment of parallel planes. Most of the electronics, battery, and sensor equipment was placed in the back half of the robot, in order to move the center of gravity between the castor wheel and the main driving wheels. Protruding wall plates ensured the robot would not tip over under strong acceleration or deceleration forces. An octagonal ground plate design was the best approximation to a circle that remained within feasibility limits.

		\subsubsection{Sensor Placement}
Special thought was given to optimal sensor placement. The entire top area of the robot was dedicated to this, providing a clear surface on which a multitude of sensors could be attached. Most sensors imposed clear limitations on where they could be placed optimally:

\begin{itemize}
	\item IR sensors: initially, these were planned to face the diagonals. Since that would imply angled reflection surfaces when driving parallel to walls, and we had been warned of the inaccuracies these introduce, we opted instead to face our IR sensors straight ahead, straight back, and directly to the left and right. IR sensors were always placed in pairs to allow for readout verification and alignment to the perpendicular surfaces we would expect.
	\item Cameras: initially, the cameras were designed to provide stereo vision. For greatest accuracy, they would have to be mounted as far to the sides as possible. To minimize the blind spot at the front, they would also have to be placed as far back as possible. We therefore planned to place the cameras above and behind the wheels, where the sides bend towards the back, see figure \ref{fig:robot_design_1}.
	\item Sonar: due to the wide beam, a sonar would be ideal to detect obstacles. Since our primary direction of movement was forward, the sonar would be placed facing forward, attached to the front of the upper platform where it would be high enough to avoid most ground reflections, and forward enough not to pick up any stray parts of the robot itself.
	\item IMU: ideally, an IMU is placed at the rotational center to most easily measure rotation forces. To reduce magnetic interference, it would also be placed well outside any metal cage and as far away from the electronics as possible. Thus, we placed it in the center of the robot, at the very top of the sensor platform.
\end{itemize}

		\subsubsection{Implementation}

In the end, the sonar and IMU were mounted but not used: their software implementation proved either too unreliable, too difficult, or simply too much work for too little gain. The cameras were initially placed in the proposed location, see figure \ref{fig:robot_hardware_implementation}, but due to performance constraints and time limitations, stereo vision was abandoned and they were repositioned to face the left and right sides of the robot, to increase the speed of tag detection.

\begin{figure}[htb]
\centering
\includegraphics[width=0.49\textwidth]{img/robot_front.jpg}
\includegraphics[width=0.49\textwidth]{img/robot_side.jpg}
\caption{Robot Hardware Implementation}
\label{fig:robot_hardware_implementation}
\end{figure}

Other than the camera placement and the addition of the sensor cage on top, no major deviations from the design were made. As expected, the robot proved to be rigid, and the sensors did not interfere with one another and appeared to yield reliable data.


	\subsection{Software}
		To facilitate software development, we utilized a framework called ROS\footnote{\url{http://www.ros.org/}}. ROS provides a large library of software and utilities for robotics, an infrastructure for inter-process communication, and various useful tools. For the robot in this course, only a select few software packages were allowed. The biggest advantage for us was that ROS makes separation into components easier by providing tools for communication between processes: sensor readouts can run in one process, independent of the processes that want to read the sensor data.

		The software for this robot is contained in one stack that consists of five packages: \emph{SLAM, controller, driver, sensor} and \emph{simulation}. Each of these packages contains one or more nodes. Each package focuses on a single task, and communicates with the other packages through so-called topics and services, which are essentially non-blocking and blocking network pipe interfaces, respectively. The focus of each package is:
\begin{itemize}
	\item SLAM: keep track of the robot's position, and build a map of the environment.
	\item controller: instruct the robot to act; move, turn, explore, etc. This also includes a manual controller, which enables keyboard control of the robot.
	\item driver: interface between the controller and the robot. Converts directional instructions into motor speeds.
	\item sensor: gather information about the environment and publish this information on sensor topics.
	\item simulation: not used in the actual driving of the robot, the simulation provides a virtual world for testing purposes; this includes simulated walls (and simulated sensor readings), and simulated movements (with simulated uncertainty).
\end{itemize}
We will now discuss these packages, and their nodes, in more detail.

		\subsubsection{SLAM}
			The SLAM (simultaneous localization and mapping) package is responsible for keeping track of the robot's location and creating a map of the environment, including walls and tags. The SLAM package contains two nodes, \emph{tracker} and \emph{slam\_controller}. The tracker receives robot move updates from the driver package and is responsible for maintaining the most accurate estimate of the robot's location. Other nodes interact with the tracker through a service call to get the robot's position. The tracker was planned to utilize a particle filter to correct the robot's position, and the filter was implemented with all its components: moving particles under motion uncertainty, determining the probability of each particle, and the subsequent resampling of particles. However, the particle filter did not work as well as planned, and was not used in the final software.

		\subsubsection{Mapping}
			The map is implemented as a 2D occupancy grid. An occupancy grid divides the world into discrete cells, where each cell holds the probability of that location containing an obstacle. We chose a cell size of 2 cm --- a trade-off between computational cost and precision: a smaller cell size would have given us a more precise map, at the cost of performance.
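
			As an illustrative sketch (not a transcript of our actual code), the mapping between world coordinates and grid cells could look as follows in Python; the function names and the centimeter units are our own for this example:

\begin{lstlisting}[frame=single]
CELL_SIZE_CM = 2  # 2 cm per cell, as chosen for our occupancy grid

def world_to_cell(x_cm, y_cm):
    # map a world coordinate (in cm) to a grid cell index
    return (int(x_cm // CELL_SIZE_CM), int(y_cm // CELL_SIZE_CM))

def cell_to_world(cx, cy):
    # return the world coordinate (in cm) of a cell's center
    return ((cx + 0.5) * CELL_SIZE_CM, (cy + 0.5) * CELL_SIZE_CM)
\end{lstlisting}

			All map and path planning operations below can then work purely in cell indices.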

			Updating the map was done by interpreting the readouts from the IR sensors. Each sensor has a defined position relative to the center of the robot, and by translating from the center we determined the start point and end point of each sensor measurement. This ray is entered into the map by updating the cells the ray overlaps, adjusting the probability of each cell containing an obstacle. A ray from point A to point B indicates that most of the space between A and B is empty, but that at the end point B there is an obstacle. To model this, we form a normal distribution around the end point of the ray, increasing the probability that the cells near it are occupied; an example of the discretization of this normal distribution is shown in figure \ref{fig:discretized_normal_distribution}. The cells between A and B are instead reduced in their probability of being occupied. By tracing a line from start point to end point, and by setting the mean of the normal distribution to the length of the ray, we could enter all of this into the occupancy grid in one pass, using the distance from the start point as the input to the normal distribution.
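
			The update rule described above can be sketched as follows (a simplified Python illustration, not our actual implementation; the grid representation, blending weights and standard deviation are invented for this example):

\begin{lstlisting}[frame=single]
import math

def gaussian(d, mu, sigma):
    # unnormalized normal distribution, 1.0 at the mean
    return math.exp(-((d - mu) ** 2) / (2.0 * sigma ** 2))

def update_ray(grid, cells_on_ray, ray_length_cm, sigma=2.0,
               hit_delta=0.3, miss_delta=0.1):
    # grid: dict mapping cell -> occupancy probability in [0, 1]
    # cells_on_ray: (cell, distance from sensor in cm) pairs, A to B
    for cell, dist in cells_on_ray:
        w = gaussian(dist, ray_length_cm, sigma)  # ~1 near endpoint B
        p = grid.get(cell, 0.5)  # 0.5 = unknown
        # endpoint cells move toward occupied, the rest toward free
        p = p + hit_delta * w - miss_delta * (1.0 - w)
        grid[cell] = min(1.0, max(0.0, p))
\end{lstlisting}

			A single pass over the traced cells thus raises the occupancy probability near the endpoint and lowers it along the rest of the ray.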

			\begin{figure}[!ht]
				\centering
				\includegraphics[width=0.8\textwidth]{img/normal_distribution.png}
				\caption{Discretized normal distribution}
				\label{fig:discretized_normal_distribution}
			\end{figure}

			The robot position itself also produces map updates. Each time a sensor update is performed, a ``hit'' with the shape of a normal distribution is entered around the center of the robot: since the robot occupies that area, it must be free of obstacles, so this hit adds a ``free space'' observation to the affected cells. Our robot's shape is close enough to a circle for this to be a reasonable approximation.

			Tags are saved in a layer above the occupancy grid. When a tag is detected, the current location of the robot is recorded, and the tag is saved on the left- or right-hand side of the robot.

		\subsubsection{Path planning}
			Path planning is used when the robot needs to revisit the same tags or find the start of the maze. There is a path planning service that responds with a list of waypoints to either a specific tag or the start of the maze. Upon receiving a request, it grabs the current location of the robot and finds a path to the target from this location. The result is a list of waypoints, where each waypoint is an absolute 2D coordinate.

			The path planner uses wall inflation and A* search to find a path to the target. Wall inflation is a procedure to identify the areas in which the robot cannot possibly travel --- due to its size, the robot cannot, for example, reach a location 5cm from a wall. We instead mark a non-travellable zone around each wall cell, and search for a path in this marked copy of the map. The wall inflation runs continuously, always keeping an up-to-date inflated copy of the occupancy grid: whenever the occupancy grid is updated, the path planner re-inflates the area affected by the update.

			The wall inflation is done by inspecting each cell containing a wall in the occupancy grid. For each of these wall cells, a square with a side length of 23 cm (the width of the wheel axle plus a 1 cm margin on each side) is added as inflated wall around the cell into the wall-inflated copy.
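
			In pseudocode terms, the inflation step might be sketched like this (Python, illustrative only; at 2 cm per cell, the 23 cm square corresponds to roughly $\pm 6$ cells):

\begin{lstlisting}[frame=single]
INFLATION_CELLS = 6  # ~23 cm square at 2 cm per cell

def inflate(walls, width, height):
    # walls: set of (x, y) wall cells in the occupancy grid
    # returns the set of cells the robot center may not occupy
    inflated = set()
    for (wx, wy) in walls:
        for dx in range(-INFLATION_CELLS, INFLATION_CELLS + 1):
            for dy in range(-INFLATION_CELLS, INFLATION_CELLS + 1):
                x, y = wx + dx, wy + dy
                if 0 <= x < width and 0 <= y < height:
                    inflated.add((x, y))
    return inflated
\end{lstlisting}

			Re-inflating only the updated region then amounts to re-running this loop over the changed wall cells.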

			When a request is issued to find a path to a target, the path planner initiates an A* search in the wall-inflated map. A standard A* naturally tries to find the shortest path to the given target, given a simple Manhattan distance cost function. An issue with this is that the path found is usually full of unwanted direction changes, making the robot zigzag towards its target as shown in figure \ref{fig:zick_zack_path}. A solution is to put a higher cost on changing the current direction, making the path finder prefer paths with fewer direction changes. This results in a much better path with large straight segments, see figure \ref{fig:straight_path}. The A* heuristic cost function is the Manhattan distance, depicted in figure \ref{lst:manhattan_distance}, and the penalty for changing direction is set to a constant of 5. A penalty of 5 corresponds to 5 cells, which with a cell size of 2 cm corresponds to a distance of 10 cm, meaning the path planner will avoid making more than one directional change every 10 cm.

			\begin{figure}[!ht]
				\centering
				\includegraphics[width=0.49\textwidth]{img/zick_zack_path.png}
				\caption{Path as found with unmodified A*}
				\label{fig:zick_zack_path}
			\end{figure}

			\begin{figure}[!ht]
				\centering
				\includegraphics[width=0.49\textwidth]{img/straight_path.png}
				\caption{Path as found with penalized directional changes}
				\label{fig:straight_path}
			\end{figure}

			\begin{figure}[!ht]
				\begin{lstlisting}[frame=single]
int cost(Cell from, Cell to) {
	return abs(from.x - to.x) + abs(from.y - to.y);
}
				\end{lstlisting}
				\caption{Manhattan distance cost function used in A*}
				\label{lst:manhattan_distance}
			\end{figure}
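
			To make the direction penalty concrete, the following Python sketch implements A* with the extra cost on direction changes (an illustration under our own simplified grid representation, not the project's actual code):

\begin{lstlisting}[frame=single]
import heapq, itertools

TURN_PENALTY = 5  # 5 cells = 10 cm at 2 cm per cell

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def find_path(free, start, goal):
    # free: set of travellable cells (outside the inflated walls)
    tie = itertools.count()  # breaks ties in the priority queue
    pq = [(manhattan(start, goal), 0, next(tie), start, None, [start])]
    best = {}
    while pq:
        _, g, _, cell, d, path = heapq.heappop(pq)
        if cell == goal:
            return path
        if best.get((cell, d), float("inf")) <= g:
            continue
        best[(cell, d)] = g
        for nd in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cell[0] + nd[0], cell[1] + nd[1])
            if nxt not in free:
                continue
            # step cost 1, plus the penalty when the direction changes
            ng = g + 1 + (TURN_PENALTY if d is not None and nd != d else 0)
            heapq.heappush(pq, (ng + manhattan(nxt, goal), ng,
                               next(tie), nxt, nd, path + [nxt]))
    return None
\end{lstlisting}

			On an open grid this returns a path with a single turn instead of a staircase of alternating moves.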

		\subsubsection{Sensor}
		\label{sec:design_sensor}
There are three main nodes in the sensor package: \emph{camera, sonar} and \emph{IR}. The sonar and IR sensor nodes send out distance measurement readouts; these are fairly simple messages, containing just a distance value. Additional information is stored in the map and controller packages to match sensor readouts with the correct sensor location, direction and uncertainty.

The camera node is much more complex. It sends out tag detection messages which, depending on the phase we are operating in, contain only tag direction and distance, or also color and identification. The camera node can operate in publisher mode during the exploration phase, where it constantly reads out and analyzes images and sends out tag detection messages without further information. It can also operate in service mode, designed for use during the navigation phase, where on request it performs a blocking snapshot operation: it takes a single high-resolution image in the direction specified, and thoroughly analyzes it, not only to find a tag's relative location, but also to determine its color and object. This, of course, requires that tag locations are correctly mapped during the exploration phase.

The procedure for tag detection and identification is illustrated in figures \ref{fig:image_analysis_1} through \ref{fig:image_analysis_4}. The first step is blurring the source image; this removes pixel noise which would otherwise propagate throughout the next steps.
\begin{figure}[htb]
\centering
\includegraphics[width=0.49\textwidth]{img/0_original.jpg}
\includegraphics[width=0.49\textwidth]{img/1_blurred.jpg}
\caption{Image Analysis Pathway, step 1. Left: original image; right: blurred.}
\label{fig:image_analysis_1}
\end{figure}

After blurring, the image is thresholded, so that only pixels that are ``red'' enough are kept. The result is then dilated to connect disjoint edges, which is useful at lower resolutions and for tags that are further away, to make sure each tag is seen as a single block, as illustrated in figure \ref{fig:image_analysis_2}.
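
As a hedged, library-free sketch of these two operations (the real pipeline used OpenCV primitives; the pixel format and threshold values here are invented for the example), thresholding and a single $3 \times 3$ dilation pass could look like this in Python:

\begin{lstlisting}[frame=single]
def red_threshold(img, min_red=120, margin=40):
    # img: 2D list of (r, g, b) pixels; thresholds are illustrative
    return [[(r > min_red and r - g > margin and r - b > margin)
             for (r, g, b) in row] for row in img]

def dilate(mask):
    # one 3x3 dilation pass: set a pixel if any 8-neighbor is set
    h, w = len(mask), len(mask[0])
    return [[any(mask[y + dy][x + dx]
                 for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                 if 0 <= y + dy < h and 0 <= x + dx < w)
             for x in range(w)] for y in range(h)]
\end{lstlisting}

Repeated dilation passes merge nearby fragments of a distant tag into one connected block.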

\begin{figure}[htb]
\centering
\includegraphics[width=0.49\textwidth]{img/2_threshold.jpg}
\includegraphics[width=0.49\textwidth]{img/3_dilated.jpg}
\caption{Image Analysis Pathway, step 2. Left: color-thresholded image; right: after dilating.}
\label{fig:image_analysis_2}
\end{figure}

The dilated image is then used to find edges using the OpenCV\footnote{\url{http://opencv.willowgarage.com/wiki/}} Canny edge detector. By imposing a minimum size limit and eliminating structures that are not roughly square, only tags remain. We found that this method yields zero false positives in the maze, and very few false negatives, especially at short range. This is illustrated in figure \ref{fig:image_analysis_3}, where despite plenty of red noise in the image, only the true tag remains.

\begin{figure}[htb]
\centering
\includegraphics[width=0.49\textwidth]{img/4_edges.jpg}
\includegraphics[width=0.49\textwidth]{img/5_squares.jpg}
\caption{Image Analysis Pathway, step 3. Left: all edges; right: after filtering.}
\label{fig:image_analysis_3}
\end{figure}

During the exploration phase, processing stops once square regions are found. During navigation, however, the tag must also be identified. To this end, the relevant part of the original source image is extracted, whereupon a SURF feature detector is used to find interesting features, which are then matched against lists of known features of stored tags, see figure \ref{fig:image_analysis_4}.

\begin{figure}[htb]
\centering
\includegraphics[width=0.2\textwidth]{img/6_tag.jpg}
\includegraphics[width=0.2\textwidth]{img/7_keypoints.jpg}
\caption{Image Analysis Pathway, step 4. Left: the extracted tag; right: detected features.}
\label{fig:image_analysis_4}
\end{figure}

The analysis does not stop there, however. Almost all known tags will match the provided image to some degree. For each tag, a distance measure is therefore calculated, based on the number of matching features as well as their average likeness to the stored features for that tag. The object closest to the image is selected, but only if the second-closest object is further away than a certain threshold. We found that a difference of at least 5\% was adequate for eliminating most misclassifications, while still retaining most correct matches. This threshold can, of course, be adjusted to make the detector more or less selective.
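
The selection rule can be sketched as follows (Python, illustrative; the tag names and distance values in the example are made up):

\begin{lstlisting}[frame=single]
def classify(distances, margin=0.05):
    # distances: dict mapping tag name -> distance (lower = closer)
    # accept the best match only if the runner-up is at least
    # `margin` (5%) further away; otherwise reject as ambiguous
    if not distances:
        return None
    ranked = sorted(distances, key=distances.get)
    if len(ranked) == 1:
        return ranked[0]
    best, second = ranked[0], ranked[1]
    if distances[second] >= distances[best] * (1.0 + margin):
        return best
    return None  # too close to call
\end{lstlisting}

Raising the margin makes the detector more selective at the cost of rejecting more correct matches.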

After the tag is classified, a block of 9 center pixels is evaluated to determine its color. If the red, green or blue component of these pixels is, on average, at least 30\% greater than the other two components, the tag is classified with that color. If all pixels have a low enough value, the object is classified as black. This method is simple and effective, though it does produce a few misclassifications for tags with much white space in the center. A more elaborate evaluation of the detector's performance is given in section \ref{sec:performance}.
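
As a sketch of this rule (Python; the exact black cutoff is an illustrative assumption, the 30\% ratio is the one described above, and the black check is done first here to keep the sketch simple):

\begin{lstlisting}[frame=single]
def classify_color(pixels, ratio=1.3, black_cutoff=40):
    # pixels: the 9 center (r, g, b) tuples of a detected tag
    if all(max(p) < black_cutoff for p in pixels):
        return "black"  # all channels uniformly low
    n = len(pixels)
    avg = [sum(p[c] for p in pixels) / n for c in range(3)]
    for name, c in (("red", 0), ("green", 1), ("blue", 2)):
        others = [avg[i] for i in range(3) if i != c]
        # one channel at least 30% above both others -> that color
        if all(avg[c] >= ratio * o for o in others):
            return name
    return "unknown"
\end{lstlisting}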


		\subsubsection{Controller}
There are two main nodes in the controller package: \emph{manual} and \emph{automatic}. The manual controller will, unsurprisingly, provide manual control of the robot through a keyboard, gamepad or joystick. Much more interesting is the automatic controller node, which enables high-level automatic control of the robot: determining goals, making decisions, and acting on them.

In order to do this, a layered semi-reactive subsumption control scheme was introduced, based on so-called ``behaviors''. Each behavior is responsible for a single task, and can act independently of all other behaviors. Some behaviors are fairly simple, such as obstacle avoidance, which only stops and turns the robot when it detects an obstacle ahead. Others are fairly complex, like the navigation behavior, which requests a list of checkpoints from the SLAM package, drives past each checkpoint in turn, and takes a snapshot when it reaches the final checkpoint in a series.

Each cycle, which lasts for 0.5 seconds, the behaviors are updated, selected and used as illustrated in figure \ref{fig:behavior_cycle}.

\begin{figure}[htb]
\centering
\includegraphics[width=0.5\textwidth]{img/behavior_cycle.jpg}
\caption{Behavior Update Cycle}
\label{fig:behavior_cycle}
\end{figure}

When a behavior is constructed, it registers sensor callbacks. During the ``update sensor readings'' phase, these callbacks may be called: if a behavior is listening on an active sensor topic, it receives new readouts. During the next phase, each behavior sets its priority. Priorities range from 0.0 to 1.0 and should reflect how important it is for a behavior to act. These priorities are naturally dynamic, and usually depend on the sensor input received during the previous phase. For example, the obstacle avoidance behavior has a very low priority (say, 0.1) when it perceives no obstacles in front of the robot, but can take exclusive priority (1.0) when an object is very near. More complex behaviors, such as exploration and navigation, have a fixed priority which allows them to act when obstacle avoidance and wall-following are inactive, while being superseded when those notice the need to act. After testing more complex schemes, we found that letting only the behavior with the highest priority act was the most effective solution.
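
The priority mechanism reduces to a very small core; a minimal Python sketch (the behavior names and the exact shape of the priority function are illustrative):

\begin{lstlisting}[frame=single]
def avoidance_priority(front_distance_cm):
    # exclusive priority when a wall is within 20 cm, near-idle otherwise
    return 1.0 if front_distance_cm < 20.0 else 0.1

def select_behavior(priorities):
    # only the single behavior with the highest priority acts this cycle
    return max(priorities, key=priorities.get)
\end{lstlisting}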

Because there is a single sensor readout moment, all behaviors that rely on sensor data must store that data between receiving the sensor callback and the time they can act. Unless behaviors set an exclusive priority level, they must also allow for the fact that they may not get to act on the robot. A behavior that does get to act obtains full control over the robot for the duration of the cycle, typically 0.5 seconds, although it may also be selected to act again during the next cycle.

Some movements take longer to complete than the allotted 0.5 seconds, however; to facilitate this, a behavior may also execute so-called ``non-interruptible'' motor commands. These commands are executed by the driver until they are completed, and all motor commands sent during their execution are ignored. This enables us to make clean $90^\circ$ turns, even though such a turn takes longer than 0.5 seconds.

We designed a total of four behaviors:

\begin{enumerate}
	\item Obstacle Avoidance. Usually at priority = 0.1, but when the frontal IR sensors detect a wall closer than 20cm ahead, it gains exclusive priority. Upon acting, it turns the robot $90^\circ$ clockwise.
	\item Wall-following. Usually at priority = 0.1, but when the side-facing IR sensors detect a misalignment to a wall, or a too-short distance to a wall on the side, the priority is raised to between 0.4 and 0.8, depending on the misalignment or distance. When the robot is misaligned but far enough away from a wall, this behavior tries to align the robot by turning it slightly to the left or right, on the spot. When the robot is too close to a wall, it steers the robot away from the wall while also driving forward for a few centimeters.
	\item Exploration. Only active during the exploration phase, at priority 0.3. This behavior moves the robot forward; when it detects no wall on the left side, it moves forward for 5cm, turns $90^\circ$ counter-clockwise, and moves forward another 20cm. This enables the exploration behavior to follow a wall on the left side (aided by the wall-following behavior) and thus explore most of a maze. Originally, this behavior also roughly tracked its position and how often it had visited adjacent cells (cells being 30cm by 30cm here). This explored a maze very thoroughly, but proved too complex and too slow to finish in time for the competition.
	\item Navigation. This behavior is only active during the navigation phase, also at priority 0.3. It requests a list of detected tags, then for each tag requests a list of waypoints, moves to each waypoint in turn until there are no more waypoints, then takes a snapshot and requests a list of waypoints for the next tag, as illustrated in figure \ref{fig:navigation_cycle}. When there are no more tags, it requests a series of waypoints to guide the robot back to its starting position. For this behavior to work, a valid map must be available.
\end{enumerate}
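
The navigation loop itself can be summarized in a few lines (Python sketch; the callables stand in for the SLAM path service, the driver, and the camera snapshot service):

\begin{lstlisting}[frame=single]
def navigate(tags, plan_path, move_to, snapshot, start="start"):
    # visit every recorded tag in turn, photographing each,
    # then return to the starting position
    for tag in tags:
        for waypoint in plan_path(tag):
            move_to(waypoint)
        snapshot(tag)
    for waypoint in plan_path(start):
        move_to(waypoint)
\end{lstlisting}
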
\begin{figure}[htb]
\centering
\includegraphics[width=0.7\textwidth]{img/navigation_cycle.jpg}
\caption{Navigation Act Cycle}
\label{fig:navigation_cycle}
\end{figure}

The automatic controller listens on a special service to determine which behaviors should be active; this allows real-time switching between exploration and navigation by simply deactivating and activating the appropriate behaviors. It also makes it very simple to add additional behaviors or modes of control.

Furthermore, this modular behavior design allowed us to test behaviors individually in great detail, before adding more complex behaviors. The only tuning necessary to make multiple behaviors interact properly was the selection of appropriate priority values, which turned out to be relatively easy for the four behaviors tested.

		\subsubsection{Simulation}
			The simulation package has two nodes, \emph{fakemotors} and \emph{world}. This package was created after we realized that iterative testing on the real robot is very time consuming. It is able to simulate most parts of the robot, utilizing the original \emph{FakeMotors} from the \emph{robo} package that was provided to us in the course. The simulation was intended to be invisible to all components except those that really had to interact with it; only two nodes need to know about the simulation: the motor node and the IR node. FakeMotors then provides encoder tick updates exactly like the actual Serializer does.

			The world node reads in a map from a bitmap file stored on disk, in which free space is marked white and walls are marked black. Since the IR sensors need to output sensible values in simulation, they query the world node for a simulated measurement. The world node takes the robot's location into account and performs a simple ray-trace hit test to find the distance to a wall, if one is inside the range of the sensor.
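
			The hit test itself can be sketched as a simple ray march (Python, illustrative only; the cell size, step and range are invented for the example):

\begin{lstlisting}[frame=single]
def simulate_ir(walls, x, y, dx, dy, max_range_cm=80.0, step_cm=0.5):
    # walls: set of (int x, int y) bitmap cells that contain a wall
    # (dx, dy): unit direction of the sensor; coordinates in cm,
    # with 1 cm per bitmap cell for simplicity
    dist = 0.0
    while dist <= max_range_cm:
        px, py = x + dx * dist, y + dy * dist
        if (int(px), int(py)) in walls:
            return dist  # distance to the wall hit
        dist += step_cm
    return None  # nothing within sensor range
\end{lstlisting}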

			One thing this simulation does not provide is camera support: it cannot fake the detection of tags, so that part of the software had to be tested on the real robot.

		\subsubsection{Driver}
			The driver package contains only one node, called \emph{motor}, which is responsible for setting the wheel speeds. It listens to a topic that handles messages of the type (distance, direction): a request for the robot to move a certain distance forward and to turn a certain amount, where the turn is always executed before driving straight, and the direction is relative to the robot's current angle. The motor node is also responsible for emitting Movement messages that contain how much each wheel has turned since the last message, in centimeters.

			Odometry calculations are also done in the motor node, since it needs to know how much the robot has turned, and how far it has travelled, to be able to execute the directions it receives. Ten times per second, the motor controller receives updates on how many encoder ticks have been registered on the left and right motor. These ticks are translated into a distance using the number of ticks per wheel rotation and the circumference of the wheels. Given the distance each wheel has travelled, we then calculate the new position of the robot using the pseudocode in figure \ref{lst:odometry_calculation}. This odometry update approximates the robot's motion as an arc: the left and right wheels are assumed to have travelled along concentric arcs, making the robot center trace part of the edge of a circle. For small enough movements, this approximation is reasonable.

			\begin{figure}[!ht]
				\begin{lstlisting}[frame=single]
# distance travelled by the robot center
dist_center = (dist_left + dist_right) / 2.0f

# heading change and position change, approximating an arc
delta_theta = (dist_right - dist_left) / wheel_base_
delta_x = dist_center * cos(current_angle_ + delta_theta)
delta_y = dist_center * sin(current_angle_ + delta_theta)

current_x_ += delta_x
current_y_ += delta_y
current_angle_ += delta_theta
				\end{lstlisting}
				\caption{Calculating the odometry update}
				\label{lst:odometry_calculation}
			\end{figure}
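
			The tick-to-distance conversion mentioned above is, in essence (Python sketch; the tick count and wheel diameter are placeholders, not our robot's actual values):

\begin{lstlisting}[frame=single]
import math

TICKS_PER_REVOLUTION = 1000  # placeholder encoder resolution
WHEEL_DIAMETER_CM = 10.0     # placeholder wheel size

def ticks_to_distance_cm(ticks):
    # one revolution moves the wheel by its circumference
    return ticks * (math.pi * WHEEL_DIAMETER_CM) / TICKS_PER_REVOLUTION
\end{lstlisting}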

