\documentclass{article}

\usepackage{graphicx}

\usepackage{float}
\newfloat{listing}{thp}{lop}
\floatname{listing}{Listing}

%-------------------------------------------------------------------------------
\title{CMPUT 652 Assignment \#3 \\ Part 2: Domain Optimization}
\author{Sterling Orsten \texttt{sorsten@cs.ualberta.ca} \\
	 Jacqueline Smith \texttt{jacqueli@cs.ualberta.ca}}
\begin{document}
\maketitle
%-------------------------------------------------------------------------------

\section{Introduction}

Our goal was to produce a pathfinding system that achieved exceptional speed while suffering from only mild suboptimality. To do this, we chose techniques centered on abstraction. We manually divided the unobstructed map locations into an exhaustive partition whose parts we call ``regions''. We define a region as a contiguous set of map locations that is as large as it can be while still being mostly convex: ideally, one should be able to trace a straight line between most pairs of points within a region without encountering any obstructions.

The crux of our algorithm is that we can construct an abstract search graph by placing vertices on the borders between regions and connecting the vertices bordering each region into a clique. We can then perform a high-level search on this abstraction, producing an abstract path whose edges each lie entirely within a single region. This allows us to construct the full path by stitching together a sequence of quick straight-line segments, produced by the Bresenham line-drawing algorithm, wherever possible, and falling back on a short, lightweight A* search wherever not.
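The straight-line tracing step above can be sketched in C++ as follows. This is an illustrative fragment, not our exact implementation: \texttt{traceLine}, \texttt{Cell}, and the \texttt{blocked} callback are hypothetical names, and the fallback A* search is left to the caller.

```cpp
#include <cassert>
#include <cstdlib>
#include <vector>

struct Cell { int x, y; };

// Walk the Bresenham line from (x0,y0) to (x1,y1). On success, return the
// traced cells; if any cell is blocked, return an empty vector, signalling
// that the caller should fall back to a short low-level A* search instead.
std::vector<Cell> traceLine(int x0, int y0, int x1, int y1,
                            bool (*blocked)(int, int)) {
    std::vector<Cell> path;
    int dx = std::abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
    int dy = -std::abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
    int err = dx + dy;                   // standard integer Bresenham error term
    for (;;) {
        if (blocked(x0, y0)) return {};  // obstruction: caller falls back to A*
        path.push_back({x0, y0});
        if (x0 == x1 && y0 == y1) return path;
        int e2 = 2 * err;
        if (e2 >= dy) { err += dy; x0 += sx; }
        if (e2 <= dx) { err += dx; y0 += sy; }
    }
}

// Toy map with no obstructions, for demonstration only.
bool openMap(int, int) { return false; }
```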

\section{Abstractions}

In order to construct our abstractions, we developed an OpenGL application which displayed a representation of the map. This application allowed us to draw borders onto the map, which were then automatically used to define regions. The data format saved by this application is fairly lightweight, storing only the region index of each map location, the indices of the borders surrounding each region, and the geometry of those borders. As an example, the program displays one particular partition into regions as follows.

\begin{center}
\includegraphics[width=3in, bb=0 0 550 559]{screen.jpg}
\end{center}

We experimented with two different abstractions produced using this application. The first, ``graph1'', divided the map into $28$ regions by placing borders at tiny ``choke points'', while leaving larger open areas as single regions, even though many were not convex. This required $37$ borders, each of which could be represented by a single vertex in the abstract search graph. This abstraction was motivated by the observation that most of our suboptimality comes from the requirement that we cross borders at specific points, and thus that large borders tend to force a greater deviation than small borders.

The second abstraction, ``graph2'', divided the map into $97$ regions, by placing borders wherever necessary to ensure that regions were mostly convex. This resulted in $107$ borders, some of which were quite large. In order to reduce the chances of very high suboptimality, those large borders were represented by multiple vertices in the abstract search graph, for a total of $176$ vertices. This abstraction was motivated by pure speed. By having only mostly convex regions, almost all paths could be formed as a sequence of straight line segments. Almost no low-level A* searching was required when using this abstraction. Unless otherwise specified, all future discussion refers to ``graph2''. 

\section{Pathfinding}

The $512 \times 512$ map is loaded into memory, with each position on the map stored as a single byte holding either the number of the region containing that position, or a special value indicating that the position is not passable. For each search, the regions of the start and goal are first compared. If the two are in the same region, a straight-line path is attempted using the Bresenham line-drawing algorithm; if that line is obstructed, an A* search is performed instead. As we have chosen regions that are ``mostly convex'', this A* search happens rarely, and when it does, very few extra nodes are expanded. If the start and goal are in different regions, a search is performed in the abstraction. This is a simple A* on a weighted graph with an octile distance heuristic. The actual start and goal locations are temporarily inserted into the abstract graph and connected to the border vertices of their respective regions, allowing us to quickly find an abstract path.
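The top-level dispatch can be sketched as follows. The identifiers (\texttt{classify}, \texttt{IMPASSABLE}, \texttt{Plan}) are illustrative, not our actual names; the point is that localising a query onto the abstraction is a single byte lookup per endpoint.

```cpp
#include <cassert>
#include <cstdint>

// Each map cell stores either its region index or IMPASSABLE in one byte.
const uint8_t IMPASSABLE = 0xFF;

enum class Plan { StraightOrLocalAStar, AbstractSearch, NoPath };

// Decide how to handle a query by comparing the regions of start and goal.
Plan classify(const uint8_t* regionMap, int width,
              int x0, int y0, int x1, int y1) {
    uint8_t r0 = regionMap[y0 * width + x0];
    uint8_t r1 = regionMap[y1 * width + x1];
    if (r0 == IMPASSABLE || r1 == IMPASSABLE) return Plan::NoPath;
    // Same region: try Bresenham first, fall back to a short A* if blocked.
    if (r0 == r1) return Plan::StraightOrLocalAStar;
    // Different regions: search the abstract graph with an octile heuristic.
    return Plan::AbstractSearch;
}
```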

To construct the real path from the abstract path, we generate a sequence of points for each pair of adjacent vertices in the abstract path. If a straight line can be traced between the two points, it is, and the traced locations are added to the final path. If not, a quick A* search is performed and its result is placed into the final path. For the permanent edges of the abstract search graph, we precompute whether or not a straight-line path suffices; if it does not, we further cache the actual path needed to follow that edge. In either case, we store the length of the optimal path along the abstract edge as the weight of that edge, allowing for accurate abstract search.
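The per-edge data this implies can be sketched as a small struct, together with the octile distance used as the search heuristic. The names (\texttt{AbstractEdge}, \texttt{octile}) are illustrative, not our actual identifiers.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Point { int x, y; };

// For each permanent abstract edge we precompute whether a straight line
// works; if not, the detour path is cached once. Either way, the weight is
// the true length of the path along the edge, keeping abstract search accurate.
struct AbstractEdge {
    Point a, b;
    bool straight;              // true if Bresenham succeeds between a and b
    std::vector<Point> detour;  // cached cells when straight == false
    float weight;               // true path length, used by abstract A*
};

// Octile distance: diagonal steps cost sqrt(2), straight steps cost 1.
float octile(Point p, Point q) {
    int dx = std::abs(p.x - q.x), dy = std::abs(p.y - q.y);
    int lo = dx < dy ? dx : dy, hi = dx < dy ? dy : dx;
    return (hi - lo) + lo * 1.41421356f;
}
```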

In order to improve the optimality of the paths, an additional fallback was added. Any abstract path with a small number of vertices, whose start and goal are separated by only a small octile distance, is thrown out and replaced with a direct A* search from start to goal. Such paths tended to be pathological cases in our original algorithm, as generated paths would deviate from short optimal paths in order to pass through arbitrarily placed border vertices.

\section{Memory Usage}

Our final choice of abstraction uses memory in four main ways. First, a single byte must be kept for every location on the original map. This results in $512 \times 512 \times 1B = 262KB$ of memory. This is a fairly high price, but the ability to instantly localise onto the abstract graph is part of what gives us our speed.

The second major use of memory is in our edges. We store directed edges separately and redundantly, eliminating small amounts of special case code and branching. This means that, from the $474$ edges in our abstract search graph, we actually need to store $948$ edges at $21B$ per edge. This includes the cost of coordinate vectors stored by each edge. These vectors are used to cache paths for edges that cannot be followed with a straight line. For our abstraction we needed to store exactly $3734$ pairs of coordinates, at $8B$ apiece, to cache every necessary path. All told, this means that we are using $50KB$ of memory to store the edges in our abstract graph.

Next, in order to quickly locate the edge corresponding to a particular pair of vertices, we use a large square array of pointers. As we have $176$ vertices in our abstraction, we need $176 \times 176 \times 4B = 124KB$ of memory. This is a somewhat pricey optimisation, but it significantly simplifies the process of converting abstract paths into full paths.
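This lookup table can be sketched as follows. The names (\texttt{Edge}, \texttt{edgeTable}, \texttt{findEdge}) are illustrative; the $4B$ pointer size from the text assumes a 32-bit build, whereas a 64-bit build would double the table's footprint.

```cpp
#include <cassert>
#include <cstddef>

struct Edge { float weight; };

const int V = 176;       // vertices in the abstraction
Edge* edgeTable[V][V];   // zero-initialised flat table: one indexed load
                         // answers "what edge connects u and v?"

void registerEdge(int u, int v, Edge* e) {
    edgeTable[u][v] = e; // directed edges are stored separately, so the
}                        // reverse edge (v, u) is registered on its own

Edge* findEdge(int u, int v) { return edgeTable[u][v]; }
```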

Last, and least, we need to store the abstract graph vertices themselves. These are very lightweight, containing only their coordinates and a vector of the directed edges from the vertex to its neighbours. As we have already accounted for the cost of the edges themselves, we need only consider the bookkeeping for those vectors. As such, for our $176$ vertices, we use $176 \times 20B = 3.5KB$. There are also similar structures storing the list of borders in a region and the list of vertices in a border, but these operate on one-byte indices and as such use even less memory. Combined with the vertices above, these lesser structures probably consume no more than about $6KB$ of memory. Altogether, the persistent data structures used by our algorithm to represent the map consume around $440KB$ of memory. There are a number of ways this could be trimmed further if necessary, but we are well within our $1MB$ limit regardless.

\section{Results}

All results were obtained by compiling our code with the command ``g++ -O3 hw3.cpp -o hw3'' on viking and selecting the ``test suite''. This loads the abstraction into memory and then performs the 10,000 example searches specified on the course website. It measures the total running time, total resource usage, average suboptimality, minimum and maximum suboptimality, and the total suboptimality, defined as the sum of the lengths of all paths found over the sum of the lengths of the optimal paths. We also have optional path checking to ensure that legal paths are produced. Path checking was turned off when measuring the speed of the algorithm, but we have run it for each technique listed below and verified that all of them return valid paths. For the purposes of our comparisons, we list total resource usage and total suboptimality.

First, let us justify our choice of the second abstraction, ``graph2'', over its simpler cousin, ``graph1''. The results below suggest that a more complex abstraction, one that reduces the need for direct searches, yields an algorithm over four times faster at the cost of roughly twice the total suboptimality, which remains low regardless.

\begin{center}
\begin{tabular}{|l|l|l|l|l|}
\hline
Abstraction & Map Regions & Graph Vertices & Total Resource Usage & Total Subopt\\
\hline
graph1.dat & 28 & 37 & 23.76 s & 2.6\%\\
graph2.dat & 97 & 176 & 5.26 s & 5.1\%\\
\hline
\end{tabular}
\end{center}

Next, let us justify our decision to cache paths corresponding to graph edges within the abstract search graph, so as to allow speedy construction of the full path. It is worth noting that the two methods below do in fact produce the exact same paths, which provides a strong argument for spending the $30 KB$ required to cache the paths.

\begin{center}
\begin{tabular}{|l|l|l|l|}
\hline
Caching & Extra Memory Usage & Total Resource Usage & Total Subopt\\
\hline
Disabled & 0 KB & 21.50 s & 5.1\%\\
Enabled & 30 KB & 5.26 s & 5.1\%\\
\hline
\end{tabular}
\end{center}

Next, let us justify our choice of vertex generation strategy. The ``solo'' strategy produces one graph vertex per border, at the border's midpoint. The ``multi'' strategy produces two or even three vertices for borders that exceed a particular length, placing those vertices so that they evenly divide the border. This strategy is designed to offer several choices for how paths cross large open regions. The ``wall'' strategy is identical to ``multi'', except that when three vertices are generated, one remains at the midpoint and the other two are placed just a few locations shy of the walls. This strategy is designed to let paths stay tight and cut corners even when passing through wide borders.
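The three strategies can be sketched as a single placement function. This is an assumed formulation, not our actual code: \texttt{placeVertices} and the length threshold of $12$ cells are illustrative, and ``multi'' is shown generating three vertices, though the text notes it may generate two.

```cpp
#include <cassert>
#include <string>
#include <vector>

// A border is treated as a 1-D segment of grid cells; each strategy returns
// the parametric positions (0..1 along the border) of the graph vertices.
std::vector<double> placeVertices(int borderLen, const std::string& strategy) {
    const int longBorder = 12;          // assumed "large border" threshold
    if (strategy == "solo" || borderLen <= longBorder)
        return {0.5};                   // single vertex at the midpoint
    if (strategy == "multi")            // vertices evenly divide the border
        return {0.25, 0.5, 0.75};
    // "wall": keep the midpoint, and push the outer two vertices to just a
    // few cells shy of the walls so paths can stay tight and cut corners.
    double inset = 2.0 / borderLen;
    return {inset, 0.5, 1.0 - inset};
}
```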

As the choice of technique alters the structure of the resulting search graph, we've included a comparison of the size of the resulting graph as well as the memory required to cache paths. While the more sophisticated vertex generation schemes produce larger graphs and require more memory for caching, they significantly reduce suboptimality. As such, we were willing to incur the minor speed penalty in order to keep our suboptimality within a reasonable range.

\begin{center}
\begin{tabular}{|l|l|l|l|l|l|}
\hline
Strategy & Vertices & Edges & Path Cache & Total Resource Usage & Total Subopt\\
\hline
solo & 107 & 162 & 8.6 KB & 3.92 s & 10.7\%\\
multi & 176 & 474 & 27.4 KB & 5.22 s & 5.9\%\\
wall & 176 & 474 & 29.9 KB & 5.26 s & 5.1\%\\
\hline
\end{tabular}
\end{center}

Finally, let us justify the optimisation of doing direct searches for short abstract paths, to avoid large detours when a direct search would be fast anyway. This optimisation has two main parameters: the octile distance below which a direct search is considered, and the number of vertices in the abstract path below which a direct search is considered. The former condition prevents direct searches across large portions of the map, while the latter prevents direct searches when the start and goal lie on opposite sides of a major divide and require a nontrivial path to reach one another. This does not make a huge difference in total suboptimality, but it \emph{does} drastically reduce the maximum suboptimality, and the gains are primarily for short paths. In actual deployment scenarios such as video games, short paths tend to be common, and ensuring that they are of high quality can be crucial.

The tradeoff we eventually chose was as follows: for abstract paths with up to four vertices (start, two border crossings, and goal) whose start and goal were separated by no more than $64$ units of distance, we performed a direct search instead. This cost us only about half a second total over $10000$ paths, but cut the maximum suboptimality by nearly a factor of three. In the interests of transferring our technique to other maps or sets of pathfinding problems, we consider that a fair tradeoff.
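The chosen condition reduces to a small predicate. The function name is illustrative; the thresholds are the values given above.

```cpp
#include <cassert>

// Decide whether a short-path query should bypass the abstraction entirely
// and run a direct A* search from start to goal.
bool preferDirectSearch(int abstractPathVertices, float octileDist) {
    const float maxDist = 64.0f;  // don't direct-search across the map
    const int maxVerts = 4;       // start, two border crossings, goal
    return abstractPathVertices <= maxVerts && octileDist <= maxDist;
}
```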

\begin{center}
\begin{tabular}{|l|l|l|l|l|}
\hline
Max Dist & Max Verts & Total Resource Usage & Total Subopt & Max Subopt\\
\hline
0 & 0 & 4.63 s & 5.1\% & 75\%\\
32 & 4 & 4.70 s & 5.1\% & 42\%\\
64 & 4 & 5.26 s & 5.1\% & 27\%\\
96 & 5 & 9.18 s & 5.0\% & 27\%\\
\hline
\end{tabular}
\end{center}

\section{Conclusion}

In summary, our final configuration, for which we are submitting our source code, finds valid paths solving the 10,000 test cases from the course website in $5.26$ seconds on viking, with a total suboptimality of $5.1\%$ (and a maximum suboptimality of about $27\%$). It does this primarily through an abstract search graph based on a manual segmentation of the map. Our technique should be transferable to any map which is composed of large, mostly convex regions. Our technique would be of little utility when applied to maps consisting of small, winding corridors and mazes.

\end{document}
