\documentclass{report}
\usepackage{graphicx}
\usepackage{float}
\begin{document}

\title{Parallel Programming Project Report}
\author{Adam Clayton \& Larry Walters \\
		Email: aclayto5@jhu.edu \& lwalter7@jhu.edu}
\date{600.420 - 12/(13-14)/2011}
\maketitle

\section*{Introduction}
Our project uses parallelism to perform automated 2D maze solving.  Typically, maze solving is done for human entertainment, but it has other applications as well, such as robotics and graph theory. Depending on the size and complexity of the maze, solving it can be an arduous and time consuming exercise. Our mazes are composed of tens of thousands of reachable locations, and solving them in the traditional way, by trying various routes from the start to the goal, could take a large amount of time. We have attempted to show that by parallelizing the solving of such mazes, the task can be accomplished quickly and efficiently for a variety of sizes. The project is composed of three parts: a maze generator, a maze solver (the parallelized and important part), and a maze visualizer.  What is really being parallelized is the conversion of the maze into a graph structure. Testing has been done to determine speedup and scale up, the results of which show our system exhibits good parallelism.

\section*{Parallel Design}
The goal of our project was to develop a parallelized program for solving large mazes using Java threads. Our program takes as arguments the number of threads to use for the task and a file containing the maze. The output of our program is a file containing the solution of the maze. When evaluating our problem we realized that large mazes could be quickly and efficiently split into pieces that could each be operated on relatively independently. This property of the mazes naturally led us to a data decomposition of the problem. The two common forms of data decomposition are geometric decomposition and recursive data structures. Since the maze could easily be split geometrically (vertically, by columns), the geometric decomposition structure was the better fit for our program.
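As an illustration of this column-wise split, the following sketch divides a binary maze grid into one slice per thread. The class and method names (\texttt{MazeSplitter}, \texttt{splitByColumns}) are illustrative, not the project's actual code; the last slice absorbs any remainder columns.

```java
// Illustrative sketch of a column-wise geometric decomposition of a
// binary maze (1 = wall, 0 = passage). Not the project's actual code.
public class MazeSplitter {
    // Returns `parts` sub-mazes; each keeps every row but only a
    // contiguous range of columns. The last slice takes the remainder.
    public static int[][][] splitByColumns(int[][] maze, int parts) {
        int rows = maze.length, cols = maze[0].length;
        int width = cols / parts;
        int[][][] subMazes = new int[parts][][];
        for (int p = 0; p < parts; p++) {
            int start = p * width;
            int end = (p == parts - 1) ? cols : start + width;
            int[][] sub = new int[rows][end - start];
            for (int r = 0; r < rows; r++)
                for (int c = start; c < end; c++)
                    sub[r][c - start] = maze[r][c];
            subMazes[p] = sub;
        }
        return subMazes;
    }
}
```

Because each slice is a plain copy of its columns, a worker can operate on it with no shared state at all.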
\\\\
For the program structure we chose master/worker. One main reason for using the master/worker structure is that there are a number of tasks that are best done by one thread at the beginning and end of the program. At the beginning of our program, before the threads are created, the maze needs to be loaded and then split into the necessary number of sub-mazes. At the end of the program, after the threads have done their work, the program needs to combine the graphs produced by the threads and find a path from the start to the goal. These preprocessing and post-processing jobs cannot be expressed cleanly in the SPMD or Loop Parallelism patterns. The Fork/Join pattern could have been employed, but master/worker was simpler since all of the threads are managed by a single thread. Thus the master/worker structure was the best choice for this problem, and the pattern kept the implementation mechanisms of our parallelized program simple. The master thread handles the initial setup by creating all of the threads, allocating resources for each thread, giving each thread a partition of the maze to work on, and releasing the resources of each thread after they have finished their work. Since we are using Java, the allocation and de-allocation of resources is handled by the Java virtual machine and the garbage collector. Each worker thread has its own piece of the maze, and all of the resources needed to perform its task, upon creation, so no communication is necessary between threads. The only synchronization required by our program is that all the workers must finish before a path can be found. The master thread handles this synchronization simply by waiting for the worker threads to finish before continuing on, using a join statement.
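The structure just described can be sketched in a few lines of Java. This is a minimal illustration of the master creating workers and joining on them; the names and the placeholder arithmetic standing in for the maze-to-graph conversion are hypothetical, not the project's actual code.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal master/worker sketch: the master starts one worker per
// partition, and the only synchronization is joining on each worker.
public class MasterWorkerSketch {

    static class Worker extends Thread {
        final int[] partition;   // stand-in for a sub-maze
        int result;              // stand-in for the graph this worker builds

        Worker(int[] partition) { this.partition = partition; }

        @Override public void run() {
            int sum = 0;
            for (int v : partition) sum += v;  // placeholder for maze-to-graph work
            result = sum;
        }
    }

    public static int solve(int[][] partitions) {
        List<Worker> workers = new ArrayList<>();
        for (int[] p : partitions) {           // master creates and starts workers
            Worker w = new Worker(p);
            workers.add(w);
            w.start();
        }
        int combined = 0;
        for (Worker w : workers) {
            try {
                w.join();                      // the only synchronization point
            } catch (InterruptedException e) {
                throw new RuntimeException(e);
            }
            combined += w.result;              // placeholder for merging graphs
        }
        return combined;                       // the master would run A* after this
    }
}
```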
\\\\
The first thing our program does is load the maze file and de-serialize it into a maze object. This maze file
is created by our maze generator. We represent mazes as a grid of 1's (walls) and 0's (passageways). Once read in, the maze is divided into sub-mazes; the number of sub-mazes created is equal to the number of threads to be used. The master creates the threads, gives each of them a sub-maze to operate on, then waits for the threads to finish. In each thread the maze of 1's and 0's is converted into a graph. This is done in the following way.
\\\\
Each passageway (0) in the binary maze is converted into a node. For each node, edges are added to the graph connecting the node with its neighbors in the maze, so initially every edge is exactly one unit in length. After the initial graph is constructed, the thread begins to iteratively eliminate pointless nodes, in a process called pruning. A node with only one edge is a dead end. A node with only two edges lies directly, and only, in between two other nodes. Since dead-end nodes and intermediate nodes provide no useful information for solving our problem, they are eliminated. When an intermediate node is eliminated, its two edges are merged together and the weight of the new edge is the combined length of the original two. Because the master splits the maze by columns, nodes that lie on the left and right boundaries of a sub-maze are not eliminated in this process: the thread cannot know whether those nodes connect to another part of the maze that was given to another thread. One exception is the thread with the last chunk of the maze; since it knows it has the extreme right piece, and no other nodes can connect to it, it prunes that boundary. The start and goal positions in the maze are also added to the graph, and they too cannot be eliminated during the pruning process. When the iterative pruning process is complete the thread stops running. The final graph contains only nodes with three or more edges, nodes on the left and right borders of the sub-mazes, and the start and goal nodes if they are present.
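The pruning step can be sketched on a weighted adjacency-map representation, as below. The representation and names are illustrative, not the project's code; \texttt{keep} holds the nodes that must survive (boundary nodes, start, and goal).

```java
import java.util.*;

// Illustrative sketch of iterative pruning: degree-1 nodes (dead ends)
// are deleted, and degree-2 nodes (intermediate nodes) are spliced out
// by merging their two edges into one whose weight is the sum of the
// originals. Nodes in `keep` are never pruned.
public class GraphPruner {
    // adj.get(u) maps each neighbor of u to the weight of that edge.
    public static void prune(Map<Integer, Map<Integer, Integer>> adj,
                             Set<Integer> keep) {
        boolean changed = true;
        while (changed) {                       // repeat until a full pass changes nothing
            changed = false;
            for (Integer u : new ArrayList<>(adj.keySet())) {
                if (keep.contains(u)) continue;
                Map<Integer, Integer> nbrs = adj.get(u);
                if (nbrs.size() == 1) {         // dead end: delete node and its edge
                    Integer v = nbrs.keySet().iterator().next();
                    adj.get(v).remove(u);
                    adj.remove(u);
                    changed = true;
                } else if (nbrs.size() == 2) {  // intermediate node: splice it out
                    Iterator<Map.Entry<Integer, Integer>> it = nbrs.entrySet().iterator();
                    Map.Entry<Integer, Integer> a = it.next(), b = it.next();
                    int w = a.getValue() + b.getValue();  // combined edge length
                    adj.get(a.getKey()).remove(u);
                    adj.get(b.getKey()).remove(u);
                    // if a parallel edge already exists, keep the shorter one
                    adj.get(a.getKey()).merge(b.getKey(), w, Math::min);
                    adj.get(b.getKey()).merge(a.getKey(), w, Math::min);
                    adj.remove(u);
                    changed = true;
                }
            }
        }
    }
}
```

On a corridor of unit edges 1--2--3--4 with nodes 1 and 4 protected, the two interior nodes are spliced out, leaving a single edge of weight 3.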

\begin{figure}[H]
\centering
\includegraphics[width=0.7\textwidth]{100x100_10threads_graph.png}
\caption{Example of a graph generated by the algorithm; the partition boundaries are visible as vertical columns of unpruned nodes left over from the combining process}
\end{figure}
When all the threads have finished running, the master thread gets from each of them the graph that it created.  The master then merges the graphs together into a single graph for the whole maze.  The master then runs the A* algorithm on the merged graph to find the shortest path from the start to the goal.  The path object representing the route found by A* is serialized and stored in the file provided as an argument.  Afterward, this can be viewed in our maze visualizer, which shows an animation of a circle traversing the path through the maze.  The visualizer can also show the whole maze, as opposed to just the path from the start to the goal.
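The report does not detail the mechanics of the merge step, but one plausible sketch, assuming each worker returns a graph keyed by the node's global (row, column) position and that the master adds the unit edges that cross each cut between slices, is the following. All names here (\texttt{GraphMerger}, \texttt{merge}) are hypothetical.

```java
import java.util.*;

// Hypothetical sketch of the master's merge step, assuming node keys
// are global positions: key = row * totalCols + col. Merging is then a
// union of adjacency maps plus unit edges between horizontally adjacent
// boundary nodes of neighboring slices.
public class GraphMerger {
    public static Map<Long, Map<Long, Integer>> merge(
            List<Map<Long, Map<Long, Integer>>> pieces,
            List<Integer> lastColOfPiece,   // last global column of each slice
            int totalCols, int rows) {
        Map<Long, Map<Long, Integer>> graph = new HashMap<>();
        for (Map<Long, Map<Long, Integer>> piece : pieces)
            for (Map.Entry<Long, Map<Long, Integer>> e : piece.entrySet())
                graph.computeIfAbsent(e.getKey(), k -> new HashMap<>())
                     .putAll(e.getValue());
        // stitch neighboring slices: unit edges across each cut
        for (int i = 0; i < pieces.size() - 1; i++) {
            int c = lastColOfPiece.get(i);
            for (int r = 0; r < rows; r++) {
                long left = (long) r * totalCols + c;
                long right = left + 1;
                if (graph.containsKey(left) && graph.containsKey(right)) {
                    graph.get(left).put(right, 1);
                    graph.get(right).put(left, 1);
                }
            }
        }
        return graph;
    }
}
```

A* can then run on the merged adjacency map exactly as it would on any weighted graph.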

\begin{figure}[H]
\centering
\includegraphics[width=0.7\textwidth]{100x100_2threads_path.png}
\caption{Example of path found in maze}
\end{figure}

\section*{Evaluation}
Our program was evaluated on Amazon's EC2 system using a High-CPU Extra Large Instance, which is a virtual 64-bit platform with 8 cores. We decided to measure the extent of our parallelism using speedup and scale up. To do this, we generated 10 mazes whose sizes were multiples of 10000 cells: the smallest was 10000, the second 20000, and so on up to 100000. We wrote a small shell script to run our solver on each maze 5 times for each number of threads, from 1 to 16 (so 5 trials using 1 thread, 5 trials using 2 threads, and so on up to 5 trials using 16 threads, for each maze). The time from the start of the algorithm to the end was recorded for each trial. Then, for each maze (m), 3 out of the 5 trials at a given thread count (c) were picked, their average taken, and the result used as the duration it took the algorithm to solve maze m using c threads. Thus, there are 16 durations for each maze, one for each thread count. The three trials picked for each thread count per maze were selected in an attempt to use values which were relatively close in duration, as the algorithm rarely takes exactly the same amount of time on consecutive runs. In addition, Amazon's services are by nature subject to fluctuation.
\\\\
Once these measurements were taken, the speedup for each maze at each thread count was determined by dividing the duration using only 1 thread by the duration using thread count c. In other words, the time taken to do the solving task on a ``small machine'' (1 thread) divided by the time taken to do the solving task on a ``large machine'' (1 to 16 threads). The results were then graphed for each of the 10 mazes, as can be seen in Figure 3.
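Written as a formula, introducing the notation $T_m(c)$ for the averaged duration of maze $m$ at thread count $c$:
\[
\mathrm{speedup}_m(c) = \frac{T_m(1)}{T_m(c)}.
\]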

\begin{figure}[H]
\centering
\includegraphics[width=0.7\textwidth]{speedup.png}
\caption{Speedup graphs for the 10 mazes}
\end{figure}

These graphs are not as clean as would be desired. A typical speedup curve increases linearly up to the number of cores (in this case 8), then degrades slightly and eventually becomes more or less constant. The smaller mazes more closely resemble this, but they are not as linear as would be expected, and they fluctuate a lot after passing the number of cores. The larger mazes show a lot of fluctuation, though they do exhibit a consistent increase in speedup up to, and even a little past, the number of cores.
\\\\
These results show that our problem domain is certainly parallelizable, as there is an increase in speedup up to and beyond the number of cores. Our algorithm is constrained by its serial portions: splitting the maze, combining the graphs from the worker threads, and running A* on the final graph. But the results show that these tasks are significantly less time consuming than the conversion of the maze to a graph, and that part is completely parallel. If the serial parts were more time consuming than the maze-to-graph conversion in the workers, we would see consistent degradation in speedup as the size of the maze increases. Certainly the algorithm will take longer at larger maze sizes, because there is more data to process, but the fact that speedup increases consistently up to the number of cores, and even beyond, shows that the parallelization of our algorithm results in efficient maze solving.
\\\\
Scale up is intended to show how the parallelism scales with the size of the problem. It is the time taken to do the solving task on a small machine (1 thread) divided by the time taken to do an N times larger solving task (N times larger maze) on an N times larger machine (N threads). To calculate this, we used the measurements taken, as described above, using the duration of the 10000 cell maze at 1 thread, 20000 cell maze at 2 threads, and so on up to the 100000 cell maze at 10 threads, to do the divisions. We then graphed the scale up curve as shown in Figure 4.
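In symbols, writing $T(N, s)$ for the duration using $N$ threads on a maze of $s$ cells (notation introduced here for clarity):
\[
\mathrm{scaleup}(N) = \frac{T(1,\,10000)}{T(N,\,N \times 10000)}.
\]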

\begin{figure}[H]
\centering
\includegraphics[width=0.7\textwidth]{scaleup.png}
\caption{Scaleup graph}
\end{figure}

As with the speedup curves, our scale up curve did not conform to the expected shape, which is roughly constant up to about the number of cores and afterward degrades toward zero before leveling out. Our graph shows a large amount of fluctuation, but there is hardly any degradation; in fact the opposite is true. While unexpected and probably somewhat erroneous (see the Discussion section), it does show that our algorithm scales very well with the size of the problem, that is, the size of the maze.

\section*{Discussion}
The goal of our project was to develop a parallelized maze solver using Java threads. Using the parallel design process we came up with a data decomposition of the maze solving problem that employed the geometric decomposition and master/worker design patterns. We implemented our design in Java using Java threads, tested the implementation on large instances on Amazon's cloud computing system, and measured both the speedup and scale up of our program while running on these machines. Our results show fairly good speedup and scale up for our implementation, which indicates good parallelism in our program. There is, however, a fair amount of unexplained oscillation in the speedup and scale up curves. In our trials, the 5 durations recorded were almost always within a couple of seconds, or even tenths or hundredths of a second, of each other, and any outliers were not used among the 3 final picks that factored into the average. Therefore the source of the oscillation might be the Amazon instance itself, because we do not see any mechanism within the algorithm that could be the cause. There is also some indication that in some instances the algorithm performed better than what Amdahl's law predicts is possible. Once again this might have something to do with the Amazon instance itself. Unfortunately, since we have little information on how Amazon's virtual machines are implemented, we cannot determine what effects the virtualization might have on our implementation. Overall, our results conform fairly well to the expected trends in parallelism. If we had tested our algorithm with many more threads (say up to 20 or 30) and more trials, we would probably have seen speedup curves that leveled out and a scale up curve that degraded toward zero. We believe our algorithm scales well, and perhaps our small set of tests did not show the extent to which the parallelism can be exploited, or the points at which it starts to fail.

While our results show that our solution is a fairly good parallelized maze solver, our software and hardware architecture did not allow us to explore the full potential of this approach. Our Java-thread implementation cannot scale out to computer clusters. The algorithm could have been implemented in a map-reduce framework such as Hadoop and run on clusters, and we also toyed with the idea of writing a CUDA program to test the power of GPUs on these mazes. We considered implementing different maze generators that would build more complex mazes, some containing loops. We also thought about implementing different approaches to parallelized maze solving, and even worked on adding a second algorithm based on a task decomposition as opposed to a data decomposition, but did not have time to fully debug it. A number of graph-solving frameworks were made known to us as well, and it would have been nice to test them on our problem and compare the performance of our algorithm to theirs.


\end{document}