\mode<article>{\usepackage{fullpage}}

\usepackage{listings}
\lstset{language=Java,
        basicstyle=\small}

\usepackage{graphicx}
\usepackage{hyperref}

\hypersetup{
  colorlinks=true,
  urlcolor=blue,
  linkcolor=black
}

\title{Lecture Eleven -- Graphs}
\author{Matt Bone}
\date{\today}

\begin{document}

\mode<article>{\maketitle}
\tableofcontents
\mode<article>{\pagebreak}
\mode<presentation>{\frame{\titlepage}}
\mode<article>{\setlength{\parskip}{.25cm}}
\mode<all>{\bibliographystyle{abbrvnat}}

\section{Introduction}
Graph theory is an interesting field of mathematics that is often
useful to computer scientists.  While many techniques to represent
graphs and discover their properties algorithmically are already
known, oftentimes the biggest issue is noticing when a problem can be
solved with graph techniques and figuring out how to construct the
graph appropriately.

\subsection{Bridges Of K\"{o}nigsberg}
\begin{figure}
  \center{\includegraphics[height=1.5in]{Konigsberg_bridges}}
  \caption{\label{fig:konisberg}Bridges Of K\"{o}nigsberg (from \href{http://en.wikipedia.org/wiki/Image:Konigsberg_bridges.png}{wikipedia})}
\end{figure}

\mode<presentation>{\begin{frame}[plain]
  \includegraphics[height=8in]{Konigsberg}
\end{frame}}

\mode<presentation>{\begin{frame}
  \frametitle{Bridges Of K\"{o}nigsberg}
  \includegraphics{Konigsberg_bridges}
\end{frame}}

In figure \ref{fig:konisberg} we see a map of seven bridges in the Prussian 
city of K\"{o}nigsberg.  Traversing these bridges and taking in the
city on a nice day was a common pastime, but many wondered if it was
possible to start at home, traverse all the bridges once and only
once, and then return home.  Euler formed the basis of graph theory
and showed that this was not possible.

\subsection{Terminology}
As always, it is important to have a common vocabulary for discussing
graphs.  You may notice some terms from our study of trees.

\begin{description}
\item[undirected edge] An edge that has no direction.  It may be
traversed in either direction.  A graph in which all edges are
undirected is an undirected graph.

\item[directed edge] An edge that may be traversed only in the
direction in which the arrow points.  If we have a graph with two
nodes $a$ and $b$ with one edge from $a$ to $b$ then there is a path
from $a$ to $b$ but not from $b$ to $a$. A graph in which all edges
are directed is a directed graph.

\item[adjacency] ``Vertex $a$ is adjacent to vertex $b$ when there is
an edge from $a$ to $b$.''\cite[pg3]{tucker02:combinatorics}  Notice
that this definition works for both directed and undirected edges.

\item[weight] A value, usually numeric, assigned to an edge that
represents the cost of traversing that edge.  A graph in which all
edges have weights is said to be a weighted graph.  Weighted graphs
are of interest in optimization problems.

\item[degree] The number of edges entering and leaving a node.  In a
directed graph, each node has an in-degree (edges entering) and an
out-degree (edges leaving).

\item[path] A series of edges that can be traversed to get from a
start node to an end node.

\item[cycle] A path with non-zero length that starts and ends at 
the same node.

\item[directed acyclic graph] Often called a \emph{DAG}, this type of
graph is directed and contains no cycles.

\item[Euler cycle/path] An Euler cycle starts and ends at the same
node and traverses each edge once and only once.  For an Euler cycle
to exist, every vertex must have an even degree.  An Euler path starts
at some node $a$ and ends at some other node $b$, traversing all edges
in the graph once and only once.  For an Euler path to exist, every
vertex besides $a$ and $b$ must have an even degree.
\end{description}
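The degree conditions above can be checked mechanically.  The sketch
below is only an illustration: it assumes an undirected graph given as
a matrix whose entry $A_{ij}$ counts the edges between nodes $i$ and
$j$ (so the parallel bridges of K\"{o}nigsberg can be represented),
and it tests the even-degree requirement for an Euler cycle:

\begin{lstlisting}
public class EulerCheck {
  // Checks the even-degree condition for an Euler cycle in an
  // undirected graph.  Entry adj[i][j] counts the edges between
  // nodes i and j, so parallel edges (like the bridges) are allowed.
  public static boolean hasAllEvenDegrees(int[][] adj) {
    for (int i = 0; i < adj.length; i++) {
      int degree = 0;
      for (int j = 0; j < adj.length; j++)
        degree += adj[i][j];
      if (degree % 2 != 0)
        return false;
    }
    return true;
  }
}
\end{lstlisting}

Applied to the four land masses of K\"{o}nigsberg, whose degrees are
3, 3, 5, and 3, the check fails, which is Euler's result.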

\subsection{Some Sample Problems}

\subsubsection{Job Matching}
Suppose we are trying to match a certain number of applicants to a
certain number of jobs.  In figures \ref{fig:jobmatch-good} and
\ref{fig:jobmatch-bad}, people are on the left and jobs are on the
right. If a person is able to perform the job, an edge is drawn
between them.  In figure \ref{fig:jobmatch-good} we can clearly fill
all the positions, but this is not the case in figure \ref{fig:jobmatch-bad}.

When the vertices of a graph can be divided into two distinct sets
such that every edge goes from one set to the other, the graph is
called \emph{bipartite}.

\begin{figure}
  \center{\includegraphics[height=1.5in]{jobmatch_good}}
  \caption{\label{fig:jobmatch-good}Job Matching -- Able to match people to jobs}
\end{figure}

\begin{figure}
  \center{\includegraphics[height=1.5in]{jobmatch_bad}}
  \caption{\label{fig:jobmatch-bad}Job Matching -- Unable to match people to jobs}
\end{figure}

\mode<presentation>{\begin{frame}
  \frametitle{Job Matching--Good}
  \includegraphics[height=3in]{jobmatch_good}
\end{frame}}

\mode<presentation>{\begin{frame}
  \frametitle{Job Matching--Bad}
  \includegraphics[height=3in]{jobmatch_bad}
\end{frame}}

\subsubsection{Monorail! Monorail! Monorail!}\label{monorail}
You are connecting the monorail systems of Springfield, North
Haverbrook, Brockway and Ogdenville. Using the graph in figure
\ref{fig:rail-map} optimize for money spent (i.e. connect all the
cities for the least amount of money).  Using the graph in figure
\ref{fig:rail-map2}, optimize for the shortest amount of time if
you're a commuter who regularly goes from North Haverbrook to
Ogdenville.

The first optimization is an example of a \emph{minimum spanning
tree}.  The second shows the \emph{shortest path} between two points.
It is very important to realize that these two optimizations are
different and the minimum spanning tree is not necessarily the
shortest path.

\begin{figure}
  \center{\includegraphics[height=2in]{rail_map}}
  \caption{\label{fig:rail-map}Connecting Cities, minimize cost}
\end{figure}
        
\begin{figure}
  \center{\includegraphics[height=2in]{rail_map2}}
  \caption{\label{fig:rail-map2}Connecting Cities, minimize time}
\end{figure}

\mode<presentation>{\begin{frame}
  \frametitle{Monorail! Monorail! Monorail!}
  \includegraphics[height=3in]{rail_map}
\end{frame}}

\mode<presentation>{\begin{frame}
  \frametitle{Monorail! Monorail! Monorail!}
  \includegraphics[height=3in]{rail_map2}
\end{frame}}

\subsubsection{Traveling Salesman}
From Wikipedia:
``If a salesman starts at point A, and if the distances between every
pair of points are known, what is the shortest route which visits all
points and returns to point A?'' (see figure \ref{fig:sales}).

\begin{figure}
  \center{\includegraphics[height=2in]{Salesman}}
  \caption{\label{fig:sales}Traveling Salesman (from \href{http://en.wikipedia.org/wiki/Image:Salesman.PNG}{wikipedia})}
\end{figure}

\mode<presentation>{\begin{frame}
  \frametitle{Traveling Salesman}
  \includegraphics[height=3in]{Salesman}
\end{frame}}

\subsubsection{Processes with Dependencies}
How would we draw a graph if we were concerned with whether or not two
processes could run simultaneously? This shows when we might want to
consider an \emph{independent set} of vertices.

\subsubsection{Scheduling and Graph Coloring}
Say we are trying to find rooms for meetings.  If we represent each
available room as a color, and we want to schedule meetings so as to
minimize the number of rooms used, we can represent each meeting as a
node and draw edges between those that occur at the same time.  Then
we have to figure out how to color the nodes so that no two adjacent
nodes share the same color.
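One simple heuristic for this is greedy coloring, sketched below.
This is only an illustration, assuming an adjacency-matrix input; it
is not guaranteed to use the minimum number of colors, though it never
uses more than the maximum degree plus one:

\begin{lstlisting}
public class GreedyColoring {
  // Assigns each node the smallest color not already used by one of
  // its colored neighbors.  Colors are the integers 0, 1, 2, ...
  public static int[] color(boolean[][] adjacent) {
    int n = adjacent.length;
    int[] colors = new int[n];
    java.util.Arrays.fill(colors, -1);      // -1 means "not yet colored"
    for (int i = 0; i < n; i++) {
      boolean[] used = new boolean[n];      // colors taken by neighbors
      for (int j = 0; j < n; j++)
        if (adjacent[i][j] && colors[j] >= 0)
          used[colors[j]] = true;
      int c = 0;
      while (used[c])                       // smallest free color
        c++;
      colors[i] = c;
    }
    return colors;
  }
}
\end{lstlisting}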


\subsection{Computer Representations} 
Now that we have an idea of graphs as an abstract mathematical concept, let's 
look at two possible ways of representing graphs on a computer.

\subsubsection{Adjacency Matrix}
One technique to represent the structure of a graph is an adjacency
matrix.  If there are $n$ nodes in the graph, then the adjacency
matrix is size $n \times n$.  We can think of it mathematically as a
matrix, or, if you prefer, as a two dimensional array inside a
computer.

We assign each node a unique number from $0$ to $n-1$.  Assume we have
two nodes the first represented by some number $i$ and the second by
some number $j$ and an adjacency matrix $A$.  We can discover if node
$i$ is adjacent to node $j$ by looking at entry $A_{ij}$ (where $i$ is
the row and $j$ is the column) in the adjacency matrix.  To find out
whether or not $j$ is adjacent to $i$ we look at entry $A_{ji}$.

For unweighted graphs, the values of the adjacency matrix are either
$1$ (indicating an adjacency) or $0$.  For weighted graphs, the values
are the weights themselves or $\infty$ if there is no adjacency.
Notice that adjacency matrices for undirected graphs will be
symmetric.
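As a concrete sketch, a weighted, directed adjacency matrix might be
wrapped in a small class like the one below.  The class and method
names are illustrative rather than a prescribed interface, and a large
sentinel value stands in for $\infty$:

\begin{lstlisting}
public class AdjacencyMatrix {
  // Sentinel standing in for infinity (no edge).  Half of
  // Integer.MAX_VALUE so that adding two weights cannot overflow.
  public static final int INF = Integer.MAX_VALUE / 2;

  private final int[][] weights;

  public AdjacencyMatrix(int n) {
    weights = new int[n][n];
    for (int i = 0; i < n; i++)
      for (int j = 0; j < n; j++)
        weights[i][j] = (i == j) ? 0 : INF;
  }

  // Record a directed edge from node i to node j.
  public void addEdge(int i, int j, int weight) {
    weights[i][j] = weight;
  }

  // Node i is adjacent to node j when entry A_ij is finite.
  public boolean isAdjacent(int i, int j) {
    return i != j && weights[i][j] < INF;
  }

  public int weight(int i, int j) {
    return weights[i][j];
  }
}
\end{lstlisting}

With this sketch, \lstinline{isAdjacent(i, j)} performs the
row-$i$, column-$j$ lookup described above in constant time.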

\begin{figure}
  \center{\includegraphics[width=5cm]{adj_graph}}
  \caption{\label{fig:adj-graph}Directed, weighted graph}
\end{figure}

\mode<presentation>{\begin{frame}
  \frametitle{A sample directed, weighted graph}
  \includegraphics[width=3.5in]{adj_graph}
\end{frame}}

The adjacency matrix for the graph shown in figure \ref{fig:adj-graph}
looks like:

\begin{frame}
\mode<presentation>{\frametitle{Adjacency Matrix}}
$\begin{array}{ccccccc}
 -         & \mathbf{0} & \mathbf{1} & \mathbf{2} & \mathbf{3} & \mathbf{4} & \mathbf{5}\\
\mathbf{0} & 0          & 8          & \infty     & 5          & \infty     & \infty\\
\mathbf{1} & 9          & 0          & 6          & 3          & \infty     & \infty\\
\mathbf{2} & \infty     & \infty     & 0          & \infty     & 11         & \infty\\
\mathbf{3} & \infty     & \infty     & 1          & 0          & \infty     & \infty\\
\mathbf{4} & \infty     & \infty     & \infty     & 7          & 0          & 1\\
\mathbf{5} & \infty     & \infty     & \infty     & \infty     & \infty     & 0
\end{array}$
\end{frame}

No matter how many edges there are in the graph, the adjacency matrix
representation requires storage proportional to the square of the
number of nodes.  Thus, this representation may be inappropriate for
graphs with lots of nodes and few edges.  However, in cases where
there are many edges, the representation may be ideal.

\subsubsection{Adjacency List} 
Recall our binary tree implementation.  We had an instance of an
element class for each node and two references: one to the left child
and another to the right child.  The graph implementation is similar; each 
node can contain data and has references to adjacent nodes.  A naive
implementation (without proper encapsulation) for an unweighted 
graph is shown below:
\begin{lstlisting}
public class GraphNode<T> {
  public T data;
  public List<GraphNode<T>> adjacencies;
}
\end{lstlisting}

The adjacency lists for the graph in figure \ref{fig:adj-graph}  
looks like:
\begin{frame}[fragile]
\mode<presentation>{\frametitle{Adjacency List}}
\begin{verbatim}
 node  adjacency list
  0    1->3
  1    0->2->3
  2    4
  3    2
  4    3->5
  5
\end{verbatim}
\end{frame}

Notice that the storage requirements for the adjacency list
representation are proportional to the number of nodes plus the
number of edges.  Also notice that there must be some technique to
store weights (e.g.\ an object or tuple).
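One possible shape for such an edge object is sketched below (the
names are illustrative); each adjacency pairs the target node with the
weight of the edge leading to it:

\begin{lstlisting}
import java.util.ArrayList;
import java.util.List;

public class WeightedGraphNode<T> {
  // One way to store a weight alongside each adjacency: pair the
  // destination node with the weight in a small edge object.
  public static class Edge<T> {
    public final WeightedGraphNode<T> target;
    public final int weight;

    public Edge(WeightedGraphNode<T> target, int weight) {
      this.target = target;
      this.weight = weight;
    }
  }

  public T data;
  public final List<Edge<T>> adjacencies = new ArrayList<>();

  public WeightedGraphNode(T data) {
    this.data = data;
  }

  public void addEdge(WeightedGraphNode<T> target, int weight) {
    adjacencies.add(new Edge<>(target, weight));
  }
}
\end{lstlisting}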



\section{Minimum Spanning Trees} 
If we look back on the first problem in section \ref{monorail}, we see
that we generated a minimum spanning tree.  For a weighted graph, the
minimum spanning tree is the `cheapest' way to connect all the
vertices.


\subsection{Kruskal's Algorithm} 
Kruskal's algorithm also generates a minimum spanning tree, but does
so in a different manner.  First, the algorithm creates a
\emph{forest} of trees.  Initially for a graph with $n$ nodes, this
forest consists of $n$ trees each containing one node.  The edges are
kept in a data structure and sorted according to their weights.  The
edge with the minimum weight is removed.  If the edge connects two
distinct trees in the forest, then those trees are joined into a
common tree; thus the size of the forest is reduced by one.  If the
edge does not connect two distinct trees, it is ignored.  This process
continues until there is one tree in the forest.  This tree will be a
minimum spanning tree.

One caveat with Kruskal's algorithm is that we must be able to easily
identify the endpoints of a particular edge if we want this algorithm
to run efficiently.  Secondary data structures in addition to
adjacency lists/matrices may be required to hold this information.
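A common choice for that secondary structure is a union-find
(disjoint-set) array, which makes ``do these two endpoints lie in
distinct trees?''\ cheap to answer.  The following is only a sketch,
assuming the graph arrives as an explicit edge list:

\begin{lstlisting}
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class Kruskal {
  public static class Edge implements Comparable<Edge> {
    public final int u, v, weight;
    public Edge(int u, int v, int weight) {
      this.u = u; this.v = v; this.weight = weight;
    }
    public int compareTo(Edge other) {
      return Integer.compare(weight, other.weight);
    }
  }

  // Follow parent links to the root that identifies a tree.
  private static int find(int[] parent, int x) {
    while (parent[x] != x)
      x = parent[x];
    return x;
  }

  // Returns the edges of a minimum spanning tree for a connected,
  // undirected graph with n nodes given as an edge list.
  public static List<Edge> minimumSpanningTree(int n, List<Edge> edges) {
    List<Edge> sorted = new ArrayList<>(edges);
    Collections.sort(sorted);               // cheapest edges first
    int[] parent = new int[n];
    for (int i = 0; i < n; i++)
      parent[i] = i;                        // forest of n one-node trees
    List<Edge> tree = new ArrayList<>();
    for (Edge e : sorted) {
      int ru = find(parent, e.u), rv = find(parent, e.v);
      if (ru != rv) {                       // edge joins two distinct trees
        parent[ru] = rv;                    // merge them into one
        tree.add(e);
        if (tree.size() == n - 1) break;    // only one tree remains
      }
    }
    return tree;
  }
}
\end{lstlisting}

Production implementations also apply path compression and union by
rank to the union-find structure, which this sketch omits for clarity.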

\section{Shortest Path}
Often, optimization problems require us to find the shortest
path between two nodes.  For a weighted graph, this means the path
with least amount of total weight (not necessarily the smallest number
of edges).  For an unweighted graph, we can consider each edge to have
a unit weight of one and run the weighted algorithms below (this, of
course, \emph{will} return the path with the smallest number of
edges).

\subsection{Dijkstra's Algorithm}
Dijkstra's algorithm is a single-source shortest path algorithm that
is often used to find the shortest path between two nodes in a graph.
The algorithm is greedy and works by repeatedly finding the minimum
path between the source node and nodes in a data structure we will
call the fringe. In order to simplify the code, we will assume that
the fringe is a priority queue. This means that everything in the
fringe (in our case nodes) has an associated numerical key value.  We
will repeatedly pull out the node with the minimum key value.

The algorithm starts out with source node $s$, adding all nodes
adjacent to $s$ to the fringe. For every node $i$ added to the fringe
there is a corresponding path from $s$ to $i$.  The total weight of
this path is the key in the priority queue. By selecting the path with
the lowest weight in the fringe, we now get the shortest path from
node $s$ to node $i$ (the priority queue is useful here, because it
keeps track of the smallest paths as the paths are added). This
shortest path constitutes the start of a tree that will ultimately be
the output of the algorithm.  Next we add all nodes adjacent to $i$ to
the fringe, taking care to not add nodes already in the tree and
re-keying nodes in the fringe that are now accessible through paths
with lower weights (i.e. we re-index if the path through $i$ is smaller
than the previous path). The process then starts over and repeats
until all nodes have been added to the tree.
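The description above can be sketched in Java using
\texttt{java.util.PriorityQueue} as the fringe.  This is an
illustration rather than a tuned implementation: it assumes an
adjacency-matrix input with a large sentinel standing in for $\infty$,
and, because \texttt{PriorityQueue} offers no decrease-key operation,
it re-keys by inserting a fresh entry and discarding stale ones as
they surface:

\begin{lstlisting}
import java.util.Arrays;
import java.util.PriorityQueue;

public class Dijkstra {
  // Sentinel standing in for infinity; halved so sums cannot overflow.
  static final int INF = Integer.MAX_VALUE / 2;

  // Returns the shortest total weight from source s to every node,
  // given an adjacency matrix that uses INF for missing edges.
  public static int[] shortestPaths(int[][] weights, int s) {
    int n = weights.length;
    int[] dist = new int[n];           // current best path weight from s
    Arrays.fill(dist, INF);
    dist[s] = 0;
    boolean[] inTree = new boolean[n];
    // Fringe entries are {node, key}; the smallest key comes out first.
    PriorityQueue<int[]> fringe =
        new PriorityQueue<>((a, b) -> Integer.compare(a[1], b[1]));
    fringe.add(new int[] {s, 0});
    while (!fringe.isEmpty()) {
      int i = fringe.poll()[0];
      if (inTree[i]) continue;         // stale entry: node already in tree
      inTree[i] = true;
      for (int j = 0; j < n; j++) {
        if (weights[i][j] < INF && dist[i] + weights[i][j] < dist[j]) {
          dist[j] = dist[i] + weights[i][j];
          fringe.add(new int[] {j, dist[j]}); // re-key via a fresh entry
        }
      }
    }
    return dist;
  }
}
\end{lstlisting}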

It is important to notice that this leaves us with more information
than we asked for. Dijkstra's algorithm outputs a tree that contains
one and only one shortest path from source node $s$ to \textbf{all}
other nodes. This tree will be very simple with no cycles and only one
possible path to each node\cite[pg 583]{cormen00:algo}. We can then
use this tree to select the path to our destination node. At first it
may seem that Dijkstra's algorithm has done more work than was
required, but this behavior essentially comes for free. Comparable to
other single-source shortest path algorithms, Dijkstra's algorithm has
a lower time bound of $n$ operations (where $n$ is the number of nodes
in the graph), and an upper time bound of $n^{2}$ 
operations\cite[pg 411]{baase00}.

One must be careful not to confuse Prim's algorithm with Dijkstra's.
Remember that both have a priority queue or `fringe', but the keys of
the priority queue in Prim's algorithm are the weights of individual
edges while the keys of the priority queue in Dijkstra's algorithm
represent the sum of the weights of the edges from the source to the
node.  Again, it is important to realize that a minimum spanning tree
is not necessarily a shortest path.

\subsection{Floyd-Warshall Algorithm}
While Dijkstra's algorithm finds the shortest path between two points,
sometimes we want more information. Occasionally, we are interested
in finding all the shortest paths in a graph. One approach would be
to repeatedly run Dijkstra's algorithm.  Since we only specify the
source node to get the shortest paths to all other nodes, we need
only run the algorithm once for each node in the graph. While this
approach will work, there are better solutions. In particular, the
Floyd-Warshall algorithm shown in listing \ref{alg:Floyd-Warshall}
can give us the same information. 

Unlike the Dijkstra algorithm's greedy approach, the Floyd-Warshall
algorithm takes a dynamic programming tack. Though `dynamic programming'
can be a slippery term, dynamic programming tactics do have some common
characteristics. Like many other methods, dynamic programming usually
breaks harder problems into simpler problems. However, dynamic approaches
generally use some sort of implicit or explicit data structure to
store the results of any given calculation so that identical subproblems
need not be recalculated in the future \cite[pg 323]{cormen00:algo}.
Some standard recursive solutions may not take this approach and may
be forced to repeatedly recalculate the same subproblems. As a result,
some dynamic programming approaches can reduce unwieldy exponential
time algorithms to much more manageable polynomial time algorithms
\cite[pg 452]{baase00}.

The Floyd-Warshall algorithm is more difficult to break down and explain,
so we will only approach it from a high-level point of view.  It is
based on the lemma that subpaths of shortest paths are themselves
shortest paths, and takes advantage of the idea that there may be an
intermediate node between two nodes that yields a shorter path than
directly traversing the path between them \cite[pg 434]{baase00}.  We can see this
in lines 7 and 8 of listing \ref{alg:Floyd-Warshall}. As one might
expect from our initial idea of running Dijkstra's algorithm $n$
times, Floyd-Warshall, too, has an upper bound of $n^{3}$ running
time where $n$ is the size of the square input matrix. Notice that,
by default, Floyd-Warshall returns a distance matrix. This matrix
only gives us the length of the shortest path between index entries.
However, the algorithm can be modified to calculate and store these
shortest paths on the fly, or the paths can be reconstructed from
the resultant distance matrix and the original weight matrix with
some post-processing \cite[pg 633]{cormen00:algo}. Instead of a table
of shortest weights, this leaves us with a collection of shortest
routes, or a `routing table.'

\begin{lstlisting}[float,caption={Floyd-Warshall Algorithm\cite{baase00}},label=alg:Floyd-Warshall,captionpos=b]
void allPairsShortestPath(weightMatrix[][])
  int i,j,k
  distMatrix = clone(weightMatrix) //copy weightMatrix into distMatrix
  for(k=0;k<dimension(weightMatrix); k++)
    for(i=0;i<dimension(weightMatrix); i++)
      for(j=0;j<dimension(weightMatrix); j++)
        distMatrix[i][j]=min(distMatrix[i][j], 
                             distMatrix[i][k] + distMatrix[k][j])
\end{lstlisting}
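For comparison, a directly runnable Java version might look like the
following (assuming, as with the adjacency matrix sketches earlier,
that a large sentinel value stands in for $\infty$ so that sums cannot
overflow):

\begin{lstlisting}
public class FloydWarshall {
  // Sentinel standing in for infinity; halved so sums cannot overflow.
  static final int INF = Integer.MAX_VALUE / 2;

  // Returns the all-pairs distance matrix for a weight matrix that
  // uses INF for missing edges and 0 on the diagonal.
  public static int[][] allPairsShortestPath(int[][] weightMatrix) {
    int n = weightMatrix.length;
    int[][] dist = new int[n][];
    for (int i = 0; i < n; i++)
      dist[i] = weightMatrix[i].clone(); // copy, leaving the input intact
    for (int k = 0; k < n; k++)
      for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
          // Is the detour through intermediate node k shorter?
          dist[i][j] = Math.min(dist[i][j],
                                dist[i][k] + dist[k][j]);
    return dist;
  }
}
\end{lstlisting}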


\section{Critical Path}
The critical path is the longest path in a graph.  Consider an example
of a directed graph that represents inter-dependent processes.  Nodes
represent processes and directed edges represent process dependencies.
An edge from node $a$ to node $b$ with a weight of $10$ shows that
process $a$ depends on process $b$ and requires $10$ units of
time to run to completion.  In such a graph, the sum of the weights of
the critical path is equal to the total running time of the group of
processes.
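Because such a dependency graph is a DAG, the critical path can be
found with a memoized depth-first search rather than a general
longest-path search (which is hard for arbitrary graphs).  The sketch
below is an illustration only, assuming an adjacency-matrix input
where a negative entry means ``no edge'':

\begin{lstlisting}
public class CriticalPath {
  // Total weight of the longest path in a DAG given as an adjacency
  // matrix, where a negative entry means "no edge".
  public static int longestPath(int[][] weights) {
    int n = weights.length;
    int[] memo = new int[n];
    java.util.Arrays.fill(memo, -1);   // -1 means "not yet computed"
    int best = 0;
    for (int i = 0; i < n; i++)
      best = Math.max(best, longestFrom(weights, i, memo));
    return best;
  }

  // Longest path starting at node i, cached in memo.  The graph must
  // be acyclic, or this recursion would not terminate.
  private static int longestFrom(int[][] weights, int i, int[] memo) {
    if (memo[i] >= 0) return memo[i];
    int best = 0;
    for (int j = 0; j < weights.length; j++)
      if (i != j && weights[i][j] >= 0)
        best = Math.max(best, weights[i][j] + longestFrom(weights, j, memo));
    memo[i] = best;
    return best;
  }
}
\end{lstlisting}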

\mode<all>{\bibliography{sources}}

\end{document}
