The Single Source Shortest Path (SSSP) problem can be represented formally using graph theory: \\ 
A graph G is represented by G $=$ (V, E), where V is a set of vertices and E is a set of edges, and every edge weight w is required to be non-negative. For this purpose we have chosen to look into Dijkstra's algorithm. Below is an introduction to the notation used.

\section{Representation and Mathematical Notations}
\begin{itemize}
\item S: solution set storing the already evaluated vertices.
\item Q: tentative set storing the vertices still to be evaluated.
\item previous$[vertex]$: the vertex preceding the given vertex on the currently best known path from the source.
\item dist$[vertex]$: the currently best known distance from the source to the given vertex.
\item u: the current vertex.
\item v: a neighbouring vertex of u.
\item alt: the alternative cost of reaching the neighbour vertex 'v' through u.
\item dist$\_$between$[u,v]$: function that calculates and returns the distance between the current vertex u and its neighbour v.
\item decrease-key: updates the key of a vertex and reorders the list accordingly.
\end{itemize}

\newpage
\subsection{Dijkstra's Algorithm}
Dijkstra's algorithm greedily traverses a supplied graph from the provided source and finds a single shortest path to every vertex. It is guaranteed to find a shortest path from the source to the target as long as none of the edges have a negative cost. The algorithm is greedy in the sense that it always visits the vertex with the smallest tentative distance first. At every iteration it examines the cost of visiting the vertices adjacent to the vertices already traversed, and updates the distance to a given adjacent vertex in case the new path is better than the previous one.

\begin{lstlisting}[caption=Dijkstra's algorithm, label={lst:Dijkstra}] 
function Dijkstra(Graph, source): 
	for each vertex v in Graph:
		dist[v] := infinity ;		
		previous[v] := undefined ;
	end for

	dist[source] := 0 ;
	Q := set of all nodes in Graph ;	

	while Q is not empty:
		u := vertex in Q with smallest distance in dist[] ;
		remove u from Q ;
		if dist[u] = infinity:
			break ;
		end if
          
		for each neighbor v of u:					
			alt := dist[u] + dist_between(u, v) ;
			if alt < dist[v]: // Relaxation step.
				dist[v] := alt ;
				previous[v] := u ;
				decrease-key v in Q ;
			end if
			
			if alt = dist[v]: // Equal-distance path.
				dist[v] := alt ;
				previous[v] := u ;
				add-key v to Q ;		
			end if
		end for
	end while
	return dist;
endfunction
\end{lstlisting}
The if-condition at line 19 examines an edge to see if it offers a better path to a vertex; this is referred to as relaxation of the edge. The naming convention comes from mathematics, where relaxation means making a change that reduces constraints: a problem is approximated by a nearby problem that is easier to solve, and a solution of the relaxed problem provides information about the original problem. \\
It is worth noticing that in this pseudocode only a single shortest path can be stored, since previous[v] is a single variable rather than a list, and can therefore only hold the most recently recorded predecessor and distance for each vertex. \\
Dijkstra's algorithm does not normally examine multiple paths of the same distance either; however, we have taken care of this in the pseudocode at line 25.
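The pseudocode in listing \ref{lst:Dijkstra} can be sketched as a small runnable Python implementation. This is a minimal illustration of the relaxation loop; the graph below is our own hypothetical example, not part of the report:

```python
import math

def dijkstra(graph, source):
    """Direct transliteration of the pseudocode: graph maps each
    vertex to a dict of {neighbour: edge weight}."""
    dist = {v: math.inf for v in graph}
    previous = {v: None for v in graph}
    dist[source] = 0
    q = set(graph)                           # Q: set of all nodes

    while q:
        u = min(q, key=lambda v: dist[v])    # vertex in Q with smallest dist
        q.remove(u)
        if dist[u] == math.inf:
            break                            # remaining vertices unreachable
        for v, w in graph[u].items():
            alt = dist[u] + w                # cost of reaching v through u
            if alt < dist[v]:                # relaxation step
                dist[v] = alt
                previous[v] = u
    return dist, previous

# hypothetical example graph
graph = {
    'a': {'b': 1, 'c': 4},
    'b': {'c': 2, 'd': 6},
    'c': {'d': 3},
    'd': {},
}
dist, previous = dijkstra(graph, 'a')
```

Finding the minimum by scanning Q matches the pseudocode's $O(|V|^{2})$ behaviour; the heap-based optimisations discussed later replace that scan.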

\begin{lstlisting}[caption=Dijkstra's algorithm backtrack]
S := empty sequence
u := target
while previous[u] is defined:
	insert u at the beginning of S
	u := previous[u]
end while ;
\end{lstlisting}

The backtracking works in the following way: since previous[u] is overwritten each time a better path is found, we can follow the previous pointers in the while-loop until there are no more previous vertices, adding the vertices one at a time at the beginning of a list. In this way we obtain the vertices forming the shortest path.
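The backtracking loop can be sketched in Python as follows. Note that the pseudocode stops once previous[u] is undefined, so the sketch also adds the source itself at the end to return the complete path; the previous table below is a hypothetical example of a Dijkstra run's output:

```python
def backtrack(previous, target):
    """Follow previous[] from the target back towards the source,
    inserting each vertex at the beginning of the sequence S."""
    s = []
    u = target
    while previous.get(u) is not None:
        s.insert(0, u)
        u = previous[u]
    s.insert(0, u)  # u is now the source; include it so the path is complete
    return s

# hypothetical previous table produced by a Dijkstra run
previous = {'a': None, 'b': 'a', 'c': 'b', 'd': 'c'}
path = backtrack(previous, 'd')
```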

\subsubsection{Time Complexity:}
The worst case for Dijkstra's algorithm is f(n) $=$ |E| + $|V|^{2}$ (where |V| is the number of vertices and |E| the number of edges), \\
which is O($|V|^{2}$) in asymptotic notation. \\
The worst case for the backtrack algorithm is linear in the number of vertices |V| on the path provided by Dijkstra's algorithm.

\subsubsection{Optimization}
Dijkstra's algorithm can be optimised in several ways: \\ 
As already mentioned, the default implementation of Dijkstra's algorithm only returns a single shortest path and does not support several shortest paths of the same distance. If several paths are wanted, Dijkstra's algorithm needs to be modified to store a list of previous vertices, and the distance through them, for a given vertex. This must be handled in the relaxation step, lines 20 and 21 in Dijkstra \ref{lst:Dijkstra}. \\
If several paths of the same distance are wanted, this can be handled by adding a new if-condition; see the Dijkstra code \ref{lst:Dijkstra}, line 25. \\
The process of updating the list of vertices and finding the vertex with the smallest distance can be optimised. This is done by implementing specific data structures for handling the data storage of the vertices and is explained in further detail below. 

\subsubsection{Some Data Structures that can be used for Optimization}
Here we describe some of the data structures we consider using to optimize the performance of our algorithm: Min-heap, Priority Queue, and Fibonacci Heap.
For our domain a min-heap specifically is efficient, since the root will always be the vertex with the best known path, i.e. the lowest cost. \\
Operations and running time for a binary min-heap:
\begin{itemize}
\item Find-min O(1): since it is the root element.
\item Delete-min O(log n): as it has to run heapify-down after deleting the root.
\item Insert O(log n): as it will insert at the very left at leaf level and then call heapify-up to attain the heap property.
\item Build-heap O(n): a min-heap over n elements can be built bottom-up in linear time.
\item Decrease-key O(log n): as it replaces the value of an element and runs the heapify-up procedure.
\end{itemize}

\subsubsection{Insert}
To make an insert operation in a heap we have to follow three steps: \\
1. Add the object at the bottom-left of the heap. \\
2. Compare the added object to its parent; if the parent's value is less than or equal, do nothing. \\
3. Else swap the object with its parent and return to step 2.
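The three steps can be sketched as follows, a minimal illustration in Python that represents the heap as a plain list:

```python
def heap_insert(heap, value):
    """Step 1: append at the bottom-left, then bubble up (steps 2-3)."""
    heap.append(value)
    i = len(heap) - 1
    while i > 0:
        parent = (i - 1) // 2
        if heap[parent] <= heap[i]:     # parent smaller or equal: done
            break
        heap[i], heap[parent] = heap[parent], heap[i]   # swap and repeat
        i = parent
    return heap

heap = []
for value in (5, 3, 8, 1):
    heap_insert(heap, value)
```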

\subsubsection{Delete}
To make a delete operation in a heap we have to make 4 steps: \\
1. Delete the root, which is the minimum element in the tree. \\
2. Swap the empty root with the last element added. \\
3. Check if the value of the root is smaller than both of its children. \\
4. If it is smaller, stop; else swap it with the smaller child and return to step 3.
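These four steps can likewise be sketched in Python, continuing the list representation from the insert sketch:

```python
def heap_delete_min(heap):
    """Steps 1-2: remove the root and move the last element into its
    place; then sift it down (steps 3-4)."""
    minimum = heap[0]
    last = heap.pop()
    if heap:
        heap[0] = last
        i, n = 0, len(heap)
        while True:
            left, right = 2 * i + 1, 2 * i + 2
            smallest = i
            if left < n and heap[left] < heap[smallest]:
                smallest = left
            if right < n and heap[right] < heap[smallest]:
                smallest = right
            if smallest == i:        # root smaller than both children: stop
                break
            heap[i], heap[smallest] = heap[smallest], heap[i]
            i = smallest
    return minimum

heap = [1, 3, 8, 5]     # a valid min-heap in array form
smallest = heap_delete_min(heap)
```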

\subsubsection{Create Tree}
To build a min-heap you might think the time complexity would be O(n log n), since we insert n times. This is not the case, however, since the heap can be built directly from an unsorted array in O(n). This is done by starting at the lowest level of the tree and moving upwards, sifting the root of each subtree downwards as in the delete operation.
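The bottom-up construction can be sketched as follows; the sift-down is the same procedure as in the delete operation:

```python
def build_min_heap(values):
    """Builds a min-heap in place: sift down the root of every subtree,
    starting from the last internal node and moving towards index 0."""
    n = len(values)
    for i in range(n // 2 - 1, -1, -1):     # last internal node first
        j = i
        while True:
            left, right = 2 * j + 1, 2 * j + 2
            smallest = j
            if left < n and values[left] < values[smallest]:
                smallest = left
            if right < n and values[right] < values[smallest]:
                smallest = right
            if smallest == j:
                break
            values[j], values[smallest] = values[smallest], values[j]
            j = smallest
    return values

values = [9, 4, 7, 1, 2]    # arbitrary unsorted input
build_min_heap(values)
```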

\subsubsection{Implementation of a Heap}
A heap can be implemented in different ways, for instance as objects with pointers, where each object points at its parent and children. The most common way to implement a binary heap, however, is to put it in an ArrayList, since there is then no need for pointers: parents and children can be found by doing arithmetic on the indices.

\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.65]{includes/graphics/heap.png}
\caption{Binary-heap in an Array}
\label{fig:algorithms:Binaryheap}
\end{center}
\end{figure} 

If we look at the figure we can see what a heap looks like in an ArrayList. If we want to find the left child of an element, we take the index of that element, multiply it by 2 and add 1; for the right child we multiply the index by 2 and add 2. The math looks like this: \\
- left child $=$ a[2i + 1]. \\
- right child $=$ a[2i + 2]. \\

If we want to find the parent of an element, we subtract 1 from its index, divide by 2 and round the result down: \\
parent $=$ a[floor((i-1)/2)], where floor means round down. \\
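The index arithmetic can be expressed directly; in this Python sketch, the // operator is floor division, i.e. it rounds down:

```python
def left_child(i):
    return 2 * i + 1

def right_child(i):
    return 2 * i + 2

def parent(i):
    return (i - 1) // 2    # floor((i - 1) / 2): rounds towards zero from below

# e.g. the children of the root (index 0) sit at indices 1 and 2,
# and both of those indices map back to parent 0
```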


Another type of heap to consider is the Fibonacci heap. Fibonacci heaps also have a Find-min of O(1), but in addition they have the particularly fast operations Insert O(1), Decrease-key O(1) and Merge O(1), which clearly makes them interesting in this context. We are not going to go into further detail with this, though. \\ 
An important thing to consider as well is that none of these heaps supports searching, meaning that if a search operation is required another alternative has to be used. \\
 
A priority queue is another structure that can be used and implemented in many ways, depending on how we want it to function. In our case it would be practical to implement it as an array sorted in decreasing order, since the minimum value is then at the end of the array and removing it costs O(1). If it were at the front of the array, we would have to move every element one place up, and the time complexity would be O(n) instead of O(1).
\begin{itemize}
\item Find-min O(1): since it is the last element in the array.
\item Delete-min O(1): since it is the last element in the array.
\item Insert O(n): since the array has to be kept sorted.
\end{itemize}
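This decreasing-order array can be sketched as a small Python class (the class name is our own, chosen for illustration):

```python
class DecreasingArrayPQ:
    """Priority queue kept as an array in decreasing order, so the
    minimum element is always at the end."""

    def __init__(self):
        self.items = []

    def insert(self, value):
        # O(n): walk past the larger elements, then shift the rest right
        i = 0
        while i < len(self.items) and self.items[i] > value:
            i += 1
        self.items.insert(i, value)

    def find_min(self):
        return self.items[-1]      # O(1): last element

    def delete_min(self):
        return self.items.pop()    # O(1): pop from the end

pq = DecreasingArrayPQ()
for v in (5, 1, 3):
    pq.insert(v)
```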

\subsubsection{Algorithm Alternatives}
Another algorithm we considered using is the A* algorithm.
A* search is often used for finding a shortest path from a single source to a single destination. It behaves like Best-First-Search in that it uses a heuristic function h(n) for an optimistic estimation of the minimum cost from any vertex n to the target. It is an optimisation of Dijkstra's algorithm in the sense that it runs quicker because of this heuristic function; in other words it combines Dijkstra's algorithm with Best-First-Search. It also has a special case, h(n) $=$ 0, where it acts like Dijkstra's algorithm, as f(n) $=$ g(n) + h(n) becomes f(n) $=$ g(n).
More details about A* search can be found in the appendix, listing~\ref{sec:A star search}. \\
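A minimal sketch of A* follows; the graph is our own hypothetical example, and with the trivial heuristic h(n) $=$ 0 the search behaves exactly like Dijkstra's algorithm:

```python
import heapq

def a_star(graph, source, target, h):
    """Expands vertices in order of f(n) = g(n) + h(n)."""
    dist = {source: 0}                   # g(n): best known cost from source
    frontier = [(h(source), source)]     # min-heap ordered by f(n)
    while frontier:
        _, u = heapq.heappop(frontier)
        if u == target:
            return dist[u]
        for v, w in graph[u].items():
            alt = dist[u] + w
            if alt < dist.get(v, float('inf')):     # relaxation step
                dist[v] = alt
                heapq.heappush(frontier, (alt + h(v), v))
    return float('inf')

# hypothetical example graph; h(n) = 0 reduces A* to Dijkstra's algorithm
graph = {'a': {'b': 1, 'c': 4}, 'b': {'c': 2, 'd': 6}, 'c': {'d': 3}, 'd': {}}
cost = a_star(graph, 'a', 'd', lambda n: 0)
```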
Another option is using Breadth-first search and returning all possible paths, in order to sort out undesired results afterwards. It is worth considering a bi-directional Breadth-first search and how big a part of the search can be parallelised, considering that the critical resource is the complete list of vertices to be visited.