\mode<article>{\usepackage{fullpage}}

\usepackage{listings}
\lstset{language=Java,
        basicstyle=\small}

\usepackage{graphicx}
\usepackage{hyperref}

\hypersetup{
  colorlinks=true,
  urlcolor=blue,
  linkcolor=black
}

\title{Lecture Thirteen\\Sorting by Comparison}
\author{Matt Bon\'e}
\date{\today}

\begin{document}

\mode<article>{\maketitle}
\tableofcontents
\mode<article>{\pagebreak}
\mode<presentation>{\frame{\titlepage}}
\mode<article>{\setlength{\parskip}{.25cm}}
\mode<all>{\bibliographystyle{abbrvnat}}

\section{Comparison Sorts}
\begin{frame}[fragile]
  \mode<presentation>{\frametitle{Comparison Sorts}}
  We are concerned with two kinds of sorts, comparison sorts and sorts
  that utilize number representations to garner additional information.
\end{frame}

Here we will only talk about comparison sorts.  These sorting algorithms
are probably what you expect.  They sort by making explicit comparisons between
the keys in the input, that is, the items we would like to sort.

\subsection{Possible Outcomes}
If we consider three distinct integers $a$, $b$, and $c$, there are
only six possible orderings (from least to greatest):

\begin{frame}
\mode<presentation>{\frametitle{Possible Outcomes for Sorting Three Integers}}
$a,b,c$ (i.e. $a<b<c$) \\
$a,c,b$ \\
$b,a,c$ \\
$b,c,a$ \\
$c,a,b$ \\
$c,b,a$ \\
\end{frame}

\subsection{A Sorting Decision Tree}
We can think of each of these outcomes as a leaf on a decision tree for
an algorithm that sorts three integers (going left means the condition
of the parent was true).

\begin{frame}[fragile]
  \mode<presentation>{\frametitle{Three Integer Decision Tree}}
  \includegraphics[height=7cm]{decisiontree}
\end{frame}

The height of this tree is three, and no matter how we rearrange it,
the tree cannot get any shorter.  The number of leaves in this case is
$6$, or $3!$, the number of permutations of a list of three distinct
items.  Generalizing our sorting algorithm from $3$ to $n$ elements,
the decision tree must have at least $n!$ leaves, one per permutation.
A binary tree with $n!$ leaves has height at least $\lceil \lg n!
\rceil$, so reaching a leaf requires at least $\lceil \lg n! \rceil$
comparisons, and $\lg n!$ can be shown to be $\Omega (n \lg n)$.  This
means that $n \lg n$ is a lower bound on worst-case behavior: no
matter what comparison sorting algorithm you develop, the algorithm's
very worst case will grow at least as fast as $n \lg n$.
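To see why $\lg n!$ is $\Omega(n \lg n)$, note that the largest $n/2$
factors of $n!$ (namely $n, n-1, \ldots, n/2+1$) are each at least
$n/2$:
\begin{eqnarray*}
n! & \ge & \left(\frac{n}{2}\right)^{n/2} \\
\lg n! & \ge & \frac{n}{2} \lg \frac{n}{2} \;=\; \Omega(n \lg n)
\end{eqnarray*}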


\section{Some Sorting Algorithms}
Here we will look at some sorting algorithms.  For animations of these
algorithms and more analysis, consult
\href{http://www.tools-of-computing.com/tc/CS/Sorts/SortAlgorithms.htm}{this
  page} by Dr. Thomas Christopher (Dr. George Thiruvathukal's advisor).
\subsection{Some Considerations}
When we look at sorting algorithms we want to know several things:

\begin{frame}[fragile]
  \mode<presentation>{\frametitle{Sorting Algorithm Considerations}}
  \begin{itemize}
    \item How much memory does the sort use?
    \item Is the sort stable?
    \item How fast is the sort?
    \item How is its locality?
  \end{itemize}
\end{frame}

\subsection{Insertion Sort}

\begin{frame}[fragile]
\mode<presentation>{\frametitle{Insertion Sort}}
\begin{lstlisting}
  public static void insertionSort(int[] array) {
    int j;

    for(int p=1; p<array.length; p++) {
      int tmp = array[p];
      for(j=p; j>0 && array[j-1]>tmp; j--) {
        array[j] = array[j-1];
      }
      array[j] = tmp;
    }
  }
\end{lstlisting}
\end{frame}

With its nested loops, insertion sort obviously runs in $O(n^2)$ time.
However, the algorithm is quite efficient for small lists and
almost-sorted lists because of its locality and the early-terminating
inner loop.  The sort uses no additional memory and is stable.

\subsection{Heap Sort}
The version shown here is very naive: it removes the elements from our
original data structure and places them in a separate heap.  Recall,
however, that heaps may be represented as arrays, so when sorting an
array we can work in place.  Essentially, we convert the unsorted
array into a heap, then designate a sorted section and an unsorted
section of the array.  We repeatedly move the top item of the heap
into the sorted section (depending on what kind of heap is used and
where the sorted section sits, this may be as simple as a swap).

\begin{frame}[fragile]
\mode<presentation>{\frametitle{Heap Sort}}
\begin{lstlisting}
public static List<Integer> 
    heapSort(List<Integer> unsorted) {

  PriorityQueue<Integer> heap = 
          new PriorityQueue<Integer>();

  List<Integer> sorted = 
          new ArrayList<Integer>(unsorted.size());

  for(Integer i: unsorted) {
    heap.add(i);
  }
  while(!heap.isEmpty()) {
    sorted.add(heap.remove());
  }

  return sorted;
}
\end{lstlisting}
\end{frame}

Remember that insertions into and removals from a heap each run in
$O(\lg n)$ time.  Since there are $n$ insertions and $n$ removals to
perform, heap sort runs in $O(n \lg n)$.  The sort is not stable.
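The in-place variant described earlier can be sketched as follows.
This is only an illustration, not code from the lecture: it builds a
max-heap by percolating down from the last internal node (a standard
$O(n)$ construction, rather than percolating up each element), then
grows a sorted section at the end of the array by repeatedly swapping
the root with the last unsorted element.

```java
// A sketch of an in-place heapsort: build a max-heap over the array,
// then repeatedly swap the root with the last unsorted element and
// restore the heap over the shrinking unsorted section.
public class InPlaceHeapSort {

  public static void heapSort(int[] a) {
    // Build a max-heap: percolate down from the last internal node.
    for (int i = a.length / 2 - 1; i >= 0; i--) {
      percolateDown(a, i, a.length);
    }
    // Shrink the heap, growing the sorted section at the array's end.
    for (int end = a.length - 1; end > 0; end--) {
      swap(a, 0, end);           // largest remaining element to its place
      percolateDown(a, 0, end);  // restore the heap over a[0..end-1]
    }
  }

  private static void percolateDown(int[] a, int i, int size) {
    int child;
    int tmp = a[i];
    for (; 2 * i + 1 < size; i = child) {
      child = 2 * i + 1;
      if (child + 1 < size && a[child + 1] > a[child]) {
        child++;  // pick the larger of the two children
      }
      if (a[child] > tmp) {
        a[i] = a[child];  // hole moves down
      } else {
        break;
      }
    }
    a[i] = tmp;
  }

  private static void swap(int[] a, int i, int j) {
    int t = a[i]; a[i] = a[j]; a[j] = t;
  }
}
```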

\subsection{Merge Sort}
First let us see how to merge  two sorted arrays:

\begin{frame}[fragile]
%\mode<presentation>{\frametitle{Merging Two Sorted Arrays}}
\begin{lstlisting}
private static void merge(int[] a, int[] tmpArray, 
    int leftPos, int rightPos, int rightEnd) {
  int leftEnd = rightPos - 1;
  int tmpPos = leftPos;
  int numElements = rightEnd - leftPos + 1;
  while(leftPos <= leftEnd && rightPos <= rightEnd) {
    if( a[leftPos]<a[rightPos]) {
      tmpArray[tmpPos++] = a[leftPos++];
    } else {
      tmpArray[tmpPos++] = a[rightPos++];
    }
  }
  while(leftPos <= leftEnd) {
    tmpArray[tmpPos++] = a[leftPos++];
  }
  while(rightPos <= rightEnd ) { 
    tmpArray[tmpPos++] = a[rightPos++];
  }
  for(int i = 0; i<numElements; i++, rightEnd--) {
    a[rightEnd] = tmpArray[rightEnd];
  }
}
\end{lstlisting}
\end{frame}

This algorithm is linear in the total size of the two arrays being
merged.  Now we can use this merge by performing a mergesort on the
left and right halves of an array and merging the results.  This
process continues recursively until the array has been sorted:

\begin{frame}[fragile]
\mode<presentation>{\frametitle{Merge Sort}}
\begin{lstlisting}
private static void mergeSort(int[] a, int[] tmpArray, 
    int left, int right ) {
  if(left < right) {
    int center = (left + right) / 2;
    mergeSort(a, tmpArray, left, center);
    mergeSort(a, tmpArray, center + 1, right);
    merge(a, tmpArray, left, center + 1, right);
  }
}
\end{lstlisting}
\end{frame}
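The recursive routine above still needs a public entry point that
allocates the temporary array.  A self-contained sketch combining the
two routines with such a driver might look like the following (the
class name and driver signature are assumptions for illustration, and
the comparison uses \texttt{<=} so the merge is stable):

```java
// Self-contained merge sort with a public driver that allocates the
// temporary array once before starting the recursion.
public class MergeSortDemo {

  public static void mergeSort(int[] a) {
    int[] tmpArray = new int[a.length];
    mergeSort(a, tmpArray, 0, a.length - 1);
  }

  private static void mergeSort(int[] a, int[] tmpArray,
      int left, int right) {
    if (left < right) {
      int center = (left + right) / 2;
      mergeSort(a, tmpArray, left, center);
      mergeSort(a, tmpArray, center + 1, right);
      merge(a, tmpArray, left, center + 1, right);
    }
  }

  private static void merge(int[] a, int[] tmpArray,
      int leftPos, int rightPos, int rightEnd) {
    int leftEnd = rightPos - 1;
    int tmpPos = leftPos;
    int numElements = rightEnd - leftPos + 1;
    // Take the smaller head of the two runs until one is exhausted.
    while (leftPos <= leftEnd && rightPos <= rightEnd) {
      if (a[leftPos] <= a[rightPos]) {  // <= keeps the sort stable
        tmpArray[tmpPos++] = a[leftPos++];
      } else {
        tmpArray[tmpPos++] = a[rightPos++];
      }
    }
    // Copy whatever remains of either run.
    while (leftPos <= leftEnd) {
      tmpArray[tmpPos++] = a[leftPos++];
    }
    while (rightPos <= rightEnd) {
      tmpArray[tmpPos++] = a[rightPos++];
    }
    // Copy the merged run back into the original array.
    for (int i = 0; i < numElements; i++, rightEnd--) {
      a[rightEnd] = tmpArray[rightEnd];
    }
  }
}
```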

Merge sort runs in $O(n \lg n)$ time and, depending on how it is
implemented, may be dominated by the recursive calls or by memory
allocation.  Merge sort implementations are usually stable, but
require additional storage.  An interesting property is that a merge
sort implementation typically performs fewer key comparisons on a
given input than a quicksort implementation.

\subsection{Quicksort}
In quicksort, we choose a pivot and move all elements less than the
pivot to the left side of the array and all elements greater than the
pivot to the right side.  The pivot itself is placed between the two
sides (though this is usually not the center of the array).  We then
call quicksort on the left and right sides, and the process continues
recursively until the array is sorted.  A very simple implementation
in Haskell is shown below:

\begin{frame}[fragile]
\mode<presentation>{\frametitle{Quicksort}}
\begin{lstlisting}[language=Haskell]
qsort []     = []
qsort (x:xs) = qsort smaller ++ [x] ++ qsort bigger
    where smaller = filter (<x)  xs
          bigger = filter (>=x) xs
\end{lstlisting}
\end{frame}

In the worst case quicksort runs in $n^2$ time, so it is appropriate
to consider quicksort an $O(n^2)$ algorithm.  \emph{However}, a more
detailed analysis shows that quicksort runs in $n \lg n$ time on
average, which is why the algorithm is so widely used.
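The Haskell version above builds new lists rather than partitioning
in place.  The in-place scheme described in the prose can be sketched
in Java as follows; this uses a simple Lomuto-style partition with the
last element as pivot (the pivot choice is an assumption for
simplicity, and real implementations typically use something like
median-of-three to avoid the $n^2$ worst case on sorted input):

```java
// In-place quicksort sketch: partition around a pivot, then recurse
// on the two sides.
public class QuickSortSketch {

  public static void quickSort(int[] a) {
    quickSort(a, 0, a.length - 1);
  }

  private static void quickSort(int[] a, int left, int right) {
    if (left < right) {
      int p = partition(a, left, right);
      quickSort(a, left, p - 1);   // elements less than the pivot
      quickSort(a, p + 1, right);  // elements greater than the pivot
    }
  }

  // Lomuto partition: pivot is a[right]; elements < pivot are moved
  // to the left side, and the pivot lands between the two sides.
  private static int partition(int[] a, int left, int right) {
    int pivot = a[right];
    int i = left;  // next slot in the "less than pivot" region
    for (int j = left; j < right; j++) {
      if (a[j] < pivot) {
        swap(a, i, j);
        i++;
      }
    }
    swap(a, i, right);  // put the pivot in its final position
    return i;
  }

  private static void swap(int[] a, int i, int j) {
    int t = a[i]; a[i] = a[j]; a[j] = t;
  }
}
```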

\mode<all>{\bibliography{sources}}

\end{document}
