\documentclass[12pt]{book}

\title{Think Data Structures}
\author{Allen B. Downey}

\newcommand{\thetitle}{Think Data Structures}
\newcommand{\thesubtitle}{Algorithms and Information Retrieval in Java}
\newcommand{\theauthors}{Allen B. Downey}
\newcommand{\theversion}{1.0.1}

%%%% Both LATEX and PLASTEX

\usepackage{graphicx}
\usepackage{hevea}
\usepackage{makeidx}
\usepackage{setspace}
%\usepackage{longtable}
\usepackage{booktabs}

%\usepackage{draftwatermark}
%\SetWatermarkText{DRAFT: Not ready for distribution!}
%\SetWatermarkScale{1}

\makeindex

% automatically index glossary terms
\newcommand{\term}[1]{%
\item[#1:]\index{#1}}

\usepackage{amsmath}
\usepackage{amsthm}

% format end of chapter exercises
\newtheoremstyle{exercise}
  {12pt}        % space above
  {12pt}        % space below
  {}            % body font
  {}            % indent amount
  {\bfseries}   % head font
  {}            % punctuation
  {12pt}        % head space
  {}            % custom head
\theoremstyle{exercise}
\newtheorem{exercise}{Exercise}[chapter]

\newif\ifplastex
\plastexfalse

%%%% PLASTEX ONLY
\ifplastex

\usepackage{localdef}

\usepackage{url}

\newcount\anchorcnt
\newcommand*{\Anchor}[1]{%
  \@bsphack%
    \Hy@GlobalStepCount\anchorcnt%
    \edef\@currentHref{anchor.\the\anchorcnt}%
    \Hy@raisedlink{\hyper@anchorstart{\@currentHref}\hyper@anchorend}%
    \M@gettitle{}\label{#1}%
    \@esphack%
}

% code listing environments:
% we don't need these for plastex because they get replaced
% by preprocess.py
%\newenvironment{code}{\begin{verbatim}}{\end{verbatim}}
%\newenvironment{stdout}{\begin{verbatim}}{\end{verbatim}}

% inline syntax formatting
\newcommand{\java}{\verb}%}

%%%% LATEX ONLY
\else

\input{latexonly}

\fi

%%%% END OF PREAMBLE
\begin{document}

\frontmatter

%%%% PLASTEX ONLY
\ifplastex

\maketitle

%%%% LATEX ONLY
\else

\begin{latexonly}

%--half title-------------------------------------------------
\thispagestyle{empty}

\begin{flushright}
\vspace*{2.0in}

\begin{spacing}{3}
{\huge \thetitle} \\
{\Large \thesubtitle}
\end{spacing}

\vspace{0.25in}

Version \theversion

\vfill
\end{flushright}

%--verso------------------------------------------------------
\newpage
\thispagestyle{empty}

\quad

%--title page-------------------------------------------------
\newpage
\thispagestyle{empty}

\begin{flushright}
\vspace*{2.0in}

\begin{spacing}{3}
{\huge \thetitle} \\
{\Large \thesubtitle}
\end{spacing}

\vspace{0.25in}

Version \theversion

\vspace{1in}

{\Large \theauthors}

\vspace{0.5in}

{\Large Green Tea Press}

{\small Needham, Massachusetts}

\vfill
\end{flushright}

%--copyright--------------------------------------------------
\newpage
\thispagestyle{empty}

Copyright \copyright ~2016 \theauthors.

\vspace{0.2in}

\begin{flushleft}
Green Tea Press \\
9 Washburn Ave \\
Needham, MA 02492
\end{flushleft}

Permission is granted to copy, distribute, and/or modify this work
under the terms of the Creative Commons
Attribution-NonCommercial-ShareAlike 3.0 Unported License, which is
available at \url{http://thinkdast.com/cc30}.

The original form of this book is \LaTeX\ source code.  Compiling this
code has the effect of generating a device-independent representation
of the book, which can be converted to other formats and printed.

The \LaTeX\ source for this book is available from
\url{http://thinkdast.com/repo}.

%--table of contents------------------------------------------

\cleardoublepage
\setcounter{tocdepth}{1}
\tableofcontents

\end{latexonly}

%--HTML title page--------------------------------------------

\begin{htmlonly}

\vspace{1em}

{\Large \thetitle: \thesubtitle}

{\large \theauthors}

Version \theversion

\vspace{1em}

Copyright \copyright ~2016 \theauthors.

Permission is granted to copy, distribute, and/or modify this work
under the terms of the Creative Commons
Attribution-NonCommercial-ShareAlike 3.0 Unported License, which is
available at \url{http://thinkdast.com/cc30}.

\vspace{1em}

\end{htmlonly}

%-------------------------------------------------------------

% END OF THE PART WE SKIP FOR PLASTEX
\fi


\chapter*{Preface}
\label{preface}

\markboth{PREFACE}{PREFACE}
\addcontentsline{toc}{chapter}{Preface}


\section*{The philosophy behind the book}

Data structures and algorithms are among the most important inventions
of the last 50 years, and they are fundamental tools software
engineers need to know.  But in my opinion, most of the books on these
topics are too theoretical, too big, and too ``bottom up'':

\begin{description}

\item[Too theoretical]  Mathematical analysis of algorithms is based
on simplifying assumptions that limit its usefulness in practice.
Many presentations of this topic gloss over the simplifications and
focus on the math.  In this book I present the most practical subset
of this material and omit or de-emphasize the rest.

\item[Too big] Most books on these topics are at least 500 pages,
and some are more than 1000.  By focusing on the topics I think are
most useful for software engineers, I kept this book under
200 pages.

\item[Too ``bottom up''] Many data structures books focus on how data
  structures work (the implementations), with less about how to use
  them (the interfaces).  In this book, I go ``top down'', starting
  with the interfaces.  Readers learn to use the structures in the
  Java Collections Framework before getting into the details of how
  they work.

\end{description}

Finally, some books present this material out of context and without
motivation: it's just one damn data structure after another!
I try to liven it up by organizing the topics around an
application --- web search --- that uses data structures extensively,
and is an interesting and important topic in its own right.

\index{web search}

This application motivates some topics that are not usually
covered in an introductory data structures class, including persistent
data structures with Redis.

\index{Redis}

I made difficult decisions about what to leave out, and along the way
I made some compromises.  I include a few topics
that most readers will never use, but that they might be expected to
know, possibly in a technical interview.  For these topics, I
present both the conventional wisdom as well as my reasons to be
skeptical. 

This book also presents basic aspects of software engineering practice,
including version control and unit testing.  Most chapters include
an exercise that allows readers to apply what they have learned.
Each exercise provides automated tests that check the solution.
And for most exercises, I present my solution at the beginning of
the next chapter.

\index{unit testing}
\index{version control}


\section{Prerequisites}
\label{prerequisites}

This book is intended for college students in computer science and
related fields, as well as professional software engineers, people
training in software engineering, and people preparing for technical
interviews.

Before you start this book, you should know Java pretty well; in
particular, you should know how to define a new class that extends an
existing class or implements an \java{interface}. If your Java is rusty, 
here are two books you might start with:

\begin{itemize}

\item Downey and Mayfield, {\it Think Java} (O'Reilly Media, 2016),
which is intended for people who have never programmed before.

\item Sierra and Bates, {\it Head First Java} (O'Reilly Media, 2005),
which is appropriate for people who already know another programming
language.

\end{itemize}

If you are not familiar with interfaces in Java, you might want to
work through the tutorial called ``What Is an Interface?'' at
\url{http://thinkdast.com/interface}.

\index{interface}

One vocabulary note: the word ``interface'' can be confusing. In the
context of an {\bf application programming interface} (API), it refers
to a set of classes and methods that provide certain capabilities.

\index{application programming interface}
\index{API}

In the context of Java, it also refers to a language feature, similar to
a class, that specifies a set of methods. To help avoid confusion,
I'll use ``interface'' in the normal typeface for the general idea of
an interface, and \java{interface} in the code typeface for the Java
language feature.

You should also be familiar with type parameters and generic types.
For example, you should know how to create an object with a type
parameter, like \java{ArrayList<Integer>}.  If not, you can read about
type parameters at \url{http://thinkdast.com/types}.

\index{type parameter}

You should be familiar with the Java Collections Framework
(JCF), which you can read about at
\url{http://thinkdast.com/collections}.
In particular, you should know about the \java{List} \java{interface}
and the classes \java{ArrayList} and \java{LinkedList}.

\index{Java Collections Framework}
\index{JCF}

Ideally you should be familiar with Apache Ant, which is an automated
build tool for Java.  You can read more about Ant at
\url{http://thinkdast.com/anttut}.

\index{Apache Ant}
\index{Ant}

And you should be familiar with {\tt JUnit}, which is a unit testing
framework for Java.  You can read more about it at
\url{http://thinkdast.com/junit}.

\index{JUnit}


\section*{Working with the code}
\label{code}

The code for this book is in a Git repository at
\url{http://thinkdast.com/repo}.

\index{Git}
\index{version control}

Git is a ``version control system'' that allows you to keep track of
the files that make up a project.  A collection of files under Git's
control is called a ``repository''.

\index{repository}
\index{GitHub}

GitHub is a hosting service that provides storage for Git repositories
and a convenient web interface.  It provides several ways to work with
the code:

\begin{itemize}

\item You can create a copy of the repository on GitHub by pressing
  the {\sf Fork} button.  If you don't already have a GitHub account,
  you'll need to create one.  After forking, you'll have your own
  repository on GitHub that you can use to keep track of code you
  write.  Then you can ``clone'' the repository, which downloads a
  copy of the files to your computer.

\index{fork}
\index{clone}

\item Alternatively, you could clone the repository without forking.
  If you choose this option, you don't need a GitHub account, but you
  won't be able to save your changes on GitHub.

\item If you don't want to use Git at all, you can download the code
  in a ZIP archive using the {\sf Download} button on the GitHub
  page, or this link: \url{http://thinkdast.com/zip}.

\end{itemize}

After you clone the repository or unzip the ZIP file, you should have
a directory called {\tt ThinkDataStructures} with a subdirectory
called {\tt code}.

The examples in this book were developed and tested using Java SE
Development Kit 7.  If you are using an older version, some examples
will not work.  If you are using a more recent version, they should
all work.

\index{Java SDK}

\section*{Contributors}

This book is an adapted version of a curriculum I wrote for the
Flatiron School in New York City, which offers a variety of online
classes related to programming and web development.  They offer a
class based on this material, which provides an online development
environment, help from instructors and other students, and a
certificate of completion.  You can find more information at
\url{http://flatironschool.com}.


\begin{itemize}

\item At the Flatiron School, Joe Burgess, Ann John, and Charles
  Pletcher provided guidance, suggestions, and corrections from the
  initial specification all the way through implementation and
  testing.  Thank you all!

\item I am very grateful to my technical reviewers, Barry Whitman,
  Patrick White, and Chris Mayfield, who made many helpful suggestions
  and caught many errors.  Of course, any remaining errors are my
  fault, not theirs!

\item Thanks to the instructors and students in Data Structures and
  Algorithms at Olin College, who read this book and provided useful
  feedback.

\item Charles Roumeliotis copyedited the book for O'Reilly Media
and made many improvements.

% ENDCONTRIB

\end{itemize}


% Additional contributors who found one or more typos: coming soon, I'm sure.

If you have comments or ideas about the text, please send
them to: {\tt feedback@greenteapress.com}.


\mainmatter

\chapter{Interfaces}
\label{cs-lists-programming-to-an-interface-readme}

This book presents three topics:

\begin{itemize}

\item Data structures: Starting with the structures in the Java
Collections Framework (JCF), you will learn how to use data structures
like lists and maps, and you will see how they work.

\index{data structures}

\item Analysis of algorithms: I present techniques for analyzing
code and predicting how fast it will run and how much space (memory) it
will require.

\index{analysis of algorithms}

\item Information retrieval: To motivate the first two topics, and to
make the exercises more interesting, we will use data structures and
algorithms to build a simple web search engine.

\index{information retrieval}
\index{search engine}

\end{itemize}

Here's an outline of the order of topics:

\begin{itemize}

\item We'll start with the {\tt List} interface and you will write
classes that implement this interface two different ways.  Then we'll
compare your implementations with the Java classes {\tt ArrayList} and
{\tt LinkedList}.

\index{List}
\index{ArrayList}
\index{LinkedList}

\item Next I'll introduce tree-shaped data structures and you will
work on the first application: a program that reads pages from Wikipedia,
parses the contents, and navigates the resulting tree to find links
and other features.  We'll use these tools to test the ``Getting
to Philosophy'' conjecture (you can get a preview by reading
\url{http://thinkdast.com/getphil}).

\index{tree}
\index{Getting to Philosophy}

\item We'll learn about the {\tt Map} interface and Java's
{\tt HashMap} implementation.  Then you'll write classes that implement
this interface using a hash table and a binary search tree.

\index{Map}
\index{HashMap}
\index{hash table}
\index{binary search tree}

\item Finally, you will use these classes (and a few others I'll present
along the way) to implement a web search engine, including: a crawler that
finds and reads pages, an indexer that stores the contents of Web
pages in a form that can be searched efficiently, and a retriever
that takes queries from a user and returns relevant results.

\index{web search}
\index{crawler}
\index{indexer}
\index{retriever}


\end{itemize}

Let's get started.


\section{Why are there two kinds of \java{List}?}
\label{why-are-there-two-kinds-of-list}

When people start working with the Java Collections Framework, they
are sometimes confused about \java{ArrayList} and
\java{LinkedList}.  Why does Java provide two implementations of
the \java{List} \java{interface}?  And how should you choose which one to
use?  We will answer these questions in the next few chapters.

\index{List}
\index{ArrayList}
\index{LinkedList}

I'll start by reviewing \java{interface}s and the classes that
implement them, and I'll present the idea of ``programming to an
interface''.

\index{interface}

In the first few exercises, you'll implement classes
similar to \java{ArrayList} and \java{LinkedList}, so you'll know how
they work, and we'll see that each of them has pros and cons.  Some
operations are faster or use less space with \java{ArrayList}; others
are faster or smaller with \java{LinkedList}.  Which one is better for
a particular application depends on which operations it performs most
often.


\section{Interfaces in Java}
\label{interfaces-in-java}

A Java \java{interface} specifies a set of methods; any class that
implements this \java{interface} has to provide these methods. For
example, here is the source code for \java{Comparable}, which is an
\java{interface} defined in the package \java{java.lang}:

\index{Comparable}

\begin{verbatim}
public interface Comparable<T> {
    public int compareTo(T o);
}
\end{verbatim}

\index{generic type}

This \java{interface} definition uses a type parameter, \java{T}, which
makes \java{Comparable} a {\bf generic type}.  
In order to implement this \java{interface}, a class has to

\begin{itemize}

\item Specify the type \java{T} refers to, and

\item Provide a method
named \java{compareTo} that takes an object as a parameter and returns
an \java{int}.

\end{itemize}
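As a sketch of these two steps, here is a hypothetical class (I'll call it \java{Version}; it is not part of the Java library or the book's code) that implements \java{Comparable} by comparing a single \java{int} field:

```java
// Hypothetical example (not from the book's code): specifies that
// T refers to Version, and provides a compareTo method that takes
// a Version and returns an int.
public class Version implements Comparable<Version> {
    private final int number;

    public Version(int number) {
        this.number = number;
    }

    public int compareTo(Version other) {
        return Integer.compare(this.number, other.number);
    }
}
```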

\index{compareTo}
\index{Integer}

For example, here's the source code for \java{java.lang.Integer}:

\begin{verbatim}
public final class Integer extends Number implements Comparable<Integer> {

    public int compareTo(Integer anotherInteger) {
        int thisVal = this.value;
        int anotherVal = anotherInteger.value;
        return (thisVal<anotherVal ? -1 : (thisVal==anotherVal ? 0 : 1));
    }

    // other methods omitted
}
\end{verbatim}

This class extends \java{Number}, so it inherits the methods and
instance variables of \java{Number}; and it implements
\java{Comparable<Integer>}, so it provides a
method named \java{compareTo} that takes an \java{Integer} and
returns an \java{int}.

\index{Number}
\index{Comparable}

When a class declares that it implements an \java{interface}, the compiler
checks that it provides all methods defined by the
\java{interface}.

\index{ternary operator}

As an aside, this implementation of \java{compareTo} uses the
``ternary operator'', sometimes written \java{?:}.  If you are not
familiar with it, you can read about it at
\url{http://thinkdast.com/ternary}.
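If you have not seen it before, this sketch (the class name \java{TernaryExample} is mine, not from the book's code) shows a ternary expression next to the equivalent \java{if}/\java{else} chain:

```java
// Hypothetical example: the two methods below always return the
// same result; the first uses the ternary operator, the second
// spells out the equivalent if/else logic.
public class TernaryExample {

    public static int compare(int a, int b) {
        return (a < b ? -1 : (a == b ? 0 : 1));
    }

    public static int compareVerbose(int a, int b) {
        if (a < b) {
            return -1;
        } else if (a == b) {
            return 0;
        } else {
            return 1;
        }
    }
}
```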


\section{The List interface}
\label{the-list-interface}

The Java Collections Framework (JCF) defines an \java{interface} called
\java{List} and provides two implementations, \java{ArrayList} and
\java{LinkedList}.

\index{List}

The \java{interface} defines what it means to be a \java{List}; any
class that implements this \java{interface} has to provide a
particular set of methods, including \java{add}, \java{get},
\java{remove}, and about 20 more.

\java{ArrayList} and \java{LinkedList} provide these methods, so
they can be used interchangeably. A method written to work with a
\java{List} will work with an \java{ArrayList}, \java{LinkedList},
or any other object that implements \java{List}.

\index{ArrayList}
\index{LinkedList}

Here's a contrived example that demonstrates the point:

\begin{verbatim}
public class ListClientExample {
    private List list;
    
    public ListClientExample() {
        list = new LinkedList();
    }

    private List getList() {
        return list;        
    }

    public static void main(String[] args) {
        ListClientExample lce = new ListClientExample();
        List list = lce.getList();
        System.out.println(list);
    }
}
\end{verbatim}

\index{ListClientExample}
\index{encapsulate}

\java{ListClientExample} doesn't do anything useful, but it has the
essential elements of a class that {\bf encapsulates} a
\java{List}; that is, it contains a \java{List} as an instance
variable.  I'll use this class to make a point, and then you'll work
with it in the first exercise.

\index{instantiate}

The \java{ListClientExample} constructor initializes \java{list} by
{\bf instantiating} (that is, creating) a new \java{LinkedList}; the
getter method called \java{getList} returns a reference to the
internal \java{List} object; and \java{main} contains a few lines of
code to test these methods.

The important thing about this example is that it uses \java{List}
whenever possible and avoids specifying \java{LinkedList} or
\java{ArrayList} unless it is necessary. For example, the instance
variable is declared to be a \java{List}, and \java{getList} returns
a \java{List}, but neither specifies which kind of list.

If you change your mind and decide to use an \java{ArrayList}, you
only have to change the constructor; you don't have to make any other
changes.

\index{interface-based programming}
\index{programming to an interface}

This style is called {\bf interface-based programming},
or more casually, ``programming to an interface''
(see \url{http://thinkdast.com/interbaseprog}).
Here we are talking about the general idea of an interface,
not a Java \java{interface}.

When you use a library, your code should only depend on the interface,
like \java{List}.  It should not depend on a specific
implementation, like \java{ArrayList}. That way, if the implementation
changes in the future, the code that uses it will still work.

On the other hand, if the interface changes, the code that depends on
it has to change, too.  That's why library developers avoid changing
interfaces unless absolutely necessary.
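As a sketch of this idea (the class \java{InterfaceDemo} is hypothetical, not part of the book's repository), here is a method that depends only on \java{List}, so it works unchanged with \java{ArrayList}, \java{LinkedList}, or any other implementation:

```java
import java.util.List;

public class InterfaceDemo {

    // Hypothetical example: depends only on the List interface,
    // so the caller can pass any implementation of List.
    public static int countPositive(List<Integer> list) {
        int count = 0;
        for (int x : list) {
            if (x > 0) {
                count++;
            }
        }
        return count;
    }
}
```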


\section{Exercise 1}
\label{warming-up}

Since this is the first exercise, we'll keep it simple. You will take
the code from the previous section and {\bf swap the implementation};
that is, you will replace the \java{LinkedList} with an
\java{ArrayList}.  Because the code programs to an interface, you will
be able to swap the implementation by changing a single line and
adding an \java{import} statement.

Start by setting up your development environment. For all of the
exercises, you will need to be able to compile and run Java code.  I
developed the examples using Java SE Development Kit 7.  If you are
using a more recent version, everything should still work.  If you are
using an older version, you might find some incompatibilities.

\index{Java SDK}
\index{interactive development environment}
\index{IDE}

I recommend using an interactive development environment (IDE) that
provides syntax checking, auto-completion, and source code
refactoring.  These features help you avoid errors or find them
quickly.  However, if you are preparing for a technical interview,
remember that you will not have these tools during the interview, so
you might also want to practice writing code without them.

If you have not already downloaded the code for this book, see the
instructions in Section~\ref{code}.

In the directory named {\tt code}, you should find these files and
directories:

\begin{itemize}
  \item
    \java{build.xml} is an Ant file that makes it easier to compile
    and run the code.

  \item
    \java{lib} contains the libraries you'll need (for this
    exercise, just JUnit).

  \item
    \java{src} contains the source code.

\end{itemize}

If you navigate into \java{src/com/allendowney/thinkdast},
you'll find the source code for this exercise:

\begin{itemize}

  \item
    \java{ListClientExample.java} contains the code from the previous
    section.

  \item
    \java{ListClientExampleTest.java} contains a JUnit test for
    \java{ListClientExample}.


\end{itemize}

Review \java{ListClientExample} and make sure you understand what it
does. Then compile and run it. If you use Ant, you can navigate to the
{\tt code} directory and run {\tt ant ListClientExample}.

\index{Ant}

You might get a warning like

\begin{verbatim}
List is a raw type. References to generic type List<E> 
should be parameterized.
\end{verbatim}

To keep the example simple, I didn't bother to specify the type of
the elements in the list.  If this warning bothers you, you can
fix it by replacing each \java{List} or \java{LinkedList} with
\java{List<Integer>} or \java{LinkedList<Integer>}.
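For example, the parameterized style looks like this (a sketch; the class name \java{ParameterizedExample} is mine, not from the book's code):

```java
import java.util.LinkedList;
import java.util.List;

public class ParameterizedExample {

    // Hypothetical example: with the type parameter <Integer>,
    // the compiler checks the element type and the raw-type
    // warning goes away.
    public static List<Integer> makeList() {
        List<Integer> list = new LinkedList<Integer>();
        list.add(1);
        list.add(2);
        return list;
    }
}
```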

\index{raw type}

Review \java{ListClientExampleTest}. It runs one test, which creates a
\java{ListClientExample}, invokes \java{getList}, and then checks
whether the result is an \java{ArrayList}. Initially, this test will
fail because the result is a \java{LinkedList}, not an
\java{ArrayList}. Run this test and confirm that it fails.

NOTE: This test makes sense for this exercise, but it is not a good
example of a test. Good tests should check whether the class under
test satisfies the requirements of the \emph{interface}; they should
not depend on the details of the \emph{implementation}.

\index{unit testing}
\index{interface}
\index{implementation}

In the \java{ListClientExample}, replace \java{LinkedList} with
\java{ArrayList}.  You might have to add an \java{import}
statement. Compile and run \java{ListClientExample}. Then run the test
again. With this change, the test should now pass.

To make this test pass, you only had to change \java{LinkedList} in
the constructor; you did not have to change any of the places where
\java{List} appears. What happens if you do?  Go ahead and replace one
or more appearances of \java{List} with \java{ArrayList}. The program
should still work correctly, but now it is ``overspecified''. If you
change your mind in the future and want to swap the implementation
again, you would have to change more code.

In the \java{ListClientExample} constructor, what happens if you
replace \java{ArrayList} with \java{List}? Why can't you instantiate a
\java{List}?

\index{constructor}


\chapter{Analysis of Algorithms}
\label{cs-analysis-of-algorithms-readme}

As we saw in the previous chapter, Java provides two
implementations of the \java{List} interface, \java{ArrayList} and
\java{LinkedList}. For some applications \java{LinkedList} is faster;
for other applications \java{ArrayList} is faster.

\index{analysis of algorithms}
\index{profiling}

To decide which one is better for a particular application, one approach
is to try them both and see how long they take. This approach, which is
called ``profiling'', has a few problems:

\begin{enumerate}

\item Before you can compare the algorithms, you have to implement
  them both.

\item The results might depend on what kind of computer you use. One
  algorithm might be better on one machine; the other might be better on
  a different machine.

\item The results might depend on the size of the problem or the data
  provided as input.

\end{enumerate}

We can address some of these problems using {\bf analysis of
  algorithms}. When it works, algorithm analysis makes it possible to
compare algorithms without having to implement them. But we have to
make some assumptions:

\begin{enumerate}

\item To avoid dealing with the details of computer hardware, we
  usually identify the basic operations that make up an algorithm ---
  like addition, multiplication, and comparison of numbers --- and
  count the number of operations each algorithm requires.

\item To avoid dealing with the details of the input data, the best
  option is to analyze the average performance for the inputs we
  expect. If that's not possible, a common alternative is to analyze
  the worst case scenario.

\item Finally, we have to deal with the possibility that one algorithm
  works best for small problems and another for big ones. In that
  case, we usually focus on the big ones, because for small problems
  the difference probably doesn't matter, but for big problems the
  difference can be huge.

\end{enumerate}

This kind of analysis lends itself to simple classification of
algorithms. For example, if we know that the runtime of Algorithm A
tends to be proportional to the size of the input, $n$, and Algorithm
B tends to be proportional to $n^2$, we expect A to be faster than B,
at least for large values of $n$.

\index{constant time}
\index{linear time}
\index{quadratic time}

Most simple algorithms fall into just a few categories.

\begin{itemize}

\item Constant time: An algorithm is ``constant time'' if the runtime
  does not depend on the size of the input. For example, if you have
  an array of $n$ elements and you use the bracket operator
  (\java{[]}) to access one of the elements, the access requires the
  same number of operations regardless of how big the array is.

\item Linear: An algorithm is ``linear'' if the runtime is
  proportional to the size of the input. For example, if you add up the
  elements of an array, you have to access $n$ elements and
  perform $n-1$ additions. The total number of operations
  (element accesses and additions) is $2n-1$, which is
  proportional to $n$.

\item Quadratic: An algorithm is ``quadratic'' if the runtime is
  proportional to $n^2$.  For example, suppose you want to check whether
  any element in a list appears more than once.  A simple algorithm
  is to compare each element to all of the others.  If there are
  $n$ elements and each is compared to $n-1$ others, the total
  number of comparisons is $n^2 -n$, which is proportional to
  $n^2$ as $n$ grows.

\end{itemize}
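To make these categories concrete, here is one small method from each category (the class \java{GrowthExamples} is a sketch of mine, not part of the book's code):

```java
public class GrowthExamples {

    // Constant time: one array access, regardless of n.
    public static int getFirst(int[] array) {
        return array[0];
    }

    // Linear: n accesses and n-1 additions, proportional to n.
    public static int sum(int[] array) {
        int total = 0;
        for (int x : array) {
            total += x;
        }
        return total;
    }

    // Quadratic: compares each element to all of the others,
    // which is n^2 - n comparisons.
    public static boolean hasDuplicate(int[] array) {
        for (int i = 0; i < array.length; i++) {
            for (int j = 0; j < array.length; j++) {
                if (i != j && array[i] == array[j]) {
                    return true;
                }
            }
        }
        return false;
    }
}
```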


\section{Selection sort}
\label{selection-sort}

\index{selection sort}
\index{sorting}

For example, here's an implementation of a simple algorithm called
{\bf selection sort}
(see \url{http://thinkdast.com/selectsort}):

\begin{verbatim}
public class SelectionSort {

    /**
     * Swaps the elements at indexes i and j.
     */
    public static void swapElements(int[] array, int i, int j) {
        int temp = array[i];
        array[i] = array[j];
        array[j] = temp;
    }

    /**
     * Finds the index of the lowest value
     * starting from the index at start (inclusive)
     * and going to the end of the array.
     */
    public static int indexLowest(int[] array, int start) {
        int lowIndex = start;
        for (int i = start; i < array.length; i++) {
            if (array[i] < array[lowIndex]) {
                lowIndex = i;
            }
        }
        return lowIndex;
    }

    /**
     * Sorts the elements (in place) using selection sort.
     */
    public static void selectionSort(int[] array) {
        for (int i = 0; i < array.length; i++) {
            int j = indexLowest(array, i);
            swapElements(array, i, j);
        }
    }
}
\end{verbatim}

The first method, \java{swapElements}, swaps two elements of the
array. Reading and writing elements are constant time operations,
because if we know the size of the elements and the location of
the first, we can compute the location of any other element
with one multiplication and one addition, and
those are constant time operations. Since everything in
\java{swapElements} is constant time, the whole method is constant
time.

\index{constant time}

The second method, \java{indexLowest}, finds the index of the smallest
element of the array starting at a given index, \java{start}. Each
time through the loop, it accesses two elements of the array and
performs one comparison. Since these are all constant time operations,
it doesn't really matter which ones we count. To keep it simple, let's
count the number of comparisons.

\begin{enumerate}

\item If \java{start} is 0, \java{indexLowest} traverses the entire
  array, and the total number of comparisons is the length of
  the array, which I'll call $n$.

\item If \java{start} is 1, the number of comparisons is $n-1$.

\item In general, the number of comparisons is $n$ - \java{start}, so 
  \java{indexLowest} is linear.

\end{enumerate}

The third method, \java{selectionSort}, sorts the array. It loops from
0 to $n-1$, so the loop executes $n$ times. Each
time, it calls \java{indexLowest} and then performs a constant time
operation, \java{swapElements}.

\index{linear time}

The first time \java{indexLowest} is called, it
performs $n$ comparisons. The second time, it performs
$n-1$ comparisons, and so on. The total number of comparisons is

\[ n + (n-1) + (n-2) + \cdots + 1 + 0 \]

The sum of this series is $n(n+1)/2$, which is
proportional to $n^2$; and that means that \java{selectionSort}
is quadratic.

\index{quadratic time}

To get to the same result a different way, we can think of
\java{indexLowest} as a nested loop. Each time we call
\java{indexLowest}, the number of operations is proportional
to $n$. We call it $n$ times, so the total number of
operations is proportional to $n^2$.
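One way to check this result is to instrument the nested loops to count comparisons (a sketch; the class \java{ComparisonCounter} is mine, not part of the book's code). For an array of length $n$, the count should be $n(n+1)/2$:

```java
public class ComparisonCounter {

    // Hypothetical example: selection sort with the call to
    // indexLowest inlined as a nested loop, counting one
    // comparison per inner iteration.  The total for an array
    // of length n is n + (n-1) + ... + 1 = n(n+1)/2.
    public static int countComparisons(int[] array) {
        int count = 0;
        for (int i = 0; i < array.length; i++) {
            int lowIndex = i;
            for (int j = i; j < array.length; j++) {
                count++;
                if (array[j] < array[lowIndex]) {
                    lowIndex = j;
                }
            }
            int temp = array[i];
            array[i] = array[lowIndex];
            array[lowIndex] = temp;
        }
        return count;
    }
}
```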


\section{Big O notation}
\label{big-o-notation}

\index{big O notation}

All constant time algorithms belong to a set called $O(1)$. So another way
to say that an algorithm is constant time is to say that it is in $O(1)$.
Similarly, all linear algorithms belong to $O(n)$, and all quadratic
algorithms belong to $O(n^2)$. This way of classifying algorithms is called
``big O notation''.

NOTE: I am providing a casual definition of big O notation. For a more
mathematical treatment, see
\url{http://thinkdast.com/bigo}.

This notation provides a convenient way to write general rules about how
algorithms behave when we compose them. For example, if you perform a
linear time algorithm followed by a constant time algorithm, the total run
time is linear. Using $\in$ to mean ``is a member of'':

If $f \in O(n)$ and $g \in O(1)$, $f+g \in O(n)$.

If you perform two linear operations, the total is still linear:

If $f \in O(n)$ and $g \in O(n)$, $f+g \in O(n)$.

In fact, if you perform a linear operation any number of times,
$k$, the total is linear, as long as $k$ is a constant
that does not depend on $n$.

If $f \in O(n)$ and $k$ is a constant, $kf \in O(n)$.

But if you perform a linear operation $n$ times, the result is
quadratic:

If $f \in O(n)$, $nf \in O(n^2)$.

In general, we only care about the largest exponent of $n$. So if
the total number of operations is $2n + 1$, it belongs to
$O(n)$. The leading constant, 2, and the additive term, 1, are
not important for this kind of analysis. Similarly, $n^2 + 100n + 1000$ is
in $O(n^2)$.  Don't be distracted by the big numbers!
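To see how these rules play out in code, here is a small example of my
own (not from the book) that counts operations for two common shapes: a
linear loop followed by a constant time step, and a linear loop nested
inside another.

\begin{verbatim}
// Sketch: operation counts for sequential vs. nested loops.
public class GrowthDemo {
    // f followed by g: O(n) + O(1) is still O(n)
    static long linearThenConstant(int n) {
        long ops = 0;
        for (int i = 0; i < n; i++) {
            ops++;                       // linear part
        }
        ops++;                           // constant part
        return ops;                      // n + 1, which is in O(n)
    }

    // a linear operation performed n times: O(n^2)
    static long nestedLoops(int n) {
        long ops = 0;
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                ops++;
            }
        }
        return ops;                      // n * n, which is in O(n^2)
    }
}
\end{verbatim}

For $n = 1000$, \java{linearThenConstant} performs 1001 operations and
\java{nestedLoops} performs 1,000,000; as $n$ grows, only the leading
term matters.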

``Order of growth'' is another name for the same idea.  An order of
growth is a set of algorithms whose runtimes are in the same big O
category; for example, all linear algorithms belong to the same order
of growth because their runtimes are in $O(n)$.

\index{order of growth}

In this context, an ``order'' is a group, like the \emph{Order of
the Knights of the Round Table}, which is a group of knights, not a way
of lining them up. So you can imagine the \emph{Order of Linear
Algorithms} as a set of brave, chivalrous, and particularly efficient
algorithms.


\section{Exercise 2}
\label{exercise2}

The exercise for this chapter is to implement a \java{List} that
uses a Java array to store the elements. 

In the code repository for this book (see Section~\ref{code}),
you'll find the source files you'll need:

\index{MyArrayList}

\begin{itemize}
\item \java{MyArrayList.java} contains a partial implementation of the
 \java{List} interface.  Four of the methods are incomplete; your job
 is to fill them in.

\item \java{MyArrayListTest.java} contains JUnit tests you can use to
check your work.

\end{itemize}

You'll also find the Ant build file \java{build.xml}.  From the {\tt
  code} directory, you should be able to run \java{ant MyArrayList} to
run \java{MyArrayList.java}, which contains a few simple tests. Or you
can run \java{ant MyArrayListTest} to run the JUnit test.

\index{Ant}
%TODO: either make the build step automatic or add instructions

When you run the tests, several of them should fail. If you examine the
source code, you'll find four \java{TODO} comments indicating the
methods you should fill in.

Before you start filling in the missing methods, let's walk through
some of the code. Here are the class definition, instance variables,
and constructor.

\index{instance variable}
\index{constructor}


\begin{verbatim}
public class MyArrayList<E> implements List<E> {
    int size;                    // keeps track of the number of elements
    private E[] array;           // stores the elements
    
    public MyArrayList() {
        array = (E[]) new Object[10];
        size = 0;
    }
}
\end{verbatim}

As the comments indicate, \java{size} keeps track of how many elements
are in \java{MyArrayList}, and \java{array} is the array that
actually contains the elements.

\index{element}

The constructor creates an array of 10 elements, which are initially
\java{null}, and sets \java{size} to 0. Most of the time, the length
of the array is bigger than \java{size}, so there are unused slots in
the array.

\index{type parameter}

One detail about Java: you can't instantiate an array using a type
parameter; for example, the following will not work:

\begin{verbatim}
        array = new E[10];
\end{verbatim}

To work around this limitation, you have to instantiate an array of
\java{Object} and then typecast it. You can read more about this issue
at \url{http://thinkdast.com/generics}.
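The cast usually generates an ``unchecked'' compiler warning. Here is a
standalone sketch of the workaround; the class name is mine, and the
\java{@SuppressWarnings} annotation is an optional idiom you may add,
acknowledging that the cast is safe as long as the array never leaves
the class.

\begin{verbatim}
// Sketch: why "new E[10]" fails and the Object[] cast works.
public class GenericArrayDemo<E> {
    private E[] array;
    private int size;

    @SuppressWarnings("unchecked")   // safe: the array never
    public GenericArrayDemo() {      // leaves this class
        // array = new E[10];        // compile error
        array = (E[]) new Object[10];
        size = 0;
    }

    public int capacity() {
        return array.length;
    }
}
\end{verbatim}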

Next we'll look at the method that adds elements to the list:

\begin{verbatim}
    public boolean add(E element) {
        if (size >= array.length) {
            // make a bigger array and copy over the elements
            E[] bigger = (E[]) new Object[array.length * 2];
            System.arraycopy(array, 0, bigger, 0, array.length);
            array = bigger;
        } 
        array[size] = element;
        size++;
        return true;
    }
\end{verbatim}

If there are no unused spaces in the array, we have to create a bigger
array and copy over the elements. Then we can store the element in the
array and increment \java{size}.

\index{boolean}

It might not be obvious why this method returns a boolean, since it
seems like it always returns \java{true}. As always, you can find the
answer in the documentation:
\url{http://thinkdast.com/colladd}.
It's also not obvious how to analyze the performance of this
method. In the normal case, it's constant time, but if we have to
resize the array, it's linear. I'll explain how to handle this in
Section~\ref{classifying-add}.

\index{constant time}
\index{linear time}

Finally, let's look at \java{get}; then you can get started on the
exercises.

\begin{verbatim}
    public E get(int index) {
        if (index < 0 || index >= size) {
            throw new IndexOutOfBoundsException();
        }
        return array[index];
    }
\end{verbatim}

Actually, \java{get} is pretty simple:
if the index is out of bounds, it throws an exception; otherwise it
reads and returns an element of the array. Notice that it checks whether
the index is less than \java{size}, not \java{array.length}, so it's
not possible to access the unused elements of the array.

\index{get}

In \java{MyArrayList.java}, you'll find a stub for \java{set} that
looks like this:

\begin{verbatim}
    public E set(int index, E element) {
        // TODO: fill in this method.
        return null;
    }
\end{verbatim}

Read the documentation of \java{set} at
\url{http://thinkdast.com/listset}, then fill in the body of this
method. If you run \java{MyArrayListTest} again, \java{testSet} should
pass.

\index{set}

HINT: Try to avoid repeating the index-checking code.

Your next mission is to fill in \java{indexOf}. As usual, you should
read the documentation at \url{http://thinkdast.com/listindof} so you
know what it's supposed to do. In particular, notice how it is
supposed to handle \java{null}.

\index{indexOf}
\index{helper method}

I've provided a helper method called
\java{equals} that compares an element from the array to a target
value and returns \java{true} if they are equal (and it handles
\java{null} correctly). Notice that this method is private because it
is only used inside this class; it is not part of the \java{List}
interface.

\index{equals}

When you are done, run \java{MyArrayListTest} again;
\java{testIndexOf} should pass now, as well as the other tests that
depend on it.

Only two more methods to go, and you'll be done with this
exercise. The next one is an overloaded version of \java{add} that
takes an index and stores the new value at the given index, shifting
the other elements to make room, if necessary.

\index{add}

Again, read the documentation at \url{http://thinkdast.com/listadd},
write an implementation, and run the tests for confirmation.

HINT: Avoid repeating the code that makes the array bigger.

Last one: fill in the body of \java{remove}.  The documentation is at
\url{http://thinkdast.com/listrem}. When you finish this one, all
tests should pass.

\index{remove}

Once you have your implementation working, compare it to mine, which
you can read at \url{http://thinkdast.com/myarraylist}.


\chapter{ArrayList}
\label{cs-analyzing-our-arraylist-readme}

\index{ArrayList}

This chapter kills two birds with one stone: I present solutions to
the previous exercise and demonstrate a way to classify
algorithms using {\bf amortized analysis}.

\index{amortized analysis}


\section{Classifying MyArrayList methods}
\label{classifying-myarraylist-methods}

For many methods, we can identify the order of growth by examining the
code. For example, here's the implementation of \java{get} from
\java{MyArrayList}:

\begin{verbatim}
    public E get(int index) {
        if (index < 0 || index >= size) {
            throw new IndexOutOfBoundsException();
        }
        return array[index];
    }
\end{verbatim}

Everything in \java{get} is constant time, so \java{get} is constant
time. No problem.

\index{constant time}
\index{get}

Now that we've classified \java{get}, we can classify \java{set},
which uses it. Here is our implementation of \java{set} from the
previous exercise:

\begin{verbatim}
    public E set(int index, E element) {
        E old = get(index);
        array[index] = element;
        return old;
    }
\end{verbatim}

One slightly clever part of this solution is that it does not check the
bounds of the array explicitly; it takes advantage of \java{get},
which raises an exception if the index is invalid.

\index{exception}
\index{set}

Everything in \java{set}, including the invocation of \java{get}, is
constant time, so \java{set} is also constant time.

\index{indexOf}
\index{linear time}

Next we'll look at some linear methods. For example, here's my
implementation of \java{indexOf}:

\begin{verbatim}
    public int indexOf(Object target) {
        for (int i = 0; i<size; i++) {
            if (equals(target, array[i])) {
                return i;
            }
        }
        return -1;
    }
\end{verbatim}

Each time through the loop, \java{indexOf} invokes \java{equals}, so
we have to classify \java{equals} first. Here it is:

\begin{verbatim}
    private boolean equals(Object target, Object element) {
        if (target == null) {
            return element == null;
        } else {
            return target.equals(element);
        }
    }
\end{verbatim}

This method invokes \java{target.equals}; the runtime of this method
might depend on the size of \java{target} or \java{element}, but it
probably doesn't depend on the size of the array, so we consider it
constant time for purposes of analyzing \java{indexOf}.

\index{constant time}
\index{linear time}
\index{equals}

Getting back to \java{indexOf}, everything inside the loop is constant
time, so the next question we have to consider is: how many times does
the loop execute?

If we get lucky, we might find the target object right away and return
after testing only one element. If we are unlucky, we might have to test
all of the elements. On average, we expect to test half of the elements,
so this method is considered linear (except in the unlikely case that
we know the target element is at the beginning of the array).

\index{remove}

The analysis of \java{remove} is similar. Here's my implementation:

\begin{verbatim}
    public E remove(int index) {
        E element = get(index);
        for (int i=index; i<size-1; i++) {
            array[i] = array[i+1];
        }
        size--;
        return element;
    }
\end{verbatim}

It uses \java{get}, which is constant time, and then loops through the
array, starting from \java{index}. If we remove the element at the end
of the list, the loop never runs and this method is constant time. If we
remove the first element, we loop through all of the remaining elements,
which is linear. So, again, this method is considered linear (except in
the special case where we know the element is at the end or a constant
distance from the end).


\section{Classifying \java{add}}
\label{classifying-add}

Here's a version of \java{add} that takes an
index and an element as parameters:

\begin{verbatim}
    public void add(int index, E element) {
        if (index < 0 || index > size) {
            throw new IndexOutOfBoundsException();
        }
        // add the element to get the resizing
        add(element);
        
        // shift the other elements
        for (int i=size-1; i>index; i--) {
            array[i] = array[i-1];
        }
        // put the new one in the right place
        array[index] = element;
    }
\end{verbatim}

This two-parameter version, called \java{add(int, E)}, uses
the one-parameter version, called \java{add(E)}, which puts the
new element at the end. Then it shifts the other elements to the right,
and puts the new element in the correct place.

\index{add}

Before we can classify the two-parameter \java{add(int, E)}, we have
to classify the one-parameter \java{add(E)}:

\begin{verbatim}
    public boolean add(E element) {
        if (size >= array.length) {
            // make a bigger array and copy over the elements
            E[] bigger = (E[]) new Object[array.length * 2];
            System.arraycopy(array, 0, bigger, 0, array.length);
            array = bigger;
        } 
        array[size] = element;
        size++;
        return true;
    }
\end{verbatim}

The one-parameter version turns out to be hard to analyze. If there is
an unused space in the array, it is constant time, but if we have to
resize the array, it's linear because \java{System.arraycopy} takes
time proportional to the size of the array. 

\index{constant time}
\index{linear time}

So is {\tt add} constant time or linear?
We can classify this method by thinking about the average number of
operations per add over a series of $n$ adds. For simplicity,
assume we start with an array that has room for 2 elements.

\begin{itemize}

\item
  The first time we call add, it finds unused space in the array, so it
  stores 1 element.

\item
  The second time, it finds unused space in the array, so it stores 1
  element.

\item
  The third time, we have to resize the array, copy 2 elements, and
  store 1 element. Now the size of the array is 4.

\item
  The fourth time stores 1 element.

\item
  The fifth time resizes the array, copies 4 elements, and stores 1
  element. Now the size of the array is 8.

\item
  The next 3 adds store 3 elements.

\item
  The next add copies 8 and stores 1. Now the size is 16.

\item
  The next 7 adds store 7 elements.

\end{itemize}

And so on. Adding things up:

\begin{itemize}

\item
  After 4 adds, we've stored 4 elements and copied 2.

\item
  After 8 adds, we've stored 8 elements and copied 6.

\item
  After 16 adds, we've stored 16 elements and copied 14.

\end{itemize}

By now you should see the pattern: to do $n$ adds, we have to
store $n$ elements and copy $n-2$. So the total number of
operations is $n + n - 2$, which is $2n-2$.

To get the average number of operations per add, we divide the total by
$n$; the result is $2 - 2/n$. As $n$ gets big, the
second term, $2/n$, gets small. Invoking the principle that we
only care about the largest exponent of $n$, we can think of
\java{add} as constant time.

\index{constant time}
\index{linear time}

It might seem strange that an algorithm that is sometimes linear can be
constant time on average. The key is that we double the length of the
array each time it gets resized. That limits the number of times each
element gets copied. Otherwise --- if we add a fixed amount to the
length of the array, rather than multiplying by a fixed amount --- the
analysis doesn't work.
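You can check this by simulating both strategies and counting copies.
The following sketch is mine, not part of the exercise;
\java{copiesWithDoubling} doubles the capacity on each resize, and
\java{copiesWithIncrement} adds a fixed amount.

\begin{verbatim}
// Sketch: count element copies under two resizing strategies.
public class ResizeDemo {
    static long copiesWithDoubling(int n) {
        long copies = 0;
        int capacity = 2;
        for (int size = 0; size < n; size++) {
            if (size >= capacity) {
                copies += capacity;      // copy every element over
                capacity *= 2;
            }
        }
        return copies;
    }

    static long copiesWithIncrement(int n, int increment) {
        long copies = 0;
        int capacity = increment;
        for (int size = 0; size < n; size++) {
            if (size >= capacity) {
                copies += capacity;
                capacity += increment;
            }
        }
        return copies;
    }
}
\end{verbatim}

For $n = 16$, doubling copies 14 elements, matching the $n-2$ from the
analysis. With a fixed increment, the number of copies grows like
$n^2$, so the average cost per add is no longer constant.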

% NOTE: Patrick notes potential confusion in my use of average, which
% was an average over a hypothetical set of inputs when we looked at
% indexOf, and here is the average over a sequence of operations.

% I am inclined to leave this alone on the grounds that it is more
% confusing for experts (who know the difference between average
% case analysis and amortized analysis) than for beginners (who will
% not, I conjecture, be bothered).

\index{amortized analysis}
\index{average time}

This way of classifying an algorithm, by computing the average time in a
series of invocations, is called {\bf amortized analysis}.  You can
read more about it at 
\url{http://thinkdast.com/amort}. 
The key idea is that the extra cost of copying the array is
spread, or ``amortized'', over a series of invocations.

Now, if \java{add(E)} is constant time, what about
\java{add(int, E)}? After calling \java{add(E)}, it loops through
part of the array and shifts elements. This loop is linear, except in
the special case where we are adding at the end of the list. So
\java{add(int, E)} is linear.

\index{linear time}


\section{Problem Size}
\label{classifying-removeall}

The last example we'll consider is \java{removeAll}; here's the
implementation in \java{MyArrayList}:

\begin{verbatim}
    public boolean removeAll(Collection<?> collection) {
        boolean flag = true;
        for (Object obj: collection) {
            flag &= remove(obj);
        }
        return flag;
    }
\end{verbatim}

Each time through the loop, \java{removeAll} invokes \java{remove},
which is linear.  So it is tempting to think that \java{removeAll} is
quadratic.  But that's not necessarily the case.

\index{quadratic time}

In this method, the loop runs once for each element in
\java{collection}. If \java{collection} contains $m$ elements and the
list we are removing from contains $n$ elements, this method is in
$O(nm)$. If the size of \java{collection} can be considered constant,
\java{removeAll} is linear with respect to $n$. But if the size of the
collection is proportional to $n$, \java{removeAll} is quadratic. For
example, if \java{collection} always contains 100 or fewer elements,
\java{removeAll} is linear. But if \java{collection} generally
contains 1\% of the elements in the list, \java{removeAll} is
quadratic.

\index{constant time}
\index{problem size}
\index{removeAll}

When we talk about {\bf problem size}, we have to be careful
about which size, or sizes, we are talking about. This example
demonstrates a pitfall of algorithm analysis: the tempting shortcut of
counting loops.  If there is one loop, the algorithm is \emph{often}
linear.  If there are two loops (one nested inside the other), the
algorithm is \emph{often} quadratic. But be careful! You have to think
about how many times each loop runs. If the number of iterations is
proportional to $n$ for all loops, you can get away with just counting
the loops. But if, as in this example, the number of iterations is not
always proportional to $n$, you have to give it more thought.
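To make the two sizes explicit, here is a sketch of my own that counts
element visits in a \java{removeAll}-style loop. I use a linear search
like \java{indexOf} in place of \java{remove}, since it has the same
order of growth; the counter is instrumentation for illustration only.

\begin{verbatim}
import java.util.List;

// Sketch: the cost of a removeAll-style loop depends on two sizes.
public class ProblemSizeDemo {
    static long visits = 0;

    // linear search: visits up to n elements of list
    static int find(List<Integer> list, int target) {
        for (int i = 0; i < list.size(); i++) {
            visits++;
            if (list.get(i) == target) {
                return i;
            }
        }
        return -1;
    }

    // one linear search per element of collection: O(n * m)
    static void searchAll(List<Integer> list,
                          List<Integer> collection) {
        for (int target : collection) {
            find(list, target);
        }
    }
}
\end{verbatim}

If \java{list} has $n = 100$ elements and \java{collection} has
$m = 10$, none of which are found, the loop performs $nm = 1000$
visits: linear if $m$ is constant, quadratic if $m$ grows with $n$.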


\section{Linked Data Structures}
\label{linked-data-structures}

For the next exercise I provide a partial implementation of the \java{List}
interface that uses a linked list to store the elements.
If you are not familiar with linked lists, you can read about them at
\url{http://thinkdast.com/linkedlist},
but this section provides a brief introduction.

\index{linked data structures}
\index{node}

A data structure is ``linked'' if it is made up of objects, often called
``nodes'', that contain references to other nodes. In a linked
\emph{list}, each node contains a reference to the next node in the
list. Other linked structures include trees and graphs, in which nodes
can contain references to more than one other node.

Here's a class definition for a simple node:

\begin{verbatim}
public class ListNode {

    public Object data;
    public ListNode next;

    public ListNode() {
        this.data = null;
        this.next = null;
    }

    public ListNode(Object data) {
        this.data = data;
        this.next = null;
    }

    public ListNode(Object data, ListNode next) {
        this.data = data;
        this.next = next;
    }

    public String toString() {
        return "ListNode(" + data.toString() + ")";
    }
}
\end{verbatim}

The \java{ListNode} object has two instance variables: \java{data} is a
reference to some kind of \java{Object}, and \java{next} is a reference to
the next node in the list. In the last node in the list, by convention,
\java{next} is \java{null}.

\index{null}

\java{ListNode} provides several constructors, allowing you to provide values
for \java{data} and \java{next}, or initialize them to the default
value, \java{null}.

\index{ListNode}

You can think of each \java{ListNode} as a list with a single element,
but more generally, a list can contain any number of nodes. There are
several ways to make a new list. A simple option is to create a set of
\java{ListNode} objects, like this:

\begin{verbatim}
        ListNode node1 = new ListNode(1);
        ListNode node2 = new ListNode(2);
        ListNode node3 = new ListNode(3);
\end{verbatim}

And then link them up, like this:

\begin{verbatim}
        node1.next = node2;
        node2.next = node3;
        node3.next = null;
\end{verbatim}

Alternatively, you can create a node and link it at the same time. For
example, if you want to add a new node at the beginning of a list, you
can do it like this:

\begin{verbatim}
        ListNode node0 = new ListNode(0, node1);
\end{verbatim}

After this sequence of instructions, we have four nodes containing the
\java{Integer}s 0, 1, 2, and 3 as data, linked up in increasing order. In the
last node, the \java{next} field is
\java{null}.
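Here is a sketch that traverses such a list by following \java{next}
references until it reaches \java{null}. The nested \java{ListNode} is
a trimmed copy of the class above, included so this example compiles on
its own; the \java{render} method is mine, for illustration.

\begin{verbatim}
// Sketch: traversing a linked list by following next references.
public class TraverseDemo {
    static class ListNode {
        Object data;
        ListNode next;

        ListNode(Object data, ListNode next) {
            this.data = data;
            this.next = next;
        }
    }

    static String render(ListNode head) {
        StringBuilder sb = new StringBuilder();
        // follow next references until we fall off the end
        for (ListNode node = head; node != null; node = node.next) {
            if (sb.length() > 0) {
                sb.append(" -> ");
            }
            sb.append(node.data);
        }
        return sb.toString();
    }
}
\end{verbatim}

Traversing the four nodes built above would produce
\java{"0 -> 1 -> 2 -> 3"}.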

\begin{figure}
\centering
\includegraphics[width=4in]{figs/linked_list1.pdf}
\caption{Object diagram of a linked list.}
\label{linkedlistfig}
\end{figure}

\index{linked list}
\index{object diagram}

Figure~\ref{linkedlistfig} is an object diagram that shows these
variables and the objects they refer to.  In an object diagram,
variables appear as names inside boxes, with arrows that show what
they refer to.  Objects appear as boxes with their type on the outside
(like \java{ListNode} and \java{Integer}) and their instance variables on
the inside.


\section{Exercise 3}
\label{exercise3}

In the repository for this book,
you'll find the source files you need for this exercise:

\index{MyLinkedList}

\begin{itemize}

\item \java{MyLinkedList.java} contains a partial implementation of
  the \java{List} interface using a linked list to store the elements.

\item \java{MyLinkedListTest.java} contains JUnit tests for
  \java{MyLinkedList}.

\end{itemize}

Run \java{ant MyLinkedList} to run \java{MyLinkedList.java}, which
contains a few simple tests. 

Then you can run \java{ant MyLinkedListTest} to run the JUnit tests.
Several of them should fail. If you examine the source code, you'll
find three \java{TODO} comments indicating the methods you should fill
in.

Before you start, let's walk through some
of the code. Here are the instance variables and the constructor for
\java{MyLinkedList}:

\begin{verbatim}
public class MyLinkedList<E> implements List<E> {

    private int size;            // keeps track of the number of elements
    private Node head;           // reference to the first node

    public MyLinkedList() {
        head = null;
        size = 0;
    }
}
\end{verbatim}

As the comments indicate, \java{size} keeps track of how many elements
are in \java{MyLinkedList}; \java{head} is a reference to the first
\java{Node} in the list or \java{null} if the list is empty.

\index{MyLinkedList}

Storing the number of elements is not necessary, and in general it is
risky to keep redundant information, because if it's not updated
correctly, it creates opportunities for error. It also takes a little
bit of extra space.

\index{size}

But if we store \java{size} explicitly, we can implement the
\java{size} method in constant time; otherwise, we would have to
traverse the list and count the elements, which requires linear time.

\index{constant time}
\index{linear time}

Because we store \java{size} explicitly, we have to update it each
time we add or remove an element, so that slows down those methods a
little, but it doesn't change their order of growth, so it's probably
worth it.

The constructor sets \java{head} to \java{null}, which indicates an
empty list, and sets \java{size} to 0.

\index{type parameter}

This class uses the type parameter \java{E} for the type of the
elements. If you are not familiar with type parameters, you might want
to read this tutorial:
\url{http://thinkdast.com/types}.

The type parameter also appears in the definition of \java{Node},
which is nested inside \java{MyLinkedList}:

\begin{verbatim}
    private class Node {
        public E data;
        public Node next;

        public Node(E data, Node next) {
            this.data = data;
            this.next = next;
        }
    }
\end{verbatim}

Other than that, \java{Node} is similar to \java{ListNode} above.

\index{Node}
\index{add}

Finally, here's my implementation of \java{add}:

\begin{verbatim}
    public boolean add(E element) {
        if (head == null) {
            head = new Node(element, null);
        } else {
            Node node = head;
            // loop until the last node
            for ( ; node.next != null; node = node.next) {}
            node.next = new Node(element, null);
        }
        size++;
        return true;
    }
\end{verbatim}

\index{special case}

This example demonstrates two patterns you'll need for your solutions:

\begin{enumerate}

\item
  For many methods, we have to handle the first element of the list as a
  special case. In this example, if we are adding the first element of a
  list, we have to modify \java{head}. Otherwise, we traverse the
  list, find the end, and add the new node.

\item
  This method shows how to use a \java{for} loop to traverse the nodes
  in a list. In your solutions, you will probably write several
  variations on this loop. Notice that we have to declare \java{node}
  before the loop so we can access it after the loop.

\end{enumerate}

Now it's your turn.  Fill in the body of \java{indexOf}.  As usual,
you should read the documentation, at
\url{http://thinkdast.com/listindof},
so you know what it is supposed to do. In particular, notice how it's
supposed to handle \java{null}.

\index{helper method}

As in the previous exercise, I provide a helper method called
\java{equals} that compares an element from the array to a target
value and checks whether they are equal --- and it handles \java{null}
correctly. This method is private because it is used inside this class
but it is not part of the \java{List} interface.

When you are done, run the tests again; \java{testIndexOf}
should pass now, as well as the other tests that depend on it.

\index{add}

Next, you should fill in the two-parameter version of \java{add},
which takes an index and stores the new value at the given index.
Again, read the documentation at \url{http://thinkdast.com/listadd},
write an implementation, and run the tests for confirmation.

\index{remove}

Last one: fill in the body of \java{remove}.  The documentation is
here: \url{http://thinkdast.com/listrem}.  When you finish this one,
all tests should pass.

Once you have your implementation working, compare it to the version
in the \java{solution} directory of the repository.


\section{A note on garbage collection}
\label{a-note-on-garbage-collection}

In \java{MyArrayList} from the previous exercise, the array grows if
necessary, but it never shrinks. The array never gets garbage collected,
and the elements don't get garbage collected until the list itself is
destroyed.

\index{garbage collection}

One advantage of the linked list implementation is that it shrinks when
elements are removed, and the unused nodes can get garbage collected
immediately.

\index{clear}

Here is my implementation of the \java{clear} method:

\begin{verbatim}
    public void clear() {
        head = null;
        size = 0;
    }
\end{verbatim}

When we set \java{head} to \java{null}, we remove a reference to the
first \java{Node}. If there are no other references to that
\java{Node} (and there shouldn't be), it will get garbage collected.
At that point, the reference to the second \java{Node} is removed, so
it gets garbage collected, too. This process continues until all
nodes are collected.

So how should we classify \java{clear}? The method itself contains two
constant time operations, so it sure looks like it's constant time. But
when you invoke it, you make the garbage collector do work that's
proportional to the number of elements. So maybe we
should consider it linear!

\index{constant time}
\index{linear time}
\index{performance bug}

This is an example of what is sometimes called a {\bf performance bug}:
a program that is correct in the sense that it does the right thing,
but it doesn't belong to the order of growth we expected. In languages
like Java that do a lot of work, like garbage collection, behind the
scenes, this kind of bug can be hard to find.


\chapter{LinkedList}

This chapter presents solutions to the previous exercise and continues
the discussion of analysis of algorithms.


\section{Classifying \java{MyLinkedList} methods}
\label{classifying-mylinkedlist-methods}

My implementation of \java{indexOf} is below. Read through it and see
if you can identify its order of growth before you read the explanation.

\begin{verbatim}
    public int indexOf(Object target) {
        Node node = head;
        for (int i=0; i<size; i++) {
            if (equals(target, node.data)) {
                return i;
            }
            node = node.next;
        }
        return -1;
    }
\end{verbatim}

Initially \java{node} gets a copy of \java{head}, so they both refer
to the same \java{Node}. The loop variable, \java{i}, counts from 0 to
\java{size-1}.  Each time through the loop, we use \java{equals} to
see if we've found the target. If so, we return \java{i} immediately.
Otherwise we advance to the next \java{Node} in the list.

Normally we would check to make sure the next \java{Node} is not
\java{null}, but in this case it is safe because the loop ends when we
get to the end of the list (assuming \java{size} is consistent with
the actual number of nodes in the list).

If we get through the loop without finding the target, we return
\java{-1}.

\index{indexOf}
\index{constant time}

So what is the order of growth for this method?

\begin{enumerate}

\item
  Each time through the loop we invoke \java{equals}, which is
  constant time (it might depend on the size of \java{target} or
  \java{data}, but it doesn't depend on the size of the list). The
  other operations in the loop are also constant time.

\item
  The loop might run $n$ times, because in the worst case, we
  might have to traverse the whole list.

\end{enumerate}

So the runtime of this method is proportional to the length of the
list.

\index{add}

Next, here is my implementation of the two-parameter \java{add}
method. Again, you should try to classify it before you read the
explanation.

\begin{verbatim}
    public void add(int index, E element) {
        if (index == 0) {
            head = new Node(element, head);
        } else {
            Node node = getNode(index-1);
            node.next = new Node(element, node.next);
        }
        size++;
    }
\end{verbatim}

If \java{index==0}, we're adding the new \java{Node} at the
beginning, so we handle that as a special case. Otherwise, we have to
traverse the list to find the element at \java{index-1}. We use the
helper method \java{getNode}:

\index{helper method}

\begin{verbatim}
    private Node getNode(int index) {
        if (index < 0 || index >= size) {
            throw new IndexOutOfBoundsException();
        }
        Node node = head;
        for (int i=0; i<index; i++) {
            node = node.next;
        }
        return node;
    }
\end{verbatim}

\java{getNode} checks whether \java{index} is out of bounds; if so,
it throws an exception. Otherwise it traverses the list and returns the
requested \java{Node}.

\index{getNode}

Jumping back to \java{add}, once we find the right \java{Node}, we create the
new \java{Node} and put it between \java{node} and \java{node.next}. You
might find it helpful to draw a diagram of this operation to make sure
you understand it.

So, what's the order of growth for \java{add}?

\begin{enumerate}

\item
  \java{getNode} is similar to
  \java{indexOf}, and it is linear for the same reason.

\item
  In \java{add}, everything before and after \java{getNode} is
  constant time.

\end{enumerate}

So all together, \java{add} is linear.

\index{constant time}
\index{linear time}
\index{remove}

Finally, let's look at \java{remove}:

\begin{verbatim}
    public E remove(int index) {
        E element = get(index);
        if (index == 0) {
            head = head.next;
        } else {
            Node node = getNode(index-1);
            node.next = node.next.next;
        }
        size--;
        return element;
    }
\end{verbatim}

\java{remove} uses \java{get} to find and store the element at
\java{index}. Then it removes the \java{Node} that contained it.

If \java{index==0}, we handle that as a special case again. Otherwise
we find the node at \java{index-1} and modify it to skip over
\java{node.next} and link directly to \java{node.next.next}. This
effectively removes \java{node.next} from the list, and it can be
garbage collected.

Finally, we decrement \java{size} and return the element we retrieved
at the beginning.

So, what's the order of growth for \java{remove}? Everything in
\java{remove} is constant time except \java{get} and
\java{getNode}, which are linear. So \java{remove} is linear.

When people see two linear operations, they sometimes think the result
is quadratic, but that only applies if one operation is nested inside
the other. If you invoke one operation after the other, the runtimes
add. If they are both in $O(n)$, the sum is also in
$O(n)$.
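
To see the difference, here is a sketch (not part of any of the list
classes in this book) that counts the steps each pattern performs:

\begin{verbatim}
    public class SumVsNest {

        // Two linear passes in sequence: 2n steps, still linear.
        public static long sequential(int n) {
            long steps = 0;
            for (int i=0; i<n; i++) steps++;
            for (int i=0; i<n; i++) steps++;
            return steps;
        }

        // A linear pass nested inside another: n*n steps, quadratic.
        public static long nested(int n) {
            long steps = 0;
            for (int i=0; i<n; i++) {
                for (int j=0; j<n; j++) steps++;
            }
            return steps;
        }
    }
\end{verbatim}

Doubling $n$ doubles the step count of \java{sequential}, but
quadruples the step count of \java{nested}.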

\index{linear time}
\index{quadratic time}


\section{Comparing \java{MyArrayList} and \java{MyLinkedList}}
\label{comparing-mylinkedlist-and-myarraylist}

\index{MyArrayList}
\index{MyLinkedList}

The following table summarizes the differences between
\java{MyLinkedList} and \java{MyArrayList}, where \java{1} means
$O(1)$ or constant time and $n$ means $O(n)$ or
linear.

\begin{tabular}[c]{@{}lll@{}}
\hline
& MyArrayList & MyLinkedList \\
\hline
add (at the end) & \textbf{1} & n
\\
add (at the beginning) & n & \textbf{1}
\\
add (in general) & n & n
\\
get / set & \textbf{1} & n
\\
indexOf / lastIndexOf & n & n
\\
isEmpty / size & 1 & 1
\\
remove (from the end) & \textbf{1} & n
\\
remove (from the beginning) & n & \textbf{1}
\\
remove (in general) & n & n
\\
\hline
\end{tabular}

The operations where \java{MyArrayList} has an advantage are adding at
the end, removing from the end, and getting and setting.

The operations where \java{MyLinkedList} has an advantage are adding
at the beginning and removing from the beginning.

For the other operations, the two implementations are in the same order
of growth.

\index{order of growth}

Which implementation is better? It depends on which operations you are
likely to use the most, which is why Java provides more than one
implementation.


\section{Profiling}

For the next exercise I provide a class called \java{Profiler} that
contains code that runs a method with a range of problem sizes,
measures runtimes, and plots the results.

\index{profiling}

You will use \java{Profiler} to classify the performance
of the \java{add} method for the Java implementations of
\java{ArrayList} and \java{LinkedList}.

Here's an example that shows how to use the profiler:

\begin{verbatim}
    public static void profileArrayListAddEnd() {
        Timeable timeable = new Timeable() {
            List<String> list;

            public void setup(int n) {
                list = new ArrayList<String>();
            }

            public void timeMe(int n) {
                for (int i=0; i<n; i++) {
                    list.add("a string");
                }
            }
        };

        String title = "ArrayList add end";
        Profiler profiler = new Profiler(title, timeable);

        int startN = 4000;
        int endMillis = 1000;
        XYSeries series = profiler.timingLoop(startN, endMillis);
        profiler.plotResults(series);
    }
\end{verbatim}

This method measures the time it takes to run \java{add} on an
\java{ArrayList}, which adds the new element at the end. I'll explain
the code and then show the results.

\index{add}

In order to use \java{Profiler}, we need to create a \java{Timeable}
object that provides two methods: \java{setup} and \java{timeMe}.
The \java{setup} method does whatever needs to be done before we start
the clock; in this case it creates an empty list. Then \java{timeMe}
does whatever operation we are trying to measure; in this case it adds
$n$ elements to the list.
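
Based on this description, the \java{Timeable} interface looks
something like the following sketch (the version in
\java{Profiler.java} may differ in detail):

\begin{verbatim}
    public interface Timeable {
        void setup(int n);    // runs before the clock starts
        void timeMe(int n);   // the operation we want to measure
    }
\end{verbatim}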

\index{Profiler}
\index{Timeable}
\index{anonymous class}

The code that creates \java{timeable} is an {\bf anonymous class} that
defines a new implementation of the \java{Timeable} interface and
creates an instance of the new class at the same time. If you are not
familiar with anonymous classes, you can read about them here:
\url{http://thinkdast.com/anonclass}.

But you don't need to know much for the next exercise; even if you
are not comfortable with anonymous classes, you can
copy and modify the example code.

The next step is to create the \java{Profiler} object, passing the
\java{Timeable} object and a title as parameters.

The \java{Profiler} class provides \java{timingLoop}, which uses the
\java{Timeable} object stored as an instance variable. It invokes the
\java{timeMe} method on the \java{Timeable} object several times
with a range of values of $n$. \java{timingLoop} takes two
parameters:

\begin{itemize}

\item
  \java{startN} is the value of $n$ the timing loop should
  start at.

\item
  \java{endMillis} is a threshold in milliseconds. As
  \java{timingLoop} increases the problem size, the runtime increases;
  when the runtime exceeds this threshold, \java{timingLoop} stops.

\end{itemize}

When you run the experiments, you might have to adjust these
parameters. If \java{startN} is too low, the runtime might be too
short to measure accurately. If \java{endMillis} is too low, you might
not get enough data to see a clear relationship between problem size and
runtime.

This code is in \java{ProfileListAdd.java}, which you'll run in the next
exercise. When I ran it, I got this output:

\begin{verbatim}
4000, 3
8000, 0
16000, 1
32000, 2
64000, 3
128000, 6
256000, 18
512000, 30
1024000, 88
2048000, 185
4096000, 242
8192000, 544
16384000, 1325
\end{verbatim}

The first column is problem size, $n$; the second column is
runtime in milliseconds. The first few measurements are pretty noisy; it
might have been better to set \java{startN} around 64000.

\index{XYSeries}

The result from \java{timingLoop} is an \java{XYSeries} that
contains this data. If you pass this series to \java{plotResults}, it
generates a plot like the one in Figure~\ref{fig-profile1}.

\begin{figure}
\centering
\includegraphics[height=2.5in]{figs/profile1.png}
\caption{Profiling results: runtime versus problem size for
adding $n$ elements to the end of an \java{ArrayList}.}
\label{fig-profile1}
\end{figure}

The next section explains how to interpret it.


\section{Interpreting results}\label{interpreting-results}

Based on our understanding of how \java{ArrayList} works, we expect
the \java{add} method to take constant time when we add elements to
the end. So the total time to add $n$ elements should be linear.

\index{constant time}
\index{linear time}
\index{ArrayList}

To test that theory, we could plot total runtime versus problem size,
and we should see a straight line, at least for problem sizes that are
big enough to measure accurately. Mathematically, we can write the
function for that line:

\newcommand{\runtime}{\mbox{runtime}}

\[ \runtime = a + b n \]

where $a$ is the intercept of the line and $b$ is the
slope.

\index{quadratic time}

On the other hand, if \java{add} is linear, the total time for
$n$ adds would be quadratic. If we plot runtime versus problem
size, we expect to see a parabola. Or mathematically, something like:

\[ \runtime = a + b n + c n^2 \]

With perfect data, we might be able to tell the difference between a
straight line and a parabola, but if the measurements are noisy, it can
be hard to tell. A better way to interpret noisy measurements is to plot
runtime and problem size on a \textbf{log-log} scale.

\index{logarithm}
\index{log-log scale}

Why? Let's suppose that runtime is proportional to $n^k$, but we
don't know what the exponent $k$ is. We can write the
relationship like this:

\[ \runtime = a + b n + \ldots + c n^k \]

For large values of $n$, the term with the largest exponent is
the most important, so:

\[ \runtime \approx c n^k \]

where $\approx$ means ``approximately equal''. Now, if we
take the logarithm of both sides of this equation:

\[ \log(\runtime) \approx \log(c) + k \log(n) \]

This equation implies that if we plot $\runtime$ versus $n$ on a
log-log scale, we expect to see a straight line with intercept
$\log(c)$ and slope $k$. We don't care much about the
intercept, but the slope indicates the order of growth: if
$k=1$, the algorithm is linear; if $k=2$, it's
quadratic.
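
To make this concrete, here is a sketch of a least squares slope
computation on a log-log scale (\java{LogLogSlope} is a hypothetical
class of mine; \java{plotResults} computes something similar):

\begin{verbatim}
    public class LogLogSlope {

        // Fits log(runtime) = a + k log(n) by least squares and
        // returns the estimated slope, k.
        public static double estimateSlope(double[] ns, double[] times) {
            int m = ns.length;
            double sx = 0, sy = 0, sxx = 0, sxy = 0;
            for (int i=0; i<m; i++) {
                double x = Math.log(ns[i]);
                double y = Math.log(times[i]);
                sx += x;
                sy += y;
                sxx += x * x;
                sxy += x * y;
            }
            return (m * sxy - sx * sy) / (m * sxx - sx * sx);
        }
    }
\end{verbatim}

With noiseless quadratic data, where runtime is exactly $n^2$, the
estimated slope is 2; with real measurements, noise moves the estimate
around.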

\index{slope}

Looking at the figure in the previous section, you can estimate the
slope by eye. But when you call \java{plotResults} it computes a least
squares fit to the data and prints the estimated slope. In this
example:

\begin{verbatim}
Estimated slope = 1.06194352346708
\end{verbatim}

which is close to 1, suggesting that the total time for
$n$ adds is linear, so each add is constant time, as expected.

\index{constant time}

One important point: if you see a straight line on a graph like this,
that does \textbf{not} mean that the algorithm is linear. If the run
time is proportional to $n^k$ for any exponent $k$, we
expect to see a straight line with slope $k$. If the slope is
close to 1, that suggests the algorithm is linear. If it is close to 2,
it's probably quadratic.

\index{linear time}
\index{quadratic time}


\section{Exercise 4}
\label{instructions-4}

In the repository for this book you'll find the source files you need
for this exercise:

\begin{enumerate}

\item
  \java{Profiler.java} contains the implementation of the
  \java{Profiler} class described above. You will use this class, but
  you don't have to know how it works (though feel free to read the
  source).

\item
  \java{ProfileListAdd.java} contains starter code for this exercise,
  including the example, above, which profiles
  \java{ArrayList.add}. You will modify this file to profile a few
  other methods.

\end{enumerate}

\index{ProfileListAdd}

Also, in the \java{code} directory, you'll find the Ant build file
\java{build.xml}.

Run \java{ant ProfileListAdd} to run \java{ProfileListAdd.java}. You should
get results similar to Figure~\ref{fig-profile1}, but you might have to adjust
\java{startN} or \java{endMillis}. The estimated slope should be close
to 1, indicating that performing $n$ add operations takes time
proportional to $n$ raised to the exponent 1; that is, it is
in $O(n)$.

In \java{ProfileListAdd.java}, you'll find an empty method named
\java{profileArrayListAddBeginning}. Fill in the body of this method
with code that tests \java{ArrayList.add}, always putting the new
element at the beginning. If you start with a copy of
\java{profileArrayListAddEnd}, you should only have to make a few
changes. Add a line in \java{main} to invoke this method.

Run \java{ant ProfileListAdd} again and interpret the results. Based on
our understanding of how \java{ArrayList} works, we expect each add
operation to be linear, so the total time for $n$ adds should be
quadratic. If so, the estimated slope of the line, on a log-log scale,
should be near 2. Is it?

\index{linear time}
\index{quadratic time}

Now let's compare that to the performance of \java{LinkedList}. Fill
in the body of \java{profileLinkedListAddBeginning} and use it to
classify \java{LinkedList.add} when we put the new element at the
beginning. What performance do you expect? Are the results consistent
with your expectations?

\index{LinkedList}

Finally, fill in the body of \java{profileLinkedListAddEnd} and use it
to classify \java{LinkedList.add} when we put the new element at the
end. What performance do you expect? Are the results consistent with
your expectations?

I'll present results and answer these questions in the next chapter.


\chapter{Doubly-linked list}

This chapter reviews results from the previous exercise and introduces yet
another implementation of the \java{List} interface, the doubly-linked
list.

\section{Performance profiling results}
\label{performance-profiling-results}

\index{profiling}

In the previous exercise, we used \java{Profiler.java} to run various
\java{ArrayList} and \java{LinkedList} operations with a range of
problem sizes. We plotted runtime versus problem size on a log-log
scale and estimated the slope of the resulting curve, which indicates
the leading exponent of the relationship between runtime and problem
size.

\index{Profiler}
\index{add}
\index{average time}

For example, when we used the \java{add} method to add elements
to the end of an \java{ArrayList}, we found that the total time to
perform $n$ adds was proportional to $n$; that is, the
estimated slope was close to 1. We concluded that performing $n$
adds is in $O(n)$, so on average the time for a single add is
constant time, or $O(1)$, which is what we expect based on algorithm
analysis.

\index{constant time}

% NOTE: Again, Patrick is concerned that my use of ``average'' might
% be confusing, but I think it's reasonable to describe amortized
% analysis as an average over a series of operations.

The exercise asks you to fill in the body of
\java{profileArrayListAddBeginning}, which tests the performance of
adding new elements at the beginning of an \java{ArrayList}. Based on our
analysis, we expect each add to be linear, because it has to shift the
other elements to the right; so we expect $n$ adds to be
quadratic.

\index{quadratic time}
\index{linear time}

Here's a solution, which you can find in
the {\tt solution} directory of the repository.

\begin{verbatim}
    public static void profileArrayListAddBeginning() {
        Timeable timeable = new Timeable() {
            List<String> list;

            public void setup(int n) {
                list = new ArrayList<String>();
            }

            public void timeMe(int n) {
                for (int i=0; i<n; i++) {
                    list.add(0, "a string");
                }
            }
        };
        int startN = 4000;
        int endMillis = 10000;
        runProfiler("ArrayList add beginning", timeable, startN, endMillis);
    }
\end{verbatim}

This method is almost identical to \java{profileArrayListAddEnd}. The
only difference is in \java{timeMe}, which uses the two-parameter
version of \java{add} to put the new element at index 0. Also, we
increased \java{endMillis} to get one additional data point.

Here are the timing results (problem size on the left, runtime in
milliseconds on the right):

\begin{verbatim}
4000, 14
8000, 35
16000, 150
32000, 604
64000, 2518
128000, 11555
\end{verbatim}

Figure~\ref{fig-profile2}
shows the graph of runtime versus problem size.
\index{problem size}

\begin{figure}
\centering
\includegraphics[height=2.5in]{figs/profile2.png}
\caption{Profiling results: runtime versus problem size for adding
$n$ elements at the beginning of an \java{ArrayList}.}
\label{fig-profile2}
\end{figure}

Remember that a straight line on this graph does \textbf{not} mean that
the algorithm is linear. Rather, if the runtime is proportional to
$n^k$ for any exponent, $k$, we expect to see a straight
line with slope $k$. In this case, we expect the total time for
$n$ adds to be proportional to $n^2$, so we expect a
straight line with slope 2. In fact, the estimated slope is 1.992, which
is so close I would be afraid to fake data this good.

\index{profiling}


\section{Profiling \java{LinkedList} methods}
\label{profiling-linkedlist-methods}

In the previous exercise you also profiled the performance of adding new
elements at the beginning of a \java{LinkedList}. Based on our
analysis, we expect each \java{add} to take constant time, because in
a linked list, we don't have to shift the existing elements; we can just
add a new node at the beginning. So we expect the total time for
$n$ adds to be linear.

\index{constant time}
\index{linear time}
\index{LinkedList}

Here's a solution:

\begin{verbatim}
    public static void profileLinkedListAddBeginning() {
        Timeable timeable = new Timeable() {
            List<String> list;

            public void setup(int n) {
                list = new LinkedList<String>();
            }

            public void timeMe(int n) {
                for (int i=0; i<n; i++) {
                    list.add(0, "a string");
                }
            }
        };
        int startN = 128000;
        int endMillis = 2000;
        runProfiler("LinkedList add beginning", timeable, startN, endMillis);
    }
\end{verbatim}

We only had to make a few changes, replacing
\java{ArrayList} with \java{LinkedList} and adjusting
\java{startN} and \java{endMillis} to get a good range of data.
The measurements were noisier than the previous batch; here
are the results:

\begin{verbatim}
128000, 16
256000, 19
512000, 28
1024000, 77
2048000, 330
4096000, 892
8192000, 1047
16384000, 4755
\end{verbatim}

Figure~\ref{fig-profile3}
shows the graph of these results.

\begin{figure}
\centering
\includegraphics[height=2.5in]{figs/profile3.png}
\caption{Profiling results: runtime versus problem size for adding
$n$ elements at the beginning of a \java{LinkedList}.}
\label{fig-profile3}
\end{figure}

It's not a very straight line, and the slope is not exactly 1; the slope
of the least squares fit is 1.23. But these results indicate that the
total time for $n$ adds is at least approximately $O(n)$,
so each add is constant time.

\index{constant time}

\section{Adding to the end of a \java{LinkedList}}
\label{adding-to-the-end-of-a-linkedlist}

Adding elements at the beginning is one of the operations where we
expect \java{LinkedList} to be faster than \java{ArrayList}. But for
adding elements at the end, we expect \java{LinkedList} to be slower.
In my implementation, we have to traverse the entire list to add an
element to the end, which is linear. So we expect the total time for
$n$ adds to be quadratic.

\index{quadratic time}
\index{linear time}
\index{LinkedList}
\index{add}

Well, it's not.  Here's the code:

\begin{verbatim}
    public static void profileLinkedListAddEnd() {
        Timeable timeable = new Timeable() {
            List<String> list;

            public void setup(int n) {
                list = new LinkedList<String>();
            }

            public void timeMe(int n) {
                for (int i=0; i<n; i++) {
                    list.add("a string");
                }
            }
        };
        int startN = 64000;
        int endMillis = 1000;
        runProfiler("LinkedList add end", timeable, startN, endMillis);
    }
\end{verbatim}

Here are the results:

\begin{verbatim}
64000, 9
128000, 9
256000, 21
512000, 24
1024000, 78
2048000, 235
4096000, 851
8192000, 950
16384000, 6160
\end{verbatim}

Figure~\ref{fig-profile4}
shows the graph of these results.

\begin{figure}
\centering
\includegraphics[height=2.5in]{figs/profile4.png}
\caption{Profiling results: runtime versus problem size for adding
$n$ elements at the end of a \java{LinkedList}.}
\label{fig-profile4}
\end{figure}

\index{profiling}

Again, the measurements are noisy and the line is not perfectly
straight, but the estimated slope is 1.19, which is close to what
we got adding elements at the beginning, and not very close to 2, which
is what we expected based on our analysis. In fact, it is closer to 1,
which suggests that adding elements at the end is constant time.
What's going on?

\index{constant time}

\section{Doubly-linked list}
\label{doubly-linked-list}

My implementation of a linked list, \java{MyLinkedList}, uses a
singly-linked list; that is, each element contains a link to the next,
and the \java{MyLinkedList} object itself has a link to the first node.

\index{doubly-linked list}
\index{LinkedList}

But if you read the documentation of \java{LinkedList} at
\url{http://thinkdast.com/linked},
it says:

\begin{quote}
Doubly-linked list implementation of the List and Deque
interfaces. [\ldots] All of the operations perform as
could be expected for a doubly-linked list. Operations that
index into the list will traverse the list from the beginning or the
end, whichever is closer to the specified index.
\end{quote}

If you are not familiar with doubly-linked lists, you can read more
about them at \url{http://thinkdast.com/doublelist},
but the short version is:

\begin{itemize}
\item
  Each node contains a link to the next node and a link to the previous
  node.

\item
  The \java{LinkedList} object contains links to the first and last
  elements of the list.

\end{itemize}

So we can start at either end of the list and traverse it in either
direction. As a result, we can add and remove elements from the
beginning and the end of the list in constant time!
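
Here is a sketch of how a doubly-linked implementation achieves that
(\java{TinyDeque} is a hypothetical class, not Java's
\java{LinkedList}):

\begin{verbatim}
    public class TinyDeque<E> {

        private class Node {
            E data;
            Node prev, next;
            Node(E data) { this.data = data; }
        }

        private Node first;
        private Node last;
        private int size;

        // Constant time: no traversal, just a few link updates.
        public void addFirst(E element) {
            Node node = new Node(element);
            node.next = first;
            if (first == null) {
                last = node;
            } else {
                first.prev = node;
            }
            first = node;
            size++;
        }

        // Also constant time, because we keep a link to the last node.
        public void addLast(E element) {
            Node node = new Node(element);
            node.prev = last;
            if (last == null) {
                first = node;
            } else {
                last.next = node;
            }
            last = node;
            size++;
        }

        public E getFirst() { return first.data; }
        public E getLast() { return last.data; }
        public int size() { return size; }
    }
\end{verbatim}

Neither \java{addFirst} nor \java{addLast} contains a loop, so both
are constant time.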

\index{constant time}

The following table summarizes the performance we expect from
\java{ArrayList}, \java{MyLinkedList} (singly-linked), and
\java{LinkedList} (doubly-linked):

\begin{tabular}[c]{@{}llll@{}}
\hline
& MyArrayList & MyLinkedList & LinkedList
\\
\hline
add (at the end) & \textbf{1} & n & \textbf{1}
\\
add (at the beginning) & n & \textbf{1} & \textbf{1}
\\
add (in general) & n & n & n
\\
get / set & \textbf{1} & n & n
\\
indexOf / lastIndexOf & n & n & n
\\
isEmpty / size & 1 & 1 & 1
\\
remove (from the end) & \textbf{1} & n & \textbf{1}
\\
remove (from the beginning) & n & \textbf{1} & \textbf{1}
\\
remove (in general) & n & n & n
\\
\hline
\end{tabular}


\section{Choosing a Structure}

The doubly-linked implementation is better than \java{ArrayList} for
adding and removing at the beginning, and just as good as
\java{ArrayList} for adding and removing at the end. So the only
advantage of \java{ArrayList} is for \java{get} and \java{set},
which require linear time in a linked list, even if it is doubly-linked.

\index{linear time}
\index{data structure selection}
\index{choosing a data structure}

If you know that the runtime of your application depends on the time it
takes to \java{get} and \java{set} elements, an \java{ArrayList}
might be the better choice. If the runtime depends on adding and
removing elements near the beginning or the end, \java{LinkedList}
might be better.

\index{order of growth}
\index{constant time}

But remember that these recommendations are based on the order of growth
for large problems. There are other factors to consider:

\begin{itemize}

\item
  If these operations don't take up a substantial fraction of the
  runtime for your application --- that is, if your application spends
  most of its time doing other things --- then your choice of a
  \java{List} implementation won't matter very much.

\item
  If the lists you are working with are not very big, you might not get
  the performance you expect. For small problems, a quadratic algorithm
  might be faster than a linear algorithm, or linear might be faster
  than constant time. And for small problems, the difference probably
  doesn't matter.

\item
  Also, don't forget about space. So far we have focused on runtime, but
  different implementations require different amounts of space. In an
  \java{ArrayList}, the elements are stored side-by-side in a single
  chunk of memory, so there is little wasted space, and computer
  hardware is often faster with contiguous chunks. In a linked list,
  each element requires a node with one or two links. The links take up
  space (sometimes more than the data!), and with nodes scattered
  around in memory, the hardware might be less efficient.

\end{itemize}

In summary, analysis of algorithms provides some guidance for choosing
data structures, but only if

\begin{enumerate}

\item
  The runtime of your application is important,

\item
  The runtime of your application depends on your choice of data
  structure, and

\item
  The problem size is large enough that the order of growth actually
  predicts which data structure is better.

\end{enumerate}

You could have a long career as a software engineer without ever finding
yourself in this situation.


\chapter{Tree traversal}
\label{cs-traversing-trees}

This chapter introduces the application we will develop during the
rest of the book, a web search engine.
I describe the elements of a search engine and
introduce the first application, a Web crawler that downloads and parses
pages from Wikipedia.  This chapter also presents a recursive implementation of
depth-first search and an iterative implementation that uses a Java
\java{Deque} to implement a ``last in, first out'' stack.

\index{Deque}

\section{Search engines}
\label{the-road-ahead}

A \textbf{web search engine}, like Google Search or Bing, takes a set
of ``search terms'' and returns a list of web pages that are relevant
to those terms (I'll discuss what ``relevant'' means later).  You can
read more at \url{http://thinkdast.com/searcheng}, but I'll explain
what you need as we go along.

\index{search engine}
\index{search term}
\index{crawler}
\index{indexer}
\index{retriever}

The essential components of a search engine are:

\begin{itemize}

\item
  Crawling: We'll need a program that can download a web page, parse it,
  and extract the text and any links to other pages.

\item
  Indexing: We'll need a data structure that makes it possible to look up a
  search term and find the pages that contain it.

\item
  Retrieval: And we'll need a way to collect results from the Index and
  identify pages that are most relevant to the search terms.

\end{itemize}

We'll start with the crawler.  The goal of a crawler is to discover and
download a set of web pages. For search engines like Google and Bing,
the goal is to find \emph{all} web pages, but often crawlers are limited
to a smaller domain. In our case, we will only read pages from
Wikipedia.

\index{Wikipedia}
\index{Getting to Philosophy}

As a first step, we'll build a crawler that reads a Wikipedia page,
finds the first link, follows the link to another page, and repeats. We
will use this crawler to test the ``Getting to Philosophy'' conjecture,
which states:

\begin{quote}
Clicking on the first lowercase link in the main text of a
Wikipedia article, and then repeating the process for subsequent
articles, usually eventually gets one to the Philosophy article.
\end{quote}

This conjecture is stated at
\url{http://thinkdast.com/getphil},
and you can read its history there.

Testing the conjecture will allow us to build the basic pieces of a
crawler without having to crawl the entire web, or even all of
Wikipedia. And I think the exercise is kind of fun!

In a few chapters, we'll work on the indexer, and then we'll get to the
retriever.

\section{Parsing HTML}
\label{parsing-html}

When you download a web page, the contents are written in
HyperText Markup Language, aka HTML.
For example, here is a minimal HTML document:

\begin{verbatim}
<!DOCTYPE html>
<html>
  <head>
    <title>This is a title</title>
  </head>
  <body>
    <p>Hello world!</p>
  </body>
</html>
\end{verbatim}

The phrases ``This is a title'' and ``Hello world!'' are the text that
actually appears on the page; the other elements are \textbf{tags} that
indicate how the text should be displayed.

\index{HTML}
\index{tag}
\index{jsoup}
\index{parsing}

When our crawler downloads a page, it will need to parse the HTML in
order to extract the text and find the links. To do that, we'll use
\textbf{jsoup}, which is an open-source Java library that downloads and
parses HTML.

\index{DOM tree}

The result of parsing HTML is a Document Object Model tree, or
\textbf{DOM tree}, that contains the elements of the document, including
text and tags. The tree is a linked data structure made up of nodes; the
nodes represent text, tags, and other document elements.

\index{root}
\index{child node}

The relationships between the nodes are determined by the structure of
the document. In the example above, the first node, called the
\textbf{root}, is the \java{<html>} tag, which
contains links to the two nodes it contains,
\java{<head>} and
\java{<body>}; these nodes are the
\textbf{children} of the root node.

The \java{<head>} node has one child,
\java{<title>}, and the
\java{<body>} node has one child,
\java{<p>} (which stands for ``paragraph''). 
Figure~\ref{fig-dom1}
 represents this tree graphically.

\begin{figure}
\centering
\includegraphics[height=2.5in]{figs/dom_tree1.pdf}
\caption{DOM tree for a simple HTML page.}
\label{fig-dom1}
\end{figure}


Each node contains links to its children; in addition, each node
contains a link to its \textbf{parent}, so from any node it is possible
to navigate up and down the tree. The DOM tree for real pages is usually
more complicated than this example.

\index{parent node}
\index{inspecting the DOM}

Most web browsers provide tools for inspecting the DOM of the page you
are viewing. In Chrome, you can right-click on any part of a web page
and select ``Inspect'' from the menu that pops up. In Firefox, you can
right-click and select ``Inspect Element'' from the menu. Safari
provides a tool called Web Inspector, which you can read about at
\url{http://thinkdast.com/safari}.
For Internet Explorer, you can read the instructions at
\url{http://thinkdast.com/explorer}.

\begin{figure}
\centering
\includegraphics[height=2.5in]{figs/DOMinspector.png}
\caption{Screenshot of the Chrome DOM Inspector.}
\label{fig-dom2}
\end{figure}

Figure~\ref{fig-dom2}
shows a screenshot of the DOM for the Wikipedia page on Java,
\url{http://thinkdast.com/java}.
The element that's highlighted is the first paragraph of the main text
of the article, which is contained in a
\java{<div>} element with
\java{id="mw-content-text"}. We'll use this element id to identify the
main text of each article we download.

\index{element}



\section{Using jsoup}
\label{using-jsoup}

jsoup makes it easy to download and parse web pages, and to navigate the
DOM tree. Here's an example:

\begin{verbatim}
    String url = "http://en.wikipedia.org/wiki/Java_(programming_language)";

    // download and parse the document
    Connection conn = Jsoup.connect(url);
    Document doc = conn.get();

    // select the content text and pull out the paragraphs.
    Element content = doc.getElementById("mw-content-text");
    Elements paragraphs = content.select("p");
\end{verbatim}

\java{Jsoup.connect} takes a URL as a \java{String} and makes a connection to
the web server; the \java{get} method downloads the HTML, parses it,
and returns a \java{Document} object, which represents the DOM.

\index{jsoup}
\index{Document}

\java{Document} provides methods for navigating the
tree and selecting nodes. In fact, it provides so many methods, it can
be confusing. This example demonstrates two ways to select nodes:

\begin{itemize}

\item
  \java{getElementById} takes a \java{String} and searches the tree for an
  element that has a matching ``id'' field. Here it selects the node
  \java{<div id="mw-content-text" lang="en" dir="ltr" class="mw-content-ltr">},
  which appears on every Wikipedia page to identify the
  \java{<div>} element that contains the main
  text of the page, as opposed to the navigation sidebar and other
  elements.

  The return value from \java{getElementById} is an \java{Element}
  object that represents this \java{<div>} and
  contains the elements in the \java{<div>} as
  children, grandchildren, etc.

\item
  \java{select} takes a \java{String}, traverses the tree, and returns all
  the elements with tags that match the \java{String}. In this example, it
  returns all paragraph tags that appear in \java{content}. The return
  value is an \java{Elements} object.

\end{itemize}

\index{select}
\index{Node}
\index{Element}

Before you go on, you should skim the documentation of these classes so
you know what they can do. The most important classes are
\java{Element}, \java{Elements}, and \java{Node}, which you can read 
about at 
\url{http://thinkdast.com/jsoupelt},
\url{http://thinkdast.com/jsoupelts}, and
\url{http://thinkdast.com/jsoupnode}.

\java{Node} represents a node in the DOM tree;  there are several
subclasses that extend \java{Node}, including 
\java{Element}, \java{TextNode}, \java{DataNode}, and \java{Comment}.
\java{Elements} is a \java{Collection} of \java{Element} objects.

\index{subclass}

\begin{figure}
\centering
\includegraphics[width=5in]{figs/yuml2.pdf}
\caption{UML diagram for selected classes provided by jsoup.}
\label{fig-uml2}
% Edit: http://yuml.me/edit/4bc1c919
\end{figure}

Figure~\ref{fig-uml2} is a UML diagram showing the relationships among
these classes.  In a UML class diagram, a line with a hollow arrow
head indicates that one class extends another.  For example, this
diagram indicates that \java{Elements} extends \java{ArrayList}.
We'll get back to UML diagrams in Section~\ref{uml-class-diagrams}.

\index{UML diagram}


\section{Iterating through the DOM}
\label{iterating-through-the-dom}

To make your life easier, I provide a class called
\java{WikiNodeIterable} that lets you iterate through the nodes in a
DOM tree. Here's an example that shows how to use it:

\index{WikiNodeIterable}

\begin{verbatim}
    Elements paragraphs = content.select("p");
    Element firstPara = paragraphs.get(0);

    Iterable<Node> iter = new WikiNodeIterable(firstPara);
    for (Node node: iter) {
        if (node instanceof TextNode) {
            System.out.print(node);
        }
    }
\end{verbatim}

This example picks up where the previous one leaves off. It selects the
first paragraph in \java{paragraphs} and then creates a
\java{WikiNodeIterable}, which implements
\java{Iterable<Node>}. 
\java{WikiNodeIterable} performs a ``depth-first search'', which
produces the nodes in the order they would appear on the page.

\index{depth-first search}
\index{TextNode}

In this example, we print a \java{Node} only if it is a
\java{TextNode} and ignore other types of \java{Node}, specifically
the \java{Element} objects that represent tags. The result is the
plain text of the HTML paragraph without any markup. The output is:

\begin{quote}
Java is a general-purpose computer programming language that is
concurrent, class-based, object-oriented,{[}13{]} and specifically
designed \ldots
\end{quote}


\section{Depth-first search}
\label{depth-first-search}

There are several ways you might reasonably traverse a tree, each with
different applications. We'll start with ``depth-first search'', or DFS.
DFS starts at the root of the tree and selects the first child. If the
child has children, it selects the first child again. When it gets to a
node with no children, it backtracks, moving up the tree to the parent
node, where it selects the next child if there is one; otherwise it
backtracks again. When it has explored the last child of the root, it's
done.

\index{DFS}
\index{recursion}

There are two common ways to implement DFS, recursively and iteratively.
The recursive implementation is simple and elegant:

\begin{verbatim}
private static void recursiveDFS(Node node) {
    if (node instanceof TextNode) {
        System.out.print(node);
    }
    for (Node child: node.childNodes()) {
        recursiveDFS(child);
    }
}
\end{verbatim}

This method gets invoked on every \java{Node} in the tree, starting
with the root. If the \java{Node} it gets is a \java{TextNode}, it
prints the contents. If the \java{Node} has any children, it invokes
\java{recursiveDFS} on each one of them in order.

\index{tree traversal}
\index{pre-order}
\index{post-order}
\index{in-order}

In this example, we print the contents of each \java{TextNode} before
traversing the children, so this is an example of a ``pre-order''
traversal. You can read about ``pre-order'', ``post-order'', and
``in-order'' traversals at \url{http://thinkdast.com/treetrav}.  For
this application, the traversal order doesn't matter.
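To show the traversal pattern without jsoup, here's a sketch that runs the same pre-order DFS on a minimal stand-in tree; the \java{TreeNode} class and the labels are made up for this example, not part of jsoup.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class RecursiveDFSDemo {
    // Minimal stand-in for a DOM node: a label and a list of children.
    static class TreeNode {
        String label;
        List<TreeNode> children = new ArrayList<>();

        TreeNode(String label, TreeNode... kids) {
            this.label = label;
            children.addAll(Arrays.asList(kids));
        }
    }

    // Pre-order DFS: visit the node first, then recurse on each child.
    static void dfs(TreeNode node, List<String> visited) {
        visited.add(node.label);
        for (TreeNode child : node.children) {
            dfs(child, visited);
        }
    }

    public static List<String> demo() {
        //        a
        //       / \
        //      b   e
        //     / \
        //    c   d
        TreeNode root = new TreeNode("a",
            new TreeNode("b", new TreeNode("c"), new TreeNode("d")),
            new TreeNode("e"));
        List<String> visited = new ArrayList<>();
        dfs(root, visited);
        return visited;  // [a, b, c, d, e]
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```

The nodes come out in the order they appear when the tree is drawn, which is the order the corresponding text would appear on the page.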

\index{call stack}

By making recursive calls, \java{recursiveDFS} uses the call stack
(\url{http://thinkdast.com/callstack}) to keep
track of the child nodes and process them in the right order. As an
alternative, we can use a stack data structure to keep track of the
nodes ourselves; if we do that, we can avoid the recursion and traverse
the tree iteratively.


\section{Stacks in Java}
\label{stacks-in-java}

Before I explain the iterative version of DFS, I'll explain
the stack data structure.  We'll start with the general concept of a
stack, which I'll call a ``stack'' with a lowercase ``s''. Then we'll
talk about two ways Java provides stack methods: the \java{Stack}
class and the \java{Deque} interface.

\index{Stack}
\index{Deque}

% TODO: introduce the term ``Abstract Data Type''?

A stack is a data structure that is similar to a list: it is a
collection that maintains the order of the elements. The primary
difference between a stack and a list is that the stack provides fewer
methods. In the usual convention, it provides:

\begin{itemize}

\item
  \java{push}: which adds an element to the top of the stack.

\item
  \java{pop}: which removes and returns the top-most element from the stack.

\item
  \java{peek}: which returns the top-most element without modifying
  the stack.

\item
  \java{isEmpty}: which indicates whether the stack is empty.

\end{itemize}

Because \java{pop} always returns the top-most element, a stack is
also called a ``LIFO'', which stands for ``last in, first out''. An
alternative to a stack is a ``queue'', which returns elements in the
same order they are added; that is, ``first in, first out'', or FIFO.

\index{push}
\index{pop}
\index{peek}
\index{isEmpty}
\index{FIFO}
\index{LIFO}
\index{stack}
\index{queue}

It might not be obvious why stacks and queues are useful: they don't
provide any capabilities that aren't provided by lists; in fact, they
provide fewer capabilities. So why not use lists for everything? There
are two reasons:

\begin{enumerate}

\item
  If you limit yourself to a small set of methods --- that is, a small
  API --- your code will be more readable and less error-prone. For
  example, if you use a list to represent a stack, you might
  accidentally remove an element in the wrong order. With the stack API,
  this kind of mistake is literally impossible. And the best way to
  avoid errors is to make them impossible.

\item
  If a data structure provides a small API, it is easier to implement
  efficiently. For example, a simple way to implement a stack is a
  singly-linked list. When we push an element onto the stack, we add it
  to the beginning of the list; when we pop an element, we remove it
  from the beginning. For a linked list, adding and removing from the
  beginning are constant time operations, so this implementation is
  efficient. Conversely, big APIs are harder to implement efficiently.

\end{enumerate}
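As a sketch of the second point, here is a minimal stack backed by a singly-linked list; the class name \java{LinkedStack} is made up for this example. Both \java{push} and \java{pop} touch only the first node, so they are constant time.

```java
import java.util.NoSuchElementException;

public class LinkedStack<E> {
    // Each node holds one element and a reference to the node below it.
    private static class Node<E> {
        E value;
        Node<E> next;
        Node(E value, Node<E> next) { this.value = value; this.next = next; }
    }

    private Node<E> top;  // null when the stack is empty

    // push: add to the beginning of the list -- constant time
    public void push(E element) {
        top = new Node<>(element, top);
    }

    // pop: remove from the beginning of the list -- constant time
    public E pop() {
        if (top == null) throw new NoSuchElementException();
        E value = top.value;
        top = top.next;
        return value;
    }

    // peek: return the top element without removing it
    public E peek() {
        if (top == null) throw new NoSuchElementException();
        return top.value;
    }

    public boolean isEmpty() {
        return top == null;
    }

    public static void main(String[] args) {
        LinkedStack<String> stack = new LinkedStack<>();
        stack.push("a");
        stack.push("b");
        stack.push("c");
        System.out.println(stack.pop());  // c -- last in, first out
    }
}
```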

\index{constant time}

To implement a stack in Java, you have three options:

\begin{enumerate}

\item
  Go ahead and use \java{ArrayList} or \java{LinkedList}. If you use
  \java{ArrayList}, be sure to add and remove from the \emph{end},
  which is a constant time operation. And be careful not to add elements
  in the wrong place or remove them in the wrong order.

\item
  Java provides a class called \java{Stack} that provides the standard
  set of stack methods. But this class is an old part of Java: it is not
  consistent with the Java Collections Framework, which came later.

\item
  Probably the best choice is to use one of the implementations of the
  \java{Deque} interface, like \java{ArrayDeque}.

\end{enumerate}

``Deque'' stands for ``double-ended queue''; it's supposed to be
pronounced ``deck'', but some people say ``deek''. In Java, the
\java{Deque} interface provides \java{push}, \java{pop},
\java{peek}, and \java{isEmpty}, so you can use a \java{Deque} as
a stack. It provides other methods, which you can read about at
\url{http://thinkdast.com/deque},
but we won't use them for now.
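Here's a short sketch of \java{ArrayDeque} used as a stack; it demonstrates that the elements come off in last-in, first-out order. The class name \java{DequeAsStack} is made up for the example.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class DequeAsStack {
    public static String demo() {
        Deque<String> stack = new ArrayDeque<>();
        stack.push("a");
        stack.push("b");
        stack.push("c");

        StringBuilder order = new StringBuilder();
        while (!stack.isEmpty()) {
            order.append(stack.pop());  // last in, first out: c, b, a
        }
        return order.toString();
    }

    public static void main(String[] args) {
        System.out.println(demo());  // cba
    }
}
```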

\index{deque}


\section{Iterative DFS}
\label{iterative-dfs}

Here is an iterative version of DFS that uses an \java{ArrayDeque} to
represent a stack of \java{Node} objects:

\begin{verbatim}
    private static void iterativeDFS(Node root) {
        Deque<Node> stack = new ArrayDeque<Node>();
        stack.push(root);

        while (!stack.isEmpty()) {
            Node node = stack.pop();
            if (node instanceof TextNode) {
                System.out.print(node);
            }

            List<Node> nodes = new ArrayList<Node>(node.childNodes());
            Collections.reverse(nodes);

            for (Node child: nodes) {
                stack.push(child);
            }
        }
    }
\end{verbatim}

The parameter, \java{root}, is the root of the tree we want to
traverse, so we start by creating the stack and pushing the root onto
it.

\index{ArrayDeque}
\index{iterative DFS}

The loop continues until the stack is empty. Each time through, it pops
a \java{Node} off the stack. If it gets a \java{TextNode}, it prints
the contents. Then it pushes the children onto the stack. In order to
process the children in the right order, we have to push them onto the
stack in reverse order; we do that by copying the children into an
\java{ArrayList}, reversing the elements in place, and then iterating
through the reversed \java{ArrayList}.

One advantage of the iterative version of DFS is that it is easier to
implement as a Java \java{Iterator}; you'll see how in the next
chapter.

\index{LinkedList}

But first, one last note about the \java{Deque} interface: in
addition to \java{ArrayDeque}, Java provides another implementation of
\java{Deque}, our old friend \java{LinkedList}. \java{LinkedList}
implements both interfaces, \java{List} and \java{Deque}. Which
interface you get depends on how you use it. For example, if you assign
a \java{LinkedList} object to a \java{Deque} variable, like this:

\begin{verbatim}
Deque<Node> deque = new LinkedList<Node>();
\end{verbatim}

you can use the methods in the \java{Deque} interface, but not all
methods in the \java{List} interface. If you assign it to a
\java{List} variable, like this:

\begin{verbatim}
List<Node> deque = new LinkedList<Node>();
\end{verbatim}

you can use \java{List} methods but not all \java{Deque} methods.
And if you assign it like this:

\begin{verbatim}
LinkedList<Node> deque = new LinkedList<Node>();
\end{verbatim}

you can use \emph{all} the methods. But if you combine methods from
different interfaces, your code will be less readable and more
error-prone.
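The following sketch shows the three options side by side; the commented-out lines would not compile, because those methods are not part of the declared interface. The class name \java{LinkedListViews} is made up for the example.

```java
import java.util.Deque;
import java.util.LinkedList;
import java.util.List;

public class LinkedListViews {
    public static String demo() {
        // Through the Deque interface we get push/pop but not get(i).
        Deque<String> deque = new LinkedList<>();
        deque.push("a");
        // deque.get(0);   // won't compile: get is not a Deque method

        // Through the List interface we get positional access, not push/pop.
        List<String> list = new LinkedList<>();
        list.add("a");
        // list.push("b"); // won't compile: push is not a List method

        // Through the concrete class, all methods are available.
        LinkedList<String> both = new LinkedList<>();
        both.push("a");    // Deque method: adds at the front
        both.add("b");     // List method: appends at the end
        return both.get(0) + both.get(1);
    }

    public static void main(String[] args) {
        System.out.println(demo());  // ab
    }
}
```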



\chapter{Getting to Philosophy}
\label{getphilo}

The goal of this chapter is to develop a Web crawler that tests the
``Getting to Philosophy'' conjecture, which we presented in
Section~\ref{the-road-ahead}.

\index{Getting to Philosophy}


\section{Getting started}
\label{getting-started}

In the repository for this book,
you'll find some code to help you get started:

\begin{enumerate}

\item
  \java{WikiNodeExample.java} contains the code from the previous
  chapter, demonstrating recursive and iterative implementations of
  depth-first search (DFS) in a DOM tree.

\item
  \java{WikiNodeIterable.java} contains an \java{Iterable} class for
  traversing a DOM tree. I'll explain this code in the next section.

\item
  \java{WikiFetcher.java} contains a utility class that uses jsoup to
  download pages from Wikipedia. To help you comply with Wikipedia's
  terms of service, this class limits how fast you can download pages;
  if you request more than one page per second, it sleeps before
  downloading the next page.

\item
  \java{WikiPhilosophy.java} contains an outline of the code you will
  write for this exercise. We'll walk through it below.

\end{enumerate}

You'll also find the Ant build file
\java{build.xml}.  If you run \java{ant WikiPhilosophy}, it will run
a simple bit of starter code.

\index{WikiPhilosophy}
\index{Ant}


\section{Iterables and Iterators}
\label{iterables-and-iterators}

In the previous chapter, I presented an iterative depth-first search
(DFS), and suggested that an advantage of the iterative version,
compared to the recursive version, is that it is easier to wrap in an
\java{Iterator} object. In this section we'll see how to do that.

\index{Iterable}
\index{Iterator}

If you are not familiar with the \java{Iterator} and \java{Iterable}
interfaces, you can read about them at
\url{http://thinkdast.com/iterator}
and
\url{http://thinkdast.com/iterable}.

Take a look at the contents of \java{WikiNodeIterable.java}. The outer
class, \java{WikiNodeIterable}, implements the
\java{Iterable<Node>} interface, so we can use
it in a for loop like this:

\begin{verbatim}
    Node root = ...
    Iterable<Node> iter = new WikiNodeIterable(root);
    for (Node node: iter) {
        visit(node);
    }
\end{verbatim}

where \java{root} is the root of the tree we want to traverse and
\java{visit} is a method that does whatever we want when we ``visit''
a \java{Node}.

\index{WikiNodeIterable}

The implementation of \java{WikiNodeIterable} follows a conventional
formula:

\begin{enumerate}

\item
  The constructor takes and stores a reference to the root
  \java{Node}.

\item
The \java{iterator} method creates and returns an \java{Iterator}
  object.

\end{enumerate}

Here's what it looks like:

\begin{verbatim}
public class WikiNodeIterable implements Iterable<Node> {

    private Node root;

    public WikiNodeIterable(Node root) {
        this.root = root;
    }

    @Override
    public Iterator<Node> iterator() {
        return new WikiNodeIterator(root);
    }
}
\end{verbatim}

The inner class, \java{WikiNodeIterator}, does all the real work:

\begin{verbatim}
    private class WikiNodeIterator implements Iterator<Node> {

        Deque<Node> stack;

        public WikiNodeIterator(Node node) {
            stack = new ArrayDeque<Node>();
            stack.push(node);
        }

        @Override
        public boolean hasNext() {
            return !stack.isEmpty();
        }

        @Override
        public Node next() {
            if (stack.isEmpty()) {
                throw new NoSuchElementException();
            }

            Node node = stack.pop();
            List<Node> nodes = new ArrayList<Node>(node.childNodes());
            Collections.reverse(nodes);
            for (Node child: nodes) {
                stack.push(child);
            }
            return node;
        }
    }
\end{verbatim}

\index{WikiNodeIterator}
\index{DFS}
\index{depth-first search}

This code is almost identical to the iterative version of DFS, but now
it's split into three methods:

\begin{enumerate}

\item
  The constructor initializes the stack (which is implemented using an
  \java{ArrayDeque}) and pushes the root node onto it.

\item
  \java{hasNext} returns \java{true} as long as the stack is not
  empty; that is, as long as there are nodes we have not visited.

\item
  \java{next} pops the next \java{Node} off the stack, pushes its
  children in reverse order, and returns the \java{Node} it popped. If
  someone invokes \java{next} on an empty \java{Iterator}, it throws
  an exception.

\end{enumerate}

It might not be obvious that it is worthwhile to rewrite a perfectly
good method with two classes and five methods.  But now that we've
done it, we can use \java{WikiNodeIterable} anywhere an
\java{Iterable} is called for, which makes it easy and syntactically
clean to separate the logic of the iteration (DFS) from whatever
processing we are doing on the nodes.
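The same formula works for any sequence, not just DOM trees. Here's a sketch with a hypothetical \java{Range} class that iterates through integers; its structure mirrors \java{WikiNodeIterable}: the constructor stores the state, \java{iterator} returns an inner \java{Iterator}, and the inner class does the real work.

```java
import java.util.Iterator;
import java.util.NoSuchElementException;

// A hypothetical Iterable over the integers from lo (inclusive)
// to hi (exclusive), following the same two-step formula.
public class Range implements Iterable<Integer> {
    private final int lo, hi;

    // 1. The constructor takes and stores what the iteration needs.
    public Range(int lo, int hi) {
        this.lo = lo;
        this.hi = hi;
    }

    // 2. The iterator method creates and returns an Iterator object.
    @Override
    public Iterator<Integer> iterator() {
        return new RangeIterator();
    }

    // The inner class does the real work.
    private class RangeIterator implements Iterator<Integer> {
        private int next = lo;

        @Override
        public boolean hasNext() {
            return next < hi;
        }

        @Override
        public Integer next() {
            if (!hasNext()) throw new NoSuchElementException();
            return next++;
        }
    }

    public static int sumDemo() {
        int total = 0;
        for (int i : new Range(1, 4)) {  // visits 1, 2, 3
            total += i;
        }
        return total;  // 6
    }

    public static void main(String[] args) {
        System.out.println(sumDemo());
    }
}
```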

\index{hasNext}
\index{next}


\section{\java{WikiFetcher}}
\label{wikifetcher}

\index{WikiFetcher}

When you write a Web crawler, it is easy to download too many pages too
fast, which might violate the terms of service for the server you are
downloading from. To help you avoid that, I provide a class called
\java{WikiFetcher} that does two things:

\begin{enumerate}

\item
  It encapsulates the code we demonstrated in the previous chapter for
  downloading pages from Wikipedia, parsing the HTML, and selecting the
  content text.

\item
  It measures the time between requests and, if we don't leave enough
  time between requests, it sleeps until a reasonable interval has
  elapsed. By default, the interval is one second.

\end{enumerate}

Here's the definition of \java{WikiFetcher}:

\begin{verbatim}
public class WikiFetcher {
    private long lastRequestTime = -1;
    private long minInterval = 1000;

    /**
     * Fetches and parses a URL string, 
     * returning a list of paragraph elements.
     *
     * @param url
     * @return
     * @throws IOException
     */
    public Elements fetchWikipedia(String url) throws IOException {
        sleepIfNeeded();

        Connection conn = Jsoup.connect(url);
        Document doc = conn.get();
        Element content = doc.getElementById("mw-content-text");
        Elements paragraphs = content.select("p");
        return paragraphs;
    }

    private void sleepIfNeeded() {
        if (lastRequestTime != -1) {
            long currentTime = System.currentTimeMillis();
            long nextRequestTime = lastRequestTime + minInterval;
            if (currentTime < nextRequestTime) {
                try {
                    Thread.sleep(nextRequestTime - currentTime);
                } catch (InterruptedException e) {
                    System.err.println(
                        "Warning: sleep interrupted in fetchWikipedia.");
                }
            }
        }
        lastRequestTime = System.currentTimeMillis();
    }
}
\end{verbatim}

The only public method is \java{fetchWikipedia}, which takes a URL as
a \java{String} and returns an \java{Elements} collection that contains one
DOM element for each paragraph in the content text. This code should
look familiar.

\index{Elements}

The new code is in \java{sleepIfNeeded}, which checks the time since
the last request and sleeps if the elapsed time is less than
\java{minInterval}, which is in milliseconds.

That's all there is to \java{WikiFetcher}. Here's an example that
demonstrates how it's used:

\begin{verbatim}
    WikiFetcher wf = new WikiFetcher();

    for (String url: urlList) {
        Elements paragraphs = wf.fetchWikipedia(url);
        processParagraphs(paragraphs);
    }
\end{verbatim}

In this example, we assume that \java{urlList} is a collection of
\java{String}s, and \java{processParagraphs} is a method that does something
with the \java{Elements} object returned by \java{fetchWikipedia}.

This example demonstrates something important: you should create one
\java{WikiFetcher} object and use it to handle all requests. If you
have multiple instances of \java{WikiFetcher}, they won't enforce the
minimum interval between requests.

\index{singleton}

NOTE: My implementation of \java{WikiFetcher} is simple, but it would
be easy for someone to misuse it by creating multiple instances. You
could avoid this problem by making \java{WikiFetcher} a ``singleton'',
which you can read about at
\url{http://thinkdast.com/singleton}.


\section{Exercise 5}
\label{exercise5}

In \java{WikiPhilosophy.java} you'll find a simple \java{main}
method that shows how to use some of these pieces. Starting with this
code, your job is to write a crawler that:

\begin{enumerate}

\item
  Takes a URL for a Wikipedia page, downloads it, and parses it.

\item
  Traverses the resulting DOM tree to find the first
  \emph{valid} link. I'll explain what ``valid'' means below.

\item
  If the page has no links, or if the first link is a page we have
  already seen, the program should indicate failure and exit.

\item
  If the link matches the URL of the Wikipedia page on philosophy, the
  program should indicate success and exit.

\item
  Otherwise it should go back to Step 1.

\end{enumerate}

The program should build a \java{List} of the URLs it visits and
display the results at the end (whether it succeeds or fails).

\index{Getting to Philosophy}

So what should we consider a ``valid'' link? You have some choices here.
Various versions of the ``Getting to Philosophy'' conjecture use
slightly different rules, but here are some options:

\begin{enumerate}

\item
  The link should be in the content text of the page, not in a sidebar
  or boxout.

\item
  It should not be in italics or in parentheses.

\item
  You should skip external links, links to the current page, and red
  links.

\item
  In some versions, you should skip a link if the text starts with an
  uppercase letter.

\end{enumerate}

You don't have to enforce all of these rules, but we recommend that you
at least handle parentheses, italics, and links to the current page.

If you feel like you have enough information to get started, go ahead.
Or you might want to read these hints:

\begin{enumerate}

\item
  As you traverse the tree, the two kinds of \java{Node} you will need
  to deal with are \java{TextNode} and \java{Element}. If you find
  an \java{Element}, you will probably have to typecast it to access
  the tag and other information.

\item
  When you find an \java{Element} that contains a link, you can check
  whether it is in italics by following parent links up the tree. If
  there is an \java{<i>} or \java{<em>} tag in the parent chain, the
  link is in italics.

\item
  To check whether a link is in parentheses, you will have to scan
  through the text as you traverse the tree and keep track of opening
  and closing parentheses (ideally your solution should be able to
  handle nested parentheses (like this)).

\item
  If you start from the Java page, you should get to Philosophy
  after following
  seven links, unless something has changed since I ran the code.

\end{enumerate}

OK, that's all the help you're going to get. Now it's up to you.
Have fun!



\chapter{Indexer}

At this point we have built a basic Web crawler; the next piece we will
work on is the \textbf{index}. In the context of web search, an index is
a data structure that makes it possible to look up a search term and
find the pages where that term appears. In addition, we would like to
know how many times the search term appears on each page, which will
help identify the pages most relevant to the term.

\index{index}
\index{search term}

For example, if a user submits the search terms ``Java'' and
``programming'', we would look up both search terms and get two sets of
pages. Pages with the word ``Java'' would include pages about the island
of Java, the nickname for coffee, and the programming language. Pages
with the word ``programming'' would include pages about different
programming languages, as well as other uses of the word. By selecting
pages with both terms, we hope to eliminate irrelevant pages and find
the ones about Java programming.

Now that we understand what the index is and what operations it
performs, we can design a data structure to represent it.


\section{Data structure selection}
\label{data-structure-selection}

The fundamental operation of the index is a \textbf{lookup};
specifically, we need the ability to look up a term and find all pages
that contain it. The simplest implementation would be a collection of
pages. Given a search term, we could iterate through the contents of the
pages and select the ones that contain the search term. But the runtime
would be proportional to the total number of words on all the pages,
which would be way too slow.

\index{lookup}
\index{map}
\index{key-value pair}
\index{key}
\index{value}
\index{frequency}

A better alternative is a \textbf{map}, which is a data structure that
represents a collection of \textbf{key-value pairs} and provides a fast
way to look up a \textbf{key} and find the corresponding \textbf{value}.
For example, the first map we'll construct is a \java{TermCounter},
which maps from each search term to the number of times it appears in a
page. The keys are the search terms and the values are the counts (also
called ``frequencies'').

Java provides an interface called \java{Map} that specifies the
methods a map should provide; the most important are:

\begin{itemize}

\item
  \java{get(key)}: This method looks up a key and returns the
  corresponding value.

\item
  \java{put(key, value)}: This method adds a new key-value pair to the
  \java{Map}, or if the key is already in the map, it replaces the
  value associated with \java{key}.

\end{itemize}
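Here's a short sketch of these methods in action using \java{HashMap}; note that \java{put} on an existing key replaces the old value, and \java{get} returns \java{null} for a key that is not in the map. The class name \java{MapDemo} is made up for the example.

```java
import java.util.HashMap;
import java.util.Map;

public class MapDemo {
    public static Integer demo(String key) {
        Map<String, Integer> map = new HashMap<>();

        map.put("java", 1);   // adds a new key-value pair
        map.put("java", 2);   // same key: replaces the old value

        return map.get(key);  // null if the key is not present
    }

    public static void main(String[] args) {
        System.out.println(demo("java"));    // 2
        System.out.println(demo("python"));  // null
    }
}
```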

Java provides several implementations of \java{Map}, including the two
we will focus on, \java{HashMap} and \java{TreeMap}. In upcoming
chapters, we'll look at these implementations and analyze their performance.

In addition to the \java{TermCounter}, which maps from search terms to
counts, we will define a class called \java{Index}, which maps from a
search term to a collection of pages where it appears. And that raises
the next question, which is how to represent a collection of pages.
Again, if we think about the operations we want to perform, that guides
our decision.

\index{Set}
\index{set intersection}

In this case, we'll need to combine two or more collections and find the
pages that appear in all of them. You might recognize this operation as
\textbf{set intersection}: the intersection of two sets is the set of
elements that appear in both.

As you might expect by now, Java provides a \java{Set} interface that
defines the operations a set should perform. It doesn't actually provide
set intersection, but it provides methods that make it possible to
implement intersection and other set operations efficiently. The core
\java{Set} methods are:

\begin{itemize}

\item
  \java{add(element)}: This method adds an element to a set; if the
  element is already in the set, it has no effect.

\item
  \java{contains(element)}: This method checks whether the given
  element is in the set.

\end{itemize}

Java provides several implementations of \java{Set}, including
\java{HashSet} and \java{TreeSet}.
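For example, one way to implement intersection with the methods \java{Set} does provide is to copy one set and use \java{retainAll}; this sketch uses \java{String}s standing in for pages, and the class name \java{IntersectionDemo} is made up for the example.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class IntersectionDemo {
    // Intersect two sets without modifying either one.
    public static <E> Set<E> intersection(Set<E> s1, Set<E> s2) {
        Set<E> result = new HashSet<>(s1);  // copy so we don't modify s1
        result.retainAll(s2);               // keep only elements also in s2
        return result;
    }

    public static void main(String[] args) {
        Set<String> javaPages = new HashSet<>(
            Arrays.asList("island", "coffee", "language"));
        Set<String> programmingPages = new HashSet<>(
            Arrays.asList("language", "compilers"));
        System.out.println(intersection(javaPages, programmingPages));
    }
}
```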

\index{add}
\index{contains}

Now that we've designed our data structures from the top down, we'll
implement them from the inside out, starting with \java{TermCounter}.


\section{TermCounter}
\label{termcounter}

\index{TermCounter}

\java{TermCounter} is a class that represents a mapping from search
terms to the number of times they appear in a page. Here is the first
part of the class definition:

\begin{verbatim}
public class TermCounter {

    private Map<String, Integer> map;
    private String label;

    public TermCounter(String label) {
        this.label = label;
        this.map = new HashMap<String, Integer>();
    }
}
\end{verbatim}

The instance variables are \java{map}, which contains the mapping from
terms to counts, and \java{label}, which identifies the document the
terms came from; we'll use it to store URLs.

\index{URL}
\index{Map}
\index{HashMap}

To implement the mapping, I chose \java{HashMap}, which is the most
commonly-used \java{Map}. Coming up in a few chapters, you will see how
it works and why it is a common choice.

\java{TermCounter} provides \java{put} and \java{get}, which are
defined like this:

\begin{verbatim}
    public void put(String term, int count) {
        map.put(term, count);
    }

    public Integer get(String term) {
        Integer count = map.get(term);
        return count == null ? 0 : count;
    }
\end{verbatim}

\java{put} is just a \textbf{wrapper method}; when you call
\java{put} on a \java{TermCounter}, it calls \java{put} on the
embedded map.

\index{put}
\index{get}
\index{wrapper method}

On the other hand, \java{get} actually does some work. When you call
\java{get} on a \java{TermCounter}, it calls \java{get} on the
map, and then checks the result. If the term does not appear in the
map, \java{TermCounter.get} returns 0. Defining \java{get} this way
makes it easier to write \java{incrementTermCount}, which takes a term
and increments the counter associated with that term by one.

\begin{verbatim}
    public void incrementTermCount(String term) {
        put(term, get(term) + 1);
    }
\end{verbatim}

If the term has not been seen before, \java{get} returns 0; we add 1,
then use \java{put} to add a new key-value pair to the map. If the
term is already in the map, we get the old count, add 1, and then store
the new count, which replaces the old value.
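Here's the same check-then-store logic written against a plain \java{HashMap}, outside of \java{TermCounter}; the class name \java{CountDemo} is made up for the example.

```java
import java.util.HashMap;
import java.util.Map;

public class CountDemo {
    // The same logic as incrementTermCount: treat a missing
    // key as 0, add 1, and store the new count.
    public static int increment(Map<String, Integer> map, String term) {
        Integer count = map.get(term);
        int newCount = (count == null ? 0 : count) + 1;
        map.put(term, newCount);
        return newCount;
    }

    public static void main(String[] args) {
        Map<String, Integer> counts = new HashMap<>();
        increment(counts, "java");
        increment(counts, "java");
        System.out.println(counts.get("java"));  // 2
    }
}
```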

In addition, \java{TermCounter} provides these other methods to help
with indexing Web pages:

\begin{verbatim}
    public void processElements(Elements paragraphs) {
        for (Node node: paragraphs) {
            processTree(node);
        }
    }

    public void processTree(Node root) {
        for (Node node: new WikiNodeIterable(root)) {
            if (node instanceof TextNode) {
                processText(((TextNode) node).text());
            }
        }
    }

    public void processText(String text) {
        String[] array = text.replaceAll("\\pP", " ").
                              toLowerCase().
                              split("\\s+");

        for (int i=0; i<array.length; i++) {
            String term = array[i];
            incrementTermCount(term);
        }
    }
\end{verbatim}

\begin{itemize}

\item
  \java{processElements} takes an \java{Elements} object, which is a
  collection of jsoup \java{Element} objects. It iterates through the
  collection and calls \java{processTree} on each.

\item
  \java{processTree} takes a jsoup \java{Node} that represents the
  root of a DOM tree. It iterates through the tree to find the nodes
  that contain text; then it extracts the text and passes it to
  \java{processText}.

\item
  \java{processText} takes a \java{String} that contains words, spaces,
  punctuation, etc. It removes punctuation characters by replacing
  them with spaces, converts the remaining letters to lowercase, then
  splits the text into words. Then it loops through the words it found
  and calls \java{incrementTermCount} on each.  The \java{replaceAll}
  and \java{split} methods take {\bf regular expressions} as parameters;
  you can read more about them at \url{http://thinkdast.com/regex}.

\end{itemize}

\index{Element}
\index{DOM tree}
\index{regular expression}
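Here's a sketch you can run to see what the \java{processText} pipeline does to a sample string; it applies the same \java{replaceAll}, \java{toLowerCase}, and \java{split} calls. The class name \java{SplitDemo} is made up for the example.

```java
import java.util.Arrays;

public class SplitDemo {
    // Same pipeline as processText: replace punctuation with
    // spaces, lowercase, then split on runs of whitespace.
    public static String[] words(String text) {
        return text.replaceAll("\\pP", " ")
                   .toLowerCase()
                   .split("\\s+");
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(words("Hello, World!")));
        // [hello, world]
    }
}
```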

Finally, here's an example that demonstrates how \java{TermCounter} is
used:

\begin{verbatim}
    String url = "http://en.wikipedia.org/wiki/Java_(programming_language)";
    WikiFetcher wf = new WikiFetcher();
    Elements paragraphs = wf.fetchWikipedia(url);

    TermCounter counter = new TermCounter(url);
    counter.processElements(paragraphs);
    counter.printCounts();
\end{verbatim}

This example uses a \java{WikiFetcher} to download a page from
Wikipedia and parse the main text. Then it creates a
\java{TermCounter} and uses it to count the words in the page.

\index{WikiFetcher}

In the next section, you'll have a chance to run this code and test your
understanding by filling in a missing method.


\section{Exercise 6}
\label{exercise6}

In the repository for this book,
you'll find the source files for this exercise:

\begin{itemize}

\item \java{TermCounter.java} contains the code from the previous
  section.

\item \java{TermCounterTest.java} contains test code for
  \java{TermCounter.java}.

\item \java{Index.java} contains the class definition for the next
  part of this exercise.

\item \java{WikiFetcher.java} contains the class we used in the
  previous exercise to download and parse Web pages.

\item \java{WikiNodeIterable.java} contains the class we used to
  traverse the nodes in a DOM tree.

\end{itemize}

You'll also find the Ant build file
\java{build.xml}.

\index{Ant}

Run \java{ant build} to compile the source files. Then run
\java{ant TermCounter}; it should run the code from the previous
section and print a list of terms and their counts. The output
should look something like this:

\begin{verbatim}
genericservlet, 2
configurations, 1
claimed, 1
servletresponse, 2
occur, 2
Total of all counts = -1
\end{verbatim}

When you run it, the order of the terms might be different.

\index{size}

The last line is supposed to print the total of the term counts, but
it returns \java{-1} because the method \java{size} is incomplete.
Fill in this method and run \java{ant TermCounter} again. The result
should be \java{4798}.

Run \java{ant TermCounterTest} to confirm that this part of the
exercise is complete and correct.

\index{Index}

For the second part of the exercise, I'll present an implementation of an
\java{Index} object and you will fill in a missing method. Here's the
beginning of the class definition:

\begin{verbatim}
public class Index {

    private Map<String, Set<TermCounter>> index = 
        new HashMap<String, Set<TermCounter>>();

    public void add(String term, TermCounter tc) {
        Set<TermCounter> set = get(term);

        // if we're seeing a term for the first time, make a new Set
        if (set == null) {
            set = new HashSet<TermCounter>();
            index.put(term, set);
        }
        // otherwise we can modify an existing Set
        set.add(tc);
    }

    public Set<TermCounter> get(String term) {
        return index.get(term);
    }
\end{verbatim}

The instance variable, \java{index}, is a map from each search term to
a set of \java{TermCounter} objects. Each \java{TermCounter}
represents a page where the search term appears.

The \java{add} method adds a new \java{TermCounter} to the set
associated with a term. When we index a term that has not appeared
before, we have to create a new set. Otherwise we can just add a new
element to an existing set. In that case, \java{set.add} modifies a
set that lives inside \java{index}, but doesn't modify \java{index}
itself. The only time we modify \java{index} is when we add a new
term.

\index{add}
\index{get}

Finally, the \java{get} method takes a search term and returns the
corresponding set of \java{TermCounter} objects.

This data structure is moderately complicated. To review, an
\java{Index} contains a \java{Map} from each search term to a
\java{Set} of \java{TermCounter} objects, and each \java{TermCounter}
is a map from search terms to counts.
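To make the nesting concrete, here is a minimal sketch (not code from the book's repository) that mimics the shape of an \java{Index} using the built-in collections, mapping each term to the set of page labels where it appears:

\begin{verbatim}
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class IndexShapeDemo {
    // term -> set of page labels (standing in for TermCounter objects)
    static Map<String, Set<String>> index =
        new HashMap<String, Set<String>>();

    static void add(String term, String label) {
        Set<String> set = index.get(term);
        if (set == null) {
            // first time we see this term: make a new Set
            set = new HashSet<String>();
            index.put(term, set);
        }
        // modifies a set inside index, not index itself
        set.add(label);
    }

    public static void main(String[] args) {
        add("java", "page1");
        add("java", "page2");
        add("occur", "page1");
        System.out.println(index.get("java"));
    }
}
\end{verbatim}

The page labels here stand in for whole \java{TermCounter} objects, but the two-level structure, and the way \java{add} only modifies the outer map when a term is new, are the same.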

\begin{figure}
\centering
\includegraphics[width=4in]{figs/index.pdf}
\caption{Object diagram of an \java{Index}.}
\label{indexfig}
\end{figure}

Figure~\ref{indexfig} is an object diagram that shows these
objects.  The \java{Index} object has an instance variable named
\java{index} that refers to a \java{Map}.  In this example the
\java{Map} contains only one string, \java{"Java"}, which maps
to a \java{Set} that contains two \java{TermCounter} objects,
one for each page where the word ``Java'' appears.

\index{object diagram}
\index{URL}

Each \java{TermCounter} contains \java{label}, which is the URL
of the page, and \java{map}, which is a \java{Map} that
contains the words on the page
and the number of times each word appears.

The method \java{printIndex} shows how to
unpack this data structure:

\begin{verbatim}
    public void printIndex() {
        // loop through the search terms
        for (String term: keySet()) {
            System.out.println(term);

            // for each term, print pages where it appears and frequencies
            Set<TermCounter> tcs = get(term);
            for (TermCounter tc: tcs) {
                Integer count = tc.get(term);
                System.out.println("    " + tc.getLabel() + " " + count);
            }
        }
    }
\end{verbatim}

The outer loop iterates through the search terms, using a
\java{keySet} method of \java{Index} (not shown) that returns the
keys of \java{index}. The inner loop iterates through the
\java{TermCounter} objects.

\index{Ant}

Run \java{ant build} to make sure your source code is compiled, and
then run \java{ant Index}. It downloads two Wikipedia pages, indexes
them, and prints the results; but when you run it you won't see any
output because we've left one of the methods empty.

\index{indexPage}

Your job is to fill in \java{indexPage}, which takes a URL (as a
\java{String}) and an \java{Elements} object, and updates the index. The
comments below sketch what it should do:

\begin{verbatim}
public void indexPage(String url, Elements paragraphs) {
    // make a TermCounter and count the terms in the paragraphs

    // for each term in the TermCounter, add the TermCounter to the index
}
\end{verbatim}

When it's working, run \java{ant Index} again, and you should see
  output like this:

\begin{verbatim}
...
configurations
    http://en.wikipedia.org/wiki/Programming_language 1
    http://en.wikipedia.org/wiki/Java_(programming_language) 1
claimed
    http://en.wikipedia.org/wiki/Java_(programming_language) 1
servletresponse
    http://en.wikipedia.org/wiki/Java_(programming_language) 2
occur
    http://en.wikipedia.org/wiki/Java_(programming_language) 2
\end{verbatim}

The order of the search terms might be different when you run it.

Also, run \java{ant TestIndex} to confirm that this part of the exercise is
complete.


\chapter{The Map interface}

In the next few exercises, I present several implementations of the
\java{Map} interface. One of them is based on a \textbf{hash table},
which is arguably the most magical data structure ever
invented. Another, which is similar to \java{TreeMap}, is not quite as
magical, but it has the added capability that it can iterate the
elements in order.

\index{map}
\index{hash table}

You will have a chance to implement these data structures, and then we
will analyze their performance.

But before we can explain hash tables, we'll start with a simple
implementation of a \java{Map} using a \java{List} of key-value
pairs.

\section{Implementing \java{MyLinearMap}}
\label{implementing-mylinearmap}

\index{MyLinearMap}

As usual, I provide starter code and you will fill in the missing
methods. Here's the beginning of the \java{MyLinearMap} class
definition:

\begin{verbatim}
public class MyLinearMap<K, V> implements Map<K, V> {

    private List<Entry> entries = new ArrayList<Entry>();
\end{verbatim}

This class uses two type parameters, \java{K}, which is the type of
the keys, and \java{V}, which is the type of the values.
\java{MyLinearMap} implements \java{Map}, which means it has to
provide the methods in the \java{Map} interface.

\index{type parameter}
\index{ArrayList}

A \java{MyLinearMap} object has a single instance variable,
\java{entries}, which is an \java{ArrayList} of \java{Entry}
objects. Each \java{Entry} contains a key-value pair. Here is the
definition:

\begin{verbatim}
    public class Entry implements Map.Entry<K, V> {
        private K key;
        private V value;
        
        public Entry(K key, V value) {
            this.key = key;
            this.value = value;
        }
        
        @Override
        public K getKey() {
            return key;
        }
        @Override
        public V getValue() {
            return value;
        }
    }
\end{verbatim}

There's not much to it; an \java{Entry} is just a container for a key
and a value. This definition is nested inside \java{MyLinearMap}, so
it uses the same type parameters, \java{K} and \java{V}.

\index{Entry}

That's all you need to do the exercise, so let's get started.


\section{Exercise 7}
\label{exercise7}

In the repository for this book,
you'll find the source files for this exercise:

\begin{itemize}

\item \java{MyLinearMap.java} contains starter code for the first part
  of the exercise.

\item \java{MyLinearMapTest.java} contains the unit tests for
  \java{MyLinearMap}.

\end{itemize}

You'll also find the Ant build file
\java{build.xml}.

\index{Ant}

Run \java{ant build} to compile the source files. Then run \java{ant
  MyLinearMapTest}; several tests should fail, because you have some
work to do!

\index{helper method}

First, fill in the body of \java{findEntry}. This is a helper method
that is not part of the \java{Map} interface, but once you get it
working, you can use it for several methods. Given a target key, it
should search through the entries and return the entry that contains
the target (as a key, not a value) or \java{null} if it's not
there. Notice that I provide an \java{equals} method that
compares two keys and handles \java{null} correctly.

\index{findEntry}

You can run \java{ant MyLinearMapTest} again, but even if your
\java{findEntry} is correct, the tests won't pass because \java{put}
is not complete.

\index{put}

Fill in \java{put}. You should read the documentation of
\java{Map.put} at \url{http://thinkdast.com/listput} so you know what
it is supposed to do. You might want to start with a version of
\java{put} that always adds a new entry and does not modify an
existing entry; that way you can test the simple case first.  Or if
you feel more confident, you can write the whole thing at once.

\index{containsKey}
\index{get}
\index{remove}

Once you've got \java{put} working, the test for \java{containsKey}
should pass.

Read the documentation of \java{Map.get} at
  \url{http://thinkdast.com/listget}
  and then fill in the method. Run the tests again.

Finally, read the documentation of \java{Map.remove} at
  \url{http://thinkdast.com/maprem}
  and fill in the method.

At this point, all tests should pass. Congratulations!


\section{Analyzing \java{MyLinearMap}}
\label{analyzing-mylinearmap}

\index{equals}

In this section I present a solution to the previous exercise and
analyze the performance of the core methods.  Here are
\java{findEntry} and \java{equals}:

\begin{verbatim}
private Entry findEntry(Object target) {
    for (Entry entry: entries) {
        if (equals(target, entry.getKey())) {
            return entry;
        }
    }
    return null;
}

private boolean equals(Object target, Object obj) {
    if (target == null) {
        return obj == null;
    }
    return target.equals(obj);
}
\end{verbatim}

The runtime of \java{equals} might depend on the size of the
\java{target} and the keys, but does not generally depend on
the number of entries, $n$. So \java{equals} is constant time.

\index{constant time}
\index{analysis of algorithms}

In \java{findEntry}, we might get lucky and find the key we're looking
for at the beginning, but we can't count on it. In general, the number
of entries we have to search is proportional to $n$, so
\java{findEntry} is linear.

\index{findEntry}
\index{linear time}

Most of the core methods in \java{MyLinearMap} use \java{findEntry},
including \java{put}, \java{get}, and \java{remove}. Here's what
they look like:

\begin{verbatim}
public V put(K key, V value) {
    Entry entry = findEntry(key);
    if (entry == null) {
        entries.add(new Entry(key, value));
        return null;
    } else {
        V oldValue = entry.getValue();
        entry.setValue(value);
        return oldValue;
    }
}
\end{verbatim}

\begin{verbatim}
public V get(Object key) {
    Entry entry = findEntry(key);
    if (entry == null) {
        return null;
    }
    return entry.getValue();
}
\end{verbatim}
    
\begin{verbatim}
public V remove(Object key) {
    Entry entry = findEntry(key);
    if (entry == null) {
        return null;
    } else {
        V value = entry.getValue();
        entries.remove(entry);
        return value;
    }
}
\end{verbatim}

After \java{put} calls \java{findEntry}, everything else is constant
time. Remember that \java{entries} is an \java{ArrayList}, so adding
an element \emph{at the end} is constant time, on average. If the key is
already in the map, we don't have to add an entry, but we have to call
\java{entry.getValue} and \java{entry.setValue}, and those are both
constant time. Putting it all together, \java{put} is linear.

\index{put}
\index{get}
\index{constant time}

By the same reasoning, \java{get} is also linear.

\java{remove} is slightly more complicated because
\java{entries.remove} might have to remove an element from the
beginning or middle of the \java{ArrayList}, and that takes linear
time. But that's OK: two linear operations are still linear.

\index{linear time}

In summary, the core methods are all linear, which is why we called this
implementation \java{MyLinearMap} (ta-da!).

If we know that the number of entries will be small, this implementation
might be good enough, but we can do better. In fact, there is an
implementation of \java{Map} where all of the core methods are
constant time. When you first hear that, it might not seem possible.
What we are saying, in effect, is that you can find a needle in a
haystack in constant time, regardless of how big the haystack is. It's
magic.

\index{haystack}

I'll explain how it works in two steps:

\begin{enumerate}

\item
  Instead of storing entries in one big \java{List}, we'll break them
  up into lots of short lists. For each key, we'll use a \textbf{hash
  code} (explained in the next section) to determine which list to use.

\item
  Using lots of short lists is faster than using just one, but as I'll
  explain, it doesn't change the order of growth; the core operations
  are still linear. But there is one more trick: if we increase the
  number of lists to limit the number of entries per list, the result is
  a constant-time map. You'll see the details in the next exercise, but
  first: hashing!

\end{enumerate}


\index{hash code}

In the next chapter, I'll present a solution, analyze the performance
of the core \java{Map} methods, and introduce a more efficient
implementation.


\chapter{Hashing}
\label{cs-maps-hashing-readme}

In this chapter, I define
\java{MyBetterMap}, a better implementation of the \java{Map} interface
than \java{MyLinearMap}, and introduce
\textbf{hashing}, which makes \java{MyBetterMap} more efficient.


\section{Hashing}
\label{hashing}

\index{hashing}
\index{MyBetterMap}

To improve the performance of \java{MyLinearMap}, we'll write a new
class, called \java{MyBetterMap}, that contains a collection of
\java{MyLinearMap} objects. It divides the keys among the embedded
maps, so the number of entries in each map is smaller, which speeds up
\java{findEntry} and the methods that depend on it.

Here's the beginning of the class definition:

\begin{verbatim}
public class MyBetterMap<K, V> implements Map<K, V> {
    
    protected List<MyLinearMap<K, V>> maps;
    
    public MyBetterMap(int k) {
        makeMaps(k);
    }

    protected void makeMaps(int k) {
        maps = new ArrayList<MyLinearMap<K, V>>(k);
        for (int i=0; i<k; i++) {
            maps.add(new MyLinearMap<K, V>());
        }
    }
}
\end{verbatim}

The instance variable, \java{maps}, is a collection of
\java{MyLinearMap} objects. The constructor takes a parameter,
\java{k}, that determines how many maps to use, at least initially.
Then \java{makeMaps} creates the embedded maps and stores them in an
\java{ArrayList}.

\index{ArrayList}

Now, the key to making this work is that we need some way to look at a
key and decide which of the embedded maps it should go into. When we
\java{put} a new key, we choose one of the maps; when we \java{get}
the same key, we have to remember where we put it.

\index{get}
\index{Map}

One possibility is to choose one of the sub-maps at random and keep
track of where we put each key. But how should we keep track? It might
seem like we could use a \java{Map} to look up the key and find the
right sub-map, but the whole point of the exercise is to write an
efficient implementation of a \java{Map}. We can't assume we already
have one.

A better approach is to use a \textbf{hash function}, which takes an
\java{Object}, any \java{Object}, and returns an integer called a
\textbf{hash code}.  Importantly, if it sees the same \java{Object}
more than once, it always returns the same hash code. That way, if we
use the hash code to store a key, we'll get the same hash code when we
look it up.

\index{hash function}
\index{hash code}

In Java, every \java{Object} provides a method called
\java{hashCode} that computes a hash function. The implementation of
this method is different for different objects; we'll see an example
soon.
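As a quick aside (a sketch of my own, not the example the text refers to), \java{String} already satisfies the requirement above: equal strings always produce the same hash code, even when they are distinct objects:

\begin{verbatim}
public class HashCodeDemo {
    public static void main(String[] args) {
        String s1 = "Java";
        String s2 = new String("Java");  // a different object, equal contents

        // equal objects are required to have equal hash codes
        System.out.println(s1.equals(s2));                   // true
        System.out.println(s1.hashCode() == s2.hashCode());  // true
    }
}
\end{verbatim}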

\index{helper method}

Here's a helper method that chooses the right sub-map for a
given key:

\begin{verbatim}
protected MyLinearMap<K, V> chooseMap(Object key) {
    int index = 0;
    if (key != null) { 
        index = Math.abs(key.hashCode()) % maps.size();
    }
    return maps.get(index);
}
\end{verbatim}

If \java{key} is \java{null}, we choose the sub-map with index 0,
arbitrarily. Otherwise we use \java{hashCode} to get an integer and
apply \java{Math.abs} to make it non-negative (a simplification: for
the one value \java{Integer.MIN_VALUE}, \java{Math.abs} returns a
negative result, an edge case this version ignores).
Then we use the remainder operator, \java{\%}, which guarantees that the
result is between 0 and \java{maps.size()-1}. So \java{index} is
always a valid index into \java{maps}. Then \java{chooseMap} returns
a reference to the map it chose.
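To see the arithmetic in action, here is a small sketch (with made-up keys, not code from the repository) that computes a sub-map index the same way \java{chooseMap} does, with \java{k = 4} sub-maps:

\begin{verbatim}
public class ChooseMapDemo {
    static int chooseIndex(Object key, int k) {
        if (key == null) {
            return 0;  // null keys go to sub-map 0, arbitrarily
        }
        // same computation as chooseMap
        return Math.abs(key.hashCode()) % k;
    }

    public static void main(String[] args) {
        int k = 4;
        for (String key: new String[] {"apple", "banana", "cherry"}) {
            System.out.println(key + " -> sub-map " + chooseIndex(key, k));
        }
        System.out.println("null -> sub-map " + chooseIndex(null, k));
    }
}
\end{verbatim}

Whatever indices come out, each key lands in the same sub-map every time, which is the property \java{put} and \java{get} depend on.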

\index{chooseMap}
\index{put}
\index{get}

We use \java{chooseMap} in both \java{put} and \java{get}, so when
we look up a key, we get the same map we chose when we added the key. At
least, we should --- I'll explain a little later why this might not
work.

Here's my implementation of \java{put} and \java{get}:

\begin{verbatim}
public V put(K key, V value) {
    MyLinearMap<K, V> map = chooseMap(key);
    return map.put(key, value);
}

public V get(Object key) {
    MyLinearMap<K, V> map = chooseMap(key);
    return map.get(key);
}
\end{verbatim}

Pretty simple, right? In both methods, we use \java{chooseMap} to find
the right sub-map and then invoke a method on the sub-map. 
That's how it works; now let's think about performance.

\index{sub-map}

If there are $n$ entries split up among $k$ sub-maps,
there will be $n/k$ entries per map, on average. When we look up
a key, we have to compute its hash code, which takes some time, then we
search the corresponding sub-map.

Because the entry lists in
\java{MyBetterMap} are $k$ times shorter than the entry list in
\java{MyLinearMap}, we expect the search to be $k$ times
faster. But the runtime is still proportional to $n$, so
\java{MyBetterMap} is still linear. In the next exercise, you'll see how we
can fix that.

\index{linear time}


\section{How does hashing work?}
\label{how-does-hashing-work}

The fundamental requirement for a hash function is that the same object
should produce the same hash code every time. For immutable objects,
that's relatively easy. For objects with mutable state, we have to think
harder.

\index{SillyString}

As an example of an immutable object, I'll define a class called
\java{SillyString} that encapsulates a \java{String}:

\begin{verbatim}
public class SillyString {
    private final String innerString;

    public SillyString(String innerString) {
        this.innerString = innerString;
    }

    public String toString() {
        return innerString;
    }
\end{verbatim}

This class is not very useful, which is why it's called
\java{SillyString}, but I'll use it to show how a class can define
its own hash function:

\begin{verbatim}
    @Override
    public boolean equals(Object other) {
        return this.toString().equals(other.toString());
    }
    
    @Override
    public int hashCode() {
        int total = 0;
        for (int i=0; i<innerString.length(); i++) {
            total += innerString.charAt(i);
        }
        return total;
    }
\end{verbatim}

Notice that \java{SillyString} overrides both \java{equals} and
\java{hashCode}. This is important. In order to work properly,
\java{equals} has to be consistent with \java{hashCode}, which means
that if two objects are considered equal --- that is, \java{equals}
returns \java{true} --- they should have the same hash code. But this
requirement only works one way; if two objects have the same hash code,
they don't necessarily have to be equal.

\index{equals}
\index{toString}

\java{equals} works by invoking \java{toString}, which returns
\java{innerString}. So two \java{SillyString} objects are equal if
their \java{innerString} instance variables are equal.

\index{hashCode}

\java{hashCode} works by iterating through the characters in the
\java{String} and adding them up. When you add a character to an \java{int},
Java converts the character to an integer using its Unicode code point.
You don't need to know anything about Unicode to understand this
example, but if you are curious, you can read more at 
\url{http://thinkdast.com/codepoint}.

\index{Unicode}
\index{code point}

This hash function satisfies the requirement: if two
\java{SillyString} objects contain embedded strings that are equal,
they will get the same hash code.

This works correctly, but it might not yield good performance,
because it returns the same hash code for many different strings. If two
strings contain the same letters in any order, they will have the same
hash code. And even if they don't contain the same letters, they might
yield the same total, like \java{"ac"} and \java{"bb"}.
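We can check the collision claim directly. This sketch reimplements the letter-sum hash from \java{SillyString.hashCode} as a standalone method and confirms that \java{"ac"} and \java{"bb"} collide:

\begin{verbatim}
public class CollisionDemo {
    // same letter-sum hash as SillyString.hashCode
    static int sillyHash(String s) {
        int total = 0;
        for (int i = 0; i < s.length(); i++) {
            total += s.charAt(i);
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(sillyHash("ac"));  // 97 + 99 = 196
        System.out.println(sillyHash("bb"));  // 98 + 98 = 196
        System.out.println(sillyHash("ca"));  // anagrams collide, too
    }
}
\end{verbatim}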

If many objects have the same hash code, they end up in the same
sub-map.  If some sub-maps have more entries than others, the speedup
when we have $k$ maps might be much less than $k$. So one of the goals
of a hash function is to be uniform; that is, it should be equally
likely to produce any value in the range.  You can read more about
designing good hash functions at
\url{http://thinkdast.com/hash}.

\index{sub-map}

\section{Hashing and mutation}
\label{hashing-and-mutation}

\java{String}s are immutable, and \java{SillyString} is also immutable
because \java{innerString} is declared to be \java{final}. Once you
create a \java{SillyString}, you can't make \java{innerString} refer
to a different \java{String}, and you can't modify the \java{String} it
refers to. Therefore, it will always have the same hash code.

\index{mutable}
\index{immutable}
\index{SillyArray}

But let's see what happens with a mutable object. Here's a definition
for \java{SillyArray}, which is identical to \java{SillyString},
except that it uses an array of characters instead of a \java{String}:

\begin{verbatim}
public class SillyArray {
    private final char[] array;

    public SillyArray(char[] array) {
        this.array = array;
    }

    public String toString() {
        return Arrays.toString(array);
    }
    
    @Override
    public boolean equals(Object other) {
        return this.toString().equals(other.toString());
    }
    
    @Override
    public int hashCode() {
        int total = 0;
        for (int i=0; i<array.length; i++) {
            total += array[i];
        }
        System.out.println(total);
        return total;
    }
\end{verbatim}

\index{setChar}

\java{SillyArray} also provides \java{setChar}, which makes it
possible to modify the characters in the array:

\begin{verbatim}
public void setChar(int i, char c) {
    this.array[i] = c;
}
\end{verbatim}

Now suppose we create a \java{SillyArray} and add it to a map:

\begin{verbatim}
SillyArray array1 = new SillyArray("Word1".toCharArray());
map.put(array1, 1);
\end{verbatim}

The hash code for this array is 461. Now if we modify the contents of
the array and then try to look it up, like this:

\begin{verbatim}
array1.setChar(0, 'C');
Integer value = map.get(array1);
\end{verbatim}

the hash code after the mutation is 441. With a different hash code,
there's a good chance we'll go looking in the wrong sub-map. In that
case, we won't find the key, even though it is in the map. And that's
bad.

\index{hash code}

In general, it is dangerous to use mutable objects as keys in data
structures that use hashing, which includes \java{MyBetterMap} and
\java{HashMap}. If you can guarantee that the keys won't be modified
while they are in the map, or that any changes won't affect the hash
code, it might be OK. But it is probably a good idea to avoid it.


\section{Exercise 8}

\index{MyBetterMap}

In this exercise, you will finish off the implementation of
\java{MyBetterMap}.  In the repository for this book,
you'll find the source files for this exercise:

\begin{itemize}

\item
  \java{MyLinearMap.java} contains our solution to the previous exercise,
  which we will build on in this exercise.

\item
  \java{MyBetterMap.java} contains the code from the previous chapter
  with some methods you will fill in.

\item
  \java{MyHashMap.java} contains the outline of a hash table that
  grows when needed, which you will complete.

\item
  \java{MyLinearMapTest.java} contains the unit tests for
  \java{MyLinearMap}.

\item
  \java{MyBetterMapTest.java} contains the unit tests for
  \java{MyBetterMap}.

\item
  \java{MyHashMapTest.java} contains the unit tests for
  \java{MyHashMap}.

\item
  \java{Profiler.java} contains code for measuring and plotting
  runtime versus problem size.

\item
  \java{ProfileMapPut.java} contains code that profiles the
  \java{Map.put} method.
\end{itemize}

As usual, you should run \java{ant build} to compile the source
files. Then run \java{ant MyBetterMapTest}. Several tests should fail,
because you have some work to do!

\index{Ant}

Review the implementation of \java{put} and \java{get} from the
previous chapter. Then fill in the body of \java{containsKey}. HINT:
use \java{chooseMap}. Run \java{ant MyBetterMapTest} again and confirm
that \java{testContainsKey} passes.

\index{put}
\index{get}
\index{containsValue}

Fill in the body of \java{containsValue}. HINT: \emph{don't} use
\java{chooseMap}.  Run \java{ant MyBetterMapTest} again and confirm
that \java{testContainsValue} passes. Notice that we have to do more
work to find a value than to find a key.

Like \java{put} and \java{get}, this implementation of
\java{containsKey} is linear, because it has to search one of the
embedded sub-maps.  In the next chapter, we'll see how we can
improve this implementation even more.

\index{linear time}


\chapter{HashMap}

In the previous chapter, we wrote an implementation of the
\java{Map} interface that uses hashing.  We expect this version
to be faster, because the lists it searches are shorter, but
the order of growth is still linear.

\index{HashMap}
\index{sub-map}

If there are $n$ entries and $k$ sub-maps, the size of the sub-maps is
$n/k$ on average, which is still proportional to $n$.  But if we
increase $k$ along with $n$, we can keep $n/k$ bounded.

For example, suppose we double $k$ every
time $n$ exceeds $k$; in that case the number of entries
per map would be less than 1 on average, and pretty much always less
than 10, as long as the hash function spreads out the keys reasonably
well.

\index{constant time}

If the number of entries per sub-map is constant, we can search a single
sub-map in constant time. And computing the hash function is generally
constant time (it might depend on the size of the key, but does not
depend on the number of keys). That makes the core \java{Map} methods,
\java{put} and \java{get}, constant time.

In the next exercise, you'll see the details.


\section{Exercise 9}
\label{implementing-myhashmap}

\index{MyHashMap}

In \java{MyHashMap.java}, I provide the outline of a hash table that
grows when needed. Here's the beginning of the definition:

\begin{verbatim}
public class MyHashMap<K, V> extends MyBetterMap<K, V> implements Map<K, V> {

    // average number of entries per sub-map before we rehash
    private static final double FACTOR = 1.0;

    @Override
    public V put(K key, V value) {
        V oldValue = super.put(key, value);

        // check if the number of elements per sub-map exceeds the threshold
        if (size() > maps.size() * FACTOR) {
            rehash();
        }
        return oldValue;
    }
}
\end{verbatim}

\java{MyHashMap} extends \java{MyBetterMap}, so it inherits the
methods defined there. The only method it overrides is \java{put},
which calls \java{put} in the superclass --- that is, it calls the
version of \java{put} in \java{MyBetterMap} --- and then it checks
whether it has to rehash. Calling \java{size} returns the total number
of entries, $n$. Calling \java{maps.size} returns the number of
embedded maps, $k$.

\index{superclass}
\index{load factor}
\index{MyBetterMap}

The constant \java{FACTOR}, which is called the \textbf{load factor},
determines the maximum number of entries per sub-map, on average. If
\java{n > k * FACTOR}, that means
\java{n/k > FACTOR}, which means the number of entries
per sub-map exceeds the threshold, so we call \java{rehash}.

\index{Ant}

Run \java{ant build} to compile the source files. Then run \java{ant
  MyHashMapTest}.  It should fail because the implementation of
\java{rehash} throws an exception. Your job is to fill it in.

\index{rehash}

Fill in the body of \java{rehash} to collect the entries in the table,
resize the table, and then put the entries back in. I provide two
methods that might come in handy: \java{MyBetterMap.makeMaps} and
\java{MyLinearMap.getEntries}. Your solution should double the number
of maps, $k$, each time it is called.


\section{Analyzing \java{MyHashMap}}
\label{analyzing-myhashmap}

\index{constant time}

If the number of entries in the biggest sub-map is proportional to
$n/k$, and $k$ grows in proportion to $n$, several of the core
\java{MyBetterMap} methods become constant time:

\begin{verbatim}
    public boolean containsKey(Object target) {
        MyLinearMap<K, V> map = chooseMap(target);
        return map.containsKey(target);
    }

    public V get(Object key) {
        MyLinearMap<K, V> map = chooseMap(key);
        return map.get(key);
    }

    public V remove(Object key) {
        MyLinearMap<K, V> map = chooseMap(key);
        return map.remove(key);
    }
\end{verbatim}

Each method hashes a key, which is constant time, and then invokes a
method on a sub-map, which is constant time.

\index{put}

So far, so good. But the other core method, \java{put}, is a little
harder to analyze. When we don't have to rehash, it is constant time,
but when we do, it's linear. In that way, it's similar to
\java{ArrayList.add}, which we analyzed in Section~\ref{classifying-add}.

\index{linear time}

For the same reason, \java{MyHashMap.put} turns out to be
constant time if we average over a series of invocations.
Again, the argument is based on amortized analysis 
(see Section~\ref{classifying-add}).

\index{amortized analysis}

Suppose the initial number of sub-maps, $k$, is 2, and the load
factor is 1. Now let's see how much work it takes to \java{put} a
series of keys. As the basic ``unit of work'', we'll count the number of
times we have to hash a key and add it to a sub-map.

\index{unit of work}

The first time we call \java{put} it takes 1 unit of work. The second
time also takes 1 unit. The third time we have to rehash, so it takes 2
units to rehash the existing keys and 1 unit to hash the new key.

Now the size of the hash table is 4, so the next time we call
\java{put}, it takes 1 unit of work. But the next time we have to
rehash, which takes 4 units to rehash the existing keys and 1 unit to
hash the new key.
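We can check this argument with a short simulation (a sketch under this section's cost model, not code from the repository): count one unit per hash, rehash all existing keys whenever the entry count exceeds the number of sub-maps, and double the number of sub-maps each time:

\begin{verbatim}
public class AmortizedDemo {
    // total units of work to put n keys, under the cost model above
    static long totalWork(int n) {
        long work = 0;
        int k = 2;     // initial number of sub-maps
        int size = 0;  // number of entries
        for (int i = 0; i < n; i++) {
            work += 1;  // hash the new key
            size += 1;
            if (size > k) {
                work += size - 1;  // rehash the existing keys
                k *= 2;            // double the number of sub-maps
            }
        }
        return work;
    }

    public static void main(String[] args) {
        for (int n = 1000; n <= 1000000; n *= 10) {
            System.out.println(n + ": average work per put = "
                               + (double) totalWork(n) / n);
        }
    }
}
\end{verbatim}

In this model the average work per \java{put} oscillates between roughly 2 and 3 units but stays bounded as $n$ grows by three orders of magnitude, which is the amortized constant-time behavior described here.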

\index{hashing}

Figure~\ref{fig-hashtable} shows the pattern, with the normal work of hashing
a new key shown across the bottom and extra work of rehashing shown as a
tower.

\begin{figure}
\centerline{\includegraphics[width=5.5in]{figs/tower.pdf}}
\caption{Representation of the work done to add elements to a hash table.}
\label{fig-hashtable}
\end{figure}

As the arrows suggest, if we knock down the towers, each one fills the
space before the next tower. The result is a uniform height of 2 units,
which shows that the average work per \java{put} is about 2 units. And
that means that \java{put} is constant time on average.

This diagram also shows why it is important to double the number of
sub-maps, $k$, when we rehash. If we only add to $k$
instead of multiplying, the towers would be too close together and
they would start piling up. And that would not be constant
time.

\index{constant time}


\section{The tradeoffs}
\label{the-tradeoffs}

We've shown that \java{containsKey}, \java{get}, and \java{remove}
are constant time, and \java{put} is constant time on average. We
should take a minute to appreciate how remarkable that is. The
performance of these operations is pretty much the same no matter how
big the hash table is. Well, sort of.

\index{containsKey}
\index{get}
\index{remove}
\index{put}

Remember that our analysis is based on a simple model of computation
where each ``unit of work'' takes the same amount of time. Real
computers are more complicated than that. In particular, they are
usually fastest when working with data structures small enough to fit in
cache; somewhat slower if the structure doesn't fit in cache but still
fits in memory; and \emph{much} slower if the structure doesn't fit in
memory.

\index{cache}
\index{containsValue}

Another limitation of this implementation is that hashing doesn't help
if we are given a value rather than a key: \java{containsValue} is
linear because it has to search all of the sub-maps. And there
is no particularly efficient way to look up a value and find the
corresponding key (or possibly keys).

\index{linear time}

And there's one more limitation: some of the methods that were constant
time in \java{MyLinearMap} have become linear. For example:

\begin{verbatim}
    public void clear() {
        for (int i=0; i<maps.size(); i++) {
            maps.get(i).clear();
        }
    }
\end{verbatim}

\java{clear} has to clear all of the sub-maps, and the number of
sub-maps is proportional to $n$, so it's linear. Fortunately,
this operation is not used very often, so for most applications this
tradeoff is acceptable.

\index{clear}


\section{Profiling \java{MyHashMap}}
\label{profiling-myhashmap}

Before we go on, we should check whether \java{MyHashMap.put} is really
constant time.

\index{MyHashMap}
\index{profiling}
\index{Ant}

Run \java{ant build} to compile the source
files. Then run \java{ant ProfileMapPut}. It measures the runtime of
\java{HashMap.put} (provided by Java) with a range of problem sizes,
and plots runtime versus problem size on a log-log scale. If this
operation is constant time, the total time for $n$ operations
should be linear, so the result should be a straight line with slope
1. When I ran this code, the estimated slope was close to 1, which is
consistent with our analysis. You should get something similar.

Modify \java{ProfileMapPut.java} so it profiles your implementation,
\java{MyHashMap}, instead of Java's \java{HashMap}. Run the
profiler again and see if the slope is near 1. You might have to
adjust \java{startN} and \java{endMillis} to find a range of
problem sizes where the runtimes are more than a few milliseconds, but
not more than a few thousand.

When I ran this code, I got a surprise: the slope was about 1.7,
which suggests that this implementation is not constant time after all.
It contains a ``performance bug''. 

\index{performance bug}

Before you read the next section, you should track down the error, fix
it, and confirm that \java{put} is now constant time, as expected.


\section{Fixing \java{MyHashMap}}
\label{fixing-myhashmap}

\index{size}

The problem with \java{MyHashMap} is in \java{size}, which is
inherited from \java{MyBetterMap}:

\begin{verbatim}
    public int size() {
        int total = 0;
        for (MyLinearMap<K, V> map: maps) {
            total += map.size();
        }
        return total;
    }
\end{verbatim}

To add up the total size it has to iterate the sub-maps. Since we
increase the number of sub-maps, $k$, as the number of entries,
$n$, increases, $k$ is proportional to $n$, so
\java{size} is linear.

\index{linear time}

And that makes \java{put} linear, too, because it uses \java{size}:

\begin{verbatim}
    public V put(K key, V value) {
        V oldValue = super.put(key, value);

        if (size() > maps.size() * FACTOR) {
            rehash();
        }
        return oldValue;
    }
\end{verbatim}

Everything we did to make \java{put} constant time is wasted if
\java{size} is linear!

\index{constant time}
\index{linear time}

Fortunately, there is a simple solution, and we have seen it before: we
have to keep the number of entries in an instance variable and update it
whenever we call a method that changes it.

\index{MyFixedHashMap}

You'll find my solution in the repository for this book, in
\java{MyFixedHashMap.java}.  Here's the beginning of the class definition:

\begin{verbatim}
public class MyFixedHashMap<K, V> extends MyHashMap<K, V> implements Map<K, V> {

    private int size = 0;

    public void clear() {
        super.clear();
        size = 0;
    }
\end{verbatim}

Rather than modify \java{MyHashMap}, I define a new class that
extends it. It adds a new instance variable, \java{size}, which is
initialized to zero.

Updating \java{clear} is straightforward; we invoke \java{clear} in
the superclass (which clears the sub-maps), and then update
\java{size}.

\index{superclass}

Updating \java{remove} and \java{put} is a little more difficult
because when we invoke the method on the superclass, we can't tell
whether the size of the sub-map changed. Here's how I worked around
that:

\begin{verbatim}
    public V remove(Object key) {
        MyLinearMap<K, V> map = chooseMap(key);
        size -= map.size();
        V oldValue = map.remove(key);
        size += map.size();
        return oldValue;
    }
\end{verbatim}

\java{remove} uses \java{chooseMap} to find the right sub-map, then
subtracts away the size of the sub-map. It invokes \java{remove} on
the sub-map, which may or may not change the size of the sub-map,
depending on whether it finds the key. But either way, we add the new
size of the sub-map back to \java{size}, so the final value of
\java{size} is correct.

\index{remove}

The rewritten version of \java{put} is similar:

\begin{verbatim}
    public V put(K key, V value) {
        MyLinearMap<K, V> map = chooseMap(key);
        size -= map.size();
        V oldValue = map.put(key, value);
        size += map.size();

        if (size() > maps.size() * FACTOR) {
            size = 0;
            rehash();
        }
        return oldValue;
    }
\end{verbatim}

We have the same problem here: when we invoke \java{put} on the
sub-map, we don't know whether it added a new entry. So we use the same
solution, subtracting off the old size and then adding in the new size.

\index{put}
\index{size}

Now the implementation of the \java{size} method is simple:

\begin{verbatim}
    public int size() {
        return size;
    }
\end{verbatim}

And that's pretty clearly constant time.

\index{constant time}

When I profiled this solution, I found that the total time for putting
$n$ keys is proportional to $n$, which means that each \java{put} is
constant time, as it's supposed to be.

\index{profiling}


\section{UML class diagrams}
\label{uml-class-diagrams}

\index{UML}
\index{class diagram}

One challenge of working with the code in this chapter is that we have
several classes that depend on each other. Here are some of the
relationships between the classes:

\begin{itemize}

\item
  \java{MyLinearMap} contains a \java{LinkedList} and implements
  \java{Map}.
\item
  \java{MyBetterMap} contains many \java{MyLinearMap} objects and
  implements \java{Map}.
\item
  \java{MyHashMap} extends \java{MyBetterMap}, so it also contains
  \java{MyLinearMap} objects, and it implements \java{Map}.
\item
  \java{MyFixedHashMap} extends \java{MyHashMap} and
  implements \java{Map}.
\end{itemize}

To help keep track of relationships like these, software engineers
often use {\bf UML class diagrams}. UML stands for Unified Modeling
Language
(see \url{http://thinkdast.com/uml}).
A ``class diagram'' is one of several graphical standards defined by UML.

In a class diagram, each class is represented by a box, and
relationships between classes are represented by
arrows. Figure~\ref{fig-uml} shows a UML class diagram for the classes
from the previous exercise, generated using the online tool yUML at
\url{http://yuml.me/}.

\begin{figure}
\centering
\includegraphics[width=5in]{figs/yuml1.pdf}
\caption{UML diagram for the classes in this chapter.}
\label{fig-uml}
% Edit: http://yuml.me/edit/2aa18a2d
\end{figure}

\index{inheritance}
\index{IS-A relationship}
\index{HAS-A relationship}

Different relationships are represented by different arrows:

\begin{itemize}

\item
  Arrows with a solid head indicate HAS-A relationships. For example,
  each instance of \java{MyBetterMap} contains multiple instances of
  \java{MyLinearMap}, so they are connected by a solid arrow.

\item
  Arrows with a hollow head and a solid line indicate IS-A
  relationships. For example, \java{MyHashMap} extends
  \java{MyBetterMap}, so they are connected by an IS-A arrow.

\item
  Arrows with a hollow head and a dashed line indicate that a class
  implements an interface; in this diagram, every class implements
  \java{Map}.

\end{itemize}

UML class diagrams provide a concise way to represent a lot of
information about a collection of classes. They are used during design
phases to communicate about alternative designs, during implementation
phases to maintain a shared mental map of the project, and during
deployment to document the design.


\chapter{TreeMap}

\index{TreeMap}
\index{Map}

This chapter presents the binary search tree, which is an efficient
implementation of the \java{Map} interface that is particularly useful
if we want to keep the elements sorted.


\section{What's wrong with hashing?}

At this point you should be familiar with the \java{Map} interface and
the \java{HashMap} implementation provided by Java. And by making your
own \java{Map} using a hash table, you should understand how
\java{HashMap} works and why we expect its core methods to be constant
time.

\index{constant time}

Because of this performance, \java{HashMap} is widely used, but it is
not the only implementation of \java{Map}. There are a few reasons you
might want another implementation:

\begin{enumerate}

\item
  Hashing can be slow, so even though \java{HashMap} operations are
  constant time, the ``constant'' might be big.

\item
  Hashing works well if the hash function distributes the keys evenly
  among the sub-maps.  But designing good hash functions is not easy,
  and if too many keys end up in the same sub-map, the performance of
  the \java{HashMap} may be poor.

\item
  The keys in a hash table are not stored in any particular order; in
  fact, the order might change when the table grows and the keys are
  rehashed. For some applications, it is necessary, or at least useful,
  to keep the keys in order.

\end{enumerate}

It is hard to solve all of these problems at the same
time, but Java provides an implementation called \java{TreeMap} that
comes close:

\begin{enumerate}

\item
  It doesn't use a hash function, so it avoids the cost of hashing
  and the difficulty of choosing a hash function.

\item
  Inside the \java{TreeMap}, the keys are stored in a
  \textbf{binary search tree}, which makes it possible to traverse the
  keys, in order, in linear time.

\item
  The runtime of the core methods is proportional to $\log n$,
  which is not quite as good as constant time, but it is still
  very good.

\end{enumerate}
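As a quick illustration of the third point, Java's \java{TreeMap} yields its keys in increasing order no matter what order they are inserted in:

```java
import java.util.Map;
import java.util.TreeMap;

// TreeMap keeps the keys sorted, regardless of insertion order.
public class TreeMapDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new TreeMap<String, Integer>();
        map.put("banana", 2);
        map.put("cherry", 3);
        map.put("apple", 1);
        System.out.println(map.keySet());  // [apple, banana, cherry]
    }
}
```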

In the next section, I'll explain how binary search trees work and then you
will use one to implement a \java{Map}. Along the way, we'll analyze
the performance of the core map methods when implemented using a tree.

\index{linear time}


\section{Binary search tree}
\label{binary-search-tree}

\index{binary search tree}
\index{BST}
\index{BST property}
\index{node}

A binary search tree (BST) is a tree where each node contains a key, and
every \java{node} has the ``BST property'':

\begin{enumerate}

\item
  If \java{node} has a left child, all keys in the left subtree must
  be less than the key in \java{node}.

\item
  If \java{node} has a right child, all keys in the right subtree must
  be greater than the key in \java{node}.

\end{enumerate}

\begin{figure}
\centering
\includegraphics[height=2.5in]{figs/Binary_search_tree_1229.png}
%\includegraphics[height=2.5in]{figs/Binary_search_tree.svg}
\caption{Example of a binary search tree.}
\label{fig-bst}
\end{figure}

Figure~\ref{fig-bst}
shows a tree of integers that has this property.
This figure is from the Wikipedia page on binary search trees at
\url{http://thinkdast.com/bst}, which you
might find useful while you work on this exercise.

The key in the root is 8, and you can confirm that all keys to the left
of the root are less than 8, and all keys to the right are greater. You
can also check that the other nodes have this property.

\index{key}

Looking up a key in a binary search tree is fast because we don't have
to search the entire tree. Starting at the root, we can use the
following algorithm:

\begin{enumerate}

\item
  Compare the key you are looking for, \java{target}, to the key in
  the current node. If they are equal, you are done.

\item
  If \java{target} is less than the current key, search the left tree.
  If there isn't one, \java{target} is not in the tree.

\item
  If \java{target} is greater than the current key, search the right
  tree. If there isn't one, \java{target} is not in the tree.

\end{enumerate}
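The steps above can be sketched in code. This version uses a bare-bones node class with integer keys, which I made up for the example; it is not the \java{Node} class you will see in the exercise:

```java
// A sketch of the search algorithm above, using a bare-bones node class
// with integer keys (not the Node class used in the exercise).
class IntNode {
    int key;
    IntNode left, right;
    IntNode(int key) { this.key = key; }
}

public class BstSearch {
    public static boolean contains(IntNode node, int target) {
        while (node != null) {
            if (target == node.key) {
                return true;            // found the target
            }
            // otherwise follow one child, depending on the comparison
            node = (target < node.key) ? node.left : node.right;
        }
        return false;                   // ran off the bottom of the tree
    }
}
```

At each step the search discards an entire subtree, which is why the number of comparisons is proportional to the height of the tree, not the number of keys.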

At each level of the tree, you only have to search one child. For
example, if you look for \java{target = 4} in the previous diagram,
you start at the root, which contains the key \java{8}. Because
\java{target} is less than \java{8}, you go left. Because
\java{target} is greater than \java{3} you go right. Because
\java{target} is less than \java{6}, you go left. And then you find
the key you are looking for.

In this example, it takes four comparisons to find the target,
even though the tree contains nine keys. In general, the number of
comparisons is proportional to the height of the tree, not the number of
keys in the tree.

\index{height}

So what can we say about the relationship between the height of the
tree, \java{h}, and the number of nodes, $n$? Starting small
and working up:

\begin{itemize}

\item
  If \java{h=1}, the tree only contains one node, so \java{n=1}.

\item
  If \java{h=2}, we can add two more nodes, for a total of
  \java{n=3}.

\item
  If \java{h=3}, we can add up to four more nodes, for a total
  of \java{n=7}.

\item
  If \java{h=4}, we can add up to eight more nodes, for a total
  of \java{n=15}.

\end{itemize}

By now you might see the pattern. If we number the levels of the tree from
\java{1} to \java{h}, the level with index \java{i} can have up to
$2^{i-1}$ nodes. And the total number of nodes in \java{h} levels is $2^h-1$.
If we have

\[ n = 2^h - 1 \]

we can solve for \java{h} by taking the logarithm base 2 of both sides:

\[ h = \log_2 (n+1) \approx \log_2 n \]

which means that the height of the tree is proportional to
$\log n$, if the tree is full; that is, if each level contains the
maximum number of nodes.

So we expect that we can look up a key in a binary search tree in time
proportional to $\log n$. This is true if the tree is full, and
even if the tree is only partially full. But it is not always true, as
we will see.

\index{log time}
\index{logarithm}
\index{order of growth}

An algorithm that takes time proportional to $\log n$ is called
``logarithmic'' or ``log time'', and it belongs to the order of growth
$O(\log n)$.


\section{Exercise 10}
\label{exercise10}

For this exercise you will write an implementation of
the \java{Map} interface using a binary search tree.

\index{Map}

Here's the beginning of an implementation, called \java{MyTreeMap}:

\begin{verbatim}
public class MyTreeMap<K, V> implements Map<K, V> {

    private int size = 0;
    private Node root = null;
\end{verbatim}

The instance variables are \java{size}, which keeps track of the
number of keys, and \java{root}, which is a reference to the root node
in the tree. When the tree is empty, \java{root} is \java{null} and
\java{size} is 0.

Here's the definition of \java{Node}, which is defined inside
\java{MyTreeMap}:

\begin{verbatim}
    protected class Node {
        public K key;
        public V value;
        public Node left = null;
        public Node right = null;

        public Node(K key, V value) {
            this.key = key;
            this.value = value;
        }
    }
\end{verbatim}

\index{Node}
\index{key-value pair}

Each node contains a key-value pair and references to two child nodes,
\java{left} and \java{right}. Either or both of the child nodes can
be \java{null}.

Some of the \java{Map} methods are easy to implement, like
\java{size} and \java{clear}:

\begin{verbatim}
    public int size() {
        return size;
    }

    public void clear() {
        size = 0;
        root = null;
    }
\end{verbatim}

\java{size} is clearly constant time.

\index{size}
\index{constant time}

\java{clear} appears to be constant time, but consider this: when
\java{root} is set to \java{null}, the garbage collector reclaims
the nodes in the tree, which takes linear time. Should work done by the
garbage collector count? I think so.

\index{clear}
\index{linear time}

In the next section, you'll fill in some of the other methods, including
the most important ones, \java{get} and \java{put}.

\section{Implementing a TreeMap}

\index{MyTreeMap}

In the repository for this book, you'll find these source files:

\begin{itemize}

\item
  \java{MyTreeMap.java} contains the code from the previous section
  with outlines for the missing methods.

\item
  \java{MyTreeMapTest.java} contains the unit tests for
  \java{MyTreeMap}.

\end{itemize}

Run \java{ant build} to compile the source files. Then run \java{ant
  MyTreeMapTest}.  Several tests should fail, because you have some
work to do!

\index{Ant}

I've provided outlines for \java{get} and \java{containsKey}.  Both of
them use \java{findNode}, which is a private method I defined; it is
not part of the \java{Map} interface. Here's how it starts:

\begin{verbatim}
    private Node findNode(Object target) {
        if (target == null) {
            throw new IllegalArgumentException();
        }

        @SuppressWarnings("unchecked")
        Comparable<? super K> k = (Comparable<? super K>) target;

        // TODO: FILL THIS IN!
        return null;
    }
\end{verbatim}

\index{get}
\index{containsKey}
\index{findNode}

The parameter \java{target} is the key we're looking for. If
\java{target} is \java{null}, \java{findNode} throws an exception.
Some implementations of \java{Map} can handle \java{null} as a key,
but in a binary search tree, we need to be able to compare keys, so
dealing with \java{null} is problematic. To keep things simple,
this implementation does not allow \java{null} as a key.

The next lines show how we can compare \java{target} to a key in the
tree. From the signature of \java{get} and \java{containsKey}, the
compiler considers \java{target} to be an \java{Object}. But we need
to be able to compare keys, so we typecast \java{target} to
\java{Comparable<? super K>}, which means that
it is comparable to an instance of type \java{K}, or any superclass of
\java{K}.  If you are not familiar with this use of ``type
wildcards'', you can read more at
\url{http://thinkdast.com/gentut}.
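As a hypothetical illustration of why the wildcard is \java{? super K} rather than just \java{K}: suppose \java{Apple} extends \java{Fruit}, and it is \java{Fruit} that implements \java{Comparable}. Then an \java{Apple} only implements \java{Comparable<Fruit>}, so a cast to \java{Comparable<Apple>} would be wrong, but \java{Comparable<? super Apple>} works. (These classes are made up for the example.)

```java
// Hypothetical classes: Fruit implements Comparable<Fruit>, and Apple
// inherits that implementation.
class Fruit implements Comparable<Fruit> {
    int weight;
    Fruit(int weight) { this.weight = weight; }
    public int compareTo(Fruit other) {
        return Integer.compare(weight, other.weight);
    }
}

class Apple extends Fruit {
    Apple(int weight) { super(weight); }
}

public class WildcardDemo {
    public static void main(String[] args) {
        // Apple implements Comparable<Fruit>, and Fruit is a supertype
        // of Apple, so this assignment is allowed.
        Comparable<? super Apple> k = new Apple(100);
        System.out.println(k.compareTo(new Apple(150)) < 0);  // true
    }
}
```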

\index{type wildcard}
\index{superclass}

Fortunately, dealing with Java's type system is not the point of this
exercise. Your job is to fill in the rest of \java{findNode}. If it
finds a node that contains \java{target} as a key, it should return
the node. Otherwise it should return \java{null}. When you get this
working, the tests for \java{get} and \java{containsKey} should
pass.

Note that your solution should only search one path through the tree, so
it should take time proportional to the height of the tree. You should
not search the whole tree!

\index{height}
\index{helper method}

Your next task is to fill in \java{containsValue}. To get you started,
I've provided a helper method, \java{equals}, that compares
\java{target} and a given value. Note that the values in the tree (as
opposed to the keys) are not necessarily comparable, so we can't use
\java{compareTo}; we have to invoke \java{equals} on \java{target}.

\index{containsValue}

Unlike your previous solution for \java{findNode}, your solution for
\java{containsValue} \emph{does} have to search the whole tree, so its
runtime is proportional to the number of keys, $n$, not the
height of the tree, \java{h}.

The next method you should fill in is \java{put}. I've provided
  starter code that handles the simple cases:

\begin{verbatim}
    public V put(K key, V value) {
        if (key == null) {
            throw new IllegalArgumentException();
        }
        if (root == null) {
            root = new Node(key, value);
            size++;
            return null;
        }
        return putHelper(root, key, value);
    }

    private V putHelper(Node node, K key, V value) {
        // TODO: Fill this in.
    }
\end{verbatim}

If you try to put \java{null} as a key, \java{put} throws an
exception.

If the tree is empty, \java{put} creates a new node and initializes
the instance variable \java{root}.

\index{put}
\index{helper method}

Otherwise, it calls \java{putHelper}, which is a private method I
defined; it is not part of the \java{Map} interface.

Fill in \java{putHelper} so it searches the tree and:

\begin{enumerate}

\item
  If \java{key} is already in the tree, it replaces the old value with
  the new, and returns the old value.

\item
  If \java{key} is not in the tree, it creates a new node, finds the
  right place to add it, and returns \java{null}.

\end{enumerate}

Your implementation of \java{put} should take time proportional to the
height of the tree, $h$, not the number of elements, $n$. Ideally you
should search the tree only once, but if you find it easier to search
twice, you can do that; it will be slower, but it doesn't change the
order of growth.

\index{keySet}

Finally, you should fill in the body of \java{keySet}.  According to
the documentation at \url{http://thinkdast.com/mapkeyset}, this method
should return a \java{Set} that iterates the keys in order; that is,
in increasing order according to the \java{compareTo} method.  The
\java{HashSet} implementation of \java{Set}, which we used in
Section~\ref{exercise6}, doesn't maintain the order of the keys, but
the \java{LinkedHashSet} implementation does.  You can read about it
at \url{http://thinkdast.com/linkedhashset}.

I've provided an outline of \java{keySet} that creates and returns a
\java{LinkedHashSet}:

\begin{verbatim}
    public Set<K> keySet() {
        Set<K> set = new LinkedHashSet<K>();
        return set;
    }
\end{verbatim}

\index{helper method}
\index{recursion}

You should finish off this method so it adds the keys from the tree to
\java{set} in ascending order. HINT: you might want to write a helper
method; you might want to make it recursive; and you might want to
read about in-order tree traversal at
\url{http://thinkdast.com/inorder}.

\index{in-order} 
\index{tree traversal}

% TODO: more help with recursion?

When you are done, all tests should pass. In the next chapter, I'll go
over my solutions and test the performance of the core methods.


\chapter{Binary search tree}

This chapter presents solutions to the previous exercise, then tests the
performance of the tree-backed map. I present a problem with the
implementation and explain how Java's \java{TreeMap} solves it.


\section{A simple \java{MyTreeMap}}
\label{our-version-of-mytreemap}

In the previous exercise I gave you the outline of \java{MyTreeMap} and
asked you to fill in the missing methods. Now I'll present a
solution, starting with \java{findNode}:

\index{MyTreeMap}
\index{findNode}

\begin{verbatim}
private Node findNode(Object target) {
    // some implementations can handle null as a key, but not this one
    if (target == null) {
        throw new IllegalArgumentException();
    }

    // something to make the compiler happy
    @SuppressWarnings("unchecked")
    Comparable<? super K> k = (Comparable<? super K>) target;

    // the actual search
    Node node = root;
    while (node != null) {
        int cmp = k.compareTo(node.key);
        if (cmp < 0)
            node = node.left;
        else if (cmp > 0)
            node = node.right;
        else
            return node;
    }
    return null;
}
\end{verbatim}

\java{findNode} is a private method used by \java{containsKey} and
\java{get}; it is not part of the \java{Map} interface. The
parameter \java{target} is the key we're looking for. I explained the
first part of this method in the previous exercise:

\begin{itemize}

\item
  In this implementation, \java{null} is not a legal value for a key.

\item
  Before we can invoke \java{compareTo} on \java{target}, we have to
  typecast it to some kind of \java{Comparable}. The ``type wildcard''
  used here is as permissive as possible; that is, it works with any
  type that implements \java{Comparable} and whose \java{compareTo}
  method accepts \java{K} or any supertype of \java{K}.

\end{itemize}

\index{type wildcard}

After all that, the actual search is relatively simple. We initialize a
loop variable \java{node} so it refers to the root node. Each time
through the loop, we compare the target to \java{node.key}. If the
target is less than the current key, we move to the left child. If it's
greater, we move to the right child. And if it's equal, we return the
current node.

If we get to the bottom of the tree without finding the target, we
conclude that it is not in the tree and return \java{null}.


\section{Searching for values}
\label{searching-for-values}

As I explained in the previous exercise, the runtime of
\java{findNode} is proportional to the height of the tree, not the
number of nodes, because we don't have to search the whole tree. But
for \java{containsValue}, we have to search the values, not the keys;
the BST property doesn't apply to the values, so we have to search the
whole tree.

\index{recursion}

My solution is recursive:

\begin{verbatim}
public boolean containsValue(Object target) {
    return containsValueHelper(root, target);
}

private boolean containsValueHelper(Node node, Object target) {
    if (node == null) {
        return false;
    }
    if (equals(target, node.value)) {
        return true;
    }
    if (containsValueHelper(node.left, target)) {
        return true;
    }
    if (containsValueHelper(node.right, target)) {
        return true;
    }
    return false;
}
\end{verbatim}

\java{containsValue} takes the target value as a parameter and
immediately invokes \java{containsValueHelper}, passing the root of
the tree as an additional parameter.

\index{base case}
\index{recursion}

Here's how \java{containsValueHelper} works:

\begin{itemize}

\item
  The first \java{if} statement checks the base case of the recursion.
  If \java{node} is \java{null}, that means we have recursed to the
  bottom of the tree without finding the \java{target}, so we should
  return \java{false}. Note that this only means that the target did
  not appear on one path through the tree; it is still possible that it
  will be found on another.

\item
  The second case checks whether we've found what we're looking for. If
  so, we return \java{true}. Otherwise, we have to keep going.

\item
  The third case makes a recursive call to search for \java{target} in
  the left subtree. If we find it, we can return \java{true}
  immediately, without searching the right subtree. Otherwise, we keep
  going.

\item
  The fourth case searches the right subtree. Again, if we find what we
  are looking for, we return \java{true}. Otherwise, having searched
  the whole tree, we return \java{false}.

\end{itemize}

This method ``visits'' every node in the tree, so it takes time
proportional to the number of nodes.

\index{linear time}


\section{Implementing \java{put}}
\label{implementing-put}

The \java{put} method is a little more complicated than \java{get}
because it has to deal with two cases: (1) if the given key is already
in the tree, it replaces and returns the old value; (2) otherwise it has
to add a new node to the tree, in the right place.

\index{put}

In the previous exercise, I provided this starter code:

\begin{verbatim}
public V put(K key, V value) {
    if (key == null) {
        throw new IllegalArgumentException();
    }
    if (root == null) {
        root = new Node(key, value);
        size++;
        return null;
    }
    return putHelper(root, key, value);
}
\end{verbatim}

And asked you to fill in \java{putHelper}. Here's my solution:

\begin{verbatim}
private V putHelper(Node node, K key, V value) {
    Comparable<? super K> k = (Comparable<? super K>) key;
    int cmp = k.compareTo(node.key);

    if (cmp < 0) {
        if (node.left == null) {
            node.left = new Node(key, value);
            size++;
            return null;
        } else {
            return putHelper(node.left, key, value);
        }
    }
    if (cmp > 0) {
        if (node.right == null) {
            node.right = new Node(key, value);
            size++;
            return null;
        } else {
            return putHelper(node.right, key, value);
        }
    }
    V oldValue = node.value;
    node.value = value;
    return oldValue;
}
\end{verbatim}

\index{subtree}

The first parameter, \java{node}, is initially the root of the tree,
but each time we make a recursive call, it refers to a different
subtree.  As in \java{get}, we use the \java{compareTo} method to
figure out which path to follow through the tree.  If \java{cmp < 0},
the key we're adding is less than \java{node.key}, so we want to look
in the left subtree. There are two cases:

\begin{itemize}

\item
  If the left subtree is empty, that is, if \java{node.left} is
  \java{null}, we have reached the bottom of the tree without finding
  \java{key}. At this point, we know that \java{key} isn't in the
  tree, and we know where it should go. So we create a new node and add
  it as the left child of \java{node}.

\item
  Otherwise we make a recursive call to search the left subtree.

\end{itemize}

If \java{cmp > 0}, the key we're adding is greater than
\java{node.key}, so we want to look in the right subtree. And we
handle the same two cases as in the previous branch.
Finally, if \java{cmp == 0}, we found the key in the tree, so we
replace and return the old value.

\index{iterative}

I wrote this method recursively to make it more readable, but it would
be straightforward to rewrite it iteratively, which you might want to
do as an exercise.
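For reference, here is one possible iterative version, written as a standalone sketch in a minimal tree class of my own (not the book's \java{MyTreeMap}):

```java
// One possible iterative rewrite of put, in a minimal standalone tree
// class (a sketch for illustration, not the book's MyTreeMap).
public class IterativePutDemo<K, V> {

    private class Node {
        K key;
        V value;
        Node left, right;
        Node(K key, V value) { this.key = key; this.value = value; }
    }

    private Node root = null;
    private int size = 0;

    public int size() { return size; }

    @SuppressWarnings("unchecked")
    public V put(K key, V value) {
        if (key == null) {
            throw new IllegalArgumentException();
        }
        if (root == null) {
            root = new Node(key, value);
            size++;
            return null;
        }
        Comparable<? super K> k = (Comparable<? super K>) key;
        Node node = root;
        while (true) {
            int cmp = k.compareTo(node.key);
            if (cmp < 0) {
                if (node.left == null) {
                    node.left = new Node(key, value);  // bottom reached: attach here
                    size++;
                    return null;
                }
                node = node.left;
            } else if (cmp > 0) {
                if (node.right == null) {
                    node.right = new Node(key, value);
                    size++;
                    return null;
                }
                node = node.right;
            } else {
                V oldValue = node.value;               // key found: replace and return
                node.value = value;
                return oldValue;
            }
        }
    }
}
```

The loop performs the same case analysis as the recursive version, but it reuses one stack frame instead of making a recursive call at each level.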


\section{In-order traversal}
\label{in-order-traversal}

The last method I asked you to write is \java{keySet}, which returns
a \java{Set} that contains the keys from the tree in ascending order.
In other implementations of \java{Map}, the keys returned by
\java{keySet} are in no particular order, but one of the capabilities
of the tree implementation is that it is simple and efficient to sort
the keys. So we should take advantage of that.

\index{in-order}
\index{tree traversal}
\index{keySet}

Here's my solution:

\begin{verbatim}
public Set<K> keySet() {
    Set<K> set = new LinkedHashSet<K>();
    addInOrder(root, set);
    return set;
}

private void addInOrder(Node node, Set<K> set) {
    if (node == null) return;
    addInOrder(node.left, set);
    set.add(node.key);
    addInOrder(node.right, set);        
}
\end{verbatim}

In \java{keySet}, we create a \java{LinkedHashSet}, which is a
\java{Set} implementation that keeps the elements in order (unlike
most other \java{Set} implementations). Then we call
\java{addInOrder} to traverse the tree.

\index{LinkedHashSet}

The first parameter, \java{node}, is initially the root of the tree,
but as you should expect by now, we use it to traverse the tree
recursively. \java{addInOrder} performs a classic ``in-order
traversal'' of the tree.

If \java{node} is \java{null}, that means the subtree is empty, so
we return without adding anything to \java{set}. Otherwise we:

\begin{enumerate}

\item
  Traverse the left subtree in order.

\item
  Add \java{node.key}.

\item
  Traverse the right subtree in order.

\end{enumerate}

Remember that the BST property guarantees that all nodes in the left
subtree are less than \java{node.key}, and all nodes in the right
subtree are greater. So we know that \java{node.key} has been added in
the correct order.

\index{BST property}
\index{recursion}
\index{base case}

Applying the same argument recursively, we know that the elements from
the left subtree are in order, as well as the elements from the right
subtree.  And the base case is correct: if the subtree is empty, no
keys are added.  So we can conclude that this method adds all keys in
the correct order.

Because this method visits every node in the tree, like
\java{containsValue}, it takes time proportional to $n$.


\section{The logarithmic methods}
\label{the-logarithmic-methods}

In \java{MyTreeMap}, the methods \java{get} and \java{put} take
time proportional to the height of the tree, $h$. In the previous
exercise, we showed that if the tree is full --- if every level of the tree
contains the maximum number of nodes --- the height of the tree is
proportional to $\log n$.

\index{logarithmic time}
\index{get}
\index{put}

And I claimed that \java{get} and \java{put} are logarithmic; that
is, they take time proportional to $\log n$. But for most
applications, there's no guarantee that the tree is full. In general,
the shape of the tree depends on the keys and the order in which they are added.

To see how this works out in practice, we'll test our implementation
with two sample datasets: a list of random strings and a list of
timestamps in increasing order.

\index{profiling}

Here's the code that generates random strings:

\begin{verbatim}
Map<String, Integer> map = new MyTreeMap<String, Integer>();

for (int i=0; i<n; i++) {
    String uuid = UUID.randomUUID().toString();
    map.put(uuid, 0);
}
\end{verbatim}

\java{UUID} is a class in the \java{java.util} package that can
generate a random ``universally unique identifier''. UUIDs are useful
for a variety of applications, but in this example we're using them
as an easy way to generate random strings.

\index{UUID}

I ran this code with \java{n=16384} and measured the runtime and the
height of the final tree. Here's the output:

\begin{verbatim}
Time in milliseconds = 151
Final size of MyTreeMap = 16384
log base 2 of size of MyTreeMap = 14.0
Final height of MyTreeMap = 33
\end{verbatim}

I included ``log base 2 of size of MyTreeMap'' to see how tall the tree
would be if it were full. The result indicates that a full tree
containing 16,384 nodes would have height 14.

The actual tree of random strings has height 33, which is substantially
more than the theoretical minimum, but not too bad. To find one key in a
collection of 16,384, we only have to make 33 comparisons. Compared to a
linear search, that's almost 500 times faster.

\index{linear search}

This performance is typical with random strings or other keys that are
added in no particular order. The final height of the tree might be 2-3
times the theoretical minimum, but it is still proportional to
$\log n$, which is much less than $n$. In fact,
$\log n$ grows so slowly as $n$ increases, it can be
difficult to distinguish logarithmic time from constant time in
practice.

\index{constant time}
\index{logarithmic time}
\index{timestamp}

However, binary search trees don't always behave so well. Let's see what
happens when we add keys in increasing order. Here's an example that
reads timestamps in nanoseconds and uses them as keys:

\begin{verbatim}
MyTreeMap<String, Integer> map = new MyTreeMap<String, Integer>();

for (int i=0; i<n; i++) {
    String timestamp = Long.toString(System.nanoTime());
    map.put(timestamp, 0);
}
\end{verbatim}

\java{System.nanoTime} returns an integer with type \java{long} that
indicates elapsed time in nanoseconds. Each time we call it, we get a
somewhat bigger number. When we convert these timestamps to strings,
they appear in increasing alphabetical order (at least as long as the
number of digits stays the same).

And let's see what happens when we run it:

\begin{verbatim}
Time in milliseconds = 1158
Final size of MyTreeMap = 16384
log base 2 of size of MyTreeMap = 14.0
Final height of MyTreeMap = 16384
\end{verbatim}

The runtime is more than seven times longer than in the previous case.
If you wonder why, take a look at the final height of the tree: 16,384!

\begin{figure}
\centering
\includegraphics[width=4in]{figs/bst.pdf}
\caption{Binary search trees, balanced (left) and unbalanced (right).}
\label{bstfig}
\end{figure}

If you think about how \java{put} works, you can figure out what's
going on. Every time we add a new key, it's larger than all of the keys
in the tree, so we always choose the right subtree, and always add the
new node as the right child of the rightmost node. The result is an
``unbalanced'' tree that only contains right children.

\index{unbalanced tree}
\index{balanced tree}

The height of this tree is proportional to $n$, not
$\log n$, so the performance of \java{get} and \java{put} is
linear, not logarithmic.

\index{linear time}

Figure~\ref{bstfig} shows an example of a balanced and unbalanced
tree.  In the balanced tree, the height is 4 and the total number of
nodes is $2^4-1 = 15$.  In the unbalanced tree with the same number of
nodes, the height is 15.
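The degenerate case is easy to reproduce with a minimal BST sketch (this is an illustration, not the book's \java{MyTreeMap}): inserting keys in sorted order makes the height equal to $n$, while inserting the same keys in shuffled order keeps the height close to the logarithmic minimum.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public class DegenerateTreeDemo {
    static class Node {
        int key;
        Node left, right;
        Node(int key) { this.key = key; }
    }

    // Standard BST insertion; duplicate keys are ignored.
    static Node insert(Node node, int key) {
        if (node == null) return new Node(key);
        if (key < node.key) node.left = insert(node.left, key);
        else if (key > node.key) node.right = insert(node.right, key);
        return node;
    }

    // Height counted in nodes along the longest path.
    static int height(Node node) {
        if (node == null) return 0;
        return 1 + Math.max(height(node.left), height(node.right));
    }

    public static void main(String[] args) {
        int n = 1024;

        // sorted insertion: every new key becomes the right child
        // of the rightmost node, so the tree is a linked list
        Node sorted = null;
        for (int i = 0; i < n; i++) sorted = insert(sorted, i);

        // shuffled insertion: the height stays a small multiple of log2(n)
        List<Integer> keys = new ArrayList<>();
        for (int i = 0; i < n; i++) keys.add(i);
        Collections.shuffle(keys, new Random(17));
        Node shuffled = null;
        for (int k : keys) shuffled = insert(shuffled, k);

        System.out.println("sorted height = " + height(sorted));   // equals n
        System.out.println("shuffled height = " + height(shuffled));
    }
}
```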


\section{Self-balancing trees}
\label{self-balancing-trees}

\index{self-balancing tree}

There are two possible solutions to this problem:

\begin{itemize}

\item
  You could avoid adding keys to the \java{Map} in order. But this is
  not always possible.

\item
  You could make a tree that does a better job of handling keys if they
  happen to be in order.

\end{itemize}

The second solution is better, and there are several ways to do it. The
most common is to modify \java{put} so that it detects when the tree
is starting to become unbalanced and, if so, rearranges the nodes. Trees
with this capability are called ``self-balancing''. Common
self-balancing trees include the AVL tree (``AVL'' stands for the
initials of its inventors, Adelson-Velsky and Landis) and the red-black
tree, which is what the Java \java{TreeMap} uses.

\index{AVL tree}
\index{red-black tree}

In our example code, if we replace \java{MyTreeMap} with the Java
\java{TreeMap}, the runtimes are about the same for the random strings
and the timestamps. In fact, the timestamp version runs faster, even
though the keys are in order, probably because it takes less time to
generate a timestamp than a UUID.
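If you want to try this comparison yourself, here is a self-contained sketch that fills a \java{java.util.TreeMap} with timestamp keys, as in the earlier example; because \java{TreeMap} is a red-black tree, it stays balanced even though the keys arrive in increasing order:

```java
import java.util.Map;
import java.util.TreeMap;

public class TreeMapDemo {
    public static void main(String[] args) {
        int n = 16384;

        // TreeMap rebalances itself on every put, so sorted keys
        // are no worse than random keys.
        Map<String, Integer> map = new TreeMap<String, Integer>();

        long start = System.currentTimeMillis();
        for (int i = 0; i < n; i++) {
            String timestamp = Long.toString(System.nanoTime());
            map.put(timestamp, 0);
        }
        long elapsed = System.currentTimeMillis() - start;

        System.out.println("Time in milliseconds = " + elapsed);
        System.out.println("Final size of TreeMap = " + map.size());
    }
}
```

The size may come out slightly below \java{n} if two calls to \java{System.nanoTime} happen to return the same value.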

\index{logarithmic time}

In summary, a binary search tree can implement \java{get} and
\java{put} in logarithmic time, but only if the keys are added in an
order that keeps the tree sufficiently balanced. Self-balancing trees
avoid this problem by doing some additional work each time a new key is
added.

You can read more about self-balancing trees at
\url{http://thinkdast.com/balancing}.


\section{One more exercise}
\label{one-more-exercise}

In the previous exercise you didn't have to implement \java{remove},
but you might want to try it. If you remove a node from the middle of
the tree, you have to rearrange the remaining nodes to restore the BST
property.  You can probably figure out how to do that on your own, or
you can read the explanation at
\url{http://thinkdast.com/bstdel}.

\index{remove}

Removing a node and rebalancing a tree are similar operations: if you do
this exercise, you will have a better idea of how self-balancing trees
work.



\chapter{Persistence}

In the next few exercises we will get back to building a web search
engine. To review, the components of a search engine are:

\begin{itemize}

\item
  Crawling: We'll need a program that can download a web page, parse it,
  and extract the text and any links to other pages.

\item
  Indexing: We'll need an index that makes it possible to look up a
  search term and find the pages that contain it.

\item
  Retrieval: And we'll need a way to collect results from the index and
  identify pages that are most relevant to the search terms.

\end{itemize}

\index{search engine}
\index{crawler}
\index{indexer}
\index{retriever}

If you did Exercise~\ref{exercise6}, you implemented an index
using Java maps. In this exercise, we'll revisit the indexer and make
a new version that stores the results in a database.

\index{indexer}

If you did Exercise~\ref{exercise5}, you
built a crawler that follows the first link it finds. In the next exercise,
we'll make a more general version that stores every link it finds in a
queue and explores them in order.

And then, finally, you will work on the retrieval problem.

In these exercises, I provide less starter code, and you will make more
design decisions. These exercises are also more open-ended. I will suggest
some minimal goals you should try to reach, but there are many ways you
can go farther if you want to challenge yourself.

Now, let's get started on a new version of the indexer.

\section{Redis}
\label{redis}

\index{Redis}

The previous version of the indexer stores the index in two data
structures: a \java{TermCounter} that maps from a search term to the
number of times it appears on a web page, and an \java{Index} that
maps from a search term to the set of pages where it appears.

These data structures are stored in the memory of a running Java
program, which means that when the program stops running, the index is
lost. Data stored only in the memory of a running program is called
``volatile'', because it vaporizes when the program ends.

\index{volatile}
\index{persistent}

Data that persists after the program that created it ends is called
``persistent''. In general, files stored in a file system are
persistent, as well as data stored in databases.

\index{JSON}

A simple way to make data persistent is to store it in a file. Before
the program ends, it could translate its data structures into a format
like JSON (\url{http://thinkdast.com/json}) and then write them
into a file. When it starts again, it could read the file and rebuild
the data structures.
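To make the idea concrete, here is a self-contained sketch of this strategy. It uses \java{java.util.Properties} as a stand-in for JSON so the example doesn't need an external library; the file name and method names are hypothetical:

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

public class PersistDemo {
    // Write a map of term counts to a file before the program ends.
    public static void save(Map<String, Integer> map, File file)
            throws IOException {
        Properties props = new Properties();
        for (Map.Entry<String, Integer> e : map.entrySet()) {
            props.setProperty(e.getKey(), Integer.toString(e.getValue()));
        }
        try (OutputStream out = new FileOutputStream(file)) {
            props.store(out, "term counts");
        }
    }

    // Read the file back and rebuild the map when the program restarts.
    public static Map<String, Integer> load(File file) throws IOException {
        Properties props = new Properties();
        try (InputStream in = new FileInputStream(file)) {
            props.load(in);
        }
        Map<String, Integer> map = new HashMap<>();
        for (String key : props.stringPropertyNames()) {
            map.put(key, Integer.parseInt(props.getProperty(key)));
        }
        return map;
    }

    public static void main(String[] args) throws IOException {
        File file = File.createTempFile("terms", ".properties");
        Map<String, Integer> counts = new HashMap<>();
        counts.put("the", 339);
        save(counts, file);
        System.out.println(load(file).get("the")); // 339
        file.delete();
    }
}
```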

But there are several problems with this solution:

\begin{enumerate}
\item
  Reading and writing large data structures (like a Web index) would be
  slow.

\item
  The entire data structure might not fit into the memory of a single
  running program.

\item
  If a program ends unexpectedly (for example, due to a power outage),
  any changes made since the program last started would be lost.

\end{enumerate}

A better alternative is a database that provides persistent storage and
the ability to read and write parts of the database without reading and
writing the whole thing.

\index{database}
\index{DBMS}

There are many kinds of database management systems (DBMS) that provide
different capabilities. You can read an overview at
\url{http://thinkdast.com/database}.

\index{Redis}

The database I recommend for this exercise is Redis, which provides
persistent data structures that are similar to Java data structures.
Specifically, it provides:

\begin{itemize}

\item
  Lists of strings, similar to Java \java{List}.

\item
  Hashes, similar to Java \java{Map}.

\item
  Sets of strings, similar to Java \java{Set}.

\end{itemize}

Redis is a ``key-value database'', which means that the data structures
it contains (the values) are identified by unique strings (the keys). A
key in Redis plays the same role as a reference in Java: it identifies
an object. We'll see some examples soon.

\index{key-value database}


\section{Redis clients and servers}
\label{redis-clients-and-servers}

\index{client}
\index{server}

Redis is usually run as a remote service; in fact, the name stands for
``REmote DIctionary Server''. To use Redis, you have to run the Redis
server somewhere and then connect to it using a Redis client. There are
many ways to set up a server and many clients you could use. For this
exercise, I recommend:

\begin{enumerate}

\item
  Rather than install and run the server yourself, consider using a
  service like RedisToGo (\url{http://thinkdast.com/redistogo}), which runs
  Redis in the cloud. They offer a free plan with enough resources for
  the exercise.

\item
  For the client I recommend Jedis, which is a Java library that
  provides classes and methods for working with Redis.

\end{enumerate}

\index{RedisToGo}
\index{Jedis}

Here are more detailed instructions to help you get started:

\begin{itemize}

\item Create an account on RedisToGo, at
  \url{http://thinkdast.com/redissign},
  and select the plan you want (probably the free plan to get started).

\item
  Create an ``instance'', which is a virtual machine running the Redis
  server. If you click on the ``Instances'' tab, you should see your new
  instance, identified by a host name and a port number. For example, I
  have an instance named ``dory-10534''.

\item
  Click on the instance name to get the configuration page. Make a note
  of the URL near the top of the page, which looks like this:

  \begin{verbatim}
  redis://redistogo:1234567feedfacebeefa1e1234567@dory.redistogo.com:10534
  \end{verbatim}

\end{itemize}

\index{Redis instance}

This URL contains the server's host name, \java{dory.redistogo.com},
the port number, \java{10534}, and the password you will need to
connect to the server, which is the long string of letters and numbers
in the middle. You will need this information for the next step.


\section{Making a Redis-backed index}
\label{hello-jedis}

\index{JedisMaker}
\index{JedisIndex}
\index{WikiFetcher}

In the repository for this book,
you'll find the source files for this exercise:

\begin{itemize}

\item
  \java{JedisMaker.java} contains example code for connecting to a
  Redis server and running a few Jedis methods.

\item
  \java{JedisIndex.java} contains starter code for this exercise.

\item
  \java{JedisIndexTest.java} contains test code for
  \java{JedisIndex}.

\item
  \java{WikiFetcher.java} contains the code we saw in previous exercises to
  read web pages and parse them using jsoup.

\end{itemize}

You will also need these files, which you worked on in previous
exercises:

\begin{itemize}

\item
  \java{Index.java} implements an index using Java data structures.

\item
  \java{TermCounter.java} represents a map from terms to their
  frequencies.

\item
  \java{WikiNodeIterable.java} iterates through the nodes in a DOM
  tree produced by jsoup.

\end{itemize}

If you have working versions of these files, you can use them for
this exercise.  If you didn't do the previous exercises, or you
are not confident in your solutions, you can copy my solutions
from the {\tt solutions} folder.

The first step is to use Jedis to connect to your Redis server.
\java{JedisMaker.java} shows how to do this. It reads information
about your Redis server from a file, connects to it and logs in using
your password, then returns a \java{Jedis} object you can use to
perform Redis operations.

\index{helper class}

If you open \java{JedisMaker.java}, you should see the
\java{JedisMaker} class, which is a helper class that provides one
static method, \java{make}, which creates a \java{Jedis} object. Once
this object is authenticated, you can use it to communicate with your
Redis database.

\java{JedisMaker} reads information about your Redis server from a
file named \java{redis_url.txt}, which you should put in the
directory \java{src/resources}:

\begin{itemize}

\item
  Use a text editor to create and edit
  \java{ThinkDataStructures/code/src/resources/redis_url.txt}.

\item
  Paste in the URL of your server. If you are using RedisToGo, the URL
  will look like this:

\java{redis://redistogo:1234567feedfacebeefa1e1234567@dory.redistogo.com:10534}

\end{itemize}

Because this file contains the password for your Redis server, you
should not put this file in a public repository. To help you avoid
doing that by accident, the repository contains a {\tt .gitignore}
file that makes it harder (but not impossible) to put this file
in your repo.
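As an illustration of what \java{JedisMaker} has to do with this URL, here is a hypothetical sketch (not the actual \java{JedisMaker} code) that pulls the host, port, and password out of a \java{redis://} URL using \java{java.net.URI}:

```java
import java.net.URI;

public class RedisUrlParser {
    // Hypothetical helper: split a URL like
    // redis://redistogo:PASSWORD@HOST:PORT
    // into its host, port, and password.
    public static String[] parse(String url) {
        URI uri = URI.create(url);
        String host = uri.getHost();
        int port = uri.getPort();
        // userInfo looks like "redistogo:PASSWORD"; the password is
        // the part after the first colon
        String[] userInfo = uri.getUserInfo().split(":", 2);
        String password = userInfo[1];
        return new String[] { host, Integer.toString(port), password };
    }

    public static void main(String[] args) {
        String url =
            "redis://redistogo:1234567feedfacebeefa1e1234567@dory.redistogo.com:10534";
        String[] parts = parse(url);
        System.out.println("host = " + parts[0]); // dory.redistogo.com
        System.out.println("port = " + parts[1]); // 10534
    }
}
```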

\index{Ant}

Now run \java{ant build} to compile the
source files and \java{ant JedisMaker} to run the example code in
\java{main}:

\begin{verbatim}
    public static void main(String[] args) {

        Jedis jedis = make();
        
        // String
        jedis.set("mykey", "myvalue");
        String value = jedis.get("mykey");
        System.out.println("Got value: " + value);
        
        // Set
        jedis.sadd("myset", "element1", "element2", "element3");
        System.out.println("element2 is member: " + 
                           jedis.sismember("myset", "element2"));
        
        // List
        jedis.rpush("mylist", "element1", "element2", "element3");
        System.out.println("element at index 1: " + 
                           jedis.lindex("mylist", 1));
        
        // Hash
        jedis.hset("myhash", "word1", Integer.toString(2));
        jedis.hincrBy("myhash", "word2", 1);
        System.out.println("frequency of word1: " + 
                           jedis.hget("myhash", "word1"));
        System.out.println("frequency of word2: " + 
                           jedis.hget("myhash", "word2"));
        
        jedis.close();
    }
\end{verbatim}

This example demonstrates the data types and methods you are most likely
to use for this exercise. When you run it, the output should be:

\begin{verbatim}
Got value: myvalue
element2 is member: true
element at index 1: element2
frequency of word1: 2
frequency of word2: 1
\end{verbatim}

In the next section, I'll explain how the code works.


\newcommand{\redis}{\textit}

\section{Redis data types}
\label{redis-data-types}

Redis is basically a map from keys, which are strings, to
values, which can be one of several data types. The most basic Redis
data type is a \redis{string}.  I will write Redis types in
italics to distinguish them from Java types. 

To add a \redis{string} to the database,
use \java{jedis.set}, which is similar to \java{Map.put}; the
parameters are the new key and the corresponding value. To look up a
key and get its value, use \java{jedis.get}:

\begin{verbatim}
        jedis.set("mykey", "myvalue");
        String value = jedis.get("mykey");
\end{verbatim}

In this example, the key is \java{"mykey"} and the value is
\java{"myvalue"}.

\index{Redis set}
\index{Redis get}

Redis provides a \redis{set} structure, which is
similar to a Java
\java{Set<String>}. To add elements to a Redis \redis{set},
you choose a key to identify the \redis{set} and then use
\java{jedis.sadd}:

\begin{verbatim}
        jedis.sadd("myset", "element1", "element2", "element3");
        boolean flag = jedis.sismember("myset", "element2");
\end{verbatim}

You don't have to create the \redis{set} as a separate step. If it doesn't
exist, Redis creates it. In this case, it creates a \redis{set} named
\java{myset} that contains three elements.

The method \java{jedis.sismember} checks whether an element is in a
\redis{set}. Adding elements and checking membership are constant time
operations.

\index{constant time}

Redis also provides a \redis{list} structure, which is
similar to a Java
\java{List<String>}. The method
\java{jedis.rpush} adds elements to the end (right side) of a
\redis{list}:

\begin{verbatim}
        jedis.rpush("mylist", "element1", "element2", "element3");
        String element = jedis.lindex("mylist", 1);
\end{verbatim}

Again, you don't have to create the structure before you start
adding elements. This example creates a \redis{list} named ``mylist'' that
contains three elements.

\index{Redis list}
\index{Redis hash}

The method \java{jedis.lindex} takes an integer index and returns the
indicated element of a \redis{list}. Adding and accessing elements are
constant time operations.

Finally, Redis provides a \redis{hash} structure, which is similar to a Java
\java{Map<String, String>}. The method
\java{jedis.hset} adds a new entry to the \redis{hash}:

\begin{verbatim}
        jedis.hset("myhash", "word1", Integer.toString(2));
        String value = jedis.hget("myhash", "word1");
\end{verbatim}

This example creates a \redis{hash} named \java{myhash} that contains one
entry, which maps from the key \java{word1} to the value \java{"2"}.

The keys and values are \redis{string}s, so if we want to store
an \java{Integer}, we have to convert it to
a \java{String} before we call \java{hset}. 
And when we look up the value using \java{hget},
the result is a \java{String}, so we might have to convert it back
to \java{Integer}.

\index{field}

Working with Redis \redis{hash}es can be confusing, because we use a key to
identify which \redis{hash} we want, and then another key to identify a value in
the \redis{hash}. In the context of Redis, the second key is called a ``field'',
which might help keep things straight. So a ``key'' like \java{myhash}
identifies a particular \redis{hash}, and then a ``field'' like \java{word1}
identifies a value in the \redis{hash}.

For many applications, the values in a Redis \redis{hash} are integers, so Redis
provides a few special methods, like \java{hincrBy}, that treat the
values as numbers:

\begin{verbatim}
        jedis.hincrBy("myhash", "word2", 1);
\end{verbatim}

This method accesses \java{myhash}, gets the current value associated
with \java{word2} (or 0 if it doesn't already exist), increments it by
1, and writes the result back to the \redis{hash}.

Setting, getting, and incrementing entries in a \redis{hash} are constant time
operations.
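If the distinction between keys and fields is still confusing, it might help to mimic the \redis{hash} operations in plain Java: the database is a map from keys to hashes, and each hash is a map from fields to strings. This in-memory sketch is only an analogy, not how Redis works internally:

```java
import java.util.HashMap;
import java.util.Map;

public class HashSketch {
    // One map from keys to hashes; each hash maps fields to strings.
    private final Map<String, Map<String, String>> db = new HashMap<>();

    public void hset(String key, String field, String value) {
        db.computeIfAbsent(key, k -> new HashMap<>()).put(field, value);
    }

    public String hget(String key, String field) {
        Map<String, String> hash = db.get(key);
        return hash == null ? null : hash.get(field);
    }

    public long hincrBy(String key, String field, long incr) {
        Map<String, String> hash = db.computeIfAbsent(key, k -> new HashMap<>());
        // a missing field is treated as 0, just like Redis
        long value = Long.parseLong(hash.getOrDefault(field, "0")) + incr;
        hash.put(field, Long.toString(value));
        return value;
    }

    public static void main(String[] args) {
        HashSketch sketch = new HashSketch();
        sketch.hset("myhash", "word1", Integer.toString(2));
        sketch.hincrBy("myhash", "word2", 1);
        System.out.println(sketch.hget("myhash", "word1")); // 2
        System.out.println(sketch.hget("myhash", "word2")); // 1
    }
}
```

Here \java{"myhash"} is the key that selects a hash, and \java{"word1"} and \java{"word2"} are fields within it.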

\index{constant time}
\index{Redis data type}

You can read more about Redis data types at
\url{http://thinkdast.com/redistypes}.


\section{Exercise 11}
\label{exercise11}

At this point you have the information you need to make a web search
index that stores results in a Redis database.

\index{JedisIndex}

Now run \java{ant JedisIndexTest}. It should
fail, because you have some work to do!

\java{JedisIndexTest} tests these methods:

\begin{itemize}

\item
  \java{JedisIndex}, which is the constructor that takes a
  \java{Jedis} object as a parameter.

\item
  \java{indexPage}, which adds a Web page to the index; it takes a
  \java{String} URL and a jsoup \java{Elements} object that contains the
  elements of the page that should be indexed.

\item
  \java{getCounts}, which takes a search term and returns a
  \java{Map<String, Integer>} that maps from
  each URL that contains the search term to the number of times it
  appears on that page.

\end{itemize}

Here's an example of how these methods are used:

\begin{verbatim}
        WikiFetcher wf = new WikiFetcher();
        String url1 = 
            "http://en.wikipedia.org/wiki/Java_(programming_language)";
        Elements paragraphs = wf.readWikipedia(url1);

        Jedis jedis = JedisMaker.make();
        JedisIndex index = new JedisIndex(jedis);
        index.indexPage(url1, paragraphs);
        Map<String, Integer> map = index.getCounts("the");
\end{verbatim}

If we look up \java{url1} in the result, \java{map}, we should get
339, which is the number of times the word ``the'' appears
on the Java Wikipedia page (that is, the version we saved).

\index{WikiFetcher}

If we index the same page again, the new results should replace the old
ones.

One suggestion for translating data structures from Java to Redis:
remember that each object in a Redis database is identified by a unique
key, which is a \redis{string}. If you have two kinds of objects in the same
database, you might want to add a prefix to the keys to distinguish
between them. For example, in our solution, we have two kinds of
objects:

\begin{itemize}

\item
  We define a \java{URLSet} to be a Redis \redis{set} that contains
  the URLs that contain a given search term. The key for each
  \java{URLSet} starts with \java{"URLSet:"}, so to get the URLs
  that contain the word ``the'', we access the \redis{set} with the key
  \java{"URLSet:the"}.

\item
  We define a \java{TermCounter} to be a Redis \redis{hash} that maps
  from each term that appears on a page to the number of times it
  appears. The key for each \java{TermCounter} starts with
  \java{"TermCounter:"} and ends with the URL of the page we're
  looking up.

\end{itemize}

\index{URLSet}
\index{TermCounter}

In my implementation,  there is one \java{URLSet} for each term and one
\java{TermCounter} for each indexed page. I provide two helper
methods, \java{urlSetKey} and \java{termCounterKey}, to assemble
these keys.

\index{helper method}


\section{More suggestions if you want them}
\label{more-suggestions-if-you-want-them}

At this point you have all the information you need to do the exercise, so
you can get started if you are ready. But I have a few suggestions you
might want to read first:

\begin{itemize}

\item
  For this exercise I provide less guidance than in previous
  exercises.  You will have to make some design decisions; in
  particular, you will have to figure out how to divide the problem
  into pieces that you can test one at a time, and then assemble the
  pieces into a complete solution. If you try to write the whole thing
  at once, without testing smaller pieces, it might take a very long
  time to debug.

\item
  One of the challenges of working with persistent data is that it is
  persistent. The structures stored in the database might change every
  time you run the program. If you mess something up in the database,
  you will have to fix it or start over before you can proceed. To help
  you keep things under control, I've provided methods called
  \java{deleteURLSets}, \java{deleteTermCounters}, and
  \java{deleteAllKeys}, which you can use to clean out the database
  and start fresh. You can also use \java{printIndex} to print the
  contents of the index.

\item
  Each time you invoke a \java{Jedis} method, your client sends a
  message to the server, then the server performs the action you
  requested and sends back a message. If you perform many small
  operations, it will probably take a long time. You can improve
  performance by grouping a series of operations into a
  \java{Transaction}.

\end{itemize}

For example, here's a simple version of \java{deleteAllKeys}:

\begin{verbatim}
    public void deleteAllKeys() {
        Set<String> keys = jedis.keys("*");
        for (String key: keys) {
            jedis.del(key);
        }
    }
\end{verbatim}

Each invocation of \java{del} requires a round-trip from the client
to the server and back. If the index contains more than a few pages,
this method would take a long time to run. We can speed it up with a
\java{Transaction} object:

\index{server}

\begin{verbatim}
    public void deleteAllKeys() {
        Set<String> keys = jedis.keys("*");
        Transaction t = jedis.multi();
        for (String key: keys) {
            t.del(key);
        }
        t.exec();
    }
\end{verbatim}

\java{jedis.multi} returns a \java{Transaction} object, which
provides all the methods of a \java{Jedis} object. But when you invoke
a method on a \java{Transaction}, it doesn't run the operation
immediately, and it doesn't communicate with the server. It saves up a
batch of operations until you invoke \java{exec}. Then it sends all of
the saved operations to the server at the same time, which is usually
much faster.

\index{Transaction}



\section{A few design hints}
\label{a-few-design-hints}

Now you \emph{really} have all the information you need; you should
start working on the exercise. But if you get stuck, or if you really don't
know how to get started, you can come back for a few more hints.

\textbf{Don't read the following until you have run the test code, tried
out some basic Redis commands, and written a few methods in
\java{JedisIndex.java}}.

OK, if you are really stuck, here are some methods you might want to
work on:

\begin{verbatim}
    /**
     * Adds a URL to the set associated with term.
     */
    public void add(String term, TermCounter tc) {}

    /**
     * Looks up a search term and returns a set of URLs.
     */
    public Set<String> getURLs(String term) {}

    /**
     * Returns the number of times the given term appears at the given URL.
     */
    public Integer getCount(String url, String term) {}

    /**
     * Pushes the contents of the TermCounter to Redis.
     */
    public List<Object> pushTermCounterToRedis(TermCounter tc) {}
\end{verbatim}

These are the methods I used in my solution, but they are certainly
not the only way to divide things up. So please take these suggestions
if they help, but ignore them if they don't.

For each method, consider writing the tests first. When you figure out
how to test a method, you often get ideas about how to write it.

Good luck!



\chapter{Crawling Wikipedia}

In this chapter, I present a solution to the previous exercise and
analyze the performance of Web indexing algorithms. Then we build a
simple Web crawler.

\section{The Redis-backed indexer}
\label{redis-indexer}

\index{Redis}
\index{URLSet}
\index{TermCounter}

In my solution, we store two kinds of structures in Redis:

\begin{itemize}

\item
  For each search term, we have a \java{URLSet}, which is a Redis \redis{set}
  of URLs that contain the search term.

\item
  For each URL, we have a \java{TermCounter}, which is a Redis \redis{hash}
  that maps each search term to the number of times it appears.

\end{itemize}

We discussed these data types in the previous chapter. You can also
read about Redis structures at \url{http://thinkdast.com/redistypes}.

\index{JedisIndex}

In \java{JedisIndex}, I provide a method that takes a search term
and returns the Redis key of its \java{URLSet}:

\begin{verbatim}
private String urlSetKey(String term) {
    return "URLSet:" + term;
}
\end{verbatim}

And a method that takes a URL and returns the Redis key of its
\java{TermCounter}:

\begin{verbatim}
private String termCounterKey(String url) {
    return "TermCounter:" + url;
}
\end{verbatim}

Here's the implementation of \java{indexPage}, which takes a URL and a
jsoup \java{Elements} object that contains the DOM tree of the
paragraphs we want to index:

\begin{verbatim}
public void indexPage(String url, Elements paragraphs) {
    System.out.println("Indexing " + url);

    // make a TermCounter and count the terms in the paragraphs
    TermCounter tc = new TermCounter(url);
    tc.processElements(paragraphs);

    // push the contents of the TermCounter to Redis
    pushTermCounterToRedis(tc);
}
\end{verbatim}

To index a page, we

\begin{enumerate}

\item
  Make a Java \java{TermCounter} for the contents of the page, using
  code from a previous exercise.

\item
  Push the contents of the \java{TermCounter} to Redis.

\end{enumerate}

Here's the new code that pushes a \java{TermCounter} to Redis:

\begin{verbatim}
public List<Object> pushTermCounterToRedis(TermCounter tc) {
    Transaction t = jedis.multi();

    String url = tc.getLabel();
    String hashname = termCounterKey(url);

    // if this page has already been indexed, delete the old hash
    t.del(hashname);

    // for each term, add an entry in the TermCounter and a new
    // member of the index
    for (String term: tc.keySet()) {
        Integer count = tc.get(term);
        t.hset(hashname, term, count.toString());
        t.sadd(urlSetKey(term), url);
    }
    List<Object> res = t.exec();
    return res;
}
\end{verbatim}

This method uses a \java{Transaction} to collect the operations and
send them to the server all at once, which is much faster than sending a
series of small operations.

\index{Transaction}

It loops through the terms in the \java{TermCounter}. For each one it

\begin{enumerate}

\item
  Finds or creates a \java{TermCounter} on Redis, then adds a field
  for the new term.

\item
  Finds or creates a \java{URLSet} on Redis, then adds the current
  URL.

\end{enumerate}

If the page has already been indexed, we delete its old
\java{TermCounter} before pushing the new contents.

That's it for indexing new pages.

\index{getCounts}

The second part of the exercise asked you to write \java{getCounts}, which
takes a search term and returns a map from each URL where the term
appears to the number of times it appears there. Here is my solution:

\begin{verbatim}
    public Map<String, Integer> getCounts(String term) {
        Map<String, Integer> map = new HashMap<String, Integer>();
        Set<String> urls = getURLs(term);
        for (String url: urls) {
            Integer count = getCount(url, term);
            map.put(url, count);
        }
        return map;
    }
\end{verbatim}

\index{helper method}
This method uses two helper methods:

\begin{itemize}

\item
  \java{getURLs} takes a search term and returns the Set of URLs where
  the term appears.

\item
  \java{getCount} takes a URL and a term and returns the number of
  times the term appears at the given URL.

\end{itemize}

Here are the implementations:

\begin{verbatim}
    public Set<String> getURLs(String term) {
        Set<String> set = jedis.smembers(urlSetKey(term));
        return set;
    }

    public Integer getCount(String url, String term) {
        String redisKey = termCounterKey(url);
        String count = jedis.hget(redisKey, term);
        return Integer.valueOf(count);
    }
\end{verbatim}

Because of the way we designed the index, these methods are simple and
efficient.


\section{Analysis of lookup}
\label{analysis-of-lookup}

Suppose we have indexed $N$ pages and discovered $M$
unique search terms. How long will it take to look up a search term?
Think about your answer before you continue.

\index{analysis}

To look up a search term, we run \java{getCounts}, which

\begin{enumerate}

\item
  Creates a map.

\item
  Runs \java{getURLs} to get a Set of URLs.

\item
  For each URL in the Set, runs \java{getCount} and adds an entry
  to a \java{HashMap}.

\end{enumerate}

\java{getURLs} takes time proportional to the number of URLs that
contain the search term. For rare terms, that might be a small number,
but for common terms it might be as large as $N$.

Inside the loop, we run \java{getCount}, which finds a
\java{TermCounter} on Redis, looks up a term, and adds an entry to a
HashMap. Those are all constant time operations, so the overall
complexity of \java{getCounts} is $O(N)$ in the worst case. However, in
practice the runtime is proportional to the number of pages that contain
the term, which is normally much less than $N$.

\index{constant time}
\index{linear time}

This algorithm is as efficient as it can be, in terms of
algorithmic complexity, but it is very slow because it sends many small
operations to Redis. You can make it faster using a
\java{Transaction}. You might want to do that as an exercise, or you
can see my solution in \java{RedisIndex.java}.

\index{Transaction}


\section{Analysis of indexing}
\label{analysis-of-indexing}

Using the data structures we designed, how long will it take to index a
page? Again, think about your answer before you continue.

\index{analysis}
\index{DOM tree}

To index a page, we traverse its DOM tree, find all the
\java{TextNode} objects, and split up the strings into search terms.
That all takes time proportional to the number of words on the page.

\index{HashMap}

For each term, we increment a counter in a HashMap, which is a constant
time operation. So making the \java{TermCounter} takes time
proportional to the number of words on the page.

\index{linear time}

Pushing the \java{TermCounter} to Redis requires deleting a
\java{TermCounter}, which is linear in the number of unique terms.
Then for each term we have to

\begin{enumerate}

\item
  Add an element to a \java{URLSet}, and

\item
  Add an element to a Redis \java{TermCounter}.

\end{enumerate}

Both of these are constant time operations, so the total time to push
the \java{TermCounter} is linear in the number of unique search terms.

\index{constant time}

In summary, making the \java{TermCounter} is proportional to the
number of words on the page. Pushing the \java{TermCounter} to Redis
is proportional to the number of unique terms.

\index{TermCounter}

Since the number of words on the page usually exceeds the number of
unique search terms, the overall complexity is proportional to the
number of words on the page. In theory a page might contain all search
terms in the index, so the worst case performance is $O(M)$, but we don't
expect to see the worst case in practice.

This analysis suggests a way to improve performance: we should probably
avoid indexing very common words. First of all, they take up a lot of
time and space, because they appear in almost every \java{URLSet} and
\java{TermCounter}. Furthermore, they are not very useful because they
don't help identify relevant pages.

\index{stop words}

Most search engines avoid indexing common words, which are known in this
context as stop words (\url{http://thinkdast.com/stopword}).


\section{Graph traversal}
\label{graph-traversal}

If you did the ``Getting to Philosophy'' exercise in
Chapter~\ref{getphilo}, you already have a program that reads a Wikipedia
page, finds the first link, uses the link to load the next page, and
repeats. This program is a specialized kind of crawler, but when
people say ``Web crawler'' they usually mean a program that

\begin{itemize}

\item
  Loads a starting page and indexes the contents,

\item
  Finds all the links on the page and adds the linked URLs to a
  collection,

\item
  Works its way through the collection, loading pages, indexing them,
  and adding new URLs, and

\item
  Skips any URL that has already been indexed.

\end{itemize}

You can think of the Web as a graph
where each page is a node and each link is a directed edge from one node
to another. If you are not familiar with graphs, you can read about
them at \url{http://thinkdast.com/graph}.

\index{graph}
\index{traversal}

Starting from a source node, a crawler traverses this graph,
visiting each reachable node once.

\index{queue}
\index{stack}
\index{FIFO}
\index{LIFO}

The collection we use to store the URLs determines what kind of
traversal the crawler performs:

\begin{itemize}

\item
  If it's a first-in-first-out (FIFO) queue, the crawler performs a
  breadth-first traversal.

\item
  If it's a last-in-first-out (LIFO) stack, the crawler performs a
  depth-first traversal.

\item
  More generally, the items in the collection might be prioritized. For
  example, we might want to give higher priority to pages that have not
  been indexed for a long time.

\end{itemize}
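To see how the choice of collection affects the traversal order,
here's a small sketch (not part of the crawler code) that traverses a
made-up link graph, using a \java{Deque} as either a FIFO queue or a
LIFO stack:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Deque;
import java.util.HashMap;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class TraversalDemo {
    // traverse a link graph from source; fifo=true gives breadth-first
    // order, fifo=false gives depth-first order
    static List<String> traverse(Map<String, List<String>> links,
                                 String source, boolean fifo) {
        Deque<String> collection = new ArrayDeque<String>();
        Set<String> visited = new LinkedHashSet<String>();
        collection.add(source);
        while (!collection.isEmpty()) {
            String url = fifo ? collection.removeFirst()
                              : collection.removeLast();
            if (visited.contains(url)) {
                continue;  // already visited: skip it
            }
            visited.add(url);
            List<String> targets = links.get(url);
            if (targets != null) {
                collection.addAll(targets);  // adds at the end
            }
        }
        return new ArrayList<String>(visited);
    }

    public static void main(String[] args) {
        // hypothetical pages A-E; each maps to the pages it links to
        Map<String, List<String>> links = new HashMap<String, List<String>>();
        links.put("A", Arrays.asList("B", "C"));
        links.put("B", Arrays.asList("D"));
        links.put("C", Arrays.asList("E"));
        System.out.println(traverse(links, "A", true));   // [A, B, C, D, E]
        System.out.println(traverse(links, "A", false));  // [A, C, E, B, D]
    }
}
```

Removing from the front of the \java{Deque} visits the nodes level by
level; removing from the back follows each branch as far as it goes
before backtracking.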

You can read more about graph traversal at
\url{http://thinkdast.com/graphtrav}.


\section{Exercise 12}
\label{exercise12}

\index{WikiCrawler}
\index{JedisIndex}

Now it's time to write the crawler.  In the repository for this book,
you'll find the source files for this exercise:

\begin{itemize}

\item \java{WikiCrawler.java}, which contains starter code for your
  crawler.

\item \java{WikiCrawlerTest.java}, which contains test code for
  \java{WikiCrawler}.

\item \java{JedisIndex.java}, which is my solution to the previous
  exercise.

\end{itemize}

\index{helper class}

You'll also need some of the helper classes we've used in previous
exercises:

\begin{itemize}
\item  \java{JedisMaker.java}
\item  \java{WikiFetcher.java}
\item  \java{TermCounter.java}
\item  \java{WikiNodeIterable.java}
\end{itemize}

Before you run \java{JedisMaker}, you have to provide a file with
information about your Redis server. If you did this in the previous
exercise, you should be all set. Otherwise you can find instructions in
Section~\ref{hello-jedis}.

\index{JedisMaker}
\index{Ant}

Run \java{ant build} to compile the source files, then run
\java{ant JedisMaker} to make sure it is configured to connect to your
Redis server.

Now run \java{ant WikiCrawlerTest}. It should
fail, because you have work to do!

Here's the beginning of the \java{WikiCrawler} class I provided:

\begin{verbatim}
public class WikiCrawler {

    public final String source;
    private JedisIndex index;
    private Queue<String> queue = new LinkedList<String>();
    final static WikiFetcher wf = new WikiFetcher();

    public WikiCrawler(String source, JedisIndex index) {
        this.source = source;
        this.index = index;
        queue.offer(source);
    }

    public int queueSize() {
        return queue.size();
    }
}
\end{verbatim}

The instance variables are

\begin{itemize}

\item
  \java{source} is the URL where we start crawling.

\item
  \java{index} is the \java{JedisIndex} where the results should go.

\item
  \java{queue} is a \java{LinkedList} where we keep track of URLs
  that have been discovered but not yet indexed.

\item
  \java{wf} is the \java{WikiFetcher} we'll use to read and parse
  Web pages.

\end{itemize}

Your job is to fill in \java{crawl}. Here's the prototype:

\index{crawl}

\begin{verbatim}
    public String crawl(boolean testing) throws IOException {}
\end{verbatim}

The parameter \java{testing} will be \java{true} when this method is
called from \java{WikiCrawlerTest} and should be \java{false}
otherwise.

When \java{testing} is \java{true}, the \java{crawl} method should:

\begin{itemize}

\item
  Choose and remove a URL from the queue in FIFO order.

\item
  Read the contents of the page using
  \java{WikiFetcher.readWikipedia}, which reads cached copies of pages
  included in the repository for testing purposes (to avoid
  problems if the Wikipedia version changes).

\item
  Index the page regardless of whether it is already indexed.

\item
  Find all the internal links on the page and add them to the
  queue in the order they appear. ``Internal links'' are links to other
  Wikipedia pages.

\item
  Return the URL of the page it indexed.

\end{itemize}

When \java{testing} is \java{false}, this method should:

\begin{itemize}

\item
  Choose and remove a URL from the queue in FIFO order.

\item
  If the URL is already indexed, skip it and return \java{null}
  without indexing it again.

\item
  Otherwise, read the contents of the page using
  \java{WikiFetcher.fetchWikipedia}, which reads current content from
  the Web.

\item
  Then index the page, add links to the queue, and return the
  URL of the page it indexed.

\end{itemize}

\java{WikiCrawlerTest} loads the queue with about 200 links and then
invokes \java{crawl} three times. After each invocation, it checks the
return value and the new length of the queue.

When your crawler is working as specified, this test should pass. Good
luck!



\chapter{Boolean search}

In this chapter I present a solution to the previous exercise. Then
you will write code to combine multiple search results and sort them
by their relevance to the search terms.


\section{Crawler solution}
\label{crawler-solution}

First, let's go over our solution to the previous exercise. I provided an
outline of \java{WikiCrawler}; your job was to fill in \java{crawl}.
As a reminder, here are the fields in the \java{WikiCrawler} class:

\index{WikiCrawler}

\begin{verbatim}
public class WikiCrawler {
    // keeps track of where we started
    private final String source;

    // the index where the results go
    private JedisIndex index;

    // queue of URLs to be indexed
    private Queue<String> queue = new LinkedList<String>();

    // fetcher used to get pages from Wikipedia
    final static WikiFetcher wf = new WikiFetcher();
}
\end{verbatim}

When we create a \java{WikiCrawler}, we provide \java{source} and
\java{index}. Initially, \java{queue} contains only one element,
\java{source}.

\index{queue}
\index{LinkedList}

Notice that the implementation of \java{queue} is a
\java{LinkedList}, so we can add elements at the end --- and remove
them from the beginning --- in constant time. By assigning a
\java{LinkedList} object to a \java{Queue} variable, we limit
ourselves to using methods in the \java{Queue} interface; specifically,
we'll use \java{offer} to add elements and \java{poll} to remove
them.
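Here's a small example (separate from the crawler code) that
demonstrates the FIFO behavior of \java{offer} and \java{poll}:

```java
import java.util.LinkedList;
import java.util.Queue;

public class QueueDemo {
    public static void main(String[] args) {
        Queue<String> queue = new LinkedList<String>();
        // offer adds at the end of the queue
        queue.offer("first");
        queue.offer("second");
        queue.offer("third");
        // poll removes from the beginning, so elements come out
        // in the order they went in (FIFO)
        System.out.println(queue.poll());  // first
        System.out.println(queue.poll());  // second
        System.out.println(queue.size());  // 1
    }
}
```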

\index{constant time}

Here's my implementation of \java{WikiCrawler.crawl}:

\begin{verbatim}
    public String crawl(boolean testing) throws IOException {
        if (queue.isEmpty()) {
            return null;
        }
        String url = queue.poll();
        System.out.println("Crawling " + url);

        if (!testing && index.isIndexed(url)) {
            System.out.println("Already indexed.");
            return null;
        }

        Elements paragraphs;
        if (testing) {
            paragraphs = wf.readWikipedia(url);
        } else {
            paragraphs = wf.fetchWikipedia(url);
        }
        index.indexPage(url, paragraphs);
        queueInternalLinks(paragraphs);
        return url;
    }
\end{verbatim}

Most of the complexity in this method is there to make it easier to
test. Here's the logic:

\begin{itemize}

\item
  If the queue is empty, it returns \java{null} to indicate that it
  did not index a page.

\item
  Otherwise it removes and stores the next URL from the queue.

\item
  If the URL has already been indexed, \java{crawl} doesn't index it
  again, unless it's in testing mode.

\item
  Next it reads the contents of the page: if it's in testing mode, it
  reads from a file; otherwise it reads from the Web.

\item
  It indexes the page.

\item
  It parses the page and adds internal links to the queue.

\item
  Finally, it returns the URL of the page it indexed.

\end{itemize}

I presented an implementation of \java{Index.indexPage} in
Section~\ref{redis-indexer}. So the only new method is
\java{WikiCrawler.queueInternalLinks}.

\index{Index}

I wrote two versions of this method with different parameters: one
takes an \java{Elements} object containing one DOM tree per
paragraph; the other takes an \java{Element} object that contains a
single paragraph.

\index{Element}

The first version just loops through the paragraphs. The second version
does the real work.

\begin{verbatim}
    void queueInternalLinks(Elements paragraphs) {
        for (Element paragraph: paragraphs) {
            queueInternalLinks(paragraph);
        }
    }

    private void queueInternalLinks(Element paragraph) {
        Elements elts = paragraph.select("a[href]");
        for (Element elt: elts) {
            String relURL = elt.attr("href");

            if (relURL.startsWith("/wiki/")) {
                String absURL = elt.attr("abs:href");
                queue.offer(absURL);
            }
        }
    }
\end{verbatim}


To determine whether a link is ``internal,'' we check whether the URL
starts with ``/wiki/''. This might include some pages we don't want to
index, like meta-pages about Wikipedia. And it might exclude some pages
we want, like links to pages in non-English languages. But this simple
test is good enough to get started.
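As a sketch, the test can be factored into a helper method (the name
\java{isInternal} is hypothetical; in \java{WikiCrawler} the check is
inline):

```java
public class LinkFilter {
    // hypothetical helper; returns true if a relative URL looks like
    // a link to another Wikipedia page
    static boolean isInternal(String relURL) {
        return relURL.startsWith("/wiki/");
    }

    public static void main(String[] args) {
        System.out.println(isInternal("/wiki/Java_(programming_language)"));  // true
        System.out.println(isInternal("https://example.com/wiki/Java"));      // false
        // a meta-page slips through, illustrating the caveat above
        System.out.println(isInternal("/wiki/Wikipedia:Contents"));           // true
    }
}
```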

\index{Wikipedia}

That's all there is to it. This exercise doesn't have a lot of new material;
it is mostly a chance to bring the pieces together.


\section{Information retrieval}
\label{information-retrieval}

\index{information retrieval}

The next phase of this project is to implement a search tool. The pieces
we'll need include:

\begin{enumerate}

\item
  An interface where users can provide search terms and view results.

\item
  A lookup mechanism that takes each search term and returns the pages
  that contain it.

\item
  Mechanisms for combining search results from multiple search terms.

\item
  Algorithms for ranking and sorting search results.

\end{enumerate}

The general term for processes like this is ``information retrieval'',
which you can read more about at 
\url{http://thinkdast.com/infret}.

In this exercise, we'll focus on steps 3 and 4. We've already built a
simple version of step 2. If you are interested in building Web
applications, you might consider working on step 1.


\section{Boolean search}
\label{boolean-search}

\index{boolean search}

Most search engines can perform ``boolean searches'', which means you
can combine the results from multiple search terms using boolean logic.
For example:

\begin{itemize}

\item
  The search ``java AND programming'' might return only pages that
  contain both search terms: ``java'' and ``programming''.

\item
  ``java OR programming'' might return pages that contain either term
  but not necessarily both.

\item
  ``java -indonesia'' might return pages that contain ``java'' and do
  not contain ``indonesia''.

\end{itemize}

Expressions like these that contain search terms and operators are
called ``queries''.

\index{query}

When applied to search results, the boolean operators \java{AND},
\java{OR}, and \java{-} correspond to the set operations
\java{intersection}, \java{union}, and \java{difference}. For
example, suppose

\begin{itemize}

\item
  \java{s1} is the set of pages containing ``java'',

\item
  \java{s2} is the set of pages containing ``programming'', and

\item
  \java{s3} is the set of pages containing ``indonesia''.

\end{itemize}

In that case:

\begin{itemize}

\item
  The intersection of \java{s1} and \java{s2} is the set of pages
  containing ``java'' AND ``programming''.

\item
  The union of \java{s1} and \java{s2} is the set of pages
  containing ``java'' OR ``programming''.

\item
  The difference of \java{s1} and \java{s3} is the set of pages
  containing ``java'' and not ``indonesia''.
\end{itemize}
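These operations correspond directly to methods in \java{java.util.Set}:
\java{retainAll}, \java{addAll}, and \java{removeAll} implement
intersection, union, and difference. Here's a sketch using made-up page
names in place of URLs (a \java{TreeSet} keeps the results in sorted
order, which makes the output deterministic):

```java
import java.util.Arrays;
import java.util.Set;
import java.util.TreeSet;

public class SetOps {
    static Set<String> intersection(Set<String> a, Set<String> b) {
        Set<String> result = new TreeSet<String>(a);
        result.retainAll(b);  // AND
        return result;
    }

    static Set<String> union(Set<String> a, Set<String> b) {
        Set<String> result = new TreeSet<String>(a);
        result.addAll(b);     // OR
        return result;
    }

    static Set<String> difference(Set<String> a, Set<String> b) {
        Set<String> result = new TreeSet<String>(a);
        result.removeAll(b);  // minus
        return result;
    }

    public static void main(String[] args) {
        // hypothetical sets of pages: s1 contains "java",
        // s3 contains "indonesia"
        Set<String> s1 = new TreeSet<String>(Arrays.asList("p1", "p2", "p3"));
        Set<String> s3 = new TreeSet<String>(Arrays.asList("p3", "p5"));
        System.out.println(intersection(s1, s3));  // [p3]
        System.out.println(union(s1, s3));         // [p1, p2, p3, p5]
        System.out.println(difference(s1, s3));    // [p1, p2]
    }
}
```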

In the next section you will write a method to implement these operations.

\index{intersection}
\index{union}
\index{difference}
\index{set operations}


\section{Exercise 13}
\label{exercise13}

In the repository for this book
you'll find the source files for this exercise:

\begin{itemize}

\item
  \java{WikiSearch.java}, which defines an object that contains search
  results and performs operations on them.

\item
  \java{WikiSearchTest.java}, which contains test code for
  \java{WikiSearch}.

\item
  \java{Card.java}, which demonstrates how to use the \java{sort}
  method in \java{java.util.Collections}.

\end{itemize}

You will also find some of the helper classes we've used in previous
exercises.

\index{WikiSearch}
\index{helper class}

Here's the beginning of the \java{WikiSearch} class definition:

\begin{verbatim}
public class WikiSearch {

    // map from URLs that contain the term(s) to relevance score
    private Map<String, Integer> map;

    public WikiSearch(Map<String, Integer> map) {
        this.map = map;
    }

    public Integer getRelevance(String url) {
        Integer relevance = map.get(url);
        return relevance==null ? 0: relevance;
    }
}
\end{verbatim}

A \java{WikiSearch} object contains a map from URLs to their relevance
score. In the context of information retrieval, a ``relevance score'' is
a number intended to indicate how well a page meets the needs of the
user as inferred from the query. There are many ways to construct a
relevance score, but most of them are based on ``term frequency'', which
is the number of times the search terms appear on the page. A common
relevance score is called TF-IDF, which stands for ``term frequency --
inverse document frequency''.  You can read more about it at
\url{http://thinkdast.com/tfidf}.
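One common form of the TF-IDF score (there are several variants) is
%
\[ \mathrm{tfidf}(t, d) = \mathrm{tf}(t, d) \cdot \log \frac{N}{\mathrm{df}(t)} \]
%
where $\mathrm{tf}(t, d)$ is the number of times term $t$ appears in
document $d$, $N$ is the number of documents in the index, and
$\mathrm{df}(t)$ is the number of documents that contain $t$. The
logarithmic factor gives less weight to terms that appear in many
documents, since they do little to distinguish one page from another.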

\index{relevance}
\index{term frequency}
\index{inverse document frequency}
\index{TF-IDF}

You'll have the option to implement TF-IDF later, but we'll start with
something even simpler, TF:

\begin{itemize}

\item
  If a query contains a single search term, the relevance of a page is
  its term frequency; that is, the number of times the term appears on
  the page.

\item
  For queries with multiple terms, the relevance of a page is the sum of
  the term frequencies; that is, the total number of times any of the
  search terms appears.

\end{itemize}
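Here's a sketch of the TF relevance computation, using a plain
\java{Map} in place of a \java{TermCounter} (the method name
\java{relevance} and the data are made up for illustration):

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class RelevanceDemo {
    // TF relevance: the sum of term frequencies for the query terms
    static int relevance(Map<String, Integer> counts, List<String> query) {
        int total = 0;
        for (String term : query) {
            Integer count = counts.get(term);
            total += (count == null) ? 0 : count;
        }
        return total;
    }

    public static void main(String[] args) {
        // term frequencies for one hypothetical page
        Map<String, Integer> counts = new HashMap<String, Integer>();
        counts.put("java", 5);
        counts.put("programming", 2);
        System.out.println(relevance(counts, Arrays.asList("java")));                  // 5
        System.out.println(relevance(counts, Arrays.asList("java", "programming")));   // 7
        System.out.println(relevance(counts, Arrays.asList("indonesia")));             // 0
    }
}
```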

Now you're ready to start the exercise.
Run \java{ant build} to compile the source files, then run
\java{ant WikiSearchTest}. As usual, it should
fail, because you have work to do.

\index{Ant}

In \java{WikiSearch.java}, fill in the bodies of \java{and},
\java{or}, and \java{minus} so that the relevant tests pass. You
don't have to worry about \java{testSort} yet.

\index{and}
\index{or}
\index{minus}

You can run \java{WikiSearchTest} without using Jedis because it
doesn't depend on the index in your Redis database. But if you want to
run a query against your index, you have to provide a file with
information about your Redis server.  See Section~\ref{hello-jedis}
for details.

\index{JedisMaker}

Run \java{ant JedisMaker} to make sure it is configured to connect to
your Redis server. Then run \java{WikiSearch}, which prints results
from three queries:

\begin{itemize}

\item
  ``java''

\item
  ``programming''

\item
  ``java AND programming''

\end{itemize}

Initially the results will be in no particular order, because
\java{WikiSearch.sort} is incomplete.

\index{sort}
\index{Collections}

Fill in the body of \java{sort} so the results are returned in
increasing order of relevance. I suggest you use the \java{sort}
method provided by \java{java.util.Collections}, which sorts any kind of
\java{List}. You can read the documentation at
\url{http://thinkdast.com/collections}.

There are two versions of \java{sort}:

\begin{itemize}

\item
  The one-parameter version takes a list and sorts the elements using
  the \java{compareTo} method, so the elements have to be
  \java{Comparable}.

\item
  The two-parameter version takes a list of any object type and a
  \java{Comparator}, which is an object that provides a
  \java{compare} method that compares elements.

\end{itemize}

\index{Comparable}
\index{Comparator}

If you are not familiar with the \java{Comparable} and
\java{Comparator} interfaces, I explain them in the next section.


\section{{\tt Comparable} and {\tt Comparator}}
\label{comparable-and-comparator}

\index{Card}

The repository for this book includes \java{Card.java}, which
demonstrates two ways to sort a list of \java{Card} objects. Here's
the beginning of the class definition:

\begin{verbatim}
public class Card implements Comparable<Card> {

    private final int rank;
    private final int suit;

    public Card(int rank, int suit) {
        this.rank = rank;
        this.suit = suit;
    }
\end{verbatim}

A \java{Card} object has two integer fields, \java{rank} and
\java{suit}. \java{Card} implements
\java{Comparable<Card>}, which means that it
provides \java{compareTo}:

\begin{verbatim}
    public int compareTo(Card that) {
        if (this.suit < that.suit) {
            return -1;
        }
        if (this.suit > that.suit) {
            return 1;
        }
        if (this.rank < that.rank) {
            return -1;
        }
        if (this.rank > that.rank) {
            return 1;
        }
        return 0;
    }
\end{verbatim}

\index{compareTo}

The specification of \java{compareTo} indicates that it should return
a negative number if \java{this} is considered less than
\java{that}, a positive number if it is considered greater, and 0 if
they are considered equal.

If you use the one-parameter version of \java{Collections.sort}, it
uses the \java{compareTo} method provided by the elements to sort
them. To demonstrate, we can make a list of 52 cards like this:

\begin{verbatim}
    public static List<Card> makeDeck() {
        List<Card> cards = new ArrayList<Card>();
        for (int suit = 0; suit <= 3; suit++) {
            for (int rank = 1; rank <= 13; rank++) {
                Card card = new Card(rank, suit);
                cards.add(card);
            }
        }
        return cards;
    }
\end{verbatim}

And sort them like this:

\begin{verbatim}
        Collections.sort(cards);
\end{verbatim}

This version of \java{sort} puts the elements in what's called their
``natural order'' because it's determined by the objects themselves.

\index{natural order}
\index{Comparator}
\index{compare}

But it is possible to impose a different ordering by providing a
\java{Comparator} object. For example, the natural order of
\java{Card} objects treats Aces as the lowest rank, but in some card
games they have the highest rank. We can define a \java{Comparator}
that considers ``Aces high'', like this:

\begin{verbatim}
        Comparator<Card> comparator = new Comparator<Card>() {
            @Override
            public int compare(Card card1, Card card2) {
                if (card1.getSuit() < card2.getSuit()) {
                    return -1;
                }
                if (card1.getSuit() > card2.getSuit()) {
                    return 1;
                }
                int rank1 = getRankAceHigh(card1);
                int rank2 = getRankAceHigh(card2);

                if (rank1 < rank2) {
                    return -1;
                }
                if (rank1 > rank2) {
                    return 1;
                }
                return 0;
            }

            private int getRankAceHigh(Card card) {
                int rank = card.getRank();
                if (rank == 1) {
                    return 14;
                } else {
                    return rank;
                }
            }
        };
\end{verbatim}

This code defines an anonymous class that implements \java{compare},
as required. Then it creates an instance of the newly-defined, unnamed
class. If you are not familiar with anonymous classes in Java, you can
read about them at \url{http://thinkdast.com/anonclass}.

\index{anonymous class}

Using this \java{Comparator}, we can invoke \java{sort} like this:

\begin{verbatim}
        Collections.sort(cards, comparator);
\end{verbatim}

In this ordering, the Ace of Spades is considered the highest card in
the deck; the two of Clubs is the lowest.

\index{ordering}

The code in this section is in \java{Card.java} if you want to
experiment with it. As an exercise, you might want to write a comparator
that sorts by \java{rank} first and then by \java{suit}, so that all the
Aces are together, all the twos, and so on.


\section{Extensions}
\label{extensions}

\index{TF-IDF}
\index{relevance}
\index{snippet}
\index{Heroku}

If you get a basic version of this exercise working, you might want to work
on these optional exercises:

\begin{itemize}

\item Read about TF-IDF at \url{http://thinkdast.com/tfidf}
  and implement it. You might have to modify \java{JedisIndex} to
  compute document frequencies; that is, the number of pages in the
  index on which each term appears.

\item For queries with more than one search term, the total relevance for
  each page is currently the sum of the relevance for each term. Think
  about when this simple version might not work well, and try out some
  alternatives.

\item Build a user interface that allows users to enter queries with
  boolean operators. Parse the queries, generate the results, then
  sort them by relevance and display the highest-scoring
  URLs. Consider generating ``snippets'' that show where the search
  terms appeared on the page. If you want to make a Web application
  for your user interface, consider using Heroku as a simple option
  for developing and deploying Web applications using Java.  See
  \url{http://thinkdast.com/heroku}.

\end{itemize}



\chapter{Sorting}

Computer science departments have an unhealthy obsession with sort
algorithms. Based on the amount of time CS students spend on the topic,
you would think that choosing sort algorithms is the cornerstone of
modern software engineering. Of course, the reality is that software
developers can go years, or entire careers, without thinking about how
sorting works. For almost all applications, they use whatever
general-purpose algorithm is provided by the language or libraries they
use. And usually that's just fine.

\index{sorting}

So if you skip this chapter and learn nothing about sort algorithms,
you can still be an excellent developer. But there are a few reasons
you might want to do it anyway:

\begin{enumerate}

\item
  Although there are general-purpose algorithms that work well for the
  vast majority of applications, there are two special-purpose
  algorithms you might need to know about: radix sort and bounded heap
  sort.

\item
  One sort algorithm, merge sort, makes an excellent teaching example
  because it demonstrates an important and useful strategy for
  algorithm design, called ``divide-conquer-glue''. Also, when we
  analyze its performance, you will learn about an order of growth we
  have not seen before, {\bf linearithmic}. Finally, some of the most
  widely-used algorithms are hybrids that include elements of merge
  sort.

\item
  One other reason to learn about sort algorithms is that technical
  interviewers love to ask about them. If you want to get hired, it
  helps if you can demonstrate CS cultural literacy.

\end{enumerate}

So, in this chapter we'll analyze insertion sort, you will implement merge
sort, I'll tell you about radix sort, and you will write a simple
version of a bounded heap sort.

\index{divide-conquer-glue}
\index{linearithmic time}


\section{Insertion sort}
\label{insertion-sort}

We'll start with insertion sort, mostly because it is simple to describe
and implement. It is not very efficient, but it has some redeeming
qualities, as we'll see.

\index{insertion sort}

Rather than explain the algorithm here, I suggest you read the
insertion sort Wikipedia page at
\url{http://thinkdast.com/insertsort}, which includes
pseudocode and animated examples. Come back when you get the general
idea.

Here's an implementation of insertion sort in Java:

\begin{verbatim}
public class ListSorter<T> {

    public void insertionSort(List<T> list, Comparator<T> comparator) {

        for (int i=1; i < list.size(); i++) {
            T elt_i = list.get(i);
            int j = i;
            while (j > 0) {
                T elt_j = list.get(j-1);
                if (comparator.compare(elt_i, elt_j) >= 0) {
                    break;
                }
                list.set(j, elt_j);
                j--;
            }
            list.set(j, elt_i);
        }
    }
}
\end{verbatim}

I define a class, \java{ListSorter}, as a container for sort
algorithms. By using the type parameter, \java{T}, we can write
methods that work on lists containing any object type.

\index{ListSorter}
\index{type parameter}

\java{insertionSort} takes two parameters, a \java{List} of any kind
and a \java{Comparator} that knows how to compare type \java{T}
objects. It sorts the list ``in place'', which means it modifies the
existing list and does not have to allocate any new space.

\index{List}

The following example shows how to call this method with a \java{List} of
\java{Integer} objects:

\begin{verbatim}
        List<Integer> list = new ArrayList<Integer>(
            Arrays.asList(3, 5, 1, 4, 2));

        Comparator<Integer> comparator = new Comparator<Integer>() {
            @Override
            public int compare(Integer elt1, Integer elt2) {
                return elt1.compareTo(elt2);
            }
        };

        ListSorter<Integer> sorter = new ListSorter<Integer>();
        sorter.insertionSort(list, comparator);
        System.out.println(list);
\end{verbatim}

\java{insertionSort} has two nested loops, so you might guess that
its runtime is quadratic. In this case, that turns out to be correct,
but before you jump to that conclusion, you have to check that the
number of times each loop runs is proportional to $n$, the size
of the array.

\index{linear time}

The outer loop iterates from 1 to \java{list.size()}, so the number of
times it runs is linear in the size of the list, $n$.
The inner loop runs from \java{i} down toward 0, so in the worst case
the number of times it runs is also linear in $n$.
Therefore, the total number of times the inner loop runs is quadratic.

\index{quadratic time}

If you are not sure about that, here's the argument:

\begin{itemize}

\item
  The first time through, $i=1$ and the inner loop runs at most
  once.

\item
  The second time, $i=2$ and the inner loop runs at most twice.

\item
  The last time, $i=n-1$ and the inner loop runs at most
  $n-1$ times.

\end{itemize}

So the total number of times the inner loop runs is the sum of the
series $1, 2, \ldots , n-1$, which is $n (n-1) / 2$. And the
leading term of that expression (the one with the highest exponent) is
$n^2$.
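We can check this arithmetic empirically. The following sketch
(separate from \java{ListSorter}; it works on a plain \java{int} array
for simplicity) counts how many times the inner loop body runs when we
insertion-sort a reverse-sorted array, which is the worst case:

```java
public class InnerLoopCount {
    // count inner-loop executions when insertion-sorting a
    // reverse-sorted array of size n (the worst case)
    static int countShifts(int n) {
        int[] a = new int[n];
        for (int i = 0; i < n; i++) {
            a[i] = n - i;  // n, n-1, ..., 1
        }
        int count = 0;
        for (int i = 1; i < n; i++) {
            int elt = a[i];
            int j = i;
            while (j > 0 && a[j-1] > elt) {
                a[j] = a[j-1];  // shift one element to the right
                j--;
                count++;
            }
            a[j] = elt;
        }
        return count;
    }

    public static void main(String[] args) {
        for (int n : new int[] {4, 10, 100}) {
            System.out.println(n + ": " + countShifts(n)
                               + " vs " + n * (n - 1) / 2);
        }
    }
}
```

For each $n$, the count matches $n(n-1)/2$ exactly.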

\index{linear time}

In the worst case, insertion sort is quadratic. However:

\begin{enumerate}

\item
  If the elements are already sorted, or nearly so, insertion sort is
  linear. Specifically, if each element is no more than $k$
  locations away from where it should be, the inner loop never runs more
  than $k$ times, and the total runtime is $O(kn)$.

\item
  Because the implementation is simple, the overhead is low; that is,
  although the runtime is $a n^2$, the coefficient of the leading
  term, $a$, is probably small.

\end{enumerate}

So if we know that the array is nearly sorted, or is not very big,
insertion sort might be a good choice. But for large arrays, we can
do better. In fact, much better.


\section{Exercise 14}
\label{exercise14}

Merge sort is one of several algorithms whose runtime is better than
quadratic. Again, rather than explaining the algorithm here, I suggest
you read about it on Wikipedia at
\url{http://thinkdast.com/mergesort}.  Once you get the idea, come
back and you can test your understanding by writing an implementation.

\index{merge sort}
\index{quadratic time}

In the repository for this book, you'll find the source files for this
exercise:

\begin{itemize}

\item
  \java{ListSorter.java}

\item
  \java{ListSorterTest.java}

\end{itemize}

Run \java{ant build} to compile the source files, then run
\java{ant ListSorterTest}. As usual, it should
fail, because you have work to do.

\index{Ant}
\index{ListSorter}

In \java{ListSorter.java}, I've provided an outline of two methods,
\java{mergeSortInPlace} and \java{mergeSort}:

\begin{verbatim}
    public void mergeSortInPlace(List<T> list, Comparator<T> comparator) {
        List<T> sorted = mergeSort(list, comparator);
        list.clear();
        list.addAll(sorted);
    }

    private List<T> mergeSort(List<T> list, Comparator<T> comparator) {
       // TODO: fill this in!
       return null;
    }
\end{verbatim}

These two methods do the same thing but provide different interfaces.
\java{mergeSort} takes a list and returns a new list with the same
elements sorted in ascending order. \java{mergeSortInPlace} is a
\java{void} method that modifies an existing list.

\index{mergeSort}

Your job is to fill in \java{mergeSort}. Before you write a fully
recursive version of merge sort, start with something like this:

\begin{enumerate}

\item
  Split the list in half.

\item
  Sort the halves using \java{Collections.sort} or
  \java{insertionSort}.

\item
  Merge the sorted halves into a complete sorted list.

\end{enumerate}

This will give you a chance to debug the merge code without dealing with
the complexity of a recursive method.
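For reference, here is one way the merge step might look. This is a
sketch, not the solution in the repository, and the helper name
\java{merge} is my choice:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class MergeExample {
    // Merge two sorted lists into a new sorted list.
    public static <T> List<T> merge(List<T> first, List<T> second,
                                    Comparator<T> comparator) {
        List<T> result = new ArrayList<T>();
        int i = 0;
        int j = 0;
        while (i < first.size() && j < second.size()) {
            // take the smaller of the two front elements
            if (comparator.compare(first.get(i), second.get(j)) <= 0) {
                result.add(first.get(i++));
            } else {
                result.add(second.get(j++));
            }
        }
        // at most one of these loops actually adds anything
        while (i < first.size()) {
            result.add(first.get(i++));
        }
        while (j < second.size()) {
            result.add(second.get(j++));
        }
        return result;
    }
}
```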

\index{base case}
\index{recursion}

Next, add a base case (see
\url{http://thinkdast.com/basecase}). If you are
given a list with only one element, you could return it immediately,
since it is already sorted, sort of. Or if the length of the list is
below some threshold, you could sort it using \java{Collections.sort}
or \java{insertionSort}. Test the base case before you proceed.

Finally, modify your solution so it makes two recursive calls to sort
the halves of the list. When you get it working, \java{testMergeSort}
and \java{testMergeSortInPlace} should pass.


\section{Analysis of merge sort}
\label{analysis-of-merge-sort}

\index{analysis}

To classify the runtime of merge sort, it helps to think in terms of
levels of recursion and how much work is done on each level. Suppose
we start with a list that contains $n$ elements. Here are the steps of
the algorithm:

\begin{enumerate}

\item
  Make two new arrays and copy half of the elements into each.

\item
  Sort the two halves.

\item
  Merge the halves.

\end{enumerate}

Figure~\ref{fig-sort1}
shows these steps.

\begin{figure}
\centering
\includegraphics[height=2.5in]{figs/merge_sort1.pdf}
\caption{Representation of merge sort showing one level of recursion.}
\label{fig-sort1}
\end{figure}

\index{linear time}

The first step copies each of the elements once, so it is linear. The
third step also copies each element once, so it is also linear. Now we
need to figure out the complexity of step 2. To do that, it helps to
look at a different picture of the computation, which shows the levels
of recursion, as in Figure~\ref{fig-sort2}.

\begin{figure}
\centering
\includegraphics[height=2in]{figs/merge_sort2.pdf}
\caption{Representation of merge sort showing all levels of recursion.}
\label{fig-sort2}
\end{figure}

At the top level, we have $1$ list with $n$ elements. 
For simplicity, let's assume $n$ is a power of 2.
At the next level there are $2$ lists with $n/2$ elements.
Then $4$ lists with $n/4$ elements, and so on until we get
to $n$ lists with $1$ element.

On every level we have a total of $n$ elements. On the way down,
we have to split the arrays in half, which takes time proportional to
$n$ on every level. On the way back up, we have to merge a total
of $n$ elements, which is also linear.

If the number of levels is $h$, the total amount of work for the
algorithm is $O(nh)$. So how many levels are there? There are two
ways to think about that:

\begin{enumerate}

\item
  How many times do we have to cut $n$ in half to get to 1?

\item
   Or, how many times do we have to double $1$ before we get to $n$?

\end{enumerate}

Another way to ask the second question is ``What power of 2 is
$n$?''

$2^h = n$

Taking the $\log_2$ of both sides yields

$h = \log_2 n$

So the total time is $O(n \log n)$. I didn't bother to write the
base of the logarithm because logarithms with different bases differ by
a constant factor, so all logarithms are in the same order of growth.

\index{logarithm}
\index{linearithmic time}
\index{n log n}

Algorithms in $O(n \log n)$ are sometimes called
``linearithmic'', but most people just say ``n log n''.

\index{comparison sort}

It turns out that $O(n \log n)$ is the theoretical lower bound for
sort algorithms that work by comparing elements to each other. That
means there is no ``comparison sort'' whose order of growth is better
than $n \log n$.  See \url{http://thinkdast.com/compsort}.

But as we'll see in the next section, there are non-comparison sorts
that take linear time!

\index{linear time}


\section{Radix sort}
\label{radix-sort}

\index{radix sort}
\index{Obama, Barack}
\index{Schmidt, Eric}
\index{Google}
\index{bubble sort}

During the 2008 United States Presidential Campaign, candidate Barack
Obama was asked to perform an impromptu algorithm analysis when he
visited Google. Chief executive Eric Schmidt jokingly asked him for
``the most efficient way to sort a million 32-bit integers.'' Obama
had apparently been tipped off, because he quickly replied, ``I think
the bubble sort would be the wrong way to go.'' You can watch the
video at \url{http://thinkdast.com/obama}.

Obama was right: bubble sort is conceptually simple but its runtime is
quadratic; and even among quadratic sort algorithms, its performance
is not very good.  See \url{http://thinkdast.com/bubble}.

\index{quadratic time}

The answer Schmidt was probably looking for is ``radix sort'', which is
a {\bf non-comparison} sort algorithm that works if the size of the
elements is bounded, like a 32-bit integer or a 20-character string.

\index{non-comparison sort}

To see how this works, imagine you have a stack of index cards where
each card contains a three-letter word. Here's how you could sort the
cards:

\begin{enumerate}

\item
  Make one pass through the cards and divide them into buckets based on
  the first letter. So words starting with \java{a} should be
  in one bucket, followed by words starting with \java{b}, and so on.

\item
  Divide each bucket again based on the second letter. So words starting
  with \java{aa} should be together, followed by words starting with
  \java{ab}, and so on. Of course, not all buckets will be full, but
  that's OK.

\item
  Divide each bucket again based on the third letter.

\end{enumerate}

At this point each bucket contains one element, and the buckets are
sorted in ascending order. Figure~\ref{fig-sort3}
shows an example with
three-letter words.

\begin{figure}
\centering
\includegraphics[height=2.0in]{figs/radix_sort1.pdf}
\caption{Example of radix sort with three-letter words.}
\label{fig-sort3}
\end{figure}

The top row shows the unsorted words. The second row shows what the
buckets look like after the first pass. The words in each bucket begin
with the same letter.

After the second pass, the words in each bucket begin with the same two
letters. After the third pass, there can be only one word in each
bucket, and the buckets are in order.

During each pass, we iterate through the elements and add them to
buckets. As long as the buckets allow addition in constant time, each
pass is linear.

\index{constant time}
\index{linear time}

The number of passes, which I'll call $w$, depends on the ``width''
of the words, but it doesn't depend on the number of words, $n$.
So the order of growth is $O(wn)$, which is linear in $n$.
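The passes described above can be sketched in code. This minimal
version is the least-significant-digit variant, which makes $w$ stable
bucketing passes from the last letter to the first instead of
subdividing buckets; the class and method names are mine, and it
assumes fixed-width lowercase words:

```java
import java.util.ArrayList;
import java.util.List;

public class RadixExample {
    // Sort fixed-width lowercase words with `width` stable
    // bucketing passes, from the last letter to the first.
    public static List<String> radixSort(List<String> words, int width) {
        List<String> current = new ArrayList<String>(words);
        for (int pos = width - 1; pos >= 0; pos--) {
            // one bucket per letter 'a' through 'z'
            List<List<String>> buckets = new ArrayList<List<String>>();
            for (int i = 0; i < 26; i++) {
                buckets.add(new ArrayList<String>());
            }
            for (String word : current) {
                buckets.get(word.charAt(pos) - 'a').add(word);
            }
            // concatenate the buckets in order; because each pass
            // is stable, earlier passes break ties correctly
            current = new ArrayList<String>();
            for (List<String> bucket : buckets) {
                current.addAll(bucket);
            }
        }
        return current;
    }
}
```

Each pass visits every word once and appends to a bucket in constant
time, so the total runtime is $O(wn)$, as claimed.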

There are many variations on radix sort, and many ways to implement
each one. You can read more about them at
\url{http://thinkdast.com/radix}. As an optional
exercise, consider writing a version of radix sort.


\section{Heap sort}
\label{heap-sort}

\index{heap sort}
\index{bounded heap}

In addition to radix sort, which applies when the things you want to
sort are bounded in size, there is one other special-purpose sorting
algorithm you might encounter: bounded heap sort. Bounded heap sort is
useful if you are working with a very large dataset and you want to
report the ``Top 10'' or ``Top k'' for some value of $k$ much
smaller than $n$.

For example, suppose you are monitoring a Web service that handles a
billion transactions per day. At the end of each day, you want to
report the $k$ biggest transactions (or slowest, or any other
superlative). One option is to store all transactions, sort them at
the end of the day, and select the top $k$. That would take time
proportional to $n \log n$, and it would be very slow because we
probably can't fit a billion transactions in the memory of a single
program. We would have to use an ``out of core'' sort algorithm. You
can read about external sorting at \url{http://thinkdast.com/extsort}.

\index{out of core algorithm}
\index{external sorting}

Using a bounded heap, we can do much better! Here's how we will
proceed:

\begin{enumerate}

\item
  I'll explain (unbounded) heap sort.

\item
  You'll implement it.

\item
  I'll explain bounded heap sort and analyze it.

\end{enumerate}

\index{heap}
\index{binary search tree}
\index{BST}

To understand heap sort, you have to understand a heap, which is a data
structure similar to a binary search tree (BST). Here are the differences:

\begin{itemize}

\item
  In a BST, every node, \java{x}, has the ``BST property'': all nodes
  in the left subtree of \java{x} are less than \java{x} and all
  nodes in the right subtree are greater than \java{x}.

\item
  In a heap, every node, \java{x}, has the ``heap property'': all
  nodes in both subtrees of \java{x} are greater than \java{x}.

\item
  Heaps are like balanced BSTs; when you add or remove elements, they
  do some extra work to rebalance the tree.  As a result, they can
  be implemented efficiently using an array of elements.

\end{itemize}

The smallest element in a heap is always at the root, so we can find
it in constant time. Adding and removing elements from a heap takes
time proportional to the height of the tree $h$. And because the heap
is always balanced, $h$ is proportional to $\log n$.  You can read
more about heaps at \url{http://thinkdast.com/heap}.
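Because a heap is always a complete, balanced tree, the array
representation needs no references between nodes; a node's parent and
children can be found by index arithmetic. This is the standard
scheme, not code from the repository:

```java
public class HeapIndex {
    // For a heap stored in an array, the node at index i has:
    public static int parent(int i) {
        return (i - 1) / 2;
    }
    public static int leftChild(int i) {
        return 2 * i + 1;
    }
    public static int rightChild(int i) {
        return 2 * i + 2;
    }
}
```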

\index{heap property}
\index{constant time}
\index{logarithmic time}
\index{PriorityQueue}
\index{offer}
\index{poll}
\index{Queue}

The Java \java{PriorityQueue} is implemented using a heap.
\java{PriorityQueue} provides the methods specified in the
\java{Queue} interface, including \java{offer} and \java{poll}:

\begin{itemize}

\item
  \java{offer}: Adds an element to the queue, updating the heap so
  that every node has the ``heap property''. Takes $\log n$ time.

\item
  \java{poll}: Removes the smallest element in the queue from the root
  and updates the heap. Takes $\log n$ time.

\end{itemize}

Given a \java{PriorityQueue}, you can easily sort a collection of
$n$ elements like this:

\begin{enumerate}

\item
  Add all elements of the collection to a \java{PriorityQueue} using
  \java{offer}.

\item
  Remove the elements from the queue using \java{poll} and add them to
  a \java{List}.

\end{enumerate}

Because \java{poll} returns the smallest element remaining in the
queue, the elements are added to the \java{List} in ascending order.
This way of sorting is called {\bf heap sort}
(see \url{http://thinkdast.com/heapsort}).
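The two steps above can be sketched like this (a minimal version to
show the idea; the class name is mine, and this is not the
repository's solution):

```java
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

public class HeapSortExample {
    // Sort a list by pushing every element onto a heap-backed
    // PriorityQueue and popping them back off in ascending order.
    public static <T> void heapSort(List<T> list,
                                    Comparator<T> comparator) {
        PriorityQueue<T> heap =
            new PriorityQueue<T>(Math.max(1, list.size()), comparator);
        heap.addAll(list);             // n offers, O(n log n)
        list.clear();
        while (!heap.isEmpty()) {
            list.add(heap.poll());     // n polls, O(n log n)
        }
    }
}
```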

\index{heap sort}
\index{linearithmic}
\index{n log n}

Adding $n$ elements to the queue takes $n \log n$ time. So
does removing $n$ elements. So the runtime for heap sort is
$O(n \log n)$.

\index{ListSorter}

In the repository for this book, in \java{ListSorter.java} you'll find
the outline of a method called \java{heapSort}. Fill it in and then
run \java{ant ListSorterTest} to confirm that it works.


\section{Bounded heap}
\label{bounded-heap}

\index{bounded heap}

A bounded heap is a heap that is limited to contain at most $k$
elements. If you have $n$ elements, you can keep track of the
$k$ largest elements like this:

Initially, the heap is empty.  For each element, \java{x}:

\begin{itemize}

\item
  Branch 1: If the heap is not full, add \java{x} to the heap.

\item
  Branch 2: If the heap is full, compare \java{x} to the
  \emph{smallest} element in the heap. If \java{x} is smaller, it
  cannot be one of the largest $k$ elements, so you can discard
  it.

\item
  Branch 3: If the heap is full and \java{x} is greater than the
  smallest element in the heap, remove the smallest element from the
  heap and add \java{x}.

\end{itemize}
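Here is how the three branches might look in code, using a
\java{PriorityQueue} as the min-heap. This is only a sketch of the
idea, not the repository's solution:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

public class TopKExample {
    // Keep the k largest elements using a min-heap of size at most k.
    public static <T> List<T> topK(int k, List<T> list,
                                   Comparator<T> comparator) {
        PriorityQueue<T> heap =
            new PriorityQueue<T>(Math.max(1, k), comparator);
        for (T x : list) {
            if (heap.size() < k) {
                heap.offer(x);                         // Branch 1
            } else if (comparator.compare(x, heap.peek()) > 0) {
                heap.poll();                           // Branch 3:
                heap.offer(x);                         // evict smallest
            }
            // Branch 2: otherwise discard x
        }
        List<T> result = new ArrayList<T>();
        while (!heap.isEmpty()) {
            result.add(heap.poll());   // comes out in ascending order
        }
        return result;
    }
}
```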

\index{k largest elements}

Using a heap with the smallest element at the top, we can keep track of
the largest $k$ elements. Let's analyze the performance of this
algorithm. For each element, we perform one of:

\begin{itemize}

\item
  Branch 1: Adding an element to the heap is $O(\log k)$.

\item
  Branch 2: Finding the smallest element in the heap is $O(1)$.

\item
  Branch 3: Removing the smallest element is $O(\log k)$. Adding
  \java{x} is also $O(\log k)$.

\end{itemize}

In the worst case, if the elements appear in ascending order, we always
run Branch 3. In that case, the total time to process $n$
elements is $O(n \log k)$, which is linear in $n$.

\index{linear time}

In \java{ListSorter.java} you'll find the outline of a method called
\java{topK} that takes a \java{List}, a \java{Comparator}, and an
integer $k$. It should return the $k$ largest elements in the
\java{List} in ascending order. Fill it in and then run \java{ant
  ListSorterTest} to confirm that it works.

\index{Comparator}


\section{Space complexity}
\label{space-complexity}

\index{space complexity}
\index{analysis}

Until now we have talked a lot about runtime analysis, but for many
algorithms we are also concerned about space. For example, one of the
drawbacks of merge sort is that it makes copies of the data. In our
implementation, the total amount of space it allocates is
$O(n \log n)$. With a more clever implementation, you can get the
space requirement down to $O(n)$.

In contrast, insertion sort doesn't copy the data because it sorts the
elements in place. It uses temporary variables to compare two elements
at a time, and it uses a few other local variables. But its space use
doesn't depend on $n$.

Our implementation of heap sort creates a new \java{PriorityQueue} to
store the elements, so the space is $O(n)$; but if you are
allowed to sort the list in place, you can run heap sort with
$O(1)$ space.

One of the benefits of the bounded heap algorithm you just implemented
is that it only needs space proportional to $k$ (the number of
elements we want to keep), and $k$ is often much smaller than
$n$.

Software developers tend to pay more attention to runtime than space, and
for many applications, that's appropriate. But for large datasets, space
can be just as important or more so. For example:

\begin{enumerate}

\item If a dataset doesn't fit into the memory of one program, the run
  time often increases dramatically, or it might not run at all. If you
  choose an algorithm that needs less space, and that makes it possible
  to fit the computation into memory, it might run much faster. In the
  same vein, a program that uses less space might make better use of
  CPU caches and run
  faster (see \url{http://thinkdast.com/cache}).

\item On a server that runs many programs at the same time, if you can
  reduce the space needed for each program, you might be able to run
  more programs on the same server, which reduces hardware and energy
  costs.

\end{enumerate}

So those are some reasons you should know at least a little bit about
the space needs of algorithms.

\index{cache}
\index{server}


\backmatter
\printindex

%\cleardoublepage

\end{document}
