\usepackage{listings}
\lstset{language=Java,
        basicstyle=\small}

\usepackage{graphicx}
\usepackage{hyperref}
\hypersetup{
  colorlinks=true,
  urlcolor=blue,
  linkcolor=black
}
\mode<article>{\usepackage{fullpage}}

\title{Lecture Three -- Complexity Analysis}
\author{Matt Bone}
\date{\today}

\begin{document}

\mode<article>{\maketitle}
\tableofcontents
\mode<article>{\pagebreak}
\mode<presentation>{\frame{\titlepage}}
\mode<article>{\setlength{\parskip}{.25cm}}

\section{Complexity Analysis}

\subsection{What's An Algorithm?}

\begin{frame}[fragile]
  \mode<presentation>{\frametitle{Algorithms}}

  \mode<presentation>{an \emph{algorithm} ``solve[s] a problem in a finite amount of time''}

\end{frame}

Instead of talking about methods (OOP), functions (functional
programming), procedures, or subroutines, we will now talk about code
more generally, calling small chunks that accomplish something an
\emph{algorithm}.  According to our text, an algorithm ``solve[s] a
problem in a finite amount of time,'' so we are not talking about
daemons or interaction loops that run forever.

\subsection{A Theoretical Model of Computing}
\begin{frame}[fragile]
  \mode<presentation>{\frametitle{Running Time}}
  \mode<presentation>{We need a unit of computation.}
\end{frame}
Our main goal is to analyze algorithms with respect to their
running time.  Now since even basic operations such as addition and
multiplication can take different amounts of time on different 
machines, we have to think about computing in a more theoretical
manner.

So we will think about time and computation in more abstract terms.
We will assume that basic operations such as addition, multiplication,
division, indexing into an array, and function calls take a fixed and
atomic unit of time.  More sophisticated operations like matrix
multiplication will be broken down into their component parts.  Though
something like matrix multiplication may be implemented atomically in
specialized hardware, we will not consider this in our analysis.


\subsection{Running Time}
\begin{frame}[fragile]
  \mode<presentation>{\frametitle{Running Time}}
  Given our model of computation, we want to know how an algorithm's 
  running time changes with respect to its input size.
\end{frame}
Oftentimes an algorithm slows down as its input grows.  Though adding
two numbers together takes the same amount of time regardless of the
size of those numbers (remember, these are atomic, one-unit operations
in our model), we intuitively understand that an algorithm computing
the sum of a list of numbers will take longer when we increase the
size of the list.

But how much longer?  And can we wrap this in consistent and easily
understandable language?
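To make the contrast concrete, here is a minimal sketch (hypothetical class and method names): summing a list visits each element exactly once, so the work grows linearly with the list's length.

```java
public class SumExample {
    // Summing an array visits every element once, so the running
    // time grows linearly with the input size.
    static int sum(int[] values) {
        int total = 0;
        for (int v : values) { // one unit of work per element
            total += v;
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(sum(new int[]{1, 2, 3, 4})); // 10
    }
}
```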

\subsection{Big-O Notation}
\begin{frame}[fragile]
  \mode<presentation>{\frametitle{Big-O Notation}}
  We want to know how functions grow with respect to their input.  Big-O
  notation gives us this information in consistent and analyzable format.

  Formally (from your book, pg 73), a function $f$ is $O(g)$ if for some
  pair of positive constants $C$ and $K$, 

  %\begin{equation}
  \[ f(n) \le Cg(n)\mbox{ for all } n \ge K \]
  %\end{equation}
  
\end{frame}

Big-O notation talks about the growth rates of functions; it gives us
an upper bound on the growth rate of a function. Back to the summing
example, summing two numbers is a constant time operation.  It is not
dependent on the size of the input.  In Big-O this is $O(1)$.
However, summing a list (which requires us to visit every element in
that list) depends directly on the size of that list.  Thus we say it
is linear or $O(n)$ where $n$ is the size of the list.

If we consider a matrix of dimension $n$ and we wish to multiply every
element in that matrix by a certain number, we now have to visit $n*n$
elements.  Our simple multiplication algorithm is therefore dependent
on the square of the input size (since we've defined the input size as
the dimension of the matrix).  Therefore, this simple algorithm is
$O(n^2)$.
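The matrix example can be sketched as follows (hypothetical names); the nested loops visit $n \cdot n$ elements, giving the quadratic behavior described above.

```java
public class ScaleMatrix {
    // Multiply every element of an n-by-n matrix by a factor.
    // The nested loops visit n*n elements, so this algorithm is
    // O(n^2) in the matrix dimension n.
    static void scale(int[][] matrix, int factor) {
        for (int i = 0; i < matrix.length; i++) {
            for (int j = 0; j < matrix[i].length; j++) {
                matrix[i][j] *= factor;
            }
        }
    }

    public static void main(String[] args) {
        int[][] m = {{1, 2}, {3, 4}};
        scale(m, 3);
        System.out.println(m[1][1]); // 12
    }
}
```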



\subsubsection{Dominating Terms -- Qualitative}
\begin{frame}[fragile]
  \mode<presentation>{\frametitle{Dominating Terms -- Qualitative}}
  We want to know what terms dominate:
  \begin{itemize}
    \item $2n^5+340n^2$ is $O(n^5)$
    \item $n^n + n\log{n}$ is $O(n^n)$
    \item $\sqrt{n} + 56\log{n}$ is $O(\sqrt{n})$
  \end{itemize}
\end{frame}
Finding the dominant term is simply a matter of picking out the term
``highest'' in the list on the previous slide.

\subsubsection{Dominating Terms -- Calculus}
\begin{frame}[fragile]
  \mode<presentation>{\frametitle{Dominating Terms -- Calculus}}

  \[\mathop {\lim }\limits_{x \to \infty } \frac{f(x)}{{g(x) }}\]

  \begin{itemize}
    \item if the limit is $0$ then $g(x)$ dominates $f(x)$
    \item if the limit is $\infty$ then  $f(x)$ dominates $g(x)$
    \item if the limit is a finite non-zero constant, then $f(x)$ and $g(x)$ are of the same order
  \end{itemize}
  \end{itemize}
\end{frame}
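As a worked example, applying this limit test to the first pair from
the qualitative slide, with $f(n) = 2n^5 + 340n^2$ and $g(n) = n^5$:

\[ \lim_{n \to \infty} \frac{2n^5 + 340n^2}{n^5}
   = \lim_{n \to \infty} \left( 2 + \frac{340}{n^3} \right) = 2 \]

The limit is a finite non-zero constant, so the two functions are of
the same order, confirming that $2n^5 + 340n^2$ is $O(n^5)$.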

\subsubsection{Orders in order}
\begin{frame}[fragile]
  \mode<presentation>{\frametitle{Orders in order}}
  \begin{enumerate}
    \item$O(1)$
    \item$O(\log{(\log{(n)})})$
    \item$O(\log{n})$
    \item$O(\sqrt{n})$
    \item$O(n)$
    \item$O(n\log{n})$
    \item$O(n^2)$
    \item$O(n^2\log{n})$
    \item$O(n^3)$
    \item$O(n^4)$
    \item$O(2^n)$
    \item$O(n!)$
    \item$O(n^n)$
  \end{enumerate}
\end{frame}
Most (but not all) of the interesting algorithms that we will talk
about fall somewhere in the range of items 3 through 9 on this list.

\begin{frame}[fragile]
  \mode<presentation>{\frametitle{Orders in order}}
  \begin{center}
    \includegraphics[height=8cm]{common_all}
  \end{center}
\end{frame}

\subsubsection{Large Constant Factors}
\begin{frame}[fragile]
  \mode<presentation>{\frametitle{Large Constant Factors}}

  Consider two algorithms that solve the same problem and have the
  following running times:

  \[f(n)=100n^2\]

  \[g(n)=5n^3\]
  
\end{frame}

We can see that for $n<20$, the $g(n)$ algorithm will outperform
$f(n)$.  However, for inputs larger than size 20, the two diverge
rapidly, with $f(n)$, an $O(n^2)$ algorithm, outperforming $g(n)$, an
$O(n^3)$ algorithm (pg 18, \emph{Data Structures and Algorithms}. Aho,
Hopcroft, Ullman. Addison-Wesley, 1983).
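A small sketch (hypothetical class and method names) that checks the crossover at $n=20$ by evaluating both running-time expressions directly:

```java
public class Crossover {
    // Running times from the example: f(n) = 100n^2 and g(n) = 5n^3.
    static long f(long n) { return 100 * n * n; }
    static long g(long n) { return 5 * n * n * n; }

    public static void main(String[] args) {
        // Below the crossover point the cubic algorithm wins...
        System.out.println(g(10) < f(10));  // 5000 < 10000
        // ...at n = 20 the two are equal...
        System.out.println(f(20) == g(20)); // both 40000
        // ...and above it the quadratic algorithm wins, by a
        // rapidly growing margin.
        System.out.println(f(100) < g(100)); // 1000000 < 5000000
    }
}
```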

\begin{frame}[fragile]
  \mode<presentation>{\frametitle{Large Constant Factors}}
  \begin{center}
    \includegraphics[height=8cm]{big_constants}
  \end{center}
\end{frame}

However we should ask ourselves:

\begin{itemize}
  \item Do we really know the size of the inputs? (probably not)
  \item Will the size of the inputs ever get larger? (probably so)
\end{itemize}

So, even in cases like these we will probably end up choosing the
algorithm with the smallest growth factor.

A real world example of this conundrum is calculating the intersection
of two arrays.  A naive $O(n^2)$ algorithm is easily written, but by
sorting the arrays first, we can push this calculation down to
$O(n*log(n))$ albeit with a very large constant factor.  This is one
case where carefully understanding the algorithm's domain could be
highly beneficial.
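A sketch of the sort-first approach (hypothetical names; assumes inputs without duplicates): sorting each array costs $O(n\log{n})$, after which a single merge-style pass finds the common elements in $O(n)$.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class Intersection {
    // Sort both inputs, then walk them together: advance whichever
    // index points at the smaller value, and record a match when the
    // values are equal. Total cost: O(n log n) for the sorts plus
    // O(n) for the pass.
    static List<Integer> intersect(int[] a, int[] b) {
        int[] x = a.clone(), y = b.clone();
        Arrays.sort(x);
        Arrays.sort(y);
        List<Integer> result = new ArrayList<>();
        int i = 0, j = 0;
        while (i < x.length && j < y.length) {
            if (x[i] < y[j]) {
                i++;
            } else if (x[i] > y[j]) {
                j++;
            } else {
                result.add(x[i]);
                i++;
                j++;
            }
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(intersect(new int[]{3, 1, 4}, new int[]{4, 2, 3}));
        // [3, 4]
    }
}
```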
\section{Problems}
\subsection{Easy Problems}
\begin{frame}[fragile]
  \mode<presentation>{\frametitle{Easy Problems}}
  \mode<presentation>{Are there any interesting easy problems?}
\end{frame}

\begin{frame}[fragile]
  \mode<presentation>{\frametitle{Graph of Easy Problems}}
  \begin{center}
    \includegraphics[height=8cm]{common_small}
  \end{center}
\end{frame}

There are actually a surprising number of easy problems.  In general,
we can think of anything running in polynomial time as solvable.  Even
for large polynomials, we're able to throw more processing power at
the problem to make it tractable.

More interesting is the fact that we can find problems that are solved
in constant time.  An example of a problem that (usually) comes in at
around constant time is hashing.
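For instance, membership tests against Java's \texttt{HashSet} run in expected constant time: the key is hashed straight to a bucket index, so the cost does not grow with the number of elements stored.

```java
import java.util.HashSet;
import java.util.Set;

public class HashLookup {
    // A hash lookup computes a bucket index from the key itself,
    // so contains() takes (expected) constant time regardless of
    // how many elements the set holds.
    static Set<String> seen = new HashSet<>();

    public static void main(String[] args) {
        seen.add("alpha");
        seen.add("beta");
        System.out.println(seen.contains("alpha")); // true
        System.out.println(seen.contains("gamma")); // false
    }
}
```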

\subsection{Hard Problems}
\begin{frame}[fragile]
  \mode<presentation>{\frametitle{Interesting Hard Problems}}
  \begin{itemize}
    \item Solving CAPTCHAs
    \item Image processing
    \item Bin Packing
    \item Lots of graph theory (traveling salesman)
  \end{itemize}
\end{frame}

\begin{frame}[fragile]
  \mode<presentation>{\frametitle{Graph of Hard Problems}}
  \begin{center}
    \includegraphics[height=8cm]{common_big}
  \end{center}
\end{frame}

\subsection{Storage Complexity}
\begin{frame}[fragile]
  \mode<presentation>{\frametitle{Storage Complexity}}
  \mode<presentation>{We can perform a similar analysis on storage.}
\end{frame}
We have mostly talked about runtime complexity (how many compute
cycles it takes to solve a problem); however, we can also talk about
storage complexity.  Some algorithms require a certain amount of
memory, and we can use Big-O notation to talk about how these memory
requirements grow with respect to the size of the input.  Usually
we're talking about RAM, but we can also talk about disk space.
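A small illustration (hypothetical names): reversing an array in place needs only $O(1)$ extra storage, while building a reversed copy needs $O(n)$.

```java
public class StorageExample {
    // Reversing in place uses O(1) extra storage: only two index
    // variables and one temporary, regardless of input size.
    static void reverseInPlace(int[] a) {
        for (int i = 0, j = a.length - 1; i < j; i++, j--) {
            int tmp = a[i];
            a[i] = a[j];
            a[j] = tmp;
        }
    }

    // Building a reversed copy uses O(n) extra storage: the new
    // array grows with the input.
    static int[] reversedCopy(int[] a) {
        int[] out = new int[a.length];
        for (int i = 0; i < a.length; i++) {
            out[a.length - 1 - i] = a[i];
        }
        return out;
    }

    public static void main(String[] args) {
        int[] a = {1, 2, 3};
        reverseInPlace(a);
        System.out.println(a[0]); // 3
    }
}
```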

\section{Examples}
Here we will look at some very simple patterns and see how we 
can get order estimates by ``eyeballing'' algorithms.  The book
elaborates on this topic.

\subsection{Nested Loops and Polynomials}
\begin{frame}[fragile]
  \mode<presentation>{\frametitle{Nested Loops and Polynomials}}
  \begin{lstlisting}
    int weirdSum = 13;
    for(int i=1;i<n;i++) {
      for(int j=1;j<n;j++) {
        weirdSum *= j*i*3;
      }
    }
  \end{lstlisting}
\end{frame} 
Here we have two nested loops whose running time is dependent on the
square of the size of the input.  Therefore we have an $O(n^2)$
algorithm.

\subsection{Loops Next to One Another}
\begin{frame}[fragile]
  \mode<presentation>{\frametitle{Loops Next to One Another}}
  \begin{lstlisting}
    int weirdSum = 13;
    for(int i=1;i<n;i++) {
      weirdSum *= i;
    }
    for(int j=1;j<n;j++) {
      weirdSum += j;
    }
  \end{lstlisting}
\end{frame}
Loops sitting right next to one another (i.e.\ not nested) run one
after the other.  With a careful analysis, the above algorithm performs
roughly $1+2n+2n$ units of work, which simplifies to $O(n)$.

\subsection{Logarithms}
\begin{frame}[fragile]
  \mode<presentation>{\frametitle{Logarithm}}
  \begin{lstlisting}
    for(int i=0; i<n; i++) {
      int j=n;
      while(j>1) {
        j=j/2; //halve the number of iterations
        //some work
      }
    }
  \end{lstlisting}
\end{frame}
Whenever you see the amount of work being halved (or more generally
divided by some constant greater than one), you know that you have a
log.  Here we see an outer loop that is iterating over all elements
and an inner loop that has this halving behavior; thus this is an
$O(n\lg{n})$ function.
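One way to convince yourself of the halving-implies-log rule (hypothetical names): count the halving steps directly and note that the count matches $\lfloor\log_2{n}\rfloor$.

```java
public class HalvingSteps {
    // Count how many times n can be halved before reaching 1.
    // The answer is floor(log2(n)), which is why repeated halving
    // contributes a logarithmic factor to the running time.
    static int halvings(int n) {
        int steps = 0;
        for (int j = n; j > 1; j /= 2) {
            steps++;
        }
        return steps;
    }

    public static void main(String[] args) {
        System.out.println(halvings(8));    // 3
        System.out.println(halvings(1024)); // 10
    }
}
```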

\end{document}
