\documentclass{assn}
\num{Lab 3 Commentary}
\by{Ian Voysey (iev@), Ryan Hofler (rhofler@)}
\due{29 November 2008, 23:50 + 1 Late Day}
\for{15-440: Fund. Systems}

\begin{document}
\begin{enumerate}[A.]
\item The problem we solved was finding the longest shortest path to any
  article in Wikipedia. If we call the target node $v$, the shortest path
  between two nodes $SP$, and the vertex set of the graph $G$ as $\nu(G)$, the
  quantity of interest can be formalized as $$\max_{u \in \nu(G)} \Big [
  \operatorname{len} \left ( SP(u,v) \right ) \Big ].$$ In our tests, this was always the article
  on Jesus, i.e. \url{http://en.wikipedia.org/wiki/Jesus}. The method that we
  actually used to solve this ended up enumerating the titles of Wikipedia
  articles partitioned by their shortest-path distance to the target node
  (i.e. the equivalence classes induced by this relation).
\item Our original solution was as follows:
  \begin{enumerate}[I.]
  \item Use one MapReduce phase to scan every article and emit all of the
    interesting links found in $(src,dest)$ form. Upon being reduced, this
    constitutes the adjacency list of the graph.
  \item Use a traditional implementation of Dijkstra's algorithm to produce
    the shortest-path tree of this graph, reparsing the output from the
    MapReduce job.
  \item Again use traditional programming paradigms to enumerate the longest
    path to the specified target vertex and find its length.
  \end{enumerate}

  The appeal of this solution was how closely it mirrored how Hadoop is
  commonly used in practice. Often it's easier to use the large and somewhat
  cumbersome infrastructure to pre-process a large data set to make it
  tractable by traditional means, which can perform more complicated
  procedures with less pain.
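
  The link-extraction step (I above) boils down to one map function over
  articles. A minimal sketch of that per-record logic, minus the Hadoop
  boilerplate, might look like the following; the exact link regex and the
  class/method names are assumptions for illustration, not our production
  code.

  ```java
  import java.util.ArrayList;
  import java.util.List;
  import java.util.regex.Matcher;
  import java.util.regex.Pattern;

  public class LinkExtract {
      // Matches [[Target]], [[Target|display]], or [[Target#section]] wiki
      // links, capturing just the target title. The precise pattern the lab
      // needs is an assumption here.
      private static final Pattern LINK = Pattern.compile("\\[\\[([^\\]|#]+)");

      // Per-record map logic: emit one (src, dest) pair per outgoing link.
      static List<String[]> map(String title, String articleText) {
          List<String[]> pairs = new ArrayList<>();
          Matcher m = LINK.matcher(articleText);
          while (m.find()) {
              pairs.add(new String[]{title, m.group(1).trim()});
          }
          return pairs;
      }
  }
  ```

  Reducing these pairs by source article then yields the adjacency list.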

\item This approach failed semi-spectacularly. We underestimated just how
  large the adjacency list produced by the first MapReduce phase would be, so
  even on a small percentage of one shard of the whole data set, Dijkstra's was
  taking upwards of ten minutes to run. It also somewhat circumvented the
  purpose of the lab, even though it mirrored practical use of Hadoop.

  The solution that we settled on was basically a distributed version of
  Dijkstra's, adapted to the fact that the graph is directed and every edge
  weight is equal. The basic steps are:
  \begin{enumerate}[I.]
  \item Use one MapReduce phase to scan every article and emit all of the
    interesting links found in $(src,dest)$ form. Upon being reduced, this
    constitutes the adjacency list of the graph.
  \item Run a distributed BFS some constant number of times to explore paths
    outward from the target article (usually
    \url{http://en.wikipedia.org/wiki/Jesus}).
  \item Clean up with a final MapReduce phase that sorts the articles into
    their equivalence classes.
  \end{enumerate}
  
  Step II could be replaced with ``run DBFS until the output doesn't change,''
  but we decided that this was too wasteful. The likelihood of a path longer
  than $10$ nodes or so is so small due to the connectivity of Wikipedia that
  the overhead of an unknown number of MapReduce jobs operating on remaining
  lists of unknown (and probably quite small) size is totally uncalled
  for. Testing to see if the output changed between iterations is very time
  consuming because for the early iterations the output is still insanely
  large.
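
  Each DBFS iteration is just one relaxation pass: every article reached so
  far offers its neighbors (following links backwards toward the target) a
  distance one greater than its own, and the reduce side keeps the minimum.
  The following in-memory sketch stands in for one such MapReduce pass; the
  record layout is an assumption, not our actual key/value format.

  ```java
  import java.util.*;

  public class DbfsRound {
      // One round of the distributed BFS over the reversed link graph.
      // dist holds only the articles reached so far (target starts at 0);
      // each call extends the frontier by one hop.
      static Map<String, Integer> round(Map<String, List<String>> reverseAdj,
                                        Map<String, Integer> dist) {
          Map<String, Integer> next = new HashMap<>(dist);
          for (Map.Entry<String, Integer> e : dist.entrySet()) {
              int d = e.getValue();
              for (String nbr : reverseAdj.getOrDefault(e.getKey(),
                                                        Collections.emptyList())) {
                  // "reduce" step: keep the minimum distance seen per article
                  if (d + 1 < next.getOrDefault(nbr, Integer.MAX_VALUE)) {
                      next.put(nbr, d + 1);
                  }
              }
          }
          return next;
      }
  }
  ```

  Running this for \texttt{MAX\_ITS} rounds instead of to a fixed point is
  exactly the trade-off discussed above.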
  
  The one caveat with this solution is that the reducer used in the
  production of the adjacency list does not run in constant space. That
  reducer takes in a list of links and has to output the unique sublist of
  that input. Doing this in constant space, without a genuinely brain-damaged
  extra phase to invert the adjacency list twice, is impossible. Instead, we
  dumped the input list into a \texttt{HashSet}, doing some sanity checks
  along the way, and then emitted each element of the set, effectively
  enforcing uniqueness.

  While this is clearly not constant space, we found that the entire adjacency
  list was well under $100$ MB, so the entries for any one article must fit in
  the physical memory of a single machine. This is kind of a cop-out, but it
  is safe for this particular data set.
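
  The reduce logic in question fits in a few lines. This sketch follows the
  \texttt{HashSet} approach described above; the particular sanity checks
  shown (dropping empty titles and self-links) are assumptions about what
  such checks might look like, not a transcript of ours.

  ```java
  import java.util.*;

  public class DedupReduce {
      // Collapse all (src, dest) pairs for one source article into a unique
      // neighbor list. Deliberately not constant-space: the set grows with
      // the number of distinct links, as noted above.
      static List<String> reduce(String src, Iterator<String> dests) {
          Set<String> seen = new LinkedHashSet<>();
          while (dests.hasNext()) {
              String d = dests.next().trim();
              if (d.isEmpty() || d.equals(src)) continue;  // sanity checks
              seen.add(d);
          }
          return new ArrayList<>(seen);
      }
  }
  ```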
   
\item The biggest lesson here is that MapReduce is a limited paradigm. It is
  intensely powerful for those problems that are approachable by it
  (e.g. finding all the links), but unwieldy for many other kinds of
  tasks. This can apparently be alleviated by judicious use of the streaming
  libraries rather than native Java, but that was disallowed.

  The other lesson learned was a real appreciation for just how big ``big''
  datasets can be, and the complications involved in trying to process
  them. It's a bit like going to the Grand Canyon.

\item The article with the longest shortest path to Jesus in Wikipedia is
  \texttt{Sri Chandrasekhara Bharati I} with a path length of $16$. The next
  two runners-up are \texttt{Sri Nrusimha Bharati I} with $15$ and \texttt{Sri
    Purushotthama Bharati I} with $14$.

  The really cool part about using DBFS is the way that the output is
  rendered. We set the last clean-up phase to have $\texttt{MAX\_ITS} + 1$
  reducers, so we get that many files in the final directory, where
  \texttt{part-000$k$} contains the names of the articles with paths of length
  $k$ to the target article, i.e. Jesus. We used this to determine that we'd
  actually found the longest shortest path by running our whole suite with
  increasingly high iteration counts until we had output files of zero
  size. Included in this hand-in is a plot of line count versus file name
  (i.e. path length) as well as the raw data it's derived from.
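
  The partition rule that produces this layout is tiny: route each record to
  the reducer numbered by its path length. This is a sketch of that rule
  under our assumptions (the clamp for overlong paths is a defensive choice
  of ours, not something the data required).

  ```java
  public class LengthPartition {
      // With MAX_ITS + 1 reducers, sending each record to reducer number
      // pathLength puts the distance-k articles into part-000k.
      static int partition(int pathLength, int numReducers) {
          // clamp, in case a path somehow exceeds the iteration bound
          return Math.min(pathLength, numReducers - 1);
      }
  }
  ```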

\item The best advice that could be passed down is to totally forget about
  using Eclipse. As usual, the overhead of that massive IDE is simply not
  needed, and it's a huge pain to set up. It's far better to just have a
  Makefile with some reasonably well-thought-out targets. Apparently this can
  be replaced with an Ant file, which is more specific to Java, but for us the
  ability of a Makefile to run arbitrary UNIX commands served better.

  We also saw that prototyping the code, particularly if it uses regular
  expressions like it ought to, is very helpful. We did this first in
  Perl, then in a very small Java application, but probably any scripting
  language will do the job.
  
  It would also be nice to try doing this with the Streaming libraries instead
  of native Java. There's a bit of a start up cost, but we highly suspect that
  once you get going, it's far easier to focus on the problem you're trying to
  solve instead of wrangling with a bad environment. 
\end{enumerate}
\end{document}
