\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{geometry}
\usepackage{graphicx}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{textcomp}
\usepackage{tabularx}
\usepackage{tikz}
\geometry{a4paper}
\usepackage[english]{babel}
\title{CS129A Programming Assignment 1}
\author{Hadoop18}
\date{5 October 2014}

\begin{document}
\maketitle
$_{}$
\\
{\large {\bf 1.0 Introduction}}
\\
\\
This assignment involved three different MapReduce tasks.  Part A extracts from the Wikipedia article dump every article whose title appears in the {\tt people.txt} file provided by the assignment framework, found in \emph{/home/o/class/cs129a/assignment1} on any of the public machines.  Part B parses the output of Part A (a smaller Wikipedia article dump) in order to index each article with the counts of all the word types found therein; this includes processing the content of each article, removing stop words and other irrelevant tokens, and lemmatization.  Finally, Part C takes the output of Part B and creates an inverted index over the unique word types.  This maps each word type to a list of tuples {\tt <articleID, frequency>}, where {\tt articleID} is the unique Wikipedia article title and {\tt frequency} is the number of times the indexed word type appeared in that article.
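
The shape of the Part C inverted index can be sketched in plain Java (no Hadoop involved; the class name, record layout, and article titles below are illustrative, not the group's actual code):

```java
import java.util.*;

// Plain-Java sketch of the Part C data structure: given (articleID, word,
// frequency) records, as produced by a Part B-style word count, build a map
// from each word type to its list of <articleID, frequency> tuples.
public class InvertedIndexSketch {
    public static Map<String, List<String>> buildIndex(String[][] records) {
        // TreeMap keeps the word types sorted, mirroring sorted reducer keys.
        Map<String, List<String>> index = new TreeMap<>();
        for (String[] r : records) {          // r = {articleID, word, frequency}
            index.computeIfAbsent(r[1], k -> new ArrayList<>())
                 .add("<" + r[0] + ", " + r[2] + ">");
        }
        return index;
    }

    public static void main(String[] args) {
        String[][] records = {
            {"Ada_Lovelace", "engine", "3"},
            {"Alan_Turing", "engine", "1"},
            {"Alan_Turing", "machine", "5"},
        };
        System.out.println(buildIndex(records));
    }
}
```

In the real job the grouping by word type is performed by the shuffle phase rather than by an in-memory map, but the resulting index has this form.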
\\
\\
The remainder of the report briefly reviews the difficulty of the assignment and the challenges faced.  In particular, it notes the confusing state of the resources and references, both those provided and those found online.  Suggestions are also given for improving the assignment; the final assessment is that the hardest part was not implementing the Java MapReduce code, but rather learning the surrounding interfaces and APIs.  The final section deals with issues and quirks that neither tinkering nor extended research could resolve.
\\
\\
{\large {\bf 2.0 Completion times and statistics}}
\\
\\
Below, the completion time of each task is described in some detail where possible, along with other statistics such as the sizes of the input and output files.  Afterwards, a table summarizes this information for both the cluster and a local pseudo-distributed machine.  The path of each output on HDFS is also listed for the curious reader to explore.
\\
\\
{\large {\bf 2.1 Part A}}
\\
\\
Of the three tasks, Part A took the longest.  Unfortunately, the exact completion time of Part A on the cluster is not known; it was estimated at roughly 8 hours.  Part A is arguably the hardest task in terms of resources consumed, since it has to parse the entire {\tt enwiki-latest.xml} dump file, a formidable 46 GB.  This no doubt dominated the completion time.  The resulting file was roughly 10\% of the original size, coming to exactly 4,293,263,727 bytes, or about 4.29 GB.  For completing Part A on the local pseudo-distributed machine, the much smaller Wikipedia article dump {\tt wiki-small.txt} was used as input, with a size of 395 MB.  The job took roughly 3-4 minutes and produced an output file of 26.6 MB.  In comparison to the full job, this means that about 10\% of {\tt enwiki-latest.xml} consists of people articles, while only roughly 6.7\% of {\tt wiki-small.txt} does.  The job runtime would probably be significantly faster if the raw source were JSON or YAML rather than XML.
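
The core of the Part A filtering step can be sketched in plain Java (no Hadoop; the names and titles below are made up for illustration): an article is kept only if its title appears in the set loaded from {\tt people.txt}.

```java
import java.util.*;

// Plain-Java sketch of the Part A filter: keep only those article titles
// that appear in the people set loaded from people.txt.
public class PeopleFilterSketch {
    // Returns the subset of article titles that name a person,
    // preserving their original order.
    public static List<String> keepPeople(Set<String> people, List<String> titles) {
        List<String> kept = new ArrayList<>();
        for (String title : titles) {
            if (people.contains(title)) {
                kept.add(title);
            }
        }
        return kept;
    }

    public static void main(String[] args) {
        Set<String> people = new HashSet<>(Arrays.asList("Ada Lovelace", "Alan Turing"));
        List<String> titles = Arrays.asList("Ada Lovelace", "Analytical Engine", "Alan Turing");
        System.out.println(keepPeople(people, titles)); // prints [Ada Lovelace, Alan Turing]
    }
}
```

In the actual job this test runs inside the mapper, with the people set distributed to every task; the expensive part is streaming the 46 GB dump through the mappers, not the membership test itself.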
\\
\\
{\large {\bf 2.2 Part B}}
\\
\\
The exact completion time of Part B was not recorded, but it took no longer than 3 hours to complete.  In terms of code complexity, Part B was likely the most complicated, because it not only had to parse each Wikipedia article but also perform stop word removal and lemmatization.  The input to Part B was the output file described above for Part A, roughly 4.3 GB.  The resulting output was exactly 1,036,888,786 bytes, about 1.04 GB--roughly a quarter the size of the input, suggesting that some 75\% of the raw content of Wikipedia articles consists of markup and stop words.  That is an incredible overhead if the stop words themselves contribute only a negligible amount.  Running this task on the local machine using the 26.6 MB file as input took roughly 2-3 minutes and produced an output file of 11.9 MB.  The difference in input/output proportions between the cluster and local jobs may be attributed to the idiosyncrasies of each data set: articles in {\tt wiki-small.txt} are probably much larger in content on average than those found in {\tt enwiki-latest.xml}.
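
The counting step of Part B can be sketched in plain Java (no Hadoop, and lemmatization is omitted; the stop word list and sample text are illustrative only):

```java
import java.util.*;

// Plain-Java sketch of Part B's core logic: tokenize an article's text,
// drop stop words, and count the remaining word types.
public class WordCountSketch {
    // A tiny illustrative stop word list; the real job uses a full one.
    static final Set<String> STOP_WORDS =
        new HashSet<>(Arrays.asList("the", "a", "of", "and"));

    public static Map<String, Integer> countTypes(String text) {
        Map<String, Integer> counts = new TreeMap<>();
        // Lowercase, then split on any run of non-letter characters.
        for (String token : text.toLowerCase().split("[^a-z]+")) {
            if (token.isEmpty() || STOP_WORDS.contains(token)) {
                continue;   // skip empty tokens and stop words
            }
            counts.merge(token, 1, Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        System.out.println(countTypes("The engine of the Analytical Engine"));
        // prints {analytical=1, engine=2}
    }
}
```

In the real job each mapper emits per-article {\tt (word, count)} pairs rather than returning an in-memory map, but the tokenize-filter-count pipeline is the same.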
\\
\\
{\large {\bf 2.3 Part C}}
\\
\\
This task had the shortest completion time, though there were some interesting quirks.  The first time this task was successfully submitted and running, it completed exactly 12.5\% of the map job roughly every 10 minutes, because there were exactly 8 mappers.  This very slow run did not ultimately complete: another student from outside the group accidentally killed the job by typing in the wrong job ID.  When the task was re-submitted immediately afterwards, it completed in exactly 10 minutes--the map phase finished almost immediately (in fact, the first job status checked after submission showed the mappers already 100\% complete), while the majority of the completion time was spent in the reduce phase.  Considering the input file was only 1 GB in size, this is perhaps not too astounding.  The resulting output had a size of 1,954,528,089 bytes (roughly 1.95 GB), practically twice the size of the input file.  Running this job on the local machine with 11.9 MB of input took less than 10 seconds (almost instantaneous), producing an output of 22 MB.  Interestingly, the input/output ratio is the same: the output is twice as large as the input file.
\\
\\
\newpage
$_{}$
\\
{\large {\bf 2.4 Table and summary}}
\begin{center}
    \begin{tabular}{| l | l | l | l | l |}
    \hline
    {\bf Task} & {\bf Input size} & {\bf Output size} & {\bf Completion time} & {\bf HDFS output path} \\ \hline
    Part A Cluster & 46 GB & 4.3 GB & 8 hours & \emph{/user/hadoop18/pa1/A} \\ \hline
    Part B Cluster & 4.3 GB & 1.0 GB & 2-3 hours & \emph{/user/hadoop18/pa1/B} \\ \hline
    Part C Cluster & 1.0 GB & 1.9 GB & 10 minutes & \emph{/user/hadoop18/pa1/C} \\ \hline
    Part A Local & 395 MB & 26.6 MB & 3-4 minutes & N/A \\ \hline
    Part B Local & 26.6 MB & 11.9 MB & 2-3 minutes & N/A \\ \hline
    Part C Local & 11.9 MB & 22 MB & 10 seconds & N/A \\ \hline
    \end{tabular}
\end{center}
Information on the machine used for the local pseudo-distributed runs is given below for the curious reader.
\\

Model: MacBook Pro Retina (Early 2013) \\

Processor: 2.4 GHz Intel Core i7 \\

Memory: 8 GB 1600 MHz DDR3 \\

Operating system: OS X Version 10.9.5 \\

Java version: 1.7.0\_45 \\

Hadoop version: 2.5.0
\\
\\
{\large {\bf 3.0 Difficulty}}
\\
\\
The hardest part of the assignment was learning how to use both Hadoop and HDFS, along with the various APIs associated with running MapReduce tasks through the JVM.  For example, completing Part A was bottlenecked by not knowing how to give Hadoop access to third-party libraries.  There was no documentation on this topic in the class materials, and it is {\bf not} a trivial problem [1][2][3].  Without completing Part A, it is not really possible to make progress on Part B or Part C, since the output of each task is required as input for the next.  Had there been explicit instructions regarding this problem, the group would probably not have needed an extension.
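
For reference, one documented route (see [1][2]) is the {\tt -libjars} generic option; a sketch of a driver that supports it follows, assuming Hadoop 2.x.  The catch is that {\tt -libjars} is only honored when the driver actually parses the generic options, e.g. via {\tt GenericOptionsParser} (or by implementing {\tt Tool} and launching through {\tt ToolRunner}).  {\tt PartADriver} is an illustrative name, not the group's actual class, and this fragment is not a complete driver:

```java
// Sketch, assuming Hadoop 2.x. A submission then looks like:
//   hadoop jar pa1.jar PartADriver -libjars third-party.jar <input> <output>
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.util.GenericOptionsParser;

public class PartADriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Strips -libjars (and the other generic options) out of args and
        // records the extra jars in conf so they reach the task classpath.
        String[] remaining = new GenericOptionsParser(conf, args).getRemainingArgs();
        // ... configure and submit the Job using conf and remaining ...
    }
}
```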
\\
\\
Another area that was initially difficult to master was understanding when a file had to be copied onto HDFS for use.  This proved straightforward, however, after enough practice with HDFS.
\\
\\
Implementing the logic behind the MapReduce tasks in the Java source files was very straightforward and by far the easiest part of the assignment.  Actually getting Hadoop to run them is another story.  The issue is that many Hadoop resources [4] use the old API for writing MapReduce tasks in Java, which is now deprecated.  This made it difficult to find a working example to learn from directly.
\newpage
$_{}$
\\
{\large {\bf 4.0 Quirks and problems}}
\\
\\
All of the tasks ran to completion on the cluster and produced the correct output.  However, there were some more advanced techniques the group wished to learn but did not succeed in implementing.
\\
\\
The first of these is chaining MapReduce jobs.  This is {\bf not} a trivial problem, and it suffers from an issue similar to the one mentioned in 3.0: there are many different ways to chain jobs using different syntax and classes from the Hadoop API, and there did not appear to be a single, authoritative way to do it.  The resources available online suffer from the same problem.
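
One of the simpler approaches is plain sequential chaining, where the first job's output directory becomes the second job's input.  The following is a sketch only, assuming the Hadoop 2.x {\tt org.apache.hadoop.mapreduce} API, with illustrative paths and job names and the per-job setup elided:

```java
// Sketch of sequential job chaining (one of several possible approaches).
Configuration conf = new Configuration();

Job first = Job.getInstance(conf, "part-b");
// ... setJarByClass, setMapperClass, setReducerClass, key/value types ...
FileInputFormat.addInputPath(first, new Path("pa1/A"));
FileOutputFormat.setOutputPath(first, new Path("pa1/B"));

// Run the second job only if the first succeeded; its input is the
// first job's output directory.
if (first.waitForCompletion(true)) {
    Job second = Job.getInstance(conf, "part-c");
    // ... job setup as above ...
    FileInputFormat.addInputPath(second, new Path("pa1/B"));
    FileOutputFormat.setOutputPath(second, new Path("pa1/C"));
    System.exit(second.waitForCompletion(true) ? 0 : 1);
}
```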
\\
\\
Creating a custom file input format is also not a trivial task.  It suffers from the same issues (deprecated and outdated resources), and there is little current documentation on the topic.  The group was not initially aware that Hadoop provides a convenient {\tt KeyValueTextInputFormat} class for parsing the input to Part C and had created its own custom file input format instead.  The confusing part was that the custom file input format worked on the local machine, but when run on the cluster it would inexplicably cause the MapReduce task to fail (with no JVM crash or other explanation).  This leads into the next problem.
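
For comparison, selecting the built-in format is a two-line affair.  The sketch below assumes the Hadoop 2.x {\tt mapreduce} API: each input line is split into key and value at the first separator character, which is a tab by default and configurable as shown.

```java
// Sketch: using KeyValueTextInputFormat instead of a custom input format.
Configuration conf = new Configuration();
// Optional: override the default tab separator.
conf.set("mapreduce.input.keyvaluelinerecordreader.key.value.separator", "\t");

Job job = Job.getInstance(conf, "part-c");
job.setInputFormatClass(KeyValueTextInputFormat.class);
// The mapper then receives Text keys and Text values directly.
```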
\\
\\
It appears that even if a task runs on a pseudo-distributed local machine, it is not guaranteed to be \emph{correct} or to work when run on the cluster.  This is unfortunate because it limits local testing, which is considerably cheaper resource-wise and easier to experiment with, and it can lure a developer into a false sense of security.  For example, while both chaining and the custom file input format worked on the local machine, the exact same implementations failed on the cluster.  This proved very frustrating at times.  One possible solution would be to give each group a virtualized cluster, such as Amazon's Elastic MapReduce service.
\newpage
$_{}$
\\
{\large {\bf References}}
\\
\\
$_{}$[1]  http://blog.cloudera.com/blog/2011/01/how-to-include-third-party-libraries-in-your-map-reduce-job/
\\
\\
$_{}$[2] http://grepalex.com/2013/02/25/hadoop-libjars/
\\
\\
$_{}$[3] http://hadoopi.wordpress.com/2014/06/05/hadoop-add-third-party-libraries-to-mapreduce-job/
\\
\\
$_{}$[4] http://hadoop.apache.org/docs/r1.2.1/mapred\_tutorial.html


\end{document}