%------------------------------------------------------------------
%  Document Class
%------------------------------------------------------------------
\documentclass{article}

%------------------------------------------------------------------
%  Graphics and Related Packages
%------------------------------------------------------------------
\usepackage{multirow}
\usepackage{listings}
\usepackage{amssymb}
\usepackage[pdftex]{graphicx}
\usepackage{url}

%------------------------------------------------------------------
%  Margins
%------------------------------------------------------------------
\addtolength{\oddsidemargin}{-.8in}
\addtolength{\evensidemargin}{-.875in}
\addtolength{\textwidth}{1.7in}
\addtolength{\topmargin}{-.875in}
\addtolength{\textheight}{1.7in}

%------------------------------------------------------------------
%  Document
%------------------------------------------------------------------
\begin{document}

%------------------------------------------------------------------
%  Paper title
%------------------------------------------------------------------
\title{On Comparing Inverted Index Parallel Implementations \\ Using Map/Reduce \\ CMPUT 681 \\ Foxtrot Team}

%------------------------------------------------------------------
%  Author names and affiliations
%------------------------------------------------------------------
\author{\textbf{Joshua Davidson - Victor Guana} \\ Afsaneh Esteki - Anahita Alipour - Mohammad Salameh \\ \{jdavidso, guana, afsaneh.esteki, alipour1, msalameh\}@cs.ualberta.ca \\ Computing Science Department \\ University of Alberta}
\maketitle
\setlength{\parindent}{0pt}
\setlength{\parskip}{2ex plus 0.5ex minus 0.2ex}

%------------------------------------------------------------------
%  Abstract
%------------------------------------------------------------------
\begin{abstract}
This document presents the implementation, experimental design, and analysis for a comparison of three implementations of an inverted index algorithm. Specifically, we compare data-parallel implementations built on the MapReduce programming framework over two different ecosystems for parallel computation: Hadoop and MPI. In the context of the inverted index algorithm, results from our prototypes show that the Hadoop implementations perform considerably better than the MPI implementations with respect to overall execution time. We tested our implementations using two main datasets that differ in word density and size. In all experiments, our Hadoop implementations consistently outperformed our MPI implementations. Our analysis, experimental conditions, and implementation caveats are summarized in this report.
\end{abstract}

%------------------------------------------------------------------
\section{Introduction}
%------------------------------------------------------------------
The MapReduce programming strategy is a very attractive proposal for speeding up computation over large amounts of independent data. This paradigm offers a conceptual framework inspired by functional programming, in which the computation is divided into distributed jobs that compute partial results of a specified algorithm (map functions), which are then merged to obtain a unified result (reduce functions). In this report, we explore different implementations of the MapReduce strategy for parallelizing the computation of an inverted index. Specifically, our parallel implementations target a distributed-memory execution environment. We compared the behaviour of three different implementations. The first two rely on the Hadoop environment, which provides a natural API and execution platform for implementing MapReduce algorithms. In particular, the Hadoop system provides a distributed file system that supports, in a transparent manner, the storage and distribution of massive amounts of data. The third implementation relies on the Message Passing Interface (MPI) library. MPI supports the execution of parallel algorithms using abstract communication and flow-control structures within a cluster environment.

This report explores these three implementations using an inverted index algorithm written with the MapReduce strategy. The inverted index algorithm computes an index of \textit{key/value} pairs, where the words of a given page are the \textit{keys}, and the \textit{value} associated with each \textit{key} is its frequency within a given page. The goal of this report is to investigate and answer the following questions within the confines of the inverted index calculation using the aforementioned implementations.

\begin{description}
\item [Q1] What are the performance impacts of using either Hadoop or MPI implementations of the inverted index algorithm when compared to one another?
\item [Q2] Does the implementation of a combiner within MapReduce improve the algorithm execution time?
\item [Q3] What impacts does the size of inputs have on our MapReduce implementations?
\end{description}

The answers to these questions will provide insights that allow us to properly analyze the advantages and drawbacks of using MapReduce with the open source Apache Hadoop platform and with the open source MPI library. The answers will also hopefully suggest areas in which the overall MapReduce strategy can be optimized for this problem.

%------------------------------------------------------------------
\section{Problem Description - The Inverted Index}\label{sec:problem}
%------------------------------------------------------------------
An efficient method to perform web searches has become necessary with the ever-increasing number of pages on the internet and the large, sophisticated queries that users execute. Given a user query, a search engine has to search terabytes of textual data, retrieve the relevant pages, rank them, and send them back to the user. An inverted index is a data structure that maps a \textit{key}, such as a word, letter, or symbol, to the set of documents that contain the \textit{key}. This index is used to optimize the execution of search queries. For example, a search with three words then becomes a function that returns the intersection of the three indexes corresponding to the words in the search.
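To make the intersection step concrete, the following Java sketch intersects the posting sets of each query word. The class and method names are ours and purely illustrative; query evaluation is not part of the implementations evaluated in this report.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class QueryIntersection {
  // A page satisfies the query only if it appears in the posting
  // set of every query word, i.e. in the intersection of the sets.
  public static Set<String> matches(Map<String, Set<String>> index, String... words) {
    Set<String> result = null;
    for (String word : words) {
      Set<String> postings = index.containsKey(word) ? index.get(word) : new HashSet<String>();
      if (result == null) {
        result = new HashSet<String>(postings);  // first word: copy its postings
      } else {
        result.retainAll(postings);              // later words: set intersection
      }
    }
    return result == null ? new HashSet<String>() : result;
  }

  public static void main(String[] args) {
    Map<String, Set<String>> index = new HashMap<String, Set<String>>();
    index.put("map", new HashSet<String>(java.util.Arrays.asList("p1", "p2")));
    index.put("reduce", new HashSet<String>(java.util.Arrays.asList("p2", "p3")));
    System.out.println(matches(index, "map", "reduce"));  // only p2 contains both words
  }
}
```

A three-word query simply passes three words to \texttt{matches}; each additional word can only shrink the result set.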

An inverted index data structure is one of the basic components of most search engines. Building an inverted index requires an amount of time that increases with the amount of data to be considered in the index. Each page has to be preprocessed to remove all of the \textit{HTML} tags and other metadata, as well as \textit{stop words} such as \textbf{and, or, when,} and \textbf{the}. The index is then computed using the remaining words in each document. Given the rapidly increasing speed at which the internet is growing, and the computational time needed to create these indexes, the performance of index-building algorithms is a critical factor for commercial search engines.

Different implementations of an inverted index can add different features on top of the simple \textit{key/value} pairs of words and document IDs. The \textit{value} can hold a payload containing the total term frequency or the per-document frequency of a given \textit{key}. For information retrieval and page ranking tasks, the position of the \textit{key} and the context in which it appears are extremely valuable. For instance, a word that occurs in the title of a page has more weight in ranking than a word that occurs in the page's text content or in a hyperlink. Different implementations also differ in how they assign IDs to documents. Pages that are considered important due to their content and high hit ratio might be assigned different numeric IDs than less important pages, and pages belonging to the same website might be given sequential ID numbers.
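As a minimal sequential illustration of the per-document frequency variant described above, the structure can be built as follows. The class and method names are ours; this toy sketch is not one of the implementations evaluated later in the report.

```java
import java.util.HashMap;

// Minimal sequential sketch of the inverted index described above:
// word -> (page id -> frequency of the word in that page).
public class TinyInvertedIndex {

  public static HashMap<String, HashMap<String, Integer>> build(String[][] pages) {
    HashMap<String, HashMap<String, Integer>> index =
        new HashMap<String, HashMap<String, Integer>>();
    for (String[] page : pages) {
      String pageId = page[0];                 // first entry is the page id
      for (int i = 1; i < page.length; i++) {  // remaining entries are words
        String word = page[i].toLowerCase();
        HashMap<String, Integer> postings = index.get(word);
        if (postings == null) {
          postings = new HashMap<String, Integer>();
          index.put(word, postings);
        }
        Integer count = postings.get(pageId);
        postings.put(pageId, count == null ? 1 : count + 1);
      }
    }
    return index;
  }

  public static void main(String[] args) {
    String[][] pages = {
        {"p1", "cats", "chase", "mice"},
        {"p2", "mice", "eat", "cheese", "mice"}
    };
    HashMap<String, HashMap<String, Integer>> index = build(pages);
    // "mice" appears once in p1 and twice in p2
    System.out.println(index.get("mice"));
  }
}
```

The MapReduce versions described in Section \ref{sec:implementation} compute exactly this structure, but split the counting across mappers and merge the partial results in the reducer.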

The computation used to create the indexes is independent between pages: each page, or input to the algorithm, can be processed separately and the partial results merged later. This property makes the MapReduce framework a very attractive strategy for the parallel execution of this task. Section \ref{sec:litreview} presents a review of works that argue why MapReduce is an important solution strategy for these types of problems, as well as the motivation behind using high performance computing systems such as Hadoop or MPI with the MapReduce framework.

%------------------------------------------------------------------
\section{Literature Review} \label{sec:litreview}
%------------------------------------------------------------------
\subsection{Why MapReduce?}
The MapReduce framework has become increasingly popular over the last decade for processing large amounts of data.  The MapReduce notion was inspired by the \textit{map} and \textit{reduce} functions from Lisp, and companies such as Google use MapReduce in hundreds of applications on thousands of machines with terabytes of data \cite{MapReduce}.  Some examples of problems that lend themselves to this framework are \textit{distributed grep, counting URL access frequency, distributed sort}, and \textit{inverted indexing} \cite{MapReduce}, \cite{HPCVM}, as well as data mining jobs, graph processing, and even some machine learning tasks \cite{Hadoop}.  Many other problems can be modelled in this way, as long as they can be represented with \textit{key:value} pairs for the map tasks such that the reducer can consume the \textit{key:value} pairs emitted by the mappers and produce a simplified list or some aggregate value, depending on the application goals.  MapReduce is an attractive option for programs that can be written under this framework since it lends itself to simple parallelization that scales easily with the resources available.  This notion is especially important in the pay-as-you-go computing model, otherwise known as the \textit{cloud} \cite{Hadoop}.  This model is becoming more and more popular, as a group or company no longer needs to own a cluster or similar system to perform parallel computing; resources can instead be rented from \textit{cloud} providers.  In order to make the most out of these systems, the scalability that MapReduce offers is essential.  MapReduce also has some nice properties regarding fault tolerance.  Since the worker jobs execute independently and are not dependent on one another for intermediate results, if the system can detect failures in the map tasks run on the workers, those jobs can be re-executed, making MapReduce resilient to large-scale worker failure \cite{MapReduce}.

\subsection{The Issue of Virtual Machines}
As mentioned in the previous section, the idea of renting computation resources is becoming more appealing to projects which do not need or want to own the infrastructure required to execute their computational tasks.  The number of such projects will only continue to increase as the cost of these \textit{cloud} services decreases.  The idea of virtualization is important to both the providers of \textit{cloud} services and their users.  Users are able to tune their applications to custom virtual environments before renting resources, and providers are able to use hardware in the most cost-effective way as well as reduce maintenance overheads.  Other powerful features of virtual machines (VMs) include checkpointing, isolation, and live migration, all of which are very attractive to providers and users alike \cite{HPCVM}.  The downside of using VMs in high performance systems is the overhead associated with Virtual Machine Monitors (VMMs), which is very visible where I/O performance is concerned \cite{HPCVM}.  Low-overhead environments are also needed to manage the migration, distribution, booting, and shutdown of virtual machines as they are scheduled to run on the system in order for VMs to be considered for high performance applications \cite{HPCVM}.  When comparing MapReduce tasks on a virtual machine against a physical cluster, analysis has shown that the physical cluster outperforms the VM, as expected; however, the margins are relatively small, especially when the VMs use certain I/O optimizations \cite{Cloudlet}, \cite{MRVM}.

\subsection{Implementing MapReduce}
The idea behind MapReduce can be implemented in numerous ways.  Two such ways are using a well-known message passing interface such as MPI with C++ \cite{MPI}, or using the open source Java implementation Hadoop \cite{Hadoop}.  There are advantages and disadvantages to each method of implementation, which should be addressed when deciding how to approach a project.  One advantage of using MPI to implement MapReduce is the availability of the library.  Another advantage is that MPI provides numerous communication methods to handle the distribution of the data, such as MPI's built-in collective operations.  To implement a combiner, for instance, the MPI\_Reduce\_local operation could be used \cite{MPI}.  As well, a scheduling process could use MPI\_Scatter operations to distribute the map tasks to the mappers.  Other MPI functionality, such as non-blocking communication, could also be used to increase the performance of the MapReduce tasks by overlapping communication and computation, which can be done in the map phase using pipelining \cite{MPI}.  One disadvantage of using MPI is the difficulty of detecting and correcting faults.  MPI defaults to aborting jobs when they fail, and even though this can be configured to handle errors, error handling is non-trivial when using collective operations \cite{MPI}.  Also, checking the successful completion of collective operations may be very expensive \cite{MPI}.

Hadoop, an open source MapReduce implementation written in Java and developed largely at Yahoo, provides an easy to use abstraction of the MapReduce semantics.  The Hadoop environment contains abstractions for the map and reduce phases, as well as a distributed file system accessible to the installation for data replication and storage.  A Hadoop installation also comes complete with web-based tools for monitoring tasks and for setting up nodes for execution and scheduling.  Hadoop also handles application faults with built-in job restarts and migration of tasks from one node to another \cite{white2010hadoop}.  These features, combined with companies such as Amazon offering pre-built virtual machine images containing Hadoop for use with Amazon's \textit{EC2 Cloud} computing infrastructure, make the Hadoop implementation very appealing.  There are, however, tradeoffs to consider when choosing Hadoop.  One such tradeoff is that the Hadoop Distributed File System (HDFS) still relies on the physical system to perform I/O, which can be an expensive operation when going through the Hadoop layer as opposed to using the disk directly \cite{Hadoop}.  Hadoop also has little support for customizing the sorting and grouping algorithms.  As well, the scheduler, which is sensitive to the speed of the nodes, can have a negative impact on performance in some applications.  One more problem can arise from the use of immutable objects, such as Java strings, which can have negative performance impacts when performing database-style operations \cite{Hadoop}.

\subsection{Inverted Indexing}
Inverted indexing is a problem that lends itself nicely to the MapReduce framework.  As stated in Section \ref{sec:problem}, an inverted index is an index over all the words in a collection of data, listing the pages, or the locations within those pages, in which the words appear \cite{InvInd}.  This indexing can be extended to include the frequency of the words in a given document, the absolute positions within the documents in which they appear, or a weighting of the terms based on \textit{PageRank} or similar measures \cite{Book}.  Given an inverted index, say for web pages, it is easy to imagine the power it holds for generating the results of search queries.  If the documents are sorted properly, then an application can look up the search terms in the inverted index and gather the top \textit{k} results \cite{Book}.  This problem fits nicely in the MapReduce framework since it is highly data parallel.  In order to generate even a simple inverted index, each document has to first be parsed for the words it contains, and a frequency count computed for each word.  This task lends itself to the \textit{map} phase of MapReduce and hence can be computed independently in parallel.  Once each document, or set of documents, has generated these \textit{key:value} pairs, they can then be sent to the \textit{reduce} phase of a MapReduce algorithm.  The reducer takes all of the data for each document and composes a master index with all the words and their frequencies per document.  Lin et al. even give an algorithm for these operations with respect to MapReduce \cite{Book}.

\subsection{Performance}
MapReduce is a framework that can provide significant performance increases for problems that are highly data parallel.  Since each map task is assigned an independent portion of the data, those tasks can work in parallel, giving the \textit{map} phase of the algorithm a near-linear speedup depending on the overheads associated with starting the tasks themselves \cite{MapReduce}.  The \textit{reduce} phase, however, can become a bottleneck if it has to process the data from all of the mappers and that computation is data dependent.  Partial reductions performed before the reduce phase, which is the notion behind combiners, may be used to help speed this up.  Overall, the MapReduce framework has been shown to provide good results for problems with data parallelism \cite{MapReduce}.  Using MPI to solve this problem allows for some extra simplicity from an application development perspective with the proper use of collective operations \cite{MPI}.  As well, using the MPI library's non-blocking communication modes can provide performance increases for problems exhibiting certain characteristics \cite{MPI}.  MPI run on virtual machines also shows latency and throughput measures similar to native execution on InfiniBand networks \cite{HPCVM}.  Hadoop has been shown to perform well against parallel database implementations \cite{Hadoop}.  When run on a VM instead of the native OS, Hadoop performs with reasonable degradation, the cause of which is disk I/O \cite{MRVM}.

\subsection{Synthesis}
The MapReduce framework, and the types of problems that lend themselves to this solution, provide application developers with a simplified framework in which to structure their algorithms.  MapReduce allows for simple parallelism and high scalability, which enables high performance solutions to many problems.  The role of virtual machines in the high performance computing environment is becoming a hot topic with the emergence of \textit{cloud} computing resources as an alternative to purchasing hardware, Amazon's \textit{EC2} being a concrete example of such a system.  The use of traditional high performance computing libraries such as MPI within the MapReduce environment, as well as Hadoop, has been explored, and both have been shown to be viable options that developers can weigh when making decisions regarding algorithm design and execution platform.  Inverted indexing is one problem where the choice of implementation and the need for superior performance can have an impact on commercial applications.  All of these ideas have been explored independently, but little work has been done to compare them together, even though the concepts are closely related.

%------------------------------------------------------------------
\section{Implementation} \label{sec:implementation}
%------------------------------------------------------------------
This section describes the implementation of the three strategies being evaluated. Section \ref{sec:hadoop} describes the inverted index implementation using MapReduce within the Hadoop environment. Section \ref{sec:mpi} presents the same solution using the MPI library. Section \ref{sec:sequential} explains our baseline sequential implementation used to verify the results.

%------------------------------------------------------------------
\subsection{Data Sets} \label{datasets}
%------------------------------------------------------------------
In order to execute performance tests over the proposed implementations, we used two main datasets. The first is the \textit{ClueWeb} \cite{clueweb} dataset, a compilation of millions of \textit{HTML} web pages collected between January and February of 2009 by the University of Massachusetts and Carnegie Mellon University. In particular, we took two subsets of these pages, one of 1 GB and another of 4 GB, in which the predominant language was English.

The second dataset we used was the \textit{Wikipedia Dumps} \cite{wikidumps}, a collection of millions of Wikipedia articles in English in the form of wikitext source and metadata embedded in XML. For this dataset, we also used two subsets of 1 GB and 4 GB, selected at random.

The datasets were then parsed to remove any \textit{HTML} tags, script structures, wikitext annotations, XML markup, and \textit{stop words}. Section \ref{sec:experimentation} gives a complete description of the specific format used in each experiment.
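As an illustration of this cleaning step, the following sketch strips markup and stop words from a page. The regular expressions and the four-word stop list are deliberate simplifications of the parsing we actually performed, and the class name is ours.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class PagePreprocessor {
  // Illustrative stop-word list; the real experiments used a larger one.
  private static final Set<String> STOP_WORDS =
      new HashSet<String>(Arrays.asList("and", "or", "when", "the"));

  // Strip HTML/XML tags, drop non-letter characters, and remove stop words.
  public static String clean(String page) {
    String noTags = page.replaceAll("<[^>]*>", " ");              // remove markup
    String lettersOnly = noTags.toLowerCase().replaceAll("[^a-z]+", " ");
    StringBuilder out = new StringBuilder();
    for (String token : lettersOnly.trim().split("\\s+")) {
      if (!token.isEmpty() && !STOP_WORDS.contains(token)) {
        out.append(token).append(' ');
      }
    }
    return out.toString().trim();
  }

  public static void main(String[] args) {
    System.out.println(clean("<b>The cat</b> and the dog"));  // stop words and tags removed
  }
}
```

The cleaned words are what the mapper functions in the following sections consume.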

%------------------------------------------------------------------
\subsection{Hadoop} \label{sec:hadoop}
%------------------------------------------------------------------
The Hadoop ecosystem is a software framework targeted at data-intensive distributed applications. It provides an execution platform for MapReduce jobs over distributed-memory clusters and supports three major features. The first is a Java API that generalizes the implementation of the \textit{mapper} and \textit{reducer} functions, which are automatically deployed over the execution platform. Secondly, it provides the \textit{Hadoop Distributed File System} (HDFS), a mechanism to distribute large amounts of information over the cluster nodes in a transparent manner so that it behaves like a local file system. The third feature is a set of services to manage the MapReduce execution over the cluster (see Figure \ref{fig:hadoopScheme}). While the name-node and data-node services represent the core of the HDFS from the master and slave perspectives, the MapReduce execution administration is performed by the job and task tracker services.

%--------------------figure----------------
\begin{figure}[htb]        
  \begin{center}
    \includegraphics[scale=0.6]{../images/hadoopservices.png}
    \caption{Hadoop Multi-node Cluster Management Services \cite{hadooptutorial2011}}
    \label{fig:hadoopScheme}
  \end{center}
\end{figure}
%--------------------figure----------------

Both of our Hadoop inverted index implementations follow the MapReduce structure shown in Figure \ref{fig:mapReduceScheme}. As a general description, the pages to be indexed were placed in the HDFS in a specific format explained in Section \ref{sec:experimentation}. A set of mapper functions created multiple partial indexes that were then reduced by a single reduce operation.

%--------------------figure----------------
\begin{figure}[htb]
  \begin{center}
    \includegraphics[scale=0.6]{../images/mapreducegeneral.png}
    \caption{MapReduce Execution Scheme \cite{white2010hadoop}}
    \label{fig:mapReduceScheme}
  \end{center}
\end{figure}
%--------------------figure----------------

%------------------------------------------------------------------
\subsubsection{Hadoop Inverted Index Simple} \label{hadoopSimple}
%------------------------------------------------------------------
In the Hadoop Simple implementation, the mapper function (Listing 1) tokenizes the words within a file as either page IDs or actual text, and emits a \textit{$<word, page.id>$} intermediate key for each word occurrence. In the parsing process, each dataset was divided into independent files containing multiple inner pages. Each page ID (URL or article name) was mapped to a unique short ID in order to avoid large keys for extremely long URLs; those IDs were specially formatted with three ``\#'' symbols at the beginning. In the mapper function, the algorithm scans each token, tracks the page context by looking at the current page ID, and emits a word-occurrence tuple for each word until another page ID is found. This process continues until all tokens in the file have been read.

The reduce function (Listing 2) takes each mapper's intermediate keys and builds a hash map, counting the specific word frequency per page. At the end, the reducer writes the word as a final key, together with the string representation of the hash map as the corresponding value, which records the word's frequency of appearance per page.

\begin{lstlisting}[float, numbers=left, captionpos=b, frame=tb, xleftmargin=3em, caption={Hadoop inverted index Simple (Mapper)}, morekeywords= {void, int, for},label=lst:simpleMapper, basicstyle=\tiny]
public class Map extends Mapper<LongWritable, Text, Text, Text> {
  private String id = "default";

  public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {

    // Getting the file words in a single line
    String line = value.toString();

    // Split the string in separate words
    StringTokenizer count = new StringTokenizer(line.toLowerCase());

    // Verify if the current token is a page id
    while (count.hasMoreTokens()) {
      String wordT = count.nextToken();

      if (wordT.startsWith("###")) {
        id = wordT.substring(3);
      }
      else {
        // Emit <k,v> using the id and the consecutive words
        context.write(new Text(wordT), new Text(id));
      }
    }
  }
}
\end{lstlisting}

\begin{lstlisting}[float, numbers=left, captionpos=b, frame=tb, xleftmargin=3em, caption={Hadoop inverted index Simple (Reducer)}, morekeywords= {void, int, for},label=lst:simpleReducer, basicstyle=\tiny]
public class Reduce extends Reducer<Text, Text, Text, Text> {

  protected void reduce(Text key, Iterable<Text> values, Context context) throws IOException, InterruptedException {

	  // Map to count word occurrences per file
	  HashMap<String, Integer> fileCounter = new HashMap<String, Integer>();
		
	  // Get all the values (files)
	  for (Text value : values) {
	    String fileCount = value.toString();
	
	    // Is the word in the map?
	    Integer counter = fileCounter.get(fileCount);
			
	    // if not initialize the counter. Else, take the actual counter and add 1
	    if (counter == null) {
		  fileCounter.put(fileCount, 1);
	    }
	    else {
		  fileCounter.remove(fileCount);
		  Integer actual = counter+1;
		  fileCounter.put(fileCount, actual);
	    }
	  }
	  // write the reduction {word, <file, counter>}
	  context.write(key, new Text(fileCounter.toString()));
  }
}
\end{lstlisting}

%------------------------------------------------------------------
\subsubsection{Hadoop Inverted Index Combiner} \label{hadoopCombiner}
%------------------------------------------------------------------
In the inverted index combiner implementation, the mapper has the additional responsibility of building partial reductions over the processed inputs; this partial reduction process is referred to as a combiner. Instead of emitting one \textit{$<word, page.id>$} key per word occurrence, the mapper counts the occurrences of the words in the pages provided as input, emitting intermediate keys of the form \textit{$<word, <page.id=occurrences>>$}. In this case, fewer messages are sent to the reducer than in the simple implementation: since the algorithm combines the partial counts per word, only one intermediate key is emitted per word per page, as opposed to one per word appearance as in the simple version. In Listing 3, the mapper function is shown with its two main phases. In the first phase (lines 9 to 46) the algorithm builds a nested hash map in order to count the occurrences of each word within each page. In the second phase (lines 48 to 65), the same data structure is traversed and the combined intermediate keys are emitted.

\begin{lstlisting}[float, numbers=left, captionpos=b, frame=tb, xleftmargin=3em, caption={Hadoop inverted index Combiner (Mapper)}, morekeywords= {void, int, for},label=lst:combinerMapper, basicstyle=\tiny]
public class Map extends Mapper<LongWritable, Text, Text, Text>  {
	
  // Global variables accessed across parallel maps
  private String id="default";
  private HashMap<String, Integer> fileCounterMap =null;
 
  public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
			
    // Hashmap with the partial combiners per file read
    HashMap<String, HashMap<String, Integer>> master = new HashMap<String, HashMap<String, Integer>>();
				
    // Getting the file words in a single line
    String line = value.toString();
	
    // Split the string in separate words
    StringTokenizer count = new StringTokenizer(line.toLowerCase());
				
    while (count.hasMoreTokens()) {
      String wordT = count.nextToken();
	
      if(wordT.startsWith("###")){
        id = wordT.substring(3);
      }
      else
      {
        // Map to count word occurrences per file (given id)
        fileCounterMap = master.get(id);
					
        if(fileCounterMap == null){
          //Does not exist? create a counter map for this page, counting the current word
          fileCounterMap = new HashMap<String, Integer>();
          fileCounterMap.put(wordT, 1); master.put(id, fileCounterMap);
        }
        else{
          //Does Exist? start counting the words introducing a new counter per word within a page
          Integer counter = fileCounterMap.get(wordT);
          if (counter == null) {
            fileCounterMap.put(wordT, 1);
          }
          else {
            Integer actual = counter+1;
            fileCounterMap.put(wordT, actual);
          }	
        }
      }
    }
    
    // Emit keys Phase
    
    //Per File counting map
    Iterator<String> itCombinerFile = master.keySet().iterator();
    while(itCombinerFile.hasNext()){
      String fileid = itCombinerFile.next();
      HashMap<String, Integer> countFile= master.get(fileid);
				
      // and per  Word counting map within a file counter
      Iterator<String> itCombiner = countFile.keySet().iterator();
				
      while(itCombiner.hasNext()){
        String word = itCombiner.next();
        Integer countWord = countFile.get(word);
					
        // Emit the KV tuple <word, <file,counter>
        String combinerTuple = fileid+"="+countWord;
        context.write(new Text(word), new Text(combinerTuple));
      }
    }
  }
}
\end{lstlisting}

From the reducer perspective (see Listing 4), the implementation is very similar to that of the simple implementation. The only difference lies in the initial decomposition of the value of the received tuple. In this case, the reducer has to extract from the value both the $page.id$ and the partial count of the word in order to build the unified final count. This is done by splitting the value and adding the partial counters to the global counter map under the respective word and page ID (lines 11 to 15). As mentioned before, the combiner is able to make larger contributions to the index at each step, which reduces the granularity since partial counts are received instead of one unit of frequency at a time.

\begin{lstlisting}[float, numbers=left, captionpos=b, frame=tb, xleftmargin=3em, caption={Hadoop inverted index Combiner (Reducer)}, morekeywords= {void, int, for},label=lst:combinerReducer, basicstyle=\tiny]
public class Reduce extends Reducer<Text, Text, Text, Text> {

  protected void reduce(Text key, Iterable<Text> values, Context context)	throws IOException, InterruptedException
  {
    // Map to count word occurrences per file
    HashMap<String, Integer> fileCounter = new HashMap<String, Integer> ();
    
    // Get all the values (files)
    for (Text value : values) {
	
      String fileCount = value.toString();
      String [] fileCountTuple = fileCount.split("=");
			
      String fileName = fileCountTuple[0];
      Integer countForFile = Integer.parseInt(fileCountTuple[1]);
						
      // Is the word in the map?
      Integer counter = fileCounter.get(fileName);
			
      // if not initialize the counter. Else, take the actual counter and add the partial count
      if (counter == null) {
        fileCounter.put(fileName, countForFile);
      } 
      else {
        // put overwrites the existing entry with the accumulated count
        fileCounter.put(fileName, counter + countForFile);
      }
    }
    // Emit the reduction {word, <file, counter>}
    context.write(key, new Text(fileCounter.toString()));
  }
}
\end{lstlisting}

%------------------------------------------------------------------
\subsection{MPI}\label{sec:mpi}
%------------------------------------------------------------------
We used a simple MPI implementation of MapReduce to compare the Hadoop implementation against.  In the \textit{map} phase, each map task checks its local folders for its input data.  This design decision was made to mimic the HDFS that Hadoop uses, since we propagated the data prior to execution \textit{(see Section \ref{sec:experimentation})}.  Once the mappers get their input data, they begin to build an index of the words in each file.  The indexes are stored in a Standard Template Library (STL) \textit{map} container whose \textit{keys} are the page names and whose \textit{values} are in turn STL \textit{maps} of word/count tuples, one for each word appearing in the given page.  Each time a new page is found within the input file, a new entry is added to the combiner \textit{map} with the first word appearing on that page (see lines 15-21 of Listing 5).  The page is then parsed for space-delimited words, which are either added to the word-count \textit{map} as new entries or have their existing counts incremented.  After a sufficiently large map has been built by the mapper, it sends the data to the reducer.  This index is built and sent once per input file.

Once there is enough data to send, the mapper first makes a synchronous send to the reducer telling it how many pages it is about to send (see Listing 6).  Upon return from the synchronous send, the mapper packages up each page as a list of the words and the number of appearances of each word on the given page (Listing 6, lines 6-24).  The mapper inverts this information when it builds the package for ease of insertion by the reducer.  These values are all concatenated using a ":" delimiter so they can be parsed by the reducer (Listing 6, lines 11-23). Once a package has been formed, it is sent to the reducer on a per-page basis.  This implementation uses a per-page granularity for communication; however, this could easily be changed to either increase or decrease communication. One such variant was tested and provided us with lessons on how MPI handles memory management \textit{(see Section \ref{sec:analysis})}.  Once the \textit{map} task is finished sending its data, it can begin to process another input file.
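To make the package layout concrete, the following self-contained sketch encodes one page's word counts into the ":"-delimited string described above and parses it back. It is an illustration only: the helper names (\texttt{encodePackage}, \texttt{decodePackage}) are ours and do not appear in the listings, which perform the equivalent steps with \texttt{append} and \texttt{strtok}.

```cpp
#include <map>
#include <sstream>
#include <string>

// Encode one page's word counts as "word:page=count,:word:page=count,:..."
// mirroring the package layout built by the mapper (Listing 6).
std::string encodePackage(const std::string& page,
                          const std::map<std::string, int>& counts) {
  std::string package;
  for (const auto& kv : counts) {
    package += kv.first + ":" + page + "=" + std::to_string(kv.second) + ",:";
  }
  return package;
}

// Parse a package back into <word, "page=count,"> pairs, as the reducer
// does with strtok in Listing 7; repeated words accumulate their entries.
std::map<std::string, std::string> decodePackage(const std::string& package) {
  std::map<std::string, std::string> index;
  std::stringstream ss(package);
  std::string key, value;
  while (std::getline(ss, key, ':') && std::getline(ss, value, ':')) {
    index[key].append(value);
  }
  return index;
}
```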

The reducer waits for the first mapper to finish a task, then receives the number of packages that mapper will send.  The first mapper in our implementation is the \textit{rank 1} process and the reducer is assigned \textit{rank 0}.  Once again, an STL \textit{map} is used to store the inverted index produced from the data sent by the mappers.  After a mapper has sent the number of packages, the reducer begins to gather the packages and parse them into the \textit{key/value} pairs of the inverted index (Listing 7, lines 8-14).  Once parsed, the reducer inserts or updates the inverted index entry for each word in a page, for all packages sent from a mapper (Listing 7, lines 17-35).  In our implementation, the inverted index is then written to disk (Listing 7, line 39).  This was done once per input file in order to avoid memory limitations on the system on which the experiments were run; however, with a simple change this functionality could be made to write more or less frequently.  Not shown in the listings is the outer function that wraps the whole execution and provides a round-robin type of fairness, allowing each mapper to send one input file in turn.  This was done so that a mapper can send an input file and then begin processing another while a second mapper sends its processed data.
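The round-robin driver that is not shown in the listings can be sketched as follows. This is a minimal illustration under our own naming (\texttt{roundRobinSchedule}); the actual \texttt{MPI\_Recv}/merge step is replaced here by a record of the visit order, so the fairness property can be seen in isolation.

```cpp
#include <string>
#include <vector>

// Hedged sketch of the round-robin driver described above: in each round the
// reducer (rank 0) takes one finished input file from each mapper in turn,
// so a mapper can process its next file while another mapper is sending.
std::vector<std::string> roundRobinSchedule(int numMappers, int filesPerMapper) {
  std::vector<std::string> visits;
  for (int file = 0; file < filesPerMapper; ++file) {
    for (int mapper = 1; mapper <= numMappers; ++mapper) {
      // In the real implementation, this is where gatherAndWrite(mapper, file)
      // would receive and merge that mapper's packages (Listing 7).
      visits.push_back("m" + std::to_string(mapper) + ":f" + std::to_string(file));
    }
  }
  return visits;
}
```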

\begin{lstlisting}[float, numbers=left, captionpos=b, frame=tb, xleftmargin=3em, caption={MPI inverted index Mapper with Combiner}, morekeywords= {void, int, for},label=lst:mpiMapper, basicstyle=\tiny]
void parseAndSend(vector<string> *files) {
  /* DATA STRUCTURE AND VARIABLE DECLARATIONS OMITTED FROM LISTING */
 
  //Send the number of files for this particular mapper 
  MPI_Send(&num_files,1,MPI_INT,0,TAG_INIT1,MPI_COMM_WORLD);

  /* Loop over the directory and process each file */
  while(i < num_files) {
    //Parse a single file into the map
    while (!in.eof()) {
      in>>word;
      
      /*Get the page name from the special character sequence in the parsed data
         and add the next word to the combiner map */
      if(word.substr(0,3).compare("###")==0){
        pagename = word.substr(3);
        in>>word;
        wordCountMap.insert(pair<string, int>(word, 1));
        combinerMap.insert(pair<string, map<string,int> >(pagename, wordCountMap));
        wordCountMap.clear();
      }
      /*Check for an existing word for a file, increase the count if it exists or add it to the map */
      else {
        WordCountMap::iterator combinerMapIter = combinerMap[pagename].begin();
        combinerMapIter = combinerMap[pagename].find(word);
        if (combinerMapIter == combinerMap[pagename].end()) {
          combinerMap[pagename].insert(pair<string, int >(word, 1));
        }
        else {
          combinerMap[pagename][word]++;
        }
      }
    }
    sendMap(&combinerMap);
    combinerMap.clear();
    i++;
  }
}
\end{lstlisting}

\begin{lstlisting}[float, numbers=left, captionpos=b, frame=tb, xleftmargin=3em, caption={MPI inverted index Mapper Communication}, morekeywords= {void, int, for},label=lst:mpiSend, basicstyle=\tiny]
void sendMap(CombinerMap *combinerMap) {
  /* DATA STRUCTURE AND VARIABLE DECLARATIONS OMITTED FROM LISTING */    

  MPI_Ssend(&mapSize, 1 , MPI_INT, 0, TAG_INIT1,MPI_COMM_WORLD);

  for (CombinerMap::iterator combinerMapIter (combinerMap->begin()); 
  combinerMapIter != combinerMap->end(); ++combinerMapIter){
    currentPage = combinerMapIter->first;
    package_size = 0;
    package = "";
    for (WordCountMap::iterator wordCountMapIter ((combinerMapIter->second).begin()); 
    wordCountMapIter != (combinerMapIter->second).end(); ++wordCountMapIter){
      key = wordCountMapIter->first;
      value = currentPage+"="+ int2stringTT(wordCountMapIter->second);
      value.append(",");
      key_size = key.size();
      value_size = value.size();
         
      package.append(key);
      package.append(":");
      package.append(value);
      package.append(":");     
    }
    package_size = (int)package.size()+1;
    //Copy and send the packaged data
    if(!(c_package = (char*)malloc((package_size)*sizeof(char)))){
      fprintf(stderr,"PACKAGE MALLOC FAILED ON NODE %d\n",rank);
      exit(-1);
    }
    strcpy(c_package, package.c_str());
    MPI_Send(&package_size, 1 , MPI_INT, 0, TAG_INIT2,MPI_COMM_WORLD);
    MPI_Send(c_package, package_size , MPI_CHAR, 0, TAG_INIT3,MPI_COMM_WORLD);
    free(c_package);   
  }
}
\end{lstlisting}

\begin{lstlisting}[float, numbers=left, captionpos=b, frame=tb, xleftmargin=3em, caption={MPI inverted index Reducer}, morekeywords= {void, int, for},label=lst:mpiReduce, basicstyle=\tiny]
void gatherAndWrite(int sender, int file_num) {
  /* DATA STRUCTURE AND VARIABLE DECLARATIONS OMITTED FROM LISTING */    
  
  MPI_Recv(&num_pages, 1 ,MPI_INT, sender, TAG_INIT1, MPI_COMM_WORLD,MPI_STATUS_IGNORE);
  
  //Receive a package from a mapper
  for(i=0;i<num_pages;i++){
    MPI_Recv(&package_size,1,MPI_INT,sender,TAG_INIT2, MPI_COMM_WORLD,MPI_STATUS_IGNORE);
    if(!(package = (char*)malloc(package_size*sizeof(char)))) {
      exit(-1);
    }
    MPI_Recv(package, package_size , MPI_CHAR, sender, TAG_INIT3,MPI_COMM_WORLD,MPI_STATUS_IGNORE);
    
    //Parse the package details  
    key = strtok (package,":");
    value = key;
    while (key != NULL){
      value = strtok(NULL,":");
      if(value == NULL) {
        break;
      }        
        str_key.assign(key);
        str_value.assign(value);
          
        InvertedIndex::iterator iter = inv_index.begin();
        iter = inv_index.find(str_key);
           
        if (iter == inv_index.end()){
          inv_index.insert(pair<string, string>(str_key, str_value));
        } 
        else{
          inv_index[str_key].append(str_value);
        }      
        key = strtok (NULL, ":");
      }
    free(package);
  }
  //Write the inverted index to a file
  writing(&inv_index,sender, file_num);
  inv_index.clear();
}
\end{lstlisting}


%------------------------------------------------------------------
\subsection{Sequential} \label{sec:sequential}
%------------------------------------------------------------------
The sequential implementation follows the same methods as the MPI implementation, with the exception that the communication has been removed.  The data is first read in from disk, then placed directly into the inverted index data structure with the key and value reversed.  This data structure also has to be periodically written to disk after it reaches a certain size.  Here, the same strategy was used as in the MPI implementation: the partial indexes were written once per input file.  This could be changed in the same fashion as mentioned in Section \ref{sec:mpi}.  The sequential implementation was used mainly to validate that the indexes produced by the parallel implementations were correct, which was done using the Unix \textit{grep} and \textit{diff} commands.

%------------------------------------------------------------------
\section{Experimentation} \label{sec:experimentation}
%------------------------------------------------------------------
In order to address our experimental questions, we designed a series of experiments to evaluate the runtime and resource consumption of our implementations. The objective of the experiments is to allow us to observe the different behaviours of the various implementations.  Each implementation was run in each of the four major experimental configurations we constructed. The experiments were designed by varying the input size and by using data with different characteristics (see Table \ref{tab:expsummary}).

\begin{table}[htbp]
\begin{center} 
\begin{tabular}{|l|l|l|l|l|}
\hline
\textbf{Implementation-Dataset} & \textbf{Clueweb 1 GB} & \textbf{Clueweb 4 GB} & \textbf{Wiki 1 GB} & \textbf{Wiki 4 GB} \\ 
\hline
\textbf{Hadoop Simple} & Exp. 1 & Exp. 5 & Exp. 9 & Exp. 13 \\ 
\hline
\textbf{Hadoop Combiner} & Exp. 2 & Exp. 6 & Exp. 10 & Exp. 14 \\ 
\hline
\textbf{MPI} & Exp. 3 & Exp. 7 & Exp. 11 & Exp. 15 \\ 
\hline
\textbf{Sequential} & Exp. 4 & Exp. 8 & Exp. 12 & Exp. 16 \\ 
\hline
\end{tabular}
\end{center}
\caption{Experiments Summary (Implementations vs. Datasets)}
\label{tab:expsummary}
\end{table}

Experiments 1 to 4 focus on evaluating the overall performance of the implementations on a 1 GB dataset of ClueWeb files. We divided the dataset into individual input files of around 100 MB each. In the Hadoop implementations, the files were placed in the HDFS of our deployment platform (see Section \ref{enviroment}). In the MPI-related experiments, the set of files was distributed among the input directories of the MPI application nodes. For this experiment we had 10 different input files, each containing multiple parsed web pages. It is important to mention that the distribution of the files was consistent with a homogeneous load-balancing policy: in the MPI experiments each node processed 3 to 4 files during its execution, and in the Hadoop implementations the HDFS distributed the input files among the cluster nodes.

In experiments 5 to 8, all the implementations were tested using a much larger ClueWeb dataset. In this case, forty 100 MB files were used as the input, totalling 4 GB.  The files were distributed among the cluster nodes in the same way as in the previous experiments, under the same fair load-balancing policy.

The third and fourth experimental groups, experiments 9 to 16, followed the same structure as the previous two groups, but used the Wikipedia dataset as input. The datasets were divided using the same strategy described above for the 1 GB and 4 GB experiments.

We designed the experiments to vary the input size and type in order to observe the performance characteristics of each implementation on varying data.  We chose the ClueWeb and Wikipedia data specifically because the Wikipedia data is likely to have a larger range of vocabulary, all of it in English. The ClueWeb data will probably use less vocabulary; however, it may contain multiple languages as well as many more slang terms, allowing for the possibility of a much sparser index. This property, regarding how many page entries the index holds for each word, is referred to as the word density of the dataset, with Wikipedia having a higher density than the ClueWeb data for the reasons given.  According to various statistics, both Wikipedia articles and web pages tend to have similar average word counts per page, so this should not impact the resulting indexes \cite{WikiStats}, \cite{WebStats}.

%------------------------------------------------------------------
\subsection{Metrics and Measured Items}
%------------------------------------------------------------------
In order to address our research questions, we measured the following items in each experimental execution.

\begin{enumerate}
\item \textbf{Execution time.}  This was done for the map as well as the total execution time. In the case of the Hadoop implementations, the execution services provide a summary of the execution time per phase once a job is completed. For the MPI implementations, time measurements were implemented by having each mapper sum the time spent in the map phase including idle time between communication with the reducer and having the reducer keep track of the total running time.
\item \textbf{Average memory consumption.} This item was recorded manually by monitoring the Linux execution management console, namely the \textit{top} command.
\item \textbf{Disk space required for the execution}. For all the experiments, each time a dataset was configured and deployed, the hard drive free space was compared with a previous clean state in order to get the total used space prior to the execution. Additionally, in the case of the Hadoop implementations, the consumption per node was monitored manually by recording the hard drive space available during random intervals throughout execution.
\item \textbf{Network communication patterns.} We instrumented the network of the master node in all the implementations and recorded the different types of traffic patterns exposed by their parallel execution. We used the \textit{tcpdump} console packet analyzer to capture the traffic and Wireshark 1.6 to filter and plot the patterns.
\end{enumerate}
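As an illustration of item 1, a mapper's accumulated map-phase time can be collected with a small timer of the following shape. This is a hedged sketch under our own naming (\texttt{PhaseTimer}); the actual implementation simply summed wall-clock differences around each map/send cycle, including idle time between communications with the reducer.

```cpp
#include <chrono>

// Accumulates the wall-clock time spent across repeated begin()/end() cycles,
// the way each mapper summed its time in the map phase.
class PhaseTimer {
  std::chrono::steady_clock::time_point start_;
  double total_ = 0.0;  // accumulated seconds
 public:
  void begin() { start_ = std::chrono::steady_clock::now(); }
  void end() {
    total_ += std::chrono::duration<double>(
        std::chrono::steady_clock::now() - start_).count();
  }
  double seconds() const { return total_; }
};
```

Each mapper would call \texttt{begin()} at the start of a map/send cycle and \texttt{end()} when it finishes, reporting \texttt{seconds()} at shutdown.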

%------------------------------------------------------------------
\subsection{Experimental Environment} \label{enviroment}
%------------------------------------------------------------------
All our experiments were executed on a four-node virtualized cluster. Each node has the description portrayed in Table \ref{tab:nodes}. The virtual nodes were hosted on a shared-memory server running a virtualization environment; the description of the host machine is presented in Table \ref{tab:machines}. A NAT strategy, supported by a \textit{VMware ESX Virtual Switch}, was used to connect the experimental nodes, and a NAT traversal configuration was used to expose the machines to remote public connections. We call the cluster nodes SSRG2, SSRG3, SSRG4, and SSRG7. In all cases the SSRG4 node was the master of the execution, hosting the \textit{JobTracker} and \textit{NameNode} services for Hadoop, and acting as the master node of execution for the MPI-related experiments.

\begin{table}[htbp]
\begin{center} 
\small{
\begin{tabular}{|l|l|}
\cline{2-2}
\multicolumn{1}{l|}{} & \textbf{Host IBM System x3500 M2}  \\ 
\hline
\textbf{Processor} & 2x Xeon 4C E5530 80W 2.4GHz/1066MHz/8MB \\ 
\hline
\textbf{Hard Drive} & 8x IBM 73 GB 2.5in SFF Slim-HS 15K 6Gbps SAS HDD \\ 
\hline
\textbf{Memory} & 8 16x4GB Dual Rank PC3-10600 CL9 ECC DDR3 1333MHz Chipkill LP RDIMM \\ 
\hline
\textbf{OS} &  VMWare ESXi 4 \\ 
\hline
\end{tabular}
}
\end{center}
\caption{Host Machine Subject Description}
\label{tab:machines}
\end{table}

\begin{table}[htbp]
\begin{center} 
\small{
\begin{tabular}{|l|l|}
\cline{2-2}
\multicolumn{1}{l|}{} & \textbf{Nodes 1, 2, 3, and 4}  \\ 
\hline
\textbf{Processor} & 2 Cores (4 Virtual Threads) \\ 
\hline
\textbf{Hard Drive} & 50 GB \\ 
\hline
\textbf{Memory} & 8 GB \\ 
\hline
\textbf{OS} & Ubuntu 10.04 (32 bits) \\ 
\hline
\end{tabular}
}
\end{center}
\caption{Virtual Node Subject Description}
\label{tab:nodes}
\end{table}

%------------------------------------------------------------------
\section{Results} \label{sec:results}
%------------------------------------------------------------------
In this section we summarize the results obtained for each experiment. The four main tables and graphs show the observed comparative measurements. Additionally, the collected network patterns are portrayed in the context of the experiments. Discussion and analysis of the results are presented in Section \ref{sec:analysis}.

Table \ref{tab:indexSize} shows the size of the inverted index generated by each implementation. Note that, for reasons discussed in the analysis section, the inverted index generated by each implementation differed slightly in size. Table \ref{tab:indexSize}'s graphical representation is presented in Figure \ref{fig:indexSizeResults}. The main differences can be observed by comparing the Hadoop family of implementations to the MPI implementation: in general, the indexes generated by the MPI implementation were smaller than the ones generated by the Hadoop implementations, with differences on the order of megabytes. The effect of the word density of each dataset showed that indexes generated from different data of the same size are dissimilar.

\begin{table}[htbp]
\begin{center} 
\small{
\begin{tabular}{|l|l|l|l|}
\hline
\multicolumn{4}{|c|}{\textbf{Inverted Index Size (GB)}} \\ 
\hline
\textbf{Dataset/Implementation} & \textbf{Hadoop Simple} & \textbf{Hadoop Combiner} & \textbf{MPI} \\ 
\hline
Wiki 1 GB & 2.23 & 2.23 & 2.2 \\ 
\hline
Web 1 GB & 2.25 & 2.25 & 2.3 \\ 
\hline
Wiki 4 GB & 6.4 & 6.4 & 6.16 \\ 
\hline
Web 4 GB & 6.9 & 6.9 & 6.6 \\ 
\hline
\end{tabular}
}
\end{center}
\caption{Experiment Results - Generated Index Size Comparison}
\label{tab:indexSize}
\end{table}
%--------------------figure----------------
\begin{figure}
  \begin{center}
    \includegraphics[scale=0.6]{../images/DiskUsage.png}
    \caption{Filesystem Usage - Experimental Results}
    \label{fig:filesystemResults}
  \end{center}
\end{figure}
%--------------------figure----------------

Table \ref{tab:diskUsage} points to the differences in the amount of filesystem disk space used by each implementation. It is important to mention that, even for the same dataset, each implementation used a different amount of space. As a general result, the Hadoop distributed filesystem (HDFS) used on average 120\% more disk space than the MPI implementation. This result was obtained by measuring the amount of free space on each node in the cluster (or, in the case of Hadoop, in the HDFS) and comparing this measure with the free space after the input files were placed. A graphical representation of this comparison is presented in Figure \ref{fig:filesystemResults}.

\begin{table}[htbp]
\begin{center} 
\small{
\begin{tabular}{|l|l|l|l|}
\hline
\multicolumn{4}{|c|}{\textbf{Filesystem Disk Usage (GB)}} \\ 
\hline
\textbf{Dataset/Implementation} & \textbf{Hadoop Simple} & \textbf{Hadoop Combiner} & \textbf{MPI} \\ 
\hline
Wiki 1 GB & 2.22 & 2.22 & 1 \\ 
\hline
Web 1 GB & 2.55 & 2.55 & 1 \\ 
\hline
Wiki 4 GB & 6.55 & 6.55 & 4 \\ 
\hline
Web 4 GB & 7.89 & 7.89 & 4 \\ 
\hline
\end{tabular}
}
\end{center}
\caption{Experiment Results - Filesystem Disk Usage (GB)}
\label{tab:diskUsage}
\end{table}

%--------------------figure----------------
\begin{figure}
  \begin{center}
    \includegraphics[scale=0.6]{../images/IndexSize.png}
    \caption{Calculated inverted index Size - Experimental Results}
    \label{fig:indexSizeResults}
  \end{center}
\end{figure}
%--------------------figure----------------

As we presented in Section \ref{sec:implementation}, our implementations offered different perspectives on how to perform the mapper phase of the inverted index computation. Table \ref{tab:mapTime} shows the execution time of the map phase in each experiment. In summary, for the large datasets the Hadoop combiner map phase was the fastest of the three implementations under study. The MPI mapper functions were on average 5\% faster on the 1 GB datasets. For the 4 GB datasets, the Hadoop combiner mappers were about 50\% faster than the Hadoop simple mappers, and around 70\% faster than the MPI mappers. Figure \ref{fig:mapPhaseTime} shows a graphical comparison of the observed time differences.

\begin{table}[htbp]
\begin{center} 
\small{
\begin{tabular}{|l|l|l|l|}
\hline
\multicolumn{4}{|c|}{\textbf{Map Phase Time (Minutes)}} \\ 
\hline
\textbf{Dataset/Implementation} & \textbf{Hadoop Simple} &\textbf{ Hadoop Combiner} & \textbf{MPI} \\ 
\hline
Wiki 1 GB & 3.5 & 2.5 & 2.1 \\ 
\hline
Web 1 GB & 3.4 & 2.5 & 3.75 \\ 
\hline
Wiki 4 GB & 8.3 & 6.4 & 16.64 \\ 
\hline
Web 4 GB & 11.6 & 7.3 & 21.84 \\ 
\hline
\end{tabular}
}
\end{center}
\caption{Experiment Results - Total Map Phase Time Comparison}
\label{tab:mapTime}
\end{table}

%--------------------figure----------------
\begin{figure}
  \begin{center}
    \includegraphics[scale=0.6]{../images/MapTime(Minutes).png}
    \caption{Map Phase Execution Time - Experimental Results}
    \label{fig:mapPhaseTime}
  \end{center}
\end{figure}
%--------------------figure----------------


Table \ref{tab:totalTime} and Figure \ref{fig:totalTime} show the total time consumed by each implementation to generate the inverted index for a given input. Although the results should be compared keeping in mind each implementation's specific conditions and limitations, as an overall measure the Hadoop combiner was on average 33\% faster than the Hadoop simple implementation, and 18\% faster than the MPI implementation.

\begin{table}[htbp]
\begin{center} 
\small{
\begin{tabular}{|l|l|l|l|}
\hline
\multicolumn{4}{|c|}{\textbf{Inverted Index Total Computation Time (Minutes)}} \\ 
\hline
\textbf{Dataset/Implementation} & \textbf{Hadoop Simple} & \textbf{Hadoop Combiner} & \textbf{MPI} \\ 
\hline
Wiki 1 GB & 8.53 & 6.2 & 9.4 \\ 
\hline
Web 1 GB & 9.13 & 7.26 & 10.18 \\ 
\hline
Wiki 4 GB & 24.55 & 18.46 & 21.92 \\ 
\hline
Web 4 GB & 36.29 & 22.41 & 21.4 \\ 
\hline
\end{tabular}
}
\end{center}
\caption{Experiment Results - Total Time Comparison}
\label{tab:totalTime}
\end{table}

%--------------------figure----------------
\begin{figure}
  \begin{center}
    \includegraphics[scale=0.6]{../images/TotalTime.png}
    \caption{Total Execution Time - Experimental Results}
    \label{fig:totalTime}
  \end{center}
\end{figure}
%--------------------figure----------------

We instrumented and monitored the network activity during the execution of the experiments, and the network analysis shows patterns corresponding to each phase of the execution. The network activity was captured from the perspective of the master node (SSRG4).

In all the network activity graphs for the Hadoop implementations (Figures \ref{fig:hadoopCombinerWeb1Network}, \ref{fig:hadoopCombinerWeb4Network}, \ref{fig:hadoopCombinerWiki1Network}, and \ref{fig:hadoopCombinerWiki4Network}), a pattern of two main zones of network activity emerged. For all the dataset types (Figures \ref{fig:hadoopCombinerWeb1Network} and \ref{fig:hadoopCombinerWeb4Network} for the ClueWeb dataset, and \ref{fig:hadoopCombinerWiki1Network} and \ref{fig:hadoopCombinerWiki4Network} for the Wikipedia dataset), a first set of peaks occurred during the map phase of the algorithms; after that, and for a relatively short period of time, the network was almost silent. A second set of activity peaks was observed at the end of the reduce phase of the experiments. In general, the second set of peaks seems to carry less information than the first. What happened in each zone for this type of implementation is discussed in Section \ref{sec:analysis}.

%--------------------figure----------------
\begin{figure}
  \begin{center}
    \includegraphics[scale=0.6]{../images/hadoopWeb1Simple.png}
    \caption{Hadoop Simple - ClueWeb 1 GB Network Results}
    \label{fig:hadoopSimpleWeb1Network}
  \end{center}
\end{figure}
%--------------------figure----------------

%--------------------figure----------------
\begin{figure}
  \begin{center}
    \includegraphics[scale=0.6]{../images/hadoopWiki1Simple.png}
    \caption{Hadoop Simple - Wiki 1 GB Network Results}
    \label{fig:hadoopSimpleWiki1Network}
  \end{center}
\end{figure}
%--------------------figure----------------

%--------------------figure----------------
\begin{figure}
  \begin{center}
    \includegraphics[scale=0.6]{../images/hadoopWeb3Simple.png}
    \caption{Hadoop Simple - ClueWeb 4 GB Network Results}
    \label{fig:hadoopSimpleWeb4Network}
  \end{center}
\end{figure}
%--------------------figure----------------

Interesting behaviours were found given that the observed activity had a unique point of view depending on the placement of the reducer. For example, in Figures \ref{fig:hadoopCombinerWeb1Network} and \ref{fig:hadoopSimpleWiki4Network}, clear communication patterns emerged between SSRG4 and two different nodes. In the case of Figure \ref{fig:hadoopCombinerWeb1Network} the communication was with the SSRG7 node, whereas in Figure \ref{fig:hadoopSimpleWiki4Network} it was with SSRG3. We believe this is because the received and sent messages only travel between the mappers and the reducer node. At the beginning of each execution, Hadoop randomly assigns the reducer among the existing nodes in the cluster. In the Hadoop combiner execution with the Web 1 GB dataset (Figure \ref{fig:hadoopCombinerWeb1Network}) the reducer was SSRG7, and in the Hadoop simple Wiki 4 GB execution it was SSRG3.

%--------------------figure----------------
\begin{figure}
  \begin{center}
    \includegraphics[scale=0.6]{../images/hadoopWiki3Simple.png}
    \caption{Hadoop Simple - Wiki 4 GB Network Results}
    \label{fig:hadoopSimpleWiki4Network}
  \end{center}
\end{figure}
%--------------------figure----------------

%--------------------figure----------------
\begin{figure}
  \begin{center}
    \includegraphics[scale=0.6]{../images/hadoopWeb1Combiner.png}
    \caption{Hadoop Combiner - ClueWeb 1 GB Network Results}
    \label{fig:hadoopCombinerWeb1Network}
  \end{center}
\end{figure}
%--------------------figure----------------

%--------------------figure----------------
\begin{figure}
  \begin{center}
    \includegraphics[scale=0.6]{../images/hadoopWiki1Combiner.png}
    \caption{Hadoop Combiner - Wiki 1 GB Network Results}
    \label{fig:hadoopCombinerWiki1Network}
  \end{center}
\end{figure}
%--------------------figure----------------

%--------------------figure----------------
\begin{figure}
  \begin{center}
    \includegraphics[scale=0.6]{../images/hadoopWeb3Combiner.png}
    \caption{Hadoop Combiner - ClueWeb 4 GB Network Results}
    \label{fig:hadoopCombinerWeb4Network}
  \end{center}
\end{figure}
%--------------------figure----------------

%--------------------figure----------------
\begin{figure}
  \begin{center}
    \includegraphics[scale=0.6]{../images/hadoopWiki3Combiner.png}
    \caption{Hadoop Combiner - Wiki 4 GB Network Results}
    \label{fig:hadoopCombinerWiki4Network}
  \end{center}
\end{figure}
%--------------------figure----------------

Figure \ref{fig:hadoopSimpleWeb4Network} shows a very interesting event from the SSRG4 perspective. In this case, we shut down the SSRG7 node, which was originally assigned to perform the reduce operation, during the map phase of the algorithm. As a consequence, Hadoop reassigned the reduce task to the SSRG3 node, after which the network activity switched to communication with the new reducer node.

Figure \ref{fig:hadoopCombinerWiki4Network} shows the case where the SSRG4 node itself was selected as the reducer. In this case, the collected information showed communication to and from all of the other nodes in the cluster.

The MPI communication patterns are displayed in Figures \ref{fig:MPIWeb1Network}, \ref{fig:MPIWeb4Network}, \ref{fig:MPIWiki1Network}, and \ref{fig:MPIWiki4Network}. Here, the patterns for the 1 GB datasets (Figure \ref{fig:MPIWeb1Network} for ClueWeb 1 GB and Figure \ref{fig:MPIWiki1Network} for Wikipedia 1 GB) differ significantly from those for the 4 GB datasets (Figures \ref{fig:MPIWeb4Network} and \ref{fig:MPIWiki4Network} for ClueWeb and Wikipedia 4 GB, respectively). While the first two show three main symmetric zones in sequence, the final two show interleaved communication. This behaviour was caused by the communication fairness strategy involved in each experimental execution. More details about this observation are presented in Section \ref{sec:analysis}.

%--------------------figure----------------
\begin{figure}
  \begin{center}
    \includegraphics[scale=0.4]{../images/mpiWeb1.png}
    \caption{MPI - ClueWeb 1 GB Network Results}
    \label{fig:MPIWeb1Network}
  \end{center}
\end{figure}
%--------------------figure----------------

%--------------------figure----------------
\begin{figure}
  \begin{center}
    \includegraphics[scale=0.4]{../images/mpiWiki1.png}
    \caption{MPI - Wiki 1 GB Network Results}
    \label{fig:MPIWiki1Network}
  \end{center}
\end{figure}
%--------------------figure----------------

%--------------------figure----------------
\begin{figure}
  \begin{center}
    \includegraphics[scale=0.6]{../images/mpiWeb3.png}
    \caption{MPI - ClueWeb 4 GB Network Results}
    \label{fig:MPIWeb4Network}
  \end{center}
\end{figure}
%--------------------figure----------------

%--------------------figure----------------
\begin{figure}
  \begin{center}
    \includegraphics[scale=0.6]{../images/mpiWiki3.png}
    \caption{MPI - Wiki 4 GB Network Results}
    \label{fig:MPIWiki4Network}
  \end{center}
\end{figure}
%--------------------figure----------------


%------------------------------------------------------------------
\section{Analysis} \label{sec:analysis}
%------------------------------------------------------------------
\subsection{Inverted Index Size}
\textbf{The size of the final inverted index for a given dataset differs between the MPI and the Hadoop implementations.}  As shown in Table \ref{tab:indexSize}, the final file sizes of the inverted indexes are smaller in the MPI implementation than in the Hadoop implementation.  The reason for this difference lies in the way Java and C\textsuperscript{++} encode the characters of their strings.  Java Strings are stored as sequences of UTF-16 code units, whereas our C\textsuperscript{++} implementation processed the input as raw UTF-8 bytes.  Working at the UTF-8 byte level, the MPI tokenizer handled multi-byte foreign-language and other special characters more consistently, and was therefore better able to match \textit{keys} containing these characters.  This was more predominant in the ClueWeb data, since the English Wikipedia articles contained fewer of these foreign-language characters.
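
The encoding effect can be illustrated with a short Java snippet (the token and class name are purely illustrative): the same string occupies a different number of bytes depending on the encoding used when it is written out.

```java
import java.nio.charset.StandardCharsets;

public class EncodingSize {
    public static void main(String[] args) {
        // A token containing accented characters, of the kind found in the ClueWeb data.
        String token = "naïve-café";

        // UTF-8: one byte per ASCII character, two bytes for each accented one.
        int utf8Bytes = token.getBytes(StandardCharsets.UTF_8).length;
        // UTF-16: two bytes per character, plus a two-byte byte-order mark.
        int utf16Bytes = token.getBytes(StandardCharsets.UTF_16).length;

        System.out.println("UTF-8:  " + utf8Bytes + " bytes");  // 12 bytes
        System.out.println("UTF-16: " + utf16Bytes + " bytes"); // 22 bytes
    }
}
```

Differences of this kind, accumulated over millions of keys, are consistent with the index size gap in Table \ref{tab:indexSize}, although the dominant factor remains how each tokenizer matches keys.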

\textbf{The size of the inverted indexes for the ClueWeb data is larger than the inverted indexes of the Wiki data for inputs of the same size.}  Another observation from Table \ref{tab:indexSize} is that, within a given implementation, the size of the inverted index is larger for the ClueWeb data.  One possible explanation for this anomaly is the discrepancy in the word density of the data: key collisions in the ClueWeb data are much less frequent, even though the vocabulary used in the ClueWeb data is smaller than in the Wikipedia data.  This is the case because the Wikipedia dataset was drawn only from the English-language Wikipedia articles, whereas the ClueWeb dataset contained characters from all languages.  One future test could be to remove all non-English words from the ClueWeb data, or to use foreign-language Wikipedia dumps interspersed with the English ones.

\subsection{Disk and Memory Usage} \label{sec:usageAnalysis}
\textbf{The Hadoop implementation of MapReduce uses more disk space than the MPI implementation.}  We monitored the disk usage before, during, and after each execution of the experiments described in Section \ref{sec:experimentation}.  When using Hadoop, the first thing we noticed was that, when copying the input files to the HDFS, Hadoop replicated the data at a ratio of almost two, as can be seen in Table \ref{tab:diskUsage}.  This replication is an HDFS feature that supports fault tolerance; however, the ratio has to be accounted for when considering system limitations.  Hadoop also used much more of the physical disk space when in operation: we observed almost 100\% use of the available 50 GB of local disk space on each of the nodes for both implementations of Hadoop on the 4 GB datasets.  This limitation prevented us from using Hadoop with a dataset larger than 4 GB without purchasing and installing larger disks on each of the nodes.  For the MPI implementation we did not see this sort of disk usage pattern, understandably so, since no file replication was provided in the implementation.

\textbf{The MPI implementation of MapReduce consumed more of the available memory.} For the MPI implementation, the bottleneck in our experimentation was memory usage. Our implementation wrote to disk less frequently than Hadoop, which meant storing larger indexes in main memory.  Hadoop reported using only approximately 5 to 10\% of the 8 GB of available memory when executing on each of the nodes, whereas our MPI implementation consumed closer to 20 to 30\% of the memory on any given execution node.  In fact, before we decided to write the partial indexes to disk in the MPI implementation, we ran into the hard limit of addressable memory for a single process on a 32-bit Linux distribution, which is approximately 3 GB.  This became a limitation on the size of a single input file for our implementation; without re-designing the way we read files from the system, our implementation can nevertheless handle datasets of arbitrary size, provided the input files generated from them are partitioned appropriately.
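
The spill-to-disk strategy we adopted can be sketched as follows. This is a minimal illustration in Java (our actual MPI code is C\textsuperscript{++}), with an entry-count threshold standing in for a real memory estimate; all names are illustrative.

```java
import java.io.IOException;
import java.io.PrintWriter;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Map;
import java.util.TreeMap;

// Sketch of the spill-to-disk strategy: once the in-memory partial index
// grows past a threshold, flush it to a numbered partial file and clear it,
// keeping the resident memory footprint bounded.
public class SpillingIndexer {
    private final Map<String, StringBuilder> index = new TreeMap<>();
    private final int maxEntries;   // stand-in for a real memory estimate
    private final Path spillDir;
    private int spillCount = 0;

    public SpillingIndexer(int maxEntries, Path spillDir) {
        this.maxEntries = maxEntries;
        this.spillDir = spillDir;
    }

    public void add(String word, String docId) throws IOException {
        index.computeIfAbsent(word, w -> new StringBuilder()).append(docId).append(' ');
        if (index.size() >= maxEntries) {
            spill();
        }
    }

    public void spill() throws IOException {
        if (index.isEmpty()) return;
        Path out = spillDir.resolve("partial-" + spillCount++ + ".idx");
        try (PrintWriter w = new PrintWriter(Files.newBufferedWriter(out))) {
            index.forEach((word, docs) -> w.println(word + "\t" + docs));
        }
        index.clear();  // free the memory held by the partial index
    }

    public int spills() { return spillCount; }
}
```

The partial files produced this way are what would later be merged into the final index; in our MPI implementation that final merge step was not performed, as discussed in Section \ref{sec:totalTime}.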

\subsection{Map Phase Time}
\textbf{Within the Hadoop implementations, the map phase time was shorter when using a combiner.}  We believe that this is a result of the smaller amount of network transfer performed when using the combiner.  This statement is reinforced by the analysis of the network graphs in Figures \ref{fig:hadoopSimpleWeb1Network}, \ref{fig:hadoopCombinerWeb1Network}, \ref{fig:hadoopSimpleWiki1Network}, \ref{fig:hadoopCombinerWiki1Network},  \ref{fig:hadoopSimpleWeb4Network}, \ref{fig:hadoopCombinerWeb4Network}, \ref{fig:hadoopSimpleWiki4Network}, and \ref{fig:hadoopCombinerWiki4Network}.  This trend is most predominant in the graphs for the 1 GB datasets, since those experiments involved less overall communication and the network traffic is easier to compare. Comparing Figure \ref{fig:hadoopSimpleWeb1Network} with Figure \ref{fig:hadoopCombinerWeb1Network}, as well as Figure \ref{fig:hadoopSimpleWiki1Network} with Figure \ref{fig:hadoopCombinerWiki1Network}, the thickness and amplitude of the network traffic between the mappers and the reducer is lower for the combiner implementations, suggesting that less overall communication took place.  This makes sense, as the combiner's job is to reduce the frequency with which any given \textit{key} has to be sent to the reducer.  By packaging these operations, the combiner saves on network data transfer, which increases the overall performance.
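
The effect of the combiner can be shown in a small, self-contained sketch (plain Java, independent of the Hadoop API; method and class names are illustrative): local aggregation turns one intermediate pair per word occurrence into one pair per distinct word.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrates why a combiner shrinks map-phase output: instead of emitting
// one ("word", docId:1) pair per occurrence, the combiner pre-aggregates
// counts per document, so repeated words cost one pair instead of many.
public class CombinerDemo {
    // Without a combiner: one intermediate pair per word occurrence.
    static List<String[]> emitRaw(String docId, String[] words) {
        List<String[]> pairs = new ArrayList<>();
        for (String w : words) pairs.add(new String[] { w, docId + ":1" });
        return pairs;
    }

    // With a combiner: one pair per distinct word, carrying a larger count.
    static List<String[]> emitCombined(String docId, String[] words) {
        Map<String, Integer> counts = new HashMap<>();
        for (String w : words) counts.merge(w, 1, Integer::sum);
        List<String[]> pairs = new ArrayList<>();
        counts.forEach((w, c) -> pairs.add(new String[] { w, docId + ":" + c }));
        return pairs;
    }

    public static void main(String[] args) {
        String[] words = "the cat sat on the mat the end".split(" ");
        System.out.println("raw pairs:      " + emitRaw("doc1", words).size());      // 8
        System.out.println("combined pairs: " + emitCombined("doc1", words).size()); // 6
    }
}
```

The gap widens with word density: the more often a word repeats within a file, the more pairs the combiner collapses before they reach the network.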

\textbf{The map phase for MPI takes longer than for the Hadoop implementations.}  This result can be misleading when observing the graphs alone.  The numbers suggest this trend; however, we believe it is in fact a limitation of our MPI implementation rather than an inherent result.  The Hadoop environment, coupled with the HDFS, allows multiple mappers to be executed on a single node.  Our MPI implementation does not allow for that level of parallelism: we did not implement a method to handle the files in a similar manner, because the operating system installation did not provide a native distributed file system (DFS).  We propose that with a proper DFS in place, and our MPI implementation changed to distribute tasks to each node in a way that makes full use of the available resources, the MPI implementation would achieve greater performance with respect to the map phase time.  Hadoop also allows more mappers than the processors available on any given node; in our experiments we observed 18 to 23 mappers running on the 1 GB datasets and between 53 and 75 on the 4 GB datasets.  This behaviour could also be mimicked with MPI by oversubscribing the nodes, an implementation of which would provide a better comparison of these results.

\subsection{Total Time}\label{sec:totalTime}
\textbf{The Hadoop combiner outperformed the Hadoop simple implementation on all datasets.} The combiner provides an overall performance increase by reducing the amount of communication to the reducer.  These results can be seen in Tables \ref{tab:mapTime} and \ref{tab:totalTime}, and are further reinforced by the network graphs in Figures \ref{fig:hadoopSimpleWeb1Network}, \ref{fig:hadoopCombinerWeb1Network}, \ref{fig:hadoopSimpleWiki1Network} and \ref{fig:hadoopCombinerWiki1Network}.  The combiner reduces this network overhead by sending fewer but more informative messages to the reducer, by way of \textit{key/value} pairs that carry larger counts for each word in a file.  Not only does this reduce network overhead, it also allows the reducer to perform fewer operations on the hash-map.  The reduced overhead, coupled with the more efficient reducer operations, allowed the combiner implementation to outperform the simple implementation in Hadoop by 10\% on average.

\textbf{Overall, MPI and Hadoop have comparable task completion times for the given datasets.}  The fact that MPI achieves performance close to the level of Hadoop in our implementations is a result of how the implementations differ.  In the Hadoop implementation, we are able to take advantage of the number of processors on each machine, giving a higher level of parallelization, which we did not implement with MPI.  On the other hand, the MPI implementation does not combine the final partial indexes into one large inverted index, which Hadoop spends time doing in the reduce phase.  Both of these factors play into any performance analysis of overall completion time for these experiments.  In order to properly draw a conclusion on this aspect, our MPI implementation would have to be enhanced to use all the available computational resources on each execution node, as well as to invoke a merging scheme for the partial indexes.

\subsection{Network Analysis}

\subsubsection{Hadoop}
\textbf{The Hadoop implementations have two major sources of network communication: the mappers sending information to the reducer, and the writing of files to the HDFS.}  From the two observed patterns mentioned in Section \ref{sec:results} for the network graphs representing the Hadoop implementations, these two sources are clear.  The first pattern is the network traffic occurring during the map phase of the execution.  Since all of the analysis was done from the point of view of a single node, namely SSRG4, this becomes apparent in Figure \ref{fig:hadoopCombinerWeb4Network}.  In this graph, SSRG4 was chosen by Hadoop to perform the reduce task, and we can observe that it communicated with all of the mapper nodes.  There also seems to be some fairness involved, as the communication from each of the nodes overlaps.  Once this traffic ceases, the reducer begins the task of creating the inverted index.  The second pattern is the distribution of the output and partial files Hadoop uses to create the full inverted index.  During this phase we observe the same type of network pattern, but with smaller amplitude and longer periods for each waveform, corresponding to the writing of the partial files to the distributed file system.  These patterns are further reinforced in Figures \ref{fig:hadoopSimpleWeb1Network}, \ref{fig:hadoopCombinerWeb1Network}, \ref{fig:hadoopSimpleWiki1Network}, \ref{fig:hadoopCombinerWiki1Network},  \ref{fig:hadoopSimpleWeb4Network}, \ref{fig:hadoopSimpleWiki4Network}, and \ref{fig:hadoopCombinerWiki4Network}, where the node SSRG4 was instead chosen to be only a mapper; there the communication is only to one node, namely the reducer.  One other thing worth mentioning is the anomaly present in Figure \ref{fig:hadoopCombinerWeb1Network}: in this graph, network traffic occurs between SSRG4 and two different nodes, not just the reducer node.
Upon analyzing the log files for this experiment, we discovered that on this execution the reducer was initially chosen to be SSRG7; however, the reduce task failed on that node, upon which Hadoop gave the reduce task to SSRG3.  This can be seen on the network graph, where communication was initially between SSRG4 and SSRG7, and then, upon failure, all the traffic was between SSRG4 and SSRG3.

\subsubsection{MPI}
\textbf{The network graphs representing the communication in the MPI implementations correlate directly with the implementation details.}  Although not mentioned in the experimentation section, the MPI implementations were done with two levels of granularity to handle the case mentioned in Section \ref{sec:usageAnalysis}.  The first implementation involved parsing all the input data for each mapper into a map, then sending the entire map to the reducer.  This was only possible for the 1 GB datasets, since there was only enough main memory to store the maps for approximately 250 MB of input data per mapper.  Figures \ref{fig:MPIWeb1Network} and \ref{fig:MPIWiki1Network} show this process from the perspective of the reducer, with each mapper making one large send operation to the reducer.  This implementation ended up underperforming compared to the Hadoop implementations on the same datasets.  To handle the 4 GB datasets, a round-robin message passing scheme, as described in Section \ref{sec:mpi}, was used.  The network traffic patterns in Figures \ref{fig:MPIWeb4Network} and \ref{fig:MPIWiki4Network} show this behaviour directly, with interleaved communication between all of the mapper nodes.  This communication pattern resembles that of the first phase in the Hadoop implementations.  We speculate that if the MPI implementation were to use a distributed file system and perform the merging of the partial indexes, we would observe graphs similar to Hadoop's with this communication strategy.  Overall, the communication patterns in all of the MPI network graphs reinforce the design choices used in the corresponding implementations.
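
The interleaving itself can be sketched independently of MPI (a hedged illustration with hypothetical names; the actual scheme is described in Section \ref{sec:mpi}): in each round, every mapper sends one chunk before any mapper sends its next, which is what produces the overlapping waveforms in the 4 GB network graphs.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a round-robin send order: with several chunks per mapper, sends
// interleave across mappers round by round, rather than one mapper streaming
// all of its data before the next one starts.
public class RoundRobinSchedule {
    static List<String> schedule(int mappers, int chunksPerMapper) {
        List<String> order = new ArrayList<>();
        for (int round = 0; round < chunksPerMapper; round++) {
            for (int m = 0; m < mappers; m++) {
                order.add("mapper" + m + "/chunk" + round);
            }
        }
        return order;
    }

    public static void main(String[] args) {
        // Three mappers, two chunks each: the send order alternates mappers,
        // matching the interleaved traffic seen by the reducer.
        System.out.println(schedule(3, 2));
    }
}
```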

%------------------------------------------------------------------
\section{Conclusions}
%------------------------------------------------------------------
Regarding the research questions that motivated our work, for question \textbf{[Q1]} \textbf{What are the performance impacts of using either Hadoop or MPI implementations of the inverted index algorithm when compared to one another?}, we have concluded that the Hadoop environment has a large impact on the overall performance of the implementations. In both cases, the Hadoop combiner and simple implementations provided significantly better performance than our MPI implementation. However, we believe that even though the Hadoop ecosystem gives its implementations an advantage in deployment and load balancing mechanics, the MPI implementations can be specifically tuned to achieve the same, or presumably better, results. Modifications to the communication architecture, or dynamic support for different levels of the memory hierarchy, such as main memory and disk, could considerably change the results obtained with MPI. To be fair, the Hadoop implementations can also be improved, by tuning the deployment platform with respect to HDFS policies and load balancing strategies, or by using API features such as multiple computation contexts to achieve better parallelism in the partial index merging and sorting. Speaking strictly of our implementations, the MPI implementation relies heavily on dynamic memory allocation and thus, given the size of the inputs, consumes large amounts of memory not always available in the execution environment. The Hadoop implementations, on the other hand, make intensive use of the HDFS, translating the workload to the local hard drives of the cluster nodes. We found the Hadoop strategy more suitable for our current implementations, given the size of the datasets and the available experimental resources. Further, more exhaustive experimentation is needed in order to achieve a complete comparison of both platforms in different algorithmic contexts.

Regarding question \textbf{[Q2] Does the implementation of a combiner within MapReduce improve the algorithm execution time?}, we can clearly support the hypothesis that the combiner alleviates two main bottlenecks in the parallel execution of the inverted index computation. The first bottleneck was removed when the partial inverted indexes processed by the mapper functions reduced the number of messages sent over the network to the reducer nodes. Even though the mappers in those implementations have a much larger computational complexity, their execution time was on average 56\% faster than in MPI, and 10\% faster than in the Hadoop simple implementation. By avoiding network access, the combiner implementation outperformed the other two in mapper and total execution time. The second bottleneck, regarding the amount of information being sent, was removed by the reduction of intermediate keys produced by the combiner strategy, which also led to a reduction of the computational complexity in the reduce phase. In conclusion, in the context of the implemented index algorithms, the use of a combiner strategy with MapReduce provided an improvement of almost 50\% in the time taken to complete the map phase in comparison with the other implementations.

For question \textbf{[Q3] What impacts does the size of inputs have on our MapReduce implementations?}, we believe that, given the facts presented for Q2, reducing the computation/communication ratio represents a major area of potential performance improvement, since the bottlenecks revolve around network communication. The Hadoop implementations showed an increase in granularity proportional to the experimental input sizes, although we cannot assert that this behaviour affected the overall performance. We can report that the computational strategy used to manage memory and hard drive access is highly impacted by the amount of data being processed. In terms of comparing execution time results dependent on the data sizes, such as assertions regarding behaviours on varying input sizes, all our implementations behaved in a similar manner, scaling in proportion to each of the provided datasets. We believe that this experimental observation is due to the translation of the computational complexity to memory and the file system in each implementation.

%------------------------------------------------------------------
\section{Lessons}
%------------------------------------------------------------------
\subsection{Environment}
One area that provided the group with some major lessons was the set of factors surrounding the choice of an execution environment.  For the experiments, we first had to consider the time it took to install any software needed to support our execution.  For both Hadoop and MPI, we needed a parallel environment capable of running both of these systems effectively.  We decided to go with a 32-bit Linux installation running in a virtual machine in order to avoid any hardware issues we were likely to have.  The first lesson was to be more aware of the impacts of making this choice.  The Hadoop environment was freely available from Cloudera within a virtual machine image that they provide. Using a virtualized laptop cluster, we developed the first versions of the strategies under evaluation and checked their mechanics and the correctness of our implementations. However, migrating the environment in order to run the real experiments then became a fundamental need: given the amount of resources consumed by the implementations, and the sizes of our datasets, we decided to migrate the cluster to a more robust computation infrastructure. Together with Dan Han's group, we were able to deploy Hadoop from scratch with the mentioned configuration. Once we had the cluster installation up and running, we were able to run our Hadoop experiments in a very straightforward manner. Nonetheless, the deployment and testing of the Hadoop environment took more than two weeks, implying a significant effort in order to run well-formed experiments.

On the other hand, the MPI environment was trivial to install and took very little time to configure to achieve the same level of operation. It is important to mention that, in spite of the extra installation effort, Hadoop provides valuable features such as monitoring tools for runtime and phase analysis, web management platforms, and fault tolerance administration, which in particular allowed tasks to be restarted automatically on failure.  This had to be handled manually in the case of MPI.  Moreover, the lack of a DFS also hindered our MPI implementation: when trying to send files to the nodes via the network, the data transfer ended up dwarfing the actual computation, so we distributed the data manually a priori.

\subsection{Memory Management}
Another major lesson concerned memory management.  In the MPI execution, we had multiple points of memory management to contend with.  One of the first failures we ran into was when trying to send the \textit{key/value} data per word, per page, as opposed to packaging this information into one send per page.  The MPI library was storing these messages in a buffer, which caused a memory allocation failure once the volume of messages exceeded a certain amount.  When we then changed these to synchronous sends and receives, we ended up with a huge performance decrease, almost an order of magnitude.  The solution we chose was to package this information at a per-input-page granularity; however, other solutions exist that we would have liked to explore, such as managing the communication with respect to the actual memory being consumed by the inverted index data structure. Time limitations of the project prevented that exploration.  The use of collective operations and asynchronous data transfer would also have been a good direction to explore with regard to optimizing MPI.
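
The packaging choice can be illustrated with a small sketch (plain Java with hypothetical names; the real sends are MPI calls in C\textsuperscript{++}): batching the pairs for a page into a single serialized message reduces the number of send operations from one per word to one per page, which is what kept the library's internal buffers bounded.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.StringJoiner;

// Contrasts per-word sends with per-page packaging. Each element of `pages`
// is the array of words found on one input page.
public class MessagePackaging {
    // One simulated "send" per word occurrence.
    static int sendPerWord(List<String[]> pages) {
        int sends = 0;
        for (String[] page : pages) sends += page.length;
        return sends;
    }

    // One simulated "send" per page: all pairs serialized into one buffer.
    static int sendPerPage(List<String[]> pages, List<String> outbox) {
        for (String[] page : pages) {
            StringJoiner buffer = new StringJoiner("|");
            for (String word : page) buffer.add(word + ":1");
            outbox.add(buffer.toString());  // a single message for the page
        }
        return outbox.size();
    }

    public static void main(String[] args) {
        List<String[]> pages = List.of(
            "alpha beta alpha".split(" "),
            "gamma delta".split(" "));
        System.out.println("per-word sends: " + sendPerWord(pages));                    // 5
        System.out.println("per-page sends: " + sendPerPage(pages, new ArrayList<>())); // 2
    }
}
```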

Possible extensions of this work include the use of MPI optimizations, as well as tuning of Hadoop-specific configuration files beyond the default deployment.  We would also have liked to expand the datasets, both toward multiple languages with large vocabularies and toward greater volumes of data, since we realize that 1 GB and 4 GB were not sufficiently large to test the true limitations of both implementations.  Running on machines with 64-bit operating systems, larger disks, and more memory would allow such experimentation.  Other extensions include better handling of the \textit{special} characters observed in some of the input data.  One could also analyze the effects on execution time of merging the partial indexes with MPI, or of not merging them with Hadoop.  By no means is this list exhaustive; there are many other variables that could be tested to obtain a better comparison of these two systems.

This report, as well as all of the Java and C\textsuperscript{++} source code used to run these experiments outlined in the report are freely available from \url{http://cmput681-foxtrot.googlecode.com/svn/trunk/}, as well as the code to parse the datasets mentioned.  The dataset locations can be found in the bibliography.

%------------------------------------------------------------------
\section{Authors Contribution}
%------------------------------------------------------------------
The following section provides a breakdown of the contributions to this report with regard to each of the sections presented.  The original problem description and the research regarding solution strategies were done by Mohammad, who also found an excellent text providing detailed algorithmic details for the inverted indexing problem.  Victor proceeded to re-write the final problem description section after details of the initial proposal changed significantly.  The literature review section was fully researched and completed by Josh, with the exception of the inverted index paper, which was discovered by Mohammad.  With regard to the datasets, the ClueWeb data was made available through Mohammad, who also suggested the Wikipedia data.  Josh wrote the parsing schema and corresponding parser code, collaborating on the regular expressions with Mohammad.  Josh then proceeded to clean the datasets for use in the experimentation. Victor was responsible for setting up the Google Code repository used for revision control of the code and the report.

Victor, Josh and Mohammad were initially responsible for the Hadoop environment and implementation, while Anahita and Afsaneh were initially responsible for the MPI implementation of the solution.  The Hadoop team worked together on the installation details surrounding a test environment for Hadoop, with Victor installing the environment on the cluster on which we would run our experiments.  Mohammad implemented a Python solution to the inverted index, and Victor then proceeded to complete a Java solution to the inverted index problem for both the simple and combiner implementations.  The Java implementation was then chosen for use with Hadoop.  Afsaneh and Anahita worked on multiple MPI implementations; however, these were unable to run outside of a test environment.  Josh then began working on the final MPI solution based on the previous work of Afsaneh and Anahita.  The corresponding sections of the report regarding implementation details were written by Josh and Victor, as they were the authors of the solutions from which the results were gathered.

The experimental details were collaborated on by all members; however, all of the experiments for both implementations on the cluster were run and monitored by Josh and Victor, which involved debugging anomalies with the datasets as well as tweaking certain aspects of the Hadoop installation.  Results were then gathered and the graphs were made by Victor and Josh.  The experimentation section was written by the members who worked on these tasks.

The analysis section of the project was completed by Victor and Josh with collaboration from Afsaneh and Anahita who were responsible for doing statistical analysis of the word density and for researching implementation details of the Hadoop system and the MPI library.  The conclusions drawn from the analysis as well as the lessons learned were collaborated on by Victor and Josh.

%------------------------------------------------------------------
%         Bibliography
%------------------------------------------------------------------
\bibliographystyle{abbrv}
\bibliography{../bib/references}

\end{document}