%------------------------------------------------------------------
%  Document Class
%------------------------------------------------------------------
\documentclass{article}

%------------------------------------------------------------------
%  Graphics and Related Packages
%------------------------------------------------------------------
\usepackage{multirow}
\usepackage{listings}
\usepackage{amssymb}
\usepackage[pdftex]{graphicx}

%------------------------------------------------------------------
%  Margins
%------------------------------------------------------------------
\addtolength{\oddsidemargin}{-.8in}
\addtolength{\evensidemargin}{-.875in}
\addtolength{\textwidth}{1.7in}
\addtolength{\topmargin}{-.875in}
\addtolength{\textheight}{1.7in}

%------------------------------------------------------------------
%  Document
%------------------------------------------------------------------
\begin{document}

%------------------------------------------------------------------
%  Paper title
%------------------------------------------------------------------
\title{Parallel and Distributed Systems Final Project Proposal \\ CMPUT 681 \\ Foxtrot Team}

%------------------------------------------------------------------
%  Author names and affiliations
%------------------------------------------------------------------
\author{Anahita Alipour - Joshua Davidson - Afsaneh Esteki - Victor Guana - Mohammad Salameh\\ \{alipour1, jdavidso, afsaneh.esteki, guana, msalameh\}@cs.ualberta.ca\\ Computing Science Department \\ University of Alberta}
\maketitle
\setlength{\parindent}{0pt}
\setlength{\parskip}{2ex plus 0.5ex minus 0.2ex}

%------------------------------------------------------------------
%  Abstract
%------------------------------------------------------------------
\begin{abstract}
Our course final project centers on the implementation of an inverted index algorithm on the Hadoop distributed computing infrastructure. Our objective is to provide a Map-Reduce implementation of the algorithm based on the framework given by the Hadoop ecosystem. Additionally, we will implement the same algorithm under an MPI architecture (without Hadoop support) in order to compare their performance. We are going to use the \textit{ClueWeb}\footnote{http://lemurproject.org/clueweb09.php/} and \textit{Wikipedia Dumps}\footnote{http://dumps.wikimedia.org/} data sets as our experimental inputs.
\end{abstract}

%------------------------------------------------------------------
\section{Problem Description}
%------------------------------------------------------------------
Efficient web search has become a necessity given the ever-increasing number of web pages on the internet and the large, sophisticated queries that users execute. Given a user query, a search engine has to search terabytes of textual data, retrieve the relevant pages, rank them, and send them back to the user. An inverted index is a data structure that maps a term (for instance a word, letter, or symbol) to the set of documents that contain that term. This index is used to optimize the speed of search queries. For example, a three-word search becomes a function that returns the intersection of the three document sets that correspond to the words in the query.
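As a concrete illustration of this intersection-based lookup, the following sketch builds a toy inverted index in Java and answers a two-word query. All names and the sample documents are our own illustrative choices, not part of any planned implementation:

```java
import java.util.*;

public class InvertedIndexDemo {
    // Build term -> set of document ids over a small corpus.
    static Map<String, Set<Integer>> build(List<String> docs) {
        Map<String, Set<Integer>> index = new HashMap<>();
        for (int id = 0; id < docs.size(); id++) {
            for (String term : docs.get(id).toLowerCase().split("\\s+")) {
                index.computeIfAbsent(term, t -> new TreeSet<>()).add(id);
            }
        }
        return index;
    }

    // A multi-word query is the intersection of the posting sets.
    static Set<Integer> query(Map<String, Set<Integer>> index, String... terms) {
        Set<Integer> result = null;
        for (String term : terms) {
            Set<Integer> postings = index.getOrDefault(term, Collections.emptySet());
            if (result == null) result = new TreeSet<>(postings);
            else result.retainAll(postings);
        }
        return result == null ? Collections.emptySet() : result;
    }

    public static void main(String[] args) {
        List<String> docs = Arrays.asList(
            "parallel systems course",
            "distributed systems are parallel",
            "search engines index the web");
        // Both terms appear in documents 0 and 1.
        System.out.println(query(build(docs), "parallel", "systems")); // [0, 1]
    }
}
```

The point of the sketch is only the data-structure shape: once the index exists, answering a query touches the posting sets of the query terms rather than scanning the documents.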

An inverted index is one of the basic components of most search engines. Building one requires a huge amount of time and text processing. Every single page has to be pre-processed to remove all of the \textit{HTML} tags and all of the stop words such as \textit{and}, \textit{or}, \textit{when}, and \textit{the}. The index is then computed using the remaining words in each document. The time it takes to build this index is a critical factor for a commercial-grade search engine. Because the internet is growing so rapidly, creating these indexes requires a huge number of computational hours. In addition, pages are continuously added, updated, and removed, which creates the need to crawl new and updated pages and refresh the inverted index. The main challenge is how to update the index efficiently given the large number of changes occurring. These tasks are highly data parallel, which allows both the creation and the updating of the inverted index to be parallelized.
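A minimal sketch of this pre-processing step, assuming a simple regex-based tag stripper and a deliberately tiny stop-word list (a real list would be much longer, and production HTML parsing is more involved than a regex):

```java
import java.util.*;

public class Preprocessor {
    // Tiny illustrative stop-word list; a real one would be much longer.
    static final Set<String> STOP_WORDS =
        new HashSet<>(Arrays.asList("and", "or", "when", "the"));

    // Strip HTML tags with a simple regex, lowercase, drop stop words.
    static List<String> clean(String html) {
        String text = html.replaceAll("<[^>]*>", " ");
        List<String> terms = new ArrayList<>();
        for (String token : text.toLowerCase().split("[^a-z0-9]+")) {
            if (!token.isEmpty() && !STOP_WORDS.contains(token)) {
                terms.add(token);
            }
        }
        return terms;
    }

    public static void main(String[] args) {
        System.out.println(clean("<p>The cat <b>and</b> the dog</p>"));
        // [cat, dog]
    }
}
```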

%------------------------------------------------------------------
\section{Implementation Proposal}
%------------------------------------------------------------------
We plan to develop three concrete implementations of a simplified inverted index algorithm in order to create indices for two types of data inputs: \textit{ClueWeb} and \textit{Wikipedia Dumps}. To address the time complexity of the problem, we want to parallelize the algorithm using a map-reduce programming approach. The map function allows machines to create indexes for subsets of the data in parallel. The reduce function collects all of the independently generated indexes and merges them into a complete index. Two implementations will run on different parallel computing frameworks, and a third will serve as a baseline for benchmarking the parallel approaches. The details are described below.
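The map/reduce split described above can be sketched in plain Java as follows. This is a single-process illustration of the shape of the computation only, with no Hadoop or MPI involved; in the parallel versions the per-document map calls would run on different nodes:

```java
import java.util.*;

public class MapReduceSketch {
    // Map: one document -> (term, docId) pairs.
    static List<Map.Entry<String, Integer>> map(int docId, String text) {
        List<Map.Entry<String, Integer>> pairs = new ArrayList<>();
        for (String term : text.toLowerCase().split("\\s+")) {
            pairs.add(new AbstractMap.SimpleEntry<>(term, docId));
        }
        return pairs;
    }

    // Reduce: group the pairs by term into posting sets (the inverted index).
    static Map<String, Set<Integer>> reduce(List<Map.Entry<String, Integer>> pairs) {
        Map<String, Set<Integer>> index = new TreeMap<>();
        for (Map.Entry<String, Integer> p : pairs) {
            index.computeIfAbsent(p.getKey(), t -> new TreeSet<>()).add(p.getValue());
        }
        return index;
    }

    public static void main(String[] args) {
        String[] docs = {"hadoop map reduce", "mpi map"};
        List<Map.Entry<String, Integer>> all = new ArrayList<>();
        for (int i = 0; i < docs.length; i++) {
            all.addAll(map(i, docs[i]));  // each call is independent, hence parallelizable
        }
        System.out.println(reduce(all)); // {hadoop=[0], map=[0, 1], mpi=[1], reduce=[0]}
    }
}
```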

\textbf{Hadoop:} The first implementation will be built on the Hadoop ecosystem, which provides a layer of abstraction for executing map-reduce jobs through a Java API. Hadoop automatically distributes the computation of the map function across a cluster of pre-configured machines and, additionally, provides off-the-shelf mechanisms for gathering the reduce results from the different compute nodes.

\textbf{MPI:} The second implementation will be developed using the MPICH library. The idea behind this implementation is to use collective operations to build a simplified version of the map-reduce scheme for the inverted index computation.
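The combine step that such a collective reduction would repeatedly apply (merging two partial inverted indexes into one) might look like the following. This is an illustrative Java sketch of the merge logic only; the actual MPI version would express the same term-by-term union inside a reduction over the nodes:

```java
import java.util.*;

public class IndexMerge {
    // Merge two partial inverted indexes, unioning posting sets term by term.
    // In the MPI version this would be the combine step of a reduction tree.
    static Map<String, Set<Integer>> merge(Map<String, Set<Integer>> a,
                                           Map<String, Set<Integer>> b) {
        Map<String, Set<Integer>> out = new TreeMap<>();
        for (Map<String, Set<Integer>> part : Arrays.asList(a, b)) {
            for (Map.Entry<String, Set<Integer>> e : part.entrySet()) {
                out.computeIfAbsent(e.getKey(), t -> new TreeSet<>()).addAll(e.getValue());
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, Set<Integer>> a = new TreeMap<>();
        a.put("map", new TreeSet<>(Arrays.asList(0)));
        Map<String, Set<Integer>> b = new TreeMap<>();
        b.put("map", new TreeSet<>(Arrays.asList(1)));
        b.put("mpi", new TreeSet<>(Arrays.asList(1)));
        System.out.println(merge(a, b)); // {map=[0, 1], mpi=[1]}
    }
}
```

Because the merge is associative and commutative, partial indexes can be combined in any order, which is what makes a tree-shaped collective reduction applicable.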

\textbf{Sequential:} The third and final implementation will be a sequential algorithm. It will be used as a baseline for measuring the speedups of the parallel approaches.

%------------------------------------------------------------------
\subsection{Experimental Questions}
%------------------------------------------------------------------
For this project, the goal of the group is to investigate and answer the following questions within the domain of the inverted indexing problem described above.

\begin{description}
\item[Q1] Is it the case that the Hadoop implementation of Map-Reduce outperforms Sequential and MPI solutions to the Inverted Indexing problem?
\item[Q2] Can the Map-Reduce implementation be optimized by using various pre-processing and pre-reduce strategies in the algorithm?
\item[Q3] What impacts does varying the granularity of the inputs have on our Map-Reduce implementation?
\end{description}

The answers to these questions will provide the insight needed to properly analyze what advantages using Map-Reduce on a pre-designed platform such as Hadoop has over a non-customized Map-Reduce API implemented in MPI. They will also help identify the potential drawbacks of using an architecture such as Hadoop, and will hopefully suggest areas in which it can be optimized for this problem.
%------------------------------------------------------------------
\subsection{Metrics}
%------------------------------------------------------------------
In order to answer our research questions, we will measure the following items for each of the implementations, varying both input size and granularity:

\begin{enumerate}
\item Execution speed (map formatting time, map indexing time, and reduce time),
\item Average memory consumption (broken down in the same way),
\item Disk space required for the execution,
\item Idle time and per-node performance for each execution node.
\end{enumerate}

%------------------------------------------------------------------
\subsection{Experimental Environment}
%------------------------------------------------------------------
The planned experimental environment depends on the implementation being executed. For the \textbf{Hadoop} implementation, we plan to run the experiments on a Beowulf cluster (WestGrid); additionally, we want to test the same approach in an elastic computing environment such as Amazon EC2. The \textbf{MPI} implementation will be executed on the same Beowulf cluster as the Hadoop implementation in order to allow a hardware-based comparison. The \textbf{Sequential} algorithm will be executed on a shared-memory machine if available, as well as on single nodes of any cluster or elastic computing environment we use.

%------------------------------------------------------------------
\subsection{Proposed Timeline}
%------------------------------------------------------------------
\begin{description}
\item[Phase 1 - Background Research]
In this phase of the project we will do the appropriate amount of reading regarding MapReduce and Inverted Indexing, the motivation for solving the problem, and the prior work done on solving this problem using Hadoop and other implementations of MapReduce.  Writing the proposal will be done in this phase.
\\\textbf{[Due Date] November 5}
\end{description}

\begin{description}
\item[Phase 2 - System setup]
In this phase, we will set up the development environment for Hadoop and test it with the WordCount demo application provided with the Hadoop deployment package. We will also write a small test application of our own to fully understand how to use the environment.
\\\textbf{[Due Date] November 10}
\end{description}

\begin{description}
\item[Phase 3 - Gathering / Preprocessing Data / Implement the Map and Reduce Functions for Hadoop]
In this phase our team will gather the data from \textit{ClueWeb} and the \textit{Wikipedia Dumps} and pre-process it by removing the HTML and XML tags. One half of our team will implement the Map and Reduce functions needed for Hadoop to process the sanitized text, while the other half writes the sequential and MPI versions of the indexing code.
\\\textbf{[Due Date] November 15}
\end{description}

\begin{description}
\item[Phase 4 - Results]
Now we will begin to run Hadoop on the data on various platforms. The MPI implementation as well as the sequential implementation will be run for comparison. Due to the size of the data and potentially limited resources, a week and a half is allotted for this phase.
\\\textbf{[Due Date] November 27}
\end{description}

\begin{description}
\item[Phase 5 - Analysis]
In this phase, the analysis of the results as well as finishing touches on the project and any more experiments will be completed.  We will finish the write up and submit the completed work.
\\\textbf{[Due Date] December 5}
\end{description}


\end{document}