\documentclass[11pt,a4paper,onecolumn,oneside,notitlepage,final]{article}

% packages
\usepackage{breakurl}
\usepackage[scale=0.7]{geometry}
\usepackage[pdftex,bookmarks=true,bookmarksnumbered=true]{hyperref}

% setup meta-info
\hypersetup{
pdfauthor = {ZHENG Chaodong},
pdftitle = {Towards Better Data Processing: MapReduce and the Improvements},
pdfsubject = {CS5223 Assignment Two}}

\begin{document}
\bibliographystyle{plain}

% setup title and author
\begin{center}
{\LARGE \textbf{Towards Better Data Processing} \par}
{\huge \textbf{MapReduce and the Improvements} \par}
\vskip 0.5\baselineskip
\textit{Name}: ZHENG Chaodong \quad \textit{Matric. No.}: A0068967E \par
\end{center}

\section{Introduction}\label{intro}

The last decade has seen explosive development of the Internet and the World Wide Web. What comes along with this new trend is the need to store, manipulate and extract information from extremely large volumes of data, usually on the order of terabytes or petabytes. This phenomenon is commonly referred to as \emph{Big Data}, and the processing of such data is called \emph{data-intensive computing} \cite{big-data-wiki, data-inten-comp-wiki}. Researchers and engineers have proposed and implemented many models and applications to better serve data-intensive computing, among which \emph{MapReduce} \cite{mr-google} and \emph{Hadoop} \cite{hd} are two famous and widely used examples.

MapReduce is a framework first proposed by Google to solve highly distributable data-intensive computing problems using a large number of computers \cite{mr-wiki}. Hadoop, developed by the Apache Software Foundation, is an open-source implementation of the MapReduce framework along with some modifications and improvements. Both have achieved great fame and success in academia and industry. In fact, many well-known companies, such as Google, Yahoo! and Facebook, use MapReduce or Hadoop extensively in their products and contribute back to the community at the same time \cite{hd-wiki}.

In this short survey, we first introduce MapReduce and Hadoop, along with the design philosophy behind them. After that, we investigate some modifications proposed by researchers to improve or extend the functionality of these two applications. In particular, a new programming model called ``Piccolo'' is also presented. Finally, we discuss some future research directions we believe are worth noting, and end the survey with a short conclusion.

\section{MapReduce and Hadoop}\label{mr-hd}

\subsection{MapReduce}\label{mr-hd-mr-intro}

As mentioned earlier, MapReduce is a programming model used to process large data sets in a distributed fashion. At the core of MapReduce are two functions named \emph{map} and \emph{reduce}: map takes \emph{key/value} pairs as input and generates intermediate key/value pairs according to user-defined semantics, while reduce merges all values associated with the same key using another user-defined function. Multiple map and reduce instances may run concurrently on different machines, each processing a different part of a large data set while obeying the same user-defined semantics. Meanwhile, the MapReduce runtime takes care of other issues such as scheduling and fault tolerance. Thus, programmers, even those with little experience in distributed or parallel computing, can easily write programs that handle data-intensive computation on a cluster of computers.

\subsubsection{The Detailed Model}\label{mr-hd-mr-intro-model}

In this section, we describe the MapReduce model in more detail. We follow the execution of a simple MapReduce program, a word counting program which counts the occurrences of each distinct word in a large collection of documents, to demonstrate the whole life cycle of the MapReduce model.

\textbf{Input and Split}: Upon receiving a user's request, the runtime first splits the whole input data into many pieces of roughly equal size. According to \cite{mr-wiki}, the runtime also processes the initial data and generates initial key/value pairs as input to each map function. In our example, the key can be the line number and the value the string on that particular line. After this, the runtime forks many \emph{workers} and one \emph{master}. One point worth noting is that each physical machine may host multiple workers.

\textbf{Map}: Now, each idle worker is assigned a map task and becomes a \emph{mapper}. Each mapper takes the initial key/value pairs assigned to it and generates intermediate key/value pairs. In our example, for each line in the input data, the map function emits one pair per word on that line, whose key is the word itself and whose value is 1.

\textbf{Partition}: Intermediate key/value pairs are periodically written to the map worker's local disk. They are also partitioned into several parts according to a partition function. This location information is sent back to the master, which may forward it to future reducers.
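A minimal sketch of such a partition function, in the spirit of the default hash partitioning described in the MapReduce paper (the variable names here are our own):

```python
# Hash the key and take it modulo the number of reducers R, so every
# intermediate pair is assigned to exactly one of the R partitions.
def partition(key, num_reducers):
    return hash(key) % num_reducers

# All pairs sharing a key land in the same partition, and hence are
# guaranteed to be processed by the same reducer.
```

Because every occurrence of a given key hashes to the same partition, the grouping required by the reduce phase falls out of this scheme for free.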

\textbf{Reduce}: When the master starts a reduce worker, or \emph{reducer}, it passes the above-mentioned location information to it. The reducer uses \emph{remote procedure calls (RPC)} to read the corresponding intermediate pairs. Then, the reducer sorts all these pairs by their associated keys. After this, the reducer applies the user-defined reduce function to each unique key. Thus, each unique key leads to one output record, and the collection of all these records forms the final output. In our word counting example, the reducer counts the number of intermediate pairs associated with each key and hence obtains the number of occurrences of each distinct word.

When all reducers have accomplished their work, the execution of this MapReduce program is finished.
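The whole walkthrough above can be condensed into a single-process sketch. This is purely illustrative: it runs map, shuffle and reduce sequentially in memory, whereas the real system distributes each phase across machines.

```python
from collections import defaultdict

# User-defined map function: emit (word, 1) for every word on a line.
def map_fn(line_no, line):
    return [(word, 1) for word in line.split()]

# User-defined reduce function: merge all values sharing one key.
def reduce_fn(key, values):
    return (key, sum(values))

def map_reduce(documents):
    # Map phase: apply map_fn to each initial key/value pair.
    intermediate = []
    for line_no, line in enumerate(documents):
        intermediate.extend(map_fn(line_no, line))
    # Shuffle/sort phase: group intermediate values by key.
    groups = defaultdict(list)
    for key, value in intermediate:
        groups[key].append(value)
    # Reduce phase: one output record per unique key.
    return dict(reduce_fn(k, vs) for k, vs in sorted(groups.items()))

counts = map_reduce(["the quick fox", "the lazy dog"])
# counts == {"dog": 1, "fox": 1, "lazy": 1, "quick": 1, "the": 2}
```

Note that only `map_fn` and `reduce_fn` carry application logic; everything else is the runtime's job, which is exactly the division of labor that makes the model easy to program against.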

\subsubsection{Other Considerations}\label{mr-hd-mr-intro-other}

Besides the basic model, Dean and Ghemawat also discussed many other aspects of the practical MapReduce system used at Google. We summarize the important ones here.

\textbf{Fault Tolerance}: Machine failure is almost inevitable in a distributed environment. The MapReduce runtime takes measures from three perspectives to achieve fault tolerance. First, the master periodically pings each worker. If no response is received within a threshold, that worker is considered dead and its assigned tasks are re-executed by other workers. This handles worker failure. Second, to handle master failure, the master periodically snapshots its current state and restores execution from the last checkpoint in the presence of failure. Last but not least, MapReduce relies on atomic commits of map and reduce tasks' output to ensure sequential consistency. This helps preserve the semantics of user programs in a highly concurrent environment.
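The first mechanism, ping-based worker failure detection, can be sketched as follows. The timeout value is an assumption for illustration, not the actual setting used by Google.

```python
import time

# Illustrative failure detector: the master records when each worker
# last answered a ping; workers silent longer than TIMEOUT are declared
# dead, making their in-progress tasks eligible for re-execution.
TIMEOUT = 10.0  # seconds; an assumed value, not the real system's setting

def find_dead_workers(last_response, now):
    return [w for w, t in last_response.items() if now - t > TIMEOUT]

now = time.time()
last_response = {"worker-1": now - 2.0, "worker-2": now - 30.0}
# worker-2 has been silent for 30 s > TIMEOUT, so it is declared dead.
assert find_dead_workers(last_response, now) == ["worker-2"]
```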

\textbf{Locality}: To reduce network and disk I/O overhead, the MapReduce scheduling algorithm takes locality into consideration as well. In particular, the master attempts to schedule a map task on a machine that contains a replica of the corresponding input data. Failing that, the master attempts to schedule it on a machine near the input data.

\textbf{Backup Tasks}: Sometimes workers which progress significantly more slowly than others, often called \emph{stragglers}, can greatly increase the total running time of the entire program. This is especially true when stragglers appear among the last few map or reduce tasks. To alleviate this problem, the runtime schedules backup executions of the remaining in-progress map and reduce tasks when the entire program is near completion. When any execution of a task finishes, the task is marked as completed.

\textbf{Status Information}: A MapReduce program is batch-oriented and can take a long time to complete. To give developers and engineers more insight into the execution, the master also serves as an HTTP server providing real-time progress and status information to users.

\subsection{Hadoop}\label{mr-hd-hd-intro}

Hadoop is an open-source implementation of the MapReduce framework developed and supported by the Apache Software Foundation. It consists of three parts: \emph{Hadoop Common} provides the common utilities that support the other Hadoop sub-projects; the \emph{Hadoop Distributed File System (HDFS)} is a distributed file system; and \emph{Hadoop MapReduce} is the MapReduce implementation itself.

\section{Improvements to MapReduce}\label{improve}

The appearance of MapReduce and Hadoop meets people's need for programming models and tools that handle data-intensive computing problems in distributed and cloud environments. They have been used and studied extensively ever since. In the meantime, engineers and researchers have proposed many modifications and improvements to them.

\subsection{Scheduling in Hadoop}\label{improve-schedule}

Scheduling has always been a problem whenever concurrency or parallelism is present. In this section, we introduce Hadoop's original scheduling algorithm and the \emph{Longest Approximate Time to End (LATE)} algorithm proposed by Zaharia et al.\ \cite{mr-hete}. We also discuss our opinions of these two algorithms.

\subsubsection{The Original Algorithm}\label{improve-schedule-original}

When a worker becomes available, Hadoop first checks whether any failed tasks are present and schedules them with the highest priority. Then, unscheduled tasks are considered. Finally, Hadoop looks for stragglers and runs speculative tasks.

To select stragglers, Hadoop monitors each task's progress with a \emph{progress score} between 0 and 1. For a map task, the score is the fraction of input data that has been read and processed. For a reduce task, the execution is divided into three phases, each accounting for $1/3$ of the score: the \emph{copy} phase, where intermediate pairs are pulled from map workers to the reduce worker; the \emph{sort} phase, where these input pairs are sorted by key; and the \emph{reduce} phase, where the user function is applied to the intermediate pairs to generate the final output. Within each phase, the detailed progress score is calculated according to the fraction of data copied, sorted or processed.
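The three-phase scoring scheme for reduce tasks can be written as a one-line formula; this sketch follows the description in the LATE paper rather than the actual Hadoop source:

```python
# A reduce task has three equally weighted phases: copy (0), sort (1)
# and reduce (2). Completed phases each contribute 1/3; the current
# phase contributes its fraction done, scaled by 1/3.
def reduce_progress_score(phase, fraction_done):
    # phase: 0 = copy, 1 = sort, 2 = reduce; fraction_done in [0, 1]
    return (phase + fraction_done) / 3.0

# Halfway through the sort phase: 1/3 (copy done) + (1/2) * 1/3 = 0.5
assert abs(reduce_progress_score(1, 0.5) - 0.5) < 1e-9
```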

With each task's progress score, the master calculates the average score; tasks falling more than 0.2 behind the average are considered stragglers. Other points worth noting are that at most one speculative copy of each straggler runs at any time, and that a task must run for at least one minute before it is evaluated for speculation.

\subsubsection{The LATE Scheduler}\label{improve-schedule-late}

Hadoop's original scheduling algorithm is straightforward and makes several implicit assumptions: for example, that workers progress at roughly the same rate; that the progress rate remains constant within a map or reduce task; that each of the three phases of the reduce work takes about the same time; and so on. Moreover, Hadoop employs only a threshold mechanism to determine stragglers.

Although this algorithm looks quite simple, it works relatively well in most homogeneous environments. However, when heterogeneity is present, Hadoop may perform badly \cite{mr-hete}. This observation motivated Zaharia et al.\ to propose the Longest Approximate Time to End (LATE) scheduling algorithm.

LATE is a speculative task scheduler; the key insight behind its design is to ``\emph{always speculatively execute tasks that will finish farthest into the future}'' \cite{mr-hete}.

To achieve this goal, LATE first defines the progress rate and estimates each task's remaining time: $\text{progress rate} = \text{progress score}/\text{execution time}$ and $\text{estimated time left} = (1-\text{progress score})/\text{progress rate}$. Then, LATE picks tasks which have the highest estimated remaining time and a progress rate lower than $SlowTaskThreshold$, marking them as eligible for speculation. Following that, LATE checks whether too many speculative tasks are already running, using a threshold $SpeculativeCap$. If that cap has not been reached, LATE chooses \emph{fast nodes} to run the chosen speculative tasks. Fast nodes are selected based on each node's total work performed: in particular, nodes whose total work exceeds $SlowNodeThreshold$ are considered fast.
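The candidate-selection step can be sketched as follows. This is a simplification: the threshold value and the percentile interpretation of $SlowTaskThreshold$ are assumptions for illustration, and the real algorithm also enforces $SpeculativeCap$ and the fast-node check, which are omitted here.

```python
# Illustrative LATE candidate selection: estimate each task's time left
# from its progress score and elapsed time, restrict attention to slow
# tasks, then pick the one that will finish farthest in the future.
SLOW_TASK_THRESHOLD = 0.25  # treated as the 25th percentile; an assumption

def progress_rate(score, elapsed):
    return score / elapsed

def time_left(score, elapsed):
    return (1.0 - score) / progress_rate(score, elapsed)

def pick_speculation_candidate(tasks):
    # tasks: list of (name, progress_score, elapsed_seconds)
    rates = sorted(progress_rate(s, e) for _, s, e in tasks)
    cutoff = rates[int(len(rates) * SLOW_TASK_THRESHOLD)]
    slow = [t for t in tasks if progress_rate(t[1], t[2]) <= cutoff]
    # Among slow tasks, choose the largest estimated time left.
    return max(slow, key=lambda t: time_left(t[1], t[2]))[0]

tasks = [("t1", 0.9, 10.0), ("t2", 0.2, 10.0), ("t3", 0.5, 10.0)]
assert pick_speculation_candidate(tasks) == "t2"
```

Here t2 has both the lowest progress rate (0.02/s) and the largest estimated time left (40 s), so it is the task worth backing up.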

\subsubsection{Discussion}\label{improve-schedule-discussion}

Compared with the original algorithm, LATE accounts for heterogeneity and thus provides three major advantages. First, LATE only relaunches the slowest tasks, and the number of slots for speculative tasks is also limited. Compared with Hadoop's original algorithm, this prevents launching too many backup tasks simultaneously, which can cause node thrashing. Second, LATE picks the nodes to run speculative tasks more carefully; namely, only fast nodes are considered. This gives a speculative task a higher probability of overtaking the original straggler. Third, a more precise method is employed to calculate each task's progress rate and current progress. Moreover, the estimated remaining time is used instead of the current progress rate alone; this tolerates tasks that occasionally slow down and thus improves accuracy when choosing stragglers.

However, we also believe there is room for modification and improvement in LATE. First and foremost, the method for calculating the progress rate can be improved. In the current algorithm, all historical information from the beginning of the task is taken into consideration, so the calculated progress rate may not reflect the real-time speed and can thus provide inaccurate information. More advanced methods, like the \emph{Round Trip Time (RTT)} estimation algorithm in TCP \cite{network-book}, can be employed.
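A minimal sketch of this suggestion: estimate the progress rate with an exponentially weighted moving average, in the spirit of TCP's RTT estimator, so that recent samples dominate stale history. The smoothing factor here is an assumed value (TCP traditionally uses 1/8; a larger value adapts faster).

```python
ALPHA = 0.5  # assumed smoothing factor; larger values weight recent samples more

def ewma_rate(samples):
    # samples: per-interval progress-rate measurements, oldest first
    estimate = samples[0]
    for rate in samples[1:]:
        estimate = (1 - ALPHA) * estimate + ALPHA * rate
    return estimate

# A task that was fast early but has recently slowed down: the EWMA
# drifts toward the recent slow rate, whereas the whole-history average
# used by LATE stays inflated by the fast early samples.
samples = [1.0, 1.0, 0.1, 0.1, 0.1]
assert ewma_rate(samples) < sum(samples) / len(samples)
```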

Second, many hard thresholds are used. Although LATE uses ranking to pick stragglers, thresholds are still used extensively. However, hard thresholds can be unhelpful in heterogeneous environments; even in homogeneous situations, a hard threshold may still misclassify nodes which are only slightly better or worse than the threshold. Thus, wider use of ranking, or mechanisms such as dynamic and adaptive thresholds, would be quite desirable.

Third, LATE does not take data locality into consideration when choosing nodes to run speculative tasks, because the authors believe a computationally fast node is more suitable than a node closer to the data, a claim they support with statistics and experiments. However, the potential performance penalty from network bandwidth and I/O should never be underestimated. A mixture of computational power and data locality could be used when evaluating whether nodes are suitable for running speculative tasks.

\subsection{MapReduce Online}\label{improve-mr-online}

Although MapReduce can offer status updates while running, it is batch-oriented and does not provide results until the whole task is finished. Moreover, MapReduce \emph{materializes} all intermediate data. These properties help improve the efficiency of MapReduce. They also simplify synchronization between the map and reduce phases, allowing the fault tolerance mechanism to be simple and elegant. However, researchers are also working on migrating concepts from the database domain, like \emph{pipelining} and \emph{online aggregation}, to MapReduce. Among these works, Condie et al.'s approach is a good example \cite{mr-online}.

\subsubsection{Hadoop Online Prototype (HOP)}\label{improve-mr-online-hop}

In Condie et al.'s paper, a system called the \emph{Hadoop Online Prototype (HOP)} is proposed. HOP provides pipelining, online aggregation and continuous queries. Online aggregation allows users to see approximate, continually refined results as soon as the reducers start processing intermediate data. Meanwhile, support for continuous queries also widens the range of applications MapReduce can be used for; for example, event monitoring and stream processing can now be supported.

The basic idea for achieving pipelining is as follows. The master starts reducers once their corresponding mappers have some initial output. These reducers connect to a bounded number of their mappers via TCP connections. They can then pull intermediate data from these mappers and start generating final results. The authors also propose an adaptive load balancing mechanism to even out the burden on mappers and reducers. More specifically, each mapper tells the master about the amount of new data it generates via RPC. If the reducers can keep up with the speed of the mappers, everything works as described above. However, if a reducer falls behind, the mapper acts in some sense like a ``reducer'', performing some sorting and aggregation itself. This is particularly helpful for saving time and network bandwidth when intermediate data are large but easy to process, as in the word counting example.
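The mapper-side aggregation applied when a reducer falls behind can be sketched for the word counting case. This is an illustrative stand-in (function and variable names are our own), not HOP's actual code:

```python
from collections import Counter

# When a reducer cannot keep up, the mapper pre-combines the (word, 1)
# pairs it has buffered instead of shipping every raw pair, shrinking
# the volume of data that must cross the network.
def combine(buffered_pairs):
    combined = Counter()
    for word, count in buffered_pairs:
        combined[word] += count
    return list(combined.items())

raw = [("the", 1), ("fox", 1), ("the", 1), ("the", 1)]
# Four raw pairs collapse into two partially reduced pairs.
assert sorted(combine(raw)) == [("fox", 1), ("the", 3)]
```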

Fault tolerance in HOP is also a little different from stock Hadoop. In particular, data generated by mappers that has already been pipelined to reducers is not discarded immediately. Rather, it is kept on disk until the whole program is finished. Meanwhile, each mapper periodically reports the progress of its work to the master, in terms of the offset within the generated intermediate file. Thus, when a mapper fails, the master can simply reschedule it and ask it to resume from the last offset. On the other hand, if a reducer fails, the new reducer can pull all the data from the beginning and then start processing.

Pipelining enables single-job online aggregation. To enable multi-job online aggregation, the notion of a ``\emph{snapshot}'' is introduced. Suppose there are jobs $j_{1}$ and $j_{2}$, and $j_{2}$ depends on $j_{1}$'s output. Unlike the single-job scenario, $j_{2}$'s mappers cannot start until $j_{1}$ is entirely finished. To resolve this, $j_{1}$ can take snapshots of its final result when, for example, $25\%$, $50\%$ or $75\%$ of the total work is done. Thus, $j_{2}$ can make use of these partial results and generate approximate output for the user. One problem with this approach is that a later snapshot cannot reuse an earlier one, so multi-job online aggregation comes with a performance penalty.

Pipelining also enables continuous queries, such as analyzing URL access logs or system console logs. Implementing continuous queries on HOP is natural and straightforward; it requires no significant changes. The only point worth noting is that a continuous query may still need a window of historical information. Thus, to achieve fault tolerance, rolling-buffer style backups may be needed.

\subsubsection{Discussion}\label{improve-mr-online-discussion}

HOP extends Hadoop's functionality through pipelining. However, as in the database domain, pipelining and online aggregation may also lead to potential problems.

Pipelining within a single job may improve efficiency, but pipelining among multiple jobs can be costly and degrade system-wide speed. As mentioned in the last section, to support inter-job online aggregation, snapshots must be used. These intermediate snapshots and the corresponding approximate results do not contribute to the final accurate result, yet they consume a lot of computation and I/O resources. Users should be very careful in balancing the benefits and disadvantages of pipelining and online aggregation.

Another problem which inevitably comes with online aggregation is accuracy. More specifically, it is hard to guarantee that approximate results are close to the final results. In the database domain, special sampling methods like \emph{ripple join} are usually used to achieve accuracy \cite{db-book}. Moreover, statistical information such as confidence intervals is also provided \cite{db-book}. In HOP, however, no special non-linear sampling or access pattern is used, and no statistical information is provided either. Although the preliminary evaluation in the paper shows that the approximation is reasonably accurate, more careful analysis and experiments should be conducted before drawing a final conclusion.

\subsection{Piccolo}\label{improve-piccolo}

MapReduce and Hadoop belong to the \emph{shared nothing} category in the big family of distributed programming models. Compared with traditional \emph{shared memory} or \emph{message passing} models, they provide an easy, convenient, yet powerful way to express many computations. However, such a data-centric model is not a natural fit for in-memory applications which require frequent, fine-grained access to intermediate state. Power and Li borrow ideas from both categories and propose a new programming model named Piccolo \cite{piccolo}.

\subsubsection{The Piccolo Programming Model}\label{improve-piccolo-model}

Piccolo is a data-centric programming model for writing parallel in-memory applications spanning many machines \cite{piccolo}. When writing a Piccolo application, the user specifies a \emph{control} function and a \emph{kernel} function. The control function creates shared tables, launches instances of the kernel function and performs global synchronization. The kernel function contains the application logic and may read/write data from/to the shared tables. Currently, only global barriers in the control function are implemented; pairwise synchronization among kernel instances is not supported.

Piccolo's shared table is based on \emph{(key, value)} pairs. Users can define how a shared table is partitioned using a \emph{partition function}. Piccolo provides the following interfaces to the shared table: \emph{clear()}, \emph{contains(key)}, \emph{get(key)}, \emph{put(key, value)}, \emph{update(key, value)}, \emph{flush()} and \emph{get\_iterator(partition)}. Particularly interesting among these interfaces is the update function: since multiple kernel instances may issue updates concurrently, the user is allowed to define an \emph{accumulation} function to resolve write-write conflicts. Another point worth noting is that Piccolo also provides some level of guarantee on table semantics.
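The update/accumulation distinction can be illustrated with a single-process stand-in for the shared table. The method names follow the paper's interface; the partitioning, distribution and concurrency machinery of the real system is deliberately omitted.

```python
# Illustrative in-process sketch of Piccolo's shared-table interface.
class SharedTable:
    def __init__(self, accumulate):
        self._data = {}
        self._accumulate = accumulate  # user-defined conflict resolver

    def contains(self, key):
        return key in self._data

    def get(self, key):
        return self._data[key]

    def put(self, key, value):
        self._data[key] = value  # unconditional overwrite

    def update(self, key, value):
        # Concurrent updates to the same key are merged with the
        # user-defined accumulation function instead of racing.
        if key in self._data:
            self._data[key] = self._accumulate(self._data[key], value)
        else:
            self._data[key] = value

# PageRank-style usage: kernel instances add rank contributions to a
# page's entry, and addition serves as the accumulation function.
ranks = SharedTable(accumulate=lambda old, new: old + new)
ranks.update("page-a", 0.3)
ranks.update("page-a", 0.2)   # merged, not overwritten
assert abs(ranks.get("page-a") - 0.5) < 1e-9
```

Because addition is commutative and associative, the merged result is independent of the order in which concurrent updates arrive, which is what makes the accumulation approach to write-write conflicts safe.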

\subsubsection{Other Considerations}\label{improve-piccolo-other}

As in the MapReduce case, Power and Li also discuss several other key considerations in the design of Piccolo that make it a practical, real-world programming model.

\textbf{Locality}: If users have prior knowledge of which table(s) a particular kernel will access most frequently, they can explicitly ask the runtime to schedule that kernel on the machine which stores that table. This helps reduce network and disk I/O overhead.

\textbf{Load Balancing}: Unlike MapReduce, not much flexibility is available to the Piccolo runtime for load balancing, as the runtime must honor the locality preferences assigned by the user, and an ongoing kernel task cannot be interrupted even if it is running slowly (terminating a kernel may require the whole program to restart from the very beginning, which is too costly). Thus, Piccolo employs only a simple form of load balancing: when the runtime sees idle nodes which have finished all their own work, it instructs these nodes to \emph{steal} queued kernel instances from other busy nodes. Queued kernel instances with the largest associated table partitions are chosen first. To accomplish the stealing, the runtime also uses a mechanism to migrate table partitions to the new node.

\textbf{Fault Tolerance}: Piccolo does not provide fully automatic fault tolerance. Rather, it relies on user-assisted checkpoint and restore to handle failures. More specifically, upon receiving the user's request to perform a checkpoint, the runtime uses the Chandy-Lamport distributed snapshot algorithm to snapshot the shared tables' current state. However, it is the user's responsibility to save any program or execution related data, such as variable values. Thus, when a failure occurs, the system can restart from the last checkpoint and resume running.

\subsubsection{Discussion}\label{improve-piccolo-discussion}

Traditional distributed programming models include message-passing models and distributed shared-memory models. Both are mostly used in computation-intensive settings; hence, they usually provide extensive mechanisms for workers to communicate with each other and perform synchronization. Models for data-intensive computation, like MapReduce and Hadoop, on the other hand, do not provide such features. However, for some data-intensive applications, like PageRank calculation and distributed web crawling, communication among workers can be crucial. Piccolo recognizes this need and integrates such communication features into a distributed data-centric model. Moreover, Piccolo also proposes the novel accumulation method to resolve write-write conflicts.

Meanwhile, however, we should also note that Piccolo is a very new model; it lacks some flexibility and convenience in terms of fine-grained synchronization and fully automatic fault tolerance. Room still exists for modifications and improvements.

\section{Future Research}\label{future}

Big data is an emerging trend in computation; it provides vast opportunities for companies and other organizations to explore new findings and make profits. However, challenges always come along with opportunities, and people soon discover that it is getting more and more difficult to manage and make use of these data. Cloud computing and virtualization help solve these problems from the hardware perspective, while MapReduce and Hadoop are becoming the core on the software side.

Hadoop, together with the HDFS filesystem, acts as the backbone of data-intensive computation. Despite the fact that it can be directly applied to many problems, people are now realizing that what they really need is an entire platform based on Hadoop to satisfy various needs. Fortunately, both academia and industry are starting to pay attention to this need. Piccolo is a good attempt from the research community, while Apache's Hive, HBase and Pig \cite{hd} and Microsoft's announcement of integrating Hadoop into SQL Server 2012 \cite{ms-integrate-hd-into-sql} show the determination of companies. Many design and implementation challenges will inevitably arise, and we believe they provide ample opportunities for both researchers and developers to investigate and explore.

Security is another problem we are concerned with. Originally, Hadoop did not provide much of a security guarantee in terms of confidentiality, integrity and authenticity. At Hadoop Summit 2010, Yahoo! started to integrate mechanisms like Kerberos into Hadoop to provide some level of security \cite{hd-security}. However, the scope is very limited due to concerns about backward compatibility, performance, etc. Moreover, some experts also expressed their concerns about these new security mechanisms at the Black Hat conference, concluding: ``\emph{Hadoop made significant advances but faces several significant challenges}'' \cite{hd-security-question}. As more and more people use Hadoop and HDFS to manage data, there will certainly be more and more hostile activity against it. Providing powerful, flexible, yet performance-preserving security measures is still an open topic.

\section{Conclusion}\label{conclusion}

In this short survey, we introduced the MapReduce programming model and its corresponding open-source implementation, Hadoop. We also investigated several modifications and improvements to them, namely the LATE scheduler, MapReduce Online (HOP) and Piccolo. At the end of the survey, we proposed two major directions which we believe are of great importance and not yet fully studied.

\bibliography{ref}

\end{document}
