\documentclass[a4paper,10pt]{article}

\usepackage{graphicx}
\usepackage{subfigure}

% \topmargin -1.0cm        % read Lamport p.163
% \oddsidemargin -0.02cm   % read Lamport p.163
% \evensidemargin -0.02cm  % same as oddsidemargin but for left-hand pages
% \textwidth 16.59cm
% \textheight 24.00cm 
% \pagestyle{empty}       % Uncomment if don't want page numbers
% \parskip 5.5pt           % sets spacing between paragraphs
% \parindent 0pt		  % sets leading space for paragraphs
% \headsep 0.50cm


\title{Hadoop Performance Monitoring Tools}
\author{Kai Ren, Lianghong Xu, Zongwei Zhou}

\begin{document}
\date{}
\maketitle

\subsection*{Introduction}
MapReduce (MR) \cite{mapreduce}, built on top of the Google File System (GFS) \cite{gfs}, is a programming framework intended to facilitate data-intensive cloud computing on commodity clusters. Hadoop \cite{hadoop} is an open-source Java implementation of MapReduce that is currently in wide use at many Internet service companies.
However, diagnosing the performance problems of Hadoop programs and monitoring a Hadoop system are inherently difficult due to its scale and distributed nature.
Although the current Hadoop system exports many textual metrics and logs, this information is difficult to interpret and not fully understood by many application programmers.
The survey in \cite{mochi} points out that many Hadoop users are more concerned about dynamic MR-specific behavior and its implications for performance.
For example, a typical question faced by the user is how many mappers/reducers should be used to achieve the best performance. However, the metrics and logs exposed by current Hadoop can hardly explain why a specific number of mappers/reducers is optimal. This motivated us to extract fine-grained metrics of Hadoop job behavior and to visualize its distributed execution to support debugging. On the other hand, in the context of running multiple jobs on a Hadoop system, the system also lacks metrics that reflect the cluster resource utilization of each job (task). This makes it difficult for cluster administrators to measure cluster utilization and choose the correct configuration for their Hadoop systems.

This project aims to provide an online monitoring tool that collects fine-grained metrics of MapReduce jobs and Hadoop systems and visualizes the collected data. Application programmers can use it to better understand the execution of individual Hadoop MapReduce jobs and therefore reason about the root causes of their performance problems.
Cluster administrators get more comprehensive information about cluster resource utilization. We also expect our monitoring tool to provide useful feedback to Hadoop developers.

\subsection*{Goals}
The goals of our project are:
 
\textit{To capture fine-grained metrics of the dynamic behavior of a Hadoop program} that affect its performance but are not visible or exposed to user-written Map and Reduce code. Examples include the control flow and data flow of Hadoop programs at the per-job/per-task level, as well as OS-level metrics of each task/job (e.g., disk and network I/O throughput, memory consumption, CPU load).
 
\textit{To find natural ways to visualize our data and help users quickly identify problems in their programs}. For example, in Hadoop jobs some map/reduce tasks run extremely slowly, and we want to diagnose the root cause of the slowdown: an unbalanced key distribution, algorithmic mistakes, or placement on a busy node. We also want to integrate our metrics reports into existing cluster monitoring tools, e.g., Ganglia \cite{ganglia}, so that cluster users and administrators can simultaneously get an overview of the workload of the whole cluster and status information about the MapReduce programs running on it.
 
\textit{To correlate the performance of a Hadoop program execution with its configuration parameters}. We want to develop a methodology for using our monitoring tool to analyze the effect of configuration parameters on the execution of Hadoop programs.
 
\textit{To be online}. We plan to implement the monitoring tool in an online fashion so that useful information can be reported to users as soon as possible.

\subsubsection*{Major contribution}

Our major contributions are:

\begin{enumerate}
\item Collect OS-level information for each map/reduce task, and visualize and correlate OS-level metrics with high-level MapReduce metrics (e.g., data flow and control flow). This is useful to MapReduce application developers and is not provided by current tools \cite{mochi}. A possible approach is to instrument the JVM.

\item Collect utilization information (e.g., disk I/O and network utilization) per node in the context of multiple jobs. This feature benefits both cluster administrators who want to monitor cluster utilization and MapReduce application programmers.

\item Help MapReduce application programmers perform root-cause analysis of the performance problems of MR programs.

\item (Optional) Implement our monitoring system in an online fashion.

\item (Optional) Embed our online monitoring system into the Hadoop framework and improve the MapReduce scheduler through feedback of run-time resource utilization information.
\end{enumerate}

\subsection*{Related Work}
Currently, Hadoop reports coarse-grained metrics about the performance of the whole system \cite{hdpguide} through logs and a metrics API. Unfortunately, it lacks certain important metrics at the per-job/per-task level, such as disk and network I/O utilization. Furthermore, the logs generated by Hadoop can get excessively large, which makes them extremely difficult to handle manually. Java debugging/profiling tools (jstack, hprof) only help debug local code-level errors rather than distributed problems across the cluster. Path-tracing tools \cite{xtrace}, although they report fine-grained data such as RPC call graphs, fail to provide insights at the higher level of abstraction (e.g., Maps and Reduces) that is more natural to application programmers.
 
Mochi \cite{mochi}, a visual log-analysis-based tool, partially solves some of these problems. It parses the logs generated by Hadoop in debugging mode, infers the causal relations among recorded events, and then reports visualized metrics such as per-task execution time and per-node workload. However, it still does not correlate per-job/per-task MR-specific behavior with OS-level metrics, and it does not provide any root-cause analysis for the performance problems of MR programs. Also, its analysis runs offline and thus cannot provide instant visualized monitoring information to users.

Ganglia \cite{ganglia} is a cluster-monitoring tool that collects system-wide totals of several high-level variables for each monitored node, e.g., CPU utilization, throughput, and free disk space. Ganglia mainly focuses on high-level variables (uptime, boot state, free disk, etc.) and tracks only system-wide totals. It is usually used to help flag misbehaviors (e.g., ``a node went down'').


\subsection*{Evaluation}

The effectiveness of our monitoring tool relies on its low overhead, which enables it to collect metrics consistently without slowing down the cluster,
and on its extensive coverage of the monitored events, which prevents potentially interesting behaviors from flying under the radar. We envision the following types of experiments to evaluate our monitoring tool:

\textit{Event coverage}. We plan to use three Hadoop programs as case studies to evaluate the effectiveness of our monitoring tool. We show that through our monitoring tool, users can infer some root causes of performance problems in their programs. The three Hadoop programs are:

\begin{itemize}
\item DiscFinder: DiscFinder is a distributed version of the Friends-of-Friends technique, an astronomical method for identifying clusters of galaxies. The traditional sequential scheme is no longer sufficient as the size of galaxy datasets grows. DiscFinder is implemented using a MapReduce ``wrapper'', which distributes a set of galaxies among multiple cores, runs a sequential Friends-of-Friends algorithm on each core, and then merges the results of the local computations. It treats the sequential Friends-of-Friends procedure as a black box and does not rely on any specific properties of that procedure. It is mainly I/O intensive. We found that its performance is seriously affected by different settings of the number of mappers and reducers.

\item TeraSort: The TeraSort benchmark sorts $10^{10}$ records (approximately 1\,TB), each with a 10-byte key. The Hadoop implementation of TeraSort distributes the record dataset to a number of mappers to generate key-value pairs; reducers then fetch these records from the mappers and sort them by key.

\item PEGASUS: PEGASUS is an open-source Peta-scale Graph Mining library that performs typical graph mining tasks such as computing the diameter of a graph, computing the radius of each node, and finding connected components.
\end{itemize}

\textit{Overhead}. First, we monitor the resource utilization of our data collectors. Second, we compare the performance of Hadoop jobs with our monitoring tool enabled and disabled.
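As a concrete illustration of the second measurement, the relative slowdown could be computed from the two job runtimes as in the sketch below; the helper name and units are our own choice, not part of Hadoop:

```python
def overhead_percent(runtime_monitored, runtime_baseline):
    """Relative slowdown (in percent) of a job when monitoring is enabled.

    runtime_monitored: job runtime in seconds with the monitoring tool on.
    runtime_baseline:  job runtime in seconds with the monitoring tool off.
    """
    return 100.0 * (runtime_monitored - runtime_baseline) / runtime_baseline
```

For example, a job that takes 105 seconds with monitoring enabled and 100 seconds without incurs a 5\% overhead.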


\subsection*{Preliminary Results}
\subsubsection*{Visualize map/reduce task execution time break down}
Map task breakdown: let $t_{m1}$ denote the map task launch time, $t_{m2}$ the start time of the successful map attempt, $t_{m3}$ the finish time of the successful map attempt, and $t_{m4}$ the map task finish time. We define:
\begin{itemize}
\item $\Delta_1 = t_{m2} - t_{m1}$: task setup overhead, plus the execution time of failed map attempts
\item $\Delta_2 = t_{m3} - t_{m2}$: time to execute the successful map attempt
\item $\Delta_3 = t_{m4} - t_{m3}$: task cleanup overhead
\end{itemize}

Reduce task breakdown: let $r_{m1}$ denote the reduce task launch time, $r_{m2}$ the start time of the successful reduce attempt, $r_{m3}$ the time the successful attempt finishes its shuffle, $r_{m4}$ the time it finishes its sort, $r_{m5}$ its finish time, and $r_{m6}$ the reduce task finish time. We define:
\begin{itemize}
\item $\Delta_1 = r_{m2} - r_{m1}$: task setup overhead, plus the execution time of failed reduce attempts
\item $\Delta_2 = r_{m3} - r_{m2}$: time to execute the shuffle phase
\item $\Delta_3 = r_{m4} - r_{m3}$: time to execute the sort phase
\item $\Delta_4 = r_{m5} - r_{m4}$: time to execute the reduce phase
\item $\Delta_5 = r_{m6} - r_{m5}$: task cleanup overhead
\end{itemize}
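The deltas are simple timestamp differences. As a sketch (a hypothetical offline helper, not part of Hadoop), the five reduce-phase durations could be computed from the six timestamps, assuming they have already been parsed from the job history log in milliseconds:

```python
def reduce_task_breakdown(rm1, rm2, rm3, rm4, rm5, rm6):
    """Return the five phase durations of a reduce task.

    Arguments follow the timestamp definitions above, in milliseconds.
    """
    return {
        "setup":   rm2 - rm1,  # setup overhead + failed reduce attempts
        "shuffle": rm3 - rm2,  # shuffle phase
        "sort":    rm4 - rm3,  # sort phase
        "reduce":  rm5 - rm4,  # reduce phase
        "cleanup": rm6 - rm5,  # cleanup overhead
    }
```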

Example visualization output is shown in Figure~\ref{fig:breakdown}.
This breakdown gives the programmer a concrete view of where each task spends its time: is the problem in the MapReduce framework overhead, or in the task itself? For a reduce task, is the problem in the shuffle, sort, or reduce phase?

\begin{figure}
  \begin{center}
      \includegraphics[width=.8\textwidth]{figs/breakdown.pdf}
   \end{center} \vspace{-0.2in}
   \caption{An example of the breakdown of the execution of all tasks}
   \label{fig:breakdown}
\end{figure}


\subsubsection*{Visualize map/reduce task execution time distribution}
Similar to the breakdown above, we define the map task execution time as $t_{m4} - t_{m1}$ and the reduce task execution time as $r_{m6} - r_{m1}$.

Example visualization output is shown in Figure~\ref{fig:dist}.
This gives the programmer a concrete view of the task time distribution: are there abnormal tasks that take up much of the execution time? Is the map/reduce task workload sufficiently uniform?
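One simple way to surface abnormal tasks from such a distribution is sketched below; the cutoff (twice the median execution time) is an assumption of ours for illustration, not a Hadoop convention:

```python
def flag_slow_tasks(durations, factor=2.0):
    """Return indices of tasks whose duration exceeds factor * median.

    durations: list of per-task execution times (any consistent unit).
    """
    s = sorted(durations)
    n = len(s)
    # median of the sorted durations
    median = s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2
    return [i for i, d in enumerate(durations) if d > factor * median]
```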

\begin{figure}
  \begin{center}
      \includegraphics[width=.8\textwidth]{figs/mapdist.pdf}
      \includegraphics[width=.8\textwidth]{figs/reducedist.pdf}
   \end{center} \vspace{-0.2in}
   \caption{An example of the distribution of the running time of map and reduce tasks}
   \label{fig:dist}
\end{figure}


\subsubsection*{Visualize aggregated information of each TaskTracker (or per host)}
We aggregate the following information:
\begin{enumerate}
\item map tasks per host (successful/failed/speculative)
\item reduce tasks per host (successful/failed/speculative)
\item total file bytes read per host
\item total file bytes written per host
\item total HDFS bytes read per host
\item total shuffle bytes per host
\item total map output bytes per host
\item total map input bytes per host
\end{enumerate}

Example visualization output is shown in Figure~\ref{fig:mapperhost} and Figure~\ref{fig:shuffleperhost}. Metrics 1) and 2) give the programmer a view of the map/reduce task distribution: is there severe task contention, and if so, on which host?
Metrics 3), 4) and 5) can help the programmer diagnose local file system errors and HDFS errors: is the data in HDFS distributed evenly?
Metric 6) lets the programmer know whether the shuffled data is evenly distributed among hosts: is there a host that receives a lot of shuffle data and thus becomes a bottleneck? Metrics 7) and 8) give the programmer a view of the map input and output data: is there a host that consumes a lot of input data or generates a lot of output data?
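These per-host totals are straightforward sums over per-task counters. A minimal sketch of the aggregation, assuming each task record has already been reduced to a (host, counters) pair; the counter names here are illustrative, not Hadoop's exact counter keys:

```python
from collections import defaultdict

def aggregate_per_host(task_records):
    """Sum per-task counters into per-host totals.

    task_records: iterable of (host, counters) pairs, where counters is a
    dict such as {"shuffle_bytes": ..., "hdfs_read_bytes": ...}.
    """
    totals = defaultdict(lambda: defaultdict(int))
    for host, counters in task_records:
        for name, value in counters.items():
            totals[host][name] += value
    # convert back to plain dicts for easy serialization/plotting
    return {host: dict(c) for host, c in totals.items()}
```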

\begin{figure}
  \begin{center}
      \includegraphics[width=.8\textwidth]{figs/maptasks.pdf}
   \end{center} \vspace{-0.2in}
   \caption{An example of the number of tasks per host}
   \label{fig:mapperhost}
\end{figure}

\begin{figure}
  \begin{center}
      \includegraphics[width=.8\textwidth]{figs/shuffle.pdf}
   \end{center} \vspace{-0.2in}
   \caption{An example of shuffle bytes per host} 
   \label{fig:shuffleperhost}
\end{figure}

\subsubsection*{Debug experiences}
DiscFinder example: as we increase the number of reducers from 60 to 1024, performance first rises and then drops. Why? What is the optimal number of reducers?

The job uses 746 mappers; we tested 60, 124, 252, and 1024 reducers.

\begin{figure}
  \begin{center}
      \includegraphics[width=.8\textwidth]{figs/60.pdf}
      \includegraphics[width=.8\textwidth]{figs/124.pdf}
      \includegraphics[width=.8\textwidth]{figs/1024.pdf}
   \end{center} \vspace{-0.2in}
   \caption{An example of debugging DiscFinder} 
   \label{fig:debug}
\end{figure}

As shown in Figure~\ref{fig:debug}, when the reducer number is 60, some reducers take quite a long time. The reason may lie in the DiscFinder program: the reduce key distribution is not even. When the reducer number is large, the system can only launch a certain number of reduce tasks at the beginning of the job, and then continuously launches the remaining tasks. Since reducer task slots are limited in the system, a large number of reducers does not improve system performance; instead, it may introduce more task startup and cleanup overhead. When the reducer number is 124, the reduce key distribution is even and all reducers can be launched at the beginning of the job, which is good for performance.

\bibliographystyle{plain}
\bibliography{propref}
\end{document}
