\section{Introduction}
Data-intensive applications are gaining popularity and importance in various fields,
such as scientific computation and large-scale Internet services. Several programming frameworks have been proposed
for running such applications on clusters of commodity machines, among which
MapReduce~\cite{mapreduce} is the most well-known and widely understood.
Hadoop~\cite{hadoop} is an open-source Java implementation of MapReduce,
currently in wide use at many Internet services companies (e.g., Yahoo! and Facebook).

Monitoring Hadoop and diagnosing MapReduce performance problems, however, are inherently difficult
due to the large scale and distributed nature of such systems. Debugging Hadoop programs by inspecting their various
logs is painful: Hadoop logs can grow excessively large, making manual inspection impractical.
Most current Hadoop monitoring tools, such as Mochi~\cite{mochi}, are purely log-based and lack
important operating system level metrics that could significantly aid diagnosis.
Cluster monitoring tools such as Ganglia~\cite{ganglia} expose
per-node and cluster-wide OS metrics but provide no MapReduce-specific information.

We present a non-intrusive, effective, distributed Hadoop monitoring tool that aims to
make Hadoop program debugging simpler. It extracts a number of useful MapReduce-level and Hadoop Distributed File System (HDFS)
level metrics from Hadoop logs and correlates them with operating system level metrics, such as CPU and
disk I/O utilization, on a per-job/per-task basis. To present these metrics to programmers effectively,
we propose a flexible multi-view visualization scheme comprising a cluster view, node view, job view, and task view,
each representing a different level of granularity. We further propose a view-zooming model to help application developers reason about the execution progress and behavior of their MapReduce jobs.
To the best of our knowledge, our tool, along with its multi-view visualization and view-zooming model, is the first monitoring tool to correlate per-job/per-task operating system metrics with MapReduce-specific metrics, and to support both
online monitoring and offline analysis. We present case studies showing how our tool can effectively diagnose performance problems in MapReduce programs, some of which cannot be easily diagnosed by other tools.
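The core per-task correlation can be sketched as follows. This is a minimal illustration only: the log format, task identifiers, and OS-metric samples below are all hypothetical stand-ins, not Hadoop's actual log schema or our tool's implementation; it merely shows the idea of joining log-derived task timings with OS-level measurements keyed by task.

```python
import re
from datetime import datetime

# Hypothetical, simplified task log lines; real Hadoop logs differ.
log_lines = [
    "2010-05-01 12:00:01 TASK_START task_0001_m_000003",
    "2010-05-01 12:00:09 TASK_END   task_0001_m_000003",
]

# Hypothetical per-task OS-level samples (e.g., aggregated from /proc),
# keyed by task id: (cpu_utilization_percent, disk_read_mb).
os_samples = {
    "task_0001_m_000003": (87.5, 412.0),
}

pattern = re.compile(
    r"(?P<ts>[\d-]+ [\d:]+) (?P<event>TASK_START|TASK_END)\s+(?P<task>\S+)"
)

starts, report = {}, {}
for line in log_lines:
    m = pattern.match(line)
    if not m:
        continue
    ts = datetime.strptime(m.group("ts"), "%Y-%m-%d %H:%M:%S")
    task = m.group("task")
    if m.group("event") == "TASK_START":
        starts[task] = ts
    else:
        # Join the log-derived duration with the OS metrics for this task.
        duration = (ts - starts[task]).total_seconds()
        cpu, disk = os_samples.get(task, (None, None))
        report[task] = {"duration_s": duration, "cpu_pct": cpu, "disk_mb": disk}

print(report["task_0001_m_000003"])
# → {'duration_s': 8.0, 'cpu_pct': 87.5, 'disk_mb': 412.0}
```

A correlated record like this is what the per-task view can then render, letting a developer see, for example, that a slow task was CPU-bound rather than I/O-bound.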

The rest of this paper is organized as follows. Section~\ref{sec:arch} introduces the architecture of our monitoring tool, and
Section~\ref{sec:impl} describes implementation details. Section~\ref{sec:eval} presents evaluation results
and several examples demonstrating the effectiveness of our tool in diagnosing MapReduce programs.
Section~\ref{sec:rel} compares our tool with related work, and Section~\ref{sec:con} concludes the paper.
