\section{Architecture}\label{sec:arch}

Our non-intrusive monitoring tool mines data from Hadoop logs and extracts OS-level information about job-related processes on each cluster node. We construct four different views of the collected metrics, namely cluster view, node view, job view, and task view, and propose a view-zooming model to assist system administrators and application developers in reasoning about system behavior and job execution progress.

\subsection{Metrics aggregation}

In order to expose as much useful information as possible, our monitoring tool collects a number of metrics at the MapReduce, Hadoop Distributed File System (HDFS), and operating system levels. MapReduce and HDFS metrics are extracted from the logs of the Hadoop jobtracker, tasktrackers, and HDFS datanodes, while operating system metrics are gathered on each node in the cluster.

\subsubsection{MapReduce and HDFS metrics}

Hadoop provides various logs detailing MapReduce job execution as well as the underlying datanode activities. These logs grow excessively large after a MapReduce job has run for a while. However, we observe that a large amount of the information in the logs is redundant or overly trivial, so there is a good opportunity to filter only the useful information out of the massive logs. A close examination of all these logs enables us to extract a number of MapReduce- and HDFS-level metrics that, to our knowledge, may be of interest to the potential users of our tool.

MapReduce-level metrics are mainly obtained from the jobtracker and tasktracker logs. The jobtracker log provides rich information about job execution. Specifically, it records the job duration, the lifetime of each task, and various job-related counters, including the total number of launched maps and reduces, the number of bytes read from and written to HDFS by each task, map input records, reduce shuffle bytes, and so on. Tasktracker logs are distributed over every node and are sent to a central collector for analysis. They reveal detailed information about task execution state, such as the Java Virtual Machine ID associated with each task and the shuffle data flow, including total shuffle bytes and source and destination node locations. HDFS datanode logs mainly describe data flow at the underlying file system level. A typical datanode log entry contains the operation type (read or write), the source and destination network addresses of the two nodes involved, the number of transferred bytes, and the operation timestamp. We do not examine the HDFS master node (namenode) log, because it contains only metadata-related information, which we believe is of little interest to cluster administrators and application developers.
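A minimal sketch of such a log filter might look as follows; the regular expression and the sample line are illustrative assumptions, since the exact datanode log layout depends on the Hadoop version, and this is not the tool's actual parser:

```python
import re

# Illustrative pattern for a datanode transfer entry: timestamp, operation
# type, source/destination addresses, and transferred bytes. The field
# layout here is an assumption for demonstration purposes.
ENTRY_RE = re.compile(
    r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}).*"
    r"op: (?P<op>HDFS_READ|HDFS_WRITE), .*"
    r"src: /(?P<src>[\d.]+):\d+, dest: /(?P<dst>[\d.]+):\d+, "
    r"bytes: (?P<bytes>\d+)"
)

def parse_datanode_line(line):
    """Return (timestamp, op, src, dst, bytes) or None for irrelevant lines."""
    m = ENTRY_RE.search(line)
    if m is None:
        return None  # filter out redundant or trivial entries
    return (m.group("ts"), m.group("op"), m.group("src"),
            m.group("dst"), int(m.group("bytes")))

sample = ("2012-03-01 10:15:02 INFO op: HDFS_READ, "
          "src: /10.0.0.1:50010, dest: /10.0.0.2:41230, bytes: 67108864")
print(parse_datanode_line(sample))
```

Lines that do not match the pattern are discarded, which is how the filtering of redundant entries described above could be realized.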

\subsubsection{Operating system metrics}

A distinguishing feature of our monitoring tool is its ability to aggregate per-node operating system level metrics within a cluster. For now, the exported metrics include CPU, memory, disk, and network utilization on each node, which we believe satisfies most of our users' needs.
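As a sketch of how the per-node CPU utilization metric can be derived, the standard Linux approach is to difference two samples of the jiffy counters from the \texttt{cpu} line of \texttt{/proc/stat}; the numbers below are illustrative, and this is an assumed technique rather than the tool's documented implementation:

```python
def cpu_utilization(prev, curr):
    """Fraction of non-idle CPU time between two /proc/stat 'cpu' samples.

    prev/curr are jiffy counters from the aggregate 'cpu' line:
    user, nice, system, idle, iowait, irq, softirq, ...
    """
    prev_idle = prev[3] + prev[4]  # idle + iowait
    curr_idle = curr[3] + curr[4]
    delta_total = sum(curr) - sum(prev)
    delta_idle = curr_idle - prev_idle
    return (delta_total - delta_idle) / delta_total

# Two successive samples of the aggregate 'cpu' line (illustrative numbers).
sample1 = [4000, 50, 1000, 10000, 200, 10, 40]
sample2 = [4300, 50, 1100, 10550, 220, 10, 50]
print(round(cpu_utilization(sample1, sample2), 3))  # 0.418
```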

\subsection{Multi-view visualization}

Our monitoring tool provides both coarse-grained and fine-grained views of MapReduce job execution. Specifically, four levels of views (cluster view, job view, task view, and node view) are presented according to users' needs.

\subsubsection{Cluster view} 

Cluster view constructs an overall perception of a cluster's running state by visualizing the resource utilization of multiple jobs on each node within the cluster. For example, for every job executing within the selected time window, cluster view can present the change over time of the average CPU usage across the entire cluster. Cluster view offers useful information to system administrators and helps them obtain a comprehensive perspective on the utilization of cluster resources. MapReduce program developers may also benefit from cluster view, from which they can examine the jobs running concurrently with their own. When a large number of concurrent jobs compete for cluster resources, it would not be surprising if job execution performs worse than expected.
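The averaging behind such a cluster-wide plot is straightforward; a minimal sketch, assuming every node is sampled at the same timestamps (node names and values are hypothetical):

```python
def cluster_average(series_by_node):
    """Average a metric (e.g. CPU %) across all nodes at each timestep.

    series_by_node maps node name -> list of samples taken at the same
    timestamps; the cluster view would plot the per-timestep averages.
    """
    series = list(series_by_node.values())
    return [sum(vals) / len(vals) for vals in zip(*series)]

cpu = {
    "node1": [20.0, 80.0, 60.0],
    "node2": [40.0, 60.0, 20.0],
}
print(cluster_average(cpu))  # [30.0, 70.0, 40.0]
```

Note that this averaging is exactly what can hide per-node deviations, which motivates the node view below.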

\subsubsection{Node view}

Node view is similar to cluster view, but it only exposes information related to a specific node. Because all the information is gathered on a single node, node view reveals more fine-grained data than cluster view, in which data is averaged across all the nodes and may hide deviations among them. An example of node view is shown in Figure~\ref{fig:nodeView}. The CPU percentages of the Hadoop processes are aggregated and plotted against the host CPU percentage during that period. From this figure, we can clearly see how the CPU resource is shared among the processes running concurrently on this node, as well as what fraction of the total aggregated percentage a specific process accounts for.
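The share computation behind such a stacked plot can be sketched as follows; the process names are hypothetical:

```python
def process_share(per_process_cpu):
    """Fraction of the aggregated Hadoop CPU usage each process accounts for.

    per_process_cpu maps a process label (e.g. a task JVM) to its CPU
    percentage on this node; the node view stacks these against the host
    total and can annotate each band with its share of the aggregate.
    """
    total = sum(per_process_cpu.values())
    return {pid: pct / total for pid, pct in per_process_cpu.items()}

# Illustrative snapshot of one node's Hadoop-related processes.
usage = {"task_jvm_1": 30.0, "task_jvm_2": 10.0, "datanode": 10.0}
print(process_share(usage))  # task_jvm_1 accounts for 0.6 of the aggregate
```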

\begin{figure}[b]
  \begin{center}
  \includegraphics[width=.5\textwidth]{figs/nodeV.eps}
  \end{center}
  \caption{An example of node view}
  \label{fig:nodeView}
\end{figure}

\subsubsection{Job view}

Job view provides coarse-grained information about the execution progress of a specific job. It focuses on high-level, job-centric information, including three main statistical distributions: a task-based distribution, a node-based distribution, and a time-based distribution. Figure~\ref{fig:job4} shows an exemplary task-based distribution of file read bytes for reducer tasks. Figure~\ref{fig:job3} gives an example of a node-based distribution of shuffle bytes sent and received by each node. The time-based distribution is illustrated by Figure~\ref{fig:job1}, which presents the durations of all the tasks involved in a job execution. Job view provides rich information about job execution status and is probably the most important view for application programmers. It makes it easy to diagnose job execution abnormalities: for instance, it is not difficult to detect that a task runs for a much longer time than its peer tasks, or that a reducer task shuffles many more bytes than the other reducers.
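The three distributions can all be derived from the same per-task records; a minimal sketch with hypothetical field names and values:

```python
from collections import defaultdict

def job_distributions(tasks):
    """Compute the three job-view distributions from per-task records.

    Each record is assumed to carry (id, node, start, end, shuffle_bytes);
    the field names are illustrative, not the tool's actual schema.
    """
    task_based = {t["id"]: t["shuffle_bytes"] for t in tasks}   # per task
    node_based = defaultdict(int)                               # per node
    for t in tasks:
        node_based[t["node"]] += t["shuffle_bytes"]
    time_based = {t["id"]: t["end"] - t["start"] for t in tasks}  # durations
    return task_based, dict(node_based), time_based

tasks = [
    {"id": "r_0", "node": "node1", "start": 0, "end": 40, "shuffle_bytes": 100},
    {"id": "r_1", "node": "node2", "start": 0, "end": 45, "shuffle_bytes": 120},
    {"id": "r_2", "node": "node1", "start": 0, "end": 300, "shuffle_bytes": 110},
]
# r_2 runs much longer than its peers -- an easy abnormality to spot.
print(job_distributions(tasks)[2])
```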

\begin{figure}[b]
  \begin{center}
  \includegraphics[width=.5\textwidth]{figs/job4.eps}
  \end{center}
  \caption{Task-based distribution of shuffle bytes for reducer tasks}
  \label{fig:job4}
\end{figure}

\begin{figure}[b]
  \begin{center}
  \includegraphics[width=.5\textwidth]{figs/job3.eps}
  \end{center}
  \caption{Node-based distribution of shuffle bytes for reducer tasks}
  \label{fig:job3}
\end{figure}

\begin{figure}[t]
  \begin{center}
  \includegraphics[width=.5\textwidth]{figs/job1.eps}
  \end{center}
  \caption{Time-based distribution of task durations}
  \label{fig:job1}
\end{figure}

\subsubsection{Task view}

Sometimes job view is not sufficient to reveal all the underlying details about the execution state of a specific task. Our tool therefore presents task view, which provides fine-grained information about the execution of a specific task. Task view can be seen as a part of job view, but with an extra level of detail, in order to reveal all the information that, to our knowledge, application developers may care about for a given task. Task view breaks a task's duration down into several periods, each representing a specific task execution state, as shown in Figure~\ref{fig:task1}. To help users understand exactly which execution state causes the skewness, it also shows the average and standard deviation of execution time among all the peer tasks for each period. Moreover, for the shuffle period of a reduce task, task view reconstructs the shuffle data flow, including the source and destination node locations as well as the associated volume of transferred bytes, as shown in Figure~\ref{fig:task2}.
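The per-period statistics shown next to the breakdown can be computed as follows; the period names and durations below are illustrative:

```python
from statistics import mean, pstdev

def period_stats(peer_breakdowns):
    """Mean and standard deviation of each execution period across peers.

    peer_breakdowns maps task id -> {period name: duration in seconds};
    the task view would display these next to a task's own breakdown so
    users can see which period deviates from the peer group.
    """
    periods = next(iter(peer_breakdowns.values())).keys()
    return {
        p: (mean(b[p] for b in peer_breakdowns.values()),
            pstdev(b[p] for b in peer_breakdowns.values()))
        for p in periods
    }

reducers = {
    "r_0": {"shuffle": 30, "sort": 5, "reduce": 20},
    "r_1": {"shuffle": 90, "sort": 5, "reduce": 22},  # shuffle-period skew
}
print(period_stats(reducers))
```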

\begin{figure}[t]
  \begin{center}
  \includegraphics[width=.5\textwidth]{figs/task1.eps}
  \end{center}
  \caption{Task duration breakdown}
  \label{fig:task1}
\end{figure}

\begin{figure}[t]
  \begin{center}
  \includegraphics[width=.5\textwidth]{figs/task2.eps}
  \end{center}
  \caption{Map task data flow}
  \label{fig:task2}
\end{figure}

\subsection{View zooming}

Above we have shown what the four views are and what they look like. We also propose a view-zooming model that correlates these views to assist application developers in reasoning about performance degradation, as illustrated in Figure~\ref{fig:zoom}. However, users are not restricted to this model and may leverage the multiple views provided by our tool according to their needs. We organize the four views along two dimensions: the physical dimension indicates whether a view's scope is a single node or the entire cluster, while the MapReduce dimension represents the granularity of the views in terms of the number of jobs and tasks.

Application developers, before launching any MapReduce jobs, may want to look at the cluster view, which tells them how many jobs are running concurrently and how system resources are distributed among them, and then decide whether it is a good time to join the contention. After a job finishes, they may want to zoom in from the cluster view to the job view and focus on the execution progress of their own job. The three distributions presented by job view provide ample information for detecting performance problems, if any. For example, from the time-based or task-based distribution, they may find a suspect task that takes much longer to finish than its peer tasks. At this point, application developers will most likely want to zoom in to the task view of the suspect task to gain insight into the root cause of the problem. Alternatively, application developers may, by examining the node-based distribution in the job view, detect that some node exhibits abnormal behavior, and then zoom in to the node view to find detailed information about that node during the job execution.
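One simple way to flag such a suspect task is a deviation-from-the-mean heuristic over the time-based distribution; this is an illustrative sketch rather than what the tool necessarily implements, and the threshold \texttt{k} is an arbitrary choice:

```python
from statistics import mean, pstdev

def suspect_tasks(durations, k=1.5):
    """Flag tasks whose duration deviates from the peer mean by more than
    k standard deviations. The heuristic and the default k are assumptions
    chosen for illustration, not tuned values from the tool.
    """
    mu = mean(durations.values())
    sigma = pstdev(durations.values())
    if sigma == 0:
        return []  # all peers behave identically; nothing to flag
    return [t for t, d in durations.items() if abs(d - mu) > k * sigma]

# Hypothetical map-task durations (seconds); m_4 is a straggler.
durations = {"m_0": 41, "m_1": 39, "m_2": 40, "m_3": 42, "m_4": 300}
print(suspect_tasks(durations))  # ['m_4']
```

A task flagged here would be the natural candidate to zoom into via the task view.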

To make the discussion concrete, we also investigate how cluster administrators may benefit from these views. Because our monitoring tool correlates MapReduce metrics with OS metrics, it can expose how cluster resources are shared by each job running in the cluster, which cluster administrators may use for service-oriented charging and which cannot be provided by traditional system monitoring tools such as Ganglia. In our view-zooming model, a typical path taken by cluster administrators may be zooming between cluster view and node view, which show the resource utilization of the entire cluster and of a single node, respectively.

\begin{figure}
  \begin{center}
  \includegraphics[width=.5\textwidth]{figs/zoom.eps}
  \end{center}
  \caption{View-zooming model}
  \label{fig:zoom}
\end{figure}
