\section{Implementation}\label{sec:impl}

\subsection{Metric Aggregation}
For performance diagnosis of Hadoop map-reduce jobs, we gather and analyze OS-level performance metrics for each map/reduce task, without requiring any modification to either Hadoop or the OS.

The basic idea behind collecting OS-level metrics for each task is to infer them from its JVM process.
Hadoop runs each map/reduce task in its own Java Virtual Machine (JVM) to isolate it from other running tasks.
In Linux, OS-level performance metrics for each process are exposed as text files in the \texttt{/proc/[pid]/} pseudo file system,
which provides many aggregate counters for specific machine resources.
For example, the \texttt{write\_bytes} field in \texttt{/proc/[pid]/io} gives the number of bytes sent to the storage layer over the lifetime of a particular process.
If these aggregate counters are collected periodically, the throughput of a resource such as disk I/O can be approximated from the difference between successive samples.
This motivates us to collect metric data from the \texttt{/proc} file system and correlate it with the corresponding map/reduce task.
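The sampling step above can be sketched as follows. This is a minimal illustration, not our full collector: the sample texts are synthetic, and the field names follow the Linux \texttt{/proc/[pid]/io} format.

```python
# Sketch: approximate per-process disk-write throughput from two
# successive samples of /proc/[pid]/io (synthetic sample texts).

def parse_proc_io(text):
    """Parse the 'key: value' lines of /proc/[pid]/io into a dict."""
    counters = {}
    for line in text.splitlines():
        key, _, value = line.partition(":")
        counters[key.strip()] = int(value)
    return counters

def write_throughput(sample_a, sample_b, interval_s):
    """Bytes written per second between two samples interval_s apart."""
    a = parse_proc_io(sample_a)
    b = parse_proc_io(sample_b)
    return (b["write_bytes"] - a["write_bytes"]) / interval_s

t0 = "read_bytes: 1000\nwrite_bytes: 4096"
t1 = "read_bytes: 3000\nwrite_bytes: 16384"
print(write_throughput(t0, t1, 2.0))  # (16384 - 4096) / 2 = 6144.0
```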

However, the difficulty lies in how to correlate process metrics with each map/reduce task.
One map/reduce task can correspond to one or several OS processes,
because a task may spawn several subprocesses, which is common in Hadoop Streaming jobs.
To solve this problem, we build the process tree of each JVM process using the parent process id provided in the \texttt{/proc} file system,
and aggregate the metrics of every subprocess into the counters of its map/reduce task.
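The aggregation over a process subtree can be sketched as below. In practice the parent pids come from the \texttt{PPid} field of \texttt{/proc/[pid]/status}; here they are supplied as a dict so the sketch is self-contained.

```python
# Sketch: sum a per-process counter over the process subtree rooted
# at a JVM process. ppid_of maps pid -> parent pid (as read from
# /proc/[pid]/status in the real system); counter_of maps pid -> value.
from collections import defaultdict

def aggregate_subtree(root_pid, ppid_of, counter_of):
    """Sum counter_of[pid] over root_pid and all of its descendants."""
    children = defaultdict(list)
    for pid, ppid in ppid_of.items():
        children[ppid].append(pid)
    total, stack = 0, [root_pid]
    while stack:
        pid = stack.pop()
        total += counter_of.get(pid, 0)
        stack.extend(children[pid])
    return total

# JVM process 100 spawned 101, which in turn spawned 102.
ppid = {101: 100, 102: 101}
write_bytes = {100: 10, 101: 20, 102: 30}
print(aggregate_subtree(100, ppid, write_bytes))  # 60
```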

Another difficulty is that Hadoop allows Java Virtual Machines to be reused.
With JVM reuse enabled, multiple tasks of one job can run sequentially in the same JVM process, which reduces the overhead of starting a new JVM. However, from the OS-level metrics alone we can only identify the first task that runs in a JVM process;
the OS carries no additional information about the tasks that later reuse it.
After inspecting the Hadoop logs, we found that Hadoop records the JVM id of each map/reduce task,
as well as the event of creating a new JVM.
Thus, we can obtain the JVM id of a particular JVM process from the Hadoop log,
and from the JVM id together with the timestamps of creating and destroying each map/reduce task,
we can infer which process each task ran in.
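The join between log events and processes can be sketched as follows. The event tuples stand in for parsed Hadoop log entries (the exact log format varies by Hadoop version); the task-lifetime timestamps would additionally be used to slice the sampled metric stream per task, which this sketch omits.

```python
# Sketch: resolve which OS process each task ran in under JVM reuse,
# by joining task events with JVM-creation events on the JVM id.
# Input tuples are placeholders for parsed Hadoop log entries.

def resolve_task_pids(jvm_launches, task_events):
    """
    jvm_launches: list of (jvm_id, pid) from "new JVM" log events.
    task_events:  list of (task_id, jvm_id) from task lifecycle events.
    Returns {task_id: pid}.
    """
    pid_of_jvm = dict(jvm_launches)
    return {task: pid_of_jvm[jvm] for task, jvm in task_events}

launches = [("jvm_1", 4242), ("jvm_2", 4250)]
tasks = [("task_m_0", "jvm_1"), ("task_m_1", "jvm_1"), ("task_r_0", "jvm_2")]
print(resolve_task_pids(launches, tasks))
```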

\subsection{Online Reporting}

To report job-centric metrics online, we adapt the source code of Ganglia.
Ganglia is a distributed monitoring system with good scalability and portability.
It runs a daemon called \texttt{gmond} on each cluster node that periodically collects metric data from the local machine
and reports the data directly to a central node running \texttt{gmetad}.
The central node stores the metric data in round-robin databases via RRDtool.

\begin{figure}
  \begin{center}
  \includegraphics[width=.5\textwidth]{figs/ganglia.eps}
  \end{center}
  \caption{Online monitoring system architecture}
  \label{fig:nodeView}
\end{figure}

Gmond supports Python modules, which can be used to add metric collection functionality,
so it is easy for us to plug in a new metric collection module.
However, the central node \texttt{gmetad} is limited in how it organizes metric data:
its internal data structure organizes metrics by hostname and by the cluster each host belongs to.
In order to organize metrics by map/reduce job, we modify this internal data structure
as well as the storage layout in the RRD database.
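A gmond Python metric module follows a small, fixed interface: \texttt{metric\_init} returns a list of metric descriptors, and gmond polls the registered callback. The sketch below shows that interface for a hypothetical per-job counter; the callback is a stub standing in for our \texttt{/proc}-based aggregation, and the parameter name \texttt{job\_name} is illustrative.

```python
# Sketch of a gmond Python metric module for a per-job counter.
# Descriptor fields follow the gmond Python module interface; the
# callback is a stub for our /proc-based per-job aggregation.

_job = "unknown"

def job_write_bytes(name):
    """Callback gmond invokes to read the current metric value.
    Stubbed here; the real module returns the aggregated
    write_bytes over the job's task processes."""
    return 0

def metric_init(params):
    """Called once by gmond at startup; returns metric descriptors."""
    global _job
    _job = params.get("job_name", "unknown")
    return [{
        "name": "job_write_bytes",
        "call_back": job_write_bytes,
        "time_max": 90,
        "value_type": "uint",
        "units": "bytes",
        "slope": "both",
        "format": "%u",
        "description": "Aggregated write_bytes for job " + _job,
        "groups": "mapreduce",
    }]

def metric_cleanup():
    """Called by gmond on shutdown; nothing to release in this stub."""
    pass
```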
