\section{Evaluation}\label{sec:eval}

The effectiveness of our monitoring tool relies on two properties: its low overhead, which lets it collect metrics consistently without slowing down the cluster, and its extensive coverage of monitored events, which prevents potentially interesting behaviors from flying under the radar. Our evaluation therefore consists of two parts: we first measure the overhead of our monitoring tool, and we then present a case study showing how the tool helps cluster administrators and Hadoop programmers trace interesting events in a Hadoop system.

We perform our experiments on a 64-node testbed cluster.
Each node has two quad-core Intel E5440 processors, 16GB of RAM, four 1TB SATA disks, and a QLogic 10GE NIC.
Each node runs Debian GNU/Linux 5.0 with Linux kernel 2.6.32.

\subsection{Benchmarks}

We use the following Hadoop programs to evaluate the overhead of our monitoring tools. We show that through our monitoring tools users can infer some root causes of performance problems in their programs. The two Hadoop programs are:

{\bf TeraSort:} The TeraSort benchmark sorts $10^{10}$ records (approximately 1TB), each containing a 10-byte key. The Hadoop implementation of TeraSort distributes the record dataset across a number of mappers, which generate key-value pairs; reducers then fetch these records from the mappers and sort them by key.
We run TeraSort multiple times to measure the overhead caused by our monitoring tool.
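The map/shuffle/sort flow described above can be sketched as a small in-memory simulation. This is an illustrative Python sketch, not the actual Hadoop implementation; the record format (a 10-byte key prefix followed by a payload) is a simplifying assumption.

```python
def map_phase(records):
    """Mapper: emit (key, record) pairs, keyed on the 10-byte prefix."""
    for record in records:
        yield record[:10], record

def reduce_phase(pairs):
    """Reducer: sort the fetched (key, record) pairs by key."""
    return [record for _, record in sorted(pairs, key=lambda kv: kv[0])]

records = [b"zzzzzzzzzz payload", b"aaaaaaaaaa payload", b"mmmmmmmmmm payload"]
sorted_records = reduce_phase(map_phase(records))
```

In the real benchmark the shuffle between the two phases happens over the network, partitioned by key range, so each reducer receives a contiguous slice of the key space.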

{\bf Identity Change Detection:} This is a data-mining job that analyzes a telephone call graph and detects changes in the identity of the owner of a particular telephone number.
The program analyzes very large data sets and consists of multiple MapReduce jobs; we use one of its sub-jobs as our case study.
One characteristic of this job is that it generates very large output in its map phase: for an input of $M$ key-value pairs, the intermediate result emitted by the mapper function contains about $M^2$ values.
For the largest data set in our experiment, the job writes nearly 3TB of data into the Hadoop filesystem.
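The quadratic blowup in intermediate data can be illustrated with a toy mapper. This is a hypothetical sketch of the growth pattern only, not the job's actual mapper logic:

```python
from itertools import product

def pairwise_mapper(records):
    # Toy mapper: emit one intermediate value per ordered pair of inputs,
    # so M input records produce M^2 intermediate values.
    for a, b in product(records, repeat=2):
        yield (a, b)

m = 100
intermediate = list(pairwise_mapper(range(m)))
print(len(intermediate))  # M^2 = 10000
```

Even a modest input thus produces an intermediate data set orders of magnitude larger than the input, which is why the map phase dominates this job's I/O.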

\subsection{Overhead}

From the information provided by the \texttt{/proc} filesystem, we observe that the local per-node overhead of our monitoring tool accounts for less than $0.01\%$ of the CPU resource. The peak virtual memory usage is about 55MB.
We also measure the disk I/O throughput of the logging version of our monitoring tool. During normal operation of the cluster, the disk write throughput of our monitoring tool is about 0.5KB/s on average; the write rate increases with the number of MapReduce jobs.
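The per-process memory figure above can be read from the Linux \texttt{/proc} filesystem, whose \texttt{/proc/<pid>/status} file reports a \texttt{VmPeak} field in kB. A minimal sketch of extracting it, shown here parsing sample text rather than a live process:

```python
def parse_vm_peak_kb(status_text):
    """Return the VmPeak value (in kB) from /proc/<pid>/status content."""
    for line in status_text.splitlines():
        if line.startswith("VmPeak:"):
            return int(line.split()[1])  # field format: "VmPeak:  56320 kB"
    return None

# Sample content in the format of /proc/<pid>/status on Linux.
sample = "Name:\tmonitor\nVmPeak:\t   56320 kB\nVmSize:\t   51200 kB\n"
peak_kb = parse_vm_peak_kb(sample)  # 56320 kB, i.e. about 55MB
```

CPU usage can be obtained the same way from the \texttt{utime} and \texttt{stime} fields of \texttt{/proc/<pid>/stat}, sampled periodically and divided by elapsed wall-clock time.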
