\section{Case Study}\label{sec:case}
\subsection{Detecting disk-hog}
In this case study, we inject a disk-hog, a C program that repeatedly
writes large files to disk, into selected machines in the
experiment cluster. We use this disk-hog to simulate a real-world
disk I/O hotspot in the cluster, and study how this I/O hotspot influences
our MapReduce job. We run the identity change detection program described in Section~\ref{sec:eval}, with an experimental setting identical to
the one in Section~\ref{sec:eval}.
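The injector itself is a C program; the sketch below renders the same behavior in Python for illustration. The output path, block size, and iteration count are our assumptions (the real injector writes indefinitely), so this is a bounded sketch rather than the actual tool.

```python
import os

def disk_hog(path, block_mb=64, iterations=4):
    """Repeatedly write large blocks to a file, simulating a disk I/O hotspot.

    The real injector is a C program that loops indefinitely; here the
    loop is bounded and the sizes are illustrative only.
    """
    block = b"\0" * (block_mb * 1024 * 1024)
    written = 0
    with open(path, "wb") as f:
        for _ in range(iterations):
            f.write(block)
            f.flush()
            os.fsync(f.fileno())  # force the bytes to the disk so real I/O occurs
            written += len(block)
    return written
```

The `os.fsync` call matters: without it the writes may sit in the page cache and never stress the disk at all.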

We show that extracting only MapReduce-level information from the
Hadoop logs is not sufficient to detect our simple disk-hog, but correlating
OS-level metrics with MapReduce-level information helps us detect it
quickly. Figure~\ref{fig:mapcomp} shows the running time distribution
of map tasks in the normal case and the disk-hog case. In each case, map tasks
run for varying amounts of time. Although the disk-hog causes some map tasks
to run slightly longer, it does not change the overall
running time distribution significantly. As we can see, the distributions
in the two cases look quite similar, and it is difficult to detect
the anomaly caused by the disk-hog.
Figure~\ref{fig:reducecomp} shows the running time distribution
of reduce tasks in the normal case and the disk-hog case. As we can see, all reduce tasks
in the disk-hog case run slower than in the normal case, because each reduce task
in our MapReduce job has to wait for shuffle bytes from the map task(s)
running on the disk-hog machine(s). However, the distribution of
reduce task running times in the disk-hog case is still quite similar
to that of the normal case. The influence of the disk-hog on the reduce task
running time distribution is not significant enough for anomaly detection.

\begin{figure}
  \centering
  \subfigure[{\bf Normal Case}]{\includegraphics[width=.5\textwidth]{figs/map.eps}}
  \subfigure[{\bf Disk-hog Case}]{\includegraphics[width=.5\textwidth]{figs/maphog.eps}}
  \caption{Comparing the map task running time distribution of the normal case
  and the disk-hog case}
  \label{fig:mapcomp}
\end{figure}

\begin{figure}
  \centering
  \subfigure[{\bf Normal Case}]{\includegraphics[width=.5\textwidth]{figs/reduce.eps}}
  \subfigure[{\bf Disk-hog Case}]{\includegraphics[width=.5\textwidth]{figs/reducehog.eps}}
  \caption{Comparing the reduce task running time distribution of the normal case
  and the disk-hog case}
  \label{fig:reducecomp}
\end{figure}
However, by correlating OS-level metrics with MapReduce-level information,
we can detect the disk-hog. One graph in our Job View (see Figure~\ref{fig:iocomp}) compares the average job I/O write throughput
with the machine I/O write throughput on each cluster machine.
It is easy to see that on machines 10 and 23, the average machine
I/O write throughput is much higher than the job I/O write throughput,
even though no other MapReduce job is running on the cluster. It is therefore
quite likely that there is an I/O write anomaly on these two machines. The programmer
can then use the zooming methodology to inspect the actual I/O write throughput
variation on the suspected machines, and inform the cluster administrator to
check the I/O status of those machines.
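The per-machine comparison behind this graph can be sketched as a simple heuristic: flag any machine whose OS-level write throughput greatly exceeds the throughput attributable to the job on it. The function name, the ratio threshold, and the sample numbers below are our illustrative assumptions, not the actual Job View implementation.

```python
def suspect_io_hogs(job_write_tput, machine_write_tput, ratio_threshold=2.0):
    """Return machine ids whose OS-level I/O write throughput far exceeds
    the write throughput attributable to the MapReduce job on that machine.

    Both arguments map machine id -> average write throughput (e.g. MB/s).
    The ratio threshold is an illustrative assumption.
    """
    suspects = []
    for machine, machine_tput in machine_write_tput.items():
        job_tput = job_write_tput.get(machine, 0.0)
        # Guard against division by zero: on an otherwise idle cluster,
        # machine-level writes with no job writes at all are suspicious.
        if machine_tput > ratio_threshold * max(job_tput, 1e-9):
            suspects.append(machine)
    return sorted(suspects)
```

With made-up throughput numbers in which two machines write far more than the job accounts for, only those machines are flagged; this mirrors how machines 10 and 23 stand out in the Job View graph.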
 
\begin{figure}
  \centering
  \subfigure[{\bf Normal Case}]{\includegraphics[width=.5\textwidth]{figs/ionormal.eps}}
  \subfigure[{\bf Disk-hog Case}]{\includegraphics[width=.5\textwidth]{figs/iohog.eps}}
  \caption{Comparing MR job and machine I/O write throughput in the normal case
  and the disk-hog case}
  \label{fig:iocomp}
\end{figure}

\subsection{Concurrent Job Influence}
This case study shows how the zooming methodology and fine-grained information
help with debugging MapReduce programs.
Suppose multiple jobs are running on the cluster and you want to evaluate how these
jobs influence each other. From the system-wide resource utilization in the Cluster View, you may
see two jobs consuming almost the same CPU usage over an extended time period, and conclude
that these two jobs influence each other. However, when we use the zooming methodology to jump
into the Node View, we get more fine-grained information, such as the exact temporal variation of CPU usage.
From this fine-grained information, we get a precise picture of how the concurrent
jobs interleave with each other, and how significantly they influence each other.
As shown in Figure~\ref{fig:mjob}, jobs 4011 and 4010 are scheduled to run concurrently, and they
influence each other's CPU usage, whereas jobs 4022 and 4023 are completely interleaved. Their average CPU usage
over the time period shown is quite similar in the two cases.

\begin{figure}
  \centering
  \subfigure{\includegraphics[width=.5\textwidth]{figs/mjob1.eps}}
  \subfigure{\includegraphics[width=.5\textwidth]{figs/mjob2.eps}}
  \caption{How two jobs influence each other in the Node View}
  \label{fig:mjob}
\end{figure}


