\section{Incast in Hadoop MapReduce}

Hadoop represents an interesting case study of how incast affects application-level behavior. Hadoop is an open-source implementation of MapReduce, a distributed computation paradigm that played a key part in popularizing the phrase ``big data''. Network traffic in Hadoop consists of small flows carrying control packets for various cluster coordination protocols, and larger flows carrying the actual data being processed. Incast potentially affects Hadoop in complex ways. Further, Hadoop may well mask incast behavior, because the network forms only a part of the overall computation and data flow. Our goal for this section is to answer whether incast affects Hadoop, by how much, and under what circumstances. 

We perform two sets of experiments. First, we run stand-alone, artificial Hadoop jobs to find out how much incast impacts each component of the MapReduce data flow. Second, we replay a scaled-down, real-life production workload using previously published tools~\cite{SWIM} and cluster traces from Facebook, a leading Hadoop user, to understand the extent to which incast affects whole workloads. These experiments take place on the same DETER machines as those in the previous section. We use only the large-buffer Procurve switch for these experiments. 

\subsection{Stand-alone jobs}

Table~\ref{table:hadoopParams} lists the Hadoop cluster settings we considered. The stand-alone Hadoop jobs are \texttt{hdfsWrite}, \texttt{hdfsRead}, \texttt{shuffle}, and \texttt{sort}. The first three jobs each stress one part of the Hadoop IO pipeline at a time. \texttt{sort} represents a job with a 1-1-1 ratio among read, shuffled, and written data. We implement these jobs by modifying the \texttt{randomwriter} and \texttt{randomtextwriter} examples that are pre-packaged with recent Hadoop distributions. We set the jobs to write, read, shuffle, or sort 20GB of terasort-format data on 20 machines. 


\subsubsection{Experiment setup}

\begin{table}[t]
\centering
%\vspace{-5pt}
\scriptsize
\begin{tabular}{rl} 
\hline
{\bf Parameter} & {\bf Values} \\ \hline \hline
\texttt{Hadoop jobs}                   & \texttt{hdfsWrite, hdfsRead,} \\ 
                                       & \texttt{shuffle, sort} \\ \hline
\texttt{TCP version}     & \texttt{Linux-2.6.28.1, 1ms-min-RTO} \\
\texttt{Hadoop version}  & \texttt{0.18.2, 0.20.2} \\
\texttt{Switch model}    & \texttt{HP Procurve 5412} \\
\texttt{Number of machines}    & \texttt{20 workers and 1 master} \\
\hline 
\texttt{fs.inmemory.size.mb}           & \texttt{75, 200            } \\
\texttt{io.file.buffer.size}           & \texttt{4096, 131072       } \\
\texttt{io.sort.mb}                    & \texttt{100, 200           } \\ 
\texttt{io.sort.factor}                & \texttt{10, 100            } \\
\texttt{dfs.block.size}                & \texttt{67108864, 536870912} \\
\texttt{dfs.replication}               & \texttt{3, 1               } \\
\texttt{mapred.reduce.parallel.copies} & \texttt{5, 20          } \\
\texttt{mapred.child.java.opts}        & \texttt{-Xmx200m, -Xmx512m} \\ \hline
\end{tabular}
\normalsize
\caption{\small Hadoop parameter values for experiments with stand-alone jobs.}
\label{table:hadoopParams}
%\vspace{-5pt}
\end{table}


The TCP versions are the same as before -- standard Linux 2.6.28.1, and modified Linux 2.6.28.1 with \texttt{tcp\_rto\_min} set to 1ms. We consider Hadoop versions 0.18.2 and 0.20.2. Hadoop 0.18.2 is a legacy distribution: basic, but still relatively stable and mature. Hadoop 0.20.2 is a more fully featured distribution that introduces some performance overhead for small jobs~\cite{SWIM}. Subsequent Hadoop improvements have appeared on several disjoint branches that are currently being merged, and 0.20.2 represents the last time there was a single mainline Hadoop distribution~\cite{hadoopVersions}. 
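As a concrete sketch of the second TCP configuration: on kernels that support the per-route \texttt{rto\_min} metric, a similar 1ms minimum RTO can be approximated without patching the kernel. The subnet and interface names below are placeholders, not our testbed's actual values.

```shell
# Sketch: approximate a 1ms minimum TCP RTO via the per-route rto_min
# metric instead of a kernel patch. Requires root and iproute2 support.
# 10.0.0.0/24 and eth0 are placeholder names for the cluster subnet/NIC.
ip route change 10.0.0.0/24 dev eth0 rto_min 1ms

# Confirm the route now carries the rto_min metric.
ip route show 10.0.0.0/24
```

Note that the per-route metric applies only to connections using that route, whereas the kernel modification we use applies cluster-wide.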

The rest of the parameters are detailed Hadoop configuration settings. Tuning these parameters can considerably improve performance, but requires specialist knowledge about the interaction between Hadoop and the cluster environment. The first value for each configuration parameter in Table~\ref{table:hadoopParams} is the default setting. The remaining values are tuned values, drawn from a combination of Hadoop sort benchmarking~\cite{hadoopSortParams}, suggestions from enterprise Hadoop vendors~\cite{hadoopClouderaParams}, and our own experience. One parameter worth explaining further is \texttt{dfs.replication}, which controls the degree of data replication in HDFS. The default setting is three-fold replication to achieve fault tolerance. For use cases constrained by storage capacity, the preferred method is to use HDFS RAID~\cite{HDFSRAID}, which achieves fault tolerance with 1.4$\times$ overhead, much closer to the ideal one-fold replication. 
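For concreteness, tuned values like those in Table~\ref{table:hadoopParams} would be set in the cluster configuration file (\texttt{hadoop-site.xml} in Hadoop 0.18.x). The sketch below shows two of the parameters with their tuned values from the table:

```xml
<!-- Sketch: two tuned parameters from Table 1, as they would appear in
     hadoop-site.xml (Hadoop 0.18.x). -->
<property>
  <name>dfs.block.size</name>
  <value>536870912</value> <!-- 512MB HDFS blocks: fewer blocks for the
                                namenode to track -->
</property>
<property>
  <name>mapred.reduce.parallel.copies</name>
  <value>20</value> <!-- more parallel shuffle fetches per reduce task -->
</property>
```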



\subsubsection{Results}

\begin{figure}[t]
\begin{center}
\centering
\includegraphics[trim = 0cm 7cm 7.7cm 0cm, clip, width=8.3cm]{figures/Hadoop18Procurve}
%\vspace{-17pt}
\caption{\small Hadoop stand-alone job completion times on the HP Procurve 5412 switch. Showing job completion times (top) and overhead introduced by incast (bottom) for default Hadoop-0.18.2 (left) and tuned Hadoop-0.18.2 (right). The error bars show 95\% confidence intervals from 20 repeated measurements. The tuned Hadoop-0.18.2 leads to considerably lower job completion times. For both settings, the confidence intervals for the two TCP versions do not overlap; however, the default Hadoop configuration has the higher incast overhead.}
\label{fig:Hadoop18Procurve}
\end{center}
%\vspace{-8pt}
\end{figure}

Figure~\ref{fig:Hadoop18Procurve} shows the results for Hadoop 0.18.2. We consider two performance metrics --- job completion time and incast overhead. We define incast overhead according to Equation~\ref{eq:defineIncastOverhead}, \ie the difference between job completion times under default and \texttt{1ms-min-RTO} TCP, normalized by the job completion time for \texttt{1ms-min-RTO} TCP. Default Hadoop has very high incast overhead, while for tuned Hadoop the incast overhead is barely visible. At the same time, the tuned Hadoop-0.18.2 setting leads to considerably lower job completion times. 

{\scriptsize
\begin{eqnarray} 
t &=& \mbox{\textit{job completion time}} \nonumber \\ 
\mathit{IncastOverhead} &=& \frac{t_{\mbox{\tiny default TCP}} - t_{\mbox{\tiny 1ms-min-RTO}}}{t_{\mbox{\tiny 1ms-min-RTO}}} 
\label{eq:defineIncastOverhead}
\end{eqnarray}
}
\normalsize

The results illustrate a subtle form of Amdahl's Law, which bounds the overall improvement to a system when only one part of the system is improved. Here, the amount of incast overhead depends on how much network data transfers contribute to the overall job completion time. The default Hadoop configuration makes network transfers a large fraction of the overall job completion time, so the incast overhead is clearly visible. Conversely, for tuned Hadoop, the overall job completion time is already low, and the incast overhead is barely visible because network transfer time forms only a small fraction of it. 
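To make the Amdahl-style argument concrete, the following sketch applies Equation~\ref{eq:defineIncastOverhead} to hypothetical job times. The phase durations and the 2$\times$ network inflation are illustrative assumptions, not measured values:

```python
def incast_overhead(t_default, t_1ms_rto):
    """Incast overhead per Equation 1: relative slowdown under default TCP."""
    return (t_default - t_1ms_rto) / t_1ms_rto

def job_time(compute_s, network_s, incast_inflation=1.0):
    """Job completion time, assuming incast inflates only the network phase."""
    return compute_s + network_s * incast_inflation

# Default-configuration regime: network transfers dominate the job.
t_fast = job_time(compute_s=30, network_s=120)                      # 1ms-min-RTO
t_slow = job_time(compute_s=30, network_s=120, incast_inflation=2)  # default TCP
print(round(incast_overhead(t_slow, t_fast), 2))   # 0.8 -> clearly visible

# Tuned regime: the same 2x network inflation, but network is a small slice.
t_fast = job_time(compute_s=30, network_s=5)
t_slow = job_time(compute_s=30, network_s=5, incast_inflation=2)
print(round(incast_overhead(t_slow, t_fast), 2))   # 0.14 -> barely visible
```

The same relative network slowdown produces a large or small end-to-end overhead depending only on the network's share of total job time.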


We repeat these measurements on Hadoop 0.20.2. Compared with Hadoop 0.18.2, the more recent version sees a performance improvement for the default configuration. For the tuned configuration, Hadoop 0.20.2 incurs a performance overhead of around 10 seconds for all four job types. This result is in line with our prior comparisons between Hadoop versions 0.18.2 and 0.20.2~\cite{SWIM}. Unfortunately, 10 seconds is also the performance improvement from using TCP with 1ms-min-RTO. Hence, the performance overhead in Hadoop 0.20.2 masks the benefits of addressing incast. 
%For brevity, we omit the results graph for these measurements. 

\vspace{2pt}
\emph{Takeaway: Incast does affect Hadoop. The performance impact depends on cluster configurations, as well as data and compute patterns in the workload.}




\subsection{Real-life production workloads}

The results in the above subsection indicate that to find out how much incast \emph{really} affects Hadoop, we must compare the default and 1ms-min-RTO TCP while replaying real-life production workloads. 

Previously, such evaluation capabilities were exclusive to enterprises that run large-scale production clusters. Recent years have witnessed a slow but steady growth of public knowledge about front-line production workloads~\cite{fairScheduler, scarlett, SWIM, beemr, PACMan}, as well as emerging tools to replay such workloads in the absence of production data, code, and hardware~\cite{SWIM, SWIMwebsite}. 

\subsubsection{Workload analysis}

We obtained seven production Hadoop workload traces from five companies in social networking, e-commerce, telecommunications, and retail. Among these companies, only Facebook has so far allowed us to release its name and synthetic versions of its workload. We do have permission to share some summary statistics. The full analysis is under publication review. 

Several observations are especially relevant to incast. Consider Figure~\ref{fig:HadoopWorkloadJobSizes}, which shows the distribution of per-job input, shuffle, and output data for all workloads. First, all workloads are dominated by jobs that involve data sizes of less than 1GB. For jobs this small, scheduling and coordination overhead dominates job completion time. Therefore, incast will make a difference only if the workload intensity is high enough that Hadoop control packets alone would overwhelm the network. Second, all workloads do contain jobs at the scale of tens or even hundreds of TB. This compels operators to use Hadoop 0.20.2, the first version to incorporate the Hadoop fair scheduler~\cite{fairScheduler}. Without it, small jobs arriving behind very large jobs would suffer FIFO head-of-line blocking, with wait times of hours or even days. This feature is so critical that cluster operators use it despite the performance overhead for small jobs. Hence, it is likely that in Hadoop 0.20.2, incast will be masked by that performance overhead. 
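For reference, enabling the fair scheduler in Hadoop 0.20.x amounts to a small configuration change, sketched below; pool definitions and allocation files vary per cluster and are omitted:

```xml
<!-- Sketch: enable the fair scheduler (Hadoop 0.20.x), so small jobs
     are not blocked behind very large ones in a FIFO queue. -->
<property>
  <name>mapred.jobtracker.taskScheduler</name>
  <value>org.apache.hadoop.mapred.FairScheduler</value>
</property>
```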

\subsubsection{Workload replay}

\begin{figure}[t]
\begin{center}
\centering
\includegraphics[trim=0cm 8.9cm 11.5cm 0cm, clip, width=8cm]{figures/dataSizePerJob}
%\vspace{-20pt}
\caption{\small Per-job input, shuffle, and output size for each workload. \texttt{FB-*} workloads come from a six-month cluster trace in 2009 and a 45-day trace in 2010. \texttt{CC-*} workloads come from traces up to two months long at various customers of Cloudera, a vendor of enterprise Hadoop.}
\label{fig:HadoopWorkloadJobSizes}
\end{center}
%\vspace{-10pt}
\end{figure}

We replay a day-long Facebook 2009 workload on the default and 1ms-min-RTO versions of TCP. We synthesize this workload using the method in~\cite{SWIM}, which captures, in a relatively short synthetic workload, the representative job submission and computation patterns of the entire six-month trace. 

Our measurements confirm the earlier hypothesis. Figure~\ref{fig:HadoopWorkloadCompleteTimeCDF} shows the distribution of job completion times. The distribution for 1ms-min-RTO is shifted right by 10-20 seconds compared with the distribution for default TCP. This is in line with the 10-20 second overhead we saw in the workload-level measurements in~\cite{SWIM}, as well as the stand-alone job measurements earlier in the article. The benefits of addressing incast are completely masked by overhead from other parts of the system. 

\begin{figure}[t]
\begin{center}
\centering
\includegraphics[trim=0cm 14cm 13cm 0cm, clip, width=6.5cm]{figures/HadoopWorkloadCompleteTimeCDF}
%\vspace{-20pt}
\caption{\small Distribution of job completion times for the \texttt{FB-2009} workload. The distribution for 1ms-min-RTO is shifted right by 10-20 seconds compared with the distribution for default TCP.}
\label{fig:HadoopWorkloadCompleteTimeCDF}
\end{center}
%\vspace{-10pt}
\end{figure}



Figure~\ref{fig:HadoopWorkloadWorkSequence} offers another perspective on workload-level behavior. The graphs show two sequences of 100 jobs each, ordered by submission time, \ie snapshots of two contiguous sequences of 100 jobs out of the 6000+ jobs in a day. These graphs illustrate the complexity of behavior once we consider an entire workload of thousands of jobs and the diverse interactions between concurrently running jobs. The 10-20 second performance difference on small jobs becomes insignificant noise in the baseline. The few large jobs take significantly longer than the small jobs, and stand out visibly from the baseline. For these jobs, there is no clear pattern to the performance of 1ms-min-RTO versus standard TCP. 

\begin{figure}[t]
\begin{center}
\centering
\includegraphics[trim=0cm 11.3cm 13.5cm 0cm, clip, width=7cm]{figures/HadoopWorkloadWorkSequence}
%\vspace{-20pt}
\caption{\small Sequences of job completion times, showing two contiguous sequences of 100 jobs. The few large jobs have long completion times, and stand out from the baseline formed by the continuous stream of small jobs.}
\label{fig:HadoopWorkloadWorkSequence}
\end{center}
%\vspace{-10pt}
\end{figure}

The Hadoop community is aware of the performance overheads in Hadoop 0.20.2 for small jobs. Subsequent versions partially address these concerns~\cite{hadoopWorld2011Talk}. It would be worthwhile to repeat these experiments once the various active Hadoop code branches merge back into the next mainline Hadoop~\cite{hadoopVersions}. 

\vspace{2pt}
\emph{Takeaway: Small jobs dominate several production Hadoop workloads. Non-network overhead in present Hadoop versions masks incast behavior for these jobs.}

