\section{Incast for Future Big Data Workloads}

%So far, we have modeled incast through a combined analysis of flow rates, RTOs, slow start, and switch buffer space. We have also shown that incast significantly affects some types of Hadoop jobs under highly optimized configurations; however, those performance impacts are masked by other artifacts of real life production workloads. 

Hadoop is an example of the rising class of big data computing paradigms, which almost always involve some amount of network communications. To understand how incast affects future big data workloads, one needs to appreciate the technology trends that drive the rising prominence of big data, the computational demands that result, the countless design and mis-design opportunities, as well as the root causes of incast. 

We believe the top technology trends driving the prominence of big data are (1) increasingly easy and economical access to large-scale storage and computation infrastructure~\cite{Hadoop, EC2}, (2) the ubiquitous ability to generate, collect, and archive data about both technology systems and the physical world~\cite{EMCDigitalUniverse}, and (3) a growing desire and statistical literacy across many industries to understand and derive value from large datasets~\cite{HadoopWorld2011Speakers, FacebookSIGMOD2011, dremel, quincy}. 

Several data analysis trends emerge, confirmed by the cluster operators who provided the traces in Figure~\ref{fig:HadoopWorkloadJobSizes}. (1) There is increasing desire for interactive and streaming data analysis. The goal is for humans with non-specialist skills to explore diverse and evolving data sources; once they discover a way to extract actionable insights, those insights should be updated in a timely and continuous fashion as new data arrives. (2) Bringing such analytic capability to non-specialists requires high-level computation frameworks built on top of common platforms such as MapReduce. Examples in the Hadoop MapReduce ecosystem include HBase, Hive, Pig, Sqoop, Oozie, and others. (3) Data sizes grow faster than the capacity per unit cost of storage and computation infrastructure. Hence, efficient use of storage and computational capacity is a major concern. 

Incast plays into these trends as follows. The desire for interactive and streaming analysis requires highly responsive systems, and the data sizes involved are small compared with those required for computations on historical data. We know that when incast occurs, the RTO penalty is especially severe for small flows. Applications would potentially be forced either to delay the analysis response or to give answers based on partial data. Thus, incast could emerge as a barrier to high-quality interactive and streaming analysis. 
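A back-of-the-envelope calculation illustrates why the RTO penalty dominates small flows. The link speed, RTT, and flow size below are illustrative assumptions, not measurements from our experiments, and the model deliberately ignores slow start:

```python
# Illustrative sketch (assumed numbers, not measured): how a single
# retransmission timeout inflates a small flow's completion time.

LINK_GBPS = 1.0   # assumed 1 Gbps datacenter link
RTT_US = 100      # assumed 100 microsecond round-trip time

def completion_ms(flow_kb, rto_ms, timeouts=1):
    """Transfer time plus RTO penalty, ignoring slow start for simplicity."""
    transfer_ms = flow_kb * 8 / (LINK_GBPS * 1e6) * 1e3  # KB on the link, in ms
    return transfer_ms + RTT_US / 1000 + timeouts * rto_ms

small_flow_kb = 32  # assumed size of one interactive-analysis response fragment
print(completion_ms(small_flow_kb, rto_ms=200))  # 200 ms default minimum RTO
print(completion_ms(small_flow_kb, rto_ms=1))    # 1 ms minimum RTO
```

With these assumptions, the 32~KB flow itself takes a fraction of a millisecond to transfer, so a single timeout at the common 200~ms minimum RTO inflates completion time by more than two orders of magnitude, while a 1~ms minimum RTO keeps the penalty comparable to the transfer itself.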

The desire to have non-specialists use big data systems suggests that functionality and usability should be the top design priorities. Incast affects performance, which can be interpreted as a kind of usability, but it becomes a priority only once the system is functional. Also, as our Hadoop experiments demonstrate, performance tuning for multi-layered software stacks must confront multiple layers of complexity and overhead. 

The need for storage capacity efficiency entails storing compressed data, performing data deduplication, or using RAID instead of data replication to achieve fault tolerance. In such environments, memory locality becomes the top concern, and disk or network locality becomes secondary~\cite{diskLocalityIrrelevant}. If the workload characteristics permit a high level of memory or disk locality, network traffic decreases, application performance increases, and incast becomes less of a concern. 

The need for computational capacity efficiency implies that computing infrastructure must be more highly utilized, so network demands will increase. Consolidating diverse applications and workloads multiplexes many network traffic patterns, and incast will likely occur with greater frequency. Further, additional TCP pathologies may be revealed, such as the similarly named TCP outcast problem, which affects link-share fairness for large flows~\cite{TCPOutcast}. 




\section{Recommendations}

\vspace{2pt}
\noindent \emph{Set TCP minimum RTO to 1ms.} 
\vspace{2pt}

Future big data workloads will likely reveal TCP pathologies beyond incast. Incast and similar behaviors are fundamentally transport-level problems, and it is not resource-effective to overhaul the entire TCP protocol, redesign switches, or replace the datacenter network to address a single one of them. Setting \texttt{tcp\_rto\_min} is a configuration parameter change -- low overhead, immediately deployable, and, as we hope our experiments show, harmless inside the datacenter. 
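On Linux, one way to make this change is per route via \texttt{iproute2}, which confines the lower minimum RTO to datacenter-internal traffic while leaving wide-area routes at the kernel default. The subnet and interface name below are placeholders for illustration; substitute the cluster's actual values:

\begin{verbatim}
# Sketch: lowering the TCP minimum RTO per route with iproute2.
# 10.0.0.0/24 and eth0 are placeholder values for a rack-local subnet.

# Inspect the current route for the rack-local subnet.
ip route show 10.0.0.0/24

# Lower the minimum RTO to 1 ms on that route only; routes without
# the option keep the kernel default (typically 200 ms).
sudo ip route change 10.0.0.0/24 dev eth0 rto_min 1ms

# Verify the option took effect.
ip route show 10.0.0.0/24
\end{verbatim}

Scoping the change to a route, rather than patching the kernel-wide constant, preserves conservative timeout behavior for traffic leaving the datacenter.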

\vspace{2pt}
\noindent \emph{Deploy better tracing infrastructure.} 
\vspace{2pt}

It is not yet clear how much incast will impact future big data workloads. This article discusses several contributing factors; further information is needed to determine which factors dominate under what circumstances. Better tracing helps remove the uncertainty. Where possible, such insights should be shared with the broader community. We hope the workload comparisons in this article encourage similar cross-organizational efforts elsewhere. 

\vspace{2pt}
\noindent \emph{Apply a scientific design process.} 
\vspace{2pt}

We believe future big data systems demand a departure from design approaches that emphasize implementation over measurement and validation. The complexity, diversity, scale, and rapid evolution of such systems imply that mis-design opportunities proliferate, redesign costs increase, experiences rapidly become obsolete, and intuitions become hard to develop. Our approach in this article involves performing simplified experiments, developing models based on first principles, empirically validating these models, then connecting the insights to real life by introducing increasing levels of complexity. We hope our experiences tackling the incast problem demonstrate the value of a design process rooted in empirical measurement and evaluation. 

