\section{Introduction}

TCP incast is a recently identified network transport pathology that affects many-to-one communication patterns in datacenters. It is caused by a complex interplay between datacenter applications, the underlying switches, network topology, and TCP, which was originally designed for wide area networks. Incast increases the queuing delay of flows and decreases application level throughput to far below the link bandwidth. The problem especially affects computing paradigms in which distributed processing cannot progress until all parallel threads in a stage complete. Examples of such paradigms include distributed file systems, web search, advertisement selection, and other applications with partition or aggregation semantics~\cite{cmuFAST2008, incastWREN2009, DCTCP}.
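Why a barrier-synchronized stage is so sensitive to incast can be seen with a toy calculation (an illustration, not the model developed later in this article): the stage finishes only when the slowest of its parallel flows completes, so a single TCP retransmission timeout dominates the stage's completion time.

\begin{verbatim}
# Toy sketch: completion time of a partition/aggregate stage is
# gated by its slowest flow, so one timed-out flow dominates.
# The 40-server count and 1 ms per-flow time are illustrative.

def stage_completion_ms(flow_times_ms):
    """A barrier-synchronized stage waits for its slowest flow."""
    return max(flow_times_ms)

# All 40 servers deliver their block in ~1 ms:
normal = [1.0] * 40
print(stage_completion_ms(normal))        # 1.0

# One flow suffers a retransmission timeout (200 ms minimum RTO),
# stalling the entire stage:
with_timeout = [1.0] * 39 + [1.0 + 200.0]
print(stage_completion_ms(with_timeout))  # 201.0
\end{verbatim}

Even though 39 of 40 flows finish promptly, the stage's effective throughput collapses by two orders of magnitude.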

There have been many proposed solutions for incast. Representative approaches include modifying TCP parameters~\cite{incastSIGCOMM2009, incastWREN2009} or its congestion control algorithm~\cite{ICTCP}, optimizing application level data transfer patterns~\cite{cmuFAST2008, incastPDSW2007}, switch level modifications such as larger buffers~\cite{cmuFAST2008} or explicit congestion notification (ECN) capabilities~\cite{DCTCP}, and link layer mechanisms such as Ethernet congestion control~\cite{EthernetCongestionNotification,QCN}. Application level solutions are the least intrusive to deploy, but require modifying every datacenter application. Switch and link level solutions require modifying the underlying datacenter infrastructure, and are likely to be logistically feasible only during hardware upgrades.

Unfortunately, despite these solutions, we still have no quantitatively accurate and empirically validated model to predict incast behavior. Similarly, despite many studies demonstrating incast for microbenchmarks, we still do not understand how incast impacts application level performance subject to real life complexities in configuration, scheduling, data size, and other environmental and workload properties. These concerns create justified skepticism about whether we truly understand incast at all, whether it is even an important problem for a wide class of workloads, and whether it is worth the effort to deploy various incast solutions in front-line, business-critical datacenters.

We seek to understand how incast impacts the emerging class of big data workloads. Canonical big data workloads help solve needle-in-a-haystack type problems and extract actionable insights from large scale, potentially complex and unformatted data. We do not propose in this article yet another solution for incast. Rather, we focus on developing a deep understanding of one existing solution: reducing the minimum length of the TCP retransmission timeout (RTO) from 200ms to 1ms~\cite{incastSIGCOMM2009, incastWREN2009}. We believe TCP incast is fundamentally a transport layer problem, thus a solution at this level is best.
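On Linux, one way this solution can be deployed without kernel modification is through iproute2's per-route \texttt{rto\_min} metric (support and minimum granularity vary by kernel version; the subnet and device below are illustrative values, not a specific recommendation):

\begin{verbatim}
# Lower the minimum TCP retransmission timeout from the Linux
# default (200 ms) to 1 ms for traffic to a datacenter subnet.
# 10.0.0.0/24 and eth0 are placeholder values; requires root.
ip route replace 10.0.0.0/24 dev eth0 rto_min 1ms

# Confirm the metric is attached to the route:
ip route show 10.0.0.0/24
\end{verbatim}

Because the setting is scoped to a route, wide area flows on other routes keep the conservative default, which matters since an aggressive RTO floor is appropriate only when round trip times are in the microsecond to millisecond range.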

The first half of this article develops and validates a quantitative model that accurately predicts the onset of incast and TCP behavior both before and after. The second half of this article investigates how incast affects the Apache Hadoop implementation of MapReduce, an important example of a big data application. We close the article by reflecting on some technology and data analysis trends surrounding big data, speculating on how these trends interact with incast, and making recommendations for datacenter operators.
