\section{What We Know About TCP Incast}

\subsection{Solutions in many layers}


\subsection{Experimental challenges}

It is non-trivial to conduct experiments on TCP incast. Many approaches frequently used by network researchers turn out to be problematic. We outline a few experimental concerns below, drawing on our own experience working on the topic.

One approach is to use network simulators such as \texttt{ns-2} or \texttt{ns-3}~\cite{ns2, ns3}. Unfortunately, common TCP stack implementations, such as those in Linux and Windows, all deviate considerably from the idealized TCP in the simulators. After consulting the Linux source code, we concluded that any proposed TCP improvements should be merged into an actual TCP stack and evaluated on real machines and real networks.

Another approach we found to be problematic is to use packet-level tracing tools such as \texttt{tcptrace} and \texttt{tcpdump}~\cite{tcptrace,tcpdump}. These tools generate considerable load on the machines being traced, enough to distort the results. In some of our early work, this problem limited the network throughput to around 80\% of the capacity. Most critically, we did not realize that a systematic tracing overhead existed until we specifically designed an experiment to test for it.
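One simple way to test for such a systematic overhead is to compare throughput measured with tracing enabled against a baseline without it. The sketch below illustrates the arithmetic of that comparison; the function name and the sample numbers are hypothetical, chosen only to mirror the roughly 80\%-of-capacity effect described above.

```python
def tracing_overhead(baseline_gbps, traced_gbps):
    """Fraction of throughput lost when packet tracing is enabled.

    baseline_gbps: throughput measured with tracing off
    traced_gbps:   throughput measured with tracing (e.g. tcpdump) on
    """
    return 1.0 - traced_gbps / baseline_gbps

# Hypothetical measurements on a 1 Gbps link: near line rate without
# tracing, but only ~0.8 Gbps once packet capture is running.
overhead = tracing_overhead(baseline_gbps=0.95, traced_gbps=0.80)
print(f"tracing overhead: {overhead:.1%}")
```

Averaging this quantity over repeated paired runs separates a systematic tracing cost from ordinary run-to-run variance.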

A third approach we intentionally avoided is to conduct experiments on public cloud computing platforms such as Amazon EC2~\cite{EC2}. Public cloud platforms enable very large scale experiments, and researchers in other areas routinely run experiments on the order of hundreds or even thousands of machines~\cite{SWIM,BOOM,fairScheduler,PACMan}. However, public cloud platforms currently permit only customized \emph{virtual} operating systems, which prevents experimentation with modified TCP kernel stacks. Furthermore, the network is likewise shared and virtualized. Incast is masked because, in many settings, the network throughput cannot be driven to a large fraction of the capacity.

These concerns lead us to two experimental extremes. We could conduct experiments on small scale, pristine laboratory networks, in which we deploy the modified TCP kernel stacks. The size of these networks limits our ability to investigate any solution at scale. An alternative is to deploy the modified TCP kernel stacks in large scale production environments, and confront the full complexity of a shared network. Doing so carries considerable operational risk, especially if the production environments are business critical; moreover, a truly large scale production environment is often not available. Both approaches lack insight into packet-by-packet TCP interactions -- without packet-level tracing tools that are \emph{empirically verified} to carry low overhead, we would be forced to observe application-level behavior only.

Our goal for this article is to understand incast, and to use this knowledge to extrapolate to settings beyond our present experimental capabilities. Hence, we chose the pristine laboratory network approach. We performed the experiments in this article on the DETER testbed~\cite{DETER}, which led to a helpful analytical model as well as implications for general big data workloads.

