\section{Towards an Analytical Model}

We use a simple network topology and workload to develop an analytical model for incast, shown in Figure~\ref{fig:IncastSetup}. This is the same setup as that used in prior work~\cite{cmuFAST2008, incastSIGCOMM2009, incastWREN2009}. We choose this topology and workload to make the analysis tractable. 

\begin{figure}[t]
\begin{center}
\centering
\includegraphics[trim = 0cm 13cm 7cm 0cm, clip, width=8cm]{figures/IncastSetup}
%\vspace{-17pt}
\caption{\small Simple setup to observe incast. The receiver requests $k$ blocks of data from a set of $N$ storage servers. Each block is striped across $N$ storage servers. For each block request received, a server responds with a fixed amount of data. Clients do not request block $k+1$ until all the fragments of block $k$ have been received.}
\label{fig:IncastSetup}
\end{center}
%\vspace{-8pt}
\end{figure}


The workload is as follows. The receiver requests $k$ blocks of data from a set of $N$ storage servers --- in our experiments $k=100$ and $N$ varies from 1 to 48. Each block is striped across all $N$ storage servers. For each block request received, a server responds with a fixed amount of data. Clients do not request block $k+1$ until all the fragments of block $k$ have been received --- this leads to a {\em synchronized read pattern} of data requests. We re-use the storage server and client code from~\cite{cmuFAST2008, incastSIGCOMM2009, incastWREN2009}. The performance metric for these experiments is {\em application-level goodput}, \ie the total bytes received from all senders divided by the finishing time of the \emph{last} sender.
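The synchronized read pattern amounts to a simple request loop with a barrier per block. The sketch below is illustrative only --- \texttt{StubServer} and \texttt{request\_fragment} are hypothetical stand-ins, not the actual measurement client from the cited work:

```python
import time

class StubServer:
    """Hypothetical in-process stand-in for a storage server."""
    def request_fragment(self, block_id, size):
        return b"x" * size  # respond with a fixed amount of data per request

def synchronized_reads(servers, num_blocks, fragment_size):
    """Block k+1 is not requested until every fragment of block k arrives."""
    total_bytes, start = 0, time.monotonic()
    for k in range(num_blocks):
        # request fragment k from all N servers, then barrier on the replies
        replies = [srv.request_fragment(k, fragment_size) for srv in servers]
        total_bytes += sum(len(r) for r in replies)
    elapsed = time.monotonic() - start
    # application-level goodput: total bytes / finishing time of last sender
    return total_bytes, elapsed

total, elapsed = synchronized_reads([StubServer() for _ in range(4)], 100, 64_000)
```

The barrier per block is what makes incast possible: all $N$ servers answer each request at nearly the same instant, converging on the receiver's single link.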

We conduct our experiments on the DETER Lab testbed~\cite{DETER}, where we have full control over the non-virtualized node OS, as well as the network topology and speed. We use 3GHz dual-core Intel Xeon machines with 1Gbps network links. The nodes run standard Linux 2.6.28.1, the most recent mainline Linux kernel in late 2009, when we obtained our prior results~\cite{incastWREN2009}. We present results using both a relatively shallow-buffered Nortel 5500 switch (4KB per port) and a more deeply buffered HP Procurve 5412 switch (64KB per port).



\subsection{Flow rate models}

The simplest model for incast is based on two competing behaviors as we increase $N$, the number of concurrent senders. The first behavior occurs before the onset of incast, and reflects the intuition that per-sender goodput is the block size divided by the transfer time. The ideal transfer time is just the sum of a round trip time (RTT) and the ideal send time. Equation~\ref{eq:idealModel} captures this idea.

\scriptsize{
\begin{eqnarray} 
Goodput_{beforeIncast} &=& idealGoodputPerSender \times N \nonumber \\
        &=& \frac{blockSize}{idealTransferTime} \times N \nonumber \\
        &=& \cfrac{blockSize}{RTT + \cfrac{blockSize}{perSenderBandwidth}} \times N \nonumber \\
        &=& \cfrac{blockSize}{RTT + {\cfrac{blockSize \times N}{linkBandwidth}}} \times N \nonumber \\
\label{eq:idealModel}
\end{eqnarray}
}
\normalsize

Incast occurs at some $N>1$ concurrent senders, where the goodput drops significantly. After the onset of incast, the TCP retransmission timeout (RTO) becomes the dominant effect. The transfer time becomes RTT + RTO + ideal send time, as captured in Equation~\ref{eq:simpleIncastModel}. The goodput collapse represents a transition between the two behavior modes.

\scriptsize{
\begin{eqnarray} 
Goodput_{incast} &=& goodputPerSender \times N \nonumber \\
        &=& \frac{blockSize}{idealTransferTime + RTO} \times N \nonumber \\
        &=& \cfrac{blockSize}{RTO + RTT + \cfrac{blockSize}{perSenderBandwidth}} \times N \nonumber \\
        &=& \cfrac{blockSize}{RTO + RTT + {\cfrac{blockSize \times N}{linkBandwidth}}} \times N \nonumber \\
\label{eq:simpleIncastModel}
\end{eqnarray}
}
\normalsize
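A short numerical sketch makes Equations~\ref{eq:idealModel} and~\ref{eq:simpleIncastModel} concrete. The function and variable names are ours; sizes are in bytes, times in seconds, and bandwidth in bytes per second:

```python
def goodput_before_incast(block_size, rtt, link_bw, n):
    """Pre-incast model: aggregate goodput of n synchronized senders."""
    ideal_transfer_time = rtt + block_size * n / link_bw
    return block_size / ideal_transfer_time * n

def goodput_after_incast(block_size, rtt, rto, link_bw, n):
    """Post-incast model: each block transfer additionally pays one full RTO."""
    transfer_time = rto + rtt + block_size * n / link_bw
    return block_size / transfer_time * n

LINK = 125_000_000  # 1Gbps in bytes per second
# 256KB blocks, RTT = 1ms, RTO = 200ms
pre  = goodput_before_incast(256_000, 0.001, LINK, 48)     # approaches LINK
post = goodput_after_incast(256_000, 0.001, 0.2, LINK, 1)  # RTO-dominated
```

Evaluating both functions over a range of $N$ reproduces the two families of curves discussed next: the pre-incast goodput climbs toward the link bandwidth, while the post-incast goodput starts far below it because the RTO dwarfs the ideal transfer time for small blocks.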

Figure~\ref{fig:ModelIdeal} illustrates Equations~\ref{eq:idealModel} and~\ref{eq:simpleIncastModel}. We substitute $blockSize$ = 64KB, 256KB, 1024KB, and 64MB, as well as $RTT$ = 1ms and $RTO$ = 200ms. Before the onset of incast (Equation~\ref{eq:idealModel}), the goodput increases as $N$ increases, though at a diminishing rate, asymptotically approaching the full link bandwidth. The curves move vertically upwards as block size increases, reflecting the fact that larger blocks spend a larger fraction of the ideal transfer time transmitting data, versus waiting an RTT for the acknowledgment that the transmission completed. After incast occurs (Equation~\ref{eq:simpleIncastModel}), the RTO dominates the transfer time for small block sizes. Again, larger blocks shrink the RTO's share of the total transfer time, so the curves move vertically upwards as block size increases.

\begin{figure}[t]
\begin{center}
\centering
\includegraphics[trim = 0cm 11cm 10.5cm 0cm, clip, width=8cm]{figures/ModelIdeal}
%\vspace{-17pt}
\caption{\small Flow rate model for incast. Showing ideal behavior (solid lines, Equation~\ref{eq:idealModel}) and incast behavior caused by RTOs (dotted lines, Equation~\ref{eq:simpleIncastModel}). We substitute $blockSize$ = 64KB, 256KB, 1024KB, and 64MB, as well as $RTT$ = 1ms, and $RTO$ = 200ms. The incast goodput collapse comes from the transition between the two TCP operating modes.}
\label{fig:ModelIdeal}
\end{center}
%\vspace{-8pt}
\end{figure}




\subsection{Empirical verification}


\begin{figure}[t]
\begin{center}
\centering
\includegraphics[trim = 0cm 11.2cm 14.5cm 0cm, clip, width=6cm]{figures/IncastWRENModel}
%\vspace{-17pt}
\caption{\small Empirical verification of flow rate incast model. Uses our previously presented data in~\cite{incastWREN2009}. The $blockSize$ is 256KB, $RTO$ is set to 100ms and 200ms, and the model uses $RTT$ = 1ms. Error bars represent 95\% confidence interval around the average of 5 repeated measurements. The switch is a Nortel 5500 (4KB per port). Showing (1). Incast goodput collapse begins at $N$ = 2 senders, and (2). Behavior after goodput collapse verifies Equation~\ref{eq:simpleIncastModel}. }
\label{fig:IncastWRENModel}
\end{center}
%\vspace{-8pt}
\end{figure}

This model matches our empirical measurements well. Figure~\ref{fig:IncastWRENModel} superimposes the model on our previously presented data in~\cite{incastWREN2009}. There, we fix $blockSize$ at 256KB and set $RTO$ to 100ms and 200ms. The switch is a Nortel 5500 (4KB per port). For simplicity, we use $RTT$ = 1ms for the model. Goodput collapse begins at $N$ = 2, so we observe the behavior of Equation~\ref{eq:simpleIncastModel} only. The empirical measurements (solid lines) match the model (dotted lines) almost exactly.



\begin{figure}[t]
\begin{center}
\centering
\includegraphics[trim = 0cm 11.2cm 11cm 0cm, clip, width=8cm]{figures/ProcurveManyhosts16-128KBBlocks}
%\vspace{-17pt}
\caption{\small Empirical verification of flow rate TCP model before onset of incast. Measurements done on HP Procurve 5412 switches (64KB per port). $RTO$ is 200ms. Error bars represent 95\% confidence interval around the average of 5 repeated measurements. Showing (1). Behavior before goodput collapse verifies Equation~\ref{eq:idealModel}, and (2). Onset of incast goodput collapse predicted by switch buffer overflow during slow start (Equation~\ref{eq:predictGoodputCollapse}).}
\label{fig:ProcurveManyhosts16-128KBBlocks}
\end{center}
%\vspace{-8pt}
\end{figure}

We use a more deeply buffered switch to verify Equation~\ref{eq:idealModel}. As we discuss later, the switch buffer size determines the onset of incast. Figure~\ref{fig:ProcurveManyhosts16-128KBBlocks} shows the behavior using the HP Procurve 5412 switch (64KB per port). Behavior before goodput collapse qualitatively verifies Equation~\ref{eq:idealModel} --- the goodput increases as $N$ increases, though with diminishing rate; the curves move vertically upwards as block size increases. We can see this graphically by comparing the curves in Figure~\ref{fig:ProcurveManyhosts16-128KBBlocks} before the goodput collapse to the corresponding curves in Figure~\ref{fig:ModelIdeal}. 

\vspace{2pt}
\emph{Takeaway: Flow rate model captures behavior before onset of incast. TCP RTO dominates behavior after onset of incast.}




\subsection{Predicting the onset of incast}

Figure~\ref{fig:ProcurveManyhosts16-128KBBlocks} also shows that goodput collapse occurs at different $N$ for different block sizes. We can predict the location of the onset of goodput collapse by detailed modeling of TCP slow start and buffer occupancy. Table~\ref{table:slowStartCwndSizes} shows the slow start congestion window sizes versus each packet round trip. For 16KB blocks, 12 concurrent senders, each at the largest congestion window of 5,864 bytes, would require 70,368 bytes of buffer, more than the available 64KB per port. Goodput collapse begins after $N$ = 13 concurrent senders. The discrepancy of 1 comes from the fact that there is additional ``buffer'' on the network beyond the packet buffer on the switch, \eg packets in flight, buffer at the sender machines, etc. According to this logic, goodput collapse should begin at the $N$ given by Equation~\ref{eq:predictGoodputCollapse}. The equation accurately predicts that for Figure~\ref{fig:ProcurveManyhosts16-128KBBlocks}, the goodput collapse for 16KB, 32KB, and 64KB blocks begins at 13, 7, and 4 concurrent senders, and for Figure~\ref{fig:IncastWRENModel}, the goodput collapse is well underway at 2 concurrent senders.

\scriptsize{
\begin{eqnarray} 
N_{initialGoodputCollapse} = \left\lceil\frac{perSenderBuffer}{largestSlowStartCwnd}\right\rceil + 1 
\label{eq:predictGoodputCollapse}
\end{eqnarray}
}
\normalsize
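Equation~\ref{eq:predictGoodputCollapse} is straightforward to evaluate. The sketch below (our naming) plugs in the 64KB per-port buffer of the HP Procurve 5412 and the largest slow start windows from Table~\ref{table:slowStartCwndSizes}:

```python
import math

def onset_of_collapse(per_sender_buffer, largest_slow_start_cwnd):
    """Smallest N whose largest slow start windows overflow the shared
    per-port buffer, plus 1 for the extra 'buffer' in flight and at senders."""
    return math.ceil(per_sender_buffer / largest_slow_start_cwnd) + 1

BUFFER = 64 * 1024  # HP Procurve 5412: 64KB per port
# largest slow start cwnd per block size, from the table
for block_kb, largest_cwnd in ((16, 5864), (32, 11584), (64, 23168)):
    n = onset_of_collapse(BUFFER, largest_cwnd)
    print(f"{block_kb}KB blocks: collapse predicted at N = {n}")
# prints N = 13, 7, and 4 concurrent senders respectively
```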

\begin{table}[t]
\centering
%\vspace{-5pt}
\footnotesize
\begin{tabular}{r r r r r} 
\hline
{\bf Round }  & {\bf 16KB  } & {\bf 32KB  } & {\bf 64KB  } & {\bf 128KB } \\ 
{\bf trip \#} & {\bf blocks} & {\bf blocks} & {\bf blocks} & {\bf blocks} \\ \hline 
1 & 1,448 & 1,448  & 1,448  & 1,448  \\ 
2 & 2,896 & 2,896  & 2,896  & 2,896 \\ 
3 & 5,792 & 5,792  & 5,792  & 5,792 \\
4 & 5,864 & 11,584 & 11,584 & 11,584 \\
5 &       & 10,280 & 23,168 & 23,168 \\
6 &       &        & 19,112 & 46,336 \\ 
7 &       &        &        & 36,776 \\ \hline
\end{tabular}
\normalsize
\caption{\small TCP slow start congestion window size in bytes versus number of round trips. Showing the behavior for $blockSize$ = 16KB, 32KB, 64KB, 128KB. We verified using \texttt{sysctl} that Linux begins at 2$\times$ the base MSS of 1448 bytes.}
\label{table:slowStartCwndSizes}
%\vspace{-5pt}
\end{table}
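The window sizes in Table~\ref{table:slowStartCwndSizes} can be reproduced by a short sketch. This is our reconstruction, under the assumption that the window doubles each round trip starting from a single 1448-byte MSS, and that the final round trip carries only the bytes remaining in the block (block sizes taken as multiples of 1000 bytes):

```python
def slow_start_rounds(block_size, mss=1448):
    """Bytes sent per round trip while slow start drains one block.
    Assumes the window starts at one MSS and doubles each round trip;
    the last round carries only the remainder of the block."""
    rounds, sent, cwnd = [], 0, mss
    while sent < block_size:
        send = min(cwnd, block_size - sent)
        rounds.append(send)
        sent += send
        cwnd *= 2
    return rounds

print(slow_start_rounds(16_000))   # → [1448, 2896, 5792, 5864]
print(slow_start_rounds(128_000))  # → [1448, 2896, 5792, 11584, 23168, 46336, 36776]
```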

\vspace{2pt}
\emph{Takeaway: For small flows, the switch buffer space determines the onset of incast.}


\subsection{Second order effects}

Figure~\ref{fig:ProcurveManyhosts16-128KBBlocks} also suggests the presence of second order effects not explained by Equations~\ref{eq:idealModel} to~\ref{eq:predictGoodputCollapse}. Equation~\ref{eq:predictGoodputCollapse} predicts that goodput collapse for 128KB blocks should begin at $N$ = 2 concurrent senders, while the empirically observed goodput collapse begins at $N$ = 4 concurrent senders. It turns out that block sizes of 128KB represent a transition point from RTO-during-slow-start to more complex modes of behavior. 

We repeat the experiment for $blockSize$ = 128KB, 256KB, 512KB, and 1024KB. Figure~\ref{fig:ProcurveManyhosts128-1024KBBlocks} shows the results, which includes several interesting effects.

\begin{figure}[t]
\begin{center}
\centering
\includegraphics[trim = 0cm 11.2cm 9cm 0cm, clip, width=8.5cm]{figures/ProcurveManyhosts128-1024KBBlocks2ndOrderEffects}
%\vspace{-17pt}
\caption{\small 2nd order effects other than RTO during slow start. Measurements done on HP Procurve 5412 switches (64KB per port). $RTO$ is 200ms. Error bars represent 95\% confidence interval around the average of 5 repeated measurements. Showing (1). Partial RTOs more accurately modeling incast behavior for large blocks, (2). Transition between single and multiple partial RTOs, and (3). Triple duplicate ACKs causing more gradual, $blockSize$ independent onset of incast.}
\label{fig:ProcurveManyhosts128-1024KBBlocks}
\end{center}
%\vspace{-8pt}
\end{figure}


First, for $blockSize$ = 512KB and 1024KB, the goodput immediately after the onset of incast is given by Equation~\ref{eq:partialIncastModel}. It differs from Equation~\ref{eq:simpleIncastModel} by the multiplier $\alpha$ on the $RTO$ term in the denominator. This $\alpha$ is an empirical constant, and represents a behavior that we call a partial RTO. When an RTO takes place, TCP SACK (enabled by default in Linux) allows the sender to keep transmitting further data until the congestion window can no longer advance past the lost packet. The link is therefore idle for less than the full RTO value; hence the name partial RTO. For $blockSize$ = 1024KB, $\alpha$ is 0.6, and for $blockSize$ = 512KB, $\alpha$ is 0.8.

\scriptsize{
\begin{eqnarray} 
Goodput_{incast} &=& \cfrac{blockSize}{\alpha \times RTO + RTT + {\cfrac{blockSize \times N}{linkBandwidth}}} \times N \nonumber \\
\label{eq:partialIncastModel}
\end{eqnarray}
}
\normalsize
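Equation~\ref{eq:partialIncastModel} is the earlier RTO model with the timeout term scaled by $\alpha$. A sketch (our naming, using the empirically fitted $\alpha$ values from the text):

```python
def goodput_partial_rto(block_size, rtt, rto, link_bw, n, alpha):
    """Partial-RTO model: the RTO term is discounted by the factor alpha."""
    transfer_time = alpha * rto + rtt + block_size * n / link_bw
    return block_size / transfer_time * n

LINK = 125_000_000  # 1Gbps in bytes per second
# alpha = 0.6 for 1024KB blocks, 0.8 for 512KB blocks (empirical fits)
partial = goodput_partial_rto(1_024_000, 0.001, 0.2, LINK, 8, alpha=0.6)
full    = goodput_partial_rto(1_024_000, 0.001, 0.2, LINK, 8, alpha=1.0)
# idling for only part of an RTO yields strictly higher goodput
```

Setting $\alpha = 1$ recovers Equation~\ref{eq:simpleIncastModel} exactly, so the partial-RTO model is a strict generalization of the simple one.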

Second, beyond a certain number of concurrent senders, $\alpha$ roughly doubles from its initial value (0.6 to 1.0 for $blockSize$ = 1024KB, 0.8 to 1.5 for $blockSize$ = 512KB). This simply indicates that two partial RTOs have occurred.

Third, the goodput collapse for $blockSize$ = 256KB, 512KB, and 1024KB is more gradual than the cliff-like behavior in Figure~\ref{fig:ProcurveManyhosts16-128KBBlocks}. Further, this gradual goodput collapse has the same slope across different $blockSize$. Two factors explain this behavior. First, flows with $blockSize \geq$ 128KB have much more data to send even after the buffer space is filled with packets sent during slow start (Equation~\ref{eq:predictGoodputCollapse} and Table~\ref{table:slowStartCwndSizes}). Second, even when the switch drops packets, TCP can sometimes recover. Empirical evidence of this appears in Figure~\ref{fig:ProcurveManyhosts16-128KBBlocks}: for $blockSize$ = 16KB and $N$ = 13 to 16 concurrent senders, at least one of five repeated measurements achieves goodput close to 90\% of link capacity. Goodput collapse happens in the other runs because packets are dropped in a pattern where a connection with little additional data to send observes only a single or double duplicate ACK, and soon enters RTO. Larger blocks suffer less from this problem because the ongoing data transfer triggers triple duplicate ACKs with higher probability. The connection then retransmits, enters congestion avoidance, and avoids the RTO. Hence the gradual goodput collapse.

We should point out that SACK semantics are independent of duplicate ACKs, since SACK is layered on top of existing cumulative ACK semantics~\cite{TCPSACK}. 
%\begin{figure}[t!]
%\begin{center}
%\centering
%\includegraphics[trim = 0cm 11.2cm 10.7cm 0cm, clip, width=8cm]{figures/ProcurveManyhosts128-1024KBBlocks}
%%\vspace{-17pt}
%\caption{\small Goodput collapse other than RTO during slow start. Measurements done on HP Procurve 5412 switches (64KB per port). $RTO$ is 200ms. Error bars represent 95\% confidence interval around the average of 5 repeated measurements. Equation~\ref{eq:predictGoodputCollapse} predicts that goodput collapse for these settings would begin at $N$ = 2. Actual behavior is different. RTO still occurs, because after the goodput collapse, the empirical behavior matches that predicted by Equation~\ref{eq:simpleIncastModel}.}
%\label{fig:ProcurveManyhosts128-1024KBBlocks}
%\end{center}
%%\vspace{-8pt}
%\end{figure}






%This interaction further explains two features. First, the more gradual goodput collapse is independent of block size beyond 256KB. Larger blocks all experience the same duplicate ACK behavior and exit slow start for some ranges of $N$. Second, as $N$ increase further, RTO eventually takes place. For large $N$, it becomes less likely that the same connection will see triple duplicate ACKs, the congestion window stops advancing, and the connection eventually enters RTO. 

\vspace{2pt}
\emph{Takeaway: Second order effects include partial RTO due to SACK, multiple partial RTOs, and triple duplicate ACKs causing more gradual onset of incast.}
\vspace{2pt}

%The 1024KB line in Figure~\ref{fig:ProcurveManyhosts128-1024KBBlocks} reveals a further complexity. At $N=24$ concurrent senders, the goodput begins decreasing, countering the steadily increasing trend from $N=$ 16 to 23. This continues until the line hits the goodput predicted by single RTO model. Thereafter the goodput follows the single RTO model. 
%
%We believe this behavior is due to buffer management complexities within the switch. The HP Procurve 5412 switch contains 24-port modules. Hence, for flows of 1-to-$N\leq23$, the nodes can fit within a single 24-port module. The goodput there is above that predicted by Equation~\ref{eq:simpleIncastModel}, but follows the increasing slope of the modeled goodput line. This suggests some benefitial second order effect that systematically reduces the probability of a RTO. Flows with $N\geq24$ traverse multiple modules, the benefitial second order effects gradually disappear, and the goodput more closely follow Equation~\ref{eq:simpleIncastModel}. Prior work observed similar effects, with turning points on goodput curves suggesting switch buffer management in multiples of 12 ports~\cite{incastWREN2009Talk} or 16 ports~\cite{ICTCP}. 
%
%\vspace{2pt}
%\emph{Takeaway: Triple duplicate ACKs and switch buffer management policies cause second order effects that deviate from modeled behavior for some conditions. First order behavior after incast remains in agreement with RTO model in Equation~\ref{eq:simpleIncastModel}.}
%\vspace{2pt}



\subsection{Good enough model}

Unfortunately, some parts of the model remain qualitative. We admit that the full interaction between triple duplicate ACKs, slow start, and available buffer space requires elaborate treatment far beyond the flow rate and buffer occupancy analysis presented here. 

That said, the models here represent the first time we quantitatively explain the major features of the incast goodput collapse. Our models also explain comparable results in related work~\cite{ICTCP,cmuFAST2008}. The analysis allows us to reason about the significance of incast for future big data workloads later in the article.

