\section{Evaluation}
\label{eval}
In this section, we first use real-trace simulation to discuss how to set the parameters $W_H$ and $W_L$, and then
evaluate ATCP's performance in various scenarios with different topologies and traces.
Finally, we compare application performance under TCP, DCTCP~\cite{dctcp}, and ATCP.
\subsection{Parameter Setting}
In the weight-size function, the parameters are $W_H$, $W_L$, and the threshold $T$.
We set $T$ by observing the flow size distribution in Figure~\ref{flow}.
The gap between small and medium flows ($<$10MB) and large flows ($>$10MB) is very clear, so we set $T$ to 10MB.
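The weight-size function can be sketched as a simple threshold rule; the following is a minimal illustration only (the exact functional form in ATCP may differ), using the 10MB threshold above and the $W_H=3$, $W_L=1$ setting adopted later in this section.

```python
# Illustrative sketch of the weight-size function (assumed threshold rule):
# flows below the threshold T get the high weight W_H, larger flows get W_L.
T = 10 * 1024 * 1024   # 10MB threshold from the flow size distribution
W_H, W_L = 3, 1        # the setting used in the rest of this section

def weight(flow_size):
    """Per-flow window-increase weight, chosen by flow size."""
    return W_H if flow_size < T else W_L
```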
\begin{figure}
\centering
\small
\begin{tabular}{cc}
\includegraphics[width=0.22\textwidth]{fig/small_1.pdf} &
\includegraphics[width=0.22\textwidth]{fig/small_2.pdf}\\
(a) Time(ms) - [$W_H+W_L$] &  (b) Time(ms) - [$W_H:W_L$]
\end{tabular}
\caption{Median Completion Time of Small Flows}
\label{fig:para:small}
\end{figure}

\begin{figure}
\centering
\small
\begin{tabular}{cc}
\includegraphics[width=0.22\textwidth]{fig/medium_1.pdf} &
\includegraphics[width=0.22\textwidth]{fig/medium_2.pdf}\\
(a) Time(s) - [$W_H+W_L$] & (b) Time(s) - [$W_H:W_L$]
\end{tabular}
\caption{Median Completion Time of Medium Flows}
\label{fig:para:medium}
\end{figure}

We build a 4-hop chain topology with 100Mbps capacity and 50$\mu$s latency on each link. Using the measured trace,
we try combinations of $W_H+W_L$ (fixed-sum mode) and $W_H:W_L$ (proportional mode), with both ranging from 1 to 5, and measure flow completion times. Figure~\ref{fig:para:small} and Figure~\ref{fig:para:medium} show the median completion times of small flows and medium flows, respectively.

Compared with small flows, medium flows see a more significant improvement. In the best case, with $W_H=3$ and $W_L=1$, the median completion time of medium flows is reduced by 40\%, from 2.3s to 1.4s. In all cases with $W_H:W_L>2$, the median completion time is reduced by 20\% to 40\%, while small flows' median completion time is reduced by at most 15\% (about 10\% in most cases). Flow completion time consists of propagation delay, queuing delay, and transmission time. For small flows, propagation delay dominates the total transfer time, so allocating more bandwidth helps only a little; for medium flows, transmission time (size/rate) dominates, so allocating more bandwidth improves their performance much more.

${\bf W_H:W_L}$ plays a key role in bandwidth allocation. As Figure~\ref{fig:para:medium}(b) shows, the larger the ratio, the smaller the completion time. The completion time drops rapidly as $W_H:W_L$ varies from 1 to 4, then the trend flattens. A 2-flow example explains this: suppose the weight ratio is $R$, the link capacity is $C$, and the medium flow size is $S$. The bandwidth allocated to the medium flow is $$C\times\frac{R}{R+1},$$ so its completion time is $$\frac{S}{C}\times \left(1+\frac{1}{R}\right),$$ which decreases rapidly at first and then flattens.
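The diminishing return of the 2-flow formula can be checked numerically; the sketch below uses illustrative values (not the paper's trace): $C=100$Mbps and $S=10$MB.

```python
# Numerical check of the 2-flow completion-time formula T(R) = (S/C)*(1 + 1/R),
# with illustrative values: C = 100 Mbps link, S = 10 MB medium flow.
C = 100e6 / 8         # link capacity in bytes per second
S = 10e6              # medium flow size in bytes

def completion_time(R):
    """Completion time of the medium flow at weight ratio R = W_H : W_L."""
    return (S / C) * (1 + 1.0 / R)

# The marginal gain shrinks as R grows: the curve drops fast, then flattens.
times = [completion_time(R) for R in range(1, 6)]
gains = [t1 - t2 for t1, t2 in zip(times, times[1:])]
print(times)   # strictly decreasing
print(gains)   # each successive gain is smaller than the previous one
```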

${\bf W_H+W_L}$ influences the oscillation of total throughput. In the 2-flow example, $(W_H+W_L)\times SegmentSize$ is the per-RTT increment of the sum of all congestion windows.
With a large $W_H+W_L$, the total congestion window exceeds the network's capacity quickly, and packet loss happens frequently, which leads to throughput oscillation.
So as $W_H+W_L$ increases from 1 to 5, the completion time first decreases, then increases, which matches Figure~\ref{fig:para:medium}(a).
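The effect of $W_H+W_L$ on loss frequency can be illustrated with a toy AIMD model; this is a deliberately simplified sketch (not the paper's simulator), where the path capacity in segments and the RTT count are arbitrary illustrative values.

```python
# Toy AIMD model: the sum of the two flows' congestion windows grows by
# (W_H + W_L) segments per RTT and is halved whenever it exceeds the path's
# capacity in segments, which we count as one loss event.
def loss_events(weight_sum, capacity_segments=100, rtts=1000):
    window = 1
    losses = 0
    for _ in range(rtts):
        window += weight_sum          # additive increase per RTT
        if window > capacity_segments:
            window //= 2              # multiplicative decrease on loss
            losses += 1
    return losses

# A larger W_H + W_L refills the pipe faster, so losses are more frequent,
# which is the source of the throughput oscillation described above.
print([loss_events(s) for s in (2, 4, 6, 8, 10)])
```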
\subsection{ATCP Performance}
We first simulate ATCP on a {\bf chain topology} to evaluate its effectiveness and robustness in various situations. We still use the flow traces from the measurement and the 4-hop chain topology. In all these evaluations, we set $W_H=3$ and $W_L=1$.

We introduce flow {\bf deadlines} from $D_3$~\cite{D3}. Applications in data centers usually have deadlines, which are constrained by the service's acceptable response time; only flows that complete within their deadline are meaningful~\cite{D3}. For example, in distributed computations such as web search, a subtask that cannot complete before a certain deadline is abandoned. We assign our small flows the 30ms deadline taken from~\cite{D3}. Looking at the results of the previous section with $W_H=3$ and $W_L=1$, 84.6\% of small flows meet their deadlines under ATCP, while only 77\% do under TCP.
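The deadline-satisfaction percentages are simply the fraction of small flows that finish within the deadline; a minimal sketch (the flow completion times shown are hypothetical, for illustration only):

```python
# Fraction of small flows whose completion time is within the 30ms deadline
# taken from D3. Completion times are in seconds.
DEADLINE = 0.030

def deadline_met_fraction(completion_times):
    met = sum(1 for t in completion_times if t <= DEADLINE)
    return met / len(completion_times)

# Hypothetical completion times for illustration:
print(deadline_met_fraction([0.010, 0.025, 0.031, 0.045]))  # → 0.5
```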
\begin{table}
\centering
\caption{Completion Time - Flow Density}
\begin{tabular}{|c|c|c|}
\hline
\multirow{2}{*}{Setting} & \multicolumn{2}{|c|}{\shortstack{Median Completion Time \\ (Small/Medium Flows)}}\\ \cline{2-3}
& TCP & ATCP\\ \hline
Non-dense flows & 625ms/2.3s & 543ms/1.4s \\ \hline
Dense non-large flows & 763ms/2.5s & 680ms/1.6s \\ \hline
Dense large flows & 750ms/2.8s & 680ms/1.8s \\ \hline
Dense all flows & 813ms/2.9s & 707ms/1.8s \\ \hline
\end{tabular}
\label{tab:dense}
\end{table}

ATCP is robust in the case of {\bf dense flows}. We increase flow density in three ways: dense large flows, dense non-large flows, and dense all flows, by doubling the corresponding (large, non-large, or all) flow count within a fixed time. As expected, completion times increase (Table~\ref{tab:dense}): the links become more congested, so per-flow throughput decreases. In all cases, ATCP reduces completion time compared with TCP: small flows' median completion time is reduced by 10\%-13\%, and medium flows' by about 30\%-40\%. The improvement in the dense non-large flow case (39\% for medium flows) is a bit better than in the dense large flow case (36\%), because with denser non-large flows, they take more bandwidth away from the large flows.
\begin{table}
\centering
\caption{Completion Time with Disjoint Path}
\begin{tabular}{|c|c|c|}
\hline
\multirow{2}{*}{Setting} & \multicolumn{2}{|c|}{\shortstack{Median Completion Time\\(Small/Medium Flows)}}\\ \cline{2-3}
& TCP & ATCP\\ \hline
Large flows on the chain & 562ms/2.0s & 489ms/1.3s \\ \hline
Small flows on the chain & 688ms/2.5s & 598ms/1.6s \\ \hline
\end{tabular}
\label{tab:chain}
\end{table}

In practice, flows usually have {\bf disjoint paths}, and only some links on a path may be shared. We simulate two scenarios in which large flows and non-large flows partially share their paths, on a 5-node chain topology. In the first case, large flows transfer data through the whole chain, while each small and medium flow takes one link of the chain. In the second case, each large flow takes one link, while small and medium flows traverse the whole chain. We still use the trace from the measurement.
The completion times in both cases are shown in Table~\ref{tab:chain}.

In both cases, ATCP works better: even when flows share only some links, non-large flows still get more bandwidth when they compete with large flows. Performance is slightly better when large flows take the whole chain than when non-large flows do. If a large flow traverses the whole chain, then as soon as any one link on its path is congested, it decreases its sending rate along the entire chain, giving non-large flows on the other links more opportunity.
\begin{figure}
\centering
\includegraphics[width=2in]{fig/tree_time.pdf}
\caption{Median Web Service Completion Time}
\label{fig:tree}
\end{figure}

\begin{table}
\centering
\caption{Web Service in a Tree}
\begin{tabular}{|c|c|c|}
\hline
Completion Time & TCP & ATCP\\ \hline
median of small flows & 102ms & 80ms\\ \hline
median of medium flows & 1.7s & 1.2s \\ \hline
\end{tabular}
\label{tab:tree}
\end{table}
We also simulate a {\bf web service} application on a {\bf tree} topology like Figure~\ref{example}(a). We choose one node as the web server and another as the database server. The database server backs up data by transmitting it to another server, which forms a large flow. We also add background flows following the distribution in Figure~\ref{flow}, and simulate non-large flows between the web server and a client and between the web server and the database. The results in Figure~\ref{fig:tree} and Table~\ref{tab:tree} show about a 5\%-10\% improvement in completion time for small flows, about 50\% for medium flows, and almost no influence on large flows.

\subsection{ATCP vs. DCTCP}
We compare ATCP with DCTCP. We choose DCTCP because both of them are flow agnostic, 
and we do not need deadlines from the applications.
\begin{figure}
\centering
\small
\begin{tabular}{cc}
\includegraphics[width=0.22\textwidth]{fig/fattree_small.pdf} &
\includegraphics[width=0.22\textwidth]{fig/fattree_medium.pdf}\\
 (a) Small Flows &  (b) Medium Flows
\end{tabular}
\caption{Completion Time in Fattree}
\vspace{-4mm}
\label{fig:fattree}
\end{figure}

We perform our simulations on a network with a {\bf fattree}~\cite{fattree} topology.
We maintain full bisection bandwidth in a tree topology
to simulate the rich paths of a fattree~\cite{fattree}.
There are 4 racks, each with up to 10 machines. Each machine connects to its top-of-rack (ToR) switch via a 1Gbps link, and the ToR switches connect to a core router via 10Gbps links. One server works as the aggregator of a distributed application, and all other servers send responses with sizes in [1KB, 10MB] to the aggregator. We then inject a background flow from the core switch to the aggregator. We compare TCP, ATCP, and DCTCP in terms of the flows' median completion time.

In Figure~\ref{fig:fattree}, small flows' median completion time is 210ms, 20ms, and 21ms under TCP, DCTCP, and ATCP
respectively; medium flows' medians are 1.1s, 0.93s, and 0.81s. Both ATCP and DCTCP dominate TCP: ATCP allocates more bandwidth to small flows to reduce their transmission time, while DCTCP maintains shorter switch queues to reduce their queuing delay. Under DCTCP and ATCP, small flows' completion times are close to each other, but medium flows perform better under ATCP than under DCTCP. DCTCP reduces queuing time in the network, and this saving has an upper bound (queue length over bandwidth); ATCP reduces transmission time (data size over sending rate), so the larger the flow, the more it benefits.
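This size dependence can be made concrete with a back-of-the-envelope comparison; the numbers below (link rate, queue length, weight ratio) are illustrative assumptions, not simulation data, and the saving models are simplifications of the argument above.

```python
# Back-of-the-envelope comparison of the two kinds of savings.
# DCTCP's saving is modeled as the drained queue, roughly Q/C, independent of
# flow size. ATCP's saving is modeled against an equal share C/2: a weight
# ratio R gives rate C*R/(R+1), so the saved time is (S/C)*(1 - 1/R).
C = 100e6 / 8          # 100 Mbps link in bytes/s (illustrative)
Q = 100 * 1500         # ~100 full-size packets of queue (illustrative)
R = 3                  # W_H : W_L = 3 : 1

def dctcp_saving(size):
    return Q / C                       # bounded, size-independent

def atcp_saving(size):
    return (size / C) * (1 - 1.0 / R)  # grows with flow size

for size in (10e3, 100e3, 1e6, 8e6):   # 10KB ... 8MB
    print(size, dctcp_saving(size), atcp_saving(size))
```

Under this model, ATCP overtakes DCTCP once flows are large enough, matching the medium-flow results above.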

Referring to the {\bf deadline} discussion in~\cite{D3}, we set a 30ms deadline for flows smaller than 100KB. In our simulation, the percentage of flows meeting their deadlines is 23\%, 62\%, and 61\% under TCP, DCTCP, and ATCP respectively. ATCP improves substantially on TCP and is comparable with DCTCP. Our simulation uses rather dense flows compared with $D^3$~\cite{D3}, yet ATCP satisfies 38\% more flows than TCP, a larger gain than that reported for $D^3$. We therefore believe ATCP is comparable with $D^3$ in terms of meeting service deadlines.

Finally, we look into an application's performance. We take {\bf MapReduce} as an example. According to \cite{mantri}, the straggler in the shuffle phase influences the application's whole performance. With the same topology and background flow setting, we deploy TCP, ATCP and DCTCP for a MapReduce simulation. We collect the completion times of all shuffle flows. 
\begin{figure}
\centering
\includegraphics[width=0.25\textwidth]{fig/mr_cdf.pdf}
\caption{MapReduce Shuffle Flows' Completion Time CDF}
\vspace{-4mm}
\label{mr_cdf}
\end{figure}
Figure~\ref{mr_cdf} displays the completion time CDF of a Hadoop job's shuffle flows under the different protocols. The ATCP curve is the leftmost, so ATCP is the most efficient protocol for shuffle flows. Across several MapReduce runs, both ATCP and DCTCP outperform TCP in average completion time: ATCP reduces it by 61\% and DCTCP by 59\%. ATCP is even better than DCTCP because our MapReduce simulation sorts a large dataset with a data block size of 8MB; at this size, ATCP flows benefit more than DCTCP flows.

The last flow to complete determines the application's completion time. ATCP reduces the data shuffle time by 33\% and the total job completion time by 10.5\%; DCTCP reduces them by 25\% and 7\%. ATCP's small-flow-preferred mechanism thus improves the distributed application's end-to-end performance. This MapReduce simulation also implies that the MapReduce application can be configured (e.g., via its block size) to suit the new transport-layer protocol.
