\section{Motivating Example}
\label{example}
\begin{figure}
\centering
\small
\begin{tabular}{cc}
\includegraphics[width=0.25\textwidth]{fig/web_app.pdf}& \includegraphics[width=0.25\textwidth]{fig/mapreduce.pdf}\\
(a) Web Service & (b) MapReduce
\end{tabular}
\caption{Various Flows in Data Center Applications}
\label{fig:mot_app}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.25\textwidth]{fig/mot_web.pdf}
\caption{Web Service Flow Completion Time}
\label{fig:mot_web}
\end{figure}
To motivate our changes to TCP, we describe below two kinds of
applications, web services and MapReduce distributed computation,
both typical of current enterprise and university data
centers. We measure their performance and show how our small changes
improve them.

{\bf Web Services:} A typical three-tier web service is shown in
Figure~\ref{fig:mot_app}(a). The query and response flows are usually
small and delay-sensitive, while the backup flows are usually quite
large. We collected packet traces continuously for 12 hours over
multiple days in a campus data center (serving the students and staff
of a large US university). Inside the data center, a variety of
services run simultaneously, ranging from archival and
distributed file systems to e-mail, web services (administrative sites
and web portals for students and faculty), and even multicast video
streams. Web service traffic and distributed file system traffic
dominate the mix, at 60\% and 40\%, respectively.

We modify TCP so that when a small flow ($<$10MB)
competes with a large ($>$10MB) flow, it receives 3 times more bandwidth than the large
flow (we use the mechanism in ATCP, described in Section~\ref{atcp}).
%We find that 80\% of the flows are smaller than 10KB in size; most of the bytes are in the top 10\% of large flows, which are around 10MB. 80\% of the flows are less than 11 seconds long. 80\% of the flows' interarrival times were between 4ms and 40ms. Based on this trace, we define small flows as flows smaller than 10MB and large flows as flows larger than 10MB.  In TCP's congestion avoidance state, the congestion window is increased by 1 segment per round trip time (RTT). Consider what may happen when we let small flows increase 1.5 segments per RTT and large flows increase by 0.5 segment per RTT. 
We use NS2 to replay the above trace in simulation and compare standard TCP with the modified version in terms of
flow completion time (Figure~\ref{fig:mot_web}). Compared with
TCP, the modification reduces the median completion time by more than half,
from 200ms to 80ms, and more than 90\% of web flows benefit from this
change. Only some large flows are affected, and their completion
time increases negligibly (by less than 1\%).
Looking specifically at the delay-sensitive web service traffic,
more than 90\% of the web flows are smaller than 10MB, and they benefit from this modification.
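The intuition behind biasing the additive increase can be illustrated with a toy fluid model (a simplification we introduce here for illustration; the window and flow-size parameters are invented, not taken from the trace):

```python
# Toy fluid model of AIMD sharing one bottleneck link.
# Illustrative only: windows are in segments, time is in RTTs, and the
# 1.5/0.5 increments mirror the biased additive increase in the text.

def complete_small_flow(inc_small, inc_large, capacity=100,
                        small_size=200):
    """RTTs until the small flow has sent `small_size` segments
    while competing with one long-lived large flow."""
    w_small, w_large = 1.0, 1.0   # congestion windows (segments)
    sent, rtts = 0.0, 0
    while sent < small_size:
        sent += w_small
        rtts += 1
        # Additive increase each RTT.
        w_small += inc_small
        w_large += inc_large
        # Multiplicative decrease when the link is overloaded.
        if w_small + w_large > capacity:
            w_small /= 2
            w_large /= 2
    return rtts

standard = complete_small_flow(1.0, 1.0)   # plain TCP: equal increments
biased   = complete_small_flow(1.5, 0.5)   # small flow favored 3:1
print(standard, biased)                    # biased finishes in fewer RTTs
```

Because the large flow's completion time is dominated by its total size rather than its ramp-up, shifting increment budget toward the small flow shortens the small flow's completion noticeably while barely affecting the large one.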

{\bf MapReduce:} A typical MapReduce workload (Figure~\ref{fig:mot_app}(b))
first distributes raw data blocks to several mappers, each of which
performs some computation over its data. Subsequently, there
is a shuffle phase in which results from the mappers are sent to the reducers
in a many-to-many pattern. After each reducer collects all of its corresponding
results, a final collector may fetch the output. Each phase is composed
of parallel network transfers; the shuffle phase in particular
takes 33\% of total job completion time in typical computation jobs in the
cloud~\cite{orchestra}. A key issue
in MapReduce is that if one of the transfers in the shuffle phase is
delayed (a ``straggler''), the entire job is held up. MapReduce flows
are usually mixed with other background data flows. If they can
grab bandwidth from such (delay-insensitive) background flows, their
transfer time is reduced, thus improving MapReduce job
performance.
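The straggler effect can be sketched minimally (the per-flow transfer times below are invented for illustration, not measured):

```python
# Illustrative shuffle-phase transfer times (seconds) for one job;
# the last flow is a straggler delayed by competing background traffic.
shuffle_flows = [1.0, 1.1, 0.9, 1.0, 3.0]

# The shuffle finishes only when its slowest transfer does,
# so a single delayed flow gates the whole job.
baseline = max(shuffle_flows)

# If the straggler grabs bandwidth from delay-insensitive background
# flows and finishes 3x faster, the phase is bounded by the
# next-slowest transfer instead.
helped = shuffle_flows[:]
helped[-1] /= 3
improved = max(helped)
print(baseline, improved)   # the phase is no longer gated by the straggler
```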

These examples show that if small flows get some ``help'' when competing
with large flows at a bottleneck link, they complete more quickly
and improve overall job performance. For applications like web
services, most flows are small (less than 10MB) according to our
measurements, while for distributed computations like MapReduce, a
given job can be split into smaller components, and such fine-grained tasks
are easier to schedule and can benefit from our new transport
protocol. In this way our new transport protocol can improve most
delay-sensitive applications.
%Next, we analyze data center flow characteristics. We show that large and small flows do exist and compete simultaneously, lending weight to the applicability of our approach.


