\section{Introduction}
Data centers provide the infrastructure for hosting a broad range of
applications, from web search, email, and advertising to data mining of
user behavior and system-log analysis\cite{facebook, mapreduce, amazon, dryad}.
However, different applications come with different delay-sensitivity
requirements. While some background jobs, such as backups, need not
complete in a timely fashion, online services usually impose stringent
response-time goals in their service level agreements (SLAs)\cite{D3}.
Many studies have shown that network transfer plays a key role in
determining the completion times of these jobs\cite{orchestra}. For
example, the data shuffle in the reduce phase of MapReduce jobs is a
well-known bottleneck for the whole job. Because of high operational
cost, data center resources are shared and multiplexed among these
different jobs. Thus, how network resources are shared has a crucial
impact on job performance and on the ability to meet various latency
requirements.

Existing proposals for sharing the network fall into three categories,
but each has its own Achilles' heel. The first is to simply partition
the cluster into two parts and run delay-sensitive jobs on one
partition with dedicated network resources. However, this static
partitioning makes fine-grained resource sharing across clusters
impossible and is thus inefficient\cite{mesos}. Another approach is to
enforce network allocation by adding complex components to the
end-system software and hypervisors. One example is
Seawall\cite{seawall}, which adds a bandwidth allocator between TCP/IP
and the NIC. This solution can indeed divide network capacity
according to the desired policy, but it comes at the expense of
intricate modifications to the end-system architecture (e.g., flow
rate policing mechanisms and schemes to ensure distributed convergence
to pre-assigned weights). More importantly, it also entails key
changes to the service model exposed to tenants, and is therefore not
always feasible. Yet another way to share the network is to schedule
network traffic explicitly. An example is Orchestra\cite{orchestra},
which coordinates different data transfers through a global
controller. Again, because jobs must express their needs explicitly to
the central controller, this method requires changes to the
traditional service model. Depending on the transfer scheduler used,
with the default being simple FIFO, this approach may also result in
poor utilization of the data center as a whole, like some of the
aforementioned naive solutions.

The position we take in this paper is that an ideal network sharing
scheme, one that is usable and that helps applications meet their
delay requirements, should: (1) discriminate among network flows
according to their jobs' timing requirements; (2) share cluster
resources efficiently; (3) make minimal changes to the end-system
software, and keep those changes simple to implement so that it is
easy to reason about overall system behavior and performance; and (4)
not modify tenant applications, in particular, the current service
model the cloud exposes.

In this paper, we make two major contributions. First, we conduct a
first-of-its-kind measurement study of the relationship between flow
sizes and timing requirements using real data center traces. These
measurements inform key aspects of the eventual design of our system,
but they are also interesting in their own right, as they can
influence other aspects of data center design that we do not focus on
(e.g., traffic engineering). Our analysis shows that time-sensitive
jobs usually produce flows smaller than 10MB, while the flows of other
jobs are generally larger than 10MB. Moreover, small flows often run
concurrently with, and share links with, those large flows.

These observations motivate the design of {\em Adaptive Transmission
Control Protocol} (ATCP), a simple approach to network sharing that
meets the above four requirements and forms the second contribution of
this paper. In this protocol, we solve three problems: how to
precisely control flows' rates when they contend, how much bandwidth
to allocate to the various flows, and how to keep the allocation
flow-agnostic. The basic idea is to modify the congestion control
behavior of TCP and perform {\em adaptive weighted fair sharing among
flows}. By default, TCP allocates bandwidth equally among all flows
and does not take job latency requirements into consideration. To
distinguish flows with different timing targets, we count how many
bytes a flow has already delivered and dynamically tune the flow's
weight so that it decreases as the flow transfers more data. In
effect, we prioritize small flows' bandwidth allocation and get them
to complete faster than the larger flows they are contending with. Our
key insight is that only the additive-increase behavior of TCP
congestion control needs to be modified to realize this form of
weighted sharing. Our method therefore makes as small a change as
possible to the cloud infrastructure.
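To make the nature of this change concrete, the sketch below models it
in Python. It is illustrative only: the names \texttt{cwnd},
\texttt{mss}, and \texttt{weight} are ours, and the precise update
rule is developed later in the paper. Standard TCP congestion
avoidance grows the window by roughly one MSS per RTT, i.e.\ by
$\mathit{mss}^2/\mathit{cwnd}$ bytes per ACK; scaling that increment
by a per-flow weight lets contending flows converge to bandwidth
shares proportional to their weights.

```python
def additive_increase(cwnd, mss, weight):
    """Weighted additive increase (illustrative sketch, not ATCP's
    exact rule).

    Standard TCP congestion avoidance grows cwnd by about one MSS
    per RTT, i.e. by mss * mss / cwnd bytes per ACK (weight = 1.0
    recovers this behavior).  Multiplying the increment by the
    flow's current weight makes flows with higher weights ramp up
    faster, so contending flows settle into throughput shares
    proportional to their weights.
    """
    return cwnd + weight * mss * mss / cwnd
```

With a 1460-byte MSS and a 10-segment window, an unweighted flow gains
146 bytes of window per ACK, while a flow with weight 2 gains twice
that; nothing outside the additive-increase step needs to change.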

We introduce a weight-size function that derives a flow's weight from
the amount of data it has already sent. The parameters of the
weight-size function are the weight upper bound $W_H$, the lower bound
$W_L$, and a threshold $T$. We set $T$ by observing the empirical flow
size distribution. We analyze different combinations of $W_H$ and
$W_L$, and choose the one that produces the smallest median completion
time over real data center traces.
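As a hypothetical illustration of such a function: the simplest shape
consistent with the description above is a step that keeps a flow at
the upper bound $W_H$ until it has sent $T$ bytes and then drops it to
the lower bound $W_L$. The sketch below is our own simplification, not
necessarily the exact form ATCP uses, which may decay gradually
between the two bounds.

```python
def flow_weight(bytes_sent, w_high, w_low, threshold):
    """Hypothetical step-shaped weight-size function.

    A flow starts at the upper-bound weight w_high (W_H); once it
    has transferred more than `threshold` (T) bytes it is treated
    as a large flow and receives the lower-bound weight w_low (W_L).
    """
    return w_high if bytes_sent <= threshold else w_low
```

For example, with $W_H = 4$, $W_L = 1$, and $T = 10$MB, a flow keeps
weight 4 for its first 10MB and weight 1 thereafter, so short flows
finish under the high weight while long flows quickly fall back to the
low one.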

Based on extensive simulations using NS2, we find that ATCP benefits
small flows significantly; delay-sensitive applications thus see the
greatest improvements. We conduct trace-driven simulations on a chain
topology and find that, compared with TCP, more than 90\% of flows
benefit from ATCP with reduced completion times. Small flows'
($<$100KB) average completion time is reduced by 10\%; medium flows'
(between 100KB and 10MB) completion times are reduced by 30\%-40\% on
average; and large flows' ($>$10MB) completion times are largely
unaffected. We also simulate a distributed application flow trace on a
fat-tree topology and show that the benefit ATCP brings to small flows
is comparable to that of DCTCP. Finally, we simulate MapReduce jobs
and show that improving small flows' completion times improves
whole-job performance.

This paper is organized as follows. Section~\ref{example} provides
motivating examples for the problem. In Section~\ref{flow}, we analyze
flow characteristics and flow relationships. In Section~\ref{req}, we
lay out the requirements for an adaptive transmission control protocol
in the cloud. In Section~\ref{atcp}, we build the theoretical basis of
flow rate control and scheduling, and design ATCP. In
Section~\ref{impl}, we describe our implementation. In
Section~\ref{eval}, we evaluate ATCP and compare it with TCP and
DCTCP~\cite{dctcp} in various scenarios. We discuss related work in
Section~\ref{sec:related}. Finally, we conclude in Section~\ref{con}.

