\section{Adaptive TCP}
\label{atcp}
In this section, we prove theoretically that assigning a weight to TCP's
additive increase allows precise control of a flow's rate, and that preferring
small flows over large flows benefits the small flows without affecting the large ones.
We then design Adaptive TCP (ATCP), which makes size-adaptive bandwidth allocation automatic.
\subsection{Flow Rate Control}
\label{sec:rate_control}
In networks, each intermediate router maintains virtual output queues at each input port 
and an output queue at each output port; these queues share the switch memory~\cite{switch}. 
Arriving packets are served in first-in-first-out (FIFO) order in switches, 
and they are dropped when the switch buffer overflows, which accounts for packet loss~\cite{red, etcp}. 
A TCP flow begins with the slow start phase, during which its congestion window increases 
by 1 segment after each ACK. Duplicate ACKs (usually 3) indicate packet 
loss in the network and cause the congestion 
window to be halved. After the first loss, TCP enters the congestion 
avoidance phase, where the congestion window is increased by 1 segment 
every round trip time (RTT) and is again halved upon the next packet loss 
(3 duplicate ACKs). This scheme is 
called additive increase multiplicative decrease (AIMD). 
The size of the data that a sender can send
is at most equal to the congestion window size; therefore the sending
rate is the congestion window size divided by RTT. 
RTT does not vary much over a flow's duration, so
the congestion window determines the sending rate. In TCP all flows follow
the same AIMD scheme; thus they share the network equally and achieve max-min fairness.

In AIMD, we assign a {\bf weight} to each flow. A flow with weight $a$ increases its
congestion window by $a$ segments each RTT in the congestion avoidance phase. The contending
flows' sending rates can then be precisely controlled, as stated in the following theorem.

{\bf Theorem 1:} In the congestion avoidance phase, if the ratio of two contending flows'
additive increase rates is $a:b$ and their multiplicative decrease is the same (halved
upon congestion), then the two flows' sending rate ratio converges to $a:b$.

{\bf Assumptions:} (1) There are only 2 flows competing
with each other.  (2) The flow durations are long enough for them to
converge to the final allocation ratio. (3) When the network is
congested, the intermediate router starts to drop packets belonging to
both flows.  (4) Both flows have the same total delay and RTT on
their paths.

{\bf Symbols and Terms:} (1) $T$ is the total delay on the links of the path and 
$T'$ is the RTT, (2) $B$ is the total size of all switch buffers in the network,
(3) On-path links have the same bandwidth capacity $C$, (4) $a$ is flow 1's
weight, and $b$ is flow 2's weight, (5) flow 1 and flow 2 eventually converge to
sending rates $R_1$ and $R_2$, respectively.

{\bf Proof:} The bandwidth-delay product is $C\times T$ and so the
total byte capacity in the network is $C\times T+B$, which is denoted
by $$S=C\times T+B.$$  
As two flows increase their own congestion window, the
network experiences congestion.  Assume the two flows' congestion
windows are $w_1$ and $w'_1$ when the network is congested for the
1st time, $w_2$ and $w'_2$ for the 2nd time, \ldots, $w_i$ and $w'_i$ for
the $i$-th time. Then we have
$$w_j+w'_j=S,\forall j.$$ 
When the network is congested, according to the multiplicative decrease  
the two flows' congestion windows are halved into $0.5w_j$ and $0.5w'_j$ respectively.
In the following additive increase phase, their windows increase in the ratio $a:b$ until 
the remaining $0.5S$ of network capacity is filled. Then the next congestion occurs, at which point 
$$w_{j+1}=0.5\times w_{j}+0.5\times S\times \frac{a}{a+b}$$ 
$$w'_{j+1}=0.5\times w'_j+0.5\times S\times \frac{b}{a+b}.$$ 
From this recursion,  
$$
\begin{array}{rcl}
w_{j+1}-\frac{a}{a+b}S & = & 0.5\left(w_j-\frac{a}{a+b}S\right)\\
 & = & \cdots=0.5^j\left(w_1-\frac{a}{a+b}S\right).
\end{array}
$$
Letting $j\rightarrow\infty$, we have
$$w=\lim_{j\rightarrow\infty}w_j=S\times \frac{a}{a+b}$$ 
$$w'=\lim_{j\rightarrow\infty}w'_j=S\times \frac{b}{a+b}.$$ 
So the sending rates of flow 1 and flow 2 are 
$$R_1=\frac{S}{T'}\times\frac{a}{a+b}, R_2=\frac{S}{T'}\times \frac{b}{a+b}.$$
and 
$$R_1:R_2=a:b.$$
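The recursion in the proof converges geometrically with factor $0.5$, which can be checked numerically. The following minimal Python sketch uses illustrative values for the capacity $S$, the weights, and the initial window split (none taken from the simulation below):

```python
# Iterate the window recursion from the proof,
#   w_{j+1} = 0.5*w_j + 0.5*S*a/(a+b),  w'_{j+1} = 0.5*w'_j + 0.5*S*b/(a+b),
# and confirm the windows converge to S*a/(a+b) and S*b/(a+b).

def converge(S, a, b, w0, rounds=60):
    """Apply the halve-then-refill recursion; windows sum to S at congestion."""
    w, wp = w0, S - w0
    for _ in range(rounds):
        w = 0.5 * w + 0.5 * S * a / (a + b)
        wp = 0.5 * wp + 0.5 * S * b / (a + b)
    return w, wp

S = 1000.0                        # illustrative S = C*T + B, in segments
w, wp = converge(S, a=2, b=1, w0=100.0)
print(round(w, 3), round(wp, 3))  # -> 666.667 333.333, i.e. S*a/(a+b) and S*b/(a+b)
print(round(w / wp, 3))           # -> 2.0, the weight ratio a:b
```

Any starting split $w_0$ converges to the same fixed point, since the distance to the fixed point halves at every congestion event.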
\begin{figure}
\centering
\small
\begin{tabular}{cc}
\includegraphics[width=0.22\textwidth]{fig/tcp_cwnd_simu.pdf}&
\includegraphics[width=0.22\textwidth]{fig/atcp_cwnd_simu.pdf}\\
(a) TCP &  (b) Weighted TCP
\end{tabular}
\caption{Congestion Window Changes in the 2-flow Scenario}
\label{fig:cwnd}
\end{figure}
{\bf Simulation:} We simulate the 2-flow scenario. The network topology is a
3-node chain; the link capacity is 100\,Mbps and the per-link delay is 50\,$\mu$s.
The two flows have the same source and destination. We 
measure and plot the congestion window as a function of time in Figure~\ref{fig:cwnd}.

In both TCP and weighted TCP, the 2 flows converge to a congestion avoidance state quickly.
In TCP each of the 2 flows increases its congestion window by one segment size every RTT,
and the window is halved in case of packet loss. In rate control mode, we set the
weights of the 2 flows to 1 and 2, respectively. In the final converged state, flow 2's
congestion window increases twice as fast as flow 1's. When the network is congested,
both windows are halved. In the converged state, the ratio of the two flows' window sizes
equals the ratio of their weights at any time.

Assumption (2) holds for most flows. When a new flow joins the network, it contends with existing flows, which probably occupy most of the link capacity. Assume the first packet drop happens at a rate of hundreds of Mbps and the RTT is hundreds of microseconds; then the congestion window is on the order of tens of KB, and the data already sent is also of this order. Assumption (4) does not always hold, which causes occasional deviations from the theoretical result. But when the network is congested and both flows are sending at least hundreds of packets per second, it is highly probable that packets from both flows are dropped. The simulation (Figure~\ref{fig:cwnd}) also shows that in most congestion events, both flows' congestion windows are halved. 
\subsection{Small Flow First Scheduling}
With weighted TCP, we can control flows' sending rates precisely. When multiple flows contend for bandwidth, bandwidth allocation is effectively a job scheduling problem. We propose that small flow first scheduling reduces the average completion time.

{\bf Theorem 2:} If a large flow with duration $[s_1, t_1]$ and a small flow with duration $[s_2, t_2]$ share the same network bandwidth, and if $s_1<s_2<t_2<t_1$, then allocating more bandwidth to the small flow reduces the average completion time of the two flows.

{\bf Symbols and Terms:} (1) the large flow has size $S_1$ and the small flow has size $S_2$, (2) the shared link bandwidth on the path is $C$, (3) in TCP, the small flow sends at rate $R_2$; when more bandwidth is assigned to the small flow, it sends at rate $R'_2$, (4) the durations of the large flow and the small flow are $[s'_1, t'_1]$ and $[s'_2, t'_2]$ when more bandwidth is assigned to the small flow. 

{\bf Proof:} Since the trace is the same in both cases, we have $$s'_1=s_1, s'_2=s_2.$$ 
Since more bandwidth is assigned to the small flow in the 2nd case, we have
$$R'_2>R_2.$$ 
Then for the small flow $$t'_2-s'_2=\frac{S_2}{R'_2}<\frac{S_2}{R_2}=t_2-s_2,$$
so its completion time decreases.
Considering the time when the last byte of both flows is sent, and noting that the link is fully utilized throughout the large flow's duration, we have
$$t'_1-s'_1=\frac{S_1+S_2}{C}=t_1-s_1.$$
So the large flow's completion time is unchanged, and the average completion time decreases.
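The proof's fluid-model argument can be illustrated in Python. The sizes, start times, and the `small_share` parameter (the fraction of capacity the small flow receives while both flows are active) are all illustrative assumptions:

```python
# Fluid-model check of Theorem 2: the large flow's completion time is fixed
# by work conservation (the link stays fully utilized until t1), while the
# small flow finishes earlier when given a larger bandwidth share.

def completion_times(S1, S2, C, s1, s2, small_share):
    """small_share: fraction of C the small flow gets while both are active."""
    t2 = s2 + S2 / (C * small_share)   # small flow completion time
    t1 = s1 + (S1 + S2) / C            # all bytes drain at link rate C
    assert s1 < s2 < t2 < t1           # Theorem 2's overlap condition
    return t1, t2

# Illustrative values: 100 and 10 units of data on a capacity-100 link.
t1_fair, t2_fair = completion_times(100, 10, 100.0, 0.0, 0.1, 0.5)  # TCP: equal split
t1_w, t2_w = completion_times(100, 10, 100.0, 0.0, 0.1, 2 / 3)      # weights 2:1
print(t1_fair == t1_w)   # -> True: large flow unaffected
print(t2_w < t2_fair)    # -> True: small flow finishes sooner
```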

By the measurement in Section~\ref{flow}, we observe 
that most small flows start and finish within the duration of 
a large flow. Only a very small number of small flows partially 
overlap with a large flow. So we conclude that most small 
flows' completion times benefit. If a large flow 
partially overlaps with a small flow, the overlapping period is at most the small flow's duration, 
which is 2 orders of magnitude smaller than the large flow's 
completion time; thus the influence is negligible.
\begin{figure}
\centering
\small
\begin{tabular}{cc}
\includegraphics[width=0.22\textwidth]{fig/tcp_rate.pdf} &
\includegraphics[width=0.22\textwidth]{fig/atcp_rate.pdf}\\
 (a) TCP & (b) Weighted TCP
\end{tabular}
\caption{Throughput in the 2-Flow Scenario}
\label{fig:2flow:simu}
\end{figure}

{\bf Simulation:} We again simulate the 2-flow scenario, with two
flows of size 100MB and 10MB on the same path. In weighted TCP we set the large flow's
weight to 1 and the small flow's weight to 2, and we also simulate the same flows with TCP. The throughput
of both flows is shown in Figure~\ref{fig:2flow:simu}.

In TCP, the large flow starts first, followed by the small flow. Due
to TCP's fairness, both sending rates eventually converge to half of the
link capacity. In weighted TCP, the expected bandwidth allocation ratio is 2:1, which is exactly what
Figure~\ref{fig:2flow:simu}(b) shows; the completion time also decreases
from 1.8s to 1.2s.
\subsection{ATCP Design}
\label{design}
The design of ATCP is based on the TCP rate control and flow scheduling
of the previous sections.
In ATCP, we first add a sent-data counter to the flow socket
structure, which counts bytes as the flow sends
data. Then we introduce a weight-size function, which takes the
sent-data size as input and outputs a weight. 
Finally, we change TCP's additive increase by making the increase proportional to
the weight. 
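The three changes above can be sketched in Python. `ATCPSocket` and `weight_of` are hypothetical names for illustration; the per-ACK increment follows the standard congestion-avoidance rule of adding $MSS^2/cwnd$ bytes per ACK (about one segment per RTT), scaled here by the weight:

```python
# Sketch of the ATCP sender-side changes: a sent-data counter, a pluggable
# weight-size function, and a weight-scaled additive increase.

class ATCPSocket:
    MSS = 1448                        # segment size in bytes (typical Ethernet MSS)

    def __init__(self, weight_of):
        self.cwnd = 10 * self.MSS     # congestion window, in bytes
        self.bytes_sent = 0           # the sent-data counter added to the socket
        self.weight_of = weight_of    # the weight-size function W(s)

    def on_send(self, nbytes):
        self.bytes_sent += nbytes     # count bytes as the flow sends data

    def on_ack(self):
        # Congestion avoidance: standard TCP adds MSS^2/cwnd bytes per ACK,
        # which sums to about one segment per RTT; ATCP scales this by the
        # weight, so a flow with weight w adds about w segments per RTT.
        w = self.weight_of(self.bytes_sent)
        self.cwnd += w * self.MSS * self.MSS / self.cwnd

    def on_loss(self):
        # Multiplicative decrease is unchanged from TCP.
        self.cwnd = max(self.cwnd / 2, 2 * self.MSS)
```

Since roughly $cwnd/MSS$ ACKs arrive per RTT, the increments sum to about $w$ segments per RTT, matching the weighted additive increase of Section~\ref{sec:rate_control}.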

The weight is large at the beginning and decreases as the sent-data size
grows. So a small flow's weight is relatively high throughout its duration; a large
flow sends only its first few bytes with a high weight, and the remaining 
bytes are sent with a low weight. When a small flow
competes with a large flow, the small flow is thus quite likely to have
the higher weight and therefore gets
more bandwidth. 

The ATCP design satisfies the requirements in Section~\ref{req}. ATCP  
uses the AIMD scheme, so the network remains fully utilized. 
By assigning small flows a higher weight, ATCP guarantees that
small flows get more bandwidth and complete more quickly. 
By counting sent bytes, ATCP needs no flow size information 
from the application layer; thus it is flow agnostic. All the 
changes are in the protocol stack, so ATCP avoids
hardware device changes.

The {\bf weight-size function} is the key to adaptive rate control. 
We start from the requirements and design the weight-size function as follows:
\begin{itemize}
\item All flows achieve high network utilization. So the weight-size 
function is always positive.
\item The more a flow has sent, the less competitive it is. So the weight-size
function is decreasing, though not necessarily strictly.
\item Small flows are more, or at least no less, competitive than large flows. 
So during a small flow's lifetime, its weight is no smaller than a large flow's: 
$W(s)=\max(w)$ for $s<T_1$, where $T_1$ is the small-flow threshold. $W(s)$ is constant 
for $s<T_1$ because if it were strictly monotonically decreasing, a late-starting large 
flow could have a larger weight than an early-starting small flow during their overlapping 
period, which would degrade the small flow's throughput. 
\item ATCP should have negligible influence on large flows. Several contending 
large flows should behave the same as with TCP, so their weights should 
be equal: $W(s)=c$ for $s>T_2$, where $T_2$ is a threshold indicating that the 
flow has sent a sufficient volume of traffic. $W(s)$ is constant for $s>T_2$ 
because if it were strictly monotonically decreasing, late-starting large flows would dominate
early-starting ones.
\end{itemize}
To satisfy the principles above, the weight-size function should have the following format
$$
W=\left\{
        \begin{array}{ll}
        W_H & \mbox{if $s\leq T_1$ } \\
        W'(s) & \mbox{if $T_1<s\leq T_2$ }\\
	W_L & \mbox{if $s>T_2$ }
        \end{array}
        \right.
,$$
where $W'(s)$ is a monotonically decreasing function from $W_H$ to $W_L$ over the range $[T_1, T_2]$. There are multiple choices for $W'(s)$, such as an exponential or linear decrease. But considering the orders of magnitude of small and large flows, we further simplify the weight-size function to a two-segment constant function.

As discussed in Section~\ref{flow}, 80\% of bytes are sent by flows larger than 10MB, 
so we set the small-flow threshold $T_1$ to 10MB. We want large flows to behave like TCP:
only when $s>T_2$ do large flows have a fixed weight and max-min fairness among themselves. So $T_2$
should not be too large, and the interval $[T_1, T_2]$ should be only a small portion of the whole
flow size; we assume 2 orders of magnitude smaller. Since a typical large flow is hundreds of megabytes, the interval length is in [1MB, 10MB], so we simplify by setting $T_2$ equal to $T_1$. 
In our real-trace simulation, even when we set $T_1\neq T_2$ and try different forms of $W'(s)$, only a 
very small portion of flows overlap with large flows in $[T_1, T_2]$, and the form of 
$W'(s)$ makes little difference. So the weight-size function is finally set to: 
$$
W=\left\{
        \begin{array}{ll}
        W_H & \mbox{if $s\leq T$ } \\
        W_L & \mbox{ otherwise }
        \end{array}
        \right.
.$$

With this weight-size function, all flows smaller than $T$ send their data with the highest weight $W_H$. When a small flow competes with a large flow: if the large flow is sending with weight $W_H$, the small flow's performance in ATCP is no worse than in TCP; if the large flow is sending with weight $W_L$, the small flow is more aggressive. Comparing the magnitude of $T$ with the data size of a large flow, the latter case is the most common in the network, so most small flows benefit.
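The final two-segment function can be written down directly. In this sketch the threshold follows the 10MB choice above, while the $W_H$ and $W_L$ values are illustrative assumptions, not the configuration used in our simulations:

```python
# Two-segment weight-size function: full weight up to the small-flow
# threshold T, low weight afterwards.

T = 10 * 1024 * 1024     # small-flow threshold, 10MB as chosen above
W_H, W_L = 8, 1          # illustrative high/low weights

def weight(bytes_sent):
    """W(s) = W_H if s <= T, else W_L."""
    return W_H if bytes_sent <= T else W_L

print(weight(1 * 1024 * 1024))     # -> 8: a 1MB flow keeps the high weight
print(weight(100 * 1024 * 1024))   # -> 1: a large flow drops to the low weight
```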
\subsection{Discussions}
ATCP does not lead to large flow {\bf starvation}.  
ATCP allocates more bandwidth to small flows than to large flows;
thus small flows complete more quickly and leave more time to
large flows as compensation. One may argue that if small flows arrive
one after another, so that a large flow 
contends with small flows throughout its lifetime, it always gets lower
bandwidth in ATCP than in TCP. We argue that this comparison is unfair: because
small flows complete more quickly in ATCP, if they arrive continuously, ATCP 
actually serves small flows more frequently.
To make the comparison fair, we 
fix the flow trace with the same flow arrival timestamps and sizes. 
If the network sends data with best effort and links are always close to fully utilized,
then in a fixed period the network sends a certain amount of bytes.
Within this amount, the total size of all small flows is fixed,  
and all the remaining bandwidth is used by large flows, which follow
max-min fairness among themselves in both TCP and ATCP. So the large flows' completion time is not influenced. 
Simulation results also verify that large flows are not influenced. 

There are many TCP variants, such as Tahoe, Reno, New Reno, and Cubic~\cite{cubic}, 
and the IETF TCPM working group~\cite{tcpm} also develops various TCP extensions 
for different scenarios. However, all these variants follow AIMD, so our mechanism can be used
to make them adaptive in the cloud. 

ATCP does not achieve application-level {\bf fairness}. If an application
opens multiple connections to speed up data transfer, it gets more bandwidth
than an application with fewer connections. However, TCP does not solve this
either. ATCP can work with other mechanisms that control fairness, 
such as fair queuing or QoS; in this case, the flows in the same queue
can still reduce their average completion time with ATCP. 

The weight-size function can take other forms. For example, it can be refreshed periodically
to adapt to periodic bursty flows. 
Recently there have been flow deadline-aware designs such as $D^3$~\cite{D3} and D2TCP~\cite{d2tcp}.
We can achieve similar behavior by changing the weight-size function into a 
weight-time function: for example, let the weight 
increase with time at first, and after the flow misses its deadline, 
decrease the weight to a small value. 

A flow's data transfer time includes propagation delay, queuing delay, and transmission time.
Propagation delay equals the sum of the links' lengths divided by the speed of light in the links;
queuing delay is the sum over switches of the queue length divided by the link bandwidth; and
transmission time equals the flow size divided by the sending rate.
ATCP decreases the transmission time by assigning a larger sending rate. 
In the data center, if the data size is very small, queuing and propagation 
delay dominate the total transfer time and the improvement is limited; 
if the transmission time dominates, the improvement 
is more significant. According to our simulation, this is the case when 
the flow size is roughly 100KB--10MB. 
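The decomposition above admits a quick back-of-the-envelope check. The following Python sketch uses hypothetical data-center numbers (path length, queue depth, and rates are assumptions, not measured values):

```python
# Transfer time = propagation + queuing + transmission. For tiny flows the
# two delay terms dominate; for larger flows the transmission term does,
# which is where a higher sending rate helps most.

LIGHT_SPEED_FIBER = 2e8   # m/s, approximate speed of light in fiber

def transfer_time(flow_bytes, rate_bps, path_m, queue_bytes, link_bps):
    propagation = path_m / LIGHT_SPEED_FIBER
    queuing = queue_bytes * 8 / link_bps      # single bottleneck queue, simplified
    transmission = flow_bytes * 8 / rate_bps
    return propagation, queuing, transmission

# 1KB flow on a 200m path: the delay terms dominate, a higher rate helps little.
print(transfer_time(1_000, 5e9, 200, 50_000, 10e9))
# 10MB flow: transmission dominates, so a larger sending rate cuts total time sharply.
print(transfer_time(10_000_000, 5e9, 200, 50_000, 10e9))
```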
