\section{Evaluation}
\label{sec:eval}
We validate that APLAD can diagnose virtual network problems, measure
its overhead on the existing system, and evaluate its performance as a service.
APLAD introduces two sources of overhead: data collection and query execution.
Data collection is performed locally, so we only evaluate its effect on the local hypervisor and virtual
machines. Query execution is a distributed task, so we evaluate its global impact in terms of extra
traffic volume and its performance in terms of response time.
\subsection{Functional Validation}
%APLAD captures flow traces, which directly displays the network problems if there is any. If there is reachability issue (caused by failure or mis-configuration), APLAD can easily find the hops that causes packet loss. In this section, we focus on performance issues.
Virtual network problems manifest as reachability issues and performance
issues. APLAD can cope with both by analyzing application flow traces.
Reachability issues can be found easily by tracking packets hop by hop.
In this section, we focus on performance issues.
\subsubsection{Bottlenecked Middlebox Detection}
\label{sec:middlebox}
In virtual networks, middleboxes are usually used by a cloud provider to achieve 
better network utilization or security. In the cloud, middleboxes are also 
provided to the tenants as services~\cite{mbservice}. In these cases, the tenant 
does not have direct access to the middlebox, which makes its 
diagnosis difficult. In a virtual topology with multiple middleboxes, 
especially when the middleboxes form a chain, a large amount of traffic 
traversing the chain may turn one of the middleboxes into a bottleneck. The
bottlenecked middlebox needs to be scaled up. However, there is no general 
way to determine which middlebox is the bottleneck. 

One solution is to try to scale each middlebox and see whether application-level 
performance improves~\cite{stratos-tr}. But this solution 
needs the application's support and is not prompt enough. Another solution 
is to monitor VM resource usage (we assume a tenant runs a software middlebox in a VM), 
which is still not feasible due to middlebox heterogeneity~\cite{drfq}. 
Also, some resources, such as memory throughput, are hard to measure. Network 
administrators can also check middlebox logs to find problems. However, 
this requires substantial effort to become familiar with various middleboxes; 
moreover, the problem may be in the OS kernel.
\begin{figure}[h]
\centering
\small
\begin{tabular}{c}
\includegraphics[width=0.6\textwidth]{fig/chain_logical.pdf}\\
(a) Logical Chain Topology with Middlebox\\
\includegraphics[width=0.6\textwidth]{fig/chain_scale.pdf}\\
(b) Actual Topology after Scaling\\
\end{tabular}
\caption{Chain Topology with Middlebox Scaling}
\label{fig:chain_topo}
\end{figure}

\begin{figure}[h]
\centering
\small
\begin{tabular}{cccc}
\includegraphics[width=0.23\textwidth]{fig/throughput.pdf}&
\includegraphics[width=0.23\textwidth]{fig/A_RTT.pdf}&
\includegraphics[width=0.23\textwidth]{fig/B_RTT.pdf}&
\includegraphics[width=0.23\textwidth]{fig/C_RTT.pdf}\\
(a) 2 Flows' Throughput &
(b) RTT at Point A&
(c) RTT at Point B &
(d) RTT at Point C\\
\includegraphics[width=0.23\textwidth]{fig/RE_delay.pdf}&
\includegraphics[width=0.23\textwidth]{fig/IDS_delay.pdf}&
\includegraphics[width=0.23\textwidth]{fig/RE_loss.pdf}&
\includegraphics[width=0.23\textwidth]{fig/IDS_loss.pdf}\\
(e) RE Processing Time &
(f) IDS Processing Time &
(g) RE Packet Loss &
(h) IDS Packet Loss \\
\end{tabular}
\caption{Bottleneck Middlebox Locating}
\label{fig:bottleneck}
\end{figure}

Here we use APLAD to diagnose the bottleneck. We assume a flow from a client 
to the server traverses a middlebox chain with a Redundancy Elimination 
(RE)~\cite{smartRE} middlebox and an Intrusion Detection System (IDS)~\cite{snort}.  
When the traffic volume increases, one of the two middleboxes 
becomes the bottleneck and requires scaling up. 

At first, a client fetches data from the server at a rate of about 100Mbps. 
At the 10th second, a second client also connects to the server and 
starts to receive data. Client 1's throughput then drops to 
about 60Mbps, and client 2's throughput is also about 60Mbps (Figure~\ref{fig:bottleneck}(a)). 
To find the bottleneck of the chain topology, we use APLAD to deploy trace capture 
at points A, B and C in the topology, capturing all traffic with the server's IP 
address. We start the diagnostic application in Section~\ref{sec:analysis} and check 
the RTT at each point. Figure~\ref{fig:bottleneck}(b)(c)(d) shows that 
at points A and B the RTT increases significantly when the second flow joins, 
while at point C the RTT does not change much. We use $RTT_A-RTT_B$ as the 
processing time at the RE middlebox and $RTT_B-RTT_C$ as that of the IDS. 
When traffic increases, the processing time at  
the IDS increases by about 90\% (Figure~\ref{fig:bottleneck}(e)(f)). 
We deploy the packet loss diagnostic application to observe the 
packet loss at each hop. Figure~\ref{fig:bottleneck}(g)(h) indicates that 
when the second client joins, packet loss happens at the IDS and no packets are lost 
at the RE. These observations indicate that the IDS becomes the bottleneck of the whole chain. 
So the IDS should be scaled up as in Figure~\ref{fig:chain_topo}(b). 
Then we can see that the throughput of both flows increases to nearly 100Mbps, 
the delay at the IDS decreases back to 3ms, and there is no packet loss at the IDS. 
The RE middlebox has some packet loss, but it does not impact the application 
performance. The logical chain topology with middleboxes is thus successfully scaled.
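The RTT-differencing logic used in this diagnosis can be sketched as follows. This is a minimal illustration of the technique, not APLAD's implementation; the data layout (per-point lists of RTT samples in ms) and the sample values in the test are our own assumptions.

```python
def processing_time(rtt_upstream, rtt_downstream):
    """Per-middlebox processing time: RTT measured upstream of the box
    minus RTT measured downstream of it (e.g. RTT_A - RTT_B for the RE)."""
    return [u - d for u, d in zip(rtt_upstream, rtt_downstream)]

def bottleneck(points, baseline, loaded):
    """points: ordered capture points, e.g. ['A', 'B', 'C'];
    baseline/loaded: dict mapping a point to its RTT samples before and
    after the load increase. Returns the (upstream, downstream) pair
    whose mean processing time grew the most, plus the growth in ms."""
    def mean(xs):
        return sum(xs) / len(xs)
    worst, worst_growth = None, 0.0
    for up, down in zip(points, points[1:]):
        base = mean(processing_time(baseline[up], baseline[down]))
        load = mean(processing_time(loaded[up], loaded[down]))
        if load - base > worst_growth:
            worst, worst_growth = (up, down), load - base
    return worst, worst_growth
```

With the illustrative samples below, the middlebox between points B and C (the IDS in our topology) is flagged, because its processing time grows under load while the RE's does not.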
\subsubsection{Flow Trajectory}
We now test the methods to correlate flows described in Section~\ref{sec:correlation}.
First, we use packet fingerprints to correlate the input and output packets of middleboxes.
We route a flow through an RE and an IDS, then use the packet id (ip.Identification + tcp.Sequence $\ll$ 16) to correlate 
packets between logical hops. We compare two 0.5 GB traces and find that all packets are correlated 
unless dropped at the hop.
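The fingerprint matching can be sketched as below. The field names (`ip_id`, `tcp_seq`) on pre-parsed packet records are our own simplification; the fingerprint formula is the one given above.

```python
def fingerprint(pkt):
    # packet id = ip.Identification + (tcp.Sequence << 16), as in the text
    return pkt['ip_id'] + (pkt['tcp_seq'] << 16)

def correlate(trace_in, trace_out):
    """Match each inbound packet with the outbound packet sharing its
    fingerprint; inbound packets with no match were dropped at the hop."""
    out_index = {fingerprint(p): p for p in trace_out}
    matched, dropped = [], []
    for p in trace_in:
        (matched if fingerprint(p) in out_index else dropped).append(p)
    return matched, dropped
```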
%Second, we try to hash the payload and use hash value to distinguish packets. The packet trajectory accurancy are actually decided by the payload similarity and the hash function conflict possibility. To correlate an inbound flow to a middlebox with one of the multiple outbound flows, we first drop the flow's packet that has the same hash value with multiple outbound packets; then extract the flow ( 5 fields ) mapping from each remaining packet pair, then make a majority vote to decide the flow fields mapping. with trace xxx and hash function, the accuracy is XXX%.

Then we look into the load balancer case, in which packets have no fingerprints.
We use the haproxy load balancer, which breaks one TCP connection into two, 
and configure it with round-robin balancing. 
We use iperf to generate traffic, whose payloads are very likely identical, 
so packets cannot be distinguished from each other by fingerprints.
Instead, we sort the connection setup times on the client side and the server side, i.e., the first ACK
packet from the client to the load balancer and the first SYN from the load balancer to the server,
and correlate inbound and outbound flows by this time sequence. 
\revise{\#B}{We start 4000 iperf client flows to 10 iperf servers via haproxy; 
haproxy sets up the connections as fast as it can, which takes 12 seconds.
We use the haproxy logs to check the accuracy. We find that at a load of 330 connections 
per second in haproxy, we achieve 100\% accuracy in flow correlation. 
This is the fastest rate at which haproxy in our VM can set up connections. 
The result shows that correlating flows by time sequence is feasible and that 
APLAD provides flexible APIs to correlate flows for a layer-4 load balancer.} 
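The time-sequence correlation itself reduces to sorting both sides by setup time and pairing in order, as sketched here. The timestamps and flow labels are illustrative, not from our traces.

```python
def correlate_by_time(client_conns, server_conns):
    """client_conns: list of (first_ack_time, client_flow);
    server_conns: list of (first_syn_time, server_flow).
    Pairs flows by connection setup order, as a round-robin layer-4
    load balancer opens one server-side connection per client-side one."""
    clients = sorted(client_conns)
    servers = sorted(server_conns)
    return {cf: sf for (_, cf), (_, sf) in zip(clients, servers)}
```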
\subsection{Trace Collection}
\label{sec:trace_collection}
\revise{\#11}{APLAD makes use of the extra processing capability of virtual switches, so that 
flow capture does not impact the existing tenant network traffic. However, it consumes memory I/O throughput on servers, so flow capture could possibly impact some I/O intensive applications with rapid memory access 
in virtual machines. We measure and model this overhead in our experiment.}

\begin{figure}[h]
\centering
\small
\begin{tabular}{cc}
\includegraphics[width=0.4\textwidth]{fig/notunnel.pdf}&
\includegraphics[width=0.4\textwidth]{fig/one_ovs.pdf}\\
%\includegraphics[width=0.24\textwidth]{fig/two_ovs.pdf}\\
%(a) Virtual Switch Capacity & (b) Link Capacity\\
%\includegraphics[width=0.24\textwidth]{fig/gre.pdf}\\
%(b) Link Capacity\\
(a) Flow Capture &
(b) OVS Capacity  \\
%& (d) GRE Tunnel
\multicolumn{2}{c}{*Curves without a legend are TCP flows}
\end{tabular}
\caption{Network Overhead of Trace Collection}
\label{fig:col_overhead}
\end{figure}
{\bf Network Overhead:} With our optimization, trace duplication is performed by a virtual switch 
and the table server is set up locally. This introduces extra network traffic volume from
the virtual switch to the table server. We evaluate whether this trace duplication impacts existing
network flows.

We set up 8 virtual machines on 2 hypervisors, and start eight 1Gbps TCP flows between 
pairs of VMs running on the 2 hypervisors.  We then use APLAD to capture one of them
every minute. Figure~\ref{fig:col_overhead}(a) shows that when the flows are mirrored into table 
servers, the original flows' throughput is not impacted by the flow capture on OVS.
The reason is that the total throughput of VM traffic is limited by the 10Gbps NIC capacity. 
However, the packet processing capacity of OVS is larger than 10Gbps, which makes it possible 
to perform extra flow replication even when OVS is forwarding high throughput flows.

We conduct an experiment to further understand the packet processing capacity of OVS. 
In Figure~\ref{fig:col_overhead}(b), we start a background flow
between 2 hypervisors, which traverses OVS and saturates the
10Gbps NIC. Then we start one new TCP flow between two VMs on the same
hypervisor every minute to measure the left-over processing capacity
of OVS. The peak processing throughput of OVS is around 18Gbps.
Thus there is a significant amount of packet processing capacity -- up to
8Gbps -- on OVS to perform local flow replication even
when the 10Gbps NIC is saturated.
\begin{figure}[h]
\centering
\small
\begin{tabular}{cc}
\includegraphics[width=0.4\textwidth]{fig/mem_vm.pdf}&
\includegraphics[width=0.4\textwidth]{fig/mem_net.pdf}\\
(a) VM Memory Throughput & (b) Network-Memory
\end{tabular}
\caption{Memory Overhead of Trace Collection}
\label{fig:mem_overhead}
\end{figure}

{\bf Memory Overhead:} Another overhead concern is memory. Physical servers
keep becoming more powerful, with more CPUs, larger memory and more peripheral devices. 
However, the computer architecture makes all internal data transfers go through the memory
and the bus, which is a potential bottleneck for cloud servers running multiple VMs with
various applications. APLAD inevitably consumes some memory throughput to dump traces; we 
evaluate how much impact this introduces to virtual machine memory access.
%whether this impact to memory throughput is acceptable.

In Figure~\ref{fig:mem_overhead}(a), we run the Linux mbw benchmark in virtual machines. 
The benchmark allocates two 100MB memory regions and calls memcpy() to copy one
to the other. Results show that a single VM can only use about 3GB/s of memory bandwidth. As the number of VMs 
increases, the aggregate memory throughput reaches an upper bound of about 8GB/s.
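A rough analogue of this benchmark can be run in a few lines; this is not mbw itself, just a copy-bandwidth sketch (a `bytes()` copy standing in for memcpy()), and the sizes are scaled down for illustration.

```python
import time

def mem_bw_gbs(size_mb=100, reps=10):
    """Approximate copy bandwidth in GB/s: repeatedly copy a buffer of
    size_mb megabytes and divide total bytes copied by elapsed time."""
    src = bytearray(size_mb * 1024 * 1024)
    t0 = time.perf_counter()
    for _ in range(reps):
        dst = bytes(src)          # one full copy, like one memcpy() pass
    dt = time.perf_counter() - t0
    return size_mb * reps / 1024 / dt
```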

We then look into the influence of network traffic on memory throughput. We start 20 VMs on the 
hypervisor: 8 VMs run the memory benchmark, 6 VMs send network traffic via iperf to another
physical server, and 6 VMs are used to dump traces. We vary the network throughput
and measure the aggregate memory throughput. 
The network throughput is constrained by the physical NIC bandwidth, which 
is 10Gbps. When the network traffic does not saturate the physical NIC, the memory benchmark 
saturates the remaining memory bandwidth. We fit the memory-network relationship using linear
regression. Figure~\ref{fig:mem_overhead}(b) shows the relationship between aggregate
network throughput and aggregate memory throughput. The solid line is without
flow capture; the dashed line is when we dump all network traffic.
Let the network throughput be $N$ Gbps and the memory throughput be $M$ 
GB/s. Without network traffic dump, the fitted network-memory relationship is $$N + 2.28M = 18.81,$$
and with network traffic dump, it is $$N + 2.05M = 16.1.$$ This result shows that each 1Gbps
of network traffic dump costs an extra 59 MB/s of memory throughput.
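The two fitted lines can be used directly to predict the memory bandwidth left for VM applications at a given network load, as in this small sketch (the coefficients are the fitted values above; the function name is ours):

```python
def mem_headroom(n_gbps, capture=False):
    """Predicted memory bandwidth M (GB/s) available to VMs given
    aggregate network throughput N (Gbps), from the fitted models
    N + 2.28*M = 18.81 (no capture) and N + 2.05*M = 16.1 (full capture)."""
    intercept, slope = (16.1, 2.05) if capture else (18.81, 2.28)
    return (intercept - n_gbps) / slope
```

For example, at a saturated 10Gbps NIC the models predict about 3.86 GB/s of memory headroom without capture versus about 2.98 GB/s when dumping all traffic.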

Memory throughput overhead introduced by APLAD is unavoidable. 
%The evaluation results shows that it is acceptable. 
\revise{\#C}{Our experiment quantifies the performance impact introduced by the APLAD data collection
on application memory throughput.}
We advise that cloud administrators take memory throughput into consideration
when allocating VMs for tenants.

%{\bf VM processing speed}
\subsection{Data Query}
\label{sec:data_query}
%{\bf Service Setup}
We use the traces from the bottleneck middlebox detection experiment (Section~\ref{sec:middlebox}). 
We monitor throughput, RTT and packet loss, and observe each query's overhead and its performance in terms
of response time. These three diagnostic applications represent different data table operations:
aggregation, single-table join and multi-table join.

For throughput and RTT monitoring, we check the metrics periodically at different time granularities, 
varying the checking period to observe the overhead and performance.
For packet loss monitoring, we sample packets in one table and search for them in another, 
varying the sample rate to observe the overhead and performance.
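The three operation types can be illustrated as SQL over simplified per-hop packet tables. The schema (`ts`, `seq`, `ack`, `length`) and table names are our own illustration, not APLAD's actual layout:

```python
import sqlite3

# One table per capture point; ts in seconds, seq/ack are TCP numbers,
# length is the payload size in bytes.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE hop_a (ts REAL, seq INTEGER, ack INTEGER, length INTEGER);
CREATE TABLE hop_b (ts REAL, seq INTEGER, ack INTEGER, length INTEGER);
""")

# Aggregation (throughput): bytes per one-second bucket at one hop.
THROUGHPUT = """
SELECT CAST(ts AS INTEGER) AS sec, SUM(length) AS bytes
FROM hop_a GROUP BY sec ORDER BY sec;
"""

# Single-table join (RTT): pair each data packet with the first later
# packet at the same hop whose ACK number covers its payload.
RTT = """
SELECT d.ts, MIN(a.ts) - d.ts AS rtt
FROM hop_a d JOIN hop_a a
  ON a.ack = d.seq + d.length AND a.ts > d.ts
WHERE d.length > 0
GROUP BY d.ts, d.seq;
"""

# Multi-table join (packet loss): packets seen at hop A but never at hop B.
LOSS = """
SELECT a.seq FROM hop_a a LEFT JOIN hop_b b ON a.seq = b.seq
WHERE b.seq IS NULL;
"""
```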

\revise{\#11}{Data queries require data movement and thus consume network bandwidth. The APLAD
network traffic can be isolated from the tenant traffic by tunneling, and its bandwidth allocation
can also be scheduled together with the tenant traffic by the cloud controller.}
\begin{table}
\centering
\small
\caption{Throughput Query}
%\begin{tabular}{ccc}
%{\setlength{\tabcolsep}{0.15em}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
Period(s)& 1 & 3 & 5 & 7 & 9  \\ \hline
Execution(s) & 0.03 &0.1 &0.16&0.22 &0.29 \\ \hline
Traffic(MB) & $<$0.1 & $<$0.1 & $<$0.1 &$<$0.1& $<$0.1 \\ \hline
\end{tabular} 
%}&
%\caption{Throughput Query}
\label{tab:query_throughput}
\end{table}

\begin{table}
\centering
\small
\caption{RTT Query}
%{\setlength{\tabcolsep}{0.15em}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
Period(s)& 1 & 3 & 5 & 7 & 9  \\ \hline
Execution(s) &0.1 &0.29 &0.49 & 0.69& 0.9 \\ \hline
Traffic(MB) & $<$0.1 & $<$0.1 & $<$0.1 &$<$0.1& $<$0.1 \\ \hline
\end{tabular}
%}&
%\caption{RTT Query}
\label{tab:query_rtt}
\end{table}

\subsubsection{Overhead}
{\bf Storage:} At each hop, the total traffic volume is 0.5GB, so the total size of all traces
is 1.5GB. After the traces are parsed and dumped into the database, the storage costs
only 10MB for tables and 10MB for logs. The storage for one diagnosis is negligible for 
current cloud storage, and this space can be released after the diagnosis.

{\bf Network:} 
The results of the throughput and RTT monitoring experiments in Tables~\ref{tab:query_throughput} and~\ref{tab:query_rtt}
show little network traffic, because a local data table operation does not generate 
any traffic and outputting the results generates negligible traffic. 
Inter-table operations need data movement, e.g. packet loss monitoring in our
experiment. The overhead is easy to predict: it is the record size multiplied by the number
of records to move in the execution period. Table~\ref{tab:query_loss} shows
that with a 100Mbps flow, the extra traffic generated by packet loss detection
is only a few Mbps at the rate of 10,000 samples per second.
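This estimate is a one-line calculation, sketched here; the default per-record size is an assumed value for illustration, not a measurement:

```python
def query_traffic_mb(samples_per_s, period_s, record_bytes=50):
    """Estimated traffic (MB) moved by a sampled inter-table query in one
    execution period: record size times the number of records moved.
    record_bytes is an assumed size, not APLAD's actual record format."""
    return samples_per_s * period_s * record_bytes / 1e6
```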
\begin{table}
\centering
\small
\caption{Packet Loss Query}
\begin{tabular}{|c|c|c|c|c|c|} 
\hline 
Samples/s & 1E0 & 1E1 & 1E2 & 1E3 & 1E4   \\ \hline 
Execution(s) & $<$0.01 & $<$0.01 &0.01&0.03 &0.2 \\ \hline 
Traffic(MB) & 0.1 &0.1 &0.2& 0.5&3.4 \\ \hline 
\end{tabular} 
\label{tab:query_loss}
\end{table}
\subsubsection{Performance}
In throughput and RTT monitoring, the response time shows a strong linear relation with
the checking period. In throughput monitoring, one second's traffic of a 100Mbps 
flow can be processed in 0.03 seconds,
so we predict that the throughput of a flow of up to about 3Gbps can be monitored in real time. Similarly, 
the RTT of a flow of up to about 1Gbps can be monitored in real time.

In the packet loss case, APLAD can process 10,000 records in 0.2 seconds. Each record costs
a fixed amount of time; scaling linearly, we predict that 
packet loss can be detected in real time for flows of 2-3 Gbps.
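The linear-scaling prediction behind these numbers can be written out explicitly (a sketch of our extrapolation, with the function name our own):

```python
def realtime_capacity_gbps(flow_gbps, exec_s_per_s_of_traffic):
    """Largest flow rate whose one second of traffic can be processed
    within one second, assuming execution time scales linearly with rate.
    E.g. a 0.1 Gbps flow processed in 0.03 s/s extrapolates to ~3.3 Gbps."""
    return flow_gbps / exec_s_per_s_of_traffic
```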

\subsection{Scalability}
In this section, we discuss the scalability of the APLAD framework. In a large-scale cloud environment, the 
scalability challenge for APLAD is to perform data collection from a large number of VMs and to support 
diagnosis requests from a large number of tenants. Since APLAD co-locates table servers with tenants' VMs 
and only performs data collection locally, data collection will not be a scalability bottleneck even when there 
is a large number of VMs. The {\bf control server} generates data collection policies and passes query commands 
and results between tenants and table servers. It is easy to add this logic to 
existing user-facing cloud control servers. 
Given that existing clouds, such as Amazon EC2 and Microsoft Azure, 
have been able to support a large number of tenants through web-based control servers, we believe the control server will not be a major 
scalability bottleneck either. However, in a large-scale cloud, APLAD table servers will need to perform real time 
data processing and table queries for many tenants, which could become a 
major scalability bottleneck. Therefore, our scalability discussion is focused on 
the query performance of table servers. 
%The {\bf control server} generates data collection policies. As we mentioned in Section~\ref{sec:algo}, the optimized capture point algorithm is to place it locally, which has O(1) time complexity.  Another function of control server is to pass the query commands and results between the tenants and the table servers, which is pretty simple logic. So the control server is not likely to be a bottleneck of APLAD.

%In real data center networks, there may be thousands or tens of thousands of physical servers providing services to more tenants. The majority resources are allocated to the tenants, the resource left (e.g. link bandwidth, CPU, memory) can be used to process APLAD requests.

To evaluate the {\bf table servers'} scalability, we perform simulation analysis 
based on the statistics of real cloud applications and our query performance 
measurements. In the simulation, we make the following assumptions:
\begin{itemize}
\item The data center network has full bisection bandwidth, so we simplify the physical network by one big switch 
connecting all physical servers. The physical NIC bandwidth is 10 Gbps.
\item Typical enterprise cloud applications (e.g. interactive and batch multi-tiered applications) use 2 to 20 VMs~\cite{cloudnaas}.
We assume each application is running in one virtual network, so each virtual network has 2 to 20 VMs.
\item The flow throughput between virtual machines follows a uniform distribution in [1, 100] Mbps~\cite{cloudnaas}.
%have a size distributation which follows log-scale normal distribution with a mean of 1KB, and a duration distribution which follows log-scale uniform distribution in [10us, 1000s].
\item In each physical server, the virtual switch can process up to 18 Gbps network traffic (Section~\ref{sec:trace_collection}). 
\item Throughput queries are common in network diagnosis. This query is intensive because it needs to inspect all the packets. We assume each tenant issues a throughput query over all the traffic in its virtual network. Each executor can process queries for 3 Gbps of network traffic in real time (Section~\ref{sec:data_query}).
\end{itemize}

In the simulation, we first generate virtual networks whose sizes and flow characteristics follow our assumptions, then
allocate them (greedily, to the server with the most available resources) in a data center with 10000 physical servers 
until the total link utilization reaches a threshold. 
Then the tenants start to issue diagnostic requests. 
Each diagnostic request captures and queries all the traffic in the tenant's virtual network. If enough resources are
left (trace duplication capacity in the virtual switch and query processing capacity in the query executor), 
the tenant's request consumes the physical resources and succeeds; otherwise, the request is rejected and fails.
As more requests are issued, fewer resources are left for subsequent diagnostic requests.
We stop issuing diagnostic requests when requests start to be rejected.
Then we calculate the fraction of allocated virtual networks that are successfully diagnosed. 
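The simulation can be condensed into the following sketch. The capacities come from the assumptions listed above; the demand model (VM count times per-flow rate) and the 100-server scale are simplifications of ours, so the numbers it produces are illustrative only.

```python
import random

NIC, OVS, QUERY = 10_000, 18_000, 3_000  # Mbps: NIC, vswitch, per-server query

def simulate(num_servers=100, util_target=0.5, seed=0):
    rng = random.Random(seed)
    link = [0] * num_servers             # tenant traffic per server (Mbps)
    nets = []                            # (server, demand) per virtual network
    # Place virtual networks greedily until the utilization target is met.
    while sum(link) < util_target * NIC * num_servers:
        # demand ~ (2..20 VMs) x (1..100 Mbps), a rough stand-in for the
        # assumed size and flow distributions
        demand = rng.randint(2, 20) * rng.randint(1, 100)
        s = min(range(num_servers), key=lambda i: link[i])  # most spare
        if link[s] + demand > NIC:
            break
        link[s] += demand
        nets.append((s, demand))
    # Admit diagnostic requests while duplication + query capacity lasts.
    dup = [OVS - l for l in link]        # leftover duplication capacity
    qry = [QUERY] * num_servers          # query-executor capacity
    ok = 0
    for s, demand in nets:
        if dup[s] >= demand and qry[s] >= demand:
            dup[s] -= demand
            qry[s] -= demand
            ok += 1
    return ok / len(nets)                # fraction successfully diagnosed
```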

{\bf Data Collection:} When total link utilization is under 80\% (fewer than 150K tenants), all virtual network traffic can be captured. Even when the
total link utilization is 90\% (162K tenants), 98.6\% of virtual networks can be diagnosed. Given that in a typical 
data center network, 80\% of links have utilization lower than 10\% and 
99\% of links are under 40\% utilization~\cite{cloudcharactoristics}, 
we conclude that in the common case (total link utilization lower than 30\%)
all virtual network traffic can be captured without impacting existing application traffic.

{\bf Data Query:} In Table~\ref{tab:scalability}, when total link utilization is under 30\% (54K tenants), 
almost all queries succeed. When total link utilization is high, the product of the link utilization and the successful 
query ratio is about 30\%; that is, 30\% of the total link capacity can be queried successfully in ``real time''. 
Given that link utilization in typical data centers is normally lower than 30\%~\cite{cloudcharactoristics}, 
most tenants' traffic can be queried in real time. If some tenants relax the latency requirement of queries and do offline data processing, APLAD queries can make even better use of the spare resources without 
contending with latency-sensitive queries. 
%{\bf Data Query:} In Table~\ref{tab:scalability}, when total link utilization is under 30\% (90K tenants), almost all queries succeed. When total link utilization is high, the product of the link utilization and the success query ratio is about 30\%, that is, 30\% of the total link capacity can be queried in ``real time'' successfully. With the typical data center workload mentioned above~\cite{cloudcharactoristics}, almost all traffic can be queried in real time. If some tenants relax the query time requirement and do offline data processing, then APLAD query can make use of the spare resources without contending with time-intensive applications.

\begin{table}
\centering
\small
\caption{Successful Queries in a Data Center}
{\setlength{\tabcolsep}{0.15em}
\begin{tabular}{|c|c|c|c|c|c|} 
\hline 
%Link Utilization(\%) & 10 & 20 & 30 & 40 & 50 & 60 & 70 & 80 & 90   \\ \hline 
%Successful Query(\%) &100 & 100& 97.8 & 73.5 & 58.7 & 48.9 &41.9 &36.7 & 32.5   \\ \hline 
Tenants Count & 18K & 54K & 90K & 126K & 162K \\ \hline
Link Utilization(\%) & 10  & 30  & 50  & 70  & 90   \\ \hline 
Successful Query(\%) &100 & 97.85 & 58.7  &41.9  & 32.5   \\ \hline 
\end{tabular} 
}
\label{tab:scalability}
\end{table}




%{\bf controller processing speed}

%{\bf Infrastructure Scalability: } Our experiments above consider a single tenant with a simple topology. As argued earlier, APLAD is designed to scale to more servers and tenants since it only performs flow replication on local hypervisors and does on-demand query execution.  We are currently conducting larger experiments to explore the scalability of our approach.

