\section{APLAD Design}
\label{sec:design}

In this section, we describe the design of our virtual network diagnosis framework (APLAD) to address the 
challenges outlined in the previous section. We show how the APLAD
architecture preserves data isolation and abstraction, and demonstrate
APLAD's applicability to existing cloud management platforms.

\subsection{APLAD Service Operation}
\label{sec:service}
\begin{figure*}[h]
\centering
\small
\begin{tabular}{ccccc}
\includegraphics[width=0.23\textwidth]{fig/service_2.pdf} &
\includegraphics[width=0.23\textwidth]{fig/service_3.pdf} &
\includegraphics[width=0.23\textwidth]{fig/service_4.pdf} &
\includegraphics[width=0.23\textwidth]{fig/service_5.pdf} \\
(a) Convert tenant request &
(b) Deployment and &
(c) Parse data into &
(d) Diagnosis by data \\
into diagnosis policy &
data collection &
readable tables &
operation or applications
\end{tabular}
\caption{Diagnosis as a Service operation}
\label{fig:service}
\end{figure*}

Figure~\ref{fig:service} illustrates the operation of APLAD's diagnosis
service, which takes input from the tenants and produces the 
raw data, operational interfaces, and initial analysis results. We assume 
the cloud has the architecture described in Section~\ref{sec:vnet}: 
there is a network controller (i.e., an SDN controller) that knows the physical 
topology and every tenant's virtual network embedding.

First, when a tenant observes poor performance or a failure in their
virtual network, they submit a diagnosis request to the APLAD control
server (Figure~\ref{fig:service}(a)). The request describes the flows
and components experiencing problems.  The control server, which is
deployed by the cloud administrator, accepts the tenant request and
obtains physical infrastructure details such as the topology and the
tenant's allocated resources.  The control server then translates the
diagnosis request into a diagnosis policy. The diagnosis policy
includes a flow pattern to diagnose (flow granularity such as IP, port,
protocol, etc.), a capture point (the physical location at which to trace the flow),
and a storage location (the physical server for storage and
further analysis).
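As an illustration of this translation step, here is a minimal Python sketch; all names, the request format, and the placement map are hypothetical, not APLAD's actual interface:

```python
# Hypothetical sketch: turn a tenant diagnosis request into a diagnosis
# policy using the virtual-to-physical mapping from the SDN controller.

def translate_request(request, placement):
    """request:   {"appliance": ..., "pattern": {...}} (illustrative format)
    placement: virtual appliance -> physical hypervisor (from the controller)"""
    host = placement[request["appliance"]]   # virtual-to-physical mapping
    return {
        "pattern": request["pattern"],       # flow granularity (IP, port, ...)
        "capture_point": host,               # where to mirror the flow
        "storage": host,                     # co-locate collector with capture point
    }

policy = translate_request(
    {"appliance": "lb", "pattern": {"proto": "TCP", "dstPort": 80}},
    {"lb": "hypervisor-3"},
)
```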

Then, the cloud controller deploys this diagnosis policy into the physical network to 
collect the flow traces (Figure~\ref{fig:service}(b)). This deployment 
includes three aspects: 1) mirroring
problematic flows' packets at certain capture points (physical or virtual switches), 
2) setting up trace collectors to store and process the packet traces, and 3) configuring routing
rules from the capture point to the collector for the dumped packets. 
The tenant can now monitor the problematic flows of their network applications. 

Next, the tenant supplies a parse configuration that specifies packet
fields of interest and the raw packet trace is parsed
(Figure~\ref{fig:service}(c)), either offline after the data
collection, or online as the application runs.  The raw trace includes
packets plus timestamps.  The raw traces are parsed into
human-readable tables with columns for each packet header field and
rows for each packet; each trace table denotes a packet trace at a
certain capture point. There is also a metadata table transformed from
the diagnosis policy.  All of these tables collectively form a
diagnosis schema.

Finally, the tenant can diagnose the virtual network problem based on
the trace tables.  The control server provides an interface to the
tenants through which they can fetch the raw data, perform basic
SQL-like operations on the tables and even use integrated diagnosis
applications from the provider. This helps tenants diagnose problems
in their own applications or locate problems in the virtual network.
\revise{\#2}{If the physical network has problems, tenants can still
  use APLAD to find the abnormal behavior (packet loss, latency, etc.)
  in observations of the virtual components, so that they can report
  the abnormality to the cloud administrator.}

\subsection{APLAD Architecture}
\label{sec:arch}
\begin{figure}[h]
\centering
\includegraphics[width=0.7\textwidth]{fig/arch.pdf}
\caption{Virtual Network Diagnosis Framework}
\label{fig:arch}
\end{figure}
APLAD is composed of a {\bf control server} and multiple
{\bf table servers} (Figure~\ref{fig:arch}).
Table servers collect flow traces from network devices (both physical and virtual),
perform initial parsing, and store data into distributed data
tables. The control server allows tenants to specify trace collection
and parse configurations, and diagnose their virtual networks using
abstract query interfaces. 
To reduce overhead, trace collection and analysis begin only in
reaction to the tenant's diagnosis requests.

\subsubsection{Control Server}
\revise{\#1}{The control server is the communication hub between tenants, 
the cloud controller, and table servers. Its configuration and query interfaces allow cloud tenants to ``peek into'' problems 
in their logical networks without requiring the provider to expose unnecessary information about the infrastructure or other tenants. 
To decide how to collect data, the control server needs interfaces from the cloud controller
to request virtual-to-physical resource mappings (e.g., placement of
VMs or middleboxes, tunnel endpoints) and interfaces to set up data collection policies (e.g., flow mirroring rules, collector VM 
setup, and communication tunnels between all APLAD components).
}
\begin{figure}[h]
\centering
\small
%{\setlength{\tabcolsep}{0.1em}
%\begin{tabular}{|lp{0.22\textwidth}|}
\begin{tabular}{|ll|}
\hline
1)& Virtual Appliance \textbf{\emph{Link}} : \textbf{\emph{l1}} \\
2)& \hspace{1em}Capture Point \textbf{\emph{node1}} \\
3)& \hspace{2em}Flow Pattern \textbf{\emph{field = value, ...}}\\
4)& \hspace{1em}Capture Point \textbf{\emph{node2}}\\
5)& ...\\
6)& Virtual Appliance \textbf{\emph{Node}} : \textbf{\emph{n1}} \\
7)& \hspace{1em}Capture Point \textbf{\emph{input, [output]}} \\
8)& ...\\
%9)& \hspace{0.4em} Cap Point \textbf{\emph{output}} \\
%10)& ...\\
\hline
\end{tabular} 
%}
\caption{Trace Collection Configuration format}
\label{fig:col_config}
\end{figure}

\begin{figure}[h]
\centering
\small
%{\setlength{\tabcolsep}{0.2em}
\begin{tabular}{cc}


\renewcommand{\arraystretch}{0.7}
\begin{tabular}{|ll|}
\hline
1)& Appliance \textbf{\emph{Node}} : \textbf{\emph{lb}} \\
2)& \hspace{0.5em}Cap \textbf{\emph{input}} \\
3)& \hspace{1em}\textbf{\emph{srcIP=10.0.0.6/32}}\\
4)& \hspace{1em}\textbf{\emph{dstIP=10.0.0.8/32}}\\
5)& \hspace{1em}\textbf{\emph{proto=TCP}}\\
6)& \hspace{1em}\textbf{\emph{srcPort=*}}\\
7)& \hspace{1em}\textbf{\emph{dstPort=80}}\\
8)& \hspace{1em}\textbf{\emph{dualDirect=True}}\\
9)& \hspace{0.5em}Cap \textbf{\emph{output}}\\
10)&\hspace{1em} ...\\\hline
11)& Appliance ...\\
12)&\hspace{0.5em}...\\
\hline
\end{tabular} &
\begin{tabular}{c}
\includegraphics[width=0.35\textwidth]{fig/col_example.pdf}\\
\end{tabular}\\
(a) Configuration & (b) Tenant's view\\
\end{tabular}
%}
\caption{Trace Collection Example}
\label{fig:col_example}
\end{figure}

\begin{figure}[!ht]
\centering
\small
%{\setlength{\tabcolsep}{0.1em}
\begin{tabular}{cc}
\renewcommand{\arraystretch}{0.7}
\begin{tabular}{|ll|}
\hline
1)&Trace ID \textbf{\emph{tr\_id1}}\\ 
2)&Pattern \textbf{\emph{field = value, ...}}\\
3)&Physical Cap Point \textbf{\emph{vs1}}\\
4)&Collector \textbf{\emph{c1}} at \textbf{\emph{h1}}\\
5)&Path \textbf{\emph{vs1, ..., h1, c1}}\\
\hline
6)&Trace ID \textbf{\emph{tr\_id2}} \\
7)&...\\ \hline
8)&Trace ID \textbf{\emph{tr\_id3}} \\
9)&...\\ \hline
\end{tabular} &
\begin{tabular}{c}
\includegraphics[width=0.3\textwidth]{fig/policy_example.pdf}\\
\end{tabular}\\
(a) Policy & (b) Deployment
\end{tabular}
%}
\caption{Trace Collection Policy}
\label{fig:col_policy}
\end{figure}

The {\bf policy manager} in the control server manages the trace
collection and parse configurations submitted by cloud tenants. When a
tenant encounters problems in their virtual network, they can submit a {\bf
  trace collection configuration} (Figure~\ref{fig:col_config})
that specifies the flows of interest, e.g., flows related to a
set of endpoints or application types (lines 1, 2, 4, 6, and 7).  The pattern may be
specified at different granularities, such as a particular TCP flow or all traffic to/from a
particular (virtual) IP address (line 3). 

Figure~\ref{fig:col_example} shows an example
trace collection configuration. A tenant deploys a load balancer and
multiple servers in their virtual network and wants to diagnose the 
load balancer. They identify the problematic appliance as the {\bf node lb} (line 1)
and capture both its input and output (lines 2 and 9). The flow of interest is
the web service flow (port 80) between hosts 10.0.0.6 and 10.0.0.8 (lines 3--8).
In the configuration, the tenant only has the view of their virtual network 
(Figure~\ref{fig:col_example}(b)); the infrastructure is not exposed to the tenant.

The policy manager combines the trace collection configuration with
network topology and the tenant's logical-to-physical mapping
information.  This is assumed to be available at the SDN controller,
e.g., similar to a network information base~\cite{onix} (not shown in
the figure).  The policy manager then computes a {\bf collection
  policy} (Figure~\ref{fig:col_policy}(a)) that represents how flow
traces should be captured in the physical network.  The policy
includes the flow pattern (line 2), the capture points in the network
(line 3), and the location of {\bf trace collectors} (line 4), which
reside in the table servers to create local network taps to collect
trace data.  The policy also includes the routing rules that deliver the
mirrored flows from the capture point to the collector (line 5). We
discuss the capture point and table server allocation algorithm in
Section~\ref{sec:algo}.  Based on the policy, the cloud controller
sets up corresponding rules on the capture points to collect the
appropriate traces (e.g., matching and mirroring traffic based on a
flow identifier in OpenFlow), and it starts the collectors in
virtual machines and configures routing rules between capture points
and collectors (Figure~\ref{fig:col_policy}(b)).  We discuss how to
avoid interference between diagnostic rules and routing rules in
Section~\ref{sec:multi-table}.
\begin{figure}[ht]
\small
\centering
%{\setlength{\tabcolsep}{0.2em}
\begin{tabular}{cc}

\begin{tabular}{p{0.3\textwidth}}
\begin{tabular}{|p{0.3\textwidth}|}
\hline
Trace ID \textbf{\emph{tr\_id1}}\\ \hline
Table ID \textbf{\emph{tab\_id1}} \\
Filter \textbf{\emph{exp}}\\
Fields \textbf{\emph{field\_list}}\\ \hline
Table ID \textbf{\emph{tab\_id2}} \\
...\\ \hline
%Table ID \textbf{\emph{tab\_id3}} \\
%...\\ \hline
\end{tabular} \\
\hspace{0.2em}\textbf{\emph{exp}} = not \textbf{\emph{exp}} \textbar \textbf{\emph{ exp}} and \textbf{\emph{exp}} \textbar \\
\hspace{1em}\textbf{\emph{exp}} or \textbf{\emph{exp}} \textbar  \textbf{\emph{ (exp)}} \textbar  \textbf{\emph{ prim}},\\
\hspace{0.2em}\textbf{\emph{prim}} = \textbf{\emph{field}} $\in$ \textbf{\emph{value\_set}}, \\
\hspace{0.2em}\textbf{\emph{field\_list}} = \textbf{\emph{field}} (as \textbf{\emph{name}}) \\
\hspace{1em}(, \textbf{\emph{field}} (as \textbf{\emph{name}}))*\\
\end{tabular} 

&

\begin{tabular}{|l|}
\hline
Trace ID \textbf{\emph{all}}\\
%\hline Table ID \emph{\# system-assigned}\\
Filter: \textbf{\emph{ip.proto = tcp}}\\
\hspace{1em}\textbf{\emph{or ip.proto = udp}}\\
Fields:
\textbf{\emph{timestamp}} as \textbf{\emph{ts}},\\
%\hspace{1em} \textbf{\emph{packet\_id}} as \textbf{\emph{id}}, \\
\hspace{1em} \textbf{\emph{ip.src}} as \textbf{\emph{src\_ip}},\\
\hspace{1em} \textbf{\emph{ip.dst}} as \textbf{\emph{dst\_ip}},\\
\hspace{1em} \textbf{\emph{ip.proto}} as \textbf{\emph{proto}},\\
\hspace{1em} \textbf{\emph{tcp.src}} as \underline{\textbf{\emph{src\_port}}},\\
\hspace{1em} \textbf{\emph{tcp.dst}} as \underline{\textbf{\emph{dst\_port}}},\\
\hspace{1em} \textbf{\emph{udp.src}} as \underline{\textbf{\emph{src\_port}}},\\
\hspace{1em} \textbf{\emph{udp.dst}} as \underline{\textbf{\emph{dst\_port}}}\\
\hline
\end{tabular}\\
(a) Configuration & (b) Example\\
\end{tabular}
%}
\caption{Parse Configuration and an Example}
\label{fig:parse}
\end{figure}

Cloud tenants also submit a {\bf parse configuration}
(Figure~\ref{fig:parse}(a)) to perform initial parsing on the raw flow
trace. It contains multiple parsing rules; each rule has a filter
and a field list that specify the packets of interest, the header
field values to extract, and the table columns in which to store those
values.  Based on the parse configuration, the policy manager configures
the {\bf trace parser} on the table servers to parse the raw traffic 
traces into multiple text tables, called trace tables, which store 
the packet records with the selected header fields. Figure~\ref{fig:parse}(b) 
shows an example parse configuration, in which all traces (line 1) 
in the current diagnosis are parsed. All layer-4 packets, 
both TCP and UDP (lines 2 and 3), are the packets of interest. The 
packets' 5-tuple fields, i.e., source/destination IP, source/destination 
port, and protocol, are extracted and stored in tables. In this 
configuration, TCP and UDP source/destination ports are stored 
in the same columns, producing trace tables of the form:\\
\centerline{\small
$<$ts, src\_ip, dst\_ip, proto, src\_port, dst\_port$>$.
}
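As a toy illustration, a trace table with exactly these columns can be modeled in SQLite; the packet rows are made up:

```python
# Toy sketch: store parsed packet records in a trace table with the
# columns named in the text (values are invented).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE trace (
    ts REAL, src_ip TEXT, dst_ip TEXT, proto TEXT,
    src_port INTEGER, dst_port INTEGER)""")
conn.executemany(
    "INSERT INTO trace VALUES (?, ?, ?, ?, ?, ?)",
    [(0.01, "10.0.0.6", "10.0.0.8", "TCP", 43210, 80),   # request packet
     (0.02, "10.0.0.8", "10.0.0.6", "TCP", 80, 43210)])  # response packet
rows = conn.execute("SELECT COUNT(*) FROM trace").fetchone()[0]
```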

Based on trace tables, tenants can perform various diagnosis
operations through a query interface provided by the control
server. The {\bf analysis manager} in the control server takes the
tenant's query, schedules its execution on distributed {\bf query
  executors} on table servers, and returns the results to the
tenant. In Section~\ref{sec:app}, we discuss typical diagnosis tasks
that can be easily implemented using the query interface.

\subsubsection{Table Server}
A table server has three components: a trace collector, a trace 
parser, and a query executor. The raw packets from the virtual NIC
pass through these three components as a stream. The trace collector dumps all 
packets and transmits them to the trace parser. The trace parser, which 
is configured by the policy manager, parses each packet to select the
packets of interest and extract the specified fields. The extracted
results are stored by the query executor in trace tables. 

A query executor can itself be viewed as a database with its own
tables; it can perform operations such as search, join, etc. on data
tables.  Query executors in all table servers form a distributed
database which supports inter-table operations.  \revise{\#3}{We
  choose a distributed approach over a centralized one for two
  reasons. First, with distributed storage, APLAD only moves data when
  the query requires it, so it avoids unnecessary data movement and
  reduces network traffic overhead.  Second, for all the diagnostic
  applications discussed in Section~\ref{sec:app}, the most common
  table operations are single-table operations. These operations can
  be executed independently on each table server, so distributed
  storage helps to parallelize the data queries and avoid overloading
  a single query processing node.  }

\subsection{Trace Analysis}
\label{sec:analysis}
The tenant sends virtual network diagnostic requests via a SQL interface,
and the diagnostic query is executed on the distributed query executors
with distributed query execution optimizations.

\subsubsection{Diagnostic Interfaces and Applications}
\label{sec:app}
APLAD provides a SQL interface to tenants, on which various network
diagnosis operations can be developed. Tenants can develop and issue
diagnostic tasks themselves or use diagnostic applications available
from the cloud provider.  \revise{\#7}{APLAD makes use of existing SQL
  operations on relational databases, so that it supports a wide
  variety of diagnostic applications.  } Some of the queries are
single-table queries, and others need to combine multiple tables.
Single table queries are useful to identify anomalies in the
input/output path of an appliance, for example.

{\bf Filter:} With filters, the tenant can focus on packets of interest. For example,
tenants may want to check ARP packets for address resolution problems, DNS packets
for name resolution problems, or a certain kind of traffic such as SSH or HTTP.
These filters simply match a field against a value
and are easily described by a standard SQL query of the form:\\
\centerline{\small
\begin{tabular}{|l|}
\hline
select * from Table where field = value\\
\hline
\end{tabular}
}
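A runnable sketch of such a filter, issued through Python's SQLite interface over a made-up trace table (here selecting HTTP traffic):

```python
# Sketch: select packets of interest (TCP to port 80) from a toy trace table.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trace (ts REAL, proto TEXT, dst_port INTEGER)")
conn.executemany("INSERT INTO trace VALUES (?, ?, ?)",
                 [(0.1, "TCP", 80), (0.2, "UDP", 53), (0.3, "TCP", 80)])
# the filter: match fields to values, as in the query form above
http = conn.execute(
    "SELECT * FROM trace WHERE proto = 'TCP' AND dst_port = 80").fetchall()
```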

{\bf Statistics:} Tenants may need the distribution of traffic over a certain
field, such as MAC address, IP, or port. These distributions can be used to identify 
missing or excessive traffic.  Distribution computation first gets the count of records
and then calculates the proportion of each distinct field value:\\
\centerline{\small
\begin{tabular}{|l|}
\hline
var1 = select field, count(*) from tab group by field\\
var2 = select count(*) from tab\\
for each record r in var1\\
\hspace{1em}Output $<$r.field, r.count/var2$>$\\
\hline
\end{tabular}
}
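The two-step distribution computation above can be sketched in Python with SQLite; the trace rows are invented:

```python
# Sketch: per-source-IP traffic distribution (count per value / total count).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trace (src_ip TEXT)")
conn.executemany("INSERT INTO trace VALUES (?)",
                 [("10.0.0.1",), ("10.0.0.1",), ("10.0.0.2",), ("10.0.0.1",)])
total = conn.execute("SELECT COUNT(*) FROM trace").fetchone()[0]
# group-by counts, then normalize each group by the total
dist = {ip: n / total for ip, n in conn.execute(
    "SELECT src_ip, COUNT(*) FROM trace GROUP BY src_ip")}
```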

{\bf Groups:} The unique groups among all packet records give a global
view of all traffic types. For example, identifying the unique TCP connections
of a web service helps characterize the client IP distribution. In SQL, this is 
equivalent to finding the distinct field groups:\\
\centerline{\small
\begin{tabular}{|l|}
\hline
select distinct field1, field2, ... from Table\\
\hline
\end{tabular}
}
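A minimal SQLite sketch of the unique-groups query, over made-up rows:

```python
# Sketch: find distinct (src_ip, dst_ip, dst_port) groups, i.e. unique
# connections, in a toy trace table.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trace (src_ip TEXT, dst_ip TEXT, dst_port INTEGER)")
conn.executemany("INSERT INTO trace VALUES (?, ?, ?)",
                 [("10.0.0.1", "10.0.0.8", 80), ("10.0.0.1", "10.0.0.8", 80),
                  ("10.0.0.2", "10.0.0.8", 80)])
conns = conn.execute(
    "SELECT DISTINCT src_ip, dst_ip, dst_port FROM trace").fetchall()
```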

{\bf Throughput:} Throughput has a direct impact on application performance
and directly indicates whether the network is congested. To monitor a flow's
throughput, we first group the packet records by timestamp and then output
the sum of payload lengths in each time slot:\\
\centerline{\small
\begin{tabular}{|l|}
\hline
\textit{\# assume the timestamp unit is second}\\
select ceil(ts), sum(payload\_length) from table group by ceil(ts)\\
\hline
\end{tabular}
}
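A runnable sketch of the throughput query; since some SQLite builds lack a \texttt{ceil()} function, this sketch buckets timestamps with \texttt{CAST} instead (the rows are made up):

```python
# Sketch: per-second throughput as sum of payload lengths per time bucket.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trace (ts REAL, payload_length INTEGER)")
conn.executemany("INSERT INTO trace VALUES (?, ?)",
                 [(0.2, 100), (0.7, 300), (1.4, 500)])
# CAST(ts AS INTEGER) truncates, bucketing timestamps into whole seconds
tput = dict(conn.execute(
    "SELECT CAST(ts AS INTEGER), SUM(payload_length) FROM trace "
    "GROUP BY CAST(ts AS INTEGER)"))
```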

Combining or comparing multiple tables can help to find poorly
behaving network appliances.

{\bf RTT:} RTT is the round-trip delay for a packet in the network. Latency is caused
by queuing of packets in network buffers, so RTT is a good indicator of network 
congestion. To determine RTT, we need to find a packet and its ACK, then use the 
difference of their timestamps to estimate the RTT.  Assume the trace tables have the following format:\\
{\centering \small
$<$ts, id\footnote{Packet ID is used to identify each packet, and does not change with hops. This ID can be calculated from unchanged fields in the packets such as identification number in the IP header, sequence number in TCP header or hash of the payload.}
, srcIP, dstIP, srcPort, dstPort, seq, ack, payload\_length$>$.
}

RTT monitoring is designed as follows:\\
\centerline{\small
\begin{tabular}{|l|}
\hline
1) create view F as select * from T where srcIP=IP1 and dstIP = IP2\\
2) create view B as select * from T where dstIP=IP1 and srcIP = IP2\\
3) create view RTT as select F.ts as t1, B.ts as t2 from F, B where F.seq + F.payload\_length = B.ack\\
4) select avg(t2-t1) from RTT\\
\hline
\end{tabular}
}
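The four-step RTT query above can be sketched end-to-end in SQLite; the table \texttt{T} holds one made-up data packet and its ACK:

```python
# Sketch: match a data packet with its ACK (seq + payload_length = ack)
# and average the timestamp differences.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE T (
    ts REAL, srcIP TEXT, dstIP TEXT,
    seq INTEGER, ack INTEGER, payload_length INTEGER)""")
conn.executemany("INSERT INTO T VALUES (?, ?, ?, ?, ?, ?)", [
    (1.000, "IP1", "IP2", 1000, 0, 100),   # data packet IP1 -> IP2
    (1.030, "IP2", "IP1", 5000, 1100, 0),  # its ACK: ack = seq + payload_length
])
conn.execute("CREATE VIEW F AS SELECT * FROM T WHERE srcIP='IP1' AND dstIP='IP2'")
conn.execute("CREATE VIEW B AS SELECT * FROM T WHERE dstIP='IP1' AND srcIP='IP2'")
rtt = conn.execute("""
    SELECT AVG(B.ts - F.ts) FROM F, B
    WHERE F.seq + F.payload_length = B.ack""").fetchone()[0]
```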
Note that the RTT computation discussed here is simplified. A diagnostic
application could implement the more complicated logic of RTT computation
in real networks: for example, retransmitted packets should be excluded
from the RTT computation, and with SACK, a data packet's
acknowledgment may appear in the SACK field. 

{\bf Delay at a hop:} The delay of a packet at a hop reflects the packet processing time there,
and hence whether that hop is overloaded. To find the one-hop delay, we 
correlate input and output packets and then calculate their timestamp difference:\\
\centerline{\small
\begin{tabular}{|l|}
\hline
1) create view DELAY as select In.ts as t1, Out.ts as t2 from In, Out where In.id = Out.id\\
2) select avg(t2-t1) from DELAY\\
\hline
\end{tabular}
}
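A runnable sketch of the one-hop delay query; because \texttt{IN} is an SQL keyword, the input/output tables are named \texttt{InT}/\texttt{OutT} here (rows are made up):

```python
# Sketch: join a hop's input and output traces by packet ID and average
# the per-packet timestamp differences.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE InT (ts REAL, id INTEGER)")
conn.execute("CREATE TABLE OutT (ts REAL, id INTEGER)")
conn.executemany("INSERT INTO InT VALUES (?, ?)", [(1.0, 1), (2.0, 2)])
conn.executemany("INSERT INTO OutT VALUES (?, ?)", [(1.002, 1), (2.004, 2)])
delay = conn.execute(
    "SELECT AVG(OutT.ts - InT.ts) FROM InT JOIN OutT ON InT.id = OutT.id"
).fetchone()[0]
```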

{\bf Packet loss:} Packet loss causes the TCP congestion window to decrease and directly 
impacts application performance. Finding packet loss at a hop requires identifying the missing packet
records between the input/output tables of that hop:\\
\centerline{\small
\begin{tabular}{|l|}
\hline
select * from In where In.id not in (select id from Out)\\
\hline
\end{tabular}
}
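A runnable sketch of the packet-loss query over made-up input/output tables (again named \texttt{InT}/\texttt{OutT} to avoid the \texttt{IN} keyword):

```python
# Sketch: packets present at the hop's input but missing at its output.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE InT (id INTEGER)")
conn.execute("CREATE TABLE OutT (id INTEGER)")
conn.executemany("INSERT INTO InT VALUES (?)", [(1,), (2,), (3,)])
conn.executemany("INSERT INTO OutT VALUES (?)", [(1,), (3,)])  # packet 2 lost
lost = conn.execute(
    "SELECT id FROM InT WHERE id NOT IN (SELECT id FROM OutT)").fetchall()
```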

All the examples above are one-shot queries, and applications can periodically pull new data from the APLAD executors. 
If an application wants a continuous data stream (e.g., traffic
volume or RTT each second), 
a proxy can be added between the distributed database and
the application, which queries the database periodically and pushes data to the application.

\subsubsection{Distributed Query Execution}
\label{framework}

\begin{figure}[h]
\centering
\includegraphics[width=0.7\textwidth]{fig/ana_frame.pdf}
%\includegraphics[width=2.6in]{fig/ana_frame.pdf}
\caption{Analysis Framework}
\label{fig:analysis_framework}
\end{figure}

The data analysis framework in Figure~\ref{fig:analysis_framework} can
be viewed as running atop a distributed database. Each tenant's
diagnosis forms a schema, including the metadata table in the analysis
manager and trace tables in the query executors. The metadata table
records how the tenant's traces are collected and parsed, and each
tenant can only access its own information as a view. The trace tables
are parsed traces which are distributed to query executors.

When a query is submitted to the analysis manager, the query is
optimized to an execution plan as in typical distributed
databases~\cite{dis_query}. 
In APLAD, each table is placed locally at a query executor. This benefits
query execution: single-table operations do not need to move data across
the network, and for multi-table operations the traffic volume introduced
into the network can be predicted, so the analysis manager can decide each
executor's share of a query and construct better execution plans, for
example, using dynamic programming. 


\subsection{Scalability Optimizations} 

Below, we describe a number of optimizations to improve the
scalability of APLAD as the size of the data center and number of
virtual network endpoints grow.


\subsubsection{Local Table Server Placement}
\label{sec:algo}
Replicating traffic from capture points to table servers is a
major source of both bandwidth and processing overhead in APLAD. 
Flow capture points can be placed on either virtual or
physical switches.
Assuming all appliances (including tenant VMs, middleboxes, and
network services) participate in the overlay as VMs, the physical network
acts as a carrier of virtual links (tunnels) between these VMs. In this case,
APLAD can always place capture points on hypervisor virtual switches.
Virtual switches are the endpoints of virtual links, so it is easier to
disambiguate traffic of different tenants there, because captured packets
have already been decapsulated from their tunnels.   Also, if a capture point were
placed on a physical switch, the trace traffic would have to traverse the
network to reach the trace collector, adding bandwidth overhead.
Finally, current virtual switches, such as Open vSwitch (OVS),
support flexible flow replication using OpenFlow rules, a capability
found in a relatively small (though growing) number of physical network devices.
\revise{\#5}{If a virtual network service is implemented in physical appliances,
the trace capture points can be placed in the access switch
or a virtual network gateway.
}

APLAD also places a table server locally on the same hypervisor as 
its capture point, which keeps
all trace collection traffic {\em local to the hypervisor}. 
Data movement across the network is needed only when a distributed
query is executed. By allocating table servers in a distributed way across
the data center, all data 
storage and computation are distributed, so APLAD 
scales with the cluster and virtual network size.

\subsubsection{Independent Flow Collection Rules}
\label{sec:multi-table}
\begin{figure}
\centering 
\includegraphics[width=0.7\textwidth]{fig/multi_table.pdf} 
\caption{Flow capture with multiple tables} 
\label{fig:multi_table} 
\end{figure}

Open vSwitch (OVS) allows us to capture a specific flow by installing
OpenFlow rules to replicate the flow to its local table
server. However, there may already be OpenFlow rules installed on the
OVS for forwarding or other purposes.  We have to make sure that the
flow collection rules do not interfere with those existing
rules. Similar problems have been addressed by tools like
Frenetic~\cite{frenetic}.

For example, if existing rules route flows by destination IP and the
tenant wants to monitor port 80 traffic, the administrator needs
to install the monitoring rule for port 80 and also rules for the
overlapping flow space (IP, port 80) of both; otherwise the switch
applies only one rule to the overlapping part and ignores the other.
Moreover, when the cloud controller updates a routing rule, it must check
whether any diagnostic rules overlap with it; if so, the
cloud controller needs to update both the original rule and the
overlapping rules.  Managing the diagnostic routing rules this way
not only consumes excessive routing table entries, but also
adds complexity to the existing routing policy and other components.

APLAD solves this problem by using the multi-table option in Open
vSwitch (Figure~\ref{fig:multi_table}).  We use two tables in APLAD with
flow collection rules installed in Table 0 and forwarding rules
written into Table 1. Table 0 is the first consulted table for any
packet, where there is a default rule to forward packets to Table
1. When the administrator wants to capture a certain flow, new rules
are added into Table 0 with actions that send packets to the table
server port and also forward to Table 1.  Using this simple approach,
we avoid flow capture rules impacting existing rules on the same
switch.

\subsection{Flow Correlation}
\label{sec:correlation}
In a virtual network, a flow usually traverses several logical hops, such 
as virtual switches and middleboxes.  When cloud tenants experience poor
network performance or incorrect behavior in their virtual networks, 
the ability to correlate flows and trace the flows along their paths
is necessary to locate the malfunctioning components. 
For example, when multiple clients fetch files from a set of back-end
servers, and one of the servers provides corrupted files, with flow
correlation on its path, one can follow the failed client's flow
in reverse to the server to locate the malfunctioning server.

It is easy to identify a flow based on the packet header if packets
are simply forwarded by routers or switches.  However, middlebox
devices may change the packet header or even payload of incoming
flows, which makes it very difficult to correlate the traffic flows on
their paths. We summarize several flow trajectory scenarios and
discuss how flows can be correlated in these cases.

(1) Packets pass a virtual appliance with some of its header fields unchanged.
Examples of such appliances are firewalls or intrusion detection systems.  
We define a packet's fingerprint (packet ID) on those fields  
to distinguish it from other packets.  We use SQL to 
describe the flow correlation:\\
\centerline{\small
\centering
\begin{tabular}{|l|}
\hline
select * from T1 join T2 on T1.id = T2.id \\
\hline
\end{tabular}
}

For example, the IP header has an identification field that does not
change across hops, and the TCP header has a sequence number that is unique
within a flow if the packet is not retransmitted.  We can thus define a packet
ID as IP.id + (TCP.seq $\ll$ 16).  We add an id field to the trace tables to
store the packet ID.  This ID can be used to correlate the packets
entering and leaving a middlebox.
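A minimal sketch of this fingerprint; the bit layout follows the formula in the text, and the field values are made up:

```python
# Sketch: packet-ID fingerprint from fields a header-preserving
# middlebox (e.g., a firewall) leaves unchanged.
def packet_id(ip_id, tcp_seq):
    # IP identification in the low 16 bits, TCP sequence number above it
    return ip_id + (tcp_seq << 16)

# the same packet observed before and after the middlebox yields the same ID
before = packet_id(ip_id=0x1234, tcp_seq=1000)
after = packet_id(ip_id=0x1234, tcp_seq=1000)
```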

(2) Some appliances, such as NAT and layer-4 load balancers, may
change the entire packet header but do not change packet payloads. In
this case, a flow can be identified using its payload information.  We
define a packet's fingerprint (packet ID) as hash(payload) in the
trace table. So packets in both input and output traces can still be
joined by packet ID.
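A minimal sketch of the payload-based fingerprint, here using SHA-256 as one possible hash function (the payloads are made up):

```python
# Sketch: fingerprint for header-rewriting appliances (e.g., NAT) that
# leave the payload unchanged -- hash the payload.
import hashlib

def payload_id(payload: bytes) -> str:
    return hashlib.sha256(payload).hexdigest()

# the same payload seen on both sides of a NAT maps to the same packet ID
inside = payload_id(b"GET / HTTP/1.1")
outside = payload_id(b"GET / HTTP/1.1")
```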

Recent work~\cite{flowtags} proposes to add tags to the packets and 
modify middleboxes to keep the tag, so that a middlebox's ingress 
and egress packets can be mapped by tags. Another approach is to 
treat middleboxes as opaque and use a heuristic algorithm to find 
the mapping~\cite{simple}.  In the view of APLAD, both methods 
are giving the packets a fingerprint (in the latter case the 
fingerprint is not strictly unique) -- APLAD can support both methods.

(3) There are still cases where the packet header is changed and the
payload is not distinguishable from the input and output of certain
appliances. For example, multiple clients fetch the same web pages
from a set of backend servers via a load balancer.  A layer-4 load
balancer usually breaks one TCP connection into two, that is, the load
balancer accepts the connection request from the client and starts
another connection with backend servers. In this case, a flow's header
is totally changed; the input and output packet headers have no
relation.  If all clients fetch the same file, then the payload is
also not distinguishable among all flows.

In this case, we use the flows' creation time sequence to correlate
them.  Usually, the load balancer listens on a port continuously.  When
a connection from a client is accepted, the load balancer creates a new
thread in which it connects to one of the backend servers. So the first
ACK from the client to the load balancer (the third step of the
three-way handshake) indicates that the client has successfully
connected to the load balancer; the load balancer then creates a new
thread to connect to a server, and the first SYN from the load balancer
to the server (the first step of the three-way handshake) indicates the
load balancer has started connecting to that server. So if the first
ACKs on the client side and the first SYNs on the server side are each
ordered by arrival time, the two packets belonging to the same flow
should appear at the same position in both
sequences.\\ \centerline{\small
%\begin{tabular}{|p{0.45\textwidth}|}
\begin{tabular}{|l|}
\hline
create table inbound as fields, order \\
create table outbound as fields, order \\
var1 = select min(ts), fields from INPUT where ackflag=1\\
\hspace{4em}group by srcIP, dstIP, srcPort, dstPort\\
index = 0 \\
for record in var1 \\
\hspace{1em}  insert into inbound $<$record, index++$>$ \\
var2 = select min(ts), fields from OUTPUT where synflag=1\\
\hspace{4em}group by srcIP, dstIP, srcPort, dstPort\\
index=0 \\
for record in var2 \\
\hspace{1em} insert into outbound $<$record, index++$>$\\
\hline
\end{tabular}
}
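The ordering logic above can be sketched in Python; the flow names and timestamps are invented, with \texttt{inbound} holding first-ACK times on the load balancer's client side and \texttt{outbound} holding first-SYN times on its server side:

```python
# Sketch: correlate load-balancer flows by creation order -- sort each
# side by timestamp and pair entries at the same rank.
inbound = [("c1", 1.00), ("c2", 1.20), ("c3", 1.50)]   # (client flow, first-ACK ts)
outbound = [("s3", 1.52), ("s1", 1.01), ("s2", 1.21)]  # (server flow, first-SYN ts)

def correlate(inbound, outbound):
    ins = [f for f, _ in sorted(inbound, key=lambda r: r[1])]
    outs = [f for f, _ in sorted(outbound, key=lambda r: r[1])]
    # entries at the same position are assumed to be the same logical flow
    return dict(zip(ins, outs))

mapping = correlate(inbound, outbound)
```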
