\section{Background}
\label{sec:back}
\subsection{Virtual Networks in the Cloud}
\label{sec:vnet}
\begin{figure}[htb]
\centering
\includegraphics[width=0.45\textwidth]{fig/vnet.pdf}
%\includegraphics[width=2.5in]{fig/vnet.pdf}
\caption{Virtual Overlay Networks}
\label{fig:vnet}
\end{figure}
Figure~\ref{fig:vnet} shows an example virtual network for a cloud tenant.
In this example, tenant virtual machines are organized into two
subnets. Virtual machines in the same IP subnet share a broadcast
domain and communicate with external hosts via their subnet
gateway. The cloud platform can also provide network services to the
virtual networks, such as a DHCP server in a subnet, a load balancer
or intrusion detection system on a virtual link, or a firewall on a
gateway.
The virtual network is constructed as an overlay network running on
the physical network. In a large-scale cloud environment, a large
number of tenant networks may run on the shared physical
infrastructure.

The virtual machines run atop hypervisors and connect to in-hypervisor
virtual switches (e.g., Open vSwitch).  To decouple the virtual
network from the physical network, tunnels are set up among all the
virtual switches. Several tunneling techniques, such as NVGRE, VXLAN,
and STT, have been proposed to support efficient encapsulation among
virtual switches.  All tenant traffic is sent through these tunnels,
and distinct tunnel IDs in the encapsulation header isolate one
tenant's traffic from another's.
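To illustrate, this tunnel-ID-based isolation can be sketched in a few
lines of Python; the tenant names, VNI values, and packet
representation below are purely hypothetical and not taken from any
particular overlay implementation:

```python
# Illustrative sketch: per-tenant tunnel IDs in the encapsulation header
# keep one tenant's traffic from ever being delivered to another tenant.

TENANT_VNI = {"tenant-a": 1001, "tenant-b": 1002}  # tenant -> tunnel ID (e.g., a VXLAN VNI)

def encapsulate(tenant, inner_packet, src_vtep, dst_vtep):
    """Wrap a tenant packet in an overlay header carrying the tenant's tunnel ID."""
    return {"outer_src": src_vtep, "outer_dst": dst_vtep,
            "vni": TENANT_VNI[tenant], "inner": inner_packet}

def decapsulate(encap_packet, expected_tenant):
    """Deliver the inner packet only if the tunnel ID matches the tenant."""
    if encap_packet["vni"] != TENANT_VNI[expected_tenant]:
        return None  # traffic carrying another tenant's ID is dropped
    return encap_packet["inner"]
```

A diagnosis service must look inside exactly this encapsulation to
attribute captured traffic to the right tenant.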

Routing and other network services are implemented as logical or
virtual components. For example, OpenStack supports routing across
different networks using a virtual router function that installs a
distributed routing table on all of the hypervisors. Middlebox
services are implemented by directing traffic through multiple virtual
or physical middlebox appliances.
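A distributed virtual router of this kind can be pictured as one
logical routing table replicated on every hypervisor, with each
replica answering lookups locally. The routes and names in the sketch
below are invented for illustration and are not OpenStack code:

```python
import ipaddress

# Illustrative sketch: the tenant's single logical routing table,
# a copy of which is installed on each hypervisor.
LOGICAL_ROUTES = [("10.1.0.0/24", "subnet-1"),
                  ("10.2.0.0/24", "subnet-2"),
                  ("0.0.0.0/0", "external-gateway")]

def lookup(dst_ip, routes=LOGICAL_ROUTES):
    """Longest-prefix match over the tenant's logical routing table."""
    dst = ipaddress.ip_address(dst_ip)
    best = max((ipaddress.ip_network(prefix) for prefix, _ in routes
                if dst in ipaddress.ip_network(prefix)),
               key=lambda net: net.prefixlen)  # most specific match wins
    return dict(routes)[str(best)]
```

Because every hypervisor holds the same table, a misconfigured route
manifests everywhere at once, which is one reason coordinated
diagnosis across hypervisors is needed.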


\subsection{Challenges of Virtual Network\\~Diagnosis}
\label{sec:challenge}
Once the basic network is set up, configuring various aspects of the
network, e.g., firewall rules, routing adjacencies, etc., requires
coordinated changes across multiple elements in the tenant's
topology. Many things can go wrong in configuring such a complex
system, including incorrect virtual machine settings and misconfigured
gateways or middleboxes. To complicate matters further, failures can
occur in underlying physical infrastructure elements that are not
visible in the virtual networks.  Hence, diagnosing virtual networks
in a large-scale cloud environment introduces several concomitant
technical challenges, described below.
 

{\bf Challenge 1: Preserving abstractions.} Tenants work with an
abstract view of the network, and the diagnosis approach should
preserve this abstract view.  Details of the physical locations from
which data is gathered should be hidden, allowing tenants to analyze
data that corresponds to their logical view of the network.
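One way to picture this abstraction boundary is a provider-side
mapping from physical capture points to the tenant's logical
components. The sketch below, with hypothetical host and link names,
rewrites raw capture records into the tenant's view and drops records
belonging to other tenants:

```python
# Illustrative sketch: the provider privately maps physical capture points
# to logical topology elements; tenants only ever see the logical names.
PHYS_TO_LOGICAL = {
    ("hypervisor-7", "vport-3"): ("tenant-a", "vm1<->subnet-1-switch"),
    ("hypervisor-9", "vport-1"): ("tenant-a", "subnet-1-gateway"),
}

def to_tenant_view(record, tenant):
    """Rewrite a raw capture record into the tenant's abstract view,
    dropping records that belong to other tenants."""
    owner, logical_link = PHYS_TO_LOGICAL[(record["host"], record["port"])]
    if owner != tenant:
        return None  # never leak another tenant's traffic
    return {"link": logical_link, "pkt": record["pkt"]}  # no physical details
```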

{\bf Challenge 2: Low-overhead network information collection.}  Most
network diagnostic mechanisms collect information by tracing flows on
network devices~\cite{ndb, ofrewind}.  In traditional enterprise and
ISP networks, operators and users rely on mechanisms built into
physical switches and routers, such as NetFlow, sFlow, or port
mirroring. In the cloud environment, however, the virtual network is
constructed from software components, such as virtual switches and
virtual routers.  Capturing traces of high-throughput flows injects
significant traffic into the network and load onto the switches.  As
the cloud infrastructure is shared among tenants, the virtual network
diagnostic mechanisms must limit their impact on switching performance
and their effect on other tenants' and applications' flows.
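A back-of-the-envelope model shows why: port mirroring replicates each
inspected flow, so the mirrored copies alone can consume a substantial
fraction of a link. The function below is a simple illustrative model
(not a measurement), with sampling as one knob for reducing the
overhead:

```python
def capture_load(flow_rates_gbps, link_gbps, sample_rate=1.0):
    """Fraction of link capacity consumed by mirrored trace traffic.

    Mirroring duplicates each inspected flow, so capture traffic equals
    the sum of inspected-flow rates; sample_rate < 1.0 models packet
    sampling as a way to cut the overhead. Purely illustrative.
    """
    return sample_rate * sum(flow_rates_gbps) / link_gbps
```

For instance, mirroring two flows of 2 and 3\,Gbps onto a 10\,Gbps
link would, under this model, consume half the link; sampling at 10\%
cuts that to 5\%.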


{\bf Challenge 3: Scaling to many tenants.}  Providing a network
diagnosis service even to a single tenant requires collecting the
flows of interest and analyzing the (potentially distributed) flow
data. All of these operations consume network bandwidth or CPU
cycles. In a large-scale cloud where many tenants may request
diagnosis services simultaneously, data collection and analysis can
become significant bottlenecks, impacting both the speed and
effectiveness of troubleshooting and affecting existing network
traffic.

{\bf Challenge 4: Disambiguating and correlating flows.} To provide
network diagnosis services for cloud tenants, the service provider
must be able to identify the right flows for different tenants and
correlate them across different network components. This problem is
particularly challenging in cloud virtual overlay networks for two
reasons: (1) tunneling/encapsulation makes tracing tenant-specific
traffic on intermediate hops of a tunnel difficult; (2) middleboxes
and other services may transform packets, further complicating
correlation.  For example, NATs rewrite IP addresses and ports, and a
WAN optimizer may ``compress'' the payload of multiple incoming
packets into a few outgoing packets.
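To make the correlation problem concrete, consider a source NAT: the
pre- and post-NAT five-tuples of the same logical flow differ, so
matching them up requires access to the middlebox's translation
state. The sketch below uses hypothetical addresses and a made-up
translation table:

```python
# Illustrative sketch: without the NAT's translation table, the two sides
# of the same flow look like unrelated traffic to a trace analyzer.
NAT_TABLE = {  # (private ip, private port) -> (public ip, public port)
    ("10.1.0.5", 43210): ("203.0.113.7", 60001),
}

def same_flow(pre_nat, post_nat):
    """True if post_nat is pre_nat after source translation."""
    translated = NAT_TABLE.get((pre_nat["src_ip"], pre_nat["src_port"]))
    return (translated == (post_nat["src_ip"], post_nat["src_port"]) and
            (pre_nat["dst_ip"], pre_nat["dst_port"]) ==
            (post_nat["dst_ip"], post_nat["dst_port"]))
```

Transformations such as WAN-optimizer compression are harder still,
since there is no per-packet mapping at all between input and output.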



\subsection{Limitations of Existing Tools}
% necessity
\label{sec:related}

There are many network diagnosis tools designed for the Internet or
enterprise networks. These tools diagnose network problems in various
settings, but due to the unique challenges of multi-tenant cloud
environments, they cannot be used to provide a virtual network
diagnosis service. We discuss existing diagnosis tools in two
categories: tools deployed in the infrastructure and tools deployed in
the virtual machines.

Solutions deployed on the network infrastructure, such as NDB~\cite{ndb},
OFRewind~\cite{ofrewind}, Anteater~\cite{anteater}, HSA~\cite{hsa},
Veriflow~\cite{veriflow}, and Frenetic~\cite{frenetic}, could be used in
data centers to troubleshoot problems in network state.
However, these tools expose all the raw network information in the
process. In the context of the cloud, this violates isolation across
tenants and may expose crucial information about the infrastructure,
creating vulnerabilities to potential attacks. In addition, these
solutions are either inefficient or insufficient for virtual network
diagnosis. For example, OFRewind collects all control and data packets
in the network, which introduces significant overhead on the existing
network.  NDB's trace collection granularity is constrained by the
existing routing rules, which is not flexible enough for cloud tenants
to diagnose specific application issues.  Anteater, HSA, and Veriflow
model network forwarding behavior and can check reachability or
isolation, which limits them to routing problems. Frenetic focuses on
programming individual switches and does not address network-wide
problems in the virtual network.


Many network monitoring or tracing tools, such as tcpdump,
SNAP~\cite{snap}, and X-Trace~\cite{xtrace}, can be deployed in client
virtual machines for network diagnosis. These tools, however, are
usually heavyweight, and it may not be possible to apply them to
virtual appliances such as a distributed virtual router or a firewall
middlebox. More importantly, simply collecting traffic traces is not
enough to provide a virtual network diagnosis service. In such a
service, tenants also need to be able to perform meaningful analysis
that helps them tie the observed problem to an issue with their
virtual network, or to some underlying problem with the provider's
infrastructure.


Thus, we need a new approach to enable virtual network diagnosis,
involving both trace collection and analysis. This new approach must
overcome the challenges described in Section~\ref{sec:challenge}.

