% Pre-lim
% by Eric Benedict


\chapter{Introduction}
\section{Background}
\label{sec:back}
Cloud computing has transformed a large part of the IT industry. In a public
cloud, the cloud provider maintains a physical infrastructure, 
and provides computing resources to its tenants according to their
requirements. Tenants are charged under a ``pay-as-you-go'' pricing model.
This model gives tenants agile deployment of their applications and eases
their maintenance; the pricing scheme reduces tenants' costs
and creates revenue for the provider.
The computing resources offered by a cloud provider can take the form of software as a service (SaaS),
platform as a service (PaaS), or infrastructure as a service (IaaS)~\cite{berkeley-cloud}.
\subsection{Data Center Networking}
Cloud data centers (DCs) can be hyper-scale or warehouse-scale, supporting over 100,000 end hosts.
The hosts exchange network traffic with external networks (north-south traffic)
and with each other (east-west traffic). Applications in data centers
often generate significant volumes of east-west traffic, so the data center
needs to be horizontally scalable -- i.e., it can be scaled by adding more network switches, links,
and servers~\cite{bgp-routing}.

A traditional data center network topology is a three-layer tree. The three layers are the core, aggregation,
and access layers (we also refer to them as tier-1, tier-2, and tier-3).
The tier-3 switches connect to physical servers. To satisfy server-to-server
bandwidth demand, layers farther from the servers have higher port density
and link capacity (e.g., ``trunk'' links) to reduce oversubscription. An alternative topology
is a Clos topology (a.k.a. a fat-tree)~\cite{fattree}, which provides better horizontal
scalability. Each tier-3 switch (a.k.a. a ToR switch or an access switch) and its
servers form a rack. Several racks form a pod. In a pod, tier-3 switches and tier-2
switches are connected as a full bipartite graph. For each tier-1 switch,
each pod contributes one uplink port from one of its tier-2 switches to connect to that tier-1 switch.
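The structure above can be sketched numerically. The following is a minimal illustration using the canonical $k$-ary fat-tree parameterization from the fat-tree literature (the function name and dictionary keys are this sketch's own, not notation from this thesis):

```python
def fattree_sizes(k):
    """Return switch and host counts for a k-ary fat-tree (k must be even)."""
    assert k % 2 == 0, "k must be even"
    core = (k // 2) ** 2              # tier-1 switches
    agg_per_pod = k // 2              # tier-2 switches per pod
    edge_per_pod = k // 2             # tier-3 (ToR) switches per pod
    hosts_per_edge = k // 2           # servers per rack
    pods = k
    hosts = pods * edge_per_pod * hosts_per_edge   # = k^3 / 4
    return {"pods": pods, "core": core,
            "agg": pods * agg_per_pod,
            "edge": pods * edge_per_pod,
            "hosts": hosts}

print(fattree_sizes(48)["hosts"])  # 27648 hosts from 48-port switches
```

This illustrates the horizontal-scalability point: host count grows as $k^3/4$ using only identical commodity switches, rather than by buying denser, higher-capacity switches for the upper tiers.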

Traditional tree-topology DC networks use the Spanning Tree Protocol (STP) to achieve
loop-free layer-2 forwarding. In larger-scale Clos networks, operators usually use
OSPF or BGP for routing and ECMP for load balancing~\cite{bgp-routing}.
Recently, software-defined networking (SDN) has been introduced into data center networking. 
In SDN, the switches' control planes are moved into an SDN controller, which
computes routes and pushes configurations to each switch. As an implementation
of SDN, the OpenFlow protocol allows forwarding decisions to be based on layer-2 through layer-4
headers, making flow control more flexible and finer grained. Other
routing designs such as Hedera~\cite{hedera} can be easily implemented in an SDN architecture.
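The ECMP load balancing mentioned above can be sketched in a few lines. This is an illustrative model only (real switches hash the header fields in hardware, and the hash function and field encoding here are assumptions of this sketch): a flow's 5-tuple is hashed to pick one of several equal-cost next hops, so every packet of a flow takes the same path and is not reordered.

```python
import hashlib

def ecmp_next_hop(src_ip, dst_ip, proto, src_port, dst_port, next_hops):
    """Pick one of several equal-cost next hops by hashing the 5-tuple.
    The same flow always maps to the same path, avoiding reordering."""
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    digest = hashlib.sha256(key).digest()
    index = int.from_bytes(digest[:4], "big") % len(next_hops)
    return next_hops[index]

uplinks = ["agg1", "agg2", "agg3", "agg4"]
# The same flow hashes to the same uplink every time:
a = ecmp_next_hop("10.0.0.1", "10.0.1.2", 6, 4321, 80, uplinks)
b = ecmp_next_hop("10.0.0.1", "10.0.1.2", 6, 4321, 80, uplinks)
assert a == b and a in uplinks
```

This per-flow determinism is also ECMP's weakness -- two large flows can hash onto the same uplink -- which is exactly the imbalance that flow-scheduling designs like Hedera try to correct.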
\subsection{Virtual Networks}
\label{sec:vnet}
Once the infrastructure is connected and the routing is configured, 
the whole switch fabric can be viewed as a network resource pool (a big virtual switch) 
providing connectivity and bandwidth capacity among all the physical servers. 
Cloud services are deployed in the physical servers, and applications
communicate with each other via the switch fabric. Among the cloud service models (SaaS,
PaaS, IaaS), this thesis focuses on infrastructure as a service.

In a cloud that provides infrastructure as a service, cloud tenants' network 
requirements are described as virtual networks, which can be sophisticated logical
network topologies connecting their virtual machines (VMs) and other network
appliances, such as routers or middleboxes. Tenants can flexibly define
policies on different virtual links in this topology~\cite{cloudnaas,stratos-tr}. 
Recent progress on network virtualization has made it possible to run
multiple virtual networks on a shared physical network, and decouple
the virtual network configuration from the underlying physical
network. 

The underlying infrastructure then takes care of realizing the
virtual networks by, for example, deploying VMs and virtual appliances,
instantiating the virtual links, setting up traffic shapers or bandwidth
reservations as needed, and logically isolating the traffic of
different tenants (e.g., using VLANs or tunnel IDs).
While virtual networks can be implemented in a number of ways, we
focus on the common overlay-based approach adopted by several cloud
networking platforms. Examples that support such functionality include
OpenStack Neutron~\cite{openstack}, VMware/Nicira's NVP~\cite{nvp},
and IBM DOVE~\cite{dove}. Configuring the virtual networks requires
setting up tunnels between the deployed VM instances and usually
includes coordinated changes to the configuration of several VMs,
virtual switches, and potentially physical
switches and virtual/physical network appliances. 

\begin{figure}[htb]
\centering
\includegraphics[width=0.45\textwidth]{fig/vnet.pdf}
\caption{Virtual Overlay Networks}
\label{fig:vnet}
\end{figure}
Figure~\ref{fig:vnet} shows an example virtual network for a cloud tenant.
In this example, tenant virtual machines are organized into
two subnets. Virtual machines in the same IP subnet are in the
same broadcast domain and communicate with external hosts via
their subnet gateway. The cloud platform can also provide network
services to the virtual networks, such as a DHCP server in a subnet, a
load balancer or intrusion detection system on a virtual link, or a
firewall on a gateway.
The virtual network is constructed as an overlay network running on
the physical network. In a large scale cloud environment, there could
be a large number of tenant networks running on the shared physical
infrastructure.

The virtual machines run atop hypervisors and connect to in-hypervisor
virtual switches (e.g., Open vSwitch).  To decouple the virtual
network from the physical network, tunnels are set up among all the
virtual switches. Several tunneling techniques, such as NVGRE,
VxLAN, and STT, have been proposed to
support efficient encapsulation among virtual switches. All tenant
traffic is sent through these tunnels, with different tunnel IDs in the
encapsulation header providing isolation between tenants.
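The tunnel-ID mechanism can be illustrated with a simplified VXLAN-style encapsulation. This is a sketch, not a full implementation: it models only the 8-byte VXLAN header (flags byte plus a 24-bit VXLAN Network Identifier, VNI), omitting the outer Ethernet/IP/UDP headers that a real virtual switch would also prepend.

```python
import struct

def vxlan_encap(vni, inner_frame):
    """Prepend a simplified 8-byte VXLAN header (flags + 24-bit VNI)."""
    assert 0 <= vni < 2 ** 24, "VNI is a 24-bit field"
    flags = 0x08                       # 'I' bit: VNI field is valid
    header = struct.pack("!B3xI", flags, vni << 8)
    return header + inner_frame

def vxlan_decap(packet):
    """Return (vni, inner_frame); reject packets without a valid VNI."""
    flags, vni_field = struct.unpack("!B3xI", packet[:8])
    if not flags & 0x08:
        raise ValueError("VNI flag not set")
    return vni_field >> 8, packet[8:]

pkt = vxlan_encap(5001, b"tenant-A ethernet frame")
vni, frame = vxlan_decap(pkt)
assert (vni, frame) == (5001, b"tenant-A ethernet frame")
```

Because each tenant's frames carry a distinct VNI, a virtual switch that demultiplexes on the VNI never delivers one tenant's frames into another tenant's broadcast domain, even though all tunnels share the same physical underlay.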

Routing and other network services are implemented as logical or
virtual components. For example, OpenStack supports routing across
different networks using a virtual router function which installs a
distributed routing table on all of the hypervisors. Middlebox
services are implemented by directing traffic through multiple virtual
or physical middlebox appliances.
\section{Cloud Organization}
There are three elements in a cloud architecture -- the control plane, the
data plane and the application plane. Each plane performs its own functionality;
a cloud provider and tenants operate on different planes.

{\bf The control plane} is in charge of configuring virtual networks.
It maintains and monitors the physical infrastructure (connectivity, resource utilization, etc.), 
responds to a tenant's virtual network requirements, computes the virtual-to-physical mapping,
and generates and deploys the configurations (routing rules, VM setup, etc.). 
The control plane functions are usually integrated into a cloud controller. For example,
in OpenStack, the controller has components managing storage, VMs, and network connectivity.
These components can monitor the current states of the physical devices (switches,
workstations, etc.) and configure them. 
%When a tenant's virtual network requirement is proposed, the cloud
%controller runs an allocation algorithm with the virtual network, current
%state of the physical infrastructure (topology, utilization, etc.) and
%the policy (requirements on security or performance, etc.) as inputs, and outputs
%the virtual-to-physical mapping.

{\bf The data plane} is in charge of delivering a tenant's traffic between its VMs or
to/from external networks. It obeys the configurations from
the control plane. The data plane is composed of the physical switch data plane, the
middlebox data plane,
and the datapath in hypervisors. The datapath in hypervisors includes
physical NICs, NIC drivers, hypervisor network stacks, virtual switches, and the VMs' virtual NICs.
All tenants' application traffic must traverse the data plane to reach its
destination.

{\bf The application plane} carries tenants' distributed applications, which
generate tenants' network traffic. It is composed of the guest OSes and applications
in the tenants' VMs. Tenants deploy various applications in the application plane to
satisfy their requirements, resulting in significant
heterogeneity between different tenants.

\section{Virtual Network Diagnostic Problem}
\subsection{Problem Statement}
% system is complicated
% two roles' existing prevent a complete solution of the whole system
In the cloud architecture, none of the three planes
can guarantee 100\% availability and correctness;
misconfigurations, software/hardware bugs, and compatibility issues may occur
anywhere from the physical devices through the virtualization layer up to the
tenants' distributed applications. These faults lead to connectivity issues,
performance problems, information leakage, etc.
The system's complexity also makes failures to coordinate
its different components possible.

In addition, two roles are involved in the cloud -- the tenants and the provider --
and a barrier exists between them.
The provider is responsible for the physical infrastructure and the virtualization,
and confidentiality prevents the provider from inspecting the
tenants' VMs. The tenants take charge of the distributed applications
inside their VMs, and isolation and abstraction preclude them from directly
accessing the physical infrastructure. In view of these challenges and constraints,
it is difficult to provide
a solution capable of monitoring and diagnosing all layers in the cloud.

This thesis aims to systematically develop a complete solution for diagnosing virtual network
problems in the public cloud environment.
We explore solutions to diagnose virtual-network-related problems in each plane,
and also evaluate their cost and performance.

\subsection{Methodology}
To provide a complete solution for virtual network troubleshooting
in public clouds,
we propose diagnostic solutions for each of the three planes.

In the application plane, a tenant only has
a view of its own virtual network, which is composed of VMs, virtual links, and
network functions (i.e., middleboxes implemented as software in virtual machines).
We provide an Application PLAne Diagnostic solution (APLAD) to detect problems
in the application plane, including the VMs' guest OSes, applications, and some tenant-deployed
middleboxes. APLAD is deployed on the tenant's demand; it collects application
traffic traces, parses them, and provides a SQL interface for the tenant
to diagnose problems inside their virtual networks.
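The flavor of APLAD's SQL interface can be illustrated with a toy example. The schema, table name, and query below are this sketch's own assumptions (not APLAD's actual interface): parsed trace records land in a relational table, and the tenant asks diagnostic questions in plain SQL.

```python
import sqlite3

# Illustrative trace table: one row per parsed packet record.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE trace (
    ts REAL, src TEXT, dst TEXT, proto TEXT, bytes INTEGER)""")
conn.executemany("INSERT INTO trace VALUES (?,?,?,?,?)", [
    (0.01, "10.0.0.2", "10.0.0.5", "TCP", 1500),
    (0.02, "10.0.0.2", "10.0.0.5", "TCP", 1500),
    (0.03, "10.0.0.9", "10.0.0.5", "UDP", 200),
])

# Example diagnostic query: who is sending the most traffic to a slow VM?
rows = conn.execute("""SELECT src, SUM(bytes) AS total FROM trace
                       WHERE dst = '10.0.0.5'
                       GROUP BY src ORDER BY total DESC""").fetchall()
print(rows)  # [('10.0.0.2', 3000), ('10.0.0.9', 200)]
```

The appeal of such an interface is that the tenant can express ad hoc hypotheses (heavy hitters, retransmission bursts, unexpected peers) as queries, without the provider ever needing to look inside the tenant's VMs.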

In the control plane, a tenant's virtual network requirements are transformed
into an actual deployment on the physical infrastructure. There are three layers
in the control plane: the logical view and policies, the physical view, and device
states. Packets are ultimately forwarded according to the routing states in devices.
We provide a Control PLAne Diagnostic solution (CPLAD), which
uses sFlow to sample traffic and uses big-data analytic techniques to validate
whether the packet forwarding behavior matches a tenant's logical network view
and other network principles and invariants.
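A minimal sketch of the kind of check CPLAD performs is shown below. The sample format, inventory mapping, and policy representation are assumptions of this sketch (real sFlow records and CPLAD's internal representations are richer): sampled (source, destination) pairs observed in the fabric are compared against the tenant's logical reachability policy, and any flow the logical view does not permit is flagged.

```python
# Tenant's logical view: which roles may talk to which (illustrative).
ALLOWED = {("web", "app"), ("app", "db")}

def violations(samples, inventory, allowed=ALLOWED):
    """Return observed flows that the logical view does not permit.

    samples   -- iterable of (src_ip, dst_ip) pairs from traffic sampling
    inventory -- map from IP address to the VM's logical role
    """
    bad = []
    for src_ip, dst_ip in samples:
        pair = (inventory.get(src_ip, "unknown"),
                inventory.get(dst_ip, "unknown"))
        if pair not in allowed:
            bad.append((src_ip, dst_ip))
    return bad

inventory = {"10.0.0.2": "web", "10.0.0.3": "app", "10.0.0.4": "db"}
samples = [("10.0.0.2", "10.0.0.3"),   # web -> app: permitted
           ("10.0.0.2", "10.0.0.4")]   # web -> db: policy violation
print(violations(samples, inventory))  # [('10.0.0.2', '10.0.0.4')]
```

Because sFlow samples packets rather than capturing them all, a check like this scales to large fabrics, at the cost of only probabilistically catching low-rate violations.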

In the data plane, a tenant's network traffic is delivered between VMs and
external networks. The data plane includes the physical switch data planes,
middlebox data planes, and the datapath in hypervisors. We found that performance isolation between
tenants is difficult to guarantee completely in some cases due to resource sharing (NIC,
bus, etc.). We provide a Data PLAne Diagnostic solution (DPLAD) to find the performance
bottleneck of a virtual network in the data plane.
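The intuition behind bottleneck finding can be sketched as follows. This is an illustrative model, not DPLAD's algorithm: a virtual link maps to a path of shared physical resources (virtual NIC, hypervisor datapath, ToR uplink, etc.), and the bandwidth the virtual link can attain is bounded by the hop with the least spare capacity. The hop names and numbers below are made up for illustration.

```python
def bottleneck(path, capacity, load):
    """Return (hop, residual) for the most constrained hop on the path.

    capacity and load are in Mbps; residual = capacity - current load.
    """
    residuals = {hop: capacity[hop] - load[hop] for hop in path}
    hop = min(residuals, key=residuals.get)
    return hop, residuals[hop]

capacity = {"vnic": 10_000, "hypervisor": 8_000, "tor_uplink": 10_000}
load     = {"vnic":  2_000, "hypervisor": 6_500, "tor_uplink":  4_000}

print(bottleneck(["vnic", "hypervisor", "tor_uplink"], capacity, load))
# ('hypervisor', 1500)
```

In this example the hypervisor datapath, not any physical switch link, is the limiting hop -- the kind of finding that can guide both troubleshooting and the VM placement and migration decisions mentioned in the contributions below.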
\section{Contributions}
The contributions of this thesis can be summarized as follows. 
\begin{itemize}
\item Our work is the first to address the problem of virtual network diagnosis
and the technical challenges of providing such a service in the cloud.
We propose the design of an APLAD framework for cloud tenants to diagnose
their virtual network and application problems, and also propose its service
interface for cloud tenants. We propose optimization techniques to reduce overhead
and achieve scalability for the APLAD framework. We demonstrate the feasibility of APLAD
through a real implementation, and conduct experiments measuring overhead along
with simulations to show scalability.
\item We propose a control plane diagnostic solution which validates whether the 
actual packet forwarding matches the tenants' requirements. This solution
spans the three layers in the cloud control plane, so that it can detect problems
in all layers.
\item We propose a data plane diagnostic solution which aims to find the bottleneck 
of a virtual link in the physical infrastructure. This solution can both benefit
virtual network troubleshooting and provide hints for virtual network allocation
and migration.
\end{itemize}



