\chapter{CONTROL PLANE DIAGNOSIS}
\section{Overview}
% control plane, problems
% most control plane tools on a certain layer
% data plane tool too simple
% make CPLAD, check physical configuration, virtual network
% 

In the cloud, a cloud controller maintains and monitors the physical infrastructure.
As mentioned in Section~\ref{sec:back}, it sets up routing in the switch fabric
to provide connectivity between servers, and it responds to tenants' virtual network
requirements by allocating VMs, virtual links and network function services, and by generating
and deploying configurations (routing, VM setup, etc.). A cloud controller
usually consists of several components, each in charge of a certain functionality,
such as network connectivity, tunnel setup, VM setup and middlebox configuration~\cite{openstack}.

A typical cloud network controller functionally spans three layers: the logical view and policy,
the physical view, and device states. The actual packet forwarding follows the routing configurations
in the device states~\cite{layering}. A tenant's virtual network requirement
includes a logical view and policies~\cite{cloudnaas}.
The cloud controller maintains the physical view of the infrastructure, and the tenant's requirement
is translated into device states and deployed on the devices.
Various problems can arise during infrastructure maintenance and virtual network setup.
They can be introduced by misconfiguration in the virtual network,
compatibility issues between components, hardware bugs in the physical devices,
or software bugs in the cloud controller.

\begin{figure}[htb]
\centering
\includegraphics[width=0.7\textwidth]{fig/layering.pdf}
\caption{Control Plane Layering}
\label{fig:layering}
\end{figure}
Existing tools~\cite{anteater,hsa,ndb,ofrewind} usually troubleshoot a single layer
and assume the other layers are working well, so they may miss problems
in those layers. Some solutions analyze the configurations
under the assumption that the actual forwarding behavior follows them, which is not always
true~\cite{hsa,anteater}. The solutions that do capture actual forwarding behavior are usually
general-purpose (e.g. ping, traceroute, OFRewind~\cite{ofrewind}) and do not consider the specific
requirements of the cloud control plane.

We propose a Control PLAne Diagnostic solution (CPLAD). CPLAD captures the actual packet forwarding
behaviors in the network by sampling. It analyzes the packet samples to verify whether
the actual forwarding behavior is legitimate. CPLAD samples the packets in the physical network
so that the tenant's traffic is captured with its outer tunneling protocol header.
By correlating the outer header and the inner header, CPLAD infers the tenant's virtual-to-physical
mapping as well as the logical topology. By comparing the inferred topology with
the required virtual network topology, CPLAD can find out whether the actual deployment matches
the tenant's expectation.
\section{Challenges}
Each packet sample only reflects a packet's appearance at a certain device; CPLAD needs to combine
all packet samples and verify that (1) the packet forwarding in the physical network is legitimate
and (2) the virtual network allocation matches the tenant's requirement. There are several challenges.
\begin{itemize}
\item Unlike a full flow trajectory, sampling is random: a packet is not guaranteed to be
sampled at every hop along its path, so it is challenging to distinguish a sampling miss from a packet loss.
\item When deployed in a large-scale data center, CPLAD should scale to diagnose
the whole control plane. Multiple switches generate a large volume of packet samples,
which requires big-data analysis of the samples.
\end{itemize}
\section{Design}
\label{sec:cplad_design}
%\subsection{Background}
%the inner packet to the corresponding VMs. 
%In a tenant's virtual network, all virtual switches in one subnet form a virtual distributed layer-2 switch.

\subsection{Packet Sampling}
It is not practical to capture all packet records at each switch because of the overhead and
cost. To obtain the actual forwarding actions on switches, we need to trace packets
as they traverse the network. There are several options: sFlow~\cite{sflow}, NetFlow and
port mirroring. We choose sFlow for two reasons: (1) sFlow
has minimal performance impact on the existing traffic; (2) sFlow preserves
most of the sampled packet, so in the case of a tunneling protocol (e.g. VXLAN), both
the outer header and the inner header are preserved. The outer header can
be used to validate the routing, and the inner header can be used to verify
the tenant's VM allocation.

We enable sFlow on all physical switches (tier 1, 2 and 3 switches),
and also dump or sample packet traces on the physical NIC of each hypervisor.
CPLAD periodically checks the packet samples for routing violations.
We assume the tenant traffic is transmitted through tunneling
protocols, e.g. VXLAN: the outer header contains
the tenant ID and the routing information (IPs of the source/destination
hypervisors), and the payload of the outer header is a layer-2 packet in the
tenant's virtual network.
An sFlow sample contains the first few bytes of the original packet plus some
meta information such as the switch ID and the time stamp of sampling.
\subsection{Routing Validation}
A flow is defined by the routing granularity of the switch fabric. If the
network uses plain layer-2 or layer-3 routing, a flow is represented by its
source/destination address pair. If ECMP is used, the routing granularity
is the five-tuple of source/destination IP, source/destination port and protocol.
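The granularity choice above could be sketched as follows; the header field names and the \texttt{granularity} parameter are illustrative assumptions, not part of CPLAD's implementation.

```python
# Hypothetical sketch: derive a flow key from a parsed packet header at
# the network's routing granularity. Field names are assumptions.

def flow_id(hdr, granularity="ecmp"):
    """Return the flow key for a parsed header dict."""
    if granularity == "ecmp":
        # ECMP routes on the five-tuple.
        return (hdr["src_ip"], hdr["dst_ip"],
                hdr["src_port"], hdr["dst_port"], hdr["proto"])
    # Plain layer-2/layer-3 routing: source/destination addresses only.
    return (hdr["src_ip"], hdr["dst_ip"])
```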

When a sampled packet arrives at the sFlow collector, it is first preprocessed
into a record $<$flow\_ID, time\_stamp, packet\_ID, switch\_ID$>$, where
flow\_ID consists of the fields in the packet header that match the routing granularity
(e.g. the five-tuple in ECMP networks, the IP addresses in layer-2 or layer-3 networks),
time\_stamp is the time when the packet was sampled, packet\_ID
uniquely indexes a packet\footnote{A packet ID represents a packet: it is constant
during a packet's lifetime and different from other packets' IDs. It can
be some unique field in the packet header (e.g. the IP identification number or TCP
sequence number) or a hash of the payload.}, and switch\_ID is the location where the packet was
sampled.

The packet records are then put into bins by their flow ID; each bin contains
all records of one flow. This is implemented as a mapper in a
MapReduce framework: the mapper takes records as input and outputs
key-value pairs where the key is the flow\_ID and the value is (time\_stamp,
packet\_ID, switch\_ID).
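A minimal sketch of this map phase, with an in-memory dictionary standing in for the MapReduce shuffle (the record layout is an assumption):

```python
from collections import defaultdict

# Map phase sketch: key each record by its flow_ID; a dict stands in
# for the MapReduce shuffle. Assumed record layout:
# (flow_id, time_stamp, packet_id, switch_id).

def map_record(record):
    fid, ts, pkt_id, sw_id = record
    return fid, (ts, pkt_id, sw_id)

def shuffle(records):
    """Bin records so each bin holds all samples of one flow."""
    bins = defaultdict(list)
    for rec in records:
        key, value = map_record(rec)
        bins[key].append(value)
    return bins
```

In a real deployment each mapper would emit its key-value pairs to the framework instead of filling a local dictionary.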

We assume the sampling ratio is $r = 1/n$ and that, on average, $N$ packets of
a flow are sampled at each switch that observes the flow.
Then approximately $N/r$ packets of this flow traverse the network during the sampling period.

The routing validation is done in a reducer function.
Each flow's records are checked for reachability, loops, black holes, leakage, etc.
In each period, the reducer can also refer to the results of previous periods
to increase the confidence of its judgment.

{\bf Reachability:} All packet samples are marked on
the topology. If the marked switches form a path from the source to the destination, the
source/destination pair is reachable. Otherwise, CPLAD reports an unreachability problem
for this flow with a false-positive probability of $(1-r)^{N/r}$, i.e. the probability
that a given on-path switch sampled none of the flow's $N/r$ packets. After $s$ periods of checking, the
false-positive probability drops to $(1-r)^{sN/r}$, decreasing exponentially. When the false-positive
probability is low enough, an alert can be raised to the operator.
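The exponential decay of the false-positive rate can be evaluated directly from the formula above:

```python
# False-positive probability of an unreachability alert after s periods,
# following the formula above: a given on-path switch misses each of the
# N/r packets per period independently with probability (1 - r).

def unreach_false_positive(r, N, s=1):
    return (1 - r) ** (s * N / r)
```

For example, with $r = 0.01$ and $N = 5$, a single period already gives $(0.99)^{500} \approx 0.0066$, and each additional period multiplies the rate by the same factor.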

{\bf Loops:} If a packet appears at the same location twice, there must be a loop.
With $N$ sampled packets, an existing loop is detected with probability $1-(1-r)^N$;
after $s$ periods, this probability rises to $1-(1-r)^{sN}$, so the
probability of detecting a loop increases with the number of rounds.
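Within one flow's bin, the loop check reduces to looking for a packet\_ID that was sampled at the same switch more than once; a sketch (the bin layout follows the record format assumed earlier):

```python
from collections import Counter

# Loop check sketch: within one flow's bin, a packet_ID observed more
# than once at the same switch indicates a forwarding loop.

def has_loop(bin_values):
    """bin_values: iterable of (time_stamp, packet_id, switch_id)."""
    seen = Counter((pkt, sw) for _, pkt, sw in bin_values)
    return any(count >= 2 for count in seen.values())
```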

{\bf Black holes:} To find black holes, CPLAD needs to determine the direction of the packet flow
between neighboring nodes, i.e. nodes directly connected in the topology.
With $N$ packets, the probability that at least one of them appears on both of
two neighboring nodes is $1-(1-r)^N$. If the same packet appears at both neighboring nodes,
CPLAD calculates the difference of the two time stamps.
If the difference is within a threshold, the packet was transferred from
one node to the other without an intermediate relay, and the order of the two time stamps
gives the flow direction between the two neighbors. After the flow directions
are determined, a node with out-degree 0 is regarded as the black hole.

{\bf Single Path:} Similar to black-hole detection, a node with an out-degree greater than
1 violates the single-path principle.
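The black-hole and single-path checks can be sketched together: infer directed edges from time stamps of the same packet at neighboring switches, then inspect each node's out-degree. The \texttt{neighbors} map, the threshold, and the exclusion of the destination (which legitimately has out-degree 0) are assumptions.

```python
from collections import defaultdict

# Sketch of the black-hole and single-path checks above. Records follow
# the assumed (time_stamp, packet_id, switch_id) layout.

def directed_edges(bin_values, neighbors, threshold):
    """Infer flow direction between neighboring switches."""
    sightings = defaultdict(list)              # packet_id -> [(ts, switch)]
    for ts, pkt, sw in bin_values:
        sightings[pkt].append((ts, sw))
    edges = set()
    for obs in sightings.values():
        obs.sort()                             # order by time stamp
        for (t1, a), (t2, b) in zip(obs, obs[1:]):
            # Earlier time stamp marks the upstream node; a small time
            # gap means no intermediate relay.
            if b in neighbors.get(a, ()) and t2 - t1 <= threshold:
                edges.add((a, b))
    return edges

def black_holes(edges, nodes, dst):
    """Nodes with out-degree 0 (excluding the destination, assumed)."""
    out = {a for a, _ in edges}
    return {n for n in nodes if n not in out and n != dst}

def multipath_nodes(edges):
    """Nodes with out-degree > 1, violating the single-path principle."""
    out = defaultdict(set)
    for a, b in edges:
        out[a].add(b)
    return {n for n, nxt in out.items() if len(nxt) > 1}
```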



%In the Map phase, the input is the samples from each switch, the output
%is a key-value store, with the tenant's ID ( in the VXLAN header ) as
%the key, and $<$ timestamp, switch ID, inner packet $>$ as the output.
%In the Reduce phase, each reducer get the physical topology and check 
%each tenant's packet trace. (Figure~\ref{fig:reducer})
%\begin{figure}[ht]
%\centering
%\renewcommand{\arraystretch}{0.7}
%\begin{tabular}{l}
%\hline
%flows=\{\}\\
%packets=\{\}\\
%vm=\{\}\\
%for each packet p from switch s with timestamp t:\\
%\hspace{1em} get the flow f of packet p\\ 
%\hspace{1em} flows[f].path.add(s)\\
%\hspace{1em} packets[p].add($<$ t, s $>$)\\
%for each packet p in packets:\\
%\hspace{1em} use $<$ s, t $>$ pair and the packet src/dst IP to determine the virtual machine ip location l\\
%\hspace{1em} vm[ip].add(l)\\
%for each ip in vm:\\
%\hspace{1em} check VM invariants\\
%for each flow f in flows:\\
%\hspace{1em} check path invariants\\
%Construct the whole actual forwarding map, and compare it with the logic view and policy\\
%\hline
%%\end{tabular}
%\caption{Reducer Algorithm}
%\label{fig:reducer}
%\end{figure}

%The following invariants are checked:

%1) each VM is allocated at one location. Otherwise, we can find physical-to-local mapping error.
%2) each flow has one path. Otherwise, we find routing error. The flows can be of different granularity, such
%as TCP flows or src-dst IP pairs. 
%3) There is no loop on a path.

%If there is a black hole on a path, it is possible that the switch on path does not sample the packet.
%We can leave this path verification to the next round. The possibility that a path always cannot be verified
%decreases with the rounds. If a path always has a black hole in several rounds, it is reported as a 
%routing error.
\subsection{Allocation Validation}
All sampled packets in the switch fabric are in tunnels: each has an outer header used for
routing and an inner payload, which is the tenant's layer-2 packet. By correlating the IP addresses
in the outer and inner headers, CPLAD finds the virtual-to-physical mapping of that tenant.

To remain scalable, CPLAD again uses MapReduce to analyze the packet samples.
In the mapper function, each packet is parsed to extract the tenant ID and the source/destination
IPs of both the outer and the inner header. The mapper outputs the tenant ID as the key
and (outer source IP, outer destination IP, inner source IP, inner destination IP, inner source port,
inner destination port, inner protocol) as the value. The reducer uses
each input key-value pair to infer and extend the tenant's virtual topology: each record
adds a mapping from a VM to a hypervisor and a virtual link between VMs, and the inner flow's
port numbers are also attached to the virtual link for the policy (allow/deny flows) check.
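The reduce step for one tenant could be sketched as follows; the value layout mirrors the mapper output described above, and the container choices are assumptions:

```python
from collections import defaultdict

# Sketch of the allocation-validation reduce step: each sampled tunnel
# packet yields a VM-to-hypervisor mapping (inner IP seen behind an
# outer hypervisor IP) and a virtual link between inner IPs, with the
# inner flow attached for the later policy check.

def infer_topology(values):
    """values: iterables of
    (outer_src, outer_dst, in_src, in_dst, sport, dport, proto)."""
    vm_host = defaultdict(set)   # inner VM IP -> hypervisor IP(s)
    links = defaultdict(set)     # (in_src, in_dst) -> observed inner flows
    for o_src, o_dst, i_src, i_dst, sport, dport, proto in values:
        vm_host[i_src].add(o_src)
        vm_host[i_dst].add(o_dst)
        links[(i_src, i_dst)].add((sport, dport, proto))
    return vm_host, links
```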

There are two further cases that supplement the virtual topology inference. First, if a middlebox is transparent
to the application on a virtual link (e.g. redundancy elimination, an intrusion detection system),
it does not have an IP address. A packet traversing this middlebox has the same inner header but different outer
headers when entering and leaving, so the two virtual IP addresses of the same virtual link are both mapped to the
hypervisor IP address that hosts the middlebox. When this happens, we can infer the existence of a middlebox
on the virtual link and add it to the topology.
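One way to read this inference, as a hedged sketch on top of the \texttt{vm\_host} map from the previous step: a hypervisor that appears behind both endpoints of a virtual link is a candidate transparent-middlebox host.

```python
# Sketch of the transparent-middlebox inference above (an interpretation,
# not CPLAD's exact algorithm): hypervisors shared by both endpoint VMs
# of a virtual link are candidate middlebox hosts. vm_host maps each
# inner VM IP to the set of hypervisor IPs it was observed behind.

def middlebox_hosts(vm_host, link):
    src, dst = link
    return vm_host.get(src, set()) & vm_host.get(dst, set())
```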

The second case is that of two VMs on the same hypervisor exchanging traffic: this traffic traverses the virtual
switch without leaving the hypervisor, so it is never sampled. To find such virtual links,
CPLAD appends actions to each routing rule in the virtual switch: the rule's in-port and out-port
are encoded into a header field, and packets are duplicated to a packet-collection port on the virtual switch.
The collector can then identify virtual links inside a
hypervisor. By referring to the samples from the physical NIC of that hypervisor,
the tenant ID of such a virtual link can be determined, and the hidden virtual link
is added to the inferred topology.

After the virtual topology is inferred, it is compared with the tenant's expected logical topology.
If the inferred topology is a subgraph of the logical topology, the allocation is legitimate.
The tenant's policies (allow/deny rules) on the virtual links are checked against the
actual flow records on the inferred topology's links.
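The subgraph comparison above amounts to a containment check; a minimal sketch, assuming links are undirected:

```python
# Sketch of the final legitimacy check: the allocation is legitimate when
# every inferred node and link also appears in the tenant's logical
# topology. Treating links as undirected is an assumption.

def is_subgraph(inferred_nodes, inferred_edges, logical_nodes, logical_edges):
    undirected = {frozenset(e) for e in logical_edges}
    return (inferred_nodes <= logical_nodes and
            all(frozenset(e) in undirected for e in inferred_edges))
```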

\section{Implementation}
%To increase the accurancy of timestamp, we turn on sFlow on the TOR switch and use a physical server
%to collect and timestamp the samples.
%In the hypervisor, the packets should be sampled both on the physical NIC ( to keep the VXLAN header ) and
%the tap of the virtual machine ( to find the final destination ).
We can use NS2 to simulate a large-scale network and sample packets. We also inject
control plane errors into the simulation and use CPLAD to detect them.
In addition, we need to verify whether the probability analysis in Section~\ref{sec:cplad_design}
holds in the actual scenario.
