% Pre-lim
% by Eric Benedict
\chapter{Data Plane Diagnosis}
\section{Overview}
When a cloud provider serves a tenant's virtual network requirements,
it usually tries to provide performance guarantees. For example, the provider
can use resource allocation algorithms to reserve resources such as
CPU, memory, and network capacity for the tenant~\cite{secondnet, cloudnaas, oktopus}. 
However, due to system complexity and incomplete consideration in the allocation algorithms,
performance isolation between tenants cannot be guaranteed.
For example, memory/bus throughput is not considered in the allocation algorithms, 
which may lead to contention on this resource between tenants; to support a certain bandwidth
guarantee, a hypervisor needs to handle a certain number of packet-receiving
interrupts, whose processing may be limited by the hypervisor CPU. Software and hardware
bugs also disrupt packet delivery in the data plane. From a tenant's view, these
problems appear as performance problems (e.g., stragglers in 
Hadoop~\cite{mantri}).

Without complete performance guarantees, there is a need to detect the
bottleneck of a virtual network when performance
problems occur. We propose a Data PLAne Diagnostic solution (DPLAD). DPLAD follows the packet datapath
and finds out where exactly packets are dropped. Existing works~\cite{odd}
usually simplify an overlay network as virtual links. 
However, in a public cloud, virtualization adds complexity
to the data plane, which includes the physical switch data plane, the middlebox data plane, and
the datapath in hypervisors. In the hypervisor in particular, the datapath is longer than a 
standard network stack. 

DPLAD builds a packet relay model for the data plane, defines properties to collect on the nodes
of this graph, and uses these statistics to locate the bottleneck in the meshed graph. DPLAD is implemented by 
instrumenting statistics along the packet datapath, including device drivers, the kernel network stack, virtual switches, 
TUN/TAP devices, and the hypervisor, so as to find where exactly packet drops happen. 
%\section{Challenges}
%Datapath of different virtual links may inter-wave with each other, and they interfere
%with each other. It is challenge to find the root cause of packet drop.
\section{Motivating Examples}
In this section, we give two examples of factors that affect tenant application 
performance: memory throughput and hypervisor packet interrupt handling.
Since these two factors are hard to measure, 
they are usually not considered in virtual network allocation algorithms. 
\subsection{Memory Throughput}
To be done.

When two VMs are located on the same hypervisor and one performs heavy memory access,
the other VM's performance is impacted.
\subsection{Hypervisor Packet Interrupt}
To be done.

When two VMs are located on the same hypervisor and one sends a large number of 
small packets without exceeding its allocated bandwidth, the hypervisor
becomes busy handling packet-receiving interrupts, leading to packet
drops for the other VM.
\section{Diagnosis Requirements}
Since performance is hard to guarantee, there should be a way to detect
the bottleneck when performance degradation happens. In particular,
the solution should satisfy the following requirements.

When DPLAD detects a bottleneck in the data plane, the resource that is 
in shortage should also be identified, so that the operator can perform
resource reallocation or migration. When multiple components are involved in a
performance problem, the root cause of the problem
should be found. Finally, the troubleshooting should have little impact 
on existing traffic.
\section{Design}
\subsection{Datapath in hypervisors}
\label{sec:datapath}
When application traffic is exchanged between two hypervisors, 
it traverses the switch fabric. In this process, packets are 
relayed by switches on the path, and are dropped where a switch buffer
overflows. In the hypervisor, the datapath is more complicated.

We examine a typical cloud platform composed of KVM/QEMU and OVS, and summarize the
datapath of a packet. A packet-receiving datapath starts at the physical NIC ring buffer
and ends at the VM application buffer; a packet-sending datapath is the reverse.

\begin{figure}[htb]
\centering
\includegraphics[width=0.7\textwidth]{fig/pkt_rcv.pdf}
%\includegraphics[width=0.7\textwidth]{fig/pkt_snd.pdf}
\caption{Packet Receiving}
\label{fig:pkt_rcv}
\end{figure}
When a packet arrives at the pNIC ring buffer, it triggers an interrupt to the 
hypervisor. The packet is copied into memory either in the interrupt handler 
or later in a polling function (the top half of the packet-receiving interrupt handler). The  
packet is then put into the backlog queue of a physical CPU. 
After scheduling, the packet is processed in the hypervisor 
network stack (the bottom half of packet receiving). In a virtualized 
environment, the packet is handed to the virtual switch/bridge frame-handling
function and forwarded to the corresponding port. Each port is assigned a TUN/TAP device
to receive packets; the packets are stored in the socket receiving queue 
of the TUN/TAP. KVM/QEMU has a polling thread that inspects these TUN/TAP devices, copies the packets
into the virtual NIC (vNIC) ring buffers, and sends a signal to the VM. Inside the VM,
the interrupt handler copies the packets into VM memory and puts the vNIC into the
virtual CPU (vCPU) backlog queue\footnote{The I/O instruction is a privileged instruction,
which is handled by the hypervisor.}. When the backlog is processed, the packet
traverses the VM network stack and is enqueued into the corresponding socket. Finally,
the application reads from the socket.
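This receive path can be viewed as a chain of bounded queues connected by relay steps. The following sketch (queue names and capacities are our own shorthand, not kernel identifiers) models how a burst overflowing one bounded queue produces drops at that queue:

```python
from collections import deque

# Bounded queues along the receive path described above
# (names and capacities are illustrative, not kernel identifiers).
CAPACITY = {"pnic_ring": 4, "cpu_backlog": 4, "tap_sock_q": 2,
            "vnic_ring": 2, "vm_socket": 8}
PATH = ["pnic_ring", "cpu_backlog", "tap_sock_q", "vnic_ring", "vm_socket"]
queues = {name: deque() for name in PATH}
drops = {name: 0 for name in PATH}

def enqueue(name, pkt):
    """Asynchronous relay behavior: drop when the target queue is full."""
    if len(queues[name]) >= CAPACITY[name]:
        drops[name] += 1
        return False
    queues[name].append(pkt)
    return True

def relay(src, dst):
    """One process-node step: move one packet from src queue to dst queue."""
    if queues[src]:
        enqueue(dst, queues[src].popleft())

# A burst of 10 packets arrives at the pNIC ring (capacity 4):
# 4 are buffered and 6 are dropped at the ring.
for i in range(10):
    enqueue("pnic_ring", i)
print(drops["pnic_ring"])  # 6
```

A burst therefore shows up as overflow drops at the first queue whose reader cannot keep pace, which is exactly the signal DPLAD instruments for.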

\begin{figure}[htb]
\centering
\includegraphics[width=0.7\textwidth]{fig/pkt_snd.pdf}
\caption{Packet Sending}
\label{fig:pkt_snd}
\end{figure}

When a packet is sent from an application, the socket write function copies the data from
the application buffer into the VM kernel space. The I/O output instruction that sends the packet
out of the vNIC is handled by the hypervisor; the hypervisor copies the packet into hypervisor memory and 
calls the TUN/TAP write function. The packet is then put into the pCPU backlog with the TUN/TAP 
as the input device. The TUN/TAP's poll function is scheduled to process the backlog; it 
calls the virtual switch/bridge to handle the frame, and the virtual switch/bridge calls
the pNIC driver to transmit the packet into the pNIC's sending buffer.

\subsection{Data Plane Graph}
We introduce a data plane graph to describe the packet forwarding paths in a hypervisor with
multiple VMs. 
A data plane graph has two kinds of nodes, queue nodes and process nodes; the edges in a data plane graph
indicate the data flow between queue nodes and process nodes. 
The datapath in Section~\ref{sec:datapath}
is displayed in Figure~\ref{fig:datapath_graph}.


\begin{figure}[htb]
\centering
\includegraphics[width=0.9\textwidth]{fig/datapath_graph.pdf}
\caption{Datapath Graph in a Hypervisor}
\label{fig:datapath_graph}
\end{figure}
A queue node represents a queue on the datapath. It can be a
socket buffer queue, a CPU backlog queue, a DMA buffer, a pNIC ring buffer, a switch memory, a vNIC
ring buffer, or a middlebox application buffer. Each buffer usually has a fixed maximum length. 
A process node transfers packets between queue nodes. It can be a network adapter 
transmission, an interrupt handler, or a Linux task (in 
the kernel or in an application). When a packet traverses the datapath, each process node moves
the packet from its predecessor queue node to one of its successor queue nodes. 
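As an illustration, the alternating structure of the graph can be encoded as a bipartite successor map; all node names below are our own shorthand for the receive-path stages, not identifiers from any real system:

```python
# A tiny bipartite data plane graph (queue nodes alternate with process
# nodes); node names are illustrative shorthand for the receive path.
edges = [
    ("pnic_ring",    "irq_top_half"),   # queue  -> process
    ("irq_top_half", "cpu_backlog"),    # process -> queue
    ("cpu_backlog",  "net_stack"),
    ("net_stack",    "tap_sock_q"),
    ("tap_sock_q",   "qemu_poll"),
    ("qemu_poll",    "vnic_ring"),
]
QUEUES = {"pnic_ring", "cpu_backlog", "tap_sock_q", "vnic_ring"}

# Build the successor map, checking that every edge connects a queue
# node with a process node (never queue-queue or process-process).
succ = {}
for a, b in edges:
    assert (a in QUEUES) != (b in QUEUES), "edges must alternate"
    succ.setdefault(a, []).append(b)

print(succ["cpu_backlog"])  # ['net_stack']
```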

Packet drops usually happen in process nodes. We define the following properties for process nodes.
\begin{itemize}
\item {\bf Synchronization}: 
A process node can be synchronous or asynchronous. If the target queue is full,
an asynchronous process drops the current packet, while a synchronous process puts the 
current task to sleep.  
\item {\bf Invoked Duration}: the time during which a process node is actively processing packets, 
implemented as the CPU time spent in this process.
\item {\bf Rx Packets}: the total number of packets received by this process node.
\item {\bf Tx Packets}: the total number of packets transmitted by this process node.
\item {\bf Overflow Drops}: the number of packets dropped due to 
target queue overflow. It is invalid in a synchronous process, which sleeps instead of dropping.
\item {\bf Memory Failure}: a process may allocate memory
while processing a packet or transferring data between memory spaces (user and kernel space). This counter 
records memory allocation failures.
\item {\bf Processing Failure}: apart from overflow drops and memory allocation failures, we
categorize all other packet drops as processing failures; checksum failures and malformed
packets are examples of this kind of failure.
\end{itemize}
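A minimal container for these per-node counters might look as follows; the field names are our own illustration, not actual kernel counter names:

```python
from dataclasses import dataclass

@dataclass
class ProcessNodeStats:
    """Per-process-node counters listed above (field names illustrative)."""
    synchronized: bool = False    # synchronous nodes sleep instead of dropping
    invoked_ns: int = 0           # Invoked Duration: CPU time in this process
    rx_packets: int = 0           # packets received from predecessor queues
    tx_packets: int = 0           # packets delivered to successor queues
    overflow_drops: int = 0       # drops due to a full target queue
    memory_failures: int = 0      # failed memory allocations while processing
    processing_failures: int = 0  # checksum errors, malformed packets, ...

    def drops(self) -> int:
        # Packets that entered this process node but never left it.
        return (self.overflow_drops + self.memory_failures
                + self.processing_failures)

s = ProcessNodeStats(rx_packets=100, tx_packets=97, overflow_drops=3)
print(s.drops())  # 3
```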
\subsection{Bottleneck Detection}
From the data plane graph, we can summarize some patterns. One queue node can have several
predecessor process nodes but only one successor process node; one process node can
have multiple predecessor queue nodes and multiple successor queue nodes.

%Figure XXX shows a queuing node centric view. Examples are:..... 
When a queue node overflows,
all its predecessor process nodes observe this (through their overflow drop counters).
%if the queuing node is the only predecessor of its successor node, the successor node observes this (by invoke), otherwise the successor cannot observe this.
%Figure XX shows a process node centric view. Examples are: ..... 
A process node can observe several kinds of failures. For example, it can be starved (reflected by its invoked duration);
its successor queue node can overflow (reflected by its overflow drops); and the process itself
can fail (memory failures, packet validation failures). 

Memory failures and processing failures are easy to locate from the statistics.
An overflow drop, however, is a symptom that a process node cannot handle all its
incoming traffic. Two causes can make a queue node overflow: its successor reads slowly, or its
predecessors make bursty writes. In the former case, its successor should be starving 
(reflected by invoked duration and packet throughput);
in the latter case, one of its predecessors should show a burst of overflow drops.

DPLAD finds the root cause using the following algorithm. If a process node observes overflow,
DPLAD keeps checking its successor nodes until it finds a process node that does not overflow.
If that node is starving (decreased throughput or reduced invoked duration), 
it is reported as the bottleneck. If no starving node is found, DPLAD
checks all nodes on the path and finds those with bursty traffic. It reports
the finest-granularity source (a socket or a vNIC) as the root cause of the bursty traffic. 
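The search above can be sketched as follows, assuming a simple chain encoding; the predicates `overflows`, `starving`, and `bursty` stand in for checks derived from the per-node statistics, and all names are our own illustration rather than the actual implementation:

```python
def find_root_cause(start, successors, overflows, starving, bursty):
    """Sketch of the DPLAD root-cause search (all names illustrative).

    successors: dict mapping a process node to its downstream process node;
    overflows/starving/bursty: predicates derived from per-node statistics
    (overflow drops, invoked duration, packet throughput).
    """
    # Follow the chain from the overflowing node until we reach a
    # process node that does not overflow itself.
    node, path = start, [start]
    while overflows(node) and node in successors:
        node = successors[node]
        path.append(node)
    if starving(node):
        # A slow downstream reader is the bottleneck.
        return ("bottleneck", node)
    # Otherwise, blame the first bursty traffic source on the path.
    for n in path:
        if bursty(n):
            return ("bursty_source", n)
    return ("unknown", None)

# Toy chain A -> B -> C: A and B overflow, C is starving.
result = find_root_cause("A", {"A": "B", "B": "C"},
                         overflows=lambda n: n in {"A", "B"},
                         starving=lambda n: n == "C",
                         bursty=lambda n: False)
print(result)  # ('bottleneck', 'C')
```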
%In a chain graph, it is easy to find the bottleneck. In the data plane
%graph, the graph is a mesh topology, we need to find the root cause of the overflow drop symptom.
%If a processing node slows the processing, all its predecessor queuing nodes 
%overflow (reflect by overflow drops) and all its sucessor queuing nodes starve (reflected by invoke). 
%So we can follow the path to find the boundary of overflowed and starved queuing node.

%There are several cases that can cause packet drops. First, inside one process, there
%may be packet validation, memory allocation etc.. If a packet does not pass the
%checksum validation, or if the process lacks memory to process the packet, the packet
%will be dropped. Second, if a buffer is full, its precedent asynchronized process
%will dropped the packet. In this case the reason that a buffer is full can be 
%its descendant process is slow in processing or be blocked.

%Each virtual can be viewed as a chain of buffers/queues and processes, and all 
%virtual links' chains inter-waved with each other to form a new graph. With this
%graph, when a certain virtual link's performance degrade, we can (1) find out the
%suspicious root cause of the process that causes the packet drop; (2) cross-validate different
%virtual links to further confirm or rule out the suspectedness. 

%Figure~\ref{fig:bottleneck} is the algorithm to find the bottleneck in a buffer/queue and process
%chain. Another thing we need to make clear is that bottleneck may not be the root cause of packet 
%drop. For example, multiple processes write to one buffer, one of them increase packet rate dramatically,
%then the common descedant becomes the bottleneck; however the root cause should be the
%mis-behaved ancestor. To find the root cause, DPLAD checks the statistics in the bottleneck's all
%precedents and finds out the earliest one (in time) that increase its packet rate, 
%and then repeats check the precedents of the suspicious process.
%\begin{figure}[ht]
%\centering
%\renewcommand{\arraystretch}{0.7}
%\begin{tabular}{l}
%\hline\\
%while ( a process p drop packets ) :\\
%\hspace{1em} if the drop is caused by p's packet operation: return p\\
%%\hspace{1em} for p's descendant q : \\
%\hspace{2em} q's dropping increases :\\
%\hspace{2em} p $<-$ q\\
%\hspace{1em} return p\\
%\hline\\
%\end{tabular}
%\caption{Algorithm to find the bottleneck in a chain}
%\label{fig:bottleneck}
%\end{figure}
\section{Implementation}
In the current Linux implementation, some of these per-process statistics already exist,
such as the Tx, Rx, and drop counters in the NIC driver and TUN/TAP. 

We add the invoked duration, Tx, Rx, overflow drop, memory failure, and processing failure counters all along 
the datapath, including the NIC driver, TUN/TAP, KVM/QEMU, and VM applications.
In the kernel, these statistics are exported through the /proc file system; in applications, they
are provided via a socket.
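As an illustration of consuming such a dump, the sketch below parses a hypothetical one-line-per-node text format; the layout and field names are our assumptions for illustration, not the actual /proc file format used by DPLAD:

```python
# Hypothetical /proc-style dump of the added counters, one node per line:
#   <node> rx tx overflow mem_fail proc_fail invoked_ns
# (this layout is an assumption for illustration)
SAMPLE = """\
eth0_driver 120000 119990 10 0 0 734000000
tap3 54000 53100 850 12 38 210000000
"""

def parse_stats(text):
    """Parse the per-node counter dump into a dict of counter dicts."""
    fields = ("rx", "tx", "overflow", "mem_fail", "proc_fail", "invoked_ns")
    stats = {}
    for line in text.splitlines():
        name, *values = line.split()
        stats[name] = dict(zip(fields, map(int, values)))
    return stats

stats = parse_stats(SAMPLE)
print(stats["tap3"]["overflow"])  # 850
```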
\section{Evaluation}
\subsection{Validation}
To be done; two cases will be studied:
1) a middlebox becomes a bottleneck on the datapath;
2) the hypervisor runs short of resources (CPU), which affects application performance.
%Draw figures of these statistics in different scenarios.
%1) one VM with busy hypervisor (busy cpu, busy memory access, busy IO)
%2) two VMs, one is busy (busy CPU, busy memory access, busy network)
