% Pre-lim
% by Eric Benedict
\chapter{Related Works}

Many network diagnosis solutions have been designed for the Internet or enterprise
networks. These tools diagnose network problems in various settings, but due to
the unique challenges of multi-tenant cloud environments, they cannot be readily
applied to virtual network diagnosis.
We discuss these solutions according to the plane in which they are deployed.
\section{Application Plane}
% ping, tcpdump, traceroute, SNAP, X-Trace

Many network monitoring or tracing tools, such as tcpdump, SNAP~\cite{snap},
and X-Trace~\cite{xtrace}, can be deployed in client virtual machines for
network diagnosis. A common disadvantage of these tools is that they
are point tools focusing on a single virtual machine; they do not give the tenant
a view of the whole virtual network. The tenant therefore has to
handle the complexity of data analysis, such as correlating
traces from different devices. In APLAD, data collection and parsing are
handled by the system, and a convenient
SQL interface is provided to the tenant. The tenant can easily analyze and correlate
data from every point of the virtual network, and can thus focus
on the abnormal behaviors in the trace.
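To make the correlation idea concrete, the following sketch shows how an SQL query can join trace records from two capture points to find packets that disappeared between them. The table layout, hop names, and column names are hypothetical illustrations, not APLAD's actual schema.

```python
import sqlite3

# Hypothetical schema: one row per packet observation at a virtual-network hop.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE trace (hop TEXT, pkt_id INTEGER, ts REAL)")
db.executemany("INSERT INTO trace VALUES (?, ?, ?)", [
    ("vm1-vif",   1, 0.10), ("vm1-vif",   2, 0.20), ("vm1-vif", 3, 0.30),
    ("vswitch-a", 1, 0.11), ("vswitch-a", 3, 0.31),  # packet 2 never arrives
])

# Correlate the two capture points: which packets left vm1-vif but
# were never observed at the virtual switch?
missing = db.execute("""
    SELECT a.pkt_id FROM trace a
    WHERE a.hop = 'vm1-vif'
      AND NOT EXISTS (SELECT 1 FROM trace b
                      WHERE b.hop = 'vswitch-a' AND b.pkt_id = a.pkt_id)
""").fetchall()
print(missing)  # -> [(2,)]
```

A tenant issuing such a query never has to align raw tcpdump files by hand; the system has already normalized the traces into one queryable store.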

Another concern with deploying these tools inside the VMs is the performance impact.
If a tenant uses tcpdump to collect traces, the existing applications in that VM
are likely affected. APLAD is deployed outside the tenant's virtual network and
uses extra resources in the physical infrastructure to perform diagnosis, so
the existing applications are not affected.

SNAP~\cite{snap} instruments the VM network stack to obtain TCP internal variables
such as congestion window size, retransmission counts, etc. X-Trace~\cite{xtrace} instruments each layer
in the network stack, from the application all the way down to the network device, so as to find
the location where network messages go missing. Netcheck~\cite{netcheck} instruments socket functions
and builds a global execution order to validate distributed applications.
These solutions modify the VM guest OS or applications, and thus impose constraints
(e.g., OS kernel or application version) on the tenants.
APLAD, in contrast, does not change any configuration inside the virtual network, so
it is more compatible with existing systems.

Deja vu~\cite{dejavu} uses historical diagnostic results to build a decision tree
that judges the root cause of a symptom. When a new problem appears, the most likely
root cause is returned. In the paper, Deja vu is applied to web applications;
it is not a generic solution for all distributed applications.
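The spirit of this approach can be sketched as a tiny hand-written decision tree: historical diagnoses are compiled into branching tests over symptom features, and a new symptom is classified by walking the tree. The features and root-cause labels below are invented for illustration, not taken from the Deja vu paper.

```python
# Toy decision tree over symptom features (all names are hypothetical).
# In Deja vu, such a tree is learned from historical diagnostic results.
def diagnose(symptom):
    if symptom["latency_high"]:
        if symptom["cpu_saturated"]:
            return "overloaded backend"
        return "network congestion"
    if symptom["error_rate_high"]:
        return "misconfigured load balancer"
    return "unknown"

cause = diagnose({"latency_high": True, "cpu_saturated": False,
                  "error_rate_high": False})
print(cause)  # -> network congestion
```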
\section{Control Plane}
% layering
Heller et al. propose to divide the network control plane into layers according to their functions~\cite{layering}.
They also propose to use binary search to diagnose each layer. However, they depend
on existing tools to verify each layer, so their solution requires significant effort to integrate
different diagnostic tools. CPLAD uses the actual forwarding behaviors to reversely infer the
physical layer and the logical layer, so it is a complete solution that spans all layers.
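The binary-search idea can be illustrated with a short sketch: given an ordered stack of layers and a per-layer health check, bisection finds the lowest faulty layer in logarithmic time. The layer names and the stub check function are assumptions for illustration only; bisection is valid only if a fault at one layer makes every layer above it fail as well.

```python
# Ordered from lowest to highest. A fault at layer i is assumed to make
# all layers above i fail their checks, which is what makes bisection valid.
LAYERS = ["physical", "device-state", "logical", "policy"]

def first_faulty(layers, check):
    """Return the index of the lowest layer whose check fails, via bisection."""
    lo, hi = 0, len(layers)      # invariant: the fault index lies in [lo, hi]
    while lo < hi:
        mid = (lo + hi) // 2
        if check(layers[mid]):   # this layer is healthy...
            lo = mid + 1         # ...so the fault must be above it
        else:
            hi = mid             # this layer (or one below) is faulty
    return lo

# Stub check: pretend everything from "logical" upward is broken.
check = lambda layer: LAYERS.index(layer) < 2
print(LAYERS[first_faulty(LAYERS, check)])  # -> logical
```

Each `check` call would, in practice, invoke a separate existing diagnostic tool; that integration burden is exactly the drawback noted above.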

% NICE
NICE~\cite{nice} runs symbolic execution on the SDN controller and combines
the execution with the network topology to verify network invariants. However, the symbolic
execution introduces the complexity of initializing the SDN controller's software states,
and it does not detect silent bugs.

% Frenetic, NDB, OFRewind
Tools such as Frenetic, NDB, and OFRewind~\cite{frenetic, ndb, ofrewind} monitor
device states or capture packet traces in the physical network.
They do not provide the intelligence to check the legitimacy of the physical network and
the virtual network. Frenetic also provides a declarative language to operate on devices,
but it focuses on each single switch without a view of the whole network.

% HSA Anteater, Veriflow, libra
Other solutions in the control plane usually check certain layers. For example,
HSA, NetPlumber, Anteater, Veriflow, and Libra~\cite{hsa, netplumber, veriflow, anteater, libra} read
or capture the switch routing configurations, and then use their models to check network
invariants such as reachability, loop freedom, and absence of blackholes. These solutions
effectively diagnose the consistency and legitimacy of the physical layer and the device states.
Netsight~\cite{netsight} is a collection of applications built on NDB; it uses packet histories
to verify network invariants. CPLAD supports large-scale packet trace analysis and
also verifies the mapping from virtual networks to the physical infrastructure.
ATPG~\cite{atpg} uses HSA~\cite{hsa} and the rules in switches to generate test packets,
and injects them into the network. By observing the results, it pinpoints the problematic
switch or link.
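The invariant checks these tools perform can be illustrated, very roughly, as a walk over forwarding state that detects delivery, blackholes, and loops. Real checkers such as HSA and Veriflow reason over entire header spaces and equivalence classes; this sketch keys on a single prefix, and the switch names and tables are invented.

```python
# Toy forwarding state: switch -> {dst_prefix: next_hop}. All names invented.
fib = {
    "s1": {"10.0.0.0/24": "s2"},
    "s2": {"10.0.0.0/24": "s3"},
    "s3": {"10.0.0.0/24": "host"},  # final delivery
}

def trace(start, prefix):
    """Follow next-hops for one prefix; detect blackholes and loops."""
    node, seen = start, []
    while node != "host":
        if node in seen:
            return seen + [node], "loop"
        seen.append(node)
        nxt = fib.get(node, {}).get(prefix)
        if nxt is None:              # no matching rule at this switch
            return seen, "blackhole"
        node = nxt
    return seen + ["host"], "delivered"

print(trace("s1", "10.0.0.0/24"))  # -> (['s1', 's2', 's3', 'host'], 'delivered')
```

Checking every (source, prefix) pair this way is what "verifying reachability" amounts to at the device-state layer; it says nothing about whether that state matches the intended virtual topology, which is the gap CPLAD targets.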

SOFT~\cite{soft} diagnoses the software SDN agent on SDN switches using symbolic execution.
It compares the execution results across different switches so as to find and report
the differences. It is confined to the device-state layer.

\section{Data Plane}
Some solutions, such as ODD~\cite{odd}, aim to find the problematic physical link on the path of an overlay network.
These solutions usually simplify the virtual network into nodes and edges. However, a virtual link
may be allocated across multiple physical switches and links, middleboxes, and the data path in hypervisors.
This model hides that complexity, so the root cause of a performance problem cannot be found directly.

% network stack trace, odd
There are also solutions that instrument the network stack to find the performance
bottleneck~\cite{nest, net-stack}.
These tools are designed for the traditional network stack.
In the cloud environment, the data path on hypervisors is much more
complicated. DPLAD redesigns the instrumentation and statistics collection
along the path, builds a data plane graph, and performs troubleshooting on it.
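A data-plane graph annotated with per-hop latency statistics might be analyzed along the following lines; the hop names and the sample numbers are invented for illustration and do not reflect DPLAD's actual model.

```python
# Per-hop latency samples (ms) along one virtual link's data path.
# Hop names and values are hypothetical illustrations.
path = ["vm-tx", "vhost-queue", "ovs-datapath", "nic", "tor-switch"]
samples = {
    "vm-tx":        [0.02, 0.03, 0.02],
    "vhost-queue":  [0.05, 0.04, 0.06],
    "ovs-datapath": [1.90, 2.10, 2.00],  # suspiciously slow hop
    "nic":          [0.01, 0.01, 0.02],
    "tor-switch":   [0.08, 0.07, 0.09],
}

def bottleneck(path, samples):
    """Return the hop contributing the largest mean per-hop latency."""
    mean = {h: sum(samples[h]) / len(samples[h]) for h in path}
    return max(path, key=mean.get)

print(bottleneck(path, samples))  # -> ovs-datapath
```

The point of modeling the hypervisor path at this granularity is precisely that a "virtual link" collapses many such hops; without per-hop statistics the slow stage above would be invisible.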

