\section{VNetDaaS Design}
\label{sec:design}
%{\bf GW: For a workshop paper, we don't need to go into so many details. I would suggest, instead of explain the design details of each component one by one, we better organize the design by how we address each of the challenges. So the overall design section could be organized as something like this : 3.1 VNetDaaS architecture 3.2 Abstract query interface for diagnosis 3.3 Local data collection 3.3 On-demand distributed query 3.4 Handling packet transformations }
%In the section~\ref{sec:back}, we specify six requirements for VNetDaaS. In the section~\ref{sec:service}, we propose the steps to provide diagnosis service for tenants to access their data(requirement 1); in the section~\ref{sec:arch}, we describe the VNetDaaS architecture including data collection and parsing interfaces(requirement 1 \& 2) and also discuss the solution to the packet ransformation in middleboxes(requirement 6); in the section~\ref{sec:lang}, we provide sufficient operative interfaces for tenants to process their data(requirement 3); in the section~\ref{sec:processing}, we explain how the data processing command are executed distributedly to scale the system(requirement 5). The remaining requirements (requirement 4 \& 5) are implementation issues, we leave to the section~\ref{sec:impl}.\\

We propose a framework called VNetDaaS that integrates with the cloud network controller to provide virtual network diagnosis services for tenants. In this section, we 
introduce the architecture and design of the VNetDaaS framework. 

\subsection{Architecture}
\label{sec:arch}
\begin{figure}[htb]
\centering
\includegraphics[width=0.45\textwidth]{fig/arch.pdf}
\caption{Virtual Network Diagnosis Framework}
\label{fig:arch}
\end{figure}
\begin{figure}[htb]
\centering
\footnotesize
{\setlength{\tabcolsep}{0.1em}
\begin{tabular}{cc}
\begin{tabular}{|lp{0.22\textwidth}|}
\hline
1)& Appliance \textbf{\emph{link}} of \textbf{\emph{l1}} \\
2)& \hspace{0.4em}Cap Point \textbf{\emph{node1}} \\
3)& \hspace{0.8em}Pattern \textbf{\emph{field = value, ...}}\\
4)& \hspace{0.4em}Cap Point \textbf{\emph{node2}}\\
5)& ...\\
6)& Appliance Type \textbf{\emph{node}} of \textbf{\emph{mb1}} \\
7)& \hspace{0.4em} Cap Point \textbf{\emph{input}} \\
8)& ...\\
9)& \hspace{0.4em} Cap Point \textbf{\emph{output}} \\
10)& ...\\
\hline
\end{tabular} &
\begin{tabular}{|lp{0.21\textwidth}|}
\hline
1)&Trace ID \textbf{\emph{tr\_id1}}\\ 
2)&Pattern \textbf{\emph{src\_ip=10.0.0.6, ..., proto=TCP}}\\
3)&Physical Capture Point \textbf{\emph{vs1}}\\
4)&Collector \textbf{\emph{c1}} at \textbf{\emph{h1}}\\
5)&Path \textbf{\emph{vs1, ..., h1, c1}}\\
\hline
6)&Trace ID \textbf{\emph{tr\_id2}} \\
7)&...\\ \hline
8)&Trace ID \textbf{\emph{tr\_id3}} \\
9)&...\\ \hline
\end{tabular}\\
(a) Collection Config & (b) Collection Policy
\end{tabular}
}

\caption{Data Collection}
\label{fig:collection}
\end{figure}
\begin{figure}[htb]
\footnotesize
\centering
{\setlength{\tabcolsep}{0.2em}
\begin{tabular}{cc}
\begin{tabular}{p{0.23\textwidth}}
\begin{tabular}{|p{0.23\textwidth}|}
\hline
Trace ID \textbf{\emph{tr\_id1}}\\ \hline
Table ID \textbf{\emph{tab\_id1}} \\
Filter \textbf{\emph{exp}}\\
Fields \textbf{\emph{field\_list}}\\ \hline
Table ID \textbf{\emph{tab\_id2}} \\
...\\ \hline
Table ID \textbf{\emph{tab\_id3}} \\
...\\ \hline
\end{tabular} \\
\textbf{\emph{exp}} = not \textbf{\emph{exp}} \textbar \textbf{\emph{ exp}} and \textbf{\emph{exp}} \textbar \\
\hspace{0.8em}\textbf{\emph{exp}} or \textbf{\emph{exp}} \textbar  \textbf{\emph{ (exp)}} \textbar  \textbf{\emph{ prim}},\\
\textbf{\emph{prim}} = \textbf{\emph{field}} $\in$ \textbf{\emph{value\_set}}, \\
\textbf{\emph{field\_list}} = \textbf{\emph{field}} (as \textbf{\emph{name}}) \\
\hspace{0.8em}(, \textbf{\emph{field}} (as \textbf{\emph{name}}))*\\
\end{tabular} &
\begin{tabular}{|p{0.22\textwidth}|}
\hline
Trace ID \textbf{\emph{all}}\\ \hline
Table ID \emph{\# system-assigned}\\
Filter: \textbf{\emph{ip.proto = tcp}}\\
\hspace{1em}\textbf{\emph{or ip.proto = udp}}\\
Fields: 
\textbf{\emph{time\_stamp}} as \textbf{\emph{ts}},\\
\hspace{1em} \textbf{\emph{packet\_id}} as \textbf{\emph{id}}, \\
\hspace{1em} \textbf{\emph{ip.src}} as \textbf{\emph{src\_ip}},\\
\hspace{1em} \textbf{\emph{ip.dst}} as \textbf{\emph{dst\_ip}},\\ 
\hspace{1em} \textbf{\emph{ip.proto}} as \textbf{\emph{proto}},\\ 
\hspace{1em} \textbf{\emph{tcp.src}} as \underline{\textbf{\emph{src\_port}}},\\ 
\hspace{1em} \textbf{\emph{tcp.dst}} as \underline{\textbf{\emph{dst\_port}}},\\ 
\hspace{1em} \textbf{\emph{udp.src}} as \underline{\textbf{\emph{src\_port}}},\\ 
\hspace{1em} \textbf{\emph{udp.dst}} as \underline{\textbf{\emph{dst\_port}}}\\
\hline
\end{tabular}
\end{tabular}
}

\caption{Parse Configuration and an Example}
\label{fig:parse}
\end{figure}

Figure~\ref{fig:arch} shows the architecture of VNetDaaS and its control flow. VNetDaaS is composed of a diagnosis {\bf control server} and multiple {\bf table servers}. Table servers collect 
traffic traces from network devices (both physical and virtual) and perform initial parsing to store the data into distributed data tables. The control server is a tenant-facing service provider that allows tenants to 
specify data collection and parsing configurations and to diagnose their virtual networks through abstracted query interfaces. Through the configuration and query interfaces, the diagnosis control server lets 
cloud tenants look into virtual network problems without exposing unnecessary information about the infrastructure and other tenants. The control server provides the diagnosis service in an on-demand manner: 
it starts data collection and analysis only in response to tenants' diagnosis requests, which reduces the overhead of VNetDaaS. 

%Figure~\ref{fig:arch} illustrates VNetDaaS architecture and data and control flow in it. In VNetDaaS, there is a {\bf control server} and multiple {\bf table servers}. We assume the cloud has the architecture as described in section~\ref{sec:vnet}. There is a cloud controller that knows the physical topology and all tenants' virtual network embedding information.

A policy manager in the control server manages the data collection and parsing configurations submitted by cloud tenants. When a tenant runs into problems in its virtual network, it can submit a data 
collection configuration that specifies a particular flow pattern of interest. This flow pattern is normally determined by the problematic flows where the tenant observes networking issues, and it can be 
at different granularities, such as a particular TCP flow or all IP traffic. Based on the data collection configuration, the policy manager computes a {\bf collection policy} that decides how to capture flow 
traces from the network. It includes the flow pattern, the capture points in the network, and the trace collector location. Figure~\ref{fig:collection} shows an example of a data collection configuration and the policy generated 
by the policy manager. According to the collection policy, the network controller sets up corresponding rules at the capture points to collect the traces of the specified flows on the designated table servers. On each table server, 
a trace collector runs to collect the traffic traces of the selected flows.  
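As an illustration, the policy manager's mapping from a tenant's collection configuration to a collection policy could be sketched as follows. The data structures, the placement table, and the collector naming scheme are assumptions for the example, not VNetDaaS's actual internals.

```python
# Hypothetical sketch: for each virtual capture point in the configuration,
# look up its physical location (a hypervisor virtual switch) and emit one
# policy entry with a fresh trace ID.
def compute_collection_policy(config, placement, next_trace_id):
    policy = []
    for cap_point in config["cap_points"]:
        vswitch, host = placement[cap_point]   # virtual node -> (vswitch, host)
        policy.append({
            "trace_id": f"tr_id{next_trace_id}",
            "pattern": config["pattern"],       # flow pattern, e.g. 5-tuple fields
            "phys_cap_point": vswitch,
            "collector": f"c_{host}",           # collector co-located on the host
            "host": host,
        })
        next_trace_id += 1
    return policy

# Example mirroring Figure 2: appliance l1 with capture points node1, node2.
placement = {"node1": ("vs1", "h1"), "node2": ("vs2", "h2")}
config = {"appliance": "l1",
          "cap_points": ["node1", "node2"],
          "pattern": {"src_ip": "10.0.0.6", "proto": "TCP"}}
policy = compute_collection_policy(config, placement, 1)
```

The first policy entry then captures the pattern at \emph{vs1} with a collector on \emph{h1}, matching the shape of Figure~\ref{fig:collection}(b).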

%When a tenant meets with problems from his network applications inside his virtual network, he submits the diagnosis request to {\bf ploicy manager} in the control server. The request includes a {\bf trace collection configuration} as in Figure~\ref{fig:collection}(a), which specifies the virtual network appliance (virtual links or virtual node), the trace capture point (two ends of a virtual link or input/output of a virtual node) and the flow pattern. The flow can be in different granularity such as a TCP flow or all IP traffic. The policy manager also gets the tenant virtual network allocation information, and then computes a {\bf diagnosis policy} in Figure~\ref{fig:collection}(b). The diagnosis policy spcifies how to capture the flow trace in the physical network. It includes the flow pattern, the capture point in the physical network, the {\bf trace collector} location and the path from the capture point to the collector; each trace is given a trace ID to distinguish them in the collector. The algorithm to allocate a collector for each virtual capture point is to use breadth first serarch (BFS), starting from the physical location of virtual capture point and finding a hypervisor with sufficient CPU, memory, bandwidth. Given the virtual capture points are usually on the virtual swithes, the table server can be allocated on the same hypervisor so as to avoid network traffic through the physical links.

%Then, this diagnosis policy is passed to the cloud controller. New table servers with collectors and routing rules are set up to collect the flow traces. The tenant starts his network application for diagnosis. As the application traffic traverses its virtual link, the corresponding trace is dumped into the table servers. The raw trace is packets with time stamps.

Cloud tenants can also submit a parse configuration along with the collection configuration to perform initial parsing on the raw flow trace. Figure~\ref{fig:parse} shows the format of a parse configuration. It contains multiple 
parsing rules; each rule has a filter and a field list that specify the packet headers of interest and the field values to extract from the flow trace. According to the parse configuration, the policy manager configures the 
trace parsers on the table servers to parse the raw traffic traces into multiple human-readable tables that store packet records with the selected data fields.
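A parsing rule of this form, i.e. a filter plus a field list with renames, can be sketched over decoded packet records as below. This is an illustrative model, not the trace parser's actual code; packets are assumed to be dicts keyed by protocol field names as in Figure~\ref{fig:parse}.

```python
# Apply one parse rule: keep packets matching the filter, then project and
# rename the listed fields into a table row.
def parse_trace(packets, rule):
    table = []
    for pkt in packets:
        if rule["filter"](pkt):
            row = {new: pkt.get(old) for old, new in rule["fields"]}
            table.append(row)
    return table

# Rule modeled on the example in Figure 3: keep TCP/UDP packets and extract
# a few header fields under new column names.
rule = {
    "filter": lambda p: p.get("ip.proto") in ("tcp", "udp"),
    "fields": [("time_stamp", "ts"), ("ip.src", "src_ip"),
               ("tcp.src", "src_port")],
}
packets = [
    {"time_stamp": 1.0, "ip.proto": "tcp", "ip.src": "10.0.0.6", "tcp.src": 80},
    {"time_stamp": 1.1, "ip.proto": "icmp", "ip.src": "10.0.0.7"},  # filtered out
]
rows = parse_trace(packets, rule)
```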

%Next, the tenant starts parsing on the raw packet trace.
%He submits a {\bf parse configuration} to the policy manager, which passes it to each {\bf trace parser} in the table server.
%The parse configuration has the format in Figure~\ref{fig:parse}. A trace can be parsed into one or more tables with different table ID. Each parsing rule has a fitler and a field list; the fitler rules specify the packets to the tenant's interest and the field list specifies the field value to extract. The raw traces are parsed into human-readable tables with each column describes one or more fields in the packet header and each row as a packet record. These {\bf trace tables} are stored in the {\bf query executors} in table servers. There is also {\bf metadata tables} transformed from the diagnosis policy and parse configuration in the {\bf analysis manager}. All these tables are viewed as a diagnosis schema.

After the traffic trace is collected and parsed into the table servers, tenants can perform various diagnosis operations through a query interface provided by the control server. The analysis manager in the control server 
takes the tenant's query, schedules its execution on the distributed table servers, and returns the results to the tenant. In the following section, we discuss typical diagnosis tasks that can be easily implemented using the 
query interface. 

%Finally, the tenant can diagnoze the network problem based on trace tables. The {\bf analysis manager} in the control server works as a front end to interact with the tenant. It provides interfaces with the format of a query langeuage to the tenant. The the analysis manager takes the tenant's query, schedules its execution in distributed table servers and returns the result to the tenant. There can be diagnosis applications on top of the query language.

%The control server schedules data collection and data processing; and the table server perform data collection, data parse and distributed query exection. In the whole procedure, the tenant only needs to submit trace collection configuration and trace parse configuration, which is convenient to manage. And the data manipulation interface also saves the tenant's effort to parse the raw traces.

\subsection{Data Collection} 

Collecting flow traces requires capturing selected flows at the capture points and replicating them to the table servers, which is a major source of both bandwidth and 
processing overhead in the VNetDaaS framework. In this section, we discuss several design issues in data collection and the ways we address them. 

{\bf Table server and capture point placement: }  The placement of table servers has significant implications for the bandwidth overhead of data collection, since each data collection request 
causes a flow to be replicated from a capture point to its table server. VNetDaaS always places capture points on hypervisor virtual switches, for two reasons. First, most virtual 
overlay networks are implemented 
using an edge-based architecture, where tunneling and routing are all done at the virtual switches in the hypervisors~\cite{dove, quantum}. As a result, virtual switches are the end points of most virtual 
links in a virtual overlay network, so by placing capture points at virtual switches, VNetDaaS is able to trace flows across multi-hop virtual links traveling through virtual routers and middlebox appliances.
\footnote{The only exception is when a virtual link goes through a shared physical middlebox device; capturing at virtual switches is not enough to diagnose physical middlebox problems.}  
Second, virtual switches, such as Open vSwitch, are more flexible in supporting flow replication using OpenFlow rules, which many physical network devices do not support. 

To limit the data collection overhead from capture points to table servers, VNetDaaS places a table server VM on each hypervisor node, so every capture point has a table server on its 
local hypervisor and all data collection flows stay local to the hypervisor nodes. Data movement is needed only when a distributed query is executed, which reduces the bandwidth 
overhead of VNetDaaS as much as possible. 


\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{fig/multi_table.pdf}
\caption{Flow capture with multiple tables}
\label{fig:multi_table}
\end{figure}

{\bf Independent data collection rules: }
On hypervisor virtual switches, we can capture a specific flow by installing OpenFlow rules that replicate the flow to the local table server. However, we must make sure that the data collection rules do not 
interfere with existing routing rules in the virtual switches, since they operate on overlapping packet headers. Similar problems have been discussed in Frenetic~\cite{frenetic}. VNetDaaS 
solves this problem using the multiple-table option of virtual switches. As shown in Figure~\ref{fig:multi_table}, the data collection rules 
are installed in Table 0 while other modules write their forwarding rules into Table 1. Table 0 is the first table consulted for any packet, and it contains a default rule that forwards packets to Table 1. To capture a certain flow, new rules are added to Table 0 whose actions output the flow to the table server port and forward it to Table 1. With the multi-table option, we can add flow capture rules to virtual switches 
without impacting rules installed by other modules. 
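The independence property can be seen in a toy simulation of the two-table pipeline (an illustration only; real switches match on flow fields and priorities, not Python predicates). Capture rules live in Table 0 next to a default "goto Table 1" rule, so adding or removing them never touches the forwarding rules in Table 1:

```python
# Simulate the two-table pipeline: the highest-priority matching rule in
# table 0 fires; its "goto:1" action hands the packet to table 1, whose
# matching rule decides the forwarding output.
def process(pkt, table0, table1):
    outputs = set()
    for match, actions in table0:
        if match(pkt):
            outputs |= {a for a in actions if a != "goto:1"}
            if "goto:1" in actions:
                for m1, a1 in table1:
                    if m1(pkt):
                        outputs |= set(a1)
                        break
            return outputs          # first (highest-priority) table-0 rule wins
    return outputs

# Table 0: one capture rule (replicate the flow to the collector port and
# continue to table 1), then the default goto rule.
table0 = [
    (lambda p: p["src_ip"] == "10.0.0.6", ["collector", "goto:1"]),
    (lambda p: True, ["goto:1"]),
]
# Table 1: forwarding rules owned by other modules; untouched by capture.
table1 = [(lambda p: True, ["port2"])]

captured = process({"src_ip": "10.0.0.6"}, table0, table1)  # replicated + forwarded
other = process({"src_ip": "10.0.0.9"}, table0, table1)     # forwarded only
```

Both packets are forwarded to \emph{port2} as before; only the captured flow additionally reaches the collector.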

\subsection{Data Analysis}
\label{sec:analysis}

In this section, we introduce the VNetDaaS data analysis framework and the query interface for virtual network diagnosis service. 

\subsubsection{Framework and Execution}
\label{framework}

\begin{figure}[htb]
\centering
\includegraphics[width=0.48\textwidth]{fig/ana_frame.pdf}
\caption{Analysis Framework}
\label{fig:analysis_framework}
\end{figure}

The data analysis framework is shown in Figure~\ref{fig:analysis_framework}. For each diagnosis request, the schema includes metadata tables and trace tables. The metadata tables describe how the data is captured and parsed, and the trace tables come from the trace parsers. The metadata tables are views that expose only the current tenant's information, for security reasons. 

On each table server, the executor supports primitive operators on the collected tables, such as join and scan. The analysis manager provides a query-language interface to the tenant. Queries from the tenant's manipulations or applications are translated into execution plans, each in a tree format with an operator at every node. As in distributed databases, there are two additional operators, send and receive, which are used for data exchange between the executors. The analysis manager optimizes the execution plan, taking the operator execution order and execution site into consideration~\cite{dis_query}. 
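An execution plan of this shape can be sketched as an operator tree. The class names mirror the operators in the text (scan, send, receive), but the classes themselves are illustrative assumptions, not the executor's actual API; in a real deployment, send and receive would serialize rows over the network rather than call each other directly.

```python
# Minimal operator tree for a distributed scan-and-merge plan.
class Scan:
    """Leaf operator: read rows from a local trace table shard."""
    def __init__(self, table):
        self.table = table
    def run(self):
        return list(self.table)

class Send:
    """Runs at a remote table server; ships its child's rows upward."""
    def __init__(self, child):
        self.child = child
    def run(self):
        return self.child.run()   # in reality: serialize and transmit

class Receive:
    """Runs at the coordinating site; gathers rows from all senders."""
    def __init__(self, senders):
        self.senders = senders
    def run(self):
        rows = []
        for s in self.senders:
            rows.extend(s.run())
        return rows

# Two table servers each hold a shard of the same trace table; the plan
# scans both shards remotely and merges them at the receive node.
shard1 = [{"id": 1}, {"id": 2}]
shard2 = [{"id": 3}]
plan = Receive([Send(Scan(shard1)), Send(Scan(shard2))])
result = plan.run()
```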

\subsubsection{Query Interfaces}
\noindent{\footnotesize
\begin{tabular}{|p{0.45\textwidth}|} \hline 
{\bf select} \it{column (as new\_name),...} {\bf from} \it{table} {\bf where} \it{exp(column)} {\bf group by} \it{column} (having \it{exp(column)}) {\bf order by} \it{column} {\bf limit} \it{number}\\
\hline 
\end{tabular} 
}\\
We adopt SQL directly and claim that it is sufficient for most diagnosis requests. Frenetic~\cite{frenetic} provides two additional operators, {\bf every} and {\bf splitwhen}, which we can implement as follows.

{\bf every:} execute a query periodically. \textit{select * from tab1 every t} is equivalent to:\\
{\footnotesize
\begin{tabular}{|l|}
\hline
while(true)\\
\hspace{1em}time=now()\\
\hspace{1em}Output(select * from tab1 where ts$>$time-t and ts$<$time)\\
\hspace{1em}Sleep(t)\\
\hline
\end{tabular}
}
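The core of one iteration of the loop above, selecting the records whose timestamps fall in the window $(now-t, now)$, can be written directly (a sketch; table and field names are taken from the pseudocode):

```python
# Select the records of tab1 that arrived in the last t seconds.
def every_window(tab1, now, t):
    return [r for r in tab1 if now - t < r["ts"] < now]

tab1 = [{"ts": 0.5}, {"ts": 1.5}, {"ts": 2.5}]
window = every_window(tab1, now=3.0, t=2.0)  # keeps ts in (1.0, 3.0)
```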

{\bf splitwhen:} detect a change of a field in the packet stream. For example, when a VM migrates, its packets' inport may change on a certain switch. The equivalent of \textit{select mac, inport from tab1 splitwhen inport} is:\\
{\footnotesize
\begin{tabular}{|l|}
\hline
mac\_inport=new dict$<$mac, int$>$()\\
while(true)\\
\hspace{1em} time=now()\\
\hspace{1em} sleep(t)\\
\hspace{1em} Var = select * from tab1 where ts$>$time order by ts asc\\
\hspace{1em} for each record r in Var\\
\hspace{2em} if r.inport != mac\_inport(r.mac) \\
\hspace{3em} mac\_inport(r.mac)=r.inport\\
\hspace{3em} Output(r.mac, r.inport)\\
\hline
\end{tabular}
}
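A runnable version of the splitwhen loop (an illustrative sketch over in-memory records rather than a live query) emits a \textit{(mac, inport)} pair whenever a MAC's inport differs from the one last seen, including the first observation:

```python
# Scan records in timestamp order and report inport changes per MAC.
def splitwhen_inport(records):
    mac_inport = {}
    changes = []
    for r in sorted(records, key=lambda r: r["ts"]):
        if mac_inport.get(r["mac"]) != r["inport"]:
            mac_inport[r["mac"]] = r["inport"]
            changes.append((r["mac"], r["inport"]))
    return changes

records = [
    {"ts": 1, "mac": "aa:bb", "inport": 1},
    {"ts": 2, "mac": "aa:bb", "inport": 1},
    {"ts": 3, "mac": "aa:bb", "inport": 2},  # VM migrated: inport changes
]
changes = splitwhen_inport(records)
```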

\subsubsection{Diagnostic Applications}
\label{sec:app}
\label{examples}
%{\color{red} statistics: packet number, flow size, flow duration, field statistics, fields pair statistics, conversation, protocol hierarchy? IO graph(sort by time). monitoring: throughput, rtt, tcpstream. diagnosis: loss, latency. }
Based on the diagnosis query interface, tenants can develop their own applications, and the cloud administrator can also provide diagnosis applications as services. 

One kind of application is {\bf statistical applications}. For example, tenants may need the distribution of IP addresses, MAC addresses, etc. Such an application can be developed as follows:\\
{\footnotesize
\begin{tabular}{|l|}
\hline
var1=select field, count(*) as cnt from tab group by field\\
var2=select count(*) from tab\\
for each record r in var1\\
\hspace{1em}Output $<$r.field, r.cnt/var2$>$\\
\hline
\end{tabular}
}
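The same distribution application can be sketched in ordinary Python, assuming the trace table is a list of row dicts; \texttt{collections.Counter} stands in for the two SQL queries:

```python
from collections import Counter

# Fraction of records carrying each value of the given field.
def field_distribution(tab, field):
    counts = Counter(r[field] for r in tab)   # var1: per-value counts
    total = len(tab)                          # var2: total record count
    return {value: n / total for value, n in counts.items()}

tab = [{"src_ip": "10.0.0.6"}, {"src_ip": "10.0.0.6"}, {"src_ip": "10.0.0.7"}]
dist = field_distribution(tab, "src_ip")
```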

For network {\bf monitoring} purposes, the tenant may need flow throughput and RTT. Throughput monitoring can be implemented as follows:\\
{\footnotesize
\begin{tabular}{|p{0.44\textwidth}|}
\hline
\textit{\# assume the time stamp unit is second}\\
select ceil(ts), sum(payload\_length) from table group by ceil(ts)\\
\hline
\end{tabular}
}
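The throughput query above amounts to bucketing records by second (the ceiling of the timestamp) and summing payload lengths; a sketch over row dicts:

```python
import math
from collections import defaultdict

# Bytes observed per one-second bucket.
def throughput_per_second(tab):
    buckets = defaultdict(int)
    for r in tab:
        buckets[math.ceil(r["ts"])] += r["payload_length"]
    return dict(buckets)

tab = [{"ts": 0.2, "payload_length": 100},
       {"ts": 0.9, "payload_length": 200},
       {"ts": 1.5, "payload_length": 50}]
tput = throughput_per_second(tab)
```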

RTT monitoring is more complicated. Assume the trace tables have the following format:\\
{\setlength{\tabcolsep}{0.3em}
{\centering \footnotesize
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
ts & id & srcIP & dstIP & srcPort & dstPort & seq & ack & payLen \\
\hline
... & ... & ... & ... & ... & ... & ... & ... & ... \\
\hline
\end{tabular}
}
}\\
To find the RTT of a certain hop, we need to match a packet with its ACK and use the difference of their time stamps as an RTT sample. The application is designed as follows:\\
{\footnotesize
\begin{tabular}{|p{0.45\textwidth}|}
\hline
1) create view T1\_f as select * from T1 where srcIP=IP1 and dstIP = IP2\\
2) create view T1\_b as select * from T1 where dstIP=IP1 and srcIP = IP2\\
3) create view RTT as select f.ts as t1, b.ts as t2 from T1\_f as f, T1\_b as b where f.seq+f.payLen = b.ack\\
4) select avg(t2-t1) from RTT\\
\hline
\end{tabular}
}
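The four views and the final aggregate can be sketched in Python over the trace-table rows (an illustration; it assumes each data packet is ACKed exactly once, and pairs a forward packet with the backward packet whose ACK number equals seq + payLen):

```python
# Average RTT between ip1 and ip2 from one hop's trace table.
def avg_rtt(trace, ip1, ip2):
    fwd = [r for r in trace if r["srcIP"] == ip1 and r["dstIP"] == ip2]  # view T1_f
    bwd = [r for r in trace if r["srcIP"] == ip2 and r["dstIP"] == ip1]  # view T1_b
    samples = [b["ts"] - f["ts"]                                         # view RTT
               for f in fwd for b in bwd
               if f["seq"] + f["payLen"] == b["ack"]]
    return sum(samples) / len(samples) if samples else None

trace = [
    {"ts": 1.0, "srcIP": "A", "dstIP": "B", "seq": 100, "ack": 0, "payLen": 50},
    {"ts": 1.2, "srcIP": "B", "dstIP": "A", "seq": 0, "ack": 150, "payLen": 0},
]
rtt = avg_rtt(trace, "A", "B")  # one sample: 1.2 - 1.0
```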

For network {\bf diagnosis} purposes, the tenant may want to know the packet loss between two hops. It can be described as follows:\\
{\footnotesize
\begin{tabular}{|l|}
\hline
select * from T1 where T1.id not in (select id from T2)\\
\hline
\end{tabular}
}
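The loss query is a set difference over packet IDs: packets captured at the first hop (T1) but missing at the second (T2) were lost in between. A sketch over row dicts:

```python
# Packets present at the first capture point but absent at the second.
def lost_packets(t1, t2):
    seen = {r["id"] for r in t2}
    return [r for r in t1 if r["id"] not in seen]

t1 = [{"id": 1}, {"id": 2}, {"id": 3}]
t2 = [{"id": 1}, {"id": 3}]
lost = lost_packets(t1, t2)  # packet 2 never reached the second hop
```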
