\section{Implementation}
\label{sec:impl}
We prototyped APLAD on a small layer-2 cluster with three HP T5500 workstations
and one HP ProCurve switch. Each workstation has two quad-core CPUs, a 10Gbps NIC, and 12GB of memory.
Open vSwitch and the KVM hypervisor are installed
on each physical server to simulate the cloud environment.

A table server is a virtual machine that runs a trace collector, a trace parser, and a query executor.
We implement a table server as a virtual machine image that can be deployed easily across the
cluster. The trace collector and trace parser are implemented in Python using the pcap and dpkt packages.
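In outline, the parser walks a raw capture and emits timestamped frames for the query executor to index. Our implementation uses the pcap and dpkt packages; the following self-contained sketch instead decodes the libpcap file layout directly with the standard library so it runs without third-party dependencies (the synthetic one-packet capture is purely illustrative):

```python
import struct

# libpcap file layout: 24-byte global header, then per-packet headers.
PCAP_GLOBAL_HDR = struct.Struct("<IHHiIII")  # magic, ver_maj, ver_min, tz, sigfigs, snaplen, linktype
PCAP_PKT_HDR = struct.Struct("<IIII")        # ts_sec, ts_usec, incl_len, orig_len

def parse_pcap(data):
    """Yield (timestamp, raw_frame) pairs from an in-memory pcap capture."""
    magic = PCAP_GLOBAL_HDR.unpack_from(data, 0)[0]
    assert magic == 0xA1B2C3D4, "little-endian, microsecond-resolution pcap expected"
    offset = PCAP_GLOBAL_HDR.size
    while offset < len(data):
        ts_sec, ts_usec, incl_len, _ = PCAP_PKT_HDR.unpack_from(data, offset)
        offset += PCAP_PKT_HDR.size
        frame = data[offset:offset + incl_len]
        offset += incl_len
        yield ts_sec + ts_usec / 1e6, frame

# Build a one-packet capture in memory and parse it back.
frame = bytes(14) + b"payload"  # zeroed Ethernet header plus a dummy payload
capture = (PCAP_GLOBAL_HDR.pack(0xA1B2C3D4, 2, 4, 0, 0, 65535, 1)
           + PCAP_PKT_HDR.pack(10, 500000, len(frame), len(frame))
           + frame)
for ts, pkt in parse_pcap(capture):
    print(ts, len(pkt))  # -> 10.5 21
```

In the deployed table server, the same loop reads live traffic via pcap and hands each decoded frame to dpkt for protocol parsing before insertion into the trace table.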

The query executor and the analysis manager in the control server together form a distributed
database system, which we realize with MySQL Cluster: the MySQL daemon serves as the
analysis manager, and the MySQL Cluster data nodes serve as the query executors.
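To illustrate the division of labor, the analysis manager decomposes a diagnostic task into SQL queries that each query executor runs over its local slice of the trace table. The sketch below uses SQLite as a single-node stand-in for a MySQL Cluster data node; the schema and the per-flow aggregation query are hypothetical examples, not the system's actual tables:

```python
import sqlite3

# In-memory stand-in for one query executor's local trace table.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE trace (ts REAL, src TEXT, dst TEXT, nbytes INTEGER)")
db.executemany("INSERT INTO trace VALUES (?, ?, ?, ?)", [
    (1.0, "10.0.0.1", "10.0.0.2", 1500),
    (1.1, "10.0.0.1", "10.0.0.2", 1500),
    (1.2, "10.0.0.3", "10.0.0.2", 40),
])

# A query the analysis manager might push down: per-flow byte counts.
rows = db.execute(
    "SELECT src, dst, SUM(nbytes) FROM trace GROUP BY src, dst ORDER BY src"
).fetchall()
print(rows)  # -> [('10.0.0.1', '10.0.0.2', 3000), ('10.0.0.3', '10.0.0.2', 40)]
```

In the real deployment the same query is issued through the MySQL daemon and executed in parallel on the data nodes, each holding a partition of the collected traces.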

The policy manager is designed as a component integrated with an existing cloud management platform.
We have not implemented it yet because current platforms (e.g., OpenStack) do not support the OpenFlow multi-table
feature (OpenFlow 1.3). Without multi-table support in the OpenFlow protocol,
routing control becomes very complicated, as discussed in
Section~\ref{sec:multi-table}. Currently, we use shell scripts to set up the cloud cluster and
APLAD. In our experimental setup, we make use of OVS's multi-table features.
%to implement the data collection policy.
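Concretely, the multi-table feature lets an early table mirror a flow of interest to a table server's port while a later table performs normal forwarding. Our setup scripts emit \texttt{ovs-ofctl} commands along the following lines; the bridge name, port number, and match fields in this sketch are illustrative assumptions:

```python
def mirror_flow_rules(bridge, nw_dst, mirror_port):
    """Build OpenFlow 1.3 multi-table rules: table 0 duplicates the matched
    flow to a table server's port, then hands the packet to table 1 for
    normal forwarding."""
    return [
        # Duplicate the flow of interest, then continue the pipeline.
        f'ovs-ofctl -O OpenFlow13 add-flow {bridge} '
        f'"table=0,ip,nw_dst={nw_dst},actions=output:{mirror_port},goto_table:1"',
        # Everything else skips straight to the forwarding table.
        f'ovs-ofctl -O OpenFlow13 add-flow {bridge} '
        f'"table=0,priority=0,actions=goto_table:1"',
        # Table 1: default L2 forwarding.
        f'ovs-ofctl -O OpenFlow13 add-flow {bridge} '
        f'"table=1,actions=NORMAL"',
    ]

for cmd in mirror_flow_rules("br0", "10.0.0.5", 2):
    print(cmd)
```

With a single flow table the mirroring and forwarding decisions would have to be fused into one rule set, which is what makes single-table routing control complicated.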

The APLAD cluster (composed of a control server and table servers) can be integrated with existing cloud platforms.
It can be implemented as a virtual cluster in the cloud, with the table servers running as virtual machines
and overlay communication among the table servers and the control server.
This virtual diagnostic cluster differs from a tenant's virtual cluster in two ways:
1) APLAD is deployed by the network administrator, and
2) the APLAD control server can send trace duplication requests to the cloud controller to dump the flows of interest.
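The duplication request itself can be a small message naming the flow of interest and the table server that should receive the copy. The sketch below shows one plausible JSON encoding; the field names are assumptions for illustration, not a defined APLAD wire format:

```python
import json

def build_dup_request(src, dst, proto, table_server):
    """Hypothetical trace-duplication request from the APLAD control server
    to the cloud controller: mirror the matching flow's packets to the
    named table server."""
    return json.dumps({
        "action": "duplicate_flow",
        "match": {"nw_src": src, "nw_dst": dst, "ip_proto": proto},
        "mirror_to": table_server,
    }, sort_keys=True)

req = build_dup_request("10.0.0.1", "10.0.0.2", 6, "table-server-1")
print(req)
```

On receipt, the cloud controller would translate the match fields into mirroring rules (such as the multi-table OVS rules shown earlier) on the switches along the flow's path.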
