The following chapter explores in detail the test setup used during the latter phase of our project: implementation. The implemented code base can be found in the project's repository TODO.

\section{General Overview}

In order to explore the implementational difficulties of providing honeynets as a service, Amazon Web Services (AWS) was used as a testbed. The main reason for choosing AWS over other open-source cloud technologies is its readily available Infrastructure as a Service model. AWS comes provisioned with robust virtualisation technologies and accompanying automation tools, which allowed us to concentrate on building the components of the service for that given environment rather than on the underlying infrastructure. The same environment could, however, also be realised as a custom-tailored solution relying on own hardware in combination with software such as OpenStack and Xen.

\subsection{The setup}

A distinctive characteristic of the test setup is that the network traffic-handling components of the HaaS implementation expect to process only traffic originating from attackers and destined towards the invalid resources (honeypots) in the user's network. The architecture relies on this distinction being made by the routing or firewall infrastructure of the user's network. Handling of traffic towards valid user resources, such as web servers, on the other hand, remains a responsibility of the user.

Depicted in Fig.~\ref{fig:architecture_model} are the HaaS components required for servicing a single user of the service. 

\vspace{-.5cm}
\begin{figure}[h]
	\hspace*{-3.8cm}
	\includegraphics[width=5.9in]{Figures/test_setup.png}
	\caption{Component overview for a single client}
	\label{fig:architecture_model}
\end{figure}
\vspace{1.5cm}

Users of the service would have as an entry point a front-end component, which can be perceived as a Web-based application. As it does not play a central role in the implementation of the service, this component is considered part of future work on the project. Its envisioned primary functionalities are:

\begin{itemize}
	\item User-identity tracking
	\item Forms for specifying the software stacks and network-based characteristics of honeypots
	\item Aggregated statistics generated by the logging mechanisms of honeypots
\end{itemize}
 

The back-end, depicted on the right-hand side of Fig.~\ref{fig:architecture_model}, is an always-on EC2-based instance that serves two distinct purposes. While encapsulating the functionalities that need to be provided to the front-end, it is also responsible for controlling the overall lifecycle and configuration of all other EC2-based virtual instances. The direction of the black arrows in the figure indicates the type of inter-component communication that can occur.

The rest of the depicted EC2-based components - the monitor station, the Virtual Private Cloud (VPC) and the honeypot(s) within it - are likewise dedicated to servicing a single user.


\section{Networking}

At the heart of the network model lie the following goals:
\begin{itemize}
	\item Minimal involvement of the client's resources
	\item Minimal configuration and deployment effort for clients
	\item Segmentation of resources
\end{itemize}

This section explores in detail the way networking has been established within the test model. Initially, a discussion of how clients offload their traffic to the cloud-based infrastructure of the service is presented, followed by the way user traffic is handled by the different components of the service.

\subsection{User traffic offloading}

In order to build the test setup, various network models based on open-source technologies such as Linux IPv4/IPv6 forwarding and iptables were considered, so that the deployment and configuration effort required of users is minimal. In addition, the implementation strives to minimize the utilization of the user's infrastructure by delivering return traffic to attackers without its involvement. However, the applicability of the service may vastly differ: some users may wish to further secure their internal resources from breaches, others their publicly accessible ones. In the former case users would deploy honeynets as part of their private network, whereas in the latter at publicly accessible (routable) IP addresses.
The way users would have to set up traffic offloading within the two scenarios is inevitably different. In order to prevent the disclosure of additional information on the location of honeynets, users would have to specify within the front-end web-forms the way they would achieve traffic offloading. An overview of the considered traffic offloading options for the client follows:
\begin{itemize}
	\item Routing - In the case of routing, additional information about the location of honeynets may be disclosed by the hop count field of network packets. If routing is to be used by clients, the monitor component would have to take care of modifying the relevant hop count field.
 	\item Forwarding - Any combination of PAT, DNAT and/or SNAT would obscure either the original source or destination address of packets which makes it impossible for the EC2-based honeynets to route traffic back to attackers. 
 	\item Proxying - A situation similar to forwarding would be created but only in terms of source addresses.
 	\item Tunneling - A situation similar to routing would be created but with the need of additional facilities on both sides for maintaining the tunnel.
\end{itemize}

Another concern with all considered traffic offloading techniques users may employ is that their infrastructure should not be burdened with facilities such as connection tracking. As connection tracking is applied by default even to packets traversing netfilter's INPUT chain, hints and best-practice information would have to be provided within the front-end, specifying the optimal ways of configuring the different types of traffic offloading.

As it is too restrictive to force a single traffic offloading model upon users, the test setup was built with traffic cloning on the user's side, so that packets retain their original source and destination addresses, do not undergo increments of their hop count field and are discarded by the user's infrastructure before connection tracking occurs, by means of rules in the PREROUTING chain of the raw table.
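The cloning approach described above can be sketched as follows. The helper below is illustrative only: it merely builds the iptables commands a client might install, assuming the TEE target (from the xtables-addons package) for cloning and NOTRACK/DROP rules to keep the original packets away from connection tracking; the \texttt{offload\_rules} helper and the exact rule layout are our own illustration, not part of the code base.

```python
def offload_rules(honeypot_ips, service_gw):
    """Illustrative only: build the iptables commands a client could use to
    clone honeypot-bound traffic towards the HaaS entry point (service_gw)
    while keeping the original packets out of connection tracking."""
    rules = []
    for ip in honeypot_ips:
        # Exempt packets for honeypot IPs from connection tracking early,
        # in the raw table's PREROUTING chain.
        rules.append("iptables -t raw -A PREROUTING -d %s -j NOTRACK" % ip)
        # Clone each such packet towards the service; TEE keeps the
        # original source and destination addresses intact.
        rules.append("iptables -t mangle -A PREROUTING -d %s "
                     "-j TEE --gateway %s" % (ip, service_gw))
        # Discard the original so only the clone travels on.
        rules.append("iptables -t mangle -A PREROUTING -d %s -j DROP" % ip)
    return rules
```

The one-copy-then-drop arrangement preserves the properties listed above: the clone carries the untouched addresses, while the user's stack never tracks or answers the connection.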


\subsection{EC2 networking}



\section{Back-end}

The back-end is represented by an always-on EC2 instance which serves two distinct purposes. First, it is responsible for processing requests originating from the front-end. Although the front-end has so far been described as a Web-based application, which would normally require a Web server and Web application code on the back-end side, the back-end instead provides plain object-oriented Python class definitions together with a main driver script that simulates the activities of a user in terms of starting and stopping honeynets. In addition, the back-end instance itself is an Ubuntu 12.04 system backed up to an AMI image, and has two network adapters - one bound to an Elastic IP and the other to a VPC via which honeypots should ultimately send logged data.

In order to automate the processes of configuring, starting and stopping the Amazon EC2 instances that represent monitor and honeypot nodes, the AWS Python API was utilized. The automation process covers the following features:

\begin{itemize}
	\item General:
	\begin{itemize}
		\item Amazon AWS region selection
		\item Types of instances to start
		\item Tracking groups of instances that belong to a single user - a tuple of a monitor and honeypots
		\item Starting and stopping instances
		\item AWS security group settings
	\end{itemize}
	\item Monitor and honeypot-specific:
	\begin{itemize}
		\item Choosing associated AMI image
		\item Remote management access policies
		\item Network adapter(s)
		\item Network addressing
	\end{itemize}
\end{itemize}
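As a rough illustration of how this automation could look, the sketch below wraps an injected EC2 connection object so that it can be shown (and exercised) without AWS credentials; with the actual AWS Python API (boto) the connection would come from the library itself. The class name mirrors the Client Instance Manager mentioned in the code base, but the interface shown here is our own simplification, not the actual implementation.

```python
class ClientInstanceManager:
    """Illustrative sketch: tracks the (monitor, honeypots) tuple that
    belongs to a single user and drives its lifecycle through an injected
    EC2-style connection object."""

    def __init__(self, conn, monitor_ami, honeypot_ami, security_group):
        self.conn = conn                      # EC2 connection (injected)
        self.monitor_ami = monitor_ami        # AMI for the monitor node
        self.honeypot_ami = honeypot_ami      # AMI for honeypot nodes
        self.security_group = security_group  # AWS security group settings
        self.instance_ids = []                # all instances of this user

    def start(self, honeypot_count):
        """Start one monitor and the requested number of honeypots."""
        images = [self.monitor_ami] + [self.honeypot_ami] * honeypot_count
        for ami in images:
            instance_id = self.conn.run_instance(
                ami, security_group=self.security_group)
            self.instance_ids.append(instance_id)
        return list(self.instance_ids)

    def stop(self):
        """Stop every instance belonging to the user."""
        self.conn.stop_instances(self.instance_ids)
        stopped, self.instance_ids = self.instance_ids, []
        return stopped
```

Injecting the connection keeps the lifecycle logic testable in isolation; region selection and AMI choice reduce to constructor arguments.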

The relevant classes representing the functionality as per the code base are Client Instance Manager TODO. The following limitations exist with the current model:
\begin{itemize}
	\item Users are not capable of picking a particular operating system and vulnerable software stack for their honeypots

	\item The back-end does not provide storage facilities for storing the various logging information generated by the honeypots
\end{itemize}


The second purpose of the back-end is to service the configuration requests of monitor nodes. Such requests seek out the IPs entered by users into the front-end, which represent the addresses in their network at which the honeypots would be virtually placed. The requests themselves are handled via Python Pyro remote method invocations. The functionality has been encapsulated within a separate class called Monitor RMI TODO. The class implements a single method that reads out the IPs specified by a user and sends them back to the monitor.
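A minimal sketch of such a configuration service is given below. The class name follows the Monitor RMI class mentioned above, but the method name and the in-memory storage of user IPs are assumptions made for illustration; with Pyro, an instance of the class would be registered on a Pyro daemon and the monitor would invoke the method through a proxy, as indicated in the comments.

```python
# With Pyro the object would be served roughly as follows (not executed
# here): daemon = Pyro.Daemon(); uri = daemon.register(MonitorRMI(store));
# daemon.requestLoop() -- and the monitor would call it via a Pyro proxy.

class MonitorRMI:
    """Illustrative sketch: serves the honeypot IPs a user entered
    through the front-end web-forms."""

    def __init__(self, user_ips):
        # user_ips: mapping of user identifier -> list of honeypot IPs
        self.user_ips = user_ips

    def get_honeypot_ips(self, user_id):
        """The single remotely invoked method: return the IPs the given
        user assigned to honeypots within their own network."""
        return list(self.user_ips.get(user_id, []))
```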



\section{Monitor}


The monitor is responsible for handing off communication originating from the attackers of users' resources to the appropriate honeypot. Contrary to the back-end, the monitor is a user-specific component of the service, created upon receipt of a user request. It is also based on an Ubuntu 12.04 system which was initially booted, configured and saved to an AMI image. The accompanying hand-off scripts were left within the root partition so that they are merged with the image. The nature of the scripts, however, is such that a certain level of initial configuration is required. As monitor nodes hand communication off to the appropriate user honeypot based on the IP a user has assigned to the honeypot in their own network, their accompanying scripts need to obtain that mapping. The following general solutions were considered:

\begin{itemize}
	\item Creating a single EBS-based block device that stores the configuration of all monitor nodes
	\item Creating an EBS-based block device for every monitor instance that contains its specific configuration
	\item Post-initialization configuration
\end{itemize}

Initially, it was considered that a single EBS-based storage container could be used for storing the configuration parameters needed by the different monitor nodes; however, it is impossible to share EBS-based block devices between multiple EC2 instances. As an alternative, creating a separate EBS-based block device holding the configuration of a single monitor was considered, however this solution does not scale well in terms of the current Amazon EBS storage fees.\\
The last considered solution was to use post-initialization configuration of monitor nodes via either a distributed storage filesystem or remote procedure invocation. The back-end is an acceptable node at which such a filesystem server could be initialized; however, the elegance and flexibility of Python Remote Method Invocations was chosen instead as the preferred configuration method.


A Python script controls the flow of network packets on the monitor. Its execution begins after system startup. As its first job, it fetches the user-provided IP addresses in order to set up firewall rules for cloning packets. One copy of each packet is delivered to the script itself, whereas the other is sent to the appropriate honeypot. That way, monitor nodes can strictly regulate traffic destined towards honeypots and completely cut it off by removing firewall rules if needed. This behavior is achieved by using the iptables add-on package xtables-addons (TODO). The script inserts one pair of such firewall rules per user-provided IP address.
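As an illustration, the rules might take a form similar to the sketch below, which only builds the command strings; the use of TEE for the honeypot-bound copy and NFQUEUE for the copy handed to the script is our assumption about the mechanism, not a verbatim excerpt from the monitor scripts.

```python
def monitor_rules(user_ips, honeypot_ip, queue_num=0):
    """Illustrative only: one pair of rules per user-provided IP - clone
    the packet towards the honeypot and queue the original so the monitor
    script itself can inspect it."""
    rules = []
    for ip in user_ips:
        # Send one copy on to the appropriate honeypot.
        rules.append("iptables -t mangle -A PREROUTING -d %s "
                     "-j TEE --gateway %s" % (ip, honeypot_ip))
        # Hand the other copy to the userspace script via NFQUEUE.
        rules.append("iptables -A FORWARD -d %s -j NFQUEUE --queue-num %d"
                     % (ip, queue_num))
    return rules
```

Deleting these rules again (\texttt{iptables -D \dots}) is what allows the monitor to cut honeypot-bound traffic off entirely.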

A limitation of the current test setup, which contradicts Fig.~\ref{fig:architecture_model}, is that the honeynets of users were not placed in private subnets, but were rather assigned Elastic IPs. This was done due to the high costs incurred by VPC clouds. However, the Amazon API was examined in order to establish that it is indeed possible to automate the creation and assignment of VPCs to EC2-based instances. Within such a model, the networking of the monitor would also have to be changed to NAT masquerading in combination with packet cloning.

% TODO: what happens when the instance starts?
% TODO: what does networking look like?


\section{Honeypots}

Honeypots are also encapsulated into Amazon AMI images, so that configuring and starting such instances is reduced to specifying an appropriate AMI. As network packets are passed to the honeypots with preserved source and destination addresses, destination NAT is performed upon receipt of packets destined towards any of the user-provided IPs, so that the traffic is accepted by the honeypot's local network stack.
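The address rewriting step can be illustrated as follows; the helper only renders the DNAT commands a honeypot could install, and the exact rule layout is our assumption rather than the project's actual configuration.

```python
def dnat_rules(user_ips, local_ip):
    """Illustrative only: rewrite the user-provided destination addresses
    to the honeypot's actual local address upon packet arrival."""
    return ["iptables -t nat -A PREROUTING -d %s "
            "-j DNAT --to-destination %s" % (ip, local_ip)
            for ip in user_ips]
```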


