\part{Methods}
\label{sec:methods}

As a first step toward creating an algorithm or heuristic for
balancing a cluster of Xen VMMs, we have implemented a daemon
in Python which can monitor the performance statistics of the
virtual machines and perform migrations according to arbitrary
algorithms implemented within the daemon. We developed and tested this
software in a lab testbed consisting of four
1 GHz Intel Pentium 4 machines with 512 MB of RAM each. Each of these
nodes ran Xen 3.0.1, the latest release of the Xen Virtual Machine
Monitor as of January 2006. Storage for virtual domains was provided
via Network File System (NFS) shares hosted in the hypervisor domain of
each node. A diagram of our testbed is provided in Figure~\ref{fig:testbed_setup}.

In addition, we have implemented a simulator in Java to test our
algorithms in scenarios that are too large or complex to test in our
small lab environment. While modern data centers generally use the
state-of-the-art hardware for which the Xen VMM was designed, our lab testbed used
machines that were several years old. While these were
sufficient for the exploratory work we performed, the ability to
simulate higher performance machines allowed us to test our algorithms
in more realistic environments and also get a sense of their
performance as they scaled to situations including Xen's target of 100
virtualized servers on one piece of modern hardware.

\begin{figure}
\center
\psfig{file=figures/nfssetup.ps,angle=0,width=6in}
\caption{Our testbed. The red domains represent the privileged dom0 on
each Xen machine. This domain cannot be migrated, and provides
interfaces into the hypervisor. For our tests, they additionally ran
NFS servers serving the disk storage for our virtual machines. The
migratable domains, shown in green, ran Apache servers or prime number
generators as normal userspace applications in our tests.}
\label{fig:testbed_setup}
\end{figure}

\section{Anatomy of the Xen VMM}
\label{sec:xenanatomy}
The Xen VMM is currently installed within an existing installation of
GNU/Linux. For our work, Xen was installed over the latest stable
version of Debian GNU/Linux\cite{debian}. The modified XenLinux kernel is loaded at
boot time, and after startup the {\tt xend} daemon is started. This
daemon allows users to start and interact with virtual machines via the
command line using the {\tt xm} utility, via an HTTP interface, or
through the Python {\tt xen.lowlevel.xc} module.

In order to migrate virtual domains from one node to another, disk
storage must be implemented in such a way that access on the
destination host is identical to access on the source host. A number
of solutions are currently available for this kind of disk storage,
the most common probably being the Network File System (NFS)
protocol \cite{nfs},
available on nearly all Unix-like platforms. Unfortunately, the most
common implementations of this protocol are neither secure nor
completely stable under extremely heavy loads. A faster solution
that is becoming increasingly popular in fiber networks is the iSCSI
protocol\cite{iscsi}. Unfortunately, drivers and implementations of this protocol
are not provided by the most common GNU/Linux distributions, including
Debian. As a result, we chose to use NFS set up on each of the four
testing machines, running in the hypervisor domain (domain 0). To
minimize bottlenecks, we attempted whenever possible to host the disk
storage of an equal number of domains on each machine. In a faster
network with more hardware, it would make sense to set one machine
aside as a file server, but to maximize the number of machines
available for testing, we chose to distribute storage.

While this storage solution was acceptable for our tests, several
factors make it less than ideal for a production environment. First,
running the NFS daemon in the privileged hypervisor domain poses a
serious security and performance risk. A malicious user could, after
gaining access to the hypervisor through a security hole in NFS,
easily gain access to the guest domains or run a CPU intensive process
which would deny service to the guest domains. Second, because the
hypervisor domain is used both for the NFS server and for coordination
and control of the guest domains, unexpected CPU and network
interactions (the increase in network activity due to a
disk-read-heavy domain running on another node but hosted locally, for
example) could skew the results of our tests. Fortunately, our tests
indicate that the CPU load of serving the disk access needs of even a
disk-access-intensive virtual machine is not significant, as discussed
in Section~\ref{sec:readintensiveresults}.

\section{xenbal Balancing Software}
\label{sec:xenbal}
The existence of Python interfaces into Xen internals played the
largest part in the choice of Python as the programming language for
our balancing daemon. The most useful of these
interfaces is the {\tt xen.lowlevel.xc} interface which provides
information about domains and facilities to adjust memory allocations
and processor assignments (in multi-CPU settings) for each
domain. While the flexibility of dynamic load balancing algorithms
would be enhanced greatly by the ability to adjust domain memory
allocations automatically, for simplicity we did not use these
functions in our work. The domain information provided by this
interface includes memory allocation, CPU time, domain state and VCPU
assignment. Unfortunately, network usage information about each domain
is not available in this interface. Ideally, this
information will be made available here in the future.

\subsection{Architecture}
\label{sec:xenbal_arch}
To provide remote access to the {\tt xen.lowlevel.xc} module, our
first step was to implement an XML-RPC server. The Python standard
library module {\tt xmlrpclib} provides the ability to call functions
on a server proxy object, which relays the function call across the
network and returns a dictionary from the XML-RPC server. This provides
an elegant, network-transparent method for accessing virtual
domain performance statistics.
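This client/server pattern can be sketched with the Python standard
library alone. In the snippet below, {\tt get\_domain\_stats} is a
hypothetical stand-in for a real query against {\tt xen.lowlevel.xc},
and the module names are the modern Python~3 spellings ({\tt
xmlrpc.server}, {\tt xmlrpc.client}) of the {\tt SimpleXMLRPCServer}
and {\tt xmlrpclib} modules of the Python~2 era:

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

# Hypothetical stand-in for a query against xen.lowlevel.xc;
# real statistics would come from the hypervisor.
def get_domain_stats():
    return {"domid": 1, "cpu_time": 123456789, "mem_kb": 65536}

# Bind to an ephemeral port so the sketch is self-contained.
server = SimpleXMLRPCServer(("localhost", 0), logRequests=False)
server.register_function(get_domain_stats)
port = server.server_address[1]

# The balancing daemon likewise runs its XML-RPC server in a thread.
threading.Thread(target=server.serve_forever, daemon=True).start()

# A querying daemon sees an ordinary function call; the proxy relays
# it over HTTP and returns the dictionary produced by the server.
proxy = ServerProxy("http://localhost:%d" % port)
stats = proxy.get_domain_stats()
server.shutdown()
```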

Our balancing daemon's first step after initialization is to start this
XML-RPC server in a new thread. It then begins querying other XML-RPC
servers in a list provided in a configuration file. During each query,
the remote XML-RPC server collects CPU load information about the Xen system through
the underlying {\tt xen.lowlevel.xc} interface and returns it to the
querying daemon. The information is stored in Node and Domain objects,
along with a variable-length history list.

After initializing the XML-RPC server and load information updaters,
the balancer enters a loop during which it queries the current
migration decision algorithm and executes the migration decisions
returned, as illustrated in Figure~\ref{fig:balancerloop}. This
algorithm can be replaced at run-time, creating the 
possibility of dynamically shifting algorithms based on the state of
the system. To turn off load balancing, the balancer simply uses an 
algorithm which does nothing. To ensure uniform sampling of CPU loads,
a separate thread queries and updates load information, and the
balancer simply uses the information stored in the Node and Domain
objects.

\begin{figure}

\begin{algorithmic}
\STATE {\it \{Start XML-RPC server\}}
\STATE {\it \{Start load information updaters\}}
\WHILE {running}
\STATE {\it \{Migration Decision Step\}}
\STATE {\it \{Migration Execution Step\}}
\STATE {\it \{Sleep\}}
\ENDWHILE
\end{algorithmic}
\caption{High level pseudocode for {\tt xenbal}. The migration
  decision step returns a list of necessary migrations, allowing each
  migration decision to significantly change the state of the system.}
\label{fig:balancerloop}
\end{figure}

Balancing algorithms are implemented as classes that extend a
MigrationFunction class. These classes are required to provide a
function {\tt get\_migration\_list} which returns a list of dictionaries
providing {\tt 'source'}, {\tt 'destination'} and {\tt 'domain'}
entries. Each dictionary may also provide {\tt 'live'} and {\tt 'resource'}
entries which specify whether to use Live Migration and the maximum
network bandwidth to use for the migration process. The migrations
specified in each dictionary are executed one at a time. Subsequent
migration decisions are not made until all migrations have been
performed and a waiting period has passed. This allows a single
migration decision to specify a network configuration it
would like to reach to balance load. In practice, our algorithms only
return one migration dictionary.
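A skeleton of this interface might look as follows. This is a sketch
under our own naming assumptions: beyond the {\tt MigrationFunction}
class and {\tt get\_migration\_list} method described above, the node
and domain dictionaries and the {\tt OffloadBusiest} policy are
illustrative, not xenbal's actual code:

```python
class MigrationFunction:
    """Base class: balancing algorithms override get_migration_list."""
    def get_migration_list(self, nodes):
        raise NotImplementedError

class NullBalancer(MigrationFunction):
    """Used to turn balancing off: it never requests a migration."""
    def get_migration_list(self, nodes):
        return []

class OffloadBusiest(MigrationFunction):
    """Illustrative policy: move the busiest domain on the most
    loaded node to the least loaded node."""
    def get_migration_list(self, nodes):
        busiest = max(nodes, key=lambda n: n["load"])
        idlest = min(nodes, key=lambda n: n["load"])
        if busiest is idlest or not busiest["domains"]:
            return []
        domain = max(busiest["domains"], key=lambda d: d["load"])
        return [{
            "source": busiest["name"],
            "destination": idlest["name"],
            "domain": domain["name"],
            "live": True,       # optional: use live migration
            "resource": 100,    # optional: bandwidth cap for the transfer
        }]

nodes = [
    {"name": "node0", "load": 0.9,
     "domains": [{"name": "web1", "load": 0.6},
                 {"name": "web2", "load": 0.3}]},
    {"name": "node1", "load": 0.2, "domains": []},
]
migrations = OffloadBusiest().get_migration_list(nodes)
```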

\subsection{Balancing Algorithms}
\label{sec:xenbal_alg}

Two balancing algorithms have currently been implemented for
xenbal. The first greedily picks a domain and destination for
migration based on CPU load. The second uses a slightly more
complicated algorithm to examine various possible future states of the system
and pick a new state to move to based on the overall ``lost
productivity'' of the system. Drawing on process migration work, each
algorithm comes in a sender initiated and receiver initiated
flavor.

Currently, both algorithms use CPU load estimates computed from a
domain cputime statistic provided by the Xen hypervisor. The
{\tt get\_cpu\_load} function queries the hypervisor, sleeps for a specified
interval, and then queries again. The CPU load is computed by dividing
the change in the cputime statistic by the length of the sleep
interval. Empirical testing has shown this to be a reasonably accurate
way of computing this statistic.
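The load computation can be sketched as follows. Here {\tt
read\_cputime} is a hypothetical hook standing in for the hypervisor
query; it is passed in as a parameter so the sketch can run without
Xen:

```python
import time

def get_cpu_load(read_cputime, interval=1.0):
    """Estimate CPU load as (change in cputime) / (elapsed wall time).

    read_cputime should return the domain's cumulative CPU time in
    nanoseconds, as the hypervisor's cputime counter does."""
    before = read_cputime()
    time.sleep(interval)
    after = read_cputime()
    return (after - before) / (interval * 1e9)

# Deterministic check with a fake counter: 5 ms of CPU time
# accumulated over a 10 ms interval corresponds to a load of 0.5.
samples = iter([0, 5_000_000])
load = get_cpu_load(lambda: next(samples), interval=0.01)
```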

Other load estimates based on I/O usage would have been useful for our
work, but were not as readily accessible in the interface provided by
Xen. A longer discussion of this subject can be found in
Section~\ref{sec:performancemodeling}.

\subsubsection{Simple Balancing Algorithm}
\label{sec:simplebalalg}
Our first algorithm simply attempts to offload or acquire work in a
greedy fashion. Consider a host node $v_s$ with a set of domains $D$ and a
set of peer nodes $N$. In the sender initiated version of this
algorithm, if the host node detects its load is above a
threshold $u_s$ it will search for an appropriate migration
partner $v_r$ with a load below a threshold $l_s$. If such a partner can
be found, the algorithm will migrate the domain $d\in D$ with the
highest load to $v_r$. If such a partner cannot be found, but some
node $v_r\in N$ has at least two fewer domains than $v_s$, $v_s$ will
migrate a domain to $v_r$.

\begin{figure}

\begin{algorithmic}
\IF {$v_s.load\geq 0.95$}
\IF {$N.minload().load\leq 0.5$}
\STATE $v_s.migrate(v_s.D.maxload(), N.minload())$
\ELSIF {$v_s.D.length\geq N.leastDomains().D.length+2$}
\STATE $v_s.migrate(v_s.D[0],N.leastDomains())$
\ENDIF
\ENDIF
\end{algorithmic}
\caption{Simple sender initiated algorithm with $u_s=0.95$ and $l_s=0.5$}
\label{fig:simplesi}
\end{figure}

In the receiver initiated version of this algorithm, when the host
node $v_r$ detects its load is below a threshold $l_r$, it will search
for a migration partner $v_s$ with a load above a threshold $u_r$. If
it can find one, it will migrate the domain on $v_s$ with the highest
load from $v_s$ to $v_r$. If not, but some node $v_s\in N$ has at
least two more domains than $v_r$, $v_r$ will tell $v_s$ to migrate a
domain to $v_r$.
\begin{figure}

\begin{algorithmic}
\IF {$v_r.load\leq 0.3$}
\IF {$N.maxload().load\geq 0.95$}
\STATE $N.maxload().migrate(N.maxload().D.maxload(), v_r)$
\ELSIF {$\exists v_s\in N$ such that $v_s.D.length\geq v_r.D.length+2$}
\STATE $v_s.migrate(v_s.D[0],v_r)$
\ENDIF
\ENDIF
\end{algorithmic}
\caption{Simple receiver initiated Algorithm with $l_r=0.3$ and $u_r=0.95$.}
\label{fig:simpleri}
\end{figure}

Pseudocode for the sender initiated version can
be found in Figure~\ref{fig:simplesi} and pseudocode for the receiver
initiated version can be found in Figure~\ref{fig:simpleri}.
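As a concrete illustration, the sender initiated pseudocode might be
transliterated into Python roughly as follows. The node dictionaries
are our own hypothetical representation, not xenbal's actual data
structures:

```python
U_S, L_S = 0.95, 0.5  # thresholds u_s and l_s from the pseudocode

def simple_sender_initiated(v_s, peers):
    """Return (domain, destination) for a migration, or None.

    Nodes are dicts: {'name': str, 'load': float,
                      'domains': [{'name': str, 'load': float}, ...]}."""
    if v_s["load"] < U_S:
        return None
    # Preferred case: a lightly loaded partner takes our busiest domain.
    target = min(peers, key=lambda n: n["load"])
    if target["load"] <= L_S:
        victim = max(v_s["domains"], key=lambda d: d["load"])
        return (victim["name"], target["name"])
    # Fallback: even out domain counts if some peer has two fewer domains.
    fewest = min(peers, key=lambda n: len(n["domains"]))
    if len(v_s["domains"]) >= len(fewest["domains"]) + 2:
        return (v_s["domains"][0]["name"], fewest["name"])
    return None

v_s = {"name": "n0", "load": 0.97,
       "domains": [{"name": "a", "load": 0.5},
                   {"name": "b", "load": 0.4}]}
peers = [{"name": "n1", "load": 0.4, "domains": []}]
decision = simple_sender_initiated(v_s, peers)
```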


\subsubsection{P-loss Balancing Algorithm}
\label{sec:plossbalalg}
Our second algorithm, called the p-loss algorithm, is loosely based on
Bayesian utility theory, 
in which we calculate the utility of taking several different actions
and execute the action with the highest utility. In our case, we will
define utility as the change in two statistics: {\it productivity loss} and
{\it unfairness}.

Consider a node $v$ hosting two virtual domains $a$ and $b$. If both
domains are using 50\% of the CPU time of $v$, $v$'s CPU load will be
100\%. If we assume each of the virtual domains could be using 100\%
of $v$'s CPU, we say the lost productivity of each domain is
$100\%-50\%=50\%$. In general, we define the productivity loss of a
domain as the difference between the amount of work it could be doing
and the amount of work it is currently doing.

Now consider a second node $u$ hosting a single virtual domain $c$
which is using 100\% of $u$'s CPU. The productivity loss of $c$ is
$100\%-100\%=0\%$. Then we say the unfairness in the system is equal to
$50\%-0\%=50\%$. In general, we define unfairness as the difference
between the highest and lowest losses in a Xen cluster.

The P-loss algorithm works by computing the expected losses and
unfairness in the system after a given migration decision and
comparing them to those of all other migration decisions we might make
in a given migration decision step. It does this by assuming that,
after a migration, each domain on a fully utilized machine would use
the entire CPU if it could. The loss for each domain on such a node
after migration will then be equal to
$\left(1-\frac{1}{n}\right)\times 100\%$, where $n$ is the number of
domains on the node.

In the Sender Initiated version of the P-loss algorithm, a node
simulates migrating each of its domains to every other node in the
cluster. Thus, a node with $n$ virtual domains in a cluster of $m$
nodes must perform $n\cdot m$ simulations for each migration
decision.

In the receiver initiated version of the P-loss algorithm, a node
simulates migrating each domain from every other node in the cluster
to itself. Thus, a node with $n$ virtual domains in a cluster with $m$
total virtual domains must perform $m-n$ simulations for each
migration decision.

Each of these algorithms then attempts to find the migration scenario
with the lowest loss. If this scenario is a no migration scenario, or
no scenario significantly improves upon the no migration scenario, the
algorithms then attempt to find the scenario with the least
unfairness. If this is still the no migration scenario, no action is taken.
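The core evaluation can be sketched numerically. Assuming, as above,
that every domain on a fully utilized node would use the whole CPU if
it could, the per-domain loss on a node with $n$ domains is
$1-\frac{1}{n}$; a candidate scenario can then be scored by its total
loss and its unfairness. This is a sketch of the scoring step only,
with scenarios reduced to per-node domain counts:

```python
def per_domain_losses(domain_counts):
    """Expected productivity loss for each domain, given the number
    of domains on each (fully utilized) node."""
    losses = []
    for n in domain_counts:
        losses.extend([1 - 1 / n] * n)
    return losses

def score(domain_counts):
    """Return (total loss, unfairness) for a candidate scenario."""
    losses = per_domain_losses(domain_counts)
    return sum(losses), max(losses) - min(losses)

# Worked example from the text: node v hosts two domains, node u one.
# Each domain on v loses 50%; the domain on u loses 0%.
total_loss, unfairness = score([2, 1])
```

Comparing the scores of every simulated migration scenario, including
the no-migration scenario, yields the decision procedure described
above.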

As with our simple balancing algorithm, a number of improvements to
this basic framework are desirable. Taking other performance
information into account and better estimating each domain's potential
for work are two particularly important and interesting directions to
explore. For a more detailed discussion, see
Section~\ref{sec:performancemodeling}.

\section{XenSim Simulator}
\label{sec:xensim}
We have implemented a Xen migration simulator in Java in order to
explore cluster configurations we could not examine due to
hardware restrictions. As noted by Barham \etal \cite{xenart03}, Xen's
architecture is targeted at 100 virtual domains running concurrently
on modern hardware. While this may seem excessive, the adoption of 64
bit processors will pave the way for system memory capacities capable
of giving each virtual domain in this scenario more than enough memory
to function effectively as a modern web server. Unfortunately, hardware
even remotely capable of supporting this kind of environment was
unavailable at the time of our work.

\subsection{Core objects}
XenSim is built around Node and Domain objects.  Node
objects represent physical servers running the Xen VMM and include
hardware specifications and a list of Domain objects. They have
support for simulating the execution of workloads specified by the
Domain objects. Nodes have methods to start, shutdown, and migrate
Domain objects they are currently hosting, as well as a method used
internally for scheduling Domains. Whenever this method is called, the
Node takes a Domain from the top of a queue. If the Domain has work it
would like to do, the Node schedules it for as long as it needs up to
a maximum length of time specified in a configuration file. This
simple round robin scheduling algorithm is a simplification of the
algorithm used by Xen, and could be replaced by the more complicated
algorithm in the future. 
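The scheduling step can be sketched as follows. XenSim itself is
written in Java; this is a Python sketch with illustrative names, in
which domains are simply (name, remaining work) pairs and {\tt
quantum} plays the role of the configured maximum slice:

```python
from collections import deque

class Node:
    """Simplified sketch of XenSim's Node scheduling method."""
    def __init__(self, domains, quantum):
        self.queue = deque(domains)   # (name, remaining_work) pairs
        self.quantum = quantum

    def schedule(self):
        """One round-robin step: take the domain at the head of the
        queue, run it for as long as it needs up to one quantum, and
        requeue it at the back."""
        if not self.queue:
            return None
        name, remaining = self.queue.popleft()
        ran = min(remaining, self.quantum)
        remaining -= ran
        self.queue.append((name, remaining))
        return (name, ran)

node = Node([("a", 3), ("b", 1)], quantum=2)
first = node.schedule()   # domain a runs for a full quantum
second = node.schedule()  # domain b only needs 1 unit of work
third = node.schedule()   # a finishes its remaining work
```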

Domain objects supply several functions to specify
workload, including a CPU work function, network work function,
and memory dirtying function. Whenever an action is performed by the
simulator that needs access to resources (sending information across
the simulated network during migration, for example), these functions
are used to determine the amount of each resource available. One of
the bigger problems with running tests on the simulator is finding
appropriate values for these functions. In particular, data on the
page dirtying rate of a typical server application over time was not
available. More extensive and realistic tests could be performed in
the future with more accurate information about these variables.

\begin{figure}
\center
\epsfig{file=figures/xensim.eps,angle=0,width=5in}
\caption{Node, Domain, and ScheduleEvent objects. Each Node maintains
  a list of Domains, which are scheduled periodically by
  ScheduleEvents for each Node. The precise mechanics of scheduling
  depend on CPUFn objects within each domain which specify the CPU
  usage of each domain at any given time.}
\end{figure}

Two other important objects include a TransferDaemon and a
Monitor. A new TransferDaemon is created at the beginning of each
migration, and is responsible for overseeing the migration
process. Essentially, it is responsible for implementing the migration
steps discussed in Section~\ref{sec:processmigration}. The Monitor
object is responsible for gathering information about the Node and
Domain objects and making them available to the balancing 
and experiment recording facilities in XenSim.

\subsection{Execution model}
\label{sec:xensimexecmodel}
XenSim is implemented as an event-based
simulator. Its basic run loop
pulls an Event object out of a priority queue and calls its 
execute method. Several different types of events exist, but the
most important is the ScheduleEvent. One of these events should
exist in the event queue for each node in the system. When a
node's ScheduleEvent's {\tt execute()} method is called, it delegates
scheduling to the {\tt schedule} method of the Node object for which
it is responsible.
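A minimal version of this run loop, with a self-rescheduling
ScheduleEvent, might look like the following. This is a sketch;
XenSim itself is written in Java, and the class and method names here
are illustrative:

```python
import heapq
import itertools

class Simulator:
    """Minimal event-based run loop: events are (time, seq, callback)
    tuples kept in a priority queue; seq breaks ties in time order."""
    def __init__(self):
        self.queue = []
        self.now = 0.0
        self._seq = itertools.count()

    def schedule(self, delay, callback):
        heapq.heappush(self.queue,
                       (self.now + delay, next(self._seq), callback))

    def run(self, until):
        while self.queue and self.queue[0][0] <= until:
            self.now, _, callback = heapq.heappop(self.queue)
            callback(self)  # an event may schedule further events

# A recurring "ScheduleEvent": it reschedules itself each time it fires.
fired = []
def schedule_event(sim):
    fired.append(sim.now)
    sim.schedule(1.0, schedule_event)

sim = Simulator()
sim.schedule(1.0, schedule_event)
sim.run(until=3.0)
```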

Other important events include the BalancerEvents and the
MigrationEvent. BalancerEvents are used to implement migration
decision policies. Whenever they are executed, they gather information
about the system from the Monitor object and make a migration decision
based on this information. They then add another BalancerEvent to the
event queue. This distribution of responsibilities
closely mirrors our real world implementation, which hopefully gives
an accurate model of real world performance. 

Because of this event based implementation, it is occasionally
necessary to take the integral of resource functions. For example,
during migration we would like to be able to find out how much
bandwidth will be available over the next 10 seconds. To do this, we
could take the integral of the sum of the network functions of each
Domain on a Node over the next 10 seconds, and subtract this value
from the total physical bandwidth of the system over the next 10
seconds. In addition, to determine the amount of physical memory
dirtied during one iteration of migration we would need to take the
integral of the page dirtying rate over the next 10
seconds. Unfortunately, we did not use a mathematical
analysis package with these capabilities, so integral
functions would have had to be specified by hand. Instead, we chose to
slow the simulation down during migrations, scheduling a new
MigrationQuantum event every time quantum. This MigrationQuantum event
was responsible for determining quantities like available network
bandwidth and dirtied pages and updating the appropriate values. 
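The effect of this quantized approach can be sketched for a pre-copy
migration: instead of integrating the bandwidth and page dirtying
functions, each quantum samples them once. The function below and its
units (pages per second) are our own illustrative simplification, not
XenSim's actual MigrationQuantum code:

```python
def simulate_precopy(pages, dirty_rate_fn, bandwidth_fn,
                     quantum=1.0, max_time=100.0):
    """Advance a migration in fixed time quanta instead of integrating.

    Each quantum transfers bandwidth_fn(t) * quantum pages, then
    re-dirties dirty_rate_fn(t) * quantum pages.  Returns the finish
    time and the pages still dirty when the loop stops."""
    t, remaining = 0.0, float(pages)
    while t < max_time:
        sent = bandwidth_fn(t) * quantum
        remaining = max(0.0, remaining - sent)
        t += quantum
        if remaining == 0.0:
            break
        remaining += dirty_rate_fn(t) * quantum
    return t, remaining

# Constant rates: 100 pages to move, 30 pages/s of bandwidth,
# 10 pages/s dirtied while the migration runs.
finish, left = simulate_precopy(100, lambda t: 10.0, lambda t: 30.0)
```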

\subsection{Simulation variables}
A number of important variables can be set with a combination of a
configuration file and the initialization methods of various objects
in the system. These can be broken into two subsets: variables that
affect the simulation of the system, and variables that affect the
simulation of the balancing facilities. Important examples of the
former include the simulation time quantum, the monitor update time,
the round robin scheduler time quantum, and the resource functions of
the domains. Important examples of the second type include the
load thresholds that will trigger migrations, and the ``load memory,''
the number of past loads to average when computing CPU loads for
balancing purposes.

\subsection{Improvements}
XenSim is currently in need of a large number of improvements. The
first, and probably most important, is to debug its behavior with
higher numbers of nodes and domains. While XenSim seems to simulate
smaller scenarios like our lab configuration well, preliminary tests
have indicated something is not working correctly once the numbers
grow. These indications include instantaneous migration of large
numbers of domains. In addition, performance with larger numbers of
domains is currently very poor.

Aside from this, future support for multiple processors, more
sophisticated network simulation, the use of the SEDF scheduling
algorithm instead of our simple algorithm, and a reconsideration of
the way resource loads are computed (possibly utilizing a calculus
package) would all be interesting and important improvements to XenSim.

Finally, only the simple sender and receiver initiated algorithms
have been implemented in the simulator. While implementing p-loss is
only a matter of implementing the specific BalancerEvents, we did not
find time to do so.

