\part{Introduction}
\label{sec:intro}
Recent improvements in computer hardware have made virtualization
techniques practical. Instead of accepting the limitations of a
particular machine, virtualization allows software developers to
focus on more abstract principles of information organization.
Probably the most widely used examples of virtualization are
interpreted and virtualized programming languages like Java, Perl,
Python, and Ruby. In each case, virtualization frees programmers from
worrying about underlying hardware details.

A second form of virtualization which has gained popularity in the
past several years is operating system virtualization. This technique
usually utilizes a piece of software known as a ``hypervisor'' which
presents a complete, or nearly complete, hardware interface to an
operating system. The hypervisor is generally able to handle
virtualizing hardware interfaces for several operating systems at
once, and is responsible for ensuring proper partitioning of machine
resources between the virtual machines. 

The Xen Virtual Machine Monitor \cite{xenart03} provides an
environment for the simultaneous execution of multiple virtualized
operating systems much like a modern operating system provides an
environment for the simultaneous execution of multiple
processes. While many potential applications for this
technology are still unexplored, one of the primary motivations for its
development is found in large scale data center environments.

Consider a ``server farm'', a group of hardware servers providing a 
number of independent or mirrored web or database servers. Currently,
administrators either configure each physical machine to run a single
server to ensure maximum dependability or run several different types
of servers on each machine to utilize resources more effectively. Using Xen,
administrators can essentially use a group of a few physical machines
to provide many ``dedicated'' servers without fear of unexpected
interactions through machine resource usage. Because Xen is so new,
however, techniques to manage physical resources automatically given a pool of
virtual operating systems have not been developed.

Building on the results of Krueger and Livny \cite{krueger88}, who
concluded that migration of live processes between nodes in a cluster
can provide significant performance enhancements, a great deal of
research has been done on algorithms for effectively balancing cluster
workloads at the process level. Unsurprisingly, migration is a more
effective load balancing tool for some types of processes than for
others. Virtualized operating systems, because of their long-running
nature, fall squarely into the set that benefits
\cite{harcholbalter97exploiting}. While their total migration cost is
quite high, the large benefits of good migration choices and the heavy
penalties of bad ones make the migration decision algorithm very
important.

Part~\ref{sec:intro} of this paper will provide a history of the development of process
migration and its applications and implications for
dynamic load balancing techniques. We will then discuss the Xen
Virtual Machine Monitor (VMM), and examine the migration technique employed
by this technology. Finally, we will discuss possible strategies for
applying this technique to a modern ``server farm'' environment in
which we hypothesize dynamic load balancing of virtual operating systems 
using migration will provide significant benefits to responsiveness and 
hardware utilization.

Part~\ref{sec:methods} will provide details of xenbal, load balancing
software we have implemented in Python, and XenSim, an event-based Xen simulator
we have implemented in Java. It will also provide details on the two
algorithms we have tested for keeping a cluster of Xen VMMs balanced. 

Part~\ref{sec:results} will detail the results of using our two
balancing algorithms on a real Xen cluster and within the simulator.

In Part~\ref{sec:discussion} we will provide a discussion of the
algorithms we have implemented, and on desirable properties for a
general Xen cluster balancing algorithm. We will also discuss problems
we had with Xen, and suggestions for future improvements. Finally, we
will discuss possible future applications for Xen, and the use
of dynamic balancing algorithms in each of these applications.


\section{Background}
\label{sec:background}
Both dynamic load distribution and process migration are heavily
studied fields. Despite the complexity needed to support process
migration, numerous systems with this capability have been
developed \cite{mosix,lsf,condor}. However, as noted by
Harchol-Balter and Downey, most of these systems ``have [not]
implemented a policy that specifies which processes should be migrated
for the purposes of load balancing''
\cite{harcholbalter97exploiting}. This reluctance to implement such
policies is likely a result of the sometimes conflicting results
reported on their benefits.


\subsection{Dynamic Load Distribution}
Dynamic load distribution as a means of better utilizing groups of
processors is an extensively studied subject. While a very large
amount of research has gone into finding ways to balance workloads on
multi-processor systems \cite{eckhousemulti,enslow,tartar,arden}, we
will focus on dynamic load distribution across network communication
systems. In particular, we will discuss the widely studied field of
process level load balancing. Given a pool of execution environments
(compute nodes in a cluster, or individual servers in a server farm),
research in this area attempts to discover techniques for assigning
processes to environments such that either a) each node has an
approximately equal amount of work at all times or b) no node is ever
idle while another is executing multiple processes. These algorithms
are of particular interest to those in the field of scientific
computation, in which problems can often be broken into smaller pieces
and solved in a distributed manner. Ensuring this work is fairly
shared by all compute nodes in a supercomputer is one important
application of the techniques we will discuss. Research in this area
can be divided into several categories.  

{\it Load balancing} algorithms attempt to equalize the load on each of the
nodes in a system. Generally, more theoretical work has been done on
this problem, as it lends itself to formal formulation. In particular,
Aggarwal \etal \cite{aggarwal03load} provided a thorough treatment of
the problem of selecting an optimal reassignment of jobs to processors
given an arbitrary reassignment cost.

{\it Load sharing} algorithms relax this goal somewhat, and instead
attempt to ``conserve the ability of the system to perform work by
assuring that no node is idle while processes wait for
service'' \cite{krueger88}. In traditional process migration
environments, this is achieved by having nodes either solicit jobs
when they become idle or search for idle nodes to transfer jobs to
when they become overloaded, and otherwise simply execute
normally. While these algorithms do provide limited load balancing,
they are generally less aggressive about keeping the system balanced.

{\it Non-preemptive load distributing} refers to using information
about system performance to pick an appropriate node for execution at
the beginning of a process's life. For short-running processes, for
which run-time migration may prove more costly than any performance
lost by executing on an overloaded node, these algorithms are
effectively the only choice.

{\it Preemptive load distributing} refers to the practice of
interrupting process execution and transmitting all of the process's
state to a different node in the system. Because of the extremely
short nature of the majority of processes \cite{harcholbalter97exploiting}, most
do not benefit from preemptive load distributing. However,
Harchol-Balter and Downey \cite{harcholbalter97exploiting} found
that ``while 3.5\% of processes live longer than 2 seconds... these
processes make up more than 60\% of the total CPU load.'' As a result,
preemptive techniques like those described by \cite{harcholbalter97exploiting}
and \cite{krueger88} have been able to provide balancing results which
have in many cases matched or exceeded the performance of
non-preemptive techniques.
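This effect can be illustrated with a small simulation. The sketch
below draws process lifetimes from a heavy-tailed (Pareto)
distribution of the general shape Harchol-Balter and Downey observed;
the shape parameter and the ``long-running'' cutoff are illustrative
assumptions rather than measured values.

```python
import random

# Illustrative only: lifetimes drawn from a Pareto distribution with an
# assumed shape parameter; the cutoff for "long-running" is arbitrary.
random.seed(0)
lifetimes = [random.paretovariate(1.1) for _ in range(100_000)]

threshold = 20.0
long_jobs = [t for t in lifetimes if t > threshold]

frac_long = len(long_jobs) / len(lifetimes)
cpu_share = sum(long_jobs) / sum(lifetimes)

print(f"{frac_long:.1%} of processes are long-running")
print(f"but they account for {cpu_share:.1%} of total CPU demand")
```

Under these assumed parameters, only a few percent of processes exceed
the cutoff, yet they contribute a disproportionate share of the total
CPU demand, mirroring the effect reported above.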

In addition to these four categories, load distributing algorithms
typically follow one of three general strategies for initiating
migration \cite{367728}.

\begin{itemize}
\item A {\it sender-initiated} migration policy relies on having
  nodes search for underloaded nodes to move work to when they become
  overloaded. Because these policies require heavily loaded nodes to
  perform work to find acceptable candidates for migration, this
  policy is generally preferable for low to medium loaded
  systems. 
\item A {\it receiver-initiated} migration policy requires nodes to
  search for work when they become underloaded. Whereas nodes under a
  sender-initiated policy may ``move work'' simply by reassigning
  processes still waiting in their run queues, nodes under a
  receiver-initiated policy must be able to find work to pull at any
  time, and thus generally gain greater benefit from preemptive
  techniques.
\item A {\it symmetric} migration policy combines sender-initiated and
  receiver-initiated policies, and is thus appropriate in a wider
  variety of situations.
\end{itemize}
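As a deliberately simplified sketch of how these initiation strategies
differ, consider the following; the thresholds and names are invented
for illustration and are not drawn from any cited system.

```python
# Hypothetical nodes with a scalar load measure.  The thresholds and
# class names below are illustrative assumptions.
HIGH, LOW = 0.8, 0.2

class Node:
    def __init__(self, name, load):
        self.name, self.load = name, load

def sender_initiated(nodes):
    """Overloaded nodes look for an underloaded target to push work to."""
    moves = []
    for src in nodes:
        if src.load > HIGH:
            target = min(nodes, key=lambda n: n.load)
            if target.load < LOW:
                moves.append((src.name, target.name))
    return moves

def receiver_initiated(nodes):
    """Underloaded nodes look for an overloaded source to pull work from."""
    moves = []
    for dst in nodes:
        if dst.load < LOW:
            source = max(nodes, key=lambda n: n.load)
            if source.load > HIGH:
                moves.append((source.name, dst.name))
    return moves

def symmetric(nodes):
    """Combine both triggers; each may propose the same move independently."""
    return sender_initiated(nodes) + receiver_initiated(nodes)

nodes = [Node("a", 0.9), Node("b", 0.1), Node("c", 0.5)]
print(symmetric(nodes))
```

The three policies differ only in which side triggers the search; the
symmetric policy simply applies both tests.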

Several major studies of these techniques have yielded sometimes
conflicting results. Generally, however, it appears that, especially
with modern network bandwidths and processor speeds, preemptive
migration is a promising technique for improving the performance of
distributed systems.

\subsection{Preemptive vs. Non-Preemptive Load Balancing}
One of the first studies to suggest that active process
migration provides significant advantages over simple intelligent
initial placement techniques was conducted by Krueger and Livny
\cite{krueger88}.  In this work, Krueger and Livny used a simulator to
examine the benefits gained by augmenting non-preemptive initial
placement strategies with active process migration policies under a
variety of file system conditions. In particular, they examined sender
initiated load sharing, sender initiated load balancing, symmetric
load sharing, and symmetric load balancing policies over a range of
``Place Factors'' designed to model situations ranging from
exclusively local to completely distributed file systems. Their
program simulates 20 nodes generating processes at a
homogenous rate, with a mean process migration size of 100K and a
communication device bandwidth of 10Mbits/sec. Processes are selected
for migration using a simple criterion which favors long running
processes with few previous migrations. In general, it is assumed that
systems with Place Factors indicating less distributed storage will
necessitate higher migration and initial placement times, since all
data needed by the process will need to be moved with the process.

Importantly, Krueger and Livny conclude that the addition of process
migration to load distributing algorithms can improve performance in
a wide range of environments. Particularly interesting is their
conclusion that load sharing techniques offer significant improvements
over no balancing even at high levels of migration overhead.
Also important is their finding that, when compared to simpler
initial placement methods, load sharing techniques benefit more from
migration than do load balancing techniques.

Unfortunately, Eager \etal \cite{55604}, in a paper also published in
1988, used analytic techniques and basic simulation models to conclude
that in almost all cases non-preemptive techniques are outperformed
only ``modestly'' by preemptive load distributing techniques, a result
which appears to contradict Krueger and Livny. However, the fact that
Krueger and Livny took a more simulation-driven approach, coupled with
the fact that implemented systems have appeared to achieve performance
closer to Krueger and Livny's results
\cite{harcholbalter97exploiting}, makes it likely that true
performance depends on factors absent from Eager \etal's study.
Despite this, contradicting results such as those presented by
these two papers suggest some of the reasons why active process
migration policies have not become a more widespread part of distributed
systems. 



\subsection{Later Work}

Harchol-Balter and Downey's 1997 study
\cite{harcholbalter97exploiting} found many of the same results as
Krueger and Livny. This study again used a simulator to model the
behavior of migration policies; the largest differences from
\cite{krueger88} lie in its more realistic (``bursty'') workload and
its elimination of additional non-preemptive techniques. Instead of
exploring a wide range of migration policies, Harchol-Balter and
Downey choose to examine the effects of a sender-initiated
cross between load sharing and load balancing, in which overloaded
hosts attempt to offload all processes whose migration is expected to
increase the performance of both ``the migrant process and the other
processes at the source host.'' They conclude that migration of long
running jobs away from busy hosts helps not only the jobs themselves,
but also shorter jobs that subsequently arrive at the host. In
addition, they find that even in scenarios with high memory-transfer
costs, preemptive migration outperforms non-preemptive migration, a
phenomenon they attribute to the higher penalties of incorrect
job length predictions in non-preemptive strategies.

Very recently, Aggarwal \etal \cite{aggarwal03load} have formulated
the problem of load 
rebalancing formally, deriving a polynomial-time approximation scheme
which simplifies the complexity of finding an optimal reconfiguration
of jobs on a set of nodes. This theoretical work has already been
utilized to enhance the game play experience of massively multiplayer
games \cite{1065982}, and promises a plethora of applications.

\subsection{Process Migration}
\label{sec:processmigration}
A thorough treatment of the history and current state of process
migration techniques can be found in Milojicic \etal \cite{367728}.
Because the focus of this paper is not on past migration techniques,
we will offer only a brief discussion of these before moving on to
discuss the techniques employed by Xen.

While migration techniques vary from application to application, the
algorithm for migrating a process from a \source node to
a \target node usually takes the following form:

{\singlespace
\begin{enumerate}\itemsep 0in
\item Issue migration request to \target
\item Detach process from \source
\item Redirect communication
\item Extract \source process state
\item Create destination process on \target
\item Transfer state from \source to \target
\item Forward references
\item Resume new instance
\end{enumerate}
}
\doublespace
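The steps above can be sketched as follows. Every class and method
here is a hypothetical stand-in for a system-specific primitive, not
the API of any real migration system.

```python
# Hedged sketch of the generic migration algorithm, with minimal
# stand-in Process/Node classes.  All names are hypothetical.

class Process:
    def __init__(self, state):
        self.state, self.running = state, False

class Node:
    def __init__(self):
        self.processes = []

    def detach(self, p):
        # Step 2: stop the process and remove it from the local node.
        p.running = False
        self.processes.remove(p)

def migrate(proc, source, target):
    # 1. issue migration request to target (assumed always accepted here)
    # 2. detach process from source
    source.detach(proc)
    # 3. redirect communication (omitted in this sketch)
    # 4. extract source process state
    state = proc.state
    # 5. create destination process on target
    new_proc = Process(state)
    target.processes.append(new_proc)
    # 6. transfer state (copied via the constructor above)
    # 7. forward references (omitted in this sketch)
    # 8. resume the new instance
    new_proc.running = True
    return new_proc

src, dst = Node(), Node()
p = Process(state={"pc": 42})
src.processes.append(p)
q = migrate(p, src, dst)
print(q.running, q.state)
```

Steps 3 and 7, which preserve open communication channels across the
move, are the parts real systems spend the most effort on and are only
noted as comments here.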

Essentially, these steps encompass shutting down, transferring, and
restarting a process's execution while preserving communication and
state information. A variety of techniques have been utilized to
implement these procedures, ranging from kernel implementations like
MOSIX \cite{mosix} to userspace implementations like those found in the
LSF \cite{lsf} and Condor \cite{condor} projects.

\section{The Xen Virtual Machine Monitor}
\label{sec:xen}
Long set aside as a novelty whose performance penalties precluded
widespread use, virtualization technology has, as a result of the
power of modern computers, recently seen renewed interest.

The Xen Virtual Machine Monitor, introduced by Barham \etal in 2003
\cite{xenart03}, is one of several recent implementations of operating
system level virtualization. Similar to the better known VMWare, Xen
provides an environment in which multiple {\it guest operating
  systems} can execute concurrently. One important facet of this
virtualization is the idea of complete isolation of individual guest
systems from each other. Ideally, misbehaving guest operating systems
should not adversely affect their peers, whether through unexpected
processor or memory utilization or unexpected security violations. Xen
provides these capabilities by implementing a {\it hypervisor} which
runs at a higher privilege level than guest systems. This hypervisor
is responsible for virtualizing system calls made by the guest systems
and enforcing proper separation of guest systems. 

\begin{figure}[htb]
\center
\psfig{file=figures/xen-3-arch.ps,angle=0,width=5in}
\caption{Architecture of the Xen VMM. VM0 is a privileged domain
  with interfaces into the hypervisor layer, VMs 1 and 2 are
  traditional paravirtualized operating systems, while VM 3 runs on
  the new VT-x or Pacifica x86 extensions.}
\label{fig:xen30}
\end{figure}

The distinguishing feature of the Xen system is the need for kernel level
modifications to guest operating systems. 
To achieve performance nearly identical to unvirtualized operating
systems, Xen requires operating system kernels to be
modified to support the virtualized system calls provided by the Xen
hypervisor. Because 
the scope of these modifications is small compared to the total size
of the kernel  (2995 lines or 1.36\% of the total x86 code base in
Linux), this does not provide an insurmountable barrier to the
adoption of Xen as a production quality virtualization solution. In
addition, support for Xen is expected to be merged into future
versions of the Linux kernel \cite{rooney05}. 

While this paravirtualization strategy has proven very effective,
licensing issues have impeded attempts to port closed source operating
systems like Windows XP to Xen. Fortunately, recent work by CPU
manufacturers Intel and AMD has resulted in the inclusion of
virtualization extensions in some CPUs. These extensions will allow
Xen to run unmodified operating systems without sacrificing the
performance gained through paravirtualization.

Xen has exhibited quite promising performance in a variety of
applications, outperforming VMWare in all tests performed by Barham
\etal, and performing nearly as well as an unmodified Linux kernel for
some applications \cite{xenart03}. For this reason, it is being
actively examined as a potential solution to the venerable problem of
resource under-utilization in many-processor environments like those found
in the modern datacenters which form the backbone of the world wide
web.

\subsection{Datacenter Applications}

Traditional approaches to management of Internet datacenters have
typically required a choice between a one-to-one relationship between
applications such as web or database servers and hardware or the risk
of unexpected performance and security interactions between
applications. In the former scenario, hardware is typically
underutilized, even for servers of pages which see heavy traffic. In
the latter, proper security and performance guarantees are difficult,
if not impossible, to make.

To better understand this, consider the following example. Two users,
Tom and Jim, would both like to create home pages. These home pages
will consist of a personal web log, a gallery of personal photos, and
a few custom PHP scripts. After exploring their options, they find
they have two choices for hosting. The first is to
rent an entire server, a situation known as {\it dedicated
hosting}. This will allow them root access to the operating system and
the least number of restrictions imposed by external
administrators. In addition, they will not need to worry about
compromising the security of each other's pages. The second
option, usually referred to as {\it colocation}, is
to each get an account with an allocation of disk space on a shared
server. In this case, they will likely face restrictions on what they
are able to host (like those provided by PHP's safe mode, for
instance) in order to minimize the possibilities of a security break
in on Tom's site affecting Jim's site. However, if, for example, a
malicious user initiates a Denial of Service attack on Tom's site,
Jim's site may still be affected. While the first case is considerably
more expensive than the second as a direct result of the amount of
resources available at any time, the second can be restrictive to the
point of being unusable for certain applications.

\begin{figure}[p]
\center
\psfig{file=figures/t&jdedicated.ps,angle=0,width=3in}
\hspace{5mm}
\psfig{file=figures/t&jcolo.ps,angle=0,width=3in}

\hbox{\hspace{1.55in} (a) \hspace{3.07in} (b)}
\caption{a) Dedicated hosting, which gives optimum performance and
  security but can be wasteful vs. b) Colocation, which allows
  multiple users to share machine resources, and hopefully increases
  use, but can introduce unexpected performance and security issues.}
\label{fig:dedicatedvscolo}
\end{figure}

\begin{figure}[p]
\center
\psfig{file=figures/t&jvirtual.ps,angle=0,width=5in}

\caption{Virtualization allows each user to have what they believe is
  a dedicated server, but uses only one machine. Additional memory and
  other resources can be allocated dynamically with the user's needs.}
\label{fig:virtualhosting}
\end{figure}

Operating system virtualization allows for a compromise between these
two situations. Tom and Jim can be assigned private
virtual operating systems on the same piece of hardware. Each of them
may be given complete control over their respective systems without
fear of security or performance interactions. Because the design of
Xen is targeted at ``hosting up to 100 virtual machine instances
simultaneously on a modern server,'' average hardware utilization is virtually
guaranteed to be higher than in traditional
single-application-per-operating-system scenarios, while allowing for
robust separation of
applications \cite{xenart03}.


\subsection{Cluster Applications}
The potential benefits of this technology in a cluster or grid-like
environment are numerous as well. In particular, the ability to supply
clients with secure virtual machines as configurable as an operating
system is quite desirable. Because Xen already supports facilities to
monitor virtual machine CPU usage, one obvious possible application
would be in time-share situations like those found in supercomputing
environments. Instead of submitting jobs for processing, users could
simply be given virtual machines and allowed to run arbitrary code
while paying for CPU time as necessary for their application. Because
of the migration facilities now included in Xen, these environments
could even be set up on a private computer and transferred to the
supercomputer or grid during processor intensive applications. This
idea actually served as the inspiration for the development of Xen at
the University of Cambridge in the form of the Xenoserver
project. This project is discussed further in Section~\ref{sec:xensciencomp}.


\subsection{Live Migration}
To enhance support for datacenter and cluster-like environments, Clark
\etal \cite{livemigration} have implemented migration facilities for
the virtual operating systems created by Xen. While a simple migration
like the one described in Section~\ref{sec:processmigration} is possible,
more advanced techniques to better support real-time applications like
web and game servers were also pursued. Focusing on fast network
situations with the goal of migration ``downtimes'' (as measured by an
external entity with no knowledge of the host's virtualized nature) of
only milliseconds, these facilities are specifically designed to
support the management of virtual operating systems in the types of
applications described above. Clark \etal achieve these levels of
performance by using the hypervisor's privileged status to
copy portions of a migrating operating system's memory
footprint iteratively to a new host, at each step transferring pages of memory
which were dirtied during the last copy. Once the process in charge of
coordinating the migration detects pages being dirtied faster than
they can be transferred, it shuts down the virtual operating system
and quickly copies the remaining pages of memory over before
restarting it on the new node. Even with a heavily loaded node, Clark
\etal observe downtimes of only 210ms. This excellent performance
bodes well for future applications of this technology.
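The iterative pre-copy scheme described above can be modeled with a
short sketch. The page counts, transfer rate, and dirtying rate below
are invented numbers chosen only to show the shape of the algorithm,
not Xen's actual behavior.

```python
# Toy model of iterative pre-copy migration: copy memory while the VM
# runs, resending pages dirtied during each round, and pause the VM
# only for the small remainder.  All constants are invented.

TOTAL_PAGES = 1000
SEND_RATE = 400          # pages transferable per round while the VM runs
DIRTY_FRACTION = 0.1     # fraction of just-sent pages redirtied per round

def live_migrate():
    to_send = TOTAL_PAGES              # round 0: the whole memory image
    rounds = 0
    while to_send > SEND_RATE and rounds < 30:
        sent = min(to_send, SEND_RATE)
        # Pages not yet sent, plus pages the running VM redirtied.
        to_send = (to_send - sent) + int(sent * DIRTY_FRACTION)
        rounds += 1
    # Stop-and-copy: pause the VM, send the remainder, resume on target.
    return rounds, to_send

rounds, downtime_pages = live_migrate()
print(rounds, downtime_pages)
```

The key property the sketch captures is that the pages copied while
the machine is paused shrink to a small fraction of the full image, so
observed downtime stays far below the total transfer time.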


\section{Current Work}
While Live Migration promises to be an effective strategy for managing
clusters of nodes running many operating systems under Xen, currently
facilities to automate this management either do not exist or are in
the early stages of development. One of the next steps toward
the widespread utilization of these techniques is, as noted by Clark
\etal, the development of ``cluster control software which can make
informed decisions as to the placement and movement of virtual
machines'' \cite{livemigration}. Unfortunately, very little research
has been done regarding the best ways to make these decisions. While
this problem is in many ways similar to those studied by Krueger and
Livny and others, it differs in several important respects.

\subsection{Problem Details}
A comparison to Krueger and Livny proves instructive in understanding
the characteristics of this problem.

\subsubsection{Metrics}

Our metrics for measuring performance improvement will
be considerably different from those of Krueger and Livny. Where they
measured response time (the total amount of time a process is in the
system) and response ratio (the reciprocal of the rate at which the
process receives service from the CPU), measuring response time in
our case simply does not make sense, as we assume virtual operating systems
will run indefinitely. 

Instead, a statistic similar to response ratio,
CPU time, will be more indicative of virtual machine performance. This
statistic will measure the amount of CPU time allocated to a given virtual
machine. Higher CPU times per
node, given high levels of overall CPU utilization, will be
indicative of better service to that node. In addition, external
measures catered to specific applications (like throughput for HTTP
servers) will be important measures of user experience, and thus
virtual machine performance.
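As a sketch of how such a statistic might be computed, consider
sampling cumulative per-VM CPU time at fixed intervals; in practice
the readings would come from Xen's monitoring facilities, while the
numbers below are invented.

```python
# Hypothetical per-VM CPU accounting from periodic cumulative readings.

def cpu_share(samples, interval):
    """Fraction of each sampling interval a VM actually ran.

    samples: cumulative CPU-time readings (seconds), one per tick.
    interval: wall-clock seconds between readings.
    """
    shares = []
    for prev, cur in zip(samples, samples[1:]):
        shares.append((cur - prev) / interval)
    return shares

# A VM sampled every 10 s that used 4 s, then 9 s, of CPU per interval.
readings = [0.0, 4.0, 13.0]
print(cpu_share(readings, interval=10.0))
```

Given high overall utilization, consistently high per-interval shares
for one virtual machine would mark it as receiving good service.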

\subsubsection{Load Balancing Algorithm}
The design of our load distributing algorithm will likely be quite
different from that proposed by Krueger and Livny. One important way
in which they will differ is in the acceptable complexity of the
decision making process. Because the total overhead of migrating a
process between nodes is, in most cases, relatively small, the costs
of finding the best possible source and target host candidates for
migration can be prohibitive. 

For our applications, the time required
to make the best possible decisions about migration will, in most cases, be small
compared to the amount of time required to migrate a virtual machine
between nodes. In addition, the possible performance penalties of
making a bad migration decision could potentially be quite large, as in
the case of a node which, after accepting a new virtual machine,
suddenly experiences a peak in activity on some of the virtual machines it
previously held. It is not hard to imagine a situation in which the
performance of a particular virtual operating system could actually be
negatively impacted by such an event, rendering the time and resources
already devoted to migration a complete waste.
Instead of taking Krueger and Livny's approach of ``limit[ing
migration] negotiation to a subset of nodes,'' we will be able to
explore our options quite thoroughly to make better-informed
decisions.

Like Harchol-Balter and Downey, we will likely want to use a strategy
somewhere in between load balancing and load sharing for our
purposes. Where load balancing would likely prove a very difficult,
and potentially costly (because of the high cost of migration)
exercise, load sharing in the traditional sense alone would prove
ineffective, as it relies on nodes becoming idle to prompt
migration. Instead, we will likely use a relaxed version of load
sharing, in which migration is sparked when total system performance
metrics rise above or fall below certain levels.
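A minimal sketch of this relaxed load-sharing trigger, assuming
hypothetical high- and low-water utilization thresholds, might look as
follows.

```python
# Relaxed load sharing: propose a migration only when one node's
# utilization crosses a high-water mark while another sits below a
# low-water mark.  Thresholds and names are illustrative assumptions.
HIGH_WATER = 0.85
LOW_WATER = 0.25

def migration_needed(node_utils):
    """Return (source, target) node indices, or None if balanced enough."""
    hottest = max(range(len(node_utils)), key=lambda i: node_utils[i])
    coolest = min(range(len(node_utils)), key=lambda i: node_utils[i])
    if node_utils[hottest] > HIGH_WATER and node_utils[coolest] < LOW_WATER:
        return hottest, coolest
    return None

print(migration_needed([0.9, 0.5, 0.1]))
print(migration_needed([0.6, 0.5, 0.4]))
```

Unlike strict load balancing, this trigger stays quiet for moderately
uneven loads, avoiding expensive migrations that yield little benefit.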

\subsubsection{Virtual Machine Selection}

While most studies of dynamic load distribution at the
process level have used process lifetime distributions to determine
which processes might benefit from migration, in the case of virtual
operating systems we assume an infinite lifetime for all systems. To
select appropriate candidates for migration when dealing with virtual
operating systems, then, we will need a different metric. One obvious
choice is the percent of total CPU time devoted to a virtual operating
system over some unit of time. While the hypervisor should enforce
equal CPU sharing when every virtual machine on a node is highly
utilized, in real-world situations we should be able to tell how busy
a virtual machine is by how much of the CPU it uses, because this
number should correspond to the amount of CPU it is requesting. This
assumption may not hold in situations where one virtual machine
wants an extremely large amount of time on the CPU while one or more
other virtual machines want less but still relatively large amounts of
time. In this case, all heavily utilized virtual domains will appear
to be the same, when in reality one is seeing more use. In this
situation, however, all of the involved virtual machines should still
see some performance improvements when one is moved to a less utilized
machine.
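Expressed as code, this selection criterion might rank a node's
virtual machines by recent CPU share and choose the busiest as the
migration candidate; the following is a sketch over invented data, not
the xenbal implementation.

```python
# Candidate selection sketch: pick the VM consuming the largest share
# of recent CPU time on an overloaded node.  The sample data is invented.

def select_candidate(vm_cpu_shares):
    """vm_cpu_shares: dict mapping VM name -> recent fraction of CPU used."""
    return max(vm_cpu_shares, key=vm_cpu_shares.get)

shares = {"web1": 0.45, "db1": 0.30, "web2": 0.15}
print(select_candidate(shares))
```

As noted above, ties among saturated machines make this heuristic
imperfect, but moving any heavily utilized machine should still
relieve pressure on the remaining ones.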

With these considerations in mind, we next introduce our balancing
software, and discuss two preliminary algorithms for balancing a
cluster of Xen machines.