\documentclass[dvips,12pt]{article}


\setlength{\topmargin}{-0.5in}
\setlength{\textheight}{9in}
\setlength{\oddsidemargin}{0in}
\setlength{\evensidemargin}{0in}
\setlength{\textwidth}{6.5in}

%Grabbed from Jim Teresco. Thanks, Jim!
\newcommand{\etal}{{\it et al}.$\:$}
\newcommand{\eg}{{\it e}.{\it g}.$\:$}
\newcommand{\cf}{{\it cf}.$\:$}
\newcommand{\ie}{{\it i}.{\it e}.$\:$}

\newcommand{\singlespace}{
  \protect\renewcommand\baselinestretch{1.0}
  \protect\normalsize
}

\newcommand{\doublespace}{
  \protect\renewcommand\baselinestretch{1.5}
  \protect\normalsize
}

%My macros
\newcommand{\source}{{\it source}$\:$}
\newcommand{\target}{{\it target}$\:$}


\begin{document}
\date{}
\title{Dynamic Load Balancing and its Possible Applications to the Xen
Virtual Machine Monitor}

\author{Travis Vachon}

\maketitle


\begin{abstract}
\end{abstract}


\doublespace

\section{Introduction}
\label{sec:intro}
Recent improvements in computer hardware have made possible the use of
virtualization techniques. While many of these techniques have focused on
providing virtualized programming language environments, recently 
technology has emerged which aims to virtualize entire operating system 
environments.

The Xen Virtual Machine Monitor provides an
environment for the simultaneous execution of multiple virtualized
operating systems much like a modern operating system provides an
environment for the simultaneous execution of multiple
processes. While many potential applications for this
technology are still unexplored, one of the primary motivations for its
development is found in large scale data center environments.

Consider a ``server farm'', a group of hardware servers providing a 
number of independent or mirrored web or database servers. Currently,
administrators either configure each physical machine to run a single
server to ensure maximum dependability or run several different types
of servers on each machine to better utilize resources. Using Xen,
administrators can essentially use a group of a few physical machines
to provide many ``dedicated'' servers without fear of unexpected
interactions through machine resource usage. Because Xen is so new, however, techniques to 
effectively utilize physical resources given a pool of virtual 
operating systems have not been developed.

Building on the results of Krueger and Livny \cite{krueger88}, which concluded that
migration of live processes between nodes in a cluster can provide
significant performance enhancements, a great deal of research has been
done on algorithms for effectively balancing cluster workloads at the
process level. Unsurprisingly, migration techniques are more
effective as a solution for load balancing for certain types of
processes. Virtualized operating systems, because of their long running
and computationally intensive nature, fall squarely into this set of
processes \cite{harcholbalter97exploiting}. While their total migration cost is quite
high, the benefits and penalties of good or bad migration choices
make the migration decision algorithm a very interesting problem.

This paper will provide a history of the development of process
migration and its applications and implications for
dynamic load balancing techniques. We will then discuss the Xen
Virtual Machine Monitor, and examine the migration technique employed
by this technology. Finally, we will discuss possible strategies for
applying this technique to a modern ``server farm'' environment in
which we hypothesize dynamic load balancing of virtual operating systems 
using migration will provide significant benefits to responsiveness and 
hardware utilization.


\section{Background}
\label{sec:background}
Both dynamic load distribution and process migration are heavily
studied fields. Despite the complexity needed to support process
migration, numerous systems with this capability have been
developed. However, as noted by Harchol-Balter and Downey, most of
these systems ``have [not] implemented a policy that specifies which
processes should be [migrated] for the purposes of load
balancing'' \cite{harcholbalter97exploiting}. This aversion to
implementation of such policies is likely a result of the sometimes
conflicting results reported on their benefits.


\subsection{Dynamic Load Distribution}
Dynamic load distribution as a means of better utilizing groups of
processors is an extensively studied subject. While a very large
amount of research has gone into finding ways to balance workloads
within multi-processor systems, this paper will focus on dynamic load
distribution utilizing network-like communication systems. Research in
this area can be divided into several categories.

{\it Preemptive load distributing} refers to the practice of
interrupting process execution and transmitting all of the process's
state to a different node in the system. Because of the extremely
short nature of the majority of processes \cite{harcholbalter97exploiting}, most
do not benefit from preemptive load distributing. However,
Harchol-Balter and Downey \cite{harcholbalter97exploiting} found
that ``while 3.5\% of processes live longer than 2 seconds... these
processes make up more than 60\% of the total CPU load.'' Preemptive
techniques like those described by \cite{harcholbalter97exploiting}
and \cite{krueger88} are thus very interesting topics.

{\it Non-preemptive load distributing} refers to using information
about system performance to pick an appropriate node for execution at
the beginning of the life of a process. 

{\it Load balancing} algorithms attempt to equalize the load on each of the
nodes in a system. Generally, more theoretical work has been done on
this problem, as it lends itself to formal formulation. In particular,
Aggarwal \etal \cite{aggarwal03load} provided a thorough treatment of the problem of
selecting an optimal reassignment of jobs to processors given an
arbitrary reassignment cost.

{\it Load sharing} algorithms relax this goal somewhat, and instead
attempt to ``conserve the ability of the system to perform work by
assuring that no node is idle while processes wait for
service'' \cite{krueger88}. In traditional process migration
environments, this is achieved by having nodes either solicit jobs
when they become idle or search for idle nodes to transfer jobs to
when they become overloaded, and otherwise simply execute
normally. While these algorithms do provide limited load balancing,
they are generally less aggressive about keeping the system balanced.

In addition to these four categories, load distributing algorithms
typically follow one of four general strategies for initiating
migration \cite{367728}.

\begin{itemize}
\item A {\it sender-initiated} migration policy relies on having
  nodes search for underloaded nodes to move work to when they become
  overloaded. Because these policies require heavily loaded nodes to
  perform work to find acceptable candidates for migration, this
  policy is generally preferable for low to medium loaded
  systems. 
\item A {\it receiver-initiated} migration policy requires nodes to
  search for additional work when they become underloaded. While in
  systems with sender-initiated migration policies nodes may ``move
  work'' only by starting processes from their processor queue on
  other machines, nodes in a system using a receiver-initiated
  migration policy can request more work at any time, and thus
  generally gain greater benefit from migration techniques.
\item A {\it symmetric} migration policy combines sender-initiated and
  receiver-initiated policies, and is thus appropriate in a wider
  variety of situations.
\item Finally, {\it random} migration policies have been explored,
  which simply migrate work randomly from nodes in a
  system. Significant performance increases have been observed using
  this policy for process migration.

\end{itemize}
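As a concrete (and deliberately simplified) illustration of these initiation strategies, the following Python sketch shows how sender-initiated, receiver-initiated, and symmetric policies might each select migrations. The threshold values and the Node class are our own illustrative assumptions, not part of any system described above.

```python
# Illustrative sketch of migration initiation policies.
# HIGH/LOW thresholds and the Node class are assumptions for this example.
HIGH, LOW = 0.8, 0.2  # load thresholds (fractions of capacity)

class Node:
    def __init__(self, name, load):
        self.name, self.load = name, load

def sender_initiated(nodes):
    """Overloaded nodes search for an underloaded peer to move work to."""
    moves = []
    for n in nodes:
        if n.load > HIGH:
            target = min(nodes, key=lambda m: m.load)
            if target.load < LOW:
                moves.append((n.name, target.name))
    return moves

def receiver_initiated(nodes):
    """Underloaded nodes solicit work from the most loaded peer."""
    moves = []
    for n in nodes:
        if n.load < LOW:
            source = max(nodes, key=lambda m: m.load)
            if source.load > HIGH:
                moves.append((source.name, n.name))
    return moves

def symmetric(nodes):
    """Combine sender- and receiver-initiated policies."""
    return sender_initiated(nodes) + receiver_initiated(nodes)

nodes = [Node("a", 0.95), Node("b", 0.1), Node("c", 0.5)]
print(sender_initiated(nodes))  # [('a', 'b')]
```

Note how the symmetric policy simply unions the two decision rules, which is why it applies in a wider range of load regimes.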

Several major studies of these techniques have yielded sometimes
conflicting results. Generally, however, it appears that, especially
with modern network bandwidths and processor speeds, preemptive
migration is a promising technique for improving the performance of
distributed systems.

\subsection{Preemptive vs. Non-Preemptive Load Balancing}
One of the first studies to suggest that active process
migration provides significant advantages over simple intelligent
initial placement techniques was conducted by Krueger and Livny
\cite{krueger88}. In this work, Krueger and Livny use a simulator to
examine the benefits gained by augmenting non-preemptive job placement
strategies with active process migration policies under a
variety of file system conditions. In particular, they examine sender
initiated load sharing, sender initiated load balancing, symmetric
load sharing, and symmetric load balancing policies over a range of
``Place Factors'' designed to model situations ranging from
exclusively local to completely distributed file systems. Their
program simulates 20 nodes generating processes at a
homogeneous rate, with a mean process migration size of 100K and a
communication device bandwidth of 10 Mbits/sec. Processes are selected
for migration using a simple criterion which favors long running
processes with few previous migrations.
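Such a selection criterion can be sketched as a simple scoring function. The formula below is our own illustrative approximation of "favor long running processes with few previous migrations," not Krueger and Livny's exact criterion.

```python
# Hypothetical scoring rule: long-running processes with few prior
# migrations score highest and are preferred as migration candidates.
def migration_score(cpu_age_secs, prior_migrations):
    """Higher score = better migration candidate."""
    return cpu_age_secs / (1 + prior_migrations)

# (name, CPU age in seconds, number of previous migrations)
procs = [("shell", 0.5, 0), ("sim", 3600.0, 1), ("batch", 1800.0, 4)]
best = max(procs, key=lambda p: migration_score(p[1], p[2]))
print(best[0])  # sim
```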

Importantly, Krueger and Livny conclude that process migration can
improve performance in nearly all environments. Although less
distributed environments increase the overhead of migration,
completely local storage increases the overhead of initial placement
techniques as well, so Krueger and Livny actually observe greater
benefits from this augmentation in these environments. Also
importantly, they find that load sharing techniques benefit more from
migration than do load balancing techniques when compared to simpler
initial placement methods.

Unfortunately, Eager \etal \cite{55604}, in a paper also published in
1988, used analytic techniques and basic simulation models to conclude
that in almost all cases, non-preemptive techniques are only
outperformed ``modestly'' by preemptive load distributing
techniques, a result which appears to contradict Krueger and
Livny. However, the fact that Krueger and Livny took a more
simulation-driven approach, coupled with the fact that implemented
systems have appeared to achieve performance closer to Krueger and
Livny's results \cite{harcholbalter97exploiting}, makes it likely that
true performance depends on factors absent from Eager \etal's study.
Despite this, contradicting results such as those presented by
these two papers suggest some of the reasons why active process
migration policies have not become a more widespread part of distributed
systems. 



\subsection{Later Work}

Harchol-Balter and Downey's 1997 study
\cite{harcholbalter97exploiting} found many of the same results as
Krueger and Livny. Again using a simulator to model the behavior of
migration policies, this work differs from
\cite{krueger88} chiefly in its more realistic (``bursty'') workload
and its elimination of additional non-preemptive techniques. Instead
of exploring a wide range of migration policies, Harchol-Balter and
Downey choose to examine the effects of a sender-initiated
cross between load sharing and load balancing, in which overloaded
hosts attempt to offload all processes whose migration is expected to
increase the performance of both ``the migrant process and the other
processes at the source host.'' They conclude that migration of long
running jobs away from busy hosts helps not only the jobs themselves,
but also shorter jobs that subsequently arrive at the host. In
addition, they find that even in scenarios with high memory-transfer
costs, preemptive migration outperforms non-preemptive migration, a
phenomenon they attribute to the higher penalties of incorrect
job length predictions in non-preemptive strategies.

Very recently, Aggarwal \etal \cite{aggarwal03load} have formulated
the problem of load rebalancing formally, deriving a polynomial-time
approximation scheme that reduces the complexity of finding a
near-optimal reconfiguration
of jobs on a set of nodes. This theoretical work has already been
utilized to enhance the game play experience of massively multiplayer
games \cite{1065982}, and promises a plethora of applications.

\subsection{Process Migration}
\label{sec:processmigration}
A thorough treatment of the history and current state of process
migration techniques can be found in Milojicic \etal \cite{367728}.
Because the focus of this paper is not on past migration techniques,
we will offer only a brief discussion of these before moving on to
discuss the techniques employed by Xen.

While migration techniques vary from application to application, the
algorithm for migrating a process from a \source node to
a \target node usually takes the following form:

{\singlespace
\begin{enumerate}\itemsep 0in
\item Issue migration request to \target
\item Detach process from \source
\item Redirect communication
\item Extract \source process state
\item Create destination process on \target
\item Transfer state from \source to \target
\item Forward references
\item Resume new instance
\end{enumerate}
}
\doublespace
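The steps above can be rendered schematically as follows; every class and method here is a hypothetical placeholder standing in for the corresponding step, not the API of any real migration system.

```python
# Schematic rendering of the eight migration steps. The Node class simply
# logs each operation invoked on it, so the call sequence can be inspected.
class Node:
    def __init__(self, name):
        self.name = name
        self.log = []  # record of steps performed on this node

    def __getattr__(self, step):
        def op(*args):  # every step just records its name
            self.log.append(step)
            return f"{step}-result"
        return op

def migrate(process, source, target):
    target.request_migration(process)             # 1. issue request to target
    source.detach(process)                        # 2. detach from source
    source.redirect_communication(process)        # 3. redirect communication
    state = source.extract_state(process)         # 4. extract source state
    new_proc = target.create_process()            # 5. create destination process
    target.load_state(new_proc, state)            # 6. transfer state
    source.forward_references(process, new_proc)  # 7. forward references
    target.resume(new_proc)                       # 8. resume new instance
    return new_proc

src, dst = Node("source"), Node("target")
migrate("proc", src, dst)
print(src.log)  # ['detach', 'redirect_communication', 'extract_state', 'forward_references']
```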

Essentially, these steps encompass shutting down, transferring, and
restarting a process's execution while preserving communication and
state information. A variety of techniques have been utilized to
implement these procedures, ranging from kernel implementations like
MOSIX \cite{mosix} to userspace implementations like those found in the
LSF \cite{lsf} and Condor \cite{condor} projects.

\section{The Xen Virtual Machine Monitor}
\label{sec:xen}
Long left by the wayside as a novelty whose performance
penalties precluded widespread use, virtualization
technology has, as a result of the power of modern computers, recently
seen renewed interest.

The Xen Virtual Machine Monitor, introduced by Barham \etal in 2003
\cite{xenart03}, is one of several recent implementations of operating
system level virtualization. Similar to the better known VMWare, Xen
provides an environment in which multiple {\it guest operating
  systems} can execute concurrently. One important facet of this
virtualization is the idea of complete isolation of individual guest
systems from each other. Ideally, misbehaving guest operating systems
should not adversely affect their peers, whether through unexpected
processor or memory utilization or unexpected security violations. Xen
provides these capabilities by implementing a {\it hypervisor} which
runs at a higher privilege level than guest systems. This hypervisor
is responsible for virtualizing system calls made by the guest systems
and enforcing proper separation of guest systems. One final, and very
important, part of the Xen system is the need for kernel level
modifications to guest operating systems. To achieve acceptable
performance, Xen requires operating system kernels to be
modified to support the virtualized system calls provided by the Xen
hypervisor. Because
the scope of these modifications is small compared to the total size
of the kernel (2995 lines, or 1.36\% of the total x86 code base in
Linux), this does not provide an insurmountable barrier to the
adoption of Xen as a production quality virtualization solution. In
addition, support for Xen is expected to be merged into future
versions of the Linux kernel \cite{rooney05}. 

Xen has exhibited quite promising performance in a variety of
applications, outperforming VMWare in all tests performed by Barham
\etal, and performing nearly as well as an unmodified Linux kernel for
some applications \cite{xenart03}. For this reason, it is being
actively examined as a potential solution to the venerable problem of
resource utilization in many-processor environments like those found
in the modern datacenters which form the backbone of the world wide
web.

\subsection{Datacenter Applications}

Traditional approaches to management of Internet datacenters have
typically required a choice between a one-to-one relationship between
applications such as web or database servers and hardware, or the risk
of unexpected performance and security interactions between
applications. In the former scenario, hardware is typically
underutilized, even for servers of pages which see heavy traffic. In
the latter, proper security and performance guarantees are difficult,
if not impossible, to make.

Operating system virtualization allows for a compromise between these
two situations. Two users, Tom and Jim, can be assigned private
virtual operating systems on the same piece of hardware. Each of them
may be given complete control over their respective systems without
fear of security or performance interactions. Because the design of
Xen is targeted at ``hosting up to 100 virtual machine instances
simultaneously on a modern server,'' average hardware utilization is virtually
guaranteed to be higher than in traditional single application per
operating system scenarios, while allowing for robust separation of
applications \cite{xenart03}.

\subsection{Cluster Applications}
The potential benefits of this technology in a cluster or grid like
environment are numerous as well. In particular, the ability to supply
clients with secure virtual machines as configurable as an operating
system is quite desirable. Because Xen already supports facilities to
monitor virtual machine CPU usage, one obvious possible application
would be in time-share situations like those found in supercomputing
environments. Instead of submitting jobs for processing, users could
simply be given virtual machines and allowed to run arbitrary code
while paying for CPU time as necessary for their application. Because
of the migration facilities now included in Xen, these environments
could even be set up on a private computer and transferred to the
supercomputer or grid during processor intensive applications. While
there are currently no advertised implementations of this idea, its
potential benefits suggest it may only be a matter of time before
these environments become available.

\subsection{Live Migration}
To enhance support for datacenter and cluster-like environments, Clark
\etal \cite{livemigration} have implemented migration facilities for
the virtual operating systems created by Xen. While a simple migration
like the one described in Section~\ref{sec:processmigration} is possible,
more advanced techniques to better support real time applications like
web and game servers were also pursued. Focusing on fast network
situations with the goal of migration ``downtimes'' (as measured by an
external entity with no knowledge of the host's virtualized nature) of
only milliseconds, these facilities are specifically designed to
support the management of virtual operating systems in the types of
applications described above. Clark \etal achieve these levels of
performance by using the hypervisor's privileged status to
iteratively copy portions of a migrating operating system's memory
footprint to a new host, at each step transferring pages of memory
which were dirtied during the last copy. Once the process in charge of
coordinating the migration detects pages being dirtied faster than
they can be transferred, it shuts down the virtual operating system
and quickly copies the remaining pages of memory over before
restarting it on the new node. Even with a heavily loaded node, Clark
\etal observe downtimes of only 210ms. This excellent performance
bodes well for future applications of this technology.
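The iterative pre-copy loop described above can be sketched as follows. The page counts, rates, and stopping rule are our own illustrative assumptions and greatly simplify the mechanism Clark \etal implement; the point is only the structure of the loop.

```python
# Simplified sketch of iterative pre-copy live migration (illustrative
# parameters, not Xen's actual implementation).
def live_migrate(dirty_pages, transfer_rate, dirty_rate_fn, max_rounds=30):
    """dirty_pages: pages initially needing transfer
    transfer_rate: pages transferable per copy round
    dirty_rate_fn: pages dirtied during round r of copying
    Returns (rounds, pages_left) -- pages_left is copied during the
    brief stop-and-copy downtime."""
    rounds = 0
    while rounds < max_rounds:
        rounds += 1
        sent = min(dirty_pages, transfer_rate)
        newly_dirty = dirty_rate_fn(rounds)
        dirty_pages = dirty_pages - sent + newly_dirty
        # Stop when pages dirty faster than they can be sent,
        # or few enough remain for a short stop-and-copy phase.
        if newly_dirty >= transfer_rate or dirty_pages < transfer_rate // 10:
            break
    return rounds, dirty_pages

# A workload that dirties fewer pages each round converges quickly:
rounds, remaining = live_migrate(10000, 4000, lambda r: 2000 // r)
print(rounds, remaining)  # prints: 6 333
```

The final `remaining` pages are what determine downtime, which is why workloads with rapidly shrinking writable working sets migrate with downtimes of only milliseconds.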

\section{Current Work}
While Live Migration promises to be an effective strategy for managing
clusters of nodes running many operating systems under Xen, currently
facilities to automate this management either do not exist or are in
the early stages of development. One of the next steps toward
the widespread utilization of these techniques is, as noted by Clark
\etal, the development of ``cluster control software which can make
informed decisions as to the placement and movement of virtual
machines.'' \cite{livemigration} Unfortunately, very little research
has been done regarding the best ways to make these decisions. While
in many ways this problem is similar to those studied by Krueger and
Livny and others, in many ways it is quite different.

\subsection{Problem Details}
A comparison to Krueger and Livny proves instructive in understanding
the characteristics of this problem.

\subsubsection{Metrics}

Our metrics for measuring performance improvement will
be considerably different from those of Krueger and Livny. Where they
measured response time (the total amount of time a process is in the
system) and response ratio (the reciprocal of the rate at which the
process receives service from the CPU), measuring response time in
our case simply does not make sense, as we assume virtual operating systems
will run indefinitely. 

Instead, a statistic similar to response ratio,
CPU time, will be more indicative of virtual machine performance. This
statistic will measure the amount of CPU time allocated to a given
virtual machine compared to the total CPU time of its node. Higher CPU
times per virtual machine, especially at high levels of overall CPU
utilization, will be
indicative of better service to that virtual machine. In addition, external
measures catered to specific applications (like throughput for HTTP
servers) will be important measures of user experience, and thus
virtual machine performance.
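As an illustration of this statistic, the following sketch computes per-virtual-machine CPU shares over a measurement window; the window length, usage figures, and names are hypothetical.

```python
# Illustrative computation of the per-VM CPU-time statistic described above.
def cpu_share(vm_cpu_secs, node_cpu_secs):
    """Fraction of a node's total CPU time allotted to one VM."""
    return vm_cpu_secs / node_cpu_secs if node_cpu_secs else 0.0

# Over a 60-second window, a node granted CPU time to three VMs:
window = 60.0
usage = {"web": 30.0, "db": 15.0, "batch": 9.0}  # seconds of CPU granted
shares = {vm: cpu_share(t, window) for vm, t in usage.items()}
utilization = sum(usage.values()) / window  # overall node CPU utilization
print(shares["web"], round(utilization, 2))  # 0.5 0.9
```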

\subsubsection{Load Balancing Algorithm}
The design of our load distributing algorithm will likely be quite
different from that proposed by Krueger and Livny. One important way
in which they will differ is in the acceptable complexity of the
decision making process. Because the total overhead of migrating a
process between nodes is, in most cases, relatively small, the cost
of finding the best possible source and target host candidates for
migration can be prohibitive in the process migration setting.

For our applications, the time required
to perform these extensive tests will, in most cases, be small
compared to the amount of time required to migrate a virtual machine
between nodes. In addition, the possible performance penalties of
making a bad migration decision could potentially be quite large, as in
the case of a node which, after accepting a new virtual machine,
suddenly experiences a peak in activity on some of the virtual machines it
previously held. It is not hard to imagine a situation in which the
performance of a particular virtual operating system could actually be
negatively impacted by such an event, rendering the time and resources
already devoted to migration a complete waste.
Instead of taking Krueger and Livny's approach of ``limit[ing
migration] negotiation to a subset of nodes,'' we will be able to
explore our options quite thoroughly to make better informed
decisions.

Like Harchol-Balter and Downey, we will likely want to use a strategy
somewhere in between load balancing and load sharing for our
purposes. Where load balancing would likely prove a very difficult,
and potentially costly (because of the high cost of migration)
exercise, load sharing in the traditional sense alone would prove
ineffective, as it relies on nodes becoming idle to prompt
migration. Instead, we will likely use a relaxed version of load
sharing, in which migration is sparked when total system performance
metrics rise above or fall below certain levels.
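Such a threshold-triggered relaxed load-sharing policy might be sketched as follows; the band limits and the shape of the load data are our own assumptions.

```python
# Sketch of relaxed load sharing: migration is considered only when a
# node's utilization leaves a configured acceptable band.
UPPER, LOWER = 0.85, 0.15  # illustrative utilization band

def migration_trigger(node_loads):
    """Return (source, target) if some node leaves the acceptable band,
    else None. Unlike strict load balancing, balanced-but-unequal loads
    produce no migration."""
    hot = max(node_loads, key=node_loads.get)
    cold = min(node_loads, key=node_loads.get)
    if node_loads[hot] > UPPER and node_loads[cold] < UPPER:
        return hot, cold
    if node_loads[cold] < LOWER and node_loads[hot] > LOWER:
        return hot, cold
    return None

print(migration_trigger({"n1": 0.6, "n2": 0.5}))   # None (within band)
print(migration_trigger({"n1": 0.95, "n2": 0.4}))  # ('n1', 'n2')
```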

\subsubsection{Virtual Machine Selection}

While most studies of dynamic load distribution at the
process level have used process lifetime distributions to determine
which processes might benefit from migration, in the case of virtual
operating systems we assume an infinite lifetime for all systems. To
select appropriate candidates for migration when dealing with virtual
operating systems, then, we will need a different metric. One obvious
choice is the percent of total CPU time devoted to a virtual operating
system over some unit of time. While in an environment where all
virtual machines on a node are highly utilized the hypervisor should
enforce equal CPU sharing, in real world situations we should be able
to tell how busy a virtual machine is by how much of the CPU it uses,
because this number should correspond to the amount of CPU it is
requesting. This
assumption may not hold in situations where one virtual machine
wants an extremely large amount of time on the CPU while one or more
other virtual machines want less but still relatively large amounts of
time. In this case, all heavily utilized virtual domains will appear
to be the same, when in reality one is seeing more use. In this
situation, however, all of the involved virtual machines should still
see some performance improvements when one is moved to a less utilized
machine.
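Selecting a migration candidate by recent CPU fraction might then look like the following sketch; the data layout is assumed for illustration and is not a Xen interface.

```python
# Pick the VM consuming the largest share of a node's CPU as the
# migration candidate -- a proxy for how much CPU it is requesting.
def pick_candidate(vm_cpu_fractions):
    return max(vm_cpu_fractions, key=vm_cpu_fractions.get)

node = {"vm1": 0.55, "vm2": 0.30, "vm3": 0.10}
print(pick_candidate(node))  # vm1
```

Under saturation, as noted above, several heavily utilized machines may report nearly equal fractions, so ties here are broken arbitrarily.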

\subsubsection{Assumptions}
Because of the increasing popularity of network attached storage
solutions in datacenters, we will assume for our work that no files
required by any virtual machines are stored locally. This is
reasonable, as Xen's Live Migration technique makes the same
assumption. Disk access on one side of a transfer should be
identical to disk access on the other side, and the best way to
accomplish this is with solutions like NFS or iSCSI, a newer protocol
that exports block-level storage over commodity IP networks.

\subsection{Planned research}
Between January and June of 2006, we will be exploring this question
using a group of machines ranging from 450MHz to 1.7GHz running
XenLinux. Because these machines are not as powerful as those described
in \cite{livemigration}, we will likely use a scaled back set of test
applications and virtual machines. This will, hopefully, still indicate
which load distribution strategies are likely to prove optimal for this
problem, and suggest directions for future research.

To better understand this problem, we also plan to implement a
simulator to explore the performance of virtual operating system
migration on scales like those proposed by Xen's creators.

\section{Conclusion}
Operating system virtualization is a very young field, and migration
of virtual operating systems between physical machines is younger
still. Thoroughly explored techniques developed for process level
migration, however, suggest directions to take in exploring this
problem. As technology continues to integrate itself
into almost every aspect of our lives, techniques like operating
system virtualization which hold the promise of increased resource
utilization will become more and more important. To this end, research
like that which we propose is an important step towards making Xen a
usable environment on which to build virtual machine datacenters and
clusters.



\pagebreak


\singlespace
\bibliographystyle{abbrv}
\bibliography{references}

\end{document}