\part{Discussion}
\label{sec:discussion}

Through our real world tests and implementation of XenSim, we have
come to better understand the problem of load balancing operating
system sized objects. Because operating system virtualization is a
relatively new technology, we believe that one of our most interesting
results is the solidification of the requirements for a general
virtual operating system load balancer. The discussion of these
requirements will be broken into several pieces. First, we will
discuss the necessity of real world performance data for the problem
domains we are interested in. Part of this discussion will include a
critique of our algorithms as they apply to possible real world
data. Next, we will discuss the balancing techniques we used in our
work. We will also expand on these techniques and offer suggestions
for future work in this area. We will finish with a discussion of the
potential future uses of Xen, and discuss the role of dynamic load
balancing in each of these. In addition, we will also discuss the
current state of XenSim, and mention enhancements that would permit it
to simulate real world situations more accurately.

\section{Our algorithms: strengths and weaknesses}
Before beginning this discussion, however, we will review the
strengths and weaknesses of our algorithms. While we were able to get
some interesting results from these algorithms, a number of factors in
their design contributed to rather strong weaknesses. 

\subsection{Simple balancing algorithm}
The simple balancing algorithm performed well in the scenarios for
which we tested it. This
is likely a result of the fact that we designed it
specifically to achieve a desired transformation from several
unbalanced configurations to a state we considered ideal. Nonetheless,
we believe its performance in real world scenarios would be acceptable
as well. In particular, the sender initiated version of this algorithm
should perform well in webserver-like environments in which some, but
not all, of a large pool of virtual servers are active at any
time. Considerable improvements should be possible by considering
I/O information including networking and disk accesses, especially
given the necessity of remote storage for migration. Beyond the
obvious example of monitoring network load and incorporating it into
migration decisions, future work could also explore differences in
performance due to remote storage in situations where a mixture of
remote and local storage exists. Our lab setup is one such situation
in that domains would sometimes be hosted on the node hosting their
NFS disk storage.

One major weakness of this algorithm is its inability to consider long
term patterns in the data. Despite the load memory feature present in
later versions of the algorithm, it is only capable of seeing the
short term state of the system. Especially in an environment like a
web server farm, where load is likely to rise slowly to a peak during
the day and decline slowly at night, focusing on short term changes
could actually be a liability, leading to frequent, expensive
migrations.
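To make the sender initiated decision concrete, it might be sketched as follows. This is a hypothetical reconstruction rather than the actual {\tt xenbal} code; the node names, the threshold value, and the {\tt loads} mapping are illustrative assumptions.

```python
def pick_migration(local_node, loads, threshold=0.75):
    """Sender initiated check: if the local node's load exceeds the
    threshold, propose moving one domain to the least loaded neighbor.

    `loads` maps node name -> total CPU load in [0, 1].
    Returns (source, target) or None if no migration is warranted.
    """
    if loads[local_node] <= threshold:
        return None
    # Pick the least loaded neighbor; ties broken by name for determinism.
    target = min((n for n in loads if n != local_node),
                 key=lambda n: (loads[n], n))
    # Only migrate if the target actually offers headroom.
    if loads[target] >= loads[local_node]:
        return None
    return (local_node, target)
```

A more sophisticated version, as suggested above, would replace the instantaneous `loads` values with short term expectations or trend information.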

\subsection{P-loss balancing algorithm}
This algorithm did not perform particularly well, though better
performance could have perhaps been achieved with a more thorough
exploration of the sensitivity variable. In addition, this algorithm
would likely benefit from the same I/O information that would have
improved our simple migration algorithm. 

One important weakness of this algorithm is its expected performance
in a large scale virtual machine pool environment. In this case, where
each domain receives a relatively small portion of the physical
resources, the idea of expected performance, crucial to the
calculation of productivity loss, is very different from our
assumption of 100\% utilization. For example, in Xen's target
environment of 100 virtual machines hosted on a single piece of modern
hardware, the assumption that each virtual machine could be using 100\% of the
physical resources at any given time is somewhat ridiculous. 

On the other hand, this algorithm, unlike our simple algorithm, could
easily be modified to incorporate bigger picture load information. If,
for example, the algorithm had access to a function which approximated
the load of a domain at any given time based on past data, we could
use this expectation as our basis for calculating productivity
loss. Using this data would also solve the problem noted above, by
eliminating the need to assume 100\% potential resource
utilization. This ability to incorporate longer term data is likely to
be an extremely important factor in effectively balancing real
clusters of Xen machines, as we will discuss next.
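The modification described above might look like the following sketch, in which productivity loss is computed against a per-domain expected-load function rather than an assumed 100\% ceiling. The proportional-share model and the function names are illustrative assumptions, not the p-loss implementation itself.

```python
def productivity_loss(domains_on_node, expected_load, t):
    """Productivity loss for one node at time t, a sketch.

    `domains_on_node`: list of domain names hosted on the node.
    `expected_load(d, t)`: expected CPU demand of domain d at time t,
    as a fraction of the node's capacity in [0, 1].
    Each domain's fair share of the node is 1/n; the loss is the
    demand a domain is expected to have but cannot be given.
    """
    n = len(domains_on_node)
    share = 1.0 / n if n else 1.0
    loss = 0.0
    for d in domains_on_node:
        demand = expected_load(d, t)
        loss += max(0.0, demand - share)
    return loss
```

Under the original 100\% assumption, `expected_load` would simply return 1.0 for every domain, recovering the behavior we tested.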

\section{Performance modeling}
\label{sec:performancemodeling}
As has been mentioned several times, and is alluded to in the section
above, real world resource utilization data will be a critical
component of any algorithm designed to effectively balance a cluster
of Xen machines. This is one of the crucial differences between our
work and the kind of process balancing work discussed in
Part~\ref{sec:background}. Because of the relatively short nature
and/or consistent workload of most processes, short term assumptions
about processor load are both reasonable and useful. Operating
systems, however, are neither short running nor consistently loaded in
any regard. Network, memory, and CPU usage vary greatly throughout the
life of the system. This being the case, it is not clear that a single
``load average'' could ever properly characterize an operating
system's behavior. 

One of the first considerations, then, in the design of a better load
balancing algorithm is the expected performance model for each
domain. There are several possibilities for constructing this model,
and it is likely that each would be useful in a different situation.

\subsection{Data pre-gathering} 
One potentially very effective technique for constructing performance
models would be gathering data about expected performance from
previous applications, either virtualized or non-virtualized. In
particular, this kind of construction technique would likely work well
for web servers and massively multiplayer online gaming servers, since
these servers are likely to follow fairly regular usage
patterns. Using standard statistical techniques, administrators could
construct mathematical formulas to describe the expected behavior of a
domain and associate these formulas with the domains within the load
balancer. These formulas could then be used to either make balancing
decisions, for example moving a web server to its own node during the
middle of the day when traffic is expected to be high, or as a basis
for techniques like our p-loss algorithm. Additionally, the underlying
data model could be updated periodically with observed data, allowing
it to make slow adjustments to long term trends. It is likely that
algorithms using this technique would have some trouble compensating
for unexpected spikes in performance, for example, a link to a web
server being posted on Slashdot, but would likely provide good long
term behavior.
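One of the simplest statistical techniques an administrator might use is bucketing historical observations by hour of day to obtain a daily expected-load profile. The sketch below is a hypothetical illustration of this idea; the sampling format is an assumption.

```python
def hourly_profile(samples):
    """Build an expected-load profile from historical samples.

    `samples`: list of (hour_of_day, load) observations, with load
    in [0, 1]. Returns a list of 24 mean loads, one per hour; hours
    with no observations fall back to the overall mean.
    """
    sums = [0.0] * 24
    counts = [0] * 24
    for hour, load in samples:
        sums[hour] += load
        counts[hour] += 1
    n = sum(counts)
    overall = sum(sums) / n if n else 0.0
    return [sums[h] / counts[h] if counts[h] else overall
            for h in range(24)]
```

Periodically folding newly observed data into `samples` gives exactly the slow adjustment to long term trends described above.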

\subsection{Short term trend identification}
A second, somewhat related technique for deriving expected performance
could search for shorter term trends in data and attempt, on the fly,
to fit mathematical functions to them. In this technique, an algorithm
could examine all data within a certain time window and use regression
to fit a line or curve to the data within that window. Upward trends
in a domain could be used to trigger migrations giving that domain
more resources, while downward trends could be used to consolidate
domains on a node. A great deal of work could be done in finding
appropriate time windows on which to do this, and the answer would
probably not be the same for all applications.
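The windowed regression just described could be sketched as a least-squares slope over the most recent samples. This is an illustrative example, with the sample format and window semantics assumed.

```python
def window_trend(samples, window):
    """Least-squares slope of load over the last `window` samples.

    `samples`: list of (time, load) pairs, oldest first.
    A positive slope suggests rising load (a candidate for giving the
    domain more resources); a negative slope suggests the domain is a
    candidate for consolidation.
    """
    recent = samples[-window:]
    n = len(recent)
    if n < 2:
        return 0.0
    mean_t = sum(t for t, _ in recent) / n
    mean_l = sum(l for _, l in recent) / n
    num = sum((t - mean_t) * (l - mean_l) for t, l in recent)
    den = sum((t - mean_t) ** 2 for t, _ in recent)
    return num / den if den else 0.0
```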

\subsection{Performance guarantees}
\label{sec:perfguar}
One possibly rich field of inquiry for divining performance
expectations could be found in performance guarantees. Xen has
recently made this an even more interesting subject with the use of a
new domain scheduling algorithm. The domain scheduling algorithm is
the arbitrator that makes decisions about when a particular domain
gets to use the CPU, very similar to the process scheduler within an
operating system. SEDF, the latest scheduling algorithm, has the
ability to make CPU performance guarantees to individual domains. This
capability is actually available from the Python interface from which
we extracted performance information. 

Performance guarantees offer several possible ways to define
performance expectations. First, we could treat a guarantee as expected
performance. While this would likely underestimate performance nearly
all of the time, it would allow these guarantees to be maintained
between migrations. Not only could we guarantee virtual machines a
portion of the resources of a machine, we could guarantee them a set
block of resources regardless of their host.

Expected performance could also be defined as a multiple of a
performance guarantee. The specific coefficient used to make this
calculation would likely need to be discovered empirically, but this
might provide a more accurate picture of expected performance.
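Both possibilities reduce to a one-line calculation, sketched below. The function name is hypothetical, and the representation of an SEDF-style guarantee as a CPU fraction is an assumption.

```python
def expected_from_guarantee(guarantee, coefficient=1.0):
    """Derive an expected-performance figure from a CPU guarantee
    (a fraction of the physical CPU in [0, 1]).

    coefficient = 1.0 treats the guarantee itself as the expectation,
    which preserves guarantees across migrations but usually
    underestimates actual use; larger, empirically chosen coefficients
    trade that safety for accuracy. The cap at 1.0 reflects the
    physical limit of the host.
    """
    return min(1.0, guarantee * coefficient)
```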

\subsection{Resource balancing}
Throughout this discussion, we have used the term resource to denote
an abstract idea of physical limits on performance. In reality, there
are several different resources that constrain computational
performance in the real world. A short discussion of several of the
main types is provided here, followed by a discussion of their
applications to load balancing.

\subsubsection{CPU Load}
This is perhaps the most obvious variable to associate with load
balancing, as it seems directly related to performance. However, this
is only the case in computations that are bounded by the CPU. Examples
of these kinds of applications that might be common in virtual
operating systems in the future include web servers with dynamic
content, game servers, and scientific computation.

\subsubsection{Network Load}
One potentially very important load variable, especially given the
necessity of network storage for migration, is network load. This
includes load on routers and on individual network links, as well as
line speed. While the rising popularity of fiber networks may make this
less of an issue, the sheer volume of data we may require to pass
across an internal network or through an external gateway may prove to
be a limiting factor in many Xen deployments. Thus, algorithms which
can recognize and correct situations in which a virtual machine's
performance is network bounded, as was the case with our HTML tests in
Section~\ref{sec:loadexplorations}, are both interesting and
important.

\subsubsection{Memory capacity}
One feature of Xen's {\tt xend} interface that we did not take advantage of
for our work is its ability to change domain memory
allocations dynamically. This ability could be leveraged to make dynamic
performance adjustments both within machines and across machines. For
example, a domain using a lot of memory on one machine could be
migrated to a system with a larger amount of physical memory and given
a larger memory allocation. 

Even without this ability, memory footprint information is definitely
a factor that should be taken into consideration in any mature
migration decision. Especially in situations where the total memory
allocations of all domains in the system is close to the total
physical memory capacity of the system, ensuring migrations are
possible by carefully managing memory footprints could be a
complicated and important feature.
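A memory feasibility check of this kind might be sketched as follows. This is a hypothetical illustration; the reserve size and the free-memory map are assumptions, and a real balancer would query {\tt xend} for these figures.

```python
def migration_fits(domain_mem, target_free_mem, reserve=64):
    """Check whether a domain's memory allocation (in MB) fits on a
    target node, keeping a reserve for the hypervisor and domain 0.
    """
    return domain_mem + reserve <= target_free_mem

def feasible_targets(domain_mem, free_mem_by_node):
    """Filter a node -> free-memory map down to the nodes that could
    actually receive the domain; a balancer would then choose among
    these using its usual load criteria.
    """
    return [n for n, free in sorted(free_mem_by_node.items())
            if migration_fits(domain_mem, free)]
```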

\subsubsection{Other factors}
The detection of performance limiting factors would make a very
interesting field of general study, and its application toward our
problem is both large and obvious. Many other factors could have an
influence on the performance of a virtual operating system including
memory latency, cache sizes, and the presence of parallelization
technologies like Intel's hyperthreading. All of these could be
factored into a future balancing algorithm as possible limiting
factors on virtual machine performance.

\subsection{Machine learning}
The task of making predictions about future performance based on a set
of previous observations is one that has been extensively studied in
the field of machine learning. Because of this, we believe these
techniques could be used quite effectively to build and maintain
expected performance models. In particular, online supervised
algorithms like regression and neural networks could probably prove
particularly useful in generating load expectation functions. While
either of these methods would introduce significant time and space
overhead, especially in situations with large numbers of virtual
machines, their benefits in terms of accurate prediction of expected
performance could be substantial. Like many of the other techniques we
have discussed, balancing could be performed based on the expected
data, or the expected data could be used within a more complicated
algorithm like our p-loss technique.

While these techniques would be prohibitively expensive in many
previous dynamic balancing problems, the large size of virtual
domains, and potential consequences of a bad decision may justify
using techniques like this that will make better decisions more likely.


\section{Balancing techniques}
With this in mind, we will now discuss several general techniques we
believe will be useful in optimizing the performance of Xen clusters.

\subsection{Load based balancing}
Probably the most obvious technique for improving performance is
resource balancing. Put simply, the heart of this technique lies in
searching for performance limiters within the system and attempting to
compensate for them. In the case of CPU load balancing, this means
migrating virtual machines away from nodes with high CPU
utilization. In network load balancing, this means finding saturated
network connections and migrating domains to another network
interface. In memory load balancing, this means finding nodes whose
memory is fully utilized, and moving domains which could use a larger
memory allocation to nodes with free memory. 

It is useful to compare this problem to that of process level load
balancing, in which we have the capability to modify resource
utilization by dynamically assigning processes to execution
environments. As we discussed in Part~\ref{sec:background}, the
benefits of actual dynamic migration techniques have been debated, to
a large degree because of the cost of the migration and decision
making frameworks relative to the benefits they offer. More
importantly, the notion of ``load'' in process migration algorithms
is generally related to the length of time the process will take to
execute. Migration decisions become dependent on the amount of time a
process has already executed, as this is seen as indicative of the
amount of time the process has
left\cite{harcholbalter97exploiting,krueger88}. In contrast, the
concept of load in operating system load balancing is extremely
complex, as discussed above.

Unsurprisingly, the potential for sophistication of the resulting
algorithms is quite high. We suggest two categories for potential load
balancing algorithms, motivated by the two algorithms we
developed.

The first type deals with short term fluctuations in resource
loads. Regardless of the particular resource chosen, short term spikes
due to increased virtual machine use are likely in many of the
applications envisioned for operating system virtualization. The
``Slashdot effect'' for web servers, tournaments or other special
events on game servers, and increased user interaction on interactive
systems are all examples of events that could be responsible for such
short range differences. Because these are precisely the situations in
which a bad resource configuration would be most noticeable, it is
particularly important that a good load balancing algorithm deal with
this type of fluctuation. 

Our simple balancing algorithm is of this type. More advanced versions
of this algorithm could include trend analysis, as discussed above,
and more sophisticated techniques for finding migration
partners. For example, instead of simply picking its least loaded
neighbor, as our algorithm currently does, a future version could
search for a neighbor whose load has been consistently low or whose
short term load expectations are low. While longer term data might be
incorporated to a degree in this way, we submit that this would
actually be a liability for this type of algorithm. While short term
balancing alone probably will not result in optimal configurations,
and could lead to a high number of migrations, in situations with
critical load imbalances these algorithms will be vital.

The second type of algorithm deals with the long term expected performance
of a virtual domain. Daily or weekly periodic load functions for web
servers, game servers, and even interactive systems could all provide
the basis for these algorithms. As discussed above, there are two
ways these balancers could be implemented. 

First, they could make balancing decisions based solely on expected
performance. Balancers could examine the expected long term
performance of an individual domain and make a decision to isolate or
consolidate that domain based on how it is expected to perform. Using
this technique, a balancer might attempt to move a web server domain
to its own system at 9:00AM on weekdays in anticipation of higher
load. Alternatively, balancers could examine the expected performance of
all virtual machines in a pool and search for optimal configurations
based on these expectation functions. This technique might attempt
to put two virtual machines together on a node if their expected
performance functions complemented each other well (that is, if the
sum of their expected performance was close to 100\% at all times).
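The complementarity criterion just described could be sketched as below: two profiles pair well when their worst-case combined load stays near or below 100\%. The scoring function and data layout are hypothetical.

```python
def complement_score(profile_a, profile_b):
    """How well two expected-load profiles (equal-length sequences of
    loads in [0, 1]) complement each other, measured as the
    worst-case combined load. Lower scores mean better co-hosting.
    """
    return max(a + b for a, b in zip(profile_a, profile_b))

def best_partner(profile, candidates):
    """Pick the candidate domain whose profile complements `profile`
    best, i.e. minimizes the worst-case combined load.

    `candidates`: dict mapping domain name -> expected-load profile.
    """
    return min(candidates,
               key=lambda name: complement_score(profile, candidates[name]))
```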

A second, and probably better, class of long term balancers consists of
algorithms similar to our p-loss algorithm. Instead of balancing
directly on the expected performance data, these algorithms would
incorporate this data into a more sophisticated decision making
framework. Our p-loss algorithm is an example of a balancer which uses
long term expectation information to make short term migration
decisions, but modifications would allow it to make these decisions
based on long term future expected performance. For example, instead
of choosing a configuration with the lowest productivity loss, it
could choose a configuration with the lowest expected productivity
loss over the next several hours or days. 

\subsection{Number based balancing}
One final topic related to performance modeling is the number based
balancing we incorporated into our simple migration algorithm, and
which was useful in the same situations as our p-loss ``unfairness''
technique. 

First, however, we would like to comment on this
symmetry. While the idea that number based balancing in our simple
algorithm has a perfect analog in our p-loss algorithm is elegant and
attractive, the similarities were greatly exaggerated by our assumption
of 100\% potential productivity. Essentially, this meant that we were
balancing on the inverse of the number of domains on a node. Without
this assumption the effect would persist only in a weaker form: nodes
with many domains would probably tend toward higher levels of
unfairness, and nodes with few domains toward lower levels, but this
would not necessarily be the case.

In general, however, number based balancing may still be useful in
situations where all nodes are fully loaded. In this case, a better
configuration will not involve overall load because of the upper limit
of system resources. Instead, a better configuration will simply be
equally distributed resources. A slightly more flexible variation on
this idea is assigning priority ratings or, as discussed earlier,
resource guarantees to domains. A load balancing agent could then
attempt to keep the total priority ratings on nodes balanced when load
balancing fails. In any case, this type of balancing will likely
function best as a heuristic when resource load balancing algorithms fail.
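The priority-weighted variation might be sketched as a simple spread measure over per-node priority totals. This is an illustrative assumption; plain number based balancing is the special case where every priority is 1.

```python
def weight_imbalance(assignment, priority):
    """Spread of total priority weight across nodes.

    `assignment`: node -> list of domain names hosted there.
    `priority`: domain -> priority rating (or resource guarantee).
    A balancer could fall back to minimizing this spread when
    resource based balancing offers no clear improvement.
    """
    totals = [sum(priority[d] for d in doms)
              for doms in assignment.values()]
    return max(totals) - min(totals)
```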

\subsection{Machine learning algorithms}
A final class of algorithms that could prove useful for load balancing
are machine learning algorithms. One fascinating advantage of these
algorithms is their ability to find order in bodies of data orders of
magnitude larger than human beings are able to process. This
capability could be used to learn good configurations of a cluster and
attempt to move clusters into these configurations when performance
variables begin to decline. One potentially interesting possibility for
this would be to incorporate application specific performance
indicators into these algorithms. 

For example, when balancing a web
server farm, a balancer could keep track of server throughput and
response time, that is, the total amount of data served from a
particular virtual machine in a set amount of time and the average
time it took for that machine to process a request. This monitoring
could be used to deduce ``optimal'' configurations, that is,
configurations which seem to promote high server throughput and low
request response time. When these variables begin to degrade, the
balancer could attempt to move the cluster into a configuration it has
previously observed as ``good.'' 
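A minimal bookkeeping layer for this idea might look like the following. The configuration encoding and the use of mean throughput as the quality signal are assumptions; a real system would need the richer learning machinery discussed above.

```python
def record_configuration(history, config, throughput, response_time):
    """Remember externally observed performance for one cluster
    configuration. `config` maps node -> list of hosted domains and
    is frozen into a hashable key.
    """
    key = tuple(sorted((node, tuple(sorted(doms)))
                       for node, doms in config.items()))
    history.setdefault(key, []).append((throughput, response_time))
    return key

def best_known_configuration(history):
    """Pick the configuration with the best observed mean throughput;
    the balancer would steer the cluster back toward it when
    performance degrades.
    """
    def mean_throughput(key):
        samples = history[key]
        return sum(t for t, _ in samples) / len(samples)
    return max(history, key=mean_throughput)
```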

While there are a large number of
technical considerations that would need to be addressed before this scheme
would function, its potential to monitor external performance
characteristics could prove quite powerful. It should also be noted
that this data could probably also be incorporated into some of the
algorithms discussed above, particularly load balancing
algorithms. Because machine learning algorithms like neural networks
are adept at finding complex patterns in data, however, these
techniques could have unique advantages in this regard.



\section{Other considerations}

Several other considerations will be important in the design of more
complicated load balancing algorithms. They include, but are not
limited to, monitor architecture and node affinity.  We will briefly
discuss each of these before moving on. 

\subsection{Monitor architecture}
One of our first design decisions was to implement our balancer as a
distributed decision maker. As it is currently implemented, a copy of
{\tt xenbal} must be run on each node in a Xen cluster. This daemon is
then responsible for providing load information to other daemons,
gathering load information about the system periodically, and making
migration decisions based on this information. This process is
performed in parallel on each of the nodes in the cluster, and
migration decisions are currently made completely independently of any
other migration decision in the cluster. 

This had several implications for our work. First, and probably most
importantly, simultaneous migration decisions which individually made
sense but together actually put the system into a new configuration
that was no better were not only possible but fairly common,
especially in the p-loss algorithm. While an attempt was made to
correct for this through the implementation of ``migration locking''
within the balancer, in which a node would not attempt to migrate a
domain to a node already in the process of performing a migration,
this did not always prove effective. Similarly, attempts to avoid this
effect by requiring balancers to sleep for as long as a minute were
not entirely successful.
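The migration locking idea can be sketched as a small advisory structure like the one below. This is an illustration of the concept, not the {\tt xenbal} implementation; note that, exactly as observed in our tests, it cannot prevent two nodes that poll at the same instant from racing.

```python
class MigrationLock:
    """Best-effort migration locking: a node refuses to start a
    migration involving any node already participating in one.
    Advisory only, so simultaneous independent decisions can still
    conflict, which matches the behavior described in the text.
    """
    def __init__(self):
        self.busy = set()

    def try_start(self, source, target):
        """Claim both endpoints, or refuse if either is busy."""
        if source in self.busy or target in self.busy:
            return False
        self.busy.update((source, target))
        return True

    def finish(self, source, target):
        """Release both endpoints once the migration completes."""
        self.busy.discard(source)
        self.busy.discard(target)
```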

Despite this, we feel that a distributed algorithm was the correct
choice for our particular hardware setup. Because all of our nodes had
relatively limited resources and we wanted to keep the nodes as
homogeneous as possible, it was not desirable for us to use any one
node as a ``master balancer.'' Distributed algorithms would probably
also prove useful in very large scale scenarios, where the overhead
of calculating migration decisions could potentially be very high.

The ability to dedicate a single machine to act as a master balancer,
however, would likely prove effective in solving migration
coordination issues elegantly. In this kind of centrally controlled
algorithm, a single balancing process would periodically gather
information and examine all possible migration decisions. It would
then choose some number of them to execute based on this
information. One potential downside to this algorithm might be its
inability to monitor individual nodes as closely for short term
fluctuations. 

A third, fairly novel possibility for making migration decisions would
be to concentrate the decision making process within the domains
themselves. In this model, each domain would act as an independent
agent, competing with all other domains for resources and executing
migrations when it decided the ``grass was greener'' on another
node. While this would require each domain to be somewhat aware of
the fact that it was being virtualized, the basic architecture for
doing this is already implemented in {\tt xenbal} in the form of the
XML-RPC interface to {\tt xend}.

\subsection{Node affinity}
Complex relationships between virtual machines and between virtual
machines and nodes could provide additional dimensions for migration
decisions. One example of this can be found in the NFS storage system
we used for our virtual machines. Because each domain's disk storage
was housed on one of the nodes in the cluster, situations in which the
domain was hosted on the node hosting that domain's disk storage would
have reduced overall network traffic. While our tests indicate this
reduction may be small, it could nevertheless prove considerable in a
larger scale situation. The idea of being able to assign ``node
affinities'' to domains could potentially prove very
valuable. Similarly, the ability to prefer situations in which two
closely related domains (for example, a web server and its
corresponding database server) are kept in close proximity could
provide significant improvements to application performance.

\section{Future uses of Xen and applications of node balancing within}
Because Xen is a new entry in the field of operating system
virtualization, and because operating system virtualization itself is
a relatively new field, the uses of this technology are nowhere near
fully explored. We will briefly discuss what we believe to be the most
promising future uses of Xen clusters, and the role dynamic load balancing will
likely play in each.

\subsection{Server pool environments}
Virtualization of headless servers that communicate only via standard
networking is currently the most widespread use of virtualization
technology, largely because of its potential to reduce data center
operating costs. The addition of network storage
accessible transparently from any node in the system adds the
possibility of the kind of load balancing we have explored. Load
balancing strategies for this type of work have already been  
extensively addressed above. 

\subsection{Scientific computation}
\label{sec:xensciencomp}
Interestingly, scientific ``grid computation'' environments were the
original motivation for Xen's development. The Xenoserver project
``aims to build a public infrastructure for wide-area distributed
computing''\cite{xenoserver}. Essentially, organizations like
Cambridge University will be able to create a cluster of Xen
machines. This cluster will be aware of other clusters of Xen
machines, and interested parties will be able to submit virtual
operating systems for execution in the grid environment. These virtual
operating systems could then be run anywhere within the grid and
returned to the user when finished. The ability to create an entire
virtual operating system environment for execution in a grid
environment is an important and novel future use of this technology,
and we believe it will prove to be one of its more important uses.

Load balancing is especially relevant in this context. Because of the
possibility of charging money for computational time, the ability to
dynamically ensure a given virtual machine has access to appropriate
resources is particularly important. Pricing systems could be built
upon the types of guarantees we discussed in
Section~\ref{sec:perfguar}, and balancing strategies could be tweaked to
reflect them. In addition, the complexities added by having
not only intra-cluster but also intra-grid migration possibilities are
considerable. 

Finally, the fault tolerance benefits of such a system are
considerable. Before being brought down for maintenance, a Xen node
in this environment could be evacuated of all virtual domains via
live or non-live migration. After the required work is completed, the
domains could be migrated back. The need to shut down an operating
system for hardware related reasons would be completely nullified,
creating an even more stable environment for long computations.

\subsection{Workstation virtualization}
One novel and as yet unexplored application for this technology
can be found in a typical workstation environment like that found in
most modern companies or computer labs. Many modern workers require
computers for their day-to-day work. The best current solution to this
problem is to give each person in a company a physical machine,
generally integrated into his or her work space. Unfortunately, this
is an enormous waste of resources. Unimaginable numbers of CPU cycles
are wasted each day as workers use text processing programs, e-mail
clients, and web browsers which do not even begin to strain the
capabilities of modern personal computers. When users actually do need
to perform computationally intensive tasks, they are either stuck with
the machine they have, or forced to use more complicated techniques
like job submission on computational nodes in companies that make this
kind of technology available.

Workstation virtualization would provide a solution to both of these
problems. In an ideal scenario, workers would simply have a remote
terminal-like device at their desk. This device would connect
to a Xen cluster-like environment on which a pool of workstation
virtual machines was running. The number of machines in this pool
could be considerably smaller than the number of workstations
needed. Workstations could be migrated around the cluster according to
their load requirements in many of the ways we have
discussed. Additionally, these clusters could be integrated into a
grid like environment like the one discussed in
Section~\ref{sec:xensciencomp} to generate income. A workstation cluster like this
could theoretically pay for itself if this form of scientific
computing becomes popular.

In this case, ideal performance could be seen as providing end
users with the constant illusion of having the full power of the most
powerful node in the Xen cluster at their disposal. In a bad
situation, that is, one close to the full utilization of the Xen
cluster, ideal performance could be seen as equal resources being
available to all users. 

This idea is very similar to Sun's ``Sun Ray'' system and other so
called ``thin client'' systems \cite{thinclient, sunray}. These systems work
by providing users with a stripped down personal computer capable of
little more than managing a display and making a remote connection to
a central server on which applications can be run. A key advantage of
using Xen for this would be the ability to provide users with the
illusion of full operating system control. As with the datacenter
environment, users could be given full root access. Possibly even more
importantly, Xen provides an elegant security solution, in which users
are not aware of the other operating systems on their host node. On
the other hand, it is likely that current thin client hardware could
be adapted or used as-is with a cluster of Xen machines as described.

