\part{Results}
\label{sec:results}
After implementing xenbal and XenSim, we performed a number of tests
both to verify the correctness of our code and to begin to formulate
an algorithm for balancing a cluster of Xen machines. In all of the
figures below, domain CPU load plots are labeled ``{\it node-name} domain
{\it domain-number}.'' {\it Node-name} refers to the node hosting the
domain's disk space, and not the node currently hosting the domain.

\section{Major Obstacles}
During the testing phase of our work, we ran into several major
obstacles which slowed our progress and prevented us from obtaining the
full range of results we had hoped for. The first of these was a lack of information
provided by the Xen interfaces for this kind of balancing. While CPU
load information and physical information about the Xen nodes were
readily available through the Python libraries, I/O information was
more difficult to find. While late discussions with some of the Xen
developers suggested this information was available within the
hypervisors, it is not available through the Python interface. Because
of this, we were unable to incorporate this information into our
balancing algorithms.

A second major obstacle was found within the Xen Live Migration
technique itself. Occasionally, while migrating, virtual machines would
become non-responsive. Upon further inspection via a console from
their hosting hypervisor it was discovered that they had
crashed. Unfortunately, these crashes seemed to be more common under
heavy loads. After talking with several people close to Xen
development, we have determined that this is a problem with the Live
Migration technique with roots either in Xen or the Linux
kernel. While we expect this will be fixed in future versions of Xen,
this is a fairly large problem for Xen's Live Migration. Generally, we
found the best way to fix a node once it hosted a domain which had
crashed in this fashion was to reboot. If we did not, Xen would
continue to set aside a small piece of memory for the crashed domain,
even after restarting the {\tt xend} daemon. While this did not make
testing impossible, it slowed our algorithm development and testing
considerably. 

\section{Preliminary Tests}
\label{sec:prelimtests}


\subsection{XenSim Verification Tests}
\label{sec:xensimver}
To verify the correctness of our simulator, we first simulated the
test bed described in Clark \etal's ``Live Migration of Virtual
Machines''\cite{livemigration}. This consisted of two dual 2GHz servers
with 2GB of memory hosting a virtual machine with an 800MB
memory allocation. A simple web server serving an html file to 100
clients was used to examine the performance of the live migration
technique under a normal load. Clark \etal found that migration using
the iterative copying technique took about 70 seconds to complete. In
our simulation, shown in Figure~\ref{fig:livemigbench}, we used two
single-CPU 2GHz servers, as the simulator does not yet support
multiple CPUs. Instead of a web server, we simulated a CPU load of
95\%, and a page dirtying rate of 0.5\%. 
As in Clark's tests, we observe a long
initial migration period followed by 
successively shorter but higher bandwidth migrations. Overall transfer
time for our simulation was about 77 seconds. 
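This iterative behavior can be captured by a simple back-of-the-envelope model of pre-copy migration. The sketch below is not XenSim's actual code; the parameter names and the stopping condition are illustrative.

```python
# Sketch of an iterative pre-copy migration model: the first round sends
# the full memory image, and each later round resends the pages dirtied
# during the previous round, until the remainder is small enough to copy
# while the domain is briefly paused.
def migration_time(mem_mb, dirty_rate, bandwidth_mbit, stop_mb=1.0, max_rounds=30):
    to_send = mem_mb              # round 1: full memory image
    total_time = 0.0
    for _ in range(max_rounds):
        seconds = to_send * 8.0 / bandwidth_mbit   # MB -> Mbit at the link rate
        total_time += seconds
        dirtied = mem_mb * dirty_rate * seconds    # memory dirtied this round
        if dirtied <= stop_mb:
            break                                  # stop-and-copy the remainder
        to_send = dirtied
    return total_time
```

So long as the dirtying rate stays below the link rate, each round is shorter than the last and the transfers shrink geometrically, matching the long initial round and brief final rounds visible in Figure~\ref{fig:livemigbench}.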
\begin{figure}[p]
\center
\psfig{file=figures/clarketalmigration.ps,angle=0,width=6in}

\caption{Results of migrating a virtual machine running SPECweb, a web
server benchmarker. Figure taken from \cite{livemigration}.}
\label{fig:clarketalmigration}
\end{figure}


\begin{figure}[p]
\center
\psfig{file=figures/livemigbench.ps,angle=-90,width=5in}

\caption{Results of simulating a system similar to that of Clark
  \etal \cite{livemigration}. }
\label{fig:livemigbench}
\end{figure}

Our second test simulated the environment in which we would be
performing our real world tests. Real world tests were performed using
two 1.0GHz machines with 512MB of memory. A virtual domain running a
prime number generating program was migrated between nodes in
approximately 14 seconds. A simulation using XenSim, as seen in
Figure~\ref{fig:livemigbench2}, took approximately 16 seconds to
perform the same migration. Again, a page dirtying rate of 0.5\% was
used, but the initial bandwidth limit was lowered to 50Mbit/sec. While
all of these numbers depend on numerous simulation variables, such as
the initial migration bandwidth limit and the page dirtying rate, we
believe our chosen values are appropriate.
\begin{figure}[htb]
\center
\psfig{file=figures/livemigbench2.ps,angle=-90,width=5in}

\caption{Results of simulating our hardware setup. The first migration
iteration takes 15 seconds at a transfer rate of about 52
Mbit/sec. Three more iterations (the last two too short to see on the
graph) take less than a second.}
\label{fig:livemigbench2}
\end{figure}


\subsection{Read Intensive NFS Test}
\label{sec:readintensiveresults}
A third important preliminary test we performed attempted to
determine the effects of using NFS storage based in the hypervisor
domains of each Xen machine as the disk storage for our virtual
domains. For this test we used all four of our Xen nodes. Each node
ran one virtual machine running an apache server servicing a high
number of requests per second. One node additionally ran a virtual
machine which ran a disk read intensive test application. This test
application opened a file across the NFS share, read its contents
into memory, and closed it. Halfway through the test, the domain was
migrated to another node, but at no point during the test was the
domain hosted on the node serving its NFS share.

\begin{figure}[hp]
\center
\psfig{file=figures/readintensivesetup.ps,angle=0,width=5in}

\caption{Setup for read NFS tests. Disk storage is served via NFS to
  each domain. One domain, andaluza domain 2, is being hosted
  remotely.}
\label{fig:readintensivesetup}
\end{figure}

\begin{figure}[hp]
\center
\psfig{file=figures/memtest.ps,angle=-90,width=5in}

\caption{Andaluza domain 2 begins in an idle state hosted on
  delidia. At about 40 seconds, it begins running a read intensive
  application.  At about 90 seconds, it is migrated to africander. }
\label{fig:memtest}
\end{figure}


As we can see in Figure~\ref{fig:memtest}, the fact that an NFS
intensive application was started does not appear to have affected
the performance of any of the domains in a noticeable way. In this
graph, andaluza-dom2-16, in light blue, began running a read intensive
application while hosted on delidia at about 40 seconds. As expected,
because it had to share a physical CPU with the other virtual domain
on delidia, each virtual machine was scheduled for about half of the
available CPU time. While the
graph shows a downspike in the CPU load of the other virtual machine
hosted on andaluza (as we might expect if NFS service was a serious
burden) this downspike is also observed in the virtual machine hosted
on epirus. These downspikes may actually reflect fluctuations in the
load monitoring and recording process; they do not indicate
significant effects from the increased NFS load. At about 80 seconds,
andaluza-dom2 was migrated to africander, with no noticeable effect on
the other domain with NFS hosting on andaluza. 

While this indicates negligible CPU effects from using NFS storage
based in our hypervisors, the network effects are slightly more
difficult to examine with our testing framework. To attempt to get an
idea of these effects, we ran two more tests similar to the one
described above. In the first, we made a large number of requests to
the virtual domain apache servers hosted on each of the nodes but did
not run an NFS read intensive application. In the second, we did the
same thing but ran an NFS read intensive application from the start of
the test. Each time we recorded the total throughput of the Apache
servers. If the NFS server did indeed have a detrimental effect on network 
bandwidth, we hypothesized that the second test would have a lower
total throughput, and this was the case. In the first test, we
observed a throughput of 3517.95MB, while in the second test we
observed a throughput of 2859.12MB. Unfortunately, as shown in
Figure~\ref{fig:nomigmemtest}(b), running the read intensive
application reduced the amount of CPU time available to the apache
server on delidia domain 1, which should also reduce the total
throughput of the test. 

While this result seems to suggest that the network effects of even a
read heavy application over NFS storage are negligible, more testing
would definitely be required to make strong conclusions. Despite this,
because the applications we tested with are not nearly as read heavy
as our test application, we believe the effects of NFS storage on our
results are minimal.

\begin{figure}[htb]

\vspace{9pt}

\centerline{\hbox{ \hspace{0.0in}
  \psfig{file=figures/nomignoreadmemtest.ps,angle=-90,width=3in}
  \hspace{0.25in}
  \psfig{file=figures/nomigreadmemtest.ps,angle=-90,width=3in}
  }
}
\vspace{9pt}
\hbox{\hspace{1.55in} (a) \hspace{3.07in} (b)}
\caption{(a) No read intensive application running, (b) read intensive
  application running in andaluza domain 2 which is hosted on
  delidia. Performance of andaluza domain 1, hosted on the node whose
  hypervisor serves the file system of andaluza domain 2, does not
  appear to be significantly impacted.}
\label{fig:nomigmemtest}
\end{figure}

\section{Primes: Full Load Tests}
\label{sec:primestests}
Our next tests were done to ensure our first simple balancing
algorithm would perform as expected. These were done both to debug and
verify our migration code, and to begin to examine the properties of a
good balancing algorithm. To reduce the amount of variability in our
tests, we started by running prime number generators within our
virtual machines. This meant that each virtual machine would try to
use as much of the CPU as possible at all times.

In our first such test, we started one virtual domain on each of two
nodes, two domains on one node, and no domains on a fourth node. A
prime number generator was then started in each of the virtual
machines. The balancer, executing our sender initiated simple
algorithm, correctly and quickly moved one domain to the empty node
based on CPU loads, as shown in Figure~\ref{fig:primestest}.
\begin{figure}[htb]
\center
\psfig{file=figures/primes-test.ps,angle=-90,width=5in}
\caption{Prime number generators running in domains, simple sender
  initiated migration policy. Domains 2 and 3 are initially on the
  same node. }
\label{fig:primestest}
\end{figure}


\section{Apache Servers: Real World Load Tests}
\label{sec:apachetests}

\subsection{Initial tests}
\label{sec:initialapachetests}
Because we were interested in the performance of our algorithms on real
world situations, the remainder of our tests utilized apache servers
within our virtual machines. 

\subsubsection{Hammerhead - a web server benchmarking tool}
To generate loads large enough to trigger
our balancer we used hammerhead\footnote{http://hammerhead.sourceforge.net}, a
web server benchmarking tool. This relatively simple tool allowed us
to create ``scenarios'', essentially lists of URLs on which to make
http requests, and simulate large numbers of concurrent connections
from one machine. Configuration files allow users to specify sleep time
between requests and output log files which record throughput
information. 

In order to facilitate the integration of the hammerhead tool into our
tests, we implemented an XML-RPC server, again using Python's {\tt
  SimpleXMLRPCServer} module, which allowed us to configure and
start hammerhead remotely. This requires automatically generating and
writing configuration files to disk, but allowed us to use a number of
multi-processor FreeBSD workstations in the Williams Computer Science
lab for testing.
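A minimal sketch of such a control server follows. The scenario format and the hammerhead command line flag shown here are illustrative, not hammerhead's actual interface; note also that the {\tt SimpleXMLRPCServer} module lives at {\tt xmlrpc.server} in Python 3, which this sketch uses.

```python
from xmlrpc.server import SimpleXMLRPCServer
import subprocess
import tempfile

def write_scenario(urls):
    """Write a hammerhead scenario file (one URL per line); return its path."""
    f = tempfile.NamedTemporaryFile("w", suffix=".scn", delete=False)
    f.write("\n".join(urls))
    f.close()
    return f.name

def start_test(urls, clients, sleep_ms):
    """Generate a scenario on disk and launch hammerhead against it."""
    path = write_scenario(urls)
    # the flag below is illustrative; hammerhead's real options are set in
    # its configuration file (client count, sleep time, output log, ...)
    subprocess.Popen(["hammerhead", "-f", path])
    return path

def serve(port=8000):
    """Expose start_test over XML-RPC so remote workstations can drive tests."""
    server = SimpleXMLRPCServer(("0.0.0.0", port), allow_none=True)
    server.register_function(start_test)
    server.serve_forever()
```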

\subsubsection{Load explorations}
\label{sec:loadexplorations}

Our first step in running apache tests was to try to understand
typical load for a web server. Unfortunately, we faced a major
challenge: a lack of real world web server data. Because the ideal
scenario for virtual operating system load
balancing is found in commercial ``server farm'' environments, we did
not have access to real world data reflecting situations we were
interested in. This has a number of implications which are discussed
further in Section~\ref{sec:performancemodeling} but for our work it meant
we needed to first decide on a testing workload.


Our first test was to configure hammerhead to query an html
file on each of our web servers. Utilizing hammerhead's ability to
simulate multiple clients, we examined the performance of our apache
servers with the number of simultaneous clients ($c$) ranging from 10 to
500 and inter-request sleep times ($t_{sleep}$) ranging from 500 to 15000
milliseconds. Graphs of the $t_{sleep}=500$ tests are shown in
Figure~\ref{fig:htmlhammerhead}, while a full listing appears in
Appendix~\ref{append:graphs}. Probably the most interesting thing to
note about this data is that CPU usage typically tops out around
30\%. This indicates the limiting factor in these tests is not
available CPU cycles, but network bandwidth. After reaching $c=300$
and $t_{sleep}=500$, CPU usage stops increasing.

\begin{figure}[p]
\vspace{9pt}

\centerline{\hbox{ \hspace{0.0in}
  \psfig{file=figures/60-second-25-thread-500-wait-html-test.ps,angle=-90,width=3in}
  \hspace{0.25in}
  \psfig{file=figures/60-second-50-thread-500-wait-html-test.ps,angle=-90,width=3in}
  }
}
\vspace{9pt}
\hbox{\hspace{1.55in} (a) \hspace{3.07in} (b)}
\vspace{9pt}

\centerline{\hbox{ \hspace{0.0in}
 \psfig{file=figures/60-second-100-thread-500-wait-html-test.ps,angle=-90,width=3in}
  \hspace{0.25in}
  \psfig{file=figures/60-second-200-thread-500-wait-html-test.ps,angle=-90,width=3in}  
  }
}
\vspace{9pt}
\hbox{\hspace{1.55in} (c) \hspace{3.07in} (d)}
\vspace{9pt}

\centerline{\hbox{ \hspace{0.0in}
  \psfig{file=figures/60-second-400-thread-500-wait-html-test.ps,angle=-90,width=3in}
  \hspace{0.25in}
  \psfig{file=figures/60-second-500-thread-500-wait-html-test.ps,angle=-90,width=3in}
  }
}
\vspace{9pt}
\hbox{\hspace{1.55in} (e) \hspace{3.07in} (f)}
\vspace{9pt}
\caption{(a) 25, (b) 50, (c) 100, (d) 200, (e) 400, and (f) 500 client
html request tests with $t_{sleep}=500$. Note that CPU load tops out
around 30\%.}
\label{fig:htmlhammerhead}
\end{figure}

\begin{figure}[p]
\vspace{9pt}

\centerline{\hbox{ \hspace{0.0in}
  \psfig{file=figures/60-second-25-thread-10000-wait-cgi-test.ps,angle=-90,width=3in}
  \hspace{0.25in}
  \psfig{file=figures/60-second-50-thread-10000-wait-cgi-test.ps,angle=-90,width=3in}
  }
}
\vspace{9pt}
\hbox{\hspace{1.55in} (a) \hspace{3.07in} (b)}
\vspace{9pt}

\centerline{\hbox{ \hspace{0.0in}
 \psfig{file=figures/60-second-100-thread-10000-wait-cgi-test.ps,angle=-90,width=3in}
  \hspace{0.25in}
  \psfig{file=figures/60-second-200-thread-10000-wait-cgi-test.ps,angle=-90,width=3in}  
  }
}
\vspace{9pt}
\hbox{\hspace{1.55in} (c) \hspace{3.07in} (d)}
\vspace{9pt}

\centerline{\hbox{ \hspace{0.0in}
  \psfig{file=figures/60-second-400-thread-10000-wait-cgi-test.ps,angle=-90,width=3in}
  \hspace{0.25in}
  \psfig{file=figures/60-second-500-thread-10000-wait-cgi-test.ps,angle=-90,width=3in}
  }
}
\vspace{9pt}
\hbox{\hspace{1.55in} (e) \hspace{3.07in} (f)}
\vspace{9pt}
\caption{(a) 25, (b) 50, (c) 100, (d) 200, (e) 400, and (f) 500 client
cgi request test with $t_{sleep}=10000$. CPU Load tops out around 45\%
because of activity in the hypervisor domain, which is not shown. }
\label{fig:cgihammerhead}
\end{figure}


To increase CPU utilization, we turned to technology which has become
ubiquitous in the modern Internet, server side
scripting. Because of its ability to generate content dynamically,
some form of server side scripting powers most modern Internet
applications including blogging software, wikis, and the content
management software used by most major news sites. As a result,
incorporating this technology into our tests not only enabled us to
examine high levels of CPU utilization without running into network
saturation, but also made our tests more realistic. 

To do this, we wrote a short CGI\footnote{CGI stands for Common
  Gateway Interface, and is a popular method for enabling web servers
  to run executable files to generate dynamic content. For more
  information, please see http://hoohoo.ncsa.uiuc.edu/cgi/} 
script in Python which generated 100 prime numbers and returned
them. Results of running the same tests as in
Figure~\ref{fig:htmlhammerhead} with $t_{sleep}=10000$ can
be seen in Figure~\ref{fig:cgihammerhead}. Most apparent is the now
full utilization of the CPU at higher levels of concurrent
clients. For the remainder of our tests we used a cgi scenario with
$c=200$ or $c=300$ and $t_{sleep}=10000$ because of its ability to
give high levels of CPU utilization while also providing a degree of
variability that might be found in real world data.
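A CGI script of the kind we used might look like the sketch below; the trial-division routine and plain-text output are illustrative rather than our exact script.

```python
#!/usr/bin/env python
# Illustrative CGI script: compute the first 100 primes by trial
# division and return them as a plain text HTTP response.
def first_primes(n):
    primes = []
    candidate = 2
    while len(primes) < n:
        # candidate is prime if no earlier prime divides it
        if all(candidate % p for p in primes):
            primes.append(candidate)
        candidate += 1
    return primes

if __name__ == "__main__":
    print("Content-type: text/plain\n")
    print(" ".join(str(p) for p in first_primes(100)))
```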




\subsection{Simple Algorithm Development}
\label{sec:simplealgdevel}


Our simple migration algorithm was developed iteratively. The first
version simply sampled CPU loads instantaneously and made migration
decisions based on CPU load. Later versions incorporated load averages
by keeping the last several recorded CPU loads in memory, and number
based migration, which attempts to balance the number of domains on
each node if load based balancing fails.

\subsubsection{First test}
The results of our first iteration of this migration policy in a
$c=300$, $t_{sleep}=10000$ scenario in which each of four nodes
initially hosted two domains can be seen in
Figure~\ref{fig:nomemnonumsimplesi}. Because no averaging is used to
compute CPU loads, a momentary lull on the node running domains 1 and
2 triggered a migration of domain 7 to this node. Instead of
maintaining the optimal configuration of two domains per node, the
algorithm wrongly migrated a highly utilized domain to an already
loaded node. Even worse, once in the bad configuration the algorithm
did not detect it, and left one node running three heavily loaded
domains while another node ran only one.

\begin{figure}[p]
\center
\psfig{file=figures/nomemnonumsimplesi.ps,angle=-90,width=5in}
\caption{First simple sender initiated algorithm. The algorithm makes
  a bad decision based on instantaneous data at about 8 seconds.}
\label{fig:nomemnonumsimplesi}
\end{figure}

\begin{figure}[p]
\center
\psfig{file=figures/nonumsimplesi.ps,angle=-90,width=5in}
\caption{Simple sender initiated algorithm with ``memory.'' The
  algorithm maintains a good configuration for the duration of the test.}
\label{fig:nonumsimplesi}
\end{figure}

\subsubsection{Load memory}
Our first step to remedy this was to add memory to the CPU load
calculation function. This was done by keeping a list of past CPU
loads associated with each node and domain. Requests for load average
information can specify a number of past requests to average over, or
use the default value of 10. When a domain first arrives on a node, it
cannot be migrated until its memory list has been filled. The results
of performing the above test 
with this modification are in Figure~\ref{fig:nonumsimplesi}. As
desired, the migration policy maintains an optimal configuration. 
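The load memory can be sketched roughly as follows; the class and method names are illustrative, not xenbal's actual interface.

```python
from collections import deque

# Sketch of per-domain (or per-node) CPU load "memory": a bounded list
# of recent samples with averaging over a requested window.
class LoadHistory:
    def __init__(self, depth=10):
        self.depth = depth
        self.samples = deque(maxlen=depth)

    def record(self, cpu_load):
        self.samples.append(cpu_load)

    def ready(self):
        # a freshly arrived domain may not migrate until its history fills
        return len(self.samples) == self.depth

    def average(self, over=None):
        # average the most recent `over` samples (default: the full depth)
        n = min(over or self.depth, len(self.samples))
        recent = list(self.samples)[-n:]
        return sum(recent) / n
```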

\begin{figure}[p]
\center
\psfig{file=figures/nonumsimplesiunbal.ps,angle=-90,width=5in}
\caption{$c=300, t_{sleep}=10000$ test with an initially unbalanced
  configuration. As desired, the algorithm divides the work between
  the four nodes.}
\label{fig:nonumsimplesiunbal}
\end{figure}
%hammerhead test 10
\begin{figure}[p]
\center
\psfig{file=figures/nonumunbal2sitest.ps,angle=-90,width=5in}
\caption{Simple balancing algorithm in a scenario in which 2 nodes had
3 domains each and 2 nodes had only one. Because all nodes are fully
loaded, no balancing takes place. Ideally, the algorithm would grant
all domains equal access to physical resources.}
\label{fig:nonumunbal2sitest}
\end{figure}

These tests demonstrated the ability of our algorithm to maintain a good
configuration. Our next step was to ensure it would properly
reconfigure bad configurations. To do this, we started in an
unbalanced situation in which two nodes hosted two domains each
and two nodes hosted no domains. The results of this test can be seen
in Figure~\ref{fig:nonumsimplesiunbal}. As desired, one domain is
migrated to each of the unloaded nodes. 

Our next test examined a slightly more complicated unbalanced
situation. In this test, two nodes were started with one domain each,
and two nodes were started with three domains each. We assumed that the
optimal configuration would be one in which each domain had access to
the same amount of machine resources. This would only happen if our
algorithm migrated a domain to each of the single domain nodes from
each of the three domain nodes. The results of this test can be seen
in Figure~\ref{fig:nonumunbal2sitest}. As we can see, the bad
configuration is not detected. This happens because our algorithm only
examined CPU loads, and could not find any migration partners with
acceptably low loads.


\subsubsection{Number based balancing}
To fix this, we needed to add a second step to our balancing
algorithm. After looking for migration partners with acceptably low
load, the next iteration of our simple balancing algorithm looked for
migration partners with significantly fewer (that is, two or more
fewer) domains. The underlying assumption behind this step is that all
domains deserve equal amounts of CPU resources. If one node is running
three domains and another is running one domain, a better
configuration can be achieved by migrating one domain from the first
node to the second. This step should only be undertaken when load
based reconfiguration does not happen, and only when all nodes are
close to fully loaded. The results of using our new algorithm on the
scenario described above can be seen in Figure~\ref{fig:numunbalsitest}.
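The resulting two-step partner search can be sketched as below; the 70\% acceptable-load threshold and the two-domain gap follow the text, while the function signature and data layout are illustrative.

```python
# Sketch of the two-step search for a migration partner: first by load,
# then (only if every node is busy) by domain count.
def find_partner(my_count, peers, load_threshold=0.7, count_gap=2):
    """peers: list of (name, avg_cpu_load, domain_count) tuples."""
    # Step 1: load based -- any peer below the acceptable-load threshold
    for name, load, count in peers:
        if load < load_threshold:
            return name
    # Step 2: number based -- a peer running two or more fewer domains
    for name, load, count in peers:
        if my_count - count >= count_gap:
            return name
    return None
```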


After an initial migration triggered by the number based balancing,
all of our tests show additional later migrations. Our initial
hypothesis regarding these migrations was that they were somehow being
caused by a bug in our number based migration. After examining log
files and our code, however, we discovered that they were indeed
legitimate load based migrations. To suppress them, we made two
modifications to our algorithm. First, we introduced a waiting time
between migrations. Once 
a balancer triggered a migration, it would now be required to wait
for an amount of time specified at startup, but which defaulted to one
minute. Second, we lowered the threshold of acceptable load for
recipient nodes from 70\% to 50\%. Thus, a recipient node would need
to drop below 50\% CPU load overall to be seen as an acceptable
migration partner. This drop was designed to add stability to the
algorithm, since it reduced the chance we would migrate to a busy node.
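The waiting period is simple to express; the throttle below is a sketch with illustrative names, defaulting to the one minute wait described above.

```python
import time

# Sketch of the inter-migration waiting period: after any migration,
# further migrations are suppressed for a configurable interval.
class MigrationThrottle:
    def __init__(self, wait_seconds=60):
        self.wait_seconds = wait_seconds
        self.last_migration = None

    def may_migrate(self, now=None):
        now = time.time() if now is None else now
        if self.last_migration is None:
            return True
        return now - self.last_migration >= self.wait_seconds

    def migrated(self, now=None):
        # record the moment a migration was triggered
        self.last_migration = time.time() if now is None else now
```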


This was the final simple sender initiated algorithm with which we
tested. Figure~\ref{fig:finalsimplesi} shows a five minute, 300
client, 10 second sleep test. One of these tests was completed for
each of the algorithms we developed. From here on, we shall refer to
these tests as the benchmark tests for our algorithms.
%hammerhead test 11
\begin{figure}[ph!]

\vspace{9pt}
\centerline{\hbox{ \hspace{0.0in}
  \psfig{file=figures/60-second-300-thread-10000-wait-numbal.ps,angle=-90,width=3in}
  \hspace{0.25in}
  \psfig{file=figures/300-second-200-thread-10000-wait-numbal.ps,angle=-90,width=3in}
  }
}
\vspace{9pt}
\hbox{\hspace{1.55in} (a) \hspace{3.07in} (b)}
\vspace{9pt}

\centerline{\hbox{ \hspace{0.0in}
 \psfig{file=figures/600-second-200-thread-10000-wait-numbal.ps,angle=-90,width=3in}
  \hspace{0.25in}
  \psfig{file=figures/900-second-200-thread-10000-wait-numbal.ps,angle=-90,width=3in}  
  }
}
\vspace{9pt}
\hbox{\hspace{1.55in} (c) \hspace{3.07in} (d)}
\vspace{9pt}


\caption{(a) 60, (b) 300, (c) 600, (d) 900 second number based
  balancing. In each case, number based balancing appears to be
  triggering migrations, but subsequent load-based migrations are
  triggered as well. To solve this, we decided to make the algorithm
  slightly less willing to migrate based on load.}
\label{fig:numunbalsitest}
\end{figure}
\begin{figure}[ph!]

\vspace{9pt}
\centerline{\hbox{ \hspace{0.0in}
  \psfig{file=figures/600-second-200-thread-10000-wait-numballowthresh.ps,angle=-90,width=3in}
  \hspace{0.25in}
  \psfig{file=figures/1200-second-200-thread-10000-wait-numballowthresh.ps,angle=-90,width=3in}
  }
}
\vspace{9pt}
\hbox{\hspace{1.55in} (a) \hspace{3.07in} (b)}
\vspace{9pt}


\caption{(a) 600, (b) 1200 second number based balancing with a lower
  threshold and waiting period. In both cases, the algorithm creates
  and sustains an optimal configuration. }
\label{fig:numunbalsitestlowerthreshold}
\end{figure}
\begin{figure}[ph!]
\center
\psfig{file=figures/finalsimplesi.ps,angle=-90,width=5in}
\caption{Final tests with simple sender initiated algorithm. As
  expected, the algorithm finds and maintains an optimal configuration.}
\label{fig:finalsimplesi}
\end{figure}

\subsubsection{Receiver initiated modifications}
Many of the modifications we made to our sender initiated algorithm
translated directly to the receiver initiated algorithm, including
load memory and number based balancing. Obviously, load thresholds for
migration had to be considered in a very different way. Most notably,
the source load threshold was reduced from 95\% CPU load in the sender
initiated algorithm to 80\% in the receiver initiated algorithm. This
change was based on the idea that the receiver initiated algorithm
wants work, and is willing to take work from a node that is even
slightly overloaded, while the sender initiated algorithm only wants
to migrate work when it is at full capacity. Similarly, the destination
load threshold was reduced from 50\% in the sender initiated algorithm
to 30\% in the receiver initiated algorithm. The benchmark test
results for this algorithm can be seen in
Figure~\ref{fig:finalsimpleri}. With these configuration variables
this algorithm is as stable as our simple sender initiated algorithm.
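These threshold pairs can be summarized compactly; the values come from the text, while the decision function itself is an illustrative simplification.

```python
# Source and destination CPU load thresholds for the two policies.
SENDER_INITIATED   = {"source": 0.95, "destination": 0.50}
RECEIVER_INITIATED = {"source": 0.80, "destination": 0.30}

def wants_migration(policy, source_load, dest_load):
    # a sender only offloads near full capacity; a receiver will pull
    # work from a node that is even slightly overloaded
    t = SENDER_INITIATED if policy == "sender" else RECEIVER_INITIATED
    return source_load >= t["source"] and dest_load < t["destination"]
```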



\section{P-loss algorithm}
As discussed in Section~\ref{sec:plossbalalg}, our second algorithm was
based on computing the overall productivity loss in the system and
seeking to move the system into a configuration which minimized
this loss. Interestingly, the two statistics the algorithm examined,
overall productivity loss and unfairness, correspond to the two
statistics examined by our simple balancing algorithm, CPU load and
number of domains per node. Like CPU load balancing, overall
productivity loss balancing fails in all-node full utilization
scenarios, because overall productivity loss in an unbalanced full
utilization scenario is the same as productivity loss in a balanced
full utilization scenario. Looking at unfairness, the difference
between the node with the highest productivity loss and the lowest
productivity loss, can help differentiate a good configuration from a
bad configuration in this case.

Perhaps the most important variable in the p-loss algorithm is the
productivity loss sensitivity. This is the degree to which a
configuration's expected total productivity loss or unfairness must be
better than the 
current productivity loss or unfairness in order to choose the
migration action corresponding to that configuration. Unfortunately,
due to time constraints we did not explore this variable more than
casually. 
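A sketch of this sensitivity gate is below. The productivity loss model (demand beyond a node's capacity) is an illustrative stand-in for the model of Section~\ref{sec:plossbalalg}, but it reproduces the behavior described above: total loss alone cannot distinguish an unbalanced full-utilization configuration from a balanced one, while unfairness can.

```python
# Illustrative p-loss model: a node's loss is the CPU demand that
# exceeds its capacity; a configuration is a per-node list of demands.
def node_loss(demands, capacity=1.0):
    return max(0.0, sum(demands) - capacity)

def total_loss(config):
    return sum(node_loss(node) for node in config)

def unfairness(config):
    # spread between the most and least productivity-starved nodes
    losses = [node_loss(node) for node in config]
    return max(losses) - min(losses)

def worth_migrating(current, candidate, sensitivity=0.1):
    # accept a candidate configuration only if it beats the current one
    # by more than the sensitivity margin: first on total loss, then on
    # unfairness
    if total_loss(current) - total_loss(candidate) > sensitivity:
        return True
    return unfairness(current) - unfairness(candidate) > sensitivity
```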

Benchmark tests for the sender initiated and receiver initiated
versions of this algorithm can be seen in Figures~\ref{fig:finalutilitysi}
and~\ref{fig:finalutilityri}.


\begin{figure}[p]
\center
\psfig{file=figures/finalsimpleri.ps,angle=-90,width=5in}
\caption{Receiver initiated version of simple balancing algorithm. The
algorithm does not appear to perform significantly differently from
the sender initiated version.}
\label{fig:finalsimpleri}
\end{figure}

\begin{figure}[p]
\center
\psfig{file=figures/finalutilitysi.ps,angle=-90,width=5in}
\caption{Performance of the sender initiated version of the p-loss
  algorithm. Unfortunately, this algorithm does not maintain a
  stable configuration. More research into the parameters of the
  algorithm might yield better results.}
\label{fig:finalutilitysi}
\end{figure}

\begin{figure}[p]
\center
\psfig{file=figures/finalutilityri.ps,angle=-90,width=5in}
\caption{Performance of the receiver initiated version of the p-loss
  algorithm. This version shows similarly poor results, but might be
  improved as described above.}
\label{fig:finalutilityri}
\end{figure}
