\section{Future Work}

As discussed in the Results section, our system currently works as a
centralized checkpoint/restart platform built using Xen and NILFS.
However, there are several areas where our system needs improvement before it
can offer acceptable performance.

\subsection{NILFS and performance}

One of the major performance constraints of our system stems from the 
inability of NILFS to support read-write privileges on the mounted 
snapshots. When a node fails, we roll back the VM(s) that were running on
the failed node to a previously saved checkpoint. The disk
checkpoint information used in this process is stored in the NILFS snapshot. When we 
mount these snapshots to access the filesystem, we require both read 
and write operations to be permitted on the snapshot. However, NILFS 
currently supports only read operations on mounted snapshots. Our work-around for 
this problem involves copying the entire contents of the snapshot to a 
new location, which is a very expensive operation. 

Although the snapshot mounting limitation may seem to limit the actual
benefit we derive from this system, the snapshot operation itself in
NILFS is very lightweight and remains worthwhile. Moreover, the NILFS
project is relatively new and under active development, and indications
suggest that this limitation will be fixed in the near future. Hence we
have decided to stay with NILFS and to continue building our solution
around the functionality that it provides.

In the meantime, we are looking into avoiding the expensive snapshot
copying operation entirely by incorporating UnionFS into our system.
UnionFS is a stackable filesystem that tracks changes made to a base
directory. Since our snapshots are currently read-only, we plan to set
up a UnionFS over the mounted snapshot to track changes made to it.
The UnionFS changes can then be periodically merged in the background
with the read-only snapshot to ensure a consistent disk state.
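As a rough illustration, the planned stacking could be expressed as two mount invocations: one mounting the NILFS checkpoint read-only, and one layering a writable UnionFS branch over it. The device paths, mount points, and checkpoint number below are illustrative only, not our actual configuration:

```python
# Sketch of the planned UnionFS work-around: mount the NILFS snapshot
# read-only, then stack a writable UnionFS branch on top of it.
# All paths and the checkpoint number are hypothetical placeholders.

def union_mount_cmds(snapshot_cno, device="/dev/nilfs0",
                     snap_dir="/mnt/snapshot", rw_dir="/tmp/snapshot-rw",
                     union_dir="/mnt/vm-disk"):
    """Return the shell commands that would stack a writable UnionFS
    branch over the read-only NILFS snapshot with checkpoint number
    `snapshot_cno`."""
    return [
        # Mount the given checkpoint of the NILFS volume read-only.
        "mount -t nilfs2 -o ro,cp=%d %s %s" % (snapshot_cno, device, snap_dir),
        # Stack UnionFS: writes go to rw_dir, reads fall through to the snapshot.
        "mount -t unionfs -o dirs=%s=rw:%s=ro none %s" % (rw_dir, snap_dir, union_dir),
    ]

for cmd in union_mount_cmds(42):
    print(cmd)
```

The background merge step would then copy the contents of the writable branch back into a fresh checkpoint, which is far cheaper than copying the whole snapshot.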

\subsection{Xen live migration}

Our current system hinges on a reactive fault tolerance mechanism: the
placement and migration mechanisms are invoked only when a node fails.
We plan on adding support for live migration that will periodically
consolidate the VMs running in the cluster in an energy- and
resource-efficient manner. Xen has native functionality that allows a
VM to live migrate from one host to another with little or no downtime.
Also, since we already set up SigLM on each node, we plan on enabling
the consolidation support already built into SigLM. At periodic
intervals, we would invoke SigLM's placement mechanism, passing it the
current set of live hosts and VMs, and then use Xen live migration to
effect the changes that SigLM recommends.
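The consolidation step reduces to diffing SigLM's recommended placement against the current one and issuing a live migration for each VM that should move. The sketch below assumes placements are simple VM-to-host mappings and uses Xen's `xm migrate --live` command; the VM and host names are made up:

```python
# Minimal sketch of periodic consolidation: compare SigLM's recommended
# placement with the current one and emit Xen live-migration commands.
# The placement dictionaries and names are illustrative.

def plan_migrations(current, recommended):
    """Return (vm, dest_host) pairs for every VM whose recommended
    host differs from its current host."""
    return [(vm, host) for vm, host in recommended.items()
            if current.get(vm) != host]

current = {"vm1": "node1", "vm2": "node2", "vm3": "node2"}
recommended = {"vm1": "node1", "vm2": "node1", "vm3": "node1"}  # consolidate onto node1

for vm, dest in plan_migrations(current, recommended):
    # `xm migrate --live` is Xen's native live-migration command.
    print("xm migrate --live %s %s" % (vm, dest))
```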

\subsection{Cluster manager}

Currently, our cluster manager (CM) has been built as a proof of concept, and
is hamstrung by several issues. Primarily, the CM is single-threaded,
which makes scalability impossible because many of the node server functions
block for several seconds.  Extending the CM to support multiple threads and
asynchronous operations is crucial for scalability and quick failure reaction. 
However, this will introduce concurrency concerns and greatly complicate the
CM's internal state machine, making careful planning a necessity.
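One plausible structure, sketched below, is to run blocking node-server calls on a worker pool while a lock serializes updates to the CM's shared state. The function and state names here are hypothetical, not the CM's actual internals:

```python
# Sketch of a multi-threaded CM: blocking node-server calls run on a
# worker pool so they cannot stall the manager, and a lock protects the
# CM's shared state. All names are hypothetical.

import threading
from concurrent.futures import ThreadPoolExecutor

state_lock = threading.Lock()
node_status = {}            # shared CM state

def poll_node(node):
    """Stand-in for a node-server call that may block for seconds."""
    return (node, "alive")

def on_result(future):
    node, status = future.result()
    with state_lock:        # serialize updates to shared state
        node_status[node] = status

pool = ThreadPoolExecutor(max_workers=8)
for node in ["node1", "node2", "node3"]:
    pool.submit(poll_node, node).add_done_callback(on_result)
pool.shutdown(wait=True)    # all callbacks have run by this point
print(node_status)
```

Even this simple shape shows the concurrency concern: every touch of shared state must go through the lock, which is why the CM's state machine needs careful planning before such a change.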

Secondly, the CM would benefit heavily from a web-based frontend for
use by an administrator.  Currently, console stdout / stderr are the only
outputs available, making minor events difficult to track and parameters such as
frequency settings impossible to adjust.  A web frontend would allow
administrators to observe the system as it operates, perform on-demand
migration, and view and select from snapshots as needed.

\subsection{Power profiles}

As described in the introduction, we have built ``generator'' virtual machine
images which attempt to produce the full range of values for a given
system resource (CPU, disk, etc.).  This range allows us to interpolate
resource-to-power mappings instead of extrapolating them, which yields much higher accuracy.

Figure \ref{profile_creation_fc} shows the intended full profile generation
process. Currently, we have only implemented the first stage, ``Generate resource
utilization''.

To complete the profile generation system, we need to determine the most
accurate regression method(s) and implement them as part of a standalone
framework.  This framework would consume resource and power logs, and use them
to continuously update and improve the power profiles stored in the repository.
Initial or ``seed'' data, generated using the ``generator'' images, would be
used to create each profile, but over time logs from the system's normal
operation could be added in as well.  To this end, integration with SigLM, which
performs its own continuous resource logging, would be of great benefit,
because it would allow us to leverage a distributed resource logging
platform which is already installed on all physical hosts.
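As one candidate regression method, an ordinary least-squares linear fit of utilization samples to power readings would serve as a baseline for the framework. The sketch below uses made-up (utilization, watts) samples; nothing about it reflects measured data:

```python
# Sketch of one candidate regression method for the profile framework:
# an ordinary least-squares linear fit of CPU utilization to power draw.
# The sample readings below are fabricated for illustration.

def fit_linear(xs, ys):
    """Least-squares fit y = a*x + b; returns (a, b)."""
    n = float(len(xs))
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Hypothetical (utilization %, watts) samples from a "generator" run.
util  = [0, 25, 50, 75, 100]
watts = [80, 95, 110, 125, 140]

a, b = fit_linear(util, watts)
print("predicted power at 60%% load: %.1f W" % (a * 60 + b))
```

Because the generator images cover the full 0--100\% range, a query like the 60\% prediction above interpolates rather than extrapolates, which is exactly the accuracy benefit described above.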

\begin{figure}
  \centering
  \psfig{file=images/profile_creation_fc.png, height=3.75in, width=3.5in,}
  \caption{Profile generation process}
  \label{profile_creation_fc}
\end{figure}

\subsection{SigLM}

SigLM is an important part of our framework, and tighter integration with it
would improve performance and yield more accurate VM placement.

If we are able to successfully integrate Xen's live migration functionality
into our system, one important side effect will be the possibility of
utilizing SigLM's periodic recommendation algorithm.  Currently, we wait until
we discover a VM or host failure before asking SigLM to make a placement
decision, and when doing so we restrict the target VMs to only those that need
migrating.  If we are able to migrate VMs while they continue to run (live
migration), we could provide SigLM with the full list of all running VMs, and
let its periodic recommendation system consolidate our cluster without hurting
performance. 

The cluster manager needs tighter integration with the SigLM package to improve
performance. SigLM is currently queried by writing text files, calling SigLM
using those files as input, parsing output files and finally making a decision.
This could be streamlined through the use of interprocess communication, such
as sockets or pipes.
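A socket-based exchange would collapse the write-file/invoke/parse cycle into a single request/reply round trip. The sketch below is purely illustrative: the one-line query/reply protocol and the stand-in server are assumptions, and a real integration would match whatever interface SigLM actually exposes:

```python
# Sketch of replacing the file-based SigLM exchange with one socket
# round trip. The query/reply format and the fake server are
# hypothetical placeholders for SigLM's real interface.

import socket, threading

def fake_siglm_server(sock):
    """Stand-in for a SigLM daemon: answers one placement query."""
    conn, _ = sock.accept()
    conn.recv(1024)                      # read the query
    conn.sendall(b"vm2=node1\n")         # recommended placement
    conn.close()

server = socket.socket()
server.bind(("127.0.0.1", 0))            # ephemeral loopback port
server.listen(1)
threading.Thread(target=fake_siglm_server, args=(server,)).start()

# CM side: one query/reply instead of writing and parsing text files.
client = socket.create_connection(server.getsockname())
client.sendall(b"place vm2\n")
reply = client.recv(1024).decode().strip()
client.close()
server.close()
print(reply)
```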

Finally, the SigLM package includes daemons that run on each physical machine,
which log the resource time-series needed for its decision-making routines.  We
are currently able to deploy these daemons as part of the node server load process, but
this procedure could be significantly streamlined.  In addition, since SigLM is
written in Java, pre-compiled JAR files could be distributed and run, instead
of the current process of rebuilding.

