\section{Introduction}
%
Cloud computing is a novel way of delivering services to customers. With cloud
computing, the value of software and hardware products is delivered
on demand in the form of services over the network. Providing these services is
made profitable by the economies of scale in modern data centers. However, the
power consumed by these large data centers accounts for an alarming 2\% of
the total energy consumed by the United States, and it is predicted to double
every five years. In addition, because of the high density of systems within
data centers, power management concerns are tightly intertwined with the issues
of temperature safety, cooling costs, operational costs, and ultimately system
reliability. With power being such a critical resource, it has become one of
the major research areas in data center management, and thus a core focus of
cloud computing research.

In recent years, many technical publications have centered on energy-efficient
distributed systems. Power-aware distributed systems research to date has
focused either on CPU throttling (dynamic voltage scaling and ACPI support)
\cite{kim_usenix09, verma_isc08, nathuji_icac07} or on consolidation of
workloads \cite{choi_mascots08, grit_vtdc06, hermenier_xhpc06, verma_ibm09, verma_isc08}.

System failure is another major concern in distributed systems, and all
distributed systems must be able to handle failures. In the realm of fault
tolerance, most current work takes a proactive approach, using techniques
such as live migration \cite{nagrajan_ics08} and replication
\cite{hans_IEEE07}. Present reactive fault-tolerant methods
\cite{vallee_hapcw06} likewise fall short when it comes to energy consciousness in particular.

With these motivations in mind, we have laid the groundwork for an energy-aware
checkpoint/restart framework built on the Xen hypervisor. Our system implements
periodic and reactive checkpoint/restart using the NILFS log-structured filesystem for
disk image management.  We use SigLM \cite{gong_iwqos09} for placement decisions during
migrations, maximizing cluster utilization and consolidating workloads.  In
addition, we have begun development of a power profile generation toolkit,
which will allow us to make energy-aware placement decisions.

\subsection{Checkpoint / restart}
One of the major design challenges of our system was the ability to extract a
consistent view of a running virtual machine. This encompasses creating a
checkpoint of the VM's memory and processor state as well as a corresponding
snapshot of the current state of the disk associated with the VM.

The native Xen implementation offers support for checkpointing memory and
processor state, but ignores the disk completely. Live checkpointing,
where the VM continues to run for the entire duration of the checkpoint
operation, is not supported either. Keeping these limitations in mind, we
leveraged the existing checkpointing support of Xen \cite{barham_sosp03} and designed a work-around
that uses NILFS disk snapshots \cite{konishi_sigops09}. Some previous work on disk
snapshots used a stackable UnionFS \cite{vallee_hapcw06}, while other work
duplicated and tagged all disk writes to a backup disk \cite{cully_nsdi08}.
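To make the coordination between the two mechanisms concrete, the sketch below shows one plausible command sequence for a coordinated memory-plus-disk checkpoint, expressed as a small Python helper. This is an illustrative sketch only: the domain name, device path, and state-file path are hypothetical, and the real framework drives these steps through its own control logic rather than shelling out verbatim. The commands themselves (\texttt{xm save}/\texttt{xm restore} from the Xen toolstack, \texttt{mkcp -s} and snapshot mounts from nilfs-utils) are standard.

```python
def checkpoint_commands(domain, device, state_file):
    """Ordered shell commands for one coordinated checkpoint: save the
    VM's memory/processor state with Xen, then pin the matching on-disk
    state as a NILFS snapshot while the VM is quiescent.

    Hypothetical sketch; names and paths are illustrative.
    """
    return [
        # 1. Suspend the VM and dump memory + processor state to a file.
        ["xm", "save", domain, state_file],
        # 2. Create a NILFS checkpoint marked as a snapshot, so the
        #    filesystem state at this instant is never garbage-collected.
        ["mkcp", "-s", device],
        # 3. Resume the VM from the saved state.
        ["xm", "restore", state_file],
    ]

def restore_commands(device, snapshot_cno, mountpoint, state_file):
    """Commands to roll back: mount the NILFS snapshot read-only by its
    checkpoint number, then restore the VM from the matching state file."""
    return [
        ["mount", "-t", "nilfs2", "-r", "-o", f"cp={snapshot_cno}",
         device, mountpoint],
        ["xm", "restore", state_file],
    ]

for cmd in checkpoint_commands("vm1", "/dev/sdb1", "/var/lib/xen/vm1.chk"):
    print(" ".join(cmd))
```

Taking the disk snapshot between \texttt{save} and \texttt{restore} is what guarantees consistency: the disk is quiescent for that window, so the snapshot matches the saved memory image exactly.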

We chose NILFS over other solutions primarily because NILFS offers \cite{konishi_sigops09}:

\begin{enumerate}
  \item Read/write performance benefits
  \item Low snapshot creation overhead
  \item Lightweight snapshots
  \item Ability to index and mount different snapshots 
\end{enumerate}

\subsection{Power profiles}
%
Our system uses SigLM \cite{gong_iwqos09} for placement decisions: when a
physical host fails, we ask SigLM for the best location to place its now-homeless VMs.  In our
experience, SigLM typically makes recommendations which result in
consolidation, packing more VMs on fewer physical hosts.  This in itself is
beneficial to reduce energy consumption, and consolidation is a common approach
in energy-aware migration schemes.  However, lower energy
consumption is only a side effect of this approach, and cannot be estimated directly.

To remedy this, and to provide direct estimation of energy for a given physical
host and set of virtual machines, we have begun development of an automated
framework for generation of power profiles.  A power profile for a physical
machine is an equation or set of equations which provides a mapping of resource
consumption (CPU, memory, disk I/O, network I/O, etc.) to power consumed.  This
profile is unique to each physical machine, as differing architectures,
chipsets, and components yield different power consumption.  Even two
machines with the same model number and components may exhibit different
consumption characteristics, due to differing thermal behavior from dust
buildup, wear and tear on the cooling infrastructure, partial component failure, etc.
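One illustrative form such a profile might take (hypothetical; the actual functional form is whatever the regression step selects, and may include nonlinear terms) is a linear model over the utilization vector:
%
\begin{equation}
P_{\mathrm{est}} = P_{\mathrm{idle}}
  + \beta_{\mathrm{cpu}}\, u_{\mathrm{cpu}}
  + \beta_{\mathrm{mem}}\, u_{\mathrm{mem}}
  + \beta_{\mathrm{disk}}\, u_{\mathrm{disk}}
  + \beta_{\mathrm{net}}\, u_{\mathrm{net}},
\end{equation}
%
where each $u$ is a normalized utilization in $[0,1]$ and the $\beta$ coefficients, along with the idle draw $P_{\mathrm{idle}}$, are fitted per machine.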

Creation of these power profiles involves collecting power data under a range
of resource utilization, and performing regression in an attempt to accurately
map any given utilization vector to a power value.  Regression in general is
much more accurate when using interpolation than when using extrapolation, so
having power data for the full range of values of a given resource is crucial.
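As a concrete sketch of this fitting step, an ordinary least-squares fit over utilization/power samples can be written in a few lines of NumPy. The data here is synthetic and the function names are our own illustration; the real toolkit's regression engine may differ.

```python
import numpy as np

def fit_power_profile(utilization, power):
    """Fit a linear power profile P ~ beta0 + beta . u via least squares.

    utilization: (n_samples, n_resources) array of utilization fractions.
    power:       (n_samples,) array of measured power draw in watts.
    Returns the fitted coefficients, intercept (idle power) first.
    """
    n = utilization.shape[0]
    # Prepend a column of ones so the intercept is learned alongside betas.
    X = np.hstack([np.ones((n, 1)), utilization])
    beta, *_ = np.linalg.lstsq(X, power, rcond=None)
    return beta

def predict_power(beta, u):
    """Estimate power for a single utilization vector u."""
    return beta[0] + np.dot(beta[1:], u)

# Synthetic ground truth: 60 W idle, +90 W at full CPU, +25 W at full disk.
rng = np.random.default_rng(0)
u = rng.uniform(0.0, 1.0, size=(200, 2))   # columns: cpu, disk
p = 60.0 + 90.0 * u[:, 0] + 25.0 * u[:, 1]
beta = fit_power_profile(u, p)
print(np.round(beta, 1))                   # recovers [60. 90. 25.]
```

Because the samples here span the full $[0,1]$ range of each resource, every later prediction is an interpolation, which is exactly the property the generation images are designed to guarantee.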

We have laid the groundwork for this approach by creating ``generation'' images
for CPU and disk, which in our experience were by far the two largest consumers
of energy in workstation/desktop machines.  These generation images exercise the
full range of utilization of their specific resource, in several differing
patterns, in an attempt to realistically mimic all possible utilization
patterns in as short a time as possible.
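The sketch below illustrates one way a CPU generation image could be structured (hypothetical; our actual images may use different patterns and timing): a schedule of target utilization levels covering the full 0--100\% range in several orders, driven by a simple busy/sleep duty-cycle loop.

```python
import itertools
import random
import time

def build_schedule(step=0.1):
    """Target-utilization schedule covering the full 0..1 range in
    several patterns -- ascending ramp, descending ramp, and a shuffled
    pass -- so later regression interpolates instead of extrapolating."""
    levels = [round(i * step, 2) for i in range(int(1 / step) + 1)]
    shuffled = levels[:]
    random.Random(42).shuffle(shuffled)
    return list(itertools.chain(levels, reversed(levels), shuffled))

def burn(target, period=0.1, duration=1.0):
    """Hold roughly `target` CPU utilization for `duration` seconds by
    busy-looping for target*period, then sleeping for the remainder."""
    end = time.monotonic() + duration
    while time.monotonic() < end:
        busy_until = time.monotonic() + target * period
        while time.monotonic() < busy_until:
            pass                      # busy-wait: consumes CPU
        time.sleep((1.0 - target) * period)

schedule = build_schedule()
print(min(schedule), max(schedule), len(schedule))   # 0.0 1.0 33
```

A disk generation image would follow the same structure, substituting I/O bursts of varying queue depth and block size for the busy loop.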

The resource and power logs created by operation of the ``generation'' images
can then be fed into a regression engine, and ultimately used to interpret SigLM
output in a manner that considers power explicitly.  This process can be seen in
detail in Figures \ref{migration_decision_fc} and \ref{profile_creation_fc}.
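As a sketch of how fitted profiles could reweight SigLM's candidates (hypothetical host data and function names; the full decision process is the one shown in the figures), the placement step reduces to choosing the candidate whose profile predicts the smallest marginal increase in power draw:

```python
def estimated_power_increase(profile, current_u, vm_demand):
    """Predicted additional watts if a VM with `vm_demand` utilization
    is placed on a host given its (idle_watts, per-resource betas) profile."""
    idle, betas = profile
    before = idle + sum(b * u for b, u in zip(betas, current_u))
    after = idle + sum(b * min(u + d, 1.0)
                       for b, u, d in zip(betas, current_u, vm_demand))
    return after - before

def pick_host(candidates, vm_demand):
    """Among SigLM's candidate hosts, choose the one whose power profile
    predicts the smallest increase in total draw."""
    return min(candidates,
               key=lambda h: estimated_power_increase(
                   h["profile"], h["util"], vm_demand))

# Hypothetical candidates: (idle watts, [beta_cpu, beta_disk]) per host.
candidates = [
    {"name": "hostA", "profile": (60.0, [90.0, 25.0]), "util": [0.2, 0.1]},
    {"name": "hostB", "profile": (45.0, [140.0, 30.0]), "util": [0.1, 0.0]},
]
best = pick_host(candidates, vm_demand=[0.3, 0.2])
print(best["name"])   # hostA: +32 W beats hostB's +48 W
```

Note that idle power drops out of the marginal comparison when every candidate stays powered on; it matters only when a placement would let a host be powered off entirely.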
