\section{Checkpoint/Restart Framework}

\subsection{Architectural overview}

Our checkpoint/restart framework is composed of three primary components:

\begin{enumerate}
  \item Cluster manager
  \item Node server
  \item Backend store (NAS)
\end{enumerate}

\begin{figure*}
  \centering
  \psfig{file=images/overall_architecture_diagram.png, height=3.75in, width=5in,}
  \caption{System Overview}
  \label{overall_architecture_diagram}
\end{figure*}

\subsubsection{Cluster Manager}

The cluster manager (CM) is the centralized management module that is
responsible for:

\begin{itemize}
  \item enumerating virtual machine / physical machine configurations
  \item requesting snapshot and migration operations
  \item detecting physical and virtual machine failures
  \item handling timing information 
\end{itemize}

The CM initiates snapshot and migration operations by requesting
them from the individual node servers, acting as an XML-RPC client.  The
node servers in turn perform the operations and return state objects
representing their new, updated status.
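This request path can be sketched in Python using the standard-library XML-RPC
bindings; the port number and function name here are illustrative assumptions,
not taken verbatim from our implementation:

```python
# Sketch of the CM acting as an XML-RPC client toward a node server.
# NODE_PORT and the remote function name "TakeSnapshot" are assumed.
import xmlrpc.client

NODE_PORT = 8000  # assumed port on which node server daemons listen

def node_url(host, port=NODE_PORT):
    """Build the XML-RPC endpoint URL for a node server."""
    return "http://%s:%d" % (host, port)

def request_snapshot(host, vm_name):
    """Ask the node server on `host` to snapshot `vm_name`; the node
    server replies with a state object describing its updated status."""
    proxy = xmlrpc.client.ServerProxy(node_url(host))
    return proxy.TakeSnapshot(vm_name)  # blocks until the node replies
```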

The CM periodically re-enumerates all physical hosts, to detect changes such as
added/reconfigured virtual machines.  For each virtual machine, the CM keeps a
state object which contains all XenStore values needed for management and
migration, such as:

\begin{itemize}
  \item disk file and location
  \item all existing snapshots
  \item network settings
  \item kernel and ramdisk
\end{itemize}
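A plausible shape for this per-VM state object is sketched below; the field
names mirror the XenStore values listed above, but are illustrative rather
than the exact layout used in our implementation:

```python
# Hypothetical per-VM state object kept by the cluster manager.
# Field names are assumptions modeled on the XenStore values above.
from dataclasses import dataclass, field

@dataclass
class VMState:
    name: str                # domain name as reported by XenStore
    disk_path: str           # disk image file and its location
    snapshots: list = field(default_factory=list)  # existing snapshot ids
    network: dict = field(default_factory=dict)    # vif/bridge settings
    kernel: str = ""         # kernel image path
    ramdisk: str = ""        # initrd path
```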

In support of these roles, the CM is the single entity in our system charged
with maintaining the current state of the cluster.  This allows it to make
accurate decisions about how to react, and lets node servers act as simple
slaves that maintain no state information beyond their XenStore instance.

Migration targets are selected by a call to SigLM, discussed below. 
The full decision process used in selecting migration targets can be seen in
Figure \ref{migration_decision_fc}.

\subsubsection{Node Server}

Node servers are daemons which run on physical hosts, acting as a conduit for
information to flow from their local XenStore instance to the cluster manager,
and as workers to execute commands issued by the cluster manager.

Our original design strategy was to use shell scripts via SSH to accomplish node
server tasks, but the XML-RPC framework offered a more secure and reliable alternative.
However, we maintain the same general approach, where node servers are simple
slaves with no knowledge of state information (outside their XenStore instance).

Node servers export the functions listed in Figure \ref{node_server_function_table} 
for use by the CM, which enable the manipulation of XenStore and the VMs
remotely.

\begin{figure}
    \begin{tabular}{ | l | l | }
  	\hline
  	{\bf Function Name} & {\bf Description} \\ \hline
  	EnumerateVMs & Enumerates virtual machines currently \\ & running on that node via a dump \\
  	& of XenStore \\ \hline 
  	TakeSnapshot & Takes a snapshot of the given VM and \\ 
  	& places it on the NAS \\ \hline 
  	Migrate & Migrates the given VM to this node from \\
  	& its most recent snapshot files \\ \hline 
  	SetupSiglm & Performs setup operations needed by SigLM \\ \hline
    \end{tabular}
  \caption{XML-RPC functions exported by the Node Server}
  \label{node_server_function_table}
\end{figure}
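The exported interface can be sketched with Python's built-in XML-RPC server;
the function bodies below are stubs and the port is an illustrative assumption
(the remaining functions from the table would be registered the same way):

```python
# Sketch of a node server exporting functions under the names the CM
# expects. Bodies are stubs; the real versions would consult XenStore.
from xmlrpc.server import SimpleXMLRPCServer

def enumerate_vms():
    """Would dump the local XenStore instance; stubbed here."""
    return {}

def take_snapshot(vm_name):
    """Would run the Xen save + NILFS snapshot steps; stubbed here."""
    return {"vm": vm_name, "snapshot": None}

def make_server(host="0.0.0.0", port=8000):
    srv = SimpleXMLRPCServer((host, port), allow_none=True,
                             logRequests=False)
    # Register under the names listed in the function table.
    srv.register_function(enumerate_vms, "EnumerateVMs")
    srv.register_function(take_snapshot, "TakeSnapshot")
    return srv

# make_server().serve_forever()  # run loop on each physical host
```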

\subsubsection{Backend Store (NAS)}

The backend store, which in our experimental setup is a single NAS, provides
common storage for image files and NILFS mount points.  It allows us to avoid
costly transfer of image files upon failure, and serves as a centralized
repository for snapshot files and associated logs.  Figure
\ref{nas_architecture_diagram} shows the backend store in relation to the
physical hosts, each of which runs a node server.  The backend store is mounted
at the same path in every node's filesystem, allowing file I/O operations to be
performed as if the backend store were local to the node.

\begin{figure}
  \centering
  \psfig{file=images/nas_architecture_diagram.png, height=4in, width=3.5in,}
  \caption{Role of the NAS}
  \label{nas_architecture_diagram}
\end{figure}

\subsection{Snapshot/restart}

An important feature in our system design is the ability to extract information 
about the current state of the virtual machines running on a node. This
operation creates a checkpoint of each virtual machine which, in case of a
failure, can be used to roll back the system to a consistent state. The Xen
Hypervisor provides mechanisms to save the current state of the VM into
checkpoints. The following steps are involved in the checkpoint process \cite{vallee_hapcw06}:

\begin{enumerate}
  \item Suspend VM
  \item Copy memory and system information (page tables, registers)
  \item Flush I/O
\end{enumerate}

Since the VM is paused before the information is copied, the checkpoint data
saved is considered to be in a consistent state.
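Driving this save mechanism from our framework amounts to invoking Xen's
\texttt{xm save} command; the sketch below only constructs the command line,
and the checkpoint directory on the NAS is an illustrative assumption:

```python
# Sketch of building the `xm save` invocation, which suspends the
# domain, copies its memory and system state to a file, and flushes
# outstanding I/O. CHECKPOINT_DIR is an assumed NAS mount point.
import os

CHECKPOINT_DIR = "/mnt/nas/checkpoints"  # assumed path

def save_command(domain, checkpoint_dir=CHECKPOINT_DIR):
    """Command line that checkpoints `domain` into checkpoint_dir."""
    path = os.path.join(checkpoint_dir, domain + ".chk")
    return ["xm", "save", domain, path]
```

The returned list would then be handed to a process launcher such as
\texttt{subprocess.run}.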

Xen also provides support to restore a VM from a checkpoint. The restart
operation creates a new domain on the host with the saved checkpoint
information. However, the restarted VM assumes that the disk contents are
preserved and consistent with the restored page tables. Hence, to obtain a
consistent view of the entire system, it is imperative to simultaneously
maintain a snapshot of the disk as well.

\begin{figure}
  \centering
  \psfig{file=images/timing_snapshot.png, height=2.65in, width=3.5in,}
  \caption{Snapshot timing}
  \label{timing_snapshot}
\end{figure}

Figure \ref{timing_snapshot} illustrates the checkpoint/restart algorithm.
Suppose the checkpoint operation starts at time $t$. The Xen save and NILFS
snapshot operations start simultaneously, after which the VM goes down briefly.
The restart operation begins immediately after the save completes, so as to
minimize system downtime. The Xen save file is first stored locally and then
transferred to the backend store.
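The overlap between the two operations can be sketched with a worker thread;
in this illustration the three callables stand in for the real steps (shelling
out to \texttt{xm save}, the NILFS checkpoint tools, and \texttt{xm restore}),
which are assumptions about the concrete commands rather than our exact code:

```python
# Sketch of the snapshot timing: the NILFS snapshot runs concurrently
# with the Xen save, and the restore starts as soon as the save ends.
import threading

def checkpoint_cycle(xen_save, nilfs_snapshot, xen_restore):
    """Run `nilfs_snapshot` in parallel with `xen_save`, then restore
    the VM immediately to minimize downtime."""
    snap = threading.Thread(target=nilfs_snapshot)
    snap.start()
    xen_save()      # VM is down from here ...
    xen_restore()   # ... until the restore completes
    snap.join()     # the disk snapshot may finish in the background
```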

\subsection{Migrate operation}

Migration is a key feature of our reactive fault tolerance mechanism. When a
node fails, our system relocates the affected VM(s) to a new host. The
checkpoint mechanism ensures that, at regular intervals, consistent disk and VM
snapshots are recorded and stored in the backend store, periodically preserving
the work done by each VM. We leverage this during migration by restoring the
failed VM from its checkpoint rather than restarting it from scratch.

In the event of a node failure, SigLM is invoked to make a placement decision
for the resulting failed VM(s). Once the target host is derived from the SigLM
output, our system then initiates the migrate operation for each VM previously
executing on the failed host.
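The reaction loop can be sketched as follows; the two callables are
illustrative stand-ins for the SigLM placement routine and our migrate
operation, not SigLM's actual API:

```python
# Sketch of the failure reaction: ask SigLM for a placement, then
# migrate each VM from the failed host to its assigned target.
def react_to_failure(failed_vms, siglm_place, migrate):
    """`siglm_place` maps a list of VM names to {vm: target_host};
    `migrate` restores one VM on the chosen host."""
    placement = siglm_place(failed_vms)
    for vm in failed_vms:
        migrate(vm, placement[vm])
```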

\begin{figure}
  \centering
  \psfig{file=images/timing_migrate.png, height=2.65in, width=3.5in,}
  \caption{Migration timing}
  \label{timing_migrate}
\end{figure}

Figure \ref{timing_migrate} illustrates the migration timing in greater
detail. When the operation is invoked, the VM checkpoint is transferred from the
backend store to the local disk. This is done to facilitate a faster restore
operation.

Along with the checkpoint copy operation, the corresponding disk snapshot
is mounted at a temporary location. This gives us a view of the filesystem
that is consistent with the information stored in the VM checkpoint. A major
limitation of NILFS is that snapshots can only be mounted read-only.
Therefore, to obtain read-write access, the entire contents of the mounted
snapshot are copied to a new location (also in NILFS). This is a very
expensive operation and a major bottleneck in our system. NILFS is an
extremely active project, and a patch to remove this limitation is in the
pipeline. We hope to incorporate these changes in the future, which would
drastically reduce the time taken by the snapshot setup operation.
This issue is discussed in more detail in the Future Work section.
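The mount-then-copy workaround described above can be sketched as command
construction; the device, checkpoint number, and destination paths are
illustrative, while the \texttt{ro,cp=} mount option comes from the NILFS2
documentation:

```python
# Sketch of mounting a read-only NILFS snapshot and copying it to a
# writable location. Paths in the usage below are hypothetical.
def mount_snapshot_cmd(device, cno, mountpoint):
    """NILFS snapshots are read-only, so mount with ro,cp=<cno>."""
    return ["mount", "-t", "nilfs2", "-o", "ro,cp=%d" % cno,
            device, mountpoint]

def copy_for_rw_cmd(snapshot_mount, dest_dir):
    """Full recursive copy into a writable directory; this is the
    expensive step identified as our main bottleneck."""
    return ["cp", "-a", snapshot_mount + "/.", dest_dir]
```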

Once the contents of the snapshot are copied to the new location, the VM is
restored from the local checkpoint, which completes the migrate operation.

\subsection{SigLM}

The SigLM framework \cite{gong_iwqos09} is a signature-driven load management
system which can perform periodic and on-demand placement decisions for a cluster of physical
machines and a set of virtual machines or jobs.  We use SigLM to make
performance-educated decisions about VM placement when our framework discovers
that a virtual machine has crashed, or an entire physical machine has gone
down.  SigLM's on-demand placement routine is especially well-suited to our
purposes, because it takes a set of target VMs and PMs and determines the best
allocation, using its own resource monitoring framework and VM signatures. 
This allows our framework to focus on the mechanics of snapshotting and
migration operations, while treating the actual placement decision as a black
box.

The overall failure reaction process, including SigLM input, is detailed in
Figure \ref{migration_decision_fc}.

\begin{figure}
  \centering
  \psfig{file=images/migration_decision_fc.png, height=3.4in, width=3.5in,}
  \caption{Migration decision process}
  \label{migration_decision_fc}
\end{figure}

As stated in the introduction, SigLM itself benefits overall energy
consumption through its tendency to consolidate workloads.  However, we would
like to account for energy explicitly, as shown in step 4 of Figure
\ref{migration_decision_fc} (``Alter SigLM output using power profiles'').
The work needed to complete this is discussed in the Future Work section
below.
