\section{Results}

\subsection{Power Profiling}
Experiments were conducted with RUBiS (the Rice University Bidding System) and
Hadoop workloads; of the two, the Hadoop workloads are the more disk-I/O
intensive. The goal of the profiling was to find the correlation between
system metrics and power, so resource profiles and power profiles were
collected in parallel for every experiment. RUBiS is a multi-tier web service
application whose distributed components are instantiated in separate virtual
machines. The virtual machines hosting the web server and the database run on
a single physical machine, while the workload generator runs on a second
machine. The experiment below was conducted with 1500 worker threads over a
span of one hour. The CPU and power metrics were plotted in Matlab to obtain
Figures \ref{cpu_readings} and \ref{power_readings}.

\begin{figure}[h]
  \centering
  \psfig{file=images/CPU.png, height=3in, width=3.5in,}
  \caption{CPU readings}
  \label{cpu_readings}
\end{figure}

\begin{figure}[h]
  \centering
  \psfig{file=images/power.png, height=3in, width=3.5in,}
  \caption{Power readings}
  \label{power_readings}  
\end{figure}

The trend in these graphs shows that for the RUBiS workload, which performs
little disk I/O, CPU utilization is almost perfectly correlated with power.
Although network usage was among the metrics collected, and was high during
this run, it had little effect on the power drawn.
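The correlation claim above can be checked numerically. The sketch below
computes the Pearson correlation coefficient between a CPU-utilization series
and a power series; the sample values are purely illustrative, not the
measured data behind Figures \ref{cpu_readings} and \ref{power_readings}.

```python
# Minimal sketch of the CPU-vs-power correlation check.
# The sample values below are hypothetical, not measured data.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

cpu_pct = [12, 35, 60, 58, 80, 75, 40, 20]   # hypothetical CPU utilization (%)
power_w = [65, 72, 81, 80, 90, 87, 74, 68]   # hypothetical power samples (W)

# A coefficient near 1.0 indicates the near-complete correlation
# observed for the low-disk-I/O RUBiS workload.
print(round(pearson(cpu_pct, power_w), 3))
```

In our measurements this coefficient is what "near-complete correlation"
refers to: CPU tracks power closely, while the network metric does not.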

\subsection{Frequency of Snapshotting vs Power}

Snapshots are taken frequently during normal operation. This experiment was
conducted to verify that the snapshot frequency does not increase power
consumption in a significant way. It used two physical hosts: a cluster
manager and a node server running a single VM with a Hadoop workload. The
snapshot frequency was varied and the node server's total power consumption
measured, as shown in Figure \ref{freq_power_table}. The interval between
snapshots was initially set to 10 minutes and later reduced to 3 minutes;
since a single checkpoint takes 3 minutes to complete, this is the shortest
feasible interval and therefore the highest snapshot frequency we could test.

\begin{figure}
  \begin{tabular}{|l|l|}
  \hline
  {\bf Snapshot Frequency} & {\bf Avg Power Consumption} \\ \hline
  10 mins & 78.8 W \\ \hline
  3 mins & 77.1 W \\ \hline
  \end{tabular}
  \caption{Avg power consumption at various snapshot frequencies}
  \label{freq_power_table}
\end{figure}
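The difference between the two rows of Figure \ref{freq_power_table} is small;
the short computation below, using the table's values, quantifies it.

```python
# Relative change in average power from Figure freq_power_table when the
# snapshot interval is reduced from 10 minutes to 3 minutes.
p_10min = 78.8  # average power at a 10-minute snapshot interval (W)
p_3min = 77.1   # average power at a 3-minute snapshot interval (W)

delta_pct = (p_10min - p_3min) / p_10min * 100
print(f"{delta_pct:.1f}% change")  # prints "2.2% change"
```

A difference of roughly 2% is within normal measurement variation, which
supports the conclusion that snapshot frequency has no significant effect on
power consumption.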

\begin{figure}
  \centering
  \psfig{file=images/ckpt.png, height=2.5in, width=3in,}
  \caption{Power readings during snapshots}
  \label{ckpt_power}
\end{figure}

From Figure \ref{ckpt_power} it is evident that checkpointing is not a costly
operation and imposes no significant additional overhead on the system.
Figure \ref{checkpoint_time_table} reports the average time taken per
checkpoint operation; this experiment was conducted on four physical
machines, each running one VM.

\begin{figure}
  \begin{tabular}{|l|l|}
  \hline
  {\bf Host Machine} & {\bf Avg. checkpoint/restore time taken} \\ \hline
  nanjing & 12 s \\ \hline
  cairo & 10 s \\ \hline
  palm04 & 28 s \\ \hline
  palm05 & 12 s \\ \hline
  \end{tabular}
  \caption{Avg time taken to complete the checkpoint operation}
  \label{checkpoint_time_table}
\end{figure}
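Summarizing the per-host times in Figure \ref{checkpoint_time_table} makes the
configuration dependence concrete: one host (palm04) takes nearly three times
as long as the others.

```python
# Summary of the average checkpoint/restore times (seconds) per host,
# taken from Figure checkpoint_time_table.
times = {"nanjing": 12, "cairo": 10, "palm04": 28, "palm05": 12}

mean_s = sum(times.values()) / len(times)
slowest = max(times, key=times.get)
print(f"mean = {mean_s:.1f} s, slowest host = {slowest}")
# prints "mean = 15.5 s, slowest host = palm04"
```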

From Figure \ref{checkpoint_time_table}, it is clear that the time taken to
perform the checkpoint/restore operation depends heavily on the configuration
of the host machine. However, we believe the resulting downtime is negligible
and does not noticeably degrade the overall performance of the VM.

Our experiments with migration were hampered by a lack of capable hardware,
and the results were too inconsistent to report a meaningful average
migration time. One aspect of migration was nonetheless evident: the
bottleneck lies in the snapshot copy operation. We hope to address this
bottleneck in future work.
