\documentclass[10pt, conference, compsocconf]{IEEEtran}
\usepackage{listings}
\usepackage{caption}
\usepackage{xcolor}
\usepackage{textcomp}
\usepackage{cite}
\usepackage[pdftex]{graphicx}
\DeclareGraphicsExtensions{.pdf,.jpeg,.png}

%\usepackage[cmex10]{amsmath}
%\interdisplaylinepenalty=2500
%\usepackage{algorithmic}
%\usepackage{array}
%\usepackage{mdwmath}
%\usepackage{mdwtab}
%\usepackage{eqparbox}
%\usepackage[tight,footnotesize]{subfigure}
%\usepackage[caption=false]{caption}
%\usepackage[font=footnotesize]{subfig}
%\usepackage[caption=false,font=footnotesize]{subfig}
%\usepackage{fixltx2e}
%\usepackage{stfloats}

\usepackage{url}
\hyphenation{op-tical net-works semi-conduc-tor}

\lstset{
basicstyle = \ttfamily\scriptsize\color{black},%\bfseries
keywordstyle = \color{brown},%\bfseries
keywordstyle = [2]\ttfamily\color{black},
stringstyle = \color{blue},
captionpos=b,
frame=single,
numbers=left,
numberstyle=\tiny\color{gray},
stepnumber=2,
numbersep=5pt,
showstringspaces = false,
tabsize=2
} 

\newif\ifdraft
% comment out the next line to turn off comments
\drafttrue

\ifdraft
  \definecolor{darkgreen}{rgb}{0,0.5,0}
  \newcommand{\woz}[1]{ {\noindent \textcolor{darkgreen} { Wozniak: #1 }}}
  % Red star denotes items that need further work or discussion
  \newcommand{\TODO}{\textcolor{red}{$\star$}}
\else
  \newcommand{\woz}[1]{}
  \newcommand{\TODO}{}
\fi

\begin{document}
%---------------------------------------------------------------------------------------
%---------------------------------------------------------------------------------------
\title{Evaluating Cloud Computing Techniques for Smart Power Grid Design \\ Using Parallel Scripting}
%---------------------------------------------------------------------------------------
%---------------------------------------------------------------------------------------
\author{\IEEEauthorblockN{Ketan Maheshwari\IEEEauthorrefmark{3},
Ken Birman\IEEEauthorrefmark{1}, 
Justin M. Wozniak\IEEEauthorrefmark{3},
Devin Van Zandt\IEEEauthorrefmark{2}}
\IEEEauthorblockA{\IEEEauthorrefmark{1}Department of Computer Science\\
Cornell University,
Ithaca, NY 14853}
\IEEEauthorblockA{\IEEEauthorrefmark{2}GE Energy Management, Schenectady, NY 12345}
\IEEEauthorblockA{\IEEEauthorrefmark{3}MCS Division, Argonne National Laboratory\\  Argonne, IL 60439}}
\maketitle
%---------------------------------------------------------------------------------------
%---------------------------------------------------------------------------------------
\begin{abstract}

Applications used to evaluate next-generation electrical power grids
(``smart grids'') are anticipated to be compute- and data-intensive. In
this work, we parallelize and improve performance of one such
application which was run sequentially prior to the use of our
cloud-based configuration. We examine multiple cloud computing
offerings, both commercial and academic, to evaluate their potential
for improving the turnaround time for application results.  Since the
target application does not fit well into existing computational
paradigms for the cloud, we employ a parallel scripting tool as a first
step toward a broader program of adapting portable, scalable
computational tools for use as enablers of the future smart grids. We
use multiple clouds as a way to reassure potential users that the risk
of cloud-vendor lock-in can be managed. This paper discusses our
methods and results. Our experience sheds light on some of the issues
facing computational scientists and engineers tasked with adapting new
paradigms and infrastructures for existing engineering design
problems.

\end{abstract}
%---------------------------------------------------------------------------------------
%---------------------------------------------------------------------------------------

\begin{IEEEkeywords}
Parallel scripting, cloud computing, smart grid
\end{IEEEkeywords}
\IEEEpeerreviewmaketitle

%---------------------------------------------------------------------------------------
%---------------------------------------------------------------------------------------
\section{Introduction}
% Total 8 pages:
% Intro 0.5
% Results
    %plots 0.5
    %desc 0.5
% About clouds 0.5
% About Swift 0.5
% About Application 0.5
% Related Work 0.5
% Evaluation 0.5
% Conclusion 0.5
% Acknowledgement 0.25
% References 0.75

With the advent of cloud computing, users from multiple application areas are
becoming interested in leveraging inexpensive, ``elastic'' computational
resources from external services. Engineers designing an autonomic electrical
power grid (``smart grid'') constitute one such user group.  The smart grid
will require major technological steps, such as the deployment of
synchrophasor-based monitoring technologies that could enable real-time
grid-state estimation on the production and delivery side of the equation, and
the widespread use of smart-meter-based technologies to optimize behavior on
the consumption side~\cite{bose, hazra, orderly}. As the size of the systems
modeled by the software and the number of sensitivities increase, improving
the computation time of the analysis engines becomes crucial.

Our overarching premise is that cloud computing may be best matched to the
computation and data management needs of the smart grid, but also that a
step-by-step process will be required to learn to carry out, in a smart-grid
environment, tasks familiar from other settings, and that, over time, a series
of increasingly difficult problems will arise. In this paper we describe our
experience in deploying one representative commercial smart grid application to
the cloud, and leveraging resources from multiple cloud allocations seamlessly with
the help of the Swift parallel scripting framework~\cite{swift-parco}. The
application is used for planning and currently has a time horizon suited
primarily to relatively long-term resource allocation questions. Our goal here
is to show that cloud resources can be exploited to gain massive speedups
without locking the solution to any specific cloud vendor. We present the following contributions:

\begin{enumerate}
  \item A seamless approach for leveraging cloud resources from multiple vendors to perform
    smart grid applications;
  \item A use case that involved parallelizing an existing smart grid application and deploying it on cloud resources;
  \item An evaluation of the resulting paradigm for portability and
        usability to novel application areas.
\end{enumerate}

Applications from many engineering and scientific fields show similar
characteristics and computational requirements, so one such deployment holds
promise for many more. Once an application is coded, the invested effort pays
off over time as the same pattern is applied to similar applications. At the
same time, we reduce the complexity of application development itself by using
parallel scripting.

Scripting has been a popular method of automation among computational
users. Parallel scripting builds on the same familiar practice, with the
advantage of providing parallelization suitably interfaced to the
underlying computational infrastructures. Adapting applications to
these models of computation and then deploying the overall solution,
however, is still a challenge. Parallel scripting has addressed this
challenge, playing a role in the deployment of many solutions on HPC
platforms~\cite{CDM_2009}. Its familiar C-like syntax and well-known semantics
make the process usable and adaptable, and its flexibility and expressiveness
far exceed those of rigid frameworks such as MapReduce.

Traditionally, such applications have been run on established computing
facilities. However, organizations have either halted acquisition
of new clusters or downsized existing clusters because of their high maintenance
costs. Clouds differ from organizational clusters: from a management point of
view, cloud resource provisioning is accounted for at a fine granularity; from
a computational point of view, compute cycles are readily available because
resources are virtualized and there is no shared scheduling. In certain
contexts, clouds thus offer a model of computing infrastructure management for
which clusters might not be suitable~\cite{jha-cloud}.

In practice, cloud allocations are granted to groups within institutions, and
a group often ends up with slices of several allocation pies. Furthermore,
allocation policies limit how many resources one can obtain simultaneously
from a single allocation. For instance, a standard Amazon EC2 allocation
allows only 20 cloud instances \emph{per allocation} in a region at a
time~\cite{amazon-ec2-limit}. Even though cloud resources are virtualized,
accessing resources across multiple clouds is not trivial: it involves
specialized setup, configuration, and administrative routines that pose
significant challenges.

%Cite data and job clustering paper from DIDC and DA-TC paper from Katz et. al. 
In our implementation, we seamlessly and securely span application runs across
multiple clouds. We use Amazon's EC2, Cornell's
RedCloud~(\textit{www.cac.cornell.edu/redcloud}), and the NSF-funded
FutureGrid~(\textit{portal.FutureGrid.org}) cloud in this study. Using Swift,
we orchestrate the application tasks so that they run on multiple clouds in
parallel while preserving the application's semantics.

%---------------------------------------------------------------------------------------
%---------------------------------------------------------------------------------------
\section{Application characterization} 

GE Energy Management's Energy Consulting group has developed the
\textit{Concorda Software Suite}, which includes the Multi Area Production
Simulation (\textit{MAPS}) and the Multi Area Reliability Simulation
(\textit{MARS}). These products are internationally known and widely
used~\cite{mars} for planning and simulating smart power grids, assessing the
economic performance of large electricity markets, and evaluating generation
reliability.

The \textit{MARS} modeling software enables the electric utility planner to
quickly and accurately assess the ability of a power system, comprising a
number of interconnected areas, to adequately satisfy the customer load
requirements. Based on a full, sequential Monte Carlo simulation
model~\cite{monte}, \textit{MARS} performs a chronological hourly simulation of
the system, comparing the hourly load demand in each area with the total
available generation in the area, which has been adjusted to account for
planned maintenance and randomly occurring forced outages. Areas with excess
capacity will provide emergency assistance to those areas that are deficient,
subject to the transfer limits between the areas.

\textit{MARS} consists of two major modules: an input data processor and the
Monte Carlo simulator. The input processor reads and checks the study
data, and arranges it into a format that allows the Monte Carlo module
to quickly and efficiently access the data as needed for the simulation. The
Monte Carlo module reads the data from the input processor and
performs the actual simulation, replicating the year until the stopping
criterion is satisfied. The execution of \textit{MARS} can be divided by
running each replication (\textit{marsMain}) separately and then merging the
generated output of all replications with \textit{marsOut} at the end.

\begin{figure}[htb]
    \begin{center}
  \includegraphics[width=7.5cm]{figures/mars-characterization}
  \caption{Characterization of the GE MARS application dataflow}
  \label{fig:mars}
  \end{center}
\end{figure}

A dataflow characterization diagram of \textit{MARS} is shown in Figure
\ref{fig:mars}. The application comprises two computational stages connected
by a dataflow. The input to the first stage consists of raw data, control
files, and a license file, amounting to 6.1~MB. The output of this stage
consists of the intermediate results of each replica; the outputs vary between
193 and 352~MB in size, 275~MB on average. For a medium-sized run, 100 such
instances are executed, followed by one merge task, totaling 101 jobs. This
could expand to between 1,000 and 10,000 runs in practice. The execution time
of each \textit{marsMain} job on a lightly loaded (load average between 0.0
and 0.5) and a heavily loaded (load average between 2.0 and 3.5) local host is
35.65 and 37.5 seconds, respectively.
However, the time varies significantly with the processor load and the compute
cycles available on the target virtualized environment. Table
\ref{tbl:exec_clouds} lists execution times averaged over 100 runs on
individual cloud instances for the three cloud infrastructures studied in this
experiment.
The \textit{marsOut} stage is a highly optimized data-merging stage, which takes
between 5 and 15 seconds on the resources used for this study. One
\textit{marsMain} job submitted from a submit-host will involve the following
steps: (1) stage in the 6.2~MB of input data from submit-host to cloud
instance; (2) execute the \textit{marsMain} job; and (3) stage out the 275~MB
of intermediate results from cloud instance to the submit-host. These steps are
performed each time the \textit{marsMain} application is invoked (100 times in
this study). The intermediate results are important to the application and are
used for analysis and archival purposes. The \textit{marsOut} stage requires a
partial subset of intermediate results, which amounts to 150~MB for a single
run. The ultimate result of \textit{marsOut} amounts to 5.6~MB. Consequently,
the \textit{marsOut} application run involves the following steps: (1) stage in
the 150~MB of input data from submit-host to cloud instance; (2) execute the
\textit{marsOut} job; and (3) stage out the 5.6~MB of results from cloud
instance to the submit-host.
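The two-stage flow described above can be summarized in a minimal sketch. The following Python model is purely illustrative: the functions are placeholders standing in for the real MARS binaries and scp-based staging, with the data sizes from the text noted in comments.

```python
# Minimal model of the MARS dataflow described above; all functions are
# placeholders for the real steps (stage_in/stage_out stand for scp
# transfers, mars_main and mars_out for the GE executables).

def stage_in(files):             # copy ~6.2 MB of inputs to the instance
    return files

def mars_main(inputs, replica):  # one Monte Carlo replication (~275 MB out)
    return {"replica": replica, "partial": f"binres-{replica}"}

def stage_out(result):           # copy intermediate results back
    return result

def mars_out(partials):          # merge partial results (~5.6 MB final)
    return [p["partial"] for p in partials]

inputs = ["raw-data", "control-files", "MARS-LIC"]
# 100 marsMain jobs followed by one merge task: 101 jobs in total.
partials = [stage_out(mars_main(stage_in(inputs), i)) for i in range(100)]
final = mars_out(partials)
```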

%---------------------------------------------------------------------------------------
%---------------------------------------------------------------------------------------

\section{Cloud Infrastructures}
In this section, we briefly describe the cloud infrastructures used in the
current work and their key properties. 

\paragraph{Amazon EC2} Amazon EC2 is a large-scale commercial cloud
infrastructure~(\textit{aws.amazon.com/ec2/}). Amazon offers compute resources
on demand from its virtualized infrastructure, which spans eight data centers
in geographical regions worldwide: three of the centers are in the United
States, two in Asia, and one each in the EU, South America, and Australia. An
institutional allocation from Amazon typically allows one to acquire 20
instances of any size per region. In addition, Amazon provides a mass storage
service called S3, which can be configured to be mounted on instances as a
local file system. For the current work, we considered the US-based regions,
primarily for proprietary reasons and secondarily for performance reasons.
Consequently, we were limited to a maximum of 60 instances from the Amazon EC2
cloud. Amazon provides a web-based console and a native command-line
implementation to create, configure, and destroy resources.

\begin{table*}[htb]
\begin{center}
\begin{tabular}{|l|l|l|l|l|}
\hline
\textbf{localhost1} & \textbf{localhost2} & \textbf{Amazon EC2}  & \textbf{Cornell RedCloud} & \textbf{FutureGrid} \\
\hline
35.65~$\pm5.01$ &49.62~$\pm7.41$ & 68.49~$\pm11.43$ & 55.21~$\pm10.41$ & 47.89~$\pm7.71$\\
\hline
\end{tabular}
\caption{Average execution time in seconds (with standard deviation) of a single \textit{marsMain} task on two local hosts and on an instance of each of the three clouds}
\label{tbl:exec_clouds}
\end{center}
\end{table*}

\paragraph{Cornell RedCloud} Cornell's Advanced Computing Center offers a
small-scale cloud computing infrastructure through its RedCloud facility. One
RedCloud allocation typically allows a maximum of 35 cloud instances drawn from
a single 96-core physical HPC cluster on a multi-Gigabit network backbone. The
resources are managed through a command-line implementation of the
Eucalyptus~\cite{euca} middleware tool.

\paragraph{NSF FutureGrid Cloud} The NSF-funded FutureGrid cloud is
administered by Indiana University. It offers a variety of resources via a
multitude of interfaces; currently, cloud resources are available via three
interfaces: Eucalyptus, Nimbus~(\textit{www.nimbusproject.org}), and
OpenStack~(\textit{www.openstack.org}). FutureGrid totals close to 5,000 CPU
cores and 220~TB of storage across more than six physical clusters. We use the
resources offered by one such cluster via the Nimbus middleware.

Neither RedCloud nor FutureGrid offers a web-based interface to manage
resources similar to the one offered by Amazon EC2.

%---------------------------------------------------------------------------------------
%---------------------------------------------------------------------------------------

\section{Parallel Scripting in Clouds}
We parallelize our application using the parallel scripting paradigm for high
performance computing. Swift has been traditionally used on clusters,
supercomputers, and computational grids. Recently, it has also gained momentum
in cloud environments. In the present work, we employ Swift to run our application
on multiple clouds in a seamless fashion. We use Swift and related technologies
to express, configure, and orchestrate the application tasks.

\paragraph{Swift script} Swift script provides an efficient and compact
C-like syntax and advanced parallel semantics to express an application's tasks
and dataflow. Parallel constructs such as \texttt{foreach} and future variables provide
for implicit parallelism. Advanced mappers and \texttt{app} definitions easily map
script variables to application data and executables, respectively.

\paragraph{Coasters} The Swift Coasters~\cite{Coasters_UCC_2011} framework
provides a service-worker architecture that interfaces with the Swift
task-dispatching framework on the inside and with a variety of computing
infrastructures on the outside. Its execution provider schedules and
coordinates application execution on the target infrastructure, and its data
provider stages data. Coaster services connect securely to worker agents on
remote nodes using ssh tunnels, thus providing crucial security for data
communications across clouds.

\paragraph{Collective Data Management} Collective data
management~(CDM)~\cite{CDM_2009} techniques improve data staging performance
when data is available on shared filesystems: CDM creates symbolic links
instead of actually moving data, thus saving staging time.
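The link-instead-of-copy idea can be illustrated with a small, self-contained sketch; the paths and file names below are illustrative, not part of the CDM implementation itself.

```python
# Sketch of the idea behind CDM: when input data already resides on a shared
# filesystem, "staging" it into a job's working directory is a symlink
# rather than a copy, so no bytes are moved.
import os
import tempfile

shared = tempfile.mkdtemp(prefix="shared-")   # stands in for the shared FS
workdir = tempfile.mkdtemp(prefix="job-")     # a job's execution directory

src = os.path.join(shared, "input.dat")
with open(src, "w") as f:
    f.write("control data")

dst = os.path.join(workdir, "input.dat")
os.symlink(src, dst)                          # zero-copy "staging"

with open(dst) as f:
    staged = f.read()                         # the job sees the same bytes
```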

\paragraph{Karajan Execution Engine} The Karajan engine~\cite{karajan}
orchestrates the defined tasks and ensures the inter-task connections dictated
by the application's dataflow semantics.

\section{Experiments: Setup and Implementation}
In this section, we describe the experiments conducted on cloud infrastructures
via a parallel scripting implementation and execution of the GE MARS
application using the Swift framework.

\paragraph{Bringing up cloud instances} Suitable ``machine images'' were
prepared in advance for each of the cloud infrastructures. This is a one-time
activity: the images can be stored in the cloud account and reused
to create instances. The application binaries and supporting libraries were
preinstalled on these images. No special software was required
for Swift, since coaster workers run standard Perl, which is installed by default.
The data is largely dynamic, so there is little practical value in placing it
on the images. A separate Swift script was used to bring up the cloud
instances on multiple clouds in parallel: parameterized commands to the cloud
middleware run concurrently to bring up a desired combination of cloud
instances.
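The pattern of issuing parameterized middleware commands in parallel can be sketched in shell. Here \texttt{boot\_instance} is a hypothetical stand-in for the middleware-specific call (e.g.\ a run-instances command), whose actual name and flags differ per cloud.

```shell
# Hypothetical sketch: issue several instance requests concurrently and
# wait for all of them. boot_instance is a placeholder for the real
# middleware command (its name and flags vary per cloud).
boot_instance() {
    echo "instance-$1 up"
}

for i in 1 2 3 4; do
    boot_instance "$i" &    # launch each request in the background
done
wait                        # block until every request has returned
```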

\paragraph{Data movement} The data from the first stage of
computation, \textit{marsMain}, is required as input to the second
stage, \textit{marsOut}. Since the first stage produces 100 sets of results,
all of them would need to be staged to the location where the second stage
executes. To avoid this expensive staging, the lightweight \textit{marsOut}
was set up to run on the submit-host.

\paragraph{Security and firewalls} Each of the cloud environments we used has
its own security policies; in all cases, connections to the outside world were
closed by default and required special configuration to open. However, port 22
for secure ssh connections was open in all cases. We therefore used an ssh
port-forwarding and tunneling strategy, which spared us the effort of
configuring firewalls on each of the instances while providing a secure data
channel.
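An illustrative form of such a tunneling command is sketched below; the host name and port numbers are placeholders, not our actual configuration. \texttt{-N} opens no remote shell, and \texttt{-R} makes a port on the instance reach a service listening on the submit-host, all over the instance's open ssh port 22.

```shell
# Placeholder host and ports: reverse-forward port 50000 on the instance to
# the coaster service on the submit-host over the encrypted ssh channel.
tunnel_cmd='ssh -N -R 50000:localhost:50000 user@cloud-instance.example.org'
echo "$tunnel_cmd"
```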

\paragraph{Distributed file system} A parallelizing environment must run
efficiently on both shared and distributed file systems in order to support
run strategies that span multiple infrastructures.

\paragraph{Network bias} In order to avoid a network affinity bias, the
experiments were conducted from a remote machine outside the
network domains of the target cloud infrastructures, in particular Cornell's
RedCloud.

\begin{lstlisting}[float=ht!,label=lst:swift,caption={A Swift script specification of GE MARS application: lines 2-7 define app calls; lines 20-23 make parallel calls to \textit{marsMain}.}]
type file;
app (file _maino,file _res[],file _binres) marsmain (...){
  mars @_mainctl stdout=@_maino stdin="/dev/null";
}
app (file _outo, file _outres[]) marsout (...){
  marsout @_outctl stdout=@_outo;
}
// list of control files 
string ctlfilelist[] = readData ("ctlfilelist.txt");

//map the items in above list to actual files
file ctl[]<array_mapper; files=ctlfilelist>;
file inp[]<filesys_mapper; location="infiles/">;
file out[]<simple_mapper; location="outs">;

string binresfilelist[] = readData ("binresfilelist.txt");
file binres[]<array_mapper; files=binresfilelist>;
// Licence file
file licence<single_file_mapper; file="MARS-LIC">;
foreach ctlfile, i in ctl {
  file res[]<ext; exec="mapper.sh", arg=i>;
  (out[i],res,binres[i]) = marsmain (ctlfile,licence,inp);
}
file outo<"outo.txt">;
file outctl<"mars-out.ctl">;
file msgerr<"result0/mars.ot09">;
string outresfilelist[] = readData("outresfilelist.txt");
file outres[]<array_mapper; files=outresfilelist>;
(outo, outres)=marsout (outctl, binres, msgerr);
\end{lstlisting}

In fewer than 30 lines of code, Swift specifies the entire application flow.
Although the real work is done by the Swift framework and the application
code, the abstraction helps users rapidly express, parallelize, and
productionize applications.

%---------------------------------------------------------------------------------------
%---------------------------------------------------------------------------------------

\section{Results}
In this section, we present the results we obtained by parallelizing the
application and deploying it on multiple clouds: Amazon EC2, Cornell's RedCloud,
and NSF-funded FutureGrid cloud. 

We first present cloud characterization results obtained by measuring the
network and data-movement properties of the clouds. We then execute the
application in increasingly sophisticated scenarios: from a single local host,
to a single cloud in serial mode, to multiple clouds in task-parallel mode. The
application submission was done from a single remote submit-host. The
application data resides on the submit-host, and the executables with supporting
libraries were preinstalled on cloud images from which cloud instances were
spawned.

Figure~\ref{fig:cloudnet} shows an asymmetric bandwidth matrix between the
cloud instances and the submit-host considered in this work. All measurements
were obtained by using the Linux ``iperf'' network performance measurement
utility. The rows are servers, and the columns are clients. Twenty separate
iperf sessions were recorded over 20 days, and the mean and standard deviation
of the bandwidths were computed. A spectrum of bandwidth values is seen both
across the clouds and between instances of the same cloud. Some measurements
exceed 1~Gbit/s, indicating that those instances are probably sliced from a
single high-speed cluster or even a single physical machine. The bandwidth
between the two Amazon EC2 regions was observed to be significantly and
unusually low compared with that of other pairs.

%\begin{table*}[htb]
%\begin{center}
%\begin{tabular}{|l|l|l|l|l|l|}
%\hline
%& \textbf{localhost} & \textbf{Amazon EC2-west} & \textbf{Amazon EC2-east} & \textbf{Cornell RedCloud} & \textbf{FutureGrid} \\
%\hline
%\textbf{localhost}  & -- & 65.16~$\pm5.53$ & 741.65~$\pm21.95$ & 252.35~$\pm88.82$ & 37.81~$\pm12.88$  \\
%\hline
%\textbf{Amazon EC2-west}  & 77.99~$\pm3.49$ & 1004~$\pm17$  & 34.87~$\pm2.95$ & 71.42~$\pm5.45$ & 5.93~$\pm15.36$ \\
%\hline
%\textbf{Amazon EC2-east}  & 675.15~$\pm115.716$ & 90.74~$\pm1.80$ & 836.8~$\pm22.42$ & 215.55~$\pm70.98$ & 198.94~$\pm158.52$ \\
%\hline
%\textbf{Cornell RedCloud} & 723.85~$\pm55.75$ & 48.34~$\pm24.69$ &442.33~$\pm177.99$ & 2096~$\pm8$& 53.92~$\pm63.67$ \\
%\hline
%\textbf{FutureGrid} & 740.5~$\pm12.67$ & 92.275~$\pm7.62$ & 424.55~$\pm109.99$ & 12.76~$\pm15.91$ & 726.05~$\pm204.83$ \\
%\hline
%\end{tabular}
%\caption{An inter-cloud network performance matrix: the measurements are
%average bandwidths averaged over 20 readings with standard deviation. The
%measurements are in Mbits/sec}
%\label{tbl:cloudnet}
%\end{center}
%\end{table*}

\begin{figure}[htb]
\begin{center}
  \includegraphics[width=\linewidth]{results/heatmap-overlay}
  \caption{Heatmap of the intercloud network performance matrix. The measurements are
average bandwidths in Mbits/sec over 20 readings, with standard deviation. Blue indicates low bandwidth; red indicates high bandwidth.}
  \label{fig:cloudnet}
\end{center}
\end{figure}

Figure~\ref{fig:datamove} shows the performance of moving files of different
sizes (1~MB to 1000~MB) to different cloud locations. Note that the plot is in
log scale. The measurements were made with the Linux \emph{scp} secure copy
utility. In the special case of Amazon S3, the storage was mounted on a
running cloud instance using the ``fuse''~\cite{fuse} software service, and
the data was written to the S3 mount point with the Linux ``dd'' utility. We
use the measurements on the local file system as the baseline and see that a
locally mounted S3 drive performs worst for the 10~MB case and second worst
for the 1000~MB case.
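The write measurement can be reproduced with a short shell sketch; the target directory below is a placeholder for the actual fuse-mounted S3 directory used in our runs.

```shell
# Write a 10 MB test file with dd, as in the S3 write measurement; point
# TARGET_DIR at the fuse-mounted S3 directory to measure that path
# (the /tmp default here is only a placeholder).
TARGET_DIR=${TARGET_DIR:-/tmp}
dd if=/dev/zero of="$TARGET_DIR/ddtest.bin" bs=1M count=10 2>/dev/null
```

Timing this command for each file size (1~MB to 1000~MB) yields the data points plotted in Figure~\ref{fig:datamove}.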

\begin{figure}[htb]
    \begin{center}
  \includegraphics[width=7.5cm]{results/s3fs_write_times.png}
  \caption{Plot showing data movement times across file systems.}
  \label{fig:datamove}
  \end{center}
\end{figure}

Figure~\ref{fig:shfs} shows the application's performance on a single host.
The host has 32 CPU cores and was set up to run at successively higher degrees
of parallelism, utilizing from 1 to 32 cores. The application was run in two
modes: simple file-based data staging, and CDM mode, wherein the input files
were symbolically linked into the execution directory for each run (saving
100~$\times$~6.2~MB of data movement over a complete application run). We see
significant performance improvement up to 8 cores; beyond that, no gain was
achieved, because a high volume of disk I/O dominates the run.

\begin{figure}[htb]
    \begin{center}
  \includegraphics[width=7.5cm]{results/sharefs_sharemem_w_ps.png}
  \caption{Performance on a single large machine (32 cores). Shown
  here is performance on an increasing number of cores in two modes of file
  movement: staging and Collective Data Management (CDM).}
  \label{fig:shfs}
  \end{center}
\end{figure}

The plots in Figure~\ref{fig:clouduptime} show the time to bring up cloud
instances after the command is invoked, reflecting the elasticity of each
cloud. We see that Amazon EC2 takes up to an order of magnitude longer than
the RedCloud and FutureGrid clouds (note that the Y axis is in log scale). By
default, FutureGrid running the Nimbus interface has no means of submitting
multiple requests in parallel; therefore, a semi-parallel method had to be
implemented, running requests in close succession in order to avoid
instance-id collisions.

\begin{figure}[htb]
    \begin{center}
  \includegraphics[width=7.5cm]{results/clouduptime.png}
  \caption{Elasticity measurement for clouds.}
  \label{fig:clouduptime}
  \end{center}
\end{figure}

Figure~\ref{fig:indiv} shows the application's performance on individual cloud
resources: sequential data staging and execution on a single instance versus
parallel execution and data staging on 10 instances. While parallel execution
clearly yields a speed advantage, significant performance variation is also
seen among the cloud infrastructures for serial execution.

\begin{figure}[htb]
    \begin{center}
  \includegraphics[width=7.5cm]{results/individual_clouds}
  \caption{Performance on individual cloud infrastructures: serial
  on single instance versus parallel on ten instances.}
  \label{fig:indiv}
  \end{center}
\end{figure}

The plot in Figure~\ref{fig:timeline} shows the timeline for the serial
execution on the FutureGrid cloud from Figure~\ref{fig:indiv}. The timeline is
plotted from an analysis of the Swift log for this run. In terms of
percentages, stage-in activity accounts for 1.08\% of the time, stage-out for
48.8\%, and execution for 50.06\%. Note that the stage-in step completes
rapidly and does not get recorded for most instances; a zoomed-in view of a
small interval shows the stage-in step with respect to the adjacent running
and stage-out steps. The timeline shows potential for parallelizing not only
application execution but also data staging. The parallel version of the
application performs a configurable number of stagings and executions
concurrently, which is especially beneficial when staging time is close to or
greater than execution time.
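This overlap can be sketched with a thread pool that keeps a configurable number of replications in flight, so one job's staging overlaps another's execution. The sketch below is a conceptual model, not Swift internals; all function names are illustrative placeholders.

```python
# Hedged sketch: run replications with bounded concurrency so that the data
# staging of one replication overlaps the execution of another, as the
# parallel Swift configuration does. All functions are placeholders.
from concurrent.futures import ThreadPoolExecutor

def stage_in(i):                 # placeholder for stage-in of inputs
    return f"inputs-{i}"

def execute(data):               # placeholder for a marsMain invocation
    return data.replace("inputs", "results")

def stage_out(result):           # placeholder for copying results back
    return result

def run_replication(i):
    return stage_out(execute(stage_in(i)))

# Up to 4 of the 10 replications are in flight at once; while one is
# executing, others can be staging in or out.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(run_replication, range(10)))
```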

\begin{figure}[htb]
    \begin{center}
  \includegraphics[width=7.5cm]{results/timeline}
  \caption{Serial execution timeline on FutureGrid showing
  intervals for application run, data stage-in and stage-out.}
  \label{fig:timeline}
  \end{center}
\end{figure}

Figure \ref{fig:multicloud} shows application performance on different
combinations of instances across multiple clouds. We notice a significant
performance improvement going from 10 to 20 instances; however, the
improvement does not scale linearly as the number of cloud instances increases
further. This behavior is caused by the significant stage-out time in the run,
which is bound by the fixed bandwidth of the single channel into the
submit-host.

\begin{figure}[htb]
    \begin{center}
  \includegraphics[width=7.5cm]{results/multiclouds.png}
  \caption{Performance on combinations of instances from multiple clouds.}
  \label{fig:multicloud}
  \end{center}
\end{figure}

%Need to move data across file system. Application can be deployed
%``out-of-band" but data is dynamic and can be moved only on-the-fly. Can use
%resources from multiple, independent domains.

%---------------------------------------------------------------------------------------
%---------------------------------------------------------------------------------------

\section{Related Work}
Our work concerns three broad research areas: cloud computing, parallel and
distributed application orchestration, and smart power grid computation. In
this section, we discuss related work from each area.

\subsection{Cloud Computing}
A large section of the community has a collective vision~\cite{buyya-cloud,
magellan2, katz-survey, fahringer} for the near and long-term future of
distributed and cloud computing comprising the following salient points:

\begin{enumerate}
    \item A wide-scale spread and adoption of cloud models of computation
    across HPC and HTC infrastructures;
    \item Economical utilization of storage space and computational power by
    adapting more and more new application areas to run in clouds.
\end{enumerate}

Workflow-oriented applications have been reported to be especially well suited
to the cloud environment~\cite{hetero-cloud, ioan-cloud}. Swift has previously
been ported and interfaced to a single cloud~\cite{swift-cloud1}; ours is the
first multi-cloud implementation.

Cloud performance issues have been studied in the
past~\cite{fahringer, cloudperformance}. Our work covers these areas, albeit
with a finer-grained view, evaluating cloud characteristics for a new
application area. With this approach, we attempt to validate the community
vision while at the same time solving a real-world problem.

\subsection{Parallel and Distributed Application Orchestration}
% === Tangential but from PC === 2 para
Interoperability among multiple distributed systems has been a hot topic in the
distributed computing community. The recent SHIWA~\cite{shiwa} project
addressed many of the challenges users face in seamlessly running precoded
workflow applications on multiple distributed computing infrastructures. The
dominant approach in SHIWA has been to wrap the workflow expression in order to
achieve interoperable workflows on top of workflows already ported to selected
infrastructures. We believe that the scripting approach to
workflow~\cite{gscript}, together with the coaster mechanism, makes
interoperability easier by providing a portable and compact representation of
an application, ready to be interfaced to an infrastructure without wrappers.

% == MapReduce ==
MapReduce~\cite{MapReduce} is a system designed to run two function combinators
in a distributed environment. Modern MapReduce distributions such as
Hadoop~(\textit{hadoop.apache.org}) come with many components, each with its
own adoption curve of learning, installation, and setup. These steps often
prove to be barriers to effective use by scientific end-users.

Swift is a Turing-complete language that can run arbitrary
compositions of applications on a distributed system, including
MapReduce-like systems. In short, Swift can do MapReduce, but
MapReduce cannot do Swift. Some attempts have been made to improve
the applicability of MapReduce to scientific applications, such as the
addition of features to support
iteration~\cite{IterativeMapReduce_2010}.  We feel, however, that the
conventional control constructs (\texttt{foreach} loops, \texttt{if}
blocks) in Swift enable a more natural, expressive language for
quickly constructing scientific workflow prototypes or adding to existing
scripts.
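As a rough analogue, sketched in Python rather than Swift, the
\texttt{foreach} pattern amounts to mapping an independent task over a
collection and then reducing the gathered results; Swift performs the
equivalent mapping implicitly and in parallel. The \texttt{simulate} function
and its inputs below are hypothetical placeholders.

```python
# A sketch of the MapReduce pattern expressed with ordinary control
# constructs, analogous to a Swift foreach loop over independent inputs.
from concurrent.futures import ThreadPoolExecutor

def simulate(case):
    # Stand-in for one independent application invocation.
    return case * case

def run_all(cases):
    # "foreach case in cases": each task is independent, so the
    # runtime is free to execute them concurrently.
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(simulate, cases))
    # Explicit reduction step over the gathered results.
    return sum(results)
```

In Swift the parallelism is implicit: the language runtime detects that loop
iterations are independent and schedules them concurrently without an explicit
executor.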

\subsection{Smart Power Grid Applications}
% === Smart Grid and Power Area === 2 para
The timely availability of processed data, supporting configuration, and
application libraries is key to high-performance computing for smart grid
applications. Many smart grid applications are inherently distributed in
nature owing to the distributed deployment of devices and buses. The work
described in~\cite{smart-grid-cloud-rusi} is the closest treatment of steering
smart grid computations into the clouds; it analyzes smart grid application
use-cases and advocates a generic cloud-based model. In this regard, our work
verifies the practical aspects of that model by evaluating various
characteristics of clouds.

%---------------------------------------------------------------------------------------
%---------------------------------------------------------------------------------------

\section{Evaluation}

In this section we present an evaluation of the cloud infrastructure
characteristics and parallel scripting paradigm in light of our experience
deploying the GE-MARS application.

\subsection{Usability}
Clouds present the familiar usage model of traditional clusters with the added
advantage of direct, super-user, scheduler-less access to virtualized
resources. This gives users much-needed control over the resources and
simplifies computing without jeopardizing system security.

We do observe disparities between the commercial and academic clouds
in terms of elasticity and performance. Network bandwidth plays a
crucial role in application performance: data movement in clouds is only as
fast as the underlying network allows. Bandwidth disparities among clouds, and
between regions of a single cloud, must be taken into account when designing
an application distribution strategy. In a mixed model such as ours,
prioritizing tasks could alleviate many of these disparities.

Swift is easy to set up. Installation is required only on the submit
host. Coasters uses the native, local file system and dynamically
installs worker agents on the target cloud instances. Swift is less
invasive to applications than systems such as Hadoop, which require close
integration with applications and a customized file system installation on
resources. However, Swift represents a relatively new paradigm of parallel
computing, which arguably poses adaptability challenges for new applications.
The concept of a highly expressive yet implicitly parallel programming
language does impose a learning curve on users accustomed to traditional
imperative scripting paradigms, and debugging in such scenarios is one of the
biggest challenges they face. However, the return on investment is expected
to be positive, as many applications in the smart grid domain
exhibit similar patterns~\cite{maheshwari-lim-etal:2013}.

\subsection{Economy}
The economics of computation in the presence of commercial-academic
collaboration are especially notable. Thanks to a universal, pay-as-you-go
model of computation, we avoid cluster maintenance and cross-institutional
access issues. With the ability to run the application on multiple clouds, we
can move to another cloud if need be and avoid vendor lock-in. A high-level
policy and usage agreement allows the costs of cloud allocation to be shared
among multiple parties with stakes in the same research.

%---------------------------------------------------------------------------------------
%---------------------------------------------------------------------------------------

%\section{Summary and Lessons Learned}
%\begin{table*}[htb]
%\begin{center}
%%\hspace*{-1cm}
%% use packages: array
%\begin{tabular}{|l|l|l|l|}
%\hline
% & \textbf{Amazon EC2} & \textbf{Cornell RedCloud} & \textbf{FutureGrid} \\
%\hline
%\textbf{CPUs} & & &\\
%\hline
%\textbf{Storage} & & &\\
%\hline
%\textbf{Pricing} & & &\\
%\hline
%\textbf{Interface} & & &\\
%\hline
%\textbf{Speed} & & &\\
%\hline
%\end{tabular}
%\label{tbl:cloud-eval}
%\caption{A comparative summary on the key aspects of Amazon EC2, Cornell
%RedCloud and FutureGrid Cloud Infrastructures}
%\end{center}
%\end{table*}
%---------------------------------------------------------------------------------------
%---------------------------------------------------------------------------------------

\section{Conclusions and Ongoing Work}
In this paper we discuss and evaluate the cloud side of a network-intensive
problem characterized by wide-area data collection and processing, using a
representative parallel scripting paradigm. We analyze the properties of
multiple cloud systems as applied to our problem space. One notable limitation
common to these environments is the lack of efficient support for fault
tolerance and for seamless assurance of data availability in the event of
failure.

Not only computational resources but also performant network bandwidth is
needed to achieve the desired application performance. Beyond basic
application execution in a complex, networked environment, we foresee
additional requirements: high assurance, dynamic configuration, fault
tolerance, transparent connection migration, a distributed data repository,
and overall task coordination and orchestration of computation. Not all of
these requirements are addressed in this work; however, the
resource-provisioning model of our implementation forms a strong basis for
addressing them.

The intercloud bandwidth analysis is useful for large-scale task placement:
each independent pipeline instance could be placed on nodes exhibiting high
bandwidth affinity. Additionally, future work includes taking advantage of
specialized multi-core platforms offered by cloud vendors and of efficient
distributed caching technologies such as \texttt{memcached}.
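A greedy sketch of such bandwidth-affinity placement, assuming pairwise
bandwidth measurements between instances are available; the node names and
MB/s figures below are hypothetical, not taken from our experiments.

```python
# Greedy bandwidth-affinity placement sketch. Pipeline instances are
# assigned to disjoint node pairs in decreasing order of measured
# pairwise bandwidth (hypothetical nodes and MB/s values).
bandwidth = {
    ("ec2-a", "ec2-b"): 95.0,
    ("ec2-a", "fg-x"): 12.0,
    ("fg-x", "fg-y"): 80.0,
}

def place_pipelines(bw, k):
    """Pick up to k disjoint node pairs, highest bandwidth first."""
    used, placement = set(), []
    for (a, b) in sorted(bw, key=bw.get, reverse=True):
        if a not in used and b not in used:
            placement.append((a, b))
            used.update((a, b))
        if len(placement) == k:
            break
    return placement
```

A production scheduler would refresh the bandwidth matrix periodically and
rebalance, but the greedy pass above captures the affinity idea.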

\section*{Acknowledgment}
We thank our colleague Robbert van Renesse for his valuable input. This
work was partially supported by the U.S. Department of Energy under Contract
No.\ DE-AC02-06CH11357.

\bibliographystyle{IEEEtran}
\bibliography{ref} \end{document}
