
\documentclass[conference]{IEEEtran}

\usepackage{graphicx,epsfig}
\usepackage{amsmath}
\usepackage{algorithm}
\usepackage{algorithmic}
\usepackage{wrapfig}
\usepackage{subfigure}
\usepackage{balance}

% correct bad hyphenation here
\hyphenation{op-tical net-works semi-conduc-tor}


\begin{document}

\title{Adaptive Computing Resource Provision for In-situ Data Analysis}


\maketitle


\begin{abstract}


\end{abstract}

\section{Introduction}

The traditional approach to data analysis in the high performance computational science community has been to store simulation data and analyze it at a later time. At extreme scales, however, such a simulation--analysis workflow involves both multiple platforms and big data workloads, and thus imposes new challenges. Several recent approaches have promoted an in-situ data processing concept to tackle this big data problem. More specifically, a set of staging nodes is employed to perform analysis and/or visualization within the supercomputer. This scheme mitigates the waiting and contention normally associated with large-scale simulations: simulation output is produced in steps and held in the memory of the staging nodes rather than written to disk or external storage for later analysis, and at each output step, analysis can be performed directly on the simulation data. This eliminates redundant I/O and also addresses power efficiency issues within the supercomputer. Data from the staging nodes is then stored to the parallel file system.

\begin{figure}[!t]
\centering
\includegraphics[width=0.45\textwidth]{figures/HardWareA.eps}
\caption{ Hardware architecture for in-situ data analysis on staging nodes}
\label{fig:HardWareArchitecture}
\end{figure}

Current simulation programs usually consist of multiple simulation steps. When a step outputs its data for in-situ analysis, the analysis jobs corresponding to that output are queued on the analysis nodes. However, since memory and CPU resources are limited, these analysis jobs should finish before the output of the next step.

There are two important challenges in employing staging nodes for in-situ data analysis. The first is to determine the proper amount of computing resources for data analysis jobs in order to keep pace with the simulation program; over-allocation of resources lowers system utilization and wastes energy, while under-allocation delays the analysis. The second is to dynamically adjust the computing resources for in-situ data analysis jobs during the simulation.


\section{Background and Motivation}

\subsection{Motivation: A Simple Example}

Suppose the simulation outputs data approximately every 10 seconds; then the analysis jobs corresponding to each output should finish within those 10 seconds. Otherwise, when analysis is slower than simulation output, subsequent simulation and analysis activities may suffer resource contention and performance degradation, and in the worst case data may be lost.

With this in-situ scheme, care must also be taken to account for the limited memory and computing resources available on the staging nodes when performing analysis jobs. To utilize these resources effectively, a dynamic resource scheduler is required that can determine the proper amount of resources to allocate to analysis tasks.
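As a back-of-the-envelope illustration of this timing constraint, the following sketch (plain Python; all numbers and the helper name are hypothetical, and it assumes analysis work divides perfectly across executors) estimates the least number of executors needed so that one step's analysis fits within the output interval:

```python
import math

def min_executors(task_times, step_interval):
    """Lower bound on executors so analysis fits in one step interval.

    Assumes tasks are perfectly divisible across executors, so the
    bound is total work divided by the interval (a best case).
    """
    total_work = sum(task_times)
    return max(1, math.ceil(total_work / step_interval))

# Hypothetical workload: 40 analysis tasks of 2 s each,
# with simulation output arriving every 10 s.
print(min_executors([2.0] * 40, 10.0))  # -> 8
```

Any scheduling overhead or load imbalance would push the real requirement above this bound, which is one reason a queuing model rather than simple division is used later in the paper.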

\subsection{Big Data Analysis Framework: Spark}

Spark is a MapReduce-style framework for big data analysis. Spark introduces resilient distributed datasets (RDDs) to facilitate the programming of parallel applications. Each RDD represents a collection of data partitions that spread across the cluster. To analyze a dataset, two important components are involved: a scheduler and many executors. The scheduler is in charge of scheduling tasks, monitoring their progress, and handling faults through task re-execution. The executors are responsible for carrying out the actual computing and data processing tasks.
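The partitioned structure of an RDD is what allows executors to work in parallel. The following toy sketch (plain Python with a thread pool, not the real Spark API) illustrates the idea of running one analysis function independently on each partition:

```python
# Toy illustration of the RDD idea: a dataset is a list of
# partitions, and a transformation runs independently on each
# partition, so executors can process partitions in parallel.
from concurrent.futures import ThreadPoolExecutor

def map_partitions(partitions, fn, workers=4):
    """Apply fn to every partition concurrently (stand-in executors)."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fn, partitions))

rdd = [[1, 2], [3, 4], [5, 6]]      # three partitions
sums = map_partitions(rdd, sum)     # per-partition "analysis task"
print(sums)  # -> [3, 7, 11]
```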

\section{Adaptive resource allocation for in-situ data analysis}

\subsection{Design goals and system architecture}

\begin{figure}[!t]
\centering
\includegraphics[width=0.45\textwidth]{figures/SoftwareA.eps}
\caption{ Software architecture for in-situ data analysis}
\label{fig:SoftwareArchitecture}
\end{figure}

With respect to resource allocation, we propose an adaptive model to dynamically allocate computing resources for in-situ data analysis. This model accounts for the resource needs of the analysis tasks launched on the analysis nodes, places simulation data into memory, and launches the analysis tasks according to the simulation data distribution.


Firstly, we use an Adaptive In-situ Computing Resource Allocation layer to decide the amount of computing resources at runtime. We collect and feed runtime information to an M/D/C queuing model to determine the proper amount of computing resources for subsequent analysis. The runtime information consists of the parameters of the simulation program and the historical execution record, including the output time and data size of each simulation step, the total time spent analyzing the data of each step, and the historical computing resource usage.

Secondly, we store the simulation data in the distributed memory of the staging nodes according to the computing resource allocation made in the first step, breaking the simulation dataset into pieces.

Thirdly, we dynamically schedule the analysis tasks according to the data distribution.
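The second step above, breaking one step's output into pieces matched to the allocated executors, can be sketched as follows (a minimal illustration; the helper name and the assumption of evenly divisible records are ours):

```python
def partition_output(records, n_pieces):
    """Split one simulation step's output into n_pieces roughly equal
    chunks, one per allocated executor (hypothetical helper)."""
    k, r = divmod(len(records), n_pieces)
    pieces, start = [], 0
    for i in range(n_pieces):
        # The first r pieces absorb one extra record each.
        end = start + k + (1 if i < r else 0)
        pieces.append(records[start:end])
        start = end
    return pieces

print(partition_output(list(range(10)), 3))
# -> [[0, 1, 2, 3], [4, 5, 6], [7, 8, 9]]
```

In practice the pieces would be placed in the distributed memory of different staging nodes, and the third step would schedule each task on the node already holding its piece.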

\subsection{M/D/C queue for in-situ data analysis}

Analysis jobs are modeled as an M/D/C queue: job arrivals are Markovian, driven by the simulation output steps; service times are treated as deterministic, since the analysis programs and per-step data sizes vary little between steps; and $C$ is the number of executors serving the queue.
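As one concrete way to evaluate such a model (a sketch under our own assumptions, not necessarily the paper's exact formulation), the mean waiting time of an M/D/c queue is commonly approximated as half that of the corresponding M/M/c queue, whose waiting probability is given by the Erlang C formula. The smallest executor count meeting a waiting-time target can then be found by search; all parameter values below are hypothetical:

```python
import math

def erlang_c(c, a):
    """Erlang C probability that an arriving job must wait,
    with c servers and offered load a = lam/mu (requires a < c)."""
    top = (a ** c / math.factorial(c)) * (c / (c - a))
    bottom = sum(a ** k / math.factorial(k) for k in range(c)) + top
    return top / bottom

def mdc_wait(lam, mu, c):
    """Approximate mean queueing delay of an M/D/c queue via the
    classic half-of-M/M/c heuristic (exact for c = 1)."""
    a = lam / mu
    wq_mmc = erlang_c(c, a) / (c * mu - lam)
    return wq_mmc / 2.0

def smallest_c(lam, mu, deadline):
    """Smallest executor count whose approximate delay meets the deadline."""
    c = max(1, math.ceil(lam / mu) + 1)   # start above the load, for stability
    while mdc_wait(lam, mu, c) > deadline:
        c += 1
    return c

# Hypothetical: 3 analysis jobs/s, 1 job/s per executor, 0.1 s target.
print(smallest_c(3.0, 1.0, 0.1))  # -> 5
```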

\subsection{Workflow of in-situ data analysis with Spark }
We implement our proposed method using Spark. 

To deal with simulation data, we first need to implement data receivers to get data from the simulation processes. For the data of each simulation step, multiple analysis tasks are formed and sent to the scheduler for execution. Specifically, the workflow involves the following steps:

Step 1. Multiple data receivers are launched on the analysis nodes to receive data from the HPC simulation programs. Currently, we use TCP/IP to transfer the simulation data from the simulation nodes to the analysis nodes: a simulation application writes its output data to a port, while a data receiver listens on that port and reads the data. The data receivers then store the data as RDDs in Spark.
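A minimal single-connection sketch of such a receiver, using plain Python sockets (the payload and the "store as an RDD" step are stand-ins for the real simulation output and Spark ingestion):

```python
import socket
import threading

def receiver(srv, out):
    """Minimal data receiver: accept one connection from a simulation
    process and collect everything it writes."""
    conn, _ = srv.accept()
    chunks = []
    while True:
        buf = conn.recv(4096)
        if not buf:            # sender closed: end of this step's output
            break
        chunks.append(buf)
    out.append(b"".join(chunks))   # stand-in for "store as an RDD"
    conn.close()

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))         # OS picks a free port
srv.listen(1)
port = srv.getsockname()[1]

out = []
t = threading.Thread(target=receiver, args=(srv, out))
t.start()

# The "simulation" side writes one step's output to the port.
cli = socket.create_connection(("127.0.0.1", port))
cli.sendall(b"step-1 temperature field")
cli.close()
t.join()
srv.close()
print(out[0])  # -> b'step-1 temperature field'
```

A production receiver would additionally frame messages per simulation step and accept many concurrent senders.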

\begin{figure}[h]
\centering
\includegraphics[width=0.45\textwidth]{figures/DataToSpark.eps}
\caption{ Saving simulation data into Spark}
\label{fig:InputData}
\end{figure}

Step 2. For the output data of each simulation step, multiple analysis jobs corresponding to different analysis programs are formed. According to the size and distribution of the simulation data, each job is partitioned into a set of tasks.

Step 3. The analysis jobs are put into a queue and submitted to the scheduler.

Step 4. The Spark scheduler assigns the tasks to different executor processes for execution.

\begin{figure}[h]
\centering
\includegraphics[width=0.45\textwidth]{figures/Scheduler.eps}
\caption{ Scheduling tasks to different executor processes under adaptive computing resource allocation}
\label{fig:Scheduler}
\end{figure}

\subsection{Implementation}

To test our proposed method, we modify the existing Spark scheduler. First, we add our M/D/C-based calculation method to the scheduler component. This method is called before tasks are scheduled; it takes the runtime information as input and outputs the number of executors (i.e., the amount of computing resources) to use for subsequent analysis. We then modify the scheduler to assign tasks accordingly; the unmodified scheduler assigns tasks in a round-robin manner.
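The round-robin baseline is simple to state: task $i$ goes to executor $i \bmod n$. The sketch below (plain Python, hypothetical task names) shows this baseline; in our design the executor count $n$ would be recomputed by the adaptive layer before each simulation step rather than fixed:

```python
def round_robin(tasks, n_executors):
    """Baseline assignment: task i goes to executor i mod n_executors."""
    plan = [[] for _ in range(n_executors)]
    for i, task in enumerate(tasks):
        plan[i % n_executors].append(task)
    return plan

# Hypothetical tasks for one simulation step, two executors allocated.
print(round_robin(["t0", "t1", "t2", "t3", "t4"], 2))
# -> [['t0', 't2', 't4'], ['t1', 't3']]
```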


\section{Experiment}

\section{Conclusion}


\bibliographystyle{abbrv}
\bibliography{SC-InSitu.bib}

\end{document}


