%\documentclass[10pt, conference, compsocconf]{IEEEtran}

\documentclass{sig-alternate}

\usepackage{graphicx,epsfig}
\usepackage{amsmath}
\usepackage{algorithm}
\usepackage{algorithmic}
\usepackage{wrapfig}
\usepackage{subfigure}
\usepackage{balance}

\begin{document}

\title{SLAM: Scalable Locality-Aware Middleware for I/O in Scientific Analysis and Visualization}

\numberofauthors{2} 

\author{
\alignauthor
Jiangling Yin, Xuhong Zhang, Junyao Zhang, Jun Wang\\
       \affaddr{EECS, U. of Central Florida}\\
       %\affaddr{University of Central Florida}\\
       %\affaddr{Orlando, Florida 32826}\\
       \email{\{jyin,jwang\}@eecs.ucf.edu}
\alignauthor
Wu-chun Feng\\
       \affaddr{Department of Computer Science}\\
       \affaddr{Virginia Tech}\\
       %\affaddr{Virginia Tech, Blacksburg, VA 2406}\\
       \email{wfeng@vt.edu}
\alignauthor
%Pavan Balaji\\
 %      \affaddr{Mathematics and Computer Science Division}\\
  %     \affaddr{Argonne National Laboratory}\\
       %\affaddr{Virginia Tech, Blacksburg, VA 2406}\\
   %    \email{balaji@mcs.anl.gov}
}

\maketitle

\begin{abstract}

Whereas traditional scientific applications are computationally intensive, recent applications require more data-intensive analysis and visualization to extract knowledge, owing to the explosive growth of scientific information and simulation data. As the computational power and size of compute clusters continue to increase, the insufficient read rates of the I/O system and the associated network force these analysis applications to suffer long I/O latencies while moving big data from the network/parallel file systems, resulting in a serious performance bottleneck.

In this paper, we present our ``Scalable Locality-Aware Middleware'' (SLAM) for scientific analysis applications in order to address the above problem. SLAM leverages a distributed file system (DFS) to provide scalable data access for scientific applications. Since the conventional and parallel I/O operations from the high-performance computing (HPC) community are not supported by DFS, we propose a translation layer to translate these I/O operations into DFS I/O. More importantly, a novel data-centric scheduler (DC-scheduler) is proposed to enforce data-process locality and deliver load balancing for enhanced performance. 
We prototype our proposed SLAM into two state-of-the-art real scientific analysis applications, mpiBLAST and ParaView, along with the Hadoop distributed file system (HDFS) across a wide variety of computing platforms. 
In comparison to existing deployments using NFS, PVFS or Lustre as the underlying storage systems, SLAM can greatly reduce the I/O cost and double overall performance. 

\end{abstract}

%\keywords
%\begin{IEEEkeywords}
\keywords{MPI/POSIX I/O; HDFS; Parallel BLAST; ParaView}

%\end{IEEEkeywords}

\section{Introduction}

Modern technological advances have led to increasingly efficient and accurate scientific instruments and computer simulations, which create and collect extremely accurate and diverse datasets. Global collaboration has led to larger data repositories than ever seen before. For instance, the collective amount of genomic information is expanding to be terabytes and petabytes~\cite{ref:genedata} and doubles every 12 months~\cite{benson2010genbank, 2008genbank, ostell2005databases}. Petascale simulations like cosmology simulations~\cite{Roadrunner} compute at resolutions ranging into billions of cells and write terabytes of data for visualization and analysis. 

To readily analyze and interpret this data, many scientific analysis/visualization applications have been designed. For instance, gene analysis tools such as parallel BLAST help researchers better understand the composition and functionality of biological entities and processes~\cite{ref:genedata, ref:Genomesproject, BLAST, ScalaBLAST, pearson1988improved}. Also, visualization applications such as ParaView~\cite{ahrens2005paraview} and Chimera~\cite{pettersen2004Chimera} can interpret and graphically represent raw simulation/scientific data in parallel. %so that scientists may readily recognize the qualitative aspects of a dataset. 
These types of scientific applications typically run on an HPC system or a cluster with many parallel processes, which execute the same computation algorithm but process different portions of the datasets.
The parallel programming model for these applications is the MPI programming model, in which the shared dataset is stored in a network-accessible storage system such as NFS, PVFS~\cite{ross2000pvfs}, or Lustre~\cite{schwan2003lustre} and transferred to the parallel MPI processes during execution. This programming model is well known as the compute-centric model. 

However, along with tera/peta-scale datasets comes a new challenge for these MPI-based programs: transmitting and analyzing the data in a computationally efficient manner. This is because most computation systems rely on a single link or a handful of network links to transfer data to and from the cluster~\cite{Grider:PaScal}. Unfortunately, in today's big data era, these links encumber the scalability of a parallel system. Recently there have been many research efforts, such as I/O forwarding techniques~\cite{IOforwarding}, decoupled execution~\cite{chen2012decoupled}, and the server pushing method~\cite{sun2007server}, to address the big data challenge. However, these schemes add extra data-processing nodes/phases to the current execution paradigm and are unable to scale out with increasing cluster size due to network limitations. 

In contrast, a data-centric architecture provides co-located compute and storage resources on the same node in order to facilitate locality-oriented task scheduling, offering a strong structure for big data applications. These storage resources are usually organized by a distributed file system (DFS). A DFS is constructed from machines with locally attached disks and allows clients on a network to access locally stored data with minimal network cost, scaling with the number of nodes as needed. For instance, the Hadoop system employs a DFS for MapReduce applications and allows map tasks to access data locally. As the number of cluster nodes increases, the Hadoop system can easily scale out, expanding storage and launching MapReduce tasks on the additional nodes. 

Compared to the MPI programming model, however, the MapReduce programming model lacks the flexibility and efficiency to implement the complex algorithms executed in scientific applications such as parallel BLAST~\cite{Lin:mpiBLAST-pio,MR_MPI}, FLASH physics, or visualization applications. Furthermore, without low-level MPI primitives~\cite{MPI}, MapReduce, characterized as a batch processing framework, has difficulty dealing with runtime data processing requests, such as interactively slicing the raw data to make the interior visible, or extracting regions with particular qualities in ParaView~\cite{ref:paraview}. 

\begin{figure}[!t]
\centering
\includegraphics[width=0.44\textwidth]{figures/Motivation.eps}
\caption{The HDFS1 test using default HDFS represents the exact performance of ParaView without taking data location into account. The HDFS2 test represents an ideal case where all of the data needed by the MPI process is present on the given DataNode. }
\label{fig:Motivation}
\end{figure}

In this paper, we propose ``Scalable Locality-Aware Middleware" (SLAM), which allows scientific analysis applications to benefit from data locality exploitation through the use of HDFS, while also maintaining the flexibility and efficiency of the MPI programming model. The fundamental challenges include the incompatibility between conventional parallel I/O, such as \textit{MPI\_File\_read()}, and HDFS I/O, such as \textit{hdfsRead()}, and the exploitation of co-located compute and storage by MPI-based programs, whose task assignment usually does not take the physical location of data into account. 

In order to show the importance of data locality, we run an instance of ParaView over HDFS on PRObE's Marmot 128-node cluster and collect the execution time of each I/O operation, as shown in Figure~\ref{fig:Motivation}. The HDFS1 test uses the HDFS defaults with a replication factor of three. This test represents the exact performance of ParaView without taking data location into account during task assignment. The HDFS2 test represents an ideal case in which all of the data needed by the MPI process is present on the working DataNode. More details can be found in Section 4. As seen in Figure~\ref{fig:Motivation}, the read time increases dramatically when an I/O operation is executed remotely. This is because the data is distributed in advance within HDFS, while the task assignment among the data server processes of ParaView does not take the physical location of the dataset into account. 

Our SLAM consists of three components: (1) a process-to-data mapping scheduler (DC-scheduler), which transforms a compute-centric mapping into a data-centric one so that a computational process always accesses data from a local or nearby computation node, (2) a data location aware monitor to support the DC-scheduler, and (3) a virtual translation layer to enable computational processes to execute on underlying distributed file systems.
 
We realize a SLAM prototype system using mpiBLAST and ParaView as case-study applications to demonstrate the efficacy of SLAM. In our work, SLAM runs on the popular Hadoop distributed file system (HDFS). Because every BLAST task is scheduled to execute on a compute node that can read the necessary data directly from its locally attached disks, the I/O cost of data movement is greatly reduced. By experimenting with the SLAM prototype on PRObE's Marmot 128-node cluster testbed and UCF's on-site 46-node cluster, we find that SLAM doubles the overall performance of existing solutions for I/O-intensive applications. 

The remainder of this paper is organized as follows:
Section 2 discusses background information on scientific analysis applications. 
Section 3 presents our proposed system. 
Section 4 shows performance results and analysis. 
Section 5 discusses related work and Section 6 concludes the paper.

\section{Scientific Analysis/Visualization Applications}
In computational biology, genomic sequencing tools are used to compare given query sequences against a database in order to characterize new sequences and study their effects. There are many different alignment algorithms in this field, such as Needleman-Wunsch~\cite{needleman1970general}, FASTA~\cite{pearson1988improved}, and BLAST~\cite{BLAST}. Among them, the BLAST family of algorithms is the most widely used in biological and biomedical research. %It compares a query sequence with database sequences by a two-phased heuristic-based alignment algorithm. 
At present, BLAST is a standard defined by the National Center for Biotechnology Information (NCBI).

mpiBLAST~\cite{mpiBLAST:design} is a parallel implementation of NCBI BLAST. It organizes all parallel processes into one master process and many worker processes. Before performing an actual search, the raw sequence database is formatted into many fragments and stored in a shared network file system with the support of MPI or POSIX I/O operations. mpiBLAST follows a compute-centric model and moves the requested database fragments to the corresponding compute processes. By default, the master process accepts gene sequence search jobs from clients and generates task assignments according to the database fragments; mpiBLAST workers then load database fragments from a globally accessible file system over the network and perform the BLAST tasks according to the master's scheduling.
When searching a large database, this I/O cost, which is incurred before the actual BLAST execution, takes a significant amount of time.

\begin{figure}[!t]
\centering
\includegraphics[width=0.40\textwidth, height=0.13\textheight] %theight=0.20\textheigh]
{figures/Multi-block.eps}
\caption{ A protein dataset is partitioned across multiple parallel processes; the left image shows the rendering of one sub-dataset, while the right shows the composite of the whole dataset.}    
\label{fig:multiblock}
\end{figure}

ParaView~\cite{ahrens2005paraview} is an open-source, multi-platform application for the visualization and analysis of scientific datasets. ParaView has three main logical components: data server, render server, and client. The data server reads in files from shared storage and processes data through the pipeline to the render server that renders this processed data to present the results to the client. The data server can exploit data parallelism by partitioning the data and assigning each data server process a part of the dataset to analyze. By splitting the data, ParaView is able to run data processing tasks in parallel. Figure~\ref{fig:multiblock} demonstrates an example of parallel visualization for a Protein Dataset.

Current MPI-based visualization applications adopt compute-centric scheduling, in which each data server process is assigned tasks according to its MPI rank. Once a data processing task is scheduled to a data server process, the data is transferred from a shared storage system to the compute node. Since parallel file systems such as PVFS or Lustre are usually deployed on storage nodes while data server processes run on compute nodes, this compute-centric model involves a significant amount of data movement for big data problems and becomes a major stumbling block to high performance and scalability.
  
\section{SLAM Design and Implementation}

In this section, we present the design and implementation of SLAM using mpiBLAST and ParaView as case-study applications.  
We illustrate the architecture of SLAM in Sections 3.1, 3.2, and 3.3, using parallel BLAST as an example. 
ParaView implements SLAM in the same way as parallel BLAST but employs a different task assignment method; we therefore show how SLAM is implemented in ParaView in Section 3.4.

\subsection{Design Goals and System Architecture}

\begin{figure*}
\centering
\includegraphics[width=0.85\textwidth, %theight=0.20\textheigh
]{figures/ProposedSystemArchitecture3.eps}
\caption{ Proposed SLAM for parallel BLAST. (a) The DC-scheduler employs a ``Fragment Location Monitor'' to snoop the fragments location and dispatch unassigned fragments to computation processes such that each process could read the fragments locally via SLAM-I/O. (b) The SLAM software architecture. Two new modules are used to assist parallel BLAST in accessing the distributed file system and intelligently read fragments with awareness of data locality.}
\label{fig:ProposedSystemArchitecture}
\end{figure*}

As data repositories expand exponentially with time and scientific applications become ever more data intensive as well as computationally intensive, a new problem arises regarding the transmission and analysis of data in a computationally efficient manner. Programs running in parallel on large-scale clusters suffer from potentially long I/O latency resulting from non-negligible data movement, especially in commodity clusters. Scientific analysis applications could significantly benefit from local data access in a distributed fashion, as successfully adopted by MapReduce systems.

We propose a middleware called ``SLAM", which allows scientific analysis programs to benefit from data locality exploitation in HDFS, while also maintaining the flexibility and efficiency of the MPI programming model. Since the data is distributed in advance within HDFS, the default task assignment, which does not consider the data distribution, may not let parallel processes fully benefit from local data access. Thus, we need to intercept the original task scheduling and re-assign the tasks so as to achieve maximum efficiency on a parallel system with a high degree of data locality and load balancing. We also need to solve the I/O incompatibility issue, such that the data stored in HDFS can be accessed through conventional parallel I/O methods, e.g., MPI-I/O or POSIX I/O. 

SLAM implements a fragment location monitor, which collects the list of unassigned fragments at all participating nodes. To achieve this, the monitor connects to the HDFS NameNode using libHDFS and requests chunk location information (specified by a file name, an offset within the file, and the length of the request). The NameNode replies with a list of the host DataNodes on which the requested chunks are physically stored. Based on this locality information, our proposed scheduler makes an informed decision about which node should execute a given computation task, realizing local data access and avoiding data movement over the network.
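As a concrete illustration, the monitor's lookup step can be sketched in Python with the NameNode's block map mocked as a plain dictionary; the names `chunk_hosts` and `block_map` are ours, and a real monitor would instead issue this query through libHDFS (e.g., its `hdfsGetHosts` call):

```python
# Sketch of the fragment location monitor's lookup (hypothetical names).
# The NameNode's chunk-to-DataNode map is mocked as a dictionary.

CHUNK_SIZE = 64 * 1024 * 1024  # an assumed 64 MB HDFS chunk size

# Mocked NameNode metadata: (file, chunk index) -> hosts holding a replica
block_map = {
    ("db.frag.0", 0): ["node1", "node3", "node7"],
    ("db.frag.1", 0): ["node2", "node5", "node6"],
}

def chunk_hosts(path, offset, length, chunks=block_map):
    """Return the DataNode hosts for every chunk covering [offset, offset+length)."""
    first = offset // CHUNK_SIZE
    last = (offset + length - 1) // CHUNK_SIZE
    return [chunks[(path, i)] for i in range(first, last + 1)]
```

The scheduler only needs the host lists; everything else (chunk boundaries, replica count) stays inside HDFS.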


Figure~\ref{fig:ProposedSystemArchitecture} illustrates the SLAM framework for parallel BLAST, which consists of three major components: a translation I/O layer called SLAM-I/O, a data-centric load-balanced scheduler called the DC-scheduler, and a fragment location monitor.
We chose HDFS, which was designed for MapReduce programs, as the underlying storage system. SLAM-I/O is implemented as a non-intrusive software component added to the existing application code, such that many successful performance-tuned parallel algorithms and high-performance noncontiguous I/O optimization methods~\cite{Lin:mpiBLAST-pio, MPI} can be directly inherited by SLAM. The DC-scheduler determines which specific data fragment each node processes; it aims to minimize the number of fragments pulled over the network. The DC-scheduler is incorporated into the runtime of the parallel BLAST application. To get the physical locations of all unassigned fragments, the fragment location monitor is invoked by the DC-scheduler to report the database fragment locations. 

By tracking this location information, the DC-scheduler schedules computation tasks on the appropriate compute nodes, that is, it moves computation to data. Through SLAM-I/O, MPI processes can directly access fragments, stored as chunks in HDFS, from the local hard drive, which is part of the overall HDFS storage. 

\subsection{SLAM-I/O: A Translation Layer}

Current scientific parallel applications are mainly developed with the MPI model and employ either MPI-I/O or POSIX I/O to run on a network file system or a network-attached parallel file system. SLAM uses HDFS to replace these file systems, which entails handling the I/O compatibility issues between MPI-based programs and HDFS.

More specifically, scientific parallel applications access files through MPI-I/O or POSIX-I/O interfaces, which are supported by local UNIX file systems or parallel file systems. These I/O methods are different from the I/O operations in HDFS. For example, HDFS uses a client-server model, in which servers manage metadata while clients request data from servers. These compatibility issues make scientific parallel applications unable to run on HDFS.

We implement a translation layer, SLAM-I/O, to handle the incompatible I/O semantics. The basic idea is to transparently transform the high-level I/O operations of parallel applications into standard HDFS I/O calls. SLAM-I/O works as follows. It first connects to the HDFS server using hdfsConnect() and mounts HDFS as a local directory at the corresponding compute node; hence each cluster node works as one HDFS client. Any I/O operations of parallel applications that work in the mounted directory are intercepted by the layer and redirected to HDFS. Finally, the corresponding HDFS I/O calls are triggered to execute the specific I/O functions, \emph{e.g.}, open/read/write/close.
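The redirection idea can be sketched as follows; `MockHDFS`, `translate`, and the mount prefix are illustrative stand-ins (a real deployment goes through FUSE and libHDFS, as described above), but they show how paths under the mount point map to HDFS-internal paths and why writes behave as append-only:

```python
# Minimal sketch of the SLAM-I/O redirection idea (all names hypothetical):
# calls under the mount point are rewritten into HDFS-style operations.

MOUNT = "/hdfs"  # assumed mount point of the DFS on each compute node

class MockHDFS:
    """Stand-in for an hdfsConnect() handle; stores files in a dict."""
    def __init__(self):
        self.files = {}
    def hdfs_write(self, path, data):
        # HDFS-style semantics: writes can only append to a file
        self.files[path] = self.files.get(path, b"") + data
    def hdfs_read(self, path):
        return self.files[path]

def translate(path, mount=MOUNT):
    """Map a path inside the mount point to its HDFS-internal path."""
    if not path.startswith(mount + "/"):
        raise ValueError("not under the HDFS mount: " + path)
    return path[len(mount):]
```

An intercepted `open("/hdfs/foo/db.frag.0")` would thus be dispatched as an HDFS open of `/foo/db.frag.0`.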

How to handle concurrent writes is another challenge in SLAM. Parallel applications usually produce distributed results and may let every engaged process write to disjoint ranges of a shared file. For instance, mpiBLAST takes advantage of independent/collective I/O to optimize the search output. The \emph{WorkerCollective} output strategy introduced by Lin~\emph{et al.}~\cite{Lin:mpiBLAST-pio} realizes concurrent-write semantics, which can be interpreted as ``multiple processes write to a single file at the same time''. These concurrent write schemes often work well with parallel file systems or network file systems. However, HDFS only supports appending writes and, most importantly, allows only one process to open a file for writing (otherwise an open error occurs).

To resolve this incompatible I/O semantics issue, we revise ``concurrent write to one file'' into ``concurrent write to multiple files''. We let every process output its results and write ranges independently to its own physical file in HDFS. Logically, all output files produced for a data processing job are placed in the same directory. The overall results are retrieved by joining all physical files under that directory. 

In the experiments to follow, we prototype SLAM-I/O using FUSE~\cite{ref:fusehdfs}, a framework for running stackable file systems in non-privileged mode. An I/O call from an application to the Hadoop file system is illustrated in Figure~\ref{fig:I/Oflow}. The Hadoop file system is mounted on all participating cluster nodes through the SLAM-I/O layer. The I/O operations of mpiBLAST pass through a virtual file system (VFS), are taken over by SLAM-I/O through FUSE, and are then forwarded to HDFS, which is responsible for data storage management. In regards to concurrent writes, SLAM-I/O automatically inserts a sub-path using the same name as the output file name and appends the process ID to the end of the file name. For instance, if a process with id $30$ writes into $/hdfs/foo/searchNTresult$, the actual output file is $/hdfs/foo/searchNTresult/searchNTresult30$.
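This per-process output remapping is a pure path rewrite, sketched below with a hypothetical helper name:

```python
# Sketch of the per-process output remapping described above: a shared
# output path becomes a per-process file inside a directory of the same
# name, so HDFS's single-writer restriction is never violated.

import os.path

def per_process_path(shared_path, pid):
    """e.g. /hdfs/foo/searchNTresult, 30 -> /hdfs/foo/searchNTresult/searchNTresult30"""
    name = os.path.basename(shared_path)
    return os.path.join(shared_path, name + str(pid))
```

Joining the overall result is then a matter of listing and concatenating the files under the shared directory.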

\begin{figure}[h]%[!t]
\centering
\includegraphics[width=0.41\textwidth, %height=0.13\textheight
]{figures/mpiBLAST-IO1.eps}
\caption{ The I/O call in our prototype. A FUSE kernel module redirects file system calls from parallel I/O to SLAM-I/O. SLAM-I/O wraps HDFS clients and translates the I/O call to DFS I/O. 
}
\label{fig:I/Oflow}
\end{figure}

\subsection{A Data Centric Load-balanced Scheduler}{\label{section:scheduler}}

As discussed, the key to realizing scalability and high performance in big data scientific applications is to achieve data locality and load balance. However, several heterogeneity issues could potentially result in load imbalance. For instance, in parallel gene data processing, the global database is formatted into many fragments, and the data processing job is divided into a list of tasks corresponding to the database fragments. On the other hand, HDFS's random chunk placement algorithm may distribute database fragments unevenly within the cluster, leaving some nodes with more data than others. In addition, the execution time of a specific BLAST task can vary greatly and is hard to predict from the input data size and the different computing capacities of the nodes~\cite{sequence-searching:ad-hoc,Lin:mpiBLAST-pio}.

We implement the fragment location monitor as a background daemon that reports updated unassigned-fragment status to the DC-scheduler. At any point in time, the DC-scheduler always tries to launch a \emph{local task} for the requesting process, that is, a task whose corresponding fragment is available on the node of the requesting process. In practice, a high degree of data locality can often be achieved because each fragment has three physical copies in HDFS, leaving three different candidate nodes available for scheduling.

Upon an incoming data processing job, the DC-scheduler invokes the location monitor to report the physical location of all target fragments. If a process from a specific node requests a task, the scheduler assigns a task to the process using the following procedure. First, if  local tasks exist on the requesting node, the scheduler will evaluate which local task should be assigned to the requesting process in order to make other parallel processes achieve locality as much as possible (details will be provided later). Second, if no local task exists on the node, the scheduler will assign a task to the requesting process by comparing all unassigned tasks in order to make other parallel processes achieve locality. The node will then pull the corresponding fragment over the network.

The scheduler is detailed in Algorithm~\ref{alg:ADLAScheduling}. The input data processing job consists of a list $F$ of individual tasks, each performed against a distinct fragment. While the task list $F$ is not empty, parallel processes report to the scheduler for assignments. Upon receiving a task request from a process on node $i$, the scheduler determines a task for the process as follows: 
\begin{itemize}
\item 1. If Node $i$ has some local tasks, then the local task $f_x$ that makes the number of unassigned local tasks on all other nodes as balanced as possible will be assigned to the requesting process. Figure~\ref{fig:Scheduler} illustrates how this choice is made. In the example, there are $4$ parallel processes running on $4$ nodes, where $W1$ requests a task and its unassigned local tasks are $F_1 = \{f_2, f_4, f_6\}$. For each task $f_x$ in $F_1$, we compute the minimum number of unassigned tasks %fragments 
among all other nodes containing $f_x$'s associated fragment. For example, the task $f_2$ is also local to nodes $W2$ and $W4$, so we compute $\min (|F_2|,|F_4|) = 2$. We assign the task with the largest such value to the idle process, which is $f_6$ in the example. After the assignment, the numbers of unassigned local tasks for nodes $W2$, $W3$, and $W4$ become $2$, $2$, and $2$, as shown in Figure~\ref{fig:Scheduler}~(b).
\item 2. If Node $i$ does not contain any unassigned local tasks, the scheduler performs the above calculation for all unassigned tasks in $F$ and assigns the task with the largest $\min$ value to the requesting process, which then needs to pull the data over the network.
\end{itemize}

%%%%%%%********************%%%%%%%%%%%%%%%%%
\begin{algorithm}
\caption{Data-Centric Load-Balanced Scheduler Algorithm}
\label{alg:ADLAScheduling}
\begin{algorithmic}[1]
 \STATE Let $F$ = \{${f_1, f_2,..., f_m}$\} be the set of tasks %fragments
 \STATE Let $F_i$ be the set of unassigned local tasks located on node $i$
\renewcommand{\algorithmicensure}{\textbf{Steps:}}
\ENSURE

\STATE Initialize $F$ for a data processing job %input query sequences
\STATE Invoke $Location$ $monitor$ and initialize $F_i$ for each node $i$

\WHILE {$|F| \neq 0$}
	\IF {a worker process on node $i$ requests a task}
		\IF { $ |F_i| \neq 0 $ }
			\STATE Find $f_x \in F_i$ such that
			\STATE $x = \underset{x}{\operatorname{argmax}}(\underset{ F_k \ni f_x, k \neq i}{\operatorname{min}}(|F_k|))$
			\STATE Assign $f_x$ to the requesting process on node $i$
		\ELSE
			\STATE Find $f_x \in F$ such that
			\STATE $x = \underset{x}{\operatorname{argmax}}(\underset{F_k \ni f_x, k \neq i}{\operatorname{min}}(|F_k|))$
			\STATE Assign $f_x$ to the requesting process on node $i$
		\ENDIF
		\STATE Remove $f_x$ from $F$ 
		\FORALL {$F_k$ s.t. $f_x \in F_k$} 
			\STATE Remove $f_x$ from $F_k$
		\ENDFOR
	\ENDIF	
\ENDWHILE
\end{algorithmic}
\end{algorithm}
%%%%%%%********************%%%%%%%%%%%%%%%%%
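Algorithm 1 can be exercised with a short runnable sketch; the data structures are illustrative (each unassigned task is mapped to the set of nodes holding its fragment locally, so $|F_k|$ is the number of tasks whose set contains $k$), and the toy instance used below is ours rather than the exact one of the figure:

```python
# Runnable sketch of the DC-scheduler's per-request choice (Algorithm 1).
# F maps each unassigned task to the set of nodes holding its fragment.

def assign(F, i):
    """Pick a task for the requesting process on node i and remove it from F."""
    candidates = [f for f, nodes in F.items() if i in nodes]  # local tasks
    if not candidates:
        candidates = list(F)          # no local task: consider all of F
    def load(k):                      # |F_k|: unassigned local tasks on node k
        return sum(1 for nodes in F.values() if k in nodes)
    def min_other(f):                 # min |F_k| over f's other replica holders
        others = [load(k) for k in F[f] if k != i]
        return min(others) if others else 0
    best = max(candidates, key=min_other)
    del F[best]                       # removes f_x from F and thus every F_k
    return best

# Toy instance: node 1's local tasks are f2, f4 and f6, as in the example.
F = {"f1": {2, 3}, "f2": {1, 2, 4}, "f3": {3, 4},
     "f4": {1, 4}, "f5": {2, 3}, "f6": {1, 2, 3}}
```

On this instance, `assign(F, 1)` selects $f_6$, mirroring the balanced choice described above.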


\begin{figure}[!t]
\centering
\epsfig{file=Figures/Scheduler2.eps, %height=0.18\textheight, 
width=0.44\textwidth}
\caption{A simple example in which the DC-scheduler receives a task request from process $W1$. The scheduler finds the unassigned local tasks of $W1$ ($f_2$, $f_4$, and $f_6$ in this example). The task $f_6$ will be assigned to $W1$, since the minimum number of unassigned local tasks is $3$ on $W2$ and $W3$, which both hold $f_6$ locally. After assigning $f_6$ to $W1$, the number of unassigned local tasks on each of $W1$--$W4$ is $2$.}
\label{fig:Scheduler}
\end{figure}

Since mpiBLAST adopts a master-slave architecture, the DC-scheduler can be directly incorporated into the master process, which performs dynamic scheduling according to which nodes are idle at any given time. For such scheduling problems, minimizing the longest execution time is known to be NP-complete~\cite{garey1976complexity} when the number of nodes is greater than $2$, even in the case where all tasks are executed locally. However, we will show that for this case, the execution time of our solution is at most twice that of the optimal solution. 

Suppose there are $m = |F|$ tasks with actual execution times $t_1, t_2, \ldots, t_m$ on $n = |W|$ nodes. We use $T^*$ to denote the maximum execution time of a node in the optimal solution. Notice that $T^*$ cannot be smaller than the maximum execution time of a single task. This observation gives us a lower bound for $T^*$: 

\begin{equation}
T^* \ge \underset{1 \leq k \leq m}{\operatorname{max}}(t_{k})
\label{equ:lowerbound}
\end{equation}

Let $T$ be the maximum execution time of a node in a solution given by our algorithm. Without loss of generality, we assume that the last completed task $f_l$ is scheduled on node $n$. Let $s_{f_l}$ denote the start time of task $f_l$ on node $n$, so $T = s_{f_l} + t_l$.

All nodes must be busy until at least time $s_{f_l}$; otherwise, according to our algorithm, task $f_l$ would have been assigned to some node earlier. Hence the total work satisfies $\sum_{k=1}^{m} t_k \ge n \cdot s_{f_l}$, and since the optimal schedule must also complete this work on $n$ nodes, $T^* \ge \frac{1}{n}\sum_{k=1}^{m} t_k \ge s_{f_l}$. Because $T^* \ge \underset{1 \leq k \leq m}{\operatorname{max}}(t_{k})$, we also have $T^* \ge t_l$. Adding the two bounds gives the desired approximation result:
\begin{equation}
T = s_{f_l} + t_l \le 2T^*
\end{equation}
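The bound can be checked numerically on a toy instance (ignoring locality): the greedy rule ``an idle node grabs the next unassigned task'' is simulated and compared against a brute-force optimal makespan. The function names and task times below are illustrative:

```python
# Numeric check of the 2*T^* bound on a toy instance (no locality).

import heapq
from itertools import product

def greedy_makespan(times, n):
    """Dynamic scheduling: each idle node grabs the next unassigned task."""
    nodes = [0.0] * n            # next-free time per node, kept as a min-heap
    heapq.heapify(nodes)
    for t in times:
        heapq.heappush(nodes, heapq.heappop(nodes) + t)
    return max(nodes)

def optimal_makespan(times, n):
    """Exhaustive search over all task-to-node maps (tiny inputs only)."""
    best = float("inf")
    for choice in product(range(n), repeat=len(times)):
        loads = [0.0] * n
        for t, node in zip(times, choice):
            loads[node] += t
        best = min(best, max(loads))
    return best

times = [7, 5, 4, 3, 3, 2]
T, T_star = greedy_makespan(times, 3), optimal_makespan(times, 3)
assert T <= 2 * T_star       # the approximation guarantee holds
```

Here both makespans happen to coincide; adversarial instances can push the greedy result closer to the $2T^*$ limit.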

The scheduling problem becomes more complex when the location variable is taken into consideration. We therefore examine the locality and parallel execution of our scheduler with real experiments in Section 4.

\subsection{ParaView with SLAM}

\begin{figure}[!t]
\centering
\includegraphics[width=0.45\textwidth, %height=0.13\textheight
]{figures/ParaViewDesign.eps}
\caption{ Proposed SLAM for ParaView. The DC-scheduler assigns data processing tasks to MPI processes such that each MPI process could read the needed data locally.
}
\label{fig:ParaViewArchitecture}
\end{figure}

In the big data era, ParaView can benefit from local data access in a distributed fashion through SLAM. ParaView implements SLAM in the same way as mpiBLAST but employs a static data partitioning method; in this section we show how SLAM is implemented in ParaView.

ParaView employs reader modules on data server processes to interpret data from files. % according to the task assignment.
Currently, there are a large number of data readers supporting various scientific file formats. Examples of parallel data readers~\cite{ref:paraview} are the Partitioned Legacy VTK Reader, MultiBlock Data Reader, Hierarchical Box Data Reader, and Partitioned Polydata Reader. To process a dataset, the data servers running in parallel call the reader to read a meta-file, which points to a series of data files. Each data server then computes its own part of the data assignment according to the number of data files, the number of parallel servers, and its own server rank. The data servers read the data in parallel from the shared storage and then filter/render it.        

In order to achieve data locality for ParaView, the default task assignment needs to be intercepted so that our proposed DC-scheduler assigns tasks to each data server at run time. Specifically, the SLAM framework for ParaView also includes three components, the translation layer SLAM-I/O, the DC-scheduler, and the fragment location monitor, as illustrated in Figure~\ref{fig:ParaViewArchitecture}. The DC-scheduler determines which specific part of the data is assigned to each data server process. To get the physical locations of the target datasets, the location monitor is invoked by the DC-scheduler to report the data fragment locations. 
Through SLAM-I/O, data server processes can directly access data, stored as chunks in HDFS, from the local hard drive. 

Our proposed DC-scheduler algorithm in Section 3.3 is well suited to applications with dynamic scheduling, such as mpiBLAST, in which scheduling is determined by which nodes are idle at any given time. However, since the data assignment in ParaView uses a static data partitioning method, the work allocation is determined beforehand and no process acts as a central scheduler. For this kind of scheduling, we adopt a round-robin request order over all data servers in Step 6 of Algorithm~\ref{alg:ADLAScheduling}. The requests are replayed until the set $F$ is empty, at which point each data server process knows all the data pieces assigned to its process ID. The data servers then read the data in parallel and filter/render it.
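The adaptation above can be sketched as a deterministic replay of Algorithm~\ref{alg:ADLAScheduling}'s request loop. This is a minimal sketch under assumptions, not the SLAM implementation: fragment IDs and replica locations are abstracted, and ties are broken by the smallest ID so every server computes the same assignment.

```python
# Hedged sketch of the DC-scheduler in the static ParaView setting:
# servers take turns "requesting" in round-robin order, and each request
# is answered with an unassigned fragment that is locally stored if one
# exists, falling back to any remaining fragment otherwise.
def static_dc_schedule(fragments, locations, num_servers):
    """fragments: iterable of fragment ids; locations: fragment id -> set
    of server ranks holding a local replica. Returns rank -> list of ids."""
    remaining = set(fragments)
    assignment = {r: [] for r in range(num_servers)}
    rank = 0
    while remaining:
        # prefer a fragment with a replica local to the requesting server
        local = [f for f in remaining if rank in locations.get(f, set())]
        pick = min(local) if local else min(remaining)
        assignment[rank].append(pick)
        remaining.remove(pick)
        rank = (rank + 1) % num_servers  # round-robin request order
    return assignment
```

Because the replay is deterministic, each data server can run this loop independently and extract only its own list, with no central scheduler process.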

\subsection{Specific HDFS Considerations}

HDFS employs default data placement policies, so a few considerations should be taken into account when choosing HDFS as the shared storage. First, each individual fragment file should not exceed the configured chunk size; otherwise the file is broken up into multiple chunks, with each chunk replicated independently of the other related chunks. If only a fraction of a fragment can be accessed locally, the other parts must be pulled over the network, and the locality benefit is lost. As a result, we should keep the file size of each data fragment smaller than the chunk size when formatting the dataset. Second, for parallel BLAST, when applying the database format method, each fragment comprises seven related files. %, six of which are smaller files and one is bigger. 
The \emph{Hadoop Archive} method should be applied to ensure that these seven files are stored together when the database is formatted.
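The first consideration can be checked mechanically. The sketch below is an illustrative helper (not part of SLAM) that flags fragment files larger than the configured HDFS chunk size, which would be split across independently replicated chunks; the 64~MB default is Hadoop 0.20's standard block size.

```python
# Flag fragments that would be split across multiple HDFS chunks and
# thus lose the one-fragment-one-chunk locality property.
def fits_in_one_chunk(fragment_sizes, chunk_size=64 * 1024 * 1024):
    """fragment_sizes: name -> size in bytes. Return the names of
    fragments that exceed the chunk size and would be split."""
    return [name for name, size in fragment_sizes.items()
            if size > chunk_size]
```

When formatting the database, any fragment reported here should be re-partitioned into smaller pieces before loading it into HDFS.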

\section{Experiments and Analysis}\label{experiment}

\subsection{Experimental Setup }

We conducted comprehensive testing of our proposed SLAM middleware on two clusters, Marmot and CASS, with different storage systems. \emph{Marmot} is a cluster of the PRObE on-site project~\cite{ref:probegarth} housed at CMU in Pittsburgh. The system has 128 nodes / 256 cores; each node in the cluster has dual 1.6GHz AMD Opteron processors, 16GB of memory, Gigabit Ethernet, and a 2TB Western Digital SATA disk drive.
\emph{CASS} consists of $46$ nodes on two racks, one rack including $15$ compute nodes and one head node and the other rack containing $30$ compute nodes. Each node is equipped with dual 2.33GHz Xeon Dual Core processors, 4GB of memory, Gigabit Ethernet and a 500GB SATA hard drive. 

In both clusters, MPICH 1.4.1 is installed as the parallel programming framework on all compute nodes, which run CentOS 5.5 (64-bit) with kernel 2.6. We chose Hadoop 0.20.203 as the distributed file system, configured as follows: one node for the NameNode/JobTracker, one node for the secondary NameNode, and the remaining compute nodes as DataNodes/TaskTrackers.

\subsection{Evaluating Parallel BLAST with SLAM}

For comparison with the open-source parallel BLAST, we deploy mpiBLAST version 1.6.0 on all nodes in the clusters; this version supports query prefetching and coordinated computation and I/O, which combine dynamic load balancing of computation with high-performance noncontiguous I/O. Equipped with our SLAM-I/O layer at each cluster node, HDFS can be mounted as a local directory and used as shared storage for parallel BLAST; BLAST itself can then run on HDFS without recompilation. We implement the fragment location monitor and DC-scheduler and incorporate both modules into the mpiBLAST master scheduler to exploit data locality, as shown in Figure~\ref{fig:ProposedSystemArchitecture}. When running parallel BLAST, we let the scheduler process run on the node where the NameNode is configured and the parallel processes run on DataNodes for locality reasons.

We select the nucleotide sequence database~\textit{nt} as our experimental database. The~\textit{nt} database contains the GenBank, EMBL, DDBJ, and PDB sequences. At the time we performed the experiments, the~\textit{nt} database contained \mbox{17,611,492} sequences with a total raw size of about $45$ GB. The input queries to search against the~\textit{nt} database are randomly chosen from~\textit{nt} and revised, which guarantees that we find some close matches in the database.
 
When running mpiBLAST on a cluster with directly attached disks, users usually run \emph{fastasplitn} and \emph{formatdb} once and reuse the formatted database fragments. To deliver database fragments, we use a dynamic copying method in which a node copies and caches a data fragment only when a search task for that fragment is scheduled on the node. These cached fragments are reused for subsequent sequence searches. mpiBLAST is configured with two conventional file systems---NFS and PVFS2---for comparison with SLAM. We run experiments on NFS because it is the default shared file system on most clusters and the developers of mpiBLAST also use NFS as shared storage~\cite{Lin:mpiBLAST-pio}. Additionally, we installed PVFS2 version 2.8.2 with default settings on the cluster nodes: one node as the metadata server and the other nodes as I/O servers.

We studied how SLAM improves the performance of parallel BLAST. We scaled up the number of nodes in the cluster and compared performance under three host file system configurations: NFS, PVFS2, and HDFS. For clarity, we label them NFS-based, PVFS-based, and SLAM-based BLAST. During the experiments, we mount NFS, HDFS, or PVFS2 as a local file system on each node on which a BLAST process runs.
We used the same input query in all runs and fixed the query size at $50$ KB with 100 sequences, which generated the same output of around $5.8$ MB in every case. The~\textit{nt} database was partitioned into $200$ fragments.

\subsubsection{Results from the Marmot cluster}

%The experimental results collected from Marmot are illustrated in Figures ~\ref{fig:IOperformance},~\ref{fig:InitialDataPreparation},~\ref{fig:DetailAnalysis} and~\ref{fig:ImproveRation_Marmot}.

\begin{figure}[!t]
\centering
\epsfig{file=figures/IOperformance_marmot1.eps, %height=0.20\textheight, 
width=0.42\textwidth}
\caption{Read bandwidth comparison of NFS-, PVFS- and SLAM-based BLAST schemes.
}
\label{fig:IOperformance}
\end{figure}

To test scalability, we collected aggregated read bandwidth for an increasing number of nodes, as illustrated in Figure~\ref{fig:IOperformance}. The bandwidth is computed from the total read time and the overall amount of data processed.
We find that the read bandwidth of SLAM increases greatly as the number of nodes increases, demonstrating good scalability. However, the NFS- and PVFS-based BLAST schemes have considerably lower overall bandwidth and, as the number of nodes increases, do not achieve the same bandwidth growth. This indicates a large data movement overhead over the network in NFS and PVFS that prevents them from scaling efficiently.

\begin{figure}[!t]
\centering
\epsfig{file=figures/InitialDataPreparation1.eps, %height=0.20\textheight, 
width=0.42\textwidth}
\caption{I/O latency comparison of PVFS- and SLAM-based BLAST schemes on the~\textit{nt} database.}
\label{fig:InitialDataPreparation}
\end{figure}

To gain insight into the time cost of data movement, the data input time, referred to as I/O latency, was measured for an increasing number of nodes for both SLAM- and PVFS-based BLAST, as illustrated in Figure~\ref{fig:InitialDataPreparation}. We find that the total I/O latency of PVFS-based BLAST is close to 2,000 seconds for a 32-node cluster and increases thereafter. In contrast, SLAM achieves a much lower I/O latency than PVFS, with latency times a quarter of those of PVFS for clusters of up to 64 nodes and a fifth for larger clusters. NFS-based BLAST performs much worse than both PVFS- and SLAM-based BLAST as the number of nodes increases and is not shown in the figure.

Figure~\ref{fig:DetailAnalysis} shows the particular case of I/O latency times on a 64 node cluster using SLAM, PVFS, and NFS. Three latency figures are presented for each file system: maximum node latency, minimum node latency, and average latency. SLAM excels in having low latency times in all three tests while maintaining a small difference in I/O time between the fastest and slowest node, and ultimately achieves the lowest latency times of all three systems. NFS and PVFS suffer from an imbalance in node latency and on average are considerably slower than SLAM. 

\begin{figure}[!t]
\centering
\epsfig{file=figures/DetailAnalysis-max-min1.eps, % height=0.20\textheight, 
width=0.41\textwidth}
\caption{Max and min node I/O time comparison of NFS-, PVFS- and SLAM-based BLAST on the~\textit{nt} database.}
\label{fig:DetailAnalysis}
\end{figure}

\begin{figure}[!t]
\centering
\epsfig{file=figures/ImproveRation_Marmot2.eps, %height=0.20\textheight, 
width=0.42\textwidth}
\caption{Performance gain in BLAST execution time when searching the~\textit{nt} database using SLAM, compared with NFS- and PVFS-based BLAST.}
\label{fig:ImproveRation_Marmot}
\end{figure}

When running parallel BLAST on a 108-node configuration system, we found the total program execution time with NFS, PVFS and SLAM based BLAST to be 589.4, 379.7 and 240.1 seconds, respectively. We calculate the performance gain as Equation~\ref{equ:improvemnt}, where $T_{\text{SLAM-based}}$ denotes the overall execution time of parallel BLAST based on SLAM and $T_{\text{NFS/PVFS-based}}$ is the overall execution time of mpiBLAST based on NFS or PVFS.
\begin{equation}
\text{improvement} = 1 - \frac{T_\text{SLAM-based}}{T_\text{NFS/PVFS-based}}.
\label{equ:improvemnt}
\end{equation}
As seen in Figure~\ref{fig:ImproveRation_Marmot}, SLAM-based BLAST reduces overall execution latency by 15\% to 30\% relative to NFS-based BLAST for small clusters with fewer than 32 nodes. As the cluster size increases, SLAM reduces overall execution time by a greater percentage, reaching 60\% for a 108-node cluster. This indicates that the NFS-based setting does not scale well. In comparison with PVFS-based BLAST, SLAM runs consistently faster by about 40\% across all cluster settings.
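The reported percentages can be checked directly by plugging the 108-node times quoted above into the performance-gain formula:

```python
# Checking the reported gains against the performance-gain formula.
# The times (in seconds) are the 108-node figures quoted in the text.
def improvement(t_slam, t_baseline):
    return 1 - t_slam / t_baseline

gain_vs_nfs = improvement(240.1, 589.4)   # about 0.59, i.e. ~60% reduction
gain_vs_pvfs = improvement(240.1, 379.7)  # about 0.37
```
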

\subsubsection{Results from the CASS cluster}

\begin{figure}[!t]
\centering
\epsfig{file=figures/searchtime1.eps, %height=0.20\textheight,
 width=0.41\textwidth}
\caption{BLAST execution time (excluding I/O time) comparison of NFS-, PVFS- and SLAM-based BLAST programs on the~\textit{nt} database.}
\label{fig:Searchtime}
\end{figure}

\begin{figure}[!t]
\centering
\epsfig{file=figures/AverageCopytime2.eps,% height=0.20\textheight,
 width=0.41\textwidth}
\caption{Average I/O time of NFS-, PVFS- and SLAM-based BLAST on the~\textit{nt} database.}
\label{fig:AverageCopytime}
\end{figure}

For comprehensive testing, we ran similar experiments on the on-site CASS cluster. We distinguish the average actual BLAST time from the I/O latency to gain insight into scalability.

Figure~\ref{fig:Searchtime} illustrates the average actual BLAST computation time (excluding I/O latency) for an increasing cluster size. We find that the average actual BLAST time in Figure~\ref{fig:Searchtime} decreases sharply as the number of nodes grows. Interestingly, the three systems obtain comparable BLAST performance. This matches our conjecture, as SLAM targets only I/O rather than the BLAST computation; the differences among the three BLAST programs are due to their file system configurations---NFS, PVFS, and HDFS. Figure~\ref{fig:AverageCopytime} illustrates the I/O phase of the BLAST workflow. With NFS or PVFS, the average I/O cost stays around 100 seconds once the cluster size exceeds 15 nodes. In contrast, SLAM adopts a scalable HDFS solution, which realizes a decreasing I/O latency with an increasing number of nodes.

\begin{figure}[!t]
\centering
\epsfig{file=figures/DC-SchedulerDataAccess.eps, %height=0.20\textheight,
width=0.42\textwidth}
\caption{Illustration of data fragments that are accessed locally or remotely according to node ID. The blue triangles represent the data fragments accessed locally during the search, while the red dots represent the fragments accessed remotely.}
\label{fig:DetailedDataAccess}
\end{figure}

The priority of our DC-scheduler is to achieve data-task locality subject to a load-balance constraint.
To explore how effectively the DC-scheduler works (\emph{i.e.}, to what extent search processes are scheduled to access fragments from local disks), Figure~\ref{fig:DetailedDataAccess} shows one snapshot of the fragments searched on each node and the fragments accessed over the network. We ran the experiment five times on a 30-node cluster to check how much data moves through the network, with total fragment counts of 150, 180, 200, 210, and 240, respectively. As seen in Figure~\ref{fig:DetailedDataAccess}, most nodes search a comparable number of fragments locally, and more than 95\% of data fragments are read from local storage.

We also ran mpiBLAST on HDFS using only our I/O translation layer, without the locality scheduler, and found the performance to be only slightly better than that of PVFS-based BLAST. This is because, without the coordination of a data locality scheduler, BLAST processes often need to read data remotely. We show a detailed comparison of this configuration in the following section.

\subsubsection{Comparing with Hadoop-based BLAST}
We show only a simple comparison with Hadoop-based BLAST on Marmot, as such a comparison may be unfair: efficiency, while a design goal of MPI, is not a key feature of the MapReduce programming model.

We chose Hadoop-Blast~\cite{ref:Hadoop-Blast} as the Hadoop-based approach. The database for both programs is~\textit{nt}, and the input query is the same. Using 25 nodes on Marmot, SLAM-based BLAST takes 568.664 seconds while Hadoop-Blast takes 1427.389 seconds. Over several trials, SLAM-based BLAST was consistently more efficient than Hadoop-based BLAST. The reasons are that 1) the task assignment of Hadoop-Blast relies on the Hadoop scheduler, which is built on a heartbeat mechanism; 2) the MPI-based I/O optimizations are not adopted by Hadoop-Blast; and 3) the Java and C/C++ implementations differ in efficiency~\cite{MR_MPI}.
    
\subsection{Evaluating ParaView with SLAM}
To test the performance of ParaView with SLAM, ParaView 3.14 was installed on all nodes in the cluster. To enable off-screen rendering, ParaView uses the Mesa 3D graphics library, version 7.7.1. The DC-scheduler is implemented in the VTK MultiBlock dataset reader for data task assignment. A multi-block dataset is a series of sub-datasets; together they represent an assembly of parts or a collection of meshes. %of different types from a coupled simulation~\cite{vtk}.

To handle MultiBlock datasets, a meta-file with extension ``.vtm'' or ``.vtmb'' is read as an index file, which points to a series of VTK XML data files constituting the subsets. The data files are of type PolyData, ImageData, RectilinearGrid, UnstructuredGrid, or StructuredGrid. %with the extension .vtp, .vti, .vtr, .vtu or .vts. 
Specifically, our scheduling method is implemented in the vtkXMLCompositeDataReader class and called in the function ReadXMLData(), which assigns the data pieces to each data server after processing the meta-file. By replacing the original static data assignment with our DC-scheduler, each data server process receives a task assignment whose associated data is locally accessible.

For our test data, we use Macromolecular datasets obtained from the Protein Data Bank~\cite{abola1984protein}, a repository of atomic coordinates and information describing proteins and biological macromolecules.
The processed output of these protein datasets is polygonal images, and ParaView is used to process and display such structures, as in Figure~\ref{fig:multiblock}.
In our test, we treat each dataset as a time step and convert it to a subset of ParaView's MultiBlock file with extension ``.vtu''.
Because multiple datasets must be downloaded to the test system, we duplicate some datasets with slight revisions and save them as new datasets in ``binary'' mode.
For each rendering step, 96 subsets were selected from 960 datasets. As a result, our test set was approximately 40 GB in total size, or 3.8 GB per rendering step. We use a Python script to set up the visualization environment and the needed filters to create a reproducible test. The script was submitted to the ParaView server via the provided pvbatch utility to produce a test run on a given node count.

\subsubsection{ Performance Improvement in Use of SLAM}
%Performance is characterized by two aspects of the ParaView experiment: the overall execution time of the ParaView rendering and the data read time per I/O operation proceeding program execution.  

\begin{figure}[!t]
\centering
\epsfig{file=figures/ParaViewExecutionTime.eps, %height=0.20\textheight, 
width=0.42\textwidth}
\caption{Execution time of PVFS, HDFS and SLAM based ParaView.}
\label{fig:ParaVieExecutionTime}
\end{figure}

Figure~\ref{fig:ParaVieExecutionTime} illustrates the overall execution time of a ParaView analysis for an increasing number of nodes with the use of PVFS, HDFS, and SLAM. With a small cluster size, the total time of the ParaView experiment did not differ greatly among the three methods, though SLAM-based ParaView showed some advantage.
At 64 nodes, however, SLAM-based ParaView shows its strength on large clusters, with a major reduction in total time compared with PVFS- and HDFS-based ParaView: it executes nearly 100 seconds faster, for a total execution time of 110 seconds. On a 96-node cluster, the difference between SLAM and the other file systems lessens, but a large improvement is still observed, with SLAM-based ParaView executing in 70 seconds, roughly half the time of PVFS- and HDFS-based ParaView.

\begin{figure*}
\centering
\includegraphics[width=0.95\textwidth %height=0.25\textheight
]
{figures/IOOperationTime.eps}
\caption{Trace of time taken for each call to vtkFileSeriesReader with and without SLAM-supported ParaView. Compared with PVFS-based ParaView, the number of spikes in read time is diminished and there is a smaller deviation around the trend line, as computation is kept predominantly on nodes containing local copies of the needed data.}
\label{fig:VTKtime}
\end{figure*}

%Figure~\ref{fig:VTKtime} visualizes the read time of I/O operation for a ParaView simulation on a 96 node cluster on marmot. 

Figure~\ref{fig:VTKtime} shows partial traces of the request time for each call into vtkFileSeriesReader on a 96-node cluster on Marmot.
Each I/O operation reads about 60 MB of data.
With PVFS, data read times are consistently slow because datasets are loaded over the network, and read times frequently exhibit bursts, indicating a network bottleneck in read requests. PVFS achieves a 9.82-second average with a standard deviation of 0.669.
HDFS-based ParaView (without the DC-scheduler) is in certain instances able to achieve very low read times, with a fastest time of 2.63 seconds; however, these instances of quick access are negated by the times when data is not locally available and must be fetched over the network. Overall, it achieves an average read time of 5.65 seconds with a standard deviation of 1.339. SLAM-based ParaView consistently achieves low read times, with only a few outliers in which an I/O operation took longer than usual. SLAM achieves an average read time of 3.17 seconds with a standard deviation of 0.316. Overall, SLAM takes better advantage of large clusters by consistently making data locally available to processes, whereas PVFS and HDFS (without the DC-scheduler) are constantly hindered by network delays.

\subsubsection{Experiments with Lustre and Discussion}

Lustre is a popular parallel cluster file system, known for powering seven of the ten largest HPC clusters worldwide~\cite{schwan2003lustre}; we therefore deploy Lustre as an additional storage system for comprehensive testing.

We set up an experimental architecture similar to an HPC system: dedicated shared storage on a fixed number of I/O nodes, with a variable number of clients accessing the storage.
In practice, Lustre is unlikely to be co-located on the compute nodes within a cluster, since Lustre is highly dependent on the hardware. For example, in the experiments performed on Marmot, if one I/O node is disabled, the storage system becomes inaccessible or very slow. In comparison, an HDFS DataNode failure does not affect storage performance and is in fact completely transparent to the user.


Sixteen nodes were selected as dedicated storage nodes. PVFS2 version 2.8.2 and Lustre version 1.8.1 were installed on them with default settings, with one node acting as the metadata server and the other nodes acting as Object Storage Servers or I/O servers. For HDFS, we always co-located the storage and compute nodes.
We again use the Macromolecular datasets.
 
Using PVFS, Lustre, HDFS, and SLAM, we first run ParaView with 16 data server processes as clients and then increase the number of processes. A comparison of their performance is shown in Table~\ref{table:Lustre}. Each I/O operation reads about 60 MB of data; the processing time is collected from vtkFileSeriesReader. Lustre performs very well in the experiment compared with PVFS and HDFS (without the DC-scheduler); however, like PVFS, it fails to scale after a certain number of client processes is reached, indicating a bandwidth peak. SLAM is the best performer in the experiment, averaging less than four seconds per operation for all numbers of tested client processes.

\begin{table}[h]
\small
\caption {Average Read Time per I/O Operation (s) }
\label{table:Lustre}
\begin{center}
    \begin{tabular}{ | l | l | l | l | l |}% p{5cm} |}
    \hline
     \# of client processes & 16 & 64 & 116 & 152 \\ \hline
    PVFS-based & 6.17 & 11.42 & 20.38 & -- \\ \hline
    Lustre-based& 2.98  & 4.82 & 6.15 & 8.55 \\ \hline
    HDFS-based (w/o Scheduler) & 4.34 & 6.64 & 6.94 & -- \\ \hline
    SLAM-based & 3.039 & 3.47 & 3.91 & -- \\
    \hline
    \end{tabular}
\end{center}
\end{table}

A read performance comparison is shown in Figure~\ref{fig:Lustre}, which illustrates that PVFS and Lustre reach their bandwidth peaks at 64 and 116 client processes, respectively, after which there is no gain in read performance from any increase in client processes. In fact, with 152 client processes, the bandwidth of Lustre is even slightly lower than with 116 client processes. This shows that dedicated storage reaches a bandwidth bottleneck as the number of client processes increases.


\begin{figure}[!t]
\centering
\epsfig{file=figures/Lustre.eps, 
width=0.42\textwidth}
\caption{Read bandwidth comparison of Lustre, PVFS, HDFS (without scheduler) and SLAM based ParaView.
}
\label{fig:Lustre}
\end{figure}

\subsection{Efficiency of SLAM-I/O layer and HDFS}

In SLAM, the translation layer SLAM-I/O enables parallel I/O to execute on a distributed file system. In our prototype, we chose a FUSE mount to transparently relink these I/O operations to the HDFS I/O interface. Thus, we need to evaluate the overhead incurred by the FUSE-based implementation.

%SLAM-I/O is built through the Virtual File System (VFS). The I/O call needs to go through the kernel of the client operating system. For instance, to read an index file~\textit{nt.mbf} in HDFS, mpiBLAST issues an open() call first through the VFS's generic system call (sys-open()). Next, the call is translated to hdfsOpenFile(). Finally, the open operation will take effect on HDFS. We conduct experiments to quantify how much overhead the translation layer running parallel BLAST incurs.

We ran the BLAST programs and measured the time it takes to open the 200 formatted fragments during initialization. We performed two tests: the first directly uses the HDFS library, while the other uses default POSIX I/O, performing the HDFS file opens through our SLAM-I/O layer. For each opened file, we read the first 100 bytes and then close the file. We repeated the experiment several times and found that the average total time through SLAM-I/O is around 15 seconds, whereas the time through direct HDFS I/O was 25 seconds. The difference may result from the overhead of connecting and disconnecting with hdfsConnect() independently for each file. In a second experiment, we ran BLAST processes on multiple nodes through SLAM-I/O; the average time to open a file in HDFS is around 0.075 seconds, which is negligible compared with the overall data read and processing time. In all, a FUSE-based implementation introduces negligible overhead, and SLAM-I/O sometimes actually performs better than the hard-coded libhdfs solution.
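A minimal harness of the kind described above can be sketched as follows. This is an assumed reconstruction, not the authors' actual script; it times open/read/close over a set of fragment paths, which would point into a FUSE-mounted HDFS directory in the SLAM-I/O test and into a libhdfs-backed path otherwise.

```python
import time

def time_opens(paths, nbytes=100):
    """Open each file, read its first nbytes, close it;
    return the total elapsed wall-clock time in seconds."""
    start = time.perf_counter()
    for p in paths:
        with open(p, "rb") as f:
            f.read(nbytes)
    return time.perf_counter() - start
```

Dividing the returned total by the number of paths gives the per-file open cost reported in the text.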

In the default mpiBLAST, each worker maintains a fragment list to track the fragments on its local disk and transfers this metadata to the master scheduler via message passing. The master uses a fragment distribution class to audit scheduling. In SLAM, the NameNode is instead responsible for metadata management. At the beginning of a computational workflow, the fragment location monitor retrieves the physical locations of all fragments by talking to Hadoop's NameNode. We evaluated this HDFS overhead by retrieving the physical locations of 200 formatted fragments and found the average time to be around 1.20 seconds, which accounts for a very small portion of the overall data read time.


\section{Related Work}

Many methods have been used to parallelize BLAST. ScalaBLAST~\cite{ScalaBLAST} is a highly efficient parallel BLAST that organizes processors into equally sized groups and assigns a subset of the input queries to each group. It can use both distributed-memory and shared-memory architectures to load the target database.
Lin~\emph{et~al.}~\cite{lin:EfficientDataAccess} develop another efficient data access method to handle initial data preparation and in-memory result merging. MR-MPI-BLAST~\cite{MR_MPI} is a parallel BLAST application employing the MapReduce-MPI library. However, these parallel applications still follow a compute-centric model and incur a significant amount of data movement for big data problems.

AzureBlast~\cite{Lu:AzureBlast} is a case study of developing science applications such as BLAST on the cloud. 
CloudBLAST~\cite{CloudBLAST} adopts a MapReduce paradigm to parallelize genome index and search tools and to manage their executions in the cloud. Both AzureBlast and CloudBLAST implement only query segmentation, not database segmentation. Hadoop-BLAST~\cite{ref:Hadoop-Blast} and bCloudBLAST~\cite{bCloudBLAST} present MapReduce-parallel implementations of BLAST but do not adopt existing advanced techniques such as collective I/O or computation and I/O coordination. Our SLAM allows parallel BLAST applications to benefit from data locality exploitation in HDFS while retaining the flexibility and efficiency of the MPI programming model.

%There are existing methods to improve the parallel I/O performance. 
Sigovan~\emph{et~al.}~\cite{VisualNetworkIO} present a visual analysis method for I/O trace data that exploits the fact that HPC I/O systems can be represented as networks.
FlexIO~\cite{FlexIO} is a middleware that offers simple abstractions and diverse data movement methods to couple simulations with analytics.
VisIO~\cite{Wang:VisIO} obtains linear scalability of I/O bandwidth for ultra-scale visualization but requires significant hard-coding effort to rewrite the ParaView read methods.
There are other research efforts, such as I/O forwarding techniques~\cite{IOforwarding}, decoupled execution~\cite{chen2012decoupled}, and the server push method~\cite{sun2007server}, that address the big data challenge.
Different from these methods, our SLAM uses an I/O middleware to allow parallel applications to achieve scalable data access with an underlying distributed file system.

Data locality, or in-situ computation, is a desirable technique for improving I/O performance.
Bennett~\emph{et~al.}~\cite{In-situProcessing} develop a platform that realizes efficient data movement between in-situ and in-transit computations performed on large-scale scientific simulations.
Mesos~\cite{Mesos} is a platform for sharing commodity clusters between multiple diverse cluster computing frameworks.
%Mesos shares resources in a fine-grained manner, allowing frameworks to achieve data locality by taking turns reading data stored on each machine. 
The Hadoop Distributed File System (HDFS) is an open-source community response to the Google File System (GFS), designed specifically for MapReduce-style workloads~\cite{dean2008mapreduce}.
Dryad %~\cite{isard2007dryad} 
and Spark~\cite{Spark} are two other frameworks that support data-locality computation.
The idea behind these frameworks is that it is faster and more efficient to send the compute executables to the stored data and process in-situ than to pull the needed data from storage.
However, these general-purpose cluster frameworks usually require their own specific APIs and do not support MPI programming; SLAM is orthogonal to these techniques and allows MPI-based programs to achieve data-locality computation.

%The aforementioned proposed solutions work at different contexts from SLAM, which improves the I/O performance of scientific parallel application by capitalizing on data locality aware computing.

\section{Conclusions}

In this paper, we developed a scalable locality-aware middleware to dramatically improve the I/O performance of scientific analysis and visualization applications. A SLAM-I/O layer is implemented to allow traditional MPI- or POSIX-based applications to run on the Hadoop distributed file system. Although we prototyped SLAM-I/O for mpiBLAST and ParaView, it is applicable to any MPI-based program running over HDFS without any recompilation effort. More importantly, we proposed a novel data-centric load-balancing scheduler that exploits data-task locality and enforces load balance within the cluster. The scheduler is independent of specific applications and could be adopted by other MPI-based programs that benefit from some form of data-locality computation. Through comprehensive experiments on two clusters with four popular file systems, we found that SLAM can greatly reduce the I/O cost and double the overall execution performance compared with existing schemes.

\iffalse
\section{Acknowledgments}

This material is based upon work supported by the National Science Foundation under the following NSF program: Parallel Reconfigurable Observational Environment for Data Intensive Super-Computing and High Performance Computing (PRObE).
\fi

\bibliographystyle{abbrv}
\bibliography{BibFiles/BLAST-HDFS}

\balance

\end{document}
