\section{Introduction}

Many scientific applications, simulations, and visualizations running on high-end computing clusters produce massive amounts of data. For example, nuclear astrophysics simulations demand high temporal resolution and produce data volumes of many petabytes or even exabytes. Similarly, accurate climate modeling can generate remarkable data volumes, expected to reach hundreds of exabytes by 2020. Storing and retrieving such massive data sets for later use and analysis requires extreme amounts of I/O, and the disk access latencies incurred by these data-intensive applications have made I/O a significant performance bottleneck.

Caching is a fundamental and pervasive approach to alleviating disk access latencies for data-intensive applications. Large-scale high-performance computing platforms are typically organized hierarchically and can employ buffer caches at multiple layers. Such architectures can significantly enhance the scalability and availability of the system and reduce I/O operation costs. In general, applications executing on the compute-node layer issue I/O requests that are forwarded to the underlying I/O-server layer. The I/O servers collect the I/O demands of the compute nodes and request data from storage servers connected to the storage devices. In such architectures, each compute node can have a shared or private cache, and each I/O node can have a cache accessible by multiple compute nodes. Similarly, the storage nodes accommodate another level of cache shared by all compute nodes that have access to it. Clearly, how to take advantage of this buffer cache hierarchy for data-intensive scientific applications is critical from the performance point of view.
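The fall-through behavior of such a cache hierarchy can be sketched as follows. This is an illustrative sketch under assumed names and contents, not the paper's implementation: a block request is served by the first layer that holds the block, and only goes to disk if every layer misses.

```python
# Illustrative sketch (assumed names/contents): a block request falls
# through the cache hierarchy -- compute node, I/O node, storage node --
# and is served by the first level that holds the block.
def lookup(block, cache_levels):
    """Return the index of the first cache level that holds `block`,
    or None if every level misses and the request must go to disk."""
    for level, cache in enumerate(cache_levels):
        if block in cache:
            return level
    return None

compute_cache = {4, 5}      # private or shared cache on a compute node
io_cache = {13, 14}         # cache on an I/O server
storage_cache = {20}        # cache shared at the storage layer

levels = [compute_cache, io_cache, storage_cache]
print(lookup(5, levels))    # 0: hit in the compute-node cache
print(lookup(14, levels))   # 1: hit in the I/O-server cache
print(lookup(99, levels))   # None: all levels miss; served from disk
```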

Software approaches have been shown to be effective at improving I/O performance for data-intensive applications. A large-scale data-intensive application may use several layers of software for I/O optimization. For example, high-level I/O libraries such as Parallel netCDF (PnetCDF) are used by applications to specify formats and abstractions for I/O, while I/O middleware such as an MPI-IO implementation organizes and coordinates I/O within applications based on their access patterns. Collective I/O is one of the most important I/O access optimizations in MPI-IO. It improves parallel I/O performance by combining several small, random file accesses into one long, sequential I/O operation. Figure \ref{fig:collorig} illustrates how a group of read operations can benefit from a collective routine. Four processes $P_{0}$, $P_{1}$, $P_{2}$, and $P_{3}$ each request four data blocks at four different times. For example, $P_{0}$ issues requests for blocks 4, 5, 13, and 14 at time $t_{1}$. Without collective I/O, each process has to issue two separate read requests, because Unix-style I/O only allows reading a single contiguous piece of data at a time. In this case, eight small requests in total would be issued from the compute nodes to the I/O server.

\begin{figure}[htbp]
\centering
\includegraphics[width=\columnwidth]{figures/p4f1.eps}
\caption{Collective I/O access}
\label{fig:collorig}
\end{figure}
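The request counting in this example can be sketched in Python. Only $P_{0}$'s blocks (4, 5, 13, and 14) come from the text; the block numbers for the other three processes are assumed purely for illustration.

```python
def contiguous_runs(blocks):
    """Split a sorted list of block numbers into maximal contiguous runs;
    Unix-style I/O needs one read call per run."""
    runs, start, prev = [], blocks[0], blocks[0]
    for b in blocks[1:]:
        if b != prev + 1:       # gap found: close the current run
            runs.append((start, prev))
            start = b
        prev = b
    runs.append((start, prev))
    return runs

requests = {
    "P0": [4, 5, 13, 14],   # blocks named in the text
    "P1": [6, 7, 15, 16],   # assumed for illustration
    "P2": [5, 6, 17, 18],   # assumed; re-requests block 5
    "P3": [9, 10, 20, 21],  # assumed
}
total = sum(len(contiguous_runs(blocks)) for blocks in requests.values())
print(contiguous_runs(requests["P0"]))  # [(4, 5), (13, 14)]: two reads
print(total)                            # 8 small requests in total
```

Each process's four blocks form two contiguous runs, so without collective I/O the compute nodes issue eight separate read calls.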

Instead of making several read calls, collective I/O services the requests from all processes together. As shown in Figure \ref{fig:collorig}, the implementation of collective I/O waits until time $t_{4}$, by which all processes have exchanged their access offsets. After analyzing the access requests of the different processes, collective I/O filters out overlapping requests, combines the interleaved noncontiguous requests, and constructs one large contiguous data access. The advantages of this method are several-fold. First, filtering out overlap eliminates redundant data requests from multiple processes. For instance, block 5 is requested by $P_{0}$ at $t_{1}$ and by $P_{2}$ at $t_{3}$; collective I/O merges these two requests and issues only a single request for this block at $t_{4}$. Second, combining the small, non-contiguous requests reduces the number of system calls, which involve expensive context-switch overhead: the eight small requests in this case are combined into one large contiguous request. Third, servicing contiguous accesses on disk-based I/O servers is generally more efficient than servicing non-contiguous accesses, because contiguous access requires fewer disk head movements.
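The filter-and-combine step can be sketched as follows. As before, only $P_{0}$'s block numbers come from the text; the rest are assumed for illustration.

```python
def collective_merge(per_process_requests):
    """Deduplicate overlapping block requests and combine the remainder
    into a single contiguous access spanning all requested blocks."""
    wanted = set()
    for blocks in per_process_requests.values():
        wanted.update(blocks)              # overlap filtered automatically
    return (min(wanted), max(wanted)), len(wanted)

requests = {
    "P0": [4, 5, 13, 14],   # from the text
    "P1": [6, 7, 15, 16],   # assumed
    "P2": [5, 6, 17, 18],   # assumed; block 5 overlaps with P0
    "P3": [9, 10, 20, 21],  # assumed
}
span, distinct = collective_merge(requests)
print(span)      # (4, 21): one large contiguous request instead of eight
print(distinct)  # 14 distinct blocks; the duplicate request is issued once
```

A single contiguous access over the span replaces the eight small per-process reads, and the overlapping request for block 5 is issued only once.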

However, in this example we can observe that with collective I/O, the detailed access patterns available at the application level are lost by the time the aggregated I/O request reaches the low-level buffer cache. How many times a block is requested, and the original temporal information, are hidden from the low-level buffer cache after collective I/O. Data blocks can therefore reside in low-level buffer caches indiscriminately for a long period before they become cold enough to be evicted by a local replacement algorithm. Furthermore, collective I/O can bring more data elements into the low-level cache than are actually needed. For instance, in Figure \ref{fig:collorig}, at $t_{4}$ four blocks in the combined large data chunk are extra data not truly required by any process. These extra data blocks increase the pressure on the low-level buffer caches: the effective cache capacity is reduced, which in turn degrades application performance. Without proper coordination between the I/O middleware and the low-level buffer cache, this shadowing of the access pattern by collective I/O can leave the buffer cache seriously underutilized.
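The extra-data effect can be made concrete with illustrative block numbers (assumed, apart from $P_{0}$'s blocks named in the text): the single combined access fetches every block in its span, including blocks no process asked for.

```python
# Blocks actually requested by the four processes (P0's from the text,
# the others assumed for illustration).
wanted = {4, 5, 6, 7, 9, 10, 13, 14, 15, 16, 17, 18, 20, 21}

# The single combined access fetches the whole span min..max.
span = range(min(wanted), max(wanted) + 1)      # blocks 4..21
extra = sorted(b for b in span if b not in wanted)
print(extra)        # [8, 11, 12, 19]: four extra blocks
print(len(extra))   # they occupy cache slots without ever being read
```

These four unrequested blocks occupy buffer cache slots, matching the four extra blocks at $t_{4}$ in the figure.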

To address these limitations, in this paper we propose a collective I/O locality-aware buffer cache management scheme, in which the first-level buffer cache is exposed to the original locality of the access stream and thus has better potential to exploit it. The collective I/O aware layer in the buffer cache of Figure \ref{fig:collorig} shows the same data accesses under an alternate cache layout optimized by our approach; its technical details are presented in the following sections. With our strategy, the data elements accessed by processes are organized based on their original access locality and stored in consecutive locations, which helps minimize the number of data blocks occupied in the buffer cache.
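A minimal sketch of the idea follows (names and most block numbers are assumed; this is not the actual implementation): cache only the blocks that processes actually requested, packed into consecutive slots in their original access order, instead of the whole combined span.

```python
def locality_aware_layout(timed_requests):
    """timed_requests: (time, process, block) tuples. Place each distinct
    requested block once, in consecutive slots ordered by first access."""
    seen, layout = set(), []
    for _, _, block in sorted(timed_requests):
        if block not in seen:       # duplicates occupy no extra slot
            seen.add(block)
            layout.append(block)
    return layout

# Assumed access stream; P0's blocks at t1 come from the text.
stream = [(1, "P0", 4), (1, "P0", 5), (1, "P0", 13), (1, "P0", 14),
          (2, "P1", 6), (2, "P1", 7), (3, "P2", 5), (3, "P2", 17)]
layout = locality_aware_layout(stream)
print(layout)       # [4, 5, 13, 14, 6, 7, 17]: block 5 stored once
print(len(layout))  # 7 slots vs. 14 blocks for the full span 4..17
```

Because the layout preserves the application-level ordering and skips both duplicates and unrequested gap blocks, the cache footprint shrinks from the full span to exactly the blocks the processes need.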

In summary, we make the following contributions in this paper: 
\begin{itemize}
\item First, we identify the performance bottleneck imposed by collective I/O;
\item Second, we propose a new collective I/O layout-aware strategy that exposes the locality of the original access stream to the first-level buffer cache;
\item Third, we demonstrate with synthetic and application benchmarks that our approach yields significant performance improvements.
\end{itemize}

The rest of this paper is organized as follows. Section \uppercase\expandafter{\romannumeral2} briefly reviews essential concepts of collective I/O and middleware caching as the background of this study. The design and implementation of our collective I/O aware cache management strategy are presented in Section \uppercase\expandafter{\romannumeral3}, and the evaluation methodology and experimental results with analysis are given in Sections \uppercase\expandafter{\romannumeral4} and \uppercase\expandafter{\romannumeral5}. Section \uppercase\expandafter{\romannumeral6} discusses related work, including the latest advancements in this field, and compares them with this study. We conclude in Section \uppercase\expandafter{\romannumeral7}.

