\section{Collective I/O Aware Buffer Cache}

Each I/O request from an application represents a caching opportunity for the lower-level storage system. In this section, we propose a collective I/O aware buffer cache framework that makes application-specific access information available to the low-level cache. We also present methods for exploiting this knowledge to improve the overall buffer cache performance.

\subsection{Collective I/O Aware Buffer Cache Framework}

\begin{figure}[htbp]
\centering
\includegraphics[width=\columnwidth]{figures/temparch.eps}
\caption{Collective I/O aware cache framework}
\label{fig:arch}
\end{figure}

Figure \ref{fig:arch} illustrates a high-level view of the proposed collective I/O aware cache framework. As shown in the figure, the computations of parallel scientific applications are carried out on compute processes, which generate a number of I/O requests for the underlying parallel file system. Each parallel process launches a main thread to perform I/O-related operations. A caching helper thread is attached to each main thread to assist with buffer cache data management. It delivers the original access information of each parallel process to the pattern detection module.

The pattern detection module is embedded inside the MPI I/O library. It collects and processes the stream of access requests dispatched from the caching helper threads. Since I/O requests are specified with MPI datatypes and file views, the access information is evident, and the pattern detection module can easily exploit it to build pattern values and pass them to the underlying MPI I/O caching library. The pattern detection module also tracks function-call identifiers to synchronize the caching helper thread with the main thread's collective I/O calls, ensuring that the caching helper thread runs in the correct order.

The caching library maintains a global buffer cache among multiple processes at the client side. It captures the access patterns transferred from the pattern detection module and manages the actual fetching of data into the buffer cache. The regular collective I/O is enhanced to take advantage of the cached data residing in the buffer cache: an I/O requesting process must first check the caching status of the requested blocks before exchanging I/O accesses with other processes. If the requested blocks are cached, the requesting process fetches the data directly from the buffer cache.

The performance of cache replacement operations is highly dependent on the application access pattern; therefore, a single fixed replacement algorithm cannot properly serve all types of access patterns across applications. Given the diversity of access patterns obtained from the pattern detection module, the caching library provides a replacement module with a repository of multiple replacement algorithms. The replacement module is responsible for making appropriate replacement decisions according to the access patterns so as to maximize cache utilization.

\subsection{MPI I/O Access Pattern Detection}

The success of the proposed collective I/O aware buffer cache management relies on extracting and utilizing the original I/O access information before the collective I/O aggregation. We choose a multi-threading approach to obtain the actual access information of each parallel process. A caching helper thread is created in each MPI process when the file is opened and destroyed when it is closed. The caching helper thread shares certain resources with the main thread, such as MPI file handles, file views, and the process rank. It performs only the essential computation for data address calculation: a list of offsets and a list of request sizes are created and delivered to the pattern detection module.

One caching helper thread evaluates only how a file is accessed by its local process. The pattern detection module receives local patterns from all processes involved in collective I/O operations, analyzes them, and combines them into a global pattern. The pattern detection module considers the following four factors when producing the global pattern: I/O operation type, spatial locality, temporal pattern, and iterative behavior.

The I/O operation type is classified as read, write, or read/write. The spatial locality can be contiguous, noncontiguous, or a combination of contiguous and noncontiguous patterns. When the application conducts one collective I/O operation, each process may access several noncontiguous portions of a file, while the requests of multiple processes often overlap. These gaps and overlaps help the caching library identify candidate data blocks to be placed into the buffer cache. Capturing temporal patterns is also helpful for organizing the cache blocks: if a particular data block is requested by one process at some point in time, it is likely that the same block will be requested again in the near future. The replacement module in our proposed caching library manages the cache blocks by taking advantage of the temporal information obtained from previous I/O accesses. Finally, scientific parallel programs using MPI I/O usually issue data requests within a few loops; this I/O access pattern can be described as iterative behavior. When repetitive I/O access patterns are captured, some data blocks can effectively be kept longer in the cache, so the cached data can be fully used before being evicted to make room for new blocks.

Taking the factors mentioned above into account, the global pattern stores the file descriptor, process id, I/O operation, time stamp, dimension, starting offsets, request sizes, and number of repetitions. Consider, as an example, a global pattern value with parameters \{[3], READ, 0.023184, 1, [(2622716, 510080), (1573632, 510080)], 64\}. This indicates a one-dimensional read access pattern: at time 0.023184, the third MPI process accesses two regions whose starting offsets are 2622716 and 1573632, respectively, with a request size of 510080 bytes for both accesses. This one-dimensional pattern repeats 64 times. Using the pattern value and the data block size, the caching library can identify the set of data blocks to keep in the buffer cache.

\subsection{MPI I/O Caching Library}
An MPI-IO based data cache can leverage other MPI library components to take advantage of the collective nature of parallel I/O. For example, MPI communicators allow users to contact remote processes transparently. Incorporating caching into the MPI library also increases implementation portability, since MPI-IO based caching easily interfaces with different underlying file systems. Several research projects have developed MPI-IO caching libraries. Active buffering uses additional memory available at the compute node to copy the user's output buffer into a managed output buffer, improving the performance of MPI collective write operations. Liao et al. developed a collective caching library implemented at the MPI-IO level, which maintains a global buffer cache among multiple processes at the client side. We use this library as the starting point for our study.

Figure \ref{fig:ColCachLib} shows a high-level view of the collective caching design. Each client contributes part of its memory to construct the global cache pool, and cached data is transferred among clients through the high-speed interconnect network. Metadata of cached blocks is maintained to locate data quickly, and a simplified cache-coherency protocol maintains consistency among copies in the cache pool: at most a single copy of any file data is allowed to be cached among all MPI processes. In this study, we customize the collective caching prototype by disabling write caching and enabling read caching only. In addition, we employ a replacement module in conjunction with the pattern detection results to direct the caching policy. The details are discussed in the next subsection.

\begin{figure}[htbp]
\centering
\includegraphics[width=0.35\textwidth]{figures/ColCachLib.eps}
\caption{High-level view of the collective caching design}
\label{fig:ColCachLib}
\end{figure}

\subsection{Using Replacement Module for Caching Management}

The replacement module in the MPI-IO caching library manages the cache by applying the replacement policy that best utilizes the cache under a given access pattern. By exploiting the original access patterns delivered from the pattern detection module when making block replacement decisions, cache performance can be enhanced. There has been extensive research on cache replacement algorithms, e.g., LRU, LFU, FIFO, and ARC. A single replacement algorithm cannot be effective for a wide range of access patterns; it is more desirable for the buffer cache to have a repertoire of replacement algorithms and to apply the most suitable one for each access pattern. In this subsection, we illustrate how cache replacement policies can be extended to take advantage of the original access patterns of MPI-IO processes.
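The repertoire idea can be sketched as a dispatch table of victim-selection functions. This is a simplified illustration with invented names, covering only two of the policies mentioned above; the real replacement module's interface may differ.

```c
#include <string.h>

/* A victim selector takes per-block metadata and returns the index of the
 * block to evict. last_access holds each block's last access time;
 * load_time holds the time it entered the cache. */
typedef int (*select_victim_fn)(const double *last_access,
                                const double *load_time, int nblocks);

/* LRU: evict the block with the oldest last access. */
static int lru_victim(const double *last_access, const double *load_time,
                      int nblocks)
{
    (void)load_time;
    int v = 0;
    for (int i = 1; i < nblocks; i++)
        if (last_access[i] < last_access[v]) v = i;
    return v;
}

/* FIFO: evict the block that was loaded earliest. */
static int fifo_victim(const double *last_access, const double *load_time,
                       int nblocks)
{
    (void)last_access;
    int v = 0;
    for (int i = 1; i < nblocks; i++)
        if (load_time[i] < load_time[v]) v = i;
    return v;
}

/* Choose a policy from the detected global pattern (labels illustrative). */
static select_victim_fn pick_policy(const char *pattern)
{
    if (strcmp(pattern, "one-shot sequential") == 0)
        return fifo_victim;   /* recency carries little reuse signal */
    return lru_victim;        /* default for patterns with temporal reuse */
}
```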

\subsubsection{Collective I/O Access Patterns with LRU}
We extend the Least Recently Used (LRU) cache replacement policy by using the original temporal locality, normally filtered out by collective I/O, to manage the LRU list and to decide whether or not to cache accessed blocks.

We first extract the starting offsets and request sizes from each global pattern value. Each request is divided into blocks whose size equals the buffer cache block size, and we check whether each block is already in the buffer cache. If the block is cached, we update its last access time with the original temporal information and move it to the most-recently-used end of the list; the block is then copied directly from the buffer cache to the user's buffer with \textit{memcpy()}, at the location determined by the index of the requested block within the user's buffer. If the block is not in the buffer cache, the data is read from disk, the least recently used block is evicted (flushing it first if it is dirty), and the new block is fetched into the freed frame with its original time stamp. Algorithm \ref{alg:LRU} summarizes this procedure.

\IncMargin{1em}
\begin{algorithm}
\SetAlgoNoEnd
\SetKwData{Left}{left}\SetKwData{This}{this}\SetKwData{Up}{up}
\SetKwFunction{Union}{Union}\SetKwFunction{FindCompress}{FindCompress}
\SetKwInOut{Input}{input}\SetKwInOut{Output}{output}
\SetAlgoNoLine
\SetAlgoShortEnd
%\LinesNumbered
\Input{A sequence of global pattern values $S_{v}$ from the pattern detection module}
\Output{The contents of buffer cache}
\BlankLine

\ForEach{global pattern value $g_{v}$ $\in$ $S_{v}$}{
split $g_{v}$ into blocks $B_{s}$ \;
\ForEach{block $b_{i}$ $\in$ $B_{s}$}{
\eIf(\tcp*[f]{cache hit}){$b_{i}$ $\in$ buffer cache}
{hits++\;
\tcp{update $b_{i}$ last access time}\label{cmt}
$Last(b_{i})$ $\leftarrow$ $b_{i}$ time stamp\;
\tcp{copy data $b_{i}$ to user using memcpy()}\label{cmt}
user specified buffer $\leftarrow$ $b_{i}$ in buffer cache\;
}(\tcp*[f]{cache miss})
{
\tcp{perform I/O from disk}\label{cmt}
user specified buffer $\leftarrow$ $b_{i}$ from disk\;
\tcp{evicting the LRU block}\label{cmt}
min $\leftarrow$ current time\;
\ForEach{block $b_{j}$ $\in$ buffer cache}{
\If{Last($b_{j}$) < min}{
victim $\leftarrow$ $b_{j}$ \;
min $\leftarrow$ $Last(b_{j})$ \;
}
}
\If{victim is dirty}{
flush the victim to the disk\;
}
fetch $b_{i}$ into the buffer frame held by victim\;
$Last(b_{i})$ $\leftarrow$ $b_{i}$ time stamp\;
}
}
}
\caption{Collective I/O Access Pattern Aware LRU}\label{alg:LRU}
\end{algorithm}
\DecMargin{1em}


\begin{figure}[htbp]
\centering
\includegraphics[width=\columnwidth]{figures/lru.eps}
\caption{Collective I/O access patterns with LRU}
\label{fig:lru}
\end{figure}





\subsubsection{Collective I/O Access Patterns with LFU}
\begin{figure}[htbp]
\centering
\includegraphics[width=\columnwidth]{figures/lfu.eps}
\caption{Collective I/O access patterns with LFU}
\label{fig:lfu}
\end{figure}

\subsubsection{Collective I/O Access Patterns with FIFO}
\begin{figure}[htbp]
\centering
\includegraphics[width=\columnwidth]{figures/fifo.eps}
\caption{Collective I/O access patterns with FIFO}
\label{fig:fifo}
\end{figure}

\subsubsection{Collective I/O Access Patterns with ARC}
\begin{figure}[htbp]
\centering
\includegraphics[width=\columnwidth]{figures/arc.eps}
\caption{Collective I/O access patterns with ARC}
\label{fig:arc}
\end{figure}
