\section{Related Work}


\subsection{Hierarchical Cache Management}

Most prior work focuses on improving storage (second-level) cache management because the behavior of the second-level cache is often hard to characterize, making conventional cache management schemes inadequate. In particular, Zhou et al. show that LRU is not suitable for managing storage caches. To address LRU's poor performance, several techniques have been proposed, including multi-queue replacement, eviction-based placement policies, and CLOCK. Choi et al. propose a fine-grained file-level characterization of chunk references in buffer management. Vilayannur et al. present selective caching, since caching certain blocks is not always beneficial. Sarhan and Das propose using on-disk buffers to cache the intervals between successive streams, while multimedia-on-demand servers improve resource sharing through intelligent request schedulers. Our approach complements any existing caching policy by improving cache locality through file layout transformation.
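To make the LRU discussion concrete, the following is a minimal sketch of a recency-ordered block cache (class and method names are ours, not from any cited system). At a second-level cache, most requests are misses already filtered by the client cache, so recency correlates poorly with reuse, which is the behavior the studies above set out to fix:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal illustrative LRU block cache (not any cited system's code)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()  # block_id -> data, oldest first

    def get(self, block_id):
        if block_id not in self.blocks:
            return None  # miss: caller fetches from disk, then calls put()
        self.blocks.move_to_end(block_id)  # mark as most recently used
        return self.blocks[block_id]

    def put(self, block_id, data):
        self.blocks[block_id] = data
        self.blocks.move_to_end(block_id)
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)  # evict least recently used
```

Multi-queue replacement and CLOCK replace the single recency queue above with frequency-aware or approximated orderings better suited to this filtered request stream.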

Recently, many studies have looked into cache management for multi-level storage hierarchies. The main motivation for these studies is that modern networked storage systems contain a hierarchy of caches, and special care is needed to manage those cache hierarchies efficiently. A key question is how to reduce negative interference while keeping the most valuable blocks in the shared cache. Techniques to identify and predict the most valuable blocks include transforming application-level requirements into I/O reservations, correlating program counters with program context, exploiting reference regularities, exploiting the non-uniform strength of locality among file chunks, and automatically detecting application reference patterns. Wong and Wilkes explore exclusive cache policies as an alternative to the prevalent inclusive ones. These studies are system-level approaches and are, therefore, orthogonal to ours.
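The exclusive policy explored by Wong and Wilkes can be sketched as follows; this is our own illustrative simplification of the DEMOTE idea, and all names are ours. The point is that the two levels never hold duplicate copies: a block promoted to the client cache is dropped from the storage cache, and client evictions are demoted back down rather than discarded:

```python
class ExclusiveStorageCache:
    """Illustrative exclusive (DEMOTE-style) second-level cache sketch."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = {}  # dict preserves insertion (demotion) order

    def read(self, block_id):
        # Exclusivity: once the block moves up to the client cache,
        # drop our copy so the hierarchy holds no duplicates.
        return self.blocks.pop(block_id, None)

    def demote(self, block_id, data):
        # A block evicted by the client is demoted here instead of
        # being discarded, as an inclusive hierarchy would in effect do.
        if len(self.blocks) >= self.capacity:
            oldest = next(iter(self.blocks))  # evict oldest demotion
            del self.blocks[oldest]
        self.blocks[block_id] = data
```

The aggregate effective cache size approaches the sum of the two levels, instead of being bounded by the larger one as under inclusion.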

\subsection{Middleware-based Caching}
Cooperative caching seeks to improve network file system performance by sharing the contents of client data caches. In cluster environments, where high-performance, low-latency message-passing networks are commonly available, retrieving cached data from remote clients can improve file system throughput. Cooperative caching offers the greatest opportunity for performance improvement when clients exhibit a large degree of inter-client sharing. Many projects have explored cooperative caching within the file system as an effective means of improving performance. Bragodia et al. performed simulation studies of cooperative caching for MPI-IO benchmarks with four and eight client processes. Their simulator, MPISIM, was used to study the number of disk accesses performed under several varieties of cooperative caching. The results are difficult to extrapolate to modern machines because of the small number of client processes and the dissimilarity of the simulator to current parallel I/O systems (e.g., all file system interactions use list I/O).

The Center for Ultra-Scale Computing and Information Security at Northwestern University has prototyped several file cache designs with ROMIO, an open-source implementation of the MPI-IO standard. The basic approach partitions the file into a set of fixed-size pages, and each page is assigned to a single compute node by taking the page number modulo the number of nodes. Client processes access file data by requesting it from the node responsible for the cached page rather than by accessing the file system directly, a cooperative caching approach. In one scheme, file data may be cached only at the node responsible for the page. Another scheme maintains a directory at the responsible node so that other nodes may also cache the page. All of these schemes require that file data be cached at only one node and that all file accesses occur on page-aligned boundaries. Our studies explore the effects of relaxing the requirement to cache data in only one location and measure the benefits of allowing file accesses on non-aligned boundaries.
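The static page-to-node mapping described above can be sketched in a few lines; the function names and the page size are our own illustrative choices, not details of the ROMIO prototypes. The second helper also shows why non-aligned accesses complicate these designs: a request that straddles a page boundary must contact every owning node separately:

```python
def page_owner(offset, page_size, num_nodes):
    """Owner of the page containing byte `offset`: page p lives at node p mod N."""
    page = offset // page_size
    return page % num_nodes

def owners_for_range(offset, length, page_size, num_nodes):
    """Owners a client must contact for a (possibly non-aligned) byte range."""
    first_page = offset // page_size
    last_page = (offset + length - 1) // page_size
    return [p % num_nodes for p in range(first_page, last_page + 1)]
```

With page-aligned accesses every request maps to exactly one owner, which is what lets the single-location schemes avoid any coherence traffic.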
