\section{Prototype Implementation}

For example, the hint interpreter can be plugged into an I/O library, or integrated into the client-side file system in system space to improve the performance of cache buffers for client processes. It can also be incorporated into the server-side file system, where it dynamically guides I/O server cache management.

I implemented a detailed trace-driven shared buffer cache simulator that mimics the storage model described in Section 3.1. The simulator is adapted from the Purdue buffer cache simulator, AccuSim. We simulate the disk by interfacing the multi-level buffer cache simulator with DiskSim [28], and use a message-passing library to simulate the interaction between multiple buffer caches. Similar to the Linux buffer cache, our simulator maintains a global free list and a per-disk list of buffers; a hash table is used to facilitate fast search. When a request for data arrives, the disk buffer list is searched. If the requested block is already in the buffer cache, the pointers are rearranged to reflect the access (depending on the cache replacement policy). If not, a buffer from the free list is allocated for the data. If no buffers remain on the free list, the data in one of the in-use buffers is replaced with the newly requested data (the victim chosen depends on the replacement policy).
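The lookup path described above can be sketched as follows. This is a minimal illustration, not the simulator's actual code: the class and method names are invented here, and LRU stands in for whichever replacement policy is configured.

```python
from collections import OrderedDict

class BufferCache:
    """Sketch of the simulator's lookup path: a hash table (dict)
    indexes cached blocks, a free list supplies empty buffers, and
    LRU is used here as an example replacement policy."""

    def __init__(self, num_buffers):
        self.free = list(range(num_buffers))   # ids of unused buffers
        self.table = OrderedDict()             # (disk, block) -> buffer id, in LRU order
        self.hits = 0
        self.accesses = 0

    def access(self, disk, block):
        """Return True on a cache hit, False on a miss."""
        self.accesses += 1
        key = (disk, block)
        if key in self.table:
            # Hit: rearrange to reflect the access (move to MRU position).
            self.table.move_to_end(key)
            self.hits += 1
            return True
        if self.free:
            # Miss with free buffers available: allocate one.
            buf = self.free.pop()
        else:
            # Miss with no free buffers: evict the LRU victim.
            _, buf = self.table.popitem(last=False)
        self.table[key] = buf
        return False
```

A short run shows the behavior: with two buffers, accessing blocks 1, 1, 2, 3 yields a hit only on the second access, and block 3 evicts block 1.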

The traces used to drive the simulator are obtained from live applications running on Linux using a utility such as strace. The captured traces contain information such as the inode, the size of the I/O block, the type of I/O access (seek, read, or write), and the process id.
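A trace record of this kind might be parsed as sketched below. The exact on-disk trace format is not specified in the text, so the whitespace-separated field layout here is an assumption for illustration only.

```python
from collections import namedtuple

# Hypothetical record layout; the real field set and ordering depend
# on how the strace output was post-processed.
TraceRecord = namedtuple("TraceRecord", "pid op inode offset size")

def parse_line(line):
    """Parse one whitespace-separated trace line, e.g.
    '1234 read 8217 4096 512', into a TraceRecord."""
    pid, op, inode, offset, size = line.split()
    return TraceRecord(int(pid), op, int(inode), int(offset), int(size))
```

Feeding each parsed record's (inode, offset) pair to the cache simulator then replays the application's I/O behavior.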

%The major simulation parameters used along with their default values are given in Table 2.1. Additionally, I implemented the APP [22] scheme described in Chapter 3 and Mnemosyne described in Chapter 5 in Linux 2.6 and used 10K RPM Seagate Cheetah disks [30] for all experiments. The experiments were executed on a 16-node Linux cluster. Each machine in the testbed is a dual-core AMD Athlon node with 512MB main memory connected by 1GBps Ethernet and Myrinet. The APP module is coded as a multi-threaded user-level library and the modules distributed on the different machines communicate with one another using TCP.

I define the overall buffer cache hit rate as the ratio of the total number of buffer cache hits to the total number of buffer cache accesses made by all concurrently executing applications since the first application was instantiated on the system (i.e., since time = 0). The overall storage cache hit rate thus differs from the individual hit rate of an application (which accounts only for the hits and accesses of that application since its own instantiation): it captures all hits and accesses from all applications that exercise the storage system since the initiation of the first application.
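The definition above can be stated formally. The symbols $h_i$ and $a_i$, denoting the hit and access counts of application $i$, are notation introduced here for clarity:
\begin{equation}
H_{\mathit{overall}} = \frac{\sum_{i} h_i}{\sum_{i} a_i},
\end{equation}
where the sums run over all applications that have exercised the storage system since time $= 0$, whereas the individual hit rate of application $i$ is simply $h_i / a_i$.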

I define disk throughput as the amount of data transferred to and from the disk per second. The higher the disk throughput, the better the performance of the application, i.e., the lower the application execution time. The disk throughput can be expressed as
\begin{equation}
\mathit{Throughput} = \frac{\mathit{Data}_{\mathit{read}} + \mathit{Data}_{\mathit{written}}}{\mathit{Elapsed\ time}}.
\end{equation}


%We focus on two-tier cache hierarchies for clarity of presentation, but our discussion and proposed techniques extend to cache hierarchies with more than two tiers