\documentclass[twocolumn,10pt]{article}
\usepackage{usenix}
\usepackage{url}
\usepackage{graphics}
\usepackage{listings}
\usepackage{hyperref}
\usepackage{algorithm2e}

% -------------------------------------------------------------------------
%                                   Title
% -------------------------------------------------------------------------

\title{\bf \textsf{Analysis of Memory and Filesystem Performance}}

\author{Anand Krishnamurthy \\
       {\em \normalsize Computer Sciences Department}\\
       {\em \normalsize University of Wisconsin-Madison}\\
       {\tt \normalsize anand@cs.wisc.edu}}

\begin{document}
\date{}
\raggedbottom

\maketitle

\pagenumbering{arabic}
\pagestyle{plain}
\setlength{\footskip}{25pt}
% -------------------------------------------------------------------------
%                               Abstract
% -------------------------------------------------------------------------
\begin{abstract}
\small
\noindent  A deep understanding of memory and filesystems is essential for knowing both their strengths and weaknesses. With such understanding comes the ability to reason about and make design decisions when architecting or configuring memory and filesystems for various workloads, and when designing workloads for a given system. In this project, we analyze the performance of memory and the filesystem on a UNIX-based machine by designing various workloads. In doing so, we present the strengths and stress points of memory and the filesystem and provide guidelines that help application developers, OS developers, and system administrators obtain the best performance for their applications from the OS.
\end{abstract}


% -------------------------------------------------------------------------
%                               Introduction
% -------------------------------------------------------------------------
\begin{sloppypar}

\section{Introduction}
\label{sec:Introduction}

A deep insight into the behaviour of memory and the filesystem enables the development of optimized applications. Application-level read/write requests traverse multiple stages of the I/O stack in the operating system and hardware, such as memory, the processor, and disks. There are complex interactions among these components of the I/O stack, and understanding their implications on performance for different workloads is a valuable skill. The motivation for this skill is twofold. First, with it, an application developer can avoid designs whose memory/filesystem interactions perform poorly. Second, an OS developer can identify performance problems for a specific application and tune OS properties to improve the application's performance, or upgrade the hardware accordingly.

The goal of this project is to design three workloads each for memory and the file system:
\begin{itemize}
\item a ``\emph{good}" workload to obtain best possible performance
\item a ``\emph{bad}" workload to obtain worst possible performance
\item an ``\emph{ugly}" workload to obtain performance somewhere in between the good and bad workloads.
\end{itemize}
Each workload consists of I/O operations to memory and disks in a pattern chosen to exhibit the corresponding behaviour of the system.

The rest of the report is organized as follows. Section 2 provides the background of the project and the hardware and software details of the system on which the workloads are run. Section 3 analyzes the performance of memory with different workloads, and Section 4 analyzes the file system performance. In both Sections 3 and 4, we give pseudocode of the workloads where required. Section 5 contains solutions to the additional-point questions. We conclude with our learnings in Section 6.

% -------------------------------------------------------------------------
%                                   Background, Experimental Platform
% -------------------------------------------------------------------------

\section{Background}
\label{sec:Background}

\subsection{Experimental Platform}
The experimental setup consists of a Dell Inspiron laptop with the configuration shown in the table below. The laptop is set up for dual boot and can be booted into either Kubuntu 12.10 or Windows 7. The hard disk has three partitions, of which two are NTFS \cite{NTFS} partitions used by the Windows operating system. We run all the experiments in the Ext4 \cite{EXT4} partition; the other two partitions are unmounted while the experiments run. The actual RAM size of the system is 8GB, but the maximum address limit is set to 2GB using the kernel boot option \emph{\lstinline$mem=2GB$} \cite{kernelboot}.
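For reference, a sketch of how such a \emph{mem} limit can be applied on a GRUB-based Ubuntu-family system (the file path and the other option values shown are assumptions, not taken from our exact setup):

```shell
# /etc/default/grub -- cap the kernel's usable RAM at 2GB
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash mem=2GB"

# Regenerate grub.cfg and reboot for the limit to take effect
sudo update-grub
sudo reboot
```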
\subsection{Timer}
The accuracy of the timer has a great impact on the accuracy of the measurements. In all the experiments described in this report, we used the POSIX high-resolution timer interface, clock\_gettime() \cite{clockgettime}, in CLOCK\_REALTIME mode. On x86 machines this internally uses the Time Stamp Counter (TSC) to keep highly accurate time and is able to provide nanosecond accuracy.

For the rest of the report, we refer to the primary memory or RAM as just memory and refer to the secondary memory as disks.
\newline \newline
\begin{tabular}{| p{2.7cm} | p{4cm}|}
    \hline
    \textbf{\textit{Configuration /\newline Parameter}} & \textit{Value} \\ \hline    
    \textbf{Processor} & Intel(R) Core(TM) i5-3210M CPU @ 2.50GHz \\ \hline    
    \textbf{Primary \newline Memory} & 2GiB SODIMM \newline DDR3 Synchronous 1600 MHz (0.6 ns) \newline (Reduced from 8GB by grub option) \\ \hline    
    \textbf{Secondary Memory} & WD 1000GB hard drive \newline Rotational Speed: 5400 RPM \newline Average Latency: 5.50 ms \newline Logical Sector size: 512 bytes \newline Physical Sector size: 4096 bytes
\\ \hline    
    \textbf{Operating system and Kernel version} & Kubuntu 12.10 32-bit \newline Linux kernel 3.5.0-17\\ \hline  
    \textbf{FileSystem} & EXT4(fourth extended filesystem) - 250GB \\ \hline    
    \textbf{Device \newline readAhead} & 128KB\\ \hline  
\end{tabular}
\newline

% -------------------------------------------------------------------------
%                                   Memory
% -------------------------------------------------------------------------
\section{Analysing Performance of Memory}
\label{sec:Analysing Performance of Memory}

\subsection{Life cycle of a Memory Read}
We briefly describe the various components involved in a primary-memory I/O. Let us consider a memory read and trace the elements that may be involved in it and their impact on its latency (Figure~\ref{fig:life_cycle_of_memory_op}). The journey begins with the fetch, decode, and execute cycle of a program. Say we are in the middle of a program and the current instruction pointer points to a memory location with the instruction ``movl -92(\%ebp), \%edx". First the processor fetches and decodes the instruction (1). Say the value of -92(\%ebp) is 4095; the instruction moves the value at location 4095 to the edx register. The address 4095 is a virtual address and may be mapped to a different physical address. The processor checks whether the data is present in the L1 cache (2) and, if not, checks the L2 cache (3). If both miss, it checks whether the TLB (translation lookaside buffer) has a mapping between the virtual address and the physical address of the page containing it (4). If so, it fetches the data directly from RAM (page address + offset = location). If not, it walks the page table (6), using the page-table base register and length register for the virtual page (5), and fills in the TLB (7). It then fetches the data (the value 500 in our example) from RAM (8). The nominal latency of accessing a location in RAM is around 70ns; this value was obtained in one of our experiments and confirmed by lmbench. So our expectation is that a miss in the L1 cache, L2 cache, and TLB, resulting in a page walk, should add overhead on the order of nanoseconds.

Under Linux, a page of a process can be paged out to disk to make room for other pages of the same process or of other processes. This makes it possible to run multiple processes whose combined virtual address space is much larger than physical memory. Say the page holding the value 600 has been paged out to disk. The instruction pointer now points to the next memory location, whose instruction is ``movl -108(\%ebp), \%edx", and the value of -108(\%ebp) is 5011. After steps a), b), c), d), e), and f), analogous to 1)--6) above, the processor finds that the page for virtual address 5011 is not present in any physical page and has in fact been paged out to disk, i.e., a major page fault has occurred. The page-fault handler reads the page from the paging file on disk (g) into primary memory (h) and adds the entry to the page table (i). The TLB is also filled with the new mapping (j), and the data is read from RAM by the processor. As we saw in the disk configuration, the nominal latency of a disk read is around 5.5 milliseconds, roughly \begin{math}10^5\end{math} times the latency of a read from RAM.
Thus we have identified the key bottleneck in the memory I/O stack and we can design good, bad and ugly workloads accordingly.

\begin{figure}[h!!]
\centering
\resizebox{3.4in}{!}  
{\includegraphics{Memory_Stack}}
\caption{ \small \bf Life cycle of a memory read}
\label{fig:life_cycle_of_memory_op}
\end{figure}

\subsection{Workloads}

From the life cycle of a memory read, it is clear that a good workload should access only RAM and should never cause a page to be read from disk. We now work backwards and design a workload and an environment that produce this behaviour. What can cause a major page fault? When the resident sets of all currently running processes plus the kernel exceed physical memory, some pages are chosen heuristically to be paged out.

We design a single program and tune the memoryToAllocate and readType parameters to get the different performances out of memory. The program allocates the specified amount of memory and performs one of three read patterns. In sequentialReads, we start at a random offset in the buffer and read the specified number of bytes sequentially. In randOffsetsAndSeqReads, we do runs of sequential reads starting at a number of random offsets. In randomReads, all reads are at random offsets in the buffer.

\begin{algorithm}
 \caption{MemoryAnalyzing Workload}
     \KwIn{memoryToAllocate, numReads, readType}
     \KwOut{throughput, latency}
     buffer = allocate "memoryToAllocate" using malloc\;  
     startTimer\;
     call \textbf{\textit{sequentialReads}} or \textbf{\textit{randOffsetsAndSeqReads}} or \textbf{\textit{randomReads}} according to the \textit{readType} passing in the allocated \textit{buffer} and \textit{numReads} parameters\;
     endTimer\;
     latency = timeDifference / numReads\;
     throughput = numReads / timeDifference\;
     \Return{latency, throughput}\;
\end{algorithm}

\begin{algorithm}
 \caption{sequentialReads}
 \KwIn{buffer, numReads}
  offset = rand()\;
 \For{i=1 \emph{\KwTo} numReads} {
     access buffer[offset]\;     
     offset = offset + 1\;
  } 
\end{algorithm}
\begin{algorithm}
 \caption{randOffsetsAndSeqReads}
 \KwIn{buffer, numReads}
 \For{i=1 \emph{\KwTo} numRandomOffsets} {
      offset = rand()\;
     \For{j=1 \emph{\KwTo} numReads/numRandomOffsets} {
         access buffer[offset]\;     
         offset = offset + 1\;
      }
  }
\end{algorithm}
\begin{algorithm}
 \caption{randomReads}
 \KwIn{buffer, numReads}
 \For{i=1 \emph{\KwTo} numReads} {
     offset = rand()\;
     access buffer[offset]\;     
  }
\end{algorithm}
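The three read patterns above can be sketched in C as follows (the volatile sink and the modulo wrap-around are our additions, to stop the compiler from eliding the reads and to keep the accesses in bounds):

```c
#include <stdlib.h>

static volatile unsigned char sink;  /* defeats dead-code elimination */

/* sequentialReads: one random starting offset, then sequential accesses. */
void sequentialReads(unsigned char *buf, size_t bufSize, long numReads)
{
    size_t offset = (size_t)rand() % bufSize;
    for (long i = 0; i < numReads; i++) {
        sink = buf[offset];
        offset = (offset + 1) % bufSize;   /* wrap to stay in bounds */
    }
}

/* randOffsetsAndSeqReads: short sequential runs at several random offsets. */
void randOffsetsAndSeqReads(unsigned char *buf, size_t bufSize,
                            long numReads, long numRandomOffsets)
{
    for (long i = 0; i < numRandomOffsets; i++) {
        size_t offset = (size_t)rand() % bufSize;
        for (long j = 0; j < numReads / numRandomOffsets; j++) {
            sink = buf[offset];
            offset = (offset + 1) % bufSize;
        }
    }
}

/* randomReads: every access at an independent random offset. */
void randomReads(unsigned char *buf, size_t bufSize, long numReads)
{
    for (long i = 0; i < numReads; i++)
        sink = buf[(size_t)rand() % bufSize];
}
```

The driver allocates the buffer with malloc(memoryToAllocate) and brackets one of these calls with the timer to obtain latency and throughput.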

\subsection{Good workload}
To get the best possible performance we should avoid paging. As long as the buffer stays in primary memory, the access time to any location/offset in the buffer should be on the order of nanoseconds; after all, we are reading from ``Random" Access Memory! When the memory allocated is less than a limit (around 1.5GB in our experiments), all read patterns are served from primary memory. When paging starts happening, because of a large allocation or contention with other processes, a sequential read will still fetch some data from the paging file, but subsequent reads will come from primary memory because the unit of transfer is a page (4096 bytes) and because of readahead effects. These two cases constitute the good workloads.
This is seen in the latency graph Figure ~\ref{fig:Memory_bufferSize_vs_latency_low} and throughput graph Figure ~\ref{fig:Memory_bufferSize_vs_thpt_low}.

\begin{figure}[h!!]
\centering
\resizebox{2.8in}{!}
{\includegraphics{Memory/bufferSize_vs_latency_low.png}}
\caption{ \small \bf Latency for Good workload. Entire buffer within memory or Sequential Reads with large buffer}
\label{fig:Memory_bufferSize_vs_latency_low}
\end{figure}

\begin{figure}[h!!]
\centering
\resizebox{2.8in}{!}
{\includegraphics{Memory/bufferSize_vs_thpt_low.png}}
\caption{ \small \bf Throughput for Good workload. Entire buffer within memory or Sequential Reads with large buffer}
\label{fig:Memory_bufferSize_vs_thpt_low}
\end{figure}

\subsection{Bad workload}
To stress the memory subsystem and get the worst possible performance, we should force the buffer to be paged out of primary memory to the paging file and read offsets that cause many pages to be brought back from disk, incurring overhead on the order of milliseconds. To get this behaviour we allocate a huge buffer and read at random offsets, many of which will incur page faults. Readahead provides little benefit because the offsets are random.

\begin{figure}[h!!]
\centering
\resizebox{2.8in}{!}
{\includegraphics{Memory/bufferSize_vs_latency_high.png}}
\caption{ \small \bf Latency. Bad - Large buffer and Random Reads, Ugly - Large buffer and RandOffsets\_SequentialReads}
\label{fig:Memory/bufferSize_vs_latency_high}
\end{figure}

\begin{figure}[h!!]
\centering
\resizebox{2.8in}{!}
{\includegraphics{Memory/bufferSize_vs_thpt_high.png}}
\caption{ \small \bf Throughput. Bad - Large buffer and Random Reads, Ugly - Large buffer and RandOffsets\_SequentialReads}
\label{fig:Memory_bufferSize_vs_thpt_high}
\end{figure}

\subsection{Ugly workload}
To get performance in between the good and bad workloads, we should mix reads from memory and from disk. To achieve this, we allocate a huge buffer and pick a certain number of random offsets; at each random offset, we read a certain number of bytes sequentially. We thus incur lower overhead than the bad workload, because the expected number of page faults is smaller, but worse performance than the good workloads, because some reads pay the cost of paging in a page from disk.

The increase in latency and decrease in throughput for the bad and ugly workloads are clearly visible in Figure~\ref{fig:Memory/bufferSize_vs_latency_high} and Figure~\ref{fig:Memory_bufferSize_vs_thpt_high}.

% -------------------------------------------------------------------------
%                                   File system
% -------------------------------------------------------------------------
\section{Analysing Performance of File system}
\label{sec:Analysing Performance of File system}

\subsection{Trivial approach}
As we saw in the experiments above, reading/writing a random block from/to disk has latency on the order of milliseconds. The access time is the sum of seek time, rotational latency, and transfer time. When we read adjacent blocks on the disk, they mostly come from the same track at the next rotationally optimal position, so we incur almost no seek or rotational cost. There are further optimizations, such as on-arrival readahead on the disk and block prefetching by the file system buffer cache. Using these properties one can trivially construct a good workload that reads a file sequentially and a bad workload that reads random blocks of a file. An ugly workload can likewise be fabricated by mixing random block reads with sequential block reads. However, one of the objectives/constraints of this project is not to rely on these trivial features, and hence we design a new approach.

\subsection{Life cycle of a File Write}
In this section, we briefly describe the life cycle of a file write with respect to the system calls and the operating environment (Figure~\ref{fig:life_cycle_of_fs_op}). Let us consider two interesting cases of writing to a file.
In the first case, an application writes at sequential offsets of a file using the write system call, as indicated by (1). The unit of allocation of a file is the block, so the file system allocates a block and appends data to it in core, allocating more blocks as required. In the ext4 file system, creating a new file involves various steps such as allocating a new inode, multi-block allocation, extents, and delayed allocation; for this discussion we focus on the write alone. The write system call writes only to the buffer cache in primary memory and does not push the data to disk immediately; this is called writeback. Even though write is a synchronous call, writing to the buffer cache in primary memory is fast, on the order of nanoseconds (2). Linux has a background daemon called pdflush which flushes dirty pages to disk every 30 seconds, or when the dirty-page ratio rises above a threshold, as indicated by the path (a)(b)(c)(d)(e). Since this is done asynchronously, the application is not blocked by the disk write. However, if the power goes off or the machine reboots, any data not yet written to disk is lost.

In the second case, the application always or sometimes issues an fsync system call (3) after a write system call; this is called writethrough. fsync transfers all modified in-core data of the file to disk, so that all changed information can be retrieved even after the system crashes or is rebooted (4)(5)(6). The call blocks until the device reports that the transfer has completed, and we know that the nominal latency of a disk write is on the order of milliseconds. Being synchronous, this call adds a performance overhead to the application.

\begin{figure}[h!!]
\centering
\resizebox{2.2in}{!}  
{\includegraphics{Filesystem}}
\caption{ \small \bf Life cycle of a write}
\label{fig:life_cycle_of_fs_op}
\end{figure}

Now we design the required workloads using the above characteristics of the file system and give real-life examples of each.
The program takes three parameters: the number of writes ``numWrites", the write size ``writeSize", and the fsync percentage ``fsyncPercentage". It performs numWrites sequential writes, each of writeSize bytes. The third parameter, fsyncPercentage, controls the interval at which fsync is issued over the run. For example, if fsyncPercentage is 100, no fsync is done during the writes. If fsyncPercentage is 10 and numWrites is 500, fsync is invoked every 50 writes. If fsyncPercentage is 50, fsync is invoked after the 250th write. We control this parameter to generate the three workloads. In all cases, fsync is done once at the end of the writes.

\begin{algorithm}
 \caption{writeAndFsync}
 \KwIn{blockSize, numWrites, fsyncPercentage}
 \KwOut{latency, throughput}
 fsyncInterval = fsyncPercentage * numWrites / 100\;
 startTimer\;
 \For{i=1 \emph{\KwTo} numWrites} {
 write a block of blockSize bytes using the write system call\;
  \If{i \% fsyncInterval == 0}{
     call fsync systemcall\;
   }
  }
  endTimer\;
  compute latency and throughput from the time difference\;
 \Return{latency, throughput}\;
\end{algorithm}
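A runnable C sketch of the writeAndFsync workload (the file path is a placeholder, error handling is minimal, and, as in the pseudocode, a final fsync always closes out the run):

```c
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

/* numWrites sequential writes of blockSize bytes each, with an fsync
 * every fsyncInterval writes.  fsyncPercentage = 100 means a single
 * fsync at the end; small values force frequent synchronous flushes. */
int writeAndFsync(const char *path, size_t blockSize,
                  long numWrites, double fsyncPercentage)
{
    long fsyncInterval = (long)(fsyncPercentage * numWrites / 100.0);
    if (fsyncInterval < 1)
        fsyncInterval = 1;

    char *block = calloc(1, blockSize);
    if (!block)
        return -1;
    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) {
        free(block);
        return -1;
    }
    for (long i = 1; i <= numWrites; i++) {
        if (write(fd, block, blockSize) != (ssize_t)blockSize) {
            close(fd);
            free(block);
            return -1;
        }
        if (i % fsyncInterval == 0)
            fsync(fd);
    }
    fsync(fd);                       /* fsync at the end, in all cases */
    close(fd);
    free(block);
    return 0;
}
```

Latency and throughput follow by bracketing the loop with the timer, exactly as in the memory workload.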

\subsection{Good workload}
To get a good workload, we should avoid the synchronous transfer as much as possible. So we set fsyncPercentage to 100; no fsync is done in the middle of the writes and all writes go to the file system buffer cache. We obtain good performance for this workload. One real-life example of such a workload is an application logger that logs non-critical data to disk.

\begin{figure}[h!!]
\centering
\resizebox{2.8in}{!}
{\includegraphics{FS/bufferSize_vs_latency_all.png}}
\caption{ \small \bf Latency for good(fsync=100\%), ugly(fsync=5\%) and bad(fsync=0.1\%) }
\label{fig:FS_bufferSize_vs_latency_all}
\end{figure}

\begin{figure}[h!!]
\centering
\resizebox{2.8in}{!}
{\includegraphics{FS/bufferSize_vs_thpt_all.png}}
\caption{ \small \bf Throughput for good(fsync=100\%), ugly(fsync=5\%) and bad(fsync=0.1\%) }
\label{fig:FS_bufferSize_vs_thpt_all}
\end{figure}

\subsection{Bad workload}
To obtain the worst possible performance from the file system, we should go through the OS path involving more ``actual" disk writes. Here we set fsyncPercentage to a low value such as 0.1, so frequent fsyncs are issued. Because of this synchronous call, the workload performs very badly. A fitting real-life example is a database recovery manager, which frequently writes log data and flushes it to disk, or performs synchronous commits under a high transaction rate. Another example is a highly loaded mail server that persists every email message to disk before acknowledging the user.

\subsection{Ugly workload}
To obtain performance between the good and bad workloads, we should do a modest number of fsyncs, and hence we choose a value of 5 for fsyncPercentage. Thus only a few fsyncs happen, allowing loss of only a few percent of the data in case of a machine crash or reboot. A real-life example is a download manager. Say we are downloading a large piece of software such as the Kubuntu Linux image (around 700 MB); the download manager might periodically write the downloaded portion to disk. After a machine reboot or network disconnection, the download manager can resume writing to the file (provided the server supports resuming the download from an arbitrary offset of the image file).

As seen in Figure~\ref{fig:FS_bufferSize_vs_latency_all}, the latency of the good workload is very low, as it does not incur the fsync overhead.
The ugly workload has higher latency because it issues an fsync every 5\% of the writes. The bad workload has the worst performance because it issues frequent fsyncs. A similar result can be seen in the throughput graph, Figure~\ref{fig:FS_bufferSize_vs_thpt_all}.
\newline \newline
\begin{tabular}{| p{2.7cm} | p{4cm}|}
    \hline
    \textbf{\textit{Ratio}} & \textit{Value} \\ \hline    
    \textbf{Memory good:ugly:bad throughput } & 78.87 : 21.129 : 0.001 \\ \hline 
    \textbf{File system good:ugly:bad throughput} & 78.29 : 21.66 : 0.05 \\ \hline 
\end{tabular}

\section{Additional Points Solutions}

\subsection{Question 1}
\subsubsection{Workload for memory}
We use the ugly workload, i.e., we start at certain random offsets and then do a run of sequential reads, and vary the parameter ``number of sequential reads" from 1 to 1000. When numSequentialReads is close to 0, the behaviour is similar to the bad workload; when it is close to numReads, i.e., 1000, we get performance close to the good workload. This is seen in Figure~\ref{fig:Memory_bufferSize_vs_thpt_adjust}.

\begin{figure}[h!!]
\centering
\resizebox{2.8in}{!}
{\includegraphics{Memory/bufferSize_vs_thpt_adjust.png}}
\caption{ \small \bf Throughput for different numSequentialReads for ugly workload(changes from good to bad)}
\label{fig:Memory_bufferSize_vs_thpt_adjust}
\end{figure}

\subsubsection{Workload for file system}
In the generic workload that we designed, we vary the fsync percentage from 0.1 to 100 and generate graphs. When the fsync percentage is close to 0, we get the bad performance, and when it is close to 100, we get the performance of the good workload. This is illustrated in Figure~\ref{fig:FS_bufferSize_vs_thpt_adjust}.

\begin{figure}[h!!]
\centering
\resizebox{2.8in}{!}
{\includegraphics{FS/bufferSize_vs_thpt_adjust.png}}
\caption{ \small \bf Throughput for different fsyncPercentages}
\label{fig:FS_bufferSize_vs_thpt_adjust}
\end{figure}

\subsection{Question 4}
\subsubsection{Novel way to exercise a component in OS}
For a particular disk, only one fsync can execute at a time. So when a task is doing an fsync on a big file, another task issuing fsyncs on small files still has to wait behind it. We exercise this file system behaviour and observe the impact of a big fsync on small fsyncs.
Our workload consists of two threads. The first thread does 10 fsyncs of 1024-byte writes. Meanwhile, the second thread does a single fsync on writes of sizes ranging from 1KB to 400MB. As seen in the graph, the latency of the first thread's workload is low when the second thread fsyncs a small write; as the size of the second thread's write increases, the total latency of the first workload increases dramatically. This is seen in Figure~\ref{fig:FS_fsync_size_vs_latency.png}.
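A sketch of this two-thread experiment using POSIX threads (the file names, the omission of timing code, and the error handling are simplifications; the real workload brackets the first thread's loop with the timer):

```c
#include <fcntl.h>
#include <pthread.h>
#include <stdlib.h>
#include <unistd.h>

/* Thread 2: one large write followed by a single, long fsync. */
static void *bigFsync(void *arg)
{
    size_t size = *(size_t *)arg;
    char *buf = calloc(1, size);
    int fd = open("big.tmp", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd >= 0 && buf) {
        ssize_t n = write(fd, buf, size);
        (void)n;
        fsync(fd);              /* the competing large fsync */
        close(fd);
    }
    free(buf);
    return NULL;
}

/* Thread 1: ten small synchronous writes whose total latency we time. */
static void *smallFsyncs(void *arg)
{
    (void)arg;
    char block[1024] = {0};
    int fd = open("small.tmp", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd >= 0) {
        for (int i = 0; i < 10; i++) {
            ssize_t n = write(fd, block, sizeof block);
            (void)n;
            fsync(fd);          /* may queue behind the big fsync */
        }
        close(fd);
    }
    return NULL;
}

int runExperiment(size_t bigSize)
{
    pthread_t small, big;
    pthread_create(&big, NULL, bigFsync, &bigSize);
    pthread_create(&small, NULL, smallFsyncs, NULL);
    pthread_join(small, NULL);
    pthread_join(big, NULL);
    return 0;
}
```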

\begin{figure}[h!!]
\centering
\resizebox{2.8in}{!}
{\includegraphics{FS/fsync_size_vs_latency.png}}
\caption{ \small \bf Latency with a competing fsync thread on ugly workload}
\label{fig:FS_fsync_size_vs_latency.png}
\end{figure}


\subsection{Question 3}
\subsubsection{Throughput split percentage for memory workloads}
In Figure \ref{fig:Memory_thpt_ratio}, we show how the throughput is split between good, ugly and bad memory workloads for various buffer sizes.  

\begin{figure}[h!!]
\centering
\resizebox{2.8in}{!}
{\includegraphics{Memory/thpt_ratio.png}}
\caption{ \small \bf Throughput split for good, ugly and bad memory workloads}
\label{fig:Memory_thpt_ratio}
\end{figure}

\subsubsection{Throughput split percentage for file system workloads}
In Figure \ref{fig:FS_thpt_ratio}, we show how the throughput is split between good, ugly and bad file system workloads. 
As seen in the graph, when the buffer size is 6000, which is greater than the 4096-byte block size, the throughput of the ugly workload with 5\% fsync drops because more blocks must be allocated and written to disk each time. This is not seen in the good workload, as adjacent buffers are merged into blocks before being written to disk.

\begin{figure}[h!!]
\centering
\resizebox{2.8in}{!}
{\includegraphics{FS/thpt_ratio.png}}
\caption{ \small \bf Throughput split for good, ugly and bad file system workloads}
\label{fig:FS_thpt_ratio}
\end{figure}

\subsection{Question 2}
\subsubsection{Individual latencies of various components during file system workload}
For the file system workload, we ran 1000 writes with an fsync percentage of 0.1 and buffer sizes of 512, 1024, 3000, 4096, and 6000 bytes;
fsync is done every 10 writes in this case. We also individually measured the various components of the OS, i.e., memory allocation, block allocation, the write system call, and the fsync system call, and found that the sum of these latencies adds up to the total latency. As seen in Figure \ref{fig:FS_numwrites_vs_latency_path_1.png}, the fsync time dominates the latency of the workload. Figure \ref{fig:FS_numwrites_vs_latency_path_2.png} shows the time taken to allocate the buffer and the time taken by the write system call.

\begin{figure}[h!!]
\centering
\resizebox{2.8in}{!}
{\includegraphics{FS/numwrites_vs_latency_path_1.png}}
\caption{ \small \bf Latency breakdown: fsync time dominates the workload latency}
\label{fig:FS_numwrites_vs_latency_path_1.png}
\end{figure}

\begin{figure}[h!!]
\centering
\resizebox{2.8in}{!}
{\includegraphics{FS/numwrites_vs_latency_path_2.png}}
\caption{ \small \bf Latency of buffer allocation and the write system call}
\label{fig:FS_numwrites_vs_latency_path_2.png}
\end{figure}

% -------------------------------------------------------------------------
%                                   Conclusions
% -------------------------------------------------------------------------
\section{Conclusions}
\label{sec:Conclusions}
Understanding the internals of the memory subsystem and the file system was important for designing the workloads. We showed the components involved in the various OS paths, how they interact with each other for different workload patterns, and the implications for latency and throughput. We then derived good, bad, and ugly workloads and explained the rationale behind their access patterns. In summary, this project aided in understanding the performance of memory and the file system for different workloads and provided guidelines for developing well-performing applications and making wise configuration decisions for operating systems.
Source code for the project can be found at \url{http://code.google.com/p/memory-fs-performance/source/browse/}.
\end{sloppypar}

% -------------------------------------------------------------------------
%                             End Stuff
% -------------------------------------------------------------------------

\setlength{\baselineskip}{12pt}

\nocite{*}
\bibliographystyle{plain}
\bibliography{my} 
\end{document}
