\documentclass[11pt]{article}
\usepackage{amsmath}
\usepackage{verbatim}
\usepackage{graphics}
\usepackage{clrscode3e}
\usepackage{hyperref}
\usepackage{enumitem}


\title{ Linux Page Replacement Investigation }
\author{Ben Marks, Chris Lekas}


\begin{document}

\maketitle

\section{Introduction}

In this report, we summarize our attempts to empirically determine the
Linux page replacement algorithm and analyze its effectiveness on
various standard workloads. We started out suspecting that Linux
implemented a Working Set based page replacement policy, but the
results of our experiments suggest that Linux implements something
closer to an LRU approximation. This algorithm favors interactive
processes over batch processes, but allows for both types of programs
in a mixed environment.


\section{Implementation Details}

We measured the performance of Linux based on the number of page
faults a given workload produced. To easily access these data, we
implemented a system call, {\tt pgfltstats}, which took in a {\tt
  struct pf\_info\_struct}, a {\tt PID}, and a flag indicating whether
to return the page faults of a particular process or the page faults
of a particular user. We returned both the minor and major page faults
of a process, stored in {\tt min\_flt} and {\tt maj\_flt} fields of
the {\tt task\_struct} respectively.

Our implementation differed based on whether we needed to determine
the page faults for a single process or for multiple processes. In the
case of a single process, an RCU read lock was sufficient to ensure
consistent state during our access of the {\tt task\_struct}. In the
case where we needed to obtain data for multiple tasks, we locked the
entire process list with a readlock, and iterated through all
processes, summing the relevant information.
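The user-space view of this interface can be sketched as follows. This is an
illustrative reconstruction rather than our exact code: the struct mirrors the
{\tt min\_flt} and {\tt maj\_flt} counters described above, and the helper
mimics the multi-process path, which walks a task list and sums the per-task
counters.

\begin{verbatim}
/* Illustrative sketch of the pgfltstats interface.  The struct
   layout mirrors the min_flt/maj_flt counters of task_struct;
   it is not the exact kernel definition. */
#include <assert.h>
#include <stddef.h>

struct pf_info_struct {
    unsigned long min_flt;  /* minor page faults (no disk I/O) */
    unsigned long maj_flt;  /* major page faults (disk I/O)    */
};

/* Mimics the per-user path: walk n tasks and sum their fault
   counters into out.  In the kernel, this walk happens under
   the tasklist read lock. */
static void sum_faults(const struct pf_info_struct *tasks,
                       size_t n, struct pf_info_struct *out)
{
    out->min_flt = out->maj_flt = 0;
    for (size_t i = 0; i < n; i++) {
        out->min_flt += tasks[i].min_flt;
        out->maj_flt += tasks[i].maj_flt;
    }
}
\end{verbatim}

For instance, summing two tasks with counters $(10, 1)$ and $(5, 2)$ yields
totals of 15 minor and 3 major faults.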

In order to determine information about a given process, we used several
Linux utilities:
\begin{description}
\item [ {\tt watch} ] We used {\tt watch} to periodically run a
  program that called {\tt pgfltstats} and printed the results. This
  allowed us to easily monitor the page faults encountered by a
  different process soon after they occurred and notice changes in the
  rate of page faults.

\item [ {\tt ps } ] We used {\tt ps} to verify the results of our
  system call. We used the following invocation: {\tt ps axo
    pid,ruid,min\_flt,maj\_flt}, which printed each process's PID, its
  owner, and its minor and major page faults.

\item [ {\tt /proc/PID/status} ] We used this to monitor the virtual
  memory utilization and the resident set size, {\tt VmRSS}, for
  specific processes as they ran.

\item [ {\tt /proc/meminfo} ] We used this to determine the current
  system {\tt swap} usage. This was especially helpful in determining
  the number of pages that the user could access before swapping was
  necessary.

\end{description}

\section{Experiments to 
  Determine Page Replacement Algorithm}

\subsection{Determining Number of Pages in RAM}

We began by determining how many pages a single process could allocate
before swapping became necessary. With 256 MB of RAM and a 4 kB page
size, there are $65{,}536$ page frames in total; since the kernel
reserves some of this memory, we expected roughly 60,000 pages to be
available to user-level processes. To test
this, we wrote a program that continually allocated and accessed
pages, checking after each allocation if a threshold of 30 major page
faults had been reached.  Once this threshold was reached, the program
exited. Over 85 runs, the threshold was reached after an average of
58,933 pages were allocated ($\sigma = 123.95$).

{\em On a Linux system with 256 MB of RAM, approximately 59,000 pages
  (each 4 kB) are available before swapping becomes necessary.}
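The core of the test program can be sketched as follows, scaled down. Our
actual program read fault counts through {\tt pgfltstats}; this sketch instead
uses the portable {\tt getrusage(2)} interface, which exposes the same
per-process minor and major fault counters.

\begin{verbatim}
/* Scaled-down sketch of the allocate-and-touch loop.  Each first
   touch of a fresh anonymous page causes at least a minor fault;
   once RAM is exhausted, touches start causing major faults. */
#include <assert.h>
#include <sys/mman.h>
#include <sys/resource.h>

#define PAGE_SZ 4096

/* Touch npages freshly mapped pages and return how many minor
   faults the touches caused. */
static long touch_pages(int npages)
{
    struct rusage before, after;
    char *mem = mmap(NULL, (size_t)npages * PAGE_SZ,
                     PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (mem == MAP_FAILED)
        return -1;

    getrusage(RUSAGE_SELF, &before);
    for (int i = 0; i < npages; i++)
        mem[(size_t)i * PAGE_SZ] = 1;   /* first touch faults */
    getrusage(RUSAGE_SELF, &after);

    munmap(mem, (size_t)npages * PAGE_SZ);
    return after.ru_minflt - before.ru_minflt;
}
\end{verbatim}

Our real program simply repeated such touches, polling the major fault count
after each new page until the threshold of 30 was crossed.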

\subsection{Global or Local Replacement?}

Once we had determined the number of pages that fit in RAM, we
attempted to determine whether Linux uses a local or global page
replacement algorithm. We suspected that Linux uses a global
replacement algorithm, since a local replacement algorithm could be
problematic in a multiprogramming environment, where new processes can
enter at any point (since Linux does not employ a long term
scheduler). To investigate this, we created a program that allocated
approximately 2/3 of the memory usable by processes. After
allocation and initialization, we counted the number of page faults on
access to the initialized memory. Then, the process blocked waiting
for user input.

Then, we ran another instance, which also allocated 2/3 of the usable
memory. Subsequently, the first process accessed its initialized
memory again, counting the number of page faults encountered.

If Linux uses a local page replacement algorithm, the number of page
faults before and after the second process ran should not differ
substantially. If Linux uses a global page replacement algorithm, some
of the first process' pages should have been evicted when the second
process allocated and accessed memory.

On the first access, over 180 trials, there were consistently zero
major page faults. On the second access, over 180 trials, there were
an average of 2451.9 major page faults ($\sigma = 15.0$). {\em These
  results suggest that Linux uses a global page replacement
  algorithm.}



\subsection{Determining if Linux Uses Random or Strict
  LRU Replacement}

We then investigated if Linux used a random replacement policy or
strict LRU replacement. We suspected that Linux uses neither, since
random replacement does not exploit temporal or spatial locality and
strict LRU is costly to implement.

To determine if Linux uses a random replacement policy, we devised the
following tests:
\begin{enumerate}
\item Pattern 1:
  \begin{enumerate}

  \item Allocate 84,000 pages, divided into three sections
    \begin{enumerate}
    \item Section 1: Pages 0 - 27,999
    \item Section 2: Pages 28,000 - 55,999
    \item Section 3: Pages 56,000 - 83,999
    \end{enumerate}
    Note that two sections can fit in memory at a time. 
  \item Memory initialization: access the sections in this order: {\tt
      1 2 1 3 }
  \item Subsequently access Section 1 and count the number of major
    page faults.
  \end{enumerate}

\item Pattern 2:
  \begin{enumerate}
  \item Divide usable memory into three large sections as described
    above.
  \item Memory initialization: access the sections in this order: {\tt 1
      2 1 3 }
  \item Subsequently access Section 2 and count the number of major
    page faults.
  \end{enumerate}

\end{enumerate}

The table below shows the mean number and standard deviation of page
faults encountered during the last section access for each pattern.

\begin{minipage}{8cm}
\begin{tabular}{ | p{2.2cm} | p{1.5cm} | p{1.5cm} | p{1.5cm} | }
\hline
Pattern & Number of Trials & $\mu$ Major Page Faults 
& $\sigma$ Major Page Faults \\ \hline
1 & 85 & 3125.3 & 305.17 \\
2 (2 outliers included) & 85 & 65.8 & 422.1 \\
2 (2 outliers excluded) & 83 & 2.25 & 8.61 \\ \hline
\end{tabular}
\end{minipage}

If Linux uses random replacement, then Pattern 1 should do slightly
better (fewer major page faults) than Pattern 2. Section 1 is accessed
twice, and more recently, during the initialization stage, so more of
Section 1 than of Section 2 is resident when Section 3 is read in, and
hence more of it survives the random evictions.

If Linux uses a strict LRU policy, in Pattern 1, the final access of
Section 1 should not cause many page faults, since Section 1 should
still be mostly in RAM. Furthermore, in Pattern 2, the final access of
Section 2 should cause many page faults, since Section 2 would not be
likely to be in RAM.
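The strict-LRU predictions above can be checked against a toy model: a strict
LRU simulator at section granularity, where memory holds exactly two of the
three sections at a time (a deliberate simplification of the page-level
experiment). Under this model, the final access of Section 1 in Pattern 1
hits, while the final access of Section 2 in Pattern 2 faults.

\begin{verbatim}
/* Toy strict-LRU model: two resident-section slots, mirroring a
   memory that holds two of the three sections at once. */
#include <assert.h>

struct lru2 {
    int sec[2];    /* which section occupies each slot (-1 = empty) */
    int used[2];   /* LRU timestamps */
    int clk;
};

static void lru2_init(struct lru2 *m)
{
    m->sec[0] = m->sec[1] = -1;
    m->used[0] = m->used[1] = 0;
    m->clk = 0;
}

/* Access section s; return 1 on a fault, 0 on a hit. */
static int lru2_access(struct lru2 *m, int s)
{
    m->clk++;
    for (int i = 0; i < 2; i++)
        if (m->sec[i] == s) { m->used[i] = m->clk; return 0; }
    /* fault: evict the least recently used slot */
    int v = (m->used[0] <= m->used[1]) ? 0 : 1;
    m->sec[v] = s;
    m->used[v] = m->clk;
    return 1;
}
\end{verbatim}

After the initialization order {\tt 1 2 1 3}, the model holds Sections 1 and 3:
accessing Section 1 again hits, while accessing Section 2 faults, exactly the
predictions that our measured data contradict.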

Since our data contradict the predictions for both a random replacement
algorithm and a strict LRU replacement algorithm, {\em Linux
  probably does not implement a random or strict LRU based policy.}


\subsection{Eliminating Strict Working Set}

We then investigated if Linux uses a strict Working Set policy. Since
working set allows for the highest level of multiprogramming while
still preventing thrashing, we expected Linux would use a working set
based algorithm. Based on the data below, however, it seems that this
intuition is incorrect.

The following test was used to determine whether Linux uses working set:
\begin{enumerate}
\item Make a process, $P_1$, and have it access a specific set of $n$
  pages, from low index to high index, 100 times.
  \item Make a new process, $P_2$.
  \item $P_1$ blocks.
  \item $P_2$ accesses 70,000 pages.
  \item $P_1$ reaccesses its set of $n$ pages, from high index to low
    index, recording the number of page faults encountered.
\end{enumerate}
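One way to realize steps 1--5 in a single self-contained program is to
coordinate $P_1$ and $P_2$ with {\tt fork} and a pipe; the sketch below (with
tiny page counts, and {\tt getrusage(2)} standing in for {\tt pgfltstats}) is
an illustration of the structure, not our exact test code.

\begin{verbatim}
/* Scaled-down sketch of the P1/P2 working-set test. */
#include <assert.h>
#include <stdlib.h>
#include <sys/resource.h>
#include <sys/wait.h>
#include <unistd.h>

#define PAGE_SZ 4096

/* Returns the major faults P1 takes reaccessing its n pages
   in reverse order after P2's sweep, or -1 on error. */
static long run_experiment(int n_pages, int sweep_pages)
{
    int done[2];
    if (pipe(done) != 0)
        return -1;
    char *mem = malloc((size_t)n_pages * PAGE_SZ);
    if (!mem)
        return -1;

    for (int pass = 0; pass < 100; pass++)      /* step 1 */
        for (int i = 0; i < n_pages; i++)
            mem[(size_t)i * PAGE_SZ] = 1;

    pid_t pid = fork();                         /* step 2 */
    if (pid == 0) {                             /* P2 */
        char *big = malloc((size_t)sweep_pages * PAGE_SZ);
        for (int i = 0; big && i < sweep_pages; i++)  /* step 4 */
            big[(size_t)i * PAGE_SZ] = 1;
        write(done[1], "x", 1);
        _exit(0);
    }

    char c;
    read(done[0], &c, 1);                       /* step 3: P1 blocks */

    struct rusage before, after;
    getrusage(RUSAGE_SELF, &before);
    for (int i = n_pages - 1; i >= 0; i--)      /* step 5: reverse */
        mem[(size_t)i * PAGE_SZ] = 1;
    getrusage(RUSAGE_SELF, &after);

    waitpid(pid, NULL, 0);
    free(mem);
    return after.ru_majflt - before.ru_majflt;
}
\end{verbatim}

In the real test, $n$ ranged from 25 to 1,000 and $P_2$ swept 70,000 pages, so
the final reverse pass produced the fault counts tabulated below.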

The following table shows the numbers of page faults encountered by
$P_1$ after $P_2$ made a pass through the memory for 4 different
values of $n$.

\begin{tabular} { | p{1.5cm} | p{1.5cm} | p{1.5cm} | p{1.5cm} |}
\hline
$n$ & Number of Trials & $\mu$ Major Page Faults 
& $\sigma$ Major Page Faults \\ \hline
25 & 86 & 5.03 & 0.32 \\
100 & 85 & 14.6 & 0.76 \\
250 & 85 & 33.10 & 2.50 \\
1000 & 85 & 127.56 & 3.57 \\ \hline
\end{tabular}

The number of page faults increases approximately linearly with $n$,
suggesting that all of $P_1$'s pages are being evicted. Since $P_1$
reaccesses the pages from high index to low index (opposite the order
in which it originally accessed them), the faults taken early in the
reaccess pass are unlikely to evict pages $P_1$ has yet to touch, so
these faults are not self-induced. Even the pages $P_1$ accessed
immediately before blocking are evicted while $P_2$ runs.

{\em These results indicate that Linux does not use strict Working
  Set}: in a working set implementation, some small number of
$P_1$'s pages would be expected to remain in memory when $P_1$
unblocked.  Since the smallest values of $n$ tested were 25 and 100,
it is implausible that every value of $n$ we chose exceeds the
maximum working set size a working-set system would maintain.

There is some indication that Linux may keep track of the working set
of a process, attempting to predict future accesses of a process based
on prior ones. For instance, Linux kernel code, information in {\tt
  /proc/}, and documentation all make references to a process's
working set size. It is probable that some elements of working set are
incorporated into Linux's page replacement algorithm; however, the
algorithm clearly deviates from strict working set, as evidenced by
the experiment above.



\subsection{Linux Uses An LRU-Like Policy}

We then investigated whether Linux uses an approximation of LRU. As
noted in the previous section, we initially suspected that Linux uses
a Working Set based algorithm, which turned out not to be the case.
The results in this section, indicating that Linux uses an
approximation of LRU, were unexpected, though LRU is certainly a
reasonable policy. LRU is a natural approximation of the optimal
algorithm (evict the page whose next use is furthest in the future)
in many circumstances: while we cannot know when a page will be
referenced next, pages that have not been used in a long time seem
unlikely to be used soon, and are thus reasonable candidates for
eviction.  Since we have already shown that Linux does not use a
strict LRU policy, it is possible that some elements of Working Set
are used in implementing this approximation.

At this point, we have not yet ruled out FIFO or MRU as page
replacement policies. Further, it has not yet been shown that the policy
approximates LRU.  The following test checks for FIFO behavior, MRU
behavior, and LRU-like behavior.

\begin{enumerate}

\item Allocate 90,000 pages
\item Iterate through pages 0 - 55,000, but every 5000 pages, access
  pages 0 - 500 again (call this Chunk 1). This has the effect of
  keeping Chunk 1 recently used by repeatedly accessing it.
\item Iterate through pages 55,000 through 70,000 without
  accessing Chunk 1.
\item Access Chunk 1 again and record the number of major page faults.
\item Access pages 3,000 - 3,500 (call this Chunk 2) again and record
  the number of major page faults.
\end{enumerate}
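The interleaved sweep in step 2 is the subtle part of this test, so we sketch
it below, scaled down. The helper is illustrative (the real test used a sweep
of 55,000 pages, a 500-page Chunk 1, and a refresh interval of 5,000 pages,
giving ten refreshes, the same count as the small parameters in the example).

\begin{verbatim}
/* Sketch of the step-2 access loop: sweep sweep_pages pages
   linearly, but re-touch Chunk 1 (the first chunk_pages pages)
   every refresh_every pages so it stays recently used.
   Returns the number of Chunk 1 refreshes performed. */
#include <assert.h>
#include <stdlib.h>

#define PAGE_SZ 4096

static int sweep_with_refresh(char *mem, long sweep_pages,
                              long chunk_pages, long refresh_every)
{
    int refreshes = 0;
    for (long i = 0; i < sweep_pages; i++) {
        mem[i * PAGE_SZ] = 1;
        if (i > 0 && i % refresh_every == 0) {
            for (long j = 0; j < chunk_pages; j++)
                mem[j * PAGE_SZ] = 1;    /* keep Chunk 1 hot */
            refreshes++;
        }
    }
    return refreshes;
}
\end{verbatim}

Steps 3--5 then continue the linear sweep without the refreshes and measure
major faults on Chunk 1 and Chunk 2, again via the fault counters.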

The following table shows the mean number and standard deviation of
page faults encountered during the final access of each of Chunk 1 and
Chunk 2:

\begin{tabular} {| p{1.5cm} | p{1.5cm} | p{1.5cm} | p{1.5cm} |}
  \hline
  Chunk Accessed & Number of Trials & $\mu$ Major Page Faults 
  & $\sigma$ Major Page Faults \\ \hline
  1 & 70 & 11.66 & 9.42 \\
  2 & 70 & 62.87 & 4.37 \\ \hline
\end{tabular}

The small number of page faults encountered during the final access of
Chunk 1 suggests that MRU is not the policy implemented, as the
repeated accessing of Chunk 1 should have caused it to be evicted
during step 3 if the replacement policy were MRU.  Additionally, Chunk
1 consisted of the first pages accessed, but these pages were
typically not evicted even after accessing pages 55,000 - 70,000
(i.e. even after accessing more pages than could fit in RAM),
suggesting that FIFO is not Linux's page replacement policy.

{\em Evidence suggests that Linux does not use MRU or FIFO based page
  replacement algorithms.}

The evidence above also suggests that the recency of page usage is a
factor in Linux's page replacement algorithm. The set of recently
accessed pages - in this case the first 500 pages - was generally not
evicted. {\em This suggests that Linux uses an LRU based replacement
  policy.} In practice, we suspect that the LRU approximation is
implemented using clock, as clock is known to be efficient and close
in functionality to LRU.  The implementation of clock might also
include some functionality inspired by Working Set.



\section{Linux Performance On Standard Workloads}

A process can generally be classified in one of two categories: batch
or interactive.  A user might choose to run predominantly batch
processes, mostly interactive processes, or a mixture of the two.
Linux's LRU-like page replacement policy has varying performance,
depending on the workload.

\subsection { Best Performance: Interactive Workloads }

LRU is a much more effective policy for interactive workloads than it
is for batch workloads.  Interactive workloads are less likely to loop
over large arrays of data, so they should not encounter the problem of
systematic eviction of pages that are about to be accessed as
frequently as batch workloads do.  The high temporal locality of
interactive processes makes LRU a particularly
effective policy for them.

Linux's OOM Killer is another feature that favors interactive
workloads, as a user can fairly easily restart an interactive program.
Killing a batch process is a much less desirable course of action, as
batch processes tend to perform long computations that are not useful
unless they complete.  Using a medium- or long-term scheduler
instead of the OOM Killer would favor batch processes more.

It makes sense for Linux to perform best on interactive workloads, as
it is intended for general use, and most user programs, such as
Firefox and Microsoft Word, are interactive.


\subsection { Fair Performance: Batch Workloads }

LRU is a reasonably effective policy for batch workloads, but it is
not at its best in this scenario.  Batch workloads tend to access
large amounts of data, which increases the likelihood of page faults
occurring under any page replacement policy.  Batch workloads also
have a tendency to access large amounts of their data in sequence
(linearly), resulting in low temporal locality.  As a batch process
executes, it may loop over the same array of data many times.  With
LRU, being low on memory could cause consistent page faults.  The data
at the beginning of the array are evicted to make room for those at
the end, but then the process goes through the entire array again,
starting at the beginning. The LRU approximation implemented by Linux
accommodates batch workloads, but not necessarily well.

\subsection { Favoring Interactivity over Batch: Mixed Workloads }

Mixed workloads allow for the possibility that a batch process will
access a large amount of data while several interactive processes run
at the same time. When memory consumption is high, LRU will tend to
evict pages from batch processes, since batch processes often perform
calculations over large amounts of data and instructions, and thus the
time between consecutive accesses of any particular page is likely
longer than for an interactive process. Further, since Linux
prioritizes interactive processes over batch ones in scheduling (using
something like an MLFQ), the pages of a batch process are more likely
to be least
recently used at any given point.  This favoring of interactive
processes makes sense, as Linux is typically used with predominantly
interactive workloads.

\subsection { Favoring All: Prefetching }

In many of the experiments above, almost all accesses caused page
faults, yet the total number of major page faults differed from the
number of faulting pages by a factor of $\approx 8$. We suspect that
Linux employs a prefetching algorithm: when a page fault occurs,
Linux fetches several pages around the faulting page to avoid
subsequent faults. (This is consistent with Linux's swap readahead
setting, {\tt /proc/sys/vm/page\_cluster}, whose default value of 3
causes $2^3 = 8$ pages to be read per swap-in.) Prefetching favors
both batch and interactive processes that exploit spatial locality
(e.g., by accessing data in a linear fashion). Given that hard drive
access time is high, it makes sense that such a prefetching algorithm
would be beneficial and implemented.

\section{Conclusion: Linux and LRU}

We tested the Linux page replacement algorithm under different access
patterns. Our initial hypothesis that Linux uses Working Set was shown
to be, at least partially, incorrect. The tests conducted showed that
Linux uses a page replacement algorithm that is most similar to LRU,
which allows Linux to predict subsequent page accesses based on
temporal locality. We suspect that this is implemented using a variant
of the clock algorithm, perhaps with some additional state to
preferentially avoid evicting pages believed to be in a process's
working set.  Certain aspects of Linux, such as the {\tt OOM Killer} and
the lack of a medium or long term scheduler, coupled with an LRU
approximation page replacement policy, suggest that Linux is optimized
for an interactive workload, and favors interactive processes over
batch processes. This makes sense, given the expected use of Linux as
a personal operating system for diverse purposes.

Designing and interpreting experiments was quite difficult. While only
a few tests are noted above, over 15 experiments were attempted, with
most being discarded. Often, our tests targeted a specific behavior
but yielded inconclusive results.  Linux's algorithm incorporates
elements of multiple algorithms, so designing tests that did not
depend on specific implementation details was challenging.

\section{References}
%{\small 
We consulted the following references for information about the Linux
scheduling algorithm:

\begin{itemize}[noitemsep]
\item Understanding the Linux Kernel 3rd Edition; Daniel P. Bovet and
  Marco Cesati
\item Operating Systems Concepts; Abraham Silberschatz, Peter
  B. Galvin, Greg Gagne
\item The Linux Kernel; David A. Rusling
\subitem Available at \url{http://www.tldp.org/LDP/tlk/tlk-title.html}
\item W4118 Operating Systems Slides; Junfeng Yang
\subitem Available at \url{http://www.cs.columbia.edu/~junfeng/10sp-w4118/lectures/l23-vm-linux.pdf}
\item Page Replacement Policy in Linux; Dina Thomas, Prijanka Garg
\end{itemize}
%}

\end{document}




