\documentclass[preprint,9pt]{sigplanconf}
%\documentclass[draft,preprint,10pt]{sigplanconf}
\usepackage{url}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{amsmath,amsthm}
\usepackage[dvips]{graphicx}
\usepackage{tikz}
\usepackage{pgfplots}
\usepackage{tkz-berge}
\usepackage{color}
\usepackage{listings}
\usepackage{subfig}
\DeclareCaptionType{copyrightbox}

\newtheorem{theorem}{Theorem}[section]
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{prop}[theorem]{Proposition}
\newtheorem{conj}[theorem]{Conjecture}
\newtheorem{cor}[theorem]{Corollary}
{\theoremstyle{definition}
\newtheorem{defn}[theorem]{Definition}
\newtheorem{example}[theorem]{Example}
\newtheorem{exercise}[theorem]{Exercise}
\newtheorem{notation}[theorem]{Notation}
\newtheorem{rem}[theorem]{Remark}
\newtheorem{note}[theorem]{Note}
\newtheorem{question}[theorem]{Question}
\newtheorem{algorithm}[theorem]{Algorithm}}
\def\endmark{\hskip 2em$\square$\par}
\def\proof{\trivlist \item[\hskip \labelsep{\bf Proof\ }]}
\def\endproof{\null\hfill\endmark\endtrivlist}

\begin{document}

\conferenceinfo{VEE 2012}{London, UK, March 3--4, 2012.}
\copyrightyear{2012}
\copyrightdata{[to be supplied]}

%\titlebanner{banner above paper title}        % These are ignored unless
%\preprintfooter{short description of paper}   % 'preprint' option specified.

\title{Virtual Machine for Data Prefetching}
%\subtitle{Subtitle Text, if any}

%\authorinfo{Amir Averbuch}
%           {Tel Aviv University \\ P.O.Box 39040 \\ Ramat Aviv, Israel}
%           {amir@math.tau.ac.il}
          
%\authorinfo{George Goldberg}
%           {Tel Aviv University \\ P.O.Box 39040 \\ Ramat Aviv, Israel}
%           {georgeka@post.tau.ac.il}
          
%\authorinfo{Michael Kiperberg}
%           {Tel Aviv University \\ P.O.Box 39040 \\ Ramat Aviv, Israel}
%           {kiperber@post.tau.ac.il}
          
%\authorinfo{Nezer Jacob Zeidenberg}
%           {University of Jyv\"askyl\"a\\ P.O.Box 35, FI-40014\\ Jyv\"askyl\"a, Finland\\}
%           {nezer.j.zaidenberg@jyu.fi}
\authorinfo{} {} {}

\maketitle

\begin{abstract}
Today, applications find ample CPU power
but suffer greatly from I/O latency. We propose employing pre-execution
to trade this CPU power for an improvement in I/O latency: the I/O-related
instructions are extracted from the original program into separate threads.
These so-called prefetching threads are faster than the original, since they
contain fewer instructions, which allows them to run ahead of the main computation.
LLVM-Prefetch is able to:
(a) find the I/O instructions benefiting most from pre-execution;
(b) construct multiple pre-execution threads for parallel prefetching;
(c) synchronize between the pre-execution thread and the main computation;
(d) terminate threads that don't improve the overall performance;
(e) optimize the pre-execution threads aggressively based on runtime data.
Our virtual machine derives from the Low Level Virtual Machine (LLVM) project.
\end{abstract}

\category{D.3}{Programming Languages}{Processors}

\terms
Code generation,
Compilers,
Memory Management,
Optimization,
Just-In-Time,
Prefetching,
Pre-execution

\keywords
LLVM, Slicing, Prefetching

\section{Introduction}

\subsection {CPU--I/O Performance Gap}

Recent advances in CPU technologies provide scientific 
applications with great power to achieve tremendous results. 
However, I/O and disk technologies have not improved at the
same pace, and it has been repeatedly shown (in
\cite{Ludwig2004, May2001,Reed1997,Reed2003,Womble1997,Sun2005}
among other sources) that scientific computation performance suffers
greatly from I/O latency and poor response time.

The root cause of this performance hit is
the nature of the improvement. While CPU rates and multi-core technology develop at
an almost exponential rate (following Moore's law or close to it), I/O speed sees
only linear improvement. Today's disks may be 2--3 times faster than the disks
manufactured 10 years ago, but today's CPUs offer 4 times more cores and
significant per-core improvement.

The notably slower improvement rate of disk performance when compared
to CPU performance has created what is now known as the I/O wall, or performance gap.

Great effort has been invested in reducing I/O latency and improving I/O
performance in multiple fields. At the physical level, multiple disks
are now used in RAID configurations, allowing linear improvement in read
and write speed through the use of more spindles. Advances in network and
interconnect technologies, with the introduction of high-speed
interconnects such as Infiniband \cite{Infiniband} and FCoE \cite{FCoE-paper}, have
led to the use of caching and distributed systems. Distributed file systems,
such as the Google file system \cite{GFS}, Linux's Ceph \cite{Ceph}
or IBM's General Parallel File System (GPFS) \cite{GPFS} and other systems
\cite{HDFS, Lustre, PVFS}, use multiple disk storage from remote
interconnected locations.
This distributed technique can usually increase
read and write throughput linearly; however, it cannot improve access latency, as
the data is still stored on high-latency disk drives.

Two physical approaches to reducing access latency are caching file pages
in memory and new solid-state disk drives.
Caching refers to storing data in RAM instead of on a disk drive, typically
on a dedicated caching server located near the data server. A common
implementation of this method is memcached \cite{memcached}, which
is used by LiveJournal, Facebook and Wikipedia.

Using a caching server is a by-product of the recent observation that,
for the first time since computer networks were developed, the network provides
faster response times than the local disk.
Caching allows high-performance, low-latency access to random data
but comes with a very high price tag. Furthermore, when the data set is
very large, a system to decide which files to load into the cache is still needed.

Using solid-state disks is an interim solution for reducing access times that
still bears a very high price tag. Solid-state disks are today found only in
high-end servers and laptops and have yet to appear in mundane number crunchers.
Furthermore, even solid-state disk drives remain slow when compared
with the CPU power of even a modest number cruncher.

It has been shown \cite{Kotz1994,Reed2003} that the I/O access patterns of distributed
memory caching systems include a large number of small, irregular accesses.
The problems of I/O modeling, condensing multiple I/O
requests into a single request, and data sieving have been researched in
\cite{Thakur1999} and \cite{Reed2003}. However, it is clear that in many cases
it is impossible to combine small I/O requests due to the nature of the application.

In this paper we focus on I/O prefetching --- another commonly
used technique to hide I/O latency by bringing I/O pages to memory long before
they are needed. Several previous studies on prefetching \cite{diskseen,Kotz1990,
May2001, Patterson1997, Reed2003,Scott2005} demonstrated performance gains and
success. However, as processor technology continues to improve and the
performance gap widens, more aggressive prefetching methods are required.

Today, CPU computation is often a million times faster than disk access:
CPUs usually have several cores whose operation times are measured in nanoseconds,
while disk access times are still measured in milliseconds.
This huge gap creates the need
for more complex systems to predict and prefetch the
pages needed by the running application.
Furthermore, complex prediction systems now have a better opportunity
to succeed in prefetching the required pages, thanks to
the availability of 64-bit systems, lower memory prices, OS support
for buffer caches, and the aforementioned technologies (Infiniband, FCoE)
that provide much greater and cheaper I/O bandwidth.

\subsection{Virtualization}

Process virtual machines (hereafter VMs), or application virtual machines, run
as normal applications inside the OS and support a single running process,
as opposed to system virtual machines, such as KVM \cite{KVM} or Xen \cite{Xen},
which run a complete system.

The VM is created when the process is created and is destroyed when the
process terminates. The VM provides a platform-independent programming
environment and a means to access storage, memory or threading resources.

The VM may provide a high-level abstraction for programming languages
such as Java or C\# (the JVM \cite{JVM} and the CLR \cite{CLR}, respectively) or a
low-level abstraction for a programming language such as C (LLVM).
% LLVM by itslef is not a virtual machine but rather a framework for
% building virtual machines and compilers

Furthermore, in some cases the virtual machine acts as an agent that provides
additional services besides execution and platform independence.
Such services may include distributed execution, as in the
case of PVM \cite{PVM}, byte-code verification \cite{JVM}, and
memory-leak detection and thread debugging \cite{Valgrind}.

\subsection{Main Results}

Considering all these new technology trends and observations, we propose
a pre-execution prefetching approach to improve I/O
access performance. It avoids the limitation of traditional prediction-based
prefetching approaches, which must rely on perceivable patterns among
I/O accesses, and is applicable to many kinds of applications, including
those with unknown access patterns and random accesses.

Several authors have proposed implementing the prefetching technique inside the
pre-processor. However, this solution is either time-consuming, since an
enormous amount of code must be emitted and compiled, or requires
profile-driven compilation in order to locate the most critical spots
to which the technique is then applied.

Our implementation is part of a virtual machine: the compilation time
of the program remains unchanged and no profiling phases are required.
During the program's execution, the VM locates the critical spots ---
program instructions that would benefit most from prefetching --- and
applies the prefetching technique to them.

The proposed approach runs several prefetching threads in parallel to
utilize the CPU power more aggressively. This is enabled by a dependency-analysis
technique we call layer decomposition.

The prefetching threads are then optimized by eliminating rarely used
instructions. This is achieved through program instrumentation, carried out by
the VM at run-time.

The rest of the paper is organized as follows: section \ref{sec:preexec} explains
what ``pre-execution'' is and how it can help I/O-bound applications. Section
\ref{sec:isolation} presents the means used to isolate the prefetching threads
from the main computation, and the reason behind the need for such isolation.
Section \ref{sec:llvm} introduces LLVM --- the framework we used to build the
virtual machine. The actual construction of the prefetching threads is described
in section \ref{sec:construction}.
A library, developed to support the VM, is described in section \ref{lib}.
Sections \ref{sec:result} and \ref{sec:case} present the empirical results measured
during the evaluation of our system. While the former section gives a broad overview of
the results, the latter provides a detailed analysis of a specific program's execution.
We review related work in section \ref{sec:related}.
Conclusions are drawn in section \ref{sec:conclusion}.

\section{Pre-execution}

\label{sec:preexec}

The runtime of a computer program can be seen as
an interleaving of CPU-bound and I/O-bound segments.
In CPU-bound segments the software waits for some computation
to complete; likewise, in I/O-bound segments the
system awaits the completion of I/O operations \cite{OSC}.

A well-known idea is that if I/O-bound and CPU-bound segments
can be executed in parallel, then computation time can be significantly
improved. That is the key feature of the proposed approach:
we would like the VM to overlap the CPU-bound
and I/O-bound segments via prefetching automatically, without the need
for manual insertion of calls such as madvise(2).

Since we have already discussed the performance gap between CPU and I/O, we assume
that more often than not the system is running I/O-bound segments.
We assume that during the runtime of the process there is
available CPU power that can be used for prefetching.
Furthermore, we assume that I/O has significantly
higher latency than the CPU, so prefetching should also
help eliminate system idle time.

The pre-execution of code is carried out by additional threads, added
to the program by the VM, called prefetching threads.
A prefetching thread is composed of only the I/O-related operations
of the original thread. The original code is transformed by the
Just-In-Time (JIT) compiler to obtain the prefetching threads. The
prefetching threads execute faster than the main thread because they
contain only the instructions essential for data-address calculation;
therefore, the prefetching threads are able to produce
effective prefetches for the original, main computation thread.
The prefetching threads
are supported by an underlying prefetching library that
provides the prefetch counterparts of normal I/O function calls. It
collects speculated future references, generates prefetch requests,
and schedules prefetches. The prefetching library can also track function-call
identifiers to synchronize the I/O calls of the prefetching threads and the
computation thread, and to force the prefetching threads to run properly.

The proposed prefetching approach poses several technical challenges.
First, we must generate accurate future I/O references
while guaranteeing the expected
program behavior.
The construction of the pre-execution threads must be efficient.
The pre-execution threads must synchronize with the original thread.
Finally, everything must be done while the program runs.
We address these challenges in the following sections.

\section{Isolation of Pre-execution}
\label{sec:isolation}

This section explores the main problem of prefetching-thread construction:
how can we prefetch precisely the same pages as the main computation
without interfering with it?
The actual method of thread construction is discussed in the next section.

The prefetching threads run in the same process, concurrently with the
main thread. Usually they run ahead of the main thread to trigger I/O operations
earlier, which reduces the access latency experienced by the original thread.

This approach essentially tries to overlap the expensive I/O accesses with
the computation in the original thread as much as possible.
The main design considerations involve two aspects: correctness and effectiveness.
Correctness means that the prefetching must not compromise the correct behavior of the
original computation. Since the prefetching threads share certain
resources with the main thread, such as the memory address space
and open file handles, a careless design of
the prefetching might produce unexpected results. We discuss in
detail our design to guarantee that the prefetching does not disturb
the main thread with regard to memory and I/O behavior.
The design provides a systematic way to perform prefetching effectively
and to generate accurate future I/O references.

\subsection{Memory Isolation}

We do not guarantee the correctness of the prefetching threads, in the
sense that they may generate results that differ from those of
the main computation. Therefore, we have to prevent these threads from altering the
state of the main thread through the shared memory.

Our method for dealing with a shared memory is similar to
\cite{Kim2004,Luk2001,Hiding,Sohi2001}.
We remove all store instructions from the prefetching threads
to guarantee that the main thread's memory won't be altered
by any of the prefetch threads, thus preserving the correctness of
the main computation.

While this method alleviates the need for additional memory allocation
and management, it decreases the accuracy of the prefetching threads.
This inaccurate behavior does not affect the correctness
of the program, though; it merely decreases the accuracy of the prefetching,
and thus its effectiveness. We have not researched other
methods of memory management.

However, as we shall see in section \ref{lib}, the library used by the
prefetching threads can detect such inaccurate behavior and terminate
the misbehaving threads. In this sense, the proposed optimization technique
never does harm.

\subsection{I/O Isolation}

To simplify the discussion and focus on the methodology itself, we
deal only with memory-mapped files. We made this decision based on
the fact that memory regions are shared among all threads.
Thus, by reading from files, the prefetching threads could alter the state
of the main computation.

The underlying prefetching library provides functionality to support
the proposed approach; specifically it provides the \textsc{prefetch}
function, which not only prefetches data but also synchronizes
between the prefetch threads and the main computation. 

We decided not to make overly strong assumptions about the operating system,
in order to increase
the portability of \textsc{LLVM-Prefetch}.

\section{Introduction to LLVM}
\label{sec:llvm}

The proposed solution is heavily based on the Low Level Virtual Machine (LLVM).
This section provides a short introduction to LLVM's most important elements.

LLVM is a compiler infrastructure,
providing the middle layers of a complete compiler system, taking
intermediate representation (IR) code from a compiler and emitting
an optimized IR. This new IR can then be converted and linked into
machine-dependent assembly code for a target platform. LLVM can also
generate binary machine code at run-time.

The LLVM code representation is designed to be used in three different forms:
as an in-memory compiler IR, as an on-disk bytecode representation
(suitable for fast loading by a Just-In-Time compiler), and as a human
readable assembly language representation. This allows LLVM to provide a
powerful intermediate representation for efficient compiler transformations
and analysis, while providing a natural means to debug and visualize the
transformations. The three different forms of LLVM are all equivalent.

LLVM supports a language-independent instruction set and type system.
Each instruction is in static single assignment (SSA) form, meaning
that each variable (called a typed register) is assigned exactly once.
This simplifies the analysis of dependencies among variables.
LLVM allows code to be left for late-compiling from the IR to machine
code in a just-in-time compiler (JIT) in a fashion similar to Java.

LLVM programs are composed of ``Module''s, each of which is a translation
unit of the input program. Each module consists of functions, global variables,
and symbol table entries. Modules may be combined together with the LLVM linker,
which merges function (and global variable) definitions, resolves forward
declarations, and merges symbol table entries.

A function definition contains a list of basic blocks, forming the CFG
(Control Flow Graph) for the function. Each basic block may optionally start
with a label (giving the basic block a symbol table entry), contains a list of
instructions, and ends with a terminator instruction (such as a branch or function
return).

Every instruction contains a (possibly empty) list of arguments. While in the human
readable assembly language, every argument is represented by a previously
defined variable, the in-memory IR holds a pointer to the instruction
defining this variable. Thus, the in-memory IR forms a data dependency graph,
which is required by the slicing mechanism.

\section{Prefetching Threads Construction}
\label{sec:construction}

Prefetching threads, as well as prefetching instructions,
can be inserted manually.
Linux and most other operating systems support this
feature and provide madvise(2) or an equivalent API.
However, the process of inserting such instructions manually is difficult,
time-consuming and bug-prone.

In this section, we present the
design of a prototype of a virtual machine, equipped with a just-in-time
compiler to address the challenges of constructing the prefetching
threads automatically and efficiently.

We augment LLVM program execution with the program slicing technique
discussed in \cite{Hiding, Slicing} to automatically construct
prefetching threads. The program slicing technique was originally
proposed for studying program behavior, since knowing
which results depend on which others can greatly assist in debugging and
in detecting the root cause of bugs. Nowadays, program slicing comprises a rich set
of techniques for program decomposition, which allow extracting the instructions
relevant to a specific computation within a program.
Program slicing techniques rely on the Program Dependence Graph
(PDG) \cite{PDG} analysis --- a data and control dependence analysis
which  can be carried out easily with LLVM.

Prefetching threads run a subset of the original
program (a ``slice'') in which the I/O instructions are of interest.
Therefore, prefetching-thread construction is, in
its essence, a program slicing problem.

\subsection{Hot Loads}

Since we are dealing with memory-mapped files, I/O operations are disguised
as memory accesses, specifically ``load'' instructions. If we could find the
memory accesses that cause the operating system to perform I/O operations,
we would be able to reveal the instructions that require prefetching. If we then slice
the original program with these load instructions and their arguments as
the slicing criteria, we obtain all I/O-related operations, that is, the I/O operations
and the critical computations that might affect them.

We call the load instructions that cause the operating system to
perform most of the I/O operations ``hot loads''.
To find the hot loads we use a profiling technique. We instrument
the original program with a counter for every load instruction.
Just before each load instruction we insert code that asks the operating
system whether the page accessed by the load instruction resides
in main memory; if it does not, i.e. the operating system has to perform
I/O operations to fetch it, the corresponding counter is incremented
by one.

After some time, we can pause the program's execution and traverse the
counters to find the load instructions that caused 80\% of
the I/O operations --- the hot loads. Other load instructions either
do not fetch from a memory-mapped file or are rarely executed. In either case,
the benefit of prefetching the corresponding memory addresses is
insignificant, so it is not worthwhile to apply the prefetching technique
to these instructions.

\subsection{Slicing with LLVM}
We begin this section with the definition of a slice.

\begin{defn}
Consider a set $I$ of instructions. A \emph{slice} with respect to $I$ is a set
$S(I) \supseteq I$ such that if $i \in S(I)$ belongs to a basic block $B$ then:
\begin{enumerate}
\item \label{defn:rec} all $i$'s arguments are in $S(I)$
\item the termination instruction of $B$ is in $S(I)$
\item if $i$ branches, conditionally or not, to a basic block $B'$
then the termination instruction of $B'$ is in $S(I)$.
\end{enumerate}
\end{defn}

It may seem that the first requirement is sufficient.
Recall, however, that every basic block must end with a termination instruction;
hence the second requirement. Finally, if some basic block becomes empty
it is automatically removed, causing an error if some instruction branches to it.
We therefore need to retain at least one instruction in such blocks, which gives us
the third requirement.

%The following example demonstrates the construction of $S(I)$ step-by-step.
%!!!!!!!!!!!FILL IN!!!!!!!!!!

\subsection{Threads Construction}

We construct the prefetching threads by slicing with respect to a family of
disjoint sets $L_1,L_2,\dots,L_n$ of frequently used memory access instructions
--- the hot loads. Note that if the result of some load instruction is never used
in the slice then this load instruction can be replaced by a \emph{prefetch} instruction.
A \emph{prefetch} instruction is a request to the prefetch library to fetch a page 
containing the given address
from the disk. The prefetch instruction doesn't fetch the page by itself but
rather places an appropriate request on the prefetch queue. The library
later dequeues the request and fetches the page, hopefully before it is needed by
the main thread.

Before continuing, we introduce several definitions:

\begin{defn}
A graph $G=(V,E)$ is a data dependency graph of some function if $V$ is the set
of function's instructions and $(x,y) \in E$ if $y$ computes a value used
(directly) by $x$.
\end{defn}

Specifically, we are interested in a subset of $G$'s vertexes, those corresponding
to the hot loads, and the relations between them.

\begin{defn}
An $L$-minor of a data dependency graph $G$ is a graph $G_L=(L,E)$ for
$L \subseteq V(G)$ s.t. $(x,y) \in E$ if $G$ contains a path from $x$
to $y$ not passing through the vertexes in $L$ (besides $x$ and $y$).
\end{defn}

Figure \ref{fig:minor} presents an example of $L$-minor.

\begin{figure}
\centering
\subfloat[The graph $G$]{
\begin{tikzpicture}[scale=0.6,transform shape]
  \tikzstyle{every node}=[node distance = 4cm,%
                          bend angle    = 45,%
                          fill          = gray!30]
  \tikzset{VertexStyle/.style = {shape=circle,fill=gray,draw}}
  \Vertex{A}

  \tikzset{VertexStyle/.style = {shape=circle,fill=white,draw}}
  \NOEA(A){X}
  \SOEA(A){Y}
  
  \tikzset{VertexStyle/.style = {shape=circle,fill=gray,draw}}
  \NOEA(X){B}
  \SOEA(Y){C}
  
  \tikzset{VertexStyle/.style = {shape=circle,fill=white,draw}}
  \SOEA(B){Z}
  
  \tikzset{VertexStyle/.style = {shape=circle,fill=gray,draw}}
  \SOEA(Z){D}
  
  \tikzstyle{EdgeStyle}=[post]
  \Edge(A)(X)
  \Edge(A)(Y)
  \Edge(X)(B)
  \Edge(Y)(C)
  \Edge(B)(Z)
  \Edge(Z)(D)
  \Edge(B)(C)
  \tikzstyle{EdgeStyle}=[bend left,post]
  \Edge(B)(D)
\end{tikzpicture}
}
\hspace{16pt}
\subfloat[An $L$-minor of $G$]{
\begin{tikzpicture}[scale=0.6,transform shape]
  \tikzstyle{every node}=[node distance = 4cm,%
                          bend angle    = 45,%
                          fill          = gray!30]
  \tikzset{VertexStyle/.style = {shape=circle,fill=gray,draw}}
  \Vertex{A}  
  \NOEA(X){B}
  \SOEA(Y){C}  
  \SOEA(Z){D}
  
  \tikzstyle{EdgeStyle}=[post]
  \Edge(A)(B)
  \Edge(B)(C)
  \Edge(A)(C)
  \Edge(B)(D)
\end{tikzpicture}
}
\caption{\label{fig:minor} The original graph $G$, presented on the left figure, has seven vertexes. The subset $L$ of $G$'s vertexes contains the vertexes $A$, $B$, $C$ and $D$ colored gray. The right figure shows the $L$-minor of $G$. }
\end{figure}

If we are lucky enough to obtain a cycle-free minor, we can decompose its vertexes
into disjoint sets such that the decomposition has a desirable property.

\begin{defn}
Given a tree $T=(V,E)$, its layer decomposition is a family of disjoint sets
covering the vertex set $V=L_1 \uplus L_2 \uplus \cdots \uplus L_n$ constructed
as follows: $L_1$ is the set of $T$'s leaves (vertexes with indegree $0$),
$L_2$ is the set of $T-L_1$'s leaves, $L_3$ is the set of $T-L_1-L_2$'s leaves
and so on.
\end{defn}

\begin{prop}
Let $V=L_1 \uplus L_2 \uplus \cdots \uplus L_n$ be the layer decomposition of
the tree $T=(V,E)$. For every edge $(x,y) \in E$ with $x \in L_i$ and $y \in L_j$
we have $i \leq j$.
\end{prop}

\begin{proof}
Suppose (for contradiction) that there is an edge $(x,y) \in E$ with $x \in L_i, y\in L_j$
and $i > j$. By definition the layer $L_j$ was built before the layer $L_i$. Just
before the construction of $L_j$ both $x$ and $y$ were present in the tree. Thus, the
indegree of $y$ wasn't $0$. By the definition of layer decomposition, $y \notin L_j$
--- contradicting the assumption.
\end{proof}

The proposition enables us to construct the sets in a way that
guarantees a remarkable property: after some point in time, no thread
encounters a cache
miss due to a hot load. Let $G$ be the data dependency graph, $L$ the set of
all hot loads, and $G_L$ the $L$-minor of $G$.
Assume, for the moment, that $G_L$ is a tree and denote by $L_1, L_2, \dots, L_n$ its
layer decomposition.

\begin{theorem}
There is a point in time after which no hot load encounters a cache miss.
\end{theorem}

\begin{proof}
The prefetch library terminates misbehaving slices. Therefore, we can
assume that all slices access the same sequence of pages
$p_1, p_2, \dots, p_i, \dots$.
We denote by $t_{i,j}$ the time at which the page $p_i$ was accessed by the
slice $S_j$. Note that $S_{j+1}$ contains all the instructions of $S_j$
and additional ones.
Moreover, the prefetch instructions of $S_j$ may appear in $S_{j+1}$ as load
instructions. Thus, the slice $S_{j-1}$ is ``faster'' than $S_j$ by some factor,
i.e., $t_{i,j-1} < t_{i,j} / c_j$ for some $c_j > 1$.

We assume that the computation is long enough in the following sense.
Given a time $t$, almost all pages are accessed after $t$. More
formally, for every time $t$ there is an index $r$ such that $t_{i,j}>t$ for all
$i \geq r$.

We show that every slice $S_j$ almost never encounters a cache miss.
Denote by $T$ the maximal time needed to prefetch a page, and
assume for the moment that no pages are evicted from memory.
Consider the point in time $t=\frac{T}{c_j-1}$.
By the previous discussion, almost all pages are accessed after $t$;
in other words, there is an index $r$ for which $t_{i,j-1}>\frac{T}{c_j-1}$ for all
$i \geq r$.
We conclude that there is a delay of at least $T$ between the accesses to $p_i$ of $S_{j-1}$ and
$S_j$. The conclusion is justified by the following inequality:
\[ t_{i,j} > t_{i,j-1} c_j = t_{i,j-1} + t_{i,j-1} (c_j-1) > t_{i,j-1} + T \]
This delay guarantees that the page $p_i$ resides in memory by the time $t_{i,j}$.
Thus $S_j$ doesn't encounter cache misses on any of the pages $p_i,p_{i+1},\dots$ for
$i \geq r$.

Note that the index $r$ varies from slice to slice. However, since there are only finitely
many slices, we can choose the maximal such $r$ and denote it by $r'$. Then
no slice encounters a cache miss on any of the pages $p_i,p_{i+1},\dots$ for
$i \geq r'$.
\end{proof}

If $G_L$ is not a tree, then we compute the graph of its strongly connected
components $H$. The layer decomposition is performed on $H$ rather than on $G_L$,
where the meaning of $C \in L_i$, for a strongly connected component $C$, is
that all its vertexes are in $L_i$.

\subsection{Optimizing the Termination Instructions}

As can be seen experimentally, a termination instruction may introduce undesired
instructions into the slice. These are instructions that compute a result that is not used
in the near future; thus, their computation may be omitted. However, omitting these
instructions poses a dependency problem, since the result is an argument of the termination
instruction. Before describing the solution, we should first explain how to
determine whether some result is likely to be used in the near future.

Recall that a basic block contains no termination instruction until its very end;
thus, if some instruction of a basic block is executed, then all of them are.
By adding a counter to every basic block, we can determine for every instruction
how frequently it is executed. If some termination instruction was not executed at
all, then we can (optimistically) assume that it will not be executed in the near future,
and thus we can omit all instructions that compute its arguments.

After removing the unnecessary instructions, we must take care of the termination
instruction's arguments. Clearly, these arguments should be removed where possible or
replaced by some default value. The exact solution depends on the type of the termination
instruction.

Note that this optimization cannot be performed at compile time, which makes our
run-time approach beneficial. This method is similar to dynamic slicing \cite{dynamicslicing}.

\subsection{Effectiveness of Prefetching Threads}

The prefetching threads are able to run ahead of the original thread
and are effective in fetching data in advance, overlapping computation
with I/O accesses, for two reasons. First, as the previous discussion
illustrates, the code not relevant to I/O operations is
sliced away, so the prefetching thread contains only the essential
I/O operations. The prefetching thread is therefore not burdened by
heavy computations and runs much faster than the main thread.
Second, prefetch versions of I/O calls replace the normal I/O calls within the
prefetching threads. These prefetch calls are implemented
with non-blocking accesses, which further accelerates the prefetching threads.


\section{I/O Prefetching Library}
\label{lib}
This section discusses the design of the underlying library support
for I/O prefetching. The goal of the library is to provide I/O
functionality missing in the operating system. Although some operating
systems do provide a partial implementation of the library's
functionality, we chose to re-implement it in order to achieve high
portability of our system. This enabled us to run the system on Linux,
FreeBSD and Mac OS X with equal success.

\begin{figure}
\includegraphics[scale=.47]{lib.eps}
\caption{Prefetching Library. \textsc{prefetch} enqueues the address in the \textsc{prefetch-queue} and an error detecting queue. A worker thread checks periodically whether the queue is non-empty, and if so dequeues the eldest address and accesses the corresponding memory. If a page is not cached, \textsc{fetch} increments the counter corresponding to the ID. Likewise, \textsc{fetch} dequeues an address from an error detecting queue and compares it against its argument to reveal a misbehaving prefetch thread.}
\label{fig:lib}
\end{figure}

The library provides only two functions: \textsc{fetch} and \textsc{prefetch}.
Both functions have two arguments: an ID, whose meaning we explain later, and
an address that should be either fetched (accessed synchronously) or prefetched
(accessed asynchronously). Figure \ref{fig:lib} demonstrates the internal
organization of the library.

The \textsc{prefetch} function enqueues the address in the \textsc{prefetch-queue}.
A worker thread checks periodically whether the queue is non-empty, and if so
dequeues the eldest address and accesses the corresponding memory. 
%Note that the
%same functionality could be achieved through the POSIX system call
%\textsc{madvise} passing the \textsc{madv\_willneed} argument. 
%However, there is
%no equivalent system call on Windows, so for higher portability, we were required
%to implement it in the library.

Recall that the optimization is performed in two phases:
\begin{enumerate}
\item finding the hot loads
\item running slices to prefetch the hot loads.
\end{enumerate}

The \textsc{fetch} function has
a different purpose during each of the two phases.

During the first phase the \textsc{fetch} function counts the number of
cache misses encountered by each load instruction as follows.
The main thread (which is the only one during this phase) invokes the
\textsc{fetch} function just before each
of its load instructions, passing it the memory address being accessed by the load
instruction. The ID parameter we have mentioned previously is the unique identifier
of the load instruction that invoked \textsc{fetch}.
The \textsc{fetch} function asks the operating system whether the corresponding
page resides in memory, using POSIX's \textsc{mincore} system call.
If the system call replies negatively, i.e. the page is not cached,
\textsc{fetch} increments the counter corresponding to the ID.
When the phase finishes, the counters are traversed to find the hot loads
and construct the slices accordingly.

During the second phase the function looks for misbehaving prefetch threads.
Note that for every hot load instruction,
there is exactly one call to \textsc{fetch}
with some ID $h$ and exactly one call to
\textsc{prefetch} with the same ID $h$.
The sequences of addresses passed to these functions should be exactly the same;
a divergence of these sequences indicates that the corresponding prefetch thread
misbehaves. The library compares the two sequences as follows. It allocates a queue,
a circular buffer,
for each ID, holding the addresses already passed to \textsc{prefetch} but not yet
passed to \textsc{fetch}. On every invocation, \textsc{prefetch} enqueues the address
in the queue corresponding to the ID; \textsc{fetch} dequeues an address from
the corresponding queue and compares it against its argument. If the addresses
are not equal, then a misbehaving prefetch thread --- the thread that enqueued the
address --- has been found.
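The per-ID circular buffer can be sketched as follows (the names \texttt{pf\_enqueue} and \texttt{pf\_matches} and the fixed capacity are assumptions for illustration; the real library sizes and handles overflow differently):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define QCAP 64                 /* capacity of each per-ID circular buffer */

typedef struct {
    const void *buf[QCAP];
    size_t head, tail;          /* dequeue at head, enqueue at tail */
} addr_queue;

/* prefetch() side: remember the address about to be prefetched */
void pf_enqueue(addr_queue *q, const void *addr)
{
    q->buf[q->tail % QCAP] = addr;
    q->tail++;
}

/* fetch() side: compare against the eldest prefetched address;
 * a mismatch reveals a misbehaving prefetch thread. */
bool pf_matches(addr_queue *q, const void *addr)
{
    if (q->head == q->tail)
        return true;            /* nothing prefetched yet */
    const void *expected = q->buf[q->head % QCAP];
    q->head++;
    return expected == addr;
}
```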

The last responsibility of the library is to synchronize between the main thread
and the prefetching threads. Without synchronization, the prefetching threads
would populate the memory too fast, causing eviction of previously prefetched pages
that have not yet been accessed by the main thread.

The synchronization mechanism is simple: when the \textsc{prefetch} function is
called, it checks whether the corresponding buffer queue has at least $\ell$
elements; if so, the thread is suspended (by waiting) until the queue's size decreases.
This mechanism guarantees that the hot loads and the corresponding prefetch
instructions don't cause an eviction of pages that were prefetched but not yet
used. Other load instructions can affect the page cache; however, their
effect is minor since they are invoked rarely.
Thus, to overcome this problem, it is enough to decrease the value of
$\ell$ slightly. Note that the page cache is system-wide, so other I/O-intensive
applications might evict the prefetched pages. Thus, one should ensure that no
other I/O-intensive applications are running.
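One way to realize this suspension, sketched here with POSIX threads (the names \texttt{throttle\_prefetch}, \texttt{note\_fetch} and the threshold value are our illustrative assumptions, not the library's actual interface):

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

#define ELL 32   /* the threshold l: max prefetched-but-unused pages */

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  drained = PTHREAD_COND_INITIALIZER;
static size_t outstanding;      /* pages prefetched but not yet fetched */

/* Called from prefetch(): wait while l or more pages are outstanding. */
void throttle_prefetch(void)
{
    pthread_mutex_lock(&lock);
    while (outstanding >= ELL)
        pthread_cond_wait(&drained, &lock);
    outstanding++;
    pthread_mutex_unlock(&lock);
}

/* Called from fetch(): a prefetched page was consumed by the main thread. */
void note_fetch(void)
{
    pthread_mutex_lock(&lock);
    if (outstanding > 0)
        outstanding--;
    pthread_cond_signal(&drained);   /* wake a suspended prefetch thread */
    pthread_mutex_unlock(&lock);
}
```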


\section{POSIX Implementation and Porting to Other Platforms}

POSIX (whose later versions are known as the Single UNIX Specification)
is the standard to which UNIX systems
adhere. FreeBSD, Linux and Mac OS X are all UNIX systems that implement
the POSIX API.

The POSIX API includes the system call
%s madvise(2) and 
\textsc{mincore}, on which
we rely. 
%madvise(2) is used to provide advise to the kernel (mandatory
%advice in Linux) to fetch or evict pages to memory.  
The \textsc{mincore} system call reports
whether pages are core resident, i.e. stored in memory, so that access
to those pages doesn't require I/O operations.

Given the existence of \textsc{mincore},
%and madvise(2) 
it is relatively simple to
port our software to multiple platforms. 

%Unfortunately these functions
%does not exist under Microsoft Windows and a we left finding suitable
%replacement for future work.

\section{Experimental Results}
\label{sec:result}
We have carried out experiments to verify the benefits of the proposed
prefetching technique. 
%This section discusses the experimental setup
%and its results. 
%We evaluate the results with two major metrics,
%execution time reduction and cache miss improvement.
The conducted experiments demonstrate the necessity of the different components
of the proposed system. Some of the experiments are based on synthetic
programs, each of which, nevertheless, represents a large class of
real-life applications.

The simplest program in our benchmark, \textsc{buk}, implements an integer
bucket sort algorithm, which is part of the NAS parallel benchmark
suite \cite{nas}. This program contains a single memory-access instruction
executed in a loop. A single pre-execution thread is constructed to prefetch
the data ahead of the main computation. The obtained results (see figure
\ref{fig_buk}) demonstrate that
(our implementation of) prefetching is effective in this case.
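To make the access pattern concrete, the histogram phase of an integer bucket sort in the spirit of \textsc{buk} can be sketched as below; this is our illustrative reconstruction, not the NAS source, but it shows the single hot load in a tight loop that makes the stream easy to pre-execute:

```c
#include <assert.h>
#include <stddef.h>

/* Count how many keys fall into each bucket.  The load keys[i] is the
 * single memory access executed in a loop; on a disk-backed array it
 * dominates the cache misses and is the natural prefetch target. */
void bucket_count(const int *keys, size_t n, size_t *counts, int max_key)
{
    for (int k = 0; k <= max_key; k++)
        counts[k] = 0;
    for (size_t i = 0; i < n; i++)
        counts[keys[i]]++;      /* the hot memory access */
}
```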

\begin{figure}
\centering
\begin{tikzpicture}
\begin{axis} [    x=0.18cm,
                y=0.5cm,
                enlarge y limits={true, value=0.75},
                symbolic y coords={regular,optimized},
                ytick=data,
                xbar,
                xmin=0]
    \addplot coordinates {
        (30,regular)
        (16,optimized)
    };
\end{axis}
\end{tikzpicture}
\caption{\label{fig_buk} Running time (in seconds) of the optimized version of the bucket sort algorithm versus the regular (non-optimized) one.}
\end{figure}

\label{mat-mult}
Next we want to show that the optimization should be applied only to those
memory accesses that cause many cache misses, the ``hot loads'' in our terminology.
Other memory accesses have an insignificant impact on overall program performance, so
even the total removal of these instructions would not reduce the running time notably.
This is demonstrated by the matrix ``multiplication'' program; to be precise, the program
computes the sum
\[ \sum_{i=1}^{N}\sum_{j=1}^{N}\sum_{k=1}^{N}f(a_{ik},b_{kj}) \]
where $a_{ij},b_{ij}$ are elements of the matrices $A,B$, respectively,
stored on disk row-by-row. Clearly, most cache misses are encountered due
to accesses to $B$, so the optimizer constructs a pre-execution thread which
prefetches only the elements of $B$. We compare this behavior to a regular
execution with no prefetch threads and to an execution in which the elements
of $A$ are prefetched as well. The obtained results (see figure \ref{fig_mul})
demonstrate that our strategy is optimal --- it is no worse than the fully optimized execution.
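As a concrete sketch, taking $f$ to be the product (an assumption made only for this illustration), the kernel is a triply nested loop in which $A$ is traversed row-by-row (sequentially, few misses) while $B$ is traversed column-wise (a miss on nearly every access when the matrices are disk-backed):

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of the matrix ``multiplication'' benchmark with f = product.
 * a and b hold n*n matrices in row-major order.  The access a[i*n+k]
 * is sequential; b[k*n+j] strides by a full row per step, so B is the
 * hot load. */
long mat_sum(const long *a, const long *b, size_t n)
{
    long sum = 0;
    for (size_t i = 0; i < n; i++)
        for (size_t j = 0; j < n; j++)
            for (size_t k = 0; k < n; k++)
                sum += a[i * n + k] * b[k * n + j];
    return sum;
}
```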

% our strategy 28, without optimization 49, with both prefetches 28

\begin{figure}
\centering
\begin{tikzpicture}
\begin{axis} [    x=0.1cm,
                y=0.5cm,
                enlarge y limits={true, value=0.5},
                symbolic y coords={regular,too optimized,optimized},
                ytick=data,
                xbar,
                xmin=0]
    \addplot coordinates {
        (49,regular)
        (28,too optimized)
        (28,optimized)
    };
\end{axis}
\end{tikzpicture}
\caption{\label{fig_mul} Running time (in seconds) of the matrix multiplication algorithm executed regularly (without optimizations), with unnecessary optimizations of both memory accesses, and with a smart optimization of only the problematic memory access (our approach).}
\end{figure}

This paper's main innovation is the construction of parallel pre-execution
threads for prefetching. The third program of our benchmark shows the benefit
of the parallel prefetching approach. This program computes the sum
\[ \sum_{i=1}^{N} a_{b_{i \cdot P}} c_i \]
where $a$, $b$ and $c$ are arrays of $N$ elements, $P$ is the page size of the
machine and all indices are computed modulo $N$. The program computes a sum of
products $a_j \cdot c_i$ where $j = b_{i\cdot P}$, i.e. the elements of $b$ contain
the indices of elements of $a$ that should be multiplied with the elements of $c$.
We have three memory-access instructions
$\alpha$, $\beta$ and $\gamma$ that access the arrays $a$, $b$ and $c$, respectively.
Clearly, only every $P$th invocation of $\gamma$ encounters a cache miss, compared to
every invocation of $\beta$ and, probably, every invocation of $\alpha$ (the latter holds
with high probability for random $b$ and large $N$). The analysis suggests that $\alpha$
and $\beta$ should be pre-executed and $\gamma$ should be ignored.
Since the \emph{addresses} of $\alpha$ depend on the \emph{values} of $\beta$, the
proposed system constructs two pre-execution threads. The first issues prefetch requests
with the addresses of $\beta$. The second uses the values of $\beta$ to compute the
addresses of $\alpha$ and issues the
corresponding prefetch requests.
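The kernel itself can be sketched as follows (an illustrative reconstruction; \texttt{page\_elems} stands for the number of array elements per page $P$, and all names are ours):

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of the indirect-access benchmark: sum of a[b[i*P]] * c[i] with
 * all indices taken modulo n.  The beta load b[(i*P) % n] touches a new
 * page on every iteration; the alpha load a[j] is effectively random;
 * the gamma load c[i] is sequential and misses only once per page. */
long indirect_sum(const long *a, const long *b, const long *c,
                  size_t n, size_t page_elems)
{
    long sum = 0;
    for (size_t i = 0; i < n; i++) {
        size_t j = (size_t)b[(i * page_elems) % n] % n;  /* beta */
        sum += a[j] * c[i];                              /* alpha, gamma */
    }
    return sum;
}
```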

We compare the performance of four systems:
\begin{enumerate}
\item Regular execution with no prefetch threads.
\item A single prefetch thread that prefetches $\beta$.
\item A single prefetch thread that executes $\beta$ and prefetches $\alpha$.
\item The proposed optimization (having two prefetch threads).
\end{enumerate}
As can be seen in figure \ref{fig_ind}, parallelizing the pre-execution improves
performance when sufficient computational power is available.


\begin{figure}
\centering
\begin{tikzpicture}
\begin{axis} [    x=0.07cm,
                y=0.5cm,
                enlarge y limits={true, value=0.25},
                symbolic y coords={regular,beta,alpha,optimized},
                ytick=data,
                xbar,
                xmin=0]
    \addplot coordinates {
        (40,regular)
        (44,beta)
        (31,alpha)
        (29,optimized)   
};
\end{axis}
\end{tikzpicture}
\caption{\label{fig_ind} Running time (in seconds) of the indirect array access program (on average for single iteration). The graph compares the four executions under consideration: a non-optimized (regular) execution, an execution which doesn't prefetch $\beta$ but prefetches $\alpha$, an execution which doesn't prefetch $\alpha$ but prefetches $\beta$ and finally the proposed execution which prefetches both $\alpha$ and $\beta$. }
\end{figure}

\section{Case Study: Matrix Multiplication}

\label{sec:case}

\begin{figure*}[t!]
\includegraphics[scale=0.47]{test_wo.eps}
\caption{\label{fig_test_wo} Regular non-optimized execution of the matrix %
multiplication program. Only one core is utilized. %
Yellow means the core is idle, green --- program calculations are performed, %
cyan --- I/O operations are performed, red --- OS calculations are performed. %
The green sections are the CPU-bound segments and %
the cyan sections are the I/O bound segments.} 
\end{figure*} 

\begin{figure*}[t!]
\includegraphics[scale=0.47]{test_w.eps}
\caption{\label{fig_test_w} Optimized execution of the matrix %
multiplication program. The execution begins with the main thread occupying the %
first core. The prefetch thread starts after 5 seconds and replaces the main thread. %
The main thread runs on the second core for about two minutes, after which the two
threads swap cores.
Yellow means the core is idle, green --- program calculations are performed, %
cyan --- I/O operations are performed, red --- OS calculations are performed. %
The green sections are the CPU-bound segments and %
the cyan sections are the I/O bound segments.} 
\end{figure*} 

The matrix multiplication program clearly demonstrates the problem we are trying to solve
and the achieved solution. Essentially, our approach tries to overlap the I/O-intensive
program sections with CPU-intensive sections by utilizing an additional CPU core.

A typical program, in our case the matrix multiplication program described in section \ref{mat-mult},
is in general neither I/O-intensive nor CPU-intensive; its intensiveness type varies over time.
As can be seen in figure \ref{fig_test_wo}, the program is broadly half of the time I/O-intensive
and half of the time CPU-intensive. Note that since the program is single-threaded, the second
core is mostly idle (it executes some background programs, for example the PCP Charts tool used to
create the graph, and other OS services).

Pre-execution uses the idle core to pre-execute the main computation and prefetch the
required data. Figure \ref{fig_test_w} shows the CPU utilization during the optimized
execution of the matrix multiplication program. The execution begins with only the main
computation executing on the first core for 5 seconds, during which the optimizer
collects profiling data. After these 5 seconds the pre-execution thread
is constructed and starts executing on the first core; the main computation moves to the
second core. The two threads swap cores after about two minutes. The time axis divides
naturally into 30-second segments of CPU-intensive sections in the main computation.
Note that each segment starts with a CPU-intensive section in the prefetch thread followed
by an I/O-intensive section. In other words, the CPU-intensive sections of the
main computation overlap (almost entirely) with the I/O-intensive sections of the prefetch
thread.

\section{Related Work}
\label{sec:related}

Snyder and Hong have done initial work on prefetching in LLVM
\cite{Snyder}. Their work, however, included a prefetch
instruction similar to \textsc{madvise} in Linux and didn't include
any prediction or slicing.

Outside the virtualization realm there has been great effort in
prefetching, which can be classified into two main approaches,
speculative execution and heuristic-based prediction, as discussed
in \cite{May2001}. Our approach, as we have seen, includes aspects
of both.

The heuristic approach to prefetching uses patterns observed in the
history of past I/O requests to predict future requests. Naturally, heuristic
prefetching can only work if the application follows regular and
perceivable patterns in its I/O requests. When the
application's I/O requests follow random or unknown patterns,
heuristic prefetching obviously cannot improve the application's performance.

The speculative execution approach to prefetching is the more general one.
Theoretically, speculative execution can work for every application and,
if done right, has a high chance of circumventing future I/O-originated lags.

Our approach of using slicing and prefetching threads is a speculative
execution approach, while some of the decision making (in the case of termination
instructions' argument removal) has a heuristic origin.

Other speculative execution approaches to prefetching
that have been researched include Chang's SpecHint
system \cite{SpecHint} and the informed prefetching system TIP of
Patterson et al. \cite{Patterson95}.

Both SpecHint and TIP demonstrate the feasibility of speculating on
future I/O requests accurately and ahead of time, and of providing this
information to the underlying system so that the I/O is prefetched in advance.
Both of these methods are relatively conservative in terms of the amount of
CPU cycles dedicated to prefetching. In comparison, our approach is
significantly more aggressive and provides better prefetching results.

The aggressive pre-execution approach has also been studied extensively
as a means of reducing memory access latency, attacking the \textquotedblleft{}memory
wall\textquotedblright{} problem \cite{Chen2007,Kim2004,Sohi2001,Luk2001}.
Similarly to our system, these approaches involve
source code transformation and prefetching instruction injection. However,
in our case the prefetching instructions are all inserted in separate threads.


%Yang\textquoteright{}s AASFP approach \cite{Yang02} is another efficient speculative
%approach to prefetching, that provide good results when it works. However, AASFP
%only works for sequential applications by design.

%I AM NOT SURE THAT OUR APPROACH WORKS FOR PARALLEL APPLICATIONS

%\section{Benchmarks}
%
%We have run LLVM-prefetch benchmarks on the following platforms
%\begin{table}
%    \begin{center}
%        \begin{tabular} { | p{1cm} | p{1cm} | p{1cm} | p{1cm} | p{1cm} | p{1cm} |  }
%        \hline
%            Name & OS & Kernel & CPU & Memory & Hard drive
%        \\ \hline
%            Linux 32bit & Debian & 3.0.0 & Intel Core 2 E6400
%& 1 GB & SATA 
%        \\ \hline
%            Linux 64bit & Ubuntu 10.11 & 3.0.0 & Intel Core i7 &
% 2 ,4 ,6 ,8 ,12 ,16,24 GB &  SATA,  SSD-SATA 
%        \\ \hline
%            FreeBSD 64bit & FreeBSD & 8.2  & Intel Core i7 &
% 2 ,4 ,6 ,8 ,12 ,16,24 GB &  SATA,  SSD-SATA 
%        \\ \hline
%            iMac - Lion & Mac OS X Lion & 11.2.0 & intel Core 2 Due 2.4Ghz & 4 GB & Firewire 320GB
%        \\ \hline
%        \end{tabular}
%    \end{center}
%\caption{benchmark systems}
%\end{table}
%
%Our goal was to test results cross multiple OS hard drives, memory configurations
%and CPU types.
%
%\subsection{Benchmark software}
%
%We ran the software used in the experimental results section with data set
%sizes varying.
%
%\subsection{32bit benchmarks}
%
%We ran all benchmarks under our core2 Linux system.
%
%We have used 2GB as the data size for bucket sort program
%We have used 2GB as the data size for matrix multiplication program
%We have used 2GB as the data size for indirect memory access program
%
%
%Sensitivity to amount of free memory:
%\begin{table}
%\begin{tabular}{| p{3cm} | l | l | l |}
%\hline
%Benchmark & 1 GB & 2 GB & 3 GB
%\\
%\hline
%Bucket & & &
%\\
%\hline
%Matrix Multiplation & & &
%\\
%\hline
%Indirect memory access & & &
%\\
%\hline
%\caption{Sensitivity to amount of memory}
%\end{table}
%
%\subsection{64bit benchmarks}
%64bit benchmarks were almost identical to the 32 bit benchmark above.
%(with mmap64 and mmap2 replacing mmap)
%
%We have used 2GB, 8GB, 30GB as the data size for bucket sort program
%We have used 2GB, 8GB, 30GB as the data size for matrix multiplication program
%We have used 2GB, 8GB, 30GB as the data size for indirect memory access program
%
%\subsubsection{Linux}
%
%\begin{table}
%\begin{tabular}{| p{3cm} | l | l | l | l |}
%\hline
%Benchmark & 2 GB & 8 GB & 16 GB & 24 GB
%\\
%\hline
%Bucket 2GB & & &
%\\
%\hline
%Bucket 8GB & & &
%\\
%\hline
%Bucket 30GB & & &
%\\
%\hline
%Matrix multiplication 2GB & & & &
%\\
%\hline
%Matrix multiplication 8GB & & & &
%\\
%\hline
%Matrix multiplication 30GB & & & &
%\\
%\hline
%Indirect memory
%access 2GB & & & &
%\\
%\hline
%Indirect memory
%access 8GB & & & &
%\\
%\hline
%Indirect memory
%access 30GB & & & &
%\\
%\hline
%\caption{Sensitivity to amount of memory}
%\end{table}
%
%\subsubsection{Mac OS X}
%
%\begin{table}
%\begin{tabular}{| p{3cm} | l | l | l |}
%\hline
%Benchmark & 1 GB & 2 GB & 3 GB
%\\
%\hline
%Bucket & & &
%\\
%\hline
%Matrix Multiplation & & &
%\\
%\hline
%Indirect memory access & & &
%\\
%\hline
%\caption{Sensitivity to amount of memory}
%\end{table}
%
%\subsubsection{FreeBSD}
%
%\begin{table}
%\begin{tabular}{| p{3cm} | l | l | l |}
%\hline
%Benchmark & 1 GB & 2 GB & 3 GB
%\\
%\hline
%Bucket & & &
%\\
%\hline
%Matrix Multiplation & & &
%\\
%\hline
%Indirect memory access & & &
%\\
%\hline
%\caption{Sensitivity to amount of memory}
%\end{table}


\section{Conclusion}
\label{sec:conclusion}
The gap between CPU and I/O performance is already
very significant, and there are no indications that it will be
eliminated in the near future. 

As long as disk performance lags so far behind CPU power, I/O
performance will have a significant effect on computation run time.
As a result, more aggressive and complex measures for I/O prefetching
will be required.

Furthermore, programmers' wages continue to be high, and thus manually
inserting prefetching instructions into the code is expensive.

LLVM-prefetch addresses this performance gap by overlapping computation
with future disk accesses, pre-executed in parallel.

The main contributions of the system are:
\begin{enumerate}
\item We have proposed an innovative pre-execution approach
for trading computing power for more effective I/O use.
This approach allows the VM to pre-execute operations and automatically
prefetch the needed pages without the programmer's intervention.
\item We have presented the system implementation, benchmarked it, and
compared it both to a naive run of the program and to a run of the same
program with programmer-inserted prefetching instructions.
\item We have implemented the system for LLVM.
\item We have presented careful design considerations for
constructing the pre-execution thread and a VM with a JIT compiler for
automatic program slicing. This system can later be re-implemented for
other environments such as the JVM and the CLR.
\item We have tested the environment on several popular UNIX systems.
\end{enumerate}

The described approach shows great promise, especially
given the recent trend of using process virtual machines.
Decreasing I/O latency provides great improvements in
computation times, and allowing the VM to do so automatically
decreases the cost and manpower overhead.
The experimental results confirm that the proposed approach is
beneficial and has real potential to eliminate I/O access delays, reduce
execution time and improve system performance.

%\appendix
%\section{Appendix Title}

%This is the text of the appendix, if you need one.

%\acks

%Acknowledgments, if needed.

% We recommend abbrvnat bibliography style.

\bibliographystyle{abbrvnat}

\bibliography{llvm-prefetch}

% The bibliography should be embedded for final submission.

\end{document}