\documentclass{llncs}

\usepackage{graphicx}
\usepackage{url}

\def\qed{\unskip\kern 10pt{\unitlength1pt\linethickness{.4pt}\framebox(6,6){}}}

\newcommand{\fun}[1]{\mbox{\textbf{#1}}}
\newcommand{\lb}[1]{#1_{\downarrow}}
\newcommand{\ub}[1]{#1_{\uparrow}}
\newcommand{\varset}[1]{\mbox{$\cal{#1}$}}

\sloppy

\begin{document}

\title{Parameter Based Constant Propagation}

\author{P\'{e}ricles Rafael Oliveira Alves, Igor Rafael de Assis Costa,\\
Fernando Magno Quint\~{a}o Pereira and Eduardo Lage Figueiredo}

\institute{Departamento de Ci\^{e}ncia da Computa\c{c}\~{a}o --
UFMG\\
  Av. Ant\^{o}nio Carlos, 6627 -- 31.270-010 -- Belo Horizonte -- MG -- Brazil        
  \email{\{periclesrafael,igor,fernando,figueiredo\}@dcc.ufmg.br}
}

\maketitle

\begin{abstract}
JavaScript is nowadays the lingua franca of web browsers.
This programming language is not only the main tool that developers have to
implement the client side of web applications, but it is also the target of
frameworks such as Google Web Toolkit.
Given this importance, it is fundamental that JavaScript programs can be executed
efficiently.
Just-in-time (JIT) compilation is one of the keys to achieving this much needed
efficiency.
An advantage that a JIT compiler has over a traditional compiler is the
possibility to use runtime values to specialize the target code.
In this paper we push JIT speculation to a new extreme: we have empirically
observed that many JavaScript functions are called only once during a typical
browser session.
A natural way to capitalize on this observation is to specialize the code
produced for a function to the particular values that are passed to it
as parameters.
We have implemented this approach on IonMonkey, the newest JIT compiler used in
the Mozilla Firefox browser.
By coupling this type of parameter specialization with classical compiler
optimizations, such as constant propagation and global value numbering, we have
been able to experimentally observe speedups of up to 25\% on well-known algorithms.
These gains are even more remarkable because they have been obtained over a
widely known, industrial-quality JavaScript runtime environment.
\end{abstract}


\section{Introduction}
\label{sec:int}
% The importance of dynamic languages in general, and JavaScript in particular
Dynamically typed programming languages are today widespread in the computer
science industry.
Testimony to this fact is the ubiquity of PHP, Python and Ruby in the server side
of web applications, and the dominance of JavaScript on its client side.
This last programming language, JavaScript, today not only works as a tool that
developers may use to code programs directly, but also fills the role of an
assembly language for the Internet~\cite{Gardner12}.
The Google Web Toolkit, for instance, allows programmers to
develop applications in Java, but translates these programs into a
combination of JavaScript and HTML~\cite{Chaganti07}.
Given this importance, it is fundamental that dynamically typed languages, which
are generally interpreted, can be executed efficiently, and just-in-time
(JIT) compilation seems to be a key player in achieving this much needed
speed~\cite{Aycock03}.

% The difficulties to compile these languages efficiently.
However, in spite of the undeniable importance of a language such as JavaScript,
executing its programs efficiently is still a challenge even for a JIT
compiler.
The combination of dynamic typing and late binding hides from the compiler core
information that is necessary to generate good code.
The type of the values manipulated by a JavaScript program is only known at
runtime, and even then it might change during program execution.
Moreover, because it has a very constrained time window in which to generate
machine code, the JIT compiler often gives up important optimizations that a
traditional translator would probably use.
Therefore, it comes as no surprise that industry and the academic community
are investing a huge amount of effort in the advancement of JIT
technology~\cite{Gal09,Gardner12}.
Our intention in this paper is to contribute further to this effort.

% A core observation: most of the functions are called only once
In this paper we discuss a key observation, and a suite of ideas to capitalize
on it.
After instrumenting the Mozilla Firefox browser, we have empirically observed
that almost half of the JavaScript functions in the 100 most visited websites in
the Alexa index\footnote{\texttt{http://www.alexa.com/}} are called only once.
This observation motivates us to specialize these functions to their arguments.
Hence, we give the JIT compiler an advantage that a traditional compilation
system could never have: the knowledge of the values manipulated at runtime.

% Speculative optimizations: lets specialize the functions based on the parameters.
We propose a form of constant propagation that treats the arguments passed to
the function to be compiled as constants.
Such an approach is feasible, because we can check the values of these parameters at
runtime, during JIT compilation.
If the target function is only called once, then we have a win-win condition: we
can generate simpler and more effective code, without having to pay any penalty.
On the other hand, if the function is called more than once, we must recompile
it, this time using a traditional approach, which makes no assumptions about the
function arguments.
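One way to picture this call-count policy is the sketch below. It is purely illustrative: the real engine works at the level of compiled native code and interpreter bailouts, not JavaScript closures, and the names (\texttt{specializingStub}, \texttt{powGeneral}) are ours, not SpiderMonkey's.

```javascript
// Illustrative policy: the first call runs code specialized to its
// actual arguments; any later call "recompiles", i.e., switches to the
// general version that makes no assumptions about the arguments.
function specializingStub(specializeFor, generalFn) {
  var firstCall = true;
  return function () {
    if (firstCall) {
      firstCall = false;
      var specialized = specializeFor.apply(null, arguments);
      return specialized.apply(null, arguments);
    }
    return generalFn.apply(null, arguments);
  };
}

// Example: a general power function, and a "specializer" standing in
// for the JIT. With the exponent n constified to 2, the loop vanishes.
function powGeneral(x, n) {
  var r = 1;
  for (var i = 0; i < n; i++) r *= x;
  return r;
}
function specializePow(x, n) {
  if (n === 2) return function (x) { return x * x; };
  return powGeneral;
}
var pow = specializingStub(specializePow, powGeneral);
```

If \texttt{pow} is indeed called only once, the specialized code is all that ever runs, and the generality of \texttt{powGeneral} was never paid for.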


\subsection{Why parameter specialization matters}

%Explain the chart, and motivate our optimization
The main motivation for our work comes from an observation:
most of the JavaScript functions in typical webpages are called only once.
To corroborate this statement, Figure~\ref{fig:NumOfCalls} plots the
percentage of JavaScript functions called $N$ times, $1 \leq N \leq 50$.
To obtain this plot we have used the same methodology adopted by
Richards {\em et al.}~\cite{Richards10}: we have instrumented the browser,
and have used it to navigate through the 100 most visited pages according to the
Alexa index.
This company offers users a toolbar that, once added to their browsers,
tracks data about browsing behavior.
This data is then collected and used to rank millions of websites by
visiting rate.

\begin{figure}[t!]
\begin{center}
\includegraphics[width=0.9\columnwidth]{images/NumOfCalls}
\caption{A plot that shows how many times each different JavaScript function
is called in a typical browser session in the 100 most visited pages according
to the Alexa website.}
\label{fig:NumOfCalls}
\end{center}
\end{figure}

From Figure~\ref{fig:NumOfCalls} we see that the number of times each JavaScript
function is called during a typical browser session clearly obeys a power
law.
About 47\% of the functions are called only once, and about 59\% of the functions
are called at most twice.
Therefore, many functions will be given only one set of arguments.
Nevertheless, a traditional just-in-time compiler generates code that is
general enough to handle any possible combination of parameters that their
types allow.
In this paper, we propose the exact opposite: let us specialize functions to their
arguments, hence producing efficient code for the common case.
Functions that are called more than once must either be re-compiled or
interpreted.
In this way we can produce super-specialized binaries, using knowledge that is
only available at runtime; thus, achieving an advantage that is beyond the
reach of any ordinary static compiler.

\section{Parameter Based Method Specialization}
\label{sec:pareval}

In this section we illustrate our approach to runtime code optimization via
an example.
Then we explain how we can use runtime knowledge to improve constant
propagation, a well-known compiler optimization.

\subsection{Super-Specialization by Example}
\label{sub:ex}

We illustrate how a just-in-time compiler can benefit from parameter based
specialization via the example program in Figure~\ref{fig:example}.
The function \texttt{closest} finds, among the \texttt{n} elements of the array
\texttt{v}, the one which has the smallest difference to an integer \texttt{q}.
We see the control flow graph (CFG) of this program on the right side of this
figure.
This CFG is given in the Static Single Assignment~\cite{Cytron91} (SSA)
intermediate representation, which is adopted by IonMonkey, our baseline
compiler.
SSA form has the core property that each variable name has only one definition
site.
Special instructions, called $\phi$-functions, are used to merge together
different definitions of a variable.
This program representation simplifies substantially the optimizations that we
describe in this paper.
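Figure~\ref{fig:example} itself is not reproduced here; as a rough, hypothetical sketch of the source-level behavior described above (the actual code in the figure may differ in detail), \texttt{closest} could look like:

```javascript
// Hypothetical sketch of the running example: among the n elements of
// array v, return the one with the smallest absolute difference to the
// integer query q. The initial emptiness test is the check that
// dead-code elimination, discussed below, is able to remove.
function closest(v, n, q) {
  if (n <= 0) return undefined;  // does the array have a non-zero size?
  var best = v[0];
  for (var i = 1; i < n; i++) {
    if (Math.abs(v[i] - q) < Math.abs(best - q)) {
      best = v[i];
    }
  }
  return best;
}
```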

\begin{figure}[t!]
\begin{center}
\includegraphics[width=0.9\columnwidth]{images/example}
\caption{
(Left) The JavaScript program that will be our running example.
(Right) A SSA-form, three-address code representation of the example.}
\label{fig:example}
\end{center}
\end{figure}

The control flow graph in Figure~\ref{fig:example}(Right) differs from the CFG
normally produced by a compiler because it has two entry points.
The first, which we call {\em function entry point}, is the equivalent to the
entry point of an ordinary CFG.
The second, which we call the {\em On-Stack Replacement} block (OSR), is
created by the just-in-time compiler when the function is compiled while
it is being executed.
All the functions that we optimize start execution from this block, as they are
compiled only once.
If a function is executed several times, then subsequent calls will start at the
function entry.

The knowledge of the runtime values of the parameters improves some compiler
optimizations.
In this section we will show how this improvement applies to four
different compiler optimizations:
dead-code elimination, array bounds check elimination, loop inversion and
constant propagation.
Figure~\ref{fig:opt1}(a) shows the code that we obtain after a round of
dead-code elimination.
Because we enter the function from the OSR block, the code that is reachable
only from the function entry point is dead, and can be safely removed.
This elimination also removes the test that checks if the array has a non-zero
size.
Notice that even if this test were reachable from the OSR block, we would still
be able to eliminate it, given that we know that the array size is always
positive.

\begin{figure}[t!]
\begin{center}
\includegraphics[width=0.9\columnwidth]{images/opt1}
\caption{
The code that results from two different optimizations in sequence.
(a) Dead-code elimination.
(b) Array bounds check elimination.}
\label{fig:opt1}
\end{center}
\end{figure}

Figure~\ref{fig:opt1}(b) shows the result of applying array bounds check
elimination onto our example.
JavaScript is a strongly typed language; thus, to guarantee the runtime
consistency of programs, every array access is checked, so that memory is never
indexed out of declared bounds.
In our case, a simple combination of range analysis~\cite{Patterson95}, plus
dead-code elimination is enough to remove the test performed over the
limits of \texttt{v}.
This limit, 100, is always greater than any value that the loop counter
\texttt{i} can assume throughout program execution.
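Conceptually, every indexed access performs a guard like the one below. This is an illustrative model, not SpiderMonkey's actual implementation, where the check is an internal IR instruction rather than a user-visible function.

```javascript
// Illustrative model of a guarded array load.
function loadChecked(v, i) {
  if (i < 0 || i >= v.length) {
    throw new RangeError("index out of bounds");
  }
  return v[i];
}

// Inside a loop such as `for (var i = 0; i < 100; i++) ... v[i] ...`,
// with v.length known to be 100, the guard above always passes, so the
// compiler may replace loadChecked(v, i) by an unchecked load of v[i].
```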

Figure~\ref{fig:opt2}(a) shows the result of applying loop inversion on the
example, after dead-code elimination has pruned useless code.
Loop inversion~\cite{Muchnick97} converts a while loop into a do-while loop.
The main benefit of this optimization is to replace the combination of
conditional and unconditional branches used to implement the while loop
with a single conditional jump, used to implement the do-while loop.
Under ordinary circumstances an extra conditional test, wrapped around the
inverted loop, is necessary to guarantee that iterations are performed only
when the trip count is non-zero.
However, given that we know that the loop will be executed at least once,
this wrapping test is not necessary.
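At the source level, the transformation can be pictured as follows. This is an illustrative sketch of the effect on a generic counting loop, not the IR-level rewrite the compiler actually performs.

```javascript
// Before inversion: a while loop needs a conditional branch at the top
// of every iteration plus an unconditional jump back to the test.
function sumWhile(n) {
  var s = 0, i = 0;
  while (i < n) { s += i; i++; }
  return s;
}

// After inversion: a do-while with a single conditional branch at the
// bottom. The guard `if (n > 0)` is the wrapping test mentioned above;
// it can be dropped when the compiler knows the loop runs at least once.
function sumDoWhile(n) {
  var s = 0, i = 0;
  if (n > 0) {
    do { s += i; i++; } while (i < n);
  }
  return s;
}
```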

\begin{figure}[t!]
\begin{center}
\includegraphics[width=0.9\columnwidth]{images/opt2}
\caption{
Final code after two more optimizations.
(a) Loop inversion.
(b) Constant Propagation.}
\label{fig:opt2}
\end{center}
\end{figure}

Finally, Figure~\ref{fig:opt2}(b) shows the code that we obtain after
performing constant propagation.
Out of all the optimizations that we have discussed here, constant propagation
is the one that most benefits from parameter specialization.
Given that the parameters are all treated as constants, this optimization has
many opportunities to transform the code.
In our example, we have been able to propagate the array limit
\texttt{n}, and the query distance \texttt{q}.
Constant propagation is the optimization that we have chosen, in this paper,
to demonstrate the effectiveness of parameter based code specialization.
In the rest of this paper we will be discussing its implementation in our
scenario, and its effectiveness.

\subsection{``Constification''}
\label{sub:constification}

% The general memory layout
Argument based value specialization works by replacing the references to the
parameters of a function about to be JIT compiled by the actual values of these
parameters, in a process that we have dubbed {\em constification}.
Before introducing the basic principles that underlie this technique, we will
explain how the interpreter and the native code produced by the just-in-time
compiler communicate.
In this section we will describe the memory layout used by the SpiderMonkey
interpreter; however, this organization is typical in other environments where
the interplay between just-in-time compilation and interpretation happens, such
as the Java Virtual
Machine\footnote{See \texttt{Java SE HotSpot at a Glance}, available on-line}.

% The stack of activation records
Whenever SpiderMonkey needs to interpret a JavaScript program, it allocates a
memory space for this script, which, in Mozilla's jargon is called the
{\em stack space}.
This memory area will store the global data created by the script, plus the
data allocated dynamically, for instance, due to function calls.
A function keeps the data that it manipulates in a structure called an
activation record.
This structure contains the function's parameters, its
return address, the local variables, a pointer to the previous activation
record, and a nesting link, which allows a nested function to find variables in
the scope of the enclosing function.
Activation records are piled on a stack, as different function calls take
place.
For instance, Figure~\ref{fig:stack}(a) shows a stack configuration containing
the activation records of two functions.
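A toy model of the activation record just described, with field names that are ours rather than SpiderMonkey's, could be:

```javascript
// Toy model of an activation record (illustrative field names).
function makeActivationRecord(params, returnAddress, previous, nestingLink) {
  return {
    params: params,             // the function's actual parameters
    returnAddress: returnAddress,
    locals: {},                 // the local variables
    previous: previous,         // pointer to the previous activation record
    nestingLink: nestingLink    // scope link to the enclosing function
  };
}

// Two records piled up, as when an outer function calls a nested one:
var outer = makeActivationRecord([100], 0, null, null);
var inner = makeActivationRecord([42], 8, outer, outer);
```

Following \texttt{nestingLink} is how the nested function would reach variables in the enclosing scope.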

% Conversation between interpreted and native code:
In principle, both the interpreter and the just-in-time compiled program could
share the same stack of activation records, and thus we would have a seamless
conversation between these two worlds.
However, whereas the interpreter is a stack-based architecture, the native
code runs on a register based machine.
In other words, the two execution modes use different memory layouts.
To circumvent this mismatch, once an initially interpreted function is
JIT compiled, its activation record is extended with a new memory area that is
only visible to the JITed code.
Figure~\ref{fig:stack}(b) illustrates this new layout.
In this example, we assume that both functions in execution in
Figure~\ref{fig:stack}(a) have been compiled to native code.
The native code shares the activation records used by the interpreter -- that is
how it reads the values of the parameters, or writes back the return value that
it produces upon termination.
The interpreter, on the other hand, is oblivious to the execution of code
produced by the just-in-time compiler.

\begin{figure}[t!]
\begin{center}
\includegraphics[width=0.9\columnwidth]{images/stack.pdf}
\caption{SpiderMonkey's memory layout.
(a) Interpretation.
(b) Execution of native code. Data that is not visible to the interpreter is
colored in gray.
{\em Stack Segment} contains a pointer to the current top of stack, and to
the chain of activation records of the native functions.
Each activation record has a fixed-size area called the {\em stack frame}.
{\em Slots} denote an area whose layout depends on the function.
Arguments of the function are stored before the stack frame, and the local
variables are stored after it.
}
\label{fig:stack}
\end{center}
\end{figure}

%The figure \ref{fig:stack} illustrates the stack used by the VM in two different sceneries: in the first we only have interpreted methods executions and in the second we have some native calls interleaved with interpreted execution. The memory layout used in native calls is abstracted, where the arguments necessary are passed by through an array of values. The native frames are linked like stack frames, as represented in figure \ref{fig:stack} (b), but also do not represent \textit{directly called by} relation.

\noindent
\textbf{Reading the values of parameters: }
% Reading the parameters
JavaScript methods, in the Firefox browser, are initially interpreted.
Once a method reaches a certain threshold of calls, or a loop reaches a certain
threshold of iterations, the interpreter invokes the just-in-time compiler.
At this point, a control flow graph is produced, with the two entry blocks that
we have described in Section~\ref{sub:ex}.
Independently of the reason that has triggered just-in-time compilation,
number of iterations or number of calls, the function's actual parameters are in
the interpreter's stack, and can be easily retrieved.
However, we are only interested in compilation due to an excessive number of
loop iterations, for the compilation due to excessive calls might
indicate a function that will be called many more times in the future.
During the generation of the native code, we can find the values bound to the
parameters of a function by inspecting its activation record.
Reading these parameters has almost zero overhead when compared to the time to
compile and execute the program.

% Propagating the values of the parameters
After inspecting the parameter values, we redefine them in the two entry blocks
of the CFG.
For instance, in Figure~\ref{fig:example}(Right) we would replace the two
load instructions, e.g., \texttt{v = param[0]} in the function entry and the OSR
block, by the value of \texttt{v} at the moment the just-in-time compiler was
invoked.
Then, we replace all the uses of the parameters in the function body by their
actual values.
This last phase is trivial in the Static Single Assignment representation,
because each variable name has only one definition site; hence, there is no
possibility of wrongly changing a use that is not a parameter.
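As a source-level picture of constification (the compiler rewrites the IR, not the source text), consider the hypothetical pair below, loosely modeled on the \texttt{1d-trim} benchmark used later in the paper:

```javascript
// General version, as a static compiler must produce it:
function trim(v, lower, upper) {
  return v.filter(function (x) { return x >= lower && x <= upper; });
}

// Specialized version, as if compiled for the call trim(v, 0, 10):
// lower and upper have been constified, so the bounds are immediate
// operands and downstream optimizations can fold expressions using them.
function trimSpecialized(v) {
  return v.filter(function (x) { return x >= 0 && x <= 10; });
}
```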


\subsection{Argument Based Constant Propagation}
\label{sub:cp}

%Using the parameter specialization approach we have upgraded the classic Constant Propagation(CP) algorithm into a new algorithm: Argument Based Constant Propagation (ABCP). This algorithm insert constants on the CFG for each parameter, and take advantage of the dynamics of the CP worklist propagation engine. The ABCP is organized in 3 steps: constant insertion, parameter uses substitution and traditional Constant Propagation execution. In the first step, the value of each parameter is recovered in the Interpreter Stack. For each parameter, a new constant is inserted in the program CFG, both in the function entry block and in the OSR entry block. In the second step, the parameters uses in the remaining blocks are replaced by the new constants created. In the last step, the CP algorithm is executed and propagated the constants created for the parameters and other constants found in the CFG.

In order to show that parameter based code specialization is effective and useful
to just-in-time compilers, we have adapted the classic constant propagation
algorithm~\cite{Wegman91} to make the most of the values passed to the
functions as parameters.
We call the ensuing algorithm {\em argument based constant propagation}, or
ABCP for short.
We have implemented ABCP in the
IonMonkey\footnote{\texttt{https://wiki.mozilla.org/IonMonkey}} compiler.
This compiler is the newest member of the {\em Monkey family}, a collection of
JavaScript just-in-time engines developed by the Mozilla Foundation to be
used in the Firefox browser.
We chose to work on this compiler for two reasons.
Firstly, because it is an open-source tool, which means that its code can be
easily obtained and modified.
Secondly, contrary to previous Mozilla compilers, IonMonkey has a clean design
and a modern implementation.
In particular, because it uses the SSA form, IonMonkey serves as a basis for
the implementation of many modern compiler techniques.

Constant propagation, given its simple specification and straightforward
implementation, is the canonical example of a compiler
optimization~\cite[p.362]{Muchnick97}.
%As we have explained in the previous section, constant propagation is one of
%the optimizations that most benefits from parameter based value specialization.
Constant propagation is a {\em sparse} optimization.
In other words, abstract states are associated directly with variables.
The classic approach to constant propagation relies on an iterative algorithm.
Initially all the variables are associated with the $\top$ abstract state.
Then, those variables that are initialized with constants are added to a
work list.
If a variable is inserted into the worklist, then we know, as an invariant, that
it has a constant value $c_1$, in which case its abstract state is $c_1$ itself.
In the iterative phase, an arbitrary variable is removed from the worklist,
and all its uses in the program code are replaced by the constant that it
represents.
It is possible that during this updating some instruction $i$ is changed to use
only constants on its right side.
If such an event occurs, then the variable defined by $i$, if any, is
associated with the value produced by $i$, and is inserted into the worklist.
These iterations happen until the worklist is empty.
At the end of the algorithm, each variable is either known to have a constant
value $c_i$, or is not guaranteed to be a constant, and is thus bound to $\bot$.
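The worklist scheme just described can be sketched as follows. This is a didactic implementation over a toy three-address representation, not IonMonkey's actual code, and for brevity it conflates $\top$ (not yet known) with $\bot$ (known non-constant): both appear as "undefined".

```javascript
// Sparse constant propagation over a toy SSA-like program. Each
// instruction is { def, op, args }, where args are variable names or
// numeric literals.
function constantPropagation(instructions) {
  var value = {};        // variable -> constant (absent means unknown)
  var worklist = [];

  function lookup(a) {
    return typeof a === "number" ? a : value[a];
  }

  function tryFold(inst) {
    var args = inst.args.map(lookup);
    if (args.some(function (a) { return a === undefined; })) return;
    var c;
    switch (inst.op) {
      case "const": c = args[0]; break;
      case "add":   c = args[0] + args[1]; break;
      case "mul":   c = args[0] * args[1]; break;
      default: return; // non-foldable operation
    }
    if (value[inst.def] === undefined) {
      value[inst.def] = c;
      worklist.push(inst.def); // a variable enters the worklist at most once
    }
  }

  // Seed: fold instructions whose operands are already constants.
  instructions.forEach(tryFold);

  // Iterate: each folded variable may make further instructions foldable.
  while (worklist.length > 0) {
    var v = worklist.pop();
    instructions.forEach(function (inst) {
      if (inst.args.indexOf(v) !== -1) tryFold(inst);
    });
  }
  return value;
}
```

Under constification, the parameter definitions enter this process as \texttt{const} instructions, seeding the worklist with one constant per argument.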


Constant propagation suits just-in-time compilation very well, because it is
fast.
The worst-case time complexity of this algorithm is $O(V^2)$, where $V$ is the
number of variables in the program.
To derive this complexity, we notice that a variable can enter into the
worklist at most once, when we find that it holds a constant value.
A variable can be used in up to $O(I)$ program instructions, but normally
$O(I) = O(V)$.
Thus, replacing a variable by the constant it represents takes $O(V)$ worst-case
time.
In practice a variable will be used in a few sites; therefore, constant
propagation tends to be $O(V)$ in practice.

\section{Experiments}
\label{sec:exp}

%TODO: put the correct url for benchmark tests access
In order to validate our approach, we have created a small benchmark suite
that contains eight well-known algorithms, plus three programs from the SunSpider
test suite.
These benchmarks are publicly available at \url{http://code.google.com/p/im-abcp/source/browse/trunk/tests}.
Figure~\ref{fig:benchs} describes each of these programs.

\begin{figure}
\scalebox{0.675} {
\footnotesize
\begin{tabular}{l | c | l | l} \hline
Benchmark & LoC & Complexity & Description \\ \hline
SunSpider::math-cordic($R, C, A$) & 68 & $O(R)$ & Calls a sequence of
transcendental functions $R$ times. \\
SunSpider::3d-morph($L, X, Z$) & 56 & $O(L \times X \times Z)$ & Performs
$L \times X \times Z$ calls to the sin transcendental function. \\
SunSpider::string-base64($T_{64}, B, T_2$) & 133 & $O(|T_{64}|)$ & Converts an
array of integers to a Base-64 string. \\
matrix-multiplication($M_1, M_2, K, L, M$) & 46 & $O(K \times L \times M)$ &
Multiplies a $K \times L$ matrix $M_1$ by a $L \times M$ matrix $M_2$. \\
k-nearest-neighbors($P, V, N, K$) & 47 & $O(K \times N)$ & Finds the $K$
2-D points stored in $V$ that are closest to the 2-D \\
 & & & point $P$. \\
rabin-karp($T, P$) & 46 & $O(|T| \times |P|)$ & Finds the first occurrence of
the pattern $P$ in the string $T$.\\
1d-trim($V, L, U, N$) & 22 & $O(N)$ & Given a vector $V$ with $N$ numbers, removes
all those numbers \\
 & & & that are outside the interval $[L, U]$. \\
closest-point($P, V, N$) & 35 & $O(N)$ & Finds, among the 2-D points stored in
$V$, the one which is the \\
 & & & closest to $P$. \\
tokenizer($S, P$) & 23 & $O(|S|\times|P|)$ & Splits the string $S$ into
substrings separated by the characters \\
 & & & in the pattern $P$. \\
split-line($V, N, A, B$) & 41 & $O(N)$ & Separates the 2-D points stored in $V$
into two groups, those \\
 & & & below the line $y = Ax + B$, and those above.  \\
string-contains-char($C, S$) & 13 & $O(|S|)$ & Tells if the string $S$ 
contains the character $C$. \\ \hline
\end{tabular}
}
\caption{\label{fig:benchs}
Our benchmark suite.
LoC: lines of JavaScript code.}
\end{figure}

IonMonkey does not provide a built-in implementation of Constant Propagation.
Therefore, in order to demonstrate the effectiveness of our implementation, we
compare it with the implementation of Global
Value Numbering (GVN) already available in the IonMonkey toolset.
GVN is another classic compiler optimization.
IonMonkey uses the algorithm first described by Alpern
{\em et al.}~\cite{Alpern88}, which relies on the SSA form to be fast and
precise.
Alpern {\em et al.} have proposed two different approaches to GVN:
pessimistic and optimistic.
Both are available in IonMonkey.
In this section we use the pessimistic approach, because it is considerably
faster when applied to our benchmarks.
All the implementations that we discuss in this section are intra-procedural.
None of the IonMonkey built-in optimizations is inter-procedural, given the
difficulty of seeing the entirety of dynamically loaded programs.
All the runtime numbers that we provide are the average of 1000 executions.
We do not provide average errors, because they are negligible given this large
number of executions.

Figure~\ref{fig:cp_vs_gvn} compares our implementation of constant propagation
with the implementation of global value numbering produced by the engineers who
work at the Mozilla Foundation. The baseline of all charts in this section
is the IonMonkey compiler running with no optimizations.
The goal of Figure~\ref{fig:cp_vs_gvn} is to show that our implementation
is not a straw man: for our suite of benchmarks it produces better code than GVN,
which is industrial-quality.
We see in the figure that both optimizations slow down the benchmarks.
This slowdown happens because the total execution time includes the time to
optimize the program, and the time that the program spends executing.
Neither constant propagation nor global value numbering finds
many opportunities to improve the target code in a way that pays for the
optimization overhead.
On the other hand, both add an overhead on top of the just-in-time engine.
However, the overhead imposed by constant propagation, a simpler optimization, is
much smaller than the overhead imposed by global value numbering, as it is
evident from the bars in Figure~\ref{fig:cp_vs_gvn}.

\begin{figure}[t!]
\begin{center}
\includegraphics[width=0.9\columnwidth]{images/cp_vs_gvn.png}
\caption{Speedup of the original version of Constant Propagation and Global Value Numbering.}
\label{fig:cp_vs_gvn}
\end{center}
\end{figure}

Figure~\ref{fig:cp_vs_abcp} compares our implementation of constant
propagation with and without parameter specialization.
We see that traditional constant propagation in fact slows down many of our
benchmarks.
Our implementation of the classic Rabin-Karp algorithm, for instance, suffers
a 4\% slowdown.
Traditional constant propagation does not find many opportunities to remove
instructions, given the very small number of constants in the program code,
and given the fact that it runs intra-procedurally.
On the other hand, the argument based implementation fares much better.
It naturally gives us constants to propagate in all the benchmarks, and it
also allows us to replace boxed values by constants.
The algorithm of highest asymptotic complexity, matrix multiplication,
experiences a speedup of almost 25\%, for instance.

%Graph 1: time comparison between CP vs ABCP, bars (GVN=off)
\begin{figure}[t!]
\begin{center}
\includegraphics[width=0.9\columnwidth]{images/cp_vs_abcp.png}
\caption{Speedup of the original and parameter based version of Constant Propagation.}
\label{fig:cp_vs_abcp}
\end{center}
\end{figure}

Figure~\ref{fig:gvn_vs_abgvn} compares the implementation of
global value numbering with and without parameter based specialization.
Contrary to constant propagation, global value numbering does not benefit much
from the presence of more constants in the program text.
We see, for instance, that SunSpider's \texttt{string-base64} benchmark
suffers a slowdown of over 30\%.
This slowdown happens because of the time spent to load and propagate the
values of the arguments.
None of these arguments is used inside loops -- although expressions derived
from them are -- and thus GVN cannot improve the quality of these loops.
On the other hand, again we observe a speedup in matrix multiplication.
This speedup does not come from GVN directly.
Rather, it is due to the fact that our constification replaces the loop
boundaries by integer values, as a result of the initial value propagation
that we perform upon reading values from the interpreter stack.
Figure~\ref{fig:abcp_vs_abgvn} compares constant propagation and global value
numbering when preceded by parameter based value specialization.
On average, argument based constant propagation delivers
almost 25\% more speedup than argument based global value numbering.

%Graph 2: time comparison between GVN vs ABGVN, bars (cp=off)
\begin{figure}[t!]
\begin{center}
\includegraphics[width=0.9\columnwidth]{images/gvn_vs_abgvn.png}
\caption{Speedup of the original and parameter based version of Global Value Numbering.}
\label{fig:gvn_vs_abgvn}
\end{center}
\end{figure}

%Graph 3: time comparison between ABGVN vs ABCP (GVN=off), bars
\begin{figure}[t!]
\begin{center}
\includegraphics[width=0.9\columnwidth]{images/abcp_vs_abgvn.png}
\caption{Speedup of the parameter based versions of Constant Propagation and Global Value Numbering.}
\label{fig:abcp_vs_abgvn}
\end{center}
\end{figure}

%Table 4: 1: static results: folded instructions (CP, ABCP, GVN, ABGVN), bars
Figure~\ref{fig:numbers} gives us further insight into the
speedups that parameter specialization delivers on top of constant propagation.
First, we notice that in general constant propagation leads to fewer
recompilations.
In general a just-in-time compiler might have to re-compile the same function
several times while this function is still executing.
These recompilations happen because some assumptions made by the JIT may no
longer hold during program execution, or because the JIT may infer new facts
about the program.
For instance, the JIT may discover that a reference is used as an integer
inside a loop, and this new knowledge may trigger another compilation.
If this reference eventually receives a non-integer value, or some arithmetic
operation causes this integer to overflow, then a new compilation is in order.
Second, it is clear from the table that argument based specialization
considerably improves the capacity of constant propagation to eliminate
instructions.
When an instruction is eliminated because all the variables that it uses are
constants, we say that the instruction has been {\em folded}.
At least in our benchmark suite, traditional constant propagation does not find
many opportunities to fold instructions.
However, once we replace parameters by constants, it produces remarkably good
results.
In some cases, as in the function \texttt{string-contains-char}, it can eliminate
almost one fourth of all the native instructions generated.

\begin{figure}[t!]
\begin{center}
\small
\renewcommand{\tabcolsep}{6pt}
\begin{tabular}{| l | c c c c | c c c c |} \hline
& \multicolumn{4}{c|}{CP} & \multicolumn{4}{c|}{ABCP} \\
Benchmark & R & I & F & \% F & R & I & F & \% F \\
\hline
math-cordic & 1 & 287 & 0 & 0\% & 1 & 295 & 22 & 7\%\\
3d-morph & 2 & 582 & 0 & 0\% & 2 & 600 & 63 & 11\%\\
string-base64 & 3 & 1503 & 30 & 2\% & 3 & 1519 & 58 & 4\%\\
matrix-mul & 8 & 1574 & 0 & 0\% & 3 & 558 & 84 & 15\%\\
k-nearest-neighbors & 2 & 530 & 4 & 1\% & 1 & 432 & 46 & 11\%\\
rabin-karp & 2 & 583 & 9 & 2\% & 2 & 595 & 57 & 10\%\\
strim & 1 & 154 & 0 & 0\% & 0 & 82 & 13 & 16\%\\
closest-point & 1 & 228 & 0 & 0\% & 0 & 142 & 13 & 9\%\\
tokenizer & 2 & 296 & 3 & 1\% & 2 & 308 & 39 & 13\%\\
split-line & 2 & 390 & 0 & 0\% & 1 & 300 & 26 & 9\%\\
string-contains-char & 0 & 58 & 0 & 0\% & 0 & 64 & 15 & 23\%\\ \hline
\end{tabular}
\caption{\label{fig:numbers}A comparison, in numbers, between constant
propagation and argument based constant propagation.
R: number of recompilations.
I: total number of instructions produced by the JIT compiler.
F: number of instructions removed (folded) due to constant propagation.
\%F: percentage of folded instructions.}
\end{center}
\end{figure}

% Traditional Constant Propagation
% - Number of variables that you can fold
% - Speed

% Speculative Constant Propagation
% - Static data:
% -- Number of instructions that we have folded
% -- Number of branches that have been eliminated
% -- Number of types that have been specialized
% - Time to run the specialization.
% - Speed
% -- How speed varies with different compiler optimizations (variations of our optimization).

% - (tentative) Number of times we had to trash the specialized code, and then recompile the function.

\section{Related Work}
\label{sec:rel}

The dispute for market share among Microsoft, Google, Mozilla and Apple
has been known in recent years as the ``browser war"~\cite{Shankland09}.
Performance is a key factor in this competition.
Given that the performance of a browser is strongly connected to its
capacity to execute JavaScript efficiently, today we watch the development
of increasingly reliable and efficient JavaScript engines.

The first just-in-time compilers were method based~\cite{Aycock03}.
This approach to just-in-time compilation is still used today with very
good results.
Google's V8\footnote{\texttt{http://code.google.com/p/v8/}} and
Mozilla's JaegerMonkey\footnote{\texttt{https://wiki.mozilla.org/JaegerMonkey}} are method-based JIT compilers.
Method-based compilation is popular for a number of reasons.
It can be easily combined with many classical compiler optimizations, such as
Value Numbering and Loop Invariant Code Motion.
It also capitalizes on decades of evolution of JIT technology, and can
use old ideas such as Self-style type specialization~\cite{Chambers89}.
Furthermore, this technique supports profile-guided
optimizations~\cite{Chang91}, such as Zhou's dynamic elimination of partial
redundancies~\cite{Zhou11} and Bodik's elimination of array bounds
checks~\cite{Bodik00}.
IonMonkey, the compiler that we have adopted as a baseline in this paper,
is a method-based JIT compiler that resorts to hybrid type
inference~\cite{Hackett12} in order to produce better native code.

A more recent, and substantially different JIT technique is trace
compilation~\cite{Bala00}.
This approach only compiles linear sequences of code from hot
loops, based on the assumption that programs spend most of their
execution time in a few parts of a function.
Trace compilation is used in compilers such as Tamarin-Tracing~\cite{Chang09},
HotpathVM~\cite{Gal06b}, Yeti~\cite{Zaleski07} and TraceMonkey~\cite{Gal09}.
There exist code optimization techniques tailored for trace
compilation, such as Sol {\em et al.}'s~\cite{Sol11} algorithm to eliminate
redundant overflow tests.
There are also theoretical works that describe the semantics of trace
compilers~\cite{Guo11}.

\section{Conclusion}
\label{sec:con}
% Future works

In this paper we have introduced the notion of {\em parameter based code
specialization}.
We have shown how this technique can be used to speed up the code produced by
IonMonkey, an industrial-strength just-in-time compiler that is scheduled
to be used in the Mozilla Firefox browser.
Parameter based specialization bears some resemblance to partial evaluation;
however, we do not pre-execute the code in order to improve it.
Instead, we transform it with static compiler optimizations, such as
constant propagation.
Our entire implementation, as well as the benchmarks that we have used in this
paper, is available at \url{http://code.google.com/p/im-abcp/}.
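The contrast with partial evaluation can be made concrete with a toy model. The sketch below is purely illustrative: the expression representation is our own invention, not IonMonkey's intermediate representation. Specialization substitutes a literal for a parameter and then lets an ordinary static optimization, constant folding, simplify the code; at no point is the function pre-executed.

```javascript
// Toy model (assumed, simplified): expressions are objects of the form
// {op: 'const', value}, {op: 'param', name}, or {op: 'mul', left, right}.

// Parameter specialization: replace a parameter node by a literal.
function substitute(expr, name, value) {
  switch (expr.op) {
    case 'param':
      return expr.name === name ? { op: 'const', value: value } : expr;
    case 'mul':
      return { op: 'mul',
               left: substitute(expr.left, name, value),
               right: substitute(expr.right, name, value) };
    default:
      return expr;
  }
}

// Static constant folding: a purely syntactic transformation, applied
// without ever running the function.
function fold(expr) {
  if (expr.op !== 'mul') return expr;
  var l = fold(expr.left), r = fold(expr.right);
  if (l.op === 'const' && r.op === 'const')
    return { op: 'const', value: l.value * r.value };
  return { op: 'mul', left: l, right: r };
}

// Body of f(n) { return n * (2 * 3); }, specialized for the call f(4):
var body = { op: 'mul',
             left: { op: 'param', name: 'n' },
             right: { op: 'mul',
                      left: { op: 'const', value: 2 },
                      right: { op: 'const', value: 3 } } };
var specialized = fold(substitute(body, 'n', 4));
// specialized is a single constant: every multiplication was folded.
```

Without the substitution step, folding can only simplify the subexpression \texttt{2 * 3}; with the parameter replaced by its runtime value, the whole body collapses to a constant.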

We believe that parameter based code specialization opens up many different ways
to enhance just-in-time compilers.
In this paper we have only scratched the surface of these possibilities, and much
work is still left to be done.
In particular, we would like to explore how different compiler optimizations
fare in the face of parameter specialization.
From our preliminary experiments we know that some optimizations, such as
constant propagation, do well in this new setting; however, optimizations
such as global value numbering do not benefit much from it.
We already have a short list of optimizations that we believe could benefit
from our type of specialization.
Promising candidates include loop inversion, loop unrolling and dead code
elimination.

Even though we have obtained non-trivial speedups on top of Mozilla's JavaScript
engine, our current implementation of parameter based code specialization is
still research-quality.
We are actively working to make it more robust.
Priorities in our to-do list include specializing functions that
are compiled many times, and caching the values of the specialized parameters,
so that future function calls can reuse the specialized code.
Nevertheless, the preliminary results are encouraging.

\bibliographystyle{plain}
\bibliography{igor}

\end{document}
