\documentclass[twocolumn]{article}
\usepackage{style/latex8}
\usepackage{times}
\usepackage{graphicx}
\usepackage{alltt}
\usepackage{latexsym}
\usepackage{amssymb}
\usepackage{color}
\usepackage{style/bigbox}

\def\graphics#1#2{
\includegraphics[width=#1\textwidth]{#2}
%\rotatebox{-90}{\includegraphics[height=#1\textwidth]{#2}}
}
\def\rotategraphics#1#2{
%\includegraphics[width=#1\textwidth]{#2}
\rotatebox{-90}{\includegraphics[height=#1\textwidth]{#2}} }
\def\rotategraphicsup#1#2{
%\includegraphics[width=#1\textwidth]{#2}
\rotatebox{90}{\includegraphics[height=#1\textwidth]{#2}} }
\hyphenation{op-tical net-works semi-conduc-tor}

\title{Speedup Java Compilations by Restructuring Class Libraries}

\author{\begin{tabular}{cc}Yijun Yu, John Mylopoulos & Jianguo Lu\\ University of Toronto & University of Windsor \\
\{yijun, jm\}@cs.toronto.edu & jlu@uwindsor.ca \\
\end{tabular}}

\begin{document}
\maketitle
\begin{abstract}
Large-scale Java programs take a long time to compile, hampering productivity.
This paper presents a tool that reduces compilation time by incrementally
restructuring class libraries. We demonstrate through experiments that the
tool achieves significant speedups across all Java compilers tested.
\\\\ 
{\bf Keywords} {\footnotesize compilation performance, class library, redundancies, false dependencies} \end{abstract}
%\thispagestyle{empty}

\section{Introduction} \label{sec:introduction}

Managing the complexity of large-scale software development constitutes a central
problem for Software Engineering~\cite{vanVliet03}.  Ever-growing demands for
improved functionality, better quality and more services have resulted in
software systems that are ever more complex, in accordance with Lehman's second
law of evolution~\cite{lehman96lncs}. Such pressures obscure the internal
structure and quality of the software, making it difficult to understand and
maintain~\cite{homy03phd}. A large part of this complexity stems from low
agility in responding to changes: system test processes must wait hours for a
ready-to-run system to be built after developers make changes. The
situation can therefore be improved by reducing the build time of the programs,
which is especially important during the software development process, when
code changes and compilations are numerous.

When developing software in Java, in addition to the class libraries provided
by the Java runtime environment (JRE), many other libraries are introduced
along the way. Very often, to use a small fraction of an API, software
developers load the entire library, hoping that the machine is fast enough to
accommodate it. As the software grows, these libraries pile up, the compilation
process becomes hideously slow, and software development stalls.

One may wonder why existing Java IDEs such as Eclipse cannot solve the
problem. It is known that the Eclipse\footnote{http://www.eclipse.org} Java
Development Tool (JDT) can {\em reorganize} {\tt import} statements by
replacing wildcard package imports with precise imports of the necessary
classes.  Experimental results have shown, however, that updating import
statements cannot substantially reduce the build time.

Unlike redundancy removal by reorganizing {\tt import} statements in
Eclipse, the restructuring approach presented in this paper analyzes the
redundancies inside the class libraries and produces the minimal packages
necessary to compile the programs. Since we do not reorganize the import
statements, the approach can be applied without changing a single
line of the Java programs. It was inspired by
our previous work~\cite{yu05icsm, dayani-fard05fase}, which delivered substantial
(40 times) speedup to the build of large-scale C/C++ programs.  That solution
includes a {\em redundancy removal} front-end {\tt
precc}\footnote{http://sourceforge.net/projects/precc}, followed by {\em header
restructuring} and {\em clustering} steps. The tool removes redundancies in the
programs to speed up compilation, produces caching results that are
independent of compilers, and produces binaries identical to the
originally compiled ones.

This paper presents a major technical improvement over the algorithm
implemented in {\em precc}: a new algorithm in {\em precj} that
avoids false dependencies by incrementally updating the partitioning set as well
as the dependency graph. The overhead of incremental restructuring can be
greatly reduced because code changes are usually small and frequent.  We also
apply these algorithms to Java programs: this paper reports on experimental
findings using { \tt
precj}\footnote{svn://www.cs.toronto.edu/cs/htdocs/km/precj/repo}, the
precompilation tool for Java.  

%As Java is increasingly adopted in large-scale software systems, one may wonder
%whether our toolkit can be applied. In this paper, we report on experimental
%findings using { \tt
%precj}\footnote{svn://www.cs.toronto.edu/cs/htdocs/km/precj/repo}, the
%precompilation tool for Java.  The new tool inherits the following virtues from
%its predecessor {\tt precc}:
%\begin{itemize}
%\vspace*{-0.25cm}
%\item Remove redundancies in the programs to speedup the compilation;
%\vspace*{-0.25cm}
%\item Produce caching results that are independent of compilers;
%\vspace*{-0.25cm}
%\item Produce binaries that are the same as the originally compiled one;
%\vspace*{-0.25cm}
%\end{itemize}

%Conceptually, the redundancy removal algorithm is the same in both tools.
Since {\tt precj} works at the Java class level, it is safer and faster as it does
not touch the source code.  In addition, {\tt precj} can be applied to all Java
applications and is platform-independent (thanks to Java), whereas {\tt precc}
must be run separately on each platform due to platform-specific C/C++ macros.

%Tools that further reduce redundancy inside classes are orthogonal to the {\tt
%precj} precompilation tool.
%Unlike redundancy removal by {\em reorganizing} {\tt import} statements in the
%Eclipse\footnote{http://www.eclipse.org} Java Development Tool (JDT), our
%approach analyzes the redundancies inside the class libraries and produces
%minimal packages that are needed to compile the programs. Since we do not
%reorganize the import statements in the Java programs, it can be applied
%without changing a single line of the Java programs.

In the context of Java library restructuring, our process consists of four
steps: 1) the Java classes are parsed by a standard Java compiler, which accepts
all standard Java language features, or by an adapted open-source GNU Compiler for
Java ({\tt gcj}) 3.4.5 parser, which compiles much faster but does not cover
all Java language features (see
Section~\ref{sec:extracting}); 2) a redundancy removal algorithm removes
unnecessary library classes to produce minimal libraries (see
Section~\ref{sec:removing}); 3) a false dependency removal algorithm
repartitions the necessary library classes for each Java package
(see Section~\ref{sec:restructuring}) to accommodate incremental compilations;
4) finally, a new incremental update algorithm for the restructured packages
reduces the restructuring overhead for small and frequent changes
(see Section~\ref{sec:incremental}).

In addition to the algorithms for identifying and removing redundancies, the
paper also presents an experimental evaluation.
%
The resulting technique is transparent to the build process, incremental with
respect to development, and independent of the choice of compiler and platform.
The speedup is observed across all Java compilers we tried, including the
{\tt JDK} compilers from Sun, IBM and Blackdown, the Eclipse Java compiler
(aka {\tt ecj}), and the GNU Java compiler ({\tt gcj}).

The rest of the paper is organized as follows.
Sections~\ref{sec:extracting}--\ref{sec:restructuring} present a simple
motivating example and several formal algorithms that serve as the foundations of
our approach. Section~\ref{sec:experiments} outlines experimental results for
several public-domain Java components of {\tt Eclipse} (e.g., {\tt
ECJ}, {\tt SWT}).  Section~\ref{sec:related} describes and compares related
work in the area of compilation optimization. Section~\ref{sec:conclusion}
provides some concluding remarks.

\section{Motivating Example} \label{sec:extracting}

Before presenting the formalism, we first introduce the problem with
a simple example to motivate the fundamental solution. \\
%It is the `Hello World' example in every introductory programming course.
%\hspace*{-0.5cm}
\begin{bigbox}
\vspace*{-0.5cm}
\begin{alltt}\footnotesize
1. import java.lang.*;
2. import java.io.*;
3. public class Hello \{
4.   public static void main(String args[]) \{
5.      System.out.println("Hello, world!");
6.   \}
7. \}
\end{alltt}
\vspace*{-0.5cm}
\end{bigbox}

%\paragraph{A motivating example}
%
In this program, lines 1-2 import two packages so that
the compiler knows the semantics of symbols declared by external classes. We
show later that even after removing these two lines, the program takes almost as
long to compile.

In this example, line 4 requires the class `java.lang.String', and
line 5 requires the classes `java.lang.System' and `java.io.PrintStream' for
the compiler to understand the object `System.out' and the signature of the
method `println'.

A few more classes are also necessary for compiling this simple
program: `java.lang.Object', `java.io.FilterOutputStream' and
`java.io.OutputStream' are indirectly needed to understand the
bytecode of the above-mentioned classes that inherit from them. Nothing else
is needed. As a result, to compile `Hello.java', one
only needs to pack the above 6 classes into a 16KB archive `rt.jar' and
load it from the command line\footnote{Different Java compilers may
require different default runtime libraries; see
Table~\ref{tab:hello-results}. Here we use
the JDK 1.5.0\_06 runtime library.} by the following
commands:
\begin{bigbox}
\vspace*{-0.5cm}
\begin{alltt} \footnotesize
1. jar xf \$JAVA_HOME/jre/lib/rt.jar
2. jar cf rt.jar java/lang/Object.class ...
3. javac -bootclasspath rt.jar Hello.java
\end{alltt}
\vspace*{-0.5cm}
\end{bigbox}

Line 1 extracts all 15,200 classes from the 51,715KB runtime libraries of
JDK 1.5.0\_06; these are {\em by default} the library classes of the Java
compiler {\tt javac}. Line 2 repacks a smaller runtime library `rt.jar' that
contains only the 6 necessary classes and measures 16.2KB. Line 3
instructs the Java compiler to use the smaller runtime library in place of the
default one. As a result, the compilation takes $0.42$ seconds, a speedup of
$57\%$ over the original compilation.

One may wonder why such a simple optimization has not been adopted by
standard Java compilers. In Eclipse JDT, for example, the two wildcard import
statements can be minimized to precisely the necessary classes. In this example,
the two import lines are removed completely, as the 6 necessary classes are
among the `standard' Java import classes (in Eclipse there is no need to import
any classes of the java.lang package). However, the expected boost in
compilation speed does not happen in Eclipse after removing the two lines.
Doing so saves a tiny fraction of the time for parsing two statements, but the
compiler still spends considerable time filtering the 6 classes out of the
50MB default class library, unless the rest of the library is never loaded
because it has been replaced with the 16KB one.

The Eclipse compiler is not the only one that suffers from loading redundant
classes: experiments with three `standard' Java compilers have shown that
replacing the library makes compilation of the same program much faster.

Table~\ref{tab:hello-results} shows the experimental results of compiling the
`Hello' program with different Java compilers.  The hardware is a dedicated
Intel 1.66GHz laptop with 512MB of memory running Gentoo
Linux.  The elapsed time of each configuration is measured as the minimum over
10 runs.  The number of library classes and the size in bytes of the
compressed library archives are shown as well; primed columns give the
values after minimization.

\begin{table*}[ht]\centering
\caption{Comparison of Java compilers before and after minimizing the
load classes for `Hello.java'} \label{tab:hello-results}
\begin{tabular}{||l||r|r||r|r||r|r||r||}\hline
\small javac ver. & \small  \# class & \small  \# class' & \small  \# bytes & \small  \# bytes' & \small  seconds & \small  seconds' & \small 
speedup \\\hline
\small ibm 1.5.0.01 & \small  16,093 & \small  6 & \small  34,087,051 & \small  20,131 & \small  0.89 & \small  0.63 & \small  41\% \\
\small sun 1.5.0.06-rc2 & \small  15,200 & \small  6 & \small  52,956,351 & \small  16,569 & \small  0.66 & \small  0.42 & \small  57\% \\
\small ibm 1.4.2.03 & \small  12,952 & \small  6 & \small  26,364,438 & \small  16,070 & \small  0.65 & \small  0.42 & \small  54\% \\
\small sun 1.4.2.10-rc2 & \small  10,653 & \small  6 & \small  35,840,446 & \small  13,683 & \small   0.48 & \small  0.40 & \small  20\% \\
\small blackdown 1.4.2.03 & \small  10,655 & \small  6 & \small  35,847,970 & \small  13,683 & \small  0.41 & \small  0.28 & \small  46\% \\
\small ecj 3.1.0 & \small  10,655 & \small  6 & \small  35,847,970 & \small  13,683 & \small  1.32 & \small  0.64 & \small  106\% \\
\small ecj (native) 3.1.0  & \small  10,655 & \small  6 & \small  35,847,970 & \small  13,683 & \small  0.22 & \small  0.12 & \small  83\% \\
\small gcj 3.4.5 & \small  2,355 & \small  11 & \small  5,346,055 & \small  12,858 & \small  0.02 & \small  0.01 & \small  100\% \\\hline
\end{tabular}
%\vspace*{-0.6cm}
\end{table*}
Note that (1) the default bootclasspath of {\tt ecj} is the same as that of the
JDK (Blackdown) used to compile it in this experiment; Blackdown JDK is
the default JDK compiler on Gentoo Linux; (2) five more classes were
loaded to compile `Hello.java' by {\tt gcj}, namely,
\begin{alltt}\footnotesize
java.lang.CharSequence
java.lang.Comparable
java.lang.String\$CaseInsensitiveComparator
java.io.Serializable
java.util.Comparator
\end{alltt}
(3) both Sun's and IBM's JDK 1.4.2 were tested for comparison with
Blackdown's JDK 1.4.2; we also tested Sun's and IBM's JDK 1.5.0 to compare
their library sizes with those of JDK 1.4.2. We could not yet test
Blackdown's JDK 1.5.

The fastest Java compiler in this experiment is {\tt gcj}.  Native compilers
are apparently faster than Java-based ones; note, however, that the runtime
library used by {\tt gcj} 3.4.5 contains fewer classes than the JRE.
Since {\tt gcj} is an ongoing open-source project, the coverage of the JDK
classes it supports is catching up: e.g., a project as complex as the Eclipse
SDK can be compiled by {\tt gcj} into a native application with certain
patches.

Common to all the above experiments, regardless of whether a native or a pure
Java compiler is used, once the library is replaced with a minimal one
containing only the classes necessary for compiling `Hello', one obtains
a speedup ranging from 20\% to 106\%.

Last but not least, the size of the JDK has grown by 29\% (IBM) or 47\% (Sun)
from 1.4.2 to 1.5.0, indicating that build performance worsens for a
given program. In this experiment, the slowdown factor is 27\%
(IBM) and 28\% (Sun), respectively.

\section{Removing Redundancies}\label{sec:removing}

To generalize the solution found in the previous example, we must extract
all the classes that are needed by the program being compiled.

Let $C$ denote the set of classes to be compiled, $L$ the set of the library
classes loaded during the compilation of $C$. A class dependency graph is
defined as $G = (L\cup C, D)$ where $D \subset (L\cup C) \times (L \cup C)$ is
the dependency relation that tells which classes in $L\cup C$ are necessary when
compiling the classes in $C$. 

For the `HelloWorld' example in the previous section, $C$ contains a single
class $\{Hello\}$, whereas $L$ contains 15,200 classes in the runtime
libraries. The class dependency graph is a relation shown by the following
pairs: 
\vspace*{-0.25cm}
\begin{alltt}\footnotesize
(java.lang.String,           Hello)
(java.lang.System,           Hello)
(java.io.PrintStream,        Hello)
(java.lang.Object,           java.lang.String)
(java.lang.Object,           java.lang.System)
(java.io.FilterOutputStream, java.io.PrintStream)
(java.io.OutputStream,  java.io.FilterOutputStream)
(java.lang.Object,           java.io.OutputStream)
\end{alltt}
\vspace*{-0.25cm}
By taking the closure of the dependency relation over $C$, the necessary
classes $B \subseteq L$ can be obtained: in this example, the 6 classes
mentioned above.
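The closure computation can be sketched as a simple graph traversal. The
following illustrative Java sketch (class and method names are our own, not
part of the tool) mirrors the `Hello' example, with pairs written in the
(dependee, depender) order used above:

```java
import java.util.*;

// Sketch: compute the necessary library classes B as the closure of the
// dependency relation over the compiled classes C.
public class Closure {
    // Returns every class reachable from the compiled classes by
    // repeatedly following dependencies from depender to dependee.
    static Set<String> necessary(Set<String> compiled, List<String[]> deps) {
        // index: depender -> its direct dependees
        Map<String, List<String>> dependees = new HashMap<>();
        for (String[] d : deps)
            dependees.computeIfAbsent(d[1], k -> new ArrayList<>()).add(d[0]);
        Set<String> seen = new HashSet<>(compiled);
        Deque<String> work = new ArrayDeque<>(compiled);
        while (!work.isEmpty()) {
            String c = work.pop();
            for (String dep : dependees.getOrDefault(c, List.of()))
                if (seen.add(dep)) work.push(dep);
        }
        seen.removeAll(compiled);  // B = closure \ C
        return seen;
    }

    public static void main(String[] args) {
        List<String[]> d = List.of(
            new String[]{"java.lang.String", "Hello"},
            new String[]{"java.lang.System", "Hello"},
            new String[]{"java.io.PrintStream", "Hello"},
            new String[]{"java.lang.Object", "java.lang.String"},
            new String[]{"java.lang.Object", "java.lang.System"},
            new String[]{"java.io.FilterOutputStream", "java.io.PrintStream"},
            new String[]{"java.io.OutputStream", "java.io.FilterOutputStream"},
            new String[]{"java.lang.Object", "java.io.OutputStream"});
        // yields the 6 library classes of the example
        System.out.println(necessary(Set.of("Hello"), d).size());
    }
}
```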

Since our objective is to load only the necessary classes $B$, we
need not construct the class dependency graph explicitly if the
compiler already produces it.

Both JDK compilers 1.5 and 1.4.2 can report the loaded classes after
compilation when the `-verbose' option is turned on. JDK 1.5
also displays the default bootclasspath used by the compiler.

One can create a script that turns the verbose output into
a set of extraction commands that pull the necessary classes
out of the libraries.
\begin{bigbox}
\vspace*{-0.5cm}
\begin{alltt}\footnotesize
\$JDK/bin/javac -verbose Hello.java
\end{alltt}
\vspace*{-0.5cm}
\end{bigbox}
For example, the above command produces the following:\\
\begin{bigbox}
\vspace*{-0.5cm}
\begin{alltt}\footnotesize
[parsing started Hello.java]
[parsing completed 88ms]
[loading rt.jar(java/lang/Object.class)]
[loading rt.jar(java/lang/String.class)]
[checking Hello]
[loading rt.jar(java/lang/System.class)]
[loading rt.jar(java/io/PrintStream.class)]
[loading rt.jar(java/io/FilterOutputStream.class)]
[loading rt.jar(java/io/OutputStream.class)]
[wrote Hello.class]
[total 3728ms]
\end{alltt}
\vspace*{-0.5cm}
\end{bigbox}
A line starting with `[loading' tells us which library class is loaded
during the compilation. The following scripts extract these
classes and repack them into a new library.

\hspace*{-0.5cm}
\begin{bigbox}
\vspace*{-0.5cm}
\begin{alltt}\footnotesize
\# precj.sh
 1. javac -verbose \$* >\& /tmp/precj.log
 2. cat /tmp/precj.log | awk -f precj.awk
\# precj.awk
 3. /loading/ \{
 4.   n = split (\$2, a, "(");
 5.   if (n==2) \{
 6.    m = split(a[2], b, ")");
 7.    if (length(jars[a[1]]) == 0)
 8.      jars[a[1]] = " ";
 9.    jars[a[1]] = jars[a[1]] " " b[1];
10.    classes = classes " " b[1];
11.  \}
12. \}
13. END \{
14.   for (jar in jars)
15.     system("jar xf " jar " " jars[jar]);
16.   system("jar cf rt.jar " classes);
17.   system("rm -f " classes);
18. \}
\end{alltt}
\vspace*{-0.5cm}
\end{bigbox}

%Note that if `precj.sh' is called for the second time, the classes previously
%extracted will be loaded in place of those original archived classes and the
%verbose compilation could not tell where to extract these classes. Therefore
%Line 1 first cleans up so as to obtain them from the verbose compilation mode
%(Line 2).
Line 1 generates a log file from the verbose output of the Java compilation.
Line 2 processes the log with the AWK script in Lines
3-18. In brief, the AWK script parses each line matching `loading' (Lines
3-12) and collects the necessary JAR archives and classes into the corresponding
variables. At the end (Lines 13-18), the script extracts the exact library
classes from the JAR archives, repacks them into `rt.jar', and finally removes
the extracted class files.

Alternatively, we implemented a more efficient solution by directly modifying
the GNU Compiler for Java ({\tt gcj}) 3.4.5: in {\tt
gcc/java/jcf-parse.c}, the function {\tt jcf\_parse} is patched, after the type
checking statement, with an invocation of the following function we created:\\
\begin{bigbox}
\vspace*{-0.5cm}
\begin{alltt}\footnotesize
 1. static void
 2. output\_a\_class(JCF *jcf)
 3. \{
 4.    if (jcf->zipd==0) // not in a zip file
 5.        return;
 6.    char fulldir[500];
 7.    char name[500], precj\_classname[500];
 8.    strcpy(name, main\_input\_filename);
 9.    strcat(name, ".precj/");
10.    strcat(name, jcf->classname);
11.    strcpy(precj\_classname, name);
12.    char *dir = strtok(name, "/");
13.    strcpy(fulldir, "");
14.    while (dir !=NULL ) \{
15.        strcat(fulldir, dir);
16.        strcat(fulldir, "/");
17.        dir = strtok(NULL, "/");
18.       if (dir!=NULL)
19.            mkdir(fulldir, 0755);
20.    \}
21.    int fd = open(precj\_classname,
              O\_CREAT | O\_WRONLY, 0644);
22.    if (fd!=-1) \{
23.        write(fd, jcf->buffer,
              jcf->buffer\_end - jcf->buffer);
24.        close(fd);
25.    \}
26. \}
\end{alltt}
\vspace*{-0.5cm}
\end{bigbox}

The above function does the following: each class that has been parsed into
the `jcf' data structure, because it is necessary for type checking, has its
bytecode written to a file next to the source file
`main\_input\_filename' (Lines 8-20). Since the zip processing routine of
{\tt gcj} is reused to prepare the bytecode (Lines 21-25), there is no need to
parse the JAR files ourselves. The advantage of this precompiler over the AWK
script is that the necessary library classes are obtained while the program is
parsed.

The redundancy removal algorithm for a single class is easily applied to a
set of classes: as long as the classes are passed to the precompiler
as command-line arguments, the necessary library classes for compiling all
of them can be derived.

\section{Removing False Dependencies}\label{sec:restructuring}

Using the redundancy removal algorithm of the previous section,
all the library classes necessary for compiling a class $c$ can be
packed into a corresponding archive, denoted by the set $B_c$. Thus,
if the class $c$ is compiled again, only the necessary library classes in
$B_c$ are loaded.

A project, however, usually contains many compilation units. In this section,
we explain why incremental Java compilation is not enough in the context of
redundancy removal, and present an incremental restructuring algorithm that
removes the redundancies without invoking the precompiler on every compilation
unit.

\subsection{Incremental compilation} \label{sec:inccomp}
To explain what happens in a standard Java compiler when there are incremental
changes to the program, a small example is illustrated in
Figure~\ref{fig:1}.  In this example, two Java classes {\tt Foo} and {\tt Bar}
are in source form, each defined in its own compilation unit. The class {\tt Foo}
depends on the class {\tt Bar} in source form, denoted as ({\tt Foo}, {\tt
Bar}).  Since there are direct or indirect dependencies ({\tt Foo}, {\tt
Moo1}), ({\tt Moo1}, {\tt Moo}), ({\tt Bar}, {\tt Moo2}) and ({\tt Moo2}, {\tt
Moo}), the classes {\tt Moo1}, {\tt Moo}, {\tt Bar}, {\tt Moo2} must be
imported from the classpath.

While a Java compiler is parsing a compilation unit, whenever an unknown Java
class is needed, the compiler first searches for the bytecode form of the class
in the bootclasspath. When found, this bytecode is loaded and cached in memory
for reuse. Only when the bytecode is not found in the specified
bootclasspath does the compiler search for the source form, which can be parsed
into bytecode.  This process is applied recursively to the dependent
classes in source form until all the necessary classes are found.
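The lookup order just described can be sketched as follows; the class
{\tt Resolver}, its field names, and the path sets are illustrative inventions
of ours, not compiler internals:

```java
import java.util.*;

// Sketch of the class-resolution order described above: bytecode on the
// bootclasspath is preferred; only on a miss is the source form parsed,
// and its own dependencies are then resolved recursively.
public class Resolver {
    Set<String> bootBytecode;          // classes available in bytecode form
    Map<String, List<String>> srcDeps; // source class -> its dependencies
    List<String> loaded = new ArrayList<>();   // loaded from bytecode
    List<String> compiled = new ArrayList<>(); // parsed from source
    Set<String> resolved = new HashSet<>();

    Resolver(Set<String> boot, Map<String, List<String>> deps) {
        bootBytecode = boot;
        srcDeps = deps;
    }

    void resolve(String cls) {
        if (!resolved.add(cls)) return;      // already loaded or compiled
        if (bootBytecode.contains(cls)) {    // bytecode found: load and stop
            loaded.add(cls);
            return;
        }
        compiled.add(cls);                   // fall back to the source form
        for (String dep : srcDeps.getOrDefault(cls, List.of()))
            resolve(dep);                    // recurse into its dependencies
    }
}
```

For the figure's example, resolving {\tt Foo} compiles {\tt Foo} and
{\tt Bar} from source while {\tt Moo1} and {\tt Moo2} are loaded from
bytecode.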

In this example, upon a fresh build of the Java project, both compilation
units {\tt Foo} and {\tt Bar} must be {\it compiled}, whereas the other classes
must be {\it loaded}.  As a result, the bytecode forms {\tt Foo.class}
and {\tt Bar.class} are stored in a directory that may be included in the
classpath.  By default, this is the directory containing
the source compilation units.

Upon an incremental build, on the other hand, the compilation process can be
shortcut. Suppose, for example, that no change occurs to the class {\tt Bar}
whereas a change happens to the class {\tt Foo}. In this scenario, the bytecode
form of class {\tt Bar} is {\it loaded}, obviating the need to recompile it.
Meanwhile, the dependent class {\tt Moo2} does not need to be loaded for
compiling the class {\tt Foo}.

\begin{figure}\centering
\includegraphics[width=0.4\textwidth]{figures/fig1}
\caption{Fine-grained dependencies in Java}\label{fig:1}
\vspace*{-0.5cm}
\end{figure}

Conceptually, the fine-grained Java class dependencies (Figure~\ref{fig:1}) can
be classified into four kinds according to the forms of the source/target
artifacts: (1) {\em source} dependency $D_1$, where both depender and dependee
are in Java source form, e.g.  ({\tt Foo.java}, {\tt Bar.java}); (2)
{\em direct binary} dependency $D_2$, where the depender is in Java source
form and the dependee is in bytecode form, e.g. ({\tt Foo.java}, {\tt
Moo1.class}); (3) {\em indirect binary} dependency $D_3$, where both
depender and dependee are in binary form, e.g. ({\tt Moo1.class}, {\tt
Moo.class}); (4) {\em parse} dependency $D_4$, where the depender is in
bytecode form and the dependee is in source form, e.g. ({\tt Bar.class}, {\tt
Bar.java}).  We denote the transitive closure of a dependency relation
$D$ by $\mbox{closure}(D)$.

Given the set $C$ of classes in source form, the set $\Delta C$ of touched
source classes, and the set $B$ of default library classes in binary form,
the necessary library classes for compiling $C$ can be computed as follows.  First,
the set $R$ of recompiled compilation units is calculated on the basis of the
transitive closure of the source dependencies:
\[R =\{ r \mid c \in \Delta C \wedge r \in C \wedge (c, r) \in
\mbox{closure}(D_1)\} \cup \Delta C. \] The set of classes necessary for the
recompiled compilation units is the union of two sets. The first
set contains the parse-dependent classes needed by $R$ when the source classes
are up to date in the classpath:
\[B_1 = \{b \mid r \in R \wedge (r, b) \in D_4\};\]
the second set contains, transitively, all binary-dependent classes on the
bootclasspath needed by $R$ when the source classes do not exist or are
obsolete in the classpath:
\[B_2 =  \{b \mid r \in R \wedge r \notin \mbox{dom}\, D_4 \wedge (r, b) \in \mbox{closure}(D_2\cup D_3) \}, \]
where the domain of a relation is defined as $\mbox{dom}\, D_4 = \{r \mid
\exists b.\ (r, b)\in D_4\}.$
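Following the set definitions above literally (with pairs written in the order
the formulas use), the sets $R$, $B_1$ and $B_2$ can be computed as in the
sketch below; the class and helper names are ours, and each pair $(x, y)$ is
encoded as the string {\tt "x y"}:

```java
import java.util.*;

// Sketch: compute the recompilation set R and the necessary binary
// classes B1 and B2 by following the set definitions above verbatim.
public class IncrementalSets {
    // transitive closure of a pair relation
    static Set<String> closure(Set<String> d) {
        Set<String> c = new HashSet<>(d);
        for (boolean grew = true; grew; ) {
            Set<String> add = new HashSet<>();
            for (String p : c) for (String q : c) {
                String[] a = p.split(" "), b = q.split(" ");
                if (a[1].equals(b[0])) add.add(a[0] + " " + b[1]);
            }
            grew = c.addAll(add);
        }
        return c;
    }

    // R = {r in C | (c, r) in closure(D1) for some c in dC} union dC
    static Set<String> recompiled(Set<String> C, Set<String> dC,
                                  Set<String> d1) {
        Set<String> r = new HashSet<>(dC);
        for (String p : closure(d1)) {
            String[] cr = p.split(" ");
            if (dC.contains(cr[0]) && C.contains(cr[1])) r.add(cr[1]);
        }
        return r;
    }

    // B1 union B2: the parse-dependent classes of R, plus the transitive
    // binary dependencies of those r in R with no entry in dom(D4)
    static Set<String> binaries(Set<String> R, Set<String> d4,
                                Set<String> d23) {
        Set<String> dom4 = new HashSet<>(), b = new HashSet<>();
        for (String p : d4) dom4.add(p.split(" ")[0]);
        for (String p : d4) {                             // B1
            String[] rb = p.split(" ");
            if (R.contains(rb[0])) b.add(rb[1]);
        }
        for (String p : closure(d23)) {                   // B2
            String[] rb = p.split(" ");
            if (R.contains(rb[0]) && !dom4.contains(rb[0])) b.add(rb[1]);
        }
        return b;
    }
}
```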

The Java compiler can thus be used {\em incrementally} to compute the necessary
classes for compiling a given set of changed Java sources. 

\subsection{Incremental restructuring}\label{sec:incremental}

If our redundancy removal precompiler of Section~\ref{sec:removing} were used
along with the existing incremental Java compilation explained in
Section~\ref{sec:inccomp}, two maintenance problems would remain.

The first problem is {\em false dependency}: the necessary library
classes obtained from the redundancy removal process for one set of changed
compilation units may not be sufficient for another set of changes, because the
two sets are computed separately in two incremental builds. Merging the two sets
into one may introduce redundant library classes for each set of
changes. We solve this problem with a restructuring algorithm that extracts
common classes among the different compilation units on the basis of the class
dependency graph, such that no extra (false) classes are loaded when compiling
any compilation unit.

The second problem is the {\em overhead} of applying the restructuring
algorithm to the complete code base after every small change.  If the
computation can be scoped incrementally to the changed parts, the overhead can
be greatly reduced. In this section we therefore also provide an incremental
algorithm that updates the restructured packages according to the changed
classes.

\paragraph{Restructuring: extracting common classes}
Preparing a separate necessary library archive $B_c$ for each individual class
$c$ may introduce many overlapping archives if the redundancy removal algorithm
is applied individually: using the algorithm of Section~\ref{sec:removing}, the
two necessary archives $B_{c_1}$ and $B_{c_2}$ for two classes $c_1$ and $c_2$
may overlap substantially.
A naive way to avoid such duplication is to unite the necessary library
classes of all the compiled
classes in the set $C$ into one library archive $B = \bigcup_{c \in C} B_c$;
all the classes are then compiled by loading the same set of library classes.
Collectively there is still no redundancy, because every class in $B$ is
needed by at least one class in $C$. Unfortunately, such redundancy removal has
to be carried out on the complete set of compilation units, and two practical
situations make it less useful: (1) when there are too many classes to be
restructured, they cannot all be passed on a single command line as
arguments to the precompiler, due to the shell's limit on the number of
arguments; (2) the changed classes in an IDE are usually a
subset of all the restructured classes.  It is therefore unrealistic to
remove redundancy for all the changed classes in one step.

In practice, therefore, we must assume that the classes are compiled
incrementally in separate invocations of the compiler.  Hence, the library
archive $B$ prepared by the redundancy removal must be partitioned into
disjoint packages such that the partitioned packages can be reused as much as
possible among separate compilations.

When a class is compiled by loading some unwanted library classes
that are necessary only for other classes, we say that a {\em false dependency}
occurs between the library classes and the compiled class.  We have presented
elsewhere~\cite{yu03cascon, yu05icsm} algorithms that remove false dependencies
based on the extracted syntactic dependency graph of program entities. In this
paper, the same algorithms are reused, but the input to the algorithm
is an extracted dependency graph among classes.  In brief, the false
dependency removal algorithm for restructuring works as follows.

Suppose the redundancy in the original library $L$ has been removed and the
necessary library classes $B$ for compiling all the classes $C$ have been
computed. The class dependency graph can be rewritten as $G = (B \cup C, D)$
where $D \subset (B\cup C) \times (B \cup C)$ is the dependency relation.

Given that the classes $C$ are compiled in separate phases by loading different
subsets of library classes, we need to partition $B$ so as to have
minimal\footnote{~\cite{dayani-fard05fase} explains how a clustering algorithm
is used to repackage the compilation units according to the dependency graph.
Here we assume the original packaging of the Java compilation units is not
subject to change, which may keep the algorithm from completely eliminating
false dependencies among classes, but allows it to eliminate false dependencies
among packages.} false dependency between the partitioned library classes
and the compilation units.  With the containment relation between packages and
classes, we define a {\em package dependency graph} (PDG)
$\mathcal{G}=(\mathcal{P},\mathcal{D})$, where $\mathcal{P}$ represents the set
of packages. Each element of $\mathcal{P}$ contains a subset of the classes $B\cup
C$ of the class dependency graph. The vertices $\mathcal{P}$ are separated into
library class packages $\mathcal{B}$ and compiled class packages
$\mathcal{C}$; the edges $\mathcal{D} \subset (\mathcal{B}\cup
\mathcal{C}) \times (\mathcal{B}\cup\mathcal{C})$ are the package dependencies.
The relation between the class dependency graph $G$ and the package dependency
graph $\mathcal{G}$ is determined by the $N$-to-1 partitioning mapping
$\mathcal{X}: B \rightarrow \mathcal{P}$, where each element of $\mathcal{P}$ is a
partitioned (disjoint) subset of $B$.

In a package dependency graph, a dependency between a package containing
library classes $A \subset B$ and a package containing compiled classes $A'
\subset C$ is {\em false} if there is no class dependency from any class $a
\in A$ to any class $a' \in A'$.  Our restructuring only considers false
dependencies caused by spurious classes in the library packages. A possible
remedy to this problem is to split the library packages so that only true
package dependencies occur.
However, we do not split the compilation unit packages into individual classes
because classes within a package are usually compiled together, and thus the
false dependencies among these classes have little impact on the build time.
Hence, we keep the existing mapping between the compiled classes and
packages and replace all classes $C_i$ in the $i$-th package with one node $i
\in \mathcal{C}$. Thus the new class dependency graph has a condensed vertex
set $B \cup \mathcal{C}$ where $B$ is the union of the necessary library
classes in all the compilation packages $\mathcal{C}$.
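As a toy illustration (our own construction, not drawn from the case studies), consider one coarse library package and two compiled packages with exactly one class dependency each:

```latex
% Toy instance (ours): one coarse library package P, two compiled packages.
P = \{a_1, a_2\},\quad Q_1 = \{c_1\},\quad Q_2 = \{c_2\},\quad
D = \{(a_1, Q_1),\ (a_2, Q_2)\}
```

Both package edges involving $P$ are backed by some class dependency, yet compiling $Q_1$ still loads the unneeded $a_2$. Splitting $P$ into $\{a_1\}$ and $\{a_2\}$ leaves only package dependencies in which every library class is actually needed.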

Each necessary library class $u \in B_i$ of the $i$-th compiled package
contributes a dependency $(u, i)$ to the new class dependency graph. After taking
the union of the library classes over all compiled packages, we know the set of
compilation units that depend on each $u \in B$ as $\mathcal{D}(u) = \{i \mid u \in B_i\}$.
Starting from a naive partitioning where each library class in $B$ is a separate
partition, we merge the classes that belong to the same sets of packages
and update the partitioning $\mathcal{X}$.

Intuitively, the partitioning algorithm creates a partitioning $\mathcal{X}$
from equivalence classes: all elements $u$ of the same partition are needed by
the same set of compilation units (i.e., they have equal compilation unit
closures $\mathcal{D}(u)$). Figure~\ref{fig:2}A presents an example of this
calculation for each node in the graph. The algorithm then creates a new
package dependency relation $\mathcal{D}$ that mirrors the partial ordering
defined by set inclusion on the compilation unit closures of the partitions.

The resulting partitioning, together with the package dependency relation,
describes the new library packages to be generated by Algorithm 2.  After false
dependency removal (see Algorithm 1), the restructured partitions can be used
to generate the library packages (see Algorithm 2) that are necessary for
compiling each package, without wasting time loading falsely dependent
library classes.
\begin{figure}\centering
\begin{bigbox}
\vspace*{-1cm}
\begin{tabbing}\hspace{0.2cm}\=\hspace{.2cm}\=\hspace{.2cm}\=\hspace{.2cm}\=\hspace{.2cm}\=\\ \small
{\bf Algorithm 1. Partitioning Library Packages} \\\small
{\bf Input:} A set of library classes $B_i$ for each package $i \in \mathcal{C}$; \\\small
{\bf Output:} A partition $\mathcal{X}$ of $B$ where $B = \bigcup_{i \in \mathcal{C}} B_i$ \\\small
\small and a package dependency relation $\mathcal{D}$ \\\small
\small {\bf begin} \\
\>\small /* Initializing */ $B = \{\}$; $\forall u: \mathcal{D}(u) = \{ \}$;\\\small
\>\small {\bf for each} $i \in \mathcal{C}:$ $B = B \cup B_i$; $\forall u \in B_i: \mathcal{D}(u) = \mathcal{D}(u) \cup \{ i \}$; \\\small
\>\small /* Partitioning */ {\bf let} $\mathcal{X} = \{\{u\}| u \in B \}$; \\\small
\>\small {\bf repeat} \\\small
\>\small \>\small done = true; \\\small
\>\small \>\small {\bf for each} $X, Y \in \mathcal{X}$ and $X \neq Y$: \\\small
\>\small \>\small \>\small {\bf if} $\bigcup_{a \in X} {\mathcal{D}(a)}=\bigcup_{b \in Y} {\mathcal{D}(b)}$ {\bf then}:\\\small
\>\small \>\small \>\small \>\small $\mathcal{X} = \mathcal{X} \cup \{ X \cup Y \} \setminus \{X, Y\}$; done = false;\\\small
\>\small {\bf until} done;\\\small
\>\small {\bf return } $\mathcal{X}$, $\mathcal{D}$;\\\small
{\bf end} \\\small
\end{tabbing}
\vspace*{-1cm}
\end{bigbox}
\end{figure}
\begin{figure}\centering
\begin{bigbox}
\vspace*{-1cm}
\begin{tabbing}\hspace{0.2cm}\=\hspace{.2cm}\=\hspace{.2cm}\=\hspace{.2cm}\=\hspace{.2cm}\=\\ \small
{\bf Algorithm 2. Generate a Library Package} \\\small
{\bf Input:} A package $i \in \mathcal{C}$, 
a package dependency relation $\mathcal{D}$ and \\\small
  a partitioning $\mathcal{X}$ of all the library classes; \\\small
{\bf Output:} Library classes $B_i$ necessary for compiling $C_i$ \\\small
\small {\bf begin} \\\small
\>\small $B_i = \{\}$; \\\small
\>\small {\bf for each} partition $B \in \mathcal{X}:$ \\ \small
\>\> \small {\bf if} $i \in \mathcal{D}(B)$ {\bf then} $B_i = B_i \cup B $ {\bf end if}\\\small
\>\small {\bf return } $B_i$\\\small
{\bf end} \\\small
\end{tabbing}
\vspace*{-1cm}
\end{bigbox}
\end{figure}
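For concreteness, Algorithms 1 and 2 can be sketched in Java. The class and method names below are illustrative (not the actual API of our tool), and grouping classes by a hashed closure set replaces the pairwise merging loop of Algorithm 1:

```java
import java.util.*;

public class LibraryPartitioner {
    /**
     * Algorithm 1 (sketch): partition the library classes so that two classes
     * share a partition iff they are needed by exactly the same set of
     * compilation-unit packages, i.e. they have equal closures D(u).
     *
     * @param needed maps each package id i to its necessary library classes B_i
     * @return partitions of B, keyed by the common closure of their members
     */
    static Map<Set<Integer>, Set<String>> partition(Map<Integer, Set<String>> needed) {
        // Compute the compilation-unit closure D(u) = { i | u in B_i }.
        Map<String, Set<Integer>> closure = new HashMap<>();
        for (Map.Entry<Integer, Set<String>> e : needed.entrySet()) {
            for (String u : e.getValue()) {
                closure.computeIfAbsent(u, k -> new TreeSet<>()).add(e.getKey());
            }
        }
        // Group classes with equal closures into one partition; hashing the
        // closure replaces Algorithm 1's repeated pairwise merging.
        // (Closure sets must not be mutated while used as map keys.)
        Map<Set<Integer>, Set<String>> partitions = new HashMap<>();
        for (Map.Entry<String, Set<Integer>> e : closure.entrySet()) {
            partitions.computeIfAbsent(e.getValue(), k -> new TreeSet<>()).add(e.getKey());
        }
        return partitions;
    }

    /**
     * Algorithm 2 (sketch): the library classes needed to compile package i
     * are the union of all partitions whose closure contains i.
     */
    static Set<String> generatePackage(int i, Map<Set<Integer>, Set<String>> partitions) {
        Set<String> bi = new TreeSet<>();
        for (Map.Entry<Set<Integer>, Set<String>> e : partitions.entrySet()) {
            if (e.getKey().contains(i)) bi.addAll(e.getValue());
        }
        return bi;
    }
}
```

Two classes land in the same partition exactly when their closures $\mathcal{D}(u)$ are equal, which is what yields both the no-false-dependency and the maximal-granularity properties.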

In~\cite{yu05icsm} we proved three properties of the C/C++ header restructuring
algorithm: (1) no false dependency; (2) maximal granularity and (3) correct
code generation ordering. In the context of our Java class restructuring
algorithm, the first two properties still hold: (1) the adapted restructuring
algorithm can avoid false dependencies among packages and (2) it can also
achieve the minimal partitioning of library classes because false dependencies
would be introduced by merging any two partitioned sets of library classes.
Unlike C/C++ restructuring, the third property is not necessary, as the
set-oriented nature of the Java class load mechanism does not require any
ordering inside the generated packages to be preserved. 

\paragraph{Incrementally updating the restructured partitions}

The time complexity of Algorithm 1 is $\mathcal{O}(n)$ set operations, where
$n$ is the number of classes. To obtain all the sets of necessary library
classes, however, one has to invoke the redundancy removal for each compilation
unit. A full repartitioning is therefore too expensive to perform for every
small change to the code base.  Instead, it is more efficient to compute the
delta of the partitions from the changed source classes, while maintaining the
achieved goals of no false dependency and maximal granularity (see Algorithms 3
and 4).

Algorithm 3 can be briefly described as follows. Given a set of changed
compilation units (classes or packages) $\Delta \mathcal{C} \subseteq
\mathcal{C}$, the Java precompiler is invoked {\em only} on the changed
programs to obtain the set of necessary classes $B'_j$ for each $j \in \Delta
\mathcal{C}$. The original necessary classes $B_j$ are retrieved from the
partitioning by Algorithm 2, and the two sets are compared to obtain the
difference (additions and removals) $\Delta B_j = B'_j - B_j$.

\begin{figure}\centering
\begin{bigbox}
\vspace*{-1cm}
\begin{tabbing}\hspace{0.2cm}\=\hspace{.2cm}\=\hspace{.2cm}\=\hspace{.2cm}\=\hspace{.2cm}\=\\ \small
{\bf Algorithm 3. Compute the Changed Library Classes} \\\small
{\bf Input:} A set of changed packages $\Delta \mathcal{C}$, \\\small
a package dependency relation $\mathcal{D}$ and a partitioning $\mathcal{X}$ \\\small 
of the library classes before the changes; \\\small
{\bf Output:} The changed library classes $\Delta B_j$ for each \\\small
$j \in \Delta \mathcal{C}$\\
\small {\bf begin} \\\small
{\bf for each} $j \in \Delta \mathcal{C}:$ \\\small
\>\small $B'_j$ = ExtractNeededClasses($C_j$) /* see Section 3 */\\\small
\>\small $B_j$ = GeneratePackage($j$, $\mathcal{D}$, $\mathcal{X}$) /* Algorithm 2 */\\\small
\>\small $\Delta B_j$ = $B'_j - B_j$; /* set difference */ \\\small
\small {\bf end} \\\small
\end{tabbing}
\vspace*{-1cm}
\end{bigbox}
\end{figure}
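The set comparison at the heart of Algorithm 3 can be sketched as follows; the names are illustrative, and in the actual tool $B'_j$ comes from the precompiler of Section 3 and $B_j$ from Algorithm 2, whereas here both are taken as plain sets:

```java
import java.util.*;

public class DeltaComputer {
    /**
     * Algorithm 3 (sketch): for a changed package j, compare the freshly
     * extracted necessary classes B'_j against the classes B_j currently
     * recorded in the partitioning, and report the additions and removals.
     */
    static Map<String, Set<String>> delta(Set<String> fresh, Set<String> recorded) {
        Set<String> added = new TreeSet<>(fresh);
        added.removeAll(recorded);          // classes newly needed by j
        Set<String> removed = new TreeSet<>(recorded);
        removed.removeAll(fresh);           // classes j no longer needs
        Map<String, Set<String>> result = new HashMap<>();
        result.put("added", added);
        result.put("removed", removed);
        return result;
    }
}
```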

Algorithm 4 can be explained briefly as follows. Given the delta $\Delta B_j$
of any individual package $j$ calculated by Algorithm 3, the compilation unit
closure $\mathcal{D}(u)$ of each affected library class $u$ (covering both the
added and the removed classes in the delta set) is updated. The updated
equivalence class relation over the compilation unit closures results in a new
partitioning $\mathcal{X}$, which in turn leads to an updated package
dependency relation $\mathcal{D}$ based on the set inclusion relation of the
compilation unit closure sets.

\begin{figure}\centering
\begin{bigbox}
\vspace*{-1cm}
\begin{tabbing}\hspace{0.2cm}\=\hspace{.2cm}\=\hspace{.2cm}\=\hspace{.2cm}\=\hspace{.2cm}\=\\ \small
{\bf Algorithm 4. Update the Partitioning} \\\small
{\bf Input:} A partitioning $\mathcal{X}$ of the necessary library classes \\ \small 
a package dependency relation $\mathcal{D}$ and a set of necessary \\\small
library classes $\Delta B_j$ for each changed package $j \in 
\Delta \mathcal{C}$; \\\small
{\bf Output:} An updated partitioning $\mathcal{X}'$ of necessary library \\\small
classes and package dependency relation $\mathcal{D}'$ after the changes\\\small
\small {\bf begin} \\\small
\small {\bf for each} class $u$ in $\Delta B_j:$ \\\small
\>\small {\bf if} $u$ is an addition to $B_j$ {\bf then} \\\small
\>\>\small $\mathcal{D}'(u) = \mathcal{D}(u) \cup \{j\}$ \\\small
\>\small {\bf else if} $u$ is a deletion from $B_j$ {\bf then} \\\small
\>\>\small $\mathcal{D}'(u) = \mathcal{D}(u) \setminus \{j\}$ \\\small
\>\small {\bf endif} \\\small
\>\small update=false; n = $|\mathcal{X}|$; \\\small
\>\small {\bf for } i = 1, n\\\small
\>\>\small $B_i = \mathcal{X}(i)$ \\\small
\>\>\small {\bf if} $\mathcal{D}'(u) = \mathcal{D}(B_i)$ {\bf then} \\\small
\>\>\>\small $B'_i = B_i \cup \{u\}$; update=true; \\\small
\>\>\small {\bf else if} $\mathcal{D}(u)=\mathcal{D}(B_i)\wedge \mathcal{D}'(u)\neq \mathcal{D}(B_i)$ {\bf then} \\\small
\>\>\>\small $B'_i = B_i \setminus \{u\}$; update=true; \\\small
\>\>\small {\bf else} $B'_i = B_i$ {\bf endif} \\\small
\>\small {\bf if} !update {\bf then} $n=n+1$; $B'_{n}=\{u\}$ {\bf endif}\\\small
\>\small $\mathcal{X}'= \{B'_i | i=1..n \} $ \\\small
{\bf return} $\mathcal{X}'$, $\mathcal{D}'$; \\\small
\small {\bf end} \\\small
\end{tabbing}
\vspace*{-1cm}
\end{bigbox}
\end{figure}
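Assuming a representation in which partitions are keyed by the common closure of their members (our assumption for illustration, not a prescription of the tool), the per-class update of Algorithm 4 reduces to moving $u$ to the partition that matches its updated closure $\mathcal{D}'(u)$:

```java
import java.util.*;

public class IncrementalUpdater {
    /**
     * Algorithm 4 (sketch): after package j's needed set changes, move each
     * affected library class u to the partition matching its updated closure
     * D'(u). With partitions keyed by closure, the per-class update is a
     * constant number of hash-map operations.
     */
    static void moveClass(Map<Set<Integer>, Set<String>> partitions,
                          String u, Set<Integer> oldClosure, Set<Integer> newClosure) {
        // Remove u from its old partition; drop the partition if it empties.
        Set<String> old = partitions.get(oldClosure);
        if (old != null) {
            old.remove(u);
            if (old.isEmpty()) partitions.remove(oldClosure);
        }
        // Insert u into the partition with the updated closure (create if absent).
        partitions.computeIfAbsent(newClosure, k -> new TreeSet<>()).add(u);
    }
}
```

This mirrors the example of Figure~\ref{fig:2}(C): when the closure of $b$ grows from $\{5\}$ to $\{4,5\}$, $b$ joins the partition of $c$ and its old singleton partition disappears.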

Figure~\ref{fig:2} presents examples of the computation of the package
dependency relation and the partitioning when new library classes are
introduced into the changed compilation units. Initially (see
Figure~\ref{fig:2}(A)), the four compilation units (labeled 4, 5, 6 and 7)
each need three library classes. After Algorithm 1, the library classes are
partitioned into 7 partitions $[\{e\}, \{c\}, \{f\}, \{a\}, \{b\}, \{g\},
\{h\}]$, corresponding to the 7 nodes of the package dependency graph. The
dependency relation mirrors the set inclusion relation between the compilation
unit closure sets of the library packages.

Now (see Figure~\ref{fig:2}(B)), suppose the compilation unit $4$ is changed
so that, after redundancy removal, it needs the new set of library classes
$\{a, c, e, i\}$.  Algorithm 3 detects that a new library class $\{i\}$ is
inserted into the compilation unit by the change. Since $\{i\}$ is only used
by compilation unit $4$, the same as the other elements of partition $4$,
Algorithm 4 does not change the dependency graph. The library class
partitioning is updated to $[\{e\}, \{c\}, \{f\}, \{a, i\}, \{b\}, \{g\},
\{h\}]$ accordingly.

Again (see Figure~\ref{fig:2}(C)), suppose the compilation unit $4$ has been
changed, and this time it uses a new library class $\{b\}$, which is already
among the library classes needed by compilation unit $5$. In this case, since
the compilation unit closure set of $\{b\}$ has changed from $\{5\}$ to $\{4,
5\}$, it is moved from partition $5$ into partition $2$ accordingly.  In
addition, since the new partition $5$ is now the same as partition $2$, it can
be removed from the dependency graph if it is no longer needed by other
packages. Otherwise $5$ is kept as a placeholder on the dependency graph. All
the library packages other than $2$ and $5$ remain the same. The library class
partitioning is updated to $[\{e\}, \{b, c\}, \{f\}, \{a, i\}, \{g\}, \{h\}]$
accordingly.

Finally (see Figure~\ref{fig:2}(D)), suppose the compilation unit $4$ is
changed again, and this time it uses a new library class $\{h\}$ that is
already among the library classes needed by compilation unit $7$. In this
case, since the compilation unit closure set of $\{h\}$ has changed from
$\{7\}$ to $\{4, 7\}$, it is moved from partition $7$ into a new partition $8$
accordingly.  In addition, since the new partition $7$ is the same as the
combination of $3$ and $8$, it can be removed from the dependency graph if no
other package depends on it.  All the library packages other than $7$ and $8$
remain the same. The library class partitioning is updated to $[\{e\}, \{b,
c\}, \{f\}, \{h\}, \{a, i\}, \{g\}]$ accordingly (the same as in the previous
example!), while the package dependency graph has changed incrementally.

\begin{figure}[ht]\centering
\vspace*{-0.25cm}
\includegraphics[width=0.5\textwidth]{figures/fig2}
\caption{Incremental update of partitions}\label{fig:2}
\vspace*{-0.5cm}
\end{figure}

\section{Case Studies}\label{sec:experiments}

We have applied our approach to two case studies:
SWT\footnote{http://www.eclipse.org/swt} and
ECJ\footnote{http://gentoo-portage.com/dev-java/eclipse-ecj}. The former is a
GUI library that forms the basis of the Eclipse Workbench platform; the
latter is the Eclipse Java compiler used in the Eclipse Java Development IDE,
which can also be used separately as a standalone Java compiler when run in
batch mode.

Our experimental results, presented in the following subsections, show that, in
both non-trivial cases, the improvement of our approach was significant.

\subsection{Restructuring SWT}

The public domain program SWT (Standard Widget Toolkit) 3.1.2 was studied. The
source code includes 426 {\tt .java} files that are scattered in 13 packages.
In this experiment, although {\tt gcj} is the fastest Java compiler, it (as of
version 3.4.5) could not compile 12 of the .java programs. Thus we chose the
Blackdown JDK 1.4.2.03 (see Table~\ref{tab:hello-results}) as the default Java
compiler. The experimental results are listed in Table~\ref{tab:swt-packages}.
\begin{table}
\caption{Case Study 1. SWT.}\label{tab:swt-packages}
\begin{tabular}{||l||r|r||r|r|r||}\hline
\small packages & \small  \# .java & \small  \# LOC & \small  T & \small  T' & \small  $\frac{T-T'}{T'}$\\\hline
\small accessibility & \small  13 & \small  3,363 & \small  0.92 & \small  0.85 & \small  8\% \\
\small awt & \small  1 & \small  215 & \small  0.65 & \small  0.52 & \small  25\% \\
\small browser & \small  27 & \small  5,228 & \small  1.22 & \small  1.10 & \small  11\% \\
\small custom & \small  56 & \small  26,170 & \small  1.81 & \small  1.58 & \small  15\% \\
\small dnd & \small  24 & \small  4,297 & \small  0.93 & \small  0.86 & \small  8\% \\
\small events & \small  42 & \small  2,357 & \small  0.70 & \small  0.52 & \small  35\% \\
\small graphics & \small  28 & \small  15,175 & \small  1.43 & \small  1.19 & \small  20\% \\
\small internal & \small  164 & \small  25,827 & \small  2.00 & \small  1.92 & \small  4\% \\
\small layout & \small  9 & \small  3,169 & \small  0.86 & \small  0.74 & \small  16\% \\
\small printing & \small  3 & \small  685 & \small  0.59 & \small  0.40 & \small  48\% \\
\small program & \small  1 & \small  805 & \small  0.64 & \small  0.51 & \small  25\% \\
\small widgets & \small  55 & \small  42,107 & \small  2.24 & \small  2.06 & \small  9\% \\
\small main & \small  3 & \small  3,290 & \small  0.64 & \small  0.46 & \small  39\% \\\hline
\small by package & \small  426 & \small  132,688 & \small  14.63 & \small  12.71 & \small  15\% \\\hline
\small by .java & \small  426 & \small  132,688 & \small  227.6 & \small  175.5 & \small  29\% \\\hline
\small fresh build& \small  426 & \small  132,688 & \small  4.68 & \small  4.57 & \small  2\% \\\hline
\end{tabular}
\vspace*{-0.5cm}
\end{table}
In Table~\ref{tab:swt-packages}, each row shows the number of .java files
compiled, their lines of code, the compilation time using the original
bootclasspath, the time using the precompiled library classes and the speedup
ratio. The `by package' row shows the sum of the incremental build for
individual packages.  The `by .java' row shows the sum of results when each
.java file is compiled by a separate command.  The `fresh build' row shows the
results of the fresh build when all the 426 files are compiled in one batch.
While compiling a {\tt .java} file, several dependent {\tt .java} files and
{\tt .class} files need to be loaded. In a fresh build, each of these
dependent {\tt .java} files needs to be compiled as well. In an incremental
build, on the other hand, the dependent {\tt .java} files have already been
compiled, and thus only the compiled {\tt .class} files will be loaded.
Therefore, the minimal library package for a given {\tt .java} file should
include both the directly and the indirectly dependent {\tt .class} files in a
fresh build, but only the directly dependent {\tt .class} files in an
incremental build.  When all the code is compiled in a fresh build, the
overall speedup is only 2\%. However, when the code is compiled individually
by packages, the overall speedup climbs to 15\%; when the code is compiled
incrementally by individual .java files, the overall speedup reaches 29\%.
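For instance, the last column of the `by package' row in Table~\ref{tab:swt-packages} is obtained as

```latex
\frac{T - T'}{T'} = \frac{14.63 - 12.71}{12.71} \approx 0.151 \approx 15\%
```

i.e., the speedup is measured relative to the restructured compilation time $T'$.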

%\paragraph{Measure the incremental build time using the CVS repository}

%\paragraph{Probabilistic analysis of the incremental build time}

%\subsubsection{Testing}

\subsection{Restructuring the Eclipse Compiler for Java (ECJ)}

ECJ is a component of the Eclipse JDT project.  The following experiment
concerned the released version of Eclipse SDK 3.1.2.  The source code includes
297 {\tt .java} files that are scattered in 14
packages.
The experiment results are listed in Table~\ref{tab:ecj-packages}.
\begin{table}
\caption{Case Study 2. ECJ.}\label{tab:ecj-packages}
\begin{tabular}{||l||r|r||r|r|r||}\hline
\small packages & \small  \# .java & \small  \# LOC & \small  T & \small  T' & \small  $\frac{T-T'}{T'}$\\\hline
\small core & \small  4 & \small  4,384 & \small  0.75 & \small  0.62 & \small  21\% \\
\small antadaptor & \small  1 & \small  54 & \small  0.52 & \small  0.40 & \small  30\% \\
\small ast & \small  109 & \small  26,191 & \small  1.99 & \small  1.82 & \small  9\% \\
\small env & \small  19 & \small  1,069 & \small  0.58 & \small  0.46 & \small  26\% \\
\small flow & \small  11 & \small  2,645 & \small  0.86 & \small  0.68 & \small  26\% \\
\small impl & \small  42 & \small  3,146 & \small  1.02 & \small  0.86 & \small  19\% \\
\small util & \small  17 & \small  2,852 & \small  0.87 & \small  0.74 & \small  18\% \\
\small codegen & \small  15 & \small  8,978 & \small  1.26 & \small  1.08 & \small  17\% \\
\small compiler & \small  10 & \small  6,118 & \small  1.14 & \small  1.01 & \small  13\% \\
\small batch & \small  7 & \small  3,309 & \small  0.93 & \small  0.85 & \small  9\% \\
\small lookup & \small  53 & \small  19,745 & \small  1.77 & \small  1.58 & \small  12\% \\
\small parser & \small  21 & \small  20,982 & \small  1.66 & \small  1.50 & \small  11\% \\
\small classfmt & \small  7 & \small  2,346 & \small  0.86 & \small  0.74 & \small  16\% \\
\small problem & \small  10 & \small  6,564 & \small  1.14 & \small  1.02 & \small  18\% \\\hline
\small by package & \small  297 & \small  108,383 & \small  15.35 & \small  13.36 & \small  15\% \\\hline
\small by .java & \small  297 & \small  108,383 & \small  381.7 & \small  300.4 & \small  27\% \\\hline
\small fresh build& \small  297 & \small  108,383 & \small  4.12 & \small  3.99 & \small  3\% \\\hline
\end{tabular}
\vspace*{-0.5cm}
\end{table}

Table~\ref{tab:ecj-packages} shows that when all the code is compiled in a
fresh build, the overall speedup is only 3\%. However, when the code is
compiled individually by packages, the overall speedup climbs to 15\%; when
the code is compiled incrementally by individual .java files, the overall
speedup reaches 27\%.

%\subsubsection{Improvement of fresh builds}

%\subsubsection{Improvement of incremental builds}

%\paragraph{Measure the incremental build time using the CVS repository}

%\paragraph{Probabilistic analysis of the incremental build time}

%\subsubsection{Testing}

\section{Related work and future directions}\label{sec:related}

The performance of Java programs has usually been considered slower than that
of programs in more optimized languages such as Fortran and C/C++. However,
with the wide adoption of the Java language by application programmers, its
usage in large-scale software development has increased dramatically, and
therefore the build performance of Java programs is becoming a problem for
application software development. This section compares our work with related
work in the field of build performance optimization for large-scale software
development processes.

\paragraph{Redundancy removal through restructuring}
Redundancy is a problem for inconsistency management (see Finkelstein et
al.~\cite{finkelstein94}, e.g. update consistency) as well as for performance
optimizations (see Aho et al.~\cite{aho86}, e.g. dead code elimination).
In~\cite{yu05icsm} we reported a lightweight technique to remove
redundancies in the code on the basis of syntax dependencies. Such code
redundancy removal does not change the executable binaries.  Clone
detection~\cite{baxter98clone} and refactoring~\cite{fowler99refactoring}
can be combined to produce leaner code for compilation, at a higher
cost.  Variants of dead code elimination based on program slicing analysis
(see Weiser~\cite{weiser81}) can also reduce the size of the program in terms
of its binary executables (see De Sutter et al.~\cite{desutter05toplas}).

In this work, we recognize that restructuring large-scale Java programs to
reduce their binary size is costly in terms of compilation time, if not
impossible. Restructuring the library classes is much lighter weight and does
not require any change to the source code, making it usable at large scale.
Other restructuring techniques based on program analysis can easily be
combined with ours to produce still better results.

\paragraph{False dependency removal and smart compilations} 
A concept related to false dependency is the Ratio of Use to Visibility
(RUV) (Borison~\cite{borison89phd}). Here {\em Use} denotes the number of
compilation units where a declaration is used, and {\em Visibility} denotes
the number of compilation units where the declaration is visible. RUV can be
seen as an indicator of false dependencies; after our restructuring, the
ratio is restored to 1.  In Adams et al.~\cite{adams94}, the costs of various
recompilation techniques were surveyed.  {\em Cascading} recompilation
triggers recompilation whenever a change to the {\tt make} target happens;
{\em surface} recompilation does not trigger a cascading recompilation
when changes are made to comments; {\em cutoff} recompilation triggers a
cascading recompilation only when changes are made to preprocessed
images.  {\em Smart} recompilation~\cite{adams94} triggers a
cascading recompilation only when a change is made to the smallest file
dependency graph derived from the libraries.  Unlike ours, these techniques do
not restructure the libraries to reflect the true dependencies; rather, they
maintain a dependency graph over the existing libraries, so the RUV they
obtain is still below 1. The link-time {\em smartest recompilation} has to
rely on type inference to generalize the types of undeclared identifiers, and,
as noted by the author, it may be counterproductive because it slows down
error removal.

In~\cite{yu03cascon, yu05icsm} we presented algorithms to remove all false
dependencies in C/C++ programs through header restructuring. This paper
builds on top of those algorithms (Algorithms 1 and 2) a new tool that takes
the Java class dependencies into account to produce efficient solutions for
the compilation of Java programs. In addition, we present incremental
restructuring techniques (Algorithms 3 and 4) to efficiently update the
restructured library packages.

Lagorio et al.~\cite{lagorio04sac} presented a smart compilation tool for Java
programs. This tool reduces the dependencies between changed classes: if a
change to a class in source form does not lead to a change in its bytecode
form, then any dependency on the source form is ignored. On the basis of
program analysis, it checks the state space of the program to detect such
change patterns. In the future we may consider using this technique
to selectively treat the parse dependency relation $D_4$ as ignorable.

At the time of writing, we are considering the following extensions and
improvements to the current work:
\vspace*{-0.5cm}
\paragraph{Supporting strongly-typed languages and their IDEs} 
Smarter compilation~\cite{adams94} works for Modula-2, our previous work for
C/C++, and this work for Java; all of these are strongly-typed languages.
Building such type-checked systems at large scale generally becomes slower
when extraneous definitions and types are parsed. As these efforts have shown,
one can find similar problems, as well as solutions, for other strongly-typed
languages such as C\#.  In addition, the performance of an IDE for a
strongly-typed language may suffer when any code edit triggers a recompilation
in the environment. The Eclipse IDE, for example, lazily loads a large number
of plugins (89 plugins for SDK version 3.1.2), each of which contains
redundancies in terms of its library classes. We will investigate how to
integrate our improvements into these environments, covering not only the
library loading process, but also the indexing process for code navigation.
\vspace*{-0.5cm}
\paragraph{Runtime Java performance for embedded systems} 
The library class restructuring can be applied not only to the compilation or
build process, but also to runtime execution through the reflective
{\tt ClassLoader} mechanism.  Admittedly, not every Java application uses such
a reflection mechanism; however, quite a few large-scale systems use it to
manage libraries and plugins at runtime. For example, both the Eclipse 3.x
OSGi framework\footnote{http://www.eclipse.org/osgi} and
Prot\'eg\'e\footnote{http://protege.stanford.edu} manage their plugin
libraries at runtime through their own ClassLoaders. In such systems, the
loaded classes are usually stored in an in-memory data structure, which
increases the memory footprint, a problem that can be severe for embedded
systems.  Because our redundancy and false dependency removal reduces the
memory consumption of the loaded classes, it can be applied to such runtime
optimizations.

\section{Conclusion}\label{sec:conclusion}
This paper presents a set of incremental algorithms and techniques for
improving the speed of compilation of large-scale software systems.  The
presented algorithms rely on class dependencies and can be used as a
precompilation step.  Because the {\em precompilation} tool works with an
existing Java compiler without changing its implementation, it requires little
change to the existing build files.  Experiments have shown that our proposed
algorithms can achieve up to a twofold gain in compilation speed, and that
large-scale Java software (e.g., Eclipse) can adopt our technique directly.

\bibliographystyle{style/latex8}
\bibliography{precj}
\end{document}
