<html>
<!-- 
trindex metastudy, additional studies
smaller tables
-->
<head>
<title>
Java pulling ahead?  Java versus C++ benchmarks
</title>
<link rel="stylesheet" type="text/css" href="hotstyle.css">
<meta name="keywords" content="java versus C, java versus C++, java performance, java benchmarks, java C benchmarks">
</head>

<!-- use this to set the font size for tables -->
<style>
.smallertable, .smallertable TD, .smallertable TH
{
font-size:10pt;
}
</style>

<!-- 

hotspot white papers
http://java.sun.com/products/hotspot/docs/whitepaper/Java_Hotspot_v1.4.1/Java_HSpot_WP_v1.4.1_1002_1.html#perform

another numeric page, sparse matrices and scimark, has Java about 30% behind
on average, faster than one Fortran, slower than another
http://www.dl.ac.uk/TCSC/UKHEC/JASPA/node3.html

another another numeric page, less organized, vc++ tested
http://jeanpaul.lefevre.free.fr/java/ceaxx.html

Performance tests show Java as fast as C++
(3 poor microbenchmarks)
http://www.javaworld.com/javaworld/jw-02-1998/jw-02-jperf.html

Illustrate single-benchmark problems:
 "java is 12 times fast than c++ "
http://www.javalobby.org/threadMode2.jsp?forum=61&thread=6719&start=0
(later revised claim to be 4x, no wait, .6x, no, 1x, ...)

http://www.self-similar.com/temporaries.html
C++ return of an object has two needless calls to the copy constructor
also, 
http://cplus.about.com/library/weekly/aa113002a.htm

http://met.sourceforge.net/
Due to the overhead of temporaries and copying of matrix objects, 
C++ lagged behind fortran's performance by an order of magnitude. 

http://www.windevnet.com/wdn/webextra/2003/0313/
windows developer benchmarks of C,C++,D,C#,java of silly stuff:
application startup, string comparison, ...

-->

<body>
<table width="547" border="0" cellspacing="0" cellpadding="0">
<tr>
<td valign="top" width="532">
<font size=2 face="verdana, helvetica, arial, sans-serif">

<!-- ---------------------------------------------------------------- -->
<center>
<h2> Performance of Java versus C++ </h2>
<!-- <h2> On the Persistence of (Java) Performance Myths </h2> -->
<font size=-1>
J.P.Lewis and Ulrich Neumann
<br>
Computer Graphics and Immersive Technology Lab
<br>
University of Southern California
<br><br>
<a href="http://www.idiom.com/~zilla">
<font size=-2>
www.idiom.com/~zilla</a>
<p>
Jan. 2003
<br>
<em>updated 2004</em>
</font>
</font>
</center>

<p>
<b>[also see this <a href="javaCbenchmarkFAQ.html">FAQ</a>]</b>
<p>
<br>
<!-- The Nth law of computing: Java cannot be mentioned
without characterizing it as slow.
Benchmarks show otherwise. 
-->
<p>
This article surveys a number of benchmarks and finds that
Java performance on numerical code is comparable to that of C++, 
with hints that Java's relative performance is continuing to improve.

We then describe clear theoretical reasons why these
benchmark results should be expected.

<!-- ---------------------------------------------------------------- -->

<!-- ---------------------------------------------------------------- -->
<h2>Benchmarks</h2>
<!-- ---------------------------------------------------------------- -->

The five composite benchmarks listed below show that modern
Java has acceptable performance,
nearly equal to (and in many cases faster than)
C/C++ across a range of tests.
<!-- A sixth benchmark indicates that OpenGL graphics performance
on several programs is also acceptable. -->
<p>
<ol>

<li><h4>Numerical Kernels</h4>
<p>
<a href="http://www.philippsen.com/JGI2001/finalpapers/18500097.pdf">
<b>Benchmarking Java against C and Fortran for Scientific Applications</b></a>
<br>
Mark Bull, Lorna Smith, Lindsay Pottage, Robin Freeman,
<br>
EPCC, University of Edinburgh (2001).
<p>
The authors test some real numerical codes (FFT, Matrix factorization,
SOR, fluid solver,  N-body) 
on several architectures and compilers.
On Intel they found that Java performance was very reasonable
compared to C (e.g., 20% slower), and that Java was faster than
at least one C compiler (the KAI compiler on Linux).
<p>
The authors conclude, "On Intel Pentium hardware,
especially with Linux, 
<b>the performance gap is small enough
to be of little or no concern</b> to programmers."

<li><h4>More numerical methods: SciMark2 scores</h4>


<p>
R. F. Boisvert, J. Moreira, M. Philippsen, R. Pozo,
<br>
Java and Numerical Computing,
<br>
Computing in 
Science & Engineering,
3(2):18-24, Mar.-Apr.,
2001.
<p>
SciMark includes a number of numerical codes.
<!-- (Run java jnt.scimark2.commandline): -->
On a PIII/500, MFlops (higher is better):
<center>
<table border="1" class="smallertable">
<tr>
<th>Compiler</th><th>MFlops</th>
</tr>
<tr>
<td>IBM JDK 1.3.0 (Java)</td>
<td>84.5</td>
</tr>
<tr>
<td>Linux 2.2 gcc 2.9x -O6 (C)</td>
<td>87.1</td>
</tr>
</table>
</center>



<li><h4>Still more numerical methods</h4>
From the book 
<a href="http://216.239.53.100/search?q=cache:dPwXe7Qsql0C:www.devx.com/java/free/book.asp+besset+numerical+java&hl=en&ie=UTF-8">
 Object-Oriented Implementations of Numerical Methods</a>
by Didier Besset (MorganKaufmann, 2001):
<!-- The author is a PhD in Physics, worked at Stanford Linear Accelerator, etc. -->
<p>
<center>
<table border="1"  class="smallertable">
<tr>

<th>Operation</th><th>Units</th><th>C</th><th>Smalltalk</th><th><b style="color:black;background-color:#99ff99">Java</b></th>
</tr>
<tr>
<td>Polynomial 10th degree</td><td>msec.</td><td>1.1</td><td>27.7</td><td>9.0</td>
</tr>
<tr>
<td>Neville Interpolation (20 points)</td><td>msec.</td><td>0.9</td><td>11.0</td><td>0.8</td>

</tr>
<tr>
<td>LUP matrix inversion (100 x 100)</td><td>sec.</td><td>3.9</td><td>22.9</td><td>1.0</td>
</tr>
</table> 
</center>
<p>
</li>


<li><h4>Microbenchmarks (cache effects considered)</h4>
<p>
Several years ago these 
<a href="http://www.aceshardware.com/read.jsp?id=153">
benchmarks</a>
showed java performance at the time to be somewhere in the middle
of C compiler performance - faster than the worst C compilers,
slower than the best.  
These are "microbenchmarks", but they do have the advantage
that they were run across a number of different problem sizes
and thus the results are not reflecting a lucky cache interaction
(see more details on this issue in the next section).
<p>
These benchmarks were
<a href="http://www.visi.com/~khuber/java/JavaC.pdf">updated</a>
with a more recent Java (1.4) and gcc (3.2), using full optimization
(gcc -O3 -mcpu=pentiumpro -fexpensive-optimizations -fschedule-insns2...).
This time <b>Java is faster than C in the majority 
of the tests</b>, by a factor of more
than two in some cases...
<p>
... suggesting that Java performance is catching up to, or even pulling ahead
of, gcc at least.
<!-- To establish this it would be necessary to run the test on the 
same machine as the original 
tests were run on (factor out possible confounding variable of
"the older machine was better supported by the C compiler than the new one",
and using the full range of C compilers rather than just gcc.
-->
<p>
These tests were mostly integer (except for an FFT). 
</li>

<li><h4>Microbenchmarks (cache effects not considered)</h4>
In January 2004 OSNews.com posted an article, 
<a href="http://www.osnews.com/story.php?news_id=5602">
Nine Language Performance Round-up: Benchmarking Math & File I/O</a>.
These are simple numeric and file I/O loops, and no doubt suffer
from the arbitrary cache-interaction factor described below.
They were, however, run under several different compilers, which helps.
Again Java is competitive with (actually slightly faster than) 
several C++ compilers, including Visual C++, in the majority of the benchmarks.
<p>
(One exceptional benchmark tested trigonometry library calls.
Java numerical programmers are aware that these calls became slower
in Java 1.4; recent benchmarks suggest this issue was fixed in Java 1.4.2.)
</li>

<!-- 
<li><h4>OpenGL</h4>

<p>
Timings on several OpenGL programs available in both C++ and Java
versions were reported
<a href="http://servlet.java.sun.com/javaone/sf2002/conf/sessions/display-3167.en.jsp">here.</a>

<center>
<table border="1"  class="smallertable">

<tr>
<td>General prediction:</td>
<td>65-90% of optimized C++</td>
</tr>

<tr>
<td>Nvidia demo: </td>
<td>90% of optimized C++  </td>
</tr>

<tr>
<td>MIT synthetic character program:</td>
<td>86% of speed optimized C++ speed</td>
</tr>

</table>
</center>

<p>
Comment: these tests used SUN's jdk1.4, which is generally
worse than IBM's for numerical code.
-->

</ol>

Note that these benchmarks are on Intel architecture machines.
Java compilers on some other processors are less developed at present.

<!-- ---------------------------------------------------------------- -->
<h2>And In Theory: Maybe Java Should be Faster</h2>
<!-- ---------------------------------------------------------------- -->

Java proponents have stated that Java
will soon be faster than C.  Why?
There are several reasons (see also reference [1]):

<h3>1) Pointers make optimization hard</h3>
This is a reason why C is generally a bit slower than Fortran.
<p>
  In C, consider the code
<pre>
        x = y + 2 * (...)
        *p = ...
        arr[j] = ...
        z = x + ...
</pre>
Because p could be pointing at x,
a C compiler cannot keep x in a register and
instead has to write it to cache and read it back -- 
unless it can figure out where p is pointing at compile time.
And because arrays act like pointers in C/C++,
the same is true for assignment to array elements:  
arr[j] could also modify x.

<p>

This pointer problem in C resembles the 
<b>array bounds checking</b> issue
in Java: in both cases, if the compiler can determine the array (or pointer)
index at compile time it can avoid the issue.  
<p>
In the loop below, for example, 
a Java compiler can trivially avoid testing the lower 
array bound because the loop counter is only incremented, never decremented.
A single test before starting the loop handles the upper bound test if 'len'
is not modified inside the loop (and Java has no pointers, so simply looking
for an assignment to 'len' is enough to determine this):
<pre>
   for( int i = 0; i &lt; len; i++ ) { a[i] = ... }
</pre>
<p>
In cases where the compiler cannot determine 
the necessary information at compile time,
<b>the C pointer problem may actually be the 
bigger performance hit</b>.  In the Java case, the loop bound(s)
can be kept in registers, and the index is certainly in a register,
so only a register-register test is needed.  In the C/C++ case a 
load from memory is needed.
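<p>
The loop above can be fleshed out into a complete (if minimal) sketch; the class and method names here are ours, purely for illustration. Because 'i' is only ever incremented and 'a.length' cannot change during the loop, a JIT compiler can verify both bounds once, before the loop, leaving only register-register comparisons inside it:
<pre>
// Illustrative only: a loop whose array bounds checks a JIT can hoist.
// 'i' is only incremented and 'a.length' is immutable, so the runtime
// can prove 0 &lt;= i &lt; a.length outside the loop body.
public class BoundsHoist {
    static int sum(int[] a) {
        int total = 0;
        for (int i = 0; i &lt; a.length; i++) {
            total += a[i];   // no per-iteration bounds test needed after hoisting
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(sum(new int[]{1, 2, 3, 4}));  // prints 10
    }
}
</pre>
Whether a given JVM actually performs this hoisting depends on its compiler, but the information needed is all present at compile time.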

<!-- ---------------------------------------------------------------- -->

<h3>2) Garbage collection- is it worse...or better? </h3>
Most programmers say garbage collection is or should be slow,
with no reason given - it is assumed but never discussed.
Some computer-language researchers say otherwise.
<p>
Consider what happens when you do a new/malloc: (a) the allocator
looks for an empty slot of the right size, then returns you a pointer;
(b) this pointer points to some fairly random place in memory.
<p>
With GC, (a) the allocator does not need to look for memory -
it knows where it is - and (b) the memory it returns is adjacent
to the last bit of memory you requested.  The searching around
happens not on every allocation but only at garbage collection.
And then (depending on the GC algorithm) 
things get moved as well, of course.
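<p>
The "allocator knows where the memory is" point can be sketched with a toy bump-pointer allocator (a simplification of what copying collectors do; this class is ours, not a real JVM internal). Allocation is one comparison and one addition, and consecutive requests come back adjacent:
<pre>
// Toy bump-pointer allocator, as used (in spirit) by copying GCs.
// Allocation is a single comparison plus an addition; consecutive
// allocations are adjacent in memory, which is good for the cache.
public class BumpAllocator {
    private final byte[] heap;
    private int top = 0;                 // next free offset

    BumpAllocator(int size) { heap = new byte[size]; }

    /** Returns the offset of the new block, or -1 if full. */
    int alloc(int size) {
        if (top + size &gt; heap.length) return -1;  // a real VM would GC here
        int p = top;
        top += size;                     // "bump" the free pointer
        return p;
    }

    public static void main(String[] args) {
        BumpAllocator h = new BumpAllocator(64);
        int a = h.alloc(16);
        int b = h.alloc(16);
        System.out.println(a + " " + b); // prints "0 16": adjacent blocks
    }
}
</pre>
Compare this with a free-list malloc, which must search for a suitably sized hole and may return memory far from the last allocation.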

<h3>The cost of missing the cache</h3>
<p>
The big benefit of GC is <b>memory locality</b>.
Because newly allocated memory is adjacent to the 
memory recently used, it is more likely to already be in the cache.
<p>
How much of an effect is this?  One rather dated (1993) 
<a href="http://www.idiom.com/~zilla/Computer/cachekiller.html">example</a>
shows that missing the cache can be a big cost:
changing an array size in a small C program from 1023 to 1024 
results in a slowdown of 17 times (not 17%).  
This is like switching from C to VB!
This particular program stumbled across what was probably the worst
possible cache interaction for that particular processor (MIPS);
the effect isn't that bad in general...but with processor
speeds increasing faster than memory, missing the cache
is probably an even bigger cost now than it was then.
<p>
(It's easy to find other research studies demonstrating this; here's 
 <a href="http://ncstrl.cs.princeton.edu/expand.php?id=TR-482-94">one</a>
from Princeton: they found that (garbage-collected) ML programs
translated from the SPEC92 benchmarks
have lower cache miss rates than the equivalent C and Fortran programs.)
<p>
This is theory; what about practice?  In a well-known paper [2],
several widely used programs (including perl and ghostscript) were
adapted to use several different allocators, including a garbage
collector masquerading as malloc (with a dummy free()).
The garbage collector was as fast as a typical malloc/free;
<b>perl was one of several programs that ran faster 
when converted to use a
garbage collector.</b>
Another interesting fact is that the cost of malloc/free is significant: 
both perl and ghostscript spent roughly 25-30% of their time 
in these calls.

<p>
Besides the improved cache behavior, note also that 
automatic memory management allows escape analysis,
which identifies local allocations that can be placed on the stack.
(Stack allocation is clearly cheaper than heap allocation
of either sort.)
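<p>
As a sketch of what escape analysis looks for (the class and method names here are ours): the Point below is never stored or returned, so a compiler performing escape analysis may replace the heap allocation with two locals on the stack or in registers:
<pre>
// Illustrative: an allocation that escape analysis can eliminate.
// The Point created in distSq() never escapes the method, so a
// compiler may "scalar replace" it: no heap allocation at all.
public class EscapeDemo {
    static final class Point {
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
    }

    static int distSq(int x, int y) {
        Point p = new Point(x, y);       // candidate for stack allocation
        return p.x * p.x + p.y * p.y;    // p is never stored or returned
    }

    public static void main(String[] args) {
        System.out.println(distSq(3, 4)); // prints 25
    }
}
</pre>
Whether a particular JVM performs this optimization is a separate question; the point is that the language rules make it possible to prove the allocation local.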

<!--
<p>
Malloc/Free and GC may have more in common than not.
Several GC algorithms have up to a 2x "memory slop", e.g. 
one classic algorithm copies all live objects from one memory
pool to another during GC, requiring twice the memory 
(or less if some dead objects are discovered).  
One malloc implementation has a very similar characteristic:
in order to reduce the "searching through lists" cost,
it rounds every request up to the nearest power of two,
and keeps all blocks of a given (rounded) size on a list;
this (smaller) list is the only one that has to be searched
when looking for a free block.
-->

<!-- ---------------------------------------------------------------- -->
<h3>3) Run-time compilation</h3>

The JIT compiler knows more than a conventional "pre-compiler", and
it may be able to do a better job given the extra information:

<!-- Modern Java virtual machines do some amazing things: they profile the
program while it is running, determine which parts to optimize,
compile them, then <i>uncompile and recompile</i> 
them if new classes are loaded that override methods that were inlined!
--> 
				       
<p>
<ul>
<li>
The compiler knows what processor it is running on and
can generate code specifically for that processor.
It knows (for example) whether the processor is a PIII or a P4, 
whether SSE2 is present, and how big the caches are.
A pre-compiler, on the other hand, has to target
the least-common-denominator processor, at least in
the case of commercial software.
<br><br>

<li>
Because the compiler knows which classes are actually 
loaded and being called, it knows which methods can
be de-virtualized and inlined.  
(Remarkably, modern Java compilers also know how to "uncompile" 
inlined calls in the case where an overriding method is loaded
after the JIT compilation happens.)
<br><br>

<li>
A dynamic compiler may also get the branch prediction hints 
right more often than a static compiler.  
<!-- On a cpu with a long pipeline (Intel P4) missing a branch
is a big cost. -->

</ul>
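<p>
The de-virtualization point above can be illustrated with a sketch (class names ours): as long as Circle is the only Shape implementation the VM has loaded, the call s.area() has exactly one possible target, so the JIT can inline it; loading another implementation later triggers the "uncompile" step described above.
<pre>
// Illustrative: a virtual call that a JIT can de-virtualize.
// With Circle as the only loaded implementation of Shape, the
// call s.area() is monomorphic and can be inlined.
public class Devirt {
    interface Shape { int area(); }

    static final class Circle implements Shape {
        final int r;
        Circle(int r) { this.r = r; }
        public int area() { return 3 * r * r; }  // integer "pi" for simplicity
    }

    static int totalArea(Shape[] shapes) {
        int total = 0;
        for (Shape s : shapes)
            total += s.area();           // one possible target: inlinable
        return total;                    // until another Shape class loads
    }

    public static void main(String[] args) {
        Shape[] shapes = { new Circle(1), new Circle(2) };
        System.out.println(totalArea(shapes)); // prints 15
    }
}
</pre>
A static C++ compiler, seeing only this translation unit, must generally assume the virtual call could dispatch anywhere.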

<!--
Of the three reasons why Java may soon produce faster code
than C/C++, I find this one the least convincing.  I think
it applies mainly to tasks (like web application servers)
that are literally running for months under reasonably heavy load
and slowly changing conditions; recompiling repeatedly is worth
it in these cases.
For more typical desktop programs, the compiler would need
to be sure that a particular loop is going to run long
enough for a re-optimize to be worthwhile.

hotspot -server compiles a routine
after watching it run for a bit, so it tends to get the branch
prediction right better than static compilation does
(for instruction sets with ordering or branch prediction hints)
it also has real information to use for other compilation decisions.
There is no reason why this couldn't apply in C as well, but
as far as I know with most/all C compilers profiling gives
info to the programmer but not directly back to the compiler.
-->

<br><br>
It might also be noted that Microsoft has some similar
comments regarding C# performance [5]:
<ul>
<li> "<b>Myth: JITed Programs Execute Slower than Precompiled Programs</b>"
<br><br>
<li> .NET still provides a traditional pre-compiler ngen.exe, but "<b>since the run-time only optimizations cannot be provided... the code is usually not as good as that generated by a normal JIT.</b>"
</ul>

<!-- ---------------------------------------------------------------- -->
<h2>Speed and Benchmark Issues</h2>
<!-- ---------------------------------------------------------------- -->

Benchmarks usually lead to extensive and heated discussion in 
popular web forums.
From our point of view there are several reasons why 
such discussions are mostly "hot air".
<h3>What is slow?</h3>
The notion of "slow" in popular discussions is often poorly calibrated.
If you write a number of small benchmarks in several different types of 
programming language, the broad view of performance
might be something like this:
<p>
<center>
<table border="1"  class="smallertable">
<tr>
	<th>Language class</th><th>typical slowdown</th>
</tr><tr>
	<td>Assembler</td><td>1</td>
</tr><tr>
	<td>Low-level compiled (Fortran, C)</td><td>1-2</td>
</tr><tr>
	<td>Byte-code interpreted (Python)</td><td>25-50</td>
</tr><tr>
	<td>Interpreted strings (csh, Tcl?)</td><td>250</td>
</tr>
</table>
</center>
<p>
Despite this big picture, performance differences of less than
a factor of two are often upheld as evidence in speed debates.
As we describe next, differences of 2x-4x or more are often
just noise.


<!-- ---------------------------------------------------------------- -->
<h3>Don't characterize the speed of a language based on a 
single benchmark of a single program.</h3>
<!-- Looking at a Flawed Benchmark -->
<!-- ---------------------------------------------------------------- -->

We often see people drawing conclusions from a single benchmark. 
For example, an article posted on slashdot.org [3]
claims to address the question
"Which programming language provides the fastest tool for number crunching under Linux?", yet it discusses only one program.

<p>
Why isn't one program good enough?
<p>
For one thing, it is common sense: the compiler
may happen to do particularly well or particularly poorly
on the inner loop of the program, and this does not generalize.
The fourth set of benchmarks above shows Java as
being faster than C by a factor of two
on an FFT of an array of a particular size.
Should you now proclaim that Java is always twice as fast as C?
No, it's just one program.
<p>
There is a more important issue than the code quality on 
the particular benchmark, however:
<br><br>
<b>Cache/Memory effects.</b>
<p>
Look at the FFT microbenchmark that we referenced above.
The figure is reproduced here with permission:
<br>
<img src="javaCbenchmark.gif">
<p>
<b>On this single program, depending on the input size,
the relative performance of 'IBM' (IBM's Java)  
varies from about twice as slow to twice as fast as
'max-C' (gcc)</b>
(-O3 -lm -s -static -fomit-frame-pointer -mpentiumpro -march=pentiumpro -malign-functions=4 -funroll-all-loops -fexpensive-optimizations -malign-double -fschedule-insns2 -mwide-multiply -finline-functions -fstrict-aliasing).
So what do we conclude from this benchmark?  Java is twice as
fast as C, or twice as slow, or ...
<p>
This performance variation due to factors of data placement 
and size is universal.  
A more dramatic example of such cache effects is the
<a href="http://www.idiom.com/~zilla/Computer/cachekiller.html">link</a>
mentioned in the discussion on garbage collection above.
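<p>
A simple way to see data-placement effects for yourself (this code is ours, not from any of the cited benchmarks): traverse the same 2D array in row order and in column order. The two loops do identical arithmetic, yet for large arrays the column-order loop misses the cache on nearly every access, because Java's arrays-of-arrays put each row's elements at consecutive addresses:
<pre>
// Same arithmetic, very different memory behavior: row order walks
// consecutive addresses, column order strides across rows.
public class Traversal {
    static long sumRowOrder(int[][] m) {
        long s = 0;
        for (int i = 0; i &lt; m.length; i++)
            for (int j = 0; j &lt; m[i].length; j++)
                s += m[i][j];            // consecutive addresses: cache hits
        return s;
    }

    static long sumColOrder(int[][] m) {
        long s = 0;
        for (int j = 0; j &lt; m[0].length; j++)
            for (int i = 0; i &lt; m.length; i++)
                s += m[i][j];            // strided addresses: cache misses
        return s;
    }

    public static void main(String[] args) {
        int[][] m = { {1, 2}, {3, 4} };
        // Both give the same answer; only the timing differs at scale.
        System.out.println(sumRowOrder(m) + " " + sumColOrder(m)); // "10 10"
    }
}
</pre>
A benchmark that happens to pick one traversal order, or one lucky array size, is measuring the memory system as much as the language.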

<!-- <p>
(Regarding the benchmark mentioned above, 
readers pointed out other problems, e.g.
the programmer wasn't experienced with java idioms --
someone found a 2 line change that gave a large performance increase,
and the author tried several C compilers but only one
Java compiler.)
<p>
<quote>
-->

<p>
The person who posted [3] demonstrated the fragility of his
own benchmark in a follow-up 
<a href="http://www.javalobby.org/threadMode3.jsp?message=38302442&thread=6300&forum=61">post</a>,
writing that 
<blockquote>
  "Java now performs as well as gcc on many tests"
</blockquote>
after changing something 
(note that it was not the Java language that changed).

<!-- ----------------------------------------------------------------
In addition to the cache interaction issue, 
<p>Look at the FFT benchmark above.
Pick a size where Java is faster than C by a factor 2.
Should you then proclaim that Java is always 2 times as fast as C?
No, it's just one program.

<p>
The article
<a href="http://developers.slashdot.org/article.pl?sid=03/01/01/2217234&mode=thread&tid=156">
Linux Number Crunching: Languages and Tools</a>,
posted on Slashdot recently, 
proclaimed that 
<center>
<b><font color="#FF0000">Java 1.4 is slow</font></b>
</center>
<p>(in a red font).

<p>
Some readers of the article posted the following points:
<ul>
<li> It is only one program -- should one really base conclusions
on only one data point?  

<p>Look at the benchmarks above, 
you'll see cases where Java is both faster and slower than C/Fortran.
Next, pick a case where Java is faster than C by a factor 1.5.
Should you then proclaim that Java is always 1.5 times as fast as C?
No, it's just one program.

<li> The benchmark makes no attempt to control for <em>cache interaction</em>.
Changing the declaration order of variables or the size of 
the input can produce a factor of plus-or-minus two (or more) in performance;
this probably explains the "C is faster than Fortran" pronouncement
in this study.
<p>
To see this effect in action, look at the 2nd benchmark above:
it runs the same programs with different sized inputs;
notice the C-versus-Java performance change from ahead to behind
and back ahead on the same program.
<p>For a more dramatic example see this
<a href="http://www.idiom.com/~zilla/Computer/cachekiller.html">link</a>,
discussed more below.
<p>
Of course most benchmarks get around this issue by averaging across
several real programs.

<li> The author probably was not an experienced Java programmer.  
Someone on javalobby.org changed two lines of code and got
a major speedup. Just as experienced C programmers know to access 
2D arrays with the column index changing fastest, 
Java programmers know to move allocation outside the inner loop
when possible.

<li> Multiple C compilers were used, including the good Intel
compiler, but the java compiler that does best on numerics
(IBM's) was not used.
</ul>
 ---------------------------------------------------------------- -->

<!-- ---------------------------------------------------------------- -->
<h2>Conclusions: Why is "Java is Slow" so Popular?</h2>
<!-- ---------------------------------------------------------------- -->

Java is now nearly equal to (or faster than) 
C++ on low-level and numeric benchmarks.
<!-- This means we'll see cases where new releases of C++ compilers 
do better than Java, and vice-versa, just as different 
C/C++ compilers often have performance differences of up to a factor two 
or so.
<p> 
-->
This should not be surprising: Java is a compiled language 
(albeit JIT compiled).

<p>
Nevertheless, the idea that "Java is slow" is widely believed.
Why this is so is perhaps the most interesting
aspect of this article.

<!--
google "java slow": 732000
groups java slow 102000
google "python slow" 132000
pascal slow 116000
javascript	231000
groups		20000
visual basic 	511000, vb 144000
fortran 70,000
tcl	62,000
groups tcl 13000
google "cobol slow" 32000



google malloc slow	35,000
garbage collection slow	120,000
groups gc 13000
groups malloc 10100

Abstraction penalty-
<p>
On the other hand, performance does depend (a lot) on
how the program is written. 
-->

<p>
Let's look at several possible reasons:

<ul>

<li>Java circa 1995 was slow.
The first incarnations of Java did not have a JIT compiler, 
and hence were bytecode-interpreted (like Python, for example).
JIT compilers appeared in JVMs from Microsoft and Symantec,
and in Sun's Java 1.2.

<p>
This explanation is implausible.
Most "computer folk" are able to rattle off the exact speed
in GHz of the latest processors, and they track this information
as it changes each month (and have done so for years).  
Yet this explanation asks us to believe that they are not
able to remember that a single and rather important language 
speed change occurred in 1996.

<li>
Java can still be slow.  
For example, programs written with the thread-safe Vector
class are necessarily slower (on a single processor, at least) 
than those written with the equivalent thread-unsafe ArrayList class.
<p>
This explanation is equally unsatisfying, because C++ and other
languages have similar "abstraction penalties".  For example,
the Kernighan and Pike book <em>The Practice of Programming</em> 
has a table with the following entries, describing the performance
of several implementations of a text-processing program:

<center>
<table border="1"  class="smallertable">

<tr>
<th>Version</th><th>400 MHz PII</th>
</tr>

<tr>
<td>C</td>               <td>0.30 sec      </td>
</tr>
<tr>
<td>C++/STL/deque</td>      <td>11.2 sec           </td>
</tr>
<tr>
<td>C++/STL/list</td>       <td>1.5 sec            </td>
</tr>
</table>
</center>
<p>
Another evidently well-known problem in C++ is the
overhead of returning an object from a function 
(several unnecessary object create/copy/destruct cycles are involved).
<!-- As compilers mature the penalties for 
These results are dated, and C++ compilers probably do better with 
STL these days, but using C++ features still doesn't come for free.
(For more information look for results of the OOPACK benchmark --
this benchmark is specifically designed to measure the 
overhead of using STL-like abstraction in C++.) -->

<li>Java program startup is slow.
As a java program starts, it unzips the java libraries and compiles
parts of itself, so an interactive
program can be sluggish for the first couple seconds of use. 
<p>
This comes closest to being a reasonable explanation for the speed myth.
But while it might explain users' impressions, it does not explain
why many programmers (who can easily understand the idea of
an interpreted program being compiled) share the belief.

</ul>

Two of the most interesting observations regarding this issue are:
<ol>
<li> there is a similar "garbage collection is slow" myth
that persists despite decades of evidence to the contrary,
and 
<li> in web flame wars, people are happy to discuss
their speed impressions for many pages without ever referring
to actual data.
</ol>
Together these suggest that it is possible that
no amount of data will alter people's beliefs, and that
in actuality these "speed beliefs" probably have little to do with Java,
garbage collection, or the otherwise stated subject.
<!--
perhaps they are just mantras used by people who do not
wish to learn about the said subjects.
-->
Our answer probably lies somewhere in sociology or psychology.
Programmers, despite their professed appreciation of logical thought,
are not immune to a kind of mythology, though these particular
"myths" are arbitrary and relatively harmless.

<!-- ----------------------------------------------------------------
<p>
<p>
For me, however, the big conclusion is that
<ul>
<li> Practicing programmers care little about actual
  performance data
</ul>
<p> 
Data indicating the Java-C performance gap has closed
(including the benchmarks cited here) has been available to the public
for several years, but the "java is slow" statements continue.
The malloc-versus-GC study was published in 1992,
is well cited, and has been supported by many other studies.
Yet ten years later we are 
still saying that GC is slow, never a voice to the contrary.
If this level of evidence cannot influence beliefs probably nothing will
--
among programmers (as with any other group of people)
superstition is more powerful than fact.
-------------------------------------------------------------- -->

<h3>Acknowledgements</h3>
Ian Rogers and Curt Fischer clarified some points.

<h3>References</h3>
<!-- <p>
[1] Z. Budimli, K. Kennedy, and J. Piper, "The Cost of Being Object-Oriented: A Preliminary Study," Scientific Computing, 7(2), 87, 1999.
-->


<p>
[1] K. Reinholtz, Java will be faster than C++,
<i>ACM Sigplan Notices</i>, 35(2): 25-28 Feb 2000.

<p>
[2] Benjamin Zorn, The Measured Cost of Conservative Garbage Collection,
<i>Software - Practice and Experience</i>, 23(7): 733-756, 1993.

<p>
[3]
Linux Number Crunching: Languages and Tools,
referenced on
<a href="http://developers.slashdot.org/article.pl?sid=03/01/01/2217234&mode=thread&tid=156">slashdot.org</a>

<p>
[4]
Christopher W. Cowell-Shah,
Nine Language Performance Round-up: Benchmarking Math & File I/O,
appeared at 
<a href="http://www.osnews.com/story.php?news_id=5602">OSnews.com, 
Jan. 2004.</a>

<p>
[5] E. Schanzer,
<a href="http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dndotnet/html/dotnetperftechs.asp">
Performance Considerations for Run-Time Technologies in the .NET Framework</a>,
Microsoft Developer Network article.

<p>

<!-- 
Some other links 
http://groups.google.com/groups?q=lisp+garbage+collection+benchmark+lisp+pointers&start=10&hl=en&lr=&ie=UTF-8&selm=4e6sd4%24l4s%40wcap.centerline.com&rnum=11

converted from php to java server pages, handled slashdot load
with 75% cpu idle
http://www.aceshardware.com/read.jsp?id=50000347

Barry Ruff of SynaPix had the following to say of his company's choice to use Magician's Java-to-OpenGL bindings. (These comments were made before Arcane's announcement about Magician's termination.)
SynaPix is developing a visual effects workstation product called SynaFlex. SynaFlex analyzes film and video footage and creates 3D representations of a given scene. The scene is then combined with synthetic objects in a single 3D space where a variety of techniques can be used to merge 2D and 3D elements. The SynaFlex system is about 90 percent written in Java. In a large part that is due to the Magician API.
SynaFlex was originally going to use OpenGL and a higher level scene graph representation like Optimizer or Performer and then use Java for its UI. But after some initial benchmarking of Magician's performance, it became clear that there was no downside to coding OpenGL components directly in Java. And clearly there was a big upside to moving to a more Java-centric system. So, we proceeded to develop our own scene graph making heavy use of Magician for our OpenGL interactive display and picking.
http://www.gamasutra.com/features/19990625/java_04.htm
-->

</td>
</tr>
</table>
</body>
</html>
