<!DOCTYPE html>
<html>
<head>
<meta http-equiv="Content-type" content="text/html;charset=UTF-8" />
<title>Optimization &middot; Crafting Interpreters</title>

<!-- Tell mobile browsers we're optimized for them and they don't need to crop
     the viewport. -->
<meta name="viewport" content="width=device-width, initial-scale=1"/>
<link rel="stylesheet" type="text/css" href="style.css" />

<!-- Oh, God, Source Code Pro is so beautiful it makes me want to cry. -->
<link href='https://fonts.googleapis.com/css?family=Source+Code+Pro:400|Source+Sans+Pro:300,400,600' rel='stylesheet' type='text/css'>

<link rel="icon" type="image/png" href="image/favicon.png" />
<script src="jquery-3.4.1.min.js"></script>
<script src="script.js"></script>


</head>
<body id="top">

<!-- <div class="scrim"></div> -->
<nav class="wide">
  <a href="/"><img src="image/logotype.png" title="Crafting Interpreters"></a>
  <div class="contents">
<h3><a href="#top">Optimization<small>30</small></a></h3>

<ul>
    <li><a href="#measuring-performance"><small>30.1</small> Measuring Performance</a></li>
    <li><a href="#faster-hash-table-probing"><small>30.2</small> Faster Hash Table Probing</a></li>
    <li><a href="#nan-boxing"><small>30.3</small> NaN Boxing</a></li>
    <li><a href="#where-to-next"><small>30.4</small> Where to Next</a></li>
    <li class="divider"></li>
    <li class="end-part"><a href="#challenges">Challenges</a></li>
</ul>


<div class="prev-next">
    <a href="superclasses.html" title="Superclasses" class="left">&larr;&nbsp;Previous</a>
    <a href="a-bytecode-virtual-machine.html" title="A Bytecode Virtual Machine">&uarr;&nbsp;Up</a>
    <a href="backmatter.html" title="Backmatter" class="right">Next&nbsp;&rarr;</a>
</div>  </div>
</nav>

<nav class="narrow">
<a href="/"><img src="image/logotype.png" title="Crafting Interpreters"></a>
<a href="superclasses.html" title="Superclasses" class="prev">←</a>
<a href="backmatter.html" title="Backmatter" class="next">→</a>
</nav>

<div class="page">
<div class="nav-wrapper">
<nav class="floating">
  <a href="/"><img src="image/logotype.png" title="Crafting Interpreters"></a>
  <div class="expandable">
<h3><a href="#top">Optimization<small>30</small></a></h3>

<ul>
    <li><a href="#measuring-performance"><small>30.1</small> Measuring Performance</a></li>
    <li><a href="#faster-hash-table-probing"><small>30.2</small> Faster Hash Table Probing</a></li>
    <li><a href="#nan-boxing"><small>30.3</small> NaN Boxing</a></li>
    <li><a href="#where-to-next"><small>30.4</small> Where to Next</a></li>
    <li class="divider"></li>
    <li class="end-part"><a href="#challenges">Challenges</a></li>
</ul>


<div class="prev-next">
    <a href="superclasses.html" title="Superclasses" class="left">&larr;&nbsp;Previous</a>
    <a href="a-bytecode-virtual-machine.html" title="A Bytecode Virtual Machine">&uarr;&nbsp;Up</a>
    <a href="backmatter.html" title="Backmatter" class="right">Next&nbsp;&rarr;</a>
</div>  </div>
  <a id="expand-nav">≡</a>
</nav>
</div>

<article class="chapter">

  <div class="number">30</div>
  <h1>Optimization</h1>

<blockquote>
<p>The evening&rsquo;s the best part of the day. You&rsquo;ve done your day&rsquo;s work. Now you
can put your feet up and enjoy it.</p>
<p><cite>Kazuo Ishiguro, <em>The Remains of the Day</em></cite></p>
</blockquote>
<p>If I still lived in New Orleans, I&rsquo;d call this chapter a <em>lagniappe</em>, a little
something extra given for free to a customer. You&rsquo;ve got a whole book and a
complete virtual machine already, but I want you to have some more fun hacking
on clox. This time, we&rsquo;re going for pure performance. We&rsquo;ll apply two very
different optimizations to our virtual machine.  In the process, you&rsquo;ll get a
feel for measuring and improving the performance of a language implementation<span class="em">&mdash;</span>or any program, really.</p>
<h2><a href="#measuring-performance" id="measuring-performance"><small>30&#8202;.&#8202;1</small>Measuring Performance</a></h2>
<p><strong>Optimization</strong> means taking a working application and improving its
performance. An optimized program does the same thing; it just uses fewer
resources to do so. The resource we usually think of when optimizing is runtime
speed, but it can also be important to reduce memory usage, startup time,
persistent storage size, or network bandwidth. All physical resources have some
cost<span class="em">&mdash;</span>even if the cost is mostly in wasted human time<span class="em">&mdash;</span>so optimization work
often pays off.</p>
<p>There was a time in the early days of computing that a skilled programmer could
hold the entire hardware architecture and compiler pipeline in their head and
understand a program&rsquo;s performance just by thinking real hard. Those days are
long gone, separated from the present by microcode, cache lines, branch
prediction, deep compiler pipelines, and mammoth instruction sets. We like to
pretend C is a &ldquo;low-level&rdquo; language, but the stack of technology between</p>
<div class="codehilite"><pre><span class="i">printf</span>(<span class="s">&quot;Hello, world!&quot;</span>);
</pre></div>
<p>and a greeting appearing on screen is now perilously tall.</p>
<p>Optimization today is an empirical science. Our program is a border collie
sprinting through the hardware&rsquo;s obstacle course. If we want her to reach the
end faster, we can&rsquo;t just sit and ruminate on canine physiology until
enlightenment strikes. Instead, we need to <em>observe</em> her performance, see where
she stumbles, and then find faster paths for her to take.</p>
<p>Much like agility training is particular to one dog and one obstacle course, we
can&rsquo;t assume that our virtual machine optimizations will make <em>all</em> Lox programs
run faster on <em>all</em> hardware. Different Lox programs stress different areas of
the VM, and different architectures have their own strengths and weaknesses.</p>
<h3><a href="#benchmarks" id="benchmarks"><small>30&#8202;.&#8202;1&#8202;.&#8202;1</small>Benchmarks</a></h3>
<p>When we add new functionality, we validate correctness by writing tests<span class="em">&mdash;</span>Lox
programs that use a feature and validate the VM&rsquo;s behavior. Tests pin down
semantics and ensure we don&rsquo;t break existing features when we add new ones. We
have similar needs when it comes to performance:</p>
<ol>
<li>
<p>How do we validate that an optimization <em>does</em> improve performance, and by
how much?</p>
</li>
<li>
<p>How do we ensure that other unrelated changes don&rsquo;t <em>regress</em> performance?</p>
</li>
</ol>
<p>The Lox programs we write to accomplish those goals are <strong>benchmarks</strong>. These
are carefully crafted programs that stress some part of the language
implementation. They measure not <em>what</em> the program does, but how <span
name="much"><em>long</em></span> it takes to do it.</p>
<aside name="much">
<p>Most benchmarks measure running time. But, of course, you&rsquo;ll eventually find
yourself needing to write benchmarks that measure memory allocation, how much
time is spent in the garbage collector, startup time, etc.</p>
</aside>
<p>By measuring the performance of a benchmark before and after a change, you can
see what your change does. When you land an optimization, all of the tests
should behave exactly the same as they did before, but hopefully the benchmarks
run faster.</p>
<p>Once you have an entire <span name="js"><em>suite</em></span> of benchmarks, you can
measure not just <em>that</em> an optimization changes performance, but which
<em>kinds</em> of code it affects. Often you&rsquo;ll find that some benchmarks get faster while others
get slower. Then you have to make hard decisions about what kinds of code your
language implementation optimizes for.</p>
<p>The suite of benchmarks you choose to write is a key part of that decision. In
the same way that your tests encode your choices around what correct behavior
looks like, your benchmarks are the embodiment of your priorities when it comes
to performance. They will guide which optimizations you implement, so choose
your benchmarks carefully, and don&rsquo;t forget to periodically reflect on whether
they are helping you reach your larger goals.</p>
<aside name="js">
<p>In the early proliferation of JavaScript VMs, the first widely used benchmark
suite was SunSpider from WebKit. During the browser wars, marketing folks used
SunSpider results to claim their browser was fastest. That highly incentivized
VM hackers to optimize to those benchmarks.</p>
<p>Unfortunately, SunSpider programs often didn&rsquo;t match real-world JavaScript. They
were mostly microbenchmarks<span class="em">&mdash;</span>tiny toy programs that completed quickly. Those
benchmarks penalize complex just-in-time compilers that start off slower but get
<em>much</em> faster once the JIT has had enough time to optimize and re-compile hot
code paths. This put VM hackers in the unfortunate position of having to choose
between making the SunSpider numbers get better, or actually optimizing the
kinds of programs real users ran.</p>
<p>Google&rsquo;s V8 team responded by sharing their Octane benchmark suite, which was
closer to real-world code at the time. Years later, as JavaScript use patterns
continued to evolve, even Octane outlived its usefulness. Expect that your
benchmarks will evolve as your language&rsquo;s ecosystem does.</p>
<p>Remember, the ultimate goal is to make <em>user programs</em> faster, and benchmarks
are only a proxy for that.</p>
</aside>
<p>Benchmarking is a subtle art. As with tests, you need to balance not overfitting
to your implementation against making sure the benchmark actually exercises the
code paths you care about. When you measure performance, you need to
compensate for variance caused by CPU throttling, caching, and other weird
hardware and operating system quirks. I won&rsquo;t give you a whole sermon here,
but treat benchmarking as its own skill that improves with practice.</p>
<h3><a href="#profiling" id="profiling"><small>30&#8202;.&#8202;1&#8202;.&#8202;2</small>Profiling</a></h3>
<p>OK, so you&rsquo;ve got a few benchmarks now. You want to make them go faster. Now
what? First of all, let&rsquo;s assume you&rsquo;ve done all the obvious, easy work. You are
using the right algorithms and data structures<span class="em">&mdash;</span>or, at least, you aren&rsquo;t using
ones that are aggressively wrong. I don&rsquo;t consider using a hash table instead of
a linear search through a huge unsorted array &ldquo;optimization&rdquo; so much as &ldquo;good
software engineering&rdquo;.</p>
<p>Since the hardware is too complex to reason about our program&rsquo;s performance from
first principles, we have to go out into the field. That means <em>profiling</em>. A
<strong>profiler</strong>, if you&rsquo;ve never used one, is a tool that runs your <span
name="program">program</span> and tracks hardware resource use as the code
executes. Simple ones show you how much time was spent in each function in your
program. Sophisticated ones log data cache misses, instruction cache misses,
branch mispredictions, memory allocations, and all sorts of other metrics.</p>
<aside name="program">
<p>&ldquo;Your program&rdquo; here means the Lox VM itself running some <em>other</em> Lox program. We
are trying to optimize clox, not the user&rsquo;s Lox script. Of course, the choice of
which Lox program to load into our VM will highly affect which parts of clox get
stressed, which is why benchmarks are so important.</p>
<p>A profiler <em>won&rsquo;t</em> show us how much time is spent in each <em>Lox</em> function in the
script being run. We&rsquo;d have to write our own &ldquo;Lox profiler&rdquo; to do that, which is
slightly out of scope for this book.</p>
</aside>
<p>There are many profilers out there for various operating systems and languages.
On whatever platform you program, it&rsquo;s worth getting familiar with a decent
profiler. You don&rsquo;t need to be a master. I have learned things within minutes of
throwing a program at a profiler that would have taken me <em>days</em> to discover on
my own through trial and error. Profilers are wonderful, magical tools.</p>
<h2><a href="#faster-hash-table-probing" id="faster-hash-table-probing"><small>30&#8202;.&#8202;2</small>Faster Hash Table Probing</a></h2>
<p>Enough pontificating, let&rsquo;s get some performance charts going up and to the
right. The first optimization we&rsquo;ll do, it turns out, is about the <em>tiniest</em>
possible change we could make to our VM.</p>
<p>When I first got the bytecode virtual machine that clox is descended from
working, I did what any self-respecting VM hacker would do. I cobbled together a
couple of benchmarks, fired up a profiler, and ran those scripts through my
interpreter. In a dynamically typed language like Lox, a large fraction of user
code is field accesses and method calls, so one of my benchmarks looked
something like this:</p>
<div class="codehilite"><pre><span class="k">class</span> <span class="t">Zoo</span> {
  <span class="i">init</span>() {
    <span class="k">this</span>.<span class="i">aardvark</span> = <span class="n">1</span>;
    <span class="k">this</span>.<span class="i">baboon</span>   = <span class="n">1</span>;
    <span class="k">this</span>.<span class="i">cat</span>      = <span class="n">1</span>;
    <span class="k">this</span>.<span class="i">donkey</span>   = <span class="n">1</span>;
    <span class="k">this</span>.<span class="i">elephant</span> = <span class="n">1</span>;
    <span class="k">this</span>.<span class="i">fox</span>      = <span class="n">1</span>;
  }
  <span class="i">ant</span>()    { <span class="k">return</span> <span class="k">this</span>.<span class="i">aardvark</span>; }
  <span class="i">banana</span>() { <span class="k">return</span> <span class="k">this</span>.<span class="i">baboon</span>; }
  <span class="i">tuna</span>()   { <span class="k">return</span> <span class="k">this</span>.<span class="i">cat</span>; }
  <span class="i">hay</span>()    { <span class="k">return</span> <span class="k">this</span>.<span class="i">donkey</span>; }
  <span class="i">grass</span>()  { <span class="k">return</span> <span class="k">this</span>.<span class="i">elephant</span>; }
  <span class="i">mouse</span>()  { <span class="k">return</span> <span class="k">this</span>.<span class="i">fox</span>; }
}

<span class="k">var</span> <span class="i">zoo</span> = <span class="t">Zoo</span>();
<span class="k">var</span> <span class="i">sum</span> = <span class="n">0</span>;
<span class="k">var</span> <span class="i">start</span> = <span class="i">clock</span>();
<span class="k">while</span> (<span class="i">sum</span> &lt; <span class="n">100000000</span>) {
  <span class="i">sum</span> = <span class="i">sum</span> + <span class="i">zoo</span>.<span class="i">ant</span>()
            + <span class="i">zoo</span>.<span class="i">banana</span>()
            + <span class="i">zoo</span>.<span class="i">tuna</span>()
            + <span class="i">zoo</span>.<span class="i">hay</span>()
            + <span class="i">zoo</span>.<span class="i">grass</span>()
            + <span class="i">zoo</span>.<span class="i">mouse</span>();
}

<span class="k">print</span> <span class="i">clock</span>() - <span class="i">start</span>;
<span class="k">print</span> <span class="i">sum</span>;
</pre></div>
<aside name="sum" class="bottom">
<p>Another thing this benchmark is careful to do is <em>use</em> the result of the code it
executes. By calculating a rolling sum and printing the result, we ensure the VM
<em>must</em> execute all that Lox code. This is an important habit. Unlike our simple
Lox VM, many compilers do aggressive dead code elimination and are smart enough
to discard a computation whose result is never used.</p>
<p>Many a programming language hacker has been impressed by the blazing performance
of a VM on some benchmark, only to realize that it&rsquo;s because the compiler
optimized the entire benchmark program away to nothing.</p>
</aside>
<p>If you&rsquo;ve never seen a benchmark before, this might seem ludicrous. <em>What</em> is
going on here? The program itself doesn&rsquo;t intend to <span name="sum">do</span>
anything useful. What it does do is call a bunch of methods and access a bunch
of fields, since those are the parts of the language we&rsquo;re interested in. Fields
and methods live in hash tables, so it takes care to populate at least a <span
name="more"><em>few</em></span> interesting keys in those tables. That is all wrapped
in a big loop to ensure our profiler has enough execution time to dig in and see
where the cycles are going.</p>
<aside name="more">
<p>If you really want to benchmark hash table performance, you should use many
tables of different sizes. The six keys we add to each table here aren&rsquo;t even
enough to get over our hash table&rsquo;s eight-element minimum threshold. But I
didn&rsquo;t want to throw an enormous benchmark script at you. Feel free to add more
critters and treats if you like.</p>
</aside>
<p>Before I tell you what my profiler showed me, spend a minute taking a few
guesses. Where in clox&rsquo;s codebase do you think the VM spent most of its time? Is
there any code we&rsquo;ve written in previous chapters that you suspect is
particularly slow?</p>
<p>Here&rsquo;s what I found: Naturally, the function with the greatest inclusive time is
<code>run()</code>. (<strong>Inclusive time</strong> means the total time spent in some function and all
other functions it calls<span class="em">&mdash;</span>the total time between when you enter the function
and when it returns.) Since <code>run()</code> is the main bytecode execution loop, it
drives everything.</p>
<p>Inside <code>run()</code>, there are small chunks of time sprinkled in various cases in the
bytecode switch for common instructions like <code>OP_POP</code>, <code>OP_RETURN</code>, and
<code>OP_ADD</code>. The big heavy instructions are <code>OP_GET_GLOBAL</code> with 17% of the
execution time, <code>OP_GET_PROPERTY</code> at 12%, and <code>OP_INVOKE</code> which takes a whopping
42% of the total running time.</p>
<p>So we&rsquo;ve got three hotspots to optimize? Actually, no. Because it turns out
those three instructions spend almost all of their time inside calls to the same
function: <code>tableGet()</code>. That function claims a whole 72% of the execution time
(again, inclusive). Now, in a dynamically typed language, we expect to spend a
fair bit of time looking stuff up in hash tables<span class="em">&mdash;</span>it&rsquo;s sort of the price of
dynamism. But, still, <em>wow.</em></p>
<h3><a href="#slow-key-wrapping" id="slow-key-wrapping"><small>30&#8202;.&#8202;2&#8202;.&#8202;1</small>Slow key wrapping</a></h3>
<p>If you take a look at <code>tableGet()</code>, you&rsquo;ll see it&rsquo;s mostly a wrapper around a
call to <code>findEntry()</code> where the actual hash table lookup happens. To refresh
your memory, here it is in full:</p>
<div class="codehilite"><pre><span class="k">static</span> <span class="t">Entry</span>* <span class="i">findEntry</span>(<span class="t">Entry</span>* <span class="i">entries</span>, <span class="t">int</span> <span class="i">capacity</span>,
                        <span class="t">ObjString</span>* <span class="i">key</span>) {
  <span class="t">uint32_t</span> <span class="i">index</span> = <span class="i">key</span>-&gt;<span class="i">hash</span> % <span class="i">capacity</span>;
  <span class="t">Entry</span>* <span class="i">tombstone</span> = <span class="a">NULL</span>;

  <span class="k">for</span> (;;) {
    <span class="t">Entry</span>* <span class="i">entry</span> = &amp;<span class="i">entries</span>[<span class="i">index</span>];
    <span class="k">if</span> (<span class="i">entry</span>-&gt;<span class="i">key</span> == <span class="a">NULL</span>) {
      <span class="k">if</span> (<span class="a">IS_NIL</span>(<span class="i">entry</span>-&gt;<span class="i">value</span>)) {
        <span class="c">// Empty entry.</span>
        <span class="k">return</span> <span class="i">tombstone</span> != <span class="a">NULL</span> ? <span class="i">tombstone</span> : <span class="i">entry</span>;
      } <span class="k">else</span> {
        <span class="c">// We found a tombstone.</span>
        <span class="k">if</span> (<span class="i">tombstone</span> == <span class="a">NULL</span>) <span class="i">tombstone</span> = <span class="i">entry</span>;
      }
    } <span class="k">else</span> <span class="k">if</span> (<span class="i">entry</span>-&gt;<span class="i">key</span> == <span class="i">key</span>) {
      <span class="c">// We found the key.</span>
      <span class="k">return</span> <span class="i">entry</span>;
    }

    <span class="i">index</span> = (<span class="i">index</span> + <span class="n">1</span>) % <span class="i">capacity</span>;
  }
}
</pre></div>
<p>When running that previous benchmark<span class="em">&mdash;</span>on my machine, at least<span class="em">&mdash;</span>the VM spends
70% of the total execution time on <em>one line</em> in this function. Any guesses as
to which one? No? It&rsquo;s this:</p>
<div class="codehilite"><pre>  <span class="t">uint32_t</span> <span class="i">index</span> = <span class="i">key</span>-&gt;<span class="i">hash</span> % <span class="i">capacity</span>;
</pre></div>
<p>That pointer dereference isn&rsquo;t the problem. It&rsquo;s the little <code>%</code>. It turns out
the modulo operator is <em>really</em> slow. Much slower than other <span
name="division">arithmetic</span> operators. Can we do something better?</p>
<aside name="division">
<p>Pipelining makes it hard to talk about the performance of an individual CPU
instruction, but to give you a feel for things, division and modulo are about
30-50 <em>times</em> slower than addition and subtraction on x86.</p>
</aside>
<p>In the general case, it&rsquo;s really hard to re-implement a fundamental arithmetic
operator in user code in a way that&rsquo;s faster than what the CPU itself can do.
After all, our C code ultimately compiles down to the CPU&rsquo;s own arithmetic
operations. If there were tricks we could use to go faster, the chip would
already be using them.</p>
<p>However, we can take advantage of the fact that we know more about our problem
than the CPU does. We use modulo here to take a key string&rsquo;s hash code and
wrap it to fit within the bounds of the table&rsquo;s entry array. That array starts
out at eight elements and grows by a factor of two each time. We know<span class="em">&mdash;</span>and the
CPU and C compiler do not<span class="em">&mdash;</span>that our table&rsquo;s size is always a power of two.</p>
<p>Because we&rsquo;re clever bit twiddlers, we know a faster way to calculate the
remainder of a number modulo a power of two: <strong>bit masking</strong>. Let&rsquo;s say we want
to calculate 229 modulo 64. The answer is 37, which is not particularly apparent
in decimal, but is clearer when you view those numbers in binary:</p><img src="image/optimization/mask.png" alt="The bit patterns resulting from 229 % 64 = 37 and 229 &amp; 63 = 37." />
<p>On the left side of the illustration, notice how the result (37) is simply the
dividend (229) with the highest two bits shaved off? Those two highest bits are
the bits at or to the left of the divisor&rsquo;s single 1 bit.</p>
<p>On the right side, we get the same result by taking 229 and bitwise <span
class="small-caps">AND</span>-ing it with 63, which is one less than our
original power of two divisor. Subtracting one from a power of two gives you a
series of 1 bits. That is exactly the mask we need in order to strip out those
two leftmost bits.</p>
<p>In other words, you can calculate a number modulo any power of two by simply
<span class="small-caps">AND</span>-ing it with that power of two minus one. I&rsquo;m
not enough of a mathematician to <em>prove</em> to you that this works, but if you
think it through, it should make sense. We can replace that slow modulo operator
with a very fast decrement and bitwise <span class="small-caps">AND</span>. We
simply change the offending line of code to this:</p>
<div class="codehilite"><pre class="insert-before">static Entry* findEntry(Entry* entries, int capacity,
                        ObjString* key) {
</pre><div class="source-file"><em>table.c</em><br>
in <em>findEntry</em>()<br>
replace 1 line</div>
<pre class="insert">  <span class="t">uint32_t</span> <span class="i">index</span> = <span class="i">key</span>-&gt;<span class="i">hash</span> &amp; (<span class="i">capacity</span> - <span class="n">1</span>);
</pre><pre class="insert-after">  Entry* tombstone = NULL;
</pre></div>
<div class="source-file-narrow"><em>table.c</em>, in <em>findEntry</em>(), replace 1 line</div>

<p>CPUs love bitwise operators, so it&rsquo;s hard to <span name="sub">improve</span> on that. </p>
<aside name="sub">
<p>Another potential improvement is to eliminate the decrement by storing the bit
mask directly instead of the capacity. In my tests, that didn&rsquo;t make a
difference. Instruction pipelining makes some operations essentially free if the
CPU is bottlenecked elsewhere.</p>
</aside>
<p>Our linear probing search may need to wrap around the end of the array, so there
is another modulo in <code>findEntry()</code> to update.</p>
<div class="codehilite"><pre class="insert-before">      // We found the key.
      return entry;
    }

</pre><div class="source-file"><em>table.c</em><br>
in <em>findEntry</em>()<br>
replace 1 line</div>
<pre class="insert">    <span class="i">index</span> = (<span class="i">index</span> + <span class="n">1</span>) &amp; (<span class="i">capacity</span> - <span class="n">1</span>);
</pre><pre class="insert-after">  }
</pre></div>
<div class="source-file-narrow"><em>table.c</em>, in <em>findEntry</em>(), replace 1 line</div>

<p>This line didn&rsquo;t show up in the profiler since most searches don&rsquo;t wrap.</p>
<p>The <code>findEntry()</code> function has a sister function, <code>tableFindString()</code>,
that does a hash table lookup for interning strings. We may as well apply the same
optimizations there too. This function is called only when interning strings,
which wasn&rsquo;t heavily stressed by our benchmark. But a Lox program that created
lots of strings might noticeably benefit from this change.</p>
<div class="codehilite"><pre class="insert-before">  if (table-&gt;count == 0) return NULL;

</pre><div class="source-file"><em>table.c</em><br>
in <em>tableFindString</em>()<br>
replace 1 line</div>
<pre class="insert">  <span class="t">uint32_t</span> <span class="i">index</span> = <span class="i">hash</span> &amp; (<span class="i">table</span>-&gt;<span class="i">capacity</span> - <span class="n">1</span>);
</pre><pre class="insert-after">  for (;;) {
    Entry* entry = &amp;table-&gt;entries[index];
</pre></div>
<div class="source-file-narrow"><em>table.c</em>, in <em>tableFindString</em>(), replace 1 line</div>

<p>And also when the linear probing wraps around.</p>
<div class="codehilite"><pre class="insert-before">      return entry-&gt;key;
    }

</pre><div class="source-file"><em>table.c</em><br>
in <em>tableFindString</em>()<br>
replace 1 line</div>
<pre class="insert">    <span class="i">index</span> = (<span class="i">index</span> + <span class="n">1</span>) &amp; (<span class="i">table</span>-&gt;<span class="i">capacity</span> - <span class="n">1</span>);
</pre><pre class="insert-after">  }
</pre></div>
<div class="source-file-narrow"><em>table.c</em>, in <em>tableFindString</em>(), replace 1 line</div>

<p>Let&rsquo;s see if our fixes were worth it. I tweaked that zoological benchmark to
count how many <span name="batch">batches</span> of 10,000 calls it can run in
ten seconds. More batches equals faster performance. On my machine using the
unoptimized code, the benchmark gets through 3,192 batches. After this
optimization, that jumps to 6,249.</p><img src="image/optimization/hash-chart.png" alt="Bar chart comparing the performance before and after the optimization." />
<p>That&rsquo;s almost exactly twice as much work in the same amount of time. We made the
VM twice as fast (usual caveat: on this benchmark). That is a massive win when
it comes to optimization. Usually you feel good if you can claw a few percentage
points here or there. Since methods, fields, and global variables are so
prevalent in Lox programs, this tiny optimization improves performance across
the board. Almost every Lox program benefits.</p>
<aside name="batch">
<p>Our original benchmark fixed the amount of <em>work</em> and then measured the <em>time</em>.
Changing the script to count how many batches of calls it can do in ten seconds
fixes the time and measures the work. For performance comparisons, I like the
latter measure because the reported number represents <em>speed</em>. You can directly
compare the numbers before and after an optimization. When measuring execution
time, you have to do a little arithmetic to get to a good relative measure of
performance.</p>
</aside>
<p>Now, the point of this section is <em>not</em> that the modulo operator is profoundly
evil and you should stamp it out of every program you ever write. Nor is it that
micro-optimization is a vital engineering skill. It&rsquo;s rare that a performance
problem has such a narrow, effective solution. We got lucky.</p>
<p>The point is that we didn&rsquo;t <em>know</em> that the modulo operator was a performance
drain until our profiler told us so. If we had wandered around our VM&rsquo;s codebase
blindly guessing at hotspots, we likely wouldn&rsquo;t have noticed it. What I want
you to take away from this is how important it is to have a profiler in your
toolbox.</p>
<p>To reinforce that point, let&rsquo;s go ahead and run the original benchmark in our
now-optimized VM and see what the profiler shows us. On my machine, <code>tableGet()</code>
is still a fairly large chunk of execution time. That&rsquo;s to be expected for a
dynamically typed language. But it has dropped from 72% of the total execution
time down to 35%. That&rsquo;s much more in line with what we&rsquo;d like to see and shows
that our optimization didn&rsquo;t just make the program faster, but made it faster
<em>in the way we expected</em>. Profilers are as useful for verifying solutions as
they are for discovering problems.</p>
<h2><a href="#nan-boxing" id="nan-boxing"><small>30&#8202;.&#8202;3</small>NaN Boxing</a></h2>
<p>This next optimization has a very different feel. Thankfully, despite the odd
name, it does not involve punching your grandmother. It&rsquo;s different, but not,
like, <em>that</em> different. With our previous optimization, the profiler told us
where the problem was, and we merely had to use some ingenuity to come up with a
solution.</p>
<p>This optimization is more subtle, and its performance effects more scattered
across the virtual machine. The profiler won&rsquo;t help us come up with this.
Instead, it was invented by <span name="someone">someone</span> thinking deeply
about the lowest levels of machine architecture.</p>
<aside name="someone">
<p>I&rsquo;m not sure who first came up with this trick. The earliest source I can find
is David Gudeman&rsquo;s 1993 paper &ldquo;Representing Type Information in Dynamically
Typed Languages&rdquo;. Everyone else cites that. But Gudeman himself says the paper
isn&rsquo;t novel work, but instead &ldquo;gathers together a body of folklore&rdquo;.</p>
<p>Maybe the inventor has been lost to the mists of time, or maybe it&rsquo;s been
reinvented a number of times. Anyone who ruminates on IEEE 754 long enough
probably starts thinking about trying to stuff something useful into all those
unused NaN bits.</p>
</aside>
<p>Like the heading says, this optimization is called <strong>NaN boxing</strong> or sometimes
<strong>NaN tagging</strong>. Personally I like the latter name because &ldquo;boxing&rdquo; tends to imply
some kind of heap-allocated representation, but the former seems to be the more
widely used term. This technique changes how we represent values in the VM.</p>
<p>On a 64-bit machine, our Value type takes up 16 bytes. The struct has two
fields, a type tag and a union for the payload. The largest fields in the union
are an Obj pointer and a double, which are both 8 bytes. To keep the union field
aligned to an 8-byte boundary, the compiler adds padding after the tag too:</p><img src="image/optimization/union.png" alt="Byte layout of the 16-byte tagged union Value." />
<p>That&rsquo;s pretty big. If we could cut that down, then the VM could pack more values
into the same amount of memory. Most computers have plenty of RAM these days, so
the direct memory savings aren&rsquo;t a huge deal. But a smaller representation means
more Values fit in a cache line. That means fewer cache misses, which affects
<em>speed</em>.</p>
<p>If Values need to be aligned to their largest payload size, and a Lox number or
Obj pointer needs a full 8 bytes, how can we get any smaller? In a dynamically
typed language like Lox, each value needs to carry not just its payload, but
enough additional information to determine the value&rsquo;s type at runtime. If a Lox
number is already using the full 8 bytes, where could we squirrel away a couple
of extra bits to tell the runtime &ldquo;this is a number&rdquo;?</p>
<p>This is one of the perennial problems for dynamic language hackers. It
particularly bugs them because statically typed languages don&rsquo;t generally have
this problem. The type of each value is known at compile time, so no extra
memory is needed at runtime to track it. When your C compiler compiles a 32-bit
int, the resulting variable gets <em>exactly</em> 32 bits of storage.</p>
<p>Dynamic language folks hate losing ground to the static camp, so they&rsquo;ve come up
with a number of very clever ways to pack type information and a payload into a
small number of bits. NaN boxing is one of those. It&rsquo;s a particularly good fit
for languages like JavaScript and Lua, where all numbers are double-precision
floating point. Lox is in that same boat.</p>
<h3><a href="#what-is-and-is-not-a-number" id="what-is-and-is-not-a-number"><small>30&#8202;.&#8202;3&#8202;.&#8202;1</small>What is (and is not) a number?</a></h3>
<p>Before we start optimizing, we need to really understand how our friend the CPU
represents floating-point numbers. Almost all machines today use the same
scheme, encoded in the venerable scroll <a href="https://en.wikipedia.org/wiki/IEEE_754">IEEE 754</a>, known to mortals as the
&ldquo;IEEE Standard for Floating-Point Arithmetic&rdquo;.</p>
<p>In the eyes of your computer, a <span name="hyphen">64-bit</span>,
double-precision, IEEE floating-point number looks like this:</p>
<aside name="hyphen">
<p>That&rsquo;s a lot of hyphens for one sentence.</p>
</aside><img src="image/optimization/double.png" alt="Bit representation of an IEEE 754 double." />
<ul>
<li>
<p>Starting from the right, the first 52 bits are the <strong>fraction</strong>,
<strong>mantissa</strong>, or <strong>significand</strong> bits. They represent the significant digits
of the number, as a binary integer.</p>
</li>
<li>
<p>Next to that are 11 <strong>exponent</strong> bits. These tell you how far the mantissa
is shifted away from the decimal (well, binary) point.</p>
</li>
<li>
<p>The highest bit is the <span name="sign"><strong>sign bit</strong></span>, which
indicates whether the number is positive or negative.</p>
</li>
</ul>
<p>I know that&rsquo;s a little vague, but this chapter isn&rsquo;t a deep dive on
floating point representation. If you want to know how the exponent and mantissa
play together, there are already better explanations out there than I could
write.</p>
<aside name="sign">
<p>Since the sign bit is always present, even if the number is zero, that implies
that &ldquo;positive zero&rdquo; and &ldquo;negative zero&rdquo; have different bit representations, and
indeed, IEEE 754 does distinguish those.</p>
</aside>
<p>The important part for our purposes is that the spec carves out a special case
exponent. When all of the exponent bits are set, then instead of just
representing a really big number, the value has a different meaning. These
values are &ldquo;Not a Number&rdquo; (hence, <strong>NaN</strong>) values. They represent concepts like
infinity or the result of division by zero.</p>
<p><em>Any</em> double whose exponent bits are all set is a NaN, regardless of the
mantissa bits. That means there&rsquo;s lots and lots of <em>different</em> NaN bit patterns.
IEEE 754 divides those into two categories. Values where the highest mantissa
bit is 0 are called <strong>signalling NaNs</strong>, and the others are <strong>quiet NaNs</strong>.
Signalling NaNs are intended to be the result of erroneous computations, like
division by zero. A chip <span name="abort">may</span> detect when one of these
values is produced and abort a program completely. They may self-destruct if you
try to read one.</p>
<aside name="abort">
<p>I don&rsquo;t know if any CPUs actually <em>do</em> trap signalling NaNs and abort. The spec
just says they <em>could</em>.</p>
</aside>
<p>Quiet NaNs are supposed to be safer to use. They don&rsquo;t represent useful numeric
values, but they should at least not set your hand on fire if you touch them.</p>
<p>Every double with all of its exponent bits set and its highest mantissa bit set
is a quiet NaN. That leaves 52 bits unaccounted for. We&rsquo;ll avoid one of those so
that we don&rsquo;t step on Intel&rsquo;s &ldquo;QNaN Floating-Point Indefinite&rdquo; value, leaving us
51 bits. Those remaining bits can be anything. We&rsquo;re talking
2,251,799,813,685,248 unique quiet NaN bit patterns.</p><img src="image/optimization/nan.png" alt="The bits in a double that make it a quiet NaN." />
<p>This means a 64-bit double has enough room to store all of the various different
numeric floating-point values and <em>also</em> has room for another 51 bits of data
that we can use however we want. That&rsquo;s plenty of room to set aside a couple of
bit patterns to represent Lox&rsquo;s <code>nil</code>, <code>true</code>, and <code>false</code> values. But what
about Obj pointers? Don&rsquo;t pointers need a full 64 bits too?</p>
<p>Fortunately, we have another trick up our other sleeve. Yes, technically
pointers on a 64-bit architecture are 64 bits. But, no architecture I know of
actually uses that entire address space. Instead, most widely used chips today
only ever use the low <span name="48">48</span> bits. The remaining 16 bits are
either unspecified or always zero.</p>
<aside name="48">
<p>48 bits is enough to address 262,144 gigabytes of memory. Modern operating
systems also give each process its own address space, so that should be plenty.</p>
</aside>
<p>If we&rsquo;ve got 51 bits, we can stuff a 48-bit pointer in there with three bits to
spare. Those three bits are just enough to store tiny type tags to distinguish
between <code>nil</code>, Booleans, and Obj pointers.</p>
<p>That&rsquo;s NaN boxing. Within a single 64-bit double, you can store all of the
different floating-point numeric values, a pointer, or any of a couple of other
special sentinel values. Half the memory usage of our current Value struct,
while retaining all of the fidelity.</p>
<p>What&rsquo;s particularly nice about this representation is that there is no need to
<em>convert</em> a numeric double value into a &ldquo;boxed&rdquo; form. Lox numbers <em>are</em> just
normal, 64-bit doubles. We still need to <em>check</em> their type before we use them,
since Lox is dynamically typed, but we don&rsquo;t need to do any bit shifting or
pointer indirection to go from &ldquo;value&rdquo; to &ldquo;number&rdquo;.</p>
<p>For the other value types, there is a conversion step, of course. But,
fortunately, our VM hides all of the mechanism to go from values to raw types
behind a handful of macros. Rewrite those to implement NaN boxing, and the rest
of the VM should just work.</p>
<h3><a href="#conditional-support" id="conditional-support"><small>30&#8202;.&#8202;3&#8202;.&#8202;2</small>Conditional support</a></h3>
<p>I know the details of this new representation aren&rsquo;t clear in your head yet.
Don&rsquo;t worry, they will crystallize as we work through the implementation. Before
we get to that, we&rsquo;re going to put some compile-time scaffolding in place.</p>
<p>For our previous optimization, we rewrote the slow code and called it
done. This one is a little different. NaN boxing relies on some very low-level
details of how a chip represents floating-point numbers and pointers. It
<em>probably</em> works on most CPUs you&rsquo;re likely to encounter, but you can never be
totally sure.</p>
<p>It would suck if our VM completely lost support for an architecture just because
of its value representation. To avoid that, we&rsquo;ll maintain support for <em>both</em>
the old tagged union implementation of Value and the new NaN-boxed form. We
select which representation we want at compile time using this flag:</p>
<div class="codehilite"><pre class="insert-before">#include &lt;stdint.h&gt;

</pre><div class="source-file"><em>common.h</em></div>
<pre class="insert"><span class="a">#define NAN_BOXING</span>
</pre><pre class="insert-after">#define DEBUG_PRINT_CODE
</pre></div>
<div class="source-file-narrow"><em>common.h</em></div>

<p>If that&rsquo;s defined, the VM uses the new form. Otherwise, it reverts to the old
style. The few pieces of code that care about the details of the value
representation<span class="em">&mdash;</span>mainly the handful of macros for wrapping and unwrapping
Values<span class="em">&mdash;</span>vary based on whether this flag is set. The rest of the VM can
continue along its merry way.</p>
<p>Most of the work happens in the &ldquo;value&rdquo; module where we add a section for the
new type.</p>
<div class="codehilite"><pre class="insert-before">typedef struct ObjString ObjString;

</pre><div class="source-file"><em>value.h</em></div>
<pre class="insert"><span class="a">#ifdef NAN_BOXING</span>

<span class="k">typedef</span> <span class="t">uint64_t</span> <span class="t">Value</span>;

<span class="a">#else</span>

</pre><pre class="insert-after">typedef enum {
</pre></div>
<div class="source-file-narrow"><em>value.h</em></div>

<p>When NaN boxing is enabled, the actual type of a Value is a flat, unsigned
64-bit integer. We could use double instead, which would make the macros for
dealing with Lox numbers a little simpler. But all of the other macros need to
do bitwise operations and uint64_t is a much friendlier type for that. Outside
of this module, the rest of the VM doesn&rsquo;t really care one way or the other.</p>
<p>Before we start re-implementing those macros, we close the <code>#else</code> branch of the
<code>#ifdef</code> at the end of the definitions for the old representation.</p>
<div class="codehilite"><pre class="insert-before">#define OBJ_VAL(object)   ((Value){VAL_OBJ, {.obj = (Obj*)object}})
</pre><div class="source-file"><em>value.h</em></div>
<pre class="insert">

<span class="a">#endif</span>
</pre><pre class="insert-after">

typedef struct {
</pre></div>
<div class="source-file-narrow"><em>value.h</em></div>

<p>Our remaining task is simply to fill in that first <code>#ifdef</code> section with new
implementations of all the stuff already in the <code>#else</code> side. We&rsquo;ll work through
it one value type at a time, from easiest to hardest.</p>
<h3><a href="#numbers" id="numbers"><small>30&#8202;.&#8202;3&#8202;.&#8202;3</small>Numbers</a></h3>
<p>We&rsquo;ll start with numbers since they have the most direct representation under
NaN boxing. To &ldquo;convert&rdquo; a C double to a NaN-boxed clox Value, we don&rsquo;t need to
touch a single bit<span class="em">&mdash;</span>the representation is exactly the same. But we do need to
convince our C compiler of that fact, which we made harder by defining Value to
be uint64_t.</p>
<p>We need to get the compiler to take a set of bits that it thinks are a double
and use those same bits as a uint64_t, or vice versa. This is called <strong>type
punning</strong>. C and C++ programmers have been doing this since the days of bell
bottoms and 8-tracks, but the language specifications have <span
name="hesitate">hesitated</span> to say which of the many ways to do this is
officially sanctioned.</p>
<aside name="hesitate" class="bottom">
<p>Spec authors don&rsquo;t like type punning because it makes optimization harder. A key
optimization technique is reordering instructions to fill the CPU&rsquo;s execution
pipelines. A compiler can reorder code only when doing so doesn&rsquo;t have a
user-visible effect, obviously.</p>
<p>Pointers make that harder. If two pointers point to the same value, then a write
through one and a read through the other cannot be reordered. But what about two
pointers of <em>different</em> types? If those could point to the same object, then
basically <em>any</em> two pointers could be aliases to the same value. That
drastically limits the amount of code the compiler is free to rearrange.</p>
<p>To avoid that, compilers want to assume <strong>strict aliasing</strong><span class="em">&mdash;</span>pointers of
incompatible types cannot point to the same value. Type punning, by nature,
breaks that assumption.</p>
</aside>
<p>I know one way to convert a <code>double</code> to <code>Value</code> and back that I believe is
supported by both the C and C++ specs. Unfortunately, it doesn&rsquo;t fit in a single
expression, so the conversion macros have to call out to helper functions.
Here&rsquo;s the first macro:</p>
<div class="codehilite"><pre class="insert-before">typedef uint64_t Value;
</pre><div class="source-file"><em>value.h</em></div>
<pre class="insert">

<span class="a">#define NUMBER_VAL(num) numToValue(num)</span>
</pre><pre class="insert-after">

#else
</pre></div>
<div class="source-file-narrow"><em>value.h</em></div>

<p>That macro passes the double here:</p>
<div class="codehilite"><pre class="insert-before">#define NUMBER_VAL(num) numToValue(num)
</pre><div class="source-file"><em>value.h</em></div>
<pre class="insert">

<span class="k">static</span> <span class="k">inline</span> <span class="t">Value</span> <span class="i">numToValue</span>(<span class="t">double</span> <span class="i">num</span>) {
  <span class="t">Value</span> <span class="i">value</span>;
  <span class="i">memcpy</span>(&amp;<span class="i">value</span>, &amp;<span class="i">num</span>, <span class="k">sizeof</span>(<span class="t">double</span>));
  <span class="k">return</span> <span class="i">value</span>;
}
</pre><pre class="insert-after">

#else
</pre></div>
<div class="source-file-narrow"><em>value.h</em></div>

<p>I know, weird, right? The way to treat a series of bytes as having a different
type without changing their value at all is <code>memcpy()</code>? This looks horrendously
slow: Create a local variable. Call <code>memcpy()</code> to copy a few bytes
into it. Then return the result, which is the exact same
bytes as the input. Thankfully, because this <em>is</em> the supported idiom for type
punning, most compilers recognize the pattern and optimize away the <code>memcpy()</code>
entirely.</p>
<p>&ldquo;Unwrapping&rdquo; a Lox number is the mirror image.</p>
<div class="codehilite"><pre class="insert-before">typedef uint64_t Value;
</pre><div class="source-file"><em>value.h</em></div>
<pre class="insert">

<span class="a">#define AS_NUMBER(value)    valueToNum(value)</span>
</pre><pre class="insert-after">

#define NUMBER_VAL(num) numToValue(num)
</pre></div>
<div class="source-file-narrow"><em>value.h</em></div>

<p>That macro calls this function:</p>
<div class="codehilite"><pre class="insert-before">#define NUMBER_VAL(num) numToValue(num)
</pre><div class="source-file"><em>value.h</em></div>
<pre class="insert">

<span class="k">static</span> <span class="k">inline</span> <span class="t">double</span> <span class="i">valueToNum</span>(<span class="t">Value</span> <span class="i">value</span>) {
  <span class="t">double</span> <span class="i">num</span>;
  <span class="i">memcpy</span>(&amp;<span class="i">num</span>, &amp;<span class="i">value</span>, <span class="k">sizeof</span>(<span class="t">Value</span>));
  <span class="k">return</span> <span class="i">num</span>;
}
</pre><pre class="insert-after">

static inline Value numToValue(double num) {
</pre></div>
<div class="source-file-narrow"><em>value.h</em></div>

<p>It works exactly the same except we swap the types. Again, the compiler will
eliminate all of it. Even though those calls to
<code>memcpy()</code> will disappear, we still need to show the compiler <em>which</em> <code>memcpy()</code>
we&rsquo;re calling so we also need an <span name="union">include</span>.</p>
<aside name="union" class="bottom">
<p>If you find yourself with a compiler that does not optimize the <code>memcpy()</code> away,
try this instead:</p>
<div class="codehilite"><pre><span class="t">double</span> <span class="i">valueToNum</span>(<span class="t">Value</span> <span class="i">value</span>) {
  <span class="k">union</span> {
    <span class="t">uint64_t</span> <span class="i">bits</span>;
    <span class="t">double</span> <span class="i">num</span>;
  } <span class="i">data</span>;
  <span class="i">data</span>.<span class="i">bits</span> = <span class="i">value</span>;
  <span class="k">return</span> <span class="i">data</span>.<span class="i">num</span>;
}
</pre></div>
</aside>
<div class="codehilite"><pre class="insert-before">#define clox_value_h
</pre><div class="source-file"><em>value.h</em></div>
<pre class="insert">

<span class="a">#include &lt;string.h&gt;</span>
</pre><pre class="insert-after">

#include &quot;common.h&quot;
</pre></div>
<div class="source-file-narrow"><em>value.h</em></div>

<p>That was a lot of code to ultimately do nothing but silence the C type checker.
Doing a runtime type <em>test</em> on a Lox number is a little more interesting. If all
we have are exactly the bits for a double, how do we tell that it <em>is</em> a double?
It&rsquo;s time to get bit twiddling.</p>
<div class="codehilite"><pre class="insert-before">typedef uint64_t Value;
</pre><div class="source-file"><em>value.h</em></div>
<pre class="insert">

<span class="a">#define IS_NUMBER(value)    (((value) &amp; QNAN) != QNAN)</span>
</pre><pre class="insert-after">

#define AS_NUMBER(value)    valueToNum(value)
</pre></div>
<div class="source-file-narrow"><em>value.h</em></div>

<p>We know that every Value that is <em>not</em> a number will use a special quiet NaN
representation. And we presume we have correctly avoided any of the meaningful
NaN representations that may actually be produced by doing arithmetic on
numbers.</p>
<p>If the double has all of its NaN bits set, and the quiet NaN bit set, and one
more for good measure, we can be <span name="certain">pretty certain</span> it
is one of the bit patterns we ourselves have set aside for other types. To check
that, we mask out all of the bits except for our set of quiet NaN bits. If <em>all</em>
of those bits are set, it must be a NaN-boxed value of some other Lox type.
Otherwise, it is actually a number.</p>
<aside name="certain">
<p>Pretty certain, but not strictly guaranteed. As far as I know, there is nothing
preventing a CPU from producing a NaN value as the result of some operation
whose bit representation collides with ones we have claimed. But in my tests
across a number of architectures, I haven&rsquo;t seen it happen.</p>
</aside>
<p>The set of quiet NaN bits are declared like this:</p>
<div class="codehilite"><pre class="insert-before">#ifdef NAN_BOXING
</pre><div class="source-file"><em>value.h</em></div>
<pre class="insert">

<span class="a">#define QNAN     ((uint64_t)0x7ffc000000000000)</span>
</pre><pre class="insert-after">

typedef uint64_t Value;
</pre></div>
<div class="source-file-narrow"><em>value.h</em></div>

<p>It would be nice if C supported binary literals. But if you do the conversion,
you&rsquo;ll see that value is the same as this:</p><img src="image/optimization/qnan.png" alt="The quiet NaN bits." />
<p>This is exactly all of the exponent bits, plus the quiet NaN bit, plus one extra
to dodge that Intel value.</p>
<h3><a href="#nil-true-and-false" id="nil-true-and-false"><small>30&#8202;.&#8202;3&#8202;.&#8202;4</small>Nil, true, and false</a></h3>
<p>The next type to handle is <code>nil</code>. That&rsquo;s pretty simple since there&rsquo;s only one
<code>nil</code> value and thus we need only a single bit pattern to represent it. There
are two other singleton values, the two Booleans, <code>true</code> and <code>false</code>. This calls
for three total unique bit patterns.</p>
<p>Two bits give us four different combinations, which is plenty. We claim the two
lowest bits of our unused mantissa space as a &ldquo;type tag&rdquo; to determine which of
these three singleton values we&rsquo;re looking at. The three type tags are defined
like so:</p>
<div class="codehilite"><pre class="insert-before">#define QNAN     ((uint64_t)0x7ffc000000000000)
</pre><div class="source-file"><em>value.h</em></div>
<pre class="insert">

<span class="a">#define TAG_NIL   1 </span><span class="c">// 01.</span>
<span class="a">#define TAG_FALSE 2 </span><span class="c">// 10.</span>
<span class="a">#define TAG_TRUE  3 </span><span class="c">// 11.</span>
</pre><pre class="insert-after">

typedef uint64_t Value;
</pre></div>
<div class="source-file-narrow"><em>value.h</em></div>

<p>Our representation of <code>nil</code> is thus all of the bits required to define our
quiet NaN representation along with the <code>nil</code> type tag bits:</p><img src="image/optimization/nil.png" alt="The bit representation of the nil value." />
<p>In code, we check the bits like so:</p>
<div class="codehilite"><pre class="insert-before">#define AS_NUMBER(value)    valueToNum(value)

</pre><div class="source-file"><em>value.h</em></div>
<pre class="insert"><span class="a">#define NIL_VAL         ((Value)(uint64_t)(QNAN | TAG_NIL))</span>
</pre><pre class="insert-after">#define NUMBER_VAL(num) numToValue(num)
</pre></div>
<div class="source-file-narrow"><em>value.h</em></div>

<p>We simply bitwise <span class="small-caps">OR</span> the quiet NaN bits and the
type tag, and then do a little cast dance to teach the C compiler what we want
those bits to mean.</p>
<p>Since <code>nil</code> has only a single bit representation, we can use equality on
uint64_t to see if a Value is <code>nil</code>.</p>
<p><span name="equal"></span></p>
<div class="codehilite"><pre class="insert-before">typedef uint64_t Value;

</pre><div class="source-file"><em>value.h</em></div>
<pre class="insert"><span class="a">#define IS_NIL(value)       ((value) == NIL_VAL)</span>
</pre><pre class="insert-after">#define IS_NUMBER(value)    (((value) &amp; QNAN) != QNAN)
</pre></div>
<div class="source-file-narrow"><em>value.h</em></div>

<p>You can guess how we define the <code>true</code> and <code>false</code> values.</p>
<div class="codehilite"><pre class="insert-before">#define AS_NUMBER(value)    valueToNum(value)

</pre><div class="source-file"><em>value.h</em></div>
<pre class="insert"><span class="a">#define FALSE_VAL       ((Value)(uint64_t)(QNAN | TAG_FALSE))</span>
<span class="a">#define TRUE_VAL        ((Value)(uint64_t)(QNAN | TAG_TRUE))</span>
</pre><pre class="insert-after">#define NIL_VAL         ((Value)(uint64_t)(QNAN | TAG_NIL))
</pre></div>
<div class="source-file-narrow"><em>value.h</em></div>

<p>The bits look like this:</p><img src="image/optimization/bools.png" alt="The bit representation of the true and false values." />
<p>To convert a C bool into a Lox Boolean, we rely on these two singleton values
and the good old conditional operator.</p>
<div class="codehilite"><pre class="insert-before">#define AS_NUMBER(value)    valueToNum(value)

</pre><div class="source-file"><em>value.h</em></div>
<pre class="insert"><span class="a">#define BOOL_VAL(b)     ((b) ? TRUE_VAL : FALSE_VAL)</span>
</pre><pre class="insert-after">#define FALSE_VAL       ((Value)(uint64_t)(QNAN | TAG_FALSE))
</pre></div>
<div class="source-file-narrow"><em>value.h</em></div>

<p>There&rsquo;s probably a cleverer bitwise way to do this, but my hunch is that the
compiler can figure one out faster than I can. Going the other direction is
simpler.</p>
<div class="codehilite"><pre class="insert-before">#define IS_NUMBER(value)    (((value) &amp; QNAN) != QNAN)

</pre><div class="source-file"><em>value.h</em></div>
<pre class="insert"><span class="a">#define AS_BOOL(value)      ((value) == TRUE_VAL)</span>
</pre><pre class="insert-after">#define AS_NUMBER(value)    valueToNum(value)
</pre></div>
<div class="source-file-narrow"><em>value.h</em></div>

<p>Since we know there are exactly two Boolean bit representations in Lox<span class="em">&mdash;</span>unlike
in C where any non-zero value can be considered &ldquo;true&rdquo;<span class="em">&mdash;</span>if it ain&rsquo;t <code>true</code>, it
must be <code>false</code>. This macro does assume you call it only on a Value that you
know <em>is</em> a Lox Boolean. To check that, there&rsquo;s one more macro.</p>
<div class="codehilite"><pre class="insert-before">typedef uint64_t Value;

</pre><div class="source-file"><em>value.h</em></div>
<pre class="insert"><span class="a">#define IS_BOOL(value)      (((value) | 1) == TRUE_VAL)</span>
</pre><pre class="insert-after">#define IS_NIL(value)       ((value) == NIL_VAL)
</pre></div>
<div class="source-file-narrow"><em>value.h</em></div>

<p>That looks a little strange. A more obvious macro would look like this:</p>
<div class="codehilite"><pre><span class="a">#define IS_BOOL(v) ((v) == TRUE_VAL || (v) == FALSE_VAL)</span>
</pre></div>
<p>Unfortunately, that&rsquo;s not safe. The expansion mentions <code>v</code> twice, which means if
that expression has any side effects, they will be executed twice. We could have
the macro call out to a separate function, but, ugh, what a chore.</p>
<p>Instead, we bitwise <span class="small-caps">OR</span> a 1 onto the value to
merge the only two valid Boolean bit patterns. That leaves three potential
states the value can be in:</p>
<ol>
<li>
<p>It was <code>FALSE_VAL</code> and has now been converted to <code>TRUE_VAL</code>.</p>
</li>
<li>
<p>It was <code>TRUE_VAL</code> and the <code>| 1</code> did nothing and it&rsquo;s still <code>TRUE_VAL</code>.</p>
</li>
<li>
<p>It&rsquo;s some other, non-Boolean value.</p>
</li>
</ol>
<p>At that point, we can simply compare the result to <code>TRUE_VAL</code> to see if we&rsquo;re
in the first two states or the third.</p>
<h3><a href="#objects" id="objects"><small>30&#8202;.&#8202;3&#8202;.&#8202;5</small>Objects</a></h3>
<p>The last value type is the hardest. Unlike the singleton values, there are
billions of different pointer values we need to box inside a NaN. This means we
need both some kind of tag to indicate that these particular NaNs <em>are</em> Obj
pointers, and room for the addresses themselves.</p>
<p>The tag bits we used for the singleton values are in the region where I decided
to store the pointer itself, so we can&rsquo;t easily use a different <span
name="ptr">bit</span> there to indicate that the value is an object reference.
However, there is another bit we aren&rsquo;t using. Since all our NaN values are not
numbers<span class="em">&mdash;</span>it&rsquo;s right there in the name<span class="em">&mdash;</span>the sign bit isn&rsquo;t used for anything.
We&rsquo;ll go ahead and use that as the type tag for objects. If one of our quiet
NaNs has its sign bit set, then it&rsquo;s an Obj pointer. Otherwise, it must be one
of the previous singleton values.</p>
<aside name="ptr">
<p>We actually <em>could</em> use the lowest bits to store the type tag even when the
value is an Obj pointer. That&rsquo;s because Obj pointers are always aligned to an
8-byte boundary since Obj contains a 64-bit field. That, in turn, implies that
the three lowest bits of an Obj pointer will always be zero. We could store
whatever we wanted in there and just mask it off before dereferencing the
pointer.</p>
<p>This is another value representation optimization called <strong>pointer tagging</strong>.</p>
</aside>
<p>If the sign bit is set, then the remaining low bits store the pointer to the
Obj:</p><img src="image/optimization/obj.png" alt="Bit representation of an Obj* stored in a Value." />
<p>To convert a raw Obj pointer to a Value, we take the pointer and set all of the
quiet NaN bits and the sign bit.</p>
<div class="codehilite"><pre class="insert-before">#define NUMBER_VAL(num) numToValue(num)
</pre><div class="source-file"><em>value.h</em></div>
<pre class="insert"><span class="a">#define OBJ_VAL(obj) \</span>
<span class="a">    (Value)(SIGN_BIT | QNAN | (uint64_t)(uintptr_t)(obj))</span>
</pre><pre class="insert-after">

static inline double valueToNum(Value value) {
</pre></div>
<div class="source-file-narrow"><em>value.h</em></div>

<p>The pointer itself is a full 64 bits, and in <span name="safe">principle</span>,
it could thus overlap with some of those quiet NaN and sign bits. But in
practice, at least on the architectures I&rsquo;ve tested, everything above the 48th
bit in a pointer is always zero. There&rsquo;s a lot of casting going on here, which
I&rsquo;ve found is necessary to satisfy some of the pickiest C compilers, but the
end result is just jamming some bits together.</p>
<aside name="safe">
<p>I try to follow the letter of the law when it comes to the code in this book, so
this paragraph is dubious. There comes a point when optimizing where you push
the boundary of not just what the <em>spec says</em> you can do, but what a real
compiler and chip let you get away with.</p>
<p>There are risks when stepping outside of the spec, but there are rewards in that
lawless territory too. It&rsquo;s up to you to decide if the gains are worth it.</p>
</aside>
<p>We define the sign bit like so:</p>
<div class="codehilite"><pre class="insert-before">#ifdef NAN_BOXING

</pre><div class="source-file"><em>value.h</em></div>
<pre class="insert"><span class="a">#define SIGN_BIT ((uint64_t)0x8000000000000000)</span>
</pre><pre class="insert-after">#define QNAN     ((uint64_t)0x7ffc000000000000)

</pre></div>
<div class="source-file-narrow"><em>value.h</em></div>

<p>To get the Obj pointer back out, we simply mask off all of those extra bits.</p>
<div class="codehilite"><pre class="insert-before">#define AS_NUMBER(value)    valueToNum(value)
</pre><div class="source-file"><em>value.h</em></div>
<pre class="insert"><span class="a">#define AS_OBJ(value) \</span>
<span class="a">    ((Obj*)(uintptr_t)((value) &amp; ~(SIGN_BIT | QNAN)))</span>
</pre><pre class="insert-after">

#define BOOL_VAL(b)     ((b) ? TRUE_VAL : FALSE_VAL)
</pre></div>
<div class="source-file-narrow"><em>value.h</em></div>

<p>The tilde (<code>~</code>), if you haven&rsquo;t done enough bit manipulation to encounter it
before, is bitwise <span class="small-caps">NOT</span>. It toggles all ones and
zeroes in its operand. By masking the value with the bitwise negation of the
quiet NaN and sign bits, we <em>clear</em> those bits and let the pointer bits remain.</p>
<p>One last macro:</p>
<div class="codehilite"><pre class="insert-before">#define IS_NUMBER(value)    (((value) &amp; QNAN) != QNAN)
</pre><div class="source-file"><em>value.h</em></div>
<pre class="insert"><span class="a">#define IS_OBJ(value) \</span>
<span class="a">    (((value) &amp; (QNAN | SIGN_BIT)) == (QNAN | SIGN_BIT))</span>
</pre><pre class="insert-after">

#define AS_BOOL(value)      ((value) == TRUE_VAL)
</pre></div>
<div class="source-file-narrow"><em>value.h</em></div>

<p>A Value storing an Obj pointer has its sign bit set, but so does any negative
number. To tell if a Value is an Obj pointer, we need to check that both the
sign bit and all of the quiet NaN bits are set. This is similar to how we detect
the type of the singleton values, except this time we use the sign bit as the
tag.</p>
<h3><a href="#value-functions" id="value-functions"><small>30&#8202;.&#8202;3&#8202;.&#8202;6</small>Value functions</a></h3>
<p>The rest of the VM usually goes through the macros when working with Values, so
we are almost done. However, there are a couple of functions in the &ldquo;value&rdquo;
module that peek inside the otherwise black box of Value and work with its
encoding directly. We need to fix those too.</p>
<p>The first is <code>printValue()</code>. It has separate code for each value type. We no
longer have an explicit type enum we can switch on, so instead we use a series
of type tests to handle each kind of value.</p>
<div class="codehilite"><pre class="insert-before">void printValue(Value value) {
</pre><div class="source-file"><em>value.c</em><br>
in <em>printValue</em>()</div>
<pre class="insert"><span class="a">#ifdef NAN_BOXING</span>
  <span class="k">if</span> (<span class="a">IS_BOOL</span>(<span class="i">value</span>)) {
    <span class="i">printf</span>(<span class="a">AS_BOOL</span>(<span class="i">value</span>) ? <span class="s">&quot;true&quot;</span> : <span class="s">&quot;false&quot;</span>);
  } <span class="k">else</span> <span class="k">if</span> (<span class="a">IS_NIL</span>(<span class="i">value</span>)) {
    <span class="i">printf</span>(<span class="s">&quot;nil&quot;</span>);
  } <span class="k">else</span> <span class="k">if</span> (<span class="a">IS_NUMBER</span>(<span class="i">value</span>)) {
    <span class="i">printf</span>(<span class="s">&quot;%g&quot;</span>, <span class="a">AS_NUMBER</span>(<span class="i">value</span>));
  } <span class="k">else</span> <span class="k">if</span> (<span class="a">IS_OBJ</span>(<span class="i">value</span>)) {
    <span class="i">printObject</span>(<span class="i">value</span>);
  }
<span class="a">#else</span>
</pre><pre class="insert-after">  switch (value.type) {
</pre></div>
<div class="source-file-narrow"><em>value.c</em>, in <em>printValue</em>()</div>

<p>This is technically a tiny bit slower than a switch, but compared to the
overhead of actually writing to a stream, it&rsquo;s negligible.</p>
<p>We still support the original tagged union representation, so we keep the old
code and enclose it in the <code>#else</code> conditional section.</p>
<div class="codehilite"><pre class="insert-before">  }
</pre><div class="source-file"><em>value.c</em><br>
in <em>printValue</em>()</div>
<pre class="insert"><span class="a">#endif</span>
</pre><pre class="insert-after">}
</pre></div>
<div class="source-file-narrow"><em>value.c</em>, in <em>printValue</em>()</div>

<p>The other operation is testing two values for equality.</p>
<div class="codehilite"><pre class="insert-before">bool valuesEqual(Value a, Value b) {
</pre><div class="source-file"><em>value.c</em><br>
in <em>valuesEqual</em>()</div>
<pre class="insert"><span class="a">#ifdef NAN_BOXING</span>
  <span class="k">return</span> <span class="i">a</span> == <span class="i">b</span>;
<span class="a">#else</span>
</pre><pre class="insert-after">  if (a.type != b.type) return false;
</pre></div>
<div class="source-file-narrow"><em>value.c</em>, in <em>valuesEqual</em>()</div>

<p>It doesn&rsquo;t get much simpler than that! If the two bit representations are
identical, the values are equal. That does the right thing for the singleton
values since each has a unique bit representation and they are only equal to
themselves. It also does the right thing for Obj pointers, since objects use
identity for equality<span class="em">&mdash;</span>two Obj references are equal only if they point to the
exact same object.</p>
<p>It&rsquo;s <em>mostly</em> correct for numbers too. Most floating-point numbers with
different bit representations are distinct numeric values. Alas, IEEE 754
contains a pothole to trip us up. For reasons that aren&rsquo;t entirely clear to me,
the spec mandates that NaN values are <em>not</em> equal to <em>themselves</em>. This isn&rsquo;t a
problem for the special quiet NaNs that we are using for our own purposes. But
it&rsquo;s possible to produce a &ldquo;real&rdquo; arithmetic NaN in Lox, and if we want to
correctly implement IEEE 754 numbers, then the resulting value is not supposed
to be equal to itself. More concretely:</p>
<div class="codehilite"><pre><span class="k">var</span> <span class="i">nan</span> = <span class="n">0</span>/<span class="n">0</span>;
<span class="k">print</span> <span class="i">nan</span> == <span class="i">nan</span>;
</pre></div>
<p>IEEE 754 says this program is supposed to print &ldquo;false&rdquo;. It does the right thing
with our old tagged union representation because the <code>VAL_NUMBER</code> case applies
<code>==</code> to two values that the C compiler knows are doubles. Thus the compiler
generates the right CPU instruction to perform an IEEE floating-point equality.</p>
<p>Our new representation breaks that by defining Value to be a <code>uint64_t</code>. If we
want to be <em>fully</em> compliant with IEEE 754, we need to handle this case.</p>
<div class="codehilite"><pre class="insert-before">#ifdef NAN_BOXING
</pre><div class="source-file"><em>value.c</em><br>
in <em>valuesEqual</em>()</div>
<pre class="insert">  <span class="k">if</span> (<span class="a">IS_NUMBER</span>(<span class="i">a</span>) &amp;&amp; <span class="a">IS_NUMBER</span>(<span class="i">b</span>)) {
    <span class="k">return</span> <span class="a">AS_NUMBER</span>(<span class="i">a</span>) == <span class="a">AS_NUMBER</span>(<span class="i">b</span>);
  }
</pre><pre class="insert-after">  return a == b;
</pre></div>
<div class="source-file-narrow"><em>value.c</em>, in <em>valuesEqual</em>()</div>

<p>I know, it&rsquo;s weird. And there is a performance cost to doing this type test
every time we check two Lox values for equality. If we are willing to sacrifice
a little <span name="java">compatibility</span><span class="em">&mdash;</span>who <em>really</em> cares if NaN is
not equal to itself?<span class="em">&mdash;</span>we could leave this off. I&rsquo;ll leave it up to you to
decide how pedantic you want to be.</p>
<aside name="java">
<p>In fact, jlox gets NaN equality wrong. Java does the right thing when you
compare primitive doubles using <code>==</code>, but not if you box those to Double or
Object and compare them using <code>equals()</code>, which is how jlox implements equality.</p>
</aside>
<p>Finally, we close the conditional compilation section around the old
implementation.</p>
<div class="codehilite"><pre class="insert-before">  }
</pre><div class="source-file"><em>value.c</em><br>
in <em>valuesEqual</em>()</div>
<pre class="insert"><span class="a">#endif</span>
</pre><pre class="insert-after">}
</pre></div>
<div class="source-file-narrow"><em>value.c</em>, in <em>valuesEqual</em>()</div>

<p>And that&rsquo;s it. This optimization is complete, as is our clox virtual machine.
That was the last line of new code in the book.</p>
<h3><a href="#evaluating-performance" id="evaluating-performance"><small>30&#8202;.&#8202;3&#8202;.&#8202;7</small>Evaluating performance</a></h3>
<p>The code is done, but we still need to figure out if we actually made anything
better with these changes. Evaluating an optimization like this is very
different from the previous one. There, we had a clear hotspot visible in the
profiler. We fixed that part of the code and could instantly see the hotspot
get faster.</p>
<p>The effects of changing the value representation are more diffuse. The macros
are expanded in place wherever they are used, so the performance changes are
spread across the codebase in a way that&rsquo;s hard for many profilers to track
well, especially in an <span name="opt">optimized</span> build.</p>
<aside name="opt">
<p>When doing profiling work, you almost always want to profile an optimized
&ldquo;release&rdquo; build of your program since that reflects the performance story your
end users experience. Compiler optimizations, like inlining, can dramatically
affect which parts of the code are performance hotspots. Hand-optimizing a debug
build risks sending you off &ldquo;fixing&rdquo; problems that the optimizing compiler will
already solve for you.</p>
<p>Make sure you don&rsquo;t accidentally benchmark and optimize your debug build. I seem
to make that mistake at least once a year.</p>
</aside>
<p>We also can&rsquo;t easily <em>reason</em> about the effects of our change. We&rsquo;ve made values
smaller, which reduces cache misses all across the VM. But the actual real-world
performance effect of that change is highly dependent on the memory use of the
Lox program being run. A tiny Lox microbenchmark may not have enough values
scattered around in memory for the effect to be noticeable, and even things like
the addresses handed out to us by the C memory allocator can impact the results.</p>
<p>If we did our job right, basically everything gets a little faster, especially
on larger, more complex Lox programs. But it is possible that the extra bitwise
operations we do when NaN-boxing values nullify the gains from the better
memory use. Doing performance work like this is unnerving because you can&rsquo;t
easily <em>prove</em> that you&rsquo;ve made the VM better. You can&rsquo;t point to a single
surgically targeted microbenchmark and say, &ldquo;There, see?&rdquo;</p>
<p>Instead, what we really need is a <em>suite</em> of larger benchmarks. Ideally, they
would be distilled from real-world applications<span class="em">&mdash;</span>not that such a thing exists
for a toy language like Lox. Then we can measure the aggregate performance
changes across all of those. I did my best to cobble together a handful of
larger Lox programs. On my machine, the new value representation seems to make
everything roughly 10% faster across the board.</p>
<p>That&rsquo;s not a huge improvement, especially compared to the profound effect of
making hash table lookups faster. I added this optimization in large part
because it&rsquo;s a good example of a certain <em>kind</em> of performance work you may
experience, and honestly, because I think it&rsquo;s technically really cool. It might
not be the first thing I would reach for if I were seriously trying to make clox
faster. There is probably other, lower-hanging fruit.</p>
<p>But, if you find yourself working on a program where all of the easy wins have
been taken, then at some point you may want to think about tuning your value
representation. I hope this chapter has shined a light on some of the options
you have in that area.</p>
<h2><a href="#where-to-next" id="where-to-next"><small>30&#8202;.&#8202;4</small>Where to Next</a></h2>
<p>We&rsquo;ll stop here with the Lox language and our two interpreters. We could tinker
on it forever, adding new language features and clever speed improvements. But,
for this book, I think we&rsquo;ve reached a natural place to call our work complete.
I won&rsquo;t rehash everything we&rsquo;ve learned in the past many pages. You were there
with me and you remember. Instead, I&rsquo;d like to take a minute to talk about where
you might go from here. What is the next step in your programming language
journey?</p>
<p>Most of you probably won&rsquo;t spend a significant part of your career working in
compilers or interpreters. It&rsquo;s a pretty small slice of the computer science
academia pie, and an even smaller segment of software engineering in industry.
That&rsquo;s OK. Even if you never work on a compiler again in your life, you will
certainly <em>use</em> one, and I hope this book has equipped you with a better
understanding of how the programming languages you use are designed and
implemented.</p>
<p>You have also learned a handful of important, fundamental data structures and
gotten some practice doing low-level profiling and optimization work. That kind
of expertise is helpful no matter what domain you program in.</p>
<p>I also hope I gave you a new way of <span name="domain">looking</span> at and
solving problems. Even if you never work on a language again, you may be
surprised to discover how many programming problems can be seen as
language-<em>like</em>. Maybe that report generator you need to write can be modeled as
a series of stack-based &ldquo;instructions&rdquo; that the generator &ldquo;executes&rdquo;. That user
interface you need to render looks an awful lot like traversing an AST.</p>
<aside name="domain">
<p>This goes for other domains too. I don&rsquo;t think there&rsquo;s a single topic I&rsquo;ve
learned in programming<span class="em">&mdash;</span>or even outside of programming<span class="em">&mdash;</span>that I haven&rsquo;t ended
up finding useful in other areas. One of my favorite aspects of software
engineering is how much it rewards those with eclectic interests.</p>
</aside>
<p>If you do want to go further down the programming language rabbit hole, here
are some suggestions for which branches in the tunnel to explore:</p>
<ul>
<li>
<p>Our simple, single-pass bytecode compiler pushed us towards mostly runtime
optimization. In a mature language implementation, compile-time optimization
is generally more important, and the field of compiler optimizations is
incredibly rich. Grab a classic <span name="cooper">compilers</span> book,
and rebuild the front end of clox or jlox to be a sophisticated compilation
pipeline with some interesting intermediate representations and optimization
passes.</p>
<p>Dynamic typing will place some restrictions on how far you can go, but there
is still a lot you can do. Or maybe you want to take a big leap and add
static types and a type checker to Lox. That will certainly give your front
end a lot more to chew on.</p>
<aside name="cooper">
<p>I like Cooper and Torczon&rsquo;s <em>Engineering a Compiler</em> for this. Appel&rsquo;s
<em>Modern Compiler Implementation</em> books are also well regarded.</p>
</aside></li>
<li>
<p>In this book, I aim to be correct, but not particularly rigorous. My goal is
mostly to give you an <em>intuition</em> and a feel for doing language work. If you
like more precision, then the whole world of programming language academia
is waiting for you. Languages and compilers have been studied formally since
before we even had computers, so there is no shortage of books and papers on
parser theory, type systems, semantics, and formal logic. Going down this
path will also teach you how to read CS papers, which is a valuable skill in
its own right.</p>
</li>
<li>
<p>Or, if you just really enjoy hacking on and making languages, you can take
Lox and turn it into your own <span name="license">plaything</span>. Change
the syntax to something that delights your eye. Add missing features or
remove ones you don&rsquo;t like. Jam new optimizations in there.</p>
<aside name="license">
<p>The <em>text</em> of this book is copyrighted to me, but the <em>code</em> and the
implementations of jlox and clox use the very permissive <a href="https://en.wikipedia.org/wiki/MIT_License">MIT license</a>.
You are more than welcome to <a href="https://github.com/munificent/craftinginterpreters">take either of those interpreters</a> and
do whatever you want with them. Go to town.</p>
<p>If you make significant changes to the language, it would be good to also
change the name, mostly to avoid confusing people about what the name &ldquo;Lox&rdquo;
represents.</p>
</aside>
<p>Eventually you may get to a point where you have something you think others
could use as well. That gets you into the very distinct world of programming
language <em>popularity</em>. Expect to spend a ton of time writing documentation,
example programs, tools, and useful libraries. The field is crowded with
languages vying for users. To thrive in that space you&rsquo;ll have to put on
your marketing hat and <em>sell</em>. Not everyone enjoys that kind of
public-facing work, but if you do, it can be incredibly gratifying to see
people use your language to express themselves.</p>
</li>
</ul>
<p>Or maybe this book has satisfied your craving and you&rsquo;ll stop here. Whichever
way you go, or don&rsquo;t go, there is one lesson I hope to lodge in your heart. Like
I was, you may have initially been intimidated by programming languages. But in
these chapters, you&rsquo;ve seen that even really challenging material can be tackled
by us mortals if we get our hands dirty and take it a step at a time. If you can
handle compilers and interpreters, you can do anything you put your mind to.</p>
<div class="challenges">
<h2><a href="#challenges" id="challenges">Challenges</a></h2>
<p>Assigning homework on the last day of school seems cruel, but if you really want
something to do during your summer vacation:</p>
<ol>
<li>
<p>Fire up your profiler, run a couple of benchmarks, and look for other
hotspots in the VM. Do you see anything in the runtime that you can improve?</p>
</li>
<li>
<p>Many strings in real-world user programs are small, often only a character
or two. This is less of a concern in clox because we intern strings, but
most VMs don&rsquo;t. For those that don&rsquo;t, heap allocating a tiny character array
for each of those little strings and then representing the value as a
pointer to that array is wasteful. Often, the pointer is larger than the
string&rsquo;s characters. A classic trick is to have a separate value
representation for small strings that stores the characters inline in the
value.</p>
<p>Starting from clox&rsquo;s original tagged union representation, implement that
optimization. Write a couple of relevant benchmarks and see if it helps.</p>
</li>
<li>
<p>Reflect back on your experience with this book. What parts of it worked well
for you? What didn&rsquo;t? Was it easier for you to learn bottom-up or top-down?
Did the illustrations help or distract? Did the analogies clarify or
confuse?</p>
<p>The more you understand your personal learning style, the more effectively
you can upload knowledge into your head. You can specifically target
material that teaches you the way you learn best.</p>
</li>
</ol>
</div>

<footer>
<a href="backmatter.html" class="next">
  Next Part: &ldquo;Backmatter&rdquo; &rarr;
</a>
Handcrafted by Robert Nystrom&ensp;&mdash;&ensp;<a href="https://github.com/munificent/craftinginterpreters/blob/master/LICENSE" target="_blank">&copy; 2015&hairsp;&ndash;&hairsp;2021</a>
</footer>
</article>

</div>
</body>
</html>
