<!doctype html><html lang=en><meta charset=utf-8><meta name=viewport content="width=device-width,initial-scale=1"><title>Always Bump Downwards</title><link rel=stylesheet href=https://note-2019-images.oss-cn-hangzhou.aliyuncs.com/notes.css media=all><script src=/static/main.js></script><body data-category=default data-clipid=1573626030><div id=toc_c>三</div><div id=toc></div><div class="mx-wc-main yue"><div><div><article><div><p>When writing a bump allocator, always bump downwards. That is, allocate from
high addresses, down towards lower addresses by decrementing the bump
pointer. Although it is perhaps less natural to think about, it is more
efficient than incrementing the bump pointer and allocating from lower addresses
up to higher ones.<h3>What is Bump Allocation?</h3><p>Bump allocation is a super fast method for allocating objects. We have a chunk
of memory, and we maintain a “bump pointer” within that memory. Whenever we
allocate an object, we do a quick test that we have enough capacity left in the
chunk, and then, assuming we have enough room, we move the bump pointer over by
<code>sizeof(object)</code> bytes and return the pointer to the space we just reserved for
the object within the chunk.<p>That’s it!<p>Here is some pseudo-code showing off the algorithm:<pre><code class=highlight>bump(size):
    if size &gt; capacity_remaining():
        fail "not enough room left in the chunk"
    move the bump pointer over by size bytes
    return pointer to the size bytes we just reserved</code></pre><p>The trade-off with bump allocation is that we can’t deallocate individual
objects in the general case. We can deallocate all of them en masse by resetting
the bump pointer back to its initial location. We can deallocate in a LIFO,
stack order by moving the bump pointer in reverse. But we can’t deallocate an
arbitrary object in the middle of the chunk and reclaim its space for new
allocations.<p>Finally, notice that the chunk of memory we are bump allocating within is always
split in two: the side holding allocated objects and the side with free
memory. The bump pointer separates the two sides. Furthermore, note that I
haven’t defined which side of the bump pointer is free or allocated space, and
I’ve carefully avoided saying whether the bump pointer is incremented or
decremented.<h3>Bumping Upwards</h3><p>First, let’s consider what we <em>shouldn’t</em> do: bump upwards by initializing the
bump pointer at the low end of our memory chunk and incrementing the bump
pointer on each allocation.<p>We begin with a <code>struct</code> that holds the start and end addresses of our chunk of
memory, as well as our current bump pointer:<pre><code class=highlight>pub struct BumpUp {
    // A pointer to the first byte of our memory chunk.
    start: *mut u8,
    // A pointer to one byte past the end of our memory chunk.
    end: *mut u8,
    // The bump pointer. Invariant: start &lt;= ptr &lt;= end.
    ptr: *mut u8,
}</code></pre><p>Constructing our upwards bump allocator requires giving it the <code>start</code> and <code>end</code>
pointers, and it will initialize its bump pointer to the <code>start</code> address:<pre><code class=highlight>impl BumpUp {
    pub unsafe fn new(start: *mut u8, end: *mut u8) -&gt; BumpUp {
        debug_assert!((start as usize) &lt;= (end as usize));
        BumpUp { start, end, ptr: start }
    }
}</code></pre><p>To allocate an object, we will begin by grabbing the current bump pointer, and
saving it in a temporary variable: this is going to be the pointer to the newly
allocated space. Then we increment the bump pointer by the requested size, and
check if it is still less than <code>end</code>. If so, then we have capacity for the
allocation, and can commit the new bump pointer to <code>self.ptr</code> and return the
temporary pointing to the freshly allocated space.<p>But first, there is one thing that the pseudo-code ignored, but which a real
implementation cannot: alignment. We need to round up the initial bump pointer
to a multiple of the requested alignment before we compute the new bump pointer
by adding the requested size.<sup><a href=#foot-0>0</a></sup><p>Put all that together, and it looks like this:<pre><code class=highlight>impl BumpUp {
    pub unsafe fn alloc(&amp;mut self, size: usize, align: usize) -&gt; *mut u8 {
        debug_assert!(align.is_power_of_two());

        // Grab the current bump pointer: this will point at the
        // newly allocated space, once it is aligned.
        let ptr = self.ptr as usize;

        // Round the bump pointer up to the requested alignment.
        let aligned = (ptr + align - 1) &amp; !(align - 1);

        // Increment by the requested size to get the new bump pointer.
        let new_ptr = aligned + size;

        // Do we have enough capacity left in the chunk?
        if new_ptr &gt; self.end as usize {
            return core::ptr::null_mut();
        }

        // Commit the new bump pointer and return the allocation.
        self.ptr = new_ptr as *mut u8;
        aligned as *mut u8
    }
}</code></pre><p>If we compile this allocation routine to x86-64 with optimizations, we get the
following code, which I’ve lightly annotated:<pre><code class=highlight></code></pre><p>I’m not going to explain each individual instruction. What’s important to
appreciate here is that this is just a small handful of fast instructions with
only a single branch to handle the not-enough-capacity case. This is what makes
bump allocation so fast — great!<p>But before we get too excited, there is another practicality to consider: to
maintain memory safety, we must handle potential integer overflows in the
allocation procedure, or else we could have bugs where we return pointers
outside the bounds of our memory chunk. No good!<p>There are two opportunities for overflow we must take care of:<ol><li><p>If the requested allocation’s size is large enough, <code>aligned + size</code> can
overflow.<li><p>If the requested allocation’s alignment is large enough, the <code>ptr + align -
1</code> sub-expression we use when rounding up to the alignment can overflow.</ol><p>To handle both these cases, we will use checked addition and return a null
pointer if either addition overflows. Here is the new Rust source code:<pre><code class=highlight>impl BumpUp {
    pub unsafe fn alloc(&amp;mut self, size: usize, align: usize) -&gt; *mut u8 {
        debug_assert!(align.is_power_of_two());

        let ptr = self.ptr as usize;

        // Round up to the alignment, returning null if the
        // `ptr + align - 1` sub-expression overflows.
        let aligned = match ptr.checked_add(align - 1) {
            Some(sum) =&gt; sum &amp; !(align - 1),
            None =&gt; return core::ptr::null_mut(),
        };

        // Compute the new bump pointer, returning null if
        // `aligned + size` overflows.
        let new_ptr = match aligned.checked_add(size) {
            Some(new_ptr) =&gt; new_ptr,
            None =&gt; return core::ptr::null_mut(),
        };

        // Do we have enough capacity left in the chunk?
        if new_ptr &gt; self.end as usize {
            return core::ptr::null_mut();
        }

        self.ptr = new_ptr as *mut u8;
        aligned as *mut u8
    }
}</code></pre><p>Now that we’re handling overflows in addition to the alignment requirements,
let’s take a look at the x86-64 code that <code>rustc</code> and LLVM produce for the
function now:<pre><code class=highlight></code></pre><p>Now there are three conditional branches rather than one. The two new branches
are from those two new overflow checks that we added. Less than ideal.<p>Can bumping downwards do better?<h3>Bumping Downwards</h3><p>Now let’s implement a bump allocator where the bump pointer is initialized at
the end of the memory chunk and is decremented on allocation, so that it moves
downwards towards the start of the memory chunk.<p>The <code>struct</code> is identical to the previous version:<pre><code class=highlight>pub struct BumpDown {
    start: *mut u8,
    end: *mut u8,
    ptr: *mut u8,
}</code></pre><p>Constructing a <code>BumpDown</code> is similar to constructing a <code>BumpUp</code> except we
initialize the <code>ptr</code> to <code>end</code> rather than <code>start</code>:<pre><code class=highlight>impl BumpDown {
    pub unsafe fn new(start: *mut u8, end: *mut u8) -&gt; BumpDown {
        debug_assert!((start as usize) &lt;= (end as usize));
        BumpDown { start, end, ptr: end }
    }
}</code></pre><p>When we were allocating by incrementing the bump pointer, the original bump
pointer value before it was incremented pointed at the space that was about to
be reserved for the allocation. When we are allocating by decrementing the bump
pointer, the original bump pointer is pointing at either the end of the memory
chunk, or at the last allocation we made. What we want to return is the value of
the bump pointer <em>after</em> we decrement it down, at which time it will be pointing
at our allocated space.<p>First we subtract the allocation size from the bump pointer. This subtraction
might overflow, so we check for that and return a null pointer if that is the
case, just like we did in the previous, upward-bumping function. Then, we round
that down to the nearest multiple of <code>align</code> to ensure that the allocated space
has the object’s alignment. At this point, we check if we are down past the
start of our memory chunk, in which case we don’t have the capacity to fulfill
this allocation, and we return null. Otherwise, we update the bump pointer to
its new value and return the pointer!<pre><code class=highlight>impl BumpDown {
    pub unsafe fn alloc(&amp;mut self, size: usize, align: usize) -&gt; *mut u8 {
        debug_assert!(align.is_power_of_two());

        let ptr = self.ptr as usize;

        // Subtract the allocation size from the bump pointer,
        // returning null if the subtraction overflows.
        let new_ptr = match ptr.checked_sub(size) {
            Some(new_ptr) =&gt; new_ptr,
            None =&gt; return core::ptr::null_mut(),
        };

        // Round down to the nearest multiple of `align`. Rounding
        // down requires no addition, so it cannot overflow.
        let new_ptr = new_ptr &amp; !(align - 1);

        // Are we down past the start of our memory chunk?
        if new_ptr &lt; self.start as usize {
            return core::ptr::null_mut();
        }

        // Commit the new bump pointer; after the decrement it
        // points at our freshly allocated space.
        self.ptr = new_ptr as *mut u8;
        self.ptr
    }
}</code></pre><p>And here is the x86-64 code generated for this downward-bumping allocation
routine!<pre><code class=highlight></code></pre><p>Because rounding down doesn’t require an addition or subtraction operation, it
doesn’t have an associated overflow check. That means one less conditional
branch in the generated code, and downward bumping only has two conditional
branches versus the three that upward bumping has.<p>Additionally, because we don’t need to save the original bump pointer value,
this version uses fewer registers than the upward-bumping version. Bump
allocation functions are designed to be fast paths that are inlined into
callers, which means that downward bumping creates less register pressure at
every call site.<p>Finally, this downwards-bumping version is implemented with eleven instructions,
while the upwards-bumping version requires thirteen instructions. In general,
fewer instructions implies a shorter run time.<h3>Benchmarks</h3><p>I recently switched <a href=https://github.com/fitzgen/bumpalo>the <code>bumpalo</code> crate</a> from bumping upwards to
bumping downwards. It has a nice, little micro-benchmark suite that is written
with the excellent, statistics-driven <a href=https://github.com/bheisler/criterion.rs>Criterion.rs benchmarking
framework</a>. With Criterion’s built-in support for defining a baseline
measurement and comparing an alternate implementation of the code against it, I
compared the new, downwards-bumping implementation against the original,
upwards-bumping implementation.<p>The new, downwards-bumping implementation has <strong>up to 19% better allocation
throughput</strong> than the original, upwards-bumping implementation! We’re down to
2.7 nanoseconds per allocation.<p>The plot below shows the probability of allocating 10,000 small objects taking a
certain amount of time. The red curve represents the old, upwards-bumping
implementation, while the blue curve shows the new, downwards-bumping
implementation. The lines represent the mean time.<p><a href=https://fitzgeraldnick.com/media/bumpalo-switch-to-downwards-bumping-criterion-report.svg><object data=https://fitzgeraldnick.com/media/bumpalo-switch-to-downwards-bumping-criterion-report.svg type=image/svg+xml></object></a><p>You can view the complete, nitty-gritty benchmark results <a href=https://github.com/fitzgen/bumpalo/pull/37>in the pull
request</a>.<h4>The One Downside: Losing a <code>realloc</code> Fast Path</h4><p><code>bumpalo</code> doesn’t only provide an allocation method, it also provides a
<code>realloc</code> method to resize an existing allocation. <code>realloc</code> is <em>O(n)</em> because
in the worst-case scenario it needs to allocate a whole new region of memory and
copy the data from the old to the new region. But the old, upwards-bumping
implementation had a fast path for growing the last allocation: it would add the
delta size to the bump pointer, leaving the allocation in place and avoiding
that copy. The new, downwards-bumping implementation also has a fast path for
resizing the last allocation, but even if we reuse that space, the start of the
allocated region of memory has shifted, and so we can’t avoid the data copy.<p>The loss of that fast path leads to a 4% slow down in our <code>realloc</code> benchmark
that formats a string into a bump-allocated buffer, triggering a number of
<code>realloc</code>s as the string is constructed. We felt that this was worth the
trade-off for faster allocation.<h3>Less Work with More Alignment?</h3><p>It is rare for types to require more than word alignment. We could enforce a
minimum alignment on the bump pointer at all times that is greater than or equal
to the vast majority of our allocations’ alignment requirements. If our
allocation routine is monomorphized for the type of the allocation it’s making,
or it is aggressively inlined — and it definitely should be — then
we should be able to completely avoid generating any code to align the bump
pointer in most cases, including the conditional branch on overflow if we are
rounding up for upwards bumping.<pre><code class=highlight>// The minimum alignment the bump pointer maintains at all times;
// word alignment covers the vast majority of allocations.
const MIN_ALIGN: usize = 8;

impl BumpUp {
    // One possible shape for such a monomorphized allocation
    // routine (the name `alloc_for` is illustrative).
    pub unsafe fn alloc_for&lt;T&gt;(&amp;mut self) -&gt; *mut T {
        let size = core::mem::size_of::&lt;T&gt;();
        let align = core::mem::align_of::&lt;T&gt;();

        if align &lt;= MIN_ALIGN {
            // The bump pointer is already sufficiently aligned, so
            // there is no rounding up, and no overflow check for the
            // `ptr + align - 1` sub-expression. Round the size up so
            // the bump pointer stays `MIN_ALIGN`-aligned afterwards.
            let size = (size + MIN_ALIGN - 1) &amp; !(MIN_ALIGN - 1);
            let ptr = self.ptr as usize;
            let new_ptr = match ptr.checked_add(size) {
                Some(new_ptr) =&gt; new_ptr,
                None =&gt; return core::ptr::null_mut(),
            };
            if new_ptr &gt; self.end as usize {
                return core::ptr::null_mut();
            }
            self.ptr = new_ptr as *mut u8;
            ptr as *mut T
        } else {
            // Rare case: fall back to the general allocation routine.
            self.alloc(size, align) as *mut T
        }
    }
}</code></pre><p>The trade-off is extra memory overhead from introducing wasted space between
small allocations that don’t require that extra alignment.<h3>Conclusion</h3><p>If you are writing your own bump allocator, you should bump downwards:
initialize the bump pointer to the end of the chunk of memory you are allocating
from within, and decrement it on each allocation so that it moves down towards
the start of the memory chunk. Downwards bumping requires fewer registers, fewer
instructions, and fewer conditional branches. Ultimately, that makes it faster
than bumping upwards.<p>The one exception is if, for some reason, you frequently use <code>realloc</code> to grow
the last allocation you made, in which case you <em>might</em> get more out of a fast
path for growing the last allocation in place without copying any data. And if
you do decide to bump upwards, then you should strongly consider enforcing a
minimum alignment on the bump pointer to recover some of the performance that
you’re otherwise leaving on the table.<p>Finally, I’d like to thank <a href=https://www.red-bean.com/~jimb/>Jim Blandy</a>, <a href=https://github.com/alexcrichton>Alex
Crichton</a>, <a href=https://jeenalee.com/>Jeena Lee</a>,
and <a href=https://jorendorff.blogspot.com/>Jason Orendorff</a> for reading an early
draft of this blog post, for discussing these ideas with me, and for being
super friends :)<hr><p><small><sup>0</sup> The simple way to round <code>n</code> up to a multiple of
<code>align</code> is</small><pre><small><code>(n + align - 1) / align * align</code></small></pre><p><small>Consider the numerator: <code>n + align - 1</code>. This is ensuring that if there
is any remainder for <code>n / align</code>, then the result of the division sub-expression
is one greater than <code>n / align</code>, and that otherwise we get exactly the same
result as <code>n / align</code> due to integer division rounding off the remainder. In
other words, we only round up if <code>n</code> is not aligned to <code>align</code>.</small><p><small>However, we know <code>align</code> is a power of two, and therefore <code>anything /
align</code> is equivalent to <code>anything &gt;&gt; log2(align)</code> and <code>anything * align</code> is
equivalent to <code>anything &lt;&lt; log2(align)</code>. We can therefore rewrite our expression
into:</small><pre><small><code>(n + align - 1) &gt;&gt; log2(align) &lt;&lt; log2(align)</code></small></pre><p><small>But shifting a value right by some number of bits <code>b</code> and then shifting
it left by that same number of bits <code>b</code> is equivalent to clearing the bottom <code>b</code>
bits of the number. We can clear the bottom <code>b</code> bits of a number by bit-wise
and’ing the number with the bit-wise not of <code>2^b - 1</code>. Plugging this into our
equation and simplifying, we get:</small><pre><small><code>  (n + align - 1) &gt;&gt; log2(align) &lt;&lt; log2(align)
= (n + align - 1) &amp; !(2^log2(align) - 1)
= (n + align - 1) &amp; !(align - 1)</code></small></pre><p><small>And now we have our final version of rounding up to a multiple of a power
of two!</small><p><small>If you find these bit twiddling hacks fun, definitely find yourself a
copy of <a href=https://www.goodreads.com/book/show/276079.Hacker_s_Delight>Hacker’s
Delight</a>. It’s a
wonderful book! <a href=#back-foot-0>↩</a></small></div></article></div></div><hr><div><label>Original URL: <a href=https://fitzgeraldnick.com/2019/11/01/always-bump-downwards.html>visit</a></label><br><label>Created at: 2019-11-13 14:20:30</label><br><label>Category: default</label><br><label>Tags: <code>dev</code>, <code>algorithm</code></label></div></div><script>var info = {"clipId":"1573626030","format":"html","title":"Always Bump Downwards The simple way to round n up to a multiple of align","link":"https://fitzgeraldnick.com/2019/11/01/always-bump-downwards.html","category":"default","tags":["dev","algorithm"],"created_at":"2019-11-13 14:20:30","filename":"index.html"};</script><script src=https://note-2019-images.oss-cn-hangzhou.aliyuncs.com/highlight.pack.js></script><link rel=stylesheet href=https://note-2019-images.oss-cn-hangzhou.aliyuncs.com/highlight.vs.css><script src=https://note-2019-images.oss-cn-hangzhou.aliyuncs.com/tocbot.min.js></script><script src=https://note-2019-images.oss-cn-hangzhou.aliyuncs.com/notes.js></script>
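As a standalone sanity check of footnote 0's identity, the sketch below (not from the original post; the function names are mine) compares the naive divide-multiply round-up against the bit-masking version across a range of power-of-two alignments:

```rust
// Naive round-up: divide (truncating any remainder), then multiply.
fn round_up_naive(n: usize, align: usize) -> usize {
    (n + align - 1) / align * align
}

// Bit-masking round-up; valid only for power-of-two `align`.
fn round_up_bits(n: usize, align: usize) -> usize {
    debug_assert!(align.is_power_of_two());
    (n + align - 1) & !(align - 1)
}

fn main() {
    // The two formulations agree for every power-of-two alignment.
    for align in [1usize, 2, 4, 8, 16, 4096] {
        for n in 0..1000 {
            assert_eq!(round_up_naive(n, align), round_up_bits(n, align));
        }
    }
    println!("ok");
}
```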