<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title>Implementing a software cache</title>
<link rev="made" href="mailto:" />
</head>

<body style="background-color: white">

<p><a name="__index__"></a></p>
<!-- INDEX BEGIN -->

<ul>

	<li><a href="#implementing_a_software_cache">Implementing a software cache</a></li>
	<ul>

		<li><a href="#introduction">Introduction</a></li>
		<li><a href="#software_vs__hardware_cache">Software vs. Hardware cache</a></li>
		<li><a href="#basic_requirements_and_definitions">Basic requirements and definitions</a></li>
	<li><a href="#to_infinity_and_beyond_">To infinity and beyond?</a></li>
		<li><a href="#cache_removal">Cache removal</a></li>
		<li><a href="#requirements_revisited">Requirements revisited</a></li>
		<li><a href="#design">Design</a></li>
		<li><a href="#implementation">Implementation</a></li>
		<li><a href="#using_the_cache">Using the cache</a></li>
		<li><a href="#efficiency_revisited">Efficiency revisited</a></li>
	</ul>

</ul>
<!-- INDEX END -->

<hr />
<p>
</p>
<h1><a name="implementing_a_software_cache">Implementing a software cache</a></h1>
<p>
</p>
<h2><a name="introduction">Introduction</a></h2>
<p>In the last column we talked about accelerating recursive computations
with Memoization. I mentioned that Memoization is just one method of
<em>caching</em>, which can be applied to a much wider range of problems. Indeed,
recursive computations are not the only ones that can benefit from caching. In this
column we'll implement a complete caching class that can be readily applied
to any problem, and we'll see a concrete
example of using the cache to speed up calculations.</p>
<p>
</p>
<h2><a name="software_vs__hardware_cache">Software vs. Hardware cache</a></h2>
<p>By reading this column you are most likely already using a cache. Your computer
caches what you see on the screen and is thus able to work faster. This cache
is a hardware cache, deep down in the hierarchy of computer architecture, hardware and
software. It is completely <em>transparent</em> to us, the ordinary PC users.</p>
<p>It was noticed a long time ago that while CPU (Central Processing Unit) speeds increase quickly,
the speed of computer memory (DRAM - Dynamic Random Access Memory) and of the bus to
the memory (the Front Side Bus, or FSB, in a PC) can't keep up. Each access
to memory is expensive, and the CPU spends many cycles just waiting for the
data to arrive. Thus, caches were invented. A hardware cache is a small and extremely
fast memory that is usually located on the same chip as the CPU.
Its access time is almost as fast as the CPU itself, and there is no external
bus to wait for. When the CPU accesses some memory location, it stores the contents in
the cache, so future accesses to the same location are served much more quickly. Caches also bet
that if the CPU read some memory location, it has a good chance of reading the next one
as well, so they store a whole chunk of memory, which often results in a very
high cache hit rate (the percentage of memory accesses that find what they're looking for in
the cache).</p>
<p>In reality, things are much more complicated. Caches pre-fetch data from memory
with methods based on the principles of temporal and spatial locality. These days
there are at least 2 (sometimes 3)
levels of cache in a computer, and complex algorithms are involved in cleaning up the
cache and keeping it coherent (in sync) with the main memory, especially when multiple
CPUs / cores are working together. This is a fascinating topic, and if you're interested
there's a lot of free and well-written information floating around on the web - just run a web
search.</p>
<p>But this is all hardware cache. In this article I want to discuss software caches.
A software cache is a programming technique used to speed up repetitive calculations,
and we saw concrete examples of this in the previous column. The implementation level
may be different, but the principles are the same. All we really want is to remember
computations we have already done and not repeat them unnecessarily.</p>
<p>
</p>
<h2><a name="basic_requirements_and_definitions">Basic requirements and definitions</a></h2>
<p>A cache has to store results of computations. That is, for inputs, it stores outputs.
Therefore, a cache is somewhat similar to a dictionary data structure - it stores
key/value pairs - given a key we want to quickly find the value. A hardware cache,
for example, stores the contents of memory locations. Therefore its key is a memory
address and its value is the address's contents.</p>
<p>How fast should a cache be? The obvious answer - as fast as possible - is not accurate.
It depends on the keys we use. A solution that works fastest in the most general case
is not always the solution that works fastest for some specific cases. We'll get back
to this a bit later.</p>
<p>
</p>
<h2><a name="to_infinity_and_beyond_">To infinity and beyond?</a></h2>
<p>There is an important complication with caches we haven't considered yet, however.
Caches, like all data structures (and physical structures), have some finite size.
Can we store all the calculations we'll ever need in a cache? Can we store the contents
of all memory locations in a hardware cache?</p>
<p>The answer to both questions is, of course, no. Hardware caches, for example, are
far smaller than the main memory (caches are made of fast hardware, which makes them
very expensive). And since the amount of memory is finite, we can't
let our software caches grow forever. Therefore, the cache must have a limited size.
The exact limit depends heavily on the application, the data and the amount of available
memory, so it's best to let the user of the cache decide how large it should be.</p>
<p>
</p>
<h2><a name="cache_removal">Cache removal</a></h2>
<p>So now we know our cache should be of limited size. This raises an important question:
what do we do when the cache is full? We could simply stop adding new keys,
but this is obviously a bad solution. The alternative is to free up space for new
keys by removing old ones - <em>cache removal</em>.</p>
<p>There are many algorithms and methods of cache removal, some of which depend on the 
data. Here are some of the more popular approaches:</p>
<ol>
<li><strong><a name="item_random">Random</a></strong><br />
</li>
Using this approach, when the cache is full and we want to add a new key to it, we just
throw out some old key at random.
<p></p>
<li><strong><a name="item_lru">LRU</a></strong><br />
</li>
LRU stands for Least Recently Used. Using this approach, we throw out the key that is the 
oldest - that is, it was accessed least recently.
<p></p>
<li><strong><a name="item_mru">MRU</a></strong><br />
</li>
MRU is Most Recently Used. We throw out the newest key, the one accessed most recently.
<p></p></ol>
<p>All three have their merits, and each may be useful for certain types of data. In our cache
implementation I will use LRU, since I believe it fits the more common applications
of a cache, and it has a certain logic to it. After all, if there is some key we accessed
more recently than another, it makes sense that the more recent key takes part in the
current computations, and the older key is the one that should be thrown away.</p>
<p>
</p>
<h2><a name="requirements_revisited">Requirements revisited</a></h2>
<p>Let's define the operations we want the cache to perform.</p>
<ul>
<li><strong><a name="item_creation_and_initialization">Creation and initialization</a></strong><br />
</li>
We'd like to specify the cache size upon its creation - that is, the maximal number of keys
it stores.
<p></p>
<li><strong><a name="item_lookup">Lookup</a></strong><br />
</li>
We'd like to ask the cache for a key and get the value, or an indication that this key doesn't
exist in the cache.
<p></p>
<li><strong><a name="item_insertion">Insertion</a></strong><br />
</li>
We'd like to add keys to the cache. If the key already exists in the cache, its value will be updated with
the latest value. If there's no such key in the cache, it will be added to the cache. If the 
cache is full, the LRU key will be removed to make space for the new key.
<p></p></ul>
<p>
</p>
<h2><a name="design">Design</a></h2>
<p>We certainly need a data structure that lets us look up values for keys efficiently. This will
be the core cache table. We can use the C++ standard <code>map</code> container for this purpose - it
provides logarithmic lookup and insertion (<code>O(log N)</code> where N is the cache size).</p>
<p>But how do we implement LRU removal? We could keep a ``last access'' time stamp for
each key, but then how would we know which key to throw away? Going over the whole cache to find the LRU
key is an <code>O(N)</code> operation - too slow.</p>
<p>We solve this with a very common programming trick - we trade space for time. Such problems
are usually solved by adding another data structure that supports the special requirement
efficiently and is kept fully coherent with the main data structure. What we need here, for example,
is a <em>priority queue</em> - keys sorted in a linear structure with the least recently used key in some known
location - such as one end of the queue - which lets us remove it quickly.</p>
<p>This leaves the question of how to implement the queue. We could go for a simple array, but that
won't do (can you figure out why?). The problem is that when there's a lookup on some cache key,
it immediately becomes the most-recently-used key and should be marked as such, for example by
being moved to the front of the queue. This operation is called <em>splicing</em> - taking an item from
the middle of a container and putting it at one end. Splicing in arrays is expensive (<code>O(N)</code>), which
is unacceptable.</p>
<p>Fortunately, there is a solution - a linked list. In a linked list, insertion and removal at both
ends are <code>O(1)</code>, and so is splicing, provided that we already have a pointer/handle to the key we
want to splice. And that can be arranged by holding such a pointer in the main cache data structure.</p>
<p>So, we'll go for two data structures: a <code>map</code> for the table, and a <code>list</code> (another container
in the C++ standard library) for the recent-usage queue. For each key, the table will hold the 
value and a pointer to the key in the queue, which makes it trivial to mark it as recent on 
lookups.</p>
<p>So, enough babbling - let's get to the code.</p>
<p>
</p>
<h2><a name="implementation">Implementation</a></h2>
<p>The source code package provided with this column contains a file named cache.h - this is the
implementation of the cache (it lives entirely in a .h file because it is a template):</p>
<pre>
 template &lt;typename key_type, typename value_type&gt;
 class cache</pre>
<p>Our cache can work for any key type and value type, given to it at creation as template
arguments.
Here is a portion of the cache class that lists its data members:</p>
<pre>
 typedef typename list&lt;key_type&gt;::iterator list_iter;
 
 struct cached_value
 { 
        cached_value(value_type value_, list_iter cache_i_) 
                : value(value_), cache_i(cache_i_)
        {
        }</pre>
<pre>
        value_type value;
        list_iter cache_i;
 };
 
 typedef typename map&lt;key_type, cached_value&gt;::iterator table_iter; 
 
 unsigned maxsize;
 
 list&lt;key_type&gt; lru_list;
 
 map&lt;key_type, cached_value&gt; table;</pre>
<p><code>maxsize</code> is the maximal size given to the cache at creation. <code>table</code> is the main cache table - for
each key, it holds a value and a pointer to the queue. <code>lru_list</code> is the queue - a list sorted by
recent use (with the most recently used key in the front).</p>
<p>Note that the class also defines a <code>cache_statistics</code> subtype. This is used to collect statistics of
cache usage. The implementation of statistics is simple enough that I won't cover it in the column.
It can be very useful, however, when you plan to use the cache for your own needs and want to analyze its
performance.</p>
<p>Lookup of keys in the cache is done as follows:</p>
<pre>
 value_type* find(const key_type&amp; key)
 { 
        table_iter ti = table.find(key);</pre>
<pre>
        IF_DEBUG(stats.finds++);</pre>
<pre>
        if (ti == table.end())
                return 0;</pre>
<pre>
        IF_DEBUG(stats.finds_hit++);</pre>
<pre>
        list_iter li = ti-&gt;second.cache_i;
        lru_list.splice(lru_list.begin(), lru_list, li);</pre>
<pre>
        return &amp;(ti-&gt;second.value);
 }</pre>
<p>The key is looked up in the table, which provides efficient lookups. If the key wasn't found, we simply
return 0. If the key was found, we have to splice the accessed key out of its place in the
queue and move it to the front - since now this key is the most recently used. Then we return
the value of the key.</p>
<p>Insertion is just a little more complex:</p>
<pre>
 void insert(const key_type&amp; key, const value_type&amp; value)
 {
        value_type* valptr = find(key);</pre>
<pre>
        if (valptr)
        {
                *valptr = value;
        }
        else
        { 
                lru_list.push_front(key);
                cached_value cv(value, lru_list.begin());
                table.insert(make_pair(key, cv));</pre>
<pre>
                if (lru_list.size() &gt; maxsize)
                {
                        key_type lru_key = lru_list.back();
                        table.erase(lru_key);
                        lru_list.pop_back();</pre>
<pre>
                        IF_DEBUG(stats.removed++);
                }
        }
 }</pre>
<p>First we look for the key in the table. Note that the cache's own <code>find()</code> method is
used here, because if we do find the element, we want it marked as MRU.</p>
<p>If the key was found, we just update its value and return. More interesting is what happens
when the key is not found - this is where the insertion takes place. After adding the key to the
cache, we check whether the cache size is exceeded. If it is, we throw out the key that's at
the back of <code>lru_list</code>, which is, if you recall, the LRU key - just what we need!</p>
<p>
</p>
<h2><a name="using_the_cache">Using the cache</a></h2>
<p>Using this cache is very simple. Here's a small demonstration:</p>
<pre>
 cache&lt;string, double&gt; cc(4);</pre>
<pre>
 cc.insert(&quot;pi&quot;, 3.14);
 cc.insert(&quot;e&quot;, 2.71);
 cc.insert(&quot;gold&quot;, 1.61);
 cc.insert(&quot;sq2&quot;, 1.41);</pre>
<pre>
 cc.debug_dump();</pre>
<pre>
 cc.insert(&quot;zero&quot;, 0);</pre>
<pre>
 cc.debug_dump();</pre>
<pre>
 double* e_value = cc.find(&quot;e&quot;);</pre>
<pre>
 cc.insert(&quot;one&quot;, 1);</pre>
<pre>
 cc.debug_dump();
 cc.statistics();</pre>
<pre>
 for (int i = 0; i &lt; 30; ++i)
        double* one_value = cc.find(&quot;one&quot;);</pre>
<pre>
 cc.statistics();</pre>
<p>Run this (don't forget to <code>#include &quot;cache.h&quot;</code> and run in debug mode, so that statistics 
will be collected and printed). Try to predict what the state of the cache is during the 
execution.</p>
<p>In the first dump, you see the items you inserted, in MRU order. In the second dump, you
don't see ``pi''. That's because it was the LRU key and was removed when ``zero'' was added. In the
third dump you don't see ``gold''. Why not ``e'', which was inserted before ``gold''? Because
``e'' was accessed by <code>find</code>, and thus was marked MRU.</p>
<p>
</p>
<h2><a name="efficiency_revisited">Efficiency revisited</a></h2>
<p>The way the cache is currently implemented, it does all operations in <code>O(log N)</code> (N being
the cache size). LRU removal/splicing is very efficient (<code>O(1)</code>); what takes the most time
is the <code>map</code> lookups. Can't we make it more efficient?</p>
<p>As a matter of fact, we can. Well, in most cases. By using a hash table instead of <code>map</code>
(which uses trees and is hence logarithmic), we can make all cache operations <code>O(1)</code> on average.
There's one catch, though - this works only if we have good hashing functions
for our keys. But since most keys are either numbers or strings, and good hashing
functions for those exist, it's not a real problem.</p>
<p>Interestingly, the C++ standard library has an extension container named <code>hash_map</code>, which
is a hash table. Since it's not standard yet (it's only an extension), its implementations
differ and aren't very stable. Bruce Eckel, in his ``Thinking in C++'' book, presents a benchmark
that shows a 4x speedup of <code>hash_map</code> over <code>map</code>.</p>
<p>Maybe his implementation of <code>hash_map</code>
is better, but I didn't get such results in my tests (on Microsoft Visual C++ .NET's implementation
of the STL). I got only a minimal (about 20%) speedup for integer keys (Eckel's benchmark, in my
opinion, is very dubious - the data he uses isn't good enough for reliable benchmarking). When
I tried <code>string</code>s as keys, <code>hash_map</code> was, in fact, twice as slow as <code>map</code>.</p>
<p>Hence, I stuck with <code>map</code>, but I'm confident that given a good implementation of a hash
map and a good hashing function for the keys, the cache can be made more efficient. The fact
that the cache size is limited and known beforehand only helps in creating a very speedy hash
table. This is left as an exercise for the astute reader.</p>
<hr /><p>Copyright (C) 2005 Eli Bendersky</p>

</body>

</html>
