<html>
<body>

<h1>MapReduce over Tahoe-LAFS</h1>

<p>Tahoe-LAFS (the Tahoe Least-Authority File System) can be used as the backing store
for MapReduce. Each machine that will act as a map or reduce worker is assumed to be
running at least a Tahoe webserver-client (if not a combined webserver and storage
node); each worker reads from and writes to Tahoe-LAFS through its local
webserver-client.</p>

<p>1. After installing Tahoe-LAFS, initialize Tahoe for MapReduce by running
<span style="font-family: courier">
<b>$HADOOP_HOME/bin/hadoop org.apache.hadoop.fs.lafs.Init</b></span>
(this is analogous to running <span style="font-family: courier">
<b>$HADOOP_HOME/bin/hadoop namenode -format</b></span>). This will
print the URI of the root capability under which all MapReduce data
will be stored. Copy this URI into core-site.xml as outlined below.</p>

<p>2. Add the following to core-site.xml:</p>

<span style="font-family: courier">
<p>
&lt;property&gt; <br />
&lt;name&gt; fs.lafs.impl &lt;/name&gt; <br />
&lt;value&gt; org.apache.hadoop.fs.lafs.LAFS &lt;/value&gt; <br />
&lt;/property&gt; <br />
</p>

<p>
&lt;property&gt; <br />
&lt;name&gt; lafs.rootcap &lt;/name&gt; <br />
&lt;value&gt; [URI of the root directory created in step 1] &lt;/value&gt; <br />
&lt;/property&gt; <br />
</p>

<p>
&lt;property&gt; <br />
&lt;name&gt; fs.default.name &lt;/name&gt; <br />
&lt;value&gt; lafs://localhost &lt;/value&gt; <br />
&lt;/property&gt; <br />
</p>
</span>
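
<p>For reference, Hadoop requires that core-site.xml wrap all properties in a single
<span style="font-family: courier"><b>&lt;configuration&gt;</b></span> element. Assembled
into one file, the three properties above would look like the following sketch (the
<span style="font-family: courier"><b>lafs.rootcap</b></span> value remains a placeholder
for the URI printed in step 1):</p>

<pre>
&lt;?xml version="1.0"?&gt;
&lt;configuration&gt;
  &lt;property&gt;
    &lt;name&gt;fs.lafs.impl&lt;/name&gt;
    &lt;value&gt;org.apache.hadoop.fs.lafs.LAFS&lt;/value&gt;
  &lt;/property&gt;
  &lt;property&gt;
    &lt;name&gt;lafs.rootcap&lt;/name&gt;
    &lt;value&gt;[URI of the root directory created in step 1]&lt;/value&gt;
  &lt;/property&gt;
  &lt;property&gt;
    &lt;name&gt;fs.default.name&lt;/name&gt;
    &lt;value&gt;lafs://localhost&lt;/value&gt;
  &lt;/property&gt;
&lt;/configuration&gt;
</pre>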

<p>Optionally, the amount of data (in bytes) that is buffered in memory before being
written across the network to Tahoe can be set by configuring the property:</p>

<span style="font-family: courier"><p>
&lt;property&gt; <br />
&lt;name&gt; lafs.write_chunk_size &lt;/name&gt; <br />
&lt;value&gt; 100000000 &lt;/value&gt; <br />
&lt;/property&gt; <br />
</p></span>

<p>The default is 100,000,000 bytes (100 MB).</p>
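<p>The buffering behavior this property controls can be sketched as follows. This is an
illustrative model in Python, not the plugin's actual code: a writer accumulates
outgoing bytes in memory and ships one chunk over the network each time the buffer
reaches <span style="font-family: courier"><b>lafs.write_chunk_size</b></span> bytes,
flushing any final partial chunk on close.</p>

```python
# Illustrative model (not the plugin's actual code) of how
# lafs.write_chunk_size bounds the data buffered in memory
# before each write across the network to Tahoe.

class ChunkedWriter:
    """Buffer outgoing bytes; emit a chunk whenever the buffer
    reaches chunk_size, and flush the remainder on close."""

    def __init__(self, chunk_size, send):
        self.chunk_size = chunk_size  # bytes to buffer before sending
        self.send = send              # callable that ships one chunk
        self.buf = bytearray()

    def write(self, data):
        self.buf.extend(data)
        # Ship full chunks as soon as enough data has accumulated.
        while len(self.buf) >= self.chunk_size:
            self.send(bytes(self.buf[:self.chunk_size]))
            del self.buf[:self.chunk_size]

    def close(self):
        # Flush whatever partial chunk remains.
        if self.buf:
            self.send(bytes(self.buf))
            self.buf.clear()


# Example: with a 10-byte chunk size, writing 25 bytes yields
# two full chunks and one 5-byte remainder at close.
sent = []
writer = ChunkedWriter(10, sent.append)
writer.write(b"a" * 25)
writer.close()
print([len(chunk) for chunk in sent])  # [10, 10, 5]
```

<p>A larger <span style="font-family: courier"><b>lafs.write_chunk_size</b></span>
means fewer, bigger network writes at the cost of more worker memory.</p>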

<p>3. Start MapReduce via the <span style="font-family: courier">
<b>start-mapred.sh</b></span> script.</p>

<p>4. Be sure to run Tahoe's garbage collector periodically. Tahoe expires shares whose
leases lapse, so renew the leases on your MapReduce data (for example with
<span style="font-family: courier"><b>tahoe deep-check --add-lease</b></span>)
to keep it alive.</p>

</body>
</html>