\documentclass[10pt,twoside,a4paper]{article}

\usepackage{epsfig}
\usepackage{graphicx}
\usepackage{epstopdf}
\usepackage{float}
\usepackage{amssymb}
\usepackage{amsmath}
\usepackage{enumerate}
\usepackage{fancyhdr}
\usepackage{stfloats}
\usepackage{subfig}
\usepackage{keystroke}
\usepackage[usenames,dvipsnames]{color}
\usepackage{appendix}
\usepackage{titlesec}
\usepackage{listings}
\usepackage[colorlinks=true,urlcolor=blue,linkcolor=blue,pdfborder={0 0
0},pdfstartview={FitH}, pdftitle={MR+: Knocking Down the MapReduce Brick-wall --
Guide}]{hyperref}
\usepackage{bbding}
\usepackage{longtable}
\usepackage{array}
\usepackage[margin=1.5cm, twoside, includeheadfoot]{geometry}

\pagestyle{fancy}
\makeatletter
  \lhead[Lahore University of Management Sciences]{MR+}
  \rhead[MR+]{\textsf{MRImprov}\ Code Guide}
\makeatother

\makeatletter \def\email#1{\\*[-3mm]\small
\texttt{\href{mailto:#1}{\color{Maroon}#1}}} \def\thickhrulefill{\leavevmode
\leaders \hrule height 1pt\hfill \kern \z@} \renewcommand{\maketitle}{%
	\begin{titlepage}
	\null\vfil
	\thispagestyle{empty}
	\begin{center}\leavevmode
		\normalfont
		\begin{minipage}[c]{0.63\textwidth}
			{\huge\@title\par}%
			\vskip 0.5cm
			{\Large \@date\par}
		\end{minipage}
		\rule[-1.85 cm]{2pt}{3.9 cm}
		\begin{minipage}[c]{0.35\textwidth}
			\begin{flushright}
				\LARGE \begin{tabular}[t]{r}%
		         \@author
      		\end{tabular}\par
			\end{flushright}
		\end{minipage}
	\end{center}
	\vfil\null
	\end{titlepage}
}
\makeatother

\definecolor{usrinpcolor}{rgb}{0.55,0.55,0.55}
\definecolor{codemodulecolor}{RGB}{220,50,0}
\definecolor{codecommentcolor}{RGB}{0,128,0}
\definecolor{filepathcolor}{RGB}{170,27,4}
\definecolor{elemxmlrowcolor}{RGB}{200,200,200}

\title{{\textsf{\color{Blue}\textbf{MR+}}}\\ \\ \textsf{MRImprov}\ Code Guide \\ \large \textsf{\url{http://code.google.com/a/apache-extras.org/p/mrplus/}}}
\author{Ahmad Humayun \email{ahmad.humyn@gmail.com}
\\*[1.5mm] Momina Azam \email{momina.azam@gmail.com}
\\*[1.5mm] Zubair Nabi \email{zn.zubairnabi@gmail.com}}

\date{\today}


\begin{document}
\maketitle

\setlength{\parindent}{0pt}
\setlength{\parskip}{0.5em}

\textbf{\textit{If you are copying text from this document, make sure you replace any missing underscores.}}

\section{System Requirements}
\label{sec:systemreq}
There are no particular hardware requirements to run the \textsf{MRImprov} system, except those posed by the required programs below.

The following programs are required on all nodes for running the system:
\begin{enumerate}
\item \textsf{\textbf{Python 2.5}} is the basic requirement for running the
system. All \textsf{MRImprov} scripts are written in Python and run on \textsf{Linux} (it would not be unreasonable to attempt removing the dependence on \textsf{Linux}, except for the \textsf{Hadoop HDFS} requirement, which is tailored to run on \textsf{Linux}).
\item \textsf{\textbf{Hadoop HDFS}}. Only the \textsf{HDFS} component needs to
be running; the system uses it as the distributed file system. The
code has been tested extensively with \textsf{Hadoop HDFS} version 0.20.0.
Nonetheless, newer versions should be compatible as long as the \textsf{HDFS} API
has not changed dramatically (see \texttt{\color{codemodulecolor}src.ma.fs.dfs.\_hdfs}).
\item \textsf{\textbf{nice}} (\textsf{GNU coreutils}) used for running new processes. See \texttt{\color{codemodulecolor}src.ma.commons.core.processrunner}.
\item \textsf{\textbf{top}} (with Page Fault Count enabled) used to get CPU
usage statistics. See \texttt{\color{codemodulecolor}src.ma.utils.sysinfo}.
\item \textsf{\textbf{ifconfig}} used to get network usage statistics. See \texttt{\color{codemodulecolor}src.ma.utils.sysinfo}.
\end{enumerate}

The following are recommended requirements. If not satisfied, the core system should still run:
\begin{enumerate}
\item \textsf{\textbf{ssh}} is required by some scripts to remotely execute code on other nodes in the cluster. Although running, analyzing and debugging code is easier when \texttt{ssh} is present, it is not required by the core of the system.
\item \textsf{\textbf{scp}} is required by some sub-scripts for transferring files within and off the cluster. Also used by the debugging script.
\item \textsf{\textbf{find}} (\textsf{GNU}) used by some run (startup) scripts.
\item \textsf{\textbf{nohup}} (\textsf{GNU coreutils}) used by some run (startup) scripts for avoiding script hangups when disconnected from the server. Also used by debug scripts.
\item \textsf{\textbf{tail}} (\textsf{GNU coreutils}) used by some run (startup) scripts to print the system output.
\item \textsf{\textbf{rdate}} to sync time from an online service (it is important that time is synchronized to some degree between all of the nodes). See \texttt{\color{codemodulecolor}scripts.synctime}.
\end{enumerate}
Other programs like \textsf{cp}, \textsf{rm} and \textsf{ln} are also used throughout the system. It is recommended that the \textsf{GNU coreutils} are installed on all nodes in the cluster.
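As a quick sanity check before deployment, the presence of these programs can be verified from Python itself. The sketch below is illustrative and not part of \textsf{MRImprov}; it uses a plain \texttt{PATH} scan rather than \texttt{shutil.which} (which only exists from Python 3.3), staying close to the Python baseline targeted here:

```python
import os

# Programs named in the required/recommended lists above; adjust as needed.
REQUIRED = ["nice", "top", "ifconfig"]
RECOMMENDED = ["ssh", "scp", "find", "nohup", "tail", "rdate"]

def find_program(name, path_dirs=None):
    """Return the full path of an executable, or None if it is absent."""
    if path_dirs is None:
        path_dirs = os.environ.get("PATH", "").split(os.pathsep)
    for d in path_dirs:
        candidate = os.path.join(d, name)
        # A program counts as installed if the file exists and is executable.
        if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
            return candidate
    return None

def missing_programs(programs):
    """List the programs that cannot be found on PATH."""
    return [p for p in programs if find_program(p) is None]
```

Running `missing_programs(REQUIRED)` on every node before starting the system gives an early warning instead of a mid-job failure.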

\section{Preparing the Cluster to run \textsf{MRImprov}}
\label{sec:prepcluster}
The code for this project should be at exactly the same path on all the nodes if
you want to run code remotely (ssh'ing from one specific node). In this guide we
assume the master \textsf{Hadoop DFS} node is the main front-end node itself.
The following steps need to be taken to set up the cluster:
\begin{enumerate}
\item NOTE that the setup assumes a base path where all the files (or at least most of them) will reside. We will refer to it as \texttt{\color{filepathcolor}p}. In our case it is \texttt{\color{filepathcolor}/state/partition1}.
\item First we will attempt to run the \textsf{HDFS}. If it is already installed, skip to Step \ref{sec:prepcluster:strtdfs}. Otherwise, copy the \textsf{Hadoop} package to the main node and unpack at path \texttt{\color{filepathcolor}p} (\texttt{\color{filepathcolor}/state/partition1}):
\begin{lstlisting}[language=bash]
tar -xzf hadoop-[ver].tar.gz -C p/
\end{lstlisting}
\item Create the directories for \textsf{Hadoop HDFS}
\begin{lstlisting}[language=bash]
mkdir p/hadoop-data
mkdir p/hadoop-data/base-tmp
mkdir p/hadoop-data/dfs-name
mkdir p/hadoop-data/dfs-data
\end{lstlisting}
This task can be done automatically by running the \texttt{\color{filepathcolor}scripts/remake\_dirs.sh} script. Note that this will recursively delete the existing \texttt{\color{filepathcolor}p/hadoop-data} directory.
\item Make sure \textsf{JDK} is installed. Refer to Appendix \ref{apdx:jdksetup} for help.
\item Set values in \texttt{\color{filepathcolor}p/hadoop-[ver]/conf/hadoop-env.sh}:
\begin{lstlisting}[language=bash, commentstyle=\color{codecommentcolor}\sffamily]
export JAVA_HOME=/usr/java/jdk1.6.0_14
export HADOOP_HEAPSIZE=500				# if required
\end{lstlisting}
\item As of \textsf{Hadoop} 0.20.0 there are three configuration files that need to be
populated with values: \texttt{hdfs-site.xml}, \texttt{core-site.xml} and
\texttt{mapred-site.xml}.

Write the following to the configuration file
\texttt{\color{filepathcolor}p/hadoop-[ver]/conf/hdfs-site.xml}. When
 copying, take care of the line breaks in the
 \lstinline[language=XML]$<description>$ tags, replace all \texttt{a.b.c.d} with
 actual IP values, and replace all \texttt{\color{filepathcolor}p} references
 with actual paths:
\begin{lstlisting}[language=XML]
	<property>
	  <name>dfs.replication</name>
	  <value>2</value>
	  <description>Default block replication.</description> 
	</property>
	
	<property>
	  <name>dfs.name.dir</name>
	  <value>p/hadoop-data/dfs-name</value>
	  <description>Path on the local filesystem where the NameNodes stores the
	  namespace and transactions logs persistently. </description> 
	</property>
	
	<property>
	  <name>dfs.data.dir</name>
	  <value>p/hadoop-data/dfs-data</value>
	  <description>Comma separated list of paths on the local filesystem of a
	  DataNode where it should store its blocks.</description>
	</property>
\end{lstlisting}

For \texttt{\color{filepathcolor}p/hadoop-[ver]/conf/core-site.xml}:
\begin{lstlisting}[language=XML]
	<property>
	  <name>fs.default.name</name>
	  <value>hdfs://a.b.c.d:9000</value>
	  <description>URI of HDFS NameNode.</description>
	</property>
	
	<property>
	  <name>hadoop.tmp.dir</name>
	  <value>p/hadoop-data/base-tmp</value>
	  <description>A base for other temporary directories.</description>
	</property>

\end{lstlisting}
For \texttt{\color{filepathcolor}p/hadoop-[ver]/conf/mapred-site.xml}:
\begin{lstlisting}[language=XML]
	<property>
	  <name>mapred.job.tracker</name>
	  <value>a.b.c.d:9001</value>
	  <description>Host or IP and port of JobTracker</description>
	</property>
	
	<property>
	  <name>mapred.map.tasks</name>
	  <value>100</value>
	  <description>The default number of map tasks per job. Typically set to a
	  prime several times greater than the number of available hosts. Ignored
	  when mapred.job.tracker is "local".</description>
	</property>
	
	<property>
	  <name>mapred.reduce.tasks</name>
	  <value>20</value>
	  <description>The default number of reduce tasks per job. Typically set to a
	  prime close to the number of available hosts. Ignored when mapred.job.tracker
	  is "local".</description>
	</property>

	<property>
	  <name>mapred.tasktracker.map.tasks.maximum</name>
	  <value>2</value>
	  <description>The maximum number of map tasks that will be run simultaneously
	  by a task tracker.</description>
	</property>
	
	<property>
	  <name>mapred.tasktracker.reduce.tasks.maximum</name>
	  <value>2</value>
	  <description>The maximum number of reduce tasks that will be run
	  simultaneously by a task tracker.</description> 
	</property>
	
	<property>
	  <name>mapred.map.tasks.speculative.execution</name>
	  <value>true</value>
	  <description>If true, then multiple instances of some map tasks may be
	  executed in parallel.</description> 
	</property>
	
	<property>
	  <name>mapred.reduce.tasks.speculative.execution</name>
	  <value>true</value>
	  <description>If true, then multiple instances of some reduce tasks may be
	  executed in parallel.</description> 
	</property>
	
	<property>
	  <name>mapreduce.map.java.opts</name>
	  <value>
	     -Xmx512M 
	     -Djava.library.path=/home/mycompany/lib 
	     -verbose:gc
	     -Xloggc:/tmp/@taskid@.gc 
	     -Dcom.sun.management.jmxremote.authenticate=false
	     -Dcom.sun.management.jmxremote.ssl=false </value> 
	</property>
	
	<property>
	  <name>mapreduce.reduce.java.opts</name>
	  <value>
	     -Xmx1024M 
	     -Djava.library.path=/home/mycompany/lib 
	     -verbose:gc -Xloggc:/tmp/@taskid@.gc 
	     -Dcom.sun.management.jmxremote.authenticate=false 
	     -Dcom.sun.management.jmxremote.ssl=false
	  </value>
	</property>
	
	<property>
	  <name>mapred.child.java.opts</name>
	  <value>-Xmx1024m</value>
	</property>
	
\end{lstlisting}

\item Write all the (slave) hosts in the hosts file,
\texttt{\color{filepathcolor}p/hadoop-[ver]/conf/slaves} - these are the
machines that will run the \textsf{HDFS} datanodes and the tasktrackers. Also change the master host in the masters file, \\\texttt{\color{filepathcolor}p/hadoop-[ver]/conf/masters} - the stated machine will run the \textsf{HDFS} namenode and the \textsf{Hadoop} master. Note that the same IP can be included in both the \texttt{\color{filepathcolor}slaves} and the \texttt{\color{filepathcolor}masters} file - meaning a datanode (and a tasktracker) and a namenode process will run on the same machine.

To check the availability of nodes, ping the broadcast address by \lstinline[language=bash]$ping -b 192.168.100.255$ (of course, for this to work, ICMP packets should be enabled on the nodes).

Also remember, it is better to give the interface IPs at every point rather than
using \texttt{localhost}. This is helpful in two ways: (1) you know exactly which network interface is being used; (2) you can use the same configuration files on different nodes in the cluster (since IPs themselves are global identifiers, unlike \texttt{localhost}).
\item Compress this `master' copy you have created at the current node:
\begin{lstlisting}[language=bash]
tar cfz hadoop-master-[ver].tar.gz hadoop-[ver]/
\end{lstlisting}
\item Transfer and extract to other nodes (see Appendix \ref{apdx:remoteaccs} for help): 
\begin{lstlisting}[language=bash]
scp hadoop-master-[ver].tar.gz root@192.168.1.1:p/
\end{lstlisting}
To extract to path \texttt{\color{filepathcolor}p}:
\begin{lstlisting}[language=bash]
tar xzf hadoop-master-[ver].tar.gz -C p/
\end{lstlisting}
\item \label{sec:prepcluster:strtdfs} Before starting the \textsf{HDFS} for the first time, format it:
\begin{lstlisting}[language=bash]
hadoop-[ver]/bin/hadoop namenode -format
\end{lstlisting}
\item The following command starts the \textsf{HDFS} namenode and the datanodes (listed in the masters and slaves files):
\begin{lstlisting}[language=bash]
hadoop-[ver]/bin/start-dfs.sh
\end{lstlisting}
\item After starting the \textsf{HDFS}, check its status to see if all nodes are running:
\begin{lstlisting}[language=bash]
hadoop-[ver]/bin/hadoop dfsadmin -report
\end{lstlisting}

You can stop the \textsf{HDFS} on all nodes by running:
\begin{lstlisting}[language=bash]
hadoop-[ver]/bin/stop-dfs.sh
\end{lstlisting}
Other useful \textsf{HDFS} commands:
\begin{enumerate}
\item To get a listing of a certain path (\texttt{\color{filepathcolor}/} is taken as the root of the HDFS)
\begin{lstlisting}[language=bash]
hadoop-[ver]/bin/hadoop dfs -ls '/path1/path2'
\end{lstlisting}
\item Execute the following to delete files:
\begin{lstlisting}[language=bash]
hadoop-[ver]/bin/hadoop dfs -rm '/path1/path2/file1.out'
\end{lstlisting}
\item Execute the following to delete directories recursively:
\begin{lstlisting}[language=bash]
hadoop-[ver]/bin/hadoop dfs -rmr '/path1/path2/'
\end{lstlisting}
\end{enumerate}
To get the full list of \textsf{HDFS} console commands visit \(\rightarrow\) \url{http://hadoop.apache.org/core/docs/current/hdfs_shell.html}
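All of the console commands above share the same shape, so a job script can build them programmatically before handing them to \texttt{subprocess}. The helper below is a hypothetical sketch, not part of the \textsf{MRImprov} codebase:

```python
import subprocess

def hadoop_cmd(hadoop_root, action, *args):
    """Build the argv list for a `hadoop dfs` console command.

    hadoop_root is the extracted Hadoop directory (p/hadoop-[ver]);
    `action` is the console command without the dash (ls, rm, rmr, ...).
    """
    return [hadoop_root + "/bin/hadoop", "dfs", "-" + action] + list(args)

# The resulting list can be handed to subprocess.call, e.g.:
# subprocess.call(hadoop_cmd("p/hadoop-0.20.0", "ls", "/path1/path2"))
```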
\item Place the \textsf{MRImprov} code on all nodes in the cluster. For this guide we will suppose that the location for the code is \texttt{\color{filepathcolor}p/mr/MRImprov}.

\textbf{NOTE}: Since this system is in active development, the user may frequently need to change the code and replicate it to all the nodes. The latter step is made easy by the following script:
\begin{lstlisting}[language=bash]
python p/mr/MRImprov/scripts/transfer.py
\end{lstlisting}
This command will copy the code from the current system to all the IPs given in \texttt{CLUSTER\_IPS} (in module \texttt{\color{codemodulecolor}ma.utils.clustersetup}). Note that since this command uses IPs from \texttt{CLUSTER\_IPS}, it is supposed to be run from the main/master node.
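A sketch of what such a replication step boils down to: one \texttt{scp} per entry in \texttt{CLUSTER\_IPS}. The function and the example IPs below are illustrative assumptions, not the actual contents of \texttt{\color{codemodulecolor}ma.utils.clustersetup}:

```python
# Example worker IPs; the real list lives in CLUSTER_IPS
# (module ma.utils.clustersetup) and will differ per cluster.
CLUSTER_IPS = ["192.168.1.1", "192.168.1.2"]

def build_transfer_cmds(code_dir, cluster_ips, user="root"):
    """Return one `scp -r` invocation per worker node, copying
    code_dir to the same path on the remote machine."""
    cmds = []
    for ip in cluster_ips:
        dest = "%s@%s:%s" % (user, ip, code_dir)
        cmds.append(["scp", "-r", code_dir, dest])
    return cmds
```

Each returned list can be executed with \texttt{subprocess.call}; keeping the destination path identical to the source path preserves the same-path requirement stated at the start of this section.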
\item Adjust the Python path to include this code's \texttt{\color{filepathcolor}src} directory. To maintain this
setting across reboots, it is a good idea to append this to the
\texttt{\color{filepathcolor}\textasciitilde/.bashrc} file:
\begin{lstlisting}[language=bash]
export PYTHONPATH=p/mr/MRImprov/src
\end{lstlisting}
\item To synchronize time on all nodes, run the following on every node (this could also be done with \textsf{ntpd}):
\begin{lstlisting}[language=bash, escapeinside={(*@}{@*)}]
mv /etc/localtime /etc/localtime-old
ln -sf /usr/share/zoneinfo/UTC /etc/localtime
/usr/bin/rdate -s time-a.nist.gov
nano /etc/sysconfig/clock
	(*@edit to \(\rightarrow\)@*) ZONE="Asia/Karachi"
	UTC=true
	ARC=false
/sbin/hwclock --systohc
\end{lstlisting}
This time synchronization task can be done automatically by running the \texttt{\color{filepathcolor}scripts/synctime.py} script.
\item In \texttt{\color{filepathcolor}MRImprov/src/ma/conf/siteconf.xml} change the following:
\begin{enumerate}
\item \lstinline[language=XML]$<hadoop_path>$ \(\rightarrow\) to the root Hadoop directory
\item \lstinline[language=XML]$<python_exec>$ \(\rightarrow\) if needed to the python executable path (used for running other processes)
\item \lstinline[language=XML]$<local_temp_dir>$ \(\rightarrow\) where all the temporary stuff for each job would go
\end{enumerate}
\item Make sure ICMP broadcast packets are allowed on all nodes:
\begin{enumerate}
\item Make sure each node responds to broadcast packets:
\begin{lstlisting}[language=bash]
echo 0 > /proc/sys/net/ipv4/icmp_echo_ignore_broadcasts
\end{lstlisting}
\item Make sure that ICMP broadcast packets remain allowed after a restart by setting the following in \texttt{\color{filepathcolor}/etc/sysctl.conf}:
\begin{lstlisting}[language=bash]
net.ipv4.icmp_echo_ignore_broadcasts = 0
\end{lstlisting}
\end{enumerate}
\end{enumerate}

\section{\textsf{MRImprov} preliminaries}
\label{sec:jobsetup}
Before jumping in to run \textsf{MRImprov}, there are some important things a user should know about the system. First, there is a strict pattern of directories that needs to be in place for running a job through \textsf{MRImprov}. The following directory structure is required in the \textsf{HDFS}:
\begin{flushleft}
\texttt{\color{filepathcolor}/mr\_temp/jobs/[job-id]/} \newline
\hspace*{38mm} \(\vdash\) \texttt{\color{filepathcolor}jobconf.xml} \newline
\hspace*{38mm} \(\vdash\) \texttt{\color{filepathcolor}mapflags.xml} \newline
\hspace*{38mm} \(\vdash\) \texttt{\color{filepathcolor}reduceflags.xml} \newline
\hspace*{38mm} \(\vdash\) \texttt{\color{filepathcolor}input/} \newline
\hspace*{38mm} \raisebox{1mm}{\(\llcorner\)} \texttt{\color{filepathcolor}output/}
\end{flushleft}
Each of these files/directories has an important function:
\begin{enumerate}
\item \textbf{\color{filepathcolor}jobconf.xml}: The purpose of this XML is to notify the system of customizations for a certain job (see Section \ref{sec:customization}). The most common ones include setting the mapper (\lstinline[language=XML]$<map_module>$) and reducer (\lstinline[language=XML]$<reduce_module>$) code for the job; in the case of \textsf{MR}, the number of reduce tasks to create (\lstinline[language=XML]$<no_reduces>$); in the case of \textsf{MR+}, the number of maps input to a single reduce (\lstinline[language=XML]$<map_to_reduce_ratio>$); and so on.

This XML file is essentially a subset of \texttt{\color{filepathcolor}MRImprov/src/ma/conf/siteconf.xml}. Have a look at the available elements and their explanations in Appendix \ref{apdx:siteconf.xml}. As noted in that Appendix, some of these settings are only available in \texttt{\color{filepathcolor}siteconf.xml} whereas some are available in both \texttt{\color{filepathcolor}siteconf.xml} and \texttt{\color{filepathcolor}jobconf.xml}. To understand this dichotomy, one needs to be aware of the purpose of both files. \texttt{\color{filepathcolor}siteconf.xml} is the main settings file for the whole system - it sets global variables like important filenames, filepaths, architectural settings and so on. The file also contains default values for settings related to individual jobs, e.g.\ the number of reduces that need to be input to another reduce in an \textsf{MR+} system (\lstinline[language=XML]$<reduce_input_ratio>$). It also defines values which, arguably, have no meaning as global system settings - like which map task runner module to use (\lstinline[language=XML]$<mrplus_map_task_runner>$). Since these are only default settings for the system, how does one set them for a particular job? This is where \texttt{\color{filepathcolor}jobconf.xml} plays its role. It overrides the default settings given in \texttt{\color{filepathcolor}siteconf.xml} with values to be used in a particular job. Nonetheless, settings not defined in \texttt{\color{filepathcolor}jobconf.xml} default to those given in \texttt{\color{filepathcolor}siteconf.xml} (see the \texttt{get\_unicode\_data()} function in \texttt{\color{codemodulecolor}ma.const}).
There are only a couple of settings that necessarily have to be defined in \texttt{\color{filepathcolor}jobconf.xml}, since there are no defaults for them in \texttt{\color{filepathcolor}siteconf.xml} - these include the names of the mapper (\lstinline[language=XML]$<map_module>$) and reducer (\lstinline[language=XML]$<reduce_module>$) code.
\item \textbf{\color{filepathcolor}mapflags.xml}: This is an important XML file
for a job since it records how many map tasks are available for
computation; the ID of each map task; which input files are associated with
each map task; and whether any structure information is associated with each task.
It is important to note that the system does not look at the contents of the
\texttt{\color{filepathcolor}input/} directory when choosing input for a map;
rather, it blindly follows the XML to find which inputs to compute, i.e.\ the system
will misbehave if a file stated in the XML is not actually present, and will
ignore files that are present but not mentioned in the XML. When running
\textsf{MR+} with Master or \textsf{MR}, this XML file is read in at startup to note the inputs and structure information for each map. In the current implementation of these two systems, this XML is ignored while the job progresses. By `ignored' we mean that flags such as `picked-up' (\lstinline[language=XML]$<pu>$) and `done' (\lstinline[language=XML]$<dn>$) are not updated. However, if the user is running \textsf{MR+} without Master, this file acts as a lifeline. It helps orchestrate the `Mapping' of the input in the absence of a Master node, maintaining information like which maps have been processed (\lstinline[language=XML]$<dn>$); how many times a map has failed (\lstinline[language=XML]$<fl>$); etc. The complete format of the file (with comments) is available in \texttt{\color{filepathcolor}MRImprov/src/ma/conf/mapflags.dtd}. Also look at \texttt{\color{codemodulecolor}ma.fs.dfs.dfsflags} to see what functionality this XML provides for \textsf{MR+} without Master. You can also see an example in \texttt{\color{filepathcolor}MRImprov/src/ma/conf/mapflags.xml}
\item \textbf{\color{filepathcolor}reduceflags.xml}: This file is simply a copy of \texttt{\color{filepathcolor}MRImprov/src/ma/conf/testreduceflags.xml} as far as the job setup is concerned. Its actual usage depends on which system the user is running (\textsf{MR+} or \textsf{MR+} with Master or \textsf{MR}). Currently it is only used in \textsf{MR+} (without Master) to maintain the status of existing reduce tasks on the DFS. This XML is essentially a collection of \lstinline[language=XML]$<rt>$ elements (each associated with a certain reduce task) - which indicate the map inputs (or reduce inputs in the case of \textsf{MR+}) for a certain reduce task; whether the reduce task is currently being computed by a certain worker; whether the reduce task has been processed; and so on. The system always starts with the \texttt{\color{filepathcolor}MRImprov/src/ma/conf/testreduceflags.xml} file, which is filled with new reduce tasks as the job progresses. The complete format of the file (with comments) is available in \texttt{\color{filepathcolor}MRImprov/src/ma/conf/reduceflags.dtd}. Also look at \texttt{\color{codemodulecolor}ma.fs.dfs.dfsflags} to see how this XML file is used in \textsf{MR+} without Master.
\item \textbf{\color{filepathcolor}input/}: Will contain all the input files for a certain job. The exact filenames are not important as long as they are reflected in the \texttt{\color{filepathcolor}mapflags.xml} file.
\item \textbf{\color{filepathcolor}output/}: This is the directory where the final output of a job is written to.
\end{enumerate}
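The \texttt{\color{filepathcolor}jobconf.xml}-over-\texttt{\color{filepathcolor}siteconf.xml} fallback described above can be sketched as follows. This mimics, but does not reproduce, \texttt{get\_unicode\_data()} in \texttt{\color{codemodulecolor}ma.const}; the element names come from this guide, while the minimal XML strings are made up for illustration:

```python
from xml.dom.minidom import parseString

def get_setting(name, jobconf_xml, siteconf_xml):
    """Return the text of element `name` from jobconf.xml if present,
    otherwise fall back to the siteconf.xml default; None if in neither."""
    for conf in (jobconf_xml, siteconf_xml):
        nodes = parseString(conf).getElementsByTagName(name)
        if nodes and nodes[0].firstChild is not None:
            return nodes[0].firstChild.data.strip()
    return None

# Minimal made-up documents; element names are from this guide.
jobconf  = "<conf><map_module>mapall</map_module></conf>"
siteconf = ("<conf><map_module>map4</map_module>"
            "<no_reduces>20</no_reduces></conf>")
```

Here `get_setting("map_module", ...)` picks the per-job override, while `get_setting("no_reduces", ...)` falls back to the site-wide default.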

\section{Preparing Input Data}
\label{sec:prepinputdata}
Although many users of the system will already have data to process, in some cases you may want to produce custom (synthetic) data to test the system. \textsf{MRImprov} provides a number of scripts to produce such input data. All such scripts are in \texttt{\color{filepathcolor}MRImprov/src/ma/tests/}. Some of them are:
\begin{enumerate}
\item \textbf{\color{filepathcolor}inputcreator.py}: Given a small list of hard-coded words (given in the list \lstinline[language=python]$keys$) with their respective weightages, this script creates files where the probability of occurrence of a certain word is proportional to its weightage.

Appropriate Mappers: \texttt{\color{filepathcolor}map4.py}, \texttt{\color{filepathcolor}mapall.py}; Reducers: \texttt{\color{filepathcolor}reduce.py}, \texttt{\color{filepathcolor}reducethresh.py}
\item \textbf{\color{filepathcolor}inputcreator2.py}: Similar to \texttt{\color{filepathcolor}inputcreator.py}, except that the list of words is generated by \lstinline[language=python]$get_cycled_word()$ and the weightage of occurrence is randomly decided. The number of total unique words (which form the whole dataset) is decided by:
\[n_K = (N\times T_c \times M_s / R_I) \times \alpha\]
where \(N\) is the number of nodes, \(T_c\) is the task capacity of each node, \(M_s\) is the number of steps of input for maps, \(R_I\) is the number of maps input to a single reduce, and \(\alpha\) can be set to any integer value - where higher values will produce more unique words.

Appropriate Mappers: \texttt{\color{filepathcolor}mapkeys.py},  \texttt{\color{filepathcolor}mapall.py}; Reducers: \texttt{\color{filepathcolor}reduce.py}, \texttt{\color{filepathcolor}reducethresh.py}
\item \textbf{\color{filepathcolor}inputcreator3.py}: Similar to \texttt{\color{filepathcolor}inputcreator2.py}, except that a file, \texttt{\color{filepathcolor}word\_count.txt}, is also written, containing all the words ordered by their frequency of occurrence.

Appropriate Mappers: \texttt{\color{filepathcolor}mapkeys.py},  \texttt{\color{filepathcolor}mapall.py}; Reducers: \texttt{\color{filepathcolor}reduce.py}, \texttt{\color{filepathcolor}reducethresh.py}
\item \textbf{\color{filepathcolor}inputcreator\_exp\_prbl.py}: Similar to \texttt{\color{filepathcolor}inputcreator3.py}, except that the probability of occurrence of a certain key is decided by an exponential function:
\[P_K = \Big\lceil\frac{1}{\gamma}\exp\Big(\frac{i}{n_K}\times \beta\Big)\Big\rceil \times \theta\]
where \(i\) is the index number of the key, \(n_K\) is the number of keys, and parameters \(\beta=12\), \(\gamma=100\), and \(\theta=10\). This seemingly arbitrary function ensures that the weightages of keys (unique words) increase gradually with each new key added to the list \lstinline[language=python]$keys$.

Appropriate Mappers: \texttt{\color{filepathcolor}mapkeys.py},  \texttt{\color{filepathcolor}mapall.py}; Reducers: \texttt{\color{filepathcolor}reduce.py}, \texttt{\color{filepathcolor}reducethresh.py}
\item \textbf{\color{filepathcolor}heavykey.py}: Similar to \texttt{\color{filepathcolor}inputcreator.py}, except that the keys here are different. The file is called \texttt{\color{filepathcolor}heavykey.py} because it can produce files heavily inclined towards a certain word.

Appropriate Mappers: \texttt{\color{filepathcolor}map10.py},  \texttt{\color{filepathcolor}mapall.py}; Reducers: \texttt{\color{filepathcolor}reduce.py}, \texttt{\color{filepathcolor}reducethresh.py}
\item \textbf{\color{filepathcolor}akamaigen.py}: This script generates a dataset containing only two keywords: ``e526.d.akamaiedge.net'' and ``junk.microsoft.com''. The script takes three arguments:
\begin{lstlisting}[language=bash]
python akamaigen.py [mean] [std_dev] [starting_file_no]
\end{lstlisting}
If an incorrect number of arguments is provided, the script defaults to \lstinline[language=python]$[mean]=20 [std_dev]=17 [starting_file_no]=0$. The script creates a dataset (of multiple files) where the overall \(\mu\) and \(\sigma\) of the frequency of the key ``e526.d.akamaiedge.net'' in the dataset are equal to those given in the arguments.

Appropriate Mappers: \texttt{\color{filepathcolor}mapak.py}, \texttt{\color{filepathcolor}mapall.py}; Reducers: \texttt{\color{filepathcolor}reduceavg.py}, \texttt{\color{filepathcolor}reduceavg\_sleep.py}, \texttt{\color{filepathcolor}reduce.py}
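The default-argument behaviour described above can be sketched as follows; the function name is ours, not taken from \texttt{\color{filepathcolor}akamaigen.py} itself:

```python
def parse_akamaigen_args(argv):
    """Parse [mean] [std_dev] [starting_file_no], falling back to the
    documented defaults when the argument count is wrong."""
    if len(argv) != 3:
        return (20.0, 17.0, 0)   # [mean]=20 [std_dev]=17 [starting_file_no]=0
    mean, std_dev, start = argv
    return (float(mean), float(std_dev), int(start))
```

A real script would call this as `parse_akamaigen_args(sys.argv[1:])`.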
\item \textbf{\color{filepathcolor}akamaigen\_master.py}: This script builds a dataset using \texttt{\color{filepathcolor}akamaigen.py}. Each time it calls the latter, it changes the mean by \lstinline[language=python]$increment_mean$ to make a new \textit{structure} of input files (see Section \ref{sec:runningmrimprov}). Since this script creates a dataset with a number of structures, it also creates a custom \texttt{\color{filepathcolor}mapflags.xml} file in the current directory.
\item \textbf{\color{filepathcolor}gutenbergsplit.py}: [TODO] $<$also explain 
\texttt{\color{filepathcolor}gutenberg\_dwnld.py}$>$
\item \textbf{\color{filepathcolor}gutenbergsplit10.py}: [TODO]
\end{enumerate}
Note that all datasets are associated with a job-id for easy book-keeping. This job-id can be set in each of the above scripts in the variable \lstinline[language=python]$job_id$. This number is important since the user will need to specify it before running a job (see Section \ref{sec:runningmrimprov}). Also, the scripts will create the dataset at the location specified in \lstinline[language=python]$dest_dir$ (usually set to \texttt{\color{filepathcolor}/state/partition1/datasets/[job\_id]}).
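The two weightage formulas above translate directly into Python; the function names below are illustrative, not the names used in the scripts:

```python
import math

def num_unique_keys(N, T_c, M_s, R_I, alpha):
    """n_K = (N * T_c * M_s / R_I) * alpha  (inputcreator2.py)."""
    return int((N * T_c * M_s / float(R_I)) * alpha)

def key_weight(i, n_K, beta=12, gamma=100.0, theta=10):
    """P_K = ceil(exp((i / n_K) * beta) / gamma) * theta
    (inputcreator_exp_prbl.py)."""
    return int(math.ceil(math.exp((float(i) / n_K) * beta) / gamma)) * theta

# The weight grows gradually with the key index i:
# key_weight(0, 50) == 10, key_weight(50, 50) == 16280
```

For example, a cluster of 10 nodes with task capacity 2, 5 input steps, 4 maps per reduce and \(\alpha=2\) yields \(n_K = 50\) unique words.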

Once the dataset has been created, it needs to be moved to the \textsf{HDFS}. Before that can be done, one needs to ensure that the \textsf{HDFS} is running (see Section \ref{sec:prepcluster}). There are a number of ways to transfer the dataset and prepare the files for running the job:
\begin{enumerate}
\item \textbf{\color{filepathcolor}MRImprov/scripts/setupjob.py}: This script is the most convenient way to set up a job. It just needs to be given a job-id, the number of input files, and the number of files per map (this argument is usually 1, but a single map worker can take multiple files as input):
\begin{lstlisting}[language=bash]
python setupjob.py [job_id] [no_input_files] [files_per_map]
\end{lstlisting}
Before starting, the script deletes all the XML files on the local filesystem and the job
directory on the DFS. It then creates a new \texttt{\color{filepathcolor}mapflags.xml} with a single structure (if the user has a custom one, it can be replaced later on), takes \texttt{\color{filepathcolor}jobconf.xml} from \texttt{\color{filepathcolor}testjobconf.xml}, and (re)creates all the relevant \textsf{HDFS} directories shown in Section \ref{sec:jobsetup}. The input files should be in the current path.
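The per-job layout from Section \ref{sec:jobsetup} that gets (re)created can be expressed compactly. The helper below is an illustrative sketch, not code from the repository:

```python
import posixpath

def job_paths(job_id):
    """Expected HDFS layout for a job, mirroring the tree in the
    'MRImprov preliminaries' section."""
    base = posixpath.join("/mr_temp/jobs", str(job_id))
    return {
        "jobconf":     posixpath.join(base, "jobconf.xml"),
        "mapflags":    posixpath.join(base, "mapflags.xml"),
        "reduceflags": posixpath.join(base, "reduceflags.xml"),
        "input":       posixpath.join(base, "input/"),
        "output":      posixpath.join(base, "output/"),
    }
```

Using `posixpath` (rather than `os.path`) keeps the paths HDFS-style regardless of the local platform.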
\item \textbf{\color{filepathcolor}MRImprov/scripts/cpfileshdfs.py}: This script simply copies the files in the current directory to the input folder in the DFS for the specified job:
\begin{lstlisting}[language=bash]
python cpfileshdfs.py [job_id] [first_file] [last_file]
\end{lstlisting}
Note that if you want to use this script, your input files should follow the default input filename pattern given in \lstinline[language=XML]$<map_input_filename>$ (\texttt{\color{filepathcolor}siteconf.xml}).
\item \textbf{Manually}: $<$also explain 
\texttt{\color{filepathcolor}gutenberg\_dwnld.py}$>$
\end{enumerate}

\section{Running \textsf{MRImprov}}
To run a job, your current directory should be the one which contains the
dataset and the required flag files. Before running a job it is necessary to
refresh the flags on the \textsf{HDFS}. This can be done by executing:
\begin{lstlisting}[language=bash]
python p/MRImprov/scripts/refreshflags -j [job_id]
\end{lstlisting}
After refreshing the flags you can start an MR+ job by executing:
\begin{lstlisting}[language=bash]
python p/MRImprov/scripts/runscript+_mstr -rmn [job_id]
\end{lstlisting}

\label{sec:runningmrimprov}

\section{Customizing the system}
\label{sec:customization}

\section{Debugging Problems}
\label{sec:debugging}

\newpage
\appendix
\appendixpage
\addappheadtotoc

\titleformat{\section}
	{\normalfont\Large\bfseries}{Appendix \thesection:}{1em}{}



\section{\textsf{Linux} remote access}
\label{apdx:remoteaccs}
If an SSH server is installed on a \textsf{Linux} machine (like \textsf{OpenSSH}), it is quite simple to work remotely. Using SSH you can log in remotely and work as if you were physically at the machine. Once you are logged in with SSH, the prompt behaves exactly as you would expect: you can run your usual scripts, install software, manage the system, and so on.

The usual format of logging in remotely using SSH is as follows:
\begin{lstlisting}[language=bash]
ssh <IP>
\end{lstlisting}
Here we are SSH'ing into the given IP. If you want to SSH into a system using a particular username (suppose \texttt{root}), this is the method to follow (logging into IP \texttt{192.168.100.1}):
\begin{lstlisting}[language=bash]
ssh root@192.168.100.1
\end{lstlisting}
After running the command, the SSH server you are attempting to log into will
ask for the password. If you regularly log into a particular server from a
machine, it is annoying to be asked for the password repeatedly. Here
\texttt{ssh-keygen} comes to the rescue: by generating a key pair and placing the public key on the server, SSH will stop asking you for the password (which is especially helpful for running \textsf{Hadoop HDFS}):
\begin{lstlisting}[language=bash, escapeinside={(*@}{@*)}, firstnumber=1,
commentstyle=\color{codecommentcolor}\sffamily, numbers=left, numberstyle=\tiny, stepnumber=1, numbersep=5pt, caption={Setting RSA keys for SSH}, label=lst:sshkeygen] 
# generate public & private RSA key 
ssh-keygen -t rsa -P ''
# copy public key to local authorized keys file
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
# scp the public key to the remote node
scp ~/.ssh/id_rsa.pub [IP]:~/.ssh/authorized_keys (*@\label{lst:sshkeygentransfer}@*)
# at this point you will be asked for the password of the remote machine
\end{lstlisting}
Remember to replace \lstinline[language=bash]$[IP]$ with the IP of the server for which you don't want to be bugged for the password. If you want to do this for several machines, you just need to repeat the step on Line \ref{lst:sshkeygentransfer}, substituting the respective IP each time. You can also run commands directly on the remote server by following the basic \texttt{ssh} command (given above) with the command you want to run there. This will execute the given command on the remote machine, and pipe the output to your local machine. E.g. if you want to see the contents of the \texttt{root} home directory on the remote machine:
\begin{lstlisting}[language=bash]
ssh root@192.168.100.1 'ls -l ~/'
\end{lstlisting}
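The key-distribution step on Line \ref{lst:sshkeygentransfer} can itself be scripted across several machines. The sketch below only builds and runs the \texttt{scp} command per node; the IP list and function names are illustrative (and note that, when invoked without a shell, \texttt{\textasciitilde} expansion is left to the remote side).

```python
import subprocess

NODES = ["192.168.100.2", "192.168.100.3"]  # illustrative node IPs

def keycopy_cmd(ip, user="root"):
    """argv that copies the local public RSA key to one node's
    authorized_keys file (the scp step from the listing above)."""
    return ["scp", "~/.ssh/id_rsa.pub",
            f"{user}@{ip}:~/.ssh/authorized_keys"]

def distribute_key(nodes=NODES):
    # you will be asked for each node's password this one last time
    for ip in nodes:
        subprocess.check_call(keycopy_cmd(ip))
```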

\textsf{OpenSSH} comes with a tool to transfer files called Secure copy (\texttt{scp}). It is extremely useful to transfer files to and from remote machines using this tool. The basic format of the command for transferring a file is quite simple:
\begin{lstlisting}[language=bash]
scp [SRC] [DST]
\end{lstlisting}
E.g. if you want to transfer a file \texttt{/root/books-to-read.txt} from your system to user \texttt{zubair}'s directory on the remote machine, you need to:
\begin{lstlisting}[language=bash]
scp /root/books-to-read.txt root@192.168.100.1:/home/zubair
\end{lstlisting}
To transfer a file from the remote system to your local machine:
\begin{lstlisting}[language=bash]
scp root@192.168.100.1:/home/zubair/books-to-read.txt /root/
\end{lstlisting}

Among the most popular SSH client tools for Microsoft\(^\circledR\) \textsf{Windows}\(^\text{\texttrademark}\) is \textsf{putty}. It comes with an SSH GUI and a Secure copy tool called \textsf{pscp}. If the \textsf{putty} directory has been added to the \texttt{PATH} variable, you can access the Secure copy tool directly by just typing \lstinline[language=bash]$pscp$. Although some of \texttt{pscp}'s options differ from \texttt{scp}'s, it aims to provide the same functionality. One useful extra option is \lstinline[language=bash]$-pw$, which allows you to give the password of the remote server on the command line rather than being asked for it in a prompt; this helps when writing scripts to automate file transfers. The following example transfers a file \texttt{books-to-read.txt} from the local \textsf{Windows}\(^\text{\texttrademark}\) machine to the remote \textsf{Linux} machine:
\begin{lstlisting}[language=bash]
pscp -pw remotepassword123 "C:/Users/Ahmad/books-to-read.txt" root@203.128.4.45:/root/
\end{lstlisting}
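Such automation might look like the following sketch, which builds one \texttt{pscp} command line per file and runs them in turn. The function names are hypothetical; the argument order mirrors the example above.

```python
import subprocess

def pscp_cmd(password, local_path, user, host, remote_dir):
    """Build a pscp argv for one local-to-remote transfer."""
    return ["pscp", "-pw", password,
            local_path, f"{user}@{host}:{remote_dir}"]

def transfer_all(password, files, user, host, remote_dir):
    # transfer each file in turn; raises on the first failed transfer
    for f in files:
        subprocess.check_call(pscp_cmd(password, f, user, host, remote_dir))
```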
\newpage

\section{\textsf{JDK} setup on \textsf{Linux}}
\label{apdx:jdksetup}
Setting up the \textsf{Java Development Kit} (\textsf{JDK}) on \textsf{Linux} is quite simple. Follow the steps below:
\begin{enumerate}
\item Download the current \textsf{JDK} version from \url{http://java.sun.com/javase/downloads/index.jsp} for \textsf{Linux}. Before downloading, check whether there are any \textsf{JDK} version requirements for running \textsf{Hadoop HDFS}. Here we will suppose the file downloaded is \texttt{jdk-6u15-linux-i586.bin}.
\item Transfer the \texttt{.bin} file to the machine where you want to install \textsf{JDK}. Refer to Appendix \ref{apdx:remoteaccs} for help.
\item Now you will need to inflate and install the \textsf{JDK}. First change the file permissions to allow execution of the \texttt{.bin} file:
\begin{lstlisting}[language=bash]
chmod 0744 [full path]/jdk-6u15-linux-i586.bin
\end{lstlisting}
\item Execute the file to inflate the \textsf{JDK}:
\begin{lstlisting}[language=bash]
[full path]/jdk-6u15-linux-i586.bin
\end{lstlisting}
\item Move the inflated directory to a location in \texttt{/usr}
\begin{lstlisting}[language=bash]
mkdir /usr/java
mv [full path]/jdk1.6.0_15/ /usr/java/
\end{lstlisting}			
\end{enumerate}
\newpage

\section{\textbf{siteconf.xml} and \textbf{jobconf.xml}}
\label{apdx:siteconf.xml}
All placeholders used in any XML element values are defined in \lstinline[language=python]$xml_derived_placeholders$ in \texttt{\color{codemodulecolor}ma.const}:
\begin{longtable}{|l||c|c|>{\small}p{0.32\textwidth}|}
\hline
\multicolumn{4}{|c|}{\Large \textbf{{\lstinline[language=XML]$<element>$} availability and explanation}\rule[-2mm]{0mm}{7mm}} \\ 
\hline
& \texttt{\small \color{filepathcolor}siteconf.xml} & \texttt{\small \color{filepathcolor}jobconf.xml} & \large \textbf{Comments} \rule[-2mm]{0mm}{6.5mm} \\
\hline
\endfirsthead
\multicolumn{4}{c}{{\bfseries \tablename\ \thetable{} -- continued from previous page}} \\
\hline
& \texttt{\small \color{filepathcolor}siteconf.xml} & \texttt{\small \color{filepathcolor}jobconf.xml} & \large \textbf{Comments} \rule[-2mm]{0mm}{6.5mm} \\
\hline
\endhead
\hline \multicolumn{4}{r}{{Continued on next page}} \\
\endfoot
\hline \hline
\endlastfoot
\multicolumn{4}{|c|}{\lstinline[language=XML]$<resource_names>$ \rule[-1.5mm]{0mm}{6mm}} \\
\hline
\lstinline[language=XML]$<job_conf_filename>$ & \Checkmark & & Filename for the individual job settings file. \\
\lstinline[language=XML]$<job_flags_filename>$ & \Checkmark & & Filename for the XML noting the status of each job (used in \texttt{\color{codemodulecolor}ma.fs.dfs.dfsflags}). \\
\lstinline[language=XML]$<map_flags_filename>$ & \Checkmark & & Filename for the file that keeps maps' info. for a particular job. \\
\lstinline[language=XML]$<reduce_flags_filename>$ & \Checkmark & & Filename for the file that keeps reduces' info. for a particular job. \\
\lstinline[language=XML]$<map_input_filename>$ & \Checkmark & & Standard format for map input filenames. \\
\lstinline[language=XML]$<map_output_filename>$ & \Checkmark & & Standard format for map output filenames used for \textsf{MR+}. \\
\lstinline[language=XML]$<map_output_filename_with_key>$ & \Checkmark & & Standard format for map output filenames used specifically in \textsf{MapReduce} where the output of a map is divided in sections, each destined to a different reduce. \\
\lstinline[language=XML]$<reduce_output_filename>$ & \Checkmark & & Standard format of reduce output (i.e. the output of the whole \textsf{MR+}/\textsf{MR}). \\
\lstinline[language=XML]$<map_proc_temp_filename>$ & \Checkmark & & Standard filename format of the file used to pickle information between \texttt{\color{filepathcolor}maptip} and \texttt{\color{filepathcolor}mr*maptaskrunner*}. \\
\lstinline[language=XML]$<reduce_proc_temp_filename>$ & \Checkmark & & Standard filename format of the file used to pickle information between \texttt{\color{filepathcolor}reducetip} and \texttt{\color{filepathcolor}mr*reducetaskrunner*}. \\
\hline
\multicolumn{4}{|c|}{\lstinline[language=XML]$<filepaths>$ \rule[-1.5mm]{0mm}{6mm}} \\
\hline
\lstinline[language=XML]$<logconf_filepath>$ & \Checkmark & \Checkmark & The configuration file used by the logging module. \\
\lstinline[language=XML]$<local_temp_dir>$ & \Checkmark & & The main temporary directory for the system; used as the base directory for some of the following elements. \\
\lstinline[language=XML]$<local_input_temp_dir>$ & \Checkmark & & Temporary directory to shuffle the data from different nodes and for storing job configuration files. \\
\lstinline[language=XML]$<local_output_temp_dir>$ & \Checkmark & & Temporary directory to keep output files, which will be shuffled for input to other processes. \\
\lstinline[language=XML]$<local_compute_temp_dir>$ & \Checkmark & & Directory usually used by processes for temporary storage. \\
\lstinline[language=XML]$<local_job_conf_filepath>$ & \Checkmark & & The local location where \texttt{\color{filepathcolor}jobconf.xml} is temporarily copied \\
\lstinline[language=XML]$<local_job_flags_filepath>$ & \Checkmark & & The local location where \texttt{\color{filepathcolor}jobflags.xml} is temporarily copied (used in \texttt{\color{codemodulecolor}ma.fs.dfs.dfsflags}). \\
\lstinline[language=XML]$<local_map_flags_filepath>$ & \Checkmark & & The local location where \texttt{\color{filepathcolor}mapflags.xml} is temporarily copied (used in \texttt{\color{codemodulecolor}ma.fs.dfs.dfsflags}). \\
\lstinline[language=XML]$<local_reduce_flags_filepath>$ & \Checkmark & & The local location where \texttt{\color{filepathcolor}reduceflags.xml} is temporarily copied (used in \texttt{\color{codemodulecolor}ma.fs.dfs.dfsflags}). \\
\hline
\multicolumn{4}{|c|}{\lstinline[language=XML]$<resource_locations>$ \rule[-1.5mm]{0mm}{6mm}} \\
\hline
\lstinline[language=XML]$<job_dir_dfs_filepath>$ & \Checkmark & & Main DFS directory for the system; used as a base directory for some of the following elements.  \\
\lstinline[language=XML]$<job_conf_dfs_filepath>$ & \Checkmark & & Path where \texttt{\color{filepathcolor}jobconf.xml} is stored on the DFS for a particular job. \\
\lstinline[language=XML]$<job_flags_dfs_filepath>$ & \Checkmark & & Path where \texttt{\color{filepathcolor}jobflags.xml} is stored on the DFS (it is a global file for all jobs). \\
\lstinline[language=XML]$<map_flags_dfs_filepath>$ & \Checkmark & & Path where \texttt{\color{filepathcolor}mapflags.xml} is stored on the DFS for a particular job. \\
\lstinline[language=XML]$<reduce_flags_dfs_filepath>$ & \Checkmark & & Path where \texttt{\color{filepathcolor}reduceflags.xml} is stored on the DFS for a particular job. \\
\lstinline[language=XML]$<job_input_dfs_dir>$ & \Checkmark & & DFS path where the input files for a particular job are stored. \\
\lstinline[language=XML]$<job_output_dfs_dir>$ & \Checkmark & & DFS path where the output files for a particular job are written to. \\
\lstinline[language=XML]$<hadoop_path>$ & \Checkmark & & Root directory of \textsf{Hadoop}'s installation. \\
\lstinline[language=XML]$<hadoop_exec>$ & \Checkmark & & Path for \textsf{Hadoop}'s main executable. \\
\lstinline[language=XML]$<python_exec>$ & \Checkmark & & \textsf{Python} interpreter path. Usually a \lstinline[language=bash]$-u$ flag is used for unbuffered output. \\
\hline
\multicolumn{4}{|c|}{\lstinline[language=XML]$<architecture_settings>$ \rule[-1.5mm]{0mm}{6mm}} \\
\hline
\lstinline[language=XML]$<task_capacity>$ & \Checkmark & & The number of tasks that can simultaneously run on a single node. \\
\lstinline[language=XML]$<map_to_reduce_ratio>$ & \Checkmark & \Checkmark & The number of maps that are input to a single reduce. \\
\lstinline[language=XML]$<reduce_levels>$ & \Checkmark & \Checkmark & [DEPRECATED - NOT USED] \\
\lstinline[language=XML]$<reduce_input_ratio>$ & \Checkmark & \Checkmark & The number of reduces that are input to a single reduce (at level \(>1\)) \\
\lstinline[language=XML]$<map_timeout_period>$ & \Checkmark & \Checkmark & The number of seconds before a map worker is timed out/marked as failed (currently only used by \texttt{\color{codemodulecolor}ma.fs.dfs.dfsflags}). \\
\lstinline[language=XML]$<reduce_timeout_period>$ & \Checkmark & \Checkmark & The number of seconds before a reduce worker is timed out/marked as failed (currently only used by \texttt{\color{codemodulecolor}ma.fs.dfs.dfsflags}). \\
\lstinline[language=XML]$<processrunner_fail_retries>$ & \Checkmark & & The number of times \texttt{\color{codemodulecolor}ma.commons.core.processrunner} can retry to start a worker process (map or reduce). \\
\lstinline[language=XML]$<map_max_attempts>$ & \Checkmark & \Checkmark & The maximum number of attempts the system will make to complete a map task (currently only used by \texttt{\color{codemodulecolor}ma.fs.dfs.dfsflags}). \\
\lstinline[language=XML]$<reduce_max_attempts>$ & \Checkmark & \Checkmark & The maximum number of attempts the system will make to complete a reduce task (currently only used by \texttt{\color{codemodulecolor}ma.fs.dfs.dfsflags}). \\
\lstinline[language=XML]$<dfs_copy_retries>$ & \Checkmark & & The number of attempts \texttt{\color{codemodulecolor}ma.fs.dfs.\_hdfs} will make to copy a file to or from the DFS in case of repeated failures. \\
\lstinline[language=XML]$<distributed_lock_commit_wait>$ & \Checkmark & & The number of seconds the distributed lock module waits to affirm that a resource is locked (See \texttt{\color{codemodulecolor}ma.net.distlock}). \\
\lstinline[language=XML]$<distributed_lock_lease_time>$ & \Checkmark & & The maximum number of seconds a lock can be held by a client (See \texttt{\color{codemodulecolor}ma.net.distlock}). \\
\lstinline[language=XML]$<distributed_lock_relock_wait>$ & \Checkmark & & The number of seconds the distributed lock module waits before requesting a renewal of the lock (See \texttt{\color{codemodulecolor}ma.net.distlock}). Should be less than \lstinline[language=XML]$<distributed_lock_lease_time>$ \\
\lstinline[language=XML]$<distributed_lock_reattempt_wait>$ & \Checkmark & & The number of seconds the distributed lock module waits before re-attempting to lock a resource (See \texttt{\color{codemodulecolor}ma.net.distlock}). \\
\lstinline[language=XML]$<distributed_lock_timerclass_interval>$ & \Checkmark & & The time-keeping granularity (in seconds) used in \texttt{\color{codemodulecolor}ma.utils.timerclass} for the distributed lock module (See \texttt{\color{codemodulecolor}ma.net.distlock}). Note that this value should be less than the lowest common denominator of all the timers in the distributed lock module. \\
\lstinline[language=XML]$<distributed_lock_max_locking_attempts>$ & \Checkmark & & Maximum number of attempts made to lock a resource before declaring failure (See \texttt{\color{codemodulecolor}ma.net.distlock}). \\
\lstinline[language=XML]$<map_to_reduce_schedule_ratio>$ & \Checkmark & \Checkmark & Used in \texttt{\color{codemodulecolor}taskscheduler} to decide how many maps to assign before assigning one reduce. \\
\lstinline[language=XML]$<schedule_multiply_factor>$ & \Checkmark & & Used in \texttt{\color{codemodulecolor}taskscheduler} as a scale-up factor for \lstinline[language=XML]$<map_to_reduce_schedule_ratio>$?? \\
\lstinline[language=XML]$<broadcast_file_req_response_wait>$ & \Checkmark & & Wait time (in number of seconds) for getting a response from broadcast queries in \texttt{\color{codemodulecolor}ma.commons.core.inputfetcher}. \\
\lstinline[language=XML]$<file_broadcasts_request_attempts>$ & \Checkmark & & The number of broadcast query attempts made in \texttt{\color{codemodulecolor}ma.commons.core.inputfetcher} to find all input files for a certain process. \\
\lstinline[language=XML]$<distributed_lock_unlock_wait_attempts>$ & \Checkmark & & Used in \lstinline[language=python]$block_till_locked()$ (in \texttt{\color{codemodulecolor}ma.net.distlock}) as the maximum number of times a resource is checked until its unlocked. \\
\lstinline[language=XML]$<distributed_lock_unlock_wait_time>$ & \Checkmark & & The number of seconds to wait between each of the \lstinline[language=XML]$<distributed_lock_unlock_wait_attempts>$ retries made in \lstinline[language=python]$block_till_locked()$ \\
\lstinline[language=XML]$<no_maps>$ & \Checkmark & & [DEPRECATED - NOT USED] \\
\lstinline[language=XML]$<no_reduces>$ & \Checkmark & \Checkmark & The total number of reduce tasks to be constructed in \textsf{MR}. It is used to schedule an appropriate number of reduce tasks and is also used as a parameter to the partitioning function (see \texttt{\color{codemodulecolor}ma.mapred.core.partitioner}). \\
\lstinline[language=XML]$<map_input_read_chunk>$ & \Checkmark & & [DEPRECATED - NOT USED] \\
\lstinline[language=XML]$<nodetracker_heartbeat_interval>$ & \Checkmark & & Number of seconds before the \texttt{\color{codemodulecolor}nodetracker} sends a status update to the \texttt{\color{codemodulecolor}ma.mrimprov.core.hdfsinteractor} (only used in \textsf{MR+} without Master). \\
\lstinline[language=XML]$<nodetracker_master_heartbeat_interval>$ & \Checkmark & & Number of seconds before the \texttt{\color{codemodulecolor}nodetracker} calls \lstinline[language=python]$heartbeatFromNT()$ for a status update to the \texttt{\color{codemodulecolor}master} (only used in \textsf{MR+} with Master and \textsf{MR}). \\
\lstinline[language=XML]$<nodetracker_task_expiry_interval>$ & \Checkmark & & The number of seconds offered by the \texttt{\color{codemodulecolor}nodetracker} to the task runners for finishing a particular process. Usually should be greater than the estimates given in \lstinline[language=XML]$<map_timeout_period>$ and \lstinline[language=XML]$<reduce_timeout_period>$ \\
\lstinline[language=XML]$<task_ping_interval>$ & \Checkmark & & The time in seconds between each health (is alive) check made for a running map or reduce process. \\
\lstinline[language=XML]$<check_task_expiry_interval>$ & \Checkmark & & [Not being used currently] - used to expire unresponsive tasks. \\
\lstinline[language=XML]$<check_on_timers_interval>$ & \Checkmark & & The time-keeping granularity (in seconds) used in \texttt{\color{codemodulecolor}nodetracker} to keep track of all time related functions. Note that this value should be less than the lowest common denominator of all the timers in the \texttt{\color{codemodulecolor}nodetracker}. \\
\lstinline[language=XML]$<child_process_niceness>$ & \Checkmark & & This is the \href{http://www.linux.com/archive/feed/58638}{niceness} value at which each map/reduce task runner process is run. \\
\lstinline[language=XML]$<stats_interval>$ & \Checkmark & & The time (in seconds) between noting statistics (see \texttt{\color{codemodulecolor}ma.stats.statster}). \\
\lstinline[language=XML]$<stats_buffer_size>$ & \Checkmark & & The buffer length used in \texttt{\color{codemodulecolor}ma.stats.statster}.\\
\lstinline[language=XML]$<stats_file_size_divider>$ & \Checkmark & & The divider used for storing all storage units used in \texttt{\color{codemodulecolor}ma.stats.statster}; currently it is set to store in KB. \\
\lstinline[language=XML]$<reduce_secs_from_inp_scaleup>$ & \Checkmark & \Checkmark & The number of seconds to waste per value in a reduce function (currently only used in \texttt{\color{filepathcolor}MRImprov/src/ma/tests/reduceavg\_sleep.py}). \\
\lstinline[language=XML]$<mrplus_do_brickwall>$ & \Checkmark & \Checkmark & If set to 1, enforces a brick-wall when running \textsf{MR+} on a particular job i.e. forces the system to complete all maps before the reduce phase can begin. \\
\lstinline[language=XML]$<mrplus_order_by_structs>$ & \Checkmark & \Checkmark & If set to 1, forces the scheduler to process maps and reduces of a particular job in order of structures exactly as they appeared in \texttt{\color{filepathcolor}mapflags.xml}. \\
\lstinline[language=XML]$<mrplus_estimation_ignore>$ & \Checkmark & \Checkmark & This is a comma-separated list of numbers, e.g. 2,1,2,4. If the first number is 0, this option is not activated. If the first value is non-zero, it indicates the maximum level reduces have to reach for their related structure to be marked complete prematurely, i.e. such a reduce will stop any more tasks being scheduled for that structure. The numbers following the first one are IDs of structures which cannotot be marked complete prematurely; hence a single non-zero number, let's say 2, will allow the system to mark every structure complete as soon as a reduce of level 2 for that structure is completed (see \lstinline[language=python]$mark_reduce_complete()$ in \texttt{\color{codemodulecolor}ma.mrplus\_master.core.master}). Note that this option is currently only available for \textsf{MR+} with Master. \\
\lstinline[language=XML]$<large_levels_schedule_bias>$ & \Checkmark & \Checkmark & If this option is set to 1, it forces the system to prioritize reduces by their level number: the larger the level (i.e. the deeper in the reduce tree), the more likely it is to be scheduled first (see \lstinline[language=python]$choose_reduce_tasks()$ in \texttt{\color{codemodulecolor}ma.mrplus\_master.core.master}). Note that this option is currently only available for \textsf{MR+} with Master. \\
\hline
\multicolumn{4}{|c|}{\lstinline[language=XML]$<input_settings>$ \rule[-1.5mm]{0mm}{6mm}} \\
\hline
\lstinline[language=XML]$<map_input_chunk_size>$ & & \Checkmark & \\
\lstinline[language=XML]$<reduce_input_read_chunk>$ & \Checkmark & \Checkmark & \\
\lstinline[language=XML]$<reduce_input_compute_chunk>$ & \Checkmark & \Checkmark & \\
\hline
\multicolumn{4}{|c|}{\lstinline[language=XML]$<computation_settings>$ \rule[-1.5mm]{0mm}{6mm}} \\
\hline
\lstinline[language=XML]$<map_module>$ & & \Checkmark & The module name to use for running the map function. This usually will be a filename (without the ext.) from \texttt{\color{filepathcolor}MRImprov/src/ma/codes/}. \\
\lstinline[language=XML]$<map_class>$ & & \Checkmark & The class-name of the Mapper in the module given by \lstinline[language=XML]$<map_module>$. Will usually be `Map'. \\
\lstinline[language=XML]$<reduce_module>$ & & \Checkmark & The module name to use for running the reduce function. This usually will be a filename (without the ext.) from \texttt{\color{filepathcolor}MRImprov/src/ma/codes/}. \\
\lstinline[language=XML]$<reduce_class>$ & & \Checkmark & The class-name of the Reducer in the module given by \lstinline[language=XML]$<reduce_module>$. Will usually be `Reduce'. \\
\lstinline[language=XML]$<change_estimation_reduce_margin>$ & \Checkmark & \Checkmark & \\
\lstinline[language=XML]$<min_inputs_to_declare_not_changed>$ & \Checkmark & \Checkmark & \\
\lstinline[language=XML]$<threshold_value>$ & & \Checkmark & \\
\lstinline[language=XML]$<mrplus_map_task_runner>$ & \Checkmark & \Checkmark & The map task runner to use for running the map process (invoked by \texttt{\color{codemodulecolor}ma.commons.core.processrunner}) in \textsf{MR+}. The current options are \texttt{\color{codemodulecolor}mrplusmaptaskrunner} and \texttt{\color{codemodulecolor}mrplusmaptaskrunner\_heavydata}. \\
\lstinline[language=XML]$<mr_map_task_runner>$ & \Checkmark & \Checkmark & The map task runner to use for running the map process (invoked by \texttt{\color{codemodulecolor}ma.commons.core.processrunner}) in \textsf{MR}. The current options are \texttt{\color{codemodulecolor}mrmaptaskrunner} and \texttt{\color{codemodulecolor}mrmaptaskrunner\_heavydata}. \\
\lstinline[language=XML]$<mrplus_reduce_task_runner>$ & \Checkmark & \Checkmark & The reduce task runner to use for running the reduce process (invoked by \texttt{\color{codemodulecolor}ma.commons.core.processrunner}) in \textsf{MR+}. The current options are \texttt{\color{codemodulecolor}mrplusreducetaskrunner} and \texttt{\color{codemodulecolor}mrplusreducetaskrunner\_heavydata}. \\
\lstinline[language=XML]$<mr_reduce_task_runner>$ & \Checkmark & \Checkmark & The reduce task runner to use for running the reduce process (invoked by \texttt{\color{codemodulecolor}ma.commons.core.processrunner}) in \textsf{MR}. The current options are \texttt{\color{codemodulecolor}mrreducetaskrunner} and \texttt{\color{codemodulecolor}mrreducetaskrunner\_heavydata}. \\
\hline
\multicolumn{4}{|c|}{\lstinline[language=XML]$<network_settings>$ \rule[-1.5mm]{0mm}{6mm}} \\
\hline
\lstinline[language=XML]$<communicating_interface>$ & \Checkmark & & The OS network interface name used for all communications. \\
\lstinline[language=XML]$<hdfs_host>$ & \Checkmark & & Host IP used by \texttt{\color{codemodulecolor}ma.fs.dfs.\_hdfs} to connect to the \textsf{HDFS} \\
\lstinline[language=XML]$<hdfs_port>$ & \Checkmark & & Host port used by \texttt{\color{codemodulecolor}ma.fs.dfs.\_hdfs} to connect to the \textsf{HDFS} \\
\lstinline[language=XML]$<binding_address>$ & \Checkmark & & Used by \texttt{\color{codemodulecolor}BroadcastListenServer} (in \texttt{\color{codemodulecolor}ma.net.brdcstsv}) for catching any broadcast packets. Binding to an empty address will bind to all interfaces. \\
\lstinline[language=XML]$<broadcast_address>$ & \Checkmark & & The broadcast address call-sign to send broadcast packets (also used in distributed lock module). \\
\lstinline[language=XML]$<broadcast_port>$ & \Checkmark & & The broadcast port to send broadcast packets (also used in distributed lock module). \\
\lstinline[language=XML]$<broadcast_packet_buffer>$ & \Checkmark & & The buffer size (in bytes) in each \lstinline[language=python]$__sock.recvfrom()$ call made in \texttt{\color{codemodulecolor}BroadcastListenServer} (in \texttt{\color{codemodulecolor}ma.net.brdcstsv}) \\
\lstinline[language=XML]$<tcp_output_srv_bind_address>$ & \Checkmark & & The TCP bind address used in \texttt{\color{codemodulecolor}OutputServer} (in \texttt{\color{codemodulecolor}ma.net.brdcstsv}) for shuffling files. \\
\lstinline[language=XML]$<net_output_srv_bind_port>$ & \Checkmark & & The TCP bind port used in \texttt{\color{codemodulecolor}OutputServer} (in \texttt{\color{codemodulecolor}ma.net.brdcstsv}) for shuffling files. \\
\lstinline[language=XML]$<net_file_transfer_buffer>$ & \Checkmark & & The buffer size used while transferring files on TCP between the \texttt{\color{codemodulecolor}ma.commons.core.outputserver} and \texttt{\color{codemodulecolor}inputfetcher}/\texttt{\color{codemodulecolor}inputfetchernoquery} \\
\lstinline[language=XML]$<net_command_buffer>$ & \Checkmark & & Decides the number of bytes per \lstinline[language=python]$request.recv()$ call for receiving a command in \texttt{\color{codemodulecolor}ThreadedTCPServer} (in \texttt{\color{codemodulecolor}ma.net.tcpsv}). \\
\lstinline[language=XML]$<mapred_master_ip>$ & \Checkmark & & IP considered to be of the Master node to establish the RPC connection from the \texttt{\color{codemodulecolor}nodetracker} to \texttt{\color{codemodulecolor}master}. \\
\lstinline[language=XML]$<mapred_master_port>$ & \Checkmark & & Port used to establish the RPC connection from the \texttt{\color{codemodulecolor}nodetracker} to \texttt{\color{codemodulecolor}master}. \\
\hline
\end{longtable}
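The two checkmark columns above suggest a simple lookup rule for elements that appear in both files: a job-specific value in \texttt{\color{filepathcolor}jobconf.xml}, when present, takes precedence over the cluster-wide value in \texttt{\color{filepathcolor}siteconf.xml}. A minimal sketch of that rule follows; the flat XML layout and the function name are assumptions for illustration, not the system's actual parsing code.

```python
import xml.etree.ElementTree as ET

def conf_lookup(element, siteconf_xml, jobconf_xml=None):
    """Return the text of `element`, preferring jobconf over siteconf."""
    if jobconf_xml is not None:
        # job-specific setting overrides the site-wide one
        hit = ET.fromstring(jobconf_xml).find(f".//{element}")
        if hit is not None and hit.text is not None:
            return hit.text
    hit = ET.fromstring(siteconf_xml).find(f".//{element}")
    return hit.text if hit is not None else None
```

For example, with \lstinline[language=XML]$<no_reduces>$ present in both files, the \texttt{\color{filepathcolor}jobconf.xml} value wins, while \lstinline[language=XML]$<task_capacity>$ (a \texttt{\color{filepathcolor}siteconf.xml}-only element) falls through to the site value.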

\end{document}
