\documentclass[10pt,twocolumn]{article}

\usepackage{
  color
  , algorithm
  , algorithmic
  , fullpage
  , graphicx
  , listings
  , subfig
}

\lstnewenvironment{haskell}{\lstset{language=Haskell,basicstyle=\small}}{}
\lstnewenvironment{java}{\lstset{language=Java,basicstyle=\small}}{}
\lstnewenvironment{erlang}{\lstset{language=erlang,basicstyle=\small}}{}
\lstnewenvironment{C}{\lstset{language=C,basicstyle=\small}}{}


\title{
  \textit{CFar}: a Categorically Fault Tolerant Abstraction of Map and Reduce\\
}

\author{
  Haoyun Feng, Badi' Abdul-Wahid
}

\date{
  December 14, 2009
}

\begin{document}
\maketitle

\abstract{We present a framework for exploiting multicore and
  distributed systems that uses redundant function application for
  reliability, built around variants of the $map$ and $reduce$
  functions. The framework is based on the idea of mobile functions
  and draws inspiration from functional programming and category
  theory: first-class functions, functors, and monoids. We evaluate
  the expressivity and scalability of CFar using a simple molecular
  dynamics simulation.}

\section{Introduction} \label{sec:intro}

Access to ever-increasing amounts of data from the internet and from
scientific experiments raises the demand for large scale data
processing and analysis. This is a common issue in many research
fields, such as biological complexity, image processing, and natural
language processing. Cloud computers address large scale computations
by uniting large numbers of disparate computing resources and
allocating them to jobs. Such systems are widely available for
scientific research, e.g. the Notre Dame CRC (Center for Research
Computing) computing grids and Folding@Home \cite{folding@home}.

Most users are unwilling to deal with the technical details of
distributed computing; all they want is the result of a complex
computation as fast as possible. Abstractions are therefore built on
top of cloud computers so that users can easily express their problem
and execute it efficiently on the available resources. Many cloud
computing operations are hidden from users, such as resource
allocation and fault tolerance. This not only provides convenience,
but also reduces the risk of a single point of failure bringing down
the whole system. Abstraction is thus a crucial component of cloud
computing, and it directly affects the system's performance.

The trend in current cloud computing systems is to combine large
numbers of small, dynamic resources. Such systems use flexible
protocols so that idle resources can be fully utilized, but this
places additional requirements on the abstraction. For example,
Folding@Home is a distributed system for complex biological
computation that uses hundreds of thousands of volunteered clients as
part of its computing resources, so its set of nodes changes very
frequently over time. In such a system, the scalability of the
abstraction is essential, and fault tolerance is crucial because of
the lack of control over the dynamic resources.

Our work focuses on building two commonly used abstractions, map and
reduce, with the properties of scalability and error tolerance. The
framework is divided into two functional parts: map applies a
function to every element of an input sequence and returns a new
sequence of results; reduce uses an associative function to summarize
a sequence into a single value. The data are processed in parallel,
so for large inputs a great speedup can be achieved by running this
framework on a large scale distributed system. Many applications can
be expressed in this framework, for instance counting the frequency
of specific words in documents, or detecting abnormal states in a
sequence of protein conformations.

We implement this framework as a Java library. It supports large
scale distributed computing on Notre Dame CRC nodes, which comprise
hundreds of CPU cores. No additional software is required by this
library as long as a Java Virtual Machine (JVM) is available, which
makes it notably convenient to use large numbers of nodes. All the
functions and data in our library are wrapped as serializable
objects, so that they can be distributed over the network as messages
to actors. This avoids consuming resources to store intermediate
data. When executing jobs on nodes, CFar creates an Actor in charge
of receiving, executing, and sending functions and data, so every
node can act as a manager. Due to this strategy, recursive map and
reduce, which most existing tools do not implement, are naturally
available in CFar. We also provide an interface for the user to
control the error tolerance: to obtain more reliable results, users
can request job replications and define rules to select the most
accurate result from the replications.

We demonstrate the usage and performance of our map reduce library
through a sample application in molecular dynamics: generating a
large number of trajectories of molecular motion and analyzing
them. The error tolerance property is especially useful in this
example because it contains a stochastic process that can fail to
provide accurate results. For evaluation purposes, we inject
additional noise into the system and examine the accuracy of the
results. When a reasonable number of replications is used, high
accuracy can be achieved even in the presence of noise.

We review related work in section \ref{sec:related} and outline the
advantages of our library compared to existing ones. We provide some
background on notation and on ideas borrowed from functional
programming and category theory in section \ref{sec:background}. The
reliability of replications is discussed in section
\ref{sec:reliability}, the architecture in section
\ref{sec:architecture}, and the implementation in section
\ref{sec:implementation}. We provide an example of the type of usage
we expect in section \ref{sec:example}, and discuss its evaluation in
section \ref{sec:evaluation}. Finally, we conclude in section
\ref{sec:conclusion}.


\section{Related Work} \label{sec:related}

MapReduce was first introduced by Google to support distributed
computing on clusters \cite{MapReduce}. It targets large scale data
processing on clusters of hundreds or thousands of commodity PCs. As
a result, machine failures are assumed to be common and fault
tolerance is a crucial concern. Google's solution is to contact the
machines periodically: if a machine fails to respond for a certain
time, it is marked as failed and its tasks are rescheduled to other
idle machines. The major application of Google's MapReduce is
statistical analysis of large document collections, for which this
fault tolerance strategy is especially suitable: the input data is
large while the map and reduce functions are relatively
simple. However, some scientific computations include many parallel
jobs, each requiring hours or days to complete, and simply restarting
a failed job is then very time consuming. Google's MapReduce also
neglects the problem of untrustworthy results: instead of failing
outright, some machines may return incorrect results due to security
issues or stochastic factors.

Fault tolerance is discussed in many other areas. ReInForM
\cite{ReInForM} introduces a strategy for reliable information
forwarding in a sensor network by sending replicas of the data along
multiple paths, and derives a formula relating the number of replicas
to the required reliability. It turns out that once the reliability
reaches a certain value, additional replicas do not improve it much,
so a reasonable number of replicas can be chosen from this
relationship. This inspires us to improve the confidence of results
by replicating jobs where appropriate. We also provide users a way to
filter out potentially untrustworthy results, which handles more
types of faults than machine failure alone.

Google's MapReduce framework is also implemented by Hadoop
\cite{hadoop}, an open-source Java project widely used by
organizations such as Facebook and Yahoo! to run data-intensive
distributed applications. A limitation of Hadoop is that it requires
some administrative control over the machines in the distributed
system, which makes it difficult to use dynamically: for example, we
have 32 nodes with the Hadoop software installed on which Hadoop map
reduce jobs can be executed. Hadoop manages input and output as files
and takes an executable or jar file, which complicates the process of
executing a job. While Hadoop achieves impressive scalability and
fault tolerance (due in part to its distributed file system), one
limitation is that maps and reductions cannot be called from within a
mapping or reduction.

We implement CFar as a Java library. Data and functions are
distributed over the network as serializable objects. This places
fewer requirements on the nodes of the distributed system and is
familiar to users who know Java.

We develop the library architecture using concepts inspired by
category theory (hence the ``categorically'' in ``CFar''). In
general, category theory interprets objects and their relationships
through layers of abstract concepts and structure-preserving
functions. It was originally developed to organize mathematical
knowledge and provides much of the basis for the Haskell programming
language. An advantage of using categorical concepts is that they
provide a formal context for structuring and analysing
applications. The concepts of functors and monoids are applied in our
architecture and provide desirable properties.


\section{Background} \label{sec:background}

\subsection{Notation and Syntax} \label{sec:syntax}

Many of the ideas used are not easily represented using Java syntax,
so we borrow that of the functional language Haskell. Here we briefly
describe the syntax we use to mitigate potential confusion from its
appearance later in this paper.

\subsubsection{Type annotations}
Types are denoted with the double colon symbol ($::$). Type
annotations allow for explicit types, type variables, and type
constraints. For example, one could read

\begin{haskell}
f :: Int -> Char
\end{haskell}

as ``f is a function from $Int$ to $Char$''. Thus $f$ is a function
accepting a single integer parameter and returning a character. This
illustrates the use of concrete types in type annotations.

A generalization of $f$, call it $f^\prime$, could have the following type:

\begin{haskell}
f' :: a -> b
\end{haskell}

$f^\prime$ being a function from a value of type $a$ to a type $b$,
where $a$ and $b$ are type variables. This means that $f^\prime$ is a
polymorphic function. One could call $f^\prime$ with an $Int$, a
$Char$, a $String$, etc. This illustrates the use of type variables.

Finally, type constraints can be illustrated with $f^{\prime\prime}$:

\begin{haskell}
f'' :: Num a => a -> a
class Num a where
    (+) :: a -> a -> a
    (-) :: a -> a -> a
    -- etc
\end{haskell}

where $f^{\prime\prime}$ is a function from $a$ to $a$ and the type
variable $a$ is constrained to being a number: meaning that it
supports the operations associated with the class (akin to interfaces
in object-oriented languages). Thus passing an $Int$ or a $Double$ to
$f^{\prime\prime}$ would be valid, but not a $String$.


\subsubsection{Function application}
Functions are applied to their arguments using whitespace. For example, given

\begin{haskell}
foo :: Int -> Int -> Int
foo a b = a + b
foo' :: (Int,Int) -> Int
foo' (a,b) = a + b
\end{haskell}

both $foo$ and $foo^\prime$ perform the same operation, the addition
of two values. However, $foo$ is a function that may accept two
parameters whereas $foo^\prime$ only accepts a single parameter: a
2-tuple of $Int$s. The $foo$ function may be applied as:

\begin{haskell}
forty_two = foo 40 2
\end{haskell}


\subsubsection{Currying}
The arrow symbols in the type annotations indicate that a parameter
may be curried, which we discuss in further depth later (section
\ref{sec:fff}). For example the function

\begin{haskell}
foo :: a -> b -> c
\end{haskell}

may have its first and second parameters curried.

\subsection{Concepts}
Here we discuss some of the ideas we use: higher-order functions
(\ref{sec:fff}), functors and monoids from category theory
(\ref{sec:cat-theory}), and Actors (\ref{sec:actors}). We use the
syntax as illustrated in Section \ref{sec:syntax}. A user of our
library must be familiar with higher-order functions, currying, and
actors.

\subsubsection{Higher-order functions} \label{sec:fff} A first-class
function is akin to a function pointer in C and C++ and can be
composed, transformed, and parameterized in much the same way as
numeric types or objects. For example, the $apply$ function can be
defined as a polymorphic function over the type variable $a$
accepting three parameters: a binary function and two values:

\begin{haskell}
apply :: (a -> a -> a) -> a -> a -> a
apply f x y = f x y
\end{haskell}

This allows a generalization of the addition and subtraction functions
where $apply$ is curried with the function to apply:

\begin{haskell}
add, sub :: Num a => a -> a -> a
add = apply (+)
sub = apply (-)
\end{haskell}

Thus $add \; 40 \; 2$ returns $42$ and $sub \; 44 \; 2$ returns
$42$. The definitions of $apply$, $add$, and $sub$ demonstrate a key
technique that we will use: currying.

Currying is the partial application of a function $f$ to $[1 \dots
N-1]$ of its parameters (where $N$ is the number of parameters $f$
may accept). This creates a new, specialized function and is akin to
the Strategy design pattern in object-oriented design. Thus $add$ is
$apply$ specialized with the addition operator, while $sub$ is
$apply$ specialized with subtraction.
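Since CFar itself is a Java library, the $apply$ example above can
also be sketched in plain Java. The following uses only standard
library types ($java.util.function$), not the CFar interfaces
described later; it is an illustration, not the library API.

\begin{java}
import java.util.function.BinaryOperator;

// A plain-Java sketch of the apply example above, using standard
// library types rather than the CFar FN interfaces.
public class ApplyDemo {
    // apply a binary function to two values of the same type
    static <A> A apply(BinaryOperator<A> f, A x, A y) {
        return f.apply(x, y);
    }
    public static void main(String[] args) {
        System.out.println(apply((a, b) -> a + b, 40, 2)); // 42
        System.out.println(apply((a, b) -> a - b, 44, 2)); // 42
    }
}
\end{java}

Currying $apply$ with the operator, as $add$ and $sub$ do above,
corresponds in Java to returning a specialized object that captures
the operator.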


\subsubsection{Category theory} \label{sec:cat-theory} 

To paraphrase John Hughes \cite{hughes-arrows}: as category theory
attempts to be a ``theory of everything'' it ends up being extremely
abstract where everything is a category if looked at long enough, so
to say that something is a category is rather unsatisfying. On the one
hand, this high level of abstraction can be confusing and require a
long time to master. On the other hand, as Hughes later points out,
this very abstractness is useful for programmers attempting to
decouple implementation details from the presented interface.

We make use of just two of the ideas from category theory: functors
and monoids.

\paragraph{Functors\\}
Formally, a functor is a structure-preserving mapping between
categories, but it can be thought of as a type representing a
computational context or container, together with a transformation
that changes the values within the context but not the context
itself. The definition used is

\begin{haskell}
class Functor f where
    fmap :: (a -> b) -> f a -> f b
\end{haskell}

Given a functor $f$ containing values of type $a$, the $fmap$ function
applies the transformation $a \rightarrow b$ yielding a new functor of
the same context $f$ with values of type $b$. An example of a functor
is the generic list: the container is the list and $fmap$ is simply
the $map$ function (recall that the type of map is $map :: (a
\rightarrow b) \rightarrow [a] \rightarrow [b]$).


Furthermore, functors must obey two laws: identity and composition:

\begin{eqnarray}
  fmap \; id &=& id  \label{functor:id} \\
  fmap \; (f \circ g) &=& (fmap \; f) \circ (fmap \; g) \label{functor:composition}
\end{eqnarray}

The identity law (equation \ref{functor:id}) states that $fmap$ing
the identity function over the functor does not alter it in any way,
while the composition law (equation \ref{functor:composition}) states
that subsequent $fmap$s may be fused into a single composed function,
requiring only a single pass instead of multiple.
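The practical content of the composition law can be checked even
with ordinary Java streams, unrelated to the CFar internals: two
successive maps and one map of the fused function produce the same
result, but the fused version traverses the data only once.

\begin{java}
import java.util.List;
import java.util.function.Function;
import java.util.stream.Collectors;

// Sketch of the composition law with ordinary Java streams:
// mapping g then f equals a single map of the fused f . g.
public class FusionDemo {
    static boolean lawHolds(List<Integer> xs,
                            Function<Integer, Integer> f,
                            Function<Integer, Integer> g) {
        List<Integer> twoPasses = xs.stream()
            .map(g).map(f)                 // (fmap f) . (fmap g)
            .collect(Collectors.toList());
        List<Integer> onePass = xs.stream()
            .map(f.compose(g))             // fmap (f . g)
            .collect(Collectors.toList());
        return twoPasses.equals(onePass);
    }
    public static void main(String[] args) {
        System.out.println(
            lawHolds(List.of(1, 2, 3), x -> x * 2, x -> x + 1)); // true
    }
}
\end{java}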


\paragraph{Monoids\\}
A monoid is a set of values with an identity element and an
associative function for combining elements:

\begin{haskell}
class Monoid a where
    identity :: a
    combine :: a -> a -> a
\end{haskell}

This allows one to write the reduction of a list of monoids as follows:

\begin{haskell}
mreduce :: Monoid a => [a] -> a
mreduce seq = fold combine identity seq
    where
    fold :: (a -> b -> a) -> a -> [b] -> a
\end{haskell}

The implementation of $fold$ is unimportant here, suffice it to say that it
is a higher level abstraction of recursion over a sequence.


\subsubsection{Actors} \label{sec:actors}

Actors are an abstraction for concurrency: an actor can be thought of
as an entity that can send and receive messages, make decisions based
on the messages, and create new actors. The following pseudocode
illustrates an actor-based model of two processes communicating:

\begin{C}
  actor ping:
    msg <- receive message
    if msg is ``Start'' then
      pong <- make new pong actor
      send ``Ping'' to pong
    else if msg is ``Stop'' then
      send ``Stop''  to pong
      die
    else send ``Ping'' to msg origin
    loop

  actor pong:
    msg <- receive message
    if msg is ``Stop'' then die
    else send ``Pong'' to msg origin
    loop

  main program:
    ping <- make new ping actor
    send ``Start'' to ping
    wait
    send ``Stop'' to ping
\end{C}


\section{Reliability from Replications} \label{sec:reliability}

Intuitively, executing multiple replications of a job increases the
reliability of the result obtained from a program running in an
unstable environment. This is feasible for jobs on a distributed
system because multiple copies can run in parallel without additional
time cost, although other resources such as RAM are still required
for this fault tolerance strategy.

The number of replications $N_s$ can be computed exactly from the
required reliability $r$ if the expected error rate $e$ of the system
is known. Applying elementary probability theory, the calculation is
as follows.

\begin{equation}
N_s=\frac{log(1-r)}{log(e)}
\label{fun:rel}
\end{equation}

Based on this formula, additional replications have a notable effect
only while the reliability is still low; consequently, a huge number
of replications is unnecessary. For example, if the error rate of the
system is known to be 0.5, the relationship between the number of
replications and the reliability is shown in Figure \ref{fig:rel}.

\begin{figure}[h!] \centering
  \includegraphics[width=0.5\textwidth]{RelvsCopy.png}
  \caption{\footnotesize{The Y-axis is the required reliability of
      the program while the X-axis is the number of replications
      needed to achieve that reliability on a system with error rate
      0.5.}}
  \label{fig:rel}
\end{figure}

From Figure \ref{fig:rel} we can conclude that on a system with error
rate 0.5, the maximum number of replications required is 7, and 5
replications might be good enough. Unfortunately, the error rate of
cluster computers is usually unavailable, so the exact number of
replications cannot be calculated. Nevertheless, equation
\ref{fun:rel} provides a rough idea of how to decide on a reasonable
number of replications.
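As a concrete check of equation \ref{fun:rel}, the following
standalone sketch (not part of the CFar API) computes the number of
replications for a few target reliabilities at error rate 0.5:

\begin{java}
// Worked example of the formula N_s = ceil(log(1-r) / log(e)):
// replications needed for target reliability r at error rate e.
// A standalone sketch, not part of the CFar API.
public class Replications {
    static int replications(double r, double e) {
        return (int) Math.ceil(Math.log(1 - r) / Math.log(e));
    }
    public static void main(String[] args) {
        // at e = 0.5: r = 0.95 needs 5 copies, r = 0.99 needs 7
        System.out.println(replications(0.95, 0.5)); // 5
        System.out.println(replications(0.99, 0.5)); // 7
    }
}
\end{java}

These values agree with Figure \ref{fig:rel}: at error rate 0.5,
reliability gains flatten well before 7 replications.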


\section{Architecture} \label{sec:architecture} 

In order to use CFar a programmer must have some understanding of
higher-order functions (\ref{sec:fff}) and actors
(\ref{sec:actors}). Some understanding of functors and monoids will be
helpful (\ref{sec:cat-theory}).


\subsection{Interface} \label{sec:interface}
The interface we attempt to provide is the following:


Using some type aliases for clarity:
\begin{haskell}
Reps = [Int] -- the number of replications
Choice = [a] -> a -- the choice function
Change = a -> b -- for 'map'
Reduction = a -> a -> a -- for 'reduce'
\end{haskell}

we can specify the general form of the parallel and distributed map
and reduce functions:
\begin{haskell}
pmap :: Reps -> Choice -> Change
     -> [a] -> [b]
preduce :: Reps -> Choice -> Reduction
         -> [a] -> a
dmap  :: Reps -> Choice -> [Node] -> Change
      -> [a] -> [b]
dreduce  :: Reps -> Choice -> [Node] -> Reduction
         -> [a] -> a
\end{haskell}

We then provide similar functions with no replications:
\begin{haskell}
pmap' :: Change -> [a] -> [b]
preduce' :: Reduction -> [a] -> a
dmap' :: [Node] -> Change -> [a] ->[b]
dreduce' :: [Node] -> Reduction -> [a] -> a
\end{haskell}

whose implementations curry the more general forms with a list of
ones (indicating no redundancy) and with the choice function that
selects the first (and only) element of the list of redundant
results:
\begin{haskell}
pmap' = pmap (repeat 1) head
preduce' = preduce (repeat 1) head
dmap' = dmap (repeat 1) head
dreduce' = dreduce (repeat 1) head
\end{haskell}


\subsection{Mobile functions}

The key to our distributed map and reduce is the mobile
function. Figure \ref{fig:mobile-fun} illustrates the basic idea. The
mobility of functions travelling between nodes arises naturally
(conceptually at least) from the idea that functions are first
class. The function is created on the initiator, perhaps through
currying, and is sent to the actor which can curry further arguments
and send it elsewhere, or evaluate it directly.

\begin{figure} \centering
  \includegraphics[width=0.4\textwidth]{mobile-fun.png}
  \caption{\footnotesize{A function is potentially curried with its
      argument(s) then sent to be evaluated on the actor, which
      returns the result.}}
  \label{fig:mobile-fun}
\end{figure}

We use this mechanism to propagate the node locations when performing
the distributed reduction: we curry a function such that, when
evaluated, it connects to the next nodes. Thus the actor sees the
type of the function as $getNodes :: [ConnectedNode]$. This hides the
implementation details and enables any function to be used to
generate the connections.
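The essence of a mobile function can be sketched in plain Java: a
curried, serializable function object can be written to bytes,
shipped to another JVM, and evaluated there. The $F1$/$F0$
interfaces below mirror the $FN$ interfaces of section
\ref{sec:implementation}, but the $curry$ helper and its signature
are our own illustrative assumptions, not the exact CFar code.

\begin{java}
import java.io.*;

// Sketch: a serializable curried function can travel as bytes.
// F1/F0 mirror the FN interfaces described later; curry is an
// illustrative helper, not the exact CFar signature.
interface F1<A, R> extends Serializable { R f(A a); }
interface F0<R> extends Serializable { R f(); }

public class MobileFun {
    static <A, R> F0<R> curry(F1<A, R> fn, A a) {
        return () -> fn.f(a); // the argument travels with the function
    }
    static int demo() throws Exception {
        F1<Integer, Integer> dbl = x -> x * 2;
        F0<Integer> job = curry(dbl, 21);
        // serialize, as an initiator would before sending the job
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        new ObjectOutputStream(buf).writeObject(job);
        // deserialize and evaluate, as the remote actor would
        @SuppressWarnings("unchecked")
        F0<Integer> received = (F0<Integer>) new ObjectInputStream(
            new ByteArrayInputStream(buf.toByteArray())).readObject();
        return received.f();
    }
    public static void main(String[] args) throws Exception {
        System.out.println(demo()); // 42
    }
}
\end{java}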

\begin{figure} \centering
 \includegraphics[width=0.5\textwidth]{distreduce-arch.png}
 \caption{\footnotesize{Sample tracing of communication and data flow
     in a distributed reduce call. The amount of data is indicated by
     the size of the diamond on the edges.}}
  \label{fig:dreduce-arch}
\end{figure}


\subsection{Distributed Reduce and Map}
Figure \ref{fig:dreduce-arch} illustrates the dataflow and
communication for an implementation of the distributed reduction. In
this case the node locations are known to the initial caller of
$dreduce$. On evaluation, the curried connection generator, the
choice function, and the reduction function are sent, along with the
sequence to be reduced, to the first node. Upon receiving the
message, the actor selects the elements to be reduced and calls
$dreduce$ on the remaining data. This recursion continues until the
last actor has only two or three elements to reduce. The redundant
evaluations are done remotely with one instance local, after which
the choice function is applied and the result is propagated back to
the initial caller of the reduction. This has the effect that the
communication overhead for the initiator is relatively small: it
sends all the data only once.

The design of the $dmap$ function is very similar. However, the
communication overhead for the initiator is significantly larger, as
the function is not defined recursively: the initiator sends each
actor a message containing the mapper function and the data to be
mapped. As soon as one message is sent, the next element of the
sequence is dispatched with the mapper. Once all elements have been
sent to be worked on, the initiator waits for the results to come in.
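The scatter/gather pattern just described can be sketched locally
with a thread pool, using plain $java.util.concurrent$ in place of
the CFar actor machinery: submit one task per element, then collect
the results in order.

\begin{java}
import java.util.*;
import java.util.concurrent.*;
import java.util.function.Function;

// Local sketch of the scatter/gather pattern behind dmap, with a
// thread pool standing in for remote actors.
public class ScatterGather {
    static <A, B> List<B> pmap(Function<A, B> f, List<A> xs)
            throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<Future<B>> pending = new ArrayList<>();
        for (A x : xs)                 // scatter: one task per element
            pending.add(pool.submit(() -> f.apply(x)));
        List<B> results = new ArrayList<>();
        for (Future<B> fu : pending)   // gather: wait for each result
            results.add(fu.get());
        pool.shutdown();
        return results;
    }
    public static void main(String[] args) throws Exception {
        System.out.println(pmap(x -> x * x,
                                Arrays.asList(1, 2, 3))); // [1, 4, 9]
    }
}
\end{java}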


\subsection{Recursive distribution} \label{ssec:recur}
The design of $dmap$ and $dreduce$ using higher-order functions
allows calls to $dmap$ and $dreduce$ from within the mapper and
reducer functions.

\begin{haskell}
topMapping val = do
    let innerMapping x = x + gaussianNoise
        vals = [1..20]
    results <- dmap' getNodes innerMapping vals
    return (average results)

main = do
    return (dmap' getNodes topMapping [1..100])
\end{haskell}

In this example the $topMapping$ function would be executed on remote
nodes, which in turn would distribute the application of the
$innerMapping$ function.

\subsection{Parallel Reduce and Map}
Parallelizing the reduction of a sequence assumes monoid-like
elements. This is shown by the type of the reduction function: $a
\rightarrow a \rightarrow a$. This transforms the reduction to a
mapping:

\begin{haskell}
preduce'' :: (a -> a -> a) -> [a] -> a
preduce'' _ [res] = res
preduce'' f seq = preduce'' f seq'
    where
    seq' = pmap' apply (mkAssocs seq)
    apply (a,b) = f a b

-- a helper that pairs adjacent elements
mkAssocs :: [a] -> [(a,a)]
\end{haskell}

The $preduce^{\prime\prime}$ function illustrates the core idea
behind the parallel reduction: if the sequence has only one element,
that element is the result; otherwise $preduce^{\prime\prime}$ is
applied recursively to a halfway-reduced sequence. This
partially-reduced sequence is obtained by transforming the original
sequence into a sequence of 2-tuples, then mapping the original
function over it, using the tuple elements as parameters. This
mapping can be done in parallel. One case not shown here is a
sequence with an odd number of elements; in practice, we append the
unpaired element to $seq^\prime$ before the subsequent call to
$preduce^{\prime\prime}$.
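A sequential Java sketch of this pairwise scheme makes the recursion
and the odd-element case concrete; the real $preduce$ performs the
per-level mapping in parallel, which this sketch does not attempt.

\begin{java}
import java.util.*;
import java.util.function.BinaryOperator;

// Sequential sketch of the pairwise reduction behind preduce'':
// pair adjacent elements, reduce each pair, carry an odd element
// over, and recurse until one value remains.
public class PairReduce {
    static <A> A preduce(BinaryOperator<A> f, List<A> xs) {
        if (xs.size() == 1) return xs.get(0);
        List<A> next = new ArrayList<>();
        for (int i = 0; i + 1 < xs.size(); i += 2)
            next.add(f.apply(xs.get(i), xs.get(i + 1)));
        if (xs.size() % 2 == 1)          // odd element: append to seq'
            next.add(xs.get(xs.size() - 1));
        return preduce(f, next);
    }
    public static void main(String[] args) {
        System.out.println(preduce((a, b) -> a + b,
                                   Arrays.asList(1, 2, 3, 4, 5))); // 15
    }
}
\end{java}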


Both $pmap$ and $preduce$ are defined in terms of a lower-level basis
of map and reduce to take advantage of function fusion (recall from
equation \ref{functor:composition} that $fmap$ing a composition of
functions is equivalent to successive $fmap$ings yet requires only a
single traversal). The result is that $preduce$ can be solved using
an application of a $map$-like function and $pmap$ can be solved
using a $preduce$-like function.




\section{Implementation} \label{sec:implementation}

\subsection{Higher-order functions}

Functions in Java are not first class so we needed to first provide a
framework to emulate them.  We experimented with different ways to do
so before settling on the use of interfaces: a function is an
interface $FN$, where $N$ is a value in $[0,1,...]$ indicating the
number of parameters it may accept. This allows us to curry functions
with lazy execution. These functions are also serializable, which,
combined with the way we implement currying, allows for an emulation
of portable continuations.

The general interface for our functions is as follows:

\begin{java}
public interface FN<T1,T2,...,TN,R>
    extends Serializable {
    public R f (T1 a1, T2 a2, ..., TN aN);
}
\end{java}

The type variables $T1,T2,\dots$ are the types of the parameters to
the function, with $R$ being the return type. For example:

\begin{java}
public interface F2<A,B,C>
    extends Serializable {
    public C f (A a, B b);
}
\end{java}


Currying is accomplished by returning a new function accepting a lower
number of parameters. Thus currying an $F2$ yields an $F1$, which
yields an $F0$ when curried. We explored the use of inheritance to
define relationships between functions, but found it simpler to
define them as separate interfaces and overload the $curry$
function. This allows a workflow similar to the following:

\begin{java}
// A function accepting a single Integer
F1<Integer,Integer> f1 = /* definition */

// evaluation returns an Integer
F0<Integer> f0 = curry(f1,42);

/* different ways to evaluate f0: */
// ... directly
int r1 = f0.call();

// ... in a separate thread
Future<Integer> f = threadpool.submit(f0);
int r2 = f.get();

// ... on another node
node.sendWork(f0);
int r3 = node.get();
\end{java}


\subsection{Functors and Monoids}
We implemented the map and reduce functions in terms of functors and
monoids but do not expose this to the library user. For example, we
provide a $Sequence<S,A>$ interface, where $S$ is a functor forming a
monoid with an empty value (i.e. $nil$) and the append operator, and
$A$ is the contained type. This allowed us to implement both the
conventional list data structure and the lazily evaluated
$Stream$. Due to Java's type system, we split the definition of a
monoid into $Join$ and $Empty$ interfaces.
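A minimal sketch of what the split monoid interfaces might look like
follows. The names $Join$ and $Empty$ come from the text, but the
method names are our assumptions, not the actual CFar API.

\begin{java}
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of the split monoid interfaces; the method
// names join/empty are assumptions, not the actual CFar API.
interface Join<A> { A join(A left, A right); }
interface Empty<A> { A empty(); }

public class MonoidDemo {
    // the earlier mreduce example, written against the split interfaces
    static <A> A mreduce(Join<A> j, Empty<A> e, List<A> xs) {
        A acc = e.empty();
        for (A x : xs) acc = j.join(acc, x);
        return acc;
    }
    public static void main(String[] args) {
        System.out.println(
            mreduce((a, b) -> a + b, () -> 0,
                    Arrays.asList(40, 2))); // 42
    }
}
\end{java}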



\section{Example} \label{sec:example}

To demonstrate the usage of our library, we describe a sample task in
computational biophysics using the map reduce framework. Biologists
are interested in protein conformations and how they transform.
Possible conformation trajectories of protein motion can be simulated
using molecular dynamics, which is usually expressed using partial
differential equations. A valuable question to ask of such
trajectories is whether there is a transition from one specific
conformation to another; in other words, whether two conformations
coexist in one trajectory in a certain order. Calculating this
probability involves the following steps.

\begin{enumerate}
\item Generate trajectories using partial differential equation with
  distinct initial values.
\item Mark those specific conformations (e.g. drug-sensitive) in these
  trajectories.
\item Check the order of these specific conformations.
\item Calculate the number of trajectories with specific conformations
  in certain order.
\end{enumerate}

These steps can be implemented using multiple map and reduce
processes. First, trajectories with different initial values can be
generated in parallel using map. Replication is especially useful for
this task: because every generating process is stochastic, it may
fail to converge and thus fail to return a valid trajectory.
Therefore, in our implementation, we select the most converged result
among the replications. Pseudocode for this step is shown below. Here
the map function and choice function are wrapped into serializable
objects $trajMapper$ and $choose$, respectively.

\begin{java}
F1<Double, List<Double>> trajMapper
 = new F1<Double, List<Double>>() {
  public List<Double> f(Double x) {
    List<Double> traj
     = generate traj with initial x; // pseudocode
    return traj;
  }
};

F1<Sequence<Stream, List>, List> choose
 = new F1<Sequence<Stream, List>, List>() {
  public List f(Sequence trajs) {
    return a converged traj from trajs; // pseudocode
  }
};

main(String[] args){
  Sequence trajs
   = Seq.distMap(List<replicas>,
       choose, trajMapper, List<x>);
}
\end{java}


Each generated trajectory is a list of conformations. To mark two
specific conformations conform\_1 and conform\_2 in one trajectory,
map is applied to scan all conformations in parallel and return a
list of integers 0/-1/-2 representing others/conform\_1/conform\_2
respectively. Then a reduce is applied to examine whether these two
conformations occur in a certain order. Note that this map reduce
process examines only one trajectory; for all trajectories, an outer
layer of map is used.

\begin{java}
//map for marking conform_1/2
F1<Double, Integer> map
 = new F1<Double, Integer>() {
  public Integer f(Double conform) {
    return 0; // or -1 (conform_1) or -2 (conform_2)
  }
};

//reduce for checking order
F2<Integer, Integer, Integer> reduce
 = new F2<Integer,Integer,Integer>() {
  public Integer f(Integer a, Integer b) {
    Integer state = 0;
    if (a == 0 || b == 0)
      state = a + b;
    else if (a == -1 && b == -2)
      state = 1;
    // ... remaining cases
    return state;
  }
};

//map reduce for one trajectory
F1<List<Double>, Integer> mapred
 = new F1<List<Double>, Integer>() {
 public Integer f(List<Double> trajs) {
   List<Integer> confs
    = Seq.parMap(map, trajs);
   Integer state
    = Seq.distReduce(reduce, confs);
   return state;
  }
};

//map of map reduce for multiple trajs
main(String[] args){
  Sequence states
   = Seq.distMap(mapred, List<trajs>);
}
\end{java}

According to our implementation, an output value of 1 indicates that
the trajectory contains the specific conformations in the required
order. The above process thus generates a list of integers, one per
trajectory. The probability can then be calculated simply by counting
the number of 1 values and dividing by the number of
trajectories. This can be implemented as follows; the output of the
reduce, divided by the number of trajectories, gives the desired
probability.

\begin{java}
F2<Integer, Integer, Integer> reduce
 = new F2<Integer, Integer, Integer>() {
  public Integer f(Integer s1, Integer s2) {
    if (s1 < 0)
      s1 = 0;
    if (s2 < 0)
      s2 = 0;
    return s1 + s2;
  }
};

public static void main(String[] args) {
  Integer count
   = Seq.distReduce(reduce, states);
}
\end{java}
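Converting the reduced count into the final probability is a single
division. A minimal sketch, assuming a \texttt{count} of successful
trajectories produced by the reduce above and a total trajectory
count \texttt{numTrajs} (both names are illustrative):

\begin{java}
// Sketch: turn the reduced success count into
// the desired probability (names illustrative)
class Probability {
  static double probability(int count,
                            int numTrajs) {
    // cast before dividing to avoid
    // integer truncation
    return (double) count / numTrajs;
  }
}
\end{java}

For example, 15 successful trajectories out of 100 yields a
probability of 0.15.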


\section{Evaluation} \label{sec:evaluation}

Our map-reduce library supports parallel computing on a single
multi-core machine as well as distributed computing on the CRC
computers of the University of Notre Dame. Since the distributed
version is not yet robust, we test the CFar library only on a 32-core
machine.

We test the performance on the problem described in Section
\ref{sec:example}: 10 trajectories of protein motion are generated for
the estimation, and each trajectory contains 100 time steps (protein
conformations). We consider a very simple protein system: a single
atom moving along a one-dimensional space. This is a simplified
stand-in for more interesting problems, such as a protein with
hundreds of atoms in three-dimensional space, but it is sufficient for
testing the CFar library. In fact, to investigate a more complicated
problem, the only thing that needs to be modified is the definition of
the functions passed to map and reduce; the overall process and the
number of nodes needed remain the same.
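As an illustration, the trajectory generator for this one-atom,
one-dimensional system can be sketched as a simple stochastic
integrator. This is a hypothetical stand-in, not our actual generator;
the harmonic toy potential and the constants \texttt{dt},
\texttt{gamma}, and \texttt{kT} are illustrative:

\begin{java}
// Sketch: 1D stochastic dynamics of one atom
// in a harmonic toy potential (illustrative)
import java.util.*;

class MD1D {
  static double force(double x) { return -x; }

  static List<Double> trajectory(double x0,
                                 int steps,
                                 long seed) {
    Random rand = new Random(seed);
    double dt = 0.01, gamma = 1.0, kT = 1.0;
    List<Double> traj = new ArrayList<Double>();
    double x = x0;
    for (int i = 0; i < steps; i++) {
      // Euler-Maruyama step with Gaussian noise
      x += force(x) / gamma * dt
         + Math.sqrt(2 * kT * dt / gamma)
           * rand.nextGaussian();
      traj.add(x);
    }
    return traj;
  }
}
\end{java}

The stochastic term is what makes repeated runs with identical
parameters produce different trajectories, which is precisely why
replication helps below.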

In the first step of the above example, 100 trajectories are generated
by molecular dynamics (MD). MD simulations usually include a
stochastic parameter to simulate noise in the environment. As a
result, the same parameter setup can still lead to different results:
non-converged trajectories may be generated, and such trajectories are
not simulations of natural protein motion. We expect to increase the
number of valid trajectories by running each generating process in map
with replications and defining a choice function to detect
convergence. Figure \ref{fig:reliability} shows that the number of
valid trajectories at first increases rapidly with the number of
replications. Beyond 3 replications the rate of increase is much
smaller, and 100 percent valid results are achieved at a redundancy of
7. This confirms our expectation that only a small number of
replications is needed for a substantial improvement in reliability;
it is not necessary to apply unbounded replication to approach perfect
results. In this experiment, 4 redundancies yield 96 percent valid
results, which is sufficient for this problem.
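The diminishing returns are what one would expect if each replica
converged independently: assuming a per-run convergence probability
$p$, the chance that at least one of $r$ replicas converges is
$1-(1-p)^r$, which saturates quickly. A small sketch (the value of
$p$ used below is illustrative, not measured):

\begin{java}
// Expected fraction of valid trajectories under
// r-fold redundancy, assuming each replica
// converges independently with probability p
class Redundancy {
  static double valid(double p, int r) {
    return 1.0 - Math.pow(1.0 - p, r);
  }
}
\end{java}

With an illustrative $p = 0.55$, \texttt{valid(p, 4)} is roughly 0.96
and \texttt{valid(p, 7)} exceeds 0.99, qualitatively consistent with
the curve in Figure \ref{fig:reliability}.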

\begin{figure} \centering
  \includegraphics[width=0.5\textwidth]{redunVStraj.png}
  \caption{\footnotesize{In this experiment, 100 trajectories are
      generated by a parallel map, with the same number of
      redundancies applied to all 100 generators in each test. The
      x-axis represents the number of redundancies and the y-axis
      shows the number of converged trajectories generated.}}
  \label{fig:reliability}
\end{figure}

To further demonstrate the advantage of user-controlled reliability,
we investigate the program's tolerance to system noise. Since we test
our method on a single 32-core machine, there is no actual
trustworthiness issue in the system, so we simulate a noisy system by
manually injecting Gaussian noise: with a certain probability, some
returned results are changed to other values. We examine how the
accuracy of the result changes with increasing noise under different
redundancy levels. Although the example is a stochastic process, the
expectation of the returned probability is 0.15 and the confidence
interval is 0 to 0.3; returned values within 0 to 0.3 are therefore
considered accurate. The accuracy is estimated by running the
transmission-probability calculation 100 times and counting the
number of accurate results out of the 100 runs. Figure
\ref{fig:reliability2} shows the decrease in accuracy with increasing
noise for 0, 1, and 2 redundancies. A larger number of redundancies
gives the program better tolerance to system noise. When the noise is
trivial, the programs with different redundancies perform almost the
same, which is expected: with no noise at all, the accuracy is always
100 percent. At moderate noise, the larger redundancy makes a
distinct improvement, but as the noise keeps increasing, the effect
of redundancy becomes weaker. This shows that redundancy handles
moderate noise well; when the system noise becomes extreme, this
fault-tolerance strategy is no longer effective.
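A choice function that is robust to this kind of corruption can be
sketched as taking the median of the replicated results; this is an
illustrative strategy, since in CFar the choice function is supplied
by the user:

\begin{java}
// Sketch: median-based choice over replicated
// results; a single corrupted replica cannot
// move the median far from the true value
import java.util.*;

class Choose {
  static double choose(List<Double> replicas) {
    // sort a copy so the caller's list
    // is left untouched
    List<Double> sorted =
      new ArrayList<Double>(replicas);
    Collections.sort(sorted);
    return sorted.get(sorted.size() / 2);
  }
}
\end{java}

For example, given replicated results 1.0, 1.0, and a corrupted 7.5,
the median choice still returns 1.0, whereas an average would not.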

\begin{figure}
   \centering
    \includegraphics[width=0.5\textwidth]{noiseVSacc.png}
    \caption{\footnotesize{The x-axis is the mean of the Gaussian
        noise; the y-axis shows the accuracy of the result. Green
        shows the curve obtained with no redundancy, blue the curve
        with 1 redundancy, and red the curve with 3 redundancies.}}
   \label{fig:reliability2}
\end{figure}

We examine how well our library performs with an increasing number of
parallel jobs. If resources were infinite and no processes other than
the parallel jobs had to be handled, the time needed to execute a
parallel map would be the same as executing a single function. In
practice, the time increases with the size of the list due to limited
resources and the overhead of the processes supporting parallel
execution; a good implementation keeps this increase as small as
possible. Testing this requires a parallel map over a time-consuming
function, and since the previous example is not time-consuming enough,
we wrote a separate program: the test case performs a parallel map
over a function that simply sleeps for 3000 milliseconds. We increase
the size of the list by 100 each time and measure the running time.
Figure \ref{fig:scal1} compares parallel map (blue curve) to running
all jobs sequentially (red curve); parallel map is clearly much better
than sequential processing. Figure \ref{fig:scal2} displays the curve
for parallel map alone so that it is visible. It shows that when the
number of jobs is smaller than 200, the time consumed is almost 3000
milliseconds. With an increasing number of jobs, more time is consumed
and the slope of the curve also increases. This is reasonable: as more
and more computing resources are occupied, further requests for
additional resources take longer to satisfy. Even so, the CFar library
is scalable with increasing jobs, since the curve grows only
moderately: even with 2000 jobs, the run time is around 12,000
milliseconds, 4 times that of a single job. These figures could be
improved through further code optimization.
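The shape of this test can be reproduced with a plain thread pool
standing in for \texttt{Seq.parMap}. This is a sketch, not our test
program; the paper's test sleeps 3000 ms per job, shortened here so
the example runs quickly:

\begin{java}
// Sketch: time a parallel map whose function
// only sleeps; sequential execution would take
// roughly jobs * sleepMs milliseconds
import java.util.*;
import java.util.concurrent.*;

class SleepScaling {
  static long run(int jobs, final long sleepMs,
                  int threads) throws Exception {
    ExecutorService pool =
      Executors.newFixedThreadPool(threads);
    long t0 = System.currentTimeMillis();
    List<Future<?>> fs =
      new ArrayList<Future<?>>();
    for (int i = 0; i < jobs; i++)
      fs.add(pool.submit(new Runnable() {
        public void run() {
          try { Thread.sleep(sleepMs); }
          catch (InterruptedException e) { }
        }
      }));
    for (Future<?> f : fs) f.get();  // wait all
    pool.shutdown();
    return System.currentTimeMillis() - t0;
  }
}
\end{java}

With 64 jobs of 50 ms on 32 threads, the elapsed time is close to two
waves of 50 ms rather than the sequential 3200 ms.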

\begin{figure}
   \centering
    \includegraphics[width=0.5\textwidth]{jobsVStimeVSsequential.png}
    \caption{\footnotesize{The x-axis is the number of parallel jobs;
        the y-axis represents the time consumed to finish all jobs.
        The red line is the expected curve for running all jobs
        sequentially; the blue line is drawn from experimental
        results obtained by running the jobs with parallel map.}}
   \label{fig:scal1}
\end{figure}

\begin{figure}
   \centering
    \includegraphics[width=0.5\textwidth]{jobsVStime.png}
    \caption{\footnotesize{The x-axis is the number of parallel jobs;
        the y-axis represents the time consumed to finish all jobs.
        The curve is drawn from experimental results obtained by
        running the jobs with parallel map.}}
   \label{fig:scal2}
\end{figure}


\section{Conclusion} \label{sec:conclusion}

There are similarities between CFar and other distributed frameworks,
notably Hadoop and Swarm: like Hadoop, CFar is inspired by the $map$
and $reduce$ functions from functional programming languages.

However, we feel that CFar is different enough to make it interesting:

The only assumption we make about nodes is that they are available on
the network. This allows applications using CFar to run on a network
with little or no central control. Furthermore, nodes can easily be
added to and removed from the environment, little administrative
control is required, and heterogeneous platforms can be used
simultaneously.

Furthermore, one could express the MapReduce framework and other
parallel patterns in terms of CFar, but not necessarily
vice-versa. This makes CFar a generalization of these patterns: one
might build a Hadoop-like infrastructure by setting up a distributed
filesystem and providing specialized versions of our map and reduce
functions.


Abstractions are useful because they hide complexity and provide an
interface with known behavior. One might say that the act of
programming is an exercise in multiple levels of abstraction: high
level languages abstract patterns of computation, compilers abstract
the task of translating high-level into low-level instructions, and
operating systems abstract the underlying hardware. Category theory
allows the formal analysis of such abstractions, and may be one way
of providing a formal context for programming.

Extracting parallel solutions from various problems has historically
been difficult. While there are many reasons for this complexity, it
seems inevitable that good solutions to parallel problems lead to or
are derived from some abstraction.

This is why we felt that the formal background and abstractions of
category theory were worth exploring in application to a parallel
context.

CFar is the result of this exploration, providing an abstraction of
the traditional $map$ and $reduce$ functions. This abstraction appears
general enough that other abstractions, such as MapReduce
\textit{\`{a} la} Hadoop, can be built on top of it.



\nocite{Martini96elementsof, Wadler92comprehendingmonads, Hoare78CSP, typeclassopedia, folding@home, MapReduce, hadoop, ReInForM, hughes-arrows}


\bibliographystyle{plain}
\bibliography{references}

\end{document}