\section{System}\label{sec:system}                                                       
   

We fully implemented \name\ in the Java programming language. We propose that
\name\ be maintained as a parallel execution engine alongside a distributed
computation system that operates on big data. \name\ should be invoked in
scenarios where compute-aggregate operations have to be performed. Our
architecture is shown in Figure~\ref{fig:Arch}. % FIXME - ADD IMAGE

 \begin{figure}[!ht]
 \centering
 \includegraphics[width=0.75\columnwidth, keepaspectratio=true,trim=0 0 0 0,
 clip]{Architecture}
 \caption{Architecture}
 \label{fig:Arch}
 \end{figure}
 
Computation nodes exist in the Distributed Computing System. When a
compute-aggregate problem is initialized, they start the first phase. A list of
the computation nodes and the traits of the aggregation function are sent to
the Controller within the aggregation subsystem. The Controller determines the
optimal fanout and constructs the appropriate overlay on an ordered set of
vertices. The Controller then tells each vertex its children and parent. All
nodes open the relevant TCP connections while waiting for the computation phase
to complete.
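The parent/child assignment for such an overlay can be sketched as below. This is a minimal illustration assuming the Controller lays a complete $k$-ary tree over vertices numbered $0$ to $n-1$, with vertex $0$ as the root; the class and method names are ours, not part of \name.

\begin{lstlisting}
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: k-ary tree overlay over an
// ordered vertex list 0..n-1, rooted at vertex 0.
public class OverlayPlanner {
  // Parent of vertex i, or -1 for the root.
  public static int parent(int i, int fanout) {
    return i == 0 ? -1 : (i - 1) / fanout;
  }

  // Children of vertex i among n vertices.
  public static List<Integer> children(
      int i, int fanout, int n) {
    List<Integer> kids = new ArrayList<>();
    int first = i * fanout + 1;
    for (int c = first;
         c < first + fanout && c < n; c++) {
      kids.add(c);
    }
    return kids;
  }
}
\end{lstlisting}

For instance, with fanout $3$ and $7$ vertices, vertex $0$ has children $1$, $2$, $3$, and the parent of vertex $4$ is vertex $1$.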
 
When a computation node finishes, it sends its results to the appropriate
vertex. Each aggregator waits until it has results from all its children, then
aggregates and pushes the result to its parent. When the root finishes, it can
send the result back to a specified node in the Distributed Computing System or
write it out to the Distributed File System. If the overlay is for a
persistent problem, the vertices maintain their connections and do work when
new input is pushed in. Otherwise, a node drops its connections after
communicating its portion to its parent.
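The per-vertex aggregation round described above can be sketched as follows. This is a simplified model, assuming each child link delivers exactly one partial result per round; the TCP transport is modeled with a \lstinline{BlockingQueue}, and the interface names are illustrative rather than \name's actual API.

\begin{lstlisting}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;

// Sketch of one aggregation round at a vertex: wait
// for a result from every child, combine them, and
// hand the combined value to the caller, which pushes
// it to the parent (or, at the root, writes it out).
public class AggregatorVertex<T> {
  public interface AggregateFn<T> {
    T aggregate(List<T> partials);
  }

  private final List<BlockingQueue<T>> childLinks;
  private final AggregateFn<T> fn;

  public AggregatorVertex(
      List<BlockingQueue<T>> childLinks,
      AggregateFn<T> fn) {
    this.childLinks = childLinks;
    this.fn = fn;
  }

  public T aggregateOnce()
      throws InterruptedException {
    List<T> partials = new ArrayList<>();
    for (BlockingQueue<T> link : childLinks) {
      partials.add(link.take()); // block per child
    }
    return fn.aggregate(partials);
  }
}
\end{lstlisting}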

We propose extensions to two distributed programming abstractions that allow
users to easily define and perform compute-aggregate operations: (1) FlumeJava
and (2) RDDs. These abstractions are implemented in the Apache Crunch and
Apache Spark distributed computation systems respectively.
                         
    
\subsection{Extensions to FlumeJava}  

FlumeJava is a Java library that can be used to define collections that represent possibly massive amounts of data, together with the operations that should be performed on these collections. The two main data structures of FlumeJava are \lstinline{PCollection} and \lstinline{PTable}, which are a collection of elements and a collection of \emph{key-value} pairs respectively. FlumeJava provides four main types of operations that can be performed on its data structures: (1) \lstinline{parallelDo} performs a general operation, defined by a function object (\lstinline{DoFn}), on each element; (2) \lstinline{groupByKey} groups the elements of a \lstinline{PTable} based on the key; (3) \lstinline{combineValues} combines the elements of a grouped \lstinline{PTable} that have the same key, based on a function object of type \lstinline{CombineFn}; (4) \lstinline{flatten} merges the elements of two or more \lstinline{PCollection}s/\lstinline{PTable}s into a single \lstinline{PCollection}/\lstinline{PTable}.

We propose extending FlumeJava with a fifth main operation, \lstinline{parallelAggregate}. The syntax of the operation, performed on a \lstinline{PCollection<A>}, is given below.
      
\begin{lstlisting}
  parallelAggregate(
	DoFn<A,B>, AggregateFn<B>) 
	: PCollection<B>
\end{lstlisting}

The \lstinline{parallelAggregate} operation takes two parameters: (1) a compute function of type \lstinline{DoFn<A,B>} that defines the compute operation and (2) an aggregation function of type \lstinline{AggregateFn<B>} that defines the aggregation operation. The result of the operation is a \lstinline{PCollection<B>} that contains the aggregation.

The following shows how plain FlumeJava can be used to determine the top K elements of a dataset.

\begin{lstlisting}
PCollection<String> originalElements = ...;

DoFn<String, Tuple<String, Integer>>
  computeFn =
    new DoFn<String, Tuple<String, Integer>>() {
      public void process(
          String input,
          EmitFn<Tuple<String, Integer>>
            emitter) {
        Integer intVal =
          Integer.parseInt(input);
        emitter.emit(
          new Tuple<String, Integer>(
            KEY, intVal));
      }
    };

Aggregator<Integer> aggregateFn =
  new Aggregator<Integer>() {
    List<Integer> results =
      new ArrayList<Integer>();

    public void update(Integer val) {
      results.add(val);
    }

    public Iterable<Integer> results() {
      List<Integer> sorted =
        sortLargestFirst(results);
      Iterator<Integer> iter =
        sorted.iterator();
      List<Integer> topK =
        new ArrayList<Integer>();
      for (int i = 0; i < K; i++) {
        if (!iter.hasNext()) {
          break;
        }
        topK.add(iter.next());
      }
      return topK;
    }
  };

PTable<String, Integer> keyedElements =
  originalElements.parallelDo(
    computeFn,
    tableOf(strings(), ints()));
PTable<String, Integer> topValues =
  keyedElements.groupByKey().
    combineValues(aggregateFn);
\end{lstlisting}


Here the compute function \lstinline{computeFn} is used to generate \lstinline{Integer} elements from a \lstinline{String} dataset and the aggregation (combiner) function \lstinline{aggregateFn} is used to determine the top K elements. Since combining can only be performed on a \lstinline{PTable}, one has to be generated with an artificial key prior to performing the combine operation.

With our \lstinline{parallelAggregate} operation, the code for determining the top K elements is simplified as follows.

\begin{lstlisting}
PCollection<String> originalElements = ...;

DoFn<String, Integer> computeFn =
  new DoFn<String, Integer>() {
    public void process(
        String input,
        EmitFn<Integer> emitter) {
      Integer intVal =
        Integer.parseInt(input);
      emitter.emit(intVal);
    }
  };

AggregateFn<Integer> aggregateFn =
  new AggregateFn<Integer>() {
    public void process(
        Iterator<Integer> values,
        AggregatorEmitFn<Integer> emitter) {
      Iterator<Integer> sorted =
        sortLargestFirst(values);
      for (int i = 0;
           i < K && sorted.hasNext();
           i++) {
        emitter.emit(sorted.next());
      }
    }
  };

PCollection<Integer> topK =
  originalElements.parallelAggregate(
    computeFn,
    aggregateFn);
\end{lstlisting}

Here, similar to the previous example, \lstinline{computeFn} is used to generate \lstinline{Integer} elements from a \lstinline{String} dataset and the aggregation function \lstinline{aggregateFn} is used to determine the top K elements, but there is no need to introduce an intermediate table with an artificial key just for performing the aggregation.



     
\subsection{Extensions to RDDs}

RDDs introduce a number of \emph{transformation} operations that can be used to create new datasets from existing ones. For example, the transformation \lstinline{map(func)} applies a function \lstinline{func} to each element of a given dataset and forms a new dataset from the results, while the transformation \lstinline{groupByKey([numTasks])}, when called on a dataset of \lstinline{(K,V)} pairs, returns a dataset of \lstinline{(K, sequence of V)} pairs. Additionally, RDDs introduce a number of \emph{action}s that return a value after running a computation on a dataset. For example, the action \lstinline{reduce(func)} aggregates the elements of a dataset using the function \lstinline{func}.

We propose to extend RDDs by introducing a new transformation named \lstinline{parallelAggregate(func)} for defining compute aggregate operations. The new transformation is similar to the \lstinline{reduce(func)} action defined above but returns a collection of one or more values instead of a single value.
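A local, single-process model of the intended semantics (not the Spark implementation itself) is sketched below: like \lstinline{reduce(func)}, the aggregation folds the dataset, but it yields a collection of values, here the top K. The class name and plain-\lstinline{List} signature are illustrative only; on an actual RDD the same function would run once per partition and once more over the per-partition results.

\begin{lstlisting}
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Local model of the proposed parallelAggregate
// semantics, specialized to top-K.
public class ParallelAggregateModel {
  public static List<Integer> topK(
      List<Integer> data, int k) {
    List<Integer> sorted = new ArrayList<>(data);
    sorted.sort(Collections.reverseOrder());
    return new ArrayList<>(
      sorted.subList(0, Math.min(k, sorted.size())));
  }
}
\end{lstlisting}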

                      
                      

