\section{System Design}

Hadoop~\cite{hadoop} is an open-source implementation of MapReduce. In this project, we make changes to version 0.18.3. Internally, Hadoop contains its own distributed storage system, the Hadoop Distributed File System (HDFS). This file system is an integral part of the Hadoop data flow. First, the input data must already reside in HDFS before any MapReduce job is launched. Map tasks are then initiated and emit $<$key, value$>$ pairs to local disk. There is an implicit barrier between the map phase and the reduce phase: before the reduce tasks can make any progress, the run-time system must sort and merge the keys in each mapper's output. This ensures that each reducer iterates over all values for the same key contiguously. Finally, the result is written back to HDFS. The complete data flow in the original Hadoop is illustrated in Figure~\ref{fig-old}.
\begin{figure}[!b]
  \begin{center}
    \includegraphics[width=0.5\columnwidth]{art/old.pdf}
  \end{center}
  \caption{\small Original Hadoop Data Flow}
  \label{fig-old}
\end{figure}
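To make the sort-and-merge barrier concrete, the following standalone Java sketch (our own illustration, not code from Hadoop; the class and method names are hypothetical) buffers mapper output, then sorts and merges it by key, so that a reducer sees all values for one key in a single contiguous run:

```java
import java.util.*;

// Hypothetical sketch of the barrier between map and reduce: buffered
// mapper output is sorted on key and merged into contiguous runs before
// any reduce task starts.
public class SortMergeSketch {
    // Sort the buffered pairs by key and group values per key, preserving
    // the sorted key order.
    public static LinkedHashMap<String, List<Integer>> sortAndMerge(
            List<Map.Entry<String, Integer>> mapperOutput) {
        mapperOutput.sort(Map.Entry.comparingByKey());
        LinkedHashMap<String, List<Integer>> grouped = new LinkedHashMap<>();
        for (Map.Entry<String, Integer> kv : mapperOutput) {
            grouped.computeIfAbsent(kv.getKey(), k -> new ArrayList<>())
                   .add(kv.getValue());
        }
        return grouped;
    }

    public static void main(String[] args) {
        List<Map.Entry<String, Integer>> out = new ArrayList<>(List.of(
                Map.entry("b", 1), Map.entry("a", 2), Map.entry("b", 3)));
        // The reducer for key "b" now sees both values back to back.
        System.out.println(sortAndMerge(out)); // {a=[2], b=[1, 3]}
    }
}
```

Note that this grouping is only possible because the barrier guarantees all mapper output is available before reducing begins; the streaming design below removes exactly this assumption.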

Several parts of this design must be revised to meet the demands of streaming applications. The major changes are:
\begin{itemize}
\item The use of HDFS should largely be discarded in streaming mode. All data except the final result should be transferred through TCP pipes.
\item The streaming input data is fed into the system from the network or shared memory continuously, and its properties (length, frequency) are not known in advance. This implies that the barrier between the map phase and the reduce phase disappears; both phases must remain active during the entire lifetime of the computation.
\item The simultaneous existence of mappers and reducers makes it impossible for the run-time system to provide sorting and merging facilities. Whenever a map task emits a $<$key, value$>$ pair, it must transmit the pair directly to the corresponding reduce task.
\end{itemize}
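One simple way to realize this direct pair-to-reducer routing is static hash partitioning, using the same formula as Hadoop's default \texttt{HashPartitioner}. The sketch below is our own standalone illustration (the class and method names are hypothetical):

```java
// Hypothetical sketch: route each emitted pair to a fixed reducer by
// hashing its key, mirroring Hadoop's default HashPartitioner formula.
public class StreamPartitioner {
    // Reducer index in [0, numReducers); masking off the sign bit keeps
    // the result non-negative even when hashCode() is negative.
    public static int reducerFor(String key, int numReducers) {
        return (key.hashCode() & Integer.MAX_VALUE) % numReducers;
    }

    public static void main(String[] args) {
        // The same key always lands on the same reducer, so per-key state
        // stays consistent in that reducer without any global sort.
        System.out.println(reducerFor("apple", 4) == reducerFor("apple", 4)); // true
    }
}
```

Because the partition function is deterministic, all values for a given key accumulate at one reducer, which substitutes for the grouping that the sort-and-merge barrier used to provide.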

As a result, we re-designed the data flow in streaming Hadoop, as shown in Figure~\ref{fig-new}. A centralized JobDispatcher is introduced to receive external data and dispatch it to multiple map tasks. Every map task connects to every reduce task via a socket that serves as the data link. To support dynamically changing the number of mappers and reducers, all tasks register themselves with the JobDispatcher and periodically update their knowledge of the system. Finally, the output data is either written to HDFS or transmitted to external sinks.
\begin{figure}[!b]
  \begin{center}
    \includegraphics[width=1.0\columnwidth]{art/new.pdf}
  \end{center}
  \caption{\small Streaming Hadoop Data Flow}
  \label{fig-new}
\end{figure}
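The registration and periodic-update protocol can be sketched as dispatcher-side bookkeeping. The following is a minimal standalone illustration (our own names, not from the Hadoop code base): each task registers and then periodically renews its entry, and the dispatcher considers a task live only while its last update falls within a timeout.

```java
import java.util.*;

// Minimal sketch of JobDispatcher-side membership tracking (hypothetical
// names): tasks register and renew with a timestamp; stale entries are
// filtered out when the live view is computed.
public class TaskRegistry {
    private final Map<String, Long> lastSeen = new HashMap<>();

    // Called both on initial registration and on each periodic update.
    public void heartbeat(String taskId, long nowMs) {
        lastSeen.put(taskId, nowMs);
    }

    // Tasks whose last heartbeat is within timeoutMs are considered live.
    public Set<String> liveTasks(long nowMs, long timeoutMs) {
        Set<String> live = new TreeSet<>();
        for (Map.Entry<String, Long> e : lastSeen.entrySet()) {
            if (nowMs - e.getValue() <= timeoutMs) {
                live.add(e.getKey());
            }
        }
        return live;
    }

    public static void main(String[] args) {
        TaskRegistry r = new TaskRegistry();
        r.heartbeat("map-0", 0);
        r.heartbeat("reduce-0", 900);
        // With a 500 ms timeout at t = 1000, only reduce-0 is still live.
        System.out.println(r.liveTasks(1000, 500)); // [reduce-0]
    }
}
```

When a mapper or reducer joins or drops out, the next round of updates changes the live view, and the remaining tasks can re-establish their socket links accordingly.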


