\section{Implementations}

In this section, we describe the implementation of our work in detail, beginning with an overview of how the original Hadoop system operates.

In the original Hadoop system, one machine in the cluster is designated as the NameNode and one as the JobTracker (which may reside on the same machine as the NameNode). All other nodes in the cluster act as DataNodes and TaskTrackers.
The user submits a MapReduce job via the JobTracker, which instantiates a {\it JobInProgress (JIP)} object to track the progress of the job. The user also needs to specify the format of the input by providing a concrete {\it InputFormat} class. This class calculates the number of splits, each assigned to a map task, and determines how records are read from each split. {\it TaskTrackers} periodically report their status to the JobTracker, which in return sends them new tasks if free slots are available on the {\it TaskTracker}. A task can be one of the following types: setup, map, reduce, and clean-up. When the {\it TaskTracker} receives a new task, it starts a new process on the same node to run that task. A map or reduce task runs the user-specified mapper or reducer function.
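The two responsibilities of the {\it InputFormat} class, computing input splits and defining how records are read from them, can be sketched as follows. This is a minimal illustration, not Hadoop's actual API; the class and method names are ours.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of an InputFormat's split calculation: divide the
// input into byte ranges, one per map task. (Record reading, the second
// responsibility, would consume one such range and yield records.)
public class InputFormatSketch {
    // One split per map task: the byte range [start, start + length).
    static class Split {
        final long start, length;
        Split(long start, long length) { this.start = start; this.length = length; }
    }

    // Divide a totalBytes-long input into splits of at most splitSize bytes.
    static List<Split> getSplits(long totalBytes, long splitSize) {
        List<Split> splits = new ArrayList<>();
        for (long off = 0; off < totalBytes; off += splitSize) {
            splits.add(new Split(off, Math.min(splitSize, totalBytes - off)));
        }
        return splits;
    }
}
```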

In the following, we focus on the implementation of the three most important parts of our streaming Hadoop system.

\subsection{JobDispatcher Class}
In the spirit of reusing existing code as much as possible, we try to keep the heartbeat scheme between the TaskTracker and JobTracker. This way, the task dispatching algorithm is preserved in the streaming Hadoop system. 

However, as mentioned above, the input path is entirely different in streaming: the data is dispatched by the newly introduced {\it JobDispatcher}. 
{\it JobDispatcher} plays two roles in our streaming Hadoop system. First, it is a data server: it receives raw data from the external link, divides it into batches, and sends them to different map tasks. 
Second, it is a controller: it coordinates the communication between map tasks and reduce tasks.  
 
As a data server, {\it JobDispatcher} receives data from the external link and sends batches to map tasks. 
Every time new data arrives, {\it JobDispatcher} parses it and encapsulates it 
into batches according to the formatting algorithm provided by the {\it InputFormat} class. We 
reuse {\it InputFormat} here so that record boundaries are handled correctly and data 
completeness is guaranteed. The batches are stored temporarily in a batch list. Whenever a data 
link to a map task is ready for writing, {\it JobDispatcher} removes data batches from the batch list and 
sends them out for processing. However, this greedy strategy provides poor support for load 
balancing, especially on heterogeneous infrastructure; more sophisticated dispatching 
algorithms are left for future work.  
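The greedy dispatch described above can be sketched as a simple queue drained whenever a map-task link is writable. The class and method names below are illustrative, not taken from our code.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of the JobDispatcher's greedy dispatch: batches accumulate in a
// batch list, and whichever map-task link becomes writable first receives
// the next pending batch, with no regard for load balance.
public class GreedyDispatchSketch {
    private final Deque<byte[]> batchList = new ArrayDeque<>();

    // Called when InputFormat has produced a complete batch.
    void enqueue(byte[] batch) { batchList.addLast(batch); }

    // Called when a data link to a map task reports it is ready for write;
    // returns the next batch, or null if none is pending.
    byte[] dispatchNext() { return batchList.pollFirst(); }
}
```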
 
{\it JobDispatcher} also acts as the communication controller of the whole system. During 
initialization, each existing map task contacts the {\it JobDispatcher} to report its task ID, IP address, and a listening port. This listening port is used to create data links between the map task and all the reduce tasks to transport key-value pairs. {\it JobDispatcher} maintains a list of the addresses of the existing map tasks. 
Upon receiving a heartbeat message from a reduce task, {\it JobDispatcher} replies with a copy of the latest address list. Based on the returned address 
information, the reduce task initializes data links to each of the existing map tasks. Once new 
key-value pairs arrive on these data links, the reduce task processes them immediately.  
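The controller role amounts to a small registry: map tasks register their addresses, and each reduce heartbeat is answered with a snapshot of the current list. A minimal sketch, with illustrative names of our choosing:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Sketch of the JobDispatcher's address list: map tasks register
// (taskID, host, port) at initialization, and every reduce-task heartbeat
// is answered with a copy of the latest list.
public class AddressRegistrySketch {
    static class MapTaskAddr {
        final String taskId, host;
        final int port;
        MapTaskAddr(String taskId, String host, int port) {
            this.taskId = taskId; this.host = host; this.port = port;
        }
    }

    private final List<MapTaskAddr> addrs = new CopyOnWriteArrayList<>();

    // Called by a map task when it comes up.
    void register(String taskId, String host, int port) {
        addrs.add(new MapTaskAddr(taskId, host, port));
    }

    // Heartbeat reply: a snapshot the reduce task can iterate over safely
    // while new map tasks continue to register.
    List<MapTaskAddr> heartbeatReply() {
        return new ArrayList<>(addrs);
    }
}
```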
 
One concern is that the {\it JobDispatcher} could become the bottleneck of the whole system, since it 
runs the batch-creating algorithm and handles data transportation simultaneously. Currently, we 
use the non-blocking I/O programming paradigm to handle all the connections. This network 
server architecture performs well even when the number of 
connections is large. In addition, it eliminates the synchronization overhead on the 
batch list because a single thread handles all the connections. As our 
experiments show, the {\it JobDispatcher} never became a bottleneck of the system.
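The single-threaded non-blocking design can be sketched with Java NIO: one {\tt Selector} multiplexes every connection, so no lock is needed on state such as the batch list. The sketch below only handles the accept path and is our own illustration, not the system's actual server loop.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

// Sketch of a single-threaded non-blocking server: one Selector drives all
// connections, so shared structures like the batch list need no locking.
public class NioServerSketch {
    // Accept up to maxAccepts connections, then return how many arrived.
    static int acceptLoop(ServerSocketChannel server, int maxAccepts) throws IOException {
        Selector selector = Selector.open();
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);
        int accepted = 0;
        while (accepted < maxAccepts) {
            selector.select(1000); // block until an event or the timeout
            for (SelectionKey key : selector.selectedKeys()) {
                if (key.isAcceptable()) {
                    SocketChannel ch = server.accept();
                    if (ch != null) { ch.close(); accepted++; }
                }
            }
            selector.selectedKeys().clear();
        }
        selector.close();
        return accepted;
    }
}
```

A real dispatcher would also register {\tt OP\_READ}/{\tt OP\_WRITE} interest on accepted channels and drain the batch list from the same loop.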

\begin{figure}[!b]
  \begin{center}
    \includegraphics[width=1.0\columnwidth]{art/JobDispatcher.pdf}
  \end{center}
  \caption{\small JobDispatcher Class Overview}
  \label{fig-dispatcher}
\end{figure}

\subsection{Mapper Class}
The mapper class has been changed to divert its output directly to reducers instead of storing it on local disk. The detailed structure of a map task is illustrated in Figure~\ref{fig-mapper}.

When a map task is created, it starts a new thread ({\it BatchFetchingThread}) that links to the {\it JobDispatcher}. Its responsibility is to periodically report this mapper's task ID and the {\it PairServer}'s port to the {\it JobDispatcher}, which later passes them to the reducers so that the links between the reducers and the mapper can be set up. Another thread, {\it MapTaskServerThread}, manages the data links between this mapper and the reducers. It maintains links to all reducers because a $<$Key, Value$>$ pair produced by a mapper may be destined for any of them. 

{\it BatchFetchingThread} communicates with the main thread through the {\it BatchList} in a Server/Client pattern. Whenever {\it BatchFetchingThread} gets a batch of data from the {\it JobDispatcher}, it pushes the batch into the {\it BatchList}. The main thread keeps fetching from the {\it BatchList} and converting the input into $<$Key, Value$>$ pairs using the user-provided map function. The output is pushed to the {\it MapOutputStreamingCollector}, which differs from the collector in the original Hadoop: there, the collector sorts the pairs by key and writes them to local disk, whereas here each pair is immediately sent to the associated reducer by the {\it MapTaskServerThread}. Again, the main thread and the {\it MapTaskServerThread} communicate in the Server/Client pattern and synchronize via wait/notify on an array of bytes called {\it DataOutputStream}.
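The wait/notify handoff on the {\it BatchList} can be sketched as a minimal blocking queue; the producer is {\it BatchFetchingThread} and the consumer is the main thread. This is an illustration of the pattern, not our actual class.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of the BatchList handoff: BatchFetchingThread pushes batches
// received from the JobDispatcher; the main thread blocks in take() until
// a batch is available, then converts it to <Key, Value> pairs.
public class BatchListSketch {
    private final Deque<byte[]> queue = new ArrayDeque<>();

    // Producer side (BatchFetchingThread).
    public synchronized void push(byte[] batch) {
        queue.addLast(batch);
        notifyAll(); // wake the consumer blocked in take()
    }

    // Consumer side (main thread); blocks until a batch arrives.
    public synchronized byte[] take() throws InterruptedException {
        while (queue.isEmpty()) {
            wait(); // releases the monitor until a producer calls push()
        }
        return queue.pollFirst();
    }
}
```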

\begin{figure}[!b]
  \begin{center}
    \includegraphics[width=1.0\columnwidth]{art/map.pdf}
  \end{center}
  \caption{\small Mapper Class Overview}
  \label{fig-mapper}
\end{figure}

\subsection{Reducer Class}
The reduce tasks are calculated within the JIP and changed to start right after the job is started in the cluster, as opposed to the original behavior, where a reduce task is not initiated until a certain point in the job's progress. As soon as the reduce tasks start, they contact the {\it JobDispatcher} for the map tasks alive in the system and try to connect to them. Once a connection to a map task is established, that map task immediately starts pushing serialized intermediate results to the reduce task. These records are received by the {\it PairFetchingThread} and forwarded to the {\it DeserializationThreads}, where the intermediate $<$key, value$>$ pairs are reconstructed and put into the global store, the {\it ReduceValueIterator}. The {\it ReduceValueIterator} is also modified to group the available $<$key, value$>$ pairs by their keys and notify the awaiting main thread. The main thread creates a reduce thread for each new key; each reduce thread waits on its own key's group and reduces every incoming intermediate result by calling the user-provided reducer function. This makes writing a reducer function transparent to programmers, since the iterator within each reduce thread iterates only over one key's available values and automatically blocks until a new record for that key arrives at the reduce task.
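The per-key blocking behavior of the {\it ReduceValueIterator} can be sketched as a map of per-key queues: a reduce thread asks for the next value of its key and blocks until one arrives or the input is closed. Names and the use of strings for keys and values are illustrative simplifications.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Sketch of the ReduceValueIterator: DeserializationThreads put
// reconstructed <key, value> pairs into per-key queues; each reduce
// thread blocks on its own key until a value arrives or input ends.
public class ReduceValueIteratorSketch {
    private final Map<String, Deque<String>> byKey = new HashMap<>();
    private boolean closed = false;

    // Producer side: called as intermediate pairs are deserialized.
    public synchronized void put(String key, String value) {
        byKey.computeIfAbsent(key, k -> new ArrayDeque<>()).addLast(value);
        notifyAll(); // wake reduce threads waiting on this key
    }

    // Signal that no further pairs will arrive.
    public synchronized void close() { closed = true; notifyAll(); }

    // Consumer side: next value for this key, blocking until one is
    // available; returns null once the input is closed and drained.
    public synchronized String nextValue(String key) throws InterruptedException {
        Deque<String> q;
        while ((q = byKey.get(key)) == null || q.isEmpty()) {
            if (closed) return null;
            wait();
        }
        return q.pollFirst();
    }
}
```

Because each reduce thread only ever calls {\tt nextValue} with its own key, the blocking is invisible to the user-provided reducer function, matching the transparency described above.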

\begin{figure}[!b]
  \begin{center}
    \includegraphics[width=1.0\columnwidth]{art/reduce.pdf}
  \end{center}
  \caption{\small Reducer Class Overview}
  \label{fig-reducer}
\end{figure}

