\section{Experimental Results}

We set up our testbed on the OS cluster at the CSC department of NCSU. Each node in the cluster runs on an AMD Athlon dual-core processor at 2\,GHz with 1\,GB of memory.

The application we choose is word count. In this example, the original Hadoop reads articles from files stored in the distributed file system and runs map tasks. Each map task reads a split of the input files and generates a record of $<$word, 1$>$ for each word in the split. The reduce tasks collect all the records for each word and sum them up. In streaming Hadoop, the input is instead fed into the system by an external stream generator; the work done in each map or reduce task remains largely the same.
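The per-record logic of the word count job can be sketched as follows. This is a minimal, self-contained Python sketch of the map and reduce steps, not the actual Hadoop implementation; the function names are our own:

```python
from collections import defaultdict

def map_split(split_text):
    """Map task: emit a <word, 1> record for every word in the split."""
    for word in split_text.split():
        yield (word, 1)

def reduce_records(records):
    """Reduce task: collect all records for each word and sum the counts."""
    counts = defaultdict(int)
    for word, one in records:
        counts[word] += one
    return dict(counts)

# One split of input text, mapped and then reduced.
records = list(map_split("the quick fox jumps over the lazy fox"))
print(reduce_records(records))
# {'the': 2, 'quick': 1, 'fox': 2, 'jumps': 1, 'over': 1, 'lazy': 1}
```

In the real system the shuffle phase routes all records for a given word to the same reducer; here the grouping is folded into the single `reduce_records` call for brevity.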

The first experiment compares the execution time of the new streaming Hadoop with that of the original Hadoop under different input data sizes (Figure~\ref{fig-compare}). In this experiment, we use six nodes and start three map tasks and three reduce tasks. As the input data size increases, the execution time grows linearly. For small inputs, the original Hadoop and streaming Hadoop have similar performance. The original Hadoop tends to show super-linear speedup: its setup overhead is relatively large, but it is amortized as the data size increases.
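The amortization effect can be made explicit with a simple, purely illustrative cost model, where $T_{\mathrm{setup}}$ denotes the fixed job-setup overhead, $s$ the input size, and $r$ the sustained processing rate (all three are our notation, not measured quantities):
\[
T(s) = T_{\mathrm{setup}} + \frac{s}{r},
\qquad
\frac{T(s)}{s} = \frac{T_{\mathrm{setup}}}{s} + \frac{1}{r}.
\]
The per-byte cost $T(s)/s$ thus decreases toward $1/r$ as $s$ grows, which appears as super-linear speedup for the original Hadoop at larger input sizes.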

\begin{figure}[!b]
  \begin{center}
    \includegraphics[width=1.0\columnwidth]{stats/compare.pdf}
  \end{center}
  \caption{\small Comparison between the Original Hadoop and Streaming Hadoop}
  \label{fig-compare}
\end{figure}

The second experiment examines the impact of the number of map tasks on the system. We use three reduce tasks and feed in 40\,MB of data. The results (Figure~\ref{fig-map}) show that two or three map tasks are optimal in this case. With fewer map tasks, the map phase becomes the bottleneck; with more map tasks, performance also degrades because (a) the extra network links between mappers and reducers add overhead, and (b) the reducers become the bottleneck.

\begin{figure}[!b]
  \begin{center}
    \includegraphics[width=1.0\columnwidth]{stats/map.pdf}
  \end{center}
  \caption{\small The Effect of Changing Number of Mappers}
  \label{fig-map}
\end{figure}

The third experiment studies the effect of varying the number of reduce tasks. We use three map tasks and feed in 40\,MB of data. The results (Figure~\ref{fig-reduce}) show that the bottleneck in the reduce phase can be resolved by increasing the number of reduce tasks up to a certain point. Beyond that point, starting more reduce tasks brings no further benefit: the execution time stays roughly constant.

\begin{figure}[!b]
  \begin{center}
    \includegraphics[width=1.0\columnwidth]{stats/reduce.pdf}
  \end{center}
  \caption{\small The Effect of Changing Number of Reducers}
  \label{fig-reduce}
\end{figure}
