\newpage

\subsection{Saving Communication Cost}
While distributed computing is characterized by high scalability, it introduces communication cost, which is negligible in main-memory processing on a single machine. The communication cost can become the performance bottleneck, especially when the network bandwidth in the cluster is limited. To save communication cost in ESJ, we propose two approaches: eliminating stream duplication and optimizing the message passing protocol.

\subsubsection{Eliminating Stream Duplication}
To run multiple join tasks over the same input streams concurrently, a naive and straightforward solution is to duplicate the input streams for every task. However, this multiplies the communication cost, since the duplicated streams are redundant from the perspective of the logical operator. To eliminate stream duplication, ESJ processes multiple source-sharing tasks over a single pair of streams: all worker instances belonging to the same worker share the stream pair, so the processing is conducted without duplicating the streams. The saving in communication cost can be remarkable when the number of source-sharing tasks is large.
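As an illustrative sketch (names and structure are ours, not the actual ESJ code), the idea can be phrased as a dispatcher that receives each incoming tuple once and hands it to every co-located source-sharing task, so the network transfer count is independent of the number of tasks:

```python
class SharedStreamDispatcher:
    """Deliver each received tuple to all source-sharing join tasks on
    this worker, instead of shipping one stream copy per task (sketch)."""

    def __init__(self, tasks):
        self.tasks = tasks          # callables, one per join task
        self.transmitted = 0        # tuples received over the network

    def on_tuple(self, t):
        self.transmitted += 1       # one network transfer serves all tasks
        for task in self.tasks:
            task(t)                 # local dispatch incurs no network cost
```

With $k$ source-sharing tasks, naive duplication would transmit $k$ copies of every input tuple, whereas the shared dispatcher transmits exactly one.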

\subsubsection{Optimized Message Passing}
While ESJ facilitates evenly distributing workloads to workers, it incurs communication overhead due to the nature of message passing. The major source of this overhead is serialization. Reducing the number of serializations lowers the cost of message passing but slows down the exchange of information between workers, which may in turn lead to workload imbalance. Hence, it is essential to optimize the message passing protocol to strike a moderate balance between the two.

\paragraph{MP-2PF}
ESJ adopts two-phase forwarding~\cite{sigmod11:Teubner} to assist the join processing. As part of the ESJ design, we propose a message passing protocol for two-phase forwarding, namely MP-2PF. It exploits the strategy of passively exchanging information between workers: whenever the state of a worker changes, it immediately informs those neighbors that depend on this information. With a careful implementation, such passive information exchange allows each worker to maintain an up-to-date view of its neighbors in real time.

There are three types of messages in MP-2PF:

\begin{itemize}
\item \textbf{TUPLE\_BLK}: transmitting a block of tuples between neighbors.

\item \textbf{ACK}: acknowledgement for the received block of tuples.

\item \textbf{SIZE\_CHG}: informing the predecessor about a change in this worker's workload size.
\end{itemize}

\noindent
The pair of TUPLE\_BLK and ACK is used to transfer workload between adjacent workers. According to the protocol, a copy of each forwarded tuple is kept at the origin worker until the corresponding acknowledgement is received. In other words, the TUPLE\_BLK message carries a block of tuples to be forwarded while leaving a copy at the origin worker, and the ACK message triggers the deletion of the copy of the successfully forwarded content. The SIZE\_CHG message reports the workload size so that the workload status stays consistent between collaborating workers.
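A minimal sketch of this bookkeeping (the class and message encodings are our own illustration, not the actual ESJ implementation) keeps each forwarded block in a pending table until the matching ACK arrives, then deletes the copy and emits a SIZE\_CHG toward the predecessor:

```python
class MP2PFEndpoint:
    """Per-worker bookkeeping for TUPLE_BLK / ACK / SIZE_CHG (sketch)."""

    def __init__(self):
        self.pending = {}       # block id -> local copy kept until ACKed
        self.next_id = 0
        self.outbox = []        # messages this worker emits

    def forward_block(self, block):
        # TUPLE_BLK: ship the block to the successor, keep a local copy
        bid, self.next_id = self.next_id, self.next_id + 1
        self.pending[bid] = list(block)
        self.outbox.append(("TUPLE_BLK", bid, list(block)))
        return bid

    def on_ack(self, bid):
        # ACK: the successor now holds the block; delete our copy and
        # report the workload change to the predecessor via SIZE_CHG
        del self.pending[bid]
        self.outbox.append(("SIZE_CHG", len(self.pending)))
```

The pending table is what makes a lost block recoverable: until the ACK arrives, the origin worker can re-send its retained copy.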

\begin{figure}[t]
\centering
\epsfig{file=pic/message_passing_protocol.eps, width=0.95\linewidth}
\caption{Message Passing Protocol for Two-phase Forwarding (MP-2PF).}
\label{fig:mp_proto}
\end{figure}

The MP-2PF protocol is asynchronous (i.e., non-blocking). As illustrated in Figure~\ref{fig:mp_proto}, a worker transitions among three states while processing a tuple block. Note that the worker starts in, and eventually returns to, the \texttt{Processing} state. When a new tuple block arrives, the worker processes the join for it, sends an acknowledgement to its predecessor, and then checks the forwarding condition to decide whether the tuple block forwarding procedure should be invoked. The \textit{forwarding condition} refers to the threshold on the difference of workload sizes between adjacent workers (further explained in Section~\ref{par:reduce_serialization}). If the forwarding condition is met, the worker transitions to the \texttt{Forwarding} state, in which it sends the tuple block to its successor, keeps a copy of the forwarded block locally, and then transitions back to the \texttt{Processing} state. If the worker receives one or more acknowledgements in the \texttt{Processing} state, it transitions to the \texttt{Deleting} state, in which it deletes the copies of the previously forwarded tuples named by the acknowledgement messages, informs its predecessor of the change in workload size, and transitions back to the \texttt{Processing} state. Since the operations in the \texttt{Forwarding} and \texttt{Deleting} states do not block the join processing in the \texttt{Processing} state, the system always makes progress under the asynchronous MP-2PF protocol.
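The transitions described above can be summarized as a transition function (a sketch using the state names from the text; the event names are our own, and the actual join, send, and delete operations are elided):

```python
PROCESSING, FORWARDING, DELETING = "Processing", "Forwarding", "Deleting"

def next_state(state, event):
    """MP-2PF state transitions for one worker (sketch)."""
    if state == PROCESSING and event == "forwarding_condition_met":
        return FORWARDING    # send TUPLE_BLK to successor, keep a copy
    if state == PROCESSING and event == "ack_received":
        return DELETING      # delete ACKed copies, send SIZE_CHG upstream
    if state in (FORWARDING, DELETING):
        return PROCESSING    # both states return immediately (non-blocking)
    return PROCESSING        # any other event: keep joining in place
```

Because \texttt{Forwarding} and \texttt{Deleting} each return immediately to \texttt{Processing}, no event can leave the worker stuck outside its join loop.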

\paragraph{Reducing Serialization}
\label{par:reduce_serialization}

\begin{figure}[t]
\centering
\epsfig{file=pic/tuple_block.eps, width=0.95\linewidth}
\caption{Workload Difference between Workers.}
\label{fig:tuple_blk}
\end{figure}

Messages must be serialized before sending and deserialized after receiving. Serialization and deserialization not only consume CPU but also increase the transmission size. For transmitting a fixed number of tuples, the communication overhead is proportional to the number of serializations needed. Naively forwarding one tuple per transmission~\cite{sigmod11:Teubner} incurs tremendous redundant serialization, leading to high communication overhead. Instead, the MP-2PF protocol adopts the tuple block as the transmission unit. A \textit{tuple block} is composed of a number of tuples; let $\varsigma$ be the block size, i.e., the number of tuples in a block. As shown in Figure~\ref{fig:tuple_blk}~(a), a worker transfers $\varsigma$ tuples to its successor only when its workload exceeds its successor's by $\varsigma$; otherwise, no tuple transfer occurs, as shown in Figure~\ref{fig:tuple_blk}~(b). This serves as the forwarding condition. Figure~\ref{fig:tuple_blk} also reveals that the maximum workload difference between workers is exactly $\varsigma$ tuples. Therefore, choosing an appropriate $\varsigma$ involves a trade-off: a large $\varsigma$ implies fewer serializations but greater instantaneous workload imbalance. Let $\alpha \in \left( 0, 0.8 \right)$ be the weight factor of instantaneous workload imbalance relative to serialization times; setting $\alpha = 0.5$ gives equal consideration to both. The following theorem provides a guideline for choosing $\varsigma$.
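The bound on the workload difference can be sanity-checked with a toy simulation (an assumed setup of ours: one worker receives all tuples and neither worker consumes any, which isolates the forwarding logic):

```python
def simulate(arrivals, block_size):
    """Feed tuples to worker A; A forwards one block of `block_size`
    tuples to successor B whenever the forwarding condition holds.
    Returns the largest workload difference ever observed."""
    load_a, load_b, max_diff = 0, 0, 0
    for _ in range(arrivals):
        load_a += 1
        if load_a - load_b >= block_size:   # forwarding condition
            load_a -= block_size            # ship one tuple block
            load_b += block_size
        max_diff = max(max_diff, abs(load_a - load_b))
    return max_diff
```

For any arrival count, the observed difference never exceeds the block size, matching the $\varsigma$ bound stated above.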

\begin{theorem} \label{thm:blk_size}
For a given join window size $\varphi$, number of workers $m$, worker capacity $W$, and factor $\alpha$, the optimal tuple block size $\varsigma$ depends on the incoming rate $\varpi$ of the input stream, and is given by
\begin{equation*}
\varsigma = \begin{dcases*}
            {\lfloor \sqrt{\dfrac{1}{\alpha} - 1} \cdot \dfrac{\varpi\varphi}{m} \rfloor} & {if $0 < \varpi \leq \beta_{1}$} \\
            {\lfloor \dfrac{\varpi\varphi}{m \cdot \lceil \dfrac{\varpi\varphi}{mW - \varpi\varphi} \rceil} \rfloor} & {if $\beta_{1} < \varpi \leq \beta_{2}$} \\
            {1} & {if $\varpi > \beta_{2}$}
            \end{dcases*}
\end{equation*}
where $\beta_{1} = \dfrac{1}{2} \sqrt{\dfrac{\alpha}{1 - \alpha}} \cdot \dfrac{mW}{\varphi}$ and $\beta_{2} = \dfrac{mW}{\varphi}$.
\end{theorem}

\noindent
The intuition behind Theorem~\ref{thm:blk_size} is that, as long as the workload imbalance remains affordable, $\varsigma$ is set as large as possible to reduce serialization times; the constraint is that severe workload imbalance may incur false positives in load shedding. The detailed derivation of Theorem~\ref{thm:blk_size} is given in Appendix~\ref{append:opt_tuple_blk}.
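For concreteness, the piecewise rule of Theorem~\ref{thm:blk_size} can be transcribed directly (a sketch of ours; parameter names are assumptions, and we evaluate strictly below the boundary $\varpi = \beta_{2}$, where the middle branch degenerates):

```python
import math

def optimal_block_size(rate, window, m, W, alpha):
    """Tuple block size per Theorem (thm:blk_size): rate = ϖ, window = φ,
    m workers with capacity W each, weight factor alpha."""
    beta1 = 0.5 * math.sqrt(alpha / (1 - alpha)) * m * W / window
    beta2 = m * W / window
    per_worker = rate * window / m        # ϖφ/m, per-worker window load
    if rate <= beta1:                     # unconstrained optimum feasible
        return math.floor(math.sqrt(1 / alpha - 1) * per_worker)
    if rate < beta2:                      # capacity-constrained branch
        st = math.ceil(rate * window / (m * W - rate * window))
        return math.floor(per_worker / st)
    return 1                              # overload: minimal block size
```

For example, with $m = 4$, $W = 1000$, $\varphi = 10$, and $\alpha = 0.5$, the thresholds are $\beta_{1} = 200$ and $\beta_{2} = 400$, and the three branches are exercised at rates $100$, $300$, and $500$, respectively.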

\section*{Appendix}
\subsection{Derivation for the Optimal Size of Tuple Block}
\label{append:opt_tuple_blk}
\begin{defn} \label{defn:st}
In the time span of $\varphi$, with average incoming rate $\varpi$, the \textit{serialization times} (ST), i.e., the number of blocks transmitted per worker, is given by
\begin{equation*}
ST = \dfrac{\varpi\varphi}{m\varsigma}
\end{equation*}
\end{defn}

\begin{defn} \label{defn:if}
In the time span of $\varphi$, with average incoming rate $\varpi$, the \textit{imbalance factor} (IF), i.e., the maximum workload difference relative to the per-worker load, is given by
\begin{equation*}
IF = \dfrac{m\varsigma}{\varpi\varphi}
\end{equation*}
\end{defn}

\begin{defn} \label{defn:j}
The target function w.r.t. balancing ST and IF is given by
\begin{equation*}
J(\varsigma, \varpi, \varphi, m, \alpha) = (1 - \alpha) \cdot ST + \alpha \cdot IF
\end{equation*}
\end{defn}

To find the optimal $\varsigma$, we solve the following problem:

\begin{equation*}
\begin{split}
&\min_{\varsigma} J(\varsigma, \varpi, \varphi, m, \alpha) \\
s.t.\ &ST, \varsigma \in \mathbb{N} \\
	 &\varsigma \leq \dfrac{1}{2}W \\
	 &\dfrac{\varpi\varphi}{m} \leq W \\
	 &\varsigma + \dfrac{\varpi\varphi}{m} \leq W
\end{split}
\end{equation*}

Substituting Definitions~\ref{defn:st} and \ref{defn:if} into Definition~\ref{defn:j} gives

\begin{equation*}
J = (1 - \alpha) \cdot \dfrac{\varpi\varphi}{m\varsigma} + \alpha \cdot \dfrac{m\varsigma}{\varpi\varphi}
\end{equation*}

\noindent
Setting $\dfrac{\mathrm{d}J}{\mathrm{d}\varsigma} = -(1 - \alpha) \cdot \dfrac{\varpi\varphi}{m\varsigma^{2}} + \alpha \cdot \dfrac{m}{\varpi\varphi} = 0$, we get

\begin{equation}
\varsigma = \sqrt{\dfrac{1}{\alpha} - 1} \cdot \dfrac{\varpi\varphi}{m}
\end{equation}

\noindent
\textit{Case 1}: $\sqrt{\dfrac{1}{\alpha} - 1} \cdot \dfrac{\varpi\varphi}{m} \leq \dfrac{W}{2}$

\begin{equation*}
0 < \varpi \leq \dfrac{1}{2} \sqrt{\dfrac{\alpha}{1 - \alpha}} \cdot \dfrac{mW}{\varphi}
\end{equation*}

\noindent
Then, the optimal block size is

\begin{equation}
\varsigma = \lfloor \sqrt{\dfrac{1}{\alpha} - 1} \cdot \dfrac{\varpi\varphi}{m} \rfloor
\end{equation}

\noindent
\textit{Case 2}: $\dfrac{W}{2} < \sqrt{\dfrac{1}{\alpha} - 1} \cdot \dfrac{\varpi\varphi}{m}$

\begin{equation*}
\dfrac{1}{2} \sqrt{\dfrac{\alpha}{1 - \alpha}} \cdot \dfrac{mW}{\varphi} < \varpi \leq \dfrac{mW}{\varphi}
\end{equation*}

\noindent
In this case, the unconstrained optimum is infeasible, so $\varsigma$ is set to the largest value satisfying $\varsigma + \dfrac{\varpi\varphi}{m} \leq W$ while keeping $ST$ an integer. The smallest admissible integer is $ST = \lceil \dfrac{\varpi\varphi}{mW - \varpi\varphi} \rceil$, which gives

\begin{equation}
\varsigma = \lfloor \dfrac{\varpi\varphi}{m \cdot \lceil \dfrac{\varpi\varphi}{mW - \varpi\varphi} \rceil} \rfloor
\end{equation}
