\newpage

\subsection{Enhancing Join Processing Efficiency}

\begin{figure}[t]
\centering
\epsfig{file=pic/join_processing.eps, width=0.9\linewidth}
\caption{Data Structure for Join Processing. (a) FIFO queue-based brute-force pairwise comparison; (b) BST index-based search-and-join; (c) hash index-based search-and-join.}
\label{fig:join_proc}
\end{figure}

One simple way to process a streaming join is brute-force pairwise comparison over first-in-first-out (FIFO) queues~\cite{sigmod11:Teubner}. However, this is far from efficient unless most tuple pairs satisfy the join predicate, which is rare in practice. To improve performance, ESJ uses an in-memory index to accelerate join processing. When a tuple is to be joined with the opposite stream, the worker searches the index for the join keys satisfying the predicate, and then joins the tuple with the tuples associated with those keys. The performance gain comes from avoiding unnecessary comparisons. Note that maintaining the index is not free; its benefit mainly depends on the input size. Furthermore, the gain also depends on matching the type of index to the type of predicate.
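As a concrete illustration, the brute-force scheme can be sketched as follows (a minimal Python sketch of ours, not ESJ's actual implementation): each stream is buffered in a FIFO queue, and every arriving tuple is compared against the entire opposite queue.

```python
from collections import deque

def naive_join_step(tup, own_queue, opposite_queue, predicate, out):
    """Brute-force pairwise comparison: probe the whole opposite queue."""
    for other in opposite_queue:
        if predicate(tup, other):       # every buffered pair is tested
            out.append((tup, other))
    own_queue.append(tup)               # FIFO insertion (eviction omitted)

# Hypothetical usage: equality predicate on the first field.
r_q, s_q, out = deque(), deque(), []
for t in [("a", 1), ("b", 2)]:
    naive_join_step(t, r_q, s_q, lambda x, y: x[0] == y[0], out)
for t in [("a", 10), ("c", 20)]:
    naive_join_step(t, s_q, r_q, lambda x, y: x[0] == y[0], out)
# out now holds the single matching pair (("a", 10), ("a", 1))
```

Every probe costs comparisons proportional to the opposite queue size, regardless of how many pairs actually match, which is exactly the inefficiency quantified below.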

\subsubsection{Quantitative Analysis}
To evaluate the join processing quantitatively, we formally define the indicator of processing efficiency as follows.

\begin{defn}
The join processing efficiency $\varrho$ evaluates the comparison operations contributing to the join outputs, defined as

\begin{equation*}
\varrho = \dfrac{number\ of\ outputs}{number\ of\ comparisons}
\end{equation*}
\end{defn}

Physically, $\varrho$ represents the effective CPU utilization of the join processing: the higher, the better. Let $r$ and $s$ be the instantaneous workload sizes of the two opposite streams, $n = \max \{r, s\}$, $\breve{n} = \min \{r, s\}$, and let $\kappa$ be the size of the output produced by joining these tuples. The processing efficiency of brute-force pairwise comparison, $\varrho_{naive}$, is

\begin{equation*}
\varrho_{naive} = \dfrac{\kappa}{r \cdot s} = \dfrac{\kappa}{n \cdot \breve{n}}
\end{equation*}

\noindent 
In practice, $\kappa \ll r \cdot s$ is common, leading to $\varrho_{naive} \ll 1$. For example, $r = 500$, $s = 400$ and $\kappa = 100$ imply $\varrho_{naive} = 0.0005$, indicating that only 0.05\% of the CPU utilization for join processing is effective. Note that $\varrho_{naive} \to 1$ would imply that most tuples in both streams share the same join key, which is extremely rare.
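The arithmetic behind this example is easy to check directly (a trivial sketch; the function name is ours):

```python
def naive_efficiency(r, s, kappa):
    # rho_naive = kappa / (r * s): outputs per pairwise comparison
    return kappa / (r * s)

rho = naive_efficiency(500, 400, 100)   # 100 / 200000 = 0.0005
```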

\subsubsection{Accelerating the Theta-Join Processing}
To improve join processing efficiency, ESJ introduces an auxiliary in-memory index to support theta-join, which covers both inequality joins and equality joins. Since an inequality operator requires accessing a range of join keys indicated by the predicate, a balanced binary search tree (BST) index is suitable for processing inequality joins. Conversely, since the equality operator accesses only a single join key, a hash index is preferable for processing equality joins. The keys maintained in the BST and hash indices are the join keys of the corresponding predicate. The following two theorems quantify the expected performance of using an index to process streaming theta-joins.

\begin{theorem} \label{thm:bst_join}
Using a BST index, processing an inequality join takes $\mathcal{O}(n \log n)$ time on average if the input is large and not highly skewed w.r.t.\ the join key.
\end{theorem}

As shown in Figure~\ref{fig:join_proc}~(b), when a tuple arrives at the worker, it searches the BST index for the satisfying keys and, if any are found, joins with the tuples associated with those keys. Each search takes $\mathcal{O}(\log n)$ time, and joining with the associated tuples takes time linear in the number of matches. Thus, in the best case (no matching tuple, or joining with a single tuple), processing all $n$ tuples via this search-and-join takes $\mathcal{O}(n \log n)$ time overall. In practice, this is also the average processing time.
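The BST-based search-and-join can be sketched as follows (illustrative only; a sorted key list with binary search stands in for the balanced BST, so search is $\mathcal{O}(\log n)$ as in the analysis, though list insertion here is $\mathcal{O}(n)$ where a real balanced tree would also insert in $\mathcal{O}(\log n)$):

```python
import bisect

class InequalityJoinIndex:
    """Sorted key list + binary search, standing in for a balanced BST."""

    def __init__(self):
        self.keys = []      # distinct join keys, kept sorted
        self.buckets = {}   # join key -> list of tuples with that key

    def insert(self, key, tup):
        if key not in self.buckets:
            bisect.insort(self.keys, key)   # keep key order for range search
            self.buckets[key] = []
        self.buckets[key].append(tup)

    def join_less_than(self, key):
        # Range search for an inequality predicate (here: stored key < key):
        # only keys inside the satisfying range are visited, so every
        # tuple touched contributes to the output.
        hi = bisect.bisect_left(self.keys, key)
        for k in self.keys[:hi]:
            yield from self.buckets[k]
```

The range scan visits only the keys that satisfy the predicate, which is the source of the efficiency gain analyzed next.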

The performance of search-and-join depends on the distribution of the join key domain. Let $\eta_{r}$ and $\eta_{s}$ be the average number of tuples associated with one join key value in the two streams, respectively, and $\eta = \max \{\eta_{r}, \eta_{s}\}$. Using the BST index, the processing efficiency of inequality join, $\varrho_{bst}$, is

\begin{equation*}
\varrho_{bst} 
= \dfrac{\kappa}{r \cdot \mathcal{O}(\log s) + s \cdot \mathcal{O}(\log r) + \kappa} 
\geq \dfrac{\kappa}{n \cdot (\mathcal{O}(\log n) + \eta)}
\end{equation*}

\noindent
Compared with brute-force pairwise comparison, we have

\begin{equation*}
\dfrac{\varrho_{bst}}{\varrho_{naive}} \geq \dfrac{\breve{n}}{\mathcal{O}(\log n) + \eta}
\end{equation*}

\noindent
If the tuples are not highly skewed w.r.t.\ the join key, so that $\eta = \mathcal{O}(\log n)$, and the amount of tuples is large enough that $\breve{n} = \omega(\log n)$, then we have

\begin{equation*}
\varrho_{bst} \gg \varrho_{naive}
\end{equation*}
for sufficiently large input.

\begin{theorem} \label{thm:hash_join}
Using a hash index, processing an equality join takes $\mathcal{O}(n)$ time on average if the input tuples are not highly skewed w.r.t.\ the join key.
\end{theorem}

As shown in Figure~\ref{fig:join_proc}~(c), search-and-join via the hash index follows a strategy similar to that via the BST index. Since each search via the hash index takes $\mathcal{O}(1)$ expected time, the overall processing time is expected to be $\mathcal{O}(n)$ on average. The processing efficiency $\varrho_{hash}$ is

\begin{equation*}
\varrho_{hash} = \dfrac{\kappa}{r \cdot \mathcal{O}(1) + s \cdot \mathcal{O}(1) + \kappa}
	\geq \dfrac{\kappa}{n \cdot (C_{hash} + \eta)}
\end{equation*}

\noindent
where $C_{hash}$ is the constant hidden in the big-$\mathcal{O}$ notation. Compared with brute-force pairwise comparison, we have

\begin{equation*}
\dfrac{\varrho_{hash}}{\varrho_{naive}} \geq \dfrac{\breve{n}}{C_{hash} + \eta}
\end{equation*}

\noindent
If the hash function is good enough to keep $\eta \leq C_{hash}$, and $C_{hash} \ll \breve{n}$ holds, then we have

\begin{equation*}
\varrho_{hash} \gg \varrho_{naive}
\end{equation*}
for sufficiently large input.
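The hash-based search-and-join described above can likewise be sketched with a dictionary mapping each join key to its bucket of tuples (an illustrative sketch, not ESJ's code), so that a probe touches only tuples that actually share the key:

```python
class EqualityJoinIndex:
    """Hash index for equality join: join key -> tuples with that key."""

    def __init__(self):
        self.buckets = {}

    def insert(self, key, tup):
        # O(1) expected insertion into the key's bucket
        self.buckets.setdefault(key, []).append(tup)

    def probe(self, key):
        # O(1) expected lookup; only tuples with an equal join key are
        # returned, so every returned tuple contributes to the output
        return self.buckets.get(key, [])
```

Because a probe never scans non-matching keys, the per-tuple cost is a constant plus the number of matches, matching the $C_{hash} + \eta$ term in the bound above.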

Theorems \ref{thm:bst_join} and \ref{thm:hash_join} reveal that an in-memory index improves the efficiency of join processing when the streaming input is large. Our experimental results further show that, in practice, this benefit can reach orders of magnitude.
