\section{ESJ Operator}
\label{sec:design}

% equi-join
% non-equi-join

In this section, we describe the design of the ESJ operator as well as its related components, which together form an ecosystem for processing streaming SELECT-FROM-WHERE queries.

% \subsection{Goals}
% ESJ operator aims at efficient streaming join processing in the large-scale distributed environment. To this end, we need to attain the following goals: 

% \begin{enumerate}
% \item Efficient and scalable computation. On the one hand, allocated computation resources should be exploited effectively. On the other hand, the architecture should adapt to the workload varying in a wide range.

% \item Reducing overhead. 

% Communication + Join processing

% \item Facilitating materialization and queries.
% \end{enumerate}

\begin{figure*}[t]
\centering
\epsfig{file=pic/architecture.eps, width=0.8\textwidth}
\caption{Overall Architecture of ESJ.}
\label{fig:arch}
\end{figure*}

\subsection{Architecture}
The ESJ operator aims at efficient streaming join processing in large-scale distributed environments. To attain the goals listed below, we propose an architecture composed of the join engine and the peripherals, as shown in Figure~\ref{fig:arch}. 

First, handling big streaming data requires scalable computation. The join engine therefore applies a dataflow-oriented processing model based on message passing. All join workers are connected in a cascade by stream channels. By instantiating a set of workers for each join task, ESJ supports running multiple join tasks over the same input streams concurrently. Since the workers are organized in a cascade, ESJ can easily increase or decrease the number of workers on demand, adapting to workloads that vary over a wide range. 

Second, the allocated computation resources should be utilized effectively. During message passing, ESJ applies the criterion that a worker transfers part of its workload to its successor (w.r.t. the stream direction) if and only if its workload exceeds its successor's. This criterion guarantees that workloads are distributed to all workers evenly and automatically.
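As an illustrative sketch (not part of the ESJ implementation), the criterion above can be simulated on a chain of workers, simplified here to unit-sized load transfers per pass; the worker count and initial loads are arbitrary:

```python
def balance_step(loads):
    """One pass of the forwarding criterion: worker i hands one unit
    of load to worker i+1 iff its load exceeds its successor's."""
    loads = list(loads)
    for i in range(len(loads) - 1):
        if loads[i] > loads[i + 1]:
            loads[i] -= 1
            loads[i + 1] += 1
    return loads

# all load arrives at the head of the cascade, then spreads downstream
loads = [10, 0, 0, 0]
for _ in range(20):
    loads = balance_step(loads)
# loads converge toward an even distribution across the chain
```

Repeated application drives the maximum pairwise imbalance down to at most one unit, which is the "even and automatic" distribution the criterion guarantees.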

Third, it is essential to minimize the communication cost, since network bandwidth is usually limited. Thus all worker instances belonging to the same join worker share the stream pair, so that processing is conducted without duplicating the streams. Furthermore, we propose an optimized message passing protocol to reduce redundant serialization and save communication overhead. 

Fourth, the generality of an operator requires that it be applicable to inputs of diverse variety and velocity. To this end, the input adapter bridges the gap in data structure between the input source and the stream processing, and the load shedder tolerates fluctuations of the input streams.

%%%%%%%%%%%%%%%

ESJ applies a dataflow-oriented processing model in the distributed environment. As shown in Figure~\ref{fig:arch}, the overall architecture of ESJ is composed of two main parts: the join engine and the peripherals. The join engine involves a series of workers connected by half-duplex stream channels based on message passing. It performs distributed handshake join, inheriting the property of autonomic load balancing~\cite{sigmod11:Teubner}. All join workers are organized in a cascade, so that each join task is processed by a set of processing instances. All processing instances belonging to the same join worker share the stream pair, so that processing is conducted without duplicating the streams. The peripherals comprise the input adapter, load shedder, result materialization and query proxy. These components are pipelined with the join engine to provide stream conversion, projection, selection, input workload control, output materialization and online query. Note that the join engine and the peripherals are collaborative yet decoupled. In other words, besides bundling them together as a standalone system, the join engine can be pipelined with other stream operators, and the peripherals are agnostic to the operator's logic, so that they can provide input/output/query services to different kinds of stream operators. 

ESJ is implementable on prevalent distributed stream processing frameworks such as Apache S4~\cite{icdm10w:Neumeyer} and Twitter Storm~\cite{web:storm}. We implemented a prototype of ESJ on Apache S4, which supports automatic scaling to multiple join tasks by instantiating join workers on the fly. 

In the following, we first elaborate on the mechanisms of join processing and message passing, and then describe the peripherals: the input adapter, adaptive load shedding, result materialization and the query proxy.





\subsection{Join engine}
\subsubsection{Efficient theta-join processing}

\begin{figure}[t]
\centering
\epsfig{file=pic/join_processing.eps, width=0.7\linewidth}
\caption{Data Structure for Join Processing. (a) FIFO queue-based brute-force pairwise comparison; (b) BST-based search-and-join; (c) hash map-based search-and-join.}
\label{fig:join_proc}
\end{figure}

Theta-join processing resolves the equality and inequality join predicates in the WHERE clause. One naive solution uses a First-In-First-Out (FIFO) queue for each stream (Figure~\ref{fig:join_proc}~(a)) and applies brute-force pairwise comparison~\cite{sigmod11:Teubner}. However, it suffers from poor efficiency, requiring $O(n^2)$ processing time in both the worst case and the best case, where $n$ is the number of contemporary valid tuples residing in the join workers. This becomes critical when the incoming data rate is high and the join window is large.

% When a tuple arrives at a join worker, it immediately scans the opposite stream hosted in the same worker and join with the matching tuples with respect to the join key. Such scan-probe-join processing takes $O(n)$ time for each tuple (in both worst-case and best-case), leading the overall processing time to be $O(n^2)$.

Aiming at better performance, ESJ introduces auxiliary data structures to accelerate theta-join processing. For inequality join predicates, ESJ applies a balanced binary search tree (BST) per stream per worker (Figure~\ref{fig:join_proc}~(b)). The keys maintained in a BST are the join keys of the corresponding join task. Consequently, when a tuple arrives at a join worker, instead of performing brute-force pairwise comparison on the FIFO queue, the newly arrived tuple searches the BST for the matching key and, if found, joins with the tuples associated with that key. Each search takes $O(\log n)$ time, and joining with the associated tuples takes linear time. Thus in the best case (no matching tuple, or joining with a single tuple) the overall processing time of this search-and-join is $O(n \log n)$. Although the worst-case processing time is still $O(n^2)$, the worst case is rare because it implies that all tuples in a stream share the same join key value. In practice, the average processing time is expected to be $O(n \log n)$. For equality join predicates, ESJ applies a hash map per stream per worker, following the same search-and-join strategy as the BST-based inequality join processing (Figure~\ref{fig:join_proc}~(c)). Since each search via the hash map takes $O(1)$ time, the overall processing time is expected to be $O(n)$ on average.
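The hash map-based search-and-join for an equality predicate can be sketched as follows. This is an illustrative Python sketch, not the ESJ implementation: window eviction and distribution are omitted, and the stream names and tuples are hypothetical.

```python
from collections import defaultdict

class EqualityJoinWorker:
    """Hash map-based search-and-join for an equality join predicate."""
    def __init__(self):
        # one hash map per stream: join key -> list of tuples with that key
        self.index = {"R": defaultdict(list), "S": defaultdict(list)}

    def on_tuple(self, stream, key, tup):
        other = "S" if stream == "R" else "R"
        # search: O(1) expected lookup instead of scanning a FIFO queue
        matches = [(tup, m) if stream == "R" else (m, tup)
                   for m in self.index[other].get(key, [])]
        # insert the new tuple so future arrivals on the other stream find it
        self.index[stream][key].append(tup)
        return matches

w = EqualityJoinWorker()
w.on_tuple("R", key=7, tup="r1")
w.on_tuple("R", key=7, tup="r2")
out = w.on_tuple("S", key=7, tup="s1")   # joins with r1 and r2
```

The BST-based variant differs only in the index structure: an ordered map replaces the hash map so that inequality predicates can be answered by range traversal in $O(\log n)$ per search.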

%%% Quantitative Analysis %%%
For quantitative analysis of performance, we define the processing efficiency $\varrho$ as follows:

\begin{equation}
\varrho = \dfrac{\text{number of outputs}}{\text{number of comparisons}}
\end{equation}

\noindent 
$\varrho$ represents the effective CPU utilization of join processing: the higher, the better. Let $r$ and $s$ be the instantaneous load sizes of the two opposite streams, $n = \max \{r, s\}$, $\breve{n} = \min \{r, s\}$, and let $\kappa$ be the size of the output produced by joining these tuples. The processing efficiency $\varrho_{naive}$ of brute-force pairwise comparison is:

\begin{equation}
\varrho_{naive} = \dfrac{\kappa}{r \cdot s} = \dfrac{\kappa}{n \cdot \breve{n}}
\end{equation}

\noindent 
In practice, $\kappa \ll r \cdot s$ is usual, leading to $\varrho_{naive} \ll 1$. For example, $r = 500$, $s = 400$ and $\kappa = 100$ imply $\varrho_{naive} = 0.0005$, indicating that only 0.05\% of the CPU utilization spent on join processing is effective. Note that $\varrho_{naive} = 1$ would imply that all tuples in both streams share the same join key, which is extremely rare.
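The worked example above is a direct application of the definition; the following two-line check reproduces it (numbers taken from the text):

```python
def naive_efficiency(r, s, kappa):
    """Efficiency of brute-force pairwise comparison: every arrival
    is compared against the full opposite queue, so r*s comparisons."""
    return kappa / (r * s)

rho = naive_efficiency(500, 400, 100)   # 100 / 200000 = 0.0005
```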

ESJ applies the search-and-join mechanism to accelerate theta-join processing. As mentioned, the performance of search-and-join depends on the domain distribution of the join key. Let $\eta_{r}$ and $\eta_{s}$ be the average number of tuples associated with one join key value in the two streams, respectively, and $\eta = \max \{\eta_{r}, \eta_{s}\}$. The processing efficiency $\varrho_{bst}$ of the BST-based processing for inequality join is: 

\begin{equation}
\begin{split}
\varrho_{bst} &= \dfrac{\kappa}{r \cdot O(\log s) + s \cdot O(\log r) + \kappa} \\
&\geq \dfrac{\kappa}{r \cdot (O(\log s) + \eta_{s}) + s \cdot (O(\log r) + \eta_{r})} \\
&\geq \dfrac{\kappa}{n \cdot (O(\log n) + \eta)}
\end{split}
\end{equation}

\noindent 
If $\eta = O(\log n)$, then we have 

\begin{equation}
\varrho_{bst} \geq \dfrac{\kappa}{2 \cdot O(n \log n)} = \dfrac{1}{2 \cdot C_{bst}} \cdot \dfrac{\kappa}{n \log n}
\end{equation}
where $C_{bst}$ is the constant hidden in the big-Oh notation. 

To compare the processing efficiency between the brute-force pairwise comparison and the BST-based inequality join, we have 

\begin{equation}
\dfrac{\varrho_{bst}}{\varrho_{naive}} \geq \dfrac{1}{2 \cdot C_{bst}} \cdot \dfrac{\breve{n}}{\log n}
\end{equation}

\noindent 
Taking the BST maintenance into consideration, in practice, it is often the case that $C_{bst} \ll \breve{n}/\log n$. Therefore, we have

\begin{equation} \label{eq:bst_gg_naive}
\varrho_{bst} \gg \varrho_{naive}
\end{equation}
provided $\breve{n} \gg \log n$.

Similarly, for the hash map-based equality join, we have the corresponding $\varrho_{hash}$ as: 

\begin{equation}
\begin{split}
\varrho_{hash} &= \dfrac{\kappa}{r \cdot O(1) + s \cdot O(1) + \kappa} \\
&\geq \dfrac{\kappa}{r \cdot (O(1) + \eta_{s}) + s \cdot (O(1) + \eta_{r})} \\
&\geq \dfrac{\kappa}{n \cdot (O(1) + \eta)} \\
&= \dfrac{\kappa}{n \cdot (C_{hash} + \eta)}
\end{split}
\end{equation}

\noindent 
where $C_{hash}$ is the constant hidden in the big-Oh notation. If the hash function is good enough to restrict $\eta \leq C_{hash}$, then we have 

\begin{equation}
\varrho_{hash} \geq \dfrac{\kappa}{2 \cdot C_{hash} \cdot n}
\end{equation}

To compare the processing efficiency between the brute-force pairwise comparison and the hash map-based equality join, we have 

\begin{equation}
\dfrac{\varrho_{hash}}{\varrho_{naive}} \geq \dfrac{\breve{n}}{2 \cdot C_{hash}}
\end{equation}

\noindent 
Taking the hash map maintenance into consideration, in practice, it is often the case that $C_{hash} \ll \breve{n}$. Therefore, we have

\begin{equation} \label{eq:hash_gg_naive}
\varrho_{hash} \gg \varrho_{naive}
\end{equation}
for sufficiently large $\breve{n}$.

As can be seen from inequalities~\eqref{eq:bst_gg_naive} and \eqref{eq:hash_gg_naive}, ESJ outperforms the primitive handshake join when dealing with big streaming data.

%%% Multi-attribute join %%%
Usually, there are multiple theta-join predicates in a WHERE clause. We separate them into two categories with different processing mechanisms. One predicate is chosen as the main predicate, which is processed through the BST or hash map as described above. The remaining predicates are treated as secondary predicates, which are processed by piggybacking on the processing of the main predicate. Consequently, the multi-attribute join predicates of one query can be processed in a single join task with the same asymptotic time as single-predicate processing.
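A minimal sketch of this piggybacking, assuming the main predicate is an equality on a key and the secondary predicate is an arbitrary boolean condition on the candidate pair (the tuple layout and attribute names are hypothetical):

```python
from collections import defaultdict

def search_and_join(index, key, tup, secondary):
    """Probe the opposite stream's index on the main (equality) predicate,
    then piggyback the secondary predicate as a filter over the candidates.
    `secondary` is a function (left_tuple, right_tuple) -> bool."""
    return [(m, tup) for m in index.get(key, []) if secondary(m, tup)]

index = defaultdict(list)                    # opposite stream: price -> tuples
index[10].extend([("a", 10, 1), ("b", 10, 5)])
# main predicate: R.price = S.price; secondary predicate: R.qty < S.qty
out = search_and_join(index, 10, ("c", 10, 3), lambda l, r: l[2] < r[2])
# only ("a", 10, 1) has qty 1 < 3, so it is the sole surviving candidate
```

Since the secondary predicates only filter candidates already produced by the main-predicate search, the asymptotic time of the unit-predicate analysis carries over.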

\subsubsection{Message passing}
For streaming join, it is essential to maintain consistent states of join processing and join window control between collaborating processing units. Multi-core, main-memory join algorithms usually rely on \textit{shared memory} for information exchange. However, the shared memory paradigm is not suitable for distributed computation; instead, \textit{message passing} is practical for information exchange in a distributed environment. Since every join worker in ESJ knows little about its neighbors unless they inform it, we propose a message passing protocol for two-phase forwarding, namely MP-2PF. It exploits a strategy of passively exchanging information between workers: whenever the state of a worker changes, it immediately informs the neighbors that depend on this information. With careful implementation, this passive exchange keeps each worker's view of its neighbors up to date in real time. 

There are five types of messages in MP-2PF:

\begin{itemize}
\item \textbf{TUPLE}: tuple transmission between neighbors.

\item \textbf{ACK}: acknowledgement for the received tuple.

\item \textbf{TUPLE\_BLK}: tuple block transmission between neighbors.

\item \textbf{ACK\_BLK}: acknowledgement for the received tuple block.

\item \textbf{SIZE\_CHG}: informing predecessor about its load size.
\end{itemize}

The pair of TUPLE and ACK messages implements naive one-by-one tuple forwarding, while the pair of TUPLE\_BLK and ACK\_BLK implements the advanced strategy of forwarding batched tuples to reduce communication overhead. Following the two-phase forwarding mechanism~\cite{sigmod11:Teubner}, a copy of the forwarded tuple (or tuple block) is kept in the origin worker until the corresponding acknowledgement is received. In other words, the TUPLE (or TUPLE\_BLK) message carries the forwarded tuple (or tuple block) while leaving a copy in the origin worker, and the ACK (or ACK\_BLK) message triggers the deletion of the copy of the successfully forwarded content. The SIZE\_CHG message informs the predecessor of the current load size so that autonomic load balancing can be conducted.
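The keep-until-acknowledged behavior of two-phase forwarding can be sketched as follows. This is an illustrative sketch, not MP-2PF itself: the message dictionaries, sequence numbers, and `send` callback are hypothetical stand-ins for the actual channel.

```python
class ForwardingWorker:
    """Two-phase forwarding sketch: a forwarded tuple stays in a
    pending buffer until the successor acknowledges it."""
    def __init__(self):
        self.pending = {}      # seq -> copy of a tuple awaiting ACK
        self.next_seq = 0

    def forward(self, tup, send):
        seq = self.next_seq
        self.next_seq += 1
        self.pending[seq] = tup        # phase 1: keep a local copy
        send({"type": "TUPLE", "seq": seq, "payload": tup})
        return seq

    def on_ack(self, seq):
        # phase 2: the ACK triggers deletion of the forwarded copy,
        # so a tuple in flight is never lost to the join
        self.pending.pop(seq, None)

sent = []
w = ForwardingWorker()
s = w.forward("t1", sent.append)       # copy retained while in flight
w.on_ack(s)                            # copy removed once acknowledged
```

Batched forwarding (TUPLE\_BLK/ACK\_BLK) follows the same pattern with a list of tuples as the payload.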

\begin{figure}[t]
\centering
\epsfig{file=pic/message_passing_protocol.eps, width=0.95\linewidth}
\caption{Message Passing Protocol for Two-phase Forwarding (MP-2PF).}
\label{fig:mp_proto}
\end{figure}

The MP-2PF protocol is asynchronous (i.e., non-blocking). As illustrated in Figure~\ref{fig:mp_proto}, a worker transitions among three states while processing a tuple (or tuple block). Note that the worker starts in and eventually returns to the \texttt{Processing} state. When a new tuple (or tuple block) arrives, the worker processes the join for it, sends an acknowledgement to its predecessor, and then checks the forwarding condition to decide whether a forwarding procedure should be invoked. The \textit{forwarding condition} refers to a threshold on the difference of load sizes between adjacent workers. If the forwarding condition is met, the worker transitions to the \texttt{Forwarding} state, where it sends the tuple (or tuple block) to its successor, keeps a forwarded copy at its site, and then transitions back to the \texttt{Processing} state. If the worker receives one or more acknowledgements in the \texttt{Processing} state, it transitions to the \texttt{Deleting} state, where it deletes the copies of the previously forwarded tuples referenced by the acknowledgement messages, informs its predecessor of the change of load size, and transitions back to the \texttt{Processing} state. Since the operations in the \texttt{Forwarding} and \texttt{Deleting} states do not block the join processing in the \texttt{Processing} state, the system always makes progress under the asynchronous MP-2PF protocol.

%%% Blocked tuple transfer %%%
Naively forwarding tuples one-by-one~\cite{sigmod11:Teubner} incurs tremendous redundant serialization and communication cost. To reduce this overhead, we propose the strategy of \textit{blocked tuple transfer}: each join worker batches a number of tuples into a tuple block and uses it as the forwarding unit, saving redundant serialization and network bandwidth. There is a trade-off in choosing the batching size: a bigger batching size saves more communication overhead but leads to more instantaneous load imbalance. Let $\varsigma$ be the batching size. We define the \textit{serialization ratio} (SR) to represent the overhead of serialization, the \textit{transmission cost} (TC) to represent the required network bandwidth, and the \textit{imbalance factor} (IF) to represent the instantaneous load imbalance.

\begin{equation}
SR = \dfrac{C_{ser}}{\varsigma}
\end{equation}
where $C_{ser}$ is a constant standing for the serialization cost.

\begin{equation}
TC(x) = (C_{tr} + SR) \cdot x
\end{equation}
where $C_{tr}$ is a constant standing for the cost of payload transmission per tuple and $x$ is the number of tuples to transfer.

\begin{equation}
IF = \dfrac{\omega_{max} - \omega_{avg}}{\omega_{avg}}
\end{equation}
where $\omega_{max}$ is the maximum instantaneous workload size among workers, and $\omega_{avg}$ is the average instantaneous workload size. According to the mechanism of blocked tuple transfer, $\omega_{max} - \omega_{avg}$ is bounded by $\varsigma$. So we have 

\begin{equation}
IF \approx \dfrac{\varsigma}{\omega_{avg}}
\end{equation}

A large batching size implies a small serialization ratio and a small transmission cost, which reduce the CPU consumption for serialization and the network bandwidth, respectively. But it also implies a big imbalance factor, which increases the load imbalance for the same overall load size on average. In real-world applications, we can tune the batching size to find the ``sweet spot'' of maximum performance. Let $\varpi$ be the incoming rate, $\varphi$ the size of the join window, $m$ the number of workers, and $\alpha \in (0, 1)$ the weighting factor for the transmission cost. We describe the trade-off with the following target function: 

\begin{equation}
\begin{split}
J(\varsigma, \varpi, \varphi, m, \alpha) &= \alpha \cdot TC(\varpi\varphi) + (1 - \alpha) \cdot IF \\
&= \alpha \cdot \left(C_{tr} + \dfrac{C_{ser}}{\varsigma}\right) \cdot \varpi\varphi + (1 - \alpha) \cdot \dfrac{\varsigma}{\varpi\varphi / m}
\end{split}
\end{equation}
Note that setting $\alpha = 0.5$ gives equal weight to the transmission cost and the imbalance factor. For a given expectation of $\varpi$ and user-specified $\varphi$, $m$ and $\alpha$, the goal is to find the optimal $\varsigma$ that minimizes the target function $J$:

\begin{equation}
\begin{split}
&\min_{\varsigma} J(\varsigma, E(\varpi), \varphi, m, \alpha) \\
&\Rightarrow \varsigma = E(\varpi) \cdot \varphi\sqrt{\dfrac{\alpha}{1 - \alpha} \cdot \dfrac{C_{ser}}{m}}
\end{split}
\end{equation}
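The closed-form minimizer can be checked numerically against the target function. In the sketch below, the constants $C_{tr} = 1$ and $C_{ser} = 8$ and the workload parameters (200 tuples/s, a 10-second window, 4 workers, $\alpha = 0.5$) are illustrative assumptions, not measured values:

```python
import math

def cost(batch, rate, window, m, alpha, c_tr=1.0, c_ser=8.0):
    """Target function J: weighted transmission cost plus imbalance factor."""
    load = rate * window                   # tuples per window (varpi * varphi)
    tc = (c_tr + c_ser / batch) * load     # transmission cost TC
    imb = batch / (load / m)               # imbalance factor IF
    return alpha * tc + (1 - alpha) * imb

def optimal_batch(rate, window, m, alpha, c_ser=8.0):
    """Closed-form minimizer derived above: varsigma* = varpi*varphi
    * sqrt(alpha/(1-alpha) * C_ser/m)."""
    return rate * window * math.sqrt(alpha / (1 - alpha) * c_ser / m)

# exhaustive grid search over integer batch sizes
best = min(range(1, 20001), key=lambda b: cost(b, 200, 10, 4, 0.5))
# the grid minimum agrees with the closed form up to integer rounding
```

Since $J$ is convex in $\varsigma$, the grid minimum lies within one unit of the analytic optimum, confirming the derivation.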

%\noindent
%For example, supposing $C_{ser} = 1$ and $\alpha = 0.5$, if a join task with the join window of 10 seconds runs 4 workers and the incoming rate of the input streams is 200 tuples/second on average, 

% For batching size, we provide two strategies. For the static strategy, the batching size is fixed and set in the configuration file. For the dynamic strategy, we use the static setting as the threshold, and refer to the load size difference between worker neighbors. If the load size difference is greater than the threshold, we employ it as the batching size; otherwise, use the threshold. 

%%% Issue of missed-join pair %%%
Note that doing handshake join between multiple processing units may suffer from the \textit{missed-join pair} problem~\cite{sigmod11:Teubner}. To address this issue, we apply the \textit{two-phase forwarding} strategy~\cite{sigmod11:Teubner} in the distributed join workers.

\subsection{Peripherals}

\subsubsection{Input adapter and load shedding}
The input adapter and load shedder are responsible for streaming source control. The input adapter converts an external data source into streaming input. Moreover, it can also perform pre-selection and pre-projection for the corresponding predicates of the streaming query; for example, only the join-related attributes of every tuple are projected before join processing.

Load shedding is useful when the capacity of join processing is (about to be) saturated. ESJ adopts adaptive load shedding as follows: given a shedding threshold, if the load size is below the threshold, nothing is shed; otherwise, part of the incoming tuples are shed according to a \textit{shed factor} defined as follows:

\begin{equation}
ShedFactor = Base_{shed} + \Delta_{shed} \times (LoadSize - thrh_{shed})
\end{equation}

\begin{equation}
\Delta_{shed} = \dfrac{1 - Base_{shed}}{Capacity - thrh_{shed}}
\end{equation}

This is an adaptive linear shedding model. When the load size exceeds the shedding threshold, the shed factor grows linearly with the load size; once the processing capacity is fully saturated, all incoming tuples are dropped. In practice, the shed factor can be redefined with other shedding functions, such as a quadratic function. Note that load shedding is an optional strategy: to handle a high incoming rate, we can either add more join workers or apply load shedding.
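The two formulas above combine into a single function. The following sketch uses an illustrative base factor of 0.1 and hypothetical capacity/threshold values; the clamp to 1.0 only guards against floating-point overshoot at saturation:

```python
def shed_factor(load, capacity, thrh, base=0.1):
    """Adaptive linear shedding: nothing is shed below the threshold;
    at full capacity every incoming tuple is dropped."""
    if load < thrh:
        return 0.0
    delta = (1 - base) / (capacity - thrh)        # Delta_shed
    return min(1.0, base + delta * (load - thrh)) # ShedFactor

shed_factor(500, capacity=1000, thrh=600)    # below threshold: 0.0
shed_factor(800, capacity=1000, thrh=600)    # partial shedding
shed_factor(1000, capacity=1000, thrh=600)   # saturated: shed everything
```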

\subsubsection{Result materialization}
%%% Snapshot %%%
The join results are structured as snapshots. A \textit{snapshot} is defined as a set of records that share an identical temporal identifier. Records are collected in an append-only way. There are two strategies to commit a snapshot:

\begin{itemize}
\item \textbf{Epoch-based committing}. The snapshot is created periodically. An \textit{epoch} is defined as the periodical timestamp of snapshot committing. A snapshot contains the join results between two epochs and uses the committing epoch as the identifier.

\item \textbf{Punctuation-based committing}. The committing of snapshot is controlled by the content of input streams. Apart from the normal tuples, special tuples known as \textit{punctuations}~\cite{tkde03:Tucker} are injected into the streams by the input adapter. The creation of punctuations is based on the knowledge provided by the data source. Consequently, a snapshot would not be committed until a pair of punctuations are joined. The timestamp of joining a punctuation pair is used as the identifier of the corresponding snapshot.
\end{itemize}

Two optimizations for epoch-based snapshot committing are \textit{immediate committing} and \textit{delay committing}. Their purpose is to create snapshots of roughly the same size. Users can define a threshold as the expected size of one snapshot. If the collection of join results hits the threshold, the snapshot is committed immediately, before the upcoming epoch. In contrast, if the snapshot of the current epoch is too small (say, less than half of the threshold), its committing is postponed to the next epoch. However, if a snapshot has been delayed for a certain number of epochs, it is committed regardless of its size, so that infinite delay cannot occur.
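The committing decision combining both optimizations can be sketched as a single predicate. The half-threshold rule comes from the text; the `max_delay` cap of three epochs is an illustrative choice, not an ESJ default:

```python
def should_commit(size, epochs_delayed, threshold, max_delay=3):
    """Decide whether to commit the current snapshot buffer."""
    if size >= threshold:
        return True                  # immediate committing: buffer is full
    if size < threshold / 2 and epochs_delayed < max_delay:
        return False                 # delay committing: snapshot too small
    return True                      # commit anyway, bounding the delay

should_commit(120, 0, threshold=100)   # hits the threshold: commit now
should_commit(30, 1, threshold=100)    # small snapshot: postpone
should_commit(30, 3, threshold=100)    # delayed too long: commit regardless
```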

%%% Post-selection and Post-projection %%%
Furthermore, post-selection and post-projection raised by the streaming query can be processed along with snapshot construction. For example, join results that do not satisfy the selection predicate are filtered out. 

%%% Materialization %%%
The up-to-date results produced by the join engine are kept in memory, while obsolete ones are materialized in persistent storage. The current ESJ prototype applies Cassandra\footnote{Apache Cassandra: http://cassandra.apache.org} and HBase\footnote{Apache HBase: http://hbase.apache.org} as the underlying database: the former benefits equality queries, since it builds a distributed hash index on every table for fast retrieval, while the latter benefits range queries, since it stores rows sorted by key and supports efficient range scans.

\subsubsection{Query proxy and Query processor}
Two types of queries are applicable for retrieving the join results. 

\begin{itemize}
\item \textbf{Continuous query} runs indefinitely and retrieves the up-to-date results in a real-time manner. It only accesses the records held in memory and retrieves them immediately when a snapshot is created.

\item \textbf{One-time query} requests a certain time range or join key range of the join results. A time range query may need to access the database to retrieve historical results if the requested range extends beyond that of the records held in memory. A join key range query must scan the results held in both memory and the database to select the qualified records.
\end{itemize}

Queries are always issued by clients; we adopt the client-server model for query requests and responses. As illustrated in Figure~\ref{fig:arch}, the query proxy converts an external query request into a stream event, and converts the query response stream into the external data structure. The query processor is responsible for query processing. For a continuous query, it pipelines with the result materialization component and uses the up-to-date snapshot to answer the query. For a one-time query, it retrieves the qualified records from both memory and the database, and then sends them to the query proxy through a stream. Note that the query proxy is pluggable and the query processor is stateless, meaning the query processor can handle a scaling number of query proxies; also, one query proxy can serve multiple clients.
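The memory/database split for a one-time time range query can be sketched as follows. The `MemoryStore` class and the `db_scan` callback are hypothetical stand-ins for the in-memory snapshots and the persistent store, and the timestamps are illustrative:

```python
class MemoryStore:
    """Minimal stand-in for the up-to-date records held in memory."""
    def __init__(self, records):
        self._recs = records                 # list of {"t": timestamp, ...}
    def oldest(self):
        return min(r["t"] for r in self._recs)
    def records(self):
        return self._recs

def answer_one_time(lo, hi, memory, db_scan):
    """One-time time range query: serve from memory, and fall back to
    the database only for the part of the range predating memory."""
    recs = [r for r in memory.records() if lo <= r["t"] <= hi]
    if lo < memory.oldest():
        # historical prefix comes from the persistent storage
        recs = db_scan(lo, min(hi, memory.oldest() - 1)) + recs
    return recs

mem = MemoryStore([{"t": 10}, {"t": 11}])
hist = lambda lo, hi: [{"t": t} for t in range(lo, hi + 1)]
out = answer_one_time(8, 11, mem, hist)   # spans database and memory
```

A join key range query would omit the time-based shortcut and scan both stores unconditionally, as described above.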

% section design (end)