\documentclass{vldb}
\usepackage{graphicx, amsmath, balance, subfigure}

\newtheorem{claim}{Proposition}
\newtheorem{theorem}{Theorem}
\newtheorem{defn}{Definition}

\begin{document}

\title{S2J: A General Operator for Stream Join in the Cloud}

\author{}

\maketitle

\begin{abstract}
  The join operation
  is of key importance for answering streaming queries, and is widely required in various stream processing applications.
  Moreover, it is often the case that multiple streaming queries run on an input stream pair, resulting in a set of concurrent join tasks, and the workload of these tasks is usually raised by a large join window and high incoming rates of the input streams.
  Hence, a general stream join operator should be application-agnostic, and meanwhile be able to handle concurrent join tasks at a low cost and process a very large workload effectively and efficiently.
  Nonetheless, few existing stream join operators fully satisfy all these requirements.
 %
  In this paper, we present S2J, a stream join operator with a general framework,
  which adopts a dataflow-oriented processing model, and carries out
  each join task by allocating a set of join workers to distribute the workload and by applying a tuple-block-based message passing protocol to reduce the communication overhead. This framework is independent of applications, naturally supports concurrent join tasks, and is scalable with the workload.
  %
  Extensive experimental results verify the effectiveness and efficiency of our S2J operator.
\end{abstract}


%=============================================================================================================
\section{Introduction}
\label{sec:introduction}


Streaming queries aim at extracting useful information from vast volumes of continuous incoming data in a real-time manner~\cite{pods02:Babcock, sigmodrec03:Golab}. In most cases, answering these queries consists of three main types of operations, namely selection, projection, and join. Selection and projection are easy to process since both are unary reduction operators, while the join operation is not as straightforward because it correlates two input streams.
%
Furthermore, it is often the case that multiple join tasks run simultaneously on an input stream pair, and new join tasks need to be plugged in when new queries on the stream pair arrive.
%
Hence, how to design a join operator that can handle these join tasks effectively, efficiently, and concurrently is of key importance for answering streaming queries.



For example,
queries 1 \& 2 illustrated in Fig.~\ref{fig:example_psi} are two typical streaming queries from online pollutant analysis,
which help investigate the changes of contamination factors in air pollutants. These two queries run on streams $C$ and $H$, which respectively carry the monitoring results of the current AQI (Air Quality Index) and the historical AQI records of one month ago. Each query keeps checking the difference between the current and historical PM10 (or PM2.5) whenever there is a significant gap between the current and historical PM2.5 (or PM10) within half an hour.
%
To answer the queries, the following two concurrent join tasks, namely $C\Join_{\mid C.PM2\_5 - H.PM2\_5\mid > \tau}H$ and $C\Join_{\mid C.PM10 - H.PM10\mid > \tau}H$, are required to be executed in parallel on streams $C$ and $H$ with a join window size of 30 minutes.
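Concretely, each of the two join tasks evaluates an ordinary theta-join predicate on a tuple pair. The following minimal sketch illustrates the two predicates; the field names and the threshold value are assumptions made for this running example, not part of S2J.

```python
# Hypothetical predicates for the running AQI example; field names and the
# threshold value are illustrative assumptions.
TAU = 50  # significance threshold on the AQI gap

def pm25_gap(c: dict, h: dict) -> bool:
    """Predicate of the first join task: |C.PM2_5 - H.PM2_5| > tau."""
    return abs(c["PM2_5"] - h["PM2_5"]) > TAU

def pm10_gap(c: dict, h: dict) -> bool:
    """Predicate of the second join task: |C.PM10 - H.PM10| > tau."""
    return abs(c["PM10"] - h["PM10"]) > TAU
```

Both predicates are neither equality joins nor single-attribute joins, which is why a general operator must go beyond hash-based equality matching.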
%
In practice, the number of concurrent join tasks, the join window size, and the incoming rates of the input streams
vary significantly with the application context,
all of which should be taken into account in the design of a general stream join operator.



Given the above discussion, the key requirements of a general stream join operator can be summarized as follows.



\begin{figure}[t]
\centering
\framebox[0.95\linewidth]{\epsfig{file=pic/example_psi.eps, width=0.85\linewidth}}
\caption{Examples of streaming queries on AQI.}
\label{fig:example_psi}
\end{figure}



\begin{enumerate}
\item \textbf{Generality.}
  Instead of being designed and optimized for specific applications, a general join operator should be application-agnostic. In addition, it should support as many types of join predicates as possible, rather than only specific types (e.g., equality join).

\item \textbf{Handling concurrent join tasks.}
  To extract more information, multiple streaming queries often run on an input stream pair in parallel, resulting in a set of concurrent join tasks. A general join operator
  should be able to manage these concurrent join tasks conveniently, and carry them out efficiently.

\item \textbf{Processing big workloads.}
  Large join windows and high incoming rates of streams raise the workload of join tasks, and
  challenge the processing efficiency and capacity of a join operator. Hence, a general join operator should be able to handle a very large workload and avoid workload imbalance. Moreover,
  its control and maintenance of a large join window should not become the bottleneck of its processing efficiency.
\end{enumerate}



Nevertheless, few  existing stream join operators meet all of the above requirements. Specifically,
%
  although operators for stream join on a stand-alone machine were well studied in the past decade,
  their mechanisms are often hard to extend to distributed environments. Hence,
  their processing capability is subject to the limited computing resources of the machine in use.
%
  On the other hand, although several recent works have paid more attention to distributed stream join, most of them were designed and optimized for specific applications, and are far from general.
%
  Moreover, to handle concurrent join tasks on an input stream pair, a straightforward approach is to duplicate the input streams for each task. Nonetheless, the duplicated streams markedly increase the communication cost. Therefore, it is of practical significance to conduct the concurrent join tasks without duplicating the streams; however, to the best of our knowledge, this issue has not been explicitly addressed thus far.



In this paper, we propose S2J (\textbf{S}calable \textbf{S}tream \textbf{J}oin),
a general stream join operator that can effectively handle large workloads and concurrent join tasks on an input stream pair.
To this end, S2J carries out a join task by allocating a set of processing units known as
join workers to distribute the workload of join processing, and by adopting a tuple-block-based message passing protocol that helps reduce the communication overhead.
Furthermore, S2J adopts
an input adapter and a load shedder to adapt to diverse input sources and their different incoming rates.
Meanwhile, S2J can efficiently support a large number of query accesses, and
facilitates the materialization of join results.
Besides, S2J has the following three main characteristics.



\begin{itemize}
\item \textbf{General stream join operator.}
  S2J provides a general framework for stream join which is independent of applications. Moreover, it can carry out most join predicates (i.e., predicates for theta-join) and naturally supports multi-attribute joins.

\item \textbf{Supporting concurrent join tasks.}
  On an input stream pair, when a new join task is submitted, S2J automatically initializes a set of cascading join worker instances and deploys them in the cloud. The worker instances of all concurrent join tasks share the data flow of the input stream pair, which helps S2J minimize the communication cost.

\item \textbf{Scalable to workload.}
  S2J provides two strategies to handle the varying workload of join processing in different applications, i.e., it can change the number of join workers on the fly to adjust its processing capability dynamically, or conduct adaptive load shedding~\cite{vldb02:Carney, icde03:Kang} on the incoming tuples of the input streams.
\end{itemize}



The remaining sections are organized as follows.
We review the related work in Section~\ref{sec:related_work}, and elaborate on the design and functionality of our S2J operator in Section~\ref{sec:design}, following which we discuss the characteristics of the operator in Section~\ref{sec:discussion}, and report the experimental results and our findings in Section~\ref{sec:evaluation} before concluding the paper in Section~\ref{sec:conclusion}.



%=============================================================================================================
\section{Related Work}
\label{sec:related_work}


The existing research on stream join operators can be classified into two main categories, namely (1) operators that focus on
handling and optimizing stream join processing on a stand-alone machine, and (2) operators that run large-scale stream join in distributed environments.



\subsection{Stream Join on a Stand-Alone Machine}

\subsubsection{Centralized Stream Join}
Pioneering stream join operators often apply a centralized maintenance of the join states (e.g., intermediate results),
and conduct either a hashing- or sorting-based join processing.



\emph{Hashing-based Join}.
Pipelining hash join~\cite{pdis91:Wilschut} is one of the most classical stream join operators, which takes advantage of parallel main memory to speed up the join processing. Nonetheless, this mechanism always requires a sufficiently large main memory to keep the entire join state.
To tackle this issue, the double-pipelined hash join~\cite{sigmod99:Ives}, XJoin~\cite{vldb01:Urhan}, and hash-merge join~\cite{icde04:Mokbel} flush parts of the hash tables to disk for later processing. Furthermore, in order to maximize the output rate, some operators~\cite{sigmod05:Tao, dasfaa07:Tok, sigmod10:Chen} adopt a statistics-based flushing policy, in which only the
tuples that are more likely to be joined are kept in memory.



\emph{Sorting-based Join}.
As hashing-based join is more suitable for equality predicates than inequality ones,
sorting-based join was proposed to handle inequality joins. However,
the traditional sorting operation requires the entire input before producing any output.
Hence, progressive merge join~\cite{vldb02:Dittrich}
partitions the memory into two parts, each of which carries a stream, sorts the join keys, and performs the join processing when
the memory fills up; the price of this approach is a high output delay.



\subsubsection{Multi-Core based Stream Join}
Modern multi-core technology brings parallelism to stream join on a stand-alone machine. For example, Gedik \emph{et al.}~\cite{vldb07:Gedik} utilize the multi-core Cell processor to enhance the join processing efficiency, although the efficiency highly relies on hardware parallelism, which is often not well supported by commodity hardware. In addition, optimizations that combine multi-core and shared memory are also used by operators to further improve their join capability and efficiency.
Handshake join~\cite{sigmod11:Teubner} exemplifies this kind of method,
in which each tuple in one stream ``handshakes'' with (i.e., joins with) the tuples in the opposite stream.
%Besides, handshake join applies a dataflow-oriented process model with a decentralized maintenance of the join window
Nevertheless, since all of these multi-core based stream join operators are highly customized for execution on a single machine, their scalability is limited, and
their mechanisms are usually hard to extend to distributed environments.



\begin{figure*}[!t]
\centering
\epsfig{file=pic/architecture.eps, width=0.8\textwidth}
\caption{Architecture of S2J.}
\label{fig:arch}
\end{figure*}



\subsection{Distributed Stream Join}

To achieve high processing capability and scalability, more recent research focuses on how to carry out stream join
in a distributed manner.
Nonetheless, existing solutions often have shortcomings in generality, result integrity, or communication overhead.


Photon~\cite{sigmod13:Ananthanarayanan} is a fault-tolerant, distributed stream join
operator proposed by Google. Nevertheless, it is specifically designed and optimized for joining data streams of web search queries and user clicks on advertisements, which sacrifices the generality needed to port it to other applications.


D-Stream~\cite{sosp13:Zaharia} breaks continuous streams into discrete units and processes them as batch jobs on Spark~\cite{nsdi12:Zaharia}. However, this batch processing of streams gives no guarantee on the integrity of join results, since
target tuple pairs that fall into separate batches may miss each other during the join. In like manner, MapReduce-based stream join operators~\cite{sigmod10:Blanas, vldb08:Logothetis} have a similar problem.


TimeStream~\cite{eurosys13:Qian} exploits the dependencies of tuples to conduct the stream join, but the maintenance of dependencies is communication-intensive and may become the bottleneck of its performance. In addition, multiple join predicates would complicate this dependency-based solution.


PSP~\cite{edbt09:Wang} separates a macro join operator into a series of smaller sub-operators by time-slicing the states. The processing is distributed over these sub-operators, which are connected in a ring architecture.
However, as it has to synchronize the distributed join states, its communication overhead may be high, potentially growing exponentially with the number of sub-operators.



%=============================================================================================================
\section{S2J Operator}
\label{sec:design}

In this section, we first present the architecture of our S2J operator, and then elaborate on how S2J
(1) adapts to varying workload,
(2) saves communication cost, and
(3) enhances join processing efficiency.


%-------------------------------------------------------------------------------------------------------------
\subsection{Architecture}
The S2J operator aims at efficient processing of stream joins in a large-scale distributed environment.
Towards this, as shown in Fig.~\ref{fig:arch},
we propose an architecture composed of a scalable join engine and a set of peripherals, including an input adapter, a load shedder, a materialization module, a query proxy, and a query processor.



\subsubsection{Scalable Join Engine}
To maximize the computational scalability, S2J applies
a dataflow-oriented processing model, in which workers are the basic processing units, deployed on the nodes of the distributed environment (one node can host one or multiple workers).
These workers are connected by stream channels in a cascading
manner, and new workers can be added on the fly as needed.


Meanwhile, to concurrently carry out multiple join tasks
on the same stream pair, a separate set
of chained worker instances is assigned to each individual
task (marked by different shades in Fig.~\ref{fig:arch}).
New worker instances are initialized automatically when new join tasks on the input stream pair arrive.
All worker instances belonging to the same worker
share the stream channels and the data flow inside, saving S2J the communication cost
of duplicating the input streams for each join task.


To achieve global workload balancing, S2J implements
a criterion by which a worker transfers part of its workload
to its successor (w.r.t.~the stream direction) if{f} its workload exceeds that of its successor by a threshold. This criterion guarantees that the global workload is distributed over all workers
evenly and automatically.


\subsubsection{Input Adapter and Load Shedder}
In order to make S2J applicable to diverse input streams, an
input adapter is adopted to convert an external data source
into a standardized streaming input, and at the same time
perform the pre-selection and pre-projection for the corresponding predicates raised by the streaming query. For example, only the join-related attributes of each tuple are projected before the join processing.
Additionally, the load shedder is used
to tolerate fluctuations of the incoming rates of the input stream pair.


\subsubsection{Materialization}
The latest outputs of the join engine are collected
in a memory buffer, while the older ones are
structured as \emph{snapshots} and materialized to persistent
storage.
Each snapshot groups a set of records that share a common identifier (e.g., committing time), which facilitates later retrieval based on this identifier. Moreover, S2J supports two committing strategies for the snapshots, i.e., committing periodically or
according to punctuations~\cite{tkde03:Tucker} in the input streams.


\subsubsection{Query  Proxy and Query Processor}
Users retrieve the join results via queries. S2J supports
both continuous queries, which keep requesting the up-to-the-minute
results, and one-time queries, which request
the results within a time span.
By applying the client-server
model for query requesting and responding, multiple queries
can be supported simultaneously.
At the client end, each query proxy converts query requests into stream events, and reversely,
transforms the query response stream into the client's format.
At the server end,
the query processor parses the stream events carrying the query requests, answers the queries via the necessary selection and projection operations, and returns the results to the corresponding client ends via query response streams.



\begin{figure}[t]
\centering
\epsfig{file=pic/adapt_to_workload.eps, width=0.8\linewidth}
\caption{Adapting to the varying workload.}
\label{fig:adapt_to_workload}
\end{figure}



%-------------------------------------------------------------------------------------------------------------
\subsection{Adapting to Varying Workload}

The workload of a join task depends on the join window size and the incoming rates of the input stream pair, both of which vary with applications, resulting in varying workloads across applications.
On the other hand, within a specific application, although the join window size is fixed, fluctuations of the streams' incoming rates also incur a varying workload during join processing. Hence, how to adapt to such varying workload
is of practical significance for a stream join operator.


To this end, our S2J operator adopts a sectional solution based on the average utilization ratio of the workers. To be specific, let $\tau$ denote the current average utilization ratio, i.e., $\tau=\frac{w}{W}$, where $w$ is the current average workload (measured by the number of tuples) over the workers, and $W$ is the capacity of each worker (i.e., the maximal number of tuples that a worker can handle).
The following three user-defined thresholds of $\tau$ are used to trigger different strategies to adapt to the current workload.


\begin{itemize}
\item Threshold \textbf{$\tau_{0}$} for deallocating  workers (optional). %superfluous

\item Threshold \textbf{$\tau_{1}$} for starting adaptive load shedding.

\item Threshold \textbf{$\tau_{2}$} for allocating extra workers.
\end{itemize}


Fig.~\ref{fig:adapt_to_workload} illustrates this sectional solution, where $\tau^*$
refers to the expected initial value of $\tau$, and the
thresholds satisfy the following relationship:
\[0 < \tau_{0} < \tau^* < \tau_{1} \leqslant \tau_{2} \leqslant 1.\]
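As a minimal illustration of this sectional solution, the strategy selection can be sketched as a simple dispatch on $\tau$; the concrete threshold values below are assumptions for the example, not values prescribed by S2J.

```python
# Illustrative sketch of the sectional adaptation policy; the threshold
# values are assumed for this example (0 < tau_0 < tau* < tau_1 <= tau_2 <= 1).
TAU_0, TAU_1, TAU_2 = 0.2, 0.7, 0.9  # user-defined thresholds

def adapt(tau: float) -> str:
    """Map the current average utilization ratio tau = w / W to a strategy."""
    if tau < TAU_0:
        return "deallocate_workers"   # optional scale-down
    if tau > TAU_2:
        return "allocate_workers"     # scale up with extra workers
    if tau > TAU_1:
        return "shed_load"            # adaptive load shedding
    return "steady"                   # no action needed
```

In the sketch, the band between $\tau_0$ and $\tau_1$ is the steady region around $\tau^*$ where no strategy is triggered.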


Based on the above thresholds, in what follows we discuss
the proper initial deployment of computation resources (i.e., workers of S2J) for different applications,
and  introduce two approaches that dynamically fit S2J to the varying workload during the execution, i.e.,  shedding the load  adaptively and adjusting the number of workers on the fly.




\subsubsection{Initial Deployment}
For a join task to be submitted,
selecting an appropriate number $m$ of workers for the initial deployment of S2J is a trade-off. This is because deploying fewer workers avoids wasting computation resources, while more workers bring a greater total capacity that can absorb larger fluctuations in workload.


Formally, let $\varphi$ denote the join window size of the join task, and $E(\varpi)$ the expected incoming rate of an input stream, which can be roughly estimated from users' prior knowledge or by transiently monitoring the input stream. To achieve an expected average utilization ratio $\tau^*$ for the workers, the value of $m$ can be estimated as
\[m=\Big\lceil\frac{E(\varpi)\cdot\varphi}{W\cdot \tau^*}\Big\rceil,\]
and the trade-off problem is thus converted into selecting an appropriate value for $\tau^*$.


As $\tau^*<\tau_1$, we set $\tau^*=\beta\cdot\tau_1$ ($0<\beta<1$). If $\tau^* \to \tau_1$ (i.e., $\beta\to 1$), all workers would be effectively exploited, but tiny fluctuations of the input streams would trigger false positives of load shedding. In contrast, if $\tau^* \to 0$ (i.e., $\beta\to 0$), a large number of workers would be allocated with low utilization and incur additional communication cost, although they could absorb stream fluctuations with greater amplitudes. Given the discussion above, we suggest setting $\tau^*=0.5\cdot\tau_1$ in practice, so that each worker not only achieves a reasonable utilization, but is also able to absorb a relatively high amplitude of fluctuation.
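To make the estimate concrete, the sketch below applies the formula for $m$; the incoming rate, window size, and worker capacity are illustrative assumptions rather than measured values.

```python
import math

def initial_workers(rate: float, window: float, capacity: float,
                    tau_star: float) -> int:
    """m = ceil(E(rate) * window / (W * tau*)) -- the estimate above."""
    return math.ceil(rate * window / (capacity * tau_star))

# Assumed numbers: 1000 tuples/s, a 30-minute (1800 s) window, W = 200000
# tuples per worker, and tau* = 0.5 * tau_1 with tau_1 = 0.7:
m = initial_workers(1000, 1800, 200_000, 0.35)  # -> 26 workers
```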


\begin{figure}[t]
\centering
\epsfig{file=pic/load_shedding.eps, width=0.7\linewidth}
\caption{Example of load shedding model.}
\label{fig:shed_factor}
\end{figure}



\subsubsection{Adaptive Load Shedding}
During the execution of a join task, it is often the case that there are many
transitory increases of workload caused by fluctuations of the streams' incoming rates.
To handle such situations, S2J adopts adaptive load shedding, i.e., it sheds a percentage of the incoming tuples when the utilization of each worker is (about to be) saturated. This percentage, known as the \emph{shed ratio} (SR), is defined as follows.
\[SR= \begin{cases}
B + \dfrac{1 - B}{1 - \tau_{1}} \cdot (\tau - \tau_{1}), & \text{if}~\tau >\tau_1,\\
0, & \text{otherwise},
\end{cases}\]
where $B \in [0, 1]$ is the base percentage of shedding.

The above shed ratio uses a linear shedding model, i.e., when $\tau$ exceeds the threshold $\tau_{1}$, the shed ratio grows in proportion to the exceeded load. In practice, other shedding models could be used instead, e.g., a quadratic one. Fig.~\ref{fig:shed_factor} illustrates the changing trends of the shed ratio under these two kinds of shedding models.
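The shed-ratio definition above can be sketched as follows; the default parameter values are illustrative assumptions, and the quadratic variant is included only to show an alternative shedding model.

```python
# Minimal sketch of the shed ratio; tau_1 and the base percentage B are
# assumed values for illustration.
def shed_ratio(tau: float, tau_1: float = 0.7, base: float = 0.1,
               quadratic: bool = False) -> float:
    """SR = B + (1 - B)/(1 - tau_1) * (tau - tau_1) if tau > tau_1, else 0."""
    if tau <= tau_1:
        return 0.0
    excess = (tau - tau_1) / (1.0 - tau_1)   # normalized overload in [0, 1]
    if quadratic:
        return base + (1.0 - base) * excess ** 2  # alternative model
    return base + (1.0 - base) * excess           # linear model above
```

Note that both models jump to the base percentage $B$ at $\tau = \tau_1$ and reach $SR = 1$ (shedding everything) at $\tau = 1$.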




\subsubsection{Scaling Up}\label{scaling_up}
Besides transitory increases, the workload on the $m$ workers may grow non-transitorily.
This situation can be caused by
an under-estimated expectation $E(\varpi)$ of the streams' incoming rates (leading to a small initial $m$), or
a sustained increase of the streams' incoming rates during the execution.
To mitigate the overload, S2J automatically increases its processing capacity by allocating extra workers if $\tau>\tau_2$ lasts for a certain time. Then, the average utilization ratio $\tau$ decreases due to the newly allocated workers, and S2J keeps adding workers until $\tau$ drops to around $\tau^{*}$ or the worker resources are exhausted.



\subsubsection{Scaling Down}
S2J can also easily support shrinking the processing capacity if necessary. For example, if $\tau < \tau_{0}$ lasts for a long time, S2J deallocates some workers until $\tau$ rises to around $\tau^{*}$. The released workers are recycled by the system and become available in the resource pool.



%-------------------------------------------------------------------------------------------------------------
\subsection{Saving Communication Cost}
S2J adopts a distributed architecture to achieve high scalability. Nonetheless, the message passing between the distributed workers brings a communication cost, which could become the performance bottleneck, especially when the network bandwidth is limited~\cite{sc11:Palanisamy}.
%
To save the communication cost,
S2J employs the following two approaches, namely (1) eliminating stream duplication and (2) optimizing the message passing.


\subsubsection{Eliminating Stream Duplication}
To extract more information from the input stream pair,
there are often multiple join tasks that need to be executed concurrently.
To achieve this, a straightforward solution
is to duplicate the input stream pair for each join task.
However, this method multiplies the communication cost as the number of join tasks increases.
To avoid this, S2J eliminates the stream duplication and carries out the concurrent join tasks in a source-sharing manner.
Specifically, for each worker, its worker instances (which belong to different join tasks) share the data flow of the input stream
pair. In other words, each worker instance retrieves what it needs from the same data source.
Compared with the stream duplication approach, the communication cost saved by this source-sharing manner can be
remarkable when the number of concurrent join tasks grows large.



\subsubsection{Optimized Message Passing} \label{opt_message}
The source-sharing manner for the concurrent join tasks
helps save communication cost by minimizing the number of stream pairs handled by S2J.
For this single stream pair, the main communication overhead comes from the serialization of the messages (tuples with a message head and end)
passed between the workers\footnote{The cost of passing the tuples of the stream pair is indispensable and fixed. Passing the message heads and ends generated by serialization is the main communication overhead, which should be reduced.}.
%
To cut this overhead, S2J reduces the total number of serializations by using a relatively large block of tuples for each message passing. Nonetheless, an overly large tuple block would degrade the efficiency of information exchange, and
incur workload imbalance between the workers. To address this trade-off, in what follows we first present (1) a tuple-block-based protocol for message passing, and then discuss (2) how to optimize the tuple block size in the protocol.




\emph{1) MP-2PF Protocol}.
S2J adopts the two-phase forwarding mode~\cite{sigmod11:Teubner} to assist the join processing. Based on this mode, we propose a message passing protocol, known as MP-2PF, that conducts a passive exchange of information between workers, i.e., when the state of a worker changes, it immediately informs its neighbors of the change, and the neighbors rely on this information to carry out their next operations.


There are three types of messages in MP-2PF:

\begin{itemize}
\item \textbf{TUPLE\_BLK}: transmitting a block of tuples between neighbors.

\item \textbf{ACK}: acknowledgement for the received block of tuples.

\item \textbf{SIZE\_CHG}: informing the predecessor of the sender's current workload size.
\end{itemize}


\noindent
where TUPLE\_BLK and ACK are used to transfer workload between adjacent workers. Specifically, a copy of the forwarded tuples is kept in the origin worker until the corresponding acknowledgement message is received. In other words, the TUPLE\_BLK message carries a forwarded block of tuples and leaves a copy in the origin worker, and the ACK message triggers the deletion of the copy of the successfully forwarded content.
In this way, S2J avoids the missing-join-pair problem~\cite{sigmod11:Teubner}, and makes sure that each join operation between a tuple pair
is executed exactly once. In addition,
the SIZE\_CHG message indicates the workload size, so that the workload status stays consistent between the collaborating workers.



\begin{figure}[t]
\centering
\epsfig{file=pic/message_passing_protocol.eps, width=0.95\linewidth}
\caption{Message passing protocol for two-phase forwarding (MP-2PF).}
\label{fig:mp_proto}
\end{figure}



As shown in Fig.~\ref{fig:mp_proto}, our MP-2PF protocol works as follows.
%
The workers of S2J transit among three states during the processing of a tuple block. Each worker starts with, and eventually stays in, the \texttt{Processing} state. When a new tuple block arrives, the worker processes the join for it, sends an acknowledgement to its predecessor, and then checks the forwarding condition to decide whether the tuple block forwarding procedure should be invoked. The \textit{forwarding condition} refers to the threshold on the difference of workload sizes between adjacent workers (the threshold is exactly the tuple block size). If the forwarding condition is met, the worker transits to the \texttt{Forwarding} state. Then, this worker sends the tuple block to its successor, leaves a forwarded copy at its site, and finally transits back to the \texttt{Processing} state.

If the worker receives one or more acknowledgements in the \texttt{Processing} state, it transits to the \texttt{Deleting} state, deletes the copies of the previously forwarded tuples w.r.t.~the corresponding acknowledgement messages, and informs its predecessor of the change of workload size. Finally, the worker transits back to the \texttt{Processing} state. Since the operations in the \texttt{Forwarding} and \texttt{Deleting} states do not block the join processing in the \texttt{Processing} state, our S2J operator always makes progress under this asynchronous MP-2PF protocol.
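The worker-side message handling under MP-2PF can be sketched as follows; the class layout and message encodings are illustrative assumptions for exposition, not S2J's actual implementation.

```python
# Exploratory sketch of a worker's message handling under MP-2PF; names and
# message formats are assumptions, and `send(destination, message)` is a
# hypothetical channel primitive.
class Worker:
    def __init__(self, block_size):
        self.block_size = block_size  # tuple block size (varsigma)
        self.tuples = []              # local workload (Processing state)
        self.pending = {}             # forwarded copies awaiting ACK
        self.succ_size = 0            # successor's workload, via SIZE_CHG
        self.next_id = 0

    def on_tuple_blk(self, blk_id, block, send):
        """Processing state: join the block, acknowledge, maybe forward."""
        self.tuples.extend(block)             # join processing happens here
        send("predecessor", ("ACK", blk_id))
        # forwarding condition: workload exceeds the successor's by one block
        if len(self.tuples) - self.succ_size >= self.block_size:
            fwd = self.tuples[-self.block_size:]
            self.pending[self.next_id] = fwd  # keep a copy until ACKed
            send("successor", ("TUPLE_BLK", (self.next_id, fwd)))
            self.next_id += 1

    def on_ack(self, blk_id, send):
        """Deleting state: drop the forwarded copy, report the new size."""
        fwd = self.pending.pop(blk_id, None)
        if fwd is not None:
            for t in fwd:
                self.tuples.remove(t)
            send("predecessor", ("SIZE_CHG", len(self.tuples)))

    def on_size_chg(self, size):
        """Record the successor's workload size."""
        self.succ_size = size
```

Each handler returns quickly and never waits on a reply, mirroring the non-blocking, asynchronous nature of the protocol.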



\emph{2) Optimization of Tuple Block Size}.
During message passing, the tuples
transferred between two workers need to be serialized as a message with a message head and end, which increases the transmission size.
%
To transfer a certain number of tuples, the communication overhead is proportional to the number of serializations needed.
%
Hence, instead of forwarding one tuple per transfer~\cite{sigmod11:Teubner},
S2J uses a tuple block containing $\varsigma$ tuples as the transfer unit,
%
i.e.,
a worker transfers $\varsigma$ tuples to its successor only when its workload exceeds its successor's by $\varsigma$, as in Fig.~\ref{fig:tuple_blk}~(a); otherwise, no tuple transfer occurs, as in Fig.~\ref{fig:tuple_blk}~(b).
Furthermore, it should be noted that the maximum workload difference between any two workers in S2J is exactly $\varsigma$ tuples.



\begin{figure}[t]
\centering
\epsfig{file=pic/tuple_block.eps, width=0.95\linewidth}
\caption{Workload difference between workers.}
\label{fig:tuple_blk}
\end{figure}



Choosing an appropriate $\varsigma$ is a trade-off problem.
Formally, let $\varpi$ denote the average incoming rate of an input stream during a time span $\varphi$; since S2J keeps a relative load balance between workers, each of the $m$ workers should carry about $\frac{\varpi\varphi}{m}$ tuples in this time span. For each worker, to update (or pass on) the $\frac{\varpi\varphi}{m}$ tuples, the required number of serializations (denoted by $ST$, which equals the required number of tuple block transfers) can be estimated by
\[ST=\frac{\varpi\varphi}{m\varsigma},\]
where $ST$ should be a positive integer, i.e., $ST\in \mathbb{Z}^+$. As mentioned above, a greater $\varsigma$ reduces the number of serializations and helps lower the communication overhead.
%
%
%
%
Nevertheless, a small $\varsigma$ makes S2J pass the incoming tuples to successor workers as soon as possible, and distributes the workload, especially a transitorily increased one, over the workers quickly and evenly. In other words, a small $\varsigma$
helps S2J tolerate fluctuations in the incoming rates of the input streams, and avoid workload imbalance between any two workers, which can be evaluated by an imbalance coefficient ($IC$) as follows:
\[IC = \dfrac{m\varsigma}{\varpi\varphi},\]
where the average workload $\frac{\varpi\varphi}{m}$ works as a normalization factor.
%
%
%
%
Given the discussion above, an appropriate $\varsigma$ should jointly minimize the values of $ST$ and $IC$.
%
By assuming that S2J has allocated enough workers\footnote{If this assumption is not satisfied, we set $\varsigma=1$ to pass the newly incoming tuples to successive workers as soon as possible, so as to fully utilize the processing capability of each worker.}, i.e., $\frac{\varpi\varphi}{m} < \tau_1\cdot W$,
we formulate the trade-off problem as follows.
%
\begin{equation*}
\begin{split}
&\textbf{Problem~1}~~\min f(\varsigma)=(1-\alpha)\cdot ST + \alpha\cdot IC\\
&~~~~~~~~~~~~~~~~~~~s.t.\ \frac{\varpi\varphi}{m\varsigma}\in\mathbb{Z}^+, \\
&~~~~~~~~~~~~~~~~~~~ ~~~~~\varsigma + \dfrac{\varpi\varphi}{m} \leqslant \tau_1\cdot W.
\end{split}
%
\end{equation*}
where $\alpha\in(0,1)$ is a tunable parameter, and the inequality constraint ensures that each worker can accommodate a tuple block of size $\varsigma$ without triggering load shedding. In addition, we define $\varsigma^*$ and $\varsigma_i$ as
%
\[\varsigma^* = \sqrt{\frac{1-\alpha}{\alpha}}\cdot\frac{\varpi\varphi}{m},\]
\[\varsigma_i=\frac{\varpi\varphi}{m\cdot\big(\lceil\frac{\varpi\varphi}{m\cdot\beta}\rceil+i-1\big)},~~\forall i=1, 2, \ldots,\]
%
where $\beta = \min\big\{\tau_1\cdot W-\frac{\varpi\varphi}{m}, \frac{\varpi\varphi}{m}\big\}$, and $\varsigma_i>\varsigma_{i+1}$ holds for all $i$.
%
Then, the solution of the above trade-off problem (i.e., the optimized tuple block size $\varsigma$) is as follows.
%
\begin{equation}\label{opt_block_size}
\varsigma= \begin{cases}
\arg\min_{\varsigma\in\{\varsigma_t,\varsigma_{t+1}\}}f(\varsigma),~\text{if}~\alpha\in(0.5,1),~\varsigma^*<\beta\\%,~\varsigma_1>\sqrt{\frac{1-\alpha}{\alpha}}\cdot\frac{\varpi\varphi}{m}
\varsigma_{1},~~~~~~~~~~~~~~~~~~~~~~~~~~~\text{otherwise}
\end{cases}
\end{equation}
%
where $\varsigma_{t}=\min\big\{\varsigma_{i}\mid\varsigma_{i}\geqslant\varsigma^*\big\}$.
Detailed deduction of this solution can be found in Appendix A.%~\ref{append:blocksize}.
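To make the computation concrete, the closed form above can be sketched in a few lines of Python (a minimal sketch using the notation of Problem~1; the function and variable names are ours, not from any S2J implementation, and the corner case where no candidate lies to the right of $\varsigma^*$ is guarded explicitly):

```python
import math

def block_size(varpi, varphi, m, W, tau1, alpha):
    """Optimized tuple block size per Eq. (1).

    varpi : average incoming rate; varphi : time span;
    m     : number of workers;     W      : join window size;
    tau1  : load-shedding threshold; alpha : trade-off weight in (0, 1).
    """
    load = varpi * varphi / m                        # per-worker workload
    beta = min(tau1 * W - load, load)                # upper bound on sigma
    if beta <= 0:                                    # not enough workers:
        return 1                                     # pass tuples immediately

    def cand(i):                                     # candidate sigma_i, i = 1, 2, ...
        return load / (math.ceil(load / beta) + i - 1)

    def f(s):                                        # objective: (1 - a) * ST + a * IC
        return (1 - alpha) * (load / s) + alpha * (s / load)

    s_star = math.sqrt((1 - alpha) / alpha) * load   # stationary point of f
    if alpha > 0.5 and s_star < beta:
        if cand(1) < s_star:                         # no candidate right of s_star
            return cand(1)
        t = 1
        while cand(t + 1) >= s_star:                 # sigma_t: smallest cand >= s_star
            t += 1
        return min(cand(t), cand(t + 1), key=f)      # the two straddling candidates
    return cand(1)
```

For instance, with $\varpi\varphi/m=6000$, $\tau_1 W=16000$, and $\alpha=0.8$, the stationary point is $\varsigma^*=3000$ and the sketch selects the candidate $\varsigma_2=3000$.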



%-------------------------------------------------------------------------------------------------------------
\subsection{Enhancing Join Processing Efficiency}

To avoid missing any join result, a simple approach is to carry out the join between all tuple pairs (a tuple pair consists of two tuples, one from each input stream) in a worker \cite{sigmod11:Teubner}. Nonetheless, this approach is far from efficient unless all (or most of) the tuple pairs satisfy the join predicates, which is unusual in practice. To enhance join processing efficiency,
S2J uses in-memory indices to eliminate unnecessary join operations and
accelerate processing. Specifically, when a tuple joins with the opposite stream, S2J searches for the join keys satisfying the predicate via an index, and then joins the tuple with the opposite tuples under the found keys.



To facilitate joins with different predicates, S2J adopts
a hash index for equality joins, which directly locates the target
join key and retrieves the corresponding tuples under that key, and
a balanced binary search tree (BST) index for inequality joins,
since a BST makes it convenient and efficient to access the range of join keys
indicated by the predicate ($<$, $>$, $\leqslant$, $\geqslant$).
A quantitative discussion can be found in Appendix B.%~\ref{append:join}.
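As an illustration of the two index types, the sketch below (our own, not S2J's actual data structures; a sorted key list maintained with Python's \texttt{bisect} stands in for the balanced BST) supports both an equality lookup and a $<$ range scan:

```python
import bisect
from collections import defaultdict

class StreamIndex:
    """In-memory index over one stream's tuples, keyed by the join attribute.

    The hash map serves equality predicates in O(1); a sorted key list
    maintained with bisect stands in for the balanced BST used for
    inequality predicates.
    """
    def __init__(self):
        self.by_key = defaultdict(list)   # join key -> tuples under that key
        self.keys = []                    # sorted distinct join keys

    def insert(self, key, tup):
        if key not in self.by_key:
            bisect.insort(self.keys, key)
        self.by_key[key].append(tup)

    def eq(self, key):
        """Equality join: locate the target key directly."""
        return list(self.by_key.get(key, []))

    def lt(self, key):
        """Inequality join (<): scan the key range left of `key`."""
        i = bisect.bisect_left(self.keys, key)
        return [t for k in self.keys[:i] for t in self.by_key[k]]
```

Other inequality predicates ($>$, $\leqslant$, $\geqslant$) follow the same pattern, using \texttt{bisect\_left} or \texttt{bisect\_right} on the appropriate side of the key.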



According to our experimental results, the above strategy reduces
the number of required join operations by up to two to three orders of magnitude, and thus significantly enhances join processing efficiency.



%=============================================================================================================
\section{Discussion}
\label{sec:discussion}

Based on the aforementioned architecture and functions of  S2J, its characteristics can be summarized as follows.


\begin{claim}
S2J is independent of applications.
\end{claim}


S2J adopts the most general join processing approach, i.e., evaluating join predicates by comparing join key values, rather than customizing join processing to the contents of specific input streams. Moreover, to adapt to diverse data sources, S2J utilizes an input adapter that converts them into a standardized streaming input.


\begin{claim}
S2J supports theta- and multi-attribute join.
\end{claim}


S2J supports efficient theta-joins by using a hash map and a BST to accelerate equality and inequality joins respectively.
For the compound predicates of a multi-attribute join, S2J piggybacks the processing of the auxiliary predicates on that of a main predicate.
%
Furthermore, if the domain distributions of the input streams are given as prior knowledge, the selection of the main predicate
can be optimized to achieve the least skewed distribution of join keys, resulting in more efficient join processing.
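The piggybacking strategy can be illustrated as follows (a hedged sketch with hypothetical names: \texttt{main\_index} is a hash map over the main join key, and each auxiliary predicate is a boolean function over a candidate pair):

```python
def multi_attr_join(probe, main_index, aux_preds):
    """Join an arriving tuple using the indexed main predicate, then
    piggyback the auxiliary predicates as filters on the candidates.

    main_index : hash map, main join key -> list of opposite tuples
    aux_preds  : list of functions (probe, match) -> bool
    """
    candidates = main_index.get(probe["main_key"], [])
    return [t for t in candidates
            if all(pred(probe, t) for pred in aux_preds)]
```

Only the main predicate pays for index maintenance and lookup; the auxiliary predicates cost one pass over the (already small) candidate set.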


\begin{claim}
S2J supports concurrent join tasks on an input stream pair.
\end{claim}


S2J can carry out multiple join tasks simultaneously on the same stream pair. Each join task is processed by a set of cascading worker instances, and
all worker instances belonging to the same join worker share the dataflow of the input stream pair.
Hence,
running more concurrent join tasks saves more of the communication cost that duplicating the streams for each task would incur.


\begin{claim}
S2J is scalable to the workload of join tasks.
\end{claim}



The workload of join tasks varies with applications. To handle this, S2J sets up a reasonable initial deployment of join workers based on the workload and the processing capability of each worker. Moreover, due to fluctuations in the incoming rates of the input streams, the workload also varies during join processing. To address this issue, S2J conducts adaptive load shedding when the fluctuations are transient and small, and allocates extra workers on the fly to increase its processing capacity when the workload grows persistently.



\begin{claim}
S2J provides a strict real-time guarantee for join processing.
\end{claim}


Real-time processing requires that a submitted task be completed within a deterministic time span~\cite{book:Barlow}. As a streaming join is constrained by a finite sliding window, the output for each input is expected within the same time span. Hence, S2J joins each incoming tuple with all valid matches within the join window. Each tuple is assigned a lifespan (equal to the join window size) when it enters the processing topology of S2J. Active tuples are joined with matching tuples within their lifespans, while expired tuples are evicted from the processing topology. Once a tuple expires, all join results related to it have already been output.
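The lifespan-and-eviction scheme can be illustrated with a minimal single-worker sketch (ours, for exposition only; S2J actually distributes the window over cascading workers):

```python
from collections import deque

class WindowedStore:
    """Single-worker tuple store with lifespan-based eviction.

    Each tuple's lifespan equals the join window size; expired tuples are
    evicted before any further joining, so every result involving them has
    already been output.
    """
    def __init__(self, window):
        self.window = window
        self.buf = deque()                 # (arrival_time, tuple), time-ordered

    def evict(self, now):
        while self.buf and now - self.buf[0][0] >= self.window:
            self.buf.popleft()             # lifespan over: drop the tuple

    def insert(self, now, tup):
        self.evict(now)
        self.buf.append((now, tup))

    def active(self, now):
        """Tuples still eligible to join at time `now`."""
        self.evict(now)
        return [t for _, t in self.buf]
```

Because tuples arrive in time order, eviction only ever inspects the head of the queue, so each tuple is enqueued and dequeued exactly once.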

On the other hand, to achieve real-time processing,
the control of the join window should not become a bottleneck. Since centralized join window control suffers from a high communication cost, especially with large window sizes, S2J applies dataflow-oriented processing, in which the communication cost comes only from (indispensable) message passing, without any centralized control.


Given the above two designs, join processing in S2J is guaranteed to be real-time.




%=============================================================================================================
\section{Evaluation}
\label{sec:evaluation}

We have implemented a prototype of the proposed S2J operator based on Apache S4~\cite{icdm10w:Neumeyer}. With this prototype, we first verify the scalability of S2J's join processing capability on a stand-alone machine, in distributed environments, and in terms of supporting multiple concurrent join tasks. We then report the join processing efficiency of the operator, discuss the effect of the tuple block size, and finally present a case study of how S2J adapts to a varying workload.

%-------------------------------------------------------------------------------------------------------------
\subsection{Join Processing Capability Study}

\subsubsection{Performance on a stand-alone machine}
S2J vs.~HandShake Join.
%
See Fig.~\ref{fig:standalone}.


\begin{figure*}[t]
\begin{center}
\subfigure[$\varphi = 5$ min]{\includegraphics[width=4.25cm]{fig/standalone_5min.eps}}
\makeatletter\def\@captype{figure}\makeatother\subfigure[$\varphi = 10$ min]{\includegraphics[width=4.25cm]{fig/standalone_10min.eps}}
\subfigure[$\varphi = 15$ min]{\includegraphics[width=4.25cm]{fig/standalone_15min.eps}}
\makeatletter\def\@captype{figure}\makeatother\subfigure[$\varphi = 20$ min]{\includegraphics[width=4.25cm]{fig/standalone_20min.eps}}
\caption{Throughput performance on a stand-alone machine with different join window sizes $\varphi$\label{fig:standalone}}
\end{center}
\end{figure*}

\subsubsection{Performance in distributed environments}
S2J vs.~Naive S2J (i.e., baseline).
%
See Fig.~\ref{fig:distributed}.


\begin{figure*}[t]
\begin{center}
\subfigure[$\varphi = 5$ min]{\includegraphics[width=4.25cm]{fig/cluster_5min.eps}}
\makeatletter\def\@captype{figure}\makeatother\subfigure[$\varphi = 10$ min]{\includegraphics[width=4.25cm]{fig/cluster_10min.eps}}
\subfigure[$\varphi = 15$ min]{\includegraphics[width=4.25cm]{fig/cluster_15min.eps}}
\makeatletter\def\@captype{figure}\makeatother\subfigure[$\varphi = 20$ min]{\includegraphics[width=4.25cm]{fig/cluster_20min.eps}}
\caption{Throughput performance in distributed environments with different join window sizes $\varphi$\label{fig:distributed}}
\end{center}
\end{figure*}


\subsubsection{Performance of running concurrent join tasks}
Multiple concurrent join tasks vs.~throughput.
%
See Fig.~\ref{fig:multi_task}.


Multiple concurrent join tasks vs.~bandwidth used.
%
See Fig.~(not done yet).


\begin{figure*}[t]
\begin{center}
\subfigure[$m=2$]{\includegraphics[width=4.25cm]{fig/multitask_2node.eps}}
\makeatletter\def\@captype{figure}\makeatother\subfigure[$m=4$]{\includegraphics[width=4.25cm]{fig/multitask_4node.eps}}
\subfigure[$m=6$]{\includegraphics[width=4.25cm]{fig/multitask_6node.eps}}
\makeatletter\def\@captype{figure}\makeatother\subfigure[$m=8$]{\includegraphics[width=4.25cm]{fig/multitask_8node.eps}}
\caption{Throughput performance in distributed environment with multiple concurrent join tasks\label{fig:multi_task}}
\end{center}
\end{figure*}


%-------------------------------------------------------------------------------------------------------------
\subsection{Join Processing Efficiency Study}

Join processing efficiency of S2J vs.~that of naive S2J (i.e., baseline).
%
See Fig.~\ref{fig:effi}.


\begin{figure*}[t]
\begin{center}
\subfigure[$m=4$, $\varpi=1000$]{\includegraphics[width=4.25cm]{fig/effi_m4_rate1000.eps}}
\makeatletter\def\@captype{figure}\makeatother\subfigure[$m=4$, $\varpi=2000$]{\includegraphics[width=4.25cm]{fig/effi_m4_rate2000.eps}}
\subfigure[$m=8$, $\varpi=1000$]{\includegraphics[width=4.25cm]{fig/effi_m8_rate1000.eps}}
\makeatletter\def\@captype{figure}\makeatother\subfigure[$m=8$, $\varpi=2000$]{\includegraphics[width=4.25cm]{fig/effi_m8_rate2000.eps}}
\caption{Processing efficiency under different workload\label{fig:effi}}
\end{center}
\end{figure*}


%-------------------------------------------------------------------------------------------------------------
\subsection{Effect of Tuple Block Size}

Tuple block size vs.~tolerance of fluctuations in the input incoming rate.
%
See Fig.~\ref{fig:block_size}.


Tuple block size vs.~increased communication overhead.
%
See Fig.~(not done yet).


\begin{figure}[t]
\begin{center}
\subfigure[Varying amplitude]{\includegraphics[width=4.15cm]{fig/s_vs_amp.eps}}
\makeatletter\def\@captype{figure}\makeatother\subfigure[Varying frequency]{\includegraphics[width=4.15cm]{fig/s_vs_freq.eps}}
\caption{The effect of tuple block size on load shedding under different fluctuation amplitudes and frequencies of the incoming rate
of input streams\label{fig:block_size}}
\end{center}
\end{figure}


\subsection{Case Study of Adapting to Varying Workload}
Case study





%===========================================================================================================
\section{Conclusion}
\label{sec:conclusion}

In this paper, we have proposed S2J, a general operator for stream join in the cloud.





\balance

\bibliographystyle{abbrv}
\bibliography{Distributed_Streaming_Join_VLDB14}

\section*{Appendix A: Solution Deduction}
In this Appendix, we present the detailed deduction steps for the solution (i.e., Eq.~(\ref{opt_block_size}) in Section~\ref{opt_message}) of Problem~1.


According to the constraints in Problem~1, $\varsigma$ should satisfy $\varsigma\leqslant\beta$ and $\frac{\varpi\varphi}{m\varsigma}\in\mathbb{Z}^+$, where $\beta=\min\{\tau_1\cdot W-\frac{\varpi\varphi}{m}, \frac{\varpi\varphi}{m}\}$. Hence, the search space of $\varsigma$ consists of the values below.
%
\[\varsigma_i=\frac{\varpi\varphi}{m\cdot\big(\lceil\frac{\varpi\varphi}{m\cdot\beta}\rceil+i-1\big)},~~\forall i=1, 2, \ldots,\]
%
where $(\lceil\frac{\varpi\varphi}{m\cdot\beta}\rceil+i-1)$ is the corresponding value of $ST$ (i.e., the number of serializations), and $\varsigma_i > \varsigma_{i+1}$ holds for all $i$.


In addition, the minimum value of $f(\varsigma)$ in Problem~1 is achieved at its stationary point $\varsigma^* = \sqrt{\frac{1-\alpha}{\alpha}}\cdot\frac{\varpi\varphi}{m}$.


(1) If $\alpha\in (0, 0.5]$, then $\varsigma^*\geqslant\frac{\varpi\varphi}{m}$, which implies $\varsigma^*\geqslant\beta$.
In other words, the search space $\{\varsigma_i\}$ of $\varsigma$ always lies to the left of the stationary point $\varsigma^*$.
Furthermore, since $f(\varsigma)$ is convex, the greatest candidate $\varsigma_1$ leads to the smallest value of $f(\varsigma)$ among all $\varsigma_i$. Thus, when $\alpha\in (0,0.5]$, the solution of Problem~1 is $\varsigma_1$.


(2) If $\alpha\in (0.5, 1)$, then $\varsigma^*<\frac{\varpi\varphi}{m}$ always holds. If $\varsigma^*\geqslant\tau_1\cdot W-\frac{\varpi\varphi}{m}$, i.e., $\varsigma^*\geqslant\beta$, the solution of Problem~1 is again $\varsigma_1$ (for a reason similar to the case above); otherwise, $\varsigma^*<\tau_1\cdot W-\frac{\varpi\varphi}{m}$, i.e., $\varsigma^*<\beta$, and the two candidates $\varsigma_i$ closest to $\varsigma^*$ become the candidate values of $\varsigma$. Formally, let $\varsigma_{t}=\min\big\{\varsigma_{i}\mid\varsigma_{i}\geqslant\sqrt{\frac{1-\alpha}{\alpha}}\cdot\frac{\varpi\varphi}{m}\big\}$ be the candidate to the right of (i.e., greater than) $\varsigma^*$; then $\varsigma_{t+1}$ is the candidate to the left of (i.e., less than) $\varsigma^*$. Thus, in this case, the solution of Problem~1 is $\varsigma = \arg\min_{\varsigma\in\{\varsigma_t,\varsigma_{t+1}\}}f(\varsigma)$.


In summary, the solution of Problem~1 is as follows.
%
\begin{equation*}
\varsigma= \begin{cases}
\arg\min_{\varsigma\in\{\varsigma_t,\varsigma_{t+1}\}}f(\varsigma),~\text{if}~\alpha\in(0.5,1),~\varsigma^*<\beta\\%,~\varsigma_1>\sqrt{\frac{1-\alpha}{\alpha}}\cdot\frac{\varpi\varphi}{m}
\varsigma_{1},~~~~~~~~~~~~~~~~~~~~~~~~~~~\text{otherwise}
\end{cases}
\end{equation*}
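As a numerical sanity check on this case analysis (our own check, not part of the deduction), one can enumerate the feasible candidates $\varsigma_i$ and minimize $f$ directly:

```python
import math

def brute_force_sigma(varpi, varphi, m, W, tau1, alpha, max_st=10000):
    """Enumerate the feasible candidates sigma_i of Problem 1 and minimize
    f directly, as a brute-force check on the closed-form solution."""
    load = varpi * varphi / m
    beta = min(tau1 * W - load, load)
    base = math.ceil(load / beta)          # smallest feasible ST value
    f = lambda s: (1 - alpha) * (load / s) + alpha * (s / load)
    return min((load / (base + i) for i in range(max_st)), key=f)
```

On sample parameters, the brute force agrees with the closed form in both the $\alpha\leqslant 0.5$ and $\alpha>0.5$ branches.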



\section*{Appendix B: Join Efficiency Analysis}
During join processing, to output a certain number of join results, the amount of computation is proportional to the number of comparison operations used to find target tuple pairs.
Hence, the join processing efficiency $\mathcal{P}$ of an operator can be estimated by the following criterion.
%
\begin{equation*}
\mathcal{P} = \dfrac{number\ of\ outputs}{number\ of\ comparisons}
\end{equation*}
%
A greater value of $\mathcal{P}$ indicates that the operator can conduct a more efficient join processing.


Many existing operators compare every pair of tuples from the opposite input streams to avoid missing any target tuple pair \cite{sigmod11:Teubner}, and
thus achieve a join processing efficiency of
%
\begin{equation*}
\mathcal{P}_{pairwise} = \dfrac{\kappa}{r \cdot s}
\end{equation*}
%
where $r$ and $s$ respectively denote the numbers of tuples in the two opposite streams, and $\kappa$ is the number of target tuple pairs. In practice,
it is often the case that $\kappa \ll r \cdot s$, resulting in $\mathcal{P}_{pairwise} \ll 1$; i.e., most of the computation is wasted and outputs nothing.



To improve join processing efficiency, our S2J operator adopts
BST- and hash-based indices to accelerate inequality joins and equality joins respectively.
Specifically, when a tuple arrives at a worker,
the target opposite tuples can be found in $O(\log n)$ time via the BST-based index or $O(1)$ time via the hash-based index.
The processing efficiency of inequality join operations in S2J is then as follows.
%
\begin{equation*}
\mathcal{P}_{bst}
= \dfrac{\kappa}{r \cdot O(\log s) + s \cdot O(\log r)},
\end{equation*}
%
and thus
%
\begin{equation*}
\dfrac{\mathcal{P}_{pairwise}}{\mathcal{P}_{bst}} =  \dfrac{O(\log s)}{s} + \dfrac{O(\log r)}{r}.
\end{equation*}
In practice, as $s \gg O(\log s)$ and $r \gg O(\log r)$, we have
the relationship below:
\[\mathcal{P}_{bst} \gg \mathcal{P}_{pairwise},\]
which indicates that the processing efficiency of inequality join operations can be significantly improved by using a BST-based index.


Similarly, the processing efficiency of equality join operations in S2J is as follows.
%
\begin{equation*}
\mathcal{P}_{hash} = \dfrac{\kappa}{r \cdot O(1) + s \cdot O(1)},
\end{equation*}
%
and thus
%
\begin{equation*}
\dfrac{\mathcal{P}_{pairwise}}{\mathcal{P}_{hash}} =  \dfrac{O(1)}{s} + \dfrac{O(1)}{r}.
\end{equation*}
%
Since $s \gg O(1)$ and $r \gg O(1)$, we also have the following relationship, i.e.,
\[\mathcal{P}_{hash} \gg \mathcal{P}_{pairwise},\]
indicating that the processing efficiency of the equality join operations can be significantly improved by using a hash-based index.

In brief, the BST- and hash-based indices
facilitate joins with different predicates, and help our S2J operator achieve a high join processing efficiency.
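The three efficiency estimates can be compared on concrete numbers with a small sketch (our own illustration; the constants hidden in the $O(\cdot)$ terms are dropped):

```python
import math

def comparisons(r, s, mode):
    """Rough comparison counts for joining two windows of r and s tuples
    (constants hidden in the O-terms are dropped)."""
    if mode == "pairwise":
        return r * s
    if mode == "bst":                      # every tuple probes the opposite BST
        return r * math.log2(s) + s * math.log2(r)
    if mode == "hash":                     # every tuple probes the opposite hash map
        return r + s
    raise ValueError(mode)

def efficiency(kappa, r, s, mode):
    """P = number of outputs / number of comparisons."""
    return kappa / comparisons(r, s, mode)
```

For example, with $r = s = 1000$ and $\kappa = 100$, $\mathcal{P}_{pairwise} = 10^{-4}$ while $\mathcal{P}_{hash} = 0.05$.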


\end{document}
