\documentclass{sig-alternate}
\usepackage{graphicx, amsmath, balance, subfigure, cite}

\def\EndOfProof{\nolinebreak\ \hfill \rule{1.3mm}{2.3mm}}

%\newtheorem{assumption}{Assumption}
%\newtheorem{definition}{Definition}
\newtheorem{lemma}{Lemma}
\newtheorem{theorem}{Theorem}
%\newtheorem{condition}{Condition}
\newtheorem{claim}{Claim}

%\newtheorem{property}{Property}
%\newtheorem{observation}{Observation}
%\newtheorem{corollary}{Corollary}


\begin{document}

\title{A Framework for Scalable Stream Join Processing in the Cloud}

\author{}

\maketitle

\begin{abstract}
  The join operation is central to stream query processing.
  Multiple stream queries are often posed on a single input stream pair,
  resulting in concurrent join tasks.
  Consequently, the workload of join operations increases,
  with larger join windows and higher stream input rates.
  This scenario calls for an all-purpose stream operator
  that is not only application-agnostic,
  but also capable of handling many concurrent join tasks effectively and efficiently.
  To achieve this goal,
  in this paper we propose S2J, a scalable stream join processing framework that adopts a dataflow-oriented processing model,
  performing each join task by distributing the load over an appropriate number of chained join workers and employing a tuple-block-based message passing protocol to
  reduce the communication overhead.
  The framework is efficient for theta-joins as well as multi-attribute joins,
  and provides real-time and result-integrity guarantees for join processing.
  Extensive experimental studies confirm the effectiveness and efficiency of the S2J framework.
\end{abstract}


%=============================================================================================================
\section{Introduction}
\label{sec:introduction}


With the advancement of technology and urban migration,
smart city planners and decision makers
are relying on information
and communication technologies for urban competitiveness.
In particular, there have been government-driven efforts to wire
up every corner of the city,
embracing the ``Internet of Things'' by putting in place a sensor fabric network
to connect everything everywhere.
With the sensor network,
more data streams are being generated,
stored and analyzed in real-time.
Hence,
stream queries that extract useful information from
%%% qian add citation: book:Barlow, vldb02:Carney
vast volumes of continuous incoming data in a real-time manner are common~\cite{book:Barlow, vldb02:Carney}.
Answering these queries typically involves three main types of operations, namely selection, projection, and join.
%%% qian add citation: book:Aggarwal
Selection and projection operations are unary reductive operations and easy to process~\cite{book:Aggarwal},
%%% qian add citation: sigmodrec03:Golab, pods02:Babcock
while the join operation is less straightforward, as the two input streams may correlate~\cite{sigmodrec03:Golab, pods02:Babcock}.
Furthermore,
it is often the case that multiple join tasks run simultaneously on an input stream pair,
and new join tasks are initiated when new queries on the stream pair arrive.
To answer the queries within an acceptable latency,
a processing framework for stream join must be capable of handling the join tasks efficiently.


Consider the example in Figure~\ref{fig:example_psi}:
queries 1 \& 2
are two typical stream queries from online pollutant analysis,
which help investigate the changes of contamination factors in air pollutants.
These two queries run on streams $C$ and $H$, which respectively carry the monitoring results of the current AQI (Air Quality Index)
and the historical records of one month ago.
Each query keeps checking the difference between the current and historical PM10 (or PM2.5)
whenever there is a significant gap between
the current and historical PM2.5 (or PM10) within a half-hour interval.
To answer the queries, two concurrent join tasks $C\Join_{\mid C.PM2\_5 - H.PM2\_5\mid > t}H$ and $C\Join_{\mid C.PM10 - H.PM10\mid > t}H$
have to be executed in parallel
on streams $C$ and $H$ with a join window size of 30 minutes.
In practice, the  number of concurrent join tasks,  the join window size, and the incoming rates of input streams significantly vary with different applications,
all of which should be taken into account in the design of a
scalable stream join processing framework.


Given the above discussion, the key requirements of a stream join processing framework are as follows.


\begin{figure}[t]
\centering
\framebox[0.95\linewidth]{\epsfig{file=pic/example_psi.eps, width=0.85\linewidth}}
\caption{Examples of streaming queries on AQI}
\label{fig:example_psi}
\end{figure}


\begin{enumerate}
\item \textbf{Generality.}
  Instead of being designed and optimized for specific applications, a processing framework should be application-agnostic. It should  support as many types of join predicates as possible rather than specific types of predicates (e.g., equality join).

\item \textbf{Handling concurrent join tasks.}
  There are often multiple stream queries running on an input stream pair simultaneously, resulting in a set of concurrent join tasks.
  The processing framework
  should schedule these concurrent join tasks efficiently.

\item \textbf{Processing heavy workloads.}
  Large join windows and high incoming stream rates increase
  the workload of join tasks, and
  challenge the processing efficiency and capacity of a stream join operator.
  Hence, the processing framework should handle
  very heavy workloads and avoid workload imbalance.
  Moreover,
  its control and maintenance of a large join window should not be the bottleneck of the processing efficiency.
\end{enumerate}


Although stream join processing
on a stand-alone machine 
%%% qian add citation: tkde10:Bornea, vldb03:Golab
has been well studied in the past decade~\cite{tkde10:Bornea, icde04:Mokbel, vldb03:Golab, vldb02:Dittrich, vldb01:Urhan, sigmod99:Ives, pdis91:Wilschut},
%%%
most existing methods cannot be easily extended to distributed environments
for higher scalability.
%%% qian add citation: dasfaa06:Zhou
Recent works on distributed stream join processing~\cite{sigmod13:Ananthanarayanan, sosp13:Zaharia, eurosys13:Qian, edbt09:Wang, vldb08:Logothetis, dasfaa06:Zhou} 
have limitations in either result integrity or
communication overhead,
and were usually designed and optimized for specific applications.
To handle concurrent join tasks on an input stream pair,
a straightforward approach is to duplicate
the input streams for each task.
However,
the duplicated streams would markedly increase
the communication cost, making the approach impractical.


In this paper,
we propose S2J (\textbf{S}calable \textbf{S}tream \textbf{J}oin),
a stream join processing framework
that can effectively and efficiently handle a large
workload of concurrent join tasks on one input stream pair.
S2J automatically allocates an appropriate
number of cascading processing units
known as join workers to distribute the workload of join processing,
and deploys concurrent join tasks on the join workers in
a source-sharing manner,
saving communication cost by not duplicating the streams.
To manage the information exchange between join workers,
S2J applies a tuple-block-based message passing protocol known as MP-2PF,
which
ensures that the join operation for each target tuple pair
is executed exactly once.
With an appropriately selected tuple block size,
S2J can reduce the required bandwidth, and
effectively exploit the processing capability of each join worker.
In addition, S2J adopts an input adapter
and a load shedder that adapt
to diverse input sources
and their different incoming rates.
%Consequently,
%S2J can efficiently support a large number of query accesses, and
%facilitate the join result materialization.


Our key contributions can be summarized as follows.
\begin{itemize}
\item
We propose a scalable stream
join processing framework, S2J, which dynamically adapts
to varying, and potentially huge, workloads.
 Specifically, based on the current utilization of each worker,
 S2J optimizes the number of join workers for an efficient use of computing resources.
 Via adaptive load shedding on the input streams,
 S2J is able to handle the false-positive overload caused by transitory
 increases of the streams' incoming rates, and
help the workers recapture adequate processing capacity.

\item
We present a tuple-block-based message
 passing protocol MP-2PF with a selection method for tuple block size.
The MP-2PF protocol enables S2J to guarantee the result integrity of join processing,
while the selected tuple block size enables S2J to reduce the communication overhead,
avoid workload imbalance,
and tolerate fluctuations in the incoming rates
of the input streams.
\end{itemize}



The remaining sections are organized as follows.
We review related work in Section~\ref{sec:related_work},
and elaborate the design and functionality of our S2J operator in Section~\ref{sec:design},
following which we report the experimental results and our findings in Section~\ref{sec:evaluation},
before concluding the paper in Section~\ref{sec:conclusion}.


%=============================================================================================================
\section{Related Work}
\label{sec:related_work}

Existing research on stream join processing can be broadly
classified into two main categories, namely (1) approaches that focus on handling and optimizing stream join processing on a stand-alone machine,
and (2) approaches that run large-scale stream join in distributed environments.


\subsection{Stream Join on a Stand-Alone Machine}

\subsubsection{Centralized Stream Join}
Earlier stream join methods often
entail a centralized maintenance of join states (e.g., intermediate results),
and employ either a hash- or sort-based join processing.

\emph{Hash-based Join Methods}.
Pipelining hash join~\cite{pdis91:Wilschut} is one of the most
classical stream join methods,
which takes advantage of parallel main-memory to speed up the join processing.
However, to keep the entire join state,
a sufficiently large main memory is required.
To address this issue,
the double-pipelined hash join~\cite{sigmod99:Ives}, XJoin~\cite{vldb01:Urhan}, and hash-merge join~\cite{icde04:Mokbel} flush partial hash tables to disk for subsequent processing.
In order to maximize the output rate,
some proposals~\cite{sigmod05:Tao, dasfaa07:Tok, sigmod10:Chen}
adopt a statistics-based flushing policy, in which only the
tuples that are more likely to be joined are kept in memory.


\emph{Sort-based Join Methods}.
As hash-based joins suit equality predicates but not inequality predicates,
sort-based joins were proposed to handle inequality joins.
However,
a traditional sorting operation requires the entire input
before producing any output.
Hence, progressive merge join~\cite{vldb02:Dittrich}
partitions the memory into two parts,
each of which carries one stream,
sorts the join keys,
and performs join processing when
the memory fills up.
This, however, results in a significant delay in output.


\begin{figure*}[!t]
\centering
\epsfig{file=pic/architecture.eps, width=0.82\textwidth}
\caption{Architecture of S2J}
\label{fig:arch}
\end{figure*}


\subsubsection{Multi-Core based Stream Join}
Modern multi-core technology brings parallelism
%%% qian add citation: damon13:Karnagel, micro12:Qian
for stream join on a stand-alone machine~\cite{sigmod11:Teubner, vldb07:Gedik, damon13:Karnagel, micro12:Qian}.
For example, Gedik \emph{et al.}~\cite{vldb07:Gedik} proposed
to use multi-core Cell processors to enhance join processing efficiency,
although the efficiency highly relies on the hardware parallelism,
which is often not well-supported by commodity hardware.
In addition,
optimizations that combine multi-core and
shared memory are also used by the operators
to further improve their join capability and efficiency.
Handshake join~\cite{sigmod11:Teubner} exemplifies such a join method,
in which each tuple in one stream handshakes with (i.e., joins with) the tuples in the other stream.
Nevertheless,
since all of these multi-core based stream join operators
are highly customized for execution on a single machine,
they have not been designed for scalability in a distributed environment,
and parallelizing them is not straightforward.


\subsection{Distributed Stream Join Processing}
%%%cy:changed
To achieve a high processing capability and scalability,
recent research focuses on distributed stream join processing;
however, most existing solutions are application-specific,
and some achieve efficiency at the expense of the integrity of join results.

%in generality, result integrity, and communication overhead.
%%%% ooibc3: this is very sweeping and it is the 2nd time u said it!
%%%%          if i read this as reviewer, i would be annoyed!

Photon~\cite{sigmod13:Ananthanarayanan} is a fault-tolerant,
distributed stream join system proposed by Google.
It has been specifically designed and optimized for joining data
streams of web search queries and user clicks on advertisements,
sacrificing some generality of porting to other applications.

D-Stream~\cite{sosp13:Zaharia} breaks continuous
streams into discrete units and processes them as batch jobs on Spark~\cite{nsdi12:Zaharia}.
However, this batch processing on streams provides no guarantee on the
integrity of join results, since
target tuple pairs that fall into separate batches may miss each other.
Similarly, most MapReduce-based stream join processing methods
such as \cite{sigmod10:Blanas, vldb08:Logothetis} also face the same problem.


TimeStream~\cite{eurosys13:Qian} exploits the dependencies of
tuples to perform stream join.
However, the maintenance of dependencies incurs communication overhead,
and may become the  bottleneck of its performance.
Multiple join predicates may further complicate this dependency-based solution.

PSP~\cite{edbt09:Wang} transforms a macro join operator into a series
of smaller sub-operators by time-slicing of the states.
The processing is distributed to these sub-operators
which are connected in a ring architecture.
However, since it has to synchronize the distributed join states,
its communication overhead may be high,
which could be exponential in the number of sub-operators.


%=============================================================================================================
\section{S2J Processing Framework}
\label{sec:design}

This section  first presents the architecture of S2J processing framework,
followed by details on
(1) how S2J adapts to varying workload,
(2) S2J  message passing mechanism, and
(3) optimizations to enhance the join processing efficiency.


%-------------------------------------------------------------------------------------------------------------
\subsection{System Architecture}
The main objective of S2J processing framework is
the efficiency of stream join processing in a large-scale distributed environment.
Figure~\ref{fig:arch} depicts
the S2J system architecture,
which is composed of a scalable join engine,
an input adapter,
a load shedder, a materialization module,
a query proxy, and a query processor.


\subsubsection{Scalable Join Engine}

To maximize the computational scalability,
the join engine of S2J adopts
%%%%%%
a distributed stream processing model,
in which workers are the basic processing units,
deployed on the nodes of the distributed platform,
where one node can carry one or multiple workers.
These workers are connected by stream channels in a cascading
manner, and new workers can be added on the fly as needed.


%%%%%% added this paragraph --Dec 8
Instead of using a centralized join window control, which suffers from a high communication cost, S2J assigns a lifespan to every tuple when it enters the join engine, and applies a dataflow-oriented processing model that works as follows. When a tuple $r$ of source R in Figure~\ref{fig:arch} arrives at the left end of the join engine, it starts a lifespan equal to the specified join window size, for example, 1 minute; it then moves to the right, joining with every target tuple of source S that it meets, and finally expires and moves out of the engine via the right end 1 minute later.
Note that at the moment tuple $r$ enters the engine, the tuples of source S from the last 1 minute are already in the engine, and the tuples of the subsequent 1 minute will move into the engine before $r$ expires. Thus, all join operations related to $r$ (i.e., joining $r$ with all valid matches within the join window) are guaranteed to be completed within the specified time limit (i.e., 1 minute here). In other words, the join engine provides a strict real-time guarantee for join processing.
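The lifespan-based dataflow above can be sketched as follows; this is a simplified single-process illustration (the class and method names are our own, not part of S2J), which uses event time to expire tuples whose lifespan has elapsed.

```python
from collections import deque

class WindowJoin:
    """Sketch of lifespan-based join: each tuple lives for `window`
    time units after arrival; a newly arriving tuple from one stream
    joins with every live tuple of the other stream."""

    def __init__(self, window, predicate):
        self.window = window                       # join window size (lifespan)
        self.pred = predicate                      # theta-join predicate pred(r, s)
        self.live = {"R": deque(), "S": deque()}   # (timestamp, tuple) per source

    def _expire(self, source, now):
        q = self.live[source]
        while q and now - q[0][0] >= self.window:
            q.popleft()                            # lifespan over: leave the engine

    def insert(self, source, ts, tup):
        # expire tuples of both streams up to the current event time
        for s in ("R", "S"):
            self._expire(s, ts)
        # join the arriving tuple with all live tuples of the other stream
        if source == "R":
            out = [(tup, t) for _, t in self.live["S"] if self.pred(tup, t)]
        else:
            out = [(t, tup) for _, t in self.live["R"] if self.pred(t, tup)]
        self.live[source].append((ts, tup))
        return out
```

For a theta-join such as $|C.PM2\_5 - H.PM2\_5| > t$, the predicate is simply `lambda r, s: abs(r - s) > t`; a tuple that entered 60 time units ago no longer produces matches, mirroring the real-time guarantee above.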


To achieve a global workload balancing,
the message passing in this join engine implements
a criterion that a worker must transfer part of its workload
to its successor (w.r.t.~the stream direction) if{f} its workload
is greater than that of its successor by a threshold.
This criterion guarantees that the global workload
is distributed over all workers
evenly and automatically.


Meanwhile, to concurrently carry out multiple
join tasks on the same stream pair,
the join engine assigns a separate set of chained worker instances
(marked by different shades in Figure~\ref{fig:arch}) to
each task.
New worker instances will
be initialized automatically when there are new join
tasks on the arriving stream pair.
All worker instances belonging to the same worker
share the stream channels and  data flow,
and therefore help S2J save the communication cost
by not duplicating the input streams for each join task.


\subsubsection{Input Adapter and Load Shedder}

To make S2J independent of diverse input streams, an
input adapter is adopted to convert each external data source
into a standardized streaming input;
at the same time,
it performs the pre-selection
and pre-projection for the corresponding predicates
raised by the streaming query.
This is similar in spirit to some popular large scale processing engines
that are independent of formats by having an all-purpose reader.
In our case, for example,
only the join related attributes of every tuple are
projected before the join processing.
Additionally, the load shedder is used
%%% qian add citation: vldb03:Tatbul
to handle the transitory increases of the incoming rate of input stream pair~\cite{vldb03:Tatbul}.


\subsubsection{Materialization}
The latest outputs of the join engine
are maintained by a memory buffer,
while the older ones are
structured as \emph{snapshots} and materialized to persistent storage.
Each snapshot groups a set of records that share a common identifier (e.g., committing time), and
facilitates subsequent retrieval based on this common identifier.
Moreover, S2J supports two committing strategies for the snapshots, i.e., committing  periodically or
according to punctuations
\cite{tkde03:Tucker} in the input streams.
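As a rough illustration, periodic committing of snapshots might look like the following sketch (the class and the choice of a commit-period key are our assumptions, not S2J's actual implementation):

```python
class Materializer:
    """Sketch: buffer the latest join outputs in memory and commit
    older ones as snapshots keyed by a common committing time."""

    def __init__(self, period):
        self.period = period
        self.buffer = []        # (timestamp, record) kept in memory
        self.snapshots = {}     # commit_time -> list of records

    def append(self, ts, record):
        self.buffer.append((ts, record))

    def commit(self, now):
        # records older than one period are grouped into snapshots
        cutoff = now - self.period
        old = [(ts, r) for ts, r in self.buffer if ts < cutoff]
        self.buffer = [(ts, r) for ts, r in self.buffer if ts >= cutoff]
        for ts, r in old:
            key = (ts // self.period) * self.period   # common identifier
            self.snapshots.setdefault(key, []).append(r)
```

The snapshot key then supports retrieval of all records committed in the same period; punctuation-based committing would simply replace the periodic `commit` trigger.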


\subsubsection{Query Proxy and Query Processor}
S2J supports
both continuous query,
which keeps requesting up-to-date results,
and one-time query,
which requests
the results within a time span.
By applying the client-server model for query processing,
multiple queries can be supported simultaneously.
At the client-end, each query proxy converts query requests into stream events, and
transforms the query response stream into the clients' format.
At the server-end,
the query processor parses the stream events carrying the query requests,
answers the queries using necessary operations, and
returns the results to the corresponding clients via query response streams.



\begin{figure}[t]
\centering
\epsfig{file=pic/adapt_to_workload.eps, width=0.8\linewidth}
\caption{Adapting to varying workload}
\label{fig:adapt_to_workload}
\end{figure}



%-------------------------------------------------------------------------------------------------------------
\subsection{Adapting to Varying Workload}

The workload of a join task depends on the join window size and the incoming rate of the input stream pair, both of which vary with applications,
resulting in varying workload between different applications.
When the join window size is fixed in a specific application,
the fluctuation of streams' incoming rates may also incur
varying workload during join processing.
Hence, adapting to such varying workloads
is of practical significance for a stream join operator.


To this end, our S2J processor adopts a sectional
solution based on the average utilization ratio of the workers.
Let $\tau$ denote the current average utilization ratio, i.e., $\tau=\frac{w}{W}$,
where $w$ is the current average workload (measured by the number of tuples) over the workers,
and $W$ is the capacity of each worker (i.e., the maximal number of tuples that a worker can handle).
The following three user-defined thresholds of $\tau$ trigger different strategies to adapt to a given workload.


\begin{itemize}
\item Threshold \textbf{$\tau_{0}$} for releasing superfluous workers.

\item Threshold \textbf{$\tau_{1}$} for starting adaptive load shedding.

\item Threshold \textbf{$\tau_{2}$} for allocating extra workers.
\end{itemize}


Figure~\ref{fig:adapt_to_workload} illustrates this sectional solution,
where $\tau^*$
refers to the expected initial value of $\tau$, and the
thresholds satisfy the following relationship.
\[0 < \tau_{0} < \tau^* < \tau_{1} \leqslant \tau_{2} \leqslant 1.\]


Based on the above thresholds, in what follows we discuss
the initial deployment of computation
resources (i.e., join workers of S2J) for different applications,
and introduce two approaches
that dynamically adapt S2J to the varying workload during
the execution, i.e.,  shedding the load
adaptively and adjusting the number of join workers on the fly.


\subsubsection{Initialization} \label{initial_deploy}
When a join task is about to be initiated,
an appropriate number $m$ of join workers for the initial deployment of S2J
needs to be specified, which is a trade-off decision.
This is because deploying fewer workers
reduces the waste of computation resources,
while deploying more workers
brings a greater total capacity that can
absorb larger fluctuations in workload.


Formally,
let $\varphi$ denote the join window size of the join task,
and $E(\varpi)$ the expected incoming
rate of an input stream,
which can be roughly estimated from users' prior knowledge or by transiently monitoring the input stream. To achieve an expected average utilization ratio $\tau^*$ for the workers,
the value of $m$ can be estimated as
\[m=\Big\lceil\frac{E(\varpi)\cdot\varphi}{W\cdot \tau^*}\Big\rceil,\]
and the trade-off problem is thus converted to selecting an appropriate value for $\tau^*$.
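For concreteness, the estimate of $m$ above can be computed as in the following sketch (function and parameter names are illustrative):

```python
import math

def initial_workers(expected_rate, window, capacity, tau_star):
    """m = ceil(E(rate) * window / (W * tau*)): the number of join
    workers needed to reach an expected utilization of tau_star."""
    return math.ceil(expected_rate * window / (capacity * tau_star))
```

For example, with an expected rate of 1000 tuples/s, a 60-second window, a per-worker capacity of 20000 tuples, and $\tau^*=0.4$, eight workers would be deployed initially.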


As $\tau^*<\tau_1$,
we set $\tau^*=\beta\cdot\tau_1$ ($0<\beta<1$).
If $\tau^* \to \tau_1$ (i.e., $\beta \to 1$), all workers would be effectively exploited,
but even a small transitory increase in the incoming rate of the input streams
would trigger a false positive of load shedding.
In contrast, if $\tau^* \to 0$ (i.e., $\beta \to 0$),
a large number of workers would be allocated with low utilization, incurring additional communication cost,
even though they could seemingly absorb stream fluctuations of greater amplitudes.
Given the above discussion, we set $\tau^*=0.5\cdot\tau_1$ in our experiments
to ensure that each worker not only achieves a reasonable utilization,
but is also able to withstand a relatively high amplitude of fluctuation.



\begin{figure}[t]
\centering
\epsfig{file=pic/load_shedding.eps, width=0.7\linewidth}
\caption{Example of load shedding model}
\label{fig:shed_factor}
\end{figure}



\subsubsection{Adaptive Load Shedding}
During the execution of a join task,
it is often the case that there are many
transitory increases of workload caused by the fluctuations of streams' incoming rates.
To handle such situations,
S2J adopts adaptive load shedding, i.e., shedding a percentage of incoming tuples when the utilization of each worker is (about to be) saturated.
This percentage is known as the \emph{shedding ratio} ($SR$), which is defined as follows.
\[SR = \begin{cases}
B + \dfrac{1 - B}{1 - \tau_{1}} \cdot (\tau - \tau_{1}), & \text{if}~\tau >\tau_1,\\
0, & \text{otherwise},
\end{cases}\]
where $B \in [0, 1]$ is the base percentage of shedding.


The above shedding ratio uses a linear shedding model, i.e., when $\tau$ exceeds threshold $\tau_{1}$, the shedding ratio is proportional to the exceeded load size.
In practice, the shedding ratio could follow an alternative shedding model,
e.g., a quadratic one.
Figure~\ref{fig:shed_factor} illustrates
the changing trend of the shedding ratio under these two shedding models.
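Both shedding models can be expressed compactly; the sketch below follows the $SR$ definition above, with the quadratic variant shown only as one possible alternative:

```python
def shedding_ratio(tau, tau1, base, model="linear"):
    """Fraction of incoming tuples to shed once utilization tau
    exceeds threshold tau1; base is the base shedding percentage B."""
    if tau <= tau1:
        return 0.0
    excess = (tau - tau1) / (1.0 - tau1)   # normalized overload in (0, 1]
    if model == "quadratic":
        return base + (1.0 - base) * excess ** 2
    return base + (1.0 - base) * excess    # linear model from the text
```

Both models yield $SR = B$ just above $\tau_1$ and $SR = 1$ at $\tau = 1$; the quadratic model sheds more gently near $\tau_1$.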


\subsubsection{Join Worker Management}\label{scaling_up}
In order to handle sustained increases and decreases of stream incoming rates during execution,
S2J adopts a dynamic management of its join workers.


If $\tau>\tau_2$ lasts for a certain time period,
to mitigate the overload,
S2J automatically increases its processing capacity by allocating extra workers.
The average utilization ratio $\tau$ then decreases due to the newly allocated workers,
and S2J keeps adding workers until
either $\tau$ drops to about $\tau^{*}$ (see Section~\ref{initial_deploy})
or the available nodes in the cluster run out.


If $\tau<\tau_0$ lasts for a relatively long time,
to address the underutilization,
S2J can also easily save the computing resources in use
by releasing superfluous workers.
Specifically, according to the current value of $\tau$,
S2J deallocates an appropriate number of connected workers at one end
such that the average utilization ratio of the remaining workers rises to about $\tau^{*}$.
The nodes that carried the deallocated workers are
recycled by S2J and become available in the cluster again.
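The scaling policy can be summarized by the following sketch (threshold names follow the text; resizing in a single step is a simplification of S2J's gradual adding and releasing of workers):

```python
import math

def adjust_workers(m, total_load, capacity, tau0, tau2, tau_star, max_nodes):
    """Resize the worker set when the average utilization leaves
    [tau0, tau2], targeting utilization tau_star (names illustrative)."""
    tau = total_load / (m * capacity)
    if tau > tau2 or tau < tau0:
        # number of workers that brings utilization back to ~tau_star
        target = math.ceil(total_load / (capacity * tau_star))
        return max(1, min(target, max_nodes))
    return m
```

With $W = 1000$, $\tau_0 = 0.2$, $\tau_2 = 0.9$, and $\tau^* = 0.5$, an overloaded deployment of 4 workers holding 3800 tuples grows to 8 workers, while an underloaded deployment of 8 workers holding 800 tuples shrinks to 2.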



%-------------------------------------------------------------------------------------------------------------
\subsection{Message Passing Mechanism}

In distributed processing,
message passing between the distributed workers incurs a considerable amount of communication,
which could become the performance bottleneck for the system,
especially when the network bandwidth is limited~\cite{sc11:Palanisamy}.
In order to save the communication cost,
concurrent join tasks  share the stream channels
and data flow to minimize the number of stream pairs handled by S2J.
For a given stream pair,
the main overhead of communication is caused by  the serialization
of messages\footnote{The cost of passing tuples of the stream pair is indispensable and fixed.
Passing the message heads generated by serialization is the main communication overhead which should be reduced.}.
To reduce this overhead, S2J
lowers the serialization cost by using a
tuple block containing multiple tuples for each
message passing.


For the tuple-block-based message passing,
in what follows, we propose a protocol called MP-2PF to
schedule the passing procedure and associate it with the progress of join processing, and
%%%%% revised --Dec 9
present a selection method for the tuple block size, which dynamically adjusts the granularity of a tuple block
to satisfy the key needs of different periods,
such as reducing bandwidth and avoiding false-positive load shedding.
%In addition,
%we also propose strategy to to select a suitable tuple block size to
%help S2J speed up the message passing and
%simultaneously reduce the communication overhead.
% %%%%% ooibc3: the above is a bit redundant as you have said before and
% %%%%     it did not say any new or details


\subsubsection{MP-2PF Protocol} \label{opt_message}
S2J adopts the two-phase forwarding model~\cite{sigmod11:Teubner} to facilitate join processing.
Based on this model,
we propose a message passing protocol known as MP-2PF to
conduct a passive exchange of information between workers: when the state of a worker changes, it immediately informs its neighbors of the change, and the neighbors depend on this information to perform their next operations.


There are three types of messages in MP-2PF:

\begin{itemize}
\item \textbf{SIZE\_CHG}: informing predecessor about its workload size.

\item \textbf{TUPLE\_BLK}: transmitting a block of tuples between neighbor workers.

\item \textbf{ACK}: acknowledgement for the received block of tuples.
\end{itemize}


\noindent
%%% revised. --Dec 9
When the workload size (measured by the number of tuples) on a join worker changes, a SIZE\_CHG message is sent to the anterior worker.
If the workload size is less than that of the anterior worker by at least one tuple block, a workload transfer occurs between the two adjacent workers. During a workload transfer, a copy of the forwarded tuples is kept by the anterior
worker until a corresponding acknowledgement is received.
The TUPLE\_BLK message carries the forwarded block of tuples, and the kept copy is deleted once the worker
receives an ACK message from its successor.
In this way, S2J avoids the
missing-join-pair problem~\cite{sigmod11:Teubner},
and ensures that each join operation between a target tuple pair
is executed exactly once.
% In addition,
% the SIZE\_CHG message indicates the workload size so that the workload status can
% be maintained consistent between the collaborative workers.
% %%%% ooibc3: is the above sentcen any use?


\begin{figure}[t]
\centering
\epsfig{file=pic/message_passing_protocol.eps, width=0.95\linewidth}
\caption{Message passing protocol for two-phase forwarding (MP-2PF).}
\label{fig:mp_proto}
\end{figure}


Figure~\ref{fig:mp_proto} outlines our MP-2PF protocol.
The workers of S2J transit among three states during the processing of a tuple block.
Each worker starts with a \texttt{Processing}
state and goes back to that state after taking in a new tuple.
When a new tuple block arrives,
the worker processes the join using the new tuple,
sends an acknowledgement to its predecessor,
and then checks the forwarding condition
to decide whether the tuple block forwarding procedure should be invoked.
The \textit{forwarding condition} refers to the threshold of the difference
of workload sizes between adjacent workers (the threshold is exactly the tuple block size).
If the forwarding condition is met, the worker transits to a \texttt{Forwarding} state.
The worker then sends the tuple block to its successor,
leaves a forwarded copy in its site,
and finally transits back to a \texttt{Processing} state.
If the worker receives one or more acknowledgements in the \texttt{Processing} state,
it transits to a \texttt{Deleting} state,
followed by deleting the copies of the previously forwarded tuples w.r.t. the corresponding acknowledgement messages,
and informing the change of workload size to its predecessor.
Finally, the worker also transits back to a \texttt{Processing} state.
Since the operations in the \texttt{Forwarding} state and the \texttt{Deleting} state do not block the join processing in the \texttt{Processing} state,
our S2J operator always makes progress under this asynchronous MP-2PF protocol.
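The per-worker behavior under MP-2PF can be sketched as the following toy state machine, where sending a message is modeled by returning an event (all names are illustrative; real S2J workers perform the join in the \texttt{Processing} state concurrently with forwarding and deleting):

```python
class MP2PFWorker:
    """Toy sketch of the MP-2PF states (Processing/Forwarding/Deleting)."""

    def __init__(self, block_size):
        self.block_size = block_size
        self.tuples = []          # local workload
        self.pending = []         # forwarded blocks awaiting an ACK
        self.successor_load = 0   # last SIZE_CHG received from successor

    def on_tuple_block(self, block):
        # Processing: take in the new tuples, then ACK the predecessor
        self.tuples.extend(block)
        events = [("ACK", len(block))]
        # forwarding condition: load exceeds the successor's by >= block_size
        if len(self.tuples) - self.successor_load >= self.block_size:
            fwd = self.tuples[:self.block_size]
            self.pending.append(fwd)          # keep a copy until ACKed
            del self.tuples[:self.block_size]
            events.append(("TUPLE_BLK", fwd))
        return events

    def on_ack(self):
        # Deleting: drop the oldest forwarded copy, report the new size
        self.pending.pop(0)
        return [("SIZE_CHG", len(self.tuples))]
```

The `pending` list models the kept copies that guarantee result integrity: a forwarded block is only discarded after the successor's ACK arrives.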



\subsubsection{Selection of Tuple Block Size}

For each message passing, the tuples
transferred between two workers need to be serialized
as a message with a message head,
which increases the communication volume.
In other words,
the communication overhead of transferring a fixed number of tuples is proportional to the number of serializations performed.
Hence, instead of  forwarding one tuple at a time~\cite{sigmod11:Teubner},
S2J uses a tuple block containing $\varsigma$ tuples as the transfer unit,
i.e.,
a worker will pass $\varsigma$ tuples to its
successor only when its workload exceeds its successor's by $\varsigma$,
like the case in Figure~\ref{fig:tuple_blk}~(a); otherwise, no tuple transfer occurs, like the case in Figure~\ref{fig:tuple_blk}~(b).
Furthermore, it should be noted that the maximum workload differential between any two workers in S2J is exact $\varsigma$ tuples.


\begin{figure}[t]
\centering
\epsfig{file=pic/tuple_block.eps, width=0.95\linewidth}
\caption{Workload difference between workers}
\label{fig:tuple_blk}
\end{figure}



To minimize the serialization cost and thereby reduce the
communication overhead, the value of $\varsigma$ should be as large as possible.
Nonetheless, as $\varsigma$ increases,
the marginal reduction in communication overhead becomes smaller and smaller,
and is negligible once $\varsigma$ is large enough.
This phenomenon can be explained by the following lemma and theorem.

\begin{lemma}
\label{lemma:1}
Let $\varpi$ be the incoming rate of the input stream pair,
and $c$ denote the (constant) size of each message head;
then in S2J with $m$ join workers,
the tuple block size $\varsigma$ contributes to the bandwidth $b$ as follows.
%%
\[b(\varsigma) =  2m\cdot(\varpi + \frac{\varpi\cdot c}{\varsigma}) + b_{out}.\]
%%
where $b_{out}$ is the bandwidth used to output the join results.
\end{lemma}

\begin{proof}
Please refer to Appendix A.~~~~~~~~~~~~~~~~~~~~~~%\hfill $\blacksquare$
\end{proof}

Note that $b_{out}$ is determined by the join results to be output, and does not vary with the tuple block size $\varsigma$.



To be more intuitive,
Figure~\ref{fig:bandwidth_vs_s}(a) illustrates how the bandwidth $b$ varies with the tuple block size $\varsigma$ ($\varsigma\geqslant 1$), where
$b_{max}=2m\cdot(\varpi + \varpi\cdot c) + b_{out}$ is the maximum value the bandwidth can reach (at $\varsigma=1$),
and $b_{min}=2m\varpi+b_{out}$ is the lower bound of the bandwidth (as $\varsigma\to\infty$).
%%
From the diagram, we can observe that once the value of $\varsigma$ exceeds a threshold $\varsigma^*$ (e.g., $20$), the residual variation $\beta$ of $b$ becomes
very small (e.g., $5\%$).
The relationship between $\varsigma^*$ and $\beta$ is
\begin{equation*}
\varsigma^* = \frac{1}{\beta},
\end{equation*}
which can be obtained by simplifying the equation
$\beta=\frac{b(\varsigma^*)-b_{min}}{b_{max}-b_{min}}$. Hence, S2J can achieve a relatively low bandwidth $b_{min}\cdot(1+\beta)$ with a modest tuple block size $\varsigma^*$.




\begin{theorem}
\label{theorem:1}
The bandwidth required by S2J will be only $(1+\beta)$ times the lower bound of the bandwidth if the tuple block size in use is $\varsigma^*$, where
\[\varsigma^* = \frac{1}{\beta}.\]
\end{theorem}

\begin{proof}
Please refer to Appendix B.~~~~~~~~~~~~~~~~~~~~~~%\hfill $\blacksquare$
\end{proof}
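The relationship stated in Lemma~\ref{lemma:1} and Theorem~\ref{theorem:1} can be checked numerically. The parameter values below are illustrative, not measurements from our experiments.

```python
# Numerical check of the bandwidth model: with sigma* = 1/beta, the
# residual variation of b relative to its full range is exactly beta.
m, rate, c, b_out = 4, 1000.0, 64.0, 500.0  # workers, tuples/s, head size, output bw


def bandwidth(block_size):
    """b(sigma) = 2m * (rate + rate*c/sigma) + b_out, as in Lemma 1."""
    return 2 * m * (rate + rate * c / block_size) + b_out


b_min = 2 * m * rate + b_out               # limit as sigma -> infinity
b_max = 2 * m * (rate + rate * c) + b_out  # sigma = 1

beta = 0.05
sigma_star = 1 / beta                      # = 20
residual = (bandwidth(sigma_star) - b_min) / (b_max - b_min)
print(residual)                            # prints 0.05
```

The residual variation equals $\beta$ regardless of the concrete values of $m$, $\varpi$, and $c$, since those factors cancel in the ratio.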



\begin{figure}[t]
\centering
\epsfig{file=pic/bandwidth_vs_s.eps, width=0.95\linewidth}
\caption{(a) Bandwidth in S2J with varying $\varsigma$; (b) $\varsigma$ selected under different $\tau$}
\label{fig:bandwidth_vs_s}
\end{figure}


%%On the other hand,
In essence, the benefit of using a relatively small tuple block size
is two-fold.
(1) A small $\varsigma$ evens the workload
over all workers and avoids workload imbalance,
since, as mentioned earlier,
the workload differential between two adjacent workers in S2J never exceeds
one tuple block size.
(2) A small $\varsigma$ also helps S2J avoid false positive load shedding,
especially
when a fluctuation in the incoming rate of the input streams temporarily increases
the utilization of each join worker.
Take the following scenario for example.
Assume that three join workers are in use,
each of which can afford
30 additional tuples before its utilization ratio exceeds
the load shedding threshold $\tau_1$.
If the $\varsigma$ in use is too large, say more than 90,
then the peak utilization ratio on the first worker
will increase by more than $\frac{90}{W}$, where $W$ is the capacity of each worker, and make the average utilization ratio $\tau$ exceed $\tau_1$.
As a result, load shedding will be conducted,
although the other workers still have spare processing capacity.
In contrast, a small $\varsigma$ helps avoid this situation, and triggers load shedding only when each worker is truly about to be saturated.

In short,
a small $\varsigma$ enables S2J to effectively exploit
the processing capability of each join worker,
and thus
avoid workload imbalance and tolerate
fluctuations in the incoming rate of the input
streams.


Based on the above analysis,
we propose a selection method for tuple block size $\varsigma$ as follows.
\[\varsigma = \begin{cases}
\Big\lceil\varsigma^{*}\big(\frac{\tau_1 - \tau}{\tau^*}\big)^{\alpha}\Big\rceil,~\text{if}~\tau^*\leqslant\tau\leqslant\tau_1\\
1,~~~~~~~~~~~~~~~~\text{otherwise}
\end{cases}\]
where $\alpha\in(0,1]$ is a tunable coefficient, and $\varsigma^{*}$ determines the smallest bandwidth the system can achieve, namely $b_{min}\cdot(1+\beta)$.
In our experiments,
we usually set $\varsigma^{*}$ to 20 so that the smallest achievable bandwidth is only
$5\%$ above the lower bound of the bandwidth.


For this selection method, we have the following claim.
\begin{claim}
\label{claim:1}
By using the proposed tuple block size selection method, S2J can effectively reduce both the number of load shedding operations and the occurrence of false positive load shedding.
\end{claim}

\begin{proof}
Please refer to Appendix C.~~~~~~~~~~~~~~~~~~~~~~%\hfill $\blacksquare$
\end{proof}



Figure~\ref{fig:bandwidth_vs_s}(b) illustrates this selection method, from which we can see that
(1) before $\tau$ reaches $\tau_1$, the tuple block size $\varsigma$ selected by the method stays at about $\varsigma^{*}$ in most cases, enabling S2J to reduce the communication overhead and the required bandwidth;
(2) when $\tau$ is close to $\tau_1$,
the method selects a smaller $\varsigma$ so as to avoid
false positive load shedding;
and
(3) when $\tau$ exceeds $\tau_1$,
the tuple block size is reduced to 1 to fully exploit the capacity of each join worker.



%-------------------------------------------------------------------------------------------------------------
\subsection{Optimizations} \label{sec:effi}

%%During join processing,
To output a certain number of join results,
the amount of computation
is proportional to the number
of comparison operations used to find target tuple pairs.
Hence, the join processing efficiency $\mathcal{P}$ of an
operator can be estimated by the following equation.
%
\begin{equation}\label{eq:process_effi}
\mathcal{P} = \dfrac{\text{number of outputs}}{\text{number of comparisons}}
\end{equation}
%
A larger value of $\mathcal{P}$ indicates
that the operator processes the stream join
more efficiently.
In what follows,
we briefly introduce the optimizations S2J employs to improve this join processing efficiency.


%%%% cy: re-worded some sentences in the paragraph.
To avoid missing any join result,
some existing stream join processing frameworks \cite{sigmod11:Teubner}
carry out the join operation between all tuple pairs\footnote{A tuple pair consists of two tuples, one from
each of the two input streams.} in a worker,
which is far from efficient unless
all (or most of) the tuple pairs satisfy the join predicates,
an unusual situation in practice.
To enhance the join processing efficiency,
S2J uses in-memory indices to eliminate unnecessary join operations and
accelerate processing:
when a tuple joins with the opposite stream,
it first searches via an index for the join keys satisfying the predicate,
and then joins only with the opposite tuples holding matching join keys.

To facilitate joins with different predicates, S2J adopts
a hash index for equality joins, which directly locates the target
join key and retrieves the corresponding tuples sharing that key value,
and
utilizes a balanced binary search tree (BST) index for inequality joins,
since it is more convenient and efficient to access a range of join keys
%
indicated by predicates such as $<$, $>$, $\leqslant$, $\geqslant$.
A quantitative discussion can be found in Appendix~D.
%%%% ooibc3: it is funny to use BST in today systems

%%% cy: changed
As shown later in our experimental studies,
the strategy described above can reduce the number of required join operations
by two to three orders of magnitude,
and thus significantly improves the join processing efficiency.
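The index-based probing described above can be sketched as follows. The class and method names are illustrative; moreover, since Python's standard library has no balanced BST, a sorted key list with binary search via \texttt{bisect} stands in for one, offering the same $O(\log n)$ range access.

```python
import bisect
from collections import defaultdict


class WindowIndex:
    """Sketch of a per-worker index over one stream's window (illustrative)."""

    def __init__(self):
        self.hash_index = defaultdict(list)  # join key -> tuples (equality)
        self.sorted_keys = []                # ordered (key, tuple) pairs (ranges)

    def insert(self, key, tup):
        self.hash_index[key].append(tup)
        bisect.insort(self.sorted_keys, (key, tup))

    def probe_eq(self, key):
        # Equality predicate: O(1) expected lookup instead of a full scan.
        return list(self.hash_index[key])

    def probe_lt(self, key):
        # Inequality predicate (opposite key < probe key): binary search
        # for the cut point, then emit only the qualifying range.
        i = bisect.bisect_left(self.sorted_keys, (key,))
        return [t for _, t in self.sorted_keys[:i]]
```

With such an index, a worker compares an arriving tuple only against the tuples whose keys can satisfy the predicate, rather than against every tuple in the opposite window.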







\emph{\textbf{Supporting Multi-Attribute Join.}}
S2J supports efficient theta-join by using a hash map and a BST
to accelerate equality and inequality joins respectively.
For the compound predicates in a multi-attribute join,
S2J piggybacks the processing of the auxiliary predicates on that of a main predicate.
%%%% ooibc3: english


For example, suppose we add one more conjunctive predicate $C.Temperature=H.Temperature$ (i.e.,
the current air temperature is equal to a historical one) to the WHERE clause of Query 2 in Figure~\ref{fig:example_psi} and choose it as the main predicate.
S2J can then quickly find the
synthermal periods with the help of the hash index,
and accordingly reduce the search scope of the join operation $C\Join_{\mid C.PM2\_5 - H.PM2\_5\mid > t}H$.
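This piggybacking can be sketched as follows. The function name and record layout are illustrative, with the temperature equality as the main predicate and the PM2.5 inequality as the auxiliary one, following the example above.

```python
from collections import defaultdict


def multi_attr_join(current, historical, t):
    """C JOIN H on C.Temperature = H.Temperature and |C.PM2_5 - H.PM2_5| > t."""
    # Main predicate: build a hash index on the equality attribute.
    by_temp = defaultdict(list)
    for h in historical:
        by_temp[h["Temperature"]].append(h)

    results = []
    for c in current:
        # Probe the hash index, then apply the auxiliary predicate only
        # to the (much smaller) set of synthermal candidates.
        for h in by_temp[c["Temperature"]]:
            if abs(c["PM2_5"] - h["PM2_5"]) > t:
                results.append((c, h))
    return results
```

The auxiliary predicate is thus evaluated only on tuples that already satisfy the main predicate, instead of on every tuple pair.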


Moreover,
if all the predicates are inequalities,
the selection of the main predicate
can also be optimized,
provided the domain distribution of the input streams is given as a priori knowledge,
with which
S2J can find the predicate with the least skewed distribution of join keys~\cite{icde11:Khalefa}.
Using it as the main predicate
accelerates the join processing.



%=============================================================================================================
\section{Evaluation}
\label{sec:evaluation}

We have implemented a prototype of the proposed S2J stream join processing framework on top of Apache S4~\cite{icdm10w:Neumeyer}. With this prototype, we first evaluate the processing capability of S2J on a stand-alone machine, in distributed environments, and when running multiple concurrent join tasks. We then report the processing efficiency of S2J and discuss the effect of the tuple block size $\varsigma$,
before concluding with a case study of how S2J adapts to varying workload.

%-------------------------------------------------------------------------------------------------------------
\subsection{Processing Capability Study}

Since our S2J operator can be deployed both on a stand-alone machine (with one join worker per core) and in a distributed environment,
in what follows we investigate its join processing capability in these two running modes. In addition, as S2J can run more than one join task at the same time, we also report its capability of handling concurrent join tasks.


\subsubsection{Performance on a stand-alone machine}
This experiment compares the join processing capability of S2J with that of a classic multi-core stream join processing approach
called Handshake Join \cite{sigmod11:Teubner}, in terms of the maximum input rate each can afford.
Both operators run in turn on the same stand-alone machine, using a different number of cores in each execution.


Figure~\ref{fig:standalone} illustrates the comparison results under different workloads
(with varying incoming rates $\varpi$ of the input streams, and different join window sizes $\varphi$), from which we can observe that
%%
(1) compared with Handshake Join, our proposed S2J framework can sustain a significantly higher maximum input rate,
indicating that it has a greater join processing capability;
%%
(2) the maximum throughput of S2J grows faster than that of Handshake Join
as more cores are put in use, indicating that S2J also scales better in join processing capability.


\begin{figure*}[ht]
\begin{center}
\subfigure[$\varphi = 5$ min]{\includegraphics[width=4.25cm]{fig/standalone_5min.eps}}
\makeatletter\def\@captype{figure}\makeatother\subfigure[$\varphi = 10$ min]{\includegraphics[width=4.25cm]{fig/standalone_10min.eps}}
\subfigure[$\varphi = 15$ min]{\includegraphics[width=4.25cm]{fig/standalone_15min.eps}}
\makeatletter\def\@captype{figure}\makeatother\subfigure[$\varphi = 20$ min]{\includegraphics[width=4.25cm]{fig/standalone_20min.eps}}
\caption{The maximum input rate each processing framework can afford on stand-alone machine\label{fig:standalone}}
\end{center}
\end{figure*}
%%
%%
\begin{figure*}[ht]
\begin{center}
\subfigure[$\varphi = 5$ min]{\includegraphics[width=4.25cm]{fig/cluster_5min.eps}}
\makeatletter\def\@captype{figure}\makeatother\subfigure[$\varphi = 10$ min]{\includegraphics[width=4.25cm]{fig/cluster_10min.eps}}
\subfigure[$\varphi = 15$ min]{\includegraphics[width=4.25cm]{fig/cluster_15min.eps}}
\makeatletter\def\@captype{figure}\makeatother\subfigure[$\varphi = 20$ min]{\includegraphics[width=4.25cm]{fig/cluster_20min.eps}}
\caption{The maximum input rate each processing framework can afford in distributed environments\label{fig:distributed}}
\end{center}
\end{figure*}
%%
%%
\begin{figure*}[ht]
\begin{center}
\subfigure[$m=2$]{\includegraphics[width=4.25cm]{fig/multitask_2node.eps}}
\makeatletter\def\@captype{figure}\makeatother\subfigure[$m=4$]{\includegraphics[width=4.25cm]{fig/multitask_4node.eps}}
\subfigure[$m=6$]{\includegraphics[width=4.25cm]{fig/multitask_6node.eps}}
\makeatletter\def\@captype{figure}\makeatother\subfigure[$m=8$]{\includegraphics[width=4.25cm]{fig/multitask_8node.eps}}
\caption{The maximum input rate S2J can afford with varying number of concurrent join tasks\label{fig:multi_task}}
\end{center}
\end{figure*}



\subsubsection{Performance in distributed environments}
This experiment compares the join processing capability of S2J with that of D-Stream~\cite{sosp13:Zaharia}, a state-of-the-art distributed stream join processing framework, in distributed environments.
Note that D-Stream supports only equality joins, since its underlying implementation is a hash-based join that cannot naturally handle inequality joins; hence, the comparison results contain D-Stream's performance on equality join processing only.
In addition, Handshake Join is excluded from this comparison since it can only run on a stand-alone machine. Each of the tested frameworks is deployed in turn on the same cluster, adopting a different number of nodes in each execution.


Figure~\ref{fig:distributed} shows the comparison results under different workloads (with the same $\varpi$ and different $\varphi$), from which we can observe that, when performing equality join operations, the processing capability of S2J is comparable to that of D-Stream, which
is known for its high processing capability.
For reference, other non-join operations such as word count and top-$k$ count run at least 2 times faster on D-Stream than on S4~\cite{sosp13:Zaharia} (recall that S2J is implemented on top of S4 with optimized join operations).




\subsubsection{Performance of running concurrent join tasks}

This experiment investigates S2J's capability of running concurrent join tasks from two aspects,
namely how these concurrent join tasks affect (1) the maximum throughput of S2J and (2) the bandwidth it requires.

%Others are excluded since none of them are designed for carrying out concurrent join


For the first aspect, Figure~\ref{fig:multi_task} depicts the variation trends of the maximum throughput of S2J with a varying number of concurrent join tasks. We can observe that the decline in maximum throughput is not linear in the number of tasks; instead,
the marginal decline diminishes as more tasks are plugged in. Moreover, these slight declines can be compensated for by adding extra nodes:
as shown in Figure~\ref{fig:multi_task}(a) to (d), using more nodes effectively raises the maximum throughput of S2J.


For the second aspect, Figure~\ref{fig:bandwidth}(a) illustrates the bandwidth used by S2J when it executes concurrent join tasks. For reference, we also include a baseline, namely the bandwidth required by a naive version of S2J that duplicates the streams for each task.
From the figure we find that (1) the number of concurrent join tasks has almost no effect on the bandwidth of S2J, and (2) compared with the stream duplication approach, the bandwidth saved grows significantly with the number of concurrent join tasks.


In summary, our S2J can efficiently support the execution of concurrent join tasks, even a relatively large number of them.








\begin{figure}[t]
\begin{center}
\subfigure[Multiple tasks ($\varsigma=200$)]{\includegraphics[width=4.15cm]{fig/bandwidth_vs_multitask.eps}}
\makeatletter\def\@captype{figure}\makeatother\subfigure[Different $\varsigma$ (one task)]{\includegraphics[width=4.15cm]{fig/bandwidth_vs_tupleblocksize.eps}}
\caption{Bandwidth in S2J with different numbers of concurrent join tasks and tuple block sizes $\varsigma$\label{fig:bandwidth}}
\end{center}
\end{figure}
%

%-------------------------------------------------------------------------------------------------------------
\subsection{Processing Efficiency Study}

In stream join operators, not all computation leads to the output of join results; much of it is usually wasted by an inefficient join processing approach. The percentage of useful computation can be evaluated by the join processing efficiency (see Equation~(\ref{eq:process_effi}) in Section~\ref{sec:effi}); a higher join processing efficiency indicates that an operator can obtain the same join results with less runtime. Hence, in what follows we evaluate the join processing efficiency
of our S2J operator, and compare it with that of Handshake Join and D-Stream.



Figure~\ref{fig:effi} shows the comparison results. Note that D-Stream can only support equality joins as it adopts a hash-based, MapReduce-style processing procedure, so its throughput is reported for equality joins only.



\begin{figure*}[t]
\begin{center}
\subfigure[$m=4$, $\varpi=1000$]{\includegraphics[width=4.25cm]{fig/effi_m4_rate1000.eps}}
\makeatletter\def\@captype{figure}\makeatother\subfigure[$m=4$, $\varpi=2000$]{\includegraphics[width=4.25cm]{fig/effi_m4_rate2000.eps}}
\subfigure[$m=8$, $\varpi=1000$]{\includegraphics[width=4.25cm]{fig/effi_m8_rate1000.eps}}
\makeatletter\def\@captype{figure}\makeatother\subfigure[$m=8$, $\varpi=2000$]{\includegraphics[width=4.25cm]{fig/effi_m8_rate2000.eps}}
\caption{Processing efficiency under different workload\label{fig:effi}}
\end{center}
\end{figure*}



%-------------------------------------------------------------------------------------------------------------
\subsection{Effect of Tuple Block Size}


In this experiment, we discuss the effect of the tuple block size $\varsigma$ used for message passing in S2J, in terms of
(1) the communication cost and (2) the tolerance of fluctuations in the input incoming rate.

As pointed out in Section~\ref{opt_message}, a small $\varsigma$ helps S2J tolerate fluctuations in the incoming rate of the input streams and avoid false positive load shedding;
Figure~\ref{fig:fluctuation} confirms this effect under different fluctuation amplitudes and frequencies.

On the other hand, from Figure~\ref{fig:bandwidth}(b) we find that using a $\varsigma\in[20\frac{\tau^*}{\tau_1},20\frac{\tau^*}{\tau_0}]$ helps S2J effectively reduce the number of load shedding operations, and enhances its tolerance of fluctuations in the input incoming rate.







\begin{figure}[t]
\begin{center}
\subfigure[Varying amplitude]{\includegraphics[width=4.15cm]{fig/s_vs_amp.eps}}
\makeatletter\def\@captype{figure}\makeatother\subfigure[Varying frequency]{\includegraphics[width=4.15cm]{fig/s_vs_freq.eps}}
\caption{The effect of tuple block size on load shedding when there are different fluctuation amplitudes and frequencies on the incoming rate
of input streams\label{fig:fluctuation}}
\end{center}
\end{figure}


\subsection{Case Study of Adapting to Varying Workload}
In this experiment, we conduct a case study to explain how S2J adapts to varying workload during its execution.
Towards this, we adopt an input stream pair with a varying incoming rate as shown in Figure~\ref{fig:case}(a), allocate
$4$ nodes for S2J as its initial join workers (one worker per node), use a join window size $\varphi$ of $60$ seconds, and set $\tau_0=20\%$, $\tau_1=80\%$, $\tau_2=82\%$.
%%%
Meanwhile, the rules for adding and deallocating nodes (i.e., workers) are set as follows:
(1) if the average utilization ratio $\tau$ over the workers satisfies $\tau > \tau_2$ for more than 30 seconds, extra nodes will be added to reduce $\tau$ to around $\tau^*$ (i.e., $40\%$, as $\tau^*=0.5\cdot\tau_1$); and (2) if $\tau < \tau_0$ lasts for 5 minutes, a certain number of nodes will be deallocated so as to raise $\tau$ back to around $\tau^*$.


Figure~\ref{fig:case}(b) to (d) respectively illustrate the value of $\tau$, the real-time shedding ratio, and the number of nodes during the execution, from which we have the following observations (in time order).

(1) S2J can effectively tolerate the workload fluctuation caused by the transitory changes in the input incoming rate around the 400--500th second.

(2) Once the utilization of each join worker is about to be saturated (i.e., $\tau>\tau_1$), as shown in Figure~\ref{fig:case}(c),
S2J sheds a small amount of load, helping its workers recover some processing capacity.

(3) When the workload keeps growing non-transitorily such that $\tau > \tau_2$ lasts for more than 30 seconds,
as shown in Figure~\ref{fig:case}(d), four new join workers (i.e., nodes) are added into S2J to help handle the increased workload. After this operation, the average workload over all workers drops to around $40\%$ in only a few seconds. This is because, according to our MP-2PF protocol,
the tuples on the original workers are quickly passed to the newly appended workers until the workload imbalance between them is removed.

(4) As the average utilization ratio $\tau$ drops below $\tau_0$ and stays there for more than 5 minutes, S2J starts to deallocate superfluous join workers. It automatically selects an appropriate number of workers (i.e., 4 in this case) to deallocate, bringing
$\tau$ back as close as possible to $\tau^*$.
The deallocation finishes within 60 seconds, which is the join window size,
because each tuple is passed along the workers of S2J
and removed once its life span (namely the join window size) expires.





\begin{figure*}[t]
\begin{center}
\subfigure[Incoming rate of inputs]{\includegraphics[width=4.25cm]{fig/case_input.eps}}
\makeatletter\def\@captype{figure}\makeatother\subfigure[Corresponding $\tau$]{\includegraphics[width=4.25cm]{fig/case_tau.eps}}
\subfigure[Real-time shedding ratio]{\includegraphics[width=4.25cm]{fig/case_shed.eps}}
\makeatletter\def\@captype{figure}\makeatother\subfigure[Number of nodes used]{\includegraphics[width=4.25cm]{fig/case_m.eps}}
\caption{Case study of how S2J adapts to varying workload\label{fig:case}}
\end{center}
\end{figure*}



%===========================================================================================================
\section{Conclusion}
\label{sec:conclusion}

In this paper, we have proposed S2J, a general operator for stream join in the cloud.





\balance

\bibliographystyle{abbrv}
%%% qian: Dec 9
\bibliography{s2j_sigmod}


\section*{Appendix A: Proof of Lemma~1}
In S2J, the required bandwidth $b$ consists of two main parts, namely (1)
the bandwidth $b_{message}$ used for message passing between join workers, and (2)
the bandwidth $b_{out}$ used for outputting the join results from the join engine
to materialization and from materialization to the database.

For each worker in S2J, receiving and passing the tuples of the input streams occupy a bandwidth of $2\cdot \varpi$. The communication overhead
comes from the message heads introduced by serialization, which are produced at a rate of $\frac{2\cdot \varpi}{\varsigma}$ (counting the message heads generated in both receiving and passing operations). For a given serialization method, the size $c$ of each message head is almost fixed. Thus,
the bandwidth $b_{message}$ used for message passing among all $m$ join workers of S2J can be estimated as
\[b_{message}=2m\cdot(\varpi+\frac{\varpi\cdot c}{\varsigma}).\]

Moreover, since the value of $b_{out}$ is independent of the tuple block size, the relationship between the overall bandwidth $b$ and the tuple block size $\varsigma$ is as follows.
\[b=2m\cdot(\varpi + \frac{\varpi\cdot c}{\varsigma}) + b_{out}.\]
%%
Hence, the proof is completed.

\section*{Appendix B: Proof of Theorem 1}
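The result follows from Lemma~\ref{lemma:1} and the definition of $\beta$ as the relative residual variation of the bandwidth at $\varsigma=\varsigma^*$. By Lemma~\ref{lemma:1},
\[b(\varsigma) = 2m\cdot(\varpi + \frac{\varpi\cdot c}{\varsigma}) + b_{out},\]
which gives
\[b_{min} = 2m\varpi + b_{out} \quad (\varsigma\to\infty), \qquad b_{max} = b(1) = 2m\cdot(\varpi + \varpi\cdot c) + b_{out}.\]
Substituting these into the definition of $\beta$ yields
\[\beta = \frac{b(\varsigma^*)-b_{min}}{b_{max}-b_{min}} = \frac{2m\varpi c/\varsigma^*}{2m\varpi c} = \frac{1}{\varsigma^*}.\]
Hence $\varsigma^* = \frac{1}{\beta}$, i.e., with tuple block size $\varsigma^*$ the bandwidth exceeds the lower bound $b_{min}$ by only a fraction $\beta$ of the total variation range $b_{max}-b_{min}$, which completes the proof.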



\section*{Appendix C: Proof of Claim 1}




\section*{Appendix D: Join Efficiency Analysis}

In this Appendix, we explain how the BST- and hash-based indices help S2J improve its join processing efficiency in a quantitative view.

Many existing operators compare every two tuples from the opposite input streams to avoid missing any target tuple pair \cite{sigmod11:Teubner}.
According to Equation~(\ref{eq:process_effi}) in Section~\ref{sec:effi}, their
join processing efficiency is as follows.
%
\begin{equation*}
\mathcal{P}_{pairwise} = \dfrac{\kappa}{r \cdot s}
\end{equation*}
%
where $r$ and $s$ respectively denote the numbers of tuples in the two opposite streams, and $\kappa$ is the number of target tuple pairs. In practice,
it is often the case that $\kappa \ll r \times s$, resulting in $\mathcal{P}_{pairwise} \ll 1$, i.e., most of the computation is useless and outputs nothing.



To improve the join processing efficiency, our S2J operator adopts
BST- and hash-based indices to accelerate non-equality join and equality join respectively.
Specifically, when a tuple arrives at a worker,
the target opposite tuples can be found within $O(\log n)$ time via the BST-based index and $O(1)$ time via the hash-based index.
Then, the processing efficiency of non-equality join operations in S2J is as follows.
%
\begin{equation*}
\mathcal{P}_{bst}
= \dfrac{\kappa}{r \cdot O(\log s) + s \cdot O(\log r)},
\end{equation*}
%
and thus
%
\begin{equation*}
\dfrac{\mathcal{P}_{pairwise}}{\mathcal{P}_{bst}} =  \dfrac{O(\log s)}{s} + \dfrac{O(\log r)}{r}.
\end{equation*}
In practice, as $s \gg O(\log s)$ and $r \gg O(\log r)$, we have
the relationship below
\[\mathcal{P}_{bst} \gg \mathcal{P}_{pairwise},\]
which indicates that the processing efficiency of the non-equality join operations can be significantly improved by using a BST-based index.


Similarly, the processing efficiency of equality join operations in S2J is as follows.
%
\begin{equation*}
\mathcal{P}_{hash} = \dfrac{\kappa}{r \cdot O(1) + s \cdot O(1)},
\end{equation*}
%
and thus
%
\begin{equation*}
\dfrac{\mathcal{P}_{pairwise}}{\mathcal{P}_{hash}} =  \dfrac{O(1)}{s} + \dfrac{O(1)}{r}.
\end{equation*}
%
Since $s \gg O(1)$ and $r \gg O(1)$, we also have the following relationship, i.e.,
\[\mathcal{P}_{hash} \gg \mathcal{P}_{pairwise},\]
indicating that the processing efficiency of the equality join operations can be significantly improved by using a hash-based index.

In brief, the BST- and hash-based indices
facilitate joins with different predicates, and help our S2J operator achieve a high join processing efficiency.


\end{document}
