\section{Introduction}
\label{sec:introduction}

As data explode in the contemporary digital world, people care more about the information behind the data than about the data themselves. Many applications depend on services that extract meaningful information from vast volumes of continuously arriving data in a real-time manner, which makes large-scale stream processing attractive. The logic of stream processing is to ingest continuously arriving data, refine information or detect events according to application-specific requirements, and output results with timeliness guarantees. This paradigm differs from that of a relational DBMS: the ``store-first-query-later'' approach widely used in relational DBMSs is impractical for streaming data because data arrive continuously without bound. Therefore, efficient stream processing should exploit the dynamic properties of data volume and velocity.

\begin{figure}[ht]
\centering
\framebox[0.95\linewidth]{\epsfig{file=pic/example_psi.eps, width=0.85\linewidth}}
\caption{Example: Pollutant Standards Index (PSI) Queries. Stream \textit{Current} carries the current PSI data and stream \textit{History} carries the PSI data from one month before. Every tuple in both streams contains the attributes $\langle PM2\_5, PM10, PSI\rangle$, which store the values of PM2.5, PM10, and PSI, respectively. Initially, only Query~1 and Query~2 are submitted; Query~3 is added later. All submitted queries are processed simultaneously.}
\label{fig:example_psi}
\end{figure}

Most user queries fall into compound operations of selection, projection, and join. Streaming selection and projection are easy to process because both are unary reduction operators. Streaming join, however, is less straightforward since two streams correlate. Moreover, more than one join predicate may need to run simultaneously over the same two streams. For example, as shown in Figure~\ref{fig:example_psi}, three continuous queries for pollution analysis run simultaneously over the same input streams -- one carrying the current PSI data and the other carrying the PSI data from one month before. These queries perform real-time comparisons between the current monitoring results and the historical results. Query~1 and Query~2 check the PSI difference between the current monitoring and the historical records given the same PM2.5 and PM10 values, respectively. Conversely, Query~3 checks the PM2.5 and PM10 differences when the current PSI equals the historical PSI. Only the most recent 30 minutes of the current monitoring data and the most recent 15 minutes of the historical records are taken into account; expired data must be evicted in a real-time manner. The output accuracy and timeliness guarantee therefore depend heavily on the real-time control of join windows.

Another example concerns the model simulation of a chemical gas generator. One stream carries the intermediate states of the simulation, and the other carries the real state data collected from the physical experiment. The goal is to evaluate the accuracy of the model by comparing the simulation with the real procedure, and queries are raised to monitor this comparison online. Several attributes need to be compared within the most recent 30 seconds over the two streams, so a set of join operations with predicates relating the required attributes must run continuously and simultaneously. The output of the queries must also meet the timeliness requirement so that it stays synchronized with the simulation.
More examples can be found in bioinformatics, environmental monitoring, stock trading, etc. It is worth noting that most of these applications can be formalized as SELECT-FROM-WHERE-WINDOW (SFWW) queries.
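For concreteness, the three queries in Figure~\ref{fig:example_psi} could be written in a CQL-like SFWW syntax, sketched below. The syntax is illustrative only and is not a concrete query language prescribed by this paper:

\begin{verbatim}
-- Query 1: PSI difference when PM2.5 matches
SELECT C.PSI - H.PSI
FROM   Current C [WINDOW 30 MINUTES], History H [WINDOW 15 MINUTES]
WHERE  C.PM2_5 = H.PM2_5

-- Query 2: PSI difference when PM10 matches
SELECT C.PSI - H.PSI
FROM   Current C [WINDOW 30 MINUTES], History H [WINDOW 15 MINUTES]
WHERE  C.PM10 = H.PM10

-- Query 3: PM2.5 and PM10 differences when PSI matches
SELECT C.PM2_5 - H.PM2_5, C.PM10 - H.PM10
FROM   Current C [WINDOW 30 MINUTES], History H [WINDOW 15 MINUTES]
WHERE  C.PSI = H.PSI
\end{verbatim}

All three queries share the same two source streams and the same windows, differing only in their join predicates -- exactly the source-sharing multi-attribute pattern addressed in this paper.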

Streaming join plays a key role in SFWW query processing. As seen in the examples above, a large-scale streaming join operator must satisfy several general requirements:

\begin{enumerate}
\item \textbf{Simultaneous multi-attribute joins.} Over the same two streams, more than one SFWW query with different join predicates may need to run simultaneously. Since streaming data usually cannot be re-scanned like a relational table, a join operator over two streams must be able to handle multiple join tasks simultaneously.

\item \textbf{Timeliness.} Since recent data within a specified time window are preferred over obsolete data, especially in real-time applications, the join operator must maintain the sliding window effectively and efficiently. With a lifespan defined on each streaming tuple, expired tuples must be evicted immediately by the join operator.

\item \textbf{Dealing with big data.} The volume and velocity of streaming input vary across applications. A general join operator must be able to handle high incoming rates and large join windows. Meanwhile, it should distribute workload intelligently to avoid imbalance among processing units. Moreover, the control of sliding windows on individual streams should not become the bottleneck of overall performance.
\end{enumerate}
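To make the first two requirements concrete, the following minimal single-node sketch (in Python, purely illustrative; it is not the distributed, message-passing operator proposed in this paper) shows how a sliding-window theta-join must evict expired tuples before probing, and how several predicates over the same two streams can be evaluated in one pass. All class and parameter names here are hypothetical:

```python
from collections import deque

class WindowedThetaJoin:
    """Minimal single-node sketch of a sliding-window theta-join.

    Each stream buffers tuples for its own window length; expired
    tuples are evicted before probing, so every emitted match lies
    within both windows. Multiple predicates over the same two
    streams are evaluated in one pass, mirroring the source-sharing
    multi-attribute requirement above.
    """

    def __init__(self, window_r, window_s, predicates):
        self.window_r = window_r              # window length of stream R
        self.window_s = window_s              # window length of stream S
        self.predicates = predicates          # list of (name, fn(r, s) -> bool)
        self.buf_r = deque()                  # (timestamp, tuple) pairs
        self.buf_s = deque()

    def _evict(self, buf, window, now):
        # Drop tuples whose lifespan has expired relative to `now`.
        while buf and now - buf[0][0] > window:
            buf.popleft()

    def insert(self, stream, ts, tup):
        """Insert a tuple from 'R' or 'S'; return matches as (query, r, s)."""
        self._evict(self.buf_r, self.window_r, ts)
        self._evict(self.buf_s, self.window_s, ts)
        own, other = ((self.buf_r, self.buf_s) if stream == 'R'
                      else (self.buf_s, self.buf_r))
        own.append((ts, tup))
        results = []
        for _, cand in other:                 # probe the opposite window
            r, s = (tup, cand) if stream == 'R' else (cand, tup)
            for name, pred in self.predicates:
                if pred(r, s):
                    results.append((name, r, s))
        return results
```

Even this toy version shows why window maintenance matters: every insertion pays an eviction cost, and a centralized implementation of this loop becomes the bottleneck at high ingestion rates, which motivates the distributed, dataflow-oriented design presented later.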

Many existing studies on join processing have focused on a subset of these requirements, and plenty of designs and optimizations have been proposed. However, few of them give a general and scalable solution that meets all of the requirements. Research on main-memory and multi-core join processing~\cite{sigmod12:Begley, sigmod11:Blanas, vldb13:Balkesen, vldb12:Albutiu, icde13:Balkesen, icde11:Khalefa} provides insightful optimizations, but these designs are hard to scale to distributed environments because most of them rely heavily on shared memory and multi-core features, which are not basic assumptions in distributed computing. On the other hand, research on MapReduce-based join processing~\cite{sigmod11:Lin, sigmod11:Okcan, vldb12:Zhang, tkde11:Afrati} provides scalable solutions for large-scale clusters, but offers no timeliness guarantee due to the inherent batch nature of the MapReduce paradigm. Moreover, all these works target the relational join operator, assuming that tuples reside in tables without lifespans. Such a strong assumption is incompatible with stream processing, which deals with unbounded series of tuples and temporal conditions. Therefore, the design of streaming join processing must take the stream properties into account. Some recent studies on streaming join processing~\cite{sigmod13:Ananthanarayanan, tkde10:Bornea, icde11:Bornea, edbt09:Wang} provide solutions for both effectiveness and efficiency; however, most of them are ad hoc and thus show limited generality across applications. Furthermore, although processing multi-attribute joins in a relational DBMS seems trivial, it is not the case for streaming joins since tuples are unlikely to be revisited after being consumed. To the best of our knowledge, there is no prior study on the design of multi-attribute streaming join processing.

To address the issue of processing source-sharing multi-attribute joins, we propose a distributed real-time streaming theta-join operator. We adopt the handshake join proposal~\cite{sigmod11:Teubner} as the base model of streaming join processing, and extend it to support source-sharing multi-attribute theta-joins in a cluster-based topology. Our solution is based on message passing instead of shared memory. We also build a corresponding prototype using Apache S4~\cite{icdm10w:Neumeyer}, together with ecosystem components such as an input adapter, join result materialization, and a query proxy. Combined with built-in selection and projection functions, it can fully serve streaming SFWW queries in large-scale clusters. Our distributed real-time streaming join operator has the following properties:

\begin{itemize}
\item \textbf{General and extensible operator.} Our design is application-agnostic, so it can be used for any multi-attribute theta-join sharing the same streaming sources. It is also backward compatible with conventional single-join processing. Moreover, it can be extended to support additional piggyback processing alongside the join processing.

\item \textbf{Strict real-time guarantee.} Each incoming tuple joins with all candidates satisfying the join predicate within the defined join window. The join window also serves as the deadline of join processing: the processing of each incoming tuple is guaranteed to finish within this deadline.

\item \textbf{Scalability.} Running in a cluster-based environment, the join operator instantiates processing units automatically, scaling with the number of join tasks. Meanwhile, it supports autonomic load balancing and adaptive load shedding, which benefit the processing of streaming joins with large windows and high ingestion rates. Moreover, we apply dataflow-oriented processing so that the maintenance of join windows is amortized over every streaming data transfer without any centralized control.
\end{itemize}

The rest of this paper is organized as follows. Section~\ref{sec:related_work} summarizes related work on the optimization of join processing. Section~\ref{sec:background} introduces the primitive handshake join idea and the properties that motivate our design. Section~\ref{sec:design} describes the design of the distributed real-time streaming join operator, and Section~\ref{sec:implementation} presents the corresponding prototype implementation. Section~\ref{sec:evaluation} presents the performance evaluation. We conclude our work in Section~\ref{sec:conclusion}.

% section introduction (end)