\section{Related Work}
\label{sec:related_work}

\subsection{Stream Joins}
As stream join is a fundamental operation for relating information from different streams, it has been the focus of much stream-processing research.

Pipelining hash join is the pioneering stream join algorithm for parallel database systems~\cite{pdis91:Wilschut}. 
It requires main memory large enough to keep the entire join state in a hash table, which may not be feasible when the input streams are large.
Although sliding-window semantics and the corresponding data structures~\cite{edbt04:Golab} can reduce the amount of join state, they do not fundamentally remove the constraint on hash table size. 
To address this problem, \cite{sigmod99:Ives} allows parts of the hash table to be spilled to disk for later processing, i.e., after both inputs are exhausted. However, this can incur a significant timeliness penalty, as the processing of disk-resident tuples may starve when the inputs are unbounded. 
XJoin~\cite{vldb01:Urhan} proposes a further enhancement that schedules joins involving disk-resident tuples whenever the inputs are blocked. Although XJoin produces join results quickly even with unbounded inputs, its shortcoming is high I/O complexity due to its naive flushing strategy: when memory fills up, the largest hash partition is flushed. 
HMJ (hash-merge join)~\cite{icde04:Mokbel} mitigates the I/O issue with an improved flushing strategy that tries to balance memory allocation between the incoming streams. 
However, neither XJoin nor HMJ takes the context of join processing into consideration when flushing tuples. 
Later, RPJ~\cite{sigmod05:Tao} proposes a statistics-based flushing strategy that exploits statistics on tuples' join probabilities to prioritize partitions for flushing, so as to maximize the output rate. 
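
The pipelining (symmetric) hash join described above can be illustrated with a minimal Python sketch; the function names and the round-robin arrival order are our own simplifications, not part of any cited implementation:

```python
import itertools
from collections import defaultdict

def interleave(stream_a, stream_b):
    """Round-robin merge of two finite inputs to mimic interleaved arrivals."""
    for x, y in itertools.zip_longest(stream_a, stream_b):
        if x is not None:
            yield ('A', x)
        if y is not None:
            yield ('B', y)

def symmetric_hash_join(stream_a, stream_b, key_a, key_b):
    """Each arriving tuple first probes the opposite hash table (emitting
    matches immediately), then is inserted into its own table.  Note that
    the entire join state of both streams stays in main memory."""
    table_a = defaultdict(list)
    table_b = defaultdict(list)
    for side, tup in interleave(stream_a, stream_b):
        if side == 'A':
            k = key_a(tup)
            for match in table_b[k]:        # probe the opposite table
                yield (tup, match)
            table_a[k].append(tup)          # then build into own table
        else:
            k = key_b(tup)
            for match in table_a[k]:
                yield (match, tup)
            table_b[k].append(tup)
```

Because every tuple both probes and builds, results stream out as soon as matching tuples have arrived on both sides, with no blocking phase; the price is that both hash tables grow without bound.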

Overall, the hash-based nature of these algorithms makes them unable to handle inequality joins. Alternatively, sorting-based join algorithms can support inequality joins, but they have traditionally been deemed inappropriate for stream joins because sorting is a blocking operation that requires seeing the entire input before producing any output, which is incompatible with unbounded inputs. To circumvent this problem, the PMJ (progressive merge join) algorithm~\cite{vldb02:Dittrich} is designed to be sorting-based yet non-blocking, so that it can handle inequality joins over streams. PMJ divides the memory into two partitions, one for each stream, and performs a sort-merge-like join when both partitions fill up. Thus PMJ produces no join results until memory fills, resulting in high output delay.
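
PMJ's trade-off, sorting-based inequality support at the cost of output delay, can be illustrated with a simplified in-memory sketch (our own Python simplification; real PMJ spills sorted runs to disk and merges them progressively, which is omitted here):

```python
def progressive_merge_join(arrivals, capacity, predicate):
    """arrivals: sequence of ('A', tuple) / ('B', tuple) events.
    Tuples are buffered per stream; only when BOTH memory partitions
    are full are they sorted and joined sort-merge style, so no result
    is emitted before memory fills -- the output-delay drawback noted
    above.  A generic theta predicate is supported because no hashing
    is involved (shown with a naive scan over the sorted runs for
    brevity)."""
    buf_a, buf_b = [], []
    for side, tup in arrivals:
        (buf_a if side == 'A' else buf_b).append(tup)
        if len(buf_a) >= capacity and len(buf_b) >= capacity:
            buf_a.sort()
            buf_b.sort()
            for x in buf_a:
                for y in buf_b:
                    if predicate(x, y):
                        yield (x, y)
            buf_a, buf_b = [], []   # real PMJ writes the runs to disk
```

For example, with `capacity=2` and the inequality predicate `x < y`, no pair is emitted until two tuples have arrived on each stream, after which all qualifying pairs from the sorted buffers are produced at once.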

By piggybacking the evaluation of the relational operator on the dataflow processing, ESJ supports both equality and inequality joins (i.e., theta-joins) without a sorting-based subroutine. Meanwhile, ESJ's real-time join window control guarantees the timeliness of the join outputs.

Moreover, the above hashing-based join algorithms all evolved under a single-node architecture model with shared-memory support. 
However, they are not suitable for distributed computing, since the join processing relies heavily on the join state maintained in a global hash table. Scattering the hash table partitions across distributed nodes is impractical in terms of consistency and availability. Although a centralized scheme that maintains the hash table on one dedicated node could guarantee consistency, the high communication overhead constrains the system's scalability. 

To avoid maintaining centralized state, \cite{sigmod11:Teubner} proposes handshake join for processing stream joins on multi-core, shared-memory machines. It is characterized by a decentralized, dataflow-oriented processing model and autonomic load balancing. It adopts nested-loop join processing that carries out a brute-force pairwise comparison of each tuple against all tuples in the opposite stream, leading to poor processing efficiency. Furthermore, it is highly customized for a single machine with modern multi-core hardware and the shared-memory paradigm, which in turn limits its scalability, especially when applied in a distributed environment. ESJ enriches this model with message passing and enhances the processing for theta-joins. ESJ applies stateless join processing so that the processing units (i.e., join workers) can be organized in a decentralized way, making it highly scalable in the distributed environment. 
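
Ignoring its multi-core parallelization, the comparison semantics of handshake join reduce to a windowed nested-loop join, sketched below (a single-threaded Python simplification; the names and window model are ours):

```python
from collections import deque

def windowed_nested_loop_join(arrivals, window_size, predicate):
    """Each new tuple is compared against every tuple currently held in
    the opposite sliding window -- the pairwise, theta-capable but
    inefficient core of handshake join.  The real algorithm segments
    the two windows across cores, with the streams flowing past each
    other in opposite directions and neighboring cores exchanging
    tuples; that parallel machinery is omitted here."""
    win_a = deque(maxlen=window_size)   # sliding window over stream A
    win_b = deque(maxlen=window_size)   # sliding window over stream B
    for side, tup in arrivals:
        if side == 'A':
            for other in win_b:          # brute-force pairwise comparison
                if predicate(tup, other):
                    yield (tup, other)
            win_a.append(tup)
        else:
            for other in win_a:
                if predicate(other, tup):
                    yield (other, tup)
            win_b.append(tup)
```

The brute-force inner loops make every arrival cost O(window size) comparisons, which is the processing inefficiency noted above, even though arbitrary theta predicates are supported.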

\subsection{Distributed Join Processing}
MapReduce is the prevalent paradigm for distributed parallel computation. Many recent studies on MapReduce-based join processing~\cite{sigmod11:Lin, sigmod11:Okcan, vldb12:Zhang, tkde11:Afrati} provide scalable solutions for join processing in large-scale clusters, but all of them are oriented to the relational join operator and conduct iterative processing over fixed-size inputs. These works are not compatible with stream joins because of the one-pass constraint on unbounded inputs~\cite{pods02:Babcock}. Moreover, because of the inherent nature of batch job processing, MapReduce-based join processing offers no timeliness guarantee for the outputs, which violates the temporal semantics of streams. To preserve the stream properties within distributed join processing, several recent studies on stream join processing~\cite{sigmod13:Ananthanarayanan, tkde10:Bornea, icde11:Bornea, edbt09:Wang} provide solutions for both effectiveness and efficiency; however, most of them are ad hoc and thus generalize poorly across applications. ESJ is a general operator for stream joins in the distributed environment. It performs real-time stream join processing with respect to the sliding join windows of the input streams.

\subsection{Multi-Core and Main-Memory Join Optimizations}
Join processing has already been extensively optimized in the context of multi-core and main-memory systems. Many recent studies, such as \cite{sigmod12:Begley, sigmod11:Blanas, vldb13:Balkesen, vldb12:Albutiu, icde13:Balkesen, icde11:Khalefa}, provide insightful optimizations. However, they are hard to scale to the distributed environment because most of them rely heavily on multi-core and shared-memory features that are not basic assumptions in distributed computing. Moreover, these optimizations adopt centralized control of the join states or join windows, which can become a bottleneck when applied in the distributed environment, since it introduces substantial communication overhead for synchronizing the control. 

ESJ instead applies decentralized join window control together with local enhancements for join processing.

% section related_work (end)