\newpage

\subsection{Adapting to the varying workload}

The workload of a stream operator varies across applications as well as over the course of a single application's execution. The challenge is to efficiently support streaming windows of different sizes for diverse applications. Moreover, for a given window size within an application, it is important to properly handle the heavy workload caused by a high incoming rate of the input streams. We therefore propose and formalize a systematic solution for adapting to the varying workload both across and within applications. This solution supports the proper deployment of distributed computation resources such as the workers of the stream operator.

\begin{figure}[t]
\centering
\epsfig{file=pic/adapt_to_workload.eps, width=0.7\linewidth}
\caption{Adapting to the Varying Workload.}
\label{fig:adapt_to_workload}
\end{figure}

Let $\tau$ be the utilization ratio of a worker, i.e., $\tau = \dfrac{r}{W}$, where $r$ is the instantaneous number of tuples in the worker and $W$ is the per-worker capacity. We define four thresholds of $\tau$ that trigger different strategies:

\begin{itemize}
\item Threshold $\tau_{0}$ for deallocating workers.

\item Threshold $\tau_{1}$ for the expected initial load size.

\item Threshold $\tau_{2}$ for starting load shedding.

\item Threshold $\tau_{3}$ for allocating extra workers.
\end{itemize}

\noindent
These thresholds are specified by the user and constrained as follows (see Figure~\ref{fig:adapt_to_workload}):

\begin{equation}
0 < \tau_{0} < \tau_{1} < \tau_{2} < \tau_{3} \leq 1
\end{equation}
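A minimal sketch of these definitions in Python (the names \texttt{Thresholds} and \texttt{utilization} are illustrative, not part of ESJ):

```python
from dataclasses import dataclass

@dataclass
class Thresholds:
    """Illustrative container for the four user-specified thresholds."""
    tau0: float  # deallocate workers when utilization stays below this
    tau1: float  # expected initial load size
    tau2: float  # start load shedding
    tau3: float  # allocate extra workers

    def __post_init__(self):
        # Enforce the constraint 0 < tau0 < tau1 < tau2 < tau3 <= 1.
        if not (0 < self.tau0 < self.tau1 < self.tau2 < self.tau3 <= 1):
            raise ValueError("require 0 < tau0 < tau1 < tau2 < tau3 <= 1")

def utilization(r: int, W: int) -> float:
    """tau = r / W: instantaneous tuple count over per-worker capacity."""
    return r / W
```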

\subsubsection{Initial deployment}
Usually, the user has some estimate of the workload before submitting the application, so when initializing the application it is advisable to allocate a reasonable number of workers based on this estimate. Moreover, a trade-off should be taken into consideration: for a roughly fixed run-time workload, deploying fewer workers improves utilization, whereas deploying more workers better absorbs large fluctuations of the input streams.

Let $m$ be the number of workers allocated, $\varpi$ the incoming rate of an input stream, and $\varphi$ the size of the join window. Initially, for a given expectation of $\varpi$ and specified $\varphi$ and $W$, choose $m = M$ to satisfy

\begin{equation}
M \geq \dfrac{E(\varpi) \cdot \varphi}{W \cdot \tau_{1}}
\end{equation}

\noindent
so that $\tau \leq \tau_{1}$ can be expected. Usually, $M = \left\lceil \dfrac{E(\varpi) \cdot \varphi}{W \cdot \tau_{1}} \right\rceil$ is preferred. The choice of $\tau_{1}$ reflects the above trade-off between efficiently utilizing workers and absorbing large fluctuations of the input streams. If $\tau_{1} \to \tau_{2}$, all workers would be effectively exploited, but even a tiny fluctuation of the input streams would spuriously trigger load shedding. In contrast, if $\tau_{1} \to 0$, a large number of workers would be allocated with low utilization and incur high communication cost, though they could absorb frequent, large-amplitude fluctuations of the input streams. Empirically, $\tau_{1} = 0.5\tau_{2}$ works well in practice: each worker is moderately utilized and can absorb fluctuations of amplitude up to the average workload size.
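The initial sizing rule is a one-line computation; the following sketch (function name and example numbers are illustrative) shows $M = \lceil E(\varpi) \cdot \varphi / (W \cdot \tau_{1}) \rceil$:

```python
import math

def initial_workers(expected_rate: float, window_size: float,
                    capacity: int, tau1: float) -> int:
    """M = ceil(E(varpi) * varphi / (W * tau1)).

    expected_rate: E(varpi), expected tuples per second of the input stream
    window_size:   varphi, join-window span in seconds
    capacity:      W, tuples a single worker can hold
    tau1:          target initial utilization ratio
    """
    return math.ceil(expected_rate * window_size / (capacity * tau1))

# Example: 1000 tuples/s, a 60 s window, capacity 20000, tau1 = 0.4
# gives 60000 / 8000 = 7.5, rounded up to 8 workers.
```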

\subsubsection{Adaptive load shedding}
Load shedding is applied when a worker is saturated or about to be. ESJ adopts adaptive load shedding: for $\tau < \tau_{2}$, all input tuples are processed, i.e., nothing is shed. Otherwise, a fraction of the incoming tuples is shed according to a \textit{shed factor} (SF) defined as follows:

\begin{equation}
SF = B + \dfrac{1 - B}{1 - \tau_{2}} \cdot (\tau - \tau_{2})
\end{equation}
where $B \in [0, 1]$ is the base percentage of shedding.

This is a linear shedding model: when $\tau \geq \tau_{2}$, the shed factor grows linearly with the load, starting at $B$ and reaching $1$ at full saturation, where all incoming tuples are dropped. Alternatively, the shed factor could be redefined with another shedding model, e.g., a quadratic function.

\subsubsection{Scaling up}
While load shedding is mainly useful for handling transient workload increases caused by fluctuations of the input streams, the workload may also keep increasing steadily due to an overall rise of the incoming rate. Meanwhile, sustained overload greatly reduces the accuracy of the processing, since most of the inputs are shed. In such a case, increasing the processing capacity mitigates the overload. When $\tau \geq \tau_{3}$ persists for a certain period, ESJ allocates extra workers, if available. The workload is redistributed to the newly added workers so that $\tau$ decreases. Basically, ESJ keeps adding workers until $\tau$ drops to around $\tau_{1}$ or the worker resources are exhausted.

\subsubsection{Scaling down}
ESJ can also easily shrink the processing capacity if necessary. For example, if $\tau \leq \tau_{0}$ persists for a long time, ESJ deallocates some workers until $\tau$ rises to around $\tau_{1}$ or $m = M$. The released workers are recycled by the system and become available again in the resource pool.
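The scale-up and scale-down rules can be combined into a single resizing step. The sketch below assumes a simple model in which the total load $\tau \cdot m$ (in units of worker capacity) is conserved when redistributed evenly over the new worker count; the function name \texttt{rescale} and the \texttt{pool\_available} parameter are illustrative, not part of ESJ:

```python
import math

def rescale(m: int, M: int, tau: float,
            tau0: float, tau1: float, tau3: float,
            pool_available: int) -> int:
    """Return the new worker count after one resizing decision.

    m:              current number of workers
    M:              initially allocated (minimum) number of workers
    tau:            current utilization ratio, assumed uniform
    pool_available: extra workers obtainable from the resource pool
    """
    total_load = tau * m  # total load in worker-capacity units
    if tau >= tau3 and pool_available > 0:
        # Scale up: add workers until tau drops to around tau1,
        # or until the resource pool is exhausted.
        target = math.ceil(total_load / tau1)
        return min(target, m + pool_available)
    if tau <= tau0 and m > M:
        # Scale down: release workers until tau rises to around tau1,
        # but never drop below the initial allocation M.
        target = math.ceil(total_load / tau1)
        return max(target, M)
    return m  # within the normal band: no change
```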

%Empirically, $\tau_{0} = 0.5\tau_{1} / 2$ could be adopted in practice.
