%% In the first step, the algorithm partitions the data stream into
%% segments, where each segment is an instance of a concept. 
The goal of the second step is to merge segments that represent the
same concept.  Due to space restrictions, we introduce a naive
hierarchical clustering algorithm to achieve this goal;
see~\cite{techreport} for more sophisticated and efficient solutions.

In hierarchical clustering, we must decide which pair of clusters
should be merged first, and how to find the final set of clusters
once merging is complete.  In our case, each segment initially forms
a cluster by itself; the algorithm then iteratively merges small
clusters into larger ones~\cite{shixi08}.

%% \begin{figure}[!htb]
%%     \subfloat[Clustering Blocks]{%
%%         \includegraphics[height=0.46\columnwidth]{Building/Graph1.eps}%
%%     }
%%     \hfill
%%     \subfloat[Clustering Chunks]{%
%%         \includegraphics[height=0.46\columnwidth]{Building/Graph2.eps}%
%%     }
%%     \caption{Input to the Clustering Algorithm}
%%     \label{fig:build:input}
%% \end{figure}

%% Figure~\ref{fig:build:input} shows two set of inputs to the clustering
%% algorithm (corresponding to two steps respectively).  We use an edge
%% between two nodes to indicate that the two nodes can be merged. Web
%% %% adapt the agglomerative hierarchical clustering algorithm for both
%% %% cases by giving different sets of edges.  Figure~\ref{fig:build:input}
%% %% shows the inputs to the clustering algorithm. 
%% In step 1, edges are added between only neighboring nodes (data
%% blocks), which means each cluster is made up of contiguous data,
%% while in step 2, the input is a complete graph, meaning any two nodes
%% (data chunks) can be merged to form a concept. This unifies the two
%% clustering problems into one framework.


%% Various strategies are available for deciding which pair of clusters
%% should be merged first. 
We merge the pair of clusters whose merger results in the least
increase (or greatest decrease) in $Q$. Assuming we merge two
clusters $D_u$ and $D_v$ into $D_w$, we can compute the increase in
$Q$ as follows:
\begin{equation}
    \begin{split}
        \Delta_Q(u,v)&=|D_w|\cdot Err_w-|D_u|\cdot Err_u-|D_v|\cdot Err_v \\
                     &=|D_u|\cdot(Err_w-Err_u)+|D_v|\cdot(Err_w-Err_v)
    \end{split}
    \label{eq:build:deltaq}
\end{equation}
This strategy finds the locally optimal partition for each merger.
However, computing $\Delta_Q(u,v)$ requires $Err_w$, which is not
available unless we train a classifier on $D_u \cup D_v$.
Consequently, finding the best merger among $n$ clusters requires
training and testing $n(n-1)/2$ classifiers, which is time
consuming.
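To make the cost of this strategy concrete, the following Python sketch
evaluates $\Delta_Q(u,v)$ for a single candidate merger following
Eq.~\ref{eq:build:deltaq}; here \texttt{train\_classifier} and
\texttt{error\_rate} are hypothetical stand-ins for the base learner
and its evaluation, not part of our system.

```python
def delta_q(D_u, D_v, err_u, err_v, train_classifier, error_rate):
    """Increase in Q if clusters D_u and D_v (with known errors
    err_u, err_v) are merged; requires training a fresh classifier
    on the merged data, which is what makes this strategy expensive."""
    D_w = D_u + D_v                      # merged cluster's data
    M_w = train_classifier(D_w)          # one new model per candidate pair
    err_w = error_rate(M_w, D_w)
    # Eq. (deltaq): |D_u|*(Err_w - Err_u) + |D_v|*(Err_w - Err_v)
    return len(D_u) * (err_w - err_u) + len(D_v) * (err_w - err_v)
```

Evaluating all $n(n-1)/2$ candidate pairs with this function is what
motivates the similarity-based strategy below.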

We propose an alternative strategy based on model similarity.  We
estimate $(Err_w-Err_u)$ and $(Err_w-Err_v)$ in
Eq.~\ref{eq:build:deltaq} as:
$$Err_w-Err_u = Err_w-Err_v = dist_S(M_u,M_v)$$
where
%% Assume there
%% is a function $s(M_u,M_v)$, which measures the similarity between the
%% models $M_u$ and $M_v$ learned from cluster $D_u$ and $D_v$, then we
%% can express the ``distance'' between the two clusters by:
%% \begin{equation}
%%     dist(u,v)=|D_u|\cdot (1-s(M_u,M_v)) + |D_v| \cdot (1-s(M_u,M_v))
%%     \label{eq:build:dist}
%% \end{equation}
$dist_S$ measures the difference between two models based on a sample
test dataset $S$. More specifically, $dist_S$ is the fraction of
records in $S$ on which the two models disagree:
\begin{equation}
    dist_S(M_i,M_j)=
        \frac
            {|\left\{x\in S \mid M_i(x) \neq M_j(x)\right\}|}
            {|S|}
\end{equation}
where $M_i(x)$ is $x$'s class predicted by base model $M_i$.
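The disagreement measure $dist_S$ admits a direct implementation.  The
following Python sketch assumes models are callables mapping a record
to a predicted class label.

```python
def dist_s(M_i, M_j, S):
    """Fraction of sample records in S on which the two models
    disagree, i.e. dist_S(M_i, M_j)."""
    disagreements = sum(1 for x in S if M_i(x) != M_j(x))
    return disagreements / len(S)
```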

A problem may arise in choosing the sample test dataset $S$. To make
results consistent, all comparisons must be based on sample datasets
of the same distribution. If the sample set is too large, measuring
similarity takes too much time; if it is too small, the result may be
inaccurate. We solve this problem with a dynamically formed sample
set~\cite{shixi08}.

Let $D_i^{test}$ be the data left out for testing at node $D_i$.
Before performing any mergers, we gather the test data
$\{D_i^{test}\}$ from all nodes, and randomly shuffle them into a
list $L$. Let $L_k$ denote the top $k$ records of $L$. Clearly,
$L_k$ is a random sample of the original dataset.

Each time two clusters are merged into a new cluster $D_u$, we train a
new model $M_u$ on the merged data, use $M_u$ to predict the
records in $L_k$, where $k=|D_u^{test}|$, and store its predictions
in an array $A_u[1,\cdots,k]$. To measure the similarity between
clusters $D_u$ and $D_v$, we compare the predictions stored in
$A_u[1,\cdots,i]$ and $A_v[1,\cdots,i]$, where
$i=\min(|D_u^{test}|,|D_v^{test}|)$. Thus, one evaluation of the
similarity function requires no more than $i$ comparisons, and
the number of predictions is minimized by sharing the same sample
dataset.
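A minimal Python sketch of this shared-sample scheme follows; the
\texttt{model} arguments are hypothetical callable classifiers, and the
per-cluster bookkeeping is simplified to plain lists.

```python
import random

def build_shared_sample(test_sets, seed=0):
    """Gather the held-out test data {D_i^test} from all nodes and
    shuffle it once into a single list L; any prefix L_k is then a
    random sample of the original dataset."""
    L = [x for ts in test_sets for x in ts]
    random.Random(seed).shuffle(L)       # one global random order
    return L

def predictions_on_prefix(model, L, k):
    """A_u[1..k]: the model's predictions on the first k records of L,
    with k = |D_u^test| for cluster u."""
    return [model(x) for x in L[:k]]

def similarity_distance(A_u, A_v):
    """Compare two prediction arrays on their common prefix only,
    i.e. i = min(|D_u^test|, |D_v^test|) comparisons."""
    i = min(len(A_u), len(A_v))
    disagree = sum(1 for a, b in zip(A_u[:i], A_v[:i]) if a != b)
    return disagree / i
```

Because every cluster's predictions are made on prefixes of the same
shuffled list, each record is predicted at most once per model, and a
similarity evaluation touches only the shared prefix.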

%% Based on above discussion, we use Esq.~\ref{eq:build:deltaq} (step 1)
%% and Eq.~\ref{eq:build:dist} (step 2) to find the closest pair of
%% clusters and merge them. Our experiments show that these strategies
%% work well in practice.

%% In our implementation, we store candidate mergers in a linked list,
%% and a min-heap is maintained to manage all candidate mergers along
%% with their distances as keys, so that we can find the best candidate
%% merger by picking the top element from the heap in logarithmic time
%% at each step. After each merger, the candidate mergers and the heap
%% are updated by linking the neighbors of old clusters to new ones and
%% recalculating their distances. The mergers are repeated until the
%% list is empty, or only one cluster remains.

The merging process can be visualized as a tree, where each leaf
node is a segment, and each internal node represents the merger of its
two child nodes. After merging is complete, we obtain a tree with a
single root node. We then prune some of the last mergers (nodes close
to the root) to find the best partition, i.e., the one with the
smallest $Q$.
%% Each node in the dendrogram represents a dataset, with the root node
%% representing the entire dataset, and each of its descendant a subset
%% of the entire dataset. 

For each node $w$, let $D_w$ denote the
dataset represented by $w$. We define $Err_w^*$ as the error of the
locally optimal partition, that is, $Err_w^*=\frac{1}{|D_w|}\min
\{\,Q(P) \mid P \text{ is a partition of } D_w \,\}$.

Let $P_w$ be the optimal partition of $D_w$, i.e.,
$Err_w^*=\frac{1}{|D_w|}Q(P_w)$. By the property of our clustering
algorithm, if $D_w$ is merged from $D_u$ and $D_v$, then $P_w$ is
either the partition consisting of the sole member $D_w$, or the union
of $P_u$ and $P_v$, the best partitions of $D_u$ and $D_v$.

This enables us to compute $Err_w^*$ during merging as
follows. If $w$ is a leaf node, we have:
\[Err_w^* =  Err_w \]
where $Err_w$ is the error of the classifier built on $D_w$;
otherwise, if $w$ has two child nodes $u$ and $v$, then
\[Err_w^* = \min \left \{ Err_w, \frac{|D_u| \cdot Err_u^* + |D_v| \cdot Err_v^*}{|D_w|} \right \} \]
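The recursion above can be sketched in Python.  As an illustrative
representation (not part of our system), a tree node is a
\texttt{(size, err, children)} tuple, where \texttt{err} is the error
of the classifier trained on the node's data and \texttt{children} is
\texttt{None} for a leaf.

```python
def err_star(node):
    """Compute Err_w^* bottom-up: a leaf's Err_w^* is its own error;
    an internal node takes the better of its own classifier and the
    size-weighted combination of its children's optimal errors."""
    size, err, children = node
    if children is None:                 # leaf: Err_w^* = Err_w
        return err
    u, v = children
    combined = (u[0] * err_star(u) + v[0] * err_star(v)) / size
    return min(err, combined)
```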

After building the complete tree, we perform the final cut from top to
bottom. Since $Err_w^*=\frac{1}{|D_w|}Q(P_w)$ for every cluster, the
best partition $P_w$ can be established by comparing the values of
$Err_w^*$ and $Err_w$. In words, we split the nodes of the tree
from top to bottom. Initially, the root node is the sole member of the
current partition. For each node $w$ in the partition, if
$Err_w^*<Err_w$, it is split into its two child nodes in
the tree, and the current partition is updated by replacing $w$
with its children. The splits are repeated until $Err_w^*=Err_w$
for every remaining node $w$ in the partition.
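This top-down cut can be sketched in Python as follows; nodes reuse an
illustrative \texttt{(size, err, children)} tuple representation, and
the $Err_w^*$ values are assumed to have been precomputed in a
dictionary keyed by node identity.

```python
def final_cut(root, err_star_of):
    """Split every node w with Err_w^* < Err_w into its children;
    nodes with Err_w^* = Err_w remain whole.  Returns the list of
    nodes forming the best partition."""
    partition, stack = [], [root]
    while stack:
        node = stack.pop()
        size, err, children = node
        if children is not None and err_star_of[id(node)] < err:
            stack.extend(children)       # replace w by its child nodes
        else:
            partition.append(node)       # local classifier is optimal
    return partition
```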

Note that we cannot perform the cut during merging by simply stopping
the merger of a node $w$ with other nodes whenever $Err_w^* < Err_w$,
because the value of $Q(P)$ is not monotonically decreasing during
merging.

%\subsection{Complexity Analysis and Optimizations}

The concept clustering algorithm is executed offline. The most time
consuming parts are building classifiers and measuring their
similarity.  More specifically, the algorithm performs $n-1$ mergers
and trains $O(n)$ classifiers, where $n$ is the number of nodes.
Ideally, if every merger is between two clusters of similar size, the
total time is log-linear in the size of the dataset.

%% There are several opportunities for optimization. First, mergers in
%% the final stage of clustering are usually wasted: the clusters to be
%% merged are of different concepts, and their merger will be discarded
%% in the final cut. However, these mergers are the most time consuming
%% ones, because clusters in the final stage are on the same scale in
%% size as the entire dataset. We can optimize by terminating the merging
%% in advance when we are confident that no additional mergers can help
%% finding better partitions. For example, we can remove all candidate
%% mergers with a cluster $u$ if $u$ contains at least $2000$ records and
%% $Err_u$ is at least $20\%$ greater than $Err_u^*$.

%% Second, if clusters to be merged are always unbalanced in size, time
%% complexity will become higher (unless the base classifier supports
%% incremental learning). More specifically, if the tree is a balanced
%% tree, the aggregated size of data in all of the mergers is $O(n)$.
%% However, in the extreme case when small clusters successively merge
%% with a large cluster, the tree will be extremely unbalanced, and this
%% number becomes $O(n^2)$. Fortunately, it is highly unlikely that
%% concept clustering will result in an extremely unbalanced tree.
%% If occasionally we do need to merge a large cluster with a very small
%% one, a possible optimization is to simply reuse the existing
%% classifier from the large cluster.

%%% Local Variables: 
%%% mode: latex
%%% TeX-master: "vem-icde09"
%%% End: 
