\documentclass{article}
\usepackage{graphicx}

\title{Concept Clustering}


\begin{document}

\maketitle

\section{Similarity measure}
\subsection{Clustering}
Clustering algorithms in data mining have been extensively
investigated. Generally, there are four kinds of clustering
algorithms: partitioning algorithms, hierarchical algorithms,
density-based algorithms, and grid-based algorithms.

The goal of clustering is to partition a data set so that data in the
same partition is similar while data in different partitions is not.
Here, similarity is defined by some distance measure.  An important
step in any clustering is to select a distance measure that determines
how similarity between two elements is calculated.

%% Given two objects $x$ and
%% $y$, a distance function $d(x,y)$ indicates how close $x$ and $y$ are.

The most widely used distance functions are the Manhattan distance
(1-norm distance) and the Euclidean distance (2-norm distance).
Clustering algorithms such as hierarchical clustering and k-means
group and partition objects based on this distance measure.
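As a concrete illustration of the two distance functions above (a minimal sketch; the example vectors are hypothetical):

```python
import math

def manhattan(x, y):
    # 1-norm distance: sum of absolute coordinate differences
    return sum(abs(a - b) for a, b in zip(x, y))

def euclidean(x, y):
    # 2-norm distance: square root of the sum of squared differences
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

x, y = [0.0, 0.0], [3.0, 4.0]
print(manhattan(x, y))  # 7.0
print(euclidean(x, y))  # 5.0
```

Any clustering algorithm that groups objects by proximity can be parameterized by either function; the choice changes which objects count as "close."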

In most cases, the distance measure is defined independently of the
data (for example, the 1-norm and 2-norm distances). In some cases,
however, the distance function is data dependent.  For example, in
spectral clustering, given a set of data points $A$, we define a
similarity matrix as a matrix $S$ where $S_{ij}$ represents a measure
of the similarity between points $i, j\in A$.  Spectral clustering
techniques use the spectrum of this similarity matrix to perform
dimensionality reduction, and then cluster in the lower-dimensional
space.  In other words, the final similarity is influenced by the
data.  However, once the data is fixed, the spectrum is fixed, and
hence so is the distance measure.
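A minimal sketch of how a data-dependent similarity matrix induces a spectral embedding. It assumes a Gaussian (RBF) similarity and the unnormalized graph Laplacian; the points, the bandwidth $\sigma$, and the embedding dimension are all hypothetical choices, not part of the text above:

```python
import numpy as np

def spectral_embedding(points, sigma=1.0, k=2):
    """Embed points in k dimensions using the spectrum of a
    data-dependent similarity matrix (unnormalized graph Laplacian)."""
    X = np.asarray(points, dtype=float)
    # S_ij = exp(-||x_i - x_j||^2 / (2 sigma^2)): similarity derived from the data
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    S = np.exp(-sq_dists / (2.0 * sigma ** 2))
    D = np.diag(S.sum(axis=1))   # degree matrix
    L = D - S                    # unnormalized graph Laplacian
    # eigh returns eigenvalues in ascending order; the eigenvectors
    # for the k smallest eigenvalues give the low-dimensional embedding
    _, vecs = np.linalg.eigh(L)
    return vecs[:, :k]

# two well-separated groups of hypothetical 2-D points
pts = [[0, 0], [0.1, 0], [0, 0.1], [5, 5], [5.1, 5], [5, 5.1]]
emb = spectral_embedding(pts)
print(emb.shape)  # (6, 2)
```

Running an ordinary algorithm such as k-means on the rows of `emb` completes the clustering; the point of the sketch is that the matrix $S$, and hence the embedding, is determined entirely by the data.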

However, there is another class of clustering problems where no
distance function can be devised for a pair of elements beforehand.

\subsection{Classification}

Classification also depends on distance functions. For example, in
SVMs, the first step is to choose a kernel function, which defines the
similarity between two elements.

...

\section{Clustering without distance function between two elements}

There are cases where we do not have a pre-defined distance function,
yet we still want to perform clustering.

\subsection{Clustering evolving data}

As an example, assume we have a sequence of data, where each element
has the form $(\vec x,c)$. We want to cluster the data sequence such
that the data in each cluster has a similar class distribution
$P(\vec x|c)$; in other words, data in the same cluster is generated
by the same data-generating mechanism.

\begin{figure}
    \centering
    \includegraphics[width=8cm]{dist1.eps}
    \caption{Concept-changing data stream}
    \label{fig:concept}
\end{figure}

In Figure~\ref{fig:concept}, we show data that arrives before and
after time $t$. At time $t$, there is a sudden shift of concept, which
is represented by the line in the figure.

Our goal is to detect the concept change and cluster the data by
concept. Thus, based on Figure~\ref{fig:concept}, we should obtain two
clusters: one for the data shown in Figure~\ref{fig:concept}(a), and
one for the data shown in Figure~\ref{fig:concept}(b).

Clearly, given the three objects $(\vec x, black)$, $(\vec y, black)$,
and $(\vec z, white)$, we can see that $\vec x$ and $\vec y$ are very
close to each other under both the 1-norm and 2-norm distances, and
they have the same class label ($black$), yet they belong to different
clusters. On the other hand, objects $\vec y$ and $\vec z$ are far
apart and have different class labels, yet they belong to the same
cluster. Thus, existing distance functions such as the Euclidean and
Manhattan distances are not appropriate for this clustering task.
 
\subsection{Pattern-based clustering}


\begin{figure}[htp!]
\centering
\includegraphics[height=4.7cm]{expraw.eps}
\caption{A small data set of 3 objects and 10 attributes.
\label{fig:rawsample}}
\end{figure}
Figure~\ref{fig:rawsample} shows a data set of 3 objects and 10
attributes (columns); no pattern among the three objects is readily
apparent.  However, if we pick the subset of attributes
$\{b,c,h,j,e\}$ and plot the values of the three objects on these
attributes, as shown in Figure~\ref{fig:sample}(a), it is easy to
see that they manifest similar patterns. Nevertheless, these
objects would not be placed in one cluster by any traditional
(subspace) clustering model, because no two of them are close under
a conventional distance measure.


The same set of objects can form different patterns on different
sets of attributes. In Figure~\ref{fig:sample}(b), we show another
pattern in subspace $\{f,d,a,g,i\}$.  This time, the three curves
do not have a shifting relationship.  Instead, values of object 2
are roughly three times larger than those of object 3, and values
of object 1 are roughly three times larger than those of object 2.
If we think of columns $f,d,a,g,i$ as different environmental
stimuli or conditions, the pattern shows that the 3 objects
respond to these conditions coherently, although object 1 is more
responsive or more sensitive to the stimuli than the other two.

\begin{figure}[htp!]
\centering
\begin{tabular}{cc}
\includegraphics[height=4.7cm]{diffobj.eps}&
\includegraphics[height=4.7cm]{expobj.eps}\\
(a) objects in Figure~\ref{fig:rawsample} form a {\em Shifting
Pattern}&
(b) objects in Figure~\ref{fig:rawsample} form a {\em Scaling Pattern}\\
 in subspace $\{b,c,h,j,e\}$ &
 in subspace $\{f,d,a,g,i\}$ \\
\end{tabular}
\caption{Objects form patterns on a set of columns.\label{fig:sample}}
\end{figure}

Clearly, in this case, we cannot use pre-defined distance functions
for clustering. Only after the patterns are discovered can we
construct a distance function for each pattern, such that objects in
the same cluster are close under that distance function. Existing
clustering algorithms, however, require us to fix a distance function
before clustering can be performed.
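The shifting and scaling relationships described above can be checked directly once a candidate subspace is given. A minimal sketch with hypothetical values and a hypothetical tolerance (the actual pattern model is not fixed by the text):

```python
def is_shifting_pattern(u, v, tol=0.5):
    # shifting pattern: v is (approximately) u plus a constant offset,
    # so the pairwise differences should be nearly constant
    diffs = [b - a for a, b in zip(u, v)]
    return max(diffs) - min(diffs) <= tol

def is_scaling_pattern(u, v, tol=0.5):
    # scaling pattern: v is (approximately) u times a constant factor,
    # so the pairwise ratios should be nearly constant
    ratios = [b / a for a, b in zip(u, v)]
    return max(ratios) - min(ratios) <= tol

obj3 = [2.0, 3.0, 1.5, 2.5, 2.0]          # hypothetical values on f,d,a,g,i
obj2 = [x * 3 for x in obj3]              # roughly three times object 3
print(is_scaling_pattern(obj3, obj2))     # True
print(is_shifting_pattern(obj3, obj2))    # False
```

Note that the two objects pass the scaling test while failing the shifting test, mirroring Figure~\ref{fig:sample}(b): the same pair can form one pattern type in one subspace and a different type in another.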

\section{Related work}

example: MDL. E. Terzi's work on segmentation summarization

\section{Approach and Method}

is this EM algorithm?

\subsection{Quality function}

\[ Q(P) = \sum_{P_i\in P} \frac{1}{|P_i|} Err(P_i) \]
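The error term $Err(P_i)$ is not yet specified above. As a placeholder sketch only, the quality function can be computed by taking $Err(P_i)$ to be the within-cluster sum of squared deviations from the cluster mean; this choice of $Err$, and the scalar example data, are assumptions for illustration:

```python
def err(cluster):
    # placeholder Err: sum of squared deviations from the cluster mean
    mean = sum(cluster) / len(cluster)
    return sum((x - mean) ** 2 for x in cluster)

def quality(partition):
    # Q(P) = sum over clusters P_i of Err(P_i) / |P_i|
    return sum(err(p) / len(p) for p in partition)

P = [[1.0, 1.0, 1.0], [10.0, 12.0]]
print(quality(P))  # 0.0/3 + 2.0/2 = 1.0
```

Any other definition of $Err$ can be substituted without changing the structure of $Q$.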


\subsection{Exchange}



\end{document}
