\documentclass[10pt,conference,letterpaper]{llncs}
\usepackage{times,amsmath,epsfig}
\usepackage{graphicx}
\usepackage{amssymb}
\usepackage{amsmath}
\usepackage{subfigure}
\usepackage{algorithmic}
\usepackage{algorithm}
% \theoremstyle{definition} 
\newtheorem{Lemma}{Lemma}


%****************** TITLE ****************************************
\title{Identify the most locally correlated factors}
 
\author{
{Qiyang Duan{\small $~^{\#1}$}, Peng Wang{\small $~^{\#2}$},  Wei Wang{\small $~^{\#3}$}}
\\
\fontsize{9}{9}\selectfont\ttfamily\upshape
$~^{1}$qduan@fudan.edu.cn, 
$~^{2}$pengwang5@fudan.edu.cn  \\
$~^{3}$weiwang1@fudan.edu.cn 
}



\institute{
$~^{\#}$Fudan University, No. 220, Handan Road, Shanghai, 200433, China \\
}


\begin{document}
\maketitle


\begin{abstract}
% We present a new, very important problem not discussed before.        %  \in [1,n]
Given $n$ data streams $(X_1, X_2, ..., X_n)$ and one target data stream $Y$, at any time point $t$ we want to find out which data stream $X_i$ is most correlated to the target $Y$. This technique can help detect faulty or unstable components in a complex system, e.g. a supercomputer \cite{complex_system_oliner} or a satellite. It can also be used to identify temporal correlations among financial indicators, including stocks, bonds, etc. In this paper, we propose a method, based on the Gaussian Mixture Model (GMM), to calculate the probability that each source stream is related to the target one.
Unlike traditional global distance functions (e.g. the Pearson distance), our method gives a local correlation probability that depicts the system behavior at a specific time.


We tested our method on two real-life data sets: the daily prices of the CSI 300 stocks and the performance metrics of an Oracle database.
We compared our method against the PCA/SVD based method by Papadimitriou et al. \cite{local_c_p} and the naive local Pearson correlation. The results show that our new method reveals the actual system interactions more accurately than both.


\keywords{Gaussian Mixture Model, GMM, Local Correlation, Stream}

\end{abstract}


\section{Introduction}
\label{sec:intro}

\subsection{Problem Motivation}
\label{sec:intro:motivation}

Oracle Database \cite{oracle_11g_doc} is known for its high performance and sophisticated tuning techniques. The effectiveness of those tuning methods relies on performance metrics about its internal components, including memory, CPU, the IO system, etc. The Oracle database accurately gathers over 300 metrics, including `SQL Service Response Time', `Executions Per Sec', `Background Checkpoints Per Sec', `CPU Usage Per Sec', etc. Those metrics give an advanced database administrator (DBA) a clear picture of what is going on inside the database system. 

When database users experience a problem such as slow response, the DBA has to go through hundreds of different metrics to pin down the one indicating the problem. This task has always been the most challenging part of a DBA's job. Because every new version of the database software may add new system metrics or remove a few, the task is becoming more and more difficult even for the most experienced DBAs. 


%In another example, in telecom network, thousands of different devices, like routers, switches, can be installed in a same place, and may affect the system.  The diagnosis exercise has been performed by human before. When the number of components involved grows over thousands, manual inference may become short handed. One automated tool is highly desired.

%With emergence of Data Mining technology, people are also looking for some automated methods to tackle this problem. 

In a different domain, the financial market has long been an interesting research topic, since any tiny piece of useful information may be leveraged for huge profit. The price variation of one stock is normally correlated with some other stocks or certain economic factors (e.g. the CPI, or the petroleum price) in a sophisticated way. For example, the banking industry can be highly influenced by housing prices, as we saw in the subprime mortgage crisis of 2008. However, most of the time we do not know exactly how they interact with each other, or to what extent. While stock prices keep changing every day, we always wonder whether some other stocks (or economic factors) are closely related to a given price variation, and to what extent. When an extreme event happens, if we can quickly identify the most correlated stocks, it is much easier to understand the situation and quickly make the right decision. 

In the above two examples, both database system metrics and stock prices can be treated as stream data. 
There is a large body of literature on correlation or similarity search over stream data, such as clustering streams with various similarity distance functions \cite{subspace_cluster_AggYu00,clustering_stream_liao,distance_function_survey,similarity_wavelet}. 

Most of the existing correlation techniques focus on identifying similar streams by calculating a global similarity score. However, two different internal problems may cause the same symptom. For example, in an
Oracle Database, a low buffer cache hit ratio generally means more data access from disk, and therefore slows down the query response. At the same time, locks can also cause queries to pause, leading to an identical slow query response. 
When we observe at one time point that the query response time has increased significantly, we must find out which metric is closely related to it in order to understand the reason. Here, the correlation must be calculated in a local manner to capture such interim events. 

% Most likely, at the time of incident, most of the metrics would remain normal, and only a few may show certain correlated behaviours. 

% Talk about cite paper, standford.
\subsection{Related Work}
There has been some research on identifying local correlations among streams. One naive implementation is to calculate the Pearson coefficient on a sliding window. The Pearson coefficient is defined by Equation \ref{equ:intro:pearson}, where $\overline{X}, \overline{Y}$ are the averages of the vectors $X, Y$ respectively.


\begin{center}
\begin{align}
\rho(X,Y) = \frac{\sum^N_{i=1}(X_i - \overline{X}) (Y_i - \overline{Y})}{\sqrt{\sum^N_{i=1}(X_i - \overline{X})^2} \sqrt{\sum^N_{i=1}(Y_i - \overline{Y})^2} }
\label{equ:intro:pearson}
\end{align}
\end{center}
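This naive sliding-window approach can be sketched as follows. This is a minimal illustration of Equation \ref{equ:intro:pearson} computed per window; the function name `sliding_pearson` and the toy streams are our own illustrative choices:

```python
import numpy as np

def sliding_pearson(x, y, w):
    """Naive local correlation: Pearson coefficient over a sliding window of width w."""
    scores = []
    for t in range(len(x) - w + 1):
        xw, yw = x[t:t+w], y[t:t+w]
        xc, yc = xw - xw.mean(), yw - yw.mean()
        denom = np.sqrt((xc ** 2).sum()) * np.sqrt((yc ** 2).sum())
        scores.append((xc * yc).sum() / denom if denom > 0 else 0.0)
    return np.array(scores)

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 4.0, 3.0, 2.0])
y = 2 * x  # a perfectly correlated stream
print(sliding_pearson(x, y, 4))  # close to 1 in every window
```

Note that the score at each window degenerates when either window is constant, which is one symptom of the locality problems discussed below.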


Papadimitriou et al. proposed a Singular Value Decomposition (SVD) based method and compared it with the naive Pearson one over currency exchange data \cite{local_c_p,local_pca}. They gather the autocovariance data among $m$ windows of length $w$ to form an $m \times w$ matrix $\Gamma_t (X, w, m)$. SVD is applied to $\Gamma$, and the few largest resulting eigenvectors $U_k$ define the correlation score:
\begin{center}
\begin{align}
l_t(X,Y) &= \frac{1}{2} ( \parallel U_X^T u_Y \parallel +  \parallel U_Y^T u_X \parallel  )
\label{equ:local_c_p}
\end{align}
\end{center}


Oliner et al. proposed a similar method using SVD together with lag correlation \cite{braid} to analyze the system logs of a supercomputer \cite{complex_system_oliner}. Jiang et al. proposed an algorithm to detect local patterns in financial streams \cite{mcalp} and tested it on 1700 stocks. 





% Someone detect the event from stream data, as in \cite{event_stream_Cho} . C. Cho  et al.
%When we apply those methods onto our problem, two issues emerged:



\subsection{Our contribution}
In this paper, we propose a new method to detect local correlations among streams. Our method uses historical stream data to train a group of Gaussian Mixture Models (GMMs). These models encapsulate
the information of how the components interact with each other. Based on the model, at an arbitrary time point we calculate the conditional probability of the target stream value given the current source stream values, $P(Y \mid X_i )$.
This conditional probability is the correlation score of each stream, and according to it we can easily pick out the most correlated source streams. 

\section{A simplified problem} 
\noindent 


If all the streams under inspection take values from a finite state set, our problem is greatly simplified. Let us start with a toy system in which we have one target stream $Y$ and two source streams $X_1$ and $X_2$. We assume that $X_i$ and $Y$ take only a limited set of values, e.g. $X_i \in \{A, B, ..., Z\}$ and $Y \in \{A, B, ..., Z\}$. Now, suppose we have observed the three streams for 12 time points in the history, as below:

\begin{align}
X_1 &= (D, B, D, C, D, A, C, D, B, B, D, A, ..., D)\\
X_2 &= (C, A, B, A, B, A, C, C, A, D, B, B, ..., B)\\
Y   &= (A, B, A, B, A, B, A, A, B, B, A, B, ..., A)
\end{align}


%\begin{align}
%P(X_1=D  \mid Y=A) = \frac{5}{6},\\
%P(X_1=C  \mid Y=A) = \frac{1}{6},\\
%P(X_2=A  \mid Y=B) = \frac{5}{6},\\
%P(X_1=C  \mid Y=A) = \frac{1}{6},
%\end{align}


By simply counting the above events, we can compute the joint frequencies of the $X_i$ and $Y$ events for the first 12 time points, as shown in Table \ref{tab:simplified:eventcounts}.

\begin{table}
\begin{center}
\begin{tabular}{| c | c| c |c |c | c| c |c |c |}
\hline
$Y$ Event & $X_1 = A $ & $X_1 = B $ & $X_1 = C $ & $X_1 = D $ & $X_2 = A $ & $X_2 = B $ & $X_2 = C $  & $X_2 = D $ \\
\hline
$Y=A$ & 0 & 0 & 1  & 5  & 0 & 3  & 3  & 0  \\
\hline
$Y=B$ & 2 & 3  & 1  & 0 & 4 & 1  & 0 & 1  \\
\hline
\end{tabular}
\caption{The Source Event Frequency Counts}
\label{tab:simplified:eventcounts}
\end{center}
\end{table}


% According to PRML book at Page 36, when we pick up a red ball, we DO NOT know if it is from Box 1 or box 2, but according to $P(Ball=Read \mid Box=1)$ we can calculate the $P(Box=1 \mid ball=read)$. Right now, we do not need this probability, all we need is $P(Ball=Read \mid Box=1)$ to know that which box might caused the ball to be red!!!

Now, at the last time point, we observe the event $Y=A$ on the target stream, together with $X_1 = D$ and $X_2 = B$ on the source streams. Our mission is to determine which source stream caused this target event. To measure how much one source stream influences the target stream, we turn to the conditional probabilities $P(Y \mid X_i)$. The mission then translates into finding the largest $P(Y \mid X_i)$ among all the source streams.

According to the product rule of the probability theory\cite{bishop_gmm_prml}, we have:
\begin{align}
P(Y  \mid X) = \frac{P(Y,X)}{P(X)}
\label{equ:conditional:prob}
\end{align}

From Table \ref{tab:simplified:eventcounts}, we can easily get:
\begin{align}
P(Y = A  \mid X_1 = D) &= \frac{P(Y = A, X_1 = D)}{P(X_1 = D)} = \frac{5/12}{5/12} = 1 \\
P(Y = A  \mid X_2 = B) &= \frac{P(Y = A, X_2 = B)}{P(X_2 = B)} = \frac{3/12}{4/12} = 0.75 
\end{align}


According to the above results, we conclude that $X_1$ influenced $Y$ more than $X_2$ did.
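The counting argument can be reproduced directly from the first 12 observations of the toy streams; a minimal sketch, where the helper `cond_prob` is our own illustrative naming:

```python
# First 12 observations from the toy example
X1 = list("DBDCDACDBBDA")
X2 = list("CABABACCADBB")
Y  = list("ABABABAABBAB")

def cond_prob(source, target, x_val, y_val):
    """P(Y = y_val | X = x_val), estimated by frequency counting:
    joint count divided by the marginal count of the source event."""
    joint = sum(1 for x, y in zip(source, target) if x == x_val and y == y_val)
    marginal = source.count(x_val)
    return joint / marginal if marginal else 0.0

print(cond_prob(X1, Y, "D", "A"))  # 1.0
print(cond_prob(X2, Y, "B", "A"))  # 0.75
```

The two printed values match the worked computations above, confirming the frequency counts in Table \ref{tab:simplified:eventcounts}.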

%  Then at the end, when event $Y=B$ happens, we can calculate the inversed event $P( C_3 \mid X_2 )$, $P(C_3 \mid Y_1)$. Then according to those values, we can tell if X is more relevant to current situation of $C$ or Y is.

% Question: Why can we not calculate the frequency of $P( C_3 \mid X_2 )$, $P(C_3 \mid Y_3)$ directly? stupid question, check prml book. it is because the values we observed in table 1 is the joint probability P(X,Y), not p(Y | X), or P(X|Y)

% From Discrete Counting to a probability distribution
This story looks quite easy. However, when we try to apply this method to real-life problems, a new challenge arises: most streams take continuous values instead of countable events. To extend the method to numerical streams, instead of counting event frequencies, we train a probability distribution. Among the different types of distributions, we select the Gaussian Mixture Model (GMM).

Before we go into the mathematical formulations, we summarize all our symbols in Table \ref{tab:math:symbols}.


\begin{table}
\begin{center}
\begin{tabular}{| c | p{9cm} |}
\hline
Symbol &  Meaning  \\
\hline
N & The stream length; we assume all source and target streams have the same length.\\
\hline
M & The number of streams; we assume the $M-1$ source streams and the single target stream have the same dimension, and we denote $X_M = Y$. \\
\hline
$X_i$ & The $i$-th source stream \\
\hline
$Y$ & The target stream\\
\hline
$K$ & The number of states in a GMM.\\
\hline
$\mu, \mu_k$ & $\mu$ denotes the centers of the Gaussians in the GMM model; $\mu_k$ is the $k$-th center.\\
\hline
$\Sigma, \Sigma_k$ & The covariance matrices of the GMM; $\Sigma_k$ is that of the $k$-th Gaussian.\\
\hline
\end{tabular}
\caption{Mathematical Symbols Summary}
\label{tab:math:symbols}
\end{center}
\end{table}



% How about extending it to continuous probability, real value? Since most of streams are numeric... Then problem is , we do not have those events $X_i, Y_j$ anymore... Discretize the real values to be atom shapes, and apply that same method?

% Is there a measure to describe the shape of line? A simple number can never do that, since shape does not have Pianxu relationship. Shape must be clusterd. and represented by  a sample shape!!! For example, 
% Silly question again, a distribution of multiple dimension is the answer.


%\begin{enumerate}
%\item Is it an ``Attribute Importance'' problem?
%\item This is an Unsupervised Learning !!!, no labeled data required!!, what if customer do have labeled data for training?
%\end{enumerate}



\section{Background}
\label{sec:background}

\subsection{Gaussian Mixture Model}
\label{sec:background:gmm}



If $x \in X $ is a $D$-dimensional vector, then a Gaussian distribution can be written as:
\begin{align}
G(x  \mid \mu, \Sigma) = \frac{1}{(2 \pi)^{D/2}} \frac{1}{|\Sigma|^{1/2}} \exp \{  - \frac{1}{2} (x - \mu)^T \Sigma^{-1} (x - \mu)  \}
\label{equ:background:Gaussian}
\end{align}
In Equation \ref{equ:background:Gaussian}, $\mu$ is the center (mean) of the Gaussian distribution, and $\Sigma$ is the covariance matrix of $x$. 
% Page 90 at prml
If we partition the vector $x$ into two parts, then the marginal Gaussian distribution can be obtained by partitioning the Gaussian parameters in the same way:
\begin{equation*}
x  = \left(
\begin{array}{c}
x_a  \\
x_b  
\end{array} \right), 
\mu  = \left(
\begin{array}{c}
\mu_a  \\
\mu_b  
\end{array} \right), 
\Sigma  = \left(
\begin{array}{cc}
\Sigma_{aa}  & \Sigma_{ab}\\
\Sigma_{ba}  & \Sigma_{bb}\\
\end{array} \right)
\end{equation*}
The marginal Gaussian distribution is written as \cite{bishop_gmm_prml}:
\begin{equation*}
G(x_a)  = G(x_a \mid \mu_a , \Sigma_{aa} )
\label{equ:gaussian:marginal}
\end{equation*}
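This partitioning rule is easy to verify numerically: sampling from the joint Gaussian and inspecting the first coordinate should reproduce $\mu_a$ and $\Sigma_{aa}$. A quick sanity check with illustrative parameters of our own choosing:

```python
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([1.0, 2.0])
Sigma = np.array([[2.0, 0.6],
                  [0.6, 1.0]])

# The marginal of x_a is obtained by simply selecting the a-block:
mu_a, Sigma_aa = mu[0], Sigma[0, 0]

# Empirical mean/variance of the first coordinate of joint samples
# should match the partitioned parameters mu_a = 1.0, Sigma_aa = 2.0.
samples = rng.multivariate_normal(mu, Sigma, size=200_000)
print(samples[:, 0].mean(), samples[:, 0].var())
```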

The Gaussian distribution is a perfectly symmetrical bump, but most real-life data does not look like that. To model a sophisticated distribution, we use a linear combination of multiple Gaussians to simulate multiple bumps and, by extension, almost any shape. This forms a Gaussian Mixture Model (GMM), defined as:
\begin{align}
GMM(x)  = \sum^K_{k=1}{\pi_k G(x \mid \mu_k, \Sigma_k) }
\label{equ:background:GMM}
\end{align}

In Equation \ref{equ:background:GMM}, $K$ is the number of bumps mixed into a single distribution; we call it the number of states in this paper. Together with the previous $\mu$ and $\Sigma$, the mixing weights $\pi_k$ make up the three parameters that determine a GMM.

\begin{figure}
\begin{center}
\includegraphics[width=7cm]{./img/oracle_gmm_sql_vs_execution_3_states.eps}
\caption{GMM distribution of `SQL Service Response Time' vs `Executions Per Sec'. The red x marks are the centers and the circles represent the contour one standard deviation away from each center. }
\label{fig:background:oraclegmmsqlvsexecution}
\end{center}
\end{figure}

We have chosen the GMM to model our stream data distribution because of its unique merit: with a sufficient number of Gaussians, a GMM can approximate virtually any other distribution to arbitrary accuracy \cite{bishop_gmm_prml,GMM_Sung_Rice}.
For example, in Fig.\ref{fig:background:oraclegmmsqlvsexecution}, two database metrics are plotted on the X and Y axes. Though the data distribution does not look like any predefined distribution, we can still fit it with a 3-state GMM.
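Equation \ref{equ:background:GMM} is straightforward to evaluate once the parameters are known. A minimal sketch using SciPy, with hand-picked illustrative parameters (not the Oracle data):

```python
import numpy as np
from scipy.stats import multivariate_normal

def gmm_pdf(x, pis, mus, Sigmas):
    """Evaluate the GMM density: a weighted sum of K Gaussian densities."""
    return sum(pi * multivariate_normal.pdf(x, mean=mu, cov=S)
               for pi, mu, S in zip(pis, mus, Sigmas))

# A 2-state, 2-dimensional mixture (illustrative parameters only)
pis = [0.7, 0.3]
mus = [np.zeros(2), np.array([3.0, 3.0])]
Sigmas = [np.eye(2), 0.5 * np.eye(2)]

# Near the first center the density is dominated by the first component
print(gmm_pdf(np.zeros(2), pis, mus, Sigmas))
```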

\subsection{Expectation Maximization for GMM}
\label{sec:background:em}
Given $D$-dimensional data, to build the best-fitting GMM we use Expectation Maximization (EM) to learn the three parameters $\pi_k, \mu, \Sigma$. EM is an iterative method that assumes certain latent variables and then estimates the parameters
to maximize the likelihood \cite{EM_gentle}. In GMM training, $\gamma(n,k)$ is introduced as a latent variable, meaning to what extent the $n$-th data point belongs to the $k$-th state. The prior $\pi_k$ is the expectation of $\gamma(n,k)$. 

The EM method works similarly to its simplified version, the K-Means algorithm \cite{bishop_gmm_prml}. Each EM iteration consists of two steps: Expectation (E), which evaluates the expectation while fixing the Gaussian parameters $\mu, \Sigma$, and Maximization (M), which recomputes the Gaussian parameters to maximize the likelihood while fixing the prior $\pi$. 

% In prml, page 435 is EM for GMM.
% a problem here $\pi_k$ should not be the latent variable, but the prior, xxxx


% In our EM implementation, we use K-Means as the initial seeding for the GMM,  
At the beginning, $K$ is an arbitrary number specified according to human experience. The K-Means method is then used to compute the initial states, because K-Means is faster and yet provides a good approximation to the final GMM result. As the result of K-Means, every state holds a certain number of data points, each with $\gamma(n,k) = 1$ for its assigned state. For each state, $\mu$ is then given by the K-Means center, and $\Sigma$ can be computed as the standard covariance matrix of its points. 


% The last parameter can be computed as Equation \ref{equ:background:em:compute:sigma}:


In every iteration, we start with the E-step, where $\gamma$ and $\pi$ are computed while fixing $\mu$ and $\Sigma$, according to Equations \ref{equ:background:em:compute:gamma} and \ref{equ:background:em:compute:pi}. This step re-arranges the assignment probabilities between the states and the data points. 
\begin{align}
\gamma(n,k) &= \frac{\pi_k G(x_n  \mid \mu_k, \Sigma_k) }{\sum_j \pi_j G(x_n  \mid \mu_j, \Sigma_j)}\label{equ:background:em:compute:gamma} \\
N_k &= \sum^N_{n=1} \gamma(n,k)\\
\pi_k &= \frac{N_k}{N} \label{equ:background:em:compute:pi} 
\end{align}

In the M-step, the parameters $\mu$ and $\Sigma$ are computed with $\pi_k$ fixed according to Equations \ref{equ:background:em:compute:mu} and \ref{equ:background:em:compute:sigma}. This step builds the best fitting Gaussians for each state according to the data points assigned to it.  

\begin{align}
\mu_k &= \frac{1}{N_k} \sum^N_{n=1}\gamma(n,k) x_n
\label{equ:background:em:compute:mu}
\end{align}


\begin{align}
\label{equ:background:em:compute:sigma}
\Sigma_k &= \frac{1}{N_k} \sum^N_{n=1} \gamma(n,k) (x_n - \mu_k) (x_n - \mu_k)^T 
\end{align}


At the end of the M-step, we check the overall log likelihood according to Equation \ref{equ:background:em:compute:loglikelihood}. If the likelihood has converged, we terminate the process; otherwise, we proceed to the E-step again.

\begin{align}
\ln p(X \mid \pi, \mu, \Sigma) = \sum_{t=1}^N \ln \{ GMM(x_t \mid \pi, \mu, \Sigma ) \}
\label{equ:background:em:compute:loglikelihood}
\end{align}
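The E/M loop of Equations \ref{equ:background:em:compute:gamma}--\ref{equ:background:em:compute:sigma} can be sketched as follows. This is a minimal illustration, not our production implementation: for brevity it seeds the states from quantiles along the first coordinate rather than running a full K-Means pass, and it adds a small regularizer to keep the covariances invertible.

```python
import numpy as np
from scipy.stats import multivariate_normal

def em_gmm(X, K, iters=100, tol=1e-6):
    """Minimal EM for a GMM, following the gamma / pi / mu / Sigma updates."""
    N, D = X.shape
    # Seeding: spread initial centers along the first coordinate (K-Means stand-in)
    order = np.argsort(X[:, 0])
    mu = X[order[np.linspace(0, N - 1, K).astype(int)]].copy()
    Sigma = np.array([np.cov(X.T) + 1e-6 * np.eye(D) for _ in range(K)])
    pi = np.full(K, 1.0 / K)
    prev_ll = -np.inf
    for _ in range(iters):
        # E-step: responsibilities gamma(n, k)
        dens = np.column_stack(
            [pi[k] * multivariate_normal.pdf(X, mu[k], Sigma[k]) for k in range(K)])
        gamma = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate N_k, pi, mu, Sigma
        Nk = gamma.sum(axis=0)
        pi = Nk / N
        mu = (gamma.T @ X) / Nk[:, None]
        for k in range(K):
            d = X - mu[k]
            Sigma[k] = (gamma[:, k, None] * d).T @ d / Nk[k] + 1e-6 * np.eye(D)
        # Convergence check on the log likelihood
        ll = np.log(dens.sum(axis=1)).sum()
        if ll - prev_ll < tol:
            break
        prev_ll = ll
    return pi, mu, Sigma
```

Fitting two well-separated synthetic clusters with $K=2$ recovers roughly equal mixing weights and the two cluster centers.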





 
\subsection{Haar Wavelet Synopsis}
\label{sec:background:Wavelet}
We use an example to illustrate how the Haar Wavelet transformation \cite{Vitter99,Chakrabarti01,Haar07} works. Given a stream with 8 data points $S = \{ 5, 7, 1, 9, 3, 5, 2, 4 \}$, the Haar Wavelet transformation proceeds as in Table \ref{tab:haar_demo}. There are always two types of coefficients: the average coefficients and the detail coefficients. In step 1 of Table \ref{tab:haar_demo}, every two adjacent data points are transformed into one average coefficient and one difference value (half of their difference). The difference is then moved to the end of the stream as a detail coefficient. For example, the first two values $\{5, 7\}$ from step 0 become the first average coefficient $\{ 6 \}$ and the fifth detail coefficient $\{-1\}$. In the following steps, the average coefficients are again paired and transformed in the same fashion, until there is only one average value at the head, followed by all the detail coefficients for the different resolution levels. 


\begin{table}
\begin{center}
\begin{tabular}{| c | c |}
\hline
Step No. & Transformed Stream Data \\
\hline
0 & \{ 5, 7, 1, 9, 3, 5, 2, 4 \}\\
\hline
1 &  \{ 6, 5,  4,   3, -1, -4, -1, -1  \} \\
\hline
2 &  \{ 5.5, 3.5, 0.5, 0.5, -1, -4, -1, -1  \} \\
\hline
3 &  \{ 4.5, 1, 0.5, 0.5, -1, -4, -1, -1  \} \\
\hline

\end{tabular}
\caption{Haar Wavelet Transform}
\label{tab:haar_demo}
\end{center}
\end{table}

We will refer to the Haar Wavelet simply as Wavelet in the rest of the paper. Since the first few values of a wavelet transform keep the most important low-resolution information, we can retain the first few coefficients as a compact synopsis and discard the rest.
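The stepwise transformation in Table \ref{tab:haar_demo} can be reproduced with a few lines; `haar` is our own illustrative helper, assuming the stream length is a power of two:

```python
def haar(stream):
    """Stepwise Haar transform: averages move to the front, details to the back."""
    coeffs = list(stream)
    n = len(coeffs)  # number of average coefficients still to be paired
    while n > 1:
        avgs = [(coeffs[i] + coeffs[i + 1]) / 2 for i in range(0, n, 2)]
        dets = [(coeffs[i] - coeffs[i + 1]) / 2 for i in range(0, n, 2)]
        coeffs = avgs + dets + coeffs[n:]
        n //= 2
    return coeffs

print(haar([5, 7, 1, 9, 3, 5, 2, 4]))
# [4.5, 1.0, 0.5, 0.5, -1.0, -4.0, -1.0, -1.0]
```

The output matches the final row of Table \ref{tab:haar_demo}.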



\section{Our Method}
\label{sec:our}
\subsubsection{Problem Statement:}

Given $m$ data streams $(X_1, X_2, ..., X_m)$ of length $N$ and one target data stream $Y$, at any time point $t \in [1,N]$ we want to find out which data stream $X_i$ is most correlated to the target $Y$. 

\subsection{Local Correlation using GMM}
\label{sec:our:GMM}

First, we have to define the correlation. Traditionally, stream correlation has been conceived as a similarity search task among a group of streams \cite{subspace_cluster_AggYu00,clustering_stream_liao,distance_function_survey,similarity_wavelet}. Under this assumption, two streams can be identified as correlated only when their patterns look alike. Otherwise, no matter how many times a pattern repeats itself, it will not be recognized. For example, the Pearson coefficient of the two streams shown in Fig.\ref{fig:our:pearson} is zero, yet anyone can tell that the two streams are highly correlated. To fix this, we turn to a probability distribution to model stream correlations.

\begin{figure}
\begin{center}
\includegraphics[width=7cm]{./img/stream.x.y.pearson.not.working.eps}
\caption{An example of Zero Pearson Coefficient. }
\label{fig:our:pearson}
\end{center}
\end{figure}


\begin{definition} At a given time point $t$, the local correlation score between two streams $X$ and $Y$ is defined as the conditional probability of $Y = y_t$ given $X = x_t$, i.e. $corr(X,Y) = P(Y = y_t \mid X = x_t)$
\end{definition}

To compute this conditional probability, a Gaussian Mixture Model is trained on the historical streams. We first use the single-point correlation score to explain our method, and then extend it to a line correlation.

Assume that there are $M$ source streams, all of the same length $N$. At each time point, we align all the stream values to form a new vector of $M+1$ dimensions, denoted $z_t, t \in (1...N)$. Initially, we have to specify the parameter $K$ according to our understanding of the data. This initial value can be refined after a few trials. 

Once we have gathered all the training data $Z = \{z_t \mid t \in (1...N)\}$, we use the EM algorithm to fit an $(M+1)$-dimensional GMM model $GMM(z)$, according to Equations \ref{equ:background:em:compute:pi}, \ref{equ:background:em:compute:mu} and \ref{equ:background:em:compute:sigma}. 
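Given such a fitted joint GMM over $(X, Y)$, the correlation score follows from Equation \ref{equ:conditional:prob} together with the marginalization rule of Section \ref{sec:background:gmm}: $P(Y = y_t \mid X = x_t) = p(x_t, y_t)/p(x_t)$, where the marginal $p(x)$ reuses the same mixture with projected parameters. A sketch with hand-picked illustrative parameters (not trained from data):

```python
import numpy as np
from scipy.stats import multivariate_normal

def local_corr(x_t, y_t, pis, mus, Sigmas):
    """Local correlation score at time t: P(Y = y_t | X = x_t) = p(x_t, y_t) / p(x_t)."""
    joint = sum(pi * multivariate_normal.pdf([x_t, y_t], m, S)
                for pi, m, S in zip(pis, mus, Sigmas))
    # Marginal p(x): same mixture with means/covariances projected onto X
    marg = sum(pi * multivariate_normal.pdf(x_t, m[0], S[0, 0])
               for pi, m, S in zip(pis, mus, Sigmas))
    return joint / marg

# A hypothetical 2-state joint GMM over (X, Y)
pis = [0.5, 0.5]
mus = [np.array([0.0, 0.0]), np.array([4.0, 4.0])]
Sigmas = [np.eye(2), np.eye(2)]
print(local_corr(0.0, 0.0, pis, mus, Sigmas))  # high: y_t likely given x_t
print(local_corr(0.0, 4.0, pis, mus, Sigmas))  # low: y_t unlikely given x_t
```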

\subsubsection{Interpretation.}
Within the model $GMM(z)$, each state $G(\mu,\Sigma)$ represents one state of the original system. For example, the three states in Fig.\ref{fig:background:oraclegmmsqlvsexecution} correspond to three system states of an Oracle Database:

\begin{enumerate}
\item The left state, centered at (0, 0.02), is the state where the Oracle database is completely idle, with few active executions; in this state, the `response time' varies mostly from 0 to 0.02.
\item The state in the middle is where the execution volume is medium. We cannot find an obvious pattern among those points.
\item The right state, centered at (5700, 0.34), is where Oracle is busy. In this state, the `response time' generally decreases as `executions' goes up, and most `response time' values are within the range [0.02, 0.04].
\end{enumerate}


% Though we can not convert the distribution into a numerical value to compare the effectiveness against the old style measures, this distribution correlation gives us a unique capability: to calculate the correlation score at a single time point. 



\subsubsection{More Differences}
\label{sec:our:straight} 
Fig.\ref{fig:line:specialline} gives another special example of how different the new probability-based correlation is compared to Pearson-based methods. In Fig.\ref{fig:line:specialline}, the left line is $Y = 1 \cdot X$ and the right one is $Y' = 0 \cdot X'$. As one can easily see, the Pearson coefficient of the left line $(X,Y)$ is 1, but it is 0 for the right line $(X',Y')$. On the contrary, if we train a GMM for each side, the two models look quite similar, as in Fig.\ref{fig:line:specialline}. Thus, under the new distribution-based correlation, the probability correlations of $(X,Y)$ and $(X',Y')$ are almost the same.

% by test.gmm.special.straight.line.correlatin.m
\begin{figure}
\begin{center}
\includegraphics[width=7cm]{./img/compare_special_line_score.eps}
\caption{The GMM local correlation scores for two special lines. }
\label{fig:line:specialline}
\end{center}
\end{figure}

This difference arises because the multivariate Gaussian family is closed under linear (affine) transformations. We can use the very simple linear transformation $\phi_{ab}$ from Equation \ref{lineartransform} to convert the left line into the right one, $\phi_{ab} \, [X \; Y]^T = [X' \; Y']^T$, and use $\phi_{ba}$ to reverse it. Therefore, the two lines have similar correlation scores under the GMM method.
% Using Equation \ref{lineartransform}, we can also verify that $P(Y=0.5 | X=0.5) = P(Y'=0 | X=0.5)$. But this did not WORK!!!!! Why????????? 
% If one rewrites the right line as  $Y = 0 * X_i$, and left line as $Y = 1 * X_i$, it would easier to understand why GMM give same probabilities for them two. 
%http://en.wikipedia.org/wiki/Multivariate_normal_distribution
% Tested in test_special_line_gmm.m & test.gmm.special.straight.line.correlatin.m
\begin{equation}
\phi_{ab}  =  \left(
\begin{array}{cc}
0.5  & 0.5\\
-0.5 & 0.5\\
\end{array} \right), 
\phi_{ba}  =  \left(
\begin{array}{cc}
1 & -1\\
1 & 1\\
\end{array} \right) \label{lineartransform}
\end{equation}
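A quick numerical check of Equation \ref{lineartransform}, treating the points as column vectors $[x_t \; y_t]^T$:

```python
import numpy as np

phi_ab = np.array([[0.5, 0.5],
                   [-0.5, 0.5]])
phi_ba = np.array([[1.0, -1.0],
                   [1.0, 1.0]])

# phi_ba is the inverse of phi_ab
print(phi_ab @ phi_ba)  # identity matrix

# The left line Y = X, stored as column vectors (x_t, y_t)
t = np.linspace(-1.0, 1.0, 5)
line_xy = np.vstack([t, t])

# phi_ab maps it onto the right line Y' = 0, and phi_ba maps it back
line_ab = phi_ab @ line_xy
print(line_ab[1])                   # all zeros: Y' = 0
print(phi_ba @ line_ab - line_xy)   # all zeros: round trip recovers Y = X
```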



\subsection{GMM vs SVD} 
\noindent 

Using a GMM, we can have as many states $K$ as the data requires. In each state, the streams interact with each other in a different way. For example, in Fig.\ref{fig:background:oraclegmmsqlvsexecution} we learned three states over two streams, and in Fig.\ref{fig:our:pearson} we can intuitively imagine that a 4-state GMM would fit the data perfectly.

This is in contrast to the SVD/PCA based methods proposed in \cite{local_c_p,local_pca}. Those methods (such as SPIRIT \cite{local_pca}) perform an SVD decomposition over all the historical stream data and acquire the significant eigenvectors. The transformed eigenvectors form a new subspace, where each eigenvector becomes one dimension. Each eigenvector (or subspace dimension) is taken as one system state for the original stream data. When a new data point arrives, it is projected onto each eigenvector, and the projected length determines to which eigenvector the new data point belongs.

Following this explanation, one obvious limitation surfaces: those methods can capture at most $M$ system states for $M$ streams, since SVD/PCA can only reduce the dimensionality, never increase it.

However, in a complex system, one component may impact any number of other components, resulting in at least $M!$ possible events. Moreover, there may be hidden or external factors not covered by our monitoring streams, which pushes the number of states to being effectively unbounded. As in Fig.\ref{fig:background:oraclegmmsqlvsexecution}, our GMM method detected three states over only two streams, which SVD/PCA based methods would fail to capture. 

For this reason, we believe the GMM based method captures the system information better and gives more accurate correlation scores. One thing worth noting is that it is still possible to use SVD as a standard dimension reduction technique before feeding all streams into the GMM training, to alleviate the curse of dimensionality.

%, but , the number of state $m$ is not limited by the number of input streams $n$. We can have any many Gaussian Distributions as we want in the trained model, to represent all the possible system states.



% Here i draw a graph, to show number of state for 3 streams.




\subsection{Paired GMM vs Holistic GMM}
\label{sec:gmm_corr:why_paired}


In Fig.\ref{fig:background:oraclegmmsqlvsexecution}, we learned the GMM on only two streams. As described in Section \ref{sec:intro:motivation}, there are over 300 different performance metrics. Considering that different groups of components may respond to a specific event, ideally one global GMM over all streams would depict the original system states most precisely.

However, due to the well-known \emph{curse of dimensionality}, when the number of streams (dimensions) increases, the distance differences among data points become less and less meaningful \cite{curse_dimension}. Just like other clustering methods, the EM algorithm for GMM from Section \ref{sec:background:em} will then fail to converge to a meaningful result. 

% Another Problem: What if multiple events from a source stream cause the target to change? I can easily calculate probability of a single event, but how to calculate their joint probability?

Though one global GMM is desirable for capturing all system states precisely, in our problem statement we are more concerned with calculating the correlation between each source stream and the target. 
In real-world systems, there are always one or a few special streams that end users care about more than others. For example, in an Oracle database, `SQL Response Time' is a direct indicator of the user experience quality. A DBA may set an alarm on this metric and monitor it carefully. When an abnormal event is detected, the DBA wants to quickly identify the most correlated source streams. 

Considering these usage scenarios, it is a straightforward choice to simply fit one GMM for each pair of a source stream and the target stream. 
Before using paired GMMs to derive the correlation score, we have to answer a question: how different are the results of paired GMMs compared against one global GMM?

% using a monolithic approach
% Here I explain why paired is good choice. since result is similar. 

\begin{Lemma} 
%Assume that the parameter $\pi_k$ is fixed and all source streams are independent, the paired GMM and the global one GMM training processes will produce identical results.
If all source streams are independent of each other, each global $M$-dimensional GMM can be expressed by a group of $M-1$ paired marginal GMMs.
\label{lemma1}
\end{Lemma}

\begin{proof}
%Since the EM algorithm from section \ref{sec:background:em} is an iterative process, we will show that in every iteration, the E-step and M-step produce same result under our assumptions. 

%Since global GMM method produces only one GMM while the paired GMM method produces $M-1$ GMM models, w
%We will first define the exact meaning of \emph{equivalent } models. One GMM $GMM$ is equivalent to a group of paired GMMs  $GMM'$ if:

%\begin{enumerate}
%\item $GMM$ and and all GMM model from $GMM'$ contain exactly $K$ states.
%\item $GMM$ is of $M$ dimensions and there are $M-1$ GMM models in $GMM'$. The $i$-th GMM model in $GMM'$ is defined on dimensions $X_i$ and $X_M$.
%\item The $i$-th GMM model from  $GMM'$  is equal to the marginal distribution of global GMM on $X_i,X_M$.
%\item The $i$-th GMM model from  $GMM'$  is equal to the marginal distribution of global GMM on $X_i,X_M$.
%\end{enumerate}


Given a $K$-state $M$-dimensional GMM model ($GMM$), we can define a group $GMM'$ of $M-1$ paired GMMs by the following rules:

\begin{enumerate}
\item Each GMM model in $GMM'$ has $K$ states.
\item The prior $\pi$ of $GMM$ is the same as the prior $\pi'$ of each GMM model in $GMM'$, i.e. $\pi = \pi'$.
\item The mean vector $\mu'_{ik}$ of $GMM'$ ($k$-th state, $i$-th GMM) is the projection of the mean vector $\mu_k$ onto the $i$-th and $M$-th dimensions, i.e. $\mu'_{ik} = \Pi_{iM}(\mu_{k})$.
\item The covariance $\Sigma'_{ik}$ of $GMM'$ is the projection of the covariance $\Sigma_k$ onto the four elements at ($ii, iM, Mi, MM$), i.e. $\Sigma'_{ik} = \Pi_{iM}(\Sigma_{k})$.
\end{enumerate}

From Rules 1--3 above, we can easily see that projecting the means and the priors loses no information. Regarding Rule 4, we can unfold $\Sigma_{k}$ and $\Sigma'_{ik}$ as follows. 

\begin{equation*}
\mathbf{\Sigma_k} = \left(
\begin{array}{cccc}
\sigma_{11} & \sigma_{12} & \ldots & \sigma_{1M}\\
\sigma_{21} & \sigma_{22} & \ldots & \sigma_{2M}\\
\vdots & \vdots & \ddots & \vdots  \\
\sigma_{M1} & \sigma_{M2} & \ldots & \sigma_{MM}\\
\end{array} \right)
\label{equ:sigma:array}
\end{equation*}
%Similarly, $\Sigma'_{lk}$ can be unfolded as:

\begin{center}
\begin{equation*}
\label{equ:sigma:pairedarray}
\mathbf{\Sigma'_{ik}} = \left(
\begin{array}{cc}
\sigma_{11}' & \sigma_{12}' \\
\sigma_{21}' & \sigma_{22}' 
\end{array} \right) = \left(
\begin{array}{cc}
\sigma_{ii} & \sigma_{iM} \\
\sigma_{Mi} & \sigma_{MM} 
\end{array} \right)
\end{equation*}
\end{center}



Because the source streams are independent of each other, in $\Sigma_{k}$ we have $\sigma_{ij} = 0$ whenever $i\neq M$, $j \neq M$ and $i \neq j$. Therefore, the global GMM can be expressed by this group of paired GMMs without losing any information.

From its definition, we can also easily see that each model in $GMM'$ is in fact a marginal distribution:

\begin{center}
\begin{equation*}
\label{equ:sigma:marginalequal}
GMM'_{i} = \int_{-\infty}^{\infty}\!\!\cdots\!\!\int_{-\infty}^{\infty} GMM \; dx_1 \cdots dx_{i-1}\, dx_{i+1} \cdots dx_{M-1}
\end{equation*}
\end{center}



% Since the EM method is an iterative method, here we will prove that in every iteration, the E-step and M-step will produce same result.

% Then I show that they are equivalent! DONE!
% !!!!!!!!!!!! Though the P(x) is different for pair dimensional vs global one, but the expectation E(x) is same when there are sufficient number of points !!!

%In the first iteration, we initiate both the paired EM and the global EM by one K-Means process. Therefore, each data point belongs to same state in both paird and global GMM, i.e. the parameter $\gamma = \gamma'$ and  $\pi = \pi'$.  Same K-Means result also set the centers of paired and global GMM to be same, i.e. $\mu = \mu'$. 


\end{proof}
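The four projection rules above can be sketched numerically as follows (a minimal NumPy illustration; the parameter layout \texttt{pi}, \texttt{mu}, \texttt{sigma} and the 0-based indexing are our assumptions):

```python
import numpy as np

def project_paired(pi, mu, sigma, i, M):
    """Project a K-state M-dimensional GMM onto dimensions (i, M-1).

    pi:    (K,) state priors, shared with every paired GMM (Rule 2).
    mu:    (K, M) state means; keep components i and M-1 (Rule 3).
    sigma: (K, M, M) covariances; keep the 2x2 block over (i, M-1) (Rule 4).
    Uses 0-based indexing, so the target stream is dimension M-1.
    """
    idx = [i, M - 1]
    mu_p = mu[:, idx]                   # (K, 2) projected means
    sigma_p = sigma[:, idx][:, :, idx]  # (K, 2, 2) projected covariances
    return pi, mu_p, sigma_p
```

Applying this for every $i$ yields exactly the group $GMM'$ defined in the proof.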

% the GMM model generated 


\subsubsection{Examples}
\label{sec:gmm_corr:why_paired:example}
At first glance, the independence of the source streams looks like a strong, even unrealistic assumption, since it hardly holds in any real-life dataset. However, thanks to the GMM's adaptivity, we found that the paired GMMs can reach a very good approximation of the global GMM.

% Lemma1 According to , if we assume that all source streams are  to each other, we can safely train a group of two dimensional GMM models to capture the all the system states. 
%, however, in our experiments, we can achieve pretty close result by training only those marginal GMMs.

Fig.\ref{fig:exp:oraclegmm1dim} and Fig.\ref{fig:paired:gmm2dimcompare} show the difference between the global GMM and the paired one, using one-dimensional and two-dimensional marginal GMM models on 4-dimensional stock data.
In Fig.\ref{fig:exp:oraclegmm1dim}, the red line is the global GMM trained on all 4 dimensions and then marginalized onto a single dimension. The blue line is the GMM trained directly on that one dimension.


\begin{figure}
\begin{center}
\includegraphics[width=7cm]{./img/plot.1.dim.compare.4.dim.slice.vs.2.dim.eps}
\caption{One Dimensional GMM, directly trained  vs sliced  }
\label{fig:exp:oraclegmm1dim}
\end{center}
\end{figure}


\begin{figure}
\begin{center}
\includegraphics[width=7cm]{./img/mesh.compare.4.dim.slice.vs.2.dim.eps}
\caption{Two Dimensional GMM, directly trained  vs sliced  }
\label{fig:paired:gmm2dimcompare}
\end{center}
\end{figure}


\subsection{GMM Local Correlation Score}
\label{sec:our:gmmscore}
Once we have trained the GMMs over the historical data, we define the local correlation score as in Equation \ref{equ:our:localdef}, where $P(X_i)$ is the marginal distribution of $X_i$ in $P(X_i,Y)$.

\begin{align}
Corr(X_i, Y) &= P(Y  \mid X_i) = \frac{P(Y,X_i)}{P(X_i)} 
\label{equ:our:localdef}
\end{align}

Though it is easy to derive an analytical formula for $P(Y \mid X_i)$ when both $Y$ and $X_i$ are one-dimensional streams, it becomes difficult when we extend it to multiple dimensions. To keep the formula simple, we stay with the quotient form.
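With a paired GMM fitted on $(X_i, Y)$, the quotient in Equation \ref{equ:our:localdef} can be evaluated numerically; the marginal $P(X_i)$ of a GMM is itself a one-dimensional GMM with the same priors. A sketch (assuming a scikit-learn \texttt{GaussianMixture} with full covariances; the function name is ours):

```python
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

def local_corr_score(gmm, x_i, y):
    """Corr(X_i, Y) = P(x_i, y) / P(x_i) from a 2-D GMM on (X_i, Y).

    score_samples returns the log joint density; the marginal of X_i
    mixes the (0, 0) covariance entries with the same state priors.
    """
    log_joint = gmm.score_samples(np.array([[x_i, y]]))[0]
    marginal = sum(
        w * norm.pdf(x_i, loc=m[0], scale=np.sqrt(c[0, 0]))
        for w, m, c in zip(gmm.weights_, gmm.means_, gmm.covariances_))
    return np.exp(log_joint) / marginal
```

The score is a conditional density, not a bounded coefficient, so only its relative magnitude across source streams matters.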

\subsubsection{The advantage of single point correlation}
\label{sec:gmm_corr:advantage}
An immediate consequence of Equation \ref{equ:our:localdef} is that the local correlation score can be calculated on a point-by-point basis. This gives us a unique advantage when inspecting system status. For example, when we find that the SQL Response Time is too high at a time point, the local correlation score tells us EXACTLY which performance metric is the most likely reason at that very time point. This is detailed in Section \ref{sec:exp:oracle}.

On the contrary, all previous methods demand a time window to calculate the similarity. For example, the Pearson correlation requires at least two time points, and the SVD based method needs a vector to identify its eigenvector and then the cosine score.


\section{Line Correlation by GMM}
\label{sec:gmm_line_similarity}
Besides the unique merit of the single point correlation, the GMM based correlation can also be easily extended to a series of time points (a line). A line may contain much more information than a single data point, such as a sudden increase or a spike. Such line shapes can be recognized as a certain type of event. When an event happens to the target stream, understanding the correlated events from the source streams would be very helpful. To do this, we use the most widely used concept: the sliding window.

We first define a sliding window of length $w$ as $[t-w+1,t]$, in which the correlation is calculated. At each time point, the data points in the window are aligned to form a vector $X'_{it} = (X_{i(t-w+1)},X_{i(t-w+2)},...,X_{it})$. Then we have the new $w$-dimensional source stream $X'_i$ of length $N-w$:

\begin{center}
\begin{align}
X'_i =(X'_{i(w+1)},X'_{i(w+2)},...,X'_{iN})
\label{equ:line:xiwindow}
\end{align}
\end{center}

Similarly, the target stream $Y$ is transformed into a $(N-w) \times w$ matrix $Y'$. Then we apply the same procedure as in Section \ref{sec:our} to build GMMs, with the exception that the 2-dimensional paired $(X_i,Y)$ GMMs now become $2w$-dimensional paired $(X'_i,Y')$ GMMs. The local correlation is then calculated exactly as in Equation \ref{equ:our:localdef}, with $X_i,Y$ replaced by $X'_i,Y'$.
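The windowing step can be sketched as follows (a minimal NumPy version; we keep every full window, one row per $t \geq w$, which differs from the indexing above only in whether the first window is kept):

```python
import numpy as np

def to_windows(x, w):
    """Stack sliding windows X'_t = (x[t-w+1], ..., x[t]) as rows.

    One row per full window; concatenating a source row with the
    matching target row yields the 2w-dimensional paired GMM input.
    """
    x = np.asarray(x)
    return np.array([x[t - w + 1:t + 1] for t in range(w - 1, len(x))])
```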


Under this setting, we are actually looking for the most correlated streams in the sliding window. This can help identify a pattern over a period. In Section \ref{sec:exp:oracle:line}, we explain the patterns identified by line correlation.

\subsection{Embedding Other Transformations}
\label{sec:embedingwavelet}
We can also leverage most state-of-the-art data transformation techniques as a pre-processing step before feeding the line into the GMM core. As defined in Section \ref{sec:gmm_line_similarity}, the input to the GMM model can be a line $X'_t$. We can use the following synopsis structures \cite{duanapprox11}:
\begin{enumerate}
\item Haar Wavelet: the first $M$ coefficients at low resolution
\item DCT: the coefficients of the lowest $M$ frequencies
\item PLA: the $2\times M$ coefficient pairs $(a_i,b_i)$ after partitioning the original stream into $M$ segments.
\end{enumerate}

For all the above synopsis candidates, we have $M \ll w$. All of them can be embedded into our framework; here we use the wavelet as an example to illustrate how it works.
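For illustration, the Haar synopsis can be computed as below (a sketch assuming the window length is a power of two; coefficients are stored coarsest-first so that truncating to the first $M$ keeps the low-resolution ones):

```python
import numpy as np

def haar_synopsis(x, M):
    """Keep the first M low-resolution Haar coefficients of x.

    Averages and differences are computed level by level; the overall
    average comes first, followed by coarse-to-fine detail coefficients.
    """
    x = np.asarray(x, dtype=float)
    coeffs = []
    while len(x) > 1:
        avg = (x[0::2] + x[1::2]) / 2.0
        diff = (x[0::2] - x[1::2]) / 2.0
        coeffs = list(diff) + coeffs   # finer details pushed behind coarser
        x = avg
    coeffs = [x[0]] + coeffs           # overall average first
    return np.array(coeffs[:M])
```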

We summarize the whole process with wavelet pre-processing in Fig.\ref{fig:line:waveletprocess}. At the beginning, in step 1, we have the source and target streams $X_i,Y$ for training. In step 2, we first convert $X_i,Y$ into series of vectors $X'_i, Y'$ according to Equation \ref{equ:line:xiwindow}, and then apply the wavelet transformation to reach $W_i, W_y$. In step 3, the training process from Section \ref{sec:our:GMM} produces the paired GMM models, which capture the system states. Then, at any time point $t$, to understand how each source stream is correlated with the target, we run the same wavelet pre-processing over the sliding window $w$ and get the vectors $W_{it},W_{yt}$. Finally, in step 6, we calculate the paired local correlation score from the GMM model and the data at time point $t$.

\begin{figure}
\begin{center}
\includegraphics[width=7cm]{./img/wavelet_process.eps}
\caption{The process of GMM training and local correlation calculation with Wavelet pre-processing. }
\label{fig:line:waveletprocess}
\end{center}
\end{figure}



\subsection{Lagged Correlation}
\label{sec:line:lag}
%Also if there is a time sequence, like in above, what if $C_3$ is actually caused by $Y_3$, which is always 1 period ahead of $C_3$? Maybe for this one, we can take preceding events as another event. This may not grow the number of events to big, since in most of cases, the preceding event shall last for a while, or disappear. Then only 1 or 2 new events (Preceding Event) will be generated and taken into account. The final weight of each Stream shall be sum of Current Event Weight and Preceding Event Weight. Seems ok here...

It is natural for a system to have lagged signals. One example from financial time series is that a decrease of the interest rate typically precedes an increase of house sales. Also, a signal may pass through different sensors at different locations, leading to a series of lagged events in different sensors. For example, when a vehicle passes a bridge, the pressure sensors at different locations generate a series of lagged events according to the vehicle's position.

Sakurai et al. \cite{braid} proposed a method to detect a global correlation lag. However, when considering the lags on the pressure sensors of a bridge, we find that there is no single global lag, but rather a few dynamic ones. For example, a long vehicle most likely passes the bridge at a speed of 80 km/h, while a sedan typically passes at 120 km/h. While analyzing the data from those sensors, it would be desirable to correlate the events at several different lag values.

Recall from Section \ref{sec:gmm_line_similarity} that the line correlation is defined on the sliding window, where each paired GMM is defined on a $2w$-dimensional space: 
\begin{equation}
(X'_i,Y') = (X'_{i1},X'_{i2},...,X'_{iw}, Y'_{1},Y'_{2},...,Y'_{w})
\end{equation}

Now suppose that in $X_i$ there is an event of length $p$, and it is $l$ time points ahead of the corresponding event in $Y$. 
Though it is not required, for the sake of simplicity, we assume that the events of $X_i$ and $Y$ have the same length $p$. 

Assuming that $w = p+l$ and that the time $t$ is at the end of the $Y$ event, we use $X'_a=(X'_{i(t-l-p+1)}, X'_{i(t-l-p+2)},..., X'_{i(t-l)})$ to represent the event inside $X'_i$, and let $X'_b$ be the rest of the time points in $X'_i$. Similarly, we can partition $Y'$ into $Y'_a=(Y'_{t-p+1}, Y'_{t-p+2},..., Y'_{t})$ and $Y'_b$. If $w > p+l$, we simply allocate all the remaining dimensions to $X'_b$ and $Y'_b$.

The $2p$ dimensions from $X'_a$ and $Y'_a$ form a subspace of the original $2w$-dimensional space. Once we have trained the GMM models over $(X'_i,Y')$, similar to the proof of Lemma \ref{lemma1}, if we assume that $(X'_a, Y'_a)$ and $(X'_b, Y'_b)$ are independent, we can slice the GMM models as:
\begin{equation}
GMM(X'_i,Y') = GMM(X'_a,Y'_a) \cdot GMM(X'_b,Y'_b)
\label{equ:lag:gmmslice}
\end{equation}

From Equation \ref{equ:lag:gmmslice}, we can see that the GMM model on the full sliding window, $GMM(X'_i,Y')$, completely encapsulates the event correlation information $GMM(X'_a,Y'_a)$. Moreover, if we assume that $X'_b$ and $Y'_b$ are completely random variables, then $GMM(X'_i,Y')$ is solely determined by the correlated event $(X'_a,Y'_a)$. 


From the above discussion, we can conclude that if the sliding window is larger than the lag plus the event length (i.e. $w>l+p$), the GMM line correlation can identify the correlated lagged events. To verify this result, we designed a small test. As shown in Fig.\ref{fig:background:lagcorrelation}, $A$ and $B$ are our source streams, and $Y$ is the target. In all three streams, consecutive values higher than 0.8 are recognized as an event. In the time period 1 to 100, $A$ has five events, each leading to a similar event in $Y$ with a fixed lag of 8 time points, while $B$ is completely random. Then, in the time period 100 to 200, $B$ has five events with a lag of 5 time points, while $A$ is completely random. 
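Synthetic data of this shape can be generated along the following lines (a sketch; the exact event positions, amplitudes, and noise distribution of the figure are illustrative assumptions):

```python
import numpy as np

def cooked_lagged_streams(seed=0):
    """Two source streams and a target with lagged events.

    A's events lead Y by 8 points during t in [0, 100);
    B's events lead Y by 5 points during t in [100, 200).
    Background values stay below the 0.8 event threshold.
    """
    rng = np.random.default_rng(seed)
    N = 200
    A = rng.uniform(0, 0.5, N)
    B = rng.uniform(0, 0.5, N)
    Y = rng.uniform(0, 0.5, N)
    for start in range(5, 95, 18):          # five events in the first half
        A[start:start + 3] = 0.9
        Y[start + 8:start + 11] = 0.9       # lag of 8 behind A
    for start in range(105, 195, 18):       # five events in the second half
        B[start:start + 3] = 0.9
        Y[start + 5:start + 8] = 0.9        # lag of 5 behind B
    return A, B, Y
```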



\begin{figure}
\begin{center}
\includegraphics[width=7cm]{./img/gmm.lag.cooked.data.eps}
\caption{Cooked lagged data. }
\label{fig:background:lagcorrelation}
\end{center}
\end{figure}

We then apply our method to calculate the local correlation score for each event recognized in the $Y$ stream, using a fixed sliding window of length 20. Our method successfully picked up the correlated source streams, as shown in Table \ref{tab:LaggedEventDetection}.

\begin{table}
	\centering
		\begin{tabular}{| c| c |c |}
		\hline
		Time Point & $P(Y \mid A)$ &  $P(Y \mid B)$ \\
		\hline
		39 & 590,921,877 & 390,504 \\
		\hline
		39 & 7,321,713,757 & 41,565,868 \\
		\hline
		... & ... & ... \\
		\hline
		399 & 68,018 & 25,363,528 \\
    \hline		
		\end{tabular}
	\caption{Lagged event detection}
	\label{tab:LaggedEventDetection}
\end{table}

\subsubsection{Sliding Window Length} We have to keep the sliding window length $w$ relatively small; otherwise, the curse of dimensionality will show up again. To identify longer lags in the streams, we can simply use multiple sliding windows. Each sliding window should be longer than the expected event length, i.e. $w > p$. In this case, for a single $X_i, Y$ pair, our system produces multiple GMMs. When we calculate the local correlation scores for those sliding windows, the one holding the real lagged event will get a higher score.


\section{Experiment}
\label{sec:experiment}

We tested our method on two real-life data sets. One is the China Security Index (CSI) 300, which contains the daily closing prices of 300 stocks listed on the Shanghai and Shenzhen stock markets over 5 years. The other data set is the performance metrics acquired from an Oracle database running the TPCC benchmark. In our testing, our method picked up the most correlated streams and identified the reasons for the high SQL Response Time. Besides, from the trained GMM models, we observed some patterns which can only be identified using our method.

The hardware configuration for all the experiments is an Intel Core(TM)2 CPU U7600 with 2GB memory, running Windows XP. All testing programs are coded and executed in Matlab 7.5. In our experiments, we also compare our results against two other methods: Pearson correlation and the method of Spiros et al. \cite{local_c_p}.


\subsection{Deciphering Oracle Database Performance Statistics}
\label{sec:exp:oracle}

%\subsubsection{Single Time Point Correlation}
%\label{sec:exp:oracle:point}

%Oracle data is here: D:\qduan\Fudan\Research\stream_inspection\data\Oracle_awr\awr_data\20111215-dell\oracle.awr.stream.data.txt
\noindent \textbf{Setup:}
The first data set is the system metrics gathered during a TPCC benchmark test \cite{tpcc} on an Oracle database. To capture both the busy and idle status of the Oracle database, the TPCC benchmark was executed for only the first half hour of every hour. The executions lasted for about 20 hours, yielding 1150 time points. As explained in Section \ref{sec:intro:motivation}, the Oracle database provides hundreds of metrics regarding its operating status. Some of those metrics are generated every second, while others are generated every minute. We aggregated every metric to the minute level to align them. Some of the metrics contain very few values. For example, all values of `Physical Block Writes (Files-Long)' are null, and `Soft Parse Ratio' has fewer than 30 values out of 1149 time points. To obtain a meaningful model, we removed all those sparse metrics. In the end, we extracted 210 metrics with sufficient values. 

 

%To be able to measure the correctness, we introduced two disturbances into the testing: copying a large file (IO) and calculating high precision PI (CPU). With the exact time and the type of disturbance, we want to verify if our method can pick up the right relevant metrics to those events.



Among those 210 metrics, `SQL Service Response Time' is the time taken by the database to answer queries. It is a very important performance indicator of the customer experience. Therefore, we choose it as the target stream to monitor. Whenever we see a spike of high query response time, we investigate all the other metrics to identify the most related ones and understand the reason for the problem. 

To do this, we trained 209 paired GMM models with `SQL Service Response Time' as the target. Then, given a time point, we can calculate the relevance scores for each metric using three different methods: GMM, Pearson, and Spiros. 

\noindent \textbf{Goal:} For those exceptionally high values of `SQL Service Response Time', we want to test whether our new method can successfully identify the most correlated streams at the given time point. We also want to check whether there are abnormal local patterns among the different streams.



\noindent \textbf{Result:} 
%We will then analyze each of those 4 spikes and hope to find closely related metrics to understand the reasons. 
Fig.\ref{fig:exp:oracle_top_1_metric_4_points} shows the full series of 5 Oracle metrics. The line on the top is the `SQL Service Response Time'; throughout this experiment, we simply call it the target metric. We can see four extremely large spikes in it. Since each spike represents an extremely slow response by the database system, we calculated the relevance scores using our method at each of those 4 time points. We then plotted the metrics with the highest score at each time point below the target metric. To compare those source metrics against the target easily, we draw four vertical dotted lines at those time points to align all five metrics. 

For each of those four time points, we also draw the trained GMM models in Fig.\ref{fig:gmmdraw:4_metrics} for the same identified metrics, for a better understanding of how those scores are derived by our method. In Fig.\ref{fig:gmmdraw:4_metrics}, each blue dot represents one time point, with the target metric on the X axis and the source metric on the Y axis. Each red ellipse represents one Gaussian distribution around a red cross center. The yellow circle is the time point at which this source metric is identified as the top 1 reason.

Now let's take a close look at those 4 metrics against `SQL Service Response Time'. The second line shows `Average Active Sessions', which is a direct indicator of the workload. Its value is mostly stable around 2 at busy times and zero at idle times. When the system is busy, the `SQL Service Response Time' is relatively stable around 0.02 seconds. The first 2 spikes seem highly correlated with the `SQL Service Response Time'. Our method confirmed that, for the second spike of `SQL Service Response Time', `Average Active Sessions' is the No.1 related metric. The actual reason behind this situation is that at time point 63 we started a few more testing clients, so the workload was higher than usual. 

In the third line of Fig.\ref{fig:exp:oracle_top_1_metric_4_points}, we can see that when the system workload is high, `Consistent Read Gets Per Txn' is stable around 62. When the system is idle, this metric is more volatile. According to basic database query processing knowledge, a high value of this metric means more data access per query, so the query response time should be longer. As we can see from Fig.\ref{fig:gmmdraw:4_metrics}(d), the two metrics `SQL Service Response Time' and `Consistent Read Gets Per Txn' do have a positive correlation. At the 4th time point (733, the 4th dotted line), `Consistent Read Gets Per Txn' is ranked No.2 and `Database Time Per Sec' is ranked No.1. After investigation, we believe the problem was caused by a single large background transaction which read a large amount of data.


In a similar way, we can compare the fourth (`Redo Generated Per Txn') and fifth (`Total Table Scans Per Sec') lines against the `SQL Service Response Time'. We can see that `Redo Generated Per Txn' is the No.1 reason for the third spike and `Total Table Scans Per Sec' is the reason for the first one. 

In this single-time-point test, we cannot compare our result against Pearson or Spiros, since both require a line of multiple points.


\begin{figure}
\begin{center}
\includegraphics[width=9cm]{./img/top_1_4_metrics.eps}  %
\caption{5 Oracle Metrics.}
\label{fig:exp:oracle_top_1_metric_4_points}
\end{center}
\end{figure}


\begin{figure} \begin{center}
  \subfigure{
    %\label{fig:synopsis:execution} %% label for first subfigure
    %\includegraphics[width=4cm]{./img/oracle_gmm_sql_vs_execution.eps}}
    \includegraphics[width=4cm]{./img/metric_gmm_vs_1.eps}}
  \subfigure {
    % \label{fig:synopsis:cachehit} %% label for second subfigure
    %\includegraphics[width=4cm]{./img/oracle_gmm_sql_vs_cache_hit.eps}}
    \includegraphics[width=4cm]{./img/metric_gmm_vs_2.eps}}
  \subfigure {
    %\label{fig:synopsis:physicalread} %% label for second subfigure
    %\includegraphics[width=4cm]{./img/oracle_gmm_sql_vs_physical_read.eps}}
    \includegraphics[width=4cm]{./img/metric_gmm_vs_3.eps}}
  \subfigure {
    %\label{fig:synopsis:dbwrcheckpoint} %% label for second subfigure
    %\includegraphics[width=4cm]{./img/oracle_gmm_sql_vs_dbwr_checkpoint.eps}}
    \includegraphics[width=4cm]{./img/metric_gmm_vs_4.eps}}
  \end{center}
  \caption{The GMM models trained over four different source metrics against the target SQL Response Time}
  \label{fig:gmmdraw:4_metrics} %% label for entire figure
\end{figure}





\subsection{Line Correlation of Oracle Metrics}
\label{sec:exp:oracle:line}

\noindent \textbf{Setup:} 
Using the same dataset extracted in Section \ref{sec:exp:oracle}, we can easily apply the line similarity introduced in Section \ref{sec:gmm_line_similarity} and identify correlations over a series of consecutive points. We set the sliding window length to 5. When training the GMM models, we also set the number of states to 5. 

Besides the direct line based GMM, we also applied the wavelet transformation as a pre-processing step according to Section \ref{sec:embedingwavelet}. With the wavelet transformation, we can extend the sliding window much further. In this experiment, we used a sliding window of 16 points and kept the first 4 coefficients. Fig.\ref{fig:exp:oraclewaveletpattern} shows one of the resulting wavelet patterns.

\noindent \textbf{Goal:}
In this experiment, we would like to:
\begin{enumerate}
	\item test if our method can identify the most correlated source streams
	\item look for interesting patterns to describe the system behavior
	\item verify the effectiveness of the Wavelet as the pre-processing step to the GMM model.
\end{enumerate}

 
\noindent \textbf{Result:}

As in Fig.\ref{fig:exp:oracle_top_1_metric_4_points}, using the line based correlation, we are still interested in identifying the reasons for the 4 spikes of `SQL Response Time'. Table \ref{tab:simplified:linecorr} compares the line and the wavelet results with the single point correlation. In Table \ref{tab:simplified:linecorr}, the second column gives the top related source streams identified by the direct line GMM correlation. For the top source streams identified by the single point correlation in Section \ref{sec:exp:oracle}, the third column `Rank1' gives their rank under the line GMM approach at each time point. Likewise, the fourth column contains the top source streams by the wavelet GMM approach, and the fifth column `Rank2' gives the rank of the top single point GMM source streams according to the wavelet GMM method.

In Table \ref{tab:simplified:linecorr}, we can see that the line, the wavelet and the single point correlation can all identify the reasons and rank them above the rest. However, when we verify the results against our DBA knowledge, we believe the single point correlation is more robust and accurate. 


\begin{table}
\begin{center}
\begin{tabular}{| c | c | c | c |c |}
\hline
Time & Top Correlated by Line & Rank1 & Top Correlated by Wavelet& Rank2  \\
\hline
55 & Total Table Scans Per Sec & 1 & Total Table Scans Per Sec & 1 \\
\hline
63 & CPU Wait Time & 6 &  DB Time Per User Call & 32 \\
\hline
667 & Physical Writes Per Txn & 32  &  Redo Writes Per Sec & 17 \\
\hline
734 & Database Time per Sec & 13  &  Database Time per Sec  & 25 \\
\hline
\end{tabular}
\caption{The correlated streams by line vs by single point}
\label{tab:simplified:linecorr}
\end{center}
\end{table}



\begin{figure}
\begin{center}
\includegraphics[width=7cm]{./img/oracle_gmm_centers.eps}
\caption{The Pearson coefficient of the center lines for the 5 GMM states.}
\label{fig:exp:oraclegmmlinecenters}
\end{center}
\end{figure}

Though the line based correlations lose to the single point one in accuracy, they revealed some interesting patterns in our testing system. First, consider the naive line GMMs. We plotted the centers of the five GMM states for the stream pair `Executions Per Sec' vs `SQL Response Time' in Fig.\ref{fig:exp:oraclegmmlinecenters}, where states 4 and 5 correspond to the system under a heavy workload of 5000 executions per second. In the second state, while `Executions Per Sec' gradually drops from the peak to near zero, the `SQL Response Time' drops at the beginning but soon rises again. This looked weird to us, but we soon found the reason: it is caused by the background processes of the Oracle database. An Oracle database keeps about ten background processes for internal housekeeping jobs, like statistics gathering, and the metric `SQL Response Time' includes the queries from those background processes. In our second state, as the user workload disappears, the background processes start to contribute most of the `SQL Response Time'.

We also calculated the Pearson score of those centers. The aforementioned pattern has a very low Pearson value; therefore, it cannot be found by Pearson correlation.

When we used the wavelet transformation to check the system behavior at the end of each test run, e.g. points 126, 247, 431, 664, etc., we found one source stream consistently ranked in the top 20: `Total Wait Counts'. It turns out that at the end of each test run, there is always an abnormal spike. This behavior was not expected, and we have not yet come up with a good explanation.

\begin{figure}
\begin{center}
\includegraphics[width=6cm]{./img/wavelet_correlation_total_wait_count_point_431.eps}
\caption{The wavelet pattern of `Total Wait Counts' at point 431}
\label{fig:exp:oraclewaveletpattern}
\end{center}
\end{figure}

\begin{figure}
\begin{center}
\includegraphics[width=7cm] {./img/sql_vs_total_wait_count_point_431.eps}
\caption{`SQL Response Time' vs `Total Wait Counts' at point 431}
\label{fig:exp:oracletotalwait}
\end{center}
\end{figure}

\subsection{Finding Locally correlated Stocks}
\label{sec:exp:stock}
% Data: D:\qduan\Fudan\Research\stream_inspection\data\stock.csi300\data.1\tmp_price.stock.pct.1.features.txt
% name: D:\qduan\Fudan\Research\stream_inspection\data\stock.csi300\data.1\csi_300_english_names.txt

\noindent \textbf{Setup:} 
For all 300 stocks of the CSI 300 index, we gathered the daily stock price history from Jan 4th, 2007 to Aug 29th, 2011, covering 1186 trading days. If a stock is suspended on a given day, we fill in the price from its last trading day.

Since stock prices tend to grow in the long term, to make the series stationary, the daily price is converted to the day-over-day change ratio. 
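The conversion is the standard one-day return (a minimal sketch; \texttt{prices} is one stock's daily closing price series):

```python
import numpy as np

def day_over_day_ratio(prices):
    """Convert a price series into day-over-day change ratios.

    ratio[t] = (price[t] - price[t-1]) / price[t-1]; the first day
    has no predecessor, so the output is one element shorter.
    """
    prices = np.asarray(prices, dtype=float)
    return np.diff(prices) / prices[:-1]
```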


\noindent \textbf{Goal:}
When a stock rises or falls sharply, we analyze whether there are other locally correlated stocks. This can help financial analysts understand the market situation much more easily.

\noindent \textbf{Result:}

\begin{figure}
\begin{center}
\includegraphics[width=7cm]{./img/cofco.cms.jrj.price.history.eps}
\caption{The price history of 3 stocks.}
\label{fig:exp:stockprice}
\end{center}
\end{figure}

We focused on investigating the price variation of the stock COFCO PROPERTY (000031.SZ). From its name, we can easily tell that it is a real estate company, and its daily price variation should be highly correlated with those of other real estate companies. Indeed, when using the GMM model to look for the most locally correlated stocks for COFCO's high variations, at most time points the top correlated stocks included real estate companies like 000402 Financial Street Holding Co Ltd (FSH) and 600376 Beijing Tianhong Baoye Real Estate Co Ltd (BTBRE). 

Fig.\ref{fig:exp:stockprice} shows the prices of 3 stocks from point 860 (20100428) to 1040 (20110124). At time point 990 (20101112), the overall stock market dropped, and COFCO dropped more than most other real estate companies. Our program found that it was closely related to two brokerage companies, including `China Merchant Securities'. 

When we looked into COFCO, we found that it held about 5\% of the shares of `China Merchant Securities'. On that day, the brokerage industry dropped more than the real estate one, and because of this, COFCO lost more than other ordinary real estate companies.

We tried the other methods, but none of them identified this relationship. Table \ref{tab:stock:related} shows the top 10 related stocks found by the 3 methods.


\begin{table}
\begin{center}
\begin{tabular}{| c | p{3.4cm} | p{3.4cm} | p{3.4cm} |}
\hline
Rank & GMM Based & Pearson & Spiros\\
\hline
1&601818 China Everbright Bank                       & 601668 China State Construction Engineering Co      & 600153 Xiamen C\&D Inc                                  \\
\hline
2&601377 Industrial Securities                       & 600428 COSCO Shipping Co Ltd                        & 600320 Shanghai Zhenhua Port Machinery Co Ltd          \\
\hline
3&601018 Ningbo Port Co                              & 600153 Xiamen C\&D Inc                               & 600900 China Yangtze Power Co Ltd                      \\
\hline
4&601688 Huatai Securities Co Ltd                    & 600000 Shanghai Pudong Development Bank      & 600839 Sichuan Changhong Electric Co Ltd               \\
\hline
5&600999 China Merchants Securities Co Ltd           & 600026 China Shipping Development         & 600395 Guizhou Panjiang Refined Coal Co Ltd            \\
\hline
6&601618 Metallurgical Corporation of China Co Ltd    & 600320 Shanghai Zhenhua Port Machinery      & 002202 Xinjiang Goldwind    \\
\hline
7&002493 Rongsheng Petro Chemical                    & 000402 Financial Street Holding            & 600690 Qingdao Haier Co Ltd                            \\
\hline
8&601788 Everbright Securities Co Ltd                & 600500 Sinochem International Corp                  & 600528 China Railway Erju Co Ltd                       \\
\hline
9&601179 China XD Electric Co Ltd                    & 600266 Beijing Urban Construction Investment        & 601088 China Shenhua Energy Co Ltd                     \\
\hline
10&601668 China State Construction Engineering Co     & 601328 Bank of Communications Co Ltd                & 000002 China Vanke Co Ltd                              \\
\hline
\end{tabular}
\caption{{Top locally correlated stocks to COFCO at time point 990 (12 Nov 2010)} }
\label{tab:stock:related}
\end{center}
\end{table}





\subsection{Performance evaluation}
\label{sec:exp:performance}
We evaluate the runtime performance of our method from two aspects on the stock data set: (1) the training time as the number of GMM states grows, and (2) the training time as the stream length grows from 100 to 1000 points.
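Measuring training time against stream length can be done with a small timing harness; a sketch, where `dummy_train` is a stand-in for the actual GMM/EM training routine (its linear cost only mimics how training scales with stream length):

```python
import time

def time_training(train_fn, stream_lengths):
    """Measure wall-clock training time for each stream length.

    `train_fn(n)` is assumed to train the model on a stream of
    length n; here we pass a placeholder instead of real EM training.
    """
    timings = {}
    for n in stream_lengths:
        start = time.perf_counter()
        train_fn(n)
        timings[n] = time.perf_counter() - start
    return timings

# Stand-in whose cost grows linearly with the stream length.
dummy_train = lambda n: sum(i * i for i in range(n * 1000))
timings = time_training(dummy_train, [100, 500, 1000])
```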

\subsection{Implementation Consideration}
\label{sec:exp:implement}
As mentioned in Section \ref{sec:our:GMM}, the number of states $K$ is specified manually. However, in most cases this is not an easy task. In our observation, as long as the EM process converges, a larger number of states fits the data better. Therefore, in our experiments, we always start with a relatively large number of states. If the algorithm fails to converge, we gradually reduce the number of states. The maximum number of EM iterations is set to 800. This strategy worked well in our experiments.
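The selection strategy above can be sketched as a simple loop; `fit_gmm` is a hypothetical callable standing in for the EM training routine (with its 800-iteration cap), assumed to report whether EM converged:

```python
def choose_num_states(fit_gmm, k_start, k_min=1):
    """Start from a relatively large K and reduce it until EM converges.

    `fit_gmm(k)` is assumed to run EM with k states and return
    (model, converged); the first converged model is kept, so the
    largest workable K wins.
    """
    for k in range(k_start, k_min - 1, -1):
        model, converged = fit_gmm(k)
        if converged:
            return model, k
    raise RuntimeError("EM did not converge for any K >= %d" % k_min)

# Toy fit function: pretend EM only converges for K <= 5 states.
toy_fit = lambda k: (("gmm", k), k <= 5)
model, k = choose_num_states(toy_fit, k_start=8)
# k == 5: the largest number of states for which EM converged.
```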


\bibliographystyle{splncs03}
\bibliography{stream_inspect}



\end{document}
