%% ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
%% Kapitel 3:
%% ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++



\chapter{Non-parametric change detection} \label{chap:Non-parametric change detection}

Many statistical problems require change points to be identified in a sequence of data. The main idea behind statistical approaches is that the sequence of observations is drawn from some known distributions. In most practical situations, the parameters of the underlying distributions are unknown, and it is also possible that no information about the underlying distribution is available at all. Many statistical approaches exist for time-series change point detection.

In such situations, non-parametric change detection is beneficial because it does not require prior information about the underlying distribution \cite{Fredrik_2000}. Non-parametric change detection can be performed both in batch processing and sequentially.

The non-parametric change detection approaches presented in this chapter cover both sequential and batch processing; the approaches presented here were previously proposed for sequential change detection. Non-parametric change detection in combination with adaptive filters was proposed in \cite{Fredrik_2000}. Approaches like CUSUM were originally formulated in statistical terms: CUSUM was first designed to detect change points in industrial process control and uses the likelihood ratio as its test statistic. It is well suited for detecting small and gradual changes in a time series.

This chapter gives a brief introduction to the non-parametric change detection approaches available with adaptive filters, focusing on the detection of changes in the mean and variance level of the signal. Both sequential and offline approaches are presented, and problems related to detection and implementation are also discussed. For the LMS received signal, the effectiveness of already available approaches is checked for mean and multipath changes. Afterwards, we modify an existing approach for change detection on the LMS received signal. Throughout this chapter, adaptive filters are used for tracking and estimation of the LMS signal.

In the next section, we introduce change detection algorithms based on the single filter approach.

\section{Single Filter Change Detection}

In sequential single-filter algorithms, a change is detected when the deterministic part of the signal changes. After the first change is detected, the procedure restarts to detect the next change in the signal. In batch processing, change detection algorithms try to detect all change points in the available dataset.

Online change detection is used in situations where an abrupt change must be detected as soon as it occurs. Online change detection techniques can further be classified into two different modes of operation.

The single filter change detection algorithm is based on hypothesis testing. It is intended to detect changes in the mean, the variance, or both at the same time. Single filter algorithms can be used with the signal model shown in the next section, after which we provide a detailed description of the algorithm.


\subsection{Signal Model}


Based on the general signal model presented in the previous chapter, we now present the signal model for the single filter approach.

	The received signal at the mobile terminal contains multiplicative fading caused by $a_t$ or an additive specular reflected component $\Gamma_t$. These changes are introduced in the channel $h(t,\tau)$. We are interested in detecting changes in the signal component, not in the noise part, because we assume that the received signal at the mobile terminal contains additive white Gaussian noise with unit variance.
	
As previously noted, the AR model has proved beneficial for change detection applications. The received signal can be modelled as


	\begin{equation}\label{eq:wireless channel signal model}
	\begin{aligned}
	Y = \bm{H} S + W
	\end{aligned}
	\end{equation}
	
Here, $S$ is the transmitted signal vector, $W$ is circularly symmetric zero-mean complex additive white Gaussian noise with unit variance, and $H$ is the channel matrix for the received signal. The LOS condition can be modelled with Rician fading and the non-LOS condition with Rayleigh fading, so for the different types of environment, changes can occur only in $H$.

$H$ is defined differently for Rayleigh and Rician fading channels. For multipath (multiplicative) fading, the channel vector $H$ is defined as $H \times A$, and for an additive (mean) change, $H$ is defined as $H + \Gamma$ \cite{L_1991}.

For simplicity, we define the received signal as

	\begin{equation}\label{eq:Single Filter signal model}
	\begin{aligned}
	Y = \Theta + W
	\end{aligned}
	\end{equation}
	
In the above equation, $\Theta$ denotes the signal component, which undergoes the change. The change is described by the change time $k$ and the change magnitude $V(k) = \Theta(k+1) - \Theta(k)$.
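As a quick illustration of this signal model, the following Python sketch generates a sequence $Y = \Theta + W$ with a single mean change; the sample count, change time and change magnitude are illustrative assumptions, not values from the measurement data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example values: n samples, change at time k.
n, k = 400, 200
theta = np.where(np.arange(n) <= k, 1.0, 2.5)  # Theta jumps at k + 1
w = rng.standard_normal(n)                     # unit-variance white Gaussian noise
y = theta + w                                  # received signal Y = Theta + W

# Change magnitude V(k) = Theta(k+1) - Theta(k)
v = theta[k + 1] - theta[k]
print(v)  # 1.5
```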


\subsection{Steps of Single Filter change detection}

Single filter change detection basically consists of two steps:

\begin{itemize}
\item \emph{Estimation}, i.e.\ generation of \emph{residuals}
\item \emph{Change detection}, i.e.\ design of a \emph{decision rule}
\end{itemize}


\emph{Estimation} is achieved by determining $\hat{\theta}_t$ from the received measurement sequence $y_t$, and \emph{change detection} is achieved by finding abrupt or gradual changes in the estimated sequence.

A residual can be regarded as an artificial measurement that reflects the changes of interest: ideally it is close to zero before a change and nonzero after it. The decision rule decides whether a detected change is significant. These two steps are captured by the modified operational change detection block diagram shown below \cite{Fredrik_2000}.

\tikzstyle{block} = [draw, fill=blue!20, rectangle, text width=5em, text centered, minimum height=3em, minimum width=6em]
\tikzstyle{sum} = [draw, fill=blue!20, circle, node distance=1cm]
\tikzstyle{input} = [coordinate]
\tikzstyle{output} = [coordinate]
\tikzstyle{pinstyle} = [pin edge={to-,thick,black}]

\begin{figure}[htbp]
\begin{subfigure}[b]{0.8\textwidth}
\centering
% The block diagram code is probably more verbose than necessary
\begin{tikzpicture}[auto, node distance=3cm,>=latex']
    % We start by placing the blocks
    \node [input, name=input] {};
    \node [block, right of=input] (Filter) {Filter};
    \node [block, right of=Filter, node distance=4cm] (Distance Measure) {Distance Measure};  
    \draw [->] (Filter) -- node {$\hat{\theta}_t, \varepsilon_t$} (Distance Measure);
    \node [block, right of=Distance Measure, node distance=4cm] (Stopping Rule) {Stopping Rule};
    \draw [->] (Distance Measure) -- node {$S_t$} (Stopping Rule);
    \node [output, right of=Stopping Rule] (output) {}; 
		\draw [draw,->] (input) -- node {$y_t$} (Filter);
		\draw [->] (Stopping Rule) -- node [name=y] {Alarm $\hat{k}$}(output);
		
    
\end{tikzpicture}
    
\caption{Non-Parametric change detection based on Adaptive filter}
\label{fig:Non-Parametric change detection based on Adaptive filter}
\end{subfigure}\\ \\


\begin{subfigure}[b]{0.8\textwidth}
\centering

\begin{tikzpicture}[auto, node distance=3cm,>=latex']
    % We start by placing the blocks
    \node [input, name=input] {};
    \node [block, right of=input] (Averaging) {Averaging};
    \node [block, right of=Averaging, node distance=4cm] (Threshold) {Threshold};
    \draw [->] (Averaging) -- node {$g_t$} (Threshold);
    \node [output, right of=Threshold] (output) {};
    \draw [draw,->] (input) -- node {$S_t$} (Averaging);
		\draw [->] (Threshold) -- node [name=y] {Alarm $\hat{k}$}(output);
\end{tikzpicture}
    
\caption{Stopping Rule block diagrams}
\label{fig:Stopping Rule block diagrams}
\end{subfigure}


\caption{Steps of Non-Parametric change detection based on Adaptive filter}
\label{fig: Steps of Non-Parametric change detection based on Adaptive filter}
\end{figure}
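The averaging-plus-threshold stopping rule in Figure~\ref{fig:Stopping Rule block diagrams} can be sketched as follows; the window length, threshold and the synthetic distance-measure sequence are illustrative assumptions, not values from the text.

```python
import numpy as np

def stopping_rule(s, window=20, threshold=1.0):
    """Average the distance measure s_t over a sliding window to get g_t,
    then raise an alarm the first time g_t crosses the threshold.
    window and threshold are assumed design parameters."""
    g = np.convolve(s, np.ones(window) / window, mode="valid")  # averaged g_t
    idx = np.flatnonzero(g > threshold)
    return None if idx.size == 0 else int(idx[0]) + window - 1  # alarm time k_hat

# Distance measure that is ~0 before the change and ~2 after it
rng = np.random.default_rng(1)
s = np.concatenate([rng.normal(0.0, 0.3, 150), rng.normal(2.0, 0.3, 150)])
print(stopping_rule(s))  # alarm shortly after sample 150
```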




\subsection{Residual Generation}

The residual is a fundamental part of a diagnosis system. A residual is a time-varying signal that is used for change detection \cite{Fredrik_2000}. Ideally, the residual would be zero before a change and react immediately when a change occurs. Since measurement noise and disturbances are present, the exact value of the residual cannot be predicted.

In a linear filter, if the model is correct and there is no change in the system, the residual is so-called white noise. After a change, the mean or variance (or both) of the residual changes, so the residual becomes larger than in the no-change condition.

A residual generator is a filter that processes the measured received signal to produce the residual for the change detector. A linear residual generator is a proper filter that takes the measured and observed signals as input and produces an error signal.

A main problem in designing the residual generator is achieving disturbance decoupling, i.e.\ the residual signal should not be influenced by unknown inputs or noise that do not correspond to a change in the signal.

Various methods are available for residual generation, based on smoothing filters, linear filters, non-linear filters, likelihood ratios and whiteness tests. The whiteness test requires complete knowledge of the amplitude distribution of the signal \cite{Fredrik_2000}.

Residual generation can also be viewed as estimation. Estimation, or feature extraction, is an important pre-processing step in change detection and pattern recognition applications. Feature extraction can be described as transforming the input data into a set of features.

We present some smoothing filters for residual generation, i.e.\ fast fading removal, for the LMS signal.

\subsubsection{Smoothing Filter as Residual Generator}

To overcome the whiteness test's requirement of full distributional knowledge, this report presents some approaches based on the combination of a smoothing filter and a whiteness test.

For identifying stationarity changes, estimation is interpreted as transforming the measured received signal into residuals. Estimation can also be performed by pre-processing filters, which reduce the noise level in the received signal. In essence, feature extraction removes unwanted features of the signal while retaining the features necessary for change detection.

The LMS received signal contains fast fading effects, which are not relevant for additive change detection problems. This problem can therefore be treated as the removal of a noise component from the received signal.

Various pre-processing smoothing filters can be used to remove the fast fading component from the LMS received signal, i.e.\ for residual generation.

A smoothing filter basically has two requirements \cite{Fredrik_2000}:

\begin{itemize}
\item Good attenuation of noise
\item Better tracking ability
\end{itemize}

These requirements are contradictory for linear filters. A fast filter follows the signal well but attenuates noise poorly, while a slow filter attenuates noise well but has poor tracking ability. The best filter is always a compromise between these two cases.


\subsubsection{Moving Average filter (MA)}

To estimate the slow variations of the received signal, the frequencies of the fast fading component are assumed to be higher than those of the slow variations. A moving average is a standard tool for smoothing a time series. Various moving averages are available, but for LMS signal change detection we focus only on the averaging methods listed below.

\begin{description}
\item[Rectangle window moving average] \hfill \\

For the rectangular-window moving average, we use the MATLAB built-in zero-phase filtering function \emph{filtfilt}. We present the results of the rectangular-window moving average and compare them with the other filter approaches.

The main drawback of the rectangular-window moving average is discussed next.

\item[Exponentially weighted moving average (EWMA)] \hfill \\

The problem with the rectangular-window moving average is that it assigns every sample the same weight, i.e.\ the last sample of the window has the same weight as the first. This is fixed by the exponentially weighted moving average, in which recent samples have a greater effect.

The EWMA introduces a smoothing parameter $\lambda$, which must be less than one. Under this condition, each sample is weighted by a multiplier instead of receiving equal weight.

The statistic is calculated as

	\begin{equation}\label{eq:EWMA filtered output}
	\begin{aligned}
	\hat{\theta}(t) = \lambda y(t) + (1 - \lambda) \hat{\theta}(t-1)
	\end{aligned}
	\end{equation}
	
$\lambda$ is the design parameter of the EWMA filter and is also referred to as the forgetting factor for past observations. As mentioned before, whether the EWMA filter is fast or slow depends on the value of the forgetting factor. With the recursion above, $\lambda$ near one gives a fast filter that follows the signal well but does not attenuate the noise in the received signal satisfactorily, whereas $\lambda$ near zero gives a slow filter that attenuates the noise well but cannot follow the signal closely.

The comparison of the fast and slow EWMA filters is shown in the corresponding figure.


\item[Hanning window moving average] \hfill \\

The Hanning-window-based filter is a widely used tool for effectively removing noise variations from the received signal. The Hanning window has a wider central lobe than the rectangular window, as shown in the frequency response in Figure~\ref{fig:Frequency response of Rectangle and Hanning window}. The rectangular window has higher side lobes than the Hanning window, which make it difficult to remove high-frequency fast variations; this problem is solved by the Hanning window.

The Hanning window can be implemented in MATLAB with the \emph{hanning} function; the windowed coefficients are then applied via the built-in \emph{filtfilt} function to remove the fast variations.

Another approach is a variable-length Hanning window; we performed this experiment and include its results later.

\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.8\textwidth]{./bilder/filter_response.pdf}
\end{center}
\caption{Frequency response of Rectangle and Hanning window}
\label{fig:Frequency response of Rectangle and Hanning window}
\end{figure}


\item[Savitzky-Golay Filter] \hfill \\

The Savitzky-Golay filter is a smoothing method based on local least-squares polynomial fitting \cite{S_2011}. This filter preserves features of the distribution such as relative maxima, minima and width, which are flattened by some moving average filters (Hanning moving average, rectangular moving average).

This filter is implemented with the MATLAB function \emph{sgolayfilt}, and its performance is compared with the other moving average filters.


\end{description}
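The moving-average variants above can also be reproduced outside MATLAB. The following Python/SciPy sketch applies the rectangular, Hanning, EWMA and Savitzky-Golay smoothers to the same noisy signal; the window length, $\lambda$ and the synthetic test signal are illustrative assumptions.

```python
import numpy as np
from scipy.signal import filtfilt, savgol_filter, get_window

rng = np.random.default_rng(2)
t = np.arange(500)
clean = 1.0 + 0.5 * (t > 250)                   # slow component with a mean change
y = clean + 0.4 * rng.standard_normal(t.size)   # noisy "received" signal

L = 31  # assumed window length

# Rectangular moving average, zero-phase (SciPy analogue of MATLAB filtfilt)
rect = filtfilt(np.ones(L) / L, [1.0], y)

# Hanning-window moving average (window normalised to unit sum)
hann = get_window("hann", L)
hann_avg = filtfilt(hann / hann.sum(), [1.0], y)

# EWMA: theta(t) = lam * y(t) + (1 - lam) * theta(t-1), slow filter for small lam
lam = 0.05
ewma = np.empty_like(y)
ewma[0] = y[0]
for i in range(1, y.size):
    ewma[i] = lam * y[i] + (1 - lam) * ewma[i - 1]

# Savitzky-Golay: local least-squares polynomial fit (MATLAB sgolayfilt analogue)
sg = savgol_filter(y, window_length=L, polyorder=3)
```

All four smoothers trade tracking speed against noise attenuation, as discussed above; comparing the residuals against the clean component makes that trade-off visible.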

{\bf Drawbacks of smoothing filters}

\begin{enumerate}
\item A smoothing filter cannot simultaneously follow the signal well and attenuate the noise present in the signal.

\item A smoothing filter cannot detect gradual changes, i.e.\ slow changes in the system.

\end{enumerate}

Overcoming these drawbacks requires estimating the parameters of the underlying distributions. This is addressed in the next section, where we present some parameter estimation techniques for the LMS received signal.

\section{Parametric Change Detection}

Parametric change detection requires the estimation of parameters of the measured signal. In this section, we present several techniques, such as least-squares and statistically based approaches, to estimate the parameters of the underlying system. Parametric change detection proceeds in the steps shown in Figure~\ref{fig: Steps of Parametric change detection}.

\tikzstyle{block} = [draw, fill=blue!20, rectangle, text width=5em, text centered, minimum height=3em, minimum width=6em]
\tikzstyle{sum} = [draw, fill=blue!20, circle, node distance=1cm]
\tikzstyle{input} = [coordinate]
\tikzstyle{output} = [coordinate]
\tikzstyle{pinstyle} = [pin edge={to-,thick,black}]

\begin{figure}[htbp]
\centering
% The block diagram code is probably more verbose than necessary
\begin{tikzpicture}[auto, node distance=3cm,>=latex']
    % We start by placing the blocks
    \node [input, name=input] {};
    \node [block, right of=input] (Filter) {Parameter Estimation};
    \node [block, right of=Filter, node distance=4cm] (Distance Measure) {Distance Measure};  
    \draw [->] (Filter) -- node {$\hat{\theta}_t, \varepsilon_t$} (Distance Measure);
    \node [block, right of=Distance Measure, node distance=4cm] (Stopping Rule) {Stopping Rule};
    \draw [->] (Distance Measure) -- node {$S_t$} (Stopping Rule);
    \node [output, right of=Stopping Rule] (output) {}; 
		\draw [draw,->] (input) -- node {$y_t$} (Filter);
		\draw [->] (Stopping Rule) -- node [name=y] {Alarm $\hat{k}$}(output);
		
    
\end{tikzpicture}

\caption{Steps of Parametric change detection}
\label{fig: Steps of Parametric change detection}
\end{figure}

\subsection{Linear Least Square Estimators}

We consider here the identification of the unknown parameters of the multiple linear regression model in Eq.~\ref{eq:Linear regression model}. This section presents some linear estimators based on the least-squares cost function. The estimated parameters may describe several properties of the received signal. The objective of this section is to estimate the parameter vector $\bm{\hat{\theta}}_t$ for the given received signal $y_t$ at time $t$.

	\begin{equation}\label{eq:Linear regression model}
	\begin{aligned}
	\bm{Y} = \bm{\Phi}^T \bm{\Theta} + \bm{E},
	\end{aligned}
	\end{equation}
	
	where $\bm{Y}$ is the column measurement vector, $\bm{\Phi}$ is the regression matrix, $\bm{\Theta}$ is the column parameter vector and $\bm{E}$ is the column vector of the noise component.
	
	\begin{equation}
	\begin{aligned}
	\bm{Y} &= [y_1, y_2, \dots, y_t]^T, \\
	\bm{\Phi} &= [\bm{\varphi}_1, \bm{\varphi}_2, \dots, \bm{\varphi}_t]^T, \\
	\bm{E} &= [e_1, e_2, \dots, e_t]^T, \\
	\end{aligned}
	\end{equation}
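As a minimal numerical sketch of the batch least-squares solution of this regression model (the dimensions, true parameter vector and noise level below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical regression example: rows of Phi^T are the regressors phi_k^T.
t_samples, n_params = 200, 3
Phi_T = rng.standard_normal((t_samples, n_params))   # regression matrix Phi^T
theta_true = np.array([0.5, -1.0, 2.0])              # assumed true parameters
y = Phi_T @ theta_true + 0.1 * rng.standard_normal(t_samples)  # Y = Phi^T Theta + E

# Batch least-squares estimate of Theta
theta_hat, *_ = np.linalg.lstsq(Phi_T, y, rcond=None)
print(theta_hat)  # close to [0.5, -1.0, 2.0]
```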

\subsubsection{Recursive Least squares (RLS) Estimator}

More often, we obtain our measurements sequentially and want to update our estimate with each new measurement. Recursive least squares finds the parameter estimates recursively by minimizing the linear least-squares cost function.

The input for the RLS estimator is the set of measured signal samples $[y_1, y_2, y_3, \dots, y_t]^T$.


For linear estimators, the predicted estimate is computed as in Eq.~\ref{eq:Estimated output},


	\begin{equation}\label{eq:Estimated output}
	\begin{aligned}
	\hat{y}_t = \sum_{p=0}^{P} \hat{\theta}_t(p)\, \varphi_{t}(p)
	\end{aligned}
	\end{equation}


where $\bm{\hat{\theta}}_t = [\hat{\theta}_t(0); \hat{\theta}_t(1); \dots; \hat{\theta}_t(P)]$, $\bm{\varphi}_{t} = [y_{t}; y_{t-1}; \dots; y_{t-P}]$ and $P$ is the model order.

The goal is to find, recursively in time, the parameters $\bm{\hat{\theta}}_t$ such that the sum of squared errors is minimized.

For the linear recursive least-squares estimator, we aim to minimize the least-squares cost function given below \cite{Fredrik_2000}. After replacing the estimated signal with the filter output, the following cost function is obtained,


	\begin{equation}\label{eq:Least square error function}
	\begin{aligned}
	V_{\theta} &= \sum_{k=1}^{t} \lambda^{t-k} (y_k - \hat{y}_k)^2 \\
	V_{\theta} &= \sum_{k=1}^{t} \lambda^{t-k} \Big[y_k - \sum_{p=0}^{P} \hat{\theta}_k(p) \varphi_{k}(p) \Big]^2
	\end{aligned}
	\end{equation}
		
	
	The LS solution is then obtained by finding the filter coefficients,
	
	\begin{equation}\label{eq:Least square solution}
	\begin{aligned}
	\bm{\hat{\theta}}_t &= (\sum_{k=1}^{t} \lambda^{t-k} \bm{\varphi}_k \bm{\varphi}_k^T)^{-1} (\sum_{k=1}^{t} \lambda^{t-k} \bm{\varphi}_k y_k), \\
	\bm{\hat{\theta}}_t &= [\bm{R}_t]^{-1} \bm{P}_t	
	\end{aligned}
	\end{equation}
	
	\begin{equation}
	\begin{aligned}
	\bm{R}_t &= \sum_{k=1}^{t} \lambda^{t-k} \bm{\varphi}_k \bm{\varphi}_k^T \\
	\bm{P}_t &= \sum_{k=1}^{t} \lambda^{t-k} \bm{\varphi}_k y_k
	\end{aligned}
	\end{equation}	
	
	A recursive least-squares estimate can now be obtained using the previously available information. With the information available at $(t-1)$,
	
	\begin{equation}
	\begin{aligned}
	\bm{\hat{\theta}}_{t-1} &= [\bm{R}_{t-1}]^{-1} \bm{P}_{t-1}	
	\end{aligned}
	\end{equation}
	
	Rewriting $\bm{R}_t$ and $\bm{P}_t$ using this previous information gives
	
	
	\begin{equation}
	\begin{aligned}
	\bm{R}_t &= \sum_{k=1}^{t} \lambda^{t-k} \bm{\varphi}_k \bm{\varphi}_k^T \\
	\bm{R}_t &= \lambda [\sum_{k=1}^{t-1} \lambda^{t-k-1} \bm{\varphi}_k \bm{\varphi}_k^T] + [\bm{\varphi}_t \bm{\varphi}_t^T] \\
	\bm{R}_t &= \lambda \bm{R}_{t-1} + \bm{\varphi}_t \bm{\varphi}_t^T 
	\end{aligned}
	\end{equation}
	
	
	\begin{equation}
	\begin{aligned}
	\bm{P}_t &= \sum_{k=1}^{t} \lambda^{t-k} \bm{\varphi}_k y_k \\
	\bm{P}_t &= \lambda \Big[\sum_{k=1}^{t-1} \lambda^{t-k-1} \bm{\varphi}_k y_k\Big] + \bm{\varphi}_t y_t \\
	\bm{P}_t &= \lambda \bm{P}_{t-1} + \bm{\varphi}_t y_t
	\end{aligned}
	\end{equation}
	
	
	To compute the inverse of the matrix $\bm{R}_t$, the matrix inversion lemma is used, in which $\bm{A}$ and $\bm{B}$ are $M \times M$ positive definite matrices, $\bm{D}$ is an $N \times N$ positive definite matrix and $\bm{C}$ is an $M \times N$ matrix.
 	\begin{equation}
	\begin{aligned}
	\bm{A} &= \bm{B}^{-1} + \bm{C} \bm{D}^{-1} \bm{C}^T \\
	\bm{A}^{-1} &= \bm{B} - \bm{B}\bm{C}(\bm{D} + \bm{C}^T \bm{B} \bm{C})^{-1} \bm{C}^T \bm{B}
	\end{aligned}
	\end{equation}
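The lemma can be checked numerically; in the following sketch the matrix sizes and values are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(4)
M, N = 4, 2  # illustrative dimensions

B = 2.0 * np.eye(M)                 # M x M positive definite
C = rng.standard_normal((M, N))     # M x N
D = 3.0 * np.eye(N)                 # N x N positive definite

# A = B^{-1} + C D^{-1} C^T, inverted directly and via the lemma
A = np.linalg.inv(B) + C @ np.linalg.inv(D) @ C.T
A_inv_lemma = B - B @ C @ np.linalg.inv(D + C.T @ B @ C) @ C.T @ B

print(np.allclose(np.linalg.inv(A), A_inv_lemma))  # True
```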
	
	Applying the matrix inversion lemma to
	
	\begin{equation}
	\begin{aligned}
	\bm{R}_t &= \lambda \bm{R}_{t-1} + \bm{\varphi}_t \bm{\varphi}_t^T 
	\end{aligned}
	\end{equation}
	
	We obtain,
	
	\begin{equation}
	\begin{aligned}
	\bm{R}_t^{-1} = \lambda^{-1} \bm{R}_{t-1}^{-1} - \frac{\lambda^{-2} \bm{R}_{t-1}^{-1} \bm{\varphi}_t \bm{\varphi}^T_t \bm{R}_{t-1}^{-1} } {1 + \lambda^{-1} \bm{\varphi}^T_t \bm{R}_{t-1}^{-1} \bm{\varphi}_t}
	\end{aligned}
	\end{equation}	
	
	Denoting
	
	\begin{equation}
	\begin{aligned}
	\bm{Q}_t = \bm{R}_t^{-1}
	\end{aligned}
	\end{equation}	
	
	and 
		
	\begin{equation}
	\begin{aligned}
	\bm{k}_t = \frac{\lambda^{-1} \bm{Q}_{t-1} \bm{\varphi}_t } {1 + \lambda^{-1} \bm{\varphi}^T_t \bm{Q}_{t-1} \bm{\varphi}_t}
	\end{aligned}
	\end{equation}
	
	we obtain,
	
	
	\begin{equation}
	\begin{aligned}
	\bm{Q}_t = \lambda^{-1} \bm{Q}_{t-1} - \lambda^{-1} \bm{k}_t \bm{\varphi}^T_t \bm{Q}_{t-1}
	\end{aligned}
	\end{equation}
		
	
	We now derive the filter coefficient update equations,
	
	
	\begin{equation}
	\begin{aligned}
	\bm{\hat{\theta}}_t &= [\bm{R}_t]^{-1} \bm{P}_t, \\
	\bm{\hat{\theta}}_t &= \bm{Q}_t \bm{P}_t
	\end{aligned}
	\end{equation}
	
	
	After substituting these quantities, the following filter update equation is obtained,
	
	\begin{equation}
	\begin{aligned}
	\bm{\hat{\theta}}_t = \bm{\hat{\theta}}_{t-1} + \bm{k}_t \varepsilon_t
	\end{aligned}
	\end{equation}	
	
	where,

	\begin{equation}
	\begin{aligned}
	\varepsilon_t = y_t - \bm{\varphi}^T_t \bm{\hat{\theta}}_{t-1}
	\end{aligned}
	\end{equation}	
	
	
The predicted signal $\hat{y}_t$ using the estimated parameters $\bm{\hat{\theta}}_t$ can then be calculated from Eq.~\ref{eq:Estimated signal using RLS filter},


	\begin{equation} \label{eq:Estimated signal using RLS filter}
	\begin{aligned}
	\hat{y}_t = \bm{\varphi}^T_t \bm{\hat{\theta}}_t
	\end{aligned}
	\end{equation}	


\begin{algorithm}
\caption{Recursive Least Square Parameter Estimation Algorithm}

\begin{tabbing}
Given \= measured received data: $y_1, y_2, y_3, \dots, y_t$ \\ \\
Initialize: \\ \\
Select the forgetting factor $\lambda$, \\ \\
Select the number of parameters $P$, \\ \\

\begin{math}
\begin{aligned}
&\bm{\hat{\theta}}_0 = 0 \\ \\
&\bm{Q}_0 = \delta \bm{I} \\ \\
&\bm{\varphi}_t = [y_{t}, y_{t-1}, \dots, y_{t-P}]^T,
\end{aligned}
\end{math}
\\ \\
$\bm{\hat{\theta}}_0$ is a column vector of size $P+1$, \\ \\
$\bm{I}$ is the identity matrix of size $(P+1) \times (P+1)$, \\ \\

\> \=For $k=1$ to $t$ \\ \\
\> \> 1. Calculate: -\= \hspace{5 mm} 
\begin{math}
\bm{k}_k = \frac{\lambda^{-1} \bm{Q}_{k-1} \bm{\varphi}_k } {1 + \lambda^{-1} \bm{\varphi}^T_k \bm{Q}_{k-1} \bm{\varphi}_k}
\end{math}
\\ \\
\> \> 2. Calculate the a priori error $\varepsilon_k$ \hspace{5 mm}
\begin{math}
\varepsilon_k = y_k - \bm{\varphi}^T_k \bm{\hat{\theta}}_{k-1} 
\end{math}
\\ \\
\> \> 3. Update estimated parameters \hspace{5 mm}
\begin{math}
\bm{\hat{\theta}}_k = \bm{\hat{\theta}}_{k-1} + \bm{k}_k \varepsilon_k 
\end{math}
\\ \\
\> \> 4. Update the inverse matrix \hspace{5 mm}
\begin{math}
\bm{Q}_k = \lambda^{-1} \bm{Q}_{k-1} - \lambda^{-1} \bm{k}_k \bm{\varphi}^T_k \bm{Q}_{k-1}
\end{math}
\\ \\
\> end
\\
\end{tabbing}
\end{algorithm}	
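The algorithm above can be implemented compactly. The following Python sketch follows the listed steps; the forgetting factor, model order and initialization constant $\delta$ are illustrative design choices, and $\bm{\varphi}_t$ includes $y_t$ exactly as defined in the text.

```python
import numpy as np

def rls(y, P=3, lam=0.98, delta=100.0):
    """Recursive least-squares parameter estimation (steps 1-4 above).
    P, lam and delta are assumed design parameters."""
    y = np.asarray(y, dtype=float)
    theta = np.zeros(P + 1)            # theta_hat_0 = 0
    Q = delta * np.eye(P + 1)          # Q_0 = delta * I
    thetas = np.zeros((y.size, P + 1))
    eps = np.zeros(y.size)
    for t in range(P, y.size):
        phi = y[t - P:t + 1][::-1]     # phi_t = [y_t, y_{t-1}, ..., y_{t-P}]^T
        k = (Q @ phi / lam) / (1.0 + phi @ Q @ phi / lam)  # step 1: gain k_t
        eps[t] = y[t] - phi @ theta                        # step 2: a priori error
        theta = theta + k * eps[t]                         # step 3: parameter update
        Q = (Q - np.outer(k, phi @ Q)) / lam               # step 4: inverse update
        thetas[t] = theta
    return thetas, eps

# For a constant signal the a priori error quickly vanishes
thetas, eps = rls(np.ones(200))
```

Because $\bm{Q}_t$ tracks $\bm{R}_t^{-1}$ recursively, no matrix inversion is needed inside the loop; each update costs only $O((P+1)^2)$ operations.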
	
	
	
	
\section{Results of the Smoothing Filters}
	
This section presents the results of change detection based on the smoothing filters described above.

We compared different smoothing filters for removing the fast fading effect from the received measurement data. The performance of these methods is compared both within the same state duration and during state transition periods, as shown in Figures~\ref{fig:Comparison of different smoothing approaches for same state duration} and~\ref{fig:Comparison of different smoothing approaches for state transition}.



\begin{figure}[htbp]
\centering
\includegraphics[width=0.8\textwidth]{./bilder/Residual_comparison/smoothing_filter_compare}
\caption{Comparison of different smoothing approaches for same state duration}
\label{fig:Comparison of different smoothing approaches for same state duration}
\end{figure}

\begin{figure}[htbp]
\centering
\includegraphics[width=0.8\textwidth]{./bilder/Residual_comparison/smoothing_filter_compare_slope}
\caption{Comparison of different smoothing approaches for state transition}
\label{fig:Comparison of different smoothing approaches for state transition}
\end{figure}
	
After presenting the results of the fast fading removal approaches, we compare the residuals of these smoothing approaches. The residuals of the different smoothing approaches are presented in Figure~\ref{fig:Residual comparison for all smoothing approaches}.
	
	
% Plot residuals
\begin{figure}[htbp]
\begin{subfigure}[b]{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{./bilder/Residual_comparison/Input_signal_residual}
\caption{Synthetic signal generated by LMS simulator}
\label{fig:Synthetic signal generated by LMS simulator}
\end{subfigure}~
\begin{subfigure}[b]{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{./bilder/Residual_comparison/EWMA_residual}
\caption{Residual for EWMA filter smoothing approach}
\label{fig:Residual for EWMA filter smoothing approach}
\end{subfigure}\\
\begin{subfigure}[b]{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{./bilder/Residual_comparison/Rect_mov_residual}
\caption{Residual for Rectangle moving Average smoothing approach}
\label{fig:Residual for Rectangle moving Average smoothing approach}
\end{subfigure}~ 
\begin{subfigure}[b]{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{./bilder/Residual_comparison/Hann_mov_residual}
\caption{Residual for Hanning moving Average smoothing approach}
\label{fig:Residual for Hanning moving Average smoothing approach}
\end{subfigure}\\
\begin{subfigure}[b]{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{./bilder/Residual_comparison/sgolay_residual}
\caption{Residual for Sgolay filter smoothing approach}
\label{fig:Residual for Sgolay filter smoothing approach}
\end{subfigure}~
\begin{subfigure}[b]{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{./bilder/Residual_comparison/median_residual}
\caption{Residual for median filter smoothing approach}
\label{fig:Residual for median filter smoothing approach}
\end{subfigure}
\caption{Residual comparison for all smoothing approaches}
\label{fig:Residual comparison for all smoothing approaches}
\end{figure}


% plot Averaged values

\begin{figure}[htbp]
\begin{subfigure}[b]{0.9\textwidth}
\centering
\includegraphics[width=\textwidth]{./bilder/Residual_comparison/Input_signal_residual}
\caption{Synthetic signal generated by LMS simulator}
\label{fig:Synthetic signal generated by LMS simulator averaged}
\end{subfigure}\\
\begin{subfigure}[b]{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{./bilder/Residual_comparison/EWMA_Averaged}
\caption{Averaged for EWMA filter smoothing approach}
\label{fig:Averaged for EWMA filter smoothing approach}
\end{subfigure}~
\begin{subfigure}[b]{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{./bilder/Residual_comparison/Rect_Averaged}
\caption{Averaged for Rectangle moving Average smoothing approach}
\label{fig:Averaged for Rectangle moving Average smoothing approach}
\end{subfigure}\\ 
\begin{subfigure}[b]{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{./bilder/Residual_comparison/Hann_Averaged}
\caption{Averaged for Hanning moving Average smoothing approach}
\label{fig:Averaged for Hanning moving Average smoothing approach}
\end{subfigure}~
\begin{subfigure}[b]{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{./bilder/Residual_comparison/Sgolay_Averaged}
\caption{Averaged for Sgolay filter smoothing approach}
\label{fig:Averaged for Sgolay filter smoothing approach}
\end{subfigure}~
\caption{Averaged comparison for all smoothing approaches}
\label{fig:Averaged comparison for all smoothing approaches}
\end{figure}
	
	
	
% plot RWLS estimator parameter

\begin{figure}[htbp]
\centering
\includegraphics[width=0.8\textwidth]{./bilder/Residual_comparison/Least_squa_para_set}
\caption{Least square parameter estimation for synthetic signal}
\label{fig:Least square parameter estimation for synthetic signal}
\end{figure}


% plot residual for RWLS

\begin{figure}[htbp]
\centering
\includegraphics[width=0.8\textwidth]{./bilder/Residual_comparison/RWLS_residual}
\caption{Residual generation using RLS Parameter estimation}
\label{fig:Residual generation using RLS Parameter estimation}
\end{figure}
	
% plot non parametric LLR samples
	
\begin{figure}[htbp]
\centering
\includegraphics[width=0.8\textwidth]{./bilder/Residual_comparison/Non_para_LLR_sample}
\caption{Non-Parametric Log likelihood ratio for synthetic signal}
\label{fig:Non-Parametric Log likelihood ratio for synthetic signal}
\end{figure}
	
% MLE LLR samples

\begin{figure}[htbp]
\centering
\includegraphics[width=0.8\textwidth]{./bilder/Residual_comparison/MLE_LLR_sample}
\caption{Maximum likelihood ratio LLR samples for synthetic signal}
\label{fig:Maximum likelihood ratio LLR samples for synthetic signal}
\end{figure}


% plot averaged value for real signal

% plot Averaged values

\begin{figure}[htbp]
\begin{subfigure}[b]{0.9\textwidth}
\centering
\includegraphics[width=\textwidth]{./bilder/Residual_comparison/Input_signal_residual_real}
\caption{Real signal generated by LMS simulator}
\label{fig:Real signal generated by LMS simulator}
\end{subfigure}\\
\begin{subfigure}[b]{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{./bilder/Residual_comparison/EWMA_Averaged_real}
\caption{Averaged for EWMA filter smoothing approach}
\label{fig:Averaged for EWMA filter smoothing approach real}
\end{subfigure}~
\begin{subfigure}[b]{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{./bilder/Residual_comparison/Rect_Averaged_real}
\caption{Averaged for Rectangle moving Average smoothing approach (real signal)}
\label{fig:Averaged for Rectangle moving Average smoothing approach real}
\end{subfigure}\\
\begin{subfigure}[b]{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{./bilder/Residual_comparison/Hann_Averaged_real}
\caption{Averaged for Hanning moving Average smoothing approach (real signal)}
\label{fig:Averaged for Hanning moving Average smoothing approach real}
\end{subfigure}~
\begin{subfigure}[b]{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{./bilder/Residual_comparison/Sgolay_Averaged_real}
\caption{Averaged for Sgolay filter smoothing approach (real signal)}
\label{fig:Averaged for Sgolay filter smoothing approach real}
\end{subfigure}~
\caption{Averaged comparison for all smoothing approaches (real signal)}
\label{fig:Averaged comparison for all smoothing approaches real}
\end{figure}


% Non_para_LLR_sample_real

\begin{figure}[htbp]
\centering
\includegraphics[width=0.8\textwidth]{./bilder/Residual_comparison/Non_para_LLR_sample_real}
\caption{Non-Parametric Log likelihood ratio for real signal}
\label{fig:Non-Parametric Log likelihood ratio for real signal}
\end{figure}

% MLE LLR samples real

\begin{figure}[htbp]
\centering
\includegraphics[width=0.8\textwidth]{./bilder/Residual_comparison/MLE_LLR_sample_real}
\caption{Maximum likelihood ratio LLR samples for real signal}
\label{fig:Maximum likelihood ratio LLR samples for real signal}
\end{figure}








%\chapter{Signal Stationarity based on Correlation}


\section{Correlation}


The correlation structure of the received signal requires particular attention, because all the propagation effects shown in Figure~\ref{fig:Propagation Characteristics of Satellite Communication Channel} on page~\pageref{fig:Propagation Characteristics of Satellite Communication Channel} are present in the LMS measurement data. In this chapter, we discuss different correlation matrices that can be used to identify stationary regions of a wireless channel. More specifically, we emphasize the use of different correlation matrix methods to analyze the stationarity of the LMS received signal.

Stationary regions are decided based on these correlation matrices. It is therefore crucial to have reliable estimates of the correlation matrices or, equivalently, of the stationary regions.



If $\bm{h}$ is the $N \times 1$ LMS received signal vector, the local correlation matrices of the LMS received signal at two local positions ($k$ and $l$) can be theoretically defined as:

\begin{equation}
\bm{R}(k) = \big[\bm{h}(k)*\bm{h}^T(k)\big]
\end{equation}
\begin{equation}
\bm{R}(l) = \big[\bm{h}(l)*\bm{h}^T(l)\big]
\end{equation}


where $k \in [kt,(k+1)t]$, $l \in [lt,(l+1)t]$, and $t$ is the window length in samples.

These local correlation matrices are used to find the CMD and NCMD of the received signal $\bm{h}$.
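To make the definition concrete, the windowed outer-product construction can be sketched in NumPy. This is a minimal illustration under our own naming; the random vector merely stands in for the LMS received signal envelope.

```python
import numpy as np

def local_correlation_matrix(h, k, t):
    """R(k) = h_k h_k^T: outer product of the length-t window of h
    that starts at sample k*t (illustrative helper)."""
    h_k = np.asarray(h, dtype=float)[k * t:(k + 1) * t].reshape(-1, 1)
    return h_k @ h_k.T  # t x t local correlation matrix

# stand-in for the N x 1 LMS received signal envelope
rng = np.random.default_rng(0)
h = rng.standard_normal(100)

R_k = local_correlation_matrix(h, 0, 10)  # window at position k = 0
R_l = local_correlation_matrix(h, 1, 10)  # window at position l = 1
```

Each local matrix is symmetric and rank one by construction; pairs of such matrices are what the CMD and NCMD compare.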


\section{Correlation Coefficient Matrix (CCM)}

The concept of the Local Region of Stationarity (LRS) is discussed in \cite{filteri} to segment the wireless channel into stationary regions. The Correlation Coefficient Matrix (CCM) is calculated from the received signal impulse responses. A stationarity interval change is declared where a correlation coefficient of the CCM crosses the threshold value.

For the extraction of stochastic fading parameters, it is essential to describe the LMS signal in terms of stationarity regions. Changes of the LMS signal states are determined based on a stationarity measure.
	
The correlation coefficient between the locations $t$ and $(t+\tau)$ is defined in equation~\ref{eq:correlation coefficient matrix}:
	
	
	\begin{equation}\label{eq:correlation coefficient matrix}
	\begin{aligned}
	CCM(t,t+\tau) = \frac{h(t)*h(t+\tau)}{\max{(h(t)^{2}, h(t+\tau)^{2})}}
	\end{aligned}
	\end{equation}
                                                                  
where $\bm{h}$ is the $N \times 1$ received signal vector, $CCM$ is the $N \times N$ correlation coefficient matrix, $h(t)$ is the received signal impulse response at position $t$, $h(t+\tau)$ is the received signal impulse response at position $(t+\tau)$, and $t = 1,2,\ldots,(N-1)$.
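Equation~\ref{eq:correlation coefficient matrix} can be evaluated for all position pairs at once. A minimal NumPy sketch (the function name is ours; samples equal to zero would additionally need a division guard):

```python
import numpy as np

def ccm(h):
    """N x N correlation coefficient matrix:
    CCM[t, s] = h[t] * h[s] / max(h[t]^2, h[s]^2)."""
    h = np.asarray(h, dtype=float)
    prod = np.outer(h, h)               # h(t) * h(t+tau)
    sq = h ** 2
    denom = np.maximum.outer(sq, sq)    # max(h(t)^2, h(t+tau)^2)
    return prod / denom

h = np.array([1.0, 2.0, -1.0, 0.5])     # toy signal
C = ccm(h)                              # diagonal is exactly 1
```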

Figure~\ref{fig:Correlation coefficient matrix for the measured route distance} shows the resulting correlation coefficient matrix.



\begin{figure}[htb]
\begin{center}
\includegraphics[width=0.95\columnwidth]{./bilder/correlation_matrix_db_4400_4900.pdf}
\end{center}
\caption{Correlation coefficient matrix (CCM) for the measured route as shown in Figure ~\ref{fig:Fixed distance received measurement signal}}
\label{fig:Correlation coefficient matrix for the measured route distance}
\end{figure}


The CCM contains the maximum-normalized correlation coefficient between each position (x axis) of the received signal and every other position (y axis) of the LMS received signal. In Figure~\ref{fig:Correlation coefficient matrix for the measured route distance}, the red areas indicate highly correlated regions and the blue areas indicate uncorrelated sequences.

As shown in Figure~\ref{fig:Correlation coefficient matrix for the measured route distance}, the CCM values range between 0 (when the received signal positions are uncorrelated) and 1 (when the received signal positions are identical up to a scaling factor).


\section{Correlation Matrix Distance (CMD)}

For MIMO channel stationarity analysis, the Correlation Matrix Distance (CMD) proposed in \cite{cmd} is a method to quantify the dissimilarity between two local correlation matrices at different positions of the MIMO channel matrix.

In this report, the concept of CMD channel stationarity analysis is applied to the LMS received signal envelope.

The CMD approach works as follows: given two local correlation matrices $\bm{R}(k)$ and $\bm{R}(l)$ at positions $k$ and $l$, their inner product is given by equation~\ref{eq:Local Correlation matrices inner product},

\begin{equation}\label{eq:Local Correlation matrices inner product}
\langle vec(\bm{R}_k),vec(\bm{R}_l)\rangle = \sum_{i}\sum_{j} r_{ij}(k)r_{ij}(l) = \tr\{\bm{R}(k)\bm{R}(l)\},
\end{equation}

where $\bm{R}(k)$ is the $2 \times 2$ correlation matrix at position $k$, $\bm{R}(l)$ is the $2 \times 2$ correlation matrix at position $l$, and $k,l = 1,\ldots,N$.

The maximum value of this inner product follows from the Cauchy-Schwarz inequality:

\begin{equation}
\tr\{\bm{R}(k)\bm{R}(l)\} \leq \|\bm{R}(k)\|_{F}\|\bm{R}(l)\|_{F},
\end{equation}

\begin{equation}
\frac{\tr\{\bm{R}(k)\bm{R}(l)\}}{\|\bm{R}(k)\|_{F}\|\bm{R}(l)\|_{F}} \leq 1,
\end{equation}

where $\|\cdot\|_{F}$ denotes the Frobenius norm.


The CMD between two local correlation matrices is defined in equation~\ref{eq:Correlation Matrix Distance},

\begin{equation}\label{eq:Correlation Matrix Distance}
CMD(\bm{R}(k)\bm{R}(l)) = 1 - \frac{\tr\{\bm{R}(k)\bm{R}(l)\}}{\|\bm{R}(k)\|_{F}\|\bm{R}(l)\|_{F}}.
\end{equation}
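Equation~\ref{eq:Correlation Matrix Distance} transcribes directly into NumPy; the function name is ours:

```python
import numpy as np

def cmd(R_k, R_l):
    """CMD = 1 - tr(R_k R_l) / (||R_k||_F * ||R_l||_F)."""
    num = np.trace(R_k @ R_l)
    den = np.linalg.norm(R_k, 'fro') * np.linalg.norm(R_l, 'fro')
    return 1.0 - num / den

R = np.array([[2.0, 0.5],
              [0.5, 1.0]])
d_same = cmd(R, R)          # numerically zero
d_scaled = cmd(R, 3.0 * R)  # also zero: identical up to scaling
```

Note the scale invariance: cmd(R, 3*R) is zero just like cmd(R, R), while two orthogonal rank-one matrices give the maximum value 1.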

As shown in Figure~\ref{fig:Correlation Matrix Distance for the measured route distance}, the CMD ranges between 0 and 1: it is 0 when the local correlation matrices are identical up to a scaling factor, and 1 when they are completely uncorrelated (orthogonal in the inner-product sense).


\begin{figure}[htb]
\begin{center}
\includegraphics[width=0.95\columnwidth]{./bilder/cmd_dist_db_4400_4900}
\end{center}
\caption{Correlation Matrix Distance for the measured route as shown in Figure ~\ref{fig:Fixed distance received measurement signal}}
\label{fig:Correlation Matrix Distance for the measured route distance}
\end{figure}


The CMD for the envelope of the LMS received signal yields an $N \times N$ matrix that contains the CMD value between the local correlation matrix $\bm{R}(k)$ at each position (x axis) and the local correlation matrix $\bm{R}(l)$ at every other position (y axis).


The resulting CMD matrix for the LMS measurement received signal stretch is shown in Figure ~\ref{fig:Correlation Matrix Distance for the measured route distance}.


In Figure~\ref{fig:Correlation Matrix Distance for the measured route distance}, the blue areas indicate highly correlated regions and the red areas indicate uncorrelated regions.
 

For full-rank correlation matrices of a MIMO channel matrix, the CMD value may remain smaller than 1 even for two different correlation matrices. It is therefore useful to normalize the CMD matrix so that values close to 1 can be reached.

In the next section we discuss this normalization process and the resulting Normalized Correlation Matrix Distance (NCMD) approach.


\section{Normalized Correlation Matrix Distance (NCMD)}

In the previous approach, two completely uncorrelated local correlation matrices $\bm{R}(k)$ and $\bm{R}(l)$ give a CMD value of 1 (the maximum) and an inner product of 0. However, the correlation matrices of a MIMO channel have a particular structure, so that in some cases the CMD value between a correlation matrix $\bm{R}(k)$ and a positive semi-definite Hermitian correlation matrix $\bm{R}(l)$ stays below 1. The normalization method for the CMD value proposed in \cite{mimosta} can be used to improve the CMD matrix.

Information about the maximum CMD value is essential for the normalization, because normalizing by this maximum allows the highest CMD value of 1 to be reached. We therefore calculate the maximum value of the CMD between a correlation matrix $\bm{R}(k)$ and a second non-zero Hermitian matrix $\bm{R}(l)$.

The CMD value between $\bm{R}(k)$ and $\bm{R}(l)$ is given by equation~\ref{eq:Correlation Matrix Distance},


\begin{equation}
CMD(\bm{R}(k)\bm{R}(l)) = 1 - \frac{\tr\{\bm{R}(k)\bm{R}(l)\}}{\|\bm{R}(k)\|_{F}\|\bm{R}(l)\|_{F}}.
\end{equation}

The maximization of the CMD is given below,

\begin{equation}
\underset{{\|\bm{R}(l)\|}_{F}\neq 0}{\max} \Bigg\{CMD(\bm{R}(k)\bm{R}(l))\Bigg\}  = \underset{{\|\bm{R}(l)\|}_{F}\neq 0}{\max} \Bigg\{ 1 - \frac{\tr\{\bm{R}(k)\bm{R}(l)\}}{\|\bm{R}(k)\|_{F}\|\bm{R}(l)\|_{F}}\Bigg\}
\end{equation}

Since the CMD value ranges between 0 and 1, the maximization yields 1:


\begin{equation}
1 - \frac{\tr\{\bm{R}(k)\bm{R}(l)\}}{\|\bm{R}(k)\|_{F}\|\bm{R}(l)\|_{F}} = 1.
\end{equation}


Hence, the normalization of the CMD matrix can equivalently be achieved by minimizing the inner product of the two correlation matrices instead of maximizing the CMD value:


\begin{equation}
\frac{\tr\{\bm{R}(k)\bm{R}(l)\}}{\|\bm{R}(k)\|_{F}\|\bm{R}(l)\|_{F}} = 1 - 1 = 0.
\end{equation}


\begin{equation}\label{eq:Minimization of correlation matrices inner product}
\underset{{\|\bm{R}(l)\|}_{F}\neq 0}{\min} \Bigg\{ \frac{\tr\{\bm{R}(k)\bm{R}(l)\}}{\|\bm{R}(k)\|_{F}\|\bm{R}(l)\|_{F}} \Bigg\}
\end{equation}


Since an exact solution of this minimization is difficult, the eigenvalue decomposition of the correlation matrices appearing in the inner product is used.

The local correlation matrices can be rewritten in terms of their eigenvalue decompositions (both matrices are assumed to share the unitary eigenbasis $\bm{U}$) as

\begin{equation}
\bm{R}(k) = \bm{U}\bm{\Lambda}(k)\bm{U}^{\bm{H}},
\bm{R}(l) = \bm{U}\bm{\Lambda}(l)\bm{U}^{\bm{H}}.
\end{equation}

In the above equations, $\bm{U}$ is a unitary matrix, and $\bm{\Lambda}(k)$ and $\bm{\Lambda}(l)$ are diagonal matrices containing the $n$ eigenvalues of the two correlation matrices.

Substituting the eigenvalue decompositions of the local correlation matrices into equation~\ref{eq:Minimization of correlation matrices inner product} gives

\begin{equation}
\frac{\tr\{\bm{R}(k)\bm{R}(l)\}}{\|\bm{R}(k)\|_{F}\|\bm{R}(l)\|_{F}} = \frac{\tr\{\bm{U}\bm{\Lambda}(k)\bm{U}^{\bm{H}} \bm{U}\bm{\Lambda}(l)\bm{U}^{\bm{H}} \}}{\|\bm{R}(k)\|_{F}\|\bm{R}(l)\|_{F}}
\end{equation}

\begin{equation}
\frac{\tr\{\bm{\Lambda}(k) \bm{\Lambda}(l)\}}{\|\bm{R}(k)\|_{F}\|\bm{R}(l)\|_{F}} = \frac{\sum_{j=1}^n \lambda_j(k) \lambda_j(l)}{\|\bm{R}(k)\|_{F}\|\bm{R}(l)\|_{F}}
\end{equation}


The minimization of the inner product of the local correlation matrices can thus be achieved by minimizing over the eigenvalues of the correlation matrices. More specifically, we propose to use the minimum non-zero eigenvalue of each eigenvalue matrix to minimize the inner product.


\begin{equation}
\underset{{\|\bm{R}(l)\|}_{F}\neq 0}{\min} \Bigg\{ \frac{tr\{\bm{R}(k)\bm{R}(l)\}}{\|\bm{R}(k)\|_{F}\|\bm{R}(l)\|_{F}} \Bigg\} = \frac{\min{\{\bm{\Lambda}(k)\}} \min{\{\bm{\Lambda}(l)\}} }{\|\bm{R}(k)\|_{F}\|\bm{R}(l)\|_{F}} = \frac{\min{\{\bm{\Lambda}(k)\}}}{\|\bm{R}(k)\|_{F}}
\end{equation}


The maximum value of the CMD is then given by the equation below:

\begin{equation}
\underset{{\|\bm{R}(l)\|}_{F}\neq 0}{\max} \Bigg\{CMD(\bm{R}(k)\bm{R}(l))\Bigg\} = 1 - \frac{\min{\{\bm{\Lambda}(k)\}}}{\|\bm{R}(k)\|_{F}} = 1 - \frac{\min{\{\bm{\Lambda}(k)\}}}{\sqrt{\sum_{j=1}^n \lambda_j^2(k)}}
\end{equation}

The NCMD can then be written as shown in equation~\ref{eq:Normalized Correlation Matrix Distance}.

\begin{equation}
NCMD(\bm{R}(k)\bm{R}(l)) = \frac{CMD(\bm{R}(k)\bm{R}(l))}{1 - \frac{\min{\{\bm{\Lambda}(k)\}}}{\sqrt{\sum_{j=1}^n \lambda_j^2(k)}}} = \frac{CMD(\bm{R}(k)\bm{R}(l))}{K_N}
\label{eq:Normalized Correlation Matrix Distance}
\end{equation}

\begin{equation}\label{eq:Normalization factor for NCMD}
K_N = 1 - \frac{\min{\{\bm{\Lambda}(k)\}}}{\sqrt{\sum_{j=1}^n \lambda_j^2(k)}}
\end{equation}

The normalization factor $K_N$ for the CMD is given by equation~\ref{eq:Normalization factor for NCMD}.

An NCMD value of 0 implies that the two correlation matrices are similar, while an NCMD value of 1 indicates that they are different.
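Equations~\ref{eq:Normalized Correlation Matrix Distance} and~\ref{eq:Normalization factor for NCMD} can be sketched as follows. The function names are ours, and we take the smallest eigenvalue returned by the solver directly; selecting the minimum \emph{non-zero} eigenvalue would need one extra filtering step.

```python
import numpy as np

def normalization_factor(R_k):
    """K_N = 1 - min(Lambda(k)) / sqrt(sum_j lambda_j(k)^2)."""
    lam = np.linalg.eigvalsh(R_k)       # eigenvalues of the Hermitian R(k)
    return 1.0 - lam.min() / np.sqrt(np.sum(lam ** 2))

def ncmd(R_k, R_l):
    """NCMD = CMD / K_N."""
    cmd_val = 1.0 - np.trace(R_k @ R_l) / (
        np.linalg.norm(R_k, 'fro') * np.linalg.norm(R_l, 'fro'))
    return cmd_val / normalization_factor(R_k)
```

For equal eigenvalues, e.g. the identity matrix $\bm{I}_n$, the factor reduces to $K_N = 1 - 1/\sqrt{n}$; for a rank-deficient $\bm{R}(k)$ the smallest eigenvalue is zero and $K_N = 1$, so NCMD and CMD coincide.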



\begin{figure}[htb]
\begin{center}
\includegraphics[width=0.95\columnwidth]{./bilder/Ncmd_dist_db_4400_4900}
\end{center}
\caption{Normalized Correlation Matrix Distance for the measured route as shown in Figure ~\ref{fig:Fixed distance received measurement signal}}
\label{fig:Normalized Correlation Matrix Distance for the measured route distance}
\end{figure}


The $N \times N$ NCMD matrix contains the NCMD value between the local correlation matrix $\bm{R}(k)$ at each position and the correlation matrices $\bm{R}(l)$ at all other positions in the LMS received signal vector.


Figure ~\ref{fig:Normalized Correlation Matrix Distance for the measured route distance} shows the resulting NCMD matrix of the LMS signal.


In Figure~\ref{fig:Normalized Correlation Matrix Distance for the measured route distance}, an NCMD value of 1 (different correlation matrices) corresponds to blue and an NCMD value of 0 (identical correlation matrices) corresponds to red.

The normalization process shown above helps to reach higher NCMD values (up to 1) compared to the previous CMD approach. If the normalization factor is approximately one, the NCMD takes the same value as the CMD.

For the LMS received signal, $K_N \approx 1$ always holds, because the smallest eigenvalue of $\bm{R}(k)$ is close to zero. For a MIMO channel matrix, on the other hand, $K_N < 1$ can occur when the smallest eigenvalue of $\bm{R}(k)$ is not close to zero, and then higher NCMD values are achieved.

For low-rank matrices the smallest non-zero eigenvalue is close to zero, which makes CMD and NCMD essentially the same. The smallest value of the normalization factor is obtained when all eigenvalues are equal, namely $K_N = 1 - \frac{1}{\sqrt{n}}$.

For our LMS received signal envelope, the NCMD matrix is therefore approximately equal to the CMD matrix.



\section{Variable Window Length CMD and NCMD}


For a better estimation of the stationarity region, a variable window length $M$ is proposed in \cite{windle}.

Using this window length, the local correlation matrices can be defined as

\begin{equation}
\bm{R}(k) = \big[\bm{h}(k)*\bm{h}^T(k)\big]
\end{equation}

\begin{equation}
\bm{R}(l) = \big[\bm{h}(l)*\bm{h}^T(l)\big]
\end{equation}

where $k \in [kt,(k+M)t]$, $l \in [lt,(l+M)t]$, and $t$ is the window length in samples.

The window length is a key parameter for a good estimation of the stationarity interval using CMD and NCMD. It should be chosen as follows:

\begin{itemize}

\item The window length should not be too small; it needs to be large enough for an accurate estimate.

\item The window length should not be too large, otherwise some statistical parameters are lost.

\end{itemize}

The variable-length local correlation matrices are then used to calculate the CMD and NCMD matrices as defined in the previous sections.
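As an illustration of the variable-window definition, the following sketch builds length-$M$ outer-product matrices and evaluates the CMD of every window against the first one (this pairing, like the function name, is our own choice for the example):

```python
import numpy as np

def windowed_cmd_profile(h, M):
    """CMD between the first length-M window of h and each later window."""
    h = np.asarray(h, dtype=float)
    n_win = len(h) // M
    R = [np.outer(h[i * M:(i + 1) * M], h[i * M:(i + 1) * M])
         for i in range(n_win)]
    def cmd(R_k, R_l):
        return 1.0 - np.trace(R_k @ R_l) / (
            np.linalg.norm(R_k, 'fro') * np.linalg.norm(R_l, 'fro'))
    return np.array([cmd(R[0], R[i]) for i in range(n_win)])

profile = windowed_cmd_profile(np.arange(1.0, 41.0), 10)  # four windows
```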


All these correlation approaches can be used to determine the stationarity structure of the signal and to segment the received signal. This report discusses a method for the identification of stationarity regions using the analysis of different correlation structures.

In the next chapter, we discuss the stationarity study of the LMS received signal using the correlation structure analysis described above.



\section{Stationarity of the LMS received signal}


The LMS channel propagation characteristics are determined using a two-state Markov channel model. The two channel states (good and bad state) do not depend directly on the LOS or non-LOS conditions. To identify the good or bad state of the LMS received signal envelope, it is beneficial to describe the received signal in terms of stationarity intervals. Identifying the states of the LMS received signal envelope in advance is essential for modeling LMS channel simulators.

The most common assumption used in wireless channel modeling is wide-sense stationarity (WSS). It has been shown that the WSS assumption is only valid over very short intervals \cite{wssn} \cite{nonveh}. Many applications, from exploratory data analysis to diagnosis or surveillance, require the automatic detection of abrupt changes in the signal, i.e.\ of the loss of stationarity. In the case of a deep fade, stationarity is lost, and we need to detect the positions where these deep fades occur.

In the channel simulator, a change of state corresponds to the non-stationary behavior of the LMS received signal.

Usually, a measurement campaign is done on a run basis, so the measured LMS received signal envelope contains more than one channel impulse response.

If we assume a constant speed of motion of the mobile terminal, the traveled distance can be calculated as $x = v \cdot t$, where $t$ is the duration of the campaign and $v$ is the speed of the mobile vehicle.

In the following sections, we discuss some approaches used to extract the stationarity parameters from the received signal using correlation structure analysis.



\section{Local Region of Stationarity (LRS)}



According to the method specified in \cite{filteri}, the Local Region of Stationarity (LRS) is defined as the total number of positions that are stationary with respect to the mobile station's current position during the movement of the mobile vehicle.

The LRS is identified as the region in which no correlation coefficient drops below the threshold value for the Correlation Coefficient Matrix (CCM), and vice versa for the Correlation Matrix Distance (CMD) and the Normalized Correlation Matrix Distance (NCMD).


For the CCM, the LRS is defined as in equation~\ref{eq:Local Region of Stationarity}:

\begin{equation}\label{eq:Local Region of Stationarity}
\bm{\Delta}_{LRS}(t) = \max \{ \tau |_{(\bm{C}(t+\tau)) \geq C_{th}}\}
\end{equation}

For the CMD and NCMD, the LRS is defined as in equation~\ref{eq:Local Region of Stationarity for CMD and NCMD}:

\begin{equation}\label{eq:Local Region of Stationarity for CMD and NCMD}
\bm{\Delta}_{LRS}(t) = \max \{ \tau |_{(\bm{C}(t+\tau)) \leq C_{th}}\}
\end{equation}

where $\bm{C}$ is the $N \times N$ correlation coefficient matrix, CMD matrix, or NCMD matrix, and $C_{th}$ is the threshold value for the correlation matrix.
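Reading $\bm{\Delta}_{LRS}(t)$ off a precomputed correlation matrix can be sketched as below; we assume the condition must hold contiguously from lag $\tau = 0$ onwards, and the function name is ours.

```python
import numpy as np

def lrs(C, t, c_th, mode='ccm'):
    """Delta_LRS(t): largest lag tau for which C[t, t+tau] stays above
    (CCM) or below (CMD/NCMD) the threshold c_th."""
    row = C[t, t:]
    ok = row >= c_th if mode == 'ccm' else row <= c_th
    bad = np.flatnonzero(~ok)           # first violation bounds the region
    return int(bad[0] - 1) if bad.size else len(row) - 1

C = np.array([[1.0, 0.95, 0.92, 0.5, 0.96]])  # one CCM row as toy input
```

With the toy row above and $C_{th} = 0.9$ the result is $\Delta_{LRS} = 2$: the later value 0.96 lies beyond the first violation and does not extend the region.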

The LRS obtained with the above method for the CCM and a threshold value of $C_{th} = 0.9$ is shown in Figure~\ref{fig:LRS for correlation coefficient matrix and threshold $C_{th} = 0.9$ under the measured route distance}.



\begin{figure}[htb]
\begin{center}
\includegraphics[width=0.95\columnwidth]{./bilder/stationary_interval_db_09_4400_4900}
\end{center}
\caption{LRS for correlation coefficient matrix (CCM) and threshold $C_{th} = 0.9$ under the measured route distance as Figure ~\ref{fig:Fixed distance received measurement signal}}
\label{fig:LRS for correlation coefficient matrix and threshold $C_{th} = 0.9$ under the measured route distance}
\end{figure}

Using this approach, the number of stationary positions relative to the current position of the LMS received signal is determined. However, the start and end positions of a stationarity interval are still not precisely identified.

The measured LMS received signal envelope sometimes contains various behaviors, for example if the mobile vehicle passes through an urban environment.

The LRS is applied to a pre-calculated correlation matrix (CCM, CMD, or NCMD) of size $N \times N$, so it requires a high computational time to calculate and store the correlation matrix and a large amount of memory to evaluate it.


Limitations of the LRS approach:

\begin{itemize}
\item Unidentified start and end points of the stationary segments
\item High computational time
\item High memory consumption
\item High computational complexity
\end{itemize}


To overcome these problems, we propose the 'Trace-path Correlation Threshold' method to define the start and end points of a stationarity interval.


\section{Trace-path Correlation Threshold}


We propose a method in which the route from the start position of the correlation matrix structure to its end position is followed.

After calculating the CCM, CMD, or NCMD, the correlation coefficient between the first position (or window) and the next position (or window) along the path is checked. A stationarity interval change is identified at the position where the correlation coefficient between two positions (or windows) goes above or below the threshold value, according to the correlation structure used. The trace-path procedure is performed on the whole LMS measured received signal envelope.

The procedure is shown in Figure~\ref{fig:Trace-path approach method to find starting point and End point}.

\begin{figure}[htb]
\begin{center}
\includegraphics[width=0.95\columnwidth]{./bilder/traceexample_4400_4900}
\end{center}
\caption{Trace-path approach method to find starting point and End point}
\label{fig:Trace-path approach method to find starting point and End point}
\end{figure}

This procedure checks the correlation coefficient value of all points along the measurement route against a fixed threshold value. It removes the problem of identifying the start and end points. However, some problems of the previous approach remain.

This approach still works on a pre-computed CCM, CMD, or NCMD. Because of this pre-computation of the correlation matrices, the algorithm is computationally complex.

\subsection{Trace-path Correlation Calculation on a Run Basis}

To reduce the complexity, memory consumption, and computational time, we propose to operate the previous algorithm on a run basis instead of calculating the full correlation matrix in advance.

As the name suggests, the calculation of the correlation coefficients is performed on a run basis: only the correlation coefficients along the path indicated in Figure~\ref{fig:Trace-path approach method to find starting point and End point} are calculated. Whereas the previous 'Trace-path Correlation Threshold' method computes the full $N \times N$ correlation coefficient matrix, this variant only calculates $N \times 1$ correlation coefficients.

The calculation on a run basis thus reduces the complexity from $N \times N$ to $N \times 1$.
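One plausible reading of the run-basis procedure, sketched with the CCM-style coefficient of equation~\ref{eq:correlation coefficient matrix}: each new sample is compared against the start of the current segment, so only $N-1$ scalar coefficients are computed in total. The name and the exact comparison rule are our assumptions.

```python
def trace_path_changes(h, c_th):
    """Flag a stationarity change where the CCM-style coefficient between
    the current segment start and the next sample drops below c_th."""
    changes = []
    start = 0
    for t in range(len(h) - 1):
        c = h[start] * h[t + 1] / max(h[start] ** 2, h[t + 1] ** 2)
        if c < c_th:
            changes.append(t + 1)   # new stationarity interval starts here
            start = t + 1
    return changes

# a level shift at index 3 is the only flagged change
changes = trace_path_changes([1.0, 1.01, 0.99, 5.0, 5.1], 0.9)
```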

To overcome the remaining problems of this method, a further enhancement is introduced below.

As mentioned in the motivation, for accuracy a channel simulator should change its channel state as frequently as the measured signal requires \cite{deterministic}. However, every state change increases the computational complexity of the simulator, since the generation of a new channel state is computationally expensive \cite{deterministic}. For this reason, the channel state should be changed according to the measured signal, but not at an excessively high rate.

The Trace-path Correlation Threshold approach has one main limitation:

\begin{itemize}
\item It detects every single-point abrupt change.
\end{itemize}

Due to this effect, it would be highly complex for the channel simulator to change state at every single-point change of the signal.

To overcome this problem, we propose an enhancement of the previous algorithm, which is discussed in the next section.

\section{Trace-path Correlation Threshold with MinStateLength}

As the name suggests, this method calculates the correlation coefficients along the path as indicated in Figure~\ref{fig:Trace-path with Ignore value approach method to find starting point and End point}. MinStateLength is defined as the minimum state length: it removes state change indicators that occur before the predefined minimum state length is reached.

Figure~\ref{fig:Trace-path with Ignore value approach method to find starting point and End point} illustrates the procedure with MinStateLength. Before changing state, this method checks the correlation coefficients in advance, up to MinStateLength.

This approach thus has a new look-ahead feature: the additional parameter MinStateLength added to the previous 'Trace-path Correlation Threshold' method means that the algorithm looks ahead up to this value. The MinStateLength feature helps to avoid reacting to single-point abrupt changes.

The working principle of this approach is shown in Figure~\ref{fig:Trace-path with Ignore value approach method to find starting point and End point} below.

\begin{figure}[htb]
\begin{center}
\includegraphics[width=0.95\columnwidth]{./bilder/traceexample_ignor_4400_4900}
\end{center}
\caption{Trace-path with MinStateLength method to find starting point and End point}
\label{fig:Trace-path with Ignore value approach method to find starting point and End point}
\end{figure}

In this example, the correlation coefficients are calculated and compared against the threshold value. The role of MinStateLength is to bound the state change indicators: indicators that would create a state shorter than the minimum state length are suppressed.
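The MinStateLength rule can be expressed as a post-filter on the raw change indicators. A minimal sketch under the assumption that indicators are increasing sample indices; the helper name is ours.

```python
def apply_min_state_length(change_points, min_state_length):
    """Keep only change indicators that leave the previously accepted
    state at least min_state_length samples long."""
    accepted = []
    last = 0                        # start of the current state
    for cp in change_points:
        if cp - last >= min_state_length:
            accepted.append(cp)
            last = cp
    return accepted

# indicators at 3, 5 and 42 violate a minimum state length of 20
kept = apply_min_state_length([3, 5, 40, 42, 90], 20)  # -> [40, 90]
```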

All these approaches can be applied to the different types of correlation structures mentioned in the previous sections.


