\section{Proprietary Distribution}

At the request of Aleks Chechkin, this chapter is not available for public viewing. Those wishing to read this chapter must contact Professor Xinming Huang.

Due to the importance of this distribution to the rest of the thesis, we will say a few words about the distribution here. 

\subsection{Basics of Proprietary distribution}
This is a four-parameter distribution. The fitting algorithm is identical to the one described in Chapter 2 (which is why Chapter 2 is presented in full detail). The system view of the fitting algorithm is shown in Figure~\ref{fig:AEPDsystem}.

%-------- Figure -----------------------------------
\begin{figure}[h!]
\epsfig{file=figures/AEPDsystem.eps}
\caption{System View of Fitting Algorithm}
\label{fig:AEPDsystem}
\end{figure}

%------- End Figure---------------------------------------

The input to the algorithm is financial time series data: $x_1, x_2, \cdots, x_N$. In this case $N = 500$, so the input is a window of 500 data points. The output of the algorithm is a vector of four parameters, $\beta = (\alpha, \theta, \sigma, \kappa)^T$, that describes a distribution.

The algorithm maximizes the log-likelihood by using deterministic (closed-form) expressions for the parameters $\sigma$ and $\kappa$ while searching over the parameters $\theta$ and $\alpha$. Specifically, the algorithm seeks to maximize:
\begin{equation} \label{eq:AEPDlogLike2}
\emph{L}^* = n(\log\frac{\alpha}{\Gamma(\frac{1}{\alpha})}
+\log\frac{\kappa}{1+\kappa^2} - \log\sigma)
-\frac{\kappa^\alpha}{\sigma^\alpha}X_{\theta}^+
-\frac{X_{\theta}^-}{\kappa^\alpha\sigma^\alpha}
\end{equation}
where
\[
X_{\theta}^+ = \sum_{i=1}^{r}(x_{(i)}-\theta)^\alpha, \qquad
X_{\theta}^- = \sum_{i=r+1}^{n}(\theta-x_{(i)})^\alpha,
\]
and $x_{(1)}, x_{(2)}, \cdots, x_{(n)}$ is the sorted version of the input $x_1, x_2, \cdots, x_n$.
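As a minimal illustration, the two split sums can be computed with NumPy. The function name \texttt{split\_sums} is hypothetical, and it is assumed here that $X_{\theta}^+$ aggregates the observations above $\theta$ and $X_{\theta}^-$ those at or below it (the index $r$ in the text is then the size of one side of the split):

```python
import numpy as np

def split_sums(x, theta, alpha):
    """Split sums X_theta^+ and X_theta^- for a candidate location theta.

    Assumption: X+ collects observations above theta and X- those at or
    below it; r in the text is then the count on the X+ side.
    """
    above = x[x > theta]
    below = x[x <= theta]
    x_plus = np.sum((above - theta) ** alpha)
    x_minus = np.sum((theta - below) ** alpha)
    return x_plus, x_minus
```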
The deterministic expressions for $\kappa$ and $\sigma$ are:

\begin{equation} \label{eq:MLEkappa}
\hat{\kappa} = 
\left[
\frac{X_{\theta}^-}{X_{\theta}^+}
\right]
^{\frac{1}{2(\alpha+1)}}
\end{equation}

\begin{equation} \label{eq:MLEsigma2}
\hat{\sigma} =
\big[
\alpha(X_{\theta}^+ X_{\theta}^-)^{\frac{\alpha}{2(\alpha+1)}} 
\big(
(X_{\theta}^+)^{\frac{1}{\alpha+1}} +
(X_{\theta}^-)^{\frac{1}{\alpha+1}}
\big)
\big]
^{\frac{1}{\alpha}}
\end{equation}
The values of $\alpha$ and $\theta$ are allowed to vary over a range, and the set of parameters that maximizes Equation~\ref{eq:AEPDlogLike2} gives the parameters of interest. This is very similar to the method used for the Log-Laplace distribution, but now there are two parameters to search over, increasing the number of calculations per fit and ultimately the running time of the algorithm. Note that the candidate values of $\theta$ are simply the input data points.
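The search just described can be sketched end to end as follows. This is a self-contained sketch, not the thesis implementation: the function name \texttt{fit\_by\_grid} and the user-supplied grid \texttt{alphas} are assumptions, candidate values of $\theta$ are taken to be the data points themselves, and the side convention for the split sums is assumed as before:

```python
import math
import numpy as np

def fit_by_grid(x, alphas):
    """Maximize L* over (theta, alpha), with kappa and sigma obtained in
    closed form at each grid point. Candidate thetas are the data points.
    Returns (L*, alpha, theta, sigma, kappa) at the maximum."""
    n = len(x)
    best = None
    for theta in x:                          # candidate thetas are the inputs
        above, below = x[x > theta], x[x <= theta]
        for alpha in alphas:
            xp = np.sum((above - theta) ** alpha)
            xm = np.sum((theta - below) ** alpha)
            if xp <= 0.0 or xm <= 0.0:
                continue                     # degenerate split; skip
            # Closed-form kappa-hat and sigma-hat at this (theta, alpha)
            kappa = (xm / xp) ** (1.0 / (2.0 * (alpha + 1.0)))
            sigma = (alpha * (xp * xm) ** (alpha / (2.0 * (alpha + 1.0)))
                     * (xp ** (1.0 / (alpha + 1.0))
                        + xm ** (1.0 / (alpha + 1.0)))) ** (1.0 / alpha)
            # Log-likelihood L* from the equation above
            logL = (n * (math.log(alpha / math.gamma(1.0 / alpha))
                         + math.log(kappa / (1.0 + kappa ** 2))
                         - math.log(sigma))
                    - (kappa / sigma) ** alpha * xp
                    - xm / (kappa * sigma) ** alpha)
            if best is None or logL > best[0]:
                best = (logL, alpha, theta, sigma, kappa)
    return best
```

The nested loop makes the cost of the two-parameter search explicit: each candidate $\theta$ is paired with every candidate $\alpha$, which is what drives up the per-fit running time relative to a one-parameter search.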