\documentclass[11pt, oneside]{book}
\usepackage{fullpage}
\usepackage{graphicx}
\usepackage{color}
\usepackage[round]{natbib}


\newcommand{\be}{\begin{equation}}
\newcommand{\ee}{\end{equation}}
\newcommand{\mydefn}[1]
{{\em #1}}
\newcommand{\eof}[1]
{\mbox{{\bf E}}[#1]}


%
%  How to import graphics: alt-prtscrn, then paste as image into gimp, then save as .eps, then run epstopdf
%

\title{Cronos User's Guide}
\author{A. E. Brockwell}

\bibliographystyle{plainnat}


\newcommand{\menuoption}[2]
{#1 $\rightarrow$ #2}

\definecolor{gray}{rgb}{0.8,0.8,0.8}
\newcommand{\button}[1]
{\fcolorbox{black}{gray}{#1} button}

\newcommand{\todo}
{{\em To be filled in ...}}

\parindent 0pt
\parskip 12pt

\begin{document}

\maketitle

\tableofcontents

\chapter{Introduction}

Cronos is an open-source time series analysis package that contains
a range of sophisticated tools for data manipulation and time series analysis.  It supports many different kinds of models,
and provides a plugin system so that users can build custom models and transforms.

This version is written in C\#, unlike the original (pre-version 2.00) Cronos, which was written in C++.
This reflects my shift of personal preference in programming language, and
hopefully has led to a more stable and better-structured piece of software.
Unfortunately, in this transition, cross-platform portability has been sacrificed.
The new version uses Microsoft's .NET framework.

I am hopeful that, in due course, the Linux open-source community will develop
ways to run Microsoft .NET GUI applications, which would solve the cross-platform problem.

However, in the short term, the sacrifice has saved me time and effort and
allowed me to concentrate on building a substantially improved version of Cronos.

Readers who never used the original (pre-version 2.00) Cronos will
probably want to skip straight ahead to the next section.  For those familiar with the old version, the
primary changes can be summarized as follows.

\begin{itemize}
\item {\bf Data Handling.} The GUI has been substantially revised.  It now consists of two main display areas:
a workspace, and a display area.  The workspace contains nodes representing data objects, transformations, models, etc.
The user can click on any node in the workspace, and an associated visual display will be shown in the display area.
This allows for much better visual display of the analysis process, and provides more flexibility.
For example, the workspace can contain multiple time series objects, each one can be linked to one or more models, 
and so on.  Furthermore, the workspace itself can be saved and reloaded.
\item {\bf Multivariate Support.} The package now fully supports multivariate and univariate time series within
the same framework.
\item  {\bf Extensibility.} Internal data structures have been revised to make it easier for
contributors to develop their own models.  Furthermore, a plugin system has been added, providing a clean
way to encapsulate custom models/transforms in DLLs so that they can be distributed separately from the
core software itself.
\end{itemize}

\chapter{Tutorial}

Perhaps the quickest way to get a sense of how Cronos works is to go through a simple tutorial.
In this exercise we analyze the S\&P 500 index by getting data from Yahoo Finance, loading it into Cronos, and fitting some different models.
We then demonstrate forecasting using the fitted models.

\section{Part 1: Importing Data and Fitting Models}

\begin{enumerate}
\item Start Cronos.  You should get a blank screen that looks something like this.
The gray area at the left of the window is the ``Workspace", and the area to the right is the ``Display Area".
These are both empty for now.

\begin{center}
\includegraphics[width=12cm]{startscrn.pdf}
\end{center}
\item In a browser window, go to {\tt http://finance.yahoo.com}.
Then next to the GetQuotes button, enter
``\^{ }GSPC'' (without the quotes).  In the bar on the left, click on ``Historical Prices".
Then select a date range, for example, Jan. 1st, 2001 to Jan. 1st, 2009, and click on ``Get Prices".
Near the bottom of the window, find the \button{Download to Spreadsheet}, click on it,
and save the file somewhere.

\item You can import the data either directly from the file, or through the clipboard.
\begin{enumerate}
\item
(From clipboard) Open the downloaded file in Excel (or other spreadsheet).  You should see some multi-column data, beginning with a header row.
Select the entire contents of the spreadsheet (probably Ctrl-A), and copy it to the clipboard (Ctrl-C).
Then in Cronos, select \menuoption{Data}{Import from Clipboard}.  
\item
(From file) In Cronos, select \menuoption{Data}{Open File}.  Open the file you just downloaded.
\end{enumerate}
After following one of these two procedures, a dialog box will appear, allowing you to decide which columns you wish
to import.  Make sure only the last column, labelled {\bf Adj Close}, is selected and click on OK.
Nothing appears to happen, but you can now ``drop" the data into the workspace.
Move the cursor over the workspace; it will change into a ``+'' symbol.  Click where you
want to drop the data.  A data node coloured green will appear.  It should be labelled ``Adj Close''.

\item 
Click on the newly-dropped data node.  It will be highlighted, and a plot of the data will appear in the display area.
Now Cronos should look something like this.

\begin{center}
\includegraphics[width=12cm]{tut01.pdf}
\end{center}

\item
Right-click on the data node.  A dialog box will appear.  You can enter a text description.
Just put in some arbitrary text.   This does not affect any analysis, 
but when you move the cursor over the node, this description will be displayed.
As you add more and more nodes, it's useful to be able to attach descriptions to nodes to remind
yourself what you were doing.

\item For financial time series like this one, it is typical to fit some kind of model to the log-returns.
To do this, we begin by taking log-returns.  Select \menuoption{Transforms}{Log-Return}.  Now, as before, move the
cursor over the workspace (it will turn into a plus symbol) and click to drop your log-return transformation.

At this point the log-return transformation has no input.
You must assign the transformation an input before it can generate output.
To do this, you will make a link from the data node to the transformation node.
Right-click on the data node and hold the right-mouse button down.  Now drag the cursor over the top of the
transformation node and release the mouse button.
An arrow will appear connecting the data to the transform.

To see the results of the transformation, select (click on) the transformation node.
Cronos should now look something like this.

\begin{center}
\includegraphics[width=12cm]{tut02.pdf}
\end{center}

\item
Although not strictly necessary, it is often useful to perform some kind of subsampling of our original time series.
Select \menuoption{Transforms}{Sampler} and drop the sampler into the workspace.
As in the step above, use the right-mouse button to make a connection from the log-return node to the new sampler node.
Then click on the sampler node.  You will see that the display area contains two parts: at the left is a collection
of ``properties" of the transformation, with a \button{Recompute} underneath; at the right is a plot of the result
of the transformation.

A sampling transformation simply specifies a collection of times at which the input should be sampled, and constructs as output
a new time series which is the original, sampled at these times.  If the original does not have a value at a sample time, then
the sampler uses the value with the \emph{last} time-stamp before the desired time.  (In other words, it regards the input as a
step function.)

You may want to change the start and end times of the transformation.  In particular, the end time defaults to the current clock time.
Since this is beyond the end of the original time series, you may want to change the end time of the transformation to
Jan. 1st, 2009.  Otherwise the most recent values of the output will all be equal to the last available value, effectively
padding the end of the time series with a long run of constant values.
To change transform properties, simply modify the fields in the display area, then click on the \button{Recompute}.


\item Next we can fit some models to the sampled log-returns.
Select \menuoption{Models}{ARMA}.  Set the autoregressive order to 1, the moving average order to 0,
and click on OK.  This will create a first-order autoregressive model.  As before, drop the new model node somewhere in the workspace.
Use the right-mouse button to make a connection from the transformation node to the new ARMA model node.

Now click on the ARMA model node.  The display area will show model details, as well as a plot of the residuals for the model and the associated data.
By default the model is a simple white noise model; its parameters have not been optimized.  To obtain sensible parameters for the given data,
click on the \button{Fit by MLE} in the display area.  This maximizes the likelihood with respect to model parameters.
Wait for a few seconds, and then you should see parameters in the display area change, and residuals updated.

\item Drop a couple more models into the workspace.  You could put in a GARCH(1,1) model and an ARCH(1) (GARCH(1,0)) model.
Connect the sampled log-returns to these models as well.  (You can have multiple outgoing links from a node.)
For each model, fit parameters by maximum likelihood estimation.
At this point, Cronos should look something like this.

\begin{center}
\includegraphics[width=12cm]{tut03.pdf}
\end{center}

\item You can compare models quickly by clicking in turn on each of the three models and looking at the log-likelihood displayed in the display area.
In this case, not surprisingly, the GARCH(1,1) model appears to perform much better than either the AR(1) or the ARCH(1) model.
(This is fairly typical for financial time series, since the GARCH model captures heavy tails and bursts in volatility.)

\item {\bf Saving your workspace.}  So that you do not have to repeat this whole process next time you do analysis, you can save the entire workspace
as a \emph{project}.  Just select \menuoption{File}{Save Project}, and choose a filename.  To retrieve the workspace later, use
\menuoption{File}{Open Project}.

\end{enumerate}

\section{Part 2: Forecasting}

So far in the tutorial you have seen how to import data, apply simple transforms, and fit models.
Often the next step is the generation of forecasts.  Forecasting is handled differently in Cronos than in most packages.
The idea is that forecasting can be regarded as a transformation that takes as input some initial time series data
and a model specification, and then generates forecasts of values at specified time points in the future.
Thus it takes as input a time series and a model, and generates as output another time series.
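To make this idea concrete, here is a minimal sketch (in Python, not Cronos code) for a fitted zero-mean AR(1) model, where the best linear predictor $h$ steps ahead is $\phi^h$ times the last observation:

```python
# Illustrative sketch only -- not part of Cronos.  For a fitted zero-mean
# AR(1) model X_t = phi * X_{t-1} + Z_t, the best linear predictor of
# X_{n+h} given data up to time n is phi^h * X_n.

def ar1_forecast(data, phi, horizon):
    """Return the h-step-ahead best linear predictors, h = 1..horizon."""
    last = data[-1]
    forecasts = []
    for _ in range(horizon):
        last = phi * last          # each extra step multiplies by phi
        forecasts.append(last)
    return forecasts

# Starting data ending at 2.0, with phi = 0.5, forecast three steps ahead.
print(ar1_forecast([1.0, 1.5, 2.0], 0.5, 3))   # [1.0, 0.5, 0.25]
```

The forecast transform plays the role of this function: its inputs are the starting data and the model, and its output is the forecast series.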
To set this up in Cronos, continue from where you left off at the end of Part 1 by
carrying out the following steps.   (If you exited Cronos or no longer have the project,
just use \menuoption{File}{Open Project} and load the file that you saved at the end of Part 1.)

\begin{enumerate}
\item Go to \menuoption{Transforms}{Forecast} and drop a Forecast transform into the workspace.
\item Connect the output labelled ``The Model" from your ARMA(1,0) node to the input labelled ``Model" of
your new forecast node.
\item Connect the data from the log-return node to the ``Starting Data" input of your new forecast node.
At this point your display should look a bit like this.

\begin{center}
\includegraphics[width=12cm]{tut04.pdf}
\end{center}

\item Click on the forecast node.  Then click on the ``Specify Times" button in the lower left of the display area.
A dialog box will appear, which allows you to specify the range of future time points (relative to the starting data)
that you want to generate forecasts at.
\item Click on the ``Recompute" button in the forecast transform display area.
At this point you should see a plot of forecasts (best linear predictors) in the forecast transform display area.
\item To append these forecasts to the original data, go to \menuoption{Transforms}{Merge} and drop a merge
transform into the workspace.  Then connect both the original log-returns and the forecast node output to the
new merge transform node, one as ``Time Series \#1" and the other as ``Time Series \#2".
Click on the merge node.  You will see the combined time series.  To zoom in on the recent portion, use the
mouse to select a rectangular region at the right side of the plotted data.
Now you see your original log-returns, combined with the forecasted log-returns.
Cronos should now look something like this.  

\begin{center}
\includegraphics[width=12cm]{tut05.pdf}
\end{center}

(Remember that you can also export the merged time series
to Excel by right-clicking on the plot.)
\end{enumerate} 

\chapter{General Structure}

Conceptually, everything you do in Cronos revolves around the use of nodes in a workspace.
The workspace is represented visually in the gray area in the left of the Cronos application window,
and contains a set of connected nodes.  In the right part of the application window,
a display gives details about the currently-selected node.


\section{Nodes}

There are five different types of nodes.  Most nodes are created by selecting appropriate menu options.
For example, the \menuoption{Transforms}{(...)} menu options create transform nodes,
the \menuoption{Data}{(...)} options allow you to create data nodes by loading data from files, 
and the \menuoption{Models}{(...)} options allow you to create model nodes.

\begin{itemize}
\item {\bf Data Objects.}  Data objects have no inputs.  Rather, they are connected as inputs to other nodes.
\item {\bf Data Sources.}  Like data objects, these nodes have no inputs.  However, they provide connectivity to sources
of data outside Cronos.  For example, a data source could query a database in order to retrieve time series data.
\item {\bf Transformation Objects.}  Transformations have a fixed number of inputs and outputs.
They take one or more time series as inputs and construct one or more time series as output(s).   When selected, a transformation node
shows the transformed time series (output) in the display area.  Transforms typically have parameters, which can be edited 
in the display area.  After changing transform parameters, the output can be recomputed by clicking on the
\button{Recompute}.
\item {\bf Model Objects.} Models typically have one time series input.  By default, when created, model nodes have
default parameter values.  To fit a model, you need to make sure a time series is connected as input, 
then click on the \button{Fit by MLE} in the display area.  The display area shows model parameters, as well as residuals
for the current input and parameters.   Residuals can be taken as output from the model.
\item {\bf Visualization Objects.} These objects have one input and no outputs.  They simply provide additional
ways of visualizing a time series (or other data object) beyond those which are already used in other nodes.
\end{itemize}

Details of particular objects are shown in the display area to the right of the workspace.
To show details for a specific node (or to perform other object-specific operations), you just click on the
node in the workspace.

\section{The Workspace}

The nodes in the workspace, along with their connections, provide a visual representation
of your data analysis process.  They also provide a convenient way for you to repeat analysis
with different parameters or models.  If the output of one node changes, for example, if you
change transformation parameters and click on the \button{Recompute}, then every node
that depends on this output will be recomputed automatically.

To use Cronos effectively, it's useful to understand how to manipulate nodes in
the workspace.

\subsection{Dropping/Moving Nodes}

You create a node by selecting menu options, or in certain cases (model simulation, for example)
by clicking on buttons.

Nodes may be data objects, transformations, models, or visualizations.

When a node is created, it is not connected to any other nodes.  In fact, it is not even
visible since it has no location in the workspace.  Rather, it is simply ``attached" to the cursor.
When you move the cursor into the workspace, it will change to a ``+" symbol, indicating that
you can drop a node.  To do so, simply move the cursor to a desired position in the workspace
and click the mouse.  The node will appear in the workspace.  At this point the cursor reverts
to its default appearance.

After its initial placement, a node can be moved.  Left-click on it and hold the left mouse button
down for about half a second; you can then drag the node to a new location.

\subsection{Connecting Nodes}

To connect nodes, use the right-mouse button.
Click on the source node (the node you are connecting from) with the right-mouse button, hold the button down,
drag to the target node, and release the mouse button.

Nodes may have multiple inputs and outputs.  If the source node has exactly one output and the target
node has exactly one input, the attempted connection is unambiguous, and if the connection types are
valid, then you will see an arrow drawn in the workspace connecting the nodes.

If there are multiple inputs or outputs, then a dialog box will appear asking you to specify
which input is to be connected to which output.  Select the appropriate input and output
and click on the \button{OK}.  


\subsection{Labelling Nodes}

It is often convenient to attach text comments to a particular node.
For example, you might want to remind yourself of the source of some data,
or why you apply a particular transformation, etc.
To attach a tool-tip text description to a node, right-click on it.
A dialog box will appear, and you can enter your text.
The text will be associated with the node, and saved along with the
rest of the project.

\subsection{Copying/Pasting/Deleting Nodes}

It's also possible to copy and paste a node.
Select a node and press Ctrl-C to copy it to the clipboard.
Then press Ctrl-V to paste it.  When you press Ctrl-V, the copy of the node will
be ``attached" to the cursor so that you can drop the new copy
in the workspace.

To delete a node, simply select it, and then press the Del key.
The node will be removed, along with all of its connections to other nodes.

\section{The Display Area}

The display area provides information about the currently-selected node.
The nature of the display depends on the type of the object associated with the node.

\subsection{Display of Data Nodes}

\begin{itemize}
\item
{\bf Univariate Time Series:} A univariate time series is plotted along with its sample ACF and PACF in the display area.
\item
{\bf Multivariate Time Series:}
For a multivariate time series, one component at a time is plotted using a dark line, along with the corresponding ACF and PACF.
The remaining components are also plotted, in light gray, each on its own scale in the background.
The scale indicated by the labels on the y-axis applies only to the main component plotted with the dark line.
To change the main plotted component of a multivariate plot, just drag the vertical scroll bar at the far right of the plot.
\item
{\bf Longitudinal Data:}
Longitudinal data objects are collections of many (usually small) time series.
In contrast with multivariate time series analysis, where one usually tries to explain relationships between the different components,
in longitudinal analysis, one usually tries to find a model that simultaneously explains all of the components assuming they are
independent of each other.
Longitudinal data is displayed in a similar manner to multivariate time series, but since the components may have different starting
points in time, they are re-aligned when displayed so that they all start at the same time.
\end{itemize}

There are a number of useful functions you can carry out using the data display area.

\begin{itemize}
\item Right-click on the plot to bring up a drop-down menu.  With this menu you can
\begin{itemize}
\item Change the title and/or description of a time series.
\item Save the time series to a file.
\item Export the time series to the clipboard (with or without a leading date/time column).
\end{itemize}
\item Drag over a rectangular area with the left-mouse button to zoom in on a plot.
To zoom out fully again, click on the data node in the workspace.
\item Copy the plot to the clipboard to be pasted into another document.
\item Export the plot in PDF format to a file.
\end{itemize}

Note on exporting plots:  you have two standard methods of copying plots into other documents.
First, you can copy a bitmapped version of the plot to the clipboard and paste it into another application.
Because the image is passed around as a bitmap, it may appear ``pixellated" (blocky) in a final printout.
For this reason, it is generally much better to export plots to PDF files.  PDF files represent lines in a native format
instead of translating them into sets of pixels, so image quality for things like time series plots is much higher.
If your document generation software supports the inclusion of PDF images, you should use this option.
To see the difference in quality, consider the following two plots (residuals from the GARCH model fit in the tutorial).
The one on the left is copied as a bitmap from Cronos, the one on the right is exported as a pdf file from Cronos.
If you print this document on a good laser printer, the difference between the two should be noticeable.


\subsection{Display of Transformation Nodes}

Transformations take as input one or more univariate or multivariate time series (sometimes along with other objects), 
apply some transformation, and generate a univariate or multivariate time series (sometimes with other objects) as output.

Each transformation display shows transform parameters (if any), along with a plot of the result of the transformation.
Parameters can be modified directly by editing them in the left part of the display area.
After changing parameters, click on the \button{Recompute} to re-compute the transformation.



\subsection{Display of Model Nodes}

Model displays show a complete model description, including current model parameter values, as well as a plot of
residuals (if any).  
A brief description of the data connected to the model is shown, as well as the log-likelihood of this data
when computed using current model parameters.


\subsection{Display of Visualization Nodes}

%\chapter{Multivariate Analysis}
%To be filled in.

%\chapter{Node-Specific Documentation}
%To be filled in.

Visualizations typically provide specific ways of looking at objects.
Appearance and functionality are both highly specific to individual visualizations.


\chapter{Time Series Objects}

Before seeing how transforms work and how models are fit in Cronos, it is important to understand how time series are stored.

If you look at any standard textbook on time series, you will see that
a time series is defined as a time-indexed set of random variables,
$$
\{X_t,~t \in {\cal T}\}.
$$
Usually the index set $\cal T$ is taken to be the set of all integers,
and it is assumed that one only observes a finite subset of the whole process.
That is, we make observations of the random variables at a finite set of
times.  For example, we might record
$$
X_1 = 100, \quad X_2 = 98, \quad X_3 = 99, \quad X_4 = 96.
$$

For most theoretical analyses, the actual times
at which these four observations take place are not important.
The only thing that matters is that the sequence in time is correct,
and that they are (in some sense) 
equally-spaced-apart-in-time measurements.

Cronos takes a stricter approach in its representation of a time series.
It requires that every point has an exact timestamp that is measured
in real-world time units.
For example, the four observations above could have been taken at
\begin{eqnarray*}
t_1 & = & \mbox{1PM Jan. 1st, 2001}, \\
t_2 & = & \mbox{1PM Jan. 2nd, 2001}, \\
t_3 & = & \mbox{1PM Jan. 3rd, 2001}, \\
t_4 & = & \mbox{1PM Jan. 4th, 2001}.
\end{eqnarray*}

The stricter requirement serves two purposes.
\begin{enumerate}
\item It provides for better record keeping.  You don't have to keep a separate description of your time series that explains when the observations were recorded.
\item It allows for a much richer family of models to be fit.
In particular, it makes it possible to fit continuous-time models to data,
or to make use of other transforms and models that can take into account
possibly-irregular time gaps between successive measurements.
\end{enumerate}
That said, many of the built-in transformations and models in Cronos perform analysis in the traditional way, regarding the time series
simply as a series of equally-spaced measurements whose exact timestamps do not matter.

\section{Univariate Time Series}
Univariate time series have a single real-valued observation at each timestamp.
Much of the time series literature concentrates on transforms and models for
univariate time series.

Univariate time series are stored as a list of timestamped real values.
Perhaps the easiest way to construct such an object and import it into Cronos is
to create a two-column spreadsheet, where the first column contains timestamps,
and the second column contains values, copy it to the clipboard, and then
use the \menuoption{Data}{Import from Clipboard} menu option in Cronos.
If the first row of your spreadsheet contains text fields these will
be regarded as a header, and the title of the time series will be taken from
the first row of the second column (the first column contains dates, so its
header will be ignored).
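As a hypothetical illustration (in Python rather than Cronos's internal C\# structures; the names here are made up for this sketch), such a series amounts to a list of timestamped values:

```python
# Hypothetical sketch: a univariate time series as a list of
# (timestamp, value) pairs, mirroring the two-column spreadsheet layout.
from datetime import datetime

series = [
    (datetime(2001, 1, 1, 13, 0), 100.0),
    (datetime(2001, 1, 2, 13, 0),  98.0),
    (datetime(2001, 1, 3, 13, 0),  99.0),
    (datetime(2001, 1, 4, 13, 0),  96.0),
]

# The header of the second spreadsheet column would supply the title;
# the date column's header is ignored, as described above.
title = "Adj Close"
print(len(series), title)
```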

\section{Multivariate Time Series}
Multivariate time series have vector-valued observations at each timestamp.
Many transforms and models for univariate time series can be extended in an
obvious way to the multivariate case.

Cronos provides support for multivariate analysis.

To create a multivariate time series for importing into Cronos,
you can construct a spreadsheet with more than two columns.
The first column should contain timestamps.  The second, third, ..., columns
contain the multiple values that make up the vector for the corresponding timestamp.  As in the univariate case, you just copy the full array to the
clipboard and then use the \menuoption{Data}{Import from Clipboard} option in Cronos.

It is also possible to construct multivariate time series by ``binding" two univariate time series together, or to extract univariate time series from
multivariate time series by performing a ``splitting" operation. (See the section in the next chapter on
multivariate transforms.)
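The binding and splitting operations can be sketched as follows (illustrative Python, not the actual Cronos API; here a series is a list of (timestamp, value) pairs):

```python
# Illustrative sketch of "binding" two univariate series (with identical
# timestamps) into a multivariate series, and "splitting" one component
# back out.  Not the actual Cronos API.

def bind(ts_a, ts_b):
    """Combine two univariate series into one bivariate series."""
    assert [t for t, _ in ts_a] == [t for t, _ in ts_b], "timestamps differ"
    return [(t, (a, b)) for (t, a), (_, b) in zip(ts_a, ts_b)]

def split(mts, component):
    """Extract a single component of a multivariate series."""
    return [(t, vec[component]) for t, vec in mts]

a = [(1, 10.0), (2, 11.0)]
b = [(1, 0.5), (2, 0.6)]
m = bind(a, b)
print(split(m, 1))   # [(1, 0.5), (2, 0.6)]
```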



\chapter{Transforms}

Much of the data analysis process involves 
applying preliminary transformations to data.
In Cronos this is handled through the use of transformation objects.
In this chapter we describe Cronos transformations in detail.

Each type of transform has one or more inputs, one or more outputs, 
and a number of parameters that can be set by the user.
The display area for a transform allows the user to edit these parameters,
and has a \button{Recompute} which can be clicked to force the
transform to be recomputed with the current set of parameters.
Consider for example, the figure below.  
It shows the display area associated with
a difference transformation whose input is the time series of
the S\&P 500 index.

\begin{center}
\includegraphics[width=12cm]{diffparms}
\end{center}

The parameters of the transform are circled in red.
By default, a differencing transform differences at lag 1, i.e.
it replaces the time series $\{X_t\}$ by the
time series $\{Y_t\}$, $Y_t=X_t-X_{t-1}.$
But differencing can be performed at different lags.
To do this, you would change the value of 1 to something else
in the display area, then you would click on the \button{Recompute}
to recompute the transformation with the new parameter.


\section{Differencing/Integrating Transforms}

Differencing transformations difference time series in time at a specified lag.
That is, for an input time series
$$
\{X_1,...,X_n\}
$$
with timestamps $\{t_1,...,t_n\}$,
the output of the transformation is
$$
\{Y_t,~t=1,...,n-\mbox{lag}\},
$$
where $Y_t = X_{t+\mbox{lag}} - X_{t}.$

Timestamps are assigned depending on the specified value of
the {\bf LeftTimeStamps} parameter.
If LeftTimeStamps is true, then the timestamps will be at the left of the differencing intervals, that is,
$$
t_1,...,t_{n - \mbox{lag}}.
$$
If LeftTimeStamps is false, then the timestamps will be
$$
t_{1+\mbox{lag}},...,t_n.
$$
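The effect of the lag and LeftTimeStamps parameters can be sketched as follows (an illustrative Python version, not the Cronos internals):

```python
# Sketch of the differencing transform: Y_i = X_{i+lag} - X_i, with
# timestamps taken from the left or right end of each differencing
# interval according to the LeftTimeStamps parameter.

def difference(values, timestamps, lag=1, left_time_stamps=True):
    n = len(values)
    diffs = [values[i + lag] - values[i] for i in range(n - lag)]
    stamps = timestamps[:n - lag] if left_time_stamps else timestamps[lag:]
    return list(zip(stamps, diffs))

x = [100.0, 98.0, 99.0, 96.0]
t = [1, 2, 3, 4]
print(difference(x, t, lag=1, left_time_stamps=False))
# [(2, -2.0), (3, 1.0), (4, -3.0)]
```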

\section{Log Transforms}

These transforms are fairly self-explanatory.
For an input time series
$$
\{X_1,...,X_n\}
$$
with timestamps $\{t_1,...,t_n\}$,
the output of the transformation is
$$
\{Y_t = \log(X_t),~t=1,...,n\},
$$
with the same timestamps.
This is the natural log (not log base 10).

\section{Log-Return Transforms}

This is simply a combination of two things: a log transformation and a differencing transformation.
Thus for input 
time series
$$
\{X_1,...,X_n\}
$$
with timestamps $\{t_1,...,t_n\}$,
the output of the transformation is
$$
\{Y_t = \log(X_t/X_{t-1}) = \log(X_t) - \log(X_{t-1}),~t=2,...,n\},
$$
with timestamps $\{t_2,\ldots,t_n\}.$
Again, this is the natural log (not log base 10).
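As a sketch (illustrative Python, not the Cronos internals), the transform amounts to:

```python
import math

# Sketch of the log-return transform: Y_t = log(X_t / X_{t-1}), keeping
# the timestamps t_2, ..., t_n.  Values must be strictly positive.

def log_returns(series):
    """series: list of (timestamp, value) pairs, in time order."""
    return [(t1, math.log(v1 / v0))
            for (t0, v0), (t1, v1) in zip(series, series[1:])]

prices = [(1, 100.0), (2, 98.0), (3, 99.0), (4, 96.0)]
for t, r in log_returns(prices):
    print(t, round(r, 4))
```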


\section{Sampling Transforms}

Sampling transforms allow you to
\begin{itemize}
\item clip a time series to a certain range of times, and/or
\item sample the time series at regular time intervals.
\end{itemize}

Parameters are fairly obvious.  There is a starting date/time, an ending date/time,
and a sampling interval.

Date/times are represented in the standard American format, that is, month/day/year hours:minutes.
For example, ``1/2/2009 3:21PM" (or ``1/2/2009 15:21") represents 3:21PM on Jan. 2nd, 2009.

Sampling intervals are represented in the format days.hours:minutes:seconds.
The default is ``1.00:00:00", which represents exactly one day.
If you want to take all of the points from the starting date to the ending date,
specify a sampling interval of 0.

To automatically set the start and end dates to the beginning and end of the range of the input,
you can click on the \button{Autorange}.

Sampling an irregularly-recorded time series at regular time intervals means that some samples may
fall between timestamps for the series.  In this case, Cronos regards the time series as a step-function,
so if a timestamp is missing, the sampled value is taken to be the value at the last timestamp before
the missing time.
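The step-function rule can be sketched as follows (illustrative Python, not Cronos code; timestamps and intervals are plain numbers here rather than dates, to keep the sketch short):

```python
# Sketch of the sampling transform: sample a (possibly irregular) series
# at start, start + interval, ..., up to end, treating the input as a
# step function (each sample takes the last value at or before its time).

def sample(series, start, end, interval):
    """series: (timestamp, value) pairs sorted by timestamp."""
    out = []
    t = start
    while t <= end:
        earlier = [v for (ts, v) in series if ts <= t]
        if earlier:                # skip sample times before the first point
            out.append((t, earlier[-1]))
        t += interval
    return out

s = [(1, 10.0), (3, 12.0), (7, 11.0)]
print(sample(s, 0, 8, 2))   # [(2, 10.0), (4, 12.0), (6, 12.0), (8, 11.0)]
```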


\section{Linear Combination Transforms}

\section{Threshold Transforms}

\section{Merging Transforms}

\section{Custom Transforms}

\section{Finance-Related Transforms}

\section{Multivariate Transforms}



\chapter{Models}

Model nodes in Cronos keep track of particular models.
When you create a model, you specify the model order, but not the parameters.
Parameters can then be specified manually, or they can be estimated.
Most models support parameter estimation by numerical maximization of the log-likelihood,
although some (Vector ARs) only support moment-based parameter estimation.

Model nodes always take at least one set of data as input, and have a number of different
outputs.  Outputs typically include one-step predictors and residuals, among other things.

\section{ARMA Models}


$\{X_t\}$ is said to be an \mydefn{ARMA(p,q)} (autoregressive moving-average)
process if it is a stationary process satisfying the
difference equation
\be
\label{eq:arma}
X_t - \phi_1 X_{t-1} - \ldots - \phi_p X_{t-p}
 = Z_t + \theta_1 Z_{t-1} + \ldots + \theta_q Z_{t-q},
\ee
where $\{Z_t\} \sim \mbox{WN}(0,\sigma^2).$
(This implies that $\eof{X_t}=0$.)  The constants $p$ and $q$
are referred to, respectively, as the \mydefn{autoregressive order}
and \mydefn{moving average order}.

If $\{X_t-\mu\}$ is ARMA(p,q) for some constant $\mu \ne 0$,
then $\eof{X_t}=\mu$, and we say that $\{X_t\}$ is
``ARMA(p,q) with mean $\mu$''.

Constraints are placed on the parameters $\{\phi_1,\ldots,\phi_p,\theta_1,\ldots,\theta_q\}$
in order to ensure that the model is causal and invertible.  See \cite{B&D} or any other time series
text for more details.
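To make the difference equation concrete, here is an illustrative Python simulation of an ARMA($p,q$) process with Gaussian white noise (Cronos itself performs this kind of computation internally, in C\#):

```python
import random

def simulate_arma(phi, theta, sigma, n, burn_in=200):
    """Simulate X_t = phi_1 X_{t-1} + ... + phi_p X_{t-p}
                    + Z_t + theta_1 Z_{t-1} + ... + theta_q Z_{t-q},
       with {Z_t} ~ WN(0, sigma^2) taken to be Gaussian."""
    x, z = [], []
    for t in range(n + burn_in):
        z.append(random.gauss(0.0, sigma))
        x_t = z[t]
        x_t += sum(p * x[t - 1 - j] for j, p in enumerate(phi) if t - 1 - j >= 0)
        x_t += sum(q * z[t - 1 - j] for j, q in enumerate(theta) if t - 1 - j >= 0)
        x.append(x_t)
    return x[burn_in:]   # discard the burn-in so the output is nearly stationary

series = simulate_arma(phi=[0.5], theta=[0.3], sigma=1.0, n=500)
```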


\section{ARMAX Models}

ARMAX models take the same form as ARMA models, but include an extra driving term which can be 
regarded as an ``exogenous input''.   Specifically, $\{X_t\}$ is governed by
an ARMAX model if it satisfies the difference equation
\be
\label{eq:armax}
X_t - \phi_1 X_{t-1} - \ldots - \phi_p X_{t-p}
 = \gamma U_{t-1} + Z_t + \theta_1 Z_{t-1} + \ldots + \theta_q Z_{t-q},
\ee
where $\{Z_t\} \sim \mbox{WN}(0,\sigma^2)$ and $\{U_t\}$ is the exogenous process,
assumed to be stationary.
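As an illustration (again in Python, purely for exposition), the exogenous term simply adds $\gamma U_{t-1}$ to the right-hand side at each step:

```python
import random

def simulate_armax(phi, theta, gamma, u, sigma):
    """Simulate an ARMAX process driven by an exogenous series u:
       X_t = sum_j phi_j X_{t-j} + gamma U_{t-1}
             + Z_t + sum_j theta_j Z_{t-j}."""
    x, z = [], []
    for t in range(len(u)):
        z.append(random.gauss(0.0, sigma))
        x_t = z[t]
        if t >= 1:
            x_t += gamma * u[t - 1]          # the exogenous driving term
        x_t += sum(p * x[t - 1 - j] for j, p in enumerate(phi) if t - 1 - j >= 0)
        x_t += sum(q * z[t - 1 - j] for j, q in enumerate(theta) if t - 1 - j >= 0)
        x.append(x_t)
    return x

exogenous = [1.0] * 100                      # a constant exogenous input
series = simulate_armax(phi=[0.5], theta=[], gamma=0.2, u=exogenous, sigma=1.0)
```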

\section{GARCH Models}

$\{X_t\}$ is said to be a zero-mean
GARCH($p,q$) (generalized autoregressive conditionally heteroscedastic order $p,q$)
process if
\begin{eqnarray*}
X_t & = & \sigma_t Z_t, \quad \{Z_t\} \sim \mbox{IIDN}(0,1) \\
\sigma_t^2 & = & \alpha_0 + \sum_{j=1}^p \alpha_j X_{t-j}^2
 + \sum_{j=1}^q \beta_j \sigma^2_{t-j}
\end{eqnarray*}
with $\alpha_0>0,~\alpha_j \ge 0, ~j=1,2,\ldots,p,$
and $\beta_j \ge 0,~j=1,2,\ldots,q .$

If $\{X_t - \mu\}$ is a zero-mean GARCH($p,q$) process, then $\{X_t\}$ is said to be a GARCH($p,q$) process with mean $\mu.$

GARCH models are often used to explain the bursts in volatility exhibited by time series of stock price log-returns.
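The recursion above can be illustrated with the following Python sketch of a zero-mean GARCH($p,q$) simulation (again, Cronos does this internally in C\#):

```python
import math
import random

def simulate_garch(alpha, beta, n):
    """Simulate a zero-mean GARCH(p,q) process.
       alpha = [alpha_0, alpha_1, ..., alpha_p], beta = [beta_1, ..., beta_q]."""
    x, sig2 = [], []
    for t in range(n):
        s2 = alpha[0]
        s2 += sum(a * x[t - j] ** 2
                  for j, a in enumerate(alpha[1:], start=1) if t - j >= 0)
        s2 += sum(b * sig2[t - j]
                  for j, b in enumerate(beta, start=1) if t - j >= 0)
        sig2.append(s2)                                   # conditional variance
        x.append(math.sqrt(s2) * random.gauss(0.0, 1.0))  # X_t = sigma_t Z_t
    return x

series = simulate_garch(alpha=[0.1, 0.2], beta=[0.7], n=500)
```

Bursts of large $|X_t|$ values feed back into $\sigma_t^2$ through the $\alpha_j X_{t-j}^2$ terms, producing the volatility clustering mentioned above.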

\section{Vector AR Models}

\todo

\chapter{Programming Notes}

You can customize and/or extend Cronos directly:
simply modify the source code and recompile.
In general, however, this is not easy, since it requires a thorough understanding of the existing code base.

A more useful approach is to construct extensions by making use of the plugin system.
The plugin system allows you to create custom models and/or custom transforms, which are contained in a DLL file
that can be distributed separately from Cronos itself.  As of version 2.00, Cronos has a \menuoption{Settings}{Load Plugin} menu option
that can load a plugin DLL.

\section{Plugin System}

\subsection{Overview}

To write a plugin, you need to
\begin{enumerate}
\item Create new model and/or transform objects.
\item Write functions that communicate with Cronos to extend its menu.  These functions will create instances of models/transforms.
\item Compile your code into a DLL file that Cronos can load at runtime.
\end{enumerate}

\subsection{Creating a New Model}

The procedure for a univariate time series model is as follows.
(Detailed sample code to be filled in later on ...)

\begin{enumerate}
\item  Derive a new class (the model class) from UnivariateTimeSeriesModel and implement all required methods.
\item  Create a public constructor.  The constructor should call a (private) LocalInitialize function that
fills in default parameter values for the model.  LocalInitialize should also set the ParameterStates property appropriately.
Make sure that InitializeParameters calls LocalInitialize.   
\item Create a local Vector property Parameters with get and set methods.  
For readability, it may be useful to define additional properties that get/set individual parameter vector components by name.
\item Fill in the get method of the Description property; this should return a complete description of the model, including parameter values.
Also override IConnectable.GetShortDescription to return a short string to be displayed in the graph viewer.
\item Fill in ParameterToCube and CubeToParameter.
   CubeToParameter maps a point on the p-dimensional unit hypercube to a valid p-dimensional parameter vector,
   ParameterToCube should map any valid parameter vector onto the hypercube, and
   CubeToParameter(ParameterToCube(x)) should equal x for any valid parameter vector x.
\item Fill in CheckParameterValidity.  This returns true if and only if the current parameters are valid.
   If it returns true for a parameter vector, then it MUST be possible to compute the log-likelihood, forecasts, etc.\ with that parameter vector.
\item Fill in ComputeResidualsAndLL, as well as LogLikelihood.
   These functions are essentially the same, but LogLikelihood takes a parameter vector as an argument (and does not change the current model parameters),
   while ComputeResidualsAndLL uses the current model parameter vector and fills in ``residuals'' and ``LastLogLikelihood'' before returning.
\item Fill in ComputeConsequentialParameters.
   Any parameter i with ParameterStates[i]==ParameterState.Consequential
   should be filled in as a function of the other (non-consequential) parameters of the model and the data.
\end{enumerate}
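To make the shape of the required class concrete, here is a structural sketch of the steps above.  Real plugins must of course be written in C\# against the Cronos assembly; the fragment below is written in Python purely for illustration, and its method names and signatures are assumptions mirroring the list above rather than the actual Cronos API:

```python
class MyCustomModel:
    """Structural sketch only; a real plugin derives (in C#) from
    UnivariateTimeSeriesModel and implements its required methods."""

    def __init__(self):
        # Step 2: the constructor calls LocalInitialize to fill in defaults.
        self.local_initialize()

    def local_initialize(self):
        self.parameters = [0.0]              # default parameter values
        self.parameter_states = ["Free"]     # analogous to ParameterStates

    def cube_to_parameter(self, cube):
        # Step 5: map the unit hypercube onto the valid parameter region,
        # here the interval (-1, 1) for a single parameter.
        return [2.0 * c - 1.0 for c in cube]

    def parameter_to_cube(self, param):
        # Inverse map: cube_to_parameter(parameter_to_cube(x)) == x.
        return [(p + 1.0) / 2.0 for p in param]

    def check_parameter_validity(self, param):
        # Step 6: true iff the log-likelihood etc. can be computed at param.
        return all(-1.0 < p < 1.0 for p in param)

model = MyCustomModel()
round_trip = model.cube_to_parameter(model.parameter_to_cube([0.3]))
```

The round-trip property in step 5 is easy to get wrong; it is worth checking numerically that mapping a valid parameter vector to the cube and back reproduces it.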

\subsection{Creating a New Transform}
\todo

\subsection{Integration with the Cronos Menu}
\todo

\bibliography{rptbib}

\end{document}
