%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Report for the DASE project, a Sequential Data Exploration and Analysis
% tool developed first in the In|Situ| lab at Université Paris-Sud and later
% at Télécom ParisTech
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Created by Olivier Le Floch on 2007-08-16.
% Last updated by Olivier Le Floch on 2008-04-01
% Copyright (c) 2007-2008. All rights reserved.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\documentclass[a4paper,11pt,twoside]{article}

% Use utf-8 encoding for foreign characters
\usepackage[utf8]{inputenc}
\usepackage[francais, english]{babel}

% Setup for fullpage use
\usepackage{fullpage}

% Running headers and footers
% \usepackage{fancyhdr}
% \pagestyle{fancy}
% \addtolength{\headwidth}{40pt}
% \addtolength{\headheight}{30pt}
% \fancyhf{}
% \fancyhead[LE,RO]{\bfseries{\Large\thepage}}
% \fancyhead[RE]{\bfseries\rightmark}
% \fancyhead[LO]{\bfseries\MakeUppercase{\leftmark}}
% \fancypagestyle{plain}{\fancyhead[LO]{}}

% Package for including code in the document
\usepackage{listings}
\usepackage[usenames,dvipsnames]{color}

\usepackage{amsfonts}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{latexsym}
\usepackage{pdfsync}
\usepackage{floatflt}
\usepackage{url}
\usepackage[pdftex]{graphicx}

% Code Formatting
\lstloadlanguages{C, Python, SQL}
\newcommand{\preparePython}{\lstset{language=Python}}
\newcommand{\prepareSQL}{\lstset{language=SQL}}
\preparePython

\definecolor{lbcolor}{rgb}{0.9,0.9,0.9}
\lstset{
  basicstyle=\small\ttfamily,
  float=t,frame=tb,backgroundcolor=\color{lbcolor},rulecolor=\color{black},
  framexleftmargin=5pt,tabsize=4,showstringspaces=false,
  numbers=left,stepnumber=5,numberstyle=\tiny,
  numbersep=5pt,numberfirstline=true,numberblanklines=true,
  commentstyle=\itshape\color{Gray},keywordstyle=\color{blue}\bfseries,
  stringstyle=\color{red},emph={\%inf,\%T,\%F},
  emphstyle=\color{OliveGreen}\bfseries,
  literate={<>}{$\neq$}1 {!=}{$\neq$}1 {<=}{$\le$}1 {>=}{$\ge$}1}

\newcommand{\code}[1]{\texttt{#1}}

\NoAutoSpaceBeforeFDP
\AddThinSpaceBeforeFootnotes
\FrenchFootnotes
\DeclareGraphicsRule{.tif}{png}{.png}
  {`convert #1 `dirname #1`/`basename #1 .tif`.png}

\usepackage[nottoc]{tocbibind}
\setcounter{tocdepth}{2}

% définitions de symboles mathématiques
% ensemble des réels
\newcommand{\R}{\mathbb{R}}
% ensemble des entiers naturels
\newcommand{\N}{\mathbb{N}}
% d droit (pour les éléments différentiels)
\newcommand{\ud}{\, \mathrm{d}} 
\DeclareMathOperator{\Noise}{Noise}
\providecommand{\abs}[1]{\lvert#1\rvert}
\providecommand{\floor}[1]{\lfloor#1\rfloor}
\providecommand{\ceil}[1]{\lceil#1\rceil}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% START OF REPORT                                                              %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\title{
\vspace{-3cm}
\normalsize
\begin{tabular}{p{15cm}}
% ÉCOLE POLYTECHNIQUE\\
% PROMOTION X2004\\
TÉLÉCOM ParisTech \\
LE FLOCH Olivier
\end{tabular}
\vspace{6cm}
\large
\begin{center}
RAPPORT DE BRIQUE DE PROJET\\
\vspace{1cm}
{\Huge DASE -- Data Analysis \& Stream Exploration}\\
% \vspace{1cm}
% NON CONFIDENTIEL
\end{center}
\vspace{6cm}
\normalsize
% \begin{tabular}{p{4cm} p{10cm}}
% Option :                  & Informatique\\
% Champ de l'option :       & Interaction Homme-Machine\\
% Directeur de l'option :   & Gilles Dowek\\
% Directeur de stage :      & Wendy Mackay, Michel Beaudouin-Lafon\\
% Dates du stage :          & 10 avril -- 17 août 2007\\
% Adresse de l'organisme :  & INRIA Futurs - In{\textbar}Situ{\textbar} Project\\
%                           & LRI - Bat 490\\
%                           & Université Paris-Sud\\
%                           & 91405 Orsay\\
% \end{tabular}
}
\author{}
\date{}

\begin{document}

\DeclareGraphicsExtensions{.pdf, .jpg, .tif}
\selectlanguage{english}

\thispagestyle{empty}
\maketitle

\clearpage \thispagestyle{empty} \cleardoublepage \section*{}
\markboth{Abstract}{}

\selectlanguage{francais}

\textbf{Résumé}

L'analyse exploratoire de données séquentielles (ESDA) complémente le data
mining et l'analyse statistique par des techniques de visualisation et de
manipulation de flux de données, visant à révéler les motifs, similarités et
relations de causalité présentes dans ceux-ci, aidant ainsi les chercheurs à
générer des hypothèses sur les informations récoltées au cours d'expériences
ou hors du laboratoire.

Nous identifions l'absence d'une application d'ESDA générique comme étant la
raison pour laquelle ce type d'analyse est rarement effectué, et souvent
réinventé de toutes pièces, dans les projets de recherche actuels sur
l'interaction homme-machine. C'est pourquoi nous présentons DASE, un système
d'analyse exploratoire qui a vocation à être réutilisable dans le contexte de
nombreux projets. Ce système est basé sur une couche d'abstraction des flux et
une algèbre dynamique sur les flux très expressive. Nous proposons un outil de
visualisation chronologique pour des logs de serveurs web basé sur DASE, et
fournissons des éléments confirmant que DASE peut être utilisé pour faire de
l'ESDA afin d'améliorer l'analyse de flux de données variés.

\vspace{5truecm}

\selectlanguage{english}

\textbf{Abstract}

Exploratory sequential data analysis complements data mining and statistical
data analysis with stream data visualization and manipulation techniques aimed
at revealing patterns, similarities and causality relationships they contain,
thereby helping researchers to generate hypotheses on the information
collected during experiments or out of the lab.

We identify the lack of a generic ESDA application as the reason why this
type of analysis is seldom performed, and often reinvented from scratch, in
current research projects in the computer-human interaction field. We
therefore present DASE, an exploratory data analysis framework that is
reusable in the context of numerous projects. This framework is based on a
stream abstraction layer and a highly expressive dynamic stream algebra. We
propose a timeline-based web server log visualization tool built on DASE, and
provide evidence that DASE can be used for ESDA to improve the analysis of
diverse stream data.


\clearpage \thispagestyle{empty} \cleardoublepage \thispagestyle{plain}
\tableofcontents

\clearpage \thispagestyle{empty} \cleardoublepage \thispagestyle{plain}
\section*{Introduction}
\addcontentsline{toc}{section}{Introduction}
\markboth{Introduction}{}

In the computer-human interaction (CHI) field, researchers quite often
generate large sequential data sets: logs of user activity, data collected
during an experiment, or data entered manually by the experimenter. Analyzing
this information correctly and efficiently is crucial to obtaining convincing
research results. Widely used and effective statistical and graphical
analysis tools (such as the R language) and exploratory data analysis tools
(such as JMP) exist, but they are ill suited for what we shall call
exploratory sequential data analysis (ESDA).

Sequential data can be regarded as time-based streams of values rather than
unordered chunks of data. Studying and exploring such data is essential in
order to generate hypotheses on the origin of the information, discover
patterns and correlations, and extract new details. To analyze such material,
statistical analysis and data mining must be complemented with ESDA, which is
characterized by its exploratory nature and by its treatment of data sets as
sequential series of events.

In this project, we try to facilitate the analysis and exploration of
sequential data, by keeping in mind that a generic and reusable tool will help
researchers develop efficient and systematic methods for exploring various
data sets. We start by summarizing prior art to have a better view of how ESDA
is currently done, and then present several sample usage scenarios our
framework will have to be able to tackle. We then define an abstract stream
representation linked to a stream manipulation algebra, enabling users to
define complex expressions to reveal hidden information. Based upon this
stream manipulation algebra, we have written a reusable, SQL-powered Python
framework implementing its ideas, along with a basic viewer and log
manipulation tool, which we finally describe and evaluate.

\begin{figure}[h]
  \centering
    \includegraphics[width=15truecm]{figures/introduction/manyOperatorStreams.png}
  \caption
    {Sample visualization of several streams calculated using the stream algebra}
  \label{fig:manyOperatorStreams}
\end{figure}

\clearpage
\thispagestyle{empty}
\cleardoublepage
\thispagestyle{plain}
\section{State of the art}

In this section we clarify the problem we are trying to solve by summarizing
the current state of the art in exploratory sequential data analysis, which
is related to video annotation and stream data visualization.

\subsection{DIVA - Exploratory Data Analysis with Multimedia Streams}

\begin{figure}[h]
  \centering
  \includegraphics[height=5truecm]{figures/diva/fig-1.png}
  \caption{DIVA's stream visualization}
  \label{fig:DIVA_Streams}
\end{figure}

The DASE project is primarily based on the work done by Wendy Mackay and
Michel Beaudouin-Lafon for the DIVA\cite{DIVA} project on Exploratory
Sequential Data Analysis with Multimedia Streams. For this project, they had
vast amounts of logging data obtained from observing the work of air
controllers. They wished to determine how to improve their workflow, for
example by detecting bottlenecks, or instants during which two controllers
tried to use the same device simultaneously. They used a real-time perspective
visualization of data streams linked to a visualization of the video of the
current instant (Figure~\ref{fig:DIVA_Streams}), and to realize their
exploratory sequential data analysis, they decided to develop a stream
manipulation algebra (Figure~\ref{fig:DIVA_Algebra}) so as to be able to do
more than simple linear exploration of their data. This algebra provides
operators to expand or contract all events in a stream, filter a stream,
replace values with those of another stream, perform arithmetic operations
on values, and so on. Algebraic manipulation of video streams has also been
studied in \cite{AlgebraicVideo}, which describes many operators:
concatenation, looping, stretching, intersection, etc.

\begin{figure}[h!]
  \centering
    \includegraphics[height=2truecm]{figures/diva/fig-2.png}
    \includegraphics[height=2truecm]{figures/diva/fig-4.png}
    \includegraphics[width=15truecm]{figures/diva/fig-3.png}
  \caption
    {DIVA's stream manipulation algebra: edit, cross-product, and stretching}
  \label{fig:DIVA_Algebra}
\end{figure}

However, DIVA was a prototype, and several of its aspects can be improved:

\begin{itemize}
  \item Extensibility and reusability: although the algebra was quite general
        -- it provided means of doing ESDA on more than just the logging data
        of air controllers' activity -- the implementation was quite specific
        to the needs of the DIVA project;
  \item Focus on video data and its annotation: today, the nature of the data
        that is logged has changed somewhat, and logging produces even larger
        quantities of data. Instead of mainly video acquisition, data capture
        now mainly focuses on sensor data or automated monitoring data. This
        means that the data is in a sense automatically annotated, and our
        framework will be able to use this to reveal information that has
        never been seen by human analysts, whereas with manually annotated
        video data, annotations only exist for video that has already been
        viewed;
  \item An API to implement new operators, or extend existing ones;
  \item Multiple data architectures -- data values and stream types -- and
        input data formats;
  \item Scalability and performance.
\end{itemize}

As we shall see in
Section~\ref{subsec:exploring_continuous_observational_data}, most of these
aspects should be considered very important for a generic ESDA application.
Exploratory Sequential Data Analysis can be decomposed into three main
activities:

\begin{itemize}
  \item Capturing and annotating the data;
  \item Manipulating and extracting the data;
  \item Visualizing the data.
\end{itemize}

Many papers have described video annotation and exploration tools (see
\cite{VideoNoter,CapturingWithSTREAMS,Marquee,VideoMosaic,VACA}), most often
focusing on capture, annotation, and visualization of multimedia streams. We
have in 2007 concentrated our efforts on the main new feature of DIVA's
approach: the stream manipulation algebra. In 2008, we improved DASE's
infrastructure. In this report, we tackle the problems of storing,
manipulating and extracting the data in such a way that interaction and
visualization are easy to perform, or can be implemented as an extension to
our work.

\subsection{Exploratory Sequential Data Analysis -\\
  Exploring Continuous Observational Data}
\label{subsec:exploring_continuous_observational_data}

In \cite{Fisher}, C. Fisher and P. Sanderson try to lay paths across the
interdisciplinary field of analyzing observational data. They set forth how
one should do ESDA, and what a good ESDA tool should achieve and enable the
user to do. The paper outlines the requirements good ESDA software tools
should meet -- fulfilling them makes the task of exploring data easier --
and thereby sets a number of constraints that our framework should satisfy.

\begin{itemize}
  \item Replay data as it occurred: real-time visualization;
  \item Great flexibility in the encoding and structure of data;
  \item Ability to describe the data at different grains of analysis:
        multi-scale visualization and data representation;
  \item Ease of expression of relationships between data;
  \item Open structure for interaction with other applications, expandability,
        automation;
  \item Ease of aggregation of different data sources or different cases
        during an analysis.
\end{itemize}

Similar ideas are developed in \cite{Bigbee}, which focuses more on video
annotation and exploration, but also insists on multi-scale annotation, open
architecture, links between data, and an open data structure, so that multiple
types of data can be stored and manipulated.

Building upon the ideas described in \cite{Tukey}, and supported by
\cite{EVA}, the authors of \cite{Fisher} express the opinion that the
techniques of exploratory data analysis (EDA), which encourage analysts to
find powerful re-expressions of data in order to reveal new structure and to
focus on hypotheses generation rather than hypotheses testing, are also
relevant for ESDA. They propose eight fundamental smoothing operations, aiming
at manipulating the data so that its essential structure becomes apparent
(Figure~\ref{fig:the8Cs}).

\begin{figure}[h!]
  \centering
    \includegraphics[width=15truecm]{figures/the8Cs.pdf}
  \caption{The 8 C's - Smoothing operations to reveal the data's structure}
  \label{fig:the8Cs}
\end{figure}

These operations will have to be feasible either by expanding the framework,
or by directly using the operators it provides.

\subsection{New projects need to create their own tools}

To manipulate large quantities of stream data in an efficient way, many
projects have to develop ad hoc tools that are fit for the task at hand. This
is the case for the previously mentioned video-logging and exploration tools,
and is also the case for \code{wmtrace} \cite{wmtrace} -- a user-interface
activity logging program that logs window manager events, LifeLines
\cite{LifeLines} -- a personal history visualization environment -- as well as
for the handling of familiar personal data \cite{MemoireEpisodique}. It can
also be useful in many server log analysis situations.

In each of these scenarios, a data storage system and more or less complete
exploratory mechanisms have to be developed anew. Because of this, analysts
cannot reproduce similar analyses, reuse techniques, experiment across
projects, or easily concentrate on hypothesis generation. To avoid this, the
abstractions and framework we develop should be easily reused and easily
adapted to new projects, by being sufficiently generic, expandable and
adjustable.

\subsection{MacSHAPA and the enterprise of ESDA}

In \cite{MacSHAPA}, Sanderson et al. develop MacSHAPA, which aims to enable
users to ask themselves questions they would not dare otherwise ask because of
an excess of data. This project focuses mostly on data acquisition in a
spreadsheet-like interface, with annotation and visualization features, yet
uses statistical tools for the analysis of the data. It tries to handle more
generic data than most video annotation and exploration tools, and partially
answers the question of a generic approach to ESDA, but the authors of this
article also claim that it is impossible to provide a universal solution for
ESDA. While this is obviously true for the data visualization aspect, it is
not entirely true of the data storage and manipulation part. We claim that on
the contrary, having a generic framework is feasible and interesting, since
using the same tools for different projects will probably help researchers
develop more methodical analyses of their data sets, as they can currently do
with JMP \cite{JMP}.

Furthermore, MacSHAPA concentrates on statistical analysis of the data, which
does not reveal patterns in raw data, or allow visualization of hidden
information by extracting interesting sequences of events, subsets or
sub-calculations from the initial sequential data.


\subsection{A Visual Interface for Multivariate Temporal Data:\\
Finding Patterns of Events over Time}

In \cite{EventPatternsOverTime}, the authors present a simple pattern matching
system for streams of temporal data, where they define a temporal pattern as a
sequence of events (\code{EventSeq}) such as:

\begin{lstlisting}
  EventSeq = Event {[TimeSpan] EventSeq}
  TimeSpan = MinDays MaxDays
\end{lstlisting}

Using this, their system can match sequences of identical events separated by
a bounded number of days on several streams at once. This enables users to
apply similar queries to several different data sets, and extract hidden
information from vast quantities of data.
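
To make the idea concrete, here is a minimal sketch of such bounded-gap
sequence matching. The \code{(type, day)} event encoding and every name
below are our own assumptions, not the interface of
\cite{EventPatternsOverTime}:

\begin{lstlisting}
# Hypothetical sketch of EventSeq/TimeSpan matching; our own code,
# not the paper's system.
def match_pattern(events, pattern):
    """events: (event_type, day) pairs sorted by day.
    pattern: (event_type, min_days, max_days) triples; the day bounds
    constrain the gap to the previously matched event (and are ignored
    for the first element).  Returns the first match, or None."""
    def search(start, prev_day, remaining, acc):
        if not remaining:
            return acc
        etype, lo, hi = remaining[0]
        for i in range(start, len(events)):
            event_type, day = events[i]
            if prev_day is not None and day - prev_day > hi:
                break  # events are sorted, so later gaps only grow
            if event_type != etype:
                continue
            if prev_day is not None and day - prev_day < lo:
                continue
            found = search(i + 1, day, remaining[1:], acc + [events[i]])
            if found is not None:
                return found
        return None
    return search(0, None, pattern, [])
\end{lstlisting}

Run against a toy event list, \code{match\_pattern} backtracks past
candidate matches whose gaps fall outside the \code{TimeSpan} bounds.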

\begin{figure}[h]
  \centering
    \includegraphics[width=15truecm]{figures/patternfinder/fig.jpg}
  \caption{Visualization of matching patterns of events}
  \label{fig:figures_patternfinder_fig}
\end{figure}

This is a typical class of ``queries'' a user might be interested in
performing when exploring his data. Our framework will have to either give
access to such simple primitives, or make it easy to create a query that can
be reused across projects needing similar filtering.


\subsection{Streaming Queries over Streaming Data}

As described in Section~\ref{subsubsec:SQL_and_DBAPI}, we have chosen to use
a database engine as our data storage system and for most operators (see
Section~\ref{sec:stream_manipulation_algebra}), and as a result, have also
looked at existing literature on temporal databases and streaming database
engines.

In \cite{StreamingQueries}, Chandrasekaran and Franklin take a very
theoretical approach, describing ways to run dynamic queries on data streams.
In this approach, filtering on time is explicit: the query is run over
predefined time windows, and is not intended for extracting patterns of
events over the entire length of a stream:

\lstset{language=SQL,morekeywords={BEGIN,NOW}}
\begin{lstlisting}
  SELECT  *
    FROM  Data_Stream AS D_s
   WHERE  (D_s.a < v_1 OR D_s.b > v_2)
   BEGIN  (NOW - 10)
     END  (NOW)
\end{lstlisting}
\preparePython

We, on the contrary, want our filter to be applied to the entire stream at
once, and do not need the network-oriented features these approaches also
develop.
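
As an illustration of the difference, a whole-stream filter over a table of
events can be expressed in a few lines with an embedded database. The schema
and values below are a toy example of our own, not DASE's actual storage
layout:

\begin{lstlisting}
import sqlite3

# Toy illustration (our own schema, not DASE's): events stored as
# (startTime, endTime, value) rows in an embedded database.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE events (startTime REAL, endTime REAL, value REAL)")
conn.executemany("INSERT INTO events VALUES (?, ?, ?)",
                 [(0, 1, 5.0), (1, 2, 42.0), (2, 3, 7.0)])

# Unlike the windowed query above, this filter runs over the entire
# stream, whatever its length; no BEGIN/END time window is involved.
rows = conn.execute(
    "SELECT startTime, endTime, value FROM events"
    " WHERE value > ? ORDER BY startTime", (10,)).fetchall()
\end{lstlisting}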

Similarly, temporal databases -- relational databases that store temporally
evolving data -- are not exactly what we are looking for. Instead of
considering streams of data, they consider the data as chunks, which makes
global visualization of the streams difficult, and as said previously, they
are not suited for operators that alter event times or that consider streams
in their entirety.

This means that we will have to develop our own abstraction for streams, and
if we use a database engine, not all of its features might be fully exploited
by our approach.


\clearpage \thispagestyle{empty} \cleardoublepage \thispagestyle{plain}
\section{Sample usage scenarios}
\label{sec:sample_usage_scenarios}

Before presenting our work, it is useful to show several situations where
ESDA adequately complements statistical analysis, so that the reader may have
a better understanding of the problems we are attempting to solve.

\subsection{Yann Riche - MarkerClock}
\label{subsec:markerclock}

For his upcoming MarkerClock paper, Yann Riche \cite{markerclock} has followed
the activity of many elderly people. For each user, he logs both the human
activity in front of the screen that displays his communication-oriented
augmented clock, and clicks on markers -- the large green geometric symbols
in Figure~\ref{fig:markerclock}. Figure~\ref{fig:markerclock} shows a typical
visualization of this data in the context of the end-user visualization
interface.

\begin{figure}[h]
  \centering
    \includegraphics[height=7cm]{figures/markerclock.pdf}
  \caption{MarkerClock showing two connected users}
  \label{fig:markerclock}
\end{figure}

His data is stored every five minutes: for each five-minute interval, he has
the length of time during which someone was in front of the screen, and the
number and types of markers that were clicked.

Statistical analysis of this data is not sufficient: one must distinguish
between time periods (morning, meal times), and search for patterns among the
activity streams of the people who are supposed to communicate.

Typical questions might be:

\begin{itemize}
  \item In the morning, after the first activity, which markers are activated?
  \item Is there a lot of movement in front of the clock right before meal
        times?
  \item Are moments with lots of movement -- people -- in front of the clock
        also instants when lots of markers are triggered, so as to show that
        MarkerClock improves communication between users?
\end{itemize}
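
The last question above can be approximated with a simple computation on the
five-minute bins. The bin layout \code{(startMinute, presenceSeconds,
markerClicks)} and the 120-second threshold are hypothetical, not
MarkerClock's actual log format:

\begin{lstlisting}
# Hypothetical sketch; our own data layout, not MarkerClock's.
bins = [(480, 250, 3), (485, 10, 0), (490, 300, 4), (495, 0, 0)]

def average_markers(selected):
    return (sum(m for _, _, m in selected) / len(selected)
            if selected else 0.0)

busy = [b for b in bins if b[1] > 120]    # lots of movement in the bin
quiet = [b for b in bins if b[1] <= 120]
# If MarkerClock improves communication, busy bins should show a higher
# average marker count than quiet ones.
more_markers_when_busy = average_markers(busy) > average_markers(quiet)
\end{lstlisting}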

\subsection{Olivier Chapuis - \code{wmtrace}}

With \code{wmtrace} \cite{wmtrace}, Olivier Chapuis has access to all the
clicks, mouse movements, keyboard events, window and widget modifications and
actions that a user does on his machine -- with obfuscation for privacy
reasons. This generates massive amounts of data, which can be analyzed
statistically, but can benefit from stream-based filtering, for instance to
answer the following questions:

\begin{itemize}
  \item When popups appear for a very short time, what actions does a user
        perform right before, and right after, rapidly closing the window?
  \item Are situations where mouse movements are very rapid linked to other
        events?
  \item When grouping events by hour, does a pattern reveal itself in some
        types of events? Does the user have one type of activity when waking
        up, another when working, and another during off hours?
\end{itemize}

All this can help design better user interfaces, detect inconsistencies in
current software design, or analyze users' usage of their computers.


\subsection{System Log Access}

Many system administrators use statistical analysis tools for their system
logs, which can reveal problems in the software run on servers, or potential
hacks on their machines. Once a potential problem has been detected, however,
tracing it back to its origin can be a grueling task. Once again, using ESDA
can help answer questions such as:

\begin{itemize}
  \item What events happen just before the incident?
  \item Do patterns of circumstances similar to the one considered exist in
        other parts of the log?
  \item What events occur simultaneously in separate log files?
\end{itemize}

When analyzing server logs, one can also study web site visiting trends, the
internals of a gaming system, and so on.


\clearpage \thispagestyle{empty} \cleardoublepage \thispagestyle{plain}
\section{Abstract Stream Representation}
\label{sec:abstract_stream_representation}

To be able to manipulate log data in a powerful way, and in order for the
stream algebra to be easily reusable, we propose a data stream abstraction
model, based on the data that the users -- researchers or end users -- need to
represent, as described in Section~\ref{sec:sample_usage_scenarios}.

The first constraint, however, is that streams must represent sequential,
temporal data, which means that one dimension of each and every stream will
be time, and that maintaining links between data as meta-data will mostly be
left to a higher layer than the one we are building. Adding meta-data must
nonetheless be easy. This limitation is acceptable and necessary, because
meta-data is extremely specific to each application in which the framework
can be used.

Hence our primary assumption will be that streams are structures that link
time to data. From a programmer's point of view, a stream will primarily be
able to return data when given a time value.


\subsection{Stream types}

For our stream abstraction to be useful, we have to start by looking at what
streams the users wish to represent. There are in fact four different types of
streams, depicted in Figure~\ref{fig:figures_stream_abstraction_stream_types}
and explained below. For each stream type, we give a series of sample data
sources, some of them from outside the field of computer-human interaction,
both because this stream abstraction could of course be used in other fields,
and because CHI examples would often require more explanation than
``real-world'' examples do.

\begin{figure}[h]
  \centering
    \includegraphics[width=12truecm]{figures/stream_abstraction/stream_types.png}
  \caption{Types of streams}
  \label{fig:figures_stream_abstraction_stream_types}
\end{figure}

\subsubsection{Instantaneous Event Streams}

The first stream in Figure~\ref{fig:figures_stream_abstraction_stream_types}
could represent, for example, the instants when files are opened and closed:
green indicates that a file is opened, red that it is closed, and the
``value'' of each instantaneous event indicates which file is linked to the
event.

These streams' events represent instantaneous occurrences, with no defined
duration. The instants when runners finish a race, filesystem activity and
mouse clicks are examples of instantaneous event streams.

\subsubsection{Continuous Value Streams}

The second stream in Figure~\ref{fig:figures_stream_abstraction_stream_types}
could represent the number of people in a bus: at each stop, a new event
starts, its value indicating the number of people in the bus between that
stop and the next.

These streams' events represent a continuously defined value, which changes
whenever the quantity it represents varies. We can think of this as ``push''
versus ``pull'': the data is pushed into the stream, and all variations in
the quantity that the stream represents are stored. Examples of such streams
would be the number of people in a room, the value of a slider in a computer
interface, or the total height of a building under construction.

A continuous value stream's variations can generally be represented as an
instantaneous event stream, and some manipulations are easier to perform on
one representation than on the other.
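
As a sketch of this correspondence, with an assumed in-memory representation
of events as \code{(startTime, endTime, value)} triples, the variations of a
continuous value stream can be extracted as instantaneous change events:

\begin{lstlisting}
# Our own illustration: a continuous value stream is a list of
# (startTime, endTime, value) events; its variations become
# instantaneous (startTime == endTime) change events.
def variations(value_events):
    changes = []
    previous = None
    for start, _end, value in value_events:
        if value != previous:
            changes.append((start, start, value))
            previous = value
    return changes

# Number of people in a bus between stops (second stream type above).
bus_load = [(0, 10, 3), (10, 25, 5), (25, 40, 5), (40, 60, 2)]
\end{lstlisting}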

\subsubsection{Continuous Sampling Streams}

While continuous value streams are populated by sensors that are able to
detect all changes in the data they monitor, streams can also be populated
via sampling. This means that not all variations in the logged quantity are
stored: while the value in the stream remains constant over a time interval,
the real value may have varied continuously.

Examples of this would be the intensity of light in a room, the position of
the mouse pointer, and many other values that are measured by sampling.

The main difference between this stream type and the previous one lies in
the way the data is collected, which affects how its analysis should be
done. It does not, however, affect the stream abstraction itself.

\subsubsection{Discrete Sampling Streams}

The last type of stream can for instance represent the number of open windows
in a system where detecting the opening and closing of windows is impossible.
As windows can be opened and closed very rapidly, interpolating linearly
between values, or even considering that the stream's value remains constant
between polling events, is not reasonable.

Examples of such streams would again be physical values polled by a sensor,
but which can vary very rapidly between measurements, or which cannot be
interpolated between events -- for example, pictures taken by a street
camera with a long delay between shots.

Once again, this only impacts the way the data should be analyzed and its
values understood, not the way it should be stored internally: for us, this
type of stream is very similar to continuous value streams.

\subsubsection{Additional requirements}

In the previous examples and stream types, we have neglected two additional
requirements. Streams were presented at the beginning of
Section~\ref{sec:abstract_stream_representation} as linking data to time, but
for instantaneous event streams, this also means being able to handle the lack
of data.

On the other hand, it is sometimes convenient to be able to handle
simultaneous values, such as when monitoring filesystem activity: multiple
files can be open simultaneously, and having one stream per file is out of
the question when thousands of files can be opened in one logging session.

\subsubsection{Summary of requirements}
\label{subsub:summary_of_requirements}

After analyzing the different types of data a stream must represent, we can
now state more precisely what a stream must be able to do:

\begin{itemize}
  \item Link time to data;
  \item Data can be represented as events;
  \item Events have a precise start time, but can be instantaneous or have a
        non-zero duration;
  \item For a given time in a stream, having simultaneous events, and having
        no events, must be possible.
\end{itemize}

\subsection{Events}

In the previous subsection, we presented the requirements to be met by our
stream representation, and concluded that data in a stream can be represented
as events, possibly instantaneous ones. We will use the following
representation for events:

\begin{lstlisting}
  Event(startTime, endTime, value)
\end{lstlisting}

To represent instantaneous events, we will simply set
\code{startTime == endTime}.

The other possibility would have been to use a \code{(startTime, duration)}
pair to represent the time span of an event, but this is less efficient when
extracting events over a time interval from a stream, for instance for
visualization or for some operators (see
Section~\ref{subsubsec:SQL_and_DBAPI}): comparing two \code{endTime}s would
require calculating \code{startTime + duration}, and hence could not exploit
indexes.
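
A minimal Python sketch of this representation (the helper function and the
sample values are our own illustration):

\begin{lstlisting}
from collections import namedtuple

# Minimal sketch of the event representation described above.
Event = namedtuple("Event", ["startTime", "endTime", "value"])

def is_instantaneous(event):
    # Instantaneous events are encoded with startTime == endTime.
    return event.startTime == event.endTime

click = Event(12.5, 12.5, "button1")      # an instantaneous event
window = Event(10.0, 42.0, "editor.py")   # an event with a duration
\end{lstlisting}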

\subsection{\code{Stream}s}

We distinguish between two types of streams: \code{InputStream}s, which
contain real data and are populated by the application that uses the
framework, and \code{OperatorStream}s, which are calculated by the framework
by accessing other streams. \code{InputStream}s can be modified; the latter
cannot.

We will describe the stream abstraction in terms of classes, but in a
language-independent way, since this layer is readily adaptable to other
programming languages.

We therefore define a base class \code{Stream} for all streams, which is
read-only, and two separate classes: one which is writable, and one which is
defined by one or more operations in the algebra.

\begin{lstlisting}
  class Stream:
    get(time) -> value(s)
    get(startTime, endTime) -> list of events =
                               (startTime, endTime, value)
    
    Various other access methods to time-related event properties :
      min length, nextStartTime, lastEndTime, etc.
\end{lstlisting}

Since, at any given time, a \code{Stream} may hold no value, one value, or
several values, the access methods return a list of values or events, which
may be empty.

The \code{get(time)} method returns the list of values that the \code{Stream}
takes at a given \code{time}. It intentionally does not return
\code{(startTime, endTime)} pairs, because they do not describe the
instantaneous state of the \code{Stream} at the given \code{time}. The
\code{get(startTime, endTime)} method, on the other hand, returns a list of
\code{Event}s: since it must give access to complete \code{Event}s -- for a
timeline visualization, or for higher-level operations -- it returns
\code{(startTime, endTime)} pairs as well as values.
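A minimal in-memory sketch of this read-only interface, purely illustrative
since the actual implementation is SQL-based (Python lacks method
overloading, so the two \code{get} variants receive distinct names here):

```python
class Stream:
    """Read-only view over a list of (startTime, endTime, value) events."""

    def __init__(self, events):
        self._events = sorted(events)

    def get_at(self, time):
        """Values the stream takes at `time`; may be empty or hold several
        values, since simultaneous events are allowed."""
        return [v for (s, e, v) in self._events if s <= time <= e]

    def get_range(self, startTime, endTime):
        """Complete events intersecting [startTime, endTime], returned with
        their time bounds, e.g. for a timeline visualization."""
        return [(s, e, v) for (s, e, v) in self._events
                if s <= endTime and e >= startTime]
```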

\subsection{\code{InputStream}s}

Input Streams extend the base \code{Stream} class, and add two main methods:

\begin{lstlisting}
  class InputStream:
    set(startTime, endTime, value)
    unset(startTime, endTime)
\end{lstlisting}

These methods enable the host application to insert new values into a
\code{Stream} and, if necessary, to remove values by unsetting them.
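The intended semantics can be illustrated with a toy in-memory version (the
real \code{InputStream}s are SQL-backed; this sketch only mirrors the method
contract):

```python
class InputStream:
    """Writable stream: the host application inserts and removes events."""

    def __init__(self):
        self._events = []

    def set(self, startTime, endTime, value):
        # Record a new event; simultaneous events are allowed.
        self._events.append((startTime, endTime, value))

    def unset(self, startTime, endTime):
        # Remove every event whose time span matches exactly.
        self._events = [(s, e, v) for (s, e, v) in self._events
                        if (s, e) != (startTime, endTime)]

    def get(self, time):
        return [v for (s, e, v) in self._events if s <= time <= e]
```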

\subsection{\code{OperatorStream}s}

Operator Streams do not define any new methods: they only allow \code{get}
operations on calculations chosen at declaration time. Our implementation of
the algebra is based on an SQL engine, and we have only defined
\code{OperatorStream}s for \code{SQLInputStream}s, which extend
\code{Stream}s. Indeed, \code{OperatorStream}s are closely tied to the
underlying storage mechanism used for the data. The algebra will be described
in Section~\ref{sec:stream_manipulation_algebra}, along with its
implementation in our framework.

We obtain the following class structure, where the double arrows indicate
inclusion (class \code{Stream} \emph{contains} \code{Event}s) and simple
arrows indicate inheritance (\code{SQLStream} \emph{derives from}
\code{Stream}):

\begin{figure}[h]
  \centering
    \includegraphics[width=15truecm]{figures/streamClasses.pdf}
  \caption{Structure of the Stream Classes}
  \label{fig:streamClassesStructure}
\end{figure}

\subsection{\code{SQLStream}s}

Figure~\ref{fig:streamClassesStructure} introduces the inheritance diagram of
the stream classes, including \code{SQLInputStream}s. These streams are
\code{SQL}-based implementations of the \code{InputStream} class. In this
section we explain the \code{SQL} structure behind these streams. The choice
of \code{SQL} and of a database abstraction backend will be detailed in
Section~\ref{subsubsec:SQL_and_DBAPI}.

\code{SQLInputStream}s are implemented as four-column tables:

\begin{lstlisting}
  DESCRIBE SQLInputStreams :
    id         INTEGER, UNIQUE, INDEXED
    startTime  INTEGER, INDEXED
    endTime    INTEGER, INDEXED
    value      INTEGER, INDEXED
\end{lstlisting}

\code{id} is a unique identifier for each event in a stream. Since we are
implementing a new database abstraction model for the second version of DASE,
\code{value} is currently limited to integer values only, but will have to be
expanded to meet the required functionality.

As we only use SQL streams in our project, these are the only
\code{InputStream}s that can be allocated, simply by indicating the
\code{name} of the stream, which uniquely identifies it.

\subsection{\code{SQLImportStream}s}

We are often manipulating data which is already present in the database engine
we are running on, for instance when we analyze data from a web server's logs.
In this case, we can use \code{SQLImportStream}s to maintain a link to the
original data while still representing streams as sequences of events.
\code{SQLImportStream}s are implemented as SQL views. This means updates or
additions to the server logs will be immediately available in our streams.
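The mechanism can be demonstrated with SQLite (table and column names here
are illustrative, not those of an actual deployment):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Hypothetical pre-existing log table, as found on a web server's database.
conn.execute("CREATE TABLE access_log (ts INTEGER, status INTEGER)")
conn.execute("INSERT INTO access_log VALUES (100, 200), (110, 404)")

# The import stream is a view: no data is copied, so later additions to the
# log are immediately visible through the stream.
conn.execute("""CREATE VIEW ImportedLog AS
                SELECT rowid AS id, ts AS startTime, ts AS endTime,
                       status AS value FROM access_log""")

conn.execute("INSERT INTO access_log VALUES (120, 500)")
rows = conn.execute(
    "SELECT startTime, value FROM ImportedLog ORDER BY startTime").fetchall()
```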


\subsection{\code{SQLOperatorStream}s}

In order for \code{SQLOperatorStream}s to be easily chained -- to extract
all events of length greater than some value from the union of two streams,
for instance -- they are implemented as views, and their apparent structure
must be identical to that of \code{SQLInputStream}s.

The syntax for instantiating \code{SQLOperatorStream}s depends on the
stream being allocated, unless one wants to access the low-level SQL
constructor, which we will not detail here. Indeed, the aim is to completely
abstract the operator implementation away from the user.


\clearpage \thispagestyle{empty} \cleardoublepage \thispagestyle{plain}
\section{Stream Manipulation Algebra}
\label{sec:stream_manipulation_algebra}

Using this stream structure, we now describe the algebra that enables the
user to manipulate streams at will.

\subsection{Static versus Dynamic \code{OperatorStream}s}
\label{subsec:static_versus_dynamic_operatorstreams}

Operators on streams can be of two types: dynamic or static. By dynamic, we
designate \code{OperatorStream}s that retain a link to the \code{Stream}s from
which they are calculated, and that reflect changes to the data in these
\code{Stream}s. Static streams, on the other hand, are calculated once, at
declaration time, and do not reflect changes to the streams they were
calculated from. Unless caching is used, dynamic \code{OperatorStream}s must
be recalculated at least once for each modification of the \code{Stream}s
they originate from, and at most at each access.

This can be much slower, and maintaining links between data is more difficult
to implement (all the more so in a database-independent manner, see
Section~\ref{subsubsec:SQL_and_DBAPI}). However, making as many operators as
possible dynamic is important, since it supports dynamic data manipulation
and allows data to be edited without redeclaring \code{OperatorStream}s.

Moreover, a very important aspect of \code{OperatorStream}s is their
performance: the best visualization techniques for temporal data (which can
be regarded as one-dimensional data) are interactive navigation techniques,
which means that the visualization must be reasonably fast. Static streams,
once calculated, are as fast as \code{InputStream}s -- but they require all
the calculations to be done before being viewable. Dynamic streams, on the
other hand, may provide implicit generation of the calculated streams, by
lazily computing the events to be displayed.

To make streams dynamic, we have implemented streams as SQL tables, and used
SQL's view mechanism as much as possible. This means that our choice of
operators has been influenced by the choice of an SQL-based database engine
as a storage engine, but we believe it has not restricted the expressive
power of the final algebra.


\subsection{Basic operators}
\label{sub:basic_operators}

We now describe the basic operators in our algebra, and distinguish between
those that we have been able to implement dynamically, and those that need to
be calculated statically.

\subsubsection{Duplication, static}

Duplication is an intrinsically static operation: it consists of creating a
new \code{InputStream} and populating it with events from another
\code{Stream}.

\subsubsection{Union, dynamic}

The union operator, which combines all events from two streams into a single
stream, is a typical example of a dynamic SQL view: it is quite simple to
tell the database engine that events in the new stream should be read from
two different tables, and that the results should then be merged into a
single result set.
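A sketch of this view-based union with SQLite (stream and table names are
illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
for name in ("StreamA", "StreamB"):
    conn.execute("CREATE TABLE %s (id INTEGER PRIMARY KEY,"
                 " startTime INTEGER, endTime INTEGER, value INTEGER)" % name)
conn.execute("INSERT INTO StreamA (startTime, endTime, value) VALUES (0, 1, 10)")
conn.execute("INSERT INTO StreamB (startTime, endTime, value) VALUES (2, 3, 20)")

# The union stream is a dynamic view over both tables: events inserted later
# into either source appear in it without any recomputation.
conn.execute("""CREATE VIEW UnionStream AS
                SELECT startTime, endTime, value FROM StreamA
                UNION ALL
                SELECT startTime, endTime, value FROM StreamB
                ORDER BY startTime""")

events = conn.execute("SELECT * FROM UnionStream").fetchall()
```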

\subsection{Dynamic Operator Streams}

More complex operators currently require the user to provide conditions for
the operator they wish to apply, most often as expressions over predefined
variable names. This is of course not ideal for end-users, but this framework
is intended to be reusable across projects: researchers can gradually create
their own operators, and future work can implement higher-level operators and
ease the interaction with, and application of, operators to streams.

\subsubsection{\code{OperatorStream}}

This operator's goal is to apply a function $f(v, w) \mapsto value$ that
combines simultaneous events from both streams into one stream, for instance
multiplying both streams:

\begin{lstlisting}
  newOperatorStream(
    "ProductOfStream1AndStream0",
    "Stream1", "Stream0",
    "v * w")
\end{lstlisting}

\begin{figure}[h]
  \centering
    \includegraphics[width=15truecm]{figures/operators/operatorStream-product.png}
  \caption{Product of two \code{InputStream}s}
  \label{fig:operatorStream-product}
\end{figure}

As one can see in Figure~\ref{fig:operatorStream-product}, when events in both
streams are not temporally aligned (identical start and end times), new events
have to be created. We have chosen the convention that, at times where either
stream has no value (on the edges in
Figure~\ref{fig:operatorStream-product}), the resulting stream has no value
either.
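The realignment can be sketched in plain Python; this is an illustrative
re-implementation of the convention, not the SQL view the framework actually
generates:

```python
def combine(streamA, streamB, f):
    """Apply f(v, w) to every overlapping pair of events from two streams.
    Each result event covers only the intersection of the two time spans,
    so times covered by a single stream yield no value, as per our
    convention."""
    result = []
    for (sa, ea, va) in streamA:
        for (sb, eb, vb) in streamB:
            start, end = max(sa, sb), min(ea, eb)
            if start <= end:                      # the events overlap
                result.append((start, end, f(va, vb)))
    return sorted(result)

# Product of two toy streams; the edges covered by only one stream vanish.
product = combine([(0, 4, 2), (5, 9, 3)],
                  [(2, 6, 10)],
                  lambda v, w: v * w)
```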

\subsubsection{\code{newFilterStream}}

This operator is applied to a single stream, and dynamically maps events from
this stream to a new stream, either filtering them -- events are dropped from
the stream -- or altering the events themselves. To do this, the user must
provide filtering expressions, the syntax of which depends on the database
engine:

\begin{lstlisting}
  newFilterStream(
    "FilteredStream", "Stream",
    filter="value > 0.2",
    startTimeFilter="startTime - 1",
    endTimeFilter="endTime + 1")
\end{lstlisting}

\begin{figure}[h]
  \centering
    \includegraphics[width=15truecm]{figures/operators/filterStream.png}
  \caption{Filtering a stream on values, and altering start and end times}
  \label{fig:filterStream}
\end{figure}

\clearpage \subsubsection{\code{newAggregationStream}}

Filtering is quite powerful, but cannot act on several events at once to
aggregate them into a new event. To do so, we provide
\code{AggregationStream}s, which can for instance group events by time:

\begin{figure}[h]
  \centering
    \includegraphics[width=15truecm]{figures/operators/aggregationStream.png}
  \caption{Sum of values for each group of events}
  \label{fig:figures_operators_aggregationStream}
\end{figure}

The syntax for declaring an aggregation stream is as follows:

\begin{lstlisting}
  newAggregationStream(
    name, sourceStreamName,
    aggregationStartTimeIterator, aggregationEndTimeIterator,
    aggregationValueIterator,
    aggregationGrouper)
\end{lstlisting}

This operator works by first grouping events using the
\code{aggregationGrouper} expression, whose value is used to partition the
stream. Using this partition, the iterators are called on each event, and
must return a valid partial value: for instance, to calculate the mean value
of events in each group, \code{aggregationValueIterator} must return the mean
value of the events it has already been called on.
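The partial-value contract can be illustrated with a running mean; the
function names below are ours, not the framework's API:

```python
def make_mean_iterator():
    """Return an iterator that, called once per event, always returns the
    mean of the values it has seen so far -- a valid partial value after
    every call."""
    seen = []
    def value_iterator(value):
        seen.append(value)
        return sum(seen) / len(seen)
    return value_iterator

def aggregate(events, grouper):
    """Group events by grouper(event), then fold each group with a fresh
    mean iterator; the iterator's last partial value is the group's result."""
    groups = {}
    for event in events:
        groups.setdefault(grouper(event), []).append(event)
    result = {}
    for key, group in groups.items():
        it = make_mean_iterator()
        for (_, _, value) in group:
            partial = it(value)
        result[key] = partial
    return result

# Group by day (86 400 seconds), assuming integer timestamps in seconds.
means = aggregate([(0, 10, 4), (20, 30, 6), (90000, 90010, 8)],
                  lambda ev: ev[0] // 86400)
```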


\subsection{Static Operator Streams}
\label{sub:static_operator_streams}

The previous dynamic operators allow most necessary operations, but a number
remain impossible:

\begin{itemize}
  \item Splitting events;
  \item Altering or filtering events based on what happens right before or
        right after them;
  \item Merging events.
\end{itemize}

This is because SQL makes it very difficult to duplicate events into multiple
data groups. Since SQL is not made for this task, we implement additional
operators at the Python level.

\subsubsection{\code{newIteratorStream}}

Like \code{AggregationStream}s, \code{IteratorStream}s iterate over events,
but without separating events into groups and without SQL constraints: they
have access to a source and a target stream, iterate over the source stream's
events, and insert events into the target stream.

\begin{lstlisting}
  newIteratorStream(name, sourceStreamName, iterator)
  
  iterator(newInputStream, event)
\end{lstlisting}

The \code{iterator} is called for each event and, using static variables, can
implement a state machine to statically realize the missing operations listed
in Section~\ref{sub:static_operator_streams}.
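As an example, splitting events -- one of the operations listed above as
impossible dynamically -- can be realized with such an iterator. This is a
sketch, and the driver function below is a minimal stand-in for
\code{newIteratorStream}:

```python
def splitting_iterator(target, event, chunk=10):
    """Iterator for an iterator stream: split each event into chunks of at
    most `chunk` time units. SQL operators cannot express this, because one
    source row must become several result rows."""
    startTime, endTime, value = event
    t = startTime
    while t + chunk < endTime:
        target.append((t, t + chunk, value))
        t += chunk
    target.append((t, endTime, value))      # the (possibly shorter) remainder

def run_iterator_stream(source_events, iterator):
    """Minimal stand-in for newIteratorStream: iterate over the source
    stream's events and insert the results into a new target stream."""
    target = []
    for event in source_events:
        iterator(target, event)
    return target

chunks = run_iterator_stream([(0, 25, 1)], splitting_iterator)
```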


\subsubsection{\code{normalizeStream}}

One interesting operator is normalization. As we have seen in
Section~\ref{subsub:summary_of_requirements}, streams can contain
simultaneous events, either because events happen simultaneously in the
source data, or as a result of expanding events. This can be useful, for
instance, when searching for temporally close events. However, having
multiple simultaneous events complicates visualization and editing, might not
reflect realistic values (the temperature in a room cannot take multiple
values), and might not be convenient for further calculations.

Stream normalization is a direct application of stream iteration: when called
on a stream, it simply creates a new static version of this stream that only
has unique events (Figure~\ref{fig:operatornormalize}), and in which
successive equal values are merged into one event.

\begin{figure}[h]
  \centering
    \includegraphics[width=15truecm]{figures/operators/normalize.png}
  \caption{Initial Data, Expanded Stream, and Normalized version}
  \label{fig:operatornormalize}
\end{figure}
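The merging logic can be sketched in a few lines of Python (an illustrative
stand-alone version of the operator, not the framework's code):

```python
def normalize(events):
    """Static normalization: drop duplicate events, then merge runs of
    contiguous events carrying the same value into one longer event."""
    result = []
    for (start, end, value) in sorted(set(events)):
        if result and result[-1][2] == value and result[-1][1] >= start:
            # Same value and touching/overlapping spans: extend the last event.
            prev = result.pop()
            result.append((prev[0], max(prev[1], end), value))
        else:
            result.append((start, end, value))
    return result

# A duplicated event and two contiguous equal values collapse to one event.
normalized = normalize([(0, 2, 5), (0, 2, 5), (2, 4, 5), (5, 6, 7)])
```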


\clearpage \thispagestyle{empty} \cleardoublepage \thispagestyle{plain}
\section{The DASE Framework: Implementation of these Paradigms}

In this section we present in more detail the tools chosen for this project,
the DASE server and its action syntax, as well as the basic data explorer and
visualizer we have set up.

\subsection{Technical choices}

\subsubsection{SQL and the Database Abstraction}
\label{subsubsec:SQL_and_DBAPI}

Although using a relational database engine such as SQLite \cite{SQLite} or
MySQL \cite{MySQL} is not usually considered a natural choice for storing and
manipulating stream-based data (more specialized systems such as STREAMS
\cite{STREAM}, or ad hoc data structures, are often preferred), indexes and
views have proved invaluable for performance (see
Section~\ref{subsec:evaluation_performances}) and for implementing operators
(see Section~\ref{subsec:static_versus_dynamic_operatorstreams}), as has data
persistence between sessions.

Indeed, maintaining links between data, indexing the timeline or values, and
extracting subsets of data streams for visualization are precisely what SQL
does best. Some operators are very simple (union or duplication, see
Section~\ref{sub:basic_operators}), but most are made much easier by
expressing these operations as SQL queries.

The first version of the DASE database engine was based on SQLite because it
is a lightweight embedded engine that does not require running a separate
server application; it is in fact aimed precisely at being tightly integrated
into other applications. However, MySQL is very often used on real-world web
servers, for instance, and closely integrating with one such system, as well
as implementing a database abstraction layer, was one of the goals of version
2 of the DASE framework. The framework now supports both SQLite and MySQL
database backends, and it is simple to add support for new database engines
in \code{DBAPI2Server}, as long as there exists a \code{DBAPI2}-compatible
module for Python. This means expanding to support PostgreSQL, Oracle, or
ODBC would be quite simple.

This choice has, however, imposed some constraints on the Stream Algebra,
which was not designed beforehand and then ported to SQL, but rather designed
to fulfill all the needs in terms of expressive power (see
Section~\ref{subsub:expressive_power_of_the_algebra}). For instance,
\code{AggregationStream}s could have merged their iterator functions if
aggregate functions were allowed to return multiple columns. We think,
however, that similar constraints would have arisen with any other data
storage technology, and the shorter development time and increased stability
certainly compensate for the limitations that using an SQL database engine
has imposed on the dynamic operators.

Indeed, SQLite's and MySQL's performance is proven, as are their reliability
and scalability, which are very important for users to be satisfied with the
system we build.


\subsubsection{Python}

On top of the storage engine, we needed a dynamic and extensible framework
that allowed us to instantiate complex operator expressions at run time, and
that did not require users to tightly integrate their operator extensions
into the framework.

We started by implementing data manipulation in \code{C++}, but this approach
rapidly revealed its limitations (memory management, development time,
extensibility). Using a dynamic programming language was necessary to obtain
a dynamic, expandable and pluggable framework. We chose to implement this in
Python \cite{Python}, and initially used its integration with Tcl/Tk for the
visualization interface. This proved to be a serious performance bottleneck,
and the initial infrastructure did not enable us to correctly separate the
visualization and stream manipulation parts of the project. This was a second
focal point for the second version of the framework.

\subsection{DASE Server}

In version 2 of the DASE Framework, the independence between data
manipulation and data visualization has been greatly improved by adding a
DASE Server that manages the existing streams and serves data over HTTP.
Custom URLs give access to all the DASE methods, such as creating streams,
setting values, getting values, and listing existing streams. The server is
based on a database abstraction class aimed at allowing users to connect to
any database system.

\subsubsection{\code{DBAPI2Server}}

The DASE Server uses a \code{DBAPI2Server} for all its database queries,
enabling it to run on a variety of database systems. DBAPI is a Python
abstraction of database access concepts, and \code{DBAPI2Server} is a
relatively thin wrapper enabling users to instantiate a database access
object by specifying a database engine and its connection parameters. This
object is used for all database queries in the DASE Server application, and
allows for parametrized database queries independent of the underlying
storage engine. DASE, however, makes intensive use of callback functions and
aggregate operators in its SQL queries, which have no predefined API in
Python's DBAPI; \code{DBAPI2Server} therefore needs to define a common API to
create functions and aggregate functions.

That is why we have added the two following methods:

\begin{lstlisting}
  createfunction(name, args, expr)
  
  createaggregatefunction(name,
    startTimeIterator, endTimeIterator, valueIterator, grouper)
\end{lstlisting}

that behave differently depending on whether the \code{DBAPI2Server} is
running on \code{SQLite} or \code{MySQL}. In both cases, the user provides
the expression that will be returned by the respective functions: for
\code{SQLite}, this is a Python expression, and for \code{MySQL}, it is
\code{PL/SQL} code. This usually does not change the code much (the basic
operators are the same), but ternary conditional operators differ, for
instance, which unfortunately introduces a difference between the two
database engines, and might require rewriting some user-defined macros when
switching database engines.
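For the SQLite backend, \code{createfunction} can be built directly on the
standard \code{sqlite3} module; the following is a sketch of the mechanism,
not the framework's actual code:

```python
import sqlite3

def createfunction(conn, name, args, expr):
    """Register `expr`, a Python expression over the named arguments, as an
    SQL function usable in queries -- the SQLite side of the common API."""
    arg_names = [a.strip() for a in args.split(",")]
    def wrapper(*values):
        # Evaluate the user expression with the call's values bound to names.
        return eval(expr, {}, dict(zip(arg_names, values)))
    conn.create_function(name, len(arg_names), wrapper)

conn = sqlite3.connect(":memory:")
createfunction(conn, "clamped", "v", "v if v > 0 else 0")
row = conn.execute("SELECT clamped(-3), clamped(7)").fetchone()
```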

Furthermore, the database backend, being central to the DASE application, is
entirely unit-tested. This gives improved confidence in the backend, and let
us concentrate on the HTTP server aspect of the DASE Server.

\subsubsection{Available HTTP Queries}

In order to simplify integration with external visualization tools, in
particular an AJAX-driven web visualization solution, we have chosen to give
access to all data through a dedicated HTTP server, which can answer the
following requests:

\lstset{morekeywords={list,new,get,set,stream,operator,filter,union,aggregation,iterator,duplicate,normalize}}
\begin{lstlisting}
  - list/
  
  - new/stream/(streamName IS NAME)/
  - new/import/(query IS SQL)/(streamName IS NAME)/
  
  - new/operator/(leftStream IS NAME)/(rightStream IS NAME)
      /(operation IS BINARY_EXPR)/(streamName IS NAME)/
  - new/filter/(sourceStream IS NAME)/(startTimeFilter IS TERNARY_EXPR)
      /(endTimeFilter IS TERNARY_EXPR)/(valueFilter IS TERNARY_EXPR)
      /(filterCondition IS TERNARY_EXPR)/(streamName IS NAME)/
  - new/union/(leftStream IS NAME)/(rightStream IS NAME)
      /(streamName IS NAME)/
  - new/aggregation/(sourceStream IS NAME)
      /(aggregationStartTimeIterator IS QUATERNARY_EXPR)
      /(aggregationEndTimeIterator IS QUATERNARY_EXPR)
      /(aggregationValueIterator IS QUATERNARY_EXPR)
      /(aggregationGrouper IS TERNARY_EXPR)/(streamName IS NAME)/
  - new/iterator/(sourceStream IS NAME)/(iter IS CALLBACK)
      /(streamName IS NAME)/
  - new/duplicate/(sourceStream IS NAME)/(streamName IS NAME)/
  
  - new/normalize/(sourceStream IS NAME)/(streamName IS NAME)/
  
  - get/(stream IS NAME)/
  
  - set/(stream IS NAME)/(startTime IS TIME)/(endTime IS TIME)
      /(value IS INT)/
  
  - del/(stream IS NAME)/
\end{lstlisting}

Each of the previous queries should be on a single line; the line breaks here
are due to page width constraints.

All queries return a JSON object that can easily be parsed by the client data
visualization application. It contains status information about the query, as
well as the associated data contents, for instance for a \code{list}
operation or a \code{get} query.

\begin{lstlisting}
  {"status": statusCode, "contents": dataForQuery}
\end{lstlisting}
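On the client side, decoding this envelope is straightforward. The status
convention below (0 for success) and the sample payload are assumed for
illustration only:

```python
import json

# A response such as the server might return for a `get` query.
raw = '{"status": 0, "contents": [[0, 2, 10], [5, 6, 20]]}'

reply = json.loads(raw)
events = []
if reply["status"] == 0:              # assumed convention: 0 means success
    events = [tuple(e) for e in reply["contents"]]
```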


\subsection{Basic Web-based Visualization}

\begin{figure}[h]
  \centering
    \includegraphics[width=15truecm]{figures/visualisation/basicInterface.png}
  \caption{The basic visualization interface}
  \label{fig:visualisation_basicInterface}
\end{figure}

The basic web based visualization interface is presented in
Figure~\ref{fig:visualisation_basicInterface}.

The large bottom pane shows the timeline view, which displays the values of
the streams selected in the top-center list. The top-right widget lists the
recently executed queries and the macro selection menu, and can be used to
view the queries that will be included in macros created while in macro mode.
The top-left part is the query specification section, with which one can
enter queries either manually or by using the partially predefined
placeholders. Zooming on the timeline is done by drag-selecting the desired
viewing window, and clicking in the timeline scrolls horizontally through
time. The command line enables users to dynamically declare new streams,
execute series of commands, or apply predefined macros.

\subsection{Participatory Design of the Web-based Interface}

In order to improve the viewing interface, we interviewed two prospective
users of the system using participatory design techniques. We asked them when
they had to do sequential data analysis, what their latest experience in
doing so had been, and what could be improved.

The most important findings were that the users were ready to learn a new
language, or at least adapt to a subset of a known language for their queries
(SQL, for instance), but that expressive power and the ability to complement
statistical analyses mattered most. The next two requirements were the
ability to add macros, and the ability to view links between data points
(i.e., to add semantics to the data streams).

These interviews also helped us discover new areas of application for DASE,
beyond the web server log analysis that was the main usage objective of our
subjects. They suggested that the framework might also be used for analyzing
crash test results, for many operations-oriented lines of work (running train
lines, for instance), and for analyzing low-level network traffic.

\subsection{Some results}
\label{subsec:some_results}

The first results in this section were obtained using the first version of
the DASE framework. Even though it has since undergone massive changes, the
underlying logic and operator manipulation have stayed the same. These
analyses are therefore still relevant, even though the figures were not
generated with the latest version of the tool.

In Section~\ref{sec:sample_usage_scenarios}, we presented a series of usage
scenarios where using our framework for ESDA would help analyze complex data
sets. In particular, in Section~\ref{subsec:markerclock}, we described a
series of questions for MarkerClock, for which we already have test data that
we can load into DASE.

\begin{figure}[h]
  \centering
    \includegraphics[width=15truecm]{figures/markerclockESDA.png}
  \caption{Multiple calculated streams for MarkerClock data analysis}
  \label{fig:markerclockESDA}
\end{figure}

In Figure~\ref{fig:markerclockESDA}, we see the successive steps undertaken
to explore sample MarkerClock data and extract preliminary results from it.
The first step was to correct the \code{MOTION} streams, whose values
suffered an important change of scale after about 20 days. To correct this,
we used the following \code{FilterStream}:

\begin{lstlisting}
newFilterStream(
  "CorrectedJulieMotion", "julie_MOTION",
  valueFilter=
    lambda startTime, endTime, value:
      value if startTime > 1181309400 else value * 500)
\end{lstlisting}

Using \code{AggregationStream}s, we then aggregated events by day for the two
corrected motion streams. This revealed that MarkerClock was probably used a
lot during the first ten days of the experiment, and that its usage increased
once more towards the end, for both participants. Days of increased activity
for Lionel also seem to be days of increased activity for Julie (left of the
narrow red bar indicating the current time focus).

Finally, for our initial overview of this data, we filtered the
\code{CorrectedJulieMotion} stream on the times when she was most active, and
intersected this \code{ActiveJulie} stream with her usage of markers, using
the following \code{OperatorStream}, which simply selects the value in the
marker stream at times when both streams share events:

\begin{lstlisting}
  newOperatorStream(
    "JulieMarkersWhenActive", "ActiveJulie", "julie_NUM_MARKER",
    lambda activeValue, numMarkers: numMarkers)
\end{lstlisting}

This revealed that Julie was most active and using markers towards the middle
of the day, but also that days where Julie was active did not necessarily
correspond to times where she actively used markers.

This initial analysis was done in less than half an hour, with no initial aim
in mind. Admittedly, these results are quite limited, but further analysis
would most certainly reveal interesting information.

\subsection{Evaluation : Performances}
\label{subsec:evaluation_performances}

Even though we have not had time to complete a real-world evaluation of this
system by confronting it with the scenarios we presented in
Section~\ref{sec:sample_usage_scenarios}, quick tests such as the one in
Section~\ref{subsec:some_results} indicate that, provided the framework is
indeed scalable and the algebra flexible enough, DASE will certainly prove to
be an aid for exploratory data analysis. It is these performance and
flexibility questions that we address in this section.

\subsubsection{Benchmarks for common operations}

The following benchmarks were run with Python 2.5.2, SQLite 3.5.7, and MySQL
5.0.51; version 1 of the DASE framework, to which comparisons are made, used
slightly older versions of Python and SQLite. We render static and dynamic
streams, in a web browser or with Tkinter (the previous bottleneck of the
framework), and compare these times to raw data retrieval times.

\begin{table}[h!b!p!]
\begin{tabular}{|r|cc|}

\hline
Num. Events/Stream &   Render      & Max Time (s) \\
\hline

 10 000            & (no render)  & 0.06         \\
 10 000            & (Javascript) & 0.11         \\
 10 000            &  (Tkinter)   & 0.61         \\
100 000            & (no render)  & 6.00         \\
\hline
Stream Types     &               & \\
\hline
inputStreams     & 10 000 events & 0.10         \\
operatorStreams  & 10 000 events & 1.22         \\
\hline

\end{tabular}
\caption{DASE Visualization Benchmarks}
\label{tab:benchmarks}
\end{table}

We see a great improvement with the better-separated architecture we have
chosen. Moreover, for reasonable data set sizes, the web interface is quite
fast and usable, which is a major improvement over version 1 of the
framework, and it is now possible to easily implement an OpenGL-based
visualization tool. We also see that transferring large data sets may take
quite some time, so this will most certainly require implementing caching in
the viewer interface.

\subsubsection{Expressive power of the Algebra}
\label{subsub:expressive_power_of_the_algebra}

The second aspect that we must verify is that the stream algebra gives
sufficient power to the users, i.e. that the expressive power of the stream
manipulation algebra enables one to implement, for instance, all the
smoothing operators described by C. Fisher and P. Sanderson in \cite{Fisher}
(see Section~\ref{subsec:exploring_continuous_observational_data}).

Chunks, comparisons, constraints, conversions and computations can be done
relatively easily, since filtering events, combining streams, and altering
individual events, groups of events, or entire streams are all possible using
\code{FilterStream}s, \code{OperatorStream}s, \code{AggregationStream}s and
\code{IteratorStream}s.

This gives great freedom to the user, who can also extend functionality at
any level: by implementing operators in Python, as a specialization of our
algebra, or by tapping into SQL directly.

Comments, codes and connections, however, which we have regarded as
annotations and meta-data, are more specialized, and the DASE framework does
not yet provide simple conventions for them.

\clearpage \thispagestyle{empty} \cleardoublepage \thispagestyle{plain}
\section*{Conclusion}
\addcontentsline{toc}{section}{Conclusion}
\markboth{Conclusion}{}

Based on the current state of the art in exploratory sequential data
analysis, which is mainly centered either on visualizing predetermined data
with specialized solutions, or on annotating video data, we have observed a
need for more generic ESDA tools, which can be tailored to multiple projects'
needs without sacrificing the power of hypothesis generation and information
revelation that characterizes ESDA.

DASE provides a base engine for a general and re-usable ESDA application,
implementing a powerful algebra on sequential data streams. It is built on
widely used and extensible technologies, and its core is designed to be as
dynamic as possible. The performance and adaptability of the core data
storage and manipulation framework are quite promising, although the current
visualization frontend, limited in functionality and performance, may not
give full access to all its potential.

The next step for DASE is a real-world evaluation, that is, being used as the
primary ESDA tool for a number of research projects, to prove that it indeed
helps answer the questions that come up in scenarios such as those described
in Section~\ref{sec:sample_usage_scenarios}.

Further evolutions that DASE could benefit from include integrated meta-data,
links between data chunks and annotations, real-time playback of video and
audio, and support for searching and clustering the data.

\clearpage
\begin{thebibliography}{99}

\newcommand{\auth}{\textsc}

\bibitem{Tukey} \auth{Tukey, J.W.,}
Exploratory data analysis,
{\em Addison-Wesley}, (1977).

\bibitem{Allen} \auth{Allen J.F.,}
Towards a General Theory of Action and Time,
{\em Artificial Intelligence} (1984), {\bf 23}, 2, 123--154.

\bibitem{EVA} \auth{Mackay W.E.,}
EVA: an experimental video annotator for symbolic analysis of video data,
{\em ACM SIGCHI Bulletin} (1989), {\bf 21}, 2, 68--71.

\bibitem{DIVA} \auth{Mackay W.E. and Beaudouin-Lafon M.,}
DIVA: Exploratory Data Analysis with Multimedia Streams,
{\em Proceedings of the SIGCHI conference on Human factors in computing systems}
(1998), 416--423.

\bibitem{AlgebraicVideo} \auth{Duda A., Weiss R., and Gifford D.K.,}
Content-Based Access to Algebraic Video,
{\em Proceedings of the International Conference on Multimedia Computing and
Systems (ICMCS)} (1994), 140--151.

\bibitem{VideoNoter} \auth{Roschelle J. and Goldman S.,}
VideoNoter: A productivity tool for video data analysis,
{\em Behavior Research Methods, Instruments, and Computers} (1991), {\bf 23},
219--224.

\bibitem{CapturingWithSTREAMS} \auth{Cruz G. and Hill R.,}
Capturing and playing multimedia events with STREAMS,
{\em Proceedings of the second ACM international conference on Multimedia}
(1994), 193--200.

\bibitem{Marquee} \auth{Weber K. and Poon A.,}
Marquee: A Tool For Real-Time Video Logging,
{\em Conference on Human factors in computing systems (CHI)} (1994), 203.

\bibitem{VideoMosaic} \auth{Mackay W.E. and Pagani D.S.,}
Video Mosaic: Laying out Space and Time in a Physical Space,
{\em Proceedings of the second ACM international conference on Multimedia}
(1994), 165--172.

\bibitem{VACA} \auth{Burr B.,}
VACA: A Tool for Qualitative Video Analysis,
{\em CHI '06 extended abstracts on Human factors in computing systems}
(2006), 622--627.

\bibitem{Fisher} \auth{Fisher C. and Sanderson P.,}
Exploratory Sequential Data Analysis: Exploring Continuous Observational Data,
{\em Interactions} (1996), {\bf 3}, 2, 25--34.

\bibitem{Bigbee} \auth{Bigbee T., Loehr D. and Harper L.,}
Emerging Requirements for Multi-Modal Annotation and Analysis Tools,
{\em The MITRE Corporation} (2001).

\bibitem{MacSHAPA}
\auth{Sanderson P., Scott J., Johnston T., Mainzer J., Watanabe L. and James J.,}
MacSHAPA and the enterprise of exploratory sequential data analysis (ESDA),
{\em International Journal of Human-Computer Studies} (1994), {\bf 41}, 633--681.

\bibitem{EventPatternsOverTime}
\auth{Fails J.A., Karlson A., Shahamat L. and Shneiderman B.,}
A Visual Interface for Multivariate Temporal Data: Finding Patterns of Events
over Time,
{\em IEEE Symposium on Visual Analytics Science and Technology (VAST)} (2006).

\bibitem{StreamingQueries} \auth{Chandrasekaran S. and Franklin M.J.,}
Streaming Queries over Streaming Data,
{\em Proceedings of the 28th International Conference on Very Large Data Bases
(VLDB)} (2002).

\bibitem{PLACE} \auth{Mokbel M.F., Xiong X., Hammad M.A. and Aref W.G.,}
Continuous Query Processing of Spatio-temporal Data Streams in PLACE,
{\em Geoinformatica} (2005), {\bf 9}, 4, 343--365.

\bibitem{markerclock} \auth{Riche Y. and Mackay W.,}
MarkerClock: A Communicating Augmented Clock for Elderly,
{\em Short Paper for INTERACT 2007} (2007).

\bibitem{wmtrace} \auth{Chapuis O.,}
Gestion des Fenêtres: enregistrement et visualisation de l'interaction,
{\em Proceedings of the 17th Conférence Francophone sur l'Interaction
Homme-Machine (IHM)} (2005), 255--258.

\bibitem{LifeLines}
\auth{Plaisant C., Milash B., Rose A., Widoff S. and Shneiderman B.,}
LifeLines: Visualizing Personal Histories,
{\em Proceedings of the SIGCHI conference on Human factors in computing systems:
common ground} (1996), 221--227.

\bibitem{MemoireEpisodique} \auth{Roussel N., Fekete J.-D. and Langet M.,}
Vers l'utilisation de la mémoire épisodique pour la gestion de données familières,
{\em Proceedings of the 17ème Conférence Francophone sur l'Interaction
Homme-Machine (IHM)} (2005), 247--250.

\bibitem{JMP} \auth{Sall J. et al., SAS Institute,}
JMP statistical software,
{\em http://www.jmp.com/} (1989-2007).

\bibitem{MySQL} \auth{MySQL AB,}
MySQL database engine,
{\em http://www.mysql.com/} (1995-2008).

\bibitem{Python} \auth{Python Software Foundation,}
Python Programming Language,\\
{\em http://www.python.org/} (1991-2008).

\bibitem{SQLite} \auth{Hwaci - Applied Software Research,}
SQLite database engine,
{\em http://www.sqlite.org/} (2000-2008).

\bibitem{STREAM} \auth{Widom J., Motwani R. et al.,}
STREAM, Stanford Stream Data Manager,\\
{\em http://infolab.stanford.edu/stream/} (2002-2006).

\end{thebibliography}

\clearpage

\listoffigures

\listoftables

\end{document}
