The first step in the visualization pipeline is the extraction of a dataset
$\mathcal{D}$. For this purpose we implemented the class \texttt{Dataset}. This
class represents a sampled scalar or vector dataset over time, so the topological domain of $\mathcal{D}$ is a subset of $\mathbb{R}^3$. The simplest datasets, hereafter called \emph{raw datasets}, contain pointers to time dependent values\footnote{See section~\ref{subsec:timedimension} for a technical description of the buffers that keep track of the data values over time.} in the simulation and thus have access to raw simulation data at all timesteps. There are three raw datasets available in the application: Fluid
Density (scalar), Fluid Flow (vector), and Force Field (vector). Next to the
raw datasets, a number of \emph{enriched datasets} are available in the
application. See figure~\ref{fig:datasets} for a full overview of the available
datasets.

\begin{figure}[ht]
\centering
\includegraphics[trim=9mm 90mm 10mm 30mm,clip,width=.9\textwidth]{./images/datasets.pdf}
\caption{Hierarchy of datasets available in the application. All datasets cover the time dimension by default.}
\label{fig:datasets}
\end{figure}

\subsection{Enriched Datasets}
An enriched dataset is the result of applying one or more functions to the original dataset. The purpose of enriching is to adapt the dataset to the visualization goal. Simply drawing the complete dataset on screen with an arbitrary visualization technique will in most cases not reveal the specific phenomena of interest. Enriching the dataset can uncover or highlight these phenomena.

Our application contains multiple enriched datasets that can all be visualized in various ways. These enriched datasets are all instances of subclasses of the \texttt{Dataset} class. They all override some or all of the data accessing methods from the \texttt{Dataset} class used by the visualizers to access the data.

\paragraph{Magnitude datasets}
The \texttt{VectorToScalarDataset} can be used to obtain a scalar dataset from a vector dataset. Its constructor takes a \texttt{VectorDataset} instance and assigns scalar values to this vector dataset's grid. 

The simplest way to do this is to assign to each sample point $p$ in the vector
dataset's topological domain $\mathcal{D}$ a scalar value $s(p)=\lVert{\bf
v}(p)\rVert$,
where $\lVert{\bf v}(p)\rVert$ is the norm or magnitude of the vector field at point $p$.
This results in a scalar dataset defined on the same topological domain
$\mathcal{D}$.

The datasets obtained by this method are the Fluid Flow Velocity, the Force Magnitude, the Density Gradient Magnitude, and the Velocity Gradient Magnitude datasets, see figure~\ref{fig:datasets}. These datasets can for example be used to color vector glyphs by their magnitude, see section~\ref{subsec:glyphs}.
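The mapping $s(p)=\lVert{\bf v}(p)\rVert$ can be sketched in a few lines of Python. This is an illustrative sketch, not the application's actual \texttt{VectorToScalarDataset} implementation; the function name and the list-of-rows grid representation are assumptions.

```python
import math

def magnitude_field(vectors):
    """Map each sample (vx, vy) to its Euclidean norm s(p) = ||v(p)||.
    `vectors` is a grid stored as a list of rows of (vx, vy) tuples;
    this representation is illustrative, not the application's."""
    return [[math.hypot(vx, vy) for (vx, vy) in row] for row in vectors]

# A 1x2 grid of vectors: (3, 4) has magnitude 5, (0, 0) has magnitude 0.
grid = [[(3.0, 4.0), (0.0, 0.0)]]
print(magnitude_field(grid))  # [[5.0, 0.0]]
```

The resulting scalar grid lives on the same sample points as the vector grid, which is why the derived dataset shares the topological domain $\mathcal{D}$ of its source.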

\paragraph{Angular datasets}
A second option for the \texttt{VectorToScalarDataset} is assigning to each sample point $p$ the angle of the vector field with the normal direction. The normal direction is set to straight upwards. The value $s(p)$ of the angular dataset at point $p$ can be calculated from the vector ${\bf v}(p) = (v_x(p), v_y(p))$ at point $p$:
\begin{equation}
s(p) = \left\{
\begin{array}{l @{\qquad} l}
0 & \mathrm{if}\; v_x(p) = v_y(p) = 0 \\
2 \arctan \left(\frac{v_y(p)}{\sqrt{v_x(p)^2 + v_y(p)^2} + v_x(p)}\right) & \mathrm{otherwise}
\end{array}
\end{array}
\right.
\label{eq:angle}
\end{equation}
For more intuitive user interaction, the angular value $s(p)$, which ranges over $(-\pi, \pi)$, is scaled to $(-180, 180)$. For an example of a visualization of this dataset, which can be used to identify specific regions in the vector field, see figure~\ref{fig:circular} on page~\pageref{fig:circular}.
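The case expression above is the half-angle form of the two-argument arctangent, so it can be checked against a library \texttt{atan2}. The sketch below is illustrative; the function name is an assumption, and the degenerate case $v_y = 0$, $v_x < 0$ (where the denominator vanishes) is not handled, matching the formula as stated.

```python
import math

def angle(vx, vy):
    """Half-angle form: 2 * arctan(vy / (sqrt(vx^2 + vy^2) + vx)).
    Returns 0 for the zero vector, per the convention in the equation."""
    if vx == 0.0 and vy == 0.0:
        return 0.0
    return 2.0 * math.atan(vy / (math.hypot(vx, vy) + vx))

# Agrees with the library two-argument arctangent:
print(math.isclose(angle(1.0, 1.0), math.atan2(1.0, 1.0)))  # True (pi/4)
# Scaled to degrees for the user interface, as described in the text:
print(math.degrees(angle(0.0, 1.0)))  # 90.0
```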

\paragraph{Gradient datasets}
The \texttt{GradientDataset} class takes a \texttt{ScalarDataset} instance and computes the gradient vector for a scalar field $s$ at each sample point $p$ of the original dataset. Formally, the gradient vector is defined as follows:
\[
{\bf v}(p) = \left( \frac{\partial s(p)}{\partial x}, \frac{\partial s(p)}{\partial y}\right)
\]
The partial derivatives at other points in the dataset can be computed by
linear interpolation of the derivatives at the cell
vertices~\cite{scivisbookch03}, and thus can be handled by the same routines
that compute the resampling of other vector fields, as described in
section~\ref{subsec:resampling}.
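A minimal sketch of computing the gradient at the sample points is given below. The text does not specify the differencing scheme, so the central differences and the uniform grid spacing $h$ used here are assumptions; boundary points are left at zero for brevity.

```python
def gradient(s, h=1.0):
    """Central-difference gradient of a scalar grid s[y][x] at interior
    sample points. The scheme and the uniform spacing h are assumptions;
    the source text does not specify how derivatives are discretized."""
    ny, nx = len(s), len(s[0])
    g = [[(0.0, 0.0)] * nx for _ in range(ny)]
    for y in range(1, ny - 1):
        for x in range(1, nx - 1):
            dx = (s[y][x + 1] - s[y][x - 1]) / (2.0 * h)
            dy = (s[y + 1][x] - s[y - 1][x]) / (2.0 * h)
            g[y][x] = (dx, dy)
    return g

# A linear ramp s(x, y) = 2x has gradient (2, 0) at every interior point.
ramp = [[2.0 * x for x in range(4)] for _ in range(4)]
print(gradient(ramp)[1][1])  # (2.0, 0.0)
```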

\subsection{Resampling the grid}
\label{subsec:resampling}
In the \texttt{Dataset} class a bilinear interpolation method is provided for all
datasets. This results in every \texttt{Dataset} instance having a method to
access data by grid index, but also by $(x,y)$ position, even if there is no
sample point present at that position. In this interpolation, the values of the
four nearest sample points are used. More precisely, the four values are
given by:
\[
  v_{11} = v(x_1,y_1),
  v_{12} = v(x_1,y_2),
  v_{21} = v(x_2,y_1),
  v_{22} = v(x_2,y_2),
\]
where $x_1$, $x_2$, $y_1$ and $y_2$ are the coordinates of the sample points
nearest to $(x,y)$ such that $x_1 \leq x < x_2$ and $y_1 \leq y < y_2$.
One way to do bilinear interpolation is to linearly interpolate both
along the line connecting $(x_1, y_1)$ and $(x_1, y_2)$ and the line connecting
$(x_2, y_1)$ and $(x_2, y_2)$:
\[v_{x_1} = \frac{y_2 - y}{y_2 - y_1}v_{11} + \frac{y - y_1}{y_2 - y_1}v_{12},\]
\[v_{x_2} = \frac{y_2 - y}{y_2 - y_1}v_{21} + \frac{y - y_1}{y_2 - y_1}v_{22}.\]
The estimate of $v(x,y)$ can then be obtained by a second interpolation:
\[\hat{v}(x,y) = \frac{x_2 - x}{x_2 - x_1}v_{x_1} + \frac{x - x_1}{x_2 -
x_1}v_{x_2}.\]
This procedure is the same for both scalar and vector datasets.
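The two-step scheme can be sketched as follows for a scalar grid. This is an illustrative sketch, not the \texttt{Dataset} method itself: it assumes unit grid spacing (so the denominators $x_2 - x_1$ and $y_2 - y_1$ equal 1) and a row-major grid \texttt{v[y][x]}, and it does not handle queries on the grid boundary.

```python
def bilinear(v, x, y):
    """Bilinear interpolation on a unit-spaced grid v[yi][xi]:
    interpolate along y at x1 and at x2, then along x between
    the two intermediate results."""
    x1, y1 = int(x), int(y)        # sample coordinates with x1 <= x, y1 <= y
    x2, y2 = x1 + 1, y1 + 1
    v11, v12 = v[y1][x1], v[y2][x1]
    v21, v22 = v[y1][x2], v[y2][x2]
    vx1 = (y2 - y) * v11 + (y - y1) * v12   # along the line x = x1
    vx2 = (y2 - y) * v21 + (y - y1) * v22   # along the line x = x2
    return (x2 - x) * vx1 + (x - x1) * vx2

grid = [[0.0, 1.0],
        [2.0, 3.0]]
print(bilinear(grid, 0.5, 0.5))  # 1.5, the average of the four corners
```

For a vector dataset the same weights are applied to each component separately, which is why one routine serves both dataset kinds.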




\subsection{Time Dimension}
\label{subsec:timedimension}
We implemented a set of circular buffers in the \texttt{Simulation} class to store data values over time. The circular buffers each consist of an array of values $v$ and an array of pointers $p$ to these values. The $i$-th pointer $p_i$ always points to $v_{t-i}$: the data at timestep $t-i$, where $t$ is the current timestep. 

When the simulation is updated, the pointer to the oldest timeframe is temporarily saved in $p_{temp}$. Then for all pointers $p_i$, except $p_0$, we set $p_i \leftarrow p_{i-1}$. Finally we set $p_0 \leftarrow p_{temp}$ and store the current timestep data in the memory that previously held the oldest timeframe. This way the amount of memory needed to store all the buffers is constant and only one write operation per datapoint per timestep is needed to keep track of a large number of timesteps\footnote{The maximum size of this buffer is set to 1000 in our application, but larger buffers are possible without having to change the data structure.}.
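The pointer-rotation scheme can be sketched as follows. The class and method names are illustrative (the application implements this with arrays of pointers inside \texttt{Simulation}); Python list indices stand in for the pointers.

```python
class TimeBuffer:
    """Slot i of the pointer array always refers to the data at
    timestep t - i, where t is the current timestep."""

    def __init__(self, size):
        self.values = [None] * size   # backing storage, never reallocated
        self.ptr = list(range(size))  # ptr[i] indexes the data at t - i

    def push(self, data):
        temp = self.ptr[-1]           # slot holding the oldest timeframe
        self.ptr[1:] = self.ptr[:-1]  # shift: p_i <- p_{i-1}
        self.ptr[0] = temp            # reuse the oldest slot for timestep t
        self.values[temp] = data      # the single write per timestep

    def at(self, i):
        """Data at timestep t - i."""
        return self.values[self.ptr[i]]

buf = TimeBuffer(3)
for t in range(5):
    buf.push(t)
print(buf.at(0), buf.at(1), buf.at(2))  # 4 3 2
```

Only the small pointer array is shuffled on each update; the (potentially large) data arrays stay in place, which is what keeps the memory footprint constant.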




