As mentioned above, the performance of the database engine is currently a problematic bottleneck: it prevented us from exploiting the temporal relations within the observation database by providing interactive navigation through the data. One way to address this issue would be to use or develop a more specialized storage engine and to avoid unnecessary data requests by means of (pre)caching. Although this is conceptually straightforward, the available time did not suffice to implement it.
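The precaching idea can be illustrated with a minimal sketch: an LRU cache over time-window queries that also prefetches the windows a user navigating forward in time is likely to request next. All names (\texttt{WindowCache}, \texttt{fetch}, the window indexing) are hypothetical and stand in for the actual storage layer.

```python
from collections import OrderedDict

class WindowCache:
    """Illustrative LRU cache for time-window queries.

    `fetch` stands in for the real database query; windows are
    identified by discrete time-step indices. Hypothetical sketch,
    not the application's actual storage interface.
    """
    def __init__(self, fetch, capacity=32):
        self.fetch = fetch
        self.capacity = capacity
        self.store = OrderedDict()

    def get(self, window):
        if window in self.store:
            self.store.move_to_end(window)   # mark as recently used
            return self.store[window]
        data = self.fetch(window)            # only hit the DB on a miss
        self.store[window] = data
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)   # evict least recently used
        return data

    def prefetch(self, window, lookahead=2):
        # While the user inspects `window`, warm the cache with the
        # windows that forward navigation will request next.
        for w in range(window + 1, window + 1 + lookahead):
            self.get(w)
```

Combined with a playback loop that calls \texttt{prefetch} for the current window, subsequent navigation steps would then be served from memory instead of issuing fresh queries.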

Up to now, only a small fraction of our ideas for visualizing and understanding observation-based and census data could be implemented. Although the underlying data flow is already established, the \texttt{TimelineView} in particular still needs to be integrated into the application. Another major open task concerns the visual guides for understanding trends and patterns over time. Essential for making connections between subsequent observations is the ability to smoothly play back the events in a time interval, which is currently hindered by the performance problems described above. Secondly, fading the corresponding glyphs in and out should help to bridge gaps in the data set. The display of observations as Gaussian blobs on the map is also left as a future extension.
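The fading behavior during playback could be as simple as the following sketch: a glyph's opacity is a linear ramp around the time of its observation, so glyphs of subsequent observations overlap visually and bridge gaps in the data. The function name and the fade half-width are illustrative choices, not values from the application.

```python
def glyph_alpha(t, t_obs, fade=1.0):
    """Opacity of a glyph at playback time `t` for an observation
    recorded at `t_obs`.

    The glyph fades in linearly before the observation and out after
    it; `fade` is the half-width of the fade window (hypothetical
    parameter, units of the playback time axis).
    """
    d = abs(t - t_obs)
    if d >= fade:
        return 0.0                # glyph invisible outside the window
    return 1.0 - d / fade         # linear ramp, 1.0 at t == t_obs
```

During playback, each visible observation's glyph would be drawn with \texttt{glyph\_alpha(t, t\_obs)} as its alpha value for the current playback time \texttt{t}.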

As a high-level abstraction of the observation data, a flow field could be estimated and displayed on the map as an animated overlay. Although this may help to reveal otherwise hidden patterns, the results depend strongly on the algorithm used for the motion-field estimation as well as on the density of the input data, and the computation is potentially very expensive.
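A crude baseline for such an estimate, sketched under the assumption that observations can be grouped into per-individual position tracks, is to bin the displacement vectors between consecutive positions into a regular grid and average them per cell. Real estimators would additionally smooth and regularize the field; this only illustrates the idea.

```python
from collections import defaultdict

def estimate_flow(tracks, cell=1.0):
    """Naive motion-field estimate from observation tracks.

    `tracks` is a list of position sequences [(x, y), ...], one per
    observed individual (hypothetical input format). Displacements
    between consecutive positions are binned by their start cell and
    averaged, yielding one mean motion vector per grid cell.
    """
    sums = defaultdict(lambda: [0.0, 0.0, 0])
    for track in tracks:
        for (x0, y0), (x1, y1) in zip(track, track[1:]):
            key = (int(x0 // cell), int(y0 // cell))
            s = sums[key]
            s[0] += x1 - x0       # accumulate displacement components
            s[1] += y1 - y0
            s[2] += 1             # count samples in this cell
    return {k: (sx / n, sy / n) for k, (sx, sy, n) in sums.items()}
```

The resulting per-cell vectors could then drive an animated arrow or particle overlay on the map; the cost grows with both the number of observations and the grid resolution, which is the expense noted above.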

An interesting and demanding field is the support for finding correlations between data layers containing semantically different data. This could include sources such as meteorological, geographical, historical, or anthropological databases. Even the results of simulations or of complex analyses based on the data contained in the other layers could be integrated in the future.
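As a first, naive building block for such cross-layer analysis, one could compute the Pearson correlation between two co-registered layers, i.e. two equally sized grids of per-cell values (for instance an observation-density layer against a meteorological layer; the pairing is hypothetical). More meaningful analyses would of course need spatial statistics that account for autocorrelation.

```python
import math

def layer_correlation(a, b):
    """Pearson correlation between two co-registered data layers.

    `a` and `b` are equal-length flat lists of per-cell values
    (illustrative input format). Returns a value in [-1, 1].
    """
    n = len(a)
    ma = sum(a) / n
    mb = sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    var_a = sum((x - ma) ** 2 for x in a)
    var_b = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(var_a * var_b)
```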
