\section{Evaluation}
	
	This section evaluates the data warehouse schema with regard to spatial and computational efficiency, as well as the transformation into our model as performed by the ETL tool.
	
	\subsection*{Data Warehouse Schema}
			
		As mentioned in the modelling section, our data warehouse uses a hybrid of the star and snowflake schemas, known as a starflake schema.
		
		Recalling the schema diagram in figure \ref{warehouse}, it is evident that the date, time, and location dimensions resemble a star schema. As such, they provide optimal query efficiency, due to the simplicity of the queries required to search these dimensions. The cost of this efficiency is only a small amount of redundancy, since these subdimensions are so simple.
		
		The bag dimension more closely resembles a snowflake schema, since the subdimensions route and route leg are kept in separate tables. Using a star schema in this case would have caused massive redundancy. It is easy to see that the snowflake schema uses much less space here: for every bag we add to the system, we add \textbf{at most} one route, so there can never be more routes than bags. Additionally, many bags may share routes, especially short ones.
		
		For example, a local route such as Aalborg-Copenhagen has flights several times per day, many of them by the same airline. Let us assume that, on average, each flight carries 20 passengers with one piece of baggage each, and furthermore that there are 10 flights per day, operated by 2 different airlines. This generates $10 \cdot 20=200$ bags per day, but only 2 distinct routes.
		
		If we were to use a star schema in this case, the same route data would be repeated in every single bag entry, resulting in $\frac{200}{2}=100$ times the route data generated by the snowflake version.
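The back-of-the-envelope comparison above can be sketched as a few lines of Python. All numbers are the illustrative figures from the Aalborg-Copenhagen example; the variable names are our own and carry no meaning beyond this sketch.

```python
# Illustrative figures from the Aalborg-Copenhagen example in the text.
FLIGHTS_PER_DAY = 10
BAGS_PER_FLIGHT = 20
DISTINCT_ROUTES = 2

bags_per_day = FLIGHTS_PER_DAY * BAGS_PER_FLIGHT  # 200 bags per day

# Star schema: route attributes are denormalised into every bag entry,
# so route data is stored once per bag.
star_route_rows = bags_per_day

# Snowflake schema: one row per distinct route, referenced by key.
snowflake_route_rows = DISTINCT_ROUTES

redundancy_factor = star_route_rows / snowflake_route_rows
print(redundancy_factor)  # 100.0
```
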
		
		With regard to performance, this will obviously lead to more complex queries for route-related questions. It will, however, not influence the query speed of questions on date, time, location, or bags at all. Note also that this model supports future changes in the BagTrack system, allowing routes longer than 6 legs.
		
		Moreover, when querying only for distinct routes, the route leg table need not be accessed. If the performance of route-related questions were to become a problem, a materialized view should be considered.
		
		The warehouse schema supports a large number of analytic questions, both purely time-related, such as ``how much time do bags spend on sorters, on average?'' and spatio-temporal, such as ``how many bags go through the sorters in Copenhagen airport on weekends?''
		
		Possibly the most interesting queries, however, are those that use the lost status, such as ``in which airports are the most bags lost?'' This can lead to very useful follow-up questions, such as ``which routes were these bags supposed to follow?'' and ``which locations did these bags visit immediately before being lost?'' Answering these can help locate sorters or baggage storage systems with higher failure rates than others.
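A lost-status query of this kind can be sketched against a heavily simplified stand-in for the warehouse schema. The table and column names below are assumptions for illustration only, not the actual schema of the warehouse.

```python
import sqlite3

# A *simplified* stand-in for the warehouse: one location dimension and
# one fact table with a lost flag. Names are illustrative assumptions.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE location (location_id INTEGER PRIMARY KEY, airport TEXT);
    CREATE TABLE bag_fact (bag_id INTEGER, location_id INTEGER, lost INTEGER);
""")
conn.executemany("INSERT INTO location VALUES (?, ?)",
                 [(1, "CPH"), (2, "AAL")])
conn.executemany("INSERT INTO bag_fact VALUES (?, ?, ?)",
                 [(1, 1, 1), (2, 1, 1), (3, 2, 0), (4, 2, 1)])

# "In which airports are the most bags lost?"
rows = conn.execute("""
    SELECT l.airport, COUNT(*) AS lost_bags
    FROM bag_fact f JOIN location l USING (location_id)
    WHERE f.lost = 1
    GROUP BY l.airport
    ORDER BY lost_bags DESC
""").fetchall()
print(rows)  # [('CPH', 2), ('AAL', 1)]
```

The follow-up questions on routes and preceding locations would take the same shape, joining the bag dimension and its route subdimensions instead.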
		
	\subsection*{Data Transformation}
		
		For the purposes of this project, the original reading data generated by the BagTrack system has been assumed to be perfect; that is, we have not considered data cleansing in this paper. As such, even though the BagTrack dataset evidently contains many anomalies, such as the same bag being read multiple times by the same reader, or multiple readers reading the same bag at the same time, we have designed our ETL tool without these in mind.
		
		Bags being read by readers in the wrong positions would skew the statistics extracted through our data warehouse and consume unnecessarily large amounts of space. Bags being read multiple times by the same reader, however, would, despite the lack of data cleansing, reduce neither the computational nor the spatial efficiency of our data warehouse.
		
		To understand why, consider how the bag readings are aggregated in our schema. Since we combine successive readings of the same bag in the same location into a single \textit{stay} fact, simply extending its duration, no additional fact rows are added, no matter how many times the bag is read in the same location. Figure \ref{fig:eval_reading_error_good} shows the fact entry for this situation, where the same bag was detected multiple times within an interval of 60 seconds in the same location.
		
		\begin{figure}[h!tb]
			\includegraphics[width=\columnwidth]{parts/images/factErrorGood}
			\caption{Fact entry after multiple readings in same location.}
			\label{fig:eval_reading_error_good}
		\end{figure}
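The aggregation described above can be sketched in Python. The function and field names are illustrative, not the ETL tool's actual code; timestamps are given as plain seconds for simplicity.

```python
# Sketch of the reading-aggregation step: successive readings of the
# same bag at the same location collapse into a single "stay" whose
# duration is extended, rather than producing new fact rows.
def aggregate_stays(readings):
    """readings: time-ordered list of (bag_id, location_id, timestamp).
    Returns stay facts as (bag_id, location_id, start, end)."""
    stays = []
    for bag_id, location_id, ts in readings:
        last = stays[-1] if stays else None
        if last and last[0] == bag_id and last[1] == location_id:
            # Same bag, same location: extend the current stay.
            stays[-1] = (bag_id, location_id, last[2], ts)
        else:
            stays.append((bag_id, location_id, ts, ts))
    return stays

readings = [(7, 1, 0), (7, 1, 20), (7, 1, 60),  # repeated reads, one stay
            (7, 2, 120)]                        # new location, new stay
print(aggregate_stays(readings))
# [(7, 1, 0, 60), (7, 2, 120, 120)]
```
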
		
		If, however, a nearby reader were to continuously detect the bag concurrently with the first reader, our fact table would very quickly grow, both unnecessarily and erroneously. This would not only consume unwarranted amounts of space, but would in fact disrupt the analytics performed on this bag. Figure \ref{fig:eval_reading_error_bad} shows the fact entries generated by two nearby readers detecting the bag alternately, once per second, for only 16 seconds. Note the alternating location ID.
		
		\begin{figure}[h!tb]
			\includegraphics[width=\columnwidth]{parts/images/factErrorBad}
			\caption{Fact entries after multiple readings from both correct and wrong location.}
			\label{fig:eval_reading_error_bad}
		\end{figure}
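The pathological case can be sketched in the same simplified model of the aggregation step. Because the location changes on every reading, each reading opens a new stay fact instead of extending the previous one; again, all names are illustrative assumptions.

```python
# Sketch of the alternating-reader pathology: two nearby readers at
# locations 1 and 2 detect the same bag once per second for 16 seconds.
def count_stay_facts(readings):
    """Count the stay facts produced when successive same-bag,
    same-location readings merge into one stay (simplified model)."""
    facts = 0
    current = None  # (bag_id, location_id) of the open stay
    for bag_id, location_id, _ts in readings:
        if (bag_id, location_id) != current:
            facts += 1
            current = (bag_id, location_id)
    return facts

# Locations alternate 1, 2, 1, 2, ... so every reading opens a new stay.
alternating = [(7, 1 + t % 2, t) for t in range(16)]
print(count_stay_facts(alternating))  # 16 facts instead of 1
```

By contrast, the same 16 readings from a single reader would collapse into a single stay fact.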
		
		Obviously, this is a problem, but since it lies outside the scope of this project, we have ignored it. Coupling our ETL tool with a data cleansing tool would effectively solve the issue.